
XBTorch



XBTorch is a PyTorch-native framework for simulating crossbar-based deep neural networks with emerging memory technologies such as ReRAM, FeFETs, PCM, and MTJs.

It enables researchers and engineers to:

  • Model realistic device-level behavior (variability, noise, nonlinearity; see the sketch after this list),
  • Perform hardware-aware training with quantization and gradient decomposition,
  • Evaluate fault-tolerant inference on simulated crossbar arrays,
  • Seamlessly integrate with existing PyTorch models with minimal code changes.
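
As a rough illustration of the first point, the snippet below models a toy crossbar-style linear layer in plain PyTorch, with weights snapped to discrete conductance levels and perturbed by multiplicative read noise. This is a minimal sketch for intuition only; the class name, noise model, and parameters are placeholders and do not reflect XBTorch's actual device models or API.

import torch
import torch.nn as nn

class NoisyCrossbarLinear(nn.Module):
    """Toy linear layer with crossbar-style nonidealities (illustrative only)."""

    def __init__(self, in_features, out_features, levels=16, read_noise=0.02):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(out_features, in_features))
        self.levels = levels          # number of programmable conductance states
        self.read_noise = read_noise  # relative std of per-read current noise

    def forward(self, x):
        # Snap weights to a finite grid of conductance levels. Note that
        # torch.round blocks gradients, so hardware-aware training would
        # need a straight-through estimator or a similar workaround.
        w_max = self.weight.detach().abs().max().clamp(min=1e-8)
        step = 2 * w_max / (self.levels - 1)
        w_q = torch.round(self.weight / step) * step
        # Multiplicative Gaussian read noise, redrawn on every read.
        noise = 1 + self.read_noise * torch.randn_like(w_q)
        return x @ (w_q * noise).t()

layer = NoisyCrossbarLinear(10, 5)
x = torch.randn(3, 10)
print(layer(x) - layer(x))  # nonzero: each read draws fresh noise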

👉 For detailed guides, please see the XBTorch Documentation.


Dependencies & Installation

The recommended installation method is to clone the repository, create a lightweight virtual environment, and install XBTorch in editable mode:

$ git clone https://github.com/ADAM-Lab-GW/xbtorch.git
$ python -m venv .env
$ source .env/bin/activate
(.env) $ pip install -e xbtorch

Editable mode lets you modify the source code directly, with changes taking effect without reinstalling the package.
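
To quickly verify the installation, try importing the package (a minimal smoke test; it only assumes the import itself succeeds):

(.env) $ python -c "import xbtorch; print('ok')"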

For more detailed instructions (including optional dependencies and troubleshooting), review our documentation.


Getting Started

Minimal code changes are needed to adapt PyTorch models for XBTorch:

import xbtorch
import xbtorch.optim as xboptim
from xbtorch.patches import xbtorch_model
import torch.nn as nn

# Define a simple 2-layer perceptron network
class SimpleMLP(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.input_size = input_size
        self.model = nn.Sequential(
            nn.Linear(input_size, hidden_size, bias=False),
            nn.ReLU(),
            nn.Linear(hidden_size, output_size, bias=False),
        )

    def forward(self, x):
        x = x.view(-1, self.input_size)  # Flatten the input
        x = self.model(x)
        return x

# Initialize
xbtorch.initialize()

# Define your model
model = SimpleMLP(10, 5, 2)
model = xbtorch_model(model)   # patch with XBTorch

# Optimizer
optimizer = xboptim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# ... Implement your training loop as usual!
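
The loop itself is standard PyTorch. The sketch below uses randomly generated stand-in data, and it assumes xboptim.SGD exposes the usual torch.optim interface (zero_grad() and step()); the data shapes and epoch count are placeholders.

import torch

# Stand-in data (placeholder shapes for illustration).
inputs = torch.randn(64, 10)          # 64 samples, 10 features
targets = torch.randint(0, 2, (64,))  # labels for 2 output classes

for epoch in range(5):
    optimizer.zero_grad()
    outputs = model(inputs)           # forward pass through the patched model
    loss = criterion(outputs, targets)
    loss.backward()                   # gradients flow through as usual
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")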

For full examples (e.g., hardware-aware training and inference, fault-tolerance, etc.), see the examples/ directory or the documentation.


Citation

If you use this library, please cite this repository according to the information in CITATION.cff and/or the introductory paper:

@misc{yousuf2025xbtorch,
  author        = {Yousuf, Osama and Glasmann, Andreu L. and Lueker-Boden, Martin and Najmaei, Sina and Adam, Gina C.},
  title         = {XBTorch: A Unified Framework for Modeling and Co-Design of Crossbar-Based Deep Learning Accelerators},
  year          = {2026},
  eprint        = {2601.07086},
  archivePrefix = {arXiv},
  url           = {https://arxiv.org/abs/2601.07086}
}

Acknowledgements

This library was developed as a collaboration between the Adaptive Devices and Microsystems (ADAM) Group at George Washington University, Western Digital Research, and the DEVCOM Army Research Laboratory.


Contact and Collaboration

Research groups interested in collaborating are encouraged to reach out:

Osama Yousuf
Osama.Yousuf1@wdc.com
Western Digital Research
R&D Engineering, Memory Technology

Prof. Gina Adam
GinaAdam@gwu.edu
Adaptive Devices and Microsystems Group
Department of Electrical and Computer Engineering
George Washington University

Andreu L. Glasmann
Andreu.L.Glasmann.Civ@army.mil
DEVCOM Army Research Lab


License

BSD-3 License. See the LICENSE file for details.
