
Conversation

@arthurmccray (Collaborator) commented Jan 6, 2026:

This PR expands the neural networks included in /core/ml and generally cleans up that code with docstrings, linter fixes, etc. It is paired with updated tutorial notebooks in a draft PR at quantem-tutorials, which will be merged into main there once this is done.

In general this adds some new models and refactors a few things, but I think everything should be backward-compatible and non-breaking. I also removed the Finer model, since we want folks to use HSiren for INRs in general.

@gvarnavi (Collaborator) commented:

Heads-up: you'll need to pull dev into the PR to enable the updated automated checks.

@cedriclim1 (Collaborator) left a comment:

Left some comments

@cedriclim1 cedriclim1 merged commit 7813b54 into dev Jan 20, 2026
4 checks passed
@arthurmccray arthurmccray deleted the ml branch January 20, 2026 00:53
Review comment on this excerpt:

```python
dev = torch.device(
    "cuda" if torch.cuda.is_available() else "mps" if torch.mps.is_available() else "cpu"
)
elif isinstance(dev, str) and dev.lower() == "gpu":
```

@arthurmccray @cedriclim1 Why did we drop device='gpu' support?

@arthurmccray (Author) replied:

ah shoot, that's a mistake! I didn't realize that path wasn't exercised by the tutorial notebook I had used for testing. I'll make a hotfix branch and fix it, along with adding a proper pytest so it can't happen again.
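The fix described here, restoring device='gpu' by routing it through the same cuda → mps → cpu fallback used for the default case, might be sketched like this. Note that `resolve_device` and its signature are hypothetical, not the repo's actual API; availability is passed in explicitly so the fallback logic can be unit-tested without GPU hardware or torch installed:

```python
def resolve_device(dev, cuda_ok, mps_ok):
    """Hypothetical sketch: normalize a user-supplied device string.

    Both None and the string "gpu" fall back through
    cuda -> mps -> cpu, mirroring the excerpt under review.
    """
    if dev is None or (isinstance(dev, str) and dev.lower() == "gpu"):
        return "cuda" if cuda_ok else "mps" if mps_ok else "cpu"
    return dev  # pass explicit choices like "cpu" or "cuda:1" through


# pytest-style checks guarding against the regression discussed above
assert resolve_device("gpu", cuda_ok=True, mps_ok=False) == "cuda"
assert resolve_device("gpu", cuda_ok=False, mps_ok=True) == "mps"
assert resolve_device("gpu", cuda_ok=False, mps_ok=False) == "cpu"
assert resolve_device("cpu", cuda_ok=True, mps_ok=True) == "cpu"
```

In real code the booleans would come from `torch.cuda.is_available()` and the MPS availability check, and the returned string would be wrapped in `torch.device(...)`.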
