
Conversation

@arthurmccray
Collaborator

Adding back "gpu" as an option for the config `device` setting after it was accidentally removed, and updating the tests to prevent it from happening again.

I also added an attribute to some of the classes in test_autoserialize.py, as it was raising warnings. The issue was that TestingClass and others, which were meant as helper classes to be used in tests, were being collected as test cases because their names start with "Test". Fixed by adding a `__test__ = False` line as suggested here.
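For context, this is roughly what the fix looks like (a minimal sketch; `TestingClass` stands in for the helpers in test_autoserialize.py, and the attribute is the standard pytest opt-out for classes whose names would otherwise be collected):

```python
class TestingClass:
    # Pytest collects any class whose name starts with "Test" by default;
    # setting __test__ = False tells it this is a helper, not a test case.
    __test__ = False

    def __init__(self, value=0):
        self.value = value
```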

@arthurmccray arthurmccray requested a review from gvarnavi January 20, 2026 19:13
@gvarnavi
Collaborator

Thanks @arthurmccray.

On my phone now so can’t check, but perusing the files changed it seems to me this always falls back to cuda. The previous behavior was setting cuda/mps according to which modules are available.

@arthurmccray
Collaborator Author

Is that expected behavior? In torch the "mps" backend is treated as distinct, though admittedly "gpu" isn't an option there either, since "cuda" is the default GPU specification.

So I guess the question is whether we should treat "gpu" as a generic "accelerator" setting that quietly resolves to "cuda", or whether we should ask people to specify "mps" explicitly if that's what they want.

@gvarnavi
Collaborator

Not sure what you mean haha? 😅 The mps and cuda backends are mutually exclusive.

I'm saying users likely want to simply specify "cpu" or "gpu" and get acceleration on their respective hardware.

@arthurmccray
Collaborator Author

Ah, I thought mps was for the integrated GPU, and didn't realize you couldn't have a machine with an additional NVIDIA GPU as well (don't have any Apple products, lol). In any event, I added "mps" as a fallback for when the user selects "gpu" and CUDA is not available.
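The fallback described above can be sketched as a small resolver. This is not the exact code merged in the PR; the function name and signature are hypothetical, and the availability flags stand in for what `torch.cuda.is_available()` and `torch.backends.mps.is_available()` would report at runtime:

```python
def resolve_device(device: str, cuda_available: bool, mps_available: bool) -> str:
    """Map a user-facing device string to a concrete torch backend name.

    "gpu" prefers CUDA and falls back to Apple's MPS backend, so users
    can just ask for acceleration without naming their hardware.
    """
    if device == "gpu":
        if cuda_available:
            return "cuda"
        if mps_available:
            return "mps"
        raise RuntimeError("'gpu' requested but neither CUDA nor MPS is available")
    # "cpu", "cuda", "mps", etc. pass through unchanged
    return device
```

In real use the two flags would be dropped in favor of querying torch directly; keeping them as parameters here just makes the fallback logic easy to test on any machine.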
