feat: GPU enumeration with VRAM reporting (NVIDIA, AMD, Windows) #211
Open
JamesNyeVRGuy wants to merge 1 commit into nikopueringer:main from
Conversation
Add enumerate_gpus() to device_utils.py for detecting GPUs and their VRAM across all platforms:

- NVIDIA: nvidia-smi query (index, name, total/free memory)
- AMD Linux: amd-smi (ROCm 6.0+) → rocm-smi (legacy) fallback
- AMD Windows: registry lookup (fixes uint32 VRAM overflow in WMI)
- Universal: torch.cuda fallback for any GPU torch can see

Returns list[GPUInfo] with index, name, vram_total_gb, vram_free_gb. Useful for CLI status display, VRAM gating, and multi-GPU selection.
GPU enumeration with VRAM reporting (NVIDIA, AMD, Windows)
Adds `enumerate_gpus()` to `device_utils.py` — a cross-platform function that detects all available GPUs and reports their VRAM. Useful for status display, VRAM gating before inference, and multi-GPU selection in batch scripts.

Currently `device_utils.py` can detect which device to use (CUDA/MPS/CPU) but can't tell you what GPUs are available or how much VRAM they have.

What does this change?

Adds to `device_utils.py`:

- `GPUInfo` dataclass: `index`, `name`, `vram_total_gb`, `vram_free_gb`
- `enumerate_gpus() -> list[GPUInfo]` with multi-backend detection:
  - `nvidia-smi` CSV query (index, name, total/free memory)
  - `amd-smi` (ROCm 6.0+) with live VRAM usage, falls back to `rocm-smi` (legacy)
  - `winreg` registry lookup (fixes the `Win32_VideoController.AdapterRAM` uint32 overflow that reports >4 GB GPUs as 0 GB)
  - `torch.cuda.get_device_properties()` for any GPU torch can see

No new dependencies — uses `subprocess`, `json`, `winreg` (stdlib). `torch` is only imported in the fallback path.

How was it tested?
Checklist

- `uv run pytest` passes
- `uv run ruff check` passes
- `uv run ruff format --check` passes