# beelink
Here are 3 public repositories matching this topic...
The definitive Strix Halo LLM guide — 65 t/s on a $2,999 mini PC. Live benchmarks, tested optimizations, and everything that doesn't work.
benchmark amd optimization vulkan inference rocm mini-pc unified-memory beelink llm llama-cpp local-llm ollama gguf rdna3 strix-halo gfx1151 dgx-spark ryzen-ai-max
Updated Mar 21, 2026 · Shell
Unlock fast, local LLM inference on AMD-powered mini PCs, delivering 65-87 t/s for large models without cloud or subscription costs.
amd optimization inference rocm mini-pc asus-rog linux-gaming unified-memory beelink cachyos llm llama-cpp local-llm ollama gguf rdna3 strix-halo gfx1151
Updated Mar 29, 2026 · Shell