
Conversation


@loleg loleg commented Oct 27, 2025

I'm using the uv package manager to run my project locally and to experiment with the Apertus model, which runs fine for me in vLLM and Ollama using the same quantizations as your Llama defaults.

With the current release version of llama-cpp-python I get this error:

RuntimeError: Internal C++ object (PySide6.QtNetwork.QNetworkReply) already deleted.
Exception ignored in: <function LlamaModel.__del__ at 0x7f612e5d0a40>
Traceback (most recent call last):
  File "OKFN/opendataeditor/.venv/lib/python3.12/site-packages/llama_cpp/_internals.py", line 78, in close
    if self.sampler is not None:
       ^^^^^^^^^^^^
AttributeError: 'LlamaModel' object has no attribute 'sampler'
Traceback (most recent call last):
  File "/home/oleg/Localdev/OKFN/opendataeditor/ode/main.py", line 820, in on_llm_init_error

Even after bumping to a patched source build of the library, it is still not working. I'll try again once there's an October release.
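For context, the `AttributeError` in the traceback is the classic symptom of a destructor running on a partially-initialized object: if `LlamaModel.__init__` raises before `self.sampler` is ever assigned, Python still calls `__del__`, which calls `close()`, which then trips over the missing attribute. The sketch below is a minimal, hypothetical illustration of that failure mode and a defensive `getattr` guard; it is not the actual llama-cpp-python source or its fix.

```python
class LlamaModel:
    """Minimal sketch of the __del__-on-partial-object failure mode.

    This is NOT the real llama_cpp._internals.LlamaModel; it only
    illustrates why close() can see an object without `sampler`.
    """

    def __init__(self, fail: bool = False):
        if fail:
            # __init__ raises before self.sampler is assigned, so a later
            # __del__ -> close() would hit AttributeError without a guard.
            raise RuntimeError("model load failed")
        self.sampler = object()  # stand-in for the native sampler handle

    def close(self):
        # getattr with a default tolerates partially-initialized objects,
        # instead of `if self.sampler is not None:` which raises.
        sampler = getattr(self, "sampler", None)
        if sampler is not None:
            # free the native resource here in a real implementation
            self.sampler = None

    def __del__(self):
        self.close()
```

With this guard, calling `close()` on an instance whose `__init__` never completed is a no-op rather than a secondary `AttributeError` masking the original error.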


pdelboca commented Nov 5, 2025

Thanks for this! We're excited to test the Apertus model! It is definitely a nice replacement for Llama :D

Adding uv support is also on my todo list, since the Makefile is too Linux-oriented and uv is just great.
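For reference, a uv-based workflow for this project might look like the following. These are illustrative commands only: `uv sync` and `uv run` are real uv subcommands, but the entry point `ode/main.py` is taken from the traceback above and the exact invocation depends on the project layout.

```shell
# Hypothetical commands; the entry point depends on the project layout.
uv sync                    # create .venv and install locked dependencies
uv run python ode/main.py  # run the app inside the managed environment
```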

