
Add HF llama.cpp DLC #6000

Open
ehcalabres wants to merge 13 commits into aws:master from
ehcalabres:add-hf-llamacpp-dlc

Conversation

@ehcalabres

@ehcalabres ehcalabres commented Apr 27, 2026

Purpose

This PR adds a new Hugging Face DLC for llama.cpp.

Test Plan

Test Result


Toggle if you are merging into master Branch

By default, docker image builds and tests are disabled. Two ways to run builds and tests:

  1. Using dlc_developer_config.toml
  2. Using this PR description (currently only supported for PyTorch, TensorFlow, vllm, and base images)
How to use the helper utility for updating dlc_developer_config.toml

Assuming your remote is called origin (you can find out more with git remote -v)...

  • Run default builds and tests for a particular buildspec - also commits and pushes changes to remote; Example:

python src/prepare_dlc_dev_environment.py -b </path/to/buildspec.yml> -cp origin

  • Enable specific tests for a buildspec or set of buildspecs - also commits and pushes changes to remote; Example:

python src/prepare_dlc_dev_environment.py -b </path/to/buildspec.yml> -t sanity_tests -cp origin

  • Restore TOML file when ready to merge

python src/prepare_dlc_dev_environment.py -rcp origin

NOTE: If you are creating a PR for a new framework version, please ensure success of the local, standard, rc, and efa sagemaker tests by updating the dlc_developer_config.toml file:

  • sagemaker_remote_tests = true
  • sagemaker_efa_tests = true
  • sagemaker_rc_tests = true
  • sagemaker_local_tests = true
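
To illustrate, the SageMaker toggles above would look like this inside dlc_developer_config.toml (a sketch; the enclosing section name shown here is an assumption and may differ in the actual file):

```toml
# Hypothetical excerpt of dlc_developer_config.toml for a new framework
# version PR; the [test] section name is an assumption.
[test]
sagemaker_remote_tests = true
sagemaker_efa_tests = true
sagemaker_rc_tests = true
sagemaker_local_tests = true
```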
How to use PR description

Use the code block below to uncomment commands and run the PR CodeBuild jobs. There are two commands available:
  • # /buildspec <buildspec_path>
    • e.g.: # /buildspec pytorch/training/buildspec.yml
    • If this line is commented out, dlc_developer_config.toml will be used.
  • # /tests <test_list>
    • e.g.: # /tests sanity security ec2
    • If this line is commented out, it will run the default set of tests (same as the defaults in dlc_developer_config.toml): sanity, security, ec2, ecs, eks, sagemaker, sagemaker-local.
# /buildspec <buildspec_path>
# /tests <test_list>
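
For example, to build a specific buildspec from this PR and run only a subset of tests, the PR description would contain the two lines uncommented (the buildspec path below is illustrative, based on the files touched by this PR, and is not confirmed to be supported by the PR-description mechanism):

```
/buildspec huggingface/llamacpp/buildspec.yml
/tests sanity security
```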
Toggle if you are merging into main Branch

PR Checklist

  • [ ] I ran pre-commit run --all-files locally before creating this PR. (Read DEVELOPMENT.md for details.)

@ehcalabres ehcalabres requested a review from a team as a code owner April 27, 2026 16:42
@ehcalabres ehcalabres changed the title Add hf llamacpp dlc Add HF llama.cpp DLC Apr 27, 2026

@alvarobartt alvarobartt left a comment


Hey @ehcalabres, I've left some comments, but IMO adding Python here is overly complex: it adds more dependencies, more vulnerabilities, an extra open port, a proxy and the handling it requires, etc. So why not just implement the "proxy-like layer" in C++ on top of, or as a patch for, llama-server instead?

Comment on lines +23 to +24
COPY deep_learning_container.py /usr/local/bin/deep_learning_container.py
COPY bash_telemetry.sh /usr/local/bin/bash_telemetry.sh

Where are these files coming from? Are they auto-generated prior to building the image but not included in the Git tree for a reason?

Comment thread huggingface/llamacpp/docker/b8882/cu130/Dockerfile.gpu Outdated
Comment thread huggingface/llamacpp/build_artifacts/llamacpp_sagemaker_serve.py Outdated

3 participants