vllm-plugin-FL is a plugin for the vLLM inference/serving framework, built on FlagOS's unified multi-chip backend — including the unified operator library FlagGems and the unified communication library FlagCX. It extends vLLM's capabilities and performance across diverse hardware environments. Without changing vLLM's original interfaces or usage patterns, the same command can run model inference/serving on different chips.
In theory, vllm-plugin-FL can support all models available in vLLM, as long as no unsupported operators are involved. The tables below summarize the current support status of end-to-end verified models and chips, including both fully supported and in-progress ("Merging") entries.
| Model | Status | Reference |
|---|---|---|
| Qwen3.5-397B-A17B | Supported | example |
| Qwen3-Next-80B-A3B | Supported | example |
| Qwen3-4B | Supported | example |
| MiniCPM-o 4.5 | Supported | example |

| Chip Vendor | Status | Reference |
|---|---|---|
| NVIDIA | Supported | - |
| Ascend | Merging | PR #55 |
| MetaX | Merging | PR #47 |
| Pingtouge-Zhenwu | Supported | - |
| Iluvatar | Merging | PR #58 |
| Tsingmicro | Merging | PR #52 |
- Install vLLM from the official v0.13.0 release (optional if the correct version is already installed) or from the fork vllm-FL.
- Install vllm-plugin-FL

  1.1 Clone the repository:

  ```shell
  git clone https://github.com/flagos-ai/vllm-plugin-FL
  ```

  1.2 Install:

  ```shell
  cd vllm-plugin-FL
  pip install -r requirements.txt
  pip install --no-build-isolation .
  # or editable install
  pip install --no-build-isolation -e .
  ```
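After installation, a quick sanity check can confirm the packages are visible to Python. This is a minimal sketch using only the standard library; the distribution name `vllm-plugin-FL` is an assumption based on the repository name and may differ from the actual package metadata.

```python
from importlib.metadata import version, PackageNotFoundError

def check_installed(packages):
    """Return a dict mapping each distribution name to its installed
    version string, or None if it is not installed."""
    result = {}
    for pkg in packages:
        try:
            result[pkg] = version(pkg)
        except PackageNotFoundError:
            result[pkg] = None
    return result

# NOTE: "vllm-plugin-FL" is assumed to match the repository name.
print(check_installed(["vllm", "vllm-plugin-FL"]))
```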
- Install FlagGems

  2.1 Install build dependencies:

  ```shell
  pip install -U scikit-build-core==0.11 pybind11 ninja cmake
  ```

  2.2 Install FlagGems:

  ```shell
  git clone https://github.com/flagos-ai/FlagGems
  cd FlagGems
  pip install --no-build-isolation .
  # or editable install
  pip install --no-build-isolation -e .
  ```
- Install FlagCX

  3.1 Clone the repository:

  ```shell
  git clone https://github.com/flagos-ai/FlagCX.git
  cd FlagCX
  git checkout v0.9.0
  git submodule update --init --recursive
  ```

  3.2 Build the library with the flags targeting your platform, e.g. for NVIDIA:

  ```shell
  make USE_NVIDIA=1
  ```

  3.3 Set the environment variable:

  ```shell
  export FLAGCX_PATH="$PWD"
  ```

  3.4 Install the FlagCX torch plugin:

  ```shell
  cd plugin/torch/
  FLAGCX_ADAPTOR=[xxx] pip install . --no-build-isolation
  # or editable install
  FLAGCX_ADAPTOR=[xxx] pip install -e . --no-build-isolation
  ```

  Note: [xxx] should be selected according to the current platform, e.g., nvidia, ascend, etc.
If multiple vLLM plugins are installed in the current environment, you can select vllm-plugin-FL explicitly via `VLLM_PLUGINS='fl'`.
With vLLM and vllm-plugin-FL installed, you can start generating text for a list of input prompts (i.e., offline batch inference). See the example script: offline_inference, or use the Python script below directly.
```python
from vllm import LLM, SamplingParams

if __name__ == "__main__":
    prompts = [
        "Hello, my name is",
    ]
    # Create a sampling params object.
    sampling_params = SamplingParams(max_tokens=10, temperature=0.0)
    # Create an LLM.
    llm = LLM(model="Qwen/Qwen3-4B", max_num_batched_tokens=16384, max_num_seqs=2048)
    # Generate texts from the prompts.
    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```

For dispatch environment variable usage, see environment variables usage.
If you want to fall back to the original CUDA communication, unset the following environment variable:

```shell
unset FLAGCX_PATH
```

If you want to use the original CUDA operators, set the following environment variable:

```shell
export USE_FLAGGEMS=0
```
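Putting the switches together, a typical environment setup might look like the following. This is an illustrative sketch: the FlagCX path is a placeholder, and default values may differ on your platform.

```shell
# Illustrative environment setup for the FlagOS stack (values are placeholders)
export FLAGCX_PATH=/path/to/FlagCX   # FlagCX build directory; unset to use original CUDA communication
export USE_FLAGGEMS=1                # 1: FlagGems operators, 0: original CUDA operators
export VLLM_PLUGINS='fl'             # select vllm-plugin-FL when multiple plugins are installed
```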