vLLM-Plugin-FL Setup

Overview

vLLM-Plugin-FL extends vLLM to support model inference/serving across diverse hardware backends (NVIDIA, Ascend, MetaX, Iluvatar, etc.) via FlagOS's unified operator library FlagGems and communication library FlagCX. This skill covers installation, hardware-specific environment configuration, and dependency setup.
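To make the goal concrete: the plugin is meant to keep vLLM's standard Python API unchanged across backends. A minimal sketch using vLLM's documented offline-inference API; the model name is only an illustrative example, not a requirement of the plugin:

from vllm import LLM, SamplingParams

# Standard vLLM offline inference. With vllm-plugin-fl installed,
# the same code is intended to run on any supported backend.
llm = LLM(model="facebook/opt-125m")  # example model only
sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
outputs = llm.generate(["The capital of France is"], sampling)
print(outputs[0].outputs[0].text)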

Prerequisites

  • Linux OS (Ubuntu 20.04+ recommended)
  • Python 3.10+
  • vLLM v0.13.0, installed either from the official v0.13.0 release or from the vllm-FL fork
  • GPU with appropriate drivers (NVIDIA CUDA, Huawei Ascend, etc.)
  • pip package manager
  • Git (a quick preflight check for these basics is sketched after this list)
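
Before installing, it helps to sanity-check the basics above. A minimal preflight sketch in Python; the specific checks are assumptions drawn from the prerequisites list, not part of the plugin itself:

import shutil
import sys

# Python 3.10+ per the prerequisites above
assert sys.version_info >= (3, 10), f"Python 3.10+ required, found {sys.version}"

# pip and git must be available on PATH
for tool in ("pip", "git"):
    assert shutil.which(tool), f"{tool} not found on PATH"

print("basic prerequisites OK")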

Verify the vLLM version before proceeding:

python -c "import vllm; print(vllm.__version__)"
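
In a setup script you may want this check to fail loudly rather than just print. A one-line variant; the '0.13' prefix simply mirrors the v0.13.0 pin above:

python -c "import vllm; assert vllm.__version__.startswith('0.13'), vllm.__version__"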