
A high-throughput and memory-efficient inference and serving engine for LLMs

Project description

vLLM

Easy, fast, and cheap LLM serving for everyone

| Documentation | Blog | Paper | Twitter/X | User Forum | Developer Slack |

🔥 We have built a website to help you get started with vLLM: visit vllm.ai to learn more, and vllm.ai/events for upcoming events.


About

vLLM is a fast and easy-to-use library for LLM inference and serving.

Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.

vLLM is fast with:

  • State-of-the-art serving throughput
  • Efficient management of attention key and value memory with PagedAttention
  • Continuous batching of incoming requests
  • Fast model execution with CUDA/HIP graph
  • Quantization: GPTQ, AWQ, AutoRound, INT4, INT8, and FP8 (see the sketch after this list)
  • Optimized CUDA kernels, including integration with FlashAttention and FlashInfer
  • Speculative decoding
  • Chunked prefill
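
As a minimal sketch of using one of the quantization paths above via the Python API: the model name is an illustrative assumption, and any pre-quantized AWQ checkpoint on Hugging Face should work the same way.

from vllm import LLM, SamplingParams

# quantization="awq" selects the AWQ kernels listed above;
# TheBloke/Llama-2-7B-AWQ is an illustrative pre-quantized checkpoint.
llm = LLM(model="TheBloke/Llama-2-7B-AWQ", quantization="awq")

outputs = llm.generate(["The capital of France is"],
                       SamplingParams(temperature=0.0, max_tokens=32))
print(outputs[0].outputs[0].text)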

vLLM is flexible and easy to use with:

  • Seamless integration with popular Hugging Face models
  • High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
  • Tensor, pipeline, data and expert parallelism support for distributed inference
  • Streaming outputs
  • OpenAI-compatible API server (see the sketch after this list)
  • Support for NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs and GPUs, PowerPC CPUs, Arm CPUs, and TPUs, as well as hardware plugins such as Intel Gaudi, IBM Spyre, and Huawei Ascend
  • Prefix caching support
  • Multi-LoRA support
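
The sketch below exercises the OpenAI-compatible API server and streaming outputs listed above. It assumes a server was started separately (for example with vllm serve meta-llama/Llama-3.1-8B-Instruct); the model name and port are illustrative assumptions.

from openai import OpenAI

# vLLM's server speaks the OpenAI API; the key is a placeholder unless
# the server was started with --api-key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Stream tokens as they are produced.
stream = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # must match the served model
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)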

vLLM seamlessly supports most popular open-source models on Hugging Face, including:

  • Transformer-like LLMs (e.g., Llama)
  • Mixture-of-Experts LLMs (e.g., Mixtral, DeepSeek-V2, and DeepSeek-V3)
  • Embedding Models (e.g., E5-Mistral)
  • Multi-modal LLMs (e.g., LLaVA)

Find the full list of supported models in the documentation.

Getting Started

Install vLLM with pip or from source:

pip install vllm
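
As a quick check after installing, here is a minimal offline-inference sketch; facebook/opt-125m is a small illustrative model, and any supported model name works in its place:

from vllm import LLM, SamplingParams

# Load a model and generate a completion with basic sampling settings.
llm = LLM(model="facebook/opt-125m")
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)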

Visit our documentation to learn more.

Contributing

We welcome and value any contributions and collaborations. Please check out Contributing to vLLM for how to get involved.

Citation

If you use vLLM for your research, please cite our paper:

@inproceedings{kwon2023efficient,
  title={Efficient Memory Management for Large Language Model Serving with PagedAttention},
  author={Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
  booktitle={Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
  year={2023}
}

Contact Us

  • For technical questions and feature requests, please use GitHub Issues
  • For discussing with fellow users, please use the vLLM Forum
  • For coordinating contributions and development, please use Slack
  • For security disclosures, please use GitHub's Security Advisories feature
  • For collaborations and partnerships, please contact us at collaboration@vllm.ai

Media Kit

Project details


Release history

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

vllm-0.16.0.tar.gz (29.2 MB)

Uploaded: Source

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

vllm-0.16.0-cp38-abi3-manylinux_2_31_x86_64.whl (508.3 MB)

Uploaded: CPython 3.8+, manylinux: glibc 2.31+, x86-64

vllm-0.16.0-cp38-abi3-manylinux_2_31_aarch64.whl (460.5 MB)

Uploaded: CPython 3.8+, manylinux: glibc 2.31+, ARM64

File details

Details for the file vllm-0.16.0.tar.gz.

File metadata

  • Download URL: vllm-0.16.0.tar.gz
  • Upload date:
  • Size: 29.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Hashes for vllm-0.16.0.tar.gz:

  • SHA256: 1f684bb31fbef59d862e2fe666e23a41f1d39d93f86215ce1ce1db89a8f5665b
  • MD5: d233b7b3fff4aa9603ec8dfa6ca7c0e0
  • BLAKE2b-256: e6faab31c88afd21b69a46c3cc80d4017a2d5045a30cc4862dba6eae6eca7865

See more details on using hashes here.
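
For example, a short sketch of checking the sdist's SHA256 locally with the Python standard library; the path assumes the file sits in the working directory:

import hashlib

# Expected digest, copied from the table above.
expected = "1f684bb31fbef59d862e2fe666e23a41f1d39d93f86215ce1ce1db89a8f5665b"

h = hashlib.sha256()
with open("vllm-0.16.0.tar.gz", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert h.hexdigest() == expected, "SHA256 mismatch"
print("SHA256 OK")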

File details

Details for the file vllm-0.16.0-cp38-abi3-manylinux_2_31_x86_64.whl.

File metadata

File hashes

Hashes for vllm-0.16.0-cp38-abi3-manylinux_2_31_x86_64.whl:

  • SHA256: f066b2a2f8597a4a3ada8fbbfd122b59086864b2260ca42dc81bf9fb57af0c42
  • MD5: 1902b00006cddb7cb67f92d9501a9075
  • BLAKE2b-256: 84ce44a5a999eb7116516a8d4a08ab9fe14df773f0da4b243ceffe76b0afe54a

See more details on using hashes here.

File details

Details for the file vllm-0.16.0-cp38-abi3-manylinux_2_31_aarch64.whl.

File metadata

File hashes

Hashes for vllm-0.16.0-cp38-abi3-manylinux_2_31_aarch64.whl:

  • SHA256: dfaa14846608fd229dda9d372e2ad3f13854fd09147c2ba36b40579cf3c03804
  • MD5: 1e053ef28173478f39d48edf6784f300
  • BLAKE2b-256: d6ed9fafb939bf8326e4a45e62041bf5d1eb73b4f76aff8ef75ae1169df7f3cb

See more details on using hashes here.
