7 projects
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
depyf
Decompile Python functions, from bytecode back to source code!
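As context for what a bytecode-to-source tool works from (this is not depyf's own API, just the standard-library view of the input side), Python's built-in dis module renders a function's compiled bytecode as a readable instruction listing; decompilation is the inverse mapping back to source:

```python
import dis
import io

def greet(name):
    # A trivial function whose compiled form we want to inspect.
    return f"hello, {name}"

# Capture the disassembly as text instead of printing to stdout.
buf = io.StringIO()
dis.dis(greet, file=buf)
listing = buf.getvalue()

# The listing contains opcodes such as LOAD_FAST for the `name` argument.
print("LOAD_FAST" in listing)
```

Reading bytecode like this by hand is what decompilers automate: they reconstruct equivalent source from exactly these instruction sequences.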
vllm-allocator-adaptor
vLLM Allocator Adaptor (C/CUDA/Python) using callback shims
vllm-nccl-cu12
vllm-nccl-cu11
trumpy
TRUMPY: Tracing and Reverse Understanding Memory in PyTorch
easydl
A more powerful runTask