pip install -e .
this takes forever. like literally forever (it builds all the CUDA kernels from source).
What worked:
❯ source .venv/bin/activate
❯ export VLLM_USE_PRECOMPILED=1
❯ uv pip install -v -e .
this took < 10 min on my LG Gram Pro (laptop GPU: RTX 3050 Ti)
Notes:
- export VLLM_USE_PRECOMPILED=1 before installing!!! It makes the install reuse precompiled binaries instead of compiling the kernels yourself.
- uv because it's fast.
- -v is not necessary; it just prints the build log.
- ps. based on https://docs.vllm.ai/en/latest/contributing/incremental_build.html the recommended command is VLLM_USE_PRECOMPILED=1 uv pip install -U -e . --torch-backend=auto, but that took me 2+ hours and didn't work for me for some reason.
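A quick sanity check that the editable install actually imports (this is just a habit of mine, not from the vLLM docs; it assumes the venv is still active):

❯ python -c "import vllm; print(vllm.__version__)"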
# These commands are only for Nvidia CUDA platforms.
uv pip install -r requirements/common.txt -r requirements/dev.txt --torch-backend=auto
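Optionally, confirm that the torch wheel picked by --torch-backend=auto can actually see the GPU. This is plain PyTorch, nothing vLLM-specific:

# Should print the torch version and True on a working CUDA setup
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"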
# Linting, formatting and static type checking
pre-commit install
# You can manually run pre-commit with
pre-commit run --all-files --show-diff-on-failure
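If --all-files is too slow, pre-commit can also run on just the files you touched. These are standard pre-commit and git flags, not vLLM-specific:

# Only lint/format files changed relative to main
pre-commit run --files $(git diff --name-only main)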
# To manually run something from CI that does not run
# locally by default, you can run:
pre-commit run mypy-3.9 --hook-stage manual --all-files
# Unit tests
pytest tests/
# Run tests for a single test file with detailed output
pytest -s -v tests/test_logger.py
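You can narrow this further with pytest's standard selectors; test_example below is a placeholder, not a real test name:

# Run one test function by node id (replace test_example with an actual test)
pytest -s -v tests/test_logger.py::test_example
# Or select tests by keyword expression
pytest -s -v tests/test_logger.py -k "logger"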
Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.