[cold start time] add envs.VLLM_COMPILE_DEPYF to guard decompile #20790
Conversation
Signed-off-by: Boyuan Feng <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀 …
Summary of Changes
Hello @BoyuanFeng, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request focuses on optimizing the cold start performance of the vLLM server. I've implemented a mechanism to make the previously mandatory and time-consuming bytecode decompilation step optional. This is achieved by introducing a new environment variable, `VLLM_COMPILE_DEPYF`, which is set to `False` by default, thereby skipping the decompile process and improving startup speed. Users can still opt in to enable decompilation for debugging purposes.
Highlights
- **Performance Improvement:** I've introduced a new environment variable, `VLLM_COMPILE_DEPYF`, to control the execution of a time-consuming bytecode decompilation step during the vLLM server's cold start process.
- **Cold Start Time Reduction:** By default, the bytecode decompilation step is now skipped, which significantly reduces cold start time. This addresses the ~7-second delay observed on large models like `llama-3.1-70b`.
- **Debugging Control:** The decompilation functionality, which is useful for debugging and generates `transformed_code.py`, can still be explicitly enabled by setting `VLLM_COMPILE_DEPYF=1`.
- **Code Changes:** The changes modify `vllm/compilation/wrapper.py` to conditionally execute the `depyf.decompile` call and update `vllm/envs.py` to define and register the new environment variable with a default value of `False` (see the sketch after this list).
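As a rough illustration of the `vllm/envs.py` side of the change, here is a hedged sketch. The dict-of-lazy-getters shape mirrors how vLLM registers environment variables, but the exact parsing below is an assumption rather than the PR's verbatim code:

```python
import os

# Hedged sketch of registering the new flag in vllm/envs.py.
# vLLM keeps environment variables as lazy getters in a dict; the
# "0"/"1" parsing here is an assumed convention, not the PR's code.
environment_variables = {
    # Guard the depyf decompile step. False by default, so cold start
    # skips the ~7 s decompilation unless explicitly requested.
    "VLLM_COMPILE_DEPYF":
    lambda: bool(int(os.getenv("VLLM_COMPILE_DEPYF", "0"))),
}

if __name__ == "__main__":
    print("VLLM_COMPILE_DEPYF =",
          environment_variables["VLLM_COMPILE_DEPYF"]())
```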
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution. ↩
Code Review
This pull request introduces a new environment variable `VLLM_COMPILE_DEPYF` to control an expensive decompilation step during model compilation, aiming to reduce cold start times. The implementation correctly adds the environment variable and associated logic. However, I've identified a critical issue in `vllm/compilation/wrapper.py` where the file existence check can prevent the decompiled code from being generated if a placeholder file was created in a previous run. My review includes a detailed explanation and a suggested fix for this logic.
```python
if envs.VLLM_COMPILE_DEPYF:
    import depyf
    src = depyf.decompile(new_code)
else:
    src = ("Please set VLLM_COMPILE_DEPYF=1 to populate "
           "this file")

with open(decompiled_file, "w") as f:
    f.write(src)
```
There's a logic issue with the file existence check on line 100 that can prevent this feature from working as expected in a common scenario.
Problem Scenario:
1. A user runs vLLM without `VLLM_COMPILE_DEPYF=1`. A placeholder file `transformed_code.py` is created.
2. The user then restarts vLLM with `VLLM_COMPILE_DEPYF=1` to debug.
3. The check `if not os.path.exists(decompiled_file):` on line 100 will be `False`, so the decompilation logic inside this block is never executed. The user will be stuck with the placeholder file.
Suggested Fix:
The logic should be refactored to ensure that when `VLLM_COMPILE_DEPYF=1`, the decompilation can proceed even if a placeholder file exists. A possible approach is to move the `os.path.exists()` check so that it only guards the creation of the placeholder file.
Here is an example of how you could restructure lines 100-119 to fix this. Note that this change is outside the current diff hunk.
```python
if envs.VLLM_COMPILE_DEPYF:
    # Always (re)generate the decompiled source when explicitly
    # requested, even if a placeholder file exists from a prior run.
    try:
        import depyf
        src = depyf.decompile(new_code)
        with open(decompiled_file, "w") as f:
            f.write(src)
        logger.debug("Dynamo transformed code saved to %s",
                     decompiled_file)
    except Exception:
        pass
elif not os.path.exists(decompiled_file):
    # The existence check now only guards the placeholder file.
    try:
        src = ("Please set VLLM_COMPILE_DEPYF=1 to populate "
               "this file")
        with open(decompiled_file, "w") as f:
            f.write(src)
    except Exception:
        pass
```
This seems like a great find! Will this disrupt any tests? Also, please report a simple E2E time benchmark to roughly understand the proportion of time saved.
@mgoin Cold start time reduces from 221 seconds to 196 seconds (a saving of about 25 seconds, roughly 11%) for meta-llama/Meta-Llama-3.1-70B. Environment: torch=2.7.0, vllm=0.1.dev7619+gb38a5ec.precompiled. The CI is also green.
The custom bytecode hook takes a long time (~7 seconds on llama-3.1-70b). It decompiles the compiled code and dumps it to `transformed_code.py`, which may be used for debugging purposes. This PR adds an env variable `VLLM_COMPILE_DEPYF` to skip this decompile step and reduce cold start time. A small standalone timing sketch follows.
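For context, here is a minimal, hedged sketch (not vLLM code; the function names are illustrative) of measuring what the decompile step costs, using the same `depyf.decompile` call that the bytecode hook guards:

```python
import time

import depyf


def timed_decompile(code_obj):
    # Decompile a code object with depyf and measure the elapsed time.
    # depyf.decompile is the same call that VLLM_COMPILE_DEPYF guards.
    start = time.perf_counter()
    src = depyf.decompile(code_obj)
    elapsed = time.perf_counter() - start
    return src, elapsed


def example(x):
    return x + 1


src, elapsed = timed_decompile(example.__code__)
print(f"decompiled {len(src)} chars in {elapsed:.3f}s")
```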
Example

`VLLM_FLASH_ATTN_VERSION=3 VLLM_USE_V1=1 vllm serve meta-llama/Meta-Llama-3.1-70B --tensor-parallel-size 8` gives:

`VLLM_COMPILE_DEPYF=1 VLLM_FLASH_ATTN_VERSION=3 VLLM_USE_V1=1 vllm serve meta-llama/Meta-Llama-3.1-70B --tensor-parallel-size 8` gives: