
Conversation


@heheda12345 heheda12345 commented Aug 15, 2025

Purpose

It's difficult to locate a BadRequestError because the default output doesn't show where the error happened. Add a --log-error-stack flag that prints the stack trace during create_error_response.
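
As a minimal sketch of the flag wiring (illustrative only; vLLM's real CLI plumbing goes through its own argument parser and serving classes, so the names here are assumptions):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--log-error-stack",
    action="store_true",
    help="Print a stack trace when an error response is created.",
)
# e.g. `vllm serve ... --log-error-stack` would set this to True
args = parser.parse_args(["--log-error-stack"])
assert args.log_error_stack is True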

Test Plan

vllm serve openai/gpt-oss-20b --enforce-eager --log-error-stack
from openai import OpenAI

MODEL = 'openai/gpt-oss-20b'
client = OpenAI(base_url="http://localhost:8000/v1", api_key="")

# error response without exception
response = client.responses.create(
    model=MODEL,
    input="What is 13 * 24?",
    store=True,
)
assert response is not None

_retrieved_response = client.responses.retrieve(response.id)
print(_retrieved_response)

tools = [{
    "type": "function",
    "name": "get_weather",
    "description": "Get current temperature for provided coordinates in celsius.",  # noqa
    "parameters": {
        "type": "object",
        "properties": {
            "latitude": {"type": "number"},
            "longitude": {"type": "number"},
        },
        "required": ["latitude", "longitude"],
        "additionalProperties": False,
    },
    "strict": True,
}]

# error response with exception
response = client.responses.create(
    model=MODEL,
    input="What's the weather like in Paris today?",
    tools=tools,
    tool_choice="required",
)
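
This second call is expected to fail; a client-side guard making that explicit could look like the following (an illustrative sketch, assuming the server returns a 400 that the OpenAI client surfaces as openai.BadRequestError):

import openai

try:
    response = client.responses.create(
        model=MODEL,
        input="What's the weather like in Paris today?",
        tools=tools,
        tool_choice="required",
    )
except openai.BadRequestError as e:
    # With --log-error-stack, the server log now shows where this
    # error response was created.
    print("Expected error:", e)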

Test Result

With --log-error-stack, the server prints the error stack trace in both cases above.

(Optional) Documentation Update



Signed-off-by: Chen Zhang <zhangch99@outlook.com>
@heheda12345 heheda12345 requested a review from aarnphm as a code owner August 15, 2025 04:34
@mergify mergify bot added the frontend label Aug 15, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a --log-error-stack flag, a valuable addition for debugging by enabling stack trace logging for error responses. The implementation is sound, and the new setting is correctly propagated through the various serving classes. My review includes one suggestion to enhance the logging of these stack traces by integrating with the existing logger infrastructure, which will improve consistency and make the logs more manageable.
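
For context, propagating a setting like this typically means threading it through each serving class constructor; a simplified sketch of the pattern (class and parameter names here are illustrative, not vLLM's exact signatures):

class OpenAIServing:
    # Base class holding shared serving state, including the new flag.
    def __init__(self, log_error_stack: bool = False):
        self.log_error_stack = log_error_stack

class OpenAIServingChat(OpenAIServing):
    def __init__(self, log_error_stack: bool = False):
        super().__init__(log_error_stack=log_error_stack)

chat = OpenAIServingChat(log_error_stack=True)
assert chat.log_error_stack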

Comment on lines 418 to 423
import traceback
exc_type, _, _ = sys.exc_info()
if exc_type is not None:
    traceback.print_exc()
else:
    traceback.print_stack()

Severity: high

Instead of printing directly to stderr using traceback.print_exc() and traceback.print_stack(), it's better to use the configured logger. This ensures that stack traces are formatted and routed according to the application's logging configuration. Using logger.exception() when an exception is present, and logging the formatted stack otherwise, will provide more consistent and manageable error logging.

Suggested change
- import traceback
- exc_type, _, _ = sys.exc_info()
- if exc_type is not None:
-     traceback.print_exc()
- else:
-     traceback.print_stack()
+ import traceback
+ if sys.exc_info()[0]:
+     logger.exception("Error will be returned to client: %s", message)
+ else:
+     stack = "".join(traceback.format_stack())
+     logger.error("Stack trace for error response '%s':\n%s", message, stack)
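
For illustration, a self-contained toy run of the suggested branch logic (stdlib only; the logger name and setup here are placeholders, not vLLM's logging config):

import logging
import sys
import traceback

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("demo")

def create_error_response(message: str) -> None:
    if sys.exc_info()[0]:
        # Inside an except block: logger.exception logs at ERROR level
        # and automatically appends the active exception's traceback.
        logger.exception("Error will be returned to client: %s", message)
    else:
        # No active exception: capture and log the current call stack.
        stack = "".join(traceback.format_stack())
        logger.error("Stack trace for error response '%s':\n%s", message, stack)

try:
    raise ValueError("bad tool_choice")
except ValueError:
    create_error_response("bad tool_choice")  # logs the exception traceback
create_error_response("no active exception")  # logs the current call stack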

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default; they only run fastcheck CI, which runs a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

@aarnphm aarnphm left a comment


tiny comments, otherwise LGTM.

will CC a few more folks for comments. @DarkLight1337 @hmellor

@mergify

mergify bot commented Aug 20, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @heheda12345.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Aug 20, 2025
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
@mergify mergify bot removed the needs-rebase label Aug 20, 2025
@simon-mo simon-mo enabled auto-merge (squash) August 21, 2025 21:13
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Aug 21, 2025
@simon-mo simon-mo merged commit 3210264 into vllm-project:main Aug 27, 2025
37 of 38 checks passed
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
…nse (vllm-project#22960)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: Xiao Yu <xiao.yu@amd.com>
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Sep 3, 2025
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025

Labels

frontend, ready (ONLY add when PR is ready to merge/full CI is needed)


3 participants