
Conversation

@jeejeelee (Collaborator) commented Aug 29, 2025

Purpose

Move tests/entrypoints/llm/test_generate_multiple_loras.py into the LoRA-related tests and use smaller models for testing.

Part of #23667

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing a test command.
  • The test results, such as pasting a comparison of results before and after, or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request cleans up a LoRA test by moving it to a more appropriate file and simplifying its implementation, which is a good improvement. I've found one area where the new test could be made more robust. Also, there's a small typo in the pull request title ('teest' -> 'test').

Comment on lines +163 to +191
def test_multiple_lora_requests():
    llm = LLM(
        model=MODEL_PATH,
        enable_lora=True,
        max_loras=4,
        max_lora_rank=LORA_RANK,
        max_model_len=512,
        gpu_memory_utilization=0.5,
        enforce_eager=True,
    )
    PROMPTS = ["Hello, my name is"] * 2
    LORA_NAME = "Alice"
    lora_request = [
        LoRARequest(LORA_NAME + str(idx), idx + 1,
                    LORA_NAME_PATH_MAP[LORA_NAME])
        for idx in range(len(PROMPTS))
    ]
    # Multiple LoRARequests should be matched with each prompt
    outputs = llm.generate(PROMPTS, lora_request=lora_request)
    assert len(PROMPTS) == len(outputs)

    # Exception raised if the size of lora_request does not match the size of prompts
    with pytest.raises(ValueError):
        outputs = llm.generate(PROMPTS, lora_request=lora_request[:1])

    # Single LoRARequest should be applied to every prompt
    single_lora_request = lora_request[0]
    outputs = llm.generate(PROMPTS, lora_request=single_lora_request)
    assert len(PROMPTS) == len(outputs)
@gemini-code-assist bot (Contributor) commented:

Severity: high

The current implementation of this test is not robust enough to verify the correctness of handling multiple LoRA requests. It uses the same LoRA adapter for all prompts and only asserts that the number of outputs matches the number of prompts.

This is problematic because it could miss critical bugs, such as:

  • Applying the wrong LoRA to a prompt.
  • Applying only the first LoRA to all prompts in a batch.

Since the test uses identical LoRAs, these bugs would not be detected.

To make the test more effective, it should use distinct LoRAs for different prompts and verify that the generated output corresponds to the correct LoRA. This ensures that the system correctly maps each LoRA request to its corresponding prompt in a batch. I've provided a suggestion to refactor the test to use 'Alice' and 'Bob' LoRAs and assert on their respective outputs.

def test_multiple_lora_requests():
    llm = LLM(
        model=MODEL_PATH,
        enable_lora=True,
        max_loras=4,
        max_lora_rank=LORA_RANK,
        max_model_len=512,
        gpu_memory_utilization=0.5,
        enforce_eager=True,
    )
    # Use a prompt that triggers the self-cognition of the LoRAs.
    PROMPTS = ["Hi, tell me about you"] * 2

    # Use two different LoRAs to ensure they are applied correctly.
    lora_request = [
        LoRARequest("Alice", 1, LORA_NAME_PATH_MAP["Alice"]),
        LoRARequest("Bob", 2, LORA_NAME_PATH_MAP["Bob"]),
    ]

    sampling_params = SamplingParams(temperature=0, max_tokens=64)

    # Multiple LoRARequests should be matched with each prompt.
    outputs = llm.generate(PROMPTS, sampling_params, lora_request=lora_request)
    assert len(PROMPTS) == len(outputs)

    # Check that the correct LoRA was applied to each prompt.
    output_text_1 = outputs[0].outputs[0].text
    output_text_2 = outputs[1].outputs[0].text
    assert "Alice" in output_text_1
    assert "Bob" in output_text_2

    # Exception raised if the size of lora_request does not match the size of prompts.
    with pytest.raises(ValueError):
        llm.generate(PROMPTS, sampling_params, lora_request=lora_request[:1])

    # Single LoRARequest should be applied to every prompt.
    single_lora_request = lora_request[0]
    outputs = llm.generate(PROMPTS,
                           sampling_params,
                           lora_request=single_lora_request)
    assert len(PROMPTS) == len(outputs)
    assert "Alice" in outputs[0].outputs[0].text
    assert "Alice" in outputs[1].outputs[0].text

check_outputs(output_text, expected_output)


def test_multiple_lora_requests():
Member

There is no TP involved here, so maybe we should rename the file and the module doc.

Collaborator Author (@jeejeelee)

Done in 23f020f

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
@mergify mergify bot added the ci/build label Aug 29, 2025
- vllm/lora
- tests/lora
-   command: pytest -v -s lora --shard-id=$$BUILDKITE_PARALLEL_JOB --num-shards=$$BUILDKITE_PARALLEL_JOB_COUNT --ignore=lora/test_chatglm3_tp.py --ignore=lora/test_llama_tp.py
+   command: pytest -v -s lora --shard-id=$$BUILDKITE_PARALLEL_JOB --num-shards=$$BUILDKITE_PARALLEL_JOB_COUNT --ignore=lora/test_chatglm3_tp.py --ignore=lora/test_llama_tp.py --ignore=lora/test_llm_with_multi_loras.py
Member

Why ignore this file?

Collaborator Author (@jeejeelee)

Because this script will be run as part of the LoRA TP tests.
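
For illustration, here is a minimal sketch of what exercising this multi-LoRA script under tensor parallelism (the LoRA TP group mentioned above) could look like; the model and adapter paths are placeholders, and the repository's actual distributed test setup may differ.

# Hypothetical sketch only: exercise the same multi-LoRA generation path with
# tensor parallelism enabled. MODEL_PATH and LORA_PATH are placeholders.
from vllm import LLM
from vllm.lora.request import LoRARequest

MODEL_PATH = "path/to/small-base-model"  # placeholder
LORA_PATH = "path/to/lora-adapter"  # placeholder


def run_multi_lora_under_tp() -> None:
    llm = LLM(
        model=MODEL_PATH,
        enable_lora=True,
        max_loras=4,
        max_model_len=512,
        enforce_eager=True,
        tensor_parallel_size=2,  # requires at least 2 GPUs
    )
    prompts = ["Hello, my name is"] * 2
    lora_requests = [
        LoRARequest(f"adapter{idx}", idx + 1, LORA_PATH)
        for idx in range(len(prompts))
    ]
    outputs = llm.generate(prompts, lora_request=lora_requests)
    assert len(outputs) == len(prompts)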

@jeejeelee jeejeelee changed the title [CI/Build] Clean up LoRA teest [CI/Build] Clean up LoRA test Aug 29, 2025
@DarkLight1337 DarkLight1337 enabled auto-merge (squash) August 29, 2025 03:22
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Aug 29, 2025
@DarkLight1337 (Member)
The distributed tests fail, PTAL.

@vllm-bot vllm-bot merged commit b4f9e96 into vllm-project:main Aug 29, 2025
72 of 77 checks passed
@DarkLight1337 (Member) commented Aug 29, 2025

Looks like just a flaky test, merging

@jeejeelee jeejeelee deleted the clean-lora-tests branch August 29, 2025 06:32
eicherseiji pushed a commit to eicherseiji/vllm that referenced this pull request Sep 9, 2025
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>