
UN-2453 [FIX] Mistral AI LLM adapter test connection fix #195

Merged

gaya3-zipstack merged 2 commits into main from UN-2453-mistral-ai-llm-test-connection-failing on Jul 15, 2025
Conversation

@pk-zipstack
Contributor

What

Fixes the Mistral AI LLM adapter's test connection, which was failing for some models such as mistral-medium and mistral-large.

Why

This issue was blocking one of our customers.

How

  • Fixed the constant from which max_tokens was fetched: it was previously read via MAX_RETRIES and is now read via MAX_TOKENS (see the sketch below).
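
A minimal sketch of the kind of one-key mix-up described above; the Constants class and the config layout here are assumptions modeled on the PR text, not the exact unstract-sdk source:

```python
# Illustrative only: Constants and the config dict are assumptions
# modeled on the PR description, not the actual unstract-sdk code.

class Constants:
    MAX_RETRIES = "max_retries"
    MAX_TOKENS = "max_tokens"

config = {"max_tokens": 1024, "max_retries": 3}

# Before the fix: the token limit was looked up under the retry-count
# key, so a value meant as a retry count was used as the token limit.
buggy_max_tokens = config.get(Constants.MAX_RETRIES)   # -> 3 (wrong)

# After the fix: the value is fetched under the correct key.
fixed_max_tokens = config.get(Constants.MAX_TOKENS)    # -> 1024
```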

Relevant Docs

Related Issues or PRs

Dependencies Versions / Env Variables

Notes on Testing

  • Tested the test connection feature with the mistral-medium model.

Screenshots

[Screenshot attached in the original PR]

Checklist

  • I have read and understood the Contribution Guidelines.

@coderabbitai
Contributor

coderabbitai bot commented Jul 15, 2025

Summary by CodeRabbit

  • Bug Fixes

    • Corrected an issue where the maximum tokens setting was not properly applied for MistralAI, ensuring the correct configuration is used.
  • Chores

    • Updated the version number to v0.76.1.

Walkthrough

The changes update the version string in the SDK's __init__.py file from "v0.76.0" to "v0.76.1" and fix a configuration key in the Mistral LLM adapter to correctly use the max tokens parameter instead of the max retries parameter when determining the maximum tokens setting.

Changes

| File(s) | Change Summary |
| --- | --- |
| src/unstract/sdk/__init__.py | Updated __version__ from "v0.76.0" to "v0.76.1". |
| src/unstract/sdk/adapters/llm/mistral/src/mistral.py | Fixed the configuration key to use MAX_TOKENS instead of MAX_RETRIES for the max tokens parameter. |

Sequence Diagram(s)

sequenceDiagram
    participant Config
    participant MistralLLM
    participant User

    User->>MistralLLM: Initialize LLM instance
    MistralLLM->>Config: Fetch MAX_TOKENS value
    Config-->>MistralLLM: Return max tokens setting
    MistralLLM-->>User: LLM instance ready with correct max tokens

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
Knowledge Base: Disabled due to Reviews > Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between de5cafe and 584898f.

📒 Files selected for processing (2)
  • src/unstract/sdk/__init__.py (1 hunks)
  • src/unstract/sdk/adapters/llm/mistral/src/mistral.py (1 hunks)
🔇 Additional comments (2)
src/unstract/sdk/__init__.py (1)

1-1: LGTM! Version bump correctly reflects patch release.

The patch version increment from "v0.76.0" to "v0.76.1" appropriately reflects the bug fix in the Mistral AI adapter.

src/unstract/sdk/adapters/llm/mistral/src/mistral.py (1)

53-55: LGTM! Fix correctly uses MAX_TOKENS constant.

The fix properly uses Constants.MAX_TOKENS to fetch the maximum tokens configuration instead of the incorrect Constants.MAX_RETRIES. This should resolve the test connection failures for models like Mistral-medium and Mistral-Large as described in the PR objectives.
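
A regression test along these lines would guard against the key mix-up recurring; the get_llm_kwargs helper below is hypothetical, standing in for however the real adapter assembles its client arguments, and is shown only to illustrate the shape of such a test:

```python
# Hypothetical regression test for the MAX_TOKENS/MAX_RETRIES mix-up.
# get_llm_kwargs is a stand-in, not the unstract-sdk API.

def get_llm_kwargs(config: dict) -> dict:
    # After the fix, each setting is read from its own config key.
    return {
        "max_tokens": int(config.get("max_tokens", 512)),
        "max_retries": int(config.get("max_retries", 3)),
    }

def test_max_tokens_not_confused_with_max_retries():
    kwargs = get_llm_kwargs({"max_tokens": 2048, "max_retries": 3})
    # Before the fix, max_tokens silently picked up the retry count (3).
    assert kwargs["max_tokens"] == 2048
    assert kwargs["max_retries"] == 3

if __name__ == "__main__":
    test_max_tokens_not_confused_with_max_retries()
    print("ok")
```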


@gaya3-zipstack (Contributor) left a comment

Looks good

@gaya3-zipstack gaya3-zipstack merged commit a79b097 into main Jul 15, 2025
2 checks passed
@gaya3-zipstack gaya3-zipstack deleted the UN-2453-mistral-ai-llm-test-connection-failing branch July 15, 2025 11:12