UN-2453 [FIX] Mistral AI LLM adapter test connection fix #195
gaya3-zipstack merged 2 commits into main from
Conversation
Summary by CodeRabbit
Walkthrough
The changes update the version string in the SDK's
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Config
    participant MistralLLM
    participant User
    User->>MistralLLM: Initialize LLM instance
    MistralLLM->>Config: Fetch MAX_TOKENS value
    Config-->>MistralLLM: Return max tokens setting
    MistralLLM-->>User: LLM instance ready with correct max tokens
```
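The flow above can be sketched as a minimal Python example. The class and key names (`Config`, `MistralLLM`, `MAX_TOKENS`, `MAX_RETRIES`) are illustrative stand-ins, not the SDK's actual API; the point is that the max tokens value must be read from the correct config key:

```python
# Minimal sketch of the fix: read max_tokens from MAX_TOKENS, not MAX_RETRIES.
# All names here are illustrative; the real Unstract SDK adapter differs.

class Config:
    """Stand-in for the adapter's configuration mapping."""

    def __init__(self, settings):
        self._settings = settings

    def get(self, key, default=None):
        return self._settings.get(key, default)


class MistralLLM:
    MAX_TOKENS = "max_tokens"    # correct key (the fix)
    MAX_RETRIES = "max_retries"  # key mistakenly used before the fix

    def __init__(self, config):
        # Before the fix, max_tokens was populated from MAX_RETRIES,
        # which broke test connections for models like mistral-medium.
        self.max_tokens = int(config.get(self.MAX_TOKENS, 1024))
        self.max_retries = int(config.get(self.MAX_RETRIES, 3))


llm = MistralLLM(Config({"max_tokens": 4096, "max_retries": 5}))
print(llm.max_tokens)   # 4096
print(llm.max_retries)  # 5
```

With the old behavior, `max_tokens` would have been set to the retry count (5 here), a value far too small for models that enforce a minimum completion budget.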
What
Fix for the Mistral AI LLM adapter's test connection not working for some models such as mistral-medium and mistral-large.
Why
This issue was blocking one of our customers.
How
The max tokens value was mistakenly read from MAX_RETRIES; it is now read from MAX_TOKENS.
Relevant Docs
Related Issues or PRs
Dependencies Versions / Env Variables
Notes on Testing
Tested the connection with the mistral-medium model.
Screenshots
Checklist
I have read and understood the Contribution Guidelines.