
fix: do not inspect input with cloud inference #1024

Merged
joein merged 1 commit into dev from turn-off-inspector-cloud-inference on Jun 13, 2025

Conversation

joein (Member) commented on Jun 13, 2025

The previous implementation assumed we'd need to send a header with cloud inference in order to run inference remotely. In the current state we no longer need to do this, since input requiring inference is selected on Qdrant's side.
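
A hypothetical, runnable sketch of the resulting control flow (the Sketch class and its _needs_inference / _embed_locally helpers are stand-ins invented for this illustration, not the actual client code; only the guard shape mirrors the hunks quoted in the review below):

class Sketch:
    def __init__(self, cloud_inference: bool):
        self.cloud_inference = cloud_inference

    def query(self, raw_input):
        # After this PR: skip local inspection/embedding entirely when cloud
        # inference is enabled; Qdrant selects inference-requiring input itself.
        if not self.cloud_inference and self._needs_inference(raw_input):
            raw_input = self._embed_locally(raw_input)
        return raw_input  # passed through untouched under cloud inference

    def _needs_inference(self, obj):  # stand-in for _inference_inspector.inspect
        return isinstance(obj, str)

    def _embed_locally(self, obj):  # stand-in for _embed_models
        return [0.0, 1.0]  # dummy vector

print(Sketch(cloud_inference=True).query("raw text"))   # -> raw text
print(Sketch(cloud_inference=False).query("raw text"))  # -> [0.0, 1.0]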

netlify Bot commented on Jun 13, 2025

Deploy Preview for poetic-froyo-8baba7 ready!

🔨 Latest commit: 570a2c2
🔍 Latest deploy log: https://app.netlify.com/projects/poetic-froyo-8baba7/deploys/684bffa4208e1c0008f4b969
😎 Deploy Preview: https://deploy-preview-1024--poetic-froyo-8baba7.netlify.app

coderabbitai Bot commented on Jun 13, 2025

📝 Walkthrough

This change refactors the conditional logic for embedding model inference checks within several methods of both the AsyncQdrantClient and QdrantClient classes. The update removes intermediate variables used to determine if embedding inference is required and consolidates these checks into direct, inline conditionals. The logic for handling embedding of inputs, including both main query and prefetch parameters, is restructured for consistency and clarity, with explicit handling of None and list types. No changes are made to the signatures of exported or public entities; all modifications are internal to method implementations.
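
As an illustration of the consolidated None/list handling, a toy, self-contained sketch (inspect and embed_models are stubs written for this example; the branching mirrors the pattern suggested in the review comments below):

def inspect(obj):
    # Stub: treat any string (or a list containing one) as raw inference input.
    if isinstance(obj, list):
        return any(isinstance(o, str) for o in obj)
    return isinstance(obj, str)

def embed_models(objs):
    # Stub for _embed_models: lazily yields "embedded" items one by one.
    items = objs if isinstance(objs, list) else [objs]
    for o in items:
        yield f"embedded({o})"

def resolve(query, prefetch, cloud_inference=False):
    if not cloud_inference and (inspect(query) or inspect(prefetch)):
        if query is not None:
            query = next(iter(embed_models(query)))
        if isinstance(prefetch, list):
            prefetch = list(embed_models(prefetch))
        elif prefetch is not None:
            prefetch = next(iter(embed_models(prefetch)))
    return query, prefetch

print(resolve("hello", ["a", "b"]))                  # both embedded locally
print(resolve("hello", None, cloud_inference=True))  # passthrough for cloud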

Suggested reviewers

  • generall


coderabbitai Bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
qdrant_client/qdrant_client.py (1)

730-758: Same nested-if / generator comments apply here – please replicate the simplification / refactor in query_points_groups to keep the two code paths in lock-step.

🧰 Tools
🪛 Ruff (0.11.9)

730-733: Use a single if statement instead of nested if statements

(SIM102)

🧹 Nitpick comments (2)
qdrant_client/qdrant_client.py (2)

552-560: Collapse the nested if to a single predicate for clarity

The static-analysis hint (SIM102) is right: the two-level if adds indentation without adding meaning.
You can shave four lines and one indent level:

-        if not self.cloud_inference:
-            if self._inference_inspector.inspect(query) or self._inference_inspector.inspect(
-                prefetch
-            ):
+        if (
+            not self.cloud_inference
+            and (
+                self._inference_inspector.inspect(query)
+                or self._inference_inspector.inspect(prefetch)
+            )
+        ):

Same change applies to query_points_groups.

🧰 Tools
🪛 Ruff (0.11.9)

557-560: Use a single if statement instead of nested if statements

(SIM102)


562-591: Minor: avoid burning a generator just to take the first element

_embed_models returns an iterator. Grabbing the first element with next(iter(...)) materialises one item and then drops the generator, which is a little wasteful and makes the intent less obvious.

A more idiomatic pattern is:

-                query = (
-                    next(iter(self._embed_models(query, ...)))
-                    if query is not None
-                    else None
-                )
+                if query is not None:
+                    query, *_ = self._embed_models(
+                        query, is_query=True, batch_size=self.local_inference_batch_size
+                    )

(Not performance-critical here, but improves readability).
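
A quick, self-contained comparison of the two idioms (worth noting: star-unpacking drains the generator to bind the remainder, so for long lazy generators next(iter(...)) is actually the cheaper of the two):

def gen():
    yield "first"
    yield "second"

a = next(iter(gen()))  # takes one item, leaves the rest unconsumed
b, *rest = gen()       # reads the whole generator; rest == ["second"]
print(a, b, rest)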

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 808b0ab and 570a2c2.

📒 Files selected for processing (2)
  • qdrant_client/async_qdrant_client.py (6 hunks)
  • qdrant_client/qdrant_client.py (6 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
qdrant_client/qdrant_client.py (2)
qdrant_client/embed/type_inspector.py (1)
  • inspect (24-50)
qdrant_client/qdrant_fastembed.py (2)
  • query (612-696)
  • _embed_models (867-879)
qdrant_client/async_qdrant_client.py (2)
qdrant_client/embed/type_inspector.py (1)
  • inspect (24-50)
qdrant_client/async_qdrant_fastembed.py (2)
  • query (573-647)
  • _embed_models (797-808)
🪛 Ruff (0.11.9)
qdrant_client/qdrant_client.py

557-560: Use a single if statement instead of nested if statements

(SIM102)


730-733: Use a single if statement instead of nested if statements

(SIM102)

qdrant_client/async_qdrant_client.py

528-531: Use a single if statement instead of nested if statements

(SIM102)


696-699: Use a single if statement instead of nested if statements

(SIM102)

⏰ Context from checks skipped due to timeout of 90000ms (8)
  • GitHub Check: Redirect rules - poetic-froyo-8baba7
  • GitHub Check: Header rules - poetic-froyo-8baba7
  • GitHub Check: Pages changed - poetic-froyo-8baba7
  • GitHub Check: Python 3.11.x on ubuntu-latest test
  • GitHub Check: Python 3.12.x on ubuntu-latest test
  • GitHub Check: Python 3.10.x on ubuntu-latest test
  • GitHub Check: Python 3.13.x on ubuntu-latest test
  • GitHub Check: Python 3.9.x on ubuntu-latest test
🔇 Additional comments (10)
qdrant_client/qdrant_client.py (4)

421-428: Condition now correctly skips local inference on cloud-inference runs

Swapping the two-step requires_inference logic for

if not self.cloud_inference and self._inference_inspector.inspect(requests):

makes the intent explicit and prevents the (expensive) inspection+embedding when cloud_inference is enabled.
Looks good.


1617-1632: LGTM – early exit when cloud inference is enabled

The new guard prevents the local embedder from doing unnecessary work. No further issues spotted.
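
For reference, the guard follows the same shape as the update_vectors hunk quoted further down (pattern reproduced here as an illustrative snippet, not the exact merged lines):

if not self.cloud_inference and self._inference_inspector.inspect(points):
    points = list(
        self._embed_models(
            points, is_query=False, batch_size=self.local_inference_batch_size
        )
    )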


1682-1687: Same concise guard pattern correctly applied to update_vectors


2135-2140: Consistent fix in batch_update_points – mirrors the other write paths.

qdrant_client/async_qdrant_client.py (6)

398-403: Avoid re-allocating the generator just to wrap it in list()

_embed_models() already yields an iterable whose length matches the input.
If the upstream _client.query_batch_points can consume any iterable (it eventually converts to Protobuf / JSON anyway) there is no need to materialise the whole thing:

-        if not self.cloud_inference and self._inference_inspector.inspect(requests):
-            requests = list(
-                self._embed_models(
-                    requests, is_query=True, batch_size=self.local_inference_batch_size
-                )
-            )
+        if not self.cloud_inference and self._inference_inspector.inspect(requests):
+            # Pass a lazy generator straight through; saves memory for big batches
+            requests = self._embed_models(
+                requests, is_query=True, batch_size=self.local_inference_batch_size
+            )

This keeps memory usage constant and avoids one extra traversal.
[ suggest_optional_refactor ]


528-562: Combine the two if levels to satisfy Ruff SIM102 and skip pointless embeddings

You run _embed_models on prefetch even when only query required inference, which can double the work on large lists.
Merging the conditions lets you decide per-argument which one really needs embedding:

-        if not self.cloud_inference:
-            if self._inference_inspector.inspect(query) or self._inference_inspector.inspect(
-                prefetch
-            ):
+        if (
+            not self.cloud_inference
+            and (
+                self._inference_inspector.inspect(query)
+                or self._inference_inspector.inspect(prefetch)
+            )
+        ):

Then, before embedding, re-check each object:

if query is not None and self._inference_inspector.inspect(query):
    query = next(
        self._embed_models(
            query, is_query=True, batch_size=self.local_inference_batch_size
        )
    )

if isinstance(prefetch, list) and any(self._inference_inspector.inspect(p) for p in prefetch):
    prefetch = list(
        self._embed_models(
            prefetch, is_query=True, batch_size=self.local_inference_batch_size
        )
    )
elif prefetch is not None and self._inference_inspector.inspect(prefetch):
    prefetch = next(
        self._embed_models(
            prefetch, is_query=True, batch_size=self.local_inference_batch_size
        )
    )

This removes the nested if, fulfils the static-analysis hint, and avoids unnecessary work.
[ suggest_essential_refactor ]

🧰 Tools
🪛 Ruff (0.11.9)

528-531: Use a single if statement instead of nested if statements

(SIM102)


696-724: Same SIM102 / redundant-embedding issue as in query_points

The logic duplicates the pattern fixed above. Applying the same consolidation keeps the two public APIs in sync and prevents divergent behaviour in the future.
[ duplicate_comment ]

🧰 Tools
🪛 Ruff (0.11.9)

696-699: Use a single if statement instead of nested if statements

(SIM102)


1564-1579: Minor readability – inline condition is 👍

Nice simplification compared to the previous requires_inference variable.
No functional concerns here.
[ approve_code_changes ]


1627-1633: Consider early-returning when inspect is False to save one allocation

-        if not self.cloud_inference and self._inference_inspector.inspect(points):
-            points = list(
-                self._embed_models(
-                    points, is_query=False, batch_size=self.local_inference_batch_size
-                )
-            )
+        if self.cloud_inference or not self._inference_inspector.inspect(points):
+            return await self._client.update_vectors(
+                collection_name=collection_name,
+                points=points,
+                wait=wait,
+                ordering=ordering,
+                shard_key_selector=shard_key_selector,
+            )
+
+        points = list(
+            self._embed_models(
+                points, is_query=False, batch_size=self.local_inference_batch_size
+            )
+        )

This fast-path avoids the second pass over points when inference is not needed.
[ suggest_optional_refactor ]


2070-2075: Generator vs. list again

Same remark as for query_batch_points: if the lower layer accepts any Iterable, do not eagerly list() the result.
[ duplicate_comment ]

joein requested a review from generall on Jun 13, 2025, 15:38
joein merged commit a908d95 into dev on Jun 13, 2025
14 of 44 checks passed