0 votes
0 answers
98 views

I am trying to build a RAG from PDFs where I extract the text and tables. I want to use a persistent DB in order to store the chunks, tables, embeddings, etc., and then reload the DB and use the ...
AndCh · 339
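For the persistence part of this question, a minimal sketch assuming Chroma as the backing store and OpenAI embeddings (the directory, model, and documents below are placeholders, not the asker's code):

```python
# Build the store once and write it to disk, then reload it on later runs.
from langchain_chroma import Chroma
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

docs = [
    Document(page_content="Chunk of extracted PDF text...", metadata={"source": "report.pdf", "page": 3}),
    Document(page_content="| col1 | col2 |\n| a | b |", metadata={"source": "report.pdf", "type": "table"}),
]
vectorstore = Chroma.from_documents(docs, embeddings, persist_directory="./chroma_db")

# Later run: point at the same directory instead of re-embedding everything.
reloaded = Chroma(persist_directory="./chroma_db", embedding_function=embeddings)
print(reloaded.similarity_search("revenue table", k=2))
```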
0 votes
0 answers
72 views

When I try to generate embeddings with two different pieces of code: here is the one mentioned on the LangChain site, but it gives me a deadline-exceeded error. @lru_cache def get_settings(): ...
Akshat Soni
0 votes
1 answer
468 views

I'm trying to use PGVectorStore in LangChain with metadata columns, following the example on the PyPI page, but I'm encountering issues when attempting to add and query documents with metadata. The basic ...
ndrini · 103
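A hedged sketch of adding and filtering on metadata with langchain_postgres; note this uses the PGVector class (metadata in a JSONB column) rather than the newer PGVectorStore with dedicated metadata columns that the question refers to. The connection string, collection, and fields are placeholders:

```python
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_postgres import PGVector

store = PGVector(
    embeddings=OpenAIEmbeddings(),
    collection_name="docs",
    connection="postgresql+psycopg://user:pass@localhost:5432/vectordb",
    use_jsonb=True,
)

store.add_documents([
    Document(page_content="Refund policy text", metadata={"topic": "billing", "year": 2024}),
    Document(page_content="Shipping policy text", metadata={"topic": "shipping", "year": 2023}),
])

# Filter on the JSONB metadata at query time.
results = store.similarity_search("refunds", k=2, filter={"topic": {"$eq": "billing"}})
print(results)
```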
0 votes
1 answer
53 views

I have a .csv dataset consisting of text dialog between two people and the ratings of the related emotions: | Text_Dialog | joy | anger | sad | happy | |--------------------|-----|-------|-----|...
user1319236
0 votes
0 answers
116 views

I have a vector store of documents; each document is a JSON document with features. I'd like to filter the documents according to some criteria. The problem is that some of the documents contain a NOT ...
Gino · 923
0 votes
0 answers
113 views

I have a working RAG code, using Langchain and Milvus. Now I'd like to add the feature to look at the metadata of each of the extracted k documents, and do the following: find the paragraph_id of ...
ArieAI · 512
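A minimal sketch of reading per-document metadata from the retrieved k documents, assuming the langchain_milvus integration and a hypothetical paragraph_id metadata field (the URI and collection name are placeholders):

```python
from langchain_milvus import Milvus
from langchain_openai import OpenAIEmbeddings

vectorstore = Milvus(
    embedding_function=OpenAIEmbeddings(),
    connection_args={"uri": "http://localhost:19530"},
    collection_name="my_rag_collection",
)

# Each retrieved result is a Document whose .metadata carries the fields stored at insert time.
for doc in vectorstore.similarity_search("my question", k=4):
    print(doc.metadata.get("paragraph_id"), doc.metadata.get("source"))
```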
0 votes
1 answer
124 views

This part of the code, specifically the part where rag_chain is invoked causes an error: Retrying langchain_cohere.embeddings.CohereEmbeddings.embed_with_retry.<locals>._embed_with_retry in 4.0 ...
Akshitha Rao
0 votes
0 answers
25 views

I am writing a RAG chatbot that retrieves information from a given list of documents. The documents can be found in a set folder, and they could be either .pdf or .docx. I want to merge all the ...
Gabriel Diaz de Leon
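For the loading-and-merging step, a sketch under the assumption that the LangChain community loaders are acceptable (the folder path is a placeholder):

```python
from pathlib import Path

from langchain_community.document_loaders import Docx2txtLoader, PyPDFLoader

def load_folder(folder: str):
    """Load every .pdf and .docx in the folder into one list of LangChain Documents."""
    docs = []
    for path in Path(folder).iterdir():
        suffix = path.suffix.lower()
        if suffix == ".pdf":
            docs.extend(PyPDFLoader(str(path)).load())
        elif suffix == ".docx":
            docs.extend(Docx2txtLoader(str(path)).load())
    return docs

all_docs = load_folder("./documents")
print(f"Loaded {len(all_docs)} documents")
```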
1 vote
0 answers
240 views

I'm using a vector store that I've created in AWS OpenSearch Serverless. It has one index with the following configuration:
- Engine: faiss
- Precision: Binary
- Dimensions: 1024
- Distance Type: ...
Dixit Tilaji
0 votes
1 answer
1k views

I am using a vectorstore of some documents in Chroma and implemented everything using the LangChain package. Here’s the package I am using: from langchain_chroma import Chroma I need to check if a ...
s.espriz
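One way to check for existing entries, a hedged sketch assuming the langchain_chroma wrapper's get() method (which exposes the underlying collection); the directory and filter values are placeholders:

```python
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

vectorstore = Chroma(persist_directory="./chroma_db", embedding_function=OpenAIEmbeddings())

# get() returns matching ids/metadatas/documents without running a similarity search.
existing = vectorstore.get(where={"source": "report.pdf"})
already_indexed = len(existing["ids"]) > 0
print("already indexed:", already_indexed)
```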
0 votes
1 answer
295 views

I'm trying to run an LLM locally and feed it with the contents of a very large PDF. I have decided to try this via a RAG. For this I wanted to create a vectorstore, which contains the content of the ...
Pantastix · 428
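A minimal end-to-end sketch for building that vector store, assuming PyPDF loading, recursive splitting, a local sentence-transformers embedding model, and FAISS persisted to disk (file names and chunk sizes are placeholders):

```python
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

pages = PyPDFLoader("big_manual.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150).split_documents(pages)

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = FAISS.from_documents(chunks, embeddings)

# Persist the index; reload later with
# FAISS.load_local("faiss_index", embeddings, allow_dangerous_deserialization=True)
vectorstore.save_local("faiss_index")
```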
2 votes
1 answer
233 views

I am trying to upload 2 json files into an assistants vector store using the official openAI python library. I also want to use a specific chunking strategy, and a different one for each file. There ...
user28146142
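A hedged sketch of attaching each file individually so that each gets its own chunking_strategy (attaching one at a time, rather than in a batch, is what allows the per-file strategy). This assumes the openai-python v1 beta namespace for vector stores; file names and token counts are placeholders, and .json may not be among the file types accepted by file_search:

```python
from openai import OpenAI

client = OpenAI()
vs = client.beta.vector_stores.create(name="my-store")

per_file_strategies = [
    ("a.json", {"type": "static", "static": {"max_chunk_size_tokens": 800, "chunk_overlap_tokens": 200}}),
    ("b.json", {"type": "static", "static": {"max_chunk_size_tokens": 300, "chunk_overlap_tokens": 50}}),
]

for path, chunking in per_file_strategies:
    uploaded = client.files.create(file=open(path, "rb"), purpose="assistants")
    client.beta.vector_stores.files.create(
        vector_store_id=vs.id,
        file_id=uploaded.id,
        chunking_strategy=chunking,  # applies to this file only
    )
```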
1 vote
1 answer
364 views

I created and saved a vectorstore using langchain_community.vectorstores SKLearnVectorStore and I can't load it. I created and saved vectorstore as below: from langchain_community.vectorstores import ...
aliarda · 33
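A hedged sketch of the save/reload pattern, assuming the persist_path/serializer arguments: SKLearnVectorStore has no separate load call; constructing it again with the same persist_path, serializer, and embedding model reads the existing file (paths are placeholders):

```python
from langchain_community.vectorstores import SKLearnVectorStore
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# Save
store = SKLearnVectorStore.from_texts(
    ["first chunk", "second chunk"],
    embedding=embeddings,
    persist_path="./store.parquet",
    serializer="parquet",
)
store.persist()

# Reload: same persist_path, serializer, and embedding model
reloaded = SKLearnVectorStore(embedding=embeddings, persist_path="./store.parquet", serializer="parquet")
print(reloaded.similarity_search("first", k=1))
```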
0 votes
1 answer
103 views

How do I query the vector database in a LangChain AgentExecutor invocation before it summarizes the 'Final Answer', after all tools have been called? LangChain AgentExecutor code: llm = ChatOpenAI() tools = [...
chenkun · 75
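One common way to let the agent consult the vector database before writing its Final Answer is to expose the store as a retriever tool; a hedged sketch using a tool-calling agent and an in-memory store as a stand-in for the real database (prompt wording, tool name, and sample text are placeholders):

```python
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.tools.retriever import create_retriever_tool
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

vectorstore = InMemoryVectorStore.from_texts(["Our pricing is $10/month."], OpenAIEmbeddings())
retriever_tool = create_retriever_tool(
    vectorstore.as_retriever(search_kwargs={"k": 4}),
    name="knowledge_base",
    description="Search the indexed documents for facts relevant to the user's question.",
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "Use the knowledge_base tool before answering."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_tool_calling_agent(ChatOpenAI(), [retriever_tool], prompt)
executor = AgentExecutor(agent=agent, tools=[retriever_tool])
print(executor.invoke({"input": "What do the documents say about pricing?"}))
```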
0 votes
1 answer
366 views

I'm currently working with the llama_index Python package and using the llama-index-vector-stores-timescalevector extension to manage my vectors with Timescale. However, I’ve encountered an issue ...
Gianluca Baglini
0 votes
0 answers
171 views

I'm in the midst of developing a GenAI app with private data stored in a Vertex AI vector DB, GPT-4 as the LLM, and LangChain as the orchestrator. When I invoke the vector_store retriever I get the error: OPENSSL_internal:...
Sanjay Nagaraj
1 vote
1 answer
264 views

According to this commit, I now need to connect using RedisAutoConfiguration instead of RedisVectorStoreAutoConfiguration, and I'm curious how (commit comment). Here is the Redis connection ...
이진성
0 votes
0 answers
524 views

I'd like to create a status_checker api endpoint in fastapi to track the creation of chromadb embeddings. Also I'd like to create these embeddings in async mode. Below is the code, but it is giving ...
PADALA LIKHITH RISHI
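A minimal sketch of one way to do the status tracking, assuming FastAPI BackgroundTasks and an in-memory job registry (fine for a single process; a real deployment would use Redis or a database, and the embedding step below is a placeholder):

```python
import uuid

from fastapi import BackgroundTasks, FastAPI

app = FastAPI()
jobs: dict[str, str] = {}  # job_id -> "running" | "done" | "failed"

def build_embeddings(job_id: str) -> None:
    try:
        # ... create the Chroma collection and add the documents here ...
        jobs[job_id] = "done"
    except Exception:
        jobs[job_id] = "failed"

@app.post("/embeddings")
def start_embedding(background_tasks: BackgroundTasks):
    job_id = str(uuid.uuid4())
    jobs[job_id] = "running"
    background_tasks.add_task(build_embeddings, job_id)
    return {"job_id": job_id}

@app.get("/status/{job_id}")
def status_checker(job_id: str):
    return {"job_id": job_id, "status": jobs.get(job_id, "unknown")}
```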
0 votes
1 answer
109 views

I am currently working with SelfQueryRetriever and my data is stored in a ChromaDB server within a collection. While a simple similarity search retrieves answers correctly, using SelfQueryRetriever ...
Arsh · 1
2 votes
0 answers
308 views

The code works for a few iterations of the following: create a vector store, upload an individual file (to ensure a clean and singular context), call OpenAI's "client.beta.threads.runs.create_and_poll"...
Adrian Lo
0 votes
1 answer
237 views

I have such entries in my elasticsearch index: It's unstructured data, in this case the content of a PDF that was split into chunks, then a LangChain document was created for each chunk and pushed to ...
zbeedatm · 679
3 votes
0 answers
347 views

I'm trying to create a single-field vector index in my Firestore database using the gcloud command-line interface (CLI) to enable vector search functionality. However, I keep getting this error: ...
capiono · 3,007
3 votes
1 answer
2k views

I am working with langChain right now and created a FAISS vector store. Since today, my kernel crashes when running a similarity search on my vector store. Has anyone an idea why this is happening? ...
JKS · 33
1 vote
0 answers
180 views

I have uploaded a PDF file and split it into chunks, then applied a tokenizer to each chunk and created embeddings. But when I try to store my embeddings in FAISS, it gives me AttributeError: 'Tensor' ...
Shaista Habib
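A likely cause, stated as an assumption since the question is truncated: FAISS expects float32 NumPy arrays, not torch Tensors, so model outputs need converting before indexing. A minimal sketch with placeholder dimensions:

```python
import faiss
import numpy as np
import torch

embeddings_tensor = torch.randn(10, 384)  # stand-in for the model's chunk embeddings
embeddings_np = embeddings_tensor.detach().cpu().numpy().astype(np.float32)

index = faiss.IndexFlatL2(embeddings_np.shape[1])
index.add(embeddings_np)
print(index.ntotal)
```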
0 votes
1 answer
571 views

Following is some code that I'm working on using Python and langchain to ingest a PDF document into a vector store for consumption as a reference manual for an AI initiative I'm working on. ...
Christian Bannard
1 vote
1 answer
3k views

I am using AWS Bedrock + LangChain + OpenSearch vector store. When I ask a question to the RAG-based chatbot it gives me an error like "failed to create query: Field 'vector_field' is not knn_vector type."...
Vikas Patil
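That error usually means the target field was never mapped as knn_vector; a hedged sketch of creating the index with the expected mapping via opensearch-py (host, index name, dimension, and engine are placeholders and must match the embedding model and cluster setup):

```python
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

client.indices.create(
    index="rag-index",
    body={
        "settings": {"index": {"knn": True}},
        "mappings": {
            "properties": {
                "vector_field": {
                    "type": "knn_vector",
                    "dimension": 1536,  # must equal the embedding dimension
                    "method": {"name": "hnsw", "engine": "faiss", "space_type": "l2"},
                }
            }
        },
    },
)
```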
0 votes
1 answer
488 views

I'm trying to use the OpensearchVectorClient() function in llama-index, but I've run into a problem while calling OpenSearch: "object tuple can't be used in 'await' expression". Here is my code: ...
Alex Camilloni
0 votes
1 answer
2k views

I am working on a chat application in LangChain, Python. The idea is that the user submits some PDF files that the chat model is trained on, and then asks the model questions regarding those documents. ...
usama hussain
1 vote
0 answers
1k views

I have created a vectorstore using Chroma and Langchain with three different collections and stored it in a persistent directory using the following code: def create_embeddings_vectorstorage(splitted):...
Cristi Fernandez
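A hedged sketch of the pattern being described, assuming each collection shares one persist_directory and differs only by collection_name (the splitted dict below is a placeholder for the asker's per-collection chunks):

```python
from langchain_chroma import Chroma
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
splitted = {  # placeholder: collection name -> list of chunked Documents
    "manuals": [Document(page_content="manual chunk")],
    "faqs": [Document(page_content="faq chunk")],
    "policies": [Document(page_content="policy chunk")],
}

stores = {
    name: Chroma.from_documents(docs, embeddings, collection_name=name, persist_directory="./chroma_db")
    for name, docs in splitted.items()
}

# Reload any collection later by name from the same directory.
faqs = Chroma(collection_name="faqs", persist_directory="./chroma_db", embedding_function=embeddings)
```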
1 vote
0 answers
1k views

I'm following AWS Bedrock workshop here - https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/03_QuestionAnswering/01_qa_w_rag_claude.ipynb. Everything is working fine until I go to the ...
user23613391
1 vote
2 answers
643 views

From the LangChain documentation on Per-User Retrieval: When building a retrieval app, you often have to build it with multiple users in mind. This means that you may be storing data not just for one ...
Ailurophile · 3,025
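A minimal sketch of the per-user idea from that documentation page, under the assumption that each chunk is tagged with a user_id in metadata and the retriever filters on it (Chroma shown here; namespace-based stores achieve the same isolation differently):

```python
from langchain_chroma import Chroma
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

vectorstore = Chroma(embedding_function=OpenAIEmbeddings())
vectorstore.add_documents([
    Document(page_content="Alice's private note", metadata={"user_id": "alice"}),
    Document(page_content="Bob's private note", metadata={"user_id": "bob"}),
])

# Scope the retriever to the current user; only their documents can come back.
alice_retriever = vectorstore.as_retriever(search_kwargs={"filter": {"user_id": "alice"}, "k": 4})
print(alice_retriever.invoke("private note"))
```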