LangChain Reference
    Class · Since v0.2

    InMemoryVectorStore

    In-memory vector store implementation.

    Stores documents and their embeddings in a dictionary, and computes cosine similarity for search using NumPy.
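    For reference, the cosine score the store ranks by can be sketched in pure Python (an illustrative re-derivation, not the library's NumPy implementation):

    ```python
    import math

    def cosine_similarity(a: list[float], b: list[float]) -> float:
        """Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1]."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        if norm_a == 0.0 or norm_b == 0.0:
            return 0.0  # degenerate zero vectors get a neutral score
        return dot / (norm_a * norm_b)
    ```

    A score of 1.0 means the query and document embeddings point in the same direction; 0.0 means they are orthogonal.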

    InMemoryVectorStore(
        self,
        embedding: Embeddings,
    )

    Bases

    VectorStore

    Setup:

    Install langchain-core.

    pip install -U langchain-core

    Key init args — indexing params:

    • embedding: Embeddings Embedding function to use.

    Instantiate:

    from langchain_core.vectorstores import InMemoryVectorStore
    from langchain_openai import OpenAIEmbeddings
    
    vector_store = InMemoryVectorStore(OpenAIEmbeddings())

    Add Documents:

    from langchain_core.documents import Document
    
    document_1 = Document(id="1", page_content="foo", metadata={"baz": "bar"})
    document_2 = Document(id="2", page_content="thud", metadata={"bar": "baz"})
    document_3 = Document(id="3", page_content="i will be deleted :(")
    
    documents = [document_1, document_2, document_3]
    vector_store.add_documents(documents=documents)
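    Conceptually, add_documents embeds each document's page_content and indexes a record per document id, with the keys shown in the inspection example below. A minimal sketch of that bookkeeping (illustrative only; embed_fn stands in for the configured Embeddings):

    ```python
    def add_documents_sketch(store: dict, docs: list[dict], embed_fn) -> list[str]:
        """Embed each doc and index it by id; returns the stored ids."""
        ids = []
        for doc in docs:
            store[doc["id"]] = {
                "id": doc["id"],
                "vector": embed_fn(doc["page_content"]),
                "text": doc["page_content"],
                "metadata": doc.get("metadata", {}),
            }
            ids.append(doc["id"])
        return ids
    ```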

    Inspect documents:

    top_n = 10
    for index, (id, doc) in enumerate(vector_store.store.items()):
        if index < top_n:
            # docs have keys 'id', 'vector', 'text', 'metadata'
            print(f"{id}: {doc['text']}")
        else:
            break

    Delete Documents:

    vector_store.delete(ids=["3"])

    Search:

    results = vector_store.similarity_search(query="thud", k=1)
    for doc in results:
        print(f"* {doc.page_content} [{doc.metadata}]")
    * thud [{'bar': 'baz'}]

    Search with filter:

    def _filter_function(doc: Document) -> bool:
        return doc.metadata.get("bar") == "baz"
    
    results = vector_store.similarity_search(
        query="thud", k=1, filter=_filter_function
    )
    for doc in results:
        print(f"* {doc.page_content} [{doc.metadata}]")
    * thud [{'bar': 'baz'}]
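    The filter argument is a predicate over each candidate Document: only documents for which it returns True participate in ranking. The semantics can be sketched standalone (Doc and search_with_filter below are illustrative stand-ins, not library code):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Doc:
        """Stand-in for langchain_core.documents.Document."""
        page_content: str
        metadata: dict = field(default_factory=dict)

    def search_with_filter(docs, scores, k, filter_fn=None):
        """Keep docs passing the predicate, then return the top-k by score."""
        candidates = [
            (doc, score)
            for doc, score in zip(docs, scores)
            if filter_fn is None or filter_fn(doc)
        ]
        candidates.sort(key=lambda pair: pair[1], reverse=True)
        return candidates[:k]
    ```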

    Search with score:

    results = vector_store.similarity_search_with_score(query="qux", k=1)
    for doc, score in results:
        print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
    * [SIM=0.832268] foo [{'baz': 'bar'}]

    Async:

    # add documents
    # await vector_store.aadd_documents(documents=documents)
    
    # delete documents
    # await vector_store.adelete(ids=["3"])
    
    # search
    # results = await vector_store.asimilarity_search(query="thud", k=1)
    
    # search with score
    results = await vector_store.asimilarity_search_with_score(query="qux", k=1)
    for doc, score in results:
        print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
    * [SIM=0.832268] foo [{'baz': 'bar'}]
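    Every a*-prefixed method must be awaited. Where no native async implementation exists, the langchain-core base classes generally delegate to the sync method on a worker thread; that delegation pattern looks roughly like this (the function names here are illustrative):

    ```python
    import asyncio

    def similarity_search_sync(query: str, k: int) -> list[str]:
        # placeholder for the blocking search
        return [query] * k

    async def asimilarity_search_sketch(query: str, k: int) -> list[str]:
        # run the blocking call on a worker thread so the event loop stays free
        return await asyncio.to_thread(similarity_search_sync, query, k)

    print(asyncio.run(asimilarity_search_sketch("thud", 1)))
    ```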

    Use as Retriever:

    retriever = vector_store.as_retriever(
        search_type="mmr",
        search_kwargs={"k": 1, "fetch_k": 2, "lambda_mult": 0.5},
    )
    retriever.invoke("thud")
    [Document(id='2', metadata={'bar': 'baz'}, page_content='thud')]
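    With search_type="mmr", the retriever fetches the fetch_k nearest candidates and then picks k of them by maximal marginal relevance: lambda_mult trades relevance to the query (1.0) against diversity among already-picked results (0.0). The greedy selection loop can be sketched in pure Python (illustrative; the library computes this over NumPy arrays):

    ```python
    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def mmr_select(query, candidates, k, lambda_mult=0.5):
        """Greedily pick k candidate indices balancing relevance and diversity."""
        selected: list[int] = []
        remaining = list(range(len(candidates)))
        while remaining and len(selected) < k:
            best, best_score = remaining[0], -math.inf
            for i in remaining:
                relevance = cosine(query, candidates[i])
                redundancy = max(
                    (cosine(candidates[i], candidates[j]) for j in selected),
                    default=0.0,
                )
                score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
                if score > best_score:
                    best, best_score = i, score
            selected.append(best)
            remaining.remove(best)
        return selected
    ```

    Higher lambda_mult favors the closest matches; lower values push later picks away from what has already been selected.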

    Used in Docs

    • Custom workflow
    • Evaluate a complex agent
    • Evaluate a RAG application
    • AIMlAPIEmbeddings integration
    • Amazon memorydb integration
    (11 more not shown)

    Parameters

    • embedding (Embeddings, required): Embedding function to use.

    Constructors

    __init__
    • embedding: Embeddings

    Attributes

    • store: dict[str, dict[str, Any]]
    • embedding: Embeddings
    • embeddings: Embeddings

    Methods

    • delete
    • adelete
    • add_documents
    • aadd_documents
    • get_by_ids: Get documents by their ids.
    • aget_by_ids: Async get documents by their ids.
    • similarity_search_with_score_by_vector: Search for the most similar documents to the given embedding.
    • similarity_search_with_score
    • asimilarity_search_with_score
    • similarity_search_by_vector
    • asimilarity_search_by_vector
    • similarity_search
    • asimilarity_search
    • max_marginal_relevance_search_by_vector
    • max_marginal_relevance_search
    • amax_marginal_relevance_search
    • from_texts
    • afrom_texts
    • load: Load a vector store from a file.
    • dump: Dump the vector store to a file.
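    dump and load round-trip the store through a file. The actual on-disk format is not documented on this page; a hypothetical JSON round-trip over a store-shaped dict (ids mapped to id/vector/text/metadata records) can be sketched as:

    ```python
    import json
    from pathlib import Path

    def dump_store(store: dict, path: str) -> None:
        """Write a store-shaped dict to disk as JSON (hypothetical format)."""
        Path(path).write_text(json.dumps(store))

    def load_store(path: str) -> dict:
        """Read a previously dumped store back into memory."""
        return json.loads(Path(path).read_text())
    ```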

    Inherited from VectorStore

    Methods

    • add_texts: Run more texts through the embeddings and add to the VectorStore.
    • aadd_texts: Async run more texts through the embeddings and add to the VectorStore.
    • search: Return docs most similar to query using a specified search type.
    • asearch: Async return docs most similar to query using a specified search type.
    • similarity_search_with_relevance_scores: Return docs and relevance scores in the range [0, 1].
    • asimilarity_search_with_relevance_scores: Async return docs and relevance scores in the range [0, 1].
    • amax_marginal_relevance_search_by_vector: Async return docs selected using the maximal marginal relevance.
    • from_documents: Return VectorStore initialized from documents and embeddings.
    • afrom_documents: Async return VectorStore initialized from documents and embeddings.
    • as_retriever: Return VectorStoreRetriever initialized from this VectorStore.
