Cache that stores things in memory.
```python
InMemoryCache(
    *,
    maxsize: int | None = None,
)
```

| Name | Type | Description |
|---|---|---|
| `maxsize` | `int \| None` | The maximum number of items to store in the cache. If `None`, the cache has no maximum size. If the cache exceeds the maximum size, the oldest items are removed. Default: `None` |

Example:

```python
from langchain_core.caches import InMemoryCache
from langchain_core.outputs import Generation

# Initialize cache
cache = InMemoryCache()

# Update cache
cache.update(
    prompt="What is the capital of France?",
    llm_string="model='gpt-5.4-mini'",
    return_val=[Generation(text="Paris")],
)

# Lookup cache
result = cache.lookup(
    prompt="What is the capital of France?",
    llm_string="model='gpt-5.4-mini'",
)
# result is [Generation(text="Paris")]
```
| Method | Description |
|---|---|
| `lookup` | Look up based on prompt and `llm_string`. |
| `update` | Update cache based on prompt and `llm_string`. |
| `clear` | Clear cache. |
| `alookup` | Async look up based on prompt and `llm_string`. |
| `aupdate` | Async update cache based on prompt and `llm_string`. |
| `aclear` | Async clear cache. |