hot

package module
v0.10.0

Published: Sep 28, 2025 License: MIT Imports: 20 Imported by: 1

README

HOT - Blazing Fast In-Memory Caching for Go

HOT stands for Hot Object Tracker - a feature-complete, blazing-fast caching library for Go applications.

🚀 Features

  • High Performance: Optimized for speed
  • 🔄 Multiple Eviction Policies: LRU, LFU, TinyLFU, W-TinyLFU, S3FIFO, ARC, 2Q, and FIFO algorithms
  • TTL with Jitter: Prevent cache stampedes with exponential distribution
  • 🔄 Stale-While-Revalidate: Serve stale data while refreshing in background
  • Missing Key Caching: Cache negative results to avoid repeated lookups
  • 🍕 Sharded Cache: Scale horizontally with multiple cache shards
  • 🔒 Thread Safety: Optional locking with zero-cost when disabled
  • 🔗 Loader Chains: Chain multiple data sources with in-flight deduplication
  • 🌶️ Cache Warmup: Preload frequently accessed data
  • 📦 Batch Operations: Efficient bulk operations for better performance
  • 🧩 Composable Design: Mix and match caching strategies
  • 📝 Copy-on-Read/Write: Optional value copying for thread safety
  • 📊 Metrics Collection: Built-in statistics and monitoring
  • 💫 Go Generics: Type-safe caching with compile-time guarantees
  • 🏡 Bring your own cache: Pluggable APIs and highly customizable

🏎️ Performance

HOT is optimized for high-performance scenarios:

  • Cheap clock lookup (2.5x faster than time.Now())
  • Zero-allocation operations where possible
  • Lock-free operations when thread safety is disabled
  • Batch operations for better throughput
  • Sharded architecture for high concurrency

📦 Installation

go get github.com/samber/hot

This library is v0 and follows SemVer strictly.

Some breaking changes might be made to exported APIs before v1.0.0.

🤠 Getting started

Let's start with a simple LRU cache and a 10-minute TTL:

import (
    "time"

    "github.com/samber/hot"
)

cache := hot.NewHotCache[string, int](hot.LRU, 1_000_000).
    WithTTL(10 * time.Minute).
    Build()

cache.Set("hello", 42)

values, missing, _ := cache.GetMany([]string{"bar", "baz", "hello"})
// values: {"hello": 42}
// missing: ["bar", "baz"]

🍱 API Reference

GoDoc: https://godoc.org/github.com/samber/hot

Configuration Options

TTL and expiration settings:

// Set default time-to-live for all cache entries
WithTTL(ttl time.Duration)
// Add random jitter to TTL to prevent cache stampedes
WithJitter(lambda float64, upperBound time.Duration)
// Enable background cleanup of expired items
WithJanitor()

Background cache revalidation (stale-while-revalidate pattern):

// Keep serving stale data while refreshing in background
WithRevalidation(stale time.Duration, loaders ...hot.Loader[K, V])
// Control behavior when revalidation fails (KeepOnError/DropOnError)
WithRevalidationErrorPolicy(policy hot.RevalidationErrorPolicy)

Missing key caching - prevents repeated lookups for non-existent keys:

// Use separate cache for missing keys (prevents main cache pollution)
WithMissingCache(algorithm hot.EvictionAlgorithm, capacity int)
// Share missing key cache with main cache (good for low missing rate)
WithMissingSharedCache()

Data source integration:

// Set chain of loaders for cache misses (primary, fallback, etc.)
WithLoaders(loaders ...hot.Loader[K, V])

Thread safety configuration:

// Disable mutex for single-threaded applications (performance boost)
WithoutLocking()
// Copy values when reading (prevents external modification)
WithCopyOnRead(copier func(V) V)
// Copy values when writing (ensures cache owns the data)
WithCopyOnWrite(copier func(V) V)

Sharding for high concurrency scenarios:

// Split cache into multiple shards to reduce lock contention
WithSharding(shards uint64, hasher sharded.Hasher[K])

Event callbacks and hooks:

// Called when items are evicted (LRU/LFU/TinyLFU/W-TinyLFU/S3FIFO/expiration)
WithEvictionCallback(callback func(key K, value V))
// Preload cache on startup with data from loader
WithWarmUp(loader func() (map[K]V, []K, error))
// Preload with timeout protection for slow data sources
WithWarmUpWithTimeout(timeout time.Duration, loader func() (map[K]V, []K, error))

Monitoring and metrics:

// Enable Prometheus metrics collection with the specified cache name
WithPrometheusMetrics(cacheName string)

Eviction algorithms:

hot.LRU
hot.LFU
hot.TinyLFU
hot.WTinyLFU
hot.S3FIFO
hot.TwoQueue
hot.ARC
hot.FIFO

Revalidation policies:

hot.KeepOnError
hot.DropOnError
Core Methods

Basic operations:

// Store a key-value pair in the cache
cache.Set(key K, value V)
// Store with custom time-to-live (overrides default TTL)
cache.SetWithTTL(key K, value V, ttl time.Duration)
// Retrieve value by key; returns found status and any loader error
cache.Get(key K) -> (value V, found bool, err error)
// Check if key exists in cache (no side effects)
cache.Has(key K) -> bool
// Remove key from cache, returns true if key existed
cache.Delete(key K) -> bool

Batch operations (more efficient for multiple items):

// Store multiple key-value pairs atomically
cache.SetMany(items map[K]V)
// Retrieve multiple values; returns found items, missing keys, and any loader error
cache.GetMany(keys []K) -> (found map[K]V, missing []K, err error)
// Check existence of multiple keys, returns map of key->exists
cache.HasMany(keys []K) -> map[K]bool
// Remove multiple keys, returns map of key->was_deleted
cache.DeleteMany(keys []K) -> map[K]bool

Inspection methods (no side effects on cache state):

// Get value without updating access time or LRU position
cache.Peek(key K) -> (value V, found bool)
// Get multiple values without side effects
cache.PeekMany(keys []K) -> (found map[K]V, missing []K)
// Get all keys in cache (order not guaranteed)
cache.Keys() -> []K
// Get all values in cache (order not guaranteed)
cache.Values() -> []V
// Get all keys and values in cache (order not guaranteed)
cache.All() -> map[K]V
// Iterate over all key-value pairs, return false to stop
cache.Range(fn func(key K, value V) bool)
// Get current number of items in cache
cache.Len() -> int
// Get (main_cache_capacity, missing_cache_capacity)
cache.Capacity() -> (main int, missing int)
// Get eviction algorithm names for the main and missing caches
cache.Algorithm() -> (mainAlgorithm string, missingAlgorithm string)

Cache management and lifecycle:

// Remove all items from cache immediately
cache.Purge()
// Preload cache with data from loader function
cache.WarmUp(loader hot.Loader[K, V]) -> error
// Start background cleanup of expired items
cache.Janitor()
// Stop background janitor process
cache.StopJanitor()
Loader Interface
// Loader function signature for fetching data from external sources
// Called when cache misses occur, with automatic deduplication of concurrent requests
type Loader[K comparable, V any] func(keys []K) (found map[K]V, err error)

// Example:
func userLoader(keys []string) (found map[string]*User, err error) {
    // Fetch users from database
    // Return map of found users (key -> user object)
    // Return empty map if no users found (not an error)
    // Return error if database query fails
    return users, nil
}
Shard partitioner
// Hasher is responsible for generating an unsigned, 64-bit hash of the provided key.
// Hasher should minimize collisions. For best performance, a fast function is preferable.
type Hasher[K any] func(key K) uint64

// Example:
func hash(key string) uint64 {
    hasher := fnv.New64a()
    hasher.Write([]byte(key))
    return hasher.Sum64()
}

🏛️ Architecture

This project is split into multiple layers to respect the separation of concerns.

Each cache layer implements the pkg/base.InMemoryCache[K, V] interface. Stacking multiple layers has a small cost (~1ns per call) but offers great customization.

We highly recommend using hot.HotCache[K, V] instead of lower layers.

Example:

┌─────────────────────────────────────────────────────────────┐
│                    hot.HotCache[K, V]                       │
│              (High-level, feature-complete)                 │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│              pkg/sharded.ShardedInMemoryCache               │
│                    (Sharding layer)                         │
└─────────────────────────────────────────────────────────────┘
                    │    │    │    │    │
                    ▼    ▼    ▼    ▼    ▼
┌─────────────────────────────────────────────────────────────┐
│              pkg/metrics.InstrumentedCache[K, V]            │
│                   (Metric collection layer)                 │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│              pkg/safe.SafeInMemoryCache[K, V]               │
│                   (Thread safety layer)                     │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│              pkg/lru.LRUCache[K, V]                         │
│              pkg/lfu.LFUCache[K, V]                         │
│              pkg/tinylfu.TinyLFUCache[K, V]                 │
│              pkg/wtinylfu.WTinyLFUCache[K, V]               │
│              pkg/s3fifo.S3FIFOCache[K, V]                   │
│              pkg/arc.ARCCache[K, V]                         │
│              pkg/fifo.FIFOCache[K, V]                       │
│              pkg/twoqueue.TwoQueueCache[K, V]               │
│                   (Eviction policies)                       │
└─────────────────────────────────────────────────────────────┘
Eviction policies

This project provides multiple eviction policies. Each implements the pkg/base.InMemoryCache[K, V] interface.

They are not protected against concurrent access. If safety is required, wrap them in pkg/safe.SafeInMemoryCache[K comparable, V any].

Packages:

  • pkg/lru
  • pkg/lfu
  • pkg/tinylfu
  • pkg/wtinylfu
  • pkg/s3fifo
  • pkg/twoqueue
  • pkg/arc
  • pkg/fifo

Example:

cache := lru.NewLRUCache[string, *User](100_000)
Concurrent access

The hot.HotCache[K, V] offers protection against concurrent access by default, but in some cases unnecessary locking just slows a program down.

Low-level cache layers are not protected by default. Use the following encapsulation to bring safety:

import (
	"github.com/samber/hot/pkg/lru"
	"github.com/samber/hot/pkg/safe"
)

cache := safe.NewSafeInMemoryCache(
    lru.NewLRUCache[string, *User](100_000),
)
Sharded cache

A sharded cache might be useful in two scenarios:

  • highly concurrent application slowed down by cache locking -> 1 lock per shard instead of 1 global lock
  • highly parallel application with no concurrency -> no lock

The sharding key must be cheap to compute and should distribute keys evenly across shards. The hashing function must have the signature func(k K) uint64.

A sharded cache can be created via hot.HotCache[K, V] or using a low-level layer:

import (
    "hash/fnv"

    "github.com/samber/hot/pkg/base"
    "github.com/samber/hot/pkg/lru"
    "github.com/samber/hot/pkg/safe"
    "github.com/samber/hot/pkg/sharded"
)

cache := sharded.NewShardedInMemoryCache(
    100, // Number of shards
    func() base.InMemoryCache[string, *User] {
        // Cache builder for each shard
        return safe.NewSafeInMemoryCache(
            lru.NewLRUCache[string, *User](100_000),
        )
    },
    func(key string) uint64 {
        // Hash function
        h := fnv.New64a()
        h.Write([]byte(key))
        return h.Sum64()
    },
)
Missing key caching

Instead of calling the loader chain every time an invalid key is requested, a "missing cache" can be enabled. Note that it won't protect your app against a DDoS attack using high-cardinality keys.

If the missing keys are infrequent, sharing the missing cache with the main cache might be reasonable:

import "github.com/samber/hot"

cache := hot.NewHotCache[string, int](hot.LRU, 100_000).
    WithMissingSharedCache().
    Build()

If the missing keys are frequent, use a dedicated cache to prevent pollution of the main cache:

import "github.com/samber/hot"

cache := hot.NewHotCache[string, int](hot.LRU, 100_000).
    WithMissingCache(hot.LFU, 50_000).
    Build()

🪄 Examples

Simple LRU cache
import "github.com/samber/hot"

// Available eviction policies: hot.LRU, hot.LFU, hot.TinyLFU, hot.WTinyLFU, hot.S3FIFO, hot.TwoQueue, hot.ARC, hot.FIFO
// Capacity: 100k keys/values
cache := hot.NewHotCache[string, int](hot.LRU, 100_000).
    Build()

cache.Set("hello", 42)
cache.SetMany(map[string]int{"foo": 1, "bar": 2})

values, missing, _ := cache.GetMany([]string{"bar", "baz", "hello"})
// values: {"bar": 2, "hello": 42}
// missing: ["baz"]

value, found, _ := cache.Get("foo")
// value: 1
// found: true
Error Handling Patterns
// Handle cache operations with proper error checking
value, found, err := cache.Get("key")
if err != nil {
    // Handle loader errors (database connection, network issues, etc.)
    log.Printf("Cache get error: %v", err)
    return
}
if !found {
    // Key doesn't exist in cache and wasn't found by loaders
    log.Printf("Key not found: %s", "key")
    return
}
// Use value safely
fmt.Printf("Value: %v", value)

// Batch operations with error handling
values, missing, err := cache.GetMany([]string{"key1", "key2", "key3"})
if err != nil {
    log.Printf("Cache get error: %v", err)
    return
}
if len(missing) > 0 {
    log.Printf("Missing keys: %v", missing)
}
// Process found values
for key, value := range values {
    fmt.Printf("%s: %v", key, value)
}
Cache with remote data source

If a value is not available in the in-memory cache, it will be fetched from a database or any data source.

Concurrent calls to loaders are deduplicated by key.

import "github.com/samber/hot"

cache := hot.NewHotCache[string, *User](hot.LRU, 100_000).
    WithLoaders(func(keys []string) (found map[string]*User, err error) {
        rows, err := db.Query("SELECT * FROM users WHERE id IN (?)", keys)
        // ...
        return users, err
    }).
    Build()

user, found, err := cache.Get("user-123")
// might fail if "user-123" is not in cache and loader returns error

// get or create
user, found, err := cache.GetWithLoaders(
    "user-123",
    func(keys []string) (found map[string]*User, err error) {
        rows, err := db.Query("SELECT * FROM users WHERE id IN (?)", keys)
        // ...
        return users, err
    },
    func(keys []string) (found map[string]*User, err error) {
        rows, err := db.Query("INSERT INTO users (id, email) VALUES (?, ?)", id, email)
        // ...
        return users, err
    },
)
// either `err` is not nil, or `found` is true

// missing value vs nil value
user, found, err := cache.GetWithLoaders(
    "user-123",
    func(keys []string) (found map[string]*User, err error) {
        // value could not be found
        return map[string]*User{}, nil

       // or

        // value exists but is nil
        return map[string]*User{"user-123": nil}, nil
    },
)
Cache with expiration
import "github.com/samber/hot"

cache := hot.NewHotCache[string, int](hot.LRU, 100_000).
    WithTTL(1 * time.Minute).      // items will expire after 1 minute
    WithJitter(2, 30*time.Second). // optional: randomizes the TTL with an exponential distribution in the range [0, +30s)
    WithJanitor().                 // optional: a background job will purge expired keys periodically
    Build()

cache.SetWithTTL("foo", 42, 10*time.Second) // shorter TTL for "foo" key

With cache revalidation:

loader := func(keys []string) (found map[string]*User, err error) {
    rows, err := db.Query("SELECT * FROM users WHERE id IN (?)", keys)
    // ...
    return users, err
}

cache := hot.NewHotCache[string, *User](hot.LRU, 100_000).
    WithTTL(1 * time.Minute).
    // Keep serving stale entries for 5 more seconds while refreshing values in the background.
    // Keys that are not fetched during this interval are dropped.
    // A timeout or error in the loader drops the keys as well.
    WithRevalidation(5 * time.Second, loader).
    // On revalidation error, the cache entries are either kept or dropped.
    // Optional (default: drop)
    WithRevalidationErrorPolicy(hot.KeepOnError).
    Build()

If WithRevalidation is used without loaders, the loaders provided in WithLoaders() or GetWithLoaders() are used instead.

👀 Observability

HOT provides comprehensive Prometheus metrics for monitoring cache performance and behavior. Enable metrics by calling WithPrometheusMetrics() with a cache name:

import (
    "log"
    "net/http"
    "time"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
    "github.com/samber/hot"
)

// Create cache with Prometheus metrics
cache := hot.NewHotCache[string, string](hot.LRU, 1000).
    WithTTL(5*time.Minute).
    WithJitter(0.5, 10*time.Second).
    WithRevalidation(10*time.Second).
    WithRevalidationErrorPolicy(hot.KeepOnError).
    WithPrometheusMetrics("users-by-id").
    WithMissingCache(hot.ARC, 1000).
    Build()

// Register the cache metrics with Prometheus
err := prometheus.Register(cache)
if err != nil {
    log.Fatalf("Failed to register metrics: %v", err)
}
defer prometheus.Unregister(cache)

// Set up HTTP server to expose metrics
http.Handle("/metrics", promhttp.Handler())
http.ListenAndServe(":8080", nil)
Available Metrics

Counters:

  • hot_insertion_total - Total number of items inserted into the cache
  • hot_eviction_total{reason} - Total number of items evicted from the cache (by reason)
  • hot_hit_total - Total number of cache hits
  • hot_miss_total - Total number of cache misses

Gauges:

  • hot_size_bytes - Current size of the cache in bytes (including keys and values)
  • hot_length - Current number of items in the cache

Configuration Gauges:

  • hot_settings_capacity - Maximum number of items the cache can hold
  • hot_settings_algorithm - Eviction algorithm type (0=lru, 1=lfu, 2=arc, 3=2q, 4=fifo, 5=tinylfu, 6=wtinylfu, 7=s3fifo)
  • hot_settings_ttl_seconds - Time-to-live duration in seconds (if set)
  • hot_settings_jitter_lambda - Jitter lambda parameter for TTL randomization (if set)
  • hot_settings_jitter_upper_bound_seconds - Jitter upper bound duration in seconds (if set)
  • hot_settings_stale_seconds - Stale duration in seconds (if set)
  • hot_settings_missing_capacity - Maximum number of missing keys the cache can hold (if set)
Example Prometheus Queries
# Cache hit ratio
rate(hot_hit_total[5m]) / (rate(hot_hit_total[5m]) + rate(hot_miss_total[5m]))

# Eviction rate by reason
rate(hot_eviction_total[5m])

# Cache size in MB
hot_size_bytes / 1024 / 1024

# Cache utilization percentage
hot_length / hot_settings_capacity * 100

# Insertion rate
rate(hot_insertion_total[5m])

🏎️ Benchmark

TODO

🤝 Contributing

Don't hesitate ;)

# Install some dev dependencies
make tools

# Run tests
make test
# or
make watch-test

👤 Contributors

Contributors

💫 Show your support

Give a ⭐️ if this project helped you!

GitHub Sponsors

📝 License

Copyright © 2024 Samuel Berthe.

This project is MIT licensed.

Documentation

Index

Constants

View Source
const (
	DropOnError revalidationErrorPolicy = iota
	KeepOnError
)

Variables

This section is empty.

Functions

This section is empty.

Types

type EvictionAlgorithm added in v0.2.0

type EvictionAlgorithm string

EvictionAlgorithm represents the cache eviction policy to use.

const (
	LRU      EvictionAlgorithm = "lru"
	LFU      EvictionAlgorithm = "lfu"
	TinyLFU  EvictionAlgorithm = "tinylfu"
	WTinyLFU EvictionAlgorithm = "wtinylfu"
	S3FIFO   EvictionAlgorithm = "s3fifo"
	TwoQueue EvictionAlgorithm = "2q"
	ARC      EvictionAlgorithm = "arc"
	FIFO     EvictionAlgorithm = "fifo"
)

type HotCache

type HotCache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

HotCache is the main cache implementation that provides all caching functionality. It supports various eviction policies, TTL, revalidation, and missing key caching.

func (*HotCache[K, V]) Algorithm

func (c *HotCache[K, V]) Algorithm() (mainCacheAlgorithm string, missingCacheAlgorithm string)

Algorithm returns the eviction algorithm names for the main cache and missing cache. If missing cache is shared or not enabled, missingCacheAlgorithm will be empty.

func (*HotCache[K, V]) All added in v0.9.0

func (c *HotCache[K, V]) All() map[K]V

All returns all key-value pairs in the cache.

func (*HotCache[K, V]) Capacity

func (c *HotCache[K, V]) Capacity() (mainCacheCapacity int, missingCacheCapacity int)

Capacity returns the capacity of the main cache and missing cache. If missing cache is shared or not enabled, missingCacheCapacity will be 0.

func (*HotCache[K, V]) Collect added in v0.7.0

func (c *HotCache[K, V]) Collect(ch chan<- prometheus.Metric)

Collect implements the prometheus.Collector interface.

func (*HotCache[K, V]) Delete

func (c *HotCache[K, V]) Delete(key K) bool

Delete removes a key from the cache. Returns true if the key was found and removed, false otherwise.

func (*HotCache[K, V]) DeleteMany

func (c *HotCache[K, V]) DeleteMany(keys []K) map[K]bool

DeleteMany removes multiple keys from the cache in a single operation. Returns a map where keys are the input keys and values indicate whether the key was found and removed.

func (*HotCache[K, V]) Describe added in v0.7.0

func (c *HotCache[K, V]) Describe(ch chan<- *prometheus.Desc)

Describe implements the prometheus.Collector interface.

func (*HotCache[K, V]) Get

func (c *HotCache[K, V]) Get(key K) (value V, found bool, err error)

Get returns a value from the cache, a boolean indicating whether the key was found, and an error when loaders fail. Uses the default loaders configured for the cache.

func (*HotCache[K, V]) GetMany

func (c *HotCache[K, V]) GetMany(keys []K) (values map[K]V, missing []K, err error)

GetMany returns multiple values from the cache, a slice of missing keys, and an error when loaders fail. Uses the default loaders configured for the cache.

func (*HotCache[K, V]) GetManyWithLoaders added in v0.3.2

func (c *HotCache[K, V]) GetManyWithLoaders(keys []K, loaders ...Loader[K, V]) (values map[K]V, missing []K, err error)

GetManyWithLoaders returns multiple values from the cache, a slice of missing keys, and an error when loaders fail. Uses the provided loaders for cache misses. Concurrent calls for the same keys are deduplicated using singleflight.

func (*HotCache[K, V]) GetWithLoaders added in v0.3.2

func (c *HotCache[K, V]) GetWithLoaders(key K, loaders ...Loader[K, V]) (value V, found bool, err error)

GetWithLoaders returns a value from the cache, a boolean indicating whether the key was found, and an error when loaders fail. Uses the provided loaders for cache misses. Concurrent calls for the same key are deduplicated using singleflight.

func (*HotCache[K, V]) Has

func (c *HotCache[K, V]) Has(key K) bool

Has checks if a key exists in the cache and has a valid value. Missing values (cached as missing) are not considered valid, even if cached.

func (*HotCache[K, V]) HasMany

func (c *HotCache[K, V]) HasMany(keys []K) map[K]bool

HasMany checks if multiple keys exist in the cache and have valid values. Missing values (cached as missing) are not considered valid, even if cached. Returns a map where keys are the input keys and values indicate whether the key exists and has a value.

func (*HotCache[K, V]) Janitor

func (c *HotCache[K, V]) Janitor()

Janitor starts a background goroutine that periodically removes expired items from the cache. The janitor runs until StopJanitor() is called or the cache is garbage collected. This method is safe to call multiple times, but only the first call will start the janitor.

func (*HotCache[K, V]) Keys

func (c *HotCache[K, V]) Keys() []K

Keys returns all keys in the cache that have valid values. Missing keys are not included in the result.

func (*HotCache[K, V]) Len

func (c *HotCache[K, V]) Len() int

Len returns the number of items in the main cache. This includes both valid values and missing keys if using shared missing cache.

func (*HotCache[K, V]) MustGet added in v0.2.0

func (c *HotCache[K, V]) MustGet(key K) (value V, found bool)

MustGet returns a value from the cache and a boolean indicating whether the key was found. Panics when loaders fail. Uses the default loaders configured for the cache.

func (*HotCache[K, V]) MustGetMany added in v0.2.0

func (c *HotCache[K, V]) MustGetMany(keys []K) (values map[K]V, missing []K)

MustGetMany returns multiple values from the cache and a slice of missing keys. Panics when loaders fail. Uses the default loaders configured for the cache.

func (*HotCache[K, V]) MustGetManyWithLoaders added in v0.3.2

func (c *HotCache[K, V]) MustGetManyWithLoaders(keys []K, loaders ...Loader[K, V]) (values map[K]V, missing []K)

MustGetManyWithLoaders returns multiple values from the cache and a slice of missing keys. Panics when loaders fail. Uses the provided loaders for cache misses.

func (*HotCache[K, V]) MustGetWithLoaders added in v0.3.2

func (c *HotCache[K, V]) MustGetWithLoaders(key K, loaders ...Loader[K, V]) (value V, found bool)

MustGetWithLoaders returns a value from the cache and a boolean indicating whether the key was found. Panics when loaders fail. Uses the provided loaders for cache misses.

func (*HotCache[K, V]) Peek

func (c *HotCache[K, V]) Peek(key K) (value V, ok bool)

Peek returns a value from the cache without checking expiration or calling loaders/revalidation. Missing values are not returned, even if cached. This is useful for inspection without side effects.

func (*HotCache[K, V]) PeekMany

func (c *HotCache[K, V]) PeekMany(keys []K) (map[K]V, []K)

PeekMany returns multiple values from the cache without checking expiration or calling loaders/revalidation. Missing values are not returned, even if cached. This is useful for inspection without side effects.

func (*HotCache[K, V]) Purge

func (c *HotCache[K, V]) Purge()

Purge removes all keys and values from the cache. This operation clears both the main cache and the missing cache if enabled.

func (*HotCache[K, V]) Range

func (c *HotCache[K, V]) Range(f func(K, V) bool)

Range iterates over all key-value pairs in the cache and calls the provided function for each pair. The iteration stops if the function returns false. Missing values are not included. @TODO: loop over missingCache? Use a different callback?

func (*HotCache[K, V]) Set

func (c *HotCache[K, V]) Set(key K, v V)

Set adds a value to the cache. If the key already exists, its value is updated. Uses the default TTL configured for the cache.

func (*HotCache[K, V]) SetMany

func (c *HotCache[K, V]) SetMany(items map[K]V)

SetMany adds multiple values to the cache in a single operation. If keys already exist, their values are updated. Uses the default TTL configured for the cache.

func (*HotCache[K, V]) SetManyWithTTL

func (c *HotCache[K, V]) SetManyWithTTL(items map[K]V, ttl time.Duration)

SetManyWithTTL adds multiple values to the cache with a specific TTL duration. If keys already exist, their values are updated.

func (*HotCache[K, V]) SetMissing

func (c *HotCache[K, V]) SetMissing(key K)

SetMissing adds a key to the missing cache to prevent repeated lookups for non-existent keys. If the key already exists, its value is dropped. Uses the default TTL configured for the cache. Panics if missing cache is not enabled.

func (*HotCache[K, V]) SetMissingMany

func (c *HotCache[K, V]) SetMissingMany(missingKeys []K)

SetMissingMany adds multiple keys to the missing cache in a single operation. If keys already exist, their values are dropped. Uses the default TTL configured for the cache. Panics if missing cache is not enabled.

func (*HotCache[K, V]) SetMissingManyWithTTL

func (c *HotCache[K, V]) SetMissingManyWithTTL(missingKeys []K, ttl time.Duration)

SetMissingManyWithTTL adds multiple keys to the missing cache with a specific TTL duration. If keys already exist, their values are dropped. Panics if missing cache is not enabled.

func (*HotCache[K, V]) SetMissingWithTTL

func (c *HotCache[K, V]) SetMissingWithTTL(key K, ttl time.Duration)

SetMissingWithTTL adds a key to the missing cache with a specific TTL duration. If the key already exists, its value is dropped. Panics if missing cache is not enabled.

func (*HotCache[K, V]) SetWithTTL

func (c *HotCache[K, V]) SetWithTTL(key K, v V, ttl time.Duration)

SetWithTTL adds a value to the cache with a specific TTL duration. If the key already exists, its value is updated.

func (*HotCache[K, V]) StopJanitor

func (c *HotCache[K, V]) StopJanitor()

StopJanitor stops the background janitor goroutine and cleans up resources. This method is safe to call multiple times and will wait for the janitor to fully stop.

func (*HotCache[K, V]) Values

func (c *HotCache[K, V]) Values() []V

Values returns all values in the cache. Missing values are not included in the result.

func (*HotCache[K, V]) WarmUp

func (c *HotCache[K, V]) WarmUp(loader func() (map[K]V, []K, error)) error

WarmUp preloads the cache with data from the provided loader function. This is useful for initializing the cache with frequently accessed data. The loader function should return a map of key-value pairs and a slice of missing keys.

type HotCacheConfig

type HotCacheConfig[K comparable, V any] struct {
	// contains filtered or unexported fields
}

HotCacheConfig holds the configuration for a HotCache instance. It uses the builder pattern to allow fluent configuration.

func NewHotCache

func NewHotCache[K comparable, V any](algorithm EvictionAlgorithm, capacity int) HotCacheConfig[K, V]

NewHotCache creates a new HotCache configuration with the specified eviction algorithm and capacity. This is the starting point for building a cache with the builder pattern.

func (HotCacheConfig[K, V]) Build

func (cfg HotCacheConfig[K, V]) Build() *HotCache[K, V]

Build creates and returns a new HotCache instance with the current configuration. This method validates the configuration and creates all necessary internal components. The cache is ready to use immediately after this call.

func (HotCacheConfig[K, V]) WithCopyOnRead

func (cfg HotCacheConfig[K, V]) WithCopyOnRead(copyOnRead func(V) V) HotCacheConfig[K, V]

WithCopyOnRead sets the function to copy the value when reading from the cache. This is useful for ensuring thread safety when the cached values are mutable.

func (HotCacheConfig[K, V]) WithCopyOnWrite

func (cfg HotCacheConfig[K, V]) WithCopyOnWrite(copyOnWrite func(V) V) HotCacheConfig[K, V]

WithCopyOnWrite sets the function to copy the value when writing to the cache. This is useful for ensuring thread safety when the cached values are mutable.

func (HotCacheConfig[K, V]) WithEvictionCallback added in v0.2.0

func (cfg HotCacheConfig[K, V]) WithEvictionCallback(onEviction base.EvictionCallback[K, V]) HotCacheConfig[K, V]

WithEvictionCallback sets the callback to be called when an entry is evicted from the cache. The callback is called synchronously and might block cache operations if it is slow. This implementation choice is subject to change. Please open an issue to discuss.

func (HotCacheConfig[K, V]) WithJanitor

func (cfg HotCacheConfig[K, V]) WithJanitor() HotCacheConfig[K, V]

WithJanitor enables the cache janitor that periodically removes expired items. The janitor runs in the background and cannot be used together with WithoutLocking().

func (HotCacheConfig[K, V]) WithJitter

func (cfg HotCacheConfig[K, V]) WithJitter(lambda float64, upperBoundDuration time.Duration) HotCacheConfig[K, V]

WithJitter randomizes the TTL with an exponential distribution in the range [0, upperBoundDuration). This helps prevent cache stampedes by spreading out when entries expire.

func (HotCacheConfig[K, V]) WithLoaders

func (cfg HotCacheConfig[K, V]) WithLoaders(loaders ...Loader[K, V]) HotCacheConfig[K, V]

WithLoaders sets the chain of loaders to use for cache misses. These loaders will be called in sequence when a key is not found in the cache.

func (HotCacheConfig[K, V]) WithMissingCache

func (cfg HotCacheConfig[K, V]) WithMissingCache(algorithm EvictionAlgorithm, capacity int) HotCacheConfig[K, V]

WithMissingCache enables caching of missing keys in a separate cache instance. The missing keys are stored in a dedicated cache with its own eviction algorithm and capacity.

func (HotCacheConfig[K, V]) WithMissingSharedCache

func (cfg HotCacheConfig[K, V]) WithMissingSharedCache() HotCacheConfig[K, V]

WithMissingSharedCache enables caching of missing keys in the main cache. Missing keys are stored alongside regular values in the same cache instance.

func (HotCacheConfig[K, V]) WithPrometheusMetrics added in v0.7.0

func (cfg HotCacheConfig[K, V]) WithPrometheusMetrics(cacheName string) HotCacheConfig[K, V]

WithPrometheusMetrics enables metric collection for the cache with the specified name. The cache name is required when metrics are enabled and will be used as a label in Prometheus metrics. When the cache is sharded, metrics will be collected for each shard with the shard number as an additional label.

func (HotCacheConfig[K, V]) WithRevalidation

func (cfg HotCacheConfig[K, V]) WithRevalidation(stale time.Duration, loaders ...Loader[K, V]) HotCacheConfig[K, V]

WithRevalidation sets the stale duration and optional revalidation loaders. After the TTL expires, entries become stale and can still be served while being revalidated in the background. Keys that are not fetched during the stale period will be dropped. If no revalidation loaders are provided, the default loaders or those used in GetWithLoaders() are used.

func (HotCacheConfig[K, V]) WithRevalidationErrorPolicy added in v0.2.0

func (cfg HotCacheConfig[K, V]) WithRevalidationErrorPolicy(policy revalidationErrorPolicy) HotCacheConfig[K, V]

WithRevalidationErrorPolicy sets the policy to apply when a revalidation loader returns an error. By default, keys are dropped from the cache on revalidation errors.

func (HotCacheConfig[K, V]) WithSharding

func (cfg HotCacheConfig[K, V]) WithSharding(nbr uint64, fn sharded.Hasher[K]) HotCacheConfig[K, V]

WithSharding enables cache sharding for better concurrency performance. The cache is split into multiple shards based on the provided hash function.

func (HotCacheConfig[K, V]) WithTTL

func (cfg HotCacheConfig[K, V]) WithTTL(ttl time.Duration) HotCacheConfig[K, V]

WithTTL sets the time-to-live for cache entries. After this duration, entries will be considered expired and will be removed.

func (HotCacheConfig[K, V]) WithWarmUp

func (cfg HotCacheConfig[K, V]) WithWarmUp(fn func() (map[K]V, []K, error)) HotCacheConfig[K, V]

WithWarmUp preloads the cache with data from the provided function. This is useful for initializing the cache with frequently accessed data.

func (HotCacheConfig[K, V]) WithWarmUpWithTimeout added in v0.6.0

func (cfg HotCacheConfig[K, V]) WithWarmUpWithTimeout(timeout time.Duration, fn func() (map[K]V, []K, error)) HotCacheConfig[K, V]

WithWarmUpWithTimeout preloads the cache with data from the provided function with a timeout. This is useful when the inner callback does not have its own timeout strategy.

func (HotCacheConfig[K, V]) WithoutLocking

func (cfg HotCacheConfig[K, V]) WithoutLocking() HotCacheConfig[K, V]

WithoutLocking disables mutex for the cache and improves internal performance. This should only be used when the cache is not accessed concurrently. Cannot be used together with WithJanitor().

type Loader

type Loader[K comparable, V any] func(keys []K) (found map[K]V, err error)

Loader is a function type that loads values for the given keys. It should return a map of found key-value pairs and an error if the operation fails. Keys that cannot be found should not be included in the returned map.

type LoaderChain

type LoaderChain[K comparable, V any] []Loader[K, V]

LoaderChain is a slice of loaders that are executed in sequence. Each loader is called with the keys that were not found by previous loaders.

Directories

Path
    pkg
        arc
        lfu
        lru
