Moka - Fast Concurrent Cache for Rust

Moka is a fast, concurrent, in-memory cache library for Rust, inspired by Java's Caffeine. It provides near-optimal hit ratios through the TinyLFU eviction policy, full concurrency for reads, and high expected concurrency for writes. Its feature set includes TTL/TTI expiration, per-entry custom expiry, eviction listeners, size-aware eviction, and both synchronous and asynchronous APIs.

At a high level, Moka works by combining a lock-free concurrent hash table (for key-value storage) with policy structures (for eviction, expiration, and admission) that are updated in batched operations. This design gives strong consistency for reads and eventual consistency for policy metadata, which is the right trade-off for a cache: you never return stale data, but eviction decisions may lag slightly behind the actual state.

The library provides three cache types that cover virtually every caching scenario in Rust applications. The sync::Cache and sync::SegmentedCache are for synchronous, multi-threaded contexts. The future::Cache is for async runtimes like tokio, async-std, and actix-rt. All three share the same builder API, eviction policies, and expiration features, so once you learn one, the others follow naturally.

Which Cache Type Should You Use?

| Cache Type | Module | Feature Flag | When to Use |
|---|---|---|---|
| `sync::Cache` | `moka::sync` | `sync` | General-purpose multi-threaded caching. A single lock for policy operations means simpler internals and slightly lower overhead at moderate concurrency. |
| `sync::SegmentedCache` | `moka::sync` | `sync` | High-contention workloads with many threads. Splits the cache into segments to reduce lock contention on policy operations. Choose this when many cores hit the same cache simultaneously. |
| `future::Cache` | `moka::future` | `future` | Async applications using tokio, async-std, or actix-rt. Uses async-aware locking so `get_with` can deduplicate concurrent async initializations without blocking the runtime. |

For most applications, sync::Cache is the right default. Switch to SegmentedCache only if you observe lock contention (many threads, high write rate). Switch to future::Cache only when you are already in an async context and need async initialization logic.

Cargo Setup
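Assuming the current moka release line (0.12), a typical Cargo.toml entry enables the feature flag matching the cache type in use:

```toml
[dependencies]
# For sync::Cache / sync::SegmentedCache:
moka = { version = "0.12", features = ["sync"] }

# For future::Cache, enable the "future" feature instead:
# moka = { version = "0.12", features = ["future"] }
```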
