
Add in-memory cache infrastructure #98

Open
findolor wants to merge 1 commit into main from in-memory-cache-infra

Conversation

Collaborator

@findolor findolor commented May 4, 2026

Motivation

PR #54 mixes reusable cache infrastructure with endpoint behavior changes. This PR keeps the first stack item narrow by adding only the in-memory caching primitives on top of main.

Solution

  • Add moka with the async future feature.
  • Add AppCache<K, V> as a crate-local wrapper for TTL- and capacity-bounded in-memory caches.
  • Add get_or_try_insert for async fallible cache population with request coalescing.
  • Add CacheGroup for grouped invalidation across registered caches.
  • Cover hit, miss, invalidation, error, and concurrent miss behavior in unit tests.
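
To make the shape of these primitives concrete, here is a simplified, synchronous sketch of the API surface described above. This is not the PR's actual code: a std HashMap stands in for moka, so the TTL/capacity bounds and async miss coalescing of the real implementation are not shown, and the exact signatures in src/cache.rs may differ.

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::{Arc, Mutex};

// Simplified stand-in for the moka-backed AppCache<K, V>; the real type
// wraps moka::future::Cache and exposes async methods.
pub struct AppCache<K, V>(Mutex<HashMap<K, V>>);

impl<K: Eq + Hash + Clone, V: Clone> AppCache<K, V> {
    pub fn new() -> Self {
        Self(Mutex::new(HashMap::new()))
    }

    pub fn get(&self, key: &K) -> Option<V> {
        self.0.lock().unwrap().get(key).cloned()
    }

    pub fn insert(&self, key: K, value: V) {
        self.0.lock().unwrap().insert(key, value);
    }

    // On a miss, run the fallible fetch and cache only Ok results,
    // mirroring get_or_try_insert in the PR description.
    pub fn get_or_try_insert<E>(
        &self,
        key: K,
        fetch: impl FnOnce() -> Result<V, E>,
    ) -> Result<V, E> {
        if let Some(v) = self.get(&key) {
            return Ok(v);
        }
        let v = fetch()?;
        self.insert(key, v.clone());
        Ok(v)
    }
}

// Grouped invalidation across heterogeneous caches via a shared trait.
pub trait Invalidatable {
    fn invalidate_all(&self);
}

impl<K: Eq + Hash + Clone, V: Clone> Invalidatable for AppCache<K, V> {
    fn invalidate_all(&self) {
        self.0.lock().unwrap().clear();
    }
}

pub struct CacheGroup {
    caches: Vec<Arc<dyn Invalidatable>>,
}

impl CacheGroup {
    pub fn new() -> Self {
        Self { caches: Vec::new() }
    }

    pub fn register(&mut self, cache: Arc<dyn Invalidatable>) {
        self.caches.push(cache);
    }

    // Invalidate every registered cache in one call.
    pub fn invalidate_all(&self) {
        for cache in &self.caches {
            cache.invalidate_all();
        }
    }
}
```

The trait-object indirection is what lets CacheGroup hold caches with different key/value types while still invalidating them together.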

Checks

  • nix develop -c cargo fmt
  • nix develop -c cargo test cache
  • nix develop -c rainix-rs-static

Summary by CodeRabbit

  • Tests

    • Included comprehensive unit tests for cache functionality covering insert/get operations, TTL-based invalidation, concurrent miss coalescing, error handling, and group-wide cache invalidation.
  • Chores

    • Introduced an internal caching layer with configurable capacity, automatic time-based entry expiration, concurrent request optimization, and multi-cache management capabilities.


coderabbitai Bot commented May 4, 2026

📝 Walkthrough

Added a moka-based async caching layer with a new AppCache<K, V> wrapper providing TTL-based storage, a CacheGroup for managing multiple cache instances, and comprehensive test coverage validating concurrent miss coalescing and invalidation behavior.

Changes

Caching Layer

| Layer | File(s) | Summary |
| --- | --- | --- |
| Dependency Addition | Cargo.toml | Added moka v0.12 with the future feature for async cache support. |
| Cache Implementation | src/cache.rs | Implemented AppCache<K, V> wrapper around moka::future::Cache with get(), insert(), get_or_try_insert() (including concurrent miss coalescing via try_get_with), and invalidate_all(). Added CacheGroup to manage and invalidate multiple cache instances via a shared Invalidatable trait. |
| Module Integration | src/main.rs | Declared mod cache; to include the cache module in the crate. |
| Tests | src/cache.rs | Added comprehensive Rocket async unit tests covering insert/get, missing-key handling, invalidation, correct fetch invocation, cache-hit avoidance of fetch, error non-caching, concurrent miss coalescing, and group-wide invalidation across heterogeneous cache instances. |
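Per the walkthrough, the dependency addition in Cargo.toml would look roughly like this (exact formatting in the PR may differ):

```toml
[dependencies]
moka = { version = "0.12", features = ["future"] }
```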

Sequence Diagram

sequenceDiagram
    participant Client as Client Code
    participant AppCache as AppCache<K, V>
    participant Moka as moka::future::Cache
    participant FetchFn as Fetch Function

    Note over Client,FetchFn: Cache Get-or-Fetch Flow (Concurrent Miss Coalescing)

    par Request 1
        Client->>AppCache: get_or_try_insert(key, fetch_fn)
    and Request 2
        Client->>AppCache: get_or_try_insert(key, fetch_fn)
    end

    Note over AppCache: Both requests detect cache miss

    AppCache->>Moka: try_get_with(key, fetch_fn)
    Note over AppCache,Moka: try_get_with ensures only one fetch<br/>for concurrent identical keys

    Moka->>FetchFn: Call fetch_fn() once
    FetchFn-->>Moka: Result<V, E>

    Moka->>Moka: Insert result (if Ok)
    Moka-->>AppCache: Result<V, Arc<E>>
    AppCache-->>Client: Result<V, Arc<E>>
    AppCache-->>Client: Result<V, Arc<E>>

    Note over Client: Both requests receive same result<br/>from single fetch invocation
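The coalescing idea in the diagram can be sketched with std primitives alone. This is not moka's implementation (moka uses try_get_with on an async cache); it is a hypothetical CoalescingCache in which threads that miss on the same key share one OnceLock cell, so the fetch closure runs exactly once:

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex, OnceLock};
use std::thread;

// Concurrent misses on the same key share a single OnceLock, so only
// one caller actually runs the fetch; the rest wait for its result.
pub struct CoalescingCache<K, V> {
    cells: Mutex<HashMap<K, Arc<OnceLock<V>>>>,
}

impl<K: Eq + std::hash::Hash + Clone, V: Clone> CoalescingCache<K, V> {
    pub fn new() -> Self {
        Self { cells: Mutex::new(HashMap::new()) }
    }

    pub fn get_or_insert_with(&self, key: K, fetch: impl FnOnce() -> V) -> V {
        // Grab (or create) the shared cell for this key, then release the
        // map lock so other keys are not blocked while the fetch runs.
        let cell = self
            .cells
            .lock()
            .unwrap()
            .entry(key)
            .or_insert_with(|| Arc::new(OnceLock::new()))
            .clone();
        cell.get_or_init(fetch).clone()
    }
}

// Eight threads miss concurrently; the fetch counter still ends at 1.
pub fn demo_fetch_count() -> usize {
    let cache = Arc::new(CoalescingCache::<&'static str, u64>::new());
    let calls = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let cache = Arc::clone(&cache);
            let calls = Arc::clone(&calls);
            thread::spawn(move || {
                cache.get_or_insert_with("answer", || {
                    calls.fetch_add(1, Ordering::SeqCst);
                    42
                })
            })
        })
        .collect();
    for h in handles {
        assert_eq!(h.join().unwrap(), 42);
    }
    calls.load(Ordering::SeqCst)
}
```

moka's try_get_with additionally propagates fetch errors as Result<V, Arc<E>> without caching them, which this sketch omits.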

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 A cache layer hops into the crate,
With moka's speed—no requests wait!
Concurrent misses coalesce with grace,
One fetch feeds all who seek their place. ✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 47.06%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (4 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title 'Add in-memory cache infrastructure' accurately describes the main change: introducing a new in-memory cache wrapper and related utilities with TTL and grouping capabilities. |
| Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |



Collaborator Author

findolor commented May 4, 2026


How to use the Graphite Merge Queue

Add the label add-to-gt-merge-queue to this PR to add it to the merge queue.

You must have a Graphite account in order to use the merge queue. Sign up using this link.

An organization admin has enabled the Graphite Merge Queue in this repository.

Please do not merge from GitHub as this will restart CI on PRs being processed by the merge queue.

This stack of pull requests is managed by Graphite. Learn more about stacking.

@findolor findolor marked this pull request as ready for review May 4, 2026 11:19

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (2)
src/cache.rs (2)

66-68: ⚡ Quick win

Consider implementing Default for CacheGroup.

Since new() takes no arguments and returns Self with a trivially constructible state, Clippy's new_without_default lint will fire here. The fix is a one-liner.

♻️ Proposed fix
+impl Default for CacheGroup {
+    fn default() -> Self {
+        Self::new()
+    }
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/cache.rs` around lines 66 - 68, Implement Default for CacheGroup and have
new() delegate to Default::default(); specifically, add an impl Default for
CacheGroup that returns Self { caches: Vec::new() } and change or keep
pub(crate) fn new() -> Self to call Self::default() (or replace it entirely).
This addresses the clippy new_without_default lint by providing a canonical
Default implementation for CacheGroup and ensures new() remains available and
trivial.

107-110: ⚡ Quick win

Prefer run_pending_tasks() over yield_now() after invalidate_all() in tests.

tokio::task::yield_now() cedes the scheduler for one tick, but does not guarantee that moka's internal write-op queue has been drained. Maintenance tasks are executed lazily when specific bounded-channel conditions are met, so a single yield may not be sufficient under different scheduler configurations or future moka internals. run_pending_tasks().await is moka's explicit API for flushing pending maintenance in tests, making the assertions unconditionally reliable. Because the tests are in the same module, cache.0 is accessible.

♻️ Proposed fix (applies to both affected tests)
-        tokio::task::yield_now().await;
+        cache.0.run_pending_tasks().await;

and for the group test:

-        tokio::task::yield_now().await;
+        cache_a.0.run_pending_tasks().await;
+        cache_b.0.run_pending_tasks().await;

Also applies to: 183-185

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/cache.rs` around lines 107 - 110, The test uses
tokio::task::yield_now().await after calling cache.invalidate_all(), which does
not guarantee moka's internal maintenance/write-op queue is flushed; replace the
yield with calling the crate's test helper to flush pending maintenance by
awaiting run_pending_tasks() via cache.0 (e.g.,
cache.0.run_pending_tasks().await) so the subsequent assertions on
cache.get(&"a").await and cache.get(&"b").await are reliably valid; apply the
same replacement in the other affected test where invalidate_all() is followed
by yield_now().

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ba9c0eae-e355-4723-b2c5-6d67d433b529

📥 Commits

Reviewing files that changed from the base of the PR and between 2ca2a6b and 753da6a.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (3)
  • Cargo.toml
  • src/cache.rs
  • src/main.rs

@findolor findolor self-assigned this May 4, 2026
@findolor findolor requested review from JuaniRios and hardyjosh May 4, 2026 12:12
