diff --git a/.claude/settings.local.json b/.claude/settings.local.json
new file mode 100644
index 0000000..bcb7957
--- /dev/null
+++ b/.claude/settings.local.json
@@ -0,0 +1,9 @@
+{
+ "permissions": {
+ "allow": [
+ "Bash(grep:*)",
+ "Bash(mix compile:*)",
+ "Bash(mix test:*)"
+ ]
+ }
+}
diff --git a/README.md b/README.md
index 341fdf5..85d574c 100644
--- a/README.md
+++ b/README.md
@@ -1,232 +1,271 @@
# AbsintheCache
-This project is in an early development phase. It is the extracted version of the caching layer developed at [sanbase](https://github.com/santiment/sanbase2). There are still parts that are connected with its original repository and there are still parts that are not configurable enough.
-The code in the sanbase2 repository is more up-to-date, but it is open source and freely available under the MIT license.
+Caching layer for the Elixir [Absinthe](https://github.com/absinthe-graphql/absinthe) GraphQL server.
-This repository might not be updated often as the place where it originated has still not moved to using this as a library. The code is in active use, bug reports are going to be addressed and pull requests are welcomed.
+Replace `resolve` with `cache_resolve` and your resolvers are cached. For queries with many resolvers, cache the entire query result in a single call by plugging into Absinthe's pipeline.
----
+Production-tested at [Santiment](https://santiment.net/) since 2018, serving over 20 million requests per month.
-Caching solution for the Elixir's [GraphQL](http://spec.graphql.org/) server [Absinthe](https://github.com/absinthe-graphql/absinthe).
+## Features
-Goals:
+- **Drop-in resolver caching** -- swap `resolve` for `cache_resolve`, everything else stays the same
+- **Whole-query caching** -- skip the entire resolution phase on cache hits via a custom DocumentProvider
+- **Thundering herd protection** -- concurrent requests for the same key share a single computation
+- **Async & Dataloader support** -- `cache_resolve` handles `Absinthe.Middleware.Async` and `Dataloader` transparently
+- **Pluggable backend** -- ships with [ConCache](https://github.com/sasa1977/con_cache) and [Cachex](https://github.com/whitfin/cachex) adapters, or implement `AbsintheCache.Behaviour` for your own (e.g. Redis)
+- **TTL jitter** -- randomized TTL offset per resolver avoids [cache stampede](https://en.wikipedia.org/wiki/Cache_stampede) on expiry
-- Easy to use.
-- Easy to change the cache backend used.
-- Flexible and configurable for the more complicated cases.
-- Do not use config files.
-
-## Why use AbsintheCache
-
-- Production tested.
-- Easy to start using - just start the cache backend (integrated [cachex](https://github.com/whitfin/cachex) and [con_cache](https://github.com/sasa1977/con_cache)) and replace the `resolve` macro with `cache_resolve`.
-- `cache_resolve` provides out of the box support for resolvers that do not immediately return a result, but are using `async` or `dataloader`.
-- Solves the problem of executing many resolvers for one query 1 .
-- Pluggable cache backend. You do not like `con_cache` or want to use `Redis` so the cache is shared between multiple nodes? Just implement a behavior with 5 functions
-
-> 1 A query that returns a list of 1000 objects with each of them running 3 resolvers, the query will have in total `1 + 1000 * 3 = 3001` resolvers being run. Even if these resolvers are cached, this means that 3001 cache calls have to be made. In order to solve this issue, `AbsintheCache` allows you to plug in the request's processing pipeline, skip the whole resolution phase and inject the final result directly. The final result is the result after all resolvers have run.
+## Installation
-## Production tested
+Add `absinthe_cache` to your dependencies in `mix.exs`:
-The cache implementation has been used at [Santiment](https://santiment.net/) since April 2018 serving 20 million requests per month. Ceasing support of the library is not expected.
+```elixir
+def deps do
+ [
+ {:absinthe_cache, "~> 0.1.0"}
+ ]
+end
+```
-## Functionality
+## Quick Start
-`AbsintheCache` provides two major features:
+### 1. Start the cache in your supervision tree
-- Cache a single resolver by changing the `resolve` macro to `cache_resolve`.
-- Cache the result of the whole query execution at once.
+```elixir
+children = [
+ AbsintheCache.child_spec(name: :graphql_cache, id: :graphql_cache)
+]
-## Examples
+Supervisor.start_link(children, strategy: :one_for_one)
+```
-Full repo example can be found [here](https://github.com/IvanIvanoff/absinthe_cache_example)
+### 2. Import and use `cache_resolve`
----
+```elixir
+defmodule MyApp.Schema do
+ use Absinthe.Schema
+ import AbsintheCache, only: [cache_resolve: 1, cache_resolve: 2]
+
+ query do
+ field :expensive_data, :result do
+ arg :id, non_null(:id)
+ cache_resolve(&MyApp.Resolvers.get_expensive_data/3)
+ end
+
+ field :shorter_cache, :result do
+ arg :id, non_null(:id)
+ cache_resolve(&MyApp.Resolvers.get_expensive_data/3, ttl: 30, max_ttl_offset: 30)
+ end
+ end
+end
+```
-### Example I
+That's it. The resolver result is cached for 5 minutes (default) with thundering herd protection.
-**Problem**
+## Configuration
----
+All settings are optional. Defaults work out of the box.
-The `MetricResolver.get_metadata/3` function returns the metadata for a given metric. It takes a lot of time to compute as it fetches data from three different databases and from elasticsearch. The solution to this is to cache it for 5 minutes.
+```elixir
+# config/config.exs
+config :absinthe_cache,
+ cache_name: :graphql_cache, # registered name of the cache process
+ cache_provider: AbsintheCache.ConCacheProvider, # module implementing Behaviour
+ ttl: 300, # base TTL in seconds (default: 5 min)
+ max_ttl_offset: 120 # random offset range in seconds (default: 2 min)
+```
-**Solution**
+The effective TTL for each resolver is `ttl + offset` where `offset` is a deterministic value in `[0, max_ttl_offset]` derived from the resolver name and arguments. This spreads out cache expiry to prevent stampedes.
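+
+A minimal sketch of how such a deterministic offset can be derived (the library computes it internally via `:erlang.phash2/2`; the exact key material used is an implementation detail):
+
+```elixir
+# Same resolver name + args always hash to the same offset, so a given
+# entry's expiry is stable while different entries are spread apart.
+offset = :erlang.phash2({resolver_name, args}, max_ttl_offset + 1)
+effective_ttl = ttl + offset
+```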
----
+## Usage
-Cache the result for 5 minutes.
+### Resolver-level caching with `cache_resolve`
-**Steps**
+`cache_resolve` is a drop-in replacement for Absinthe's `resolve`:
----
+```elixir
+# Before
+field :metric_metadata, :metric_metadata do
+ arg :metric, non_null(:string)
+ resolve(&MetricResolver.get_metadata/3)
+end
-In order to cache the resolver the following steps must be done:
+# After
+field :metric_metadata, :metric_metadata do
+ arg :metric, non_null(:string)
+ cache_resolve(&MetricResolver.get_metadata/3)
+end
+```
-First, the cache backend needs to be started in the supervision tree:
+#### Options
```elixir
-# TODO: Abstract & improve
-Supervisor.child_spec(
- {ConCache,
- [
- name: :graphql_cache,
- ttl_check_interval: :timer.seconds(30),
- global_ttl: :timer.minutes(5),
- acquire_lock_timeout: 30_000
- ]},
- id: :api_cache
+cache_resolve(&MetricResolver.get_metadata/3,
+ ttl: 60, # override base TTL for this resolver
+ max_ttl_offset: 30, # override offset range
+ honor_do_not_cache_flag: true # skip cache when :__do_not_cache_query__ is set
)
+
+field :eth_addresses, list_of(:eth_address) do
+ cache_resolve(
+ dataloader(SanbaseRepo),
+ # Give a name to the anonymous function
+ fun_name: :eth_addresses_resolver_fun
+ )
+end
```
-This is where the cached data is persisted. It's important that the name of the cache is `:graphql_cache` as this is currently hardcoded in the implementation (will be improved)
+#### Return values
-Then the new resolve macros need to be imported.
+| Return value | Behavior |
+|---|---|
+| `{:ok, value}` | Cached and returned |
+| `{:error, reason}` | Returned as-is, **not cached** |
+| `{:nocache, {:ok, value}}` | Returned as `{:ok, value}`, **not cached** |
-```elixir
-import AbsintheCache, only: [cache_resolve: 1, cache_resolve: 2]
-```
+The `{:nocache, ...}` tuple is useful when you want to return a placeholder value (e.g. `nil`, `0`, `"No data"`) without caching it, so the next request retries the computation.
+Note that returning `{:nocache, {:ok, value}}` is supported only for fields that use `cache_resolve/{1,2}`,
+not for the default `resolve` macro.
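+
+For example, a resolver (with a hypothetical `fetch_price/1` helper) that returns a placeholder on upstream failure without poisoning the cache:
+
+```elixir
+def get_price(_root, %{pair: pair}, _resolution) do
+  case fetch_price(pair) do
+    {:ok, price} -> {:ok, price}
+    # Upstream is down: return nil now, let the next request retry
+    {:error, _reason} -> {:nocache, {:ok, nil}}
+  end
+end
+```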
-`resolve` can now be replaced with `cache_resolve`:
+### Whole-query caching
-```elixir
-field :metric_metadata, :metric_metadata do
- arg(:metric, non_null(:string))
- resolve(&MetricResolver.get_metadata/3)
-end
-```
+For queries with many resolvers (e.g. a list of 1000 items each with 3 field resolvers = 3001 resolver calls), you can cache the entire query result as a single entry. On cache hits, the Resolution and Result phases are skipped entirely.
-becomes:
+#### 1. Define a DocumentProvider
```elixir
-field :metric_metadata, :metric_metadata do
- arg(:metric, non_null(:string))
- cache_resolve(&MetricResolver.get_metadata/3)
+defmodule MyApp.DocumentProvider do
+ use AbsintheCache.DocumentProvider,
+ ttl: 300,
+ max_ttl_offset: 120
end
```
-There are two options to configure the TTL (time to live):
+#### 2. Define a BeforeSend hook
```elixir
-field :metric_metadata, :metric_metadata do
- arg(:metric, non_null(:string))
- cache_resolve(&MetricResolver.get_metadata/3, ttl: 60, max_ttl_offset: 60)
+defmodule MyApp.AbsintheBeforeSend do
+ use AbsintheCache.BeforeSend,
+ cached_queries: ["getUsers", "getMetrics"]
end
```
-- :ttl - For how long (in seconds) should the value be cached. Defaults to 300 seconds.
-- :max_ttl_offset - Extend the TTL with a random number of seconds in the interval `[0; max_ttl_offset]`. The value is not completely random - it will be the same for the same resolver and arguments pairs. This is useful in avoiding [cache stampede](https://en.wikipedia.org/wiki/Cache_stampede) problems. Defaults to 120 seconds.
+Only queries listed in `cached_queries` are cached. Query names should be in camelCase as they appear in the GraphQL request.
-### Example II
+#### 3. Wire them up in your router
-**Problem**
+```elixir
+forward "/api",
+ Absinthe.Plug,
+ schema: MyApp.Schema,
+ document_providers: [
+ MyApp.DocumentProvider,
+ Absinthe.Plug.DocumentProvider.Default
+ ],
+ before_send: {MyApp.AbsintheBeforeSend, :before_send}
+```
----
+### Using `wrap` outside of schemas
-The `get_users` query returns `list_of(:user)`. The USD balance of a user is computed by the `usd_balance/3` function.The balance is needed in some special cases only, so it is not a good idea to always compute it and fill it in `get_users/3`. When we return big lists of users, the `usd_balance/3` function will be called once for every user. Even if we use dataloader and compute the result with a single query, in the end there would be thousands of function invocations (or cache calls if we also use `cache_resolve`) which would slow down the execution
+`AbsintheCache.wrap/3` and `wrap/4` cache any zero-arity function, which is useful outside of Absinthe resolvers:
-**Solution**
+```elixir
+AbsintheCache.wrap(
+ fn -> MyApp.compute_expensive_value(pair) end,
+ :compute_expensive_value,
+ %{pair: pair}
+).()
+```
----
+### Direct cache access
-Compute the data and cache the result **after** all resolvers are finished. This way the next query that hits the cache will make a single cache call to load all the data.
+```elixir
+AbsintheCache.store("my_key", {:ok, value})
+AbsintheCache.get("my_key")
+AbsintheCache.count()
+AbsintheCache.size() # in megabytes
+AbsintheCache.clear_all()
+```
-**Steps**
+Direct access is useful in tests when the cache needs to be cleared between runs.
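+
+For example, an ExUnit setup block (a sketch; adapt to your test layout) that clears the cache before each test:
+
+```elixir
+setup do
+  # Start each test from an empty cache so cached results cannot leak between tests
+  AbsintheCache.clear_all()
+  :ok
+end
+```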
----
+## Cache Backends
-Let's have the query and types definition as follows:
+### ConCache (default)
-```elixir
-object :user do
- field(:id, non_null(:id))
- field(:email, :string)
- field(:username, :string)
- ...
- field :usd_balance, :integer do
- resolve(&UserResolver.usd_balance/3)
- end
-end
+Ships as a dependency. No extra setup needed.
-field :get_users, list_of(:user) do
- resolve(&UserResolver.get_users/3)
-end
+```elixir
+config :absinthe_cache,
+ cache_provider: AbsintheCache.ConCacheProvider
```
-The first step is defining which queries are to be cached. This is done in the following way:
+### Cachex
+
+Add `cachex` to your dependencies, then configure:
```elixir
-defmodule MyAppWeb.Graphql.AbsintheBeforeSend do
- use AbsintheCache.BeforeSend, cached_queries: ["get_users"]
-end
+config :absinthe_cache,
+ cache_provider: AbsintheCache.CachexProvider
```
-Then you need to decide for how long to cache them:
+The Cachex adapter includes gzip compression of cached values and LRW eviction (removes 30% of least-recently-written keys when reaching 2M entries).
+
+### Custom backend
+
+Implement the `AbsintheCache.Behaviour` callbacks:
```elixir
+defmodule MyApp.RedisCacheProvider do
+ @behaviour AbsintheCache.Behaviour
-defmodule MyAppWeb.Graphql.DocumentProvider do
- use AbsintheCache.DocumentProvider, ttl: 300, max_ttl_offset: 120
-end
-```
+ @impl true
+ def start_link(opts), do: ...
-Those modules are actually doing a lot more than just defining queries and ttl options.
-To understand what really happens check the Internals section
+ @impl true
+ def child_spec(opts), do: ...
-The next step is modifying the Absinthe route in the router file - the `:document_providers` and `:before_send` keys need to be updated to:
+ @impl true
+ def get(cache, key), do: ...
-```elixir
-forward(
- ...
- document_providers: [
- MyAppWeb.Graphql.DocumentProvider,
- Absinthe.Plug.DocumentProvider.Default
- ],
- ...
- before_send: {MyAppWeb.Graphql.AbsintheBeforeSend, :before_send}
- ...
-)
-```
+ @impl true
+ def store(cache, key, value), do: ...
-## Internals
+ @impl true
+ def get_or_store(cache, key, fun, cache_modify_middleware), do: ...
-### How does `cache_resolve` work
+ @impl true
+ def size(cache), do: ...
-Following is a high level overview of the internal working of `cache_resolve`. For complete understaning please read the source code.
+ @impl true
+ def count(cache), do: ...
-`cache_resolve` works by wrapping the function that computes the result. The wrapper computes a cache key from the function name and arguments (if anonymous function is passed then a function name must be explicitly given). The wrapper function checks for a stored value
-corresponding to the cache key. If there is such - the value is returned and the function computation is skipped, thus avoiding running a slow function. If there is not a stored value - the function
-is comptued, the value is stored in the cache under the given cache key and the result is returned.
-If `async` or `dataloader` are used the approach is the same excluding some implementation details. In both cases there is a zero or one arity
-functions that can be wrapped and cached.
-If there are many concurrent requests for the same query only one process will acquire a lock and run the actual computations. The other processes will wait on the lock and get the computed data once it's ready.
+ @impl true
+ def clear_all(cache), do: ...
+end
+```
-### How does caching of the whole query execution work
+## How It Works
-The work here is split into two major parts - a custom [DocumentProvider](https://hexdocs.pm/absinthe_plug/Absinthe.Plug.DocumentProvider.html) and a [before send hook](https://hexdocs.pm/absinthe_plug/Absinthe.Plug.html#module-before-send).
-Shortly said, the document provider sets up the pipeline of phases that are going to run (around 40 of them) and the before send hook is usually used to modify the Plug connection right before the result is being sent.
+### Resolver caching
-The default document provider has two phases that are important to `AbsintheCache` - the Resolution phase and the Result phase - these are the phases where the resolvers run and the result is constructed.
+`cache_resolve` wraps the resolver function. On each call it:
-The custom document provider defines the same pipeline as the default one but inserts two extra phases - Cache phrase is inserted before the Resolution phase and Idempotent phase is inserted after Result phase (usually the last one).
+1. Computes a cache key from the function name, arguments, and parent context
+2. Checks the cache -- if hit, returns the value without calling the resolver
+3. If miss, acquires a lock for that key (thundering herd protection)
+4. Executes the resolver, stores `{:ok, value}` results, returns the result
+5. Other processes waiting on the same key get the result once it's computed
-The Cache phase constructs a cache key out of the query name, arguments and variables in a smart way - it can work both with interpolated variables in the query string and by separately passed variables. If the constructed cache key has a corresponding cached value it is taken and the execution "jumps" over the Resolution and Result phases directly to the Idempotent phase that does nothing. It is needed because the Result phase is the last one but the Cache needs to jump right after it.
+### Whole-query caching
-The before send hook is executed after all phases have run. If computed value has not been taken from the cache, this is the step where it is inserted into the cache. It's done here because we need the "constructed" result after all resolvers are run and their results are merged into one. The cached value is actually a json string - the result that is sent to the client. Storing it in this form allows the execution to totally skip the resolution and result building phases.
+The DocumentProvider inserts a `CacheDocument` phase before Absinthe's Resolution phase and an `Idempotent` phase after the Result phase.
-## Installation
+- **Cache hit**: `CacheDocument` injects the cached result and jumps to `Idempotent`, skipping Resolution and Result entirely
+- **Cache miss**: `CacheDocument` is a no-op, resolution runs normally, and `BeforeSend` stores the final result
-If [available in Hex](https://hex.pm/docs/publish), the package can be installed
-by adding `absinthe_cache` to your list of dependencies in `mix.exs`:
+This means a cached query is served with a single cache lookup regardless of how many resolvers it contains.
-```elixir
-def deps do
- [
- {:absinthe_cache, "~> 0.1.0"}
- ]
-end
-```
+## License
-Documentation can be generated with [ExDoc](https://github.com/elixir-lang/ex_doc)
-and published on [HexDocs](https://hexdocs.pm). Once published, the docs can
-be found at [https://hexdocs.pm/absinthe_cache](https://hexdocs.pm/absinthe_cache).
+MIT
diff --git a/lib/absinthe_cache.ex b/lib/absinthe_cache.ex
index 6b05eef..8062b08 100644
--- a/lib/absinthe_cache.ex
+++ b/lib/absinthe_cache.ex
@@ -3,32 +3,44 @@ defmodule AbsintheCache do
Provides the macro `cache_resolve` that replaces the Absinthe's `resolve` and
caches the result of the resolver for some time instead of calculating it
every time.
+
+ ## Configuration
+
+ All settings are optional and have sensible defaults:
+
+ config :absinthe_cache,
+ cache_name: :graphql_cache,
+ cache_provider: AbsintheCache.ConCacheProvider,
+ ttl: 300,
+ max_ttl_offset: 120
+
+ - `:cache_name` — the registered name of the cache process (default: `:graphql_cache`)
+ - `:cache_provider` — module implementing `AbsintheCache.Behaviour` (default: `AbsintheCache.ConCacheProvider`)
+ - `:ttl` — base time-to-live in seconds for cached entries (default: `300`)
+ - `:max_ttl_offset` — maximum random offset added to TTL to avoid cache stampede (default: `120`)
+
+ ## Process dictionary keys
+
+ This library communicates between modules via the process dictionary:
+
+ - `:__do_not_cache_query__` — when set to `true`, signals that the current query
+ should not be cached. Set by providers when `{:nocache, {:ok, value}}` is returned,
+ and by `DocumentProvider` on cache hits (to avoid re-storing). Read by `cache_resolve`
+ (with `honor_do_not_cache_flag: true`) and `BeforeSend`.
+ - `:__change_absinthe_before_send_caching_ttl__` — when `caching_params` are provided
+ in the query args, this is set to the computed TTL. Read by `BeforeSend` to override
+ the TTL when storing the full query result.
"""
alias __MODULE__, as: CacheMod
- alias AbsintheCache.ConCacheProvider, as: CacheProvider
-
- require Logger
-
- @ttl 300
- @max_ttl_offset 120
-
- # TODO: Make it configurable
- @cache_name :graphql_cache
-
- @compile {:inline,
- wrap: 2,
- wrap: 3,
- from: 2,
- resolver: 3,
- store: 2,
- store: 3,
- get_or_store: 2,
- get_or_store: 3,
- cache_modify_middleware: 3,
- cache_key: 2,
- convert_values: 2,
- generate_additional_args: 1}
+
+ @default_ttl 300
+ @default_max_ttl_offset 120
+
+ defp cache_name, do: Application.get_env(:absinthe_cache, :cache_name, :graphql_cache)
+ defp cache_provider, do: Application.get_env(:absinthe_cache, :cache_provider, AbsintheCache.ConCacheProvider)
+ defp default_ttl, do: Application.get_env(:absinthe_cache, :ttl, @default_ttl)
+ defp default_max_ttl_offset, do: Application.get_env(:absinthe_cache, :max_ttl_offset, @default_max_ttl_offset)
@doc ~s"""
Macro that's used instead of Absinthe's `resolve`. This resolver can perform
@@ -37,7 +49,7 @@ defmodule AbsintheCache do
evaluated at all in this case
2. Evaluate the resolver function and store the value in the cache if it is
not present there
- 3. Handle the `Absinthe.Middlewar.Async` and `Absinthe.Middleware.Dataloader`
+ 3. Handle the `Absinthe.Middleware.Async` and `Absinthe.Middleware.Dataloader`
middlewares. In order to handle them, the function that executes the actual
evaluation is wrapped in a function that handles the cache interactions
@@ -86,8 +98,8 @@ defmodule AbsintheCache do
"""
def wrap(cached_func, name, args \\ %{}, opts \\ []) do
fn ->
- CacheProvider.get_or_store(
- @cache_name,
+ cache_provider().get_or_store(
+ cache_name(),
cache_key(name, args, opts),
cached_func,
&cache_modify_middleware/3
@@ -95,22 +107,33 @@ defmodule AbsintheCache do
end
end
+ def child_spec(opts) do
+ cache_provider().child_spec(opts)
+ end
+
@doc ~s"""
- Clears the whole cache. Slow.
+ Clears the whole cache.
"""
def clear_all() do
- CacheProvider.clear_all(@cache_name)
+ cache_provider().clear_all(cache_name())
end
@doc ~s"""
The size of the cache in megabytes
"""
def size() do
- CacheProvider.size(@cache_name, :megabytes)
+ cache_provider().size(cache_name())
+ end
+
+ @doc ~s"""
+ The number of entries in the cache
+ """
+ def count() do
+ cache_provider().count(cache_name())
end
def get(key) do
- CacheProvider.get(@cache_name, key)
+ cache_provider().get(cache_name(), key)
end
@doc false
@@ -149,11 +172,11 @@ defmodule AbsintheCache do
# particular case for all_projects_by_function the caching is disabled
# (by putting the do_not_cache_query: true Process dictionary key-value)
# if the base_projects depends on a watchlist. The cache resolver that
- # is disabled must provide the `honor_do_no_cache_flag: true` explicitly,
+ # is disabled must provide the `honor_do_not_cache_flag: true` explicitly,
# so we are not disabling all of the caching, but only the one that matters
skip_cache? =
Keyword.get(opts, :honor_do_not_cache_flag, false) and
- Process.get(:do_not_cache_query) == true
+ Process.get(:__do_not_cache_query__) == true
case skip_cache? do
true -> fun.()
@@ -171,12 +194,16 @@ defmodule AbsintheCache do
end
end
- def store(cache_name \\ @cache_name, cache_key, value) do
- CacheProvider.store(cache_name, cache_key, value)
+ def store(cache_key, value), do: store(cache_name(), cache_key, value)
+
+ def store(cache_name, cache_key, value) do
+ cache_provider().store(cache_name, cache_key, value)
end
- def get_or_store(cache_name \\ @cache_name, cache_key, resolver_fn) do
- CacheProvider.get_or_store(
+ def get_or_store(cache_key, resolver_fn), do: get_or_store(cache_name(), cache_key, resolver_fn)
+
+ def get_or_store(cache_name, cache_key, resolver_fn) do
+ cache_provider().get_or_store(
cache_name,
cache_key,
resolver_fn,
@@ -189,7 +216,7 @@ defmodule AbsintheCache do
# This is way it is safe to use `store` explicitly without worrying about race
# conditions
defp cache_modify_middleware(cache_name, cache_key, {:ok, value} = result) do
- CacheProvider.store(cache_name, cache_key, result)
+ cache_provider().store(cache_name, cache_key, result)
{:ok, value}
end
@@ -200,7 +227,7 @@ defmodule AbsintheCache do
{:middleware, Absinthe.Middleware.Async = midl, {fun, opts}}
) do
caching_fun = fn ->
- CacheProvider.get_or_store(cache_name, cache_key, fun, &cache_modify_middleware/3)
+ cache_provider().get_or_store(cache_name, cache_key, fun, &cache_modify_middleware/3)
end
{:middleware, midl, {caching_fun, opts}}
@@ -212,7 +239,7 @@ defmodule AbsintheCache do
{:middleware, Absinthe.Middleware.Dataloader = midl, {loader, callback}}
) do
caching_callback = fn loader_arg ->
- CacheProvider.get_or_store(
+ cache_provider().get_or_store(
cache_name,
cache_key,
fn -> callback.(loader_arg) end,
@@ -226,14 +253,14 @@ defmodule AbsintheCache do
# Helper functions
def cache_key(name, args, opts \\ []) do
- base_ttl = args[:caching_params][:base_ttl] || Keyword.get(opts, :ttl, @ttl)
+ base_ttl = args[:caching_params][:base_ttl] || Keyword.get(opts, :ttl, default_ttl())
max_ttl_offset =
args[:caching_params][:max_ttl_offset] ||
- Keyword.get(opts, :max_ttl_offset, @max_ttl_offset)
+ Keyword.get(opts, :max_ttl_offset, default_max_ttl_offset())
- base_ttl = Enum.max([base_ttl, 1])
- max_ttl_offset = Enum.max([max_ttl_offset, 1])
+ base_ttl = max(base_ttl, 1)
+ max_ttl_offset = max(max_ttl_offset, 1)
# Used to randomize the TTL for lists of objects like list of projects
additional_args = Map.take(args, [:slug, :id])
@@ -248,13 +275,25 @@ defmodule AbsintheCache do
end
args = args |> convert_values(ttl)
- cache_key = [name, args] |> hash()
+
+ # Bucket-based invalidation: include the current datetime bucket in the key so that
+ # keys rotate over time. This relieves locking issues: if a process fails to release
+ # a lock, the key will change after the bucket TTL (see below) and the lock becomes
+ # irrelevant. Tradeoff: the same query can produce different keys in different
+ # buckets, reducing cache hit rate near bucket boundaries. Bucket duration is
+ # base_ttl + max_ttl_offset + phash2(..., 180), i.e. base_ttl + max_ttl_offset + 0..179
+ # seconds, so buckets change roughly every (base_ttl + max_ttl_offset) seconds with
+ # some jitter to avoid thundering herd.
+ bucket_ttl = base_ttl + max_ttl_offset + :erlang.phash2({name, args}, 180)
+ current_bucket = convert_values(DateTime.utc_now(), bucket_ttl)
+
+ cache_key = {current_bucket, name, args} |> hash()
{cache_key, ttl}
end
# Convert the values for using in the cache. A special treatment is done for
- # `%DateTime{}` so all datetimes in a @ttl sized window are treated the same
+ # `%DateTime{}` so all datetimes in a TTL sized window are treated the same
defp convert_values(%DateTime{} = v, ttl), do: div(DateTime.to_unix(v, :second), ttl)
defp convert_values(%_{} = v, _), do: Map.from_struct(v)
diff --git a/lib/before_send.ex b/lib/before_send.ex
index 48725c9..367f467 100644
--- a/lib/before_send.ex
+++ b/lib/before_send.ex
@@ -26,10 +26,9 @@ defmodule AbsintheCache.BeforeSend do
defmacro __using__(opts) do
quote location: :keep, bind_quoted: [opts: opts] do
- @compile :inline_list_funcs
- @compile inline: [cache_result: 2, queries_in_request: 1, has_graphql_errors?: 1]
-
@cached_queries Keyword.get(opts, :cached_queries, [])
+ @context_cache_key Keyword.get(opts, :context_cache_key, :query_cache_key)
+
def before_send(conn, %Absinthe.Blueprint{} = blueprint) do
# Do not cache in case of:
# -`:nocache` returned from a resolver
@@ -38,7 +37,7 @@ defmodule AbsintheCache.BeforeSend do
# to infinite storing the same value if there are enough requests
queries = queries_in_request(blueprint)
- do_not_cache? = Process.get(:do_not_cache_query) != nil
+ do_not_cache? = Process.get(:__do_not_cache_query__) != nil
case do_not_cache? or has_graphql_errors?(blueprint) do
true -> :ok
@@ -52,10 +51,40 @@ defmodule AbsintheCache.BeforeSend do
all_queries_cacheable? = queries |> Enum.all?(&Enum.member?(@cached_queries, &1))
if all_queries_cacheable? do
- AbsintheCache.store(
- blueprint.execution.context.query_cache_key,
- blueprint.result
- )
+ case get_cache_key(blueprint) do
+ nil -> :ok
+ cache_key -> AbsintheCache.store(cache_key, blueprint.result)
+ end
+ end
+ end
+
+ # The cache_key has the format `{key, ttl}` or just `key`. Both forms are
+ # stored under the name `key`; in the first case only the ttl differs.
+ # This also means that if a value is stored as `{key, 300}` it can be
+ # retrieved by using `{key, 10}` as in the case of `get` the ttl is ignored.
+ # This allows us to change the cache_key produced in the DocumentProvider
+ # and store it with a different ttl. The ttl is changed from the graphql cache
+ # in case `caching_params` is provided.
+ defp get_cache_key(blueprint) do
+ case get_in(blueprint, [Access.key(:execution), Access.key(:context)]) do
+ %{} = context ->
+ case Map.get(context, @context_cache_key) do
+ nil ->
+ nil
+
+ query_cache_key ->
+ case Process.get(:__change_absinthe_before_send_caching_ttl__) do
+ ttl when is_number(ttl) ->
+ {cache_key, _old_ttl} = query_cache_key
+ {cache_key, ttl}
+
+ _ ->
+ query_cache_key
+ end
+ end
+
+ _ ->
+ nil
end
end
diff --git a/lib/cache_provider.ex b/lib/cache_provider.ex
index 66a6c0c..50b5aa5 100644
--- a/lib/cache_provider.ex
+++ b/lib/cache_provider.ex
@@ -8,8 +8,6 @@ defmodule AbsintheCache.Behaviour do
@type error :: String.t()
@type stored_value :: any()
@type cache :: atom()
- @type size_type :: :megabytes
-
@callback start_link(Keyword.t()) :: {:ok, pid}
@callback child_spec(Keyword.t()) :: Supervisor.child_spec()
diff --git a/lib/cachex_cache_provider/cachex_cache_provider.ex b/lib/cachex_cache_provider/cachex_cache_provider.ex
index fa4dabe..acecafc 100644
--- a/lib/cachex_cache_provider/cachex_cache_provider.ex
+++ b/lib/cachex_cache_provider/cachex_cache_provider.ex
@@ -1,5 +1,13 @@
if Code.ensure_loaded?(Cachex) do
defmodule AbsintheCache.CachexProvider do
+ @moduledoc """
+ Cachex-based implementation of `AbsintheCache.Behaviour`.
+
+ Values are stored gzipped to reduce memory use. This format is not shared with
+ `AbsintheCache.ConCacheProvider` (in-memory); ConCache is typically used for
+ development/single-node, while Cachex is used when persistence or larger
+ caches are needed.
+ """
@behaviour AbsintheCache.Behaviour
@default_ttl_seconds 300
@@ -7,12 +15,6 @@ if Code.ensure_loaded?(Cachex) do
import Cachex.Spec
- @compile inline: [
- execute_cache_miss_function: 4,
- handle_execute_cache_miss_function: 4,
- obtain_lock: 3
- ]
-
@impl AbsintheCache.Behaviour
def start_link(opts) do
Cachex.start_link(opts(opts))
@@ -64,8 +66,11 @@ if Code.ensure_loaded?(Cachex) do
@impl AbsintheCache.Behaviour
def get(cache, key) do
case Cachex.get(cache, true_key(key)) do
- {:ok, {:stored, value}} -> value
- _ -> nil
+ {:ok, compressed_value} when is_binary(compressed_value) ->
+ decompress_value(compressed_value)
+
+ _ ->
+ nil
end
end
@@ -76,12 +81,10 @@ if Code.ensure_loaded?(Cachex) do
:ok
{:nocache, _} ->
- Process.put(:has_nocache_field, true)
-
:ok
_ ->
- cache_item(cache, key, {:stored, value})
+ cache_item(cache, key, value)
end
end
@@ -90,8 +93,8 @@ if Code.ensure_loaded?(Cachex) do
true_key = true_key(key)
case Cachex.get(cache, true_key) do
- {:ok, {:stored, value}} ->
- value
+ {:ok, compressed_value} when is_binary(compressed_value) ->
+ decompress_value(compressed_value)
_ ->
execute_cache_miss_function(cache, key, func, cache_modify_middleware)
@@ -119,9 +122,9 @@ if Code.ensure_loaded?(Cachex) do
_ = GenServer.cast(unlocker_pid, {:unlock_after, unlock_fun})
case Cachex.get(cache, true_key(key)) do
- {:ok, {:stored, value}} ->
+ {:ok, compressed_value} when is_binary(compressed_value) ->
# First check if the result has not been stored while waiting for the lock.
- value
+ decompress_value(compressed_value)
_ ->
handle_execute_cache_miss_function(
@@ -153,7 +156,7 @@ if Code.ensure_loaded?(Cachex) do
# backoff fashion - 10, 130, 375, 709, etc. milliseconds
# The backoff is capped at 2 seconds
sleep_ms = (:math.pow(attempt * 20, 1.6) + 10) |> trunc()
- sleep_ms = Enum.min([sleep_ms, 2000])
+ sleep_ms = min(sleep_ms, 2000)
Process.sleep(sleep_ms)
obtain_lock(cache_record, keys, attempt + 1)
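The `min/2` change above touches the lock-retry backoff. As a standalone sketch of the schedule it produces (the `backoff` anonymous function is illustrative, not part of the library):

```elixir
# Backoff while waiting for a lock: (attempt * 20)^1.6 + 10 milliseconds,
# capped at 2 seconds. This reproduces the 10, 130, 375, 709, ... progression
# mentioned in the comment above.
backoff = fn attempt ->
  (:math.pow(attempt * 20, 1.6) + 10)
  |> trunc()
  |> min(2000)
end

Enum.map(0..3, backoff)
# => [10, 130, 375, 709]
```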
@@ -168,28 +171,44 @@ if Code.ensure_loaded?(Cachex) do
{:middleware, _, _} = tuple ->
cache_modify_middleware.(cache, key, tuple)
- {:nocache, value} ->
- Process.put(:has_nocache_field, true)
+ {:nocache, {:ok, _result} = value} ->
+ Process.put(:__do_not_cache_query__, true)
value
{:error, _} = error ->
error
{:ok, _value} = ok_tuple ->
- cache_item(cache, key, {:stored, ok_tuple})
+ cache_item(cache, key, ok_tuple)
ok_tuple
end
end
defp cache_item(cache, {key, ttl}, value) when is_integer(ttl) do
- Cachex.put(cache, key, value, ttl: :timer.seconds(ttl))
+ Cachex.put(cache, key, compress_value(value), ttl: :timer.seconds(ttl))
end
defp cache_item(cache, key, value) do
- Cachex.put(cache, key, value, ttl: :timer.seconds(@default_ttl_seconds))
+ Cachex.put(cache, key, compress_value(value), ttl: :timer.seconds(@default_ttl_seconds))
end
defp true_key({key, ttl}) when is_integer(ttl), do: key
defp true_key(key), do: key
+
+ defp compress_value(value) do
+ value
+ |> :erlang.term_to_binary()
+ |> :zlib.gzip()
+ end
+
+ defp decompress_value(value) do
+ try do
+ value
+ |> :zlib.gunzip()
+ |> :erlang.binary_to_term([:safe])
+ rescue
+ _ -> nil
+ end
+ end
end
end
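The gzip-over-ETF storage described in the moduledoc is a plain round trip through the Erlang standard library. A minimal standalone sketch (the sample value is illustrative):

```elixir
# Compress: Erlang external term format, then gzip (what cache_item/3 stores).
value = {:ok, %{slug: "bitcoin"}}

compressed =
  value
  |> :erlang.term_to_binary()
  |> :zlib.gzip()

# Decompress: gunzip, then decode with :safe so binaries referencing unknown
# atoms are rejected. In the provider, any decode failure is rescued to nil,
# so a corrupt entry behaves like a cache miss.
decompressed =
  compressed
  |> :zlib.gunzip()
  |> :erlang.binary_to_term([:safe])

decompressed == value
```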
diff --git a/lib/cachex_cache_provider/cachex_unlocker.ex b/lib/cachex_cache_provider/cachex_unlocker.ex
index 4bad0ee..6e0360e 100644
--- a/lib/cachex_cache_provider/cachex_unlocker.ex
+++ b/lib/cachex_cache_provider/cachex_unlocker.ex
@@ -1,11 +1,11 @@
defmodule AbsintheCache.CachexProvider.Unlocker do
@moduledoc ~s"""
- Module that makes sure that locks acquired during get_or_store locking in
- the Cachex provider.
+ Ensures that locks acquired during get_or_store in the Cachex provider
+ are released even if the owning process terminates.
- When locks are acquired, a process is spawned that unlocks the lock in case
- something wrong does with the process that obtained it. If the process finishes
- fast without issues it will kill this process.
+ When a lock is acquired, an Unlocker process is spawned that will release
+ the lock after a timeout if the owning process fails to do so. If the owning
+ process finishes normally it stops this process.
"""
use GenServer
@@ -42,7 +42,4 @@ defmodule AbsintheCache.CachexProvider.Unlocker do
{:stop, :normal, state}
end
- def terminate(_reason, _state) do
- :normal
- end
end
diff --git a/lib/con_cache_provider.ex b/lib/con_cache_provider.ex
index de969f9..1672aee 100644
--- a/lib/con_cache_provider.ex
+++ b/lib/con_cache_provider.ex
@@ -4,14 +4,6 @@ defmodule AbsintheCache.ConCacheProvider do
"""
@behaviour AbsintheCache.Behaviour
- @compile {:inline,
- get: 2,
- store: 3,
- get_or_store: 4,
- cache_item: 3,
- get_or_store_isolated: 5,
- execute_and_maybe_cache_function: 4}
-
@max_cache_ttl 7200
@impl AbsintheCache.Behaviour
@@ -37,23 +29,23 @@ defmodule AbsintheCache.ConCacheProvider do
def size(cache) do
bytes_size = :ets.info(ConCache.ets(cache), :memory) * :erlang.system_info(:wordsize)
- _megabytes_size = (bytes_size / (1024 * 1024)) |> Float.round(2)
+ (bytes_size / (1024 * 1024)) |> Float.round(2)
end
@impl AbsintheCache.Behaviour
def count(cache) do
cache
|> ConCache.ets()
- |> :ets.tab2list()
- |> length
+ |> :ets.info(:size)
end
@impl AbsintheCache.Behaviour
def clear_all(cache) do
cache
|> ConCache.ets()
- |> :ets.tab2list()
- |> Enum.each(fn {key, _} -> ConCache.delete(cache, key) end)
+ |> :ets.delete_all_objects()
+
+ :ok
end
@impl AbsintheCache.Behaviour
@@ -71,7 +63,6 @@ defmodule AbsintheCache.ConCacheProvider do
:ok
{:nocache, _} ->
- Process.put(:has_nocache_field, true)
:ok
value ->
@@ -129,7 +120,7 @@ defmodule AbsintheCache.ConCacheProvider do
middleware_func.(cache, key, tuple)
{:nocache, {:ok, _result} = value} ->
- Process.put(:do_not_cache_query, true)
+ Process.put(:__do_not_cache_query__, true)
value
{:ok, _value} = ok_tuple ->
@@ -138,10 +129,12 @@ defmodule AbsintheCache.ConCacheProvider do
end
end
- defp cache_item(cache, {key, ttl}, value) when is_integer(ttl) and ttl <= @max_cache_ttl do
+ defp cache_item(cache, {key, ttl}, value) when is_integer(ttl) do
+ clamped_ttl = min(ttl, @max_cache_ttl)
+
ConCache.put(cache, key, %ConCache.Item{
value: value,
- ttl: :timer.seconds(ttl)
+ ttl: :timer.seconds(clamped_ttl)
})
end
@@ -149,6 +142,6 @@ defmodule AbsintheCache.ConCacheProvider do
ConCache.put(cache, key, value)
end
- defp true_key({key, ttl}) when is_integer(ttl) and ttl <= @max_cache_ttl, do: key
+ defp true_key({key, ttl}) when is_integer(ttl), do: key
defp true_key(key), do: key
end
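The guard change above means a TTL larger than `@max_cache_ttl` (7200 seconds) is no longer rejected by the clause; it is clamped to the maximum instead. A sketch of the clamp in isolation (`max_cache_ttl` mirrors the module attribute):

```elixir
# Mirrors the clamped_ttl computation in cache_item/3.
max_cache_ttl = 7200
clamp = fn ttl -> min(ttl, max_cache_ttl) end

clamp.(60)      # 60, unchanged
clamp.(99_999)  # 7200, clamped to the cap
```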
diff --git a/lib/document_provider.ex b/lib/document_provider.ex
index c39bbfa..ddf6a74 100644
--- a/lib/document_provider.ex
+++ b/lib/document_provider.ex
@@ -55,19 +55,17 @@ defmodule AbsintheCache.DocumentProvider do
are used. They are defined in the module attribute @cache_fields. The only
values that are converted to something else in the process of construction
of the cache key are:
- - DateTime - It is rounded by TTL so all datetiems in a range yield the same
+ - DateTime - It is rounded by TTL so all datetimes in a range yield the same
cache key
- Struct - All structs are converted to plain maps
"""
use Absinthe.Phase
- @compile :inline_list_funcs
- @compile inline: [add_cache_key_to_context: 2, cache_key_from_params: 2]
-
# Access opts from the surrounding `AbsintheCache.DocumentProvider` module
@ttl Keyword.get(opts, :ttl, 120)
- @max_ttl_ffset Keyword.get(opts, :max_ttl_offset, 60)
+ @max_ttl_offset Keyword.get(opts, :max_ttl_offset, 60)
+ @context_cache_key Keyword.get(opts, :context_cache_key, :query_cache_key)
@cache_key_fun Keyword.get(
opts,
:additional_cache_key_args_fun,
@@ -85,7 +83,7 @@ defmodule AbsintheCache.DocumentProvider do
{"bp_root", additional_args} |> :erlang.phash2(),
sanitize_blueprint(bp_root),
ttl: @ttl,
- max_ttl_offset: @max_ttl_ffset
+ max_ttl_offset: @max_ttl_offset
)
bp_root = add_cache_key_to_context(bp_root, cache_key)
@@ -97,20 +95,19 @@ defmodule AbsintheCache.DocumentProvider do
result ->
# Storing it again `touch`es it and the TTL timer is restarted.
# This can lead to infinite storing the same value
- Process.put(:do_not_cache_query, true)
+ Process.put(:__do_not_cache_query__, true)
{:jump, %{bp_root | result: result}, AbsintheCache.Phase.Document.Idempotent}
end
end
- # TODO: Make this function configurable
defp add_cache_key_to_context(
%{execution: %{context: context} = execution} = blueprint,
cache_key
) do
%{
blueprint
- | execution: %{execution | context: Map.put(context, :query_cache_key, cache_key)}
+ | execution: %{execution | context: Map.put(context, @context_cache_key, cache_key)}
}
end
@@ -144,44 +141,6 @@ defmodule AbsintheCache.DocumentProvider do
end
defp sanitize_blueprint(data), do: data
-
- # Extract the query and variables from the params map and genenrate
- # a cache key using them.
-
- # The query is fetched as is.
- # The variables that are valid datetime types (have the `from` or `to` name
- # and valid value) are converted to Elixir DateTime type prior to being used.
- # This is done because the datetimes are rounded so all datetimes in a N minute
- # buckets have the same cache key.
-
- # The other param types are not cast as they would be used the same way in both
- # places where the cache key is calculated.
- defp cache_key_from_params(params, permissions) do
- query = Map.get(params, "query", "")
-
- variables =
- case Map.get(params, "variables") do
- map when is_map(map) -> map
- vars when is_binary(vars) and vars != "" -> vars |> Jason.decode!()
- _ -> %{}
- end
- |> Enum.map(fn
- {key, value} when is_binary(value) ->
- case DateTime.from_iso8601(value) do
- {:ok, datetime, _} -> {key, datetime}
- _ -> {key, value}
- end
-
- pair ->
- pair
- end)
- |> Map.new()
-
- AbsintheCache.cache_key({query, permissions}, variables,
- ttl: @ttl,
- max_ttl_offset: @max_ttl_ffset
- )
- end
end
end
end
diff --git a/mix.exs b/mix.exs
index 5d4247f..ab2ca31 100644
--- a/mix.exs
+++ b/mix.exs
@@ -28,9 +28,9 @@ defmodule AbsintheCache.MixProject do
[
{:absinthe, "~> 1.4"},
{:absinthe_plug, ">= 0.0.0"},
- {:con_cache, ">= 0.14.0"},
+ {:con_cache, "~> 1.0"},
{:ex_doc, ">= 0.0.0", only: :dev, runtime: false},
- {:inflex, "~> 2.0.0"},
+ {:inflex, "~> 2.0"},
{:jason, ">= 1.1.2"}
]
end
diff --git a/mix.lock b/mix.lock
index 97170ae..5b350ab 100644
--- a/mix.lock
+++ b/mix.lock
@@ -1,15 +1,18 @@
%{
- "absinthe": {:hex, :absinthe, "1.4.16", "0933e4d9f12652b12115d5709c0293a1bf78a22578032e9ad0dad4efee6b9eb1", [:mix], [{:dataloader, "~> 1.0.0", [hex: :dataloader, repo: "hexpm", optional: true]}, {:decimal, "~> 1.0", [hex: :decimal, repo: "hexpm", optional: true]}], "hexpm"},
- "absinthe_plug": {:hex, :absinthe_plug, "1.4.7", "939b6b9e1c7abc6b399a5b49faa690a1fbb55b195c670aa35783b14b08ccec7a", [:mix], [{:absinthe, "~> 1.4.11", [hex: :absinthe, repo: "hexpm", optional: false]}, {:plug, "~> 1.3.2 or ~> 1.4", [hex: :plug, repo: "hexpm", optional: false]}], "hexpm"},
- "con_cache": {:hex, :con_cache, "0.14.0", "863acb90fa08017be3129074993af944cf7a4b6c3ee7c06c5cd0ed6b94fbc223", [:mix], [], "hexpm"},
- "earmark": {:hex, :earmark, "1.4.0", "397e750b879df18198afc66505ca87ecf6a96645545585899f6185178433cc09", [:mix], [], "hexpm"},
- "ex_doc": {:hex, :ex_doc, "0.21.2", "caca5bc28ed7b3bdc0b662f8afe2bee1eedb5c3cf7b322feeeb7c6ebbde089d6", [:mix], [{:earmark, "~> 1.3.3 or ~> 1.4", [hex: :earmark, repo: "hexpm", optional: false]}, {:makeup_elixir, "~> 0.14", [hex: :makeup_elixir, repo: "hexpm", optional: false]}], "hexpm"},
- "inflex": {:hex, :inflex, "2.0.0", "db69d542b8fdb23ac667f9bc0c2395a3983fa2da6ae2efa7ab5dc541928f7a75", [:mix], [], "hexpm"},
- "jason": {:hex, :jason, "1.1.2", "b03dedea67a99223a2eaf9f1264ce37154564de899fd3d8b9a21b1a6fd64afe7", [:mix], [{:decimal, "~> 1.0", [hex: :decimal, repo: "hexpm", optional: true]}], "hexpm"},
- "makeup": {:hex, :makeup, "1.0.0", "671df94cf5a594b739ce03b0d0316aa64312cee2574b6a44becb83cd90fb05dc", [:mix], [{:nimble_parsec, "~> 0.5.0", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm"},
- "makeup_elixir": {:hex, :makeup_elixir, "0.14.0", "cf8b7c66ad1cff4c14679698d532f0b5d45a3968ffbcbfd590339cb57742f1ae", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}], "hexpm"},
- "mime": {:hex, :mime, "1.3.1", "30ce04ab3175b6ad0bdce0035cba77bba68b813d523d1aac73d9781b4d193cf8", [:mix], [], "hexpm"},
- "nimble_parsec": {:hex, :nimble_parsec, "0.5.1", "c90796ecee0289dbb5ad16d3ad06f957b0cd1199769641c961cfe0b97db190e0", [:mix], [], "hexpm"},
- "plug": {:hex, :plug, "1.8.3", "12d5f9796dc72e8ac9614e94bda5e51c4c028d0d428e9297650d09e15a684478", [:mix], [{:mime, "~> 1.0", [hex: :mime, repo: "hexpm", optional: false]}, {:plug_crypto, "~> 1.0", [hex: :plug_crypto, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4", [hex: :telemetry, repo: "hexpm", optional: true]}], "hexpm"},
- "plug_crypto": {:hex, :plug_crypto, "1.0.0", "18e49317d3fa343f24620ed22795ec29d4a5e602d52d1513ccea0b07d8ea7d4d", [:mix], [], "hexpm"},
+ "absinthe": {:hex, :absinthe, "1.9.0", "28f11753d01c0e8b6cb6e764a23cf4081e0e6cae88f53f4c9e4320912aee9c07", [:mix], [{:dataloader, "~> 1.0.0 or ~> 2.0", [hex: :dataloader, repo: "hexpm", optional: true]}, {:decimal, "~> 2.0", [hex: :decimal, repo: "hexpm", optional: true]}, {:nimble_parsec, "~> 1.2.2 or ~> 1.3", [hex: :nimble_parsec, repo: "hexpm", optional: false]}, {:opentelemetry_process_propagator, "~> 0.2.1 or ~> 0.3", [hex: :opentelemetry_process_propagator, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "db65993420944ad90e932827663d4ab704262b007d4e3900cd69615f14ccc8ce"},
+ "absinthe_plug": {:hex, :absinthe_plug, "1.5.9", "4f66fd46aecf969b349dd94853e6132db6d832ae6a4b951312b6926ad4ee7ca3", [:mix], [{:absinthe, "~> 1.7", [hex: :absinthe, repo: "hexpm", optional: false]}, {:plug, "~> 1.4", [hex: :plug, repo: "hexpm", optional: false]}], "hexpm", "dcdc84334b0e9e2cd439bd2653678a822623f212c71088edf0a4a7d03f1fa225"},
+ "con_cache": {:hex, :con_cache, "1.1.1", "9f47a68dfef5ac3bbff8ce2c499869dbc5ba889dadde6ac4aff8eb78ddaf6d82", [:mix], [{:telemetry, "~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "1def4d1bec296564c75b5bbc60a19f2b5649d81bfa345a2febcc6ae380e8ae15"},
+ "earmark": {:hex, :earmark, "1.4.0", "397e750b879df18198afc66505ca87ecf6a96645545585899f6185178433cc09", [:mix], [], "hexpm", "4bedcec35de03b5f559fd2386be24d08f7637c374d3a85d3fe0911eecdae838a"},
+ "earmark_parser": {:hex, :earmark_parser, "1.4.44", "f20830dd6b5c77afe2b063777ddbbff09f9759396500cdbe7523efd58d7a339c", [:mix], [], "hexpm", "4778ac752b4701a5599215f7030989c989ffdc4f6df457c5f36938cc2d2a2750"},
+ "ex_doc": {:hex, :ex_doc, "0.40.1", "67542e4b6dde74811cfd580e2c0149b78010fd13001fda7cfeb2b2c2ffb1344d", [:mix], [{:earmark_parser, "~> 1.4.44", [hex: :earmark_parser, repo: "hexpm", optional: false]}, {:makeup_c, ">= 0.1.0", [hex: :makeup_c, repo: "hexpm", optional: true]}, {:makeup_elixir, "~> 0.14 or ~> 1.0", [hex: :makeup_elixir, repo: "hexpm", optional: false]}, {:makeup_erlang, "~> 0.1 or ~> 1.0", [hex: :makeup_erlang, repo: "hexpm", optional: false]}, {:makeup_html, ">= 0.1.0", [hex: :makeup_html, repo: "hexpm", optional: true]}], "hexpm", "bcef0e2d360d93ac19f01a85d58f91752d930c0a30e2681145feea6bd3516e00"},
+ "inflex": {:hex, :inflex, "2.1.0", "a365cf0821a9dacb65067abd95008ca1b0bb7dcdd85ae59965deef2aa062924c", [:mix], [], "hexpm", "14c17d05db4ee9b6d319b0bff1bdf22aa389a25398d1952c7a0b5f3d93162dd8"},
+ "jason": {:hex, :jason, "1.4.4", "b9226785a9aa77b6857ca22832cffa5d5011a667207eb2a0ad56adb5db443b8a", [:mix], [{:decimal, "~> 1.0 or ~> 2.0", [hex: :decimal, repo: "hexpm", optional: true]}], "hexpm", "c5eb0cab91f094599f94d55bc63409236a8ec69a21a67814529e8d5f6cc90b3b"},
+ "makeup": {:hex, :makeup, "1.2.1", "e90ac1c65589ef354378def3ba19d401e739ee7ee06fb47f94c687016e3713d1", [:mix], [{:nimble_parsec, "~> 1.4", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "d36484867b0bae0fea568d10131197a4c2e47056a6fbe84922bf6ba71c8d17ce"},
+ "makeup_elixir": {:hex, :makeup_elixir, "1.0.1", "e928a4f984e795e41e3abd27bfc09f51db16ab8ba1aebdba2b3a575437efafc2", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}, {:nimble_parsec, "~> 1.2.3 or ~> 1.3", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "7284900d412a3e5cfd97fdaed4f5ed389b8f2b4cb49efc0eb3bd10e2febf9507"},
+ "makeup_erlang": {:hex, :makeup_erlang, "1.0.3", "4252d5d4098da7415c390e847c814bad3764c94a814a0b4245176215615e1035", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}], "hexpm", "953297c02582a33411ac6208f2c6e55f0e870df7f80da724ed613f10e6706afd"},
+ "mime": {:hex, :mime, "2.0.7", "b8d739037be7cd402aee1ba0306edfdef982687ee7e9859bee6198c1e7e2f128", [:mix], [], "hexpm", "6171188e399ee16023ffc5b76ce445eb6d9672e2e241d2df6050f3c771e80ccd"},
+ "nimble_parsec": {:hex, :nimble_parsec, "1.4.2", "8efba0122db06df95bfaa78f791344a89352ba04baedd3849593bfce4d0dc1c6", [:mix], [], "hexpm", "4b21398942dda052b403bbe1da991ccd03a053668d147d53fb8c4e0efe09c973"},
+ "plug": {:hex, :plug, "1.19.1", "09bac17ae7a001a68ae393658aa23c7e38782be5c5c00c80be82901262c394c0", [:mix], [{:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:plug_crypto, "~> 1.1.1 or ~> 1.2 or ~> 2.0", [hex: :plug_crypto, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4.3 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "560a0017a8f6d5d30146916862aaf9300b7280063651dd7e532b8be168511e62"},
+ "plug_crypto": {:hex, :plug_crypto, "2.1.1", "19bda8184399cb24afa10be734f84a16ea0a2bc65054e23a62bb10f06bc89491", [:mix], [], "hexpm", "6470bce6ffe41c8bd497612ffde1a7e4af67f36a15eea5f921af71cf3e11247c"},
+ "telemetry": {:hex, :telemetry, "1.3.0", "fedebbae410d715cf8e7062c96a1ef32ec22e764197f70cda73d82778d61e7a2", [:rebar3], [], "hexpm", "7015fc8919dbe63764f4b4b87a95b7c0996bd539e0d499be6ec9d7f3875b79e6"},
}
diff --git a/test/absinthe_cache_test.exs b/test/absinthe_cache_test.exs
index 9e230e8..65a7414 100644
--- a/test/absinthe_cache_test.exs
+++ b/test/absinthe_cache_test.exs
@@ -1,54 +1,275 @@
defmodule AbsintheCacheTest do
defmodule Schema do
use Absinthe.Schema
- import AbsintheCache, only: [cache_resolve: 1]
+ import AbsintheCache, only: [cache_resolve: 1, cache_resolve: 2]
require Logger
query do
- field :get_name_cached, non_null(:string) do
+ field :cached_ok, non_null(:string) do
cache_resolve(fn _, _, _ ->
- Logger.info("PRINTING SOME DATA")
- {:ok, "Ivan"}
+ Logger.info("CACHED_OK_CALLED")
+ {:ok, "cached_value"}
end)
end
- field :get_name_not_cached, non_null(:string) do
+ field :not_cached, non_null(:string) do
resolve(fn _, _, _ ->
- Logger.info("PRINTING SOME DATA")
- {:ok, "Ivan"}
+ Logger.info("NOT_CACHED_CALLED")
+ {:ok, "uncached_value"}
end)
end
+
+ field :cached_error, :string do
+ cache_resolve(fn _, _, _ ->
+ Logger.info("CACHED_ERROR_CALLED")
+ {:error, "something went wrong"}
+ end)
+ end
+
+ field :cached_nocache, :string do
+ cache_resolve(fn _, _, _ ->
+ Logger.info("CACHED_NOCACHE_CALLED")
+ {:nocache, {:ok, "temporary"}}
+ end)
+ end
+
+ field :cached_honor_flag, non_null(:string) do
+ cache_resolve(
+ fn _, _, _ ->
+ Logger.info("CACHED_HONOR_FLAG_CALLED")
+ {:ok, "honor_value"}
+ end,
+ honor_do_not_cache_flag: true
+ )
+ end
+
+ field :cached_fun_name, non_null(:string) do
+ cache_resolve(
+ fn _, _, _ ->
+ Logger.info("CACHED_FUN_NAME_CALLED")
+ {:ok, "named_value"}
+ end,
+ fun_name: :custom_name
+ )
+ end
end
end
- use AbsintheCache.TestCase, async: true
+ use AbsintheCache.TestCase, async: false
import ExUnit.CaptureLog
- test "uncached function is called every time" do
- fun = fn ->
- Absinthe.run("{ getNameNotCached }", Schema, root_value: %{})
+ describe "cache_resolve macro" do
+ test "cached resolver executes only on first call" do
+ fun = fn -> Absinthe.run("{ cachedOk }", Schema, root_value: %{}) end
+
+ assert capture_log(fun) =~ "CACHED_OK_CALLED"
+ refute capture_log(fun) =~ "CACHED_OK_CALLED"
+ refute capture_log(fun) =~ "CACHED_OK_CALLED"
+ end
+
+ test "uncached resolver executes every time" do
+ fun = fn -> Absinthe.run("{ notCached }", Schema, root_value: %{}) end
+
+ assert capture_log(fun) =~ "NOT_CACHED_CALLED"
+ assert capture_log(fun) =~ "NOT_CACHED_CALLED"
+ assert capture_log(fun) =~ "NOT_CACHED_CALLED"
+ end
+
+ test "error result is not cached — resolver re-executes each time" do
+ fun = fn -> Absinthe.run("{ cachedError }", Schema, root_value: %{}) end
+
+ assert capture_log(fun) =~ "CACHED_ERROR_CALLED"
+ assert capture_log(fun) =~ "CACHED_ERROR_CALLED"
+ assert capture_log(fun) =~ "CACHED_ERROR_CALLED"
+ end
+
+ test "nocache result is not cached — resolver re-executes each time" do
+ fun = fn -> Absinthe.run("{ cachedNocache }", Schema, root_value: %{}) end
+
+ assert capture_log(fun) =~ "CACHED_NOCACHE_CALLED"
+ assert capture_log(fun) =~ "CACHED_NOCACHE_CALLED"
+ assert capture_log(fun) =~ "CACHED_NOCACHE_CALLED"
+ end
+
+ test "honor_do_not_cache_flag: true skips cache when process flag is set" do
+ fun = fn ->
+ Process.put(:__do_not_cache_query__, true)
+ Absinthe.run("{ cachedHonorFlag }", Schema, root_value: %{})
+ end
+
+ assert capture_log(fun) =~ "CACHED_HONOR_FLAG_CALLED"
+ assert capture_log(fun) =~ "CACHED_HONOR_FLAG_CALLED"
+ assert capture_log(fun) =~ "CACHED_HONOR_FLAG_CALLED"
+ end
+
+ test "honor_do_not_cache_flag: true still caches when flag is NOT set" do
+ Process.delete(:__do_not_cache_query__)
+
+ fun = fn ->
+ Absinthe.run("{ cachedHonorFlag }", Schema, root_value: %{})
+ end
+
+ assert capture_log(fun) =~ "CACHED_HONOR_FLAG_CALLED"
+ refute capture_log(fun) =~ "CACHED_HONOR_FLAG_CALLED"
+ end
+
+ test "fun_name option works — resolver is cached using the custom name" do
+ fun = fn -> Absinthe.run("{ cachedFunName }", Schema, root_value: %{}) end
+
+ assert capture_log(fun) =~ "CACHED_FUN_NAME_CALLED"
+ refute capture_log(fun) =~ "CACHED_FUN_NAME_CALLED"
+ end
+ end
+
+ describe "wrap/2,3,4" do
+ test "wrapped function caches result, second call returns cached" do
+ call_count = :counters.new(1, [:atomics])
+
+ wrapped = AbsintheCache.wrap(
+ fn ->
+ :counters.add(call_count, 1, 1)
+ {:ok, "wrapped_val"}
+ end,
+ :wrap_test,
+ %{}
+ )
+
+ assert wrapped.() == {:ok, "wrapped_val"}
+ assert wrapped.() == {:ok, "wrapped_val"}
+ assert :counters.get(call_count, 1) == 1
+ end
+
+ test "wrapped function with different args produces separate cache entries" do
+ fun1 = AbsintheCache.wrap(fn -> {:ok, "a"} end, :wrap_args, %{x: 1})
+ fun2 = AbsintheCache.wrap(fn -> {:ok, "b"} end, :wrap_args, %{x: 2})
+
+ assert fun1.() == {:ok, "a"}
+ assert fun2.() == {:ok, "b"}
end
- # Every time the rsolver is executed
- assert capture_log(fun) =~ "PRINTING SOME DATA"
- assert capture_log(fun) =~ "PRINTING SOME DATA"
- assert capture_log(fun) =~ "PRINTING SOME DATA"
- assert capture_log(fun) =~ "PRINTING SOME DATA"
- assert capture_log(fun) =~ "PRINTING SOME DATA"
+ test "wrapped function returning error is not cached" do
+ call_count = :counters.new(1, [:atomics])
+
+ wrapped = AbsintheCache.wrap(
+ fn ->
+ :counters.add(call_count, 1, 1)
+ {:error, "fail"}
+ end,
+ :wrap_error,
+ %{}
+ )
+
+ assert wrapped.() == {:error, "fail"}
+ assert wrapped.() == {:error, "fail"}
+ assert :counters.get(call_count, 1) == 2
+ end
end
- test "cached function is called only the first time" do
- fun = fn ->
- Absinthe.run("{ getNameCached }", Schema, root_value: %{})
+ describe "store/get" do
+ test "direct store + get round-trip" do
+ AbsintheCache.store(:graphql_cache, "direct_key", {:ok, "direct_val"})
+ assert AbsintheCache.get("direct_key") == {:ok, "direct_val"}
+ end
+
+ test "store/2 uses the default cache name" do
+ AbsintheCache.store("two_arg_key", {:ok, "two_arg_val"})
+ assert AbsintheCache.get("two_arg_key") == {:ok, "two_arg_val"}
end
- # Every time the rsolver is executed
- assert capture_log(fun) =~ "PRINTING SOME DATA"
- refute capture_log(fun) =~ "PRINTING SOME DATA"
- refute capture_log(fun) =~ "PRINTING SOME DATA"
- refute capture_log(fun) =~ "PRINTING SOME DATA"
- refute capture_log(fun) =~ "PRINTING SOME DATA"
+ test "get returns nil for missing key" do
+ assert AbsintheCache.get("missing_key") == nil
+ end
+
+ test "store with {key, ttl} tuple" do
+ AbsintheCache.store(:graphql_cache, {"ttl_key", 60}, {:ok, "ttl_val"})
+ assert AbsintheCache.get({"ttl_key", 60}) == {:ok, "ttl_val"}
+ end
+
+ test "store with error is ignored" do
+ AbsintheCache.store("err_store", {:error, "bad"})
+ assert AbsintheCache.get("err_store") == nil
+ end
+
+ test "store with nocache is ignored" do
+ AbsintheCache.store("nc_store", {:nocache, {:ok, "temp"}})
+ assert AbsintheCache.get("nc_store") == nil
+ end
+
+ test "overwriting a key updates the value" do
+ AbsintheCache.store("overwrite", {:ok, "first"})
+ assert AbsintheCache.get("overwrite") == {:ok, "first"}
+ AbsintheCache.store("overwrite", {:ok, "second"})
+ assert AbsintheCache.get("overwrite") == {:ok, "second"}
+ end
+ end
+
+ describe "get_or_store/2" do
+ test "executes function on miss and caches the result" do
+ call_count = :counters.new(1, [:atomics])
+
+ fun = fn ->
+ :counters.add(call_count, 1, 1)
+ {:ok, "computed"}
+ end
+
+ {key, _ttl} = AbsintheCache.cache_key(:gos_test, %{})
+
+ assert AbsintheCache.get_or_store(key, fun) == {:ok, "computed"}
+ assert AbsintheCache.get_or_store(key, fun) == {:ok, "computed"}
+ assert :counters.get(call_count, 1) == 1
+ end
+
+ test "does not cache errors" do
+ call_count = :counters.new(1, [:atomics])
+
+ fun = fn ->
+ :counters.add(call_count, 1, 1)
+ {:error, "fail"}
+ end
+
+ {key, _ttl} = AbsintheCache.cache_key(:gos_err_test, %{})
+
+ assert AbsintheCache.get_or_store(key, fun) == {:error, "fail"}
+ assert AbsintheCache.get_or_store(key, fun) == {:error, "fail"}
+ assert :counters.get(call_count, 1) == 2
+ end
+ end
+
+ describe "count, size, clear_all" do
+ test "count returns 0 on empty cache" do
+ assert AbsintheCache.count() == 0
+ end
+
+ test "count increments after stores" do
+ AbsintheCache.store(:graphql_cache, "cnt1", {:ok, 1})
+ AbsintheCache.store(:graphql_cache, "cnt2", {:ok, 2})
+ assert AbsintheCache.count() == 2
+ end
+
+ test "size returns a non-negative float" do
+ size = AbsintheCache.size()
+ assert is_float(size)
+ assert size >= 0.0
+ end
+
+ test "clear_all resets count to 0" do
+ AbsintheCache.store(:graphql_cache, "clr1", {:ok, 1})
+ AbsintheCache.store(:graphql_cache, "clr2", {:ok, 2})
+ assert AbsintheCache.count() > 0
+
+ AbsintheCache.clear_all()
+ assert AbsintheCache.count() == 0
+ end
+ end
+
+ describe "child_spec" do
+ test "returns a valid child spec map with :id and :start" do
+ spec = AbsintheCache.child_spec(name: :test_spec_cache, id: :test_spec_cache)
+ assert is_map(spec)
+ assert Map.has_key?(spec, :id)
+ assert Map.has_key?(spec, :start)
+ end
end
end
diff --git a/test/cache_key_test.exs b/test/cache_key_test.exs
new file mode 100644
index 0000000..feda948
--- /dev/null
+++ b/test/cache_key_test.exs
@@ -0,0 +1,91 @@
+defmodule AbsintheCache.CacheKeyTest do
+ use AbsintheCache.TestCase, async: false
+
+ describe "cache_key/3" do
+ test "returns a {binary, integer} tuple" do
+ {key, ttl} = AbsintheCache.cache_key(:my_resolver, %{})
+ assert is_binary(key)
+ assert is_integer(ttl)
+ end
+
+ # Cache keys include a time-based bucket (see AbsintheCache.cache_key/3). Two calls
+ # in the same bucket produce the same key; calls across a bucket boundary do not.
+ # The two calls below run back to back, so they land in the same time bucket.
+ test "same inputs produce same key when called quickly" do
+ {key1, ttl1} = AbsintheCache.cache_key(:same_resolver, %{slug: "bitcoin"})
+ {key2, ttl2} = AbsintheCache.cache_key(:same_resolver, %{slug: "bitcoin"})
+ assert key1 == key2
+ assert ttl1 == ttl2
+ end
+
+ test "different names produce different keys" do
+ {key1, _} = AbsintheCache.cache_key(:resolver_a, %{})
+ {key2, _} = AbsintheCache.cache_key(:resolver_b, %{})
+ assert key1 != key2
+ end
+
+ test "different args produce different keys" do
+ {key1, _} = AbsintheCache.cache_key(:resolver, %{slug: "bitcoin"})
+ {key2, _} = AbsintheCache.cache_key(:resolver, %{slug: "ethereum"})
+ assert key1 != key2
+ end
+
+ test "default TTL is between 300 and 420 (base 300 + 0..120 offset)" do
+ {_key, ttl} = AbsintheCache.cache_key(:ttl_test, %{})
+ assert ttl >= 300
+ assert ttl <= 420
+ end
+
+ test "custom ttl: option is respected" do
+ {_key, ttl} = AbsintheCache.cache_key(:custom_ttl, %{}, ttl: 600)
+ # base is 600, max_ttl_offset defaults to 120, so ttl in [600, 720]
+ assert ttl >= 600
+ assert ttl <= 720
+ end
+
+ test "custom max_ttl_offset: option is respected" do
+ {_key, ttl} = AbsintheCache.cache_key(:offset_test, %{}, ttl: 500, max_ttl_offset: 10)
+ assert ttl >= 500
+ assert ttl <= 510
+ end
+
+ test "caching_params in args overrides TTL" do
+ args = %{caching_params: %{base_ttl: 1000, max_ttl_offset: 50}}
+ {_key, ttl} = AbsintheCache.cache_key(:params_test, args)
+ assert ttl >= 1000
+ assert ttl <= 1050
+ end
+
+ test "caching_params sets :__change_absinthe_before_send_caching_ttl__ in process dict" do
+ Process.delete(:__change_absinthe_before_send_caching_ttl__)
+ args = %{caching_params: %{base_ttl: 800, max_ttl_offset: 10}}
+ {_key, ttl} = AbsintheCache.cache_key(:before_send_test, args)
+ stored = Process.get(:__change_absinthe_before_send_caching_ttl__)
+ assert stored == ttl
+ end
+
+ test "DateTime values in args are bucketed — two datetimes seconds apart produce same key" do
+ now = DateTime.utc_now()
+ later = DateTime.add(now, 1, :second)
+
+ {key1, _} = AbsintheCache.cache_key(:dt_test, %{from: now})
+ {key2, _} = AbsintheCache.cache_key(:dt_test, %{from: later})
+ assert key1 == key2
+ end
+
+ test "structs in args are converted via Map.from_struct — __struct__ key is stripped" do
+ # Two structs of same type with same data produce same key
+ uri1 = %URI{host: "example.com", path: "/api"}
+ uri2 = %URI{host: "example.com", path: "/api"}
+
+ {key1, _} = AbsintheCache.cache_key(:struct_test, %{source: uri1})
+ {key2, _} = AbsintheCache.cache_key(:struct_test, %{source: uri2})
+ assert key1 == key2
+
+ # A struct with different data produces a different key
+ uri3 = %URI{host: "other.com", path: "/api"}
+ {key3, _} = AbsintheCache.cache_key(:struct_test, %{source: uri3})
+ assert key1 != key3
+ end
+ end
+end
diff --git a/test/con_cache_provider_test.exs b/test/con_cache_provider_test.exs
new file mode 100644
index 0000000..64856b5
--- /dev/null
+++ b/test/con_cache_provider_test.exs
@@ -0,0 +1,204 @@
+defmodule AbsintheCache.ConCacheProviderTest do
+ use ExUnit.Case, async: true
+
+ alias AbsintheCache.ConCacheProvider, as: Provider
+
+ @cache_name :test_provider_cache
+
+ setup do
+ {:ok, pid} =
+ Provider.start_link(
+ name: @cache_name,
+ ttl_check_interval: :timer.seconds(30),
+ global_ttl: :timer.seconds(300)
+ )
+
+ on_exit(fn ->
+ ExUnit.CaptureLog.capture_log(fn -> Process.exit(pid, :kill) end)
+ end)
+
+ %{cache_pid: pid}
+ end
+
+ describe "get/2" do
+ test "returns nil for missing key" do
+ assert Provider.get(@cache_name, "nonexistent") == nil
+ end
+ end
+
+ describe "store/3" do
+ test "stores {:ok, value} and retrieves it with get/2" do
+ Provider.store(@cache_name, "key1", {:ok, "hello"})
+ assert Provider.get(@cache_name, "key1") == {:ok, "hello"}
+ end
+
+ test "ignores {:error, reason} — value is not persisted" do
+ Provider.store(@cache_name, "err_key", {:error, "bad"})
+ assert Provider.get(@cache_name, "err_key") == nil
+ end
+
+ test "ignores {:nocache, value} — value is not persisted" do
+ Provider.store(@cache_name, "nc_key", {:nocache, {:ok, "temp"}})
+ assert Provider.get(@cache_name, "nc_key") == nil
+ end
+ end
+
+ defp identity_middleware(_cache, _key, result), do: result
+
+ describe "get_or_store/4" do
+
+ test "executes function on cache miss" do
+ result =
+ Provider.get_or_store(@cache_name, "miss_key", fn -> {:ok, "computed"} end, &identity_middleware/3)
+
+ assert result == {:ok, "computed"}
+ end
+
+ test "returns cached value on hit without re-executing" do
+ call_count = :counters.new(1, [:atomics])
+
+ fun = fn ->
+ :counters.add(call_count, 1, 1)
+ {:ok, "value"}
+ end
+
+ Provider.get_or_store(@cache_name, "hit_key", fun, &identity_middleware/3)
+ Provider.get_or_store(@cache_name, "hit_key", fun, &identity_middleware/3)
+ Provider.get_or_store(@cache_name, "hit_key", fun, &identity_middleware/3)
+
+ assert :counters.get(call_count, 1) == 1
+ end
+
+ test "with {:nocache, {:ok, value}} returns value, doesn't cache, sets :__do_not_cache_query__" do
+ Process.delete(:__do_not_cache_query__)
+
+ result =
+ Provider.get_or_store(
+ @cache_name,
+ "nocache_key",
+ fn -> {:nocache, {:ok, "temp_val"}} end,
+ &identity_middleware/3
+ )
+
+ assert result == {:ok, "temp_val"}
+ assert Process.get(:__do_not_cache_query__) == true
+ assert Provider.get(@cache_name, "nocache_key") == nil
+ end
+
+ test "with {:error, reason} returns error, doesn't cache" do
+ result =
+ Provider.get_or_store(
+ @cache_name,
+ "error_key",
+ fn -> {:error, "failure"} end,
+ &identity_middleware/3
+ )
+
+ assert result == {:error, "failure"}
+ assert Provider.get(@cache_name, "error_key") == nil
+ end
+ end
+
+ describe "count/1" do
+ test "returns 0 for empty cache" do
+ assert Provider.count(@cache_name) == 0
+ end
+
+ test "returns correct count after stores" do
+ Provider.store(@cache_name, "c1", {:ok, 1})
+ Provider.store(@cache_name, "c2", {:ok, 2})
+ Provider.store(@cache_name, "c3", {:ok, 3})
+
+ assert Provider.count(@cache_name) == 3
+ end
+ end
+
+ describe "size/1" do
+ test "returns a non-negative float" do
+ size = Provider.size(@cache_name)
+ assert is_float(size)
+ assert size >= 0.0
+ end
+ end
+
+ describe "clear_all/1" do
+ test "removes all entries" do
+ Provider.store(@cache_name, "d1", {:ok, 1})
+ Provider.store(@cache_name, "d2", {:ok, 2})
+ assert Provider.count(@cache_name) == 2
+
+ Provider.clear_all(@cache_name)
+ assert Provider.count(@cache_name) == 0
+ end
+ end
+
+ describe "{key, ttl} tuple key" do
+ test "stores and retrieves by same tuple key" do
+ Provider.store(@cache_name, {"ttl_key", 60}, {:ok, "with_ttl"})
+ assert Provider.get(@cache_name, {"ttl_key", 60}) == {:ok, "with_ttl"}
+ end
+
+ test "TTL above max_cache_ttl (7200) is clamped, not rejected" do
+ # The clamped TTL is internal to the provider, so we can only assert
+ # that the oversized TTL is accepted and the value remains readable.
+ Provider.store(@cache_name, {"big_ttl", 99_999}, {:ok, "big"})
+ assert Provider.get(@cache_name, {"big_ttl", 99_999}) == {:ok, "big"}
+ end
+
+ test "get_or_store works with {key, ttl} tuple keys" do
+ result =
+ Provider.get_or_store(
+ @cache_name,
+ {"gos_ttl", 60},
+ fn -> {:ok, "ttl_computed"} end,
+ &identity_middleware/3
+ )
+
+ assert result == {:ok, "ttl_computed"}
+ assert Provider.get(@cache_name, {"gos_ttl", 60}) == {:ok, "ttl_computed"}
+ end
+ end
+
+ describe "store/3 overwrite" do
+ test "storing to the same key overwrites the value" do
+ Provider.store(@cache_name, "ow_key", {:ok, "first"})
+ assert Provider.get(@cache_name, "ow_key") == {:ok, "first"}
+
+ Provider.store(@cache_name, "ow_key", {:ok, "second"})
+ assert Provider.get(@cache_name, "ow_key") == {:ok, "second"}
+ end
+ end
+
+ describe "clear_all/1 returns :ok" do
+ test "returns :ok on empty cache" do
+ assert Provider.clear_all(@cache_name) == :ok
+ end
+
+ test "returns :ok on non-empty cache" do
+ Provider.store(@cache_name, "clr_ret", {:ok, 1})
+ assert Provider.clear_all(@cache_name) == :ok
+ end
+ end
+
+ describe "concurrent get_or_store (thundering herd)" do
+ test "function executes only once under concurrent access" do
+ call_count = :counters.new(1, [:atomics])
+
+ fun = fn ->
+ :counters.add(call_count, 1, 1)
+ Process.sleep(50)
+ {:ok, "concurrent_val"}
+ end
+
+ tasks =
+ for _ <- 1..10 do
+ Task.async(fn ->
+ Provider.get_or_store(@cache_name, "herd_key", fun, &identity_middleware/3)
+ end)
+ end
+
+ results = Task.await_many(tasks, 5000)
+
+ assert Enum.all?(results, &(&1 == {:ok, "concurrent_val"}))
+ assert :counters.get(call_count, 1) == 1
+ end
+ end
+end
diff --git a/test/configuration_test.exs b/test/configuration_test.exs
new file mode 100644
index 0000000..e1f4aba
--- /dev/null
+++ b/test/configuration_test.exs
@@ -0,0 +1,161 @@
+defmodule AbsintheCache.ConfigurationTest do
+ use ExUnit.Case, async: false
+
+ alias AbsintheCache.ConCacheProvider, as: Provider
+
+ # Clean up any config we set after each test
+ setup do
+ on_exit(fn ->
+ Application.delete_env(:absinthe_cache, :cache_name)
+ Application.delete_env(:absinthe_cache, :cache_provider)
+ Application.delete_env(:absinthe_cache, :ttl)
+ Application.delete_env(:absinthe_cache, :max_ttl_offset)
+ end)
+ end
+
+ describe "cache_name configuration" do
+ setup do
+ {:ok, pid} =
+ Provider.start_link(
+ name: :custom_cache_name,
+ ttl_check_interval: :timer.seconds(30),
+ global_ttl: :timer.seconds(300)
+ )
+
+ Application.put_env(:absinthe_cache, :cache_name, :custom_cache_name)
+
+ on_exit(fn ->
+ ExUnit.CaptureLog.capture_log(fn -> Process.exit(pid, :kill) end)
+ end)
+
+ :ok
+ end
+
+ test "store/2 and get/1 use the configured cache name" do
+ AbsintheCache.store("cfg_key", {:ok, "cfg_value"})
+ assert AbsintheCache.get("cfg_key") == {:ok, "cfg_value"}
+ end
+
+ test "count/0 uses the configured cache name" do
+ assert AbsintheCache.count() == 0
+ AbsintheCache.store("cfg_cnt", {:ok, 1})
+ assert AbsintheCache.count() == 1
+ end
+
+ test "size/0 uses the configured cache name" do
+ assert is_float(AbsintheCache.size())
+ end
+
+ test "clear_all/0 uses the configured cache name" do
+ AbsintheCache.store("cfg_clr", {:ok, 1})
+ assert AbsintheCache.count() == 1
+ AbsintheCache.clear_all()
+ assert AbsintheCache.count() == 0
+ end
+
+ test "wrap/3 uses the configured cache name" do
+ call_count = :counters.new(1, [:atomics])
+
+ wrapped =
+ AbsintheCache.wrap(
+ fn ->
+ :counters.add(call_count, 1, 1)
+ {:ok, "wrapped"}
+ end,
+ :cfg_wrap_test,
+ %{}
+ )
+
+ assert wrapped.() == {:ok, "wrapped"}
+ assert wrapped.() == {:ok, "wrapped"}
+ assert :counters.get(call_count, 1) == 1
+ end
+ end
+
+ describe "ttl configuration" do
+ setup do
+ {:ok, pid} =
+ Provider.start_link(
+ name: :graphql_cache,
+ ttl_check_interval: :timer.seconds(30),
+ global_ttl: :timer.seconds(300)
+ )
+
+ on_exit(fn ->
+ ExUnit.CaptureLog.capture_log(fn -> Process.exit(pid, :kill) end)
+ end)
+
+ :ok
+ end
+
+ test "default TTL is 300-420 when no config is set" do
+ {_key, ttl} = AbsintheCache.cache_key(:ttl_default_test, %{})
+ assert ttl >= 300
+ assert ttl <= 420
+ end
+
+ test "custom ttl via Application config changes the base TTL" do
+ Application.put_env(:absinthe_cache, :ttl, 600)
+
+ {_key, ttl} = AbsintheCache.cache_key(:ttl_config_test, %{})
+ # base 600 + offset 0..120
+ assert ttl >= 600
+ assert ttl <= 720
+ end
+
+ test "custom max_ttl_offset via Application config changes the offset range" do
+ Application.put_env(:absinthe_cache, :ttl, 100)
+ Application.put_env(:absinthe_cache, :max_ttl_offset, 5)
+
+ {_key, ttl} = AbsintheCache.cache_key(:offset_config_test, %{})
+ assert ttl >= 100
+ assert ttl <= 105
+ end
+
+ test "per-call opts override Application config" do
+ Application.put_env(:absinthe_cache, :ttl, 600)
+
+ {_key, ttl} = AbsintheCache.cache_key(:override_test, %{}, ttl: 50, max_ttl_offset: 5)
+ assert ttl >= 50
+ assert ttl <= 55
+ end
+
+ test "caching_params in args override both Application config and per-call opts" do
+ Application.put_env(:absinthe_cache, :ttl, 600)
+
+ args = %{caching_params: %{base_ttl: 10, max_ttl_offset: 3}}
+ {_key, ttl} = AbsintheCache.cache_key(:params_override_test, args, ttl: 50)
+ assert ttl >= 10
+ assert ttl <= 13
+ end
+ end
+
+ describe "cache_provider configuration" do
+ test "uses ConCacheProvider by default" do
+ spec = AbsintheCache.child_spec(name: :provider_test, id: :provider_test)
+ assert is_map(spec)
+ assert spec.id == :provider_test
+ end
+
+ test "custom provider module is used when configured" do
+ defmodule TestProvider do
+ @behaviour AbsintheCache.Behaviour
+
+ def start_link(_opts), do: {:ok, self()}
+ def child_spec(opts), do: %{id: :test_provider, start: {__MODULE__, :start_link, [opts]}}
+ def get(_cache, _key), do: {:ok, "from_test_provider"}
+ def store(_cache, _key, _value), do: :ok
+ def get_or_store(_cache, _key, _func, _middleware), do: {:ok, "from_test_provider"}
+ def size(_cache), do: 0.0
+ def count(_cache), do: 42
+ def clear_all(_cache), do: :ok
+ end
+
+ Application.put_env(:absinthe_cache, :cache_provider, TestProvider)
+
+ assert AbsintheCache.count() == 42
+ assert AbsintheCache.size() == 0.0
+ assert AbsintheCache.get("any") == {:ok, "from_test_provider"}
+ end
+ end
+end
diff --git a/test/support/case.ex b/test/support/case.ex
index a5adb32..576d6db 100644
--- a/test/support/case.ex
+++ b/test/support/case.ex
@@ -1,10 +1,15 @@
defmodule AbsintheCache.TestCase do
use ExUnit.CaseTemplate
- defmacro __using__(_) do
+ defmacro __using__(opts) do
+ async = Keyword.get(opts, :async, true)
+
quote do
- use ExUnit.Case, async: true
- use Plug.Test
+ use ExUnit.Case, async: unquote(async)
+ # import (not use) so we control ExUnit.Case and async; Plug.Test/Plug.Conn
+ # provide conn/2, put_req_header/3, etc. needed for HTTP tests.
+ import Plug.Test
+ import Plug.Conn
import unquote(__MODULE__)
diff --git a/test/test_schema.ex b/test/test_schema.ex
deleted file mode 100644
index 4c52700..0000000
--- a/test/test_schema.ex
+++ /dev/null
@@ -1,12 +0,0 @@
-defmodule Schema do
- use Absinthe.Schema
- import AbsintheCache, only: [cache_resolve: 1]
-
- query do
- field :get_name, non_null(:string) do
- cache_resolve(fn _, _ ->
- {:ok, "Ivan"}
- end)
- end
- end
-end