
ci: add codspeed benchmarks #104

Open
gazorby wants to merge 5 commits into main from feat/banchmarks

Conversation

@gazorby
Owner

@gazorby gazorby commented Sep 14, 2025

Description

Types of Changes

  • Core
  • Bugfix
  • New feature
  • Enhancement/optimization
  • Documentation

Issues Fixed or Closed by This PR

Checklist

  • My code follows the code style of this project.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I have read the CONTRIBUTING document.
  • I have added tests to cover my changes.
  • I have tested the changes and verified that they work and don't break anything (as well as I can manage).

Summary by Sourcery

Integrate performance benchmarking into the CI pipeline using CodSpeed and add support for running benchmarks via pytest and mise.

New Features:

  • Add a new 'benchmarks' job in GitHub Actions to run CodSpeed benchmarks
  • Introduce a 'test:benchmarks' task in tasks.md and mise.toml for running benchmarks

Build:

  • Add pytest-codspeed to project dependencies in pyproject.toml

CI:

  • Configure the benchmarks job in .github/workflows/ci.yaml to checkout code, cache dependencies, setup mise, and run CodSpeed
  • Adjust existing workflow job dependencies to include the new benchmarks step

Documentation:

  • Update tasks.md to document the 'test:benchmarks' command

Tests:

  • Create tests/benchmarks directory with fixtures and a sample benchmark test for schema building
  • Add a 'benchmark' pytest marker for benchmark tests

Summary by CodeRabbit

  • Tests

    • Added a performance benchmark suite (schema building and query) with new benchmark fixtures and a pytest benchmark marker; runnable via a dedicated benchmark command.
  • Chores

    • Integrated a benchmarks job into CI and enabled cancel-in-progress for workflows.
    • Added ignore rule for benchmark artifacts.
    • Removed Dependabot configuration for GitHub Actions.
    • Added test-related task entries and a benchmark test dependency.
  • Documentation

    • Documented the new benchmark task in task docs.

@sourcery-ai

sourcery-ai bot commented Sep 14, 2025

Reviewer's Guide

Integrate CodSpeed benchmarks into the CI pipeline by adding a dedicated benchmarks job, updating dependencies and task definitions, and providing initial benchmark tests.

Flow diagram for new benchmark task execution

flowchart TD
  A["User runs 'test:benchmarks' task"] --> B["Depends on _install"]
  B --> C["Runs: uv run pytest --codspeed tests/benchmarks"]
  C --> D["CodSpeed instrumentation collects benchmark results"]
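For reference, a minimal sketch of the kind of test this command collects; the file and function names below are illustrative, not the PR's actual benchmarks (those live in tests/benchmarks/test_build_schema.py and test_queries.py).

```python
# tests/benchmarks/test_example.py -- illustrative sketch only
import pytest


@pytest.mark.benchmark(group="schema")
def test_example_workload(benchmark) -> None:
    # Under `pytest --codspeed`, pytest-codspeed supplies the `benchmark` fixture
    # and measures the callable passed to it.
    benchmark(lambda: sum(range(10_000)))
```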

File-Level Changes

| Change | Details | Files |
| --- | --- | --- |
| Enhance CI workflow with a benchmarks job and formatting fixes | Standardize spacing in branch and needs arrays; add a new 'benchmarks' job with checkout, cache, setup, and CodSpeed steps | .github/workflows/ci.yaml |
| Introduce pytest-codspeed dependency and benchmark marker | Add "pytest-codspeed" to test extras; register a "benchmark" pytest marker | pyproject.toml |
| Define and document the test:benchmarks task | Add a test:benchmarks entry in tasks.md; add tasks."test:benchmarks" to mise.toml | tasks.md, mise.toml |
| Add initial benchmark test suite | Create conftest.py importing integration fixtures; add fixtures.py with Strawchemy fixtures; add test_build_schema.py benchmark test | tests/benchmarks/conftest.py, tests/benchmarks/fixtures.py, tests/benchmarks/test_build_schema.py |
| Remove deprecated dependabot configuration | Delete .github/dependabot.yml | .github/dependabot.yml |


@sourcery-ai sourcery-ai bot left a comment

Hey there - I've reviewed your changes - here's some feedback:

Blocking issues:

  • An action sourced from a third-party repository on GitHub is not pinned to a full length commit SHA. Pinning an action to a full length commit SHA is currently the only way to use an action as an immutable release. Pinning to a particular SHA helps mitigate the risk of a bad actor adding a backdoor to the action's repository, as they would need to generate a SHA-1 collision for a valid Git object payload. (link)

General comments:

  • Fix the GitHub Actions cache key expression in the benchmarks job—there’s an extra '}' after hashFiles, which will break the expression.
  • The benchmarks conftest is importing the entire integration fixtures set – consider narrowing imports to only the fixtures used in these benchmarks to reduce setup overhead.
  • Using the '@pytest.mark.benchmark' marker may conflict with the pytest-benchmark plugin; consider renaming it to something like 'benchmark_codspeed' for clarity.
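A minimal sketch of that renaming suggestion follows, assuming the marker were registered from a conftest.py rather than in pyproject.toml as this PR does; the benchmark_codspeed name is only the reviewer's proposal, not an existing marker.

```python
# conftest.py -- hedged sketch of the reviewer's suggestion, not the PR's current setup
import pytest


def pytest_configure(config: pytest.Config) -> None:
    # Register the renamed marker so pytest does not warn about an unknown marker.
    config.addinivalue_line(
        "markers",
        "benchmark_codspeed: CodSpeed benchmark tests (run with pytest --codspeed)",
    )

# A benchmark module would then apply it as:
#
#   @pytest.mark.benchmark_codspeed
#   def test_build_user_type(strawchemy, benchmark): ...
```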

## Individual Comments

### Comment 1
<location> `.github/workflows/ci.yaml:136` </location>
<code_context>
+        with:
+          path: |
+            ~/.cache
+          key:
+            ${{ runner.os }}-benchmarks-${{ env.pythonLocation }}-${{ hashFiles('**/uv.lock') }} }}
+          restore-keys: |
+            ${{ runner.os }}-benchmarks-${{ env.pythonLocation }}
</code_context>

<issue_to_address>
Extra closing brace in cache key interpolation.

Remove the extra '}' at the end of the cache key to prevent syntax errors and ensure proper formatting.
</issue_to_address>

## Security Issues

### Issue 1
<location> `.github/workflows/ci.yaml:148` </location>

<issue_to_address>
**security (yaml.github-actions.security.third-party-action-not-pinned-to-commit-sha):** An action sourced from a third-party repository on GitHub is not pinned to a full length commit SHA. Pinning an action to a full length commit SHA is currently the only way to use an action as an immutable release. Pinning to a particular SHA helps mitigate the risk of a bad actor adding a backdoor to the action's repository, as they would need to generate a SHA-1 collision for a valid Git object payload.

*Source: opengrep*
</issue_to_address>


Inline review comment on .github/workflows/ci.yaml (context):

log_level: debug

- name: Run the benchmarks
  uses: CodSpeedHQ/action@v4

security (yaml.github-actions.security.third-party-action-not-pinned-to-commit-sha): An action sourced from a third-party repository on GitHub is not pinned to a full length commit SHA. Pinning an action to a full length commit SHA is currently the only way to use an action as an immutable release. Pinning to a particular SHA helps mitigate the risk of a bad actor adding a backdoor to the action's repository, as they would need to generate a SHA-1 collision for a valid Git object payload.

Source: opengrep

@codspeed-hq

codspeed-hq bot commented Sep 14, 2025

CodSpeed Performance Report

Congrats! CodSpeed is installed 🎉

🆕 21 new benchmarks were detected.

You will start to see performance impacts in the reports once the benchmarks are run from your default branch.

Detected benchmarks

  • test_build_user_create_input[strawchemy_mysql] (674.3 µs)
  • test_build_user_create_input[strawchemy_postgresql] (676 µs)
  • test_build_user_create_input[strawchemy_sqlite] (675.1 µs)
  • test_build_user_filter[strawchemy_mysql] (563.5 µs)
  • test_build_user_filter[strawchemy_postgresql] (562.1 µs)
  • test_build_user_filter[strawchemy_sqlite] (558.7 µs)
  • test_build_user_pk_update_input[strawchemy_mysql] (677.2 µs)
  • test_build_user_pk_update_input[strawchemy_postgresql] (676.9 µs)
  • test_build_user_pk_update_input[strawchemy_sqlite] (672.4 µs)
  • test_build_user_query_field[strawchemy_mysql] (90.5 µs)
  • test_build_user_query_field[strawchemy_postgresql] (92.4 µs)
  • test_build_user_query_field[strawchemy_sqlite] (90.9 µs)
  • test_build_user_type[strawchemy_mysql] (693.4 µs)
  • test_build_user_type[strawchemy_postgresql] (720.3 µs)
  • test_build_user_type[strawchemy_sqlite] (698.7 µs)
  • test_single[session-async-aiosqlite_engine] (6.5 ms)
  • test_single[session-async-asyncmy_engine] (6.2 ms)
  • test_single[session-async-asyncpg_engine] (6.1 ms)
  • test_single[session-async-psycopg_async_engine] (6.2 ms)
  • test_single[session-sync-psycopg_engine] (5.9 ms)
  • test_single[session-sync-sqlite_engine] (5.8 ms)

@coderabbitai
Contributor

coderabbitai bot commented Sep 14, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds CI benchmarks job using CodSpeed and pytest-codspeed, new benchmark tests and fixtures, mise tasks to install test deps and run benchmarks, updates pyproject markers/deps, ignores .codspeed, enables workflow concurrency cancellation, and removes Dependabot config for GitHub Actions.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| CI workflow: benchmarks integration (.github/workflows/ci.yaml) | Adds a benchmarks job that sets up cache, installs deps, runs mise/pytest with CodSpeed instrumentation, and is required by the final result job; enables cancel-in-progress in concurrency. |
| Tooling & task definitions (pyproject.toml, mise.toml, tasks.md, .gitignore) | Adds pytest-codspeed to test deps and a benchmark pytest marker; adds _install:test and test:benchmarks tasks in mise.toml and documents test:benchmarks in tasks.md; adds .codspeed to .gitignore. |
| Benchmark tests & fixtures (tests/benchmarks/*) | Adds conftest.py aggregating integration fixtures, fixtures.py exposing three Strawchemy fixtures, test_build_schema.py suite of schema-building benchmarks, and test_queries.py query benchmark. |
| Dependabot removal (.github/dependabot.yml) | Removes Dependabot configuration that scheduled daily updates for GitHub Actions. |

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor Runner as GitHub Actions Runner
  participant CI as CI Workflow
  participant Tests as Tests Matrix
  participant Bench as Benchmarks Job
  participant Mise as mise (task runner)
  participant Cod as CodSpeed Service
  participant Result as Final Result Job

  Runner->>CI: workflow triggered
  CI->>Tests: run test matrix
  Tests-->>CI: success/failure
  CI->>Bench: start benchmarks (needs: Tests)
  Bench->>Mise: run task test:benchmarks
  Mise->>Mise: _install:test (uv sync)
  Mise->>Mise: pytest --codspeed tests/benchmarks
  Mise->>Cod: upload instrumentation/results
  Cod-->>Mise: instrumentation feedback
  Mise-->>Bench: benchmarks completed
  Bench-->>CI: success/failure
  CI->>Result: aggregate (needs: Tests, Bench)
  Result-->>Runner: final workflow status
  Note over Bench,Cod: CodSpeed requires CODSPEED_TOKEN for instrumentation

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

A rabbit tweaks the CI with care,
Benchmarks bloom — the runners share 🥕
CodSpeed hums, the graphs take flight,
Tests and speed now hop in sight.
Small footprints in the repo lair.

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title "ci: add codspeed benchmarks" is concise and directly describes the primary change (adding CodSpeed-based benchmarks to the CI pipeline), matching the PR objectives and modified files (CI workflow, test tasks, and benchmark tests). |
| Docstring Coverage | ✅ Passed | No functions found in the changes. Docstring coverage check skipped. |

Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (1)
.github/workflows/ci.yaml (1)

122-153: Pin third‑party actions to full commit SHAs

Multiple actions are referenced by tags (e.g., v3/v4/v5). Pin to immutable SHAs for supply‑chain safety.

Example (illustrative):

-uses: jdx/mise-action@v3
+uses: jdx/mise-action@<FULL_COMMIT_SHA>

Actions to pin here:

  • actions/checkout@v5
  • actions/cache@v4
  • jdx/mise-action@v3
  • CodSpeedHQ/action@v4
  • actions/download-artifact@v5
  • actions/setup-python@v6
  • codecov/codecov-action@v5
  • codecov/test-results-action@v1
🧹 Nitpick comments (7)
.gitignore (1)

196-196: Ignore CodSpeed artifacts: prefer directory-anchored pattern

Use a root-anchored directory pattern to avoid accidentally matching files with the same name elsewhere.

-.codspeed
+/.codspeed/
.github/workflows/ci.yaml (2)

241-245: Result job should also gate on benchmarks’ outcome in the check step

You added benchmarks to needs, but the failure/cancel check below only covers tests and lint. Include benchmarks for consistency and explicit gating.

-      - name: Mark workflow as failed if lint or test did not pass or were cancelled
-        if: contains(fromJson('["failure", "cancelled"]'), needs.tests.result) || contains(fromJson('["failure", "cancelled"]'), needs.lint.result)
+      - name: Mark workflow as failed if lint, tests, or benchmarks did not pass or were cancelled
+        if: |
+          contains(fromJson('["failure","cancelled"]'), needs.tests.result) ||
+          contains(fromJson('["failure","cancelled"]'), needs.lint.result)   ||
+          contains(fromJson('["failure","cancelled"]'), needs.benchmarks.result)
         run: exit 1

130-140: Cache scope nit: include UV dir explicitly

Caching ~/.cache works, but being explicit improves portability and avoids noise.

       path: |
-        ~/.cache
+        ~/.cache/uv
+        ~/.cache/pip
tests/benchmarks/test_queries.py (1)

10-23: Add a minimal correctness assertion

Benchmarks should still fail fast on bad results; assert presence of data to catch regressions without skewing timings.

 @pytest.mark.benchmark(group="queries")
 async def test_single(any_query: AnyQueryExecutor, raw_users: RawRecordData) -> None:
-    await maybe_async(
+    result = await maybe_async(
         any_query(
             """
             query GetUser($id: Int!) {
                 user(id: $id) {
                     name
             }
             }
             """,
             {"id": raw_users[0]["id"]},
         )
     )
+    assert isinstance(result, dict) and "data" in result and result["data"].get("user"), "Empty or invalid GraphQL response"
tests/benchmarks/conftest.py (1)

49-94: __all__ exports: consider exporting fixture names or dropping __all__

The functions are named fx_strawchemy_* but the actual fixture names are strawchemy_* via the decorator. Either export the fixture names for clarity or omit __all__ (pytest doesn't require it in conftest).

-from .fixtures import fx_strawchemy_mysql, fx_strawchemy_postgresql, fx_strawchemy_sqlite
+from .fixtures import fx_strawchemy_mysql, fx_strawchemy_postgresql, fx_strawchemy_sqlite
@@
-    "fx_strawchemy_mysql",
-    "fx_strawchemy_postgresql",
-    "fx_strawchemy_sqlite",
+    # Option A: keep fx_* or
+    # "strawchemy_mysql", "strawchemy_postgresql", "strawchemy_sqlite",

If not needed, you can drop __all__ entirely to reduce maintenance.

tests/benchmarks/test_build_schema.py (2)

28-32: Bench not recorded and confusing type name

Wrap in benchmark(...) and avoid naming the create input “UserFilter” which collides semantically with the filter type.

Apply:

 @pytest.mark.benchmark(group="schema")
 @pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
-def test_build_user_create_input(strawchemy: Strawchemy) -> None:
-    strawchemy.create_input(User, include="all", override=True)(type("UserFilter", (), {}))
+def test_build_user_create_input(strawchemy: Strawchemy, benchmark: BenchmarkFixture) -> None:
+    benchmark(lambda: strawchemy.create_input(User, include="all", override=True)(type("UserCreateInput", (), {})))

17-17: Deduplicate backend parametrization (optional)

Define a module-level BACKENDS = [lf(...), lf(...), lf(...)] and reuse across tests for readability.

Also applies to: 23-23, 29-29, 35-35, 41-41

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 26eb9df and bdd9609.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (10)
  • .github/dependabot.yml (0 hunks)
  • .github/workflows/ci.yaml (5 hunks)
  • .gitignore (1 hunks)
  • mise.toml (2 hunks)
  • pyproject.toml (2 hunks)
  • tasks.md (1 hunks)
  • tests/benchmarks/conftest.py (1 hunks)
  • tests/benchmarks/fixtures.py (1 hunks)
  • tests/benchmarks/test_build_schema.py (1 hunks)
  • tests/benchmarks/test_queries.py (1 hunks)
💤 Files with no reviewable changes (1)
  • .github/dependabot.yml
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-14T11:55:31.383Z
Learnt from: CR
PR: gazorby/strawchemy#0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-09-14T11:55:31.383Z
Learning: Use mise to manage build and test tasks for the project

Applied to files:

  • mise.toml
🧬 Code graph analysis (4)
tests/benchmarks/conftest.py (2)
tests/integration/fixtures.py (19)
  • aiosqlite_engine (508-513)
  • any_query (763-779)
  • any_session (753-754)
  • async_engine (597-598)
  • async_mutation (731-738)
  • async_session (602-608)
  • asyncmy_engine (487-504)
  • asyncpg_engine (517-535)
  • db_features (684-685)
  • dialect (679-680)
  • engine (469-470)
  • psycopg_async_engine (539-557)
  • psycopg_engine (416-434)
  • query_tracker (758-759)
  • seed_db_async (660-672)
  • seed_db_sync (644-656)
  • session (474-480)
  • sqlite_engine (438-445)
  • sync_mutation (742-749)
tests/benchmarks/fixtures.py (3)
  • fx_strawchemy_mysql (18-19)
  • fx_strawchemy_postgresql (8-9)
  • fx_strawchemy_sqlite (13-14)
tests/benchmarks/fixtures.py (2)
tests/fixtures.py (1)
  • strawchemy (34-35)
src/strawchemy/mapper.py (1)
  • Strawchemy (60-722)
tests/benchmarks/test_build_schema.py (4)
tests/fixtures.py (1)
  • strawchemy (34-35)
src/strawchemy/strawberry/repository/_sync.py (1)
  • StrawchemySyncRepository (33-258)
src/strawchemy/mapper.py (1)
  • Strawchemy (60-722)
src/strawchemy/strawberry/_field.py (1)
  • order_by (278-282)
tests/benchmarks/test_queries.py (1)
tests/integration/fixtures.py (1)
  • any_query (763-779)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (17)
  • GitHub Check: 🔬 unit-no-extras on 3.9
  • GitHub Check: 🔬 unit-no-extras on 3.13
  • GitHub Check: 🔬 integration-postgres on 3.10
  • GitHub Check: 🔬 integration-sqlite on 3.11
  • GitHub Check: 🔬 integration-sqlite on 3.12
  • GitHub Check: 🔬 integration-mysql on 3.11
  • GitHub Check: 🔬 integration-sqlite on 3.10
  • GitHub Check: 🔬 integration-mysql on 3.12
  • GitHub Check: 🔬 integration-postgres on 3.12
  • GitHub Check: 🔬 integration-postgres on 3.11
  • GitHub Check: 🔬 integration-postgres on 3.13
  • GitHub Check: 🔬 integration-mysql on 3.13
  • GitHub Check: 🔬 integration-mysql on 3.10
  • GitHub Check: 🔬 integration-sqlite on 3.9
  • GitHub Check: 🔬 integration-postgres on 3.9
  • GitHub Check: 🔬 integration-sqlite on 3.13
  • GitHub Check: 🔬 integration-mysql on 3.9
🔇 Additional comments (10)
tasks.md (1)

149-156: Docs for benchmarks command look good

Clear, matches mise task and CI usage.

mise.toml (2)

50-54: Dedicated test-only sync is a good split

Keeps benches lightweight and reproducible.


157-161: Bench task wiring looks correct

CodSpeed runs via pytest against tests/benchmarks without xdist; good.

pyproject.toml (2)

83-84: Add pytest-codspeed: LGTM

Dependency belongs in the test group.


300-307: Benchmark marker registered

Consistent with usage in tests/benchmarks.

tests/benchmarks/fixtures.py (1)

7-19: Fixtures are minimal and clear

Good separation per dialect; type hints help.

tests/benchmarks/test_build_schema.py (4)

40-59: LGTM: good use of benchmark fixture to measure field construction

This test is correctly grouped and exercises filter/order/distinct_on integration with a sync repository.


34-38: Benchmark the PK-update input builder — verify API first

Test calls create_input and doesn't use pytest-benchmark; it should measure the PK-update input builder (pk_update_input or update_input).

If the API is pk_update_input, replace and benchmark like:

 @pytest.mark.benchmark(group="schema")
 @pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
-def test_build_user_pk_update_input(strawchemy: Strawchemy) -> None:
-    strawchemy.create_input(User, include="all", override=True)(type("UserFilter", (), {}))
+def test_build_user_pk_update_input(strawchemy: Strawchemy, benchmark: BenchmarkFixture) -> None:
+    benchmark(lambda: strawchemy.pk_update_input(User, include="all", override=True)(type("UserPkUpdateInput", (), {})))

If the API is update_input, swap pk_update_input accordingly.

Quick verification: search the repo for the correct symbol (e.g. rg -n --hidden -S 'pk_update_input|update_input|create_input' -g '!/dist/') and confirm the builder name before applying changes.


6-6: Fix import: use pytest-lazy-fixture’s canonical module

This will import-error. The correct module is pytest_lazyfixture and symbol is lazy_fixture.

Apply:

-from pytest_lazy_fixtures import lf
+from pytest_lazyfixture import lazy_fixture as lf

Likely an incorrect or invalid review comment.


1-60: Fixtures present; pytest-lazy-fixtures only in uv.lock
strawchemy_postgresql/sqlite/mysql are defined in tests/benchmarks/fixtures.py; pytest_lazy_fixtures appears in uv.lock and is imported in tests (tests/benchmarks/test_build_schema.py, tests/integration/fixtures.py) — confirm it's declared in your primary dependency file (pyproject.toml / requirements) if needed.


import pytest
from pytest_lazy_fixtures import lf
from strawchemy import StrawchemySyncRepository
Contributor

💡 Verification agent

🧩 Analysis chain

Verify public export for StrawchemySyncRepository

Confirm StrawchemySyncRepository is re-exported from the package root; otherwise import from its submodule.

Run:


🏁 Script executed:

#!/usr/bin/env bash
set -euo pipefail
rg -nP 'class\s+StrawchemySyncRepository\b' -S
rg -nP 'from\s+strawchemy\s+import\s+.*StrawchemySyncRepository|__all__.*StrawchemySyncRepository' src -S

Length of output: 224


Re-export StrawchemySyncRepository from package root or import it from its submodule

StrawchemySyncRepository is defined at src/strawchemy/strawberry/repository/_sync.py and exported in src/strawchemy/strawberry/repository/__init__.py, but there's no evidence it's re-exported from the package root — update tests/benchmarks/test_build_schema.py (line 7) to:
from strawchemy.strawberry.repository import StrawchemySyncRepository
or add to src/strawchemy/__init__.py:
from .strawberry.repository import StrawchemySyncRepository
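If the re-export route is chosen, a minimal sketch (assuming nothing about what src/strawchemy/__init__.py already contains):

```python
# src/strawchemy/__init__.py -- hedged sketch of the re-export option; merge with
# the module's existing imports and __all__ rather than replacing them.
from .strawberry.repository import StrawchemySyncRepository

__all__ = ["StrawchemySyncRepository"]  # extend the existing __all__, if one exists
```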

🤖 Prompt for AI Agents
In tests/benchmarks/test_build_schema.py around line 7, the test imports
StrawchemySyncRepository from the package root but that symbol is not
re-exported there; either change the import in this test to import
StrawchemySyncRepository from strawchemy.strawberry.repository, or add a
re-export in src/strawchemy/__init__.py that imports StrawchemySyncRepository
from .strawberry.repository so the symbol is available at the package root;
implement one of these two fixes so the import resolves.

Comment on lines +16 to +20
@pytest.mark.benchmark(group="schema")
@pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
def test_build_user_type(strawchemy: Strawchemy) -> None:
    strawchemy.type(User, include="all", override=True)(type("UserType", (), {}))

Contributor

⚠️ Potential issue

Bench not recorded: wrap in benchmark fixture

CodSpeed records only benchmark(...) calls. This test currently won’t produce metrics.

Apply:

 @pytest.mark.benchmark(group="schema")
 @pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
-def test_build_user_type(strawchemy: Strawchemy) -> None:
-    strawchemy.type(User, include="all", override=True)(type("UserType", (), {}))
+def test_build_user_type(strawchemy: Strawchemy, benchmark: BenchmarkFixture) -> None:
+    benchmark(lambda: strawchemy.type(User, include="all", override=True)(type("UserType", (), {})))
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
@pytest.mark.benchmark(group="schema")
@pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
def test_build_user_type(strawchemy: Strawchemy) -> None:
    strawchemy.type(User, include="all", override=True)(type("UserType", (), {}))
@pytest.mark.benchmark(group="schema")
@pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
def test_build_user_type(strawchemy: Strawchemy, benchmark: BenchmarkFixture) -> None:
    benchmark(lambda: strawchemy.type(User, include="all", override=True)(type("UserType", (), {})))
🤖 Prompt for AI Agents
In tests/benchmarks/test_build_schema.py around lines 16 to 20, the test
currently calls strawchemy.type(...) directly so CodSpeed won’t record anything;
change the test signature to accept the benchmark fixture and wrap the operation
in a benchmark call (e.g., def test_build_user_type(strawchemy: Strawchemy,
benchmark): and then call benchmark(lambda: strawchemy.type(User, include="all",
override=True)(type("UserType", (), {})))) so the benchmark framework actually
records the metric.

Comment on lines +22 to +26
@pytest.mark.benchmark(group="schema")
@pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
def test_build_user_filter(strawchemy: Strawchemy) -> None:
    strawchemy.filter(User, include="all", override=True)(type("UserFilter", (), {}))

Contributor

⚠️ Potential issue

Bench not recorded: wrap filter build in benchmark

Same issue as above.

Apply:

 @pytest.mark.benchmark(group="schema")
 @pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
-def test_build_user_filter(strawchemy: Strawchemy) -> None:
-    strawchemy.filter(User, include="all", override=True)(type("UserFilter", (), {}))
+def test_build_user_filter(strawchemy: Strawchemy, benchmark: BenchmarkFixture) -> None:
+    benchmark(lambda: strawchemy.filter(User, include="all", override=True)(type("UserFilter", (), {})))
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
@pytest.mark.benchmark(group="schema")
@pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
def test_build_user_filter(strawchemy: Strawchemy) -> None:
    strawchemy.filter(User, include="all", override=True)(type("UserFilter", (), {}))
@pytest.mark.benchmark(group="schema")
@pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
def test_build_user_filter(strawchemy: Strawchemy, benchmark: BenchmarkFixture) -> None:
    benchmark(lambda: strawchemy.filter(User, include="all", override=True)(type("UserFilter", (), {})))
🤖 Prompt for AI Agents
In tests/benchmarks/test_build_schema.py around lines 22-26, the filter build
call is executed directly so the benchmark framework doesn't record it; change
the test to accept the pytest-benchmark fixture (add a benchmark parameter) and
wrap the construction in a benchmark call, e.g. call benchmark(lambda:
strawchemy.filter(User, include="all", override=True)(type("UserFilter", (),
{}))), so the benchmark measures the filter build rather than running it outside
the benchmark harness.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
.github/workflows/ci.yaml (1)

240-258: Bench failures don’t gate release; include benchmarks in failure check and guard release on success().

Result job runs with always() and only checks tests/lint. A failed benchmarks job won’t trigger exit 1, so release can still fire on tags.

Apply:

       - name: Mark workflow as failed if lint or test did not pass or were cancelled
-        if: contains(fromJson('["failure", "cancelled"]'), needs.tests.result) || contains(fromJson('["failure", "cancelled"]'), needs.lint.result)
+        if: contains(fromJson('["failure","cancelled"]'), needs.tests.result) || contains(fromJson('["failure","cancelled"]'), needs.lint.result) || contains(fromJson('["failure","cancelled"]'), needs.benchmarks.result)
         run: exit 1
       - name: Invoke release workflow
-        if: github.ref_type == 'tag'
+        if: github.ref_type == 'tag' && success()

Also applies to: 248-249, 250-255

♻️ Duplicate comments (2)
.github/workflows/ci.yaml (2)

86-90: Cache key interpolation formatting looks correct now.

The earlier “extra }” issue appears resolved.


152-156: Pin third‑party actions to full commit SHAs.

Security best practice; prior bot flagged similar. Pin these: fkirc/skip-duplicate-actions, jdx/mise-action, docker/setup-buildx-action, CodSpeedHQ/action, codecov/codecov-action, codecov/test-results-action, benc-uk/workflow-dispatch. Keep major tag as a comment.

Example for CodSpeed step:

-      - name: Run the benchmarks
-        uses: CodSpeedHQ/action@v4
+      - name: Run the benchmarks
+        uses: CodSpeedHQ/action@<full-commit-sha> # v4

Also applies to: 28-31, 47-51, 75-76, 92-96, 130-133, 146-150, 186-191, 213-219, 255-257

🧹 Nitpick comments (4)
.github/workflows/ci.yaml (4)

130-133: Remove Buildx in benchmarks (unused).

No Docker build here; drop to speed up and reduce surface.

-      - name: Set up Docker Buildx
-        id: docker-buildx
-        uses: docker/setup-buildx-action@v3

52-57: Use the canonical GITHUB_OUTPUT write; avoid tee and add quotes.

Prevents subtle formatting bugs with spaces/newlines.

-      - name: Generate test matrix
-        id: set-matrix
-        shell: bash
-        run: |
-          echo sessions=$(mise run ci:test-matrix) | tee --append $GITHUB_OUTPUT
+      - name: Generate test matrix
+        id: set-matrix
+        shell: bash
+        run: |
+          echo "sessions=$(mise run ci:test-matrix)" >> "$GITHUB_OUTPUT"

184-191: Typo + pinning: “Codecov” and pin action to SHA.

Fix step name; pin action.

-      - name: Upload coverage to Codeocv
+      - name: Upload coverage to Codecov
-        uses: codecov/codecov-action@v5
+        uses: codecov/codecov-action@<full-commit-sha> # v5

15-18: Set minimal permissions; override per-job where needed.

Harden workflow by default least-privilege. Grant extra (e.g., checks: write, pull-requests: write) only to jobs that need them (Codecov test-results, release dispatch).

 concurrency:
   group: ${{ github.head_ref || github.run_id }}
   cancel-in-progress: true
+
+permissions:
+  contents: read
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bdd9609 and 35f1402.

📒 Files selected for processing (1)
  • .github/workflows/ci.yaml (5 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: 🔬 integration-postgres on 3.9
  • GitHub Check: 🔬 integration-postgres on 3.10
  • GitHub Check: 🔬 integration-postgres on 3.11
  • GitHub Check: 🔬 integration-postgres on 3.13
🔇 Additional comments (1)
.github/workflows/ci.yaml (1)

122-157: Benchmarks job scaffold looks good.

Wired after tests, caches are scoped, CodSpeed instrumentation mode with mise run is aligned with PR.

