Conversation
Reviewer's Guide

Integrate CodSpeed benchmarks into the CI pipeline by adding a dedicated benchmarks job, updating dependencies and task definitions, and providing initial benchmark tests.

Flow diagram for new benchmark task execution:

```mermaid
flowchart TD
    A["User runs 'test:benchmarks' task"] --> B["Depends on _install"]
    B --> C["Runs: uv run pytest --codspeed tests/benchmarks"]
    C --> D["CodSpeed instrumentation collects benchmark results"]
```
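To make the flow concrete, here is a minimal sketch of how such a task could be declared in mise.toml. The task name, dependency, and pytest invocation are taken from the diagram above; the PR's actual mise.toml may be organized differently.

```toml
# Hypothetical excerpt mirroring the flow above, not the PR's actual mise.toml.
[tasks."test:benchmarks"]
description = "Run the CodSpeed benchmark suite"
depends = ["_install"]
run = "uv run pytest --codspeed tests/benchmarks"
```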
Hey there - I've reviewed your changes - here's some feedback:
Blocking issues:
- An action sourced from a third-party repository on GitHub is not pinned to a full length commit SHA. Pinning an action to a full length commit SHA is currently the only way to use an action as an immutable release. Pinning to a particular SHA helps mitigate the risk of a bad actor adding a backdoor to the action's repository, as they would need to generate a SHA-1 collision for a valid Git object payload. (link)
General comments:
- Fix the GitHub Actions cache key expression in the benchmarks job—there’s an extra '}' after hashFiles, which will break the expression.
- The benchmarks conftest is importing the entire integration fixtures set – consider narrowing imports to only the fixtures used in these benchmarks to reduce setup overhead.
- Using the '@pytest.mark.benchmark' marker may conflict with the pytest-benchmark plugin; consider renaming it to something like 'benchmark_codspeed' for clarity.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Fix the GitHub Actions cache key expression in the benchmarks job—there’s an extra '}' after hashFiles, which will break the expression.
- The benchmarks conftest is importing the entire integration fixtures set – consider narrowing imports to only the fixtures used in these benchmarks to reduce setup overhead.
- Using the '@pytest.mark.benchmark' marker may conflict with the pytest-benchmark plugin; consider renaming it to something like 'benchmark_codspeed' for clarity.
## Individual Comments
### Comment 1
<location> `.github/workflows/ci.yaml:136` </location>
<code_context>
+ with:
+ path: |
+ ~/.cache
+ key:
+ ${{ runner.os }}-benchmarks-${{ env.pythonLocation }}-${{ hashFiles('**/uv.lock') }} }}
+ restore-keys: |
+ ${{ runner.os }}-benchmarks-${{ env.pythonLocation }}
</code_context>
<issue_to_address>
Extra closing brace in cache key interpolation.
Remove the extra '}' at the end of the cache key to prevent syntax errors and ensure proper formatting.
</issue_to_address>
## Security Issues
### Issue 1
<location> `.github/workflows/ci.yaml:148` </location>
<issue_to_address>
**security (yaml.github-actions.security.third-party-action-not-pinned-to-commit-sha):** An action sourced from a third-party repository on GitHub is not pinned to a full length commit SHA. Pinning an action to a full length commit SHA is currently the only way to use an action as an immutable release. Pinning to a particular SHA helps mitigate the risk of a bad actor adding a backdoor to the action's repository, as they would need to generate a SHA-1 collision for a valid Git object payload.
*Source: opengrep*
</issue_to_address>
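As a reference, pinning a third-party action to an immutable release generally takes this shape; the SHA below is a placeholder, not the actual commit behind the v4 tag:

```yaml
- name: Run the benchmarks
  # Replace the placeholder with the full commit SHA that the v4 tag points to.
  uses: CodSpeedHQ/action@0123456789abcdef0123456789abcdef01234567 # v4
```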
```yaml
      log_level: debug

    - name: Run the benchmarks
      uses: CodSpeedHQ/action@v4
```
security (yaml.github-actions.security.third-party-action-not-pinned-to-commit-sha): An action sourced from a third-party repository on GitHub is not pinned to a full length commit SHA. Pinning an action to a full length commit SHA is currently the only way to use an action as an immutable release. Pinning to a particular SHA helps mitigate the risk of a bad actor adding a backdoor to the action's repository, as they would need to generate a SHA-1 collision for a valid Git object payload.
Source: opengrep
CodSpeed Performance Report

Congrats! CodSpeed is installed 🎉
You will start to see performance impacts in the reports once the benchmarks are run from your default branch.
Note: Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds CI benchmarks job using CodSpeed and pytest-codspeed, new benchmark tests and fixtures, mise tasks to install test deps and run benchmarks, updates pyproject markers/deps, ignores .codspeed, enables workflow concurrency cancellation, and removes Dependabot config for GitHub Actions.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
actor Runner as GitHub Actions Runner
participant CI as CI Workflow
participant Tests as Tests Matrix
participant Bench as Benchmarks Job
participant Mise as mise (task runner)
participant Cod as CodSpeed Service
participant Result as Final Result Job
Runner->>CI: workflow triggered
CI->>Tests: run test matrix
Tests-->>CI: success/failure
CI->>Bench: start benchmarks (needs: Tests)
Bench->>Mise: run task test:benchmarks
Mise->>Mise: _install:test (uv sync)
Mise->>Mise: pytest --codspeed tests/benchmarks
Mise->>Cod: upload instrumentation/results
Cod-->>Mise: instrumentation feedback
Mise-->>Bench: benchmarks completed
Bench-->>CI: success/failure
CI->>Result: aggregate (needs: Tests, Bench)
Result-->>Runner: final workflow status
    Note over Bench,Cod: CodSpeed requires CODSPEED_TOKEN for instrumentation
```
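Based on the flow above, a benchmarks job along these lines would match the diagram; the step names, action inputs, and versions here are illustrative assumptions rather than the workflow's exact contents:

```yaml
benchmarks:
  name: Benchmarks
  runs-on: ubuntu-latest
  needs: tests
  steps:
    # Reviews below recommend pinning these tags to full commit SHAs.
    - uses: actions/checkout@v5
    - uses: jdx/mise-action@v3
    - name: Run the benchmarks
      uses: CodSpeedHQ/action@v4
      with:
        run: mise run test:benchmarks
        token: ${{ secrets.CODSPEED_TOKEN }}
```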
Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem
Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Actionable comments posted: 3
♻️ Duplicate comments (1)
.github/workflows/ci.yaml (1)
122-153: Pin third‑party actions to full commit SHAs

Multiple actions are referenced by tags (e.g., v3/v4/v5). Pin to immutable SHAs for supply‑chain safety.
Example (illustrative):
```diff
-uses: jdx/mise-action@v3
+uses: jdx/mise-action@<FULL_COMMIT_SHA>
```

Actions to pin here:
- actions/checkout@v5
- actions/cache@v4
- jdx/mise-action@v3
- CodSpeedHQ/action@v4
- actions/download-artifact@v5
- actions/setup-python@v6
- codecov/codecov-action@v5
- codecov/test-results-action@v1
🧹 Nitpick comments (7)
.gitignore (1)
196-196: Ignore CodSpeed artifacts: prefer directory-anchored pattern

Use a root-anchored directory pattern to avoid accidentally matching files with the same name elsewhere.

```diff
-.codspeed
+/.codspeed/
```

.github/workflows/ci.yaml (2)
241-245: Result job should also gate on benchmarks’ outcome in the check step

You added benchmarks to needs, but the failure/cancel check below only covers tests and lint. Include benchmarks for consistency and explicit gating.

```diff
-      - name: Mark workflow as failed if lint or test did not pass or were cancelled
-        if: contains(fromJson('["failure", "cancelled"]'), needs.tests.result) || contains(fromJson('["failure", "cancelled"]'), needs.lint.result)
+      - name: Mark workflow as failed if lint, tests, or benchmarks did not pass or were cancelled
+        if: |
+          contains(fromJson('["failure","cancelled"]'), needs.tests.result) ||
+          contains(fromJson('["failure","cancelled"]'), needs.lint.result) ||
+          contains(fromJson('["failure","cancelled"]'), needs.benchmarks.result)
         run: exit 1
```
130-140: Cache scope nit: include UV dir explicitly

Caching ~/.cache works, but being explicit improves portability and avoids noise.

```diff
       path: |
-        ~/.cache
+        ~/.cache/uv
+        ~/.cache/pip
```

tests/benchmarks/test_queries.py (1)
10-23: Add a minimal correctness assertion

Benchmarks should still fail fast on bad results; assert presence of data to catch regressions without skewing timings.

```diff
 @pytest.mark.benchmark(group="queries")
 async def test_single(any_query: AnyQueryExecutor, raw_users: RawRecordData) -> None:
-    await maybe_async(
+    result = await maybe_async(
         any_query(
             """
             query GetUser($id: Int!) {
                 user(id: $id) {
                     name
                 }
             }
             """,
             {"id": raw_users[0]["id"]},
         )
     )
+    assert isinstance(result, dict) and "data" in result and result["data"].get("user"), "Empty or invalid GraphQL response"
```
49-94: `__all__` exports: consider exporting fixture names or dropping `__all__`

The functions are named fx_strawchemy_* but the actual fixture names are strawchemy_* via the decorator. Either export the fixture names for clarity or omit `__all__` (pytest doesn’t require it in conftest).

```diff
-from .fixtures import fx_strawchemy_mysql, fx_strawchemy_postgresql, fx_strawchemy_sqlite
+from .fixtures import fx_strawchemy_mysql, fx_strawchemy_postgresql, fx_strawchemy_sqlite
@@
-    "fx_strawchemy_mysql",
-    "fx_strawchemy_postgresql",
-    "fx_strawchemy_sqlite",
+    # Option A: keep fx_* or
+    # "strawchemy_mysql", "strawchemy_postgresql", "strawchemy_sqlite",
```

If not needed, you can drop `__all__` entirely to reduce maintenance.
tests/benchmarks/test_build_schema.py (2)
28-32: Bench not recorded and confusing type name

Wrap in `benchmark(...)` and avoid naming the create input "UserFilter", which collides semantically with the filter type.

Apply:

```diff
 @pytest.mark.benchmark(group="schema")
 @pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
-def test_build_user_create_input(strawchemy: Strawchemy) -> None:
-    strawchemy.create_input(User, include="all", override=True)(type("UserFilter", (), {}))
+def test_build_user_create_input(strawchemy: Strawchemy, benchmark: BenchmarkFixture) -> None:
+    benchmark(lambda: strawchemy.create_input(User, include="all", override=True)(type("UserCreateInput", (), {})))
```
17-17: Deduplicate backend parametrization (optional)

Define a module-level `BACKENDS = [lf(...), lf(...), lf(...)]` and reuse across tests for readability.

Also applies to: 23-23, 29-29, 35-35, 41-41
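A sketch of that suggestion, using the fixture names visible in the tests above. The `BACKENDS` constant is illustrative, not existing code, and the `User` model import is omitted since its location is not shown in this review:

```python
import pytest
from pytest_lazy_fixtures import lf

# Shared backend parametrization, defined once and reused by each schema-building benchmark.
BACKENDS = [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")]


@pytest.mark.benchmark(group="schema")
@pytest.mark.parametrize("strawchemy", BACKENDS)
def test_build_user_type(strawchemy, benchmark):
    # `User` is the model already used by these benchmarks; its import is omitted here.
    benchmark(lambda: strawchemy.type(User, include="all", override=True)(type("UserType", (), {})))
```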
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (10)
- .github/dependabot.yml (0 hunks)
- .github/workflows/ci.yaml (5 hunks)
- .gitignore (1 hunks)
- mise.toml (2 hunks)
- pyproject.toml (2 hunks)
- tasks.md (1 hunks)
- tests/benchmarks/conftest.py (1 hunks)
- tests/benchmarks/fixtures.py (1 hunks)
- tests/benchmarks/test_build_schema.py (1 hunks)
- tests/benchmarks/test_queries.py (1 hunks)
💤 Files with no reviewable changes (1)
- .github/dependabot.yml
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-14T11:55:31.383Z
Learnt from: CR
PR: gazorby/strawchemy#0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-09-14T11:55:31.383Z
Learning: Use mise to manage build and test tasks for the project
Applied to files:
mise.toml
🧬 Code graph analysis (4)
tests/benchmarks/conftest.py (2)
tests/integration/fixtures.py (19)
aiosqlite_engine (508-513), any_query (763-779), any_session (753-754), async_engine (597-598), async_mutation (731-738), async_session (602-608), asyncmy_engine (487-504), asyncpg_engine (517-535), db_features (684-685), dialect (679-680), engine (469-470), psycopg_async_engine (539-557), psycopg_engine (416-434), query_tracker (758-759), seed_db_async (660-672), seed_db_sync (644-656), session (474-480), sqlite_engine (438-445), sync_mutation (742-749)

tests/benchmarks/fixtures.py (3)

fx_strawchemy_mysql (18-19), fx_strawchemy_postgresql (8-9), fx_strawchemy_sqlite (13-14)
tests/benchmarks/fixtures.py (2)
tests/fixtures.py (1)
strawchemy (34-35)

src/strawchemy/mapper.py (1)
Strawchemy(60-722)
tests/benchmarks/test_build_schema.py (4)
tests/fixtures.py (1)
strawchemy (34-35)

src/strawchemy/strawberry/repository/_sync.py (1)

StrawchemySyncRepository (33-258)

src/strawchemy/mapper.py (1)

Strawchemy (60-722)

src/strawchemy/strawberry/_field.py (1)
order_by(278-282)
tests/benchmarks/test_queries.py (1)
tests/integration/fixtures.py (1)
any_query(763-779)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (17)
- GitHub Check: 🔬 unit-no-extras on 3.9
- GitHub Check: 🔬 unit-no-extras on 3.13
- GitHub Check: 🔬 integration-postgres on 3.10
- GitHub Check: 🔬 integration-sqlite on 3.11
- GitHub Check: 🔬 integration-sqlite on 3.12
- GitHub Check: 🔬 integration-mysql on 3.11
- GitHub Check: 🔬 integration-sqlite on 3.10
- GitHub Check: 🔬 integration-mysql on 3.12
- GitHub Check: 🔬 integration-postgres on 3.12
- GitHub Check: 🔬 integration-postgres on 3.11
- GitHub Check: 🔬 integration-postgres on 3.13
- GitHub Check: 🔬 integration-mysql on 3.13
- GitHub Check: 🔬 integration-mysql on 3.10
- GitHub Check: 🔬 integration-sqlite on 3.9
- GitHub Check: 🔬 integration-postgres on 3.9
- GitHub Check: 🔬 integration-sqlite on 3.13
- GitHub Check: 🔬 integration-mysql on 3.9
🔇 Additional comments (10)
tasks.md (1)
149-156: Docs for benchmarks command look good

Clear, matches mise task and CI usage.
mise.toml (2)
50-54: Dedicated test-only sync is a good split

Keeps benches lightweight and reproducible.

157-161: Bench task wiring looks correct

CodSpeed runs via pytest against tests/benchmarks without xdist; good.
pyproject.toml (2)
83-84: Add pytest-codspeed: LGTM

Dependency belongs in the test group.

300-307: Benchmark marker registered

Consistent with usage in tests/benchmarks.
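For context, such a marker registration in pyproject.toml conventionally looks like the snippet below; the exact wording of the project's entry at lines 300-307 may differ:

```toml
[tool.pytest.ini_options]
markers = [
    # Registering the marker silences pytest's unknown-marker warning.
    "benchmark: performance benchmarks collected by CodSpeed",
]
```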
tests/benchmarks/fixtures.py (1)
7-19: Fixtures are minimal and clear

Good separation per dialect; type hints help.
tests/benchmarks/test_build_schema.py (4)
40-59: LGTM: good use of benchmark fixture to measure field construction

This test is correctly grouped and exercises filter/order/distinct_on integration with a sync repository.

34-38: Benchmark the PK-update input builder — verify API first

Test calls create_input and doesn't use pytest-benchmark; it should measure the PK-update input builder (pk_update_input or update_input).
If the API is pk_update_input, replace and benchmark like:
```diff
 @pytest.mark.benchmark(group="schema")
 @pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
-def test_build_user_pk_update_input(strawchemy: Strawchemy) -> None:
-    strawchemy.create_input(User, include="all", override=True)(type("UserFilter", (), {}))
+def test_build_user_pk_update_input(strawchemy: Strawchemy, benchmark: BenchmarkFixture) -> None:
+    benchmark(lambda: strawchemy.pk_update_input(User, include="all", override=True)(type("UserPkUpdateInput", (), {})))
```

If the API is update_input, swap pk_update_input accordingly.
Quick verification: search the repo for the correct symbol (e.g. rg -n --hidden -S 'pk_update_input|update_input|create_input' -g '!/dist/') and confirm the builder name before applying changes.
6-6: Fix import: use pytest-lazy-fixture’s canonical module

This will import-error. The correct module is `pytest_lazyfixture` and the symbol is `lazy_fixture`.

```diff
-from pytest_lazy_fixtures import lf
+from pytest_lazyfixture import lazy_fixture as lf
```

Likely an incorrect or invalid review comment.
1-60: Fixtures present; pytest-lazy-fixtures only in uv.lock
strawchemy_postgresql/sqlite/mysql are defined in tests/benchmarks/fixtures.py; pytest_lazy_fixtures appears in uv.lock and is imported in tests (tests/benchmarks/test_build_schema.py, tests/integration/fixtures.py) — confirm it's declared in your primary dependency file (pyproject.toml / requirements) if needed.
```python
import pytest
from pytest_lazy_fixtures import lf
from strawchemy import StrawchemySyncRepository
```
💡 Verification agent
🧩 Analysis chain
Verify public export for StrawchemySyncRepository
Confirm StrawchemySyncRepository is re-exported from the package root; otherwise import from its submodule.
Run:
🏁 Script executed:
```bash
#!/usr/bin/env bash
set -euo pipefail
rg -nP 'class\s+StrawchemySyncRepository\b' -S
rg -nP 'from\s+strawchemy\s+import\s+.*StrawchemySyncRepository|__all__.*StrawchemySyncRepository' src -S
```

Length of output: 224
Re-export StrawchemySyncRepository from package root or import it from its submodule
StrawchemySyncRepository is defined at src/strawchemy/strawberry/repository/_sync.py and exported in src/strawchemy/strawberry/repository/__init__.py, but there’s no evidence it’s re-exported from the package root — update tests/benchmarks/test_build_schema.py (line 7) to:
from strawchemy.strawberry.repository import StrawchemySyncRepository
or add to src/strawchemy/__init__.py:
from .strawberry.repository import StrawchemySyncRepository
🤖 Prompt for AI Agents
In tests/benchmarks/test_build_schema.py around line 7, the test imports
StrawchemySyncRepository from the package root but that symbol is not
re-exported there; either change the import in this test to import
StrawchemySyncRepository from strawchemy.strawberry.repository, or add a
re-export in src/strawchemy/__init__.py that imports StrawchemySyncRepository
from .strawberry.repository so the symbol is available at the package root;
implement one of these two fixes so the import resolves.
```python
@pytest.mark.benchmark(group="schema")
@pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
def test_build_user_type(strawchemy: Strawchemy) -> None:
    strawchemy.type(User, include="all", override=True)(type("UserType", (), {}))
```
Bench not recorded: wrap in benchmark fixture
CodSpeed records only benchmark(...) calls. This test currently won’t produce metrics.
Apply:
```diff
 @pytest.mark.benchmark(group="schema")
 @pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
-def test_build_user_type(strawchemy: Strawchemy) -> None:
-    strawchemy.type(User, include="all", override=True)(type("UserType", (), {}))
+def test_build_user_type(strawchemy: Strawchemy, benchmark: BenchmarkFixture) -> None:
+    benchmark(lambda: strawchemy.type(User, include="all", override=True)(type("UserType", (), {})))
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 @pytest.mark.benchmark(group="schema")
 @pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
-def test_build_user_type(strawchemy: Strawchemy) -> None:
-    strawchemy.type(User, include="all", override=True)(type("UserType", (), {}))
+def test_build_user_type(strawchemy: Strawchemy, benchmark: BenchmarkFixture) -> None:
+    benchmark(lambda: strawchemy.type(User, include="all", override=True)(type("UserType", (), {})))
```
🤖 Prompt for AI Agents
In tests/benchmarks/test_build_schema.py around lines 16 to 20, the test
currently calls strawchemy.type(...) directly so CodSpeed won’t record anything;
change the test signature to accept the benchmark fixture and wrap the operation
in a benchmark call (e.g., def test_build_user_type(strawchemy: Strawchemy,
benchmark): and then call benchmark(lambda: strawchemy.type(User, include="all",
override=True)(type("UserType", (), {})))) so the benchmark framework actually
records the metric.
```python
@pytest.mark.benchmark(group="schema")
@pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
def test_build_user_filter(strawchemy: Strawchemy) -> None:
    strawchemy.filter(User, include="all", override=True)(type("UserFilter", (), {}))
```
Bench not recorded: wrap filter build in benchmark
Same issue as above.
Apply:
```diff
 @pytest.mark.benchmark(group="schema")
 @pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
-def test_build_user_filter(strawchemy: Strawchemy) -> None:
-    strawchemy.filter(User, include="all", override=True)(type("UserFilter", (), {}))
+def test_build_user_filter(strawchemy: Strawchemy, benchmark: BenchmarkFixture) -> None:
+    benchmark(lambda: strawchemy.filter(User, include="all", override=True)(type("UserFilter", (), {})))
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 @pytest.mark.benchmark(group="schema")
 @pytest.mark.parametrize("strawchemy", [lf("strawchemy_postgresql"), lf("strawchemy_sqlite"), lf("strawchemy_mysql")])
-def test_build_user_filter(strawchemy: Strawchemy) -> None:
-    strawchemy.filter(User, include="all", override=True)(type("UserFilter", (), {}))
+def test_build_user_filter(strawchemy: Strawchemy, benchmark: BenchmarkFixture) -> None:
+    benchmark(lambda: strawchemy.filter(User, include="all", override=True)(type("UserFilter", (), {})))
```
🤖 Prompt for AI Agents
In tests/benchmarks/test_build_schema.py around lines 22-26, the filter build
call is executed directly so the benchmark framework doesn't record it; change
the test to accept the pytest-benchmark fixture (add a benchmark parameter) and
wrap the construction in a benchmark call, e.g. call benchmark(lambda:
strawchemy.filter(User, include="all", override=True)(type("UserFilter", (),
{}))), so the benchmark measures the filter build rather than running it outside
the benchmark harness.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/ci.yaml (1)
240-258: Bench failures don’t gate release; include benchmarks in failure check and guard release on success().

Result job runs with always() and only checks tests/lint. A failed benchmarks job won’t trigger exit 1, so release can still fire on tags.
Apply:
```diff
       - name: Mark workflow as failed if lint or test did not pass or were cancelled
-        if: contains(fromJson('["failure", "cancelled"]'), needs.tests.result) || contains(fromJson('["failure", "cancelled"]'), needs.lint.result)
+        if: contains(fromJson('["failure","cancelled"]'), needs.tests.result) || contains(fromJson('["failure","cancelled"]'), needs.lint.result) || contains(fromJson('["failure","cancelled"]'), needs.benchmarks.result)
         run: exit 1
       - name: Invoke release workflow
-        if: github.ref_type == 'tag'
+        if: github.ref_type == 'tag' && success()
```

Also applies to: 248-249, 250-255
♻️ Duplicate comments (2)
.github/workflows/ci.yaml (2)
86-90: Cache key interpolation formatting looks correct now.

The earlier “extra }” issue appears resolved.

152-156: Pin third‑party actions to full commit SHAs.

Security best practice; prior bot flagged similar. Pin these: fkirc/skip-duplicate-actions, jdx/mise-action, docker/setup-buildx-action, CodSpeedHQ/action, codecov/codecov-action, codecov/test-results-action, benc-uk/workflow-dispatch. Keep major tag as a comment.
Example for CodSpeed step:
```diff
-      - name: Run the benchmarks
-        uses: CodSpeedHQ/action@v4
+      - name: Run the benchmarks
+        uses: CodSpeedHQ/action@<full-commit-sha> # v4
```

Also applies to: 28-31, 47-51, 75-76, 92-96, 130-133, 146-150, 186-191, 213-219, 255-257
🧹 Nitpick comments (4)
.github/workflows/ci.yaml (4)
130-133: Remove Buildx in benchmarks (unused).

No Docker build here; drop to speed up and reduce surface.

```diff
-      - name: Set up Docker Buildx
-        id: docker-buildx
-        uses: docker/setup-buildx-action@v3
```
52-57: Use the canonical GITHUB_OUTPUT write; avoid tee and add quotes.

Prevents subtle formatting bugs with spaces/newlines.

```diff
-      - name: Generate test matrix
-        id: set-matrix
-        shell: bash
-        run: |
-          echo sessions=$(mise run ci:test-matrix) | tee --append $GITHUB_OUTPUT
+      - name: Generate test matrix
+        id: set-matrix
+        shell: bash
+        run: |
+          echo "sessions=$(mise run ci:test-matrix)" >> "$GITHUB_OUTPUT"
```
184-191: Typo + pinning: “Codecov” and pin action to SHA.

Fix step name; pin action.

```diff
-      - name: Upload coverage to Codeocv
+      - name: Upload coverage to Codecov
-        uses: codecov/codecov-action@v5
+        uses: codecov/codecov-action@<full-commit-sha> # v5
```
15-18: Set minimal permissions; override per-job where needed.

Harden the workflow with least-privilege defaults. Grant extra permissions (e.g., checks: write, pull-requests: write) only to jobs that need them (Codecov test-results, release dispatch).

```diff
 concurrency:
   group: ${{ github.head_ref || github.run_id }}
   cancel-in-progress: true
+
+permissions:
+  contents: read
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/ci.yaml(5 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: 🔬 integration-postgres on 3.9
- GitHub Check: 🔬 integration-postgres on 3.10
- GitHub Check: 🔬 integration-postgres on 3.11
- GitHub Check: 🔬 integration-postgres on 3.13
🔇 Additional comments (1)
.github/workflows/ci.yaml (1)
122-157: Benchmarks job scaffold looks good.

Wired after tests, caches are scoped, and CodSpeed instrumentation mode with mise run is aligned with the PR.
Description
Types of Changes
Issues Fixed or Closed by This PR
Checklist
Summary by Sourcery
Integrate performance benchmarking into the CI pipeline using CodSpeed and add support for running benchmarks via pytest and mise.
New Features:
Build:
CI:
Documentation:
Tests:
Summary by CodeRabbit
Tests
Chores
Documentation