Propagate LLVM env into Rust coverage runs #253
Conversation
Reviewer's Guide

Propagates LLVM codegen configuration for coverage runs via environment overrides instead of `cargo --config` flags, and extends tests to validate env propagation, coverage env computation, and `_run_cargo` behavior.

Sequence diagram for Rust coverage env propagation via `_run_cargo`

```mermaid
sequenceDiagram
    actor GitHubActions
    participant run_rust_py as run_rust.py
    participant get_cargo_coverage_env
    participant _run_cargo
    participant cargo
    GitHubActions->>run_rust_py: main(manifest_path, fmt, use_nextest, ...)
    run_rust_py->>get_cargo_coverage_env: get_cargo_coverage_env(manifest_path)
    get_cargo_coverage_env-->>run_rust_py: cargo_env
    run_rust_py->>_run_cargo: _run_cargo(args, env_overrides=cargo_env)
    alt env_overrides is not None
        _run_cargo->>_run_cargo: merge os_environ with env_overrides
    else
        _run_cargo->>_run_cargo: use os_environ unchanged
    end
    _run_cargo->>cargo: invoke cargo llvm-cov with env
    cargo-->>_run_cargo: stdout
    _run_cargo-->>run_rust_py: stdout
    opt with_cucumber_rs
        run_rust_py->>get_cargo_coverage_env: get_cargo_coverage_env(manifest_path)
        get_cargo_coverage_env-->>run_rust_py: cargo_env
        run_rust_py->>_run_cargo: _run_cargo(cucumber_args, env_overrides=cargo_env)
        _run_cargo->>cargo: invoke cargo test with env
        cargo-->>_run_cargo: stdout
        _run_cargo-->>run_rust_py: stdout
    end
```
Flow diagram for cargo coverage env computation and use

```mermaid
flowchart TD
    A[Start get_cargo_coverage_env] --> B[Call _uses_cranelift_backend with manifest_path]
    B -->|Project uses Cranelift| C[Return env map: CARGO_PROFILE_DEV_CODEGEN_BACKEND=llvm, CARGO_PROFILE_TEST_CODEGEN_BACKEND=llvm]
    B -->|Project does not use Cranelift| D[Return empty env map]
    C --> E[main receives nonempty cargo_env]
    D --> E
    E --> F[Call _run_cargo with args and env_overrides=cargo_env]
    F --> G{env_overrides is None?}
    G -->|Yes| H[Set env to None, use process default environment]
    G -->|No| I[Create env by copying os.environ and updating with env_overrides]
    H --> J[Spawn cargo subprocess with env]
    I --> J
    J --> K[Stream cargo output, collect stdout]
    K --> L[Return stdout to caller]
```
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info

⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID:

📒 Files selected for processing (2)
Walkthrough

Detect Cranelift from

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    rect rgba(200,200,255,0.5)
        participant Script as "run_rust.py"
    end
    rect rgba(200,255,200,0.5)
        participant Detector as "get_cargo_coverage_env(manifest)"
    end
    rect rgba(255,230,200,0.5)
        participant Env as "os.environ"
    end
    rect rgba(255,200,200,0.5)
        participant Cargo as "cargo (popen)"
    end
    Script->>Detector: request env_overrides(manifest_path)
    Detector-->>Script: return {} or {CARGO_PROFILE_DEV_CODEGEN_BACKEND: "llvm", CARGO_PROFILE_TEST_CODEGEN_BACKEND: "llvm"}
    Script->>Env: copy current environment
    Script->>Script: remove keys in env_unsets, then apply env_overrides
    Script->>Cargo: popen(args=["llvm-cov", ...], env=merged_env)
    Cargo-->>Script: stream output / exit code
    alt cucumber follow-up
        Script->>Detector: re-derive env_overrides for cucumber manifest
        Script->>Cargo: popen(args=cucumber_args, env=merged_env)
        Cargo-->>Script: stream output / exit code
    end
```
🚥 Pre-merge checks | ✅ 6 | ❌ 1

❌ Failed checks (1 warning)

✅ Passed checks (6 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
@coderabbitai Have the following failed checks now been resolved? If further work is required, please provide an AI agent prompt for the remaining work to be done to address these failures.

❌ Failed checks (3 warnings)
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix. Include the file and symbol names indicated in the issue at the head of your response.

.github/actions/generate-coverage/scripts/run_rust.py

Comment on lines +131 to +152

```python
def _manifest_uses_cranelift_backend(manifest_path: Path) -> bool:
    """Return ``True`` when ``manifest_path`` configures Cranelift in profiles."""
    try:
        content = manifest_path.read_text(encoding="utf-8")
    except (OSError, UnicodeDecodeError):
        return False
    in_profile_section = False
    for line in content.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue
        section_match = re.match(r"^\[(?P<section>[^\]]+)\]$", stripped)
        if section_match:
            section = section_match.group("section").strip()
            in_profile_section = section == "profile" or section.startswith("profile.")
            continue
        if not in_profile_section:
            continue
        if re.match(r'^[ \t]*codegen-backend\s*=\s*["\']cranelift["\']', line):
            return True
    return False
```

❌ New issue: Complex Method
@coderabbitai Have the following now been resolved?

.github/actions/generate-coverage/tests/test_scripts.py (1)

Patch sketch

```diff
- monkeypatch.delenv("CARGO_PROFILE_DEV_CODEGEN_BACKEND", raising=False)
- monkeypatch.delenv("CARGO_PROFILE_TEST_CODEGEN_BACKEND", raising=False)
+ monkeypatch.setenv("CARGO_PROFILE_DEV_CODEGEN_BACKEND", "cranelift")
+ monkeypatch.delenv("CARGO_PROFILE_TEST_CODEGEN_BACKEND", raising=False)
@@
  assert cargo_args == expected_args
  assert "CARGO_PROFILE_DEV_CODEGEN_BACKEND" not in cargo_call.env
```

🤖 Prompt for AI Agents

.github/actions/generate-coverage/README.md (1)

Corrected examples

```yaml
- uses: ./.github/actions/generate-coverage
  with:
    output-path: coverage.xml
    format: cobertura
```

Alternatively, if the action is published to a separate repository, reference it as

🤖 Prompt for AI Agents
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix. Include the file and symbol names indicated in the issue at the head of your response.

Complex Method

.github/actions/generate-coverage/scripts/run_rust.py: _run_cargo

What led to the degradation?
_run_cargo increases in cyclomatic complexity from 11 to 13, threshold = 9

Why does this problem occur?
A Complex Method has a high cyclomatic complexity. The recommended threshold for the Python language is a cyclomatic complexity lower than 9.

How to fix it?
There are many reasons for Complex Method. Sometimes, another design approach is beneficial, such as a) modeling state using an explicit state machine rather than conditionals, or b) using table lookup rather than long chains of logic. In other scenarios, the function can be split using EXTRACT FUNCTION. Just make sure you extract natural and cohesive functions. Complex Methods can also be addressed by identifying complex conditional expressions and then using the DECOMPOSE CONDITIONAL refactoring.

Helpful refactoring examples
To get a general understanding of what this code health issue looks like - and how it might be addressed - we have prepared some diffs for illustrative purposes.

SAMPLE

```diff
 # complex_method.js
 function postItem(item) {
   if (!item.id) {
-    if (item.x != null && item.y != null) {
-      post(item);
-    } else {
-      throw Error("Item must have x and y");
-    }
+    // extract a separate function for creating new item
+    postNew(item);
   } else {
-    if (item.x < 10 && item.y > 25) {
-      put(item);
-    } else {
-      throw Error("Item must have an x and y value between 10 and 25");
-    }
+    // and one for updating existing items
+    updateItem(item);
   }
 }
+
+function postNew(item) {
+  validateNew(item);
+  post(item);
+}
+
+function updateItem(item) {
+  validateUpdate(item);
+  put(item);
+}
```
Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)

.github/actions/generate-coverage/scripts/tests/test_run_rust_windows.py (1)

103-109: ⚠️ Potential issue | 🟠 Major

Pass a real future deadline into the pump helper.

`deadline=1.0` is already in the past on any real clock, so this test only passes when both reader threads drain before `_poll_pump_loop_iteration()` performs its first timeout check. Derive the deadline from the helper's clock, or stub that clock explicitly, so the test stops depending on thread scheduling.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/actions/generate-coverage/scripts/tests/test_run_rust_windows.py around lines 103 - 109, Test passes a past deadline (deadline=1.0) so make the deadline real by deriving it from a monotonic clock and pass that into run_rust_typed._pump_cargo_output_windows (or stub the helper clock); e.g., compute deadline = time.monotonic() + 1.0 (import time in the test) and call _pump_cargo_output_windows(dummy_proc, stdout_stream, stderr_stream, deadline=deadline, wait_timeout=1.0) so _poll_pump_loop_iteration() sees a future deadline rather than an expired one.
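A minimal sketch of the suggested fix, assuming the helper treats `deadline` as a `time.monotonic()` timestamp; `make_deadline` and `deadline_expired` are hypothetical names, not the repository's API.

```python
import time


def make_deadline(budget_seconds: float) -> float:
    # Anchoring the deadline to the monotonic clock means every later
    # monotonic read compares against it consistently, regardless of
    # wall-clock adjustments (NTP, manual changes).
    return time.monotonic() + budget_seconds


def deadline_expired(deadline: float) -> bool:
    # Monotonic time never goes backwards, so this check is stable.
    return time.monotonic() >= deadline
```

A test built on `make_deadline(1.0)` always starts with a genuinely future deadline, unlike a hard-coded `deadline=1.0`.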
♻️ Duplicate comments (1)
.github/actions/generate-coverage/scripts/_cargo_runner.py (1)
71-76: ⚠️ Potential issue | 🟠 Major

Base the timeout budget on `time.monotonic()`.

Carry the deadline on `time.monotonic()` end-to-end. `time.time()` follows the wall clock, so an NTP or manual clock adjustment can make `_pump_cargo_output()`/`_wait_for_cargo()` kill `cargo` early or let it run past `RUN_RUST_CARGO_WAIT_TIMEOUT`.

Patch sketch

```diff
- remaining = max(0.0, deadline - time.time())
+ remaining = max(0.0, deadline - time.monotonic())
 ...
- if time.time() >= deadline:
+ if time.monotonic() >= deadline:
 ...
- timeout = max(0.0, deadline - time.time())
+ timeout = max(0.0, deadline - time.monotonic())
 ...
- return proc.wait(timeout=max(0.0, deadline - time.time()))
+ return proc.wait(timeout=max(0.0, deadline - time.monotonic()))
 ...
- deadline = time.time() + wait_timeout
+ deadline = time.monotonic() + wait_timeout
```

```bash
#!/bin/bash
sed -n '57,77p;177,224p;283,379p' .github/actions/generate-coverage/scripts/_cargo_runner.py
rg -n "time\.(time|monotonic)\(" .github/actions/generate-coverage/scripts/_cargo_runner.py
```

Also applies to: 212-216, 291-294, 367-379
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/actions/generate-coverage/scripts/_cargo_runner.py around lines 71 - 76, The timeout logic must use a monotonic clock end-to-end: change construction and checks of the deadline and remaining time in _pump_cargo_output() and _wait_for_cargo() to use time.monotonic() (compute deadline = time.monotonic() + wait_timeout and later compute remaining = max(0.0, deadline - time.monotonic()) instead of time.time()), and replace any time.time() usage that influences join_timeout/ _raise_cargo_timeout() decisions (the variables remaining, join_timeout, deadline) so the proc/threads join and the call to _raise_cargo_timeout(proc, wait_timeout=wait_timeout) are based on monotonic time rather than the wall clock.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In @.github/actions/generate-coverage/scripts/tests/test_run_rust_windows.py:
- Around line 103-109: Test passes a past deadline (deadline=1.0) so make the
deadline real by deriving it from a monotonic clock and pass that into
run_rust_typed._pump_cargo_output_windows (or stub the helper clock); e.g.,
compute deadline = time.monotonic() + 1.0 (import time in the test) and call
_pump_cargo_output_windows(dummy_proc, stdout_stream, stderr_stream,
deadline=deadline, wait_timeout=1.0) so _poll_pump_loop_iteration() sees a
future deadline rather than an expired one.
---
Duplicate comments:
In @.github/actions/generate-coverage/scripts/_cargo_runner.py:
- Around line 71-76: The timeout logic must use a monotonic clock end-to-end:
change construction and checks of the deadline and remaining time in
_pump_cargo_output() and _wait_for_cargo() to use time.monotonic() (compute
deadline = time.monotonic() + wait_timeout and later compute remaining =
max(0.0, deadline - time.monotonic()) instead of time.time()), and replace any
time.time() usage that influences join_timeout/ _raise_cargo_timeout() decisions
(the variables remaining, join_timeout, deadline) so the proc/threads join and
the call to _raise_cargo_timeout(proc, wait_timeout=wait_timeout) are based on
monotonic time rather than the wall clock.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 2f48db7c-3202-4136-9a50-54cf5a4561af
📒 Files selected for processing (3)
.github/actions/generate-coverage/scripts/_cargo_runner.py
.github/actions/generate-coverage/scripts/tests/test_run_rust_windows.py
.github/actions/generate-coverage/tests/test_scripts.py
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/actions/generate-coverage/scripts/_cargo_runner.py:
- Around line 266-292: The timeout uses time.time(), which can be skewed; update
the deadline arithmetic to use time.monotonic() instead: in _wait_for_cargo (and
any other functions in this module that compute deadlines or time remaining)
replace time.time() with time.monotonic() so proc.wait(timeout=max(0.0, deadline
- time.monotonic())) uses a monotonic clock; ensure any callers that construct
the deadline also use time.monotonic() (so deadline and comparisons are computed
from the same monotonic reference) and keep existing behavior of calling
_raise_cargo_timeout(proc, wait_timeout=wait_timeout).
In @.github/actions/generate-coverage/scripts/tests/test_run_rust_windows.py:
- Around line 48-54: The ctx parameter on _pump_cargo_output_windows is typed
too loosely as object; tighten it by declaring/using a structural protocol or
the existing _CargoProcCtx type so callers and IDEs get proper attributes.
Update the test module to either define a small Protocol with the exact
attributes/methods used by _pump_cargo_output_windows (e.g., attributes
referenced inside the function) and annotate ctx: ThatProtocol, or import and
annotate ctx: _CargoProcCtx if that internal type is available to tests; keep
the function name _pump_cargo_output_windows and only change the ctx annotation
and any imports accordingly.
- Around line 65-66: Remove the unused poll() method from the _DummyProc test
helper: update the class _DummyProc in test_run_rust_windows.py to only
implement the protocol methods kill() and wait(), since
_pump_cargo_output_windows and the test suite never call poll() and the
_SupportsKillWait protocol defines only kill() and wait(); this prevents a
misleading API surface in the test helper.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: dfa83806-3074-42f9-8713-074329c1766e
📒 Files selected for processing (2)
.github/actions/generate-coverage/scripts/_cargo_runner.py
.github/actions/generate-coverage/scripts/tests/test_run_rust_windows.py
```python
def _pump_cargo_output_windows(
    self,
    proc: _SupportsKillWait,
    stdout_stream: typ.IO[str],
    stderr_stream: typ.IO[str],
    ctx: object,
) -> list[str]:
    """Mirror of the helper under test."""
```
🧹 Nitpick | 🔵 Trivial
Tighten the ctx type annotation in the protocol.
ctx: object is permissive. For better documentation and IDE support, declare a structural protocol or import _CargoProcCtx and use it directly (even in a test-only protocol).
♻️ Proposed change
```diff
+class _CargoProcCtxProtocol(typ.Protocol):
+    proc: _SupportsKillWait
+    deadline: float
+    wait_timeout: float
+
 class _RunRustModule(typ.Protocol):
     """Subset of the run_rust module used by the tests."""

     sys: typ.Any

     def _pump_cargo_output_windows(
         self,
         stdout_stream: typ.IO[str],
         stderr_stream: typ.IO[str],
-        ctx: object,
+        ctx: _CargoProcCtxProtocol,
     ) -> list[str]:
         """Mirror of the helper under test."""
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/actions/generate-coverage/scripts/tests/test_run_rust_windows.py
around lines 48 - 54, The ctx parameter on _pump_cargo_output_windows is typed
too loosely as object; tighten it by declaring/using a structural protocol or
the existing _CargoProcCtx type so callers and IDEs get proper attributes.
Update the test module to either define a small Protocol with the exact
attributes/methods used by _pump_cargo_output_windows (e.g., attributes
referenced inside the function) and annotate ctx: ThatProtocol, or import and
annotate ctx: _CargoProcCtx if that internal type is available to tests; keep
the function name _pump_cargo_output_windows and only change the ctx annotation
and any imports accordingly.
@coderabbitai Has this now been resolved in the latest commit?
Use codegraph analysis to determine your answer.
If this comment is now resolved, please mark it as such using the API. Otherwise, please provide an AI agent prompt for the remaining work to be done to address this comment.
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
cat .github/actions/generate-coverage/scripts/tests/test_run_rust_windows.py
```

Length of output: 3388
@leynos, the comment is not yet resolved. The current HEAD of test_run_rust_windows.py still has ctx: object on line 52 of _RunRustModule._pump_cargo_output_windows.
Here is the agent prompt to address it:
In `.github/actions/generate-coverage/scripts/tests/test_run_rust_windows.py`,
tighten the `ctx` parameter annotation on `_RunRustModule._pump_cargo_output_windows`
from `object` to a structural Protocol that mirrors `_CargoProcCtx`.
Steps:
1. Above the `_RunRustModule` class, define a new Protocol class:
```python
class _CargoProcCtxProtocol(typ.Protocol):
"""Structural protocol mirroring _CargoProcCtx for static typing."""
proc: _SupportsKillWait
deadline: float
wait_timeout: float
```
2. In `_RunRustModule._pump_cargo_output_windows`, change the `ctx` parameter
annotation from `object` to `_CargoProcCtxProtocol`:
```python
def _pump_cargo_output_windows(
self,
stdout_stream: typ.IO[str],
stderr_stream: typ.IO[str],
ctx: _CargoProcCtxProtocol,
) -> list[str]:
"""Mirror of the helper under test."""
```
3. Do not change any other logic, imports, or tests — `_DummyProc` already
satisfies `_SupportsKillWait`, so `run_rust._CargoProcCtx(proc=dummy_proc, ...)`
will continue to satisfy the new protocol at runtime without further changes.
Verify: confirm `_SupportsKillWait` is already declared before `_CargoProcCtxProtocol`
in the file so the forward reference is valid.
```python
def poll(self) -> None:
    return None
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Search for poll() calls in _cargo_runner.py to confirm usage
rg -n '\.poll\(' .github/actions/generate-coverage/scripts/_cargo_runner.py
```

Repository: leynos/shared-actions
Length of output: 47

🏁 Script executed:

```bash
rg -n 'poll' .github/actions/generate-coverage/scripts/tests/test_run_rust_windows.py
```

Repository: leynos/shared-actions
Length of output: 95

🏁 Script executed:

```bash
rg -n '\.poll\(' .github/actions/generate-coverage/scripts/tests/
```

Repository: leynos/shared-actions
Length of output: 47

🏁 Script executed:

```bash
cat -n .github/actions/generate-coverage/scripts/tests/test_run_rust_windows.py | head -100
```

Repository: leynos/shared-actions
Length of output: 3683

🏁 Script executed:

```bash
sed -n '48,120p' .github/actions/generate-coverage/scripts/_cargo_runner.py
```

Repository: leynos/shared-actions
Length of output: 2457

🏁 Script executed:

```bash
sed -n '120,180p' .github/actions/generate-coverage/scripts/_cargo_runner.py
```

Repository: leynos/shared-actions
Length of output: 1895
Remove the poll() method from _DummyProc.
The poll() method is never invoked by _pump_cargo_output_windows or anywhere else in the test suite. The _SupportsKillWait protocol explicitly defines only kill() and wait(), not poll(). Remove the unused method to avoid misleading maintainers about the expected contract.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/actions/generate-coverage/scripts/tests/test_run_rust_windows.py
around lines 65 - 66, Remove the unused poll() method from the _DummyProc test
helper: update the class _DummyProc in test_run_rust_windows.py to only
implement the protocol methods kill() and wait(), since
_pump_cargo_output_windows and the test suite never call poll() and the
_SupportsKillWait protocol defines only kill() and wait(); this prevents a
misleading API surface in the test helper.
@coderabbitai Has this now been resolved in the latest commit?
Use codegraph analysis to determine your answer.
If this comment is now resolved, please mark it as such using the API. Otherwise, please provide an AI agent prompt for the remaining work to be done to address this comment.
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Check whether poll() is still present in _DummyProc in the latest commit
rg -n 'poll' .github/actions/generate-coverage/scripts/tests/test_run_rust_windows.py
```

Length of output: 30
@leynos, the poll() method is still present on _DummyProc at Line 65 — this comment is not yet resolved.
Here is an AI agent prompt to address it:
In `.github/actions/generate-coverage/scripts/tests/test_run_rust_windows.py`, remove the unused `poll()` method from the `_DummyProc` class (currently at line 65–66):
```python
def poll(self) -> None:
    return None
```
`_pump_cargo_output_windows` never calls `poll()` on the process object, and the `_SupportsKillWait` protocol only declares `kill()` and `wait()`. Removing this method keeps `_DummyProc`'s API surface honest and avoids misleading future maintainers about what the Windows pump requires of its process argument.
No other changes are needed.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/actions/generate-coverage/scripts/_cargo_runner.py:
- Around line 142-146: The polling loop that calls
_poll_pump_loop_iteration(threads, ctx, thread_exceptions) must be wrapped in a
try/finally so _finalise_pump_threads(threads, ctx.proc, thread_exceptions)
always runs even if _poll_pump_loop_iteration raises (e.g. via
_raise_cargo_timeout()); replace the bare while-not loop with a try: while not
_poll_pump_loop_iteration(...): pass finally: _finalise_pump_threads(...) to
ensure non-daemon pump threads are joined on timeout or any exception.
In @.github/actions/generate-coverage/tests/test_scripts.py:
- Around line 881-888: The test's fake_run_cargo currently ignores env_overrides
so the test can't detect whether cargo_env was forwarded; modify fake_run_cargo
in the test to capture the env_overrides (e.g., store it into a
nonlocal/captured variable or list) and return the same result, then after
calling main() add an assertion that the captured env_overrides equals the
expected cargo_env; apply the same capture-and-assert change to the other
fake_run_cargo instance referenced around lines 914-917 so both sites verify env
reuse.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: ed900854-8f50-4b51-9d70-f045bf217d76
📒 Files selected for processing (4)
.github/actions/generate-coverage/scripts/_cargo_runner.py
.github/actions/generate-coverage/scripts/_cranelift.py
.github/actions/generate-coverage/scripts/tests/test_run_rust_windows.py
.github/actions/generate-coverage/tests/test_scripts.py
Actionable comments posted: 1
♻️ Duplicate comments (3)
.github/actions/generate-coverage/tests/test_scripts.py (2)
973-1067: 🧹 Nitpick | 🔵 Trivial

Consider extracting helper functions to reduce method length.

Static analysis flags this test at 73 lines (threshold 70). Extract the `FakeThread`, `FakeProc`, `FakeRunner`, and `FakeCargoCommand` classes into module-level fixtures or a shared factory to trim the test body.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/actions/generate-coverage/tests/test_scripts.py around lines 973 - 1067, The test function test_run_cargo_windows_pump_timeout is too long and should extract the helper classes to reduce method length; move FakeThread, FakeProc, FakeRunner, and FakeCargoCommand out of the test body into module-level fixtures or shared factory functions (e.g., create_fake_thread, create_fake_proc, fake_runner_factory or pytest fixtures named fake_thread, fake_proc, fake_runner, fake_cargo_command) and update the test to instantiate or use those fixtures, keeping the test logic in test_run_cargo_windows_pump_timeout focused on setup, invocation of _run_cargo, and assertions.
915-922: ⚠️ Potential issue | 🟡 Minor

Capture `env_overrides` to verify `_run_cargo` receives the correct env.

The test verifies `cargo_env` is threaded to cucumber but not that the primary `_run_cargo` call receives it. Capture and assert `env_overrides` to ensure end-to-end correctness.

🔧 Proposed fix

```diff
 cargo_env = {"CARGO_PROFILE_DEV_CODEGEN_BACKEND": "llvm"}
 env_calls: list[Path] = []
 cucumber_calls: list[dict[str, object]] = []
+run_cargo_calls: list[dict[str, object]] = []

 def fake_get_cargo_coverage_env(manifest_path: Path) -> dict[str, str]:
     env_calls.append(manifest_path)
     return cargo_env

 def fake_run_cargo(
     args: list[str],
     *,
     env_overrides: typ.Mapping[str, str] | None = None,
     env_unsets: typ.Iterable[str] = (),
 ) -> str:
-    _ = (args, env_overrides, env_unsets)
+    run_cargo_calls.append({
+        "args": args,
+        "env_overrides": dict(env_overrides or {}),
+        "env_unsets": tuple(env_unsets),
+    })
     return "Coverage: 100%"
@@
 assert env_calls == [manifest_path]
+assert len(run_cargo_calls) >= 1
+assert run_cargo_calls[0]["env_overrides"] == cargo_env
 assert len(cucumber_calls) == 1
 assert cucumber_calls[0]["cargo_env"] == cargo_env
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/actions/generate-coverage/tests/test_scripts.py around lines 915 - 922, The test's fake_run_cargo currently ignores env_overrides so we don't verify that _run_cargo receives cargo_env; modify fake_run_cargo to capture and store its env_overrides argument (e.g., append to a list or set an outer-scope variable) when called, keep returning the same "Coverage: 100%", and then add an assertion after the call that the captured env_overrides equals the expected cargo_env to ensure the primary _run_cargo invocation received the correct environment.

.github/actions/generate-coverage/scripts/_cargo_runner.py (1)
162-167: ⚠️ Potential issue | 🟠 Major

Wrap the pump loop in try/finally to guarantee thread finalisation on timeout.

When `_poll_pump_loop_iteration` raises via `_raise_cargo_timeout`, the `_finalise_pump_threads` call is skipped, leaving non-daemon pump threads unjoined. Wrap the loop so cleanup always runs.

🔧 Proposed fix

```diff
 for thread in threads:
     thread.start()
-while not _poll_pump_loop_iteration(threads, ctx, thread_exceptions):
-    pass
-_finalise_pump_threads(threads, ctx.proc, thread_exceptions)
+try:
+    while not _poll_pump_loop_iteration(threads, ctx, thread_exceptions):
+        pass
+except BaseException:
+    with contextlib.suppress(Exception):
+        _finalise_pump_threads(threads, ctx.proc, thread_exceptions)
+    raise
+_finalise_pump_threads(threads, ctx.proc, thread_exceptions)
 return stdout_lines
```

🤖 Prompt for AI Agents

In .github/actions/generate-coverage/scripts/_cargo_runner.py around lines 162-167, the while loop calling _poll_pump_loop_iteration can raise (via _raise_cargo_timeout) before _finalise_pump_threads executes; wrap the while-not loop in a try block with a bare except BaseException that suppresses exceptions while calling _finalise_pump_threads, then re-raises, followed by the normal _finalise_pump_threads call outside the except so cleanup runs on both success and failure paths.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/actions/generate-coverage/scripts/_cargo_runner.py around lines 162 - 167, Wrap the pump loop in a try/finally so _finalise_pump_threads always runs even if _poll_pump_loop_iteration raises (e.g. via _raise_cargo_timeout): move the while not _poll_pump_loop_iteration(threads, ctx, thread_exceptions): pass loop into a try block and call _finalise_pump_threads(threads, ctx.proc, thread_exceptions) in the finally, then return stdout_lines; ensure you reference the existing symbols (_poll_pump_loop_iteration, _finalise_pump_threads, threads, ctx, thread_exceptions, ctx.proc, stdout_lines) and do not change thread creation/start logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/actions/generate-coverage/tests/test_scripts.py:
- Around line 128-132: The test double's wait method is flagged for an unused
timeout parameter; rename the parameter timeout to _timeout in the wait(self,
timeout: float | None = None) -> int signature (and update any internal
references) so linters understand it is intentionally unused, and keep the
behavior in wait (setting self.waited when track_lifecycle and returning
returncode) and the docstring unchanged.
---
Duplicate comments:
In @.github/actions/generate-coverage/scripts/_cargo_runner.py:
- Around line 162-167: Wrap the pump loop in a try/finally so
_finalise_pump_threads always runs even if _poll_pump_loop_iteration raises
(e.g. via _raise_cargo_timeout): move the while not
_poll_pump_loop_iteration(threads, ctx, thread_exceptions): pass loop into a try
block and call _finalise_pump_threads(threads, ctx.proc, thread_exceptions) in
the finally, then return stdout_lines; ensure you reference the existing symbols
(_poll_pump_loop_iteration, _finalise_pump_threads, threads, ctx,
thread_exceptions, ctx.proc, stdout_lines) and do not change thread
creation/start logic.
In @.github/actions/generate-coverage/tests/test_scripts.py:
- Around line 973-1067: The test function test_run_cargo_windows_pump_timeout is
too long and should extract the helper classes to reduce method length; move
FakeThread, FakeProc, FakeRunner, and FakeCargoCommand out of the test body into
module-level fixtures or shared factory functions (e.g., create_fake_thread,
create_fake_proc, fake_runner_factory or pytest fixtures named fake_thread,
fake_proc, fake_runner, fake_cargo_command) and update the test to instantiate
or use those fixtures, keeping the test logic in
test_run_cargo_windows_pump_timeout focused on setup, invocation of _run_cargo,
and assertions.
- Around line 915-922: The test's fake_run_cargo currently ignores env_overrides
so we don't verify that _run_cargo receives cargo_env; modify fake_run_cargo to
capture and store its env_overrides argument (e.g., append to a list or set an
outer-scope variable) when called, keep returning the same "Coverage: 100%", and
then add an assertion after the call that the captured env_overrides equals the
expected cargo_env to ensure the primary _run_cargo invocation received the
correct environment.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: de683c77-8c1f-49d1-aa0a-ab1fb9e988fe
📒 Files selected for processing (4)
- .github/actions/generate-coverage/scripts/_cargo_runner.py
- .github/actions/generate-coverage/scripts/_cranelift.py
- .github/actions/generate-coverage/scripts/run_rust.py
- .github/actions/generate-coverage/tests/test_scripts.py
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix. Include the file and symbol names indicated in the issue at the head of your response.

.github/actions/generate-coverage/tests/test_scripts.py — comment on lines +973 to +1067:

```python
def test_run_cargo_windows_pump_timeout(
    monkeypatch: pytest.MonkeyPatch,
) -> None:
    """``_run_cargo`` enforces the wait timeout on Windows pump threads."""
    mod = _load_module(monkeypatch, "run_rust")
    monkeypatch.setattr(mod.os, "name", "nt")
    messages: list[tuple[str, bool]] = []

    def fake_echo(message: str, *, err: bool = False, nl: bool = True) -> None:
        _ = nl
        messages.append((message, err))

    monkeypatch.setattr(mod.typer, "echo", fake_echo)
    monkeypatch.setenv("RUN_RUST_CARGO_WAIT_TIMEOUT", "1")

    class FakeThread:
        """Thread stub that stays alive for a fixed number of joins."""

        join_calls = 0

        def __init__(
            self,
            *,
            name: str,
            target: typ.Callable[..., object],
            args: tuple[object, ...],
            kwargs: dict[str, object],
        ) -> None:
            """Initialise the thread stub with ignored thread metadata."""
            _ = (name, target, args, kwargs)

        def start(self) -> None:
            """Pretend to start the thread."""

        def is_alive(self) -> bool:
            """Report liveness until enough joins have occurred."""
            return self.join_calls < 4

        def join(self, timeout: float | None = None) -> None:
            """Record a join call without blocking."""
            _ = timeout
            type(self).join_calls += 1

    monkeypatch.setattr(mod.threading, "Thread", FakeThread)
    time_values = iter([0.0, 0.0, 1.0])
    monkeypatch.setattr(mod.time, "monotonic", lambda: next(time_values))

    class FakeProc:
        """Minimal subprocess test double for Windows timeout handling."""

        def __init__(self) -> None:
            """Initialise the stub with captured output streams."""
            self.stdout = io.StringIO("out-line\n")
            self.stderr = io.StringIO("err-line\n")
            self.killed = False
            self.waited = False

        def poll(self) -> None:
            """Report that the process is still running."""

        def kill(self) -> None:
            """Record that the process was killed."""
            self.killed = True

        def wait(self, timeout: float | None = None) -> int:
            """Record the wait and return success."""
            _ = timeout
            self.waited = True
            return 0

    fake_proc = FakeProc()

    class FakeRunner:
        """Runner stub that returns the fake Windows process."""

        def popen(self, **_kw: object) -> FakeProc:
            """Return the pre-created fake process."""
            return fake_proc

    class FakeCargoCommand:
        """Cargo command stub that yields the fake Windows runner."""

        def __getitem__(self, _args: list[str]) -> FakeRunner:
            """Return a runner for the requested cargo arguments."""
            return FakeRunner()

    monkeypatch.setattr(mod, "cargo", FakeCargoCommand())

    with pytest.raises(mod.typer.Exit) as excinfo:
        mod._run_cargo(["llvm-cov"])

    assert _exit_code(excinfo.value) == 1
    assert fake_proc.killed
    assert fake_proc.waited
    assert ("::error::cargo did not exit within 1.0s; killing", True) in messages
```

❌ New issue: Large Method
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix. Include the file and symbol names indicated in the issue at the head of your response.

Excess Number of Function Arguments — .github/actions/generate-coverage/scripts/_cargo_runner.py: `_pump_cargo_output_posix`

What led to the degradation? `_pump_cargo_output_posix` has 5 arguments; max arguments = 4.

Why does this problem occur? Functions with many arguments indicate either (a) low cohesion, where the function has too many responsibilities, or (b) a missing abstraction that encapsulates those arguments.

How to fix it? Start by investigating the responsibilities of the function. Make sure it doesn't do too many things, in which case it should be split into smaller and more cohesive functions. Consider the INTRODUCE PARAMETER OBJECT refactoring to encapsulate arguments that refer to the same logical concept.
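The INTRODUCE PARAMETER OBJECT refactoring named above can be sketched like this; the field names are illustrative, not the real signature of `_pump_cargo_output_posix`:

```python
from dataclasses import dataclass, field


@dataclass
class PumpArgs:
    """Parameter object bundling values previously passed as five arguments."""

    stdout_text: str
    stderr_text: str
    echo_errors: bool = True
    stdout_lines: list[str] = field(default_factory=list)


def pump_output(args: PumpArgs) -> list[str]:
    """Collect output lines; one cohesive argument replaces five loose ones."""
    args.stdout_lines.extend(args.stdout_text.splitlines())
    if args.echo_errors:
        for line in args.stderr_text.splitlines():
            args.stdout_lines.append(f"stderr: {line}")
    return args.stdout_lines
```

Callers construct one `PumpArgs` up front, which also gives related helpers a shared type to accept instead of re-threading the same five parameters.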
Code Health Improved
(1 files improve in Code Health)
Gates Failed
Prevent hotspot decline
(1 hotspot with Number of Functions in a Single Module)
Enforce advisory code health rules
(1 file with Number of Functions in a Single Module)
Gates Passed
4 Quality Gates Passed
See analysis details in CodeScene
Reason for failure
Prevent hotspot decline

| File | Violations | Code Health Impact |
|---|---|---|
| test_scripts.py | 1 rule in this hotspot | 7.79 → 7.55 |

Enforce advisory code health rules

| File | Violations | Code Health Impact |
|---|---|---|
| test_scripts.py | 1 advisory rule | 7.79 → 7.55 |
View Improvements
| File | Code Health Impact | Categories Improved |
|---|---|---|
| run_rust.py | 7.24 → 7.96 | Complex Method, Bumpy Road Ahead, Overall Code Complexity, Deep, Nested Complexity |
Quality Gate Profile: Pay Down Tech Debt
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix. Include the file and symbol names indicated in the issue at the head of your response.

Number of Functions in a Single Module — .github/actions/generate-coverage/tests/test_scripts.py

What led to the degradation? This module has 90 functions; threshold = 75.
File: .github/actions/generate-coverage/tests/test_scripts.py

Fix proposal — representative snippets (pattern to apply):
```python
import pytest


@pytest.mark.parametrize(
    "missing, out_text, err_text",
    [
        ("stdout", None, "err-line\n"),
        ("stderr", "out-line\n", None),
    ],
)
def test_run_cargo_windows_missing_streams(
    monkeypatch: pytest.MonkeyPatch,
    missing: str,
    out_text: str | None,
    err_text: str | None,
) -> None:
    """``_run_cargo`` fails when either stdout or stderr is missing on Windows."""
    mod = _load_module(monkeypatch, "run_rust")
    monkeypatch.setattr(mod.os, "name", "nt")
    monkeypatch.setattr(mod.typer, "echo", lambda *_a, **_k: None)
    fake_cargo = _make_fake_cargo(out_text, err_text)
    monkeypatch.setattr(mod, "cargo", fake_cargo)
    with pytest.raises(mod.typer.Exit):
        mod._run_cargo([])
    proc = fake_cargo.last_proc
    assert proc is not None
    if missing == "stdout":
        assert proc.stderr is not None and proc.stderr.closed
    else:
        assert proc.stdout is not None and proc.stdout.closed
```
```python
import pytest


@pytest.mark.parametrize(
    ("xml_text", "expected"),
    [
        (
            """
<coverage>
<packages><package><classes><class><lines>
<line hits='1'/><line hits='0'/>
</lines></class></classes></package></packages>
</coverage>
""",
            "50.00",
        ),
        ("<coverage lines-covered='81' lines-valid='100' />", "81.00"),
        ("<coverage lines-covered='0' lines-valid='0' />", "0.00"),
    ],
)
def test_cobertura_percent_variants(
    tmp_path: Path,
    run_python_module: ModuleType,
    xml_text: str,
    expected: str,
) -> None:
    """``get_line_coverage_percent_from_cobertura`` handles detail, totals, and zero-lines."""
    xml = tmp_path / "cov.xml"
    xml.write_text(xml_text, encoding="utf-8")
    assert run_python_module.get_line_coverage_percent_from_cobertura(xml) == expected
```
```python
import pytest


@pytest.mark.parametrize(
    ("setup", "expect_exit"),
    [
        ("missing", True),
        ("permission", True),
    ],
)
def test_lcov_error_paths(
    tmp_path: Path,
    run_rust_module: ModuleType,
    setup: str,
    expect_exit: bool,
) -> None:
    """Error paths for LCOV parsing: missing file and permission error."""
    lcov = tmp_path / "file.lcov"
    if setup == "permission":
        lcov.write_text("LF:1\nLH:1\n", encoding="utf-8")
        lcov.chmod(0)  # provoke a PermissionError on read
    if expect_exit:
        with pytest.raises(run_rust_module.typer.Exit):
            run_rust_module.get_line_coverage_percent_from_lcov(lcov)
```
```python
import pytest


@pytest.mark.parametrize(
    ("payload", "expected_ok"),
    [
        (b"payload", True),
        (b"other", False),
    ],
)
def test_install_nextest_checksum_variants(
    tmp_path: Path,
    install_nextest_module: ModuleType,
    payload: bytes,
    expected_ok: bool,
) -> None:
    """``verify_nextest_binary`` returns True on match, False on mismatch."""
    binary = tmp_path / "cargo-nextest"
    binary.write_bytes(payload)
    import hashlib as _h

    good = _h.sha256(b"payload").hexdigest()
    assert install_nextest_module.verify_nextest_binary(binary, good) is expected_ok
```

AI coding-agent prompt
Replace the `cargo llvm-cov` `--config` override path with `CARGO_PROFILE_*_CODEGEN_BACKEND=llvm` overrides when the target repo configures Cranelift. This lets nested cargo processes spawned by `cargo-llvm-cov` inherit the LLVM setting that coverage builds require.

Update the generate-coverage regression tests to assert env propagation for Cranelift-configured repos, keep non-Cranelift runs unchanged, and verify `_run_cargo` forwards explicit environment overrides.

Summary by Sourcery
Propagate LLVM codegen configuration for Rust coverage runs by switching from cargo config flags to environment-based overrides and ensuring nested cargo processes inherit the required settings.
Enhancements:
Tests: