
feat(download): dynamic segment splitting on slow tail#111

Merged
mpiton merged 6 commits into main from
feat/task-17-dynamic-segment-splitting
Apr 26, 2026
Conversation


@mpiton mpiton commented Apr 26, 2026

Summary

• Implement runtime segment splitting when a slow segment is detected (PRD §7.1)
• Add domain model split() method with byte-range validation and state machine rules
• Add SegmentRuntimeState coordinator to engine — detects completion, evaluates slowest segment, splits when threshold met
• Add watch::Receiver-based mid-flight end_byte adjustment for segment workers
• Add AppConfig fields for dynamic_split_enabled and dynamic_split_min_remaining_mb with config roundtrip
• Emit SegmentSplit domain event and persist atomic metadata snapshots after each split

Type

feat


Summary by cubic

Adds runtime dynamic segment splitting to speed up slow download tails and hardens split persistence/selection. Pins MSRV to Rust 1.95 in src-tauri/Cargo.toml. Implements PRD §7.1 / task-17.

  • New Features

    • Dynamic split in SegmentedDownloadEngine: pick the slowest eligible segment (remaining ≥ threshold, default 4 MiB), shrink it in place, spawn a worker for the upper half, update the slot’s initial_end, emit DomainEvent::SegmentSplit (forwarded as segment-split), and log it.
    • Workers watch the end bound via tokio::sync::watch and clamp mid-flight; per-segment throughput via Arc<AtomicU64> guides selection. After each split, .vortex-meta is rewritten for crash-safe resume.
    • Settings dynamic_split_enabled and dynamic_split_min_remaining_mb (defaults true/4) are round-tripped through TOML and IPC, applied live via application::services::engine_config_bridge (with_dynamic_split/set_dynamic_split; dynamic_split_state() for tests).
    • Domain adds Segment::split(at_byte, new_id) with strict validation.
  • Bug Fixes

    • Persist completed segments in split snapshots so finished ranges aren’t lost on crash; picker skips completed slots.
    • Gate re-splitting: require a non-zero progress sample and ≥500 ms age before a segment can be considered, preventing immediate re-split of fresh children.
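The re-split gates above (non-zero progress sample, minimum age, remaining-size threshold) can be sketched as a single predicate. Struct and field names here are illustrative assumptions, not the PR's actual types:

```rust
use std::time::Duration;

// Illustrative snapshot of a running segment; field names are
// assumptions, not the PR's actual SegmentRuntimeState layout.
struct SegmentSnapshot {
    current_offset: u64,   // next byte the worker will write
    end: u64,              // exclusive upper bound of the range
    bytes_progressed: u64, // bytes downloaded since the worker spawned
    age: Duration,         // time since the worker spawned
}

// A segment is a split candidate only if it has shown non-zero
// progress, is at least 500 ms old, and has enough bytes remaining.
fn is_split_eligible(s: &SegmentSnapshot, min_remaining: u64) -> bool {
    let remaining = s.end.saturating_sub(s.current_offset);
    s.bytes_progressed > 0
        && s.age >= Duration::from_millis(500)
        && remaining >= min_remaining
}

fn main() {
    let fresh_child = SegmentSnapshot {
        current_offset: 0,
        end: 16 * 1024 * 1024,
        bytes_progressed: 0,
        age: Duration::from_millis(50),
    };
    // A freshly spawned child is never re-split immediately.
    println!("{}", is_split_eligible(&fresh_child, 4 * 1024 * 1024));
}
```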

Written for commit 94c3634. Summary will update on new commits.

Summary by CodeRabbit

  • New Features

    • Dynamic segment splitting: active downloads can be re-split mid-transfer to improve throughput by reallocating work from slow segments.
    • Live settings to enable/disable dynamic splitting and set a minimum remaining-size threshold; changes apply without restarting.
    • Split notifications and per-download log entries; updated resume metadata persisted so splits survive restarts/crashes.
  • Documentation

    • Changelog entry describing dynamic splitting behavior and controls.

When a parallel segment finishes before its peers, the engine now picks the
slowest still-running segment whose remaining range exceeds
`dynamic_split_min_remaining_mb` (default 4 MiB), shrinks it in place, and
spawns a fresh worker for the upper half.
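That selection step can be sketched as follows (hypothetical names; the PR's real pick_split_target additionally applies the age and progress gates):

```rust
// Illustrative in-flight segment; names are assumptions, not the PR's.
#[derive(Clone, Copy)]
struct Seg {
    current_offset: u64, // next byte the worker will write
    end: u64,            // exclusive upper bound
    bytes_per_sec: u64,  // measured throughput
}

// Pick the slowest segment with at least `min_remaining` bytes left,
// returning (slot index, split point). The shrunk segment keeps
// [current_offset, split_at) and the new worker takes [split_at, end).
fn pick_split_target(slots: &[Option<Seg>], min_remaining: u64) -> Option<(usize, u64)> {
    slots
        .iter()
        .enumerate()
        .filter_map(|(i, s)| s.as_ref().map(|s| (i, *s)))
        .filter(|(_, s)| s.end.saturating_sub(s.current_offset) >= min_remaining)
        .min_by_key(|&(_, s)| s.bytes_per_sec)
        .map(|(i, s)| (i, s.current_offset + (s.end - s.current_offset) / 2))
}

fn main() {
    let slots = [
        Some(Seg { current_offset: 0, end: 100, bytes_per_sec: 50 }),
        Some(Seg { current_offset: 10, end: 90, bytes_per_sec: 5 }), // slowest
        None, // completed slot, skipped
    ];
    println!("{:?}", pick_split_target(&slots, 8));
}
```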

Backend additions:
- domain `Segment::split(at_byte)` validation method (state must be
  `Downloading`, split point strictly inside the unfetched range)
- `DomainEvent::SegmentSplit { download_id, original_segment_id,
  new_segment_id, split_at }` forwarded as `segment-split` Tauri event
- `AppConfig.dynamic_split_enabled` / `dynamic_split_min_remaining_mb`
  wired through `ConfigDto`, `ConfigPatch`, `ConfigPatchDto`
- `SegmentedDownloadEngine::with_dynamic_split(enabled, min_remaining_mb)`
  builder consumed at startup
- segment worker accepts upper bound through `tokio::sync::watch::Receiver<u64>`,
  re-reads it before chunk fetch and after each network read so a mid-flight
  shrink clamps writes to the new boundary
- per-segment progress exposed via `Arc<AtomicU64>` so engine picks slowest
  candidate by throughput
- atomic `.vortex-meta` rewrite after each split so resume after a crash
  mid-split sees a consistent topology (PRD §7.1, task 17)
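The mid-flight clamp can be illustrated without tokio; below, an AtomicU64 stands in for the watch::Receiver<u64> the PR actually uses, and the chunk lengths stand in for network reads:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// How many bytes of a chunk starting at `offset` may be written
// without crossing the (possibly shrunken) exclusive bound `end`.
fn clamp_chunk(offset: u64, chunk_len: u64, end: u64) -> u64 {
    end.saturating_sub(offset).min(chunk_len)
}

// Simplified worker loop: re-read the shared end bound before each
// write so a mid-flight shrink truncates the current chunk and ends
// the segment early. Returns the final write offset.
fn run_worker(end: &AtomicU64, chunks: &[u64], start: u64) -> u64 {
    let mut offset = start;
    for &len in chunks {
        let bound = end.load(Ordering::Acquire);
        let writable = clamp_chunk(offset, len, bound);
        offset += writable; // write only the clamped portion
        if writable < len || offset >= bound {
            break; // boundary reached: early completion
        }
    }
    offset
}

fn main() {
    // Bound shrunk from 100 to 50 before the worker got that far.
    let end = AtomicU64::new(50);
    println!("{}", run_worker(&end, &[30, 30, 30], 0)); // stops at 50
}
```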
github-actions Bot added the documentation and rust labels Apr 26, 2026

coderabbitai Bot commented Apr 26, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Adds runtime dynamic segment splitting: new config fields and IPC plumbing, domain split API and event, engine/worker runtime changes to shrink segments mid‑flight and spawn new workers, event forwarding and logging, live engine subscription to config, and resume-metadata persistence updates.

Changes

Cohort / File(s) Summary
Changelog
CHANGELOG.md
Documented Dynamic segment splitting feature and related API/config surface.
Config persistence & startup
src-tauri/src/adapters/driven/config/toml_config_store.rs, src-tauri/src/lib.rs
Persist/hydrate dynamic_split_enabled and dynamic_split_min_remaining_mb; apply settings at engine startup and subscribe for live updates.
IPC & Frontend types
src-tauri/src/adapters/driving/tauri_ipc.rs, src/types/settings.ts
Expose dynamic_split_* in SettingsDto/ConfigPatchDto and frontend AppConfig type.
Domain model & events
src-tauri/src/domain/model/config.rs, src-tauri/src/domain/model/segment.rs, src-tauri/src/domain/event.rs
Add config fields to AppConfig/ConfigPatch, implement Segment::split(...) with validation, and add DomainEvent::SegmentSplit.
Engine orchestration
src-tauri/src/adapters/driven/network/download_engine.rs, src-tauri/src/application/services/engine_config_bridge.rs, src-tauri/src/application/services/mod.rs
Add dynamic-split orchestration (pick target, clamp via watch, spawn new worker, emit SegmentSplit, persist .vortex-meta) and bridge to subscribe engine to config updates.
Segment worker runtime
src-tauri/src/adapters/driven/network/segment_worker.rs
Replace fixed end_byte with watch::Receiver<u64>, add per-segment atomic progress, and make worker respect mid‑flight end reductions (clamp/truncate, early completion).
Event & logging bridges
src-tauri/src/adapters/driven/event/tauri_bridge.rs, src-tauri/src/adapters/driven/logging/download_log_bridge.rs
Forward DomainEvent::SegmentSplit as Tauri segment-split event and produce download log entry.
Tests & frontend fixtures
src-tauri/src/adapters/driven/network/..., src/components/__tests__/*, src/hooks/__tests__/*, src/layouts/__tests__/*, src/stores/__tests__/*, src/views/*/__tests__/*
Add/adjust engine and worker unit tests for split behavior; update many frontend test fixtures to include new settings fields.
Manifest
src-tauri/Cargo.toml
Add rust-version = "1.95" to package metadata.

Sequence Diagram

sequenceDiagram
    participant Engine as "SegmentedDownloadEngine"
    participant Worker as "SegmentWorker (slow)"
    participant NewWorker as "SegmentWorker (new)"
    participant Watch as "watch::Receiver (end_byte)"
    participant Meta as ".vortex-meta Storage"
    participant Frontend as "Frontend Client"

    Engine->>Engine: monitor throughput & progress
    Engine->>Engine: pick_split_target (slowest eligible)
    Engine->>Watch: send new end boundary (split_at)
    Watch->>Worker: receive updated end_byte
    Worker->>Worker: clamp writes, stop at new boundary, emit completion(slot)
    Worker->>Engine: report completion with slot & progress
    Engine->>NewWorker: spawn worker for remaining range [split_at, old_end)
    Engine->>Meta: async rewrite .vortex-meta with new segments
    Engine->>Engine: emit DomainEvent::SegmentSplit
    Engine->>Frontend: forward `segment-split` Tauri event
    Meta-->>Engine: persistence result (logged on failure)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested labels

frontend

Poem

🐰 I nibbled bytes where slow bits sleep,
I snipped a range so downloads leap.
Watchers whisper, workers weave,
Metadata saved — resumés believe.
Hooray! A split — a rabbit's feast. 🥕

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Title check ✅ Passed The title clearly and concisely summarizes the main feature: dynamic segment splitting triggered by slow segments during downloads.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.





greptile-apps Bot commented Apr 26, 2026

Greptile Summary

This PR implements PRD §7.1 dynamic segment splitting: when a fast segment completes, the engine evaluates remaining in-flight segments, picks the slowest eligible one, shrinks it via a watch::Sender signal, and spawns a fresh worker for the upper half. The approach is sound — initial_end is correctly updated after each successful split signal, the 500 ms/non-zero-progress gate prevents cascading re-splits of fresh children, completed slots are retained (not cleared) so crash-recovery snapshots include all ranges, and the throughput-guided pick_split_target has proper guards for unbounded and near-complete segments. A few minor concerns remain around the SegmentSplit event omitting the new segment's end byte, next_segment_id being incremented before confirming the split signal was accepted, and persist_split_meta overwriting created_at with the split time on every snapshot.

Confidence Score: 5/5

Safe to merge — all findings are P2; the core split logic, range update, worker lifecycle, and crash-resume snapshot are correctly implemented

The two previously flagged P1s (stale initial_end and double-split of dead slots) are addressed in this revision. Remaining findings are P2: a cosmetic ID gap on failed splits, a missing field in a new domain event, and a timestamp clobber in the crash-recovery sidecar that doesn't affect the live download path.

download_engine.rs (persist_split_meta created_at, next_segment_id ordering) and domain/event.rs (SegmentSplit missing new_end_byte)

Important Files Changed

Filename Overview
src-tauri/src/adapters/driven/network/download_engine.rs Core of the PR — adds SegmentRuntimeState, pick_split_target, persist_split_meta, and the split coordinator loop; initial_end is correctly updated on successful split signal; next_segment_id is incremented before the send-success check; created_at in persist snapshots is clobbered each time
src-tauri/src/adapters/driven/network/segment_worker.rs Adds watch::Receiver-based dynamic end_byte shrink; chunk truncation at split boundary is correctly handled; final_end borrow for byte-count validation correctly uses the shrunken boundary; no new issues found
src-tauri/src/domain/model/segment.rs Adds Segment::split() with proper state and range validation; method is never called by the engine (engine manages its own state), making it effectively documentation/test-only code
src-tauri/src/domain/event.rs Adds SegmentSplit event; missing new_end_byte field means consumers cannot determine the new segment's full range from the event alone
src-tauri/src/application/services/engine_config_bridge.rs New service correctly wires SettingsUpdated events to set_dynamic_split on the engine; test coverage is solid; no issues found
src-tauri/src/domain/model/config.rs Adds dynamic_split_enabled and dynamic_split_min_remaining_mb fields with sensible defaults (true/4 MiB); roundtripped through TOML and IPC correctly
src-tauri/src/adapters/driven/event/tauri_bridge.rs Adds segment-split to event_name and event_payload; forwarding is correct; no issues found
src/types/settings.ts Adds dynamicSplitEnabled and dynamicSplitMinRemainingMb to the frontend AppConfig interface; matches backend DTO naming convention; no issues found

Sequence Diagram

sequenceDiagram
    participant E as SegmentedDownloadEngine
    participant W0 as Worker[0] (fast)
    participant W1 as Worker[1] (slow)
    participant W2 as Worker[2] (split child)
    participant EB as EventBus
    participant FS as FileStorage (.vortex-meta)

    E->>W0: spawn [0, 50%)
    E->>W1: spawn [50%, 100%)
    W0-->>E: Ok(bytes) — finished early
    E->>E: pick_split_target(W1) → split_at=75%
    E->>W1: end_tx.send(75%)
    Note over W1: clamps writes to [50%, 75%)
    E->>E: active_segments[1].initial_end = 75%
    E->>EB: SegmentSplit { original=1, new=2, split_at=75% }
    E->>W2: spawn [75%, 100%)
    E->>FS: persist_split_meta snapshot
    W1-->>E: Ok(bytes) — [50%, 75%) done
    W2-->>E: Ok(bytes) — [75%, 100%) done
    E->>EB: DownloadCompleted

Prompt To Fix All With AI
This is a comment left during a code review.
Path: src-tauri/src/adapters/driven/network/download_engine.rs
Line: 128-146

Comment:
**`created_at` clobbered on every split snapshot**

`persist_split_meta` sets `created_at: now` each time it runs, overwriting the original download creation timestamp in the `.vortex-meta` sidecar. If any downstream path (resume, retention, ordering) reads this field and expects it to be stable, repeated splits will silently drift the timestamp forward. The `updated_at` field exists precisely to track mutations; `created_at` should be threaded through from the original meta or captured once at download start.

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: src-tauri/src/domain/event.rs
Line: 95-100

Comment:
**`SegmentSplit` event missing `new_end_byte`**

The event records `split_at` (the new boundary) but not the original upper bound of the shrunk segment. A consumer that wants to reconstruct the full topology from the event stream cannot determine the range of the newly spawned segment (`[split_at, original_end)`) without correlating with prior `SegmentStarted` state. Adding `original_end: u64` makes the event self-contained.

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: src-tauri/src/adapters/driven/network/download_engine.rs
Line: 509-511

Comment:
**`next_segment_id` incremented before confirming the split succeeds**

`new_id = next_segment_id; next_segment_id += 1;` runs unconditionally before the `end_tx.send(split_at).is_ok()` check. If the send fails (target worker already exited) the branch `continue`s, leaving a gap in the monotonic sequence. In practice this is harmless — no ID collision can result — but it complicates reasoning about the counter and wastes an ID. Moving the increment to after confirming `signal_sent` is trivially safe.

How can I resolve this? If you propose a fix, please make it concise.
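One concise shape for that reordering (a sketch, not the PR's actual code; `send_ok` stands in for `end_tx.send(split_at).is_ok()`):

```rust
// Allocate the child segment id only after the shrink signal is
// accepted, so a dead worker leaves the monotonic counter untouched.
fn allocate_child_id(next_segment_id: &mut u64, send_ok: bool) -> Option<u64> {
    if !send_ok {
        return None; // target worker already exited; no id consumed
    }
    let id = *next_segment_id;
    *next_segment_id += 1;
    Some(id)
}

fn main() {
    let mut next = 3;
    assert_eq!(allocate_child_id(&mut next, false), None);
    assert_eq!(next, 3); // failed send leaves no gap
    assert_eq!(allocate_child_id(&mut next, true), Some(3));
    assert_eq!(next, 4);
}
```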

Reviews (6): Last reviewed commit: "chore: pin rust-version to 1.95 to match..."


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (3)
src-tauri/src/lib.rs (1)

214-217: Log config-read failures before defaulting engine split settings.

Line 214 currently falls back silently; this can mask config corruption and explainability issues during startup troubleshooting.

🛠️ Suggested adjustment
-            let initial_engine_config = config_store
-                .get_config()
-                .unwrap_or_else(|_| crate::domain::model::config::AppConfig::default());
+            let initial_engine_config = match config_store.get_config() {
+                Ok(cfg) => cfg,
+                Err(e) => {
+                    tracing::warn!(
+                        error = %e,
+                        "failed to load dynamic split config, falling back to defaults"
+                    );
+                    crate::domain::model::config::AppConfig::default()
+                }
+            };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/lib.rs` around lines 214 - 217, The code silently falls back
when config_store.get_config() fails; change the unwrap_or_else to capture the
error (e.g., |err| { ... }) and log that error before returning
AppConfig::default(), so failures reading initial_engine_config are recorded;
update the closure around config_store.get_config() (the call that assigns
initial_engine_config) to call your project logger (e.g., tracing::error! or
log::error!) with the err and context, then return
crate::domain::model::config::AppConfig::default().
src-tauri/src/adapters/driven/event/tauri_bridge.rs (1)

53-53: Add a regression test for SegmentSplit bridge mapping.

The new mapping is correct, but it currently has no dedicated test coverage in this file (unlike other segment variants).

📌 Suggested test addition
 #[test]
 fn test_event_name_segment_variants() {
@@
     assert_eq!(
         event_name(&DomainEvent::SegmentFailed {
             download_id: DownloadId(1),
             segment_id: 0,
             error: "err".into()
         }),
         "segment-failed"
     );
+
+    let split = DomainEvent::SegmentSplit {
+        download_id: DownloadId(1),
+        original_segment_id: 2,
+        new_segment_id: 9,
+        split_at: 4096,
+    };
+    assert_eq!(event_name(&split), "segment-split");
+    let (_, payload) = to_tauri_event(&split);
+    assert_eq!(payload["downloadId"].as_u64(), Some(1));
+    assert_eq!(payload["originalSegmentId"].as_u64(), Some(2));
+    assert_eq!(payload["newSegmentId"].as_u64(), Some(9));
+    assert_eq!(payload["splitAt"].as_u64(), Some(4096));
 }

Also applies to: 120-132

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/adapters/driven/event/tauri_bridge.rs` at line 53, The match
arm mapping DomainEvent::SegmentSplit => "segment-split" has no dedicated
regression test; add a unit test in tauri_bridge.rs (alongside the other segment
variant tests) that constructs a DomainEvent::SegmentSplit, invokes the bridge
mapping/serializer used in this file (the same function the other segment tests
call) and asserts the produced bridge name/string equals "segment-split" (and
optionally include the same round-trip/serialization assertions present in the
tests around lines 120-132 to ensure consistent behavior).
src-tauri/src/adapters/driven/network/download_engine.rs (1)

436-438: Completed segments remain in active_segments, skewing split selection.

When a segment completes (triggers the Ok(Ok(_bytes)) branch), its slot in active_segments is not cleared. On subsequent completions, pick_split_target may still consider the completed segment's stale state, though it will likely be filtered out because current_offset >= initial_end.

However, for correctness and to avoid confusion, consider setting the slot to None when the corresponding worker task completes. This would also reduce memory pressure for long downloads with many splits.

♻️ Suggested approach

Track which segment index just completed (from the JoinSet result) and clear it:

// After handling Ok(Ok(_bytes)), identify and clear the completed slot
// This requires the worker to return its segment_index, or use JoinSet::join_next_with_id

Alternatively, the current filtering in pick_split_target (line 49-50: current_offset >= initial_end → continue) serves as a safety net, so this is a recommended but not critical change.
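A sketch of the suggested approach, with each worker resolving to its slot index (here `u64` stands in for the PR's per-segment runtime state):

```rust
// Each worker resolves to (slot_idx, result); on completion the engine
// clears its slot so stale state never reaches pick_split_target.
fn drain_completions(
    active_segments: &mut [Option<u64>],
    completions: &[(usize, Result<u64, String>)],
) -> usize {
    for (slot_idx, _result) in completions {
        active_segments[*slot_idx] = None; // completed slot cleared
    }
    // The remaining live slots are the only split candidates.
    active_segments.iter().filter(|s| s.is_some()).count()
}

fn main() {
    let mut active = vec![Some(100u64), Some(200), Some(300)];
    let done = vec![(0usize, Ok::<u64, String>(100))];
    println!("{}", drain_completions(&mut active, &done));
    assert!(active[0].is_none());
}
```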

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/adapters/driven/network/download_engine.rs` around lines 436 -
438, The active_segments Vec keeps completed SegmentRuntimeState entries instead
of clearing them, so update the worker completion handling (the branch that
matches Ok(Ok(_bytes)) in the JoinSet result) to mark the corresponding slot in
active_segments as None; to do this have the worker return its segment index (or
use JoinSet::join_next_with_id) so you can identify which entry to clear, then
set active_segments[segment_index] = None to avoid stale state and reduce memory
pressure and ensure pick_split_target only sees live segments.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src-tauri/src/adapters/driven/network/download_engine.rs`:
- Around line 84-96: The persisted metadata is using the stale
SegmentRuntimeState.initial_end after a split; update the runtime state's
initial_end immediately after successfully sending the new boundary so
persist_split_meta records the shrunk end. In the split handling block where you
call end_tx.send(split_at) (and manage active_segments / the
SegmentRuntimeState), set st.initial_end = split_at (or equivalent) right after
the send succeeds, ensuring SegmentRuntimeState.initial_end reflects the new end
before persist_split_meta is invoked. Also ensure you only update initial_end on
successful send to avoid inconsistent state.

In `@src-tauri/src/adapters/driving/tauri_ipc.rs`:
- Around line 910-911: SettingsDto returned by settings_get is missing the
persisted dynamic split fields, so add the two optional fields
dynamic_split_enabled: Option<bool> and dynamic_split_min_remaining_mb:
Option<u64> to the SettingsDto struct (the same types used in the patch/patch
DTOs) and ensure the settings_get handler populates them from the stored
Settings model; update any serialization/IPC conversion utilities that map
between the internal Settings and SettingsDto so the frontend can read these
values back.

In `@src-tauri/src/domain/model/segment.rs`:
- Around line 156-158: Segment::split currently creates the upper Segment using
self.id.wrapping_add(1_000_000) which causes collisions and conflicts with the
engine-managed next_segment_id counter; change the split method signature to
accept an external id argument (e.g., fn split(&self, new_id: u64) -> (Segment,
Segment)) and use that new_id for the upper segment instead of computing it
inside split, update the line that constructs upper (replace wrapping_add usage
with the passed-in id), and then update all callers/tests to pass the
engine-provided next_segment_id when invoking Segment::split.

---

Nitpick comments:
In `@src-tauri/src/adapters/driven/event/tauri_bridge.rs`:
- Line 53: The match arm mapping DomainEvent::SegmentSplit => "segment-split"
has no dedicated regression test; add a unit test in tauri_bridge.rs (alongside
the other segment variant tests) that constructs a DomainEvent::SegmentSplit,
invokes the bridge mapping/serializer used in this file (the same function the
other segment tests call) and asserts the produced bridge name/string equals
"segment-split" (and optionally include the same round-trip/serialization
assertions present in the tests around lines 120-132 to ensure consistent
behavior).

In `@src-tauri/src/adapters/driven/network/download_engine.rs`:
- Around line 436-438: The active_segments Vec keeps completed
SegmentRuntimeState entries instead of clearing them, so update the worker
completion handling (the branch that matches Ok(Ok(_bytes)) in the JoinSet
result) to mark the corresponding slot in active_segments as None; to do this
have the worker return its segment index (or use JoinSet::join_next_with_id) so
you can identify which entry to clear, then set active_segments[segment_index] =
None to avoid stale state and reduce memory pressure and ensure
pick_split_target only sees live segments.

In `@src-tauri/src/lib.rs`:
- Around line 214-217: The code silently falls back when
config_store.get_config() fails; change the unwrap_or_else to capture the error
(e.g., |err| { ... }) and log that error before returning AppConfig::default(),
so failures reading initial_engine_config are recorded; update the closure
around config_store.get_config() (the call that assigns initial_engine_config)
to call your project logger (e.g., tracing::error! or log::error!) with the err
and context, then return crate::domain::model::config::AppConfig::default().
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: d26b8131-9739-451c-9594-cb7bf37f46ea

📥 Commits

Reviewing files that changed from the base of the PR and between 41fcb7b and fe16884.

📒 Files selected for processing (11)
  • CHANGELOG.md
  • src-tauri/src/adapters/driven/config/toml_config_store.rs
  • src-tauri/src/adapters/driven/event/tauri_bridge.rs
  • src-tauri/src/adapters/driven/logging/download_log_bridge.rs
  • src-tauri/src/adapters/driven/network/download_engine.rs
  • src-tauri/src/adapters/driven/network/segment_worker.rs
  • src-tauri/src/adapters/driving/tauri_ipc.rs
  • src-tauri/src/domain/event.rs
  • src-tauri/src/domain/model/config.rs
  • src-tauri/src/domain/model/segment.rs
  • src-tauri/src/lib.rs

Comment on lines +910 to +911
pub dynamic_split_enabled: Option<bool>,
pub dynamic_split_min_remaining_mb: Option<u64>,

⚠️ Potential issue | 🟠 Major

Dynamic split settings are write-only over IPC right now.

Line 910 and Line 960 add patch support, but SettingsDto (returned by settings_get) still omits these fields, so the frontend cannot read the persisted values back.

✅ Suggested fix
 pub struct SettingsDto {
@@
     pub verify_checksums: bool,
     pub pre_allocate_space: bool,
+    pub dynamic_split_enabled: bool,
+    pub dynamic_split_min_remaining_mb: u64,
@@
 impl From<AppConfig> for SettingsDto {
     fn from(c: AppConfig) -> Self {
         Self {
@@
             verify_checksums: c.verify_checksums,
             pre_allocate_space: c.pre_allocate_space,
+            dynamic_split_enabled: c.dynamic_split_enabled,
+            dynamic_split_min_remaining_mb: c.dynamic_split_min_remaining_mb,
             history_retention_days: c.history_retention_days,

Also applies to: 960-961

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/adapters/driving/tauri_ipc.rs` around lines 910 - 911,
SettingsDto returned by settings_get is missing the persisted dynamic split
fields, so add the two optional fields dynamic_split_enabled: Option<bool> and
dynamic_split_min_remaining_mb: Option<u64> to the SettingsDto struct (the same
types used in the patch/patch DTOs) and ensure the settings_get handler
populates them from the stored Settings model; update any serialization/IPC
conversion utilities that map between the internal Settings and SettingsDto so
the frontend can read these values back.


@cubic-dev-ai cubic-dev-ai Bot left a comment


5 issues found across 11 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src-tauri/src/lib.rs">

<violation number="1" location="src-tauri/src/lib.rs:224">
P2: Dynamic-split settings are read only at startup, so runtime settings changes do not reconfigure the download engine.</violation>
</file>

<file name="src-tauri/src/adapters/driven/network/download_engine.rs">

<violation number="1" location="src-tauri/src/adapters/driven/network/download_engine.rs:451">
P1: After splitting, the original segment's `initial_end` is not shrunk to `split_at`, so persisted split metadata can contain overlapping segment ranges.</violation>
</file>

<file name="src-tauri/src/adapters/driving/tauri_ipc.rs">

<violation number="1" location="src-tauri/src/adapters/driving/tauri_ipc.rs:910">
P2: The new dynamic split config fields are writable but not returned in `SettingsDto`, making IPC config round-trips incomplete for these settings.</violation>
</file>

<file name="src-tauri/src/domain/model/segment.rs">

<violation number="1" location="src-tauri/src/domain/model/segment.rs:157">
P1: Do not use `wrapping_add` for segment IDs; overflow should be rejected instead of silently wrapping to potentially duplicate IDs.</violation>

<violation number="2" location="src-tauri/src/domain/model/segment.rs:157">
P2: Avoid generating split IDs with `wrapping_add(1_000_000)`. Accept the new segment ID from the caller so split ID allocation stays consistent and collision-free.</violation>
</file>


- update active_segments[idx].initial_end after successful end_tx.send
  so a subsequent pick_split_target cannot expand a shrunk worker past
  its new boundary, and persist_split_meta writes the post-split end
  instead of the stale one (greptile P1, coderabbit P1, cubic-dev-ai P1)
- Segment::split now requires a caller-provided new_id, rejects
  collisions with self.id, and stops inventing IDs via wrapping_add
  to keep allocation in lockstep with the engine's monotonic counter
  (greptile P2, coderabbit minor, cubic-dev-ai P1+P2)
- each spawned segment task now returns (slot_idx, Result<u64>); on
  completion the engine clears active_segments[slot_idx] to None so
  pick_split_target and persist_split_meta only see live segments
  (greptile P2)
- SettingsDto now exposes dynamic_split_enabled and
  dynamic_split_min_remaining_mb so the frontend can read back the
  persisted values, not just write them (coderabbit major,
  cubic-dev-ai P2)
- SegmentedDownloadEngine stores the dynamic-split knobs in atomic
  fields and exposes set_dynamic_split; new
  application::services::engine_config_bridge subscribes to
  SettingsUpdated and forwards live changes so settings updates
  reconfigure already-running engines without restart (cubic-dev-ai P2)

CHANGELOG updated. Frontend AppConfig type and 7 test fixtures updated
for the new fields. cargo test --workspace 930/930, clippy clean,
fmt clean. vitest 582/582, oxlint clean, tsc clean.

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
src-tauri/src/application/services/engine_config_bridge.rs (1)

132-174: These tests don't currently prove the bridge works.

Both cases publish events, but neither asserts an observable change on SegmentedDownloadEngine or proves the non-settings path is a no-op. A broken subscribe_engine_to_config() would still pass here. Please add a real assertion path, such as a small test-only accessor for the dynamic-split knobs or an observable split/no-split scenario.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/services/engine_config_bridge.rs` around lines 132
- 174, The tests publish events but never assert any observable effect on
SegmentedDownloadEngine, so update the two tests
(test_settings_updated_propagates_dynamic_split_changes and
test_non_settings_events_are_ignored) to verify behaviour: add a test-only
accessor on SegmentedDownloadEngine (or expose current dynamic split knobs) and
use make_engine() to read the engine's dynamic_split_enabled and
dynamic_split_min_remaining_mb before and after
bus.publish(DomainEvent::SettingsUpdated) and after
config_store.update_config(...)+publish to assert the values change from the
builder defaults to the persisted config and then to the patched values; in the
non-settings test capture the engine state, publish
DomainEvent::DownloadStarted, and assert the engine state remains unchanged to
prove the no-op path for subscribe_engine_to_config. Ensure you reference
SegmentedDownloadEngine's accessor (or method you add) and the
subscribe_engine_to_config function when locating code to modify.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: b6644797-a84e-4495-9808-750e2ab0cfc0

📥 Commits

Reviewing files that changed from the base of the PR and between fe16884 and e8902bb.

📒 Files selected for processing (15)
  • CHANGELOG.md
  • src-tauri/src/adapters/driven/network/download_engine.rs
  • src-tauri/src/adapters/driving/tauri_ipc.rs
  • src-tauri/src/application/services/engine_config_bridge.rs
  • src-tauri/src/application/services/mod.rs
  • src-tauri/src/domain/model/segment.rs
  • src-tauri/src/lib.rs
  • src/components/__tests__/ClipboardIndicator.test.tsx
  • src/hooks/__tests__/useAppEffects.test.ts
  • src/layouts/__tests__/AppLayout.test.tsx
  • src/stores/__tests__/settingsStore.test.ts
  • src/types/settings.ts
  • src/views/LinkGrabberView/__tests__/LinkGrabberView.test.tsx
  • src/views/SettingsView/__tests__/Sections.test.tsx
  • src/views/SettingsView/__tests__/SettingsView.test.tsx
✅ Files skipped from review due to trivial changes (10)
  • src/layouts/__tests__/AppLayout.test.tsx
  • src/stores/__tests__/settingsStore.test.ts
  • src/components/__tests__/ClipboardIndicator.test.tsx
  • src/hooks/__tests__/useAppEffects.test.ts
  • src/views/LinkGrabberView/__tests__/LinkGrabberView.test.tsx
  • src/views/SettingsView/__tests__/Sections.test.tsx
  • src/types/settings.ts
  • src-tauri/src/application/services/mod.rs
  • src/views/SettingsView/__tests__/SettingsView.test.tsx
  • CHANGELOG.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • src-tauri/src/domain/model/segment.rs


@cubic-dev-ai cubic-dev-ai Bot left a comment


1 issue found across 15 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src-tauri/src/application/services/engine_config_bridge.rs">

<violation number="1" location="src-tauri/src/application/services/engine_config_bridge.rs:133">
P3: This test has no assertions, so it cannot detect regressions in config-to-engine propagation.</violation>
</file>


Previous test pair published events but never checked the engine — a
broken bridge would have passed. Adds a `dynamic_split_state` accessor
on `SegmentedDownloadEngine` and uses it to assert:

- the engine flips from the seeded (true, 4 MiB) to the persisted
  (false, 16 MiB) when SettingsUpdated fires
- a subsequent patch + publish flips again to (true, 8 MiB)
- non-Settings events leave the engine state untouched

Closes the cubic-dev-ai P3 + coderabbit nitpick on PR #111.
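The pattern this fix uses — atomic knob fields plus a read-back accessor — can be sketched as below. The constructor and field set are simplified assumptions; the real `SegmentedDownloadEngine` carries many more fields:

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};

/// Simplified stand-in for the engine's dynamic-split knobs.
struct SegmentedDownloadEngine {
    dynamic_split_enabled: AtomicBool,
    dynamic_split_min_remaining_mb: AtomicU64,
}

impl SegmentedDownloadEngine {
    fn new(enabled: bool, min_mb: u64) -> Self {
        Self {
            dynamic_split_enabled: AtomicBool::new(enabled),
            dynamic_split_min_remaining_mb: AtomicU64::new(min_mb),
        }
    }

    /// Called by the config bridge when a SettingsUpdated event arrives;
    /// atomics let an already-running engine pick up changes without locking.
    fn set_dynamic_split(&self, enabled: bool, min_remaining_mb: u64) {
        self.dynamic_split_enabled.store(enabled, Ordering::Relaxed);
        self.dynamic_split_min_remaining_mb
            .store(min_remaining_mb, Ordering::Relaxed);
    }

    /// Read-back accessor so bridge tests can assert real propagation.
    fn dynamic_split_state(&self) -> (bool, u64) {
        (
            self.dynamic_split_enabled.load(Ordering::Relaxed),
            self.dynamic_split_min_remaining_mb.load(Ordering::Relaxed),
        )
    }
}
```

With an accessor like this, the bridge tests can assert the seeded, persisted, and patched values in turn instead of merely publishing events.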

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src-tauri/src/adapters/driven/network/download_engine.rs`:
- Around line 47-65: pick_split_target() currently treats freshly-created
children with state.progress == 0 as 0 B/s, letting brand-new segments be picked
as the "slowest" and re-split before they have any throughput sample; to fix
this, require a minimum sample (e.g., downloaded > 0 and/or elapsed =
state.started_at.elapsed() >= MIN_SAMPLE_AGE) before considering a segment for
slowest selection: compute bps only when that sample threshold is met and
otherwise skip the segment (do not treat bps as 0), and also update the code
that initializes new split children (where children are created with downloaded
== 0) to set started_at appropriately so the sample-age check works; use the
existing symbols pick_split_target, state.progress.load, state.started_at,
slowest, and split_at to locate and apply the change.
- Around line 84-96: The snapshot code currently drops finished segments because
you build segments_meta by filtering only Some(...) entries and you clear a
finished slot before persist_split_meta(); instead, build the full SegmentMeta
table (one entry per index) and persist it before you clear any slots: change
the filter_map over active_segments to an enumerate().map that produces a
SegmentMeta for every index, setting completed = false when slot.as_ref() exists
and completed = true when the slot is None (and mark downloaded_bytes as the
full segment length or otherwise fully completed), and ensure this collection
happens before you clear the finished slot or mutate active_segments so
persist_split_meta() writes a snapshot that includes completed ranges.
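One way to realize the suggested full-table snapshot is sketched below. The type and field names are illustrative, not the project's actual `SegmentMeta`, and `ranges` stands in for wherever the engine records each slot's initial byte range:

```rust
#[derive(Debug)]
struct SegmentMeta {
    index: usize,
    start: u64,
    end: u64, // inclusive
    downloaded_bytes: u64,
    completed: bool,
}

struct Slot {
    start: u64,
    end: u64,
    downloaded: u64,
}

/// Build one SegmentMeta per slot index, live or finished, so the
/// persisted snapshot never drops a completed range. `ranges` holds the
/// initial (start, end) byte range for each slot index.
fn snapshot(active_segments: &[Option<Slot>], ranges: &[(u64, u64)]) -> Vec<SegmentMeta> {
    active_segments
        .iter()
        .enumerate()
        .map(|(i, slot)| match slot {
            Some(s) => SegmentMeta {
                index: i,
                start: s.start,
                end: s.end,
                downloaded_bytes: s.downloaded,
                completed: false,
            },
            // A cleared slot means the segment finished: record its full
            // range so resume knows those bytes are already on disk.
            None => {
                let (start, end) = ranges[i];
                SegmentMeta {
                    index: i,
                    start,
                    end,
                    downloaded_bytes: end - start + 1,
                    completed: true,
                }
            }
        })
        .collect()
}
```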
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 33bf1e1c-1327-401e-a0d7-fa624e5fdf7a

📥 Commits

Reviewing files that changed from the base of the PR and between e8902bb and 5670466.

📒 Files selected for processing (2)
  • src-tauri/src/adapters/driven/network/download_engine.rs
  • src-tauri/src/application/services/engine_config_bridge.rs

Two fixes for issues coderabbit flagged on PR #111:

1) Persisted split snapshots dropped completed byte ranges (Major).

   The previous fix nulled a slot in `active_segments` on completion,
   then called `persist_split_meta` from the same path. Because the
   serializer filtered Option::Some only, every meta written after a
   split omitted the segment that had just finished. A crash between
   the split and the next progress write would resume without any
   record that those bytes were already on disk.

   `active_segments` is now `Vec<SegmentRuntimeState>` (no Option) and
   each entry carries a `completed: bool`. The coordinator sets the
   flag on `Ok(_bytes)` instead of removing the slot. `pick_split_target`
   skips completed slots so they cannot be re-picked. `persist_split_meta`
   serializes every slot, reporting `completed: true` and a full-range
   `downloaded_bytes` for finished segments so resume sees both live
   and completed ranges.

2) Fresh split children could be re-split before any throughput sample
   (Major).

   `pick_split_target` previously computed `bps = downloaded / elapsed`
   even for slots with `downloaded == 0` and `elapsed ≈ 0`, treating
   them as 0 B/s. A brand-new child therefore became the guaranteed
   "slowest" candidate on the next completion event, cascading splits
   into the newest range without any real slow-tail signal.

   Picker now skips a slot until it has produced a sample:
   `downloaded > 0` AND `elapsed >= MIN_SPLIT_SAMPLE_DURATION` (500 ms).
   Two new unit tests cover both rejection paths and a third asserts
   that completed slots are excluded from selection.

cargo test --workspace 932/932, clippy clean, fmt clean.
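A condensed sketch of the selection logic this commit describes — the sample guard, the completed-slot filter, and the slowest-by-throughput pick. Field names are simplifications of the real `SegmentRuntimeState` (which shares atomics with the worker task), and returning the midpoint of the remaining bytes as a segment-relative offset is an assumption about the split rule:

```rust
use std::time::{Duration, Instant};

const MIN_SPLIT_SAMPLE_DURATION: Duration = Duration::from_millis(500);

struct SegmentRuntimeState {
    downloaded: u64,
    total: u64,
    started_at: Instant,
    completed: bool,
}

/// Pick the slowest live segment whose remaining bytes meet `min_remaining`,
/// skipping slots that have not yet produced a throughput sample so a
/// freshly spawned child can never be the guaranteed "slowest" candidate.
fn pick_split_target(slots: &[SegmentRuntimeState], min_remaining: u64) -> Option<(usize, u64)> {
    slots
        .iter()
        .enumerate()
        .filter(|(_, s)| !s.completed)
        // Sample guard: require real bytes AND a minimum observation window.
        .filter(|(_, s)| s.downloaded > 0 && s.started_at.elapsed() >= MIN_SPLIT_SAMPLE_DURATION)
        .filter(|(_, s)| s.total - s.downloaded >= min_remaining)
        .min_by(|(_, a), (_, b)| {
            let bps_a = a.downloaded as f64 / a.started_at.elapsed().as_secs_f64();
            let bps_b = b.downloaded as f64 / b.started_at.elapsed().as_secs_f64();
            bps_a.partial_cmp(&bps_b).unwrap()
        })
        // Split at the midpoint of the remaining bytes (offset within the segment).
        .map(|(idx, s)| (idx, s.downloaded + (s.total - s.downloaded) / 2))
}
```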

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src-tauri/src/adapters/driven/network/download_engine.rs`:
- Around line 503-508: The Cargo.toml in src-tauri currently uses an invalid
Rust edition ("2024") which prevents compiling newer-syntax constructs like the
let-chain used in the if condition around dynamic_split_enabled, cancel_token,
and pick_split_target; change the edition to a valid value (e.g., edition =
"2021"), add a rust-version field (e.g., rust-version = "1.76") to document the
MSRV, and update project CI to enforce Rust 1.76+ so the if ... && let
Some((idx, split_at)) = pick_split_target(...) pattern compiles correctly.
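For reference, edition 2024 is in fact valid — it stabilized in Rust 1.85 — so the fix taken below keeps the edition and documents the MSRV instead. A minimal manifest fragment (the package name here is hypothetical):

```toml
[package]
name = "vortex"        # hypothetical; the real crate lives in src-tauri
version = "0.2.0"
edition = "2024"       # stable since Rust 1.85
rust-version = "1.85"  # MSRV floor for edition 2024; this PR later pins 1.95
```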
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ecbfe59a-003b-4623-b809-643a69c329c9

📥 Commits

Reviewing files that changed from the base of the PR and between 5670466 and 34a6e61.

📒 Files selected for processing (2)
  • CHANGELOG.md
  • src-tauri/src/adapters/driven/network/download_engine.rs
✅ Files skipped from review due to trivial changes (1)
  • CHANGELOG.md

mpiton added 2 commits April 26, 2026 16:46
Edition 2024 has been stable since Rust 1.85 (Feb 2025) and this
project builds against rustc 1.95, but the MSRV was undocumented.
Adding rust-version makes the floor explicit for tooling and contributors.
Previous commit set MSRV to 1.85 (the edition 2024 floor). Pin to the
actual toolchain version used for development and CI so contributors
don't accidentally try to build with an older rustc that, while
technically supporting edition 2024, may lag on stdlib features the
codebase relies on.

@cubic-dev-ai cubic-dev-ai Bot left a comment


1 issue found across 1 file (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src-tauri/Cargo.toml">

<violation number="1" location="src-tauri/Cargo.toml:7">
P2: `rust-version` is raised to 1.95, which unnecessarily bumps MSRV and will block builds on 1.85–1.94. Keep MSRV at 1.85 unless this PR requires 1.95-only features.</violation>
</file>


description = "A desktop download manager"
authors = ["mpiton"]
edition = "2024"
rust-version = "1.95"

@cubic-dev-ai cubic-dev-ai Bot Apr 26, 2026


P2: rust-version is raised to 1.95, which unnecessarily bumps MSRV and will block builds on 1.85–1.94. Keep MSRV at 1.85 unless this PR requires 1.95-only features.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/Cargo.toml, line 7:

<comment>`rust-version` is raised to 1.95, which unnecessarily bumps MSRV and will block builds on 1.85–1.94. Keep MSRV at 1.85 unless this PR requires 1.95-only features.</comment>

<file context>
@@ -4,7 +4,7 @@ version = "0.2.0"
 authors = ["mpiton"]
 edition = "2024"
-rust-version = "1.85"
+rust-version = "1.95"
 license = "GPL-3.0-only"
 
</file context>
Suggested change
- rust-version = "1.95"
+ rust-version = "1.85"

@mpiton mpiton merged commit 9eec5cc into main Apr 26, 2026
9 checks passed

Labels

documentation (Improvements or additions to documentation), rust
