Merged
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -18,6 +18,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Added

- Dynamic segment splitting (PRD-v2 P0.17, task 17): when a parallel segment finishes before its peers, the engine now re-evaluates the still-running segments, picks the slowest one whose remaining range exceeds `dynamic_split_min_remaining_mb` (default 4 MiB) and shrinks it in place — a fresh worker takes the upper half so the tail of the download accelerates instead of stalling on a single slow connection. Backend ships a domain-pure `Segment::split(at_byte, new_id)` validation method (state must be `Downloading`, split point strictly inside the unfetched range, caller-provided id must differ from the original — IDs are allocated by the engine's monotonic `next_segment_id` counter, never invented inside the domain), a new `DomainEvent::SegmentSplit { download_id, original_segment_id, new_segment_id, split_at }` forwarded as the `segment-split` Tauri event and logged in the per-download log store, two new `AppConfig` / `ConfigPatch` / `SettingsDto` fields `dynamic_split_enabled` (default `true`) and `dynamic_split_min_remaining_mb` (default `4`) wired through the toml config store, the Tauri IPC `SettingsDto`/`ConfigPatchDto` (so the frontend can both read and write them) and the new `application::services::engine_config_bridge` subscriber so live `settings_update` calls reconfigure already-running engines without a restart. `SegmentedDownloadEngine` stores `dynamic_split_enabled` / `dynamic_split_min_remaining_bytes` in `Arc<AtomicBool>` / `Arc<AtomicU64>` and exposes a `set_dynamic_split(enabled, min_remaining_mb)` setter consumed by the bridge. After a split, the engine updates the original slot's `initial_end` to `split_at` immediately on successful `end_tx.send`, so a subsequent `pick_split_target` evaluation cannot expand the worker's range past the shrunk boundary and `persist_split_meta` records the post-split topology rather than the stale one (closes coderabbit P1 + greptile P1 race). 
Each segment task now returns `(slot_idx, Result<u64>)`; on success the engine flips a `completed: bool` flag on the slot — `pick_split_target` skips completed slots so they cannot be re-picked, and `persist_split_meta` keeps the entry with `completed: true` and a full-range `downloaded_bytes` so a crash right after a split never loses the record of byte ranges already on disk. `pick_split_target` also gates on a 500 ms / non-zero-progress sample window: a fresh split child cannot be picked again until it has actually produced a throughput sample, preventing cascading fragmentation of the newest range. The segment worker accepts the upper bound through a `tokio::sync::watch::Receiver<u64>` instead of a frozen `u64`, re-reads it before each chunk fetch and again after every successful network read so a mid-flight shrink clamps the next write to the new boundary; per-segment progress is exposed via an `Arc<AtomicU64>` so the engine can pick the slowest candidate by throughput (`downloaded / elapsed`). After every split, the engine atomically rewrites `.vortex-meta` with the updated segment topology so resume after a crash mid-split sees a consistent state. (task 17, PR #111 review)
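The `Segment::split(at_byte, new_id)` validation rules described in this entry (state must be `Downloading`, split point strictly inside the unfetched range, caller-provided id must differ) can be sketched as a minimal model — the field names `start`, `end`, `downloaded` and the inclusive-range layout are assumptions here, not the crate's actual definitions:

```rust
#[derive(Debug, PartialEq)]
enum SegmentState { Pending, Downloading, Completed }

#[derive(Debug)]
struct Segment {
    id: u64,
    state: SegmentState,
    start: u64,      // first byte of the range (inclusive)
    end: u64,        // last byte of the range (inclusive)
    downloaded: u64, // bytes already written starting at `start`
}

#[derive(Debug, PartialEq)]
struct ValidationError(&'static str);

impl Segment {
    /// Shrink `self` to [start, at_byte) and return the upper half
    /// [at_byte, end] as a fresh segment owned by a new worker.
    fn split(&mut self, at_byte: u64, new_id: u64) -> Result<Segment, ValidationError> {
        if self.state != SegmentState::Downloading {
            return Err(ValidationError("segment is not downloading"));
        }
        if new_id == self.id {
            return Err(ValidationError("new id must differ from original"));
        }
        // Split point must fall inside the still-unfetched range.
        let unfetched_start = self.start + self.downloaded;
        if at_byte <= unfetched_start || at_byte > self.end {
            return Err(ValidationError("split point outside unfetched range"));
        }
        let upper = Segment {
            id: new_id,
            state: SegmentState::Downloading,
            start: at_byte,
            end: self.end,
            downloaded: 0,
        };
        self.end = at_byte - 1; // original keeps [start, at_byte)
        Ok(upper)
    }
}
```

Note that ids are passed in by the caller, matching the entry's point that the engine's monotonic `next_segment_id` counter — never the domain — allocates them.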
- "Report broken plugin" action (PRD-v2 P0.16, task 16): plugins listed in *Plugins → Plugin Store* now expose a *Report broken plugin* item in their kebab menu. Clicking it opens the user's default browser at a pre-filled GitHub issue on the plugin's repository, with diagnostic metadata (plugin name + version, Vortex version, OS, optional URL under test, last 50 log lines) inlined into the issue body. Backend adds a `repository_url` field to `domain::model::plugin::PluginInfo` (parsed from the new `[plugin].repository` key in `plugin.toml`), a `domain::ports::driven::UrlOpener` port plus its platform-native `SystemUrlOpener` adapter (`xdg-open` / `open` / `cmd start`, `http(s)://` only by validation), the std-only `domain::model::plugin::build_report_broken_url` URL builder (RFC 3986 unreserved-set percent encoder, last 50 log lines, GitHub-only repository hosts, accepts `.git` suffix, rejects malformed URLs with `DomainError::ValidationError`), and a `ReportBrokenPluginCommand` handler that returns `AppError::Validation` when a manifest carries no `repository_url`. New Tauri IPC `plugin_report_broken(pluginName, logLines?, testedUrl?) → string` returns the issue URL so the UI can fall back to clipboard copy if the launcher fails. i18n (en/fr): `plugins.action.reportBroken`, `plugins.toast.reportBrokenSuccess`, `plugins.toast.reportBrokenError`. (task 16)
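A minimal sketch of the URL building this entry describes, assuming a plain RFC 3986 unreserved-set percent encoder (ALPHA / DIGIT / `-` / `.` / `_` / `~` pass through) and GitHub's standard new-issue query shape; the real `build_report_broken_url` additionally enforces GitHub-only hosts and rejects malformed URLs:

```rust
// Percent-encode everything outside the RFC 3986 unreserved set.
fn percent_encode(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    for b in input.bytes() {
        match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'.' | b'_' | b'~' => {
                out.push(b as char)
            }
            _ => out.push_str(&format!("%{b:02X}")),
        }
    }
    out
}

// Hypothetical helper: build the pre-filled issue URL, accepting a `.git` suffix.
fn issue_url(repo: &str, title: &str, body: &str) -> String {
    format!(
        "{}/issues/new?title={}&body={}",
        repo.trim_end_matches(".git"),
        percent_encode(title),
        percent_encode(body),
    )
}
```

Byte-wise encoding is what makes multi-byte UTF-8 input safe: each byte of a non-ASCII character gets its own `%XX` escape.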
- Dynamic plugin configuration UI (PRD-v2 P0.15, task 15): plugins declaring a `[config]` block in their `plugin.toml` now expose their schema at runtime. Backend adds `ConfigField` / `ConfigFieldType` / `PluginConfigSchema` to `domain/model/plugin.rs` (typed validation, enum options, `min`/`max` bounds, regex via a std-only matcher — no external import in the domain), a `PluginConfigStore` port (`get_values` / `set_value` / `list_all` / `delete_all`) implemented by `SqlitePluginConfigRepo` backed by the new `plugin_configs (plugin_name, key, value)` table (migration `m20260425_000005_create_plugin_configs`, composite primary key). The manifest parser (`adapters/driven/plugin/manifest.rs`) now extracts `type`, `default`, `options`, `description`, `min`, `max`, `regex` on top of the existing defaults, and rejects defaults that fail their own field validation. CQRS gains `UpdatePluginConfigCommand` (validates against the schema, applies the runtime first then persists, rolls back on failure) and `GetPluginConfigQuery` (returns the schema plus persisted values, dropping any persisted entry that no longer matches the current schema and falling back to manifest defaults). `PluginLoader` is extended with `get_manifest()` and `set_runtime_config()`; `ExtismPluginLoader` implements both by reading from `PluginRegistry` and writing to `SharedHostResources::plugin_configs`, so `get_config(key)` calls from the WASM plugin observe the new value without a reload. At startup, `lib.rs` replays persisted configs onto the in-memory map before plugins are loaded. 
Frontend adds two components: `PluginConfigField.tsx` (dispatcher renderer: `string` → text input, `boolean` → shadcn switch, `integer`/`float` → numeric input with bounds, `url` → url input, `enum` (and `string` with options) → shadcn select; `aria-describedby` on the control points to the error message) and `PluginConfigDialog.tsx` (loads the schema via `useQuery`, validates each field on the UI side (rejects empty floats, validates JSON arrays) before sending, persists changed values sequentially, guards the schema-reset effect while a save is in flight to avoid clobbering the draft, invalidates the query on success). `PluginsView` queries `plugin_config_get` for each installed plugin (keyed off the unfiltered installed list to avoid churn while typing in search) to decide whether the *Configure* button (Settings icon, next to the *More* menu) should render: a plugin without `[config]` exposes no button. New IPC commands `plugin_config_get(name) → PluginConfigView` and `plugin_config_update(name, key, value)`. i18n (en/fr): `plugins.action.configure`, `plugins.config.{title,description,loading,error,noFields,toast.{saveSuccess,validationFailed}}`. (task 15)
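The typed validation described above can be sketched roughly like this — variant and field names are assumptions, and the std-only regex matcher and other checks from `domain/model/plugin.rs` are omitted:

```rust
enum ConfigFieldType { String, Boolean, Integer, Float, Url, Enum }

struct ConfigField {
    key: &'static str,
    field_type: ConfigFieldType,
    options: Vec<String>, // enum choices (may also apply to strings)
    min: Option<f64>,
    max: Option<f64>,
}

fn validate(field: &ConfigField, value: &str) -> Result<(), String> {
    match field.field_type {
        ConfigFieldType::Boolean => value
            .parse::<bool>()
            .map(|_| ())
            .map_err(|_| format!("{}: expected true/false", field.key)),
        ConfigFieldType::Integer | ConfigFieldType::Float => {
            let n: f64 = value
                .parse()
                .map_err(|_| format!("{}: not a number", field.key))?;
            if matches!(field.field_type, ConfigFieldType::Integer) && n.fract() != 0.0 {
                return Err(format!("{}: expected an integer", field.key));
            }
            // `min`/`max` bounds from the manifest, when declared.
            if field.min.is_some_and(|m| n < m) || field.max.is_some_and(|m| n > m) {
                return Err(format!("{}: out of bounds", field.key));
            }
            Ok(())
        }
        ConfigFieldType::Url => {
            if value.starts_with("http://") || value.starts_with("https://") {
                Ok(())
            } else {
                Err(format!("{}: expected an http(s) URL", field.key))
            }
        }
        ConfigFieldType::Enum => {
            if field.options.iter().any(|o| o == value) {
                Ok(())
            } else {
                Err(format!("{}: not one of the declared options", field.key))
            }
        }
        ConfigFieldType::String => Ok(()),
    }
}
```

Running manifest defaults through the same function is what lets the parser reject defaults that fail their own field validation.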
- History retention with automatic daily purge (PRD-v2 P0.14, task 14): new `history_retention_days` setting (default 30, presets 7 / 30 / 90 / 365 / `0 = unlimited`) exposed in the *General* Settings tab as a `Select` dropdown wired to `settings_update`. Backend ships a `Clock` domain port (`SystemClock` adapter under `adapters/driven/scheduler/`) and a `HistoryPurgeWorker` daemon spawned during Tauri setup that hard-deletes `history` rows where `completed_at < now - retention_days * 86_400`. The worker persists its last run as a Unix-epoch timestamp inside `<app_data_dir>/.history_purge_state` (sentinel filename `HISTORY_PURGE_STATE_FILE`). On startup, the daemon reads the sentinel and either runs immediately (missing/stale) or sleeps for `SECS_PER_DAY - elapsed` so the first post-launch purge stays anchored to the previous successful run instead of drifting up to ~47h after a restart; the recurring loop then ticks every 24h via `tokio::time::interval` with `MissedTickBehavior::Skip`. `retention_days <= 0` is a no-op that does not write the sentinel, so the next run re-fires the moment the user re-enables retention; corrupt sentinels are treated as "never ran" so a stuck file never blocks the scheduler. The worker shares the same `Arc<dyn HistoryRepository>` and `Arc<dyn ConfigStore>` the IPC layer already mutates, so a settings change is observed without restart. Domain helper `normalize_history_retention_days` clamps negatives back to `0` and is now applied at every write boundary — `apply_patch` (so a crafted `settings_update` payload cannot persist a negative) and `From<ConfigDto> for AppConfig` (so a hand-edited `config.toml` is normalized at load) — plus the worker itself for defense-in-depth. (task 14)
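The first-tick anchoring can be sketched as follows; the helper name `initial_delay` is hypothetical, while `SECS_PER_DAY - elapsed` and the missing/stale-sentinel-runs-immediately behavior come straight from the entry:

```rust
const SECS_PER_DAY: u64 = 86_400;

/// How long the daemon waits before the first post-launch purge.
/// `last_run_epoch` is `None` when the sentinel file is missing or corrupt.
fn initial_delay(last_run_epoch: Option<u64>, now_epoch: u64) -> u64 {
    match last_run_epoch {
        // "Never ran": purge immediately.
        None => 0,
        Some(last) => {
            let elapsed = now_epoch.saturating_sub(last);
            // Stale sentinel (a day or more old) saturates to 0 => run now;
            // otherwise anchor the next tick to the previous successful run,
            // so restarts never let the purge drift.
            SECS_PER_DAY.saturating_sub(elapsed)
        }
    }
}
```

After this initial delay, the recurring 24 h `tokio::time::interval` loop takes over, as described above.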
1 change: 1 addition & 0 deletions src-tauri/Cargo.toml
@@ -4,6 +4,7 @@ version = "0.2.0"
description = "A desktop download manager"
authors = ["mpiton"]
edition = "2024"
rust-version = "1.95"

@cubic-dev-ai (bot) commented on Apr 26, 2026


P2: rust-version is raised to 1.95, which unnecessarily bumps MSRV and will block builds on 1.85–1.94. Keep MSRV at 1.85 unless this PR requires 1.95-only features.


<file context>
@@ -4,7 +4,7 @@ version = "0.2.0"
 authors = ["mpiton"]
 edition = "2024"
-rust-version = "1.85"
+rust-version = "1.95"
 license = "GPL-3.0-only"
 
</file context>
Suggested change:
-rust-version = "1.95"
+rust-version = "1.85"

license = "GPL-3.0-only"

[lib]
6 changes: 6 additions & 0 deletions src-tauri/src/adapters/driven/config/toml_config_store.rs
@@ -156,6 +156,8 @@ struct ConfigDto {
retry_delay_seconds: u32,
verify_checksums: bool,
pre_allocate_space: bool,
dynamic_split_enabled: bool,
dynamic_split_min_remaining_mb: u64,

// History
history_retention_days: i64,
@@ -211,6 +213,8 @@ impl From<AppConfig> for ConfigDto {
retry_delay_seconds: c.retry_delay_seconds,
verify_checksums: c.verify_checksums,
pre_allocate_space: c.pre_allocate_space,
dynamic_split_enabled: c.dynamic_split_enabled,
dynamic_split_min_remaining_mb: c.dynamic_split_min_remaining_mb,
history_retention_days: c.history_retention_days,
proxy_type: c.proxy_type,
proxy_url: c.proxy_url,
@@ -251,6 +255,8 @@ impl From<ConfigDto> for AppConfig {
retry_delay_seconds: d.retry_delay_seconds,
verify_checksums: d.verify_checksums,
pre_allocate_space: d.pre_allocate_space,
dynamic_split_enabled: d.dynamic_split_enabled,
dynamic_split_min_remaining_mb: d.dynamic_split_min_remaining_mb,
history_retention_days: normalize_history_retention_days(d.history_retention_days),
proxy_type: d.proxy_type,
proxy_url: d.proxy_url,
14 changes: 14 additions & 0 deletions src-tauri/src/adapters/driven/event/tauri_bridge.rs
@@ -50,6 +50,7 @@ fn event_name(event: &DomainEvent) -> &'static str {
DomainEvent::SegmentStarted { .. } => "segment-started",
DomainEvent::SegmentCompleted { .. } => "segment-completed",
DomainEvent::SegmentFailed { .. } => "segment-failed",
DomainEvent::SegmentSplit { .. } => "segment-split",
DomainEvent::PluginLoaded { .. } => "plugin-loaded",
DomainEvent::PluginUnloaded { .. } => "plugin-unloaded",
DomainEvent::PackageCreated { .. } => "package-created",
@@ -116,6 +117,19 @@ fn event_payload(event: &DomainEvent) -> serde_json::Value {
} => {
json!({ "downloadId": download_id.0, "segmentId": segment_id, "error": error })
}
DomainEvent::SegmentSplit {
download_id,
original_segment_id,
new_segment_id,
split_at,
} => {
json!({
"downloadId": download_id.0,
"originalSegmentId": original_segment_id,
"newSegmentId": new_segment_id,
"splitAt": split_at,
})
}

DomainEvent::PluginLoaded { name, version } => {
json!({ "name": name, "version": version })
13 changes: 13 additions & 0 deletions src-tauri/src/adapters/driven/logging/download_log_bridge.rs
@@ -84,6 +84,19 @@ fn record_download_event(store: &DownloadLogStore, event: &DomainEvent) {
format!("[ERROR] Segment {segment_id} failed: {error}"),
);
}
DomainEvent::SegmentSplit {
download_id,
original_segment_id,
new_segment_id,
split_at,
} => {
store.push(
download_id.0,
format!(
"[INFO] Segment {original_segment_id} split at byte {split_at}; new segment {new_segment_id} took the upper half"
),
);
}
DomainEvent::ChecksumVerified { id, algorithm, .. } => {
store.push(id.0, format!("[INFO] {algorithm} checksum verified"));
}