Conversation
Agent-Logs-Url: https://github.com/gfauredev/LogOut/sessions/0da072c0-a66a-488a-9fae-ea480a0065c5
Co-authored-by: gfauredev <19304085+gfauredev@users.noreply.github.com>
📝 Walkthrough
This PR introduces a "resume last session" feature, refactors notification scheduling logic into a shared helper, updates UI translations across multiple languages, and normalises loop destructuring formatting throughout the codebase.

Changes

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
Poem
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
📊 Coverage Report
Lines: 3754/5005 (75.0%) ⏱️ Tests: 255 tests in 0.498s
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/active_session/mod.rs`:
- Around line 500-545: The scheduled rest notification in the use_effect closure
must be made cancelable and pause-aware: capture the current rest_key()
generation (or start+duration tuple) and paused_at into the spawned task, and
before calling crate::services::notifications::send_notification verify the
generation still matches and paused_at is not set (or use an
AbortHandle/Abortable task/AbortController to cancel the pending timer whenever
rest_key(), paused_at, rest_notif_title/body, or the session end changes); apply
this for both the tokio::spawn branch (use tokio::sync::oneshot/AbortHandle or
check the captured generation after sleep) and the
wasm_bindgen_futures::spawn_local branch (use AbortController or check the
captured generation/paused_at after the TimeoutFuture) so stale notifications
are never delivered when the rest is changed, paused, canceled, or a new rest
period starts.
- Around line 547-569: The loop currently fires when intervals > prev which
duplicates the first rest-over alert if a one-shot notification has already
occurred; change the condition in the coroutine (the block using
rest_key.peek(), rest_bell_count.peek(), and send_notification) to ignore the
first interval when prev == 0 (e.g. require not (prev == 0 && intervals == 1))
so the first interval is excluded here, and when you do send a notification
still call rest_bell_count.set(intervals) to advance the counter; alternatively,
ensure the scheduled one-shot notifier updates rest_bell_count to the current
intervals when it fires so this coroutine won't resend.
In `@src/components/home.rs`:
- Around line 87-94: The current use_memo for last_session uses
completed_sessions.read().first().filter(...) which only checks the first cached
row; change it to search the collection for the first resumable session by
iterating and finding the first entry whose exercise_logs is non-empty. Update
the closure used by last_session (the use_memo block) to call
completed_sessions.read().iter().find(|s| !s.exercise_logs.is_empty()).cloned()
(or the equivalent iterator/find call for the collection type) so the resume
button picks the first resumable session rather than only the first loaded row.
- Around line 154-171: Resuming a session by re-saving the same id via
storage::save_session from the onclick in the last_session branch leaves
completed_sessions and viewed_ids stale; update the onclick handler that uses
session_to_resume so that after calling storage::save_session you also remove or
update the corresponding entry in completed_sessions (or clear the id from
viewed_ids) so the sync effect that deduplicates by viewed_ids will see the
resumed changes, or alternatively generate a new session id when creating
session_to_resume; locate the onclick closure around session_to_resume and
ensure it updates completed_sessions/viewed_ids or assigns a new id before
saving.
In `@src/components/session_timers.rs`:
- Around line 15-61: The spawned WASM timeout can fire for a stale exercise
because it isn't validated when it wakes; in schedule_duration_notification
capture the original identifiers (exercise_start/last_duration or an explicit
exercise_id/generation) and, inside the spawn_local callback before setting
duration_bell_rung and calling send_notification, re-check the current active
exercise/generation (e.g. compare the current exercise_start or a passed
generation token read from the same Signal/store used to drive the UI) and only
set duration_bell_rung/send_notification if they still match; otherwise do
nothing so the old timeout is effectively cancelled. Ensure the check uses the
same Signal types (duration_bell_rung.peek/read) so you don't mutate state for a
different exercise.
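The `home.rs` lookup fix suggested above (scan for the first resumable session rather than only checking the first cached row) can be sketched in isolation. This is a hypothetical reduction, not the component's actual code: `Session` stands in for the app's real model, and `last_resumable` plays the role of the `use_memo` closure.

```rust
// Minimal sketch of the suggested `use_memo` change: search all cached rows
// for the first session with a non-empty exercise log instead of only
// inspecting the first row. `Session` is a stand-in for the real model.
#[derive(Clone, Debug, PartialEq)]
struct Session {
    id: u32,
    exercise_logs: Vec<String>,
}

fn last_resumable(completed: &[Session]) -> Option<Session> {
    // `find` walks the collection in order, so the newest loaded rows (which
    // come first) still win, but empty placeholder rows are skipped.
    completed.iter().find(|s| !s.exercise_logs.is_empty()).cloned()
}
```

With this shape, a leading session that has no exercise logs no longer hides a resumable one behind it.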
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 981caf71-dcef-4dac-8375-09485c253d09
📒 Files selected for processing (12)
assets/en.ftl, assets/es.ftl, assets/fr.ftl, src/components/active_session/completed_exercises.rs, src/components/active_session/mod.rs, src/components/active_session/pending_exercises.rs, src/components/analytics/chart.rs, src/components/analytics/selector.rs, src/components/exercise_form_fields.rs, src/components/exercises.rs, src/components/home.rs, src/components/session_timers.rs
// Schedule a precise one-shot rest-over notification whenever a new rest
// period begins. Fires ~250 ms early to compensate for jitter.
use_effect(move || {
    let Some((start, duration)) = rest_key() else {
        return;
    };
    if duration == 0 {
        return;
    }
    // Reset the exceeded-interval counter for the new rest period.
    rest_bell_count.set(0);

    let title = rest_notif_title.peek().clone();
    let body = rest_notif_body.peek().clone();
    let fire_at_secs = start + duration;

    #[cfg(not(target_arch = "wasm32"))]
    {
        let now = crate::models::get_current_timestamp();
        if fire_at_secs > now {
            let delay_ms = ((fire_at_secs - now) * 1_000)
                .saturating_sub(crate::components::session_timers::NOTIF_EARLY_MS);
            tokio::spawn(async move {
                tokio::time::sleep(std::time::Duration::from_millis(delay_ms)).await;
                crate::services::notifications::send_notification(&title, &body, "logout-rest");
            });
        } else {
            crate::services::notifications::send_notification(&title, &body, "logout-rest");
        }
    }
    #[cfg(target_arch = "wasm32")]
    {
        let now = crate::models::get_current_timestamp();
        let delay_ms = if fire_at_secs > now {
            ((fire_at_secs - now) * 1_000)
                .saturating_sub(crate::components::session_timers::NOTIF_EARLY_MS)
                .min(u32::MAX as u64) as u32
        } else {
            0
        };
        wasm_bindgen_futures::spawn_local(async move {
            gloo_timers::future::TimeoutFuture::new(delay_ms).await;
            crate::services::notifications::send_notification(&title, &body, "logout-rest");
        });
    }
});
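The delay computation used in the WASM branch above can be isolated as a pure function. This is an illustrative sketch, not project code; `NOTIF_EARLY_MS = 250` is an assumption taken from the "~250 ms early" comment.

```rust
// Sketch of the WASM delay computation: milliseconds until the bell, fired
// NOTIF_EARLY_MS early, clamped into the u32 range that the timer accepts.
// NOTIF_EARLY_MS is assumed to be 250 per the comment in the component.
const NOTIF_EARLY_MS: u64 = 250;

fn wasm_delay_ms(fire_at_secs: u64, now_secs: u64) -> u32 {
    if fire_at_secs > now_secs {
        ((fire_at_secs - now_secs) * 1_000)
            .saturating_sub(NOTIF_EARLY_MS) // never underflows past zero
            .min(u32::MAX as u64) as u32    // gloo timeouts take a u32
    } else {
        0 // already elapsed: fire immediately
    }
}
```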
The scheduled rest alert is neither cancelled nor pause-aware.
The spawned task always fires once start + duration is reached, even if the user pauses the session, changes the rest duration, starts a new rest period, or finishes the session first. That can produce stale “rest over” notifications. This needs a generation/key check or explicit cancellation, and the schedule must stop whilst paused_at is set.
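The generation-guard idea proposed in this finding can be sketched independently of Dioxus. This is a hypothetical reduction under stated assumptions: a global atomic generation stands in for `rest_key()`, a plain thread stands in for `tokio::spawn`, and an atomic counter stands in for `send_notification`; a `paused_at` check would sit alongside the generation check in the real component.

```rust
// Sketch of a generation guard: each new rest period bumps a global
// generation, and a timer fires only if its captured generation is still
// current, so timers for superseded rest periods become no-ops.
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

static REST_GEN: AtomicU64 = AtomicU64::new(0);

/// Start a one-shot rest timer; `fired` counts delivered notifications
/// (standing in for `send_notification`).
fn start_rest_timer(fired: Arc<AtomicUsize>, delay_ms: u64) {
    // Starting a new rest period implicitly invalidates any pending timer.
    let my_gen = REST_GEN.fetch_add(1, Ordering::SeqCst) + 1;
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(delay_ms));
        // Deliver only if no newer rest period has started meanwhile.
        if REST_GEN.load(Ordering::SeqCst) == my_gen {
            fired.fetch_add(1, Ordering::SeqCst);
        }
    });
}
```

Starting a second rest period before the first timer wakes leaves exactly one notification delivered, which is the behaviour the review asks for.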
// Tick-based coroutine: fires a notification for every completed exceeded
// interval (2nd, 3rd, … ring) so the user keeps being reminded.
use_coroutine(move |_: UnboundedReceiver<()>| async move {
    loop {
        crate::utils::sleep_ms(1_000).await;
        let Some((start, duration)) = *rest_key.peek() else {
            continue;
        };
        if duration == 0 {
            continue;
        }
        let now = crate::models::get_current_timestamp();
        let elapsed = now.saturating_sub(start);
        let intervals = elapsed / duration;
        let prev = *rest_bell_count.peek();
        if intervals > prev {
            rest_bell_count.set(intervals);
            crate::services::notifications::send_notification(
                &rest_notif_title.peek(),
                &rest_notif_body.peek(),
                "logout-rest",
            );
        }
The first rest bell now fires twice.
rest_bell_count stays at 0 after the one-shot notification, so this loop sends again as soon as elapsed / duration == 1. The first rest-over alert will therefore be duplicated; the counter needs to be advanced when the scheduled notification wins, or the first interval must be excluded here.
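The de-duplication rule suggested for this finding can be expressed as a small pure function. `should_ring` is a hypothetical helper, not a function in the codebase: it encodes "the one-shot notifier owns the 0 → 1 transition, the tick loop rings every later interval once and always advances the counter".

```rust
// Sketch of the proposed condition: skip the first exceeded interval (the
// scheduled one-shot already rang it) but still advance the counter so the
// 2nd, 3rd, … intervals each ring exactly once. Returns (ring, new_count).
fn should_ring(prev: u64, intervals: u64) -> (bool, u64) {
    if intervals > prev {
        // prev == 0 && intervals == 1 is the transition the one-shot covers.
        (!(prev == 0 && intervals == 1), intervals)
    } else {
        (false, prev) // nothing new this tick; keep the counter as-is
    }
}
```

Note that advancing the counter even when the ring is suppressed is what prevents the duplicate: on the next tick `prev` is no longer 0.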
if let Some(ref last_sess) = *last_session.read() {
    {
        let session_to_resume = {
            let mut s = last_sess.clone();
            s.end_time = None;
            s.paused_at = None;
            s
        };
        rsx! {
            button {
                class: "icon edit",
                onclick: move |_| {
                    storage::save_session(session_to_resume.clone());
                },
                title: t!("session-resume-last-title"),
                "▶️"
            }
        }
Resuming in place leaves the completed-session cache stale.
This saves the resumed session with the same id, but completed_sessions still holds the old clone. When it is finished again, the sync effect at Lines 56-82 ignores the updated row because viewed_ids already contains that id, so the home list can keep showing pre-resume data until a full reload. Remove/update the cached entry on resume, or resume into a new session id.
One possible fix if the id must stay stable:

  button {
      class: "icon edit",
      onclick: move |_| {
+         let new_len = {
+             let mut cs = completed_sessions.write();
+             cs.retain(|s| s.id != session_to_resume.id);
+             cs.len()
+         };
+         sessions_loaded_offset.set(new_len);
          storage::save_session(session_to_resume.clone());
      },
      title: t!("session-resume-last-title"),
      "▶️"
  }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if let Some(ref last_sess) = *last_session.read() {
    {
        let session_to_resume = {
            let mut s = last_sess.clone();
            s.end_time = None;
            s.paused_at = None;
            s
        };
        rsx! {
            button {
                class: "icon edit",
                onclick: move |_| {
                    let new_len = {
                        let mut cs = completed_sessions.write();
                        cs.retain(|s| s.id != session_to_resume.id);
                        cs.len()
                    };
                    sessions_loaded_offset.set(new_len);
                    storage::save_session(session_to_resume.clone());
                },
                title: t!("session-resume-last-title"),
                "▶️"
            }
        }
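The cache-eviction part of the suggestion above can be reduced to a standalone helper. This is an illustrative sketch, not the component's code: `Session` is a stand-in for the real model, and the returned length corresponds to the `sessions_loaded_offset` update.

```rust
// Sketch of the eviction step: drop the resumed session from the completed
// cache and return the new length as the incremental-load offset, so the
// dedup-by-id sync effect re-ingests the row once it is finished again.
#[derive(Clone, Debug, PartialEq)]
struct Session {
    id: u64,
}

fn evict_resumed(cache: &mut Vec<Session>, resumed_id: u64) -> usize {
    cache.retain(|s| s.id != resumed_id);
    cache.len()
}
```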
/// Schedule a one-shot duration-reached notification for the given exercise.
///
/// * On **WASM**: uses `gloo_timers` in a `spawn_local` task so the timeout
///   fires accurately without blocking the UI. `duration_bell_rung` is set
///   inside the callback to prevent the tick-based fallback from sending a
///   duplicate.
/// * On **native**: the tick-based path is accurate enough (±1 s) and avoids
///   the complexity of crossing Dioxus signal boundaries from a `tokio::spawn`
///   thread; no extra task is spawned here.
#[allow(unused_mut)]
fn schedule_duration_notification(
    exercise_start: Option<u64>,
    last_duration: Option<u64>,
    mut duration_bell_rung: Signal<bool>,
) {
    #[cfg(target_arch = "wasm32")]
    {
        let Some(start) = exercise_start else { return };
        let Some(dur) = last_duration else { return };
        if dur == 0 || *duration_bell_rung.read() {
            return;
        }
        let fire_at_secs = start + dur;
        let now = get_current_timestamp();
        // Only schedule if the duration hasn't already elapsed; the tick-based
        // fallback fires immediately when elapsed >= dur on the next tick.
        if fire_at_secs <= now {
            return;
        }
        let delay_ms = ((fire_at_secs - now) * 1_000)
            .saturating_sub(NOTIF_EARLY_MS)
            .min(u32::MAX as u64) as u32;
        let title = t!("notif-duration-title").to_string();
        let body = t!("notif-duration-body").to_string();
        wasm_bindgen_futures::spawn_local(async move {
            gloo_timers::future::TimeoutFuture::new(delay_ms).await;
            // Re-check to avoid a duplicate if the tick fired first.
            if !*duration_bell_rung.peek() {
                duration_bell_rung.set(true);
                crate::services::notifications::send_notification(&title, &body, "logout-duration");
            }
        });
    }
    // On native, suppress unused-variable warnings; the tick handles it.
    #[cfg(not(target_arch = "wasm32"))]
    let _ = (exercise_start, last_duration, duration_bell_rung);
}
The WASM duration timeout can ring for the wrong exercise.
This timeout is never cancelled or keyed to the current exercise. If the user completes/cancels the exercise, starts another one, or pauses before it fires, the old task can still send logout-duration, and because duration_bell_rung gets reused it can suppress the real alert for the new exercise. The callback needs to validate the active exercise/generation before sending.
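The wake-time validation this finding asks for can be sketched as a pure decision function. `fire_if_current` is a hypothetical helper, not project code: it captures the exercise key at scheduling time (here, `exercise_start`) and, when the timeout wakes, fires only if the same exercise is still active and the bell has not already rung.

```rust
// Sketch of the re-check at wake time: a timer scheduled for an earlier
// exercise sees a key mismatch and becomes a no-op instead of ringing for
// (or suppressing) the wrong exercise. Returns true if the caller should
// send the "logout-duration" notification.
fn fire_if_current(captured_start: u64, current_start: Option<u64>, bell_rung: &mut bool) -> bool {
    if current_start == Some(captured_start) && !*bell_rung {
        *bell_rung = true; // also suppresses the tick-based fallback
        true
    } else {
        false // stale timer (exercise changed/ended) or bell already rung
    }
}
```

Crucially, the stale branch leaves `bell_rung` untouched, so an old timer can no longer suppress the real alert for the new exercise.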
…med session UI, stale ATH notification
Agent-Logs-Url: https://github.com/gfauredev/LogOut/sessions/feeac486-69ef-4414-beff-795baf8bb24b
Co-authored-by: gfauredev <19304085+gfauredev@users.noreply.github.com>
📊 Coverage Report
Lines: 3754/5005 (75.0%) ⏱️ Tests: 255 tests in 0.597s
This Pull Request…

Engineering Principles
- scope is focused (no unrelated dependency updates or formatting)
- follows the README's Engineering Principles

CI/CD Readiness
- branch name follows feat/…, fix/…, refactor/…, …
- dx fmt; cargo fmt
- nix flake check succeeds without warnings
- dx build with necessary platform flags succeeds
- cargo clippy -- -D warnings -W clippy::all -W clippy::pedantic produces zero warnings
- cargo llvm-cov nextest --ignore-filename-regex '(src/components/|\.cargo/registry/|nix/store)'
- maestro test --headless maestro/web
- maestro test --headless maestro/android

Summary by CodeRabbit
Release Notes
New Features
Improvements