Add AI slop detection and auto-skip functionality (#68)
Implements AI artist detection using Spotify AI Blocker data with Redis caching. Automatically skips AI-generated tracks for premium users and notifies non-premium users. Refactors the profanity check queue into a generic track check queue.
No actionable comments were generated in the recent review. 🎉
ℹ️ Recent review info — Configuration used: Organization UI · Review profile: ASSERTIVE · Plan: Pro
📝 Walkthrough
Adds AI-generated music detection and wiring: new AISlopDetectionService with three providers, integrates AI checks into a renamed track_check queue (two-stage: AI check → profanity), DB migration and user setting, Telegram UI/commands, locales, and provider implementations.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant TelegramHandler as Telegram Handler
    participant TrackQueue as Track Queue
    participant AISlopService as AI Slop Service
    participant Providers as Detection Providers
    participant Telegram as Telegram Bot
    participant UserService
    User->>TelegramHandler: play track / trigger check
    TelegramHandler->>TrackQueue: queue track_check(task)
    TrackQueue->>AISlopService: check_ai_slop(track)
    AISlopService->>Providers: SpotifyAIBlocker.detect()
    Providers-->>AISlopService: prediction
    AISlopService->>Providers: SoulOverAI.detect()
    Providers-->>AISlopService: prediction
    AISlopService->>Providers: SHLabs.detect() (optional)
    Providers-->>AISlopService: prediction
    AISlopService-->>TrackQueue: AISlopCheckResult
    alt prediction is AI and user configured "skip"
        TrackQueue->>TrackQueue: auto-skip track (increment stats)
    else notify or no-AI
        TrackQueue->>TrackQueue: check_profanity()
    end
    TrackQueue->>Telegram: send alert (inline buttons)
    Telegram->>User: display notification
    User->>Telegram: click AISlopDetection inline button
    Telegram->>TelegramHandler: CallbackQuery(status)
    TelegramHandler->>UserService: set_cfg_ai_slop_detection(status)
    UserService-->>TelegramHandler: update result
    TelegramHandler->>Telegram: update inline keyboard
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Replace auto-skip behavior with interactive notification showing dislike/ignore buttons and direct link to artist's Spotify page. Add artist URL tracking to ShortTrack for the new button.
Integrate xoundbyte/soul-over-ai artist database as a second detection source. Artist IDs are cached in Redis for 24 hours.
Integrate spot-the-ai.com API as third detection source. Artist names are hashed with MD5 and cached in Redis for 24 hours.
Wrap each provider call in error handling to prevent one provider's failure from breaking the entire detection system. Log errors and continue checking remaining providers.
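The provider-isolation pattern described above can be sketched as follows. This is an illustrative, synchronous stand-in: the real service uses async providers and `tracing`, and the type and function names here are hypothetical.

```rust
// A sketch of "one failing provider must not break the chain": each provider
// is tried in order, errors are logged, and the first positive hit wins.
struct Prediction {
    is_ai: bool,
    provider: &'static str,
}

type Provider = fn(&str) -> Result<bool, String>;

fn detect(track: &str, providers: &[(&'static str, Provider)]) -> Option<Prediction> {
    for &(name, check) in providers {
        match check(track) {
            // First provider that flags the track wins.
            Ok(true) => return Some(Prediction { is_ai: true, provider: name }),
            Ok(false) => continue,
            // A provider failure is logged and the remaining providers still run.
            Err(err) => eprintln!("provider {name} failed: {err}"),
        }
    }
    None
}

fn main() {
    let failing: Provider = |_| Err("network timeout".to_string());
    let clean: Provider = |_| Ok(false);
    let flags_slop: Provider = |t| Ok(t.contains("slop"));
    let providers = [
        ("spotify_ai_blocker", failing),
        ("soul_over_ai", clean),
        ("spot_the_ai", flags_slop),
    ];
    let hit = detect("ai slop track", &providers).expect("should be flagged");
    println!("flagged by {} (is_ai = {})", hit.provider, hit.is_ai);
}
```

The first provider errors here, yet detection still succeeds via the third one, which is exactly the behavior the commit describes.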
Integrate shlabs.music API for track-level AI detection. Results cached for 1 year. Requires SHLABS_API_KEY env var.
Track which provider detected AI and display attribution link in notification messages. Add Provider enum with links to sources.
Add documentation for the new AI slop detection feature with links to all four detection providers used by the system.
Adds cfg_ai_slop_detection field to user table with three modes: notify (default), ignore, and skip. Integrates the setting into track_check to enable auto-skip functionality when configured.
Adds English translations, /ai_slop_detection command with inline keyboard for configuring detection behavior (notify/ignore/skip), and updates welcome message to mention the new feature.
@coderabbitai help
@coderabbitai full review
✅ Actions performed: Full review triggered.
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/tick/user.rs (1)
65-79: ⚠️ Potential issue | 🟠 Major — AI detection is still gated by profanity setting.
At Line 65, queueing is conditioned only on `cfg_check_profanity`. With the new unified `track_check` queue, this prevents AI-slop detection when profanity checks are disabled.
💡 Proposed fix

```diff
-    if state.user().cfg_check_profanity {
+    let should_queue_track_check = state.user().cfg_check_profanity
+        || state.user().cfg_ai_slop_detection != UserAISlopDetection::Ignore;
+
+    if should_queue_track_check {
         let changed = UserService::sync_current_playing(
             app.redis_conn().await?,
             state.user_id(),
             track.id(),
         )
         .await?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tick/user.rs` around lines 65 - 79, The code currently gates both UserService::sync_current_playing and queue::track_check::queue behind state.user().cfg_check_profanity which prevents the unified track_check queue (AI detection) from running when profanity checks are disabled; change the logic so sync_current_playing and the subsequent queue::track_check::queue call are executed independent of cfg_check_profanity, only returning CheckUserResult::SkipSame when sync_current_playing returns false and no other checks need running; in short, remove the cfg_check_profanity guard around queue::track_check::queue (and ensure sync_current_playing is called when needed) so the track_check queue always gets enqueued for AI detection.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@locales/ai_slop.yml`:
- Around line 8-13: Replace the <tg-emoji ...> wrappers in this static locale
string with the literal emoji characters (e.g., replace <tg-emoji
emoji-id="5429334864010681294">💩</tg-emoji> with 💩, and similarly for 🔗, ➕,
❌) so the static messages use plain emoji instead of <tg-emoji>; apply the same
change to the other static message blocks that still contain <tg-emoji> wrappers
(the other occurrences noted in the diff).
In `@locales/login.yml`:
- Line 36: Update the wording for the bullet currently reading "• Detect and
skip AI-generated music" (and the identical string at the other occurrence) to
indicate skipping is conditional/configurable rather than guaranteed; find the
string in the locales entry and change it to language such as "• Detect
AI-generated music and optionally notify, ignore, or skip tracks" (or similar)
so the copy reflects the configurable detect/notify/ignore/skip behavior.
In `@migrations/20260302172253_add_user_cfg_ai_slop_detection_field.sql`:
- Around line 1-2: The migration adds cfg_ai_slop_detection but lacks a
schema-level constraint; modify the migration to enforce allowed values by
adding a CHECK constraint (or create a small ENUM type) on the "user" table for
the cfg_ai_slop_detection column to allow only 'notify', 'ignore', or 'skip' and
keep the default 'notify' and NOT NULL; reference the cfg_ai_slop_detection
column and add a constraint name like user_cfg_ai_slop_detection_check (or
create type user_cfg_ai_slop_detection_enum and use it) so the database rejects
invalid values.
In `@README.md`:
- Line 33: The README entry lists "SpotAI" but links to spot-the-ai.com; update
the provider label to match the linked branding (e.g., "Spot the AI" or
"Spot-the-AI") in the AI-Generated Music Detection bullet so the display text
and URL align; search for other occurrences of "SpotAI" in README or docs and
replace them with the chosen canonical name to keep naming consistent.
In `@src/app.rs`:
- Line 71: Normalize blank/whitespace SHLABS API keys to None before
constructing AISlopDetectionService: when populating the shlabs_api_key
Option<String> (the struct field named shlabs_api_key) trim the string and treat
an empty result as None, and apply the same normalization right before
calling/constructing AISlopDetectionService so you never pass Some("") or
whitespace; update any places that read the env var or forward shlabs_api_key
(including the code paths that construct AISlopDetectionService) to use the
trimmed-and-empty->None value.
In `@src/queue/mod.rs`:
- Around line 32-37: The local variable profanity_queue is a leftover name that
should be renamed to reflect track-check storage; change the local binding
created by
storage.make_shared_with_config(RedisConfig::default().set_namespace("rustify:track_check"))?
from profanity_queue to a clear name like track_check_storage or
track_check_queue so it matches the field track_check_queue in the returned Self
and avoids confusion when reading the make_shared_with_config call and the Self
{ storage, track_check_queue: ... } construction; update any usages in the
function to the new identifier.
In `@src/queue/track_check.rs`:
- Line 82: The function name check_pofanity is misspelled; rename the function
definition and all call sites from check_pofanity to check_profanity (update the
function signature and its usages such as the call in the track handling code
that currently does let res = check_pofanity(app, &user_state, &data.track) and
any other occurrences like the second reported site) so the API and
implementation use the correct spelling consistently.
- Around line 75-77: The skip counter is being incremented twice: once inside
check_ai_slop and again when res.skipped is true; remove the duplicate increment
by leaving a single canonical increment path. Specifically, either remove the
TrackStatusService::increase_skips call in the outer handler (the block that
checks if res.skipped) or, if you prefer check_ai_slop to be the single source
of truth, add a flag/return value from check_ai_slop to indicate that it already
incremented and skip calling TrackStatusService::increase_skips here; update the
logic around check_ai_slop and the res.skipped handling so
TrackStatusService::increase_skips is invoked only once per skip event.
- Around line 273-337: The code currently only checks for
UserAISlopDetection::Skip and treats every other variant (including
UserAISlopDetection::Ignore) as an alert path; update the branching around
state.user().cfg_ai_slop_detection to explicitly handle
UserAISlopDetection::Ignore by returning early (similar to the Skip branch but
without calling spotify.next_track or marking skipped) so no AI notification is
sent, and keep the existing alert logic for the remaining variants; make this
change where cfg_ai_slop_detection is inspected and ensure the function still
returns an AISlopCheckResult (set is_ai_slop true and skipped false for Ignore)
and that ai_detection_result.provider checks remain in the alert branch.
In `@src/services/ai_slop_detection/mod.rs`:
- Around line 84-89: The match arm that logs recoverable provider failures
currently calls tracing::error; change it to a lower verbosity level (e.g.,
tracing::debug or tracing::trace) and keep the structured fields (err = ?err and
provider = $provider_enum.name()) so provider context is preserved; update the
log message to reflect a recoverable/provider-fallback event (e.g., "Recoverable
error from {provider}, falling back") within the Err(err) arm that currently
references $provider_enum.name().
In `@src/services/ai_slop_detection/shlabs.rs`:
- Line 50: The Redis key constant REDIS_KEY_ARTIST_PREFIX is misnamed relative
to how it's used (keys are built with track.id()); rename the constant (e.g., to
REDIS_KEY_TRACK_PREFIX or REDIS_KEY_TRACK_ARTIST_PREFIX) and update all usages
where keys are constructed from track.id() (references: REDIS_KEY_ARTIST_PREFIX
and calls using track.id()) so the name clearly reflects track-based caching and
avoids future confusion.
In `@src/services/ai_slop_detection/spot_the_ai.rs`:
- Around line 33-53: ensure_populated has a TOCTOU race: wrap the call to
populate in a short-lived Redis lock using SET NX EX to prevent stampedes. In
ensure_populated, before calling self.populate(redis_conn).await, attempt to
acquire a lock key (e.g. REDIS_KEY_POPULATED_LOCK) with SET NX EX and a small
TTL; if you acquire it, run self.populate(...), set REDIS_KEY_POPULATED, then
release the lock; if you fail to acquire it, wait/poll (with a timeout) for
REDIS_KEY_POPULATED to appear and return Ok once it exists (or return an error
on timeout). Ensure the lock is always released after populate (use a
finally/drop pattern) and reference ensure_populated, populate,
REDIS_KEY_POPULATED and the new lock key.
In `@src/services/ai_slop_detection/spotify_ai_blocker.rs`:
- Around line 38-45: The current exists check (exists variable from
redis_conn.exists(REDIS_KEY_POPULATED)) followed by self.populate(redis_conn)
has a TOCTOU race; change this to an atomic acquire-and-populate pattern using
Redis atomic primitives (e.g., SET key NX with a short TTL or a Redis lock key
like REDIS_KEY_POPULATED_LOCK) before calling self.populate(redis_conn), so only
the lock-holder runs populate; other workers should either return or wait/retry
until REDIS_KEY_POPULATED is set. Ensure the lock is released or the TTL covers
failures and that REDIS_KEY_POPULATED is set once populate completes
successfully; handle errors so failed populate does not leave the populated flag
set.
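The acquire-and-populate pattern recommended for both providers can be illustrated with a small in-memory stand-in for Redis. This sketch only models the `SET ... NX` semantics; a real implementation would add an `EX` TTL, release the lock on error, and have losers poll `REDIS_KEY_POPULATED` with a timeout, as the comments describe. All names here are illustrative.

```rust
use std::collections::HashMap;

// HashMap stand-in for Redis; set_nx emulates `SET key value NX`, which
// succeeds only for the first caller.
struct FakeRedis {
    keys: HashMap<String, String>,
}

impl FakeRedis {
    fn set_nx(&mut self, key: &str, value: &str) -> bool {
        if self.keys.contains_key(key) {
            return false;
        }
        self.keys.insert(key.to_owned(), value.to_owned());
        true
    }
    fn exists(&self, key: &str) -> bool {
        self.keys.contains_key(key)
    }
}

const POPULATED: &str = "populated";
const LOCK: &str = "populated:lock";

fn ensure_populated(redis: &mut FakeRedis, populate_runs: &mut u32) {
    if redis.exists(POPULATED) {
        return;
    }
    if redis.set_nx(LOCK, "1") {
        // Only the lock holder runs the expensive populate step, then
        // publishes the flag and releases the lock.
        *populate_runs += 1;
        redis.set_nx(POPULATED, "1");
        redis.keys.remove(LOCK);
    }
    // Callers that lose the race would poll for POPULATED with a timeout
    // instead of populating; omitted here for brevity.
}

fn main() {
    let mut redis = FakeRedis { keys: HashMap::new() };
    let mut runs = 0;
    for _ in 0..5 {
        ensure_populated(&mut redis, &mut runs);
    }
    assert_eq!(runs, 1); // populate ran exactly once despite repeated calls
}
```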
In `@src/spotify/mod.rs`:
- Around line 117-129: The fallback hardcoded Spotify URLs in the artist_urls
mapping (inside the closure using full_track.artists ->
artist.external_urls.get("spotify").cloned().unwrap_or_else(...)) are
misleading; change the fallback to construct a URL from the available ID (e.g.,
if artist.id is Some(id) use "https://open.spotify.com/artist/{id}") and if no
id is present fall back to "https://open.spotify.com/"; apply the same pattern
to the track and album URL helpers (the closures that read
external_urls.get("spotify") for track_url and album_url) so each fallback uses
the corresponding id (track.id, album.id) or the generic root URL when the id is
missing.
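The id-based fallback described above amounts to a three-step preference. A minimal sketch, with the function signature invented for illustration:

```rust
// Prefer the explicit external URL, then a URL built from the entity id,
// then the generic Spotify root when neither is available.
fn artist_url(external_url: Option<&str>, id: Option<&str>) -> String {
    external_url
        .map(|u| u.to_owned())
        .or_else(|| id.map(|id| format!("https://open.spotify.com/artist/{id}")))
        .unwrap_or_else(|| "https://open.spotify.com/".to_owned())
}

fn main() {
    assert_eq!(
        artist_url(Some("https://open.spotify.com/artist/abc"), Some("abc")),
        "https://open.spotify.com/artist/abc"
    );
    assert_eq!(artist_url(None, Some("abc")), "https://open.spotify.com/artist/abc");
    assert_eq!(artist_url(None, None), "https://open.spotify.com/");
}
```

The same shape applies to the track and album helpers, substituting `track.id` and `album.id` in the middle step.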
In `@src/telegram/actions/ai_slop_detection.rs`:
- Around line 32-36: The hardcoded English callback text "Inaccessible Message"
must be localized: replace the literal string in the branch where
q.get_message() returns None (the code that calls
app.bot().answer_callback_query(q.id.clone()).text("Inaccessible Message")) with
the i18n helper used elsewhere in the flow (e.g., the same translator/context
used in this module), calling the appropriate key (add a new key if needed) and
passing the translated string into .text(...) so the callback uses localized
text; update or reuse the existing translation key naming convention to match
other messages in ai_slop_detection.rs.
---
Outside diff comments:
In `@src/tick/user.rs`:
- Around line 65-79: The code currently gates both
UserService::sync_current_playing and queue::track_check::queue behind
state.user().cfg_check_profanity which prevents the unified track_check queue
(AI detection) from running when profanity checks are disabled; change the logic
so sync_current_playing and the subsequent queue::track_check::queue call are
executed independent of cfg_check_profanity, only returning
CheckUserResult::SkipSame when sync_current_playing returns false and no other
checks need running; in short, remove the cfg_check_profanity guard around
queue::track_check::queue (and ensure sync_current_playing is called when
needed) so the track_check queue always gets enqueued for AI detection.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (28)
Cargo.toml, README.md, locales/ai_slop.yml, locales/inline_buttons.yml, locales/login.yml, locales/profanity_check.yml, migrations/20260302172253_add_user_cfg_ai_slop_detection_field.sql, src/app.rs, src/entity/prelude.rs, src/entity/user.rs, src/queue/mod.rs, src/queue/track_check.rs, src/services/ai_slop_detection/mod.rs, src/services/ai_slop_detection/shlabs.rs, src/services/ai_slop_detection/soul_over_ai.rs, src/services/ai_slop_detection/spot_the_ai.rs, src/services/ai_slop_detection/spotify_ai_blocker.rs, src/services/mod.rs, src/services/user.rs, src/spotify/mod.rs, src/telegram/actions/ai_slop_detection.rs, src/telegram/actions/mod.rs, src/telegram/commands.rs, src/telegram/handlers/commands.rs, src/telegram/handlers/inline_buttons.rs, src/telegram/inline_buttons.rs, src/tick/user.rs, src/workers/queues.rs
Adds helper methods for detection modes, fixes Redis key prefix, improves early-exit logic for ignore mode, and updates documentation to clarify configurable behavior.
The constant stores track data, not artist data. Rename to reflect actual usage and improve code clarity.
Introduce unified trait for AI detection providers and differentiate between fully AI-generated and AI-processed tracks. Add localized prediction type labels displayed in alert messages.
Centralize skip counting in the consume function rather than inside check_ai_slop for better separation of concerns.
Replace silent .ok() with tracing::error for better observability when the skip stats increment fails.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/tick/user.rs (2)
64-82: ⚠️ Potential issue | 🟠 Major — `TrackStatus::Ignore` now disables AI detection too.
The `Ignore` state is still presented as “ignore bad words for this song” in `locales/actions.yml`, but this branch now skips the entire track-check flow. Once a user ignores lyrics for a track, AI notify/auto-skip will never run for that track either. This needs separate gating for profanity vs. AI checks, or AI detection should still enqueue when `cfg_ai_slop_detection` is enabled.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tick/user.rs` around lines 64 - 82, The TrackStatus::Ignore arm currently skips the whole check flow; change it so ignoring a track only disables profanity checks but still runs AI slop detection when enabled: in the match arm for TrackStatus::Ignore, do not return early—check state.user().cfg_ai_slop_detection (using cfg_ai_slop_detection.is_ignore()) and if AI detection is enabled call UserService::sync_current_playing(...) and queue::track_check::queue(app, state.user_id(), &track).await (same logic used in TrackStatus::None) while skipping only the profanity-related branch gated by state.user().cfg_check_profanity.
65-79: ⚠️ Potential issue | 🟠 Major — This path still runs profanity checks for AI-only users.
`queue::track_check::queue(...)` now runs whenever AI slop detection is enabled, but `src/queue/track_check.rs:55-102` still calls `check_profanity(...)` unconditionally after `check_ai_slop(...)`. A user with `cfg_check_profanity = false` and AI detection enabled will still get lyrics scanning/notifications and profanity stats updates. Split the worker path, or pass feature flags through the queue task so profanity stays disabled here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tick/user.rs` around lines 65 - 79, Summary: Currently users with cfg_check_profanity = false still get profanity checks because queue::track_check::queue(...) always calls check_profanity after check_ai_slop. Fix: change the call site in src/tick/user.rs (the block that calls queue::track_check::queue(app, state.user_id(), &track)) to pass the user's feature flags (cfg_check_profanity and cfg_ai_slop_detection) into the queued task, or call a separate queuing function (e.g., queue::track_check::queue_with_flags) that encodes those flags; then update queue::track_check::queue (and/or implement queue_with_flags) so it checks the passed cfg_check_profanity before invoking check_profanity and only runs check_ai_slop when cfg_ai_slop_detection is enabled (i.e., respect cfg_ai_slop_detection.is_ignore()). Ensure the queued payload includes state.user_id() and the flags so the worker can decide and avoid running profanity logic for AI-only users.
src/telegram/commands.rs (1)
176-199: 🧹 Nitpick | 🔵 Trivial — Make this test assert the new command mapping.
`user_command` is always `Help`, the `AISlopDetection` arm on Line 198 never executes, and the match result is discarded. Add an explicit assertion for the new display string or table-drive the mapping so this change is actually covered.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/telegram/commands.rs` around lines 176 - 199, The test check_user_commands currently constructs user_command = UserCommand::Help, performs a match that returns a UserCommandDisplay variant but discards the result, so the new AISlopDetection mapping is never exercised; update the test to assert the mapping explicitly by capturing the match result (e.g., let display = match user_command { ... }) and then assert_eq!(display, UserCommandDisplay::Help), and either add an additional case where user_command = UserCommand::AISlopDetection and assert_eq!(display, UserCommandDisplay::AISlopDetection) or refactor the test into a table-driven loop over pairs of (UserCommand, expected UserCommandDisplay) to cover all mappings.
♻️ Duplicate comments (6)
locales/login.yml (1)
36-36: ⚠️ Potential issue | 🟡 Minor — Don’t promise auto-skip to every user in the onboarding copy.
The PR behavior is tiered, but both strings still read like auto-skip is universally available. That will mislead non-premium users.
✏️ Suggested copy update

```diff
- • Detect AI-generated music (notify/ignore/auto-skip)
+ • Detect AI-generated music (notify/ignore; auto-skip for Premium)
- • Обнаружение ИИ-музыки (уведомление/игнорирование/авто-пропуск)
+ • Обнаружение ИИ-музыки (уведомление/игнорирование; авто-пропуск для Premium)
```

Also applies to: 52-52
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@locales/login.yml` at line 36, The onboarding string "Detect AI-generated music (notify/ignore/auto-skip)" promises universal auto-skip; update that copy (and the duplicate at the other occurrence) to indicate auto-skip is not universally available — e.g. change the text to something like "Detect AI-generated music (notify/ignore; auto-skip on supported/plans)" or "Detect AI-generated music (notify/ignore; auto-skip available on premium)" so it clearly tiers the feature; apply this change to both occurrences of that exact phrase in locales/login.yml.
migrations/20260302172253_add_user_cfg_ai_slop_detection_field.sql (1)
1-2: ⚠️ Potential issue | 🟠 Major — Enforce valid `cfg_ai_slop_detection` values at the schema level.
This column is backed by an enum in application code, but the migration still allows any arbitrary `text` value. Add a `CHECK` constraint or a database enum so bad data cannot be inserted outside Rust.
🧱 Suggested migration adjustment

```diff
 alter table "user"
-    add cfg_ai_slop_detection text default 'notify' not null;
+    add cfg_ai_slop_detection text default 'notify' not null,
+    add constraint user_cfg_ai_slop_detection_chk
+        check (cfg_ai_slop_detection in ('notify', 'ignore', 'skip'));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@migrations/20260302172253_add_user_cfg_ai_slop_detection_field.sql` around lines 1 - 2, The migration adds cfg_ai_slop_detection as text but doesn't enforce allowed enum values; update the migration to prevent invalid values by either creating a DB enum type (e.g., CREATE TYPE cfg_ai_slop_detection_enum AS ENUM ('notify','ignore',...) and ALTER TABLE "user" ADD COLUMN cfg_ai_slop_detection cfg_ai_slop_detection_enum NOT NULL DEFAULT 'notify') or by adding a CHECK constraint on "user"(cfg_ai_slop_detection) (e.g., CHECK (cfg_ai_slop_detection IN ('notify','...'))); modify the ALTER TABLE statement that adds cfg_ai_slop_detection to use one of these approaches and include the allowed variants from the application enum.
src/queue/track_check.rs (3)
321-327: ⚠️ Potential issue | 🟡 Minor — Encode the invariant instead of emitting "unreachable".
This branch is only reached after `is_track_ai()` returned true. Returning a literal `"unreachable"` would hide a broken invariant and leak a bogus alert to users. Match only the AI variants or use `unreachable!()` for the impossible arm.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/queue/track_check.rs` around lines 321 - 327, The match on ai_detection_result.prediction currently returns a bogus "unreachable" string for AISlopDetectionPrediction::HumanMade; instead encode the invariant by either (a) matching only the AI variants (AISlopDetectionPrediction::PureAI and ::ProcessedAI) and omitting the HumanMade arm, or (b) replace that arm with unreachable!() (or debug_assert!(is_track_ai()) earlier) so a broken invariant panics rather than emitting a bogus alert; update the match around ai_detection_result.prediction (the prediction variable) accordingly.
312-318: ⚠️ Potential issue | 🟠 Major — Don't hardcode the AI alert button to the first artist.
This detection can come from any matched artist or from a track-level provider, so `track.first_artist_url()` can send users to an unrelated page. Carry the matched artist through the detection result, or omit the artist-page button when the provider cannot identify a specific artist.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/queue/track_check.rs` around lines 312 - 318, The AI-alert keyboard is using track.first_artist_url() which may point to an unrelated artist; update the keyboard construction (where InlineButtons::ArtistPage is added) to use the actual matched artist from the detection result instead of track.first_artist_url(), or omit the ArtistPage button when the detection/provider does not identify a specific artist—i.e., propagate the matched artist ID/URL through the detection result and use that for InlineButtons::ArtistPage(track_matched_artist_url.parse()?) or guard the button creation so it’s only added when a matched_artist is present.
277-286: ⚠️ Potential issue | 🔴 Critical — Make the premium auto-skip path idempotent before the Spotify call.
`next_track(None)` happens before any durable/idempotent marker is written. If this task is retried after Spotify advances but before `increase_skips()` commits, the same queue item can skip an extra track and overcount skips. Guard this path with a `(user_id, track_id)` dedupe key before calling Spotify, or make the skip/update path atomic.

```bash
#!/bin/bash
# Verify that the queue worker retries this task and that skip accounting is still a plain increment.
rg -n -C2 '\.retry\(RetryPolicy::retries\([0-9]+\)\)|next_track\(|fn increase_skips|increase_skips\(' \
  src/workers/queues.rs \
  src/queue/track_check.rs \
  src/services/track_status.rs
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/queue/track_check.rs` around lines 277 - 286, The premium auto-skip path calls state.spotify().next_track(None) before persisting any durable dedupe marker, so retries can double-skip and overcount; before calling Spotify in the branch guarded by state.user().cfg_ai_slop_detection.is_skip() and state.is_spotify_premium(), create or check an idempotency/dedup key for (state.user_id(), track.id()) (e.g., via a new TrackStatusService::try_mark_skip_in_progress or upsert marker using app.db()) and only call state.spotify().next_track(None) if the marker was created (no-op if already present), then perform TrackStatusService::increase_skips atomically or clear the in-progress marker after commit to ensure the skip and accounting are durable and idempotent.
src/services/ai_slop_detection/spotify_ai_blocker.rs (1)
77-80: ⚠️ Potential issue | 🟡 Minor — Use a monotonic clock for the population wait timeout.
This loop is measuring elapsed time, so `Utc::now()` can jump with wall-clock adjustments and make the 20-second wait shorter or longer than intended. `tokio::time::Instant` keeps the timeout stable.

```bash
#!/bin/bash
# Verify both AI-artist population wait loops are still based on wall-clock time.
rg -n -C2 'Utc::now\(\)|POPULATE_TIMEOUT' \
  src/services/ai_slop_detection/spotify_ai_blocker.rs \
  src/services/ai_slop_detection/soul_over_ai.rs
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/ai_slop_detection/spotify_ai_blocker.rs` around lines 77 - 80, Replace the wall-clock based deadline using Utc::now() with a monotonic deadline using tokio::time::Instant to avoid clock jumps: convert POPULATE_TIMEOUT to a std::time::Duration (via to_std()), set deadline = tokio::time::Instant::now() + that duration, and change the loop condition to compare tokio::time::Instant::now() < deadline; keep using RETRY_DELAY (converted to std::time::Duration) for the sleep. Update all occurrences of Utc::now() in this population loop so the timeout logic uses tokio::time::Instant instead.
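The monotonic-deadline idea behind this fix can be shown synchronously. The review prescribes `tokio::time::Instant` in the async code; this sketch uses `std::time::Instant`, which has the same monotonic guarantee, and the helper name is invented.

```rust
use std::time::{Duration, Instant};

// A wait loop whose deadline is anchored to a monotonic clock, so wall-clock
// adjustments (NTP steps, DST) cannot shorten or lengthen the timeout.
fn wait_until<F: FnMut() -> bool>(mut ready: F, timeout: Duration, retry: Duration) -> bool {
    let deadline = Instant::now() + timeout;
    while Instant::now() < deadline {
        if ready() {
            return true;
        }
        std::thread::sleep(retry);
    }
    ready() // one final check at the deadline
}

fn main() {
    let mut calls = 0;
    let ok = wait_until(
        || { calls += 1; calls >= 3 }, // the "populated" flag appears on the 3rd poll
        Duration::from_millis(200),
        Duration::from_millis(10),
    );
    assert!(ok);
}
```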
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/services/ai_slop_detection/soul_over_ai.rs`:
- Around line 156-164: When calling ensure_populated from is_track_ai, catch
refresh errors and, on error, fall back to checking existing cached artist keys
with any_artist_ai before failing: attempt Self::ensure_populated(self,
redis_conn).await and if it returns Ok proceed as before; if it returns Err(e)
then call self.any_artist_ai(redis_conn, &track.artist_ids()).await — if that
returns Ok(true) return Ok(true) (keep fail-open), otherwise return the original
error e; reference is_track_ai, ensure_populated, any_artist_ai and the
REDIS_KEY_POPULATED timing overlap when making this change.
In `@src/spotify/mod.rs`:
- Around line 147-154: The code currently parses full_track.album.release_date
into a NaiveDate and defaults missing month/day to 1, losing precision; update
the album_release_date logic to first inspect
full_track.album.release_date_precision and only construct NaiveDate via
NaiveDate::from_ymd_opt when precision == "day" (otherwise set
album_release_date to None), or alternatively add and populate a companion field
album_release_date_precision so callers (e.g., is_before_ai_era()) can handle
year- or month-only dates correctly; modify the block that uses
full_track.album.release_date to branch on
full_track.album.release_date_precision and avoid fabricating a full date for
non-"day" precisions.
In `@src/telegram/actions/ai_slop_detection.rs`:
- Around line 58-70: The keyboard is currently generated by
UserAISlopDetection::iter(), which couples UI order to the enum declaration;
replace the iterator with an explicit ordering (e.g.,
[UserAISlopDetection::Notify, UserAISlopDetection::Ignore,
UserAISlopDetection::Skip]) inside get_keyboard so the buttons are always
created in the desired Notify, Ignore, Skip order, still using
InlineButtons::AISlopDetection(status, current_setting ==
status).into_inline_keyboard_button(locale) for each status and collecting into
Vec<Vec<InlineKeyboardButton>>.
---
Outside diff comments:
In `@src/telegram/commands.rs`:
- Around line 176-199: The test check_user_commands currently constructs
user_command = UserCommand::Help, performs a match that returns a
UserCommandDisplay variant but discards the result, so the new AISlopDetection
mapping is never exercised; update the test to assert the mapping explicitly by
capturing the match result (e.g., let display = match user_command { ... }) and
then assert_eq!(display, UserCommandDisplay::Help), and either add an additional
case where user_command = UserCommand::AISlopDetection and assert_eq!(display,
UserCommandDisplay::AISlopDetection) or refactor the test into a table-driven
loop over pairs of (UserCommand, expected UserCommandDisplay) to cover all
mappings.
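The table-driven variant suggested above can be sketched as follows; the two enums are simplified stand-ins for the real `UserCommand`/`UserCommandDisplay` types.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum UserCommand { Help, AISlopDetection }

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum UserCommandDisplay { Help, AISlopDetection }

/// Stand-in for the match under test.
fn to_display(cmd: UserCommand) -> UserCommandDisplay {
    match cmd {
        UserCommand::Help => UserCommandDisplay::Help,
        UserCommand::AISlopDetection => UserCommandDisplay::AISlopDetection,
    }
}

fn main() {
    // Table-driven check: every (command, expected display) pair is asserted,
    // so a new unmapped variant fails here instead of being silently dropped.
    let cases = [
        (UserCommand::Help, UserCommandDisplay::Help),
        (UserCommand::AISlopDetection, UserCommandDisplay::AISlopDetection),
    ];
    for (cmd, expected) in cases {
        assert_eq!(to_display(cmd), expected);
    }
    println!("all mappings ok");
}
```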
In `@src/tick/user.rs`:
- Around line 64-82: The TrackStatus::Ignore arm currently skips the whole check
flow; change it so ignoring a track only disables profanity checks but still
runs AI slop detection when enabled: in the match arm for TrackStatus::Ignore,
do not return early—check state.user().cfg_ai_slop_detection (using
cfg_ai_slop_detection.is_ignore()) and if AI detection is enabled call
UserService::sync_current_playing(...) and queue::track_check::queue(app,
state.user_id(), &track).await (same logic used in TrackStatus::None) while
skipping only the profanity-related branch gated by
state.user().cfg_check_profanity.
- Around line 65-79: Summary: Currently users with cfg_check_profanity = false
still get profanity checks because queue::track_check::queue(...) always calls
check_profanity after check_ai_slop. Fix: change the call/site in
src/tick/user.rs (the block that calls queue::track_check::queue(app,
state.user_id(), &track)) to pass the user's feature flags (cfg_check_profanity
and cfg_ai_slop_detection) into the queued task, or call a separate queuing
function (e.g., queue::track_check::queue_with_flags) that encodes those flags;
then update queue::track_check::queue (and/or implement queue_with_flags) so it
checks the passed cfg_check_profanity before invoking check_profanity and only
runs check_ai_slop when cfg_ai_slop_detection is enabled (i.e., respect
cfg_ai_slop_detection.is_ignore()). Ensure the queued payload includes
state.user_id() and the flags so the worker can decide and avoid running
profanity logic for AI-only users.
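A payload that encodes both flags, as the comment suggests, might look like the sketch below. The struct and function names are hypothetical; the point is that the worker decides from the queued data rather than re-running both checks unconditionally.

```rust
/// Hypothetical queued payload carrying the user's feature flags so the
/// worker can decide which checks to run.
struct TrackCheckTask {
    user_id: i64,
    check_profanity: bool,
    ai_slop_enabled: bool,
}

/// Worker-side decision: returns (run_ai_check, run_profanity_check).
fn checks_to_run(task: &TrackCheckTask) -> (bool, bool) {
    (task.ai_slop_enabled, task.check_profanity)
}

fn main() {
    let ai_only = TrackCheckTask {
        user_id: 1,
        check_profanity: false,
        ai_slop_enabled: true,
    };
    // AI-only users get the AI check but never the profanity path.
    assert_eq!(checks_to_run(&ai_only), (true, false));
    println!("user {} -> {:?}", ai_only.user_id, checks_to_run(&ai_only));
}
```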
---
Duplicate comments:
In `@locales/login.yml`:
- Line 36: The onboarding string "Detect AI-generated music
(notify/ignore/auto-skip)" promises universal auto-skip; update that copy (and
the duplicate at the other occurrence) to indicate auto-skip is not universally
available — e.g. change the text to something like "Detect AI-generated music
(notify/ignore; auto-skip on supported/plans)" or "Detect AI-generated music
(notify/ignore; auto-skip available on premium)" so it clearly tiers the
feature; apply this change to both occurrences of that exact phrase in
locales/login.yml.
In `@migrations/20260302172253_add_user_cfg_ai_slop_detection_field.sql`:
- Around line 1-2: The migration adds cfg_ai_slop_detection as text but doesn't
enforce allowed enum values; update the migration to prevent invalid values by
either creating a DB enum type (e.g., CREATE TYPE cfg_ai_slop_detection_enum AS
ENUM ('notify','ignore',...) and ALTER TABLE "user" ADD COLUMN
cfg_ai_slop_detection cfg_ai_slop_detection_enum NOT NULL DEFAULT 'notify') or
by adding a CHECK constraint on "user"(cfg_ai_slop_detection) (e.g., CHECK
(cfg_ai_slop_detection IN ('notify','...'))); modify the ALTER TABLE statement
that adds cfg_ai_slop_detection to use one of these approaches and include the
allowed variants from the application enum.
In `@src/queue/track_check.rs`:
- Around line 321-327: The match on ai_detection_result.prediction currently
returns a bogus "unreachable" string for AISlopDetectionPrediction::HumanMade;
instead encode the invariant by either (a) matching only the AI variants
(AISlopDetectionPrediction::PureAI and ::ProcessedAI) and omitting the HumanMade
arm, or (b) replace that arm with unreachable!() (or
debug_assert!(is_track_ai()) earlier) so a broken invariant panics rather than
emitting a bogus alert; update the match around ai_detection_result.prediction
(the prediction variable) accordingly.
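Option (b) can be sketched as below; the `is_track_ai()` guard and the locale strings are simplified stand-ins for the real code.

```rust
#[derive(Debug)]
enum AISlopDetectionPrediction { HumanMade, PureAI, ProcessedAI }

impl AISlopDetectionPrediction {
    fn is_track_ai(&self) -> bool {
        !matches!(self, AISlopDetectionPrediction::HumanMade)
    }
}

fn alert_text(prediction: &AISlopDetectionPrediction) -> String {
    // The caller guarantees is_track_ai() via an early return, so encode
    // the invariant with unreachable!() instead of a bogus placeholder.
    assert!(prediction.is_track_ai());
    match prediction {
        AISlopDetectionPrediction::PureAI => "pure AI".to_string(),
        AISlopDetectionPrediction::ProcessedAI => "AI-processed".to_string(),
        AISlopDetectionPrediction::HumanMade => {
            unreachable!("filtered by the early is_track_ai() return")
        }
    }
}

fn main() {
    assert_eq!(alert_text(&AISlopDetectionPrediction::PureAI), "pure AI");
    assert_eq!(alert_text(&AISlopDetectionPrediction::ProcessedAI), "AI-processed");
    println!("ok");
}
```

If the invariant is ever broken, this now panics loudly instead of sending a "unreachable" string to the user.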
- Around line 312-318: The AI-alert keyboard is using track.first_artist_url()
which may point to an unrelated artist; update the keyboard construction (where
InlineButtons::ArtistPage is added) to use the actual matched artist from the
detection result instead of track.first_artist_url(), or omit the ArtistPage
button when the detection/provider does not identify a specific artist—i.e.,
propagate the matched artist ID/URL through the detection result and use that
for InlineButtons::ArtistPage(track_matched_artist_url.parse()?) or guard the
button creation so it’s only added when a matched_artist is present.
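The guard can be sketched as follows, with strings standing in for the real `InlineButtons` values; the key point is that the artist button is only built from an artist the detection result actually matched.

```rust
/// Build the alert keyboard; only add an artist-page button when the
/// detection result identified a specific artist (instead of defaulting
/// to track.first_artist_url(), which may be unrelated).
fn keyboard_buttons(matched_artist_url: Option<&str>) -> Vec<String> {
    let mut buttons = vec!["Dislike".to_string()];
    if let Some(url) = matched_artist_url {
        buttons.push(format!("ArtistPage({url})"));
    }
    buttons
}

fn main() {
    assert_eq!(keyboard_buttons(None).len(), 1);
    assert_eq!(keyboard_buttons(Some("spotify:artist:x")).len(), 2);
    println!("ok");
}
```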
- Around line 277-286: The premium auto-skip path calls
state.spotify().next_track(None) before persisting any durable dedupe marker, so
retries can double-skip and overcount; before calling Spotify in the branch
guarded by state.user().cfg_ai_slop_detection.is_skip() and
state.is_spotify_premium(), create or check an idempotency/dedup key for
(state.user_id(), track.id()) (e.g., via a new
TrackStatusService::try_mark_skip_in_progress or upsert marker using app.db())
and only call state.spotify().next_track(None) if the marker was created (no-op
if already present), then perform TrackStatusService::increase_skips atomically
or clear the in-progress marker after commit to ensure the skip and accounting
are durable and idempotent.
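The create-or-check marker semantics can be sketched with a `HashSet` standing in for a Redis `SET NX` (the real implementation would also attach a short TTL so stale markers expire):

```rust
use std::collections::HashSet;

/// SETNX-style dedupe marker. Returns true only the first time a
/// (user, track) skip is attempted; retries see the existing marker
/// and must not call next_track again.
fn try_mark_skip_in_progress(
    markers: &mut HashSet<String>,
    user_id: i64,
    track_id: &str,
) -> bool {
    markers.insert(format!("ai_slop_skip:{user_id}:{track_id}"))
}

fn main() {
    let mut markers = HashSet::new();
    // First attempt: marker created, safe to call next_track.
    assert!(try_mark_skip_in_progress(&mut markers, 42, "track-a"));
    // Retried job: marker exists, skip becomes a no-op.
    assert!(!try_mark_skip_in_progress(&mut markers, 42, "track-a"));
    println!("ok");
}
```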
In `@src/services/ai_slop_detection/spotify_ai_blocker.rs`:
- Around line 77-80: Replace the wall-clock based deadline using Utc::now() with
a monotonic deadline using tokio::time::Instant to avoid clock jumps: convert
POPULATE_TIMEOUT to a std::time::Duration (via to_std()), set deadline =
tokio::time::Instant::now() + that duration, and change the loop condition to
compare tokio::time::Instant::now() < deadline; keep using RETRY_DELAY
(converted to std::time::Duration) for the sleep. Update all occurrences of
Utc::now() in this population loop so the timeout logic uses
tokio::time::Instant instead.
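The monotonic-deadline loop can be sketched synchronously with `std::time::Instant`, which carries the same monotonicity guarantee; the async worker would use `tokio::time::Instant` and `tokio::time::sleep` in the same shape. The function name and durations here are placeholders.

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

/// Poll until a monotonic deadline passes; returns the attempt count.
/// Instant is monotonic, so wall-clock jumps (NTP steps, DST) can
/// neither extend nor cut the timeout, unlike Utc::now().
fn wait_for_populated(timeout: Duration, retry_delay: Duration) -> u32 {
    let deadline = Instant::now() + timeout;
    let mut attempts = 0;
    while Instant::now() < deadline {
        // In the real loop this is where the Redis EXISTS check happens.
        attempts += 1;
        sleep(retry_delay);
    }
    attempts
}

fn main() {
    let attempts = wait_for_populated(Duration::from_millis(50), Duration::from_millis(10));
    assert!(attempts >= 1);
    println!("polled {attempts} times before the deadline");
}
```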
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: af073b5a-597e-43ab-879a-f4d4e20df8ae
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (30)
- Cargo.toml
- README.md
- locales/actions.yml
- locales/ai_slop.yml
- locales/inline_buttons.yml
- locales/login.yml
- locales/profanity_check.yml
- migrations/20260302172253_add_user_cfg_ai_slop_detection_field.sql
- src/app.rs
- src/entity/prelude.rs
- src/entity/user.rs
- src/queue/mod.rs
- src/queue/track_check.rs
- src/services/ai_slop_detection/mod.rs
- src/services/ai_slop_detection/shlabs.rs
- src/services/ai_slop_detection/soul_over_ai.rs
- src/services/ai_slop_detection/spotify_ai_blocker.rs
- src/services/mod.rs
- src/services/user.rs
- src/spotify/mod.rs
- src/telegram/actions/ai_slop_detection.rs
- src/telegram/actions/dislike.rs
- src/telegram/actions/ignore.rs
- src/telegram/actions/mod.rs
- src/telegram/commands.rs
- src/telegram/handlers/commands.rs
- src/telegram/handlers/inline_buttons.rs
- src/telegram/inline_buttons.rs
- src/tick/user.rs
- src/workers/queues.rs
Pause provider until midnight UTC (max 1 hour) when receiving 429 or when daily quota is exhausted. Return default prediction when rate limited instead of logging errors.
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/services/ai_slop_detection/shlabs.rs (1)
30-39: 🧹 Nitpick | 🔵 Trivial: Rename `Result` to avoid shadowing `std::result::Result`.

Naming a struct `Result` shadows the standard library type and causes confusion. Within this module, any reference to `Result` resolves to this local struct rather than `std::result::Result`, which increases cognitive load and risks future bugs.

♻️ Suggested rename

```diff
-#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
-pub struct Result {
+#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
+pub struct DetectionResult {
     pub duration: f64,
     pub probability_ai_generated: f64,
     pub prediction: Prediction,
     pub confidence_score: Option<f64>,
     pub spectral_probabilities: Probabilities,
     pub temporal_probabilities: Probabilities,
     pub most_likely_ai_type: Option<String>,
 }
```

Also update line 11:

```diff
 pub struct Root {
-    pub result: Result,
+    pub result: DetectionResult,
     pub response_time: i64,
     pub usage: Usage,
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/ai_slop_detection/shlabs.rs` around lines 30 - 39, The struct named Result shadows std::result::Result; rename the struct (for example to DetectionResult or SlopDetectionResult) and update all references/usages (type annotations, function return types, vectors, imports, module exports, tests) to the new name; keep the same derives (Debug, Clone, PartialEq, Serialize, Deserialize) and field names (duration, probability_ai_generated, prediction, confidence_score, spectral_probabilities, temporal_probabilities, most_likely_ai_type) so serialization remains unchanged, and update any pattern matches or doc comments that referenced Result to the new identifier (also update any re-exports or use statements that imported this module type).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/queue/track_check.rs`:
- Around line 284-310: The non-premium branch currently sends the
"error.cannot-skip" message and returns early, preventing the later
AI-detection/notification logic from running; instead, for
state.user().cfg_ai_slop_detection.is_skip() when
state.is_spotify_premium().await? is false, send the t!("error.cannot-skip",
...) via app.bot().send_message(...) but do NOT return; allow execution to
continue to the later notification flow (lines handling AISlopCheckResult) so
the code still notifies users (set skipped = false or remove the early
Ok(AISlopCheckResult { ... }) return in that non-premium branch). Keep the
premium path (spotify().next_track(...)) returning after a skip, but remove the
early return for the non-premium path so the post-detection notification logic
runs.
In `@src/services/ai_slop_detection/shlabs.rs`:
- Around line 128-132: The calls to Self::rate_limit(redis_conn).await.ok() are
swallowing Redis errors; change them to capture and log any Err returned by
rate_limit instead of ignoring it. Locate the two places where Self::rate_limit
is awaited (the branch handling reqwest::StatusCode::TOO_MANY_REQUESTS and the
later similar branch) and replace the .ok() usage with code that matches on the
Result or uses .await.map_err(|e| /* log e with error level */) to emit an error
log (using the crate's logger) when rate_limit fails, while still returning
Ok(None) as before.
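The suggested change from `.ok()` to an explicit log can be sketched as below; `rate_limit` is a stand-in for the real Redis call and `eprintln!` stands in for the crate's logger.

```rust
/// Stand-in for the Redis call that persists the rate-limit marker.
fn rate_limit() -> Result<(), String> {
    Err("connection refused".to_string())
}

/// Stand-in for the 429 branch: instead of rate_limit().ok(), surface
/// the failure in the logs while still returning None as before.
fn handle_429() -> Option<()> {
    if let Err(e) = rate_limit() {
        eprintln!("failed to persist rate-limit marker: {e}");
    }
    None
}

fn main() {
    assert_eq!(handle_429(), None);
    println!("ok");
}
```

The control flow is unchanged; the only difference is that a Redis outage now leaves a trace instead of being silently discarded.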
---
Duplicate comments:
In `@src/services/ai_slop_detection/shlabs.rs`:
- Around line 30-39: The struct named Result shadows std::result::Result; rename
the struct (for example to DetectionResult or SlopDetectionResult) and update
all references/usages (type annotations, function return types, vectors,
imports, module exports, tests) to the new name; keep the same derives (Debug,
Clone, PartialEq, Serialize, Deserialize) and field names (duration,
probability_ai_generated, prediction, confidence_score, spectral_probabilities,
temporal_probabilities, most_likely_ai_type) so serialization remains unchanged,
and update any pattern matches or doc comments that referenced Result to the new
identifier (also update any re-exports or use statements that imported this
module type).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 4ffcc19f-f24e-41af-8403-53d8c98550f8
📒 Files selected for processing (4)
- locales/ai_slop.yml
- src/queue/track_check.rs
- src/services/ai_slop_detection/shlabs.rs
- src/telegram/actions/admin_users/details.rs
Switch from Utc::now() to std::time::Instant for deadline comparison which is more appropriate for measuring elapsed time.
Extract pause duration to variable and log it instead of seconds_until_midnight for accurate debugging info.
Switch from Utc::now() to std::time::Instant for deadline comparison which is more appropriate for measuring elapsed time.
Clean up imports after switching to Instant for timeout calculations. Move seconds_hour to compile-time const in rate_limit function.
Convert runtime variable to compile-time constant for cache TTL.
Convert runtime variable to compile-time constant for cache TTL.
Extract ArtistPage to new InlineButtonsActions type for URL-based buttons. This separates buttons that open URLs from those with callback data.
@coderabbitai resume

✅ Actions performed: Reviews resumed.
Remove explicit "cannot skip" error message. Non-premium users in skip mode now see the standard AI slop notification instead.
@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/queue/track_check.rs (1)
87-94: ⚠️ Potential issue | 🟠 Major: Respect `cfg_check_profanity` before running lyrics checks.

`track_check::queue()` is also used when only AI detection is enabled, so this unconditional block still fetches lyrics, updates profanity stats, and can send profanity alerts for users who explicitly turned profanity checking off.

♻️ Suggested guard

```diff
-    let res = check_profanity(app, &user_state, &data.track)
-        .await
-        .context("Check lyrics failed")?;
-
-    UserService::increase_stats_query(user_state.user_id())
-        .checked_lyrics(res.profane, res.provider)
-        .exec(app.db())
-        .await?;
+    if user_state.user().cfg_check_profanity {
+        let res = check_profanity(app, &user_state, &data.track)
+            .await
+            .context("Check lyrics failed")?;
+
+        UserService::increase_stats_query(user_state.user_id())
+            .checked_lyrics(res.profane, res.provider)
+            .exec(app.db())
+            .await?;
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/queue/track_check.rs` around lines 87 - 94, The lyrics profanity check and stats update are running unconditionally in track_check::queue(); guard this by checking cfg_check_profanity before calling check_profanity and before updating stats or sending alerts. Wrap the existing block that calls check_profanity(...).await and the subsequent UserService::increase_stats_query(...).checked_lyrics(...).exec(...).await in an if cfg_check_profanity(...) { ... } (or the project's equivalent config predicate) so that when cfg_check_profanity is false you skip fetching lyrics, updating profanity stats, and sending alerts.
♻️ Duplicate comments (2)
src/queue/track_check.rs (2)
285-292: ⚠️ Potential issue | 🔴 Critical: Don't issue `next_track(None)` from a queued job without freshness/idempotency guards.

This worker is async and retried, but `next_track(None)` acts on whatever is currently playing when the job reaches this branch, not on `data.track`. If playback already advanced, or Spotify processed the request before an error caused a retry, this can skip the wrong song or skip twice.

Run the following script to confirm the retry policy and look for an existing freshness/idempotency guard you can reuse:

```bash
#!/bin/bash
rg -n -C2 'RetryPolicy::retries|next_track\s*\(' src/workers/queues.rs src/queue/track_check.rs
rg -n -C2 'sync_current_playing|current.*playing|currently_playing|playback' src --type rs
rg -n -C2 'mark_skip|skip.*once|idempot|dedup' src --type rs
```
300-311: ⚠️ Potential issue | 🟠 Major: Non-premium `Skip` currently suppresses the real AI alert.

This early return means non-premium users only see `error.cannot-skip`; they never reach the normal AI notification with provider/prediction details and action buttons. Let this branch fall through to the regular alert, or fold the premium warning into that alert.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/queue/track_check.rs` around lines 300 - 311, The early return after sending the "error.cannot-skip" message prevents the normal AI alert from being sent; modify the branch in track_check.rs (around the code that builds `text` using t!(...), calls `app.bot().send_message(state.chat_id()?, text).await?`, and returns `Ok(AISlopCheckResult { is_ai_slop: true, skipped: false })`) so that it does not return early—either remove the `return` and let execution fall through to the regular AI notification logic that builds/sends the provider/prediction details and buttons, or instead merge the non-premium warning into the regular alert payload (augment the alert message before calling the existing alert-sending code) so users still receive the full AI alert with actions. Ensure `AISlopCheckResult` values remain correct for downstream handling.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/services/ai_slop_detection/shlabs.rs`:
- Around line 104-114: Move the per-track cache read ahead of the global
rate-limit check: construct the track_key using REDIS_KEY_TRACK_PREFIX and call
redis_conn.get(&track_key).await?; if it returns Some(string) and
serde_json::from_str(&string) yields Ok(data) return Ok(Some(data)) immediately;
only after the cache miss should you check
redis_conn.exists(REDIS_KEY_RATE_LIMITED).await? and return Ok(None) on
rate-limit. Apply the same reordering/fix to the other analogous block that
currently checks REDIS_KEY_RATE_LIMITED first (the block referencing
REDIS_KEY_TRACK_PREFIX at the later location).
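The reordering described above can be sketched with a `HashMap` standing in for Redis and a bool standing in for the `REDIS_KEY_RATE_LIMITED` flag; the point is that a cached verdict is still served while rate limited, and the flag only gates fresh provider calls.

```rust
use std::collections::HashMap;

/// Cache-first lookup: the per-track cache read comes before the global
/// rate-limit check, so cached results survive a rate-limited window.
fn lookup(cache: &HashMap<String, String>, rate_limited: bool, track_key: &str) -> Option<String> {
    if let Some(cached) = cache.get(track_key) {
        return Some(cached.clone()); // cache hit: rate limit is irrelevant
    }
    if rate_limited {
        return None; // only a cache miss is blocked by the rate limit
    }
    Some("fresh-result".to_string()) // stand-in for the real provider call
}

fn main() {
    let mut cache = HashMap::new();
    cache.insert("track:1".to_string(), "human".to_string());
    // Cached track is still answered while rate limited.
    assert_eq!(lookup(&cache, true, "track:1").as_deref(), Some("human"));
    // Uncached track is not fetched while rate limited.
    assert_eq!(lookup(&cache, true, "track:2"), None);
    println!("ok");
}
```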
- Around line 1-5: This file relies on crate-wide serde derive imports; add a
local serde import to avoid the hidden dependency by adding use
serde::{Deserialize, Serialize}; at the top of shlabs.rs (or alternatively
change any #[derive(Serialize, Deserialize)] to #[derive(serde::Serialize,
serde::Deserialize)]), ensuring the derives in this file resolve without relying
on main.rs's #[macro_use] extern crate serde.
In `@src/services/ai_slop_detection/soul_over_ai.rs`:
- Around line 143-155: The loop in any_artist_ai issues one Redis EXISTS per
artist causing N RTTs; replace it with a single multi-key EXISTS (or a small
pipeline) against redis_conn so we can determine if any artist key exists in one
round trip. Reuse the same key construction logic currently used by is_artist_ai
(or extract that logic into a helper used by both) to build the list of Redis
keys for all artist_ids, call EXISTS with all keys at once and return Ok(true)
if the returned count > 0, otherwise Ok(false); ensure error handling remains
the same and keep the function signature of any_artist_ai and the is_artist_ai
helper intact.
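The batching can be sketched as follows: build every key up front, issue one multi-key `EXISTS`, and treat a nonzero count as a hit. The key prefix is hypothetical, and `exists_count` stands in for the count Redis returns from `EXISTS k1 k2 …`.

```rust
/// Build all artist keys up front so a single multi-key EXISTS
/// (one round trip) can replace N sequential calls.
fn artist_keys(artist_ids: &[&str]) -> Vec<String> {
    artist_ids
        .iter()
        .map(|id| format!("soul_over_ai:artist:{id}"))
        .collect()
}

/// Redis EXISTS with multiple keys returns how many of them exist;
/// any artist is flagged as AI when that count is nonzero.
fn any_artist_ai(exists_count: usize) -> bool {
    exists_count > 0
}

fn main() {
    let keys = artist_keys(&["a1", "a2"]);
    assert_eq!(keys, ["soul_over_ai:artist:a1", "soul_over_ai:artist:a2"]);
    assert!(any_artist_ai(1));
    assert!(!any_artist_ai(0));
    println!("ok");
}
```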
- Around line 78-92: The code currently holds a checked-out redis_conn through
the wait loop and the leader fetch, which pins pool connections; change the flow
so the function does not keep a &mut deadpool_redis::Connection across the
cold-start wait/fetch. Keep using self.populating and the
POPULATE_TIMEOUT/RETRY_DELAY loop for coordination, but drop or avoid acquiring
redis_conn until right before doing the EXISTS check for REDIS_KEY_POPULATED
(and again only acquire a connection when performing the pipeline operations
referenced later), i.e., replace long-lived redis_conn usage with short-lived
get_conn_from_pool calls immediately before each Redis operation and return/drop
the connection as soon as the operation completes. Ensure calls that previously
used redis_conn (exists and the pipeline at the pipeline usage site) are updated
to acquire their own temporary connection and do not rely on a connection held
through sleeps or HTTP fetches.
---
Outside diff comments:
In `@src/queue/track_check.rs`:
- Around line 87-94: The lyrics profanity check and stats update are running
unconditionally in track_check::queue(); guard this by checking
cfg_check_profanity before calling check_profanity and before updating stats or
sending alerts. Wrap the existing block that calls check_profanity(...).await
and the subsequent
UserService::increase_stats_query(...).checked_lyrics(...).exec(...).await in an
if cfg_check_profanity(...) { ... } (or the project's equivalent config
predicate) so that when cfg_check_profanity is false you skip fetching lyrics,
updating profanity stats, and sending alerts.
---
Duplicate comments:
In `@src/queue/track_check.rs`:
- Around line 300-311: The early return after sending the "error.cannot-skip"
message prevents the normal AI alert from being sent; modify the branch in
track_check.rs (around the code that builds `text` using t!(...), calls
`app.bot().send_message(state.chat_id()?, text).await?`, and returns
`Ok(AISlopCheckResult { is_ai_slop: true, skipped: false })`) so that it does
not return early—either remove the `return` and let execution fall through to
the regular AI notification logic that builds/sends the provider/prediction
details and buttons, or instead merge the non-premium warning into the regular
alert payload (augment the alert message before calling the existing
alert-sending code) so users still receive the full AI alert with actions.
Ensure `AISlopCheckResult` values remain correct for downstream handling.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: b1eaa3d6-420b-469b-a521-c944d47861fd
📒 Files selected for processing (8)
- src/queue/track_check.rs
- src/services/ai_slop_detection/shlabs.rs
- src/services/ai_slop_detection/soul_over_ai.rs
- src/services/ai_slop_detection/spotify_ai_blocker.rs
- src/telegram/handlers/inline_buttons.rs
- src/telegram/inline_buttons.rs
- src/telegram/inline_buttons_actions.rs
- src/telegram/mod.rs
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/queue/track_check.rs (1)
314-320: 🧹 Nitpick | 🔵 Trivial: Consider using `unreachable!()` for the dead `HumanMade` branch.

Line 278 returns early when `!ai_detection_result.prediction.is_track_ai()`, so this match arm can never execute. Using the `unreachable!()` macro instead of a string makes the invariant explicit and will panic with a clear message if the assumption is ever violated.

✏️ Suggested change

```diff
 let prediction = match ai_detection_result.prediction {
-    AISlopDetectionPrediction::HumanMade => "unreachable".into(),
+    AISlopDetectionPrediction::HumanMade => unreachable!("Filtered by early return at line 278"),
     AISlopDetectionPrediction::PureAI => t!("ai-slop.alert-pure-ai", locale = state.locale()),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/queue/track_check.rs` around lines 314 - 320, The match arm for AISlopDetectionPrediction::HumanMade is dead because caller checks !ai_detection_result.prediction.is_track_ai() and returns early; replace the current "unreachable".into() expression with unreachable!("HumanMade variant should be impossible for track AI prediction") (or just unreachable!()) so the invariant is explicit and will panic if violated; update the match in the assignment to prediction (matching ai_detection_result.prediction) to use unreachable!() for AISlopDetectionPrediction::HumanMade.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/queue/track_check.rs`:
- Around line 247-252: The AISlopCheckResult struct contains a commented-out
field `// pub provider: Option<lyrics::Provider>,` which should be removed; edit
the AISlopCheckResult definition to delete that commented line so the struct
only contains `is_ai_slop` and `skipped`, leaving no dead/commented-out fields
(the provider can be restored from VCS later if needed).
- Around line 285-297: The Spotify skip call is non-idempotent: wrap the call to
state.spotify().await.next_track(None) in an idempotency/dedup guard so retries
don't double-skip; before invoking next_track use a distributed
set-if-not-exists key (e.g., "skip:{user_id}:{track_id}") with a short TTL, only
call next_track when the SETNX succeeds, and treat the job as already-skipped if
the key exists (return AISlopCheckResult { is_ai_slop: true, skipped: true }
without calling next_track); ensure you use the same user and current track
identifiers from state.user() and the active track id to build the key and set a
TTL to expire automatically.
---
Duplicate comments:
In `@src/queue/track_check.rs`:
- Around line 314-320: The match arm for AISlopDetectionPrediction::HumanMade is
dead because caller checks !ai_detection_result.prediction.is_track_ai() and
returns early; replace the current "unreachable".into() expression with
unreachable!("HumanMade variant should be impossible for track AI prediction")
(or just unreachable!()) so the invariant is explicit and will panic if
violated; update the match in the assignment to prediction (matching
ai_detection_result.prediction) to use unreachable!() for
AISlopDetectionPrediction::HumanMade.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 95104ded-19e4-435d-b942-019528afe0ae
📒 Files selected for processing (1)
src/queue/track_check.rs
Use Deserialize from prelude instead of fully qualified serde::Deserialize. Remove unused import in workers/server.rs.
Reorder rate limit check after cache lookup to return cached results even when rate limited. Also remove unused Serialize import.
Actionable comments posted: 2
♻️ Duplicate comments (2)
src/queue/track_check.rs (2)
247-252: 🧹 Nitpick | 🔵 Trivial: Remove commented-out code.

The commented field `// pub provider: Option<lyrics::Provider>,` should be removed to keep the codebase clean. It can be retrieved from version control if needed later.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/queue/track_check.rs` around lines 247 - 252, Remove the commented-out field from the AISlopCheckResult struct (the line "// pub provider: Option<lyrics::Provider>,") so the struct only contains active fields (is_ai_slop, skipped); delete that commented line entirely in src/queue/track_check.rs and run a quick build/format to ensure no leftover trailing commas or formatting issues.
285-297: ⚠️ Potential issue | 🟠 Major: Non-idempotent Spotify skip risks double-skipping on job retry.

With `RetryPolicy::retries(2)`, if the job fails after `next_track(None)` succeeds but before completion, the retry will skip an additional track. Consider adding a deduplication guard (e.g., a Redis key like `ai_slop_skip:{user_id}:{track_id}` with short TTL) before calling `next_track()`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/queue/track_check.rs` around lines 285 - 297, The Spotify skip call in track_check (the block that checks state.user().cfg_ai_slop_detection and state.is_spotify_premium().await? and then calls state.spotify().await.next_track(None).await) is non-idempotent and can double-skip on retries; add a deduplication guard before calling next_track: compute a key like ai_slop_skip:{user_id}:{current_track_id} (use state.user().id and the current track id from the queue/track context), perform an atomic SETNX/SET with NX and short TTL in Redis and only call state.spotify().await.next_track(None).await when the set succeeds; if the set fails (key exists) treat it as already-skipped and return AISlopCheckResult { is_ai_slop: true, skipped: true } without calling next_track; on success keep existing behavior and ensure the key is set before returning so retries won’t re-run the skip.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/services/ai_slop_detection/spotify_ai_blocker.rs`:
- Around line 141-154: any_artist_ai currently loops and awaits
Self::is_artist_ai per artist causing sequential Redis EXISTS calls; change it
to issue a single batched Redis request (e.g., pipeline or MGET/EXISTS with
multiple keys) against redis_conn to check all artist keys at once and
short-circuit on any true result, and make the method signature consistent with
is_artist_ai by converting any_artist_ai to a static/associated function (remove
&self) or alternatively make is_artist_ai an instance method so both use the
same receiver; update references to any_artist_ai and is_artist_ai accordingly.
- Around line 110-122: Move the in-function EXPIRY_SECONDS into a module-level
compile-time constant (e.g., CACHE_TTL_SECONDS) so TTL values are consolidated
with other cache constants; update the populate() function to use
CACHE_TTL_SECONDS instead of the local EXPIRY_SECONDS, and ensure
ensure_populated() uses POPULATED_TTL_SECONDS where appropriate so both
functions reference the shared module-level constants (keep the existing
REDIS_KEY_ARTIST_PREFIX and the Pipeline usage unchanged).
---
Duplicate comments:
In `@src/queue/track_check.rs`:
- Around line 247-252: Remove the commented-out field from the AISlopCheckResult
struct (the line "// pub provider: Option<lyrics::Provider>,") so the struct
only contains active fields (is_ai_slop, skipped); delete that commented line
entirely in src/queue/track_check.rs and run a quick build/format to ensure no
leftover trailing commas or formatting issues.
- Around line 285-297: The Spotify skip call in track_check (the block that
checks state.user().cfg_ai_slop_detection and state.is_spotify_premium().await?
and then calls state.spotify().await.next_track(None).await) is non-idempotent
and can double-skip on retries; add a deduplication guard before calling
next_track: compute a key like ai_slop_skip:{user_id}:{current_track_id} (use
state.user().id and the current track id from the queue/track context), perform
an atomic SETNX/SET with NX and short TTL in Redis and only call
state.spotify().await.next_track(None).await when the set succeeds; if the set
fails (key exists) treat it as already-skipped and return AISlopCheckResult {
is_ai_slop: true, skipped: true } without calling next_track; on success keep
existing behavior and ensure the key is set before returning so retries won’t
re-run the skip.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: bfae374c-1aff-4d87-aea3-5d0c89e79275
📒 Files selected for processing (11)
- locales/ai_slop.yml
- src/lyrics/mod.rs
- src/queue/track_check.rs
- src/services/ai_slop_detection/shlabs.rs
- src/services/ai_slop_detection/soul_over_ai.rs
- src/services/ai_slop_detection/spotify_ai_blocker.rs
- src/telegram/handlers/inline_buttons.rs
- src/telegram/inline_buttons.rs
- src/telegram/inline_buttons_actions.rs
- src/telegram/mod.rs
- src/workers/server.rs
💤 Files with no reviewable changes (2)
- src/lyrics/mod.rs
- src/workers/server.rs
Add comment explaining 10-minute overlap before cache entries expire.
Implements AI artist detection using the Spotify AI Blocker dataset with Redis caching. Automatically skips AI-generated tracks for premium users and notifies non-premium users. Refactored the profanity check queue into a generic track check queue that handles both profanity and AI detection.