Conversation
Walkthrough

Refactors the app to use a shared AppState (Tera + RwLock AppData with Networks and a Polars DataFrame) and moves route logic into a new handlers module.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Application (main)
    participant Sched as Cron Scheduler
    participant Disk as Filesystem
    participant State as AppState (RwLock<AppData>)
    participant Handler as HTTP Handler
    App->>Sched: start_scheduler(cron_expr)
    Note over Sched: scheduler waits for trigger
    Sched->>Disk: read persisted networks (refresh_data)
    Disk-->>Sched: network payload
    Sched->>State: acquire write lock
    activate State
    Sched->>State: replace AppData (networks + DataFrame)
    Sched->>State: release write lock
    deactivate State
    Note over Sched: loop
    alt Incoming request
        Handler->>State: acquire read lock
        activate State
        State-->>Handler: clone networks / df view
        deactivate State
        Handler->>Handler: render template or JSON (may use Polars)
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@Cargo.toml`:
- Line 10: Update the tokio-cron-scheduler dependency in Cargo.toml from "0.14"
to "0.15.1" and then run cargo update / cargo build to ensure compatibility;
verify and adjust any API uses in src/main.rs that reference the scheduler
(e.g., scheduler creation, Job, Scheduler::new, add/submit methods) if
compilation errors appear due to minor API changes introduced in 0.15.1.
In `@src/main.rs`:
- Around line 250-251: The info log printing the refresh schedule
unconditionally (info!("Data refresh scheduled: {}", config.refresh_cron)) can
be misleading if the scheduler failed to start; change this so the message is
only logged when the scheduler start succeeded (the block that spawns/starts the
scheduler) or replace it with a conditional log that reports failure when the
scheduler failed to initialize—update the code around the scheduler
start/initialization (the code that attempts to start the scheduler and returns
an error) to log the cron string only on success and log an explicit error or
warning with context if scheduler start fails.
🧹 Nitpick comments (2)
src/main.rs (2)
52-58: Consider validating the cron expression early. Invalid cron expressions will cause the scheduler to fail at startup, which is handled gracefully (lines 221-231). However, you might want to validate the expression during config loading to provide clearer error messages.
219-231: Consider renaming `_scheduler` to clarify intent. The underscore prefix typically indicates an unused variable. Here, keeping the variable alive is intentional to prevent the scheduler from being dropped. Consider renaming to `scheduler` or `_keep_alive_scheduler` to make the intent clearer.

```diff
-    let _scheduler: Option<JobScheduler> =
+    // Must keep scheduler alive for background jobs to continue running
+    let scheduler: Option<JobScheduler> =
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (2)
`Cargo.toml`, `src/main.rs`
🧰 Additional context used
🧬 Code graph analysis (1)
src/main.rs (2)
src/fetcher.rs (1)
`fetch_and_save_peeringdb_data` (33-89)
src/data.rs (1)
`load_network_data` (23-27)
🔇 Additional comments (10)
src/main.rs (10)
22-24: LGTM! Correctly imports `tokio::sync::RwLock` (async-aware) rather than `std::sync::RwLock`, which is appropriate for use within async handlers and the scheduler.
33-34: LGTM! The 6-field cron expression includes seconds, which is the correct format for `tokio-cron-scheduler`. Daily midnight refresh is a reasonable default.
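For illustration, a daily-midnight schedule in that 6-field format looks like the sketch below (the exact string used by the PR isn't quoted in this review, so the value is an assumption):

```rust
// tokio-cron-scheduler's cron format carries a leading seconds field:
// sec   min   hour   day-of-month   month   day-of-week
let refresh_cron = "0 0 0 * * *"; // assumed default: every day at 00:00:00
```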
65-71: LGTM! Using `RwLock<Vec<Network>>` is appropriate for this read-heavy workload where multiple HTTP handlers need concurrent read access while the background refresh requires exclusive write access.
120-143: LGTM! Good error handling pattern: fetch failures and load failures are logged but don't corrupt existing data. The write lock scope is minimal (lines 135-136), reducing contention with readers.
156-179: LGTM! The scheduler implementation correctly handles:
- Double clone of `Arc<AppState>` (one for the closure capture, one per job execution; see the sketch below)
- Proper error propagation
- Returning the scheduler to keep it alive in `main`
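A minimal sketch of that double-clone pattern with tokio-cron-scheduler, assuming a state type and cron string like the ones described in this review (not the PR's exact code):

```rust
use std::sync::Arc;
use tokio_cron_scheduler::{Job, JobScheduler, JobSchedulerError};

struct AppState; // stand-in for the real shared state

async fn start_scheduler(state: Arc<AppState>) -> Result<JobScheduler, JobSchedulerError> {
    let sched = JobScheduler::new().await?;
    // First clone: moved into the FnMut closure, which may run many times.
    let job_state = state.clone();
    let job = Job::new_async("0 0 0 * * *", move |_uuid, _lock| {
        // Second clone: one Arc handle per execution, owned by the future.
        let state = job_state.clone();
        Box::pin(async move {
            let _ = state; // refresh logic would go here
        })
    })?;
    sched.add(job).await?;
    sched.start().await?;
    Ok(sched) // caller must keep this alive or jobs stop running
}
```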
261-304: LGTM! Good lock hygiene in the index handler: acquires read lock, collects necessary data, explicitly drops the lock (line 300) before the potentially expensive template rendering operation. This minimizes lock contention.
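A minimal sketch of that drop-before-the-expensive-work discipline, under assumed types (not the handler's actual code):

```rust
use tokio::sync::RwLock;

// Copy what you need out of the guard, then release it before heavy work.
async fn summarize(lock: &RwLock<Vec<String>>) -> String {
    let guard = lock.read().await;
    let count = guard.len();
    let first = guard.first().cloned();
    drop(guard); // read lock released; the formatting below no longer blocks writers
    format!("{count} networks, first = {first:?}")
}
```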
311-347: LGTM! The `networks_list` handler correctly handles the edge case of empty data (drops lock at line 315 before returning) and clones the paginated slice before dropping the lock at line 337.
394-411: Minor: HashMap key type changed to owned String. Line 397 changes from `HashMap<&str, usize>` to `HashMap<String, usize>`, cloning strings during iteration. This is correct since you need owned data after dropping the lock, but it's worth noting the slight memory overhead compared to the other handlers.
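A small sketch of why the owned keys matter here: the counts must outlive the guard that any borrowed `&str` keys would point into (illustrative data and names):

```rust
use std::collections::HashMap;

fn count_types(types: &[Option<String>]) -> HashMap<String, usize> {
    let mut counts: HashMap<String, usize> = HashMap::new();
    for t in types.iter().flatten() {
        // clone() produces an owned key, so `counts` stays valid after any
        // borrow of `types` (e.g., a lock guard) is dropped
        *counts.entry(t.clone()).or_insert(0) += 1;
    }
    counts
}
```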
414-439: LGTM! Correctly collects results into an intermediate vector while holding the lock, then drops the lock before the `multiunzip` operation.
442-459: LGTM! The handler follows the established pattern of acquiring the read lock, collecting results, and dropping the lock before returning the response.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
```rust
info!("Server listening on http://{}", config.bind_address);
info!("Data refresh scheduled: {}", config.refresh_cron);
```
Log message may be misleading if scheduler failed to start.
Line 251 logs the refresh schedule even when the scheduler failed to start (lines 224-230). Consider conditionally logging or adjusting the message.
Suggested fix
```diff
     info!("Server listening on http://{}", config.bind_address);
-    info!("Data refresh scheduled: {}", config.refresh_cron);
+    if _scheduler.is_some() {
+        info!("Data refresh scheduled: {}", config.refresh_cron);
+    }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
info!("Server listening on http://{}", config.bind_address);
if _scheduler.is_some() {
    info!("Data refresh scheduled: {}", config.refresh_cron);
}
```
🤖 Prompt for AI Agents
In `@src/main.rs` around lines 250 - 251, The info log printing the refresh
schedule unconditionally (info!("Data refresh scheduled: {}",
config.refresh_cron)) can be misleading if the scheduler failed to start; change
this so the message is only logged when the scheduler start succeeded (the block
that spawns/starts the scheduler) or replace it with a conditional log that
reports failure when the scheduler failed to initialize—update the code around
the scheduler start/initialization (the code that attempts to start the
scheduler and returns an error) to log the cron string only on success and log
an explicit error or warning with context if scheduler start fails.
Force-pushed from f30941b to d643cd4
… empty strings as None and introduce AI agent Rust code quality guidelines.
… for key structs.
…ocking data loading, alongside adding agent coding guidelines.
…ment, and error handling, and add agent coding guidelines.
…, type, policy, and status, and update the UI to support these filters.
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@assets/style.css`:
- Around line 3-16: Move the `@import` rule so it appears before the :root
declaration; specifically, take the existing `@import`
url("https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700&display=swap")
line and place it at the top of the file as the first statement, ensuring the
:root block (which defines --pico-* variables) comes after the import to satisfy
CSS ordering requirements.
In `@Cargo.toml`:
- Line 21: Update the polars dependency in Cargo.toml from version "0.45" to a
current stable release (e.g., "0.52.0") by changing the polars entry polars = {
version = "0.45", features = ["lazy", "serde", "json"] } to use the new version
string (retain the features unless compatibility requires adjustment); after
updating, run cargo update and cargo build to catch any API or feature changes
and adjust code referencing Polars types/functions if needed.
In `@src/handlers.rs`:
- Around line 155-186: Replace the uses of as_ref().map_or(false, |..| ...) with
as_ref().is_some_and(|..| ...) to satisfy Clippy's unnecessary_map_or lint: for
example change n.aka.as_ref().map_or(false, |a|
a.to_lowercase().contains(&q_lower)) to n.aka.as_ref().is_some_and(|a|
a.to_lowercase().contains(&q_lower)), and similarly replace
n.info_type.as_ref().map_or(false, |it| it.eq_ignore_ascii_case(t)),
n.policy_general.as_ref().map_or(false, |pg| pg.eq_ignore_ascii_case(p)), and
n.status.as_ref().map_or(false, |st| st.eq_ignore_ascii_case(s)) with their
as_ref().is_some_and(...) equivalents (keeping the same closure bodies).
- Around line 403-421: The current match arm uses several .unwrap() calls on
res.column(...).i64()/str() which can panic if the DataFrame schema changes;
replace these unwraps with proper error handling: for each column ("ix_count",
"fac_count", "name") use res.column("...").and_then(|c| c.i64()/str()) and
convert to a Result or Option, map the Series to Vec<i64> or &str values
(handling flatten()), and either return a controlled error (e.g., Err(...) or an
HTTP 4xx response) or provide a safe default instead of panicking; update the
code around the match result arm (the variables ix, fac, name_ca) to propagate
or handle the error rather than calling .unwrap().
In `@templates/dashboard.html`:
- Around line 186-194: The IPv6 dataset in the prefixes chart is using the wrong
field (data.secondary) while the API returns ipv6; update the dataset entry with
label 'IPv6' to use data.ipv6 instead of data.secondary in
templates/dashboard.html so the chart reads the ipv6 array returned by the
/api/prefixes-distribution endpoint.
In `@templates/networks.html`:
- Around line 154-156: The current event handler on searchInput uses the "input"
event to call document.getElementById("searchForm").submit() on every keystroke
(searchInput addEventListener("input", ...)), causing excessive requests; change
this to either debounce the submission (wrap the submit call in a debounce
utility with a short delay, cancelling prior timers) or remove auto-submit and
instead listen for Enter via searchInput.addEventListener("keydown", ...)
checking for key === "Enter" to call searchForm.submit(), or provide a visible
search button that submits the form; update the listener attached to searchInput
and ensure the debounce timer is scoped so repeated inputs clear the previous
timer.
- Around line 100-106: The template's Previous button block (the "{% if page > 1
%}" conditional that renders the "Previous" anchor or disabled button) is
missing an opening <li> tag while it has a trailing </li>, breaking the HTML
list structure; fix by inserting the matching opening <li> before the
conditional (so the block that renders <a href="/networks?page={{ page - 1 }}{{
params }}" role="button" class="outline">Previous</a> or <button disabled
class="outline">Previous</button> is wrapped in a proper <li>...</li> pair) and
ensure the existing closing </li> remains to keep consistent markup.
♻️ Duplicate comments (1)
src/main.rs (1)
178-179: Log message may be misleading if scheduler failed to start. Line 179 logs the refresh schedule unconditionally, even when the scheduler failed to start (lines 141-147). Consider logging conditionally.
Suggested fix

```diff
     info!("Server listening on http://{}", config.bind_address);
-    info!("Data refresh scheduled: {}", config.refresh_cron);
+    if _scheduler.is_some() {
+        info!("Data refresh scheduled: {}", config.refresh_cron);
+    }
```
🧹 Nitpick comments (2)
assets/style.css (1)
109-123: Consider accessibility for motion-sensitive users. The `fadeIn` animation applied to all `main > *` elements could cause discomfort for users with vestibular disorders. Consider respecting the `prefers-reduced-motion` media query.

♿ Proposed accessibility enhancement

```diff
 main > * {
     animation: fadeIn 0.4s ease-out;
 }
+
+@media (prefers-reduced-motion: reduce) {
+    main > * {
+        animation: none;
+    }
+}
```

templates/networks.html (1)
91-119: Consider URL encoding filter values in pagination links. The parameter concatenation at lines 95-99 directly embeds filter values into URLs. If filter values contain special characters (e.g., `&`, `=`), the URLs could break. Tera's `urlencode` filter can help.

♻️ Suggestion

```diff
-    {% set params = "&per_page=" ~ per_page ~ "&q=" ~ q_val ~ "&type=" ~ type_val ~ "&policy=" ~ policy_val ~ "&status=" ~ status_val %}
+    {% set params = "&per_page=" ~ per_page ~ "&q=" ~ q_val | urlencode ~ "&type=" ~ type_val | urlencode ~ "&policy=" ~ policy_val | urlencode ~ "&status=" ~ status_val | urlencode %}
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (12)
`Cargo.toml`, `assets/style.css`, `src/error.rs`, `src/fetcher.rs`, `src/handlers.rs`, `src/main.rs`, `src/state.rs`, `templates/analytics.html`, `templates/base.html`, `templates/dashboard.html`, `templates/networks.html`, `templates/search.html`
🧰 Additional context used
🧬 Code graph analysis (2)
src/state.rs (1)
src/models.rs (1)
`Network` (13-27)
src/main.rs (4)
src/data.rs (1)
`load_network_data` (23-27)
src/fetcher.rs (1)
`fetch_and_save_peeringdb_data` (33-89)
src/handlers.rs (6)
`analytics` (229-232), `api_network_types` (274-323), `api_prefixes_distribution` (326-383), `index` (90-139), `networks_list` (142-226), `s` (52-52)
src/state.rs (2)
`new` (44-49), `from_env` (19-24)
🪛 Biome (2.1.2)
assets/style.css
[error] 16-16: This @import is in the wrong position.
Any @import rules must precede all other valid at-rules and style rules in a stylesheet (ignoring @charset and @layer), or else the @import rule is invalid.
Consider moving import position.
(lint/correctness/noInvalidPositionAtImportRule)
[error] 82-82: Duplicate properties can lead to unexpected behavior and may override previous declarations unintentionally.
background is already defined here.
Remove or rename the duplicate property to ensure consistent styling.
(lint/suspicious/noDuplicateProperties)
🪛 GitHub Actions: Rust CI
src/handlers.rs
[error] 159-159: this map_or can be simplified; use is_some_and instead. Clippy: unnecessary_map_or suggested replacement
[error] 165-165: this map_or can be simplified; use is_some_and instead. Clippy: unnecessary_map_or suggested replacement
[error] 168-168: this map_or can be simplified; use is_some_and instead. Clippy: unnecessary_map_or suggested replacement
[error] 172-172: this map_or can be simplified; use is_some_and instead. Clippy: unnecessary_map_or suggested replacement
[error] 175-175: this map_or can be simplified; use is_some_and instead. Clippy: unnecessary_map_or suggested replacement
[error] 179-179: this map_or can be simplified; use is_some_and instead. Clippy: unnecessary_map_or suggested replacement
[error] 182-182: this map_or can be simplified; use is_some_and instead. Clippy: unnecessary_map_or suggested replacement
🔇 Additional comments (33)
templates/base.html (1)
1-67: Well-structured base template with appropriate block definitions. The template follows Tera conventions with clear separation of concerns: title, content, and scripts blocks for child template customization. Navigation active-state handling is simple and effective.
Optional security enhancement: Consider adding Subresource Integrity (SRI) hashes to CDN resources (Pico CSS, Chart.js) to protect against CDN compromise:
```html
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@picocss/pico@2/css/pico.min.css" integrity="sha384-..." crossorigin="anonymous">
```

Cargo.toml (1)
10-10: Dependency version looks good. The tokio-cron-scheduler version has been updated to 0.15.1 as previously recommended.
src/error.rs (1)
27-30: Clean error variant addition. The `Polars` variant follows the established pattern with `#[from]` for automatic conversion, enabling seamless error propagation from DataFrame operations throughout the codebase.
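As a sketch, the `#[from]` pattern looks roughly like this (variant layout and message wording are assumptions, not copied from src/error.rs):

```rust
use thiserror::Error;

#[derive(Debug, Error)]
enum AppError {
    #[error("template error: {0}")]
    Template(#[from] tera::Error),
    // #[from] generates From<PolarsError> for AppError, so `?` converts automatically
    #[error("dataframe error: {0}")]
    Polars(#[from] polars::error::PolarsError),
}
```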
assets/style.css (1)

79-86: Duplicate `background` property is intentional for browser compatibility. The static analyzer flagged duplicate `background` properties, but this is a common pattern for cross-browser gradient support. The `-webkit-linear-gradient` (line 81) provides fallback for older WebKit browsers, while the standard `linear-gradient` (line 82) covers modern browsers. This is acceptable.

src/fetcher.rs (3)
8-8: Clean migration to async filesystem operations. Switching from `std::fs` to `tokio::fs` ensures non-blocking I/O, which is appropriate for this async context and aligns with the broader async architecture of the application.
33-35: Async directory creation is correctly awaited. The `create_dir_all` operation now uses tokio's async variant with proper error propagation via `?`.
70-85: Async file write with appropriate error handling. The nested match structure handles all failure modes (network, JSON parsing, serialization, file I/O) with proper logging. The async `fs::write` is correctly awaited.
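A minimal sketch of the async save path these comments describe (directory and file names are assumptions, not the PR's actual paths):

```rust
use tokio::fs;

async fn save_payload(json: &str) -> std::io::Result<()> {
    fs::create_dir_all("data").await?;          // async mkdir -p, propagated with `?`
    fs::write("data/networks.json", json).await // async write; never blocks the runtime
}
```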
templates/search.html (2)

1-28: LGTM - Clean search form implementation. The template structure is well-organized with proper form handling using GET method for search. The use of `| default(value='')` preserves query values across requests appropriately.
30-67: LGTM - Conditional results rendering is correct. The three-way conditional logic handles all cases properly:
- Results found → display table
- Query submitted but no results → "No results found" message
- No query → nothing displayed
src/state.rs (3)
1-6: LGTM - Correct imports for async state management. Using `tokio::sync::RwLock` is the right choice for shared state in an async context, avoiding blocking the executor during lock acquisition.
27-50: LGTM - Well-structured shared state design. The `AppState` struct properly encapsulates the template engine and data behind an `RwLock`. Storing both `Vec<Network>` and `DataFrame` in `AppData` allows efficient access for both template rendering and analytics queries.
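The rough shape implied by this comment, sketched with assumed field names (`Network` is a placeholder for the model in src/models.rs):

```rust
use std::sync::Arc;
use polars::prelude::DataFrame;
use tokio::sync::RwLock;

struct Network; // placeholder for the model defined in src/models.rs (13-27)

struct AppData {
    networks: Vec<Network>, // for template rendering
    df: DataFrame,          // for Polars-backed analytics endpoints
}

struct AppState {
    tera: tera::Tera,
    data: RwLock<AppData>,
}

type SharedState = Arc<AppState>; // cloned cheaply into handlers and the scheduler
```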
17-25: The cron expression validation concern is not applicable to this codebase. The project uses `tokio-cron-scheduler`, which already validates the cron expression when `Job::new_async()` is called at startup (line 82 in src/main.rs). Any validation errors are caught and logged as a warning in `main()` (lines 141-147), allowing the application to continue without the scheduler.

Additionally, the suggested fix references a `cron` crate that is not a project dependency. The project only depends on `tokio-cron-scheduler = "0.15.1"`, which handles validation internally.

Likely an incorrect or invalid review comment.
templates/dashboard.html (3)
1-32: LGTM - Clean stats grid with server-rendered values. The stats grid uses template variables appropriately. The `stats.scopes["Global"]` access with default fallback handles missing keys gracefully.
34-59: LGTM - Good loading/error state pattern for charts. The three-state UI (loading → success/error) with retry functionality provides a good user experience for async data loading.
111-154: LGTM - Proper chart lifecycle management. The `loadNetworkTypesChart` function correctly destroys the existing chart instance before creating a new one, preventing memory leaks on retry or re-render.

templates/analytics.html (3)
1-41: LGTM - Clean analytics page structure. The stats grid with placeholder values (`-`) and the chart container with loading/error states follows the same pattern as the dashboard, maintaining UI consistency.
112-134: LGTM - Correct Pearson correlation coefficient implementation. The correlation calculation correctly implements Pearson's r formula (restated below) with proper handling of edge cases:
- Empty data guard at line 113
- Division by zero check at line 131
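For reference, the single-pass form of Pearson's r that such a calculation typically evaluates (standard formula, not quoted from the template's JS):

```latex
r = \frac{n\sum_i x_i y_i - \sum_i x_i \sum_i y_i}
        {\sqrt{n\sum_i x_i^2 - \left(\sum_i x_i\right)^2}\,
         \sqrt{n\sum_i y_i^2 - \left(\sum_i y_i\right)^2}}
```

The denominator is zero exactly when either variable is constant, which is the division-by-zero case the guard at line 131 covers.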
81-110: LGTM - Robust async data loading with error handling. The `loadChartData` function properly manages UI states and destroys the previous chart instance before creating a new one.
42-89: LGTM - Well-structured networks table. The table rendering with conditional badge styling for policy and status provides clear visual distinction. The fallback values (`or 'Unknown'`, `or '0'`) handle missing data gracefully.

src/main.rs (5)
1-24: LGTM! Module organization and imports are clean and well-structured. The modular approach with separate `handlers` and `state` modules improves maintainability.
29-38: LGTM! The JSON round-trip approach for DataFrame creation is pragmatic. The comment correctly acknowledges the tradeoff between simplicity and verbosity.
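A sketch of what such a JSON round-trip into Polars can look like, assuming the `json` feature and serde-serializable rows (illustrative, not the PR's exact code):

```rust
use std::io::Cursor;
use polars::prelude::*;
use serde::Serialize;

fn to_dataframe<T: Serialize>(rows: &[T]) -> PolarsResult<DataFrame> {
    // Serialize to a JSON array, then let Polars infer the schema on read.
    let bytes = serde_json::to_vec(rows)
        .map_err(|e| PolarsError::ComputeError(e.to_string().into()))?;
    JsonReader::new(Cursor::new(bytes)).finish()
}
```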
40-73: LGTM! The refresh logic has proper error handling at each stage (fetch, load, DataFrame creation). Using `spawn_blocking` for the synchronous `load_network_data` is correct to avoid blocking the async runtime.
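A minimal sketch of the `spawn_blocking` hand-off (loader body and path are assumptions):

```rust
use tokio::task;

fn load_sync(path: &str) -> std::io::Result<String> {
    std::fs::read_to_string(path) // synchronous I/O, fine on the blocking pool
}

async fn load_async() -> anyhow::Result<String> {
    // First `?` handles the JoinError (panicked/cancelled task),
    // second `?` handles the loader's own error.
    let body = task::spawn_blocking(|| load_sync("data/networks.json")).await??;
    Ok(body)
}
```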
75-95: LGTM! Scheduler setup is clean with proper error propagation. The success log at line 92 is appropriately placed inside the function where scheduler start is confirmed.
97-185: LGTM! The main function has a well-structured startup sequence with proper error handling and graceful degradation for the scheduler. The router configuration is clean with proper state injection and tracing layers.
src/handlers.rs (9)
1-14: LGTM! Imports are well-organized and appropriate for the handlers module.
16-66: LGTM! Query structs and deserialization helpers are well-designed. The `empty_string_as_none` pattern elegantly handles HTML form submissions where empty fields arrive as empty strings rather than missing values.
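A minimal sketch of the empty-string-as-None idiom with serde (the helper name mirrors the review; the body is an assumption):

```rust
use serde::{Deserialize, Deserializer};

fn empty_string_as_none<'de, D>(de: D) -> Result<Option<String>, D::Error>
where
    D: Deserializer<'de>,
{
    // "?q=" deserializes as Some(""), which we normalize to None.
    let opt = Option::<String>::deserialize(de)?;
    Ok(opt.filter(|s| !s.trim().is_empty()))
}
```

Applied per field with something like `#[serde(default, deserialize_with = "empty_string_as_none")]` on the query struct.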
68-87: LGTM! `render_template` has appropriate error logging and response handling. `truncate_chars` correctly handles Unicode by counting characters rather than bytes.
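The char-counting point in one sketch (illustrative body, not necessarily the PR's implementation):

```rust
fn truncate_chars(s: &str, max: usize) -> String {
    // chars() iterates Unicode scalar values, so an emoji counts as one
    // character instead of up to four bytes; byte slicing could panic mid-codepoint
    s.chars().take(max).collect()
}
```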
89-139: LGTM! The index handler has clean stats aggregation logic. Dropping `data_guard` before template rendering is a good practice to minimize lock hold time.
190-226: LGTM! Pagination logic is robust with proper bounds checking using `saturating_mul`, `saturating_add`, and `clamp`. The filter parameters are correctly passed back to the template for UI state preservation.
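A sketch of bounds-checked pagination in that style (names assumed):

```rust
fn page_bounds(page: usize, per_page: usize, total: usize) -> (usize, usize) {
    // saturating_* avoids overflow on hostile inputs; min clamps to the data length
    let start = page.saturating_sub(1).saturating_mul(per_page).min(total);
    let end = start.saturating_add(per_page).min(total);
    (start, end) // safe to use as &items[start..end]
}
```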
228-271: LGTM! The analytics and search handlers are clean. Search input is properly truncated to prevent excessive memory usage from malicious long inputs.
273-323: LGTM! The `api_network_types` handler has proper error handling with graceful fallback to empty arrays. The Polars lazy API usage is appropriate.
325-383: LGTM! The `api_prefixes_distribution` handler maintains consistent error handling with the other API endpoints.
447-474: LGTM! Good test coverage for `truncate_chars` including edge cases for exact length and emoji handling.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
```css
:root {
    --pico-font-family: "Inter", system-ui, -apple-system, sans-serif;
    --pico-primary: #3b82f6;
    --pico-primary-background: #3b82f6;
    --pico-primary-hover: #2563eb;
    --pico-primary-underline: rgba(59, 130, 246, 0.5);

    --pico-border-radius: 0.5rem;
    --pico-form-element-spacing-vertical: 0.75rem;
    --pico-form-element-spacing-horizontal: 1rem;
}

/* Typography */
@import url("https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700&display=swap");
```
Move @import rule before :root declaration.
CSS @import rules must precede all other at-rules and style rules to be valid. The Google Fonts import on line 16 should be moved to the top of the file.
🔧 Proposed fix
```diff
 /* Custom Styling for NetViz on top of Pico CSS v2 */
+/* Typography */
+@import url("https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700&display=swap");
+
 :root {
     --pico-font-family: "Inter", system-ui, -apple-system, sans-serif;
     --pico-primary: #3b82f6;
     --pico-primary-background: #3b82f6;
     --pico-primary-hover: #2563eb;
     --pico-primary-underline: rgba(59, 130, 246, 0.5);
     --pico-border-radius: 0.5rem;
     --pico-form-element-spacing-vertical: 0.75rem;
     --pico-form-element-spacing-horizontal: 1rem;
 }
 
-/* Typography */
-@import url("https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700&display=swap");
-
 body {
```

🧰 Tools
🪛 Biome (2.1.2)
[error] 16-16: This @import is in the wrong position.
Any @import rules must precede all other valid at-rules and style rules in a stylesheet (ignoring @charset and @layer), or else the @import rule is invalid.
Consider moving import position.
(lint/correctness/noInvalidPositionAtImportRule)
🤖 Prompt for AI Agents
In `@assets/style.css` around lines 3 - 16, Move the `@import` rule so it appears
before the :root declaration; specifically, take the existing `@import`
url("https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700&display=swap")
line and place it at the top of the file as the first statement, ensuring the
:root block (which defines --pico-* variables) comes after the import to satisfy
CSS ordering requirements.
```toml
itertools = "0.14"
thiserror = "2.0"
anyhow = "1.0"
polars = { version = "0.45", features = ["lazy", "serde", "json"] }
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Check the latest version of polars on crates.io
curl -s https://crates.io/api/v1/crates/polars | jq '.crate.max_version, .crate.newest_version'
```

Repository: crazyuploader/NetViz
Length of output: 81
Update polars to a current version.
Polars 0.45 is outdated. The latest stable version is 0.52.0. Update to a recent version to ensure access to latest features, improvements, and security patches.
🤖 Prompt for AI Agents
In `@Cargo.toml` at line 21, Update the polars dependency in Cargo.toml from
version "0.45" to a current stable release (e.g., "0.52.0") by changing the
polars entry polars = { version = "0.45", features = ["lazy", "serde", "json"] }
to use the new version string (retain the features unless compatibility requires
adjustment); after updating, run cargo update and cargo build to catch any API
or feature changes and adjust code referencing Polars types/functions if needed.
```rust
if let Some(ref q) = query.q {
    let q_lower = q.to_lowercase();
    matches &= n.name.to_lowercase().contains(&q_lower)
        || n.asn.to_string().contains(&q_lower)
        || n.aka
            .as_ref()
            .map_or(false, |a| a.to_lowercase().contains(&q_lower));
}

if let Some(ref t) = query.type_ {
    matches &= n
        .info_type
        .as_ref()
        .map_or(false, |it| it.eq_ignore_ascii_case(t));
}

if let Some(ref p) = query.policy {
    matches &= n
        .policy_general
        .as_ref()
        .map_or(false, |pg| pg.eq_ignore_ascii_case(p));
}

if let Some(ref s) = query.status {
    matches &= n
        .status
        .as_ref()
        .map_or(false, |st| st.eq_ignore_ascii_case(s));
}

matches
})
```
Fix Clippy errors: use `is_some_and` instead of `map_or`.
The pipeline is failing due to Clippy's `unnecessary_map_or` lint. Replace `map_or(false, |..| ...)` with `is_some_and(|..| ...)`.
Suggested fix
```diff
     if let Some(ref q) = query.q {
         let q_lower = q.to_lowercase();
         matches &= n.name.to_lowercase().contains(&q_lower)
             || n.asn.to_string().contains(&q_lower)
-            || n.aka
-                .as_ref()
-                .map_or(false, |a| a.to_lowercase().contains(&q_lower));
+            || n.aka
+                .as_ref()
+                .is_some_and(|a| a.to_lowercase().contains(&q_lower));
     }

     if let Some(ref t) = query.type_ {
-        matches &= n
-            .info_type
-            .as_ref()
-            .map_or(false, |it| it.eq_ignore_ascii_case(t));
+        matches &= n
+            .info_type
+            .as_ref()
+            .is_some_and(|it| it.eq_ignore_ascii_case(t));
     }

     if let Some(ref p) = query.policy {
-        matches &= n
-            .policy_general
-            .as_ref()
-            .map_or(false, |pg| pg.eq_ignore_ascii_case(p));
+        matches &= n
+            .policy_general
+            .as_ref()
+            .is_some_and(|pg| pg.eq_ignore_ascii_case(p));
     }

     if let Some(ref s) = query.status {
-        matches &= n
-            .status
-            .as_ref()
-            .map_or(false, |st| st.eq_ignore_ascii_case(s));
+        matches &= n
+            .status
+            .as_ref()
+            .is_some_and(|st| st.eq_ignore_ascii_case(s));
     }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
if let Some(ref q) = query.q {
    let q_lower = q.to_lowercase();
    matches &= n.name.to_lowercase().contains(&q_lower)
        || n.asn.to_string().contains(&q_lower)
        || n.aka
            .as_ref()
            .is_some_and(|a| a.to_lowercase().contains(&q_lower));
}

if let Some(ref t) = query.type_ {
    matches &= n
        .info_type
        .as_ref()
        .is_some_and(|it| it.eq_ignore_ascii_case(t));
}

if let Some(ref p) = query.policy {
    matches &= n
        .policy_general
        .as_ref()
        .is_some_and(|pg| pg.eq_ignore_ascii_case(p));
}

if let Some(ref s) = query.status {
    matches &= n
        .status
        .as_ref()
        .is_some_and(|st| st.eq_ignore_ascii_case(s));
}

matches
})
```
🧰 Tools
🪛 GitHub Actions: Rust CI
[error] 159-159: this map_or can be simplified; use is_some_and instead. Clippy: unnecessary_map_or suggested replacement
[error] 165-165: this map_or can be simplified; use is_some_and instead. Clippy: unnecessary_map_or suggested replacement
[error] 168-168: this map_or can be simplified; use is_some_and instead. Clippy: unnecessary_map_or suggested replacement
[error] 172-172: this map_or can be simplified; use is_some_and instead. Clippy: unnecessary_map_or suggested replacement
[error] 175-175: this map_or can be simplified; use is_some_and instead. Clippy: unnecessary_map_or suggested replacement
[error] 179-179: this map_or can be simplified; use is_some_and instead. Clippy: unnecessary_map_or suggested replacement
[error] 182-182: this map_or can be simplified; use is_some_and instead. Clippy: unnecessary_map_or suggested replacement
🤖 Prompt for AI Agents
In `@src/handlers.rs` around lines 155 - 186, Replace the uses of
as_ref().map_or(false, |..| ...) with as_ref().is_some_and(|..| ...) to satisfy
Clippy's unnecessary_map_or lint: for example change
n.aka.as_ref().map_or(false, |a| a.to_lowercase().contains(&q_lower)) to
n.aka.as_ref().is_some_and(|a| a.to_lowercase().contains(&q_lower)), and
similarly replace n.info_type.as_ref().map_or(false, |it|
it.eq_ignore_ascii_case(t)), n.policy_general.as_ref().map_or(false, |pg|
pg.eq_ignore_ascii_case(p)), and n.status.as_ref().map_or(false, |st|
st.eq_ignore_ascii_case(s)) with their as_ref().is_some_and(...) equivalents
(keeping the same closure bodies).
```rust
match result {
    Ok(res) => {
        let ix: Vec<i64> = res
            .column("ix_count")
            .unwrap()
            .i64()
            .unwrap()
            .into_iter()
            .flatten()
            .collect();
        let fac: Vec<i64> = res
            .column("fac_count")
            .unwrap()
            .i64()
            .unwrap()
            .into_iter()
            .flatten()
            .collect();
        let name_ca = res.column("name").unwrap().str().unwrap();
```
Avoid `.unwrap()` calls that can panic on unexpected DataFrame schema.
Unlike the other API handlers that use `.ok()` and graceful fallbacks, this handler uses multiple `.unwrap()` calls on column access and type casting. If the DataFrame schema changes or columns are missing, this will panic and crash the request.
Suggested fix using consistent error handling
```diff
     match result {
         Ok(res) => {
-            let ix: Vec<i64> = res
-                .column("ix_count")
-                .unwrap()
-                .i64()
-                .unwrap()
-                .into_iter()
-                .flatten()
-                .collect();
-            let fac: Vec<i64> = res
-                .column("fac_count")
-                .unwrap()
-                .i64()
-                .unwrap()
-                .into_iter()
-                .flatten()
-                .collect();
-            let name_ca = res.column("name").unwrap().str().unwrap();
+            let ix: Vec<i64> = res
+                .column("ix_count")
+                .ok()
+                .and_then(|s| s.i64().ok())
+                .map(|ca| ca.into_iter().flatten().collect())
+                .unwrap_or_default();
+            let fac: Vec<i64> = res
+                .column("fac_count")
+                .ok()
+                .and_then(|s| s.i64().ok())
+                .map(|ca| ca.into_iter().flatten().collect())
+                .unwrap_or_default();
+            let Some(name_ca) = res.column("name").ok().and_then(|s| s.str().ok()) else {
+                return Json(Vec::<serde_json::Value>::new());
+            };
             let points: Vec<_> = ix
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
match result {
    Ok(res) => {
        let ix: Vec<i64> = res
            .column("ix_count")
            .ok()
            .and_then(|s| s.i64().ok())
            .map(|ca| ca.into_iter().flatten().collect())
            .unwrap_or_default();
        let fac: Vec<i64> = res
            .column("fac_count")
            .ok()
            .and_then(|s| s.i64().ok())
            .map(|ca| ca.into_iter().flatten().collect())
            .unwrap_or_default();
        let Some(name_ca) = res.column("name").ok().and_then(|s| s.str().ok()) else {
            return Json(Vec::<serde_json::Value>::new());
        };
```
🤖 Prompt for AI Agents
In `@src/handlers.rs` around lines 403 - 421, The current match arm uses several
.unwrap() calls on res.column(...).i64()/str() which can panic if the DataFrame
schema changes; replace these unwraps with proper error handling: for each
column ("ix_count", "fac_count", "name") use res.column("...").and_then(|c|
c.i64()/str()) and convert to a Result or Option, map the Series to Vec<i64> or
&str values (handling flatten()), and either return a controlled error (e.g.,
Err(...) or an HTTP 4xx response) or provide a safe default instead of
panicking; update the code around the match result arm (the variables ix, fac,
name_ca) to propagate or handle the error rather than calling .unwrap().
```diff
                 data: data.ipv4,
                 backgroundColor: colors.primary
             },
             {
-                label: "IPv6 Prefixes",
-                data: [],
-                backgroundColor: "rgba(239, 68, 68, 0.8)",
-                borderColor: "#ef4444",
-                borderWidth: 2,
-                borderRadius: 6,
-                borderSkipped: false,
-            },
-        ],
+                label: 'IPv6',
+                data: data.secondary,
+                backgroundColor: colors.secondary
+            }
+        ]
```
🧩 Analysis chain
🏁 Script executed:
```bash
# First, let's explore the repository structure
git ls-files | head -20
```

Repository: crazyuploader/NetViz
Length of output: 370
🏁 Script executed:
```bash
# Search for the prefixes-distribution API handler
rg -n "prefixes.distribution|prefixes_distribution|prefixes-distribution" -A 15 | head -100
```

Repository: crazyuploader/NetViz
Length of output: 3897
🏁 Script executed:
```bash
# Also search for the API response structure in the codebase
rg -n "ipv4|ipv6" templates/dashboard.html -B 2 -A 2
```

Repository: crazyuploader/NetViz
Length of output: 218
🏁 Script executed:
```bash
# Look for where the data variable is populated in dashboard.html
rg -n "/api/prefixes-distribution" -B 5 -A 10
```

Repository: crazyuploader/NetViz
Length of output: 2787
🏁 Script executed:
```bash
# Get the full api_prefixes_distribution function implementation
sed -n '325,380p' src/handlers.rs
```

Repository: crazyuploader/NetViz
Length of output: 1958
🏁 Script executed:
```bash
# Search for how the dataframe result is converted to JSON response
rg -n "Json.*info_prefixes" -A 5 -B 5
```

Repository: crazyuploader/NetViz
Length of output: 46
🏁 Script executed:
```bash
# Look at the complete dashboard.html code around the data mapping
sed -n '160,210p' templates/dashboard.html
```

Repository: crazyuploader/NetViz
Length of output: 1534
Fix IPv6 data source reference in prefixes chart.
The API endpoint `/api/prefixes-distribution` returns an `ipv6` field in the JSON response (line 378 in handlers.rs), but the dashboard incorrectly references `data.secondary`. This causes the IPv6 dataset to display no data. The IPv4 dataset correctly uses `data.ipv4`.
🐛 Proposed fix
```diff
 {
     label: 'IPv6',
-    data: data.secondary,
+    data: data.ipv6,
     backgroundColor: colors.secondary
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
    data: data.ipv4,
    backgroundColor: colors.primary
},
{
    label: 'IPv6',
    data: data.ipv6,
    backgroundColor: colors.secondary
}
]
```
🤖 Prompt for AI Agents
In `@templates/dashboard.html` around lines 186 - 194, The IPv6 dataset in the
prefixes chart is using the wrong field (data.secondary) while the API returns
ipv6; update the dataset entry with label 'IPv6' to use data.ipv6 instead of
data.secondary in templates/dashboard.html so the chart reads the ipv6 array
returned by the /api/prefixes-distribution endpoint.
```html
{% if page > 1 %}
    <a href="/networks?page={{ page - 1 }}{{ params }}" role="button" class="outline">Previous</a>
{% else %}
    <button disabled class="outline">Previous</button>
{% endif %}
</li>
</ul>
```
Missing <li> opening tag causes invalid HTML structure.
Line 105 has a closing </li> tag, but there's no corresponding opening <li> tag for the Previous button block. This will cause HTML parsing issues.
🐛 Proposed fix
```diff
 {% if page > 1 %}
+    <li>
     <a href="/networks?page={{ page - 1 }}{{ params }}" role="button" class="outline">Previous</a>
+    </li>
 {% else %}
+    <li>
     <button disabled class="outline">Previous</button>
+    </li>
 {% endif %}
-</li>
```

🤖 Prompt for AI Agents
In `@templates/networks.html` around lines 100 - 106, The template's Previous
button block (the "{% if page > 1 %}" conditional that renders the "Previous"
anchor or disabled button) is missing an opening <li> tag while it has a
trailing </li>, breaking the HTML list structure; fix by inserting the matching
opening <li> before the conditional (so the block that renders <a
href="/networks?page={{ page - 1 }}{{ params }}" role="button"
class="outline">Previous</a> or <button disabled
class="outline">Previous</button> is wrapped in a proper <li>...</li> pair) and
ensure the existing closing </li> remains to keep consistent markup.
```js
searchInput?.addEventListener("input", function() {
    document.getElementById("searchForm").submit();
});
```
Submitting form on every keystroke will cause excessive server requests.
The input event fires on every character typed. For a search field, this creates a poor UX (page reloads mid-typing) and unnecessary server load.
♻️ Suggested fix: Debounce or use Enter key submission
```diff
-// Submit form on search input change
-searchInput?.addEventListener("input", function() {
-    document.getElementById("searchForm").submit();
-});
+// Submit form on Enter key or after debounce
+let debounceTimer;
+searchInput?.addEventListener("input", function() {
+    clearTimeout(debounceTimer);
+    debounceTimer = setTimeout(() => {
+        document.getElementById("searchForm").submit();
+    }, 500); // 500ms debounce
+});
```

Alternatively, remove the auto-submit entirely and let users press Enter or click a search button.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
// Submit form on Enter key or after debounce
let debounceTimer;
searchInput?.addEventListener("input", function() {
    clearTimeout(debounceTimer);
    debounceTimer = setTimeout(() => {
        document.getElementById("searchForm").submit();
    }, 500); // 500ms debounce
});
```
🤖 Prompt for AI Agents
In `@templates/networks.html` around lines 154 - 156, The current event handler on
searchInput uses the "input" event to call
document.getElementById("searchForm").submit() on every keystroke (searchInput
addEventListener("input", ...)), causing excessive requests; change this to
either debounce the submission (wrap the submit call in a debounce utility with
a short delay, cancelling prior timers) or remove auto-submit and instead listen
for Enter via searchInput.addEventListener("keydown", ...) checking for key ===
"Enter" to call searchForm.submit(), or provide a visible search button that
submits the form; update the listener attached to searchInput and ensure the
debounce timer is scoped so repeated inputs clear the previous timer.
Summary by CodeRabbit
New Features
Improvements
✏️ Tip: You can customize this high-level summary in your review settings.