Change the collision count to be a Mean over the rollout axis instead of a Sum, it makes way more sense now #277

Conversation
Greptile Overview

Greptile Summary

Changed collision and offroad count metrics from sum to mean across rollouts, providing normalized collision rates instead of total counts. This makes the metrics more meaningful for comparison across different numbers of rollouts.

Key Changes:
Impact:
Confidence Score: 4/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant Caller
    participant WOSACEvaluator
    participant Metrics
    participant DataFrameOutput
    Caller->>WOSACEvaluator: compute_metrics(trajectories)
    WOSACEvaluator->>Metrics: compute_interaction_features()
    Metrics-->>WOSACEvaluator: collision_per_step (n_agents, n_rollouts, n_steps)
    Note over WOSACEvaluator: Aggregate collisions over time
    WOSACEvaluator->>WOSACEvaluator: np.any(collision_per_step, axis=2)
    Note over WOSACEvaluator: Result: collision_indication (n_agents, n_rollouts)
    Note over WOSACEvaluator: OLD: Sum across rollouts<br/>NEW: Mean across rollouts
    WOSACEvaluator->>WOSACEvaluator: np.mean(collision_indication, axis=1)
    Note over WOSACEvaluator: Result: num_collisions (n_agents,)<br/>Now represents average collision rate<br/>instead of total count
    WOSACEvaluator->>DataFrameOutput: Create DataFrame with metrics
    Note over DataFrameOutput: num_collisions_sim and num_collisions_ref<br/>are now mean values (0.0-1.0)<br/>instead of integer counts
```
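The aggregation change in the diagram can be sketched in NumPy. This is an illustrative standalone example, not the actual code from `pufferlib/ocean/benchmark/evaluator.py`; the array names and random collision flags are assumptions, but the `np.any(..., axis=2)` / `np.mean(..., axis=1)` steps mirror the diagram:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_rollouts, n_steps = 4, 8, 10

# Per-step boolean collision flags, shape (n_agents, n_rollouts, n_steps)
collision_per_step = rng.random((n_agents, n_rollouts, n_steps)) < 0.05

# Did each (agent, rollout) pair collide at any step? -> (n_agents, n_rollouts)
collision_indication = np.any(collision_per_step, axis=2)

# OLD: sum across rollouts -> an integer count in [0, n_rollouts],
# which grows with the number of rollouts
num_collisions_old = np.sum(collision_indication, axis=1)

# NEW: mean across rollouts -> a rate in [0.0, 1.0],
# comparable across runs with different n_rollouts
num_collisions_new = np.mean(collision_indication, axis=1)

assert num_collisions_new.shape == (n_agents,)
assert np.all((num_collisions_new >= 0.0) & (num_collisions_new <= 1.0))
```

The mean of a boolean indicator is exactly the fraction of rollouts in which the agent collided, which is why the new values land in the 0.0-1.0 range noted in the diagram.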
2 files reviewed, 2 comments
Additional Comments (1)
Path: pufferlib/ocean/benchmark/evaluator.py
Line: 473:476
Comment:
Column names still say `num_collisions` but values are now mean rates (0.0-1.0) instead of counts. Consider renaming to `collision_rate_sim`/`collision_rate_ref` and `offroad_rate_sim`/`offroad_rate_ref` for clarity.
<sub>Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!</sub>
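One way to apply the reviewer's renaming suggestion is a `DataFrame.rename` at the point where the metrics frame is built. This is a minimal sketch: the offroad column names and the sample values are assumptions, since the surrounding DataFrame construction is not shown in this excerpt:

```python
import pandas as pd

# Hypothetical metric values; in the evaluator these would come from the
# np.mean(collision_indication, axis=1) aggregation, so they are rates in [0, 1]
df = pd.DataFrame({
    "num_collisions_sim": [0.125, 0.0],
    "num_collisions_ref": [0.25, 0.125],
    "num_offroad_sim": [0.0, 0.5],
    "num_offroad_ref": [0.125, 0.0],
})

# Rename so the column names reflect rates (0.0-1.0) rather than counts
df = df.rename(columns={
    "num_collisions_sim": "collision_rate_sim",
    "num_collisions_ref": "collision_rate_ref",
    "num_offroad_sim": "offroad_rate_sim",
    "num_offroad_ref": "offroad_rate_ref",
})
```

Renaming at construction time keeps the schema self-describing, so downstream consumers reading the DataFrame do not misread a 0.0-1.0 rate as an integer count.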