## Problem
There's no visibility into how the platform is being used across a team. Managers and developers can't answer basic questions:
- Who's using the platform the most? How effectively?
- What's the total cost? Average cost per session?
- Which repos generate the most sessions? The most PRs?
- What's the success rate? Are sessions completing or failing?
- Are we getting value from this — how many PRs are being created?
The session list shows individual sessions, but there's no aggregated view.
## Proposed Feature: Usage Dashboard + Team Leaderboard
A dashboard page in the web UI that aggregates session data into team-level metrics, per-user rankings, and time-series trends.
## Dashboard Sections

### 1. Summary Metrics (top cards)

| Metric | Source |
| --- | --- |
| Total sessions | Count from D1 session index |
| Total users | Distinct users from session index |
| Total cost | Sum of `total_cost` from session state (from #416 / session cost tracking) |
| Avg cost per session | Total cost / total sessions |
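In production these cards would come from a single SQL aggregate over the D1 session index, but the computation can be sketched as a pure function. The `user` and `cost` field names here are placeholders, not the project's actual schema:

```typescript
interface SessionRow {
  user: string; // e.g. the session creator's SCM login
  cost: number; // denormalized session cost in USD
}

// Compute the four summary cards from a list of session index rows.
function summarize(rows: SessionRow[]) {
  const totalSessions = rows.length;
  const totalUsers = new Set(rows.map((r) => r.user)).size;
  const totalCost = rows.reduce((sum, r) => sum + r.cost, 0);
  const avgCostPerSession = totalSessions > 0 ? totalCost / totalSessions : 0;
  return { totalSessions, totalUsers, totalCost, avgCostPerSession };
}
```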
### 2. Sessions Over Time (time-series chart)
Two chart views:
- By user — stacked area chart showing each user's daily session count
- By status — stacked area showing completed/failed/stopped per day
Data source: D1 session index grouped by `DATE(created_at)` and user/status.
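The by-status bucketing can be sketched as follows (a sketch only; the real endpoint would push the `GROUP BY` into D1 rather than grouping in application code):

```typescript
type Status = "completed" | "failed" | "stopped";

interface IndexedSession {
  createdAt: string; // ISO 8601 timestamp, as stored in the session index
  status: Status;
}

// Bucket sessions into per-day status counts, mirroring
// GROUP BY DATE(created_at), status in SQL.
function dailyStatusCounts(
  sessions: IndexedSession[],
): Record<string, Record<Status, number>> {
  const days: Record<string, Record<Status, number>> = {};
  for (const s of sessions) {
    const day = s.createdAt.slice(0, 10); // "YYYY-MM-DD"
    days[day] ??= { completed: 0, failed: 0, stopped: 0 };
    days[day][s.status]++;
  }
  return days;
}
```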
### 3. Sessions by Repository (donut chart)
Donut chart showing session distribution across repos. Each segment = a repo with session count. Helps identify which codebases benefit most from the platform.
### 4. Pull Request Metrics

| Metric | Source |
| --- | --- |
| Total PRs created | Count of PR artifacts across sessions |
| Sessions with PRs | Sessions that produced at least one PR |
| PR rate | Percentage of sessions that created a PR |
| Avg PR session duration | Average duration of sessions that created PRs |
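Given the denormalized `pr_count` and `duration_seconds` columns proposed under Data Requirements, all four metrics reduce to one pass over the rows. Field names here are assumptions:

```typescript
interface PrRow {
  prCount: number;         // denormalized pr_count
  durationSeconds: number; // denormalized duration_seconds
}

// Derive the four PR metrics from the session index rows.
function prMetrics(rows: PrRow[]) {
  const totalPrs = rows.reduce((sum, r) => sum + r.prCount, 0);
  const withPrs = rows.filter((r) => r.prCount > 0);
  const prRate = rows.length > 0 ? withPrs.length / rows.length : 0;
  const avgPrSessionDuration =
    withPrs.length > 0
      ? withPrs.reduce((sum, r) => sum + r.durationSeconds, 0) / withPrs.length
      : 0;
  return {
    totalPrs,
    sessionsWithPrs: withPrs.length,
    prRate,
    avgPrSessionDuration,
  };
}
```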
### 5. Team Leaderboard Table

Per-user rankings with sortable columns:

| Column | Description |
| --- | --- |
| Rank | Position by total sessions |
| User | GitHub username |
| Sessions | Total (completed/failed/stopped breakdown) |
| Success Rate | % of sessions that completed successfully |
| PRs | Total PRs created |
| Turns | Total agent turns across sessions |
| Avg Duration | Average session duration |
| Cost | Total cost in USD |
| Streak | Consecutive days with at least one session |
| Last Active | Relative timestamp of last session |
Visual touches:
- Gold/silver/bronze badges for top 3
- Success rate progress bar (green >80%, amber >50%, red <50%)
- Streak fire emojis (1 for 1-2 days, 2 for 3-6 days, 3 for 7+)
- Active session indicator (pulsing blue dot)
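The streak and emoji rules above could be sketched as follows (UTC day boundaries assumed):

```typescript
// Count consecutive UTC days ending at `today` with at least one session.
// `activeDays` holds "YYYY-MM-DD" strings for days the user had a session.
function computeStreak(activeDays: Set<string>, today: Date): number {
  let streak = 0;
  const d = new Date(today);
  while (activeDays.has(d.toISOString().slice(0, 10))) {
    streak++;
    d.setUTCDate(d.getUTCDate() - 1); // step back one day
  }
  return streak;
}

// Map a streak length to the fire-emoji tiers described above.
function streakEmoji(days: number): string {
  if (days >= 7) return "🔥🔥🔥";
  if (days >= 3) return "🔥🔥";
  if (days >= 1) return "🔥";
  return "";
}
```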
## Data Requirements

The current D1 session index has most of what's needed: `created_at`, `updated_at`, `status`, `repo_owner`, `repo_name`, `model`.

Missing columns that would need to be added to the D1 session index:

- `user_id` / `scm_login` — who created the session (currently only in DO SQLite, not D1)
- `total_cost` — session cost (depends on session cost tracking feature)
- `pr_count` — number of PRs created (currently only in DO artifacts table)
- `total_turns` — number of agent turns
- `duration_seconds` — session duration
Alternatively, the leaderboard API could query each session's DO for this data and cache the aggregates, but that's O(n) DOs per request. The better path is denormalizing key metrics into the D1 index on session completion.
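The denormalization write could look something like this. The `sessions` table name and column names are assumptions for illustration, not the project's actual schema:

```typescript
// Metrics captured when a session reaches a terminal state.
interface FinalMetrics {
  sessionId: string;
  scmLogin: string;
  totalCost: number;
  prCount: number;
  totalTurns: number;
  durationSeconds: number;
}

// Build the parameterized UPDATE that denormalizes final metrics
// into the D1 session index row for this session.
function buildCompletionUpdate(m: FinalMetrics): {
  sql: string;
  params: (string | number)[];
} {
  return {
    sql:
      "UPDATE sessions SET scm_login = ?, total_cost = ?, pr_count = ?, " +
      "total_turns = ?, duration_seconds = ? WHERE id = ?",
    params: [
      m.scmLogin,
      m.totalCost,
      m.prCount,
      m.totalTurns,
      m.durationSeconds,
      m.sessionId,
    ],
  };
}

// In a Worker handler (D1 binding assumed):
//   const { sql, params } = buildCompletionUpdate(metrics);
//   await env.DB.prepare(sql).bind(...params).run();
```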
## Time Range Selector
All metrics filterable by time range: 7d / 14d / 30d / 90d. Default 30d.
## API Endpoints

- `GET /leaderboard?days=30` → `LeaderboardResponse` (summary + per-user entries)
- `GET /leaderboard/timeseries?days=30` → daily session counts (for charts)
- `GET /leaderboard/timeseries/by-user?days=30` → daily per-user session counts
- `GET /leaderboard/repos?days=30` → repo breakdown (for donut chart)
- `GET /leaderboard/prs?days=30` → PR metrics
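One possible shape for `LeaderboardResponse`, with the ranking rule from the leaderboard table (all field names are assumptions for discussion):

```typescript
interface LeaderboardEntry {
  rank: number;
  user: string;              // GitHub username
  sessions: number;
  successRate: number;       // 0..1, rendered as a percentage bar
  prs: number;
  turns: number;
  avgDurationSeconds: number;
  costUsd: number;
  streakDays: number;
  lastActive: string;        // ISO timestamp; shown relatively in the UI
}

interface LeaderboardResponse {
  summary: {
    totalSessions: number;
    totalUsers: number;
    totalCost: number;
    avgCostPerSession: number;
  };
  entries: LeaderboardEntry[];
}

// Rank = position by total sessions (descending), per the table spec.
function rankBySessions(
  entries: Omit<LeaderboardEntry, "rank">[],
): LeaderboardEntry[] {
  return [...entries]
    .sort((a, b) => b.sessions - a.sessions)
    .map((e, i) => ({ ...e, rank: i + 1 }));
}
```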
## Implementation Approach
Phase 1: D1 denormalization — on session completion, write key metrics (user, cost, PR count, duration, turns) to the D1 session index. This makes aggregation queries fast without touching DOs.
Phase 2: API endpoints — SQL aggregation queries against D1. No complex joins — just GROUP BY user/repo/date on the session index table.
Phase 3: Web UI — new `/leaderboard` page with the dashboard sections above. Charts can use a lightweight library (e.g., recharts or custom SVG).
## Why this matters
- Adoption visibility — see who's using the platform and how much
- Cost governance — track spend per user and per repo
- Team motivation — leaderboard gamification drives adoption
- ROI measurement — PR rate and cost per PR help justify the platform investment
- Operational health — success/failure rates surface systemic issues
Happy to contribute the implementation.
@ColeMurray