Academic expert perspectives for any codebase. Seven specialized review commands that analyze your project from different research domain viewpoints.
| Command | Role | What it reviews |
|---|---|---|
| `/cs` | Computer Science PhD | Algorithm complexity, data structures, concurrency correctness |
| `/db` | Database Theory PhD | Schema normalization, query optimization, consistency models |
| `/stats` | Statistics PhD | A/B test design, metric validity, statistical significance |
| `/ds` | Data Science PhD | ML pipelines, feature engineering, model evaluation |
| `/dist-sys` | Distributed Systems PhD | Consensus, fault tolerance, partition handling |
| `/pl` | PL Theory PhD | Type safety, abstraction design, error handling patterns |
| `/sec` | Security & Cryptography PhD | Primitive selection, protocol safety, key management, side channels |
| Command | Purpose |
|---|---|
| `/config` | Manage `~/.claude/claude-phd-panel.json` — show, init, or read a setting |
| `/audit` | Verify all command files conform to plugin conventions |
```
/plugin marketplace add JFK/claude-phd-panel-plugin
/plugin install claude-phd-panel
```

Run any command to get a full academic review:

```
/claude-phd-panel:cs              # Full CS review of current repo
/claude-phd-panel:cs owner/repo   # Analyze a specific repo
/claude-phd-panel:db schema       # Focus on schema design
/claude-phd-panel:stats metrics   # Focus on metrics/experiments
```
Ask any PhD a direct question — they'll answer from their academic perspective, grounded in your actual codebase:
```
/claude-phd-panel:cs Is this sorting approach optimal for our data size?
/claude-phd-panel:db Should we denormalize this table for read performance?
/claude-phd-panel:stats Is our A/B test sample size sufficient?
/claude-phd-panel:ds Is there data leakage in our feature pipeline?
/claude-phd-panel:dist-sys Can this service handle network partitions?
/claude-phd-panel:pl Should we use generics here or concrete types?
/claude-phd-panel:sec Is HS256 safe for our JWT use case?
```
Configure the output language for all PhD commands:
```
/claude-phd-panel:config init   # Create ~/.claude/claude-phd-panel.json
```

Edit `language` to your preferred ISO 639-1 code (`ja`, `zh`, `ko`, etc.):

```json
{
  "language": "ja"
}
```

All review output and Q&A responses will be in your chosen language. Code blocks, file paths, and structural headings stay in English for tool compatibility.
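If you prefer to script the setting rather than run `/config init`, the file can be written directly. A minimal sketch, assuming only what the docs state above: the path `~/.claude/claude-phd-panel.json` and a single `language` key.

```python
import json
from pathlib import Path

# Path and key as documented; no other keys are implied by the docs.
config_path = Path.home() / ".claude" / "claude-phd-panel.json"
config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps({"language": "ja"}, indent=2) + "\n")
```

Any command run afterwards reads this file and answers in the configured language.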
Run PhD Panel commands alongside Claude C-Suite commands in the same session for executive + academic perspectives:
```
/claude-c-suite:cto    # CTO flags DB performance concern
/claude-phd-panel:db   # DB PhD validates with academic rigor
```
A typical session combining both plugins:
1. **CTO flags an issue** — `/claude-c-suite:cto` reports that the user profile endpoint is slow under load and recommends caching: "Profile reads dominate p95 latency. Consider Redis caching with a 5-minute TTL. Severity: High."
2. **DB PhD validates the diagnosis** — `/claude-phd-panel:db Is caching the right fix here, or is the underlying query the problem?` reads the actual schema and queries, then responds: "The `users` table is missing an index on `(org_id, last_active_at)`, which forces a sequential scan on every profile read. Caching would mask the problem but not fix it — and would introduce staleness for a workload that has strong read-your-writes expectations. Recommend adding the composite index first; reassess caching only if p95 is still high."
3. **CS PhD checks the algorithmic side** — `/claude-phd-panel:cs` notices the handler also does an O(n²) merge of friend lists in application code and flags it as a separate, larger issue.
4. **CEO synthesizes** — `/claude-c-suite:ceo` combines all three findings: index first (cheap, correct), then refactor the merge (medium effort, bigger win), defer caching (don't mask root causes).
The pattern: C-Suite identifies what to worry about; PhD Panel proves whether the proposed fix is actually the right one.
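The DB PhD's index-first reasoning is easy to reproduce outside a session. A minimal SQLite sketch (hypothetical table and data; only the column names come from the example above) shows the query plan flip from a full table scan to an index search once the composite index exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, org_id INTEGER, last_active_at TEXT)")

query = "SELECT id FROM users WHERE org_id = ? AND last_active_at > ?"

def plan(c, sql, params):
    # EXPLAIN QUERY PLAN describes how SQLite would execute the statement;
    # the fourth column of each row is the human-readable detail string.
    return " ".join(row[3] for row in c.execute("EXPLAIN QUERY PLAN " + sql, params))

before = plan(conn, query, (1, "2024-01-01"))   # full table scan
conn.execute("CREATE INDEX idx_users_org_active ON users (org_id, last_active_at)")
after = plan(conn, query, (1, "2024-01-01"))    # search via the composite index

print(before)
print(after)
```

The same before/after check works on Postgres with `EXPLAIN`, where the "before" plan shows the sequential scan the DB PhD describes.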
- claude-c-suite-plugin — Executive perspectives (CEO/CTO/CSO/CFO/CMO/CLO/etc.) for code review. C-Suite commands automatically incorporate PhD Panel findings when run in the same session.
- expert-craft-plugin — Create your own custom domain experts via interactive dialogue. Pairs naturally with PhD Panel and C-Suite for project-specific reviews.
- Analysis only — Commands recommend actions but never execute changes
- Academic rigor — Grounded in theory, not just best practices
- Cross-referencing — Run multiple commands in one session; they reference each other's findings
- C-Suite compatible — References C-Suite findings when both plugins are used together
- GitHub-native — Uses the `gh` CLI to gather issues, milestones, and commit history
- Universal — No project-specific assumptions; works with any codebase
MIT