Releases: snapsynapse/ai-capability-reference
v3.1.0 — Verification at Scale
This release resolves every open verification issue, fixes the pipeline that created them, and adds structured data for search engines.
Verification sweep
- 54 issues resolved across all platforms (24 in prior sessions + 30 in this release)
- 7 data updates, 47 no-change closes with audit trail
- New features tracked: Qwen3-Coder-Next (Alibaba), Mistral Small 4, Leanstral
- Model version updates: Gemini 2.5 Pro → 3.0 Pro in Deep Research and Advanced
- Gating corrections: Copilot Image Generation (paid → free), Claude Projects (added Enterprise row)
Pipeline fixes
- Fixed: 37 features were never verified. The pipeline's max-50 limit used `slice(0, 50)` with no rotation — features loaded alphabetically, so everything after Gemini (Grok, Meta, Mistral, Perplexity, etc.) was permanently skipped. It now sorts by staleness before slicing.
- Schedule: twice weekly (Monday and Thursday, 6pm Pacific) instead of once on Sundays
- Stale threshold: 7 days (was still 30 in the script despite being documented as 7)
- All 87 features now guaranteed to cycle within one week
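The staleness rotation can be sketched in a few lines of JavaScript. This is an illustrative reconstruction, not the pipeline's actual code; the `lastVerified` field name and the record shape are assumptions:

```javascript
// Hypothetical sketch of the staleness-based rotation described above.
// The lastVerified field name and record shape are assumptions.
const MAX_BATCH = 50;

function selectBatch(features) {
  // Sort oldest verification first so stale entries always surface,
  // instead of relying on alphabetical load order plus slice(0, 50).
  return [...features]
    .sort((a, b) => new Date(a.lastVerified) - new Date(b.lastVerified))
    .slice(0, MAX_BATCH);
}

const demo = [
  { name: 'grok-voice', lastVerified: '2025-01-01' },
  { name: 'claude-projects', lastVerified: '2025-03-01' },
];
console.log(selectBatch(demo).map((f) => f.name)); // oldest first
```

With 87 features and two 50-feature runs per week, every feature is guaranteed to cycle within one week.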
MCP server upgrade
- 7 → 15 tools with Graceful Boundaries error handling
- Better agent access to capabilities, comparisons, and plan entitlements
Structured data
- JSON-LD schema.org markup on all 5 main pages
- Knowledge as Code pattern page and research document
Resolve-issue skill
- Installed as a project-level slash command (`.claude/commands/resolve-issue.md`)
- 7-step workflow: fetch → consistency check → research → assess → update → close → report
- Supports single-issue and batch modes with structured assessment documents
By the numbers
- 181 files changed, 3,657 lines added
- 87 features tracked across 11 platforms
- 0 open verification issues
- 0 dependencies (still)
v3.0.0 — AI Capability Reference
This release transforms the project from a reference site into a complete access layer, and from a solo project into one designed for first-time contributors.
Two audiences, one repo
If you're new to GitHub: This is meant to be the easiest open-source project you'll ever contribute to. The data is plain markdown. You don't need to install anything. Find something wrong, edit the file, open a PR. Start with an issue if you're not sure.
If you're a developer studying the approach: This is a zero-dependency, AI-verified knowledge base with no framework, no database, and no node_modules. The architecture (file-over-app, multi-model verification cascades, ontology-driven static site generation, MCP read layer, etc.) is documented in detail under design/ and is as much the point as the data.
What's new since v2.0.0
A full JSON API (Phase 5B)
10 stable endpoints at docs/api/v1/ covering capabilities, products, implementations, providers, model access, evidence, and derived views (capability matrix, product comparisons, plan entitlements). Usage guide →
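A usage sketch of the API. Only the docs/api/v1/ base path comes from these notes; the host, the `capabilities.json` file name, and the response shape are assumptions:

```javascript
// Hypothetical client for the JSON API. The host and the
// capabilities.json file name are assumptions for illustration.
const BASE = 'https://snapsynapse.github.io/ai-capability-reference/api/v1';

function endpointUrl(resource) {
  return `${BASE}/${resource}.json`;
}

async function getResource(resource) {
  const res = await fetch(endpointUrl(resource));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

console.log(endpointUrl('capabilities'));
```

Because the endpoints are static JSON files, any HTTP client works; no SDK or authentication is involved.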
An MCP server for agents (Phase 5D)
7 read-only tools via scripts/mcp-server.js. Zero dependencies. Agents can query capabilities, compare products, and look up plan entitlements without scraping. Config: mcp.json.
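A client configuration along these lines registers the server with an MCP-aware agent. The `mcpServers` key follows the common MCP client convention; the server name shown here is illustrative, and the repo's own mcp.json may differ:

```json
{
  "mcpServers": {
    "ai-capability-reference": {
      "command": "node",
      "args": ["scripts/mcp-server.js"]
    }
  }
}
```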
125 SEO bridge pages (Phase 5C)
Programmatic pages under /can/, /compare/, /capability/, and /best-for/ with schema.org structured data — bridging the ontology's vocabulary to how people actually search.
Search vocabulary (Phase 5A)
All 18 capabilities now carry search_terms — synonym maps that connect vendor-specific feature names to plain-English queries.
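To illustrate the idea only (the repo's actual record format is not shown in these notes, so every field name below is an assumption), a synonym map for one capability might look like:

```yaml
# Hypothetical capability record fragment; all field names are assumed.
capability: deep-research
search_terms:
  - "ai that writes a cited report"
  - "research assistant with sources"
  - "long-form web research"
```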
Search, compare, and export in the UI
Text search across capabilities and implementations, side-by-side product comparison view, and one-click data export.
Scheduled builds
The site auto-rebuilds Monday and Thursday at 6pm Pacific via GitHub Actions, keeping deployed data fresh between manual pushes.
Hardened verification pipeline
Reduced false positives, better issue quality, internal consistency checks. 11 verification issues resolved across ChatGPT, Claude, Gemini, and Copilot data.
Security and discovery
Secret scanning in CI, SECURITY.md, robots.txt, and a well-known discovery file for the API.
Documentation overhaul
New architecture patterns doc, access layers design doc, and a full rewrite of WHY_THIS_EXISTS.md explaining the gap this project fills.
By the numbers
- 274 files changed, 44,123 lines added
- 18 capabilities, 72 implementations, 9 open-model records
- 10 JSON API endpoints, 7 MCP tools, 125 bridge pages
- 0 dependencies
Why v3.0.0
v2 changed the mental model from vendor-first to capability-first. v3 changes who can access the data and how: humans via the site, machines via JSON, agents via MCP, and search engines via bridge pages. It also changes who can contribute — the barrier is now "edit a markdown file."
Built with
Human direction and review. Multi-agent collaboration across Claude Code, Codex, Gemini, and Perplexity-assisted research workflows. Every architectural decision is documented. Every data point is sourced and verified. You're welcome. Now what do we call this approach?
v2.0.0 — AI Capability Reference
This release is the real product reframe.
What started as a plan-by-plan feature tracker is now a capability-first, ontology-driven reference for understanding what today’s AI tools can actually do, where they work, what they cost, and what constraints apply.
What’s new in v2
- **Capabilities are now the front door** — The homepage is no longer organized around vendor feature branding. It starts from plain-English capabilities people actually care about.
- **Detailed availability is still here** — The feature-first view remains available for plan, surface, and entitlement lookup.
- **A new constraints view** — Access, limits, platform caveats, and gating now have their own dedicated view.
- **A real shared data model under the site** — The repo now includes first-class records for capabilities, providers, products, implementations, model access, and evidence.
- **Broader and cleaner coverage** — Coverage expanded across ChatGPT, Claude, Gemini, Copilot, Grok, Perplexity, and open/self-hosted model ecosystems.
- **Ontology-first repo direction** — New design docs, migration strategy, validation tooling, and evidence sync workflows now support the project as a maintained reference system, not just a static tracker.
- **Cross-platform skills** — The repo now includes reusable skills and bundle tooling for Claude, Perplexity, and Codex workflows.
Why this is v2.0.0
Because this is not just a UI refresh.
It changes the project’s:
- core mental model
- public information architecture
- data model
- maintenance workflow
- long-term roadmap
The old question was:
- “What features does this vendor say it has?”
The new question is:
- “What can this AI actually do for me, on my plan, on my surface, with these constraints?”
That is a real version boundary.
Also in this release
- new hero and social preview assets
- improved About page and README
- better provider filtering and UI polish
- fixed theme, anchor, spacing, and layout issues
- resolved large batches of verification issues
- refreshed roadmap around ontology, capability-first UX, and future agent-readable exports
- hardened skill-source hygiene and documented `skill-provenance` as a companion dependency
Thanks
Built with human review and multi-agent collaboration across Claude, Codex, Gemini, and Perplexity-assisted research workflows.
v1.0.0 — 7 platforms, automated verification, live dashboard, WCAG 2.1 AA
AI Feature Tracker v1.0.0
A community-maintained, single source of truth for AI feature availability across subscription tiers. Answer questions like "Is Agent Mode on the free plan?" or "Which platforms support voice on Linux?" in seconds — without hunting through marketing pages.
What's covered
7 platforms tracked
| Platform | Vendor | Sample features tracked |
|---|---|---|
| ChatGPT | OpenAI | Agent Mode, Custom GPTs, Voice, Deep Research, Codex, DALL-E |
| Claude | Anthropic | Code, Cowork, Projects, Artifacts, Extended Thinking, Vision |
| Copilot | Microsoft | Office Integration, Designer, Vision, Voice |
| Gemini | Google | Advanced, NotebookLM, AI Studio, Deep Research, Gems, Imagen, Live |
| Perplexity | Perplexity AI | Comet, Agent Mode, Pro Search, Focus, Collections, Voice |
| Grok | xAI | Chat, Aurora, DeepSearch, Think Mode, Voice |
| Local Models | Various | Llama, Mistral, DeepSeek, Qwen, Codestral |
What each entry tracks
- Plan-by-plan availability — exactly which tier unlocks each feature
- Platform support — Windows, macOS, Linux, iOS, Android, web, terminal, API
- Status — GA, Beta, Preview, Deprecated
- Talking points — presenter-ready sentences with click-to-copy
- Source links — every data point backed by a citation
Dashboard features
- Filter by category — Voice, Coding, Research, Agents, and more
- Filter by price tier — find what's available at your budget
- Provider toggles — focus on specific platforms
- Permalinks & shareable URLs — filter state preserved in URL parameters
- Dark/light mode
- WCAG 2.1 AA accessibility — full keyboard navigation, skip links, ARIA live regions, reduced motion support, 4.5:1 contrast minimums
Automated verification system
Feature data is kept current by a multi-model verification pipeline:
- Multi-model cascade — queries Gemini, Perplexity, Grok, and Claude in parallel
- Bias prevention — skips same-provider models (won't ask Gemini about Google features)
- Consensus required — 3 models must agree before flagging a change
- Auto-changelog — confirmed changes logged to each feature's record
- Human review gate — creates issues/PRs, never auto-merges
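The cascade's two gates (bias prevention and consensus) can be sketched as follows. The function shapes and data structures here are assumptions; the model and provider pairings are real:

```javascript
// Illustrative sketch of the verification gates described above.
// Function shapes are assumptions; the model/provider pairs are real.
const MODEL_PROVIDER = {
  gemini: 'google',
  grok: 'xai',
  claude: 'anthropic',
  perplexity: 'perplexity',
};

function eligibleModels(models, featureProvider) {
  // Bias prevention: never ask a model about its own provider's features.
  return models.filter((m) => MODEL_PROVIDER[m] !== featureProvider);
}

function hasConsensus(verdicts, threshold = 3) {
  // Consensus gate: at least three models must report the same change
  // before it is flagged for human review.
  return verdicts.filter((v) => v.changed).length >= threshold;
}
```

For a Google feature, `eligibleModels(['gemini', 'grok', 'claude', 'perplexity'], 'google')` drops `gemini`, and a change is flagged only when at least three of the remaining verdicts agree.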
```bash
# Run verification
node scripts/verify-features.js

# Check a single platform
node scripts/verify-features.js --platform claude

# Stale-only check (>30 days)
node scripts/verify-features.js --stale-only
```

For contributors
Data lives in data/platforms/ as plain markdown files. Adding or correcting a feature is a one-file edit + PR:
```markdown
## Feature Name
| Property | Value |
|----------|-------|
| Category | agent |
| Status | ga |

### Availability
| Plan | Available | Limits | Notes |
|------|-----------|--------|-------|
| Free | ❌ | — | Not available |
| Plus | ✅ | 40/mo | Message limit |

### Talking Point
> "Your presenter-ready sentence with **key details bolded**."

### Sources
- [Official docs](https://example.com)
```
See [`data/_schema.md`](data/_schema.md) for the full spec and [`CONTRIBUTING.md`](CONTRIBUTING.md) to get started.
---
## Local development
```bash
git clone https://github.com/snapsynapse/ai-feature-tracker.git
cd ai-feature-tracker
node scripts/build.js
open docs/index.html
```

The site auto-deploys via GitHub Actions on every push to main.