diff --git a/.claude/CLAUDE.md b/.claude/CLAUDE.md index 99579901..182d874f 100644 --- a/.claude/CLAUDE.md +++ b/.claude/CLAUDE.md @@ -12,6 +12,7 @@ Rust library for NP-hard problem reductions. Implements computational problems w - [write-model-in-paper](skills/write-model-in-paper/SKILL.md) -- Write or improve a problem-def entry in the Typst paper. Covers formal definition, background, example with visualization, and algorithm list. - [write-rule-in-paper](skills/write-rule-in-paper/SKILL.md) -- Write or improve a reduction-rule entry in the Typst paper. Covers complexity citation, self-contained proof, detailed example, and verification. - [release](skills/release/SKILL.md) -- Create a new crate release. Determines version bump from diff, verifies tests/clippy, then runs `make release`. +- [meta-power](skills/meta-power/SKILL.md) -- Batch-resolve all open `[Model]` and `[Rule]` issues autonomously: plan, implement, review, fix CI, merge — in dependency order (models first). ## Commands ```bash @@ -26,7 +27,7 @@ make mdbook # Build and serve mdBook with live reload make paper # Build Typst paper (runs examples + exports first) make coverage # Generate coverage report (>95% required) make check # Quick pre-commit check (fmt + clippy + test) -make rust-export # Generate Rust mapping JSON exports +make rust-export # Generate Julia parity test data (mapping stages) make export-schemas # Regenerate problem schemas JSON make qubo-testdata # Regenerate QUBO ground truth JSON make clean # Clean build artifacts @@ -90,6 +91,8 @@ enum Direction { Maximize, Minimize } ``` ### Key Patterns +- `variant_params!` macro implements `Problem::variant()` — e.g., `crate::variant_params![G, W]` for two type params, `crate::variant_params![]` for none (see `src/variant.rs`) +- `declare_variants!` proc macro registers concrete type instantiations with best-known complexity — must appear in every model file (see `src/models/graph/maximum_independent_set.rs`). 
Variable names in complexity strings are validated at compile time against actual getter methods. - Problems parameterized by graph type `G` and optionally weight type `W` (problem-dependent) - `ReductionResult` provides `target_problem()` and `extract_solution()` - `Solver::find_best()` → `Option>` for optimization problems; `Solver::find_satisfying()` → `Option>` for `Metric = bool` @@ -101,7 +104,7 @@ enum Direction { Maximize, Minimize } - `NumericSize` supertrait bundles common numeric bounds (`Clone + Default + PartialOrd + Num + Zero + Bounded + AddAssign + 'static`) ### Overhead System -Reduction overhead is expressed using `Expr` AST (in `src/expr.rs`) with the `#[reduction]` macro: +Reduction overhead is expressed using `Expr` AST (in `src/expr.rs`) with the `#[reduction]` macro. The `overhead` attribute is **required** — omitting it is a compile error: ```rust #[reduction(overhead = { num_vertices = "num_vertices + num_clauses", @@ -110,9 +113,14 @@ Reduction overhead is expressed using `Expr` AST (in `src/expr.rs`) with the `#[ impl ReduceTo for Source { ... 
} ``` - Expression strings are parsed at compile time by a Pratt parser in the proc macro crate +- Variable names are validated against actual getter methods on the source type — typos cause compile errors - Each problem type provides inherent getter methods (e.g., `num_vertices()`, `num_edges()`) that the overhead expressions reference - `ReductionOverhead` stores `Vec<(&'static str, Expr)>` — field name to symbolic expression mappings -- Expressions support: constants, variables, `+`, `*`, `^`, `exp()`, `log()`, `sqrt()` +- `ReductionEntry` has both symbolic (`overhead_fn`) and compiled (`overhead_eval_fn`) evaluation — the compiled version calls getters directly +- `VariantEntry` has both a complexity string and compiled `complexity_eval_fn` — same pattern +- Expressions support: constants, variables, `+`, `-`, `*`, `/`, `^`, `exp()`, `log()`, `sqrt()` +- Complexity strings must use **concrete numeric values only** (e.g., `"2^(2.372 * num_vertices / 3)"`, not `"2^(omega * num_vertices / 3)"`) +- `Expr::parse()` provides runtime parsing for cross-check tests that compare compiled vs symbolic evaluation ### Problem Names Problem types use explicit optimization prefixes: @@ -216,6 +224,8 @@ The complexity string represents the **worst-case time complexity of the best kn 2. Confirm the worst-case time bound from the original paper or a survey 3. Check that polynomial-time problems (e.g., MaximumMatching, 2-SAT, 2-Coloring) are NOT declared with exponential complexity 4. For NP-hard problems, verify the base of the exponential matches the literature (e.g., 1.1996^n for MIS, not 2^n) +5. Use only concrete numeric values — no symbolic constants (epsilon, omega); inline the actual numbers with citations +6. Variable names must match getter methods on the problem type (enforced at compile time) ### Reduction Overhead (`#[reduction(overhead = {...})]`) Overhead expressions describe how target problem size relates to source problem size. 
To verify correctness: diff --git a/.claude/skills/add-model/SKILL.md b/.claude/skills/add-model/SKILL.md index d09ccd81..ffe1bbad 100644 --- a/.claude/skills/add-model/SKILL.md +++ b/.claude/skills/add-model/SKILL.md @@ -84,6 +84,25 @@ Key decisions: - **Weight management:** use inherent methods (`weights()`, `set_weights()`, `is_weighted()`), NOT traits - **`dims()`:** returns the configuration space dimensions (e.g., `vec![2; n]` for binary variables) - **`evaluate()`:** must check feasibility first, then compute objective +- **`variant()`:** use the `variant_params!` macro — e.g., `crate::variant_params![G, W]` for `Problem`, or `crate::variant_params![]` for problems with no type parameters. Each type parameter must implement `VariantParam` (already done for standard types like `SimpleGraph`, `i32`, `One`). See `src/variant.rs`. + +## Step 2.5: Register variant complexity + +Add `declare_variants!` at the bottom of the model file (after the trait impls, before the test link). Each line declares a concrete type instantiation with its best-known worst-case complexity: + +```rust +crate::declare_variants! 
{ + ProblemName<SimpleGraph, One> => "1.1996^num_vertices", + ProblemName<SimpleGraph, i32> => "1.1996^num_vertices", +} +``` + +- The complexity string references the getter method names from Step 1.5 (e.g., `num_vertices`) — variable names are validated at compile time against actual getters, so typos cause compile errors +- One entry per supported `(graph, weight)` combination +- The string is parsed as an `Expr` AST — supports `+`, `-`, `*`, `/`, `^`, `exp()`, `log()`, `sqrt()` +- Use only concrete numeric values (e.g., `"1.1996^num_vertices"`, not `"(2-epsilon)^num_vertices"`) +- A compiled `complexity_eval_fn` is auto-generated alongside the symbolic expression +- See `src/models/graph/maximum_independent_set.rs` for the reference pattern ## Step 3: Register the model @@ -146,5 +165,6 @@ Then run the [review-implementation](../review-implementation/SKILL.md) skill to | Missing `#[path]` test link | Add `#[cfg(test)] #[path = "..."] mod tests;` at file bottom | | Wrong `dims()` | Must match the actual configuration space (e.g., `vec![2; n]` for binary) | | Not registering in `mod.rs` | Must update both `/mod.rs` and `models/mod.rs` | +| Forgetting `declare_variants!` | Required for variant complexity metadata used by the paper's auto-generated table | | Forgetting CLI dispatch | Must add match arms in `dispatch.rs` (`load_problem` + `serialize_any_problem`) | | Forgetting CLI alias | Must add lowercase entry in `problem_name.rs` `resolve_alias()` | diff --git a/.claude/skills/add-rule/SKILL.md b/.claude/skills/add-rule/SKILL.md index 1ce40cd7..04a1d5da 100644 --- a/.claude/skills/add-rule/SKILL.md +++ b/.claude/skills/add-rule/SKILL.md @@ -73,7 +73,7 @@ impl ReductionResult for ReductionXToY { } ``` -**ReduceTo with `#[reduction]` macro:** +**ReduceTo with `#[reduction]` macro** (overhead is **required**): ```rust #[reduction(overhead = { field_name = "source_field", @@ -131,11 +131,12 @@ example_fn!(test__to_, reduction__to_); Invoke the `/write-rule-in-paper` skill to write the reduction-rule 
entry in `docs/paper/reductions.typ`. That skill covers the full authoring process: complexity citation, self-contained proof, detailed worked example, and verification checklist. -## Step 6: Regenerate graph and verify +## Step 6: Regenerate exports and verify ```bash -cargo run --example export_graph # Update reduction_graph.json -make test clippy # Must pass +cargo run --example export_graph # Update reduction_graph.json +cargo run --example export_schemas # Update problem schemas +make test clippy # Must pass ``` Then run the [review-implementation](../review-implementation/SKILL.md) skill to verify all structural and semantic checks pass. diff --git a/.claude/skills/fix-pr/SKILL.md b/.claude/skills/fix-pr/SKILL.md index 5bd95d55..5e6d718e 100644 --- a/.claude/skills/fix-pr/SKILL.md +++ b/.claude/skills/fix-pr/SKILL.md @@ -11,14 +11,17 @@ Resolve PR review comments, fix CI failures, and address codecov coverage gaps f **IMPORTANT:** Do NOT use `gh api --jq` for extracting data — it uses a built-in jq that chokes on response bodies containing backslashes (common in Copilot code suggestions). -Always pipe to `python3 -c` instead. +Always pipe to `python3 -c` instead. (`gh pr view --jq` is fine — only `gh api --jq` is affected.) ```bash +# Get repo identifiers +REPO=$(gh repo view --json nameWithOwner --jq .nameWithOwner) # e.g., "owner/repo" + # Get PR number PR=$(gh pr view --json number --jq .number) # Get PR head SHA (on remote) -HEAD_SHA=$(gh api repos/{owner}/{repo}/pulls/$PR | python3 -c "import sys,json; print(json.load(sys.stdin)['head']['sha'])") +HEAD_SHA=$(gh api repos/$REPO/pulls/$PR | python3 -c "import sys,json; print(json.load(sys.stdin)['head']['sha'])") ``` ### 1a. 
Fetch Review Comments @@ -27,7 +30,7 @@ Three sources of feedback to check: ```bash # Copilot and user inline review comments (on code lines) -gh api repos/{owner}/{repo}/pulls/$PR/comments | python3 -c " +gh api repos/$REPO/pulls/$PR/comments | python3 -c " import sys,json for c in json.load(sys.stdin): line = c.get('line') or c.get('original_line') or '?' @@ -35,7 +38,7 @@ for c in json.load(sys.stdin): " # Review-level comments (top-level review body) -gh api repos/{owner}/{repo}/pulls/$PR/reviews | python3 -c " +gh api repos/$REPO/pulls/$PR/reviews | python3 -c " import sys,json for r in json.load(sys.stdin): if r.get('body'): @@ -43,7 +46,7 @@ for r in json.load(sys.stdin): " # Issue-level comments (general discussion, excluding bots) -gh api repos/{owner}/{repo}/issues/$PR/comments | python3 -c " +gh api repos/$REPO/issues/$PR/comments | python3 -c " import sys,json for c in json.load(sys.stdin): login = c['user']['login'] @@ -56,7 +59,7 @@ for c in json.load(sys.stdin): ```bash # All check runs on the PR head -gh api repos/{owner}/{repo}/commits/$HEAD_SHA/check-runs | python3 -c " +gh api repos/$REPO/commits/$HEAD_SHA/check-runs | python3 -c " import sys,json for cr in json.load(sys.stdin)['check_runs']: print(f'{cr[\"name\"]}: {cr.get(\"conclusion\") or cr[\"status\"]}') @@ -67,7 +70,7 @@ for cr in json.load(sys.stdin)['check_runs']: ```bash # Codecov bot comment with coverage diff -gh api repos/{owner}/{repo}/issues/$PR/comments | python3 -c " +gh api repos/$REPO/issues/$PR/comments | python3 -c " import sys,json for c in json.load(sys.stdin): if c['user']['login'] == 'codecov[bot]': @@ -129,7 +132,7 @@ For detailed line-by-line coverage, use the Codecov API: ```bash # Get file-level coverage for the PR -gh api repos/{owner}/{repo}/issues/$PR/comments | python3 -c " +gh api repos/$REPO/issues/$PR/comments | python3 -c " import sys,json,re for c in json.load(sys.stdin): if c['user']['login'] == 'codecov[bot]': diff --git 
a/.claude/skills/issue-to-pr/SKILL.md b/.claude/skills/issue-to-pr/SKILL.md index a85f42c4..64f8c56e 100644 --- a/.claude/skills/issue-to-pr/SKILL.md +++ b/.claude/skills/issue-to-pr/SKILL.md @@ -81,9 +81,14 @@ Include the concrete details from the issue (problem definition, reduction algor Create a pull request with only the plan file. +**Pre-flight checks** (before creating the branch): +1. Verify clean working tree: `git status --porcelain` must be empty. If not, STOP and ask user to stash or commit. +2. Check if branch already exists: `git rev-parse --verify issue-- 2>/dev/null`. If it exists, switch to it with `git checkout` (no `-b`) instead of creating a new one. + ```bash -# Create branch -git checkout -b issue-- +# Create branch (from main) +git checkout main +git rev-parse --verify issue-- 2>/dev/null && git checkout issue-- || git checkout -b issue-- # Stage the plan file git add docs/plans/.md @@ -131,3 +136,5 @@ Created PR #45: Fix #42: Add IndependentSet -> QUBO reduction | Generic plan | Use specifics from the issue, mapped to add-model/add-rule steps | | Skipping CLI registration in plan | add-model requires CLI dispatch updates -- include in plan | | Not verifying facts from issue | Use WebSearch/WebFetch to cross-check claims | +| Branch already exists on retry | Check with `git rev-parse --verify` before `git checkout -b` | +| Dirty working tree | Verify `git status --porcelain` is empty before branching | diff --git a/.claude/skills/meta-power/SKILL.md b/.claude/skills/meta-power/SKILL.md new file mode 100644 index 00000000..59cffef9 --- /dev/null +++ b/.claude/skills/meta-power/SKILL.md @@ -0,0 +1,229 @@ +--- +name: meta-power +description: Use when you want to batch-resolve all open [Model] and [Rule] GitHub issues autonomously — plans, implements, reviews, fixes, and merges each one in dependency order +--- + +# Meta-Power + +Batch-process open `[Model]` and `[Rule]` issues end-to-end: plan, implement, review, fix CI, and merge — fully 
autonomous. + +## Overview + +You are the **outer orchestrator**. For each issue you invoke existing skills and shell out to subprocesses. You never implement code directly — `make run-plan` does the heavy lifting in a separate Claude session. + +**Batch context:** When invoking sub-skills (like `issue-to-pr`), you are running in batch mode. Auto-approve any confirmation prompts from sub-skills — do not wait for user input mid-batch. + +## Step 0: Discover and Order Issues + +```bash +# Fetch all open issues +gh issue list --state open --limit 50 --json number,title +``` + +Filter to issues whose title contains `[Model]` or `[Rule]`. Partition into two buckets, sort each by issue number ascending. Final order: **all Models first, then all Rules**. + +**Check for existing PRs:** For each issue, check if a PR already exists: +```bash +gh pr list --search "Fixes #" --state open --json number,headRefName +``` +If a PR exists, mark the issue as `resume` — skip Step 1 (plan) and jump to Step 2 (execute) or Step 4 (fix loop) depending on whether the PR already has implementation commits. + +Present the ordered list to the user for confirmation before starting: + +``` +Batch plan: + Models: + #108 [Model] LongestCommonSubsequence + #103 [Model] SubsetSum (has open PR #115 — will resume) + Rules: + #109 [Rule] LCS → MIS + #110 [Rule] LCS → ILP + #97 [Rule] BinPacking → ILP + #91 [Rule] CVP → QUBO + +Proceed? (user confirms) +``` + +Initialize a results table to track status for each issue. + +## Step 1: Plan (issue-to-pr) + +For the current issue: + +```bash +git checkout main && git pull origin main +``` + +**Check for stale branches:** If a branch `issue--*` exists with no open PR, delete it to start fresh: +```bash +STALE=$(git branch --list "issue--*" | head -1 | xargs) +if [ -n "$STALE" ]; then + git branch -D "$STALE" + git push origin --delete "$STALE" 2>/dev/null || true +fi +``` + +Invoke the `issue-to-pr` skill with the issue number. 
This creates a branch, writes a plan to `docs/plans/`, and opens a PR. + +**If `issue-to-pr` fails** (e.g., incomplete issue template): record status as `skipped (plan failed)`, move to next issue. + +Capture the PR number for later steps: +```bash +PR=$(gh pr view --json number --jq .number) +``` + +## Step 2: Execute (make run-plan) + +Run the plan in a separate Claude subprocess: + +```bash +make run-plan +``` + +This spawns a new Claude session (up to 500 turns) that reads the plan and implements it using `add-model` or `add-rule`. + +**If the subprocess exits non-zero:** record status as `skipped (execution failed)`, move to next issue. + +## Step 3: Review + +After execution completes, push and request Copilot review: + +```bash +git push +make copilot-review +``` + +## Step 4: Fix Loop (max 3 retries) + +```dot +digraph fix_loop { + "Poll CI until done" [shape=box]; + "CI green?" [shape=diamond]; + "Run fix-pr" [shape=box]; + "Push changes" [shape=box]; + "Retries < 3?" [shape=diamond]; + "Proceed to merge" [shape=doublecircle]; + "Give up" [shape=doublecircle]; + + "Poll CI until done" -> "CI green?"; + "CI green?" -> "Proceed to merge" [label="yes"]; + "CI green?" -> "Retries < 3?" [label="no"]; + "Retries < 3?" -> "Run fix-pr" [label="yes"]; + "Run fix-pr" -> "Push changes"; + "Push changes" -> "Poll CI until done"; + "Retries < 3?" -> "Give up" [label="no"]; +} +``` + +For each retry: + +1. 
**Wait for CI to complete** (poll every 30s, up to 15 minutes): + ```bash + REPO=$(gh repo view --json nameWithOwner --jq .nameWithOwner) + for i in $(seq 1 30); do + sleep 30 + HEAD_SHA=$(gh api repos/$REPO/pulls/$PR | python3 -c "import sys,json; print(json.load(sys.stdin)['head']['sha'])") + STATUS=$(gh api repos/$REPO/commits/$HEAD_SHA/check-runs | python3 -c " + import sys,json + runs = json.load(sys.stdin)['check_runs'] + failed = [r['name'] for r in runs if r.get('conclusion') not in ('success', 'skipped', None)] + pending = [r['name'] for r in runs if r.get('conclusion') is None and r['status'] != 'completed'] + if pending: + print('PENDING') + elif failed: + print('FAILED') + else: + print('GREEN') + ") + if [ "$STATUS" != "PENDING" ]; then break; fi + done + ``` + + - If `GREEN` on the **first** iteration (before any fix-pr): skip the fix loop entirely, proceed to merge. + - If `GREEN` after a fix-pr pass: break out of loop, proceed to merge. + - If `FAILED`: continue to step 2. + - If still `PENDING` after 15 min: treat as `FAILED`. + +2. **Invoke `/fix-pr`** to address review comments, CI failures, and coverage gaps. + +3. **Push fixes:** + ```bash + git push + ``` + +4. Increment retry counter. If `< 3`, go back to step 1 (poll CI). If `= 3`, give up. + +**After 3 failed retries:** record status as `fix-pr failed (3 retries)`, leave PR open, move to next issue. + +## Step 5: Merge + +```bash +gh pr merge $PR --squash --delete-branch --auto +``` + +The `--auto` flag tells GitHub to merge once all required checks pass, avoiding a race between CI completion and the merge command. + +**If merge fails** (e.g., conflict): record status as `merge failed`, leave PR open, move to next issue. 
+ +Wait for the auto-merge to complete before proceeding: +```bash +for i in $(seq 1 20); do + sleep 15 + STATE=$(gh pr view $PR --json state --jq .state) + if [ "$STATE" = "MERGED" ]; then break; fi + if [ "$STATE" = "CLOSED" ]; then break; fi # merge conflict closed it +done +``` + +## Step 6: Sync + +Return to main for the next issue: + +```bash +git checkout main && git pull origin main +``` + +This ensures the next issue (especially a Rule that depends on a just-merged Model) sees all prior work. + +## Step 7: Report + +After all issues are processed, print the summary table: + +``` +=== Meta-Power Batch Report === + +| Issue | Title | Status | +|-------|------------------------------------|---------------------------| +| #108 | [Model] LCS | merged | +| #103 | [Model] SubsetSum | merged (resumed PR #115) | +| #109 | [Rule] LCS → MIS | merged | +| #110 | [Rule] LCS → ILP | fix-pr failed (3 retries) | +| #97 | [Rule] BinPacking → ILP | merged | +| #91 | [Rule] CVP → QUBO | skipped (plan failed) | + +Completed: 4/6 | Skipped: 1 | Failed: 1 +``` + +## Constants + +| Name | Value | Rationale | +|------|-------|-----------| +| `MAX_RETRIES` | 3 | Most issues fix in 1-2 rounds | +| `CI_POLL_INTERVAL` | 30s | Frequent enough to react quickly | +| `CI_POLL_MAX` | 15 min | Upper bound for CI completion | +| `MERGE_POLL_INTERVAL` | 15s | Wait for auto-merge to land | +| `MERGE_POLL_MAX` | 5 min | Upper bound for merge completion | + +## Common Failure Modes + +| Symptom | Cause | Mitigation | +|---------|-------|------------| +| `issue-to-pr` comments and stops | Issue template incomplete | Skip; user must fix the issue | +| `make run-plan` exits non-zero | Implementation too complex for 500 turns | Skip; needs manual work | +| CI red after 3 retries | Deep bug or flaky test | Leave PR open for human review | +| Merge conflict | Concurrent push to main | Leave PR open; manual rebase needed | +| Rule fails because model missing | Model issue was skipped earlier | 
Expected; skip rule too | +| Stale branch from previous run | Previous meta-power run failed mid-issue | Auto-cleaned in Step 1 | +| PR already exists for issue | Previous partial attempt | Resumed from existing PR | diff --git a/.claude/skills/review-implementation/SKILL.md b/.claude/skills/review-implementation/SKILL.md index 57782013..9ceae2c3 100644 --- a/.claude/skills/review-implementation/SKILL.md +++ b/.claude/skills/review-implementation/SKILL.md @@ -21,9 +21,7 @@ Dispatches two parallel review subagents with fresh context (no implementation h Determine whether new model/rule files were added: ```bash -# Check for NEW files (not just modifications) -git diff --name-only --diff-filter=A HEAD~1..HEAD -# Also check against main for branch-level changes +# Check for NEW files across the entire branch git diff --name-only --diff-filter=A main..HEAD ``` @@ -77,7 +75,7 @@ If an issue is found, pass it as `{ISSUE_CONTEXT}` to both subagents. If not, se ### Structural Reviewer (if new model/rule detected) -Dispatch using `Task` tool with `subagent_type="superpowers:code-reviewer"`: +Dispatch using `Agent` tool with `subagent_type="superpowers:code-reviewer"`: - Read `structural-reviewer-prompt.md` from this skill directory - Fill placeholders: @@ -90,7 +88,7 @@ Dispatch using `Task` tool with `subagent_type="superpowers:code-reviewer"`: ### Quality Reviewer (always) -Dispatch using `Task` tool with `subagent_type="superpowers:code-reviewer"`: +Dispatch using `Agent` tool with `subagent_type="superpowers:code-reviewer"`: - Read `quality-reviewer-prompt.md` from this skill directory - Fill placeholders: @@ -101,7 +99,7 @@ Dispatch using `Task` tool with `subagent_type="superpowers:code-reviewer"`: - `{ISSUE_CONTEXT}` -> full issue title + body (or "No linked issue found.") - Prompt = filled template -**Both subagents must be dispatched in parallel** (single message, two Task tool calls). 
+**Both subagents must be dispatched in parallel** (single message with two Agent tool calls — use `run_in_background: true` on one, foreground on the other, then read the background result with `TaskOutput`). ## Step 4: Collect and Address Findings diff --git a/.claude/skills/review-implementation/structural-reviewer-prompt.md b/.claude/skills/review-implementation/structural-reviewer-prompt.md index 0bda0de0..a9e01ef8 100644 --- a/.claude/skills/review-implementation/structural-reviewer-prompt.md +++ b/.claude/skills/review-implementation/structural-reviewer-prompt.md @@ -30,10 +30,10 @@ Given: problem name `P` = `{PROBLEM_NAME}`, category `C` = `{CATEGORY}`, file st | 2 | `inventory::submit!` present | `Grep("inventory::submit", file)` | | 3 | `#[derive(...Serialize, Deserialize)]` on struct | `Grep("Serialize.*Deserialize", file)` | | 4 | `Problem` trait impl | `Grep("impl.*Problem for.*{P}", file)` | -| 5 | `OptimizationProblem` or `SatisfactionProblem` impl | `Grep("(OptimizationProblem\|SatisfactionProblem).*for.*{P}", file)` | +| 5 | `OptimizationProblem` or `SatisfactionProblem` impl | `Grep("(OptimizationProblem|SatisfactionProblem).*for.*{P}", file)` | | 6 | `#[cfg(test)]` + `#[path = "..."]` test link | `Grep("#\\[path =", file)` | | 7 | Test file exists | `Glob("src/unit_tests/models/{C}/{F}.rs")` | -| 8 | Test has creation test | `Grep("fn test_.*creation\|fn test_{F}.*basic", test_file)` | +| 8 | Test has creation test | `Grep("fn test_.*creation|fn test_{F}.*basic", test_file)` | | 9 | Test has evaluation test | `Grep("fn test_.*evaluat", test_file)` | | 10 | Registered in `{C}/mod.rs` | `Grep("mod {F}", "src/models/{C}/mod.rs")` | | 11 | Re-exported in `models/mod.rs` | `Grep("{P}", "src/models/mod.rs")` | diff --git a/.claude/skills/write-model-in-paper/SKILL.md b/.claude/skills/write-model-in-paper/SKILL.md index 9ebaf798..08d48877 100644 --- a/.claude/skills/write-model-in-paper/SKILL.md +++ b/.claude/skills/write-model-in-paper/SKILL.md @@ 
-12,7 +12,7 @@ Full authoring guide for writing a `problem-def` entry in `docs/paper/reductions Before using this skill, ensure: - The problem model is implemented (`src/models//.rs`) - The problem is registered with schema and variant metadata -- JSON exports are up to date (`make rust-export && make export-schemas`) +- JSON exports are up to date (`cargo run --example export_graph && cargo run --example export_schemas`) ## Reference Example @@ -161,10 +161,7 @@ This can be woven into the example text (as MIS does: "$w(S) = sum_(v in S) w(v) ## Step 4: Build and Verify ```bash -# Regenerate exports (if not already done) -make rust-export && make export-schemas - -# Build the paper +# Build the paper (auto-runs export_graph + export_schemas) make paper ``` diff --git a/.claude/skills/write-rule-in-paper/SKILL.md b/.claude/skills/write-rule-in-paper/SKILL.md index e08d852c..f33f0d50 100644 --- a/.claude/skills/write-rule-in-paper/SKILL.md +++ b/.claude/skills/write-rule-in-paper/SKILL.md @@ -17,7 +17,7 @@ Before using this skill, ensure: - The reduction is implemented and tested (`src/rules/_.rs`) - An example program exists (`examples/reduction__to_.rs`) - Example JSON is generated (`make examples`) -- The reduction graph is up to date (`make rust-export`) +- The reduction graph and schemas are up to date (`cargo run --example export_graph && cargo run --example export_schemas`) ## Step 1: Load Example Data diff --git a/Makefile b/Makefile index bc59f80c..5a2fd6f9 100644 --- a/Makefile +++ b/Makefile @@ -192,9 +192,9 @@ run-plan: BRANCH=$$(git branch --show-current); \ PLAN_FILE="$(PLAN_FILE)"; \ if [ "$(AGENT_TYPE)" = "claude" ]; then \ - PROCESS="1. Read the plan file$${NL}2. Choose the right skill to execute: use /add-model for new problem models, /add-rule for new reduction rules, or /subagent-driven-development for other tasks$${NL}3. Push: git push origin $$BRANCH$${NL}4. Create a pull request"; \ + PROCESS="1. Read the plan file$${NL}2. 
Execute the plan — it specifies which skill(s) to use$${NL}3. Push: git push origin $$BRANCH$${NL}4. If a PR already exists for this branch, skip. Otherwise create one."; \ else \ - PROCESS="1. Read the plan file$${NL}2. Execute the tasks step by step. For each task, implement and test before moving on.$${NL}3. Push: git push origin $$BRANCH$${NL}4. Create a pull request"; \ + PROCESS="1. Read the plan file$${NL}2. Execute the tasks step by step. For each task, implement and test before moving on.$${NL}3. Push: git push origin $$BRANCH$${NL}4. If a PR already exists for this branch, skip. Otherwise create one."; \ fi; \ PROMPT="Execute the plan in '$$PLAN_FILE'."; \ if [ -n "$(INSTRUCTIONS)" ]; then \ diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index 9460a804..d119290f 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -335,7 +335,7 @@ In all graph problems below, $G = (V, E)$ denotes an undirected graph with $|V| #problem-def("MaximumIndependentSet")[ Given $G = (V, E)$ with vertex weights $w: V -> RR$, find $S subset.eq V$ maximizing $sum_(v in S) w(v)$ such that no two vertices in $S$ are adjacent: $forall u, v in S: (u, v) in.not E$. ][ -One of Karp's 21 NP-complete problems @karp1972, MIS appears in wireless network scheduling, register allocation, and coding theory @shannon1956. Solvable in polynomial time on bipartite graphs (König's theorem), interval graphs, chordal graphs, and cographs. The best known algorithm runs in $O^*(1.1996^n)$ time via measure-and-conquer branching @xiao2017. +One of Karp's 21 NP-complete problems @karp1972, MIS appears in wireless network scheduling, register allocation, and coding theory @shannon1956. Solvable in polynomial time on bipartite graphs (König's theorem), interval graphs, chordal graphs, and cographs. The best known algorithm runs in $O^*(1.1996^n)$ time via measure-and-conquer branching @xiao2017. 
On geometric graphs (King's subgraph, triangular subgraph, unit disk graphs), MIS admits subexponential $O^*(c^sqrt(n))$ algorithms for some constant $c$, via geometric separation @alber2004. *Example.* Consider the Petersen graph $G$ with $n = 10$ vertices, $|E| = 15$ edges, and unit weights $w(v) = 1$ for all $v in V$. The graph is 3-regular (every vertex has degree 3). A maximum independent set is $S = {v_1, v_3, v_5, v_9}$ with $w(S) = sum_(v in S) w(v) = 4 = alpha(G)$. No two vertices in $S$ share an edge, and no vertex can be added without violating independence. diff --git a/docs/paper/references.bib b/docs/paper/references.bib index ff74bef1..e7527876 100644 --- a/docs/paper/references.bib +++ b/docs/paper/references.bib @@ -355,3 +355,14 @@ @article{shannon1956 doi = {10.1109/TIT.1956.1056798} } +@article{alber2004, + author = {Jochen Alber and Jiří Fiala}, + title = {Geometric separation and exact solutions for the parameterized independent set problem on disk graphs}, + journal = {Journal of Algorithms}, + volume = {52}, + number = {2}, + pages = {134--151}, + year = {2004}, + doi = {10.1016/j.jalgor.2003.10.001} +} + diff --git a/docs/plans/2026-03-01-meta-power-design.md b/docs/plans/2026-03-01-meta-power-design.md new file mode 100644 index 00000000..60bcc35f --- /dev/null +++ b/docs/plans/2026-03-01-meta-power-design.md @@ -0,0 +1,54 @@ +# Design: meta-power skill + +## Purpose + +Batch-resolve open `[Model]` and `[Rule]` GitHub issues end-to-end with full autonomy: plan, implement, review, fix, merge. + +## Architecture + +**Outer orchestrator** pattern: meta-power runs in the main Claude session and shells out to `make run-plan` for each issue's implementation. This keeps the orchestrator's context clean while delegating heavy work to subprocess sessions. 
+ +## Pipeline per Issue + +``` +Phase 1: Plan /issue-to-pr → branch + PR with plan +Phase 2: Execute make run-plan → subprocess implements the plan +Phase 3: Review push, make copilot-review +Phase 4: Fix loop (up to 3 retries) + sleep 5m → /fix-pr → push → sleep 5m → check CI + if CI green → break +Phase 5: Merge gh pr merge --squash +Phase 6: Sync git checkout main && git pull +``` + +## Ordering + +1. All `[Model]` issues first (ascending issue number) +2. All `[Rule]` issues second (ascending issue number) + +No DAG — models-first is sufficient since rules depend on models. + +## Error Handling + +Every failure → log + skip to next issue. Never block the batch. + +| Phase | Failure | Action | +|-------|---------|--------| +| Plan | Validation fails | Skip | +| Execute | Subprocess exits non-zero | Skip | +| Fix loop | 3 retries exhausted | Leave PR open, skip | +| Merge | Conflict | Leave PR open, skip | + +## Parameters + +- `MAX_RETRIES = 3` +- `CI_WAIT = 5 minutes` +- Auto-merge: yes (squash) +- Summary table printed at end + +## Design Decisions + +- **Why outer orchestrator?** Each `make run-plan` gets a fresh 500-turn context. The outer session just monitors and coordinates. +- **Why models-first only?** Rules rarely depend on each other. If a rule's source model is missing, `issue-to-pr` validation catches it and skips. +- **Why 3 retries?** Most fixable issues resolve in 1-2 rounds. More retries burn tokens on genuinely hard problems. +- **Why auto-merge?** Full CI + Copilot review provides sufficient quality gate. The point of the skill is batch autonomy. 
diff --git a/docs/src/reductions/reduction_graph.json b/docs/src/reductions/reduction_graph.json index 7606b7fd..9cf34c21 100644 --- a/docs/src/reductions/reduction_graph.json +++ b/docs/src/reductions/reduction_graph.json @@ -1,5 +1,19 @@ { "nodes": [ + { + "name": "BMF", + "variant": {}, + "category": "specialized", + "doc_path": "models/specialized/struct.BMF.html", + "complexity": "2^(rows * rank + rank * cols)" + }, + { + "name": "BicliqueCover", + "variant": {}, + "category": "specialized", + "doc_path": "models/specialized/struct.BicliqueCover.html", + "complexity": "2^num_vertices" + }, { "name": "BinPacking", "variant": { @@ -23,7 +37,7 @@ "variant": {}, "category": "specialized", "doc_path": "models/specialized/struct.CircuitSAT.html", - "complexity": "2^num_inputs" + "complexity": "2^num_variables" }, { "name": "ClosestVectorProblem", @@ -32,7 +46,7 @@ }, "category": "optimization", "doc_path": "models/optimization/struct.ClosestVectorProblem.html", - "complexity": "exp(num_basis_vectors)" + "complexity": "2^num_basis_vectors" }, { "name": "ClosestVectorProblem", @@ -41,21 +55,21 @@ }, "category": "optimization", "doc_path": "models/optimization/struct.ClosestVectorProblem.html", - "complexity": "exp(num_basis_vectors)" + "complexity": "2^num_basis_vectors" }, { "name": "Factoring", "variant": {}, "category": "specialized", "doc_path": "models/specialized/struct.Factoring.html", - "complexity": "exp(sqrt(num_bits))" + "complexity": "exp((m + n)^(1/3) * log(m + n)^(2/3))" }, { "name": "ILP", "variant": {}, "category": "optimization", "doc_path": "models/optimization/struct.ILP.html", - "complexity": "exp(num_variables)" + "complexity": "num_variables^num_variables" }, { "name": "KColoring", @@ -95,7 +109,7 @@ }, "category": "graph", "doc_path": "models/graph/struct.KColoring.html", - "complexity": "(2-epsilon)^num_vertices" + "complexity": "2^num_vertices" }, { "name": "KColoring", @@ -123,7 +137,7 @@ }, "category": "satisfiability", "doc_path": 
"models/satisfiability/struct.KSatisfiability.html", - "complexity": "2^num_variables" + "complexity": "1.307^num_variables" }, { "name": "KSatisfiability", @@ -142,7 +156,7 @@ }, "category": "graph", "doc_path": "models/graph/struct.MaxCut.html", - "complexity": "2^num_vertices" + "complexity": "2^(2.372 * num_vertices / 3)" }, { "name": "MaximalIS", @@ -152,7 +166,7 @@ }, "category": "graph", "doc_path": "models/graph/struct.MaximalIS.html", - "complexity": "2^num_vertices" + "complexity": "3^(num_vertices / 3)" }, { "name": "MaximumClique", @@ -162,7 +176,7 @@ }, "category": "graph", "doc_path": "models/graph/struct.MaximumClique.html", - "complexity": "2^num_vertices" + "complexity": "1.1996^num_vertices" }, { "name": "MaximumIndependentSet", @@ -172,7 +186,7 @@ }, "category": "graph", "doc_path": "models/graph/struct.MaximumIndependentSet.html", - "complexity": "2^num_vertices" + "complexity": "2^sqrt(num_vertices)" }, { "name": "MaximumIndependentSet", @@ -182,7 +196,7 @@ }, "category": "graph", "doc_path": "models/graph/struct.MaximumIndependentSet.html", - "complexity": "2^num_vertices" + "complexity": "2^sqrt(num_vertices)" }, { "name": "MaximumIndependentSet", @@ -192,7 +206,7 @@ }, "category": "graph", "doc_path": "models/graph/struct.MaximumIndependentSet.html", - "complexity": "2^num_vertices" + "complexity": "1.1996^num_vertices" }, { "name": "MaximumIndependentSet", @@ -202,7 +216,7 @@ }, "category": "graph", "doc_path": "models/graph/struct.MaximumIndependentSet.html", - "complexity": "2^num_vertices" + "complexity": "1.1996^num_vertices" }, { "name": "MaximumIndependentSet", @@ -212,7 +226,7 @@ }, "category": "graph", "doc_path": "models/graph/struct.MaximumIndependentSet.html", - "complexity": "2^num_vertices" + "complexity": "2^sqrt(num_vertices)" }, { "name": "MaximumIndependentSet", @@ -222,7 +236,7 @@ }, "category": "graph", "doc_path": "models/graph/struct.MaximumIndependentSet.html", - "complexity": "2^num_vertices" + "complexity": 
"2^sqrt(num_vertices)" }, { "name": "MaximumIndependentSet", @@ -232,7 +246,7 @@ }, "category": "graph", "doc_path": "models/graph/struct.MaximumIndependentSet.html", - "complexity": "2^num_vertices" + "complexity": "2^sqrt(num_vertices)" }, { "name": "MaximumMatching", @@ -279,7 +293,7 @@ }, "category": "graph", "doc_path": "models/graph/struct.MinimumDominatingSet.html", - "complexity": "2^num_vertices" + "complexity": "1.4969^num_vertices" }, { "name": "MinimumSetCovering", @@ -298,7 +312,14 @@ }, "category": "graph", "doc_path": "models/graph/struct.MinimumVertexCover.html", - "complexity": "2^num_vertices" + "complexity": "1.1996^num_vertices" + }, + { + "name": "PaintShop", + "variant": {}, + "category": "specialized", + "doc_path": "models/specialized/struct.PaintShop.html", + "complexity": "2^num_cars" }, { "name": "QUBO", @@ -324,7 +345,7 @@ }, "category": "optimization", "doc_path": "models/optimization/struct.SpinGlass.html", - "complexity": "2^num_vertices" + "complexity": "2^num_spins" }, { "name": "SpinGlass", @@ -334,7 +355,7 @@ }, "category": "optimization", "doc_path": "models/optimization/struct.SpinGlass.html", - "complexity": "2^num_vertices" + "complexity": "2^num_spins" }, { "name": "TravelingSalesman", @@ -344,13 +365,13 @@ }, "category": "graph", "doc_path": "models/graph/struct.TravelingSalesman.html", - "complexity": "num_vertices!" 
+ "complexity": "2^num_vertices" } ], "edges": [ { - "source": 2, - "target": 6, + "source": 4, + "target": 8, "overhead": [ { "field": "num_vars", @@ -364,8 +385,8 @@ "doc_path": "rules/circuit_ilp/index.html" }, { - "source": 2, - "target": 35, + "source": 4, + "target": 38, "overhead": [ { "field": "num_spins", @@ -379,8 +400,8 @@ "doc_path": "rules/circuit_spinglass/index.html" }, { - "source": 5, - "target": 2, + "source": 7, + "target": 4, "overhead": [ { "field": "num_variables", @@ -394,8 +415,8 @@ "doc_path": "rules/factoring_circuit/index.html" }, { - "source": 5, - "target": 6, + "source": 7, + "target": 8, "overhead": [ { "field": "num_vars", @@ -409,8 +430,8 @@ "doc_path": "rules/factoring_ilp/index.html" }, { - "source": 6, - "target": 32, + "source": 8, + "target": 35, "overhead": [ { "field": "num_vars", @@ -420,8 +441,8 @@ "doc_path": "rules/ilp_qubo/index.html" }, { - "source": 8, - "target": 11, + "source": 10, + "target": 13, "overhead": [ { "field": "num_vertices", @@ -435,8 +456,8 @@ "doc_path": "rules/kcoloring_casts/index.html" }, { - "source": 11, - "target": 6, + "source": 13, + "target": 8, "overhead": [ { "field": "num_vars", @@ -450,8 +471,8 @@ "doc_path": "rules/coloring_ilp/index.html" }, { - "source": 11, - "target": 32, + "source": 13, + "target": 35, "overhead": [ { "field": "num_vars", @@ -461,8 +482,8 @@ "doc_path": "rules/coloring_qubo/index.html" }, { - "source": 12, - "target": 14, + "source": 14, + "target": 16, "overhead": [ { "field": "num_vars", @@ -476,8 +497,8 @@ "doc_path": "rules/ksatisfiability_casts/index.html" }, { - "source": 12, - "target": 32, + "source": 14, + "target": 35, "overhead": [ { "field": "num_vars", @@ -487,8 +508,8 @@ "doc_path": "rules/ksatisfiability_qubo/index.html" }, { - "source": 12, - "target": 33, + "source": 14, + "target": 36, "overhead": [ { "field": "num_clauses", @@ -506,8 +527,8 @@ "doc_path": "rules/sat_ksat/index.html" }, { - "source": 13, - "target": 14, + "source": 15, + "target": 
16, "overhead": [ { "field": "num_vars", @@ -521,8 +542,8 @@ "doc_path": "rules/ksatisfiability_casts/index.html" }, { - "source": 13, - "target": 32, + "source": 15, + "target": 35, "overhead": [ { "field": "num_vars", @@ -532,8 +553,8 @@ "doc_path": "rules/ksatisfiability_qubo/index.html" }, { - "source": 13, - "target": 33, + "source": 15, + "target": 36, "overhead": [ { "field": "num_clauses", @@ -551,8 +572,8 @@ "doc_path": "rules/sat_ksat/index.html" }, { - "source": 14, - "target": 33, + "source": 16, + "target": 36, "overhead": [ { "field": "num_clauses", @@ -570,8 +591,8 @@ "doc_path": "rules/sat_ksat/index.html" }, { - "source": 15, - "target": 35, + "source": 17, + "target": 38, "overhead": [ { "field": "num_spins", @@ -585,8 +606,8 @@ "doc_path": "rules/spinglass_maxcut/index.html" }, { - "source": 17, - "target": 6, + "source": 19, + "target": 8, "overhead": [ { "field": "num_vars", @@ -600,8 +621,8 @@ "doc_path": "rules/maximumclique_ilp/index.html" }, { - "source": 18, - "target": 19, + "source": 20, + "target": 21, "overhead": [ { "field": "num_vertices", @@ -615,8 +636,8 @@ "doc_path": "rules/maximumindependentset_casts/index.html" }, { - "source": 18, - "target": 23, + "source": 20, + "target": 25, "overhead": [ { "field": "num_vertices", @@ -630,8 +651,8 @@ "doc_path": "rules/maximumindependentset_casts/index.html" }, { - "source": 19, - "target": 24, + "source": 21, + "target": 26, "overhead": [ { "field": "num_vertices", @@ -645,8 +666,8 @@ "doc_path": "rules/maximumindependentset_casts/index.html" }, { - "source": 20, - "target": 18, + "source": 22, + "target": 20, "overhead": [ { "field": "num_vertices", @@ -660,8 +681,8 @@ "doc_path": "rules/maximumindependentset_gridgraph/index.html" }, { - "source": 20, - "target": 19, + "source": 22, + "target": 21, "overhead": [ { "field": "num_vertices", @@ -675,8 +696,8 @@ "doc_path": "rules/maximumindependentset_gridgraph/index.html" }, { - "source": 20, - "target": 21, + "source": 22, + "target": 23, 
"overhead": [ { "field": "num_vertices", @@ -690,8 +711,8 @@ "doc_path": "rules/maximumindependentset_casts/index.html" }, { - "source": 20, - "target": 22, + "source": 22, + "target": 24, "overhead": [ { "field": "num_vertices", @@ -705,8 +726,8 @@ "doc_path": "rules/maximumindependentset_triangular/index.html" }, { - "source": 20, - "target": 26, + "source": 22, + "target": 28, "overhead": [ { "field": "num_sets", @@ -720,8 +741,8 @@ "doc_path": "rules/maximumindependentset_maximumsetpacking/index.html" }, { - "source": 21, - "target": 6, + "source": 23, + "target": 8, "overhead": [ { "field": "num_vars", @@ -735,8 +756,8 @@ "doc_path": "rules/maximumindependentset_ilp/index.html" }, { - "source": 21, - "target": 28, + "source": 23, + "target": 30, "overhead": [ { "field": "num_sets", @@ -750,8 +771,8 @@ "doc_path": "rules/maximumindependentset_maximumsetpacking/index.html" }, { - "source": 21, - "target": 31, + "source": 23, + "target": 33, "overhead": [ { "field": "num_vertices", @@ -765,8 +786,8 @@ "doc_path": "rules/minimumvertexcover_maximumindependentset/index.html" }, { - "source": 21, - "target": 32, + "source": 23, + "target": 35, "overhead": [ { "field": "num_vars", @@ -776,8 +797,8 @@ "doc_path": "rules/maximumindependentset_qubo/index.html" }, { - "source": 22, - "target": 24, + "source": 24, + "target": 26, "overhead": [ { "field": "num_vertices", @@ -791,8 +812,8 @@ "doc_path": "rules/maximumindependentset_casts/index.html" }, { - "source": 23, - "target": 20, + "source": 25, + "target": 22, "overhead": [ { "field": "num_vertices", @@ -806,8 +827,8 @@ "doc_path": "rules/maximumindependentset_casts/index.html" }, { - "source": 23, - "target": 24, + "source": 25, + "target": 26, "overhead": [ { "field": "num_vertices", @@ -821,8 +842,8 @@ "doc_path": "rules/maximumindependentset_casts/index.html" }, { - "source": 24, - "target": 21, + "source": 26, + "target": 23, "overhead": [ { "field": "num_vertices", @@ -836,8 +857,8 @@ "doc_path": 
"rules/maximumindependentset_casts/index.html" }, { - "source": 25, - "target": 6, + "source": 27, + "target": 8, "overhead": [ { "field": "num_vars", @@ -851,8 +872,8 @@ "doc_path": "rules/maximummatching_ilp/index.html" }, { - "source": 25, - "target": 28, + "source": 27, + "target": 30, "overhead": [ { "field": "num_sets", @@ -866,8 +887,8 @@ "doc_path": "rules/maximummatching_maximumsetpacking/index.html" }, { - "source": 26, - "target": 20, + "source": 28, + "target": 22, "overhead": [ { "field": "num_vertices", @@ -881,8 +902,8 @@ "doc_path": "rules/maximumindependentset_maximumsetpacking/index.html" }, { - "source": 26, - "target": 28, + "source": 28, + "target": 30, "overhead": [ { "field": "num_sets", @@ -896,8 +917,8 @@ "doc_path": "rules/maximumsetpacking_casts/index.html" }, { - "source": 27, - "target": 32, + "source": 29, + "target": 35, "overhead": [ { "field": "num_vars", @@ -907,8 +928,8 @@ "doc_path": "rules/maximumsetpacking_qubo/index.html" }, { - "source": 28, - "target": 6, + "source": 30, + "target": 8, "overhead": [ { "field": "num_vars", @@ -922,8 +943,8 @@ "doc_path": "rules/maximumsetpacking_ilp/index.html" }, { - "source": 28, - "target": 21, + "source": 30, + "target": 23, "overhead": [ { "field": "num_vertices", @@ -937,8 +958,8 @@ "doc_path": "rules/maximumindependentset_maximumsetpacking/index.html" }, { - "source": 28, - "target": 27, + "source": 30, + "target": 29, "overhead": [ { "field": "num_sets", @@ -952,8 +973,8 @@ "doc_path": "rules/maximumsetpacking_casts/index.html" }, { - "source": 29, - "target": 6, + "source": 31, + "target": 8, "overhead": [ { "field": "num_vars", @@ -967,8 +988,8 @@ "doc_path": "rules/minimumdominatingset_ilp/index.html" }, { - "source": 30, - "target": 6, + "source": 32, + "target": 8, "overhead": [ { "field": "num_vars", @@ -982,8 +1003,8 @@ "doc_path": "rules/minimumsetcovering_ilp/index.html" }, { - "source": 31, - "target": 6, + "source": 33, + "target": 8, "overhead": [ { "field": "num_vars", @@ 
-997,8 +1018,8 @@ "doc_path": "rules/minimumvertexcover_ilp/index.html" }, { - "source": 31, - "target": 21, + "source": 33, + "target": 23, "overhead": [ { "field": "num_vertices", @@ -1012,8 +1033,8 @@ "doc_path": "rules/minimumvertexcover_maximumindependentset/index.html" }, { - "source": 31, - "target": 30, + "source": 33, + "target": 32, "overhead": [ { "field": "num_sets", @@ -1027,8 +1048,8 @@ "doc_path": "rules/minimumvertexcover_minimumsetcovering/index.html" }, { - "source": 31, - "target": 32, + "source": 33, + "target": 35, "overhead": [ { "field": "num_vars", @@ -1038,8 +1059,8 @@ "doc_path": "rules/minimumvertexcover_qubo/index.html" }, { - "source": 32, - "target": 6, + "source": 35, + "target": 8, "overhead": [ { "field": "num_vars", @@ -1053,8 +1074,8 @@ "doc_path": "rules/qubo_ilp/index.html" }, { - "source": 32, - "target": 34, + "source": 35, + "target": 37, "overhead": [ { "field": "num_spins", @@ -1064,8 +1085,8 @@ "doc_path": "rules/spinglass_qubo/index.html" }, { - "source": 33, - "target": 2, + "source": 36, + "target": 4, "overhead": [ { "field": "num_variables", @@ -1079,8 +1100,8 @@ "doc_path": "rules/sat_circuitsat/index.html" }, { - "source": 33, - "target": 8, + "source": 36, + "target": 10, "overhead": [ { "field": "num_vertices", @@ -1094,8 +1115,8 @@ "doc_path": "rules/sat_coloring/index.html" }, { - "source": 33, - "target": 13, + "source": 36, + "target": 15, "overhead": [ { "field": "num_clauses", @@ -1109,8 +1130,8 @@ "doc_path": "rules/sat_ksat/index.html" }, { - "source": 33, - "target": 20, + "source": 36, + "target": 22, "overhead": [ { "field": "num_vertices", @@ -1124,8 +1145,8 @@ "doc_path": "rules/sat_maximumindependentset/index.html" }, { - "source": 33, - "target": 29, + "source": 36, + "target": 31, "overhead": [ { "field": "num_vertices", @@ -1139,8 +1160,8 @@ "doc_path": "rules/sat_minimumdominatingset/index.html" }, { - "source": 34, - "target": 32, + "source": 37, + "target": 35, "overhead": [ { "field": 
"num_vars", @@ -1150,8 +1171,8 @@ "doc_path": "rules/spinglass_qubo/index.html" }, { - "source": 35, - "target": 15, + "source": 38, + "target": 17, "overhead": [ { "field": "num_vertices", @@ -1165,8 +1186,8 @@ "doc_path": "rules/spinglass_maxcut/index.html" }, { - "source": 35, - "target": 34, + "source": 38, + "target": 37, "overhead": [ { "field": "num_spins", @@ -1180,8 +1201,8 @@ "doc_path": "rules/spinglass_casts/index.html" }, { - "source": 36, - "target": 6, + "source": 39, + "target": 8, "overhead": [ { "field": "num_vars", diff --git a/problemreductions-macros/src/lib.rs b/problemreductions-macros/src/lib.rs index 1e0cbb55..7226c1c2 100644 --- a/problemreductions-macros/src/lib.rs +++ b/problemreductions-macros/src/lib.rs @@ -1,7 +1,9 @@ //! Procedural macros for problemreductions. //! //! This crate provides the `#[reduction]` attribute macro that automatically -//! generates `ReductionEntry` registrations from `ReduceTo` impl blocks. +//! generates `ReductionEntry` registrations from `ReduceTo` impl blocks, +//! and the `declare_variants!` proc macro for compile-time validated variant +//! registration. pub(crate) mod parser; @@ -218,6 +220,38 @@ fn generate_parsed_overhead(fields: &[(String, String)]) -> syn::Result syn::Result { + let src_ident = syn::Ident::new("__src", proc_macro2::Span::call_site()); + + let mut field_eval_tokens = Vec::new(); + for (field_name, expr_str) in fields { + let parsed = parser::parse_expr(expr_str).map_err(|e| { + syn::Error::new( + proc_macro2::Span::call_site(), + format!("error parsing overhead expression \"{expr_str}\": {e}"), + ) + })?; + + let eval_tokens = parsed.to_eval_tokens(&src_ident); + let name_lit = field_name.as_str(); + field_eval_tokens.push(quote! { (#name_lit, (#eval_tokens).round() as usize) }); + } + + Ok(quote! 
{ + |__any_src: &dyn std::any::Any| -> crate::types::ProblemSize { + let #src_ident = __any_src.downcast_ref::<#source_type>().unwrap(); + crate::types::ProblemSize::new(vec![#(#field_eval_tokens),*]) + } + }) +} + /// Generate the reduction entry code fn generate_reduction_entry( attrs: &ReductionAttrs, @@ -249,11 +283,28 @@ fn generate_reduction_entry( let source_variant_body = make_variant_fn_body(source_type, &type_generics)?; let target_variant_body = make_variant_fn_body(&target_type, &type_generics)?; - // Generate overhead or use default - let overhead = match &attrs.overhead { - Some(OverheadSpec::Legacy(tokens)) => tokens.clone(), - Some(OverheadSpec::Parsed(fields)) => generate_parsed_overhead(fields)?, - None => quote! { crate::rules::registry::ReductionOverhead::default() }, + // Generate overhead and eval fn + let (overhead, overhead_eval_fn) = match &attrs.overhead { + Some(OverheadSpec::Legacy(tokens)) => { + let eval_fn = quote! { + |_: &dyn std::any::Any| -> crate::types::ProblemSize { + panic!("overhead_eval_fn not available for legacy overhead syntax; \ + migrate to parsed syntax: field = \"expression\"") + } + }; + (tokens.clone(), eval_fn) + } + Some(OverheadSpec::Parsed(fields)) => { + let overhead_tokens = generate_parsed_overhead(fields)?; + let eval_fn = generate_overhead_eval_fn(fields, source_type)?; + (overhead_tokens, eval_fn) + } + None => { + return Err(syn::Error::new( + proc_macro2::Span::call_site(), + "Missing overhead specification. Use #[reduction(overhead = { ... })] and specify overhead expressions for all target problem size fields.", + )); + } }; // Generate the combined output @@ -278,6 +329,7 @@ fn generate_reduction_entry( }); Box::new(<#source_type as crate::rules::ReduceTo<#target_type>>::reduce_to(src)) }, + overhead_eval_fn: #overhead_eval_fn, } } @@ -315,3 +367,143 @@ fn extract_target_from_trait(path: &Path) -> syn::Result { "Expected ReduceTo with type parameter", )) } + +// --- declare_variants! 
proc macro --- + +/// Input for the `declare_variants!` proc macro. +struct DeclareVariantsInput { + entries: Vec<DeclareVariantEntry>, +} + +/// A single entry: `Type => "complexity_string"`. +struct DeclareVariantEntry { + ty: Type, + complexity: syn::LitStr, +} + +impl syn::parse::Parse for DeclareVariantsInput { + fn parse(input: syn::parse::ParseStream) -> syn::Result<Self> { + let mut entries = Vec::new(); + while !input.is_empty() { + let ty: Type = input.parse()?; + input.parse::<syn::Token![=>]>()?; + let complexity: syn::LitStr = input.parse()?; + entries.push(DeclareVariantEntry { ty, complexity }); + + if input.peek(syn::Token![,]) { + input.parse::<syn::Token![,]>()?; + } + } + Ok(DeclareVariantsInput { entries }) + } +} + +/// Declare explicit problem variants with per-variant complexity metadata. +/// +/// Each entry generates: +/// 1. A `DeclaredVariant` trait impl for compile-time checking +/// 2. A `VariantEntry` inventory submission for runtime graph building +/// 3. A compiled `complexity_eval_fn` that calls getter methods +/// 4. A const validation block verifying all variable names are valid getters +/// +/// Complexity strings must use only numeric literals and getter method names. +/// Mathematical constants (epsilon, omega, etc.) should be inlined as numbers +/// and documented in comments or docstrings. +/// +/// # Example +/// +/// ```ignore +/// declare_variants! { +/// MaximumIndependentSet => "1.1996^num_vertices", +/// MaximumIndependentSet => "2^sqrt(num_vertices)", +/// } +/// ``` +#[proc_macro] +pub fn declare_variants(input: TokenStream) -> TokenStream { + let input = parse_macro_input!(input as DeclareVariantsInput); + match generate_declare_variants(&input) { + Ok(tokens) => tokens.into(), + Err(e) => e.to_compile_error().into(), + } +} + +/// Generate code for all `declare_variants!` entries.
+fn generate_declare_variants(input: &DeclareVariantsInput) -> syn::Result<TokenStream2> { + let mut output = TokenStream2::new(); + + for entry in &input.entries { + let ty = &entry.ty; + let complexity_str = entry.complexity.value(); + + // Parse the complexity expression to validate syntax + let parsed = parser::parse_expr(&complexity_str).map_err(|e| { + syn::Error::new( + entry.complexity.span(), + format!("invalid complexity expression \"{complexity_str}\": {e}"), + ) + })?; + + // Generate getter validation for all variables + let vars = parsed.variables(); + let validation = if vars.is_empty() { + quote! {} + } else { + let src_ident = syn::Ident::new("__src", proc_macro2::Span::call_site()); + let getter_checks: Vec<_> = vars + .iter() + .map(|var| { + let getter = syn::Ident::new(var, proc_macro2::Span::call_site()); + quote! { let _ = #src_ident.#getter(); } + }) + .collect(); + + quote! { + const _: () = { + #[allow(unused)] + fn _validate_complexity(#src_ident: &#ty) { + #(#getter_checks)* + } + }; + } + }; + + // Generate compiled complexity eval fn + let complexity_eval_fn = generate_complexity_eval_fn(&parsed, ty)?; + + output.extend(quote! { + impl crate::traits::DeclaredVariant for #ty {} + + crate::inventory::submit! { + crate::registry::VariantEntry { + name: <#ty as crate::traits::Problem>::NAME, + variant_fn: || <#ty as crate::traits::Problem>::variant(), + complexity: #complexity_str, + complexity_eval_fn: #complexity_eval_fn, + } + } + + #validation + }); + } + + Ok(output) +} + +/// Generate a compiled complexity evaluation function. +/// +/// Produces a closure that downcasts `&dyn Any` to the problem type, calls getter +/// methods for all variables, and returns the worst-case time complexity as f64. +fn generate_complexity_eval_fn( + parsed: &parser::ParsedExpr, + ty: &Type, +) -> syn::Result<TokenStream2> { + let src_ident = syn::Ident::new("__src", proc_macro2::Span::call_site()); + let eval_tokens = parsed.to_eval_tokens(&src_ident); + + Ok(quote!
{ + |__any_src: &dyn std::any::Any| -> f64 { + let #src_ident = __any_src.downcast_ref::<#ty>().unwrap(); + #eval_tokens + } + }) +} diff --git a/problemreductions-macros/src/parser.rs b/problemreductions-macros/src/parser.rs index 6a932166..c73567e8 100644 --- a/problemreductions-macros/src/parser.rs +++ b/problemreductions-macros/src/parser.rs @@ -236,7 +236,6 @@ pub fn parse_expr(input: &str) -> Result { Ok(expr) } -#[allow(dead_code)] impl ParsedExpr { /// Generate TokenStream that constructs an `Expr` value. pub fn to_expr_tokens(&self) -> TokenStream { diff --git a/src/expr.rs b/src/expr.rs index e81035d2..96a64768 100644 --- a/src/expr.rs +++ b/src/expr.rs @@ -103,6 +103,21 @@ impl Expr { } } + /// Parse an expression string into an `Expr` at runtime. + /// + /// **Memory note:** Variable names are leaked to `&'static str` via `Box::leak` + /// since `Expr::Var` requires static lifetimes. Each unique variable name leaks + /// a small allocation that is never freed. This is acceptable for testing and + /// one-time cross-check evaluation, but should not be used in hot loops with + /// dynamic input. + /// + /// # Panics + /// Panics if the expression string has invalid syntax. + pub fn parse(input: &str) -> Expr { + parse_to_expr(input) + .unwrap_or_else(|e| panic!("failed to parse expression \"{input}\": {e}")) + } + /// Check if this expression is a polynomial (no exp/log/sqrt, integer exponents only). pub fn is_polynomial(&self) -> bool { match self { @@ -166,6 +181,211 @@ impl std::ops::Add for Expr { } } +// --- Runtime expression parser --- + +/// Parse an expression string into an `Expr`. +/// +/// Uses the same grammar as the proc macro parser. Variable names are leaked +/// to `&'static str` for compatibility with `Expr::Var`. 
+fn parse_to_expr(input: &str) -> Result<Expr, String> { + let tokens = tokenize_expr(input)?; + let mut parser = ExprParser::new(tokens); + let expr = parser.parse_additive()?; + if parser.pos != parser.tokens.len() { + return Err(format!("trailing tokens at position {}", parser.pos)); + } + Ok(expr) +} + +#[derive(Debug, Clone, PartialEq)] +enum ExprToken { + Number(f64), + Ident(String), + Plus, + Minus, + Star, + Slash, + Caret, + LParen, + RParen, +} + +fn tokenize_expr(input: &str) -> Result<Vec<ExprToken>, String> { + let mut tokens = Vec::new(); + let mut chars = input.chars().peekable(); + while let Some(&ch) = chars.peek() { + match ch { + ' ' | '\t' | '\n' => { + chars.next(); + } + '+' => { + chars.next(); + tokens.push(ExprToken::Plus); + } + '-' => { + chars.next(); + tokens.push(ExprToken::Minus); + } + '*' => { + chars.next(); + tokens.push(ExprToken::Star); + } + '/' => { + chars.next(); + tokens.push(ExprToken::Slash); + } + '^' => { + chars.next(); + tokens.push(ExprToken::Caret); + } + '(' => { + chars.next(); + tokens.push(ExprToken::LParen); + } + ')' => { + chars.next(); + tokens.push(ExprToken::RParen); + } + c if c.is_ascii_digit() || c == '.' => { + let mut num = String::new(); + while let Some(&c) = chars.peek() { + if c.is_ascii_digit() || c == '.'
{ + num.push(c); + chars.next(); + } else { + break; + } + } + tokens.push(ExprToken::Number( + num.parse().map_err(|_| format!("invalid number: {num}"))?, + )); + } + c if c.is_ascii_alphabetic() || c == '_' => { + let mut ident = String::new(); + while let Some(&c) = chars.peek() { + if c.is_ascii_alphanumeric() || c == '_' { + ident.push(c); + chars.next(); + } else { + break; + } + } + tokens.push(ExprToken::Ident(ident)); + } + _ => return Err(format!("unexpected character: '{ch}'")), + } + } + Ok(tokens) +} + +struct ExprParser { + tokens: Vec<ExprToken>, + pos: usize, +} + +impl ExprParser { + fn new(tokens: Vec<ExprToken>) -> Self { + Self { tokens, pos: 0 } + } + + fn peek(&self) -> Option<&ExprToken> { + self.tokens.get(self.pos) + } + + fn advance(&mut self) -> Option<ExprToken> { + let tok = self.tokens.get(self.pos).cloned(); + self.pos += 1; + tok + } + + fn expect(&mut self, expected: &ExprToken) -> Result<(), String> { + match self.advance() { + Some(ref tok) if tok == expected => Ok(()), + Some(tok) => Err(format!("expected {expected:?}, got {tok:?}")), + None => Err(format!("expected {expected:?}, got end of input")), + } + } + + fn parse_additive(&mut self) -> Result<Expr, String> { + let mut left = self.parse_multiplicative()?; + while matches!(self.peek(), Some(ExprToken::Plus) | Some(ExprToken::Minus)) { + let op = self.advance().unwrap(); + let right = self.parse_multiplicative()?; + left = match op { + ExprToken::Plus => Expr::add(left, right), + ExprToken::Minus => Expr::add(left, Expr::mul(Expr::Const(-1.0), right)), + _ => unreachable!(), + }; + } + Ok(left) + } + + fn parse_multiplicative(&mut self) -> Result<Expr, String> { + let mut left = self.parse_power()?; + while matches!(self.peek(), Some(ExprToken::Star) | Some(ExprToken::Slash)) { + let op = self.advance().unwrap(); + let right = self.parse_power()?; + left = match op { + ExprToken::Star => Expr::mul(left, right), + ExprToken::Slash => Expr::mul(left, Expr::pow(right, Expr::Const(-1.0))), + _ => unreachable!(), + }; + } + Ok(left) + } + 
+ fn parse_power(&mut self) -> Result<Expr, String> { + let base = self.parse_unary()?; + if matches!(self.peek(), Some(ExprToken::Caret)) { + self.advance(); + let exp = self.parse_power()?; // right-associative + Ok(Expr::pow(base, exp)) + } else { + Ok(base) + } + } + + fn parse_unary(&mut self) -> Result<Expr, String> { + if matches!(self.peek(), Some(ExprToken::Minus)) { + self.advance(); + let expr = self.parse_unary()?; + Ok(Expr::mul(Expr::Const(-1.0), expr)) + } else { + self.parse_primary() + } + } + + fn parse_primary(&mut self) -> Result<Expr, String> { + match self.advance() { + Some(ExprToken::Number(n)) => Ok(Expr::Const(n)), + Some(ExprToken::Ident(name)) => { + if matches!(self.peek(), Some(ExprToken::LParen)) { + self.advance(); + let arg = self.parse_additive()?; + self.expect(&ExprToken::RParen)?; + match name.as_str() { + "exp" => Ok(Expr::Exp(Box::new(arg))), + "log" => Ok(Expr::Log(Box::new(arg))), + "sqrt" => Ok(Expr::Sqrt(Box::new(arg))), + _ => Err(format!("unknown function: {name}")), + } + } else { + // Leak the string to get &'static str for Expr::Var + let leaked: &'static str = Box::leak(name.into_boxed_str()); + Ok(Expr::Var(leaked)) + } + } + Some(ExprToken::LParen) => { + let expr = self.parse_additive()?; + self.expect(&ExprToken::RParen)?; + Ok(expr) + } + Some(tok) => Err(format!("unexpected token: {tok:?}")), + None => Err("unexpected end of input".to_string()), + } + } +} + #[cfg(test)] #[path = "src/unit_tests/expr.rs"] mod tests; diff --git a/src/lib.rs b/src/lib.rs index 8a535cf1..33ca9c45 100644 --- a/src/lib.rs +++ b/src/lib.rs @@ -64,8 +64,8 @@ pub use types::{ Direction, NumericSize, One, ProblemSize, SolutionSize, Unweighted, WeightElement, }; -// Re-export proc macro for reduction registration -pub use problemreductions_macros::reduction; +// Re-export proc macros for reduction registration and variant declaration +pub use problemreductions_macros::{declare_variants, reduction}; // Re-export inventory so `declare_variants!` can use `$crate::inventory::submit!`
pub use inventory; diff --git a/src/models/graph/kcoloring.rs b/src/models/graph/kcoloring.rs index 7c5515f3..abe96618 100644 --- a/src/models/graph/kcoloring.rs +++ b/src/models/graph/kcoloring.rs @@ -188,7 +188,8 @@ crate::declare_variants! { KColoring => "num_vertices + num_edges", KColoring => "1.3289^num_vertices", KColoring => "1.7159^num_vertices", - KColoring => "(2-epsilon)^num_vertices", + // Best known: O*((2-ε)^n) for some ε > 0 (Zamir 2021), concrete ε unknown + KColoring => "2^num_vertices", } #[cfg(test)] diff --git a/src/models/graph/max_cut.rs b/src/models/graph/max_cut.rs index 3b8c9c21..09b9a8d4 100644 --- a/src/models/graph/max_cut.rs +++ b/src/models/graph/max_cut.rs @@ -215,7 +215,7 @@ where } crate::declare_variants! { - MaxCut => "2^num_vertices", + MaxCut => "2^(2.372 * num_vertices / 3)", } #[cfg(test)] diff --git a/src/models/graph/maximal_is.rs b/src/models/graph/maximal_is.rs index 9b39f89b..06a2ec4a 100644 --- a/src/models/graph/maximal_is.rs +++ b/src/models/graph/maximal_is.rs @@ -216,7 +216,7 @@ pub(crate) fn is_maximal_independent_set(graph: &G, selected: &[bool]) } crate::declare_variants! { - MaximalIS => "2^num_vertices", + MaximalIS => "3^(num_vertices / 3)", } #[cfg(test)] diff --git a/src/models/graph/maximum_clique.rs b/src/models/graph/maximum_clique.rs index 223aacec..0de42f13 100644 --- a/src/models/graph/maximum_clique.rs +++ b/src/models/graph/maximum_clique.rs @@ -171,7 +171,7 @@ fn is_clique_config(graph: &G, config: &[usize]) -> bool { } crate::declare_variants! { - MaximumClique => "2^num_vertices", + MaximumClique => "1.1996^num_vertices", } /// Check if a set of vertices forms a clique. 
diff --git a/src/models/graph/maximum_independent_set.rs b/src/models/graph/maximum_independent_set.rs index 36aaa8ae..c9dba5ab 100644 --- a/src/models/graph/maximum_independent_set.rs +++ b/src/models/graph/maximum_independent_set.rs @@ -160,13 +160,13 @@ fn is_independent_set_config(graph: &G, config: &[usize]) -> bool { } crate::declare_variants! { - MaximumIndependentSet => "2^num_vertices", - MaximumIndependentSet => "2^num_vertices", - MaximumIndependentSet => "2^num_vertices", - MaximumIndependentSet => "2^num_vertices", - MaximumIndependentSet => "2^num_vertices", - MaximumIndependentSet => "2^num_vertices", - MaximumIndependentSet => "2^num_vertices", + MaximumIndependentSet => "1.1996^num_vertices", + MaximumIndependentSet => "1.1996^num_vertices", + MaximumIndependentSet => "2^sqrt(num_vertices)", + MaximumIndependentSet => "2^sqrt(num_vertices)", + MaximumIndependentSet => "2^sqrt(num_vertices)", + MaximumIndependentSet => "2^sqrt(num_vertices)", + MaximumIndependentSet => "2^sqrt(num_vertices)", } /// Check if a set of vertices forms an independent set. diff --git a/src/models/graph/minimum_dominating_set.rs b/src/models/graph/minimum_dominating_set.rs index 65d77cdc..c665e452 100644 --- a/src/models/graph/minimum_dominating_set.rs +++ b/src/models/graph/minimum_dominating_set.rs @@ -170,7 +170,7 @@ where } crate::declare_variants! { - MinimumDominatingSet => "2^num_vertices", + MinimumDominatingSet => "1.4969^num_vertices", } /// Check if a set of vertices is a dominating set. diff --git a/src/models/graph/minimum_vertex_cover.rs b/src/models/graph/minimum_vertex_cover.rs index 60ed2060..4a441f72 100644 --- a/src/models/graph/minimum_vertex_cover.rs +++ b/src/models/graph/minimum_vertex_cover.rs @@ -157,7 +157,7 @@ fn is_vertex_cover_config(graph: &G, config: &[usize]) -> bool { } crate::declare_variants! { - MinimumVertexCover => "2^num_vertices", + MinimumVertexCover => "1.1996^num_vertices", } /// Check if a set of vertices forms a vertex cover. 
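The `declare_variants!` entries above each compile their complexity string into an evaluation closure over the model's getter methods. A standalone mini-version of that generated-closure shape, using an illustrative stand-in struct rather than the crate's actual generic model type:

```rust
// Illustrative stand-in for the closure declare_variants! generates from
// "1.1996^num_vertices": downcast &dyn Any to the problem type, call the
// getter named in the expression, return the time-complexity estimate as f64.
use std::any::Any;

struct MaximumIndependentSet {
    num_vertices: usize, // stand-in; the real model is generic over a graph type
}

impl MaximumIndependentSet {
    fn num_vertices(&self) -> usize {
        self.num_vertices
    }
}

// Shape mirrors the generated complexity_eval_fn: &dyn Any -> f64.
fn complexity_eval_fn(any_src: &dyn Any) -> f64 {
    let src = any_src.downcast_ref::<MaximumIndependentSet>().unwrap();
    1.1996_f64.powf(src.num_vertices() as f64)
}

fn main() {
    let p = MaximumIndependentSet { num_vertices: 10 };
    let t = complexity_eval_fn(&p);
    assert!((t - 1.1996_f64.powf(10.0)).abs() < 1e-9);
    println!("estimated work: {t:.3}");
}
```

In the real macro a misspelled getter fails at compile time via the const validation block, so the runtime downcast-and-call never sees an invalid variable name.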
diff --git a/src/models/graph/traveling_salesman.rs b/src/models/graph/traveling_salesman.rs index b66b16e1..7c7416a7 100644 --- a/src/models/graph/traveling_salesman.rs +++ b/src/models/graph/traveling_salesman.rs @@ -253,7 +253,7 @@ pub(crate) fn is_hamiltonian_cycle(graph: &G, selected: &[bool]) -> bo } crate::declare_variants! { - TravelingSalesman => "num_vertices!", + TravelingSalesman => "2^num_vertices", } #[cfg(test)] diff --git a/src/models/optimization/closest_vector_problem.rs b/src/models/optimization/closest_vector_problem.rs index ab3adc7a..e8883cd6 100644 --- a/src/models/optimization/closest_vector_problem.rs +++ b/src/models/optimization/closest_vector_problem.rs @@ -173,8 +173,8 @@ where } crate::declare_variants! { - ClosestVectorProblem => "exp(num_basis_vectors)", - ClosestVectorProblem => "exp(num_basis_vectors)", + ClosestVectorProblem => "2^num_basis_vectors", + ClosestVectorProblem => "2^num_basis_vectors", } #[cfg(test)] diff --git a/src/models/optimization/ilp.rs b/src/models/optimization/ilp.rs index 6a3f4416..cd538789 100644 --- a/src/models/optimization/ilp.rs +++ b/src/models/optimization/ilp.rs @@ -377,7 +377,7 @@ impl OptimizationProblem for ILP { } crate::declare_variants! { - ILP => "exp(num_variables)", + ILP => "num_variables^num_variables", } #[cfg(test)] diff --git a/src/models/optimization/spin_glass.rs b/src/models/optimization/spin_glass.rs index 81464d12..abeedb4c 100644 --- a/src/models/optimization/spin_glass.rs +++ b/src/models/optimization/spin_glass.rs @@ -251,8 +251,8 @@ where } crate::declare_variants! 
{ - SpinGlass => "2^num_vertices", - SpinGlass => "2^num_vertices", + SpinGlass => "2^num_spins", + SpinGlass => "2^num_spins", } #[cfg(test)] diff --git a/src/models/satisfiability/ksat.rs b/src/models/satisfiability/ksat.rs index 8f9506f9..f2f0a1ac 100644 --- a/src/models/satisfiability/ksat.rs +++ b/src/models/satisfiability/ksat.rs @@ -186,7 +186,7 @@ impl SatisfactionProblem for KSatisfiability {} crate::declare_variants! { KSatisfiability => "2^num_variables", KSatisfiability => "num_variables + num_clauses", - KSatisfiability => "2^num_variables", + KSatisfiability => "1.307^num_variables", } #[cfg(test)] diff --git a/src/models/specialized/biclique_cover.rs b/src/models/specialized/biclique_cover.rs index 2f4e6b5e..078c75b4 100644 --- a/src/models/specialized/biclique_cover.rs +++ b/src/models/specialized/biclique_cover.rs @@ -243,6 +243,10 @@ impl OptimizationProblem for BicliqueCover { } } +crate::declare_variants! { + BicliqueCover => "2^num_vertices", +} + #[cfg(test)] #[path = "../../unit_tests/models/specialized/biclique_cover.rs"] mod tests; diff --git a/src/models/specialized/bmf.rs b/src/models/specialized/bmf.rs index 768e0ee7..a7426044 100644 --- a/src/models/specialized/bmf.rs +++ b/src/models/specialized/bmf.rs @@ -230,6 +230,10 @@ impl OptimizationProblem for BMF { } } +crate::declare_variants! { + BMF => "2^(rows * rank + rank * cols)", +} + #[cfg(test)] #[path = "../../unit_tests/models/specialized/bmf.rs"] mod tests; diff --git a/src/models/specialized/circuit.rs b/src/models/specialized/circuit.rs index e352fd2e..024fd1b4 100644 --- a/src/models/specialized/circuit.rs +++ b/src/models/specialized/circuit.rs @@ -243,6 +243,11 @@ impl CircuitSAT { self.variables.len() } + /// Get the number of assignments (constraints) in the circuit. + pub fn num_assignments(&self) -> usize { + self.circuit.num_assignments() + } + /// Check if a configuration is a valid satisfying assignment. 
pub fn is_valid_solution(&self, config: &[usize]) -> bool { self.count_satisfied(config) == self.circuit.num_assignments() @@ -300,7 +305,7 @@ impl Problem for CircuitSAT { impl SatisfactionProblem for CircuitSAT {} crate::declare_variants! { - CircuitSAT => "2^num_inputs", + CircuitSAT => "2^num_variables", } #[cfg(test)] diff --git a/src/models/specialized/factoring.rs b/src/models/specialized/factoring.rs index 4aa83d90..66de6907 100644 --- a/src/models/specialized/factoring.rs +++ b/src/models/specialized/factoring.rs @@ -163,7 +163,7 @@ impl OptimizationProblem for Factoring { } crate::declare_variants! { - Factoring => "exp(sqrt(num_bits))", + Factoring => "exp((m + n)^(1/3) * log(m + n)^(2/3))", } #[cfg(test)] diff --git a/src/models/specialized/paintshop.rs b/src/models/specialized/paintshop.rs index 6d64e4df..2d21df3a 100644 --- a/src/models/specialized/paintshop.rs +++ b/src/models/specialized/paintshop.rs @@ -192,6 +192,10 @@ impl OptimizationProblem for PaintShop { } } +crate::declare_variants! { + PaintShop => "2^num_cars", +} + #[cfg(test)] #[path = "../../unit_tests/models/specialized/paintshop.rs"] mod tests; diff --git a/src/registry/variant.rs b/src/registry/variant.rs index d73a65e8..c5b25a7d 100644 --- a/src/registry/variant.rs +++ b/src/registry/variant.rs @@ -1,5 +1,7 @@ //! Explicit variant registration via inventory. +use std::any::Any; + /// A registered problem variant entry. /// /// Submitted by [`declare_variants!`] for each concrete problem type. @@ -11,6 +13,10 @@ pub struct VariantEntry { pub variant_fn: fn() -> Vec<(&'static str, &'static str)>, /// Worst-case time complexity expression (e.g., `"2^num_vertices"`). pub complexity: &'static str, + /// Compiled complexity evaluation function. + /// Takes a `&dyn Any` (must be `&ProblemType`), calls getter methods directly, + /// and returns the estimated worst-case time as f64. 
+ pub complexity_eval_fn: fn(&dyn Any) -> f64, } impl VariantEntry { diff --git a/src/rules/registry.rs b/src/rules/registry.rs index c129379d..42a79e4f 100644 --- a/src/rules/registry.rs +++ b/src/rules/registry.rs @@ -102,6 +102,10 @@ pub struct ReductionEntry { /// Takes a `&dyn Any` (must be `&SourceType`), calls `ReduceTo::reduce_to()`, /// and returns the result as a boxed `DynReductionResult`. pub reduce_fn: fn(&dyn Any) -> Box<dyn DynReductionResult>, + /// Compiled overhead evaluation function. + /// Takes a `&dyn Any` (must be `&SourceType`), calls getter methods directly, + /// and returns the computed target problem size. + pub overhead_eval_fn: fn(&dyn Any) -> ProblemSize, } impl ReductionEntry { diff --git a/src/unit_tests/expr.rs b/src/unit_tests/expr.rs index 71eb74ce..6851ac30 100644 --- a/src/unit_tests/expr.rs +++ b/src/unit_tests/expr.rs @@ -246,3 +246,357 @@ fn test_expr_variables_exp_log_sqrt() { let e = Expr::Sqrt(Box::new(Expr::Var("c"))); assert_eq!(e.variables(), HashSet::from(["c"])); } + +// --- Runtime parser tests (Expr::parse / parse_to_expr) --- + +/// Helper: parse and evaluate with given variable bindings. +fn parse_eval(input: &str, vars: &[(&str, usize)]) -> f64 { + let expr = Expr::parse(input); + let size = ProblemSize::new(vars.to_vec()); + expr.eval(&size) +} + +/// Like parse_eval but accepts f64 variable values for testing transcendental functions. +fn parse_eval_f64(input: &str, vars: &[(&str, f64)]) -> f64 { + let expr = Expr::parse(input); + // Build a ProblemSize-compatible evaluation by using substitute + eval. + // Since ProblemSize only stores usize, we substitute variables with Const nodes.
+ let mut mapping = std::collections::HashMap::new(); + let exprs: Vec<Expr> = vars.iter().map(|(_, v)| Expr::Const(*v)).collect(); + for ((name, _), expr) in vars.iter().zip(exprs.iter()) { + mapping.insert(*name, expr); + } + expr.substitute(&mapping).eval(&ProblemSize::new(vec![])) +} + +// -- Tokenizer coverage -- + +#[test] +fn test_parse_number_integer() { + assert_eq!(parse_eval("42", &[]), 42.0); +} + +#[test] +fn test_parse_number_decimal() { + assert!((parse_eval("1.1996", &[]) - 1.1996).abs() < 1e-10); +} + +#[test] +fn test_parse_variable() { + assert_eq!(parse_eval("n", &[("n", 7)]), 7.0); +} + +#[test] +fn test_parse_variable_with_underscore() { + assert_eq!(parse_eval("num_vertices", &[("num_vertices", 10)]), 10.0); +} + +#[test] +fn test_parse_whitespace_handling() { + // Tabs, spaces, newlines should all be skipped + assert_eq!(parse_eval(" n\t+\n m ", &[("n", 3), ("m", 4)]), 7.0); +} + +#[test] +fn test_parse_tokenize_invalid_char() { + assert!(parse_to_expr("n @ m").is_err()); +} + +#[test] +fn test_parse_tokenize_invalid_number() { + assert!(parse_to_expr("1.2.3").is_err()); +} + +// -- Additive: +, - -- + +#[test] +fn test_parse_addition() { + assert_eq!(parse_eval("n + 3", &[("n", 7)]), 10.0); +} + +#[test] +fn test_parse_subtraction() { + assert_eq!(parse_eval("n - 3", &[("n", 10)]), 7.0); +} + +#[test] +fn test_parse_chained_addition() { + assert_eq!( + parse_eval("a + b + c", &[("a", 1), ("b", 2), ("c", 3)]), + 6.0 + ); +} + +#[test] +fn test_parse_mixed_add_sub() { + assert_eq!( + parse_eval("a + b - c", &[("a", 10), ("b", 3), ("c", 5)]), + 8.0 + ); +} + +// -- Multiplicative: *, / -- + +#[test] +fn test_parse_multiplication() { + assert_eq!(parse_eval("3 * n", &[("n", 5)]), 15.0); +} + +#[test] +fn test_parse_division() { + assert_eq!(parse_eval("n / 2", &[("n", 10)]), 5.0); +} + +#[test] +fn test_parse_chained_multiplication() { + assert_eq!( + parse_eval("a * b * c", &[("a", 2), ("b", 3), ("c", 4)]), + 24.0 + ); +} + +#[test] +fn
test_parse_mixed_mul_div() { + assert_eq!(parse_eval("12 / 3 * 2", &[]), 8.0); +} + +// -- Power: ^ (right-associative) -- + +#[test] +fn test_parse_power() { + assert_eq!(parse_eval("n^2", &[("n", 4)]), 16.0); +} + +#[test] +fn test_parse_power_right_associative() { + // 2^3^2 = 2^(3^2) = 2^9 = 512, NOT (2^3)^2 = 64 + assert_eq!(parse_eval("2^3^2", &[]), 512.0); +} + +#[test] +fn test_parse_fractional_exponent() { + // 8^(1/3) = 2.0 + assert!((parse_eval("8^(1/3)", &[]) - 2.0).abs() < 1e-10); +} + +// -- Unary minus -- + +#[test] +fn test_parse_unary_minus() { + assert_eq!(parse_eval("-5", &[]), -5.0); +} + +#[test] +fn test_parse_unary_minus_variable() { + assert_eq!(parse_eval("-n", &[("n", 3)]), -3.0); +} + +#[test] +fn test_parse_double_unary_minus() { + // --n = -(-n) = n + assert_eq!(parse_eval("--n", &[("n", 7)]), 7.0); +} + +// -- Functions: exp, log, sqrt -- + +#[test] +fn test_parse_exp() { + assert!((parse_eval("exp(1)", &[]) - std::f64::consts::E).abs() < 1e-10); +} + +#[test] +fn test_parse_log() { + assert_eq!(parse_eval("log(1)", &[]), 0.0); + // log(e) = ln(e) = 1 + assert!((parse_eval_f64("log(x)", &[("x", std::f64::consts::E)]) - 1.0).abs() < 1e-10); +} + +#[test] +fn test_parse_sqrt() { + assert_eq!(parse_eval("sqrt(9)", &[]), 3.0); +} + +#[test] +fn test_parse_unknown_function() { + assert!(parse_to_expr("foo(3)").is_err()); + let err = parse_to_expr("foo(3)").unwrap_err(); + assert!(err.contains("unknown function"), "got: {err}"); +} + +#[test] +fn test_parse_nested_functions() { + // exp(log(n)) = n + assert!((parse_eval("exp(log(7))", &[]) - 7.0).abs() < 1e-10); +} + +#[test] +fn test_parse_function_with_complex_arg() { + // sqrt(n^2 + m^2) for 3-4-5 triangle + assert_eq!(parse_eval("sqrt(n^2 + m^2)", &[("n", 3), ("m", 4)]), 5.0); +} + +// -- Parentheses -- + +#[test] +fn test_parse_parenthesized_expression() { + // (n + m) * 2 + assert_eq!(parse_eval("(n + m) * 2", &[("n", 3), ("m", 4)]), 14.0); +} + +#[test] +fn 
test_parse_nested_parentheses() { + assert_eq!(parse_eval("((n + 1) * 2)", &[("n", 4)]), 10.0); +} + +// -- Operator precedence -- + +#[test] +fn test_parse_precedence_add_mul() { + // n + 3 * m = n + (3*m), not (n+3)*m + assert_eq!(parse_eval("n + 3 * m", &[("n", 1), ("m", 2)]), 7.0); +} + +#[test] +fn test_parse_precedence_mul_pow() { + // 3 * n^2 = 3 * (n^2), not (3*n)^2 + assert_eq!(parse_eval("3 * n^2", &[("n", 4)]), 48.0); +} + +#[test] +fn test_parse_precedence_unary_pow() { + // In our parser, unary minus binds tighter than ^: -n^2 = (-n)^2 + assert_eq!(parse_eval("-n^2", &[("n", 3)]), 9.0); + // Use parens for math convention: -(n^2) = -9 + assert_eq!(parse_eval("-(n^2)", &[("n", 3)]), -9.0); +} + +// -- Error cases -- + +#[test] +fn test_parse_trailing_tokens_error() { + let err = parse_to_expr("n m").unwrap_err(); + assert!(err.contains("trailing"), "got: {err}"); +} + +#[test] +fn test_parse_unexpected_token_error() { + let err = parse_to_expr(")").unwrap_err(); + assert!(err.contains("unexpected token"), "got: {err}"); +} + +#[test] +fn test_parse_empty_input_error() { + let err = parse_to_expr("").unwrap_err(); + assert!(err.contains("end of input"), "got: {err}"); +} + +#[test] +fn test_parse_unclosed_paren_error() { + let err = parse_to_expr("(n + m").unwrap_err(); + assert!(err.contains("expected"), "got: {err}"); +} + +#[test] +fn test_parse_unclosed_function_error() { + let err = parse_to_expr("exp(n").unwrap_err(); + assert!(err.contains("expected"), "got: {err}"); +} + +#[test] +fn test_parse_expect_mismatch() { + // "exp(n]" — expects RParen, gets unexpected token ']' + // Actually ']' is an invalid char so tokenizer catches it first. + // Use "exp(n +" to trigger expect mismatch (expects RParen, gets Plus). 
+ let err = parse_to_expr("exp(n +").unwrap_err(); + assert!( + err.contains("expected") || err.contains("end of input"), + "got: {err}" + ); +} + +#[test] +#[should_panic(expected = "failed to parse")] +fn test_parse_panics_on_invalid() { + Expr::parse("@@@"); +} + +// -- Real-world complexity strings -- + +#[test] +fn test_parse_real_complexity_mis() { + // "1.1996^num_vertices" — MIS best known + let val = parse_eval("1.1996^num_vertices", &[("num_vertices", 10)]); + assert!((val - 1.1996_f64.powf(10.0)).abs() < 1e-6); +} + +#[test] +fn test_parse_real_complexity_maxcut() { + // "2^(2.372 * num_vertices / 3)" — MaxCut + let val = parse_eval("2^(2.372 * num_vertices / 3)", &[("num_vertices", 9)]); + let expected = 2.0_f64.powf(2.372 * 9.0 / 3.0); + assert!((val - expected).abs() < 1e-6); +} + +#[test] +fn test_parse_real_complexity_factoring() { + // "exp((m + n)^(1/3) * log(m + n)^(2/3))" — GNFS + let val = parse_eval( + "exp((m + n)^(1/3) * log(m + n)^(2/3))", + &[("m", 8), ("n", 8)], + ); + let mn = 16.0_f64; + let expected = f64::exp(mn.powf(1.0 / 3.0) * f64::ln(mn).powf(2.0 / 3.0)); + assert!((val - expected).abs() < 1e-6); +} + +#[test] +fn test_parse_real_complexity_polynomial() { + // "num_vertices^3" — MaximumMatching + assert_eq!(parse_eval("num_vertices^3", &[("num_vertices", 5)]), 125.0); +} + +#[test] +fn test_parse_real_complexity_linear() { + // "num_vertices + num_edges" — 2-Coloring + assert_eq!( + parse_eval( + "num_vertices + num_edges", + &[("num_vertices", 10), ("num_edges", 15)] + ), + 25.0 + ); +} + +#[test] +fn test_parse_real_overhead_factoring() { + // "2 * num_bits_first + 2 * num_bits_second + num_bits_first * num_bits_second" + let val = parse_eval( + "2 * num_bits_first + 2 * num_bits_second + num_bits_first * num_bits_second", + &[("num_bits_first", 3), ("num_bits_second", 4)], + ); + // 2*3 + 2*4 + 3*4 = 6 + 8 + 12 = 26 + assert_eq!(val, 26.0); +} + +#[test] +fn test_parse_real_overhead_sat_to_ksat() { + // "4 * num_clauses + 
num_literals" + assert_eq!( + parse_eval( + "4 * num_clauses + num_literals", + &[("num_clauses", 5), ("num_literals", 12)] + ), + 32.0 + ); +} + +#[test] +fn test_parse_real_complexity_bmf() { + // "2^(rows * rank + rank * cols)" + let val = parse_eval( + "2^(rows * rank + rank * cols)", + &[("rows", 3), ("rank", 2), ("cols", 4)], + ); + // 2^(3*2 + 2*4) = 2^(6+8) = 2^14 = 16384 + assert_eq!(val, 16384.0); +} diff --git a/src/unit_tests/rules/graph.rs b/src/unit_tests/rules/graph.rs index 345b2c19..357d64f6 100644 --- a/src/unit_tests/rules/graph.rs +++ b/src/unit_tests/rules/graph.rs @@ -1082,7 +1082,7 @@ fn test_variant_complexity() { let graph = ReductionGraph::new(); let variant = ReductionGraph::variant_to_map(&[("graph", "SimpleGraph"), ("weight", "i32")]); let complexity = graph.variant_complexity("MaximumIndependentSet", &variant); - assert_eq!(complexity, Some("2^num_vertices")); + assert_eq!(complexity, Some("1.1996^num_vertices")); // Unknown problem returns None let unknown = BTreeMap::new(); diff --git a/src/unit_tests/rules/registry.rs b/src/unit_tests/rules/registry.rs index 79233249..4fc5f5cc 100644 --- a/src/unit_tests/rules/registry.rs +++ b/src/unit_tests/rules/registry.rs @@ -6,6 +6,10 @@ fn dummy_reduce_fn(_: &dyn std::any::Any) -> Box<dyn DynReductionResult> { +fn dummy_overhead_eval_fn(_: &dyn std::any::Any) -> ProblemSize { + ProblemSize::new(vec![]) +} + #[test] fn test_reduction_overhead_evaluate() { let overhead = ReductionOverhead::new(vec![ @@ -38,6 +42,7 @@ fn test_reduction_entry_overhead() { }, module_path: "test::module", reduce_fn: dummy_reduce_fn, + overhead_eval_fn: dummy_overhead_eval_fn, }; let overhead = entry.overhead(); @@ -56,6 +61,7 @@ fn test_reduction_entry_debug() { overhead_fn: || ReductionOverhead::default(), module_path: "test::module", reduce_fn: dummy_reduce_fn, + overhead_eval_fn: dummy_overhead_eval_fn, }; let debug_str = format!("{:?}", entry); @@ -73,6 +79,7 @@ fn test_is_base_reduction_unweighted() { overhead_fn: || ReductionOverhead::default(), module_path: "test::module", reduce_fn:
dummy_reduce_fn, + overhead_eval_fn: dummy_overhead_eval_fn, }; assert!(entry.is_base_reduction()); } @@ -87,6 +94,7 @@ fn test_is_base_reduction_source_weighted() { overhead_fn: || ReductionOverhead::default(), module_path: "test::module", reduce_fn: dummy_reduce_fn, + overhead_eval_fn: dummy_overhead_eval_fn, }; assert!(!entry.is_base_reduction()); } @@ -101,6 +109,7 @@ fn test_is_base_reduction_target_weighted() { overhead_fn: || ReductionOverhead::default(), module_path: "test::module", reduce_fn: dummy_reduce_fn, + overhead_eval_fn: dummy_overhead_eval_fn, }; assert!(!entry.is_base_reduction()); } @@ -115,6 +124,7 @@ fn test_is_base_reduction_both_weighted() { overhead_fn: || ReductionOverhead::default(), module_path: "test::module", reduce_fn: dummy_reduce_fn, + overhead_eval_fn: dummy_overhead_eval_fn, }; assert!(!entry.is_base_reduction()); } @@ -130,6 +140,7 @@ fn test_is_base_reduction_no_weight_key() { overhead_fn: || ReductionOverhead::default(), module_path: "test::module", reduce_fn: dummy_reduce_fn, + overhead_eval_fn: dummy_overhead_eval_fn, }; assert!(entry.is_base_reduction()); } @@ -149,3 +160,128 @@ fn test_reduction_entries_registered() { && e.target_name == "MinimumVertexCover") ); } + +/// Build a ProblemSize from an overhead's input variables by calling the eval fn +/// on the source problem instance and collecting field values via the overhead. +/// +/// This cross-checks compiled eval (calls getters directly) against symbolic eval +/// (looks up variables in a ProblemSize hashmap). 
+fn cross_check_overhead(entry: &ReductionEntry, src: &dyn std::any::Any, input: &ProblemSize) { + let compiled = (entry.overhead_eval_fn)(src); + let symbolic = entry.overhead().evaluate_output_size(input); + + for (field, _) in &entry.overhead().output_size { + assert_eq!( + compiled.get(field), + symbolic.get(field), + "overhead field '{}' mismatch for {}→{}: compiled={:?}, symbolic={:?}", + field, + entry.source_name, + entry.target_name, + compiled.get(field), + symbolic.get(field), + ); + } +} + +/// Cross-check complexity_eval_fn against symbolic Expr evaluation. +fn cross_check_complexity( + entry: &crate::registry::VariantEntry, + src: &dyn std::any::Any, + input: &ProblemSize, +) { + let compiled = (entry.complexity_eval_fn)(src); + let parsed = crate::expr::Expr::parse(entry.complexity); + let symbolic = parsed.eval(input); + + let diff = (compiled - symbolic).abs(); + let tol = 1e-6 * symbolic.abs().max(1.0); + assert!( + diff < tol, + "complexity mismatch for {} ({}): compiled={compiled}, symbolic={symbolic}, expr=\"{}\"", + entry.name, + entry + .variant() + .iter() + .map(|(k, v)| format!("{k}={v}")) + .collect::<Vec<_>>() + .join(", "), + entry.complexity, + ); +} + +#[test] +fn test_overhead_eval_fn_cross_check_mis_to_mvc() { + use crate::models::graph::MaximumIndependentSet; + use crate::topology::SimpleGraph; + + let graph = SimpleGraph::new(6, vec![(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]); + let problem = MaximumIndependentSet::new(graph, vec![1i32; 6]); + + let entry = inventory::iter::<ReductionEntry>() + .find(|e| e.source_name == "MaximumIndependentSet" && e.target_name == "MinimumVertexCover") + .unwrap(); + + let input = ProblemSize::new(vec![ + ("num_vertices", problem.num_vertices()), + ("num_edges", problem.num_edges()), + ]); + cross_check_overhead(entry, &problem as &dyn std::any::Any, &input); +} + +#[test] +fn test_overhead_eval_fn_cross_check_factoring_to_ilp() { + use crate::models::specialized::Factoring; + + let problem = Factoring::new(3, 4,
42); + + let entry = inventory::iter::<ReductionEntry>() + .find(|e| e.source_name == "Factoring" && e.target_name == "ILP") + .unwrap(); + + let input = ProblemSize::new(vec![ + ("num_bits_first", problem.num_bits_first()), + ("num_bits_second", problem.num_bits_second()), + ]); + cross_check_overhead(entry, &problem as &dyn std::any::Any, &input); +} + +#[test] +fn test_complexity_eval_fn_cross_check_mis() { + use crate::models::graph::MaximumIndependentSet; + use crate::registry::VariantEntry; + use crate::topology::SimpleGraph; + + let graph = SimpleGraph::new(10, vec![(0, 1), (1, 2)]); + let problem = MaximumIndependentSet::new(graph, vec![1i32; 10]); + + let entry = inventory::iter::<VariantEntry>() + .find(|e| { + e.name == "MaximumIndependentSet" + && e.variant() + .iter() + .any(|(k, v)| *k == "graph" && *v == "SimpleGraph") + && e.variant() + .iter() + .any(|(k, v)| *k == "weight" && *v == "i32") + }) + .unwrap(); + + let input = ProblemSize::new(vec![("num_vertices", problem.num_vertices())]); + cross_check_complexity(entry, &problem as &dyn std::any::Any, &input); +} + +#[test] +fn test_complexity_eval_fn_cross_check_factoring() { + use crate::models::specialized::Factoring; + use crate::registry::VariantEntry; + + let problem = Factoring::new(8, 8, 100); + + let entry = inventory::iter::<VariantEntry>() + .find(|e| e.name == "Factoring") + .unwrap(); + + let input = ProblemSize::new(vec![("m", problem.m()), ("n", problem.n())]); + cross_check_complexity(entry, &problem as &dyn std::any::Any, &input); +} diff --git a/src/variant.rs b/src/variant.rs index fbf679ee..fcac4dc0 100644 --- a/src/variant.rs +++ b/src/variant.rs @@ -146,37 +146,6 @@ impl_variant_param!(K3, "k", parent: KN, cast: |_| KN, k: Some(3)); impl_variant_param!(K2, "k", parent: KN, cast: |_| KN, k: Some(2)); impl_variant_param!(K1, "k", parent: KN, cast: |_| KN, k: Some(1)); -/// Declare explicit problem variants with per-variant complexity metadata. -/// -/// Each entry generates: -/// 1.
A `DeclaredVariant` trait impl for compile-time checking -/// 2. A `VariantEntry` inventory submission for runtime graph building -/// -/// # Example -/// -/// ```text -/// declare_variants! { -/// MaximumIndependentSet => "2^num_vertices", -/// MaximumIndependentSet => "2^num_vertices", -/// } -/// ``` -#[macro_export] -macro_rules! declare_variants { - ($($ty:ty => $complexity:expr),+ $(,)?) => { - $( - impl $crate::traits::DeclaredVariant for $ty {} - - $crate::inventory::submit! { - $crate::registry::VariantEntry { - name: <$ty as $crate::traits::Problem>::NAME, - variant_fn: || <$ty as $crate::traits::Problem>::variant(), - complexity: $complexity, - } - } - )+ - }; -} - #[cfg(test)] #[path = "unit_tests/variant.rs"] mod tests;