chore: v1 branch #1656
Conversation
This pull request sets up GitHub code scanning for this repository. Once the scans have completed and the checks have passed, the analysis results for this pull request branch will appear on this overview. Once you merge this pull request, the 'Security' tab will show more code scanning analysis results (for example, for the default branch). Depending on your configuration and choice of analysis tool, future pull requests will be annotated with code scanning analysis results. For more information about GitHub code scanning, check out the documentation.
CodeRabbit: Review skipped. Auto reviews are limited based on label configuration; at least one required label is needed. Please check the settings in the CodeRabbit UI or the Run configuration. Configuration used: Organization UI. Review profile: CHILL. Plan: Pro.
```yaml
name: system-programs
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
timeout-minutes: 60

services:
  redis:
    image: redis:8.0.1
    ports:
      - 6379:6379
    options: >-
      --health-cmd "redis-cli ping"
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5

env:
  REDIS_URL: redis://localhost:6379

strategy:
  matrix:
    include:
      - program: sdk-test-program
        sub-tests: '["cargo-test-sbf -p sdk-native-test"]'
      - program: sdk-anchor-test-program
        sub-tests: '["cargo-test-sbf -p sdk-anchor-test", "cargo-test-sbf -p sdk-pinocchio-test"]'
      - program: sdk-libs
        packages: light-macros light-sdk light-program-test light-client light-batched-merkle-tree
        test_cmd: |
          cargo test -p light-macros
          cargo test -p light-sdk
          cargo test -p light-program-test
          cargo test -p light-client
          cargo test -p client-test
          cargo test -p light-sparse-merkle-tree
          cargo test -p light-batched-merkle-tree --features test-only -- --skip test_simulate_transactions --skip test_e2e

steps:
  - name: Checkout sources
    uses: actions/checkout@v4

  - name: Setup and build
    uses: ./.github/actions/setup-and-build
    with:
      skip-components: "redis"

  - name: build-programs
    run: |
      source ./scripts/devenv.sh
      npx nx build @lightprotocol/programs

  - name: Run sub-tests for ${{ matrix.program }}
    if: matrix.sub-tests != null
    run: |
      source ./scripts/devenv.sh
      npx nx build @lightprotocol/zk-compression-cli

      IFS=',' read -r -a sub_tests <<< "${{ join(fromJSON(matrix.sub-tests), ', ') }}"
      for subtest in "${sub_tests[@]}"
      do
        echo "$subtest"
        eval "RUSTFLAGS=\"-D warnings\" $subtest"
      done

  - name: Run tests for ${{ matrix.program }}
    if: matrix.test_cmd != null
    run: |
      source ./scripts/devenv.sh
      npx nx build @lightprotocol/zk-compression-cli
      ${{ matrix.test_cmd }}
```
Check warning
Code scanning / CodeQL: Workflow does not contain permissions (Medium)

Copilot Autofix (AI, about 1 month ago):
In general, to fix this issue, add an explicit permissions: block either at the workflow root (so it applies to all jobs) or at the individual job level, and set it to the minimum required (often contents: read if the job only needs to check out code and run tests). This constrains the GITHUB_TOKEN so it cannot perform unintended write operations.
For this specific workflow, the job only checks out the repository and runs tests and package-manager operations; none of the steps need to write to GitHub resources. The simplest least-privilege fix is therefore to add a top-level permissions: block just after the name: section, setting contents: read. This will apply to the system-programs job and any other jobs in this workflow (none are shown, but this is safe). No changes to steps, actions versions, or additional imports are needed.
Concretely:

- Edit .github/workflows/sdk-tests.yml.
- After the name: examples-tests line, insert:

  permissions:
    contents: read

This explicitly limits GITHUB_TOKEN to read-only repository contents while preserving existing functionality.
```diff
@@ -19,6 +19,9 @@

 name: examples-tests

+permissions:
+  contents: read
+
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
   cancel-in-progress: true
```
```diff
 name: stateless-js-v1
 if: github.event.pull_request.draft == false
 runs-on: ubuntu-latest

 services:
   redis:
     image: redis:8.0.1
     ports:
       - 6379:6379
     options: >-
       --health-cmd "redis-cli ping"
       --health-interval 10s
       --health-timeout 5s
       --health-retries 5

 env:
   LIGHT_PROTOCOL_VERSION: V1
   REDIS_URL: redis://localhost:6379
   CI: true

 steps:
   - name: Checkout sources
     uses: actions/checkout@v4

   - name: Setup and build
     uses: ./.github/actions/setup-and-build
     with:
       skip-components: "redis,disk-cleanup"
       cache-suffix: "js"

   - name: Build stateless.js with V1
     run: |
       cd js/stateless.js
       pnpm build:v1

-  - name: Build CLI
+  - name: Build compressed-token with V1
     run: |
-      source ./scripts/devenv.sh
-      npx nx build @lightprotocol/zk-compression-cli --skip-nx-cache
+      cd js/compressed-token
+      pnpm build:v1

-  # Comment for breaking changes to Photon
-  - name: Run CLI tests
+  - name: Build CLI (CI mode - Linux x64 only)
     run: |
       source ./scripts/devenv.sh
-      npx nx test @lightprotocol/zk-compression-cli
+      npx nx build-ci @lightprotocol/zk-compression-cli

-  - name: Run stateless.js tests
+  - name: Run stateless.js tests with V1
     run: |
       source ./scripts/devenv.sh
-      npx nx test @lightprotocol/stateless.js
+      echo "Running stateless.js tests with retry logic (max 2 attempts)..."
+      attempt=1
+      max_attempts=2
+      until npx nx test-ci @lightprotocol/stateless.js; do
+        attempt=$((attempt + 1))
+        if [ $attempt -gt $max_attempts ]; then
+          echo "Tests failed after $max_attempts attempts"
+          exit 1
+        fi
+        echo "Attempt $attempt/$max_attempts failed, retrying..."
+        sleep 5
+      done
+      echo "Tests passed on attempt $attempt"

-  - name: Run compressed-token tests
+  - name: Run compressed-token tests with V1
     run: |
       source ./scripts/devenv.sh
-      npx nx test @lightprotocol/compressed-token
+      echo "Running compressed-token tests with retry logic (max 2 attempts)..."
+      attempt=1
+      max_attempts=2
+      until npx nx test-ci @lightprotocol/compressed-token; do
+        attempt=$((attempt + 1))
+        if [ $attempt -gt $max_attempts ]; then
+          echo "Tests failed after $max_attempts attempts"
+          exit 1
+        fi
+        echo "Attempt $attempt/$max_attempts failed, retrying..."
+        sleep 5
+      done
+      echo "Tests passed on attempt $attempt"
```
Check warning
Code scanning / CodeQL: Workflow does not contain permissions (Medium)

Copilot Autofix (AI, 3 months ago):
To fix the problem, explicitly define minimal GITHUB_TOKEN permissions for this workflow or for the specific job. Since the workflow only checks out code, builds, and runs tests, it normally only needs read access to repository contents. We can add a permissions block at the root of the workflow so it applies to all jobs (there is only one job in the snippet). This will ensure that even if the repository or org default is read-write, the workflow will only have contents: read.
Concretely, in .github/workflows/js.yml, add:

  permissions:
    contents: read

between the name: js-tests-v1 section and the concurrency: block (lines 14–16 in the snippet). This does not change any existing behavior of steps, but constrains the token according to the principle of least privilege. No additional methods, definitions, or imports are needed.
```diff
@@ -13,6 +13,9 @@

 name: js-tests-v1

+permissions:
+  contents: read
+
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
   cancel-in-progress: true
```
* chore: registry program throw on zero network fee
* chore: set batched address tree default fee to 10k
* chore: limit batched tree creations to light security group
* chore: disable program owned trees
* chore: cleanup features, add tests for mainnet tree config and features
* cleanup
* chore: fix nits
* fix test
* fix: impl feedback
* chore: add docker image publishing to prover release workflow
* Add Go build and key download steps to prover release
* Add disk space cleanup step to prover release workflow
* Remove disk cleanup condition from prover release
* Move disk space cleanup step to prover build job
* Fix prover release workflow tag format
* chore: regenerate vkeys
* update prover version tag
Bumps [nx](https://github.com/nrwl/nx/tree/HEAD/packages/nx) from 20.8.1 to 22.0.1.
- [Release notes](https://github.com/nrwl/nx/releases)
- [Commits](https://github.com/nrwl/nx/commits/22.0.1/packages/nx)

---
updated-dependencies:
- dependency-name: nx
  dependency-version: 22.0.1
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4 to 6.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](actions/download-artifact@v4...v6)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* fix: ctoken address Merkle tree check with cpi context
* test: failing write to cpi context
* feat(forester): gRPC-based event-driven processing for V2 trees
* chore: add protobuf-compiler to setup-and-build
* chore: update PHOTON_COMMIT version in versions.sh
* feat: add grpc_port config to cli
* cleanup
* cleanup
* cleanup
* wait for indexer in rpc-interop.test.ts
* wait for indexer in rpc-interop.test.ts
* bump photon version
* cleanup
* cleanup
* Add TypeDoc configuration for TypeScript SDK documentation

  Set up TypeDoc to generate API reference documentation for @lightprotocol/stateless.js and @lightprotocol/compressed-token packages. Configuration uses packages mode for monorepo support with customized options for navigation, sorting, and external links.

  Changes:
  - Add typedoc.json with monorepo-aware configuration
  - Add TypeDoc as devDependency in root package.json
  - Update .gitignore to exclude generated api-docs directory
  - Update pnpm-lock.yaml with TypeDoc dependencies

* Add GitHub Actions workflow for TypeDoc deployment

  Automates deployment of API documentation to GitHub Pages. Workflow triggers on pushes to main branch and:
  - Installs dependencies with pnpm
  - Builds stateless.js package
  - Generates TypeDoc documentation
  - Deploys to GitHub Pages

  Docs will be available at: https://lightprotocol.github.io/light-protocol/

* Restructure TypeDoc to generate separate documentation sites

  Split documentation generation into separate sites for each package:
  - stateless.js: /stateless.js/
  - compressed-token: /compressed-token/

  Changes:
  - Add typedoc.stateless.json for stateless.js package
  - Add typedoc.compressed-token.json for compressed-token package
  - Add api-docs-index.html as landing page linking to both
  - Update workflow to generate both packages separately
  - Remove unified typedoc.json

  URLs after deployment:
  - Root: https://lightprotocol.github.io/light-protocol/
  - Stateless.js: https://lightprotocol.github.io/light-protocol/stateless.js/
  - Compressed-token: https://lightprotocol.github.io/light-protocol/compressed-token/

* Update documentation links and navigation
  - Rename 'API Home' to 'Other Libraries' in navigation header
  - Update docs.zkcompression.com to www.zkcompression.com
  - Fix compressed token guide link to point to /compressed-tokens/guides
  - Add 'Other Libraries' link to both package READMEs

* Add DeepWiki badge to package READMEs

  Add Ask DeepWiki badge next to npm and license badges for quick access to AI-powered documentation assistant.

* Fix shellcheck warning in deploy-docs workflow

  Quote GITHUB_ENV variable to prevent globbing.

* Trigger API docs deployment on release tags instead of main push

  Changes workflow to deploy TypeDoc to GitHub Pages only when release tags are pushed (compressed-token-* or stateless.js-*), ensuring API documentation reflects latest release versions rather than main branch.

  Addresses: #2038

Co-authored-by: tilo-14 <tilo@luminouslabs.com>
Co-authored-by: ananas-block <58553958+ananas-block@users.noreply.github.com>
* chore: add v1 tree deprecation log messages

  Log warnings when v1 state trees or address trees are used, directing users to the v1-to-v2 migration guide.

  Entire-Checkpoint: 4a00fc4f40bb

* feat: charge network fee on V1 output appends

  Add network fee (5,000 lamports per unique V1 output tree) to match the existing V2 output fee behavior and V1 input fee behavior.

  Entire-Checkpoint: 7423e3c1c43d

* feat: reimburse forester for tx fees on V1 tree operations

  Transfer network_fee lamports from queue account to fee_payer when foresters perform nullify_leaves and update_address_merkle_tree operations on V1 trees with network_fee > 0. The registry CPI wrappers pass the forester's authority wallet as fee_payer.

  Entire-Checkpoint: 7071fd5951fd

* fix: update JS tests for V1 output network fee

  Entire-Checkpoint: ecbda9cc501d

* chore: clarify fee comment

  Entire-Checkpoint: 814bd0c10ab4

* stash
* fix lint
* fix: transfer nullify fee from merkle tree, fix borrow conflicts

  V1 state tree network fees accumulate in the merkle tree account (not the nullifier queue), so nullify reimbursement must transfer from the merkle tree. Also read network_fee before mutable data borrows to avoid RefCell conflicts in both nullify_leaves and update_address_merkle_tree.

  Entire-Checkpoint: 6941c31c12ec
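The per-unique-tree rule in the fee commit above is easy to misread, so here is a minimal sketch of the computation as described in the commit message; the constant and function names are illustrative, not the program's actual code.

```rust
// Illustrative sketch of the V1 output append fee described above:
// 5,000 lamports are charged once per *unique* V1 output tree,
// not once per output account. Names here are hypothetical.
use std::collections::HashSet;

const V1_OUTPUT_NETWORK_FEE: u64 = 5_000;

/// `output_tree_indices` holds, per output account, the index of the
/// V1 state tree it is appended to.
fn v1_output_fee(output_tree_indices: &[u8]) -> u64 {
    let unique_trees: HashSet<u8> = output_tree_indices.iter().copied().collect();
    unique_trees.len() as u64 * V1_OUTPUT_NETWORK_FEE
}

fn main() {
    // Four outputs spread over two distinct V1 trees: fee charged twice.
    assert_eq!(v1_output_fee(&[0, 0, 1, 1]), 10_000);
}
```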
* fix: ForesterNotEligible deadlock fix
* refactor: add fee-filter to tree discovery
* fix: adjust FORESTER_NOT_ELIGIBLE error code to include ERROR_CODE_OFFSET
* feat: enhance tree discovery with retroactive filtering and add confirmation settings
* cleanup
…Tree (#2335)

Add fee_payer account to BatchAppend and BatchUpdateAddressTree instructions to reimburse foresters for network fees. BatchAppend transfers 2x network_fee from output_queue; BatchUpdateAddressTree transfers 1x network_fee from merkle_tree. Transfers only occur when network_fee >= 5000 lamports. Registry CPI wrappers pass fee_payer through; SDK builders set fee_payer to the forester. Also adds create-address-test-program to the programs build.
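A hedged sketch of the reimbursement rule stated above; the enum and helper are hypothetical stand-ins, while on-chain this is a lamport transfer from the queue or tree account to the forester's fee_payer.

```rust
// Sketch of the forester reimbursement rule described above. Names are
// illustrative; the real program moves lamports from the output_queue
// (BatchAppend) or merkle_tree (BatchUpdateAddressTree) account.
const MIN_REIMBURSABLE_FEE: u64 = 5_000;

enum BatchOp {
    Append,            // reimburses 2x network_fee from output_queue
    UpdateAddressTree, // reimburses 1x network_fee from merkle_tree
}

/// Lamports to transfer to the forester's fee_payer, if any.
fn reimbursement(op: BatchOp, network_fee: u64) -> Option<u64> {
    if network_fee < MIN_REIMBURSABLE_FEE {
        return None; // below 5,000 lamports no transfer occurs
    }
    Some(match op {
        BatchOp::Append => 2 * network_fee,
        BatchOp::UpdateAddressTree => network_fee,
    })
}

fn main() {
    assert_eq!(reimbursement(BatchOp::Append, 5_000), Some(10_000));
    assert_eq!(reimbursement(BatchOp::UpdateAddressTree, 5_000), Some(5_000));
    assert_eq!(reimbursement(BatchOp::Append, 4_999), None);
}
```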
* chore: add poseidon hash input error for non-32-byte inputs
* fix: pad RegisteredUser.data to 32 bytes for Poseidon hashv

  The on-chain Poseidon length check now enforces all inputs to be exactly 32 bytes. RegisteredUser.data is [u8; 31], so pad it into a 32-byte array (right-aligned, big-endian) before hashing.

  Entire-Checkpoint: 7de49f8e11b2

* fix: pad all Poseidon hash inputs to 32 bytes

  light-poseidon 0.4.0 enforces that all inputs are exactly 32 bytes. Pad smaller inputs (u64/u32/usize indices, [u8; 31] data) to 32-byte arrays before hashing. Also bump light-poseidon to 0.4.0 in the workspace.

  Entire-Checkpoint: 140e93a9e15d

* fix: pad Poseidon hash inputs to 32 bytes in compressed account tests

  The test_compressed_account_hash test was passing sub-32-byte slices (leaf_index, lamports, discriminator) directly to Poseidon::hashv. light-poseidon 0.4 requires all inputs to be exactly 32 bytes.

  Entire-Checkpoint: ee45a7a4e626

* fix: pad 31-byte data to 32 bytes in system-cpi-test assertion

  The test-side hash assertion also passes data.as_slice() (31 bytes) to Poseidon::hashv, which now requires 32-byte inputs.

  Entire-Checkpoint: 1560c14e6263

* fix: pad Poseidon hash inputs to 32 bytes in indexed-merkle-tree tests

  The test_append and functional_non_inclusion_test tests were passing sub-32-byte slices to Poseidon::hashv. light-poseidon 0.4 requires all inputs to be exactly 32 bytes.

  Entire-Checkpoint: 716761bbe363

* fix: rustfmt formatting for indexed-merkle-tree tests

  Entire-Checkpoint: f7dc2aedbbc6
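The right-aligned, big-endian padding referred to in these commits can be sketched as follows; `pad_to_32` is a hypothetical helper shown only to pin down the byte layout.

```rust
// Hypothetical helper illustrating the padding described above: short
// big-endian inputs (u64 indices, [u8; 31] data) are right-aligned into a
// 32-byte array before being passed to a hasher that, as of
// light-poseidon 0.4, requires inputs of exactly 32 bytes.
fn pad_to_32(input: &[u8]) -> [u8; 32] {
    assert!(input.len() <= 32, "input exceeds one field element");
    let mut out = [0u8; 32];
    out[32 - input.len()..].copy_from_slice(input);
    out
}

fn main() {
    let data = [0xABu8; 31]; // e.g. RegisteredUser.data: [u8; 31]
    let padded = pad_to_32(&data);
    assert_eq!(padded[0], 0);        // single leading zero byte from padding
    assert_eq!(&padded[1..], &data); // original bytes preserved, right-aligned

    let leaf_index: u64 = 42;
    let padded_idx = pad_to_32(&leaf_index.to_be_bytes());
    assert_eq!(padded_idx[31], 42);  // value lands in the low-order bytes
}
```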
* rm decompressinterface; test cov: offcurve, zero-amounts; test cov: dupe hash failure, v1 reject at ixn boundary; more test cov: load, add freeze thaw; extend test cov; add tests; lint; frozen handling; more tests; mark internals; rm _tryfetchctokencoldbyaddress; cleanups; fmt
* unwrap consistent
* remove createLoadAccountsParams
* add uni err
* fix
* remove layout serde, add load-ata instruction
* apply review fixes, simplify delegate and frozen reasoning
* update freeze thaw
* test upd
* fix cold load delegate
* fmt
* fix ci
* add test cov: tx size
* rename ctoken full v3
* renames
* wip
* checked for all interface, pass decimals
* use destination directly in transferinterface
* format
* fix lint
* fix lint
* fix transfer-interface.test.ts
* address last remaining comment
* add changelog: br change decimals
* fix mds
* apply review comments
* format
* lint
* for unwrap/wrap ixns we should always do wrap=false
* fixes
* lint
* fix version mismatch between formatter and linter
* better errors for getAccountInterface and getMintInterface
* format
* granular typed errs on accountInfoInterface
* lint, changelog, tests
* v2 gate for getaccountinfointerface tests
* prep beta release
* bump versions
* fix comments and changelog
…tween v1, v2, and compression (#2331)

* feat: add priority fee configuration and handling
* fix: add Signature import to solana_sdk in epoch_manager
* feat: add confirmation configuration for smart transactions and update related functions
* format
* refactor
* fix: improve error handling
* refactor
* cleanup
* refactored transaction sending logic in `send_transaction.rs` and `tx_sender.rs`
  - enhanced error handling in transaction processing to differentiate between send failures and execution failures
  - modified `priority_fee.rs` to streamline error handling and improve fallback mechanisms
  - adjusted V2 error handling to include custom error codes for better debugging
  - improved the handling of transaction execution status
* cleanup
* cleanup
* format
* refactor error handling in transaction processing to include batch not ready state
* add logging to forester tests workflow
* dump photon.log on failure
* add indexer health checks and tracker wait functions in tests
* more logs
* add local transaction dumping functionality and enhance test failure logging
* refactor transaction extraction and block fetching in local transaction dump
* refactor WorkReportError handling to use registry_error_code for improved clarity
* refactor ForesterError handling to use registry_error_code for NotEligible checks
* wip
* refactor local transaction dumping to handle duplicates and improve output structure
* cleanup
* debugging
* unify test validator and photon commitment
* increase timeout for Forester e2e test to 120 minutes
* custom surfpool branch
* refactor: update surfpool version to 1.1.1 and remove unused binary path logic
* fix: remove unnecessary environment variable from spawnBinary call
* refactor: remove unused dependencies and streamline eligibility checks in epoch manager
* format
* format
* fix: handle mixed batch/non-batch inputs in create_nullifier_queue_indices

  When a transaction mixes batch (v2) and legacy/concurrent (v1) input accounts, the nullifier queue index assignment was using the raw position in input_compressed_accounts as the write index into nullifier_queue_indices. This caused an out-of-bounds panic when a non-batch account appeared between batch accounts (e.g. [batchA, legacy, batchB, batchA]). Fix by walking input_compressed_accounts in order and using a compact batch_idx counter that only advances for accounts with a matching sequence number entry. Non-batch accounts have no sequence number entry and are skipped without consuming a slot.

* feat: add xtask fetch-block-events subcommand

  Fetches a configurable number of blocks starting at a given slot, parses every transaction using event_from_light_transaction, and prints a structured summary of all Light Protocol events found.

  Usage:
  cargo xtask fetch-block-events --start-slot <slot> --network mainnet
  cargo xtask fetch-block-events --start-slot <slot> --network devnet --num-blocks 5

* fix: format
* chore: bump light-event 0.23.0 -> 0.23.1
* fix: extract ParsedInstruction struct to satisfy clippy::type_complexity
* fix: rustfmt fetch_block_events
* test(light-event): add regression tests for mixed batch/legacy nullifier OOB panic

  Transaction 3ybts1eFSC7QN6aU4ao6NJCgn7xTbtBVyzeLDZJf9eVN93vHZWupX4TXqHHgV18xf17eit7Uw5T135uabnpToKK4 at slot 407265372 panicked with "index out of bounds: len is 3 but index is 3" in create_nullifier_queue_indices when inputs mix batch and legacy trees. Adds two tests:
  - src/regression_test.rs: real mainnet instruction bytes decoded via bs58
  - tests/parse_test.rs: synthetic test verifying exact nullifier_queue_indices [6, 3, 7]

  Also adds light-event to sdk-libs/justfile so it runs in CI.
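A minimal sketch of the compact-counter fix described above. The types and the function signature are invented for illustration (the real function operates on the program's account structures); only the skip-without-consuming-a-slot logic is taken from the commit message.

```rust
// Sketch of the fix described above (illustrative types). Walking inputs
// in order, only accounts with a matching sequence-number entry (batched
// v2 accounts) advance the compact batch_idx; legacy/concurrent v1
// accounts consume no slot, so a shape like [batchA, legacy, batchB,
// batchA] no longer reads out of bounds.
struct InputAccount {
    has_seq_entry: bool, // true for batched (v2) inputs
}

fn assign_queue_indices(
    inputs: &[InputAccount],
    batched_indices: &[u32], // one entry per batched input, in order
) -> Vec<Option<u32>> {
    let mut batch_idx = 0usize;
    inputs
        .iter()
        .map(|acc| {
            if acc.has_seq_entry {
                let idx = batched_indices[batch_idx];
                batch_idx += 1;
                Some(idx)
            } else {
                None // legacy input: skipped, consumes no slot
            }
        })
        .collect()
}

fn main() {
    let inputs = [
        InputAccount { has_seq_entry: true },  // batchA
        InputAccount { has_seq_entry: false }, // legacy
        InputAccount { has_seq_entry: true },  // batchB
        InputAccount { has_seq_entry: true },  // batchA again
    ];
    // Mirrors the regression test's expected indices [6, 3, 7]:
    // three batched entries, so the old raw-position write at index 3
    // would have been out of bounds.
    let out = assign_queue_indices(&inputs, &[6, 3, 7]);
    assert_eq!(out, vec![Some(6), None, Some(3), Some(7)]);
}
```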
* fixes
* use slicelast
* upd docs
* format
* upd cu, transferoptions
* add check
* fmt ci
…rk-ff duplicate compilation (#2356)

* perf: iteration 1: skip zstd-sys C build on CI via system libzstd

  Install libzstd-dev on CI and set ZSTD_SYS_USE_PKG_CONFIG=1 so zstd-sys uses pkg-config to find the system library instead of compiling from C source. zstd-sys (~7.9s) is a transitive dep via Solana's reqwest 0.11 and cannot be removed from the dep graph, but the C build can be avoided with a system library present. Also includes lld, clang, and libssl-dev from previous optimization work (already in the working tree from the prior plan).

  Entire-Checkpoint: 549cca13d530

* perf: iteration 8: compile proc-macro crates at opt-level 3 in dev profile

  Add [profile.dev.package.*] overrides for syn, proc-macro2, quote, serde_derive, ark-ff-macros, and ark-ff-asm. These crates execute at build time; compiling them at full optimization reduces the time they spend generating code for downstream crates.

* perf: iteration 4: eliminate ark-ff 0.4 duplicate compilation

  Patch groth16-solana via [patch.crates-io] with the local version which:
  - Uses ark-ff 0.5 (same as workspace) instead of 0.4
  - Bumps solana-bn254 from 2.x to 3.x (which uses ark 0.5)
  - Sets default-features = false for ark/thiserror/serde deps

  Also bump workspace solana-bn254 from "2.2" to "3.2.1" to match. Fix typo in 104 light-verifier verifying_keys files: vk_gamme_g2 -> vk_gamma_g2 (the typo fix was shipped in groth16-solana PR #29).

  TODO: replace path patch with git dep once groth16-solana changes are pushed and merged (branch: jorrit/chore-bump-deps, commit: 4e6cacf).

* perf: switch groth16-solana patch to git dep (rev 4e6cacf)

  Replace local path patch with git dep pointing to the pushed commit, making the ark-ff 0.4 elimination work on CI.

* fix: update solana-bn254 v3 API and fix remaining vk_gamme_g2 typos

  solana-bn254 v3 deprecated the unversioned compress/decompress functions in favour of explicit _be variants. Update prover/client/src/proof.rs to use alt_bn128_g{1,2}_{compress,decompress}_be. Also fix the vk_gamme_g2 -> vk_gamma_g2 typo in xtask/src/create_vkeyrs_from_gnark_key.rs, which was missed in the bulk rename (this file both constructs a Groth16Verifyingkey and generates source code using quote!).

* fix: upgrade SOLANA_VERSION 2.2.15 -> 2.3.13 to fix edition 2024 CI failure

  Solana 2.2.15 ships platform-tools v1.46 (cargo 1.84.0), which cannot parse Cargo.toml manifests that use `edition = "2024"`. time-macros 0.2.27 (transitively required via solana-streamer -> x509-parser -> asn1-rs -> time 0.3.47) uses edition 2024. Main CI was not failing because its build cache was warm. Our Cargo.lock changes (solana-bn254 2.2 -> 3.2.1, groth16-solana patch) bust the cache, causing a fresh compile of time-macros 0.2.27, which then fails. Solana 2.3.13 ships platform-tools v1.48 (Rust 1.86+), which supports edition 2024. This also aligns the CLI version with the workspace library crates that are already pinned to "2.3".

* fix: call ring provider install exactly once via Once

  Use std::sync::Once in ensure_ring_provider so that install_default() is only attempted on the first call per process, avoiding redundant global-state mutations on subsequent calls.

* fix: pin time to 0.3.37 to avoid edition2024 in time-macros

  time 0.3.38+ depends on time-macros 0.2.20+, which uses edition = "2024" in its Cargo.toml. The Solana platform-tools ship Cargo 1.84.0, which cannot parse edition 2024 manifests. Pin time to 0.3.37 (time-macros 0.2.19) so cargo test-sbf can parse the full dep graph.

  Affects: e2e-test via solana-client -> solana-streamer -> x509-parser -> asn1-rs -> time -> time-macros
* chore: add forester tps xtask

  Entire-Checkpoint: ca2f5cb4ca53

* chore: allow rpc url from env variable

  Entire-Checkpoint: eb09f696bf72
* feat(compressed-token): add approve/revoke delegation for light-token ATAs

  Add TypeScript SDK functions to call the on-chain CTokenApprove (discriminator 4) and CTokenRevoke (discriminator 5) instruction handlers for light-token associated token accounts.

  New files:
  - instructions/approve-revoke.ts: sync instruction builders matching the Rust SDK layout
  - actions/approve-interface.ts: async actions with cold loading + tx sending
  - tests/e2e/approve-revoke-light-token.test.ts: unit + E2E tests

  Also adds a getLightTokenDelegate helper and extends the FrozenOperation type.

* fix(sdk): make decimals optional in unified approve/revoke wrappers

  Avoid an unnecessary getMintInterface RPC call when the caller provides decimals.

* feat(sdk): add transferDelegated for light-token ATAs

  Add transferDelegatedInterface action and unified wrapper, completing the approve -> transfer -> revoke delegation flow for light-token ATAs.

* add spl t22 support
* refactor(sdk): align transferDelegated with wallet-recipient API

  Update transferDelegatedInterface and createTransferDelegatedInterfaceInstructions to accept a recipient wallet address instead of an explicit destination token account, matching the transferInterface convention from PR #2354. ATA derivation and idempotent creation now happen internally for all programId variants (light-token, SPL, Token-2022).

* 1st batch comments
* docs(sdk): document load-all behavior in approve/revoke JSDoc; add owner==feePayer E2E test

  Add @remarks to approve/revoke functions documenting that for light-token mints, all cold (compressed) balances are loaded into the hot ATA regardless of the delegation amount. Add an E2E test covering the owner==feePayer code path, which was previously only tested at the unit level.

* add regression tests
* fixes
* update changelog
* upd changelog
* fix: packedaccounts in js should not turn bool to number
* cherry pick bool fix
* bump versions again

Co-authored-by: tilo-14 <tilo@luminouslabs.com>
Co-authored-by: Swenschaeferjohann <swen@lightprotocol.com>
* wrap options
* wip
* upd changelog
- Align changelogs on 0.23.0 stable (npm V2 default; no app LIGHT_PROTOCOL_VERSION=V2)
- Deprecate featureFlags beta helpers as no-ops; clarify V2_REQUIRED_ERROR
- Bump compressed-token, stateless.js, and zk-compression-cli package versions to stable tags

Made-with: Cursor
* feat: optimize address batch pipeline
* format
* feat: stabilize address batch pipeline
* chore: update subproject commit for photon
* fix: update deranged and time package versions in Cargo.lock
* feat: add input validation for batch size in get_batch_address_append_circuit_inputs
* cleanup
* feat: optimize address batch pipeline
* format
* feat: stabilize address batch pipeline
* feat: batch cold account loads in light client
* fix: harden load batching and mixed decompression
* fix: prover startup and decompression load flow
* cleanup: harden prover startup polling
* format
* format
* cleanup
* cleanup
* refactor: simplify batch data length validation and remove redundant proof height checks
* refactor: remove unused output_queue_index parameter from into_in_token_data methods
* feat: nullify_2 shared proof node and 1-byte discriminator

  Deduplicate the level-15 proof node shared by both leaves (saves 32B) and shrink the discriminator from 4 bytes to 1 byte (saves 3B). Total instruction data drops from 1042B to 1007B, fitting the v0 transaction within the 1232-byte limit with 4 bytes of margin.

* feat: nullify_dedup instruction for 2-4 nullifications with proof deduplication

  Add a nullify_dedup instruction that packs 2-4 nullifications into a single transaction using proof node deduplication. Nearby Merkle tree leaves share sibling nodes at common ancestor levels; the encoding stores each unique node once and uses bitvecs/2-bit source fields to reconstruct all proofs on-chain.
  - 1-byte custom discriminator [79], reuses the NullifyLeaves accounts struct
  - Encoding: shared_top_node (level 15) + bitvec for proof_2 + 2-bit source fields for proof_3/proof_4, with u32::MAX sentinels for count < 4
  - MAX_NODES=28 verified by tx size test (1230 bytes with ALT + compute budget ix)
  - SDK: compress_proofs() encoder, create_nullify_dedup_instruction() builder, nullify_dedup_lookup_table_accounts() helper
  - Unit tests: data size, accounts, discriminator collision, round-trip, edge cases
  - Integration tests: 4-leaf, 3-leaf, 2-leaf success + 1-leaf rejection

* feat: forester dedup integration with min_queue_items threshold, versioned transactions, and tx size fix

  - Add min_queue_items config (CLI --min-queue-items, default 5000) to delay V1 state nullification processing until enough items accumulate for optimal dedup grouping
  - Integrate nullify_dedup into forester: group_state_items_for_dedup greedy algorithm forms groups of 4, 3, 2 with shared proof compression (70% savings observed)
  - Support versioned transactions with address lookup tables for dedup instructions
  - Reduce NULLIFY_DEDUP_MAX_NODES from 28 to 27 to fit within the 1232-byte tx limit when both SetComputeUnitLimit and SetComputeUnitPrice are included
  - Add CompressedProofs struct replacing the tuple return from compress_proofs
  - Remove nullify_2 instruction (superseded by nullify_dedup)
  - Add slot advancement in e2e test for surfpool offline mode
  - Add dedup grouping log assertion in e2e test

* feat: disable v1 state multi-nullify when queue exceeds 10,000 items

  Adds queue_item_count to BuildTransactionBatchConfig and disables multi-nullify when the queue is too large, falling back to single nullify for more reliable throughput. Renames use_dedup to use_multi_nullify for consistency.

* fix: pin time <0.3.46 for Solana platform-tools Cargo 1.84 compatibility

  time 0.3.46+ pulls time-core 0.1.8, which uses edition2024, unsupported by the Cargo 1.84 bundled with Solana platform-tools.

* fix: reject non-trailing sentinels in count_from_leaf_indices

  Harden leaf_indices validation to reject malformed layouts like [a, b, MAX, c] where sentinels appear in non-trailing positions.

* refactor: simplify nullify_state_v1_multi proof dedup encoding

  Replace the complex multi-scheme encoding (1-bit bitvec for proof_2, 2-bit source selectors for proof_3/proof_4, separate shared_top_node) with a uniform pool-based approach:
  - Deduplicated node pool built level-by-level across all proofs
  - Each proof (including proof_1) selects 16 nodes from the pool via a u32 bitvec using the bitvec crate
  - Removes proof_2_shared, proof_3_source, proof_4_source, shared_top_node
  - Adds proof_bitvecs: [u32; 4]
  - Bumps NULLIFY_STATE_V1_MULTI_MAX_NODES from 26 to 27 (10-byte margin)
  - Hardens count_from_leaf_indices to reject non-trailing sentinels

* feat: unified forester ALT covering all tree types

  Replace the v1-specific ALT with a single unified ALT that includes accounts for all forester operations (v1 state, v1 address, v2 state, v2 address). Solana's v0::Message::try_compile automatically selects relevant entries per instruction, so unused entries cost nothing.
  - Add ForesterLookupTableParams and forester_lookup_table_accounts()
  - Create the ALT unconditionally in e2e test for all operations
  - Clamp v1 batch_size to 1 when an ALT is present (tx size limit)
  - Fix min_queue_items doc comment to match actual behavior

* feat: add get_queue_leaf_indices

  - Implemented `make_get_queue_leaf_indices_body` function to construct the request body for the `getQueueLeafIndices` API
  - Added API call for `getQueueLeafIndices` in the photon API module
  - Introduced `get_queue_leaf_indices` method in the `TestIndexer` struct to handle the API call
  - Updated `LightProgramTest` to include the `get_queue_leaf_indices` method, delegating to the underlying indexer

* refactor: simplify batched transaction configuration in EpochManager
* format
* feat: add presort option for batched transactions to improve deduplication
* feat: update configuration for batching and nullification processes, including work item batch size and default values
* update dependencies in Cargo.lock
* feat: set default work item batch size to 50 in EpochManager configuration
* fix tests
* chore: update subproject commit for photon dependency
* Cargo.lock
* fix: add missing QueueLeafIndex struct to indexer types

  The indexer mod.rs and indexer_trait.rs reference QueueLeafIndex, but the struct was never committed to queue.rs, causing unresolved import errors:
  error[E0432]: unresolved import `queue::QueueLeafIndex`
  error[E0432]: unresolved import `crate::indexer::QueueLeafIndex`
  Mirrors the schema from photon-api codegen (hash, leaf_index, queue_index).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(forester): retry on BlockhashNotFound in send_prepared_transaction

  The send loop was classifying BlockhashNotFound as fatal (via the default should_retry, which treats any TransactionError as non-retryable), so the first attempt would bail with ConfirmationDeadlineExceeded and leave the work item unprocessed. BlockhashNotFound is typically transient: the RPC that receives the send may not have propagated the blockhash that the fetcher returned yet. The loop already has a safety exit via `get_block_height > last_valid_block_height` (BlockhashExpired), so retrying is bounded. Observed on devnet: V1 multi-nullify txs submitted to the Helius RPC pool failed preflight with BlockhashNotFound on the first attempt and never landed. With the retry, the same signature lands within 4-13 attempts.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* format
* fix(forester): drop 3s safe_deadline and repin photon to pushed commit

  The new safe_deadline = timeout_deadline - 3s check caused every chunk future to bail immediately, since scheduled_v1_batch_timeout returns at most 2s. V1_BATCH_TIMEOUT_BUFFER already reserves headroom, so the extra subtraction was redundant. Repin external/photon from 8a0bbce (local-only) to a52fd36, which is reachable on origin/sergey/get-leaf-indices-api so CI can fetch it.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* ci(lint): unpin rustfmt nightly to match local cargo +nightly fmt

  Lint CI was pinned to nightly-2025-10-26, but scripts/format.sh and local dev use unpinned cargo +nightly fmt. Align CI to unpinned nightly so format and lint agree. Trade-off: loses deterministic formatting across time.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* clamp work item batch size to 1

Co-authored-by: Sergey Timoshin <timoshin.sergey@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Swenschaeferjohann <swen@lightprotocol.com>
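To make the pool-based encoding above concrete, here is a hedged sketch of the deduplication step. All names are illustrative, and the real instruction encodes pool membership per proof as a u32 bitvec rather than the index arrays used here; only the level-by-level pooling idea is taken from the commit message.

```rust
// Illustrative sketch of pool-based proof deduplication as described
// above. Each of up to four 16-level Merkle proofs is rewritten as
// references into a shared pool of unique sibling nodes; nearby leaves
// share high-level siblings, so the pool stays well under 4 * 16 nodes
// (the instruction caps it at MAX_NODES = 27). The on-chain format uses
// a u32 bitvec per proof instead of these index arrays.
use std::collections::HashMap;

type Node = [u8; 32];

fn dedup_proofs(proofs: &[[Node; 16]]) -> (Vec<Node>, Vec<[usize; 16]>) {
    let mut pool: Vec<Node> = Vec::new();
    let mut seen: HashMap<Node, usize> = HashMap::new();
    let mut refs = Vec::with_capacity(proofs.len());
    for proof in proofs {
        let mut idxs = [0usize; 16];
        for (level, node) in proof.iter().enumerate() {
            // Store each unique node once; later proofs reuse its index.
            idxs[level] = *seen.entry(*node).or_insert_with(|| {
                pool.push(*node);
                pool.len() - 1
            });
        }
        refs.push(idxs);
    }
    (pool, refs)
}

fn main() {
    // Two adjacent leaves differ only in their level-0 sibling; all 15
    // higher nodes are shared, so the pool holds 17 nodes instead of 32.
    let mut a = [[0u8; 32]; 16];
    for (i, n) in a.iter_mut().enumerate() {
        n[0] = i as u8 + 1;
    }
    let mut b = a;
    b[0][0] = 99;
    let (pool, _refs) = dedup_proofs(&[a, b]);
    assert_eq!(pool.len(), 17);
}
```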
No description provided.