
Feature: Standardized benchmark suite to measure Python interface (dingo) overhead vs C++ (volesti) #127

@saksham-stack

Description


This is a migration of volesti#423.
Problem Statement: As discussed in volesti issue #423, there is currently no automated way to quantify the performance overhead or convergence differences (ESS, mixing time) introduced by the dingo Python interface compared to direct volesti C++ execution.

Goal: Establish a repeatable benchmarking suite within dingo to ensure that the Python wrapper remains performant and numerically consistent with the C++ core.

Proposed Implementation:

• Benchmark Script: A new module (e.g., `tests/benchmarks/`) that does the following (a minimal harness sketch follows this list):

1. Generates standard high-dimensional test geometries (unit cube, simplex).

2. Executes the same sampler (e.g., Coordinate Hit-and-Run) through both the dingo interface and a direct volesti call (if possible via the existing bindings).

3. Calculates and reports:

• Time per sample (wall-clock time).

• Effective Sample Size (ESS) comparison.

• Variance/Numerical stability metrics.
• CI Integration: (Optional) Explore adding a performance-regression check to the CI pipeline to catch future overhead spikes; a possible pytest-based guard is sketched below.
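To make the proposal concrete, here is a minimal sketch of what such a harness could look like. Every name in it (`unit_cube`, `simplex`, `ess`, `run_benchmark`) is a placeholder rather than an existing dingo or volesti symbol, the sampler under test is abstracted behind a plain callable `sample(A, b, n_samples)` so the sketch does not assume any particular binding API, and the ESS estimator is a generic FFT/autocorrelation one, not the estimator shipped with volesti.

```python
"""Benchmark harness sketch: H-representations for the test geometries,
a generic ESS estimate, and a driver that times a sampler callable."""
import time

import numpy as np


def unit_cube(d):
    """H-representation A x <= b of the cube [-1, 1]^d."""
    A = np.vstack([np.eye(d), -np.eye(d)])
    b = np.ones(2 * d)
    return A, b


def simplex(d):
    """H-representation of the standard simplex {x >= 0, sum(x) <= 1}."""
    A = np.vstack([-np.eye(d), np.ones((1, d))])
    b = np.concatenate([np.zeros(d), [1.0]])
    return A, b


def ess(chain):
    """Effective sample size of a 1-D chain: autocovariance via FFT,
    Geyer-style truncation at the first negative autocorrelation pair."""
    x = np.asarray(chain, dtype=float)
    n = x.size
    x = x - x.mean()
    spec = np.fft.rfft(x, 2 * n)
    acov = np.fft.irfft(spec * np.conj(spec))[:n] / n
    rho = acov / acov[0]
    tau = 1.0
    for k in range(1, n - 1, 2):
        pair = rho[k] + rho[k + 1]
        if pair < 0.0:          # stop summing at the first negative pair
            break
        tau += 2.0 * pair
    return n / tau


def run_benchmark(sample, A, b, n_samples=10_000):
    """Time one call of `sample` and summarize convergence metrics.

    `sample` is any callable (A, b, n_samples) -> ndarray of shape
    (dimension, n_samples); thin adapters around dingo and around the
    raw volesti bindings would both conform to this signature."""
    start = time.perf_counter()
    samples = sample(A, b, n_samples)
    elapsed = time.perf_counter() - start
    ess_per_coord = np.array([ess(row) for row in samples])
    return {
        "time_per_sample": elapsed / n_samples,
        "min_ess": float(ess_per_coord.min()),
        "mean_ess": float(ess_per_coord.mean()),
        "coord_variance": samples.var(axis=1),
    }
```

Wiring this up would then come down to two thin adapters with that callable signature, one invoking dingo's sampler and one invoking the volesti bindings directly, so both paths are measured by identical code.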
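For the optional CI piece, one lightweight option is a plain pytest test that compares freshly produced benchmark numbers against a committed baseline and fails on a large slowdown. The file names, JSON layout, and 25% tolerance below are assumptions chosen for illustration, not existing dingo conventions.

```python
# Sketch of a CI performance-regression guard.  Assumes a benchmark job has
# written `benchmark_results.json` with the same layout as the committed
# `tests/benchmarks/baseline.json`: {case_name: {"time_per_sample": seconds}}.
# Both file names and the tolerance are placeholder choices.
import json
import pathlib

RESULTS = pathlib.Path("benchmark_results.json")
BASELINE = pathlib.Path("tests/benchmarks/baseline.json")
TOLERANCE = 1.25  # fail if time per sample grows by more than 25%


def test_time_per_sample_has_not_regressed():
    current = json.loads(RESULTS.read_text())
    baseline = json.loads(BASELINE.read_text())
    for case, ref in baseline.items():
        assert current[case]["time_per_sample"] <= TOLERANCE * ref["time_per_sample"], (
            f"{case}: time per sample regressed by more than "
            f"{TOLERANCE - 1:.0%} relative to the baseline"
        )
```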

I am a prospective GSoC 2026 contributor. This task will help me understand the bridge between dingo's Pythonic layer and the high-performance C++ backend.
