Python bindings for a high-performance CFD simulation library, built on the CPython C API with the stable ABI (abi3).
- High Performance: Direct bindings to an optimized C library with SIMD (AVX2/NEON) and OpenMP support
- Stable ABI: Compatible across Python 3.9+ versions
- Multiple Backends: Scalar, SIMD, OpenMP, and CUDA (GPU) backends with runtime detection
- Boundary Conditions: Full BC support (Neumann, Dirichlet, no-slip, inlet, outlet)
- Dynamic Solver Discovery: New solvers registered in the C library become available automatically
- Multiple Output Formats: VTK and CSV export support
- Error Handling: Rich exception hierarchy with detailed error messages
uv is a fast Python package manager. Install it with:
```bash
# On macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# On Windows
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

# Or with pip
pip install uv
```

Using uv (recommended):

```bash
uv pip install cfd-python
```

Or with pip:

```bash
pip install cfd-python
```

The Python package requires the C CFD library to be built first. By default, it expects the library at `../cfd` relative to the `cfd-python` directory. You can override this by setting the `CFD_ROOT` environment variable.
- Build the C CFD library (static):

  ```bash
  cd ../cfd
  cmake -B build -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=OFF
  cmake --build build --config Release
  ```

- Install the Python package:

  ```bash
  cd ../cfd-python
  pip install .
  ```

  Or with uv for faster installs:

  ```bash
  uv pip install .
  ```

  With a custom library location:

  ```bash
  CFD_ROOT=/path/to/cfd pip install .
  ```
We use uv for fast dependency management:
```bash
# Create virtual environment
uv venv

# Activate it
source .venv/bin/activate      # Linux/macOS
source .venv/Scripts/activate  # Windows (Git Bash)
.venv\Scripts\activate         # Windows (cmd)

# Install with dev dependencies
uv pip install -e ".[test,dev]"
```

Alternatively, using pip:

```bash
pip install -e ".[test,dev]"
```

Basic usage:

```python
import cfd_python
# List available solvers
print(cfd_python.list_solvers())
# ['explicit_euler', 'explicit_euler_optimized', 'projection', ...]
# Run a simple simulation
velocity_magnitude = cfd_python.run_simulation(50, 50, steps=100)
print(f"Computed {len(velocity_magnitude)} velocity values")
# Create a grid
grid = cfd_python.create_grid(100, 100, 0.0, 1.0, 0.0, 1.0)
print(f"Grid: {grid['nx']}x{grid['ny']}")
# Get default solver parameters
params = cfd_python.get_default_solver_params()
print(f"Default dt: {params['dt']}")
```

run_simulation(nx, ny, steps=100, xmin=0.0, xmax=1.0, ymin=0.0, ymax=1.0, solver_type=None, output_file=None)
Run a complete simulation with default parameters.
Parameters:
- `nx`, `ny`: Grid dimensions
- `steps`: Number of time steps (default: 100)
- `xmin`, `xmax`, `ymin`, `ymax`: Domain bounds (optional)
- `solver_type`: Solver name string (optional, uses library default)
- `output_file`: VTK output file path (optional)
Returns: List of velocity magnitude values
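A sketch of a typical call (the solver name and output path here are illustrative):

```python
import cfd_python

# 100x100 grid on the default [0, 1] x [0, 1] domain, writing VTK output
vel_mag = cfd_python.run_simulation(
    100, 100, steps=500,
    solver_type="explicit_euler",
    output_file="cavity.vtk",
)
print(f"Computed {len(vel_mag)} velocity values")
```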
run_simulation_with_params(nx, ny, xmin, xmax, ymin, ymax, steps=1, dt=0.001, cfl=0.2, solver_type=None, output_file=None)
Run simulation with custom parameters and solver selection.
Parameters:
- `nx`, `ny`: Grid dimensions
- `xmin`, `xmax`, `ymin`, `ymax`: Domain bounds
- `steps`: Number of time steps (default: 1)
- `dt`: Time step size (default: 0.001)
- `cfl`: CFL number (default: 0.2)
- `solver_type`: Solver name string (optional, uses library default)
- `output_file`: VTK output file path (optional)
Returns: Dictionary with velocity_magnitude, nx, ny, steps, solver_name, solver_description, and stats
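For example (a sketch assuming keyword arguments as in the signature above; parameter values are illustrative):

```python
import cfd_python

result = cfd_python.run_simulation_with_params(
    64, 64, 0.0, 1.0, 0.0, 1.0,
    steps=200, dt=0.0005, cfl=0.2,
    solver_type="projection",
)
print(result["solver_name"], result["solver_description"])
print(f"{len(result['velocity_magnitude'])} values after {result['steps']} steps")
```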
create_grid(nx, ny, xmin, xmax, ymin, ymax)
Create a computational grid.
Parameters:
- `nx`, `ny`: Grid dimensions
- `xmin`, `xmax`, `ymin`, `ymax`: Domain bounds
Returns: Dictionary with nx, ny, xmin, xmax, ymin, ymax, x_coords, y_coords
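The returned coordinate arrays can be read directly; a small sketch using the fields listed above:

```python
import cfd_python

grid = cfd_python.create_grid(5, 5, 0.0, 1.0, 0.0, 1.0)
print(grid["nx"], grid["ny"])  # 5 5
print(grid["x_coords"])        # x-coordinates spanning [0.0, 1.0]
```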
Create a grid with non-uniform (stretched) spacing.
Parameters:
- `nx`, `ny`: Grid dimensions
- `xmin`, `xmax`, `ymin`, `ymax`: Domain bounds
- `beta`: Stretching parameter (higher = more clustering)
Returns: Dictionary with grid info including x_coords, y_coords
Note: The stretched grid implementation has a known bug - see ROADMAP.md for details.
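Usage mirrors `create_grid()`; since the Python-level name is not shown above, `create_grid_stretched` in this sketch is a hypothetical placeholder:

```python
import cfd_python

# Hypothetical binding name -- check dir(cfd_python) for the actual one
grid = cfd_python.create_grid_stretched(
    100, 100, 0.0, 1.0, 0.0, 1.0, beta=2.0
)
print(grid["x_coords"][:5])  # spacing clusters according to beta
```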
get_default_solver_params()
Get default solver parameters.
Returns: Dictionary with keys: dt, cfl, gamma, mu, k, max_iter, tolerance
list_solvers()
Get a list of all available solver names.
Returns: List of solver name strings
Check if a solver is available.
Returns: Boolean
Get information about a solver.
Returns: Dictionary with name, description, version, capabilities
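These discovery functions make solver selection data-driven. A sketch using only the calls shown in the quick start (the preferred solver name is illustrative):

```python
import cfd_python

# Fall back to the library default if the preferred solver is absent
preferred = "explicit_euler_optimized"
solver = preferred if preferred in cfd_python.list_solvers() else None

vel = cfd_python.run_simulation(50, 50, steps=10, solver_type=solver)
```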
Solver constants are automatically generated from the registry:

```python
cfd_python.SOLVER_EXPLICIT_EULER            # 'explicit_euler'
cfd_python.SOLVER_EXPLICIT_EULER_OPTIMIZED  # 'explicit_euler_optimized'
cfd_python.SOLVER_PROJECTION                # 'projection'
# ... more solvers as registered in C library
```
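Each constant is just the solver's name string, so it can be passed anywhere a solver name is accepted; a sketch based on the `run_simulation()` signature above:

```python
import cfd_python

# Using a constant instead of a raw string avoids typos
vel = cfd_python.run_simulation(
    50, 50, steps=100,
    solver_type=cfd_python.SOLVER_PROJECTION,
)
```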
Query and select compute backends at runtime:

```python
import cfd_python

# Check available backends
print(cfd_python.get_available_backends())
# ['Scalar', 'SIMD', 'OpenMP']  # CUDA if GPU available

# Check specific backend
if cfd_python.backend_is_available(cfd_python.BACKEND_SIMD):
    print("SIMD backend available!")

# Get backend name
print(cfd_python.backend_get_name(cfd_python.BACKEND_OMP))  # 'OpenMP'

# List solvers for a backend
omp_solvers = cfd_python.list_solvers_by_backend(cfd_python.BACKEND_OMP)
print(f"OpenMP solvers: {omp_solvers}")
```

Backend Constants:
- `BACKEND_SCALAR`: Basic scalar CPU implementation
- `BACKEND_SIMD`: SIMD-optimized (AVX2/NEON)
- `BACKEND_OMP`: OpenMP parallelized
- `BACKEND_CUDA`: CUDA GPU acceleration
Apply various boundary conditions to flow fields:
```python
import cfd_python

# Create velocity field (as flat lists)
nx, ny = 50, 50
u = [0.0] * (nx * ny)
v = [0.0] * (nx * ny)

# Apply uniform inlet on left edge
cfd_python.bc_apply_inlet_uniform(
    u, v, nx, ny,
    u_inlet=1.0, v_inlet=0.0,
    edge=cfd_python.BC_EDGE_LEFT,
)

# Apply zero-gradient outlet on right edge
cfd_python.bc_apply_outlet_velocity(u, v, nx, ny, cfd_python.BC_EDGE_RIGHT)

# Apply no-slip walls on top and bottom
cfd_python.bc_apply_noslip(u, v, nx, ny)

# Check/set BC backend
print(f"BC Backend: {cfd_python.bc_get_backend_name()}")
if cfd_python.bc_backend_available(cfd_python.BC_BACKEND_OMP):
    cfd_python.bc_set_backend(cfd_python.BC_BACKEND_OMP)
```

BC Type Constants:
- `BC_TYPE_PERIODIC`: Periodic boundaries
- `BC_TYPE_NEUMANN`: Zero-gradient boundaries
- `BC_TYPE_DIRICHLET`: Fixed-value boundaries
- `BC_TYPE_NOSLIP`: No-slip wall (zero velocity)
- `BC_TYPE_INLET`: Inlet velocity specification
- `BC_TYPE_OUTLET`: Outlet conditions
BC Edge Constants:
`BC_EDGE_LEFT`, `BC_EDGE_RIGHT`, `BC_EDGE_BOTTOM`, `BC_EDGE_TOP`
BC Backend Constants:
- `BC_BACKEND_AUTO`: Auto-select best available
- `BC_BACKEND_SCALAR`: Single-threaded scalar
- `BC_BACKEND_OMP`: OpenMP parallel
- `BC_BACKEND_SIMD`: SIMD + OpenMP (AVX2/NEON)
- `BC_BACKEND_CUDA`: GPU acceleration
BC Functions:
- `bc_apply_scalar(field, nx, ny, bc_type)`: Apply BC to a scalar field
- `bc_apply_velocity(u, v, nx, ny, bc_type)`: Apply BC to velocity fields
- `bc_apply_dirichlet(field, nx, ny, left, right, bottom, top)`: Fixed values
- `bc_apply_noslip(u, v, nx, ny)`: Zero velocity at walls
- `bc_apply_inlet_uniform(u, v, nx, ny, u_inlet, v_inlet, edge)`: Uniform inlet
- `bc_apply_inlet_parabolic(u, v, nx, ny, max_velocity, edge)`: Parabolic inlet
- `bc_apply_outlet_scalar(field, nx, ny, edge)`: Zero-gradient outlet (scalar field)
- `bc_apply_outlet_velocity(u, v, nx, ny, edge)`: Zero-gradient outlet (velocity)
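The scalar-field helpers follow the same pattern; a sketch using the signatures listed above (the pressure field and BC values are illustrative):

```python
import cfd_python

nx, ny = 50, 50
p = [0.0] * (nx * ny)  # pressure field as a flat list

# Fixed values per edge: left, right, bottom, top
cfd_python.bc_apply_dirichlet(p, nx, ny, 1.0, 0.0, 0.0, 0.0)

# Or a generic BC by type, e.g. zero-gradient on all edges
cfd_python.bc_apply_scalar(p, nx, ny, cfd_python.BC_TYPE_NEUMANN)
```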
Compute derived quantities from flow fields:
```python
import cfd_python

# Compute velocity magnitude
u = [1.0] * 100
v = [0.5] * 100
vel_mag = cfd_python.compute_velocity_magnitude(u, v, 10, 10)

# Calculate field statistics
stats = cfd_python.calculate_field_stats(vel_mag)
print(f"Min: {stats['min']}, Max: {stats['max']}, Avg: {stats['avg']}")

# Comprehensive flow statistics
p = [0.0] * 100  # pressure field
flow_stats = cfd_python.compute_flow_statistics(u, v, p, 10, 10)
print(f"Max velocity: {flow_stats['velocity_magnitude']['max']}")
```

Detect SIMD capabilities at runtime:
```python
import cfd_python

# Check SIMD architecture
arch = cfd_python.get_simd_arch()
name = cfd_python.get_simd_name()  # 'avx2', 'neon', or 'none'

# Check specific capabilities
if cfd_python.has_avx2():
    print("AVX2 available!")
elif cfd_python.has_neon():
    print("ARM NEON available!")

# General SIMD check
if cfd_python.has_simd():
    print(f"SIMD enabled: {name}")
```

SIMD Constants:
- `SIMD_NONE`: No SIMD support
- `SIMD_AVX2`: x86-64 AVX2
- `SIMD_NEON`: ARM NEON
Handle errors with Python exceptions:
```python
import cfd_python
from cfd_python import (
    CFDError,
    CFDMemoryError,
    CFDInvalidError,
    CFDDivergedError,
    raise_for_status,
)

# Check status codes
status = cfd_python.get_last_status()
if status != cfd_python.CFD_SUCCESS:
    error_msg = cfd_python.get_last_error()
    print(f"Error: {error_msg}")

# Use raise_for_status helper
try:
    raise_for_status(status, context="simulation step")
except CFDDivergedError as e:
    print(f"Solver diverged: {e}")
except CFDInvalidError as e:
    print(f"Invalid parameter: {e}")
except CFDError as e:
    print(f"CFD error: {e}")

# Clear error state
cfd_python.clear_error()
```

Error Constants:
- `CFD_SUCCESS`: Operation successful (0)
- `CFD_ERROR`: Generic error (-1)
- `CFD_ERROR_NOMEM`: Out of memory (-2)
- `CFD_ERROR_INVALID`: Invalid argument (-3)
- `CFD_ERROR_IO`: File I/O error (-4)
- `CFD_ERROR_UNSUPPORTED`: Operation not supported (-5)
- `CFD_ERROR_DIVERGED`: Solver diverged (-6)
- `CFD_ERROR_MAX_ITER`: Max iterations reached (-7)
Exception Classes:
- `CFDError`: Base exception class
- `CFDMemoryError(CFDError, MemoryError)`: Memory allocation failed
- `CFDInvalidError(CFDError, ValueError)`: Invalid argument
- `CFDIOError(CFDError, IOError)`: File I/O error
- `CFDUnsupportedError(CFDError, NotImplementedError)`: Unsupported operation
- `CFDDivergedError(CFDError)`: Solver diverged
- `CFDMaxIterError(CFDError)`: Max iterations reached
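Because each class also inherits from the matching built-in (as listed above), existing handlers for standard exceptions catch them too; a minimal sketch:

```python
from cfd_python import CFDInvalidError

try:
    raise CFDInvalidError("nx must be positive")
except ValueError as e:
    # CFDInvalidError subclasses ValueError, so this handler catches it
    print(f"Caught as ValueError: {e}")
```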
Set the output directory for VTK/CSV files.
Write a scalar field to a VTK file.
Write a vector field to a VTK file.
write_csv_timeseries(filename, step, time, u_data, v_data, p_data, nx, ny, dt, iterations, create_new=False)
Write simulation timeseries data to CSV file.
Parameters:
- `filename`: Output file path
- `step`: Time step number
- `time`: Simulation time
- `u_data`, `v_data`, `p_data`: Flow field data as lists (size nx*ny)
- `nx`, `ny`: Grid dimensions
- `dt`: Time step size
- `iterations`: Number of solver iterations
- `create_new`: If True, create a new file; if False, append
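A sketch of one output step using the signature above (field values are illustrative; arguments are passed positionally):

```python
import cfd_python

nx, ny = 10, 10
u = [1.0] * (nx * ny)
v = [0.0] * (nx * ny)
p = [0.0] * (nx * ny)

cfd_python.write_csv_timeseries(
    "timeseries.csv",  # filename
    0,                 # step
    0.0,               # time
    u, v, p,           # u_data, v_data, p_data
    nx, ny,
    0.001,             # dt
    1,                 # iterations
    True,              # create_new: first call creates the file
)
```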
```python
cfd_python.OUTPUT_VELOCITY_MAGNITUDE  # Velocity magnitude scalar (VTK)
cfd_python.OUTPUT_VELOCITY            # Velocity vector field (VTK)
cfd_python.OUTPUT_FULL_FIELD          # Complete flow field (VTK)
cfd_python.OUTPUT_CSV_TIMESERIES      # Time series (CSV)
cfd_python.OUTPUT_CSV_CENTERLINE      # Centerline profile (CSV)
cfd_python.OUTPUT_CSV_STATISTICS      # Global statistics (CSV)
```
If you're upgrading from an older version, note these changes:

- Type names changed (internal, transparent to Python users)
- Error handling improved: use `get_last_error()`, `get_last_status()`, `clear_error()`
- New solver backends: SIMD, OpenMP, CUDA available via `backend_is_available()`
- Boundary conditions: full BC API with inlet/outlet support
- Backend selection: query and select compute backends at runtime
- Derived fields: `compute_velocity_magnitude()`, `compute_flow_statistics()`
- Error handling: Python exception classes with `raise_for_status()`
- CPU detection: `has_avx2()`, `has_neon()`, `has_simd()`
See MIGRATION_PLAN.md for detailed migration information.
- Python 3.9+
- uv (recommended) or pip
For building from source:
- C compiler (MSVC on Windows, GCC/Clang on Unix)
- CMake 3.15+
MIT License