PCR is a high-performance, task-based concurrency runtime for Python that combines a flexible API with a powerful execution engine. It automatically builds a DAG (Directed Acyclic Graph) from your task dependencies and executes each task on the most suitable executor, chosen from characteristics such as CPU load, I/O wait, or the presence of a distributed environment.
- Smart Scheduling: Automatically detects task types (CPU-bound vs I/O-bound) and dispatches them to Thread, Process, or Remote Workers.
- Resilience: Built-in support for retries, timeouts, and automatic worker isolation.
- Advanced Visualization: Automatically generates execution timelines (Gantt charts) and DAG structures (Mermaid charts).
- Async Integration: Transparently handles `async def` functions, enabling high-efficiency I/O parallelism on a single thread.
- Distributed Execution: Scalable distributed processing via a lightweight FastAPI-based worker server.
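The async integration point comes down to overlapping many I/O waits on one event loop. The effect is easy to demonstrate with standard-library asyncio alone (a standalone sketch, independent of PCR):

```python
import asyncio
import time

async def fetch(i):
    await asyncio.sleep(0.1)  # stands in for a network round-trip
    return f"data {i}"

async def main():
    # Five coroutines await concurrently on a single thread.
    return await asyncio.gather(*(fetch(i) for i in range(5)))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
# The five 0.1 s waits overlap, so total wall time stays near 0.1 s, not 0.5 s.
```

Running the same five calls sequentially would take roughly the sum of the waits; on one event loop they cost little more than the longest single wait.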
```python
from pcr import task, run, parallel
import time

@task
def fetch_data(id):
    time.sleep(1)
    return f"Data {id}"

@task(cpu_bound=True)
def process_data(data):
    # Heavy computation executed in a ProcessPool;
    # `data` is the list of upstream results gathered by parallel()
    return [d.upper() for d in data]

# Build the DAG
t1 = fetch_data(1)
t2 = fetch_data(2)
combined = process_data(parallel([t1, t2]))

# Execute with visual monitoring
result = run(combined, monitor=True, export_timeline="timeline.txt")
print(result)
```

Install from source:

```
pip install .
```

Keep your runtime healthy with pytest:
```
pytest
```

- `pcr/`: Core source code.
- `tests/`: Comprehensive test suite using `pytest`.
- `examples/`: Sample code for various use cases (Distributed, Async, etc.).
MIT License - feel free to use and contribute!