Cascada: The Parallel-First Scripting & Templating Engine for TypeScript Applications

Think Sequentially. Execute Concurrently.

Cascada is a data-orchestration engine for JavaScript and TypeScript applications, designed to make complex asynchronous workflows clear and low-boilerplate. It comes in two flavors sharing the same powerful execution model:

  • Cascada Script - A clean, Python- and JavaScript-like scripting language for orchestrating APIs, databases, and LLM calls
  • Cascada Template - Template syntax with the same parallel execution capabilities for dynamic content generation

Both share the same core philosophy: write clear, linear logic while the engine handles parallel execution, ordering guarantees, and error propagation automatically.

What makes them remarkable is precisely how unremarkable they appear. There's no special syntax, no await, no Promise.all - just clean code that is instantly familiar to developers and happens to execute concurrently.

The problem they solve: Traditional async/await often forces you to choose between readable sequential code and maximum concurrency. Cascada gives you both. Independent operations run in parallel automatically, while the engine manages dependencies and assembles results deterministically.
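For comparison, here is what that same two-call pattern looks like when orchestrated by hand in plain JavaScript (a minimal sketch; fetchUser and fetchSettings are illustrative stubs, not part of Cascada). Concurrency must be requested explicitly, and every call site has to know which values are promises:

```javascript
// Hand-rolled equivalent of the parallel-fetch pattern Cascada automates.
// fetchUser and fetchSettings are illustrative stubs.
const fetchUser = async (id) => ({ name: 'Alice', id });
const fetchSettings = async (id) => ({ theme: 'dark' });

async function buildGreeting() {
  // Without explicit Promise.all, the two awaits would serialize.
  const [user, settings] = await Promise.all([
    fetchUser(123),
    fetchSettings(123)
  ]);
  return { greeting: 'Hello, ' + user.name, theme: settings.theme };
}
```

With Cascada, neither await nor Promise.all appears in the script; the engine infers the same wiring from data dependencies.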

How it works:

  • Parallel by default - Independent operations execute concurrently without async, await, or promise management
  • 🚦 Data-driven execution - Operations run automatically when their inputs are ready, eliminating race conditions by design
  • ➡️ Sequential when needed - Use a simple ! marker to enforce ordering for side-effectful operations
  • 📋 Deterministic outputs - Concurrent execution, sequential results—final outputs assemble exactly as written
  • ☣️ Errors as data - Failures propagate through the dataflow without stopping unrelated work

Perfect for:

  • AI and LLM orchestration
  • Data pipelines and ETL workflows
  • Agent systems and multi-step planning
  • High-throughput I/O coordination

In short: Cascada is a data-orchestration layer that handles the complexity of parallel async execution while you focus on business logic—whether you're building data structures with Script or generating dynamic content with Templates.

The most up-to-date and complete information on the features of Cascada can be found in the Cascada Script Documentation!

Read First

Articles:

Learning by Example:

  • Casai Examples Repository - After reading the articles, explore practical examples showing how Cascada and Casai (an AI orchestration framework built on Cascada) turn complex agentic workflows into readable, linear code - no visual node graphs or async spaghetti, just clear logic that tells a story (work in progress)

Overview

⚡ Parallel by default

Cascada is a powerful engine for JavaScript and TypeScript applications, providing both a scripting language and a template syntax. It is designed to dramatically simplify complex, asynchronous workflows by fundamentally inverting the traditional programming model: instead of being sequential by default, Cascada is parallel by default.

🚦 Data-Driven Flow: Code runs when its inputs are ready.

The engine analyzes your code and automatically executes independent asynchronous operations - API calls, LLM requests, database queries - concurrently. It guarantees that operations wait for their required inputs before executing, an orchestration that eliminates race conditions by design and delivers high performance without the complexity and boilerplate of manual async handling.

✨ Implicit Concurrency: Write Business Logic, Not Async Plumbing.

Forget await. Forget .then(). Forget manually tracking which variables are promises and which are not. Cascada fundamentally changes how you interact with asynchronous operations by making them invisible. This "just works" approach means that while any variable can be a promise under the hood, you can pass it into functions, use it in expressions, and assign it without ever thinking about its async state.

🎭 One Engine, Two Modes

This parallel-first philosophy is the foundation for both of its powerful modes:

  • 🚀 A purpose-built scripting language for orchestrating complex data pipelines. Use it as the backbone for your data layer to compose complex workflows, wiring together LLMs, APIs, databases, and external services in parallel with maximum I/O throughput, all while keeping the logic clean and readable. It offers a complete toolset for sophisticated logic with variable declarations, conditionals, loops, and standard operators. Create reusable components with macros and build modular applications using import and extends.
  • 📜 A familiar template syntax - based on the popular Nunjucks template engine - for generating text-based output, ideal for dynamic websites, emails, or detailed LLM prompts.

➡️ Implicitly Parallel, Explicitly Sequential

While this "parallel-first" approach is powerful, Cascada recognizes that order is critical for operations with side effects. For these cases, such as writing to a database, interacting with a stateful API, or making an LLM request, you can use the simple ! marker to enforce a strict sequential order on a specific chain of operations, without affecting the parallelism of the rest of the code.

This inversion - Implicitly Parallel, Explicitly Sequential - is what makes Cascada so effective and intuitive.

📋 Execution is chaotic, but the result is orderly

While independent operations run in parallel and may start and complete in any order, Cascada guarantees the final output is identical to what you'd get from sequential execution. This means all your data manipulations are applied predictably, ensuring your final texts, arrays and objects are assembled in the exact order written in your script.

☣️ Dataflow Poisoning - Errors that flow like data

Cascada replaces traditional try/catch exceptions with a data-centric error model called dataflow poisoning. If an operation fails, it produces an Error Value that propagates to any dependent operation, variable, or output. For example, if fetchPosts() fails, any variable or output using its result also becomes an error - but critically, unrelated operations continue running unaffected. You can detect and repair these errors, providing fallbacks and logging without derailing your entire workflow.
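The detect-and-repair pattern described above can be sketched in Cascada Script (a hedged sketch; fetchPosts and log are illustrative context functions, and the empty-array fallback is invented for this example):

```
// fetchPosts() may fail; its Error Value flows into `posts`.
var posts = fetchPosts()

// Detect the poisoned value with `is error`, inspect details
// with the `#` operator, and repair it with a fallback.
if posts is error
  log("fetchPosts failed: " + posts#message)
  posts = []
endif

// Unrelated work elsewhere in the script ran unaffected.
return { postCount: posts.length }
```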

⚠️ Heads up! Cascada is a new project. You might run into bugs, and the documentation is catching up with the code. Your feedback and contributions are welcome as we build the future of asynchronous programming.

Quick Start

  1. Install Cascada:
    npm install cascada-engine
  2. Render a Cascada template:
    import { AsyncEnvironment } from 'cascada-engine';
    const env = new AsyncEnvironment();
    const result = await env.renderString('Hello, {{ name }}!', { name: 'World' });
    console.log(result); // Hello, World!
  3. Run a Cascada script:
    import { AsyncEnvironment } from 'cascada-engine';
    const env = new AsyncEnvironment();
    const script = `
      // fetchUser and fetchSettings run in parallel
      var user = fetchUser(123)
      var settings = fetchSettings(123)
      return { greeting: "Hello, " + user.name, theme: settings.theme }
    `;
    const context = {
      fetchUser: async (id) => ({ name: 'Alice', id }),
      fetchSettings: async (id) => ({ theme: 'dark' })
    };
    const result = await env.renderScriptString(script, context);
    console.log(result.greeting); // Hello, Alice
    console.log(result.theme);    // dark

Core Concepts (Deeper Dive)

At its core, Cascada offers a set of powerful features available in both its templating and scripting modes. The sections below go deeper into Cascada's execution model. You don't need to understand everything here to get started.

Automatic Parallelization

Cascada automatically identifies and executes independent operations concurrently, without any special syntax or configuration. Tasks that don't depend on each other run in parallel, dramatically speeding up I/O-bound workflows.

Cascada Script
// The fetchUser() and fetchConfig() calls
// are independent and will run in parallel.
var user = fetchUser(123)
var config = fetchSiteConfig()

// Waits for both to complete before use
return {
  greeting: "Welcome, " + user.name,
  theme: "Theme: " + config.theme
}
Cascada Template
{# fetchUser(), fetchConfig() are independent #}
{# and will run in parallel. #}
{% set user = fetchUser(123) %}
{% set config = fetchSiteConfig() %}

{# Waits for both to complete. #}
<p>Welcome, {{ user.name }}</p>
<p>Theme: {{ config.theme }}</p>

Data-Driven Flow

While independent operations run in parallel, Cascada ensures that dependent operations wait for their prerequisites. This guarantees correct execution order and produces results identical to sequential code, giving you the performance of parallelism with the predictability of a synchronous process.

Cascada Script
// getUser(), getFooter() run in parallel.
// getPosts(user.id) depends on `user`, so it
// waits for getUser() to complete
var user = getUser()
var posts = getPosts(user.id)
var footer = getFooter()

return "User: " + user.name
Cascada Template
{# getUser() and getFooter() run in parallel. #}
{# getPosts(user.id) waits for getUser() #}
{# to complete before starting. #}
{% set user = getUser() %}
{% set posts = getPosts(user.id) %}
{% set footer = getFooter() %}

<div class="user">User: {{ user.name }}</div>

Transparent Async Support

Work with promises, async functions, and async iterators as if they were synchronous values. Cascada automatically resolves them when they are needed, eliminating the need for await or promise-chaining syntax within your code.

Cascada Script
// fetchPost is an async function.
// fetchComments is an async iterator.
var post = fetchPost(42)

text output
// Waits for post to resolve, then iterates
// over the async comments iterator.
for comment in fetchComments(post.id)
  output(comment.author + ": " + comment.body)
endfor
return output.snapshot()
Cascada Template
{# fetchPost is an async function. #}
{# fetchComments is an async iterator. #}
{% set post = fetchPost(42) %}

<h1>{{ post.title }}</h1>
<ul>
  {# Iterates after post is resolved #}
  {# over the async comments iterator. #}
  {% for comment in fetchComments(post.id) %}
    <li>
{{ comment.author }}: {{ comment.body }}
    </li>
  {% endfor %}
</ul>
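On the JavaScript side, the async iterator consumed by these examples can be supplied through the context as a plain async generator (a hedged sketch; the comment data is invented for illustration):

```javascript
// Sketch of context functions backing the examples above.
// fetchComments is an async generator: Cascada's `for` loop
// consumes it item by item, with no explicit await in the script.
async function fetchPost(id) {
  return { id, title: 'Post ' + id };
}

async function* fetchComments(postId) {
  const comments = [
    { author: 'Ada', body: 'Great post!' },
    { author: 'Bob', body: 'Thanks for sharing.' }
  ];
  for (const comment of comments) {
    yield comment; // each yield feeds one loop iteration
  }
}
```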

Sequential Execution Control (!)

For functions with side effects (e.g., database writes), the ! marker enforces a sequential execution order for a specific object path. Once a path is marked, all subsequent access on that path waits for the preceding operation to complete - reads and side-effect-free calls join the sequence without needing their own ! - while other independent operations continue to run in parallel.

Cascada Script
// 'account' is provided via context
// The `!` on deposit() creates a
// sequence for the 'account' path.

//1. Set initial Deposit:
account!.deposit(100)
//2. Get new status after the deposit:
account.getStatus()
//3. Withdraw money after getStatus()
account!.withdraw(50)
Cascada Template
{# 'account' is provided via context #}
{# `account!` creates a sequence for the path #}

{% do account!.deposit(100) %}
{% do account.getStatus() %}
{% do account!.withdraw(50) %}

Declarative Data Assembly (Channels)

Cascada provides channel types (data, text, sink, sequence) for declarative, ordered output assembly. Channel writes execute concurrently as soon as their inputs are ready, but the final assembled result always matches source-code order—giving you the performance of parallelism with the predictability of sequential code.

The data channel is particularly powerful for building structured objects and arrays from parallel loops. All writes run concurrently, and the assembled result is always in source order.

Cascada Script
// Assume fetchProductDetails for
// ID 205 is the slowest.
var productIds = [101, 205, 302]
data report
report.totalReviews = 0 // Initialize

// Each iteration runs in parallel.
for id in productIds
  // fetch concurrently:
  var details = fetchProductDetails(id)
  var reviews = fetchProductReviews(id)

  // The `report.products` array is
  // built in the order of `productIds`
  // [101, 205, 302], not the order in
  // which the data resolves.
  report.products.push({
    id: details.id,
    name: details.name,
    reviewCount: reviews.length
  })
  report.totalReviews += reviews.length
endfor
return report.snapshot()
Cascada Template (Predictable Output)
{# The HTML is assembled sequentially, #}
{# regardless of which fetch finishes first #}
<div class="slow-data">
  {{ fetchSlowData() }}
</div>
{# This div will always render second. #}
<div class="fast-data">
  {{ fetchFastData() }}
</div>

External Stateful Objects (sink)

The sink channel wraps an external stateful object and applies its method calls in source-code order, even when the surrounding code runs in parallel. Use it when the important thing is sending ordered commands to an object, not getting a return value from each call.

Stateful Object (JavaScript)
// Turtle graphics on an HTML5 Canvas
class CanvasTurtle {
  constructor(canvas) {
    this.ctx = canvas.getContext('2d');
    this.x = this.ctx.canvas.width / 2;
    this.y = this.ctx.canvas.height / 2;
    this.angle = -90; // Start pointing up
  }
  begin() {
    this.ctx.beginPath();
    this.ctx.moveTo(this.x, this.y);
  }
  forward(dist) {
    const rad = this.angle * (Math.PI / 180);
    this.x += dist * Math.cos(rad);
    this.y += dist * Math.sin(rad);
    this.ctx.lineTo(this.x, this.y);
  }
  turn(deg) {
    this.angle = (this.angle + deg) % 360;
  }
  stroke(color) {
    this.ctx.strokeStyle = color ?? 'white';
    this.ctx.stroke();
  }
}
Using a sink (Cascada Script)
// Draw an 8-sided star using canvas
// 'canvasTurtle' is provided via context
sink turtle = canvasTurtle

turtle.begin()
for i in range(8)
  turtle.forward(60)
  turtle.turn(135)
endfor
turtle.stroke('cyan')
// Provide the turtle instance
env.renderScriptString(script, {
  canvasTurtle: new CanvasTurtle(
    document.querySelector('canvas')
  )
});

Macros for Reusable Components

Macros allow you to define reusable chunks of logic. In templates, they're great for repeated UI components. In scripts, they can perform complex, parallel async operations internally and return a clean, structured data object, making them the primary way to build modular, data-generating components.

Cascada Script (Data-Building Macro)
// Fetches a user's details and recent
// activity in parallel to build a summary.
macro buildUserSummary(userId)
  // Run three async calls concurrently
  var details = fetchUserDetails(userId)
  var posts = fetchUserPosts(userId)
  var comments = fetchUserComments(userId)

  // Return a composed result object
  return {
    name: details.name,
    postCount: posts.length,
    commentCount: comments.length
  }
endmacro

// Get both summaries in parallel:
var user1 = buildUserSummary(101)
var user2 = buildUserSummary(102)

// Compose the final report.
return {
    report: {
      user1Summary: user1,
      user2Summary: user2
    }
  }
Cascada Template (UI Macro)
{#
  This macro generates a user profile widget.
  It works with a user object (which could be a promise)
  and fetches additional related data in parallel.
#}
{% macro profile_widget(user) %}
  <div class="profile-widget">
    <h2>{{ user.name }}</h2>
    <ul>
      {# These two fetches run in parallel #}
      <li>Followers:
{{ fetchStats(user.id).followerCount }}
      </li>
      <li>Latest Post:
"{{ fetchLatestPost(user.id).title }}"
      </li>
    </ul>
  </div>
{% endmacro %}

{# Fetch user data in parallel #}
{% set userA = fetchUser(1) %}
{% set userB = fetchUser(2) %}

{# Render widgets. Each widget will internally #}
{# perform its own parallel data fetches. #}
{{ profile_widget(userA) }}
{{ profile_widget(userB) }}

Resilient Error Handling

Handle runtime errors gracefully with guard/recover. This structure lets you attempt risky operations and automatically restore selected state on failure, providing a clean recovery path. The is error test detects failures, and the # operator inspects error details.

Cascada Script
var result
guard
  // Attempt a fallible operation
  var image = generateImage(prompt)
  result = { imageUrl: image.url }
recover err
  // Handle failure
  // guarded state already restored
  result = { error: "Failed: " + err#message }
endguard
return result
Cascada Template
{% guard %}
  {# Attempt a fallible operation #}
  {% set image = generateImage(prompt) %}
  <img src="{{ image.url }}" />
{% recover %}
  <p class="error">Image generation failed.</p>
{% endguard %}

Two Modes, One Engine

Cascada's parallel-first core powers two distinct syntaxes, each tailored for a different primary purpose.

Data-First: Cascada Script

For logic-heavy tasks, data pipelines, and AI agent orchestration, Cascada Script offers a cleaner, delimiter-free syntax. It maintains all of Cascada's parallelization capabilities and adds channels (data, text, sink, sequence) for structured output assembly.

  • Clean, delimiter-free syntax
  • Channels for structured output: data, text, sink, sequence
  • Focus on logic and orchestration
AI Orchestration Example
// 1. Generate a plan with an LLM call.
data result
var plan = makePlan(
  "Analyze competitor's new feature")
result.plan = plan

// 2. Each step runs in parallel.
for step in plan.steps
  var stepResult =
    executeStep(step.instruction)
  result.stepResults.push({
    step: step.title,
    result: stepResult
  })
endfor

// 3. Summarize the results once complete
result.summary = summarize(
  result.snapshot().stepResults)
return result.snapshot()

Text-First: Cascada Template

As a superset of the popular Nunjucks engine, Cascada provides a familiar, feature-rich syntax ideal for generating text-based output like HTML, or for crafting complex LLM prompts by dynamically embedding data.

  • Full programming constructs: if, for, set
  • Reusable UI components: {% macro %}
  • Complex expressions and filters
AI Prompt Generation Example
Analyze the following meeting transcript
and generate a summary.

MEETING CONTEXT:
- Topic: {{ fetchMeetingTopic(meetingId) }}
- Attendees: {{ fetchAttendees(meetingId) | join(", ") }}

TRANSCRIPT:
{{ fetchTranscript(meetingId) }}

KEY DECISIONS TO IDENTIFY:
{% for objective in ["Product Launch", "Budget"] %}
- Decisions related to: {{ objective }}
{% endfor %}

Based on the transcript, extract action items.

Simple and Powerful API

Cascada provides a straightforward, promise-based API for rendering templates and scripts. Use the AsyncEnvironment class to get started. For production, you can improve performance by precompiling your templates and scripts to JavaScript files, eliminating the parsing overhead at runtime.

Executing a Script
import { AsyncEnvironment } from 'cascada-engine';

const env = new AsyncEnvironment();
const script = `
  // The 'user' promise resolves automatically
  return "Hello, " + user.name
`;
const context = {
  // Pass in an async function or a promise
  user: fetchUser(123)
};

const result = await env.renderScriptString(
  script, context
);
// 'Hello, Alice'
console.log(result);
Rendering a Template
import { AsyncEnvironment } from 'cascada-engine';

const env = new AsyncEnvironment();
const tpl = '<h1>Hello {{ username }}</h1>';
const context = {
  username: Promise.resolve('World')
};

const html = await env.renderString(
  tpl,
  context
);
// <h1>Hello World</h1>
console.log(html);

Built for AI Workflows

Cascada's parallel-first engine and data-driven flow make it the ideal foundation for orchestrating complex AI workflows. The Casai library builds on this power, providing a high-level, intuitive API for wiring together LLMs, APIs, and data transformations. By integrating with the Vercel AI SDK Core, Casai lets you define sophisticated, multi-step agents using Cascada's scripting and templating.

Here's a short example of a self-improving agent built with Casai:

import { openai } from '@ai-sdk/openai';
import { create } from 'casai';
import { z } from 'zod';

const baseConfig = create.Config({model: openai('gpt-4o')});

const draftGenerator = create.TextGenerator({
    prompt: 'Write a short, engaging blog post about {{ topic }}.',
}, baseConfig );

const critiqueGenerator = create.ObjectGenerator({
    schema: z.object({
        score: z.number().describe('Quality score from 1-10.'),
        suggestions: z.array(z.string()).describe('Actionable suggestions for improvement.'),
    }),
    prompt: 'Critique this blog post: {{ draft }}',
}, baseConfig);

const revisionGenerator = create.TextGenerator({
    prompt: 'Rewrite the following post based on these suggestions:\n\nPOST:\n{{ draft }}\n\nSUGGESTIONS:\n- {{ suggestions | join("\n- ") }}',
}, baseConfig);

// Define the orchestration script for the agent
const contentAgent = create.Script({
    context: {
      draftGenerator, critiqueGenerator, revisionGenerator,
      topic: "the future of AI-powered development",
      qualityThreshold: 8, maxRevisions: 3, minRevisions: 1
    },
    script: `
      var revisionCount = 0
      var currentDraft = draftGenerator({ topic: topic }).text
      var critique = critiqueGenerator({ draft: currentDraft }).object

      // Iteratively revise until the quality threshold or maxRevisions is met
      while (critique.score < qualityThreshold or revisionCount < minRevisions) and revisionCount < maxRevisions
        revisionCount = revisionCount + 1
        currentDraft = revisionGenerator({ draft: currentDraft, suggestions: critique.suggestions }).text
        critique = critiqueGenerator({ draft: currentDraft }).object
      endwhile

      return { finalDraft: currentDraft, finalScore: critique.score, revisionCount: revisionCount }`,
});

// Run the agent
const result = await contentAgent();
console.log(JSON.stringify(result, null, 2));

Further Reading

Development Status and Roadmap

See the roadmap section in the Cascada Script documentation: https://geleto.github.io/cascada-script/#development-status-and-roadmap

About

Concurrency-first scripting and templating engine for AI and data workflows. Operations run concurrently based on data dependencies, with deterministic results.
