2,779 changes: 2,779 additions & 0 deletions sandbox-daytona/Cargo.lock

Large diffs are not rendered by default.

52 changes: 52 additions & 0 deletions sandbox-daytona/Cargo.toml
@@ -0,0 +1,52 @@
[package]
name = "iii-sandbox-daytona"
version = "0.1.0"
edition = "2021"
license = "Apache-2.0"
repository = "https://github.com/iii-hq/workers"
authors = ["iii contributors"]
publish = false

[lib]
name = "sandbox_daytona"
path = "src/lib.rs"

[[bin]]
name = "iii-sandbox-daytona"
path = "src/main.rs"

[dependencies]
iii-sdk = "=0.11.6"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
serde_yaml = "0.9"
tokio = { version = "1", features = ["rt-multi-thread", "macros", "sync", "signal"] }
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls"] }
anyhow = "1"
thiserror = "1"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["fmt", "env-filter"] }
clap = { version = "4", features = ["derive"] }
async-trait = "0.1"
base64 = "0.22"

[dev-dependencies]
wiremock = "0.6"
tokio = { version = "1", features = ["rt-multi-thread", "macros", "test-util"] }

[lints.rust]
unsafe_code = "forbid"

[lints.clippy]
all = { level = "warn", priority = -1 }
pedantic = { level = "warn", priority = -1 }
module_name_repetitions = "allow"
must_use_candidate = "allow"
missing_errors_doc = "allow"
missing_panics_doc = "allow"
too_many_lines = "allow"
unused_async = "allow"
uninlined_format_args = "allow"
needless_pass_by_value = "allow"
similar_names = "allow"
match_same_arms = "allow"
52 changes: 52 additions & 0 deletions sandbox-daytona/README.md
@@ -0,0 +1,52 @@
# sandbox-daytona

Narrow iii worker that wraps [Daytona](https://daytona.io) sandboxes via Daytona's REST API. Daytona ships sub-90 ms container starts (Docker-class isolation by default; Kata or Sysbox when configured). Registers the canonical `sandbox::*` ABI under the `sandbox::provider::daytona::*` namespace so callers can spawn and drive Daytona sandboxes through `iii.trigger(...)` without depending on Daytona's SDK.

The same ABI is implemented by every sandbox provider worker in this repo (`sandbox-e2b`, `sandbox-morph`, `sandbox-vercel`, `sandbox-modal`, `sandbox-cf`, ...). Callers swap providers by changing the function-id prefix; capability negotiation tells callers which optional functions a given provider supports.

## Functions

| Function id | Purpose |
|---|---|
| `sandbox::provider::daytona::create` | Boot a sandbox and return `{sandbox_id, image, capabilities}` |
| `sandbox::provider::daytona::exec` | Run a command inside a live sandbox |
| `sandbox::provider::daytona::stop` | Tear down a sandbox |
| `sandbox::provider::daytona::list` | Enumerate live sandboxes plus concurrency status |
| `sandbox::provider::daytona::snapshot` | Pause a sandbox into a resumable snapshot |
| `sandbox::provider::daytona::expose_port` | Return a public URL for a port inside the sandbox |
| `sandbox::provider::daytona::fs::read` | Read a file out of the sandbox |
| `sandbox::provider::daytona::fs::write` | Write a file into the sandbox |

`create` advertises capabilities `["snapshot", "expose_port", "fs"]`. `branch` is not registered — callers that depend on branching should prefer `sandbox-morph`.

## Configuration

`config.yaml` next to the binary, or pass `--config <path>`:

```yaml
api_base: "https://app.daytona.io/api"
api_key_env: DAYTONA_API_KEY
max_concurrent_sandboxes: 10
default_idle_timeout_secs: 300
image_allowlist: [] # empty = allow all
```

`DAYTONA_API_KEY` must be present in the environment when the worker starts. The worker fails fast if it cannot read the variable named by `api_key_env`.

## S-codes

Provider failures map onto a stable code space shared with the rest of the sandbox worker family:

| Code | Cause |
|---|---|
| `S100` | Image not in `image_allowlist` |
| `S400` | Concurrency cap reached |
| `S404` | Capability not supported (e.g. caller invoked `branch`) |
| `S500` | Provider returned 429 (rate-limited) |
| `S501` | Provider returned 402 / quota exhausted |
| `S502` | Provider returned 5xx |
| `S503` | Provider returned 401 / 403 (auth invalid or expired) |

## Status

v0.1 ships the function registrations, types, error mapping, concurrency cap, and a smoke test. `create`, `stop`, and `list` talk to the real REST endpoints; `exec`, `snapshot`, `expose_port`, and the `fs` calls are still stubbed and return `S502` until the next iteration wires them up. The ABI is stable.
13 changes: 13 additions & 0 deletions sandbox-daytona/iii.worker.yaml
@@ -0,0 +1,13 @@
iii: v1
name: sandbox-daytona
language: rust
deploy: binary
manifest: Cargo.toml
bin: iii-sandbox-daytona
description: Narrow iii worker that exposes Daytona sandboxes via the sandbox::provider::daytona::* trigger family.
config:
api_base: "https://app.daytona.io/api"
api_key_env: DAYTONA_API_KEY
max_concurrent_sandboxes: 10
default_idle_timeout_secs: 300
image_allowlist: []
214 changes: 214 additions & 0 deletions sandbox-daytona/src/client.rs
@@ -0,0 +1,214 @@
//! Narrow reqwest wrapper for the Daytona REST API. Holds the base URL, API key,
//! and a small helper for building requests. `create`, `stop`, and `list` are
//! wired to the live endpoints; `exec`, `snapshot`, `expose_port`, and the fs
//! calls are still stubbed pending a verified pass against the live Daytona
//! API and return `WorkerError::ProviderUnavailable` with a TODO marker.

use reqwest::Client;
use serde::{Deserialize, Serialize};

use crate::WorkerError;

#[derive(Debug, Clone)]
pub struct DaytonaClient {
pub api_base: String,
pub api_key: String,
pub http: Client,
}

impl DaytonaClient {
pub fn new(api_base: impl Into<String>, api_key: impl Into<String>) -> Self {
Self {
api_base: api_base.into(),
api_key: api_key.into(),
http: Client::builder()
.user_agent("iii-sandbox-daytona/0.1")
.build()
.expect("reqwest client"),
}
}

fn url(&self, path: &str) -> String {
format!("{}{}", self.api_base.trim_end_matches('/'), path)
}

fn auth(&self, req: reqwest::RequestBuilder) -> reqwest::RequestBuilder {
req.bearer_auth(&self.api_key)
}

/// Boot a new Daytona sandbox. POST /sandbox.
/// Body: `{snapshot?, autoStopInterval}`. `autoStopInterval` is in
/// **minutes** (we round up from seconds). When `image` is empty or
/// not a registered Daytona snapshot name we omit the field and let
/// Daytona pick its default (`daytonaio/sandbox:0.6.0`).
/// Response: `{id, snapshot, state, createdAt, ...}`.
pub async fn create(
&self,
image: &str,
idle_timeout_secs: u64,
) -> Result<CreatedSandbox, WorkerError> {
// Daytona expects whole minutes. Round up so callers always get
// at least the lifetime they asked for.
let auto_stop_minutes = idle_timeout_secs.div_ceil(60).max(1);
let mut body = serde_json::json!({
"autoStopInterval": auto_stop_minutes,
});
if !image.is_empty() && image != "default" {
body["snapshot"] = serde_json::Value::String(image.to_string());
}
let resp = self
.auth(self.http.post(self.url("/sandbox")))
.json(&body)
.send()
.await
.map_err(|e| WorkerError::ProviderUnavailable(format!("send failed: {e}")))?;
let status = resp.status().as_u16();
if !(200..300).contains(&status) {
let text = resp.text().await.unwrap_or_default();
return Err(crate::map_http_status(status, &text));
}
let parsed: DaytonaSandbox = resp
.json()
.await
.map_err(|e| WorkerError::ProviderUnavailable(format!("parse failed: {e}")))?;
Ok(CreatedSandbox {
sandbox_id: parsed.id,
image: parsed.snapshot.unwrap_or_default(),
started_at: unix_now_secs(),
})
}

pub async fn exec(
&self,
sandbox_id: &str,
cmd: &str,
args: &[String],
timeout_ms: Option<u64>,
) -> Result<ExecResult, WorkerError> {
let _ = (sandbox_id, cmd, args, timeout_ms);
// Daytona's process API runs through /sandbox/{id}/process which
// streams stdout/stderr; wiring it cleanly needs a streaming
// parser. Left for the next iteration — the worker still
// registers the function so callers see a consistent ABI.
Err(WorkerError::ProviderUnavailable(
"TODO: wire Daytona /sandbox/{id}/process streaming exec".to_string(),
))
}

/// Tear down a sandbox. DELETE /sandbox/{id}.
///
/// 404 and 409 are both treated as success — `stop` is idempotent
/// from the caller's view, and Daytona uses 409 when a previous
/// delete is still mid-flight (the sandbox is on its way out, just
/// not done yet). Surfacing either as S502 was the bug a live test
/// caught: a second `stop` racing the platform's async deletion
/// would falsely look like a provider failure and leak the in-flight
/// counter.
pub async fn stop(&self, sandbox_id: &str) -> Result<(), WorkerError> {
let resp = self
.auth(
self.http
.delete(self.url(&format!("/sandbox/{sandbox_id}"))),
)
.send()
.await
.map_err(|e| WorkerError::ProviderUnavailable(format!("send failed: {e}")))?;
let status = resp.status().as_u16();
if (200..300).contains(&status) || status == 404 || status == 409 {
return Ok(());
}
let text = resp.text().await.unwrap_or_default();
Err(crate::map_http_status(status, &text))
}

pub async fn list(&self) -> Result<Vec<crate::SandboxRecord>, WorkerError> {
let resp = self
.auth(self.http.get(self.url("/sandbox")))
.send()
.await
.map_err(|e| WorkerError::ProviderUnavailable(format!("send failed: {e}")))?;
let status = resp.status().as_u16();
if !(200..300).contains(&status) {
let text = resp.text().await.unwrap_or_default();
return Err(crate::map_http_status(status, &text));
}
let parsed: Vec<DaytonaSandbox> = resp
.json()
.await
.map_err(|e| WorkerError::ProviderUnavailable(format!("parse failed: {e}")))?;
Ok(parsed
.into_iter()
.map(|it| crate::SandboxRecord {
sandbox_id: it.id,
image: it.snapshot.unwrap_or_default(),
started_at: it.created_at_iso.unwrap_or_default(),
})
.collect())
}

pub async fn snapshot(&self, sandbox_id: &str) -> Result<String, WorkerError> {
let _ = sandbox_id;
Err(WorkerError::ProviderUnavailable(
"TODO: wire Daytona pause/snapshot endpoint".to_string(),
))
}

pub async fn expose_port(&self, sandbox_id: &str, port: u16) -> Result<String, WorkerError> {
let _ = (sandbox_id, port);
Err(WorkerError::ProviderUnavailable(
"TODO: derive Daytona port URL".to_string(),
))
}

pub async fn fs_read(&self, sandbox_id: &str, path: &str) -> Result<Vec<u8>, WorkerError> {
let _ = (sandbox_id, path);
Err(WorkerError::ProviderUnavailable(
"TODO: wire Daytona fs read".to_string(),
))
}

pub async fn fs_write(
&self,
sandbox_id: &str,
path: &str,
bytes: &[u8],
mode: Option<u32>,
) -> Result<(), WorkerError> {
let _ = (sandbox_id, path, bytes, mode);
Err(WorkerError::ProviderUnavailable(
"TODO: wire Daytona fs write".to_string(),
))
}
}

#[derive(Debug, Clone, Deserialize)]
struct DaytonaSandbox {
id: String,
snapshot: Option<String>,
/// Daytona returns ISO8601 in `createdAt`. We pass it through as a
/// string in `SandboxRecord`. `CreatedSandbox.started_at` uses the
/// worker's local clock at response time; close enough.
#[serde(rename = "createdAt", default)]
created_at_iso: Option<String>,
}

fn unix_now_secs() -> i64 {
use std::time::{SystemTime, UNIX_EPOCH};
SystemTime::now()
.duration_since(UNIX_EPOCH)
.map_or(0, |d| i64::try_from(d.as_secs()).unwrap_or(0))
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CreatedSandbox {
pub sandbox_id: String,
pub image: String,
pub started_at: i64,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ExecResult {
pub stdout: String,
pub stderr: String,
pub exit_code: i32,
pub timed_out: bool,
}
60 changes: 60 additions & 0 deletions sandbox-daytona/src/config.rs
@@ -0,0 +1,60 @@
use std::path::Path;

use anyhow::Result;
use serde::Deserialize;

#[derive(Debug, Clone, Deserialize)]
pub struct Config {
#[serde(default = "default_api_base")]
pub api_base: String,
#[serde(default = "default_api_key_env")]
pub api_key_env: String,
#[serde(default = "default_max_concurrent")]
pub max_concurrent_sandboxes: usize,
#[serde(default = "default_idle_timeout")]
pub default_idle_timeout_secs: u64,
#[serde(default)]
pub image_allowlist: Vec<String>,
}

fn default_api_base() -> String {
"https://app.daytona.io/api".to_string()
}
fn default_api_key_env() -> String {
"DAYTONA_API_KEY".to_string()
}
fn default_max_concurrent() -> usize {
10
}
fn default_idle_timeout() -> u64 {
300
}

impl Default for Config {
fn default() -> Self {
Self {
api_base: default_api_base(),
api_key_env: default_api_key_env(),
max_concurrent_sandboxes: default_max_concurrent(),
default_idle_timeout_secs: default_idle_timeout(),
image_allowlist: Vec::new(),
}
}
}

impl Config {
pub fn load(path: &Path) -> Result<Self> {
let raw = std::fs::read_to_string(path)?;
// Workers in this repo ship config.yaml whose top-level keys are the
// worker config fields directly OR wrapped under `config:`. Try both.
let value: serde_yaml::Value = serde_yaml::from_str(&raw)?;
if let Some(inner) = value.get("config") {
return Ok(serde_yaml::from_value(inner.clone())?);
}
Ok(serde_yaml::from_value(value)?)
}

pub fn image_allowed(&self, image: &str) -> bool {
self.image_allowlist.is_empty() || self.image_allowlist.iter().any(|i| i == image)
}
}