Quanton Operator is a Kubernetes operator that extends kubeflow/spark-operator to run Apache Spark jobs using the Quanton compute engine by Onehouse. Quanton is a purpose-built query execution engine that delivers 2-3x better price-performance for ETL workloads.
The operator provides a seamless migration path — submit your existing Spark jobs as QuantonSparkApplication resources and the operator handles the rest.
- Kubernetes >= 1.28
- Helm >= 3.x
- kubectl configured for your cluster
- Spark Operator (v1.x or v2.x) installed on the cluster
- An Onehouse account with Quanton access
- Network access to `*.onehouse.ai` and `*.docker.io`
- Obtain your `onehouse-values.yaml` from the Onehouse console.
- Install the operator:
```shell
helm upgrade --install quanton-operator oci://registry-1.docker.io/onehouseai/quanton-operator \
  --namespace quanton-operator \
  --create-namespace \
  -f onehouse-values.yaml
```
- Submit a sample Spark job:
```shell
kubectl apply -f examples/quanton-application.yaml
```
- Verify the job output:
```shell
kubectl logs -f quanton-spark-pi-java-example-driver | grep -i "pi is"
```

Expected output:

```
Pi is roughly 3.1416568
```
For a step-by-step walkthrough including local setup with minikube, see the Getting Started guide.
- QuantonSparkApplication CRD — Declarative API for submitting Spark jobs to Quanton. Wraps the standard `SparkApplication` spec with automatic image injection, JWT token management, and lifecycle tracking.
- Airflow Integration — Native Airflow provider for orchestrating Quanton Spark jobs from your DAGs.
- Namespace Isolation — Run Spark jobs in dedicated namespaces with scoped RBAC and secret management. See Configuration.
- Observability — Built-in metrics collection via OpenTelemetry. See Metrics.
- Security — mTLS, JWT token protection, and least-privilege RBAC. See Security.
| Document | Description |
|---|---|
| Getting Started | Local setup with minikube and first job submission |
| Configuration Reference | All Helm chart parameters |
| Airflow Provider | Orchestrating Quanton jobs from Apache Airflow |
| Metrics | Telemetry and metrics collection |
| Security | Network security, mTLS, JWT token protection, and RBAC |
| Helm Chart | Chart-specific installation and upgrade instructions |
| Memory configurations | Understanding and configuring memory for Quanton |
A minimal QuantonSparkApplication resource:
```yaml
apiVersion: onehouse.ai/v1beta2
kind: QuantonSparkApplication
metadata:
  name: my-spark-job
  namespace: default
spec:
  sparkApplicationSpec:
    type: Java
    mode: cluster
    image: "apache/spark:3.5.0"
    mainClass: org.apache.spark.examples.JavaSparkPi
    mainApplicationFile: "local:///opt/spark/examples/jars/calculate-pi-example_2.12-3.5.0.jar"
    sparkVersion: "3.5.0"
    driver:
      cores: 1
      memory: "1024m"
      serviceAccount: spark-operator-spark
    executor:
      cores: 1
      instances: 2
      memory: "1024m"
```

See examples/ for more samples.
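If you generate manifests programmatically before handing them to a Kubernetes client, the minimal resource above maps directly to a plain dict. This is an illustrative sketch only: the field names mirror the example manifest (not a published schema), and the `build_pi_job` helper is hypothetical.

```python
# Illustrative sketch: build the minimal QuantonSparkApplication above as a dict.
# Field names mirror the example manifest; build_pi_job is a hypothetical helper.

def build_pi_job(name: str, namespace: str = "default", instances: int = 2) -> dict:
    """Return the minimal QuantonSparkApplication manifest as a dict."""
    return {
        "apiVersion": "onehouse.ai/v1beta2",
        "kind": "QuantonSparkApplication",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "sparkApplicationSpec": {
                "type": "Java",
                "mode": "cluster",
                "image": "apache/spark:3.5.0",
                "mainClass": "org.apache.spark.examples.JavaSparkPi",
                "mainApplicationFile": "local:///opt/spark/examples/jars/calculate-pi-example_2.12-3.5.0.jar",
                "sparkVersion": "3.5.0",
                "driver": {"cores": 1, "memory": "1024m", "serviceAccount": "spark-operator-spark"},
                "executor": {"cores": 1, "instances": instances, "memory": "1024m"},
            }
        },
    }
```

Serializing this dict with a YAML library yields a manifest equivalent to the one shown above.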
The scripts/ directory contains a CLI utility (Python) to convert existing SparkApplication CRDs to QuantonSparkApplication format:
```shell
# Python
python scripts/transform.py -input my-spark-app.yaml -output my-quanton-app.yaml
```

The tool validates the input, rewrites `apiVersion` and `kind`, and nests `spec` under `spec.sparkApplicationSpec`. See scripts/INSTRUCTIONS.md for details.
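Conceptually, the conversion the tool performs can be sketched as follows. This is not the actual `scripts/transform.py` implementation, just an illustration of the three steps described above (validate, rewrite `apiVersion`/`kind`, nest the spec), operating on a manifest already parsed into a dict:

```python
# Conceptual sketch of the SparkApplication -> QuantonSparkApplication conversion.
# Not the real scripts/transform.py; input is a manifest parsed into a dict.

def to_quanton(app: dict) -> dict:
    """Convert a SparkApplication dict to QuantonSparkApplication form."""
    # Validate: the input must be a SparkApplication with a spec.
    if app.get("kind") != "SparkApplication" or "spec" not in app:
        raise ValueError("input is not a SparkApplication with a spec")
    return {
        "apiVersion": "onehouse.ai/v1beta2",       # rewritten apiVersion
        "kind": "QuantonSparkApplication",         # rewritten kind
        "metadata": app.get("metadata", {}),       # metadata carried over
        # The original spec is nested under spec.sparkApplicationSpec.
        "spec": {"sparkApplicationSpec": app["spec"]},
    }
```

The real CLI additionally handles YAML parsing and serialization of the input and output files.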
If you have Claude Code installed, you can set up and demo Quanton interactively from the terminal. The repo ships with two skills (slash commands) that automate the full setup and benchmarking workflow.
Start Claude Code in the repo root:
```shell
claude
```

Then use either skill:
| Skill | What it does |
|---|---|
| `/setup-and-run-example` | Sets up minikube, installs Spark Operator + Quanton Operator, and runs a sample SparkPi job end-to-end. Walks you through each step interactively. |
| `/run-tpcds-benchmark` | Runs the TPC-DS read benchmark (99 queries on Parquet) comparing OSS Spark vs Quanton. Asks you for scale factor and configuration, gives live progress updates, and produces a per-query comparison table and chart. |
Both skills check prerequisites, handle errors, and give you live progress updates as jobs run on your local minikube cluster. You will need `onehouse-values.yaml` (from the Onehouse console) to install the Quanton Operator.
A local benchmark setup is provided here to try Quanton on your Mac or Linux development machine.
Note on Apple Silicon: The default ARM build of Quanton is optimized for recent ARM architectures such as AWS Graviton. It may fail on older Macs that don't support the latest instruction sets. We suggest using a real Spark setup in those cases.
For industry-standard benchmarks, please refer to the following resources.
- Onehouse Quanton vs the latest AWS EMR for Apache Spark Workloads
- Apache Iceberg on Quanton: 3x Faster Apache Spark Workloads
Join the Onehouse Community Slack to connect directly with engineers building Quanton.
- Announcing Apache Spark and SQL on the Onehouse Compute Runtime with Quanton
- Quanton LinkedIn Live Event
For information about versions of quanton-operator and quanton images, please check out versioning.
See data collection.
Copyright Onehouse, Inc. All rights reserved.