No cloud. No API keys. No subscriptions. Full privacy.
Download · Getting Started · Features · Contributing
- **Prompt to page** — Describe what you want; the AI generates the entire website from scratch.
- **Template gallery** — Start from curated templates (landing pages, portfolios, blogs, e-commerce) and customize with natural language.
- **Chat editing** — "Make the header dark blue and add a CTA button" — done.
- **Inline editing** — Click any section and prompt changes for just that part.
- **Visual editing** — Edit text directly on the page — WYSIWYG, no code.
- **One-click deploy** — Publish to Netlify, Vercel, or GitHub Pages instantly, or export as a standalone HTML file.
Output is a single HTML file with inline CSS/JS — works everywhere.
```
You type a prompt
      |
      v
+------------+   Tauri IPC    +----------------+   HTTP    +----------------+
|  Frontend  | <============> |   Rust Core    | <=======> |   llama.cpp    |
|  React 19  |                |  (Tauri 2.0)   |           |   (local AI)   |
+------------+                +----------------+           +----------------+
      |                               |
      | postMessage                   | SQLite
      v                               v
+------------+                +----------------+
|  Preview   |                |    Storage     |
|  (iframe)  |                |  Projects, AI  |
+------------+                +----------------+
```
Everything runs locally. The AI model is downloaded once and stays on your machine.
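
To make the IPC arrow above concrete, here is a minimal sketch of the frontend calling the Rust core and receiving streamed tokens back as events. The command and event names are hypothetical; only the `@tauri-apps/api` calls are real.

```typescript
// Sketch of the IPC boundary. The command and event names ("generate_site",
// "ai-token") are hypothetical; only the @tauri-apps/api calls are real.
import { invoke } from "@tauri-apps/api/core";
import { listen } from "@tauri-apps/api/event";

async function generateSite(prompt: string, onToken: (t: string) => void) {
  // Tokens stream back from Rust as Tauri events while the command runs.
  const unlisten = await listen<string>("ai-token", (e) => onToken(e.payload));
  try {
    await invoke("generate_site", { prompt }); // resolves when the stream ends
  } finally {
    unlisten();
  }
}
```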
| Layer | Technology |
|---|---|
| Frontend | React 19 · TypeScript · Vite · Tailwind CSS 4 · Zustand |
| Desktop | Tauri 2.0 (Rust) |
| AI | llama.cpp sidecar · Qwen2.5-Coder-7B (3B fallback) |
| GPU | Metal (macOS) · CUDA / Vulkan (Windows) |
| Storage | SQLite (projects, chat history, snapshots, settings, templates) |
| Deploy | Netlify · Vercel · GitHub Pages · Local HTML export |
- Node.js 18+ — nodejs.org
- Rust — rustup.rs
- pnpm (recommended) or npm
**macOS extras**

```bash
xcode-select --install
```

**Windows extras**

Install Microsoft C++ Build Tools and WebView2.
```bash
git clone https://github.com/Szymon0C/Offpage.git
cd Offpage
pnpm install                    # or: npm install
bash scripts/setup-sidecar.sh
```

This downloads the pre-built llama-server binary (b7472) for your platform.

```bash
pnpm tauri dev                  # or: npx tauri dev
```

The app will open automatically. On first launch:
- Hardware detection runs automatically
- Choose a model (7B recommended for 16 GB+ RAM, 3B for 8 GB)
- Click Download & Start AI — one-time download from Hugging Face
- Start chatting!
```bash
pnpm tauri build                # or: npx tauri build
```

Builds a distributable `.dmg` (macOS) or `.msi` / `.exe` (Windows) in `src-tauri/target/release/bundle/`.
| Tier | Hardware | Experience |
|---|---|---|
| Minimum | 8 GB RAM, x64 (AVX2) or Apple Silicon | Works — CPU inference, slower |
| Recommended | 16 GB RAM, 6 GB+ VRAM or M1+ | Smooth generation |
| Optimal | 32 GB RAM, RTX 3060+ or M1 Pro+ | Fast, room for larger models |
The app detects your hardware and adjusts automatically (quantization, model size).
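
As a rough sketch of that adjustment logic (the real detection lives in `src-tauri/src/hardware.rs`), tier classification might look like the following; the thresholds come from the table above, and the `HardwareInfo` shape is an assumption:

```typescript
// Illustrative tier classification; the real detection lives in
// src-tauri/src/hardware.rs. Thresholds mirror the table above; the
// HardwareInfo shape is an assumption.
type Tier = "minimum" | "recommended" | "optimal";

interface HardwareInfo {
  ramGb: number;
  vramGb: number;        // 0 when no discrete GPU is detected
  appleSilicon: boolean;
}

function classifyTier(hw: HardwareInfo): Tier {
  if (hw.ramGb >= 32 && (hw.vramGb >= 8 || hw.appleSilicon)) return "optimal";
  if (hw.ramGb >= 16 && (hw.vramGb >= 6 || hw.appleSilicon)) return "recommended";
  return "minimum";
}

// Model choice follows the same rule of thumb: 7B for 16 GB+ RAM, else 3B.
function suggestModel(hw: HardwareInfo): string {
  return hw.ramGb >= 16
    ? "Qwen2.5-Coder-7B-Instruct"
    : "Qwen2.5-Coder-3B-Instruct";
}
```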
| Page | Description |
|---|---|
| Home | Create new projects or open recent ones |
| Project | Chat with AI, preview generated site, inline/WYSIWYG editing, deploy |
| Templates | Browse curated templates by category, preview, customize with AI |
| Settings | AI model status & control, deploy token management, about |
```
Offpage/
├── src/                          # React frontend
│   ├── components/
│   │   ├── chat/                 # ChatPanel, ChatInput, ChatMessage, ModelSetup
│   │   ├── deploy/               # DeployModal (Netlify, Vercel, GitHub Pages)
│   │   ├── layout/               # AppShell, Sidebar, TopBar
│   │   ├── preview/              # PreviewFrame, PreviewToolbar, InlineEditBar
│   │   ├── templates/            # TemplateCard, TemplatePreviewModal
│   │   ├── ui/                   # IconButton
│   │   └── ErrorBoundary.tsx     # Global error boundary
│   ├── stores/                   # Zustand state management
│   │   ├── aiStore.ts            # Sidecar status, hardware info, model download
│   │   ├── chatStore.ts          # Chat messages, streaming buffer
│   │   ├── deployStore.ts        # Deploy status, token management
│   │   ├── editorStore.ts        # Edit mode, viewport size, section selection
│   │   ├── projectStore.ts       # Projects CRUD, snapshots, deploy config
│   │   └── templateStore.ts      # Template loading, category filtering
│   ├── pages/                    # Route pages
│   │   ├── HomePage.tsx          # Project list + create
│   │   ├── ProjectPage.tsx       # Chat + preview + editing workspace
│   │   ├── TemplatesPage.tsx     # Template gallery with category filters
│   │   ├── SettingsPage.tsx      # AI model, deploy tokens, about
│   │   └── NotFoundPage.tsx      # 404 catch-all
│   ├── hooks/
│   │   └── useAiStream.ts        # SSE streaming from llama-server via Tauri events
│   ├── lib/
│   │   ├── prompts.ts            # System prompts for generate/edit/section modes
│   │   ├── bundledTemplates.ts   # 4 built-in HTML templates
│   │   ├── deployProviders.ts    # Provider metadata (Netlify, Vercel, GitHub Pages)
│   │   ├── helperScript.ts       # JS injected into preview iframe
│   │   ├── htmlSections.ts       # Section replace/tag utilities
│   │   └── iframeBridge.ts       # Typed postMessage protocol
│   ├── db/
│   │   ├── database.ts           # SQLite init, migrations, template seeding
│   │   └── migrations.ts         # Schema: projects, snapshots, chat, templates, settings
│   └── types/
│       └── project.ts            # TypeScript types for all DB entities
├── src-tauri/                    # Rust backend (Tauri 2.0)
│   └── src/
│       ├── lib.rs                # Plugin registration, command handler
│       ├── ai.rs                 # SSE streaming from llama-server
│       ├── sidecar.rs            # llama-server lifecycle (spawn, health check, kill)
│       ├── models.rs             # Model download with progress events
│       ├── deploy.rs             # Deploy to Netlify/Vercel/GitHub Pages + HTML export
│       └── hardware.rs           # RAM, CPU, GPU detection and tier classification
├── scripts/
│   └── setup-sidecar.sh          # Downloads pre-built llama-server binary
└── docs/                         # Specs, plans, assets
```
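
To illustrate the preview bridge (`helperScript.ts` plus `iframeBridge.ts` above), here is a minimal sketch of a typed postMessage protocol. The message shapes are assumptions, not Offpage's actual wire format:

```typescript
// A sketch of a typed postMessage protocol between the preview iframe and the
// host app. Message shapes are assumptions, not Offpage's actual wire format.
type BridgeMessage =
  | { type: "section-selected"; sectionId: string }
  | { type: "text-edited"; sectionId: string; html: string };

// Runs inside the injected helper script: report events up to the host.
function sendToHost(msg: BridgeMessage): void {
  window.parent.postMessage(msg, "*");
}

// Runs in the host app: receive typed messages from the preview iframe.
function listenToPreview(onMessage: (msg: BridgeMessage) => void): () => void {
  const handler = (event: MessageEvent<BridgeMessage>) => {
    const t = event.data?.type;
    if (t === "section-selected" || t === "text-edited") onMessage(event.data);
  };
  window.addEventListener("message", handler);
  return () => window.removeEventListener("message", handler);
}
```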
Offpage supports deploying generated websites to three hosting platforms, plus local HTML export:
| Provider | How it works |
|---|---|
| Netlify | Creates a site via API, deploys a zip with index.html. Reuses existing site on subsequent deploys. |
| Vercel | Posts base64-encoded HTML to the deployments API. |
| GitHub Pages | Creates a repo, uploads index.html, enables GitHub Pages. |
| Local export | Saves the HTML file to any location on your disk via system save dialog. |
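
As a concrete example of the Vercel row, a single-file site can be deployed with one POST to Vercel's public deployments API. This is a hedged sketch with a made-up project name; the shipped implementation lives in `src-tauri/src/deploy.rs` (Rust), not TypeScript:

```typescript
// Hedged sketch of the Vercel path in the table above, using Vercel's public
// deployments API. The project name is made up; the shipped implementation is
// Rust (src-tauri/src/deploy.rs), not this TypeScript.
async function deployToVercel(html: string, token: string): Promise<string> {
  const res = await fetch("https://api.vercel.com/v13/deployments", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name: "offpage-site", // hypothetical project name
      files: [
        // btoa assumes Latin-1 content; real code should UTF-8-encode first
        { file: "index.html", data: btoa(html), encoding: "base64" },
      ],
    }),
  });
  if (!res.ok) throw new Error(`Vercel deploy failed: ${res.status}`);
  const { url } = await res.json();
  return `https://${url}`;
}
```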
API tokens are stored locally in SQLite — never sent anywhere except the provider's API. Tokens can be managed in Settings.
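
A minimal sketch of what that local token storage can look like with Tauri's SQL plugin; the `settings` table and key naming are assumptions based on the schema listed in the project structure:

```typescript
// Hypothetical token storage via @tauri-apps/plugin-sql; the table and key
// names are assumptions, not Offpage's actual schema.
import Database from "@tauri-apps/plugin-sql";

async function saveDeployToken(provider: string, token: string): Promise<void> {
  const db = await Database.load("sqlite:offpage.db");
  // Upsert the token under a provider-scoped key, e.g. "token:netlify".
  await db.execute(
    "INSERT OR REPLACE INTO settings (key, value) VALUES ($1, $2)",
    [`token:${provider}`, token],
  );
}
```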
The app runs AI inference locally using llama.cpp as a sidecar process.
| Model | Size | For |
|---|---|---|
| Qwen2.5-Coder-7B-Instruct (Q4_0) | ~4.3 GB | 16 GB+ RAM — recommended |
| Qwen2.5-Coder-3B-Instruct (Q4_0) | ~2.0 GB | 8 GB RAM — minimum |
Models are downloaded from Hugging Face on first launch and stored in your app data directory. The app auto-detects previously downloaded models on subsequent launches.
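
For reference, llama-server exposes an OpenAI-compatible HTTP API, which is what the SSE streaming in `ai.rs` and `useAiStream.ts` builds on. Below is a minimal sketch of consuming that stream directly; port 8080 is llama-server's default, and the wiring details are assumptions:

```typescript
// Minimal sketch of consuming llama-server's OpenAI-compatible streaming API.
// Port 8080 is llama-server's default; how Offpage actually proxies this
// through Rust (src-tauri/src/ai.rs) and Tauri events is not shown here.
async function streamCompletion(prompt: string, onToken: (t: string) => void) {
  const res = await fetch("http://127.0.0.1:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [{ role: "user", content: prompt }],
      stream: true,
    }),
  });
  if (!res.ok || !res.body) throw new Error(`llama-server error: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next chunk
    for (const line of lines) {
      if (!line.startsWith("data: ") || line.includes("[DONE]")) continue;
      const delta = JSON.parse(line.slice(6)).choices?.[0]?.delta?.content;
      if (delta) onToken(delta);
    }
  }
}
```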
Contributions are welcome! The project is in active development.
```bash
# Fork & clone, then:
pnpm install
bash scripts/setup-sidecar.sh
pnpm tauri dev
```

Please open an issue before submitting large PRs so we can discuss the approach.
Coming soon — Early builds will be available once the core features are stable.
Star or watch this repo to get notified.
MIT — see LICENSE for details.
Built with Tauri, React, and local AI. No data leaves your device.