

Offpage



Generate websites with AI — entirely on your device.

No cloud. No API keys. No subscriptions. Full privacy.




Offpage Preview



Download  ·  Getting Started  ·  Features  ·  Contributing



Features

Prompt to page — Describe what you want, AI generates the entire website from scratch.

Template gallery — Start from curated templates (landing pages, portfolios, blogs, e-commerce) and customize with natural language.

Chat editing — "Make the header dark blue and add a CTA button" — done.

Inline editing — Click any section, prompt changes just for that part.

Visual editing — Edit text directly on the page — WYSIWYG, no code.

One-click deploy — Publish to Netlify, Vercel, or GitHub Pages instantly. Or export as a standalone HTML file.

Output is a single HTML file with inline CSS/JS — works everywhere.
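
Because the output is a single HTML file, inline editing can work by tagging sections and swapping just one of them. A minimal sketch of that idea, assuming `data-section` attributes on flat (non-nested) sections — the function name is hypothetical, not Offpage's actual `htmlSections` API:

```typescript
// Hypothetical section-level replacement for a single-file HTML page.
// Sections carry data-section IDs; an AI edit rewrites one section and
// leaves the rest of the document untouched.
function replaceSection(html: string, id: string, newSection: string): string {
  // Match the <section> carrying data-section="<id>" through its closing tag.
  // The lazy body match assumes non-nested sections, which is enough for
  // flat landing pages.
  const pattern = new RegExp(
    `<section\\b[^>]*data-section="${id}"[^>]*>[\\s\\S]*?</section>`
  );
  return html.replace(pattern, newSection);
}

const page =
  '<section data-section="hero"><h1>Hi</h1></section>' +
  '<section data-section="footer"><p>Bye</p></section>';

const edited = replaceSection(
  page,
  "hero",
  '<section data-section="hero"><h1>Hello</h1></section>'
);
```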


How It Works

  You type a prompt
         |
         v
  +-------------+    Tauri IPC    +----------------+    HTTP    +----------------+
  |  Frontend   | <=============> |   Rust Core    | <========> |  llama.cpp     |
  |  React 19   |                 |   (Tauri 2.0)  |            |  (local AI)    |
  +-------------+                 +----------------+            +----------------+
         |                                 |
         | postMessage                     | SQLite
         v                                 v
  +-------------+                 +----------------+
  |  Preview    |                 |    Storage     |
  |  (iframe)   |                 |  Projects, AI  |
  +-------------+                 +----------------+

Everything runs locally. The AI model is downloaded once and stays on your machine.
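
On the frontend side, token streaming mostly reduces to appending chunks into a buffer until the stream ends. A sketch of that accumulation step — an assumption about how a streaming chat buffer like the one in `chatStore` might work, not its actual code:

```typescript
// Hypothetical reducer for a streaming chat buffer: each chunk from the
// local llama-server appends to the in-progress assistant reply, and a
// "done" event finalizes it.
interface StreamState {
  buffer: string; // tokens received so far for the current reply
  done: boolean;  // whether the stream has finished
}

type StreamEvent =
  | { kind: "chunk"; text: string }
  | { kind: "done" };

function reduceStream(state: StreamState, event: StreamEvent): StreamState {
  switch (event.kind) {
    case "chunk":
      return { buffer: state.buffer + event.text, done: false };
    case "done":
      return { ...state, done: true };
  }
}

const events: StreamEvent[] = [
  { kind: "chunk", text: "<!doctype " },
  { kind: "chunk", text: "html>" },
  { kind: "done" },
];
let state: StreamState = { buffer: "", done: false };
for (const ev of events) state = reduceStream(state, ev);
```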


Tech Stack

| Layer    | Technology |
|----------|------------|
| Frontend | React 19 · TypeScript · Vite · Tailwind CSS 4 · Zustand |
| Desktop  | Tauri 2.0 (Rust) |
| AI       | llama.cpp sidecar · Qwen2.5-Coder-7B (3B fallback) |
| GPU      | Metal (macOS) · CUDA / Vulkan (Windows) |
| Storage  | SQLite (projects, chat history, snapshots, settings, templates) |
| Deploy   | Netlify · Vercel · GitHub Pages · Local HTML export |

Getting Started

Prerequisites

- Node.js with pnpm (or npm)
- Rust toolchain (via rustup), required by Tauri

macOS extras

xcode-select --install

Windows extras

Install Microsoft C++ Build Tools and WebView2.

Clone & Install

git clone https://github.com/Szymon0C/Offpage.git
cd Offpage
pnpm install        # or: npm install

Download the AI sidecar

bash scripts/setup-sidecar.sh

This downloads the pre-built llama-server binary (b7472) for your platform.

Run in development

pnpm tauri dev      # or: npx tauri dev

The app will open automatically. On first launch:

  1. Hardware detection runs automatically
  2. Choose a model (7B recommended for 16 GB+ RAM, 3B for 8 GB)
  3. Click Download & Start AI — one-time download from Hugging Face
  4. Start chatting!

Build for production

pnpm tauri build    # or: npx tauri build

Builds a distributable .dmg (macOS) or .msi / .exe (Windows) in src-tauri/target/release/bundle/.


Hardware Requirements

| Hardware | Experience |
|----------|------------|
| Minimum: 8 GB RAM, x64 (AVX2) or Apple Silicon | Works — CPU inference, slower |
| Recommended: 16 GB RAM, 6 GB+ VRAM or M1+ | Smooth generation |
| Optimal: 32 GB RAM, RTX 3060+ or M1 Pro+ | Fast, room for larger models |

The app detects your hardware and adjusts automatically (quantization, model size).
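
That kind of tier classification can be sketched as a pure function over detected RAM/VRAM. The thresholds below mirror the table above, but the function and its exact cutoffs are illustrative assumptions, not the logic in `hardware.rs`:

```typescript
// Hypothetical hardware-tier picker mirroring the requirements table.
type Tier = "minimum" | "recommended" | "optimal";

interface Hardware {
  ramGb: number;
  vramGb: number; // 0 when there is no usable discrete GPU
}

function classify(hw: Hardware): Tier {
  if (hw.ramGb >= 32 && hw.vramGb >= 8) return "optimal";
  if (hw.ramGb >= 16 && hw.vramGb >= 6) return "recommended";
  return "minimum";
}

// The tier then drives model choice: 7B for recommended and above,
// the 3B fallback otherwise.
function modelFor(tier: Tier): string {
  return tier === "minimum"
    ? "Qwen2.5-Coder-3B-Instruct"
    : "Qwen2.5-Coder-7B-Instruct";
}
```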


App Pages

| Page | Description |
|------|-------------|
| Home | Create new projects or open recent ones |
| Project | Chat with AI, preview generated site, inline/WYSIWYG editing, deploy |
| Templates | Browse curated templates by category, preview, customize with AI |
| Settings | AI model status & control, deploy token management, about |

Project Structure

Offpage/
├── src/                          # React frontend
│   ├── components/
│   │   ├── chat/                 # ChatPanel, ChatInput, ChatMessage, ModelSetup
│   │   ├── deploy/               # DeployModal (Netlify, Vercel, GitHub Pages)
│   │   ├── layout/               # AppShell, Sidebar, TopBar
│   │   ├── preview/              # PreviewFrame, PreviewToolbar, InlineEditBar
│   │   ├── templates/            # TemplateCard, TemplatePreviewModal
│   │   ├── ui/                   # IconButton
│   │   └── ErrorBoundary.tsx     # Global error boundary
│   ├── stores/                   # Zustand state management
│   │   ├── aiStore.ts            # Sidecar status, hardware info, model download
│   │   ├── chatStore.ts          # Chat messages, streaming buffer
│   │   ├── deployStore.ts        # Deploy status, token management
│   │   ├── editorStore.ts        # Edit mode, viewport size, section selection
│   │   ├── projectStore.ts       # Projects CRUD, snapshots, deploy config
│   │   └── templateStore.ts      # Template loading, category filtering
│   ├── pages/                    # Route pages
│   │   ├── HomePage.tsx          # Project list + create
│   │   ├── ProjectPage.tsx       # Chat + preview + editing workspace
│   │   ├── TemplatesPage.tsx     # Template gallery with category filters
│   │   ├── SettingsPage.tsx      # AI model, deploy tokens, about
│   │   └── NotFoundPage.tsx      # 404 catch-all
│   ├── hooks/
│   │   └── useAiStream.ts        # SSE streaming from llama-server via Tauri events
│   ├── lib/
│   │   ├── prompts.ts            # System prompts for generate/edit/section modes
│   │   ├── bundledTemplates.ts   # 4 built-in HTML templates
│   │   ├── deployProviders.ts    # Provider metadata (Netlify, Vercel, GitHub Pages)
│   │   ├── helperScript.ts       # JS injected into preview iframe
│   │   ├── htmlSections.ts       # Section replace/tag utilities
│   │   └── iframeBridge.ts       # Typed postMessage protocol
│   ├── db/
│   │   ├── database.ts           # SQLite init, migrations, template seeding
│   │   └── migrations.ts         # Schema: projects, snapshots, chat, templates, settings
│   └── types/
│       └── project.ts            # TypeScript types for all DB entities
├── src-tauri/                    # Rust backend (Tauri 2.0)
│   └── src/
│       ├── lib.rs                # Plugin registration, command handler
│       ├── ai.rs                 # SSE streaming from llama-server
│       ├── sidecar.rs            # llama-server lifecycle (spawn, health check, kill)
│       ├── models.rs             # Model download with progress events
│       ├── deploy.rs             # Deploy to Netlify/Vercel/GitHub Pages + HTML export
│       └── hardware.rs           # RAM, CPU, GPU detection and tier classification
├── scripts/
│   └── setup-sidecar.sh          # Downloads pre-built llama-server binary
└── docs/                         # Specs, plans, assets
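
The `iframeBridge.ts` entry above describes a typed postMessage protocol between the app and the preview iframe. A minimal sketch of what such a protocol can look like — the message names here are hypothetical, not Offpage's actual ones:

```typescript
// Hypothetical typed postMessage protocol between the app and the preview
// iframe: a discriminated union plus a runtime guard keeps both sides honest.
type BridgeMessage =
  | { type: "select-section"; sectionId: string }            // user clicked a section
  | { type: "update-text"; sectionId: string; text: string } // WYSIWYG edit
  | { type: "reload" };

function isBridgeMessage(data: unknown): data is BridgeMessage {
  if (typeof data !== "object" || data === null) return false;
  const t = (data as { type?: unknown }).type;
  return t === "select-section" || t === "update-text" || t === "reload";
}

// Receiving side (sketch): ignore anything that doesn't match the protocol.
function handle(raw: unknown): BridgeMessage | null {
  return isBridgeMessage(raw) ? raw : null;
}
```

A guard like this matters because a window `message` listener receives events from any origin, so untyped data must be validated before use.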

Deploy

Offpage supports deploying generated websites to three platforms:

| Provider | How it works |
|----------|--------------|
| Netlify | Creates a site via API, deploys a zip with index.html. Reuses the existing site on subsequent deploys. |
| Vercel | Posts base64-encoded HTML to the deployments API. |
| GitHub Pages | Creates a repo, uploads index.html, enables GitHub Pages. |
| Local export | Saves the HTML file anywhere on your disk via the system save dialog. |

API tokens are stored locally in SQLite — never sent anywhere except the provider's API. Tokens can be managed in Settings.
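
For Vercel, posting base64-encoded HTML can be sketched as building a single-file deployment body. The endpoint and field names below follow Vercel's v13 deployments API as commonly documented; treat the exact shape as an assumption rather than Offpage's implementation in `deploy.rs`:

```typescript
// Hypothetical request-body builder for a single-file Vercel deployment.
// The real call would POST this to https://api.vercel.com/v13/deployments
// with an "Authorization: Bearer <token>" header.
function vercelDeployBody(projectName: string, html: string) {
  return {
    name: projectName,
    files: [
      {
        file: "index.html",
        // Inline the page content, base64-encoded as the API expects.
        data: Buffer.from(html, "utf8").toString("base64"),
        encoding: "base64",
      },
    ],
  };
}

const body = vercelDeployBody("my-site", "<!doctype html><h1>Hi</h1>");
```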


AI Model

The app runs AI inference locally using llama.cpp as a sidecar process.

| Model | Size | For |
|-------|------|-----|
| Qwen2.5-Coder-7B-Instruct (Q4_0) | ~4.3 GB | 16 GB+ RAM — recommended |
| Qwen2.5-Coder-3B-Instruct (Q4_0) | ~2.0 GB | 8 GB RAM — minimum |

Models are downloaded from Hugging Face on first launch and stored in your app data directory. The app auto-detects previously downloaded models on subsequent launches.


Contributing

Contributions are welcome! The project is in active development.

# Fork & clone, then:
pnpm install
bash scripts/setup-sidecar.sh
pnpm tauri dev

Please open an issue before submitting large PRs so we can discuss the approach.


Download

Coming soon — early builds will be available once the core features are stable.

Star or watch this repo to get notified.


License

MIT — see LICENSE for details.


Built with Tauri, React, and local AI. No data leaves your device.
