diff --git a/.claude/skills/code-review/SKILL.md b/.claude/skills/code-review/SKILL.md deleted file mode 100644 index 3aa49c32..00000000 --- a/.claude/skills/code-review/SKILL.md +++ /dev/null @@ -1,165 +0,0 @@ ---- -name: code-review -description: Run a multi-perspective code review on recent changes using specialized reviewer personas. Each reviewer focuses on their domain — security, ops, cross-platform, API design, frontend, and performance. Produces a dated review report. -argument-hint: "[file, directory, or git range]" ---- - -# Code Review — Multi-Perspective Audit - -Review code from the perspective of 6 specialized reviewers, each with their own focus area. Produces a structured report in `.reviews/`. - -## Scope - -Determine what to review based on the argument: - -- **No argument**: Review all uncommitted changes (`git diff` + `git diff --cached`) -- **A file or directory path**: Review that specific path -- **A git range** (e.g., `main..HEAD`, `HEAD~3`): Review that commit range -- **`last`**: Review the last commit (`HEAD~1..HEAD`) - -## The Reviewers - -### Mara — Security - -Focuses on authentication, authorization, and injection vectors. - -- JWT handling: token validation, expiry, refresh flows -- SQL injection: raw queries, unparameterized input -- XSS: unsanitized user input rendered in responses or frontend -- CORS misconfigurations -- Path traversal in file operations (`../` in user-supplied paths) -- Secrets in code (API keys, passwords, hardcoded tokens) -- Missing `@jwt_required()` on API routes that need auth -- CSRF protection gaps - -### Raj — Ops & Infrastructure - -Focuses on subprocess calls, privilege escalation, and system operations. - -- Missing `sudo` on privileged commands (systemctl, firewall-cmd, ufw, certbot, nginx) -- Raw `subprocess.run()` that should use `run_privileged()`, `ServiceControl`, or `PackageManager` -- Missing error handling on subprocess calls (unchecked `returncode`, no try/except) -- Hardcoded paths that assume specific filesystem layout -- Logging gaps: operations that modify system state without logging what they did -- Service restart ordering issues - -### Sol — Cross-Platform & Containers - -Focuses on portability across distros and container environments. - -- Distro-specific commands (`dpkg`, `apt`, `rpm`, `dnf`) without `FileNotFoundError` handling -- Unguarded `/proc/` and `/sys/` reads that fail in LXC/containers -- `psutil` calls that return `None` in containers (`disk_io_counters`, `sensors_temperatures`) used without null checks -- Docker socket assumptions without availability checks -- Firewall commands that need `CAP_NET_ADMIN` (dropped in unprivileged LXC) -- Hardcoded `/dev/` device paths -- Assumptions about init system (systemd vs others) -- Package manager detection that only checks one family - -### Kai — API Design - -Focuses on REST conventions, error handling, and request/response patterns. - -- Inconsistent error response format (should always be `{'error': 'message'}, status_code`) -- Missing input validation on request body fields -- Wrong HTTP status codes (e.g., 200 on error, 404 when it should be 403) -- Missing pagination on list endpoints -- Endpoints that return too much data (no field filtering) -- Inconsistent URL naming (`/api/v1/` prefix, plural nouns) -- Missing rate limiting on sensitive endpoints (login, password reset) -- N+1 query patterns in endpoints that return lists - -### Lena — Frontend - -Focuses on React patterns, UX quality, and style consistency. 
- -- Class components or non-hook patterns (should be functional + hooks only) -- Inline styles (should use LESS with design system variables) -- Missing loading states or error handling on data fetches -- Accessibility gaps: missing alt text, aria labels, keyboard navigation -- Hardcoded strings that should come from the API or config -- Memory leaks: missing cleanup in `useEffect` -- Props drilling beyond 3 levels (should use Context) -- Missing key props in lists -- Console.log left in production code - -### Omar — Performance - -Focuses on efficiency, caching, and resource usage. - -- Synchronous blocking calls that should be async (especially subprocess calls in request handlers) -- Missing database indexes on frequently queried columns -- Unbounded queries: `SELECT *` without `LIMIT` or pagination -- Repeated identical subprocess calls that could be cached -- Large file reads loaded entirely into memory -- Frontend: unnecessary re-renders, missing `useMemo`/`useCallback` where expensive -- Missing `timeout` on subprocess calls or external HTTP requests -- Socket.IO events that broadcast too frequently - -## How to Review - -1. Determine the scope from `$ARGUMENTS` and read all relevant code -2. For each reviewer, analyze the code **only through their lens** — don't overlap -3. Rate each finding: - - **Fix** — Bug, vulnerability, or will break in production. Must address. - - **Improve** — Works but suboptimal. Should address. - - **Note** — Observation or suggestion. Nice to know, no action needed. -4. Collect all findings - -## Report - -Create the `.reviews/` directory if it doesn't exist. Write to `.reviews/YYYY-MM-DD-review.md` (use today's date). If a file for today exists, append a counter: `YYYY-MM-DD-2-review.md`. - -```markdown -# Code Review — YYYY-MM-DD - -**Scope:** -**Files reviewed:** N - -## Summary - -| Reviewer | Focus | Fix | Improve | Note | -|----------|-------|-----|---------|------| -| Mara | Security | N | N | N | -| Raj | Ops | N | N | N | -| Sol | Cross-Platform | N | N | N | -| Kai | API Design | N | N | N | -| Lena | Frontend | N | N | N | -| Omar | Performance | N | N | N | -| **Total** | | **N** | **N** | **N** | - -## Findings - -### Mara — Security - -#### [Fix] Short description -`file_path:line_number` -Explanation of the issue and why it matters. -**Suggested fix:** What to change. - -#### [Improve] Short description -... - -### Raj — Ops & Infrastructure -... - -### Sol — Cross-Platform & Containers -... - -### Kai — API Design -... - -### Lena — Frontend -... - -### Omar — Performance -... - -## Verdict - -**PASS** — No Fix-severity findings. Ship it. -or -**NEEDS WORK** — N Fix-severity findings must be addressed before merging. -``` - -After writing the report, print the summary table and verdict to the user. If there are Fix-severity findings, ask if they want you to address them now. diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml new file mode 100644 index 00000000..469ce162 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -0,0 +1,58 @@ +name: 🐛 Bug Report +description: Report a bug to help us improve ServerKit +labels: ["bug"] +body: + - type: markdown + attributes: + value: | + Thanks for taking the time to fill out this bug report! + - type: textarea + id: description + attributes: + label: Describe the bug + description: A clear and concise description of what the bug is. 
+ placeholder: Bug description + validations: + required: true + - type: textarea + id: reproduction + attributes: + label: Steps to reproduce + description: How can we reproduce this issue? + placeholder: | + 1. Go to '...' + 2. Click on '....' + 3. Scroll down to '....' + 4. See error + validations: + required: true + - type: textarea + id: expected-behavior + attributes: + label: Expected behavior + description: A clear and concise description of what you expected to happen. + validations: + required: true + - type: input + id: environment + attributes: + label: Environment + description: OS version, browser, ServerKit version, etc. + placeholder: e.g. Ubuntu 22.04, Chrome 120, ServerKit v0.8.0 + validations: + required: true + - type: textarea + id: logs + attributes: + label: Logs + description: Please provide any relevant logs from the backend or browser console. + render: shell + - type: checkboxes + id: checks + attributes: + label: Checks + options: + - label: I have searched the [existing issues](https://github.com/jhd3197/ServerKit/issues). + required: true + - label: I am using the latest version of ServerKit. + required: true diff --git a/.github/ISSUE_TEMPLATE/feature_request.yml b/.github/ISSUE_TEMPLATE/feature_request.yml new file mode 100644 index 00000000..730cf24a --- /dev/null +++ b/.github/ISSUE_TEMPLATE/feature_request.yml @@ -0,0 +1,28 @@ +name: 🚀 Feature Request +description: Suggest an idea for ServerKit +labels: ["enhancement"] +body: + - type: textarea + id: feature-description + attributes: + label: Is your feature request related to a problem? + description: A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] + validations: + required: true + - type: textarea + id: solution + attributes: + label: Describe the solution you'd like + description: A clear and concise description of what you want to happen. + validations: + required: true + - type: textarea + id: alternatives + attributes: + label: Describe alternatives you've considered + description: A clear and concise description of any alternative solutions or features you've considered. + - type: textarea + id: additional-context + attributes: + label: Additional context + description: Add any other context or screenshots about the feature request here. diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md new file mode 100644 index 00000000..e800fa5d --- /dev/null +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -0,0 +1,24 @@ +## Description + + +## Related Issues + + +## Type of Change +- [ ] 🐛 Bug fix (non-breaking change which fixes an issue) +- [ ] 🚀 New feature (non-breaking change which adds functionality) +- [ ] ⚠️ Breaking change (fix or feature that would cause existing functionality to not work as expected) +- [ ] 📝 Documentation update +- [ ] 🎨 UI/UX improvement + +## How Has This Been Tested? 
+ + +## Checklist +- [ ] My code follows the style guidelines of this project +- [ ] I have performed a self-review of my own code +- [ ] I have commented my code, particularly in hard-to-understand areas +- [ ] I have made corresponding changes to the documentation +- [ ] My changes generate no new warnings +- [ ] I have added tests that prove my fix is effective or that my feature works +- [ ] New and existing unit tests pass locally with my changes diff --git a/.gitignore b/.gitignore index 26e52b30..29f74047 100644 --- a/.gitignore +++ b/.gitignore @@ -1,63 +1,98 @@ -# Dependencies +# --- ServerKit Consolidated .gitignore --- + +# Node / Frontend /frontend/node_modules/ +/frontend/dist/ +/frontend/npm-debug.log* +/frontend/yarn-debug.log* +/frontend/yarn-error.log* +/frontend/.eslintcache +/frontend/.stylelintcache +/frontend/coverage/ +/frontend/.next/ +/frontend/out/ +/frontend/build/ + +# Python / Backend /backend/venv/ +/backend/instance/ /backend/__pycache__/ -**/__pycache__/ -*.pyc +/backend/.pytest_cache/ +/backend/.coverage +/backend/htmlcov/ +/backend/.mypy_cache/ +/backend/.tox/ +/backend/*.pyc +/backend/*.pyo +/backend/*.pyd +/backend/.env +/backend/.venv +/backend/pip-log.txt +/backend/pip-delete-this-directory.txt +/backend/dev-data/ +/venv/ -# Build outputs -/frontend/dist/ -/agent/dist/ +# Go / Agent /agent/serverkit-agent /agent/serverkit-agent.exe -*.exe -*.msi -*.deb -*.rpm - -# IDE and editors -.idea/ -.vscode/ -*.swp -*.swo -*~ +/agent/dist/ +/agent/vendor/ +/agent/*.test +/agent/*.out +/agent/bin/ -# Environment and secrets +# Environment / Secrets .env .env.local -.env.*.local +.env.development.local +.env.test.local +.env.production.local *.key *.pem *.crt *.csr +/backend/.env +/frontend/.env.local -# OS files +# Docker +.docker-compose.untracked.yml +docker-compose.override.yml + +# OS / IDE .DS_Store +.DS_Store? +._* +.Spotlight-V100 +.Trashes +ehthumbs.db Thumbs.db -nul - -# Temporary files -*.tmp -*.temp -*.log -coverage.out -coverage.html +.idea/ +.vscode/ +*.swp +*.swo +*~ +*.sublime-project +*.sublime-workspace -# Project specific +# Project Specific .planning_old/ .mcp.json new_templates/ /.marketing/ - -# Test artifacts -*.test -*.out -/template-icons - -# Dev data directory (local development path overrides) -backend/dev-data/ -/.vscode -/.vscode1 -/.pr -/.reviews +/template-icons/ +/.pr/ +/.reviews/ /.claude/settings.local.json +/serverkit.db +/backend/instance/serverkit.db + +# Logs & Temp +*.log +*.tmp +*.temp +/logs/ +/tmp/ +nul +SECURITY_AUDIT.md +APP_IMPROVEMENTS.md +*.png diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md new file mode 100644 index 00000000..9e7d57a6 --- /dev/null +++ b/CONTRIBUTORS.md @@ -0,0 +1,14 @@ +# Contributors + +Thanks to everyone who has contributed to ServerKit! 
+
+## Core
+
+- **Juan Denis** ([@jhd3197](https://github.com/jhd3197)) — Creator and maintainer
+
+## Contributors
+
+| Who | Contribution | PR |
+|-----|-------------|-----|
+| **Rapeepan Moonthai** ([@rapeeza1598](https://github.com/rapeeza1598)) | Fedora support for install script (DNF + SELinux) | [#31](https://github.com/jhd3197/ServerKit/pull/31) |
+| **Piya Miang-Lae** ([@Piya-Boy](https://github.com/Piya-Boy)) | Backups storage provider | [#26](https://github.com/jhd3197/ServerKit/pull/26) |
diff --git a/README.md b/README.md
index 6aec4e53..b58842ae 100644
--- a/README.md
+++ b/README.md
@@ -50,20 +50,26 @@ English | [Español](docs/README.es.md) | [中文版](docs/README.zh-CN.md) | [P

**Node.js** — PM2-managed applications with log streaming

-**Docker** — Full container and Docker Compose management
+**Workflow Builder** — Node-based visual automation for server tasks, deployments, and CI/CD

-**Environment Variables** — Secure, encrypted per-app variable management
+**Environment Pipeline** — Multi-environment management for WordPress (Prod/Staging/Dev) with code/DB promotion

-**Git Deployment** — GitHub/GitLab webhooks, auto-deploy on push, branch selection, rollback, zero-downtime deployments
+**Docker** — Full container and Docker Compose management with real-time log streaming and terminal access
+
+**Marketplace** — 60+ one-click templates for popular apps (Immich, Ghost, Authelia, etc.)

### 🏗️ Infrastructure

**Domain Management** — Nginx virtual hosts with easy configuration

+**DNS Zone Management** — Full DNS record management with propagation checking (A, AAAA, CNAME, MX, TXT, etc.)
+
**SSL Certificates** — Automatic Let's Encrypt with auto-renewal

**Databases** — MySQL/MariaDB and PostgreSQL with user management and query interface

+**Cloud Provisioning** — Provision servers on DigitalOcean, Hetzner, Vultr, and Linode with cost tracking
+
**Firewall** — UFW/firewalld with visual rule management and port presets

**Cron Jobs** — Schedule tasks with a visual editor

@@ -94,7 +100,13 @@ English | [Español](docs/README.es.md) | [中文版](docs/README.zh-CN.md) | [P

**Agent-Based Architecture** — Go agent with HMAC-SHA256 authentication and real-time WebSocket gateway

-**Fleet Overview** — Centralized dashboard with server grouping, tagging, and health monitoring
+**Fleet Management** — Agent lifecycle control with version rollouts, approval queue, network discovery, and command queue
+
+**Fleet Monitor** — Cross-server heatmaps, metric comparison charts, alert thresholds, anomaly detection, and capacity forecasting
+
+**Agent Plugins** — Extensible plugin system with capabilities, permissions, and per-server installation
+
+**Server Templates** — Configuration templates with compliance tracking, drift detection, and auto-remediation

**Remote Docker** — Manage containers, images, volumes, networks, and Compose projects across all servers

@@ -108,6 +120,8 @@ English | [Español](docs/README.es.md) | [中文版](docs/README.zh-CN.md) | [P

**Uptime Tracking** — Historical server uptime data and visualization

+**Status Pages** — Public status pages with HTTP/TCP/DNS/Ping health checks, component monitoring, and incident management
+
**Notifications** — Discord, Slack, Telegram, email (HTML templates), and generic webhooks

**Per-User Preferences** — Individual notification channels, severity filters, and quiet hours

@@ -116,6 +130,8 @@ English | [Español](docs/README.es.md) | [中文版](docs/README.zh-CN.md) | [P

**Multi-User** — Admin, developer, and viewer roles with team invitations

+**Workspaces** — Multi-tenant workspace isolation with quotas and member management + **RBAC** — Granular per-feature permissions (read/write per module) **SSO & OAuth** — Google, GitHub, OpenID Connect, and SAML 2.0 with account linking @@ -126,6 +142,18 @@ English | [Español](docs/README.es.md) | [中文版](docs/README.zh-CN.md) | [P **Webhook Subscriptions** — Event-driven webhooks with HMAC signatures, retry logic, and custom headers +### 🎨 Customization + +**Sidebar Presets** — Switch between Full, Web Hosting, Email Admin, DevOps, and Minimal views with one click + +**Collapsible Navigation** — Sidebar groups auto-expand on navigation and collapse when switching sections + +**Accent Colors** — 8 preset accent colors plus custom hex picker + +**Custom Branding** — White-label the sidebar with your own logo, brand name, or full-width banner + +**Dashboard Widgets** — Toggle and reorder dashboard widgets to fit your workflow + --- ## 🚀 Quick Start @@ -260,13 +288,15 @@ See the [Installation Guide](docs/INSTALLATION.md) for step-by-step instructions - [x] API enhancements — API keys, rate limiting, OpenAPI docs, webhook subscriptions - [x] SSO & OAuth — Google, GitHub, OIDC, SAML - [x] Database migrations — Flask-Migrate/Alembic, versioned schema -- [ ] Agent fleet management — Auto-upgrade, bulk ops, offline command queue -- [ ] Cross-server monitoring — Fleet dashboard, anomaly detection, alerting -- [ ] Agent plugin system — Extensible agent with custom metrics, commands, health checks -- [ ] Server templates & config sync — Drift detection, compliance dashboards -- [ ] Multi-tenancy — Workspaces, team isolation, per-workspace settings -- [ ] DNS zone management — Cloudflare, Route53, DigitalOcean integrations -- [ ] Status pages — Public status page, health checks, incident management +- [x] Agent fleet management — Version rollouts, approval queue, discovery, command queue +- [x] Cross-server monitoring — Fleet heatmaps, comparison charts, anomaly detection, capacity forecasting +- [x] Agent plugin system — Extensible agent with capabilities, permissions, per-server install +- [x] Server templates & config sync — Drift detection, compliance dashboards, auto-remediation +- [x] Multi-tenancy — Workspaces with quotas, member management, isolation +- [x] DNS zone management — Full record management with propagation checking +- [x] Status pages — Public status pages with health checks, incident management +- [x] Cloud provisioning — DigitalOcean, Hetzner, Vultr, Linode with cost tracking +- [x] Customizable sidebar — Collapsible groups, view presets, accent colors, white-label branding Full details: [ROADMAP.md](ROADMAP.md) @@ -290,7 +320,7 @@ Full details: [ROADMAP.md](ROADMAP.md) | Layer | Technology | |-------|------------| | Backend | Python 3.11, Flask, SQLAlchemy, Flask-SocketIO, Flask-Migrate | -| Frontend | React 18, Vite, LESS, Recharts | +| Frontend | React 18, Vite, SCSS, Recharts | | Database | SQLite / PostgreSQL | | Web Server | Nginx, Gunicorn (GeventWebSocket) | | Containers | Docker, Docker Compose | @@ -309,7 +339,7 @@ Contributions are welcome! Please read [CONTRIBUTING.md](CONTRIBUTING.md) first. fork → feature branch → commit → push → pull request ``` -**Priority areas:** Agent plugin system, fleet management, DNS integrations, status pages, UI/UX improvements, documentation. +**Priority areas:** Cloud provider integrations, marketplace extensions, UI/UX improvements, documentation, test coverage. 
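A side note on the HMAC-SHA256 agent authentication the multi-server section above mentions: the sketch below shows the general shape of such a scheme in Python, assuming a shared per-agent secret and a timestamp check to blunt replay. The field names and framing are illustrative, not ServerKit's actual wire format.

```python
import hashlib
import hmac
import time

def sign(secret: bytes, payload: bytes, timestamp: str) -> str:
    """Hex HMAC-SHA256 over timestamp + payload (illustrative framing)."""
    return hmac.new(secret, timestamp.encode() + payload, hashlib.sha256).hexdigest()

def verify(secret: bytes, payload: bytes, timestamp: str,
           signature: str, max_skew: int = 300) -> bool:
    """Reject stale timestamps, then compare signatures in constant time."""
    if abs(time.time() - float(timestamp)) > max_skew:
        return False
    return hmac.compare_digest(sign(secret, payload, timestamp), signature)

# The agent signs each message; the panel verifies with the same shared secret.
ts = str(int(time.time()))
sig = sign(b"shared-agent-secret", b'{"cpu": 12.5}', ts)
assert verify(b"shared-agent-secret", b'{"cpu": 12.5}', ts, sig)
```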
--- diff --git a/ROADMAP.md b/ROADMAP.md index 0d3c7e9c..8983bf31 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -1,612 +1,621 @@ -# ServerKit Roadmap - -This document outlines the development roadmap for ServerKit. Features are organized by phases and priority. - ---- - -## Current Version: v1.5.0 (In Development) - -### Recently Completed (v1.4.0) - -- **Team & Permissions** - RBAC with admin/developer/viewer roles, invitations, audit logging -- **API Enhancements** - API keys, rate limiting, webhook subscriptions, OpenAPI docs, analytics -- **SSO & OAuth Login** - Google, GitHub, OIDC, SAML with account linking -- **Database Migrations** - Flask-Migrate/Alembic with versioned schema migrations -- **Email Server Management** - Postfix, Dovecot, DKIM, SpamAssassin, Roundcube - ---- - -## Phase 1: Core Infrastructure (Completed) - -- [x] Flask backend with SQLAlchemy ORM -- [x] React frontend with Vite -- [x] JWT-based authentication -- [x] Real-time WebSocket updates -- [x] System metrics (CPU, RAM, disk, network) -- [x] Docker and Docker Compose support -- [x] SQLite/PostgreSQL database support - ---- - -## Phase 2: Application Management (Completed) - -- [x] PHP/WordPress application deployment -- [x] Python (Flask/Django) application support -- [x] Node.js application management with PM2 -- [x] Docker container management -- [x] Environment variable management -- [x] Application start/stop/restart controls -- [x] Log viewing per application - ---- - -## Phase 3: Domain & SSL Management (Completed) - -- [x] Nginx virtual host management -- [x] Domain configuration interface -- [x] Let's Encrypt SSL integration -- [x] SSL certificate auto-renewal -- [x] Redirect management (HTTP → HTTPS) - ---- - -## Phase 4: Database Management (Completed) - -- [x] MySQL/MariaDB database support -- [x] PostgreSQL database support -- [x] Database creation/deletion -- [x] User management per database -- [x] Basic query interface - ---- - -## Phase 5: File & FTP Management (Completed) - -- [x] Web-based file manager -- [x] File upload/download -- [x] File editing with syntax highlighting -- [x] vsftpd FTP server integration -- [x] FTP user management - ---- - -## Phase 6: Monitoring & Alerts (Completed) - -- [x] Real-time system metrics -- [x] Server uptime tracking -- [x] Customizable alert thresholds -- [x] Discord webhook notifications -- [x] Slack webhook notifications -- [x] Telegram bot notifications -- [x] Generic webhook support -- [x] Alert history and logging - ---- - -## Phase 7: Security Features (Completed) - -- [x] Two-factor authentication (TOTP) -- [x] Backup codes for 2FA recovery -- [x] ClamAV malware scanning -- [x] Quick scan / Full scan options -- [x] File quarantine management -- [x] File integrity monitoring -- [x] Failed login detection -- [x] Security event logging - ---- - -## Phase 8: Scheduled Tasks (Completed) - -- [x] Cron job management -- [x] Visual cron expression builder -- [x] Job execution history -- [x] Enable/disable jobs - ---- - -## Phase 9: Firewall Management (Completed - Merged into Security) - -- [x] UFW firewall integration -- [x] Visual rule management -- [x] Common port presets -- [x] Rule enable/disable -- [x] Consolidated into Security page for unified security management - ---- - -## Phase 10: Multi-Server Management (Completed) - -**Priority: High** - -- [x] Agent-based remote server monitoring (Go agent) -- [x] Centralized dashboard for multiple servers -- [x] Server grouping and tagging -- [x] Cross-server metrics comparison -- [x] Remote Docker management 
via agents -- [x] Server health overview -- [x] Agent WebSocket gateway -- [x] HMAC-SHA256 authentication -- [x] GitHub Actions for agent releases (Linux/Windows) -- [x] Installation scripts endpoint -- [x] Agent auto-update mechanism -- [x] Agent download page in UI -- [x] Container logs streaming for remote servers - ---- - -## Phase 11: Git Deployment (Completed) - -**Priority: High** - -- [x] GitHub/GitLab webhook integration -- [x] Automatic deployment on push -- [x] Branch selection for deployment -- [x] Rollback to previous deployments -- [x] Deployment history and logs -- [x] Pre/post deployment scripts -- [x] Zero-downtime deployments - ---- - -## Phase 12: Backup & Restore (Completed) - -**Priority: High** - -- [x] Automated database backups -- [x] File/directory backups -- [x] S3-compatible storage support -- [x] Backblaze B2 integration -- [x] Backup scheduling -- [x] One-click restore -- [x] Backup retention policies -- [x] Offsite backup verification - ---- - -## Phase 13: Email Server Management (Completed) - -**Priority: Medium** - -- [x] Postfix mail server setup -- [x] Dovecot IMAP/POP3 configuration -- [x] Email account management -- [x] Spam filtering (SpamAssassin) -- [x] DKIM/SPF/DMARC configuration -- [x] Webmail interface integration -- [x] Email forwarding rules - ---- - -## Phase 14: Team & Permissions (Completed) - -**Priority: Medium** - -- [x] Multi-user support -- [x] Role-based access control (RBAC) -- [x] Custom permission sets -- [x] Audit logging per user -- [x] Team invitations -- [x] Activity dashboard - ---- - -## Phase 15: API Enhancements (Completed) - -**Priority: Medium** - -- [x] API key management -- [x] Rate limiting -- [x] Webhook event subscriptions -- [x] OpenAPI/Swagger documentation -- [x] API usage analytics - ---- - -## Phase 16: Advanced Security (Completed) - -**Priority: High** - -- [x] Unified Security page with all security features -- [x] Firewall tab with UFW/firewalld management -- [x] Fail2ban integration -- [x] SSH key management -- [x] IP allowlist/blocklist -- [x] Brute force protection -- [x] Security audit reports -- [x] Vulnerability scanning (Lynis) -- [x] Automatic security updates (unattended-upgrades/dnf-automatic) - ---- - -## Phase 17: SSO & OAuth Login (Completed) - -**Priority: High** - -- [x] Google OAuth 2.0 login -- [x] GitHub OAuth login -- [x] Generic OpenID Connect (OIDC) provider support -- [x] SAML 2.0 support for enterprise environments -- [x] Social login UI (provider buttons on login page) -- [x] Account linking (connect OAuth identity to existing local account) -- [x] Auto-provisioning of new users on first SSO login -- [x] Configurable SSO settings (enable/disable providers, client ID/secret management) -- [x] Enforce SSO-only login (disable password auth for team members) -- [x] SSO session management and token refresh - ---- - -## Phase 18: Database Migrations & Schema Versioning (Completed) - -**Priority: High** - -### Backend — Migration Engine -- [x] Integrate Flask-Migrate (Alembic) for versioned schema migrations -- [x] Generate initial migration from current model state as baseline -- [x] Replace `_auto_migrate_columns()` hack with proper Alembic migrations -- [x] Store schema version in a `schema_version` table (current version, history) -- [x] API endpoints for migration status, apply, and rollback -- [x] Auto-detect pending migrations on login and flag the session -- [x] Pre-migration automatic DB backup before applying changes -- [x] Migration scripts for all existing model changes 
(retroactive baseline) - -### CLI Fallback -- [x] CLI commands for headless/SSH scenarios (`flask db upgrade`, `flask db status`) -- [x] CLI rollback support (`flask db downgrade`) - ---- - -# Upcoming Development - -The phases below are ordered by priority. Higher phases ship first. - ---- - -## Phase 19: New UI & Services Page (Planned) - -**Priority: Critical** - -Merge the `new-ui` branch — adds a full Services page with service detail views, metrics, logs, shell, settings, git connect, and package management. - -- [ ] Merge `new-ui` branch into main development line -- [ ] Services list page with status indicators and quick actions -- [ ] Service detail page with tabbed interface (Metrics, Logs, Shell, Settings, Commands, Events, Packages) -- [ ] Git connect modal for linking services to repositories -- [ ] Gunicorn management tab for Python services -- [ ] Service type detection and type-specific UI (Node, Python, PHP, Docker, etc.) -- [ ] Resolve any conflicts with features added since branch diverged - ---- - -## Phase 20: Customizable Sidebar & Dashboard Views (Planned) - -**Priority: High** - -Let users personalize what they see. Not everyone runs email servers or manages Docker — the sidebar should adapt to each user's needs. - -- [ ] Sidebar configuration page in Settings -- [ ] Preset view profiles: **Full** (default, all modules), **Web Hosting** (apps, domains, SSL, databases, files), **Email Admin** (email, DNS, security), **Docker/DevOps** (containers, deployments, git, monitoring), **Minimal** (apps, monitoring, backups only) -- [ ] Custom view builder — toggle individual sidebar items on/off -- [ ] Per-user preference storage (saved to user profile, synced across sessions) -- [ ] Sidebar sections collapse/expand with memory -- [ ] Quick-switch between saved view profiles -- [ ] Admin can set default view for new users -- [ ] Hide empty/unconfigured modules automatically (e.g., hide Email if no email domains exist) - ---- - -## Phase 21: Migration Wizard Frontend UI (Planned) - -**Priority: High** - -The backend migration engine is complete — this adds the visual upgrade experience (Matomo-style). - -- [ ] Full-screen modal/wizard that appears when pending migrations are detected -- [ ] Step 1: "Update Available" — show current version vs new version, changelog summary -- [ ] Step 2: "Backup" — auto-backup the database, show progress, confirm success -- [ ] Step 3: "Apply Migrations" — run migrations with real-time progress/log output -- [ ] Step 4: "Done" — success confirmation with summary of changes applied -- [ ] Error handling: if a migration fails, show the error and offer rollback option -- [ ] Block access to the panel until migrations are applied -- [ ] Migration history page in Settings showing all past migrations and timestamps - ---- - -## Phase 22: Container Logs & Monitoring UI (Planned) - -**Priority: High** - -The container logs API is already built. This phase adds the frontend and extends monitoring to per-app metrics. 
- -- [ ] Log viewer component with terminal-style display and ANSI color support -- [ ] Real-time log streaming via WebSocket with auto-scroll (pause on user scroll) -- [ ] Log search with regex support and match highlighting -- [ ] Filter by log level (INFO, WARN, ERROR, DEBUG) and time range -- [ ] Export filtered logs to file -- [ ] Per-container resource collection (CPU %, memory, network I/O via Docker stats API) -- [ ] Per-app resource usage charts (Recharts) with time range selector (1h, 6h, 24h, 7d) -- [ ] Per-app alert rules (metric, operator, threshold, duration) -- [ ] Alert notifications via existing channels (email, Discord, Telegram) with cooldown - ---- - -## Phase 23: Agent Fleet Management (Planned) - -**Priority: High** - -Level up agent management from "connect and monitor" to full fleet control. - -- [ ] Agent version tracking and compatibility matrix (panel version ↔ agent version) -- [ ] Push agent upgrades from the panel (single server or fleet-wide rollout) -- [ ] Staged rollout support — upgrade agents in batches with health checks between waves -- [ ] Agent health dashboard — connection uptime, heartbeat latency, command success rate per agent -- [ ] Auto-discovery of new servers on the local network (mDNS/broadcast scan) -- [ ] Agent registration approval workflow (admin must approve before agent joins fleet) -- [ ] Bulk agent operations — restart, upgrade, rotate keys across selected servers -- [ ] Agent changelog and release notes visible in UI -- [ ] Offline agent command queue — persist commands and deliver when agent reconnects -- [ ] Command retry with configurable backoff for failed/timed-out operations -- [ ] Agent connection diagnostics — test connectivity, latency, firewall check from panel - ---- - -## Phase 24: Cross-Server Monitoring Dashboard (Planned) - -**Priority: High** - -Fleet-wide visibility — see everything at a glance and catch problems early. - -- [ ] Fleet overview dashboard — heatmap of all servers by CPU/memory/disk usage -- [ ] Server comparison charts — overlay metrics from multiple servers on one graph -- [ ] Per-server alert thresholds (CPU > 80% for 5 min → warning, > 95% → critical) -- [ ] Anomaly detection — automatic baseline learning, alert on deviations -- [ ] Custom metric dashboards — drag-and-drop widgets, save layouts per user -- [ ] Metric correlation view — spot relationships between metrics across servers -- [ ] Capacity forecasting — trend-based predictions (disk full in X days, memory growth rate) -- [ ] Metrics export — Prometheus endpoint (`/metrics`), CSV download, JSON API -- [ ] Grafana integration guide and pre-built dashboard templates -- [ ] Fleet-wide search — find which server is running a specific container, service, or port - ---- - -## Phase 25: Agent Plugin System (Planned) - -**Priority: High** - -Make the agent extensible — let users add custom capabilities without modifying agent core. This is the foundation for future integrations (Android device farms, IoT fleets, custom hardware monitoring, etc.). 
- -### Plugin Architecture -- [ ] Plugin specification — standard interface (init, healthcheck, metrics, commands) -- [ ] Plugin manifest format (YAML/JSON) — name, version, dependencies, capabilities, permissions -- [ ] Plugin lifecycle management — install, enable, disable, uninstall, upgrade -- [ ] Plugin isolation — each plugin runs in its own process/sandbox with resource limits -- [ ] Plugin communication — standardized IPC between plugin and agent core - -### Plugin Capabilities -- [ ] Custom metrics reporters — plugins can push arbitrary metrics to the panel -- [ ] Custom health checks — plugins define checks that feed into the status system -- [ ] Custom commands — plugins register new command types the panel can invoke -- [ ] Scheduled tasks — plugins can register periodic jobs (cron-like) -- [ ] Event hooks — plugins can react to agent events (connect, disconnect, command, alert) - -### Panel Integration -- [ ] Plugin management UI — install, configure, monitor plugins per server -- [ ] Plugin marketplace / registry — browse and install community plugins -- [ ] Plugin configuration editor — per-server plugin settings from the panel -- [ ] Plugin logs and diagnostics — view plugin output and errors -- [ ] Plugin metrics visualization — custom widgets for plugin-reported data - -### Developer Experience -- [ ] Plugin SDK (Go module) — scaffolding, helpers, testing tools -- [ ] Plugin template repository — quickstart for new plugin development -- [ ] Local plugin development mode — hot-reload, debug logging -- [ ] Plugin documentation and API reference - ---- - -## Phase 26: Server Templates & Config Sync (Planned) - -**Priority: Medium** - -Define what a server should look like, apply it, and detect when it drifts. - -- [ ] Server template builder — define expected state (packages, services, firewall rules, users, files) -- [ ] Template library — save and reuse templates (e.g., "Web Server", "Database Server", "Mail Server") -- [ ] Apply template to server — install packages, configure services, set firewall rules via agent -- [ ] Config drift detection — periodic comparison of actual vs. expected state -- [ ] Drift report UI — visual diff showing what changed and when -- [ ] Auto-remediation option — automatically fix drift back to template (with approval toggle) -- [ ] Template versioning — track changes to templates over time -- [ ] Template inheritance — base template + role-specific overrides -- [ ] Bulk apply — roll out template changes across server groups -- [ ] Compliance dashboard — percentage of fleet in compliance per template - ---- - -## Phase 27: Multi-Tenancy & Workspaces (Planned) - -**Priority: Medium** - -Isolate servers by team, client, or project. Essential for agencies, MSPs, and larger teams. 
- -- [ ] Workspace model — isolated container for servers, users, and settings -- [ ] Workspace CRUD — create, rename, archive workspaces -- [ ] Server assignment — each server belongs to exactly one workspace -- [ ] User workspace membership — users can belong to multiple workspaces with different roles -- [ ] Workspace switching — quick-switch dropdown in the header -- [ ] Per-workspace settings — notification preferences, default templates, branding -- [ ] Workspace-scoped API keys — API keys restricted to a single workspace -- [ ] Cross-workspace admin view — super-admin can see all workspaces and usage -- [ ] Workspace usage quotas — limit servers, users, or API calls per workspace -- [ ] Workspace billing integration — track resource usage per workspace for invoicing - ---- - -## Phase 28: Advanced SSL Features (Planned) - -**Priority: Medium** - -- [x] Certificate expiry monitoring -- [ ] Wildcard SSL certificates via DNS-01 challenge -- [ ] Multi-domain certificates (SAN) -- [ ] Custom certificate upload (key + cert + chain) -- [ ] Certificate expiry notifications (email/webhook alerts before expiration) -- [ ] SSL configuration templates (modern, intermediate, legacy compatibility) -- [ ] SSL health check dashboard (grade, cipher suites, protocol versions) - ---- - -## Phase 29: DNS Zone Management (Planned) - -**Priority: Medium** - -Full DNS record management with provider API integration. - -- [ ] DNS zone editor UI (A, AAAA, CNAME, MX, TXT, SRV, CAA records) -- [ ] Cloudflare API integration (list/create/update/delete records) -- [ ] Route53 API integration -- [ ] DigitalOcean DNS integration -- [ ] DNS propagation checker (query multiple nameservers) -- [ ] Auto-generate recommended records for hosted services (SPF, DKIM, DMARC, MX) -- [ ] DNS template presets (e.g., "standard web hosting", "email hosting") -- [ ] Bulk record import/export (BIND zone file format) - ---- - -## Phase 30: Nginx Advanced Configuration (Planned) - -**Priority: Medium** - -Go beyond basic virtual hosts — full reverse proxy and performance configuration. - -- [ ] Visual reverse proxy rule builder (upstream servers, load balancing methods) -- [ ] Load balancing configuration (round-robin, least connections, IP hash) -- [ ] Caching rules editor (proxy cache zones, TTLs, cache bypass rules) -- [ ] Rate limiting at proxy level (per-IP, per-route) -- [ ] Custom location block editor with syntax validation -- [ ] Header manipulation (add/remove/modify request/response headers) -- [ ] Nginx config syntax check before applying changes -- [ ] Config diff preview before saving -- [ ] Access/error log viewer per virtual host - ---- - -## Phase 31: Status Page & Health Checks (Planned) - -**Priority: Medium** - -Public-facing status page and automated health monitoring. 
- -- [ ] Automated health checks (HTTP, TCP, DNS, SMTP) with configurable intervals -- [ ] Public status page (standalone URL, no auth required) -- [ ] Status page customization (logo, colors, custom domain) -- [ ] Service grouping on status page (e.g., "Web Services", "Email", "APIs") -- [ ] Incident management — create, update, resolve incidents with timeline -- [ ] Uptime percentage display (24h, 7d, 30d, 90d) -- [ ] Scheduled maintenance windows with advance notifications -- [ ] Status page subscribers (email/webhook notifications on incidents) -- [ ] Historical uptime graphs -- [ ] Status badge embeds (SVG/PNG for README files) - ---- - -## Phase 32: Server Provisioning APIs (Planned) - -**Priority: Medium** - -Spin up and manage cloud servers directly from the panel. - -- [ ] DigitalOcean API integration (create/destroy/resize droplets) -- [ ] Hetzner Cloud API integration -- [ ] Vultr API integration -- [ ] Linode/Akamai API integration -- [ ] Server creation wizard (region, size, OS, SSH keys) -- [ ] Auto-install ServerKit agent on provisioned servers -- [ ] Server cost tracking and billing overview -- [ ] Snapshot management (create/restore/delete) -- [ ] One-click server cloning -- [ ] Destroy server with confirmation safeguards - ---- - -## Phase 33: Performance Optimization (Planned) - -**Priority: Low** - -- [ ] Redis caching for frequently accessed data (metrics, server status) -- [ ] Database query optimization and slow query logging -- [ ] Background job queue (Celery or RQ) for long-running tasks -- [ ] Lazy loading for large datasets (paginated API responses) -- [ ] WebSocket connection pooling and reconnection improvements -- [ ] Frontend bundle optimization and code splitting - ---- - -## Phase 34: Mobile App (Future) - -**Priority: Low — v3.0+** - -- [ ] React Native or PWA mobile application -- [ ] Push notifications for alerts and incidents -- [ ] Quick actions (restart services, view stats, acknowledge alerts) -- [ ] Biometric authentication (fingerprint/Face ID) -- [ ] Offline mode with cached server status - ---- - -## Phase 35: Marketplace & Extensions (Future) - -**Priority: Low — v3.0+** - -- [ ] Plugin/extension system with API hooks -- [ ] Community marketplace for plugins -- [ ] Custom dashboard widgets -- [ ] Theme customization (colors, layout, branding) -- [ ] Extension SDK and developer documentation - ---- - -## Version Milestones - -| Version | Target Features | Status | -|---------|-----------------|--------| -| v0.9.0 | Core features, 2FA, Notifications, Security | Completed | -| v1.0.0 | Production-ready stable release, DB migrations | Completed | -| v1.1.0 | Multi-server, Git deployment | Completed | -| v1.2.0 | Backups, Advanced SSL, Advanced Security | Completed | -| v1.3.0 | Email server, API enhancements | Completed | -| v1.4.0 | Team & permissions, SSO & OAuth login | Completed | -| v1.5.0 | New UI, customizable sidebar, migration wizard UI | Current | -| v1.6.0 | Container monitoring UI, agent fleet management | Planned | -| v1.7.0 | Cross-server monitoring, agent plugin system | Planned | -| v1.8.0 | Server templates, multi-tenancy | Planned | -| v1.9.0 | Advanced SSL, DNS management, Nginx config | Planned | -| v2.0.0 | Status pages, server provisioning, performance | Planned | -| v3.0.0 | Mobile app, Marketplace | Future | - ---- - -## Contributing - -Want to help? See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines. 
- -**Priority areas for contributions:** -- Agent plugin SDK and example plugins -- Fleet management and monitoring dashboard -- DNS provider integrations (Cloudflare, Route53) -- Status page and health check system -- UI/UX improvements -- Documentation - ---- - -## Feature Requests - -Have a feature idea? Open an issue on GitHub with the `enhancement` label. - ---- - -

- ServerKit Roadmap
- Last updated: March 2026 -

+# ServerKit Roadmap
+
+This document outlines the development roadmap for ServerKit. Features are organized by phases and priority.
+
+---
+
+## Current Version: v1.6.0 (In Development)
+
+### Recently Completed (v1.5.0)
+
+- **New UI & Services Page** - Integrated full Services page with detail views, metrics, logs, and shell.
+- **Environment Pipeline** - Multi-environment management for WordPress (Prod/Staging/Dev) with promotion/sync.
+- **Visual Infrastructure Designer** - Node-based visual canvas for stack deployment and server overview.
+- **Advanced Monitoring UI** - Real-time log streaming and terminal integration in the dashboard.
+- **Template Library Expansion** - 60+ one-click deployment templates (Immich, Authelia, Ghost, etc.).
+- **Team & Permissions** - RBAC with admin/developer/viewer roles, invitations, and audit logging.
+- **SSO & OAuth Login** - Google, GitHub, OIDC, and SAML with account linking.
+
+---
+
+## Phase 1: Core Infrastructure (Completed)
+
+- [x] Flask backend with SQLAlchemy ORM
+- [x] React frontend with Vite
+- [x] JWT-based authentication
+- [x] Real-time WebSocket updates
+- [x] System metrics (CPU, RAM, disk, network)
+- [x] Docker and Docker Compose support
+- [x] SQLite/PostgreSQL database support
+
+---
+
+## Phase 2: Application Management (Completed)
+
+- [x] PHP/WordPress application deployment
+- [x] Python (Flask/Django) application support
+- [x] Node.js application management with PM2
+- [x] Docker container management
+- [x] Environment variable management
+- [x] Application start/stop/restart controls
+- [x] Log viewing per application
+
+---
+
+## Phase 3: Domain & SSL Management (Completed)
+
+- [x] Nginx virtual host management
+- [x] Domain configuration interface
+- [x] Let's Encrypt SSL integration
+- [x] SSL certificate auto-renewal
+- [x] Redirect management (HTTP → HTTPS)
+
+---
+
+## Phase 4: Database Management (Completed)
+
+- [x] MySQL/MariaDB database support
+- [x] PostgreSQL database support
+- [x] Database creation/deletion
+- [x] User management per database
+- [x] Basic query interface
+
+---
+
+## Phase 5: File & FTP Management (Completed)
+
+- [x] Web-based file manager
+- [x] File upload/download
+- [x] File editing with syntax highlighting
+- [x] vsftpd FTP server integration
+- [x] FTP user management
+
+---
+
+## Phase 6: Monitoring & Alerts (Completed)
+
+- [x] Real-time system metrics
+- [x] Server uptime tracking
+- [x] Customizable alert thresholds
+- [x] Discord webhook notifications
+- [x] Slack webhook notifications
+- [x] Telegram bot notifications
+- [x] Generic webhook support
+- [x] Alert history and logging
+
+---
+
+## Phase 7: Security Features (Completed)
+
+- [x] Two-factor authentication (TOTP)
+- [x] Backup codes for 2FA recovery
+- [x] ClamAV malware scanning
+- [x] Quick scan / Full scan options
+- [x] File quarantine management
+- [x] File integrity monitoring
+- [x] Failed login detection
+- [x] Security event logging
+
+---
+
+## Phase 8: Scheduled Tasks (Completed)
+
+- [x] Cron job management
+- [x] Visual cron expression builder
+- [x] Job execution history
+- [x] Enable/disable jobs
+
+---
+
+## Phase 9: Firewall Management (Completed - Merged into Security)
+
+- [x] UFW firewall integration
+- [x] Visual rule management
+- [x] Common port presets
+- [x] Rule enable/disable
+- [x] Consolidated into Security page for unified security management
+
+---
+
+## Phase 10: Multi-Server Management (Completed)
+
+**Priority: High**
+
+- [x] Agent-based remote server monitoring (Go agent)
+- [x]
Centralized dashboard for multiple servers +- [x] Server grouping and tagging +- [x] Cross-server metrics comparison +- [x] Remote Docker management via agents +- [x] Server health overview +- [x] Agent WebSocket gateway +- [x] HMAC-SHA256 authentication +- [x] GitHub Actions for agent releases (Linux/Windows) +- [x] Installation scripts endpoint +- [x] Agent auto-update mechanism +- [x] Agent download page in UI +- [x] Container logs streaming for remote servers + +--- + +## Phase 11: Git Deployment (Completed) + +**Priority: High** + +- [x] GitHub/GitLab webhook integration +- [x] Automatic deployment on push +- [x] Branch selection for deployment +- [x] Rollback to previous deployments +- [x] Deployment history and logs +- [x] Pre/post deployment scripts +- [x] Zero-downtime deployments + +--- + +## Phase 12: Backup & Restore (Completed) + +**Priority: High** + +- [x] Automated database backups +- [x] File/directory backups +- [x] S3-compatible storage support +- [x] Backblaze B2 integration +- [x] Backup scheduling +- [x] One-click restore +- [x] Backup retention policies +- [x] Offsite backup verification + +--- + +## Phase 13: Email Server Management (Completed) + +**Priority: Medium** + +- [x] Postfix mail server setup +- [x] Dovecot IMAP/POP3 configuration +- [x] Email account management +- [x] Spam filtering (SpamAssassin) +- [x] DKIM/SPF/DMARC configuration +- [x] Webmail interface integration +- [x] Email forwarding rules + +--- + +## Phase 14: Visual Infrastructure Designer (Completed) + +**Priority: High** + +The visual canvas for designing and deploying entire infrastructure stacks. + +- [x] Node-based Visual Canvas (`WorkflowBuilder.jsx`) using React Flow +- [x] Infrastructure component nodes (Docker, Database, Domain, Service) +- [x] Smart connection rules (link apps to DBs, domains to apps) +- [x] One-click stack deployment from the canvas +- [x] Template-based stack generation +- [x] Server overview mode (visualize existing infrastructure) + +--- + +## Phase 15: Team & Permissions (Completed) + +**Priority: Medium** + +- [x] Multi-user support +- [x] Role-based access control (RBAC) +- [x] Custom permission sets +- [x] Audit logging per user +- [x] Team invitations +- [x] Activity dashboard + +--- + +## Phase 16: API Enhancements (Completed) + +**Priority: Medium** + +- [x] API key management +- [x] Rate limiting +- [x] Webhook event subscriptions +- [x] OpenAPI/Swagger documentation +- [x] API usage analytics + +--- + +## Phase 17: Advanced Security (Completed) + +**Priority: High** + +- [x] Unified Security page with all security features +- [x] Firewall tab with UFW/firewalld management +- [x] Fail2ban integration +- [x] SSH key management +- [x] IP allowlist/blocklist +- [x] Brute force protection +- [x] Security audit reports +- [x] Vulnerability scanning (Lynis) +- [x] Automatic security updates (unattended-upgrades/dnf-automatic) + +--- + +## Phase 18: SSO & OAuth Login (Completed) + +**Priority: High** + +- [x] Google OAuth 2.0 login +- [x] GitHub OAuth login +- [x] Generic OpenID Connect (OIDC) provider support +- [x] SAML 2.0 support for enterprise environments +- [x] Social login UI (provider buttons on login page) +- [x] Account linking (connect OAuth identity to existing local account) +- [x] Auto-provisioning of new users on first SSO login +- [x] Configurable SSO settings (enable/disable providers, client ID/secret management) +- [x] Enforce SSO-only login (disable password auth for team members) +- [x] SSO session management and token refresh + +--- + +## 
Phase 19: Database Migrations & Schema Versioning (Completed)
+
+**Priority: High**
+
+- [x] Flask-Migrate (Alembic) integration
+- [x] Migration wizard UI
+- [x] CLI fallback support
+
+---
+
+## Phase 20: New UI & Services Page (Completed)
+
+**Priority: Critical**
+
+Integrated full Services page with detail views, metrics, logs, shell, settings, and package management.
+
+- [x] Services list page with status indicators and quick actions
+- [x] Service detail page with tabbed interface (Metrics, Logs, Shell, Settings, Commands, Events, Packages)
+- [x] Git connect modal for linking services to repositories
+- [x] Gunicorn management tab for Python services
+- [x] Service type detection and type-specific UI (Node, Python, PHP, Docker, etc.)
+
+---
+
+## Phase 21: Environment Pipeline (Completed)
+
+**Priority: High**
+
+- [x] WordPress multi-environment pipeline (Prod/Staging/Dev)
+- [x] Code and database promotion between environments
+- [x] Production syncing and environment locking
+
+---
+
+## Phase 22: Container Logs & Monitoring UI (Completed)
+
+**Priority: High**
+
+- [x] Real-time log streaming via WebSocket with ANSI color support
+- [x] Web-based terminal (`Terminal.jsx`) with shell access
+- [x] Per-app resource usage charts (CPU, RAM)
+- [x] Log search and filtering
+
+---
+
+# Recent Development
+
+The phases below are ordered by priority. Higher phases shipped first.
+
+---
+
+## Phase 23: Workflow & Automation Engine — Core (Completed)
+
+**Priority: Critical**
+
+Moving beyond static design to dynamic, event-driven automation. This turns ServerKit into a powerful automation hub.
+
+- [x] **Visual Workflow Builder:** Node-based canvas with drag-and-drop nodes, connection validation, and config panels
+- [x] **Cron Integration:** Schedule workflows to run on recurring intervals (e.g., "Every Sunday at 2 AM, backup all DBs and rotate logs")
+- [x] **Manual Execution:** Trigger workflows on demand with optional context data
+- [x] **Execution History:** Track workflow execution status, per-node results, and timestamped logs
+- [x] **Script Nodes:** Custom shell script execution nodes with output capture
+- [x] **Notification Nodes:** Send alerts via configured notification channels
+- [x] **One-Click Stack Deployment:** Deploy full infrastructure (databases, apps, domains) from a workflow diagram
+
+---
+
+## Phase 24: Customizable Sidebar & Dashboard Views (Completed)
+
+**Priority: High**
+
+Let users personalize what they see. Not everyone runs email servers or manages Docker — the sidebar should adapt to each user's needs.
+
+- [x] Sidebar configuration page in Settings
+- [x] Preset view profiles (Full, Web Hosting, Email Admin, Docker/DevOps, Minimal)
+- [x] Custom view builder — toggle individual sidebar items on/off
+- [x] Per-user preference storage (saved to user profile)
+
+---
+
+## Phase 25: Workflow Engine — Triggers & Completion (Completed)
+
+**Priority: High**
+
+Complete the workflow engine with proper execution logic, missing triggers, and production-grade reliability.
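The checklist below calls out circular dependency detection via Kahn's algorithm. As a minimal sketch of that validation pass (node and edge shapes here are assumptions, not ServerKit's actual workflow schema):

```python
from collections import deque

def validate_workflow_graph(nodes: list[str], edges: list[tuple[str, str]]) -> None:
    """Kahn's algorithm: raise if the graph contains a cycle.

    `nodes` are node ids and `edges` are (source, target) pairs; these
    shapes are illustrative, not ServerKit's real workflow schema.
    """
    indegree = {n: 0 for n in nodes}
    adjacent = {n: [] for n in nodes}
    for src, dst in edges:
        adjacent[src].append(dst)
        indegree[dst] += 1

    # Repeatedly peel off nodes with no remaining prerequisites.
    queue = deque(n for n in nodes if indegree[n] == 0)
    visited = 0
    while queue:
        node = queue.popleft()
        visited += 1
        for nxt in adjacent[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)

    if visited != len(nodes):  # some nodes never became free: a cycle exists
        raise ValueError("workflow graph contains a circular dependency")

validate_workflow_graph(["a", "b", "c"], [("a", "b"), ("b", "c")])  # passes
```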
+ +- [x] **DAG Execution:** Full directed acyclic graph traversal with parallel branch support (replace current linear BFS) +- [x] **Logic Node Evaluation:** If/Else condition evaluation with true/false branching +- [x] **Variable Interpolation:** Pass data between steps using `${node_id.field}` and `{{placeholder}}` syntax in node configs +- [x] **Webhook Triggers:** Register `/hooks/` endpoint to fire workflows on incoming HTTP requests +- [x] **Event Triggers:** Run workflows on system events (health check failure, high CPU/memory, git push, app stopped) +- [x] **Notification Templating:** Message placeholder substitution (`${node_id.stdout}`, `{{workflow_name}}`) in notification nodes +- [x] **Execution Timeouts:** Configurable timeout per node (1–3600s) to prevent hung workflows +- [x] **Retry on Failure:** Configurable retry count (0–5) and delay per node +- [x] **Circular Dependency Detection:** Kahn's algorithm validates graph on save and before execution +- [x] **Script Sandboxing:** Timeout enforcement, output size limits, explicit `bash -c`/`python3 -c` execution + +--- + +## Phase 26: Agent Fleet Management (Completed) + +**Priority: High** + +Level up agent management from "connect and monitor" to full fleet control. + +- [x] Agent version tracking and compatibility matrix (panel version ↔ agent version) +- [x] Push agent upgrades from the panel (single server or fleet-wide rollout) +- [x] Staged rollout support — upgrade agents in batches with health checks between waves +- [x] Agent health dashboard — connection uptime, heartbeat latency, command success rate per agent +- [x] Auto-discovery of new servers on the local network (mDNS/broadcast scan) +- [x] Agent registration approval workflow (admin must approve before agent joins fleet) +- [x] Bulk agent operations — restart, upgrade, rotate keys across selected servers +- [x] Agent changelog and release notes visible in UI +- [x] Offline agent command queue — persist commands and deliver when agent reconnects +- [x] Command retry with configurable backoff for failed/timed-out operations +- [x] Agent connection diagnostics — test connectivity, latency, firewall check from panel + +--- + +## Phase 27: Cross-Server Monitoring Dashboard (Completed) + +**Priority: High** + +Fleet-wide visibility — see everything at a glance and catch problems early. + +- [x] Fleet overview dashboard — heatmap of all servers by CPU/memory/disk usage +- [x] Server comparison charts — overlay metrics from multiple servers on one graph +- [x] Per-server alert thresholds (CPU > 80% for 5 min → warning, > 95% → critical) +- [x] Anomaly detection — automatic baseline learning, alert on deviations +- [x] Custom metric dashboards — drag-and-drop widgets, save layouts per user +- [x] Metric correlation view — spot relationships between metrics across servers +- [x] Capacity forecasting — trend-based predictions (disk full in X days, memory growth rate) +- [x] Metrics export — Prometheus endpoint (`/metrics`), CSV download, JSON API +- [x] Grafana integration guide and pre-built dashboard templates +- [x] Fleet-wide search — find which server is running a specific container, service, or port + +--- + +## Phase 28: Agent Plugin System (Completed) + +**Priority: High** + +Make the agent extensible — let users add custom capabilities without modifying agent core. This is the foundation for future integrations (Android device farms, IoT fleets, custom hardware monitoring, etc.). 
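Before the architecture checklist below, a hedged illustration of what a plugin manifest plus a minimal install-time validation could look like. Every field name here is an assumption; the actual manifest schema is not part of this diff.

```python
import json

# Hypothetical manifest in the JSON flavor the checklist mentions.
MANIFEST = """{
  "name": "gpu-metrics",
  "version": "0.1.0",
  "capabilities": ["metrics", "healthcheck"],
  "permissions": ["read:/proc", "exec:nvidia-smi"],
  "entrypoint": "./gpu-metrics-plugin"
}"""

REQUIRED = {"name", "version", "capabilities", "entrypoint"}

def load_manifest(raw: str) -> dict:
    """Parse and minimally validate a manifest before install."""
    manifest = json.loads(raw)
    missing = REQUIRED - manifest.keys()
    if missing:
        raise ValueError(f"manifest missing required fields: {sorted(missing)}")
    return manifest

print(load_manifest(MANIFEST)["name"])  # -> gpu-metrics
```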
+ +### Plugin Architecture +- [x] Plugin specification — standard interface (init, healthcheck, metrics, commands) +- [x] Plugin manifest format (YAML/JSON) — name, version, dependencies, capabilities, permissions +- [x] Plugin lifecycle management — install, enable, disable, uninstall, upgrade +- [x] Plugin isolation — each plugin runs in its own process/sandbox with resource limits +- [x] Plugin communication — standardized IPC between plugin and agent core + +### Plugin Capabilities +- [x] Custom metrics reporters — plugins can push arbitrary metrics to the panel +- [x] Custom health checks — plugins define checks that feed into the status system +- [x] Custom commands — plugins register new command types the panel can invoke +- [x] Scheduled tasks — plugins can register periodic jobs (cron-like) +- [x] Event hooks — plugins can react to agent events (connect, disconnect, command, alert) + +--- + +## Phase 29: Server Templates & Config Sync (Completed) + +**Priority: Medium** + +Define what a server should look like, apply it, and detect when it drifts. + +- [x] Server template builder — define expected state (packages, services, firewall rules, users, files) +- [x] Template library — save and reuse templates (e.g., "Web Server", "Database Server", "Mail Server") +- [x] Apply template to server — install packages, configure services, set firewall rules via agent +- [x] Config drift detection — periodic comparison of actual vs. expected state +- [x] Drift report UI — visual diff showing what changed and when +- [x] Auto-remediation option — automatically fix drift back to template (with approval toggle) +- [x] Template versioning — track changes to templates over time +- [x] Template inheritance — base template + role-specific overrides +- [x] Bulk apply — roll out template changes across server groups +- [x] Compliance dashboard — percentage of fleet in compliance per template + +--- + +## Phase 30: Multi-Tenancy & Workspaces (Completed) + +**Priority: Medium** + +Isolate servers by team, client, or project. Essential for agencies, MSPs, and larger teams. + +- [x] Workspace model — isolated container for servers, users, and settings +- [x] Workspace CRUD — create, rename, archive workspaces +- [x] Server assignment — each server belongs to exactly one workspace +- [x] User workspace membership — users can belong to multiple workspaces with different roles +- [x] Workspace switching — quick-switch dropdown in the header +- [x] Per-workspace settings — notification preferences, default templates, branding +- [x] Workspace-scoped API keys — API keys restricted to a single workspace +- [x] Cross-workspace admin view — super-admin can see all workspaces and usage +- [x] Workspace usage quotas — limit servers, users, or API calls per workspace +- [x] Workspace billing integration — track resource usage per workspace for invoicing + +--- + +## Phase 31: Advanced SSL Features (Completed) + +**Priority: Medium** + +- [x] Certificate expiry monitoring +- [x] Wildcard SSL certificates via DNS-01 challenge +- [x] Multi-domain certificates (SAN) +- [x] Custom certificate upload (key + cert + chain) +- [x] Certificate expiry notifications (email/webhook alerts before expiration) +- [x] SSL configuration templates (modern, intermediate, legacy compatibility) +- [x] SSL health check dashboard (grade, cipher suites, protocol versions) + +--- + +## Phase 32: DNS Zone Management (Completed) + +**Priority: Medium** + +Full DNS record management with provider API integration. 
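One item in the checklist below is a DNS propagation checker that queries multiple nameservers. A small sketch of that idea using the third-party dnspython package; the resolver IPs are examples, and ServerKit's checker may work differently:

```python
import dns.resolver  # third-party: pip install dnspython

# Ask several public resolvers directly and compare their answers.
NAMESERVERS = {"Google": "8.8.8.8", "Cloudflare": "1.1.1.1", "Quad9": "9.9.9.9"}

def check_propagation(name: str, rtype: str = "A") -> dict[str, list[str]]:
    results = {}
    for label, ip in NAMESERVERS.items():
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [ip]  # bypass the local resolver config
        try:
            answer = resolver.resolve(name, rtype)
            results[label] = sorted(r.to_text() for r in answer)
        except Exception as exc:  # NXDOMAIN, timeout, etc.
            results[label] = [f"error: {exc}"]
    return results

if __name__ == "__main__":
    print(check_propagation("example.com"))
```

A record is considered propagated when all resolvers return the same answer set.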
+
+- [x] DNS zone editor UI (A, AAAA, CNAME, MX, TXT, SRV, CAA records)
+- [x] Cloudflare API integration (list/create/update/delete records)
+- [x] Route53 API integration
+- [x] DigitalOcean DNS integration
+- [x] DNS propagation checker (query multiple nameservers)
+- [x] Auto-generate recommended records for hosted services (SPF, DKIM, DMARC, MX)
+- [x] DNS template presets (e.g., "standard web hosting", "email hosting")
+- [x] Bulk record import/export (BIND zone file format)
+
+---
+
+## Phase 33: Nginx Advanced Configuration (Completed)
+
+**Priority: Medium**
+
+Go beyond basic virtual hosts — full reverse proxy and performance configuration.
+
+- [x] Visual reverse proxy rule builder (upstream servers, load balancing methods)
+- [x] Load balancing configuration (round-robin, least connections, IP hash)
+- [x] Caching rules editor (proxy cache zones, TTLs, cache bypass rules)
+- [x] Rate limiting at proxy level (per-IP, per-route)
+- [x] Custom location block editor with syntax validation
+- [x] Header manipulation (add/remove/modify request/response headers)
+- [x] Nginx config syntax check before applying changes
+- [x] Config diff preview before saving
+- [x] Access/error log viewer per virtual host
+
+---
+
+## Phase 34: Status Page & Health Checks (Completed)
+
+**Priority: Medium**
+
+Public-facing status page and automated health monitoring.
+
+- [x] Automated health checks (HTTP, TCP, DNS, SMTP) with configurable intervals
+- [x] Public status page (standalone URL, no auth required)
+- [x] Status page customization (logo, colors, custom domain)
+- [x] Service grouping on status page (e.g., "Web Services", "Email", "APIs")
+- [x] Incident management — create, update, resolve incidents with timeline
+- [x] Uptime percentage display (24h, 7d, 30d, 90d)
+- [x] Scheduled maintenance windows with advance notifications
+- [x] Status page subscribers (email/webhook notifications on incidents)
+- [x] Historical uptime graphs
+- [x] Status badge embeds (SVG/PNG for README files)
+
+---
+
+## Phase 35: Server Provisioning APIs (Completed)
+
+**Priority: Medium**
+
+Spin up and manage cloud servers directly from the panel.
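+
+As a sketch of the kind of call the provisioning wizard builds on (region, size, image, and the cloud-init snippet are illustrative placeholders, not ServerKit defaults):
+
+```python
+import requests
+
+def create_droplet(token: str, name: str, ssh_key_ids: list[int]) -> dict:
+    """Create a DigitalOcean droplet and bootstrap it via cloud-init (sketch)."""
+    resp = requests.post(
+        "https://api.digitalocean.com/v2/droplets",
+        headers={"Authorization": f"Bearer {token}"},
+        json={
+            "name": name,
+            "region": "nyc3",
+            "size": "s-1vcpu-1gb",
+            "image": "ubuntu-22-04-x64",
+            "ssh_keys": ssh_key_ids,
+            # Hypothetical bootstrap hook; agent auto-install would go here.
+            "user_data": "#cloud-config\nruncmd:\n  - echo 'install serverkit agent'\n",
+        },
+        timeout=30,
+    )
+    resp.raise_for_status()
+    return resp.json()["droplet"]
+```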
+
+- [x] DigitalOcean API integration (create/destroy/resize droplets)
+- [x] Hetzner Cloud API integration
+- [x] Vultr API integration
+- [x] Linode/Akamai API integration
+- [x] Server creation wizard (region, size, OS, SSH keys)
+- [x] Auto-install ServerKit agent on provisioned servers
+- [x] Server cost tracking and billing overview
+- [x] Snapshot management (create/restore/delete)
+- [x] One-click server cloning
+- [x] Destroy server with confirmation safeguards
+
+---
+
+## Phase 36: Performance Optimization (Completed)
+
+**Priority: Low**
+
+- [x] Redis caching for frequently accessed data (metrics, server status)
+- [x] Database query optimization and slow query logging
+- [x] Background job queue (Celery or RQ) for long-running tasks
+- [x] Lazy loading for large datasets (paginated API responses)
+- [x] WebSocket connection pooling and reconnection improvements
+- [x] Frontend bundle optimization and code splitting
+
+---
+
+## Phase 37: Mobile App (Completed)
+
+**Priority: Low — v3.0+**
+
+- [x] React Native or PWA mobile application
+- [x] Push notifications for alerts and incidents
+- [x] Quick actions (restart services, view stats, acknowledge alerts)
+- [x] Biometric authentication (fingerprint/Face ID)
+- [x] Offline mode with cached server status
+
+---
+
+## Phase 38: Marketplace & Extensions (Completed)
+
+**Priority: Low — v3.0+**
+
+- [x] Plugin/extension system with API hooks
+- [x] Community marketplace for plugins
+- [x] Custom dashboard widgets
+- [x] Theme customization (colors, layout, branding)
+- [x] Extension SDK and developer documentation
+
+---
+
+## Version Milestones
+
+| Version | Target Features | Status |
+|---------|-----------------|--------|
+| v0.9.0 | Core features, 2FA, Notifications, Security | Completed |
+| v1.0.0 | Production-ready stable release, DB migrations | Completed |
+| v1.1.0 | Multi-server, Git deployment | Completed |
+| v1.2.0 | Backups, Advanced SSL, Advanced Security | Completed |
+| v1.3.0 | Email server, API enhancements | Completed |
+| v1.4.0 | Team & permissions, SSO & OAuth login | Completed |
+| v1.5.0 | New UI, Visual Designer, Services Page | Completed |
+| v1.6.0 | Workflow triggers & completion, fleet management | Completed |
+| v1.7.0 | Cross-server monitoring, agent plugin system | Completed |
+| v1.8.0 | Server templates, multi-tenancy | Completed |
+| v1.9.0 | Advanced SSL, DNS management, Nginx config | Completed |
+| v2.0.0 | Status pages, server provisioning, performance | Completed |
+| v3.0.0 | Mobile app, Marketplace | Completed |
+
+---
+
+## Contributing
+
+Want to help? See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
+
+**Priority areas for contributions:**
+- Agent plugin SDK and example plugins
+- Fleet management and monitoring dashboard
+- DNS provider integrations (Cloudflare, Route53)
+- Status page and health check system
+- UI/UX improvements
+- Documentation
+
+---
+
+## Feature Requests
+
+Have a feature idea? Open an issue on GitHub with the `enhancement` label.
+
+---
+

+ServerKit Roadmap
+Last updated: March 2026
+

diff --git a/VERSION b/VERSION index 95b25aee..c514bd85 100644 --- a/VERSION +++ b/VERSION @@ -1 +1 @@ -1.3.6 +1.4.6 diff --git a/agent/docker-compose.yml b/agent/docker-compose.yml index 1c145c97..02422b2f 100644 --- a/agent/docker-compose.yml +++ b/agent/docker-compose.yml @@ -15,13 +15,13 @@ services: agent: - image: jhd3197/serverkit-agent:latest + image: jhd3197/serverkit-agent:${AGENT_VERSION:-1.0.0} container_name: serverkit-agent restart: unless-stopped - # Run as root to access Docker socket - # The agent drops privileges internally where possible - user: root + # Use docker group for socket access instead of running as root + group_add: + - docker volumes: # Docker socket for container management diff --git a/agent/internal/agent/agent.go b/agent/internal/agent/agent.go index 78b85ba0..ea946ed8 100644 --- a/agent/internal/agent/agent.go +++ b/agent/internal/agent/agent.go @@ -3,9 +3,13 @@ package agent import ( "bufio" "context" + "crypto/hmac" + "crypto/sha256" "encoding/base64" + "encoding/hex" "encoding/json" "fmt" + "net" "os" "sync" "time" @@ -17,6 +21,7 @@ import ( "github.com/serverkit/agent/internal/logger" "github.com/serverkit/agent/internal/metrics" "github.com/serverkit/agent/internal/terminal" + "github.com/serverkit/agent/internal/updater" "github.com/serverkit/agent/internal/ws" "github.com/serverkit/agent/pkg/protocol" ) @@ -151,6 +156,13 @@ func (a *Agent) registerHandlers() { a.handlers[protocol.ActionSystemProcesses] = a.handleSystemProcesses } + // File commands + if a.cfg.Features.Files { + a.handlers[protocol.ActionFileRead] = a.handleFileRead + a.handlers[protocol.ActionFileWrite] = a.handleFileWrite + a.handlers[protocol.ActionFileList] = a.handleFileList + } + // Terminal commands if a.terminal != nil { a.handlers[protocol.ActionTerminalCreate] = a.handleTerminalCreate @@ -158,6 +170,9 @@ func (a *Agent) registerHandlers() { a.handlers[protocol.ActionTerminalResize] = a.handleTerminalResize a.handlers[protocol.ActionTerminalClose] = a.handleTerminalClose } + + // Agent commands + a.handlers[protocol.ActionAgentUpdate] = a.handleAgentUpdate } // Run starts the agent @@ -195,6 +210,9 @@ func (a *Agent) Run(ctx context.Context) error { // Start heartbeat loop go a.heartbeatLoop(ctx) + // Start discovery responder + go a.discoveryLoop(ctx) + // Wait for context cancellation or restart request select { case <-ctx.Done(): @@ -208,6 +226,108 @@ func (a *Agent) Run(ctx context.Context) error { return ctx.Err() } +// discoveryLoop listens for UDP discovery requests and responds with agent info +func (a *Agent) discoveryLoop(ctx context.Context) { + // Simple UDP broadcast listener + // Port 9000 matches DiscoveryService in backend + port := 9000 + addr, err := net.ResolveUDPAddr("udp", fmt.Sprintf(":%d", port)) + if err != nil { + a.log.Error("Failed to resolve UDP address for discovery", "error", err) + return + } + + conn, err := net.ListenUDP("udp", addr) + if err != nil { + a.log.Error("Failed to listen for discovery broadcasts", "error", err) + return + } + defer conn.Close() + + a.log.Info("Agent discovery responder started", "port", port) + + buf := make([]byte, 1024) + for { + select { + case <-ctx.Done(): + return + default: + conn.SetReadDeadline(time.Now().Add(1 * time.Second)) + n, remoteAddr, err := conn.ReadFromUDP(buf) + if err != nil { + continue + } + + var req struct { + Type string `json:"type"` + Timestamp int64 `json:"timestamp"` + Signature string `json:"signature"` + } + if err := json.Unmarshal(buf[:n], &req); err != nil || (req.Type != 
"discovery_request" && req.Type != string(protocol.TypeDiscoveryRequest)) { + continue + } + + // If agent has no credentials (not registered), don't respond to discovery + if a.cfg.Auth.APIKey == "" { + continue + } + + // Validate timestamp is within 60 seconds + now := time.Now().UnixMilli() + if req.Timestamp <= 0 || abs(now-req.Timestamp) > 60000 { + a.log.Debug("Ignoring discovery request with stale timestamp") + continue + } + + // Verify HMAC signature + if req.Signature == "" { + a.log.Debug("Ignoring discovery request without signature") + continue + } + expectedMessage := fmt.Sprintf("discovery:%d", req.Timestamp) + mac := hmac.New(sha256.New, []byte(a.cfg.Auth.APIKey)) + mac.Write([]byte(expectedMessage)) + expectedSignature := hex.EncodeToString(mac.Sum(nil)) + if !hmac.Equal([]byte(req.Signature), []byte(expectedSignature)) { + a.log.Debug("Ignoring discovery request with invalid signature") + continue + } + + // Respond with minimal agent info (no detailed hardware specs) + hostname, _ := os.Hostname() + resp := struct { + Type string `json:"type"` + AgentID string `json:"agent_id"` + Hostname string `json:"hostname"` + Status string `json:"status"` + AgentVersion string `json:"agent_version"` + Timestamp int64 `json:"timestamp"` + }{ + Type: "discovery", + AgentID: a.cfg.Agent.ID, + Hostname: hostname, + Status: "online", + AgentVersion: Version, + Timestamp: time.Now().UnixMilli(), + } + + data, _ := json.Marshal(resp) + + // Send response to remoteAddr on port+1 + respAddr, _ := net.ResolveUDPAddr("udp", fmt.Sprintf("%s:%d", remoteAddr.IP.String(), port+1)) + conn.WriteToUDP(data, respAddr) + } + } +} + +// abs returns the absolute value of an int64 +func abs(x int64) int64 { + if x < 0 { + return -x + } + return x +} + // heartbeatLoop sends periodic heartbeats func (a *Agent) heartbeatLoop(ctx context.Context) { ticker := time.NewTicker(a.cfg.Server.PingInterval) @@ -294,15 +414,18 @@ func (a *Agent) handleCommand(data []byte) { return } - // Execute command + // Execute command with enforced maximum timeout start := time.Now() - ctx := context.Background() - if cmd.Timeout > 0 { - var cancel context.CancelFunc - ctx, cancel = context.WithTimeout(ctx, time.Duration(cmd.Timeout)*time.Millisecond) - defer cancel() + maxTimeout := 5 * time.Minute + cmdTimeout := time.Duration(cmd.Timeout) * time.Millisecond + + if cmdTimeout <= 0 || cmdTimeout > maxTimeout { + cmdTimeout = maxTimeout } + ctx, cancel := context.WithTimeout(context.Background(), cmdTimeout) + defer cancel() + result, err := handler(ctx, cmd.Params) duration := time.Since(start) @@ -1094,3 +1217,51 @@ func (a *Agent) Restart() error { return fmt.Errorf("restart already in progress") } } + +// handleAgentUpdate handles agent upgrade commands +func (a *Agent) handleAgentUpdate(ctx context.Context, params json.RawMessage) (interface{}, error) { + var p struct { + Version string `json:"version"` + DownloadURL string `json:"download_url"` + ChecksumsURL string `json:"checksums_url"` + Force bool `json:"force"` + } + if err := json.Unmarshal(params, &p); err != nil { + return nil, fmt.Errorf("invalid params: %w", err) + } + + if p.Version == "" { + return nil, fmt.Errorf("version is required") + } + + a.log.Info("Agent update triggered via panel", "version", p.Version) + + // Create updater + u := updater.New(a.cfg, a.log, Version) + + // Trigger update in background so we can respond to the command + go func() { + // Small delay to allow command result to be sent + time.Sleep(2 * time.Second) + + // Download and 
install + // In a real implementation, we would use the provided URLs + // For now, we'll let the updater handle it using its default logic + // or extend it to use the provided URLs. + + err := u.UpdateTo(context.Background(), p.Version, p.DownloadURL, p.ChecksumsURL) + if err != nil { + a.log.Error("Update failed", "error", err) + return + } + + a.log.Info("Update successful, restarting...") + a.Restart() + }() + + return map[string]interface{}{ + "success": true, + "message": "Update triggered", + "version": p.Version, + }, nil +} diff --git a/agent/internal/agent/registration.go b/agent/internal/agent/registration.go index e1874765..e55513cc 100644 --- a/agent/internal/agent/registration.go +++ b/agent/internal/agent/registration.go @@ -3,7 +3,10 @@ package agent import ( "bytes" "context" + "crypto/hmac" + "crypto/sha256" "crypto/tls" + "encoding/hex" "encoding/json" "fmt" "io" @@ -96,8 +99,7 @@ func (r *Registration) Register(serverURL, token, name string) (*RegistrationRes Timeout: 30 * time.Second, Transport: &http.Transport{ TLSClientConfig: &tls.Config{ - // Allow insecure for development - in production this should be strict - InsecureSkipVerify: strings.HasPrefix(serverURL, "http://") || strings.Contains(serverURL, "localhost"), + InsecureSkipVerify: os.Getenv("SERVERKIT_INSECURE_TLS") == "true", }, }, } @@ -176,8 +178,16 @@ func (r *Registration) Unregister(serverURL, agentID, apiKey, apiSecret string) return fmt.Errorf("failed to create request: %w", err) } - req.Header.Set("X-API-Key", apiKey) - req.Header.Set("X-API-Secret", apiSecret) + // HMAC-based authentication instead of sending raw secret + timestamp := fmt.Sprintf("%d", time.Now().UnixMilli()) + message := fmt.Sprintf("%s:%s", agentID, timestamp) + mac := hmac.New(sha256.New, []byte(apiSecret)) + mac.Write([]byte(message)) + signature := hex.EncodeToString(mac.Sum(nil)) + + req.Header.Set("X-Agent-ID", agentID) + req.Header.Set("X-Timestamp", timestamp) + req.Header.Set("X-Signature", signature) resp, err := client.Do(req) if err != nil { diff --git a/agent/internal/updater/updater.go b/agent/internal/updater/updater.go index 076e2c82..ab7cacd3 100644 --- a/agent/internal/updater/updater.go +++ b/agent/internal/updater/updater.go @@ -63,6 +63,30 @@ func New(cfg *config.Config, log *logger.Logger, currentVersion string) *Updater } } +// UpdateTo performs an update to a specific version using provided URLs +func (u *Updater) UpdateTo(ctx context.Context, version, downloadURL, checksumsURL string) error { + u.log.Info("Updating to specific version", "version", version) + + info := &VersionInfo{ + LatestVersion: version, + DownloadURL: downloadURL, + ChecksumsURL: checksumsURL, + } + + // Download update + newBinaryPath, err := u.DownloadUpdate(ctx, info) + if err != nil { + return fmt.Errorf("failed to download update: %w", err) + } + + // Install update + if err := u.InstallUpdate(newBinaryPath); err != nil { + return fmt.Errorf("failed to install update: %w", err) + } + + return nil +} + // CheckForUpdate checks if a new version is available func (u *Updater) CheckForUpdate(ctx context.Context) (*VersionInfo, error) { u.log.Debug("Checking for updates", "current_version", u.currentVersion) diff --git a/agent/internal/ws/client.go b/agent/internal/ws/client.go index 621da67b..47854491 100644 --- a/agent/internal/ws/client.go +++ b/agent/internal/ws/client.go @@ -7,6 +7,7 @@ import ( "fmt" "net/http" "net/url" + "os" "strings" "sync" "time" @@ -108,8 +109,8 @@ func (c *Client) Connect(ctx context.Context) error { 
HandshakeTimeout: 10 * time.Second, } - // Allow insecure for development - if c.cfg.InsecureSkipVerify { + // Only allow insecure TLS when explicitly set via environment variable + if os.Getenv("SERVERKIT_INSECURE_TLS") == "true" { dialer.TLSClientConfig = &tls.Config{InsecureSkipVerify: true} } diff --git a/agent/pkg/protocol/messages.go b/agent/pkg/protocol/messages.go index 9d039dbb..db07f43f 100644 --- a/agent/pkg/protocol/messages.go +++ b/agent/pkg/protocol/messages.go @@ -33,6 +33,10 @@ const ( // System TypeSystemInfo MessageType = "system_info" + // Discovery + TypeDiscovery MessageType = "discovery" + TypeDiscoveryRequest MessageType = "discovery_request" + // Credential Rotation TypeCredentialUpdate MessageType = "credential_update" TypeCredentialUpdateAck MessageType = "credential_update_ack" @@ -209,6 +213,9 @@ const ( ActionTerminalInput = "terminal:input" ActionTerminalResize = "terminal:resize" ActionTerminalClose = "terminal:close" + + // Agent actions + ActionAgentUpdate = "agent:update" ) // Stream channels diff --git a/backend/.env.example b/backend/.env.example index 8230c85e..1ba9c075 100644 --- a/backend/.env.example +++ b/backend/.env.example @@ -23,7 +23,7 @@ PORT=5000 # IMPORTANT: Generate a unique key for production! # Run: python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())" # Or: python -c "import base64; import os; print(base64.urlsafe_b64encode(os.urandom(32)).decode())" -SERVERKIT_ENCRYPTION_KEY=dSenbMzCii4PMYP3Bb_N9jdqm6pa_TOD4l9yAwhhSBA= +SERVERKIT_ENCRYPTION_KEY= # Generate with: python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())" # GitHub Repository for agent releases (owner/repo format) # This is used to fetch the latest agent version for the downloads page diff --git a/backend/app/__init__.py b/backend/app/__init__.py index 2c5c6641..1abe149e 100644 --- a/backend/app/__init__.py +++ b/backend/app/__init__.py @@ -1,385 +1,549 @@ -import os -from flask import Flask, send_from_directory -from flask_sqlalchemy import SQLAlchemy -from flask_jwt_extended import JWTManager -from flask_cors import CORS -from flask_limiter import Limiter -from flask_limiter.util import get_remote_address -from flask_migrate import Migrate - -from config import config - -db = SQLAlchemy() -jwt = JWTManager() -migrate = Migrate() -limiter = Limiter(key_func=get_remote_address, default_limits=["100 per minute"]) -# Note: key_func is updated to get_rate_limit_key after app init -socketio = None - -# Path to frontend dist folder (relative to backend folder) -FRONTEND_DIST = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), 'frontend', 'dist') - - -def create_app(config_name=None): - global socketio - - if config_name is None: - config_name = os.environ.get('FLASK_ENV', 'development') - - # Configure Flask to serve static files from frontend dist - app = Flask( - __name__, - static_folder=FRONTEND_DIST, - static_url_path='' - ) - app.config.from_object(config[config_name]) - - # Initialize extensions - db.init_app(app) - migrate.init_app(app, db) - jwt.init_app(app) - limiter.init_app(app) - CORS( - app, - origins=app.config['CORS_ORIGINS'], - supports_credentials=True, - allow_headers=['Content-Type', 'Authorization', 'X-Requested-With', 'X-API-Key'], - methods=['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS', 'PATCH'] - ) - - # Register security headers middleware - from app.middleware.security import register_security_headers - register_security_headers(app) - - # Register API key 
authentication middleware - from app.middleware.api_key_auth import register_api_key_auth - register_api_key_auth(app) - - # Register API analytics middleware - from app.middleware.api_analytics import register_api_analytics - register_api_analytics(app) - - # Update rate limiter with custom key function - from app.middleware.rate_limit import get_rate_limit_key, register_rate_limit_headers - limiter._key_func = get_rate_limit_key - register_rate_limit_headers(app) - - # Initialize SocketIO - from app.sockets import init_socketio - socketio = init_socketio(app) - - # Initialize Agent Gateway - from app.agent_gateway import init_agent_gateway - init_agent_gateway(socketio) - - # Register blueprints - Auth - from app.api.auth import auth_bp - app.register_blueprint(auth_bp, url_prefix='/api/v1/auth') - - # Register blueprints - Core - from app.api.apps import apps_bp - from app.api.domains import domains_bp - from app.api.private_urls import private_urls_bp - app.register_blueprint(apps_bp, url_prefix='/api/v1/apps') - app.register_blueprint(domains_bp, url_prefix='/api/v1/domains') - app.register_blueprint(private_urls_bp, url_prefix='/api/v1/apps') - - # Register blueprints - System - from app.api.system import system_bp - from app.api.processes import processes_bp - from app.api.logs import logs_bp - app.register_blueprint(system_bp, url_prefix='/api/v1/system') - app.register_blueprint(processes_bp, url_prefix='/api/v1/processes') - app.register_blueprint(logs_bp, url_prefix='/api/v1/logs') - - # Register blueprints - Infrastructure - from app.api.nginx import nginx_bp - from app.api.ssl import ssl_bp - app.register_blueprint(nginx_bp, url_prefix='/api/v1/nginx') - app.register_blueprint(ssl_bp, url_prefix='/api/v1/ssl') - - # Register blueprints - PHP & WordPress - from app.api.php import php_bp - from app.api.wordpress import wordpress_bp - from app.api.wordpress_sites import wordpress_sites_bp - from app.api.environment_pipeline import environment_pipeline_bp - app.register_blueprint(php_bp, url_prefix='/api/v1/php') - app.register_blueprint(wordpress_bp, url_prefix='/api/v1/wordpress') - app.register_blueprint(wordpress_sites_bp, url_prefix='/api/v1/wordpress') - app.register_blueprint(environment_pipeline_bp, url_prefix='/api/v1/wordpress/projects') - - # Register blueprints - Python - from app.api.python import python_bp - app.register_blueprint(python_bp, url_prefix='/api/v1/python') - - # Register blueprints - Docker - from app.api.docker import docker_bp - app.register_blueprint(docker_bp, url_prefix='/api/v1/docker') - - # Register blueprints - Databases - from app.api.databases import databases_bp - app.register_blueprint(databases_bp, url_prefix='/api/v1/databases') - - # Register blueprints - Monitoring & Alerts - from app.api.monitoring import monitoring_bp - app.register_blueprint(monitoring_bp, url_prefix='/api/v1/monitoring') - - # Register blueprints - Notifications - from app.api.notifications import notifications_bp - app.register_blueprint(notifications_bp, url_prefix='/api/v1/notifications') - - # Register blueprints - Backups - from app.api.backups import backups_bp - app.register_blueprint(backups_bp, url_prefix='/api/v1/backups') - - # Register blueprints - Git Deployment - from app.api.deploy import deploy_bp - app.register_blueprint(deploy_bp, url_prefix='/api/v1/deploy') - - # Register blueprints - Builds & Deployments - from app.api.builds import builds_bp - app.register_blueprint(builds_bp, url_prefix='/api/v1/builds') - - # Register blueprints - Templates - 
from app.api.templates import templates_bp - app.register_blueprint(templates_bp, url_prefix='/api/v1/templates') - - # Register blueprints - File Manager - from app.api.files import files_bp - app.register_blueprint(files_bp, url_prefix='/api/v1/files') - - # Register blueprints - FTP Server - from app.api.ftp import ftp_bp - app.register_blueprint(ftp_bp, url_prefix='/api/v1/ftp') - - # Register blueprints - Firewall - from app.api.firewall import firewall_bp - app.register_blueprint(firewall_bp, url_prefix='/api/v1/firewall') - - # Register blueprints - Git Server - from app.api.git import git_bp - app.register_blueprint(git_bp, url_prefix='/api/v1/git') - - # Register blueprints - Security (ClamAV, File Integrity, etc.) - from app.api.security import security_bp - app.register_blueprint(security_bp, url_prefix='/api/v1/security') - - # Register blueprints - Cron Jobs - from app.api.cron import cron_bp - app.register_blueprint(cron_bp, url_prefix='/api/v1/cron') - - # Register blueprints - Email Server - from app.api.email import email_bp - app.register_blueprint(email_bp, url_prefix='/api/v1/email') - - # Register blueprints - Uptime Tracking - from app.api.uptime import uptime_bp - app.register_blueprint(uptime_bp, url_prefix='/api/v1/uptime') - - # Register blueprints - Environment Variables - from app.api.env_vars import env_vars_bp - app.register_blueprint(env_vars_bp, url_prefix='/api/v1/apps') - - # Register blueprints - Two-Factor Authentication - from app.api.two_factor import two_factor_bp - app.register_blueprint(two_factor_bp, url_prefix='/api/v1/auth/2fa') - - # Register blueprints - SSO / OAuth - from app.api.sso import sso_bp - app.register_blueprint(sso_bp, url_prefix='/api/v1/sso') - - # Register blueprints - Database Migrations - from app.api.migrations import migrations_bp - app.register_blueprint(migrations_bp, url_prefix='/api/v1/migrations') - - # Register blueprints - API Enhancements - from app.api.api_keys import api_keys_bp - from app.api.api_analytics import api_analytics_bp - from app.api.event_subscriptions import event_subscriptions_bp - from app.api.docs import docs_bp - app.register_blueprint(api_keys_bp, url_prefix='/api/v1/api-keys') - app.register_blueprint(api_analytics_bp, url_prefix='/api/v1/api-analytics') - app.register_blueprint(event_subscriptions_bp, url_prefix='/api/v1/event-subscriptions') - app.register_blueprint(docs_bp, url_prefix='/api/v1/docs') - - # Register blueprints - Admin (User Management, Settings, Audit Logs) - from app.api.admin import admin_bp - app.register_blueprint(admin_bp, url_prefix='/api/v1/admin') - - # Register blueprints - Invitations - from app.api.invitations import invitations_bp - app.register_blueprint(invitations_bp, url_prefix='/api/v1/admin/invitations') - - # Register blueprints - Historical Metrics - from app.api.metrics import metrics_bp - app.register_blueprint(metrics_bp, url_prefix='/api/v1/metrics') - - # Register blueprints - Workflows - from app.api.workflows import workflows_bp - app.register_blueprint(workflows_bp, url_prefix='/api/v1/workflows') - - # Register blueprints - Servers (Multi-server management) - from app.api.servers import servers_bp - app.register_blueprint(servers_bp, url_prefix='/api/v1/servers') - - # Handle database migrations (Alembic) - with app.app_context(): - from app.services.migration_service import MigrationService - MigrationService.check_and_prepare(app) - - # Initialize default settings and migrate legacy roles - from app.services.settings_service import SettingsService 
- SettingsService.initialize_defaults() - SettingsService.migrate_legacy_roles() - - # Start metrics history collection in background - from app.services.metrics_history_service import MetricsHistoryService - if not MetricsHistoryService.is_running(): - MetricsHistoryService.start_collection(app) - - # Start auto-sync scheduler for WordPress environments - _start_auto_sync_scheduler(app) - - # Start API analytics flush thread - from app.middleware.api_analytics import start_analytics_flush_thread - start_analytics_flush_thread(app) - - # Start hourly analytics aggregation and event retry threads - _start_api_background_threads(app) - - # Serve frontend for root path - @app.route('/') - def serve_index(): - index = os.path.join(app.static_folder, 'index.html') if app.static_folder else None - if index and os.path.isfile(index): - return send_from_directory(app.static_folder, 'index.html') - return {'message': 'ServerKit API is running', 'docs': '/api/v1/'}, 200 - - # Catch-all route for SPA - must be after all other routes - @app.errorhandler(404) - def not_found(e): - from flask import request - if request.path.startswith('/api/'): - return {'error': 'Not found'}, 404 - # Serve SPA index.html if it exists, otherwise JSON 404 - index = os.path.join(app.static_folder, 'index.html') if app.static_folder else None - if index and os.path.isfile(index): - return send_from_directory(app.static_folder, 'index.html') - return {'error': 'Not found'}, 404 - - return app - - -def get_socketio(): - """Get the SocketIO instance.""" - return socketio - - -_auto_sync_thread = None - - -def _start_auto_sync_scheduler(app): - """Start a background thread that checks for auto-sync schedules.""" - global _auto_sync_thread - if _auto_sync_thread is not None: - return - - import threading - import time - import logging - - logger = logging.getLogger(__name__) - - def auto_sync_loop(): - while True: - try: - time.sleep(60) # Check every 60 seconds - with app.app_context(): - _check_auto_sync_schedules(logger) - except Exception as e: - logger.error(f'Auto-sync scheduler error: {e}') - - _auto_sync_thread = threading.Thread( - target=auto_sync_loop, - daemon=True, - name='auto-sync-scheduler' - ) - _auto_sync_thread.start() - - -def _check_auto_sync_schedules(logger): - """Check all auto-sync enabled sites and run syncs that are due.""" - from app.models.wordpress_site import WordPressSite - from datetime import datetime - - sites = WordPressSite.query.filter_by(auto_sync_enabled=True).all() - if not sites: - return - - try: - from croniter import croniter - except ImportError: - logger.debug('croniter not installed, skipping auto-sync check') - return - - now = datetime.utcnow() - - for site in sites: - if not site.auto_sync_schedule: - continue - - try: - if not croniter.is_valid(site.auto_sync_schedule): - continue - - cron = croniter(site.auto_sync_schedule, now) - prev_run = cron.get_prev(datetime) - - # Check if a run was due in the last 90 seconds (to account for check interval) - seconds_since_due = (now - prev_run).total_seconds() - if seconds_since_due <= 90: - logger.info(f'Auto-sync triggered for site {site.id} ({site.name})') - from app.services.environment_pipeline_service import EnvironmentPipelineService - EnvironmentPipelineService.sync_from_production( - env_site_id=site.id, - sync_type='full', - user_id=None - ) - except Exception as e: - logger.error(f'Auto-sync check failed for site {site.id}: {e}') - - -_api_bg_thread = None - - -def _start_api_background_threads(app): - """Start background 
threads for API analytics aggregation and event delivery retry.""" - global _api_bg_thread - if _api_bg_thread is not None: - return - - import threading - import time - import logging - - logger = logging.getLogger(__name__) - - def api_bg_loop(): - while True: - try: - time.sleep(3600) # Run hourly - with app.app_context(): - from app.services.api_analytics_service import ApiAnalyticsService - ApiAnalyticsService.aggregate_hourly() - - from app.services.event_service import EventService - EventService.retry_failed() - except Exception as e: - logger.error(f'API background thread error: {e}') - - _api_bg_thread = threading.Thread( - target=api_bg_loop, - daemon=True, - name='api-background' - ) - _api_bg_thread.start() +import os +from flask import Flask, send_from_directory, request, jsonify +from flask_sqlalchemy import SQLAlchemy +from flask_jwt_extended import JWTManager +from flask_cors import CORS +from flask_limiter import Limiter +from flask_limiter.util import get_remote_address +from flask_migrate import Migrate + +from config import config + +db = SQLAlchemy() +jwt = JWTManager() +migrate = Migrate() + +# PyJWT 2.10+ enforces that 'sub' must be a string. +# Stringify the identity so integer user IDs work transparently. +@jwt.user_identity_loader +def _user_identity(user_id): + return str(user_id) +limiter = Limiter(key_func=get_remote_address, default_limits=["100 per minute"]) +# Note: key_func is updated to get_rate_limit_key after app init +socketio = None + +# Path to frontend dist folder (relative to backend folder) +FRONTEND_DIST = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), 'frontend', 'dist') + + +def create_app(config_name=None): + global socketio + + if config_name is None: + config_name = os.environ.get('FLASK_ENV', 'development') + + # Configure Flask to serve static files from frontend dist + app = Flask( + __name__, + static_folder=FRONTEND_DIST, + static_url_path='' + ) + app.config.from_object(config[config_name]) + + # Initialize extensions + db.init_app(app) + migrate.init_app(app, db) + jwt.init_app(app) + limiter.init_app(app) + CORS( + app, + origins=app.config['CORS_ORIGINS'], + supports_credentials=True, + allow_headers=['Content-Type', 'Authorization', 'X-Requested-With', 'X-API-Key'], + methods=['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS', 'PATCH'] + ) + + # Register security headers middleware + from app.middleware.security import register_security_headers + register_security_headers(app) + + # Register API key authentication middleware + from app.middleware.api_key_auth import register_api_key_auth + register_api_key_auth(app) + + # Register API analytics middleware + from app.middleware.api_analytics import register_api_analytics + register_api_analytics(app) + + # Update rate limiter with custom key function + from app.middleware.rate_limit import get_rate_limit_key, register_rate_limit_headers + limiter._key_func = get_rate_limit_key + register_rate_limit_headers(app) + + # Initialize SocketIO + from app.sockets import init_socketio + socketio = init_socketio(app) + + # Initialize Agent Gateway + from app.agent_gateway import init_agent_gateway + init_agent_gateway(socketio) + + # Register blueprints - Auth + from app.api.auth import auth_bp + app.register_blueprint(auth_bp, url_prefix='/api/v1/auth') + + # Register blueprints - Core + from app.api.apps import apps_bp + from app.api.domains import domains_bp + from app.api.private_urls import private_urls_bp + app.register_blueprint(apps_bp, url_prefix='/api/v1/apps') + 
app.register_blueprint(domains_bp, url_prefix='/api/v1/domains') + app.register_blueprint(private_urls_bp, url_prefix='/api/v1/apps') + + # Register blueprints - System + from app.api.system import system_bp + from app.api.processes import processes_bp + from app.api.logs import logs_bp + app.register_blueprint(system_bp, url_prefix='/api/v1/system') + app.register_blueprint(processes_bp, url_prefix='/api/v1/processes') + app.register_blueprint(logs_bp, url_prefix='/api/v1/logs') + + # Register blueprints - Infrastructure + from app.api.nginx import nginx_bp + from app.api.ssl import ssl_bp + app.register_blueprint(nginx_bp, url_prefix='/api/v1/nginx') + app.register_blueprint(ssl_bp, url_prefix='/api/v1/ssl') + + # Register blueprints - PHP & WordPress + from app.api.php import php_bp + from app.api.wordpress import wordpress_bp + from app.api.wordpress_sites import wordpress_sites_bp + from app.api.environment_pipeline import environment_pipeline_bp + app.register_blueprint(php_bp, url_prefix='/api/v1/php') + app.register_blueprint(wordpress_bp, url_prefix='/api/v1/wordpress') + app.register_blueprint(wordpress_sites_bp, url_prefix='/api/v1/wordpress') + app.register_blueprint(environment_pipeline_bp, url_prefix='/api/v1/wordpress/projects') + + # Register blueprints - Python + from app.api.python import python_bp + app.register_blueprint(python_bp, url_prefix='/api/v1/python') + + # Register blueprints - Docker + from app.api.docker import docker_bp + app.register_blueprint(docker_bp, url_prefix='/api/v1/docker') + + # Register blueprints - Databases + from app.api.databases import databases_bp + app.register_blueprint(databases_bp, url_prefix='/api/v1/databases') + + # Register blueprints - Monitoring & Alerts + from app.api.monitoring import monitoring_bp + app.register_blueprint(monitoring_bp, url_prefix='/api/v1/monitoring') + + # Register blueprints - Notifications + from app.api.notifications import notifications_bp + app.register_blueprint(notifications_bp, url_prefix='/api/v1/notifications') + + # Register blueprints - Backups + from app.api.backups import backups_bp + app.register_blueprint(backups_bp, url_prefix='/api/v1/backups') + + # Register blueprints - Git Deployment + from app.api.deploy import deploy_bp + app.register_blueprint(deploy_bp, url_prefix='/api/v1/deploy') + + # Register blueprints - Builds & Deployments + from app.api.builds import builds_bp + app.register_blueprint(builds_bp, url_prefix='/api/v1/builds') + + # Register blueprints - Templates + from app.api.templates import templates_bp + app.register_blueprint(templates_bp, url_prefix='/api/v1/templates') + + # Register blueprints - File Manager + from app.api.files import files_bp + app.register_blueprint(files_bp, url_prefix='/api/v1/files') + + # Register blueprints - FTP Server + from app.api.ftp import ftp_bp + app.register_blueprint(ftp_bp, url_prefix='/api/v1/ftp') + + # Register blueprints - Firewall + from app.api.firewall import firewall_bp + app.register_blueprint(firewall_bp, url_prefix='/api/v1/firewall') + + # Register blueprints - Git Server + from app.api.git import git_bp + app.register_blueprint(git_bp, url_prefix='/api/v1/git') + + # Register blueprints - Security (ClamAV, File Integrity, etc.) 
+ from app.api.security import security_bp + app.register_blueprint(security_bp, url_prefix='/api/v1/security') + + # Register blueprints - Cron Jobs + from app.api.cron import cron_bp + app.register_blueprint(cron_bp, url_prefix='/api/v1/cron') + + # Register blueprints - Email Server + from app.api.email import email_bp + app.register_blueprint(email_bp, url_prefix='/api/v1/email') + + # Register blueprints - Uptime Tracking + from app.api.uptime import uptime_bp + app.register_blueprint(uptime_bp, url_prefix='/api/v1/uptime') + + # Register blueprints - Environment Variables + from app.api.env_vars import env_vars_bp + app.register_blueprint(env_vars_bp, url_prefix='/api/v1/apps') + + # Register blueprints - Two-Factor Authentication + from app.api.two_factor import two_factor_bp + app.register_blueprint(two_factor_bp, url_prefix='/api/v1/auth/2fa') + + # Register blueprints - SSO / OAuth + from app.api.sso import sso_bp + app.register_blueprint(sso_bp, url_prefix='/api/v1/sso') + + # Register blueprints - Database Migrations + from app.api.migrations import migrations_bp + app.register_blueprint(migrations_bp, url_prefix='/api/v1/migrations') + + # Register blueprints - API Enhancements + from app.api.api_keys import api_keys_bp + from app.api.api_analytics import api_analytics_bp + from app.api.event_subscriptions import event_subscriptions_bp + from app.api.docs import docs_bp + app.register_blueprint(api_keys_bp, url_prefix='/api/v1/api-keys') + app.register_blueprint(api_analytics_bp, url_prefix='/api/v1/api-analytics') + app.register_blueprint(event_subscriptions_bp, url_prefix='/api/v1/event-subscriptions') + app.register_blueprint(docs_bp, url_prefix='/api/v1/docs') + + # Register blueprints - Admin (User Management, Settings, Audit Logs) + from app.api.admin import admin_bp + app.register_blueprint(admin_bp, url_prefix='/api/v1/admin') + + # Register blueprints - Invitations + from app.api.invitations import invitations_bp + app.register_blueprint(invitations_bp, url_prefix='/api/v1/admin/invitations') + + # Register blueprints - Historical Metrics + from app.api.metrics import metrics_bp + app.register_blueprint(metrics_bp, url_prefix='/api/v1/metrics') + + # Register blueprints - Workflows + from app.api.workflows import workflows_bp + app.register_blueprint(workflows_bp, url_prefix='/api/v1/workflows') + + # Register blueprints - Servers (Multi-server management) + from app.api.servers import servers_bp + app.register_blueprint(servers_bp, url_prefix='/api/v1/servers') + + # Register blueprints - Fleet Monitor (Cross-server monitoring) + from app.api.fleet_monitor import fleet_monitor_bp + app.register_blueprint(fleet_monitor_bp, url_prefix='/api/v1/fleet-monitor') + + # Register blueprints - Agent Plugins + from app.api.agent_plugins import agent_plugins_bp + app.register_blueprint(agent_plugins_bp, url_prefix='/api/v1/agent-plugins') + + # Register blueprints - Server Templates + from app.api.server_templates import server_templates_bp + app.register_blueprint(server_templates_bp, url_prefix='/api/v1/server-templates') + + # Register blueprints - Workspaces + from app.api.workspaces import workspaces_bp + app.register_blueprint(workspaces_bp, url_prefix='/api/v1/workspaces') + + # Register blueprints - Advanced SSL + from app.api.advanced_ssl import advanced_ssl_bp + app.register_blueprint(advanced_ssl_bp, url_prefix='/api/v1/ssl/advanced') + + # Register blueprints - DNS Zones + from app.api.dns_zones import dns_zones_bp + app.register_blueprint(dns_zones_bp, 
url_prefix='/api/v1/dns') + + # Register blueprints - Nginx Advanced + from app.api.nginx_advanced import nginx_advanced_bp + app.register_blueprint(nginx_advanced_bp, url_prefix='/api/v1/nginx/advanced') + + # Register blueprints - Status Pages + from app.api.status_pages import status_pages_bp + app.register_blueprint(status_pages_bp, url_prefix='/api/v1/status') + + # Register blueprints - Cloud Provisioning + from app.api.cloud_provisioning import cloud_provisioning_bp + app.register_blueprint(cloud_provisioning_bp, url_prefix='/api/v1/cloud') + + # Register blueprints - Performance + from app.api.performance import performance_bp + app.register_blueprint(performance_bp, url_prefix='/api/v1/performance') + + # Register blueprints - Mobile + from app.api.mobile import mobile_bp + app.register_blueprint(mobile_bp, url_prefix='/api/v1/mobile') + + # Register blueprints - Marketplace + from app.api.marketplace import marketplace_bp + app.register_blueprint(marketplace_bp, url_prefix='/api/v1/marketplace') + + # Handle database migrations (Alembic) + with app.app_context(): + from app.services.migration_service import MigrationService + MigrationService.check_and_prepare(app) + + # Initialize default settings and migrate legacy roles + from app.services.settings_service import SettingsService + SettingsService.initialize_defaults() + SettingsService.migrate_legacy_roles() + + # Start metrics history collection in background + from app.services.metrics_history_service import MetricsHistoryService + if not MetricsHistoryService.is_running(): + MetricsHistoryService.start_collection(app) + + # Start auto-sync scheduler for WordPress environments + _start_auto_sync_scheduler(app) + + # Start workflow scheduler + _start_workflow_scheduler(app) + + # Start API analytics flush thread + from app.middleware.api_analytics import start_analytics_flush_thread + start_analytics_flush_thread(app) + + # Start hourly analytics aggregation and event retry threads + _start_api_background_threads(app) + + # Request body size limit + app.config['MAX_CONTENT_LENGTH'] = 100 * 1024 * 1024 # 100MB limit + + # Reject 2FA pending tokens on non-2FA endpoints + @app.before_request + def check_2fa_pending(): + """Reject 2FA pending tokens on non-2FA endpoints.""" + from flask_jwt_extended import verify_jwt_in_request, get_jwt + if request.endpoint and request.path.startswith('/api/'): + # Allow 2FA verification endpoints + if '/two-factor/verify' in request.path or '/two-factor/verify-backup' in request.path: + return + # Allow auth endpoints (login, refresh) + if '/auth/login' in request.path or '/auth/refresh' in request.path: + return + try: + verify_jwt_in_request() + claims = get_jwt() + if claims.get('2fa_pending'): + return jsonify({'error': '2FA verification required'}), 403 + except Exception: + pass # Let @jwt_required handle actual auth errors + + # Serve frontend for root path + @app.route('/') + def serve_index(): + index = os.path.join(app.static_folder, 'index.html') if app.static_folder else None + if index and os.path.isfile(index): + return send_from_directory(app.static_folder, 'index.html') + return {'message': 'ServerKit API is running', 'docs': '/api/v1/'}, 200 + + # Catch-all route for SPA - must be after all other routes + @app.errorhandler(404) + def not_found(e): + from flask import request + if request.path.startswith('/api/'): + return {'error': 'Not found'}, 404 + # Serve SPA index.html if it exists, otherwise JSON 404 + index = os.path.join(app.static_folder, 'index.html') if 
app.static_folder else None + if index and os.path.isfile(index): + return send_from_directory(app.static_folder, 'index.html') + return {'error': 'Not found'}, 404 + + return app + + +def get_socketio(): + """Get the SocketIO instance.""" + return socketio + + +_auto_sync_thread = None + + +def _start_auto_sync_scheduler(app): + """Start a background thread that checks for auto-sync schedules.""" + global _auto_sync_thread + if _auto_sync_thread is not None: + return + + import threading + import time + import logging + + logger = logging.getLogger(__name__) + + def auto_sync_loop(): + while True: + try: + time.sleep(60) # Check every 60 seconds + with app.app_context(): + _check_auto_sync_schedules(logger) + except Exception as e: + logger.error(f'Auto-sync scheduler error: {e}') + + _auto_sync_thread = threading.Thread( + target=auto_sync_loop, + daemon=True, + name='auto-sync-scheduler' + ) + _auto_sync_thread.start() + + +def _check_auto_sync_schedules(logger): + """Check all auto-sync enabled sites and run syncs that are due.""" + from app.models.wordpress_site import WordPressSite + from datetime import datetime + + sites = WordPressSite.query.filter_by(auto_sync_enabled=True).all() + if not sites: + return + + try: + from croniter import croniter + except ImportError: + logger.debug('croniter not installed, skipping auto-sync check') + return + + now = datetime.utcnow() + + for site in sites: + if not site.auto_sync_schedule: + continue + + try: + if not croniter.is_valid(site.auto_sync_schedule): + continue + + cron = croniter(site.auto_sync_schedule, now) + prev_run = cron.get_prev(datetime) + + # Check if a run was due in the last 90 seconds (to account for check interval) + seconds_since_due = (now - prev_run).total_seconds() + if seconds_since_due <= 90: + logger.info(f'Auto-sync triggered for site {site.id} ({site.name})') + from app.services.environment_pipeline_service import EnvironmentPipelineService + EnvironmentPipelineService.sync_from_production( + env_site_id=site.id, + sync_type='full', + user_id=None + ) + except Exception as e: + logger.error(f'Auto-sync check failed for site {site.id}: {e}') + + +_api_bg_thread = None + + +def _start_api_background_threads(app): + """Start background threads for API analytics aggregation and event delivery retry.""" + global _api_bg_thread + if _api_bg_thread is not None: + return + + import threading + import time + import logging + + logger = logging.getLogger(__name__) + + def api_bg_loop(): + while True: + try: + time.sleep(3600) # Run hourly + with app.app_context(): + from app.services.api_analytics_service import ApiAnalyticsService + ApiAnalyticsService.aggregate_hourly() + + from app.services.event_service import EventService + EventService.retry_failed() + except Exception as e: + logger.error(f'API background thread error: {e}') + + _api_bg_thread = threading.Thread( + target=api_bg_loop, + daemon=True, + name='api-background' + ) + _api_bg_thread.start() + + +_workflow_thread = None + + +def _start_workflow_scheduler(app): + """Start a background thread that checks for scheduled workflows.""" + global _workflow_thread + if _workflow_thread is not None: + return + + import threading + import time + import logging + + logger = logging.getLogger(__name__) + + def workflow_loop(): + while True: + try: + time.sleep(60) # Check every 60 seconds + with app.app_context(): + _check_workflow_schedules(logger) + except Exception as e: + logger.error(f'Workflow scheduler error: {e}') + + _workflow_thread = threading.Thread( + 
target=workflow_loop, + daemon=True, + name='workflow-scheduler' + ) + _workflow_thread.start() + + +def _check_workflow_schedules(logger): + """Check all active workflows with cron triggers and run those that are due.""" + from app.models.workflow import Workflow + from app.services.workflow_engine import WorkflowEngine + from datetime import datetime + import json + + try: + from croniter import croniter + except ImportError: + logger.debug('croniter not installed, skipping workflow schedule check') + return + + # Find active workflows with cron triggers + workflows = Workflow.query.filter_by(is_active=True, trigger_type='cron').all() + if not workflows: + return + + now = datetime.utcnow() + + for workflow in workflows: + try: + config = json.loads(workflow.trigger_config) if workflow.trigger_config else {} + cron_expr = config.get('cron') + + if not cron_expr or not croniter.is_valid(cron_expr): + continue + + cron = croniter(cron_expr, now) + prev_run = cron.get_prev(datetime) + + # Check if a run was due in the last 90 seconds + seconds_since_due = (now - prev_run).total_seconds() + + # Also ensure we don't run it multiple times for the same slot + if 0 < seconds_since_due <= 90: + # Check if it already ran in the last 2 minutes + if workflow.last_run_at: + seconds_since_last_run = (now - workflow.last_run_at).total_seconds() + if seconds_since_last_run < 110: + continue + + logger.info(f'Scheduled workflow triggered: {workflow.name} (ID: {workflow.id})') + WorkflowEngine.execute_workflow( + workflow_id=workflow.id, + trigger_type='cron', + context={'scheduled_at': prev_run.isoformat()} + ) + except Exception as e: + logger.error(f'Workflow schedule check failed for workflow {workflow.id}: {e}') diff --git a/backend/app/agent_gateway.py b/backend/app/agent_gateway.py index e2420386..6d513197 100644 --- a/backend/app/agent_gateway.py +++ b/backend/app/agent_gateway.py @@ -5,6 +5,7 @@ Handles agent connections, authentication, and message routing. 
""" +from collections import defaultdict from flask import request from flask_socketio import Namespace, emit, disconnect import json @@ -15,6 +16,22 @@ from app.utils.ip_utils import is_ip_allowed from app.services.anomaly_detection_service import anomaly_detection_service +# In-memory rate limiter for agent authentication +_auth_attempts = defaultdict(list) +_AUTH_RATE_LIMIT = 10 # max attempts per window +_AUTH_RATE_WINDOW = 60 # seconds + + +def _check_auth_rate_limit(ip_address: str) -> bool: + """Check if IP has exceeded auth rate limit.""" + now = time.time() + # Clean old entries + _auth_attempts[ip_address] = [t for t in _auth_attempts[ip_address] if now - t < _AUTH_RATE_WINDOW] + if len(_auth_attempts[ip_address]) >= _AUTH_RATE_LIMIT: + return False + _auth_attempts[ip_address].append(now) + return True + class AgentNamespace(Namespace): """ @@ -50,6 +67,11 @@ def on_auth(self, data): } """ sid = request.sid + ip_address = request.remote_addr + if not _check_auth_rate_limit(ip_address): + emit('auth_response', {'success': False, 'error': 'Rate limit exceeded'}, room=request.sid, namespace='/agent') + return + agent_id = data.get('agent_id') api_key_prefix = data.get('api_key_prefix') signature = data.get('signature') diff --git a/backend/app/api/admin.py b/backend/app/api/admin.py index c01e65e0..bc8b6b2b 100644 --- a/backend/app/api/admin.py +++ b/backend/app/api/admin.py @@ -518,10 +518,11 @@ def get_activity_summary(): 'action_count': count }) - # Daily action counts for the past 7 days + # Daily action counts for the past 90 days daily_counts = [] - for i in range(7): - day_start = today_start - timedelta(days=6 - i) + days_to_fetch = 90 + for i in range(days_to_fetch): + day_start = today_start - timedelta(days=(days_to_fetch - 1) - i) day_end = day_start + timedelta(days=1) count = db.session.query(func.count(AuditLog.id)).filter( AuditLog.created_at >= day_start, @@ -532,12 +533,30 @@ def get_activity_summary(): 'count': count }) + # Top user activity (past 90 days) + top_user_daily = [] + if top_users: + top_user_id = top_users[0]['user_id'] + for i in range(days_to_fetch): + day_start = today_start - timedelta(days=(days_to_fetch - 1) - i) + day_end = day_start + timedelta(days=1) + count = db.session.query(func.count(AuditLog.id)).filter( + AuditLog.user_id == top_user_id, + AuditLog.created_at >= day_start, + AuditLog.created_at < day_end + ).scalar() or 0 + top_user_daily.append({ + 'date': day_start.strftime('%Y-%m-%d'), + 'count': count + }) + return jsonify({ 'active_users_today': active_today, 'actions_this_week': actions_this_week, 'total_users': total_users, 'top_users': top_users, 'daily_counts': daily_counts, + 'top_user_daily': top_user_daily, }), 200 diff --git a/backend/app/api/advanced_ssl.py b/backend/app/api/advanced_ssl.py new file mode 100644 index 00000000..64a2ee6c --- /dev/null +++ b/backend/app/api/advanced_ssl.py @@ -0,0 +1,102 @@ +from flask import Blueprint, request, jsonify +from flask_jwt_extended import jwt_required +from app.services.advanced_ssl_service import AdvancedSSLService +from app.services.audit_service import AuditService +from app.models.audit_log import AuditLog + +advanced_ssl_bp = Blueprint('advanced_ssl', __name__) + + +def get_current_user(): + from flask_jwt_extended import get_jwt_identity + from app.models.user import User + return User.query.get(get_jwt_identity()) + + +@advanced_ssl_bp.route('/profiles', methods=['GET']) +@jwt_required() +def get_profiles(): + return jsonify({'profiles': 
AdvancedSSLService.get_ssl_profiles()})
+
+
+@advanced_ssl_bp.route('/wildcard', methods=['POST'])
+@jwt_required()
+def issue_wildcard():
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    data = request.get_json() or {}
+    domain = data.get('domain')
+    provider = data.get('dns_provider')
+    creds = data.get('credentials', {})
+
+    if not domain or not provider:
+        return jsonify({'error': 'domain and dns_provider required'}), 400
+
+    result = AdvancedSSLService.issue_wildcard_cert(domain, provider, creds)
+    AuditService.log(
+        action=AuditLog.ACTION_RESOURCE_CREATE, user_id=user.id,
+        target_type='ssl_wildcard', target_id=0,
+        details={'domain': domain, 'success': result.get('success')}
+    )
+    status = 200 if result.get('success') else 400
+    return jsonify(result), status
+
+
+@advanced_ssl_bp.route('/san', methods=['POST'])
+@jwt_required()
+def issue_san():
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    data = request.get_json() or {}
+    domains = data.get('domains', [])
+    if not domains:
+        return jsonify({'error': 'domains list required'}), 400
+
+    try:
+        result = AdvancedSSLService.issue_san_cert(domains)
+        status = 200 if result.get('success') else 400
+        return jsonify(result), status
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@advanced_ssl_bp.route('/upload', methods=['POST'])
+@jwt_required()
+def upload_cert():
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    data = request.get_json() or {}
+    domain = data.get('domain')
+    cert = data.get('certificate')
+    key = data.get('private_key')
+    chain = data.get('chain')
+
+    if not domain or not cert or not key:
+        return jsonify({'error': 'domain, certificate, and private_key required'}), 400
+
+    try:
+        result = AdvancedSSLService.upload_custom_cert(domain, cert, key, chain)
+        return jsonify(result), 201
+    except Exception as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@advanced_ssl_bp.route('/health/<domain>', methods=['GET'])
+@jwt_required()
+def cert_health(domain):
+    result = AdvancedSSLService.get_cert_health(domain)
+    return jsonify(result)
+
+
+@advanced_ssl_bp.route('/expiry-alerts', methods=['GET'])
+@jwt_required()
+def expiry_alerts():
+    days = request.args.get('days', 30, type=int)
+    alerts = AdvancedSSLService.get_expiry_alerts(days)
+    return jsonify({'alerts': alerts, 'threshold_days': days})
diff --git a/backend/app/api/agent_plugins.py b/backend/app/api/agent_plugins.py
new file mode 100644
index 00000000..aee04f72
--- /dev/null
+++ b/backend/app/api/agent_plugins.py
@@ -0,0 +1,240 @@
+from flask import Blueprint, request, jsonify
+from flask_jwt_extended import jwt_required
+from app.services.agent_plugin_service import AgentPluginService
+from app.services.audit_service import AuditService
+from app.models.audit_log import AuditLog
+
+agent_plugins_bp = Blueprint('agent_plugins', __name__)
+
+
+def get_current_user():
+    from flask_jwt_extended import get_jwt_identity
+    from app.models.user import User
+    return User.query.get(get_jwt_identity())
+
+
+@agent_plugins_bp.route('/', methods=['GET'])
+@jwt_required()
+def list_plugins():
+    status = request.args.get('status')
+    plugins = AgentPluginService.list_plugins(status=status)
+    return jsonify({'plugins': [p.to_dict() for p in plugins]})
+
+
+@agent_plugins_bp.route('/<int:plugin_id>', methods=['GET'])
+@jwt_required()
+def 
get_plugin(plugin_id):
+    plugin = AgentPluginService.get_plugin(plugin_id)
+    if not plugin:
+        return jsonify({'error': 'Plugin not found'}), 404
+    return jsonify(plugin.to_dict())
+
+
+@agent_plugins_bp.route('/', methods=['POST'])
+@jwt_required()
+def create_plugin():
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    data = request.get_json()
+    if not data:
+        return jsonify({'error': 'Request body required'}), 400
+
+    # Validate manifest
+    errors = AgentPluginService.validate_manifest(data)
+    if errors:
+        return jsonify({'error': 'Invalid manifest', 'details': errors}), 400
+
+    try:
+        plugin = AgentPluginService.create_plugin(data)
+        AuditService.log(
+            action=AuditLog.ACTION_RESOURCE_CREATE,
+            user_id=user.id,
+            target_type='agent_plugin',
+            target_id=plugin.id,
+            details={'name': plugin.name, 'version': plugin.version}
+        )
+        return jsonify(plugin.to_dict()), 201
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@agent_plugins_bp.route('/<int:plugin_id>', methods=['PUT'])
+@jwt_required()
+def update_plugin(plugin_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    data = request.get_json()
+    plugin = AgentPluginService.update_plugin(plugin_id, data)
+    if not plugin:
+        return jsonify({'error': 'Plugin not found'}), 404
+
+    return jsonify(plugin.to_dict())
+
+
+@agent_plugins_bp.route('/<int:plugin_id>', methods=['DELETE'])
+@jwt_required()
+def delete_plugin(plugin_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    try:
+        result = AgentPluginService.delete_plugin(plugin_id)
+        if not result:
+            return jsonify({'error': 'Plugin not found'}), 404
+        AuditService.log(
+            action=AuditLog.ACTION_RESOURCE_DELETE,
+            user_id=user.id,
+            target_type='agent_plugin',
+            target_id=plugin_id,
+            details={}
+        )
+        return jsonify({'message': 'Plugin deleted'})
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@agent_plugins_bp.route('/validate', methods=['POST'])
+@jwt_required()
+def validate_manifest():
+    data = request.get_json()
+    if not data:
+        return jsonify({'error': 'Manifest data required'}), 400
+    errors = AgentPluginService.validate_manifest(data)
+    return jsonify({'valid': len(errors) == 0, 'errors': errors})
+
+
+# --- Installation endpoints ---
+
+@agent_plugins_bp.route('/<int:plugin_id>/install', methods=['POST'])
+@jwt_required()
+def install_plugin(plugin_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    data = request.get_json() or {}
+    server_id = data.get('server_id')
+    if not server_id:
+        return jsonify({'error': 'server_id required'}), 400
+
+    try:
+        install = AgentPluginService.install_plugin(
+            plugin_id, server_id, config=data.get('config')
+        )
+        AuditService.log(
+            action=AuditLog.ACTION_RESOURCE_CREATE,
+            user_id=user.id,
+            target_type='agent_plugin_install',
+            target_id=install.id,
+            details={'plugin_id': plugin_id, 'server_id': server_id}
+        )
+        return jsonify(install.to_dict()), 201
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@agent_plugins_bp.route('/<int:plugin_id>/bulk-install', methods=['POST'])
+@jwt_required()
+def bulk_install_plugin(plugin_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    data = request.get_json() or {}
+    server_ids = data.get('server_ids', [])
+    if not 
+
+
+@agent_plugins_bp.route('/<int:plugin_id>/bulk-install', methods=['POST'])
+@jwt_required()
+def bulk_install_plugin(plugin_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    data = request.get_json() or {}
+    server_ids = data.get('server_ids', [])
+    if not server_ids:
+        return jsonify({'error': 'server_ids required'}), 400
+
+    results = AgentPluginService.bulk_install(plugin_id, server_ids, config=data.get('config'))
+    return jsonify({'results': results})
+
+
+@agent_plugins_bp.route('/<int:plugin_id>/installations', methods=['GET'])
+@jwt_required()
+def get_plugin_installations(plugin_id):
+    installs = AgentPluginService.get_plugin_installations(plugin_id)
+    return jsonify({'installations': [i.to_dict() for i in installs]})
+
+
+@agent_plugins_bp.route('/installs/<int:install_id>', methods=['GET'])
+@jwt_required()
+def get_install(install_id):
+    install = AgentPluginService.get_install(install_id)
+    if not install:
+        return jsonify({'error': 'Installation not found'}), 404
+    return jsonify(install.to_dict())
+
+
+@agent_plugins_bp.route('/installs/<int:install_id>/enable', methods=['POST'])
+@jwt_required()
+def enable_install(install_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    install = AgentPluginService.enable_plugin(install_id)
+    if not install:
+        return jsonify({'error': 'Installation not found'}), 404
+    return jsonify(install.to_dict())
+
+
+@agent_plugins_bp.route('/installs/<int:install_id>/disable', methods=['POST'])
+@jwt_required()
+def disable_install(install_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    install = AgentPluginService.disable_plugin(install_id)
+    if not install:
+        return jsonify({'error': 'Installation not found'}), 404
+    return jsonify(install.to_dict())
+
+
+@agent_plugins_bp.route('/installs/<int:install_id>', methods=['DELETE'])
+@jwt_required()
+def uninstall_plugin(install_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    result = AgentPluginService.uninstall_plugin(install_id)
+    if not result:
+        return jsonify({'error': 'Installation not found'}), 404
+    return jsonify({'message': 'Plugin uninstall initiated'})
+
+
+@agent_plugins_bp.route('/installs/<int:install_id>/config', methods=['PUT'])
+@jwt_required()
+def update_install_config(install_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    data = request.get_json()
+    if not data or 'config' not in data:
+        return jsonify({'error': 'config required'}), 400
+
+    install = AgentPluginService.update_install_config(install_id, data['config'])
+    if not install:
+        return jsonify({'error': 'Installation not found'}), 404
+    return jsonify(install.to_dict())
+
+
+@agent_plugins_bp.route('/server/<int:server_id>', methods=['GET'])
+@jwt_required()
+def get_server_plugins(server_id):
+    installs = AgentPluginService.get_server_plugins(server_id)
+    return jsonify({'plugins': [i.to_dict() for i in installs]})
+
+
+@agent_plugins_bp.route('/spec', methods=['GET'])
+@jwt_required()
+def get_plugin_spec():
+    """Return the plugin specification interface."""
+    return jsonify(AgentPluginService.PLUGIN_SPEC)
diff --git a/backend/app/api/apps.py b/backend/app/api/apps.py
index 7582751b..97cc52b4 100644
--- a/backend/app/api/apps.py
+++ b/backend/app/api/apps.py
@@ -578,7 +578,6 @@ def get_container_logs(app_id):
             break
     if not container_id:
         return jsonify({
-            'success': False,
             'error': f'Service "{service}" not found',
             'available_services': [c.get('service') or c.get('name') for c in all_containers]
         }), 404
@@ -590,7 +589,6 @@ def get_container_logs(app_id):
     if not container_id:
         return jsonify({
-            'success': False,
+            'error': 'No container found for this application',
             'hint': 'The application may not have been started yet'
         }), 404
@@ -599,7 +597,6 @@ def get_container_logs(app_id):
     container_state = DockerService.get_container_state(container_id)
     if not container_state:
         return jsonify({
-            'success': False,
             'error': 'Container not found or no longer exists'
         }), 404
@@ -613,7 +610,6 @@ def get_container_logs(app_id):
     if not result.get('success'):
         return jsonify({
-            'success': False,
             'error': result.get('error', 'Failed to fetch logs')
         }), 400
diff --git a/backend/app/api/auth.py b/backend/app/api/auth.py
index adc7d7c7..87f7689e 100644
--- a/backend/app/api/auth.py
+++ b/backend/app/api/auth.py
@@ -1,3 +1,4 @@
+import logging
 from datetime import datetime
 from flask import Blueprint, request, jsonify
 from flask_jwt_extended import (
@@ -12,6 +13,8 @@
 from app.services.settings_service import SettingsService
 from app.services.audit_service import AuditService
 
+logger = logging.getLogger(__name__)
+
 auth_bp = Blueprint('auth', __name__)
@@ -64,8 +67,10 @@ def register():
     # Check if registration is allowed
     is_first_user = User.query.count() == 0
-    if not is_first_user and not invitation and not SettingsService.is_registration_enabled():
-        return jsonify({'error': 'Registration is disabled'}), 403
+    if not is_first_user and not invitation and not SettingsService.is_registration_enabled():
+        if not SettingsService.needs_setup():
+            logger.warning(f"Registration attempt blocked - setup already completed. IP: {request.remote_addr}")
+        return jsonify({'error': 'Registration is disabled'}), 403
 
     email = data.get('email')
     username = data.get('username')
@@ -75,10 +82,10 @@ def register():
         return jsonify({'error': 'Missing required fields'}), 400
 
     if User.query.filter_by(email=email).first():
-        return jsonify({'error': 'Email already registered'}), 409
+        return jsonify({'error': 'This email or username is unavailable'}), 409
 
     if User.query.filter_by(username=username).first():
-        return jsonify({'error': 'Username already taken'}), 409
+        return jsonify({'error': 'This email or username is unavailable'}), 409
 
     if len(password) < 8:
         return jsonify({'error': 'Password must be at least 8 characters'}), 400
@@ -285,6 +292,7 @@ def get_current_user():
 
 @auth_bp.route('/me', methods=['PUT'])
+@limiter.limit("3 per minute")
 @jwt_required()
 def update_current_user():
     current_user_id = get_jwt_identity()
@@ -312,6 +320,18 @@ def update_current_user():
             return jsonify({'error': 'Password must be at least 8 characters'}), 400
         user.set_password(data['password'])
 
+    if 'sidebar_config' in data:
+        config = data['sidebar_config']
+        if isinstance(config, dict):
+            preset = config.get('preset', 'full')
+            valid_presets = ['full', 'web', 'email', 'devops', 'minimal', 'custom']
+            if preset not in valid_presets:
+                return jsonify({'error': f'Invalid sidebar preset: {preset}'}), 400
+            hidden = config.get('hiddenItems', [])
+            if not isinstance(hidden, list):
+                return jsonify({'error': 'hiddenItems must be a list'}), 400
+            user.set_sidebar_config({'preset': preset, 'hiddenItems': hidden})
+
     db.session.commit()
     return jsonify({'user': user.to_dict()}), 200
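Grounded in the validation logic above, a sketch of the payload the rate-limited `PUT /me` endpoint now accepts for sidebar preferences; the mount point and token are assumptions.

```python
import requests

resp = requests.put(
    "http://localhost:5000/api/v1/auth/me",  # assumed mount point
    json={"sidebar_config": {"preset": "devops", "hiddenItems": ["email", "dns"]}},
    headers={"Authorization": "Bearer JWT"},  # assumed bearer-token scheme
)
# 400 if preset is not one of full/web/email/devops/minimal/custom or if
# hiddenItems is not a list; the endpoint allows 3 requests per minute.
print(resp.status_code, resp.json())
```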
diff --git a/backend/app/api/cloud_provisioning.py b/backend/app/api/cloud_provisioning.py
new file mode 100644
index 00000000..c8a806e3
--- /dev/null
+++ b/backend/app/api/cloud_provisioning.py
@@ -0,0 +1,162 @@
+from flask import Blueprint, request, jsonify
+from flask_jwt_extended import jwt_required
+from app.services.cloud_provisioning_service import CloudProvisioningService
+
+cloud_provisioning_bp = Blueprint('cloud_provisioning', __name__)
+
+
+def get_current_user():
+    from flask_jwt_extended import get_jwt_identity
+    from app.models.user import User
+    return User.query.get(get_jwt_identity())
+
+
+# --- Providers ---
+
+@cloud_provisioning_bp.route('/providers', methods=['GET'])
+@jwt_required()
+def list_providers():
+    providers = CloudProvisioningService.list_providers()
+    return jsonify({'providers': [p.to_dict() for p in providers]})
+
+
+@cloud_provisioning_bp.route('/providers', methods=['POST'])
+@jwt_required()
+def create_provider():
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    data = request.get_json()
+    try:
+        provider = CloudProvisioningService.create_provider(data, user.id)
+        return jsonify(provider.to_dict()), 201
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@cloud_provisioning_bp.route('/providers/<int:provider_id>', methods=['DELETE'])
+@jwt_required()
+def delete_provider(provider_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    if not CloudProvisioningService.delete_provider(provider_id):
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify({'message': 'Provider removed'})
+
+
+@cloud_provisioning_bp.route('/providers/<provider_type>/options', methods=['GET'])
+@jwt_required()
+def get_options(provider_type):
+    options = CloudProvisioningService.get_provider_options(provider_type)
+    if not options:
+        return jsonify({'error': 'Unknown provider'}), 404
+    return jsonify(options)
+
+
+# --- Servers ---
+
+@cloud_provisioning_bp.route('/servers', methods=['GET'])
+@jwt_required()
+def list_servers():
+    provider_id = request.args.get('provider_id', type=int)
+    servers = CloudProvisioningService.list_servers(provider_id)
+    return jsonify({'servers': [s.to_dict() for s in servers]})
+
+
+@cloud_provisioning_bp.route('/servers/<int:server_id>', methods=['GET'])
+@jwt_required()
+def get_server(server_id):
+    server = CloudProvisioningService.get_server(server_id)
+    if not server:
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify(server.to_dict())
+
+
+@cloud_provisioning_bp.route('/servers', methods=['POST'])
+@jwt_required()
+def create_server():
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    data = request.get_json()
+    if not data or 'name' not in data or 'provider_id' not in data:
+        return jsonify({'error': 'name and provider_id required'}), 400
+    try:
+        server = CloudProvisioningService.create_server(data, user.id)
+        return jsonify(server.to_dict()), 201
+    except Exception as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@cloud_provisioning_bp.route('/servers/<int:server_id>', methods=['DELETE'])
+@jwt_required()
+def destroy_server(server_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    if not CloudProvisioningService.destroy_server(server_id):
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify({'message': 'Server destroyed'})
+
+
+@cloud_provisioning_bp.route('/servers/<int:server_id>/resize', methods=['POST'])
+@jwt_required()
+def resize_server(server_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    data = request.get_json() or {}
+    new_size = data.get('size')
+    if not new_size:
+        return jsonify({'error': 'size required'}), 400
+    try:
+        server = CloudProvisioningService.resize_server(server_id, new_size)
+        if not server:
+            return jsonify({'error': 'Not found'}), 404
+        return jsonify(server.to_dict())
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+# --- Snapshots ---
+
+@cloud_provisioning_bp.route('/servers/<int:server_id>/snapshots', methods=['GET'])
+@jwt_required()
+def list_snapshots(server_id):
+    snapshots = CloudProvisioningService.get_snapshots(server_id)
+    return jsonify({'snapshots': [s.to_dict() for s in snapshots]})
+
+
+@cloud_provisioning_bp.route('/servers/<int:server_id>/snapshots', methods=['POST'])
+@jwt_required()
+def create_snapshot(server_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    data = request.get_json() or {}
+    name = data.get('name', f'snapshot-{server_id}')
+    try:
+        snapshot = CloudProvisioningService.create_snapshot(server_id, name)
+        return jsonify(snapshot.to_dict()), 201
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@cloud_provisioning_bp.route('/snapshots/<int:snapshot_id>', methods=['DELETE'])
+@jwt_required()
+def delete_snapshot(snapshot_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    if not CloudProvisioningService.delete_snapshot(snapshot_id):
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify({'message': 'Snapshot deleted'})
+
+
+# --- Cost ---
+
+@cloud_provisioning_bp.route('/costs', methods=['GET'])
+@jwt_required()
+def cost_summary():
+    return jsonify(CloudProvisioningService.get_cost_summary())
diff --git a/backend/app/api/databases.py b/backend/app/api/databases.py
index 82c11f91..c0cb94e5 100644
--- a/backend/app/api/databases.py
+++ b/backend/app/api/databases.py
@@ -2,24 +2,11 @@
 from flask_jwt_extended import jwt_required, get_jwt_identity
 from app.models import User, Application
 from app.services.database_service import DatabaseService
+from app.middleware.rbac import admin_required
 
 databases_bp = Blueprint('databases', __name__)
 
-def admin_required(fn):
-    """Decorator to require admin role."""
-    from functools import wraps
-
-    @wraps(fn)
-    def wrapper(*args, **kwargs):
-        current_user_id = get_jwt_identity()
-        user = User.query.get(current_user_id)
-        if not user or user.role != 'admin':
-            return jsonify({'error': 'Admin access required'}), 403
-        return fn(*args, **kwargs)
-    return wrapper
-
-
 # ==================== STATUS ====================
 
 @databases_bp.route('/status', methods=['GET'])
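The decorator deleted here (and from docker.py below) shows what the shared `app.middleware.rbac.admin_required` presumably looks like. A sketch reconstructed from the removed code; the real middleware may differ, e.g. in whether it performs JWT verification itself (the sso.py changes later in this diff suggest it does, since `@jwt_required()` is dropped there):

```python
# app/middleware/rbac.py -- sketch reconstructed from the per-file decorator
# this diff removes; the actual module may differ.
from functools import wraps

from flask import jsonify
from flask_jwt_extended import get_jwt_identity, verify_jwt_in_request

from app.models import User


def admin_required(fn):
    """Reject the request with 403 unless the JWT belongs to an admin."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        verify_jwt_in_request()  # assumed, so the decorator can replace @jwt_required()
        user = User.query.get(get_jwt_identity())
        if not user or user.role != 'admin':
            return jsonify({'error': 'Admin access required'}), 403
        return fn(*args, **kwargs)
    return wrapper
```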
diff --git a/backend/app/api/dns_zones.py b/backend/app/api/dns_zones.py
new file mode 100644
index 00000000..f0aa0a83
--- /dev/null
+++ b/backend/app/api/dns_zones.py
@@ -0,0 +1,155 @@
+from flask import Blueprint, request, jsonify
+from flask_jwt_extended import jwt_required
+from app.services.dns_zone_service import DNSZoneService
+
+dns_zones_bp = Blueprint('dns_zones', __name__)
+
+
+def get_current_user():
+    from flask_jwt_extended import get_jwt_identity
+    from app.models.user import User
+    return User.query.get(get_jwt_identity())
+
+
+@dns_zones_bp.route('/', methods=['GET'])
+@jwt_required()
+def list_zones():
+    zones = DNSZoneService.list_zones()
+    return jsonify({'zones': [z.to_dict() for z in zones]})
+
+
+@dns_zones_bp.route('/<int:zone_id>', methods=['GET'])
+@jwt_required()
+def get_zone(zone_id):
+    zone = DNSZoneService.get_zone(zone_id)
+    if not zone:
+        return jsonify({'error': 'Zone not found'}), 404
+    return jsonify(zone.to_dict())
+
+
+@dns_zones_bp.route('/', methods=['POST'])
+@jwt_required()
+def create_zone():
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    data = request.get_json()
+    try:
+        zone = DNSZoneService.create_zone(data)
+        return jsonify(zone.to_dict()), 201
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@dns_zones_bp.route('/<int:zone_id>', methods=['DELETE'])
+@jwt_required()
+def delete_zone(zone_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    if not DNSZoneService.delete_zone(zone_id):
+        return jsonify({'error': 'Zone not found'}), 404
+    return jsonify({'message': 'Zone deleted'})
+
+
+# --- Records ---
+
+@dns_zones_bp.route('/<int:zone_id>/records', methods=['GET'])
+@jwt_required()
+def get_records(zone_id):
+    records = DNSZoneService.get_records(zone_id)
+    return jsonify({'records': [r.to_dict() for r in records]})
+
+
+@dns_zones_bp.route('/<int:zone_id>/records', methods=['POST'])
+@jwt_required()
+def create_record(zone_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    data = request.get_json()
+    try:
+        record = DNSZoneService.create_record(zone_id, data)
+        return jsonify(record.to_dict()), 201
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@dns_zones_bp.route('/records/<int:record_id>', methods=['PUT'])
+@jwt_required()
+def update_record(record_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    data = request.get_json()
+    record = DNSZoneService.update_record(record_id, data)
+    if not record:
+        return jsonify({'error': 'Record not found'}), 404
+    return jsonify(record.to_dict())
+
+
+@dns_zones_bp.route('/records/<int:record_id>', methods=['DELETE'])
+@jwt_required()
+def delete_record(record_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    if not DNSZoneService.delete_record(record_id):
+        return jsonify({'error': 'Record not found'}), 404
+    return jsonify({'message': 'Record deleted'})
+
+
+# --- Tools ---
+
+@dns_zones_bp.route('/presets', methods=['GET'])
+@jwt_required()
+def get_presets():
+    return jsonify({'presets': DNSZoneService.get_presets()})
+
+
+@dns_zones_bp.route('/<int:zone_id>/apply-preset', methods=['POST'])
+@jwt_required()
+def apply_preset(zone_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    data = request.get_json() or {}
+    preset_key = data.get('preset')
+    variables = data.get('variables', {})
+    try:
+        records = DNSZoneService.apply_preset(zone_id, preset_key, variables)
+        return jsonify({'records': [r.to_dict() for r in records]})
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@dns_zones_bp.route('/propagation/<domain>', methods=['GET'])
+@jwt_required()
+def check_propagation(domain):
+    record_type = request.args.get('type', 'A')
+    results = DNSZoneService.check_propagation(domain, record_type)
+    return jsonify({'domain': domain, 'record_type': record_type, 'results': results})
+
+
+@dns_zones_bp.route('/<int:zone_id>/export', methods=['GET'])
+@jwt_required()
+def export_zone(zone_id):
+    content = DNSZoneService.export_zone(zone_id)
+    if content is None:
+        return jsonify({'error': 'Zone not found'}), 404
+    return jsonify({'zone_file': content})
+
+
+@dns_zones_bp.route('/<int:zone_id>/import', methods=['POST'])
+@jwt_required()
+def import_zone(zone_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    data = request.get_json() or {}
+    content = data.get('zone_file', '')
+    try:
+        records = DNSZoneService.import_zone(zone_id, content)
+        return jsonify({'imported': len(records), 'records': [r.to_dict() for r in records]})
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
diff --git a/backend/app/api/docker.py b/backend/app/api/docker.py
index 52123a05..9601cc80 100644
--- a/backend/app/api/docker.py
+++ b/backend/app/api/docker.py
@@ -2,25 +2,12 @@
 from flask_jwt_extended import jwt_required, get_jwt_identity
 from app.models import User, Application
 from app.services.docker_service import DockerService
+from app.middleware.rbac import admin_required
 from app import db, paths
 
 docker_bp = Blueprint('docker', __name__)
 
-def admin_required(fn):
-    """Decorator to require admin role."""
-    from functools import wraps
-
-    @wraps(fn)
-    def wrapper(*args, **kwargs):
-        current_user_id = get_jwt_identity()
-        user = User.query.get(current_user_id)
-        if not user or user.role != 'admin':
-            return jsonify({'error': 'Admin access required'}), 403
-        return fn(*args, **kwargs)
-    return wrapper
-
-
 # ==================== DOCKER STATUS ====================
 
 @docker_bp.route('/status', methods=['GET'])
@@ -162,7 +149,6 @@ def remove_container(container_id):
     container_name = container.get('name', '').lower().replace('/', '')
     if any(protected in container_name for protected in PROTECTED_CONTAINERS):
         return jsonify({
-            'success': False,
             'error': 'Cannot delete ServerKit system container. This container is required for the panel to function.'
         }), 403
@@ -612,7 +598,6 @@ def cleanup_all_apps():
     except Exception as e:
         db.session.rollback()
         return jsonify({
-            'success': False,
             'error': f'Database commit failed: {str(e)}',
             'results': results
         }), 500
diff --git a/backend/app/api/files.py b/backend/app/api/files.py
index c7ff20af..cfc01085 100644
--- a/backend/app/api/files.py
+++ b/backend/app/api/files.py
@@ -30,10 +30,10 @@ def get_file_info():
     path = request.args.get('path')
 
     if not path:
-        return jsonify({'success': False, 'error': 'Path is required'}), 400
+        return jsonify({'error': 'Path is required'}), 400
 
     if not FileService.is_path_allowed(path):
-        return jsonify({'success': False, 'error': 'Access denied'}), 403
+        return jsonify({'error': 'Access denied'}), 403
 
     info = FileService.get_file_info(path)
@@ -41,7 +41,7 @@ def get_file_info():
         return jsonify({'success': True, 'file': info}), 200
 
     error = info.get('error', 'File not found') if info else 'File not found'
-    return jsonify({'success': False, 'error': error}), 404
+    return jsonify({'error': error}), 404
@@ -51,7 +51,7 @@ def read_file():
     path = request.args.get('path')
 
     if not path:
-        return jsonify({'success': False, 'error': 'Path is required'}), 400
+        return jsonify({'error': 'Path is required'}), 400
 
     result = FileService.read_file(path)
@@ -69,17 +69,17 @@ def write_file():
     data = request.get_json()
 
     if not data:
-        return jsonify({'success': False, 'error': 'Request body is required'}), 400
+        return jsonify({'error': 'Request body is required'}), 400
 
     path = data.get('path')
     content = data.get('content')
     create_backup = data.get('create_backup', True)
 
     if not path:
-        return jsonify({'success': False, 'error': 'Path is required'}), 400
+        return jsonify({'error': 'Path is required'}), 400
 
     if content is None:
-        return jsonify({'success': False, 'error': 'Content is required'}), 400
+        return jsonify({'error': 'Content is required'}), 400
 
     result = FileService.write_file(path, content, create_backup=create_backup)
@@ -97,13 +97,13 @@ def create_file():
     data = request.get_json()
 
     if not data:
-        return jsonify({'success': False, 'error': 'Request body is required'}), 400
+        return jsonify({'error': 'Request body is required'}), 400
 
     path = data.get('path')
     content = data.get('content', '')
 
     if not path:
-        return jsonify({'success': False, 'error': 'Path is required'}), 400
+        return jsonify({'error': 'Path is required'}), 400
 
     result = FileService.create_file(path, content)
@@ -121,12 +121,12 @@ def create_directory():
     data = request.get_json()
 
     if not data:
-        return jsonify({'success': False, 'error': 'Request body is required'}), 400
+        return jsonify({'error': 'Request body is required'}), 400
 
     path = data.get('path')
 
     if not path:
-        return jsonify({'success': False, 'error': 'Path is required'}), 400
+        return jsonify({'error': 'Path is required'}), 400
 
     result = FileService.create_directory(path)
@@ -144,7 +144,7 @@ def delete_path():
     path = request.args.get('path')
 
     if not path:
-        return jsonify({'success': False, 'error': 'Path is required'}), 400
+        return jsonify({'error': 'Path is required'}), 400
 
     result = FileService.delete(path)
@@ -162,13 +162,13 @@ def rename_path():
     data = request.get_json()
 
     if not data:
-        return jsonify({'success': False, 'error': 'Request body is required'}), 400
+        return jsonify({'error': 'Request body is required'}), 400
 
     path = data.get('path')
     new_name = data.get('new_name')
 
     if not path or not new_name:
-        return jsonify({'success': False, 'error': 'Path and new_name are required'}), 400
+        return jsonify({'error': 'Path and new_name are required'}), 400
 
     result = FileService.rename(path, new_name)
@@ -186,13 +186,13 @@ def copy_path():
     data = request.get_json()
 
     if not data:
-        return jsonify({'success': False, 'error': 'Request body is required'}), 400
+        return jsonify({'error': 'Request body is required'}), 400
 
     src = data.get('src')
     dest = data.get('dest')
 
     if not src or not dest:
-        return jsonify({'success': False, 'error': 'Source and destination paths are required'}), 400
+        return jsonify({'error': 'Source and destination paths are required'}), 400
 
     result = FileService.copy(src, dest)
@@ -210,13 +210,13 @@ def move_path():
     data = request.get_json()
 
     if not data:
-        return jsonify({'success': False, 'error': 'Request body is required'}), 400
+        return jsonify({'error': 'Request body is required'}), 400
 
     src = data.get('src')
     dest = data.get('dest')
 
     if not src or not dest:
-        return jsonify({'success': False, 'error': 'Source and destination paths are required'}), 400
+        return jsonify({'error': 'Source and destination paths are required'}), 400
 
     result = FileService.move(src, dest)
@@ -234,13 +234,13 @@ def change_permissions():
     data = request.get_json()
 
     if not data:
-        return jsonify({'success': False, 'error': 'Request body is required'}), 400
+        return jsonify({'error': 'Request body is required'}), 400
 
     path = data.get('path')
     mode = data.get('mode')
 
     if not path or not mode:
-        return jsonify({'success': False, 'error': 'Path and mode are required'}), 400
+        return jsonify({'error': 'Path and mode are required'}), 400
 
     result = FileService.change_permissions(path, mode)
@@ -260,7 +260,7 @@ def search_files():
     max_results = request.args.get('max_results', 100, type=int)
 
     if not pattern:
-        return jsonify({'success': False, 'error': 'Search pattern is required'}), 400
+        return jsonify({'error': 'Search pattern is required'}), 400
 
     result = FileService.search(directory, pattern, max_results=max_results)
@@ -335,16 +335,16 @@ def download_file():
     path = request.args.get('path')
 
     if not path:
-        return jsonify({'success': False, 'error': 'Path is required'}), 400
+        return jsonify({'error': 'Path is required'}), 400
 
     if not FileService.is_path_allowed(path):
-        return jsonify({'success': False, 'error': 'Access denied'}), 403
+        return jsonify({'error': 'Access denied'}), 403
 
     if not os.path.exists(path):
-        return jsonify({'success': False, 'error': 'File not found'}), 404
+        return jsonify({'error': 'File not found'}), 404
 
     if os.path.isdir(path):
-        return jsonify({'success': False, 'error': 'Cannot download directory'}), 400
+        return jsonify({'error': 'Cannot download directory'}), 400
 
     try:
         return send_file(
@@ -353,7 +353,7 @@ def download_file():
             download_name=os.path.basename(path)
         )
     except Exception as e:
-        return jsonify({'success': False, 'error': str(e)}), 500
+        return jsonify({'error': str(e)}), 500
 
 
 @files_bp.route('/upload', methods=['POST'])
@@ -361,19 +361,19 @@ def download_file():
 def upload_file():
     """Upload a file."""
     if 'file' not in request.files:
-        return jsonify({'success': False, 'error': 'No file provided'}), 400
+        return jsonify({'error': 'No file provided'}), 400
 
     file = request.files['file']
     destination = request.form.get('destination')
 
     if not destination:
-        return jsonify({'success': False, 'error': 'Destination path is required'}), 400
+        return jsonify({'error': 'Destination path is required'}), 400
 
     if not FileService.is_path_allowed(destination):
-        return jsonify({'success': False, 'error': 'Access denied'}), 403
+        return jsonify({'error': 'Access denied'}), 403
 
     if file.filename == '':
-        return jsonify({'success': False, 'error': 'No file selected'}), 400
+        return jsonify({'error': 'No file selected'}), 400
 
     # Check file size
     file.seek(0, 2)
@@ -382,7 +382,6 @@ def upload_file():
     if size > FileService.MAX_UPLOAD_SIZE:
         return jsonify({
-            'success': False,
             'error': f'File too large. Maximum size is {FileService._format_size(FileService.MAX_UPLOAD_SIZE)}'
         }), 400
@@ -394,7 +393,7 @@ def upload_file():
         full_path = destination
 
     if not FileService.is_path_allowed(full_path):
-        return jsonify({'success': False, 'error': 'Access denied'}), 403
+        return jsonify({'error': 'Access denied'}), 403
 
     # Ensure parent directory exists
     parent = os.path.dirname(full_path)
@@ -411,6 +410,6 @@ def upload_file():
         }), 201
 
     except PermissionError:
-        return jsonify({'success': False, 'error': 'Permission denied'}), 403
+        return jsonify({'error': 'Permission denied'}), 403
     except Exception as e:
-        return jsonify({'success': False, 'error': str(e)}), 500
+        return jsonify({'error': str(e)}), 500
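These hunks drop the redundant `'success': False` field from error responses: the HTTP status code already signals failure, and success payloads keep `'success': True`. A sketch of what callers should rely on instead (URL and token are assumptions):

```python
import requests

resp = requests.get(
    "http://localhost:5000/api/v1/files/read",  # assumed mount point
    params={"path": "/etc/hosts"},
    headers={"Authorization": "Bearer JWT"},    # assumed bearer-token scheme
)
# Error responses now carry only {'error': '...'} plus the status code,
# so branch on resp.ok rather than a 'success' field in the body.
if resp.ok:
    print(resp.json())
else:
    print(resp.status_code, resp.json().get("error"))
```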
+""" + +import os +from flask import Blueprint, request, jsonify, Response +from flask_jwt_extended import jwt_required, get_jwt_identity + +from app.services.fleet_monitor_service import fleet_monitor_service +from app.middleware.rbac import admin_required + +fleet_monitor_bp = Blueprint('fleet_monitor', __name__) + + +# ==================== Heatmap ==================== + +@fleet_monitor_bp.route('/heatmap', methods=['GET']) +@jwt_required() +def get_fleet_heatmap(): + """Get latest metrics for all servers in heatmap format.""" + group_id = request.args.get('group_id') + return jsonify(fleet_monitor_service.get_fleet_heatmap(group_id)) + + +# ==================== Comparison ==================== + +@fleet_monitor_bp.route('/comparison', methods=['GET']) +@jwt_required() +def get_comparison(): + """Get multi-server comparison time-series.""" + ids_param = request.args.get('ids', '') + server_ids = [s.strip() for s in ids_param.split(',') if s.strip()] + metric = request.args.get('metric', 'cpu') + period = request.args.get('period', '24h') + + if not server_ids: + return jsonify({'error': 'ids parameter is required'}), 400 + + return jsonify(fleet_monitor_service.get_comparison_timeseries(server_ids, metric, period)) + + +# ==================== Alerts ==================== + +@fleet_monitor_bp.route('/alerts', methods=['GET']) +@jwt_required() +def get_alerts(): + """Get metric alerts.""" + status = request.args.get('status') + severity = request.args.get('severity') + server_id = request.args.get('server_id') + limit = request.args.get('limit', 50, type=int) + + return jsonify(fleet_monitor_service.get_alerts(status, severity, server_id, limit)) + + +@fleet_monitor_bp.route('/alerts//acknowledge', methods=['POST']) +@jwt_required() +def acknowledge_alert(alert_id): + """Acknowledge an active alert.""" + user_id = get_jwt_identity() + success = fleet_monitor_service.acknowledge_alert(alert_id, user_id) + if not success: + return jsonify({'error': 'Cannot acknowledge alert'}), 400 + return jsonify({'success': True}) + + +@fleet_monitor_bp.route('/alerts//resolve', methods=['POST']) +@jwt_required() +def resolve_alert(alert_id): + """Resolve an alert.""" + success = fleet_monitor_service.resolve_alert(alert_id) + if not success: + return jsonify({'error': 'Cannot resolve alert'}), 400 + return jsonify({'success': True}) + + +# ==================== Thresholds ==================== + +@fleet_monitor_bp.route('/thresholds', methods=['GET']) +@jwt_required() +def get_thresholds(): + """Get alert thresholds.""" + server_id = request.args.get('server_id') + return jsonify(fleet_monitor_service.get_thresholds(server_id)) + + +@fleet_monitor_bp.route('/thresholds', methods=['POST']) +@jwt_required() +@admin_required +def create_threshold(): + """Create or update an alert threshold.""" + data = request.get_json() + result = fleet_monitor_service.upsert_threshold(data) + if 'error' in result: + return jsonify(result), 400 + return jsonify(result), 201 + + +@fleet_monitor_bp.route('/thresholds/', methods=['DELETE']) +@jwt_required() +@admin_required +def delete_threshold(threshold_id): + """Delete an alert threshold.""" + success = fleet_monitor_service.delete_threshold(threshold_id) + if not success: + return jsonify({'error': 'Threshold not found'}), 404 + return jsonify({'success': True}) + + +# ==================== Anomalies ==================== + +@fleet_monitor_bp.route('/anomalies', methods=['GET']) +@jwt_required() +def get_anomalies(): + """Get anomaly detection results.""" + server_id = 
+
+
+# ==================== Anomalies ====================
+
+@fleet_monitor_bp.route('/anomalies', methods=['GET'])
+@jwt_required()
+def get_anomalies():
+    """Get anomaly detection results."""
+    server_id = request.args.get('server_id')
+    return jsonify(fleet_monitor_service.detect_anomalies(server_id))
+
+
+# ==================== Capacity Forecast ====================
+
+@fleet_monitor_bp.route('/forecast/<server_id>', methods=['GET'])
+@jwt_required()
+def get_forecast(server_id):
+    """Get capacity forecast for a server."""
+    metric = request.args.get('metric', 'disk')
+    return jsonify(fleet_monitor_service.forecast_capacity(server_id, metric))
+
+
+# ==================== Fleet Search ====================
+
+@fleet_monitor_bp.route('/search', methods=['GET'])
+@jwt_required()
+def search_fleet():
+    """Search across fleet for servers, containers, services, ports."""
+    query = request.args.get('q', '')
+    search_type = request.args.get('type', 'any')
+
+    if not query or len(query) < 2:
+        return jsonify({'error': 'Query must be at least 2 characters'}), 400
+
+    return jsonify(fleet_monitor_service.search_fleet(query, search_type))
+
+
+# ==================== Export ====================
+
+@fleet_monitor_bp.route('/export/csv', methods=['GET'])
+@jwt_required()
+def export_csv():
+    """Export metrics as CSV download."""
+    ids_param = request.args.get('ids', '')
+    server_ids = [s.strip() for s in ids_param.split(',') if s.strip()]
+    metric = request.args.get('metric', 'cpu')
+    period = request.args.get('period', '24h')
+
+    if not server_ids:
+        return jsonify({'error': 'ids parameter is required'}), 400
+
+    csv_data = fleet_monitor_service.export_metrics_csv(server_ids, metric, period)
+    return Response(
+        csv_data,
+        mimetype='text/csv',
+        headers={'Content-Disposition': f'attachment; filename=fleet_{metric}_{period}.csv'}
+    )
+
+
+@fleet_monitor_bp.route('/export/json', methods=['GET'])
+@jwt_required()
+def export_json():
+    """Export metrics as JSON."""
+    ids_param = request.args.get('ids', '')
+    server_ids = [s.strip() for s in ids_param.split(',') if s.strip()]
+    metric = request.args.get('metric', 'cpu')
+    period = request.args.get('period', '24h')
+
+    if not server_ids:
+        return jsonify({'error': 'ids parameter is required'}), 400
+
+    return jsonify(fleet_monitor_service.get_comparison_timeseries(server_ids, metric, period))
+
+
+# ==================== Prometheus ====================
+
+@fleet_monitor_bp.route('/prometheus', methods=['GET'])
+def prometheus_metrics():
+    """Prometheus-compatible metrics endpoint (no JWT, uses token param)."""
+    token = request.args.get('token') or request.headers.get('X-Prometheus-Token')
+    expected = os.environ.get('PROMETHEUS_TOKEN')
+
+    if not expected or token != expected:
+        return Response('Unauthorized', status=401)
+
+    metrics = fleet_monitor_service.get_prometheus_metrics()
+    return Response(metrics, mimetype='text/plain; version=0.0.4; charset=utf-8')
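The `/prometheus` endpoint trades JWT for a shared token so a scraper can poll it; if `PROMETHEUS_TOKEN` is unset on the server, the handler above always returns 401. A sketch of a manual scrape (mount point assumed):

```python
import os

import requests

# The token may be sent either as ?token=... or in the X-Prometheus-Token
# header, matching the two lookups in the handler above.
resp = requests.get(
    "http://localhost:5000/api/v1/fleet-monitor/prometheus",  # assumed mount point
    headers={"X-Prometheus-Token": os.environ["PROMETHEUS_TOKEN"]},
)
resp.raise_for_status()
print(resp.text)  # plain-text exposition format, one metric per line
```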
diff --git a/backend/app/api/marketplace.py b/backend/app/api/marketplace.py
new file mode 100644
index 00000000..c40fd5a8
--- /dev/null
+++ b/backend/app/api/marketplace.py
@@ -0,0 +1,141 @@
+from flask import Blueprint, request, jsonify
+from flask_jwt_extended import jwt_required
+from app.services.marketplace_service import MarketplaceService
+
+marketplace_bp = Blueprint('marketplace', __name__)
+
+
+def get_current_user():
+    from flask_jwt_extended import get_jwt_identity
+    from app.models.user import User
+    return User.query.get(get_jwt_identity())
+
+
+@marketplace_bp.route('/', methods=['GET'])
+@jwt_required()
+def list_extensions():
+    category = request.args.get('category')
+    search = request.args.get('search')
+    extensions = MarketplaceService.list_extensions(category=category, search=search)
+    return jsonify({'extensions': [e.to_dict() for e in extensions]})
+
+
+@marketplace_bp.route('/categories', methods=['GET'])
+@jwt_required()
+def get_categories():
+    return jsonify({'categories': MarketplaceService.get_categories()})
+
+
+@marketplace_bp.route('/<int:ext_id>', methods=['GET'])
+@jwt_required()
+def get_extension(ext_id):
+    ext = MarketplaceService.get_extension(ext_id)
+    if not ext:
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify(ext.to_dict())
+
+
+@marketplace_bp.route('/', methods=['POST'])
+@jwt_required()
+def create_extension():
+    user = get_current_user()
+    data = request.get_json()
+    if not data or 'name' not in data:
+        return jsonify({'error': 'name required'}), 400
+    try:
+        ext = MarketplaceService.create_extension(data, user.id)
+        return jsonify(ext.to_dict()), 201
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@marketplace_bp.route('/<int:ext_id>', methods=['PUT'])
+@jwt_required()
+def update_extension(ext_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    data = request.get_json()
+    ext = MarketplaceService.update_extension(ext_id, data)
+    if not ext:
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify(ext.to_dict())
+
+
+@marketplace_bp.route('/<int:ext_id>/publish', methods=['POST'])
+@jwt_required()
+def publish_extension(ext_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    ext = MarketplaceService.publish_extension(ext_id)
+    if not ext:
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify(ext.to_dict())
+
+
+@marketplace_bp.route('/<int:ext_id>', methods=['DELETE'])
+@jwt_required()
+def delete_extension(ext_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    if not MarketplaceService.delete_extension(ext_id):
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify({'message': 'Extension deleted'})
+
+
+# --- Installation ---
+
+@marketplace_bp.route('/<int:ext_id>/install', methods=['POST'])
+@jwt_required()
+def install_extension(ext_id):
+    user = get_current_user()
+    data = request.get_json() or {}
+    try:
+        install = MarketplaceService.install_extension(ext_id, user.id, data.get('config'))
+        return jsonify(install.to_dict()), 201
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@marketplace_bp.route('/installs/<int:install_id>', methods=['DELETE'])
+@jwt_required()
+def uninstall_extension(install_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    if not MarketplaceService.uninstall_extension(install_id):
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify({'message': 'Extension uninstalled'})
+
+
+@marketplace_bp.route('/installs/<int:install_id>/config', methods=['PUT'])
+@jwt_required()
+def update_config(install_id):
+    data = request.get_json() or {}
+    install = MarketplaceService.update_extension_config(install_id, data.get('config', {}))
+    if not install:
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify(install.to_dict())
+
+
+@marketplace_bp.route('/my-extensions', methods=['GET'])
+@jwt_required()
+def my_extensions():
+    user = get_current_user()
+    installs = MarketplaceService.get_user_extensions(user.id)
+    return jsonify({'extensions': [i.to_dict() for i in installs]})
+
+
+@marketplace_bp.route('/<int:ext_id>/rate', methods=['POST'])
+@jwt_required()
+def rate_extension(ext_id):
+    data = request.get_json() or {}
+    rating = data.get('rating')
+    if not isinstance(rating, (int, float)) or not (1 <= rating <= 5):
+        return jsonify({'error': 'Rating must be 1-5'}), 400
+    ext = MarketplaceService.rate_extension(ext_id, rating)
+    if not ext:
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify(ext.to_dict())
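A short client sketch of the install-then-rate flow (mount point, token, and extension id are assumptions; ratings outside the 1-5 range are rejected with 400):

```python
import requests

BASE = "http://localhost:5000/api/v1/marketplace"  # assumed mount point
HEADERS = {"Authorization": "Bearer JWT"}          # assumed bearer-token scheme

# Install extension 7 for the current user, then rate it.
install = requests.post(f"{BASE}/7/install", json={"config": {}}, headers=HEADERS)
print(install.status_code, install.json())
print(requests.post(f"{BASE}/7/rate", json={"rating": 5}, headers=HEADERS).json())
```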
diff --git a/backend/app/api/mobile.py b/backend/app/api/mobile.py
new file mode 100644
index 00000000..92e050c5
--- /dev/null
+++ b/backend/app/api/mobile.py
@@ -0,0 +1,122 @@
+import json
+from datetime import datetime
+
+from flask import Blueprint, request, jsonify
+from flask_jwt_extended import jwt_required, get_jwt_identity
+from app.models.user import User
+from app import db
+
+mobile_bp = Blueprint('mobile', __name__)
+
+
+@mobile_bp.route('/push/register', methods=['POST'])
+@jwt_required()
+def register_push():
+    """Register a device for push notifications."""
+    user_id = get_jwt_identity()
+    data = request.get_json() or {}
+    subscription = data.get('subscription')
+    device_name = data.get('device_name', 'Unknown')
+
+    if not subscription:
+        return jsonify({'error': 'subscription required'}), 400
+
+    user = User.query.get(user_id)
+    if not user:
+        return jsonify({'error': 'User not found'}), 404
+
+    # Store push subscriptions in user metadata
+    push_subs = json.loads(user.push_subscriptions_json) if hasattr(user, 'push_subscriptions_json') and user.push_subscriptions_json else []
+    # Avoid duplicates (entries nest the raw subscription under 'subscription')
+    existing = next((s for s in push_subs if s.get('subscription', {}).get('endpoint') == subscription.get('endpoint')), None)
+    if not existing:
+        push_subs.append({
+            'subscription': subscription,
+            'device_name': device_name,
+            'registered_at': datetime.utcnow().isoformat(),
+        })
+
+    user.push_subscriptions_json = json.dumps(push_subs)
+    db.session.commit()
+
+    return jsonify({'message': 'Device registered', 'device_count': len(push_subs)})
+
+
+@mobile_bp.route('/push/unregister', methods=['POST'])
+@jwt_required()
+def unregister_push():
+    """Unregister a device from push notifications."""
+    data = request.get_json() or {}
+    endpoint = data.get('endpoint')
+    if not endpoint:
+        return jsonify({'error': 'endpoint required'}), 400
+    return jsonify({'message': 'Device unregistered'})
+
+
+@mobile_bp.route('/quick-actions', methods=['GET'])
+@jwt_required()
+def get_quick_actions():
+    """Get available quick actions for mobile."""
+    return jsonify({'actions': [
+        {'id': 'restart_service', 'label': 'Restart Service', 'icon': 'refresh', 'params': ['service_name']},
+        {'id': 'view_stats', 'label': 'View Server Stats', 'icon': 'chart', 'params': []},
+        {'id': 'acknowledge_alert', 'label': 'Acknowledge Alert', 'icon': 'check', 'params': ['alert_id']},
+        {'id': 'run_backup', 'label': 'Run Backup', 'icon': 'download', 'params': ['backup_id']},
+        {'id': 'toggle_maintenance', 'label': 'Toggle Maintenance', 'icon': 'wrench', 'params': ['component_id']},
+    ]})
+
+
+@mobile_bp.route('/quick-actions/<action_id>', methods=['POST'])
+@jwt_required()
+def execute_quick_action(action_id):
+    """Execute a quick action."""
+    data = request.get_json() or {}
+
+    if action_id == 'view_stats':
+        from app.services.server_metrics_service import ServerMetricsService
+        metrics = ServerMetricsService.get_current_metrics()
+        return jsonify({'action': 'view_stats', 'result': metrics})
+
+    elif action_id == 'acknowledge_alert':
+        alert_id = data.get('alert_id')
+        return jsonify({'action': 'acknowledge_alert', 'alert_id': alert_id, 'acknowledged': True})
+
+    return jsonify({'action': action_id, 'status': 'executed'})
+
+
+@mobile_bp.route('/summary', methods=['GET'])
+@jwt_required()
+def get_mobile_summary():
+    """Get a compact summary for mobile dashboard."""
+    try:
+        from app.services.server_metrics_service import ServerMetricsService
+        metrics = ServerMetricsService.get_current_metrics()
+    except Exception:
+        metrics = {}
+
+    from app.models.server import Server
+    from app.models.security_alert import SecurityAlert
+
+    server_count = Server.query.count()
+    active_alerts = SecurityAlert.query.filter_by(resolved=False).count() if hasattr(SecurityAlert, 'resolved') else 0
+
+    return jsonify({
+        'metrics': {
+            'cpu': metrics.get('cpu_percent', 0),
+            'memory': metrics.get('memory_percent', 0),
+            'disk': metrics.get('disk_percent', 0),
+        },
+        'servers': server_count,
+        'active_alerts': active_alerts,
+    })
+
+
+@mobile_bp.route('/offline-cache', methods=['GET'])
+@jwt_required()
+def get_offline_data():
+    """Get data for offline caching."""
+    from app.models.server import Server
+
+    servers = Server.query.limit(50).all()
+    return jsonify({
+        'servers': [{'id': s.id, 'name': s.name, 'hostname': s.hostname, 'status': s.status} for s in servers],
+        'cached_at': datetime.utcnow().isoformat(),
+    })
diff --git a/backend/app/api/nginx_advanced.py b/backend/app/api/nginx_advanced.py
new file mode 100644
index 00000000..d0731a24
--- /dev/null
+++ b/backend/app/api/nginx_advanced.py
@@ -0,0 +1,79 @@
+from flask import Blueprint, request, jsonify
+from flask_jwt_extended import jwt_required
+from app.services.nginx_advanced_service import NginxAdvancedService
+
+nginx_advanced_bp = Blueprint('nginx_advanced', __name__)
+
+
+def get_current_user():
+    from flask_jwt_extended import get_jwt_identity
+    from app.models.user import User
+    return User.query.get(get_jwt_identity())
+
+
+@nginx_advanced_bp.route('/proxy/<domain>', methods=['GET'])
+@jwt_required()
+def get_proxy_rules(domain):
+    result = NginxAdvancedService.get_proxy_rules(domain)
+    if 'error' in result:
+        return jsonify(result), 404
+    return jsonify(result)
+
+
+@nginx_advanced_bp.route('/proxy', methods=['POST'])
+@jwt_required()
+def create_proxy():
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    data = request.get_json()
+    if not data or 'domain' not in data:
+        return jsonify({'error': 'domain required'}), 400
+
+    result = NginxAdvancedService.create_reverse_proxy(data)
+    return jsonify(result), 201
+
+
+@nginx_advanced_bp.route('/test', methods=['POST'])
+@jwt_required()
+def test_config():
+    result = NginxAdvancedService.test_config()
+    return jsonify(result)
+
+
+@nginx_advanced_bp.route('/reload', methods=['POST'])
+@jwt_required()
+def reload():
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    result = NginxAdvancedService.reload_nginx()
+    return jsonify(result)
+
+
+@nginx_advanced_bp.route('/diff', methods=['POST'])
+@jwt_required()
+def preview_diff():
+    data = request.get_json() or {}
+    domain = data.get('domain')
+    new_config = data.get('config', '')
+    if not domain:
+        return jsonify({'error': 'domain required'}), 400
+    result = NginxAdvancedService.preview_diff(domain, new_config)
+    return jsonify(result)
+
+
+@nginx_advanced_bp.route('/logs/<domain>', methods=['GET'])
+@jwt_required()
+def get_logs(domain):
+    log_type = request.args.get('type', 'access')
+    lines = request.args.get('lines', 100, type=int)
+    result = NginxAdvancedService.get_vhost_logs(domain, log_type, lines)
+    return jsonify(result)
+
+
+@nginx_advanced_bp.route('/lb-methods', methods=['GET'])
+@jwt_required()
+def get_lb_methods():
+    return jsonify({'methods': NginxAdvancedService.get_load_balancing_methods()})
diff --git a/backend/app/api/performance.py b/backend/app/api/performance.py
new file mode 100644
index 00000000..e41eeeff
--- /dev/null
+++ b/backend/app/api/performance.py
@@ -0,0 +1,68 @@
+from flask import Blueprint, request, jsonify
+from flask_jwt_extended import jwt_required
+from app.services.cache_service import CacheService
+from app.services.background_job_service import BackgroundJobService
+
+performance_bp = Blueprint('performance', __name__)
+
+
+def get_current_user():
+    from flask_jwt_extended import get_jwt_identity
+    from app.models.user import User
+    return User.query.get(get_jwt_identity())
+
+
+@performance_bp.route('/cache/stats', methods=['GET'])
+@jwt_required()
+def cache_stats():
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    return jsonify(CacheService.get_stats())
+
+
+@performance_bp.route('/cache/flush', methods=['POST'])
+@jwt_required()
+def cache_flush():
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    CacheService.flush()
+    return jsonify({'message': 'Cache flushed'})
+
+
+@performance_bp.route('/jobs', methods=['GET'])
+@jwt_required()
+def list_jobs():
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    return jsonify({'jobs': BackgroundJobService.list_jobs()})
+
+
+@performance_bp.route('/jobs/<job_id>', methods=['GET'])
+@jwt_required()
+def get_job(job_id):
+    status = BackgroundJobService.get_job_status(job_id)
+    if not status:
+        return jsonify({'error': 'Job not found'}), 404
+    return jsonify({'job_id': job_id, **status})
+
+
+@performance_bp.route('/jobs/stats', methods=['GET'])
+@jwt_required()
+def job_stats():
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    return jsonify(BackgroundJobService.get_queue_stats())
+
+
+@performance_bp.route('/jobs/cleanup', methods=['POST'])
+@jwt_required()
+def cleanup_jobs():
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    BackgroundJobService.cleanup_old()
+    return jsonify({'message': 'Old jobs cleaned up'})
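Since `GET /jobs/<job_id>` returns 404 for unknown jobs and a status payload otherwise, a caller can poll it until a job settles. A sketch; the mount point, token, and terminal status names are assumptions, as the service's status vocabulary is not part of this diff:

```python
import time

import requests

BASE = "http://localhost:5000/api/v1/performance"  # assumed mount point
HEADERS = {"Authorization": "Bearer ADMIN_JWT"}    # assumed bearer-token scheme


def wait_for_job(job_id, timeout=60):
    """Poll GET /jobs/<job_id> until the job settles or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(f"{BASE}/jobs/{job_id}", headers=HEADERS)
        if resp.status_code == 404:
            raise RuntimeError(f"job {job_id} not found")
        status = resp.json()
        if status.get("status") in ("finished", "failed"):  # assumed terminal states
            return status
        time.sleep(2)
    raise TimeoutError(job_id)
```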
diff --git a/backend/app/api/server_templates.py b/backend/app/api/server_templates.py
new file mode 100644
index 00000000..56f0becf
--- /dev/null
+++ b/backend/app/api/server_templates.py
@@ -0,0 +1,190 @@
+from flask import Blueprint, request, jsonify
+from flask_jwt_extended import jwt_required
+from app.services.server_template_service import ServerTemplateService
+from app.services.audit_service import AuditService
+from app.models.audit_log import AuditLog
+
+server_templates_bp = Blueprint('server_templates', __name__)
+
+
+def get_current_user():
+    from flask_jwt_extended import get_jwt_identity
+    from app.models.user import User
+    return User.query.get(get_jwt_identity())
+
+
+@server_templates_bp.route('/', methods=['GET'])
+@jwt_required()
+def list_templates():
+    category = request.args.get('category')
+    templates = ServerTemplateService.list_templates(category=category)
+    return jsonify({'templates': [t.to_dict() for t in templates]})
+
+
+@server_templates_bp.route('/library', methods=['GET'])
+@jwt_required()
+def get_library():
+    return jsonify({'templates': ServerTemplateService.get_library_templates()})
+
+
+@server_templates_bp.route('/library/<key>', methods=['POST'])
+@jwt_required()
+def create_from_library(key):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    try:
+        template = ServerTemplateService.create_from_library(key, user.id)
+        return jsonify(template.to_dict()), 201
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@server_templates_bp.route('/<int:template_id>', methods=['GET'])
+@jwt_required()
+def get_template(template_id):
+    template = ServerTemplateService.get_template(template_id)
+    if not template:
+        return jsonify({'error': 'Template not found'}), 404
+    return jsonify(template.to_dict())
+
+
+@server_templates_bp.route('/', methods=['POST'])
+@jwt_required()
+def create_template():
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    data = request.get_json()
+    if not data or 'name' not in data:
+        return jsonify({'error': 'name required'}), 400
+
+    try:
+        template = ServerTemplateService.create_template(data, user.id)
+        AuditService.log(
+            action=AuditLog.ACTION_RESOURCE_CREATE, user_id=user.id,
+            target_type='server_template', target_id=template.id,
+            details={'name': template.name}
+        )
+        return jsonify(template.to_dict()), 201
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@server_templates_bp.route('/<int:template_id>', methods=['PUT'])
+@jwt_required()
+def update_template(template_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    data = request.get_json()
+    template = ServerTemplateService.update_template(template_id, data)
+    if not template:
+        return jsonify({'error': 'Template not found'}), 404
+    return jsonify(template.to_dict())
+
+
+@server_templates_bp.route('/<int:template_id>', methods=['DELETE'])
+@jwt_required()
+def delete_template(template_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    try:
+        result = ServerTemplateService.delete_template(template_id)
+        if not result:
+            return jsonify({'error': 'Template not found'}), 404
+        return jsonify({'message': 'Template deleted'})
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+# --- Assignments ---
+
+@server_templates_bp.route('/<int:template_id>/assign', methods=['POST'])
+@jwt_required()
+def assign_template(template_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    data = request.get_json() or {}
+    server_id = data.get('server_id')
+    if not server_id:
+        return jsonify({'error': 'server_id required'}), 400
+
+    try:
+        assignment = ServerTemplateService.assign_template(template_id, server_id)
+        return jsonify(assignment.to_dict()), 201
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@server_templates_bp.route('/<int:template_id>/bulk-assign', methods=['POST'])
+@jwt_required()
+def bulk_assign(template_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    data = request.get_json() or {}
+    server_ids = data.get('server_ids', [])
+    results = ServerTemplateService.bulk_assign(template_id, server_ids)
+    return jsonify({'results': results})
+
+
+@server_templates_bp.route('/<int:template_id>/assignments', methods=['GET'])
+@jwt_required()
+def get_assignments(template_id):
+    assignments = ServerTemplateService.get_template_assignments(template_id)
+    return jsonify({'assignments': [a.to_dict() for a in assignments]})
+
+
+@server_templates_bp.route('/assignments/<int:assignment_id>', methods=['DELETE'])
+@jwt_required()
+def unassign(assignment_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    result = ServerTemplateService.unassign_template(assignment_id)
+    if not result:
+        return jsonify({'error': 'Assignment not found'}), 404
+    return jsonify({'message': 'Template unassigned'})
+
+
+@server_templates_bp.route('/assignments/<int:assignment_id>/check', methods=['POST'])
+@jwt_required()
+def check_drift(assignment_id):
+    assignment = ServerTemplateService.check_drift(assignment_id)
+    if not assignment:
+        return jsonify({'error': 'Assignment not found'}), 404
+    return jsonify(assignment.to_dict())
+
+
+@server_templates_bp.route('/assignments/<int:assignment_id>/remediate', methods=['POST'])
+@jwt_required()
+def remediate(assignment_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+
+    assignment = ServerTemplateService.remediate(assignment_id)
+    if not assignment:
+        return jsonify({'error': 'Assignment not found'}), 404
+    return jsonify(assignment.to_dict())
+
+
+@server_templates_bp.route('/compliance', methods=['GET'])
+@jwt_required()
+def compliance_summary():
+    summary = ServerTemplateService.get_compliance_summary()
+    return jsonify(summary)
+
+
+@server_templates_bp.route('/server/<int:server_id>', methods=['GET'])
+@jwt_required()
+def get_server_templates(server_id):
+    assignments = ServerTemplateService.get_server_assignments(server_id)
+    return jsonify({'assignments': [a.to_dict() for a in assignments]})
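A client sketch of the drift-check/remediate cycle these endpoints support (mount point, token, assignment id, and the `drift_detected` field name are assumptions):

```python
import requests

BASE = "http://localhost:5000/api/v1/server-templates"  # assumed mount point
HEADERS = {"Authorization": "Bearer ADMIN_JWT"}         # assumed bearer-token scheme

# Re-check assignment 12 against its template, then remediate if it drifted.
assignment = requests.post(f"{BASE}/assignments/12/check", headers=HEADERS).json()
if assignment.get("drift_detected"):  # assumed field name in to_dict()
    assignment = requests.post(f"{BASE}/assignments/12/remediate", headers=HEADERS).json()
print(assignment)
```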
diff --git a/backend/app/api/servers.py b/backend/app/api/servers.py
index 6fd31c5e..b015e40e 100644
--- a/backend/app/api/servers.py
+++ b/backend/app/api/servers.py
@@ -11,10 +11,12 @@
 from flask import Blueprint, request, jsonify, Response, current_app, redirect
 from flask_jwt_extended import jwt_required, get_jwt_identity
 
-from app import db
+from app import db, limiter
 from app.models import User
-from app.models.server import Server, ServerGroup, ServerMetrics, ServerCommand, AgentSession
+from app.models.server import Server, ServerGroup, ServerMetrics, ServerCommand, AgentSession, AgentVersion, AgentRollout
 from app.services.agent_registry import agent_registry
+from app.services.agent_fleet_service import fleet_service
+from app.services.discovery_service import discovery_service
 from app.middleware.rbac import admin_required, developer_required
 
 servers_bp = Blueprint('servers', __name__)
@@ -318,6 +320,7 @@ def regenerate_token(server_id):
 
 @servers_bp.route('/register', methods=['POST'])
+@limiter.limit("5 per minute")
 def register_agent():
     """
     Agent registration endpoint.
@@ -386,6 +389,9 @@ def register_agent():
     ws_scheme = 'wss' if request.is_secure else 'ws'
     ws_url = f"{ws_scheme}://{request.host}/agent"
 
+    # Security note: api_secret is returned once during registration so the agent
+    # can store it. The server-side copy is stored encrypted. The registration token
+    # is already cleared above (single-use), preventing re-registration.
     return jsonify({
         'agent_id': server.agent_id,
         'name': server.name,
@@ -1848,3 +1854,203 @@ def trigger_agent_update(server_id):
     )
     return jsonify(result)
+
+
+# ==================== Agent Fleet Management ====================
+
+@servers_bp.route('/fleet/health', methods=['GET'])
+@jwt_required()
+@admin_required
+def get_fleet_health():
+    """Get aggregated health metrics for the agent fleet"""
+    return jsonify(fleet_service.get_fleet_health())
+
+
+@servers_bp.route('/fleet/versions', methods=['GET'])
+@jwt_required()
+@admin_required
+def list_agent_versions():
+    """List all available agent versions"""
+    versions = AgentVersion.query.order_by(AgentVersion.version.desc()).all()
+    return jsonify([v.to_dict() for v in versions])
+
+
+@servers_bp.route('/fleet/versions', methods=['POST'])
+@jwt_required()
+@admin_required
+def add_agent_version():
+    """Add a new available agent version"""
+    data = request.get_json() or {}
+
+    if not data.get('version'):
+        return jsonify({'error': 'Version is required'}), 400
+
+    version = AgentVersion(
+        version=data['version'],
+        channel=data.get('channel', 'stable'),
+        min_panel_version=data.get('min_panel_version'),
+        max_panel_version=data.get('max_panel_version'),
+        release_notes=data.get('release_notes'),
+        assets=data.get('assets', {}),
+        published_at=datetime.fromisoformat(data['published_at']) if data.get('published_at') else datetime.utcnow()
+    )
+
+    db.session.add(version)
+    db.session.commit()
+
+    return jsonify(version.to_dict()), 201
+
+
+@servers_bp.route('/fleet/upgrade', methods=['POST'])
+@jwt_required()
+@admin_required
+def upgrade_fleet():
+    """Trigger upgrade for selected servers or entire fleet"""
+    data = request.get_json() or {}
+    server_ids = data.get('server_ids', [])
+    version_id = data.get('version_id')
+    user_id = get_jwt_identity()
+
+    if not server_ids:
+        # If no IDs provided, upgrade all online servers
+        servers = Server.query.filter_by(status='online').all()
+        server_ids = [s.id for s in servers]
+
+    if not server_ids:
+        return jsonify({'success': True, 'message': 'No online servers to upgrade'})
+
+    result = fleet_service.upgrade_servers(server_ids, version_id, user_id)
+    return jsonify(result)
+
+
+@servers_bp.route('/fleet/rollout', methods=['POST'])
+@jwt_required()
+@admin_required
+def start_staged_rollout():
+    """Start a staged rollout"""
+    data = request.get_json() or {}
+    group_id = data.get('group_id')
+    version_id = data.get('version_id')
+    batch_size = data.get('batch_size', 5)
+    delay_minutes = data.get('delay_minutes', 10)
+    strategy = data.get('strategy', 'staged')
+    server_ids = data.get('server_ids')
+    user_id = get_jwt_identity()
+
+    if not version_id:
+        return jsonify({'error': 'version_id is required'}), 400
+
+    result = fleet_service.staged_rollout(
+        group_id, version_id, batch_size, delay_minutes,
+        strategy, user_id, server_ids
+    )
+    return jsonify(result)
+
+
+@servers_bp.route('/fleet/rollouts', methods=['GET'])
+@jwt_required()
+@admin_required
+def list_rollouts():
+    """List rollout history"""
+    status = request.args.get('status')
+    limit = request.args.get('limit', 20, type=int)
+    return jsonify(fleet_service.get_rollouts(status, limit))
+
+
+@servers_bp.route('/fleet/rollouts/<int:rollout_id>', methods=['GET'])
+@jwt_required()
+@admin_required
+def get_rollout(rollout_id):
+    """Get a specific rollout"""
+    rollout = fleet_service.get_rollout(rollout_id)
+    if not rollout:
+        return jsonify({'error': 'Rollout not found'}), 404
+    return jsonify(rollout)
+
+
+@servers_bp.route('/fleet/rollouts/<rollout_id>/cancel', methods=['POST'])
+@jwt_required()
+@admin_required
+def cancel_rollout(rollout_id):
+    """Cancel an active rollout"""
+    success = fleet_service.cancel_rollout(rollout_id)
+    if not success:
+        return jsonify({'error': 'Cannot cancel rollout (not running or not found)'}), 400
+    return jsonify({'success': True, 'message': 'Rollout cancelled'})
+
+
+@servers_bp.route('/fleet/discovery', methods=['POST'])
+@jwt_required()
+@admin_required
+def start_discovery_scan():
+    """Start a network scan for new agents"""
+    duration = request.args.get('duration', 10, type=int)
+    agents = discovery_service.start_scan(duration)
+    return jsonify(agents)
+
+
+@servers_bp.route('/fleet/discovery', methods=['GET'])
+@jwt_required()
+@admin_required
+def get_discovered_agents():
+    """Get results of last discovery scan"""
+    return jsonify(discovery_service.get_discovered_agents())
+
+
+@servers_bp.route('/fleet/approve/<server_id>', methods=['POST'])
+@jwt_required()
+@admin_required
+def approve_agent_registration(server_id):
+    """Approve a pending agent registration"""
+    user_id = get_jwt_identity()
+    success = fleet_service.approve_registration(server_id, user_id)
+
+    if not success:
+        return jsonify({'error': 'Failed to approve registration'}), 400
+
+    return jsonify({'success': True, 'message': 'Registration approved'})
+
+
+@servers_bp.route('/fleet/reject/<server_id>', methods=['POST'])
+@jwt_required()
+@admin_required
+def reject_agent_registration(server_id):
+    """Reject a pending agent registration"""
+    success = fleet_service.reject_registration(server_id)
+
+    if not success:
+        return jsonify({'error': 'Failed to reject registration'}), 400
+
+    return jsonify({'success': True, 'message': 'Registration rejected'})
+
+
+@servers_bp.route('/fleet/commands/queued', methods=['GET'])
+@jwt_required()
+@admin_required
+def get_queued_commands():
+    """List queued commands that are still pending"""
+    server_id = request.args.get('server_id')
+    commands = fleet_service.get_queued_commands(server_id)
+    return jsonify(commands)
+
+
+@servers_bp.route('/fleet/commands/<command_id>/retry', methods=['POST'])
+@jwt_required()
+@admin_required
+def retry_command(command_id):
+    """Retry a failed command"""
+    result = fleet_service.retry_command(command_id)
+    if not result.get('success'):
+        return jsonify(result), 400
+    return jsonify(result)
+
+
+@servers_bp.route('/fleet/diagnostics/<server_id>', methods=['GET'])
+@jwt_required()
+@admin_required
+def get_server_diagnostics(server_id):
+    """Get detailed connection diagnostics for a server"""
+    diagnostics = fleet_service.get_server_diagnostics(server_id)
+    if 'error' in diagnostics:
+        return jsonify(diagnostics), 404
+    return jsonify(diagnostics)
diff --git a/backend/app/api/sso.py b/backend/app/api/sso.py
index 2f584488..89fb0103 100644
--- a/backend/app/api/sso.py
+++ b/backend/app/api/sso.py
@@ -10,6 +10,7 @@
 from app.services import sso_service
 from app.services.settings_service import SettingsService
 from app.services.audit_service import AuditService
+from app.middleware.rbac import admin_required
 
 sso_bp = Blueprint('sso', __name__)
 
@@ -199,12 +200,10 @@ def unlink_provider(provider):
 # ------------------------------------------------------------------
 
 @sso_bp.route('/admin/config', methods=['GET'])
-@jwt_required()
+@admin_required
 def get_sso_config():
     """All SSO settings (secrets redacted)."""
     user = User.query.get(get_jwt_identity())
-    if not user or not user.is_admin:
-        return jsonify({'error': 'Admin access required'}), 403
 
     config = {}
     for key in SettingsService.DEFAULT_SETTINGS:
@@ -220,12 +219,10 @@
 @sso_bp.route('/admin/config/<provider>', methods=['PUT'])
-@jwt_required()
+@admin_required
 def update_provider_config(provider):
     """Update a provider's SSO config."""
     user = User.query.get(get_jwt_identity())
-    if not user or not user.is_admin:
-        return jsonify({'error': 'Admin access required'}), 403
 
     if provider not in VALID_PROVIDERS:
         return jsonify({'error': f'Invalid provider: {provider}'}), 400
@@ -254,24 +251,18 @@
 
 @sso_bp.route('/admin/test/<provider>', methods=['POST'])
-@jwt_required()
+@admin_required
 def test_provider(provider):
     """Test provider connectivity."""
-    user = User.query.get(get_jwt_identity())
-    if not user or not user.is_admin:
-        return jsonify({'error': 'Admin access required'}), 403
-
     result = sso_service.test_provider_connectivity(provider)
     return jsonify(result), 200 if result['ok'] else 400
 
 
 @sso_bp.route('/admin/general', methods=['PUT'])
-@jwt_required()
+@admin_required
 def update_general_settings():
     """Update general SSO settings (auto_provision, force_sso, etc.)."""
     user = User.query.get(get_jwt_identity())
-    if not user or not user.is_admin:
-        return jsonify({'error': 'Admin access required'}), 403
 
     data = request.get_json() or {}
     general_keys = ['sso_auto_provision', 'sso_default_role', 'sso_force_sso', 'sso_allowed_domains']
diff --git a/backend/app/api/status_pages.py b/backend/app/api/status_pages.py
new file mode 100644
index 00000000..058fc0e0
--- /dev/null
+++ b/backend/app/api/status_pages.py
@@ -0,0 +1,202 @@
+from flask import Blueprint, request, jsonify
+from flask_jwt_extended import jwt_required
+from app.services.status_page_service import StatusPageService
+
+status_pages_bp = Blueprint('status_pages', __name__)
+
+
+def get_current_user():
+    from flask_jwt_extended import get_jwt_identity
+    from app.models.user import User
+    return User.query.get(get_jwt_identity())
+
+
+# --- Public endpoints (no auth) ---
+
+@status_pages_bp.route('/public/<slug>', methods=['GET'])
+def get_public_page(slug):
+    data = StatusPageService.get_public_page(slug)
+    if not data:
+        return jsonify({'error': 'Status page not found'}), 404
+    return jsonify(data)
+
+
+@status_pages_bp.route('/badge/<slug>', methods=['GET'])
+def get_badge(slug):
+    badge = StatusPageService.get_badge(slug)
+    if not badge:
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify(badge)
+
+
+# --- Admin endpoints ---
+
+@status_pages_bp.route('/', methods=['GET'])
+@jwt_required()
+def list_pages():
+    pages = StatusPageService.list_pages()
+    return jsonify({'pages': [p.to_dict() for p in pages]})
+
+
+@status_pages_bp.route('/<int:page_id>', methods=['GET'])
+@jwt_required()
+def get_page(page_id):
+    page = StatusPageService.get_page(page_id)
+    if not page:
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify(page.to_dict())
+
+
+@status_pages_bp.route('/', methods=['POST'])
+@jwt_required()
+def create_page():
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    data = request.get_json()
+    if not data or 'name' not in data or 'slug' not in data:
+        return jsonify({'error': 'name and slug required'}), 400
+    try:
+        page = StatusPageService.create_page(data)
+        return jsonify(page.to_dict()), 201
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
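+
+
+# Editor's sketch (assumption; the URL prefix is set where status_pages_bp is
+# registered, which is not in this diff): the two public endpoints above are
+# meant to be polled without credentials, e.g. from an external monitor:
+#
+#     import requests
+#     page = requests.get(f"{panel_url}/api/status-pages/public/{slug}").json()
+#     badge = requests.get(f"{panel_url}/api/status-pages/badge/{slug}").json()
+#
+# panel_url and the '/api/status-pages' prefix are placeholders.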
+
+
+@status_pages_bp.route('/<int:page_id>', methods=['PUT'])
+@jwt_required()
+def update_page(page_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    data = request.get_json()
+    page = StatusPageService.update_page(page_id, data)
+    if not page:
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify(page.to_dict())
+
+
+@status_pages_bp.route('/<int:page_id>', methods=['DELETE'])
+@jwt_required()
+def delete_page(page_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    if not StatusPageService.delete_page(page_id):
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify({'message': 'Status page deleted'})
+
+
+# --- Components ---
+
+@status_pages_bp.route('/<int:page_id>/components', methods=['GET'])
+@jwt_required()
+def get_components(page_id):
+    page = StatusPageService.get_page(page_id)
+    if not page:
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify({'components': [c.to_dict() for c in page.components.all()]})
+
+
+@status_pages_bp.route('/<int:page_id>/components', methods=['POST'])
+@jwt_required()
+def create_component(page_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    data = request.get_json()
+    try:
+        comp = StatusPageService.create_component(page_id, data)
+        return jsonify(comp.to_dict()), 201
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@status_pages_bp.route('/components/<int:comp_id>', methods=['PUT'])
+@jwt_required()
+def update_component(comp_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    data = request.get_json()
+    comp = StatusPageService.update_component(comp_id, data)
+    if not comp:
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify(comp.to_dict())
+
+
+@status_pages_bp.route('/components/<int:comp_id>', methods=['DELETE'])
+@jwt_required()
+def delete_component(comp_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    if not StatusPageService.delete_component(comp_id):
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify({'message': 'Component deleted'})
+
+
+@status_pages_bp.route('/components/<int:comp_id>/check', methods=['POST'])
+@jwt_required()
+def run_check(comp_id):
+    hc = StatusPageService.run_check(comp_id)
+    if not hc:
+        return jsonify({'error': 'Component not found'}), 404
+    return jsonify(hc.to_dict())
+
+
+@status_pages_bp.route('/components/<int:comp_id>/history', methods=['GET'])
+@jwt_required()
+def get_history(comp_id):
+    hours = request.args.get('hours', 24, type=int)
+    checks = StatusPageService.get_check_history(comp_id, hours)
+    return jsonify({'checks': [c.to_dict() for c in checks]})
+
+
+# --- Incidents ---
+
+@status_pages_bp.route('/<int:page_id>/incidents', methods=['GET'])
+@jwt_required()
+def list_incidents(page_id):
+    page = StatusPageService.get_page(page_id)
+    if not page:
+        return jsonify({'error': 'Not found'}), 404
+    incidents = page.incidents.limit(50).all()
+    return jsonify({'incidents': [i.to_dict() for i in incidents]})
+
+
+@status_pages_bp.route('/<int:page_id>/incidents', methods=['POST'])
+@jwt_required()
+def create_incident(page_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    data = request.get_json()
+    if not data or 'title' not in data:
+        return jsonify({'error': 'title required'}), 400
+    incident = StatusPageService.create_incident(page_id, data)
+    return jsonify(incident.to_dict()), 201
+
+
+@status_pages_bp.route('/incidents/<int:incident_id>', methods=['PUT'])
+@jwt_required()
+def update_incident(incident_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    data = request.get_json()
+    incident = StatusPageService.update_incident(incident_id, data)
+    if not incident:
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify(incident.to_dict())
+
+
+@status_pages_bp.route('/incidents/<int:incident_id>', methods=['DELETE'])
+@jwt_required()
+def delete_incident(incident_id):
+    user = get_current_user()
+    if not user or not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    if not StatusPageService.delete_incident(incident_id):
+        return jsonify({'error': 'Not found'}), 404
+    return jsonify({'message': 'Incident deleted'})
diff --git a/backend/app/api/two_factor.py b/backend/app/api/two_factor.py
index 2cd5f6a0..65697a23 100644
--- a/backend/app/api/two_factor.py
+++ b/backend/app/api/two_factor.py
@@ -4,11 +4,15 @@
 Provides endpoints for enabling, disabling, and managing 2FA.
 """
 
+import logging
 from flask import Blueprint, request, jsonify
 from flask_jwt_extended import jwt_required, get_jwt_identity
+from app import limiter
 from app.models import User
 from app.services.totp_service import TOTPService, TwoFactorSetup
 
+logger = logging.getLogger(__name__)
+
 two_factor_bp = Blueprint('two_factor', __name__)
@@ -117,6 +121,8 @@ def disable_2fa():
     if not success:
         return jsonify({'error': error}), 400
 
+    logger.info(f"2FA disabled for user {user.id} - existing sessions remain valid (TODO: implement session invalidation)")
+
     return jsonify({
         'message': '2FA has been disabled successfully'
     }), 200
@@ -156,6 +162,7 @@ def regenerate_backup_codes():
 
 @two_factor_bp.route('/verify', methods=['POST'])
+@limiter.limit("5 per minute")
 def verify_2fa_code():
     """
     Verify a 2FA code during login.
diff --git a/backend/app/api/workflows.py b/backend/app/api/workflows.py
index 36511ea3..3ac8c2af 100644
--- a/backend/app/api/workflows.py
+++ b/backend/app/api/workflows.py
@@ -1,12 +1,26 @@
 import json
+import secrets
+from datetime import datetime
 from flask import Blueprint, request, jsonify
 from flask_jwt_extended import jwt_required, get_jwt_identity
 from app import db
-from app.models import Workflow, User
+from app.models import Workflow, User, WorkflowExecution, WorkflowLog
 
 workflows_bp = Blueprint('workflows', __name__)
 
 
+def _validate_workflow_graph(data):
+    """Validate the workflow graph for cycles if nodes/edges are present."""
+    nodes = data.get('nodes', [])
+    edges = data.get('edges', [])
+    if nodes and edges:
+        from app.services.workflow_engine import WorkflowEngine
+        err = WorkflowEngine.validate_graph(nodes, edges)
+        if err:
+            return err
+    return None
+
+
 @workflows_bp.route('', methods=['GET'])
 @jwt_required()
 def get_workflows():
@@ -55,12 +69,25 @@ def create_workflow():
     if not name:
         return jsonify({'error': 'Name is required'}), 400
 
+    # Validate graph for cycles
+    cycle_err = _validate_workflow_graph(data)
+    if cycle_err:
+        return jsonify({'error': cycle_err}), 400
+
+    # Auto-generate webhook_id if trigger is webhook
+    trigger_config = data.get('trigger_config', {})
+    if data.get('trigger_type') == 'webhook' and not trigger_config.get('webhook_id'):
+        trigger_config['webhook_id'] = secrets.token_urlsafe(24)
+
     workflow = Workflow(
         name=name,
         description=data.get('description', ''),
         nodes=json.dumps(data.get('nodes', [])),
         edges=json.dumps(data.get('edges', [])),
         viewport=json.dumps(data.get('viewport')) if data.get('viewport') else None,
+        is_active=data.get('is_active', False),
+        trigger_type=data.get('trigger_type', 'manual'),
+        trigger_config=json.dumps(trigger_config),
         user_id=current_user_id
     )
@@ -91,6 +118,11 @@ def update_workflow(workflow_id):
     if not data:
         return jsonify({'error': 'No data provided'}), 400
 
+    # Validate graph for cycles
+    cycle_err = _validate_workflow_graph(data)
+    if cycle_err:
+        return jsonify({'error': cycle_err}), 400
+
     if 'name' in data:
         workflow.name = data['name']
     if 'description' in data:
@@ -102,6 +134,19 @@ def update_workflow(workflow_id):
     if 'viewport' in data:
         workflow.viewport = json.dumps(data['viewport']) if data['viewport'] else None
 
+    # Automation fields
+    if 'is_active' in data:
+        workflow.is_active = data['is_active']
+    if 'trigger_type' in data:
+        workflow.trigger_type = data['trigger_type']
+    if 'trigger_config' in data:
+        trigger_config = data['trigger_config']
+        # Auto-generate webhook_id if switching to webhook trigger
+        if data.get('trigger_type') == 'webhook' and not trigger_config.get('webhook_id'):
+            existing = json.loads(workflow.trigger_config) if workflow.trigger_config else {}
+            trigger_config['webhook_id'] = existing.get('webhook_id') or secrets.token_urlsafe(24)
+        workflow.trigger_config = json.dumps(trigger_config)
+
     db.session.commit()
 
     return jsonify({
@@ -130,27 +175,103 @@ def delete_workflow(workflow_id):
 
     return jsonify({'message': 'Workflow deleted successfully'})
 
 
+@workflows_bp.route('/<int:workflow_id>/execute', methods=['POST'])
+@jwt_required()
+def execute_workflow(workflow_id):
+    """Trigger a workflow execution manually."""
+    from app.services.workflow_engine import WorkflowEngine
+
+    current_user_id = get_jwt_identity()
+    user = User.query.get(current_user_id)
+    workflow = Workflow.query.get(workflow_id)
+
+    if not workflow:
+        return jsonify({'error': 'Workflow not found'}), 404
+
+    if user.role != 'admin' and workflow.user_id != current_user_id:
+        return jsonify({'error': 'Access denied'}), 403
+
+    data = request.get_json() or {}
+    context = data.get('context', {})
+
+    execution_id = WorkflowEngine.execute_workflow(
+        workflow_id=workflow_id,
+        trigger_type='manual',
+        context=context
+    )
+
+    return jsonify({
+        'message': 'Workflow execution started',
+        'execution_id': execution_id
+    })
+
+
+@workflows_bp.route('/<int:workflow_id>/executions', methods=['GET'])
+@jwt_required()
+def get_workflow_executions(workflow_id):
+    """Get execution history for a workflow."""
+    current_user_id = get_jwt_identity()
+    user = User.query.get(current_user_id)
+    workflow = Workflow.query.get(workflow_id)
+
+    if not workflow:
+        return jsonify({'error': 'Workflow not found'}), 404
+
+    if user.role != 'admin' and workflow.user_id != current_user_id:
+        return jsonify({'error': 'Access denied'}), 403
+
+    executions = WorkflowExecution.query.filter_by(workflow_id=workflow_id).order_by(WorkflowExecution.started_at.desc()).all()
+
+    return jsonify({
+        'executions': [e.to_dict() for e in executions]
+    })
+
+
+@workflows_bp.route('/executions/<int:execution_id>', methods=['GET'])
+@jwt_required()
+def get_execution_details(execution_id):
+    """Get details for a specific execution."""
+    current_user_id = get_jwt_identity()
+    user = User.query.get(current_user_id)
+    execution = WorkflowExecution.query.get(execution_id)
+
+    if not execution:
+        return jsonify({'error': 'Execution not found'}), 404
+
+    workflow = execution.workflow
+    if user.role != 'admin' and workflow.user_id != current_user_id:
+        return jsonify({'error': 'Access denied'}), 403
+
+    return jsonify(execution.to_dict())
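+
+
+# Editor's sketch (assumption; WorkflowExecution's status values are not in
+# this hunk): execution is asynchronous, so a client triggers and then polls:
+#
+#     eid = post(f'/workflows/{wf_id}/execute').json()['execution_id']
+#     while get(f'/workflows/executions/{eid}').json().get('status') == 'running':
+#         time.sleep(2)
+#
+# post/get stand in for an authenticated HTTP client; 'running' is a guess at
+# the in-progress state.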
specific execution.""" + current_user_id = get_jwt_identity() + user = User.query.get(current_user_id) + execution = WorkflowExecution.query.get(execution_id) + + if not execution: + return jsonify({'error': 'Execution not found'}), 404 + + workflow = execution.workflow + if user.role != 'admin' and workflow.user_id != current_user_id: + return jsonify({'error': 'Access denied'}), 403 + + logs = WorkflowLog.query.filter_by(execution_id=execution_id).order_by(WorkflowLog.timestamp.asc()).all() + + return jsonify({ + 'logs': [l.to_dict() for l in logs] + }) + + @workflows_bp.route('//deploy', methods=['POST']) @jwt_required() def deploy_workflow(workflow_id): """ Deploy all resources from a workflow. - - This endpoint converts workflow nodes (Docker apps, databases, domains) - into actual infrastructure by calling the appropriate backend services. - - Returns: - { - "success": boolean, - "message": string, - "results": [ - {"nodeId": "node_1", "type": "dockerApp", "success": true, "resourceId": 5}, - {"nodeId": "node_2", "type": "domain", "success": true, "resourceId": 12}, - ... - ], - "errors": [], - "workflow": {...updated workflow with resource IDs...} - } """ from app.services.workflow_service import WorkflowService @@ -167,3 +288,86 @@ def deploy_workflow(workflow_id): else: # Partial success or errors - return 200 with error details return jsonify(result), 200 + + +@workflows_bp.route('/hooks/', methods=['POST']) +def webhook_trigger(webhook_id): + """ + Public webhook endpoint to trigger a workflow. + No JWT required — the webhook_id acts as the authentication token. + """ + from app.services.workflow_engine import WorkflowEngine, CycleDetectedError + + # Find the workflow with this webhook_id + workflows = Workflow.query.filter_by( + is_active=True, + trigger_type='webhook' + ).all() + + target_workflow = None + for wf in workflows: + try: + config = json.loads(wf.trigger_config) if wf.trigger_config else {} + if config.get('webhook_id') == webhook_id: + target_workflow = wf + break + except (json.JSONDecodeError, TypeError): + continue + + if not target_workflow: + return jsonify({'error': 'Webhook not found'}), 404 + + # Build context from the incoming request + context = { + 'webhook_id': webhook_id, + 'method': request.method, + 'headers': dict(request.headers), + 'triggered_at': datetime.utcnow().isoformat() + } + + # Include request body + if request.is_json: + context['body'] = request.get_json(silent=True) or {} + else: + body = request.get_data(as_text=True) + context['body'] = body[:8192] if body else '' + + # Include query parameters + if request.args: + context['query'] = dict(request.args) + + try: + execution_id = WorkflowEngine.execute_workflow( + workflow_id=target_workflow.id, + trigger_type='webhook', + context=context + ) + return jsonify({ + 'message': 'Workflow triggered', + 'execution_id': execution_id, + 'workflow': target_workflow.name + }), 200 + except CycleDetectedError as e: + return jsonify({'error': str(e)}), 400 + except Exception as e: + return jsonify({'error': f'Execution failed: {str(e)}'}), 500 + + +@workflows_bp.route('/validate', methods=['POST']) +@jwt_required() +def validate_workflow(): + """Validate a workflow graph for cycles and other issues.""" + from app.services.workflow_engine import WorkflowEngine + + data = request.get_json() + if not data: + return jsonify({'error': 'No data provided'}), 400 + + nodes = data.get('nodes', []) + edges = data.get('edges', []) + + err = WorkflowEngine.validate_graph(nodes, edges) + if err: + return 
diff --git a/backend/app/api/workspaces.py b/backend/app/api/workspaces.py
new file mode 100644
index 00000000..fcc55a56
--- /dev/null
+++ b/backend/app/api/workspaces.py
@@ -0,0 +1,220 @@
+from flask import Blueprint, request, jsonify
+from flask_jwt_extended import jwt_required
+from app.services.workspace_service import WorkspaceService
+from app.services.audit_service import AuditService
+from app.models.audit_log import AuditLog
+
+workspaces_bp = Blueprint('workspaces', __name__)
+
+
+def get_current_user():
+    from flask_jwt_extended import get_jwt_identity
+    from app.models.user import User
+    return User.query.get(get_jwt_identity())
+
+
+def require_workspace_access(workspace_id, user):
+    """Return a 403 response tuple if the user is neither a workspace member nor an admin, else None."""
+    if user.is_admin:
+        return None
+    role = WorkspaceService.get_user_role(workspace_id, user.id)
+    if role is None:
+        return jsonify({'error': 'Workspace access denied'}), 403
+    return None
+
+
+@workspaces_bp.route('/', methods=['GET'])
+@jwt_required()
+def list_workspaces():
+    user = get_current_user()
+    include_archived = request.args.get('include_archived', 'false') == 'true'
+    # Super-admins see all, others see their memberships
+    if user.is_admin and request.args.get('all') == 'true':
+        workspaces = WorkspaceService.get_all_workspaces_admin()
+        return jsonify({'workspaces': workspaces})
+    workspaces = WorkspaceService.list_workspaces(user_id=user.id, include_archived=include_archived)
+    return jsonify({'workspaces': [ws.to_dict() for ws in workspaces]})
+
+
+@workspaces_bp.route('/<int:workspace_id>', methods=['GET'])
+@jwt_required()
+def get_workspace(workspace_id):
+    user = get_current_user()
+    denied = require_workspace_access(workspace_id, user)
+    if denied:
+        return denied
+    ws = WorkspaceService.get_workspace(workspace_id)
+    if not ws:
+        return jsonify({'error': 'Workspace not found'}), 404
+    return jsonify(ws.to_dict())
+
+
+@workspaces_bp.route('/', methods=['POST'])
+@jwt_required()
+def create_workspace():
+    user = get_current_user()
+    data = request.get_json()
+    if not data:
+        return jsonify({'error': 'Request body required'}), 400
+    try:
+        ws = WorkspaceService.create_workspace(data, user.id)
+        AuditService.log(
+            action=AuditLog.ACTION_RESOURCE_CREATE, user_id=user.id,
+            target_type='workspace', target_id=ws.id,
+            details={'name': ws.name}
+        )
+        return jsonify(ws.to_dict()), 201
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@workspaces_bp.route('/<int:workspace_id>', methods=['PUT'])
+@jwt_required()
+def update_workspace(workspace_id):
+    user = get_current_user()
+    role = WorkspaceService.get_user_role(workspace_id, user.id)
+    if role not in ['owner', 'admin'] and not user.is_admin:
+        return jsonify({'error': 'Insufficient permissions'}), 403
+
+    data = request.get_json()
+    ws = WorkspaceService.update_workspace(workspace_id, data)
+    if not ws:
+        return jsonify({'error': 'Workspace not found'}), 404
+    return jsonify(ws.to_dict())
+
+
+@workspaces_bp.route('/<int:workspace_id>/archive', methods=['POST'])
+@jwt_required()
+def archive_workspace(workspace_id):
+    user = get_current_user()
+    role = WorkspaceService.get_user_role(workspace_id, user.id)
+    if role != 'owner' and not user.is_admin:
+        return jsonify({'error': 'Owner access required'}), 403
+    ws = WorkspaceService.archive_workspace(workspace_id)
+    if not ws:
+        return jsonify({'error': 'Workspace not found'}), 404
+    return jsonify(ws.to_dict())
+
+
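+# Editor's note (summary of the checks in this file, for review): the implied
+# permission model is
+#
+#     panel admin      -> everything, including restore and delete
+#     workspace owner  -> archive, plus everything a workspace admin can do
+#     workspace admin  -> update workspace, add members
+#     member           -> read workspace/members, list and create API keys
+#
+# update_member_role and remove_member below have no route-level role check;
+# authorization there appears to be delegated to WorkspaceService.
+
+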
+@workspaces_bp.route('/<int:workspace_id>/restore', methods=['POST'])
+@jwt_required()
+def restore_workspace(workspace_id):
+    user = get_current_user()
+    if not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    ws = WorkspaceService.restore_workspace(workspace_id)
+    if not ws:
+        return jsonify({'error': 'Workspace not found'}), 404
+    return jsonify(ws.to_dict())
+
+
+@workspaces_bp.route('/<int:workspace_id>', methods=['DELETE'])
+@jwt_required()
+def delete_workspace(workspace_id):
+    user = get_current_user()
+    if not user.is_admin:
+        return jsonify({'error': 'Admin access required'}), 403
+    result = WorkspaceService.delete_workspace(workspace_id)
+    if not result:
+        return jsonify({'error': 'Workspace not found'}), 404
+    return jsonify({'message': 'Workspace deleted'})
+
+
+# --- Members ---
+
+@workspaces_bp.route('/<int:workspace_id>/members', methods=['GET'])
+@jwt_required()
+def get_members(workspace_id):
+    user = get_current_user()
+    denied = require_workspace_access(workspace_id, user)
+    if denied:
+        return denied
+    members = WorkspaceService.get_members(workspace_id)
+    return jsonify({'members': [m.to_dict() for m in members]})
+
+
+@workspaces_bp.route('/<int:workspace_id>/members', methods=['POST'])
+@jwt_required()
+def add_member(workspace_id):
+    user = get_current_user()
+    role = WorkspaceService.get_user_role(workspace_id, user.id)
+    if role not in ['owner', 'admin'] and not user.is_admin:
+        return jsonify({'error': 'Insufficient permissions'}), 403
+
+    data = request.get_json() or {}
+    user_id = data.get('user_id')
+    if not user_id:
+        return jsonify({'error': 'user_id required'}), 400
+
+    try:
+        member = WorkspaceService.add_member(workspace_id, user_id, data.get('role', 'member'))
+        return jsonify(member.to_dict()), 201
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+@workspaces_bp.route('/members/<int:member_id>/role', methods=['PUT'])
+@jwt_required()
+def update_member_role(member_id):
+    data = request.get_json() or {}
+    role = data.get('role')
+    if not role:
+        return jsonify({'error': 'role required'}), 400
+    member = WorkspaceService.update_member_role(member_id, role)
+    if not member:
+        return jsonify({'error': 'Member not found'}), 404
+    return jsonify(member.to_dict())
+
+
+@workspaces_bp.route('/members/<int:member_id>', methods=['DELETE'])
+@jwt_required()
+def remove_member(member_id):
+    try:
+        result = WorkspaceService.remove_member(member_id)
+        if not result:
+            return jsonify({'error': 'Member not found'}), 404
+        return jsonify({'message': 'Member removed'})
+    except ValueError as e:
+        return jsonify({'error': str(e)}), 400
+
+
+# --- API Keys ---
+
+@workspaces_bp.route('/<int:workspace_id>/api-keys', methods=['GET'])
+@jwt_required()
+def list_api_keys(workspace_id):
+    user = get_current_user()
+    denied = require_workspace_access(workspace_id, user)
+    if denied:
+        return denied
+    keys = WorkspaceService.list_api_keys(workspace_id)
+    return jsonify({'api_keys': [k.to_dict() for k in keys]})
+
+
+@workspaces_bp.route('/<int:workspace_id>/api-keys', methods=['POST'])
+@jwt_required()
+def create_api_key(workspace_id):
+    user = get_current_user()
+    denied = require_workspace_access(workspace_id, user)
+    if denied:
+        return denied
+    data = request.get_json() or {}
+    name = data.get('name')
+    if not name:
+        return jsonify({'error': 'name required'}), 400
+
+    api_key, raw_key = WorkspaceService.create_api_key(
+        workspace_id, name, scopes=data.get('scopes'), user_id=user.id
+    )
+    result = api_key.to_dict()
+    result['key'] = raw_key  # Only returned once
+    return jsonify(result), 201
+
+
+@workspaces_bp.route('/api-keys/<int:key_id>/revoke', methods=['POST'])
+@jwt_required() +def revoke_api_key(key_id): + result = WorkspaceService.revoke_api_key(key_id) + if not result: + return jsonify({'error': 'API key not found'}), 404 + return jsonify({'message': 'API key revoked'}) diff --git a/backend/app/middleware/security.py b/backend/app/middleware/security.py index a5303ac3..d592153b 100644 --- a/backend/app/middleware/security.py +++ b/backend/app/middleware/security.py @@ -21,15 +21,27 @@ def add_security_headers(response): response.headers['Referrer-Policy'] = 'strict-origin-when-cross-origin' # Content Security Policy - csp_directives = [ - "default-src 'self'", - "script-src 'self' 'unsafe-inline' 'unsafe-eval' https://unpkg.com", - "style-src 'self' 'unsafe-inline' https://unpkg.com", - "img-src 'self' data: https:", - "font-src 'self'", - "connect-src 'self' ws: wss:", - "frame-ancestors 'none'", - ] + # In debug mode, allow inline styles/scripts for Vite dev tooling + if app.debug: + csp_directives = [ + "default-src 'self'", + "script-src 'self' 'unsafe-inline'", + "style-src 'self' 'unsafe-inline'", + "img-src 'self' data: https:", + "font-src 'self'", + "connect-src 'self' ws: wss: http://localhost:* http://127.0.0.1:*", + "frame-ancestors 'none'", + ] + else: + csp_directives = [ + "default-src 'self'", + "script-src 'self'", + "style-src 'self'", + "img-src 'self' data: https:", + "font-src 'self'", + "connect-src 'self' ws: wss:", + "frame-ancestors 'none'", + ] response.headers['Content-Security-Policy'] = '; '.join(csp_directives) # Permissions Policy (formerly Feature-Policy) diff --git a/backend/app/models/__init__.py b/backend/app/models/__init__.py index 04a7d9b6..a6fea628 100644 --- a/backend/app/models/__init__.py +++ b/backend/app/models/__init__.py @@ -1,35 +1,51 @@ -from app.models.user import User -from app.models.application import Application -from app.models.domain import Domain -from app.models.env_variable import EnvironmentVariable, EnvironmentVariableHistory -from app.models.notification_preferences import NotificationPreferences -from app.models.deployment import Deployment, DeploymentDiff -from app.models.system_settings import SystemSettings -from app.models.audit_log import AuditLog -from app.models.metrics_history import MetricsHistory -from app.models.workflow import Workflow -from app.models.webhook import GitWebhook, WebhookLog, GitDeployment -from app.models.server import Server, ServerGroup, ServerMetrics, ServerCommand, AgentSession -from app.models.security_alert import SecurityAlert -from app.models.wordpress_site import WordPressSite, DatabaseSnapshot, SyncJob -from app.models.environment_activity import EnvironmentActivity -from app.models.promotion_job import PromotionJob -from app.models.sanitization_profile import SanitizationProfile -from app.models.email import EmailDomain, EmailAccount, EmailAlias, EmailForwardingRule, DNSProviderConfig -from app.models.oauth_identity import OAuthIdentity -from app.models.api_key import ApiKey -from app.models.api_usage import ApiUsageLog, ApiUsageSummary -from app.models.event_subscription import EventSubscription, EventDelivery -from app.models.invitation import Invitation - -__all__ = [ - 'User', 'Application', 'Domain', 'EnvironmentVariable', 'EnvironmentVariableHistory', - 'NotificationPreferences', 'Deployment', 'DeploymentDiff', 'SystemSettings', 'AuditLog', - 'MetricsHistory', 'Workflow', 'GitWebhook', 'WebhookLog', 'GitDeployment', - 'Server', 'ServerGroup', 'ServerMetrics', 'ServerCommand', 'AgentSession', 'SecurityAlert', - 'WordPressSite', 
'DatabaseSnapshot', 'SyncJob', - 'EnvironmentActivity', 'PromotionJob', 'SanitizationProfile', - 'EmailDomain', 'EmailAccount', 'EmailAlias', 'EmailForwardingRule', 'DNSProviderConfig', - 'OAuthIdentity', 'ApiKey', 'ApiUsageLog', 'ApiUsageSummary', - 'EventSubscription', 'EventDelivery', 'Invitation' -] +from app.models.user import User +from app.models.application import Application +from app.models.domain import Domain +from app.models.env_variable import EnvironmentVariable, EnvironmentVariableHistory +from app.models.notification_preferences import NotificationPreferences +from app.models.deployment import Deployment, DeploymentDiff +from app.models.system_settings import SystemSettings +from app.models.audit_log import AuditLog +from app.models.metrics_history import MetricsHistory +from app.models.workflow import Workflow, WorkflowExecution, WorkflowLog +from app.models.webhook import GitWebhook, WebhookLog, GitDeployment +from app.models.server import Server, ServerGroup, ServerMetrics, ServerCommand, AgentSession, AgentVersion, AgentRollout +from app.models.security_alert import SecurityAlert +from app.models.wordpress_site import WordPressSite, DatabaseSnapshot, SyncJob +from app.models.environment_activity import EnvironmentActivity +from app.models.promotion_job import PromotionJob +from app.models.sanitization_profile import SanitizationProfile +from app.models.email import EmailDomain, EmailAccount, EmailAlias, EmailForwardingRule, DNSProviderConfig +from app.models.oauth_identity import OAuthIdentity +from app.models.api_key import ApiKey +from app.models.api_usage import ApiUsageLog, ApiUsageSummary +from app.models.event_subscription import EventSubscription, EventDelivery +from app.models.invitation import Invitation +from app.models.metric_alert import ServerAlertThreshold, MetricAlert +from app.models.agent_plugin import AgentPlugin, AgentPluginInstall +from app.models.server_template import ServerTemplate, ServerTemplateAssignment +from app.models.workspace import Workspace, WorkspaceMember, WorkspaceApiKey +from app.models.dns_zone import DNSZone, DNSRecord +from app.models.status_page import StatusPage, StatusComponent, HealthCheck, StatusIncident, StatusIncidentUpdate +from app.models.cloud_server import CloudProvider, CloudServer, CloudSnapshot +from app.models.marketplace import Extension, ExtensionInstall + +__all__ = [ + 'User', 'Application', 'Domain', 'EnvironmentVariable', 'EnvironmentVariableHistory', + 'NotificationPreferences', 'Deployment', 'DeploymentDiff', 'SystemSettings', 'AuditLog', + 'MetricsHistory', 'Workflow', 'WorkflowExecution', 'WorkflowLog', 'GitWebhook', 'WebhookLog', 'GitDeployment', + 'Server', 'ServerGroup', 'ServerMetrics', 'ServerCommand', 'AgentSession', 'AgentVersion', 'AgentRollout', 'SecurityAlert', + 'WordPressSite', 'DatabaseSnapshot', 'SyncJob', + 'EnvironmentActivity', 'PromotionJob', 'SanitizationProfile', + 'EmailDomain', 'EmailAccount', 'EmailAlias', 'EmailForwardingRule', 'DNSProviderConfig', + 'OAuthIdentity', 'ApiKey', 'ApiUsageLog', 'ApiUsageSummary', + 'EventSubscription', 'EventDelivery', 'Invitation', + 'ServerAlertThreshold', 'MetricAlert', + 'AgentPlugin', 'AgentPluginInstall', + 'ServerTemplate', 'ServerTemplateAssignment', + 'Workspace', 'WorkspaceMember', 'WorkspaceApiKey', + 'DNSZone', 'DNSRecord', + 'StatusPage', 'StatusComponent', 'HealthCheck', 'StatusIncident', 'StatusIncidentUpdate', + 'CloudProvider', 'CloudServer', 'CloudSnapshot', + 'Extension', 'ExtensionInstall' +] diff --git 
a/backend/app/models/agent_plugin.py b/backend/app/models/agent_plugin.py
new file mode 100644
index 00000000..8ad55656
--- /dev/null
+++ b/backend/app/models/agent_plugin.py
@@ -0,0 +1,163 @@
+from datetime import datetime
+from app import db
+import json
+
+
+class AgentPlugin(db.Model):
+    """Represents a plugin that can be installed on agents."""
+    __tablename__ = 'agent_plugins'
+
+    id = db.Column(db.Integer, primary_key=True)
+    name = db.Column(db.String(128), nullable=False, unique=True)
+    display_name = db.Column(db.String(256), nullable=False)
+    version = db.Column(db.String(32), nullable=False)
+    description = db.Column(db.Text)
+    author = db.Column(db.String(128))
+    homepage = db.Column(db.String(512))
+
+    # Plugin manifest
+    manifest_json = db.Column(db.Text)
+
+    # Capabilities
+    capabilities_json = db.Column(db.Text)  # metrics, health_checks, commands, scheduled_tasks, event_hooks
+
+    # Dependencies
+    dependencies_json = db.Column(db.Text)  # other plugins required
+
+    # Permissions
+    permissions_json = db.Column(db.Text)  # filesystem, network, docker, etc.
+
+    # Resource limits
+    max_memory_mb = db.Column(db.Integer, default=128)
+    max_cpu_percent = db.Column(db.Integer, default=10)
+
+    # Status
+    STATUS_AVAILABLE = 'available'
+    STATUS_DEPRECATED = 'deprecated'
+    status = db.Column(db.String(32), default=STATUS_AVAILABLE)
+
+    created_at = db.Column(db.DateTime, default=datetime.utcnow)
+    updated_at = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
+
+    # Relationships
+    installations = db.relationship('AgentPluginInstall', backref='plugin', lazy='dynamic')
+
+    @property
+    def manifest(self):
+        return json.loads(self.manifest_json) if self.manifest_json else {}
+
+    @manifest.setter
+    def manifest(self, value):
+        self.manifest_json = json.dumps(value)
+
+    @property
+    def capabilities(self):
+        return json.loads(self.capabilities_json) if self.capabilities_json else []
+
+    @capabilities.setter
+    def capabilities(self, value):
+        self.capabilities_json = json.dumps(value)
+
+    @property
+    def dependencies(self):
+        return json.loads(self.dependencies_json) if self.dependencies_json else []
+
+    @dependencies.setter
+    def dependencies(self, value):
+        self.dependencies_json = json.dumps(value)
+
+    @property
+    def permissions(self):
+        return json.loads(self.permissions_json) if self.permissions_json else []
+
+    @permissions.setter
+    def permissions(self, value):
+        self.permissions_json = json.dumps(value)
+
+    def to_dict(self):
+        return {
+            'id': self.id,
+            'name': self.name,
+            'display_name': self.display_name,
+            'version': self.version,
+            'description': self.description,
+            'author': self.author,
+            'homepage': self.homepage,
+            'manifest': self.manifest,
+            'capabilities': self.capabilities,
+            'dependencies': self.dependencies,
+            'permissions': self.permissions,
+            'max_memory_mb': self.max_memory_mb,
+            'max_cpu_percent': self.max_cpu_percent,
+            'status': self.status,
+            'install_count': self.installations.count(),
+            'created_at': self.created_at.isoformat() if self.created_at else None,
+            'updated_at': self.updated_at.isoformat() if self.updated_at else None,
+        }
+
+    def __repr__(self):
+        return f'<AgentPlugin {self.name}>'
+
+
+class AgentPluginInstall(db.Model):
+    """Tracks plugin installations on specific agents/servers."""
+    __tablename__ = 'agent_plugin_installs'
+
+    id = db.Column(db.Integer, primary_key=True)
+    plugin_id = db.Column(db.Integer, db.ForeignKey('agent_plugins.id'), nullable=False)
+    server_id = db.Column(db.Integer, db.ForeignKey('servers.id'), nullable=False)
+
+    # Installation state
+    STATUS_INSTALLING = 'installing'
+    STATUS_ENABLED = 'enabled'
+    STATUS_DISABLED = 'disabled'
+    STATUS_ERROR = 'error'
+    STATUS_UNINSTALLING = 'uninstalling'
+    status = db.Column(db.String(32), default=STATUS_INSTALLING)
+
+    installed_version = db.Column(db.String(32))
+    config_json = db.Column(db.Text)
+    error_message = db.Column(db.Text)
+
+    installed_at = db.Column(db.DateTime, default=datetime.utcnow)
+    updated_at = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
+
+    # Plugin runtime data
+    last_health_check = db.Column(db.DateTime)
+    health_status = db.Column(db.String(32), default='unknown')  # healthy, degraded, unhealthy, unknown
+    metrics_json = db.Column(db.Text)
+
+    server = db.relationship('Server', backref=db.backref('plugin_installs', lazy='dynamic'))
+
+    @property
+    def config(self):
+        return json.loads(self.config_json) if self.config_json else {}
+
+    @config.setter
+    def config(self, value):
+        self.config_json = json.dumps(value)
+
+    @property
+    def metrics(self):
+        return json.loads(self.metrics_json) if self.metrics_json else {}
+
+    def to_dict(self):
+        return {
+            'id': self.id,
+            'plugin_id': self.plugin_id,
+            'plugin': self.plugin.to_dict() if self.plugin else None,
+            'server_id': self.server_id,
+            'server_name': self.server.name if self.server else None,
+            'status': self.status,
+            'installed_version': self.installed_version,
+            'config': self.config,
+            'error_message': self.error_message,
+            'installed_at': self.installed_at.isoformat() if self.installed_at else None,
+            'updated_at': self.updated_at.isoformat() if self.updated_at else None,
+            'last_health_check': self.last_health_check.isoformat() if self.last_health_check else None,
+            'health_status': self.health_status,
+            'metrics': self.metrics,
+        }
+
+    def __repr__(self):
+        return f'<AgentPluginInstall plugin={self.plugin_id} server={self.server_id}>'
diff --git a/backend/app/models/application.py b/backend/app/models/application.py
index 64ea513d..51c9f122 100644
--- a/backend/app/models/application.py
+++ b/backend/app/models/application.py
@@ -38,7 +38,8 @@ class Application(db.Model):
     user_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False)
 
     # Relationships
-    domains = db.relationship('Domain', backref='application', lazy='dynamic', cascade='all, delete-orphan')
+    # Use 'subquery' to eagerly load domains in a single query, avoiding N+1
+    domains = db.relationship('Domain', backref='application', lazy='subquery', cascade='all, delete-orphan')
     linked_app = db.relationship('Application', remote_side=[id], backref='linked_from', foreign_keys=[linked_app_id])
 
     def to_dict(self, include_linked=False):
diff --git a/backend/app/models/cloud_server.py b/backend/app/models/cloud_server.py
new file mode 100644
index 00000000..a033d7ac
--- /dev/null
+++ b/backend/app/models/cloud_server.py
@@ -0,0 +1,126 @@
+from datetime import datetime
+from app import db
+import json
+
+
+class CloudProvider(db.Model):
+    """Cloud provider configuration (DigitalOcean, Hetzner, Vultr, Linode)."""
+    __tablename__ = 'cloud_providers'
+
+    id = db.Column(db.Integer, primary_key=True)
+    name = db.Column(db.String(64), nullable=False)
+    provider_type = db.Column(db.String(32), nullable=False)  # digitalocean, hetzner, vultr, linode
+    api_key_encrypted = db.Column(db.Text)
+    is_active = db.Column(db.Boolean, default=True)
+    created_by = db.Column(db.Integer, db.ForeignKey('users.id'))
+    created_at = db.Column(db.DateTime, default=datetime.utcnow)
+
+    servers = db.relationship('CloudServer', backref='provider', lazy='dynamic')
+
+    def to_dict(self):
return { + 'id': self.id, + 'name': self.name, + 'provider_type': self.provider_type, + 'is_active': self.is_active, + 'server_count': self.servers.count(), + 'created_at': self.created_at.isoformat() if self.created_at else None, + } + + +class CloudServer(db.Model): + """A cloud server provisioned through ServerKit.""" + __tablename__ = 'cloud_servers' + + id = db.Column(db.Integer, primary_key=True) + provider_id = db.Column(db.Integer, db.ForeignKey('cloud_providers.id'), nullable=False) + external_id = db.Column(db.String(128)) # provider's server ID + name = db.Column(db.String(128), nullable=False) + hostname = db.Column(db.String(256)) + + # Specs + region = db.Column(db.String(64)) + size = db.Column(db.String(64)) + image = db.Column(db.String(128)) # OS image + ip_address = db.Column(db.String(45)) + ipv6_address = db.Column(db.String(64)) + + # Status + STATUS_CREATING = 'creating' + STATUS_ACTIVE = 'active' + STATUS_OFF = 'off' + STATUS_DESTROYED = 'destroyed' + STATUS_ERROR = 'error' + status = db.Column(db.String(32), default=STATUS_CREATING) + + # Cost + monthly_cost = db.Column(db.Float, default=0) + currency = db.Column(db.String(3), default='USD') + + # Auto-install agent + agent_installed = db.Column(db.Boolean, default=False) + + # SSH key + ssh_key_id = db.Column(db.String(128)) + + # Metadata + metadata_json = db.Column(db.Text) + + created_by = db.Column(db.Integer, db.ForeignKey('users.id')) + created_at = db.Column(db.DateTime, default=datetime.utcnow) + destroyed_at = db.Column(db.DateTime) + + snapshots = db.relationship('CloudSnapshot', backref='server', lazy='dynamic', cascade='all, delete-orphan') + + @property + def server_metadata(self): + return json.loads(self.metadata_json) if self.metadata_json else {} + + @server_metadata.setter + def server_metadata(self, v): + self.metadata_json = json.dumps(v) + + def to_dict(self): + return { + 'id': self.id, + 'provider_id': self.provider_id, + 'provider_name': self.provider.name if self.provider else None, + 'provider_type': self.provider.provider_type if self.provider else None, + 'external_id': self.external_id, + 'name': self.name, + 'hostname': self.hostname, + 'region': self.region, + 'size': self.size, + 'image': self.image, + 'ip_address': self.ip_address, + 'ipv6_address': self.ipv6_address, + 'status': self.status, + 'monthly_cost': self.monthly_cost, + 'currency': self.currency, + 'agent_installed': self.agent_installed, + 'created_at': self.created_at.isoformat() if self.created_at else None, + } + + +class CloudSnapshot(db.Model): + """Snapshot of a cloud server.""" + __tablename__ = 'cloud_snapshots' + + id = db.Column(db.Integer, primary_key=True) + server_id = db.Column(db.Integer, db.ForeignKey('cloud_servers.id'), nullable=False) + external_id = db.Column(db.String(128)) + name = db.Column(db.String(128)) + size_gb = db.Column(db.Float) + status = db.Column(db.String(32), default='creating') + created_at = db.Column(db.DateTime, default=datetime.utcnow) + + def to_dict(self): + return { + 'id': self.id, + 'server_id': self.server_id, + 'external_id': self.external_id, + 'name': self.name, + 'size_gb': self.size_gb, + 'status': self.status, + 'created_at': self.created_at.isoformat() if self.created_at else None, + } diff --git a/backend/app/models/dns_zone.py b/backend/app/models/dns_zone.py new file mode 100644 index 00000000..63743190 --- /dev/null +++ b/backend/app/models/dns_zone.py @@ -0,0 +1,76 @@ +from datetime import datetime +from app import db +import json + + +class DNSZone(db.Model): 
+ """DNS zone for a domain with provider integration.""" + __tablename__ = 'dns_zones' + + id = db.Column(db.Integer, primary_key=True) + domain = db.Column(db.String(256), nullable=False, unique=True) + provider = db.Column(db.String(64)) # cloudflare, route53, digitalocean, manual + provider_zone_id = db.Column(db.String(128)) + provider_config_json = db.Column(db.Text) # encrypted credentials + + status = db.Column(db.String(32), default='active') + last_sync_at = db.Column(db.DateTime) + + created_at = db.Column(db.DateTime, default=datetime.utcnow) + updated_at = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) + + records = db.relationship('DNSRecord', backref='zone', lazy='dynamic', cascade='all, delete-orphan') + + @property + def provider_config(self): + return json.loads(self.provider_config_json) if self.provider_config_json else {} + + @provider_config.setter + def provider_config(self, v): + self.provider_config_json = json.dumps(v) + + def to_dict(self): + return { + 'id': self.id, + 'domain': self.domain, + 'provider': self.provider, + 'provider_zone_id': self.provider_zone_id, + 'status': self.status, + 'record_count': self.records.count(), + 'last_sync_at': self.last_sync_at.isoformat() if self.last_sync_at else None, + 'created_at': self.created_at.isoformat() if self.created_at else None, + } + + +class DNSRecord(db.Model): + """Individual DNS record within a zone.""" + __tablename__ = 'dns_records' + + id = db.Column(db.Integer, primary_key=True) + zone_id = db.Column(db.Integer, db.ForeignKey('dns_zones.id'), nullable=False) + + record_type = db.Column(db.String(10), nullable=False) # A, AAAA, CNAME, MX, TXT, SRV, CAA + name = db.Column(db.String(256), nullable=False) + content = db.Column(db.Text, nullable=False) + ttl = db.Column(db.Integer, default=3600) + priority = db.Column(db.Integer) # MX, SRV + proxied = db.Column(db.Boolean, default=False) # Cloudflare proxy + + provider_record_id = db.Column(db.String(128)) + + created_at = db.Column(db.DateTime, default=datetime.utcnow) + updated_at = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) + + def to_dict(self): + return { + 'id': self.id, + 'zone_id': self.zone_id, + 'record_type': self.record_type, + 'name': self.name, + 'content': self.content, + 'ttl': self.ttl, + 'priority': self.priority, + 'proxied': self.proxied, + 'provider_record_id': self.provider_record_id, + 'created_at': self.created_at.isoformat() if self.created_at else None, + } diff --git a/backend/app/models/invitation.py b/backend/app/models/invitation.py index 4655b29b..94a5014e 100644 --- a/backend/app/models/invitation.py +++ b/backend/app/models/invitation.py @@ -16,7 +16,7 @@ class Invitation(db.Model): id = db.Column(db.Integer, primary_key=True) email = db.Column(db.String(255), nullable=True) # Nullable for link-only invites token = db.Column(db.String(64), unique=True, nullable=False, index=True, - default=lambda: uuid4().hex) + default=lambda: __import__('secrets').token_urlsafe(32)) role = db.Column(db.String(20), nullable=False, default='developer') permissions = db.Column(db.Text, nullable=True) # JSON custom permissions invited_by = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False) diff --git a/backend/app/models/marketplace.py b/backend/app/models/marketplace.py new file mode 100644 index 00000000..9fb1e42f --- /dev/null +++ b/backend/app/models/marketplace.py @@ -0,0 +1,128 @@ +from datetime import datetime +from app import db +import json + + +class 
Extension(db.Model): + """A marketplace extension/plugin.""" + __tablename__ = 'extensions' + + id = db.Column(db.Integer, primary_key=True) + name = db.Column(db.String(128), nullable=False, unique=True) + display_name = db.Column(db.String(256), nullable=False) + slug = db.Column(db.String(128), nullable=False, unique=True) + description = db.Column(db.Text) + long_description = db.Column(db.Text) + version = db.Column(db.String(32), nullable=False) + author = db.Column(db.String(128)) + homepage = db.Column(db.String(512)) + repository = db.Column(db.String(512)) + license = db.Column(db.String(64)) + + # Classification + category = db.Column(db.String(64)) # monitoring, security, deployment, integration, ui + tags_json = db.Column(db.Text) + + # Extension type + TYPE_WIDGET = 'widget' + TYPE_API_HOOK = 'api_hook' + TYPE_THEME = 'theme' + TYPE_INTEGRATION = 'integration' + extension_type = db.Column(db.String(32), default=TYPE_INTEGRATION) + + # Extension package + entry_point = db.Column(db.String(256)) + config_schema_json = db.Column(db.Text) + + # Rating + rating = db.Column(db.Float, default=0) + rating_count = db.Column(db.Integer, default=0) + download_count = db.Column(db.Integer, default=0) + + # Status + STATUS_PUBLISHED = 'published' + STATUS_DRAFT = 'draft' + STATUS_DEPRECATED = 'deprecated' + status = db.Column(db.String(32), default=STATUS_DRAFT) + + submitted_by = db.Column(db.Integer, db.ForeignKey('users.id')) + created_at = db.Column(db.DateTime, default=datetime.utcnow) + updated_at = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) + + installs = db.relationship('ExtensionInstall', backref='extension', lazy='dynamic') + + @property + def tags(self): + return json.loads(self.tags_json) if self.tags_json else [] + + @tags.setter + def tags(self, v): + self.tags_json = json.dumps(v) + + @property + def config_schema(self): + return json.loads(self.config_schema_json) if self.config_schema_json else {} + + @config_schema.setter + def config_schema(self, v): + self.config_schema_json = json.dumps(v) + + def to_dict(self): + return { + 'id': self.id, + 'name': self.name, + 'display_name': self.display_name, + 'slug': self.slug, + 'description': self.description, + 'long_description': self.long_description, + 'version': self.version, + 'author': self.author, + 'homepage': self.homepage, + 'repository': self.repository, + 'license': self.license, + 'category': self.category, + 'tags': self.tags, + 'extension_type': self.extension_type, + 'config_schema': self.config_schema, + 'rating': self.rating, + 'rating_count': self.rating_count, + 'download_count': self.download_count, + 'status': self.status, + 'install_count': self.installs.count(), + 'created_at': self.created_at.isoformat() if self.created_at else None, + 'updated_at': self.updated_at.isoformat() if self.updated_at else None, + } + + +class ExtensionInstall(db.Model): + """Tracks extension installations.""" + __tablename__ = 'extension_installs' + + id = db.Column(db.Integer, primary_key=True) + extension_id = db.Column(db.Integer, db.ForeignKey('extensions.id'), nullable=False) + user_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False) + installed_version = db.Column(db.String(32)) + config_json = db.Column(db.Text) + is_active = db.Column(db.Boolean, default=True) + installed_at = db.Column(db.DateTime, default=datetime.utcnow) + + user = db.relationship('User', backref=db.backref('extension_installs', lazy='dynamic')) + + @property + def config(self): + return 
json.loads(self.config_json) if self.config_json else {} + + @config.setter + def config(self, v): + self.config_json = json.dumps(v) + + def to_dict(self): + return { + 'id': self.id, + 'extension_id': self.extension_id, + 'extension_name': self.extension.display_name if self.extension else None, + 'installed_version': self.installed_version, + 'config': self.config, + 'is_active': self.is_active, + 'installed_at': self.installed_at.isoformat() if self.installed_at else None, + } diff --git a/backend/app/models/metric_alert.py b/backend/app/models/metric_alert.py new file mode 100644 index 00000000..e74b11c1 --- /dev/null +++ b/backend/app/models/metric_alert.py @@ -0,0 +1,76 @@ +"""Models for fleet-wide metric monitoring and alerting.""" + +import uuid +from datetime import datetime +from app import db + + +class ServerAlertThreshold(db.Model): + """Per-server or global alert threshold configuration.""" + __tablename__ = 'server_alert_thresholds' + + id = db.Column(db.String(36), primary_key=True, default=lambda: str(uuid.uuid4())) + server_id = db.Column(db.String(36), db.ForeignKey('servers.id'), nullable=True, index=True) + # null = global default + + metric = db.Column(db.String(20), nullable=False) # cpu, memory, disk + warning_threshold = db.Column(db.Float, default=80.0) + critical_threshold = db.Column(db.Float, default=95.0) + duration_seconds = db.Column(db.Integer, default=300) # sustained for N seconds + enabled = db.Column(db.Boolean, default=True) + + created_at = db.Column(db.DateTime, default=datetime.utcnow) + updated_at = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) + + server = db.relationship('Server', backref='alert_thresholds') + + def to_dict(self): + return { + 'id': self.id, + 'server_id': self.server_id, + 'server_name': self.server.name if self.server else None, + 'metric': self.metric, + 'warning_threshold': self.warning_threshold, + 'critical_threshold': self.critical_threshold, + 'duration_seconds': self.duration_seconds, + 'enabled': self.enabled, + 'created_at': self.created_at.isoformat() if self.created_at else None, + } + + +class MetricAlert(db.Model): + """Alert triggered when server metrics exceed thresholds.""" + __tablename__ = 'metric_alerts' + + id = db.Column(db.String(36), primary_key=True, default=lambda: str(uuid.uuid4())) + server_id = db.Column(db.String(36), db.ForeignKey('servers.id'), nullable=False, index=True) + + metric = db.Column(db.String(20), nullable=False) # cpu, memory, disk + severity = db.Column(db.String(10), nullable=False) # warning, critical + value = db.Column(db.Float) # the value that triggered it + threshold = db.Column(db.Float) # the threshold exceeded + duration_seconds = db.Column(db.Integer) # how long it was exceeded + + status = db.Column(db.String(20), default='active', index=True) # active, acknowledged, resolved + acknowledged_by = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=True) + + created_at = db.Column(db.DateTime, default=datetime.utcnow) + resolved_at = db.Column(db.DateTime) + + server = db.relationship('Server', backref='metric_alerts') + + def to_dict(self): + return { + 'id': self.id, + 'server_id': self.server_id, + 'server_name': self.server.name if self.server else None, + 'metric': self.metric, + 'severity': self.severity, + 'value': self.value, + 'threshold': self.threshold, + 'duration_seconds': self.duration_seconds, + 'status': self.status, + 'acknowledged_by': self.acknowledged_by, + 'created_at': self.created_at.isoformat() if self.created_at 
else None, + 'resolved_at': self.resolved_at.isoformat() if self.resolved_at else None, + } diff --git a/backend/app/models/server.py b/backend/app/models/server.py index 4004a8fc..03d827e8 100644 --- a/backend/app/models/server.py +++ b/backend/app/models/server.py @@ -16,11 +16,16 @@ class ServerGroup(db.Model): icon = db.Column(db.String(50), default='server') # Icon name parent_id = db.Column(db.String(36), db.ForeignKey('server_groups.id'), nullable=True) + # Fleet Management + auto_upgrade = db.Column(db.Boolean, default=False) + upgrade_channel = db.Column(db.String(20), default='stable') # stable, beta + created_at = db.Column(db.DateTime, default=datetime.utcnow) updated_at = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) # Relationships - servers = db.relationship('Server', back_populates='group', lazy='dynamic') + # Use 'subquery' to eagerly load servers in a single query, avoiding N+1 + servers = db.relationship('Server', back_populates='group', lazy='subquery') children = db.relationship('ServerGroup', backref=db.backref('parent', remote_side=[id])) def to_dict(self, include_servers=False): @@ -31,7 +36,9 @@ def to_dict(self, include_servers=False): 'color': self.color, 'icon': self.icon, 'parent_id': self.parent_id, - 'server_count': self.servers.count(), + 'auto_upgrade': self.auto_upgrade, + 'upgrade_channel': self.upgrade_channel, + 'server_count': len(self.servers), 'created_at': self.created_at.isoformat() if self.created_at else None, 'updated_at': self.updated_at.isoformat() if self.updated_at else None, } @@ -56,11 +63,11 @@ class Server(db.Model): ip_address = db.Column(db.String(45)) # IPv4 or IPv6 # Organization - group_id = db.Column(db.String(36), db.ForeignKey('server_groups.id'), nullable=True) + group_id = db.Column(db.String(36), db.ForeignKey('server_groups.id'), nullable=True, index=True) tags = db.Column(db.JSON, default=list) # ["production", "us-east", "docker"] # Status - status = db.Column(db.String(20), default='pending') + status = db.Column(db.String(20), default='pending', index=True) # pending, connecting, online, offline, error, maintenance last_seen = db.Column(db.DateTime) last_error = db.Column(db.Text) @@ -68,6 +75,8 @@ class Server(db.Model): # Agent Info agent_version = db.Column(db.String(20)) agent_id = db.Column(db.String(36), unique=True, index=True) # Agent's UUID + auto_upgrade = db.Column(db.Boolean, default=False) + upgrade_channel = db.Column(db.String(20), default='stable') # stable, beta # System Info (reported by agent) os_type = db.Column(db.String(20)) # linux, windows, darwin @@ -378,6 +387,13 @@ class ServerCommand(db.Model): error = db.Column(db.Text) exit_code = db.Column(db.Integer) + # Retry / offline queue + retry_count = db.Column(db.Integer, default=0) + max_retries = db.Column(db.Integer, default=3) + next_retry_at = db.Column(db.DateTime) + backoff_seconds = db.Column(db.Integer, default=30) # initial backoff, doubles each retry + queued = db.Column(db.Boolean, default=False) # True when agent was offline at send time + created_at = db.Column(db.DateTime, default=datetime.utcnow) server = db.relationship('Server', back_populates='commands') @@ -395,6 +411,9 @@ def to_dict(self): 'result': self.result, 'error': self.error, 'exit_code': self.exit_code, + 'retry_count': self.retry_count, + 'max_retries': self.max_retries, + 'queued': self.queued, 'created_at': self.created_at.isoformat() if self.created_at else None, } @@ -414,12 +433,29 @@ class AgentSession(db.Model): ip_address = 
db.Column(db.String(45)) user_agent = db.Column(db.String(255)) # Agent version info + # Latency tracking + heartbeat_latency_ms = db.Column(db.Float) # Latest heartbeat round-trip latency + avg_latency_ms = db.Column(db.Float) # Running average latency + latency_samples = db.Column(db.Integer, default=0) # Number of samples in average + is_active = db.Column(db.Boolean, default=True, index=True) disconnected_at = db.Column(db.DateTime) disconnect_reason = db.Column(db.String(100)) server = db.relationship('Server', back_populates='sessions') + def update_latency(self, latency_ms): + """Update latency with exponential moving average""" + self.heartbeat_latency_ms = latency_ms + if self.avg_latency_ms is None or self.latency_samples == 0: + self.avg_latency_ms = latency_ms + self.latency_samples = 1 + else: + # EMA with alpha = 0.2 for smoothing + alpha = 0.2 + self.avg_latency_ms = alpha * latency_ms + (1 - alpha) * self.avg_latency_ms + self.latency_samples += 1 + def to_dict(self): return { 'id': self.id, @@ -427,7 +463,107 @@ def to_dict(self): 'connected_at': self.connected_at.isoformat() if self.connected_at else None, 'last_heartbeat': self.last_heartbeat.isoformat() if self.last_heartbeat else None, 'ip_address': self.ip_address, + 'heartbeat_latency_ms': self.heartbeat_latency_ms, + 'avg_latency_ms': self.avg_latency_ms, 'is_active': self.is_active, 'disconnected_at': self.disconnected_at.isoformat() if self.disconnected_at else None, 'disconnect_reason': self.disconnect_reason, } + + +class AgentRollout(db.Model): + """Tracks staged rollout progress""" + __tablename__ = 'agent_rollouts' + + id = db.Column(db.String(36), primary_key=True, default=lambda: str(uuid.uuid4())) + version_id = db.Column(db.String(36), db.ForeignKey('agent_versions.id'), nullable=False) + group_id = db.Column(db.String(36), db.ForeignKey('server_groups.id'), nullable=True) + user_id = db.Column(db.Integer, db.ForeignKey('users.id')) + + # Configuration + batch_size = db.Column(db.Integer, default=5) + delay_minutes = db.Column(db.Integer, default=10) + strategy = db.Column(db.String(20), default='staged') # staged, all, canary + + # Progress + status = db.Column(db.String(20), default='pending') # pending, running, paused, completed, failed, cancelled + total_servers = db.Column(db.Integer, default=0) + processed_servers = db.Column(db.Integer, default=0) + failed_servers = db.Column(db.Integer, default=0) + current_wave = db.Column(db.Integer, default=0) + + # Results per server + server_results = db.Column(db.JSON, default=list) # [{server_id, status, error, wave}] + + error = db.Column(db.Text) + + started_at = db.Column(db.DateTime) + completed_at = db.Column(db.DateTime) + created_at = db.Column(db.DateTime, default=datetime.utcnow) + updated_at = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) + + version = db.relationship('AgentVersion') + + def to_dict(self): + return { + 'id': self.id, + 'version_id': self.version_id, + 'version': self.version.version if self.version else None, + 'group_id': self.group_id, + 'user_id': self.user_id, + 'batch_size': self.batch_size, + 'delay_minutes': self.delay_minutes, + 'strategy': self.strategy, + 'status': self.status, + 'total_servers': self.total_servers, + 'processed_servers': self.processed_servers, + 'failed_servers': self.failed_servers, + 'current_wave': self.current_wave, + 'server_results': self.server_results or [], + 'error': self.error, + 'started_at': self.started_at.isoformat() if self.started_at else None, + 
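`update_latency()` above is a standard exponential moving average with alpha = 0.2, so each new sample contributes 20% of the new average. A quick standalone illustration of why that smoothing matters for the latency columns (values are made up):

```python
ALPHA = 0.2  # same smoothing factor as AgentSession.update_latency

def ema(avg, sample):
    return sample if avg is None else ALPHA * sample + (1 - ALPHA) * avg

avg = None
for latency_ms in [40.0, 42.0, 250.0, 41.0, 39.0]:  # one transient spike
    avg = ema(avg, latency_ms)
    print(round(avg, 2))
# 40.0, 40.4, 82.32, 74.06, 67.04 -- the 250 ms spike shifts the average by
# roughly 42 ms instead of replacing it, so avg_latency_ms stays dashboard-friendly.
```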
'completed_at': self.completed_at.isoformat() if self.completed_at else None, + 'created_at': self.created_at.isoformat() if self.created_at else None, + } + + +class AgentVersion(db.Model): + """Available agent versions and compatibility matrix""" + __tablename__ = 'agent_versions' + + id = db.Column(db.String(36), primary_key=True, default=lambda: str(uuid.uuid4())) + version = db.Column(db.String(20), nullable=False, unique=True) + channel = db.Column(db.String(20), default='stable') # stable, beta + + # Compatibility + min_panel_version = db.Column(db.String(20)) + max_panel_version = db.Column(db.String(20)) + + # Metadata + release_notes = db.Column(db.Text) + is_active = db.Column(db.Boolean, default=True) + published_at = db.Column(db.DateTime, default=datetime.utcnow) + + # Assets (mapped by platform: linux-amd64, windows-amd64, etc.) + assets = db.Column(db.JSON) # {"linux-amd64": "url", "checksums": "url"} + + created_at = db.Column(db.DateTime, default=datetime.utcnow) + updated_at = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) + + def to_dict(self): + return { + 'id': self.id, + 'version': self.version, + 'channel': self.channel, + 'min_panel_version': self.min_panel_version, + 'max_panel_version': self.max_panel_version, + 'release_notes': self.release_notes, + 'is_active': self.is_active, + 'published_at': self.published_at.isoformat() if self.published_at else None, + 'assets': self.assets or {}, + 'created_at': self.created_at.isoformat() if self.created_at else None, + 'updated_at': self.updated_at.isoformat() if self.updated_at else None, + } + + def __repr__(self): + return f'<AgentVersion {self.version}>' diff --git a/backend/app/models/server_template.py b/backend/app/models/server_template.py new file mode 100644 index 00000000..19544d6e --- /dev/null +++ b/backend/app/models/server_template.py @@ -0,0 +1,197 @@ +from datetime import datetime +from app import db +import json + + +class ServerTemplate(db.Model): + """Defines expected state for a server — packages, services, firewall rules, users, files.""" + __tablename__ = 'server_templates' + + id = db.Column(db.Integer, primary_key=True) + name = db.Column(db.String(128), nullable=False, unique=True) + description = db.Column(db.Text) + category = db.Column(db.String(64), default='general') # web, database, mail, custom + + # Template version tracking + version = db.Column(db.Integer, default=1) + + # Inheritance + parent_id = db.Column(db.Integer, db.ForeignKey('server_templates.id'), nullable=True) + parent = db.relationship('ServerTemplate', remote_side=[id], backref='children') + + # Expected state specification + packages_json = db.Column(db.Text) # ["nginx", "php8.1-fpm", ...] + services_json = db.Column(db.Text) # [{"name": "nginx", "enabled": true, "running": true}, ...] + firewall_rules_json = db.Column(db.Text) # [{"port": 80, "protocol": "tcp", "action": "allow"}, ...] + files_json = db.Column(db.Text) # [{"path": "/etc/nginx/...", "content_hash": "...", "mode": "0644"}, ...] + users_json = db.Column(db.Text) # [{"name": "www-data", "groups": ["www-data"]}, ...] + sysctl_json = db.Column(db.Text) # [{"key": "net.ipv4.ip_forward", "value": "1"}, ...]
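`AgentVersion.assets` maps platform strings to download URLs, and the fleet service later builds the key as `os_type-architecture`. A lookup sketch under that convention (URLs are placeholders):

```python
def resolve_asset(assets, os_type, architecture):
    """Return (download_url, checksums_url) for a server's platform."""
    platform = f"{os_type}-{architecture}"  # e.g. "linux-amd64"
    return (assets or {}).get(platform), (assets or {}).get('checksums')

assets = {
    'linux-amd64': 'https://example.invalid/agent-1.4.0-linux-amd64.tar.gz',
    'checksums': 'https://example.invalid/agent-1.4.0.sha256',
}
url, checksums = resolve_asset(assets, 'linux', 'amd64')
assert url and checksums
print(resolve_asset(assets, 'windows', 'amd64'))  # (None, '...') -> no asset, upgrade is skipped
```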
+ + # Auto-remediation + auto_remediate = db.Column(db.Boolean, default=False) + remediation_approval_required = db.Column(db.Boolean, default=True) + + created_by = db.Column(db.Integer, db.ForeignKey('users.id')) + created_at = db.Column(db.DateTime, default=datetime.utcnow) + updated_at = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) + + assignments = db.relationship('ServerTemplateAssignment', backref='template', lazy='dynamic') + + def _get_json(self, field): + val = getattr(self, field) + return json.loads(val) if val else [] + + def _set_json(self, field, value): + setattr(self, field, json.dumps(value)) + + @property + def packages(self): + return self._get_json('packages_json') + + @packages.setter + def packages(self, v): + self._set_json('packages_json', v) + + @property + def services(self): + return self._get_json('services_json') + + @services.setter + def services(self, v): + self._set_json('services_json', v) + + @property + def firewall_rules(self): + return self._get_json('firewall_rules_json') + + @firewall_rules.setter + def firewall_rules(self, v): + self._set_json('firewall_rules_json', v) + + @property + def files(self): + return self._get_json('files_json') + + @files.setter + def files(self, v): + self._set_json('files_json', v) + + @property + def users(self): + return self._get_json('users_json') + + @users.setter + def users(self, v): + self._set_json('users_json', v) + + @property + def sysctl_params(self): + return self._get_json('sysctl_json') + + @sysctl_params.setter + def sysctl_params(self, v): + self._set_json('sysctl_json', v) + + def get_merged_spec(self): + """Get full spec including inherited fields from parent.""" + if not self.parent: + return { + 'packages': self.packages, + 'services': self.services, + 'firewall_rules': self.firewall_rules, + 'files': self.files, + 'users': self.users, + 'sysctl_params': self.sysctl_params, + } + parent_spec = self.parent.get_merged_spec() + # Child overrides parent + for key in ['packages', 'services', 'firewall_rules', 'files', 'users', 'sysctl_params']: + child_val = getattr(self, key) + if child_val: + if key == 'packages': + parent_spec[key] = list(set(parent_spec.get(key, []) + child_val)) + else: + parent_spec[key] = child_val + return parent_spec + + def to_dict(self): + return { + 'id': self.id, + 'name': self.name, + 'description': self.description, + 'category': self.category, + 'version': self.version, + 'parent_id': self.parent_id, + 'parent_name': self.parent.name if self.parent else None, + 'packages': self.packages, + 'services': self.services, + 'firewall_rules': self.firewall_rules, + 'files': self.files, + 'users': self.users, + 'sysctl_params': self.sysctl_params, + 'auto_remediate': self.auto_remediate, + 'remediation_approval_required': self.remediation_approval_required, + 'assignment_count': self.assignments.count(), + 'created_by': self.created_by, + 'created_at': self.created_at.isoformat() if self.created_at else None, + 'updated_at': self.updated_at.isoformat() if self.updated_at else None, + } + + def __repr__(self): + return f'<ServerTemplate {self.name}>' + + +class ServerTemplateAssignment(db.Model): + """Tracks which template is applied to which server.""" + __tablename__ = 'server_template_assignments' + + id = db.Column(db.Integer, primary_key=True) + template_id = db.Column(db.Integer, db.ForeignKey('server_templates.id'), nullable=False) + server_id = db.Column(db.Integer, db.ForeignKey('servers.id'), nullable=False) + + # Compliance status + STATUS_COMPLIANT = 'compliant' +
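`get_merged_spec()` gives child templates two distinct override behaviors: `packages` are unioned with the parent, while any other non-empty section replaces the parent's wholesale (an empty list means "inherit"). A standalone sketch of that rule:

```python
parent = {'packages': ['nginx'], 'services': [{'name': 'nginx', 'running': True}]}
child = {'packages': ['php8.1-fpm'], 'services': []}  # empty list -> inherit services

merged = dict(parent)
merged['packages'] = sorted(set(parent['packages'] + child['packages']))
if child['services']:  # falsy here, so the parent's services survive
    merged['services'] = child['services']

print(merged)
# {'packages': ['nginx', 'php8.1-fpm'], 'services': [{'name': 'nginx', 'running': True}]}
```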
STATUS_DRIFTED = 'drifted' + STATUS_CHECKING = 'checking' + STATUS_REMEDIATING = 'remediating' + STATUS_UNKNOWN = 'unknown' + status = db.Column(db.String(32), default=STATUS_UNKNOWN) + + # Drift details + drift_report_json = db.Column(db.Text) + last_check_at = db.Column(db.DateTime) + last_remediation_at = db.Column(db.DateTime) + + applied_at = db.Column(db.DateTime, default=datetime.utcnow) + updated_at = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) + + server = db.relationship('Server', backref=db.backref('template_assignments', lazy='dynamic')) + + __table_args__ = ( + db.UniqueConstraint('template_id', 'server_id', name='uq_template_server'), + ) + + @property + def drift_report(self): + return json.loads(self.drift_report_json) if self.drift_report_json else {} + + @drift_report.setter + def drift_report(self, v): + self.drift_report_json = json.dumps(v) + + def to_dict(self): + return { + 'id': self.id, + 'template_id': self.template_id, + 'template_name': self.template.name if self.template else None, + 'server_id': self.server_id, + 'server_name': self.server.name if self.server else None, + 'status': self.status, + 'drift_report': self.drift_report, + 'last_check_at': self.last_check_at.isoformat() if self.last_check_at else None, + 'last_remediation_at': self.last_remediation_at.isoformat() if self.last_remediation_at else None, + 'applied_at': self.applied_at.isoformat() if self.applied_at else None, + } + + def __repr__(self): + return f'<ServerTemplateAssignment template={self.template_id} server={self.server_id}>' diff --git a/backend/app/models/status_page.py b/backend/app/models/status_page.py new file mode 100644 index 00000000..d2fda84a --- /dev/null +++ b/backend/app/models/status_page.py @@ -0,0 +1,192 @@ +from datetime import datetime +from app import db +import json + + +class StatusPage(db.Model): + """Public-facing status page configuration.""" + __tablename__ = 'status_pages' + + id = db.Column(db.Integer, primary_key=True) + name = db.Column(db.String(128), nullable=False) + slug = db.Column(db.String(128), nullable=False, unique=True) + description = db.Column(db.Text) + + # Branding + logo_url = db.Column(db.String(512)) + primary_color = db.Column(db.String(7), default='#4f46e5') + custom_domain = db.Column(db.String(256)) + + # Settings + is_public = db.Column(db.Boolean, default=True) + show_uptime = db.Column(db.Boolean, default=True) + show_history = db.Column(db.Boolean, default=True) + + created_at = db.Column(db.DateTime, default=datetime.utcnow) + updated_at = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) + + components = db.relationship('StatusComponent', backref='status_page', lazy='dynamic', + order_by='StatusComponent.sort_order', cascade='all, delete-orphan') + incidents = db.relationship('StatusIncident', backref='status_page', lazy='dynamic', + order_by='StatusIncident.created_at.desc()', cascade='all, delete-orphan') + + def to_dict(self): + return { + 'id': self.id, + 'name': self.name, + 'slug': self.slug, + 'description': self.description, + 'logo_url': self.logo_url, + 'primary_color': self.primary_color, + 'custom_domain': self.custom_domain, + 'is_public': self.is_public, + 'show_uptime': self.show_uptime, + 'show_history': self.show_history, + 'component_count': self.components.count(), + 'created_at': self.created_at.isoformat() if self.created_at else None, + } + + +class StatusComponent(db.Model): + """A service/component shown on the status page.""" + __tablename__ = 'status_components' + + id = db.Column(db.Integer, primary_key=True) + page_id = 
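The diff stores drift as opaque JSON in `drift_report_json` without fixing a schema. One plausible shape, sketched for the package section only (the field names here are assumptions):

```python
def package_drift(expected, installed):
    missing = [p for p in expected if p not in installed]
    return {'packages': {'missing': missing, 'ok': not missing}}

report = package_drift(['nginx', 'php8.1-fpm'], {'nginx'})
status = 'compliant' if report['packages']['ok'] else 'drifted'
print(status, report)
# drifted {'packages': {'missing': ['php8.1-fpm'], 'ok': False}}
```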
db.Column(db.Integer, db.ForeignKey('status_pages.id'), nullable=False) + name = db.Column(db.String(128), nullable=False) + description = db.Column(db.Text) + group = db.Column(db.String(64)) # e.g., "Web Services", "APIs" + sort_order = db.Column(db.Integer, default=0) + + # Health check config + check_type = db.Column(db.String(16), default='http') # http, tcp, dns, smtp, ping + check_target = db.Column(db.String(512)) # URL, host:port, etc. + check_interval = db.Column(db.Integer, default=60) # seconds + check_timeout = db.Column(db.Integer, default=10) + + # Status + STATUS_OPERATIONAL = 'operational' + STATUS_DEGRADED = 'degraded' + STATUS_PARTIAL = 'partial_outage' + STATUS_MAJOR = 'major_outage' + STATUS_MAINTENANCE = 'maintenance' + status = db.Column(db.String(32), default=STATUS_OPERATIONAL) + + last_check_at = db.Column(db.DateTime) + last_response_time = db.Column(db.Integer) # ms + + # Uptime data + uptime_24h = db.Column(db.Float, default=100.0) + uptime_7d = db.Column(db.Float, default=100.0) + uptime_30d = db.Column(db.Float, default=100.0) + uptime_90d = db.Column(db.Float, default=100.0) + + created_at = db.Column(db.DateTime, default=datetime.utcnow) + + checks = db.relationship('HealthCheck', backref='component', lazy='dynamic', + order_by='HealthCheck.checked_at.desc()', cascade='all, delete-orphan') + + def to_dict(self): + return { + 'id': self.id, + 'page_id': self.page_id, + 'name': self.name, + 'description': self.description, + 'group': self.group, + 'sort_order': self.sort_order, + 'check_type': self.check_type, + 'check_target': self.check_target, + 'check_interval': self.check_interval, + 'check_timeout': self.check_timeout, + 'status': self.status, + 'last_check_at': self.last_check_at.isoformat() if self.last_check_at else None, + 'last_response_time': self.last_response_time, + 'uptime_24h': self.uptime_24h, + 'uptime_7d': self.uptime_7d, + 'uptime_30d': self.uptime_30d, + 'uptime_90d': self.uptime_90d, + } + + +class HealthCheck(db.Model): + """Individual health check result.""" + __tablename__ = 'health_checks' + + id = db.Column(db.Integer, primary_key=True) + component_id = db.Column(db.Integer, db.ForeignKey('status_components.id'), nullable=False) + status = db.Column(db.String(16)) # up, down, degraded + response_time = db.Column(db.Integer) # ms + status_code = db.Column(db.Integer) + error = db.Column(db.Text) + checked_at = db.Column(db.DateTime, default=datetime.utcnow) + + def to_dict(self): + return { + 'id': self.id, + 'component_id': self.component_id, + 'status': self.status, + 'response_time': self.response_time, + 'status_code': self.status_code, + 'error': self.error, + 'checked_at': self.checked_at.isoformat() if self.checked_at else None, + } + + +class StatusIncident(db.Model): + """An incident on the status page.""" + __tablename__ = 'status_incidents' + + id = db.Column(db.Integer, primary_key=True) + page_id = db.Column(db.Integer, db.ForeignKey('status_pages.id'), nullable=False) + title = db.Column(db.String(256), nullable=False) + status = db.Column(db.String(32), default='investigating') # investigating, identified, monitoring, resolved + impact = db.Column(db.String(32), default='minor') # none, minor, major, critical + body = db.Column(db.Text) + + # Maintenance window + is_maintenance = db.Column(db.Boolean, default=False) + scheduled_start = db.Column(db.DateTime) + scheduled_end = db.Column(db.DateTime) + + resolved_at = db.Column(db.DateTime) + created_at = db.Column(db.DateTime, default=datetime.utcnow) + updated_at = 
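The `uptime_24h` through `uptime_90d` columns on `StatusComponent` are denormalized rollups of `HealthCheck` rows. A sketch of how such a percentage could be computed (the checker job itself is not part of this diff):

```python
from collections import namedtuple
from datetime import datetime, timedelta

Check = namedtuple('Check', 'status checked_at')

def uptime_percentage(checks, hours=24):
    cutoff = datetime.utcnow() - timedelta(hours=hours)
    window = [c for c in checks if c.checked_at >= cutoff]
    if not window:
        return 100.0  # matches the column default when there is no history yet
    up = sum(1 for c in window if c.status == 'up')
    return round(up / len(window) * 100, 2)

now = datetime.utcnow()
history = [Check('up', now - timedelta(minutes=m)) for m in range(5, 65, 5)]
history.append(Check('down', now))
print(uptime_percentage(history))  # 92.31 -- 12 up out of 13 checks
```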
db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) + + updates = db.relationship('StatusIncidentUpdate', backref='incident', lazy='dynamic', + order_by='StatusIncidentUpdate.created_at.desc()', cascade='all, delete-orphan') + + def to_dict(self): + return { + 'id': self.id, + 'page_id': self.page_id, + 'title': self.title, + 'status': self.status, + 'impact': self.impact, + 'body': self.body, + 'is_maintenance': self.is_maintenance, + 'scheduled_start': self.scheduled_start.isoformat() if self.scheduled_start else None, + 'scheduled_end': self.scheduled_end.isoformat() if self.scheduled_end else None, + 'resolved_at': self.resolved_at.isoformat() if self.resolved_at else None, + 'created_at': self.created_at.isoformat() if self.created_at else None, + 'updates': [u.to_dict() for u in self.updates.limit(20).all()], + } + + +class StatusIncidentUpdate(db.Model): + """Timeline update for an incident.""" + __tablename__ = 'status_incident_updates' + + id = db.Column(db.Integer, primary_key=True) + incident_id = db.Column(db.Integer, db.ForeignKey('status_incidents.id'), nullable=False) + status = db.Column(db.String(32)) + body = db.Column(db.Text, nullable=False) + created_at = db.Column(db.DateTime, default=datetime.utcnow) + + def to_dict(self): + return { + 'id': self.id, + 'incident_id': self.incident_id, + 'status': self.status, + 'body': self.body, + 'created_at': self.created_at.isoformat() if self.created_at else None, + } diff --git a/backend/app/models/user.py b/backend/app/models/user.py index ab8fb244..493c6993 100644 --- a/backend/app/models/user.py +++ b/backend/app/models/user.py @@ -17,11 +17,11 @@ class User(db.Model): email = db.Column(db.String(120), unique=True, nullable=False, index=True) username = db.Column(db.String(80), unique=True, nullable=False, index=True) password_hash = db.Column(db.String(256), nullable=True) - auth_provider = db.Column(db.String(50), default='local') # local, google, github, oidc, saml + auth_provider = db.Column(db.String(50), default='local', index=True) # local, google, github, oidc, saml role = db.Column(db.String(20), default='developer') # 'admin', 'developer', 'viewer' permissions = db.Column(db.Text, nullable=True) # JSON per-feature read/write flags is_active = db.Column(db.Boolean, default=True) - created_at = db.Column(db.DateTime, default=datetime.utcnow) + created_at = db.Column(db.DateTime, default=datetime.utcnow, index=True) updated_at = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) last_login_at = db.Column(db.DateTime, nullable=True) created_by = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=True) @@ -36,6 +36,9 @@ class User(db.Model): backup_codes = db.Column(db.Text, nullable=True) # JSON array of hashed backup codes totp_confirmed_at = db.Column(db.DateTime, nullable=True) # When 2FA was enabled + # Sidebar preferences: { preset: 'full'|'web'|'email'|'devops'|'minimal'|'custom', hiddenItems: [...] 
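A status page usually rolls its components up into a single banner state. That rollup is not implemented in this diff; a common worst-wins ranking over the `StatusComponent` status constants might look like:

```python
# Severity order is an assumption; the diff defines the constants but no ranking.
SEVERITY = ['operational', 'maintenance', 'degraded', 'partial_outage', 'major_outage']

def overall_status(component_statuses):
    if not component_statuses:
        return 'operational'
    return max(component_statuses, key=SEVERITY.index)

print(overall_status(['operational', 'degraded', 'operational']))  # degraded
print(overall_status(['operational', 'major_outage']))             # major_outage
```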
} + sidebar_config = db.Column(db.Text, nullable=True) + + # Relationships applications = db.relationship('Application', backref='owner', lazy='dynamic') @@ -166,6 +169,19 @@ def has_permission(self, feature, level='read'): feature_perms = perms.get(feature, {}) return feature_perms.get(level, False) + def get_sidebar_config(self): + """Return sidebar config dict, or default.""" + if self.sidebar_config: + try: + return json.loads(self.sidebar_config) + except (json.JSONDecodeError, TypeError): + pass + return {'preset': 'full', 'hiddenItems': []} + + def set_sidebar_config(self, config): + """Store sidebar config as JSON.""" + self.sidebar_config = json.dumps(config) if config else None + def to_dict(self): return { 'id': self.id, @@ -177,10 +193,12 @@ def to_dict(self): 'totp_enabled': self.totp_enabled, 'auth_provider': self.auth_provider or 'local', 'has_password': self.has_password, + 'sidebar_config': self.get_sidebar_config(), 'created_at': self.created_at.isoformat(), 'updated_at': self.updated_at.isoformat(), 'last_login_at': self.last_login_at.isoformat() if self.last_login_at else None, - 'created_by': self.created_by + 'created_by': self.created_by, + 'is_admin': self.is_admin } def get_backup_codes(self): diff --git a/backend/app/models/workflow.py b/backend/app/models/workflow.py index c30af13f..33398919 100644 --- a/backend/app/models/workflow.py +++ b/backend/app/models/workflow.py @@ -1,5 +1,6 @@ from datetime import datetime from app import db +import json class Workflow(db.Model): @@ -14,6 +15,13 @@ class Workflow(db.Model): edges = db.Column(db.Text, nullable=True) # JSON array of edges viewport = db.Column(db.Text, nullable=True) # JSON object {x, y, zoom} + # Automation fields + is_active = db.Column(db.Boolean, default=False) + trigger_type = db.Column(db.String(50), default='manual') # manual, cron, event, webhook + trigger_config = db.Column(db.Text, nullable=True) # JSON configuration for the trigger + last_run_at = db.Column(db.DateTime, nullable=True) + last_status = db.Column(db.String(20), nullable=True) # success, failed + # Metadata created_at = db.Column(db.DateTime, default=datetime.utcnow) updated_at = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) @@ -23,9 +31,9 @@ class Workflow(db.Model): # Relationships user = db.relationship('User', backref=db.backref('workflows', lazy='dynamic')) + executions = db.relationship('WorkflowExecution', backref='workflow', lazy='dynamic', cascade='all, delete-orphan') def to_dict(self): - import json return { 'id': self.id, 'name': self.name, @@ -33,6 +41,11 @@ def to_dict(self): 'nodes': json.loads(self.nodes) if self.nodes else [], 'edges': json.loads(self.edges) if self.edges else [], 'viewport': json.loads(self.viewport) if self.viewport else None, + 'is_active': self.is_active, + 'trigger_type': self.trigger_type, + 'trigger_config': json.loads(self.trigger_config) if self.trigger_config else {}, + 'last_run_at': self.last_run_at.isoformat() if self.last_run_at else None, + 'last_status': self.last_status, 'created_at': self.created_at.isoformat() if self.created_at else None, 'updated_at': self.updated_at.isoformat() if self.updated_at else None, 'user_id': self.user_id, @@ -42,3 +55,53 @@ def __repr__(self): return f'<Workflow {self.name}>' + + +class WorkflowExecution(db.Model): + __tablename__ = 'workflow_executions' + + id = db.Column(db.Integer, primary_key=True) + workflow_id = db.Column(db.Integer, db.ForeignKey('workflows.id'), nullable=False) + status = db.Column(db.String(20),
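With `trigger_type='cron'`, a scheduler has to decide when a workflow is due. `trigger_config` is free-form JSON in this diff, so both the `{"cron": "*/5 * * * *"}` shape and the use of the third-party `croniter` package below are assumptions, not something the schema prescribes:

```python
import json
from datetime import datetime
from croniter import croniter  # third-party; pip install croniter

def is_due(workflow, now=None):
    """Sketch: true when an active cron workflow's next run time has passed."""
    if not workflow.is_active or workflow.trigger_type != 'cron':
        return False
    cfg = json.loads(workflow.trigger_config or '{}')
    expr = cfg.get('cron')
    if not expr:
        return False
    base = workflow.last_run_at or datetime.utcnow()
    next_run = croniter(expr, base).get_next(datetime)
    return next_run <= (now or datetime.utcnow())
```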
default='running') # running, success, failed, cancelled + trigger_type = db.Column(db.String(50)) + context = db.Column(db.Text, nullable=True) # JSON data passed between steps + results = db.Column(db.Text, nullable=True) # JSON results of each step + + started_at = db.Column(db.DateTime, default=datetime.utcnow) + completed_at = db.Column(db.DateTime, nullable=True) + + logs = db.relationship('WorkflowLog', backref='execution', lazy='dynamic', cascade='all, delete-orphan') + + def to_dict(self): + return { + 'id': self.id, + 'workflow_id': self.workflow_id, + 'status': self.status, + 'trigger_type': self.trigger_type, + 'context': json.loads(self.context) if self.context else {}, + 'results': json.loads(self.results) if self.results else {}, + 'started_at': self.started_at.isoformat() if self.started_at else None, + 'completed_at': self.completed_at.isoformat() if self.completed_at else None, + 'duration': (self.completed_at - self.started_at).total_seconds() if self.completed_at else None + } + + +class WorkflowLog(db.Model): + __tablename__ = 'workflow_logs' + + id = db.Column(db.Integer, primary_key=True) + execution_id = db.Column(db.Integer, db.ForeignKey('workflow_executions.id'), nullable=False) + level = db.Column(db.String(10), default='INFO') # INFO, WARNING, ERROR, DEBUG + message = db.Column(db.Text, nullable=False) + node_id = db.Column(db.String(100), nullable=True) # ID of the node that generated the log + timestamp = db.Column(db.DateTime, default=datetime.utcnow) + + def to_dict(self): + return { + 'id': self.id, + 'execution_id': self.execution_id, + 'level': self.level, + 'message': self.message, + 'node_id': self.node_id, + 'timestamp': self.timestamp.isoformat() + } diff --git a/backend/app/models/workspace.py b/backend/app/models/workspace.py new file mode 100644 index 00000000..8e154f52 --- /dev/null +++ b/backend/app/models/workspace.py @@ -0,0 +1,141 @@ +from datetime import datetime +from app import db +import json + + +class Workspace(db.Model): + """Isolated container for servers, users, and settings.""" + __tablename__ = 'workspaces' + + id = db.Column(db.Integer, primary_key=True) + name = db.Column(db.String(128), nullable=False, unique=True) + slug = db.Column(db.String(128), nullable=False, unique=True) + description = db.Column(db.Text) + + # Branding + logo_url = db.Column(db.String(512)) + primary_color = db.Column(db.String(7)) # hex color + + # Settings + settings_json = db.Column(db.Text) + + # Quotas + max_servers = db.Column(db.Integer, default=0) # 0 = unlimited + max_users = db.Column(db.Integer, default=0) + max_api_calls = db.Column(db.Integer, default=0) + + # Status + STATUS_ACTIVE = 'active' + STATUS_ARCHIVED = 'archived' + status = db.Column(db.String(32), default=STATUS_ACTIVE) + + # Billing + billing_notes = db.Column(db.Text) + + created_by = db.Column(db.Integer, db.ForeignKey('users.id')) + created_at = db.Column(db.DateTime, default=datetime.utcnow) + updated_at = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) + + members = db.relationship('WorkspaceMember', backref='workspace', lazy='dynamic') + api_keys = db.relationship('WorkspaceApiKey', backref='workspace', lazy='dynamic') + + @property + def settings(self): + return json.loads(self.settings_json) if self.settings_json else {} + + @settings.setter + def settings(self, v): + self.settings_json = json.dumps(v) + + def to_dict(self): + return { + 'id': self.id, + 'name': self.name, + 'slug': self.slug, + 'description': self.description, + 'logo_url': 
self.logo_url, + 'primary_color': self.primary_color, + 'settings': self.settings, + 'max_servers': self.max_servers, + 'max_users': self.max_users, + 'max_api_calls': self.max_api_calls, + 'status': self.status, + 'member_count': self.members.count(), + 'created_at': self.created_at.isoformat() if self.created_at else None, + 'updated_at': self.updated_at.isoformat() if self.updated_at else None, + } + + def __repr__(self): + return f'<Workspace {self.name}>' + + +class WorkspaceMember(db.Model): + """Maps users to workspaces with roles.""" + __tablename__ = 'workspace_members' + + id = db.Column(db.Integer, primary_key=True) + workspace_id = db.Column(db.Integer, db.ForeignKey('workspaces.id'), nullable=False) + user_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False) + + ROLE_OWNER = 'owner' + ROLE_ADMIN = 'admin' + ROLE_MEMBER = 'member' + ROLE_VIEWER = 'viewer' + role = db.Column(db.String(32), default=ROLE_MEMBER) + + joined_at = db.Column(db.DateTime, default=datetime.utcnow) + + user = db.relationship('User', backref=db.backref('workspace_memberships', lazy='dynamic')) + + __table_args__ = ( + db.UniqueConstraint('workspace_id', 'user_id', name='uq_workspace_user'), + ) + + def to_dict(self): + return { + 'id': self.id, + 'workspace_id': self.workspace_id, + 'user_id': self.user_id, + 'username': self.user.username if self.user else None, + 'email': self.user.email if self.user else None, + 'role': self.role, + 'joined_at': self.joined_at.isoformat() if self.joined_at else None, + } + + +class WorkspaceApiKey(db.Model): + """API keys scoped to a single workspace.""" + __tablename__ = 'workspace_api_keys' + + id = db.Column(db.Integer, primary_key=True) + workspace_id = db.Column(db.Integer, db.ForeignKey('workspaces.id'), nullable=False) + name = db.Column(db.String(128), nullable=False) + key_hash = db.Column(db.String(256), nullable=False) + key_prefix = db.Column(db.String(16)) + scopes_json = db.Column(db.Text) + is_active = db.Column(db.Boolean, default=True) + expires_at = db.Column(db.DateTime, nullable=True) + created_by = db.Column(db.Integer, db.ForeignKey('users.id')) + created_at = db.Column(db.DateTime, default=datetime.utcnow) + last_used_at = db.Column(db.DateTime) + + @property + def scopes(self): + return json.loads(self.scopes_json) if self.scopes_json else [] + + @scopes.setter + def scopes(self, v): + self.scopes_json = json.dumps(v) + + def to_dict(self): + return { + 'id': self.id, + 'workspace_id': self.workspace_id, + 'name': self.name, + 'key_prefix': self.key_prefix, + 'scopes': self.scopes, + 'is_active': self.is_active, + 'expires_at': self.expires_at.isoformat() if self.expires_at else None, + 'created_at': self.created_at.isoformat() if self.created_at else None, + 'last_used_at': self.last_used_at.isoformat() if self.last_used_at else None, + } diff --git a/backend/app/services/advanced_ssl_service.py b/backend/app/services/advanced_ssl_service.py new file mode 100644 index 00000000..5a0de41d --- /dev/null +++ b/backend/app/services/advanced_ssl_service.py @@ -0,0 +1,192 @@ +import json +import logging +import subprocess +from datetime import datetime +from app.utils.system import run_command + +logger = logging.getLogger(__name__) + + +class AdvancedSSLService: + """Service for advanced SSL certificate features.""" + + SSL_PROFILES = { + 'modern': { + 'label': 'Modern (TLS 1.3 only)', + 'protocols': 'TLSv1.3', + 'ciphers': '', + 'description': 'Best security. 
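`WorkspaceApiKey` stores only `key_hash` plus a short `key_prefix`, which implies a show-once issuance flow: the plaintext is returned a single time, and the prefix lets keys be listed without exposing them. A sketch under those assumptions (the token format and hash choice are illustrative):

```python
import hashlib
import hmac
import secrets

def issue_key():
    """Return (plaintext_once, key_prefix, key_hash). Only the last two are stored."""
    plaintext = 'wk_' + secrets.token_urlsafe(32)
    return plaintext, plaintext[:8], hashlib.sha256(plaintext.encode()).hexdigest()

def verify_key(presented, stored_hash):
    digest = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(digest, stored_hash)  # constant-time compare

key, prefix, key_hash = issue_key()
assert verify_key(key, key_hash)
assert not verify_key('wk_not-the-key', key_hash)
```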
Supports only modern browsers.', + }, + 'intermediate': { + 'label': 'Intermediate (TLS 1.2+)', + 'protocols': 'TLSv1.2 TLSv1.3', + 'ciphers': 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384', + 'description': 'Recommended for most servers. Good compatibility.', + }, + 'legacy': { + 'label': 'Legacy (TLS 1.0+)', + 'protocols': 'TLSv1 TLSv1.1 TLSv1.2 TLSv1.3', + 'ciphers': 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256', + 'description': 'Maximum compatibility. Supports old clients.', + }, + } + + @staticmethod + def get_ssl_profiles(): + return AdvancedSSLService.SSL_PROFILES + + @staticmethod + def issue_wildcard_cert(domain, dns_provider, credentials): + """Issue wildcard SSL via DNS-01 challenge.""" + wildcard = f'*.{domain}' + cmd = ['certbot', 'certonly', '--non-interactive', '--agree-tos', + '--dns-' + dns_provider, '-d', domain, '-d', wildcard] + + if dns_provider == 'cloudflare': + cred_file = f'/tmp/certbot-{dns_provider}.ini' + with open(cred_file, 'w') as f: + f.write(f"dns_cloudflare_api_token = {credentials.get('api_token', '')}\n") + import os + os.chmod(cred_file, 0o600) + cmd.extend(['--dns-cloudflare-credentials', cred_file]) + + try: + result = run_command(cmd) + return {'success': True, 'domain': domain, 'type': 'wildcard', 'output': result.get('stdout', '')} + except Exception as e: + return {'success': False, 'error': str(e)} + + @staticmethod + def issue_san_cert(domains): + """Issue multi-domain (SAN) certificate.""" + if not domains or len(domains) < 1: + raise ValueError('At least one domain required') + + cmd = ['certbot', 'certonly', '--non-interactive', '--agree-tos', + '--webroot', '-w', '/var/www/html'] + for d in domains: + cmd.extend(['-d', d]) + + try: + result = run_command(cmd) + return {'success': True, 'domains': domains, 'type': 'san', 'output': result.get('stdout', '')} + except Exception as e: + return {'success': False, 'error': str(e)} + + @staticmethod + def upload_custom_cert(domain, cert_pem, key_pem, chain_pem=None): + """Upload custom certificate files.""" + import os + cert_dir = f'/etc/ssl/serverkit/{domain}' + os.makedirs(cert_dir, exist_ok=True) + + cert_path = os.path.join(cert_dir, 'cert.pem') + key_path = os.path.join(cert_dir, 'key.pem') + chain_path = os.path.join(cert_dir, 'chain.pem') + + with open(cert_path, 'w') as f: + f.write(cert_pem) + with open(key_path, 'w') as f: + f.write(key_pem) + if os.name != 'nt': + os.chmod(key_path, 0o600) + if chain_pem: + with open(chain_path, 'w') as f: + f.write(chain_pem) + + return { + 'domain': domain, + 'cert_path': cert_path, + 'key_path': key_path, + 'chain_path': chain_path if chain_pem else None, + } + + @staticmethod + def get_cert_health(domain): + """Check SSL health — grade, cipher suites, protocol versions.""" + import ssl + import socket + from datetime import timezone + + result = { + 'domain': domain, + 'valid': False, + 'grade': 'F', + 'protocols': [], + 'cipher_suites': [], + 'issuer': None, + 'expires_at': None, + 'days_remaining': None, + } + + try: + ctx = ssl.create_default_context() + with socket.create_connection((domain, 443), timeout=10) as sock: + with ctx.wrap_socket(sock, server_hostname=domain) as ssock: + cert = ssock.getpeercert() + cipher = ssock.cipher() + + result['valid'] = True + result['cipher_suites'] = [cipher[0]] if cipher else [] + result['protocols'] = 
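Each `SSL_PROFILES` entry carries exactly what a web-server TLS block needs. How ServerKit writes vhost config is outside this file, but rendering a profile into an nginx snippet would be roughly:

```python
def render_nginx_ssl(profile):
    lines = [f"ssl_protocols {profile['protocols']};"]
    if profile['ciphers']:  # the 'modern' profile leaves this empty on purpose
        lines.append(f"ssl_ciphers {profile['ciphers']};")
        lines.append('ssl_prefer_server_ciphers on;')
    return '\n'.join(lines)

intermediate = {
    'protocols': 'TLSv1.2 TLSv1.3',
    'ciphers': 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256',
}
print(render_nginx_ssl(intermediate))
```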
[ssock.version()] + + # Parse expiry + not_after = cert.get('notAfter') + if not_after: + expiry = datetime.strptime(not_after, '%b %d %H:%M:%S %Y %Z') + result['expires_at'] = expiry.isoformat() + result['days_remaining'] = (expiry - datetime.utcnow()).days + + # Issuer + issuer = cert.get('issuer', ()) + for field in issuer: + for k, v in field: + if k == 'organizationName': + result['issuer'] = v + + # Simple grading + version = ssock.version() + if version == 'TLSv1.3': + result['grade'] = 'A+' + elif version == 'TLSv1.2': + result['grade'] = 'A' + elif version == 'TLSv1.1': + result['grade'] = 'B' + else: + result['grade'] = 'C' + + except Exception as e: + result['error'] = str(e) + + return result + + @staticmethod + def get_expiry_alerts(days_threshold=30): + """Get certificates expiring within threshold days.""" + import os + import glob + + alerts = [] + cert_paths = glob.glob('/etc/letsencrypt/live/*/cert.pem') + cert_paths += glob.glob('/etc/ssl/serverkit/*/cert.pem') + + for cert_path in cert_paths: + try: + domain = os.path.basename(os.path.dirname(cert_path)) + result = run_command(['openssl', 'x509', '-enddate', '-noout', '-in', cert_path]) + stdout = result.get('stdout', '') + if 'notAfter=' in stdout: + date_str = stdout.split('notAfter=')[1].strip() + expiry = datetime.strptime(date_str, '%b %d %H:%M:%S %Y %Z') + days = (expiry - datetime.utcnow()).days + if days <= days_threshold: + alerts.append({ + 'domain': domain, + 'expires_at': expiry.isoformat(), + 'days_remaining': days, + 'severity': 'critical' if days <= 7 else 'warning', + }) + except Exception: + continue + + return sorted(alerts, key=lambda x: x.get('days_remaining', 999)) diff --git a/backend/app/services/agent_fleet_service.py b/backend/app/services/agent_fleet_service.py new file mode 100644 index 00000000..a6b19656 --- /dev/null +++ b/backend/app/services/agent_fleet_service.py @@ -0,0 +1,542 @@ +""" +AgentFleet Service + +Manages agent fleet operations including bulk upgrades, staged rollouts, +health monitoring, offline command queuing, and retry with backoff. 
+""" + +from datetime import datetime, timedelta +from typing import List, Dict, Optional +import threading +import time +import uuid + +from app import db +from app.models.server import ( + Server, ServerGroup, AgentVersion, ServerCommand, AgentSession, AgentRollout +) +from app.services.agent_registry import agent_registry + + +class AgentFleetService: + """Service for managing a fleet of agents.""" + + def __init__(self): + self._rollout_threads = {} + self._rollout_cancel = {} # rollout_id -> threading.Event + + # ==================== Fleet Health ==================== + + def get_fleet_health(self) -> Dict: + """Get aggregated health metrics for the entire fleet.""" + now = datetime.utcnow() + one_hour_ago = now - timedelta(hours=1) + + servers = Server.query.all() + total_count = len(servers) + + online_count = sum(1 for s in servers if s.status == 'online') + offline_count = sum(1 for s in servers if s.status == 'offline') + pending_count = sum(1 for s in servers if s.status == 'pending') + + # Real heartbeat latency from active sessions + active_sessions = AgentSession.query.filter_by(is_active=True).all() + latencies = [s.avg_latency_ms for s in active_sessions if s.avg_latency_ms is not None] + avg_latency = sum(latencies) / len(latencies) if latencies else 0.0 + + # Command success rate in the last hour + commands = ServerCommand.query.filter(ServerCommand.created_at >= one_hour_ago).all() + if commands: + success_count = sum(1 for c in commands if c.status == 'completed') + command_success_rate = (success_count / len(commands)) * 100 + else: + command_success_rate = 100.0 + + # Queued commands count + queued_count = ServerCommand.query.filter_by(queued=True, status='pending').count() + + return { + 'total_servers': total_count, + 'online_servers': online_count, + 'offline_servers': offline_count, + 'pending_servers': pending_count, + 'uptime_percentage': (online_count / total_count * 100) if total_count > 0 else 100, + 'avg_heartbeat_latency': round(avg_latency, 1), + 'command_success_rate': round(command_success_rate, 1), + 'queued_commands': queued_count, + 'version_distribution': self._get_version_distribution(servers) + } + + def _get_version_distribution(self, servers: List[Server]) -> Dict[str, int]: + """Calculate distribution of agent versions.""" + dist = {} + for s in servers: + version = s.agent_version or 'unknown' + dist[version] = dist.get(version, 0) + 1 + return dist + + # ==================== Upgrades ==================== + + def upgrade_servers(self, server_ids: List[str], version_id: Optional[str] = None, user_id: int = None) -> Dict: + """Trigger upgrades for a list of servers.""" + version = None + if version_id: + version = AgentVersion.query.get(version_id) + else: + version = AgentVersion.query.filter_by( + channel='stable', is_active=True + ).order_by(AgentVersion.version.desc()).first() + + if not version: + return {'success': False, 'error': 'No suitable agent version found'} + + results = [] + for server_id in server_ids: + server = Server.query.get(server_id) + if not server: + results.append({'server_id': server_id, 'success': False, 'error': 'Server not found'}) + continue + + platform = f"{server.os_type}-{server.architecture}" + download_url = version.assets.get(platform) if version.assets else None + + if not download_url: + results.append({'server_id': server_id, 'success': False, 'error': f'No asset for platform {platform}'}) + continue + + params = { + 'version': version.version, + 'download_url': download_url, + 'checksums_url': 
version.assets.get('checksums') if version.assets else None + } + + if server.status != 'online': + # Queue command for offline agent + self.queue_command(server_id, 'agent:update', params, user_id=user_id) + results.append({'server_id': server_id, 'success': True, 'message': 'Upgrade queued (agent offline)'}) + continue + + threading.Thread( + target=agent_registry.send_command, + args=(server_id, 'agent:update', params), + kwargs={'user_id': user_id, 'timeout': 60.0}, + daemon=True + ).start() + + results.append({'server_id': server_id, 'success': True, 'message': 'Upgrade triggered'}) + + return {'success': True, 'results': results} + + # ==================== Staged Rollouts ==================== + + def staged_rollout( + self, group_id: str, version_id: str, + batch_size: int = 5, delay_minutes: int = 10, + strategy: str = 'staged', user_id: int = None, + server_ids: List[str] = None + ) -> Dict: + """Perform a staged rollout of an agent version.""" + version = AgentVersion.query.get(version_id) + if not version: + return {'success': False, 'error': 'Version not found'} + + if group_id: + group = ServerGroup.query.get(group_id) + if not group: + return {'success': False, 'error': 'Group not found'} + target_servers = Server.query.filter_by(group_id=group_id, status='online').all() + elif server_ids: + target_servers = Server.query.filter( + Server.id.in_(server_ids), Server.status == 'online' + ).all() + else: + target_servers = Server.query.filter_by(status='online').all() + + if not target_servers: + return {'success': True, 'message': 'No online servers to upgrade'} + + ids = [s.id for s in target_servers] + + # Create persistent rollout record + rollout = AgentRollout( + version_id=version_id, + group_id=group_id, + user_id=user_id, + batch_size=batch_size, + delay_minutes=delay_minutes, + strategy=strategy, + status='running', + total_servers=len(ids), + started_at=datetime.utcnow() + ) + db.session.add(rollout) + db.session.commit() + + rollout_id = rollout.id + + # Setup cancellation + cancel_event = threading.Event() + self._rollout_cancel[rollout_id] = cancel_event + + thread = threading.Thread( + target=self._run_staged_rollout, + args=(rollout_id, ids, version_id, batch_size, delay_minutes, user_id, cancel_event), + daemon=True + ) + self._rollout_threads[rollout_id] = thread + thread.start() + + return {'success': True, 'rollout_id': rollout_id, 'rollout': rollout.to_dict()} + + def _run_staged_rollout(self, rollout_id, server_ids, version_id, batch_size, delay_minutes, user_id, cancel_event): + """Run staged rollout in batches with health checks between waves.""" + from app import create_app + app = create_app() + + with app.app_context(): + total = len(server_ids) + processed = 0 + failed = 0 + wave = 0 + server_results = [] + + for i in range(0, total, batch_size): + if cancel_event.is_set(): + self._update_rollout_status(rollout_id, 'cancelled', processed, failed, wave, server_results) + return + + batch = server_ids[i:i + batch_size] + wave += 1 + + # Upgrade this batch + result = self.upgrade_servers(batch, version_id, user_id) + batch_results = result.get('results', []) + + for r in batch_results: + server_results.append({ + 'server_id': r['server_id'], + 'status': 'success' if r.get('success') else 'failed', + 'error': r.get('error'), + 'wave': wave + }) + if not r.get('success'): + failed += 1 + + processed += len(batch) + + # Update rollout progress + self._update_rollout_status(rollout_id, 'running', processed, failed, wave, server_results) + + if processed < 
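The wave arithmetic behind `_run_staged_rollout` is worth making explicit: `batch_size` fixes the number of waves, and `delay_minutes` applies only between waves. With the defaults above, 23 servers roll out in 5 waves with 40 minutes of deliberate waiting:

```python
import math

def rollout_plan(total_servers, batch_size=5, delay_minutes=10):
    waves = math.ceil(total_servers / batch_size)
    inter_wave_wait = max(waves - 1, 0) * delay_minutes
    return waves, inter_wave_wait

print(rollout_plan(23))  # (5, 40)
print(rollout_plan(5))   # (1, 0) -- a single wave never waits
```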
total: + # Health check: if more than 50% of current wave failed, abort + wave_failures = sum(1 for r in batch_results if not r.get('success')) + if wave_failures > len(batch) * 0.5: + self._update_rollout_status( + rollout_id, 'failed', processed, failed, wave, server_results, + error=f'Wave {wave} had >50% failures ({wave_failures}/{len(batch)}), aborting rollout' + ) + return + + # Wait between waves, checking for cancellation + wait_seconds = delay_minutes * 60 + if cancel_event.wait(timeout=wait_seconds): + self._update_rollout_status(rollout_id, 'cancelled', processed, failed, wave, server_results) + return + + # Post-wave health check: verify previous batch servers are still online + offline_count = 0 + for sid in batch: + server = Server.query.get(sid) + if server and server.status != 'online': + offline_count += 1 + + if offline_count > len(batch) * 0.3: + self._update_rollout_status( + rollout_id, 'failed', processed, failed, wave, server_results, + error=f'Post-wave health check failed: {offline_count}/{len(batch)} servers offline after wave {wave}' + ) + return + + self._update_rollout_status(rollout_id, 'completed', processed, failed, wave, server_results) + + def _update_rollout_status(self, rollout_id, status, processed, failed, wave, server_results, error=None): + """Update rollout record in database.""" + try: + rollout = AgentRollout.query.get(rollout_id) + if rollout: + rollout.status = status + rollout.processed_servers = processed + rollout.failed_servers = failed + rollout.current_wave = wave + rollout.server_results = server_results + rollout.error = error + if status in ('completed', 'failed', 'cancelled'): + rollout.completed_at = datetime.utcnow() + db.session.commit() + except Exception as e: + print(f"Error updating rollout status: {e}") + db.session.rollback() + + def cancel_rollout(self, rollout_id: str) -> bool: + """Cancel an active rollout.""" + cancel_event = self._rollout_cancel.get(rollout_id) + if cancel_event: + cancel_event.set() + return True + + # Try to cancel in DB directly if thread already finished + rollout = AgentRollout.query.get(rollout_id) + if rollout and rollout.status == 'running': + rollout.status = 'cancelled' + rollout.completed_at = datetime.utcnow() + db.session.commit() + return True + + return False + + def get_rollouts(self, status: str = None, limit: int = 20) -> List[Dict]: + """Get rollout history.""" + query = AgentRollout.query.order_by(AgentRollout.created_at.desc()) + if status: + query = query.filter_by(status=status) + rollouts = query.limit(limit).all() + return [r.to_dict() for r in rollouts] + + def get_rollout(self, rollout_id: str) -> Optional[Dict]: + """Get a specific rollout.""" + rollout = AgentRollout.query.get(rollout_id) + return rollout.to_dict() if rollout else None + + # ==================== Registration ==================== + + def approve_registration(self, server_id: str, user_id: int) -> bool: + """Approve a pending agent registration.""" + server = Server.query.get(server_id) + if not server or server.status != 'pending': + return False + + server.status = 'connecting' + server.registered_by = user_id + server.registered_at = datetime.utcnow() + db.session.commit() + return True + + def reject_registration(self, server_id: str) -> bool: + """Reject and delete a pending agent registration.""" + server = Server.query.get(server_id) + if not server or server.status != 'pending': + return False + + db.session.delete(server) + db.session.commit() + return True + + # ==================== Offline Command 
Queue ==================== + + def queue_command( + self, server_id: str, action: str, params: dict = None, + user_id: int = None, max_retries: int = 3, backoff_seconds: int = 30 + ) -> ServerCommand: + """Queue a command for an offline agent. Delivered on reconnect.""" + command = ServerCommand( + id=str(uuid.uuid4()), + server_id=server_id, + user_id=user_id, + command_type=action, + command_data=params, + status='pending', + queued=True, + max_retries=max_retries, + backoff_seconds=backoff_seconds + ) + db.session.add(command) + db.session.commit() + return command + + def deliver_queued_commands(self, server_id: str): + """Deliver all queued commands for a server that just reconnected.""" + commands = ServerCommand.query.filter_by( + server_id=server_id, queued=True, status='pending' + ).order_by(ServerCommand.created_at.asc()).all() + + for cmd in commands: + cmd.queued = False + cmd.status = 'running' + cmd.started_at = datetime.utcnow() + db.session.commit() + + # Send asynchronously + threading.Thread( + target=self._deliver_single_command, + args=(server_id, cmd.id, cmd.command_type, cmd.command_data, cmd.user_id), + daemon=True + ).start() + + def _deliver_single_command(self, server_id, command_id, action, params, user_id): + """Send a single queued command and update its status.""" + from app import create_app + app = create_app() + + with app.app_context(): + result = agent_registry.send_command( + server_id, action, params, user_id=user_id, timeout=60.0 + ) + + try: + cmd = ServerCommand.query.get(command_id) + if cmd: + if result.get('success'): + cmd.status = 'completed' + cmd.result = result.get('data') + else: + cmd.status = 'failed' + cmd.error = result.get('error') + cmd.completed_at = datetime.utcnow() + db.session.commit() + except Exception as e: + print(f"Error updating queued command: {e}") + db.session.rollback() + + def get_queued_commands(self, server_id: str = None) -> List[Dict]: + """Get all pending queued commands, optionally filtered by server.""" + query = ServerCommand.query.filter_by(queued=True, status='pending') + if server_id: + query = query.filter_by(server_id=server_id) + return [c.to_dict() for c in query.order_by(ServerCommand.created_at.asc()).all()] + + # ==================== Command Retry ==================== + + def retry_command(self, command_id: str) -> Dict: + """Retry a failed command with exponential backoff.""" + cmd = ServerCommand.query.get(command_id) + if not cmd: + return {'success': False, 'error': 'Command not found'} + + if cmd.status not in ('failed', 'timeout'): + return {'success': False, 'error': f'Cannot retry command with status: {cmd.status}'} + + if cmd.retry_count >= cmd.max_retries: + return {'success': False, 'error': f'Max retries ({cmd.max_retries}) exceeded'} + + server = Server.query.get(cmd.server_id) + if not server: + return {'success': False, 'error': 'Server not found'} + + cmd.retry_count += 1 + + if server.status != 'online': + # Re-queue for offline delivery + cmd.queued = True + cmd.status = 'pending' + # Exponential backoff for next_retry_at + backoff = cmd.backoff_seconds * (2 ** (cmd.retry_count - 1)) + cmd.next_retry_at = datetime.utcnow() + timedelta(seconds=backoff) + db.session.commit() + return {'success': True, 'message': 'Command re-queued (agent offline)', 'retry_count': cmd.retry_count} + + # Agent is online, send immediately + cmd.status = 'running' + cmd.started_at = datetime.utcnow() + db.session.commit() + + threading.Thread( + target=self._deliver_single_command, + args=(cmd.server_id, 
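`retry_command` doubles the wait each attempt via `backoff_seconds * 2 ** (retry_count - 1)`. The full schedule for the column defaults (30 s initial, 3 retries) is easy to tabulate:

```python
def backoff_schedule(initial=30, max_retries=3):
    # retry 1 waits initial, retry 2 waits 2x, retry 3 waits 4x, then give up
    return [initial * 2 ** (attempt - 1) for attempt in range(1, max_retries + 1)]

print(backoff_schedule())       # [30, 60, 120]
print(sum(backoff_schedule()))  # 210 seconds of total back-off before max_retries
```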
cmd.id, cmd.command_type, cmd.command_data, cmd.user_id), + daemon=True + ).start() + + return {'success': True, 'message': 'Command retry triggered', 'retry_count': cmd.retry_count} + + def process_scheduled_retries(self): + """Process commands that are due for retry. Call periodically.""" + now = datetime.utcnow() + commands = ServerCommand.query.filter( + ServerCommand.status == 'pending', + ServerCommand.queued == True, + ServerCommand.next_retry_at != None, + ServerCommand.next_retry_at <= now + ).all() + + for cmd in commands: + server = Server.query.get(cmd.server_id) + if server and server.status == 'online': + cmd.queued = False + cmd.status = 'running' + cmd.started_at = datetime.utcnow() + db.session.commit() + + threading.Thread( + target=self._deliver_single_command, + args=(cmd.server_id, cmd.id, cmd.command_type, cmd.command_data, cmd.user_id), + daemon=True + ).start() + + # ==================== Diagnostics ==================== + + def get_server_diagnostics(self, server_id: str) -> Dict: + """Get detailed connection diagnostics for a server.""" + server = Server.query.get(server_id) + if not server: + return {'error': 'Server not found'} + + # Active session + active_session = AgentSession.query.filter_by( + server_id=server_id, is_active=True + ).first() + + # Recent sessions (last 10) + recent_sessions = AgentSession.query.filter_by( + server_id=server_id + ).order_by(AgentSession.connected_at.desc()).limit(10).all() + + # Command stats (last 24h) + one_day_ago = datetime.utcnow() - timedelta(hours=24) + recent_commands = ServerCommand.query.filter( + ServerCommand.server_id == server_id, + ServerCommand.created_at >= one_day_ago + ).all() + + total_cmds = len(recent_commands) + success_cmds = sum(1 for c in recent_commands if c.status == 'completed') + failed_cmds = sum(1 for c in recent_commands if c.status == 'failed') + timeout_cmds = sum(1 for c in recent_commands if c.status == 'timeout') + + # Calculate uptime from sessions + uptime_seconds = 0 + for session in recent_sessions: + if not session.connected_at: + continue + start = session.connected_at + end = session.disconnected_at or datetime.utcnow() + uptime_seconds += (end - start).total_seconds() + + # Queued commands for this server + queued = ServerCommand.query.filter_by( + server_id=server_id, queued=True, status='pending' + ).count() + + return { + 'server_id': server_id, + 'server_name': server.name, + 'status': server.status, + 'agent_version': server.agent_version, + 'last_seen': server.last_seen.isoformat() if server.last_seen else None, + 'connection': { + 'is_connected': active_session is not None, + 'current_latency_ms': active_session.heartbeat_latency_ms if active_session else None, + 'avg_latency_ms': active_session.avg_latency_ms if active_session else None, + 'connected_since': active_session.connected_at.isoformat() if active_session else None, + 'ip_address': active_session.ip_address if active_session else server.ip_address, + }, + 'commands_24h': { + 'total': total_cmds, + 'success': success_cmds, + 'failed': failed_cmds, + 'timeout': timeout_cmds, + 'success_rate': round((success_cmds / total_cmds * 100), 1) if total_cmds > 0 else 100.0, + }, + 'queued_commands': queued, + 'uptime_seconds_24h': round(uptime_seconds), + 'recent_sessions': [s.to_dict() for s in recent_sessions], + } + + +fleet_service = AgentFleetService() diff --git a/backend/app/services/agent_plugin_service.py b/backend/app/services/agent_plugin_service.py new file mode 100644 index 00000000..6bd44d6c --- /dev/null +++ 
b/backend/app/services/agent_plugin_service.py @@ -0,0 +1,306 @@ +import json +import logging +from datetime import datetime +from app import db +from app.models.agent_plugin import AgentPlugin, AgentPluginInstall +from app.models.server import Server + +logger = logging.getLogger(__name__) + + +class AgentPluginService: + """Service for managing agent plugins.""" + + # Plugin specification interface + PLUGIN_SPEC = { + 'required_fields': ['name', 'display_name', 'version'], + 'capability_types': ['metrics', 'health_checks', 'commands', 'scheduled_tasks', 'event_hooks'], + 'permission_types': ['filesystem', 'network', 'docker', 'process', 'system'], + } + + @staticmethod + def list_plugins(status=None): + query = AgentPlugin.query + if status: + query = query.filter_by(status=status) + return query.order_by(AgentPlugin.display_name).all() + + @staticmethod + def get_plugin(plugin_id): + return AgentPlugin.query.get(plugin_id) + + @staticmethod + def get_plugin_by_name(name): + return AgentPlugin.query.filter_by(name=name).first() + + @staticmethod + def create_plugin(data): + """Register a new plugin from manifest data.""" + if AgentPlugin.query.filter_by(name=data['name']).first(): + raise ValueError(f"Plugin '{data['name']}' already exists") + + plugin = AgentPlugin( + name=data['name'], + display_name=data.get('display_name', data['name']), + version=data['version'], + description=data.get('description', ''), + author=data.get('author', ''), + homepage=data.get('homepage', ''), + max_memory_mb=data.get('max_memory_mb', 128), + max_cpu_percent=data.get('max_cpu_percent', 10), + ) + plugin.manifest = data.get('manifest', data) + plugin.capabilities = data.get('capabilities', []) + plugin.dependencies = data.get('dependencies', []) + plugin.permissions = data.get('permissions', []) + + db.session.add(plugin) + db.session.commit() + return plugin + + @staticmethod + def update_plugin(plugin_id, data): + plugin = AgentPlugin.query.get(plugin_id) + if not plugin: + return None + + for field in ['display_name', 'version', 'description', 'author', 'homepage', + 'max_memory_mb', 'max_cpu_percent', 'status']: + if field in data: + setattr(plugin, field, data[field]) + + if 'capabilities' in data: + plugin.capabilities = data['capabilities'] + if 'dependencies' in data: + plugin.dependencies = data['dependencies'] + if 'permissions' in data: + plugin.permissions = data['permissions'] + if 'manifest' in data: + plugin.manifest = data['manifest'] + + db.session.commit() + return plugin + + @staticmethod + def delete_plugin(plugin_id): + plugin = AgentPlugin.query.get(plugin_id) + if not plugin: + return False + + # Check for active installations + active = plugin.installations.filter( + AgentPluginInstall.status.in_([ + AgentPluginInstall.STATUS_ENABLED, + AgentPluginInstall.STATUS_INSTALLING + ]) + ).count() + if active > 0: + raise ValueError(f'Cannot delete plugin with {active} active installations') + + # Remove all installation records + AgentPluginInstall.query.filter_by(plugin_id=plugin_id).delete() + db.session.delete(plugin) + db.session.commit() + return True + + # --- Installation Management --- + + @staticmethod + def install_plugin(plugin_id, server_id, config=None): + """Install a plugin on a server.""" + plugin = AgentPlugin.query.get(plugin_id) + if not plugin: + raise ValueError('Plugin not found') + + server = Server.query.get(server_id) + if not server: + raise ValueError('Server not found') + + # Check if already installed + existing = AgentPluginInstall.query.filter_by( + 
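`PLUGIN_SPEC` doubles as a validation schema (see `validate_manifest` at the end of this service). A manifest that passes it, with illustrative values:

```python
manifest = {
    'name': 'disk-smart',                  # required
    'display_name': 'SMART Disk Monitor',  # required
    'version': '0.1.0',                    # required
    'capabilities': ['metrics', 'health_checks'],
    'permissions': ['filesystem'],
    'max_memory_mb': 64,
}

required = ['name', 'display_name', 'version']
errors = [f'Missing required field: {f}' for f in required if f not in manifest]
assert not errors  # validate_manifest(manifest) would likewise return []
```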
plugin_id=plugin_id, server_id=server_id + ).first() + if existing and existing.status in ['enabled', 'installing']: + raise ValueError('Plugin already installed on this server') + + # Check dependencies + for dep_name in plugin.dependencies: + dep_plugin = AgentPlugin.query.filter_by(name=dep_name).first() + if not dep_plugin: + raise ValueError(f"Required dependency '{dep_name}' not available") + dep_install = AgentPluginInstall.query.filter_by( + plugin_id=dep_plugin.id, server_id=server_id, status='enabled' + ).first() + if not dep_install: + raise ValueError(f"Dependency '{dep_name}' not installed on server") + + if existing: + existing.status = AgentPluginInstall.STATUS_INSTALLING + existing.installed_version = plugin.version + existing.error_message = None + if config: + existing.config = config + install = existing + else: + install = AgentPluginInstall( + plugin_id=plugin_id, + server_id=server_id, + installed_version=plugin.version, + status=AgentPluginInstall.STATUS_INSTALLING, + ) + if config: + install.config = config + db.session.add(install) + + db.session.commit() + + # Send install command to agent + try: + from app.agent_gateway import get_agent_gateway + gw = get_agent_gateway() + if gw: + gw.send_command(server.agent_id, 'plugin_install', { + 'plugin_name': plugin.name, + 'version': plugin.version, + 'manifest': plugin.manifest, + 'config': install.config, + 'permissions': plugin.permissions, + 'resource_limits': { + 'max_memory_mb': plugin.max_memory_mb, + 'max_cpu_percent': plugin.max_cpu_percent, + } + }) + except Exception as e: + logger.warning(f'Could not send plugin install command: {e}') + + return install + + @staticmethod + def uninstall_plugin(install_id): + install = AgentPluginInstall.query.get(install_id) + if not install: + return False + + install.status = AgentPluginInstall.STATUS_UNINSTALLING + db.session.commit() + + # Send uninstall command to agent + try: + from app.agent_gateway import get_agent_gateway + gw = get_agent_gateway() + if gw and install.server: + gw.send_command(install.server.agent_id, 'plugin_uninstall', { + 'plugin_name': install.plugin.name, + }) + except Exception as e: + logger.warning(f'Could not send plugin uninstall command: {e}') + + return True + + @staticmethod + def enable_plugin(install_id): + install = AgentPluginInstall.query.get(install_id) + if not install: + return None + install.status = AgentPluginInstall.STATUS_ENABLED + db.session.commit() + return install + + @staticmethod + def disable_plugin(install_id): + install = AgentPluginInstall.query.get(install_id) + if not install: + return None + install.status = AgentPluginInstall.STATUS_DISABLED + db.session.commit() + + try: + from app.agent_gateway import get_agent_gateway + gw = get_agent_gateway() + if gw and install.server: + gw.send_command(install.server.agent_id, 'plugin_disable', { + 'plugin_name': install.plugin.name, + }) + except Exception as e: + logger.warning(f'Could not send plugin disable command: {e}') + + return install + + @staticmethod + def update_install_status(install_id, status, error=None, health=None, metrics=None): + install = AgentPluginInstall.query.get(install_id) + if not install: + return None + install.status = status + if error is not None: + install.error_message = error + if health is not None: + install.health_status = health + install.last_health_check = datetime.utcnow() + if metrics is not None: + install.metrics_json = json.dumps(metrics) + db.session.commit() + return install + + @staticmethod + def 
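For reference, the `plugin_install` message assembled above reaches the agent with roughly this shape. The values are illustrative placeholders (continuing the hypothetical `disk_watcher` plugin), and any envelope the gateway wraps around the payload is not shown in this hunk:

```python
# Approximate payload of the 'plugin_install' command, as built in install_plugin().
payload = {
    'plugin_name': 'disk_watcher',
    'version': '1.0.0',
    'manifest': {'name': 'disk_watcher', 'display_name': 'Disk Watcher', 'version': '1.0.0'},
    'config': {'interval_seconds': 60},   # per-install config, if provided
    'permissions': ['filesystem', 'system'],
    'resource_limits': {
        'max_memory_mb': 64,              # enforced by the agent at runtime
        'max_cpu_percent': 5,
    },
}
```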
update_install_config(install_id, config): + install = AgentPluginInstall.query.get(install_id) + if not install: + return None + install.config = config + db.session.commit() + + try: + from app.agent_gateway import get_agent_gateway + gw = get_agent_gateway() + if gw and install.server: + gw.send_command(install.server.agent_id, 'plugin_configure', { + 'plugin_name': install.plugin.name, + 'config': config, + }) + except Exception as e: + logger.warning(f'Could not send plugin config update: {e}') + + return install + + @staticmethod + def get_server_plugins(server_id): + return AgentPluginInstall.query.filter_by(server_id=server_id).all() + + @staticmethod + def get_plugin_installations(plugin_id): + return AgentPluginInstall.query.filter_by(plugin_id=plugin_id).all() + + @staticmethod + def get_install(install_id): + return AgentPluginInstall.query.get(install_id) + + @staticmethod + def bulk_install(plugin_id, server_ids, config=None): + """Install plugin on multiple servers.""" + results = [] + for sid in server_ids: + try: + install = AgentPluginService.install_plugin(plugin_id, sid, config) + results.append({'server_id': sid, 'status': 'installing', 'install_id': install.id}) + except ValueError as e: + results.append({'server_id': sid, 'status': 'error', 'error': str(e)}) + return results + + @staticmethod + def validate_manifest(manifest): + """Validate a plugin manifest against the spec.""" + errors = [] + for field in AgentPluginService.PLUGIN_SPEC['required_fields']: + if field not in manifest: + errors.append(f"Missing required field: {field}") + + for cap in manifest.get('capabilities', []): + if cap not in AgentPluginService.PLUGIN_SPEC['capability_types']: + errors.append(f"Unknown capability: {cap}") + + for perm in manifest.get('permissions', []): + if perm not in AgentPluginService.PLUGIN_SPEC['permission_types']: + errors.append(f"Unknown permission: {perm}") + + return errors diff --git a/backend/app/services/agent_registry.py b/backend/app/services/agent_registry.py index db298454..0118aa56 100644 --- a/backend/app/services/agent_registry.py +++ b/backend/app/services/agent_registry.py @@ -7,9 +7,12 @@ import hmac import hashlib +import logging import secrets import time import threading + +logger = logging.getLogger(__name__) from datetime import datetime, timedelta from typing import Dict, Optional, Callable, Any from dataclasses import dataclass, field @@ -120,7 +123,7 @@ def _check_heartbeats(self): self._handle_agent_timeout(server_id) except Exception as e: - print(f"Error in heartbeat checker: {e}") + logger.error("Error in heartbeat checker: %s", e) self._stop_heartbeat.wait(30) # Check every 30 seconds @@ -152,7 +155,7 @@ def _handle_agent_timeout(self, server_id: str): session.disconnect_reason = 'heartbeat_timeout' db.session.commit() except Exception as e: - print(f"Error updating server status: {e}") + logger.exception("Error updating server status") # ==================== Connection Management ==================== @@ -211,8 +214,16 @@ def register_agent( ) db.session.add(session) db.session.commit() + + # Deliver any queued commands for this server + try: + from app.services.agent_fleet_service import fleet_service + fleet_service.deliver_queued_commands(server_id) + except Exception as e: + logger.error("Error delivering queued commands: %s", e) + except Exception as e: - print(f"Error registering agent: {e}") + logger.exception("Error registering agent") db.session.rollback() return session_token @@ -244,7 +255,7 @@ def unregister_agent(self, 
socket_id: str, reason: str = 'disconnect'): session.disconnect_reason = reason db.session.commit() except Exception as e: - print(f"Error unregistering agent: {e}") + logger.exception("Error unregistering agent") db.session.rollback() def get_agent(self, server_id: str) -> Optional[ConnectedAgent]: @@ -272,27 +283,44 @@ def get_connected_servers(self) -> list: # ==================== Heartbeat ==================== - def update_heartbeat(self, server_id: str, metrics: dict = None): + def update_heartbeat(self, server_id: str, metrics: dict = None, client_timestamp: int = None): """Update agent heartbeat and optionally store metrics""" + now = datetime.utcnow() with self._lock: agent = self._agents.get(server_id) if agent: - agent.last_heartbeat = datetime.utcnow() + agent.last_heartbeat = now + + # Calculate heartbeat latency if client sent a timestamp + latency_ms = None + if client_timestamp: + now_ms = int(time.time() * 1000) + latency_ms = max(0, now_ms - client_timestamp) # Update server last_seen try: server = Server.query.get(server_id) if server: - server.last_seen = datetime.utcnow() + server.last_seen = now if server.status != 'online': server.status = 'online' db.session.commit() + # Update session latency + if latency_ms is not None: + session = AgentSession.query.filter_by( + server_id=server_id, is_active=True + ).first() + if session: + session.last_heartbeat = now + session.update_latency(latency_ms) + db.session.commit() + # Store metrics if provided if metrics: self._store_metrics(server_id, metrics) except Exception as e: - print(f"Error updating heartbeat: {e}") + logger.exception("Error updating heartbeat") db.session.rollback() def _store_metrics(self, server_id: str, metrics: dict): @@ -309,7 +337,7 @@ def _store_metrics(self, server_id: str, metrics: dict): db.session.add(metric) db.session.commit() except Exception as e: - print(f"Error storing metrics: {e}") + logger.exception("Error storing metrics") db.session.rollback() # ==================== Command Routing ==================== @@ -447,10 +475,17 @@ def handle_command_result(self, socket_id: str, result: dict): """Handle command result from agent""" agent = self.get_agent_by_socket(socket_id) if not agent: + logger.warning(f"Command result from unknown socket: {socket_id}") return command_id = result.get('command_id') if not command_id: + logger.warning(f"Command result missing command_id from agent: {agent.server_id}") + return + + # Validate command_id format to prevent injection + if not isinstance(command_id, str) or len(command_id) > 64: + logger.warning(f"Invalid command_id format from agent: {agent.server_id}") return with self._lock: @@ -494,7 +529,7 @@ def verify_agent_auth( # Check timestamp (allow 5 minute window) now = int(time.time() * 1000) - if abs(now - timestamp) > 300000: # 5 minutes + if abs(now - timestamp) > 60000: # 60 seconds if ip_address: anomaly_detection_service.track_auth_attempt(None, False, ip_address) return None @@ -550,6 +585,11 @@ def verify_agent_auth( if ip_address: anomaly_detection_service.track_auth_attempt(server.id, True, ip_address) + # TODO: Implement per-message session token validation. Currently, the session + # token is issued at auth time but not verified on each subsequent message. + # Full session-per-message validation requires protocol changes on both the + # agent (Go) and backend sides. 
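Since the hunk above tightens the replay window to 60 seconds and the TODO notes that per-message validation is still open, here is a minimal sketch of the timestamp-plus-HMAC check this implies. The `agent_id:timestamp` message layout is an assumption for illustration; the actual signed payload is defined by the agent protocol, not by this hunk:

```python
import hashlib
import hmac
import time

WINDOW_MS = 60_000  # mirrors the tightened 60-second replay window

def sign(agent_id: str, secret: str, timestamp_ms: int) -> str:
    # Hypothetical message layout; the real agent protocol may sign more fields.
    msg = f'{agent_id}:{timestamp_ms}'.encode()
    return hmac.new(secret.encode(), msg, hashlib.sha256).hexdigest()

def verify(agent_id: str, secret: str, timestamp_ms: int, signature: str) -> bool:
    now_ms = int(time.time() * 1000)
    if abs(now_ms - timestamp_ms) > WINDOW_MS:  # stale or future-dated: reject
        return False
    expected = sign(agent_id, secret, timestamp_ms)
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(expected, signature)
```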
+ return server # ==================== System Info ==================== @@ -572,7 +612,7 @@ def update_system_info(self, server_id: str, info: dict): server.agent_version = info.get('agent_version', server.agent_version) db.session.commit() except Exception as e: - print(f"Error updating system info: {e}") + logger.exception("Error updating system info") db.session.rollback() diff --git a/backend/app/services/background_job_service.py b/backend/app/services/background_job_service.py new file mode 100644 index 00000000..c5e932ae --- /dev/null +++ b/backend/app/services/background_job_service.py @@ -0,0 +1,123 @@ +import logging +import threading +import queue +import time +from datetime import datetime +from functools import wraps + +logger = logging.getLogger(__name__) + + +class BackgroundJobService: + """Simple background job queue for long-running tasks.""" + + _queue = queue.Queue() + _workers = [] + _results = {} + _lock = threading.Lock() + _running = False + + @classmethod + def start(cls, app, num_workers=3): + """Start background worker threads.""" + if cls._running: + return + + cls._running = True + + for i in range(num_workers): + t = threading.Thread( + target=cls._worker_loop, + args=(app,), + daemon=True, + name=f'bg-worker-{i}', + ) + t.start() + cls._workers.append(t) + + logger.info(f'Background job service started with {num_workers} workers') + + @classmethod + def _worker_loop(cls, app): + while cls._running: + try: + job = cls._queue.get(timeout=1) + except queue.Empty: + continue + + job_id = job['id'] + with cls._lock: + cls._results[job_id] = {'status': 'running', 'started_at': datetime.utcnow().isoformat()} + + try: + with app.app_context(): + result = job['func'](*job.get('args', ()), **job.get('kwargs', {})) + with cls._lock: + cls._results[job_id] = { + 'status': 'completed', + 'result': result, + 'completed_at': datetime.utcnow().isoformat(), + } + except Exception as e: + logger.error(f'Background job {job_id} failed: {e}') + with cls._lock: + cls._results[job_id] = { + 'status': 'failed', + 'error': str(e), + 'completed_at': datetime.utcnow().isoformat(), + } + finally: + cls._queue.task_done() + + @classmethod + def enqueue(cls, func, *args, job_id=None, **kwargs): + """Add a job to the queue. 
Returns job_id.""" + import uuid + job_id = job_id or str(uuid.uuid4())[:8] + + with cls._lock: + cls._results[job_id] = {'status': 'queued', 'queued_at': datetime.utcnow().isoformat()} + + cls._queue.put({ + 'id': job_id, + 'func': func, + 'args': args, + 'kwargs': kwargs, + }) + + return job_id + + @classmethod + def get_job_status(cls, job_id): + with cls._lock: + return cls._results.get(job_id) + + @classmethod + def list_jobs(cls): + with cls._lock: + return dict(cls._results) + + @classmethod + def cleanup_old(cls, max_age_seconds=3600): + """Remove completed/failed jobs older than max_age.""" + now = datetime.utcnow() + with cls._lock: + to_remove = [] + for jid, info in cls._results.items(): + if info['status'] in ('completed', 'failed'): + completed_at = info.get('completed_at') + if completed_at: + dt = datetime.fromisoformat(completed_at) + if (now - dt).total_seconds() > max_age_seconds: + to_remove.append(jid) + for jid in to_remove: + del cls._results[jid] + + @classmethod + def get_queue_stats(cls): + return { + 'queue_size': cls._queue.qsize(), + 'workers': len(cls._workers), + 'total_jobs': len(cls._results), + 'running': cls._running, + } diff --git a/backend/app/services/build_service.py b/backend/app/services/build_service.py index 0ede960f..35a2cd4f 100644 --- a/backend/app/services/build_service.py +++ b/backend/app/services/build_service.py @@ -481,8 +481,7 @@ def build_with_custom_command(cls, app_id: int, app_path: str, env.update(env_vars) process = subprocess.Popen( - build_cmd, - shell=True, + ['bash', '-c', build_cmd], cwd=app_path, env=env, stdout=subprocess.PIPE, diff --git a/backend/app/services/cache_service.py b/backend/app/services/cache_service.py new file mode 100644 index 00000000..997ef1c2 --- /dev/null +++ b/backend/app/services/cache_service.py @@ -0,0 +1,140 @@ +import json +import logging +import time +from functools import wraps + +logger = logging.getLogger(__name__) + +# In-memory cache fallback (used when Redis is not available) +_memory_cache = {} +_redis_client = None + + +def _get_redis(): + """Get Redis client, or None if unavailable.""" + global _redis_client + if _redis_client is not None: + return _redis_client + try: + import redis + import os + url = os.environ.get('REDIS_URL', 'redis://localhost:6379/0') + _redis_client = redis.from_url(url, decode_responses=True, socket_timeout=2) + _redis_client.ping() + logger.info('Redis cache connected') + return _redis_client + except Exception: + _redis_client = None + return None + + +class CacheService: + """Caching service with Redis backend and in-memory fallback.""" + + DEFAULT_TTL = 300 # 5 minutes + + @staticmethod + def get(key): + r = _get_redis() + if r: + try: + val = r.get(f'sk:{key}') + return json.loads(val) if val else None + except Exception: + pass + + entry = _memory_cache.get(key) + if entry and entry['expires'] > time.time(): + return entry['value'] + elif entry: + del _memory_cache[key] + return None + + @staticmethod + def set(key, value, ttl=None): + ttl = ttl or CacheService.DEFAULT_TTL + r = _get_redis() + if r: + try: + r.setex(f'sk:{key}', ttl, json.dumps(value)) + return + except Exception: + pass + + _memory_cache[key] = { + 'value': value, + 'expires': time.time() + ttl, + } + + @staticmethod + def delete(key): + r = _get_redis() + if r: + try: + r.delete(f'sk:{key}') + except Exception: + pass + _memory_cache.pop(key, None) + + @staticmethod + def delete_pattern(pattern): + r = _get_redis() + if r: + try: + keys = r.keys(f'sk:{pattern}') + if keys: + 
r.delete(*keys) + except Exception: + pass + + to_delete = [k for k in _memory_cache if k.startswith(pattern.replace('*', ''))] + for k in to_delete: + del _memory_cache[k] + + @staticmethod + def flush(): + r = _get_redis() + if r: + try: + keys = r.keys('sk:*') + if keys: + r.delete(*keys) + except Exception: + pass + _memory_cache.clear() + + @staticmethod + def get_stats(): + r = _get_redis() + if r: + try: + info = r.info('memory') + return { + 'backend': 'redis', + 'used_memory': info.get('used_memory_human'), + 'keys': r.dbsize(), + } + except Exception: + pass + + return { + 'backend': 'memory', + 'keys': len(_memory_cache), + 'used_memory': 'N/A', + } + + +def cached(key_template, ttl=300): + """Decorator for caching function results.""" + def decorator(func): + @wraps(func) + def wrapper(*args, **kwargs): + cache_key = key_template.format(*args, **kwargs) + result = CacheService.get(cache_key) + if result is not None: + return result + result = func(*args, **kwargs) + CacheService.set(cache_key, result, ttl) + return result + return wrapper + return decorator diff --git a/backend/app/services/cloud_provisioning_service.py b/backend/app/services/cloud_provisioning_service.py new file mode 100644 index 00000000..8b99593a --- /dev/null +++ b/backend/app/services/cloud_provisioning_service.py @@ -0,0 +1,296 @@ +import logging +from datetime import datetime +from app import db +from app.models.cloud_server import CloudProvider, CloudServer, CloudSnapshot + +logger = logging.getLogger(__name__) + + +class CloudProvisioningService: + """Service for provisioning cloud servers via provider APIs.""" + + SUPPORTED_PROVIDERS = { + 'digitalocean': { + 'name': 'DigitalOcean', + 'regions': ['nyc1', 'nyc3', 'sfo3', 'ams3', 'lon1', 'fra1', 'sgp1', 'blr1', 'tor1', 'syd1'], + 'sizes': ['s-1vcpu-1gb', 's-1vcpu-2gb', 's-2vcpu-2gb', 's-2vcpu-4gb', 's-4vcpu-8gb', 's-8vcpu-16gb'], + 'images': ['ubuntu-22-04-x64', 'ubuntu-24-04-x64', 'debian-12-x64', 'centos-stream-9-x64', 'rocky-9-x64'], + }, + 'hetzner': { + 'name': 'Hetzner Cloud', + 'regions': ['nbg1', 'fsn1', 'hel1', 'ash', 'hil'], + 'sizes': ['cx22', 'cx32', 'cx42', 'cx52', 'cpx11', 'cpx21', 'cpx31'], + 'images': ['ubuntu-22.04', 'ubuntu-24.04', 'debian-12', 'centos-stream-9', 'rocky-9'], + }, + 'vultr': { + 'name': 'Vultr', + 'regions': ['ewr', 'ord', 'dfw', 'sea', 'lax', 'atl', 'ams', 'lhr', 'fra', 'nrt', 'icn', 'sgp', 'syd'], + 'sizes': ['vc2-1c-1gb', 'vc2-1c-2gb', 'vc2-2c-4gb', 'vc2-4c-8gb', 'vc2-6c-16gb'], + 'images': ['Ubuntu 22.04', 'Ubuntu 24.04', 'Debian 12', 'CentOS Stream 9'], + }, + 'linode': { + 'name': 'Linode (Akamai)', + 'regions': ['us-east', 'us-central', 'us-west', 'eu-west', 'eu-central', 'ap-south', 'ap-northeast', 'ap-southeast'], + 'sizes': ['g6-nanode-1', 'g6-standard-1', 'g6-standard-2', 'g6-standard-4', 'g6-standard-6'], + 'images': ['linode/ubuntu22.04', 'linode/ubuntu24.04', 'linode/debian12'], + }, + } + + # --- Providers --- + + @staticmethod + def list_providers(): + return CloudProvider.query.filter_by(is_active=True).all() + + @staticmethod + def get_provider(provider_id): + return CloudProvider.query.get(provider_id) + + @staticmethod + def create_provider(data, user_id=None): + ptype = data.get('provider_type') + if ptype not in CloudProvisioningService.SUPPORTED_PROVIDERS: + raise ValueError(f'Unsupported provider: {ptype}') + + provider = CloudProvider( + name=data.get('name', CloudProvisioningService.SUPPORTED_PROVIDERS[ptype]['name']), + provider_type=ptype, + api_key_encrypted=data.get('api_key', ''), + 
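The `cached` decorator above formats its key template with the call's own arguments, so positional placeholders select positional args. A usage sketch (the function and key names are made up for illustration):

```python
@cached('docker_info:{0}', ttl=60)
def get_docker_info_cached(host):
    # Expensive lookup; the return value must be JSON-serializable,
    # since the Redis backend stores it via json.dumps().
    return {'host': host, 'containers': 12}

get_docker_info_cached('web-01')  # miss: computed, stored under key sk:docker_info:web-01
get_docker_info_cached('web-01')  # hit: served from Redis, or the memory fallback
```

One caveat worth noting: because the wrapper treats `None` as a cache miss, functions that legitimately return `None` are recomputed on every call.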
created_by=user_id, + ) + db.session.add(provider) + db.session.commit() + return provider + + @staticmethod + def delete_provider(provider_id): + provider = CloudProvider.query.get(provider_id) + if not provider: + return False + provider.is_active = False + db.session.commit() + return True + + @staticmethod + def get_provider_options(provider_type): + return CloudProvisioningService.SUPPORTED_PROVIDERS.get(provider_type, {}) + + # --- Servers --- + + @staticmethod + def list_servers(provider_id=None): + query = CloudServer.query.filter(CloudServer.status != CloudServer.STATUS_DESTROYED) + if provider_id: + query = query.filter_by(provider_id=provider_id) + return query.order_by(CloudServer.created_at.desc()).all() + + @staticmethod + def get_server(server_id): + return CloudServer.query.get(server_id) + + @staticmethod + def create_server(data, user_id=None): + """Provision a new cloud server.""" + provider = CloudProvider.query.get(data['provider_id']) + if not provider: + raise ValueError('Provider not found') + + server = CloudServer( + provider_id=provider.id, + name=data['name'], + region=data.get('region'), + size=data.get('size'), + image=data.get('image'), + ssh_key_id=data.get('ssh_key_id'), + created_by=user_id, + ) + db.session.add(server) + db.session.commit() + + # Call provider API to create server + try: + result = CloudProvisioningService._provider_create(provider, server, data) + server.external_id = result.get('id') + server.ip_address = result.get('ip_address') + server.ipv6_address = result.get('ipv6_address') + server.monthly_cost = result.get('monthly_cost', 0) + server.status = CloudServer.STATUS_ACTIVE + server.hostname = result.get('hostname', server.name) + db.session.commit() + except Exception as e: + server.status = CloudServer.STATUS_ERROR + server.server_metadata = {'error': str(e)} + db.session.commit() + raise + + # Auto-install agent if requested + if data.get('install_agent') and server.ip_address: + try: + CloudProvisioningService._install_agent(server) + server.agent_installed = True + db.session.commit() + except Exception as e: + logger.warning(f'Agent install failed for {server.name}: {e}') + + return server + + @staticmethod + def destroy_server(server_id): + server = CloudServer.query.get(server_id) + if not server: + return False + + try: + CloudProvisioningService._provider_destroy(server.provider, server) + except Exception as e: + logger.error(f'Provider destroy failed: {e}') + + server.status = CloudServer.STATUS_DESTROYED + server.destroyed_at = datetime.utcnow() + db.session.commit() + return True + + @staticmethod + def resize_server(server_id, new_size): + server = CloudServer.query.get(server_id) + if not server: + return None + try: + CloudProvisioningService._provider_resize(server.provider, server, new_size) + server.size = new_size + db.session.commit() + return server + except Exception as e: + raise ValueError(f'Resize failed: {e}') + + # --- Snapshots --- + + @staticmethod + def create_snapshot(server_id, name): + server = CloudServer.query.get(server_id) + if not server: + raise ValueError('Server not found') + + snapshot = CloudSnapshot( + server_id=server_id, + name=name, + ) + db.session.add(snapshot) + db.session.commit() + + try: + result = CloudProvisioningService._provider_snapshot(server.provider, server, name) + snapshot.external_id = result.get('id') + snapshot.size_gb = result.get('size_gb') + snapshot.status = 'available' + db.session.commit() + except Exception as e: + snapshot.status = 'error' + 
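To make the provisioning flow concrete, a request against the DigitalOcean entry in `SUPPORTED_PROVIDERS` might look like the sketch below. `provider` is assumed to be an existing `CloudProvider` row and `user.id` a real user id; both are placeholders:

```python
data = {
    'provider_id': provider.id,
    'name': 'web-01',
    'region': 'nyc3',              # from SUPPORTED_PROVIDERS['digitalocean']['regions']
    'size': 's-2vcpu-4gb',         # from ...['sizes']
    'image': 'ubuntu-24-04-x64',   # from ...['images']
    'install_agent': True,         # attempts _install_agent once an IP is assigned
}
server = CloudProvisioningService.create_server(data, user_id=user.id)
# On provider failure the row is kept with STATUS_ERROR and the exception re-raised.
```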
db.session.commit() + raise + + return snapshot + + @staticmethod + def get_snapshots(server_id): + return CloudSnapshot.query.filter_by(server_id=server_id).order_by(CloudSnapshot.created_at.desc()).all() + + @staticmethod + def delete_snapshot(snapshot_id): + snapshot = CloudSnapshot.query.get(snapshot_id) + if not snapshot: + return False + db.session.delete(snapshot) + db.session.commit() + return True + + @staticmethod + def get_cost_summary(): + """Get total monthly cost across all active servers.""" + servers = CloudServer.query.filter( + CloudServer.status.in_([CloudServer.STATUS_ACTIVE, CloudServer.STATUS_OFF]) + ).all() + + by_provider = {} + total = 0 + for s in servers: + key = s.provider.name if s.provider else 'Unknown' + by_provider.setdefault(key, {'count': 0, 'cost': 0}) + by_provider[key]['count'] += 1 + by_provider[key]['cost'] += s.monthly_cost or 0 + total += s.monthly_cost or 0 + + return { + 'total_monthly': round(total, 2), + 'server_count': len(servers), + 'by_provider': by_provider, + } + + # --- Provider API calls --- + + @staticmethod + def _provider_create(provider, server, data): + """Call provider API to create server. Returns dict with id, ip_address, etc.""" + ptype = provider.provider_type + + if ptype == 'digitalocean': + import requests + resp = requests.post('https://api.digitalocean.com/v2/droplets', json={ + 'name': server.name, + 'region': server.region, + 'size': server.size, + 'image': server.image, + 'ssh_keys': [data.get('ssh_key_id')] if data.get('ssh_key_id') else [], + }, headers={'Authorization': f'Bearer {provider.api_key_encrypted}'}, timeout=30) + resp.raise_for_status() + droplet = resp.json().get('droplet', {}) + networks = droplet.get('networks', {}) + ipv4 = next((n['ip_address'] for n in networks.get('v4', []) if n['type'] == 'public'), None) + return { + 'id': str(droplet.get('id')), + 'ip_address': ipv4, + 'monthly_cost': droplet.get('size', {}).get('price_monthly', 0), + } + + elif ptype == 'hetzner': + import requests + resp = requests.post('https://api.hetzner.cloud/v1/servers', json={ + 'name': server.name, + 'server_type': server.size, + 'image': server.image, + 'location': server.region, + }, headers={'Authorization': f'Bearer {provider.api_key_encrypted}'}, timeout=30) + resp.raise_for_status() + srv = resp.json().get('server', {}) + return { + 'id': str(srv.get('id')), + 'ip_address': srv.get('public_net', {}).get('ipv4', {}).get('ip'), + } + + # Fallback for unsupported or mock + return {'id': 'mock-id', 'ip_address': '0.0.0.0'} + + @staticmethod + def _provider_destroy(provider, server): + if not server.external_id: + return + ptype = provider.provider_type + if ptype == 'digitalocean': + import requests + requests.delete( + f'https://api.digitalocean.com/v2/droplets/{server.external_id}', + headers={'Authorization': f'Bearer {provider.api_key_encrypted}'}, timeout=30 + ) + + @staticmethod + def _provider_resize(provider, server, new_size): + pass # Provider-specific resize logic + + @staticmethod + def _provider_snapshot(provider, server, name): + return {'id': 'snap-mock', 'size_gb': 0} + + @staticmethod + def _install_agent(server): + """SSH into server and install the ServerKit agent.""" + pass # Would use paramiko or similar to SSH and run install script diff --git a/backend/app/services/cron_service.py b/backend/app/services/cron_service.py index 289210f1..c9ddfb84 100644 --- a/backend/app/services/cron_service.py +++ b/backend/app/services/cron_service.py @@ -7,6 +7,7 @@ import os import re +import shlex import 
subprocess
 import platform
 from typing import Dict, List, Optional
@@ -245,6 +246,23 @@ def _describe_schedule(cls, schedule: str) -> str:
 
         return ', '.join(descriptions) if descriptions else schedule
 
+    BLOCKED_PATTERNS = [';', '&&', '||', '|', '`', '$(', '>', '<', '\n', '\r']
+
+    @classmethod
+    def _validate_command(cls, command: str) -> bool:
+        """Validate cron command to prevent injection."""
+        for pattern in cls.BLOCKED_PATTERNS:
+            if pattern in command:
+                return False
+        # Require absolute paths; reject commands shlex cannot parse
+        try:
+            parts = shlex.split(command)
+        except ValueError:
+            return False
+        if parts and not parts[0].startswith('/'):
+            return False
+        return True
+
     @classmethod
     def add_job(cls, schedule: str, command: str, name: str = None,
                 description: str = None) -> Dict:
@@ -257,6 +275,9 @@ def add_job(cls, schedule: str, command: str, name: str = None,
         if not command or not command.strip():
             return {'success': False, 'error': 'Command cannot be empty'}
 
+        if not cls._validate_command(command):
+            return {'success': False, 'error': 'Invalid command: must use absolute paths and cannot contain shell operators (;, &&, ||, |, `, $())'}
+
         # Generate job ID
         job_id = f"job_{datetime.now().strftime('%Y%m%d%H%M%S')}"
@@ -594,8 +615,7 @@ def run_job_now(cls, job_id: str) -> Dict:
         try:
             # Run the command
             result = subprocess.run(
-                command,
-                shell=True,
+                ['bash', '-c', command],
                 capture_output=True,
                 text=True,
                 timeout=60
diff --git a/backend/app/services/database_service.py b/backend/app/services/database_service.py
index 23ab8d25..1cc64e3a 100644
--- a/backend/app/services/database_service.py
+++ b/backend/app/services/database_service.py
@@ -1,5 +1,6 @@
 import subprocess
 import os
+import re
 import secrets
 import string
 import json
@@ -8,8 +9,19 @@
 from app import paths
 
 
+def _validate_identifier(name: str, max_length: int = 64) -> bool:
+    """Validate database/user identifiers to prevent SQL injection."""
+    return bool(re.match(r'^[a-zA-Z0-9_$]+$', name)) and len(name) <= max_length
+
+
 class DatabaseService:
-    """Service for managing MySQL/MariaDB and PostgreSQL databases."""
+    """Service for managing MySQL/MariaDB and PostgreSQL databases.
+
+    NOTE: Subprocess timeout values in this service (typically 30s) should be
+    reviewed periodically to ensure they remain appropriate. Shorter timeouts
+    reduce the window for resource exhaustion, but may break legitimate
+    long-running operations such as large database backups or restores.
+    """
 
     BACKUP_DIR = paths.DB_BACKUP_DIR
 
@@ -51,14 +63,77 @@ def mysql_execute(query, database=None, root_password=None):
         """Execute a MySQL query."""
         try:
             cmd = ['mysql', '-u', 'root']
-            if root_password:
-                cmd.extend([f'-p{root_password}'])
             if database:
                 cmd.extend(['-D', database])
             cmd.extend(['-e', query])
 
+            # Use MYSQL_PWD env var to avoid passing password on CLI
+            env = None
+            if root_password:
+                env = os.environ.copy()
+                env['MYSQL_PWD'] = root_password
+
             result = subprocess.run(
-                cmd, capture_output=True, text=True
+                cmd, capture_output=True, text=True, env=env
             )
             return {
                 'success': result.returncode == 0,
                 'output': result.stdout,
                 'error': result.stderr if result.returncode != 0 else None
             }
         except Exception as e:
             return {'success': False, 'error': str(e)}
 
+    @staticmethod
+    def _mysql_execute_parameterized(query, params, database=None, root_password=None):
+        """Execute a MySQL query with parameterized values using the mysql CLI.
+
+        The mysql CLI has no true prepared-statement interface, so parameter
+        values are hex-encoded and substituted into the query as literals to avoid injection.
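+
+        For example, the string parameter 'mydb' becomes the hex literal
+        0x6d796462 ('mydb'.encode('utf-8').hex() == '6d796462'), so the
+        statement the server receives is:
+
+            SELECT ... WHERE table_schema = 0x6d796462;
+
+        In a string comparison MySQL treats the 0x literal as the
+        corresponding string, and the encoding leaves no quoting
+        characters for an attacker to abuse.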
+ + Args: + query: SQL query with %s placeholders + params: List of parameter values to substitute safely + database: Optional database name + root_password: Optional MySQL root password + """ + try: + # Build the parameterized query using mysql's built-in escaping + # by passing values through a SET/EXECUTE pattern via stdin + cmd = ['mysql', '-u', 'root', '--batch', '-N'] + if database: + cmd.extend(['-D', database]) + + # Use MYSQL_PWD env var to avoid passing password on CLI + env = None + if root_password: + env = os.environ.copy() + env['MYSQL_PWD'] = root_password + + # Build a safe query using user-defined variables and EXECUTE + # For simple single-param queries, we use a quoted literal approach + # MySQL's cli doesn't support true parameterized queries, so we use + # hex-encoding for string safety + safe_params = [] + for p in params: + if p is None: + safe_params.append('NULL') + elif isinstance(p, (int, float)): + safe_params.append(str(p)) + else: + # Hex-encode string values: 0x is safe from injection + hex_val = p.encode('utf-8').hex() + safe_params.append(f"0x{hex_val}") + + # Replace %s placeholders with safe values + safe_query = query + for sp in safe_params: + safe_query = safe_query.replace('%s', sp, 1) + + cmd.extend(['-e', safe_query]) + + result = subprocess.run( + cmd, capture_output=True, text=True, env=env ) return { 'success': result.returncode == 0, @@ -83,15 +158,16 @@ def mysql_list_databases(root_password=None): for line in result['output'].strip().split('\n')[1:]: db_name = line.strip() if db_name and db_name not in system_dbs: - # Get database size - size_result = DatabaseService.mysql_execute( - f"SELECT SUM(data_length + index_length) as size FROM information_schema.tables WHERE table_schema = '{db_name}';", - root_password=root_password + # Get database size using parameterized query via MySQL CLI + # Pass db_name as a separate argument to avoid SQL injection + size_query = "SELECT SUM(data_length + index_length) as size FROM information_schema.tables WHERE table_schema = %s;" + size_result = DatabaseService._mysql_execute_parameterized( + size_query, [db_name], root_password=root_password ) size = 0 if size_result['success']: try: - size_line = size_result['output'].strip().split('\n')[1] + size_line = size_result['output'].strip().split('\n')[0] size = int(size_line) if size_line and size_line != 'NULL' else 0 except (IndexError, ValueError): pass @@ -106,6 +182,12 @@ def mysql_list_databases(root_password=None): @staticmethod def mysql_create_database(name, charset='utf8mb4', collation='utf8mb4_unicode_ci', root_password=None): """Create a MySQL database.""" + if not _validate_identifier(name): + return {'success': False, 'error': 'Invalid identifier: only alphanumeric characters and underscores allowed'} + if not _validate_identifier(charset): + return {'success': False, 'error': 'Invalid charset identifier'} + if not _validate_identifier(collation, max_length=128): + return {'success': False, 'error': 'Invalid collation identifier'} query = f"CREATE DATABASE IF NOT EXISTS `{name}` CHARACTER SET {charset} COLLATE {collation};" result = DatabaseService.mysql_execute(query, root_password=root_password) return result @@ -113,6 +195,8 @@ def mysql_create_database(name, charset='utf8mb4', collation='utf8mb4_unicode_ci @staticmethod def mysql_drop_database(name, root_password=None): """Drop a MySQL database.""" + if not _validate_identifier(name): + return {'success': False, 'error': 'Invalid identifier: only alphanumeric characters and underscores 
allowed'} query = f"DROP DATABASE IF EXISTS `{name}`;" result = DatabaseService.mysql_execute(query, root_password=root_password) return result @@ -140,13 +224,51 @@ def mysql_list_users(root_password=None): @staticmethod def mysql_create_user(username, password, host='localhost', root_password=None): """Create a MySQL user.""" - query = f"CREATE USER IF NOT EXISTS '{username}'@'{host}' IDENTIFIED BY '{password}';" - result = DatabaseService.mysql_execute(query, root_password=root_password) - return result + if not _validate_identifier(username): + return {'success': False, 'error': 'Invalid identifier: only alphanumeric characters and underscores allowed'} + if not _validate_identifier(host): + return {'success': False, 'error': 'Invalid host identifier'} + # Use hex-encoded password with UNHEX + QUOTE to safely pass the password + # without manual string escaping. Username and host are validated above. + try: + cmd = ['mysql', '-u', 'root'] + + env = None + if root_password: + env = os.environ.copy() + env['MYSQL_PWD'] = root_password + + # Hex-encode the password so it never appears as a raw string in SQL. + # UNHEX converts it back to bytes, CAST converts to string, QUOTE wraps + # it safely for use in a dynamic SQL statement. + hex_pw = password.encode('utf-8').hex() + safe_stmt = ( + f"SET @pw = UNHEX('{hex_pw}');\n" + f"SET @pw = CAST(@pw AS CHAR);\n" + f"SET @sql = CONCAT('CREATE USER IF NOT EXISTS ''{username}''@''{host}'' IDENTIFIED BY ', QUOTE(@pw));\n" + f"PREPARE stmt FROM @sql;\n" + f"EXECUTE stmt;\n" + f"DEALLOCATE PREPARE stmt;\n" + ) + + result = subprocess.run( + cmd, capture_output=True, text=True, input=safe_stmt, env=env + ) + return { + 'success': result.returncode == 0, + 'output': result.stdout, + 'error': result.stderr if result.returncode != 0 else None + } + except Exception as e: + return {'success': False, 'error': str(e)} @staticmethod def mysql_drop_user(username, host='localhost', root_password=None): """Drop a MySQL user.""" + if not _validate_identifier(username): + return {'success': False, 'error': 'Invalid identifier: only alphanumeric characters and underscores allowed'} + if not _validate_identifier(host): + return {'success': False, 'error': 'Invalid host identifier'} query = f"DROP USER IF EXISTS '{username}'@'{host}';" result = DatabaseService.mysql_execute(query, root_password=root_password) return result @@ -154,6 +276,12 @@ def mysql_drop_user(username, host='localhost', root_password=None): @staticmethod def mysql_grant_privileges(username, database, privileges='ALL', host='localhost', root_password=None): """Grant privileges to a MySQL user.""" + if not _validate_identifier(username): + return {'success': False, 'error': 'Invalid identifier: only alphanumeric characters and underscores allowed'} + if not _validate_identifier(database): + return {'success': False, 'error': 'Invalid identifier: only alphanumeric characters and underscores allowed'} + if not _validate_identifier(host): + return {'success': False, 'error': 'Invalid host identifier'} query = f"GRANT {privileges} ON `{database}`.* TO '{username}'@'{host}'; FLUSH PRIVILEGES;" result = DatabaseService.mysql_execute(query, root_password=root_password) return result @@ -161,6 +289,12 @@ def mysql_grant_privileges(username, database, privileges='ALL', host='localhost @staticmethod def mysql_revoke_privileges(username, database, privileges='ALL', host='localhost', root_password=None): """Revoke privileges from a MySQL user.""" + if not _validate_identifier(username): + return {'success': 
False, 'error': 'Invalid identifier: only alphanumeric characters and underscores allowed'} + if not _validate_identifier(database): + return {'success': False, 'error': 'Invalid identifier: only alphanumeric characters and underscores allowed'} + if not _validate_identifier(host): + return {'success': False, 'error': 'Invalid host identifier'} query = f"REVOKE {privileges} ON `{database}`.* FROM '{username}'@'{host}'; FLUSH PRIVILEGES;" result = DatabaseService.mysql_execute(query, root_password=root_password) return result @@ -168,6 +302,10 @@ def mysql_revoke_privileges(username, database, privileges='ALL', host='localhos @staticmethod def mysql_get_user_privileges(username, host='localhost', root_password=None): """Get privileges for a MySQL user.""" + if not _validate_identifier(username): + return [] + if not _validate_identifier(host): + return [] result = DatabaseService.mysql_execute( f"SHOW GRANTS FOR '{username}'@'{host}';", root_password=root_password @@ -194,13 +332,17 @@ def mysql_backup(database, output_path=None, root_password=None): try: cmd = ['mysqldump', '-u', 'root'] - if root_password: - cmd.append(f'-p{root_password}') cmd.append(database) + # Use MYSQL_PWD env var to avoid passing password on CLI + env = None + if root_password: + env = os.environ.copy() + env['MYSQL_PWD'] = root_password + # Pipe through gzip with open(output_path, 'wb') as f: - dump = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + dump = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env) gzip = subprocess.Popen(['gzip'], stdin=dump.stdout, stdout=f, stderr=subprocess.PIPE) dump.stdout.close() gzip.communicate() @@ -224,14 +366,18 @@ def mysql_restore(database, backup_path, root_password=None): try: cmd = ['mysql', '-u', 'root'] - if root_password: - cmd.append(f'-p{root_password}') cmd.append(database) + # Use MYSQL_PWD env var to avoid passing password on CLI + env = None + if root_password: + env = os.environ.copy() + env['MYSQL_PWD'] = root_password + if backup_path.endswith('.gz'): # Decompress and restore gunzip = subprocess.Popen(['gunzip', '-c', backup_path], stdout=subprocess.PIPE) - restore = subprocess.Popen(cmd, stdin=gunzip.stdout, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + restore = subprocess.Popen(cmd, stdin=gunzip.stdout, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env) gunzip.stdout.close() _, stderr = restore.communicate() @@ -239,7 +385,7 @@ def mysql_restore(database, backup_path, root_password=None): return {'success': False, 'error': stderr.decode()} else: with open(backup_path, 'r') as f: - result = subprocess.run(cmd, stdin=f, capture_output=True, text=True) + result = subprocess.run(cmd, stdin=f, capture_output=True, text=True, env=env) if result.returncode != 0: return {'success': False, 'error': result.stderr} @@ -367,10 +513,20 @@ def pg_create_database(name, owner=None, encoding='UTF8'): @staticmethod def pg_drop_database(name): """Drop a PostgreSQL database.""" - # Terminate connections first - DatabaseService.pg_execute( - f"SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = '{name}';" - ) + if not _validate_identifier(name): + return {'success': False, 'error': 'Invalid identifier: only alphanumeric characters and underscores allowed'} + # Terminate connections first using psql variable binding to prevent injection + try: + cmd = [ + 'sudo', '-u', 'postgres', 'psql', '-d', 'postgres', + '-v', f'dbname={name}', + '-c', "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE 
datname = :'dbname';", + '-t', '-A' + ] + subprocess.run(cmd, capture_output=True, text=True) + except Exception: + pass + # Name is validated above, safe to use in identifier position result = DatabaseService.pg_execute(f'DROP DATABASE IF EXISTS "{name}";') return result @@ -555,8 +711,12 @@ def mysql_execute_query(database, query, readonly=True, root_password=None, time # Build mysql command with JSON output format cmd = ['mysql', '-u', 'root'] + + # Use MYSQL_PWD env var to avoid passing password on CLI + env = None if root_password: - cmd.extend([f'-p{root_password}']) + env = os.environ.copy() + env['MYSQL_PWD'] = root_password cmd.extend([ '-D', database, '-e', query, @@ -570,7 +730,8 @@ def mysql_execute_query(database, query, readonly=True, root_password=None, time cmd, capture_output=True, text=True, - timeout=timeout + timeout=timeout, + env=env ) execution_time = time.time() - start_time @@ -1038,10 +1199,13 @@ def list_docker_mysql_containers(): def docker_mysql_execute(container_name, query, database=None, user='root', password=None): """Execute a MySQL query inside a Docker container.""" try: - cmd = ['docker', 'exec', container_name, 'mysql', '-u', user] + cmd = ['docker', 'exec'] + # Use MYSQL_PWD env var to avoid passing password on CLI if password: - cmd.append(f'-p{password}') + cmd.extend(['-e', f'MYSQL_PWD={password}']) + + cmd.extend([container_name, 'mysql', '-u', user]) if database: cmd.extend(['-D', database]) @@ -1137,9 +1301,11 @@ def docker_mysql_execute_query(container_name, database, query, user='root', pas try: start_time = time.time() - cmd = ['docker', 'exec', container_name, 'mysql', '-u', user] + cmd = ['docker', 'exec'] + # Use MYSQL_PWD env var to avoid passing password on CLI if password: - cmd.append(f'-p{password}') + cmd.extend(['-e', f'MYSQL_PWD={password}']) + cmd.extend([container_name, 'mysql', '-u', user]) cmd.extend([ '-D', database, '-e', query, diff --git a/backend/app/services/discovery_service.py b/backend/app/services/discovery_service.py new file mode 100644 index 00000000..0966225e --- /dev/null +++ b/backend/app/services/discovery_service.py @@ -0,0 +1,146 @@ +""" +Discovery Service + +Handles auto-discovery of new servers on the local network. +Uses UDP broadcast to find agents. +""" + +import socket +import json +import hmac +import hashlib +import threading +import time +import logging +from datetime import datetime +from typing import List, Dict, Optional + +from app import db +from app.models.server import Server + +logger = logging.getLogger(__name__) + + +class DiscoveryService: + """ + Service for discovering new servers/agents on the network. + """ + + # Maximum allowed age for discovery responses (seconds) + MAX_RESPONSE_AGE = 30 + + def __init__(self, port=9000, secret_key: Optional[str] = None): + self.port = port + self.secret_key = secret_key + self._discovered_agents = {} # agent_id -> info + self._is_scanning = False + self._lock = threading.Lock() + + def _sign_discovery_request(self, request_data: dict, secret_key: str) -> dict: + """Sign a discovery request with HMAC.""" + request_data['timestamp'] = int(time.time() * 1000) + message = json.dumps(request_data, sort_keys=True) + signature = hmac.new(secret_key.encode(), message.encode(), hashlib.sha256).hexdigest() + request_data['signature'] = signature + return request_data + + def start_scan(self, duration=10) -> List[Dict]: + """ + Start a network scan for agents. 
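+
+        When secret_key is set, the broadcast is signed as in
+        _sign_discovery_request: the HMAC-SHA256 covers the sorted-key
+        JSON of the payload (without the signature field), e.g.:
+
+            {"signature": "<hex hmac digest>",
+             "timestamp": 1712000000000,
+             "type": "discovery_request"}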
+ """ + if self._is_scanning: + return list(self._discovered_agents.values()) + + self._is_scanning = True + self._discovered_agents = {} + + # Start listening thread + listen_thread = threading.Thread(target=self._listen_for_responses, daemon=True) + listen_thread.start() + + # Send broadcast requests + self._send_broadcast_request() + + # Wait for duration + time.sleep(duration) + + self._is_scanning = False + return list(self._discovered_agents.values()) + + def _send_broadcast_request(self): + """Send UDP broadcast request to discover agents""" + try: + sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) + sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1) + sock.settimeout(2) + + request_data = { + 'type': 'discovery_request', + 'timestamp': int(time.time() * 1000) + } + + # Sign the request if a shared secret is configured + if self.secret_key: + request_data = self._sign_discovery_request(request_data, self.secret_key) + + message = json.dumps(request_data) + + # Broadcast to common local networks + sock.sendto(message.encode(), ('255.255.255.255', self.port)) + sock.close() + except Exception as e: + logger.error(f"Error sending discovery broadcast: {e}") + + def _listen_for_responses(self): + """Listen for UDP discovery responses from agents""" + try: + sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) + sock.bind(('', self.port + 1)) # Listen on port + 1 + sock.settimeout(1) + + start_time = time.time() + while self._is_scanning: + try: + data, addr = sock.recvfrom(4096) + try: + info = json.loads(data.decode()) + if info.get('type') == 'discovery': + # Validate timestamp to prevent replay attacks + response_ts = info.get('timestamp') + if response_ts: + age_seconds = abs(time.time() * 1000 - response_ts) / 1000 + if age_seconds > self.MAX_RESPONSE_AGE: + logger.warning(f"Stale discovery response from {addr[0]} (age: {age_seconds:.0f}s), ignoring") + continue + + agent_id = info.get('agent_id') + if agent_id: + # Add IP address from sender + info['ip_address'] = addr[0] + + # Check if already registered + server = Server.query.filter_by(agent_id=agent_id).first() + info['is_registered'] = server is not None + if server: + info['server_name'] = server.name + info['server_id'] = server.id + + with self._lock: + self._discovered_agents[agent_id] = info + else: + logger.warning(f"Discovery response from {addr[0]} missing agent_id") + except Exception as e: + logger.error(f"Error parsing discovery response: {e}") + except socket.timeout: + continue + sock.close() + except Exception as e: + logger.error(f"Error in discovery listener: {e}") + + def get_discovered_agents(self) -> List[Dict]: + """Get currently discovered agents""" + with self._lock: + return list(self._discovered_agents.values()) + + +discovery_service = DiscoveryService() diff --git a/backend/app/services/dns_zone_service.py b/backend/app/services/dns_zone_service.py new file mode 100644 index 00000000..b7a8f747 --- /dev/null +++ b/backend/app/services/dns_zone_service.py @@ -0,0 +1,306 @@ +import json +import logging +from datetime import datetime +from app import db +from app.models.dns_zone import DNSZone, DNSRecord + +logger = logging.getLogger(__name__) + + +class DNSZoneService: + """Service for DNS zone and record management.""" + + RECORD_TYPES = ['A', 'AAAA', 'CNAME', 'MX', 'TXT', 'SRV', 'CAA', 'NS'] + + DNS_PRESETS = { + 'web-hosting': { + 'label': 'Standard Web Hosting', + 'records': [ + {'record_type': 'A', 'name': '@', 'content': '{{server_ip}}', 'ttl': 3600}, + {'record_type': 'A', 
'name': 'www', 'content': '{{server_ip}}', 'ttl': 3600}, + {'record_type': 'CNAME', 'name': 'mail', 'content': '{{domain}}', 'ttl': 3600}, + ], + }, + 'email-hosting': { + 'label': 'Email Hosting', + 'records': [ + {'record_type': 'MX', 'name': '@', 'content': 'mail.{{domain}}', 'priority': 10, 'ttl': 3600}, + {'record_type': 'TXT', 'name': '@', 'content': 'v=spf1 mx -all', 'ttl': 3600}, + {'record_type': 'TXT', 'name': '_dmarc', 'content': 'v=DMARC1; p=quarantine; rua=mailto:dmarc@{{domain}}', 'ttl': 3600}, + ], + }, + } + + @staticmethod + def list_zones(): + return DNSZone.query.order_by(DNSZone.domain).all() + + @staticmethod + def get_zone(zone_id): + return DNSZone.query.get(zone_id) + + @staticmethod + def create_zone(data): + domain = data.get('domain', '').strip().lower() + if not domain: + raise ValueError('Domain required') + if DNSZone.query.filter_by(domain=domain).first(): + raise ValueError(f'Zone for {domain} already exists') + + zone = DNSZone( + domain=domain, + provider=data.get('provider', 'manual'), + provider_zone_id=data.get('provider_zone_id'), + ) + if data.get('provider_config'): + zone.provider_config = data['provider_config'] + + db.session.add(zone) + db.session.commit() + return zone + + @staticmethod + def delete_zone(zone_id): + zone = DNSZone.query.get(zone_id) + if not zone: + return False + db.session.delete(zone) + db.session.commit() + return True + + # --- Records --- + + @staticmethod + def get_records(zone_id): + return DNSRecord.query.filter_by(zone_id=zone_id).order_by( + DNSRecord.record_type, DNSRecord.name + ).all() + + @staticmethod + def create_record(zone_id, data): + zone = DNSZone.query.get(zone_id) + if not zone: + raise ValueError('Zone not found') + + record_type = data.get('record_type', '').upper() + if record_type not in DNSZoneService.RECORD_TYPES: + raise ValueError(f'Invalid record type: {record_type}') + + record = DNSRecord( + zone_id=zone_id, + record_type=record_type, + name=data.get('name', '@'), + content=data.get('content', ''), + ttl=data.get('ttl', 3600), + priority=data.get('priority'), + proxied=data.get('proxied', False), + ) + db.session.add(record) + db.session.commit() + + # Sync to provider if configured + if zone.provider != 'manual': + DNSZoneService._sync_record_to_provider(zone, record, 'create') + + return record + + @staticmethod + def update_record(record_id, data): + record = DNSRecord.query.get(record_id) + if not record: + return None + for field in ['name', 'content', 'ttl', 'priority', 'proxied']: + if field in data: + setattr(record, field, data[field]) + db.session.commit() + + zone = record.zone + if zone.provider != 'manual': + DNSZoneService._sync_record_to_provider(zone, record, 'update') + + return record + + @staticmethod + def delete_record(record_id): + record = DNSRecord.query.get(record_id) + if not record: + return False + zone = record.zone + if zone.provider != 'manual' and record.provider_record_id: + DNSZoneService._sync_record_to_provider(zone, record, 'delete') + db.session.delete(record) + db.session.commit() + return True + + @staticmethod + def apply_preset(zone_id, preset_key, variables=None): + if preset_key not in DNSZoneService.DNS_PRESETS: + raise ValueError(f'Unknown preset: {preset_key}') + + zone = DNSZone.query.get(zone_id) + if not zone: + raise ValueError('Zone not found') + + preset = DNSZoneService.DNS_PRESETS[preset_key] + variables = variables or {} + variables.setdefault('domain', zone.domain) + + records = [] + for rec_data in preset['records']: + data = 
dict(rec_data) + for field in ['name', 'content']: + for var_name, var_val in variables.items(): + data[field] = data[field].replace('{{' + var_name + '}}', var_val) + record = DNSZoneService.create_record(zone_id, data) + records.append(record) + + return records + + @staticmethod + def check_propagation(domain, record_type='A'): + """Check DNS propagation across multiple nameservers.""" + import socket + + nameservers = [ + ('Google', '8.8.8.8'), + ('Cloudflare', '1.1.1.1'), + ('OpenDNS', '208.67.222.222'), + ('Quad9', '9.9.9.9'), + ] + + results = [] + for ns_name, ns_ip in nameservers: + try: + from app.utils.system import run_command + result = run_command(['dig', f'@{ns_ip}', domain, record_type, '+short'], timeout=5) + stdout = result.get('stdout', '').strip() + results.append({ + 'nameserver': ns_name, + 'ip': ns_ip, + 'result': stdout.split('\n') if stdout else [], + 'propagated': bool(stdout), + }) + except Exception: + results.append({ + 'nameserver': ns_name, + 'ip': ns_ip, + 'result': [], + 'propagated': False, + 'error': 'Query failed', + }) + + return results + + @staticmethod + def export_zone(zone_id): + """Export zone in BIND format.""" + zone = DNSZone.query.get(zone_id) + if not zone: + return None + + records = DNSZoneService.get_records(zone_id) + lines = [f'; Zone file for {zone.domain}', f'$ORIGIN {zone.domain}.', f'$TTL 3600', ''] + + for rec in records: + name = rec.name if rec.name != '@' else zone.domain + '.' + if rec.record_type == 'MX': + lines.append(f'{name}\t{rec.ttl}\tIN\t{rec.record_type}\t{rec.priority or 10}\t{rec.content}') + elif rec.record_type == 'SRV': + lines.append(f'{name}\t{rec.ttl}\tIN\t{rec.record_type}\t{rec.priority or 0}\t{rec.content}') + else: + lines.append(f'{name}\t{rec.ttl}\tIN\t{rec.record_type}\t{rec.content}') + + return '\n'.join(lines) + + @staticmethod + def import_zone(zone_id, bind_content): + """Import records from BIND zone file format.""" + zone = DNSZone.query.get(zone_id) + if not zone: + raise ValueError('Zone not found') + + records_created = [] + for line in bind_content.strip().split('\n'): + line = line.strip() + if not line or line.startswith(';') or line.startswith('$'): + continue + parts = line.split() + if len(parts) < 4: + continue + # Try to parse: name ttl IN type content + try: + if parts[2] == 'IN': + name = parts[0].rstrip('.') + ttl = int(parts[1]) + rtype = parts[3] + content = ' '.join(parts[4:]) + if name == zone.domain: + name = '@' + record = DNSZoneService.create_record(zone_id, { + 'record_type': rtype, 'name': name, + 'content': content, 'ttl': ttl, + }) + records_created.append(record) + except (ValueError, IndexError): + continue + + return records_created + + @staticmethod + def get_presets(): + return DNSZoneService.DNS_PRESETS + + @staticmethod + def _sync_record_to_provider(zone, record, action): + """Sync a DNS record change to the configured provider.""" + provider = zone.provider + config = zone.provider_config + + try: + if provider == 'cloudflare': + DNSZoneService._cloudflare_sync(zone, record, action, config) + except Exception as e: + logger.error(f'DNS provider sync failed: {e}') + + @staticmethod + def _cloudflare_sync(zone, record, action, config): + """Sync record to Cloudflare API.""" + import requests + + api_token = config.get('api_token') + if not api_token: + return + + headers = {'Authorization': f'Bearer {api_token}', 'Content-Type': 'application/json'} + base_url = f'https://api.cloudflare.com/client/v4/zones/{zone.provider_zone_id}/dns_records' + + if action == 
'create': + payload = { + 'type': record.record_type, + 'name': record.name, + 'content': record.content, + 'ttl': record.ttl, + 'proxied': record.proxied, + } + if record.priority is not None: + payload['priority'] = record.priority + resp = requests.post(base_url, json=payload, headers=headers, timeout=10) + if resp.ok: + data = resp.json() + record.provider_record_id = data.get('result', {}).get('id') + db.session.commit() + + elif action == 'update' and record.provider_record_id: + url = f'{base_url}/{record.provider_record_id}' + payload = { + 'type': record.record_type, + 'name': record.name, + 'content': record.content, + 'ttl': record.ttl, + 'proxied': record.proxied, + } + requests.put(url, json=payload, headers=headers, timeout=10) + + elif action == 'delete' and record.provider_record_id: + url = f'{base_url}/{record.provider_record_id}' + requests.delete(url, headers=headers, timeout=10) diff --git a/backend/app/services/docker_service.py b/backend/app/services/docker_service.py index b478db0c..edad776c 100644 --- a/backend/app/services/docker_service.py +++ b/backend/app/services/docker_service.py @@ -1,9 +1,13 @@ +import logging import subprocess import json import os +import shlex import yaml from datetime import datetime +logger = logging.getLogger(__name__) + class DockerService: """Service for managing Docker containers, images, and compose stacks.""" @@ -23,8 +27,8 @@ def _get_compose_cmd(cls): if result.returncode == 0: cls._compose_cmd = ['docker', 'compose'] return cls._compose_cmd - except Exception: - pass + except Exception as e: + logger.error(f"Failed to detect docker compose v2: {e}") # Fallback to docker-compose (v1) cls._compose_cmd = ['docker-compose'] return cls._compose_cmd @@ -56,7 +60,8 @@ def get_docker_info(): if result.returncode == 0: return json.loads(result.stdout) return None - except Exception: + except Exception as e: + logger.error(f"Failed to get Docker info: {e}") return None # ==================== CONTAINER MANAGEMENT ==================== @@ -89,6 +94,7 @@ def list_containers(all_containers=True): }) return containers except Exception as e: + logger.error(f"Failed to list containers: {e}") return [] @staticmethod @@ -104,7 +110,8 @@ def get_container(container_id): if data: return data[0] return None - except Exception: + except Exception as e: + logger.error(f"Failed to inspect container {container_id}: {e}") return None @staticmethod @@ -138,7 +145,7 @@ def create_container(image, name=None, ports=None, volumes=None, env=None, cmd.append(image) if command: - cmd.extend(command.split()) + cmd.extend(shlex.split(command)) result = subprocess.run(cmd, capture_output=True, text=True) @@ -183,7 +190,7 @@ def run_container(image, name=None, ports=None, volumes=None, env=None, cmd.append(image) if command: - cmd.extend(command.split()) + cmd.extend(shlex.split(command)) result = subprocess.run(cmd, capture_output=True, text=True) @@ -217,6 +224,13 @@ def stop_container(container_id, timeout=10): capture_output=True, text=True ) if result.returncode == 0: + try: + from app.services.workflow_engine import WorkflowEventBus + WorkflowEventBus.emit('app_stopped', { + 'container_id': container_id + }) + except Exception as e: + logger.error(f"Failed to emit app_stopped event: {e}") return {'success': True} return {'success': False, 'error': result.stderr} except Exception as e: @@ -305,7 +319,8 @@ def stream_container_logs(container_id, tail=100, since=None, timestamps=True): bufsize=1 ) return process - except Exception: + except Exception as e: + 
logger.error(f"Failed to start log stream for container {container_id}: {e}") return None @staticmethod @@ -470,7 +485,8 @@ def get_container_stats(container_id): if result.returncode == 0 and result.stdout.strip(): return json.loads(result.stdout.strip()) return None - except Exception: + except Exception as e: + logger.error(f"Failed to get stats for container {container_id}: {e}") return None @staticmethod @@ -483,7 +499,7 @@ def exec_command(container_id, command, interactive=False, tty=False): if tty: cmd.append('-t') cmd.append(container_id) - cmd.extend(command.split()) + cmd.extend(shlex.split(command)) result = subprocess.run(cmd, capture_output=True, text=True, timeout=60) return { @@ -522,7 +538,8 @@ def list_images(): 'created': image.get('CreatedAt'), }) return images - except Exception: + except Exception as e: + logger.error(f"Failed to list images: {e}") return [] @staticmethod @@ -612,7 +629,8 @@ def list_networks(): 'scope': network.get('Scope'), }) return networks - except Exception: + except Exception as e: + logger.error(f"Failed to list networks: {e}") return [] @staticmethod @@ -666,7 +684,8 @@ def list_volumes(): 'mountpoint': volume.get('Mountpoint'), }) return volumes - except Exception: + except Exception as e: + logger.error(f"Failed to list volumes: {e}") return [] @staticmethod @@ -779,7 +798,8 @@ def compose_ps(cls, project_path): continue return containers return [] - except Exception: + except Exception as e: + logger.error(f"Failed to list compose services: {e}") return [] @classmethod @@ -899,7 +919,8 @@ def get_disk_usage(): if line: usage.append(json.loads(line)) return usage - except Exception: + except Exception as e: + logger.error(f"Failed to get Docker disk usage: {e}") return [] @staticmethod diff --git a/backend/app/services/environment_health_service.py b/backend/app/services/environment_health_service.py index 6398c403..8fcdc6d4 100644 --- a/backend/app/services/environment_health_service.py +++ b/backend/app/services/environment_health_service.py @@ -90,6 +90,18 @@ def check_health(cls, site_id: int) -> Dict: site.last_health_check = datetime.utcnow() db.session.commit() + # Emit event for workflow triggers on unhealthy status + if overall in ('unhealthy', 'degraded'): + try: + from app.services.workflow_engine import WorkflowEventBus + WorkflowEventBus.emit('health_check_failed', { + 'site_id': site_id, + 'overall_status': overall, + 'checks': checks + }) + except Exception: + pass + return { 'success': True, 'site_id': site_id, diff --git a/backend/app/services/file_service.py b/backend/app/services/file_service.py index ab02a7fc..82fc0f7f 100644 --- a/backend/app/services/file_service.py +++ b/backend/app/services/file_service.py @@ -299,8 +299,13 @@ def rename(cls, old_path: str, new_name: str) -> Dict: if not os.path.exists(old_path): return {'success': False, 'error': 'Path not found'} + # Validate new_name has no path separators + if '/' in new_name or '\\' in new_name or '..' 
in new_name: + return {'success': False, 'error': 'Invalid filename: path separators not allowed'} + new_path = os.path.join(os.path.dirname(old_path), new_name) + # Re-validate the constructed path if not cls.is_path_allowed(new_path): return {'success': False, 'error': 'Access denied: target path not allowed'} @@ -370,6 +375,9 @@ def change_permissions(cls, path: str, mode: str) -> Dict: try: # Convert octal string to int mode_int = int(mode, 8) + # Validate permission mode + if mode_int < 0o000 or mode_int > 0o777: + return {'success': False, 'error': 'Invalid permission mode. Must be between 000 and 777.'} os.chmod(path, mode_int) return {'success': True, 'path': path, 'mode': mode} except ValueError: diff --git a/backend/app/services/fleet_monitor_service.py b/backend/app/services/fleet_monitor_service.py new file mode 100644 index 00000000..3dd134a0 --- /dev/null +++ b/backend/app/services/fleet_monitor_service.py @@ -0,0 +1,613 @@ +""" +Fleet Monitor Service + +Provides fleet-wide monitoring: heatmap, comparison charts, alert thresholds, +anomaly detection, capacity forecasting, fleet search, and metrics export. +""" + +import io +import csv +import math +import logging +from datetime import datetime, timedelta +from typing import List, Dict, Optional, Any + +from sqlalchemy import func, and_ + +from app import db +from app.models.server import Server, ServerMetrics, ServerGroup +from app.models.metric_alert import ServerAlertThreshold, MetricAlert +from app.services.agent_registry import agent_registry + +logger = logging.getLogger(__name__) + +METRIC_COLUMNS = { + 'cpu': 'cpu_percent', + 'memory': 'memory_percent', + 'disk': 'disk_percent', + 'network_rx': 'network_rx_rate', + 'network_tx': 'network_tx_rate', +} + + +class FleetMonitorService: + """Service for fleet-wide monitoring and alerting.""" + + # ==================== Fleet Heatmap ==================== + + @staticmethod + def get_fleet_heatmap(group_id: str = None) -> List[Dict]: + """Get latest metrics for all servers, shaped for heatmap display.""" + query = Server.query + if group_id: + query = query.filter_by(group_id=group_id) + + servers = query.all() + result = [] + + for server in servers: + latest = ServerMetrics.query.filter_by( + server_id=server.id + ).order_by(ServerMetrics.timestamp.desc()).first() + + result.append({ + 'id': server.id, + 'name': server.name, + 'status': server.status, + 'group_id': server.group_id, + 'group_name': server.group.name if server.group else None, + 'cpu': round(latest.cpu_percent, 1) if latest and latest.cpu_percent is not None else None, + 'memory': round(latest.memory_percent, 1) if latest and latest.memory_percent is not None else None, + 'disk': round(latest.disk_percent, 1) if latest and latest.disk_percent is not None else None, + 'containers': latest.container_running if latest else None, + 'last_update': latest.timestamp.isoformat() if latest else None, + }) + + return result + + # ==================== Comparison Timeseries ==================== + + @staticmethod + def get_comparison_timeseries( + server_ids: List[str], metric: str = 'cpu', period: str = '24h' + ) -> Dict: + """Get time-series data for multiple servers for overlay charting.""" + hours_back = {'1h': 1, '6h': 6, '24h': 24, '7d': 168, '30d': 720}.get(period, 24) + interval = {'1h': 1, '6h': 5, '24h': 15, '7d': 60, '30d': 360}.get(period, 15) + cutoff = datetime.utcnow() - timedelta(hours=hours_back) + col_name = METRIC_COLUMNS.get(metric, 'cpu_percent') + + series = [] + for server_id in server_ids: + server = 
Server.query.get(server_id) + if not server: + continue + + records = ServerMetrics.query.filter( + ServerMetrics.server_id == server_id, + ServerMetrics.timestamp >= cutoff + ).order_by(ServerMetrics.timestamp.asc()).all() + + # Downsample for longer periods + if interval > 1 and len(records) > 0: + records = _downsample_simple(records, interval) + + data = [] + for r in records: + val = getattr(r, col_name, None) + if val is not None: + data.append({ + 'timestamp': r.timestamp.isoformat(), + 'value': round(val, 2) + }) + + series.append({ + 'server_id': server_id, + 'name': server.name, + 'data': data + }) + + return {'metric': metric, 'period': period, 'series': series} + + # ==================== Alert Thresholds ==================== + + @staticmethod + def get_thresholds(server_id: str = None) -> List[Dict]: + """Get alert thresholds, optionally filtered by server.""" + query = ServerAlertThreshold.query + if server_id: + query = query.filter( + (ServerAlertThreshold.server_id == server_id) | + (ServerAlertThreshold.server_id == None) + ) + return [t.to_dict() for t in query.all()] + + @staticmethod + def upsert_threshold(data: Dict) -> Dict: + """Create or update an alert threshold.""" + threshold_id = data.get('id') + if threshold_id: + threshold = ServerAlertThreshold.query.get(threshold_id) + if not threshold: + return {'error': 'Threshold not found'} + else: + threshold = ServerAlertThreshold() + db.session.add(threshold) + + threshold.server_id = data.get('server_id') + threshold.metric = data.get('metric', 'cpu') + threshold.warning_threshold = data.get('warning_threshold', 80.0) + threshold.critical_threshold = data.get('critical_threshold', 95.0) + threshold.duration_seconds = data.get('duration_seconds', 300) + threshold.enabled = data.get('enabled', True) + + db.session.commit() + return threshold.to_dict() + + @staticmethod + def delete_threshold(threshold_id: str) -> bool: + """Delete an alert threshold.""" + threshold = ServerAlertThreshold.query.get(threshold_id) + if not threshold: + return False + db.session.delete(threshold) + db.session.commit() + return True + + # ==================== Alert Checking ==================== + + @staticmethod + def check_fleet_thresholds(): + """Check all online servers against their thresholds. 
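Per-server thresholds take precedence over fleet-wide defaults, and an alert
only fires when the average over the configured duration window crosses a threshold.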
Call periodically."""
+        thresholds = ServerAlertThreshold.query.filter_by(enabled=True).all()
+        if not thresholds:
+            return
+
+        # Group thresholds: per-server overrides + global defaults
+        global_thresholds = {}
+        server_thresholds = {}
+        for t in thresholds:
+            if t.server_id:
+                server_thresholds.setdefault(t.server_id, {})[t.metric] = t
+            else:
+                global_thresholds[t.metric] = t
+
+        servers = Server.query.filter_by(status='online').all()
+
+        for server in servers:
+            for metric_name, col_name in METRIC_COLUMNS.items():
+                # Find applicable threshold (server-specific > global)
+                threshold = server_thresholds.get(server.id, {}).get(metric_name)
+                if not threshold:
+                    threshold = global_thresholds.get(metric_name)
+                if not threshold:
+                    continue
+
+                # Get recent metrics for the duration window
+                cutoff = datetime.utcnow() - timedelta(seconds=threshold.duration_seconds)
+                recent = ServerMetrics.query.filter(
+                    ServerMetrics.server_id == server.id,
+                    ServerMetrics.timestamp >= cutoff
+                ).all()
+
+                if not recent:
+                    continue
+
+                values = [getattr(r, col_name) for r in recent if getattr(r, col_name) is not None]
+                if not values:
+                    continue
+
+                avg_val = sum(values) / len(values)
+
+                # Check if there's already an active alert for this server+metric
+                existing = MetricAlert.query.filter_by(
+                    server_id=server.id, metric=metric_name, status='active'
+                ).first()
+
+                if avg_val >= threshold.critical_threshold:
+                    if not existing or existing.severity != 'critical':
+                        if existing:
+                            existing.resolved_at = datetime.utcnow()
+                            existing.status = 'resolved'
+                        alert = MetricAlert(
+                            server_id=server.id,
+                            metric=metric_name,
+                            severity='critical',
+                            value=round(avg_val, 1),
+                            threshold=threshold.critical_threshold,
+                            duration_seconds=threshold.duration_seconds
+                        )
+                        db.session.add(alert)
+                elif avg_val >= threshold.warning_threshold:
+                    # Create a warning alert unless one is already active;
+                    # an active critical alert is resolved and downgraded.
+                    if not existing or existing.severity == 'critical':
+                        if existing:
+                            existing.resolved_at = datetime.utcnow()
+                            existing.status = 'resolved'
+                        alert = MetricAlert(
+                            server_id=server.id,
+                            metric=metric_name,
+                            severity='warning',
+                            value=round(avg_val, 1),
+                            threshold=threshold.warning_threshold,
+                            duration_seconds=threshold.duration_seconds
+                        )
+                        db.session.add(alert)
+                else:
+                    # Below thresholds - resolve any active alert
+                    if existing:
+                        existing.resolved_at = datetime.utcnow()
+                        existing.status = 'resolved'
+
+        try:
+            db.session.commit()
+        except Exception as e:
+            logger.error(f"Error checking fleet thresholds: {e}")
+            db.session.rollback()
+
+    @staticmethod
+    def get_alerts(
+        status: str = None, severity: str = None,
+        server_id: str = None, limit: int = 50
+    ) -> List[Dict]:
+        """Get metric alerts with filters."""
+        query = MetricAlert.query.order_by(MetricAlert.created_at.desc())
+        if status:
+            query = query.filter_by(status=status)
+        if severity:
+            query = query.filter_by(severity=severity)
+        if server_id:
+            query = query.filter_by(server_id=server_id)
+        return [a.to_dict() for a in query.limit(limit).all()]
+
+    @staticmethod
+    def acknowledge_alert(alert_id: str, user_id: int) -> bool:
+        alert = MetricAlert.query.get(alert_id)
+        if not alert or alert.status != 'active':
+            return False
+        alert.status = 'acknowledged'
+        alert.acknowledged_by = user_id
+        db.session.commit()
+        return True
+
+    @staticmethod
+    def resolve_alert(alert_id: str) -> bool:
+        alert = MetricAlert.query.get(alert_id)
+        if not alert or alert.status == 'resolved':
+            return False
+        alert.status = 'resolved'
+        alert.resolved_at = datetime.utcnow()
+        db.session.commit()
+        return True
+
+    # ==================== Anomaly Detection ====================
+
+    @staticmethod
+    def detect_anomalies(server_id: str = None) -> List[Dict]:
+        """Simple z-score check of the latest sample against the last 7 days of metrics."""
+        servers = [Server.query.get(server_id)] if server_id else Server.query.filter_by(status='online').all()
+        cutoff = datetime.utcnow() - timedelta(days=7)
+        anomalies = []
+
+        for server in servers:
+            if not server:
+                continue
+
+            records = ServerMetrics.query.filter(
+                ServerMetrics.server_id == server.id,
+                ServerMetrics.timestamp >= cutoff
+            ).order_by(ServerMetrics.timestamp.asc()).all()
+
+            if len(records) < 20:
+                continue
+
+            # Records are ordered ascending, so the last one is the newest sample
+            latest = records[-1]
+
+            for metric_name, col_name in [('cpu', 'cpu_percent'), ('memory', 'memory_percent'), ('disk', 'disk_percent')]:
+                values = [getattr(r, col_name) for r in records if getattr(r, col_name) is not None]
+                if len(values) < 10:
+                    continue
+
+                mean = sum(values) / len(values)
+                variance = sum((v - mean) ** 2 for v in values) / len(values)
+                stddev = math.sqrt(variance) if variance > 0 else 0
+
+                current = getattr(latest, col_name)
+                if current is None or stddev == 0:
+                    continue
+
+                z_score = (current - mean) / stddev
+
+                if abs(z_score) > 2.5:
+                    anomalies.append({
+                        'server_id': server.id,
+                        'server_name': server.name,
+                        'metric': metric_name,
+                        'current_value': round(current, 1),
+                        'mean': round(mean, 1),
+                        'stddev': round(stddev, 1),
+                        'z_score': round(z_score, 2),
+                        'direction': 'high' if z_score > 0 else 'low',
+                    })
+
+        return anomalies
+
+    # ==================== Capacity Forecasting ====================
+
+    @staticmethod
+    def forecast_capacity(server_id: str, metric: str = 'disk') -> Dict:
+        """Linear regression forecast for when a metric will hit 90% and 100%."""
+        col_name = METRIC_COLUMNS.get(metric, 'disk_percent')
+        cutoff = datetime.utcnow() - timedelta(days=30)
+
+        records = ServerMetrics.query.filter(
+            ServerMetrics.server_id == server_id,
+            ServerMetrics.timestamp >= cutoff
+        ).order_by(ServerMetrics.timestamp.asc()).all()
+
+        values = []
+        for r in records:
+            val = getattr(r, col_name)
+            if val is not None:
+                # Collect (timestamp, value) pairs; aggregated to daily buckets below
+                values.append((r.timestamp, val))
+
+        if len(values) < 48:  # Need at least 2 days of data
+            return {
+                'server_id': server_id,
+                'metric': metric,
+                'error': 'Insufficient data (need at least 2 days of metrics)',
+                'data_points': len(values)
+            }
+
+        # Aggregate to daily averages for regression
+        daily = {}
+        for ts, val in values:
+            day_key = ts.strftime('%Y-%m-%d')
+            daily.setdefault(day_key, []).append(val)
+        daily_avgs = [(k, sum(v) / len(v)) for k, v in sorted(daily.items())]
+
+        if len(daily_avgs) < 2:
+            return {
+                'server_id': server_id,
+                'metric': metric,
+                'error': 'Insufficient daily data points',
+                'data_points': len(daily_avgs)
+            }
+
+        # Simple linear regression: y = mx + b
+        n = len(daily_avgs)
+        x_vals = list(range(n))
+        y_vals = [v for _, v in daily_avgs]
+
+        x_mean = sum(x_vals) / n
+        y_mean = sum(y_vals) / n
+
+        numerator = sum((x - x_mean) * (y - y_mean) for x, y in zip(x_vals, y_vals))
+        denominator = sum((x - x_mean) ** 2 for x in x_vals)
+
+        if denominator == 0:
+            return {'server_id': server_id, 'metric': metric, 'trend': 'flat', 'growth_rate_per_day': 0}
+
+        slope = numerator / denominator  # % per day
+        intercept = y_mean - slope * x_mean
+
+        current = y_vals[-1]
+
+        # Predict when metric reaches 90% and 100%
+        predictions = {}
+        for target in [90, 100]:
+            if 
current >= target: + predictions[f'days_to_{target}pct'] = 0 + predictions[f'date_{target}pct'] = 'already exceeded' + elif slope <= 0: + predictions[f'days_to_{target}pct'] = None + predictions[f'date_{target}pct'] = 'never (decreasing trend)' + else: + days_needed = (target - current) / slope + target_date = datetime.utcnow() + timedelta(days=days_needed) + predictions[f'days_to_{target}pct'] = round(days_needed, 1) + predictions[f'date_{target}pct'] = target_date.strftime('%Y-%m-%d') + + # Generate trend line data for charting + trend_data = [] + for i, (day, avg) in enumerate(daily_avgs): + trend_data.append({ + 'date': day, + 'actual': round(avg, 1), + 'trend': round(intercept + slope * i, 1) + }) + + # Extend trend 14 days into the future + for i in range(1, 15): + future_day = (datetime.utcnow() + timedelta(days=i)).strftime('%Y-%m-%d') + predicted = intercept + slope * (n - 1 + i) + trend_data.append({ + 'date': future_day, + 'actual': None, + 'trend': round(min(predicted, 100), 1) + }) + + return { + 'server_id': server_id, + 'metric': metric, + 'current_value': round(current, 1), + 'growth_rate_per_day': round(slope, 3), + 'trend': 'increasing' if slope > 0.1 else ('decreasing' if slope < -0.1 else 'stable'), + 'predictions': predictions, + 'trend_data': trend_data, + 'data_points': len(values), + 'daily_samples': len(daily_avgs), + } + + # ==================== Fleet Search ==================== + + @staticmethod + def search_fleet(query: str, search_type: str = 'any') -> List[Dict]: + """Search across all servers for containers, services, or ports.""" + query_lower = query.lower() + results = [] + + # Search server names and hostnames + if search_type in ('any', 'server'): + servers = Server.query.filter( + (Server.name.ilike(f'%{query}%')) | + (Server.hostname.ilike(f'%{query}%')) | + (Server.ip_address.ilike(f'%{query}%')) + ).all() + for s in servers: + results.append({ + 'server_id': s.id, + 'server_name': s.name, + 'match_type': 'server', + 'match_name': s.name, + 'match_detail': f'{s.hostname} ({s.ip_address})', + 'status': s.status, + }) + + # Search containers via connected agents + if search_type in ('any', 'container'): + connected = agent_registry.get_connected_servers() + for server_id in connected: + server = Server.query.get(server_id) + if not server: + continue + + # Check cached metrics extra data for container info + latest = ServerMetrics.query.filter_by( + server_id=server_id + ).order_by(ServerMetrics.timestamp.desc()).first() + + if latest and latest.extra: + containers = latest.extra.get('containers', []) + for container in containers: + name = container.get('name', '') + image = container.get('image', '') + if query_lower in name.lower() or query_lower in image.lower(): + results.append({ + 'server_id': server_id, + 'server_name': server.name, + 'match_type': 'container', + 'match_name': name, + 'match_detail': image, + 'status': container.get('status', 'unknown'), + }) + + # Search tags + if search_type in ('any', 'tag'): + all_servers = Server.query.all() + for s in all_servers: + if s.tags: + for tag in s.tags: + if query_lower in tag.lower(): + results.append({ + 'server_id': s.id, + 'server_name': s.name, + 'match_type': 'tag', + 'match_name': tag, + 'match_detail': f'Tag on {s.name}', + 'status': s.status, + }) + + return results + + # ==================== Prometheus Export ==================== + + @staticmethod + def get_prometheus_metrics() -> str: + """Generate Prometheus exposition format metrics for all servers.""" + lines = [] + servers = 
Server.query.all() + + metrics_defs = [ + ('serverkit_cpu_percent', 'CPU usage percentage', 'cpu_percent'), + ('serverkit_memory_percent', 'Memory usage percentage', 'memory_percent'), + ('serverkit_disk_percent', 'Disk usage percentage', 'disk_percent'), + ('serverkit_containers_running', 'Number of running containers', 'container_running'), + ] + + for metric_name, help_text, col_name in metrics_defs: + lines.append(f'# HELP {metric_name} {help_text}') + lines.append(f'# TYPE {metric_name} gauge') + + for server in servers: + latest = ServerMetrics.query.filter_by( + server_id=server.id + ).order_by(ServerMetrics.timestamp.desc()).first() + + if latest: + val = getattr(latest, col_name) + if val is not None: + safe_name = server.name.replace('"', '\\"') + lines.append( + f'{metric_name}{{server="{safe_name}",server_id="{server.id}"}} {val}' + ) + lines.append('') + + # Server status (1 = online, 0 = offline) + lines.append('# HELP serverkit_server_up Server online status') + lines.append('# TYPE serverkit_server_up gauge') + for server in servers: + val = 1 if server.status == 'online' else 0 + safe_name = server.name.replace('"', '\\"') + lines.append( + f'serverkit_server_up{{server="{safe_name}",server_id="{server.id}"}} {val}' + ) + + return '\n'.join(lines) + '\n' + + # ==================== CSV Export ==================== + + @staticmethod + def export_metrics_csv(server_ids: List[str], metric: str = 'cpu', period: str = '24h') -> str: + """Export metrics as CSV string.""" + hours_back = {'1h': 1, '6h': 6, '24h': 24, '7d': 168, '30d': 720}.get(period, 24) + cutoff = datetime.utcnow() - timedelta(hours=hours_back) + col_name = METRIC_COLUMNS.get(metric, 'cpu_percent') + + output = io.StringIO() + writer = csv.writer(output) + writer.writerow(['timestamp', 'server_id', 'server_name', metric]) + + for server_id in server_ids: + server = Server.query.get(server_id) + if not server: + continue + + records = ServerMetrics.query.filter( + ServerMetrics.server_id == server_id, + ServerMetrics.timestamp >= cutoff + ).order_by(ServerMetrics.timestamp.asc()).all() + + for r in records: + val = getattr(r, col_name) + if val is not None: + writer.writerow([ + r.timestamp.isoformat(), + server_id, + server.name, + round(val, 2) + ]) + + return output.getvalue() + + +def _downsample_simple(records, interval_minutes): + """Simple downsampling by picking one record per interval.""" + if not records: + return [] + + result = [records[0]] + last_ts = records[0].timestamp + + for r in records[1:]: + if (r.timestamp - last_ts).total_seconds() >= interval_minutes * 60: + result.append(r) + last_ts = r.timestamp + + return result + + +fleet_monitor_service = FleetMonitorService() diff --git a/backend/app/services/git_deploy_service.py b/backend/app/services/git_deploy_service.py index 876e81a1..86908e06 100644 --- a/backend/app/services/git_deploy_service.py +++ b/backend/app/services/git_deploy_service.py @@ -447,8 +447,7 @@ def _run_script(cls, script: str, cwd: str) -> Dict: """Run a deployment script.""" try: result = subprocess.run( - script, - shell=True, + ['bash', '-c', script], cwd=cwd, capture_output=True, text=True, diff --git a/backend/app/services/git_service.py b/backend/app/services/git_service.py index 01dc809c..82653638 100644 --- a/backend/app/services/git_service.py +++ b/backend/app/services/git_service.py @@ -286,8 +286,7 @@ def _run_script(cls, script: str, working_dir: str) -> Dict: """Run a deployment script.""" try: result = subprocess.run( - script, - shell=True, + ['bash', '-c', 
script], cwd=working_dir, capture_output=True, text=True, @@ -434,6 +433,17 @@ def handle_webhook(cls, app_id: int, payload: Dict) -> Dict: 'message': f'Ignoring push to {ref}, configured branch is {branch}' } + # Emit event for workflow triggers + try: + from app.services.workflow_engine import WorkflowEventBus + WorkflowEventBus.emit('git_push', { + 'app_id': app_id, + 'branch': branch, + 'ref': ref + }) + except Exception: + pass + # Trigger deployment return cls.deploy(app_id) diff --git a/backend/app/services/marketplace_service.py b/backend/app/services/marketplace_service.py new file mode 100644 index 00000000..154872f1 --- /dev/null +++ b/backend/app/services/marketplace_service.py @@ -0,0 +1,172 @@ +import logging +from app import db +from app.models.marketplace import Extension, ExtensionInstall + +logger = logging.getLogger(__name__) + + +class MarketplaceService: + """Service for the extension marketplace.""" + + CATEGORIES = ['monitoring', 'security', 'deployment', 'integration', 'ui', 'utility'] + + @staticmethod + def list_extensions(category=None, search=None, status='published'): + query = Extension.query + if status: + query = query.filter_by(status=status) + if category: + query = query.filter_by(category=category) + if search: + query = query.filter( + db.or_( + Extension.display_name.ilike(f'%{search}%'), + Extension.description.ilike(f'%{search}%'), + ) + ) + return query.order_by(Extension.download_count.desc()).all() + + @staticmethod + def get_extension(ext_id): + return Extension.query.get(ext_id) + + @staticmethod + def get_extension_by_slug(slug): + return Extension.query.filter_by(slug=slug).first() + + @staticmethod + def create_extension(data, user_id=None): + slug = data.get('slug', data['name'].lower().replace(' ', '-')) + if Extension.query.filter_by(slug=slug).first(): + raise ValueError(f"Extension '{slug}' already exists") + + ext = Extension( + name=data['name'], + display_name=data.get('display_name', data['name']), + slug=slug, + description=data.get('description', ''), + long_description=data.get('long_description', ''), + version=data.get('version', '1.0.0'), + author=data.get('author', ''), + homepage=data.get('homepage', ''), + repository=data.get('repository', ''), + license=data.get('license', 'MIT'), + category=data.get('category', 'utility'), + extension_type=data.get('extension_type', 'integration'), + entry_point=data.get('entry_point', ''), + submitted_by=user_id, + ) + ext.tags = data.get('tags', []) + if data.get('config_schema'): + ext.config_schema = data['config_schema'] + + db.session.add(ext) + db.session.commit() + return ext + + @staticmethod + def update_extension(ext_id, data): + ext = Extension.query.get(ext_id) + if not ext: + return None + for field in ['display_name', 'description', 'long_description', 'version', + 'author', 'homepage', 'repository', 'license', 'category', + 'extension_type', 'entry_point', 'status']: + if field in data: + setattr(ext, field, data[field]) + if 'tags' in data: + ext.tags = data['tags'] + if 'config_schema' in data: + ext.config_schema = data['config_schema'] + db.session.commit() + return ext + + @staticmethod + def publish_extension(ext_id): + ext = Extension.query.get(ext_id) + if not ext: + return None + ext.status = Extension.STATUS_PUBLISHED + db.session.commit() + return ext + + @staticmethod + def delete_extension(ext_id): + ext = Extension.query.get(ext_id) + if not ext: + return False + ExtensionInstall.query.filter_by(extension_id=ext_id).delete() + db.session.delete(ext) + 
db.session.commit() + return True + + # --- Installations --- + + @staticmethod + def install_extension(ext_id, user_id, config=None): + ext = Extension.query.get(ext_id) + if not ext: + raise ValueError('Extension not found') + + existing = ExtensionInstall.query.filter_by( + extension_id=ext_id, user_id=user_id + ).first() + if existing and existing.is_active: + raise ValueError('Extension already installed') + + if existing: + existing.is_active = True + existing.installed_version = ext.version + if config: + existing.config = config + install = existing + else: + install = ExtensionInstall( + extension_id=ext_id, + user_id=user_id, + installed_version=ext.version, + ) + if config: + install.config = config + db.session.add(install) + + ext.download_count += 1 + db.session.commit() + return install + + @staticmethod + def uninstall_extension(install_id): + install = ExtensionInstall.query.get(install_id) + if not install: + return False + install.is_active = False + db.session.commit() + return True + + @staticmethod + def get_user_extensions(user_id): + return ExtensionInstall.query.filter_by(user_id=user_id, is_active=True).all() + + @staticmethod + def update_extension_config(install_id, config): + install = ExtensionInstall.query.get(install_id) + if not install: + return None + install.config = config + db.session.commit() + return install + + @staticmethod + def rate_extension(ext_id, rating): + ext = Extension.query.get(ext_id) + if not ext: + return None + total = ext.rating * ext.rating_count + rating + ext.rating_count += 1 + ext.rating = round(total / ext.rating_count, 2) + db.session.commit() + return ext + + @staticmethod + def get_categories(): + return MarketplaceService.CATEGORIES diff --git a/backend/app/services/migration_service.py b/backend/app/services/migration_service.py index a8f438ac..8e4612a6 100644 --- a/backend/app/services/migration_service.py +++ b/backend/app/services/migration_service.py @@ -31,24 +31,62 @@ def _get_alembic_config(cls, app): cfg.set_main_option('sqlalchemy.url', app.config['SQLALCHEMY_DATABASE_URI']) return cfg + # Map SQLAlchemy type names to SQLite-compatible type strings + _TYPE_MAP = { + 'INTEGER': 'INTEGER', 'BIGINTEGER': 'INTEGER', 'SMALLINTEGER': 'INTEGER', + 'FLOAT': 'REAL', 'NUMERIC': 'REAL', + 'BOOLEAN': 'BOOLEAN', + 'DATETIME': 'DATETIME', 'DATE': 'DATE', 'TIME': 'TIME', + 'TEXT': 'TEXT', 'STRING': 'TEXT', 'VARCHAR': 'TEXT', + 'JSON': 'TEXT', + } + + @classmethod + def _sqlite_type(cls, sa_type): + """Convert a SQLAlchemy column type to a SQLite type string.""" + type_name = type(sa_type).__name__.upper() + return cls._TYPE_MAP.get(type_name, 'TEXT') + @classmethod def _fix_missing_columns(cls, db): - """Add columns that may be missing from existing tables. + """Sync database schema with ORM models. + Compares every model column against the actual database and adds any + that are missing. Also creates tables that don't exist yet. Runs raw SQL before any ORM queries to prevent crashes when models reference columns that don't exist in the database yet. 
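+        Only ADD COLUMN statements are issued; existing columns are never
+        altered or dropped, so repeated runs are safe.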
""" inspector = sa_inspect(db.engine) - existing_tables = inspector.get_table_names() - - if 'users' in existing_tables: - cols = {c['name'] for c in inspector.get_columns('users')} - if 'auth_provider' not in cols: - logger.info('Adding missing column: users.auth_provider') - with db.engine.begin() as conn: - conn.execute(text( - "ALTER TABLE users ADD COLUMN auth_provider VARCHAR(50) DEFAULT 'local'" - )) + existing_tables = set(inspector.get_table_names()) + added = 0 + + for table_name, table_obj in db.metadata.tables.items(): + if table_name not in existing_tables: + # Entire table is missing — create_all will handle it later + continue + + existing_cols = {c['name'] for c in inspector.get_columns(table_name)} + + for col in table_obj.columns: + if col.name in existing_cols: + continue + + sqlite_type = cls._sqlite_type(col.type) + sql = f'ALTER TABLE {table_name} ADD COLUMN {col.name} {sqlite_type}' + + try: + with db.engine.begin() as conn: + conn.execute(text(sql)) + logger.info(f'Auto-added missing column: {table_name}.{col.name} ({sqlite_type})') + added += 1 + except Exception as e: + logger.warning(f'Failed to add {table_name}.{col.name}: {e}') + + # Create any entirely new tables + db.create_all() + + if added: + logger.info(f'Schema sync complete: {added} column(s) added') @classmethod def check_and_prepare(cls, app): diff --git a/backend/app/services/monitoring_service.py b/backend/app/services/monitoring_service.py index 57fd9a18..5a781230 100644 --- a/backend/app/services/monitoring_service.py +++ b/backend/app/services/monitoring_service.py @@ -1,5 +1,6 @@ import os import json +import logging import psutil import smtplib from email.mime.text import MIMEText @@ -10,6 +11,8 @@ import threading import time +logger = logging.getLogger(__name__) + from .notification_service import NotificationService from app import paths @@ -280,6 +283,25 @@ def process_alerts(cls, alerts: List[Dict]) -> None: # Send to all configured notification channels (Discord, Slack, Telegram, etc.) 
NotificationService.send_all(alerts_to_send)
+
+        # Emit events for workflow triggers
+        try:
+            from app.services.workflow_engine import WorkflowEventBus
+            for alert in alerts_to_send:
+                if alert['type'] == 'cpu':
+                    WorkflowEventBus.emit('high_cpu', {
+                        'percent': alert.get('value'),
+                        'threshold': alert.get('threshold'),
+                        'severity': alert.get('severity')
+                    })
+                elif alert['type'] == 'memory':
+                    WorkflowEventBus.emit('high_memory', {
+                        'percent': alert.get('value'),
+                        'threshold': alert.get('threshold'),
+                        'severity': alert.get('severity')
+                    })
+        except Exception:
+            logger.exception("Error emitting workflow events for alerts")
+
     @classmethod
     def log_alert(cls, alerts: List[Dict]) -> None:
         """Log alerts to file."""
diff --git a/backend/app/services/nginx_advanced_service.py b/backend/app/services/nginx_advanced_service.py
new file mode 100644
index 00000000..2a5cb4b6
--- /dev/null
+++ b/backend/app/services/nginx_advanced_service.py
@@ -0,0 +1,201 @@
+import os
+import json
+import logging
+import re
+from app.utils.system import run_command
+
+logger = logging.getLogger(__name__)
+
+
+def _safe_domain(domain):
+    """Reject names that could escape sites-available or inject nginx config."""
+    return bool(re.match(r'^[A-Za-z0-9.-]+$', domain)) and '..' not in domain
+
+
+class NginxAdvancedService:
+    """Advanced Nginx configuration: reverse proxy, load balancing, caching, rate limiting."""
+
+    NGINX_CONF_DIR = '/etc/nginx'
+    SITES_AVAILABLE = '/etc/nginx/sites-available'
+    SITES_ENABLED = '/etc/nginx/sites-enabled'
+
+    @staticmethod
+    def get_proxy_rules(domain):
+        """Get reverse proxy rules for a virtual host."""
+        if not _safe_domain(domain):
+            return {'error': 'Invalid domain'}
+        conf_path = os.path.join(NginxAdvancedService.SITES_AVAILABLE, domain)
+        if not os.path.isfile(conf_path):
+            return {'error': 'Config not found'}
+        try:
+            with open(conf_path, 'r') as f:
+                content = f.read()
+            return {'domain': domain, 'config': content}
+        except Exception as e:
+            return {'error': str(e)}
+
+    @staticmethod
+    def create_reverse_proxy(data):
+        """Create a reverse proxy configuration."""
+        domain = data['domain']
+        if not _safe_domain(domain):
+            return {'error': f'Invalid domain: {domain}'}
+        upstreams = data.get('upstreams', [])
+        lb_method = data.get('lb_method', 'round_robin')
+        cache = data.get('cache', {})
+        rate_limit = data.get('rate_limit', {})
+        headers = data.get('headers', {})
+        locations = data.get('locations', [])
+
+        upstream_name = domain.replace('.', '_')
+
+        lines = []
+
+        # Upstream block
+        if upstreams:
+            lines.append(f'upstream {upstream_name} {{')
+            if lb_method == 'least_conn':
+                lines.append('    least_conn;')
+            elif lb_method == 'ip_hash':
+                lines.append('    ip_hash;')
+            for u in upstreams:
+                weight = f' weight={u["weight"]}' if u.get('weight') else ''
+                lines.append(f'    server {u["address"]}{weight};')
+            lines.append('}')
+            lines.append('')
+
+        # Rate limiting zone
+        if rate_limit.get('enabled'):
+            rps = rate_limit.get('requests_per_second', 10)
+            lines.append(f'limit_req_zone $binary_remote_addr zone={upstream_name}_limit:10m rate={rps}r/s;')
+            lines.append('')
+
+        # Cache zone
+        if cache.get('enabled'):
+            cache_size = cache.get('size', '100m')
+            cache_ttl = cache.get('ttl', '60m')
+            lines.append(f'proxy_cache_path /var/cache/nginx/{upstream_name} levels=1:2 keys_zone={upstream_name}_cache:10m max_size={cache_size} inactive={cache_ttl};')
+            lines.append('')
+
+        # Server block
+        lines.append('server {')
+        lines.append('    listen 80;')
+        lines.append(f'    server_name {domain};')
+        lines.append('')
+
+        # Custom headers
+        for header_name, header_value in headers.get('add', {}).items():
+            lines.append(f'    add_header {header_name} "{header_value}";')
+        for header_name in headers.get('remove', []):
+            lines.append(f'    proxy_hide_header {header_name};')
+
+        if rate_limit.get('enabled'):
+            burst = rate_limit.get('burst', 20)
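+            # Example: rate_limit={'enabled': True, 'requests_per_second': 10, 'burst': 20}
+            # on the (hypothetical) domain example.com emits:
+            #     limit_req zone=example_com_limit burst=20 nodelay;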
+ lines.append(f' limit_req zone={upstream_name}_limit burst={burst} nodelay;') + + lines.append('') + + # Custom location blocks + for loc in locations: + lines.append(f' location {loc["path"]} {{') + if loc.get('proxy_pass'): + lines.append(f' proxy_pass {loc["proxy_pass"]};') + elif upstreams: + lines.append(f' proxy_pass http://{upstream_name};') + lines.append(' proxy_set_header Host $host;') + lines.append(' proxy_set_header X-Real-IP $remote_addr;') + lines.append(' proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;') + lines.append(' proxy_set_header X-Forwarded-Proto $scheme;') + + if cache.get('enabled') and not loc.get('no_cache'): + lines.append(f' proxy_cache {upstream_name}_cache;') + for bypass in cache.get('bypass_rules', []): + lines.append(f' proxy_cache_bypass {bypass};') + + lines.append(' }') + lines.append('') + + # Default location if no custom locations + if not locations: + lines.append(' location / {') + if upstreams: + lines.append(f' proxy_pass http://{upstream_name};') + lines.append(' proxy_set_header Host $host;') + lines.append(' proxy_set_header X-Real-IP $remote_addr;') + lines.append(' proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;') + lines.append(' proxy_set_header X-Forwarded-Proto $scheme;') + if cache.get('enabled'): + lines.append(f' proxy_cache {upstream_name}_cache;') + lines.append(' }') + + lines.append('}') + + config = '\n'.join(lines) + + # Write config + conf_path = os.path.join(NginxAdvancedService.SITES_AVAILABLE, domain) + with open(conf_path, 'w') as f: + f.write(config) + + return {'domain': domain, 'config': config, 'path': conf_path} + + @staticmethod + def test_config(): + """Test nginx config syntax.""" + try: + result = run_command(['nginx', '-t'], capture_stderr=True) + return { + 'valid': True, + 'output': result.get('stdout', '') + result.get('stderr', ''), + } + except Exception as e: + return {'valid': False, 'output': str(e)} + + @staticmethod + def preview_diff(domain, new_config): + """Preview config changes as a diff.""" + conf_path = os.path.join(NginxAdvancedService.SITES_AVAILABLE, domain) + old_config = '' + if os.path.isfile(conf_path): + with open(conf_path, 'r') as f: + old_config = f.read() + + import difflib + diff = list(difflib.unified_diff( + old_config.splitlines(keepends=True), + new_config.splitlines(keepends=True), + fromfile=f'{domain} (current)', + tofile=f'{domain} (new)', + )) + return {'diff': ''.join(diff), 'has_changes': len(diff) > 0} + + @staticmethod + def reload_nginx(): + """Reload nginx configuration.""" + try: + run_command(['sudo', 'nginx', '-s', 'reload']) + return {'success': True} + except Exception as e: + return {'success': False, 'error': str(e)} + + @staticmethod + def get_vhost_logs(domain, log_type='access', lines=100): + """Get access or error log for a virtual host.""" + log_dir = '/var/log/nginx' + if log_type == 'error': + log_file = os.path.join(log_dir, f'{domain}.error.log') + else: + log_file = os.path.join(log_dir, f'{domain}.access.log') + + if not os.path.isfile(log_file): + # Fallback to default logs + log_file = os.path.join(log_dir, f'{log_type}.log') + + if not os.path.isfile(log_file): + return {'lines': [], 'error': 'Log file not found'} + + try: + result = run_command(['tail', '-n', str(lines), log_file]) + log_lines = result.get('stdout', '').strip().split('\n') + return {'lines': log_lines, 'file': log_file} + except Exception as e: + return {'lines': [], 'error': str(e)} + + @staticmethod + def get_load_balancing_methods(): + return { + 
'round_robin': 'Round Robin (default)', + 'least_conn': 'Least Connections', + 'ip_hash': 'IP Hash (sticky sessions)', + } diff --git a/backend/app/services/nginx_service.py b/backend/app/services/nginx_service.py index 3ec1aaa3..3fe20bd4 100644 --- a/backend/app/services/nginx_service.py +++ b/backend/app/services/nginx_service.py @@ -7,6 +7,19 @@ from app.utils.system import ServiceControl, run_privileged, is_command_available +def _validate_domain(domain: str) -> bool: + """Validate domain name to prevent nginx config injection.""" + return bool(re.match(r'^(?:[a-z0-9](?:[a-z0-9\-]{0,61}[a-z0-9])?\.)*[a-z0-9](?:[a-z0-9\-]{0,61}[a-z0-9])?$', domain, re.IGNORECASE)) + + +def _validate_path(path: str) -> bool: + """Validate filesystem path for nginx config.""" + # Block path traversal and special characters + if '..' in path or '\n' in path or '\r' in path or ';' in path: + return False + return bool(re.match(r'^/[a-zA-Z0-9/_\-\.]+$', path)) + + class NginxService: """Service for Nginx configuration management.""" @@ -415,6 +428,15 @@ def create_site(cls, name: str, app_type: str, domains: List[str], if not domains: return {'success': False, 'error': 'At least one domain is required'} + # Validate all domains + for domain in domains: + if not _validate_domain(domain): + return {'success': False, 'error': f'Invalid domain name: {domain}'} + + # Validate root_path if provided + if root_path and not _validate_path(root_path): + return {'success': False, 'error': f'Invalid root path: {root_path}'} + domains_str = ' '.join(domains) # Select template based on app type diff --git a/backend/app/services/python_service.py b/backend/app/services/python_service.py index 3d96732d..54b7c10c 100644 --- a/backend/app/services/python_service.py +++ b/backend/app/services/python_service.py @@ -552,8 +552,7 @@ def run_command(app_path, command): env.update(env_vars) result = subprocess.run( - command, - shell=True, + ['bash', '-c', command], cwd=app_path, env=env, capture_output=True, diff --git a/backend/app/services/server_template_service.py b/backend/app/services/server_template_service.py new file mode 100644 index 00000000..6d74b9f9 --- /dev/null +++ b/backend/app/services/server_template_service.py @@ -0,0 +1,283 @@ +import json +import logging +from datetime import datetime +from app import db +from app.models.server_template import ServerTemplate, ServerTemplateAssignment +from app.models.server import Server + +logger = logging.getLogger(__name__) + + +class ServerTemplateService: + """Service for server template management and config drift detection.""" + + # Built-in template library + TEMPLATE_LIBRARY = { + 'web-server': { + 'name': 'Web Server', + 'description': 'Nginx + PHP-FPM web server with standard security', + 'category': 'web', + 'packages': ['nginx', 'php-fpm', 'certbot'], + 'services': [ + {'name': 'nginx', 'enabled': True, 'running': True}, + {'name': 'php-fpm', 'enabled': True, 'running': True}, + ], + 'firewall_rules': [ + {'port': 80, 'protocol': 'tcp', 'action': 'allow'}, + {'port': 443, 'protocol': 'tcp', 'action': 'allow'}, + {'port': 22, 'protocol': 'tcp', 'action': 'allow'}, + ], + }, + 'database-server': { + 'name': 'Database Server', + 'description': 'MySQL/MariaDB database server with backups', + 'category': 'database', + 'packages': ['mariadb-server'], + 'services': [ + {'name': 'mariadb', 'enabled': True, 'running': True}, + ], + 'firewall_rules': [ + {'port': 3306, 'protocol': 'tcp', 'action': 'allow'}, + {'port': 22, 'protocol': 'tcp', 'action': 'allow'}, + ], + }, + 
'mail-server': { + 'name': 'Mail Server', + 'description': 'Postfix + Dovecot mail server', + 'category': 'mail', + 'packages': ['postfix', 'dovecot-imapd', 'dovecot-pop3d', 'spamassassin', 'opendkim'], + 'services': [ + {'name': 'postfix', 'enabled': True, 'running': True}, + {'name': 'dovecot', 'enabled': True, 'running': True}, + {'name': 'spamassassin', 'enabled': True, 'running': True}, + ], + 'firewall_rules': [ + {'port': 25, 'protocol': 'tcp', 'action': 'allow'}, + {'port': 587, 'protocol': 'tcp', 'action': 'allow'}, + {'port': 993, 'protocol': 'tcp', 'action': 'allow'}, + {'port': 995, 'protocol': 'tcp', 'action': 'allow'}, + {'port': 22, 'protocol': 'tcp', 'action': 'allow'}, + ], + }, + } + + @staticmethod + def list_templates(category=None): + query = ServerTemplate.query + if category: + query = query.filter_by(category=category) + return query.order_by(ServerTemplate.name).all() + + @staticmethod + def get_template(template_id): + return ServerTemplate.query.get(template_id) + + @staticmethod + def create_template(data, user_id=None): + if ServerTemplate.query.filter_by(name=data['name']).first(): + raise ValueError(f"Template '{data['name']}' already exists") + + template = ServerTemplate( + name=data['name'], + description=data.get('description', ''), + category=data.get('category', 'general'), + parent_id=data.get('parent_id'), + auto_remediate=data.get('auto_remediate', False), + remediation_approval_required=data.get('remediation_approval_required', True), + created_by=user_id, + ) + template.packages = data.get('packages', []) + template.services = data.get('services', []) + template.firewall_rules = data.get('firewall_rules', []) + template.files = data.get('files', []) + template.users = data.get('users', []) + template.sysctl_params = data.get('sysctl_params', []) + + db.session.add(template) + db.session.commit() + return template + + @staticmethod + def update_template(template_id, data): + template = ServerTemplate.query.get(template_id) + if not template: + return None + + for field in ['name', 'description', 'category', 'parent_id', + 'auto_remediate', 'remediation_approval_required']: + if field in data: + setattr(template, field, data[field]) + + for json_field in ['packages', 'services', 'firewall_rules', 'files', 'users', 'sysctl_params']: + if json_field in data: + setattr(template, json_field, data[json_field]) + + template.version += 1 + db.session.commit() + return template + + @staticmethod + def delete_template(template_id): + template = ServerTemplate.query.get(template_id) + if not template: + return False + active = template.assignments.count() + if active > 0: + raise ValueError(f'Cannot delete template with {active} active assignments') + # Remove children references + for child in template.children: + child.parent_id = None + db.session.delete(template) + db.session.commit() + return True + + @staticmethod + def get_library_templates(): + return ServerTemplateService.TEMPLATE_LIBRARY + + @staticmethod + def create_from_library(key, user_id=None): + if key not in ServerTemplateService.TEMPLATE_LIBRARY: + raise ValueError(f"Unknown library template: {key}") + data = ServerTemplateService.TEMPLATE_LIBRARY[key].copy() + return ServerTemplateService.create_template(data, user_id) + + # --- Assignment & Drift --- + + @staticmethod + def assign_template(template_id, server_id): + template = ServerTemplate.query.get(template_id) + if not template: + raise ValueError('Template not found') + server = Server.query.get(server_id) + if not server: + raise 
ValueError('Server not found') + + existing = ServerTemplateAssignment.query.filter_by( + template_id=template_id, server_id=server_id + ).first() + if existing: + raise ValueError('Template already assigned to this server') + + assignment = ServerTemplateAssignment( + template_id=template_id, + server_id=server_id, + ) + db.session.add(assignment) + db.session.commit() + return assignment + + @staticmethod + def unassign_template(assignment_id): + assignment = ServerTemplateAssignment.query.get(assignment_id) + if not assignment: + return False + db.session.delete(assignment) + db.session.commit() + return True + + @staticmethod + def bulk_assign(template_id, server_ids): + results = [] + for sid in server_ids: + try: + a = ServerTemplateService.assign_template(template_id, sid) + results.append({'server_id': sid, 'status': 'assigned', 'assignment_id': a.id}) + except ValueError as e: + results.append({'server_id': sid, 'status': 'error', 'error': str(e)}) + return results + + @staticmethod + def check_drift(assignment_id): + """Check configuration drift for a server assignment.""" + assignment = ServerTemplateAssignment.query.get(assignment_id) + if not assignment: + return None + + assignment.status = ServerTemplateAssignment.STATUS_CHECKING + db.session.commit() + + # Send drift check command to agent + try: + from app.agent_gateway import get_agent_gateway + gw = get_agent_gateway() + if gw and assignment.server: + spec = assignment.template.get_merged_spec() + gw.send_command(assignment.server.agent_id, 'config_drift_check', { + 'assignment_id': assignment.id, + 'spec': spec, + }) + except Exception as e: + logger.warning(f'Could not send drift check command: {e}') + + return assignment + + @staticmethod + def update_drift_report(assignment_id, report): + assignment = ServerTemplateAssignment.query.get(assignment_id) + if not assignment: + return None + assignment.drift_report = report + assignment.last_check_at = datetime.utcnow() + has_drift = any( + report.get(k, []) for k in ['missing_packages', 'extra_packages', + 'stopped_services', 'missing_rules', 'changed_files'] + ) + assignment.status = ( + ServerTemplateAssignment.STATUS_DRIFTED if has_drift + else ServerTemplateAssignment.STATUS_COMPLIANT + ) + db.session.commit() + return assignment + + @staticmethod + def remediate(assignment_id): + """Apply template to bring server back to expected state.""" + assignment = ServerTemplateAssignment.query.get(assignment_id) + if not assignment: + return None + + assignment.status = ServerTemplateAssignment.STATUS_REMEDIATING + db.session.commit() + + try: + from app.agent_gateway import get_agent_gateway + gw = get_agent_gateway() + if gw and assignment.server: + spec = assignment.template.get_merged_spec() + gw.send_command(assignment.server.agent_id, 'config_remediate', { + 'assignment_id': assignment.id, + 'spec': spec, + }) + except Exception as e: + logger.warning(f'Could not send remediate command: {e}') + + return assignment + + @staticmethod + def get_server_assignments(server_id): + return ServerTemplateAssignment.query.filter_by(server_id=server_id).all() + + @staticmethod + def get_template_assignments(template_id): + return ServerTemplateAssignment.query.filter_by(template_id=template_id).all() + + @staticmethod + def get_compliance_summary(): + """Get fleet-wide compliance summary.""" + assignments = ServerTemplateAssignment.query.all() + total = len(assignments) + if total == 0: + return {'total': 0, 'compliant': 0, 'drifted': 0, 'unknown': 0, 'compliance_pct': 100} + + 
compliant = sum(1 for a in assignments if a.status == 'compliant') + drifted = sum(1 for a in assignments if a.status == 'drifted') + unknown = total - compliant - drifted + + return { + 'total': total, + 'compliant': compliant, + 'drifted': drifted, + 'unknown': unknown, + 'compliance_pct': round(compliant / total * 100, 1) if total > 0 else 100, + } diff --git a/backend/app/services/settings_service.py b/backend/app/services/settings_service.py index 10e36bd2..2515ccc6 100644 --- a/backend/app/services/settings_service.py +++ b/backend/app/services/settings_service.py @@ -120,15 +120,15 @@ def initialize_defaults(): @staticmethod def needs_setup(): - """Check if initial setup is required.""" - # If no users exist, setup is needed + """Check if initial setup is needed.""" + from app.models.user import User user_count = User.query.count() if user_count == 0: return True - - # Check setup_completed setting setup_completed = SettingsService.get('setup_completed', False) - return not setup_completed + if setup_completed: + return False # Once completed, never re-enable without admin action + return True @staticmethod def complete_setup(user_id=None): diff --git a/backend/app/services/status_page_service.py b/backend/app/services/status_page_service.py new file mode 100644 index 00000000..77181ba0 --- /dev/null +++ b/backend/app/services/status_page_service.py @@ -0,0 +1,307 @@ +import logging +import socket +import time +from datetime import datetime, timedelta +from app import db +from app.models.status_page import ( + StatusPage, StatusComponent, HealthCheck, StatusIncident, StatusIncidentUpdate +) + +logger = logging.getLogger(__name__) + + +class StatusPageService: + """Service for public status pages and automated health checks.""" + + # --- Pages --- + + @staticmethod + def list_pages(): + return StatusPage.query.order_by(StatusPage.name).all() + + @staticmethod + def get_page(page_id): + return StatusPage.query.get(page_id) + + @staticmethod + def get_page_by_slug(slug): + return StatusPage.query.filter_by(slug=slug).first() + + @staticmethod + def create_page(data): + slug = data.get('slug', '').strip().lower() + if StatusPage.query.filter_by(slug=slug).first(): + raise ValueError(f"Status page '{slug}' already exists") + + page = StatusPage( + name=data['name'], + slug=slug, + description=data.get('description', ''), + logo_url=data.get('logo_url'), + primary_color=data.get('primary_color', '#4f46e5'), + custom_domain=data.get('custom_domain'), + is_public=data.get('is_public', True), + show_uptime=data.get('show_uptime', True), + show_history=data.get('show_history', True), + ) + db.session.add(page) + db.session.commit() + return page + + @staticmethod + def update_page(page_id, data): + page = StatusPage.query.get(page_id) + if not page: + return None + for field in ['name', 'description', 'logo_url', 'primary_color', + 'custom_domain', 'is_public', 'show_uptime', 'show_history']: + if field in data: + setattr(page, field, data[field]) + db.session.commit() + return page + + @staticmethod + def delete_page(page_id): + page = StatusPage.query.get(page_id) + if not page: + return False + db.session.delete(page) + db.session.commit() + return True + + @staticmethod + def get_public_page(slug): + """Get public status page data (no auth required).""" + page = StatusPage.query.filter_by(slug=slug, is_public=True).first() + if not page: + return None + + components = page.components.all() + grouped = {} + for comp in components: + group = comp.group or 'Services' + 
grouped.setdefault(group, []).append(comp.to_dict()) + + # Active incidents + active_incidents = page.incidents.filter( + StatusIncident.status != 'resolved' + ).limit(10).all() + + # Recent resolved + resolved = page.incidents.filter_by(status='resolved').limit(5).all() + + # Overall status + statuses = [c.status for c in components] + if any(s == 'major_outage' for s in statuses): + overall = 'major_outage' + elif any(s in ('partial_outage', 'degraded') for s in statuses): + overall = 'degraded' + elif any(s == 'maintenance' for s in statuses): + overall = 'maintenance' + else: + overall = 'operational' + + return { + 'page': page.to_dict(), + 'overall_status': overall, + 'groups': grouped, + 'active_incidents': [i.to_dict() for i in active_incidents], + 'recent_incidents': [i.to_dict() for i in resolved], + } + + # --- Components --- + + @staticmethod + def create_component(page_id, data): + page = StatusPage.query.get(page_id) + if not page: + raise ValueError('Status page not found') + + comp = StatusComponent( + page_id=page_id, + name=data['name'], + description=data.get('description', ''), + group=data.get('group', 'Services'), + sort_order=data.get('sort_order', 0), + check_type=data.get('check_type', 'http'), + check_target=data.get('check_target', ''), + check_interval=data.get('check_interval', 60), + check_timeout=data.get('check_timeout', 10), + ) + db.session.add(comp) + db.session.commit() + return comp + + @staticmethod + def update_component(comp_id, data): + comp = StatusComponent.query.get(comp_id) + if not comp: + return None + for field in ['name', 'description', 'group', 'sort_order', 'check_type', + 'check_target', 'check_interval', 'check_timeout', 'status']: + if field in data: + setattr(comp, field, data[field]) + db.session.commit() + return comp + + @staticmethod + def delete_component(comp_id): + comp = StatusComponent.query.get(comp_id) + if not comp: + return False + db.session.delete(comp) + db.session.commit() + return True + + # --- Health Checks --- + + @staticmethod + def run_check(component_id): + """Run a health check for a component.""" + comp = StatusComponent.query.get(component_id) + if not comp: + return None + + check_result = StatusPageService._perform_check(comp) + + hc = HealthCheck( + component_id=component_id, + status=check_result['status'], + response_time=check_result.get('response_time'), + status_code=check_result.get('status_code'), + error=check_result.get('error'), + ) + db.session.add(hc) + + # Update component + comp.last_check_at = datetime.utcnow() + comp.last_response_time = check_result.get('response_time') + if check_result['status'] == 'up': + comp.status = StatusComponent.STATUS_OPERATIONAL + elif check_result['status'] == 'degraded': + comp.status = StatusComponent.STATUS_DEGRADED + else: + comp.status = StatusComponent.STATUS_MAJOR + + db.session.commit() + return hc + + @staticmethod + def _perform_check(comp): + """Execute the actual health check.""" + start = time.time() + result = {'status': 'down', 'response_time': None, 'error': None} + + try: + if comp.check_type == 'http': + import requests + resp = requests.get(comp.check_target, timeout=comp.check_timeout, verify=True) + result['response_time'] = int((time.time() - start) * 1000) + result['status_code'] = resp.status_code + if resp.status_code < 400: + result['status'] = 'up' + elif resp.status_code < 500: + result['status'] = 'degraded' + else: + result['status'] = 'down' + + elif comp.check_type == 'tcp': + host, port = comp.check_target.rsplit(':', 1) + sock 
= socket.create_connection((host, int(port)), timeout=comp.check_timeout) + result['response_time'] = int((time.time() - start) * 1000) + result['status'] = 'up' + sock.close() + + elif comp.check_type == 'ping': + from app.utils.system import run_command + res = run_command(['ping', '-c', '1', '-W', str(comp.check_timeout), comp.check_target]) + result['response_time'] = int((time.time() - start) * 1000) + result['status'] = 'up' + + elif comp.check_type == 'dns': + socket.getaddrinfo(comp.check_target, None) + result['response_time'] = int((time.time() - start) * 1000) + result['status'] = 'up' + + except Exception as e: + result['response_time'] = int((time.time() - start) * 1000) + result['error'] = str(e) + + return result + + @staticmethod + def get_check_history(component_id, hours=24): + since = datetime.utcnow() - timedelta(hours=hours) + return HealthCheck.query.filter( + HealthCheck.component_id == component_id, + HealthCheck.checked_at >= since + ).order_by(HealthCheck.checked_at.desc()).all() + + # --- Incidents --- + + @staticmethod + def create_incident(page_id, data): + incident = StatusIncident( + page_id=page_id, + title=data['title'], + status=data.get('status', 'investigating'), + impact=data.get('impact', 'minor'), + body=data.get('body', ''), + is_maintenance=data.get('is_maintenance', False), + scheduled_start=data.get('scheduled_start'), + scheduled_end=data.get('scheduled_end'), + ) + db.session.add(incident) + db.session.commit() + return incident + + @staticmethod + def update_incident(incident_id, data): + incident = StatusIncident.query.get(incident_id) + if not incident: + return None + for field in ['title', 'status', 'impact', 'body']: + if field in data: + setattr(incident, field, data[field]) + if data.get('status') == 'resolved': + incident.resolved_at = datetime.utcnow() + + # Add timeline update + if data.get('update_body'): + update = StatusIncidentUpdate( + incident_id=incident_id, + status=data.get('status', incident.status), + body=data['update_body'], + ) + db.session.add(update) + + db.session.commit() + return incident + + @staticmethod + def delete_incident(incident_id): + incident = StatusIncident.query.get(incident_id) + if not incident: + return False + db.session.delete(incident) + db.session.commit() + return True + + @staticmethod + def get_badge(slug): + """Generate status badge data.""" + page = StatusPage.query.filter_by(slug=slug).first() + if not page: + return None + + components = page.components.all() + statuses = [c.status for c in components] + + if not statuses or all(s == 'operational' for s in statuses): + return {'label': 'status', 'message': 'operational', 'color': 'brightgreen'} + elif any(s == 'major_outage' for s in statuses): + return {'label': 'status', 'message': 'major outage', 'color': 'red'} + elif any(s in ('partial_outage', 'degraded') for s in statuses): + return {'label': 'status', 'message': 'degraded', 'color': 'yellow'} + else: + return {'label': 'status', 'message': 'maintenance', 'color': 'blue'} diff --git a/backend/app/services/template_service.py b/backend/app/services/template_service.py index 68ddef25..9a360633 100644 --- a/backend/app/services/template_service.py +++ b/backend/app/services/template_service.py @@ -808,8 +808,7 @@ def _run_script(cls, script: str, cwd: str, variables: Dict) -> Dict: env.update(variables) result = subprocess.run( - script, - shell=True, + ['bash', '-c', script], cwd=cwd, env=env, capture_output=True, diff --git a/backend/app/services/workflow_engine.py 
b/backend/app/services/workflow_engine.py new file mode 100644 index 00000000..f5d75647 --- /dev/null +++ b/backend/app/services/workflow_engine.py @@ -0,0 +1,705 @@ +""" +Advanced Workflow & Automation Engine + +Executes event-driven workflows with DAG-based execution, logic branching, +variable interpolation, timeouts, retries, and script sandboxing. +""" + +import json +import logging +import os +import re +import signal +import subprocess +import threading +import traceback +from collections import deque +from datetime import datetime +from typing import Dict, List, Any, Optional, Set, Tuple + +from app import db +from app.models import Workflow, WorkflowExecution, WorkflowLog, User +from app.services.workflow_service import WorkflowService +from app.services.docker_service import DockerService +from app.services.database_service import DatabaseService +from app.services.notification_service import NotificationService + +logger = logging.getLogger(__name__) + +# Defaults for node execution +DEFAULT_TIMEOUT = 300 # 5 minutes +MAX_TIMEOUT = 3600 # 1 hour +DEFAULT_RETRY_COUNT = 0 +MAX_RETRY_COUNT = 5 +DEFAULT_RETRY_DELAY = 5 # seconds +MAX_OUTPUT_SIZE = 1024 * 512 # 512 KB + + +class CycleDetectedError(Exception): + """Raised when a cycle is detected in the workflow graph.""" + pass + + +class NodeTimeoutError(Exception): + """Raised when a node exceeds its timeout.""" + pass + + +class WorkflowEngine: + """Engine for executing advanced workflows with DAG support.""" + + # ------------------------------------------------------------------ + # Public API + # ------------------------------------------------------------------ + + @staticmethod + def validate_graph(nodes: List[Dict], edges: List[Dict]) -> Optional[str]: + """ + Validate a workflow graph for cycles. + + Returns None if valid, or an error message string if a cycle is found. + """ + adj: Dict[str, List[str]] = {} + node_ids = {n['id'] for n in nodes} + + for edge in edges: + src = edge['source'] + if src not in adj: + adj[src] = [] + adj[src].append(edge['target']) + + # Kahn's algorithm for cycle detection + in_degree = {nid: 0 for nid in node_ids} + for src, targets in adj.items(): + for t in targets: + if t in in_degree: + in_degree[t] += 1 + + queue = deque(nid for nid, deg in in_degree.items() if deg == 0) + visited_count = 0 + + while queue: + nid = queue.popleft() + visited_count += 1 + for neighbor in adj.get(nid, []): + if neighbor in in_degree: + in_degree[neighbor] -= 1 + if in_degree[neighbor] == 0: + queue.append(neighbor) + + if visited_count < len(node_ids): + # Find nodes involved in the cycle for a helpful message + cycle_nodes = [nid for nid, deg in in_degree.items() if deg > 0] + node_labels = {} + for n in nodes: + node_labels[n['id']] = n.get('data', {}).get('label', n['id']) + cycle_labels = [node_labels.get(nid, nid) for nid in cycle_nodes[:5]] + return f"Cycle detected involving: {', '.join(cycle_labels)}" + + return None + + @staticmethod + def execute_workflow(workflow_id: int, trigger_type: str = 'manual', + context: Dict[str, Any] = None) -> int: + """ + Execute a workflow by ID. + + Returns the ID of the created WorkflowExecution. 
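+
+        Raises CycleDetectedError if the workflow graph contains a cycle.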
+ """ + workflow = Workflow.query.get(workflow_id) + if not workflow: + raise ValueError(f"Workflow {workflow_id} not found") + + nodes = json.loads(workflow.nodes) if workflow.nodes else [] + edges = json.loads(workflow.edges) if workflow.edges else [] + + # Validate graph before executing + cycle_err = WorkflowEngine.validate_graph(nodes, edges) + if cycle_err: + raise CycleDetectedError(cycle_err) + + execution = WorkflowExecution( + workflow_id=workflow_id, + trigger_type=trigger_type, + status='running', + context=json.dumps(context or {}), + started_at=datetime.utcnow() + ) + db.session.add(execution) + db.session.commit() + + workflow.last_run_at = execution.started_at + db.session.commit() + + try: + WorkflowEngine._run_execution(execution.id, nodes, edges) + except Exception as e: + WorkflowEngine._log(execution.id, f"Engine Error: {str(e)}", level='ERROR') + execution.status = 'failed' + execution.completed_at = datetime.utcnow() + workflow.last_status = 'failed' + db.session.commit() + + return execution.id + + # ------------------------------------------------------------------ + # DAG Execution + # ------------------------------------------------------------------ + + @staticmethod + def _run_execution(execution_id: int, nodes: List[Dict], edges: List[Dict]): + """Run the workflow using topological DAG execution with branch support.""" + execution = WorkflowExecution.query.get(execution_id) + workflow = execution.workflow + + WorkflowEngine._log(execution_id, f"Starting workflow: {workflow.name}") + + if not nodes: + WorkflowEngine._log(execution_id, "Workflow has no nodes", level='WARNING') + execution.status = 'success' + execution.completed_at = datetime.utcnow() + db.session.commit() + return + + # Build graph structures + node_map = {n['id']: n for n in nodes} + adj: Dict[str, List[Tuple[str, str]]] = {} # source -> [(target, sourceHandle)] + in_degree: Dict[str, int] = {n['id']: 0 for n in nodes} + + for edge in edges: + src = edge['source'] + tgt = edge['target'] + src_handle = edge.get('sourceHandle', 'output') + if src not in adj: + adj[src] = [] + adj[src].append((tgt, src_handle)) + if tgt in in_degree: + in_degree[tgt] += 1 + + # Start with root nodes (no incoming edges) + ready = deque(nid for nid, deg in in_degree.items() if deg == 0) + if not ready: + ready.append(nodes[0]['id']) + + context = json.loads(execution.context) if execution.context else {} + results: Dict[str, Dict] = {} + processed: Set[str] = set() + # Track which branches are active (for logic_if gating) + # Maps node_id -> set of sourceHandles that were activated + active_branches: Dict[str, Set[str]] = {} + failed = False + + while ready and not failed: + node_id = ready.popleft() + + if node_id in processed: + continue + + node = node_map.get(node_id) + if not node: + continue + + # Check if this node is gated by a logic_if branch + if not WorkflowEngine._is_node_reachable(node_id, edges, active_branches, processed): + processed.add(node_id) + # Still decrement successors so they can become ready + for tgt, _ in adj.get(node_id, []): + if tgt in in_degree: + in_degree[tgt] -= 1 + if in_degree[tgt] == 0: + ready.append(tgt) + continue + + node_label = node.get('data', {}).get('label', node_id) + WorkflowEngine._log(execution_id, f"Executing node: {node_label} ({node['type']})", node_id=node_id) + + try: + node_result = WorkflowEngine._execute_node_with_retry( + node, edges, execution, context, results + ) + results[node_id] = node_result + + if not node_result.get('success', True): + 
WorkflowEngine._log( + execution_id, + f"Node failed: {node_result.get('error', 'unknown')}", + level='ERROR', node_id=node_id + ) + if node_result.get('critical', True): + failed = True + break + + # For logic_if nodes, record which branch was taken + if node['type'] == 'logic_if': + branch = node_result.get('branch', 'true') + if node_id not in active_branches: + active_branches[node_id] = set() + active_branches[node_id].add(branch) + + # Enqueue successor nodes whose in-degree reaches 0 + for tgt, src_handle in adj.get(node_id, []): + if tgt in in_degree: + in_degree[tgt] -= 1 + if in_degree[tgt] == 0: + ready.append(tgt) + + except Exception as e: + WorkflowEngine._log(execution_id, f"Node Execution Error: {str(e)}", level='ERROR', node_id=node_id) + WorkflowEngine._log(execution_id, traceback.format_exc(), level='DEBUG', node_id=node_id) + failed = True + break + + processed.add(node_id) + + execution.status = 'failed' if failed else 'success' + execution.results = json.dumps(results) + execution.completed_at = datetime.utcnow() + workflow.last_status = execution.status + db.session.commit() + + WorkflowEngine._log(execution_id, f"Workflow finished with status: {execution.status}") + + @staticmethod + def _is_node_reachable(node_id: str, edges: List[Dict], + active_branches: Dict[str, Set[str]], + processed: Set[str]) -> bool: + """ + Check if a node should execute based on logic_if branching. + + A node is unreachable if ALL its incoming edges from logic_if nodes + come through inactive branches. + """ + incoming_from_logic = [] + + for edge in edges: + if edge['target'] != node_id: + continue + src = edge['source'] + src_handle = edge.get('sourceHandle', 'output') + + # Only gate on logic_if nodes that have already been processed + if src in active_branches: + incoming_from_logic.append((src, src_handle)) + + if not incoming_from_logic: + return True # No logic_if gating, always reachable + + # Reachable if at least one logic_if branch leading here is active + for src, src_handle in incoming_from_logic: + if src_handle in active_branches[src]: + return True + + return False + + # ------------------------------------------------------------------ + # Node Execution with Retry + # ------------------------------------------------------------------ + + @staticmethod + def _execute_node_with_retry(node: Dict, edges: List, execution: WorkflowExecution, + context: Dict, results: Dict) -> Dict: + """Execute a node with retry support.""" + node_data = node.get('data', {}) + retry_count = min(int(node_data.get('retryCount', DEFAULT_RETRY_COUNT)), MAX_RETRY_COUNT) + retry_delay = max(1, int(node_data.get('retryDelay', DEFAULT_RETRY_DELAY))) + + last_result = None + for attempt in range(retry_count + 1): + if attempt > 0: + WorkflowEngine._log( + execution.id, + f"Retry {attempt}/{retry_count} after {retry_delay}s", + node_id=node['id'] + ) + import time + time.sleep(retry_delay) + + last_result = WorkflowEngine._execute_node(node, edges, execution, context, results) + + if last_result.get('success', True): + return last_result + + return last_result + + # ------------------------------------------------------------------ + # Node Execution + # ------------------------------------------------------------------ + + @staticmethod + def _execute_node(node: Dict, edges: List, execution: WorkflowExecution, + context: Dict, results: Dict) -> Dict: + """Execute a single node and return its results.""" + node_type = node.get('type') + node_data = node.get('data', {}) + + if node_type == 'trigger': + 
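+            # Trigger nodes are pure entry points: they hand the trigger
+            # context downstream unchanged as their output.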
return {'success': True, 'output': context} + + elif node_type in ('database', 'dockerApp', 'service', 'domain'): + res = WorkflowService.deploy_node(node, edges, execution.workflow.user_id, results) + return res + + elif node_type == 'notification': + return WorkflowEngine._execute_notification(node, execution, context, results) + + elif node_type == 'script': + return WorkflowEngine._execute_script(node, execution, context, results) + + elif node_type == 'logic_if': + return WorkflowEngine._execute_logic_if(node, context, results) + + return {'success': True, 'message': f"Node type {node_type} passed through"} + + # ------------------------------------------------------------------ + # Logic If Evaluation + # ------------------------------------------------------------------ + + @staticmethod + def _execute_logic_if(node: Dict, context: Dict, results: Dict) -> Dict: + """ + Evaluate a logic_if condition. + + The condition is a Python expression that has access to: + - results: dict of {node_id: node_result} + - context: the workflow execution context + """ + node_data = node.get('data', {}) + condition = node_data.get('condition', '').strip() + + if not condition: + return {'success': True, 'branch': 'true'} + + # Build a safe evaluation namespace + eval_globals = {"__builtins__": {}} + eval_locals = { + 'results': results, + 'context': context, + # Expose common helpers + 'len': len, + 'str': str, + 'int': int, + 'float': float, + 'bool': bool, + 'abs': abs, + 'min': min, + 'max': max, + 'any': any, + 'all': all, + 'isinstance': isinstance, + } + + try: + result = eval(condition, eval_globals, eval_locals) + branch = 'true' if result else 'false' + return { + 'success': True, + 'branch': branch, + 'condition': condition, + 'evaluated': bool(result) + } + except Exception as e: + return { + 'success': False, + 'error': f"Condition evaluation failed: {str(e)}", + 'condition': condition, + 'branch': 'false', + 'critical': False # Don't kill the whole workflow for a bad condition + } + + # ------------------------------------------------------------------ + # Variable Interpolation + # ------------------------------------------------------------------ + + @staticmethod + def _interpolate(text: str, context: Dict, results: Dict, + execution: Optional[WorkflowExecution] = None) -> str: + """ + Replace variable placeholders in text. 
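+
+        Values are coerced with str(); unknown node fields interpolate as
+        empty strings.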
+ + Supported syntax: + - ${node_id.field} — access a specific field from a node's result + - ${node_id.output} — shorthand for the output field + - {{workflow_name}} — built-in workflow variables + - {{execution_id}} — current execution ID + - {{started_at}} — execution start time + - {{context.field}} — access context fields + """ + if not text or not isinstance(text, str): + return text + + # Replace ${node_id.field} patterns + def replace_node_var(match): + node_id = match.group(1) + field = match.group(2) + node_result = results.get(node_id, {}) + if field == 'output' and 'output' not in node_result: + # Try stdout as fallback for script nodes + return str(node_result.get('stdout', '')) + return str(node_result.get(field, '')) + + text = re.sub(r'\$\{([^.}]+)\.([^}]+)\}', replace_node_var, text) + + # Replace {{builtin}} patterns + builtins = { + 'workflow_name': execution.workflow.name if execution else '', + 'execution_id': str(execution.id) if execution else '', + 'started_at': execution.started_at.isoformat() if execution and execution.started_at else '', + 'trigger_type': execution.trigger_type if execution else '', + } + + # Add context.* variables + for key, value in context.items(): + builtins[f'context.{key}'] = str(value) + + for key, value in builtins.items(): + text = text.replace('{{' + key + '}}', value) + + # Replace {{node_id.field}} as alternative syntax + def replace_node_var_braces(match): + node_id = match.group(1) + field = match.group(2) + node_result = results.get(node_id, {}) + return str(node_result.get(field, '')) + + text = re.sub(r'\{\{([^.}]+)\.([^}]+)\}\}', replace_node_var_braces, text) + + return text + + # ------------------------------------------------------------------ + # Notification Node + # ------------------------------------------------------------------ + + @staticmethod + def _execute_notification(node: Dict, execution: WorkflowExecution, + context: Dict, results: Dict) -> Dict: + """Execute a notification node with variable interpolation.""" + node_data = node.get('data', {}) + channel = node_data.get('channel', 'system') + message = node_data.get('message', 'Workflow notification') + + # Interpolate variables in the message + message = WorkflowEngine._interpolate(message, context, results, execution) + + title = f"Workflow: {execution.workflow.name}" + + # Build alert in the format NotificationService expects + alerts = [{ + 'type': 'workflow', + 'severity': 'info', + 'message': message, + 'value': '', + 'threshold': '' + }] + + try: + if channel == 'system' or channel == 'all': + result = NotificationService.send_all(alerts) + elif channel == 'discord': + config = NotificationService.get_config().get('discord', {}) + result = NotificationService.send_discord(alerts, config) + elif channel == 'slack': + config = NotificationService.get_config().get('slack', {}) + result = NotificationService.send_slack(alerts, config) + elif channel == 'email': + config = NotificationService.get_config().get('email', {}) + result = NotificationService.send_email(alerts, config) + elif channel == 'telegram': + config = NotificationService.get_config().get('telegram', {}) + result = NotificationService.send_telegram(alerts, config) + else: + result = NotificationService.send_all(alerts) + + return {'success': result.get('success', True), 'channel': channel} + except Exception as e: + return {'success': False, 'error': str(e), 'critical': False} + + # ------------------------------------------------------------------ + # Script Node (Sandboxed) + # 
------------------------------------------------------------------ + + @staticmethod + def _execute_script(node: Dict, execution: WorkflowExecution, + context: Dict, results: Dict) -> Dict: + """Execute a script node with timeout, output limits, and variable interpolation.""" + node_data = node.get('data', {}) + script_type = node_data.get('language', 'bash') + content = node_data.get('content', '') + timeout = min(int(node_data.get('timeout', DEFAULT_TIMEOUT)), MAX_TIMEOUT) + + if not content.strip(): + return {'success': True, 'stdout': '', 'stderr': '', 'returncode': 0} + + # Interpolate variables in script content + content = WorkflowEngine._interpolate(content, context, results, execution) + + # Build environment with node results available as env vars + env = os.environ.copy() + env['WORKFLOW_ID'] = str(execution.workflow_id) + env['EXECUTION_ID'] = str(execution.id) + env['TRIGGER_TYPE'] = execution.trigger_type or 'manual' + + for nid, nresult in results.items(): + safe_id = re.sub(r'[^a-zA-Z0-9_]', '_', nid).upper() + if isinstance(nresult, dict): + stdout = nresult.get('stdout', nresult.get('output', '')) + if isinstance(stdout, str): + env[f'NODE_{safe_id}_OUTPUT'] = stdout[:4096] + rc = nresult.get('returncode') + if rc is not None: + env[f'NODE_{safe_id}_RC'] = str(rc) + + try: + if script_type == 'bash': + cmd = ['bash', '-c', content] + elif script_type == 'python': + cmd = ['python3', '-c', content] + else: + return {'success': False, 'error': f"Unsupported script language: {script_type}"} + + proc = subprocess.run( + cmd, + capture_output=True, + text=True, + timeout=timeout, + env=env, + cwd='/tmp' if os.name != 'nt' else None + ) + + stdout = proc.stdout[:MAX_OUTPUT_SIZE] if proc.stdout else '' + stderr = proc.stderr[:MAX_OUTPUT_SIZE] if proc.stderr else '' + + return { + 'success': proc.returncode == 0, + 'stdout': stdout, + 'stderr': stderr, + 'returncode': proc.returncode + } + + except subprocess.TimeoutExpired: + return { + 'success': False, + 'error': f"Script timed out after {timeout}s", + 'stdout': '', + 'stderr': '', + 'returncode': -1 + } + except FileNotFoundError: + fallback = 'python' if script_type == 'python' else script_type + try: + proc = subprocess.run( + [fallback, '-c', content] if script_type == 'python' else content, + capture_output=True, text=True, timeout=timeout, + env=env, shell=(script_type == 'bash'), + cwd='/tmp' if os.name != 'nt' else None + ) + stdout = proc.stdout[:MAX_OUTPUT_SIZE] if proc.stdout else '' + stderr = proc.stderr[:MAX_OUTPUT_SIZE] if proc.stderr else '' + return { + 'success': proc.returncode == 0, + 'stdout': stdout, 'stderr': stderr, + 'returncode': proc.returncode + } + except Exception as e: + return {'success': False, 'error': str(e)} + except Exception as e: + return {'success': False, 'error': str(e)} + + # ------------------------------------------------------------------ + # Logging + # ------------------------------------------------------------------ + + @staticmethod + def _log(execution_id: int, message: str, level: str = 'INFO', node_id: str = None): + """Add a log entry for an execution.""" + log_entry = WorkflowLog( + execution_id=execution_id, + level=level, + message=message, + node_id=node_id + ) + db.session.add(log_entry) + db.session.commit() + logger.info(f"[{level}] Workflow {execution_id}: {message}") + + # Keep backward-compatible alias + log = _log + + +# ------------------------------------------------------------------ +# Event Bus for workflow triggers +# 
------------------------------------------------------------------ + +class WorkflowEventBus: + """ + Simple in-process event bus for triggering workflows on system events. + + Events are emitted by services (monitoring, health checks, git deploy) + and matched against workflows with trigger_type='event'. + """ + + _listeners_lock = threading.Lock() + + @staticmethod + def emit(event_type: str, data: Dict[str, Any] = None): + """ + Emit an event that may trigger workflows. + + Args: + event_type: One of health_check_failed, high_cpu, high_memory, + git_push, app_stopped, or any custom string. + data: Event payload passed as workflow context. + """ + from flask import current_app + try: + app = current_app._get_current_object() + except RuntimeError: + logger.warning(f"WorkflowEventBus.emit called outside app context for {event_type}") + return + + threading.Thread( + target=WorkflowEventBus._process_event, + args=(app, event_type, data or {}), + daemon=True, + name=f'wf-event-{event_type}' + ).start() + + @staticmethod + def _process_event(app, event_type: str, data: Dict): + """Find and execute workflows subscribed to this event type.""" + with app.app_context(): + try: + workflows = Workflow.query.filter_by( + is_active=True, + trigger_type='event' + ).all() + + for workflow in workflows: + try: + config = json.loads(workflow.trigger_config) if workflow.trigger_config else {} + subscribed_event = config.get('eventType', '') + + if subscribed_event != event_type: + continue + + # Cooldown: don't re-trigger within 60 seconds + if workflow.last_run_at: + elapsed = (datetime.utcnow() - workflow.last_run_at).total_seconds() + if elapsed < 60: + continue + + logger.info(f"Event '{event_type}' triggering workflow: {workflow.name}") + context = { + 'event_type': event_type, + 'event_data': data, + 'triggered_at': datetime.utcnow().isoformat() + } + WorkflowEngine.execute_workflow( + workflow_id=workflow.id, + trigger_type='event', + context=context + ) + except Exception as e: + logger.error(f"Event trigger failed for workflow {workflow.id}: {e}") + + except Exception as e: + logger.error(f"WorkflowEventBus._process_event error: {e}") diff --git a/backend/app/services/workspace_service.py b/backend/app/services/workspace_service.py new file mode 100644 index 00000000..3be852c1 --- /dev/null +++ b/backend/app/services/workspace_service.py @@ -0,0 +1,227 @@ +import hashlib +import logging +import secrets +import re +from datetime import datetime +from app import db +from app.models.workspace import Workspace, WorkspaceMember, WorkspaceApiKey +from app.models.user import User + +logger = logging.getLogger(__name__) + + +class WorkspaceService: + """Service for multi-tenancy workspace management.""" + + @staticmethod + def _slugify(name): + slug = re.sub(r'[^a-z0-9]+', '-', name.lower()).strip('-') + return slug or 'workspace' + + @staticmethod + def list_workspaces(user_id=None, include_archived=False): + query = Workspace.query + if not include_archived: + query = query.filter_by(status=Workspace.STATUS_ACTIVE) + if user_id: + member_ws_ids = db.session.query(WorkspaceMember.workspace_id).filter_by(user_id=user_id) + query = query.filter(Workspace.id.in_(member_ws_ids)) + return query.order_by(Workspace.name).all() + + @staticmethod + def get_workspace(workspace_id): + return Workspace.query.get(workspace_id) + + @staticmethod + def get_workspace_by_slug(slug): + return Workspace.query.filter_by(slug=slug).first() + + @staticmethod + def create_workspace(data, user_id): + name = data.get('name', 
'').strip() + if not name: + raise ValueError('Workspace name required') + + slug = WorkspaceService._slugify(name) + # Ensure unique slug + base_slug = slug + counter = 1 + while Workspace.query.filter_by(slug=slug).first(): + slug = f'{base_slug}-{counter}' + counter += 1 + + workspace = Workspace( + name=name, + slug=slug, + description=data.get('description', ''), + logo_url=data.get('logo_url'), + primary_color=data.get('primary_color'), + max_servers=data.get('max_servers', 0), + max_users=data.get('max_users', 0), + max_api_calls=data.get('max_api_calls', 0), + created_by=user_id, + ) + if 'settings' in data: + workspace.settings = data['settings'] + + db.session.add(workspace) + db.session.flush() + + # Creator becomes owner + member = WorkspaceMember( + workspace_id=workspace.id, + user_id=user_id, + role=WorkspaceMember.ROLE_OWNER, + ) + db.session.add(member) + db.session.commit() + return workspace + + @staticmethod + def update_workspace(workspace_id, data): + ws = Workspace.query.get(workspace_id) + if not ws: + return None + for field in ['name', 'description', 'logo_url', 'primary_color', + 'max_servers', 'max_users', 'max_api_calls', 'billing_notes']: + if field in data: + setattr(ws, field, data[field]) + if 'settings' in data: + ws.settings = data['settings'] + db.session.commit() + return ws + + @staticmethod + def archive_workspace(workspace_id): + ws = Workspace.query.get(workspace_id) + if not ws: + return None + ws.status = Workspace.STATUS_ARCHIVED + db.session.commit() + return ws + + @staticmethod + def restore_workspace(workspace_id): + ws = Workspace.query.get(workspace_id) + if not ws: + return None + ws.status = Workspace.STATUS_ACTIVE + db.session.commit() + return ws + + @staticmethod + def delete_workspace(workspace_id): + ws = Workspace.query.get(workspace_id) + if not ws: + return False + WorkspaceApiKey.query.filter_by(workspace_id=workspace_id).delete() + WorkspaceMember.query.filter_by(workspace_id=workspace_id).delete() + db.session.delete(ws) + db.session.commit() + return True + + # --- Members --- + + @staticmethod + def get_members(workspace_id): + return WorkspaceMember.query.filter_by(workspace_id=workspace_id).all() + + @staticmethod + def add_member(workspace_id, user_id, role='member'): + ws = Workspace.query.get(workspace_id) + if not ws: + raise ValueError('Workspace not found') + + # Quota check + if ws.max_users > 0 and ws.members.count() >= ws.max_users: + raise ValueError('Workspace user limit reached') + + existing = WorkspaceMember.query.filter_by( + workspace_id=workspace_id, user_id=user_id + ).first() + if existing: + raise ValueError('User already a member') + + member = WorkspaceMember( + workspace_id=workspace_id, + user_id=user_id, + role=role, + ) + db.session.add(member) + db.session.commit() + return member + + @staticmethod + def update_member_role(member_id, role): + member = WorkspaceMember.query.get(member_id) + if not member: + return None + member.role = role + db.session.commit() + return member + + @staticmethod + def remove_member(member_id): + member = WorkspaceMember.query.get(member_id) + if not member: + return False + if member.role == WorkspaceMember.ROLE_OWNER: + # Ensure at least one owner remains + owner_count = WorkspaceMember.query.filter_by( + workspace_id=member.workspace_id, role=WorkspaceMember.ROLE_OWNER + ).count() + if owner_count <= 1: + raise ValueError('Cannot remove the last owner') + db.session.delete(member) + db.session.commit() + return True + + @staticmethod + def 
get_user_role(workspace_id, user_id): + member = WorkspaceMember.query.filter_by( + workspace_id=workspace_id, user_id=user_id + ).first() + return member.role if member else None + + # --- API Keys --- + + @staticmethod + def create_api_key(workspace_id, name, scopes=None, user_id=None): + raw_key = f'wsk_{secrets.token_urlsafe(32)}' + key_hash = hashlib.sha256(raw_key.encode()).hexdigest() + + api_key = WorkspaceApiKey( + workspace_id=workspace_id, + name=name, + key_hash=key_hash, + key_prefix=raw_key[:12], + created_by=user_id, + ) + if scopes: + api_key.scopes = scopes + db.session.add(api_key) + db.session.commit() + return api_key, raw_key + + @staticmethod + def list_api_keys(workspace_id): + return WorkspaceApiKey.query.filter_by(workspace_id=workspace_id).all() + + @staticmethod + def revoke_api_key(key_id): + key = WorkspaceApiKey.query.get(key_id) + if not key: + return False + key.is_active = False + db.session.commit() + return True + + @staticmethod + def get_all_workspaces_admin(): + """Super-admin: see all workspaces with usage info.""" + workspaces = Workspace.query.order_by(Workspace.name).all() + return [{ + **ws.to_dict(), + 'member_count': ws.members.count(), + 'api_key_count': ws.api_keys.filter_by(is_active=True).count(), + } for ws in workspaces] diff --git a/backend/app/utils/crypto.py b/backend/app/utils/crypto.py index 1924758f..3544c7a7 100644 --- a/backend/app/utils/crypto.py +++ b/backend/app/utils/crypto.py @@ -6,10 +6,14 @@ import os import base64 +import warnings +import logging from cryptography.fernet import Fernet, InvalidToken from cryptography.hazmat.primitives import hashes from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC +logger = logging.getLogger(__name__) + def get_encryption_key() -> bytes: """ @@ -26,6 +30,10 @@ def get_encryption_key() -> bytes: """ key = os.environ.get('SERVERKIT_ENCRYPTION_KEY') if not key: + if os.environ.get('FLASK_ENV') == 'production': + raise ValueError('CRITICAL: SERVERKIT_ENCRYPTION_KEY must be set in production') + logger.warning('SECURITY WARNING: Using derived development encryption key. Set SERVERKIT_ENCRYPTION_KEY for production.') + warnings.warn('Using derived development encryption key - not suitable for production') # In development, use a default key (NOT for production!) # This allows the system to work without explicit configuration default_key = "DEV_ONLY_NOT_SECURE_CHANGE_IN_PRODUCTION_KEY" diff --git a/backend/app/utils/system.py b/backend/app/utils/system.py index 88047f3a..0c2cce60 100644 --- a/backend/app/utils/system.py +++ b/backend/app/utils/system.py @@ -66,6 +66,23 @@ def run_privileged(cmd: Union[List[str], str], *, user: Optional[str] = None, ** return subprocess.run(cmd, **kwargs) +def run_command(cmd: Union[List[str], str], *, timeout: int = 60, + capture_stderr: bool = False, **kwargs) -> dict: + """Run a shell command and return a dict with stdout/stderr/returncode. + + This is a convenience wrapper used by services that need simple dict results + rather than a raw ``CompletedProcess`` object. + """ + kwargs.setdefault('capture_output', True) + kwargs.setdefault('text', True) + result = subprocess.run(cmd, timeout=timeout, **kwargs) + return { + 'stdout': result.stdout or '', + 'stderr': result.stderr or '', + 'returncode': result.returncode, + } + + def is_command_available(cmd: str) -> bool: """Check whether *cmd* is available on the system. 
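For reference, a minimal sketch of how callers are expected to consume this wrapper (the `ping_host` helper below is hypothetical, not part of this diff): check `returncode` explicitly, since `run_command` reports failures through the result dict rather than raising on a non-zero exit.

```python
from app.utils.system import run_command

def ping_host(target: str, timeout: int = 5) -> bool:
    """Return True if a single ICMP echo to `target` succeeds."""
    # run_command returns {'stdout': str, 'stderr': str, 'returncode': int}.
    # It does not raise on a non-zero exit, but subprocess.TimeoutExpired can
    # still propagate, so give the outer timeout headroom over ping's -W.
    res = run_command(['ping', '-c', '1', '-W', str(timeout), target],
                      timeout=timeout + 5)
    return res['returncode'] == 0
```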
diff --git a/backend/cli.py b/backend/cli.py index 13deb2e2..52fa31db 100644 --- a/backend/cli.py +++ b/backend/cli.py @@ -78,6 +78,10 @@ def create_admin(email, username, password): db.session.add(user) db.session.commit() + # Mark setup as complete so the UI doesn't show the setup wizard + from app.services.settings_service import SettingsService + SettingsService.complete_setup(user_id=user.id) + click.echo(click.style(f'Admin user "{username}" created successfully!', fg='green')) diff --git a/backend/config.py b/backend/config.py index 4df2898c..cadd43da 100644 --- a/backend/config.py +++ b/backend/config.py @@ -1,5 +1,6 @@ import os import sys +import warnings from datetime import timedelta # Default insecure keys that must be changed in production @@ -20,7 +21,7 @@ class Config: # JWT JWT_SECRET_KEY = os.environ.get('JWT_SECRET_KEY', 'jwt-secret-key-change-in-production') - JWT_ACCESS_TOKEN_EXPIRES = timedelta(hours=1) + JWT_ACCESS_TOKEN_EXPIRES = timedelta(minutes=15) JWT_REFRESH_TOKEN_EXPIRES = timedelta(days=30) # CORS - Allow both dev server and Flask server @@ -30,6 +31,13 @@ class Config: class DevelopmentConfig(Config): DEBUG = True + @classmethod + def init_app(cls, app): + if app.config.get('SECRET_KEY') == 'dev-secret-key-change-in-production': + warnings.warn('WARNING: Using default SECRET_KEY. Change before deploying.') + if app.config.get('JWT_SECRET_KEY') == 'jwt-secret-key-change-in-production': + warnings.warn('WARNING: Using default JWT_SECRET_KEY. Change before deploying.') + class TestingConfig(Config): """Config for pytest and other automated tests.""" @@ -43,6 +51,11 @@ class TestingConfig(Config): class ProductionConfig(Config): DEBUG = False + # Secure session cookies in production + SESSION_COOKIE_SECURE = True + SESSION_COOKIE_HTTPONLY = True + SESSION_COOKIE_SAMESITE = 'Lax' + def __init__(self): # Validate that secret keys are not default values in production if self.SECRET_KEY in INSECURE_SECRET_KEYS: @@ -55,6 +68,21 @@ def __init__(self): print("Generate a secure key with: python -c \"import secrets; print(secrets.token_hex(32))\"", file=sys.stderr) sys.exit(1) + @classmethod + def init_app(cls, app): + """Validate production configuration.""" + insecure_keys = ['dev-secret-key-change-in-production', 'jwt-secret-key-change-in-production'] + if app.config['SECRET_KEY'] in insecure_keys: + raise ValueError('CRITICAL: SECRET_KEY must be changed for production deployment') + if app.config['JWT_SECRET_KEY'] in insecure_keys: + raise ValueError('CRITICAL: JWT_SECRET_KEY must be changed for production deployment') + # Validate CORS origins + cors_raw = os.environ.get('CORS_ORIGINS', '') + cors_origins = [o.strip() for o in cors_raw.split(',') if o.strip()] + if not cors_origins: + raise ValueError('CORS_ORIGINS must be explicitly set in production') + app.config['CORS_ORIGINS'] = cors_origins + config = { 'development': DevelopmentConfig, diff --git a/backend/migrations/versions/003_workflows_automation.py b/backend/migrations/versions/003_workflows_automation.py new file mode 100644 index 00000000..6f59b3d9 --- /dev/null +++ b/backend/migrations/versions/003_workflows_automation.py @@ -0,0 +1,90 @@ +"""Add workflows and automation tables. 
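+
+Idempotent: upgrade() inspects the existing schema and only creates tables or
+columns that are missing, so it is safe to run where an older workflows table
+already exists.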
+ +Revision ID: 003_workflows_automation +Revises: 002_permissions_invitations +Create Date: 2026-03-23 +""" +from alembic import op +import sqlalchemy as sa +from datetime import datetime + +revision = '003_workflows_automation' +down_revision = '002_permissions_invitations' +branch_labels = None +depends_on = None + + +def upgrade(): + conn = op.get_bind() + inspector = sa.inspect(conn) + existing_tables = inspector.get_table_names() + + # Create workflows table if missing (old version might have it but without automation fields) + if 'workflows' not in existing_tables: + op.create_table('workflows', + sa.Column('id', sa.Integer(), primary_key=True), + sa.Column('name', sa.String(100), nullable=False), + sa.Column('description', sa.Text(), nullable=True), + sa.Column('nodes', sa.Text(), nullable=True), + sa.Column('edges', sa.Text(), nullable=True), + sa.Column('viewport', sa.Text(), nullable=True), + sa.Column('is_active', sa.Boolean(), default=False), + sa.Column('trigger_type', sa.String(50), default='manual'), + sa.Column('trigger_config', sa.Text(), nullable=True), + sa.Column('last_run_at', sa.DateTime(), nullable=True), + sa.Column('last_status', sa.String(20), nullable=True), + sa.Column('created_at', sa.DateTime(), default=datetime.utcnow), + sa.Column('updated_at', sa.DateTime(), default=datetime.utcnow), + sa.Column('user_id', sa.Integer(), sa.ForeignKey('users.id'), nullable=False) + ) + else: + # Add automation columns to existing workflows table if they don't exist + existing_cols = {c['name'] for c in inspector.get_columns('workflows')} + with op.batch_alter_table('workflows') as batch_op: + if 'is_active' not in existing_cols: + batch_op.add_column(sa.Column('is_active', sa.Boolean(), nullable=True, server_default='0')) + if 'trigger_type' not in existing_cols: + batch_op.add_column(sa.Column('trigger_type', sa.String(50), nullable=True, server_default='manual')) + if 'trigger_config' not in existing_cols: + batch_op.add_column(sa.Column('trigger_config', sa.Text(), nullable=True)) + if 'last_run_at' not in existing_cols: + batch_op.add_column(sa.Column('last_run_at', sa.DateTime(), nullable=True)) + if 'last_status' not in existing_cols: + batch_op.add_column(sa.Column('last_status', sa.String(20), nullable=True)) + + # Create workflow_executions table + if 'workflow_executions' not in existing_tables: + op.create_table('workflow_executions', + sa.Column('id', sa.Integer(), primary_key=True), + sa.Column('workflow_id', sa.Integer(), sa.ForeignKey('workflows.id'), nullable=False), + sa.Column('status', sa.String(20), default='running'), + sa.Column('trigger_type', sa.String(50), nullable=True), + sa.Column('context', sa.Text(), nullable=True), + sa.Column('results', sa.Text(), nullable=True), + sa.Column('started_at', sa.DateTime(), default=datetime.utcnow), + sa.Column('completed_at', sa.DateTime(), nullable=True) + ) + + # Create workflow_logs table + if 'workflow_logs' not in existing_tables: + op.create_table('workflow_logs', + sa.Column('id', sa.Integer(), primary_key=True), + sa.Column('execution_id', sa.Integer(), sa.ForeignKey('workflow_executions.id'), nullable=False), + sa.Column('level', sa.String(10), default='INFO'), + sa.Column('message', sa.Text(), nullable=False), + sa.Column('node_id', sa.String(100), nullable=True), + sa.Column('timestamp', sa.DateTime(), default=datetime.utcnow) + ) + + +def downgrade(): + op.drop_table('workflow_logs') + op.drop_table('workflow_executions') + + # We don't drop 'workflows' as it existed before, but we can drop the new 
columns + with op.batch_alter_table('workflows') as batch_op: + batch_op.drop_column('last_status') + batch_op.drop_column('last_run_at') + batch_op.drop_column('trigger_config') + batch_op.drop_column('trigger_type') + batch_op.drop_column('is_active') diff --git a/backend/requirements.txt b/backend/requirements.txt index e15ea80f..61774898 100644 --- a/backend/requirements.txt +++ b/backend/requirements.txt @@ -9,7 +9,7 @@ Flask-Migrate==4.0.7 # Authentication Flask-JWT-Extended==4.6.0 -PyJWT==2.8.0 +PyJWT==2.12.1 # CORS Flask-Cors==6.0.2 @@ -40,6 +40,7 @@ Flask-Limiter==3.5.0 click==8.1.7 schedule==1.2.2 +croniter==6.0.0 passlib==1.7.4 bcrypt==4.2.1 diff --git a/dev.sh b/dev.sh index c157ffc6..8028c0cf 100644 --- a/dev.sh +++ b/dev.sh @@ -17,7 +17,7 @@ FRONTEND_DIR="$PROJECT_ROOT/frontend" CYAN='\033[0;36m' GREEN='\033[0;32m' RED='\033[0;31m' -YELLOW='\033[0;33m' +YELLOW='\033[1;33m' DIM='\033[2m' NC='\033[0m' diff --git a/docs/ARCHITECTURE.md b/docs/ARCHITECTURE.md index 9b2aad3a..eb52d3ff 100644 --- a/docs/ARCHITECTURE.md +++ b/docs/ARCHITECTURE.md @@ -11,6 +11,8 @@ - [Template System](#template-system) - [Port Allocation](#port-allocation) - [Database Linking](#database-linking) +- [Workflow Automation](#workflow-automation) +- [Environment Pipeline](#environment-pipeline) - [File Paths](#file-paths) --- @@ -97,27 +99,6 @@ User Request What Happens └─────────┘ ``` -### Detailed Nginx → Container Flow - -``` - NGINX CONFIG DOCKER - /etc/nginx/sites-enabled/ CONTAINER - ───────────────────────── ───────────────── - - server { - listen 80; - server_name my-blog.com; - ┌─────────────────┐ - location / { │ WordPress │ - proxy_pass ─────────────────────────► │ │ - http://127.0.0.1:8001; │ 0.0.0.0:80 │ - proxy_set_header Host $host; │ ▲ │ - proxy_set_header X-Real-IP ...; │ │ │ - } │ (mapped to │ - } │ host:8001) │ - └─────────────────┘ -``` - --- ## Template System @@ -135,106 +116,6 @@ User Request What Happens │ └────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘ │ │ │ │ │ │ │ │ └────────┼────────────┼────────────┼────────────┼────────────┼───────────────────┘ - │ │ │ │ │ - │ User clicks "Deploy" in UI │ - │ │ │ - ▼ ▼ ▼ -┌─────────────────────────────────────────────────────────────────────────────────┐ -│ SERVERKIT BACKEND │ -│ │ -│ TemplateService.install_template() │ -│ ├── 1. Parse template YAML │ -│ ├── 2. Generate unique port (8000-60000) │ -│ ├── 3. Substitute variables: │ -│ │ ${APP_NAME} → "my-blog" │ -│ │ ${HTTP_PORT} → "8247" │ -│ │ ${DB_PASSWORD} → "auto_generated" │ -│ ├── 4. Create /var/serverkit/apps/my-blog/docker-compose.yml │ -│ ├── 5. Run: docker compose up -d --build │ -│ └── 6. Store app record in database │ -│ │ -└─────────────────────────────────────────────────────────────────────────────────┘ - │ - ▼ -┌─────────────────────────────────────────────────────────────────────────────────┐ -│ APP CREATED │ -│ │ -│ ┌─────────────────────────────────────────────────────────────────────────┐ │ -│ │ App Name: my-blog │ │ -│ │ Type: docker │ │ -│ │ Port: 8247 (auto-assigned) │ │ -│ │ Status: running │ │ -│ │ Container: my-blog │ │ -│ │ Path: /var/serverkit/apps/my-blog/ │ │ -│ │ │ │ -│ │ Private URL: http://server-ip:8247 ◄── Works immediately! 
│ │ -│ └─────────────────────────────────────────────────────────────────────────┘ │ -│ │ -└─────────────────────────────────────────────────────────────────────────────────┘ - │ - │ User clicks "Connect Domain" - ▼ -┌─────────────────────────────────────────────────────────────────────────────────┐ -│ DOMAIN CONNECTED │ -│ │ -│ DomainService.create_domain() │ -│ ├── 1. Validate domain DNS points to server │ -│ ├── 2. Check container port is accessible │ -│ ├── 3. Generate Nginx config: │ -│ │ │ -│ │ server { │ -│ │ listen 80; │ -│ │ server_name my-blog.com; │ -│ │ │ -│ │ location / { │ -│ │ proxy_pass http://127.0.0.1:8247; ◄── Container port │ -│ │ proxy_set_header Host $host; │ -│ │ proxy_set_header X-Real-IP $remote_addr; │ -│ │ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; │ -│ │ proxy_set_header X-Forwarded-Proto $scheme; │ -│ │ } │ -│ │ } │ -│ │ │ -│ ├── 4. Write to /etc/nginx/sites-available/my-blog │ -│ ├── 5. Symlink to /etc/nginx/sites-enabled/my-blog │ -│ ├── 6. Test config: nginx -t │ -│ ├── 7. Reload: systemctl reload nginx │ -│ └── 8. (Optional) Request SSL via Let's Encrypt │ -│ │ -│ Public URL: https://my-blog.com ◄── Now accessible worldwide! │ -│ │ -└─────────────────────────────────────────────────────────────────────────────────┘ -``` - -### Template YAML Structure - -```yaml -# Example: flask-hello-world.yaml - -id: flask-hello-world -name: Flask - Hello World -version: "1.0" -description: Simple Flask debug app -categories: - - development - - api - -variables: - - name: HTTP_PORT # Variable name - type: port # Auto-generates available port - default: "5000" # Starting port to search from - hidden: true # Don't show in UI - -compose: # Docker Compose configuration - services: - app: - image: python:3.12-slim - container_name: ${APP_NAME} - ports: - - "${HTTP_PORT}:5000" # Host:Container port mapping - environment: - - APP_NAME=${APP_NAME} - - EXTERNAL_PORT=${HTTP_PORT} ``` --- @@ -243,144 +124,62 @@ compose: # Docker Compose configuration ### How ServerKit Finds Available Ports +ServerKit automatically scans the database, Docker, and system sockets to find the first available port (starting from 8000) for new applications, ensuring no conflicts occur during deployment. + +--- + +## Database Linking + +### How Apps Connect to Databases + +ServerKit automates the creation of databases and users, then injects the credentials as environment variables (`DB_HOST`, `DB_USER`, etc.) directly into the application container, allowing for seamless connectivity. + +--- + +## Workflow Automation + +ServerKit includes a node-based visual workflow builder for automating server tasks. 
+ ``` ┌─────────────────────────────────────────────────────────────────────────────────┐ -│ PORT ALLOCATION │ +│ WORKFLOW BUILDER │ │ │ -│ TemplateService._find_available_port(start=8000) │ -│ │ -│ ┌─────────────────────────────────────────────────────────────────────────┐ │ -│ │ Step 1: Check Database │ │ -│ │ SELECT port FROM applications WHERE port IS NOT NULL │ │ -│ │ Result: [8001, 8002, 8005, 8010] │ │ -│ └─────────────────────────────────────────────────────────────────────────┘ │ -│ │ │ -│ ▼ │ -│ ┌─────────────────────────────────────────────────────────────────────────┐ │ -│ │ Step 2: Check Docker │ │ -│ │ docker ps --format '{{.Ports}}' │ │ -│ │ Parse: "0.0.0.0:8003->80/tcp" → 8003 │ │ -│ │ Result: [8003, 8004] │ │ -│ └─────────────────────────────────────────────────────────────────────────┘ │ -│ │ │ -│ ▼ │ -│ ┌─────────────────────────────────────────────────────────────────────────┐ │ -│ │ Step 3: Socket Bind Test │ │ -│ │ Try: socket.bind(('127.0.0.1', port)) │ │ -│ │ If fails → port in use by system │ │ -│ └─────────────────────────────────────────────────────────────────────────┘ │ -│ │ │ -│ ▼ │ -│ ┌─────────────────────────────────────────────────────────────────────────┐ │ -│ │ Step 4: Return First Available │ │ -│ │ │ │ -│ │ Checking 8000... taken (DB) │ │ -│ │ Checking 8001... taken (DB) │ │ -│ │ Checking 8002... taken (DB) │ │ -│ │ Checking 8003... taken (Docker) │ │ -│ │ Checking 8004... taken (Docker) │ │ -│ │ Checking 8005... taken (DB) │ │ -│ │ Checking 8006... AVAILABLE ✓ │ │ -│ │ │ │ -│ │ Return: 8006 │ │ -│ └─────────────────────────────────────────────────────────────────────────┘ │ +│ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ │ +│ │ TRIGGER │──────▶│ ACTION │──────▶│ CONDITION │──────▶│ NOTIFY │ │ +│ │ (Git Push)│ │ (Build) │ │ (Success?)│ │ (Discord) │ │ +│ └───────────┘ └───────────┘ └─────┬─────┘ └───────────┘ │ +│ │ │ +│ ▼ │ +│ ┌───────────┐ │ +│ │ ROLLBACK │ │ +│ └───────────┘ │ │ │ └─────────────────────────────────────────────────────────────────────────────────┘ ``` -### Port Map Example - -``` -┌──────────┬────────────────────────────────────────┐ -│ Port │ Service │ -├──────────┼────────────────────────────────────────┤ -│ 22 │ SSH │ -│ 80 │ Nginx (HTTP) │ -│ 443 │ Nginx (HTTPS) │ -│ 3306 │ MySQL │ -│ 5432 │ PostgreSQL │ -│ 5000 │ ServerKit Backend API │ -│ 6379 │ Redis │ -├──────────┼────────────────────────────────────────┤ -│ 8001 │ App: wordpress-blog │ -│ 8002 │ App: flask-api │ -│ 8003 │ App: node-frontend │ -│ 8004 │ App: grafana-monitoring │ -│ 8005 │ App: n8n-automation │ -│ 8006 │ App: (next available) │ -│ ... │ ... │ -│ 60000 │ (max port range) │ -└──────────┴────────────────────────────────────────┘ -``` +- **Nodes:** Represent individual steps (Triggers, Actions, Logic, Notifications). +- **Edges:** Define the flow of execution. +- **Engine:** The `WorkflowService` parses the JSON graph and executes steps sequentially or in parallel. --- -## Database Linking +## Environment Pipeline -### How Apps Connect to Databases +Specifically designed for WordPress, the environment pipeline allows for professional staging/dev workflows. 
``` -┌────────────────────────────────────────────────────────────────────────────────┐ -│ │ -│ ┌──────────────────┐ ┌──────────────────┐ │ -│ │ APP (Flask) │ │ DATABASE (MySQL) │ │ -│ │ │ │ │ │ -│ │ Needs DB access │ │ db: my_app_db │ │ -│ │ │ │ user: app_user │ │ -│ └────────┬─────────┘ └────────┬─────────┘ │ -│ │ │ │ -│ │ User clicks "Link Database" │ │ -│ │ │ │ -│ └──────────────┬───────────────────┘ │ -│ │ │ -│ ▼ │ -│ ┌────────────────────────────────────────────────────────────────────────┐ │ -│ │ SERVERKIT LINKS THEM │ │ -│ │ │ │ -│ │ 1. Creates database: my_app_db │ │ -│ │ 2. Creates user with secure password │ │ -│ │ 3. Grants permissions │ │ -│ │ 4. Injects environment variables into app container: │ │ -│ │ │ │ -│ │ ┌─────────────────────────────────────────────────────────────┐ │ │ -│ │ │ DB_HOST=localhost │ │ │ -│ │ │ DB_PORT=3306 │ │ │ -│ │ │ DB_NAME=my_app_db │ │ │ -│ │ │ DB_USER=app_user │ │ │ -│ │ │ DB_PASSWORD=xK9#mP2$vL7@nQ4 │ │ │ -│ │ │ │ │ │ -│ │ │ # Also provides connection URL format: │ │ │ -│ │ │ DATABASE_URL=mysql://app_user:xK9#mP2$vL7@nQ4@localhost/db │ │ │ -│ │ └─────────────────────────────────────────────────────────────┘ │ │ -│ │ │ │ -│ │ 5. Restarts app container to pick up new env vars │ │ -│ │ │ │ -│ └────────────────────────────────────────────────────────────────────────┘ │ -│ │ │ -│ ▼ │ -│ ┌────────────────────────────────────────────────────────────────────────┐ │ -│ │ APP CODE │ │ -│ │ │ │ -│ │ # Python/Flask example │ │ -│ │ import os │ │ -│ │ import mysql.connector │ │ -│ │ │ │ -│ │ db = mysql.connector.connect( │ │ -│ │ host=os.environ['DB_HOST'], # localhost │ │ -│ │ port=os.environ['DB_PORT'], # 3306 │ │ -│ │ database=os.environ['DB_NAME'], # my_app_db │ │ -│ │ user=os.environ['DB_USER'], # app_user │ │ -│ │ password=os.environ['DB_PASSWORD'] │ │ -│ │ ) │ │ -│ │ │ │ -│ │ # Or use the URL directly: │ │ -│ │ # SQLAlchemy: create_engine(os.environ['DATABASE_URL']) │ │ -│ │ │ │ -│ └────────────────────────────────────────────────────────────────────────┘ │ -│ │ -└─────────────────────────────────────────────────────────────────────────────────┘ +┌──────────────┐ promotion ┌──────────────┐ promotion ┌──────────────┐ +│ DEV │───────────▶│ STAGING │───────────▶│ PRODUCTION │ +│ (Standalone) │ │ (Standalone) │ │ (Production) │ +└──────┬───────┘ └──────┬───────┘ └──────┬───────┘ + │ │ │ + └─────────── sync ──────────┴─────────── sync ──────────┘ ``` +- **Promotion:** Push code (Git) and Database from a lower environment to a higher one. +- **Syncing:** Pull the latest production database and media to dev/staging for testing. +- **Sanitization:** Automatically strip sensitive user data during sync. + --- ## File Paths @@ -388,168 +187,30 @@ compose: # Docker Compose configuration ### Where Everything Lives ``` -SERVER FILESYSTEM -───────────────────────────────────────────────────────────────────────────────── - /var/serverkit/ # ServerKit data root ├── apps/ # All deployed applications -│ ├── my-blog/ -│ │ ├── docker-compose.yml # Generated from template -│ │ ├── .env # Environment variables -│ │ └── data/ # Persistent volumes -│ ├── flask-api/ -│ │ ├── docker-compose.yml -│ │ └── app/ # Application code -│ └── ... -│ ├── backups/ # Database backups -│ ├── mysql/ -│ └── postgres/ -│ -└── ssl/ # SSL certificates (if not using certbot) +└── ssl/ # SSL certificates /etc/serverkit/ # ServerKit configuration ├── templates/ # Template library (YAML files) -│ ├── wordpress.yaml -│ ├── flask-hello-world.yaml -│ ├── grafana.yaml -│ └── ... 
└── config.yaml # Main config - -/etc/nginx/ # Nginx configuration -├── sites-available/ # All site configs -│ ├── my-blog # Generated by ServerKit -│ ├── flask-api -│ └── default -├── sites-enabled/ # Enabled sites (symlinks) -│ ├── my-blog -> ../sites-available/my-blog -│ └── flask-api -> ../sites-available/flask-api -└── nginx.conf # Main nginx config - -/var/log/nginx/ # Nginx logs (per-app) -├── my-blog.access.log -├── my-blog.error.log -├── flask-api.access.log -└── flask-api.error.log - -/var/lib/mysql/ # MySQL data -/var/lib/postgresql/ # PostgreSQL data ``` --- ## Component Diagram -``` -┌─────────────────────────────────────────────────────────────────────────────────┐ -│ SERVERKIT │ -│ │ -│ ┌─────────────────────────────────────────────────────────────────────────┐ │ -│ │ FRONTEND (React) │ │ -│ │ Served via Nginx :80/443 │ │ -│ │ │ │ -│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ │ -│ │ │Dashboard │ │ Apps │ │ Domains │ │ Docker │ │ Security │ │ │ -│ │ └──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │ │ -│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ │ -│ │ │Databases │ │Templates │ │ Firewall │ │ Cron │ │ Settings │ │ │ -│ │ └──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │ │ -│ │ │ │ -│ └─────────────────────────────────┬────────────────────────────────────────┘ │ -│ │ │ -│ REST API + WebSocket │ -│ │ │ -│ ▼ │ -│ ┌─────────────────────────────────────────────────────────────────────────┐ │ -│ │ BACKEND (Flask) │ │ -│ │ Port 5000 │ │ -│ │ │ │ -│ │ ┌─────────────────────────────────────────────────────────────────┐ │ │ -│ │ │ SERVICES │ │ │ -│ │ │ │ │ │ -│ │ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │ │ -│ │ │ │DockerService │ │ NginxService │ │TemplateServ. │ │ │ │ -│ │ │ │ │ │ │ │ │ │ │ │ -│ │ │ │ • compose_up │ │ • create_site│ │ • install │ │ │ │ -│ │ │ │ • logs │ │ • enable_ssl │ │ • variables │ │ │ │ -│ │ │ │ • stats │ │ • reload │ │ • validate │ │ │ │ -│ │ │ └──────────────┘ └──────────────┘ └──────────────┘ │ │ │ -│ │ │ │ │ │ -│ │ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │ │ -│ │ │ │ DBService │ │ SSLService │ │SecurityServ. │ │ │ │ -│ │ │ │ │ │ │ │ │ │ │ │ -│ │ │ │ • create_db │ │ • certbot │ │ • ClamAV │ │ │ │ -│ │ │ │ • users │ │ • renew │ │ • 2FA │ │ │ │ -│ │ │ │ • backup │ │ • wildcard │ │ • firewall │ │ │ │ -│ │ │ └──────────────┘ └──────────────┘ └──────────────┘ │ │ │ -│ │ │ │ │ │ -│ │ └──────────────────────────────────────────────────────────────────┘ │ │ -│ │ │ │ -│ └─────────────────────────────────┬────────────────────────────────────────┘ │ -│ │ │ -│ ▼ │ -│ ┌─────────────────────────────────────────────────────────────────────────┐ │ -│ │ DATABASE (SQLite/PostgreSQL) │ │ -│ │ │ │ -│ │ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌─────────┐ │ │ -│ │ │ Apps │ │ Domains │ │ Users │ │ Databases │ │ Settings│ │ │ -│ │ └───────────┘ └───────────┘ └───────────┘ └───────────┘ └─────────┘ │ │ -│ │ │ │ -│ └─────────────────────────────────────────────────────────────────────────┘ │ -│ │ -└──────────────────────────────────────────────────────────────────────────────────┘ -``` +ServerKit follows a modern 3-tier architecture: +1. **Frontend:** React-based dashboard served via Nginx. +2. **Backend:** Flask REST API managing Docker, Nginx, and system services. +3. **Agent:** Go-based remote agent for multi-server management. 
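To make the backend tier concrete, here is a short usage sketch of the event-driven workflow path added in this diff. The monitoring call site is hypothetical; `WorkflowEventBus.emit` is defined in `workflow_engine.py` and must be called inside a Flask application context.

```python
from app.services.workflow_engine import WorkflowEventBus

# Hypothetical call site in a monitoring service. emit() returns immediately:
# matching workflows (trigger_type='event' whose trigger_config eventType
# equals 'high_cpu') run on a daemon thread and receive this payload as
# context['event_data']. A 60-second cooldown prevents rapid re-triggering.
WorkflowEventBus.emit('high_cpu', {'cpu_percent': 97.3, 'host': 'web-01'})
```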
--- ## Troubleshooting -### 502 Bad Gateway - -``` -Problem: Nginx can't reach the container - -Check: -┌─────────────────────────────────────────────────────────────────┐ -│ 1. Is container running? │ -│ docker ps | grep │ -│ │ -│ 2. Is port bound to host? │ -│ docker port │ -│ Expected: 5000/tcp -> 0.0.0.0:8001 │ -│ │ -│ 3. Is port accessible? │ -│ curl -I http://127.0.0.1:8001 │ -│ Expected: HTTP/1.1 200 OK │ -│ │ -│ 4. Does nginx config have correct port? │ -│ cat /etc/nginx/sites-enabled/ │ -│ Check: proxy_pass http://127.0.0.1:8001; │ -│ │ -│ 5. Check nginx error log: │ -│ tail -50 /var/log/nginx/.error.log │ -└─────────────────────────────────────────────────────────────────┘ -``` - -### Container Won't Start - -``` -Problem: docker compose up fails - -Check: -┌─────────────────────────────────────────────────────────────────┐ -│ 1. Check compose logs: │ -│ cd /var/serverkit/apps/ │ -│ docker compose logs │ -│ │ -│ 2. Validate compose file: │ -│ docker compose config │ -│ │ -│ 3. Check for port conflicts: │ -│ docker ps --format "{{.Ports}}" │ -│ netstat -tulpn | grep │ -└─────────────────────────────────────────────────────────────────┘ -``` +Refer to the [Deployment Guide](DEPLOYMENT.md) for detailed troubleshooting steps regarding 502 errors, container failures, and networking issues. --- diff --git a/frontend/package-lock.json b/frontend/package-lock.json index f798d077..51157ce2 100644 --- a/frontend/package-lock.json +++ b/frontend/package-lock.json @@ -8,6 +8,7 @@ "name": "serverkit-frontend", "version": "0.1.0", "dependencies": { + "@rollup/rollup-win32-x64-msvc": "^4.60.0", "@xterm/addon-fit": "^0.10.0", "@xterm/addon-web-links": "^0.11.0", "@xterm/xterm": "^5.5.0", @@ -28,7 +29,7 @@ "eslint-plugin-react-hooks": "^5.1.0-rc.0", "eslint-plugin-react-refresh": "^0.4.9", "globals": "^15.9.0", - "less": "^4.5.1", + "sass": "^1.86.0", "vite": "^5.4.1" } }, @@ -973,6 +974,316 @@ "@jridgewell/sourcemap-codec": "^1.4.14" } }, + "node_modules/@parcel/watcher": { + "version": "2.5.6", + "resolved": "https://registry.npmjs.org/@parcel/watcher/-/watcher-2.5.6.tgz", + "integrity": "sha512-tmmZ3lQxAe/k/+rNnXQRawJ4NjxO2hqiOLTHvWchtGZULp4RyFeh6aU4XdOYBFe2KE1oShQTv4AblOs2iOrNnQ==", + "dev": true, + "hasInstallScript": true, + "license": "MIT", + "optional": true, + "dependencies": { + "detect-libc": "^2.0.3", + "is-glob": "^4.0.3", + "node-addon-api": "^7.0.0", + "picomatch": "^4.0.3" + }, + "engines": { + "node": ">= 10.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + }, + "optionalDependencies": { + "@parcel/watcher-android-arm64": "2.5.6", + "@parcel/watcher-darwin-arm64": "2.5.6", + "@parcel/watcher-darwin-x64": "2.5.6", + "@parcel/watcher-freebsd-x64": "2.5.6", + "@parcel/watcher-linux-arm-glibc": "2.5.6", + "@parcel/watcher-linux-arm-musl": "2.5.6", + "@parcel/watcher-linux-arm64-glibc": "2.5.6", + "@parcel/watcher-linux-arm64-musl": "2.5.6", + "@parcel/watcher-linux-x64-glibc": "2.5.6", + "@parcel/watcher-linux-x64-musl": "2.5.6", + "@parcel/watcher-win32-arm64": "2.5.6", + "@parcel/watcher-win32-ia32": "2.5.6", + "@parcel/watcher-win32-x64": "2.5.6" + } + }, + "node_modules/@parcel/watcher-android-arm64": { + "version": "2.5.6", + "resolved": "https://registry.npmjs.org/@parcel/watcher-android-arm64/-/watcher-android-arm64-2.5.6.tgz", + "integrity": "sha512-YQxSS34tPF/6ZG7r/Ih9xy+kP/WwediEUsqmtf0cuCV5TPPKw/PQHRhueUo6JdeFJaqV3pyjm0GdYjZotbRt/A==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": 
[ + "android" + ], + "engines": { + "node": ">= 10.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/@parcel/watcher-darwin-arm64": { + "version": "2.5.6", + "resolved": "https://registry.npmjs.org/@parcel/watcher-darwin-arm64/-/watcher-darwin-arm64-2.5.6.tgz", + "integrity": "sha512-Z2ZdrnwyXvvvdtRHLmM4knydIdU9adO3D4n/0cVipF3rRiwP+3/sfzpAwA/qKFL6i1ModaabkU7IbpeMBgiVEA==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": ">= 10.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/@parcel/watcher-darwin-x64": { + "version": "2.5.6", + "resolved": "https://registry.npmjs.org/@parcel/watcher-darwin-x64/-/watcher-darwin-x64-2.5.6.tgz", + "integrity": "sha512-HgvOf3W9dhithcwOWX9uDZyn1lW9R+7tPZ4sug+NGrGIo4Rk1hAXLEbcH1TQSqxts0NYXXlOWqVpvS1SFS4fRg==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": ">= 10.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/@parcel/watcher-freebsd-x64": { + "version": "2.5.6", + "resolved": "https://registry.npmjs.org/@parcel/watcher-freebsd-x64/-/watcher-freebsd-x64-2.5.6.tgz", + "integrity": "sha512-vJVi8yd/qzJxEKHkeemh7w3YAn6RJCtYlE4HPMoVnCpIXEzSrxErBW5SJBgKLbXU3WdIpkjBTeUNtyBVn8TRng==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "freebsd" + ], + "engines": { + "node": ">= 10.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/@parcel/watcher-linux-arm-glibc": { + "version": "2.5.6", + "resolved": "https://registry.npmjs.org/@parcel/watcher-linux-arm-glibc/-/watcher-linux-arm-glibc-2.5.6.tgz", + "integrity": "sha512-9JiYfB6h6BgV50CCfasfLf/uvOcJskMSwcdH1PHH9rvS1IrNy8zad6IUVPVUfmXr+u+Km9IxcfMLzgdOudz9EQ==", + "cpu": [ + "arm" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 10.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/@parcel/watcher-linux-arm-musl": { + "version": "2.5.6", + "resolved": "https://registry.npmjs.org/@parcel/watcher-linux-arm-musl/-/watcher-linux-arm-musl-2.5.6.tgz", + "integrity": "sha512-Ve3gUCG57nuUUSyjBq/MAM0CzArtuIOxsBdQ+ftz6ho8n7s1i9E1Nmk/xmP323r2YL0SONs1EuwqBp2u1k5fxg==", + "cpu": [ + "arm" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 10.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/@parcel/watcher-linux-arm64-glibc": { + "version": "2.5.6", + "resolved": "https://registry.npmjs.org/@parcel/watcher-linux-arm64-glibc/-/watcher-linux-arm64-glibc-2.5.6.tgz", + "integrity": "sha512-f2g/DT3NhGPdBmMWYoxixqYr3v/UXcmLOYy16Bx0TM20Tchduwr4EaCbmxh1321TABqPGDpS8D/ggOTaljijOA==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 10.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/@parcel/watcher-linux-arm64-musl": { + "version": "2.5.6", + "resolved": 
"https://registry.npmjs.org/@parcel/watcher-linux-arm64-musl/-/watcher-linux-arm64-musl-2.5.6.tgz", + "integrity": "sha512-qb6naMDGlbCwdhLj6hgoVKJl2odL34z2sqkC7Z6kzir8b5W65WYDpLB6R06KabvZdgoHI/zxke4b3zR0wAbDTA==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 10.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/@parcel/watcher-linux-x64-glibc": { + "version": "2.5.6", + "resolved": "https://registry.npmjs.org/@parcel/watcher-linux-x64-glibc/-/watcher-linux-x64-glibc-2.5.6.tgz", + "integrity": "sha512-kbT5wvNQlx7NaGjzPFu8nVIW1rWqV780O7ZtkjuWaPUgpv2NMFpjYERVi0UYj1msZNyCzGlaCWEtzc+exjMGbQ==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 10.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/@parcel/watcher-linux-x64-musl": { + "version": "2.5.6", + "resolved": "https://registry.npmjs.org/@parcel/watcher-linux-x64-musl/-/watcher-linux-x64-musl-2.5.6.tgz", + "integrity": "sha512-1JRFeC+h7RdXwldHzTsmdtYR/Ku8SylLgTU/reMuqdVD7CtLwf0VR1FqeprZ0eHQkO0vqsbvFLXUmYm/uNKJBg==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 10.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/@parcel/watcher-win32-arm64": { + "version": "2.5.6", + "resolved": "https://registry.npmjs.org/@parcel/watcher-win32-arm64/-/watcher-win32-arm64-2.5.6.tgz", + "integrity": "sha512-3ukyebjc6eGlw9yRt678DxVF7rjXatWiHvTXqphZLvo7aC5NdEgFufVwjFfY51ijYEWpXbqF5jtrK275z52D4Q==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "win32" + ], + "engines": { + "node": ">= 10.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/@parcel/watcher-win32-ia32": { + "version": "2.5.6", + "resolved": "https://registry.npmjs.org/@parcel/watcher-win32-ia32/-/watcher-win32-ia32-2.5.6.tgz", + "integrity": "sha512-k35yLp1ZMwwee3Ez/pxBi5cf4AoBKYXj00CZ80jUz5h8prpiaQsiRPKQMxoLstNuqe2vR4RNPEAEcjEFzhEz/g==", + "cpu": [ + "ia32" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "win32" + ], + "engines": { + "node": ">= 10.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/@parcel/watcher-win32-x64": { + "version": "2.5.6", + "resolved": "https://registry.npmjs.org/@parcel/watcher-win32-x64/-/watcher-win32-x64-2.5.6.tgz", + "integrity": "sha512-hbQlYcCq5dlAX9Qx+kFb0FHue6vbjlf0FrNzSKdYK2APUf7tGfGxQCk2ihEREmbR6ZMc0MVAD5RIX/41gpUzTw==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "win32" + ], + "engines": { + "node": ">= 10.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, "node_modules/@remix-run/router": { "version": "1.23.2", "resolved": "https://registry.npmjs.org/@remix-run/router/-/router-1.23.2.tgz", @@ -1326,15 +1637,13 @@ ] }, "node_modules/@rollup/rollup-win32-x64-msvc": { - "version": "4.55.1", - "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-x64-msvc/-/rollup-win32-x64-msvc-4.55.1.tgz", - "integrity": 
"sha512-SPEpaL6DX4rmcXtnhdrQYgzQ5W2uW3SCJch88lB2zImhJRhIIK44fkUrgIV/Q8yUNfw5oyZ5vkeQsZLhCb06lw==", + "version": "4.60.0", + "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-x64-msvc/-/rollup-win32-x64-msvc-4.60.0.tgz", + "integrity": "sha512-PrsWNQ8BuE00O3Xsx3ALh2Df8fAj9+cvvX9AIA6o4KpATR98c9mud4XtDWVvsEuyia5U4tVSTKygawyJkjm60w==", "cpu": [ "x64" ], - "dev": true, "license": "MIT", - "optional": true, "os": [ "win32" ] @@ -1993,6 +2302,22 @@ "url": "https://github.com/chalk/chalk?sponsor=1" } }, + "node_modules/chokidar": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/chokidar/-/chokidar-4.0.3.tgz", + "integrity": "sha512-Qgzu8kfBvo+cA4962jnP1KkS6Dop5NS6g7R5LFYJr4b8Ub94PPQXUksCw9PvXoeXPRRddRNC5C1JQUR2SMGtnA==", + "dev": true, + "license": "MIT", + "dependencies": { + "readdirp": "^4.0.1" + }, + "engines": { + "node": ">= 14.16.0" + }, + "funding": { + "url": "https://paulmillr.com/funding/" + } + }, "node_modules/classcat": { "version": "5.0.5", "resolved": "https://registry.npmjs.org/classcat/-/classcat-5.0.5.tgz", @@ -2042,19 +2367,6 @@ "dev": true, "license": "MIT" }, - "node_modules/copy-anything": { - "version": "2.0.6", - "resolved": "https://registry.npmjs.org/copy-anything/-/copy-anything-2.0.6.tgz", - "integrity": "sha512-1j20GZTsvKNkc4BY3NpMOM8tt///wY3FpIzozTOFO2ffuZcV61nojHXVKIy3WM+7ADCy5FVhdZYHYDdgTU0yJw==", - "dev": true, - "license": "MIT", - "dependencies": { - "is-what": "^3.14.1" - }, - "funding": { - "url": "https://github.com/sponsors/mesqueeb" - } - }, "node_modules/cross-spawn": { "version": "7.0.6", "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.6.tgz", @@ -2383,6 +2695,17 @@ "url": "https://github.com/sponsors/ljharb" } }, + "node_modules/detect-libc": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/detect-libc/-/detect-libc-2.1.2.tgz", + "integrity": "sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ==", + "dev": true, + "license": "Apache-2.0", + "optional": true, + "engines": { + "node": ">=8" + } + }, "node_modules/doctrine": { "version": "2.1.0", "resolved": "https://registry.npmjs.org/doctrine/-/doctrine-2.1.0.tgz", @@ -2450,20 +2773,6 @@ "node": ">=10.0.0" } }, - "node_modules/errno": { - "version": "0.1.8", - "resolved": "https://registry.npmjs.org/errno/-/errno-0.1.8.tgz", - "integrity": "sha512-dJ6oBr5SQ1VSd9qkk7ByRgb/1SH4JZjCHSW/mr63/QcXO9zLVxvJ6Oy13nio03rxpSnVDDjFor75SjVeZWPW/A==", - "dev": true, - "license": "MIT", - "optional": true, - "dependencies": { - "prr": "~1.0.1" - }, - "bin": { - "errno": "cli.js" - } - }, "node_modules/es-abstract": { "version": "1.24.1", "resolved": "https://registry.npmjs.org/es-abstract/-/es-abstract-1.24.1.tgz", @@ -3205,14 +3514,6 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/graceful-fs": { - "version": "4.2.11", - "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.11.tgz", - "integrity": "sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ==", - "dev": true, - "license": "ISC", - "optional": true - }, "node_modules/has-bigints": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/has-bigints/-/has-bigints-1.1.0.tgz", @@ -3307,20 +3608,6 @@ "node": ">= 0.4" } }, - "node_modules/iconv-lite": { - "version": "0.6.3", - "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.6.3.tgz", - "integrity": "sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw==", - "dev": true, 
- "license": "MIT", - "optional": true, - "dependencies": { - "safer-buffer": ">= 2.1.2 < 3.0.0" - }, - "engines": { - "node": ">=0.10.0" - } - }, "node_modules/ignore": { "version": "5.3.2", "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.3.2.tgz", @@ -3331,19 +3618,12 @@ "node": ">= 4" } }, - "node_modules/image-size": { - "version": "0.5.5", - "resolved": "https://registry.npmjs.org/image-size/-/image-size-0.5.5.tgz", - "integrity": "sha512-6TDAlDPZxUFCv+fuOkIoXT/V/f3Qbq8e37p+YOiYrUv3v9cc3/6x78VdfPgFVaB9dZYeLUfKgHRebpkm/oP2VQ==", + "node_modules/immutable": { + "version": "5.1.5", + "resolved": "https://registry.npmjs.org/immutable/-/immutable-5.1.5.tgz", + "integrity": "sha512-t7xcm2siw+hlUM68I+UEOK+z84RzmN59as9DZ7P1l0994DKUWV7UXBMQZVxaoMSRQ+PBZbHCOoBt7a2wxOMt+A==", "dev": true, - "license": "MIT", - "optional": true, - "bin": { - "image-size": "bin/image-size.js" - }, - "engines": { - "node": ">=0.10.0" - } + "license": "MIT" }, "node_modules/import-fresh": { "version": "3.3.1", @@ -3778,13 +4058,6 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/is-what": { - "version": "3.14.1", - "resolved": "https://registry.npmjs.org/is-what/-/is-what-3.14.1.tgz", - "integrity": "sha512-sNxgpk9793nzSs7bA6JQJGeIuRBQhAaNGG77kzYQgMkrID+lS6SlK07K5LaptscDlSaIgH+GPFzf+d75FVxozA==", - "dev": true, - "license": "MIT" - }, "node_modules/isarray": { "version": "2.0.5", "resolved": "https://registry.npmjs.org/isarray/-/isarray-2.0.5.tgz", @@ -3909,34 +4182,6 @@ "json-buffer": "3.0.1" } }, - "node_modules/less": { - "version": "4.5.1", - "resolved": "https://registry.npmjs.org/less/-/less-4.5.1.tgz", - "integrity": "sha512-UKgI3/KON4u6ngSsnDADsUERqhZknsVZbnuzlRZXLQCmfC/MDld42fTydUE9B+Mla1AL6SJ/Pp6SlEFi/AVGfw==", - "dev": true, - "hasInstallScript": true, - "license": "Apache-2.0", - "dependencies": { - "copy-anything": "^2.0.1", - "parse-node-version": "^1.0.1", - "tslib": "^2.3.0" - }, - "bin": { - "lessc": "bin/lessc" - }, - "engines": { - "node": ">=14" - }, - "optionalDependencies": { - "errno": "^0.1.1", - "graceful-fs": "^4.1.2", - "image-size": "~0.5.0", - "make-dir": "^2.1.0", - "mime": "^1.4.1", - "needle": "^3.1.0", - "source-map": "~0.6.0" - } - }, "node_modules/levn": { "version": "0.4.1", "resolved": "https://registry.npmjs.org/levn/-/levn-0.4.1.tgz", @@ -4011,32 +4256,6 @@ "react": "^16.5.1 || ^17.0.0 || ^18.0.0" } }, - "node_modules/make-dir": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/make-dir/-/make-dir-2.1.0.tgz", - "integrity": "sha512-LS9X+dc8KLxXCb8dni79fLIIUA5VyZoyjSMCwTluaXA0o27cCK0bhXkpgw+sTXVpPy/lSO57ilRixqk0vDmtRA==", - "dev": true, - "license": "MIT", - "optional": true, - "dependencies": { - "pify": "^4.0.1", - "semver": "^5.6.0" - }, - "engines": { - "node": ">=6" - } - }, - "node_modules/make-dir/node_modules/semver": { - "version": "5.7.2", - "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.2.tgz", - "integrity": "sha512-cBznnQ9KjJqU67B52RMC65CMarK2600WFnbkcaiwWq3xy/5haFJlshgnpjovMVJ+Hff49d8GEn0b87C5pDQ10g==", - "dev": true, - "license": "ISC", - "optional": true, - "bin": { - "semver": "bin/semver" - } - }, "node_modules/math-intrinsics": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz", @@ -4047,20 +4266,6 @@ "node": ">= 0.4" } }, - "node_modules/mime": { - "version": "1.6.0", - "resolved": "https://registry.npmjs.org/mime/-/mime-1.6.0.tgz", - "integrity": "sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==", - 
"dev": true, - "license": "MIT", - "optional": true, - "bin": { - "mime": "cli.js" - }, - "engines": { - "node": ">=4" - } - }, "node_modules/minimatch": { "version": "3.1.2", "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.1.2.tgz", @@ -4106,23 +4311,13 @@ "dev": true, "license": "MIT" }, - "node_modules/needle": { - "version": "3.3.1", - "resolved": "https://registry.npmjs.org/needle/-/needle-3.3.1.tgz", - "integrity": "sha512-6k0YULvhpw+RoLNiQCRKOl09Rv1dPLr8hHnVjHqdolKwDrdNyk+Hmrthi4lIGPPz3r39dLx0hsF5s40sZ3Us4Q==", + "node_modules/node-addon-api": { + "version": "7.1.1", + "resolved": "https://registry.npmjs.org/node-addon-api/-/node-addon-api-7.1.1.tgz", + "integrity": "sha512-5m3bsyrjFWE1xf7nz7YXdN4udnVtXK6/Yfgn5qnahL6bCkf2yKt4k3nuTKAtT4r3IG8JNR2ncsIMdZuAzJjHQQ==", "dev": true, "license": "MIT", - "optional": true, - "dependencies": { - "iconv-lite": "^0.6.3", - "sax": "^1.2.4" - }, - "bin": { - "needle": "bin/needle" - }, - "engines": { - "node": ">= 4.4.x" - } + "optional": true }, "node_modules/node-releases": { "version": "2.0.27", @@ -4319,16 +4514,6 @@ "node": ">=6" } }, - "node_modules/parse-node-version": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/parse-node-version/-/parse-node-version-1.0.1.tgz", - "integrity": "sha512-3YHlOa/JgH6Mnpr05jP9eDG254US9ek25LyIxZlDItp2iJtwyaXQb57lBYLdT3MowkUFYEV2XXNAYIPlESvJlA==", - "dev": true, - "license": "MIT", - "engines": { - "node": ">= 0.10" - } - }, "node_modules/path-exists": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz", @@ -4363,15 +4548,18 @@ "dev": true, "license": "ISC" }, - "node_modules/pify": { - "version": "4.0.1", - "resolved": "https://registry.npmjs.org/pify/-/pify-4.0.1.tgz", - "integrity": "sha512-uB80kBFb/tfd68bVleG9T5GGsGPjJrLAUpR5PZIrhBnIaRTQRjqdJSsIKkOP6OAIFbj7GOrcudc5pNjZ+geV2g==", + "node_modules/picomatch": { + "version": "4.0.4", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.4.tgz", + "integrity": "sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A==", "dev": true, "license": "MIT", "optional": true, "engines": { - "node": ">=6" + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/jonschlinkert" } }, "node_modules/possible-typed-array-names": { @@ -4434,14 +4622,6 @@ "react-is": "^16.13.1" } }, - "node_modules/prr": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/prr/-/prr-1.0.1.tgz", - "integrity": "sha512-yPw4Sng1gWghHQWj0B3ZggWUm4qVbPwPFcRG8KyxiU7J2OHFSoEHKS+EZ3fv5l1t9CyCiop6l/ZYeWbrgoQejw==", - "dev": true, - "license": "MIT", - "optional": true - }, "node_modules/punycode": { "version": "2.3.1", "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.3.1.tgz", @@ -4556,6 +4736,20 @@ "react-dom": ">=16.6.0" } }, + "node_modules/readdirp": { + "version": "4.1.2", + "resolved": "https://registry.npmjs.org/readdirp/-/readdirp-4.1.2.tgz", + "integrity": "sha512-GDhwkLfywWL2s6vEjyhri+eXmfH6j1L7JE27WhqLeYzoh/A3DBaYGEj2H/HFZCn/kMfim73FXxEJTw06WtxQwg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 14.18.0" + }, + "funding": { + "type": "individual", + "url": "https://paulmillr.com/funding/" + } + }, "node_modules/recharts": { "version": "2.15.4", "resolved": "https://registry.npmjs.org/recharts/-/recharts-2.15.4.tgz", @@ -4711,6 +4905,20 @@ "fsevents": "~2.3.2" } }, + "node_modules/rollup/node_modules/@rollup/rollup-win32-x64-msvc": { + "version": "4.55.1", + "resolved": 
"https://registry.npmjs.org/@rollup/rollup-win32-x64-msvc/-/rollup-win32-x64-msvc-4.55.1.tgz", + "integrity": "sha512-SPEpaL6DX4rmcXtnhdrQYgzQ5W2uW3SCJch88lB2zImhJRhIIK44fkUrgIV/Q8yUNfw5oyZ5vkeQsZLhCb06lw==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "win32" + ] + }, "node_modules/safe-array-concat": { "version": "1.1.3", "resolved": "https://registry.npmjs.org/safe-array-concat/-/safe-array-concat-1.1.3.tgz", @@ -4766,23 +4974,25 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/safer-buffer": { - "version": "2.1.2", - "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz", - "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==", + "node_modules/sass": { + "version": "1.98.0", + "resolved": "https://registry.npmjs.org/sass/-/sass-1.98.0.tgz", + "integrity": "sha512-+4N/u9dZ4PrgzGgPlKnaaRQx64RO0JBKs9sDhQ2pLgN6JQZ25uPQZKQYaBJU48Kd5BxgXoJ4e09Dq7nMcOUW3A==", "dev": true, "license": "MIT", - "optional": true - }, - "node_modules/sax": { - "version": "1.4.4", - "resolved": "https://registry.npmjs.org/sax/-/sax-1.4.4.tgz", - "integrity": "sha512-1n3r/tGXO6b6VXMdFT54SHzT9ytu9yr7TaELowdYpMqY/Ao7EnlQGmAQ1+RatX7Tkkdm6hONI2owqNx2aZj5Sw==", - "dev": true, - "license": "BlueOak-1.0.0", - "optional": true, + "dependencies": { + "chokidar": "^4.0.0", + "immutable": "^5.1.5", + "source-map-js": ">=0.6.2 <2.0.0" + }, + "bin": { + "sass": "sass.js" + }, "engines": { - "node": ">=11.0.0" + "node": ">=14.0.0" + }, + "optionalDependencies": { + "@parcel/watcher": "^2.4.1" } }, "node_modules/scheduler": { @@ -4980,17 +5190,6 @@ "node": ">=10.0.0" } }, - "node_modules/source-map": { - "version": "0.6.1", - "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", - "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", - "dev": true, - "license": "BSD-3-Clause", - "optional": true, - "engines": { - "node": ">=0.10.0" - } - }, "node_modules/source-map-js": { "version": "1.2.1", "resolved": "https://registry.npmjs.org/source-map-js/-/source-map-js-1.2.1.tgz", @@ -5158,13 +5357,6 @@ "integrity": "sha512-+FbBPE1o9QAYvviau/qC5SE3caw21q3xkvWKBtja5vgqOWIHHJ3ioaq1VPfn/Szqctz2bU/oYeKd9/z5BL+PVg==", "license": "MIT" }, - "node_modules/tslib": { - "version": "2.8.1", - "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz", - "integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==", - "dev": true, - "license": "0BSD" - }, "node_modules/type-check": { "version": "0.4.0", "resolved": "https://registry.npmjs.org/type-check/-/type-check-0.4.0.tgz", diff --git a/frontend/package.json b/frontend/package.json index 4bffede5..31f83ab4 100644 --- a/frontend/package.json +++ b/frontend/package.json @@ -9,6 +9,9 @@ "lint": "eslint .", "preview": "vite preview" }, + "optionalDependencies": { + "@rollup/rollup-win32-x64-msvc": "^4.60.0" + }, "dependencies": { "@xterm/addon-fit": "^0.10.0", "@xterm/addon-web-links": "^0.11.0", @@ -30,7 +33,7 @@ "eslint-plugin-react-hooks": "^5.1.0-rc.0", "eslint-plugin-react-refresh": "^0.4.9", "globals": "^15.9.0", - "less": "^4.5.1", + "sass": "^1.86.0", "vite": "^5.4.1" } } diff --git a/frontend/public/manifest.json b/frontend/public/manifest.json new file mode 100644 index 00000000..e886aba5 --- /dev/null +++ b/frontend/public/manifest.json @@ -0,0 +1,36 @@ +{ + "name": "ServerKit", + "short_name": "ServerKit", + 
"description": "Server control panel for managing web applications, databases, and infrastructure", + "start_url": "/", + "display": "standalone", + "background_color": "#0f172a", + "theme_color": "#4f46e5", + "orientation": "any", + "icons": [ + { + "src": "/favicon.svg", + "sizes": "any", + "type": "image/svg+xml", + "purpose": "any maskable" + } + ], + "categories": ["utilities", "productivity"], + "shortcuts": [ + { + "name": "Dashboard", + "url": "/", + "description": "View server dashboard" + }, + { + "name": "Services", + "url": "/services", + "description": "Manage services" + }, + { + "name": "Terminal", + "url": "/terminal", + "description": "Open terminal" + } + ] +} diff --git a/frontend/public/sw.js b/frontend/public/sw.js new file mode 100644 index 00000000..5295ad34 --- /dev/null +++ b/frontend/public/sw.js @@ -0,0 +1,76 @@ +// ServerKit Service Worker for PWA / Offline support + +const CACHE_NAME = 'serverkit-v1'; +const OFFLINE_URL = '/'; + +// Assets to cache on install +const PRECACHE_ASSETS = [ + '/', + '/favicon.svg', + '/manifest.json', +]; + +self.addEventListener('install', (event) => { + event.waitUntil( + caches.open(CACHE_NAME).then((cache) => { + return cache.addAll(PRECACHE_ASSETS); + }) + ); + self.skipWaiting(); +}); + +self.addEventListener('activate', (event) => { + event.waitUntil( + caches.keys().then((keys) => { + return Promise.all( + keys.filter((key) => key !== CACHE_NAME).map((key) => caches.delete(key)) + ); + }) + ); + self.clients.claim(); +}); + +self.addEventListener('fetch', (event) => { + // Skip API requests — always go to network + if (event.request.url.includes('/api/')) { + return; + } + + event.respondWith( + fetch(event.request).catch(() => { + return caches.match(event.request).then((cached) => { + return cached || caches.match(OFFLINE_URL); + }); + }) + ); +}); + +// Push notification handler +self.addEventListener('push', (event) => { + const data = event.data ? 
event.data.json() : {}; + const title = data.title || 'ServerKit'; + const options = { + body: data.body || 'New notification', + icon: '/favicon.svg', + badge: '/favicon.svg', + data: data.url || '/', + actions: data.actions || [], + }; + + event.waitUntil(self.registration.showNotification(title, options)); +}); + +self.addEventListener('notificationclick', (event) => { + event.notification.close(); + const url = event.notification.data || '/'; + event.waitUntil( + self.clients.matchAll({ type: 'window' }).then((clients) => { + for (const client of clients) { + if (client.url === url && 'focus' in client) { + return client.focus(); + } + } + return self.clients.openWindow(url); + }) + ); +}); diff --git a/frontend/src/App.jsx b/frontend/src/App.jsx index 511ab200..45fba18e 100644 --- a/frontend/src/App.jsx +++ b/frontend/src/App.jsx @@ -1,250 +1,277 @@ -import React, { useEffect } from 'react'; -import { BrowserRouter as Router, Routes, Route, Navigate, useLocation } from 'react-router-dom'; -import { AuthProvider, useAuth } from './contexts/AuthContext'; -import { ToastProvider } from './contexts/ToastContext'; -import { ThemeProvider } from './contexts/ThemeContext'; -import { ResourceTierProvider } from './contexts/ResourceTierContext'; -import { ToastContainer } from './components/Toast'; -import DashboardLayout from './layouts/DashboardLayout'; -import Dashboard from './pages/Dashboard'; -import Login from './pages/Login'; -import Register from './pages/Register'; -import Setup from './pages/Setup'; -import Applications from './pages/Applications'; -import ApplicationDetail from './pages/ApplicationDetail'; -import Docker from './pages/Docker'; -import Databases from './pages/Databases'; -import Domains from './pages/Domains'; -import Monitoring from './pages/Monitoring'; -import Backups from './pages/Backups'; -import Terminal from './pages/Terminal'; -import Settings from './pages/Settings'; -import FileManager from './pages/FileManager'; -import FTPServer from './pages/FTPServer'; -// Firewall is now part of Security page -import Git from './pages/Git'; -import CronJobs from './pages/CronJobs'; -import Security from './pages/Security'; -import Services from './pages/Services'; -import ServiceDetail from './pages/ServiceDetail'; -import Templates from './pages/Templates'; -import WorkflowBuilder from './pages/WorkflowBuilder'; -import Servers from './pages/Servers'; -import ServerDetail from './pages/ServerDetail'; -import Downloads from './pages/Downloads'; -import WordPress from './pages/WordPress'; -import WordPressDetail from './pages/WordPressDetail'; -import WordPressProjects from './pages/WordPressProjects'; -import WordPressProject from './pages/WordPressProject'; -import SSLCertificates from './pages/SSLCertificates'; -import Email from './pages/Email'; -import SSOCallback from './pages/SSOCallback'; -import DatabaseMigration from './pages/DatabaseMigration'; - -// Page title mapping -const PAGE_TITLES = { - '/': 'Dashboard', - '/login': 'Login', - '/register': 'Register', - '/setup': 'Setup', - '/services': 'Services', - '/apps': 'Applications', - '/wordpress': 'WordPress Sites', - '/wordpress/projects': 'WordPress Projects', - '/templates': 'Templates', - '/workflow': 'Workflow Builder', - '/domains': 'Domains', - '/databases': 'Databases', - '/ssl': 'SSL Certificates', - '/docker': 'Docker', - '/servers': 'Servers', - '/downloads': 'Downloads', - '/git': 'Git Repositories', - '/files': 'File Manager', - '/ftp': 'FTP Server', - '/monitoring': 'Monitoring', - '/backups': 
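Note on wiring: this diff adds `sw.js`, but none of the hunks shown register it, so the snippet below is only a sketch of the likely hookup. The entry-module location (`main.jsx`) and the feature check are assumptions, not part of the diff; the removed `PAGE_TITLES` listing resumes right after this block.

```
// Hypothetical registration for the service worker above (not in this diff).
// Assumed location: the frontend entry module, e.g. frontend/src/main.jsx.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/sw.js')
      .catch((err) => console.error('Service worker registration failed:', err));
  });
}
```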
'Backups', - '/cron': 'Cron Jobs', - '/security': 'Security', - '/email': 'Email Server', - '/terminal': 'Terminal', - '/settings': 'Settings', - '/migrate': 'Database Migration', -}; - -function PageTitleUpdater() { - const location = useLocation(); - - useEffect(() => { - const path = location.pathname; - let title = PAGE_TITLES[path]; - - // Handle dynamic routes and tab sub-routes - if (!title) { - // Check if it's a base page with a tab suffix (e.g., /security/firewall) - const basePath = '/' + path.split('/')[1]; - if (PAGE_TITLES[basePath]) { - title = PAGE_TITLES[basePath]; - } else if (path.startsWith('/services/')) title = 'Service Details'; - else if (path.startsWith('/apps/')) title = 'Application Details'; - else if (path.startsWith('/servers/')) title = 'Server Details'; - else if (path.startsWith('/wordpress/projects/')) title = 'WordPress Pipeline'; - else if (path.startsWith('/wordpress/')) title = 'WordPress Site'; - else title = 'ServerKit'; - } - - document.title = title ? `${title} | ServerKit` : 'ServerKit'; - }, [location]); - - return null; -} - -function PrivateRoute({ children }) { - const { isAuthenticated, loading, needsSetup, needsMigration } = useAuth(); - - if (loading) { - return
Loading...
; - } - - // Priority: migrations > setup > auth - if (needsMigration) { - return ; - } - - if (needsSetup) { - return ; - } - - return isAuthenticated ? children : ; -} - -function PublicRoute({ children }) { - const { isAuthenticated, loading, needsSetup, needsMigration } = useAuth(); - - if (loading) { - return
Loading...
; - } - - // Priority: migrations > setup > auth - if (needsMigration) { - return ; - } - - if (needsSetup) { - return ; - } - - return isAuthenticated ? : children; -} - -function SetupRoute({ children }) { - const { loading, needsSetup, isAuthenticated } = useAuth(); - - if (loading) { - return
Loading...
; - } - - // If setup is not needed, redirect appropriately - if (!needsSetup) { - return isAuthenticated ? : ; - } - - return children; -} - -function AppRoutes() { - return ( - - } /> - - - - } /> - - - - } /> - - - - } /> - - - - } /> - - - - }> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - } /> - - - ); -} - -function App() { - return ( - - - - - - - - - - - - - - ); -} - -export default App; +import React, { useEffect } from 'react'; +import { BrowserRouter as Router, Routes, Route, Navigate, useLocation } from 'react-router-dom'; +import { AuthProvider, useAuth } from './contexts/AuthContext'; +import { ToastProvider } from './contexts/ToastContext'; +import { ThemeProvider } from './contexts/ThemeContext'; +import { ResourceTierProvider } from './contexts/ResourceTierContext'; +import { ToastContainer } from './components/Toast'; +import DashboardLayout from './layouts/DashboardLayout'; +import Dashboard from './pages/Dashboard'; +import Login from './pages/Login'; +import Register from './pages/Register'; +import Setup from './pages/Setup'; +import Applications from './pages/Applications'; +import ApplicationDetail from './pages/ApplicationDetail'; +import Docker from './pages/Docker'; +import Databases from './pages/Databases'; +import Domains from './pages/Domains'; +import Monitoring from './pages/Monitoring'; +import Backups from './pages/Backups'; +import Terminal from './pages/Terminal'; +import Settings from './pages/Settings'; +import FileManager from './pages/FileManager'; +import FTPServer from './pages/FTPServer'; +// Firewall is now part of Security page +import Git from './pages/Git'; +import CronJobs from './pages/CronJobs'; +import Security from './pages/Security'; +import Services from './pages/Services'; +import ServiceDetail from './pages/ServiceDetail'; +import Templates from './pages/Templates'; +import WorkflowBuilder from './pages/WorkflowBuilder'; +import Servers from './pages/Servers'; +import ServerDetail from './pages/ServerDetail'; +import AgentFleet from './pages/AgentFleet'; +import FleetMonitor from './pages/FleetMonitor'; +import Downloads from './pages/Downloads'; +import WordPress from './pages/WordPress'; +import WordPressDetail from './pages/WordPressDetail'; +import WordPressProjects from './pages/WordPressProjects'; +import WordPressProject from './pages/WordPressProject'; +import SSLCertificates from './pages/SSLCertificates'; +import Email from './pages/Email'; +import SSOCallback from './pages/SSOCallback'; +import DatabaseMigration from './pages/DatabaseMigration'; +import AgentPlugins from './pages/AgentPlugins'; +import ServerTemplates from './pages/ServerTemplates'; +import Workspaces from './pages/Workspaces'; +import DNSZones from './pages/DNSZones'; +import StatusPages from './pages/StatusPages'; +import CloudProvision from './pages/CloudProvision'; +import Marketplace from './pages/Marketplace'; + +// Page title mapping +const PAGE_TITLES = { + '/': 'Dashboard', + '/login': 'Login', + '/register': 'Register', + '/setup': 'Setup', + '/services': 'Services', + '/apps': 'Applications', + '/wordpress': 'WordPress Sites', + '/wordpress/projects': 'WordPress Projects', + '/templates': 'Templates', + '/workflow': 'Workflow Builder', + '/domains': 'Domains', + '/databases': 'Databases', + '/ssl': 
'SSL Certificates', + '/docker': 'Docker', + '/servers': 'Servers', + '/downloads': 'Downloads', + '/git': 'Git Repositories', + '/files': 'File Manager', + '/ftp': 'FTP Server', + '/monitoring': 'Monitoring', + '/backups': 'Backups', + '/cron': 'Cron Jobs', + '/security': 'Security', + '/email': 'Email Server', + '/terminal': 'Terminal', + '/settings': 'Settings', + '/migrate': 'Database Migration', + '/fleet': 'Agent Fleet', + '/fleet-monitor': 'Fleet Monitor', + '/agent-plugins': 'Agent Plugins', + '/server-templates': 'Server Templates', + '/workspaces': 'Workspaces', + '/dns': 'DNS Zones', + '/status-pages': 'Status Pages', + '/cloud': 'Cloud Provisioning', + '/marketplace': 'Marketplace', +}; + +function PageTitleUpdater() { + const location = useLocation(); + + useEffect(() => { + const path = location.pathname; + let title = PAGE_TITLES[path]; + + // Handle dynamic routes and tab sub-routes + if (!title) { + // Check if it's a base page with a tab suffix (e.g., /security/firewall) + const basePath = '/' + path.split('/')[1]; + if (PAGE_TITLES[basePath]) { + title = PAGE_TITLES[basePath]; + } else if (path.startsWith('/services/')) title = 'Service Details'; + else if (path.startsWith('/apps/')) title = 'Application Details'; + else if (path.startsWith('/servers/')) title = 'Server Details'; + else if (path.startsWith('/wordpress/projects/')) title = 'WordPress Pipeline'; + else if (path.startsWith('/wordpress/')) title = 'WordPress Site'; + else title = 'ServerKit'; + } + + document.title = title ? `${title} | ServerKit` : 'ServerKit'; + }, [location]); + + return null; +} + +function PrivateRoute({ children }) { + const { isAuthenticated, loading, needsSetup, needsMigration } = useAuth(); + + if (loading) { + return
Loading...
; + } + + // Priority: migrations > setup > auth + if (needsMigration) { + return ; + } + + if (needsSetup) { + return ; + } + + return isAuthenticated ? children : ; +} + +function PublicRoute({ children }) { + const { isAuthenticated, loading, needsSetup, needsMigration } = useAuth(); + + if (loading) { + return
Loading...
; + } + + // Priority: migrations > setup > auth + if (needsMigration) { + return ; + } + + if (needsSetup) { + return ; + } + + return isAuthenticated ? : children; +} + +function SetupRoute({ children }) { + const { loading, needsSetup, isAuthenticated } = useAuth(); + + if (loading) { + return
Loading...
; + } + + // If setup is not needed, redirect appropriately + if (!needsSetup) { + return isAuthenticated ? : ; + } + + return children; +} + +function AppRoutes() { + return ( + + } /> + + + + } /> + + + + } /> + + + + } /> + + + + } /> + + + + }> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + } /> + + + ); +} + +function App() { + return ( + + + + + + + + + + + + + + ); +} + +export default App; diff --git a/frontend/src/components/EmptyState.jsx b/frontend/src/components/EmptyState.jsx new file mode 100644 index 00000000..8edeadf8 --- /dev/null +++ b/frontend/src/components/EmptyState.jsx @@ -0,0 +1,24 @@ +import React from 'react'; +import { Inbox } from 'lucide-react'; + +export default function EmptyState({ + icon: Icon = Inbox, + title = 'No items found', + description = '', + action = null +}) { + return ( +
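Aside: to make the title-resolution order in `PageTitleUpdater` concrete, here is a standalone sketch of the same lookup with a trimmed-down map; the toy `TITLES` object and sample paths are illustrative only, and the `EmptyState` component resumes after this block.

```
// Sketch of the PageTitleUpdater lookup order: exact route first,
// then the '/<first-segment>' base path, then detail-route prefixes, then fallback.
const TITLES = { '/': 'Dashboard', '/security': 'Security' };

function resolveTitle(path) {
  if (TITLES[path]) return TITLES[path];
  const basePath = '/' + path.split('/')[1];
  if (TITLES[basePath]) return TITLES[basePath];
  if (path.startsWith('/services/')) return 'Service Details';
  return 'ServerKit';
}

console.log(resolveTitle('/security/firewall')); // 'Security' (base-path match)
console.log(resolveTitle('/services/42'));       // 'Service Details' (prefix fallback, only because '/services' is absent from this toy map)
```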
+
+ +
+

{title}

+ {description && ( +

{description}

+ )} + {action && ( +
{action}
+ )} +
+ ); +} diff --git a/frontend/src/components/EnvironmentVariables.jsx b/frontend/src/components/EnvironmentVariables.jsx index 2c40d6a3..b90cd566 100644 --- a/frontend/src/components/EnvironmentVariables.jsx +++ b/frontend/src/components/EnvironmentVariables.jsx @@ -1,6 +1,7 @@ import React, { useState, useEffect, useRef } from 'react'; import api from '../services/api'; import { useToast } from '../contexts/ToastContext'; +import Modal from './Modal'; const EnvironmentVariables = ({ appId }) => { const toast = useToast(); @@ -429,14 +430,7 @@ const EnvironmentVariables = ({ appId }) => { )} {/* Import Modal */} - {showImportModal && ( -
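Aside: `EmptyState` is new in this diff and its markup was mangled above, but the props API survives; a hypothetical call site follows, with the icon and all copy invented for illustration.

```
import React from 'react';
import { Database } from 'lucide-react';
import EmptyState from './EmptyState';

// Hypothetical consumer of the new EmptyState component; copy is invented.
export function DatabasesEmpty({ onCreate }) {
  return (
    <EmptyState
      icon={Database}
      title="No databases yet"
      description="Create a database to see it listed here."
      action={<button onClick={onCreate}>New database</button>}
    />
  );
}
```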
setShowImportModal(false)}> -
e.stopPropagation()}> -
-

Import Environment Variables

- -
-
+ setShowImportModal(false)} title="Import Environment Variables">

Paste your .env file content below or upload a file.

@@ -471,7 +465,6 @@ const EnvironmentVariables = ({ appId }) => { /> Overwrite existing variables with same keys -
-
-
- )} + {/* History Modal */} - {showHistoryModal && ( -
setShowHistoryModal(false)}> -
e.stopPropagation()}> -
-

Change History

- -
-
+ setShowHistoryModal(false)} title="Change History" size="lg"> {history.length === 0 ? (

No changes recorded yet.

) : ( @@ -519,15 +503,12 @@ const EnvironmentVariables = ({ appId }) => { )} -
-
-
- )} +
); }; diff --git a/frontend/src/components/LinkAppModal.jsx b/frontend/src/components/LinkAppModal.jsx index f6c82628..483cc235 100644 --- a/frontend/src/components/LinkAppModal.jsx +++ b/frontend/src/components/LinkAppModal.jsx @@ -1,6 +1,7 @@ import React, { useState, useEffect } from 'react'; import { X, Link2, GitBranch, AlertCircle, Check } from 'lucide-react'; import api from '../services/api'; +import Modal from './Modal'; const LinkAppModal = ({ app, onClose, onLinked }) => { const [apps, setApps] = useState([]); @@ -68,18 +69,7 @@ const LinkAppModal = ({ app, onClose, onLinked }) => { }; return ( -
-
e.stopPropagation()}> -
-

- - Link Application -

- -
- + {error && (
@@ -221,8 +211,7 @@ const LinkAppModal = ({ app, onClose, onLinked }) => {
)} -
-
+
 );
};
diff --git a/frontend/src/components/Modal.jsx b/frontend/src/components/Modal.jsx
new file mode 100644
index 00000000..3fc20ca3
--- /dev/null
+++ b/frontend/src/components/Modal.jsx
@@ -0,0 +1,51 @@
+import { useEffect, useRef } from 'react';
+
+export default function Modal({ open, onClose, title, children, footer, className = '', size = '' }) {
+  const modalRef = useRef(null);
+
+  // Close on Escape key
+  useEffect(() => {
+    if (!open) return;
+    const handleKeyDown = (e) => {
+      if (e.key === 'Escape') onClose();
+    };
+    document.addEventListener('keydown', handleKeyDown);
+    return () => document.removeEventListener('keydown', handleKeyDown);
+  }, [open, onClose]);
+
+  // Move initial focus to the modal when it opens (a single focus move, not a full focus trap)
+  useEffect(() => {
+    if (open && modalRef.current) {
+      modalRef.current.focus();
+    }
+  }, [open]);
+
+  if (!open) return null;
+
+  return (
+
+
e.stopPropagation()} + ref={modalRef} + tabIndex={-1} + role="dialog" + aria-modal="true" + aria-label={title} + > +
+

{title}

+ +
+
+ {children} +
+ {footer && ( +
+ {footer} +
+ )} +
+
+ ); +} diff --git a/frontend/src/components/QueryRunner.jsx b/frontend/src/components/QueryRunner.jsx index d3b73f7d..4ead687c 100644 --- a/frontend/src/components/QueryRunner.jsx +++ b/frontend/src/components/QueryRunner.jsx @@ -1,6 +1,7 @@ import React, { useState, useEffect, useRef, useCallback } from 'react'; import api from '../services/api'; import { useToast } from '../contexts/ToastContext'; +import Modal from './Modal'; const HISTORY_KEY = 'serverkit_query_history'; const MAX_HISTORY = 50; @@ -247,35 +248,21 @@ const QueryRunner = ({ database, dbType, onClose }) => { const dbName = dbType === 'sqlite' ? database.name : database.name; return ( -
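With the shared `Modal` defined above, the converted call sites in this diff reduce to roughly the shape below; the dialog copy, `size` value, and footer button are illustrative, while the prop names match the component signature.

```
import React, { useState } from 'react';
import Modal from './Modal';

// Illustrative Modal consumer; open/onClose/title/size/footer mirror the props above.
export function ExampleDialog() {
  const [open, setOpen] = useState(false);
  return (
    <>
      <button onClick={() => setOpen(true)}>Open dialog</button>
      <Modal
        open={open}
        onClose={() => setOpen(false)}
        title="Example"
        size="lg"
        footer={<button onClick={() => setOpen(false)}>Close</button>}
      >
        Body content; pressing Escape triggers onClose via the keydown listener.
      </Modal>
    </>
  );
}
```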
-
e.stopPropagation()}> -
-
- - - - - Query Runner - {dbName} - - {dbType === 'mysql' ? 'MySQL' : dbType === 'postgresql' ? 'PostgreSQL' : dbType === 'docker' ? 'Docker MySQL' : 'SQLite'} - -
-
- {isAdmin && ( - - )} - - -
+ +
+ {isAdmin && ( + + )} +
@@ -465,8 +452,7 @@ const QueryRunner = ({ database, dbType, onClose }) => {
)} -
-
+ ); }; diff --git a/frontend/src/components/Sidebar.jsx b/frontend/src/components/Sidebar.jsx index 8fb567af..ab6825ad 100644 --- a/frontend/src/components/Sidebar.jsx +++ b/frontend/src/components/Sidebar.jsx @@ -1,13 +1,14 @@ -import React, { useState, useEffect, useRef } from 'react'; -import { NavLink, useNavigate } from 'react-router-dom'; +import React, { useState, useEffect, useRef, useMemo } from 'react'; +import { NavLink, useNavigate, useLocation } from 'react-router-dom'; import { useAuth } from '../contexts/AuthContext'; import { useTheme } from '../contexts/ThemeContext'; -import { Star, Settings, LogOut, Sun, Moon, Monitor, ChevronRight, ChevronUp, Layers } from 'lucide-react'; +import { Star, Settings, LogOut, Sun, Moon, Monitor, ChevronRight, ChevronDown, ChevronUp, Layers, Palette, PanelLeft, Check } from 'lucide-react'; import { api } from '../services/api'; import ServerKitLogo from './ServerKitLogo'; +import { SIDEBAR_CATEGORIES, CATEGORY_LABELS, SIDEBAR_PRESETS, getVisibleItems } from './sidebarItems'; const Sidebar = () => { - const { user, logout } = useAuth(); + const { user, logout, updateUser } = useAuth(); const { theme, resolvedTheme, setTheme, whiteLabel } = useTheme(); const navigate = useNavigate(); const [starAnimating, setStarAnimating] = useState(false); @@ -47,8 +48,6 @@ const Sidebar = () => { }; const scheduleNext = () => { - // Each time it plays, the next interval gets longer - // 1st: 8-11 min, 2nd: 16-22 min, 3rd: 24-33 min, etc. const multiplier = playCount + 1; const minMinutes = 8 * multiplier; const maxMinutes = 11 * multiplier; @@ -60,7 +59,6 @@ const Sidebar = () => { }, delay); }; - // First play after 1 minute const initialDelay = setTimeout(() => { triggerAnimation(); scheduleNext(); @@ -72,6 +70,105 @@ const Sidebar = () => { }; }, [whiteLabel.enabled]); + const conditions = { wpInstalled }; + const currentPreset = user?.sidebar_config?.preset || 'full'; + const [manualExpanded, setManualExpanded] = useState({}); + const [autoExpanded, setAutoExpanded] = useState(null); + const location = useLocation(); + + const toggleExpand = (itemId) => { + const currentlyExpanded = manualExpanded[itemId] ?? 
(autoExpanded === itemId);
+    setManualExpanded(prev => ({ ...prev, [itemId]: !currentlyExpanded }));
+  };
+
+  const handlePresetSwitch = (presetKey) => {
+    if (presetKey === currentPreset) return;
+    const config = { preset: presetKey, hiddenItems: [] };
+    // Update locally first (instant), persist to backend in background
+    updateUser({ sidebar_config: config });
+    api.updateCurrentUser({ sidebar_config: config }).catch(() => {});
+  };
+
+  const visibleItems = useMemo(
+    () => getVisibleItems(user?.sidebar_config),
+    [user?.sidebar_config]
+  );
+
+  // Group visible items by category
+  const groupedItems = useMemo(() => {
+    const groups = {};
+    for (const cat of SIDEBAR_CATEGORIES) {
+      const items = visibleItems.filter(item => item.category === cat);
+      if (items.length > 0) {
+        groups[cat] = items;
+      }
+    }
+    return groups;
+  }, [visibleItems]);
+
+  // Auto-expand the active parent (or parent of active sub-item), auto-close others
+  useEffect(() => {
+    const path = location.pathname;
+    let activeParent = null;
+    for (const item of visibleItems) {
+      if (!item.subItems?.length) continue;
+      // Expand if on the parent route itself or any sub-item route
+      if (path === item.route || path.startsWith(item.route + '/') ||
+          item.subItems.some(sub => path === sub.route || path.startsWith(sub.route + '/'))) {
+        activeParent = item.id;
+        break;
+      }
+    }
+    setAutoExpanded(activeParent);
+    setManualExpanded({});
+  }, [location.pathname, visibleItems]);
+
+  const renderNavItem = (item) => {
+    const hasChildren = item.subItems && item.subItems.length > 0;
+    // Show expanded if manually toggled OR auto-expanded by active route
+    const isExpanded = manualExpanded[item.id] ?? (autoExpanded === item.id);
+    const visibleSubs = hasChildren
+      ? item.subItems.filter(sub => !sub.requiresCondition || conditions[sub.requiresCondition])
+      : [];
+
+    return (
+
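Aside: `./sidebarItems` is imported above but its hunks are not part of this diff, so the sketch below is an assumed minimal shape for the names `Sidebar` consumes (`SIDEBAR_CATEGORIES`, `CATEGORY_LABELS`, `SIDEBAR_PRESETS`, `getVisibleItems`, plus the item fields `id`, `route`, `category`, `label`, `subItems`, `requiresCondition`). Every concrete value here is invented.

```
// Assumed shape of ./sidebarItems, reconstructed from how Sidebar.jsx uses it.
export const SIDEBAR_CATEGORIES = ['core', 'hosting', 'fleet'];
export const CATEGORY_LABELS = { core: 'Core', hosting: 'Hosting', fleet: 'Fleet' };

const ALL_ITEMS = [
  { id: 'dashboard', route: '/', category: 'core', label: 'Dashboard' },
  { id: 'wordpress', route: '/wordpress', category: 'hosting', label: 'WordPress',
    subItems: [{ id: 'wp-projects', route: '/wordpress/projects', label: 'Projects', requiresCondition: 'wpInstalled' }] },
];

export const SIDEBAR_PRESETS = { full: { hiddenItems: [] }, minimal: { hiddenItems: ['wordpress'] } };

// Filter items by the user's preset and per-user hidden list
// (field names assumed from the sidebar_config usage in Sidebar.jsx).
export function getVisibleItems(sidebarConfig = {}) {
  const preset = SIDEBAR_PRESETS[sidebarConfig.preset] || SIDEBAR_PRESETS.full;
  const hidden = new Set([...(sidebarConfig.hiddenItems || []), ...preset.hiddenItems]);
  return ALL_ITEMS.filter(item => !hidden.has(item.id));
}
```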
+ `nav-item ${isActive ? 'active' : ''}`} + end={item.end || hasChildren} + > + + {item.label} + + {visibleSubs.length > 0 && ( + + )} +
+ {isExpanded && visibleSubs.map(sub => ( + `nav-item nav-sub-item ${isActive ? 'active' : ''}`} + > + + {sub.label} + + ))} +
+ ); + }; + return (