A self-hosted web app for managing Linux package updates across multiple servers. Connect via SSH, check for updates, and apply them from a single dashboard in your browser.
- Multi-distribution support: APT (Debian/Ubuntu), DNF (Fedora/RHEL 8+), YUM (CentOS/older RHEL), Pacman (Arch/Manjaro), Flatpak, and Snap
- Auto-detection: package managers and system info are detected automatically on first connection; you can disable individual managers per system
- Granular updates: upgrade everything at once or pick individual packages per system
- Background scheduling: periodic checks keep your dashboard up to date (configurable cache duration)
- Flexible notifications: set up multiple channels per event type (Email/SMTP, ntfy.sh), scope them to specific systems, and pick which events trigger each channel
- Encrypted credentials: SSH passwords and private keys are encrypted at rest with AES-256-GCM
- Four auth methods: password, Passkeys (WebAuthn), SSO (OpenID Connect), and API tokens for external integrations
- SSH-safe upgrades: upgrade commands run via nohup on the remote host, so they survive SSH disconnects and keep running even if the dashboard loses connection
- Full upgrade: run `apt full-upgrade` or `dnf distro-sync` from the dashboard for dist-level upgrades
- Remote reboot: trigger reboots from the UI with a dashboard-wide reboot-needed indicator
- System duplication: clone an existing system entry (including encrypted credentials) to quickly add similar servers
- Exclude from Upgrade All: make individual systems start unchecked in the Upgrade All Systems dialog
- Notification digests: schedule notification delivery on a cron expression for batched digest summaries instead of immediate alerts
- Dark mode: dark/light theme with OS preference detection
- Update history: logs every check and upgrade operation per system
- Real-time status: see which systems are online, up to date, or need attention at a glance
- Version info: build version, commit hash, and branch displayed in the sidebar
- Docker ready: multi-stage Dockerfile with health check and a persistent volume for production
Overview of all systems with summary stats and color-coded update status at a glance.
Manage all connected servers with status, update counts, and quick actions.
Add a new server via SSH with password or key-based authentication.
Detailed view of a single system showing connection info, OS details, resource usage, available packages, and upgrade history.
Expandable history entries with the executed command and its full output.
Configure notification channels (Email/SMTP, ntfy.sh) with per-event and per-system filtering.
Configure update schedules, SSH timeouts, OIDC single sign-on, and API tokens.
Caution
This application is designed for use on trusted local networks only. It is not intended to be exposed directly to the internet. If you need remote access, place it behind a reverse proxy with proper TLS termination, authentication, and network-level access controls (e.g. VPN, firewall rules).
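As one possible setup, a minimal nginx reverse-proxy configuration along those lines might look like the sketch below. Hostnames, certificate paths, and allowed network ranges are hypothetical; adapt them to your environment.

```nginx
# Hypothetical example; adjust names, paths, and allowed ranges.
server {
    listen 443 ssl;
    server_name dashboard.example.com;

    ssl_certificate     /etc/ssl/certs/dashboard.crt;
    ssl_certificate_key /etc/ssl/private/dashboard.key;

    # Network-level restriction: only the local subnet and VPN range
    allow 192.168.1.0/24;
    allow 10.8.0.0/24;
    deny  all;

    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

When running behind such a proxy, also set `LUDASH_TRUST_PROXY=true` and point `LUDASH_BASE_URL` at your public URL.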
- Bun 1.x installed
- SSH access to at least one Linux server
```bash
# Clone the repository
git clone https://github.com/your-username/linux-update-dashboard.git
cd linux-update-dashboard

# Install dependencies
bun install

# Generate an encryption key
export LUDASH_ENCRYPTION_KEY=$(bun -e "console.log(require('crypto').randomBytes(32).toString('base64'))")

# Start development servers
bun run dev
```

The frontend dev server runs on http://localhost:5173 (proxies API calls to the backend on port 3001).
On first visit, you'll be guided through creating an admin account.
```bash
bun run build
NODE_ENV=production bun run start
```

The production server serves both the API and the built frontend on port 3001.
```bash
# Generate your encryption key (required)
export LUDASH_ENCRYPTION_KEY=$(openssl rand -base64 32)

# Pull and run
docker run -d \
  -p 3001:3001 \
  -e LUDASH_ENCRYPTION_KEY=$LUDASH_ENCRYPTION_KEY \
  -v ludash_data:/data \
  ghcr.io/theduffman85/linux-update-dashboard:latest
```

Optional Docker Secrets variant:
```bash
mkdir -p ./secrets
openssl rand -base64 32 > ./secrets/ludash_encryption_key.txt

docker run -d \
  -p 3001:3001 \
  -e LUDASH_ENCRYPTION_KEY_FILE=/run/secrets/ludash_encryption_key \
  -v "$(pwd)/secrets/ludash_encryption_key.txt:/run/secrets/ludash_encryption_key:ro" \
  -v ludash_data:/data \
  ghcr.io/theduffman85/linux-update-dashboard:latest
```

```yaml
services:
  dashboard:
    image: ghcr.io/theduffman85/linux-update-dashboard:latest
    container_name: linux-update-dashboard
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - dashboard_data:/data
    environment:
      - LUDASH_ENCRYPTION_KEY=${LUDASH_ENCRYPTION_KEY}
      # Optional: use Docker secrets instead of direct env vars
      # - LUDASH_ENCRYPTION_KEY_FILE=/run/secrets/ludash_encryption_key
      # - LUDASH_SECRET_KEY_FILE=/run/secrets/ludash_secret_key
      - LUDASH_DB_PATH=/data/dashboard.db
      - NODE_ENV=production
      # Optional: set your public URL for stricter origin validation
      # - LUDASH_BASE_URL=https://dashboard.example.com
      # - LUDASH_TRUST_PROXY=true

volumes:
  dashboard_data:
```

The dashboard will be available at http://localhost:3001. Data is persisted in a Docker volume.
If you prefer Docker secrets with Compose, add a `secrets:` block and set `LUDASH_ENCRYPTION_KEY_FILE` instead of `LUDASH_ENCRYPTION_KEY`.
Example:
```yaml
services:
  dashboard:
    image: ghcr.io/theduffman85/linux-update-dashboard:latest
    container_name: linux-update-dashboard
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - dashboard_data:/data
    environment:
      - LUDASH_ENCRYPTION_KEY_FILE=/run/secrets/ludash_encryption_key
      - LUDASH_DB_PATH=/data/dashboard.db
      - NODE_ENV=production
    secrets:
      - ludash_encryption_key

secrets:
  ludash_encryption_key:
    file: ./secrets/ludash_encryption_key.txt

volumes:
  dashboard_data:
```

Create the secret file before starting:

```bash
mkdir -p ./secrets
openssl rand -base64 32 > ./secrets/ludash_encryption_key.txt

docker compose up -d
```

```bash
cd docker

# Generate your encryption key (required)
export LUDASH_ENCRYPTION_KEY=$(openssl rand -base64 32)

# Start the container
docker compose up -d
```

The Docker image includes a built-in HEALTHCHECK that verifies the web server is responding. Docker will automatically mark the container as healthy or unhealthy.
Endpoint: `GET /api/health` (localhost: no auth, external: requires authentication)

```bash
curl http://localhost:3001/api/health
# {"status":"ok"}
```

The health check runs every 30 seconds with a 10-second start period to allow for initialization. You can check the container's health status with:

```bash
docker inspect --format='{{.State.Health.Status}}' linux-update-dashboard
```

| Variable | Required | Default | Description |
|---|---|---|---|
| `LUDASH_ENCRYPTION_KEY` | Yes | - | AES-256 key for encrypting stored SSH credentials |
| `LUDASH_ENCRYPTION_KEY_FILE` | No | - | Optional alternative: read the `LUDASH_ENCRYPTION_KEY` value from a file (Docker secrets) |
| `LUDASH_DB_PATH` | No | `./data/dashboard.db` | SQLite database file path |
| `LUDASH_SECRET_KEY` | No | Auto-generated | JWT session signing secret (auto-persisted to `.secret_key`) |
| `LUDASH_SECRET_KEY_FILE` | No | Auto-generated | Read the `LUDASH_SECRET_KEY` value from a file (Docker secrets) |
| `LUDASH_PORT` | No | `3001` | HTTP server port |
| `LUDASH_HOST` | No | `0.0.0.0` | HTTP server bind address |
| `LUDASH_BASE_URL` | No | `http://localhost:3001` | Public URL for WebAuthn/OIDC. When set, the detected origin must match it; when unset, the origin is inferred from request headers (Host/proto plus Origin/Referer heuristics), which works behind reverse proxies without extra config |
| `LUDASH_TRUST_PROXY` | No | `false` | Trust `X-Forwarded-*` headers from your reverse proxy (needed for forwarded host/proto detection when `LUDASH_BASE_URL` is set) |
| `LUDASH_LOG_LEVEL` | No | `info` | Log level |
| `LUDASH_DEFAULT_CACHE_HOURS` | No | `12` | How long update results are cached before re-checking |
| `LUDASH_DEFAULT_SSH_TIMEOUT` | No | `30` | SSH connection timeout in seconds |
| `LUDASH_DEFAULT_CMD_TIMEOUT` | No | `120` | SSH command execution timeout in seconds |
| `LUDASH_MAX_CONCURRENT_CONNECTIONS` | No | `5` | Max simultaneous SSH connections |
| `NODE_EXTRA_CA_CERTS` | No | - | Path to a PEM CA bundle to trust additional/self-signed certificates for outbound TLS (OIDC, SMTP, ntfy, etc.) |
| `NODE_ENV` | No | - | Set to `production` for static file serving |
If you use `LUDASH_ENCRYPTION_KEY_FILE`, do not also set `LUDASH_ENCRYPTION_KEY`. If both a variable and its `_FILE` counterpart are set for the same setting, startup fails with a configuration error.
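A minimal sketch of this resolution order, with hypothetical helper names (this is not the project's actual code):

```typescript
import { readFileSync } from "node:fs";

// Illustrative sketch of VAR / VAR_FILE resolution: return the value from
// the environment, or read it from the file named by `${name}_FILE`.
// Setting both is treated as a configuration error.
function resolveSecret(
  name: string,
  env: Record<string, string | undefined>
): string | undefined {
  const direct = env[name];
  const filePath = env[`${name}_FILE`];
  if (direct !== undefined && filePath !== undefined) {
    throw new Error(`Set either ${name} or ${name}_FILE, not both`);
  }
  if (filePath !== undefined) {
    return readFileSync(filePath, "utf8").trim();
  }
  return direct;
}
```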
Four auth methods are supported and can be used at the same time:
Standard username/password login. Passwords are hashed with bcrypt (cost factor 12). Sessions use long-lived JWTs (30-day expiry) in an HTTP-only cookie, with silent daily rolling refresh. Password login can be disabled from the Settings page, but only when at least one passkey or SSO provider is configured (enforced server-side to prevent lockout). Users can change their password from the Settings page.
Register hardware keys or platform authenticators (Touch ID, Windows Hello) for passwordless login. Each passkey can be given a custom name (e.g. "YubiKey", "MacBook") during registration and renamed later from the Settings page. Works behind reverse proxies without extra configuration; set LUDASH_BASE_URL for stricter origin validation.
Hook up any OIDC-compatible identity provider (Authentik, Keycloak, Okta, Auth0, etc.) through the Settings page. Users get auto-provisioned on first login. Set the callback URL in your provider to:
```
{LUDASH_BASE_URL}/api/auth/oidc/callback
```
If your IdP (or other outbound HTTPS target) uses a private/self-signed CA, mount the CA cert into the container and set NODE_EXTRA_CA_CERTS:
```yaml
services:
  dashboard:
    image: ghcr.io/theduffman85/linux-update-dashboard:latest
    volumes:
      - ./certs/homelab-ca.crt:/etc/ssl/certs/homelab-ca.crt:ro
    environment:
      - NODE_EXTRA_CA_CERTS=/etc/ssl/certs/homelab-ca.crt
```

For non-Docker runs, set NODE_EXTRA_CA_CERTS to a local PEM file path before starting the app.
Bearer tokens for external API consumers (e.g. gethomepage widgets, scripts, monitoring). Create and manage tokens from the Settings page.
- Permission levels: read-only (GET/HEAD only) or read/write
- Configurable expiry: 30, 60, 90, 365 days, or never
- Secure storage: only the SHA-256 hash is stored; the plain token is shown once on creation
- Scoped access: tokens cannot access management endpoints (auth, settings, tokens, passkeys, notifications)
- Rate-limited: failed bearer attempts are rate-limited (20/min per IP), max 25 tokens per user
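The hashed-only storage described above can be sketched roughly like this (illustrative TypeScript using `node:crypto`; names are hypothetical and the project's real code may differ):

```typescript
import { createHash, randomBytes, timingSafeEqual } from "node:crypto";

// Generate a bearer token and store only its SHA-256 hash; the plain
// token is shown to the user exactly once at creation time.
function generateToken(): { token: string; hash: string } {
  const token = "ludash_" + randomBytes(24).toString("base64url");
  const hash = createHash("sha256").update(token).digest("hex");
  return { token, hash }; // persist `hash`, display `token` once
}

// Verify a presented token by hashing it and comparing the hashes in
// constant time (both hex digests have the same length).
function verifyToken(presented: string, storedHash: string): boolean {
  const presentedHash = createHash("sha256").update(presented).digest("hex");
  return timingSafeEqual(Buffer.from(presentedHash), Buffer.from(storedHash));
}
```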
Usage:

```bash
curl -H "Authorization: Bearer ludash_..." http://localhost:3001/api/dashboard/stats
```

| Package Manager | Distributions |
|---|---|
| APT | Debian, Ubuntu, Linux Mint |
| DNF | Fedora, RHEL 8+, AlmaLinux, Rocky |
| YUM | CentOS, older RHEL |
| Pacman | Arch Linux, Manjaro |
| Flatpak | Any (cross-distribution) |
| Snap | Any (cross-distribution) |
Package managers are auto-detected on each system over SSH when you test the connection or run the first check. Detected managers are enabled by default, and you can toggle them individually per system in the edit dialog. Security updates are identified where possible (e.g. APT security repos).
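One plausible way such detection could work is probing each candidate binary over SSH, sketched here with a hypothetical `runSsh` helper (this is not the project's actual code, which lives in `server/ssh`):

```typescript
// Map of manager name to the binary probed for over SSH.
const PROBES: Record<string, string> = {
  apt: "apt-get",
  dnf: "dnf",
  yum: "yum",
  pacman: "pacman",
  flatpak: "flatpak",
  snap: "snap",
};

// `runSsh` stands in for an SSH exec that resolves to the remote
// command's exit code. Every binary that resolves (exit 0) is treated
// as an available package manager.
async function detectManagers(
  runSsh: (cmd: string) => Promise<number>
): Promise<string[]> {
  const found: string[] = [];
  for (const [manager, binary] of Object.entries(PROBES)) {
    if ((await runSsh(`command -v ${binary} >/dev/null 2>&1`)) === 0) {
      found.push(manager);
    }
  }
  return found;
}
```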
```
├── .github/                 # CI/CD workflows and Dependabot
│   ├── dependabot.yml
│   └── workflows/
│       ├── dev-build.yml    # Dev branch Docker builds
│       ├── release.yml      # Production releases
│       └── trivy-scan.yml   # Container security scanning
├── client/                  # React SPA
│   ├── lib/                 # TanStack Query hooks and API client
│   ├── components/          # Shared UI components
│   ├── context/             # Auth and toast providers
│   ├── hooks/               # Custom hooks (theme)
│   ├── pages/               # Route pages
│   └── styles/              # Tailwind CSS
├── server/                  # Hono backend
│   ├── auth/                # Password, WebAuthn, OIDC, session handling
│   ├── db/                  # SQLite + Drizzle schema (8 tables)
│   ├── middleware/          # Auth and rate-limit middleware
│   ├── routes/              # API route handlers
│   ├── services/            # Business logic, caching, scheduling
│   └── ssh/                 # SSH connection manager + parsers
├── tests/server/            # Bun test suites
├── docker/                  # Dockerfile, compose, entrypoint
│   └── test-systems/        # Docker test containers
├── run.sh                   # Local dev/production runner
├── reset-dev-branch.sh      # Reset dev branch to main
├── drizzle.config.ts        # Drizzle Kit configuration
├── vite.config.ts           # Vite + Tailwind config
└── package.json
```
There's a helper script, `run.sh`, to manage services.

Development mode (hot reload, server on :3001, client on :5173):

```bash
./run.sh dev
```

Production mode (build and start on :3001):

```bash
./run.sh
```

Or use the npm scripts directly:
```bash
# Start both dev servers (backend :3001 + Vite :5173 with HMR)
bun run dev

# Or run them individually
bun run dev:server   # Backend only (with watch mode)
bun run dev:client   # Vite frontend only

# Run tests
bun test

# Type check
bun run check

# Database management
bun run db:generate  # Generate migrations from schema changes
bun run db:migrate   # Apply pending migrations
bun run db:studio    # Open Drizzle Studio GUI
```

The project includes Docker-based test systems that simulate real Linux servers with pending updates. This lets you develop and test the dashboard without needing actual remote machines.
Start the dashboard with test systems:

```bash
./run.sh test
```

This will:

- Stop any running dev/production services
- Build and start 10 Docker containers (including Alpine, fish-shell, and sudo-password APT fixtures)
- Build the frontend in production mode
- Run database migrations
- Start the production server on `:3001`

SSH credentials for all test systems:

- User: `testuser`
- Password: `testpass`
- Sudo password: `testpass` (required for `ludash-test-ubuntu-sudo` and `ludash-test-debian-fish-sudo`, optional for others)
- Passwordless `sudo` is pre-configured on all test systems except `ludash-test-ubuntu-sudo` and `ludash-test-debian-fish-sudo`
| Container | SSH Port | Package Manager | Login Shell | Base Image |
|---|---|---|---|---|
| `ludash-test-ubuntu` | 2001 | APT | bash | Ubuntu 24.04 |
| `ludash-test-fedora` | 2002 | DNF | bash | Fedora 41 |
| `ludash-test-centos7` | 2003 | YUM | bash | CentOS 7 |
| `ludash-test-archlinux` | 2004 | Pacman | bash | Arch Linux |
| `ludash-test-flatpak` | 2005 | Flatpak | bash | Ubuntu 24.04 |
| `ludash-test-snap` | 2006 | Snap | bash | Ubuntu 24.04 |
| `ludash-test-ubuntu-sudo` | 2007 | APT (sudo password) | bash | Ubuntu 24.04 |
| `ludash-test-debian-fish` | 2008 | APT | fish | Debian 12 |
| `ludash-test-debian-fish-sudo` | 2009 | APT (sudo password) | fish | Debian 12 |
| `ludash-test-alpine` | 2010 | APK | bash | Alpine 3.16 |
To add a test system in the dashboard, use `host.docker.internal` (or `172.17.0.1` on Linux) as the hostname with the corresponding SSH port.
Each container is built with older package versions pinned from archived repositories, while current repos remain active. This means `apt list --upgradable`, `dnf check-update`, `pacman -Qu`, `apk list -u`, etc. will always report pending updates, giving you realistic data to work with in the dashboard.
The Docker Compose file and all Dockerfiles are in docker/test-systems/.
To reset the dev branch to match main (force push):

```bash
./reset-dev-branch.sh
```

All endpoints require authentication unless noted. Responses are JSON.
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/health` | Health check (localhost: no auth, external: requires auth) |
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/auth/status` | Auth state, setup status, OIDC availability |
| POST | `/api/auth/setup` | Create initial admin account |
| POST | `/api/auth/login` | Password login |
| POST | `/api/auth/logout` | Clear session |
| GET | `/api/auth/me` | Current user info |
| POST | `/api/auth/webauthn/register/options` | Start passkey registration |
| POST | `/api/auth/webauthn/register/verify` | Complete passkey registration |
| POST | `/api/auth/webauthn/login/options` | Start passkey login |
| POST | `/api/auth/webauthn/login/verify` | Complete passkey login |
| GET | `/api/auth/oidc/login` | Redirect to OIDC provider |
| GET | `/api/auth/oidc/callback` | OIDC callback handler |
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/systems` | List all systems with update counts |
| GET | `/api/systems/:id` | System detail with updates and history |
| POST | `/api/systems` | Add a new system |
| PUT | `/api/systems/:id` | Update system configuration |
| DELETE | `/api/systems/:id` | Remove a system |
| POST | `/api/systems/test-connection` | Test SSH connectivity |
| POST | `/api/systems/:id/reboot` | Reboot a system |
| GET | `/api/systems/:id/updates` | Cached updates for a system |
| GET | `/api/systems/:id/history` | Upgrade history for a system |
| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/systems/:id/check` | Check one system for updates |
| POST | `/api/systems/check-all` | Check all systems (background) |
| POST | `/api/systems/:id/upgrade` | Upgrade all packages on a system |
| POST | `/api/systems/:id/full-upgrade` | Full/dist upgrade on a system |
| POST | `/api/systems/:id/upgrade/:packageName` | Upgrade a single package |
| POST | `/api/cache/refresh` | Invalidate cache and re-check all systems |
| GET | `/api/jobs/:id` | Poll background job status |
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/notifications` | List all notification channels |
| GET | `/api/notifications/:id` | Get a notification channel |
| POST | `/api/notifications` | Create a notification channel |
| PUT | `/api/notifications/:id` | Update a notification channel |
| DELETE | `/api/notifications/:id` | Delete a notification channel |
| POST | `/api/notifications/test` | Test a notification config inline (before saving) |
| POST | `/api/notifications/:id/test` | Send a test notification |
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/passkeys` | List passkeys for the authenticated user |
| PATCH | `/api/passkeys/:id` | Rename a passkey |
| DELETE | `/api/passkeys/:id` | Remove a passkey |
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/tokens` | List tokens for the authenticated user |
| POST | `/api/tokens` | Create a new token (`name`, `expiresInDays`, `readOnly`) |
| PATCH | `/api/tokens/:id` | Rename a token |
| DELETE | `/api/tokens/:id` | Revoke a token |
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/dashboard/stats` | Summary statistics |
| GET | `/api/dashboard/systems` | All systems with status metadata |
| GET | `/api/settings` | Current settings |
| PUT | `/api/settings` | Update settings |
- Credential encryption: SSH passwords and private keys are encrypted at rest using AES-256-GCM with per-entry random IVs and auth tags
- Notification secrets: SMTP passwords and ntfy tokens are also encrypted at rest within notification channel configs
- Key derivation: supports both raw base64 keys and passphrase-derived keys (PBKDF2-SHA256, 480k iterations)
- Session security: HTTP-only, SameSite=Lax cookies with JWT (HS256)
- CSRF protection: state-changing API requests require a per-session CSRF token header
- Input validation: strict type, format, and range validation on all API inputs
- Notification URL validation: outbound notification URLs are validated for correct format (http/https); private/local targets are allowed since they are admin-configured
- Rate limiting: auth endpoints are rate-limited (3 req/min for setup, 5 req/min for login and WebAuthn verify, 20 failed bearer attempts/min per IP)
- API token security: only SHA-256 hashes stored, tokens blocked from management endpoints, CSRF skipped for stateless bearer requests
- Password-disable safeguard: password login cannot be disabled unless a passkey or SSO is configured (enforced server-side)
- Timing-safe login: a pre-computed dummy hash is always compared on failed lookups to prevent username enumeration
- Encrypted OIDC secrets: OIDC client secrets are encrypted at rest alongside SSH credentials
- Concurrent access control: per-system mutex prevents conflicting SSH operations
- Connection pooling: semaphore-based concurrency limiting to prevent SSH connection exhaustion
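The combination of passphrase key derivation and AES-256-GCM with per-entry IVs described above can be sketched like this (illustrative TypeScript using `node:crypto`, not the project's actual code; only the algorithm names and iteration count come from the list above):

```typescript
import {
  createCipheriv,
  createDecipheriv,
  pbkdf2Sync,
  randomBytes,
} from "node:crypto";

// Stretch a passphrase into a 32-byte AES-256 key with PBKDF2-SHA256
// at 480k iterations (raw base64 keys would skip this step).
function deriveKey(passphrase: string, salt: Buffer): Buffer {
  return pbkdf2Sync(passphrase, salt, 480_000, 32, "sha256");
}

// Encrypt with a fresh random IV per entry; the IV and GCM auth tag are
// stored alongside the ciphertext so each blob is self-contained.
function encrypt(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return [iv, cipher.getAuthTag(), ct].map((b) => b.toString("base64")).join(".");
}

function decrypt(blob: string, key: Buffer): string {
  const [iv, tag, ct] = blob.split(".").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // tampering makes final() throw
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```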
All upgrade operations (upgrade all, full upgrade, single package) run via nohup on the remote system, so they survive SSH connection drops. If your network blips or the dashboard restarts mid-upgrade, the process keeps running on the server.
- Sudo handling: if a sudo password is configured, it is sent only over the live SSH stdin stream to a one-time `sudo` launch of the background process. The password is never written to files or environment variables. For non-password sudo, detached commands use `sudo -n`.
- Temp script: the upgrade command is base64-encoded, written to a temporary script on the remote host, and launched with `nohup` in the background.
- Live streaming: output is streamed back to the dashboard in real time using `tail --pid`, which automatically stops when the process finishes.
- Exit code capture: the script writes its exit code to a companion file, which the dashboard reads after the process completes.
- Fail-safe behavior: if SSH-safe `nohup` setup fails (e.g. `mktemp` unavailable), the upgrade is marked failed instead of falling back to unsafe direct execution.
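The launch sequence can be sketched as a single remote command string. Everything below (file layout, quoting, function name) is a hypothetical illustration of the idea, not the project's real implementation:

```typescript
// Build a remote shell snippet that decodes the upgrade command from
// base64 into a temp script, runs it detached under nohup so it survives
// SSH disconnects, records its exit code in a companion file, and streams
// output until the process ends.
function buildDetachedCommand(upgradeCmd: string): string {
  // base64 the command so it survives shell quoting on the remote side
  const encoded = Buffer.from(upgradeCmd, "utf8").toString("base64");
  return [
    'tmp=$(mktemp)',
    `echo '${encoded}' | base64 -d > "$tmp"`,
    // detach with nohup; write the exit code to "$tmp.exit"
    'nohup sh -c "sh \\"$tmp\\"; echo \\$? > \\"$tmp.exit\\"" > "$tmp.log" 2>&1 &',
    // follow the log until the detached process exits
    'tail --pid=$! -f "$tmp.log"',
  ].join("\n");
}
```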
If the SSH connection drops while monitoring, the dashboard shows a warning:
> SSH connection lost during upgrade. The process may still be running on the remote system.
The upgrade itself continues on the remote host unaffected. Temporary files are cleaned up once the exit code is read.
| Operation | SSH-safe |
|---|---|
| Upgrade all packages | Yes |
| Full upgrade (dist upgrade) | Yes |
| Upgrade single package | Yes |
| Check for updates | No (read-only, safe to retry) |
| Reboot | No (fire-and-forget) |
The UI marks SSH-safe operations with an SSH-safe badge in the activity history.







