```
 ██████╗  █████╗  ██████╗  ███████╗ ██╗   ██╗ ██╗      ███████╗
██╔════╝ ██╔══██╗ ██╔══██╗ ██╔════╝ ██║   ██║ ██║      ██╔════╝
██║      ███████║ ██████╔╝ ███████╗ ██║   ██║ ██║      █████╗
██║      ██╔══██║ ██╔═══╝  ╚════██║ ██║   ██║ ██║      ██╔══╝
╚██████╗ ██║  ██║ ██║      ███████║ ╚██████╔╝ ███████╗ ███████╗
 ╚═════╝ ╚═╝  ╚═╝ ╚═╝      ╚══════╝  ╚═════╝  ╚══════╝ ╚══════╝
```
A comprehensive Laravel backup package. Back up your database and files, store them anywhere, get notified, and restore with confidence.
Capsule supports MySQL, PostgreSQL, and SQLite. It stores backups on any Laravel filesystem disk (local, S3, SFTP, FTP, DigitalOcean Spaces, etc.) and notifies you via Email, Slack, Discord, Microsoft Teams, or Google Chat.
- Installation
- Quick Start
- Commands
  - Backup
  - Restore
  - List & Inspect
  - Verify
  - Cleanup
  - Diagnose
  - Health
  - Advisor
  - Download
  - Backup If Stale
- Features
  - Incremental Backups
  - Backup Simulation
  - Backup Policies
  - Multi-Disk Storage
  - Chunked Streaming
  - Encryption
  - Programmatic API
- Configuration
  - Storage
  - Database
  - Files
  - Retention
  - Scheduling
  - Notifications
- Extensibility
- Integrations
- Requirements
- License
## Installation

```bash
composer require dgtlss/capsule
```

Publish the config file:

```bash
php artisan vendor:publish --tag=capsule-config
```

Run the migrations:

```bash
php artisan migrate
```

That's it. Capsule auto-discovers via Laravel's package discovery.

## Quick Start

Run your first backup:

```bash
php artisan capsule:backup
```

You'll see a summary report when it finishes:
```
┌─────────────────────────────────────────────┐
│ ✅ Backup #1 completed successfully         │
├─────────────────────────────────────────────┤
│ Duration    2.4s                            │
│ Archive     backup_2026-02-26_02-00-00.zip  │
│ Size        14.2 MB (from 89.5 MB)          │
│ Compression 0.16x                           │
│ Files       2,847 files in 312 dirs         │
│ Database    1 dump(s) - 45.2 MB             │
│ Storage     s3                              │
│ Throughput  37.3 MB/s                       │
│ Tag         nightly                         │
│ Next run    Tomorrow at 02:00               │
└─────────────────────────────────────────────┘
```
Check your backup status:

```bash
php artisan capsule:health
```

If something goes wrong, start here:

```bash
php artisan capsule:diagnose
```

## Commands

### Backup

```bash
# Standard backup
php artisan capsule:backup
# Database only
php artisan capsule:backup --db-only
# Files only
php artisan capsule:backup --files-only
# Incremental (only changed files since last full backup)
php artisan capsule:backup --incremental
# Tag a backup for easy identification
php artisan capsule:backup --tag=pre-deploy
# Simulate without running (estimates size and duration)
php artisan capsule:backup --simulate
# Use a named policy
php artisan capsule:backup --policy=database-hourly
# Stream directly to cloud (no local disk usage)
php artisan capsule:backup --no-local
# Encrypt the backup
php artisan capsule:backup --encrypt
# Verify integrity after creation
php artisan capsule:backup --verify
# Max compression
php artisan capsule:backup --compress=9
# Force run even if another backup is in progress
php artisan capsule:backup --force
# Verbose output
php artisan capsule:backup --detailed
# JSON output (for CI/automation)
php artisan capsule:backup --format=json
```

### Restore

```bash
# Restore the latest backup
php artisan capsule:restore
# Restore a specific backup
php artisan capsule:restore 42
# Browse what's inside before restoring
php artisan capsule:restore --list
# Restore specific files only
php artisan capsule:restore --only=config/database.php --only=config/app.php
# Restore using glob patterns
php artisan capsule:restore --only='*.php'
# Database only
php artisan capsule:restore --db-only
# Files only
php artisan capsule:restore --files-only
# Restore files to a different directory
php artisan capsule:restore --files-only --target=/tmp/restored
# Restore to a different database connection
php artisan capsule:restore --db-only --connection=mysql_secondary
# Preview without making changes
php artisan capsule:restore --dry-run
# Skip confirmation
php artisan capsule:restore --force
```

### List & Inspect

```bash
# List recent backups
php artisan capsule:list
# Limit results
php artisan capsule:list --limit=10
# JSON output
php artisan capsule:list --format=json
# Inspect a specific backup (shows manifest with checksums)
php artisan capsule:inspect 42
```

### Verify

```bash
# Verify the latest backup (downloads, checks ZIP + checksums)
php artisan capsule:verify
# Verify a specific backup
php artisan capsule:verify --id=42
# Verify all successful backups
php artisan capsule:verify --all
# Keep the downloaded file after verification
php artisan capsule:verify --keep
```

Capsule also runs automated verification on a schedule. See Integrity Monitoring.

### Cleanup

```bash
# Clean up based on retention policy
php artisan capsule:cleanup
# Preview what would be deleted
php artisan capsule:cleanup --dry-run
# Override retention days
php artisan capsule:cleanup --days=7
# Also clean up failed backup records
php artisan capsule:cleanup --failed
# Clean up orphaned storage files
php artisan capsule:cleanup --storage
```

### Diagnose

```bash
# Check config, storage, database, file paths, system requirements
php artisan capsule:diagnose
# Include performance, security, and backup history analysis
php artisan capsule:diagnose --detailed
# Attempt to fix common issues (e.g., publish missing config)
php artisan capsule:diagnose --fix
```

### Health

```bash
# JSON health snapshot
php artisan capsule:health
```

Returns:

```json
{
  "last_success_age_days": 0,
  "recent_failures_7d": 0,
  "storage_usage_bytes": 14892032
}
```

### Advisor

```bash
# Analyze trends and get scheduling recommendations
php artisan capsule:advisor
```

The advisor examines your backup history, reports on size growth, duration trends, compression efficiency, and failure rates, and gives actionable recommendations.

### Download

```bash
# Download the latest backup to local disk
php artisan capsule:download
# Download a specific backup
php artisan capsule:download 42
# Download to a custom path
php artisan capsule:download --path=/tmp/backups
```

### Backup If Stale

Only run a backup if the last successful one is older than a threshold. Useful for redundant scheduling or deploy hooks where you don't want to back up twice.

```bash
# Backup only if last success is older than 24 hours (default)
php artisan capsule:backup-if-stale
# Custom threshold
php artisan capsule:backup-if-stale --hours=6
# With a specific policy
php artisan capsule:backup-if-stale --hours=12 --policy=database-hourly
```

## Features

### Incremental Backups

Instead of backing up all files every time, incremental mode includes only the files that have changed since the last full backup. Capsule tracks file sizes and modification times to detect changes.

```bash
php artisan capsule:backup --incremental
```

If no previous full backup exists, Capsule automatically runs a full backup instead.
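The change-detection idea — comparing each file's current size and modification time against what was recorded at the last full backup — can be sketched language-agnostically. A minimal illustration (in Python; `changed_files` and the manifest shape are hypothetical, not Capsule's internals):

```python
import os

def changed_files(paths, last_manifest):
    """Return paths whose size or mtime differs from the last full backup.

    last_manifest maps path -> (size, mtime) as recorded at that backup;
    files missing from the manifest are treated as new and always included.
    """
    changed = []
    for path in paths:
        st = os.stat(path)
        if last_manifest.get(path) != (st.st_size, int(st.st_mtime)):
            changed.append(path)
    return changed
```

Size-plus-mtime comparison is cheap (one `stat` per file, no content hashing), which is what makes incremental scans fast on large trees.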
A typical workflow:

```bash
# Weekly full backup
php artisan capsule:backup --tag=weekly-full
# Daily incremental
php artisan capsule:backup --incremental --tag=daily-incremental
```

### Backup Simulation

Estimate backup size and duration before committing:

```bash
php artisan capsule:backup --simulate
```

This scans all configured paths and databases, calculates totals, estimates compression from historical data, and reports:
- Raw data size and estimated archive size
- Estimated duration based on past throughput
- Top file extensions by size
- Largest files
- Historical comparison against recent backups
- Warnings for low disk space or large datasets
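The estimation step described above amounts to applying the historical average compression ratio and throughput to the current raw size. A hypothetical sketch (the `history` record shape is assumed for illustration, not Capsule's schema):

```python
def estimate_backup(raw_bytes, history):
    """Estimate archive size and duration from past backups.

    history entries are dicts with 'raw' and 'archived' byte counts
    and a 'duration' in seconds.
    """
    # Average compression ratio (archived / raw) across past runs
    ratio = sum(h["archived"] / h["raw"] for h in history) / len(history)
    # Average raw-data throughput in bytes per second
    throughput = sum(h["raw"] / h["duration"] for h in history) / len(history)
    return {
        "estimated_archive_bytes": int(raw_bytes * ratio),
        "estimated_duration_s": raw_bytes / throughput,
    }
```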
### Backup Policies

Define named backup strategies for different needs:

```php
// config/capsule.php
'policies' => [
    'database-hourly' => [
        'database' => true,
        'files' => false,
        'disk' => 's3',
        'frequency' => 'hourly',
        'retention' => ['days' => 7, 'count' => 168],
    ],
    'full-weekly' => [
        'database' => true,
        'files' => true,
        'disk' => 'glacier',
        'frequency' => 'weekly',
        'time' => '03:00',
        'retention' => ['days' => 365, 'count' => 52],
    ],
    'incremental-daily' => [
        'database' => true,
        'files' => true,
        'incremental' => true,
        'frequency' => 'daily',
        'time' => '02:00',
    ],
],
```

Each policy runs on its own schedule automatically. Run a specific policy manually:

```bash
php artisan capsule:backup --policy=database-hourly
```

When no policies are defined, Capsule uses the global config as a single default policy.
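The fallback behavior can be pictured as a small resolver. A sketch (in Python; the override-merge semantics are an assumption for illustration, not Capsule's documented behavior):

```python
def resolve_policy(name, policies, global_config):
    """Resolve which settings a backup run should use.

    With no policies configured, the global config acts as the single
    default; otherwise the named policy's keys override the global ones.
    """
    if not policies:
        return dict(global_config)
    if name not in policies:
        raise KeyError(f"unknown policy: {name}")
    return {**global_config, **policies[name]}
```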
### Multi-Disk Storage

Back up to multiple destinations for redundancy:

```php
'default_disk' => 's3',
'additional_disks' => ['local-archive', 's3-secondary'],
```

The primary backup goes to `default_disk`. Copies are replicated to each additional disk. Replication failures are logged but don't fail the backup.
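That failure-isolation rule — primary upload failures abort, replica failures merely log — can be sketched as follows (illustrative Python; `upload` and `log` are injected placeholders, not Capsule's API):

```python
def replicate(archive, primary_disk, additional_disks, upload, log):
    """Upload to the primary disk, then copy to each additional disk.

    A failure on the primary propagates; a failure on an additional disk
    is logged and skipped, so replication problems never fail the backup.
    """
    upload(primary_disk, archive)  # primary failure raises to the caller
    for disk in additional_disks:
        try:
            upload(disk, archive)
        except OSError as exc:
            log(f"replication to {disk} failed: {exc}")
```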
### Chunked Streaming

For large backups or limited local storage, stream directly to cloud storage:

```bash
php artisan capsule:backup --no-local
```

Data is streamed in configurable chunks, uploaded directly to storage, then collated into a final ZIP archive. No local disk space is required beyond small temporary buffers.
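The reading side of chunked streaming is a plain fixed-size read loop. A minimal sketch (Python; `iter_chunks` is illustrative, and the 10 MB default mirrors the `chunk_size` setting):

```python
def iter_chunks(stream, chunk_size=10 * 1024 * 1024):
    """Yield fixed-size chunks from a stream until it is exhausted,
    the way a chunked upload would read an archive."""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:  # empty read signals end of stream
            return
        yield chunk
```

Because each chunk is uploaded as soon as it is read, peak local usage stays near one chunk rather than the full archive size.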
```php
'chunked_backup' => [
    'chunk_size' => 10485760, // 10 MB chunks
    'temp_prefix' => 'capsule_chunk_',
    'max_concurrent_uploads' => 3,
],
```

### Encryption

Capsule supports two encryption approaches:
ZIP-level encryption (simple, compatible with standard ZIP tools):

```bash
php artisan capsule:backup --encrypt
```

Set the password via an environment variable:

```env
CAPSULE_BACKUP_PASSWORD=your-secret-key
```

Envelope encryption (advanced, supports key rotation):
Each backup is encrypted with a unique random data key (DEK), which is then wrapped with your master key. The key ID is stored in the manifest, enabling you to rotate the master key while old backups remain decryptable with their original key.
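The key-management structure — fresh DEK per backup, master key only wraps the DEK, key ID in the manifest — can be illustrated with a toy sketch. This is NOT Capsule's implementation and the HMAC keystream is NOT production crypto; it only demonstrates the envelope/rotation flow:

```python
import hashlib, hmac, os

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher (XOR with an HMAC-derived keystream).
    # Illustrative only -- never use this for real encryption.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def encrypt_backup(archive: bytes, master_keys: dict, master_key_id: str):
    """Envelope encryption: a fresh DEK encrypts the data,
    and the chosen master key wraps the DEK."""
    dek = os.urandom(32)
    ciphertext = _keystream_xor(dek, archive)
    wrapped_dek = _keystream_xor(master_keys[master_key_id], dek)
    # The manifest records which master key wrapped this DEK.
    return ciphertext, {"key_id": master_key_id, "wrapped_dek": wrapped_dek}

def decrypt_backup(ciphertext: bytes, manifest: dict, master_keys: bytes) -> bytes:
    # Unwrap the DEK with the master key named in the manifest, then decrypt.
    dek = _keystream_xor(master_keys[manifest["key_id"]], manifest["wrapped_dek"])
    return _keystream_xor(dek, ciphertext)
```

Rotation works because adding a new master key for future backups never invalidates old manifests: each one names the key that wrapped its DEK.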
### Programmatic API

Use Capsule from your application code without going through artisan:

```php
use Dgtlss\Capsule\Facades\Capsule;

// Run a backup
$success = Capsule::backup();

// With options
$success = Capsule::backup([
    'tag' => 'pre-deploy',
    'db_only' => true,
    'incremental' => true,
]);

// Simulate
$estimate = Capsule::simulate();

// List backups
$backups = Capsule::list(10);

// Get the latest successful backup
$latest = Capsule::latest();

// Health check
if (! Capsule::isHealthy()) {
    // alert
}
```

### Integrity Monitoring

Capsule continuously verifies that your backups are intact:

```php
'verification' => [
    'schedule_enabled' => true,
    'frequency' => 'daily',
    'time' => '04:00',
    'recheck_days' => 7, // Re-verify after 7 days
],
```

Each scheduled run picks an unverified backup, downloads it, validates the ZIP structure and SHA-256 checksums for every entry, and logs the result. Failed verifications trigger notifications.
```bash
# Run manually
php artisan capsule:verify-scheduled
```

### Anomaly Detection

After each backup, Capsule compares the result against the rolling average:
- Size anomalies: flags backups that are >200% larger or smaller than average
- Duration anomalies: flags backups taking >300% longer than average
- File count anomalies: flags unexpected changes in file count
- Compression anomalies: flags drops in compression efficiency
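The size and duration checks above reduce to a deviation-from-average test. A sketch (Python; the exact deviation formula is an assumption for illustration, using the 200%/300% thresholds from the config below):

```python
def detect_anomalies(current, history, size_pct=200, duration_pct=300):
    """Flag metrics whose deviation from the rolling average exceeds
    a percentage threshold.

    current and history entries are dicts with 'size' and 'duration';
    deviation is |current - average| / average * 100.
    """
    flagged = []
    for key, limit in (("size", size_pct), ("duration", duration_pct)):
        avg = sum(h[key] for h in history) / len(history)
        if abs(current[key] - avg) / avg * 100 > limit:
            flagged.append(key)
    return flagged
```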
Anomalies appear in the post-backup summary and are included in notifications.

```php
'anomaly' => [
    'size_deviation_percent' => 200,
    'duration_deviation_percent' => 300,
],
```

### Dump Validation

Before adding a dump to the archive, Capsule validates it:
- MySQL/MariaDB: checks for expected header comments and the "Dump completed" end marker
- PostgreSQL: validates the dump header format
- SQLite: verifies the magic header bytes and minimum file size
A corrupt or empty dump aborts the backup with a clear error.
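The SQLite check is the simplest of the three: every SQLite 3 database begins with the documented 16-byte magic header. A sketch of that check (`validate_sqlite_dump` and the 512-byte floor — one minimum database page — are illustrative, not Capsule's code):

```python
import os

SQLITE_MAGIC = b"SQLite format 3\x00"  # first 16 bytes of every SQLite 3 file

def validate_sqlite_dump(path, min_size=512):
    """Reject an empty or corrupt SQLite file before archiving it."""
    if os.path.getsize(path) < min_size:
        raise ValueError(f"{path}: file smaller than {min_size} bytes")
    with open(path, "rb") as f:
        if f.read(16) != SQLITE_MAGIC:
            raise ValueError(f"{path}: missing SQLite magic header")
```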
### Audit Trail

Every backup operation is recorded in an immutable audit trail:

```php
'audit' => [
    'enabled' => true,
],
```

Tracks the action (backup/restore/cleanup), trigger (artisan/scheduler/api), actor (system user or authenticated user), status, and full details. Stored in the `backup_audit_logs` table.
### S3 Lifecycle

For S3-compatible storage, Capsule can tag objects and transition them to cheaper storage classes:

```php
's3_lifecycle' => [
    'tagging_enabled' => true,
    'transition_enabled' => true,
    'transitions' => [
        ['after_days' => 30, 'storage_class' => 'STANDARD_IA'],
        ['after_days' => 90, 'storage_class' => 'GLACIER'],
    ],
],
```

## Configuration

After publishing the config (`php artisan vendor:publish --tag=capsule-config`), all settings live in `config/capsule.php`.
### Storage

Capsule uses your existing Laravel filesystem disks. No duplicate storage configuration is needed.

```php
'default_disk' => env('CAPSULE_DEFAULT_DISK', 'local'),
'backup_path' => env('CAPSULE_BACKUP_PATH', 'backups'),
```

Point `CAPSULE_DEFAULT_DISK` at any disk in your `config/filesystems.php`:

```env
CAPSULE_DEFAULT_DISK=s3
```

Upload reliability:

```php
'storage' => [
    'retries' => 3,
    'backoff_ms' => 500,
    'max_backoff_ms' => 5000,
],
```

### Database

```php
'database' => [
    'enabled' => true,
    'connections' => null, // null = auto-detect default connection
    'exclude_tables' => [],
    'include_tables' => [],
    'include_triggers' => true,
    'include_routines' => false,
    'mysqldump_flags' => '',
    'compress' => true,
],
```

Set `connections` to an array to back up multiple databases:

```php
'connections' => ['mysql', 'pgsql'],
```

### Files

```php
'files' => [
    'enabled' => true,
    'paths' => [base_path()],
    'exclude_paths' => [
        base_path('.env'),
        base_path('node_modules'),
        base_path('vendor'),
        base_path('.git'),
        storage_path('logs'),
        storage_path('framework/cache'),
    ],
    'compress' => true,
],
```

### Retention

```php
'retention' => [
    'days' => 30, // Delete backups older than this
    'count' => 10, // Always keep the latest N
    'max_storage_mb' => null, // Optional storage budget
    'min_keep' => 3, // Never drop below this count
    'cleanup_enabled' => true,
],
```

### Scheduling

```php
'schedule' => [
    'enabled' => true,
    'frequency' => 'daily', // hourly, daily, twiceDaily, weekly, monthly, or cron
    'time' => '02:00',
],
```

Custom cron expression:

```php
'frequency' => '0 3 * * 1-5', // Weekdays at 3 AM
```

Make sure your server's cron is configured to run Laravel's scheduler:

```
* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1
```

### Notifications

```php
'notifications' => [
    'enabled' => true,
    'webhook_retries' => 3,
    'webhook_backoff_ms' => 1000,
    'email' => [
        'enabled' => false,
        'to' => env('CAPSULE_EMAIL_TO'),
        'from' => env('CAPSULE_EMAIL_FROM'),
        'notify_on' => null, // null = all events, or ['failure']
    ],
    'webhooks' => [
        'slack' => [
            'enabled' => false,
            'webhook_url' => env('CAPSULE_SLACK_WEBHOOK_URL'),
            'channel' => '#general',
            'username' => 'Capsule',
            'icon_emoji' => ':package:',
            'notify_on' => null,
        ],
        'discord' => [
            'enabled' => false,
            'webhook_url' => env('CAPSULE_DISCORD_WEBHOOK_URL'),
            'username' => 'Capsule',
            'notify_on' => null,
        ],
        'teams' => [
            'enabled' => false,
            'webhook_url' => env('CAPSULE_TEAMS_WEBHOOK_URL'),
            'notify_on' => null,
        ],
        'google_chat' => [
            'enabled' => false,
            'webhook_url' => env('CAPSULE_GOOGLE_CHAT_WEBHOOK_URL'),
            'notify_on' => null,
        ],
    ],
],
```

The `notify_on` option controls which events trigger each channel. Set it to `['failure']` to be alerted only on failures, or leave it `null` to receive everything.
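Webhook delivery retries can be pictured as a retry loop driven by `webhook_retries` and `webhook_backoff_ms`. A sketch (Python; the doubling schedule is an assumption for illustration — only the retry count and base delay come from the config):

```python
import time

def deliver_webhook(send, payload, retries=3, backoff_ms=1000):
    """Retry a webhook delivery with a doubling backoff between attempts.

    send is any callable that raises OSError on a transport failure;
    the final failure propagates to the caller.
    """
    delay = backoff_ms / 1000.0
    for attempt in range(retries):
        try:
            return send(payload)
        except OSError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)
            delay *= 2
```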
All notifications include: app name, environment, hostname, backup size, duration, storage disk, and error details (for failures).
Webhook channels use Block Kit (Slack), Embeds (Discord), Adaptive Cards (Teams), and Card v2 (Google Chat).
## Extensibility

Register custom file filters and pipeline steps:

```php
'extensibility' => [
    'file_filters' => [
        // \App\Backup\Filters\ExcludeLargeFiles::class,
    ],
    'pre_steps' => [
        // \App\Backup\Steps\EnterMaintenanceMode::class,
    ],
    'post_steps' => [
        // \App\Backup\Steps\ExitMaintenanceMode::class,
    ],
],
```

File filters implement `Dgtlss\Capsule\Contracts\FileFilterInterface`:

```php
public function shouldInclude(string $absolutePath, BackupContext $context): bool;
```

Built-in filters: `MaxFileSizeFilter`, `ExtensionFilter`, `PatternFilter`.
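The filter contract is just a per-file predicate. Sketched language-agnostically (Python here; this mimics the spirit of `MaxFileSizeFilter`, not its actual code):

```python
import os

def make_max_file_size_filter(max_bytes):
    """Build a shouldInclude-style predicate: include a file only when
    it is at or under the size cap."""
    def should_include(absolute_path):
        return os.path.getsize(absolute_path) <= max_bytes
    return should_include
```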
Steps implement `Dgtlss\Capsule\Contracts\StepInterface`:

```php
public function handle(BackupContext $context): void;
```

Events dispatched during backup:
`BackupStarting`, `DatabaseDumpStarting`, `DatabaseDumpCompleted`, `FilesCollectStarting`, `FilesCollectCompleted`, `ArchiveFinalizing`, `BackupUploaded`, `BackupSucceeded`, `BackupFailed`
## Integrations

### Spatie Laravel Health

```php
use Spatie\Health\Facades\Health;
use Dgtlss\Capsule\Health\CapsuleBackupCheck;

Health::checks([
    CapsuleBackupCheck::new(),
]);
```

Configure thresholds:

```php
'health' => [
    'max_last_success_age_days' => 2,
    'max_recent_failures' => 0,
    'warn_storage_percent' => 90,
],
```

### Filament

Capsule ships a browse-only Filament page with filtering, pagination, status badges, and health stats. Add it to your panel:

```php
use Dgtlss\Capsule\Filament\Pages\BackupsPage;
```

### JSON Output

All commands support `--format=json` for machine-readable output:

```bash
php artisan capsule:backup --format=json
php artisan capsule:list --format=json
php artisan capsule:verify --all --format=json
php artisan capsule:cleanup --dry-run --format=json
php artisan capsule:health
php artisan capsule:advisor --format=json
```

## Requirements

- PHP 8.1+
- Laravel 10, 11, or 12
- `zip` PHP extension
- Database tools: `mysqldump` (MySQL/MariaDB), `pg_dump` (PostgreSQL)
- Optional: `mysql`/`psql` for restore
## License

MIT. See LICENSE.md.