diskwatch is a read-only Linux storage monitor with a terminal UI (TUI) for disk
activity, filesystem space, block devices, ZFS pools, mdraid arrays, LVM state,
and SMART health. It is useful when you want a lightweight Rust companion to
tools like iostat, df, lsblk, zpool, mdadm, lvs, and smartctl.
The default mode is an interactive terminal UI. A --once mode is also
available for scripts, diagnostics, CI logs, and non-interactive environments.
Use diskwatch when you want to:
- Monitor per-device read throughput, write throughput, read IOPS, write IOPS, and busy percentage from Linux disk statistics (see the sketch after this list).
- Inspect mounted filesystem capacity, including used, available, total, and percent used. Kernel pseudo filesystems are filtered, while capacity-bearing mounts such as container overlay roots are retained. Tmpfs mounts are hidden by default and can be shown with `--tmpfs`. Remote and FUSE mounts are skipped to avoid blocking on stale network filesystems.
- Inspect block-device inventory, including size, type, rotational hint, logical and physical sector size, vendor, model, and serial where readable.
- Check ZFS pool capacity and health when `zpool` is installed. Use `--zfs-deep` for topology, vdev I/O, ARC/L2ARC, dataset, and kernel kstat summaries.
- Check mdraid array state from `/proc/mdstat` and optional `mdadm` output.
- Check LVM physical volumes, volume groups, and logical volumes when LVM tools are installed.
- Check SMART health, temperature, power-on hours, and wear/lifetime fields when `smartctl` is installed and allowed to read a device.
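As a rough illustration of the first bullet, here is a manual two-sample read-throughput check against `/proc/diskstats`; the device name `sda` is an assumption about your host, and `/proc/diskstats` sector counts are always in 512-byte units:

```sh
# Field 3 is the device name; field 6 is cumulative sectors read.
r1=$(awk '$3 == "sda" { print $6 }' /proc/diskstats)
sleep 1
r2=$(awk '$3 == "sda" { print $6 }' /proc/diskstats)
echo "sda read throughput: $(( (r2 - r1) * 512 )) bytes/s"
```

diskwatch computes the same kind of counter deltas across its sampling interval for every device.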
Run from the repository without installing:

```sh
cargo run -- --once
cargo run -- --interval 500ms
```

Build and install the release binary:

```sh
make install
diskwatch --once
diskwatch
```

If `/usr/local/bin` is not in your `PATH`, either add it or install with a custom `PREFIX`, `BINDIR`, or `INSTALL_PATH`.
diskwatch targets Linux systems that expose storage activity through procfs,
block-device metadata through sysfs, filesystem capacity through mounted
filesystems, and optional storage-stack details through local read-only command
line tools.
| System type | Support level | Notes |
|---|---|---|
| Bare-metal Linux with procfs and sysfs | Full | Expected to show disk activity, mounted filesystem space, and block-device inventory. Optional ZFS, mdraid, LVM, and SMART sections depend on local tooling and permissions. |
| Linux VMs and containers | Partial | /proc/diskstats, /proc/mounts, and /sys/block may be filtered or virtualized. Optional device health data is often hidden. |
| Linux hosts without ZFS, mdraid, LVM, or SMART tools | Partial | Core activity, space, and block-device panels can still work. Missing optional tools become N/A values and diagnostics. |
| macOS, Windows, BSD, WSL without Linux storage procfs/sysfs access | Not supported for useful runtime data | The crate may compile on some non-Linux targets, but the monitor expects Linux /proc, /sys, and storage command interfaces. |
The TUI requires an interactive terminal. Use --once for automation or
non-interactive environments.
| Data | Source |
|---|---|
| Device activity counters | /proc/diskstats |
| Mounted filesystem list | /proc/mounts |
| Filesystem capacity | statvfs |
| Block-device inventory | /sys/block |
| mdraid state | /proc/mdstat |
| mdraid details | mdadm --detail --scan |
| ZFS pools | zpool list and zpool status |
| Deep ZFS stats | zpool iostat, zfs list, zfs get, and /proc/spl/kstat/zfs |
| LVM state | pvs, vgs, and lvs |
| SMART health | smartctl |
diskwatch does not change filesystems, mount state, RAID arrays, ZFS pools,
LVM volumes, SMART settings, kernel tunables, or any other system
configuration.
Optional command output is cached for 30 seconds in the TUI and collected under
a short aggregate budget so slower tools such as zpool, mdadm, LVM commands,
or per-device smartctl checks cannot multiply into long UI stalls. Core
activity, filesystem, block-device, and /proc/mdstat data are still refreshed
on the normal interval.
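The effect is similar to wrapping each optional tool in a hard timeout yourself. A minimal shell sketch of the idea, where the 2-second limit is illustrative rather than diskwatch's actual budget:

```sh
# Bound slow or hung optional tools so they cannot stall the report.
timeout 2s zpool list -H 2>/dev/null || echo "zpool: unavailable or timed out"
timeout 2s mdadm --detail --scan 2>/dev/null || echo "mdadm: unavailable or timed out"
```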
Deep ZFS mode has its own bounded ZFS collection budget. It may spend about one
second sampling zpool iostat -y 1 1 so the displayed vdev rates and latency
fields represent a real interval. If a ZFS source is too slow, unavailable, or
blocked by permissions, diskwatch keeps the partial data it has and reports a
source-specific diagnostic instead of failing the whole report.
Mounted filesystem capacity uses synchronous local statvfs calls. Remote and
FUSE filesystem types such as NFS, CIFS, sshfs, and similar mounts are skipped
so a stale mount cannot freeze the monitor.
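A rough command-line analogue of that policy, assuming GNU coreutils `df` (diskwatch itself calls `statvfs` directly rather than shelling out):

```sh
# Show only local filesystem capacity, excluding tmpfs, like the default view.
df --local --output=target,fstype,size,used,avail,pcent -x tmpfs
```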
Required for building:
- Rust 1.88 or newer, matching the crate's `rust-version`.
- Cargo.
Optional but recommended:
- `make`, for the repository build/install targets.
- `sudo`, `setcap`, and `getcap`, for installing the binary with Linux file capabilities.
- `zpool`, if you want ZFS pool details.
- `mdadm`, if you want mdraid details beyond `/proc/mdstat`.
- LVM tools (`pvs`, `vgs`, `lvs`), if you want LVM details.
- `smartctl`, usually from smartmontools, if you want SMART health data.
On Debian or Ubuntu-style systems, the optional runtime tools can typically be installed with:

```sh
sudo apt install make libcap2-bin zfsutils-linux mdadm lvm2 smartmontools
```

On Fedora-style systems:

```sh
sudo dnf install make libcap zfs-fuse mdadm lvm2 smartmontools
```

Distribution package names vary. Optional storage tools are discovered in standard local system command directories such as `/usr/bin`, `/usr/sbin`, `/usr/local/bin`, and `/usr/local/sbin`; install the package that provides them for your distribution.
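Where those directories are on your shell's PATH, a quick `command -v` loop approximates that discovery (tool names from the prerequisites above):

```sh
for t in zpool mdadm pvs vgs lvs smartctl; do
  command -v "$t" >/dev/null 2>&1 && echo "$t: $(command -v "$t")" || echo "$t: missing"
done
```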
Build a debug binary:

```sh
cargo build
```

Build an optimized release binary:

```sh
cargo build --release
```

The release binary is written to:

```
target/release/diskwatch
```

The Makefile wraps the release build:

```sh
make build
```

Build Debian and RPM packages:

```sh
make package VERSION=0.1.0
make check-packages VERSION=0.1.0
```

Package artifacts are written to `dist/` by default:

- `diskwatch_0.1.0_amd64.deb`
- `diskwatch-0.1.0-1.x86_64.rpm`

Both packages install diskwatch to `/usr/bin/diskwatch`, keep the binary executable, and run this during package installation:

```sh
setcap cap_dac_read_search+ep /usr/bin/diskwatch
```

Required package build tools:

- `dpkg-deb`, usually provided by the Debian or Ubuntu `dpkg` package.
- `rpmbuild`, usually provided by the Fedora, RHEL, or Debian `rpm` package.
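To confirm what the packages will install before deploying them, the standard inspection tools work on the artifacts in `dist/` (file names here match the 0.1.0 example above):

```sh
dpkg-deb --contents dist/diskwatch_0.1.0_amd64.deb
rpm -qlp dist/diskwatch-0.1.0-1.x86_64.rpm
```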
Run the full local check suite:

```sh
make check
```

That runs:

```sh
cargo fmt --check
cargo test
cargo clippy -- -D warnings
```

You can also run individual targets:

```sh
make fmt
make test
make clippy
```

The recommended install path is through the Makefile:

```sh
make install
```

By default this:

- Builds `target/release/diskwatch` if needed.
- Installs it to `/usr/local/bin/diskwatch`.
- Applies the `cap_dac_read_search+ep` file capability set.
- Prints the resulting capability with `getcap`.
Verify the installed command:
```sh
command -v diskwatch
diskwatch --once
```

If you prefer to run the privileged install step explicitly, build first and then run install under sudo:

```sh
make build
sudo make install
```

The prebuild matters because `sudo make install` runs as root and the Makefile expects the release binary to already exist in that case.
Install under a different prefix:
PREFIX="$HOME/.local" make installInstall to a specific binary directory:
BINDIR="$HOME/.local/bin" make installInstall to an exact path:
INSTALL_PATH="$HOME/.local/bin/diskwatch" make installTo install only the binary:
make install-binaryWithout capabilities, diskwatch still runs, but protected storage metadata and
SMART details may be unavailable on some hosts.
You can apply or reapply capabilities later:
make capabilityCheck the installed capabilities:
make show-capability
getcap "$(command -v diskwatch)"Remove the installed binary:
make uninstallYou can also install with Cargo:
cargo install --path .Cargo does not apply Linux file capabilities. If you need protected storage
metadata reads, apply the capabilities manually or use make install.
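A manual equivalent of the Makefile's capability step after a Cargo install might look like this, assuming Cargo's default `~/.cargo/bin` install directory:

```sh
sudo setcap cap_dac_read_search+ep "$HOME/.cargo/bin/diskwatch"
getcap "$HOME/.cargo/bin/diskwatch"
```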
The core activity, filesystem, and block-device sections rely on /proc,
/sys, and mounted filesystems:
```sh
test -r /proc/diskstats
test -r /proc/mounts
ls /sys/block
```

These data sources normally work as an unprivileged user on Linux. Some containers or hardened hosts may hide devices or expose only virtualized storage.
ZFS details use read-only zpool commands:
```sh
zpool list
zpool status
```

If ZFS is not installed or no pools are present, the ZFS section reports N/A and includes a diagnostic when useful.
For deeper ZFS reporting, run:
```sh
diskwatch --zfs-deep
diskwatch --once --zfs-deep
```

Deep ZFS mode collects:

- Pool capacity, health, scan, error, action, and topology from `zpool list` and `zpool status -P`.
- Vdev read/write rates, IOPS, latency, and queue counters from one bounded `zpool iostat -Hp -vlq -y <pools> 1 1` command.
- Dataset usage and selected properties from scoped `zfs list` and `zfs get` commands for the detected pools.
- ARC, L2ARC, dbuf, dnode, ZIL, zfetch, ABD, and recent TXG summaries from `/proc/spl/kstat/zfs` when readable.
Permission failures and missing kstat files degrade to diagnostics and N/A
fields. The command never changes pool state, starts scrubs, clears errors, or
modifies ZFS properties.
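To check whether the kstat side of deep mode has anything to read on your host, list the directory yourself; exact file names vary across OpenZFS versions:

```sh
ls /proc/spl/kstat/zfs/ 2>/dev/null || echo "ZFS kstats not available"
head -n 5 /proc/spl/kstat/zfs/arcstats 2>/dev/null
```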
mdraid state is read from:
```
/proc/mdstat
```

Additional details use:

```sh
mdadm --detail --scan
```

If `mdadm` is missing, `/proc/mdstat` can still provide array state on systems that use mdraid.
LVM details use:
```sh
pvs --readonly
vgs --readonly
lvs --readonly
```

Install LVM tools if you want physical volume, volume group, and logical volume details.
SMART health uses `smartctl`:

```sh
smartctl -n standby -A -H /dev/sda
```

Device names vary by host. diskwatch probes common physical disk names such as `sd*`, `hd*`, `nvme*`, and `mmcblk*`; it skips logical and virtual devices such as `dm-*`, `vda`, `xvda`, `nbd*`, `rbd*`, and `zd*` so they do not consume the optional command budget. The `-n standby` guard reduces the chance of waking sleeping disks, but smartctl device autodetection can still wake some hardware. Some drives, USB adapters, NVMe devices, and RAID controllers require different smartctl options or elevated privileges. The monitor reports missing, asleep, or unreadable SMART data as N/A rather than failing.
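If a device shows N/A, running `smartctl` by hand with a device-type hint sometimes helps; two common variants, where the device paths and the `-d sat` bridge type are host-specific assumptions:

```sh
sudo smartctl -A -H /dev/nvme0                 # NVMe controller
sudo smartctl -d sat -n standby -A -H /dev/sdb # SATA disk behind a USB-SATA bridge
```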
Some filesystems, package managers, or copy operations do not preserve Linux file capabilities. If the installed binary is replaced after install, run:
```sh
make capability
```

The default capability is:

```
cap_dac_read_search+ep
```

It can help read protected metadata, but it does not grant write access and does not bypass all kernel, device, container, or command-level restrictions.
Start the interactive TUI:
```sh
diskwatch
```

Exit the TUI with any of:

- `q`
- `Esc`
- `Ctrl-C`

Use a custom update interval:

```sh
diskwatch --interval 500ms
diskwatch --interval 2s
```

Print one text report and exit:

```sh
diskwatch --once
```

Use a custom sampling interval for the one-shot report:

```sh
diskwatch --once --interval 250ms
```

In `--once` mode, diskwatch takes an initial sample, waits for the interval, then takes a second sample so activity rates can be computed from counter deltas.
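For periodic logging, `--once` composes naturally with a scheduler or a plain loop; a minimal sketch in which the 60-second cadence and log path are illustrative:

```sh
while true; do
  diskwatch --once --interval 250ms >> "$HOME/diskwatch.log"
  sleep 60
done
```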
Loop devices and loop-backed filesystem rows are hidden by default. Show them when needed:
```sh
diskwatch --loop
```

Tmpfs filesystem rows are hidden by default. Show them when needed:

```sh
diskwatch --tmpfs
```

Show CLI help:

```sh
diskwatch --help
```

Current options:

```
Usage: diskwatch [OPTIONS]

Options:
      --interval <INTERVAL>  [default: 1s]
      --once
      --loop
      --tmpfs
      --zfs-deep
  -h, --help                 Print help
```
diskwatch is designed as a read-only monitor. It reads Linux procfs/sysfs
files, mounted filesystem statistics, and optional command output. It does not
write to block devices, run repairs, start scrubs, alter mounts, modify ZFS
pools, change mdraid arrays, change LVM metadata, start SMART tests, or tune
kernel storage settings.
- `src/main.rs`: binary entry point.
- `src/lib.rs`: mode selection, TUI loop, and terminal lifecycle.
- `src/cli.rs`: command-line options.
- `src/diskstats.rs`: `/proc/diskstats` parsing and activity calculations.
- `src/block.rs`: `/sys/block` inventory and block-device metadata.
- `src/filesystems.rs`: mount parsing and filesystem capacity.
- `src/raid.rs`: `/proc/mdstat` and optional `mdadm` parsing.
- `src/zfs.rs`: optional ZFS command and kstat parsing.
- `src/lvm.rs`: optional LVM command parsing.
- `src/smart.rs`: optional `smartctl` parsing.
- `src/commands.rs`: timeout-aware helper for optional read-only commands.
- `src/snapshot.rs`: combined sampling state.
- `src/render.rs`: TUI rendering and one-shot text reports.
- `Makefile`: build, install, capability, package, and check targets.