burette: add fio-based block I/O performance tests#3127
jstarks wants to merge 5 commits into microsoft:main from
Conversation
Pull request overview
Adds a new burette benchmark that measures block I/O performance (via fio) in an Alpine Linux UEFI VM, aimed at comparing virtio-blk vs. storvsc backend overhead under a warm-VM iteration model.
Changes:
- Introduces a new `disk_io` warm performance test that boots Alpine, installs fio, discovers an attached data disk, and records BW/IOPS metrics.
- Extends the burette CLI to expose the new `disk-io` test plus `disk-backend` and `disk-file`/size options.
- Adds required dependencies for disk resources, GUIDs, PCIe config, and resource plumbing.
Reviewed changes
Copilot reviewed 4 out of 5 changed files in this pull request and generated 7 comments.
Summary per file:
| File | Description |
|---|---|
| petri/burette/src/tests/mod.rs | Registers the new disk_io test module. |
| petri/burette/src/tests/disk_io.rs | Implements the fio-based block I/O benchmark, disk attachment, and metric parsing. |
| petri/burette/src/main.rs | Adds the disk-io test selection and related CLI flags. |
| petri/burette/Cargo.toml | Adds dependencies needed to configure disks/PCIe/virtio resources. |
| Cargo.lock | Updates lockfile for new dependencies. |
Add a disk_io benchmark to burette that boots an Alpine Linux VM, installs fio, and measures sequential/random read/write throughput (MiB/s) and IOPS. Supports virtio-blk and storvsc (synthetic SCSI) backends. Uses UEFI boot with PCIe topology and discovers the data disk via /sys/block. Defaults to a RAM-backed disk to isolate virtio/storvsc overhead; optionally accepts a host file path for realistic storage latency.
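The fio invocation behind the benchmark might be built roughly as in the sketch below. The fio flags themselves (`--rw`, `--bs`, `--direct`, `--ioengine`, `--output-format=json`) are real fio options, but the specific job parameters (block sizes, runtime, engine choice) and the helper name are assumptions, not the PR's actual values:

```rust
/// Hypothetical sketch: build the fio command line for one benchmark mode.
/// Job parameters (block size, runtime, ioengine) are illustrative only.
fn fio_cmd(device: &str, rw: &str, block_size: &str) -> String {
    format!(
        "fio --name=bench --filename={device} --rw={rw} --bs={block_size} \
         --direct=1 --ioengine=libaio --time_based --runtime=10 \
         --output-format=json"
    )
}
```

One such command would be run per mode (read, write, randread, randwrite), with sequential modes typically using a large block size and random modes a small one.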
- Fix VirtioBlk doc comment: MMIO -> PCIe (virtio-pci)
- Add disk size validation (ensure > 0 GiB)
- Change make_disk_resource to return Result instead of panicking
- Hoist disk resource creation outside the closure for error propagation
- Use GUID-based sysfs discovery for the storvsc data disk instead of fragile sd* device name heuristics
- Restructure discover_data_disk to only list /sys/block for virtio-blk
- Add a disk I/O section to the Guide's perf.md
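The GUID-based sysfs lookup could be sketched as follows. The attribute name (`device_id`), the directory layout, and the function name are assumptions for illustration; the PR's actual code reads real VMBus sysfs paths, which differ in detail:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Hypothetical sketch of GUID-based discovery: walk a /sys/block-like
/// directory and match each block device against an instance GUID exposed
/// as a sysfs attribute (attribute name `device_id` is assumed).
fn find_disk_by_guid(sys_block: &Path, guid: &str) -> io::Result<Option<String>> {
    for entry in fs::read_dir(sys_block)? {
        let entry = entry?;
        let name = entry.file_name().to_string_lossy().into_owned();
        // Read the device's instance GUID; skip devices without one.
        let id_path = entry.path().join("device_id");
        if let Ok(id) = fs::read_to_string(&id_path) {
            if id.trim().eq_ignore_ascii_case(guid) {
                return Ok(Some(format!("/dev/{name}")));
            }
        }
    }
    Ok(None)
}
```

Matching on a stable GUID rather than `sd*` names avoids breakage when device enumeration order changes between boots.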
Force-pushed from 15bd329 to f0cd0db
mattkur left a comment:
This looks good to me. I am concerned about the reliability and portability of reaching out to the internet to install things like fio, but that will be addressed by microsoft/openvmm-deps#42.
I'm also a little dismayed by the number of lines of code required to set up the PCIe hierarchy, but again this is a good starting point. Thanks!
Jack is fixing this.
If anyone is interested in reviewing #3118, then this is ready to go :)
- Run each fio mode (read, write, randread, randwrite) once instead of running randread/randwrite twice. Extract both BW and IOPS from the same JSON output for random modes, saving ~30s per iteration.
- Select the Alpine VHD based on MachineArch::host() instead of hardcoding ALPINE_3_23_X64, matching the pattern used by the network test. Previously this would fail on aarch64 hosts due to arch mismatch.
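Extracting both BW and IOPS from a single fio JSON result could look like this minimal std-only sketch. The real code presumably uses a proper JSON parser and indexes `jobs[0].read`/`write` explicitly; fio's JSON output does report `bw` (in KiB/s) and `iops` under each job's per-direction object, but this naive key scan is purely illustrative:

```rust
/// Minimal std-only sketch: find the first numeric value following `"key":`
/// in fio's --output-format=json output. A real implementation would use a
/// JSON parser (e.g. serde_json) rather than scanning for the key by name.
fn extract_f64(json: &str, key: &str) -> Option<f64> {
    let idx = json.find(&format!("\"{key}\""))?;
    let rest = &json[idx..];
    let colon = rest.find(':')?;
    let tail = rest[colon + 1..].trim_start();
    // Take the leading run of numeric characters and parse it.
    let end = tail
        .find(|c: char| !(c.is_ascii_digit() || matches!(c, '.' | '-' | '+' | 'e' | 'E')))
        .unwrap_or(tail.len());
    tail[..end].parse().ok()
}
```

Reading both metrics from one run is what saves the second fio invocation per random mode.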