
burette: add virtio-fs performance test #3446

Merged
benhillis merged 3 commits into microsoft:main from benhillis:user/benhill/burette-virtiofs-test on May 8, 2026

Conversation

@benhillis
Member

Adds a virtio-fs file server performance test to the burette benchmark suite.

What it does

Boots a minimal Linux VM with a virtio-fs device backed by a host tempdir, mounts it inside the guest, and runs fio against a regular file on the mount. Measures sequential and random read/write bandwidth (MiB/s) and IOPS across multiple iterations using warm mode (VM booted once, reused for all iterations).

Test design

  • Sequential tests use 128k block size to exercise the FUSE bulk I/O path
  • Random tests use 4k block size for IOPS measurement
  • Guest page caches are dropped before each fio job so reads go through the full FUSE request path
  • ramp_time=0 ensures cold-start measurement; harness warmup iteration handles VM warm-up
  • --end_fsync=1 flushes buffered writes through FUSE before reporting
  • --direct=0 (buffered I/O) because Linux FUSE does not support O_DIRECT without FOPEN_DIRECT_IO
  • Default test file size is 512 MiB (exceeds guest page cache in the 1 GB VM)
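Taken together, the parameters above suggest fio job definitions along these lines. This is a sketch only: the job names, runtime, and queue depth are assumptions, not taken from the PR's actual job file.

```ini
; Hypothetical fio job file reflecting the settings described above.
[global]
directory=/tmp/vfs        ; the virtio-fs mount, as seen after chroot into /perf
size=512m                 ; exceeds guest page cache in the 1 GB VM
direct=0                  ; buffered I/O: FUSE lacks O_DIRECT without FOPEN_DIRECT_IO
ramp_time=0               ; cold start; the harness warmup iteration warms the VM
end_fsync=1               ; flush buffered writes through FUSE before reporting
ioengine=io_uring
iodepth=32
runtime=10
time_based=1

[seq_read]
rw=read
bs=128k                   ; exercises the FUSE bulk I/O path

[rand_read]
rw=randread
bs=4k                     ; small blocks for IOPS measurement
```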

Metrics

| Metric | Unit |
| --- | --- |
| fio_virtiofs_seq_read_bw | MiB/s |
| fio_virtiofs_seq_write_bw | MiB/s |
| fio_virtiofs_rand_read_bw | MiB/s |
| fio_virtiofs_rand_read_iops | IOPS |
| fio_virtiofs_rand_write_bw | MiB/s |
| fio_virtiofs_rand_write_iops | IOPS |
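The metrics above come from fio's machine-readable output. A minimal sketch of how such values could be extracted from `fio --output-format=json` output (the function name and metric-key scheme here are illustrative, not burette's actual Rust implementation):

```python
import json


def extract_metrics(fio_json: str) -> dict:
    """Pull bandwidth (MiB/s) and IOPS out of `fio --output-format=json` output.

    Assumes fio's JSON schema: a top-level "jobs" array where each job has
    "read"/"write" sections containing "bw" (KiB/s) and "iops" fields.
    """
    data = json.loads(fio_json)
    metrics = {}
    for job in data["jobs"]:
        name = job["jobname"]
        for direction in ("read", "write"):
            section = job[direction]
            # Skip directions the job did not exercise (zero IOPS).
            if section["iops"] > 0:
                metrics[f"{name}_{direction}_bw"] = section["bw"] / 1024  # KiB/s -> MiB/s
                metrics[f"{name}_{direction}_iops"] = section["iops"]
    return metrics
```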

Usage

./target/release/burette run --test virtio-fs --iterations 5

Ben Hillis and others added 2 commits May 8, 2026 18:55
Adds a virtio_fs test to burette that boots a Linux VM with a
virtio-fs device backed by a host tempdir, mounts it inside the
guest, and runs four fio jobs (sequential/random read/write,
io_uring, 4 KiB blocks, iodepth 32) against a pre-allocated 128 MiB
test file.

This gives us an end-to-end measurement of the OpenVMM virtio-fs
file server's throughput and IOPS that we can use as the baseline
for upcoming perf work in oss/vm/devices/virtio/virtiofs/. Use:

  burette run --test virtio-fs -o baseline.json
  # ... apply a perf change, rebuild openvmm ...
  burette run --test virtio-fs -o candidate.json
  burette compare baseline.json candidate.json

Reports:
- fio_virtiofs_seq_read_bw      (MiB/s)
- fio_virtiofs_seq_write_bw     (MiB/s)
- fio_virtiofs_rand_read_bw     (MiB/s)
- fio_virtiofs_rand_read_iops   (IOPS)
- fio_virtiofs_rand_write_bw    (MiB/s)
- fio_virtiofs_rand_write_iops  (IOPS)

The mount lives at /perf/tmp/vfs in the guest (the tmpfs that
prepare_chroot sets up over the read-only erofs perf rootfs), so
fio sees /tmp/vfs/test.dat once chrooted into /perf.
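The guest-side setup described above might look roughly like the following. The virtio-fs device tag (`vfs` here) is a hypothetical name; burette's actual tag and setup commands may differ.

```sh
# Inside the guest: mount the virtio-fs share under the tmpfs that
# prepare_chroot lays over the read-only erofs perf rootfs.
mkdir -p /perf/tmp/vfs
mount -t virtiofs vfs /perf/tmp/vfs   # "vfs" is a hypothetical device tag

# Once chrooted into /perf, fio addresses the file as /tmp/vfs/test.dat.
```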

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

The original test used 4k blocks with direct=0 and ramp_time=5 on a 128 MiB
file in a 1 GB VM. After the ramp period the entire file sat in guest page
cache, so measurements reflected page-cache throughput rather than the
virtio-fs FUSE request path.

Fixes:
- Increase default file size to 512 MiB (exceeds page cache capacity)
- Drop guest page caches before each fio job
- Set ramp_time=0 so reads start cold (harness warmup handles VM warm-up)
- Use 128k blocks for sequential tests (exercises zero-copy + max-pages)
- Keep 4k blocks for random tests (measures IOPS / multi-queue)
- Add --invalidate=1 and --end_fsync=1 to flush writes through FUSE
- Align CLI default (--virtiofs-file-size-mib) with the new constant

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Copilot AI review requested due to automatic review settings May 8, 2026 18:57
@benhillis benhillis requested a review from a team as a code owner May 8, 2026 18:57
Contributor

Copilot AI left a comment


Pull request overview

Note

Copilot was unable to run its full agentic suite in this review.

Adds a new burette benchmark that measures virtio-fs performance by booting a minimal Linux VM, mounting a host-backed virtio-fs share, and running fio to report throughput and IOPS metrics.

Changes:

  • Introduces a new warm-mode virtio-fs fio benchmark test with bandwidth/IOPS parsing and perf recording.
  • Wires the new virtio_fs test into the test module registry and CLI (burette run + packaging artifacts).
  • Adds a CLI flag to configure the virtio-fs test file size.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.

| File | Description |
| --- | --- |
| petri/burette/src/tests/virtio_fs.rs | Implements virtio-fs VM setup, fio execution, and metric extraction. |
| petri/burette/src/tests/mod.rs | Exposes the new virtio_fs test module. |
| petri/burette/src/main.rs | Adds CLI integration, arg plumbing, and artifact registration for the new test. |

  • Comment thread on petri/burette/src/tests/virtio_fs.rs (outdated)
  • Comment thread on petri/burette/src/tests/virtio_fs.rs
  • Comment thread on petri/burette/src/tests/virtio_fs.rs (outdated)
- Add --time_based=1 so fio runs for the full 10s measurement window
  even if the file size is consumed early on fast configurations.
- Run sync before dropping caches to flush dirty pages first, making
  cache state deterministic between fio jobs.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@benhillis benhillis merged commit 4e6df8d into microsoft:main May 8, 2026
65 checks passed