ToDo/Block/old

From QEMU
= Old Items Archived in 2020 =
== op blockers [Jeff] ==
* Mutual exclusion of operations/background jobs
** Streaming in two different parts of the backing chain - allowed? (Benoît thought not, but does anything break?)
** Does streaming only require that streamed images stay read-only (i.e. the backing chain segment on which the operation is performed)?
** Live commit in the opposite direction at the same time?
** Action:
*** Draw up matrix of operations (mirror, stream, resize, etc)
*** Make op blocker mechanism use matrix as data instead of code (define an array)
*** Enforce that new QMP/QAPI commands and block jobs add themselves to the matrix
* node-name allows starting operations in the middle of the chain; we need to protect against incompatible concurrent operations
** In fact, we even used paths before node-name (e.g. for live commit), so this has existed for a while
* bs->backing_blocker already forbids almost everything on backing files
** Except live commit, which needs to be forbidden only when another job runs on the same chain
* Plan for 2.1 was to block all nodes recursively
** bdrv_swap() during block job completion turns out to be nasty, especially for live commit of active layer:
*** Need to clean up blockers on the removed subchain
*** Which blockers should the newly swapped in node have?
* Alternative plan for 2.1:
** Keep checking blockers on the requested node (for bs->backing_blockers to be effective)
** But also check in the active layer because this is where block jobs do their blocking
*** bottommost node might work as well
**** As Kevin pointed out on IRC, in the current code blockers exist on backing files that don't exist on the active layer
* Long term (2.2+): Block categories of operations
** Some thoughts, in the form of code example for block-commit: http://fpaste.org/113005/14036975/
* blockdev-add probably shouldn't be able to reference a node that has blockers
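The "matrix as data" action item above could be prototyped roughly as follows (names and policy entries are hypothetical illustrations, not QEMU's actual op blocker code, which is in C):

```python
# Hypothetical data-driven op blocker matrix. True means the requested
# operation may start while the listed operation is already running on the
# same chain. Entries below are examples, not settled policy.
COMPAT = {
    ("stream", "stream"): False,   # two streams on one chain conflict
    ("stream", "resize"): False,
    ("mirror", "stream"): False,
    ("mirror", "resize"): False,
    ("backup", "stream"): True,    # example entry; real policy TBD
}

def can_start(running_ops, requested):
    """Check the requested op against every op already running on the chain.

    Unknown pairs default to 'blocked', so a new QMP command or block job
    that forgets to add itself to the matrix fails closed instead of racing.
    """
    return all(COMPAT.get((op, requested), False) for op in running_ops)
```

Defaulting unknown pairs to "blocked" is one way to enforce the third action item: a command that is missing from the matrix cannot silently run concurrently with anything.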
== -blockdev world ==
=== Basic infrastructure for blockdev-add [Kevin, Markus] ===
* Convert remaining drivers to make use of "QDict options" argument: iscsi, sheepdog, rbd
=== blockdev-add + blockdev-del QMP interface ===
* By default, return an error for blockdev-del if reference count > 1
* But have a force option that closes the image file, even if it breaks the remaining users (e.g. uncooperative guest that doesn't release its PCI device)
* Note: backends created with blockdev-add are currently indestructible: they aren't deleted on frontend unplug (commit 2d246f0), and can't be deleted with drive_del (commit 48f364d)
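The refcount/force semantics proposed above can be sketched like this (hypothetical helper, not QEMU's implementation):

```python
# Sketch of the proposed blockdev-del behaviour: refuse while other users
# hold references, unless the caller forces the deletion.
class Backend:
    def __init__(self, name):
        self.name = name
        self.refcount = 1        # the monitor's own reference

def blockdev_del(backend, force=False):
    """Delete a backend; error out if it is still in use, unless forced."""
    if backend.refcount > 1 and not force:
        raise RuntimeError("backend '%s' is in use" % backend.name)
    backend.refcount = 0         # image file closed, even for stubborn users
```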
=== Split BlockBackend from BlockDriverState [Max, Markus] ===
* Make block driver private embedded in BlockDriverState instead of opaque pointer
* To be moved to BlockFilters later (stay in BDS for now; BlockFilters implemented as BlockDriver):
** bps_limits
** copy_on_read
=== BlockFilter and dynamic reconfiguration of the BDS graph ===
* Add/remove (e.g. filter) BDSes at runtime
* Ability to implement light-weight block drivers that play together with snapshots (e.g. block debug, active-mirroring, copy-on-read, I/O throttling, etc)
** Converting current I/O throttling code to a block filter should be simple, mostly a mechanical task.
* Requires BlockBackend split
** Keep filters on top even after taking snapshots
* filters implement ops normally, and call out to their child BDS explicitly, no before- or after-ops-magic
* Benoît's customer may want I/O throttling in arbitrary places in the graph
* Be careful to never add cycles to the graph!
=== Dynamic graph reconfiguration (e.g. adding block filters, taking snapshots, etc.) ===
* Where does the new node get inserted and how to specify how it is linked up with the existing nodes?
** On a given "arrow" between two nodes (only works with 1 child, 1 parent)
** On a given set of arrows (possibly more complex than what is really needed?)
* How does removing a node work with more than one child of the deleted node?
* Keep using the existing QMP command for I/O throttling for now, until we understand the general problem reasonably well
* Action:
** Figure out the general problem
** Split I/O throttling off into own BDS [Benoît]
*** Requires some care with snapshots etc.
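The single-arrow insertion case above, including the cycle check, can be sketched like this (hypothetical node representation, not the real BDS graph code):

```python
# Sketch: insert a filter node on one parent->child "arrow" of the graph,
# refusing insertions that would create a cycle.
class Node:
    def __init__(self, name):
        self.name = name
        self.children = []

def reachable(src, dst):
    """True if dst can be reached from src by following child links."""
    stack, seen = [src], set()
    while stack:
        n = stack.pop()
        if n is dst:
            return True
        if id(n) in seen:
            continue
        seen.add(id(n))
        stack.extend(n.children)
    return False

def insert_filter(parent, child, filt):
    """Replace the parent->child edge with parent->filt->child."""
    assert child in parent.children
    if filt is parent or reachable(child, filt):
        raise ValueError("insertion would create a cycle")
    parent.children[parent.children.index(child)] = filt
    filt.children.append(child)
```

This only covers the simple 1-child/1-parent case; inserting on a set of arrows or removing a node with several children is exactly the part the open questions above leave unresolved.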
=== BDS graph rules and manipulating arbitrary nodes ===
* Arbitrary nodes
** Accept node-name where we now have other means to id BDS
*** drive-mirror of arbitrary node [Berto]
*** block-stream of arbitrary node [Berto]
** Action:
*** Add base-nodename argument to block-stream command [Jeff]
*** Allow node names in the device argument of the block-stream command [Berto]
**** If command can modify part of a backing chain, need to add option to update the parent's backing filename field on disk! [Jeff]
**** Add optional backing-filename argument (since libvirt may use fd passing and QEMU's filename is useless) [Jeff]
***** Done: block-commit, block-stream, change-backing-file
***** Might need more
*** Deprecate filename references in QMP commands in favour of node names (e.g. streaming base) [Jeff?]
== New design for better encryption support [Dan] ==
* Deprecated in 2.3 (commit a1f688f), disabled in the system emulator in 2.7 (commit 8c0dcbc4a)
* Dan Berrange intends to work on a replacement, starting in a few months https://www.berrange.com/posts/2015/03/17/qemu-qcow2-built-in-encryption-just-say-no-deprecated-now-to-be-deleted-soon/
** LUKS format driver is merged
** qcow2 integration is still missing
== Block jobs ==
* Main loop prints "main-loop: WARNING: I/O thread spun for 1000 iterations" when block job is running.
** We have "block_job_sleep_ns(..., 0)" in block job coroutines, but that doesn't really yield the BQL to VCPU as desired.
=== Live streaming of intermediate layers (using block-stream) [Berto] ===
* QEMU part is merged
* libvirt is going to expose the functionality, may need introspection to detect it, though
=== Active mirroring: just like mirroring, but live, on the fly, skip the bitmap [Kevin] ===
* similar to drive-backup
* security (virus-scan or some sort of inspection)
* should be implemented as a block filter
=== Remove bs->job field and allow multiple jobs on a BDS [John] ===
* allows more than one block job on a block device at a time
* infrastructure and refactoring work
* Careful not to break QMP API, need wrapper
* Allow non-block jobs (long-running operations outside block layer)
=== Image fleecing [jsnow] ===
* Image creation with existing BlockDriverState as backing file (BlockDriverState ref count)
* Patches: http://lists.gnu.org/archive/html/qemu-devel/2013-11/msg03692.html
** still not merged
* Writable backing file
=== Incremental backup [jsnow] ===
* Backup applications need a dirty block bitmap so they can read only blocks that changed
* Two approaches discussed:
** Dirty bitmap file
** Write changed data through NBD:
*** See also: http://lists.gnu.org/archive/html/qemu-devel/2013-11/msg03035.html
** Merkle tree - hash tree allows efficient syncing of image files between hosts
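The Merkle-tree bullet could work roughly as follows (illustrative sketch; chunk size, hash choice and tree layout are all assumptions):

```python
# Sketch: hash fixed-size image chunks into a tree; comparing roots tells two
# hosts whether their copies diverge, and a real sync would walk down from
# the root, skipping identical subtrees, to find the chunks to transfer.
import hashlib

def merkle(chunks):
    """Return a list of levels, leaves first; the root is tree[-1][0]."""
    level = [hashlib.sha256(c).digest() for c in chunks]
    tree = [level]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            # An odd trailing node is paired with itself.
            pair = level[i] + (level[i + 1] if i + 1 < len(level) else level[i])
            nxt.append(hashlib.sha256(pair).digest())
        level = nxt
        tree.append(level)
    return tree

def changed_chunks(a, b):
    """Leaf indices whose hashes differ (assumes equal chunk counts)."""
    return [i for i, (x, y) in enumerate(zip(a[0], b[0])) if x != y]
```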
=== Block job I/O throttling ===
* Reuse Benoît's throttling implementation
* Handle large buffer sizes used by block jobs
** block jobs like to work in bulk for efficiency but throttling doesn't like big, bursty requests (note from Benoît: I think the solution would be to make the clock used for the throttling computation coarser)
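Benoît's coarser-clock suggestion could be prototyped roughly like this (hypothetical class, not the existing throttling code): a bulk block-job request is admitted after a whole number of coarse ticks instead of being smeared over fine-grained timers.

```python
# Sketch of a token bucket driven by a coarse clock. A big bursty request
# just waits ticks_needed() coarse ticks for enough budget to accumulate.
class CoarseThrottle:
    def __init__(self, bytes_per_tick):
        self.bytes_per_tick = bytes_per_tick
        self.credit = 0

    def ticks_needed(self, nbytes):
        """How many coarse clock ticks until nbytes may be issued."""
        missing = max(0, nbytes - self.credit)
        return -(-missing // self.bytes_per_tick)   # ceiling division

    def account(self, nbytes, ticks_elapsed):
        """Bank the elapsed ticks' budget and charge the request against it."""
        self.credit += ticks_elapsed * self.bytes_per_tick
        assert self.credit >= nbytes
        self.credit -= nbytes
```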
=== Atomic block-job-cancel ===
* Stopping drive-mirror on multiple disks simultaneously should be atomic
* Currently 'stop'/'cont' is required to issue several block-job-cancel commands atomically
* QMP 'transaction' should support block-job-cancel so it can be issued without 'stop'/'cont'
== Block Drivers ==
=== dmg ===
* [[ToDo/Block/DmgChunkSizeIndependence|Chunk size independence]] (for reading modern dmg files)
== Test ==
=== Test Infrastructure ===
* Desires?
* support for testing AIO requests
** No design yet, but we need some way to label I/O requests in blkdebug
** right now we sleep, which is stupid
** related to SCSI req tags
=== Tests for -drive discard= ===
* Currently the discard feature is not well-tested in qemu-iotests
=== Block device model tests ===
* AHCI got decent coverage
* rest basic to nonexistent
=== iotests.py - Python module for writing qemu-iotests ===
* Share filters with bash ./common.filter file (simple solution: invoke bash in subprocess and set necessary environment variables like TEST_DIR)
=== Broken or unreliable qemu-iotests ===
* 136 on tmpfs is not working or unreliable
== qcow2 ==
* Cluster allocation performance: [Kevin/John]
** Delayed COW
** Use a single request to write both guest data and COW padding
** Journalling (should help a lot with internal COW, and possibly with delayed COW)
* Run-time image file preallocation (fallocate 128 MB or whatever at the end of the image file to avoid host file system fragmentation; like Parallels series ''"write/create for Parallels images with reasonable performance"'' in v3)
* qcow2 backing file validation (parent modification invalidates children) [jeff]
** similar to vmdk and vhdx
* qcow2 internal snapshot read-only BlockDriverState
** Allows accessing snapshots while guest accesses disk image
** Tricky, insufficient prio
* subclusters
** allocate larger chunks, COW smaller ones, for performance
* Use finer grained locking in the Qcow2Cache so that random I/O loads can load/update multiple L2 tables at once instead of serialising everything
* Shrink support in qcow2_truncate()
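The "single request" bullet from the list above could look roughly like this (illustrative helper, not qcow2's actual allocation path, which is in C):

```python
# Sketch: instead of separate writes for the COW head, the guest data and
# the COW tail of a newly allocated cluster, assemble one contiguous buffer
# and issue a single write.
def allocate_write(old_cluster, guest_off, guest_data):
    head = old_cluster[:guest_off]                    # COW padding before
    tail = old_cluster[guest_off + len(guest_data):]  # COW padding after
    buf = head + guest_data + tail
    assert len(buf) == len(old_cluster)
    return buf                                        # written with one pwrite
```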
=== Header extension for qcow2 generation id ===
Desirable to add two header extensions:
# Double Generation id (one for metadata, one for guest visible content), coupled with an auto-clear feature bit, for use in backing images
# Expected backing file generations, for use in overlay images
Usage:
* Any program that opens a qcow2 file read-write with a generation id header must increment the appropriate generation ids before making that sort of change to that file.  The id does not have to be incremented for every change to the file, only for the first time a change is made since the file was opened for writing.
* A generation id is valid only if the auto-clear bit is still set (thus, if an older qemu opens a backing image, it is required to leave the unrecognized generation id header alone, but also required to clear the unknown auto-clear bit, making it obvious that the generation id header may no longer be accurate and a new generation id is needed once new qemu again handles the file).
* Any program that opens a qcow2 file that has expected backing generation header should default to verifying that the backing file has that generation id.  If the backing file id is not correct, then the access should fail unless the user supplies an extra flag to acknowledge the risk/update the expected id.
* Should internal snapshots track id? Open question.
** If snapshot includes generation id, then you can roll back to that id as part of reverting to a snapshot. But the id must then be something like a UUID, as mere linear incrementing causes branching collisions (take snapshot at id 2, then create id 3, then roll back to 2, then create a new id 3, but the two "id 3" states are not the same, which breaks any overlay depending on id 3).
** If snapshot does not include generation id, then the mere act of taking or reverting to a snapshot increments the id, and overlays must use a forced open to accept the new id, even if guest-visible contents are unchanged.
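The rules above can be sketched as follows (field and method names are hypothetical; the real design would live in the qcow2 header, and only the guest-content id is modelled here, the metadata id being analogous):

```python
# Sketch of the generation-id rules: bump once per read-write session, let an
# auto-clear bit invalidate ids touched by older QEMUs, and have overlays
# verify the backing file's id on open.
class Qcow2Header:
    def __init__(self):
        self.generation_valid = True   # models the auto-clear feature bit
        self.data_gen = 0              # guest-content generation id
        self._bumped = False           # already incremented since open?

    def open_old_qemu(self):
        # An older QEMU must clear unknown auto-clear bits on write access,
        # signalling that the generation id may no longer be accurate.
        self.generation_valid = False

    def before_data_write(self):
        # Increment once per read-write session, not once per change.
        if not self._bumped:
            self.data_gen += 1
            self._bumped = True

def overlay_check(backing, expected_gen):
    # Overlay open should fail unless the backing id is valid and matches,
    # absent an explicit user override acknowledging the risk.
    return backing.generation_valid and backing.data_gen == expected_gen
```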
== virtio data plane ==
Current state: the old "dataplane" code is gone, all relevant parts of QEMU are now thread-safe (memory API + dirty bitmap). Since 2.8, virtio-blk and virtio-scsi always use "dataplane" code when ioeventfd is on.
* multiqueue
** Single BlockDriverState, multiple independent threads accessing in parallel
** Allows us to extend Linux multiqueue block layer up into guest
** For maximum SMP scalability and performance with high IOPS SSDs
multiqueue plan:
# bdrv_wakeup/BDRV_POLL_WHILE so that only the I/O thread runs aio_poll for the AioContext. RFifoLock gone [2.9]
# aio_co_wake/aio_co_schedule to automatically release/acquire AioContext around coroutine yield [2.9]
# thread-safe CoMutex [2.9]
# fine-grained AioContext critical sections [2.9]
# thread-safe BlockDriverState [2.10]
# thread-safe drivers [2.10, needs QED conversion to coroutines]
# thread-safe block jobs [2.10]
# thread-safe virtio-blk [2.10]
# thread-safe virtio-scsi [2.10]
# separate threads for each virtqueue
== Make qemu-img use QMP command implementations internally (e.g. use mirroring for qemu-img convert) [Max] ==
* Ensures that live operations provide the same functionality as we have offline
== I/O accounting (for query-blockstats) ==
* Driving it was made device model's responsibility (commit a597e79 in 2011)
** Most of them still don't
** The ones that do are inconsistent
** Consequently, query-blockstats is better treated as wild guess, not data
* Need to take a step back
** Benoît got us use cases, discussed on list
*** measuring in the device model is good for billing
*** some metrics are missing
*** It would be good to collect the same data everywhere in the BDS graph for telemetry purpose (seeking hidden costs)
*** having a dedicated JSON socket to output accounting data would be good
*** so we can keep analysis out of qemu
* Working on revamping the I/O accounting infrastructure (Benoît)
** Preliminary patches merged
** Averaging module under review
** More to come
== Performance improvements ==
* Use clang coroutines instead of ucontext and compare performance
* using driver=file improves performance a bit. Anything we can do to make this happen by default?
* avoid big allocations for VirtQueueElement.  Old patches from Ming Lei used a special-purpose pool allocator, Paolo posted new patches that use regular malloc with smaller allocations (up to a few hundred bytes)
* Move linux-aio to AioContext. We already have a thread pool, might as well add linux-aio there.
== virtio-blk discard support [Peter Lieven] ==
* spec, guest driver, device model
== virtio-blk gets lots of small requests from the guest ==
* But we don't know why
* Possibly a guest driver issue
* multiwrite was introduced to mitigate this long ago
* Need to perform more benchmarks to see to what extent it exists today
== Trace guest block I/O, replay with qemu-io ==
== Dataplane ==
* AioContext assertions to prevent callbacks in wrong event loop [Stefan]
== Adding QMP to qemu-nbd ==
* wanted so doing things offline works same as online
* Patches from Benoît Canet are on the list, need rebase
* Kevin mentioned one might as well invoke qemu-system-x86_64.
== Adding Sparse File handling to NBD ==
* NBD handles holes inefficiently (passing zeroes over the wire for both reads and writes, no way to query where holes are). Proposals have been made on the NBD list on how to add support, with qemu serving as one of the proof-of-concept implementations, target qemu 2.7 [Eric Blake]
** add WRITE_ZEROES for writing, extension is fairly stable
** add STRUCTURED_READ for reading, extension is proposed but harder to implement
** add BLOCK_INFO for querying, extension still under discussion on NBD mailing list
== Export QEMU volumes as ISCSI or FCOE ==
* Andy Grover is working on implementing a preliminary tcmu-runner plugin using the block layer
* a QMP socket would be needed to make this usable in a cloud context
== Avoid qemu-img convert host cache pollution ==
* converting LVM snapshot may fill up host cache uselessly
* still want to use readahead
* convert should advise kernel to drop cached src blocks
** what if blocks are shared with users that are likely to use them again?
* should it advise kernel src is read sequentially?
== Dependency graph ==
(paste on http://yuml.me/diagram/scruffy/class/draw)
<code>
[Image fleecing]
[Incremental backup]
[qcow2 improvements]
[Make qemu-img use QMP command implementations internally]
[Recursive Op Blocker]->[Category Op Blocker]
[Category Op Blocker]->[intermediate layers live streaming]
[Category Op Blocker]->[Basic infrastructure for blockdev-add]
[Category Op Blocker]->[drive-mirror (block-mirror) of arbitrary node]
[Category Op Blocker]->[Jeff Cody's block-commit of arbitrary node]
[Proper specification for blockdev-add]->[Basic infrastructure for blockdev-add]
[Basic infrastructure for blockdev-add]->[blockdev-add + blockdev-del QMP interface]
[Image formats]
[virtio-blk data plane]->[Multiqueue block layer]
[virtio-scsi data plane]
[Split BlockBackend from BlockDriverState]->[Get rid of bdrv_swap]
[Split BlockBackend from BlockDriverState]->[BlockFilter and dynamic reconfiguration of the BDS graph]
[BlockFilter and dynamic reconfiguration of the BDS graph]->[New design for better encryption support]
[BlockFilter and dynamic reconfiguration of the BDS graph]->[Active mirroring]
[BlockFilter and dynamic reconfiguration of the BDS graph]->[Throttle as a filter ?]
[Remove bs->job field and allow multiple jobs on a BDS]
[iotests.py - Python module for writing qemu-iotests]->[Test Infrastructure]
[Tests for -drive discard]
[Adding TLS to NBD]
[AHCI emulation]
[Block job I/O throttling]
[I/O accounting]
[Add guard page to bottom of coroutine stack]
[I/O throttling groups]
[QMP added to qemu-nbd] -> [Export QEMU volumes as ISCSI or FCOE with QMP]
</code>
(result: http://yuml.me/c2e9edb8)

Latest revision as of 10:43, 11 November 2020

QEMU 1.0

Coroutines in the block layer [Kevin]

  • Programming model to simplify block drivers without blocking QEMU threads
  • All synchronous drivers converted to asynchronous

VMDK enhancements [Fam, GSoC 2011]

  • Implement latest VMDK specs to support modern image files
  • Patches currently being reviewed and merged

iSCSI block device integration

  • Enable userspace-only remote access to disk images

QEMU 1.1

Generic copy-on-read [Stefan]

  • Populate image file to avoid fetching same block from backing file again later
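A minimal sketch of the populate-on-read behaviour, with images modelled as dicts mapping sector number to data (not QEMU's actual read path):

```python
# Sketch of generic copy-on-read: a read satisfied from the backing file is
# also written into the top image, so later reads stay local.
def cor_read(top, backing, sector):
    if sector in top:                # already populated locally
        return top[sector]
    data = backing[sector]           # fetch from the backing file once
    top[sector] = data               # ...and keep a local copy
    return data
```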

Generic image streaming [Stefan]

  • Make block_stream commands available for all image formats that support backing files

Block I/O limits [Zhi Yong]

  • Resource control for guest I/O bandwidth/iops consumption
  • Usable with virtio on QEMU 1.0

snapshot_blkdev and Backup API [Jeff]

  • Support for consistent disk snapshots

NBD asynchronous I/O [Paolo]

  • Improved performance

virtio-scsi [Paolo/Stefan]

  • The next step after virtio-blk: supports the full SCSI command set and appears as a SCSI HBA in the guest
  • Real /dev/sda devices in guest
  • No more modifying guest drivers to add simple storage protocol features

QEMU 1.2

Runtime WCE toggling [Paolo]

  • IDE, SCSI, virtio device can toggle write cache at runtime
    • Idea: replace O_DSYNC with manual bdrv_flush calls after each write
    • Minor speedups on qcow2 metadata updates too
  • virtio automatically enables writethrough for old guests that cannot flush properly, even with cache=writeback (of course not with cache=unsafe)
  • We can switch the default to cache=writeback!
  • Future improvement: move the option to the guest side (wce=on|off|none), host side can use cache=off|on|unsafe and deprecate none/directsync/writeback/writethrough
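A minimal sketch of the flush-after-write idea above, with a stand-in for bdrv_flush (illustrative only, not QEMU code): the image stays open in writeback mode, and writethrough is emulated while the guest keeps WCE off, so toggling needs no reopen.

```python
# Sketch: emulate writethrough by flushing after every write when the guest
# disables the write cache, instead of reopening the file with O_DSYNC.
class Disk:
    def __init__(self, flush):
        self.wce = True          # write cache enable, toggled by the guest
        self.flush = flush       # stands in for bdrv_flush
        self.writes = []

    def write(self, data):
        self.writes.append(data)
        if not self.wce:         # writethrough: make the write durable now
            self.flush()
```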

QCOW3 [Kevin]

  • Zero clusters for efficient sparse images and copy-on-read

Live block operations [Paolo]

  • Copy the contents of an image file while a guest is using it
    • Various implementations: active/synchronous (guest sees I/O completion when data reaches both source and destination), passive/asynchronous (guest sees I/O completion when data reaches source only)
  • Improved error handling, similar to -drive rerror/werror
    • Errors can pause or cancel the job (with possibly separate handling for ENOSPC)

Material for next QEMU release

In-place qcow2 <-> qed conversion [Devin, GSoC 2011]:

  • Fast conversion between qcow2 and qed image formats without copying all data
  • Patches currently being reviewed and merged

Block migration [Juan?]

  • Block migration working with a separate migration thread
  • Perhaps just drop it.

IDE CD-ROM passthrough [Paolo/Markus?]

  • Track host tray state

Unified request object

  • Unify BdrvTrackedRequest, RwCo etc. in a single struct.
  • Perhaps expose make_request to device models and do_request to drivers to allow specifying more flags (FUA, write zeros,...)

Future changes

Cow overlay [Dong Xu "Robert"]

  • Allow live block copy and image streaming to raw destination files

-blockdev [Markus?]

  • Explicit user control over block device trees
  • Perhaps base this on QEMU Object Model right away

QCOW3 [Kevin]

  • Extend qcow2 format to address current and future image format challenges
    • Feature bits for fine-grained file format extensions
    • Sub-clusters to reduce metadata size and fragmentation
    • LUKS-like key scheme that allows changing the passphrase without re-encrypting data

NBD server for block device migration [Stefan?]

  • Enable remote access to live disk images for external backup software

Avoid blocking QEMU threads

  • Today loss of NFS connectivity can hang guests
  • It's critical never to block the vcpu thread
  • The iothread should also not block while the qemu mutex is held
  • All blocking operations must be done asynchronously or in a worker thread

tcm_vhost [Zhi Yong]

  • Directly connect virtio-scsi with Linux in-kernel SCSI target
  • Pass-through of host SCSI devices

qcow2 online resize [Zhi Yong]

  • Handle snapshots
  • Support shrinking

qed online resize [Zhi Yong]

  • Support shrinking

Old Items Archived in 2020

op blockers [Jeff]

  • Mutual exclusion of operations/background jobs
    • Streaming in two different parts of the backing chain - allowed? (Benoît though that not, but does anything break?)
    • Does streaming only require that streamed images stay read-only (i.e. backing chain segment on which the operation is performed)
    • Live commit in the opposite direction at the same time?
    • Action:
      • Draw up matrix of operations (mirror, stream, resize, etc)
      • Make op blocker mechanism use matrix as data instead of code (define an array)
      • Enforce that new QMP/QAPI commands and block jobs add themselves to the matrix
  • node-name allows starting operations in the middle of the chain; we need to protect against incompatible concurrent operations
    • In fact, we even used paths before node-name (e.g. for live commit), so this has existed for a while
  • bs->backing_blocker already forbids almost everything on backing files
    • Except live commit, which needs to be forbidden only when another job runs on the same chain
  • Plan for 2.1 was to block all nodes recursively
    • bdrv_swap() during block job completion turns out to be nasty, especially for live commit of active layer:
      • Need to clean up blockers on the removed subchain
      • Which blockers should the newly swapped in node have?
  • Alternative plan for 2.1:
    • Keep checking blockers on the requested node (for bs->backing_blockers to be effective)
    • But also check in the active layer because this is where block jobs do their blocking
      • bottommost node might work as well
        • As Kevin pointed out on IRC, in the current code blockers exist on backing files that don't exist on the active layer
  • Long term (2.2+): Block categories of operations
  • blockdev-add probably shouldn't be able to reference a node that has blockers

-blockdev world

Basic infrastructure for blockdev-add [Kevin, Markus]

  • Convert remaining drivers to make use of "QDict options" argument: iscsi, sheepdog, rbd

blockdev-add + blockdev-del QMP interface

  • By default, return an error for blockdev-del if reference count > 1
  • But have a force option that closes the image file, even if it breaks the remaining users (e.g. uncooperative guest that doesn't release its PCI device)
  • Note: backends created with blockdev-add are currently indestructible: they aren't deleted on frontend unplug (commit 2d246f0), and can't be deleted with drive_del (commit 48f364d)

Split BlockBackend from BlockDriverState [Max, Markus]

  • Make block driver private embedded in BlockDriverState instead of opaque pointer
  • To be moved to BlockFilters later (stay in BDS for now; BlockFilters implemented as BlockDriver):
    • bps_limits
    • copy_on_read

BlockFilter and dynamic reconfiguration of the BDS graph

  • Add/remove (e.g. filter) BDSes at runtime
  • Ability to implement light-weight block drivers that play together with snapshots (e.g. block debug, active-mirroring, copy-on-read, I/O throttling, etc)
    • Converting current I/O throttling code to a block filter should be simple, mostly a mechanical task.
  • Requires BlockBackend split
    • Keep filters on top even after taking snapshots
  • filters implement ops normally, and call out to their child BDS explicitly, no before- or after-ops-magic
  • Benoît's customer may want I/O throttling in arbitrary places in the graph
  • Be careful to never add cycles to the graph!

Dynamic graph reconfiguration (e.g. adding block filters, taking snapshots, etc.)

  • Where does the new node get inserted and how to specify how it is linked up with the existing nodes?
    • On a given "arrow" between two nodes (only works with 1 child, 1 parent)
    • On a given set of arrows (possibly more complex than what is really needed?)
  • How does removing a node work with more than one child of the deleted node?
  • Keep using the existing QMP command for I/O throttling for now, until we understand the general problem reasonably well
  • Action:
    • Figure out the general problem
    • Split I/O throttling off into own BDS [Benoît]
      • Requires some care with snapshots etc.

BDS graph rules and manipulating arbitrary nodes

  • Arbitrary nodes
    • Accept node-name where we now have other means to id BDS
      • drive-mirror of arbitrary node [Berto]
      • block-stream of arbitrary node [Berto]
    • Action:
      • Add base-nodename argument to block-stream command [Jeff]
      • Allow node names in the device argument of the block-stream command [Berto]
        • If command can modify part of a backing chain, need to add option to update the parent's backing filename field on disk! [Jeff]
        • Add optional backing-filename argument (since libvirt may use fd passing and QEMU's filename is useless) [Jeff]
          • Done: block-commit, block-stream, change-backing-file
          • Might need more
      • Deprecate filename references in QMP commands in favour of node names (e.g. streaming base) [Jeff?]

New design for better encryption support [Dan]

Block jobs

  • Main loop prints "main-loop: WARNING: I/O thread spun for 1000 iterations" when block job is running.
    • We have "block_job_sleep_ns(..., 0)" in block job coroutines, but that doesn't really yield the BQL to VCPU as desired.

Live streaming of intermediate layers (using block-stream) [Berto]

  • QEMU part is merged
  • libvirt is going to expose the functionality, may need introspection to detect it, though

Active mirroring: just like mirroring, but live, on the fly, skip the bitmap [Kevin]

  • similar to drive-backup
  • security (virus-scan or some sort of inspection)
  • should be implemented as a block filter

Remove bs->job field and allow multiple jobs on a BDS [John]

  • allows more than one blockjob at a block device at a time
  • infra-structure, refactoring work
  • Careful not to break QMP API, need wrapper
  • Allow non-block jobs (long-running operations outside block layer)

Image fleecing [jsnow]

Incremental backup [jsnow]

Block job I/O throttling

  • Reuse Benoît's throttling implementation
  • Handle large buffer sizes used by block jobs
    • block jobs like to work in bulk for efficiency but throttling doesn't like big, bursty requests (note from Benoît: I think the solution would be to make the clock used for the throttling computation coarser)

Atomic block-job-cancel

  • Stopping drive-mirror on multiple disks simultaneously should be atomic
  • Currently 'stop'/'cont' is required to issue several block-job-cancel commands atomically
  • QMP 'transaction' should support block-job-cancel so it can be issued without 'stop'/'cont'

Block Drivers

dmg

Test

Test Infrastructure

  • Desires?
  • support for testing AIO requests
    • No design yet, but we need some way to label I/O requests in blkdebug
    • right now tests just sleep, which is fragile and slow
    • related to SCSI req tags

Tests for -drive discard=

  • Currently the discard feature is not well-tested in qemu-iotests

Block device model tests

  • AHCI got decent coverage
  • coverage of other device models ranges from basic to nonexistent

iotests.py - Python module for writing qemu-iotests

  • Share filters with bash ./common.filter file (simple solution: invoke bash in subprocess and set necessary environment variables like TEST_DIR)
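The "simple solution" above can be sketched as follows: run test output through bash in a subprocess with TEST_DIR set in the environment. The filter expression here stands in for a function sourced from ./common.filter (the real file is not assumed to exist):

```python
import os
import subprocess

def apply_bash_filter(text, filter_expr, test_dir="/tmp/qemu-iotests"):
    """Pipe text through a bash filter with TEST_DIR exported, the way
    ./common.filter expects.  'filter_expr' is a stand-in for a filter
    function sourced from common.filter; illustrative only."""
    env = dict(os.environ, TEST_DIR=test_dir)
    result = subprocess.run(
        ["bash", "-c", filter_expr],
        input=text, capture_output=True, text=True, env=env, check=True)
    return result.stdout

# Example: replace the concrete TEST_DIR path with a stable token,
# mirroring what a _filter_testdir-style filter does.
out = apply_bash_filter("read /tmp/qemu-iotests/t.qcow2\n",
                        'sed -e "s#$TEST_DIR#TEST_DIR#g"')
```

This keeps the filters in one place (bash) while letting Python tests reuse them unchanged.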

Broken or unreliable qemu-iotests

  • Test 136 is broken or unreliable on tmpfs

qcow2

  • Cluster allocation performance: [Kevin/John]
    • Delayed COW
    • Use a single request to write both guest data and COW padding
    • Journalling (should help a lot with internal COW, and possibly with delayed COW)
  • Run-time image file preallocation (fallocate 128 MB or so at the end of the image file to avoid host file system fragmentation; cf. the Parallels series "write/create for Parallels images with reasonable performance", v3)
  • qcow2 backing file validation (parent modification invalidates children) [jeff]
    • similar to vmdk and vhdx
  • qcow2 internal snapshot read-only BlockDriverState
    • Allows accessing snapshots while guest accesses disk image
    • Tricky, and so far not high enough priority
  • subclusters
    • allocate larger chunks, COW smaller ones, for performance
  • Use finer grained locking in the Qcow2Cache so that random I/O loads can load/update multiple L2 tables at once instead of serialising everything
  • Shrink support in qcow2_truncate()
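The subcluster idea above ("allocate larger chunks, COW smaller ones") amounts to tracking allocation at a finer granularity inside each cluster. A toy sketch of the address math, with made-up parameters (the actual qcow2 design keeps a per-cluster allocation bitmap in an extended L2 entry):

```python
def touched_subclusters(offset, nbytes, cluster_size=65536, sub_count=32):
    """Map a guest write to {cluster_index: set(subcluster_indices)} so
    only the touched subclusters need COW.  Parameters (64k clusters,
    32 subclusters) are illustrative, not the final on-disk format."""
    sub_size = cluster_size // sub_count
    result = {}
    end = offset + nbytes
    pos = offset
    while pos < end:
        cluster = pos // cluster_size
        sub = (pos % cluster_size) // sub_size
        result.setdefault(cluster, set()).add(sub)
        # jump to the start of the next subcluster (or stop at end)
        pos = min(end, cluster * cluster_size + (sub + 1) * sub_size)
    return result
```

A 4 KB write then dirties two 2 KB subclusters instead of forcing COW of a whole 64 KB cluster.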

Header extension for qcow2 generation id

Desirable to add two header extensions:

  1. Double Generation id (one for metadata, one for guest visible content), coupled with an auto-clear feature bit, for use in backing images
  2. Expected backing file generations, for use in overlay images

Usage:

  • Any program that opens a qcow2 file read-write with a generation id header must increment the appropriate generation id before making the corresponding kind of change to that file. The id does not have to be incremented for every change, only the first time a change is made after the file was opened for writing.
  • A generation id is valid only if the auto-clear bit is still set (thus, if an older qemu opens a backing image, it is required to leave the unrecognized generation id header alone, but also required to clear the unknown auto-clear bit, making it obvious that the generation id header may no longer be accurate and a new generation id is needed once new qemu again handles the file).
  • Any program that opens a qcow2 file that has an expected-backing-generation header should default to verifying that the backing file has that generation id. If the backing file's id does not match, the open should fail unless the user supplies an extra flag to acknowledge the risk or update the expected id.
  • Should internal snapshots track id? Open question.
    • If snapshot includes generation id, then you can roll back to that id as part of reverting to a snapshot. But the id must then be something like a UUID, as mere linear incrementing causes branching collisions (take snapshot at id 2, then create id 3, then roll back to 2, then create a new id 3, but the two "id 3" states are not the same, which breaks any overlay depending on id 3).
    • If snapshot does not include generation id, then the mere act of taking or reverting to a snapshot increments the id, and overlays must use a forced open to accept the new id, even if guest-visible contents are unchanged.
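The "increment once per writable session, and trust the ids only while the auto-clear bit survives" rules above can be sketched like this (all names illustrative; the real extension would live in the qcow2 header, not a Python object):

```python
class GenerationIds:
    """Sketch of the proposed header extension: one id for metadata, one
    for guest-visible content, plus an auto-clear feature bit.  Each
    writable session bumps an id at most once, on the first change of
    that kind.  Illustrative only, not the on-disk format."""

    def __init__(self, meta_gen=0, data_gen=0, autoclear_valid=True):
        self.meta_gen = meta_gen
        self.data_gen = data_gen
        self.autoclear_valid = autoclear_valid  # older qemu clears this
        self._bumped = set()

    def open_rw(self):
        self._bumped = set()                    # new session, new bumps

    def record_change(self, kind):              # kind: 'meta' or 'data'
        if kind not in self._bumped:
            if kind == 'meta':
                self.meta_gen += 1
            else:
                self.data_gen += 1
            self._bumped.add(kind)

    def valid(self):
        # ids are trustworthy only while the auto-clear bit is still set
        return self.autoclear_valid
```

An overlay would compare its expected id against the backing file's id at open time and refuse to proceed on mismatch unless forced.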


virtio data plane

Current state: the old "dataplane" code is gone, all relevant parts of QEMU are now thread-safe (memory API + dirty bitmap). Since 2.8, virtio-blk and virtio-scsi always use "dataplane" code when ioeventfd is on.

  • multiqueue
    • Single BlockDriverState, multiple independent threads accessing in parallel
    • Allows us to extend Linux multiqueue block layer up into guest
    • For maximum SMP scalability and performance with high IOPS SSDs

multiqueue plan:

  1. bdrv_wakeup/BDRV_POLL_WHILE so that only the I/O thread runs aio_poll for the AioContext. RFifoLock gone [2.9]
  2. aio_co_wake/aio_co_schedule to automatically release/acquire AioContext around coroutine yield [2.9]
  3. thread-safe CoMutex [2.9]
  4. fine-grained AioContext critical sections [2.9]
  5. thread-safe BlockDriverState [2.10]
  6. thread-safe drivers [2.10, needs QED conversion to coroutines]
  7. thread-safe block jobs [2.10]
  8. thread-safe virtio-blk [2.10]
  9. thread-safe virtio-scsi [2.10]
  10. separate threads for each virtqueue

Make qemu-img use QMP command implementations internally (e.g. use mirroring for qemu-img convert) [Max]

  • Ensures that live operations provide the same functionality as we have offline

I/O accounting (for query-blockstats)

  • Driving it was made the device model's responsibility (commit a597e79 in 2011)
    • Most device models still don't
    • The ones that do are inconsistent
    • Consequently, query-blockstats is better treated as a wild guess than as data
  • Need to take a step back
    • Benoît collected use cases, discussed on the list
      • measuring in the device model is good for billing
      • some metrics are missing
      • It would be good to collect the same data everywhere in the BDS graph for telemetry purposes (seeking hidden costs)
      • having a dedicated JSON socket to output accounting data would be good, so analysis can be kept out of qemu
  • Working on revamping the I/O accounting infrastructure (Benoît)
    • Preliminary patches merged
    • Averaging module under review
    • More to come

Performance improvements

  • Use clang coroutines instead of ucontext and compare performance
  • using driver=file improves performance a bit. Anything we can do to make this happen by default?
  • avoid big allocations for VirtQueueElement. Old patches from Ming Lei used a special-purpose pool allocator, Paolo posted new patches that use regular malloc with smaller allocations (up to a few hundred bytes)
  • Move linux-aio to AioContext. We already have a thread pool, might as well add linux-aio there.

virtio-blk discard support [Peter Lieven]

  • spec, guest driver, device model

virtio-blk gets lots of small requests from the guest

  • But we don't know why
  • Possibly a guest driver issue
  • multiwrite was introduced to mitigate this long ago
  • Need to perform more benchmarks to see to what extent it exists today

Trace guest block I/O, replay with qemu-io

Dataplane

  • AioContext assertions to prevent callbacks in wrong event loop [Stefan]

Adding QMP to qemu-nbd

  • wanted so that offline operations work the same as online ones
  • Patches from Benoît Canet are on the list, need rebase
  • Kevin mentioned one might as well invoke qemu-system-x86_64.

Adding Sparse File handling to NBD

  • NBD handles holes inefficiently (passing zeroes over the wire for both reads and writes, no way to query where holes are). Proposals have been made on the NBD list on how to add support, with qemu serving as one of the proof-of-concept implementations, target qemu 2.7 [Eric Blake]
    • add WRITE_ZEROES for writing, extension is fairly stable
    • add STRUCTURED_READ for reading, extension is proposed but harder to implement
    • add BLOCK_INFO for querying, extension still under discussion on NBD mailing list
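With the WRITE_ZEROES extension above, a hole can be punched on the server without shipping zeroes over the wire: the request is just a header, no payload. A sketch of packing such a request, based on the extension as discussed (not qemu's implementation; the command value is taken from the extension proposal):

```python
import struct

NBD_REQUEST_MAGIC = 0x25609513
NBD_CMD_WRITE_ZEROES = 6   # value from the WRITE_ZEROES extension proposal

def nbd_write_zeroes_request(handle, offset, length, flags=0):
    """Pack a 28-byte NBD request header for WRITE_ZEROES.  Unlike
    NBD_CMD_WRITE, no data payload follows, so zeroing a region costs
    only the header on the wire.  Sketch, not qemu's NBD client code."""
    return struct.pack(">IHHQQI", NBD_REQUEST_MAGIC, flags,
                       NBD_CMD_WRITE_ZEROES, handle, offset, length)
```

Compare this with a plain write of the same region, which would append `length` bytes of zeroes after the header.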

Export QEMU volumes as ISCSI or FCOE

  • Andy Grover is working on implementing a preliminary tcmu-runner plugin using the block layer
  • a QMP socket would be needed to make this usable in a cloud context

Avoid qemu-img convert host cache pollution

  • converting an LVM snapshot may fill up the host page cache uselessly
  • still want to use readahead
  • convert should advise the kernel to drop cached source blocks
    • what if blocks are shared with users that are likely to use them again?
  • should it advise the kernel that the source is read sequentially?
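Both hints above map onto posix_fadvise: POSIX_FADV_SEQUENTIAL keeps readahead effective, and POSIX_FADV_DONTNEED drops pages once consumed. A sketch of a qemu-img-convert-style sequential read loop using them (illustrative, not qemu-img's actual code; requires Linux):

```python
import os

def convert_read_src(path, chunk=1 << 20):
    """Read a source image sequentially while telling the kernel not to
    keep the pages, so a bulk convert does not evict other users' cached
    data.  Sketch only; qemu-img does not currently do this."""
    fd = os.open(path, os.O_RDONLY)
    try:
        # Keep readahead effective: we will read strictly sequentially.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
        total = 0
        while True:
            buf = os.read(fd, chunk)
            if not buf:
                break
            total += len(buf)
            # Drop the chunk we just consumed from the page cache.
            os.posix_fadvise(fd, total - len(buf), len(buf),
                             os.POSIX_FADV_DONTNEED)
        return total
    finally:
        os.close(fd)
```

The open question from the list remains: DONTNEED also evicts pages that other processes may want again, so blindly dropping shared blocks could hurt them.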

Dependency graph

(paste on http://yuml.me/diagram/scruffy/class/draw)

[Image fleecing]
[Incremental backup]
[qcow2 improvements]
[Make qemu-img use QMP command implementations internally]
[Recursive Op Blocker]->[Category Op Blocker]
[Category Op Blocker]->[intermediate layers live streaming]
[Category Op Blocker]->[Basic infrastructure for blockdev-add]
[Category Op Blocker]->[drive-mirror (block-mirror) of arbitrary node]
[Category Op Blocker]->[Jeff Cody's block-commit of arbitrary node]
[Proper specification for blockdev-add]->[Basic infrastructure for blockdev-add]
[Basic infrastructure for blockdev-add]->[blockdev-add + blockdev-del QMP interface]
[Image formats]
[virtio-blk data plane]->[Multiqueue block layer]
[virtio-scsi data plane]
[Split BlockBackend from BlockDriverState]->[Get rid of bdrv_swap]
[Split BlockBackend from BlockDriverState]->[BlockFilter and dynamic reconfiguration of the BDS graph]
[BlockFilter and dynamic reconfiguration of the BDS graph]->[New design for better encryption support]
[BlockFilter and dynamic reconfiguration of the BDS graph]->[Active mirroring]
[BlockFilter and dynamic reconfiguration of the BDS graph]->[Throttle as a filter ?]
[Remove bs->job field and allow multiple jobs on a BDS]
[iotests.py - Python module for writing qemu-iotests]->[Test Infrastructure]
[Tests for -drive discard]
[Adding TLS to NBD]
[AHCI emulation]
[Block job I/O throttling]
[I/O accounting]
[Add guard page to bottom of coroutine stack]
[I/O throttling groups]
[QMP added to qemu-nbd] -> [Export QEMU volumes as ISCSI or FCOE with QMP]

(result: http://yuml.me/c2e9edb8)