ToDo/Block

This page contains block layer and storage features that have been proposed.  These features may not be in active development and questions about them should be addressed to the QEMU mailing list at qemu-devel@nongnu.org.


== Core block layer ==
* Implement request cancellation in block/io_uring.c and util/thread-pool.c so that device reset and guest cancel commands promptly cancel hung requests. Today QEMU waits for requests to complete. Linux io_uring offers a cancel command that may be able to abort hung NFS requests (the liburing API is io_uring_prep_cancel()). QEMU's thread pool could be extended to use a signal to interrupt blocking system calls. Linux AIO does not seem to offer useful cancel semantics because only drivers/usb/gadget/ implements the kernel's kiocb_set_cancel_fn() API.

== op blockers [Jeff] ==
* Mutual exclusion of operations/background jobs
** Streaming in two different parts of the backing chain - allowed? (Benoît thought not, but does anything break?)
** Does streaming only require that the streamed images (i.e. the backing chain segment on which the operation is performed) stay read-only?
** Live commit in the opposite direction at the same time?
** Action:
*** Draw up matrix of operations (mirror, stream, resize, etc)
*** Make op blocker mechanism use matrix as data instead of code (define an array)
*** Enforce that new QMP/QAPI commands and block jobs add themselves to the matrix
* node-name allows starting operations in the middle of the chain; we need to protect against incompatible concurrent operations
** In fact, we even used paths before node-name (e.g. for live commit), so this has existed for a while
* bs->backing_blocker already forbids almost everything on backing files
** Except live commit, which needs to be forbidden only when another job runs on the same chain
* Plan for 2.1 was to block all nodes recursively
** bdrv_swap() during block job completion turns out to be nasty, especially for live commit of active layer:
*** Need to clean up blockers on the removed subchain
*** Which blockers should the newly swapped in node have?
* Alternative plan for 2.1:
** Keep checking blockers on the requested node (for bs->backing_blockers to be effective)
** But also check in the active layer because this is where block jobs do their blocking
*** bottommost node might work as well
**** As Kevin pointed out on IRC, in the current code blockers exist on backing files that don't exist on the active layer
* Long term (2.2+): Block categories of operations
** Some thoughts, in the form of code example for block-commit: http://fpaste.org/113005/14036975/
* blockdev-add probably shouldn't be able to reference a node that has blockers
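
The "matrix as data instead of code" idea above could look something like the following sketch. This is not QEMU code; the operation names and the compatibility values are illustrative placeholders, and the real categories would come from the matrix drawn up in the action item.

```python
# Hypothetical sketch of an op-blocker compatibility matrix kept as data
# rather than as scattered checks in code. Operations and values are
# illustrative only; the conservative default is "not allowed".
OPS = ["stream", "commit", "mirror", "resize"]

# COMPAT[a][b] is True when operation b may start while a is already
# running on the same backing-chain segment.
COMPAT = {
    "stream": {"stream": False, "commit": False, "mirror": False, "resize": False},
    "commit": {"stream": False, "commit": False, "mirror": False, "resize": False},
    "mirror": {"stream": False, "commit": False, "mirror": False, "resize": True},
    "resize": {"stream": False, "commit": False, "mirror": True,  "resize": False},
}

def can_start(running, new_op):
    """Check the new operation against every running one via the table."""
    return all(COMPAT[op][new_op] for op in running)
```

A new QMP command or block job would then only have to add a row and column to the table, which is what "enforce that new QMP/QAPI commands and block jobs add themselves to the matrix" would mean in practice.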


== virtio-blk ==

* Add a cancel command to the virtio-blk device so that running requests can be aborted. This requires changing the VIRTIO spec, extending QEMU's device emulation, and implementing blk_mq_ops->timeout() in Linux virtio_blk.ko. This task depends on first implementing real request cancellation in QEMU.

== -blockdev world ==

=== Basic infrastructure for blockdev-add [Kevin, Markus] ===
* Convert remaining drivers to make use of "QDict options" argument: iscsi, sheepdog, rbd
 
=== blockdev-add + blockdev-del QMP interface ===
* By default, return an error for blockdev-del if reference count > 1
* But have a force option that closes the image file, even if it breaks the remaining users (e.g. an uncooperative guest that doesn't release its PCI device)
* Note: backends created with blockdev-add are currently indestructible: they aren't deleted on frontend unplug (commit 2d246f0), and can't be deleted with drive_del (commit 48f364d)
 
=== Split BlockBackend from BlockDriverState [Max, Markus] ===
* Make block driver private embedded in BlockDriverState instead of opaque pointer
* To be moved to BlockFilters later (stay in BDS for now; BlockFilters implemented as BlockDriver):
** bps_limits
** copy_on_read
 
=== BlockFilter and dynamic reconfiguration of the BDS graph ===
* Add/remove (e.g. filter) BDSes at runtime
* Ability to implement light-weight block drivers that play together with snapshots (e.g. block debug, active-mirroring, copy-on-read, I/O throttling, etc)
** Converting current I/O throttling code to a block filter should be simple, mostly a mechanical task.
* Requires BlockBackend split
** Keep filters on top even after taking snapshots
* filters implement ops normally, and call out to their child BDS explicitly, no before- or after-ops-magic
* Benoît's customer may want I/O throttling in arbitrary places in the graph
* Be careful to never add cycles to the graph!
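
The "filters implement ops normally, and call out to their child BDS explicitly" point can be sketched as below. This is a toy model, not QEMU code; the node classes and the copy-on-read example are invented to show the shape of the delegation, with no before/after-op magic.

```python
# Toy model of a block filter node: it implements read/write itself and
# explicitly delegates to its single child node. Class names are invented.
class RawNode:
    """Stand-in for a leaf BDS: a dict of offset -> data."""
    def __init__(self):
        self.data = {}
    def read(self, off):
        return self.data.get(off, b"\0")
    def write(self, off, buf):
        self.data[off] = buf

class CopyOnReadFilter:
    """Illustrative filter: caches data locally on first read."""
    def __init__(self, child):
        self.child = child   # exactly one child; the graph must stay acyclic
        self.cache = {}
    def read(self, off):
        if off not in self.cache:
            self.cache[off] = self.child.read(off)   # explicit call-out
        return self.cache[off]
    def write(self, off, buf):
        self.cache[off] = buf
        self.child.write(off, buf)                   # explicit call-out
```

Inserting or removing such a filter is then purely a matter of re-pointing the parent's child reference, which is the graph-reconfiguration problem discussed in the next section.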
 
=== Dynamic graph reconfiguration (e.g. adding block filters, taking snapshots, etc.) ===
* Where does the new node get inserted and how to specify how it is linked up with the existing nodes?
** On a given "arrow" between two nodes (only works with 1 child, 1 parent)
** On a given set of arrows (possibly more complex than what is really needed?)
* How does removing a node work with more than one child of the deleted node?
* Keep using the existing QMP command for I/O throttling for now, until we understand the general problem reasonably well
* Action:
** Figure out the general problem
** Split I/O throttling off into own BDS [Benoît]
*** Requires some care with snapshots etc.
 
=== BDS graph rules and manipulating arbitrary nodes ===
* Arbitrary nodes
** Accept node-name where we now have other means to id BDS
*** drive-mirror of arbitrary node [Berto]
*** block-stream of arbitrary node [Berto]
** Action:
*** Add base-nodename argument to block-stream command [Jeff]
*** Allow node names in the device argument of the block-stream command [Berto]
**** If command can modify part of a backing chain, need to add option to update the parent's backing filename field on disk! [Jeff]
**** Add optional backing-filename argument (since libvirt may use fd passing and QEMU's filename is useless) [Jeff]
***** Done: block-commit, block-stream, change-backing-file
***** Might need more
*** Deprecate filename references in QMP commands in favour of node names (e.g. streaming base) [Jeff?]
 
== New design for better encryption support [Dan] ==
* Deprecated in 2.3 (commit a1f688f), disabled in the system emulator in 2.7 (commit 8c0dcbc4a)
* Dan Berrange intends to work on a replacement, starting in a few months https://www.berrange.com/posts/2015/03/17/qemu-qcow2-built-in-encryption-just-say-no-deprecated-now-to-be-deleted-soon/
** LUKS format driver is merged
** qcow2 integration is still missing
 
== Block jobs ==
* Main loop prints "main-loop: WARNING: I/O thread spun for 1000 iterations" when block job is running.
** We have "block_job_sleep_ns(..., 0)" in block job coroutines, but that doesn't really yield the BQL to VCPU as desired.
 
=== Live streaming of intermediate layers (using block-stream) [Berto] ===
* QEMU part is merged
* libvirt is going to expose the functionality, may need introspection to detect it, though
 
=== Active mirroring: just like mirroring, but live, on the fly, skip the bitmap [Kevin] ===
* similar to drive-backup
* security (virus-scan or some sort of inspection)
* should be implemented as a block filter
 
=== Remove bs->job field and allow multiple jobs on a BDS [John] ===
* allows more than one blockjob at a block device at a time
* infrastructure and refactoring work
* Careful not to break QMP API, need wrapper
* Allow non-block jobs (long-running operations outside block layer)
 
=== Image fleecing [jsnow] ===
* Image creation with existing BlockDriverState as backing file (BlockDriverState ref count)
* Patches: http://lists.gnu.org/archive/html/qemu-devel/2013-11/msg03692.html
** still not merged
* Writable backing file
 
=== Incremental backup [jsnow] ===
* Backup applications need a dirty block bitmap so they can read only blocks that changed
* Two approaches discussed:
** Dirty bitmap file
** Write changed data through NBD:
*** See also: http://lists.gnu.org/archive/html/qemu-devel/2013-11/msg03035.html
** Merkle tree - hash tree allows efficient syncing of image files between hosts
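
The Merkle-tree approach can be sketched as follows: hash fixed-size blocks, hash pairs of hashes up to a root, and sync only the blocks whose hashes differ. The block size and helper names are invented for illustration; a real implementation would compare subtree hashes over the wire and descend only into differing subtrees.

```python
import hashlib

BLOCK = 4  # illustrative tiny block size; real images would use e.g. 64 KB

def leaf_hashes(data):
    """Hash each fixed-size block of the image."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def merkle_root(hashes):
    """Fold leaf hashes pairwise up to a single root hash."""
    hashes = list(hashes)
    while len(hashes) > 1:
        if len(hashes) % 2:
            hashes.append(hashes[-1])          # pad an odd level
        hashes = [hashlib.sha256(a + b).digest()
                  for a, b in zip(hashes[::2], hashes[1::2])]
    return hashes[0]

def changed_blocks(old, new):
    """Blocks whose leaf hash differs - only these need to be transferred."""
    ho, hn = leaf_hashes(old), leaf_hashes(new)
    return [i for i, (a, b) in enumerate(zip(ho, hn)) if a != b]
```

Two hosts whose roots match can skip the sync entirely; otherwise the hash comparison localizes the changed data without reading both full images over the network.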
 
=== Block job I/O throttling ===
* Reuse Benoît's throttling implementation
* Handle large buffer sizes used by block jobs
** block jobs like to work in bulk for efficiency but throttling doesn't like big, bursty requests (note from Benoît: I think the solution would be to make the clock used for the throttling computation coarser)
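
Benoît's coarser-clock idea might look like this sketch: account I/O against a budget per coarse time window, so one large bulk request is averaged over the window instead of tripping a fine-grained limit. The class and the numbers are invented; this is not the actual throttling code.

```python
# Hypothetical coarse-window throttle: a byte budget per accounting
# period. A big bursty request fits as long as the window's budget
# allows it; overflow delays the caller to the next window.
class CoarseThrottle:
    def __init__(self, bytes_per_period, period):
        self.limit = bytes_per_period   # budget per accounting period
        self.period = period            # coarse clock granularity (seconds)
        self.window_start = 0.0
        self.used = 0

    def delay(self, now, nbytes):
        """Return how long the caller must sleep before issuing nbytes."""
        # Advance to the current coarse window, resetting the budget.
        if now - self.window_start >= self.period:
            self.window_start = now - (now - self.window_start) % self.period
            self.used = 0
        self.used += nbytes
        if self.used <= self.limit:
            return 0.0
        # Over budget: wait until the next window opens.
        return self.window_start + self.period - now
```

With a short period, two back-to-back bulk writes trip the limit; a longer period lets the same total through, which is why a coarser clock suits block jobs' bulk I/O pattern.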
 
=== Atomic block-job-cancel ===
* Stopping drive-mirror on multiple disks simultaneously should be atomic
* Currently 'stop'/'cont' is required to issue several block-job-cancel commands atomically
* QMP 'transaction' should support block-job-cancel so it can be issued without 'stop'/'cont'
 
== Block Drivers ==
=== dmg ===
* [[ToDo/Block/DmgChunkSizeIndependence|Chunk size independence]] (for reading modern dmg files)
 
== Test ==
=== Test Infrastructure ===
* Desires?
* support for testing AIO requests
** No design yet, but we need some way to label I/O requests in blkdebug
** right now we sleep, which is stupid
** related to SCSI req tags
 
=== Tests for -drive discard= ===
* Currently the discard feature is not well-tested in qemu-iotests
 
=== Block device model tests ===
* AHCI got decent coverage
* rest basic to nonexistent
 
=== iotests.py - Python module for writing qemu-iotests [stefan] ===
* Extract qemu.py generic QEMU interaction code
* Document qemu.py and iotests.py so it meets standard Python module conventions
* Port the live migration qemu-iotest to Python to see if it's preferable to the shell version
 
=== Broken or unreliable qemu-iotests ===
* 136 on tmpfs is not working or unreliable
 
== qcow2 ==
* Cluster allocation performance: [Kevin/John]
** Delayed COW
** Use a single request to write both guest data and COW padding
** Journalling (should help a lot with internal COW, and possibly with delayed COW)
* Run-time image file preallocation (fallocate 128 MB or whatever at the end of the image file to avoid host file system fragmentation; like Parallels series ''"write/create for Parallels images with reasonable performance"'' in v3)
* qcow2 backing file validation (parent modification invalidates children) [jeff]
** similar to vmdk and vhdx
* qcow2 internal snapshot read-only BlockDriverState
** Allows accessing snapshots while guest accesses disk image
** Tricky, insufficient prio
* subclusters
** allocate larger chunks, COW smaller ones, for performance
* Use finer grained locking in the Qcow2Cache so that random I/O loads can load/update multiple L2 tables at once instead of serialising everything
* Shrink support in qcow2_truncate()
 
=== Header extension for qcow2 generation id ===
Desirable to add two header extensions:
# Double Generation id (one for metadata, one for guest visible content), coupled with an auto-clear feature bit, for use in backing images
# Expected backing file generations, for use in overlay images
Usage:
* Any program that opens a qcow2 file read-write with a generation id header must increment the appropriate generation ids before making that sort of change to that file.  The id does not have to be incremented for every change to the file, only for the first time a change is made since the file was opened for writing.
* A generation id is valid only if the auto-clear bit is still set (thus, if an older qemu opens a backing image, it is required to leave the unrecognized generation id header alone, but also required to clear the unknown auto-clear bit, making it obvious that the generation id header may no longer be accurate and a new generation id is needed once new qemu again handles the file).
* Any program that opens a qcow2 file that has expected backing generation header should default to verifying that the backing file has that generation id.  If the backing file id is not correct, then the access should fail unless the user supplies an extra flag to acknowledge the risk/update the expected id.
* Should internal snapshots track id? Open question.
** If snapshot includes generation id, then you can roll back to that id as part of reverting to a snapshot. But the id must then be something like a UUID, as mere linear incrementing causes branching collisions (take snapshot at id 2, then create id 3, then roll back to 2, then create a new id 3, but the two "id 3" states are not the same, which breaks any overlay depending on id 3).
** If snapshot does not include generation id, then the mere act of taking or reverting to a snapshot increments the id, and overlays must use a forced open to accept the new id, even if guest-visible contents are unchanged.
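
The rules above can be modelled in a few lines. This is an illustrative model of the proposal, not the qcow2 on-disk format: the generation id bumps once per writable session on the first change, it is trusted only while the paired auto-clear bit survives, and an overlay verifies the recorded id unless the user forces the open.

```python
# Toy model of the proposed generation-id semantics. Names are invented.
class Image:
    def __init__(self):
        self.generation = 0
        self.autoclear_valid = True     # cleared by an older, unaware qemu
        self.bumped_this_session = False

    def open_rw(self):
        self.bumped_this_session = False

    def write(self):
        # Increment only on the first change since opening read-write,
        # not on every change.
        if not self.bumped_this_session:
            self.generation += 1
            self.bumped_this_session = True

def check_overlay(backing, expected_generation, force=False):
    """An overlay verifies its recorded backing generation before use."""
    if force:
        return True
    return backing.autoclear_valid and backing.generation == expected_generation
```

Note how a cleared auto-clear bit invalidates the id even when the number still matches, which is exactly the safety net against older qemu versions that modified the file without updating the header.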
 
 
== virtio data plane ==
Current state: the old "dataplane" code is gone, all relevant parts of QEMU are now thread-safe (memory API + dirty bitmap). Since 2.8, virtio-blk and virtio-scsi always use "dataplane" code when ioeventfd is on.
 
* multiqueue
** Single BlockDriverState, multiple independent threads accessing in parallel
** Allows us to extend Linux multiqueue block layer up into guest
** For maximum SMP scalability and performance with high IOPS SSDs
 
multiqueue plan:
# bdrv_wakeup/BDRV_POLL_WHILE so that only the I/O thread runs aio_poll for the AioContext. RFifoLock gone [2.9]
# aio_co_wake/aio_co_schedule to automatically release/acquire AioContext around coroutine yield [2.9]
# thread-safe CoMutex [2.9]
# fine-grained AioContext critical sections [2.9]
# thread-safe BlockDriverState [2.10]
# thread-safe drivers [2.10, needs QED conversion to coroutines]
# thread-safe block jobs [2.10]
# thread-safe virtio-blk [2.10]
# thread-safe virtio-scsi [2.10]
# separate threads for each virtqueue
 
== Make qemu-img use QMP command implementations internally (e.g. use mirroring for qemu-img convert) [Max] ==
* Ensures that live operations provide the same functionality as we have offline
 
== I/O accounting (for query-blockstats) ==
* Driving it was made the device model's responsibility (commit a597e79 in 2011)
** Most of them still don't
** The ones that do are inconsistent
** Consequently, query-blockstats is better treated as wild guess, not data
* Need to take a step back
** Benoît got us use cases, discussed on list
*** measuring in the device model is good for billing
*** some metrics are missing
*** It would be good to collect the same data everywhere in the BDS graph for telemetry purposes (seeking hidden costs)
*** having a dedicated JSON socket to output accounting data would be good
*** so we can keep analysis out of qemu
* Working on revamping the I/O accounting infrastructure (Benoît)
** Preliminary patches merged
** Averaging module under review
** More to come
 
== Performance improvements ==
* Use clang coroutines instead of ucontext and compare performance
* using driver=file improves performance a bit. Anything we can do to make this happen by default?
* avoid big allocations for VirtQueueElement.  Old patches from Ming Lei used a special-purpose pool allocator, Paolo posted new patches that use regular malloc with smaller allocations (up to a few hundred bytes)
* Move linux-aio to AioContext. We already have a thread pool, might as well add linux-aio there.
 
== virtio-blk discard support [Peter Lieven] ==
* spec, guest driver, device model
 
== virtio-blk gets lots of small requests from the guest ==
* But we don't know why
* Possibly a guest driver issue
* multiwrite was introduced to mitigate this long ago
* Need to perform more benchmarks to see to what extent it exists today
 
== Trace guest block I/O, replay with qemu-io ==
 
== Dataplane ==
* AioContext assertions to prevent callbacks in wrong event loop [Stefan]
 
== Adding QMP to qemu-nbd ==
* wanted so doing things offline works same as online
* Patches from Benoît Canet are on the list, need rebase
* Kevin mentioned one might as well invoke qemu-system-x86_64.
 
== Adding Sparse File handling to NBD ==
* NBD handles holes inefficiently (passing zeroes over the wire for both reads and writes, no way to query where holes are). Proposals have been made on the NBD list on how to add support, with qemu serving as one of the proof-of-concept implementations, target qemu 2.7 [Eric Blake]
** add WRITE_ZEROES for writing, extension is fairly stable
** add STRUCTURED_READ for reading, extension is proposed but harder to implement
** add BLOCK_INFO for querying, extension still under discussion on NBD mailing list
 
== Export QEMU volumes as ISCSI or FCOE ==
* Andy Grover is working on implementing a preliminary tcmu-runner plugin using the block layer
* a QMP socket would be needed to make this usable in a cloud context
 
== Avoid qemu-img convert host cache pollution ==
* converting LVM snapshot may fill up host cache uselessly
* still want to use readahead
* convert should advise kernel to drop cached src blocks
** what if blocks are shared with users that are likely to use them again?
* should it advise kernel src is read sequentially?
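
One way to realize both bullets is posix_fadvise(2): advise sequential access up front (readahead still works), then drop each chunk from the page cache after it has been converted. The sketch below is a hand-rolled illustration, not qemu-img code; the chunk size and function name are arbitrary, and a deployment where other users share the source blocks would skip the DONTNEED call.

```python
import os

CHUNK = 8 * 1024 * 1024  # arbitrary read granularity

def convert_without_cache_pollution(src_path, process):
    """Read src sequentially, handing chunks to process(), and advise the
    kernel to drop each chunk from the page cache once it is consumed."""
    fd = os.open(src_path, os.O_RDONLY)
    try:
        # Keep readahead effective: declare the sequential access pattern.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
        off = 0
        while True:
            buf = os.read(fd, CHUNK)
            if not buf:
                break
            process(buf)
            # Drop the just-read range so the host cache is not polluted.
            os.posix_fadvise(fd, off, len(buf), os.POSIX_FADV_DONTNEED)
            off += len(buf)
    finally:
        os.close(fd)
    return off
```

The advice is only a hint, so nothing breaks if the kernel ignores it; the interesting policy question from the list above is when dropping is wrong because another consumer wants the blocks cached.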
 
== Dependency graph ==
(paste on http://yuml.me/diagram/scruffy/class/draw)
<code>
[Image fleecing]
[Incremental backup]
[qcow2 improvements]
[Make qemu-img use QMP command implementations internally]
[Recursive Op Blocker]->[Category Op Blocker]
[Category Op Blocker]->[intermediate layers live streaming]
[Category Op Blocker]->[Basic infrastructure for blockdev-add]
[Category Op Blocker]->[drive-mirror (block-mirror) of arbitrary node]
[Category Op Blocker]->[Jeff Cody's block-commit of arbitrary node]
[Proper specification for blockdev-add]->[Basic infrastructure for blockdev-add]
[Basic infrastructure for blockdev-add]->[blockdev-add + blockdev-del QMP interface]
[Image formats]
[virtio-blk data plane]->[Multiqueue block layer]
[virtio-scsi data plane]
[Split BlockBackend from BlockDriverState]->[Get rid of bdrv_swap]
[Split BlockBackend from BlockDriverState]->[BlockFilter and dynamic reconfiguration of the BDS graph]
[BlockFilter and dynamic reconfiguration of the BDS graph]->[New design for better encryption support]
[BlockFilter and dynamic reconfiguration of the BDS graph]->[Active mirroring]
[BlockFilter and dynamic reconfiguration of the BDS graph]->[Throttle as a filter ?]
[Remove bs->job field and allow multiple jobs on a BDS]
[iotests.py - Python module for writing qemu-iotests]->[Test Infrastructure]
[Tests for -drive discard]
[Adding TLS to NBD]
[AHCI emulation]
[Block job I/O throttling]
[I/O accounting]
[Add guard page to bottom of coroutine stack]
[I/O throttling groups]
[QMP added to qemu-nbd] -> [Export QEMU volumes as ISCSI or FCOE with QMP]
</code>
(result: http://yuml.me/c2e9edb8)


== Old ==
* [[ToDo/Block/old|Old block layer roadmap]]
* [[ToDo/Block/Qcow2PerformanceRoadmap|Old QCOW2 performance roadmap]]

''Latest revision as of 10:51, 11 November 2020''