Features/IncrementalBackup

Overview

QEMU's Incremental Backup feature is implemented using user-manipulable 'dirty bitmap' primitives. By allowing the user (a human, a management library such as libvirt, or a backup application) to interface directly with these primitives, a rich variety of functionality can be achieved without enforcing inflexible paradigms.

This wiki document is based on the source code documentation for the feature, which can be found here: https://github.com/qemu/qemu/blob/master/docs/interop/bitmaps.rst

Development & Status

The core functionality of the incremental backup feature was merged 2015-04-28 and was released in QEMU 2.4. The transactional command interface was released in QEMU 2.5.

There are several features still pending:

Migration

Migration support is necessary to move a guest from one host to another without losing the bitmap data, which is normally stored only in host RAM.

  • Live Migration
    • Vladimir has posted a Live Migration series for bitmaps, which uses the usual re-iterative technique of sending changed blocks of data during a migration until the pivot.
    • This approach uses a "meta bitmap" to track changes in the dirty bitmap during the course of the migration.
    • patchset
  • "Postcopy" Migration
    • Vladimir and Denis have also posted a "Postcopy" style bitmap migration series. The idea is that bitmap data is non-critical: if the source QEMU is lost after a pivot, the bitmap can simply be recreated alongside a new full backup.
    • patchset

Current blocker: Input and design planning required from migration maintainers.

Persistence

Persistence is the ability to save dirty block information to persistent storage in order to completely shut down QEMU without losing the information necessary to make the next incremental backup.

Per-Format

Per-format persistence is the strategy of having each file format be responsible for implementing a method to store dirty bitmaps.

  • Vladimir Sementsov-Ogievskiy has amended the qcow2 specification to allow for storing bitmaps alongside the data they describe.
  • Other formats, such as parallels, may follow suit later. At the moment, qcow2 is the only format with patches for bitmap persistence.

Current blocker: Pending review by block maintainers for QEMU 2.9.

Universal

Universal persistence is the strategy of having a unified approach to storing bitmaps regardless of the format of the data the bitmap describes. It will be necessary for supporting things such as RAW files and network devices, which do not have a metadata-bearing format like qcow2 to rely on.

  • An early design goal of incremental backup was the ability to save bitmap data for any arbitrary format, including RAW files, which would allow us to keep incremental backups of network devices and more.
  • Fam Zheng proposed a new lightweight wrapper format, QBM, to perform the task of linking bitmap data with arbitrary formats. patchset
    • There is opposition to this format because it introduces a new primary QEMU-originated format, increasing the maintenance burden and possibly duplicating the effort already spent on allowing the qcow2 and parallels formats to save bitmaps natively.
  • A different idea is to amend qcow2 to provide a write-through wrapper mode similar to QBM, where qcow2 will function as a top-level wrapper responsible for storing bitmaps. This appears to be the preferred solution currently.

Current blocker: Lack of a compelling use case. There is currently little movement in this area as we wait for solid use case requirements to materialize. qcow2 is thought to be sufficient for now as we flesh out other areas of this broader feature set.

External Backup API

For external standalone backup applications, additional interfaces to QEMU are desired to manipulate the dirty block data instead of relying on QEMU's internal backup mechanisms to facilitate data transfer.

Parallels has led the push for such an API.

  • QEMU currently offers dirty blocks in a "push-only" mechanism via the QMP command drive-backup, which unconditionally sends all dirty blocks over the wire through an NBD export.
  • Some backup application plugin APIs may expect an iterative method instead, where the dirty blocks are first queried and then "pulled" from QEMU.
  • The general consensus for how to "pull" blocks from QEMU appears to be to use image fleecing and NBD in order to create a point-in-time snapshot that an external client can pull data from (see the sketch after this list).
  • An NBD specification amendment to allow this behavior has been proposed and merged upstream.
  • Virtuozzo has posted patches adding this proposed extension to QEMU's NBD server for early review during the 2.9 window.
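
As a rough sketch of the fleecing recipe (not a definitive recipe: it assumes a temporary qcow2 overlay, backed by drive0's image, has already been attached as node "fleece0" via blockdev-add, whose exact syntax varies between QEMU versions):

{ "execute": "blockdev-backup",
  "arguments": { "device": "drive0", "target": "fleece0", "sync": "none" } }
{ "execute": "nbd-server-start",
  "arguments": { "addr": { "type": "inet",
                           "data": { "host": "127.0.0.1", "port": "10809" } } } }
{ "execute": "nbd-server-add",
  "arguments": { "device": "fleece0" } }

The sync=none job copies old data into the overlay as the guest writes, so an external client connecting to the NBD export sees a consistent point-in-time view while the guest keeps running.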

Current blocker: Review and testing of NBD specification extensions for QEMU.

Bitmap Query

The QMP control channel, however, is not suited for the transfer of dirty block information, so a new data transfer channel is needed to communicate the dirty status of individual clusters to the application.

Bitmap Query Proposals:

  • NBD: Export the bitmap data itself as an NBD device to be queried.
  • SCSI: Implement the SCSI "LBA Status" command in the NBD protocol to allow clients to query the status of blocks before issuing read commands.
    • Possibly a non-starter: the LBA Status command as such answers for all layers, not just the top, so it can't quite be used as a "dirty" block status indicator, unless we use proprietary status bits in the reply.
    • Perhaps a new non-SCSI NBD command can be added instead?
  • Socket: QEMU gains a new dirty-bitmap-export command, which sends the raw bitmap data to a user-specified URI akin to the way migration streams are created. QEMU acts as a client to a backup application server and transmits the bitmap requested. The backup application is then free to pull whichever blocks it needs.
  • QMP: Acknowledge that this might not be /so/ bad, and use the QMP interface for transmission of dirty block information.

Current blockers:

  • Agreeing on the data transfer mechanism for obtaining the dirty block status
  • Merging image fleecing support in a way that supports atomic dirty bitmaps

Dirty Bitmaps and Incremental Backup

Dirty bitmaps are objects that track which data needs to be backed up for the next incremental backup. They do this by taking note of which sectors on disk have been modified since the last incremental backup. The granularity at which data is tracked (every 32KiB? 64KiB? 128KiB? etc.) is configurable by the user. As a point of reference, at 64KiB granularity a 64GiB disk is described by 1,048,576 bits, or 128KiB of bitmap data held in host memory.

A dirty bitmap tracks the modification of data only for the node it is attached to, which means you need at least one bitmap per drive you wish to back up incrementally.

Dirty bitmaps can be created by the user at any time and can be attached to any node in the drive graph, not just the root node.
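
Because the "node" argument of the bitmap commands accepts node names as well as device IDs, a bitmap can be attached below the root. As a minimal sketch, assuming some node in drive0's chain was assigned the (hypothetical) node-name "node0":

{ "execute": "block-dirty-bitmap-add",
  "arguments": {
    "node": "node0",
    "name": "bitmap0"
  }
}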

Dirty Bitmap Names

A dirty bitmap's name is unique to the node, but bitmaps attached to different nodes can share the same name.

  • A drive with an id of 'drive0' can have a bitmap attached simply named 'bitmap'.
  • A different drive with id 'drive1' can also have a bitmap attached named 'bitmap' (see the sketch after this list).
  • Dirty bitmaps created for internal use by QEMU may be anonymous and have no name; there can be any number of anonymous bitmaps per node.
  • Any user-created bitmap must have a name, and the name must not be empty ("").
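
For example, assuming devices 'drive0' and 'drive1' both exist, both of the following commands should succeed, yielding two distinct bitmaps that happen to share a name:

{ "execute": "block-dirty-bitmap-add",
  "arguments": { "node": "drive0", "name": "bitmap" } }
{ "execute": "block-dirty-bitmap-add",
  "arguments": { "node": "drive1", "name": "bitmap" } }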

Bitmap Modes

  • A Bitmap can be "frozen," which means that it is currently in-use by a backup operation and cannot be deleted, renamed, written to, reset, etc. It is effectively completely immutable.
  • A Bitmap can be "disabled", which is another internal mode that simply puts the bitmap in a "read only" state. This mode is used principally during migration. The bitmap cannot be written to or reset in this state.
  • The normal operating mode for a bitmap is "active."
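
As a rough sketch (abridged output, with field names as in the 2.4-era QAPI schema; treat the exact shape as illustrative), query-block lists each device's attached bitmaps along with a frozen flag:

{ "execute": "query-block" }

{ "return": [
    { "device": "drive0",
      "dirty-bitmaps": [
        { "name": "bitmap0", "count": 0,
          "granularity": 65536, "frozen": false } ] } ] }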

Basic QMP Usage

Supported Commands

block-dirty-bitmap-add
Create a new, empty bitmap and attach it to a specific node.
block-dirty-bitmap-remove
Delete a bitmap that is not currently in use by a backup operation.
block-dirty-bitmap-clear
Reset a specific bitmap back to a clean slate, as if it was newly created.

Creation

  • To create a new bitmap, enabled, on the drive with id=drive0:
{ "execute": "block-dirty-bitmap-add",
  "arguments": {
    "node": "drive0",
    "name": "bitmap0"
  }
}
  • This bitmap will have a default granularity that matches the cluster size of its associated drive, if available, clamped to the range [4KiB, 64KiB]. The current default for qcow2 is 64KiB.
  • To create a new bitmap that tracks changes in 32KiB segments:
{ "execute": "block-dirty-bitmap-add",
  "arguments": {
    "node": "drive0",
    "name": "bitmap0",
    "granularity": 32768
  }
}

Deletion

  • Bitmaps that are frozen cannot be deleted.
  • Deleting the bitmap does not impact any other bitmaps attached to the same node, nor does it affect any backups already created from this node.
  • Because bitmap names are unique only to the node to which they are attached, you must specify the node/drive name here, too.
{ "execute": "block-dirty-bitmap-remove",
  "arguments": {
    "node": "drive0",
    "name": "bitmap0"
  }
}

Resetting

  • Resetting a bitmap will clear all information it holds.
  • An incremental backup created from an empty bitmap will copy no data, as if nothing had changed.
{ "execute": "block-dirty-bitmap-clear",
  "arguments": {
    "node": "drive0",
    "name": "bitmap0"
  }
}

Transactions

Justification

Bitmaps can be safely modified when the VM is paused or halted by using the basic QMP commands. For instance, you might perform the following actions:

  1. Boot the VM in a paused state.
  2. Create a full drive backup of drive0.
  3. Create a new bitmap attached to drive0.
  4. Resume execution of the VM.
  5. Incremental backups are ready to be created.

At this point, the bitmap and drive backup would be correctly in sync, and incremental backups made from this point forward would be correctly aligned to the full drive backup.
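
As a minimal QMP sketch of this paused-VM workflow (assuming the guest was started paused, e.g. with -S, and a drive with id=drive0):

{ "execute": "drive-backup",
  "arguments": {
    "device": "drive0",
    "sync": "full",
    "format": "qcow2",
    "target": "/path/to/full_backup.img"
  }
}
{ "execute": "block-dirty-bitmap-add",
  "arguments": {
    "node": "drive0",
    "name": "bitmap0"
  }
}
{ "execute": "cont" }

In practice you would wait for the backup job's BLOCK_JOB_COMPLETED event before issuing cont; because the guest is not running, no writes can slip in between the full backup and the bitmap's creation.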

This is not particularly useful if we decide we want to start incremental backups after the VM has been running for a while. In that case, the bitmap creation and the full backup must happen atomically, using actions such as the following:

  1. Boot the VM and begin execution.
  2. Using a single transaction, perform the following operations:
    • Create bitmap0.
    • Create a full drive backup of drive0.
  3. Incremental backups are now ready to be created.

Supported Bitmap Transactions

  • block-dirty-bitmap-add
  • block-dirty-bitmap-clear

Their usage is identical to that of the corresponding standalone QMP commands; see below for examples.

Example: New Incremental Backup

As outlined in the justification, perhaps we want to create a new incremental backup chain attached to a drive.

{ "execute": "transaction",
  "arguments": {
    "actions": [
      {"type": "block-dirty-bitmap-add",
       "data": {"node": "drive0", "name": "bitmap0"} },
      {"type": "drive-backup",
       "data": {"device": "drive0", "target": "/path/to/full_backup.img",
                "sync": "full", "format": "qcow2"} }
    ]
  }
}

Example: New Incremental Backup Anchor Point

Maybe we just want to create a new full backup with an existing bitmap, resetting the bitmap to track the new chain.

{ "execute": "transaction",
  "arguments": {
    "actions": [
      {"type": "block-dirty-bitmap-clear",
       "data": {"node": "drive0", "name": "bitmap0"} },
      {"type": "drive-backup",
       "data": {"device": "drive0", "target": "/path/to/new_full_backup.img",
                "sync": "full", "format": "qcow2"} }
    ]
  }
}

Incremental Backups

The star of the show.

Nota Bene! Only incremental backups of entire drives are supported for now. So despite the fact that you can attach a bitmap to any arbitrary node, bitmaps are currently only useful when attached to the root node. This is because drive-backup only supports drives/devices instead of arbitrary nodes.

Example: First Incremental Backup

  1. Create a full backup and sync it to the dirty bitmap, as in the transactional examples above; or with the VM offline, manually create a full copy and then create a new bitmap before the VM begins execution.

    • Let's assume the full backup is named 'full_backup.img'.
    • Let's assume the bitmap you created is 'bitmap0' attached to 'drive0'.
  2. Create a destination image for the incremental backup that utilizes the full backup as a backing image.

    • Let's assume it is named 'incremental.0.img'.
    # qemu-img create -f qcow2 incremental.0.img -b full_backup.img -F qcow2
  3. Issue the incremental backup command:

    { "execute": "drive-backup",
      "arguments": {
        "device": "drive0",
        "bitmap": "bitmap0",
        "target": "incremental.0.img",
        "format": "qcow2",
        "sync": "incremental",
        "mode": "existing"
      }
    }

Example: Second Incremental Backup

  1. Create a new destination image for the incremental backup that points to the previous one, e.g.: 'incremental.1.img'

    # qemu-img create -f qcow2 incremental.1.img -b incremental.0.img -F qcow2
  2. Issue a new incremental backup command. The only difference from the first incremental backup is the new target image.

    { "execute": "drive-backup",
      "arguments": {
        "device": "drive0",
        "bitmap": "bitmap0",
        "target": "incremental.1.img",
        "format": "qcow2",
        "sync": "incremental",
        "mode": "existing"
      }
    }

Errors

  • In the event of an error that occurs after a backup job is successfully launched, either by a direct QMP command or a QMP transaction, the user will receive a BLOCK_JOB_COMPLETE event with a failure message, accompanied by a BLOCK_JOB_ERROR event.
  • In the case of a job being cancelled, the user will receive a BLOCK_JOB_CANCELLED event instead of the COMPLETE/ERROR pair.
  • In either case, the dirty bitmap is safely rolled back, so the dirty-block data it tracks is not lost. The image file created for the failed attempt can be safely deleted.
  • Once the underlying problem is fixed (e.g. more storage space is freed up), you can simply retry the incremental backup command with the same bitmap.

Example

  1. Create a target image:

    # qemu-img create -f qcow2 incremental.0.img -b full_backup.img -F qcow2
  2. Attempt to create an incremental backup via QMP:

    { "execute": "drive-backup",
      "arguments": {
        "device": "drive0",
        "bitmap": "bitmap0",
        "target": "incremental.0.img",
        "format": "qcow2",
        "sync": "incremental",
        "mode": "existing"
      }
    }
  3. Receive an event notifying us of failure:

    { "timestamp": { "seconds": 1424709442, "microseconds": 844524 },
      "data": { "speed": 0, "offset": 0, "len": 67108864,
                "error": "No space left on device",
                "device": "drive1", "type": "backup" },
      "event": "BLOCK_JOB_COMPLETED" }
  4. Delete the failed incremental, and re-create the image.

    # rm incremental.0.img
    # qemu-img create -f qcow2 incremental.0.img -b full_backup.img -F qcow2
  5. Retry the command after fixing the underlying problem, such as freeing up space on the backup volume:

    { "execute": "drive-backup",
      "arguments": {
        "device": "drive0",
        "bitmap": "bitmap0",
        "target": "incremental.0.img",
        "format": "qcow2",
        "sync": "incremental",
        "mode": "existing"
      }
    }
  6. Receive confirmation that the job completed successfully:

    { "timestamp": { "seconds": 1424709668, "microseconds": 526525 },
      "data": { "device": "drive1", "type": "backup",
                "speed": 0, "len": 67108864, "offset": 67108864},
      "event": "BLOCK_JOB_COMPLETED" }

Partial Transactional Failures

  • Sometimes, a transaction will launch successfully and return success, but the backup jobs themselves may fail later. A management application may therefore have to deal with a partial backup failure after a successful transaction.
  • If multiple backup jobs are specified in a single transaction, the failure of one job does not affect the other backup jobs in any way.
  • The job(s) that succeeded will clear the dirty bitmap associated with the operation, but the job(s) that failed will not. It is not "safe" to delete any incremental backups that were created successfully in this scenario, even though others failed.

Example

  • QMP example highlighting two backup jobs:

    { "execute": "transaction",
      "arguments": {
        "actions": [
          { "type": "drive-backup",
            "data": { "device": "drive0", "bitmap": "bitmap0",
                      "format": "qcow2", "mode": "existing",
                      "sync": "incremental", "target": "d0-incr-1.qcow2" } },
          { "type": "drive-backup",
            "data": { "device": "drive1", "bitmap": "bitmap1",
                      "format": "qcow2", "mode": "existing",
                      "sync": "incremental", "target": "d1-incr-1.qcow2" } },
        ]
      }
    }
  • QMP example response, highlighting one success and one failure:
    • Acknowledgement that the Transaction was accepted and jobs were launched:

      { "return": {} }
    • Later, QEMU sends notice that the first job was completed:

      { "timestamp": { "seconds": 1447192343, "microseconds": 615698 },
        "data": { "device": "drive0", "type": "backup",
                   "speed": 0, "len": 67108864, "offset": 67108864 },
        "event": "BLOCK_JOB_COMPLETED"
      }
    • Later yet, QEMU sends notice that the second job has failed:

      { "timestamp": { "seconds": 1447192399, "microseconds": 683015 },
        "data": { "device": "drive1", "action": "report",
                  "operation": "read" },
        "event": "BLOCK_JOB_ERROR" }
      { "timestamp": { "seconds": 1447192399, "microseconds": 685853 },
        "data": { "speed": 0, "offset": 0, "len": 67108864,
                  "error": "Input/output error",
                  "device": "drive1", "type": "backup" },
        "event": "BLOCK_JOB_COMPLETED" }
  • In the above example, "d0-incr-1.qcow2" is valid and must be kept, but "d1-incr-1.qcow2" is invalid and should be deleted. If a VM-wide incremental backup of all drives at a single point in time is to be made, new backups for both drives will need to be made, taking into account that a new incremental backup for drive0 needs to be based on top of "d0-incr-1.qcow2" (see the sketch below).
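
As a hypothetical recovery sketch, with "d1-full.qcow2" standing in for whatever drive1's last good backup actually is, the next point-in-time attempt chains drive0's new target on top of its successful incremental while drive1 retries from its previous anchor:

# qemu-img create -f qcow2 d0-incr-2.qcow2 -b d0-incr-1.qcow2 -F qcow2
# qemu-img create -f qcow2 d1-incr-1.qcow2 -b d1-full.qcow2 -F qcow2

{ "execute": "transaction",
  "arguments": {
    "actions": [
      { "type": "drive-backup",
        "data": { "device": "drive0", "bitmap": "bitmap0",
                  "format": "qcow2", "mode": "existing",
                  "sync": "incremental", "target": "d0-incr-2.qcow2" } },
      { "type": "drive-backup",
        "data": { "device": "drive1", "bitmap": "bitmap1",
                  "format": "qcow2", "mode": "existing",
                  "sync": "incremental", "target": "d1-incr-1.qcow2" } }
    ]
  }
}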

Grouped Completion Mode

  • While jobs launched by transactions normally complete or fail on their own, it is possible to instruct them to complete or fail together as a group.
  • QMP transactions take an optional properties structure that can affect the semantics of the transaction.
  • The "completion-mode" transaction property can be either "individual" which is the default, legacy behavior described above, or "grouped," a new behavior detailed below.
  • Delayed Completion: In grouped completion mode, no jobs will report success until all jobs are ready to report success.
  • Grouped failure: If any job fails in grouped completion mode, all remaining jobs will be cancelled. Any incremental backups will restore their dirty bitmap objects as if no backup command was ever issued.
    • Regardless of whether QEMU reports a particular incremental backup job as CANCELLED or as an ERROR, the in-memory bitmap will be restored.

Example

  • Here's the same example scenario from above with the new property:

    { "execute": "transaction",
      "arguments": {
        "actions": [
          { "type": "drive-backup",
            "data": { "device": "drive0", "bitmap": "bitmap0",
                      "format": "qcow2", "mode": "existing",
                      "sync": "incremental", "target": "d0-incr-1.qcow2" } },
          { "type": "drive-backup",
            "data": { "device": "drive1", "bitmap": "bitmap1",
                      "format": "qcow2", "mode": "existing",
                      "sync": "incremental", "target": "d1-incr-1.qcow2" } },
        ],
        "properties": {
          "completion-mode": "grouped"
        }
      }
    }
  • QMP example response, highlighting a failure for drive1:
    • Acknowledgement that the Transaction was accepted and jobs were launched:

      { "return": {} }
    • Later, QEMU sends notice that the second job has errored out, but that the first job was also cancelled:

      { "timestamp": { "seconds": 1447193702, "microseconds": 632377 },
        "data": { "device": "drive1", "action": "report",
                  "operation": "read" },
        "event": "BLOCK_JOB_ERROR" }
      { "timestamp": { "seconds": 1447193702, "microseconds": 640074 },
        "data": { "speed": 0, "offset": 0, "len": 67108864,
                  "error": "Input/output error",
                  "device": "drive1", "type": "backup" },
        "event": "BLOCK_JOB_COMPLETED" }
      { "timestamp": { "seconds": 1447193702, "microseconds": 640163 },
        "data": { "device": "drive0", "type": "backup", "speed": 0,
                  "len": 67108864, "offset": 16777216 },
        "event": "BLOCK_JOB_CANCELLED" }