Features/Livebackup

{| class="wikitable" style="margin: 2em; border:1px solid black;"
|
'''Note: This feature proposal is available via archive only; the code was not merged into QEMU.

The [[Features/IncrementalBackup|Incremental Backup]] feature was merged into QEMU instead for the 2.4 development window.

For the old feature proposal, please see [http://wiki.qemu.org/index.php?title=Features/Livebackup&oldid=1188 Livebackup (2011-05-09)].'''
|}

[[Category:Obsolete feature pages]]

== Livebackup - A full backup solution for making full and incremental disk backups of a running VM ==
Livebackup allows an administrator or a management server to use the livebackup_client program to connect to the qemu process and copy the disk blocks that have been modified since the last backup was taken.

== Contact ==
Jagane Sundar (jagane at sundar dot org)

== Overview ==
The goal of this project is to add the ability to do full and incremental disk backups of a running VM. These backups are transferred over a TCP connection to a backup server, and the virtual disk images are reconstituted there. This project does not transfer the memory contents of the running VM or the device states of emulated devices, i.e. livebackup is not VM suspend.

== Use Cases ==
Today, IaaS cloud platforms such as EC2 provide two types of virtual disks in VM instances:
* ephemeral virtual disks that are lost if there is a hardware failure
* EBS storage volumes, which are costly
I think that an efficient disk backup mechanism will enable a third type of virtual disk: one that is backed up, perhaps every hour or so. A cloud operator using KVM virtual machines could then offer three types of VMs:
* an ephemeral VM that is lost if a hardware failure happens
* a backed-up VM that can be restored from the last hourly backup
* a fully highly available VM running off shared storage
 
== High Level Design ==
* When the qemu block driver is called to open a virtual disk file, it checks for the presence of a file with the suffix .livebackupconf; for example, when opening vdisk0.img it looks for a file called vdisk0.img.livebackupconf.
* If the .livebackupconf file exists, the disk is part of the backup set, and the block driver for that virtual disk starts tracking modified blocks in an in-memory dirty blocks bitmap. The bitmap is operated on in memory while the VM is running, saved to a file called vdisk0.img.dirty_blocks when the VM shuts down, and read back in when the VM boots, so it persists across VM reboots.
* qemu starts a livebackup thread that listens on a TCP port for connections from livebackup_client.
* When the operator wants to take an incremental backup of the running VM, they run the livebackup_client program, which opens a TCP connection to the qemu process's livebackup thread.
* First, the livebackup_client issues a snapshot command.
* qemu saves the dirty blocks bitmap of each virtual disk in a snapshot struct and allocates a new in-memory dirty blocks bitmap for each virtual disk.
* From then on, until the livebackup_client destroys the snapshot, each write from the VM is checked by the livebackup interposer: if the blocks being written are marked as dirty in the snapshot struct's dirty blocks bitmap, the original blocks are saved off in a COW file before the VM write is allowed to proceed (see the write-path sketch after this list).
* The livebackup_client then iterates over all of the dirty blocks in the snapshot and transfers them to the backup server. It can either reconstitute the virtual disk image as of the time of the backup by writing the blocks to the virtual disk image file, or save the blocks in a COW redo file in qcow, qcow2 or vmdk format (see the client-side sketch after this list).
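
The write-path behaviour in the last two items can be summarised as: consult the snapshot's frozen dirty blocks bitmap, copy the original data aside if the block belongs to the snapshot, then mark the write in the new bitmap and let it proceed. The standalone C sketch below illustrates that flow only; the LivebackupDisk struct, livebackup_intercept_write(), the 4 KiB block size and the COW record layout are assumptions made for this example, not QEMU or Livebackup code.

<pre>
/* Standalone sketch of the write interposer described above.  All names,
 * the 4 KiB block size, and the COW record layout are illustrative
 * assumptions, not QEMU or Livebackup internals. */
#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 4096

typedef struct {
    uint8_t *cur_dirty;   /* bitmap accumulating writes for the next backup  */
    uint8_t *snap_dirty;  /* bitmap frozen when the client took the snapshot */
    FILE *cow_file;       /* pre-snapshot contents of re-written blocks      */
    FILE *disk;           /* the virtual disk image itself                   */
    uint64_t nb_blocks;
} LivebackupDisk;

static int  bit_test(const uint8_t *map, uint64_t i) { return (map[i / 8] >> (i % 8)) & 1; }
static void bit_set(uint8_t *map, uint64_t i)        { map[i / 8] |=  (1 << (i % 8)); }
static void bit_clear(uint8_t *map, uint64_t i)      { map[i / 8] &= ~(1 << (i % 8)); }

/* Called before a guest write to blocks [first, first + nb) is applied. */
static int livebackup_intercept_write(LivebackupDisk *d, uint64_t first, uint64_t nb)
{
    uint8_t buf[BLOCK_SIZE];

    for (uint64_t b = first; b < first + nb && b < d->nb_blocks; b++) {
        /* A snapshot is open and this block belongs to it: save the original
         * contents to the COW file before the guest overwrites them. */
        if (d->snap_dirty && bit_test(d->snap_dirty, b)) {
            if (fseeko(d->disk, (off_t)b * BLOCK_SIZE, SEEK_SET) != 0 ||
                fread(buf, 1, BLOCK_SIZE, d->disk) != BLOCK_SIZE) {
                return -1;
            }
            fwrite(&b, sizeof(b), 1, d->cow_file);    /* record: block number */
            fwrite(buf, 1, BLOCK_SIZE, d->cow_file);  /* followed by its data */
            bit_clear(d->snap_dirty, b);  /* copy each block at most once     */
        }
        /* Record the write for the next incremental backup. */
        bit_set(d->cur_dirty, b);
    }
    return 0;   /* caller now lets the guest write proceed */
}
</pre>

Clearing the snapshot bit after the copy means each block's pre-snapshot contents are saved at most once; a reader of the snapshot would take a block from the COW file if it appears there, and from the live image otherwise.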
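
On the client side, the flow described above is: connect to the livebackup thread's TCP port, issue the snapshot command, then pull the dirty blocks and patch them into a local copy of the virtual disk image. The proposal does not define a wire format, so the one used below (a plain-text "snap" command followed by fixed-size block-number/data records, terminated by a block number of UINT64_MAX, with no byte-order handling) is invented purely for illustration.

<pre>
/* Illustrative livebackup_client-style transfer loop.  The wire protocol
 * here is made up for this sketch and is not the Livebackup protocol. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define BLOCK_SIZE 4096

/* Read exactly len bytes from a socket, or fail. */
static int read_full(int fd, void *buf, size_t len)
{
    size_t done = 0;
    while (done < len) {
        ssize_t n = read(fd, (char *)buf + done, len - done);
        if (n <= 0) {
            return -1;
        }
        done += n;
    }
    return 0;
}

int main(int argc, char **argv)
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s <host> <port> <output-image>\n", argv[0]);
        return 1;
    }

    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port   = htons(atoi(argv[2])) };
    if (inet_pton(AF_INET, argv[1], &addr.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        return 1;
    }

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* Ask the livebackup thread to freeze a snapshot of the dirty bitmaps. */
    const char *cmd = "snap\n";
    if (write(fd, cmd, strlen(cmd)) < 0) {
        perror("write");
        return 1;
    }

    /* Patch each received block into the local copy of the disk image. */
    FILE *out = fopen(argv[3], "r+b");
    if (!out) {
        out = fopen(argv[3], "w+b");
    }
    if (!out) {
        perror("fopen");
        return 1;
    }

    uint8_t buf[BLOCK_SIZE];
    uint64_t blkno;
    while (read_full(fd, &blkno, sizeof(blkno)) == 0 && blkno != UINT64_MAX) {
        if (read_full(fd, buf, BLOCK_SIZE) != 0) {
            break;
        }
        fseeko(out, (off_t)blkno * BLOCK_SIZE, SEEK_SET);
        fwrite(buf, 1, BLOCK_SIZE, out);
    }

    fclose(out);
    close(fd);
    return 0;
}
</pre>

Run against a copy of the last full backup, a loop like this leaves the image file holding the virtual disk as it was at the moment the snapshot was taken; writing the records into a qcow2 redo file instead would correspond to the second option described in the last list item.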
 
 
== Other technologies that may be used to solve this problem ==
=== LVM snapshots ===
It is possible to create a separate LVM logical volume for each virtual disk in the VM. When the VM needs to be backed up, each of these volumes is snapshotted. At this point things get messy: I don't really know of a good way to identify the blocks that were modified since the last backup. Also, once those blocks are identified, we need a mechanism to transfer them over a TCP connection to the backup server. Perhaps the 'dirty blocks' map could be exported to userland and a daemon used to transfer the blocks, or maybe a kernel thread capable of listening on TCP sockets could transfer the blocks to the backup client (I don't know if this is possible).
