Features/COLO

From QEMU

Revision as of 01:24, 6 July 2015

Summary

COLO - COarse Grain LOck Stepping
Paper: academic paper in SOCC 2013

Virtual machine (VM) replication is a well-known technique for providing application-agnostic, software-implemented hardware fault tolerance ("non-stop service"). COLO is a high availability solution in which both the primary VM (PVM) and the secondary VM (SVM) run in parallel. They receive the same requests from the client and generate responses in parallel. If the response packets from the PVM and the SVM are identical, they are released immediately; otherwise, an on-demand VM checkpoint is performed. The idea was presented at Xen Summit 2012 and 2013, in an academic paper at SOCC 2013, and at KVM Forum 2013: Kvm-forum-2013-COLO.pdf
COLO Framework:
Kvm-colo.jpg

Several RFC proposals have also been posted to the qemu-devel mailing list:

http://lists.nongnu.org/archive/html/qemu-devel/2014-06/msg05567.html
http://lists.nongnu.org/archive/html/qemu-devel/2014-09/msg04459.html

Wiki: http://wiki.qemu.org/Features/COLO
GitHub (check out the latest colo branch):

COLO frame branch
COLO block replication branch
COLO proxy branch
* Copyright (c) 2015 HUAWEI TECHNOLOGIES CO.,LTD.
* Copyright (c) 2015 FUJITSU LIMITED
* Copyright (c) 2015 Intel Corporation

Components

  • COLO Manager:
    • COLO Checkpoint/Failover Controller
Modifications of the save/restore flow to realize continuous migration, making sure that the state of the VM
on the Secondary side is always consistent with the VM on the Primary side.
(Refer to MicroCheckpointing.)
    • COLO Disk Manager
When the primary VM writes data into its image, the COLO disk manager captures the data
and sends it to the secondary VM, which makes sure that the content of the secondary VM's image is
consistent with the content of the primary VM's image.
  • COLO Proxy:
We need a module to compare the packets returned by the Primary VM and the Secondary VM,
and to decide whether to start a checkpoint according to some rules. It is a Linux kernel module
on the host.

Current Status

  • COLO Manager:
    • COLO Controller/Frame (view on GitHub, check out the latest colo branch; RFC patch v3 has been posted)
    • COLO Disk Manager (RFC patch has been posted)
  • COLO Proxy (view on GitHub)

Failover command

We provide a QMP command:

colo-lost-heartbeat

This command tells COLO that the heartbeat has been lost; COLO will then take the appropriate action.

External heartbeat modules can use this qmp command to communicate with COLO. Users can choose whatever heartbeat implementation they want.
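For example, an external heartbeat module could deliver the command over a QMP socket. A minimal sketch, assuming QEMU was started with an additional -qmp unix:/tmp/qmp.sock,server,nowait option (the socket path is hypothetical) and that socat is installed on the host:

```shell
# Hypothetical QMP socket path; it must match the -qmp option QEMU was started with.
QMP_SOCK=/tmp/qmp.sock

# A QMP session must negotiate capabilities before issuing commands,
# so colo-lost-heartbeat is preceded by qmp_capabilities.
qmp_cmds() {
    printf '%s\n' \
        '{"execute": "qmp_capabilities"}' \
        '{"execute": "colo-lost-heartbeat"}'
}

# Send the commands to the monitor socket:
# qmp_cmds | socat - "UNIX-CONNECT:$QMP_SOCK"
```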

Failover rule:
TODO

How to test COLO

Hardware requirements

There must be at least one directly connected NIC to forward the network requests from the client to the secondary VM.
The directly connected NIC must not be used for any other purpose. If your guest has more than one NIC,
you should have a directly connected NIC for each guest NIC.
If you don't have enough directly connected NICs, you can use VLANs.

Network link topology

=================================normal ======================================
                                +--------+
                                |client  |
         master                 +----+---+                    slave
-------------------------+           |            + -------------------------+
   PVM                   |           +            |                          |
+-------+         +----[eth0]-----[switch]-----[eth0]---------+              |
|guest  |     +---+-+    |                        |       +---+-+            |
|     [tap0]--+ br0 |    |                        |       | br0 |            |
|       |     +-----+  [eth1]-----[forward]----[eth1]--+  +-----+     SVM    |
+-------+                |                        |    |            +-------+|
                         |                        |    |  +-----+   | guest ||
                       [eth2]---[checkpoint]---[eth2]  +--+br1  |-[tap0]    ||
                         |                        |       +-----+   |       ||
                         |                        |                 +-------+|
-------------------------+                        +--------------------------+
e.g.
master:
br0: 192.168.0.33
eth1: 192.168.1.33
eth2: 192.168.2.33

slave:
br0: 192.168.0.88
br1: no ip address
eth1: 192.168.1.88
eth2: 192.168.2.88
===========================after failover=====================================
                                +--------+
                                |client  |
    master (dead)               +----+---+                 slave (alive)
-------------------------+           |            ---------------------------+
  PVM                    |           +            |                          |
+-------+         +----[eth0]-----[switch]-----[eth0]-------+                |
|guest  |     +---+-+    |                        |     +---+-+              |
|     [tap0]--+ br0 |    |                        |     | br0 +--+           |
|       |     +-----+  [eth1]-----[forward]----[eth1]   +-----+  |     SVM   |
+-------+                |                        |              |  +-------+|
                         |                        |     +-----+  |  | guest ||
                       [eth2]---[checkpoint]---[eth2]   |br1  |  +[tap0]    ||
                         |                        |     +-----+     |       ||
                         |                        |                 +-------+|
-------------------------+                        +--------------------------+
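The example addresses above can be configured with the ip tool. A minimal sketch for the master side (the NIC names and addresses follow the example topology and must be adapted to your hosts):

```shell
# Master side: addresses from the example topology above (adapt to your setup).
ip addr add 192.168.1.33/24 dev eth1   # forward link to the slave
ip addr add 192.168.2.33/24 dev eth2   # checkpoint link to the slave
ip link set eth1 up
ip link set eth2 up
# br0 (192.168.0.33) is configured in the bridge setup section further down.
```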

Test environment prepare

  • Prepare host kernel
The colo-proxy kernel module needs to cooperate with the Linux kernel.
You should apply colo-patch-for-kernel.patch to the kernel sources,
then compile and install the new kernel (kernel 3.18.10 is recommended).
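The patching and build steps might look like the following sketch; the kernel source tree location and the patch location are hypothetical and must be adapted:

```shell
# Hypothetical source tree location; adjust to where your kernel sources live.
cd /usr/src/linux-3.18.10
patch -p1 < /path/to/colo-patch-for-kernel.patch
make olddefconfig        # reuse the current config, accepting new defaults
make -j"$(nproc)" && make modules_install && make install
reboot                   # boot into the patched kernel
```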
  • Proxy module
    • The proxy module is used for network packet comparison.
# git clone https://github.com/coloft/colo-proxy.git
# cd ./colo-proxy
# make
# make install
  • Modified iptables
    • We have added a new rule to the iptables command.
# git clone https://github.com/coloft/iptables.git
# cd ./iptables
# git checkout colo
# ./autogen.sh && ./configure
# make && make install
  • Modified arptables
    • Please get the latest arptables, then compile and install it.
  • Build QEMU
# cd qemu
# ./configure --target-list=x86_64-softmmu --enable-colo --enable-quorum
# make
  • Set Up the Bridge and network environment
    • You must set up your network environment as in the picture above (Network link topology, normal case).
On the master, set up a bridge br0 using the brctl command, like:
# ifconfig eth0 down
# ifconfig eth0 0.0.0.0
# brctl addbr br0
# brctl addif br0 eth0
# ifconfig br0 192.168.0.33 netmask 255.255.255.0
# ifconfig eth0 up
On the slave, set up two bridges, br0 and br1; the commands are the same as above.
Please note that br1 is linked to eth1 (the forward NIC).
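On the slave, the equivalent commands might look like the following sketch (NIC names follow the example topology; br1 intentionally gets no IP address):

```shell
# br0 bridges eth0 for the guest's client-facing traffic
ifconfig eth0 down
ifconfig eth0 0.0.0.0
brctl addbr br0
brctl addif br0 eth0
ifconfig br0 192.168.0.88 netmask 255.255.255.0
ifconfig eth0 up

# br1 bridges eth1, the forward NIC; it needs no IP address
ifconfig eth1 down
ifconfig eth1 0.0.0.0
brctl addbr br1
brctl addif br1 eth1
ifconfig br1 up
ifconfig eth1 up
```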
  • Qemu-ifup
    • We need a script to bring up the TAP interface.
You can find more information at http://en.wikibooks.org/wiki/QEMU/Networking.
NOTE: Don't forget to make this script file executable.

Master:
root@master# cat /etc/qemu-ifup
#!/bin/sh
switch=br0
if [ -n "$1" ]; then
         ip link set $1 up
         brctl addif ${switch} $1
fi
Slave:
root@slave # cat /etc/qemu-ifup
#!/bin/sh
switch=br1  #in primary, switch is br0. in secondary switch is br1
if [ -n "$1" ]; then
         ip link set $1 up
         brctl addif ${switch} $1
fi
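When the tap device is torn down, QEMU also runs a teardown script (by default /etc/qemu-ifdown for -netdev tap). A minimal counterpart sketch:

```shell
#!/bin/sh
# Remove the tap interface from the bridge when QEMU tears it down.
switch=br1   # on the primary, switch is br0; on the secondary it is br1
if [ -n "$1" ]; then
        brctl delif ${switch} $1
        ip link set $1 down
fi
```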

Test steps

(Note: We provide two scripts, primary-colo.sh and secondary-colo.sh, to help complete steps (1) and (2).)

  • (1) Load modules
# modprobe xt_PMYCOLO (on the slave side, use modprobe xt_SECCOLO instead)
# modprobe nf_conntrack_colo
(Other COLO modules will be loaded automatically by the script colo-proxy-script.sh.)
# modprobe xt_mark
# modprobe kvm-intel
  • (2) Startup qemu
  • Master side:
# x86_64-softmmu/qemu-system-x86_64 -machine pc-i440fx-2.3,accel=kvm,usb=off \
-netdev tap,id=hn0,colo_script=./scripts/colo-proxy-script.sh,colo_nicname=eth1 -device virtio-net-pci,id=net-pci0,netdev=hn0 \
-boot c -drive if=virtio,id=disk1,driver=quorum,read-pattern=fifo,cache=none,aio=native,\
children.0.file.filename=/mnt/sdb/pure_IMG/redhat/redhat-7.0.img,children.0.driver=raw \
-vnc :7 -m 2048 -smp 2 -device piix3-usb-uhci -device usb-tablet -monitor stdio -S
  • Slave side:
# qemu-img create -f qcow2 /mnt/ramfs/active_disk.img 10G

# qemu-img create -f qcow2 /mnt/ramfs/hidden_disk.img 10G

# x86_64-softmmu/qemu-system-x86_64 -machine pc-i440fx-2.3,accel=kvm,usb=off \
-netdev tap,id=hn0,colo_script=./scripts/colo-proxy-script.sh,colo_nicname=eth1 -device virtio-net-pci,id=net-pci0,netdev=hn0 \
-drive if=none,driver=raw,file=/mnt/sdb/pure_IMG/redhat/redhat-7.0.img,id=colo1,cache=none,aio=native \
-drive if=virtio,driver=replication,mode=secondary,throttling.bps-total-max=70000000,\
file.file.filename=/mnt/ramfs/active_disk.img,file.driver=qcow2,\
file.backing.file.filename=/mnt/ramfs/hidden_disk.img,\
file.backing.driver=qcow2,\
file.backing.backing.backing_reference=colo1,\
file.backing.allow-write-backing-file=on \
-vnc :7 -m 2048 -smp 2 -device piix3-usb-uhci -device usb-tablet -monitor stdio -incoming tcp:0:8888
 Note:
 1. The export name in the secondary QEMU command line is the secondary disk's id; in the example it is colo1.
 2. The export name for the primary disk is not in the command line; it is in child_add's argument (see step (4) below).
 3. The export name for the same disk must be the same.
 4. The QMP commands nbd-server-start and nbd-server-add must be run before the QMP command migrate is run on the primary QEMU.
 5. Don't use nbd-server-start's other options.
 6. The active disk, hidden disk and NBD target should all have the same length.
 7. It is better to put the active disk and hidden disk on a ramdisk.
 8. The whole -drive option is a single argument; the leading whitespace in the wrapped lines above should be ignored.
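Note 6 can be verified with qemu-img. A small sketch that extracts an image's virtual size in bytes (the image paths in the commented usage follow the example command lines):

```shell
# Print an image's virtual size in bytes, parsed from qemu-img's
# human-readable output, e.g. "virtual size: 10G (10737418240 bytes)".
vsize() {
    qemu-img info "$1" | sed -n 's/^virtual size:.*(\([0-9]*\) bytes)$/\1/p'
}

# Usage (paths from the example command lines):
#   [ "$(vsize /mnt/ramfs/active_disk.img)" = "$(vsize /mnt/ramfs/hidden_disk.img)" ] \
#       && echo "active and hidden disk sizes match"
```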
  • (3) On Secondary VM's QEMU monitor, issue command (This command must be run before command in step (4))
(qemu) nbd_server_start 192.168.2.88:8889
(qemu) nbd_server_add -w colo1
  • (4) On Primary VM's QEMU monitor, issue command:
(qemu) child_add disk1 child.driver=replication,child.mode=primary,child.file.host=192.168.2.88,child.file.port=8889,child.file.driver=nbd,child.ignore-errors=on
(qemu) migrate_set_capability colo on
(qemu) migrate tcp:192.168.2.88:8888
 Note:
 1. There should be only one NBD Client.
 2. host is the secondary physical machine's hostname or IP
 3. Each disk must have its own export name.
  • (5) Done
    • You will see two running VMs; whenever you make changes to the PVM, the SVM will be synced.
  • (6) Failover test
    • You can kill the SVM (or PVM) and run 'colo_lost_heartbeat' in the PVM's (SVM's) monitor at the same time; the PVM (SVM) will then fail over, and the client will not notice the change.
  • For Questions/Issues, please contact: Zhang Hailiang <zhang.zhanghailiang@huawei.com>;Yang Hongyang <yanghy@cn.fujitsu.com>; Wen Congyang <wency@cn.fujitsu.com>

Links