Features/COLO

= Feature Name =
COarse-grained LOck-stepping Virtual Machines for Non-stop Service<br>
 
= Background =
Virtual machine (VM) replication is a well-known technique for providing application-agnostic, software-implemented hardware fault
tolerance ("non-stop service"). COLO is a high-availability solution. Both the primary VM (PVM) and the secondary VM (SVM) run in parallel. They
receive the same requests from the client and generate responses in parallel too. If the response packets from the PVM and the SVM are identical, they are
released immediately. Otherwise, a VM checkpoint (on demand) is conducted.<br>

= Feature authors =
* '''Name:''' Chen Zhang, Lukas Straub, Hailiang Zhang, Congyang Wen, Yong Wang, Guang Wang
* '''Email:''' zhangckid@gmail.com/chen.zhang@intel.com, lukasstraub2@web.de, zhang.zhanghailiang@huawei.com, wency@cn.fujitsu.com, wang.yongD@h3c.com, wang.guanga@h3c.com
 
= Architecture =
The architecture of COLO is shown in the diagram below.
It consists of a pair of networked physical nodes:
The primary node running the PVM, and the secondary node running the SVM
to maintain a valid replica of the PVM.
PVM and SVM execute in parallel and generate output of response packets for
client requests according to the application semantics.
 
The incoming packets from the client or external network are received by the
primary node, and then forwarded to the secondary node, so that both the PVM
and the SVM are stimulated with the same requests.
 
COLO receives the outbound packets from both the PVM and SVM and compares them
before allowing the output to be sent to clients.
 
The SVM is qualified as a valid replica of the PVM as long as it generates
identical responses to all client requests. Once differences between the
outputs of the PVM and the SVM are detected, COLO withholds transmission of the
outbound packets until it has successfully synchronized the PVM state to the SVM.
 
 Primary Node                                                              Secondary Node
+------------+  +-----------------------+       +------------------------+  +------------+
|            |  |       HeartBeat       +<----->+       HeartBeat        |  |            |
| Primary VM |  +-----------+-----------+       +-----------+------------+  |Secondary VM|
|            |              |                               |               |            |
|            |  +-----------|-----------+       +-----------|------------+  |            |
|            |  |QEMU   +---v----+      |       |QEMU  +----v---+        |  |            |
|            |  |       |Failover|      |       |      |Failover|        |  |            |
|            |  |       +--------+      |       |      +--------+        |  |            |
|            |  |   +---------------+   |       |   +---------------+    |  |            |
|            |  |   | VM Checkpoint +-------------->+ VM Checkpoint |    |  |            |
|            |  |   +---------------+   |       |   +---------------+    |  |            |
|Requests<--------------------------\ /-----------------\ /--------------------->Requests|
|            |  |                   ^ ^ |       |       | |              |  |            |
|Responses+---------------------\ /-|-|------------\ /-------------------------+Responses|
|            |  |               | | | | |       |  | |  | |              |  |            |
|            |  | +-----------+ | | | | |       |  | |  | | +----------+ |  |            |
|            |  | | COLO disk | | | | | |       |  | |  | | | COLO disk| |  |            |
|            |  | |   Manager +---------------------------->| Manager  | |  |            |
|            |  | ++----------+ v v | | |       |  | v  v | +---------++ |  |            |
|            |  |  |+-----------+-+-+-++|       | ++-+--+-+---------+ |  |  |            |
|            |  |  ||   COLO Proxy     ||       | |   COLO Proxy    | |  |  |            |
|            |  |  || (compare packet  ||       | |(adjust sequence | |  |  |            |
|            |  |  ||and mirror packet)||       | |    and ACK)     | |  |  |            |
|            |  |  |+------------+---^-+|       | +-----------------+ |  |  |            |
+------------+  +-----------------------+       +------------------------+  +------------+
+------------+     |             |   |                                |     +------------+
| VM Monitor |     |             |   |                                |     | VM Monitor |
+------------+     |             |   |                                |     +------------+
+---------------------------------------+       +----------------------------------------+
|   Kernel         |             |   |  |       |   Kernel            |                  |
+---------------------------------------+       +----------------------------------------+
                   |             |   |                                |
    +--------------v+  +---------v---+--+       +------------------+ +v-------------+
    |   Storage     |  |External Network|       | External Network | |   Storage    |
    +---------------+  +----------------+       +------------------+ +--------------+
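The compare-and-release decision described in this section can be modeled as a small sketch (conceptual only, not QEMU code; the function name is invented for this example):

```python
def handle_outbound(pvm_pkt: bytes, svm_pkt: bytes) -> str:
    """Conceptual COLO output decision (illustrative, not QEMU code).

    Identical responses show the SVM is still a valid replica, so the
    packet is released immediately; any divergence withholds output and
    forces an on-demand checkpoint that resynchronizes the SVM with the
    PVM state before the packet is released.
    """
    if pvm_pkt == svm_pkt:
        return "release"
    return "checkpoint-then-release"
```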
 
= Components introduction =

* HeartBeat (Not yet implemented)
: Runs on both the primary and secondary nodes to periodically check platform
: availability. When the primary node suffers a hardware fail-stop failure,
: the heartbeat stops responding, and the secondary node triggers a failover
: as soon as it determines the absence.
* COLO Block Replication (Please refer to [http://wiki.qemu.org/Features/BlockReplication BlockReplication])
: When the primary VM writes data into its image, the COLO disk manager captures this data<br>
: and sends it to the secondary VM, which makes sure the content of the secondary VM's image is consistent with<br>
: the content of the primary VM's image.
: The following diagram shows the block replication workflow:
        +----------------------+            +------------------------+
        |Primary Write Requests|            |Secondary Write Requests|
        +----------------------+            +------------------------+
                  |                                       |
                  |                                      (4)
                  |                                       V
                  |                              /-------------\
                  |      Copy and Forward        |             |
                  |---------(1)----------+       | Disk Buffer |
                  |                      |       |             |
                  |                     (3)      \-------------/
                  |                 speculative      ^
                  |                write through    (2)
                  |                      |           |
                  V                      V           |
           +--------------+           +----------------+
           | Primary Disk |           | Secondary Disk |
           +--------------+           +----------------+


    1) Primary write requests will be copied and forwarded to the Secondary
       QEMU.
    2) Before Primary write requests are written to the Secondary disk, the
       original sector content will be read from the Secondary disk and
       buffered in the Disk buffer, but it will not overwrite the existing
       sector content (which could be from either "Secondary Write Requests" or
       a previous COW of "Primary Write Requests") in the Disk buffer.
    3) Primary write requests will be written to the Secondary disk.
    4) Secondary write requests will be buffered in the Disk buffer and will
       overwrite the existing sector content in the buffer.
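The Disk buffer semantics of steps 1)-4) can be modeled conceptually as follows (an illustrative sketch, not QEMU's implementation; the class and method names are invented for this example, and disks are modeled as dicts mapping sector numbers to content):

```python
class SecondaryDiskModel:
    """Conceptual model of the Secondary node's Disk buffer (steps 1-4).

    Illustrative only, not QEMU code. The secondary VM's view of the disk
    is the Disk buffer overlaid on the Secondary disk.
    """

    def __init__(self, disk):
        self.disk = dict(disk)   # sector -> content on the Secondary disk
        self.buffer = {}         # the Disk buffer

    def primary_write(self, sector, data):
        # Step 2: COW the original sector content into the buffer, but never
        # overwrite an entry already there (a secondary write or an earlier
        # COW takes precedence).
        self.buffer.setdefault(sector, self.disk.get(sector))
        # Step 3: the forwarded primary write goes to the Secondary disk.
        self.disk[sector] = data

    def secondary_write(self, sector, data):
        # Step 4: secondary writes are buffered and DO overwrite existing
        # buffer content.
        self.buffer[sector] = data

    def read(self, sector):
        # The secondary VM reads through the buffer first.
        return self.buffer.get(sector, self.disk.get(sector))
```

Note how the COW in step 2 keeps the secondary VM's view stable: forwarded primary writes reach the Secondary disk without disturbing what the SVM itself observes.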


* COLO framework: COLO Checkpoint/Failover Controller
: Modifications of the save/restore flow to realize continuous migration, making sure that the state of the VM on the Secondary side
: is always consistent with the VM on the Primary side.
* COLO Proxy:
: Delivers packets to the Primary and the Secondary, compares the responses from both sides, and then decides whether to
: start a checkpoint according to some rules.


 Primary qemu                                                           Secondary qemu
+--------------------------------------------------------------+       +----------------------------------------------------------------+
| +----------------------------------------------------------+ |       |  +-----------------------------------------------------------+ |
| |                                                          | |       |  |                                                           | |
| |                        guest                             | |       |  |                        guest                              | |
| |                                                          | |       |  |                                                           | |
| +-------^--------------------------+-----------------------+ |       |  +---------------------+--------+----------------------------+ |
|         |                          |                         |       |                        ^        |                              |
|         |                          |                         |       |                        |        |                              |
|         |  +------------------------------------------------------+  |                        |        |                              |
|netfilter|  |                       |                         |    |  |   netfilter            |        |                              |
| +----------+ +----------------------------+                  |    |  |  +-----------------------------------------------------------+ |
| |       |  |                       |      |        out       |    |  |  |                     |        |  filter execute order      | |
| |       |  |          +-----------------------------+        |    |  |  |                     |        | +------------------->      | |
| |       |  |          |            |      |         |        |    |  |  |                     |        |   TCP                      | |
| | +-----+--+-+  +-----v----+ +-----v----+ |pri +----+----+sec|    |  |  | +------------+  +---+----+---v+rewriter++  +------------+ | |
| | |          |  |          | |          | |in  |         |in |    |  |  | |            |  |        |              |  |            | | |
| | |  filter  |  |  filter  | |  filter  +------>  colo   <------+ +-------->  filter   +--> adjust |   adjust     +-->   filter   | | |
| | |  mirror  |  |redirector| |redirector| |    | compare |   |  |    |  | | redirector |  | ack    |   seq        |  | redirector | | |
| | |          |  |          | |          | |    |         |   |  |    |  | |            |  |        |              |  |            | | |
| | +----^-----+  +----+-----+ +----------+ |    +---------+   |  |    |  | +------------+  +--------+--------------+  +---+--------+ | |
| |      |   tx        |   rx           rx  |                  |  |    |  |            tx                        all       |  rx      | |
| |      |             |                    |                  |  |    |  +-----------------------------------------------------------+ |
| |      |             +--------------+     |                  |  |    |                                                   |            |
| |      |   filter execute order     |     |                  |  |    |                                                   |            |
| |      |  +---------------->        |     |                  |  +--------------------------------------------------------+            |
| +-----------------------------------------+                  |       |                                                                |
|        |                            |                        |       |                                                                |
+--------------------------------------------------------------+       +----------------------------------------------------------------+
         |guest receive               | guest send
         |                            |
+--------+----------------------------v------------------------+
|                                                              |                          NOTE: filter direction is rx/tx/all
|                         tap                                  |                          rx: the filter receives packets sent to the netdev
|                                                              |                          tx: the filter receives packets sent by the netdev
+--------------------------------------------------------------+
: In COLO-compare, we do the packet comparing job.
: Packets coming from the primary char indev will be sent to the outdev.
: Packets coming from the secondary char dev will be dropped after comparing.
: colo-compare needs two input chardevs and one output chardev: primary_in=chardev1-id (source: packets sent by the primary), secondary_in=chardev2-id (source: packets sent by the secondary), outdev=chardev3-id.
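For reference, the primary-side wiring of these pieces can be sketched with QEMU options like the following (the chardev/filter IDs, addresses, and ports are illustrative examples, not required names; see [[Features/COLO/Manual HOWTO]] for a complete command line):

```
-chardev socket,id=mirror0,host=3.3.3.3,port=9003,server,nowait
-chardev socket,id=compare1,host=3.3.3.3,port=9004,server,nowait
-chardev socket,id=compare0,host=3.3.3.3,port=9001,server,nowait
-chardev socket,id=compare0-0,host=3.3.3.3,port=9001
-chardev socket,id=compare_out,host=3.3.3.3,port=9005,server,nowait
-chardev socket,id=compare_out0,host=3.3.3.3,port=9005
-object filter-mirror,id=m0,netdev=hn0,queue=tx,outdev=mirror0
-object filter-redirector,id=redire0,netdev=hn0,queue=rx,indev=compare_out
-object filter-redirector,id=redire1,netdev=hn0,queue=rx,outdev=compare0
-object colo-compare,id=comp0,primary_in=compare0-0,secondary_in=compare1,outdev=compare_out0
```

Here filter-mirror copies guest tx packets toward the secondary, the filter-redirectors feed the compared/released packets back to the guest's netdev, and colo-compare does the comparison described above.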


*'''Note:'''
: HeartBeat has not been implemented yet, so you need to trigger the failover process by using the 'x-colo-lost-heartbeat' command.
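For illustration, the failover trigger named in the note looks like this (the QMP spelling uses dashes, while the HMP monitor spelling uses underscores):

```
QMP: { "execute": "x-colo-lost-heartbeat" }
HMP: x_colo_lost_heartbeat
```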


= Current Status =
COLO has been merged into QEMU upstream (since v4.0).


= How to setup/test COLO =
For automatically managed COLO: [[Features/COLO/Managed HOWTO]]


For manual use of COLO via qmp commands: [[Features/COLO/Manual HOWTO]]


= Heartbeat Service =
COLO supports both an internal and an external heartbeat service; both use the same QEMU interface (qmp or monitor commands).
For the internal service, we introduce a new QEMU module named Advanced Watch Dog for this job.
Advanced Watch Dog is a universal monitoring module on the VMM side. It can be used to detect network outages (VMM to guest, VMM to VMM, VMM to another remote server) and then perform a previously configured operation.
If you want to use it, please add "--enable-awd" when configuring QEMU.

In the primary node:
<pre>
-monitor tcp::4445,server,nowait
-chardev socket,id=h1,host=3.3.3.3,port=9009,server,nowait
-chardev socket,id=heartbeat0,host=3.3.3.3,port=4445
-object iothread,id=iothread1
-object advanced-watchdog,id=heart1,server=on,awd_node=h1,notification_node=heartbeat0,
opt_script=colo_primary_opt_script,iothread=iothread1,pulse_interval=1000,timeout=5000
</pre>
colo_primary_opt_script:
<pre>
x_colo_lost_heartbeat
</pre>


In the secondary node:
<pre>
-monitor tcp::4445,server,nowait
-chardev socket,id=h1,host=3.3.3.3,port=9009,reconnect=1
-chardev socket,id=heart1,host=3.3.3.8,port=4445
-object iothread,id=iothread1
-object advanced-watchdog,id=heart1,server=off,awd_node=h1,notification_node=heart1,
opt_script=colo_secondary_opt_script,iothread=iothread1,timeout=10000
</pre>


colo_secondary_opt_script:
<pre>
nbd_server_stop
x_colo_lost_heartbeat
</pre>

This part of the code is still under review:
https://github.com/zhangckid/qemu/tree/colo-with-awd19dec1


For the external service, users can write their own service and policy; it only needs to detect the network outage and notify the COLO service.
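As an illustration, an external service could deliver that notification by writing the failover command to the monitor socket configured in the examples above (a minimal sketch; the function name and addresses are assumptions for this example, and a real service would call it only after deciding the peer node is dead):

```python
import socket

def notify_colo_failover(host: str, port: int) -> None:
    """Send the HMP failover command to a QEMU monitor socket.

    Illustrative sketch: assumes QEMU exposes its monitor over TCP, as the
    examples above do with '-monitor tcp::4445,server,nowait'.
    """
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(b"x_colo_lost_heartbeat\n")
```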

= Supporting documentation =
The idea is presented in Xen summit 2012, 2013 and [https://static.sched.com/hosted_files/xensummit19/d0/Application%20Agnostic%20High%20Availability%20Solution%20On%20Hypervisor%20Level%20-%20V1.4.pdf 2019],
and in an [http://www.socc2013.org/home/program/a3-dong.pdf?attredirects=0 academic paper at SOCC 2013].

It is also presented in
[https://www.linux-kvm.org/images/1/1d/Kvm-forum-2013-COLO.pdf KVM forum 2013], [http://www.linux-kvm.org/images/0/01/01x07-Hongyang_Yang-Status_update_on_KVM-COLO.pdf KVM forum 2015] and [http://www.linux-kvm.org/images/a/af/03x08B-Hailang_Zhang-Status_Update_on_KVM-COLO_FT.pdf KVM forum 2016].


= Links =
* [http://wiki.xen.org/wiki/COLO_-_Coarse_Grain_Lock_Stepping COLO on Xen]

Latest revision as of 18:38, 30 January 2023

Feature Name

COarse-grained LOck-stepping Virtual Machines for Non-stop Service

Background

Virtual machine (VM) replication is a well known technique for providing application-agnostic software-implemented hardware fault tolerance "non-stop service". COLO is a high availability solution. Both primary VM (PVM) and secondary VM (SVM) run in parallel. They receive the same request from client, and generate response in parallel too. If the response packets from PVM and SVM are identical, they are released immediately. Otherwise, a VM checkpoint (on demand) is conducted.


Feature authors

  • Name: Chen Zhang, Lukas Straub, Hailiang Zhang, Congyang Wen, Yong Wang, Guang Wang
  • Email: zhangckid@gmail.com/chen.zhang@intel.com, lukasstraub2@web.de, zhang.zhanghailiang@huawei.com, wency@cn.fujitsu.com, wang.yongD@h3c.com, wang.guanga@h3c.com

Architecture

The architecture of COLO is shown in the bellow diagram. It consists of a pair of networked physical nodes: The primary node running the PVM, and the secondary node running the SVM to maintain a valid replica of the PVM. PVM and SVM execute in parallel and generate output of response packets for client requests according to the application semantics.

The incoming packets from the client or external network are received by the primary node, and then forwarded to the secondary node, so that Both the PVM and the SVM are stimulated with the same requests.

COLO receives the outbound packets from both the PVM and SVM and compares them before allowing the output to be sent to clients.

The SVM is qualified as a valid replica of the PVM, as long as it generates identical responses to all client requests. Once the differences in the outputs are detected between the PVM and SVM, COLO withholds transmission of the outbound packets until it has successfully synchronized the PVM state to the SVM.

 Primary Node                                                              Secondary Node
+------------+  +-----------------------+       +------------------------+  +------------+
|            |  |       HeartBeat       +<----->+       HeartBeat        |  |            |
| Primary VM |  +-----------+-----------+       +-----------+------------+  |Secondary VM|
|            |              |                               |               |            |
|            |  +-----------|-----------+       +-----------|------------+  |            |
|            |  |QEMU   +---v----+      |       |QEMU  +----v---+        |  |            |
|            |  |       |Failover|      |       |      |Failover|        |  |            |
|            |  |       +--------+      |       |      +--------+        |  |            |
|            |  |   +---------------+   |       |   +---------------+    |  |            |
|            |  |   | VM Checkpoint +-------------->+ VM Checkpoint |    |  |            |
|            |  |   +---------------+   |       |   +---------------+    |  |            |
|Requests<--------------------------\ /-----------------\ /--------------------->Requests|
|            |  |                   ^ ^ |       |       | |              |  |            |
|Responses+---------------------\ /-|-|------------\ /-------------------------+Responses|
|            |  |               | | | | |       |  | |  | |              |  |            |
|            |  | +-----------+ | | | | |       |  | |  | | +----------+ |  |            |
|            |  | | COLO disk | | | | | |       |  | |  | | | COLO disk| |  |            |
|            |  | |   Manager +---------------------------->| Manager  | |  |            |
|            |  | ++----------+ v v | | |       |  | v  v | +---------++ |  |            |
|            |  |  |+-----------+-+-+-++|       | ++-+--+-+---------+ |  |  |            |
|            |  |  ||   COLO Proxy     ||       | |   COLO Proxy    | |  |  |            |
|            |  |  || (compare packet  ||       | |(adjust sequence | |  |  |            |
|            |  |  ||and mirror packet)||       | |    and ACK)     | |  |  |            |
|            |  |  |+------------+---^-+|       | +-----------------+ |  |  |            |
+------------+  +-----------------------+       +------------------------+  +------------+
+------------+     |             |   |                                |     +------------+
| VM Monitor |     |             |   |                                |     | VM Monitor |
+------------+     |             |   |                                |     +------------+
+---------------------------------------+       +----------------------------------------+
|   Kernel         |             |   |  |       |   Kernel            |                  |
+---------------------------------------+       +----------------------------------------+
                   |             |   |                                |
    +--------------v+  +---------v---+--+       +------------------+ +v-------------+
    |   Storage     |  |External Network|       | External Network | |   Storage    |
    +---------------+  +----------------+       +------------------+ +--------------+

Components introduction

  • HeartBeat (Not yet implemented)
Runs on both the primary and secondary nodes, to periodically check platform
availability. When the primary node suffers a hardware fail-stop failure,
the heartbeat stops responding, the secondary node will trigger a failover
as soon as it determines the absence.
When primary VM writes data into image, the colo disk manger captures this data
and send it to secondary VM’s which makes sure the context of secondary VM's image is consentient with
the context of primary VM 's image.
The following is the image of block replication workflow:
       +----------------------+            +------------------------+
       |Primary Write Requests|            |Secondary Write Requests|
       +----------------------+            +------------------------+
                 |                                       |
                 |                                      (4)
                 |                                       V
                 |                              /-------------\
                 |      Copy and Forward        |             |
                 |---------(1)----------+       | Disk Buffer |
                 |                      |       |             |
                 |                     (3)      \-------------/
                 |                 speculative      ^
                 |                write through    (2)
                 |                      |           |
                 V                      V           |
          +--------------+           +----------------+
          | Primary Disk |           | Secondary Disk |
          +--------------+           +----------------+
   1) Primary write requests will be copied and forwarded to Secondary
      QEMU.
   2) Before Primary write requests are written to Secondary disk, the
      original sector content will be read from Secondary disk and
      buffered in the Disk buffer, but it will not overwrite the existing
      sector content (it could be from either "Secondary Write Requests" or
      previous COW of "Primary Write Requests") in the Disk buffer.
   3) Primary write requests will be written to Secondary disk.
   4) Secondary write requests will be buffered in the Disk buffer and it
      will overwrite the existing sector content in the buffer.
  • COLO framework: COLO Checkpoint/Failover Controller
Modifications of save/restore flow to realize continuous migration, to make sure the state of VM in Secondary side
always be consistent with VM in Primary side.
  • COLO Proxy:
Delivers packets to Primary and Seconday, and then compare the reponse from both side. Then decide whether to
start a checkpoint according to some rules.
 Primary qemu                                                           Secondary qemu
+--------------------------------------------------------------+       +----------------------------------------------------------------+
| +----------------------------------------------------------+ |       |  +-----------------------------------------------------------+ |
| |                                                          | |       |  |                                                           | |
| |                        guest                             | |       |  |                        guest                              | |
| |                                                          | |       |  |                                                           | |
| +-------^--------------------------+-----------------------+ |       |  +---------------------+--------+----------------------------+ |
|         |                          |                         |       |                        ^        |                              |
|         |                          |                         |       |                        |        |                              |
|         |  +------------------------------------------------------+  |                        |        |                              |
|netfilter|  |                       |                         |    |  |   netfilter            |        |                              |
| +----------+ +----------------------------+                  |    |  |  +-----------------------------------------------------------+ |
| |       |  |                       |      |        out       |    |  |  |                     |        |  filter excute order       | |
| |       |  |          +-----------------------------+        |    |  |  |                     |        | +------------------->      | |
| |       |  |          |            |      |         |        |    |  |  |                     |        |   TCP                      | |
| | +-----+--+-+  +-----v----+ +-----v----+ |pri +----+----+sec|    |  |  | +------------+  +---+----+---v+rewriter++  +------------+ | |
| | |          |  |          | |          | |in  |         |in |    |  |  | |            |  |        |              |  |            | | |
| | |  filter  |  |  filter  | |  filter  +------>  colo   <------+ +-------->  filter   +--> adjust |   adjust     +-->   filter   | | |
| | |  mirror  |  |redirector| |redirector| |    | compare |   |  |    |  | | redirector |  | ack    |   seq        |  | redirector | | |
| | |          |  |          | |          | |    |         |   |  |    |  | |            |  |        |              |  |            | | |
| | +----^-----+  +----+-----+ +----------+ |    +---------+   |  |    |  | +------------+  +--------+--------------+  +---+--------+ | |
| |      |   tx        |   rx           rx  |                  |  |    |  |            tx                        all       |  rx      | |
| |      |             |                    |                  |  |    |  +-----------------------------------------------------------+ |
| |      |             +--------------+     |                  |  |    |                                                   |            |
| |      |   filter execute order     |     |                  |  |    |  |                                                   |            |
| |      |  +---------------->        |     |                  |  +--------------------------------------------------------+            |
| +-----------------------------------------+                  |       |                                                                |
|        |                            |                        |       |                                                                |
+--------------------------------------------------------------+       +----------------------------------------------------------------+
         |guest receive               | guest send
         |                            |
+--------+----------------------------v------------------------+
|                                                              |                          NOTE: filter direction is rx/tx/all
|                         tap                                  |                          rx:receive packets sent to the netdev
|                                                              |                          tx:receive packets sent by the netdev
+--------------------------------------------------------------+
The COLO-compare module does the packet comparing job.
Packets coming from the primary char indev will be sent to the outdev.
Packets coming from the secondary char dev will be dropped after comparing.
colo-compare needs two input chardevs and one output chardev:
* primary_in=chardev1-id (source: packets sent by the primary)
* secondary_in=chardev2-id (source: packets sent by the secondary)
* outdev=chardev3-id
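A minimal sketch of the corresponding command-line wiring on the primary side (the chardev ids, host address and ports here are illustrative assumptions, not fixed names):

```shell
# Illustrative ids/ports; adjust to your setup.
-chardev socket,id=compare0,host=3.3.3.3,port=9001,server,nowait \
-chardev socket,id=compare1,host=3.3.3.3,port=9002,server,nowait \
-chardev socket,id=compare_out,host=3.3.3.3,port=9005,server,nowait \
-object iothread,id=iothread1 \
-object colo-compare,id=comp0,primary_in=compare0,secondary_in=compare1,\
outdev=compare_out,iothread=iothread1
```

Here compare0 and compare1 carry the packets sent by the PVM and SVM respectively, and compare_out receives the released packets.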
* '''Note:''' Heartbeat is not implemented yet, so you need to trigger the failover process manually using the 'x-colo-lost-heartbeat' command.
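For reference, failover can be triggered over QMP like this (a sketch; the socket path is an assumption, and on the secondary node the NBD server should be stopped first):

```shell
# Connect to the QMP socket of the surviving node (path is an example)
# and issue the failover command after the QMP capabilities handshake.
nc -U /tmp/qmp-sock <<'EOF'
{ "execute": "qmp_capabilities" }
{ "execute": "x-colo-lost-heartbeat" }
EOF
```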

= Current Status =

COLO has been merged into QEMU upstream (v4.0).

= How to setup/test COLO =

For automatically managed COLO: [[Features/COLO/Managed HOWTO]]

For manual use of COLO via QMP commands: [[Features/COLO/Manual HOWTO]]

= Heartbeat Service =

COLO supports both an internal and an external heartbeat service; they use the same QEMU interface (QMP or monitor commands). For the internal service, we introduce a new QEMU module named Advanced Watch Dog. Advanced Watch Dog is a universal monitoring module on the VMM side: it can detect network failures (VMM to guest, VMM to VMM, VMM to another remote server) and perform a previously configured operation. To use it, add "--enable-awd" when configuring QEMU. In the primary node:

-monitor tcp::4445,server,nowait
-chardev socket,id=h1,host=3.3.3.3,port=9009,server,nowait
-chardev socket,id=heartbeat0,host=3.3.3.3,port=4445
-object iothread,id=iothread1
-object advanced-watchdog,id=heart1,server=on,awd_node=h1,notification_node=heartbeat0,
opt_script=colo_primary_opt_script,iothread=iothread1,pulse_interval=1000,timeout=5000

colo_primary_opt_script:

x_colo_lost_heartbeat

In the secondary node:

-monitor tcp::4445,server,nowait
-chardev socket,id=h1,host=3.3.3.3,port=9009,reconnect=1
-chardev socket,id=heart1,host=3.3.3.8,port=4445
-object iothread,id=iothread1
-object advanced-watchdog,id=heart1,server=off,awd_node=h1,notification_node=heart1,
opt_script=colo_secondary_opt_script,iothread=iothread1,timeout=10000

colo_secondary_opt_script:

nbd_server_stop
x_colo_lost_heartbeat

This part of the code is still under review: https://github.com/zhangckid/qemu/tree/colo-with-awd19dec1

For the external service, users can write their own heartbeat service and policy; when it detects that the network is down, it just needs to notify the COLO service.
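As a sketch of such an external policy (the peer address, monitor port and failure threshold here are assumptions matching the examples above), a simple script could ping the peer node and notify COLO via the monitor when the peer stops responding:

```shell
#!/bin/sh
# Ping the peer node; on sustained failure, tell the local QEMU
# (HMP monitor listening on port 4445, as in the examples above)
# to perform a failover.
PEER=3.3.3.3
while true; do
    if ! ping -c 3 -W 2 "$PEER" > /dev/null 2>&1; then
        # On the secondary node you would issue nbd_server_stop first,
        # mirroring colo_secondary_opt_script.
        echo "x_colo_lost_heartbeat" | nc -q 1 localhost 4445
        break
    fi
    sleep 1
done
```

This is only an illustration of the notification path; a production policy would also guard against split-brain, e.g. by checking a third witness host before failing over.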

= Supporting documentation =

The idea was presented at Xen Summit 2012, 2013 and 2019, and in an academic paper at SOCC 2013.

It was also presented at KVM Forum 2013, KVM Forum 2015 and KVM Forum 2016.

= Future works =

1. Support continuous VM replication.
2. Support shared storage.
3. Develop the heartbeat part.
4. Reduce the VM's downtime during checkpoints.

= Links =