Latest revision as of 18:38, 30 January 2023
Feature Name
COarse-grained LOck-stepping Virtual Machines for Non-stop Service
Background
Virtual machine (VM) replication is a well-known technique for providing application-agnostic, software-implemented hardware fault tolerance ("non-stop service"). COLO is a high-availability solution in which the primary VM (PVM) and the secondary VM (SVM) run in parallel: both receive the same requests from clients and generate responses in parallel. If the response packets from the PVM and the SVM are identical, they are released immediately; otherwise, a VM checkpoint is performed on demand.
Feature authors
- Name: Chen Zhang, Lukas Straub, Hailiang Zhang, Congyang Wen, Yong Wang, Guang Wang
- Email: zhangckid@gmail.com/chen.zhang@intel.com, lukasstraub2@web.de, zhang.zhanghailiang@huawei.com, wency@cn.fujitsu.com, wang.yongD@h3c.com, wang.guanga@h3c.com
Architecture
The architecture of COLO is shown in the diagram below. It consists of a pair of networked physical nodes: the primary node running the PVM, and the secondary node running the SVM to maintain a valid replica of the PVM. The PVM and SVM execute in parallel and generate response packets for client requests according to the application semantics.
The incoming packets from the client or external network are received by the primary node and then forwarded to the secondary node, so that both the PVM and the SVM are stimulated with the same requests.
COLO receives the outbound packets from both the PVM and SVM and compares them before allowing the output to be sent to clients.
The SVM is qualified as a valid replica of the PVM as long as it generates identical responses to all client requests. Once a difference between the outputs of the PVM and SVM is detected, COLO withholds transmission of the outbound packets until it has successfully synchronized the PVM state to the SVM.
                        Primary Node                                                  Secondary Node
+------------+  +-----------------------+       +------------------------+  +------------+
|            |  |       HeartBeat       +<----->+       HeartBeat        |  |            |
| Primary VM |  +-----------+-----------+       +-----------+------------+  |Secondary VM|
|            |              |                               |               |            |
|            |  +-----------|-----------+       +-----------|------------+  |            |
|            |  |QEMU   +---v----+      |       |QEMU  +----v---+        |  |            |
|            |  |       |Failover|      |       |      |Failover|        |  |            |
|            |  |       +--------+      |       |      +--------+        |  |            |
|            |  | +---------------+     |       | +---------------+      |  |            |
|            |  | | VM Checkpoint +-------------->+ VM Checkpoint |      |  |            |
|            |  | +---------------+     |       | +---------------+      |  |            |
|Requests<--------------------------\ /-----------------\ /--------------------->Requests|
|            |  |                   ^ ^ |       |       | |              |  |            |
|Responses+---------------------\ /-|-|------------\ /-------------------------+Responses|
|            |  |               | | | | |       |  | |  | |              |  |            |
|            |  | +-----------+ | | | | |       |  | |  | | +----------+ |  |            |
|            |  | | COLO disk | | | | | |       |  | |  | | | COLO disk| |  |            |
|            |  | |   Manager +---------------------------->| Manager  | |  |            |
|            |  | ++----------+ v v | | |       |  | |  v v +---------++ |  |            |
|            |  |  |+-----------+-+-+-++|       |  ++-+--+-+---------+|  |  |            |
|            |  |  ||   COLO Proxy     ||       |  |   COLO Proxy    ||  |  |            |
|            |  |  || (compare packet  ||       |  |(adjust sequence ||  |  |            |
|            |  |  ||and mirror packet)||       |  |    and ACK)     ||  |  |            |
|            |  |  |+------------+---^-+|       |  +-----------------+|  |  |            |
|            |  +-----------------------+       +------------------------+  +------------+
+------------+                 |   |                       |                +------------+
+------------+                 |   |                       |                +------------+
| VM Monitor |                 |   |                       |                | VM Monitor |
+------------+                 |   |                       |                +------------+
+---------------------------------------+       +----------------------------------------+
|   Kernel                     |   |    |       |   Kernel             |                  |
+---------------------------------------+       +----------------------------------------+
                               |   |                       |
  +--------------v+  +---------v------+        +-----------v------+  +--------------+
  |   Storage     |  |External Network|        | External Network |  |   Storage    |
  +---------------+  +----------------+        +------------------+  +--------------+
Components introduction
- HeartBeat (not yet implemented)
  - Runs on both the primary and secondary nodes to periodically check platform availability. When the primary node suffers a hardware fail-stop failure, the heartbeat stops responding, and the secondary node triggers a failover as soon as it detects the absence.
- COLO Block Replication (please refer to BlockReplication)
  - When the primary VM writes data to its image, the COLO disk manager captures this data and sends it to the secondary VM, making sure the content of the secondary VM's image stays consistent with the content of the primary VM's image.
  - The following diagram shows the block replication workflow:
+----------------------+            +------------------------+
|Primary Write Requests|            |Secondary Write Requests|
+----------------------+            +------------------------+
          |                                       |
          |                                      (4)
          |                                       V
          |                              /-------------\
          |     Copy and Forward         |             |
          |---------(1)----------+       | Disk Buffer |
          |                      |       |             |
          |                     (3)      \-------------/
          |                 speculative      ^
          |                write through    (2)
          |                      |           |
          V                      V           |
   +--------------+           +----------------+
   | Primary Disk |           | Secondary Disk |
   +--------------+           +----------------+
1) Primary write requests are copied and forwarded to the secondary QEMU.
2) Before primary write requests are written to the secondary disk, the original sector content is read from the secondary disk and buffered in the disk buffer, but it does not overwrite existing sector content (which could be from either "Secondary Write Requests" or a previous COW of "Primary Write Requests") in the disk buffer.
3) Primary write requests are written to the secondary disk.
4) Secondary write requests are buffered in the disk buffer, overwriting any existing sector content in the buffer.
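A hedged sketch of how this replication chain is attached on the primary side (the filename, IDs, and exact options are placeholders and vary between QEMU versions; see the HOWTO pages for complete command lines): the primary disk is opened through the quorum driver with a single local child, so that the secondary's NBD export can later be attached as a second child for replication.

```shell
# Primary-side disk (sketch; 1.raw and the IDs are placeholders):
-drive if=virtio,id=primary-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,\
       children.0.file.filename=1.raw,\
       children.0.driver=raw
```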
- COLO Framework: COLO Checkpoint/Failover Controller
  - Modifies the save/restore flow to realize continuous migration, making sure the state of the VM on the secondary side is always consistent with the VM on the primary side.
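A minimal sketch of how this continuous migration is started (HMP commands on the primary node's monitor; the destination address and port are placeholders):

```shell
# Enable the experimental COLO capability, then start the never-ending
# migration that drives periodic checkpoints to the secondary:
migrate_set_capability x-colo on
migrate tcp:3.3.3.8:9998
```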
- COLO Proxy:
  - Delivers packets to both the primary and the secondary VM, compares the responses from the two sides, and decides whether to start a checkpoint according to the comparison rules.
[Diagram: COLO Proxy packet flow. On the primary QEMU, the guest's outgoing packets pass through a filter-mirror, which forwards them and mirrors a copy into colo-compare; a pair of filter-redirectors feeds colo-compare's output back into the primary's network path and carries the secondary's packets into colo-compare. On the secondary QEMU, a filter-redirector delivers the mirrored primary packets to the guest, and rewriter filters adjust the TCP sequence numbers and ACKs of the secondary's traffic before it is redirected back. NOTE: filter direction is rx/tx/all; rx: the filter receives packets sent to the netdev; tx: the filter receives packets sent by the netdev.]
- COLO-compare does the packet comparison job:
  - Packets coming from the primary chardev (indev) are sent to the outdev.
  - Packets coming from the secondary chardev are dropped after comparison.
  - colo-compare needs two input chardevs and one output chardev:
    primary_in=chardev1-id (source: packets sent by the primary)
    secondary_in=chardev2-id (source: packets sent by the secondary)
    outdev=chardev3-id
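A hedged sketch of the primary-side wiring (the IDs, the 3.3.3.3 address, and the ports are placeholders; hn0 is an assumed netdev id, and option spellings differ between QEMU versions):

```shell
# Sockets: mirrored primary packets (mirror0, fetched by the secondary),
# the secondary's returned packets (compare1), and the compare result:
-chardev socket,id=mirror0,host=3.3.3.3,port=9003,server=on,wait=off
-chardev socket,id=compare1,host=3.3.3.3,port=9004,server=on,wait=on
-chardev socket,id=compare0,host=3.3.3.3,port=9001,server=on,wait=off
-chardev socket,id=compare0-0,host=3.3.3.3,port=9001
-chardev socket,id=compare_out,host=3.3.3.3,port=9005,server=on,wait=off
-chardev socket,id=compare_out0,host=3.3.3.3,port=9005
# Mirror guest packets into the proxy and reinject the compared output:
-object filter-mirror,id=m0,netdev=hn0,queue=tx,outdev=mirror0
-object filter-redirector,netdev=hn0,id=redire0,queue=rx,indev=compare_out
-object filter-redirector,netdev=hn0,id=redire1,queue=rx,outdev=compare0
-object colo-compare,id=comp0,primary_in=compare0-0,secondary_in=compare1,outdev=compare_out0
```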
- Note:
  - HeartBeat has not been implemented yet, so you need to trigger the failover process manually with the 'x-colo-lost-heartbeat' command.
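For example, the manual failover commands (the same ones used by the opt scripts in the Heartbeat Service section) are:

```shell
# On the surviving secondary node's QEMU monitor, after the primary fails:
nbd_server_stop          # stop exporting the replicated disk to the dead primary
x_colo_lost_heartbeat    # declare the peer lost and promote the secondary VM
# If instead the secondary fails, issue only x_colo_lost_heartbeat on the primary.
```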
Current Status
COLO has been merged into upstream QEMU (since v4.0).
How to setup/test COLO
For automatically managed COLO: Features/COLO/Managed HOWTO
For manual use of COLO via QMP commands: Features/COLO/Manual HOWTO
Heartbeat Service
COLO supports an internal and an external heartbeat service; both use the same QEMU interface (QMP or monitor commands).

For the internal service, a new QEMU module named Advanced Watch Dog does the job. Advanced Watch Dog is a universal monitoring module on the VMM side: it can detect network failures (VMM to guest, VMM to VMM, VMM to another remote server) and perform a previously configured operation. To use it, add "--enable-awd" when configuring QEMU.

On the primary node:
-monitor tcp::4445,server,nowait
-chardev socket,id=h1,host=3.3.3.3,port=9009,server,nowait
-chardev socket,id=heartbeat0,host=3.3.3.3,port=4445
-object iothread,id=iothread1
-object advanced-watchdog,id=heart1,server=on,awd_node=h1,notification_node=heartbeat0,opt_script=colo_primary_opt_script,iothread=iothread1,pulse_interval=1000,timeout=5000
colo_primary_opt_script:
x_colo_lost_heartbeat
On the secondary node:
-monitor tcp::4445,server,nowait
-chardev socket,id=h1,host=3.3.3.3,port=9009,reconnect=1
-chardev socket,id=heart1,host=3.3.3.8,port=4445
-object iothread,id=iothread1
-object advanced-watchdog,id=heart1,server=off,awd_node=h1,notification_node=heart1,opt_script=colo_secondary_opt_script,iothread=iothread1,timeout=10000
colo_secondary_opt_script:
nbd_server_stop
x_colo_lost_heartbeat
This part of the code is still under review: https://github.com/zhangckid/qemu/tree/colo-with-awd19dec1
For the external service, users can write their own heartbeat service and policy; when it detects a network failure, it only needs to notify the COLO service.
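Such an external watcher can be as simple as a script that pings the peer and pushes the failover commands into the local monitor. A minimal sketch, assuming the addresses and the tcp::4445 monitor from the examples above and that 'nc' is available (illustrative only, not a production health check):

```shell
#!/bin/sh
# Print the monitor commands the surviving side must issue to take over.
failover_cmds() {
    if [ "$1" = "secondary" ]; then
        # the secondary must also stop its NBD server before failing over
        printf 'nbd_server_stop\nx_colo_lost_heartbeat\n'
    else
        printf 'x_colo_lost_heartbeat\n'
    fi
}

# Example watcher loop for the secondary node (defined only, not started here):
watch_peer() {
    while ping -c 3 -W 1 3.3.3.3 >/dev/null 2>&1; do
        sleep 1
    done
    # peer unreachable: push the failover commands into the local monitor
    failover_cmds secondary | nc 3.3.3.8 4445
}
```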
Supporting documentation
The idea was presented at Xen Summit 2012, 2013, and 2019, and in an academic paper at SoCC 2013.
It was also presented at KVM Forum 2013, KVM Forum 2015, and KVM Forum 2016.
Future works
- 1. Support continuous VM replication.
- 2. Support shared storage.
- 3. Develop the heartbeat part.
- 4. Reduce the VM's downtime during checkpoints.