Vhost-user with OVS/DPDK as backend
The purpose of this document is to describe the steps needed to set up a development machine for vhost-user testing. All the information here is taken from various sources available online.
The goal is to connect the guests' virtio-net devices, backed by vhost-user, to OVS dpdkvhostuser ports, and to be able to run any kind of network traffic between them.
Prerequisites
- Ensure hugepages are enabled on the kernel command line:
default_hugepagesz=2M hugepagesz=2M hugepages=2048
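After rebooting with these parameters, the reservation can be verified from /proc/meminfo; a quick check, assuming the standard Linux procfs layout:

```shell
# Show the reserved hugepage pool; HugePages_Total should match the
# 'hugepages=' value from the kernel command line (2048 in this example)
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
```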
- Install kernel development packages:
yum install kernel-devel
Build from sources
DPDK build
- Get the sources
    git clone git://dpdk.org/dpdk
    cd dpdk
- Edit the configuration file:
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 0de43d5..5260501 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -81,12 +81,12 @@ CONFIG_RTE_ARCH_STRICT_ALIGN=n
 #
 # Compile to share library
 #
-CONFIG_RTE_BUILD_SHARED_LIB=n
+CONFIG_RTE_BUILD_SHARED_LIB=y
 #
 # Combine to one single library
 #
-CONFIG_RTE_BUILD_COMBINE_LIBS=n
+CONFIG_RTE_BUILD_COMBINE_LIBS=y
 #
 # Use newest code breaking previous ABI
- Build it
    export RTE_SDK=<dpdk-dir>
    export RTE_TARGET=x86_64-native-linuxapp-gcc
    make config T=x86_64-native-linuxapp-gcc
    make -j<cpu num> T=x86_64-native-linuxapp-gcc EXTRA_CFLAGS='-g'
    cp -f ./x86_64-native-linuxapp-gcc/lib/libdpdk.so /lib64/libdpdk.so
      (only if you use a system-installed OVS with the default ld path)
OVS build
- Get the sources:
    git clone https://github.com/openvswitch/ovs
    cd ovs
- Build it:
    ./boot.sh
    ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var \
        --with-dpdk=<dpdk dir>/x86_64-native-linuxapp-gcc \
        --disable-ssl --with-debug CFLAGS='-g'
    make install -j <cpu num>
QEMU build
- Get the sources:
    git clone git://git.qemu.org/qemu.git
    cd qemu
- Build it:
    mkdir bin
    cd bin
    ../configure --target-list=x86_64-softmmu --enable-debug --extra-cflags='-g'
    make -j <cpu num>
Setup
System setup
- Prepare directories
    mkdir -p /var/run/openvswitch
    mount -t hugetlbfs -o pagesize=2048k none /dev/hugepages
- Load kernel modules
modprobe openvswitch
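Whether the module actually loaded can be checked through sysfs; a small sketch, assuming the standard /sys/module layout:

```shell
# The directory appears once the module is loaded
# (built-in modules also show up here)
if [ -d /sys/module/openvswitch ]; then
    echo "openvswitch module loaded"
else
    echo "openvswitch module not loaded - run 'modprobe openvswitch'"
fi
```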
OVS setup
- Clean the environment
    killall ovsdb-server ovs-vswitchd
    rm -f /var/run/openvswitch/vhost-user*
    rm -f /etc/openvswitch/conf.db
- Start database server
    export DB_SOCK=/var/run/openvswitch/db.sock
    cd <ovs-dir>/ovsdb
    ./ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
    ./ovsdb-server --remote=punix:$DB_SOCK \
        --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
        --pidfile --detach
- Start OVS
    cd <ovs-dir>/utilities
    ./ovs-vsctl --no-wait init
    cd <ovs-dir>/vswitchd
    ./ovs-vswitchd --dpdk -c 0xf -n 3 --socket-mem 1024 \
        -- unix:$DB_SOCK --pidfile --detach \
        --log-file=/var/log/openvswitch/ovs-vswitchd.log
- Configure the bridge
    cd <ovs-dir>/utilities
    ./ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
    ./ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
    ./ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
QEMU setup
- Start 'n' VMs
    cd <qemu-dir>/bin/x86_64-softmmu/
    qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 \
        -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user<n> \
        -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
        -device virtio-net-pci,netdev=mynet1,mac=52:54:00:02:d9:0<n> \
        -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
        -numa node,memdev=mem -mem-prealloc \
        -net user,hostfwd=tcp::1002<n>-:22 -net nic \
        /path/to/img
- VM setup
    ssh root@localhost -p 1002<n>
    ifconfig <eth interface> 192.168.100.<n>
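With two VMs configured as above, traffic between them can be exercised with a simple ping. The helper functions below are hypothetical, merely encoding the numbering scheme used in this document:

```shell
# Hypothetical helpers encoding the addressing scheme above:
# VM <n> gets guest IP 192.168.100.<n> and host SSH port 1002<n>
vm_ip()   { echo "192.168.100.$1"; }
vm_port() { echo "1002$1"; }

# From the host, ping VM 2 from inside VM 1 (run manually):
#   ssh root@localhost -p "$(vm_port 1)" "ping -c 3 $(vm_ip 2)"
vm_ip 2      # prints 192.168.100.2
vm_port 1    # prints 10021
```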
Enabling multi-queue
Apply multi-queue patches
- DPDK
[PATCH v6 00/13] vhost-user multiple queues enabling http://dpdk.org/ml/archives/dev/2015-October/024806.html
- OVS
[RFC PATCH v2] netdev-dpdk: Add vhost-user multiqueue support http://openvswitch.org/pipermail/dev/2015-October/061413.html
Enable multi-queue
- OVS setup
    ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=<queues_nr, the same as QEMU>
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<cpu mask for rx queues, say 0xff00>
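The pmd-cpu-mask is a bitmask with one bit per CPU core that should run a PMD thread. As a sketch of how the 0xff00 example above can be derived (dedicating cores 8 to 15):

```shell
# One bit per CPU: a run of <ncpus> ones, shifted up to the first
# dedicated core. Cores 8..15 -> 0xff00.
first_cpu=8
ncpus=8
printf '0x%x\n' $(( ((1 << ncpus) - 1) << first_cpu ))   # prints 0xff00
```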
- QEMU command line modification
    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=<queues_nr> \
    -device virtio-net-pci,netdev=mynet1,mac=52:54:00:02:d9:0<n>,mq=on,vectors=<2 + 2*queues_nr>
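The vectors value accounts for one TX and one RX MSI-X vector per queue pair, plus one for configuration changes and one for the control queue; with 4 queue pairs, for example:

```shell
# 2*queues_nr for TX/RX per queue pair, +2 for config and control
queues_nr=4
vectors=$(( 2 + 2 * queues_nr ))
echo "$vectors"    # prints 10
```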
- In VM
ethtool -L eth0 combined <queues_nr>
Notes
- I want to keep this document as simple as possible. If you see steps that can be skipped or configuration that is unneeded, feel free to update the document.