Vhost-user with OVS/DPDK as backend
The purpose of this document is to describe the steps needed to setup a development machine for vhost-user testing. All the information here is taken from various sources available online.
The goal is to connect guests' virtio-net devices having vhost-user backend to OVS dpdkvhostuser ports and be able to run any kind of network traffic between them.
Prerequisites
- Ensure hugepages are enabled on the kernel command line (see the quick check after this list):
default_hugepagesz=2M hugepagesz=2M hugepages=2048
- Install kernel development packages:
yum install kernel-devel
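After rebooting with the hugepage parameters above, it is worth confirming that the pages were actually reserved. A minimal check, assuming the 2M page size used here:
grep -i huge /proc/meminfo                                  # HugePages_Total should report 2048
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages  # same value, per page size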
Build from sources
DPDK build
- Get the sources
git clone git://dpdk.org/dpdk
cd dpdk
- Edit the configuration file:
diff --git a/config/common_base b/config/common_base
index 7830535..91b2d29 100644
--- a/config/common_base
+++ b/config/common_base
@@ -67,7 +67,7 @@ CONFIG_RTE_ARCH_STRICT_ALIGN=n
 #
 # Compile to share library
 #
-CONFIG_RTE_BUILD_SHARED_LIB=n
+CONFIG_RTE_BUILD_SHARED_LIB=y
 #
 # Use newest code breaking previous ABI
- Build it
export RTE_SDK=<dpdk-dir>
export RTE_TARGET=x86_64-native-linuxapp-gcc
make -j<cpu num> install T=$RTE_TARGET DESTDIR=install EXTRA_CFLAGS='-g'
cp -f install/lib/lib* /lib64/
export DPDK_BUILD_DIR=x86_64-native-linuxapp-gcc   # sometimes this is "build", depending on your environment and make parameters
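Before moving on, it can help to confirm that shared objects were produced and that the copies under /lib64 are visible to the dynamic loader. An optional check:
ls $RTE_SDK/$DPDK_BUILD_DIR/lib/librte_*.so   # shared DPDK libraries built above
ldconfig                                      # refresh the loader cache after the copy to /lib64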
OVS build
- Get the sources:
git clone https://github.com/openvswitch/ovs
cd ovs
- With DPDK 16.07, you will need an out-of-tree patch as of 07/18/2016:
wget https://patchwork.ozlabs.org/patch/647253/mbox/ -O dpdk-16.07-api-update.mbox
git am -3 dpdk-16.07-api-update.mbox
- Build it:
./boot.sh
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var --with-dpdk=$RTE_SDK/$DPDK_BUILD_DIR --disable-ssl --with-debug CFLAGS='-g'
make install -j <cpu num>
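An optional sanity check that the installed ovs-vswitchd is the DPDK-enabled build (the exact library names listed depend on your DPDK version):
ovs-vswitchd --version
ldd $(which ovs-vswitchd) | grep librte   # should list the DPDK shared libraries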
QEMU build
- Get the sources:
git clone git://git.qemu.org/qemu.git
cd qemu
- Build it:
mkdir bin
cd bin
../configure --target-list=x86_64-softmmu --enable-debug --extra-cflags='-g'
make -j <cpu num>
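A quick way to confirm the build succeeded is to ask the resulting binary for its version, from the bin directory used above:
./x86_64-softmmu/qemu-system-x86_64 --version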
Setup
System setup
- Prepare directories
mkdir -p /var/run/openvswitch
mount -t hugetlbfs -o pagesize=2048k none /dev/hugepages
- Load kernel modules
modprobe openvswitch
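Optionally verify that both the hugetlbfs mount and the kernel module are in place before starting OVS:
mount | grep hugetlbfs     # /dev/hugepages should be listed
lsmod | grep openvswitch   # the openvswitch module should be loaded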
OVS setup
- Clean the environment
killall ovsdb-server ovs-vswitchd
rm -f /var/run/openvswitch/vhost-user*
rm -f /etc/openvswitch/conf.db
- Start database server
export DB_SOCK=/var/run/openvswitch/db.sock
ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:$DB_SOCK --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
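If you want to confirm the database server is up and answering on the socket, one optional check is:
ovsdb-client list-dbs unix:$DB_SOCK   # should print Open_vSwitch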
- Start OVS
ovs-vsctl --no-wait init
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0xf
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=1024
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
ovs-vswitchd unix:$DB_SOCK --pidfile --detach --log-file=/var/log/openvswitch/ovs-vswitchd.log
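Here dpdk-lcore-mask=0xf pins the DPDK lcore threads to cores 0-3 and dpdk-socket-mem=1024 gives 1024 MB of hugepage memory to the first NUMA socket; adjust both for your machine. For example, on a two-socket host you might use (the values are only an illustration):
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"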
- Configure the bridge
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
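At this point ovs-vsctl show should list the bridge with both vhost-user ports, and the corresponding sockets should have appeared under /var/run/openvswitch:
ovs-vsctl show
ls /var/run/openvswitch/vhost-user*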
QEMU setup
- Start 'n' VMs
cd <qemu-dir>/bin/x86_64-softmmu/
qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 \
    -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user<n> \
    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=mynet1,mac=52:54:00:02:d9:0<n> \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem -mem-prealloc \
    -net user,hostfwd=tcp::1002<n>-:22 -net nic \
    /path/to/img
- VM setup
ssh root@localhost -p 1002<n>
ifconfig <eth interface> 192.168.100.<n>
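With two VMs configured this way, traffic between them goes through the OVS dpdkvhostuser ports. A simple smoke test, assuming VM 1 and VM 2 were given the addresses above:
# from inside VM 1 (192.168.100.1)
ping 192.168.100.2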
Enabling multi-queue
Enable multi-queue
- OVS setup
In OVS 2.5.0 or older (these versions only support setting the same number of rx queues for all PMD netdevs):
ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=<queues_nr, the same as QEMU>
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<cpu mask for rx queues, say 0xff00>
In versions newer than OVS 2.5.0:
ovs-vsctl set Interface vhost-user1 options:n_rxq=<queues_nr, the same as QEMU>
ovs-vsctl set Interface vhost-user2 options:n_rxq=<queues_nr, the same as QEMU>
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<cpu mask for rx queues, say 0xff00>
- QEMU command line modification
-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=<queues_nr> \
-device virtio-net-pci,netdev=mynet1,mac=52:54:00:02:d9:0<n>,mq=on,vectors=<2 + 2 * queues_nr>
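As a worked example of the vectors formula, with 4 queue pairs (and VM number 1) the two options become vectors = 2 + 2 * 4 = 10:
-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=4 \
-device virtio-net-pci,netdev=mynet1,mac=52:54:00:02:d9:01,mq=on,vectors=10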
- In VM
ethtool -L eth0 combined <queues_nr>
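To check that the guest actually switched to the requested number of queues, the lowercase form of the same ethtool command shows the current channel settings:
ethtool -l eth0   # "Combined" under "Current hardware settings" should equal queues_nr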
Notes
- I want to keep it as simple as possible. If you see steps that can be skipped or configuration that is not needed, feel free to update the document.