
Christian Ehrhardt
on 5 May 2016


DPDK is a fast-moving project comprising a set of libraries and drivers for fast packet processing. It uses polling threads, huge pages, NUMA locality and multi-core processing to achieve low latencies and a high packet processing rate.

Until recently, most guides to consuming or experimenting with DPDK looked like this:

  1. Download the source of DPDK and a solution/example using it
  2. Configure it to the needs of your custom solution
  3. Build the solution/example and DPDK together
  4. Prepare your system to meet the prerequisites of DPDK
  5. Configure and run your experiments

And those steps often covered more than just a single command. On top of that, for anything you did in step 4 you had to make sure it was persistent across reboots and that the guide you found applied to your current OS environment and software versions. Especially for step 5 you had to be sure you were not following outdated guides, as the way things plugged together changed quite often. And in the end such a solution was static: no updates would be applied for you over its lifecycle.

But as DPDK matures, it is time to make it available to a broader audience. So skip the fiddling with sources, custom configurations and builds, and manual system preparation, and go beyond just basic experiments.

The recent release of Ubuntu 16.04 contains DPDK 2.2 and Open vSwitch 2.5, which can consume the DPDK library. With that, installing and consuming DPDK is easier than it ever was.

Steps 1-3 of the above, taking Open vSwitch as the example, come down to just:

sudo apt-get install openvswitch-switch-dpdk
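
If you want to double-check what got pulled in, the package versions should show DPDK 2.2 and Open vSwitch 2.5 coming from the Xenial archive (exact versions depend on the pockets you have enabled):

apt-cache policy openvswitch-switch-dpdk dpdk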

Example use-case – set up Open vSwitch with DPDK

As with most server applications there is still some configuration needed to really put it to use, but even for those steps the packaged helper/init scripts, as well as recent features in the packages involved, will help you and simplify things a lot.

So let us go from basic experiments to a full Open vSwitch and DPDK based setup, including guests connected via it, in just a few steps. The setup we are aiming for looks like this: a DPDK-driven physical port (dpdk0) attached to an Open vSwitch bridge (ovsdpdkbr0), with two KVM guests connected to the same bridge via vhost-user ports.

Prepare environment

The dpdk init script assists in persistently re-assigning devices and reserving huge pages. The qemu-kvm init script helps to prepare for huge-page-backed guests, as needed for vhost_user later.

apt-get install openvswitch-switch-dpdk uvtool qemu-kvm
echo "NR_2M_PAGES=4096" >> /etc/dpdk/dpdk.conf
echo pci 0000:04:00.0 uio_pci_generic >> /etc/dpdk/interfaces
service dpdk restart
sed -ri -e 's,(KVM_HUGEPAGES=).*,\11,' /etc/default/qemu-kvm
service qemu-kvm restart
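
If you want a quick sanity check, the standard kernel interfaces will show whether the huge pages got reserved and whether the card (0000:04:00.0 from the example above) is now bound to uio_pci_generic:

grep HugePages_ /proc/meminfo
readlink /sys/bus/pci/devices/0000:04:00.0/driver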

Configure Open vSwitch with DPDK

Select the DPDK-enabled Open vSwitch and enable it:

update-alternatives --set ovs-vswitchd /usr/lib/openvswitch-switch-dpdk/ovs-vswitchd-dpdk
echo "DPDK_OPTS='--dpdk -c 0x1 -n 4 --pci-whitelist 0000:04:00.0
-m 2048 --vhost-owner libvirt-qemu:kvm --vhost-perm 0666'" >> /etc/default/openvswitch-switch
service openvswitch-switch restart
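
To verify which ovs-vswitchd is now in use and that the daemon came back up with its DPDK options, update-alternatives and the usual Open vSwitch tools are enough:

update-alternatives --display ovs-vswitchd
service openvswitch-switch status
ovs-vsctl show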

Connect a dpdk interface and two guests

This also got a lot simpler with the new libvirt in Ubuntu 16.04, which avoids the manual qemu command lines that were usually required before. Uvtool (a library and tools that make it easy to consume Ubuntu cloud images), libvirt and the provided sample XMLs make it straightforward to get started with DPDK-connected KVM guests.

ovs-vsctl add-br ovsdpdkbr0 -- set bridge ovsdpdkbr0 datapath_type=netdev
ovs-vsctl add-port ovsdpdkbr0 dpdk0 -- set Interface dpdk0 type=dpdk
ovs-vsctl add-port ovsdpdkbr0 vhost-user-1 -- set Interface "vhost-user-1" type=dpdkvhostuser
ovs-vsctl add-port ovsdpdkbr0 vhost-user-2 -- set Interface "vhost-user-2" type=dpdkvhostuser
wget -O guest-dpdk-vhost-user-singleq-1.xml http://paste.ubuntu.com/16062769/
wget -O guest-dpdk-vhost-user-singleq-2.xml http://paste.ubuntu.com/16062773/
uvt-kvm create --memory 2048 --template guest-dpdk-vhost-user-singleq-1.xml --password=ubuntu guest-dpdk-vhost-user-1 release=xenial arch=amd64 label=daily
uvt-kvm create --memory 2048 --template guest-dpdk-vhost-user-singleq-2.xml --password=ubuntu guest-dpdk-vhost-user-2 release=xenial arch=amd64 label=daily
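
For reference, the DPDK-relevant parts of those sample XMLs look roughly like the sketch below: hugepage-backed guest memory marked as shared (required for vhost_user), plus a vhostuser interface pointing at the socket Open vSwitch creates for the port (the path assumes the default /var/run/openvswitch location):

<memoryBacking>
  <hugepages/>
</memoryBacking>
<cpu>
  <numa>
    <cell id='0' cpus='0' memory='2097152' unit='KiB' memAccess='shared'/>
  </numa>
</cpu>
<interface type='vhostuser'>
  <source type='unix' path='/var/run/openvswitch/vhost-user-1' mode='client'/>
  <model type='virtio'/>
</interface>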

That’s it – you now have two guests using your DPDK-based Open vSwitch. Unlike any manually set up variant, this will be persistent across reboots and update automatically. You can run guest-to-guest traffic through its vhost_user implementation, or traffic to the outside world via the dpdk0 interface that got attached to ovsdpdkbr0.
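
If you want to watch traffic actually flowing, the usual Open vSwitch tooling works on the DPDK datapath as well; for example, check the per-port counters while you generate traffic between the guests (addressing inside the guests is up to you and the template you used):

ovs-vsctl list interface vhost-user-1
ovs-ofctl dump-ports ovsdpdkbr0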

While at first glance this might not seem like the shortest step-by-step list ever, a lot of things are taken care of for you this way by DPDK, libvirt and Open vSwitch.

We at Canonical really care about the consumability of packages, applications and solutions. So if you consider even this guide too complex, feel free to take the next step of application deployment: take a look at Juju, where you will find DPDK support in the most recent neutron-openvswitch charm.

More details and background about configuring DPDK and Open vSwitch with DPDK, as well as pointers to even more external sources, can be found in the Ubuntu Server Guide.

Why DPDK and why not?

As DPDK is meant for fast packet processing, let's run a quick test of just that. Let us keep it simple and pick some commonly available benchmarks: iperf, netperf and uperf.

Iperf already saturated the line speed of the card, so there was no improvement to be made there. But the request-and-response workloads of netperf and uperf at different sizes and concurrent connection counts show a clear benefit of DPDK.

The abbreviations are Benchmark-[MODE]-x, with NP being netperf, UP being uperf and RR abbreviating request-and-response. So for example UP-RR-1-250 is a transactional uperf request-and-response workload that drives small 1-byte payloads over 250 concurrent connections.
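
To give an idea of what such a case looks like, a roughly equivalent single-connection netperf run for the 1-byte request-and-response case would be the following (the target address is just a placeholder; the multi-connection cases need multiple instances or a uperf profile):

netperf -H 10.0.0.2 -t TCP_RR -- -r 1,1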

If you are willing to spend some CPU and memory in the hunt for very low latency, high packets-per-second rates and a highly tunable solution, DPDK likely provides a great environment for you.

As always, things can't just be good at everything, right? That is true – you certainly won't save CPU cycles per transferred byte with an approach like DPDK.

Also, as seen above, the network card may already be at its line-rate limit, which usually isn't hard to reach with streaming workloads. In that case it becomes a matter of efficiency, and there the kernel driver implementation currently wins.

Final thought

If you look at upstream DPDK and the DPDK portion of Open vSwitch, there are still major reworks and critical fixes on a regular basis. So don't expect this to already be a package like 'sed', with a typo fix every other year and nothing else.

But it clearly is time to bring DPDK to a wider audience and the next level of consumability, so start now with Ubuntu on your side.
