OVS Quantum Plugin Documentation

This page documents how to use Open vSwitch and the Open vSwitch plugin for OpenStack Quantum when deploying OpenStack-based cloud environments with Essex. For documentation on deploying Quantum with Folsom, see the Quantum Administrator Guide.

Background


The Quantum Open vSwitch plugin works as part of the OpenStack Quantum Virtual Network Service.  This page is intended to supplement the main Quantum Administrator Guide.  Please read that document first, and refer to this page when it mentions "plugin-specific documentation".

The Quantum Open vSwitch plugin consists of two components:

1) A plugin loaded at runtime by the Quantum service.  The plugin processes all API calls and stores the resulting logical network data model and associated network mappings in a database backend.

2) An agent which runs on each compute node (i.e., each node running nova-compute). This agent gathers the configuration and mappings from the central MySQL database and communicates directly with the local Open vSwitch instance to configure flows that implement the logical data model.
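
Because the agent programs flows directly on the local Open vSwitch instance, you can inspect what it has done with standard OVS tools.  For example (a diagnostic sketch; "br-int" assumes the default integration bridge name used throughout this guide):

# Dump the OpenFlow flows the agent has installed on the integration bridge
ovs-ofctl dump-flows br-int

# List the ports currently attached to the bridge
ovs-vsctl list-ports br-int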

Quantum Service Node Configuration


The Quantum service should be run on a single server, often the same "controller" server on which you run other centralized OpenStack components, such as nova-scheduler, nova-api, and nova-network.

Database Setup


A database instance must be accessible from the host running the Quantum service and from all of the compute node hosts.  These instructions assume a MySQL database, but any database supported by SQLAlchemy should work.  For example:

sudo apt-get install mysql-server python-mysqldb python-sqlalchemy

To prepare the database and make sure any compute node running the OVS Quantum agent will be able to remotely access the MySQL database, run:

$ mysql -u root -p

mysql> CREATE DATABASE ovs_quantum;
mysql> GRANT ALL PRIVILEGES ON ovs_quantum.* TO 'root'@'yourremotehost' IDENTIFIED BY 'newpassword';
mysql> FLUSH PRIVILEGES;
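
Before continuing, it is worth confirming that a compute node can actually reach the database remotely.  A quick check (192.168.1.10 is a placeholder for the address of your Quantum service host):

# Run from a compute node; should connect and list tables (possibly none yet)
mysql -h 192.168.1.10 -u root -p ovs_quantum -e "SHOW TABLES;"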

Editing the OVS Plugin Configuration File


Edit the file etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini to match your MySQL configuration. This file MUST be updated with an SQLAlchemy database URI that can be used to access the ovs_quantum database both locally and by agents on remote hosts (i.e., it cannot use the default in-memory SQLite database).  For example, the URI should look something like: mysql://root:nova@127.0.0.1:3306/ovs_quantum
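
For reference, the relevant portion of the file looks roughly like this (a sketch based on the Essex-era file; check the comments in your own copy for the exact option names):

[DATABASE]
# Must point at the shared MySQL instance, not the default in-memory sqlite URI
sql_connection = mysql://root:nova@127.0.0.1:3306/ovs_quantum

[OVS]
# Name of the OVS integration bridge on each compute node
integration-bridge = br-int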

NOTE: The database settings in this file will be used not only by the Quantum server, but also by the agents running on each compute node.  Thus, the database IP address in the file should be reachable by all compute nodes.

Selecting OVS as the Quantum Plugin


To select the Open vSwitch plugin, the Quantum service's etc/quantum/plugins.ini file should be modified to read:

provider = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPlugin

Nova Network Node Configuration 


Make sure the nova.conf used when running nova-network and nova-manage contains:

network_manager=nova.network.quantum.manager.QuantumManager
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
linuxnet_ovs_integration_bridge=br-int

Note: If nova-network is being used to provide L3 + NAT forwarding and/or DHCP to Quantum networks, the host running nova-network must be configured much like a hypervisor, since Quantum must "plug" Linux devices from this host into Quantum networks in the same way it plugs the Linux devices representing VM NICs into a Quantum network.  Namely, you must install a database client, create and configure an integration bridge (e.g., br-int), and run the ovs_quantum_agent.py process.  For details, you can follow the "Libvirt Setup" instructions below (omitting steps specific to libvirt, such as setting the libvirt_* flags for Nova).
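
In other words, the nova-network host needs roughly the same plugin-side preparation as a compute node.  A condensed sketch of those steps, using commands that are covered in detail in "Libvirt Setup" below (Ubuntu package names shown):

# Install the database client libraries
sudo apt-get install python-mysqldb python-sqlalchemy

# Create the integration bridge
sudo ovs-vsctl add-br br-int

# Run the agent, pointing it at a plugin config that references the shared database
python ovs_quantum_agent.py ovs_quantum_plugin.ini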

See the Quantum Administrator Guide for information about other QuantumManager flags.

Nova Compute Node Configuration 


The main OVS plugin directory is located at quantum/plugins/openvswitch.  This directory is referenced in the instructions below.

Open vSwitch must be installed and enabled on each compute node host.

Libvirt Setup (KVM / QEMU):

Install the Python database client libraries:

  • On Ubuntu: apt-get install python-mysqldb python-sqlalchemy
  • On CentOS / RHEL : yum install MySQL-python sqlalchemy-python
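
To quickly confirm that both libraries are importable (Python 2 syntax, as used by this release):

python -c "import MySQLdb, sqlalchemy; print 'client libraries OK'"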


Create an OVS "integration" bridge, to which all VMs will connect:

ovs-vsctl add-br br-int

If your setup uses multiple physical hosts and the OVS plugin is using VLANs, each server running nova-compute or nova-network should have a NIC with no IP address connected to a network where all VLANs are trunked (details are specific to your switch manufacturer).  This will be your "private" network.   You must then add the NIC that connects the server to the private network (e.g., eth1) as a port on the integration bridge (br-int).  For example:

ovs-vsctl add-port br-int eth1
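
You can verify that the bridge exists and the port was added using standard ovs-vsctl commands:

# Should show br-int with eth1 attached
ovs-vsctl show
ovs-vsctl list-ports br-int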

The nova.conf used by the nova-compute service should contain the following flags to ensure correct vif-plugging.  If your integration bridge name is something other than "br-int", change the first flag listed below:

libvirt_ovs_bridge=br-int
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver

From the main OVS plugin directory in the Quantum source tree, copy agent/ovs_quantum_agent.py and ovs_quantum_plugin.ini to each compute node (make sure ovs_quantum_plugin.ini contains the correct name of the OVS integration bridge before copying).

To start the agent, run:

$ python ovs_quantum_agent.py ovs_quantum_plugin.ini
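
The agent runs in the foreground.  If you want it to outlive your shell session, one simple option (a proper init script is preferable for production) is:

nohup python ovs_quantum_agent.py ovs_quantum_plugin.ini > /var/log/ovs_quantum_agent.log 2>&1 &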

If you are using Red Hat / Fedora, or more recent versions of Ubuntu (Precise or newer), you will need to modify the cgroup_device_acl field in /etc/libvirt/qemu.conf to include "/dev/net/tun" as shown below and restart libvirt. Otherwise VMs will fail to boot with the message "'tap' could not be initialized" in the nova-compute log.

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
]
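
After editing the file, restart libvirt so the new ACL takes effect (the service name differs by distribution):

# Ubuntu
sudo service libvirt-bin restart

# Red Hat / Fedora
sudo service libvirtd restart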

XenServer Setup:

Create the agent distribution tarball by invoking the following command from the OVS plugin directory (quantum/plugins/openvswitch):

$ make agent-dist

Copy the resulting tarball to dom0 on each XenServer (not to the service VM running the nova-compute process).

To install the Python MySQL client libraries on the XenServer dom0, run:

yum --enablerepo=base -y install MySQL-python sqlalchemy-python

Unpack the tarball, cd into the untarred directory, and run:

$ ./xenserver_install.sh

This will install all of the necessary pieces into /etc/xapi.d/plugins and create a new "integration bridge" on your host.

NOTE: On XenServer the integration bridge will have a name like "xapi1" or "xapi2".  Make sure to update /etc/xapi.d/plugins/ovs_quantum_plugin.ini to match the integration bridge name printed by the xenserver_install.sh script.

To start the Quantum agent on the XenServer host, run it in the hypervisor dom0:

$ python /etc/xapi.d/plugins/ovs_quantum_agent.py /etc/xapi.d/plugins/ovs_quantum_plugin.ini

If your setup uses multiple physical hosts and relies on VLANs, make sure your physical switch trunks all VLANs (details are specific to your switch manufacturer), then create a "patch" port to connect the integration bridge with the external bridge you want to use to send traffic to the physical network (e.g., xenbr0 to send traffic via eth0).  Note: the physical network used for VLANs should be different from the one you use for management connectivity.  To create a patch port between bridge xapi1 and xenbr0, run:

ovs-vsctl add-port xenbr0 patch-outside -- set Interface patch-outside type=patch options:peer=patch-inside
ovs-vsctl add-port xapi1 patch-inside -- set Interface patch-inside type=patch options:peer=patch-outside
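
To confirm the patch ports are wired correctly, list the ports on each bridge:

# Each command should include the corresponding patch port in its output
ovs-vsctl list-ports xenbr0
ovs-vsctl list-ports xapi1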

Also use the integration bridge name to set the following flags in the nova.conf file for the nova-compute service running in the service VM on the XenServer (note: this bridge name may differ between compute nodes):

xenapi_ovs_integration_bridge=xapi1
xenapi_vif_driver=nova.virt.xenapi.vif.XenAPIOpenVswitchDriver

Limitations 

  • To use the Quantum OVS plugin in tunneling mode, you must be using OVS version 1.4+.
  • To use the Quantum OVS plugin, the OVS kernel datapath must be installed from a kernel module (i.e., not using the OVS support built into the upstream kernel).  In particular, this means that Fedora 17, which ships OVS 1.4 but relies on the built-in kernel support, will not work for tunneling.
  • OVS is not compatible with iptables + ebtables rules applied directly on VIF ports.  Thus, the existing implementations of Nova security groups and spoof prevention are not compatible.  We are targeting this work for Folsom.
  • "Provider Networks": currently there is no way to create a Quantum network that maps directly to a particular hypervisor NIC and optional VLAN.  This work is targeted for Folsom.
  • XenServer is not supported with the current Folsom code, because XenServer dom0 supports only Python 2.4 and the OVS Quantum plugin requires a Python agent running in dom0.  Before the final Folsom release we expect to have a mechanism for running the OVS plugin on XenServer with the latest Folsom features.