OpenStack

This page tracks the effort to provide an Open vSwitch plugin for the Quantum network service in OpenStack.

OpenStack, Quantum and Open vSwitch – Part II

Post by: admin : August 1, 2011 1:05 pm : openstack

[This is the second post in a three-part series describing OpenStack, a new OpenStack-compatible networking service called Quantum, and the first Quantum plug-in, which is based on Open vSwitch.]

In the previous post, we provided some background for OpenStack and described how it has major sub components for Compute (Nova), Object Storage (Swift), and Image Management (Glance), yet networking is missing as a first class component (currently it is a part of Nova).  While this made sense in the initial context in which OpenStack was designed, there has been a lot of effort within the OpenStack community to define a network service allowing networking to evolve easily and independently.

In this post, we describe Quantum, a recently created networking service compatible with OpenStack.

The Quantum Project –  A Virtual Networking Service for OpenStack

The goal of Quantum is simple: to provide a clean network service abstraction within OpenStack.  While we’ll get into the particulars below, here is a cheat sheet of some of the benefits its design offers:
  • Provides a flexible API for service providers or their tenants to manage OpenStack network topologies (e.g., create multi-tier web applications)
  • Presents a logical API and a corresponding plug-in architecture that separates the description of network connectivity from its implementation.  This allows virtual networking to be implemented in virtual switches, physical switches, or both, and supports incorporating technologies from multiple vendors / open source projects.
  • Offers an API that is extensible and evolves independently of the compute API, allowing plugins to easily introduce more advanced network capabilities (e.g., QoS, ACLs, etc.)
  • Provides a platform for integrating advanced networking solutions such as:
    • Existing firewall services
    • Load balancer services
    • MPLS infrastructure

The Big Picture – Quantum Cloud Networking Fabric

We’ll start by describing how Quantum fits within the existing OpenStack component architecture, shown in the diagram below.

Quantum-OpenStack

Quantum’s primary interface is a programmatic RESTful API.  The abstractions over which it operates are, by design, extremely simple.


The Quantum API allows for the creation and management of “virtual networks”, each of which can have one or more “ports”.  A port on a virtual network can be attached to a “network interface”, where a “network interface” is anything that can source traffic, such as a vNIC exposed by a virtual machine, an interface on a load balancer, and so on.  These abstractions offered by Quantum (virtual networks, virtual ports, and network interfaces) are the building blocks for building and managing logical network topologies.  Of course, the technology that implements Quantum is fully decoupled from the API (that is, the backend is “pluggable”).
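To make the three abstractions concrete, here is a minimal Python sketch (this is not Quantum code; all class and attribute names are invented for illustration) of virtual networks containing ports, with interfaces attached to ports:

```python
# Illustrative sketch of Quantum's core abstractions: a virtual network
# holds ports, and a port can have at most one attached interface
# (e.g., a VM's vNIC or a load-balancer interface).
import uuid


class Port:
    def __init__(self, network_id):
        self.id = str(uuid.uuid4())
        self.network_id = network_id
        self.attachment = None  # interface id once attached, else None

    def attach(self, interface_id):
        if self.attachment is not None:
            raise ValueError("port already has an attachment")
        self.attachment = interface_id


class VirtualNetwork:
    def __init__(self, name):
        self.id = str(uuid.uuid4())
        self.name = name
        self.ports = {}  # port-id -> Port

    def create_port(self):
        port = Port(self.id)
        self.ports[port.id] = port
        return port


# Build a tiny logical topology: one network, one port, one vNIC attached.
net = VirtualNetwork("web-tier")
port = net.create_port()
port.attach("vnic-of-vm-1")
```

Everything about how that connectivity is realized on real switches is left to the pluggable backend, exactly as described above.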


So, for example, the logical network abstraction could be implemented using simple VLANs, L2-in-L3 tunneling, or any other mechanism one can imagine and build.  The only requirement is that the actual implementation provide the L2 connectivity described by the logical model.

While the native Quantum API does not support more sophisticated network services such as, say, QoS or ACLs, it does provide an API extensibility mechanism that plugins can use to expose them.  This is the conduit by which developers and vendors in the OpenStack ecosystem can innovate within Quantum.  If an extension proves useful and generally applicable, it may become part of the core Quantum API in a future version.


Quantum Internals

Cloud Networking Fabric Quantum - Architecture

There are three key functional layers of abstraction that make up the Quantum service:

1) REST API layer: This layer is responsible for implementing the Quantum API and routing API requests to the correct end-point within Quantum’s pluggable infrastructure. The REST API layer also contains various infrastructure glue for launching the Quantum service, marshalling and unmarshalling requests and responses, and validating data format and correctness. This layer can also contain security and stability infrastructure, such as rate-limiting logic on inbound API calls, to protect against denial-of-service attacks and ensure that the service remains responsive under load.


REST API Extensions: Quantum provides an “extensibility” mechanism that enables anybody to extend the Core API with features and functionality that are not currently part of it. Taking today’s Core API as an example, one could use the extensibility mechanism to create a QoS extension that allows setting Quality of Service parameters on Quantum networks. Similarly, multiple parties can integrate advanced networking functionality through the same mechanism. The Quantum community is actively working on implementing the extensibility framework (to follow the progress, check out the blueprint here).
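Since the extensibility framework was still being implemented when this was written, the following is a purely hypothetical Python sketch of the idea: extensions register handlers alongside the core API without modifying it. The registry, the QoS handler, and every name here are invented for illustration.

```python
# Hypothetical sketch of API extensibility: extensions register named
# handlers with the service, and requests to extension URLs are routed
# to the registered handler rather than to the core API.
extensions = {}  # extension name -> handler callable


def register_extension(name, handler):
    extensions[name] = handler


network_qos = {}  # network-id -> QoS parameters


def set_qos(network_id, params):
    """Invented QoS handler: attach rate-limit parameters to a network."""
    network_qos[network_id] = dict(params)
    return network_qos[network_id]


# A vendor plugin registers its QoS extension at startup...
register_extension("qos", set_qos)

# ...and a request against the extension is dispatched to that handler.
result = extensions["qos"]("net-1", {"max_kbps": 10000})
```

The point is architectural: the core API stays small, while plugins expose extra capabilities (QoS, ACLs, and so on) through a uniform side channel.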


Key Quantum API methods:

Method: Create Network

REST URL: POST /tenants/{tenant-id}/networks
HTTP Request Body: Specifies the symbolic name for the network being created, e.g.:

 {
   "network": {
     "name": "symbolic name for network1"
   }
 }

Description: This operation creates a Layer-2 network in Quantum based on the information provided in the request body.

Method: List all networks for a particular tenant

REST URL: GET /tenants/{tenant-id}/networks
HTTP Request Body: Not Applicable

Description: This operation returns the list of all networks currently defined in Quantum for the specified tenant.

Method: Update Network

REST URL: PUT /tenants/{tenant-id}/networks/{network-id}
HTTP Request Body: Specifies a new symbolic name for a particular Quantum network, e.g.:

 {
   "network": {
     "name": "new symbolic name"
   }
 }

Description: This operation updates a network’s attributes in Quantum. The only attribute that can be updated, as it stands now, is the network’s symbolic name.

Method: Delete Network

REST URL: DELETE /tenants/{tenant-id}/networks/{network-id}
HTTP Request Body: Not Applicable

Description: This operation deletes the network specified in the Request URI.

Keeping in line with the principles of RESTful interfaces, we have shown examples of the Create, Read, Update, Delete (CRUD) methods for the “network” resource. Similar CRUD API methods are available for the other resources managed by Quantum, such as “Ports” and “Port Attachments.”
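The CRUD semantics above can be sketched with a small in-memory model. This is a sketch only, not the Quantum implementation; the dict-based store and function names are assumptions for illustration, mirroring the four REST operations:

```python
# In-memory sketch of the four "network" CRUD operations shown above.
# Each function comment names the REST call it mimics.
import uuid

networks = {}  # (tenant_id, network_id) -> {"name": ...}


def create_network(tenant_id, body):
    # POST /tenants/{tenant-id}/networks
    network_id = str(uuid.uuid4())
    networks[(tenant_id, network_id)] = {"name": body["network"]["name"]}
    return network_id


def list_networks(tenant_id):
    # GET /tenants/{tenant-id}/networks
    return [nid for (tid, nid) in networks if tid == tenant_id]


def update_network(tenant_id, network_id, body):
    # PUT /tenants/{tenant-id}/networks/{network-id}
    # As noted above, only the symbolic name can be updated today.
    networks[(tenant_id, network_id)]["name"] = body["network"]["name"]


def delete_network(tenant_id, network_id):
    # DELETE /tenants/{tenant-id}/networks/{network-id}
    del networks[(tenant_id, network_id)]


nid = create_network("tenant-a", {"network": {"name": "net1"}})
update_network("tenant-a", nid, {"network": {"name": "renamed"}})
```

Note how every operation is scoped by tenant, matching the /tenants/{tenant-id}/ prefix in the URLs.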

Detailed API Specification is available as a living document at the following Wiki.

2) Authentication & Authorization: This layer is not yet complete, but will be responsible for validating incoming API requests and ensuring that an authenticated user is making the request before letting the API call pass through the rest of the stack. Quantum authentication will use the OpenStack Keystone Identity Service.

In addition to basic authentication, the API will enforce client access rights to REST resources.  This is an important component for enforcing access rights in multi-tenant environments in which network configuration is exposed to the tenants.

Key Authorization roles that are currently under consideration are:

  •  Super User/System – A Super User can operate on any network within the Quantum namespace; this account will typically be reserved for Service Providers.
  •  Tenant – A Tenant can create networks from the “pool” of service provider networks.  The Tenant can create ports on these networks and attach interfaces to the ports, as long as the Tenant owns those interfaces (e.g., it is an interface of one of the Tenant’s nova VMs).
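A hedged sketch of how those two roles might translate into an access check (the role names come from the roles under consideration above; the function itself is invented for illustration):

```python
# Illustrative access check for the two roles under consideration:
# a super-user may operate on any network, while a tenant may only
# operate on resources it owns.
def authorized(role, caller_tenant, resource_tenant):
    if role == "superuser":
        return True  # service-provider account: no restrictions
    if role == "tenant":
        return caller_tenant == resource_tenant
    return False  # unknown roles are denied by default
```

In a multi-tenant cloud this kind of check would run on every API call, after Keystone has authenticated the caller.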

The Quantum community is still actively defining the Authentication and Authorization capabilities of the API.  The blueprint tracking this effort is available here.

3) Pluggable Backend: As we’ve mentioned, Quantum decouples the abstractions from the implementations.  The primary mechanism for doing so is through a pluggable architecture in which the mechanism that implements the virtual networks is implemented as a plugin.

So, how do I implement a Quantum Plugin?

This is far from comprehensive, but to give you a general idea of how plugins work, we have put together an architectural sketch of a sample Quantum plugin (that is loosely based on the Open vSwitch plugin we will describe in the next post).

Our reference architecture for a Quantum plugin contains:

a) A centralized Quantum controller: This is the box titled “Quantum Cloud Network Fabric Plugin”, and this controller is responsible for servicing users of cloud environments such as OpenStack. The controller helps tie the end-user-facing cloud network model to the physical networks that are implemented by connecting vSwitches and pSwitches.

b) A centralized data model: This data model represents the network connectivity desired by the cloud users, who care about creating virtual networks that provide connectivity between sets of virtual machines. A common way to store this cloud network model is for the plugin to contain an internal database. This database serves as the single source of “truth”, both for responding to Quantum API calls and for implementing the cloud network model by communicating with the various physical and virtual switches that a particular plugin supports.

c) Virtual and Physical Switch communication channel: The final component of our reference architecture is a mechanism for the centralized Quantum controller to communicate with various virtual and physical switches to implement the physical connectivity. One possible mechanism for establishing such a communication channel would be to install a Quantum plugin agent on every hypervisor that is running a vSwitch, as well as on every supported physical switch. The agent can then “phone home” to get updates from the centralized Quantum controller. The agent is responsible for communicating with the various switches using the appropriate APIs and establishing L2 connectivity. The agent can choose a mechanism (e.g., VLANs) to create the isolation between packets from different Quantum networks.
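The agent’s “phone home” pattern can be sketched as a reconcile loop. All function and variable names below are illustrative assumptions; a real agent would query the central controller over the network and program the local vSwitch through its own APIs:

```python
# Sketch of a plugin agent's reconcile loop: pull the desired network
# model from the central controller, compare it with local switch state,
# and apply whatever differs (e.g., VLAN tags used for isolation).
def desired_state():
    """Stand-in for a call home to the centralized Quantum controller."""
    return {"net-1": {"vlan": 101, "ports": ["vnic-1", "vnic-2"]}}


def reconcile(switch_state):
    wanted = desired_state()
    for net_id, cfg in wanted.items():
        if switch_state.get(net_id) != cfg:
            # In a real agent this would program the local vSwitch,
            # e.g., tag cfg["ports"] with VLAN cfg["vlan"].
            switch_state[net_id] = cfg
    return switch_state


# An agent would run this periodically; one pass converges an empty switch.
switch_config = reconcile({})
```

Polling from the agents keeps the controller simple (no push channel to every hypervisor), at the cost of some convergence latency between API call and physical effect.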

For a more detailed view on how to create a Quantum plugin, take a look at the Open Source Quantum plug-in that uses Open vSwitch.  Or, just wait for the next post … :)


OpenStack, Quantum and Open vSwitch – Part I

Post by: admin : July 25, 2011 1:37 pm : openstack

[This is the first post in a three-part series describing OpenStack, a new OpenStack-compatible networking service called Quantum, and the first Quantum plug-in, which is based on Open vSwitch.]

We will start by providing some background on OpenStack, focusing on the original network model and the rationale for creating Quantum. This is not meant to be an exhaustive (nor authoritative!) overview. If you are interested in learning more, or in keeping track of the progress, check out the project homepage, openstack.com.

OpenStack Overview

OpenStack is a cloud management system (CMS) that orchestrates compute, storage, and networking to provide a platform for building on-demand services such as IaaS. Traditionally, OpenStack has consisted of the following major subcomponents:

  • OpenStack Compute (Nova)
  • OpenStack Object Storage (Swift)
  • OpenStack Image Service (Glance)

The following diagram provides an illustration of how the components are organized to provide the OpenStack platform.

High Level OpenStack Architecture

The system operates roughly as follows. The “Cloud Controller” cluster is responsible for managing the global state of the system. It interacts with LDAP (not shown), OpenStack Object Storage, and OpenStack Compute. The OpenStack Object Storage and OpenStack Compute packages each contain a centralized piece that usually runs on the Cloud Controller, as well as a “worker” piece that usually runs on a particular compute or storage node. The “workers” can be thought of as agents that manage storage or compute resources locally on a single system. To communicate with the compute and storage “workers”, the cloud controller uses RabbitMQ.

The responsibilities of the primary components are:

OpenStack Compute (Nova) is responsible for managing the hypervisor hosts, overseeing virtual machine configuration, creation, deployment, and destruction on a “pool” of compute resources. Nova currently supports KVM, Xen, VMware vSphere, and Hyper-V. Nova also orchestrates deployment across Availability Zones, which are often called Nova Zones. An availability zone represents a single unit of failure: one zone could be a cloud provider’s San Francisco datacenter, while another is its New York datacenter. The rationale is that when one availability zone experiences a failure, none of the other zones are impacted, which enables a cloud user to mitigate risk by distributing and replicating workloads across availability zones. Availability zones need not span an entire datacenter, however; one zone might consist of a single rack of servers at the north end of a datacenter, while another is a rack of servers at the south end.

OpenStack Object Storage (Swift) allows the creation of clusters that can store, retrieve, and update multi-petabyte objects, all within a single storage system. Swift handles data replication and integrity across the cluster to maintain availability during failures. The primary use cases tend to be around static, long-term data storage needs; for example, Swift is used for storing virtual machine images and long-term backups.

OpenStack Image Service (Glance) is a multi-format virtual machine image repository that provides discovery, registration, and delivery services for virtual machine images and disks. Glance doesn’t actually provide the storage of the images; rather, it uses local disk, iSCSI/NAS/SAN, and Swift as storage backends.

What about networking?

Noticeably absent from the list of major subcomponents within OpenStack is networking. The historical reason for this is that networking was originally designed as a part of Nova which supported two networking models:

  • Flat Networking – A single global network for workloads hosted in an OpenStack Cloud.
  • VLAN-based Networking – A network segmentation mechanism that leverages existing VLAN technology to provide each OpenStack tenant its own private network.

While these models have worked well thus far, and are very reasonable approaches to networking in the cloud, not treating networking as a first-class citizen (like compute and storage) reduces the modularity of the architecture. Next we’ll briefly describe some of the shortcomings that precipitated the work on Quantum, which is proposed to sit alongside Nova, Swift, and Glance as a standalone service.

Limitations of the Current OpenStack Network Architecture

Limited network options: As we’ve mentioned, OpenStack only supports two network models. While suitable for some environments, they don’t cover all use cases.

For example, the flat model exposes one large “flat” network with no real isolation primitives, making it difficult to build support for multi-tenancy.

The VLAN model, on the other hand, provides basic isolation via 802.1Q tagging. However, it is burdened with the classic set of problems associated with VLANs, such as difficulty supporting overlapping IP and MAC addresses, difficulty extending L2 across subnets, VLAN scaling limits, MAC-table scaling issues, and dependency on proprietary interfaces for configuring networking gear.  Additionally, the VLAN model only supports a single interface per VM.

No well-defined network interface: In addition to having limited network options, there is no well-defined interface by which a new model can be created and slotted in non-disruptively.  The original developers did a good job designing flexibility into how networks are orchestrated, but the concepts of VLANs and bridges are embedded deeply in the Nova codebase.

Simplistic network model: The current networking model is built around L2 and L3. However, interposition of higher-level services is a growing need and not well accommodated in the current model. For example, how would a new “Firewall as a service” offering plug into a private nova network?  Or how would one model connecting a tenant’s network to a remote data center using an MPLS circuit?

It is for these reasons that the Quantum network service was proposed. Quantum is designed to be a first-class subcomponent that defines the interface for a network service. It can accommodate the existing flat and VLAN models, as well as many other service implementations, including virtualized L2 using soft switches, service interposition, hardware integration, etc.

In the next post, we will describe Quantum in detail, and describe an implementation of it using Open vSwitch.


Open vSwitch and OpenStack

Post by: admin : July 14, 2011 1:47 pm : openstack

OpenStack is an open source, scalable cloud infrastructure platform that has gained considerable industry momentum and interest (learn more here). A number of Open vSwitch developers and users have been working with members of the OpenStack community to define a flexible network service framework for OpenStack called Quantum. The Quantum code is in an early beta (the Quantum development wiki is here, the code is here).

Below is the Quantum presentation from the Diablo OpenStack Design Summit, presented by Open vSwitch team member Dan Wendlandt. We share the excitement of the OpenStack community and believe Quantum has the potential to provide a next-generation infrastructure platform for both public and private clouds.



Talk on Open vSwitch and OpenStack

Post by: admin : July 14, 2011 12:47 pm : openstack

The following video is a talk given at the 2010 OpenStack design summit regarding networking issues in the cloud and how Open vSwitch can be used to address them.


12-Deploying OpenStack, Networking Considerations from OpenStack on Vimeo.

