API to register existing "provider" networks with Quantum

Registered by Robert Kukura

In addition to dynamically creating brand new virtual networks, there are use cases where Quantum needs to be told about existing L2 networks in the data center. Some plugins may already support this, but there should be a common API or extension allowing existing VLANs or other types of networks to be registered with Quantum, after which ports can be created, and so on. An extensible way to describe the existing network is needed, and an error should result if the plugin is not able to handle the described network.

The first phase of this blueprint will implement provider VLANs in the linuxbridge and openvswitch plugins as follows:

* Extend the plugins' create_network functions with an optional argument specifying the VLAN tag.

* Extend the python-quantumclient CLI to allow this argument to be passed to create_network.

* Extend the plugins' get_network_details functions to return VLAN tags that have been assigned either explicitly or dynamically.

* Add configurability of the range from which VLAN tags are dynamically allocated to the openvswitch plugin. The linuxbridge plugin already has this.

* Since there is no authorization mechanism in Quantum yet, comments in the above will indicate where admin privileges need to be checked.

* Make sure the plugins' delete_network functions properly handle deleting networks whose VLAN tags do not fall into the range from which VLAN tags are dynamically allocated.
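The plan above can be sketched roughly as follows. This is a hypothetical illustration, not the actual plugin code: the `vlan_id` argument, the `VLAN_MIN`/`VLAN_MAX` range, and the in-memory allocation set are all invented for the sketch, and the admin-privilege check is only marked by a comment, as the blueprint describes.

```python
# Range from which VLAN tags are dynamically allocated (would be
# configurable in the plugin; values here are illustrative).
VLAN_MIN, VLAN_MAX = 1000, 1999

_allocated = set()  # stand-in for the plugin's VLAN allocation table


def create_network(name, vlan_id=None):
    """Create a network, optionally binding it to an existing provider VLAN."""
    if vlan_id is None:
        # Dynamic allocation from the configured range.
        for tag in range(VLAN_MIN, VLAN_MAX + 1):
            if tag not in _allocated:
                vlan_id = tag
                break
        else:
            raise RuntimeError("no VLAN tags left in the dynamic range")
    else:
        # Explicit provider VLAN: admin privileges would need to be
        # checked here (no authorization mechanism in Quantum yet).
        if vlan_id in _allocated:
            raise ValueError("VLAN %d is already in use" % vlan_id)
    _allocated.add(vlan_id)
    return {"name": name, "vlan_id": vlan_id}


def delete_network(net):
    """Release the network's VLAN tag. Out-of-range (provider) tags are
    simply freed and never returned to the dynamic-allocation pool."""
    _allocated.discard(net["vlan_id"])
```

Note that deleting a provider network with an out-of-range tag takes the same path as a dynamic one here; the point of the last work item is that this must not corrupt the dynamic pool.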

Once the support for provider VLANs has been reviewed and merged, support for provider flat networks will be added to the same plugins in a second phase following a similar approach. Since nova networking parity requires supporting multiple physical flat networks, the plugins will also be enhanced to support VLANs (and GRE tunnels for OVS?) over multiple distinct physical networks. Details will be forthcoming.
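Supporting multiple physical networks implies the plugins need configuration naming each physical network and, optionally, the VLAN range available on it. A minimal sketch of parsing such entries, assuming a hypothetical `physnet-name[:vlan_min:vlan_max]` syntax (the actual option name and format are not specified in this blueprint):

```python
def parse_network_vlan_ranges(entries):
    """Parse entries like 'physnet1:1000:2999' (VLANs available) or a
    bare 'physnet2' (flat networking only) into a dict mapping each
    physical network name to its list of (min, max) VLAN ranges."""
    ranges = {}
    for entry in entries:
        parts = entry.split(":")
        name = parts[0]
        ranges.setdefault(name, [])
        if len(parts) == 3:
            ranges[name].append((int(parts[1]), int(parts[2])))
    return ranges
```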

Blueprint information

dan wendlandt
Robert Kukura
Robert Kukura
Series goal: Accepted for folsom
Milestone target: 2012.2
Started by: Robert Kukura
Completed by: dan wendlandt


Expecting publicly available WIP review branch by Monday 8/6/12.

This is my first participation in Quantum, so please forgive my lack of context here. I feel that the current implementation for this blueprint is a bit problematic: it is not a clean design to have vlan_id in the API. In my very biased view, the main difference between a provider network and a regular network is that the provider network may already have a gateway, firewall, DHCP, etc.; in some cases, we just need to plug the VM into that network. A VLAN certainly is one way of plugging VMs in, but there could be many others. For example, we can put the VM onto a VXLAN domain, which somehow terminates the provider network as well, for example with a hardware switch that has VXLAN capability. This might be a more general question regarding how we define a "network" in Quantum. Maybe by default a network has DHCP and NAT functions provided by nova, but for provider networks we can optionally take some of them out. -Simon

Hi Simon,

I think it's more that the blueprint name is confusing. Really, the goal of this work is to support the use case where a service provider wants a particular Quantum network (which is logical) to map to a specific VLAN. There's nothing requiring Quantum networks to map to VLANs, and in many cases they will not.

I agree with your use case that a service provider may want to create a network where the DHCP, gateway, and FW/NAT are handled externally. The Quantum v2 API should already be able to support that use case, as attaching L3 + NAT elements will be an explicit step (unlike nova), and we're planning on making DHCP use configurable on a per-network basis (https://blueprints.launchpad.net/quantum/+spec/per-net-dhcp-enable), though no one has signed up for that work item yet.


This is required for certain nova-parity use cases. Is this work expected to track both the API extension (presumably an admin extension) and support in one or more plugins? If so, which plugins? As I mentioned, we can rework the NVP extension to use this generic API once we agree on a generic model.

Dan, can you elaborate on the use cases? Are you referring to flat networking parity, where quantum plugin(s) would need to handle existing physical networks in addition to existing virtual networks? -Bob

How do we connect an existing L2 network to a virtual L2 network? Is that at the L2 layer or the L3 layer? - Nachi

I assume the use case is: 'here's this existing network segment outside of my cloud but reachable by it, e.g. a port or ports on my cloud with VLAN tag xxx (or, equally, VXLAN, GRE, whatever). I would like to be able to plug VMs into it. Dear Quantum, please make me a virtual network that is bridged with this external network.' - Ian.

Hi Ian, yes, I definitely think that is one of the scenarios.
Here are a couple of use cases:
1) I want a virtual network that maps to VLAN 9 that is trunked to eth1 on all of my hypervisors.
2) I have a managed hosting environment in some part of my data center not adjacent to my hypervisors, and I want to map a virtual network in my cloud to VLAN 99 in that managed hosting environment.
3) I have a load balancer or other physical hosts on a L2-in-L3 VXLAN (or equiv.) that I want on the same L2 network as my VMs. That VXLAN uses a tenant-id of 6000.
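One way to make the three use cases concrete is to describe each desired mapping as an extensible set of attributes, as the blueprint description suggests, with the plugin raising an error for any description it cannot realize. The attribute names and the plugin's supported set below are invented for illustration:

```python
# Hypothetical provider descriptions for the three use cases above.
use_cases = [
    # 1) VLAN 9 trunked to eth1 on all hypervisors
    {"network_type": "vlan", "physical_network": "hypervisor-eth1",
     "segmentation_id": 9},
    # 2) VLAN 99 in a remote managed-hosting pod
    {"network_type": "vlan", "physical_network": "managed-hosting-pod",
     "segmentation_id": 99},
    # 3) an L2-in-L3 VXLAN segment with ID 6000
    {"network_type": "vxlan", "physical_network": None,
     "segmentation_id": 6000},
]

# Types this particular (imaginary) plugin can handle.
SUPPORTED_TYPES = {"vlan", "flat"}


def validate(description):
    """Raise if the plugin cannot realize the described network."""
    if description["network_type"] not in SUPPORTED_TYPES:
        raise ValueError("plugin cannot realize network type %r"
                         % description["network_type"])
    return description
```

Under this scheme, use cases 1 and 2 would be accepted by a VLAN-capable plugin, while use case 3 would be rejected with an error rather than silently misconfigured.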

Some key points to discuss:
- One model is to view this as a property of the network itself (e.g., passed in on creation) while another model is to view this as a "port" that uplinks you to an external network. One benefit of the port model is that it might be cleaner when thinking about a single virtual network that is uplinked to multiple external networks.
- It seems our use cases to this point are about L2-bridging only. Would be good to decide if we agree on that scoping.
- In general, there may be multiple physical L2 networks that can be bridged to. This could be either because the hypervisor has multiple NICs, or because there are multiple remote pods in the data center that contain hosts that need to be plugged into virtual networks. So it seems like we need to be able to identify each remote L2 network using a UUID. This is in line with the work we've already done on this for the NVP plugin.
- It seems that for each physical L2 network that can be bridged to, there is a notion that this physical network *may* be sliced up into multiple L2 segments using a "context" tag such as a VLAN ID or NVGRE ID. Thus, it is possible to map a virtual network not just to a particular physical network, but to a particular "context" on that physical network (e.g., VLAN 9 on the physical network that all of my hypervisors reach using eth1).
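The last two points suggest that a segment is identified by the pair (physical network, context tag): the same VLAN ID can appear on two different physical networks without conflict, but no two virtual networks may claim the same pair. A minimal sketch of that model, with invented names throughout:

```python
import collections

# A virtual network maps to a segment: a physical network plus an
# optional "context" tag (VLAN ID, NVGRE/tunnel ID, ...).
Segment = collections.namedtuple("Segment", "physical_network segmentation_id")

_bindings = {}  # net_id -> Segment


def bind_network(net_id, physical_network, segmentation_id=None):
    """Bind a virtual network to a segment, rejecting duplicates."""
    seg = Segment(physical_network, segmentation_id)
    if seg in _bindings.values():
        raise ValueError("segment %r is already bound to another network"
                         % (seg,))
    _bindings[net_id] = seg
    return seg
```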

-- Seems to me that what we're doing is actually defining an interface (rather than a port) to the external world, that we would then plug into a port somewhere within Quantum (the action of which would set up the connection to the network and then bridge it to the Quantum segment).
-- The description of the external world interface you're making would want to be extensible to cover the cases we already know of and have the ability for new ones to be added. You would need config to describe the first hop of the task (i.e. eth1 on all compute nodes, or something more complex) and the tunnel to be used: VLAN(9), GRE(ip-addr, 1234). And, of course, tunnels can be nested...
-- And as for L2 only, that seems appropriate. Quantum networks are L2 networks. I don't think L3 behaviour should crop up when you are doing a task that uses wiring up a switch as its abstraction; that wants separate consideration.
-- An aside: I remember talk about making endpoints available from other tenants. You could pull both tenants out to an externally denominated network ('this port, tag 7') so this actually solves that problem, although perhaps it's better to come up with some sort of advertising call in the longer term. (e.g. advertise(port-id, security-details) -> interface-id - better still if you can find a word where we agree on the spelling ;)
- Ian

Some interesting ideas above, but I'd like to keep the scope of this blueprint reasonable for F-2, especially with all the other changes coming in at the same time. See the additional details I just added to the description above for what I have in mind. I think it makes sense to support multiple physical connections, used for either flat or VLAN networks and maybe tunnels, but I'd like to avoid having to actually model the physical connectivity at this point. I think simple identifiers for physical networks/trunks that can be mapped on the nodes to physical network interfaces or preconfigured bridges should suffice for now.

Ian, I definitely agree with you on the port abstraction. In fact, this is actually how we model things using Nicira's platform as well. You create an Attachment (interface in your terminology) that describes the thing you're plugging into (e.g., VLAN 99 on physical network "foo"), and then plug that attachment into a port.
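The attachment-then-plug model Ian and Dan describe could be sketched as below. The class and method names are invented for illustration and are not Quantum's or NVP's actual API; the point is only that the external-network description and the act of plugging it into a port are separate steps.

```python
class Attachment(object):
    """Describes the thing being plugged in, e.g. VLAN 99 on
    physical network 'foo'."""
    def __init__(self, physical_network, vlan_id):
        self.physical_network = physical_network
        self.vlan_id = vlan_id


class Port(object):
    """A port on a virtual network; accepts at most one attachment."""
    def __init__(self):
        self.attachment = None

    def plug(self, attachment):
        # Plugging sets up the connection to the external network and
        # bridges it to the Quantum segment (elided in this sketch).
        if self.attachment is not None:
            raise RuntimeError("port already has an attachment")
        self.attachment = attachment
```

One benefit of this model, noted above, is that a single virtual network can hold several such ports, each uplinked to a different external network.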

Gerrit topic: https://review.openstack.org/#q,topic:bp/provider-networks,n,z

Addressed by: https://review.openstack.org/9069
    Initial implementation of provider extension.

Addressed by: https://review.openstack.org/10938
    Implementation of second phase of provider extension.

Addressed by: https://review.openstack.org/11388
    Implementation of second phase of provider extension for openvswitch

Note: this late commit will be tracked separately now, using https://bugs.launchpad.net/quantum/+bug/1037341. This is so we can mark this BP as completed and more accurately represent the scope of the work that needs a feature freeze exception.

