
Neutron Quality of Service coding sprint

Last week we had the OpenStack Neutron Quality of Service coding sprint in Ra‘anana, Israel, to work on [1].

It’s been an amazing experience: we’ve accomplished a lot, but we still have a lot ahead. We gathered at the Red Hat office for three days [2], delivering almost (sigh!) the full stack for the QoS service with bandwidth limiting. On the first day we had a short meeting where we went over the whole picture of the blocks and dependencies we had to complete.

The people from Huawei India (hi Vikram Choudhary & Ramanjaneya Reddy) helped us remotely by bootstrapping the DB models and the neutron client.

Eran Gampel (Huawei), Irena Berezovsky (Midokura) and Mike Kolesnik (Red Hat) revised the API for REST consistency during the first day, providing an amendment to the original spec [12], the API extension and the service plugin [13]. Concurrently, John Schwarz (Red Hat) worked on the API tests, which acted as validation of the work they were doing.

Ihar Hrachyshka (Red Hat) finished the DB models and submitted the first neutron versioned objects ever on top of them. I recommend reading those patches; they are like the nirvana of coding ;).

Mike Kolesnik plugged in the missing callbacks for extending networks and ports. Some of those, extending object reads, will be moved to a new neutron.callbacks interface.

I mostly worked on coordination and on writing some code for the generic RPC callbacks [5] to be used with versioned objects, with lots of help from Eran and Moshe Levi (Mellanox). The current version is very basic, supporting only initial retrieval of the resources rather than object updates, hence not a real callback 😉 (yet!).

Eran wrote a pluggable driver backend interface for the service [6], with a default RPC/messaging backend, which fit very nicely.

Gal Sagie (Huawei) and Moshe Levi worked at the agent level. Gal created the QoS OvS library with the ability to manipulate queues, configure the limits, and attach those queues to ports [7]. Moshe led the agent design, providing an interface for dynamic agent extensions [8], a QoS agent extension interface [9], and the example for SR-IOV [10]. Gal then coded the OvS QoS extension driver [11].

During the last day, we tried to put all the pieces together. John was debugging API->SVC->versioned objects->DB (you’d be amazed if you saw him going through vim or ipdb at high speed), Ihar was polishing the models and versioned objects, Mike was polishing the callbacks, and I was tying together the agent side. We were not able to fully assemble a POC in the end, but we were able to drive the neutron client through the server across all the layers. The agent side was looking good too, but I managed to destroy the environment I was using, so I will be working on it next week.

The plan ahead

We need to assemble the basic POC, make a checklist of missing tests and TODO(QoS) items, and start enforcing full testing for any other non-POC-essential patch. That is done as I write this, so we may be ready to merge back towards the end of liberty-2, or the very start of the next milestone: liberty-3. Since QoS is designed as a separate service, most of the pieces won’t be activated unless explicitly installed, so the risk of breaking anything for anyone not using QoS is very low.

What can be done better

Better coordination in general: I’m not awesome at that, but I had the whole picture of the service, so that’s what I did.

Better coordination with remotes: it’s hard when you have a lot of ongoing local discussions and very limited time to sprint. I’m looking forward to finding formulas to enhance that part.


In my opinion, the mid-cycle coding sprint was very positive: the ability to meet every day, do fast cross-reviews, and very quickly loop in specific people on specific topics was very productive.

I guess remote coding sprints could be very productive too, as long as companies guarantee people’s ability to focus on the specific topic; that said, the face-to-face part is always very valuable.

I was able to learn a lot from the other participants about specific parts of neutron I wasn’t fully aware of, and by building a service plugin we all got an understanding of full-stack development: from API request, to database, messaging (or not), agents, and how it all fits together.

Special thanks to Gary Kotton for joining us the first day to understand our plan, and for helping us later with reviews towards merging patches on the branch. To Livnat Peer, for organizing the event within Red Hat and making sure we prioritized everything correctly. And to Doug Wiegley and Kyle Mestery for helping us with rebases from master to the feature branch to clean up gate bugs on time.




[3] Versioned objects 1/2:

[4] Versioned objects 2/2:

[5] Generic RPC callbacks:

[6] Pluggable driver backend:

[7] OVS Low level (ovsdb):

[8] Agent extensions:

[9] QoS agent extension:

[10] SRIOV agent extension:

[11] OvS QoS extension:

[12] API amendment:

[13] SVC and extension amendment:


Using multiple external networks in OpenStack Neutron

This document talks about the reference implementation.

Starting with the Icehouse release, a single neutron network node using ML2+OVS or OVS can handle several external networks. I haven’t found much documentation about it, but here’s how to do it, assuming the following: you start from a single external network, which is connected to ‘br-ex’, and you want to attach the new external network to ‘eth1’.

On the network node (where neutron-l3-agent, neutron-dhcp-agent, etc. run), create a second OVS bridge, which will provide connectivity to the new external network:

ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth1
ip link set eth1 up
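Before going on, you can sanity-check the new bridge with `ovs-vsctl` (run on the network node; output will vary with your setup):

```shell
# Confirm the bridge was created and that eth1 is attached to it
ovs-vsctl br-exists br-eth1 && echo "br-eth1 present"
ovs-vsctl list-ports br-eth1    # should list eth1
```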

(Optionally) If you want to plug a virtual interface into this bridge and add a local IP on the node to this network for testing:

ovs-vsctl add-port br-eth1 vi1 -- set Interface vi1 type=internal
ip link set vi1 up
ip addr add <local-ip>/<prefix> dev vi1

Edit your /etc/neutron/l3_agent.ini and set/change the following (both values are intentionally left empty):

gateway_external_network_id =
external_network_bridge =

This change tells the l3 agent that it must rely on the physnet<->bridge mappings in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini; it will automatically patch those bridges and router interfaces around. For example, in tunneling mode it will patch br-int to the external bridges, and set the external qrouter interfaces on br-int.

Edit your /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini to map ‘logical physical nets’ to ‘external bridges’:

bridge_mappings = physnet1:br-ex,physnet2:br-eth1
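For reference, the bridge_mappings value is just a comma-separated list of physnet:bridge pairs; this little sketch (plain bash, not neutron code) shows how the value above breaks down:

```shell
# Split a bridge_mappings value into physnet -> bridge pairs
mappings="physnet1:br-ex,physnet2:br-eth1"
IFS=',' read -ra pairs <<< "$mappings"
for pair in "${pairs[@]}"; do
  physnet="${pair%%:*}"   # part before the colon
  bridge="${pair#*:}"     # part after the colon
  echo "$physnet -> $bridge"
done
# prints "physnet1 -> br-ex" then "physnet2 -> br-eth1"
```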

Restart your neutron-l3-agent and your neutron-openvswitch-agent. Then create the two external networks as flat provider networks:
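On a systemd-based distribution, the restart might look like this (unit names vary by packaging, so adjust to yours):

```shell
# Restart the agents so they pick up the new bridge mappings
systemctl restart neutron-openvswitch-agent
systemctl restart neutron-l3-agent
```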

neutron net-create ext_net \
    --provider:network_type flat \
    --provider:physical_network physnet1 \
    --router:external=True

neutron net-create ext_net2 \
    --provider:network_type flat \
    --provider:physical_network physnet2 \
    --router:external=True

And for example create a couple of internal subnets and routers:


# for the first external net
neutron subnet-create ext_net <cidr> \
    --gateway <gateway-ip> \
    --allocation-pool start=<first-ip>,end=<last-ip> \
    --enable_dhcp=False

# here the allocation pool goes explicit: all the available IPs
neutron router-create router1
neutron router-gateway-set router1 ext_net
neutron net-create privnet
neutron subnet-create privnet <cidr> \
    --gateway <gateway-ip> \
    --name privnet_subnet
neutron router-interface-add router1 privnet_subnet

# for the second external net
neutron subnet-create ext_net2 <cidr> \
    --allocation-pool start=<first-ip>,end=<last-ip> \
    --gateway=<gateway-ip> --enable_dhcp=False
neutron router-create router2
neutron router-gateway-set router2 ext_net2
neutron net-create privnet2
neutron subnet-create privnet2 <cidr> --gateway <gateway-ip> --name privnet2_subnet
neutron router-interface-add router2 privnet2_subnet