Sometimes you find yourself trying to debug a problem with SELinux, especially during software development or when packaging new software features. I have found this to happen quite often with neutron agents, as new system interactions are developed. Disabling SELinux during development is generally a bad idea, because you'll discover such problems later in time and under higher pressure (release deadlines). Here is a recipe, from Kashyap Chamarthy, to find out which rules are missing and generate a possible SELinux policy:

Make sure SELinux is enabled:

sudo su -
setenforce 1

Clear your audit log and, supposing the problem is in neutron-dhcp-agent, restart it:

 > /var/log/audit/audit.log
systemctl restart neutron-dhcp-agent

Wait for the problem to reproduce.

Inspect what you got, and generate a reference policy:

cat /var/log/audit/audit.log
cat /var/log/audit/audit.log | audit2allow -R
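For illustration, this is the kind of output audit2allow -R produces (the denial here is made up, not from a real neutron run): a require block, then either raw allow rules or, with -R, the matching reference policy interface calls:

```
require {
	type neutron_t;
}

#============= neutron_t ==============
# -R matched the raw rule "allow neutron_t sysfs_t:file { read open };"
# to a reference policy interface:
dev_read_sysfs(neutron_t)
```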

At that point, report a bug so those policies get incorporated upstream in advance. Give a good description of what's being blocked by the policies, and why it needs to be unblocked.

Meanwhile, you can generate an SELinux loadable module to move on without disabling SELinux entirely:

cat /var/log/audit/audit.log | audit2allow -M neutron

And install it at runtime:

semodule -i neutron.pp

Restart neutron-dhcp-agent (or re-trigger the problem to make sure it’s fixed)

systemctl restart neutron-dhcp-agent

We found during scalability tests that the security_group_rules_for_devices RPC, which is sent from neutron-server to the neutron L2 agents on port changes, grew quadratically with the number of ports.
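The growth is easy to model: when security group rules reference a remote group, the server expands each port's rules into one entry per group member, so every port's slice of the reply carries on the order of one entry per port. A toy model of mine (not the actual neutron code; all names are illustrative):

```python
def security_group_rules_for_devices(ports, rules_per_port=2):
    """Toy model: each of the agent's `ports` gets its remote-group
    rules expanded into one entry per group member (i.e. per port)."""
    reply = {}
    for port in range(ports):
        reply[port] = [(rule, member)
                       for rule in range(rules_per_port)
                       for member in range(ports)]
    return reply

# total entries in the reply grow with ports^2:
sizes = {n: sum(len(v) for v in security_group_rules_for_devices(n).values())
         for n in (10, 100, 1000)}
print(sizes)  # {10: 200, 100: 20000, 1000: 2000000}
```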

So we filed a spec for juno-3; the effort, led by shihanzhang and me, can be tracked here:


I have written a test and a little (dirty) benchmark (line 418) to check the results and make sure the new RPC actually performs better.

Here are the results:

Message size (Y) vs. number of ports (X) graph:

RPC execution time in seconds (Y) vs. number of ports (X):

Starting with the Icehouse release, a single neutron network node using ML2+OVS or OVS can handle several external networks. I haven't found much documentation about it, but basically, here's how to do it, assuming the following: you start from a single external network, which is connected to 'br-ex', and you want to attach the new external network to 'eth1'. On the network node (where neutron-l3-agent, neutron-dhcp-agent, etc. run), create a second OVS bridge, which will provide connectivity to the new external network:

ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth1
ip link set eth1 up

(Optional) If you want to plug a virtual interface into this bridge and add a local IP on the node to this network for testing:

ovs-vsctl add-port br-eth1 vi1 -- set Interface vi1 type=internal
ip addr add <ip/prefix> dev vi1

Edit your /etc/neutron/l3_agent.ini and set/change:

# both options are intentionally left empty: that is what tells the agent
# to handle multiple external networks
gateway_external_network_id =
external_network_bridge =

This change tells the l3 agent that it must rely on the physnet<->bridge mappings at /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini; it will automatically patch those bridges and router interfaces as needed. For example, in tunneling mode, it will patch br-int to the external bridges, and set the external 'qg-' router interfaces on br-int. Edit your /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini to map 'logical physical nets' to 'external bridges':

bridge_mappings = physnet1:br-ex,physnet2:br-eth1
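For reference, the agent turns that comma-separated option into a physnet -> bridge dict, roughly like this (a simplified sketch, not the agent's actual parsing code):

```python
def parse_bridge_mappings(value):
    """Parse 'physnet1:br-ex,physnet2:br-eth1' into {physnet: bridge}."""
    mappings = {}
    for pair in value.split(','):
        physnet, bridge = pair.strip().split(':')
        mappings[physnet] = bridge
    return mappings

print(parse_bridge_mappings("physnet1:br-ex,physnet2:br-eth1"))
# -> {'physnet1': 'br-ex', 'physnet2': 'br-eth1'}
```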

Restart your neutron-l3-agent and your neutron-openvswitch-agent

service neutron-l3-agent restart
service neutron-openvswitch-agent restart

At this point, you can create two external networks (please note: if you don't make the l3_agent.ini changes, the l3 agent will start complaining and will refuse to work):

neutron net-create ext_net --provider:network_type flat \
                           --provider:physical_network physnet1 \
                           --router:external=True

neutron net-create ext_net2 --provider:network_type flat \
                            --provider:physical_network physnet2 \
                            --router:external=True

And for example create a couple of internal subnets and routers:

# for the first external net
neutron subnet-create ext_net <ext-cidr> --gateway <ext-gateway-ip> \
         --allocation-pool start=<first-ip>,end=<last-ip> --enable_dhcp=False
# here the allocation pool goes explicit: all the available IPs..
neutron router-create router1
neutron router-gateway-set router1 ext_net
neutron net-create privnet
neutron subnet-create privnet <priv-cidr> --gateway <priv-gateway-ip> \
                 --name privnet_subnet
neutron router-interface-add router1 privnet_subnet

# for the second external net
neutron subnet-create ext_net2 <ext2-cidr> --allocation-pool start=<first-ip>,end=<last-ip> \
         --gateway=<ext2-gateway-ip> --enable_dhcp=False
neutron router-create router2
neutron router-gateway-set router2 ext_net2
neutron net-create privnet2
neutron subnet-create privnet2 <priv2-cidr> --gateway <priv2-gateway-ip> --name privnet2_subnet
neutron router-interface-add router2 privnet2_subnet

There are commodity data center operators out there, like OVH or Hetzner, who provide you with single floating IPs or RIPE blocks routed directly to your machine. That means your machine usually has AAA.BBB.CCC.DDD/24 (the primary address), with the default router at AAA.BBB.CCC.254, but this router will also hand traffic for WWW.XXX.YYY.ZZZ/29 or EEE.FFF.GGG.HHH/32 (floating RIPE blocks or addresses) directly to your eth MAC. If you have a virtual machine inside your host, you need to set up routing like this:

ifconfig eth0 EEE.FFF.GGG.HHH netmask 255.255.255.255 up
route add AAA.BBB.CCC.254 dev eth0
route add default gw AAA.BBB.CCC.254 dev eth0

(If you use RHEL or CentOS in a VM, you will need to use the route-eth0 script to set this up.) As an extra (at least for OVH), they provide an API to set up your .254 router to connect your floating IPs to virtual MAC addresses, but… you cannot choose the MAC address; they will give you a random one. When trying to use this together with OpenStack / neutron-server, it won't work out of the box. Loïc Dachary describes a solution here: Fragmented floating IP pools and multiple AS hack. His solution solves two problems: disperse floating IP pools, plus the MAC translation. But it adds complexity (you have an extra layer of virtual IPs that you see within neutron, and then you have to manually keep a correlation table of external IP:internal IP in the network node… etc). Another solution:

warning: this is not tested, but an idea for implementation

He sparked an idea in my mind about how to handle it without double IP translation (though I'm not fixing the multiple floating IP blocks problem). My solution (which I have yet to try) is adding another bridge in the middle (a classic Linux bridge) before the external eth… and then using MAC DNAT, for every virtual MAC address in neutron:

ebtables -t nat -A PREROUTING -p IPv4 -i eth0 -d 00:11:22:33:44:55 \
         --ip-dst <floating-ip> -j dnat --to-destination 54:44:33:22:11:00

plus the output counterpart rule (TBD), considering:

  1. 00:11:22:33:44:55 : the general MAC address of the external interface
  2. 54:44:33:22:11:00 : the internal, randomly generated neutron MAC address
  3. <floating-ip> : the floating IP address connected to 54:44:33:22:11:00 in neutron

Of course, we need a little tool that learns from neutron-server which virtual MAC addresses we are using…
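Such a tool could start as simple as this sketch (hypothetical and untested; the function names, example addresses, and the source of the floating IP -> MAC pairs are all assumptions — in practice they would come from the neutron API):

```python
EXT_MAC = "00:11:22:33:44:55"  # the external interface's MAC (example value)

def mac_dnat_rules(fip_to_mac, ext_mac=EXT_MAC, dev="eth0"):
    """Emit one ebtables MAC-DNAT rule per floating IP -> neutron MAC pair."""
    return [
        "ebtables -t nat -A PREROUTING -p IPv4 -i %s -d %s "
        "--ip-dst %s -j dnat --to-destination %s"
        % (dev, ext_mac, fip, mac)
        for fip, mac in sorted(fip_to_mac.items())
    ]

# 203.0.113.10 is a documentation address, standing in for a real floating IP
for rule in mac_dnat_rules({"203.0.113.10": "54:44:33:22:11:00"}):
    print(rule)
```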

Yet another solution (sounds way better): adding OpenFlow rules to br-ex (or an extra br-ovh bridge) to handle incoming and outgoing traffic and do this MAC translation based on OpenFlow tables. This would allow us to learn the MAC DNAT counterpart for the input table from any outgoing traffic. See Table 10 & Table 20 for the OpenFlow configuration in Open vSwitch/neutron in Assaf Muller's blog: GRE Tunnels in OpenStack Neutron.

Cheers, Miguel Ángel Ajo

A few years ago, I built a little embedded switch for one of my clients; they needed something tiny, with VLAN support, that coupled to their hardware and was flexible enough for their needs.

They didn’t need any management interface, as they used their own hardware to reconfigure the switch EEPROM. After some searching, we found a tiny jewel, impressive for just $3: the RTL8305SC, from the Realtek family. It allowed up to 16 VLANs (more than enough for their application) and also had nice features like loop detection and broadcast storm control. VLAN switching was handled by a simple 16-entry table: a VID plus a bitmask mapping which switch Ethernet ports were connected to that VLAN. In 802.1Q-aware mode, ports could receive both untagged and tagged frames; the untagged ones were assigned to the table index configured for the ingress port.

For tagged ones, the table was searched and the portmask checked:
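The lookup can be modeled in a few lines (a toy model of mine, not datasheet code; the example VIDs and masks are made up):

```python
# 16-entry VLAN table: VID -> bitmask of member ports (5-port switch here)
vlan_table = {10: 0b00011, 20: 0b11100}

def may_forward(vid, out_port):
    """A tagged frame is forwarded out a port only if that port's bit is
    set in the VLAN's portmask; unknown VIDs match nothing."""
    return bool(vlan_table.get(vid, 0) & (1 << out_port))

print(may_forward(10, 1))  # True: port 1 belongs to VLAN 10
print(may_forward(10, 4))  # False: port 4 is only on VLAN 20
```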

Of course, before forwarding any packet out, the packet's dst MAC is checked against a 1024-entry lookup table, to verify that the dst MAC is on the output port, or that it's a broadcast address. For inbound packets, the switch was able to pre-filter in 4 different ways:

  1. Do nothing, just pass the packets to the switching engine. (default)
  2. Insert input port VLAN tags for non-tagged packets, and keep the tagged ones VID.
  3. Remove VLAN tags from all packets.
  4. Remove incoming VLAN tag, and add the port configured VLAN tag.

Talking about loop protection mechanisms:

Although it didn’t have loop removal capabilities (via STP or friends), the loop detection was enough for the project, and it was rather simple: the switch sends a broadcast message every 3-5 minutes, with Ethertype 0x8899 (RRCP protocol), payload = 0x03, and src MAC = the switch’s MAC. If the packet comes back into any port with the same src MAC, a loop detection signal is triggered (for an LED or controller to detect). The switch features weren’t awesome, but it was quite a lot for the cost, and only 0.3 to 1.2 W of power consumption. This switch chip has since been replaced by the RTL8306SD, more powerful and still cheap.
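The probe can be sketched as a raw Ethernet frame check (my reconstruction from the description above; the switch MAC is an example value, and real hardware does this in silicon, not software):

```python
SWITCH_MAC = bytes.fromhex("020000000001")  # example locally administered MAC
RRCP_ETHERTYPE = 0x8899

def build_loop_probe():
    """Broadcast frame: dst ff:ff:ff:ff:ff:ff, src = switch MAC,
    Ethertype 0x8899, payload 0x03 (per the RTL8305SC loop detection)."""
    return b"\xff" * 6 + SWITCH_MAC + RRCP_ETHERTYPE.to_bytes(2, "big") + b"\x03"

def loop_detected(rx_frame):
    # seeing our own probe come back on any port means there is a loop
    return (rx_frame[6:12] == SWITCH_MAC
            and int.from_bytes(rx_frame[12:14], "big") == RRCP_ETHERTYPE
            and rx_frame[14:15] == b"\x03")

print(loop_detected(build_loop_probe()))  # True
```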