Starting with the Icehouse release, a single neutron network node using ML2+OVS (or the OVS plugin) can handle several external networks. I haven’t found much documentation about it, but basically, here’s how to do it, assuming the following: you start from a single external network, which is connected to br-ex, and you want to attach the new external network to eth1.

On the network node (where neutron-l3-agent, neutron-dhcp-agent, etc. run), create a second OVS bridge to provide connectivity to the new external network:

ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth1
ip link set eth1 up

Optionally, if you want to plug a virtual interface into this bridge and give the node a local IP on this network for testing:

ovs-vsctl add-port br-eth1 vi1 -- set Interface vi1 type=internal
ip addr add <local-ip>/<prefix> dev vi1
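You can then check connectivity to the new external network from the node itself, for example by pinging its upstream gateway (the address here is a placeholder):

ping -c 3 -I vi1 <ext2-gateway-ip>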

Edit your /etc/neutron/l3_agent.ini, and set (or change) the following options, leaving both values empty:

gateway_external_network_id =
external_network_bridge =

This change tells the l3 agent that it must rely on the physnet<->bridge mappings in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini; it will automatically patch those bridges and router interfaces around. For example, in tunneling mode, it will patch br-int to the external bridges, and set the external qrouter interfaces on br-int. Edit your /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini to map logical physical networks to external bridges:

bridge_mappings = physnet1:br-ex,physnet2:br-eth1

Restart your neutron-l3-agent and your neutron-openvswitch-agent:

service neutron-l3-agent restart
service neutron-openvswitch-agent restart
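As a quick sanity check (hedged: these are the port names the OVS agent conventionally creates for a bridge named br-eth1), you can verify that the agent wired the new bridge to br-int:

ovs-vsctl list-ports br-eth1   # should show eth1 and phy-br-eth1
ovs-vsctl list-ports br-int | grep int-br-eth1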

At this point, you can create the two external networks (please note: if you don’t make the l3_agent.ini changes, the l3 agent will start complaining and will refuse to work):

neutron net-create ext_net --provider:network_type flat \
                           --provider:physical_network physnet1 \
                           --router:external=True

neutron net-create ext_net2 --provider:network_type flat \
                            --provider:physical_network physnet2 \
                            --router:external=True

And for example create a couple of internal subnets and routers:

# for the first external net
neutron subnet-create ext_net <ext-cidr> --gateway <ext-gateway-ip> \
        --allocation-pool start=<first-ip>,end=<last-ip>
# here the allocation pool goes explicit: all the available IPs
neutron router-create router1
neutron router-gateway-set router1 ext_net
neutron net-create privnet
neutron subnet-create privnet <priv-cidr> --gateway <priv-gateway-ip> \
                 --name privnet_subnet
neutron router-interface-add router1 privnet_subnet

# for the second external net
neutron subnet-create ext_net2 <ext2-cidr> --allocation-pool start=<first-ip>,end=<last-ip> \
         --gateway=<ext2-gateway-ip> --enable_dhcp=False
neutron router-create router2
neutron router-gateway-set router2 ext_net2
neutron net-create privnet2
neutron subnet-create privnet2 <priv2-cidr> --gateway <priv2-gateway-ip> --name privnet2_subnet
neutron router-interface-add router2 privnet2_subnet
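From this point, instances behind each router can take floating IPs from the matching external network; for example (where <port-id> would be the neutron port of an instance on privnet2):

neutron floatingip-create ext_net2
neutron floatingip-associate <floatingip-id> <port-id>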

There are commodity data center operators out there, like OVH or Hetzner, who provide you with single floating IPs or RIPE blocks routed directly to your machine. That means your machine usually sits in AAA.BBB.CCC.DDD/24 (the primary address), with the default router at AAA.BBB.CCC.254, but this router will also hand traffic for WWX.XXX.YYY.ZZZ/29 or EEE.FFF.GGG.HHH/32 (floating RIPE blocks or addresses) directly to your interface’s MAC. So if you have a virtual machine inside your host that should own one of those floating addresses, you need to set up routing like this:

ifconfig eth0 EEE.FFF.GGG.HHH netmask 255.255.255.255 up
route add AAA.BBB.CCC.254 dev eth0
route add default gw AAA.BBB.CCC.254 dev eth0
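If you use RHEL or CentOS in the VM, you will need the route-eth0 script to set this up persistently; a minimal sketch of what it could contain:

# /etc/sysconfig/network-scripts/route-eth0
AAA.BBB.CCC.254/32 dev eth0
default via AAA.BBB.CCC.254 dev eth0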

As an extra (at least for OVH), they provide an API to configure your .254 router to connect your floating IPs to virtual MAC addresses, but you cannot choose the MAC address: they give you a random one. When trying to use this together with OpenStack / neutron, it won’t work out of the box.

Loïc Dachary describes a solution here: Fragmented floating IP pools and multiple AS hack. His solution addresses two problems: dispersed floating IP pools, plus the MAC translation. But it adds complexity: you get an extra layer of virtual IPs that you see within neutron, and then you have to manually keep a correlation table (external IP : internal IP, etc.) on the network node.

Another solution

Warning: this is not tested; it’s just an idea for an implementation.

He sparked an idea in my mind about how to handle it without double IP translation (though this doesn’t fix the multiple floating IP blocks problem). My solution (which I have yet to try) is to add another bridge in the middle (a classic Linux bridge) before the external eth, and then use MAC DNAT. For every virtual MAC address in neutron:

ebtables -t nat -A PREROUTING -i eth0 -p IPv4 -d 00:11:22:33:44:55 --ip-dst <floating-ip> \
         -j dnat --to-destination 54:44:33:22:11:00

plus the output counterpart rule (TBD), considering:

00:11:22:33:44:55 : the external interface’s general MAC address
54:44:33:22:11:00 : the internal, randomly generated neutron MAC address
<floating-ip>     : the floating IP address connected to 54:44:33:22:11:00 in neutron
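A possible shape for that output counterpart (an untested sketch, reusing the addresses above) would be to rewrite the source MAC of frames leaving the neutron side, so the datacenter router always sees the interface’s real MAC:

ebtables -t nat -A POSTROUTING -o eth0 -s 54:44:33:22:11:00 \
         -j snat --to-source 00:11:22:33:44:55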

Of course, we’d need a little tool that learns from neutron-server which virtual MAC addresses we are using…
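As a rough sketch of where such a tool could get its data (hedged: just one possible approach, using the standard neutron CLI), it could periodically dump the neutron-generated gateway MACs and floating IPs, then regenerate the ebtables rules:

# MAC addresses neutron generated for router gateway ports
neutron port-list -c mac_address -c fixed_ips -- --device_owner=network:router_gateway
# floating IPs that would each need a MAC DNAT entry
neutron floatingip-list -c floating_ip_address -c fixed_ip_address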

Yet another solution (sounds way better): adding OpenFlow rules to br-ex (or an extra br-ovh bridge) to handle incoming and outgoing traffic and do this MAC translation based on OpenFlow tables. This would allow us to learn the MAC DNAT counterpart for the input table from any outgoing traffic. See Table 10 & Table 20 for the OpenFlow configuration in openvswitch/neutron in Assaf Muller’s blog: GRE Tunnels in OpenStack Neutron.
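For illustration only (untested; in_port, the MACs and <floating-ip> are assumptions), a static version of that translation could be just two flows on br-ex:

# incoming: rewrite the provider-facing MAC to the neutron router's MAC
ovs-ofctl add-flow br-ex "priority=10,ip,in_port=1,nw_dst=<floating-ip>,actions=mod_dl_dst:54:44:33:22:11:00,NORMAL"
# outgoing: present the expected MAC to the datacenter router
ovs-ofctl add-flow br-ex "priority=10,ip,nw_src=<floating-ip>,actions=mod_dl_src:00:11:22:33:44:55,NORMAL"

The learning variant would replace the static incoming rule with an OVS learn() action installed from outgoing traffic.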

Cheers, Miguel Ángel Ajo

A few years ago, I built a little embedded switch for one of my clients. They needed something tiny, with VLAN support, that coupled to their hardware and was flexible enough for their needs.

They didn’t need any management interface, as they used their own hardware to reconfigure the switch EEPROM. After some searching, we found a tiny jewel, impressive for just $3: the RTL8305SC, from the Realtek family. It allowed up to 16 VLANs (more than enough for their application), and it also had nice features like loop detection and broadcast storm control. VLAN switching was handled by a simple 16-entry table: a VID plus a bitmask mapping which switch ethernet ports belonged to that VLAN. In 802.1Q-aware mode, ports could receive untagged and tagged frames; the untagged ones were assigned to the VLAN table index configured for their ingress port.

For tagged frames, the table was searched by VID, and the port mask was checked.

Of course, before forwarding any packet out, the destination MAC was checked against a 1024-entry lookup table, to verify that the destination MAC was reachable through the output port or that it was a broadcast address. For inbound packets, the switch was able to pre-filter in 4 different ways:

  1. Do nothing, just pass the packets to the switching engine. (default)
  2. Insert the input port’s VLAN tag into non-tagged packets, and keep the VID of tagged ones.
  3. Remove VLAN tags from all packets.
  4. Remove the incoming VLAN tag, and add the port’s configured VLAN tag.

Talking about loop protection mechanisms:

Although it didn’t have loop removal capabilities (via the STP protocol or friends), the loop detection was enough for the project, and it was rather simple: the switch sends a broadcast message every 3-5 minutes, with Ethertype 0x8899 (the RRCP protocol), payload = 0x03, and src MAC = the switch’s MAC. If that packet comes back into any port with the same src MAC, a loop detection signal is triggered (for an LED or a controller to pick up). The switch’s features weren’t awesome, but they were quite a lot for the cost, with only 0.3 to 1.2W of power consumption. This switch chip has since been replaced by the RTL8306SD, more powerful, and still cheap.

Brian Sidebotham and Dick Hollenbeck just did it!

Long story short: a few months ago we ran into trouble as soon as we discovered that the standard Python binaries for win32 were MSVC-built, and that they had runtime discrepancies with modules or executables built with mingw (wxPython support), which always ended execution with a segmentation fault.

At that time I was getting “a little” busy, but Dick & Brian kept working. They discovered that Python has no support for building outside MSVC on Windows (really, it doesn’t), so Dick started the python-a-mingw-us project, which cmake-compiles Python 2.7.4 and builds a set of win32 binary installers of Python 2.7.4, fully mingw runtime compatible. After that, Brian kept working on his KiCad Winbuilder; yesterday he announced that he got it fully working: wxPython, scripting, everything.

After 3 years of hard work from the team (software, hardware, art and media, mechanical design, finance…), the F Home Studio product is now selling.

It’s the most impressive product I’ve worked on so far: designed from the ground up, from hardware to software, to be a tool oriented to professional musicians, with high quality sound and zero recording latency.

The product’s software is built with C/C++, VHDL, Qt and embedded Linux. I had to write a few drivers: for the screen, for FPGA access, and for the secondary DSP that drives the real-time audio processing. It’s awesome to see it all working together, and to see how much musicians like it.