HA and Distributed are beautiful words, but complex ones. Within a few seconds, MAC addresses flap in the switch. It is for a good cause, but it annoys the admin, who runs to disable port-flapping detection in the switch, and then breathes.

I guess you can picture the situation, which I have seen happen a few times. We already need to disable port-flapping detection for L3HA to work, so it's not a big deal.

In the next few pages I explain how OVN/L3 works over VLAN once Anil's patches are in place, and I review the life of a couple of ICMP packets through the network.

The network


As the diagram shows, the example network is composed of:

  • 4 Chassis:

    • Gateway Node 1
    • Gateway Node 2
    • Compute Node A
    • Compute Node B
  • 3 Physical networks:

    • Interface 1: VLANs provider network: the network for logical switch traffic; each logical switch will have its own VLAN ID.
    • Interface 2: The overlay network: although it's not used for carrying traffic, we still rely on BFD monitoring over the overlay network for L3HA purposes (deciding on the master/backup state).
    • Interface 3: Internet/Provider Network: Our external network.
  • 2 Logical Switches (or virtual networks):
    • A with CIDR and a localnet port to the VLAN provider network, tag: 2011
    • B with CIDR and a localnet port to the VLAN provider network, tag: 2010
    • C irrelevant
  • 1 Logical Router:
    • R1 which has three logical router ports:
      • On LS A
      • On LS B
      • On external network

The journey of the packet

In the next lines I track the small ICMP echo packet on its journey through the different network elements. You can see a detailed route plan in Packet 1 ovn-trace.

The inception

packet inception in VM1

The packet is created inside VM1, which has a virtual interface with MAC address fa:16:3e:16:07:92, and a default route via the router (fa:16:3e:7e:d6:7e) for anything outside its subnet.
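The forwarding decision VM1 makes can be sketched in a few lines. This is an illustrative model only; the subnet and VM4's IP below are made up, since the post's original addresses were lost:

```python
import ipaddress

# Hypothetical addressing: the post's IPs were elided, so these are examples.
vm1_net = ipaddress.ip_network("192.168.2.0/24")   # VM1's subnet (LS A)
gateway_mac = "fa:16:3e:7e:d6:7e"                  # R1's leg on LS A (from the post)
vm4_ip = ipaddress.ip_address("192.168.1.4")       # VM4 lives on another subnet (LS B)

def next_hop_mac(dst_ip, arp_table):
    """On-link destinations resolve directly via ARP; everything
    else is sent to the default gateway's MAC."""
    if dst_ip in vm1_net:
        return arp_table[dst_ip]
    return gateway_mac

print(next_hop_mac(vm4_ip, {}))  # fa:16:3e:7e:d6:7e -> the router's MAC
```

Because VM4 is off-subnet, the ICMP request leaves VM1 with the router's MAC as eth.dst, which is exactly the packet the next section picks up.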

On its way out of VM1

packet on its way out Compute Node A

As the packet is handled by the br-int OpenFlow rules for the logical router pipeline, the source MAC address is replaced with that of the router's logical port on logical switch B, and the destination MAC is replaced with VM4's port MAC. Afterwards, the VLAN tag of the destination network (logical switch B) is attached to the packet.
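The rewrite can be modeled as a pure function over the packet's header fields. This is a simplification for illustration, not the actual OpenFlow actions; the MACs are the ones from the post's example:

```python
# MACs taken from the post's example network.
R1_MAC_ON_B = "fa:16:3e:65:f2:ae"   # R1's leg into logical switch B
VM4_MAC = "fa:16:3e:75:ca:89"       # VM4's port MAC
LS_B_VLAN = 2010                    # localnet tag of logical switch B

def route_to_ls_b(pkt):
    """Model what the logical-router pipeline does before the packet
    leaves through the localnet port: rewrite both MACs, decrement
    the TTL, then tag the frame with the destination network's VLAN."""
    out = dict(pkt)
    out["eth_src"] = R1_MAC_ON_B
    out["eth_dst"] = VM4_MAC
    out["ttl"] = pkt["ttl"] - 1
    out["vlan"] = LS_B_VLAN
    return out

pkt = {"eth_src": "fa:16:3e:16:07:92",  # VM1
       "eth_dst": "fa:16:3e:7e:d6:7e",  # R1's leg on LS A
       "ttl": 32, "vlan": None}
routed = route_to_ls_b(pkt)
print(routed["vlan"], routed["eth_src"])  # 2010 fa:16:3e:65:f2:ae
```

Note that the frame hits the wire already tagged 2010 and sourced from R1's MAC on LS B, which is what the ToR switch learns next.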

The physical switch

The packet leaves the virtual switch br-phy, through interface 1, reaching the Top of Rack switch.

packet 3

The ToR switch CAM table is updated for (2010, fa:16:3e:65:f2:ae) on port 1, which is R1's leg into virtual network B (logical switch B).

VLAN ID  MAC                port  age
2010     fa:16:3e:65:f2:ae  1     0
2011     fa:16:3e:16:07:92  1     12
2010     fa:16:3e:75:ca:89  9     10

Going into VM4

As the packet arrives at the hypervisor, the VLAN tag is stripped, and the packet is directed to VM4's tap.

packet 4

The return

VM4 receives the ICMP request and responds to it with an ICMP echo reply. The new packet is directed to R1’s MAC and VM1’s IP address.

packet 5

On its way out of VM4

As the packet is handled by the br-int OpenFlow rules for the logical router pipeline, the source MAC address is replaced with that of the router's logical port on logical switch A, and the destination MAC is replaced with VM1's port MAC.

packet 6

Afterwards, the VLAN tag of the destination network (logical switch A) is attached to the packet.

The physical switch (on its way back)

The packet leaves the virtual switch br-phy, through interface 9, reaching the Top of Rack Switch.

packet 7

The ToR switch CAM table is updated for 2011 fa:16:3e:7e:d6:7e on port 9 which is R1’s leg into virtual network A (logical switch A).

VLAN ID  MAC                port  age
2010     fa:16:3e:65:f2:ae  1     1
2011     fa:16:3e:16:07:92  1     12
2011     fa:16:3e:7e:d6:7e  9     0
2010     fa:16:3e:75:ca:89  9     10

The end

By the end of its journey, the ICMP packet crosses br-phy, where the OpenFlow rules strip the VLAN tag at the localnet port into LS A and direct the packet to VM1, as eth.dst matches VM1's MAC address.

packet 8

VM1 receives the packet normally, coming from VM4, through our virtual R1 (fa:16:3e:7e:d6:7e).

The end? Oh no

We need to explore the case where we have ongoing communication from VM6 to VM3, and from VM1 to VM4. Both are East/West flows, and together they make R1's MAC addresses flap in the ToR switch CAM table.

packet 8
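To make the flapping concrete, here is a toy CAM-table model, keyed by VLAN + MAC as described in the glossary. Two simultaneous E/W flows routed on different chassis keep relearning R1's MAC on different switch ports; the traffic pattern is illustrative, the MAC and ports come from the post's example:

```python
# Toy model of a switch CAM table: (vlan, mac) -> port.
cam = {}
moves = []

def learn(vlan, src_mac, port):
    """Learn the source MAC on the ingress port, recording MAC moves
    (the events a real switch flags as port flapping)."""
    old = cam.get((vlan, src_mac))
    if old is not None and old != port:
        moves.append((vlan, src_mac, old, port))
    cam[(vlan, src_mac)] = port

R1_ON_B = "fa:16:3e:65:f2:ae"  # R1's leg into LS B (VLAN 2010)

# VM1->VM4: the R1 pipeline runs on Compute Node A (switch port 1).
# VM6->VM3: the R1 pipeline runs on Compute Node B (switch port 9).
# Interleaved frames make the same (vlan, mac) entry bounce between ports:
for _ in range(3):
    learn(2010, R1_ON_B, 1)   # frame routed on node A
    learn(2010, R1_ON_B, 9)   # frame routed on node B

print(len(moves))  # 5 -> the entry moved on almost every frame
```

This is exactly the pattern that trips the switch's port-flapping detection mentioned at the top of the post.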


Packet 1 ovn-trace

$ ovn-trace --detailed neutron-0901bce9-c812-4fab-9844-f8ac1cdee066 'inport == "port-net2" && eth.src == fa:16:3e:16:07:92 && eth.dst==fa:16:3e:7e:d6:7e && ip4.src== && ip4.dst== && ip.ttl==32'
# ip,reg14=0x4,vlan_tci=0x0000,dl_src=fa:16:3e:16:07:92,dl_dst=fa:16:3e:7e:d6:7e,nw_src=,nw_dst=,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=32

ingress(dp="net2", inport="port-net2")
 0. ls_in_port_sec_l2 (ovn-northd.c:3847): inport == "port-net2" && eth.src == {fa:16:3e:16:07:92}, priority 50, uuid 72657159
 1. ls_in_port_sec_ip (ovn-northd.c:2627): inport == "port-net2" && eth.src == fa:16:3e:16:07:92 && ip4.src == {}, priority 90, uuid 2bde621e
 3. ls_in_pre_acl (ovn-northd.c:2982): ip, priority 100, uuid 6a0c272e
    reg0[0] = 1;
 5. ls_in_pre_stateful (ovn-northd.c:3109): reg0[0] == 1, priority 100, uuid 00eac4fb

ct_next(ct_state=est|trk /* default (use --ct to customize) */)
 6. ls_in_acl (ovn-northd.c:3292): !ct.new && ct.est && !ct.rpl && ct_label.blocked == 0 && (inport == "port-net2" && ip4), priority 2002, uuid 25b34866
16. ls_in_l2_lkup (ovn-northd.c:4220): eth.dst == fa:16:3e:7e:d6:7e, priority 50, uuid 21005439
    outport = "575fb1";

egress(dp="net2", inport="port-net2", outport="575fb1")
 1. ls_out_pre_acl (ovn-northd.c:2938): ip && outport == "575fb1", priority 110, uuid 6d74b82c
 9. ls_out_port_sec_l2 (ovn-northd.c:4303): outport == "575fb1", priority 50, uuid d022b28d
    /* output to "575fb1", type "patch" */

ingress(dp="R1", inport="lrp-575fb1")
 0. lr_in_admission (ovn-northd.c:4871): eth.dst == fa:16:3e:7e:d6:7e && inport == "lrp-575fb1", priority 50, uuid 010fb48c
 7. lr_in_ip_routing (ovn-northd.c:4413): ip4.dst ==, priority 49, uuid 4da9c83a
    reg0 = ip4.dst;
    reg1 =;
    eth.src = fa:16:3e:65:f2:ae;
    outport = "lrp-db51e2";
    flags.loopback = 1;
 8. lr_in_arp_resolve (ovn-northd.c:6010): outport == "lrp-db51e2" && reg0 ==, priority 100, uuid 89c23f94
    eth.dst = fa:16:3e:76:ca:89;
10. lr_in_arp_request (ovn-northd.c:6188): 1, priority 0, uuid 94e042b9

egress(dp="R1", inport="lrp-575fb1", outport="lrp-db51e2")
 3. lr_out_delivery (ovn-northd.c:6216): outport == "lrp-db51e2", priority 100, uuid a127ea78
    /* output to "lrp-db51e2", type "patch" */

ingress(dp="net1", inport="db51e2")
 0. ls_in_port_sec_l2 (ovn-northd.c:3829): inport == "db51e2", priority 50, uuid 04b4900d
 3. ls_in_pre_acl (ovn-northd.c:2885): ip && inport == "db51e2", priority 110, uuid fe072d82
16. ls_in_l2_lkup (ovn-northd.c:4160): eth.dst == fa:16:3e:76:ca:89, priority 50, uuid 3a1af0d6
    outport = "a0d121";

egress(dp="net1", inport="db51e2", outport="a0d121")
 1. ls_out_pre_acl (ovn-northd.c:2933): ip, priority 100, uuid ffea7ed3
    reg0[0] = 1;
 2. ls_out_pre_stateful (ovn-northd.c:3054): reg0[0] == 1, priority 100, uuid 11c5e570

ct_next(ct_state=est|trk /* default (use --ct to customize) */)
 4. ls_out_acl (ovn-northd.c:3289): ct.est && ct_label.blocked == 0 && (outport == "a0d121" && ip), priority 2001, uuid f9826b44
[vagrant@hv1 devstack]$


  • E/W or East/West : This is the kind of traffic that traverses a router from one subnet to another subnet, going through two legs of a router.
  • N/S or North/South : Traffic that traverses the router into or from an external network. It is very similar to E/W, but it's a distinction we make, at least in the world of virtual networks, when the router has connectivity to an external network. In the case of OVN or OpenStack, it implies the use of DNAT and/or SNAT in the router, to translate internal addresses into external addresses and back.
  • L3HA : Highly available L3 service, which eliminates any single point of failure on the routing service of the virtual network.
  • ToR switch : Top of Rack switch, is the switch generally connected on the top of a rack to all the servers in such rack. It provides L2 connectivity.
  • CAM table : CAM means Content Addressable Memory, a specific type of memory that is accessed by "key" instead of by address. In switches, the MAC table is accessed by MAC + VLAN ID.

One of the simplest and least memory-hungry ways to deploy a tiny networking-ovn all-in-one is still to use packstack.

Note: this is an unsupported way of deployment (for the product version), but it should be just fine to give it a try. Afterwards, if you want to get serious and try something closer to production, please have a look at the TripleO deployment guide.

On a fresh CentOS 7, log in and run:

      sudo yum install -y "*-queens"
      sudo yum update -y
      sudo yum install -y openstack-packstack
      sudo yum install -y python-setuptools
      sudo packstack --allinone \
          --cinder-volume-name="aVolume" \
          --debug \
          --service-workers=2 \
          --default-password="packstack" \
          --os-aodh-install=n \
          --os-ceilometer-install=n \
          --os-swift-install=n \
          --os-manila-install=n \
          --os-horizon-ssl=y \
          --amqp-enable-ssl=y \
          --glance-backend=file \
          --os-neutron-l2-agent=ovn \
          --os-neutron-ml2-type-drivers="geneve,flat" \
          --os-neutron-ml2-tenant-network-types="geneve" \
          --provision-demo=y \
          --provision-tempest=y \
          --run-tempest=y \
          --run-tempest-tests="smoke dashboard"

When that has finished, you will have, as the root user, a set of keystonerc_admin and keystonerc_demo files, an example public and private network, a router, and a cirros image.

If you want to see how objects are mapped into OVN, give it a try:

$ sudo ovn-nbctl show
switch 3b31aaa0-eea0-462e-9cdc-866f2bd8171d (neutron-097345d1-3299-43d4-aeda-8af03516b92e) (aka public)
    port provnet-097345d1-3299-43d4-aeda-8af03516b92e
        type: localnet
        addresses: ["unknown"]
    port 38eda4cc-4931-4c48-bc83-6e1d72ccb90b
        type: router
        router-port: lrp-38eda4cc-4931-4c48-bc83-6e1d72ccb90b
switch f6555e37-305f-4342-87f1-f21de5adadc2 (neutron-c3959f48-caa2-4eb9-b217-19e70c2380cb) (aka private)
    port ee519f75-08a8-4559-bf44-8acb9b2cec1b
        type: router
        router-port: lrp-ee519f75-08a8-4559-bf44-8acb9b2cec1b
router 20efb278-0394-4330-a18d-9e40a56b3ae5 (neutron-e70a02f3-6758-4f78-bc50-133b6c6b0584) (aka router1)
    port lrp-ee519f75-08a8-4559-bf44-8acb9b2cec1b
        mac: "fa:16:3e:f6:c6:d3"
        networks: [""]
    port lrp-38eda4cc-4931-4c48-bc83-6e1d72ccb90b
        mac: "fa:16:3e:5f:41:cf"
        networks: [""]
        gateway chassis: [7a42645d-70b3-4286-8ddc-6240ccdd131c]
    nat a5a6bec0-7167-41af-8514-2764502fef21
        external ip: ""
        logical ip: ""
        type: "snat"

Every time I try to download a cirros image from cirros cloud for use in my OpenStack devel environment it’s super slow (100-200KB/s), so I’m making a local mirror of it here.

You can grab the files from here:

Or into your cloud with:

source ~/keystonerc_admin
pushd /tmp
curl https://ajo.es/binaries/cirros-0.4.0-x86_64-disk.img  > cirros-0.4.0-x86_64-disk.img
openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img \
                                --disk-format qcow2 --container-format bare

Or for i386:

source ~/keystonerc_admin
pushd /tmp
curl https://ajo.es/binaries/cirros-0.4.0-i386-disk.img  > cirros-0.4.0-i386-disk.img
openstack image create "cirros" --file cirros-0.4.0-i386-disk.img \
                                --disk-format qcow2 --container-format bare

TO-DO: fix this blog post to explain "--disable-snat", which does the same, and probably also supports FIPs (I need to check).

usage: neutron router-gateway-set [-h] [--request-format {json}]
                                  ROUTER EXTERNAL-NETWORK

Set the external network gateway for a router.

positional arguments:
  ROUTER                ID or name of the router.
  EXTERNAL-NETWORK      ID or name of the external network for the gateway.

optional arguments:
  -h, --help            show this help message and exit
  --request-format {json}
                        DEPRECATED! Only JSON request format is supported.
  --disable-snat        Disable source NAT on the router gateway.
  --fixed-ip subnet_id=SUBNET,ip_address=IP_ADDR
                        Desired IP and/or subnet on external network:
                        subnet_id=<name_or_id>,ip_address=<ip>. You can
                        specify both of subnet_id and ip_address or specify
                        one of them as well. You can repeat this option.

In this blog post I will explain how to connect private tenant networks to an external network without the use of NAT or DNAT (floating ips) via a neutron router.

With the following configuration you will have routers that don't do NAT on ingress or egress connections. You won't be able to use floating IPs either, at least in the configuration explained here; you could add a second external network and a gateway port, plus some static routes, to let the router steer traffic over each network.

This can be done with just one limitation: tenant networks connected in such a topology cannot have overlapping IPs. Also, the upstream router(s) must be informed about the internal networks so they can add routes to those networks themselves. That can be done manually, or automatically by periodically talking to the OpenStack API (checking the router interfaces, the subnets, etc.), but I'll skip that part in this blog post.
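As a hint of what the automatic variant could look like, this sketch turns a list of tenant CIDRs plus the router's external IP (data you would fetch from the Neutron API) into the static routes the upstream router needs. The addresses here are made up for illustration:

```python
import ipaddress

def upstream_routes(router_ext_ip, tenant_cidrs):
    """Build 'ip route' commands pointing each tenant network at the
    virtual router's external leg. Inputs are parsed with ipaddress
    as a sanity check before emitting commands."""
    gw = ipaddress.ip_address(router_ext_ip)
    cmds = []
    for cidr in tenant_cidrs:
        net = ipaddress.ip_network(cidr)
        cmds.append(f"ip route add {net} via {gw}")
    return cmds

# Hypothetical values: the post's real addresses were elided.
cmds = upstream_routes("172.24.4.10", ["192.168.10.0/24", "192.168.20.0/24"])
for c in cmds:
    print(c)
```

A cron job could regenerate and apply this list whenever router interfaces change; the hard part (diffing against existing routes) is left out here.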

We're going to assume that our external provider network is "public", with the subnet "public_subnet", and that the CIDR of such network is, with a gateway on. Please excuse me for not using the new openstack commands; I can make an updated post later if somebody is interested.

Step by step

Create the virtual router

source keystonerc_admin

neutron router-create router_test

# please note that you can manipulate static routes on that router if you need:
# neutron router-update --help
#  [--route destination=CIDR,nexthop=IP_ADDR | --no-routes]

Create the private network for our test, and add it to the router

neutron net-create private_test
PRIV_NET=$(neutron net-list | awk ' /private_test/ { print $2 }')

neutron subnet-create --name private_subnet $PRIV_NET
neutron router-interface-add router_test private_subnet

Create a security group, and add an instance

neutron security-group-create test
neutron security-group-rule-create test --direction ingress

nova boot --flavor m1.tiny --image cirros --nic net-id=$PRIV_NET \
          --security-group test cirros
sleep 10
nova console-log cirros

Please note that in the console-log you may find that everything went well… and that the instance has an address through DHCP.

Create a port for the router on the external network; do it manually so we can specify the exact address we want.

SUBNET_ID=$(neutron subnet-list | awk ' /public_subnet/ { print $2 }')
neutron port-create public --fixed-ip \
    subnet_id=$SUBNET_ID,ip_address= ext-router-port

Add it to the router

PORT_ID=$(neutron port-show ext-router-port | awk '/ id / { print $4 } ')
neutron router-interface-add router_test port=$PORT_ID

We can test it locally (assuming our host has access to

# test local...
ip route add via

# and assuming that our instance was created with

[root@server ~(keystone_admin)]# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=63 time=0.621 ms
64 bytes from icmp_seq=2 ttl=63 time=0.298 ms

Extra note: if you want to avoid issues with overlapping tenant network IPs, I recommend having a look at the subnet pool functionality of neutron, which appeared in Kilo.
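The idea behind subnet pools is easy to model: subnets are carved from a shared prefix, so allocations can never overlap. This is a toy illustration with a made-up pool, not the Neutron implementation:

```python
import ipaddress

# Toy allocator: Neutron's subnet pools guarantee this server-side.
pool = ipaddress.ip_network("10.0.0.0/16")
allocator = pool.subnets(new_prefix=24)   # generator of non-overlapping /24s

tenant_a = next(allocator)   # first tenant network
tenant_b = next(allocator)   # second tenant network

print(tenant_a, tenant_b)           # 10.0.0.0/24 10.0.1.0/24
print(tenant_a.overlaps(tenant_b))  # False: carved from the same pool, never overlapping
```

With non-overlapping tenant CIDRs, the routed (no-NAT) topology above works without address clashes on the upstream side.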

I hope you enjoyed it, and that this is helpful, let me know in the comments :)

After 5 years of using tumblr, and tumblr messing with all my bash and code snippets in indescribable ways, I have decided to move to jekyll-based blogging.

The theme is a very nice MIT licensed theme (HPSTR).

And you can find the source code to this blog here: https://github.com/mangelajo/ajo-es/

I went back from hosted at tumblr.com to hosted at home, proudly on an RDO OpenStack instance I have, which also serves other projects; the projects are managed in Ansible thanks to the openstack ansible modules.

A front instance based on my pet project nginx-master takes care of reverse proxying each domain/project into its VM/container/whatever, and of DKIM/SMTP configuration on DNS records and IP address changes.