Simplest openstack with networking-ovn deployment

One of the simplest and least memory-hungry ways to deploy a tiny networking-ovn all-in-one is still to use packstack.

Note: this is an unsupported way of deploying (for the product version), but it should be just fine to give it a try. Afterwards, if you want to get serious and try something closer to production, please have a look at the TripleO deployment guide.

On a fresh CentOS 7, log in and run:

      sudo yum install -y "*-queens"
      sudo yum update -y
      sudo yum install -y openstack-packstack
      sudo yum install -y python-setuptools
      sudo packstack --allinone \
          --cinder-volume-name="aVolume" \
          --debug \
          --service-workers=2 \
          --default-password="packstack" \
          --os-aodh-install=n \
          --os-ceilometer-install=n \
          --os-swift-install=n \
          --os-manila-install=n \
          --os-horizon-ssl=y \
          --amqp-enable-ssl=y \
          --glance-backend=file \
          --os-neutron-l2-agent=ovn \
          --os-neutron-ml2-type-drivers="geneve,flat" \
          --os-neutron-ml2-tenant-network-types="geneve" \
          --provision-demo=y \
          --provision-tempest=y \
          --run-tempest=y \
          --run-tempest-tests="smoke dashboard"

When that has finished you will have, as the root user, a set of keystonerc_admin and keystonerc_demo files, plus an example public and private network, a router, and a cirros image.
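As a quick sanity check (a sketch; the resources listed here are the ones the --provision-demo step above creates), you can source the demo credentials and list what was provisioned:

```shell
# Sketch: verify the demo resources that packstack provisioned.
# Assumes you are logged in as root on the packstack host, where
# the keystonerc_* files are written.
source ~/keystonerc_demo
openstack network list   # the example public and private networks
openstack router list    # the demo router
openstack image list     # the cirros image
```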

If you want to see how the neutron objects are mapped into OVN, give this a try:

$ sudo ovn-nbctl show
switch 3b31aaa0-eea0-462e-9cdc-866f2bd8171d (neutron-097345d1-3299-43d4-aeda-8af03516b92e) (aka public)
    port provnet-097345d1-3299-43d4-aeda-8af03516b92e
        type: localnet
        addresses: ["unknown"]
    port 38eda4cc-4931-4c48-bc83-6e1d72ccb90b
        type: router
        router-port: lrp-38eda4cc-4931-4c48-bc83-6e1d72ccb90b
switch f6555e37-305f-4342-87f1-f21de5adadc2 (neutron-c3959f48-caa2-4eb9-b217-19e70c2380cb) (aka private)
    port ee519f75-08a8-4559-bf44-8acb9b2cec1b
        type: router
        router-port: lrp-ee519f75-08a8-4559-bf44-8acb9b2cec1b
router 20efb278-0394-4330-a18d-9e40a56b3ae5 (neutron-e70a02f3-6758-4f78-bc50-133b6c6b0584) (aka router1)
    port lrp-ee519f75-08a8-4559-bf44-8acb9b2cec1b
        mac: "fa:16:3e:f6:c6:d3"
        networks: [""]
    port lrp-38eda4cc-4931-4c48-bc83-6e1d72ccb90b
        mac: "fa:16:3e:5f:41:cf"
        networks: [""]
        gateway chassis: [7a42645d-70b3-4286-8ddc-6240ccdd131c]
    nat a5a6bec0-7167-41af-8514-2764502fef21
        external ip: ""
        logical ip: ""
        type: "snat"

Neutron external network with routing (no NAT)

In this blog post I will explain how to connect private tenant networks to an external network through a neutron router, without using NAT or DNAT (floating IPs).

With the following configuration you will have routers that don’t NAT ingress or egress connections. You won’t be able to use floating IPs either, at least not in the configuration explained here; you could add a second external network and a gateway port, plus some static routes, to let the router steer traffic over each network.

This can be done with just one limitation: tenant networks connected in such a topology cannot have overlapping IPs. Also, the upstream router(s) must be informed about the internal networks so they can add routes to them. That can be done manually, or automatically by periodically talking to the OpenStack API (checking the router interfaces, the subnets, etc.), but I’ll skip that part in this blog post.
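The core of that automation is just turning subnet CIDRs into routes pointing at the neutron router’s external address. A minimal sketch — the emit_route_cmds helper and its "CIDR next-hop" input format are made up here for illustration, with the pairs presumably extracted from the API:

```shell
# Hypothetical helper: read "CIDR next-hop" pairs on stdin and print
# the "ip route replace" commands the upstream router would need in
# order to reach each tenant network through the neutron router.
emit_route_cmds() {
    while read -r cidr nexthop; do
        # skip blank or incomplete lines
        [ -n "$cidr" ] && [ -n "$nexthop" ] || continue
        echo "ip route replace $cidr via $nexthop"
    done
}

# example input: one tenant subnet, reachable via the router's port
printf '192.0.2.0/24 198.51.100.10\n' | emit_route_cmds
```

The printed commands would then be run on the upstream router (or piped into ssh); `ip route replace` keeps the operation idempotent across runs.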

We’re going to assume that our external provider network is “public”, with the subnet “public_subnet”, and that we know the CIDR of that network and its gateway address. Please excuse me for not using the new openstack commands; I can write an updated post later if somebody is interested.

Step by step

Create the virtual router

source keystonerc_admin

neutron router-create router_test

Create the private network for our test, and add it to the router

neutron net-create private_test
PRIV_NET=$(neutron net-list | \
           awk ' /private_test/ { print $2 }')

neutron subnet-create \
       --name private_subnet $PRIV_NET
neutron router-interface-add router_test private_subnet

Create a security group, and add an instance

neutron security-group-create test
neutron security-group-rule-create test --direction ingress

nova boot --flavor m1.tiny --image cirros \
           --nic net-id=$PRIV_NET \
          --security-group test cirros
sleep 10
nova console-log cirros

Please note that in the console-log you should see that everything went well, and that the instance got an address through DHCP.

Create a port for the router on the external network. We do it manually so we can specify the exact address we want.

SUBNET_ID=$(neutron subnet-list | awk ' /public_subnet/ { print $2 }')
neutron port-create public --fixed-ip \
    subnet_id=$SUBNET_ID,ip_address= ext-router-port

Add it to the router

PORT_ID=$(neutron port-show ext-router-port | \
          awk '/ id / { print $4 } ')
neutron router-interface-add router_test port=$PORT_ID

We can test it locally (assuming our host has access to the external network):

# test local...
ip route add via

# and assuming that our instance was created with

[root@server ~(keystone_admin)]# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=63 time=0.621 ms
64 bytes from icmp_seq=2 ttl=63 time=0.298 ms

Extra note: if you want to avoid issues with overlapping tenant network IPs, I recommend having a look at the subnet pool functionality of neutron, which was introduced in Kilo.
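As a sketch of how that looks (the pool name and prefixes below are made up for illustration): the admin creates a shared pool once, and tenant subnets carved out of it are guaranteed not to overlap:

```shell
# Hypothetical example: create a shared pool of tenant prefixes once...
neutron subnetpool-create --shared \
    --pool-prefix 10.20.0.0/16 \
    --default-prefixlen 24 tenant-pool

# ...then allocate subnets from the pool instead of passing a CIDR;
# neutron picks a free /24 each time, so subnets never overlap
neutron subnet-create --name private_subnet \
    --subnetpool tenant-pool private_test
```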

I hope you enjoyed it, and that this is helpful.


Creating a network interface to tenant network in baremetal node (neutron)

The next example illustrates how to create a test port on a bare-metal node, connected to a specific tenant network. This can be useful for testing, or for connecting specific bare-metal services to tenant networks.

The bare-metal node $HOST_ID needs to run the neutron-openvswitch-agent, which will wire the port into the right tag-id, tell neutron-server that our port is ACTIVE, and set up proper L2 connectivity (external VLANs, tunnels, etc.).
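A quick way to check that before going on (a sketch; the unit name is the one the RDO packages install, and the grep is just a crude filter):

```shell
# Sketch: confirm the OVS agent is running on this node and shows
# up in neutron's agent list before wiring any ports into br-int.
systemctl is-active neutron-openvswitch-agent
neutron agent-list | grep "$(hostname)"
```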

Set up our host id and tenant network

HOST_ID=$(hostname)   # this bare-metal node
NETWORK_ID=$(openstack network show -f value -c id $NETWORK_NAME)

Create the port in neutron

PORT_ID=$(neutron port-create --device-owner \
          compute:container \
          --name test_interf0 $NETWORK_ID | awk '/ id / { print $4 }')

PORT_MAC=$(neutron port-show $PORT_ID -f value -c mac_address)

The port is not bound yet, so it will be in DOWN status

neutron port-show $PORT_ID -f value -c status

Create the test_interf0 interface, wired to our new port

ovs-vsctl -- --may-exist add-port br-int test_interf0 \
  -- set Interface test_interf0 type=internal \
  -- set Interface test_interf0 external-ids:iface-status=active \
  -- set Interface test_interf0 external-ids:attached-mac="$PORT_MAC" \
  -- set Interface test_interf0 external-ids:iface-id="$PORT_ID"

We can now see how neutron marked this port as ACTIVE

neutron port-show $PORT_ID -f value -c status

Set the MAC address and move the interface into a namespace. The namespace is important if you’re using dhclient; otherwise the host-wide routes and DNS configuration of the host would be changed. You can omit the netns steps if you’re setting the IP address manually.

ip link set dev test_interf0 address $PORT_MAC
ip netns add test-ns
ip link set test_interf0 netns test-ns
ip netns exec test-ns ip link set dev test_interf0 up

Get IP configuration via DHCP

ip netns exec test-ns dhclient -I test_interf0 --no-pid test_interf0 -v
Internet Systems Consortium DHCP Client 4.2.5
Copyright 2004-2013 Internet Systems Consortium.
All rights reserved.
For info, please visit

Listening on LPF/test_interf0/fa:16:3e:6f:64:46
Sending on   LPF/test_interf0/fa:16:3e:6f:64:46
Sending on   Socket/fallback
DHCPREQUEST on test_interf0 to port 67 (xid=0x5b6ddebc)
DHCPACK from (xid=0x5b6ddebc)

Test connectivity (assuming we have DNS and a router for this subnet)

ip netns exec test-ns ping
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=36 time=115 ms
64 bytes from ( icmp_seq=2 ttl=36 time=114 ms

Neutron QoS service plugin

Finally, I’ve been able to record a video showing how the QoS service plugin works. If you want to deploy it yourself, follow the instructions under the video (open it in Vimeo for better quality).

Deployment instructions

Add to your devstack/local.conf

enable_plugin neutron git://
enable_service q-qos

Let’s stack!


Now create rules to allow traffic to the VM (port 22 & ICMP):

source ~/devstack/accrc/demo/demo

neutron security-group-rule-create --direction ingress \
                                   --port-range-min 22 \
                                   --port-range-max 22

neutron security-group-rule-create --protocol icmp \
                                   --direction ingress

nova net-list
nova boot --image cirros-0.3.4-x86_64-uec \
          --flavor m1.tiny \
          --nic net-id=*your-net-id* qos-cirros

nova show qos-cirros  # look for the IP
neutron port-list # look for the IP and find your *port id*

In another console, run the packet pusher

ssh cirros@$THE_IP_ADDRESS \
     'dd if=/dev/zero  bs=1M count=1000000000'

In yet another console, look for the port and monitor it

# given a port id 49d4a680-4236-4d0c-9feb-8b4990ac35b9
# look for the ovs port:
$ sudo ovs-vsctl show | grep qvo49d4a680-42
       Port "qvo49d4a680-42"
           Interface "qvo49d4a680-42"
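To actually watch the bandwidth, one possibility (a sketch; any traffic monitor on the qvo interface would do) is to poll the interface statistics that OVS keeps for the port:

```shell
# Sketch: print the received-bytes counter for the qvo port once a
# second; the delta between samples is the current throughput.
watch -n1 'sudo ovs-vsctl get Interface qvo49d4a680-42 statistics:rx_bytes'
```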

Finally, try the QoS rules:

source ~/devstack/accrc/admin/admin

neutron qos-policy-create bw-limiter
neutron qos-bandwidth-limit-rule-create bw-limiter \
        --max-kbps 3000 --max-burst-kbps 300

# after next command, the port will quickly
# go down to 3Mbps
neutron port-update *your-port-id* --qos-policy bw-limiter

You can change rules at runtime, and the ports will be updated:

neutron qos-bandwidth-limit-rule-update *rule-id* \
        bw-limiter \
        --max-kbps 5000 --max-burst-kbps 500

Or you can remove the policy from the port, and traffic will quickly spike back up to the original maximum.

neutron port-update *your-port-id* --no-qos-policy