Every time I try to download a cirros image from download.cirros-cloud.net for use in my OpenStack devel environment, it’s super slow (100-200 KB/s), so I’m making a local mirror of it here.

You can grab the files from here:

https://ajo.es/binaries/cirros-0.4.0-x86_64-disk.img
https://ajo.es/binaries/cirros-0.4.0-i386-disk.img

Or upload them straight into your cloud with:

source ~/keystonerc_admin
pushd /tmp
curl https://ajo.es/binaries/cirros-0.4.0-x86_64-disk.img  > cirros-0.4.0-x86_64-disk.img
openstack image create "cirros"   --file cirros-0.4.0-x86_64-disk.img \
                                  --disk-format qcow2 --container-format bare \
                                  --public

Or for i386:

source ~/keystonerc_admin
pushd /tmp
curl https://ajo.es/binaries/cirros-0.4.0-i386-disk.img  > cirros-0.4.0-i386-disk.img
openstack image create "cirros"   --file cirros-0.4.0-i386-disk.img \
                                  --disk-format qcow2 --container-format bare \
                                  --public
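
If you want to sanity-check the downloaded file before uploading it (assuming you have qemu-img installed), you can inspect it; the format it reports should match the --disk-format we pass above:

# the "file format" line should say qcow2, matching --disk-format
qemu-img info cirros-0.4.0-x86_64-disk.img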

TO-DO: update this blog post to explain "--disable-snat", which achieves the same result and probably also supports FIPs (I need to check):

usage: neutron router-gateway-set [-h] [--request-format {json}]
                                  ROUTER EXTERNAL-NETWORK

Set the external network gateway for a router.

positional arguments:
  ROUTER                ID or name of the router.
  EXTERNAL-NETWORK      ID or name of the external network for the gateway.

optional arguments:
  -h, --help            show this help message and exit
  --request-format {json}
                        DEPRECATED! Only JSON request format is supported.
  --disable-snat        Disable source NAT on the router gateway.
  --fixed-ip subnet_id=SUBNET,ip_address=IP_ADDR
                        Desired IP and/or subnet on external network:
                        subnet_id=<name_or_id>,ip_address=<ip>. You can
                        specify both of subnet_id and ip_address or specify
                        one of them as well. You can repeat this option.
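
For reference, the alternative mentioned in the TO-DO would look roughly like this, using the router and external network names from this post:

# set "public" as the gateway of router_test, but without source NAT, so
# instances keep their real addresses on egress
neutron router-gateway-set --disable-snat router_test public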

In this blog post I will explain how to connect private tenant networks to an external network without the use of NAT or DNAT (floating ips) via a neutron router.

With the following configuration you will have routers that don’t do NAT on ingress or egress connections. You won’t be able to use floating ips either, at least not in the configuration explained here; you could, however, add a second external network and a gateway port, plus some static routes, to let the router steer traffic over each network (a rough sketch of that variant follows).
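
That variant would look something like this (hypothetical second network name and placeholder addresses, just to illustrate the CLI involved; the exact routes depend on your topology):

# hypothetical second external network used as the router gateway, so floating
# ips can still be allocated there
neutron router-gateway-set router_test public2

# static route steering a given destination through the NAT-less external
# network instead of the gateway (placeholder values)
neutron router-update router_test --route destination=<cidr>,nexthop=<next-hop-ip>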

This can be done with just one limitation: tenant networks connected in such a topology cannot have overlapping IPs. Also, the upstream router(s) must be informed about the internal networks, so they can add routes to them themselves. That can be done manually, or automatically by periodically talking to the openstack API (checking the router interfaces, the subnets, etc.), but I’ll skip that part in this blog post; a minimal sketch of the idea is shown below.
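
As a minimal sketch of the automatic approach (placeholder address, and assuming the router name used later in this post), you could list the subnets attached to the router and print the routes the upstream router would need:

# the address our router will get on the external network (set further below)
ROUTER_EXT_IP=<router-ip-on-the-external-network>

# print one "ip route add" line per subnet attached to router_test; note that
# this also lists the router's own port on the external network, which you
# would filter out
for SUBNET_ID in $(neutron router-port-list router_test -f value -c fixed_ips \
                   | grep -o '"subnet_id": "[^"]*"' | cut -d'"' -f4); do
    echo "ip route add $(neutron subnet-show $SUBNET_ID -f value -c cidr) via $ROUTER_EXT_IP"
done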

We’re going to assume that our external provider network is “public”, with the subnet “public_subnet”, a CIDR of <public-cidr> and a gateway on <public-gateway-ip>. Please excuse me for not using the new openstack commands; I can make an updated post later if somebody is interested in that (a rough translation is sketched below).
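
For reference, the rough openstack-client equivalents of the neutron commands used below would be something along these lines (a sketch with placeholder addresses, not what I actually ran):

openstack router create router_test
openstack network create private_test
openstack subnet create --network private_test --subnet-range <private-cidr> private_subnet
openstack router add subnet router_test private_subnet
openstack port create --network public \
    --fixed-ip subnet=public_subnet,ip-address=<external-ip> ext-router-port
openstack router add port router_test ext-router-port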

Step by step

Create the virtual router

source keystonerc_admin

neutron router-create router_test

# please note that you can manipulate static routes on that router if you need:
# neutron router-update --help
#  [--route destination=CIDR,nexthop=IP_ADDR | --no-routes]

Create the private network for our test, and add it to the router

neutron net-create private_test
PRIV_NET=$(neutron net-list | awk ' /private_test/ { print $2 }')

# pick whatever range you want for the tenant network, e.g. 192.168.0.0/24
neutron subnet-create --name private_subnet $PRIV_NET <private-cidr>
neutron router-interface-add router_test private_subnet

Create a security group, and add an instance

neutron security-group-create test
neutron security-group-rule-create test --direction ingress

nova boot --flavor m1.tiny --image cirros --nic net-id=$PRIV_NET \
          --security-group test cirros
sleep 10
nova console-log cirros

Please note that in the console-log you may find that everything went well… and that the instance got an address through DHCP.

Create a port for the router on the external network. We do it manually so we can specify the exact address we want.

SUBNET_ID=$(neutron subnet-list | awk ' /public_subnet/ { print $2 }')
neutron port-create public --name ext-router-port \
    --fixed-ip subnet_id=$SUBNET_ID,ip_address=<external-ip>

Add it to the router

PORT_ID=$(neutron port-show ext-router-port | awk '/ id / { print $4 } ')
neutron router-interface-add router_test port=$PORT_ID
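
At this point you can double-check that both interfaces, the tenant one and the external one, are attached to the router:

# this should list two ports: one on private_subnet, one on the public network
neutron router-port-list router_test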

We can test it locally (assuming our host has direct access to the external “public” network):

# test locally...
ip route add <private-cidr> via <external-ip>

# and assuming that our instance was created with <instance-ip>

[root@server ~(keystone_admin)]# ping <instance-ip>
PING <instance-ip> (<instance-ip>) 56(84) bytes of data.
64 bytes from <instance-ip>: icmp_seq=1 ttl=63 time=0.621 ms
64 bytes from <instance-ip>: icmp_seq=2 ttl=63 time=0.298 ms

Extra note: if you want to avoid issues with overlapping tenant network IPs, I recommend having a look at the subnet pool functionality of neutron, which was introduced in Kilo.
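
A quick sketch of how that looks with the neutron CLI (the prefix and names are just examples):

# a subnet pool that tenant subnets are allocated from, so neutron hands out
# non-overlapping ranges automatically
neutron subnetpool-create --pool-prefix 10.10.0.0/16 \
    --default-prefixlen 24 shared_pool

# subnets created from the pool get a free /24 instead of a user-chosen CIDR
neutron subnet-create --subnetpool shared_pool --name private_subnet $PRIV_NET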

I hope you enjoyed it and found it helpful; let me know in the comments :)

After 5 years of using tumblr, and tumblr messing with all my bash and code snippets in indescribable ways, I have decided to move to jekyll-based blogging.

The theme is a very nice MIT-licensed one (HPSTR).

And you can find the source code to this blog here: https://github.com/mangelajo/ajo-es/

I went back from being hosted at tumblr.com to being hosted at home, proudly on an RDO OpenStack instance I have which also serves other projects; the projects are managed in Ansible thanks to the openstack ansible modules.

A front instance based on my pet project nginx-master takes care of reverse proxying each domain/project into its VM/container/whatever, DKIM/SMTP configuration on DNS records, and IP address changes.

The three islands (at the Amalfi Coast); feel free to contact me via twitter if you want a higher-res version of the picture.

The next example illustrates how to create a test port on a bare-metal node connected to a specific tenant network. This can be useful for testing, or for connecting specific bare-metal services to tenant networks.

The bare-metal node $HOST_ID needs to run the neutron-openvswitch-agent, which will wire the port into the right local VLAN tag, tell neutron-server that our port is ACTIVE, and set up proper L2 connectivity (external VLAN, tunnels, etc.).

Set up our host id and tenant network

HOST_ID=$(hostname)                  # this bare-metal node's hostname, as known to neutron
NETWORK_NAME=<your-tenant-network>   # the tenant network to plug the test port into

NETWORK_ID=$(openstack network show -f value -c id $NETWORK_NAME)

Create the port in neutron

PORT_ID=$(neutron port-create --device-owner compute:container \
          --name test_interf0 $NETWORK_ID | awk '/ id / { print $4 }')

PORT_MAC=$(neutron port-show $PORT_ID -f value -c mac_address)

The port is not bound yet, so it will be in DOWN status

neutron port-show $PORT_ID -f value -c status

Create the test_interf0 interface, wired to our new port

ovs-vsctl -- --may-exist add-port br-int test_interf0 \
  -- set Interface test_interf0 type=internal \
  -- set Interface test_interf0 external-ids:iface-status=active \
  -- set Interface test_interf0 external-ids:attached-mac="$PORT_MAC" \
  -- set Interface test_interf0 external-ids:iface-id="$PORT_ID"

We can now see how neutron marked this port as ACTIVE

neutron port-show $PORT_ID -f value -c status

Set the MAC address and move the interface into a namespace. The namespace is important if you’re using dhclient; otherwise the host-wide routes and DNS configuration of the host would be changed. You can omit the netns if you’re setting the IP address manually (a manual alternative is sketched right after the next block).

ip link set dev test_interf0 address $PORT_MAC
ip netns add test-ns
ip link set test_interf0 netns test-ns
ip netns exec test-ns ip link set dev test_interf0 up
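
If you prefer to set the address by hand instead of running dhclient, a minimal manual alternative (placeholder values matching whatever your subnet uses) would be:

# assign the port's fixed IP manually and add a default route
ip netns exec test-ns ip addr add <port-fixed-ip>/<prefix-len> dev test_interf0
ip netns exec test-ns ip route add default via <subnet-gateway-ip>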

Get IP configuration via DHCP

ip netns exec test-ns dhclient -I test_interf0 --no-pid test_interf0 -v
Internet Systems Consortium DHCP Client 4.2.5
Copyright 2004-2013 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on LPF/test_interf0/fa:16:3e:6f:64:46
Sending on   LPF/test_interf0/fa:16:3e:6f:64:46
Sending on   Socket/fallback
DHCPREQUEST on test_interf0 to 255.255.255.255 port 67 (xid=0x5b6ddebc)
DHCPACK from <dhcp-server-ip> (xid=0x5b6ddebc)

Test connectivity (assuming we have DNS and a router for this subnet)

ip netns exec test-ns ping www.google.com
PING www.google.com (<resolved-ip>) 56(84) bytes of data.
64 bytes from fa-in-f99.1e100.net (<resolved-ip>): icmp_seq=1 ttl=36 time=115 ms
64 bytes from fa-in-f99.1e100.net (<resolved-ip>): icmp_seq=2 ttl=36 time=114 ms
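
Once you are done with the test port, cleaning up is roughly:

# release the DHCP lease, remove the OVS port and namespace, delete the port
ip netns exec test-ns dhclient -r test_interf0
ovs-vsctl del-port br-int test_interf0
ip netns del test-ns
neutron port-delete $PORT_ID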