Neutron is an OpenStack project that provides “Networking as a Service” to other OpenStack components, mainly nova. Neutron allows users to define or use networks, subnets, routers, DHCP servers, etc. in a very powerful and easy way.
Neutron routers are Linux network namespaces, which allows multiple isolated environments to be defined on the same network node. These routers are managed via the Neutron API and users don’t have direct access to them. This is fine for most cases, but there are others where deeper control is needed. In this post we’re going to show how to use an OpenStack GNU/Linux instance as a router, something that is not allowed by default.
An instance connected to two networks
It’s possible (and very easy) to connect an instance to several networks: just select the desired networks when the instance is launched. In this case we’re going to connect an instance (called linuxrouter) to a routed network and an isolated network; the result can be seen in OpenStack Horizon (Network topology):
A router connected to an external network allows associating a floating IP with the external interface of linuxrouter, so the instance is accessible from the external network as usual.
Once logged in, the routing table of the instance “linuxrouter” can be inspected:
linuxrouter:~$ sudo ip r
default via 10.0.0.1 dev eth0
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.230
10.0.1.0/24 dev eth1 proto kernel scope link src 10.0.1.10
169.254.169.254 via 10.0.1.3 dev eth1
The first three lines are the expected ones, but not the last one. It appears because the option “enable_isolated_metadata” is enabled in the Neutron DHCP agent, to allow access to the metadata server from isolated networks (read http://rossella-sblendido.net/2015/11/18/how-vms-get-access-to-their-metadata-in-neutron/ for further details on how it works). In this case, linuxrouter can already reach the metadata server through eth0, so the last rule isn’t needed. In fact, an automatic configuration for this network interface is not needed at all, so we can change it to a static one:
linuxrouter:~$ sudo ifdown eth1
Then change the stanza corresponding to eth1 in /etc/network/interfaces to a static one like this (a Debian image is used in this case):
allow-hotplug eth1
iface eth1 inet static
    address 10.0.1.10
    netmask 255.255.255.0
After bringing the interface up again (sudo ifup eth1), the routing table shows only the expected rules (the rule related to the metadata server doesn’t appear anymore because it was provided by the DHCP server of the isolated network):
default via 10.0.0.1 dev eth0
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.230
10.0.1.0/24 dev eth1 proto kernel scope link src 10.0.1.10
This Linux box is going to work as a router (IPv4 only, for now), so let’s activate the forwarding bit:
linuxrouter:~$ sudo sed -i '/^#net.ipv4.ip_forward/s/^#//g' /etc/sysctl.conf
linuxrouter:~$ sudo sysctl -p /etc/sysctl.conf
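The sed expression above simply uncomments the net.ipv4.ip_forward line in /etc/sysctl.conf. A self-contained sketch of the same edit applied to a temporary copy, so the effect can be checked without touching the real file:

```shell
# Work on a temporary file instead of the real /etc/sysctl.conf
tmpconf=$(mktemp)
printf '#net.ipv4.ip_forward=1\n' > "$tmpconf"

# Same substitution as above: drop the leading '#' from the ip_forward line
sed -i '/^#net.ipv4.ip_forward/s/^#//g' "$tmpconf"

cat "$tmpconf"   # net.ipv4.ip_forward=1
rm -f "$tmpconf"
```

After the real file is edited, sysctl -p reloads it so the change takes effect immediately, without a reboot.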
No additional modifications are needed; the Linux box is now ready to work as a router.
An instance in the isolated network
An instance is launched in the isolated network following the usual procedure:
It’s not possible to associate a floating IP with the internal instance because it’s in an isolated network: floating IPs can only be associated with instances connected to a router that is attached to an external network (like “linuxrouter” in the previous section). The instance isn’t directly reachable from the external network, but it’s possible to access it indirectly, for example from “linuxrouter”.
The routing table of this instance is as follows:
internal:~$ sudo ip r
10.0.1.0/24 dev eth0 proto kernel scope link src 10.0.1.11
169.254.169.254 via 10.0.1.3 dev eth0
There is no default route because none was defined during the creation of the internal subnet. Let’s set “linuxrouter” as the gateway:
internal:~$ sudo ip r add default via 10.0.1.10
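Note that a route added with ip r is lost on reboot. Assuming the internal instance also runs the same Debian image, the route can be made persistent with a post-up hook in the eth0 stanza of /etc/network/interfaces (this stanza is an illustrative sketch, not taken from the original setup):

```
allow-hotplug eth0
iface eth0 inet dhcp
    # Set linuxrouter as the default gateway every time eth0 comes up
    post-up ip route add default via 10.0.1.10
```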
To verify that “linuxrouter” is working as a router, we can ping its external interface from the internal instance:
internal:~$ ping -c 1 10.0.0.230
PING 10.0.0.230 (10.0.0.230) 56(84) bytes of data.
64 bytes from 10.0.0.230: icmp_seq=1 ttl=64 time=0.543 ms

--- 10.0.0.230 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms
Finally, if we want the isolated network to reach the external router, a static route must be defined in the Neutron router so it can reach the isolated network through linuxrouter (it seems that this can be done from Horizon since OpenStack Liberty, but in previous releases it must be done with the neutron CLI):
neutron router-update 3e98276c-f107-48a5-97de-f144e10df3df --routes type=dict list=true destination=10.0.1.0/24,nexthop=10.0.0.230
However, we can check that the external router doesn’t reply to the ping sent by the internal instance:
internal:~$ ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
What’s happening? Let’s analyze the network traffic on linuxrouter with tcpdump, in particular the ping requests and replies on the interface connected to the isolated network (eth1):
linuxrouter:~$ sudo tcpdump -i eth1 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
22:02:54.853655 IP 10.0.1.11 > 10.0.0.1: ICMP echo request, id 490, seq 1, length 64
22:02:54.854275 IP 10.0.0.1 > 10.0.1.11: ICMP echo reply, id 490, seq 1, length 64
The ping request arrives at eth1 on linuxrouter (first captured line), which forwards it to 10.0.0.1 (the external router). The external router is able to reply to the internal instance through linuxrouter (thanks to the static route defined above), and the reply from 10.0.0.1 leaves linuxrouter (last captured line), but it never reaches the internal instance. This is caused by the OpenStack anti-spoofing iptables rules that are automatically added to every instance launched; they can be inspected by running this command on the compute node where the instance is running:
compute1:~$ sudo iptables -L neutron-openvswi-sebf586a5-a -n -v
Chain neutron-openvswi-sebf586a5-a (1 references)
 pkts bytes target  prot opt in  out  source     destination
   47  3072 RETURN  all  --  *   *    10.0.1.10  0.0.0.0/0    MAC FA:16:3E:42:A1:C4
    3   252 DROP    all  --  *   *    0.0.0.0/0  0.0.0.0/0
Where “ebf586a5-a” is the truncated Neutron port UUID (first 10 characters) corresponding to eth1. For every port, Neutron defines a chain neutron-openvswi-s<truncated-UUID>, and all outgoing traffic with a source IP address different from the one assigned to the port (10.0.1.10 in this case) is dropped.
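The mapping from port UUID to chain name is easy to reproduce; a small bash sketch using the UUID from this example:

```shell
# Full Neutron port UUID of eth1 (from this example)
port_uuid="ebf586a5-a31a-4ec5-a550-76e530da1260"

# The chain name is the prefix plus the first 10 characters of the UUID
# (bash substring expansion)
chain="neutron-openvswi-s${port_uuid:0:10}"

echo "$chain"   # neutron-openvswi-sebf586a5-a
```

This is handy when jumping from a port listed by the neutron CLI to the corresponding iptables chain on the compute node.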
This anti-spoofing policy is very restrictive and doesn’t allow using an instance as a router. However, Neutron allows adding extra source IP addresses (hosts or subnets) to a port using an option called “allowed_address_pairs”, as follows:
neutron port-update ebf586a5-a31a-4ec5-a550-76e530da1260 --allowed-address-pairs type=dict list=true mac_address=fa:16:3e:42:a1:c4,ip_address=10.0.0.0/24
The last command inserts an iptables rule into the corresponding anti-spoofing chain:
compute1:~$ iptables -L neutron-openvswi-sebf586a5-a -n -v
Chain neutron-openvswi-sebf586a5-a (1 references)
 pkts bytes target  prot opt in  out  source       destination
    0     0 RETURN  all  --  *   *    10.0.0.0/24  0.0.0.0/0    MAC FA:16:3E:42:A1:C4
   47     0 RETURN  all  --  *   *    10.0.1.10    0.0.0.0/0    MAC FA:16:3E:42:A1:C4
    3   252 DROP    all  --  *   *    0.0.0.0/0    0.0.0.0/0
And we can check that the ping reply now reaches the internal instance:
internal:~$ ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=63 time=1.28 ms

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.282/1.282/1.282/0.000 ms
If all traffic from the internal instance must be allowed (be careful with this option, because it effectively disables the anti-spoofing policy), a very permissive rule can be added (allowing traffic from 0.0.0.0/0):
neutron port-update ebf586a5-a31a-4ec5-a550-76e530da1260 --allowed-address-pairs type=dict list=true mac_address=fa:16:3e:42:a1:c4,ip_address=0.0.0.0/0
Update: Thanks to Pedro Navarro: https://twitter.com/pnavarroperez/status/668574042178351104
OpenStack Kilo introduces a very interesting option that allows enabling or disabling the anti-spoofing policy on a per-network basis instead of per port, as shown before, and it can be a good balance between security and functionality. I couldn’t test it yet because I’m still using Icehouse, but you can find further details at https://wiki.openstack.org/wiki/Neutron/ML2PortSecurityExtensionDriver.
Neutron is a very powerful OpenStack project that manages software-defined networks through an API, and that’s an excellent approach in most cases. But obviously an API cannot cover every need, and sometimes an alternative way must be found, as in this post, where an instance is used as a router by partially disabling the anti-spoofing policy.