Playing around with OpenStack: Using an instance as a router


Neutron is an OpenStack project that provides «Networking as a Service» to other OpenStack components, mainly to nova. Neutron allows users to define and use networks, subnets, routers, DHCP servers, etc. in a very powerful and easy way.

Neutron routers are Linux network namespaces, which allows multiple isolated environments to be defined on the same network node. These routers are managed through the neutron API and users don't have direct access to them. This is fine for most cases, but there are others where deeper control is needed, so in this post we're going to show how to use a GNU/Linux OpenStack instance as a router, something that isn't allowed by default.
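
As a small illustration, an operator with shell access to the network node (regular users don't have it) sees each of these routers as a qrouter-<uuid> network namespace; a minimal sketch, using the UUID of the router that appears later in this post:

networknode:~$ sudo ip netns list | grep qrouter
networknode:~$ sudo ip netns exec qrouter-3e98276c-f107-48a5-97de-f144e10df3df ip route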

An instance connected to two networks

It’s possible (and very easy) to connect an instance to several networks, just by selecting the desired networks when the instance is launched. In this case we’re going to connect an instance (called linuxrouter) to a routed network and to an isolated network, and the result can be seen in OpenStack Horizon (Network topology):

[Figure: Horizon network topology with linuxrouter connected to the routed and the isolated networks]

Since there is a router connected to an external network, a floating IP can be associated with the external interface of linuxrouter, so the instance is accessible from the external network as usual.
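
For reference, the same can be done from the command line; a rough sketch with the Icehouse-era clients, where the image, flavor and network identifiers are placeholders to adapt to your environment:

$ nova boot --image <debian-image> --flavor <flavor> \
      --nic net-id=<routed-net-uuid> --nic net-id=<isolated-net-uuid> linuxrouter
$ neutron floatingip-create <external-net>
$ nova floating-ip-associate linuxrouter <floating-ip>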

Once logged in, the routing table of the «linuxrouter» instance can be checked:

linuxrouter:~$ sudo ip r
default via 10.0.0.1 dev eth0 
10.0.0.0/24 dev eth0  proto kernel  scope link  src 10.0.0.230 
10.0.1.0/24 dev eth1  proto kernel  scope link  src 10.0.1.10 
169.254.169.254 via 10.0.1.3 dev eth1 

The first three routes are the expected ones, but the last one is not. It appears because the option «enable_isolated_metadata» is enabled in the neutron DHCP agent, which allows instances on isolated networks to reach the metadata server (read http://rossella-sblendido.net/2015/11/18/how-vms-get-access-to-their-metadata-in-neutron/ for further details on how it works).
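
For reference, this option lives in the DHCP agent configuration on the network node; a minimal sketch, assuming the default /etc/neutron/dhcp_agent.ini path:

# /etc/neutron/dhcp_agent.ini (excerpt)
[DEFAULT]
enable_isolated_metadata = True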

In this case, linuxrouter is already able to reach the metadata server through eth0, so that last route isn't needed; in fact, eth1 doesn't need automatic configuration at all, so we can change it to a static one:

linuxrouter:~$ sudo ifdown eth1

Then change the stanza corresponding to eth1 in /etc/network/interfaces to one like this (a Debian image is used in this case):

allow-hotplug eth1
iface eth1 inet static
      address 10.0.1.10
      netmask 255.255.255.0

After bringing eth1 up again (sudo ifup eth1), the routing table shows the expected routes (the metadata route no longer appears because it was being provided by the DHCP server of the isolated network):

default via 10.0.0.1 dev eth0 
10.0.0.0/24 dev eth0  proto kernel  scope link  src 10.0.0.230 
10.0.1.0/24 dev eth1  proto kernel  scope link  src 10.0.1.10

This Linux box is going to work as a router (only IPv4 for now), so let's enable IP forwarding:

linuxrouter:~$ sudo sed -i '/^#net.ipv4.ip_forward/s/^#//g' /etc/sysctl.conf
linuxrouter:~$ sudo sysctl -p /etc/sysctl.conf

No additional modifications are needed; the Linux box is now ready to work as a router.
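
If you want to double-check, the current value can be queried with sysctl (and forwarding can also be enabled at runtime without editing the file, although that change wouldn't survive a reboot):

linuxrouter:~$ sudo sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
linuxrouter:~$ sudo sysctl -w net.ipv4.ip_forward=1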

An instance in the isolated network

An instance is launched in the isolated network following the usual procedure:

[Figure: Horizon network topology with the internal instance added to the isolated network]

It’s not possible to associate a floating IP with this internal instance because it’s on an isolated network, and floating IPs can only be associated with instances attached to a router that is connected to an external network (like «linuxrouter» in the previous section). The instance isn’t directly reachable from the external network, but it’s possible to access it indirectly, for example through «linuxrouter».
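
For example, with SSH agent forwarding (or the ProxyJump option of recent OpenSSH clients) the internal instance can be reached through linuxrouter; a sketch assuming Debian images with the default debian user and the internal instance at 10.0.1.11:

# log in to linuxrouter forwarding the agent, then hop to the internal instance
workstation:~$ ssh -A debian@<linuxrouter-floating-ip>
linuxrouter:~$ ssh debian@10.0.1.11
# or in a single step with OpenSSH >= 7.3
workstation:~$ ssh -J debian@<linuxrouter-floating-ip> debian@10.0.1.11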

The routing table of this instance is as follows:

internal:~$ sudo ip r
10.0.1.0/24 dev eth0  proto kernel  scope link  src 10.0.1.11 
169.254.169.254 via 10.0.1.3 dev eth0

There is no default route because no gateway was defined when the internal subnet was created. Let's set «linuxrouter» as the gateway:

internal:~$ sudo ip r add default via 10.0.1.10
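
This route is lost if the instance reboots; if you prefer the DHCP server of the isolated subnet to push it automatically, the gateway can be set on the subnet itself. A hedged sketch (the exact flag depends on the client version; with the unified client it would be «openstack subnet set --gateway»):

$ neutron subnet-update <isolated-subnet-uuid> --gateway_ip 10.0.1.10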

In order to verify that «linuxrouter» is working as a router, we can ping its external interface from the internal instance:

internal:~$ ping -c 1 10.0.0.230
PING 10.0.0.230 (10.0.0.230) 56(84) bytes of data.
64 bytes from 10.0.0.230: icmp_seq=1 ttl=64 time=0.543 ms

--- 10.0.0.230 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms

Finally, if we want to reach the external router from the isolated network, a static route must be defined in the external router so that it can reach the isolated network through linuxrouter (it seems this can be done from Horizon since OpenStack Liberty [1], but in previous releases it must be done with the neutron CLI):

neutron router-update 3e98276c-f107-48a5-97de-f144e10df3df --routes type=dict list=true destination=10.0.1.0/24,nexthop=10.0.0.230
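
The result can be verified by asking neutron for the router details and checking the «routes» field:

$ neutron router-show 3e98276c-f107-48a5-97de-f144e10df3df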

However, we can check that the external router still doesn't reply to the ping sent from the internal instance:

internal:~$ ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

What’s happening? Let’s analyze the network traffic on linuxrouter with tcpdump, in particular the ping requests and replies on the interface connected to the isolated network (eth1):

linuxrouter:~$ sudo tcpdump -i eth1 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
22:02:54.853655 IP 10.0.1.11 > 10.0.0.1: ICMP echo request, id 490, seq 1, length 64
22:02:54.854275 IP 10.0.0.1 > 10.0.1.11: ICMP echo reply, id 490, seq 1, length 64

The ping request arrives at eth1 on linuxrouter (first captured line), linuxrouter forwards it to 10.0.0.1 (the external router), the external router is able to reply to the internal instance through linuxrouter (thanks to the static route defined above) and finally the reply from 10.0.0.1 leaves linuxrouter (last captured line), but it never reaches the internal instance. This is due to the anti-spoofing iptables rules that OpenStack automatically adds for every instance launched; they can be checked by running this command on the compute node where the instance is running:

compute1:~$ sudo iptables -L neutron-openvswi-sebf586a5-a -n -v
Chain neutron-openvswi-sebf586a5-a (1 references)
 pkts bytes target     prot opt in     out     source               destination         
   47  3072 RETURN     all  --  *      *       10.0.1.10            0.0.0.0/0            MAC FA:16:3E:42:A1:C4
    3   252 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0

Here «ebf586a5-a» is the truncated neutron port UUID (the first 10 characters) corresponding to eth1. For every port attached to an instance, neutron defines a chain neutron-openvswi-s<truncated-UUID>, and all outgoing traffic with a source IP address different from the one assigned to the port (10.0.1.10 in this case) is dropped.
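
If you don't know the port UUID, it can be located by its IP address with the neutron CLI; the first 10 characters of the UUID shown there give the suffix of the chain name:

$ neutron port-list | grep 10.0.1.10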

«allowed_address_pairs»

This anti-spoofing policy is very restrictive and doesn't allow an instance to be used as a router; however, neutron makes it possible to allow additional source IP addresses (hosts or subnets) on a port through an option called «allowed_address_pairs», as follows [2]:

neutron port-update ebf586a5-a31a-4ec5-a550-76e530da1260 --allowed-address-pairs type=dict list=true mac_address=fa:16:3e:42:a1:c4,ip_address=10.0.0.0/24

This command inserts an iptables rule into the corresponding anti-spoofing chain:

compute1:~$ iptables -L neutron-openvswi-sebf586a5-a -n -v
Chain neutron-openvswi-sebf586a5-a (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  *      *       10.0.0.0/24          0.0.0.0/0            MAC FA:16:3E:42:A1:C4
   47     0 RETURN     all  --  *      *       10.0.1.10            0.0.0.0/0            MAC FA:16:3E:42:A1:C4
    3   252 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0 

And we can check that the ping reply now reaches the internal instance:

internal:~$ ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=63 time=1.28 ms

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.282/1.282/1.282/0.000 ms

If all traffic coming from the internal network must be allowed (be careful with this option, because it effectively disables the anti-spoofing policy), a very permissive rule can be added (allowing traffic from any source, 0.0.0.0/0):

neutron port-update ebf586a5-a31a-4ec5-a550-76e530da1260 --allowed-address-pairs type=dict list=true mac_address=fa:16:3e:42:a1:c4,ip_address=0.0.0.0/0
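
The pairs configured on a port can be reviewed at any time with «neutron port-show», looking at the allowed_address_pairs field:

$ neutron port-show ebf586a5-a31a-4ec5-a550-76e530da1260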

«port_security»

Update: Thanks to Pedro Navarro: https://twitter.com/pnavarroperez/status/668574042178351104

OpenStack Kilo introduces a very interesting option that allows enabling or disabling the anti-spoofing policy on a per-network basis instead of on a per-port basis as shown before, which can be a good balance between security and functionality. I haven't been able to test it yet :( because I'm still using Icehouse, but you can find further details at https://wiki.openstack.org/wiki/Neutron/ML2PortSecurityExtensionDriver.
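
I haven't run it myself, but judging from that page the usual workflow looks roughly like this (an untested sketch; flag spelling may differ between client releases, and a port must have no security groups attached before its port security can be disabled):

# disable port security for every new port on a network...
$ neutron net-update <net-uuid> --port-security-enabled=False
# ...or only for a specific port
$ neutron port-update <port-uuid> --no-security-groups --port-security-enabled=False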

Conclusion

Neutron is a very powerful OpenStack project that manages software-defined networks through an API, and that is an excellent approach in most cases. But obviously an API cannot cover every option needed in every scenario, and sometimes an alternative has to be found, as in this post, where an instance is used as a router by partially deactivating the anti-spoofing policy.

References

  1. https://bugs.launchpad.net/horizon/+bug/1396616
  2. https://ask.openstack.org/en/question/26775/ip-mac-spoofing-in-openstack/

17 comments on «Playing around with OpenStack: Using an instance as a router»

  1. Jose Julio Herrera Suarez said:

    Good morning. First of all, thank you for this fantastic project about OpenStack; I have learned a lot from it and right now I'm working on a similar project. I didn't know how to contact any of you to ask a question, if that's possible of course. I'm stuck deploying a NAS server (FreeNAS) on OpenStack; if you could help me I would really appreciate it. Regards and thanks

  2. Blober said:

    Very nice tutorial. I was able to disable the network anti-spoofing feature as explained towards the end; in this case I had to do that before creating any servers on the infra. I still had to add an iptables MASQUERADE NAT rule on the linuxrouter for it to work, and I did not need the static route.

  3. Pablo Rueda said:

    How can I access one instance from another one?
    I'm implementing this topology and I don't know how to access the instance in the isolated network, could you help me please?

    Thank you

    1. Hi Pablo,

      It depends on the configuration, but in general, from the external network you can only reach those instances connected to an external router, through a floating IP address. To access an internal instance you can use SSH agent forwarding.

      1. Pablo Rueda said:

        I have tried to access the internal instance through the router instance, doing «ssh debian@30.0.0.4», and it asks me for a password which I don't know.
        I have tried the adminPass shown in the CLI when launching the instance, but it isn't that one.
        Do you know how I could access the internal instance?

        Thanks in advance and regards

      2. Hi Pablo, as I said before, the sensible thing would be to use SSH agent forwarding so that you can log in with a public/private key pair instead of a password.

      3. Pablo Rueda said:

        Yes, I understood that; the problem is the following:
        I launch the internal instance from the host, with its corresponding key pair.
        I can only access this internal instance via ssh from the one acting as a router, and I don't know how to get the public/private key pair onto that router instance in order to run ssh -i.
        I don't know if I'm explaining myself well.
        That is, I don't know how to have on linuxrouter the key pair corresponding to the internal instance in order to access it

        Thanks in advance

      4. Hi Pablo,

        You could copy the private key to the router instance and run «ssh -i» from there, but as I said in the previous reply, in these cases it's better to use ssh-agent and forward it. If you haven't done it before it will take you a while to understand, but it's one of those really useful SSH practices that you will use a lot in the future, and it's worth learning well.

        Regards

        Alberto

      1. Pablo Rueda said:

        I do it from the CLI, but with the command sudo iptables -L neutron-openvswi-sebf586a5-a -n -v I don't know where to find the identifier that shows up for you as ebf586a5-a.

      2. Pablo Rueda said:

        Could you help me with this problem, or point me to some blog or page where I can look into it? The only good site I've found about this topic has been your blog.
        Sorry for the trouble.

      3. Hi Pablo,

        You can get the port identifier from the command line with «openstack port list»; you can identify the port you want to modify by its IP. You will get a long UUID, from which you must keep the first 10 characters to apply the rules shown in the post, and look on the compute node for an iptables chain called neutron-openvswi-s[first10charactersoftheport]

        Regards

        Alberto

  4. pablo rueda said:

    Hi Alberto,
    After setting linuxrouter as the gateway, pinging the linuxrouter interface connected to the external network gets no reply.

    Do you know any possible reason why this happens and how to fix it? I have followed everything you describe, modifying the interfaces file and so on, and I can't figure out what it could be. If you could point me to something I would appreciate it.

    Sorry for the trouble.

    Pablo.

  5. Pablo Rueda said:

    Hi Alberto, good afternoon,
    Would you know any reason why, on the internal instance, having «default via 30.0.0.1 dev eth0» defined (30.0.0.1 being the IP address with which linuxrouter connects to the network where the internal instance, 30.0.0.19, lives), pinging 30.0.0.1 (its defined gateway) from the internal instance gets no reply?
    Is there something else I need to configure that I may have missed?

    Thanks in advance.

    1. Hi Pablo,

      To be honest it does seem strange, and I think there must be a problem in some configuration or an error in the network; I don't know whether it will be easy for you to find, or whether I can help you with that information alone.

      It may not be relevant to the question, but are you using that addressing for private addresses? 30.X.X.X falls within the public IP range. Did you get into the instance over the network or through the console? I ask because if you got in over the network you are doing it through the router, right? In any case, I would review the security group rules and the state of the router.

      Regards

      Alberto
