OpenStack does provide IPsec VPNaaS, which will inevitably be covered in a later blog post; however, I wanted to share my experiences with SSL-based OpenVPN.
So why am I doing this? Continuous integration and testing is always on my agenda. One thing you quickly learn with Puppet in production is that modifications tend to get layered upon one another, and they usually work because the packages the changes depend on are already present. This doesn’t exercise whether those dependencies hold up from a clean install. To combat this and avoid nasty surprises when provisioning new machines, I spin up a virtual reproduction of our infrastructure on a regular basis. It also allows us to test new software releases in isolation and check that our code works in a completely different root domain. Lots of plus points!
Addressing all of these machines for automation purposes would take a lot of public IP addresses. Unfortunately, as we are all acutely aware, these are in short supply, so I wanted to limit my use to two: one for the virtual router and one for a VPN gateway onto my test network. Hopefully this is a pattern our clients can copy to avoid using too much of a finite resource, which above and beyond costing a fortune can leave other customers facing address starvation. I picked OpenVPN as my tool of choice, mainly due to familiarity and ubiquity, and because, as an inquisitive young thing, I wanted to twiddle some knobs on a lazy Saturday morning in bed.
So first up on the agenda is securing the VPN tunnel with strong encryption, otherwise I’d just be using plain IP tunnelling! The simple way of performing these steps is to download easy-rsa, which automates a lot of what is covered here, but I shall leave that as an exercise for the reader.
The following voodoo creates a large prime for Diffie-Hellman key exchange. This allows two computers to each generate a private, one-time-only number, exchange public values derived from it, and arrive at a shared secret known to both parties. Anyone intercepting those exchanged values will be unable to compute the shared secret, as doing so requires one of the private one-time numbers. The cool thing about the shared secret is that you can then use it as a symmetric encryption key and commence secure dialogue.
$ openssl dhparam -out dh2048.pem 2048
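The exchange itself can be illustrated with toy numbers. This is a sketch only: real Diffie-Hellman uses 2048-bit primes like the one just generated, and the private exponents are random rather than hand-picked.

```shell
#!/bin/sh
# Toy Diffie-Hellman with tiny, hand-picked numbers -- illustration only.
modexp() {              # modexp BASE EXP MOD, via repeated multiplication
    r=1; i=0
    while [ "$i" -lt "$2" ]; do r=$(( r * $1 % $3 )); i=$(( i + 1 )); done
    echo "$r"
}
p=23; g=5               # public prime and generator (both sides know these)
a=6;  b=15              # each side's private one-time number (never shared)
A=$(modexp $g $a $p)    # Alice sends A over the wire
B=$(modexp $g $b $p)    # Bob sends B over the wire
# Each side combines the other's public value with its own private number
# and arrives at the same shared secret.
echo "Alice derives $(modexp $B $a $p), Bob derives $(modexp $A $b $p)"
```

Both sides derive 2, even though neither `a` nor `b` ever crossed the wire; an eavesdropper seeing only `p`, `g`, `A` and `B` cannot (feasibly, at real sizes) recover it.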
Next up we generate the private key and certificate for the certificate authority. The former you want to keep very safe! Why? The certificate is public and can be used to encrypt data sent to a server; the private key is the only thing that can decrypt that data. If the private key is kept secure, then you can guarantee that the only party able to read the message is the intended recipient.
$ openssl req -days 3560 -nodes -new -x509 -keyout ca.key -out ca.crt
Next up we create a key and a certificate signing request for the server, then have the CA sign the certificate. The signing process enables one party to trust another, as their certificates will have been signed by a common certificate authority.
$ openssl req -days 3560 -nodes -new -keyout server.key -out server.csr
$ openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 3560
Finally create a key and signed certificate for the client:
$ openssl req -days 3560 -nodes -new -keyout client.key -out client.csr
$ openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 3560
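Since the theme here is automation, it is worth noting that the whole dance can be scripted non-interactively by supplying the subject on the command line with `-subj`. A sketch, with placeholder subject names (substitute your own), plus a sanity check that the signatures chain back to the CA:

```shell
#!/bin/sh
set -e
# Same steps as above, but -subj suppresses the interactive prompts.
# The subject names here are placeholders for illustration.
openssl req -days 3560 -nodes -new -x509 -subj "/CN=Example Test CA" \
    -keyout ca.key -out ca.crt
openssl req -nodes -new -subj "/CN=server" \
    -keyout server.key -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out server.crt -days 3560
openssl req -nodes -new -subj "/CN=client" \
    -keyout client.key -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out client.crt -days 3560
# Both should report "OK"; if not, the CA didn't sign them.
openssl verify -CAfile ca.crt server.crt client.crt
```

Handy if, like me, you intend to rebuild the whole environment from scratch on a schedule.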
With the hard bit done we can set up the OpenVPN server. After installing it, create the configuration /etc/openvpn/server.conf with the following and restart the OpenVPN service.
proto udp
dev tun
ca /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key
dh /etc/openvpn/dh2048.pem
server 192.168.96.0 255.255.255.0
push "route 172.16.0.0 255.255.0.0"
keepalive 10 120
comp-lzo
persist-key
persist-tun
verb 3
A bit of explanation as to the settings. The first group of options specifies that we will be communicating via unreliable (but fast) UDP, and that we will be using a tunnel device, i.e. layer 3 packets will be sent and received. Next come the paths to the keys and certificates we just created, then the block defining the networking magic. The server option will allocate tunnel endpoint addresses out of the 192.168.96.0/24 range (unlikely to clash with Wi-Fi-allocated addresses when roaming with my laptop) and will advertise the 172.16.0.0/16 route to all clients. This is the internal network address block of my OpenStack tenant which I want to access from my laptop. And that’s it. Easy, no?
Next up, set up the client endpoint. Much of this is self-explanatory; suffice to say remote is the public IP address of my VPN endpoint.
client
proto udp
dev tun
remote 22.214.171.124
nobind
persist-key
persist-tun
ca /home/simon/ca.crt
cert /home/simon/client.crt
key /home/simon/client.key
comp-lzo
verb 3
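Assuming the above is saved as client.conf (the filename is arbitrary), the client can be run in the foreground while testing:

```shell
# Run in the foreground so the log output (verb 3) is visible; root is
# needed to create the tun device. Daemonise it once you're happy.
sudo openvpn --config client.conf
```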
Firing up the client process works as expected: the tunnel device is allocated an address out of the correct pool
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:58:aa:49 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 brd 192.168.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe58:aa49/64 scope link
       valid_lft forever preferred_lft forever
26: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 100
    link/none
    inet 192.168.96.6 peer 192.168.96.5/32 scope global tun0
       valid_lft forever preferred_lft forever
the correct routes are added
default via 192.168.0.254 dev eth0
172.16.0.0/16 via 192.168.96.5 dev tun0
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.1
192.168.96.1 via 192.168.96.5 dev tun0
192.168.96.5 dev tun0 proto kernel scope link src 192.168.96.6
and I can ping the VPN endpoint’s private IP address, success!
PING 172.16.0.16 (172.16.0.16) 56(84) bytes of data.
64 bytes from 172.16.0.16: icmp_seq=1 ttl=64 time=1.24 ms

--- 172.16.0.16 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.245/1.245/1.245/0.000 ms
Firewall and Routing
But that success is short lived. One aspect of our deployment of OpenStack is the default networking filters. These include rules specifying that IP packets leaving a virtual machine must only come from that machine. It makes sense that you don’t want some operator impersonating another virtual machine; however, it makes routing impossible. In this case, if I want to ping another machine via the VPN gateway, that ICMP request needs to be routed by the VPN server to another box on the network. As the source address of this packet is 192.168.96.6 (the ping reply will be destined for this address), the packet gets filtered as soon as it leaves the VM, because it isn’t from 172.16.0.16. Additionally, you’d need to advertise a route back to 192.168.96.0/24 for the reply, which is another added complexity.
Enter source network address translation. On the server we can specify that any packets routed out of the VM with a different source address are rewritten to look like they originated on the server, thereby bypassing the security filters. Awesome. When packets are returned from the other machine on the private network, the VPN server is then responsible for translating the destination back to the original sender and forwarding them on. How it does this is beyond the scope of this post! Here are my firewall rules:
Chain INPUT (policy DROP 3141 packets, 265K bytes)
 pkts bytes target prot opt in   out  source         destination
 162K   26M ACCEPT all  --  any  any  anywhere       anywhere     state RELATED,ESTABLISHED
10759  631K ACCEPT tcp  --  any  any  anywhere       anywhere     tcp dpt:ssh
    3   126 ACCEPT udp  --  any  any  anywhere       anywhere     udp dpt:openvpn
   29  1456 ACCEPT icmp --  any  any  anywhere       anywhere

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target prot opt in   out  source         destination
    0     0 ACCEPT all  --  tun+ any  anywhere       anywhere
    0     0 ACCEPT all  --  any  any  172.16.0.0/16  anywhere

Chain OUTPUT (policy ACCEPT 186K packets, 36M bytes)
 pkts bytes target prot opt in   out  source         destination
Importantly, we accept inbound OpenVPN traffic, or else the tunnel couldn’t be established, and we allow the forwarding of any packets coming out of a VPN tunnel device and of any packets originating within the trusted private network. My NAT rules look like the following:
Chain PREROUTING (policy ACCEPT 13939 packets, 899K bytes)
 pkts bytes target     prot opt in   out   source    destination

Chain INPUT (policy ACCEPT 10793 packets, 633K bytes)
 pkts bytes target     prot opt in   out   source    destination

Chain OUTPUT (policy ACCEPT 27630 packets, 2320K bytes)
 pkts bytes target     prot opt in   out   source    destination

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in   out   source    destination
27630 2320K MASQUERADE all  --  any  eth0  anywhere  anywhere
This applies the SNAT previously described to packets leaving via eth0, including those originating from the VPN tunnel. And that’s it: now I can get access to some 65,000 virtual machines with just a pair of public IP addresses.
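For completeness, rules along these lines can be recreated with something like the following sketch. The interface names and networks are the ones from this post’s setup; adjust to taste.

```shell
#!/bin/sh
# Sketch of the rules shown above; run as root on the VPN server.

# Let the kernel route packets between tun0 and eth0 at all.
sysctl -w net.ipv4.ip_forward=1

# INPUT: keep established flows, SSH, OpenVPN (1194/udp) and ping.
iptables -P INPUT DROP
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp --dport ssh -j ACCEPT
iptables -A INPUT -p udp --dport openvpn -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT

# FORWARD: only traffic from tunnel devices or the trusted private network.
iptables -P FORWARD DROP
iptables -A FORWARD -i tun+ -j ACCEPT
iptables -A FORWARD -s 172.16.0.0/16 -j ACCEPT

# NAT: masquerade anything leaving via eth0 so replies come back to us.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

Remember these are not persistent across reboots on their own; use your distribution’s mechanism (iptables-save and friends) to keep them.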