ipipou: more than just an unencrypted tunnel

What do we say to the god of IPv6? Not today.

And today we'll say the same to the god of encryption.

This article is about an unencrypted IPv4 tunnel — not a "warm tube" one, though, but a modern "LED" one. Raw sockets flicker here, and packets are handled in user space.

There are N tunneling protocols for every taste and color:

  • the stylish, fashionable, youthful WireGuard
  • the multifunctional Swiss Army knives, OpenVPN and SSH
  • the old and not evil GRE
  • the as-simple-as-it-gets, not-at-all-encrypted IPIP
  • the actively developing GENEVE
  • and many others.

But I am a mere programmer, so I will increase N only by a fraction and leave the development of proper protocols to proper developers.

In one as-yet-unborn project I am currently working on, I need to reach hosts behind NAT from the outside. Using protocols with grown-up cryptography for that, I could not shake the feeling of shooting sparrows with a cannon: the tunnel is mostly used just to punch a hole through NAT, and the internal traffic is usually encrypted anyway — everyone pushes for HTTPS these days, after all.

While researching various tunneling protocols, IPIP caught the attention of my inner perfectionist over and over again thanks to its minimal overhead. But for my purposes it has one and a half significant drawbacks:

  • it requires public IPs on both sides,
  • and it offers no authentication whatsoever.

So the perfectionist was driven back into the dark corner of my skull, or wherever it is he sits.

Then one day, reading up on tunnels natively supported in Linux, I came across FOU (Foo over UDP), i.e. whatever-you-like wrapped in UDP. For now, the "whatever" is limited to IPIP and GUE (Generic UDP Encapsulation).

"That's the silver bullet! Me and a simple IPIP for the eyes.” I thought.

In fact, the bullet turned out not to be entirely silver. Encapsulation in UDP solves the first problem: you can connect to clients behind NAT from the outside over a pre-established connection. But here the other half of IPIP's drawbacks blossoms in a new light: anyone on a private network can hide behind the visible public IP and port of the client (in plain IPIP this problem does not exist).

To solve this one-and-a-half problem, the ipipou utility was born. It implements a home-grown mechanism for authenticating the remote host without disrupting the work of FOU itself, which keeps processing packets quickly and efficiently in kernel space.

We don't need your scripts!

OK, if you know the client's public IP and port (for example, nobody goes anywhere, and NAT tries to map ports 1-to-1), you can create an IPIP-over-FOU tunnel with the following commands, no scripts required.

On the server:

# Load the FOU kernel module
modprobe fou

# Create an IPIP tunnel encapsulated in FOU.
# The ipip module is loaded automatically.
ip link add name ipipou0 type ipip \
    remote 198.51.100.2 local 203.0.113.1 \
    encap fou encap-sport 10000 encap-dport 20001 \
    mode ipip dev eth0

# Add the port on which FOU will listen for this tunnel
ip fou add port 10000 ipproto 4 local 203.0.113.1 dev eth0

# Assign an IP address to the tunnel
ip address add 172.28.0.0 peer 172.28.0.1 dev ipipou0

# Bring the tunnel up
ip link set ipipou0 up

On the client:

modprobe fou

ip link add name ipipou1 type ipip \
    remote 203.0.113.1 local 192.168.0.2 \
    encap fou encap-sport 10001 encap-dport 10000 encap-csum \
    mode ipip dev eth0

# The local, peer, peer_port and dev options may not be supported by older kernels; they can be omitted.
# peer and peer_port are used to establish the connection right when the FOU listener is created.
ip fou add port 10001 ipproto 4 local 192.168.0.2 peer 203.0.113.1 peer_port 10000 dev eth0

ip address add 172.28.0.1 peer 172.28.0.0 dev ipipou1

ip link set ipipou1 up

where

  • ipipou* — name of the local tunnel network interface
  • 203.0.113.1 — public IP of the server
  • 198.51.100.2 — public IP of the client
  • 192.168.0.2 — client IP assigned to the eth0 interface
  • 10001 — local client port for FOU
  • 20001 — public client port for FOU
  • 10000 — public server port for FOU
  • encap-csum — option to add a UDP checksum to the encapsulating UDP packets; it can be replaced with noencap-csum to skip the checksum, since integrity is already checked by the outer encapsulation layer (while the packet is inside the tunnel)
  • eth0 — local interface the ipip tunnel is bound to
  • 172.28.0.1 — IP of the client's tunnel interface (private)
  • 172.28.0.0 — IP of the server's tunnel interface (private)

As long as the UDP connection is alive, the tunnel stays operational; once it breaks, your luck may vary: if the client's IP:port stays the same, the tunnel survives; if it changes, the tunnel breaks.

The easiest way to undo all of this is to unload the kernel modules: modprobe -r fou ipip

Even when authentication is not required, the client's public IP and port are not always known, and are often unpredictable or volatile (depending on the type of NAT). If you omit encap-dport on the server side, the tunnel will not work: it is not smart enough to pick up the remote port from the connection. In that case ipipou can help too; or, well, WireGuard and its kin may serve you instead.

How does it work?

The client (usually behind NAT) brings up the tunnel (as in the example above) and sends an authentication packet to the server so that it sets up the tunnel on its side. Depending on the settings, this can be an empty packet (just so the server can see the public IP:port of the connection), or one carrying data by which the server can identify the client. The data can be a simple plaintext passphrase (HTTP Basic Auth comes to mind) or specially formatted data signed with a private key (analogous to HTTP Digest Auth, only stronger; see the client_auth function in the code).
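
For illustration, here is a minimal sketch of what a signed authentication payload could look like. This is not ipipou's actual wire format (see client_auth in the code for that), and for brevity it signs with an HMAC over a shared secret, while ipipou's config also points to an asymmetric key pair (auth-key-b64 / auth-remote-pubkey-b64); all field layouts and names below are made up:

import hashlib
import hmac
import os
import struct
import time

def make_auth_packet(secret: bytes) -> bytes:
    # Hypothetical payload: timestamp + random nonce, signed with the shared
    # secret. ipipou's real format lives in its client_auth function.
    body = struct.pack("!dI", time.time(), int.from_bytes(os.urandom(4), "big"))
    sig = hmac.new(secret, body, hashlib.sha256).digest()
    return body + sig

def check_auth_packet(packet: bytes, secret: bytes, max_age: float = 30.0) -> bool:
    body, sig = packet[:-32], packet[-32:]
    if not hmac.compare_digest(hmac.new(secret, body, hashlib.sha256).digest(), sig):
        return False
    ts, _nonce = struct.unpack("!dI", body)
    return abs(time.time() - ts) <= max_age  # reject overly stale packets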

On the server (the public IP side), when ipipou starts up, it creates an nfqueue queue handler and configures netfilter so that packets go where they should: packets that initiate a connection go into the nfqueue queue, while [almost] everything else goes straight to the FOU listener.

For those not in the know: nfqueue (NetfilterQueue) is a special facility for amateurs who can't write kernel modules. Via netfilter (nftables/iptables) it lets you redirect network packets into user space and process them there by primitive makeshift means: modify them (if you like) and hand them back to the kernel, or drop them.

Bindings for nfqueue exist for some programming languages; none was found for bash (heh, no surprise), so I had to use Python: ipipou uses NetfilterQueue.
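
A minimal sketch of how such a handler gets wired up with NetfilterQueue (the queue number, port and iptables rule below are illustrative assumptions, not ipipou's actual rules):

# Assumed setup, not ipipou's actual rule: steer incoming UDP packets for
# the FOU port into queue 0, e.g.:
#   iptables -t mangle -A PREROUTING -p udp --dport 10000 -j NFQUEUE --queue-num 0
from netfilterqueue import NetfilterQueue

def looks_authentic(payload: bytes) -> bool:
    # Placeholder check; a real one could run check_auth_packet() from the
    # sketch above over the UDP payload.
    return len(payload) > 28  # outer IPv4 (20) + UDP (8) headers plus data

def handle(pkt):
    payload = pkt.get_payload()  # raw bytes of the queued IP packet
    if looks_authentic(payload):
        pkt.accept()             # hand the packet back to the kernel
    else:
        pkt.drop()

nfq = NetfilterQueue()
nfq.bind(0, handle)              # attach the callback to queue 0
try:
    nfq.run()                    # blocks, calling handle() for each packet
finally:
    nfq.unbind()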

If performance is not critical, this thing lets you cook up your own packet-handling logic at a fairly low level relatively quickly and easily: for example, to sculpt experimental data-transfer protocols, or to troll local and remote services with non-standard behavior.

Raw sockets work hand in hand with nfqueue: you can send an arbitrary hand-crafted packet out of an interface using a raw socket, although you will have to tinker a bit more with generating such a packet. This is how the authentication packets are created in ipipou.
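
For the curious, here is a sketch of building and sending a UDP packet by hand through a raw socket (illustrative only, not ipipou's code; addresses and ports are placeholders). Linux fills in the IPv4 header checksum by itself, and a zero UDP checksum is legal for IPv4:

import socket
import struct

def send_raw_udp(src: str, dst: str, sport: int, dport: int, data: bytes) -> None:
    # Requires CAP_NET_RAW (e.g. run as root).
    udp_len = 8 + len(data)
    # UDP header: source port, dest port, length, checksum (0 = none, OK for IPv4)
    udp = struct.pack("!HHHH", sport, dport, udp_len, 0)
    # IPv4 header: version/IHL, TOS, total length, id, flags/fragment offset,
    # TTL, protocol (17 = UDP), header checksum (kernel fills it in), src, dst
    ip = struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0, 20 + udp_len, 0, 0, 64, socket.IPPROTO_UDP, 0,
        socket.inet_aton(src), socket.inet_aton(dst),
    )
    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
    try:
        s.sendto(ip + udp + data, (dst, 0))
    finally:
        s.close()

# e.g.: send_raw_udp("192.168.0.2", "203.0.113.1", 10001, 10000, b"auth...")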

Since ipipou processes only the first packets of a connection (well, plus those that manage to leak into the queue before the connection is established), performance hardly suffers.

As soon as the ipipou server receives an authenticated packet, the tunnel is created and all subsequent packets of the connection are handled by the kernel, bypassing nfqueue. When the connection goes stale, the first packet of the next one again lands in the nfqueue queue; depending on the settings, if it is not an authentication packet but comes from the last remembered client IP and port, it can either be passed through or dropped. If an authenticated packet arrives from a new IP and port, the tunnel is reconfigured to use them.
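
In pseudo-Python, the server-side decision for a queued packet could look roughly like this (a sketch of the logic just described, with placeholder helpers; not ipipou's actual code):

from dataclasses import dataclass
from typing import Optional, Tuple

Peer = Tuple[str, int]  # (ip, port)

@dataclass
class TunnelState:
    peer: Optional[Peer] = None  # last remembered client IP and port

def on_queued_packet(pkt, state: TunnelState, allow_known_peer: bool) -> None:
    src = extract_outer_src(pkt)
    if is_authentic(pkt):
        if src != state.peer:
            reconfigure_tunnel(src)  # re-point FOU/IPIP at the new endpoint
            state.peer = src
        pkt.accept()
    elif allow_known_peer and src == state.peer:
        pkt.accept()  # not signed, but from the known peer: let it through
    else:
        pkt.drop()

def extract_outer_src(pkt) -> Peer:
    ...  # parse the outer IP/UDP headers out of pkt.get_payload()

def is_authentic(pkt) -> bool:
    ...  # e.g. check_auth_packet() from the earlier sketch

def reconfigure_tunnel(peer: Peer) -> None:
    ...  # e.g. rebuild the ip link / ip fou configuration for the new peer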

Plain IPIP-over-FOU has one more problem when working with NAT: you cannot create two UDP-encapsulated IPIP tunnels with the same pair of IPs, because the FOU and IPIP modules are fairly isolated from each other. That is, a pair of clients behind the same public IP cannot connect to the same server this way at the same time. In the future this may be solved at the kernel level, but that is not certain. In the meantime, NAT problems can be solved by NAT: if a pair of IP addresses is already taken by another tunnel, ipipou will NAT the public IP to an alternative private one, and voilà! — you can create tunnels until you run out of ports.

Since not all packets of a connection are signed, this simple protection is vulnerable to MITM: if a villain lurks on the path between the client and the server, able to listen to the traffic and manipulate it, he can redirect authenticated packets through another address and create a tunnel from an untrusted host.

If anyone has any ideas how to fix this while leaving the bulk of the traffic in the core, feel free to speak up.

By the way, encapsulation in UDP has proven itself very well. Compared to encapsulation directly over IP, it is much more stable and often faster, despite the extra overhead of the UDP header. This is because most hosts on the Internet work decently only with the three most popular protocols: TCP, UDP, ICMP. A noticeable fraction may drop everything else entirely, or handle it more slowly, because they are optimized for those three only.

For example, this is why QUIC, on which HTTP/3 is built, was created over UDP rather than directly over IP.

Well, enough words; time to see how it works in the "real world".

Battle testing

iperf3 was used to emulate the real world. In terms of closeness to reality, that is roughly like emulating the real world in Minecraft, but it will do for now.

Participating in the competition:

  • the reference main channel
  • the hero of this article, ipipou
  • OpenVPN with authentication but no encryption
  • all-inclusive OpenVPN
  • WireGuard without a PresharedKey, with MTU=1440 (since it is IPv4-only)

Technical data for geeks
The metrics were collected with the following commands:

On the client:

UDP

CPULOG=NAME.udp.cpu.log; sar 10 6 >"$CPULOG" & iperf3 -c SERVER_IP -4 -t 60 -f m -i 10 -B LOCAL_IP -P 2 -u -b 12M; tail -1 "$CPULOG"
# Where "-b 12M" is the bandwidth of the main channel divided by the number of streams "-P", so as not to spawn extra packets and spoil the performance.

TCP

CPULOG=NAME.tcp.cpu.log; sar 10 6 >"$CPULOG" & iperf3 -c SERVER_IP -4 -t 60 -f m -i 10 -B LOCAL_IP -P 2; tail -1 "$CPULOG"

ICMP latency

ping -c 10 SERVER_IP | tail -1

On the server (run simultaneously with the client):

UDP

CPULOG=NAME.udp.cpu.log; sar 10 6 >"$CPULOG" & iperf3 -s -i 10 -f m -1; tail -1 "$CPULOG"

TCP

CPULOG=NAME.tcp.cpu.log; sar 10 6 >"$CPULOG" & iperf3 -s -i 10 -f m -1; tail -1 "$CPULOG"

Tunnel Configuration

ipipou
server
/etc/ipipou/server.conf:

server
number 0
fou-dev eth0
fou-local-port 10000
tunl-ip 172.28.0.0
auth-remote-pubkey-b64 eQYNhD/Xwl6Zaq+z3QXDzNI77x8CEKqY1n5kt9bKeEI=
auth-secret topsecret
auth-lifetime 3600
reply-on-auth-ok
verb 3

systemctl start ipipou@server

client
/etc/ipipou/client.conf:

client
number 0
fou-local @eth0
fou-remote SERVER_IP:10000
tunl-ip 172.28.0.1
# pubkey of auth-key-b64: eQYNhD/Xwl6Zaq+z3QXDzNI77x8CEKqY1n5kt9bKeEI=
auth-key-b64 RuBZkT23na2Q4QH1xfmZCfRgSgPt5s362UPAFbecTso=
auth-secret topsecret
keepalive 27
verb 3

systemctl start ipipou@client

openvpn (no encryption, with authentication)
server

openvpn --genkey --secret ovpn.key  # Then ovpn.key has to be handed over to the client
openvpn --dev tun1 --local SERVER_IP --port 2000 --ifconfig 172.16.17.1 172.16.17.2 --cipher none --auth SHA1 --ncp-disable --secret ovpn.key

client

openvpn --dev tun1 --local LOCAL_IP --remote SERVER_IP --port 2000 --ifconfig 172.16.17.2 172.16.17.1 --cipher none --auth SHA1 --ncp-disable --secret ovpn.key

openvpn (with encryption and authentication, over UDP, everything as it should be)
Configured using openvpn-manage

WireGuard
server
/etc/wireguard/server.conf:

[Interface]
Address=172.31.192.1/18
ListenPort=51820
PrivateKey=aMAG31yjt85zsVC5hn5jMskuFdF8C/LFSRYnhRGSKUQ=
MTU=1440

[Peer]
PublicKey=LyhhEIjVQPVmr/sJNdSRqTjxibsfDZ15sDuhvAQ3hVM=
AllowedIPs=172.31.192.2/32

systemctl start wg-quick@server

client
/etc/wireguard/client.conf:

[Interface]
Address=172.31.192.2/18
PrivateKey=uCluH7q2Hip5lLRSsVHc38nGKUGpZIUwGO/7k+6Ye3I=
MTU=1440

[Peer]
PublicKey=DjJRmGvhl6DWuSf1fldxNRBvqa701c0Sc7OpRr4gPXk=
AllowedIPs=172.31.192.1/32
Endpoint=SERVER_IP:51820

systemctl start wg-quick@client

The results

The raw, ugly table
The server's CPU load is not very indicative, because many other services run there as well and sometimes devour resources:

proto bandwidth[Mbps] CPU_idle_client[%] CPU_idle_server[%]
# 20 Mbps channel from a microcomputer (4 cores) to a VPS (1 core) across the Atlantic
# pure
UDP 20.4      99.80 93.34
TCP 19.2      99.67 96.68
ICMP latency min/avg/max/mdev = 198.838/198.997/199.360/0.372 ms
# ipipou
UDP 19.8      98.45 99.47
TCP 18.8      99.56 96.75
ICMP latency min/avg/max/mdev = 199.562/208.919/220.222/7.905 ms
# openvpn0 (auth only, no encryption)
UDP 19.3      99.89 72.90
TCP 16.1      95.95 88.46
ICMP latency min/avg/max/mdev = 191.631/193.538/198.724/2.520 ms
# openvpn (full encryption, auth, etc)
UDP 19.6      99.75 72.35
TCP 17.0      94.47 87.99
ICMP latency min/avg/max/mdev = 202.168/202.377/202.900/0.451 ms
# wireguard
UDP 19.3      91.60 94.78
TCP 17.2      96.76 92.87
ICMP latency min/avg/max/mdev = 217.925/223.601/230.696/3.266 ms

## ~1 Gbps channel between VPSes in Europe and the USA (1 core)
# pure
UDP 729      73.40 39.93
TCP 363      96.95 90.40
ICMP latency min/avg/max/mdev = 106.867/106.994/107.126/0.066 ms
# ipipou
UDP 714      63.10 23.53
TCP 431      95.65 64.56
ICMP latency min/avg/max/mdev = 107.444/107.523/107.648/0.058 ms
# openvpn0 (auth only, no encryption)
UDP 193      17.51  1.62
TCP  12      95.45 92.80
ICMP latency min/avg/max/mdev = 107.191/107.334/107.559/0.116 ms
# wireguard
UDP 629      22.26  2.62
TCP 198      77.40 55.98
ICMP latency min/avg/max/mdev = 107.616/107.788/108.038/0.128 ms

[charts: the 20 Mbps channel]

[charts: the optimistic 1 Gbps channel]

In all cases, ipipou comes pretty close to the baseline channel in performance, which is great!

The unencrypted openvpn tunnel behaved rather strangely in both cases.

If anyone decides to test it, I would be curious to hear your feedback.

May IPv6 and netfilter be with us!

Source: habr.com
