Enable IPv4 Access in EUserv IPv6-only VS2-free

edited March 2021 in Technical

This post was originally published on the yoursunny.com blog: https://yoursunny.com/t/2020/EUserv-IPv4/

EUserv is a virtual private server (VPS) provider in Germany.
Notably, they offer a container-based Linux server, VS2-free, free of charge.
VS2-free comes with one 1GHz CPU core, 1GB memory, and 10GB storage.
Although I already have more than enough servers to play with, who doesn't like some more computing resources for free?

There's one catch: the VS2-free is IPv6-only.
It neither has a public IPv4 address, nor offers NAT-based IPv4 access.
All you can have is a single /128 IPv6 address.

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
546: eth0@if547: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b2:77:4b:c0:eb:0b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 2001:db8:6:1::6dae/128 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::5ed4:d66f:bd01:6936/64 scope link
       valid_lft forever preferred_lft forever

If I attempt to access an IPv4-only destination, a "Network is unreachable" error appears:

$ host lgger.nexusbytes.com
lgger.nexusbytes.com has address 46.4.199.225
$ ping -n -c 4 lgger.nexusbytes.com
connect: Network is unreachable

Not having IPv4 access severely restricts the usefulness of the VS2-free, because I would be unable to access many external resources that are not yet IPv6-enabled.
Is there a way to get some IPv4 access in the IPv6-only VS2-free vServer?

NAT64

Stateful NAT64 translation is a mechanism that allows IPv6-only clients to contact IPv4 servers using unicast UDP, TCP, or ICMP.
It relies on a dual-stack server, known as a NAT64 translator, to proxy packets between IPv6 and IPv4 networks.

There are a number of public NAT64 services in Europe that would enable IPv4 access from my server.
To use NAT64, all I need to do is change the DNS settings on my server:

$ sudoedit /etc/resolvconf/resolv.conf.d/base
    nameserver 2a01:4f9:c010:3f02::1
    nameserver 2a00:1098:2c::1
    nameserver 2a00:1098:2b::1

$ sudo resolvconf -u

Note that on a Debian 10 system with the resolvconf package, the proper way to change DNS servers is to edit /etc/resolvconf/resolv.conf.d/base and then execute resolvconf -u to regenerate /etc/resolv.conf.
If you modify /etc/resolv.conf directly, the changes will be overwritten during the next reboot.
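
If you want to confirm that the regenerated file picked up the new resolvers, a quick check is:

$ grep ^nameserver /etc/resolv.conf

The output should list the three nameserver lines from the base file, possibly alongside entries added by other resolvconf hooks.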

After making this change, DNS responses for IPv4-only destinations contain additional IPv6 addresses that belong to the NAT64 translators, which makes the connection possible:

$ host lgger.nexusbytes.com
lgger.nexusbytes.com has address 46.4.199.225
lgger.nexusbytes.com has IPv6 address 2a00:1098:2c::1:2e04:c7e1
lgger.nexusbytes.com has IPv6 address 2a01:4f9:c010:3f02:64:0:2e04:c7e1
lgger.nexusbytes.com has IPv6 address 2a00:1098:2b::2e04:c7e1

$ ping -n -c 4 lgger.nexusbytes.com
PING lgger.nexusbytes.com(2a00:1098:2c::1:2e04:c7e1) 56 data bytes
64 bytes from 2a00:1098:2c::1:2e04:c7e1: icmp_seq=1 ttl=41 time=39.9 ms
64 bytes from 2a00:1098:2c::1:2e04:c7e1: icmp_seq=2 ttl=41 time=39.7 ms
64 bytes from 2a00:1098:2c::1:2e04:c7e1: icmp_seq=3 ttl=41 time=39.6 ms
64 bytes from 2a00:1098:2c::1:2e04:c7e1: icmp_seq=4 ttl=41 time=39.8 ms
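
As an aside, the synthesized AAAA records are easy to recognize: a DNS64 resolver simply embeds the 32-bit IPv4 address into its NAT64 prefix (the address format is defined in RFC 6052). For instance, 46.4.199.225 written in hexadecimal is 2e04:c7e1, which is exactly the suffix of the addresses above:

$ printf '2a00:1098:2b::%02x%02x:%02x%02x\n' 46 4 199 225
2a00:1098:2b::2e04:c7e1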

It is easy to gain IPv4 access on the EUserv VS2-free container by using a public NAT64 service, but there are several drawbacks:

  • The IPv4 addresses of public NAT64 services are shared by many users.
    If any other user misbehaves, the shared IPv4 address of the NAT64 translator could be blocklisted by the destination IPv4 service.

  • The NAT64 translator could apply rate limits if it gets busy.

  • While we can contact an IPv4-only destination by its hostname, it is still not possible to contact a literal IPv4 address:

    $ ping 8.8.8.8
    connect: Network is unreachable
    

IPv4 NAT over VXLAN

To get true IPv4 access on an IPv6-only server, we need to create a tunnel between the IPv6-only server and a dual-stack server, and then configure Network Address Translation (NAT) on the dual stack server.
Many people would think of using VPN software, such as OpenVPN or WireGuard.
However, a VPN is overkill here, because there is a lighter-weight solution: VXLAN.

VXLAN, or Virtual eXtensible Local Area Network, is a framework for overlaying virtualized layer 2 networks over layer 3 networks.
In our case, I can create a virtualized Ethernet (layer 2) network over an IPv6 (layer 3) network.
Then, I can assign IPv4 addresses to the virtual Ethernet adapters, in order to give IPv4 access to the previously IPv6-only VS2-free vServer.

I have a small dual-stack server in Germany, offered by Gullo's Hosting.
It is an OpenVZ 7 container.
It runs Debian 10, the same operating system as my VS2-free.
I will be using this server to share IPv4 to the VS2-free.

In the examples below:

  • 2001:db8:473a:723d:276e::2 is the public IPv6 address of the dual-stack server.
  • 2001:db8:6:1::6dae is the public IPv6 address of the IPv6-only server.
  • 192.0.2.1 is the public IPv4 address of the dual-stack server.

After reverting the DNS changes from the previous section, I execute the following commands on the EUserv vServer to set up a VXLAN tunnel:

sudo ip link add vx84 type vxlan id 0 remote 2001:db8:473a:723d:276e::2 local 2001:db8:6:1::6dae dstport 4789
sudo ip link set vx84 mtu 1420
sudo ip link set vx84 up
sudo ip addr add 192.168.84.2/24 dev vx84
sudo ip route add 0.0.0.0/0 via 192.168.84.1

Note that I reduced the MTU of the VXLAN tunnel interface to 1420, down from the default 1500.
This is necessary to accommodate the VXLAN encapsulation overhead, so that the encapsulated packets still fit within the underlying interface's normal 1500-byte MTU.
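
For reference, VXLAN over IPv6 adds about 70 bytes of overhead per packet: 40 bytes for the outer IPv6 header, 8 for UDP, 8 for the VXLAN header, and 14 for the inner Ethernet header. That leaves at most 1500 - 70 = 1430 bytes for the inner packet, so an MTU of 1420 keeps a small safety margin.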

On the dual-stack server, I execute these commands to set up its end of the tunnel and enable NAT:

sudo ip link add vx84 type vxlan id 0 remote 2001:db8:6:1::6dae local 2001:db8:473a:723d:276e::2 dstport 4789
sudo ip link set vx84 mtu 1420
sudo ip link set vx84 up
sudo ip addr add 192.168.84.1/24 dev vx84
sudo iptables-legacy -t nat -A POSTROUTING -s 192.168.84.0/24 ! -d 192.168.84.0/24 -j SNAT --to 192.0.2.1

It's worth noting that the command for enabling NAT is iptables-legacy instead of iptables.
On Debian 10 there are two variants of iptables: the default nftables-based iptables-nft and the older iptables-legacy, and they talk to different kernel APIs.
Although both commands would succeed, only iptables-legacy is effective in an OpenVZ 7 container.
This had me scratching my head for a while.
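
One related detail that is easy to miss: the dual-stack server must have IPv4 forwarding enabled for the SNAT rule to actually pass traffic. Depending on the container template it may already be on; if not, it can be enabled with:

sudo sysctl -w net.ipv4.ip_forward=1

To make the setting survive reboots, add net.ipv4.ip_forward=1 to /etc/sysctl.conf or a file under /etc/sysctl.d/.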

After this setup, I'm able to access IPv4 from the IPv6-only server:

$ traceroute -n -q1 lgger.nexusbytes.com
traceroute to lgger.nexusbytes.com (46.4.199.225), 30 hops max, 60 byte packets
 1  192.168.84.1  23.566 ms
 2  *
 3  213.239.229.89  34.058 ms
 4  213.239.229.130  23.615 ms
 5  94.130.138.54  24.077 ms
 6  46.4.199.225  23.955 ms

In Wireshark, these packets would look like this:

Frame 5: 146 bytes on wire (1168 bits), 146 bytes captured (1168 bits)
Linux cooked capture v1
Internet Protocol Version 6, Src: 2001:db8:6:1::6dae, Dst: 2001:db8:473a:723d:276e::2
User Datagram Protocol, Src Port: 53037, Dst Port: 4789
Virtual eXtensible Local Area Network
Ethernet II, Src: b6:ab:7c:af:51:d1 (b6:ab:7c:af:51:d1), Dst: be:ce:c9:cf:a7:f3 (be:ce:c9:cf:a7:f3)
Internet Protocol Version 4, Src: 192.168.84.2, Dst: 46.4.199.225
User Datagram Protocol, Src Port: 50047, Dst Port: 33439
Data (32 bytes)

Make Them Persistent

The effect of these ip commands will be lost after a reboot.
Normally the VXLAN tunnel should be written into the ifupdown configuration file, but as I discovered earlier, OpenVZ 7 would revert any modifications to the /etc/network/interfaces file.
Thus, I have to apply these changes dynamically using a systemd service.
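
For reference, on a system where ifupdown configuration is honored, the tunnel end on the IPv6-only server could be declared in /etc/network/interfaces roughly like this (a sketch only; on OpenVZ 7 it would simply be reverted):

auto vx84
iface vx84 inet static
    address 192.168.84.2/24
    pre-up ip link add vx84 type vxlan id 0 remote 2001:db8:473a:723d:276e::2 local 2001:db8:6:1::6dae dstport 4789
    pre-up ip link set vx84 mtu 1420
    up ip route add 0.0.0.0/0 via 192.168.84.1
    post-down ip link del vx84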

The systemd service unit for the IPv6-only server is:

[Unit]
Description=VXLAN tunnel to vps9
After=network-online.target
Wants=network-online.target

[Service]
ExecStartPre=ip link add vx84 type vxlan id 0 remote 2001:db8:473a:723d:276e::2 local 2001:db8:6:1::6dae dstport 4789
ExecStartPre=ip link set vx84 mtu 1420
ExecStartPre=ip link set vx84 up
ExecStartPre=ip addr add 192.168.84.2/24 dev vx84
ExecStartPre=ip route add 0.0.0.0/0 via 192.168.84.1
ExecStart=true
RemainAfterExit=yes
ExecStopPost=ip link del vx84

[Install]
WantedBy=multi-user.target

The systemd service unit for the dual-stack server is:

[Unit]
Description=VXLAN tunnel to vps2
After=network-online.target
Wants=network-online.target

[Service]
ExecStartPre=ip link add vx84 type vxlan id 0 remote 2001:db8:6:1::6dae local 2001:db8:473a:723d:276e::2 dstport 4789
ExecStartPre=ip link set vx84 mtu 1420
ExecStartPre=ip link set vx84 up
ExecStartPre=ip addr add 192.168.84.1/24 dev vx84
ExecStartPre=iptables-legacy -t nat -A POSTROUTING -s 192.168.84.0/24 ! -d 192.168.84.0/24 -j SNAT --to 192.0.2.1
ExecStart=true
RemainAfterExit=yes
ExecStopPost=iptables-legacy -t nat -D POSTROUTING -s 192.168.84.0/24 ! -d 192.168.84.0/24 -j SNAT --to 192.0.2.1
ExecStopPost=ip link del vx84

[Install]
WantedBy=multi-user.target

On each server, the corresponding service unit file should be saved as /etc/systemd/system/vx84.service.
Then, I can enable the service unit with these commands:

sudo systemctl daemon-reload
sudo systemctl enable vx84
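
Enabling the unit only arranges for it to run at boot; to bring the tunnel up right away without rebooting, the service can also be started manually:

sudo systemctl start vx84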

They will take effect after a reboot:

$ ip addr show vx84
4: vx84: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether f2:4c:5d:6c:4b:25 brd ff:ff:ff:ff:ff:ff
    inet 192.168.84.2/24 scope global vx84
       valid_lft forever preferred_lft forever
    inet6 fe80::f04c:5dff:fe6c:4b25/64 scope link
       valid_lft forever preferred_lft forever

$ ping -c 4 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=28.9 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=28.7 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=28.9 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=28.10 ms

Conclusion

This article describes two methods of gaining IPv4 access on an IPv6-only server such as the EUserv VS2-free.

  • Use a public NAT64 translator.
  • Establish a VXLAN tunnel to a dual-stack server, and then configure IPv4 addresses and NAT on the virtual Ethernet interfaces.

To work around the OpenVZ 7 limitation that /etc/network/interfaces cannot be modified, we use a systemd service unit to dynamically establish and tear down the VXLAN tunnel and related configuration.


Comments

  • ehab Content Writer

    interesting -- i have bookmarked for reading. THANK you

  • Not_Oles Hosting Provider, Content Writer

    @yoursunny Thanks! This is great! Apparently, people could use a public NAT64 service with my Premium $7 Deal. Apparently, I even might be able to run the VXLAN on the host node or on another dual 4,6 connected VPS so that my IPv6-only clients could easily access IPv4 via VXLAN.

  • @Not_Oles said:
    @yoursunny Thanks! This is great! Apparently, people could use a public NAT64 service with my Premium $7 Deal. Apparently, I even might be able to run the VXLAN on the host node or on another dual 4,6 connected VPS so that my IPv6-only clients could easily access IPv4 via VXLAN.

    This tutorial is for users who set up their own services.
    As a premium provider, you ought to have an automated setup like Gullo (i.e. private IPv4 in the container and NAT on the host node), so that users don't have to fiddle with VXLAN themselves.


  • Mason Administrator, OG

    Good write up! Will keep this in mind if I ever snag a cheap IPv6 only VPS.

    Unfortunately, I found my free EUServ IPv6 VPS to be literally useless. Not sure if it was just the node I was on or something, but it was hair-pulling slow. Reinstall took upwards of 3 or 4 hours. Apt update/upgrade took nearly an hour. Sometimes even free ain't worth it.


  • @Mason said:
    Good write up! Will keep this in mind if I ever snag a cheap IPv6 only VPS.

    Unfortunately, I found my free EUServ IPv6 VPS to be literally useless. Not sure if it was just the node I was on or something, but it was hair-pulling slow. Reinstall took upwards of 3 or 4 hours. Apt update/upgrade took nearly an hour. Sometimes even free ain't worth it.

    Agreed. Same experience. I let it expire. (Actually tried deleting it, but couldn't figure out how. Sent a message/ticket to support, that they could delete it/my account. But ended up without reply/answer until it just timed out for me not renewing contract manually.)

  • Hi Push-up king :)

    Thanks for the sharing. Indeed, too technical for me but I really wanna try =)

    Two questions though -

    (1) Correct me if I am wrong - it seems to be a trial version for 1 month according to "Contract + Termination"

    (2) Is it possible to make it accept connection (external / incoming) via IPv4?

  • edited December 2020

    Notice: the article has been updated to include a command to lower the MTU of the VXLAN interface. TCP does not work properly without this command.


    @Mason said:
    Unfortunately, I found my free EUServ IPv6 VPS to be literally useless. Not sure if it was just the node I was on or something, but it was hair-pulling slow. Reinstall took upwards of 3 or 4 hours. Apt update/upgrade took nearly an hour. Sometimes even free ain't worth it.

    EUserv seems to shift free containers around. When I posted YABS on 2020-11-22, the CPU was an AMD Athlon(tm) II X4 640 processor. Now lscpu indicates Intel(R) Xeon(R) CPU E3-1270 v3 @ 3.50GHz. I don't know when this happened.


    I tried to (re)encode one of my push-up videos.
    For the same input file (4m50s) and ffmpeg parameters (VP9 240p), Oracle Cloud Tokyo is marginally faster than EUserv.

    In Oracle Cloud free tier:

    • Advertised: 1/8 OCPU.
    • The server sees 2 cores, EPYC 7551 2.0 GHz. This CPU was introduced in 2017.
    • ffmpeg is using 22% of all cores, or roughly 880 MHz. There's a steal of 68% among all cores.
    • ffmpeg final report: frame= 8675 fps=2.4 q=0.0 Lsize= 24828kB time=00:04:50.36 bitrate= 700.5kbits/s speed=0.0805x.

    In EUserv VS2-free:

    • Advertised: 1GHz CPU.
    • The server sees 1 core, E3-1270 v3 3.5 GHz. This CPU was introduced in 2013.
    • ffmpeg is using 33% of the core, or roughly 1155 MHz. There's no steal.
    • ffmpeg final report: frame= 8675 fps=2.4 q=0.0 Lsize= 24533kB time=00:04:50.36 bitrate= 692.2kbits/s speed=0.0802x.

    ffmpeg gets higher CPU frequency in EUserv, but the older CPU and its more restricted instruction set slow things down.


    @swat4 said:
    (1) Correct me if I am wrong - it seems to be a trial version for 1 month according to "Contract + Termination"

    To keep using EUserv VS2-free, you have to click the "renew contract" button every month. You will get an email reminder when this button appears.


    @swat4 said:
    (2) Is it possible to make it accept connection (external / incoming) via IPv4?

    Yes, you can set up DNAT on the dual-stack server. It's the same as configuring port forwarding on a home router.
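
    For example, a rule along these lines on the dual-stack server would forward TCP port 8080 arriving on its public IPv4 address to the IPv6-only server over the tunnel (the port is purely illustrative):

    sudo iptables-legacy -t nat -A PREROUTING -d 192.0.2.1 -p tcp --dport 8080 -j DNAT --to-destination 192.168.84.2:8080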


  • @Mason said:
    Good write up! Will keep this in mind if I ever snag a cheap IPv6 only VPS.

    Unfortunately, I found my free EUServ IPv6 VPS to be literally useless. Not sure if it was just the node I was on or something, but it was hair-pulling slow. Reinstall took upwards of 3 or 4 hours. Apt update/upgrade took nearly an hour. Sometimes even free ain't worth it.

    Me too, it is so slow and only usefulness is i finally figured how to ssh an ipv6. Totally dunno what to do with it.

  • @yoursunny said:
    I tried to (re)encode one of my push-up videos.
    For the same input file (4m50s) and ffmpeg parameters (VP9 240p), Oracle Cloud Tokyo is marginally faster than EUserv.

    The encoding jobs for all resolutions (240p 360p 480p 720p) finished.
    Final result: EUserv is slightly faster than Oracle Cloud Tokyo.

    For comparison, I also included results from two other servers that have unrestricted CPU:

    # Oracle Cloud
    real    223m49.812s
    user    386m26.094s
    sys     19m31.931s
    
    # EUserv
    real    212m10.675s
    user    70m3.186s
    sys     0m10.576s
    
    # Hosterlabs
    real    32m18.028s
    user    55m37.512s
    sys     0m15.915s
    
    # Evolution Host
    real    69m11.844s
    user    117m39.062s
    sys     1m2.854s
    
    

    YABS is an artificial benchmark. We need a push-up benchmark.


  • Awesome. Thanks - this is exactly the solution I was looking for.

  • Great job @yoursunny :+1: I just did a couple pushups as this felt very useful. Cheers :)

  • rm_
    edited December 2020

    @yoursunny
    Using VXLAN is a bit of an overkill. It will work, but since it's an L2 tunnel, it will reduce your MTU by way too much.
    Also it uses some space in the header for its own needs as well.
    Check out ipip6 instead:
    ip -6 tunnel add $TUNNELNAME mode ipip6 local $MYIPV6 remote $REMOTEIPV6
    It's almost the same as HE.net IPv6 tunnels ("sit"), but the other way round, IPv4 over IPv6.
