How To Port-Forward Through a VPS
Published on April 1, 2023
There are a lot of valid reasons for wanting to self-host on your own hardware instead of renting a server in the cloud; however, doing so also comes with some drawbacks. First of all, you need a public IP address (static or dynamic) that you are willing to share publicly. Furthermore, most e-mail providers reject e-mails coming from mail servers with a residential IP address and no reverse DNS record, and your ISP might be blocking port 25, so if you’d like to host a mail server, you’ll most likely need a business line.
This guide showcases two ways of forwarding all traffic going to a cheap VPS over to your homelab, making it appear as if your server at home were the VPS. Essentially, you’ll be able to use the VPS “as a public IP address” and forward ports through it. This guide assumes that you are running a Linux-based operating system on both your VPS and your server at home, and that you have at least a basic knowledge of working with the Linux terminal.
Method #1: SSH forwarding
The simplest way to port-forward through a VPS is to use OpenSSH’s built-in remote port forwarding (the -R option).
The main disadvantages of this method are:
- It only allows forwarding TCP ports and not UDP.
- Your server at home will not be able to see the source IP address of connections; instead, all connections will appear to come from localhost.
- The performance of SSH tunnels isn’t as good as that of proper VPN tunnels.
I would not recommend this method for use in a production environment.
Setting up SSH public key authentication
As a prerequisite, you need to have public key authentication set up on your VPS. A lot of hosting providers already require this by default, in which case it is enough to simply copy your existing private key from your personal machine to your server at home (although using a separate key pair is still better for security). Otherwise, make sure you can log in to the VPS from your home server via SSH using public key authentication.
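If you still need to set this up, something along these lines should work (a sketch; adjust the user name and address to your setup):
ssh-keygen -t ed25519            # run on the home server, accept the default file location
ssh-copy-id root@1.2.3.4         # copy the public key to the VPS (requires password login to still be enabled)
If password authentication is already disabled on the VPS, paste the contents of ~/.ssh/id_ed25519.pub into the VPS’s ~/.ssh/authorized_keys through your provider’s console instead.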
Configuring the VPS
Modify the configuration of the SSH daemon. The file we’re looking for can be found under /etc/ssh/sshd_config on Ubuntu. Uncomment / modify the following values:
# make sure you have an SSH key set up for authentication so you don't lock yourself out
PermitRootLogin prohibit-password
AllowTcpForwarding yes
GatewayPorts yes
PermitTunnel yes
Restart the SSH process:
sudo systemctl restart sshd
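You can verify that the new values are active by dumping the effective configuration:
sudo sshd -T | grep -E 'allowtcpforwarding|gatewayports|permittunnel'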
Configuring the home server
Create /etc/systemd/system/portforward.service with the following content:
[Unit]
Description=Port-forward through an SSH tunnel
Requires=network-online.target
After=network-online.target
[Service]
Type=simple
User=root
ExecStart=/usr/bin/ssh -R 80:localhost:80 -R 443:localhost:443 -o ExitOnForwardFailure=yes root@1.2.3.4 -N
Restart=on-failure
StartLimitBurst=0
StartLimitInterval=10
[Install]
WantedBy=multi-user.target
In this example, we are making ports 80 and 443 of the home server available on the VPS’s IP address on the same ports. To forward additional ports, add more -R <port on VPS>:localhost:<local port> arguments to the SSH command. Replace 1.2.3.4 with the IP address of the ethernet interface on your VPS.
Please note that in order to forward ports below 1024, you need to SSH into the VPS as root. Also make sure that you have connected to the VPS via SSH manually at least once, otherwise the SSH command will get stuck asking whether you’d like to trust the VPS’s host key when it is run by the service.
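A one-off manual connection from the home server is enough to get the host key into known_hosts:
ssh root@1.2.3.4 exit            # accept the host key when prompted
Alternatively, adding -o StrictHostKeyChecking=accept-new to the ExecStart line makes SSH accept the key automatically on the first connection while still rejecting a key that later changes.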
Enable and start the service:
sudo systemctl enable portforward
sudo systemctl start portforward
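To check that the tunnel is up, look at the service on the home server and at the listening sockets on the VPS; with GatewayPorts enabled, sshd itself should be listening on the forwarded ports:
systemctl status portforward            # on the home server
sudo ss -tlnp | grep -E ':80|:443'      # on the VPS, the forwarded ports should show up under sshd
You may also want to add -o ServerAliveInterval=60 to the ExecStart line so that a dead connection is noticed and the service gets restarted.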
Method #2: Using WireGuard and nftables
A better way of doing this is to set up a WireGuard VPN tunnel between the VPS and the home server, and redirect traffic using the nftables firewall.
Set up WireGuard
Install WireGuard tools and generate a public-private key pair on both the VPS and the home server:
sudo apt install wireguard-tools
wg genkey | (umask 0077 && tee private.key) | wg pubkey > public.key
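The private key stays on the machine where it was generated, and the corresponding public key goes into the other machine’s [Peer] section:
cat private.key   # value for PrivateKey in this machine's [Interface] section
cat public.key    # value for PublicKey in the [Peer] section on the other machine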
Next, create the WireGuard configuration file at /etc/wireguard/wg-portforward.conf.
On the VPS it should look something like this:
[Interface]
Address = 192.168.69.1/24, fd69::1/64
ListenPort = 51820
PrivateKey = <content of the private.key file on the VPS>
PostUp = sysctl -w net.ipv4.ip_forward=1
PostUp = sysctl -w net.ipv6.conf.all.forwarding=1
[Peer]
AllowedIPs = 192.168.69.2/24, fd69::2/64
PublicKey = <content of the public.key file on the home server>
And on the home server, like this:
[Interface]
Address = 192.168.69.2/24, fd69::2/64
ListenPort = 21841
PrivateKey = <content of the file private.key on the home server>
[Peer]
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = <VPS's IP>:51820
PersistentKeepalive = 25
PublicKey = <content of the file public.key on the VPS>
If your VPS doesn’t have an IPv6 address, you can omit the IPv6 addresses along with the comma before them. The final result shouldn’t contain the < and > characters before and after the values you have replaced.
Enable and start the WireGuard service, first on the VPS, then on the home server:
sudo systemctl enable wg-quick@wg-portforward
sudo systemctl start wg-quick@wg-portforward
To verify whether the WireGuard tunnel is working, you can do a ping test:
ping 192.168.69.1 # from the home server
ping 192.168.69.2 # from the VPS
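You can also check whether the two peers have completed a handshake:
sudo wg show wg-portforward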
If it’s not working, your VPS provider might have a firewall in their control panel where you need to allow UDP port 51820 through.
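If the VPS additionally runs a host firewall of its own (ufw, for example), the WireGuard port presumably has to be allowed there too; with ufw that would be:
sudo ufw allow 51820/udp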
Set up nftables
You need to install nftables on your VPS, if you don’t have it already. If you’re on a Debian-based system, do:
sudo apt update
sudo apt install nftables
Modify the following snippet according to your needs, and paste it into the file located at /etc/nftables.conf (at least on Debian). In the following example, I’m forwarding TCP ports 80 and 443:
table ip my_nat {
    chain my_prerouting {
        type nat hook prerouting priority dstnat;
        ip daddr <IPv4 of VPS's ethernet port> tcp dport { 80, 443 } dnat to 192.168.69.2;
    }
    chain my_postrouting {
        type nat hook postrouting priority srcnat;
        ip saddr 192.168.69.2 masquerade
    }
}
table ip6 my_nat {
    chain my_prerouting {
        type nat hook prerouting priority dstnat;
        ip6 daddr <IPv6 of VPS's ethernet port> tcp dport { 80, 443 } dnat to fd69::2;
    }
    chain my_postrouting {
        type nat hook postrouting priority srcnat;
        ip6 saddr fd69::2 masquerade
    }
}
Restart the nftables service to apply the changes:
sudo systemctl restart nftables
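To confirm that the rules were loaded, list the active ruleset:
sudo nft list ruleset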
If you did everything correctly, the ports you have specified in the /etc/nftables.conf file should be forwarded from your VPS’s public IP to your server at home.
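Assuming something is already listening on port 80 on your home server, a quick test from a machine outside your home network might look like this:
curl -I http://<VPS's public IP>/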
Again, check your VPS provider’s firewall if it’s not working.
Set up source-based policy routing (optional)
One issue with the current setup is that all outgoing traffic from your server at home gets routed through the VPS. This might be a good thing if you’re hosting services like Mastodon that connect to external servers and you don’t want your real IP to get exposed, but if you don’t like this behaviour, you can set up source-based policy routing.
Source-based policy routing means that replies to packets that came in through one network adapter will be sent out via the same network adapter. The default behaviour in Linux is to use the default gateway, regardless of where the packet we’re replying to came from.
To set up source-based routing, change the WireGuard configuration file on your server at home:
[Interface]
Address = 192.168.69.2/24, fd69::2/64
ListenPort = 21841
PrivateKey = <content of the file private.key on the home server>
Table = 69
PostUp = ip rule add from 192.168.69.2 table 69
PreDown = ip rule del from 192.168.69.2 table 69
[Peer]
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = <VPS's IP>:51820
PersistentKeepalive = 25
PublicKey = <content of the file public.key on the VPS>
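Note that the two rules above only cover IPv4. If you are also forwarding IPv6 traffic to fd69::2, you will presumably want matching IPv6 rules as well, along these lines:
PostUp = ip -6 rule add from fd69::2 table 69
PreDown = ip -6 rule del from fd69::2 table 69
After restarting the WireGuard service, you can check that the rules and routes are in place with ip rule show and ip route show table 69.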
Notice for Docker users
Source-based policy routing
If you have set up source-based policy routing, one thing you might notice is that services running in Docker are not accessible from your VPS’s public IP address. This is because Docker performs source NAT, and the routing decision is made before source NAT is performed, meaning the source IP will still be the container’s IP and not 192.168.69.2.
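You can see the source NAT rules Docker installs (assuming the default iptables integration) by inspecting the nat table:
sudo iptables -t nat -S POSTROUTING | grep MASQUERADE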
This is only really a problem if you want the Docker container to be accessible from multiple public IP addresses (multiple VPSs, or a VPS plus the public IP of your router at home); I would still like to work out a proper solution to this.
If you only need the service to be accessible from the VPS’s public IP address, though, you can route all traffic of an entire container or Docker network through the VPS, and the service will still be accessible from your server’s internal IP address in addition to the VPS’s public IP address. In the following example, 172.16.0.0/16 is the IP address range from which addresses are handed out to the containers whose traffic I’d like to route through the VPS:
[Interface]
Address = 192.168.69.2/24, fd69::2/64
ListenPort = 21841
PrivateKey = <content of the file private.key on the home server>
Table = 69
PostUp = ip rule add from 192.168.69.2 table 69
PreDown = ip rule del from 192.168.69.2 table 69
PostUp = ip rule add from 172.16.0.0/16 table 69
PreDown = ip rule del from 172.16.0.0/16 table 69
[Peer]
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = <VPS's IP>:51820
PersistentKeepalive = 25
PublicKey = <content of the file public.key on the VPS>
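To check that a container’s outgoing traffic now leaves through the VPS, you can compare its apparent public IP against the VPS’s; a quick sketch, assuming a Docker network within the 172.16.0.0/16 range and the publicly available curlimages/curl image:
docker run --rm --network <network's name> curlimages/curl -s https://ifconfig.me
The output should be the VPS’s public IP address.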
MTU size
An additional thing to note is that the MTU of your Docker network interfaces should not be higher than the MTU of your WireGuard interface (1420 by default), otherwise you might be dropping packets.
To change the MTU of the default Docker network used by containers run via docker run, as well as containers used for building images, set the MTU in the file /etc/docker/daemon.json:
{
  "mtu": 1420
}
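Changes to daemon.json only take effect after the Docker daemon has been restarted:
sudo systemctl restart docker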
To modify the MTU of a Docker network defined in a docker-compose.yml file:
networks:
  <network's name>:
    driver_opts:
      com.docker.network.driver.mtu: 1420
To modify the MTU of an existing Docker network, execute:
sudo ip link set dev <interface> mtu <size>
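For user-defined bridge networks, the interface name is usually br- followed by the first 12 characters of the network ID, which you can look up like this:
docker network inspect <network's name> --format '{{.Id}}'   # interface is br-<first 12 characters of this ID>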
I hope this helps anyone wishing to achieve a similar setup!