Probably the hardest part about self-hosting is reaching your server. If you use a paid host/VPS like DigitalOcean or one of the myriad others, this is pretty easy, but if you want to go for zero-cost hosting like me, then this guide is for you.
You will need a device with an internet connection. I used a netbook from 2010 with great success until my mom upgraded her laptop, allowing me to upgrade my server to her old laptop. The old device ran Arch with 4 GB of single-channel RAM, so you really don't need any fancy hardware.
The first thing you want to set up is an SSH server on your server device. See the ArchWiki page for a quickstart.
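For reference, on an Arch-based server this is roughly (a sketch; the package and service names are Arch's, adjust for your distro):

```
# Install and start the OpenSSH daemon (Arch; see the ArchWiki for hardening)
sudo pacman -S openssh
sudo systemctl enable --now sshd
```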
You will probably install the self-hosted software itself via Docker, or on bare metal. I would suggest Docker for more complicated software, and bare metal if it's a single binary.
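For the Docker route, a minimal sketch of what that looks like (the service, image, and port here are placeholders; I use Gitea since it comes up later):

```
# docker-compose.yml — hypothetical minimal service definition
services:
  gitea:
    image: gitea/gitea:latest
    ports:
      - "3000:3000"    # host:container; the reverse proxy will point at this
    restart: unless-stopped
```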
As for the main reverse proxy, I would suggest Caddy, as it is the lightest and also automatically configures TLS for Tailscale domains. For some software you might not get a Caddy config file in the docs, but by playing around a little you can adapt the NGINX or Apache configs pretty easily. See here for a simple run-through. I would suggest running Caddy on bare metal.
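For example, a typical NGINX snippet from an app's docs usually maps over quite directly (a sketch; the domain and port are placeholders):

```
# NGINX:  location / { proxy_pass http://localhost:8080; }
# becomes, in a Caddyfile:
service.example.com {
	reverse_proxy localhost:8080
}
```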
Having these, a TL;DR of the post is to install Tailscale on all devices, install pihole and unbound (optional), set pihole as the DNS resolver with Tailscale's MagicDNS, then add a deny rule in pihole to respond to `yourname.com` with `tailscale-ip-of-server`.
Now to the main part of this blog: how to connect to this device from outside the LAN. There are two options: expose the server to the public internet (port forwarding and the associated headaches), or connect everything through an overlay network like Tailscale.
We will explore the setup with Tailscale. First, set up Tailscale by following the six steps in the Tailscale quickstart. Then activate MagicDNS if it is not already active. From now on, we will assume that `server` is the tailnet name of your server, and `client` is some client. To verify a complete setup, try to SSH into `server` from `client` with both of them on different networks. Troubleshoot connectivity with `tailscale ping server` from `client`, and also check that the firewall rules on `server` allow access. Note that for testing you can turn off the firewall, since only devices on your tailnet can access your device anyway.
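Concretely, the verification from `client` looks something like this (a sketch; `user` is a placeholder for your account on `server`):

```
tailscale status          # both machines should show up in your tailnet
tailscale ping server     # checks tailnet connectivity to the server
ssh user@server           # should succeed even from a different network
```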
Once you have verified that the server is accessible, try to run a file server with Caddy: turn off `auto_https` and access the file server at `http://server.your-tailnet-name.ts.net` from a browser on `client`. Once again, troubleshoot by checking that the server is reachable by ping and that the firewall rules are correct. Once this works, set up Tailscale to allow Caddy to fetch the certs, turn on `auto_https`, and try to access the file server at `https://server.your-tailnet-name.ts.net`.
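A minimal Caddyfile for this test could look like the following (a sketch; `/srv` is a placeholder directory, and the global `auto_https off` is what you remove once the Tailscale cert integration is set up):

```
{
	auto_https off
}

http://server.your-tailnet-name.ts.net {
	root * /srv
	file_server browse
}
```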
At this point, you can serve some self-hosted software at different subdirectories, e.g. `https://server.your-tailnet-name.ts.net/service`. Almost all apps will require some setup to allow access from a non-root base directory, and not all apps support this. For example, I self-host an instance of Gitea on `/git`. This requires the following setup:

1. telling `gitea` that it is hosted on a sub-path
2. reverse proxying `/git/*` to the correct port
3. redirecting `/git` to `/git/`

2 and 3 are configured in caddy's config as
```
server.your-tailnet-name.ts.net {
	handle_path /git/* {
		reverse_proxy localhost:3000
	}
	redir /git /git/
}
```
Note that some apps require that you use `handle` instead of `handle_path`.
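For step 1, in Gitea's case the sub-path is set via `ROOT_URL` in `app.ini` (a sketch; the location of `app.ini` depends on your install):

```
; app.ini — tell Gitea it lives under /git
[server]
ROOT_URL = https://server.your-tailnet-name.ts.net/git/
```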
This kind of setup is a big headache when you want to add services, since you have to find a service that supports it, or use hacky ways to fix it if there is no support. I wrote about it in [[2022-09-09_131905]]. Normally you can check for the keywords `basepath`, `subpath`, `subdir`, or `rootdir` in the docs, or in the Issues tab of the project's repo in case there's some unofficial/undocumented way to do it.
or, getting a custom domain, for free
It would be great if MagicDNS allowed us to forward `*.server.ts.net` to `server`, but it doesn't do so automatically. There are three ways to fix this:
1. Run a Tailscale instance alongside each service so that it is available at `service.your-tailnet-name.ts.net`. This is called a tailscale sidecar. Requires you to edit each docker container, which I felt was annoying.
2. …
3. Run your own DNS server (pihole) that resolves `yourname.com` to `your-tailscale-ip`.
I did 3 because 1 is annoying and I got to know about 2 only as I was writing this post. Also, pihole allows ad blocking at the DNS level. But anyway, here goes:
1. Install pihole on `server` (I run it in a docker container).
2. Set pihole as the global nameserver in Tailscale's MagicDNS settings. Use `dig` to check if you are able to resolve the IP address for a public domain like `google.com`. Again, check if firewall rules are correct; pihole typically runs on port 53.
3. In pihole's web interface, go to `Domains`, add a `RegEx filter` `(\.|^)yourname\.com$;reply=server-tailscale-ip`, and `Add to denied domains`. Now check if you are able to resolve `yourname.com` to the correct IP. Also check `random.yourname.com`.
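The checks from `client` can be run with `dig` (a sketch; substitute your actual tailscale IPs):

```
dig google.com              # public domain, should resolve through pihole
dig yourname.com            # should return server-tailscale-ip
dig random.yourname.com     # the regex should catch subdomains too
```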
At this point, any device connected to the tailnet should use pihole as its DNS resolver, which resolves `*.yourname.com` to your server. But you still have to configure caddy to reverse proxy depending on the domain requested. This looks like
```
servicename.yourname.com {
	reverse_proxy localhost:port
}
```
Note that you do not have to `handle_path` anymore. You may also have to change the service to serve on the rootdir `/`.
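For instance, the Gitea example from before can move from `/git` to its own subdomain (same port as above; remember to point `ROOT_URL` back at the new root):

```
git.yourname.com {
	reverse_proxy localhost:3000
}
```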
At this point you can access all self-hosted services at `service.yourname.com`, as long as your `client` is connected to Tailscale. However, it may happen, as it did in my case, that `server` is physically located in a different country than `client`. Since the final DNS resolution of public domains happens through a request by `server`, the resulting IP address is optimal for the country of `server` rather than `client`. In terms of speed this makes barely any difference, but if `server`'s country blocks certain domains by DNS poisoning, then `client` will also not be able to access these websites. You can get a better IP by using an ECS-enabled upstream (see `Settings > DNS` in the pihole interface), but I couldn't really circumvent the DNS poisoning. So I also set up an unbound recursive resolver. Note that since pihole is in a docker container (which has its own subnet) but unbound is on bare metal, you need to run unbound on the docker interface. You can edit `/etc/unbound/unbound.conf` as I have:
```
server:
    # If no logfile is specified, syslog is used
    logfile: "/var/log/unbound/unbound.log"
    verbosity: 0

    interface: 172.17.0.1
    port: 5335
    do-ip4: yes
    do-udp: yes
    do-tcp: yes
    access-control: 172.17.0.0/12 allow

    # May be set to no if you don't have IPv6 connectivity
    do-ip6: yes

    # You want to leave this as no unless you have *native* IPv6. With 6to4 and
    # Teredo tunnels your web browser should favor IPv4 for the same reasons
    prefer-ip6: no

    # Use this only when you downloaded the list of primary root servers!
    # If you use the default dns-root-data package, unbound will find it automatically
    #root-hints: "/var/lib/unbound/root.hints"

    # Trust glue only if it is within the server's authority
    harden-glue: yes

    # Require DNSSEC data for trust-anchored zones; if such data is absent, the zone becomes BOGUS
    harden-dnssec-stripped: yes

    # Don't use capitalization randomization as it is known to cause DNSSEC issues sometimes
    # see https://discourse.pi-hole.net/t/unbound-stubby-or-dnscrypt-proxy/9378 for further details
    use-caps-for-id: no

    # Reduce EDNS reassembly buffer size.
    # IP fragmentation is unreliable on the Internet today, and can cause
    # transmission failures when large DNS messages are sent via UDP. Even
    # when fragmentation does work, it may not be secure; it is theoretically
    # possible to spoof parts of a fragmented DNS message, without easy
    # detection at the receiving end. Recently, there was an excellent study
    # >>> Defragmenting DNS - Determining the optimal maximum UDP response size for DNS <<<
    # by Axel Koolhaas, and Tjeerd Slokker (https://indico.dns-oarc.net/event/36/contributions/776/)
    # in collaboration with NLnet Labs explored DNS using real world data from
    # the RIPE Atlas probes and the researchers suggested different values for
    # IPv4 and IPv6 and in different scenarios. They advise that servers should
    # be configured to limit DNS messages sent over UDP to a size that will not
    # trigger fragmentation on typical network links. DNS servers can switch
    # from UDP to TCP when a DNS response is too big to fit in this limited
    # buffer size. This value has also been suggested in DNS Flag Day 2020.
    edns-buffer-size: 1232

    # Perform prefetching of close-to-expired message cache entries
    # This only applies to domains that have been frequently queried
    prefetch: yes

    # One thread should be sufficient; it can be increased on beefy machines. In reality,
    # for most users running on small networks or on a single machine, it should be
    # unnecessary to seek performance enhancement by increasing num-threads above 1.
    num-threads: 1

    # Ensure kernel buffer is large enough to not lose messages in traffic spikes
    so-rcvbuf: 1m

    # Ensure privacy of local IP ranges
    private-address: 192.168.0.0/16
    private-address: 169.254.0.0/16
    private-address: 172.16.0.0/12
    private-address: 10.0.0.0/8
    private-address: fd00::/8
    private-address: fe80::/10

    # Ensure no reverse queries to non-public IP ranges (RFC 6303 4.2)
    private-address: 192.0.2.0/24
    private-address: 198.51.100.0/24
    private-address: 203.0.113.0/24
    private-address: 255.255.255.255/32
    private-address: 2001:db8::/32
```
Note that you may need to change `interface` and `access-control` to the docker IP address, which you can find with `ip addr | grep docker`. You can then set `Settings > DNS > Upstream DNS Servers > Custom DNS Servers` to `172.17.0.1#5335`, or the appropriate IP address. This finally allows us to resolve DNS normally without DNS poisoning. Note that the ISP can still directly block the IP address.
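To sanity-check the chain, you can query unbound directly and then go through pihole (a sketch; `172.17.0.1` and the tailscale IP are placeholders for your setup):

```
sudo systemctl enable --now unbound       # assuming a systemd-based distro
dig @172.17.0.1 -p 5335 example.com       # ask unbound directly
dig @server-tailscale-ip yourname.com     # full chain through pihole
```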
At this point, a typical DNS resolution request will first hit MagicDNS, which responds with the appropriate IP for `*.ts.net`, or forwards to the custom upstream DNS, which is pihole running on `server`. Pihole then replies with `NXDOMAIN` for blocked domains, with `server-tailscale-ip` for `*.yourname.com`, or forwards to unbound, also running on `server`. Unbound then recursively resolves the public domains with ECS.
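As a final check from `client`, all three behaviours are visible with `dig` (a sketch; `doubleclick.net` stands in for any domain on your blocklist):

```
dig doubleclick.net        # on the blocklist: pihole replies NXDOMAIN
dig app.yourname.com       # regex match: replies with server-tailscale-ip
dig example.com            # everything else: recursively resolved by unbound
```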