
Locally hosting an internet-connected server

181 points | 5 days ago | mjg59.dreamwidth.org
JdeBP5 days ago

This and the comments highlight how bad many ISPs in North America and Western Europe are at IPv6, still, in 2025, and the lengths to which people will go to treat that as damage and literally route around it.

One of the biggest ISPs in my country has been promising IPv6 since 2016. Another, smaller, competitor, advertised on "World IPv6 Day" in 2011 that it was way ahead of the competition on supplying IPv6; but in fact does not supply it today.

One of the answers I see given a lot over the years is: Yes, I know that I could do this simply with IPv6. But ISPs around here don't route IPv6, or even formally provide statically-assigned IPv4 to non-business customers. So I have had to build this Heath Robinson contraption instead.

mjg59 5 days ago

Pretty much! My ISP was founded by https://en.wikipedia.org/wiki/Rudy_Rucker and is somewhat cheap and delightful and happily routes me a good amount of IPv6, but every 48 hours or so it RAs me an entirely different range even though the lease on the old one is still valid, and everything breaks. So I've had to turn IPv6 off entirely (I sent dumps of the relevant lease traffic to support, they said they'd look into it, and then the ticket auto-closed after being inactive for two years). I spent a while trying to make things work with IPv6, but the combination of it being broken at my end and there still being enough people I want to provide access to who don't have it means it just wasn't a good option.

anonymousiam5 days ago

One of my places uses Frontier FiOS (soon to become Verizon again). They have zero support for IPv6, and it isn't even on their roadmap.

I use a static HE (Hurricane Electric) IPv6 tunnel there, and it works great.

The only issue is that YouTube thinks the IPv6 block is commercial or an AI dev scraping their content, so I can't look at videos unless I'm logged in to YouTube.

stego-tech5 days ago

I'm also on FiOS, and despite repeated statements to the effect that I'd never get IPv6 on my (20-year-old) ONT, I've got a nice little /56 block assigned on my kit via DHCPv6. Problem is that, as it's a DHCP block, it changes, and Namecheap presently does not offer any sort of dynamic DNS for IPv6 addresses.

Still, it let me tear down the HE IPv6 tunnel I was also running, since the sole reason I needed IPv6 was so our household game consoles could all play online without cursed firewall rules and IP reservations. I’m pretty chuffed with the present status quo, even if it’s far from perfect.

One other thing I’d note about OPs article (for folks considering it as a way to work around shitty ISP policies) is that once you have this up and running, you also have a perfect setup for a reverse proxy deployment for your public services. Just make sure you’re watching your bandwidth so you don’t get a surprise bill.

FuriouslyAdrift5 days ago

For a long time, I operated from home with an auction-purchased IPv4 /24 just so I could get around all this BS and have my own AS.

There's nothing nicer than being able to BGP peer and just handle everything yourself. I really miss old Level 3 (before the Lumen/CenturyLink buyout).

Kind of kicking myself for selling my netblock but it was a decent amount of money ($6000).

icedchai5 days ago

I'm doing exactly this. I got my netblock for free in 1993, back in the Internic days before ARIN existed. I have a couple of VPSes running BGP and tunnel traffic back to my home over wireguard.

bbarnett5 days ago

Hey!

What if you will your netblock to me? I'll will you my Camaro and my collection of Amiga parts.

(I really want your netblock)

icedchai4 days ago

hah! If I wasn't actively using it, I'd consider renting it out. I bet you could find some early Internet dude that has a /24 they're not using

PaulKeeble5 days ago

Mine officially supports it. However, having configured the prefix as they define and using SLAAC etc., all my devices get their IPv6 addresses and can access the internet. I can even connect from outside the network, so it all "works", but I have a bunch of issues. Neither of my ISP's defined DNS servers is reachable, I can't route to one of the OpenDNS resolvers (the other works fine), and I have these periods where the entirety of IPv6 routing breaks for about a minute and then restores. Having done this with two different routers on completely different firmware now, I can't help but think my ISP's official support is garbage and they have major problems with it. I had to turn it off because it causes all sorts of problems.

jxjnskkzxxhx5 days ago

> Heath Robinson contraption

Ah, I see you also watched that video yesterday on manufacturing a tiny electric rotor.

JdeBP5 days ago

I actually learned the expression when I was a child, via the Professor Branestawm books.

jxjnskkzxxhx5 days ago

Ok so this is genuinely a case of seeing an expression for the first time, learning it, and then seeing it again immediately after. Fun.

grndn5 days ago

Fellow Branestawm enthusiast here. That is the first time anyone has ever mentioned Professor Branestawm on HN, as far as I can tell! It's triggering deep memories.

Joeboy5 days ago

"Heath Robinson" is British English for "Rube Goldberg".

jxjnskkzxxhx5 days ago

TIL

algernonramone4 days ago

Me too, at first I thought it was a takeoff on "Heathkit". Silly me, I guess.

jeroenhd5 days ago

I'm in western Europe and every ISP but the ultra-cheap ones and the niche-use-case ones has stable IPv6 prefixes. Some do /48, others /56.

IPv4 is getting CGNAT'd more and more, on the other hand. One national ISP basically lets you pick between IPv4 CGNAT and IPv6 support (with IPv6 being the default). Another has been rolling out CGNAT IPv4 for new customers (at first without even offering IPv6, took them a few months to correct that).

This isn't even an "America and Western Europe" thing. It's a "whatever batshit insane approach the local ISP took" thing. And it's not just affecting IPv6 either.

jekwoooooe5 days ago

I feel like it’s malicious. They don’t want to support it because it means they can’t charge high prices for static IPs

emilfihlman5 days ago

Once again I voice the only sane option: Skip IPv6 and the insanity that it is, and do IPv8 and simply double (or quadruple) the address space without introducing other new things.

drdaeman5 days ago

It'll be objectively worse. IPv6 is at least sort of supported by a non-negligible number of devices, software and organizations. This IPv8 would be a whole new protocol, that no one out there supports. The fact that version 8 was already defined in [an obsolete] RFC1621 doesn't help either.

Even if you decide to try to make it a Frankenstein's monster of a protocol, making it two IPv4 packets wrapped in each other to create a v4+v4=v8 address space, you'll need a whole new routing solution for the Internet, as those encapsulations would have issues with NATs. And that'll be way more error prone (and thus, less secure), because it'll be theoretically possible to accidentally mix up v4 and inner-half-of-v8 traffic.

Nah, if we can't get enough people to adopt IPv6, there's no chance we'll get even more people to adopt some other IPvX (unless something truly extraordinary happens that would trigger such adoption, of course).

MintPaw5 days ago

Are you saying you believe it's truly impossible to create a new backwards compatible standard that expands the address space and doesn't require everyone to upgrade for it to work?

stephen_g5 days ago

I'm not going to say it's truly impossible, but it's practically just-about impossible.

There's no straightforward way of getting old hosts to be able to address anything in the new expanded space, or older routers to be able to route to it.

So you have to basically dual-stack it, and, oops, you've created the exact same situation with IPv6...

hypeatei5 days ago

If it's possible, why has no one done it? Most of the backwards compatible "solutions" that are presented just run into the same issues as IPv6 but with a more quirky design.

icedchai5 days ago

That standard was IPv4 with NAT ;) Unfortunately, it doesn't allow for end-to-end connectivity.

acdha5 days ago

This is a pipe dream in the current century. IPv6 adoption has been slow but it’s approaching 50% and absolutely nobody is going to go through the trouble of implementing a new protocol; updating every operating system, network, and security tool; and waiting a decade for users to upgrade without a big advantage. “I don’t want to learn IPv6” is nowhere near that level of advantage.

Nextgrid5 days ago

The reason IPv6 adoption is lacking is that there's no business case for it from consumer-grade ISPs, not that there's an inherent problem with IPv6. Your proposed IPv8 standard would have the exact same adoption issues.

drewg123 5 days ago

IPv6 is the reason why we can't have IPv6

Your IPv8 is what IPv6 should have been. Instead, IPv6 decided to re-invent way too much, and is why we can't have nice things and are stuck with IPv4 and NAT. Just doubling the address width would have given us 90% of the benefit of V6 with far less complexity and would have been adopted much, much, much faster.

I just ported some (BSD) kernel code from V4 to V6. If the address width was just twice as big, and not 4x as big, a lot of the fugly stuff you have to deal with in C would never have happened. A sockaddr could have been expanded by 2 bytes to handle V6. There would not be all these oddball casts between V4/V6, and entirely different IP packet handling routines because of data size differences and differences in stuff like route and mac address lookup.

Another pet peeve of mine from my days working on hardware is IPv6 extension headers. There is no limit in the protocol to how many extension headers a packet can have. This makes verifying ASICs hard, and leads to very poor support for them. I remember when we were implementing them: we had a nice way to do it, but we looked at what our competitors did. We found most of them just disabled any advanced features when more than one extension header was present.

immibis4 days ago

IPv6 reinvented hardly anything. It's pretty much IPv4, with longer addresses, and a handful of trivial things people wished were in IPv4 by consensus (e.g. fragmentation only at end hosts; less redundant checksums).

The main disagreements have been about what to do with the new addresses, e.g. some platforms insist on SLAAC. (Which is good because it forces your ISP to give you a /64).

Devices operating at the IP layer aren't allowed to care about extension headers other than hop-by-hop, which must be the first header for this reason. Breaking your stupid middlebox is considered a good thing because these middleboxes are constantly breaking everyone's connections.

Your sockaddr complaints WOULD apply at double address length on platforms other than your favorite one. The IETF shouldn't be in charge of making BSD's API slightly more convenient at the expense of literally everything else. And with addresses twice as long, they wouldn't be as effectively infinite. You'd still need to be assigned one from your ISP. They'd still probably only give you one, or worse, charge you based on your number of devices. You'd still probably have NAT.

bigstrat2003 5 days ago

That is not a sane option. IPv6 isn't actually that hard, companies are just lazy and refuse to implement it (or implement it correctly).

icedchai5 days ago

IPv6 is often simpler to administer than IPv4. Subnetting is simpler for the common cases. SLAAC eliminates the need for DHCP on many local networks. There's no NAT to deal with (a good thing!) Prefix delegation can be annoying if the prefix changes (my /56 hasn't in almost 3 years.) Other than that, it's mostly the same.

fc417fc802 5 days ago

> There's no NAT to deal with

I frequently see this claim made but it simply isn't true. NAT isn't inherent to a protocol; it's something the user does on top of it. You can NAT IPv6 just fine; there just isn't the same pressure to do so.

Daviey5 days ago

The commenters suggest Tailscale, but the author assumes this could only mean Funnel; you could instead use Tailscale/Headscale for handling the WireGuard and low-level networking / IP allocation.

Then you do straightforward iptables or L7 / reverse proxying via Caddy, Nginx, etc., directly to the routable IP address.

The outcome is the ~same, bonus is not having to handle the lower level component, negative is an extra "thing" to manage.

But this is how I do the same thing, and i'm quite happy with the result. I can also trivially add additional devices, and even use it for egress, giving me a good pool of exit-IP addresses.

(Note: I was going to add this as a comment on the blog, but it seems their captcha service is broken and would not display, so the comment was blocked.)

0xCMP 5 days ago

I haven't actually used Funnel, but I do use Cloudflare Tunnels + a VPS.

What I've done is that the VPS Nginx can talk over Tailscale to the server in question and the Cloudflare Tunnel lets those not on Tailscale (which is me sometimes) access the VPS.

DougN7 5 days ago

Why not use a dynamic DNS service instead? I’ve been using dyn.com (now oci.dyn.com) for years and it has worked great. A bonus is many home routers have support built in.

messe5 days ago

Only works if you're not behind CGNAT, which has problems in and of itself. I pay my ISP an extra 29 DKK (about 4.50 USD at the moment) for a static address; my IPv4 connections and downloads in-general became way more stable after getting out from behind CGNAT.

neepi5 days ago

CGNAT is hell. Here I had to choose between crap bandwidth or CGNAT. I chose crap bandwidth.

immibis5 days ago

Hell for hosting, but if you're doing adversarial interoperability as a client, it does help you avoid being IP-banned. (At least in Western countries. I hear that Africa and Latin America tend to just get their CGNAT gateways banned because site operators don't give a shit about whether users from those regions can use their sites)

jeroenhd5 days ago

The client feature only works for websites that care about making exceptions for CGNAT users. Plenty of them simply ban the shared addresses.

That's part of the reason why countries like India are getting so many CAPTCHAs: websites don't care for the reason behind lackluster IP plans from CGNAT ISPs. If the ISP offered IPv6 support, people wouldn't have so many issues, but alas, apparently there's money for shitty CGNAT boxes but not IPv6 routers.

jaoane5 days ago

CGNAT is completely irrelevant to the average person. It’s only an issue if you expect others to connect to you, which is something that almost all people don’t need.

(inb4 but the internet was made to receive connections! Well yes, decades ago maybe. But that’s not the way things have evolved. Get with the times.)

rubatuga4 days ago

If you're behind a CGNAT - check out hoppy.network

High quality IPv4 + a whole /56 IPv6 for $8/month

messe4 days ago

That's way more expensive than what I already have. My ISP, by default, provides me a /56, of which I'm only using two /64 subnets at the moment. For an extra 29 DKK (4.50 USD), I get a static IPv4 as well.

I also don't need to worry about the additional latency of a VPN, and have symmetric gigabit speeds, rather than 100Mbps up/down.

mjg59 5 days ago

I have multiple devices on my internal network that I want to exist outside, and dynamic DNS is only going to let me expose one of them

rkagerer5 days ago

If they don't all need distinct external IP addresses of their own, port forwarding is a typical approach.

mjg59 5 days ago

That doesn't work well if you want to run the same service on multiple machines. For some you can proxy that (eg, for web you can just run nginx to proxy everything based on either the host header or SNI data), but for others you can't - you're only going to be able to have one machine accepting port 22 traffic for ssh.
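For the SNI case, a minimal sketch of that frontend with nginx's stream module (hostnames and backend addresses here are made up, and this assumes nginx was built with ngx_stream_ssl_preread_module):

# top-level stream block in nginx.conf: route TLS connections by SNI without terminating them
stream {
    map $ssl_preread_server_name $backend {
        git.example.com   10.0.0.2:443;
        media.example.com 10.0.0.3:443;
        default           10.0.0.2:443;
    }
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}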

mystified5016 5 days ago

Yes, that's how it works when you only have a single IP. The standard way to deal with this is a reverse proxy for web requests. Other services require different workarounds. I have a port 22 SSH server for git activities, and another on a different port that acts as a gateway. From that machine I can SSH again to anywhere within my local network.

It's really not onerous or complicated at all. It's about as simple as it gets. I'm hosting a dozen web services behind a single IP4 address. Adding a new service is even easier than without the proxy setup. Instead of dicking around with my firewall and port forwarding, I just add an entry to my reverse proxy. I don't even use IPs, I just let my local DNS resolve hostnames for me. Easy as.

mjg59 5 days ago

The entire point of this is that I don't want to deal with non-standard port numbers or bouncing through hosts. I want to be able to host services in the normal boring way, and this approach lets me do that without needing to worry about dynamic DNS updates whenever my public IP changes.

mysteria5 days ago

Same for me, I actually like having a reverse proxy as a single point of entry for all my web services. I also run OpenVPN on 443 using the port share feature and as a result I only need one IP address and one open port for everything.

thedanbob5 days ago

This is what I do, except the dynamic DNS service is just a script on my server that updates Cloudflare DNS with my current external IP. In practice my address is almost static, I've never seen it change except when my router is reset/reconfigured.
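For anyone wanting to do the same, a minimal sketch of such a script, assuming you already have a Cloudflare API token plus the zone and record IDs (all of those, and the hostname, are placeholders here):

#!/bin/sh
# look up the current public IPv4 address (ipify is just one of many such services)
IP=$(curl -s https://api.ipify.org)
# overwrite an existing A record via the Cloudflare v4 API
curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"$IP\",\"ttl\":300}"

Run it from cron every few minutes and you get roughly the behaviour of a dynamic DNS client.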

globular-toast5 days ago

Many DNS registrars support updates via API these days. I use Porkbun and ddclient to update it. Slight rub is I couldn't get it to work for the apex domain. Not sure where the limitation lies.

KronisLV5 days ago

Lovely write up! Personally, I just settled on Tailscale so I don’t have to manage WireGuard and iptables myself.

For a while I also thought that regular SSH tunnels would be enough but they kept failing occasionally even with autossh.

Oh and I got bitten by Docker default MTU settings when trying to add everything to the same Swarm cluster.

PeterStuer5 days ago

I run a very small VPS at Hetzner with Pangolin on it that takes care of all the Traefik and WireGuard tunneling to my home servers. Very easy to set up and operate.

https://fossorial.io/

thatcherc5 days ago

Cool! Do you like that approach? I've thought about setting up that exact thing but I wasn't sure how well it would work in practice. Are there any pitfalls you ran into early on? I might give it a shot after your "very easy to set up and operate" review!

PeterStuer5 days ago

Honestly it was very easy. Their documentation is decent, and the defaults are good.

Setting up Pangolin on the VPS and Newt on your LAN, connecting them, and adding e.g. a small demo website as a resource on Pangolin will take you about half an hour (unless your domain propagation is slow, so always start by defining the name in DNS and pointing it to your VPS IP. You can use a wildcard if you do not want to manually make a new DNS entry each time).

wredcoll5 days ago

What is the vps for? Just the static ip?

PeterStuer4 days ago

To have a public front that is outside of the lan firewall. The idea is that you do not have to open your lan to anything. The only communication will be the encrypted wireguard tunnel between the VPS and your Newt instance.

You can run the Pangolin also on the lan, but you will need to open a few ports then on your lan firewall, and manage your ddns etc. if you do not have a fixed IP at home.

For less than 4€/month I opted for the VPS route.

zokier5 days ago

Yeah, this is the way to do this. I'm pretty sure that if you for some reason do not want to run wireguard on all your servers you could fairly easily adjust this recipe to have a centralized wg gateway on your local network instead.

I think I've seen some scripts floating around to automate this process but can't remember where. There are lots of good related tools listed here: https://github.com/anderspitman/awesome-tunneling

xiconfjs5 days ago

Quote from OPs ISP [1]:

"Factors leading to a successful installation: Safe access to the roof without need for a helicopter."

[1] https://www.monkeybrains.net/residential.php#residential

uncircle5 days ago

I wish I had access to a small ISP. It is comforting to know that if something goes wrong, on the other end of the line there is someone with a Cisco shell open ready to run a traceroute.

xiconfjs5 days ago

For sure… in terms of reaction times and flexibility they are great… until something serious happens outside of their scope.

uncircle3 days ago

Like what? Their scope is being an ISP, i.e. routing packets.

xiconfjs3 days ago

Usually it's about standing, resources and options. Serious problems like fibre cuts, local power outages and DDoS attacks are usually not in their scope and they have to wait for 3rd parties to fix these problems, with little option to speed up those processes. Bigger ISPs usually have teams/departments with well-established processes/solutions to tackle these problems. That said, I'm totally aware that each of them (small or big ISP) has their pros and cons - as always it mainly depends on your use case and requirements.

varenc5 days ago

How is the author getting a symmetric 600mbps connection with Monkeybrains? They're an awesome local ISP and provide internet via roof mounted PtM wireless connections.

I want to love them, but sadly I only get an unreliable 80mbps/40mbps connection from them. With occasional latency spikes that make it much worse. To make up for this I run a multi-WAN gateway connecting to my neighbor/friend's Comcast as well. Here's the monkeybrains (https://i.imgur.com/FaByZbw.jpeg) vs comcast (https://i.imgur.com/jTa6Ldk.jpeg) latency log.

Curious if the author had to do anything special to get a symmetric 600mbps from Monkeybrains. They make no guarantees about speed at all, but are quite cheap, wholesome, and have great support. Albeit support hasn't been able to get me anywhere close to the author's speeds.

btucker5 days ago

I love Monkeybrains! I had something in the neighborhood of a 600mbps symmetric connection through them in the late 2010s when I lived in SF. The only issue was when it rained hard the speeds would deteriorate.

Interesting you're getting such slow speeds. Ask them if a tech can stop by and troubleshoot with you.

anonymousiam5 days ago

I did the same thing 20 years ago, but I used vtun because Wireguard didn't exist yet. It's a cool way to get around the bogus limitations on residential static IP addresses.

At the time, my FiOS was about $80/month, but they wanted $300/month for a static IP. I used a VPS (at the time with CrystalTech), which was less than $50/month. Net savings: $170/month.

lostlogin5 days ago

> At the time, my FiOS was about $80/month, but they wanted $300/month for a static IP.

So ridiculous.

It’s fast, far quicker than I can use, and the static IP was a one off $10 or similar.

politelemon5 days ago

Another alternative could be a cloudflare tunnel. It requires installing their Daemon on the server and setting up DNS in their control panel. No ports need opening from the outside in.
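Roughly, the setup looks like this (the tunnel name and hostname are placeholders; the hostname-to-local-service mapping goes in an ingress section of ~/.cloudflared/config.yml):

cloudflared tunnel login                             # authorise cloudflared against your Cloudflare account
cloudflared tunnel create home                       # create a named tunnel
cloudflared tunnel route dns home app.example.com    # point a DNS record at the tunnel
cloudflared tunnel run home                          # keep this running on the server, usually as a service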

jeroenhd5 days ago

The downside of the Cloudflare approach is that yet more websites are behind Cloudflare's control. The VPS approach works pretty much the same way Cloudflare does, but without the centralized control.

On the other hand, Cloudflare is a pretty easy solution against spam bots and scrapers. Probably a better choice if that's something you need protection against.

0xCMP 5 days ago

I think both are great options. Personally I do split-dns so I can access things "directly" while using Tailscale and via Cloudflare Tunnel when I am not.

I also selectively expose via the Cloudflare Tunnel. Most things are tailscale only.

PaulKeeble5 days ago

Everyone does these days, although it's really the AI scrapers you need defence from, and Cloudflare isn't doing so well at that yet.

Aachen5 days ago

As someone who actually hosts stuff at home, I'm not sure everyone does. I don't, for one

Maybe if you're on a limited data plan (like in Belgium or on mobile data), you'd want to prevent unnecessary pageloads? Afaik that doesn't apply to most home connections

Or if you want to absolutely prevent that LLMs eat your content for moral/copyright reasons, then it can't be on the open internet no matter who your gateway is

areyourllySorry5 days ago

ai scrapers are truly this year's boogeyman

troupo5 days ago

I used to expose a site hosted on my home NAS through it, and now I do the same from a server at Hetzner.

Works like magic :)

eqvinox5 days ago

I would highly recommend reading up on VRFs and slotting that into the policy routing bits. It's really almost the same thing (same "ip route" commands with 'table' even), but better encapsulated.

dismalpedigree5 days ago

I do something similar. I run a nebula network. The vps has haproxy and is passing the encrypted data to the hosts using sni to figure out the specific host. No keys on the vps.

The vps and each host are each nebula nodes. I can put the nodes wherever i want. Some are on an additional vps, some are running on proxmox locally. I even have one application running as a geo-isolated and redundant application on a small computer at my friend’s house in another state.

lostmsu4 days ago

I switched from Nebula to Yggdrasil (IPv6, global, but not the same as public IPv6). https://news.ycombinator.com/item?id=43967082

dismalpedigree5 days ago

Yes. That's the one. Works really well. Basically a free version of Tailscale. A bit more of a learning curve.

duskwuff5 days ago

Headscale [1] has a stronger claim to "free version of Tailscale" - it's literally a self-hosted version of Tailscale's coordination server. It's even compatible with the Tailscale client.

[1]: https://headscale.net/

remram4 days ago

The problem with Headscale is that it has absolutely no documentation. All of it is described in relation to Tailscale, which is what I don't want to use. Here are the Tailscale features we have, here are the differences with Tailscale. It's very weird.

zrm5 days ago

> multiple world routeable IPv4 addresses

It's pretty rare that you would need more than one.

If you're running different types of services (e.g. http, mail, ftp) then they each use their own ports and the ports can be mapped to different local machines from the same public IP address.

The most common one where you're likely to have multiple public services using the same protocol is http[s], and for that you can use a reverse proxy. This is only a few lines of config for nginx or haproxy and then you're doing yourself a favor because adding a new one is just adding a single line to the reverse proxy's config instead of having to configure and pay for another IPv4 address.
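As a rough illustration, one such nginx vhost might look like this (the hostname and internal address are made up); adding another service is just another server block:

server {
    listen 80;
    server_name wiki.example.com;
    location / {
        # pass requests for this hostname to the machine actually hosting it
        proxy_pass http://192.168.1.20:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}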

And if you want to expose multiple private services then have your clients use a VPN and then it's only the VPN that needs a public IP because the clients just use the private IPs over the VPN.

To actually need multiple public IPs you'd have to be doing something like running multiple independent public FTP servers while needing them all to use the official port. Don't contribute to the IPv4 address shortage. :)

v5v3 5 days ago

I would suggest putting a disclaimer on the article to warn any noobs that basic security needs to be in place before opening up a server to the internet.

nickzelei4 days ago

Hm, 600 symmetric with Monkeybrains?? I've had Monkeybrains for over 3 years and have never seen over 200 down. In fact, I reached out to them today because for the last 3 months it's been about 50 down or less. Like, I can barely stream content, it's so slow. I am in a 6-unit building in Lower Haight. Most of the units also have MB. The hardware is relatively new (2019?). What gives?

bzmrgonz5 days ago

This is an interesting use case for a jumpbox. So what if we install a reverse proxy on the VPS and use WireGuard to redirect to services at home (non-static)? Would that work too? Any risks that you can see?

ghoshbishakh5 days ago

There are tools specifically built for hosting stuff without public IP such as https://pinggy.io

crtasm5 days ago

There are a number of paid services like that yes.

fainpul5 days ago

> Let's say the external IP address you're going to use for that machine is 321.985.520.309 and the wireguard address of your local system is 867.420.696.005.

What is going on here with these addresses? I'm used to seeing stuff like this in movies – where it always destroys my immersion because now I have to think about the clueless person who did the computer visuals – but surely this author knows about IPv4 addresses?

l-p5 days ago

The author did not want to use real addresses and was not aware of the 192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24 ranges specified in RFC 5737 - IPv4 Address Blocks Reserved for Documentation.

bzmrgonz5 days ago

TIL!!!

neurostimulant3 days ago

My only concern is logging. Will the webserver on your local server log the real IP address of your visitors, or will it log all traffic as coming from your VPS?

dboreham5 days ago

I do something similar but using GRE since I don't need encryption. Then I have OSPF on the resulting overlay network (there are several sites) to deal with ISP outages. One hop is via Starlink, and that one does use Wireguard because Elon likes to block tunnel packets, but it gets through.
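For reference, one GRE leg of an overlay like that is only a few iproute2 commands (the addresses below are documentation placeholders; the OSPF part would live in something like FRR and isn't shown):

# endpoint A (public 198.51.100.1) peering with endpoint B (203.0.113.2)
ip tunnel add gre-b mode gre local 198.51.100.1 remote 203.0.113.2 ttl 255
ip link set gre-b up
ip addr add 10.99.0.1/30 dev gre-b   # point-to-point overlay subnet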

chazeon5 days ago

Why would you want to expose your IP to the internet? I still feel that's dangerous, susceptible to DDoS attack, and I avoid that as much as possible. I put everything behind a Tailscale for internal use and behind Cloudflare for external use.

0xCMP 5 days ago

In this case they're re-exposing the server(s) to the public internet, but their actual IP Address is still very much hidden behind the Wireguard connection to the VPS.

The IPs they're talking about exposing are ones which are on a VPS, not their home router, or the internal IPs identifying a device in Wireguard.

Aachen5 days ago

What the heck? That's like not wanting a street address because people might come to block your front door somehow, or burglars might find your building and steal from it. The big brothers you mention would be like gated/walled communities in this analogy I guess

Saying this as someone who's hosted from at home for like 15 years

Also realise that you're sending the IP address to every website you visit, and in most VoIP software, to those you call. Or if you use a VPN 24/7 on all devices, then it's the VPN's IP address in place of the ISP's IP address...

chazeon5 days ago

I don't think this is the right analogue. Having someone come to your door and break things takes much more effort, and it's easy to get caught. But DDoSing or attacking your service has minimal cost.

Visiting sites and sending the IP address is not the problem; the router has a firewall and basically blocks unwanted attention. But exposing something without protection and letting someone burn your CPU, or, in the worst case, figure out the password for a not-properly-secured service, is a totally different issue.

I saw people setting up SSH honeypots and there were so many unauthorized access attempts that I got scared. I think exposing an entire machine to the network is like driving a car without insurance. Sure, you might be OK, but when trouble comes, it will be a lot of trouble.

Aachen4 days ago

> basically blocking unwanted attention. But when you expose something without protection and allow someone to burn your CPU

... sure. You'd think I'd have noticed that in nearly two decades of hosting all different kinds of services if this were a thing

chazeon5 days ago

Yeah, and of course it will depend on your personality and risk model. Compared to other things, I don't want to risk my data, whether leaked or damaged. And I make mistakes, a lot. If you are very meticulous and can ensure that you put up all the security measures yourself and won't expose something you don't want to, fine. I am just not that kind of person.

Aachen4 days ago

I'm not meticulous either. I've had one responsible disclosure and a few times where I noticed issues myself, but never a case where an attacker discovered it first. There aren't that many malicious people. The only scenario where you realistically get pwned is when there is a stable and automated exploit for a widely spread service that can be automatically discovered, something like Heartbleed or maybe a WordPress plugin with an SQL injection or so.

Run unattended upgrades, or the equivalent for whatever update mechanism you use, and you'll be fine. I've seen banks with more outdated running services than me at home... (I do security consulting, hence)

rtkwe5 days ago

To do that, people have to physically come to my house, and there are solutions to that; people can fuck with my internet from anywhere in the world. It's similar to why remote internet voting is such a pandora's box of issues.

Aachen4 days ago

There's 4 billion front doors on the v4 internet. Sending you a DDoS is transient (not like doing something to you physically) and doesn't scale to lots of websites, especially for no gain

In addition to myself, I know some people who self host but not any who ever had a meaningful DDoS. If you're hosting an unpopular website or NAS, nobody is going to be interested in wasting their capacity on bothering you for no reason

Anything that requires custom effort (not just sending the same packets to every host) doesn't scale either. You can host an SQL injection pretty much indefinitely with nobody discovering it, so long as it's not in standard software that someone might scan for, and if it is, then there'll be automatic updates for it. Not that I'd recommend hosting custom vulnerable software, but either way: in terms of `risk = chance × impact` the added risk of self hosting compared to datacentre hosting is absolutely negligible, so long as you apply the same apt upgrade policy in either situation

Online voting has nothing to do with these phantom risks of self hosting

jojohohanon5 days ago

I feel like I missed a preread that teaches me about these strange super-numeric IP addresses. Eg 400.564.987.500

Am I just seeing ipv6 in an unusually familiar format? Or is it an intentionally malformed format used by wireguard for internal routing?

cameroncooper5 days ago

Looks like modified placeholder addresses because the author didn't want to use real addresses. I don't think it could be used for internal routing since each octet is represented with a single byte (0-255) so having larger numbers for some internal routing would likely break the entire IP stack.

Arrowmaster5 days ago

Yes the author needs to be beaten over the head with RFC 5737.

kinduff5 days ago

This is an interesting solution and I wouldn't mind using one of my existing servers as a gateway or proxy (?).

Is there a way to be selective about which ports are exposed from the host to the target? The target could handle it, but fine-grained control is nice.

mjg59 5 days ago

You could just set a default deny iptables policy for forwarding to that host, and then explicitly open the ports you want
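Concretely, something like this on the VPS, reusing the article's placeholder address for the tunnelled machine (substitute the real WireGuard address; the ports are just examples):

iptables -P FORWARD DROP                                                   # forward nothing by default
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT   # but allow replies on permitted connections
iptables -A FORWARD -d 867.420.696.005 -p tcp --dport 22 -j ACCEPT         # explicitly open just the services you want
iptables -A FORWARD -d 867.420.696.005 -p tcp --dport 443 -j ACCEPT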

baobun5 days ago

iptables is legacy now, and if you're not already well-versed in it, it's better to go straight to nftables (which should be easier to get started with anyway). On modern systems, iptables commands are translated to nftables equivalents by a transitional package.
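For the nftables route, a rough sketch of a default-deny forward policy with a couple of explicit openings (same placeholder address as the article, so substitute the real one):

nft add table inet fwd
nft add chain inet fwd forward '{ type filter hook forward priority 0 ; policy drop ; }'
nft add rule inet fwd forward ct state established,related accept
nft add rule inet fwd forward ip daddr 867.420.696.005 tcp dport 22 accept
nft add rule inet fwd forward ip daddr 867.420.696.005 tcp dport 443 accept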

JXzVB0iA 5 days ago

Recommend trying vrf (l3mdev) for this (dual interface w/ 0.0.0.0/0 route) setup.

Put the wg interface in a new vrf, and spawn your self-hosted server in that vrf (ip vrf exec xxx command).
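A rough sketch of that, assuming the WireGuard interface is wg0 and its peer allows 0.0.0.0/0; the VRF name, table number, and service command are arbitrary examples:

ip link add vrf-wg type vrf table 100    # a VRF bound to its own routing table
ip link set vrf-wg up
ip link set wg0 master vrf-wg            # the tunnel now lives inside the VRF
ip route add default dev wg0 table 100   # default route via the tunnel, only in that table
ip vrf exec vrf-wg /usr/sbin/sshd -D     # example: run the exposed service inside the VRF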

protocolture5 days ago

Won't sell you a static IP? Not even IPv6? That's just incompetence.

FlyingSnake5 days ago

How is it different from self hosting locally with Cloudflare tunnels or Tailscale?

E.g. I have a PiZero attached to my router and it’s exposed to the internet via Cloudflare tunnels.

tehlike5 days ago

i also do cloudflare tunnel.

rtkwe5 days ago

I've taken the easier solution of Cloudflare's free Tunnel service so my IP is less exposed and I don't have to poke holes in my firewall.

saltspork5 days ago

Last I checked Cloudflare insisted on terminating TLS on the free tier.

On principle, I prefer to poke a hole in my firewall than allow surveillance of the plaintext traffic.

rtkwe5 days ago

I think they have to for tunnels to work. They rewrite the headers so the target server doesn't have to have any special config to recognize abc.example.xyz as being itself.

I think in theory you could get it without it but that would be a lot more work on the recipient side.

Me I'm less worried about that so I accept the convenience of not having to setup a reverse proxy and poke a hole in my router.

dreamcompiler5 days ago

Putting a privkey on your VPS seems like asking for trouble.

lazylizard5 days ago

you can also run a proxy on the vps instead of the nat.

mjg59 5 days ago

Depends on the protocol. For web, sure - for ssh, nope, since the protocol doesn't indicate which machine it's trying to connect to and so you don't know where to proxy it to.

baobun5 days ago

You can still TCP proxy SSH just fine (one port per target host obv)

Certain UDP-based protocols may be hairier, though.
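For instance, a couple of plain TCP proxies with socat on the VPS would do it (the internal addresses and ports are made up, and in practice you'd run these under a supervisor rather than backgrounding them):

socat TCP-LISTEN:2201,fork,reuseaddr TCP:10.0.0.2:22 &   # port 2201 -> host A's sshd
socat TCP-LISTEN:2202,fork,reuseaddr TCP:10.0.0.3:22 &   # port 2202 -> host B's sshd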

remram5 days ago

I don't know what you mean by "the protocol". There is a destination IP address on every packet... getsockname() will tell the proxy which local IP the client dialed, allowing it to create "virtual hosts" (or you can actually run multiple proxies bound on different local addresses).

mjg59 5 days ago

I have one public IP address. I have three machines behind it that I want to SSH into. How does the machine with the public address know where to route an incoming port 22 packet? For HTTPS this is easy - browsers send the desired site in the SNI field of the TLS handshake, so the frontend can look at that and route appropriately. For SSH there's no indication of which host the packet is intended for.

remram5 days ago

Well you can't, but that wouldn't work with routing either, and it is not the situation at hand: in the article there are multiple IPs on the VPS:

> you now have multiple real-world IP addresses that people can get to

In your new situation, which is not the one in the article, you can just use different ports.

PhilipRoman5 days ago

Socket based proxying is better for this, since you eliminate one point from your attack surface (if your proxy server gets compromised, it's just encrypted ssh/TLS)

nurettin5 days ago

Too lazy to set up wireguard. I just use ssh -L. And if there is another server in the way I hop with ssh -J -L
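For anyone unfamiliar, that looks something like this (hostnames and addresses are placeholders):

# forward local port 8080 to port 80 on a machine the server can reach
ssh -N -L 8080:192.168.1.20:80 user@vps.example.com
# same idea, but hopping through the VPS to an inner machine first
ssh -N -J user@vps.example.com -L 8080:localhost:80 user@192.168.1.20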

1317 5 days ago

Things like this that go through some external VPS always seem a bit pointless to me.

just host it on the VPS directly

dboreham5 days ago

I have workloads that need 32T of enterprise nvme that I run on a machine in my garage.

sntran4 days ago

How much for a VPS with enough bandwidth to access those 32T of data frequently?

orangeboats3 days ago

Sometimes you don't have to access those 32TB of data raw. You often need only the processed data, which can be orders of magnitude smaller.

Your local machine holds and processes the raw data, while your VPS forwards the much smaller processed data to the Internet.

orangeboats5 days ago

A VPS that relays traffic and a VPS that runs services are very different.

yusina5 days ago

Um that article is not at all about what I expected. It solves a particular problem, which is not having a static IP address. I happen to have one, so that's not an issue.

But I still have so much to consider when doing local hosting. Redundant electricity? IP connectivity? What if some hardware dies? What if I get DDoS'ed? How do I get all relevant security fixes applied asap? How do I properly isolate the server from other home networking like kid's laptops and TV with Netflix? ...?

All solvable of course, but that's what I'd have expected in such an article.

tasn5 days ago

I wrote about doing the same thing in 2016[1], crazy to think that we STILL don't have IPv6.

1: https://stosb.com/blog/using-an-external-server-and-a-vpn-to...

TacticalCoder5 days ago

> Let's say the external IP address you're going to use for that machine is 321.985.520.309 and the wireguard address of your local system is 867.420.696.005.

What kind of IP addresses are these?

sneak5 days ago

This article was not worth having to solve a captcha to read.

I think I will be done with sites that require me to solve captchas to visit for simple reading, just as I am done with sites that require me to run javascript to read their text.

superkuh5 days ago

At least it is technically possible to complete the dreamwidth captchas now. For many years (well before the modern corporate spidering insanity) dreamwidth was just completely inaccessible no matter how many times one completed their captchas. You'd have to be running a recent version of Chrome or the like.

Now, after doing the captcha ~5 times and getting nothing, a different captcha pops up that actually works and lets one in.

It's not good but it's a hell of a lot better than their old system.

cesarb5 days ago

At least you get a CAPTCHA. All I get is a "403 Forbidden" with zero extra information. Tried from two different ISPs and different devices.

bzmrgonz5 days ago

how do you feel about proof of work human detection mechanisms? I think those are more tolerable given that it's just a short pause in browsing.

Aachen5 days ago

(Not the person you asked)

I keep waiting for someone trying to use the web on an older computer who's sitting there for 30 seconds every time they click another search result. Or a battery-powered device that now uses a lot of inefficient high frequency clock cycles to get these computations out of the way

But so far I've heard nobody! And they've been fast on my phone. Does this really keep bots out? I'm quite surprised in both directions (how little computation apparently already helps and how few people run into significant issues)

When this came up 15 years ago (when PoW was hyping due to Bitcoin and Litecoin) the conversation was certainly different than how people regard this today. Now we just need an email version of it and I'm curious if mass spam becomes a thing of the past as well

CaptainFever5 days ago

I can't even access the article, I get a 403. Here's a text mirror:

I'm lucky enough to have a weird niche ISP available to me, so I'm paying $35 a month for around 600MBit symmetric data. Unfortunately they don't offer static IP addresses to residential customers, and nor do they allow multiple IP addresses per connection, and I'm the sort of person who'd like to run a bunch of stuff myself, so I've been looking for ways to manage this.

What I've ended up doing is renting a cheap VPS from a vendor that lets me add multiple IP addresses for minimal extra cost. The precise nature of the VPS isn't relevant - you just want a machine (it doesn't need much CPU, RAM, or storage) that has multiple world routeable IPv4 addresses associated with it and has no port blocks on incoming traffic. Ideally it's geographically local and peers with your ISP in order to reduce additional latency, but that's a nice to have rather than a requirement.

By setting that up you now have multiple real-world IP addresses that people can get to. How do we get them to the machine in your house you want to be accessible? First we need a connection between that machine and your VPS, and the easiest approach here is Wireguard. We only need a point-to-point link, nothing routable, and none of the IP addresses involved need to have anything to do with any of the rest of your network. So, on your local machine you want something like:

[Interface]
PrivateKey = privkeyhere
ListenPort = 51820
Address = localaddr/32

[Peer]
Endpoint = VPS:51820
PublicKey = pubkeyhere
AllowedIPs = VPS/32

And on your VPS, something like:

[Interface]
Address = vpswgaddr/32
SaveConfig = true
ListenPort = 51820
PrivateKey = privkeyhere

[Peer]
PublicKey = pubkeyhere
AllowedIPs = localaddr/32

The addresses here are (other than the VPS address) arbitrary - but they do need to be consistent, otherwise Wireguard is going to be unhappy and your packets will not have a fun time. Bring that interface up with wg-quick and make sure the devices can ping each other. Hurrah! That's the easy bit.
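Assuming the configs above live at /etc/wireguard/wg0.conf on each side (a conventional location, not one the article specifies), bringing the link up and checking it looks roughly like:

wg-quick up wg0    # run on both the VPS and the local machine
wg show wg0        # confirm a handshake has happened
ping vpswgaddr     # from the local machine, using the article's placeholder for the VPS tunnel address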

Now you want packets from the outside world to get to your internal machine. Let's say the external IP address you're going to use for that machine is 321.985.520.309 and the wireguard address of your local system is 867.420.696.005. On the VPS, you're going to want to do:

iptables -t nat -A PREROUTING -p tcp -d 321.985.520.309 -j DNAT --to-destination 867.420.696.005

Now, all incoming packets for 321.985.520.309 will be rewritten to head towards 867.420.696.005 instead (make sure you've set net.ipv4.ip_forward to 1 via sysctl!). Victory! Or is it? Well, no.
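For completeness, a sketch of that sysctl step, including making it persist across reboots (the sysctl.d path is the usual convention; adjust for your distro):

sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-forwarding.conf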

What we're doing here is rewriting the destination address of the packets so instead of heading to an address associated with the VPS, they're now going to head to your internal system over the Wireguard link. Which is then going to ignore them, because the AllowedIPs statement in the config only allows packets coming from your VPS, and these packets still have their original source IP. We could rewrite the source IP to match the VPS IP, but then you'd have no idea where any of these packets were coming from, and that sucks. Let's do something better. On the local machine, in the peer, let's update AllowedIPs to 0.0.0.0/0 to permit packets from any source to appear over our Wireguard link. But if we bring the interface up now, it'll try to route all traffic over the Wireguard link, which isn't what we want. So we'll add table = off to the interface stanza of the config to disable that, and now we can bring the interface up without breaking everything but still allowing packets to reach us. However, we do still need to tell the kernel how to reach the remote VPN endpoint, which we can do with ip route add vpswgaddr dev wg0. Add this to the interface stanza as:

PostUp = ip route add vpswgaddr dev wg0
PreDown = ip route del vpswgaddr dev wg0

That's half the battle. The problem is that they're going to show up there with the source address still set to the original source IP, and your internal system is (because Linux) going to notice it has the ability to just send replies to the outside world via your ISP rather than via Wireguard and nothing is going to work. Thanks, Linux. Thinux.

But there's a way to solve this - policy routing. Linux allows you to have multiple separate routing tables, and define policy that controls which routing table will be used for a given packet. First, let's define a new table reference. On the local machine, edit /etc/iproute2/rt_tables and add a new entry that's something like:

1 wireguard

where "1" is just a standin for a number not otherwise used there. Now edit your wireguard config and replace table=off with table=wireguard - Wireguard will now update the wireguard routing table rather than the global one. Now all we need to do is to tell the kernel to push packets into the appropriate routing table - we can do that with ip rule add from localaddr lookup wireguard, which tells the kernel to take any packet coming from our Wireguard address and push it via the Wireguard routing table. Add that to your Wireguard interface config as:

PostUp = ip rule add from localaddr lookup wireguard
PreDown = ip rule del from localaddr lookup wireguard

and now your local system is effectively on the internet.
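Putting the local-machine changes together, the finished wg0.conf ends up looking roughly like this consolidated sketch (same placeholder names as above; key spelling follows the wg-quick man page):

[Interface]
PrivateKey = privkeyhere
ListenPort = 51820
Address = localaddr/32
Table = wireguard
PostUp = ip route add vpswgaddr dev wg0
PostUp = ip rule add from localaddr lookup wireguard
PreDown = ip rule del from localaddr lookup wireguard
PreDown = ip route del vpswgaddr dev wg0

[Peer]
Endpoint = VPS:51820
PublicKey = pubkeyhere
AllowedIPs = 0.0.0.0/0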

You can do this for multiple systems - just configure additional Wireguard interfaces on the VPS and make sure they're all listening on different ports. If your local IP changes then your local machines will end up reconnecting to the VPS, but to the outside world their accessible IP address will remain the same. It's like having a real IP without the pain of convincing your ISP to give it to you.
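As a sketch of what an additional interface means in practice, a second machine might look like this on the VPS side (the second public address, port, and placeholder names below are made up in the article's style):

# /etc/wireguard/wg1.conf on the VPS: a second point-to-point tunnel on its own port
[Interface]
Address = vpswgaddr2/32
SaveConfig = true
ListenPort = 51821
PrivateKey = otherprivkeyhere

[Peer]
PublicKey = otherpubkeyhere
AllowedIPs = localaddr2/32

# plus a matching DNAT rule for the second public IP
iptables -t nat -A PREROUTING -p tcp -d 321.985.520.310 -j DNAT --to-destination localaddr2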

Source: https://web.archive.org/web/20250618061131/https://mjg59.dre...