
The New Internet

517 points | 3 months ago | tailscale.com
teddyh 3 months ago

The eternal problem with companies like Tailscale (and Cloudflare, Google, etc. etc.) is that, by solving a problem with the modern internet which the internet should have been designed to solve by itself, like simple end-to-end secure connectivity, Tailscale becomes incentivized to keep the problem. What the internet would need is something like IPv6 with automatic encryption via IPsec, with PKI provided by DNSSEC. But Tailscale has every incentive to prevent such things to be widely and compatibly implemented, because it would destroy their business. Their whole business depends on the problem persisting.

(Repost of <https://news.ycombinator.com/item?id=38570370>)

hnarn 3 months ago

This sounds like a reasonable point, but the more I think about it, the more it sounds like digital flagellation.

IPv6 was released in 1998. It had been 21 (!) years since the release of IPv6 and still what you're describing had not been implemented when Tailscale was released in 2019. Who was stopping anyone from doing it then, and who is stopping anyone from doing it now?

It's easy to paint companies as bad actors, especially since they often are, but Google, Cloudflare and Tailscale all became what they are for a reason: they solved a real problem, so people gave them money, or whatever is money-equivalent, like personal data.

If your argument is inverted, it's a kind of inverse accelerationism (decelerationism?) whereby only by making the Internet worse for everyone can the really good solutions see the light. I don't buy it.

Tailscale is not the reason we're not seeing what you're describing, the immense work involved in creating it is why, and it's only when that immense amount of work becomes slightly less immense that any solution at all emerges. Tailscale for example would probably not exist if they had to invent Wireguard, and the fact that Tailscale now exists has led to Headscale existing, creating yet another springboard in a line of springboards to create "something" like what you describe -- for those willing to put in the time.

throw0101d 3 months ago

> Who was stopping anyone from doing it then, and who is stopping anyone from doing it now?

The folks who either (a) got in early on the IPv4 address land rush (especially the Western developed countries), or (b) with buckets of money who buy addresses.

If you're India, there probably weren't enough IPv4 addresses in the first place to handle your population, so you're doing IPv6:

* https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...

Or even if you're in the West, if you're poor (a community Native American ISP):

> We learned a very expensive lesson. 71% of the IPv4 traffic we were supporting was from ROKU devices. 9% coming from DishNetwork & DirectTV satellite tuners, 11% from HomeSecurity cameras and systems, and remaining 9% we replaced extremely outdated Point of Sale(POS) equipment. So we cut ROKU some slack three years ago by spending a little over $300k just to support their devices.

* https://community.roku.com/t5/Features-settings-updates/It-s...

* Discussion: https://news.ycombinator.com/item?id=35047624

IPv4 'wasn't a problem' because the megacorps who generally run things where I'm guessing you're from (the West) were able to solve it by other means… until they couldn't. T-Mobile US has 120M subscribers, and a few years ago it turned out that money couldn't solve IPv4-only anymore, so they went to IPv6:

* https://www.youtube.com/watch?v=QGbxCKAqNUE

IPv6 is not lagging because IPv4 (and NAT/STUN/TURN) is 'better', but rather because of (a) inertia, and (b) the fact that IPv4 still 'works' (with enough kludges thrown at it).

api 3 months ago

There is another reason: the addresses are long and impossible to remember and hard to type.

I always bring this up and it’s always dismissed because tech people continue to dismiss usability concerns.

Even “small” usability differences can have a huge effect on adoption.

+4
Gormo 3 months ago
+2
gorgoiler 3 months ago
+1
wolfendin 3 months ago
+2
throw0101d 3 months ago
+3
dilawar 3 months ago
teddyh 3 months ago

Yes, like Ethernet addresses. Those are impossible to remember, too, so obviously Ethernet is no good. /s

The solution for IPv6 addresses is the same for Ethernet addresses; don’t use them directly. Leave it to the name resolution system, and use host names.
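This point can be seen in any socket API: programs pass names around and let the resolver produce whatever address the system knows, IPv4 or IPv6. A minimal Python sketch (nothing protocol-specific here; `localhost` is just a convenient name that resolves without network access):

```python
import socket

def resolve(host: str) -> list[str]:
    """Return every address (IPv4 and IPv6) the resolver knows for `host`."""
    infos = socket.getaddrinfo(host, None, socket.AF_UNSPEC, socket.SOCK_STREAM)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # sockaddr[0] is the address string itself.
    return sorted({info[4][0] for info in infos})

# The caller only ever types the name; the unwieldy address stays hidden.
print(resolve("localhost"))  # e.g. ['127.0.0.1', '::1'] depending on the system
```

The same program works unchanged whether the name maps to `93.184.216.34` or `2606:2800:220:1:248:1893:25c8:1946`, which is exactly the argument for not typing IPv6 addresses by hand.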

Animats 3 months ago

> IPv6 was released in 1998. ... Who was stopping anyone from doing it then, and who is stopping anyone from doing it now?

Well, for a long time, IPv6 didn't work very well. We're past that, mostly. Google reports that 45% of their incoming connections worldwide are IPv6.[1] Growth rate has been close to linear, at 4%/year, since 2015. IPv6 should pass 50% some time in 2025.

Mobile is already 70%-90% IPv6. They need a lot of addresses.

Most of the delay comes from enterprise networks. They have limited connectivity to the outside world, and much of that limiting involves some kind of address translation. So a "corporate IPv6 strategy" is required.

[1] https://www.google.com/intl/en/ipv6/statistics.html

teddyh 3 months ago

It is a problem when, for instance, Google chooses not to implement SRV (and later HTTPS) DNS record support in their web browser. The problems which SRV (and now HTTPS) DNS records solve are not problems for Google, since they solved them by sheer scale and brute force, and Google only benefits from everybody else still having the problem; it’s a great moat for them.

Animats 3 months ago

And, worse, incentivized to require users to use a "coordination server" which helps with the NAT and firewall traversal problem by being something you can reach from outbound-only clients. There's a lot of verbiage there, but the general idea seems to be that Tailscale sits at the middle of this as the means by which machines find each other.

There are other ways to do that.

There are dynamic DNS schemes, so you can give your machine which only has a temporary IP address a permanent name. That's been around for decades, and seems to have a bad reputation.

There are schemes with multiple coordination nodes that know about each other, and published lists of such nodes. The list may be out of date, but as long as the published list has one live node, you can connect and get updated. That's how Kademlia, which underlies Ethereum's network and some file sharing systems, works. That's about 20 years old, and sort of has a sketchy reputation.
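For the curious, Kademlia's core idea fits in a few lines: every node has an ID, and "closeness" between two IDs is simply their XOR interpreted as an integer, so any node can rank its peers by distance to a target. A toy sketch with made-up 4-bit IDs (real implementations use 160-bit IDs and per-bucket routing tables):

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia's distance metric: XOR the two IDs, compare as an integer."""
    return a ^ b

def closest(target: int, node_ids: list[int], k: int = 3) -> list[int]:
    """Return the k known node IDs closest to `target` under the XOR metric."""
    return sorted(node_ids, key=lambda n: xor_distance(target, n))[:k]

nodes = [0b0001, 0b0101, 0b1000, 0b1110]
# Distances to 0b0100: 5, 1, 12, 10 — so 0b0101 ranks first.
print(closest(0b0100, nodes, k=2))  # → [5, 1]
```

Lookups iterate: ask the closest known nodes for even closer ones, halving the remaining distance each round, which is what lets the network bootstrap from a single live node in a stale list.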

It's possible to go only halfway, and separate discovery from transmission. Peertube does that. You find a file to stream via ordinary HTTP to a server you find by ordinary web search means. Anybody can set up such a server. The actual streaming, for files wanted by many clients, is distributed, with people currently watching also sending out blocks to other people watching. This scales well, in case your video goes viral. It's not used much, though.

So it's definitely possible to do this without someone in the middle able to cut off your air supply.

pphysch 3 months ago

How is trusting a dynamic DNS provider different than trusting Tailscale's coordination nodes?

Animats 3 months ago

Not everybody has to use the same dynamic DNS provider.

transpute 3 months ago

  Competition
  Jurisdiction
  Resilience
  Biodiversity
ZhongXina 3 months ago

No, we definitely don't want "automatic IPSec" (especially IPSec!), or really any enforced encryption at the network level, even if it's something sane at this moment like WireGuard. Look at old VPN protocols or authentication schemes like RADIUS which have glaring security holes and are impossible to fix because of compatibility issues, and they're running at much smaller scales than the whole internet. Hell, the way the industry is solving TCP ossification problems is by throwing TCP away and reimplementing it on top of UDP, that should tell us something.

teddyh 3 months ago

Your argument seems to be to never implement anything, because eventually it will become old and it will be hard to move away from it? This seems to be an argument against anything new, and it is therefore hard to take seriously.

api 3 months ago

It’s an argument against complexity. IP had amazing longevity because of its simplicity and openness.

Even if something is open, complexity is almost as bad as closed, as we can see with crazy complicated web standards for which there are few implementations.

zbentley 3 months ago

Not GP, but I interpreted it as an argument to innovate/proliferate implementations early and often, but to standardize minimally and as late as possible.

snek_case 3 months ago

The argument is more that encryption technologies can become outdated quickly. You also make it harder for small embedded devices to implement network connections if you mandate that all traffic must be encrypted.

A simple protocol is more likely to last.

jeroenhd 3 months ago

IPSec is still widely deployed, securely even. All manual, of course, because it doesn't do the stuff that makes HTTPS easy.

IPSec is not always a VPN protocol. L2TP over IPSec is often used as one, but IPSec itself does little more than encrypt a tunnel between two IP addresses. IPSec in tunnel mode can serve as a minimal VPN, but it's rarely used that way without a second packet-encapsulation protocol, as it lacks authorization beyond key exchange.

As for the risk of ossification: that didn't go away with the current system either. HTTPS over TLS 1.3 looks like a TLS 1.2 session resumption on the wire (in its default configuration) because shitty middleboxes are common enough that anything else would impede the protocol.

The "let's remake TCP over UDP" approach QUIC takes has very similar origins. UDP is generally allowed by random firewalls, while other protocols better suited to this type of stuff, like SCTP, are not. The operating system doesn't allow opening raw network sockets without high privileges, so adding a new QUIC protocol alongside TCP and UDP, implemented at the "right" spot in the stack, wouldn't be usable for many devices. The same is true for the TCP stack: you have to use what the OS provides or get higher privileges to roll your own, and patching the OS's TCP state machine isn't practical. So, if you want to implement a better version of TCP optimised for web browsing and such, you use UDP, because while technically incorrect, it'll work in most cases and has the fewest restrictions.
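The privilege asymmetry described above is easy to demonstrate. A minimal sketch (behaviour shown is typical for Linux/macOS; whether the raw socket succeeds depends entirely on the privileges the process runs with):

```python
import socket

# A plain UDP socket — the substrate QUIC rides on — needs no special privileges:
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.close()
print("UDP socket: ok")

# A raw socket — what a brand-new transport protocol beside TCP/UDP would need —
# is typically denied to unprivileged processes:
try:
    raw = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
    raw.close()
    print("raw socket: ok (running with elevated privileges)")
except PermissionError:
    print("raw socket: PermissionError (the usual case for ordinary programs)")
```

This is the whole deployment argument in two syscalls: any app can ship the first one, almost none can ship the second.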

In the context of the network, IPSec is the new protocol here, not the result of ossification.

viraptor 3 months ago

ZeroTier does kind of do that. It's a tunnel, but the traffic is also direct (unless double NAT is involved), and if you could route the traffic directly to the endpoint IPs, you can skip ZeroTier. The location service can be self-hosted if you want; you don't have to use them as a service. Apart from DNSSEC, it's pretty much what you're asking for.

lockywolf 3 months ago

Double NAT is now almost everywhere in the world, except maybe USA.

viraptor 3 months ago

What kind of NAT though? You can use UPnP, predictable mapping, etc. and still allow the traffic through. And that's only with IPv4, because you can run ZeroTier over IPv6.

+1
throw0101d 3 months ago
p_l 3 months ago

You can't over double NAT, because the second layer of NAT is not going to support UPnP.

orangeboats 3 months ago

The vast majority of CGNATs across the world don't support the PCP protocol for predictable mapping.

sulandor 3 months ago

foreseeable yet still somewhat surprising that having a clean v4 address on the cpe has become a very privileged position.

just the other day i was discouraging a youngster from manually populating his hosts-file in order to circumvent a dmca-related dns block.... what has the world come to.

tomjen3 3 months ago

I think that is an excessively negative take. Tailscale's value proposition is also "you can connect to your network wherever you are, safely, and others cannot". That does not go away because of IPsec.

teddyh 3 months ago

Network- and location-based security is ultimately unworkable. It’s like if you, in order to work, had to go to a ”virtual office” to even send mail to your colleagues. Mail, and related internet-enabled services, should be accessible from anywhere, and be secured at the end points, not at the network layer. (Most attacks are internal, anyway.)

ndriscoll 3 months ago

Most people do need to be on a VPN or in an office to work. That's entirely normal, and makes sense even if you also require authentication for applications.

tomjen3 3 months ago

Why should you have access to the SSH host for my Pi?

Or, more to the point, the server that I use to run my RSS feed reader?

Or my NAS?

Tailscale makes these more secure and more accessible for me. They are never meant to have the world access them.

Now for email and a few other things, sure, their nature is that they need to access the world.

+1
teddyh 3 months ago
otabdeveloper4 3 months ago

No, network- and location-based security is a necessary and indispensable layer in your security stack.

> should be accessible from anywhere, and be secured at the end points, not at the network layer

If you're not securing at both the network layer and the endpoints, then you have utterly failed security and you need to go sit in the dunce corner.

PLG88 3 months ago

secured at the endpoints yes... I would argue you can go one step further, doing it at the application level. This is what we built (and open sourced) with OpenZiti (https://openziti.io/), the ability to embed an overlay network, built on zero trust and deny by default principles, directly into the app as part of the SDLC.

If you do this, your application has no listening ports on the WAN, LAN, or host OS network and thus cannot be attacked from the external network/IP.

The asymmetry of risk now favours the defender, not attacker. Oh, plus we also have pre-built tunnelers for endpoints if you cannot do app embedded.

depingus 3 months ago

Yggdrasil is p2p ipv6 e2ee. https://yggdrasil-network.github.io/

Last I checked, it hasn't solved DNS yet (there are unofficial projects trying to do that). I tested a small private network with a few devices and it worked very well.

otabdeveloper4 3 months ago

The problem with TCP/IP is the lack of a standard and robust VPN/overlay network protocol. Everything we have is extremely fragmented and/or proprietary.

IPv6 is completely useless and doesn't solve this problem.

Normal people don't care if they have to pay 5 dollars instead of 50 cents to rent an IP address. This is a problem specific only to the huge providers, and we don't need to roll out a whole internet upgrade just to optimize a tiny part of the operational costs for huge providers.

PLG88 3 months ago

We are trying to change that with OpenZiti - https://openziti.io/. It's an open source network overlay built with zero-trust principles and deny-by-default in mind. We also built it for developers, so it includes SDKs and other means to embed overlay networking directly into the SDLC.

otabdeveloper4 3 months ago

It's a complex problem that hasn't even been formulated properly yet.

For example, every existing solution touts "security" and yet completely mangles the difference between authentication and encryption.

Authentication is important - you don't want random servers or users to enroll on your network, and you want good tools to rotate and manage secrets.

Encryption isn't important unless you care about state-level actors sniffing your traffic at the backbone. (And if you care about that then you already have your own datacenter.)

Meanwhile encrypting all network traffic is a huge performance penalty. (Orders of magnitude for some valid use cases.)

benreesman 3 months ago

So far as I’m aware, TailScale has been at all times a good actor.

I have no problem criticizing tech companies, but I try to wait until they behave badly.

mike_d 3 months ago

> TailScale has been at all times a good actor.

This is the Cloudflare problem all over again. One day Matthew Prince will get hit by a bus, all the "trustworthy people" will leave, a PE firm will take the company private and merge it with an ad network. Congrats, the entire internet now has a single company's ads all over it, and we let it happen because we happened to like the people fucking us.

johnklos 3 months ago

Matthew Prince is definitely not a good actor, but that's not the point. What Cloudflare did was act like good people, say good things, even do some good things, but once they got enough business and momentum, they started doing shadier and shadier things, and now they're a protection racket that is happy to protect scammers for a fee. I think Cloudflare's most ardent fans would have trouble articulating technically valid reasons why it makes sense to re-centralize the Internet around them, yet that's exactly what Cloudflare wants.

That's why people don't necessarily, and shouldn't, trust that Tailscale won't head down the same path. It's hard enough for non-profits - heck, the Mozilla Foundation is losing all the good will they've ever had, and even the Raspberry Pi Foundation decided to gaslight people when they started eyeing corporate money.

If there's an open source way to do a thing that's a pain in the ass and a way to do the same thing from a for-profit company, I'll take the pain in the ass thing every time. History has shown it to be the prudent thing time and time again.

PLG88 3 months ago

Check out OpenZiti then - https://openziti.io/. It's Tailscale on steroids, with (IMHO) a much more scalable implementation of zero trust principles.

braginini 3 months ago

Or https://netbird.io which is open-source. You can host the coordination server too :)

eqvinox 3 months ago

> I have no problem criticizing tech companies, but I try to wait until they behave badly.

I'd rather not wait until they have a (quasi-)monopoly on something though. Twitter was great until…

nsonha 3 months ago

when was Twitter ever great? It has been creating echo chambers from day one, and deliberately making discourse difficult and non-nuanced. It's arguably the shittiest form of human communication, and that includes Facebook.

mrmetanoia 3 months ago

Wouldn't the point be that they're an indecent, possibly bad, actor by default, since they're a business at all rather than just creating or contributing to protocols/standards to resolve the issues their product relies on to exist? The only way they could be a good actor is if they were using the money from their sales to fund that initiative with a plan to obsolete themselves.

I suppose if you follow that thread though a lot of businesses just shouldn't exist except for fulfilling the need they fill for the sake of those in need.

jtwoodhouse 3 months ago

Companies are allowed to solve problems for a profit. People can choose to sell their time and energy or give it away. The choice is the default.

In fact, I prefer that capitalist model at this point having seen countless OSS/nonprofit efforts turn into glorified abandonware.

At least the business has an interest in remaining a going concern and maintaining the stack.

wmf 3 months ago

BTW the best way to make standards happen is to sell a product based on the standard. Academic standards don't go anywhere.

benreesman 3 months ago

I have no idea what the adoption is, but this reminds me of the really nice work the buf.build people are doing with ConnectRPC.

I have a SaaS-crush on buf because they did such a good job on fixing such an annoying problem.

DyslexicAtheist 3 months ago

I never thought of this. Forces me to rethink every negative post people made against DNSSEC which shaped my opinion. I still think that IPv6 and DNSSEC do more harm in practice than what they solve. Maybe the SCW podcast can do a deep dive on this together with somebody who is militantly pro-DNSSEC. <3 ...

edit: maybe even invite 2 or 3 DNSSEC advocates @tptacek :)

tptacek 3 months ago

I don't think the analysis upthread should make you rethink DNSSEC, since it, too, is a centralized system; rather than being controlled by Avery Pennarun (you could do worse), it's controlled by an unholy alliance of world governments and companies like Verisign.

If we could find a credible DNSSEC advocate (for our audience; that is: a cryptography engineer, vulnerability researcher, or an engineering leader at a major firm), we would absolutely invite them on.

'teddyh below gave you links to two pro-DNSSEC resources; fun note: the latter source (Geoff Huston, one of the world's more respected networking researchers) has since then written this:

https://blog.apnic.net/2024/05/28/calling-time-on-dnssec/

DyslexicAtheist 3 months ago

thanks, much appreciated to you and teddyh for these links. really needed these opposing views.

teddyh 3 months ago

The title of the article you link to is “Calling time on DNSSEC?”, and Betteridge's law of headlines applies to it. Here’s the final paragraph from that article:

> I guess the question we should be asking is — if we want a secured namespace what aspects should we change about the way DNSSEC is used to make it simpler, faster, and more robust?

tptacek 3 months ago

I'm happy just to see more people reading it. People can make their own call about it.

teddyh 3 months ago
jeroenhd 3 months ago

I may be in favour of DNSSEC, but I admit it's time for a v2 of the RFC that removes the stupid "validation can't be done at the endpoint" restriction. In practice you can just turn on validation on many computers and gain its benefits, especially if used in the manner described here, where you can block connections to unprotected hostnames to work around the most glaring issue; but the whole spec is written for a world we've moved beyond.

IPv6 doesn't have that problem, though.

DyslexicAtheist 3 months ago

thanks!

sulandor 3 months ago

> The eternal problem with companies like

it's not a problem specific to any kind of corporation, or corporations per se, but to organizations or, even more broadly, solutions.

though, do you really think that having a solution to a problem is worse than just having the problem?

teddyh 3 months ago

It is a problem if a company makes a lot of money ”solving” the problem, but:

• This does not really solve the problem, since a real solution would be to change the internet to make the problem go away

• A company making a lot of money gets to have an enormous influence on what is considered reasonable to standardize on. See for instance Google’s and Microsoft’s influence on things like the W3C. (Or if Tailscale is allowed to define what ”The New Internet” will be.)

sulandor 3 months ago

seems indeed that microsoft is making lots of money by defending the status quo

teddyh 3 months ago

All large incumbents defend the status quo, except when advocating for larger barriers to entry for new and smaller competitors.

spacecadet 3 months ago

Companies like the ones you list solve people problems. Their business is about using abstraction to create customer experiences that match market demand. I want to say that "institutions" solve computer science problems, but it's much more complicated than that.

pvillano 3 months ago

Cloudflare sells bulletproof vests

throwaway211 3 months ago

[flagged]

smolder 3 months ago

This is a poor analogy. Historically there is a significant cost to making bad cars with frequent repair needs.

neuralRiot 3 months ago

As somebody working in the modern automotive field I would like to disagree. The only incentive for car manufacturers is price point vs warranty period; after that you’re on your own. “Durability” and “reliability” are no longer selling points for any automaker.

listenallyall 3 months ago

As someone not in the field, do you believe there is a true, objective, statistically valid way to tell which manufacturers (or which specific models) are more durable than others? All I see are consumer surveys, JD Power (more surveys), etc which, like most surveys, have wide error margins, and overall seem rather anecdote-based, and don't necessarily account for different stresses placed on the car based on driving style, diligence of maintenance, climate, etc.

sulandor 3 months ago

this is a poor statement. the cost is not in dispute, but the bearer of it.

historically car owners need to pay for repairs.

+1
smolder 3 months ago
whoistraitor 3 months ago

I.e. a form of perverse incentive, or the cobra effect. Endemic to capitalism, especially in infrastructure.

jclulow 3 months ago

An incredibly long ramp up to complaining about centralised control by rent seekers (a very reasonable complaint!) which gets bogged down in some ostensibly unrelated shade about whether client-server computing makes sense (it does) or is itself somehow responsible for the rent seeking (it isn't; you can seek rent on proprietary peer to peer systems as well!) to then arrive at:

> There’s going to be a new world of haves and have-nots. Where in 1970 you had or didn’t have a mainframe, and in 1995 you had or didn’t have the Internet, and today you have or don’t have a TLS cert, tomorrow you’ll have or not have Tailscale. And if you don’t, you won’t be able to run apps that only work in a post-Tailscale world.

The king is dead, long live the king!

1vuio0pswjnm7 3 months ago

"...you can rent seek on proprietary peer to peer systems as well..."

I still use a non-proprietary one that predates Tailscale and that is not OpenVPN. It is small and simple enough even I, a non-programmer, can make modifications.

It's possible one ends up using client-server in order to achieve peer-to-peer because not everyone has an internet-reachable, non-firewalled IP address. Using some hosting company's server to run a "supernode" may be required. No traffic needs to pass through it if it is used only as a "rendezvous server" so the cost can be minimal.

Companies that try to compete with "free" always draw high scrutiny from me: "Stop using that free software and start paying us. We added 100 unnecessary 'features'."

Not doubting this "corporate strategy" can succeed, at least short-term. Look at Slack. But these subscriptions are not for me.

Client-server versus peer-to-peer is misdirection. The real issue is proprietary versus non-proprietary. IMHO.

HumanOstrich 3 months ago

What is the non-proprietary option you are referring to?

genpfault 3 months ago
Borg3 3 months ago

tinc-vpn is great. I use it to build L2 mesh islands and then Quagga to route between them.

genewitch 3 months ago

Not sure if the parent means WireGuard, but my GitHub page has a way to get around CGNAT using WireGuard, for use with a Nintendo Switch (or any wifi/etc device that doesn't run an editable OS).

+1
1vuio0pswjnm7 3 months ago
rsync 3 months ago

Hopefully referring to the (excellent) sshuttle:

https://github.com/sshuttle/sshuttle

... which allows you to turn any system you have an ssh login on into a VPN endpoint.

+1
TheFlyingFish 3 months ago
ssss11 3 months ago

Lol. The Tailscale CEO is preaching “tomorrow you’ll have or not have Tailscale. And if you don’t, you won’t be able to run apps that only work in a post-Tailscale world.”??

That won’t go down well in 10 years if they don’t become Microsoft-scale juggernauts.

IAmNotACellist 3 months ago

Or they'll just get bought by Microsoft, Amazon, or Cloudflare and that'll be that

I like Tailscale just because it's OpenVPN without the unbearable agony of setting it up so it actually works

01HNNWZ0MV43FF 3 months ago

Yeah it's a weird flex. I'd use Tailscale today if it was open all the way up and down.

If not, why bother? TLS and http don't charge licensing fees...

p_l 3 months ago

You can use Tailscale without using Tailscale-hosted components, using purely open source parts.

I have switched where possible, both my own networks and clients', to Headscale, which is a fully open source coordination server compatible with Tailscale.

cookiengineer 3 months ago

Honestly, I kind of missed Hamachi in the last decades.

It was such a superb and easy to use tool to design/configure your own private networks at the time. Filesharing, local game LANs, development cooperation, heck, even media streaming was so easily done at the time.

Personally I think the future of peer to peer isn't Tailscale; it's more something along the lines of a self-hosted Hamachi variant that's able to generically put nodes together from across different NATs and ASNs, understanding NAT-breaking techniques and STUN/TURN/turtle routing.

A tool like this that could also allow remote users to chime in without a centralized VPN gateway would be a killer feature for the modern world.
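The NAT-breaking core of such a tool is small; the hard parts are discovery and the long tail of NAT behaviours. A loopback-only sketch of the "simultaneous send" step (in a real deployment each peer would learn the other's public endpoint from a STUN-style rendezvous server rather than from `getsockname`, and the addresses here are just local test sockets):

```python
import socket

# Two peers, each on its own UDP port. On loopback we can hand each side the
# other's (address, port) directly; across real NATs this tuple would come
# from a rendezvous server.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.settimeout(5)
b.settimeout(5)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))
a_addr, b_addr = a.getsockname(), b.getsockname()

# Each side sends outbound to the other's endpoint. Against a typical NAT,
# this simultaneous outbound traffic is what opens the mapping ("hole punching").
a.sendto(b"hello from a", b_addr)
b.sendto(b"hello from b", a_addr)

print(a.recvfrom(1024)[0].decode())
print(b.recvfrom(1024)[0].decode())
a.close()
b.close()
```

Once both datagrams get through, the peers have a direct path and no relay ever carries their traffic; TURN-style relaying is only the fallback for NATs where this fails.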

wmf 3 months ago

That sounds a lot like Headscale.

cookiengineer 3 months ago

The issue I have with Tailscale/Headscale is that their focus isn't on being an end-user app that people can start on demand.

Hamachi was different because a child could use it (literally). It was designed like an instant messenger, and you could easily create groups and invite friends for a LAN party. No IP masks, no hashes, none of that complicated stuff was necessary.

I'd only see maybe a tool that was built on top of headscale that could do that, but headscale's focus is too far off for something like that, and in my opinion too low level.

mystified5016 3 months ago

Rent seekers are bad! Don't you all hate landlords?! Now let me tell you why you should pay rent to me as well as everyone you currently pay rent to!

rsync 3 months ago

"An incredibly long ramp up ..."

Agreed. We would all do well to learn about, and begin implementing, "Iceberg Articles":

https://john.kozubik.com/pub/IcebergArticle/tip.html

Esras 3 months ago

This feels like an overly complex treatment of the Inverted Pyramid in journalism: https://en.wikipedia.org/wiki/Inverted_pyramid_(journalism), or Bottom Line Up Front: https://en.wikipedia.org/wiki/BLUF_(communication).

Start with the important statements, then expand. Doesn't have to be the "Tell you what I'm telling you, tell you, tell you what I told you" format that many (American) students were taught, but starting with your thesis statement does help ground it.

On the other hand, the topic blog is somewhat of a story, and I can hear the presentation being given behind it. It's just translated 1:1 to a blog, which is a different medium.

ragall 3 months ago

BLUF is bad; it's precisely a technique born in the world of newspaper publishing for writing catchy articles (what is now called clickbait). Classical philosophical writing is the exact opposite: start with some problems, elaborate in high detail, and finish with a conclusion (the name says it all).

catalypso 3 months ago

Clickbait is BLUF with a deceptive bottom line (BL). Clickbait is bad. You can choose to write in BLUF style without that.

In my experience, I only prefer "classical philosophical writing" when I'm already committed to reading the content (e.g. I know the author, or I'm interested in the subject).

In almost all other cases, I prefer BLUF format: i.e. "get to the point, I'll read more if I'm intrigued".

k_bx 3 months ago

I'm one of the people who actually use Tailscale for production systems: there are servers physically close to me, or at other controlled locations, and then there are hundreds of users hundreds of kilometers away, all working via Tailscale.

I should say two things. Tailscale is amazing and I love it. The system could not exist without it, or I'd have to have at least ten more people in my team to manage all this 24/7. It's working, and it's good enough.

That being said, you do need to lower your expectations: it's not as good as "the internet". The latency spikes periodically, the connection drops sometimes, the MagicDNS just magically stops working or interferes with the system. Since we have many users, we've encountered every possible problem one can encounter, and then there's still something new you'll see tomorrow.

In any case, we believe in Tailscale and its vision. It's a categorically new approach that simultaneously gives you control over the hardware, reduces cost, and improves security. Our first big production server was a 4-core Linux laptop!

We love Tailscale and we wish the product a prosperous life and continued development. Thank you, TEAM TAILSCALE!

aquariusDue3 months ago

Would you mind going into more detail about the 4-core Linux Laptop as a production server via Tailscale, please? I too use Tailscale and love it for self-hosting internal stuff but I never thought about using it for public facing production stuff. Now I'm really curious to hear more about your setup (if you're willing to share of course).

phantompeace3 months ago

Not the person you’ve asked but I regularly use it for backends for projects. Connect your database machine to your Tailscale network and have your code talk over the Tailscale IP to connect to it. I’ve served multiple Flask frontends that have talked to backends this way - works great! I use various cloud VMs to host the frontend, and they all talk to an Optiplex Micro box under my desk for their backends.
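Not from the thread, but the pattern described above is simple enough to sketch. Everything below (the tailnet IP, credentials, database name) is a hypothetical placeholder, not the commenter's actual setup:

```python
import os

# Hypothetical tailnet IP of the database box. Tailscale assigns addresses
# out of the 100.64.0.0/10 CGNAT block, so it will look something like this.
DB_HOST = os.environ.get("DB_HOST", "100.101.102.103")

def build_dsn(user: str, password: str, dbname: str, host: str = DB_HOST) -> str:
    """Build a Postgres-style DSN pointing at the backend's tailnet IP.

    To the application this is just an ordinary TCP address; encryption and
    NAT traversal happen underneath in the WireGuard/Tailscale layer, so no
    port forwarding or public exposure of the database is needed.
    """
    return f"postgresql://{user}:{password}@{host}:5432/{dbname}"

# The frontend on a cloud VM would hand this DSN to its usual DB driver.
print(build_dsn("app", "secret", "analytics"))
```

The point is that nothing in the application layer knows about the VPN at all; the tailnet IP behaves like any other reachable host.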

k_bx3 months ago

This, plus you get a free domain name with a working HTTPS cert.

k_bx3 months ago

It's very simple. We needed to begin somewhere, and until we had at least 10-20 active users loading our database with their analytics queries, we wouldn't need much in terms of compute power and speed. However, we do have electricity stability problems sometimes (and internet outages), so something with its own battery and the ability to switch to backup wifi/LTE was a great choice as a start.

So that laptop was a "free server" we already had. It's now replaced by a much beefier miniPC.

ash3 months ago

> there are hundreds of users hundreds kilometers away, all working via Tailscale

Do you require your users to install Tailscale?

figassis3 months ago

I love Tailscale, but this post gives me the creeps. The internet succeeded because it was built on standards and was completely free. With Tailscale, I get that WireGuard is open source and we have things like Headscale. But the whole "everyone gets an IP" thing, doesn't it depend on Tailscale owning a massive IP address space? We can all wait until the full IPv6 rollout, or we can depend on centralized IPv4, servers, and proprietary stuff. Maybe a bit hypocritical?

jmprspret3 months ago

You can self-host a Tailscale control server with Headscale[1]. It's not quite at feature parity with Tailscale, but it supports most if not all of the current feature set, and it's improving every day. One of the lead devs is even paid by Tailscale to work on it, IIRC.

I run it for my personal self-hosted infra, and it works really well. Setting a custom control server URL is relatively easy (at least on Windows and Android which I use).

I use Taildrop, I serve Docker containers to the tailnet, etc. Headscale works really well and is worth a go.

1: https://github.com/juanfont/headscale
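For the curious, pointing an official client at Headscale takes only a couple of commands. A sketch: the host name below is a placeholder, and the exact flag names have varied between Headscale releases (older versions used "namespaces" instead of "users"), so check the project docs:

```shell
# On the Headscale server (hypothetical host headscale.example.com):
headscale users create alice
headscale preauthkeys create --user alice

# On the client: log in against your own control server
# instead of Tailscale's default control plane.
tailscale up --login-server https://headscale.example.com --authkey <generated-key>
```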

password43213 months ago

The question is: how long will Headscale be supported in the official clients - how long will the incentives of Tailscale's VCs align with the freeloaders?

The official clients (most valuable: the polished mobile apps easily installed from the default app stores) are one auto-update away from cutting ties when push comes to shove, the same as all commercial VPNs with a free tier.

jmprspret3 months ago

The clients are the open source part of Tailscale. They can be forked and built by someone else if required.

However I do not think Tailscale is going to remove the custom control URL feature from their mobile clients. For one, I think there are legitimate "Tailscale Enterprise" use-cases for the custom login server.

Additionally, I have heard that Tailscale has been supportive of the Headscale project, providing assistance to the devs.

Further, Tailscale seems fairly committed to keeping their clients open source and to engaging with the developer community. Of course, as you say, this can change at any time.

figassis3 months ago

I think clients are the least to worry about. They can be built by someone else if the need arises.

cpach3 months ago

Cool! Any important features you miss when running Headscale?

p_l3 months ago

Mostly support for features relevant to multi-tenancy - the official Tailscale service does things like separate "tailnets" that belong to different accounts with different SSO, while letting you share access to hosts between tailnets with ACL rules, etc. Also Tailscale Funnel, which uses a Tailscale-hosted service to provide ingress to hosts behind the VPN.

And of course the API used to manage the official server, so the rare things that depend on it won't work - but that's more a case of the project not having a need to work on it.

jmprspret3 months ago

Nothing that I've noticed. I actually have never run vanilla Tailscale without Headscale so I'm not sure.

I think auto TLS requires some extra config, and DNS rules. I don't use it so I'm not sure.

mrbluecoat3 months ago

DoH DNS support (beyond the single existing NextDNS option)

yegle3 months ago

100.64.0.0/10 is a reserved IP block for carrier-grade NAT.

metadat3 months ago

More info about Carrier-Grade NAT (for others who, like me, are only encountering this term for the first time):

https://en.wikipedia.org/wiki/Carrier-grade_NAT

Can anyone enlighten me regarding what is different or special about 100.64.0.0/10 vs. say, 192.168.0.0 or 10.0.0.0?

Edit: Answered my own question by digging into more wikis, there is a helpful table of reservations and intentions here: https://en.wikipedia.org/wiki/Reserved_IP_addresses

throw0101d3 months ago

> Can anyone enlighten me regarding what is different or special about 100.64.0.0/10 vs. say, 192.168.0.0 or 10.0.0.0?

A bit of context: if an ISP cannot get enough IPv4 addresses for the WAN-side of people's home routers, some problems exist:

* something in 192.168/16 is generally used for the LAN-side of people's home routers, so that cannot be used on the WAN side

* 10/8 is used for business/enterprise corporate networks, so it also cannot be used on the WAN side, because if people VPN into the corporate network, the router may get confused

* similarly for 172.16/12: often used for corporate networks

So the IETF/IANA set aside 100.64.0.0/10, as it had no 'legacy' of use anywhere else, and it is specifically called out to be used only by ISPs for CG-NAT purposes. This way its routing does not clash with any other use (home or corporate/business).

    IPv4 address space is nearly exhausted.  However, ISPs must continue
    to support IPv4 growth until IPv6 is fully deployed.  To that end,
    many ISPs will deploy a Carrier-Grade NAT (CGN) device, such as that
    described in [RFC6264].  Because CGNs are used on networks where
    public address space is expected, and currently available private
    address space causes operational issues when used in this context,
    ISPs require a new IPv4 /10 address block.  This address block will
    be called the "Shared Address Space" and will be used to number the
    interfaces that connect CGN devices to Customer Premises Equipment (CPE).
* https://www.rfc-editor.org/rfc/rfc6598.html
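The non-overlap described above is easy to check with Python's standard library (a throwaway sketch, not from the RFC):

```python
import ipaddress

# RFC 6598 shared address space for CGNAT, and the classic RFC 1918
# private ranges. 100.64.0.0/10 overlaps none of them by design.
CGNAT = ipaddress.ip_network("100.64.0.0/10")
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def classify(addr: str) -> str:
    """Say which reserved block (if any) an IPv4 address falls into."""
    ip = ipaddress.ip_address(addr)
    if ip in CGNAT:
        return "shared (CGNAT, RFC 6598)"
    if any(ip in net for net in RFC1918):
        return "private (RFC 1918)"
    return "other"

# Tailscale happens to hand out addresses from the CGNAT block:
print(classify("100.100.100.100"))  # shared (CGNAT, RFC 6598)
print(classify("192.168.1.1"))      # private (RFC 1918)
```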
linsomniac3 months ago

>But the whole everyone gets an IP, doesn’t it depend on Tailscale owning a massive ip address space?

No, because Tailscale isn't "the Internet", it is a bunch of disconnected moats. The IP space needed by Tailscale only has to be as big as the largest moat. And you can only be connected to a single moat at a time.

jgalt2123 months ago

If you had to move off of tailscale, what would you move to?

OJFord3 months ago

Zerotier is I think the obvious answer? I haven't used it though; it's more proprietary, not less.

ssl-33 months ago

AFAIK, Zerotier is about equally proprietary, more-free (as in beer), and has been doing the node-to-node mesh thing instead of spoke-and-hub longer than Tailscale has been in existence.

And if I remember correctly, ZT was initially created to provide something like this "New Internet" concept that Tailscale has apparently recently discovered, except they called it "Earth" and abandoned it in 2023.

(Some things don't change, I guess.)

viraptor3 months ago

Kinda? It works great in practice. You can run your own controllers if you want which completely disconnects you from the proprietary service. But the code is BSL.

OJFord3 months ago

I didn't mean to suggest it doesn't work well, as I said I've not used it.

It's still proprietary if you self-host it, I was thinking in particular that tailscale uses Wireguard and Zerotier uses something custom, i.e. proprietary. Note that the context was:

> The internet succeeded because it was built on standards and was completely free. With Tailscale, I get wireguard is open source and we have things like Headscale. But [...]

to which the commenter I replied to asked of alternatives. So I wasn't saying tailscale great and open and standards compliant, and Zerotier not; I was saying it's the obvious competitor but if that's your problem with tailscale then it's if anything worse in that regard.

Fnoord3 months ago

I use WireGuard. As you add more keypairs, it becomes a bit of a nightmare to maintain, though Vim with syntax highlighting helps a lot.

Because of this, I'll be switching to Headscale + Tailscale.
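For context on why this stops scaling: a plain WireGuard config is an INI file where every machine needs its own [Peer] block, and a full mesh means repeating this bookkeeping on every node. Keys and IPs below are placeholders:

```ini
[Interface]
PrivateKey = <this-node-private-key>
Address = 10.0.0.1/24
ListenPort = 51820

# One [Peer] block per machine; for a full mesh, every node must list
# every other node, so N machines means N-1 blocks in each of N files.
[Peer]
PublicKey = <laptop-public-key>
AllowedIPs = 10.0.0.2/32

[Peer]
PublicKey = <phone-public-key>
AllowedIPs = 10.0.0.3/32
```

Automating exactly this key distribution and peer bookkeeping is what Tailscale/Headscale adds on top of WireGuard.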

chgs3 months ago

It depends on your use case. I use wg back to two geographically independent locations, keys are managed via our ipam.

I don’t need east-west (EW) traffic over the VPN; it's very north-south (NS) based. Something like Headscale or another SD-WAN solution (automatically establishing VPN routes) would make sense if I needed to transport a lot of traffic E-W; that's just not a requirement.

yjftsjthsd-h3 months ago

I think nebula is the obvious FOSS competitor? With the unfortunate exception of the Android client being closed source.

sph3 months ago

I use Nebula because its iOS client does not drain my battery. Tailscale has had that known bug for years and they never managed to fix it, which is a major deal breaker.

jacooper3 months ago

They have released a slew of updates recently to fix this, and they did a complete rewrite of the Android app

PLG883 months ago

OpenZiti would be another - https://openziti.io/. I work on the project. One issue with Nebula is provisioning new clients with identities. It's not completely open sourced by the Nebula company.

dandanua3 months ago

I think Nebula is much, much closer to the "new internet". Lighthouse nodes can serve as untrusted brokers that help to connect everyone securely. No need for a central authority with God-like importance, as the Tailscale CEO obviously wants to have.

mrbluecoat3 months ago

NetBird is a promising option. OpenZiti is another. ZeroTier hasn't evolved much, IMHO. Would also love to see someone breathe new life into https://github.com/omniedgeio/omniedge

imiric3 months ago

I like Tailscale, but this reads as too self-aggrandizing.

You have a mesh VPN product with some value-added services on top of it. That's great, but this idea isn't novel or unique. Why should your solution be the "new internet" instead of any of the alternatives?

I wouldn't want to rely on a single company for all my internet infrastructure, anyway. So I'll stick with the traditional internet with all its complexity. Its major problems aren't technical but social, and no new technology will solve those.

eddythompson803 months ago

> Its major problems aren't technical but social, and no new technology will solve those.

Really? Isn't the major problem of the current internet the inherent centralization of services, because the initial promise of a 100% decentralized network is simply too complex to realistically manage? I view that problem as deeply technical. Unless by "social" you simply mean everyone should become an experienced sysadmin (or the slight variation: everyone should know an experienced sysadmin who's willing to run their application for them for free).

Take something as mainstream as social media. Imagine a world where Facebook/Twitter/TikTok/YouTube/Reddit/HN/etc. worked (seamlessly) like BitTorrent. When you run an application on your machine, it joins a "Facebook" network where your friends see you online through their instance of the application. Your feed/wall/etc. is served to them directly from your machine. All your communication with them is handled directly between the 2 (or 1000, or millions) of you. No centralized server needed. You could easily extend and apply this to the majority of centralized applications today. The only ones I can think of where this wouldn't work would be inherently centralized services, like banking for example.

There are already plenty of p2p networks that show that this is a viable solution. Bittorrent, soulseek, bitcoin, etc.

All the problems you will run into however to make this as seamless as just connecting to facebook.com are purely technical. The initial big hurdle is seamless p2p connectivity. That is without port forwarding, dynamic dns, and requiring advanced networking, security, and other sysadmin knowledge from every user. Next would be problems like what happens when the node is offline? What happens to latency and load if you need to connect to thousands, hundreds of thousands, or millions of machines just to pull a "feed"? How is caching handled? How are updates/notifications pushed? How do nodes communicate when they are wildly out of date? Where is your data stored? How do you handle discoverability, security, etc.

All deeply technical problems. Most are solvable, but you're gonna have to invest a significant amount of effort to solve them one by one to reach the same brain-dead simple experience as a centralized service. The fediverse has been trying to solve just a small subset of these problems for over a decade now, and the solutions still require a highly capable sysadmin to give users a similar (or only slightly worse) experience than twitter.com.

imiric3 months ago

> Isn't the major problem of the current internet the inherent centralization of services, because the initial promise of a 100% decentralized network is simply too complex to realistically manage?

Not quite. The internet _is_ decentralized. What made the web so centralized from the start could partially be the result of lacking tools that made publishing as easy as consuming content. I.e. had we had a publishing equivalent to the web browser, perhaps the web landscape would've been different today. You can see that this was planned as phase 2 of the original WWW proposal[1] ("universal authorship"), but it never came to pass AFAIK.

So you could say that the problem is partly technical. But it's uncertain how much this would've changed how people use the web, and if companies would've still stepped in to fill the authorship void, as many did and still do today. Once the web started gaining global traction in the early 90s, that ship had sailed. People learned that they had to use GeoCities to publish content, and later MySpace, Facebook and Twitter. These services gained popularity because they were popular.

There have been many attempts over the years to decentralize the web, but now the problem is largely social. As you say, we've had the fediverse for over a decade now. How is that going? Are technical issues still a hurdle to achieve mass adoption, or are people not joining because of other reasons? I'd say it's the latter.

Most people simply don't care about decentralization. They don't care about privacy, or that some corporation is getting rich off their data. They do care about using a service with interesting content where most of their contacts are. So it's a social and traction issue, much more than a technical one. The only people who use decentralized services today are those who care more about the technology than following the herd. Until you can either get the average web user interested in the technology, or achieve sufficient traction so that people don't care about the technology, decentralized services will remain a niche.

There is another technical aspect to this, though. Even if we could get everyone to use decentralized services today, the internet infrastructure is not ready for it. Most ISPs still offer asymmetrical connections, and residential networks simply aren't built for serving data. Many things will need to change on the operational side before your decentralized dream can become reality. I think this landscape would've also been different had the web started with decentralized principles, but alas, here we are.

[1]: https://info.cern.ch/hypertext/WWW/Proposal.html

pdimitar3 months ago

> As you say, we've had the fediverse for over a decade now. How is that going?

Convenience trumps everything. All the parts of the iPhone existed for a few years before it -- especially PDAs with touch pens -- but what made the iPhone succeed was putting everything into a convenient, easier package.

The amount of time worked on thing X has almost zero correlation with its adoption, as I think all of us techies know.

> Even if we could get everyone to use decentralized services today, the internet infrastructure is not ready for it. Most ISPs still offer asymmetrical connections, and residential networks simply aren't built for serving data.

While that is true, let's not forget half-solutions like TeamViewer's relay servers, Tailscale / ZeroTier coordinators, and many others. They are not a 100% solution, but then again nothing is nowadays; we have to start somewhere. I agree that many ISPs would be very unhappy with a truly decentralized architecture, but the market will make them fall in line. I have no sympathy for some local businessmen who figured they could run off with tens of millions on a $50K investment. Nope, they'll have to invest more or be left out.

So there would be a market reshuffling and I'm very OK with it.

---

But how do we start off the entire process? I'd bet on automated negotiation between nodes + making sure those nodes are installed on many more machines. I envision a Linux kernel module that transparently keeps connections to a small but important subset of this future decentralized network, and the rest becomes just API calls that would be almost as simple as the current ones (barring some more retry logic because, f.ex., "we couldn't find the peer in one full minute"). I believe many devs would be able to handle it.

pjkundert3 months ago

The Holochain project has spent the last 5 years solving each of these problems…

You can now build Internet-scale distributed systems, with or without centralized infrastructure (e.g. DNS, SSL certs, etc.).

In other words, massively distributed apps without any means for centralized authorities to stop them.

intelVISA3 months ago

What's Tailscale's value prop? It's just a kludge of various FOSS heavyweights..?

p_l3 months ago

Tailscale's value proposition is that it just works: all the parts you normally have to assemble yourself come already built and connected, with value-adds in terms of ACL support and things like Funnel (exposing services over HTTPS on nodes that don't have a public IP) and Taildrop (easy file sharing).

Also various integrations, like tailscale k8s operator.
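As a rough illustration of the serve/funnel distinction: the port below is arbitrary, and the CLI syntax has changed across Tailscale releases, so treat this as a sketch and check `tailscale serve --help`:

```shell
# Expose localhost:3000 over HTTPS to other machines on your tailnet only
tailscale serve --bg 3000

# Expose it to the public internet via Tailscale's relay infrastructure
tailscale funnel --bg 3000
```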

compootr3 months ago

precisely. Isn't every business that plus some custom stuff & employee time?

i.e., isn't some business just a kludge of FOSS heavyweights? Say, for example, when they write an app in some open source language, deploy it on an open source OS with open source orchestration, etc.

I think Tailscale is a lot of FOSS software, with the utility that it lowers the barrier to entry massively.

pdimitar3 months ago

Oh, you mean like most IT businesses nowadays?

9dev3 months ago

Oh please. The value proposition is that they spend time so you don’t have to, on something that isn’t your actual product. If you don’t understand why that provides value to other businesses, you probably don’t run one?

ZoomZoomZoom3 months ago

Isn't yggdrasil[1] supposed to be the New Internet?

If not, why Tailscale specifically, and not Netbird, Nebula, Netmaker or some other competitor?

The article is indeed very well written, but gives off the wrong vibes, like something's coming: acquisition, pivot, split, shutting down, etc. Also, "we're just getting started", the famous last words.

Just to balance my healthy mistrust, I'd like to add that I'm a satisfied Tailscale user, mostly impressed with how little it requires of me to just work.

[1] https://yggdrasil-network.github.io

tionis3 months ago

Yggdrasil would indeed better fit the description. It should, however, be noted that, while it does work great, it is still a research project.

metadat3 months ago

I really enjoy and appreciate the tailscale service, but this article didn't click for me. I love an inspiring CEO rally speech as much as the next early adopter, and agree that there is a ridiculous amount of developer friction and complexity in computing, but tailscale still has its own friction and isn't on track to solve the big picture issues _at all_.

As a concrete example, a few weeks ago, I invited my dad to my tailnet with the intent of using remote desktop into his machine to help him fix something. He accepted the invite, and then I couldn't ping his machine despite it appearing in my TS domain web interface.

Now he hates tailscale, and I lost credibility because prior I told him how awesome it is. In his view, it wasted his time and doesn't "work right", and metadat is a fool.

Cyphus3 months ago

Is your dad running Windows? Windows Firewall is known to block ICMP traffic, a problem that neither Tailscale nor any other p2p VPN can solve.

metadat3 months ago

Maybe, but even ICMP pings? He also couldn't ping my systems, it seemed really broken.

eddythompson803 months ago

Ping uses ICMP. Windows blocks ICMP by default, so yes `ping <windows-host>` doesn't work by default. Is your system your father was trying to ping a Windows system as well?

The other thing to check is if he was running another VPN on his machine at the same time. Running multiple VPNs at the same time (both Windows and Linux) requires extra fiddling to map the routing correctly to prevent their rules from overlapping/breaking each. https://tailscale.com/kb/1105/other-vpns
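If Windows Firewall does turn out to be the culprit, inbound ICMPv4 echo can be allowed with a standard firewall rule (not Tailscale-specific; run in an elevated command prompt; the rule name is arbitrary):

```shell
:: Allow inbound ping (ICMPv4 echo request, type 8) through Windows Firewall
netsh advfirewall firewall add rule name="Allow ICMPv4 ping" protocol=icmpv4:8,any dir=in action=allow
```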

skybrian3 months ago

I don’t know if it’s the same issue, but the problem I ran into is that I misunderstood how it works for families who just use gmail addresses. It’s quite counter-intuitive. The organization stuff isn’t for you - instead each person creates their own tailnet and you connect them. See:

https://github.com/tailscale/tailscale/issues/10731

metadat3 months ago

If this was the source of issues, it's still a product failure. I hope they will address it.

If I can't figure it out, 99% of others won't either.

sulandor3 months ago

Probably a new network interface gets created, and either your dad or Windows decided that it was not a "home/personal", i.e. trusted, connection where answering pings or RDP is a good idea.

TeamViewer, AnyDesk, et al. are made for the task.

bomewish3 months ago

ACL issue?

iovoid3 months ago

I think the author misdiagnoses the problem, and the proposed solution simply hides the centralization instead of removing it.

The reason AWS is expensive is not because of IPv4, or the datacenters. It's mostly in their software/managed offerings, and the ability to quickly add more servers. If you are a "serious company" and you don't want to pay AWS or a similar company, renting a rack and colocating your own servers (either within your premises or in a datacenter) is doable and done by lots of companies.

I disagree that certificates have caused centralization, and they're not something separating the haves and have-nots and are in no way comparable to having or not a mainframe. HTTPS becoming pseudo-mandatory didn't push people into having their own (sub)domains, which is nowadays the only requirement to obtain a certificate. It already happened out of convenience.

The other point of centralization mentioned is DNS, which tailscale doesn't avoid at all. MagicDNS still relies on the ICANN root, as does the tailscale control plane. And if all you wanted was a free subdomain, there are plenty of people offering that.

If you are behind CGNAT, tailnets aren't particularly less centralized, as traffic has to flow through the DERP servers. I doubt tailscale can keep providing these free of charge when the volume is in the tbps instead of the gbps.

I agree that tailscale (and similar solutions) help in the last remaining case, which is accessing your computer that is behind a NAT. I even think they could reach the dozens of millions of users. This is, in my opinion, not enough to claim the title of "the new internet".

j2kun3 months ago

This rests on the incorrect assumption, pointed out in the post, that most applications need the kind of scale that warrants quickly scaling to more servers.

mrkeen3 months ago

The article tried to make k8s all about scale, when that's not the whole story.

On other socials, a screenshot of the 'Not scaling' section is getting responses of "Those idiot developers think they need k8s scaling for their 1 req/s sites, ha ha."

The author brags about being able to (skip testing, CI/CD pipelines and just) edit their perl scripts (in prod,) really quickly.

What uptime is associated with that practice? As many 9's as it takes for Brad to debug his perl program in prod? This approach doesn't even scale to 2 developers unless they're sitting in the same room.

DevOps isn't a machine where you put unnecessary complexity in one end and get req/s out the other end. It's about risk and managing deployments better.

If I really wanted to engineer for req/s, I'd look at moving off k8s and onto bare metal.

ragall3 months ago

In an enterprise environment, I'd like a networking solution that allows me to run an app on my own office workstation and expose it as a service to some part of the company at an SLO that can reasonably be guaranteed with a workstation: 99.9%. That would let us cut so much time spent "productionizing" stuff that doesn't need CI/CD pipelines or a deploy to a datacenter: just me editing a Python file and restarting it.

zokier3 months ago

Of course these ideas are not that new. IPv6 was supposed to give end-to-end connectivity to all, and originally IPsec was supposed to be mandatory part of IPv6, giving each internet host cryptographic identity. And so on.

Fnoord3 months ago

I was curious why the article didn't mention IPv6 at all, since Tailscale does support it.

IPv6 -together with WireGuard- gives privacy, security, and performance. The downside is the complexity to set it up.

Tailscale builds on the shoulders of giants: IPv4, WireGuard, Samy Kamkar's NAT punching, OpenSSH, and probably many more. One of the upsides is the combination of these, and that the management interface is in general easy. But what counts for CAs is also true for Tailscale: both use FOSS to, in the end, deliver a (proprietary) service.

But because almost everything is built on top of FOSS and there's Headscale (and they're cool with it), this isn't a major issue to me. Like, it is a downside, but not a major one, as vendor lock-in is practically non-existent. In fact, it is likely an upside from a business/support PoV.

thinkski3 months ago

I think there’s a common misunderstanding that with IPv6 anyone can connect to anyone else. That’s not true.

My laptop has an IPv6 address, as does the router that routes its traffic. There’s no NAT, that’s true, but there’s still a firewall — only inbound packets from a host and port that my machine has already sent to are allowed in. And in enterprise environments, from what I’ve seen, there’s a symmetric NAT on IPv6 anyway — the packet comes from a different IPv6 address and randomized port than the one the client sent it from, making peer connectivity impossible, as the source port varies by destination host and port.

wmf3 months ago

Apenwarr is kind of an IPv6 hater. He thinks it's not going to happen.

throw0101d3 months ago

> Apenwarr is kind of an IPv6 hater. He thinks it's not going to happen.

Well, T-Mobile US is 100% IPv6:

* https://www.youtube.com/watch?v=QGbxCKAqNUE

Facebook is IPv6-only on their internal infrastructure:

* https://www.internetsociety.org/resources/deploy360/2014/cas...

Microsoft has been moving to IPv6-only for their corporate network (so IPv4 address can be used for revenue-producing Azure):

* https://www.arin.net/blog/2019/04/03/microsoft-works-toward-...

So he better tell those folks that IPv6 is not a thing.

Borg33 months ago

Because IPv6 is a mistake. That's why the market does NOT want it. Unfortunately, we are all starting to feel the heat of IPv4 exhaustion.

Anyway, remember IPv4 classes? Then they made it classless. IPv6 is not 128-bit, it's just 64-bit with a 64-bit host address. So, first mistake. IPsec mandatory? Pure stupidity. Crypto moves fast; every 10 years many protocols are obsoleted. How will you provide E2E connectivity with that?

In 1997 IPv6 was still seriously immature, too immature to start migration. Additionally, it was very different from IPv4, so it was mostly ignored. What the IPng team should have done is just take IPv4, extend it to 64-bit, call it IPv6, and be done. As a bonus, they should have thought about some basic IPv6 -> IPv4 interop so clients would NOT need to be dual stack. And that could have worked back then. Now we are fucked.

zokier3 months ago

The thing that you and all the other IPv6 haters miss is that none of that matters. IPv6 is happening, like it or not. And that has been the state for 10-15 years already.

Maybe in the 00s there was a window when there was true doubt about whether IPv6 was going to happen. But after that, it was just a question of "when", not "if".

Keeping on hating is simply not very productive. It's much better to embrace IPv6, no matter its possible flaws.

throw0101d3 months ago

> Crypto moves fast, every 10 years many protocols are obsoleted. How you will provide E2E connectivity with that?

Negotiation. IPsec using IKEv2 (RFC 4306/7296) started with (e.g.) 3DES when it was initially released, but now allows for AES (RFC 3602, 3686, etc), as well as other algorithms:

* https://www.iana.org/assignments/ikev2-parameters/ikev2-para...

> What IPng team should do, is just take IPv4, extended it to 64bit, call it IPv6 and we are done.

For anyone curious, the technical criteria for choosing the (then-labelled) IPng:

* https://datatracker.ietf.org/doc/html/rfc1726

And the evaluation of the available candidates and why the winner was chosen:

* https://datatracker.ietf.org/doc/html/rfc1752

One of the IPng candidates, SIPP, indeed did extend addressing from 32 bits to 64 bits (RFC 1710, RFC 1752 § 7.2), but it was deemed that this might not be enough and another transition would be even more difficult, so they went with 128 bits (RFC 1752 § 9).

Adding mechanisms for auto-configuration was one of the criteria for IPng; per RFC 1726 § 5.8:

    CRITERION
       The protocol must permit easy and largely distributed
       configuration and operation. Automatic configuration of hosts and
       routers is required.
    
    DISCUSSION
       People complain that IP is hard to manage.  We cannot plug and
       play.  We must fix that problem.
       
       We do note that fully automated configuration, especially for
       large, complex networks, is still a topic of research.  Our
       concern is mostly for small and medium sized, less complex,
       networks; places where the essential knowledge and skills would
       not be as readily available.
       
       In dealing with this criterion, address assignment and delegation
       procedures and restrictions should be addressed by the proposal.
       Furthermore, "ownership" of addresses (e.g., user or service
       provider) has recently become a concern and the issue should be
       addressed.
       
       We require that a node be able to dynamically obtain all of its
       operational, IP-level parameters at boot time via a dynamic
       configuration mechanism.
       […]
In a world of IoT, not needing a BOOTP/DHCP(v4) server seems like decent foresight.
Bluecobra3 months ago

There are some very valid points here though:

https://apenwarr.ca/log/20170810

wmf3 months ago

Yeah, he's not wrong. I just found his take on IPv6 to be pretty pessimistic at that time. His manifesto from today is much more positive.

Aeolun3 months ago

I really liked the premise of the post until I got to the last paragraph and had to do a quick double take.

Sure, Tailscale makes the internet easier again, but I still have to rely on a landlord, something I didn't/don't have to do for the internet. As much as a lot of stuff has been centralized, even today I can connect to any server in the world with just the link.

paulnpace3 months ago

> I really liked the premise of the post until I got to the last paragraph and had to do a quick double take.

"Be sure to drink your Ovaltine."

jmprspret3 months ago

> I still have to rely on a landlord.

This is a very good point. Counterpoint is self-hosting Headscale which I mentioned in another comment here: https://github.com/juanfont/headscale

Works with native Tailscale clients with a few config changes. I use it myself.

locusofself3 months ago

I'm distracted by all the references to being "old" because the author remembers the 1990s.

duggan3 months ago

Life moves pretty fast.

I wouldn’t be too surprised if the median age of Tailscale’s audience was 24.

heraldgeezer3 months ago

People born in 2000 use this?

What is it?

Can we call things what they are? Is this a VPN? :)

Im tired and I am 34. So tired.

RadiozRadioz3 months ago

Seeing the average age range of people who put "founder" on their LinkedIn, I'm not very surprised.

udev40963 months ago

> You know what, nobody ever got fired for buying AWS.

> That’s an IBM analogy.

Wow, this dialogue comes up in the first episode of Halt and Catch Fire. I didn't know this was a real thing.

Here's the clip, at 1:51, if anyone's interested: https://www.youtube.com/watch?v=XOR8mk0tLpc

OneOffAsk3 months ago

> In fact, we didn’t found Tailscale to be a networking company. Networking didn’t come into it much at all at first.

I always just assumed they were building some kind of logging software (“tail”scale), used Wireguard to connect hosts, and just kind of stopped there. Don’t get me wrong, Tailscale is a nice way to connect machines. It’s nice because Wireguard is nice.

svnt3 months ago

This long blog post (by the now-CEO of Tailscale), if you skip to the end, confirms that the parent's hypothesis is basically exactly correct.

> Update 2019-04-26: Based on a lot of positive feedback from people who read this blog post, I ended up starting a company that might be able to help you with your logs problems. We're building pipelines that are very similar to what's described here.

Update 2020-08-26:

Aha! Okay, for some reason this article is trending again, and I'd better provide an update on my update. We did implement parts of this design for use in our core product, which is now quite distinct from logs processing.

After investigating the "logs" market last year, we decided not to commercialize a logs processing service. The reason is that the characteristics we want our design to have: cheap, lightweight, simple, fast, and reliable - are all things you would expect from the low-cost provider in a market. The "logs processing" space is crowded with a lot of premium products that are fancy, feature-filled, etc, and reliable too, and thus able to charge a lot of money.

Instead, we built a minimalistic version of the above design for our internal use, collecting distributed telemetry about Tailscale connection success rates to help debug the network. Big companies can also use it to feed into their IDS and SIEM systems.

We considered open sourcing the logs services we built (since open source is where attributes like cheap, lightweight, etc tend to flourish) but we can't afford the support overhead right now for a product that is at best tangential to our main focus. Sorry! Hopefully someday.

kiitos3 months ago

Wireguard by itself is good, but it isn't nice. Tailscale is nice because it builds on top of Wireguard (which is good) and adds UX stuff (which makes it nice).

Nice requires humane UX.

ivlad3 months ago

IPv6 + transport-mode IPsec + opportunistic encryption with TOFU or other topologies of trust (including WoT, DNSSEC and PKI). All of that is standard, most of it is available, and it only requires configuration (and, ideally, being turned on by default).

There is very little use for companies like Tailscale in this setup, it’s scalable and works.
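A minimal sketch of the TOFU (trust-on-first-use) idea mentioned above; the store and fingerprint scheme here are illustrative assumptions, not any standard's format:

```python
import hashlib

class TofuStore:
    """Pin each peer's public-key fingerprint the first time it is seen."""

    def __init__(self):
        self.pins = {}  # peer identity -> sha256 fingerprint

    def check(self, peer, pubkey):
        fp = hashlib.sha256(pubkey).hexdigest()
        if peer not in self.pins:
            self.pins[peer] = fp          # first use: trust and pin
            return True
        return self.pins[peer] == fp      # later uses: must match the pin

store = TofuStore()
store.check("host-a", b"key-1")           # True: first contact, pinned
store.check("host-a", b"key-1")           # True: same key, still trusted
store.check("host-a", b"different-key")   # False: key changed, alarm
```

This is SSH's model: no central authority, just continuity of identity, with WoT/DNSSEC/PKI available as stronger anchors where they exist.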

wmf3 months ago

Gets killed by IPv6 firewalls.

Bluecobra3 months ago

> Every device gets an IP address and a DNS name and end-to-end encryption and an identity, and safely bypasses firewalls.

Tailscale can certainly be blocked on NGFW firewalls like Palo Alto. I am not a BOFH, but also can’t have random employees circumventing security policies by setting up tailscale and leaving permanent backdoors in my corporate network.

I remember the good old days when everyone had a public IP on the Internet and how easy it was to setup things. It was cool and fun while it lasted. But now things are different and security is a nightmare when we have to deal with things like ransomware.

iczero3 months ago

Tailscale doesn't even try to hide their management-plane (MP) or data-plane (DP) traffic. Last I checked, management was plain HTTPS and data was plain WireGuard.

vwkd3 months ago

> can’t have random employees circumventing security policies by setting up tailscale and leaving permanent backdoors in my corporate network

Tailscale isn't exactly an open door. Only machines signed in via SSO can access a Tailscale network.

If you don't trust your employees to safeguard their credentials and machines then how do you trust them at all? Keep them in an airtight underground bunker chained to their desks? Not sure what threat you're modeling for...

Bluecobra3 months ago

I'm talking about people who want to use Tailscale for personal reasons. For example, someone can set up a Tailscale instance between their work computer and home computer and circumvent the corporate VPN/MFA policies for remote access. I doubt they're being malicious, but what if their home PC gets hit with malware? A threat actor could then use the existing Tailscale instance to get into the corporate network.

indigodaddy3 months ago

So the answer to the bad old internet is to install tailscale on everything?

sneilan13 months ago

I think the message from this post is we’ll pay rent to Tailscale instead of AWS eventually.

sweca3 months ago

Even though I don't agree with the whole "New Internet" thing, this article is very well written.

sneilan13 months ago

I agree. This article reads well. It has flow, good punctuation and pacing. Apart from the message that we will eventually pay tithe to our new overlord Tailscale, it was great.

Spivak3 months ago

I mean it was given as an internal presentation about the business strategy, it's the kind of thing I would expect.

1. We took a hard-problem, peer-to-peer networking and IdM, and (mostly) solved it.

2. We're hoping this will drive people to build apps that leverage the unique capabilities of authenticated p2p mesh networks. It doesn't even have to be specifically for Tailscale.

3. People will want to use those apps and (if we're good at our jobs) choose to pay us to run the network for them over our competitors or building something in-house.

4. $$$

I'm not sure I would say this is as nefarious as the tone of the comments here suggests. Wanting a "killer app" for your software platform is pretty normal, which is really what he's talking about. I would be nervous declaring victory or inevitability without being able to name what that killer app actually is, but trying to figure it out/build it is a good strategy. It's one of those times when the desire of engineers to solve their pet problems, play with shiny new toys, and build a Halo LAN party over Tailscale is aligned with the business.

thr0w3 months ago

> Sure, Apple’s there selling popular laptops, but you could buy a different laptop or a different phone.

> But the liberation didn’t last long. If you deploy software, you probably pay rent to AWS.

There's no Azure? GCP? Hetzner? Digital Ocean?

> You pay exorbitant rents to cloud providers for their computing power because your own computer isn’t in the right place to be a decent server.

You do that because you don't know what port-forwarding is (vast majority of software people do not), or you don't have the place or infra in your dwelling to stash a laptop server running 24/7 without interruption.

literallyroy3 months ago

You probably also have a residential IP and need to pay some service (ngrok?) to convert that into a static address that's useful, right?

0x00000003 months ago

The new internet, an overlay network on top of the existing internet. Cool?

wmf3 months ago

It seems less bad than the competing vision of tunneling absolutely everything over HTTPS.

oneplane3 months ago

At least it's not a network that revolves around middle-out compression.

fuzzfactor3 months ago

What's really needed along those lines is a regular ordinary Mbone that's permanent this time:

https://en.wikipedia.org/wiki/Mbone

jpeeler3 months ago

Did anyone else immediately do the calculation of 8.1 billion (world population 2024) * 1/20000 = 405k user base? Which makes me wonder what percentage are paying users.
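For anyone checking the arithmetic (the 1-in-20,000 ratio is from the article, the population estimate from the comment):

```python
world_population = 8_100_000_000    # ~2024 estimate
users = world_population // 20_000  # "1 in 20,000 people"
print(users)                        # 405000
```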

kkfx3 months ago

Ehm, sorry, no. OSes matter as much as before. Even if today's giants want to call desktops and co. "just endpoints", a politically correct variant of the old dumb terminals of "their mainframes", we actually know very well that "the intelligence" must live in the endpoints, and no "mainframe-like" modern solution can scale or serve well in that regard. Of course we need networking: a network of individual hosts, not of dumb endpoints.

Devs have lost that knowledge because big tech trained them to lose it, and now we see more and more of the limits of this model. The new internet must be the very old one: a network of hosts communicating with each other, without NAT and the like in the middle, which in most cases exists explicitly to lock users' hosts behind some giant iron curtain.

The modern web matters today because we lack UIs: commercial desktops decided on widget-based UIs, have hit their limits hard, and found in the modern web a crappy modern version of the old classic DocUIs, which we know we need. Slowly we are coming back to end-user programming, admitting that visual crap and all the attempts to make programming hard on purpose have led to unsustainable crapware ecosystems. Maybe in a decade spreadsheets and "calculators" will finally be dropped and Jupyter/R-like tools will have substituted for them, eventually with some LLM plugged in to help the average user. In another decade we will probably be back at Lisp Machines, because trying other paths to profit from users is not sustainable anymore.

The shorter this period is, the less damage we will suffer.

sfRattan3 months ago

As I've been deliberately moving toward self-hosted computing, under my control, on my home network, I've had a feeling more and more that we're on the cusp of something transformative... For those who want it and those who care. There's an ecosystem of mostly FOSS software now designed to run on a home network and replace big, centralized, cloud providers. That software is right on the edge of being easy enough for everyone to use and for sufficient numbers of people to deploy and administer. News like Immich (to replace Google Photos) getting a major investment thanks to Louis Rossman and FUTO [1] is encouraging. The ecosystem of software you can now run on a commodity built NAS or homelab is, for me, the most exciting thing in computing since I first used the Internet in the late 90s.

The rollout and transformation, if it happens, won't look like all this stuff becoming so easy that every individual can run a server. But it is possible that every extended family will have at least one member who can run a server or administer a private network for the whole clan. And that's where tech like tailscale's offering will come in. That's where I see the author's vision being a believable moonshot:

Each extended family, and some small communities, with their own little interconnected, distributed network-citadels, behind the firewalls of which they do their computing, their sharing, and their work. Most family members won't need to understand it any more than they understand the centralized clouds they use now. And most networks won't be as well secured as a massive company can make its cloud offering, but a patchwork heterogeneity of network-citadels creates its own sort of security, and significantly lowers the value of any one "citadel" to even motivated adversaries.

[1]: https://www.youtube.com/watch?v=uyTPqxgqgjU

zoom66283 months ago

Totally this. I am old enough to remember LAN fun times, and I have been writing software since the 1970s in high school.

And Tailscale works for me to create my own network of phones, laptops, desktops and a remote node at DO. It works brilliantly across geo boundaries, borders, and wifi networks (home has multiple), with seamless moving between mobile networks and wired.

Not sure it will create a new internet or not, but at least a new intranet where all my devices are reachable and controllable.

denton-scratch3 months ago

> But it is possible that every extended family will have at least one member who can run a server

That's as may be; but many, many people have no access to an "extended family". And extended families are not necessarily warm, safe spaces where everyone trusts everyone else; extended families are more likely to be "broken" than nuclear families.

sfRattan3 months ago

> And extended families are not necessarily warm, safe spaces where everyone trusts everyone else; extended families are more likely to be "broken" than nuclear families.

It is a good thing to promote and advance privacy, security, and freedom to isolated, atomized individuals; but it is important for all of humanity to promote and advance those same ideals to extended families. People who have no access to an extended family will ultimately either join a different one or disappear into the mists of ages past. In 100 years, the Earth will be populated mostly by the descendants of people in extended families today, however imperfect or even broken those extended families may be. If those people today don't see privacy, security, and freedom as both possible and worthy, their descendants may not value or even possess any of those ideals.

Bluecobra3 months ago

> As I've been deliberately moving toward self-hosted computing, under my control, on my home network

Funnily enough, I was once like this but now I have deliberately moved everything to the big cloud providers as I don’t want to deal with the toil of running my own homelab anymore. This is coming from someone who used to have a FreeBSD server with ZFS disks and using jails to run various things like pf, samba, etc. Eventually things would fail and it felt like I was back at work again when all I want to do is drink a cold beer and watch YouTube.

Perhaps I will try again one day as things get easier. For now I am content with having my photos and videos automatically synced up to iCloud/Google Photos.

sfRattan3 months ago

I tried once or twice in the early 2010s to set up a home server and had a similar experience to what you describe. Stuff would break and I wouldn't want to spend time fixing it.

I think part of the excitement I'm feeling is that the ecosystem today feels way more stable and mature than it did a decade to a decade and a half ago. Home Assistant, Jellyfin, TrueNAS, and a few other things have all pretty much run themselves for me with almost no downtime (other than one blackout that happened while I was traveling and drained my UPS) for the past nine months. There's tinkering to get it all up and running, but way less maintenance than I remember in the past.

kxrm3 months ago

I am curious about what your setup was, I have several systems, not sure I would call it a homelab but I rarely have to do anything. I am using Truenas for my ZFS storage and I have a few NUCs to run extra QoL services.

The only time I do anything with this stuff is when I want to upgrade (which is very rare) or add something. My NAS solution is a custom mini-ITX I built 8 years ago which I feel has more than paid for itself. I have long stopped chasing the latest and greatest because most of what has been produced in the last decade is very usable.

Very wary of going cloud, as I can't as easily control costs.

grumple3 months ago

What are the hard parts of hosting from home? What solutions have you been using?

fuzzfactor3 months ago

>at least one member who can run a server

It may be highly illogical, but maybe by shooting for zero it would be possible to bat a thousand?

I do everything it takes so that the "extended family" site just works after I leave, as long as the "operator" can keep track of their USB sticks.

Scrap PCs being used as media servers have no internal drive.

Boots to the stick containing the server app.

Accesses media on a second stick containing the files.

Hotplug the media stick to emulate game-cartridge/VCR-cassette convenience.

Upon server failure or massive update, replace that particular stick with a backup or later version, or in the worst case get another scrap PC.

I know, easier said than done :/

sfRattan3 months ago

A medieval castle could be defended by surprisingly few people, but not by zero people. And a castle full of people who don't know how to fortify and maintain its defenses eventually becomes someone else's castle.

Aiming for zero required sysadmins in the short term after your own passing, I think the computers you leave behind will run into a similar case of the same general problem in the long term: there's no such thing as an entropy proof system. Castle walls erode and weapons rust if there are no skilled people to maintain them. Computer components slowly break down due to ordinary wear and tear. Software configurations become obsolete and unable to talk with other software, and become less secure as vulnerabilities are discovered over time. If there are no skilled people at all to maintain a familial network-citadel, it will eventually break down and fall into disuse.

fuzzfactor3 months ago

You have hit the nail on the head.

Especially with passing, eventually it's like the siege of the Alamo, when the walls do end up breached there's not a soldier there that can do any good.

It's shoestrings anyway and amazing it's working for now :)

rldjbpin3 months ago

"layers" have been a major motif of the write up.

> We’re removing layers, and layers, and layers of complexity, and making it easier to work on what you wanted to work on in the first place.

as an avid user, i'd say they are in fact adding more layers to the problem. it is well-designed and relatively accessible, sure, but it represents a stop-gap while everyone slowly pushes toward the eventual solution.

it has always been the double-edged nature of abstraction. we give trust and responsibility to another party, and for us networking works out "magically". but the moment your remote client has some auth issue, you snap back to reality. bandwidth costs aside, their otherwise generous pricing model seems economically viable in the post-AI landscape.

i'd personally like to act like a "landowner" of the internet, but currently being a renter seems like a good idea while we all wait for social housing to finally get accepted.

aborsy3 months ago

I used to use WireGuard. It connects one peer to another, but stops there. I have since replaced it with Tailscale, which goes much beyond WireGuard and connects everything to everything. A lot of my networking problems went away. After using Taildrop for several months, I feel the post is right about it. It's a frictionless, one-click, peer-to-peer file transfer tool that is very useful. It should have been built into the internet.

The idea of an entirely decentralized internet is wishful thinking. You always need servers. Even with IPv6, you have to run a STUN or DDNS server, since IP addresses change. Do you want to run them at home? I don't.

I do think Tailscale is on a path to different networking.

dakiol3 months ago

One thing I don't understand: the article claims that we need to pay rent to big corps like AWS, which is true only if you're offering something on the internet (e.g., you have a SaaS). As a consumer I don't pay AWS, I only pay my ISP. Now, the article wants everyone (the ones who have something to offer, and the consumers) to switch to this new internet… so both producers and consumers (peers now) need to pay rent to Tailscale (unless you self-host, but self-hosting is like the first story told in the article about asking your ISP for a static IP address, opening ports and the like; self-hosting is too much work).

Smells like more centralisation, not less.

vwkd3 months ago

As a buyer you also don't pay VAT to the government. You pay the seller a price that's marked up exactly by VAT, which the seller then pays to the government.

In summary: not paying something directly doesn't mean you don't pay it indirectly.

littlecranky673 months ago

Tailscale complaining about centralized actors controlling the internet, yet not allowing you to sign up for Tailscale with your email, strictly requiring a Microsoft/Meta/Google account instead. Can't make this up.

SparkyMcUnicorn3 months ago

You can use just about any OIDC.

Some of the self-hosted options presented during sign up include Keycloak, Ory, Gitea, Zitadel, Authelia, and more.

There's also a workaround to create a passkey account: sign up with any SSO provider, invite yourself as an external user, accept that invite and sign in with a passkey, then leave the original SSO network. Then you're not tied to any external service at all.

littlecranky673 months ago

Or, wild idea, just allow email signups like everyone else.

SparkyMcUnicorn3 months ago

Agree or disagree, a tailscale co-founder responded as to why they went this path.

https://news.ycombinator.com/item?id=22760130

> (I'm a Tailscale co-founder) The idea is to avoid building yet another commercial service that holds onto your username and password. People have enough identities already. More details here: https://tailscale.com/blog/how-tailscale-works/ We know we keep getting feedback that people want a different way to authorize their accounts (especially for personal use), so we're looking at other options. We just really want to stay out of the username+password business; it's simply bad security practice.

buro93 months ago

I love this.

Except to use Tailscale you do need to bring in a whole OIDC authentication provider.

It's all small and aimed at avoiding scale until the very first step, when suddenly only the big complex thing is acceptable.

I still just want to use my email and a TOTP. The only one of the auth providers Tailscale supports that I have is GitHub, but I don't use GitHub beyond work, as I self-host my git.

When the onboarding is "maintain and run a full oidc provider", all we've done is trade one aspect of complexity for another.

jonathanlydall3 months ago

Speaking of Tailscale, does anyone know on Windows how to prevent it significantly slowing down file transfers between peer computers on my home LAN?

I don't really understand it. I can use the direct IP address of the other machine and still see tailscaled.exe using a lot of CPU and my file transfer running at only 65 MB/s. If I right-click the system tray icon and exit Tailscale, the transfer speed instantly jumps to 109 MB/s (which is the maximum of my Gb/s LAN).

pomatic3 months ago

I'll preface this by saying: I DO appreciate Tailscale and what they've done for frictionless VPNs; I use it daily. But this post has a really unfortunate tone; it comes across as really arrogant. Not ambitious, but arrogant. The notion that the population as a whole is buying Tailscale because it might offer some as-yet unpublished capabilities at some indeterminate point in the future is delusional. And Tailscale's moat is very shallow: yes, there's some nifty networking stuff going on, but it's well understood, and the functionality will be replicated by competitors as Tailscale gains mainstream traction, however big their war chest is.

EVa5I7bHFq9mnYK3 months ago

That looks like Hamachi of 20 years ago.

sulandor3 months ago

nordvpn is the new frosties tony

OJFord3 months ago

This is a 'gimme some of that Wix money' IPO/acquisition ramp-up post, right?

Ericson23143 months ago

When is the government going to start requiring IPv6 more aggressively?

It's that simple.

lockywolf3 months ago

My ISP deployed IPv6 by serving ULAs to clients behind the ISP box and NATing them to a single dynamic public IP.

Another one just blocks all incoming connections on ipv6 entirely.

oefrha3 months ago

I’m confused by this New Internet talk. Tailscale is nice and all, but it gives you virtual private networks. To talk to anything you need to first be invited to that private network. It’s more like the New LAN, great for intranet and shit. But how am I supposed to build Internet apps intended for everyone if they need to be invited to my network first?

Not to mention most Internet services do need a central backend to function even if there are no barriers among clients at all (because the clients are completely unreliable), including even the textbook p2p example of file transfer: while direct p2p is nice in many cases, with a central service the recipient can receive at any time, instead of having to coordinate with the sender to both stay online simultaneously and for the duration of the transfer, which is quite difficult nowadays with most of the computing happening on phones.

wmf3 months ago

Maybe there will eventually be an Intertailnet that connects all the tailnets together (in some secure, opt-in way of course).

howis3 months ago

How is having a TLS cert considered to currently be a “have”? Seems like a deployment issue for your colo and edge presence (for those eschewing AWS).

iczero3 months ago

> centralization is bad except when we do it

TL;DR

sidcool3 months ago

I love this blog post. It resonates so much. But I honestly don't know how to deploy applications efficiently without containerization. And where there are containers, there's Kubernetes. And so on and so forth.

throw7823498723 months ago

I've been an active Tailscale user for years now, preaching the Gospel of Wireguard Control Planes to all who will listen (and many who won't) in both my personal and professional life.

It's been really disheartening to watch the steady enshittification of Tailscale, Inc. I knew it was coming with 100% certainty once they raised 100mil in 2022. It's still heartbreaking because the product itself is quite good.

The worst part is that because Tailscale, Inc. got there "first" (I know Nebula existed before Tailscale did, shut up, okay?), the other competitors like NetMaker and NetBird are all following almost the exact same business model ("open core+": open source client and some kind of claim to an open source control plane with infinity caveats, to funnel enterprise dollarydoos back to the vulture capitalists).

sulandor3 months ago

thanks for your insights.

> The worst part is because Tailscale, Inc got there "first"

never heard of nebula, but please clarify where they got there first.

i'm sure you are aware that branded/purpose-built vpns existed long before the first iphone.

kiitos3 months ago

not sure what you mean by "enshittification"

are you describing the process of a company achieving commercial success?

idunnoman12223 months ago

IPv6 is for poor people. Until the poors figure out something really cool you can do with it, the rich will never switch.

sulandor3 months ago

imho at this point it's a generational thing. there's just no love in v6, so it'll take over once the old guard go out of office and nobody cares enough anymore.

rubi19453 months ago

> I read a post recently where someone bragged about using Kubernetes to scale all the way up to 500,000 page views per month. But that’s 0.2 requests per second. I could serve that from my phone, on battery power, and it would spend most of its time asleep.

lmao
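For what it's worth, the quoted figure holds up (assuming a 30-day month):

```python
page_views = 500_000
seconds_per_month = 30 * 24 * 3600   # 2,592,000 seconds
rate = page_views / seconds_per_month
print(round(rate, 2))                # 0.19 requests/second, ~0.2 as quoted
```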


nomoreusernames3 months ago

the new internet? i'm still on BBS. dont wanna use computers without ansi art.
