
Kubernetes is our generation's Multics

589 points | 4 years ago | oilshell.org
nickjj4 years ago

I'd be curious what a better alternative looks like.

I'm a huge fan of keeping things simple (vertically scaling 1 server with Docker Compose and scaling horizontally only when it's necessary) but having learned and used Kubernetes recently for a project I think it's pretty good.

I haven't come across too many other tools that were so well thought out while also guiding you into how to break down the components of "deploying".

The idea of a pod, deployment, service, ingress, job, etc. are super well thought out and are flexible enough to let you deploy many types of things but the abstractions are good enough that you can also abstract away a ton of complexity once you've learned the fundamentals.

For example you can write about 15 lines of straightforward YAML configuration to deploy any type of stateless web app once you set up a decently tricked out Helm chart. That's complete with running DB migrations in a sane way, updating public DNS records, SSL certs, CI / CD, having live-preview pull requests that get deployed to a sub-domain, zero downtime deployments and more.

ljm4 years ago

> once you set up a decently tricked out Helm chart

I don't disagree but this condition is doing a hell of a lot of work.

To be fair, you don't need to do much to run a service on a toy k8s project. It just gets complicated when you layer on all the production-grade stuff like load balancers, service meshes, access control, CI pipelines, o11y, etc. etc.

nickjj4 years ago

> To be fair, you don't need to do much to run a service on a toy k8s project.

The previous reply is based on a multi-service, production-grade workload. Setting up a load balancer wasn't bad. Most cloud providers that offer managed Kubernetes make it pretty painless to get their load balancer set up and working with Kubernetes. On EKS with AWS that meant using the AWS Load Balancer Controller and adding a few annotations. That includes HTTP to HTTPS redirects, www to apex domain redirects, etc. On AWS it took a few hours to get it all working complete with ACM (SSL certificate manager) integration.

The cool thing is when I spin up a local cluster on my dev box, I can use the nginx ingress instead and everything works the same with no code changes. Just a few Helm YAML config values.
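For a concrete picture, here is a rough sketch of the kind of Ingress this describes. It is illustrative only: the hostname, service name, and certificate ARN are placeholders, and the exact annotation set should be double-checked against the AWS Load Balancer Controller documentation.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
      annotations:
        # AWS Load Balancer Controller (EKS) annotations; on a local cluster the
        # same Ingress can be served by ingress-nginx with the class swapped below.
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
        alb.ingress.kubernetes.io/ssl-redirect: "443"
        alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:...   # placeholder ACM ARN
    spec:
      ingressClassName: alb   # "nginx" on the local cluster
      rules:
        - host: app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 80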

Maybe I dodged a bullet by starting with Kubernetes so late. I imagine 2-3 years ago would have been a completely different world. That's also why I haven't bothered to look into using Kubernetes until recently.

> I don't disagree but this condition is doing a hell of a lot of work.

It was kind of a lot of work to get here, but it wasn't anything too crazy. It took ~160 hours to go from never using Kubernetes to getting most of the way there. This also includes writing a lot of ancillary documentation and wiki style posts to get some of the research and ideas out of my head and onto paper so others can reference it.

edoceo4 years ago

o11y = observability

tovej4 years ago

You couldn't create a parody of this naming convention that's more outlandish than the way it's actually being used.

handrous4 years ago

o11y? In my head it sounds like it's a move in "Tony Hawk: Pro K8er"

grp0004 years ago

It's the Wingdings of naming conventions.

hnick4 years ago

You don't like n7ms?

oogali4 years ago

I was originally confused because I thought the debugger `ollydbg` was being referenced.

https://en.wikipedia.org/wiki/OllyDbg

verdverm4 years ago

You still have to do all that prod-grade stuff; K8s just creates a cloud-agnostic API for it. People can use the same terms and understand each other.

handrous4 years ago

> That's complete with DB migrations in a safe way

How?! Or is that more a "you provide the safe way, k8s just runs it for you" kind of thing, than a freebie?

nickjj4 years ago

Thanks, that was actually a wildly misleading typo haha. I meant to write "sane" way and have updated my previous comment.

For saFeness it's still on us as developers to do the dance of making our migrations and code changes compatible with running both the old and new version of our app.

But for saNeness, Kubernetes has some neat constructs to help ensure your migrations only get run once even if you have 20 copies of your app performing a rolling restart. You can define your migration in a Kubernetes job and then have an initContainer trigger the job while also using kubectl to watch the job's status to see if it's complete. This translates to only 1 pod ever running the migration while other pods hang tight until it finishes.
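To make that shape concrete, here is a hedged sketch of just the waiting side of the pattern. The Job name, images, and layout are made up, and the RBAC that lets the pod read Job status is left out here:

    # Hypothetical fragment of the app Deployment's pod template. The migration
    # itself lives in a Job named "myapp-migrate" created by the same release;
    # this initContainer simply blocks each app pod until that Job completes.
    spec:
      initContainers:
        - name: wait-for-migrations
          image: bitnami/kubectl:latest           # any image that ships kubectl
          command:
            - kubectl
            - wait
            - --for=condition=complete
            - --timeout=300s
            - job/myapp-migrate
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.2.3   # placeholder app image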

I'm not a grizzled Kubernetes veteran here but the above pattern seems to work in practice in a pretty robust way. If anyone has any better solutions please reply here with how you're doing this.

handrous4 years ago

Hahaha, OK, I figured you didn't mean what I hoped you meant, or I'd have heard a lot more about that already. That still reads like it's pretty handy, but way less "holy crap my entire world just changed".

rossmohax4 years ago

> You can define your migration in a Kubernetes job and then have an initContainer trigger the job while also using kubectl to watch the job's status to see if it's complete.

A much simpler way is to run the migration in the init container itself. Most SQL migration frameworks know about locks and transactions, so concurrent migrations won't run anyway.
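In other words, something like this hypothetical fragment, leaning on the migration tool's own locking (image and command are placeholders):

    # Hypothetical: run the migration directly in an initContainer and let the
    # framework's advisory lock serialize any concurrent runs across pods.
    initContainers:
      - name: db-migrate
        image: registry.example.com/myapp:1.2.3   # placeholder image
        command: ["bin/rails", "db:migrate"]      # or your stack's equivalent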

nickjj4 years ago

I thought about doing that for a while too.

I think the value in the init+job+watcher approach is that you don't need to depend on a framework being smart enough to lock things, which makes it suitable and safe to run with any tech stack, worry free. It also avoids potential edge cases if a framework's locking mechanism fails, and an edge case in this scenario could be really bad.

It does come at the cost of a little more complexity (a 30-line YAML job plus ClusterRole/ClusterRoleBinding resources for the watcher's RBAC), but fortunately that's a one-time thing you need to set up.
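For reference, the RBAC side mentioned above might look roughly like this; the service account name and namespace are placeholders, and a namespaced Role can be enough depending on how the watcher is scoped:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: job-watcher
    rules:
      - apiGroups: ["batch"]
        resources: ["jobs"]
        verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: job-watcher
    subjects:
      - kind: ServiceAccount
        name: myapp          # hypothetical service account used by the app pods
        namespace: default   # placeholder namespace
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: job-watcher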

kitd4 years ago

It's simpler than that for simple scenarios. `kubectl run` can set you up with a standard deployment + service. Then you can describe the resulting objects, save the YAML, and adapt/reuse it as you need.

darkwater4 years ago

> For example you can write about 15 lines of straightforward YAML configuration to deploy any type of stateless web app once you set up a decently tricked out Helm chart.

I understand you might outsource the Helm chart creation but this sounds like oversimplifying a lot, to me. But maybe I'm spoiled by running infra/software in a tricky production context and I'm too cynical.

nickjj4 years ago

It's not too oversimplified. I have a library chart that's optimized for running a web app. Then each web app uses that library chart. Each chart has reasonable defaults that likely won't have to change, so you're left only having to set the options that vary per app.

That's values like the number of replicas, which Docker image to pull, resource limits and a couple of timeout-related values (probes, database migration, etc.). Before you know it, you're at 15ish lines of really straightforward configuration like `replicaCount: 3`.
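Purely as an illustration, a per-app values file for such a library chart might look something like the following; every key here is hypothetical and depends entirely on how the chart is written:

    replicaCount: 3
    image:
      repository: registry.example.com/myapp   # placeholder registry/app
      tag: "1.2.3"
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        memory: 512Mi
    readinessProbe:
      timeoutSeconds: 5
    migrations:
      timeoutSeconds: 300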

purpleidea4 years ago

> I'd be curious what a better alternative looks like.

https://github.com/purpleidea/mgmt/

It's just not finished yet. With < 0.01% of the funding Kubernetes has, it has many times more design and elegance. Help us out. Have a look and tell me what you think. =D

worldsayshi4 years ago

My two cents: Docker Compose is an order of magnitude simpler to troubleshoot and understand than Kubernetes, yet the problem Kubernetes solves is not that much more difficult.

etaioinshrdlu4 years ago

As a Kubernetes outsider, I get confused why so much new jargon had to be introduced. As well as so many little new projects coupled to Kubernetes with varying degrees of interoperability. It makes it hard to get a grip on what Kube really is for newcomers.

It also has all the hallmarks of a high-churn product where you need to piece together your solution from a variety of lower-quality information sources (tutorials, QA sites) rather than a single source of foolproof documentation.

sidlls4 years ago

> I get confused why so much new jargon had to be introduced.

Consider the source of the project for your answer (mainly, but not entirely, bored engineers who are too arrogant to think anybody has solved their problem before).

> It also has all the hallmarks of a high-churn product where you need to piece together your solution from a variety of lower-quality information sources (tutorials, QA sites) rather than a single source of foolproof documentation.

This describes 99% of the open source libraries in use. The documentation looks good because auto-doc tools produce a prolific amount of boilerplate documentation. In reality the result is documentation that's very shallow, and often just a re-statement of the APIs. The actual usage documentation of these projects is generally terrible, with few exceptions.

joshuamorton4 years ago

> Consider the source of the project for your answer (mainly, but not entirely, bored engineers who are too arrogant to think anybody has solved their problem before).

This seems both wrong and contrary to the article (which mentions that k8s is a descendant of Borg, and in fact if memory serves many of the k8s authors were borg maintainers). So they clearly were aware that people had solved their problem before, because they maintained the tool that had solved the problem for close to a decade.

parasubvert4 years ago

Kubernetes docs are pretty good, detailed, and kept up to date - a lot more than just API auto-documentation.

nhooyr4 years ago

I find it's low quality libraries that tend to have poor documentation. Perhaps that's 99% of open source libraries.

diminish4 years ago

I second this. I like "silent" new tech, which doesn't need to introduce dozens of new "concepts".

- containers focus on what you can do; they're easy to understand and you can start in 5 minutes

- kubernetes is the opposite, where verbose tutorials lose time explaining to me how it works, rather than what I can do with it.

sevagh4 years ago

I always find it surprising that I have yet to see or touch Kubernetes (and I've worked as an SRE with container workloads for several years now), and yet HN threads about it are full of people who apparently think it's the only possible solution and are flabbergasted that people don't pray to it nightly.

https://news.ycombinator.com/item?id=27910185

https://news.ycombinator.com/item?id=27910481 - weird comparison to systemd

https://news.ycombinator.com/item?id=27910553 - another systemd comparison

https://news.ycombinator.com/item?id=27913239 - comparing it to git

a_square_peg4 years ago

I think one part of this is the lack of accepted nomenclature in CS - naming conventions are typically not enforced, unlike an engineering drawing, which has to conform to a standard.

In engineering, the common way is a couple of descriptive words plus a basic noun, so names do get boring quite quickly but are very easy to understand - say, something like Google 'Cloud Container Orchestrator' instead of Kubernetes.

oalae5niMiel7qu4 years ago

If only branding wasn't involved.

parasubvert4 years ago

The Kubernetes documentation site is the source of truth, and pretty well written, though obviously no set of docs is perfect.

The concepts and constructs do not usually change in breaking ways once they reach beta status. If you learned Kubernetes in 2016 as an end user, there are certainly more features but the core isn’t that different.

benlivengood4 years ago

So the basic problem with *nix is its permission model. If we had truly separable security/privilege/resource domains then Linux wouldn't have needed containers and simple processes and threads could have sufficed in place of Borg/docker/Kubernetes.

There's a simpler and more powerful security model: capabilities. Capabilities fix 90% of the problems with *nix.

There's currently no simple resource model. Everything is an ad-hoc human-driven heuristic for allocating resources to processes and threads, and is a really difficult problem to solve formally because it has to go beyond algorithmic complexity and care about the constant factors as well.

The other *nix problem is "files". Files were a compromise between usability and precision but very few things are merely files. Devices and sockets sure aren't. There's a reason the 'file' utility exists; nothing is really just a file. Text files are actually text files + a context-free grammar (hopefully) and parser somewhere, or they're human-readable text (but probably with markup, so again a parser somewhere).

Plenty of object models have come and gone; they haven't simplified computers (much less distributed computers), so we'll need some theory more powerful than anything we've had in the past to express relationships between computation, storage, networks, and identities.

dijit4 years ago

Containers never solved the permission model. They solved the packaging and idempotency problem.

I really dislike when people assume containers give them security, it’s the wrong thing to think about.

Containers allowed us to deploy reproducibly, that’s powerful.

otabdeveloper44 years ago

Absolutely true.

Docker replaced .tar.gz and .rpm, not chroots.

Most of the time the chroot functionality of Docker is a hindrance, not a feature. We need chroots because we still haven't figured out packaging properly.

(Maybe Nix will eventually solve this problem properly; some sort of docker-compose equivalent for managing systemd services is lacking at the moment.)

catern4 years ago

Er, just as a historical note, one of the primary uses of chroots was for packaging. Just like how Docker does it. That, in a sense, was even the original motivation. The security usage of chroots was a later innovation.

cryptonector4 years ago

I mean, containers can provide isolation. Linux has had a hard time getting that to be reliable because it started with the wrong model: building containers subtractively rather than additively. Though even starting with the right model, until you have isolation for every last bit of shared context that the OS provides (harder to identify than it may seem at first blush!) you won't have a complete solution. And yes, software-based containers will tend to have some leakage. Even sharing hardware with hardware isolation features might not be enough (hello row hammer).

It would be good to have containers aim to provide the maximum possible isolation.

zajio1am4 years ago

> Containers never solved the permission model. They solved the packaging and idempotency problem

Disagree. Containers are primarily about separation and decoupling. Multiple services on one server often have plenty of ways to interact and see each other and are interdependent in non-trivial ways (e.g. if you want to upgrade the OS, you upgrade it for all services together). Running each service in its own container provides separation by default.

OTOH, containers as a technology have nothing to do with packaging, reproducibility and deployment. It's just that these changes arrived together (e.g. with Docker) so they are often associated, but you can have e.g. LXC containers that can be managed in the same way as traditional servers (by ssh-ing into the container).

dijit4 years ago

LXC, freebsd jails & Solaris zones et al. are not the same as docker containers though.

The former were built with security in mind. The latter was most assuredly not.

foxhill4 years ago

> I really dislike when people assume containers give them security, it’s the wrong thing to think about.

To be fair, there is lots of published text around suggesting that this _is_ the case. Many junior to semi-experienced engineers I've known have at some point thought it's plausible to "ssh into" a container. They're seen as light-weight VMs, not as what they are - processes.

> Containers allowed us to deploy reproducibly, that’s powerful.

and it was done in the most "to bake an apple pie from scratch, you must first create the universe" approach.

phire4 years ago

But you can ssh into a container.

You just need to install sshd and launch it. You also need to create a user and set a password if you want to actually log in.

Why? Because containers aren't a single process. They're a group of processes sharing a namespace.

And you can totally use a container as a light-weight VM. While most containers have bash or your application as pid 1, there is nothing stopping you from launching a proper initrd as pid 1, and it will act much like a proper OS.

Though, just because you can, doesn't mean you should.

otterley4 years ago

I think you mean init, not initrd. An initrd is a RAM disk image loaded by Linux containing kernel file system and network drivers and is typically used to help minimize the size of the main kernel image.

mikeyjk4 years ago

It is possible to do that though. I'm perhaps getting too caught up on 'plausible'.

black_knight4 years ago

> There's a simpler and more powerful security model: capabilities. Capabilities fix 90% of the problems with *nix.

What do you think about using file descriptors as capabilities? Capsicum (for FreeBSD, I think) extends this notion quite a bit. Personally I feel it is not quite "right", but I haven't sat down and thought hard about what is missing.

> we'll need some theory more powerful than anything we've had in the past to express relationships between computation, storage, networks, and identities.

Do you have any particular things in mind which point in this direction? I would like to understand what the status quo is.

benlivengood4 years ago

I haven't looked at Capsicum specifically, but from the simple overview I read it sounds like it is more similar to dropping root privileges when daemonizing and not the basis for a whole-OS security model. E.g. there isn't (in my limited reading) a way to grant a new file descriptor to a process after it calls cap_enter. Consider a web browser that wants to download or upload a file; there should be a way for the operator to grant that permission to the browser from another process (the OS UI or similar) after it starts running.

To be effective capabilities also need a way to be persistent so that a server daemon doesn't have to call cap_enter but can pick up its granted capabilities at startup. Capsicum looks like a useful way to build more secure daemons within Unix using a lot of capability features.

I also think file descriptors are not the fundamental unit of capability. Capabilities should also cover processes, threads, and the objects managed by various other syscalls.

> Do you have any particular things in mind which point in this direction? I would like to understand what the status quo is.

Unfortunately I don't have great suggestions. The most secure model right now is seL4, and its capability model covers threads, message-passing endpoints, and memory allocation (subdivision) and retyping as kernel memory to create new capabilities and objects. The kernel is formally verified but afaik the application/user level is not fleshed out as a convenient development environment nor as a distributed computing environment.

For distributed computing a capability model would have to satisfy and solve distributed trust issues which probably means capabilities based on cryptographic primitives, which for practical implementations would have to extend full trust between kernels in different machines for speed. But for universality it should be possible to work with capabilities at an abstraction level that allows both deep-trust distributed computers and more traditional single-machine trust domains without having to know or care which type of capabilities to choose when writing the software, only when running it.

I think a foundation for universal capabilities needs support for different trust domains and a way to interoperate between them.

   1. Identifying the controller for a particular capability, which trust domain it is in, and how to access it.
   2. Converting capabilities between trust domains as the objects to which they refer move.
   3. Managing any identity/cryptographic tokens necessary to cross trust domains.
   4. Controlling the ability to grant or use capabilities across trust domains.

A simple example: a caller wants to invoke a capability on a utility process which produces an output, and the caller wants to receive a capability to read that output.

   - The processes may not live on the same machine.
   - The processes may not be in the same trust domain.
   - The resulting object may be on a third machine or trust domain.
   - The caller may have inherited privacy enforcement on all owned capabilities that necessitates e.g. translating the binary code of the second process into a fully homomorphically encrypted circuit which can run on a different trust domain while preserving privacy, and provisioning the necessary keys in the local trust domain so that the capability to the new object can actually read it.
   - The process may migrate to a remote machine in a different trust domain in the middle of processing, in which case the OS needs to either fail the call (making for an unfortunately complicated distributed computer) or transparently snapshot or roll back the state of the process for migration, transmit it and any (potentially newly encrypted) data, and update the capabilities to reflect the new location and trust domain.

Basically, if the capability model isn't capable of solving these issues for what would be very simple local computing, then it's never going to satisfy the OP's desire for a simpler distributed computation model.

I think it's also clear why *nix is woefully short of being able to accomplish this. *nix is inherently local, has a single trust domain, and forces userland code to handle interaction with other trust domains, except in the very limited model of network file systems (and in the case of NFS, essentially an enforced single trust domain with synchronized user/group IDs).
cryptonector4 years ago

Windows has capabilities. It's the combination of handles (file, process, etc.) and access tokens.

But you'll note no one is really deploying Windows workloads to the cloud. Why? Well, because you'd still have to build a framework for managing all those permissions, and it hasn't been done. Also, you might end up with the SVCHOST problem, where you host many different services/apps/whatever in one very threaded process, because you can.

Capabilities aren't necessarily simpler. Especially if you can delegate them without controls -- now you have no idea what the actual running permissions are, only the cold start baseline.

No, I think the permissions thing is a red herring. Very much on the contrary, I think workload division into coarse-grained containers is great for permissions, because fine-grained access control is hard to manage. Of course, you can't destroy complexity, only move it around, so if you end up with many coarse-grained access control units then you'll still have a fine-grained access control system in the end.

Files aren't really a problem either. You can add metadata to files on Linux using xattrs (I've built a custom HTTP server that takes some response headers for static resources, like Content-Type, from xattrs). The problem you're alluding to is duck-typing as opposed to static typing. Yes, it's a problem -- people are lazy, so they don't type-tag everything in highly lazy typing systems. So what? Windows also has this problem, just a bit less so than Unix. Python and JS are all the rage, and their type systems are lazy and obnoxious. It's not a problem with Unix. It's a problem with humans. Lack of discipline. Honestly, there are very few people who could use Haskell as a shell!

> Plenty of object models have come and gone;

Yeah, mostly because they suck. The right model is Haskell's (and related languages').

> so we'll need some theory more powerful than anything we've had in the past ...

I think that's Haskell (which is still evolving) and its ecosystem (ditto).

But at the end of the day, you'll still have very complex metadata to manage.

What I don't understand is how all your points tie into Kubernetes being today's Multics.

Kubernetes isn't motivated by Unix permissions sucking. We had fancy ACLs in ZFS in Solaris and still also ended up having Zones (containers). You can totally build an application-layer cryptographic capability system, running each app as its own isolated user/container, and to some degree this is happening with OAuth and such things, but that isn't what everyone is doing, all the time.

Kubernetes is most definitely not motivated by Unix files being un-typed either.

I hope readers end up floating the other, more on-topic top-level comments in this thread back to the top.

flowerlad4 years ago

The alternatives to Kubernetes are even more complex. Kubernetes takes a few weeks to learn. To learn alternatives, it takes years, and applications built on alternatives will be tied to one cloud.

See prior discussion here: https://news.ycombinator.com/item?id=23463467

You'd have to learn AWS Auto Scaling groups (proprietary to AWS), Elastic Load Balancer (proprietary to AWS) or HAProxy, blue-green deployment or phased rollouts, Consul, systemd, Pingdom, CloudWatch, etc. etc.

dolni4 years ago

Kubernetes uses all those underlying AWS technologies anyway (or at least an equivalently complex thing). You still have to be prepared to diagnose issues with them to effectively administrate Kubernetes.

giantrobot4 years ago

At least with building to k8s you can shift to another cloud provider if those problems end up too difficult to diagnose or fix. Moving providers with a k8s system can be a weeks long project rather than a years long project which can easily make the difference between surviving and closing the doors. It's not a panacea but it at least doesn't make your system dependent on a single provider.

dolni4 years ago

If you can literally pick up and shift to another cloud provider just by moving Kubernetes somewhere else, you are spending mountains of engineering time reinventing a bunch of different wheels.

Are you saying you don't use any of your cloud vendor's supporting services, like CloudWatch, EFS, S3, DynamoDB, Lambda, SQS, SNS?

If you're running on plain EC2 and have any kind of sane build process, moving your compute stuff is the easy part. It's all of the surrounding crap that is a giant pain (the aforementioned services + whatever security policies you have around those).

SahAssar4 years ago

> At least with building to k8s you can shift to another cloud provider if those problems end up too difficult to diagnose or fix.

You're saying that the solution to "k8s is complicated and hard to debug" is to move to another cloud and hope that fixes it?

pm904 years ago

Indeed. When we did a cloud migration we first moved all our apps to a (hosted) k8s cluster, and then to a cloud k8s cluster. This made the migration so much easier.

gizdan4 years ago

Only the k8s admins need to know that though, not the users of it.

dolni4 years ago

"Only the k8s admins" implies you have a team to manage it.

A lot of things go from not viable to viable if you have the luxury of allocating an entire team to it.

gizdan4 years ago

Fair point. But this is where the likes of EKS and GKE come in. It takes away a lot of the pain that comes from managing K8s.

flowerlad4 years ago

That hasn't been my experience. I use Kubernetes on Google cloud (because they have the best implementation of K8s), and I have never had to learn any Google-proprietary things.

hughrr4 years ago

In my experience, Kubernetes on AWS is always broken somewhere as well.

Oh it's Wednesday, ALB controller has shat itself again!

rantwasp4 years ago

cloud agnosticism is, in my experience, a red herring. It does not matter and the effort required to move from one cloud to another is still non-trivial.

I like using the primitives the cloud provides, while also having a path to - if needed - run my software on bare metal. This means: VMs, decoupling the logging and monitoring from the cloud services (use a good library that can send to CloudWatch, for example; prefer open source solutions when possible), doing proper capacity planning (and having the option to automatically scale up if the flood ever comes), etc.

mateuszf4 years ago

> The alternatives to Kubernetes are even more complex. Kubernetes takes a few weeks to learn.

Learning Heroku and starting to use it takes maybe an hour. It's more expensive and you won't have as much control as with Kubernetes, but we used it in production for years for a fairly big microservice-based project without problems.

commanderjroc4 years ago

This feels like a post ranting against SystemD written by someone who likes init.

I understand that K8 does many things but it's also how you look at the problem. K8 does one thing well: managing complex distributed systems - knowing when to scale up and down if you so choose, and when to start up new pods when they fail.

Arguably, this is one problem that is made up of smaller problems that are solved by smaller services, just like how SystemD works.

Sometimes I wonder if the Perlis-Thompson Principle and the Unix Philosophy have become a way to force a legalistic view of software development or are just out-dated.

dolni4 years ago

I don't find the comparison to systemd to be convincing here.

The end result of systemd for the average administrator is that you no longer need to write finicky init scripts of tens or hundreds of lines. They're reduced to unit files which are often just 10-15 lines. systemd is designed to replace old stuff.

The result of Kubernetes for the average administrator is a massively complex system with its own unique concepts. It needs to be well understood if you want to be able to administrate it effectively. Updates come fast and loose, and updates are going to impact an entire cluster. Kubernetes, unlike systemd, is designed to be built _on top of_ existing technologies you'd be using anyway (cloud provider autoscaling, load balancing, storage). So rather than being like systemd, which adds some complexity and also takes some away, Kubernetes only adds.

throwaway8943454 years ago

> So rather than being like systemd, which adds some complexity and also takes some away, Kubernetes only adds.

Here are some bits of complexity that managed Kubernetes takes away:

* SSH configuration

* Key management

* Certificate management (via cert-manager)

* DNS management (via external-dns)

* Auto-scaling

* Process management

* Logging

* Host monitoring

* Infra as code

* Instance profiles

* Reverse proxy

* TLS

* HTTP -> HTTPS redirection

So maybe your point was "the VMs still exist" which is true, but I generally don't care because the work required of me goes away. Alternatively, you have to have most/all of these things anyway, so if you're not using Kubernetes you're cobbling together solutions for these things which has the following implications:

1. You will not be able to find candidates who know your bespoke solution, whereas you can find people who know Kubernetes.

2. Training people on your bespoke solution will be harder. You will have to write a lot more documentation whereas there is an abundance of high quality documentation and training material available for Kubernetes.

3. When something inevitably breaks with your bespoke solution, you're unlikely to get much help Googling around, whereas it's very likely that you'll find what you need to diagnose / fix / work around your Kubernetes problem.

4. Kubernetes improves at a rapid pace, and you can get those improvements for nearly free. To improve your bespoke solution, you have to take the time to do it all yourself.

5. You're probably not going to have the financial backing to build your bespoke solution to the same quality caliber that the Kubernetes folks are able to devote (yes, Kubernetes has its problems, but unless you're at a FAANG then your homegrown solution is almost certainly going to be poorer quality if only because management won't give you the resources you need to build it properly).

dolni4 years ago

Respectfully, I think you have a lot of ignorance about what a typical cloud provider offers. Let's go through each of these step by step.

> SSH configuration

Do you mean the configuration for sshd? What special requirements would you have that Kubernetes would help fulfill?

> Key management

Assuming you mean SSH authorized keys since you left this unspecified. AWS does this with EC2 instance connect.

> Certificate management (via cert-manager)

AWS has ACM.

> DNS management (via external-dns)

This is not even a problem if you use AWS cloud primitives. You point Route 53 at a load balancer, which automatically discovers instances from a target group.

> Auto-scaling

AWS already does this via autoscaling.

> Process management

systemd and/or docker do this for you.

> Logging

AWS can send instance logs to CloudWatch. See https://docs.aws.amazon.com/systems-manager/latest/userguide....

> Host monitoring

In what sense? Amazon target groups can monitor the health of a service and automatically replace instances that report unhealthy, time out, or otherwise.

> Infra as code

I mean, you have to have a description somewhere of your pods. It's still "infra as code", just in the form prescribed by Kubernetes.

> Instance profiles

Instance profiles are replaced by secrets, which I'm not sure is better, just different. In either case, if you're following best practices, you need to configure security policies and apply them appropriately.

> Reverse proxy

AWS load balancers and target groups do this for you.

> HTTPS

AWS load balancers, CloudFront, do this for you. ACM issues the certificates.

I won't address the remainder of your post because it seems contingent on the incorrect assumption that all of these are "bespoke solutions" that just have to be completely reinvented if you choose not to use Kubernetes.

mst4 years ago

Right, I really dislike systemd in many ways ... but I love what it enables people to do, and I accept that for all my grumpiness about it, it is overall a net win in many scenarios.

k8s ... I think is often overkill in a way that simply doesn't apply to systemd.

parasubvert4 years ago

If you have to manage a large distributed software code base or set of datacenters, Kubernetes is a win in that it provides a consistent, elegant solution to a nearly universal set of problems.

Systemd comparatively feels like a complete waste of time given the heat it has generated for the benefit.

thethethethe4 years ago

> The end result of systemd for the average administrator is that you no longer need to write finicky init scripts of tens or hundreds of lines.

Wouldn't the hundreds of lines of finicky, bespoke Ansible/Chef/Puppet configs required to manage non-k8s infra be the equivalent of this?

mathw4 years ago

In my work, absolutely yes. Using Kubernetes has saved us sooo much nonsense. Yes we have a mix of Terraform and k8s manifests to deploy to Azure Kubernetes Service, but it works out pretty well in the end.

Honestly most of the annoyance is Azure stuff. Kubernetes stuff is pretty joyful and, unlike Azure, the documentation sometimes even explains how it works.

dolni4 years ago

I can't say I have had the same experience.

Kubernetes cluster changes potentially create issues for all services operating in that cluster.

Provisioning logic that is baked into an image means changes to one service have no chance of affecting other services (app updates that create poor netizen behavior, notwithstanding). Rolling back an AMI is as trivial as setting the AMI back in the launch template and respinning instances.

There is a lot to be said for being able to make changes that you are confident will have a limited scope.

dolni4 years ago

Does Kubernetes infrastructure also not require some form of configuration?

Yes, there is a trade off here. You are trading a staggeringly complex external dependency for a little bit of configuration you write yourself.

The Kubernetes master branch weighs in at ~4.6 million lines of code right now. Ansible sits at ~286k on their devel branch (this includes the core functionality of Ansible but not every single module). You could choose not to even use Ansible and just write a small shell script that builds out an image which does something useful in less than 500 lines of your own code, easily.

Kubernetes does useful stuff and may take some work off your plate. It's also a risk. If it breaks, you get to keep both of the pieces. Kubernetes occupies the highly unenviable space of having to do highly available network clustering. As a piece of software, it is complex because it has to be.

Most people don't need the functionality provided by Kubernetes. There are some niceties. But if I have to choose between "this ~500 line homebrew shell script broke" and "a Kubernetes upgrade went wrong" I know which one I am choosing, and it's not the Kubernetes problem.

Managed Kubernetes, like managed cloud services, mitigates some of those issues. But you can still end up with issues like mismatched node sizes and pod resource requirements, so there is a bunch of unused compute.

TL;DR of course there are trade-offs, no solution is magic.

thethethethe4 years ago

Fair, I was just pointing out that there was more to the analogy. Systemd, like init, also requires configuration, though it is more declarative than imperative, similar to k8s. Some people may prefer this style and consider it easier to manage; however, my opinions here are not that strong.

0xEFF4 years ago

Kubernetes removes the complexity of keeping a process (service) available.

There’s a lot to unpack in that sentence, which is to say there’s a lot of complexity it removes.

Agree it does add as well.

I’m not convinced k8s is a net increase in complexity after everything is accounted for. Authentication, authorization, availability, monitoring, logging, deployment tooling, auto scaling, abstracting the underlying infrastructure, etc…

dolni4 years ago

> Kubernetes removes the complexity of keeping a process (service) available.

Does it really do that if you just use it to provision an AWS load balancer, which can do health checks and terminate unhealthy instances for you? No.

Sure, you could run some other ingress controller but now you have _yet another_ thing to manage.

randomswede4 years ago

Do AWS load balancers distinguish between "do not send traffic" and "needs termination"?

Kubernetes has readiness checks and health checks for a reason. The readiness check is a gate for "should receive traffic" and the health check is a gate for "should be restarted".
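For reference, the two checks are declared separately on the container. A minimal, hedged sketch (endpoints and timings are made up):

    # Part of a container spec: readiness gates traffic, liveness gates restarts.
    readinessProbe:
      httpGet:
        path: /healthz/ready     # hypothetical endpoint
        port: 8080
      periodSeconds: 5
    livenessProbe:
      httpGet:
        path: /healthz/live      # hypothetical endpoint
        port: 8080
      initialDelaySeconds: 10
      failureThreshold: 3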

rossmohax4 years ago

> K8 does one thing well: managing complex distributed systems - knowing when to scale up and down if you so choose, and when to start up new pods when they fail.

K8s does the very simple stateless case well, but anything more complicated and you are on your own. Stateful services are still a major pain, especially those with leader elections. There is no feedback to K8s about the application state of the cluster, so it can't know which instances are less disruptive to shut down or which shard needs more capacity.

throwaway8943454 years ago

> I understand that K8 does many things but it's also how you look at the problem. K8 does one thing well: managing complex distributed systems - knowing when to scale up and down if you so choose, and when to start up new pods when they fail.

Also, in the sense of "many small components that each do one thing well", k8s is even more Unix-like than Unix in that almost everything in k8s is just a controller for a specific resource type.

Animats4 years ago

I'm not sure that "fewer concepts" is a win. "Everything is a file" went too far with Linux, where you get status from the kernel by reading what appear to be various text files. But that runs into all the complexities of maintaining the file illusion. What if you read it in small blocks? Does it change while being read? If not, what if you read some of it and then just hold the file handle? Are you tying up kernel memory? Holding important locks? Or what?

Orchestration has a political and business problem, too. How does Amazon feel about something that runs most jobs on your own bare metal servers and rents extra resources from AWS only during overload situations? This appears to be the financially optimal strategy for compute-bound work such as game servers. Renting bare iron 24/7 at AWS prices is not cost effective.

jiggawatts4 years ago

> "Everything is a file" went too far with Linux

Having had a play with a few variants on this theme, I think kernel based abstractions are the mistake here. It's too low level and too constrained by the low-level details of the API, as you've said yourself.

If you look at something like PowerShell, it has a variant of this abstraction that is implemented in user mode. Within the PowerShell process, there are provider plugins (DLLs) that implement various logical filesystems like "environment variables", "certificates", "IIS sites", etc...

These don't all implement the full filesystem APIs! Instead they have various subsets. E.g., some providers only implement atomic reads and writes, which is what you want for something like kernel parameters, but not generic data files.

jaaron4 years ago

I feel like we've already seen some alternatives and the industry, thus far, is still orienting towards k8s.

Hashicorp's stack, using Nomad as an orchestrator, is much simpler and more composable.

I've long been a fan of Mesos' architecture, which I also think is more composable than the k8s stack.

I just find it surprising an article that is calling for an evolution of the cluster management architecture fails to investigate the existing alternatives and why they haven't caught on.

verdverm4 years ago

We had someone explore K8s vs Nomad and they chose K8s because the Nomad docs are bad. They got much further with K8s in the same timeboxed spike.

bradstewart4 years ago

Setting up the right parameters/eval criteria to exercise inside of a few week timebox (I'm assuming this wasn't a many month task) is extremely difficult to do for a complex system like this. At least, to me it is--maybe more ops focused folks can do it quicker.

Getting _something_ up and running quickly isn't necessarily a good indicator of how well a set of tools will work for you over time, in production work loads.

verdverm4 years ago

It was more about migrating the existing microservices than some example app; it runs in Docker Compose today. Getting the respective platforms up was not the issue. I don't think weeks were spent, but they were able to migrate a complex application to K8s in less than a week. Couldn't get it running in Nomad, which was tried first due to its supposed simplicity over K8s.

hnlmorg4 years ago

Several years ago -- so pre-K8s too -- I was tasked with setting up a Nomad cluster and failed miserably. Nomad and Consul are designed to work together but are also designed distinctly enough that it was a bloody nightmare trying to figure out what order things needed to be spun up in and how they all interacted with each other. The documentation was more like a man page where you'd get a list of options but very little guidance on how to set it up, unlike K8s, whose documentation has a lot of walk-through material.

Things might have improved massively for Nomad since but I honestly have no desire to learn. Having used other Hashicorp tools since, I see them make the same mistakes time and time again.

Now I'm not the biggest fan of K8s either. I completely agree that they're hugely overblown for most purposes despite being sold as a silver bullet for any deployment. But if there's one thing K8s does really well it's describing the different layers in a deployment and then wrapping that up in a unified block. There's less of the "this thing is working but is this other thing" when spinning up a K8s cluster.

mlnj4 years ago

For me when exploring K8s vs Nomad, Nomad looked like a clear choice. That was until I had to get Nomad + Consul running. I found it all really difficult to get running in a satisfactory manner. I never even touched the whole Vault part of the setup because it was all overwhelming.

On the other side, K8s was a steep learning curve with lots of options and 'terms' to learn, but there never was a point in the whole exploration where I was stuck. The docs are great, the community is great, and the number of examples available allows us to mix and match lots of different approaches.

cturner4 years ago

There is a trap in distributed system design - seeking to scale up from a single-host perspective. An example - we have Apache and want to scale it up, so we put it in a container and generate its configuration so we can run several of them in parallel.

This leads to unnecessarily heavy systems - you do not need a container to host a server socket.

Industry puts algorithms and Big O on a pedestal. Most software projects start as someone building algorithms, with deployment and interactions only getting late attention. This is a bit like building the kitchen and bathroom before laying the foundations.

Algorithm centric design creates mathematically elegant algorithms that move gigabytes of io across the network for every minor transaction. Teams wrap commodity resource schedulers around carefully tuned worker nodes, and discover their performance is awful because the scheduler can’t deal in the domain language of the big picture problem.

I think it is interesting that the culture of Big O interviews and k8s both came out of Google.

zbentley4 years ago

Do you have any examples/ideas of what a non algorithm-first approach might look like?

cturner4 years ago

Not sure if this is helpful, but there are some notes at cthulix.com.

jillesvangurp4 years ago

The problem is the devops culture that has burdened development teams with having to juggle a lot of complexity. The solution is having some separation of concerns. Development teams should not have to spend a lot of time on devops. That's something that should just work that you buy from someone. You pay for the privilege of doing more interesting things.

Kubernetes becomes a problem when you have people who are not operations people with many years of experience with this stuff trying to do this while learning how to do it at the same time. The related problem is that having people spend time on this is orders of magnitude more expensive than it is to run an actual cluster, which is also not cheap.

A week of devops time easily equates to months/years of cloud hosting time for a modestly sized setup using e.g. Google Cloud Run. And let's face it, it's never just a week. Many teams have full time devops people costing $100-200K/year, each. Great if you are running a business generating millions of revenue. Not so great if you are running a project that has yet to generate a single dollar of revenue and is a long time away from actually getting there. That describes most startups out there.

I actually managed to stay below the Cloud Run freemium layer for a while, making it close to free. Took me 2 minutes to set up CI/CD. Comes with logging, auto scaling, alerting, etc. Best of all, it freed me up to do more interesting things. Technically I'm using Kubernetes. Except of course I'm not. I spent zero time fiddling with Kubernetes-specific config. All I did was tell Google Cloud Run to go create me a CI/CD pipeline from this git repository and scale it. A 3-minute job to click together. The service was up and running right after the build succeeded. Great stuff. That's how devops should be: spend a minimum of time on it in exchange for acceptable results.

parasubvert4 years ago

"Development teams should not have to spend a lot of time on devops. That's something that should just work that you buy from someone."

This is the fundamental disagreement. DevOps was a reaction to developers that built software that was nearly impossible to operate because they treated Ops as servants paid to do the dirty work, rather than peers with a set of valuable skills that cover a scope beyond what many Dev teams have. And it was a reaction to Ops being ground down into becoming the "department of no", when really they should be at the table with the development team as a way towards a collaborative reality check. A model where one team gets to completely ignore the complexities of operational reality is a broken, inhumane, and unsustainable model.

That said, it's also unsustainable to expose all complexity to dev teams that don't have the skills or incentive to manage this. Progressive disclosure and composable abstractions are the tool to remedy this. Kubernetes was never intended to be exposed directly to app developers, it was a system developer's platform toolkit. Exposing it is misunderstanding + laziness on the part of some operations teams. The intent was always to build higher PaaS-like abstractions such as Knative (which is what Google Cloud Run is based on).

midrus4 years ago

As a frontend developer, I love to run applications in production, being able to get a terminal to my server, setup metrics, and do all these devopsy things.

But it is a totally different experience doing this with App Engine, Heroku, Tsuru, etc. than with a custom in-house Kubernetes plus a thousand custom home-made tools, 10 different repositories with custom undocumented YAML files, and another 3000 "gotchas" of things that don't work yet ("we're on it", "we need to migrate to the new version", etc.).

So I sympathize with the parent comment in the sense that, in this custom-built mountain of stuff, I don't want to do devops... if you give me an easy to use, well tested, well documented, stable production infrastructure like the ones I mentioned, then I'm all in.

I also agree with you on your last paragraphs about not exposing the raw thing to the developers. This is the key.

The problem is when the systems gurus want you to understand to the same level everything they understand, your frontend coworkers want you to be on the latest of every library, your product manager wants you to perfectly understand the product, your manager expect you to be the best at dealing with people, and you still have to smile and be happy about team building... oh, and don't forget the Agile Coach expecting you to also be good at all the team dynamics and card games.

I'm all in in operating the applications my team builds. Having to operate custom in house kubernetes clusterfucks is not my job.

parasubvert4 years ago

100%. I spent 5+ years of my life helping cloud foundry take off, and saw the enormous benefits of having your own private Heroku.

But the market overwhelmingly decided it wanted to play with a lower level foundation (those CF instances mostly are still chugging along running hundreds of thousands of containers, but they’re in their own world… “legacy”?).

Let’s own it and not delude ourselves that the current state of Kubernetes is the end state. It’s like saying the Linux syscall interface is too complex for app developers. Well yes! It’s for system developers. We as an industry are working to improve that.

zdw4 years ago

Treating ops as a separate janitorial service and how that goes south is nicely captured in this article:

https://machinesplusminds.blogspot.com/2012/08/the-carpets-a...

mgkimsal4 years ago

> Great if you are running a business generating millions of revenue.

It's not even great in that situation. Millions in profit, perhaps, but that $200k+ would probably better be spent elsewhere - enhancing functionality, increasing sales, support, etc.

skissane4 years ago

One point where the analogy fails, is that Multics was never particularly popular. Although it was historically influential (especially but not purely through its influence on Unix), it was only ever a small player in the market. It was positioned as an operating system for high-end multi-million dollar mainframes, but in that market IBM was king (with thousands of sites), Multics wasn't even near being second place (with a mere 80 sites at its peak). Even for its vendor, GE/Honeywell, it was an also-ran – Honeywell ended up preferring GCOS as the solution for that market, which is part of why it killed Multics off. GCOS was no doubt technically inferior, but it was a simpler system which made more frugal use of system resources.

By contrast, k8s is wildly popular. I have no idea how many installations of it exist in the world, but it probably numbers into the millions.

hnjst4 years ago

I'm pretty biased since I gave k8s trainings and operate several kubes for my company and clients.

I'll take two pretty different contexts to illustrate why for me k8s makes sense.

1- I'm part of the cloud infrastructure team (99% AWS, a bit of Azure) for a pretty large private bank. We are in charge of security and compliance for the whole platform while trying to let teams be as autonomous as possible. The core services we provide are a self-hosted Gitlab along with ~100 CI runners (Atlantis and Gitlab-CI, that many for segregation), SSO infrastructure and a few other little things. With a team of 5, I don't really see a better way to run this kind of workload with the required SLA. The whole thing is fully provisioned and configured via Terraform along with its dependencies, and we have a staging env that is identical (and the ability to pop another at will or to recreate this one). Plenty of benefits like almost 0 downtime upgrades (workloads and cluster), off-the-shelf charts for plenty of apps, observability, resource optimization (~100 runners mostly idle on a few nodes), etc.

2- Single-VM projects (my small company infrastructure and home server) for which I'm using k3s. Same benefits in terms of observability, robustness (at least while the host stays up...), IaC, and resource usage. A stable, minimalist, hardened host OS with the ability to run whatever makes sense inside k3s. I had to set up similarly small infrastructures for other projects recently with the constraint of relying on more classic tools so that it's easier for the next ops person to take over, and I ended up rebuilding a fraction of the k8s/k3s features with much more effort (did that with Docker and directly on the host OS for several projects).

Maybe that's because I know my hammer well enough for screws to look like nails, but from my perspective, once the tool is not an obstacle, k8s standardized and made available a pretty impressive and useful set of features, at large scale but arguably also for smaller setups.

pram4 years ago

99% AWS? You can do Gitlab runners and pretty much everything else with ECS+Fargate. You wouldn't even need to maintain any nodes, clusters, etc!

ilovefood4 years ago

We have both Nomad (Consul + Vault + Nomad) and Kubernetes (hosted and on-prem) running; both excel at different things.

I love Nomad's flexibility and ease of use: a simple HCL file and I (and all the devs) can debug and understand what is going on with the deployment without wasting a whole sprint; debugging and understanding the systems is trivial. However, I agree parts of the documentation should be fixed and can confuse people who want to start up, and it's also relatively "new" insofar as there is a small but growing community around it. I love Kubernetes because of the community: if there's a Helm chart for a service, it's going to work in 80% of the cases. If however there are bugs in the Helm chart, or something is not quite on the beaten path, then good luck. Most of the time wasted on Kubernetes was due to the inexperience of the operators and also the esoteric bugs that can happen now and then. Building on top of things that have been done before is a great way to win time and flexibility, but it shouldn't be an excuse to not understand them (Helm charts as an example).

In both cases, you always need an ops team to take care of the clusters. For Nomad, 2/3 people are enough. For Kubernetes you will need 5+ people depending on the size and locality of the cluster, if you want to do things right, that is. If your dev team is managing them it's already game over and just a question of time until you made yourself more real problems than you initially had.

What bugs me the most, however, is the cargo culting around the tools, serving as a "beating around the bush" technique to avoid doing actual work. They're just that, tools: if you have to deploy a Rails or Django app with an SQLite database, just do it on metal with a two-liner "CI/CD" and grow from there. If it gets bigger, sure, go for Kubernetes to manage the deployments and auto-scale, but be damn sure that you can debug anything that goes wrong within minutes/hours. If things go wrong and there's no hit on your googled error code, you essentially fall from your highest level of abstraction and are at the mercy of consultants that will both waste your time in writing requirements and waste your money by taking more time than was initially planned and agreed upon (my experience, sample size N=6).

debarshri4 years ago

One of the most relevant and amazing blogs I have read in recent times.

I have been working for a firm that has been onboarding multiple small-scale startups and lifestyle businesses to Kubernetes. My opinion is that if you have a Ruby on Rails or Python app, you don't really need Kubernetes. It is like bringing a bazooka to a knife fight. However, I do think Kubernetes has some good practices embedded in it, which I will always cherish.

If you are not operating at huge scale, in terms of operations and/or teams, it actually comes at a high cost in productivity and tech debt. I wish there were an easier tech that would bridge going from VMs, to a bunch of VMs, to a bunch of containers, to Kubernetes.

zozbot2344 years ago

> Kubernetes is our generation's Multics

Prove it. Create something simpler, more elegant and more principled that does the same job. (While you're at it, do the same for systemd which is often criticized for the same reasons.) Even a limited proof of concept would be helpful.

Plan9 and Inferno/Limbo were built as successors to *NIX to address process/environment isolation ("containerization") and distributed computing use cases from the ground up, but even these don't come close to providing a viable solution for everything that Kubernetes must be concerned with.

Fordec4 years ago

I can claim electric cars will beat out hydrogen cars in the long run. I don't have to build an electric car to back up this assertion. I can look at the fundamental factors at hand and project out based on theoretical maximums.

I can also claim humans will have longer lifespans in the future. I don't need to develop a life extending drug before I can hold that assertion.

Kubernetes is complex. Society still worked on simpler systems before we added layers of complexity. There are dozens of layers of abstraction above the level of transistors; it is not a stretch to think that there is a more elegant abstraction yet to be designed, without it having to "prove" itself to zozbot234.

parasubvert4 years ago

Claiming Kubernetes is Multics, and that UNIX is around the corner, is a worthless claim without actual data or argument to back it up.

To me, Kubernetes is the new UNIX, centered around a small number of core ideas: controller loops, Pods, level-triggered events, and a fully open, well-standardized, declarative, and extensible RESTful API.

The various clouds and predecessor cloud orchestrators were the infinitely complicated beasts.

OP just linked to a few rants about the complexity of the CNCF ecosystem (not Kubernetes), and an extended cranky rant / thought exercise by the MetalLB guy. The latter is the closest to an actual argument against Kubernetes, but there's a LOT of things to disagree with in that post.

pphysch4 years ago

What are the "fundamental factors at hand" with Kubernetes and software orchestration? How do you quantify these things?

0xdeadbeefbabe4 years ago

> comments are intended to add color on the design of the Oil language and the motivation for the project as a whole.

Comments are also easier to write than code. He really does seem obligated to prove Kubernetes is our generation's Multics, and that's a good thing.

lallysingh4 years ago

The successor will probably be a more integrated platform that provides a lot of the stuff you currently need sidecars, etc., for.

Probably a language with good IPC (designed for real distributed systems that handle failover), some unified auth library, and built-in metrics and logging.

A lot of real-life k8s complexity is trying to accommodate many supplemental systems for that stuff. Otherwise it's a job scheduler and haproxy.

gizdan4 years ago

Nomad also doesn't have a lot of features that are built into Kubernetes, features that otherwise require other HashiCorp tools. So now you have a Vault cluster, a Consul cluster, a Nomad cluster, then HCL to manage it all, and probably a Terraform Enterprise cluster. So what have you gained, besides the same amount of complexity with fewer features?

AlexCoventry4 years ago

I think Nomad sounds like the direction the OP blog post is proposing to move in: a set of largely independent tools which can each address some aspect of the problem kubernetes is trying to solve.

gizdan4 years ago

> a set of largely independent tools which can each address some aspect of the problem kubernetes is trying to solve.

But Kubernetes is already this. Sure, the core is a lot bigger than something like Nomad, but some of it is replaceable, and there are plenty of simpler alternatives to the built-in pieces.

And anyway, my point still stands. What's the point of having 20 different independent systems that address the aspects K8s is trying to solve, versus one big system that addresses all the headaches? To me, having 20 different systems that potentially have many fundamental differences is more complex than a single system that has the same design philosophies and good integration across the board.

hendry4 years ago

AWS's cloud primitives are certainly better. Of course they're not FOSS, but they prove orchestration can be done more simply.

https://ably.com/blog/no-we-dont-use-kubernetes

For local development (a must imo), just rock a docker-compose.yml that emulates your cloud setup, which is orchestrated with Terraform/CloudFormation.
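To make that concrete, here is a minimal sketch of the kind of local docker-compose.yml meant here; the service names, ports, and credentials are hypothetical, not anything prescribed above:

    # docker-compose.yml -- hypothetical local stand-in for the cloud environment
    services:
      web:
        build: .                    # the app's own Dockerfile
        ports:
          - "8080:8080"
        environment:
          DATABASE_URL: postgres://app:app@db:5432/app
        depends_on:
          - db
      db:
        image: postgres:14
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: app
          POSTGRES_DB: app

Run it with "docker compose up"; the real cloud resources stay described in Terraform/CloudFormation, and the compose file only has to approximate them closely enough for local development.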

krick4 years ago

This is absolutely not an alternative, not even close. AWS is exactly that: Amazon Web Services. Do you need to host your stuff somewhere else one day? Good luck re-inventing everything from scratch.

I am sort of a k8s hater myself, because I've seen very simple and straightforward production pipelines, reasonably well understood by admins, turn into over-complicated shit with buggy deploy pipelines, literally 10 times slower, that no one really understands. All of this to manage maybe 10 nodes per service. All of that said, I cannot deny that these new solutions offer something that the previous generation of Ansible scripts and AWS primitives did not. Now we can move all of it to pretty much any infrastructure without changing much. And as much as I hate it, I don't really have an answer to "what else, if not Kubernetes?" that doesn't feel a little bit dishonest. I seriously would like to hear one.

spmurrayzzz4 years ago

Comment on your first point: I have done the work you speak of (porting AWS-specific code to other cloud providers). It is absolutely possible and relatively painless if you design for that feature at the outset. Almost all of the lower-level AWS services have a counterpart in the other ecosystems.

So if you build the right interface abstractions around those components, it gets you a long way.

qaq4 years ago

If you are running, say, a monolith in a container on Fargate, fronted by an ALB, that talks to RDS PG or Aurora, there is not much complexity in moving that anywhere.

LargoLasskhyfv4 years ago

Needs to have a really serious branding first.

Like Yolodyne Cybernetrix

shard9724 years ago

swarmkit

zmmmmm4 years ago

I feel like k8s sits in the same space as git. One of those tools that is ridiculously complex, obtuse, and un-user-friendly, but at the same time worth sucking it all up, because the win from consolidating your knowledge into something that is an industry standard is far greater than whatever particular things one doesn't like about how it works.

It is a fascinating dynamic, however, that generates these outcomes where a large number of people collectively settle on something that the majority of them seem to hate.

solatic4 years ago

> A distributed OS that follows the Perlis-Thompson Principle would have fewer concepts.

Kubernetes is a relatively simple system with few concepts. You have manifests stored in etcd, behind the API server, and various controllers that act on these manifests. Some controllers (Deployment, StatefulSet, etc.) come standard out of the box, some are custom and added later. The basic unit of computation is a Pod, and DNS is provided with Services. Cluster administrators need to worry about the networking and storage layers, not cluster users. Honestly, that's pretty much it! Really not so complicated.

Now, does that help you write a manifest for the Deployment controller? No, and neither does it help you autoscale the Deployment via writing a manifest for the HorizontalPodAutoscaler controller, or setting up a load balancer by writing a manifest for the Ingress controller. But I wouldn't call the UNIX model complex because Linux distributions and package managers add complexity.
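As a concrete illustration of "a manifest for the Deployment controller", here is a minimal sketch; the names and image are hypothetical:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: example/web:1.0.0   # hypothetical image
              ports:
                - containerPort: 8080

The Deployment controller watches this manifest (stored in etcd, behind the API server) and keeps three matching Pods running, which is exactly the controllers-acting-on-manifests loop described above.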

christophilus4 years ago

Kubernetes gets a lot of shade, and rightfully so. It’s a tough problem. I do hope we get a Ken Thompson or Rich Hickey-esque solution at some point.

cogman104 years ago

I see the shade thrown at k8s... but honestly I don't know how much of it is truly deserved.

k8s is complex not unnecessarily, but because k8s is solving a large host of problems. It isn't JUST solving the problem of "what should be running where". It's solving problems like "how many instances should be where? How do I know what is good and what isn't? How do I route from instance A to instance B? How do I flag when a problem happens? How do I fix problems when they happen? How do I provide access to a shared resource or filesystem?"

It's doing a whole host of things that are often ignored by shade throwers.

I'm open to any solution that's actually simpler, but I'll bet you that by the time you've reached feature parity, you end up with the same complex mess.

The main critique I'd throw at k8s isn't that it's complex, it's that there are too many options to do the same thing.

giantrobot4 years ago

I think part of the shade throwing is that k8s has a high lower bound of scale/complexity, an "entry fee" below which it doesn't actually make sense. If your scale/complexity envelope is below that lower bound, you're fighting k8s, wasting time, or wasting resources.

Unfortunately unless you've got a lot of k8s experience that scale/complexity lower bound isn't super obvious. It's also possible to have your scale/complexity accelerate from "k8s isn't worthwhile" to "oh shit get me some k8s" pretty quickly without obvious signs. That just compounds the TMTOWTDI choice paralysis problems.

So you get people that choose k8s when it doesn't make sense and have a bad time and then throw shade. They didn't know ahead of time it wouldn't make sense and only learned through the experience. There's a lot of projects like k8s that don't advertise their sharp edges or entry fee very well.

throwaway8943454 years ago

> I think part of the shade throwing is k8s has a high lower bound of scale/complexity "entry fee" where is actually makes sense. If your scale/complexity envelope is below that lower bound, you're fighting k8s, wasting time, or wasting resources.

Maybe compared to Heroku or similar, but compared to a world where you're managing more than a couple of VMs I think Kubernetes becomes compelling quickly. Specifically, when people think about VMs they seem to forget all of the stuff that goes into getting VMs working which largely comes with cloud-provider managed Kubernetes (especially if you install a couple of handy operators like cert-manager and external-dns): instance profiles, AMIs, auto-scaling groups, key management, cert management, DNS records, init scripts, infra as code, ssh configuration, log exfiltration, monitoring, process management, etc. And then there's training new employees to understand your bespoke system versus hiring employees who know Kubernetes or training them with the ample training material. Similarly, when you have a problem with your bespoke system, how much work will it be to Google it versus a standard Kubernetes error?
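As one hedged example of what that buys you: with cert-manager and external-dns installed in the cluster, certificates and DNS records can be driven from a single Ingress manifest. The hostname, issuer name, and backing Service below are hypothetical:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod  # assumes a ClusterIssuer with this name exists
    spec:
      tls:
        - hosts:
            - app.example.com
          secretName: web-tls        # cert-manager issues and renews this certificate
      rules:
        - host: app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web
                    port:
                      number: 80

external-dns can watch the Ingress host and create the matching DNS record, so the cert-management and DNS items from the list above become declarative instead of bespoke scripts.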

Also, Kubernetes is really new and it is getting better at a rapid pace, so when you're making the "Kubernetes vs X" calculation, consider the trend: where will each technology be in a few years. Consider how little work you would have to do to get the benefits from Kubernetes vs building those improvements yourself on your bespoke system.

lanstin4 years ago

Honestly, the non-k8s cloud software is also getting excellent. When I have a new app that I can't containerize (network proxies mostly), I can modify my standard Terraform pretty quickly and get multi-AZ, customized AMIs, per-app user-data.sh, restart on failure, etc., with private certs and our suite of required IPS daemons. It's way better than pre-cloud things. K8s also seems good for larger scale and for where you have a bunch of PD teams wanting to deploy stuff, with people that can generate all the YAML/annotations etc. If your deploy numbers scale with the number of people that can do it, then k8s works awesomely. If you have just one person doing a bunch of stuff, simpler things can let that one person manage and create a lot of compute in the cloud.

novok4 years ago

K8s is the semi truck of software: great for semi-scale things, but often used when a van would do just fine.

cogman104 years ago

To me, usefulness is less to do with scale and more to do with number of distinct services.

If you have just a single monolith app (such as a wordpress app) then sure, k8s is overkill. Even if you have 1000 instances of that app.

It's once you start having something like 20+ distinct services that k8s starts paying for itself.

lanstin4 years ago

Especially with 10 distinct development teams that all have someone smart enough to crank out some YAML with their specific requirements.

loudlambda4 years ago

Kubernetes is an aircraft carrier, where most people just need a skiff.

dolni4 years ago

> how many instances should be where?

Are you referring to instances of your application, or EC2 instances? If instances of your application, in my experience it doesn't really do much for you unless you are willing to waste compute resources. It takes a lot of dialing in to effectively colocate multiple pods and maximize your resource utilization. If you're referring to EC2 instances, well, AWS autoscaling does that for you.

Amazon and other cloud providers have the advantage of years of tuning their virtual machine deployment strategies to provide maximum insulation from disruptive neighbors. If you are running your own Kubernetes installation, you have to figure it out yourself.

> How do I know what is good and what isn't?

Autoscaling w/ a load balancer does this trivially with a health check, and it's also self-healing.

> How do I route from instance A to instance b?

You don't have to know or care about this if you're in a simple VPC. If you are in multiple VPCs or a more complex single VPC setup, you have to figure it out anyway because Kubernetes isn't magic.

> How do I flag when a problem happens?

Probably a dedicated service that does some monitoring, which as far as I know is still standard practice for the industry. Kubernetes doesn't make that go away.

> How do I fix problems when they happen?

This is such a generic question that I'm not sure how you felt it could be included. Kubernetes isn't magic, your stuff doesn't always just magically work because Kubernetes is running underneath it.

> How do I provide access to a shared resource or filesystem?

Amazon EFS is one way. It works fine. Ideally you are not using EFS and prefer something like S3, if that meets your needs.

> It's doing a whole host of things that are often ignored by shade throwers.

I don't think they're ignored; I think you assume they are because those things aren't talked about. They aren't talked about because they aren't an issue with Kubernetes.

The problem with Kubernetes is that it is a massively complex system that needs to be understood by its administrators. The problem it solves overlaps nearly entirely with existing solutions that it depends on. And it introduces its own set of issues via complexity and the breakneck pace of development.

You don't get to just ignore the underlying cloud provider technology that Kubernetes is interfacing with just because it abstracts those away. You have to be able to diagnose and respond to cloud provider issues _in addition_ to those that might be Kubernetes-centric.

So yes, Kubernetes does solve some problems. Do the problems it solves outweigh the problems it introduces? I am not sure about that. My experience with Kubernetes is limited to troubleshooting issues with Kubernetes ~1.6, which we got rid of because we regularly ran into annoying problems. Things like:

* We scaled up and then back down, and now there are multiple nodes running 1 pod and wasting most of their compute resources.

* Kubernetes would try to add routes to a route table that was full, and attempts to route traffic to new pods would fail.

* The local disk of a node would fill up because of one bad actor and impact multiple services.

At my workplace, we build AMIs that bake-in their Docker image and run the Docker container when the instance launches. There are some additional things we had to take on because of that, but the total complexity is far less than what Kubernetes brings. Additionally, we have the side benefit of being insulated from Docker Hub outages.

encryptluks24 years ago

I think a large part of the problem is that systems like Kubernetes are designed to be extensible with a plugin architecture in mind. Simple applications usually have one way of doing things but they are really good at it.

This raises the question of whether there is a right or wrong way of doing things, and whether a single system can adapt fast enough to the rapidly changing underlying strategies, protocols, and languages to always be at the forefront of what is considered best practice at all levels of development and deployment.

These unified approaches usually manifest themselves as each cloud provider's best-practice playbooks, but each public cloud is different. Unless something like Kubernetes can build a unified approach across all cloud providers and self-hosting solutions, it will always be overly complex, because it will always be changing for each provider to maximize their interest in adding their unique services.

throwaway8943454 years ago

Having used Kubernetes for a while, I'm of the opinion that it's not so much complex as it is foreign; when we learn Kubernetes we're confronted with a bunch of new concepts all at once, even though each of the concepts is pretty simple. For example, people are used to Ansible or Terraform managing their changes, and "controllers continuously reconciling" takes a bit to wrap one's head around.

And then there are all of the different kinds of resources and the general UX problem of managing errors ("I created an ingress but I can't talk to my service" is a kind of error that requires experience to understand how to debug because the UX is so bad, similarly all of the different pod state errors). It's not fundamentally complex, however.

The bits that are legitimately complex seem to involve setting up a Kubernetes distribution (configuring an ingress controller, load balancer provider, persistent volume providers, etc) which are mostly taken care of for you by your cloud provider. I also think this complexity will be resolved with open source distributions (think "Linux distributions", but for Kubernetes)--we already have some of these but they're half-baked at this point (e.g., k3s has local storage providers but that's not a serious persistence solution). I can imagine a world where a distribution comes with out-of-the-box support for not only the low level stuff (load balancers, ingress controllers, persistence, etc) but also higher level stuff like auto-rotating certs and DNS. I think this will come in a few years but it will take a while for it to be fleshed out.

Beyond that, a lot of the apparent "complexity" is just ecosystem churn--we have this new way of doing things and it empowers a lot of new patterns and practices and technologies and the industry needs time and experience to sort out what works and what doesn't work.

To the extent I think this could be simplified, I think it will mostly be shoring up conventions, building "distributions" that come with the right things and encourage the right practices. I think in time when we have to worry less about packaging legacy monolith applications, we might be able to move away from containers and toward something more like unikernels (you don't need to ship a whole userland with every application now that we're starting to write applications that don't assume they're deployed onto a particular Linux distribution). But for now Kubernetes is the bridge between old school monoliths (and importantly, the culture, practices, and org model for building and operating these monoliths) and the new devops / microservices / etc world.

dekhn4 years ago

I have borg experience and my experience with k8s was extremely negative. Most of my time was spent diagnosing self-inflicted problems caused by the k8s framework.

I've been trying nomad lately and it's a bit more direct.

jedberg4 years ago

I think that's because Borg comes with a team of engineers who keep it running and make it easy.

I've had a similar experience with Cassandra. Using Cassandra at Netflix was a joy because it always just worked. But there was also a team of engineers who made sure that was the case. Running it elsewhere was always fraught with peril.

dekhn4 years ago

Yes, several of the big benefits are: the people who run Borg (and the ecosystem) run it well (for the most part), and the ability to find them in chat and get them to fix things for you (or explain some sharp edge).

jrockway4 years ago

I have borg experience and I think Kubernetes is great. Before borg, I would basically never touch production -- I would let someone else handle all that because it was always a pain. When I left Google, I had to start releasing software (because every other developer is also in that "let someone else handle it" mindset), and Kubernetes removed a lot of the pain. Write a manifest. Change the version. Apply. Your new shit is running. If it crashes, traffic is still directed to the working replicas. Everyone on my team can release their code to any environment with a single click. Nobody has ever ssh'd to production. It just works.

I do understand people's complaints, however.

Setting up "the rest" of the system involves making a lot of decisions. Observability requires application support, and you have to set up the infrastructure yourself. People generally aren't willing to do that, and so are upset when their favorite application doesn't work their favorite observability stack. (I remember being upset that my traces didn't propagate from Envoy to Grafana, because Envoy uses the Zipkin propagation protocol and Grafana uses Jaeger. However, Grafana is open source and I just added that feature. Took about 15 minutes and they released it a few days later, so... the option is available to people that demand perfection.)

Auth is another issue that has been punted on. Maybe your cloud provider has something. Maybe you bought something. Maybe the app you want to run supports OIDC. To me, the dream of the container world is that applications don't have to focus on these things -- there is just persistent authentication intrinsic to the environment, and your app can collect signals and make a decision if absolutely necessary. But that's not the way it worked out -- BeyondCorp style authentication proxies lost to OIDC. So if you write an application, your team will be spending the first month wiring that in, and the second month documenting all the quirks with Okta, Auth0, Google, Github, Gitlab, Bitbucket, and whatever other OIDC upstreams exist. Big disaster. (I wrote https://github.com/jrockway/jsso2 and so this isn't a problem for me personally. I can run any service I want in my Kubernetes cluster, and authenticate to it with my FaceID on my phone, or a touch of my Yubikey on my desktop. Applications that want my identity can read the signed header with extra information and verify it against a public key. But, self-hosting auth is not a moneymaking business, so OIDC is here to stay, wasting thousands of hours of software engineering time a day.)

Ingress is the worst of Kubernetes' APIs. My customers run into Ingress problems every day, because we use gRPC and keeping HTTP/2 streams intact from client to backend is not something it handles well. I have completely written it off -- it is underspecified to the point of causing harm, and I'm shocked when I hear about people using it in production. I just use Envoy and have an xDS layer to integrate with Kubernetes, and it does exactly what it should do, and no more. (I would like some DNS IaC though.)

Many things associated with Kubernetes are imperfect, like Gitops. A lot of people have trouble with the stack that pushes software to production, and there should be some sort of standard here. (I use ShipIt, a Go program to edit manifests https://github.com/pachyderm/version-bump, and ArgoCD, and am very happy. But it was real engineering work to set that up, and releasing new versions of in-house code is a big problem that there should be a simple solution to.)

Most of these things are not problems brought about by Kubernetes, of course. If you just have a Linux box, you still have to configure auth and observability. But also, your website goes down when the power supply in the computer dies. So I think Kubernetes is an improvement.

The thing that will kill Kubernetes, though, is Helm. I'm out of time to write this comment but I promise a thorough analysis and rant in the future ;)

randomswede4 years ago

Helm's biggest problem is...

Let me rephrase that. ONE of Helm's biggest problems is that it uses text-based templating, instead of some sort of templating system that understands the thing it's actually trying to template.

This makes some things much, MUCH harder than they need to be.

It makes it really hard to have your configuration bridge things like "you have this much RAM" or "this is the CPU you have" to flags or environment variables that your code can understand.

It also makes it hard to compose configuration.

As much as I don't like BCL, it is depressingly good at being a job configuration language for "run things in the cloud".

jrockway4 years ago

I think you actually touch on three good points here. One is that "foo: {{ var }}" is not a hygienic template. If var is equal to "bar\nbaz: quux", you've injected hard-to-debug additional keys into the output. The next is that there are common pieces that Kubernetes defines, and they are all demoted to map[string]interface{}. For example, a lot of charts have "resources" attached to applications, and those are (in Go land) v1.ResourceRequirements. But it could be anything in Helm, it's just a JSON object. So Helm itself can't say "you typed 1000M cpu, but probably meant 1000m cpu". And finally, each chart has total latitude to name anything whatever it wants. One chart could say "myapp: { cpu: 42 }" and another configures that as "yourapp: { resources: { requests: { cpu: 42 } } }". You get to learn Kubernetes all over again for each app. With zero documentation, usually, except a values.yaml to cut-n-paste from. (My success rate is low. Every Helm app I've installed has required me to read the source code to get it to do what I want. But, other people have better luck, to be fair.)
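A tiny sketch of that first (hygiene) point, with made-up chart and value names:

    # values.yaml
    appLabel: |-
      bar
      baz: quux

    # templates/deployment.yaml (fragment)
    metadata:
      labels:
        foo: {{ .Values.appLabel }}   # plain text substitution: the second line lands at column 0,
                                      # injecting a stray "baz: quux" key or breaking the YAML parse

Charts usually paper over this with helpers like quote, toYaml, and nindent, but the template engine itself never knows it is emitting YAML, which is the underlying problem.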

On top of all that, the value that Helm delivers to people is "you don't have to read the documentation for Deployment to make a Deployment". But then you have to debug that, and you have another layer of complexity bundled on top of your already weak understanding of the core.

Like I get that Kubernetes asks you a lot of questions just to run a container. But they are all good questions, and the answers are important. Just answer the questions and be happy. (Yes, you need to know approximately how much memory your application uses. You needed to know that in the old pet computer era too -- you had to pick some amount to buy at the memory store. Now it's just a field in a YAML file, but the answer is just as critical. A helm chart can set guesses, and if that makes you feel better, maybe that's the value it delivers. But one day, you'll find the guess is wrong, and realize you didn't save any time.)

randomswede4 years ago

And crucially, once you have given a resource limit, there's no way to (trivially) feed that back into an environment variable or flag to signal that to the app runtime (which, IIRC, is Really Handy for Java-based apps and can seriously improve the performance of Go-based ones).
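For what it's worth, Kubernetes itself can surface the limit through the Downward API; the friction is in threading that through a text-templated chart. A minimal sketch, with an arbitrary container name and image, using a Go-style variable:

    # Pod spec fragment: expose the CPU limit to the runtime as an env var
    containers:
      - name: app
        image: example/app:1.0.0       # hypothetical image
        resources:
          limits:
            cpu: "2"
            memory: 512Mi
        env:
          - name: GOMAXPROCS           # the Go runtime reads this directly
            valueFrom:
              resourceFieldRef:
                containerName: app
                resource: limits.cpu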

dekhn4 years ago

Twice today I had to explain to coworkers that "auth is one of the hardest problems in computer science".

For gRPC and HTTP/2: you're doing end to end gRPC (IE, the TCP connection goes from a user's browser all the way to your backend, without being terminated or proxied)?

jrockway4 years ago

I don't think I have raw HTTP/2 streams from user to service anywhere. My preference is to have Envoy in the middle doing routing/statistics, and so the TCP session is not preserved from frontend to backend. Each request/response could be handled by a different backend instance. (I don't think Envoy strictly requires this, however; upgrade/websockets work somehow. But maybe only on HTTP/1.1.) This is generally what people want their load balancer to do; a common complaint is that gRPC opens long-lived streams (channels, actually, using their term), and so one client can overload one backend, when the other 100 replicas could happily handle their request/replies. (gRPC's mechanism for state between requests and replies is server stream/client stream/bidirectional stream, which is different than channels. The individual messages in streams can't be split between backends, and so the load balancer won't interfere with that.)

At work we have a service that communicates to clients over gRPC (the CLI app is a gRPC client). We typically deploy that as two ports on the load balancer, one for gRPC and the other for HTTPS. Again, the TCP connection isn't actually preserved while transiting the load balancer, but it's logically a L4 operation -- one client channel is one server channel. If the backend becomes unhealthy, you'll have to open a new channel to the load balancer to get a different backend. (This doesn't really come up for us, because people mostly run a single replica of the service.)

parasubvert4 years ago

There are some attempts to gradually find alternatives to Helm while remaining compatible with it. See https://carvel.dev/ for example.

There is a lot of innovation possible in this space.

gbrindisi4 years ago

> The thing that will kill Kubernetes, though, is Helm. I'm out of time to write this comment but I promise a thorough analysis and rant in the future ;)

Too much of a cliffhanger! Now I want to know your POV :)

throwdbaaway4 years ago

Ever since Microsoft acquired the company behind Helm and https://news.ycombinator.com/item?id=11922299 (try clicking the article link), it has been used as a showcase when onboarding Azure customers, to somehow prove that "yeah, Azure is hip and we love open source".

So, yes, we need to know.

akvadrako4 years ago

I don't know why anyone uses Helm. I've done a fair amount of stuff with k8s and never saw the need. The built-in kustomize is simple and flexible enough.
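For anyone who hasn't seen it, a minimal sketch of what that looks like; the file list and image name are hypothetical:

    # kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - deployment.yaml
      - service.yaml
    images:
      - name: example/web      # rewrite the image tag without any templating
        newTag: "1.0.1"

"kubectl apply -k ." builds and applies it, and per-environment overlays can patch the same base without a template language in sight.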

Filligree4 years ago

Ditto.

Granted, I have to assume that borg-sre, etc. etc. are doing a lot of the necessary basic work for us, but as far as the final experience goes?

95% of cases could be better solved by a traditional approach. NixOps maybe.

fungiblecog4 years ago

If Antoine de Saint-Exupery was right that: "Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away." then IT as an industry is heading further and further away from perfection at an exponentially accelerating rate.

The only example I can think of where a modern community is actively seeking to simplify things is Clojure. Rich Hickey is very clear on the problem of building more and more complicated stuff and is actively trying to create software by composing genuinely simpler parts.

TameAntelope4 years ago

I'd argue that achieving perfection is not a linear process. Sometimes you have to add way too many things before you can remove all of the useless things.

Nobody is puppeteering some grand master plan, we're on a journey of discovery. When we're honest with ourselves, we realize nobody knows what will stick and what won't.

benjaminjosephw4 years ago

Absolutely, but dogma and "best-practices" anchor design discussions around today's norms. People get very defensive about tools they've invested in and that kind of dogma stunts imagination for different and better solutions.

Discovery is very rarely an accidental process so we can't take for granted that it will be inevitable.

I think it's important to recognize that most people are not interested in discovery at all. Practitioners are often not explorers, and that's okay. They may find incremental improvements through their practice, but paradigm shifting innovation comes from those willing to swim against the stream of popular opinion.

Discovery has to be an intentional pursuit of those brave enough to imagine a future beyond Multics/Kubernetes/etc despite the torrent of opinionated naysayers telling them they are foolish for even trying.

TameAntelope4 years ago

I guess I completely disagree. Discovery is nothing except a series of accidents and happenstance.

Nobody gets anything difficult right on the first try, and there’s an arrogance in thinking we could.

hnbad4 years ago

If you understand the quote to mean that the process of achieving perfection can only consist of removing things rather than adding them, how do you know whether you've really achieved perfection or just reached a local optimum?

harperlee4 years ago

Jonathan Blow has also been vocal on that regard.

dannyrosen4 years ago

Consider looking into Fuchsia's component framework for thoughts on what a distributed application looks like inside an operating system. https://fuchsia.dev/fuchsia-src/concepts/components/v2/intro...

aneutron4 years ago

Okay, right off the bat, the author is already giving himself answers:

> Essentially, this means that it [k8s] will have fewer concepts and be more compositional.

Well, that's already the case! At its base, k8s is literally a while loop that converges resources to wanted states.

You CAN strip it down to your liking. However, as it is usually distributed, it would be useless to distribute it with nothing but the scheduler and the API ...

I do get the author's point. At a certain point it becomes bloated. But I find that when used correctly, it is adequately complex for the problems it solves.

whalesalad4 years ago

After reading the title I worried this was going to be yet another k8s bashing post. Pleasantly surprised to see this take because it’s a refreshing look at kube and I strongly agree. I think it’s the absolute best way to deploy large systems today, especially if you’re a polyglot organization. But it can be tough to grok without lots of labbing and experimentation - it’s hard to approach.

We are really at the infancy of containerization. Kube is a springboard for doing the next big thing.

tyingq4 years ago

It looks to be getting more complex too. I understand the sales pitch for a service mesh like Istio, but now we're layering something fairly complicated on top of K8s. Similar for other aspects like bolt-on secrets managers, logging, deployment, etc., run through even more abstractions.

joe_the_user4 years ago

Whatever Kubernetes' flaws, the analogy is clearly wrong. Multics was never a success and never had wide deployment, so Unix never had to compete with it. Once an OS is widely deployed, efforts to get rid of it have a different dynamic (see the history of desktop computing, etc.). In particular, getting rid of any deployed, working system (OS, application, language, chip instruction set, etc.) in the name of simplicity is inherently difficult. Everyone agrees things should be stripped down to a bare minimum, but no one agrees on what that bare minimum is.

loudlambda4 years ago

Agreed; I think a better analogy for Kubernetes is XML. So many wasted meetings about where to split up namespaces and whether every last thing should be an attribute or a subtag; none of that added business value. JSON took all those decisions off the table. And yes, huge industrial users validly complained that JSON didn't cover X or Y or Z, but for most users JSON is a much better solution than XML.

Kubernetes reminds me a lot of XML; there are too many decision points adding unnecessary complexity for the average user's needs. Too many foot guns. Too many unintuitive things.

People keep on describing it as "declarative", which seems to be about as true as saying that Java is a functional language. Hopefully someday we'll have something actually declarative, and much more intuitive, something more like AWS's CDK.

parasubvert4 years ago

How is Kubernetes not declarative?

I don’t disagree about the exposed complexity, that’s a fundamental decision Kubernetes made about openness and extensibility. Everything is on a level playing field, there are no private APIs.

loudlambda4 years ago

As I recall, running "kubectl edit deployment..." doesn't do anything except edit the definition of the config. Instead, to have it take effect you seem to have to manually kill pods, and the new pods will come up with the edited config. If it were declarative, it should detect what needs to be changed and automatically update accordingly. Same thing with editing a config. It's possible it was the funnel my local DevOps forced on me (and lacking needed permissions at every turn), but my experience was that if you removed deployments, configs, etc. on the next deployment, nothing would be cleaned up and you had to remove it manually. Again, that's not declarative.

In my experience Terraform and CDK are much more declarative; where you never issue commands to delete a pod or a load balancer or similar. Instead you describe what you want, and their engine figures out what it needs to add or remove or change to get to that state.

rektide4 years ago

It's called "Images and Feelings", but I quite dislike using a the Cloud Native Computing Foundation's quite busy map of services/offerings as evidence against Kubernetes. That lots of people have adopted this, and built different tools & systems around it & to help it is not a downside.

I really enjoy the Oil blog, & was really looking forward, when I clicked the link, to some good real criticism. But it feels to me like most of the criticism I see: highly emotional, really averse/afraid/reactionary. It wants something easier and simpler, which is so common.

I cannot emphasize enough, just do it anyways. There's a lot of arguments from both sides about trying to assess what level of complexity you need, about trying to right size what you roll with. This outlook of fear & doubt & skepticism I think does a huge disservice. A can do, jump in, eager attitude, at many levels of scale, is a huge boon, and it will build skills & familiarity you will almost certainly be able to continue to use & enjoy for a long time. Trying to do less is harder, much harder, than doing the right/good/better job: you will endlessly hunt for solutions, for better ways, and there will be fields of possibilities you must select from, must build & assemble yourself. Be thankful.

Be thankful you have something integrative, be thankful you have common cloud software you can enjoy that is cross-vendor, be thankful there are so many different concerns that are managed under this tent.

The build/deploy pipeline is still a bit rough, and you'll have to pick/build it out. Kubernetes manifests are a bit big in size, true, but it's really not a problem; they really are there for basically good purpose & some refactoring wouldn't really change what they are. There are some things that could be better. But getting started is surprisingly easy, surprisingly not heavy. There's a weird emotional war going on, and it's easy to be convinced to be scared, to join in with reactionary behaviors, but I really have seen nothing nearly so well composed, nothing that fits together so many different pieces well, and Kubernetes makes it fantastically easy imo to throw up a couple of containers & have them just run, behind a load balancer, talking to a database, which covers a huge amount of our use cases.

1vuio0pswjnm74 years ago

I like this title so much I am finally going to give this shell a try. One thing I notice right away is readline. Could editline also be an option? (There are two "editlines", the NetBSD one and an older one at https://github.com/troglobit/editline) The next thing I notice is the use of ANSI codes by default. Could that be a compile-time option, or do we have to edit the source to remove it?

TBH I think the graphical web browser is the current generation's Multics. Something that is overly complex, corporatised, and capable of being replaced by something simpler.

I am not steeped in Kubernetes or its reason for being but it sounds like it is filling a void of shell know-how amongst its audience. Or perhaps it is addressing a common dislike of the shell by some group of developers. I am not a developer and I love the shell.

It is one thing that generally does not change much from year to year. I can safely create things with it (same way people have made build systems with it) that last forever. These things just keep running from one decade to the next no matter what the current "trends" are. Usually smaller and faster, too.

parasubvert4 years ago

Kubernetes is designed similarly to the shell: the APIs are a uniform interface, designed for stabilization, while resources are composable and extensible through it.

If you use the stable APIs, your code will run for decades. My hypothetical deployment from 2016 will not need touching (beyond image updates for CVEs) to keep running in 2026 or 2036.

ablekh4 years ago

I think that all this boils down to a rather simple dilemma for modern cloud-native infrastructure platforms [in terms of developer experience, i.e., external APIs etc., not internal architecture; and this is not even limited to this class of systems - it is a general concept for all software systems]: a) universal, highly configurable & complex (K8s), OR b) highly opinionated and [relatively] simple (e.g., Nomad/Waypoint, Heroku, Apollo, CapRover, Dokku, Porter, AWS Elastic Beanstalk, Digital Ocean's App Platform, Fly, Render). Obviously, there exists a middle-ground category as well: relatively simple, but still opinionated and moderately (???) or highly (e.g., OpenShift) configurable platforms. Thus, the optimal choice depends on the relevant team's or organization's priorities with respect to those attributes (configurability, complexity, level & scope of opinionation) as well as the level of organizational standardization for IT environments, economic factors, vendor lock-in considerations and, perhaps, something else that I forgot to mention.

musicale4 years ago

No, Multics was easier to understand, easier to manage, and more reliable.

However Multics didn't offer automatic/elastic cloud scaling, which seems to be the main selling point of modern, usually very complicated, container orchestration systems, nor was it designed for building distributed systems.

However, if modern Linux had a Multics-style ring architecture, it could replace many of the uses for virtualization and containers.

soobrosa4 years ago

Add the two cents of http://adamierymenko.com/ports.html

"Since we chose the path of virtualization and containerization we've allowed the multi-tenancy facilities in Unix to atrophy and it would take a little bit of work to bring them back into form."

PaulHoule4 years ago

I wish that were so.

Multics made a big splash in the literature, but in terms of use it was an obscure OS on an obscure mainframe. It had nothing on TOPS-20 or VM/CMS.

Unfortunately many of us are suffering with Kube.

wwwaldo4 years ago

Hi Andy: if you see this, I'm the other 4d polygon renderer! I read the kubernetes whitepaper after RC and ended up spending a lot of the last year on it. Maybe if I had asked you about working with Borg I could have saved myself some trouble. Glad to see you're still very active!

chubot4 years ago

Hi :) Yeah I think it's an interesting topic, and I'm not saying anyone should necessarily be doing something different. But if it "feels wrong", then that's not too surprising to me :) I'd be interested in hearing about any k8s experiences.

filereaper4 years ago

Sure, you don't have to use k8s. You can roll your own solutions to what it solves.

Your own custom-built solution will work, but what about in 5 years? 10 years? When it all becomes legacy, what then?

Will you find the talent who'll want to fix your esoteric environment, just like those COBOL devs?

Will anyone respond to your job posts to fix your snowflake environment? Will you pay above-average wages to fix your snowflake ways of solving problems that k8s standardized?

I bet your C-level is thinking this. What's to say they won't rip out all of your awesomeness and replace it with standard k8s down the line, as it comes to dominate the market share?

When you're laid off in the next recession, is your amazing problem-solving on your snowflake environment going to help you when everyone else is fully well versed with k8s?

chrismsimpson4 years ago

Whoa dude, ease up on the Kool-Aid

pm904 years ago

Personally, I think this is an extremely mild version of the dire situation that most teams working with legacy systems often find themselves in.

overgard4 years ago

Is it really that complex compared to an operating system like Unix though? I mean there's nothing simple about Unix. To me the question is, is it solving a problem that people have in a reasonably simple way? And it seems like it definitely does. I think the hate comes from people using it where it's not appropriate, but then, just don't use it in the wrong place, like anything of this nature.

And honestly its complexity is way overblown. There are like 10 important concepts, and most of what you do is run "kubectl apply -f somefile.yaml". I mean, services are DNS entries, deployments are a collection of pods, pods are a self-contained server. None of these things are hard?
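For example, the "services are DNS entries" part really is about this much YAML; the names here are hypothetical:

    apiVersion: v1
    kind: Service
    metadata:
      name: web                # becomes the in-cluster DNS name "web" within its namespace
    spec:
      selector:
        app: web               # routes to pods carrying this label
      ports:
        - port: 80
          targetPort: 8080

Apply it with "kubectl apply -f service.yaml" and other pods in the namespace can reach the backing pods at http://web.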

cjalmeida4 years ago

What’s complex about *nix? All you need to understand are device files, POSIX permissions and ACLs, cgroups, tcp/udp sockets, nginx/haproxy, thread/process scheduling, (virtual) memory, PAM, dbus, syslog, pipes, unix sockets, 30 filesystem options, nfs, userspace vs. kernel space, sysvinit or 10 flavors of systemd files, iptables/ufw, networkmanager, ssh, selinux, chroot, flatpack, snaps, rpm, deb, ansible/chef/puppet.

Oh deploying on the cloud? Cloudformation/AzureRM as well.

Pretty easy. No damn complex k8s needed.

nikau4 years ago

The irony in your comment is that tools like NetworkManager, snaps, and systemd are Kubernetes-like and severely disliked by experienced Unix admins due to their needless complexity and poor usability.

pjmlp4 years ago

Well, given that Multics was much more secure than UNIX ever was, and written in a proper systems programming language that everyone (except UNIX folks) is trying to get back to, it probably isn't that bad after all.

bruce3434344 years ago

> proper systems programming language that everyone (except UNIX folks) is trying to get back to

Wikipedia: Written in PL/I, Assembly language

????

pjmlp4 years ago

I advise you to learn about the safety capabilities regarding strings, arrays, pointer manipulation and references, numerics and enumerations in PL/I versus C.

Additionally, you can go over to Multicians and read the security assessment reports of Multics vs. UNIX done by the DoD back in the day.

wizwit9994 years ago

I agree a lot with his premise, that Kubernetes is too complex, but not at all with his alternative to go even lower level.

And the alternative of doing everything yourself isn't too much better either; you need to learn all sorts of cloud concepts.

The better alternative is a higher level abstraction that takes care of all of this for you, so an average engineer building an API does not need to worry about all these low level details, kind of like how serverless completely removed the need to deal with instances (I'm building this).

mountainriver4 years ago

That sounds like knative

wizwit9994 years ago

I haven't heard of that. Took a look and it still seems too low-level. I think we need to think much bigger in this space. Btw, we're not approaching this from a Kubernetes angle at all.

waynesonfire4 years ago

my problem with k8s is that you learn OS concepts, and then k8s / docker shits all over them.

chubot4 years ago

Yes, this is a core part of the design issue and argument I'm making.

The new concepts are leaky abstractions -- they wrap the old ones badly. You still have to understand both to understand the system. Networking in k8s seems to really suffer from this.

And the new concepts and old concepts don't compose. They create combinatorial problems, i.e. O(M*N) amounts of glue code.

nikau4 years ago

It's a double whammy, you get the complexity of Kubernetes, and then you get to exec into a docker image that has been stripped of any useful debugging tools under the guise of security.

It's even better when it's a busybox-based image, for that Linksys router / '80s Unix troubleshooting experience.

wskish4 years ago

K8s abstracts away much more complexity than it exposes, which is the hallmark of a great API. History will surely view it amongst the greatest APIs of all time.

asciimov4 years ago

Anyone want to fill me in on what this "Perlis-Thompson Principle" is?

chubot4 years ago

I still have to explain it properly, but there is a pretty good sketch on a recent blog post, linked from this comment. (You will probably end up chasing a lot of comment threads, but it's mostly there.)

https://news.ycombinator.com/item?id=27914632

It's an argument about avoiding O(M*N) glue code. O(M*N) amounts of code are expensive to write, and contain O(M*N) numbers of bugs.

JohnL44 years ago

I had to Google it and scroll a blog post.

rossjudson4 years ago

This whole article is, well, a little silly. It says that Kubernetes will disappear and be replaced by something simpler, because it's very difficult to create reliable systems that use it.

But...there are tons of reliable systems at Google, all using Borg, and that has a lot of features Kubernetes doesn't have.

Stripping down Kubernetes doesn't reduce complexity. It just shifts it.

chubot4 years ago

I don't agree. I worked at Google for over 10 years, during the time when SREs started to make as much or more money than SWEs. There's a reason for that.

I also disagree that the systems are reliable. From the outside, most of the stateless services are fast and reliable; the stateful ones less so. From the inside, no: internal services were unreliable and slow. (This could have changed in the last 5 years, but there was a clear trend in one direction in my time there.) There were many more internal services on Borg than external ones.

MichaelMoser1234 years ago

I thought that Kubernetes is our generation's JCL (job control language on IBM mainframes); there is a remote similarity in how we write descriptors for tasks, then submit them for execution and wait until the mainframe has considered our specification. (Suddenly feeling old because of this comparison...)

pram4 years ago

Yup lol, I've had this same thought. It's like neo-Tuxedo which is basically a mainframe TPS for UNIX

https://en.wikipedia.org/wiki/Tuxedo_(software)

MichaelMoser1234 years ago

It's funny when you think of it: most of this distributed-system magic was already there on the old mainframe, in some form. And it was there for ages...

powera4 years ago

Eh. Kubernetes is complex, but I think a lot of that is that computing is complex, and Kubernetes doesn't hide it.

Your UNIX system runs many daemons you don't have to care about. Whereas something like lockserver configuration is still a thing you have to care about if you're running Kubernetes.

brandonbloom4 years ago

Related: https://www.youtube.com/watch?v=3Ea3pkTCYx4

Key insight can be summarized as "code the perimeter"

chubot4 years ago

(author here) Yes exactly! This is what I'm calling the Perlis-Thompson principle, although it still needs to be fully formed and explained. There are obvious objections to it (which I have some answers to).

Sketch of the argument here, with links: http://www.oilshell.org/blog/2021/07/blog-backlog-1.html#con...

Here's my comment which links the "Unix vs. Google" video (and I very much agree based on my first hand experience with Google's incoherent architecture, which executives started to pay attention to in various shake-ups.)

https://lobste.rs/s/euswuc/glue_dark_matter_software#c_sppff...

It links to my comment about the closely related "narrow waist" idea in networks and operating systems. That is a closely related concept regarding scaling your "codebase" and interoperability.

I have been looking up the history of this idea. I found a paper co-authored by Eric Brewer which credits it to Kleinrock:

http://bnrg.eecs.berkeley.edu/~randy/Papers/InternetServices... (was this ever published? I can't find a date or citations)

But I'm not done with all the research. I'm not sure if it's worth it to write all this, but I think it's interesting I will learn something by explaining it clearly and going through all the objections.

I'm definitely interested in the input of others. I have about 10 different resources where people are getting at this same scaling idea, but I can use more arguments / examples / viewpoints.

rightisleft4 years ago

Going to post a lovely update for Docker Swarm here - Swarm simplifies/reduces the possibility space compared to K8s, but I consider that a feature, not a drawback. With Mirantis actively hiring and extending support for SwarmKit, it should be considered a viable 'batteries included' alternative to K8s:

https://github.com/docker/roadmap/issues/175#issuecomment-82...

exdsq4 years ago

Two amazing quotes that really resonate with me:

> The industry is full of engineers who are experts in weirdly named "technologies" (which are really just products and libraries) but have no idea how the actual technologies (e.g. TCP/IP, file systems, memory hierarchy etc.) work. I don't know what to think when I meet engineers who know how to setup an ELB on AWS but don't quite understand what a socket is...

> Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and reimplementing and fixing bugs that crop up only on Windows XP.

rantwasp4 years ago

this is bound to happen. the more complicated the stack that you use becomes, the fewer details you understand about the lower levels.

who, today, can write or optimize assembly by hand? How about understand the OS internals? How about write a compiler? How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process?

All of these were table stakes at some point in time. The key is not to understand all layers perfectly. The key is to know when to stop adding layers.

exdsq4 years ago

Totally get your point! But I worry the industry is becoming bloated with people who can glue a few frameworks together building systems we depend on. I wish there was more of a focus on teaching and/or learning fundamentals than frameworks.

Regarding your points, I actually would expect a non-junior developer to be able to write a library in their main language and understand the basics of OS internals (to the point of debugging and profiling, which would include troubleshooting *nix processes). I don't expect them to know assembly or C, or be able to write a compiler (although I did get this as a take-home test just last week).

fruzz4 years ago

I think learning the fundamentals is a worthy pursuit, but in terms of getting stuff done well, you realistically only have to grok one level below whatever level of abstraction you're operating at.

Being able to glue frameworks together to build systems is actually not a negative. If you're a startup, you want people to leverage what's already available.

bsd444 years ago

I agree. An ideal is far from reality.

I like to get deep into low level stuff, but my employer doesn't care if I understand how a system call works or whether we can save x % of y by spending z time on performance profiling that requires good knowledge of Linux debugging and profiling tools. It's quicker, cheaper and more efficient to buy more hardware or scale up in public cloud and let me use my time to work on another project that will result in shipping a product or a service quicker and have direct impact on the business.

My experience with the (startup) business world is that you need to be first to ship a feature or you lose. If you want to do something then you should use the tools that will allow you to get there as fast as possible. And to achieve that it makes sense to use technologies that other companies utilise because it's easy to find support online and easy to find qualified people that can get the job done quickly.

It's a dog-eat-dog world and startups in particular have the pressure to deliver and deliver fast since they can't burn investor money indefinitely; so they pay a lot more than large and established businesses to attract talent. Those companies that develop bespoke solutions and build upon them have a hard time attracting talent because people are afraid they won't be able to change jobs easily and these companies are not willing to pay as much money.

Whether you know how a boot process works or how to optimise your ELK stack to squeeze out every single atom of resource is irrelevant. What's required is to know the tools to complete a job quickly. That creates a divide in the tech world where on one side you have high-salaried people who know how to use these tools but don't really understand what goes on in the background and people who know the nitty-gritty and get paid half as much working at some XYZ company that's been trading since the 90s and is still the same size.

My point is that understanding how something works underneath is extremely valuable and rewarding but isn't required to be good at something else. Nobody knows how Android works but that doesn't stop you from creating an app that you will generate revenue and earn you a living. Isn't the point of constant development of automation tools to make our jobs easier?

EDIT: typo

closeparen4 years ago

That's exactly what was covered in the Systems track of my CS undergrad. I'm always confused when people dismiss their own degrees as irrelevant or primarily mathematical: we were coding and debugging toy schedulers, virtual memory managers, file systems, TCP stacks, IRC and mail servers, locking primitives, etc. in C.

rantwasp4 years ago

I really like the way you've put it: "glue a few X together".

This is what most software development is becoming. We are no longer building software; we are gluing/integrating prebuilt software components or using services.

You no longer solve fundamental problems unless you have a very special use case or are doing it for fun. You mostly have to figure out how to solve higher-level problems using off-the-shelf components. It's both good and bad if you ask me (it depends on whether you see the glass as half full or half empty).

hnjst4 years ago

I also would have loved discovering electricity or information theory. Somehow it's convenient that people standing on each other's shoulders across a few generations made processors out of that, but it sadly puts the bar pretty high for going further nowadays.

Thankfully I can use these cool processors to build the next CandyCrush and shine in our modern and innovative society.

pm904 years ago

I can't show numbers for this, but it seems likely that the absolute number of jobs where people do "build software" has increased with time; it's just that the number of "gluing frameworks" jobs has increased by a lot more, so you're probably just looking in the wrong category. It seems difficult to believe that there aren't thousands of network engineers keeping the internet backbone humming along.

rhacker4 years ago

It's like building a house. Should I have the HVAC guy do the drywall and the drywall guy do the HVAC? Clearly software engineering isn't the same as building a house, but if you have an expert in JAX-WS/SOAP and a feature need to connect to some legacy soap healthcare system... have him do that, and let the guy that knows how to write an MPI write the MPI.

matwood4 years ago

This isn't a bad analogy. Like modern houses, software has gotten larger, more specific, and more complex over the last 30-some-odd years.

Some argue it's unnecessary complexity, but I don't think that's correct. Even individuals want more than a basic GeoCities website. Businesses want uptime, security, flashiness, etc. in order to stand out.

darkwater4 years ago

I've (unfortunately) been through a few house refurbishments by now, and the good workers are the ones who also know a bit about the other domains of the job. The HVAC guy will know about wiring, and the good drywall guy will know a bit of the other trades as well. They don't necessarily have to, but the good ones will.

908B64B1974 years ago

> How about understand the OS internals? How about write a compiler? How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process?

That's what I expect from someone who graduated from a serious CS/Engineering program.

rantwasp4 years ago

you're conflating having an idea of how the OS works (i.e. conceptual/high level) with having working knowledge and being able to hack on the OS when needed. I know this may sound like moving the goalposts, but it really does not help me that I know conceptually that there is a file system if I don't work with it directly and/or know how to debug issues that arise from it.

ex_amazon_sde4 years ago

> who ... understand the OS internals? ... How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process?

Ex-Amazon here. You are describing standard skills required to pass an interview for a SDE 2 in the teams I've been in at Amazon.

Some candidates know all the popular tools and frameworks of the month but do not understand what an OS does, how a CPU works, or how networking works, and they do not get hired because they would struggle to write or debug internal software written from scratch.

[added later] This was many years ago when the bar raiser thing was in full swing and in teams working on critical infrastructure.

rantwasp4 years ago

LoL. Also ex-Amazon here. I can tell you for a fact that most SDE2s I've worked with had zero clue about how the OS works. What you're describing may have been true 5-10 years ago, but I think it is no longer true nowadays (what was that? "raising the bar", they called it). A typical SDE2 interview will not have questions about OS internals in it. Before jumping on your high horse again: I've done around 400 interviews during my tenure there and I don't recall ever failing anyone over this.

Also, gate-keeping is not helpful.

emerongi4 years ago

Yes they do. There is too much software to be written. A person with adequate knowledge of higher abstractions can produce just fine code.

Yes, if there is a nasty issue that needs to be debugged, understanding the lower layers is super helpful, but even without that knowledge you can figure out what's going on if you have general problem-solving abilities. I certainly have figured out a ton of issues in the internals of tools that I don't know much about.

Get off your high horse.

leesec4 years ago

Says one guy. Sorry, there's lots of people who make a living writing software who don't know what an OS does. Gatekeeping helps nobody.

proxyon4 years ago

Current big tech here (not Amazon) and very few people know lower-level things like C, systems, or OS stuff. Skill sets and specializations are different. Your comment is incredibly false. Even on mobile, if someone is, for instance, a JS engineer, they probably don't know Objective-C, Swift, Kotlin, Java, or any native APIs. And the folks who do work on native mobile can't write JavaScript to save their lives and are intimidated by it.

exdsq4 years ago

I agree with you, as opposed to the other ex-amazon comments you've had (I had someone reach out to interview me this week if that counts? ;)).

Playing devil's advocate, I guess it depends on what sort of software you're writing. If you're a JS dev then I can see why you might not care about pointers in C. I know for sure that as a Haskell/C++ dev I run from JS errors like the plague.

However, I do think that people should have a basic understanding of the entire stack from the OS up. How can you be trusted to choose the right tools for a job if you're only aware of a hammer? How can you debug an issue when you only understand how a spanner works?

I think there's a case for an engineering accreditation, distinct from a CS degree, as we become even more dependent on software.

darksaints4 years ago

> Ex-Amazon here.

I don't get it. Are we supposed to value your opinion more because you used to work for Amazon? Guess what...I also used to work for Amazon and I think your gatekeeping says a lot more about your ego than it does about fundamental software development skills.

lanstin4 years ago

But the value isn't equal. If you think of the business value implemented in code as the "picture" and the runtime environment provided as the "frame", the frame has gotten much larger and the picture much smaller, as far as what people are spending their time on. (Well, not for the golang folks who just push out a systemctl script and a static binary, but for the k8s devops experts.) I have read entire blog posts on k8s and so on where the end result is just "hello world." In the old days, that was the end of the first paragraph. Now a lot of YAML and Dockerfiles and so on are needed just to get to that hello world.

Unix was successful initially because it was a good portable abstraction for managing hardware resources (compute, storage, memory, and network) over a variety of actual physical implementations. Many of the problems people are addressing in k8s and in running "a variety of containers efficiently on a set of hosts" are similar to problems Unix solved in the 80s. I'm not really saying we should go back; Docker is certainly a solution to "dependency control and process isolation" when you can't have a good static binary that runs a number of identical processes on a host. But knowledge of what a socket is or how schedulers work is valuable in fixing issues in Docker-based systems. (I'm actually more experienced with Mesos/Docker than k8s/Docker, but the bugs are from containers spawning too many GC threads or whatever.)

If someone is trying to debug that LB and doesn't know what a socket is, or debug latency in apps in the cluster and doesn't know how scheduling and perf-engineering tools work, then it's going to be hard for them, and extremely likely that they will just jam 90% solution around 90% solution, enlarging the frame to do more and more instead of actually fixing things, even if their specific problem was easy to fix and would have had a big payoff.

coryrc4 years ago

Kubernetes is complicated because it carries around Unix with it and then duplicates half the things and bolts some new ones on.

Erlang is[0] what you can get when you try to design a coherent solution to the problem from a usability and first-principles sort of idea.

But some combination of Worse is Better, Path Dependence, and randomness (hg vs git) has led us here.

[0] As far as what I've read about its design philosophy.

EdwardDiego4 years ago

Who is using K8s for Hello World levels of complexity?

Complex problems often have complex solutions, the algorithm we need to run as developers is - what's the net complexity cost of my system if I use this tool?

If the tool isn't removing more complexity than it's adding, you probably shouldn't use it.

chubot4 years ago

(author here) The key difference is that a C compiler is a pretty damn good abstraction (and yes Rust is even better without the undefined behavior).

I have written C and C++ for decades, deployed it in production, and barely ever looked at assembly language.

Kubernetes isn't a good abstraction for what's going on underneath. The blog post linked to direct evidence of that which is too long to recap here; I worked with Borg for years, etc.

rantwasp4 years ago

K8s may have its time and place, but here is something most people are ignoring: 80% of the time you don't need it. You don't need all that complexity. You're not Google; you don't have the scale or the problems Google has. You also don't have the discipline AND the tooling Google has to make something like this work (cough cough, Borg).

randomswede4 years ago

For the things that are 1:1 comparable, the Borg abstraction leaks in pretty much the same places as the Kubernetes abstraction. In slightly different ways. The "kubernetes abstraction" spans a larger space than the Borg abstraction does (note, I count "Chubby" and "GSLB" as "not Borg"), so there are more abstraction leaks as a whole in Kubernetes.

Source, I was a Google SRE for 5 years (Ads, Traffic). I ran the in-house kubernetes clusters at a company for 3 years (so, no, no hosted kubernetes, we stood them up either on pretty naked VMs or bare metal).

tovej4 years ago

Assembly aside, all the things you mention are things I would expect a software engineer to understand. As an engineer in my late twenties myself, these are exactly the things I am focusing on. I'm not saying I have a particularly deep understanding of these subjects, but I can write a recursive descent parser or a scheduler. I value this knowledge quite highly, since it's applicable in many places.

I think learning AWS/kubernetes/docker/pytorch/whatever framework is buzzing is easy if you understand Linux/networking/neural networks/whatever the underlying less-prone-to-change system is.

codeisawesome4 years ago

Is there a networking-for-developers style course that you would recommend?

tovej4 years ago

The one at your local university. Either one named something like "Introduction to Networking" or "Introduction to Distributed Systems", depending on what you want to learn.

You could also read some books. Rami Rosen's "Linux Kernel Networking - Implementation and Theory" is quite detailed.

The "UNIX and Linux System Administration Handbook" (Nemeth et al.) covers a lot superficially and will point you in the right direction to continue studying. It's very practical-minded.

For low-level socket programming, you can probably read "Advanced Programming in the UNIX environment". It might be more detail than you need though.

At the other extreme, if you want to study distributed systems, you could read Steen & Tanenbaum's "Distributed Systems".

shpongled4 years ago

disclaimer: I don't mean this to come across as arrogant or anything (I'm just ignorant).

I'm totally self-taught and have never worked a programming job (only programmed for fun). Do professional SWEs not actually understand or have the capability to do these things? I've hacked on hobby operating systems, written assembly, worked on a toy compiler and written libraries... I just kind of assumed that was all par for the course

mathgladiator4 years ago

The challenge is that lower-level work doesn't always translate into value for businesses. For instance, knowledge of sockets is very interesting. I spent my youth learning sockets, so for me to bang out a new network protocol takes a few weeks. For others, it can take months.

This manifested in my frustration when I led the building of a new transport layer using just sockets. While the people working with me were smart, they had limited low-level experience for debugging things.

shpongled4 years ago

I understand that that stuff is all relatively niche/not necessarily useful in every day life (I know nothing about sockets or TCP/IP) - I just figured your average SWE would at least be familiar with the concepts, especially if they had formal training. Guess it just comes down to individual interests

rantwasp4 years ago

I think you may have missed the point (as probably a lot of people did) that I was trying to make. It's one thing to know what assembly is and even to be able to dabble in a bit of assembly; it's another thing to be proficient in assembly for a specific CPU/instruction set. It's orders of magnitude harder to be proficient and/or actually write tooling for it than to understand what a MOV instruction does or to conceptually get what CPU registers are.

Professional SWEs are professional in the sense that they know what needs to happen to get the job done (but I am not surprised when someone else does not get or know something that I consider "fundamental").

woleium4 years ago

Yes, some intermediate devs I've worked with are unable to do almost anything except write code; e.g., they are unable to generate an SSH key without assistance or detailed cut-and-paste instructions.

handrous4 years ago

Shit, I google or manpage or tealdeer ssh key generation every single time....

Pretty much any command I don't run several times a month, I look up. Unless ctrl+r finds it in my history.

kanzenryu24 years ago

It's extremely common. And many of them are fairly productive until an awkward bug shows up.

swiftcoder4 years ago

> who, today, can write or optimize assembly by hand? How about understand the OS internals? How about write a compiler? How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process? All of these were table stakes at some point in time.

All of these were still table stakes when I graduated from a small CS program in 2011. I'm still a bit horrified to discover they apparently weren't table stakes elsewhere.

ModernMech4 years ago

> who, today, can write or optimize assembly by hand? How about understand the OS internals? How about write a compiler? How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process?

Any one of the undergraduates who take the systems sequence at my University should be able to do all of this. At least the ones who earn an A!

abathur4 years ago

And maybe to learn the smell of a leaking layer?

jimbokun4 years ago

> who, today, can write or optimize assembly by hand? How about understand the OS internals? How about write a compiler? How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process?

But developers should understand what assembly is and what a compiler does. Writing a library for a language you know should be a common development task. How else are you going to reuse a chunk of code needed for multiple projects?

Certainly also need to have a basic understanding of unix processes to be a competent developer, too, I would think.

rantwasp4 years ago

there is a huge difference between understanding what something is and actually working with it / being proficient with it. Huge.

I understand how a car engine works. I could actually explain it to someone who does not know what is under the hood. Does that make me a car mechanic? Hell no. If my car breaks down I go to the dealership and have them fix it for me.

My car/car engine is ASM/OS internals/writing a compiler/etc.

ohgodplsno4 years ago

While I will not pretend to be an expert at any of those, having at least a minimal understanding of all of them is crucial if you want to call yourself a software engineer. If you can't write a library, or figure out why your process isn't working, you're not an engineer; you're a plumber, or a code monkey. Not to say that's bad, but considering the sheer number of mediocre devs at FAANG calling themselves engineers, it really shines a terrible light on our profession.

puffyvolvo4 years ago

Abstraction layers exist for this reason. As much of a sham as the 7-layer networking model is, it's the reason you can spin up an HTTP server without knowing TCP internals, and you can write a web app without caring (much) about whether it's being served over HTTPS, HTTP/2, or SPDY.
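For instance, a minimal HTTP server involves no socket code at all. A sketch in Go (the port and handler text are arbitrary choices for illustration, not anything prescribed):

    // A minimal HTTP server: net/http owns the TCP listener, the accept
    // loop, and the protocol parsing underneath. Port 8080 is arbitrary.
    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello from behind several layers of abstraction")
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }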

lanstin4 years ago

I would make a big distinction between "without knowing" and "without worrying about." Software productivity is directly proportional to the amount of the system you can ignore while you are writing the code at hand. But not knowing how stuff works makes you less of an engineer and more of an artist. Cause and effect and reason are key tools, and not knowing about the TCP handshake or windowing just makes it difficult to figure out how to answer fundamental questions about how your code works. It means things will be forever mysterious to you, or interesting in the sense of biology, where you gather a lot of data, rather than mathematics, where pure thought can give you immense power.

jimbokun4 years ago

To be an engineer, you need the ability to dive deeper into these abstractions when necessary, while most of the time you can just not think about them.

Quickly getting up to speed on something you don't know yet is probably the single most critical skill to be a good engineer.

dweekly4 years ago

All true. The problems start getting gnarly when Something goes Wrong in the magic black box powering your service. That neat framework that made it trivial to spin up an HTTP/2 endpoint is emitting headers that your CDN doesn't like and now suddenly you're 14 stack layers deep in a new codebase written in a language that may not be your forte...

rantwasp4 years ago

you know. deep down inside: we are all code monkeys. Also, as much as people like to call it software engineering, it's anything but engineering.

In 95% of cases if you want to get something/anything done you will need to work at an abstraction layer where a lot of things have been decided already for you and you are just gluing them together. It's not good or bad. It is what it is.

codethief4 years ago

This reminds me of Jonathan Blow's excellent talk on "Preventing the Collapse of Civilization":

https://www.youtube.com/watch?v=ZSRHeXYDLko

xorcist4 years ago

I honestly can't tell if this is sarcasm or not.

Which says a lot about the situation we find ourselves in, I guess.

rantwasp4 years ago

It's not sarcasm. A lot of things simply do not have visibility and are not rewarded at the business level - therefore the incentives to learn them are almost zero

alchemism4 years ago

Likewise I don’t know what to think when I meet frequent flyers who don’t know how a jet turbine functions! :)

It is a process of commodification.

throwawayboise4 years ago

The people flying the airplane do understand it though. At least they are supposed to. Some recent accidents make one wonder.

nonameiguess4 years ago

Pilots generally do have some level of engineering background, in order to be able to understand possible in-flight issues, but they're not analogous to software engineers. They're analogous to software operators. Software engineers are analogous to aerospace engineers, who absolutely do understand the internals of how turbines work because they're the people who design turbines.

The problem with software development as a discipline is that it's all so new we don't have proper division of labor and professional standards yet. It's as if the people responsible for modeling structural integrity in the foundation of a skyscraper and the people who specialize in creating office furniture were all just called "construction engineers" and expected to have some common body of knowledge. Software systems span many layers and domains that don't have all that much in common with each other, but we all pretend we're speaking the same language to each other anyway.

LimaBearz4 years ago

I really like your analogy; I'm stealing it. As a pilot (devops), during interviews I'm often asked about deep aeronautics internals (some graph/tree question) concerning whatever plane that aeronautical (software) engineer built, and it's always annoyed me that that's a game I have to play. Same realm but completely different fields that are nonetheless closely intertwined. This happens quite frequently.

I sometimes hate-joke/fantasize about nailing a SE candidate with an obscure BGP or esoteric DNS question and then being outwardly disappointed in his response, watching him realize he's going to lose this job over something I found completely reasonable to ask, but ultimately entirely useless to his position.

withinboredom4 years ago

It doesn't help that most of it is completely abstract and intangible. You can immediately spot the difference between a skyscraper and a chair, but not many can tell the difference between an e2e-encrypted chat app and a support chat app. They're both 'apps', but they are about as different as a chair and a skyscraper in architecture and systems.

motives4 years ago

I think the important point here is that even pilots don't know the full mechanics of a modern jet engine (AFAIK at least; I don't have an ATPL so I'm not 100% sure of the syllabus). They may know basics like the Euler turbine equation and be able to run some basic calculations across individual rows of blades, but they most likely will not fully understand the fluid mechanics and thermodynamics involved (and especially not the trade secrets of how entire blades are grown from single crystals).

This is absolutely fine, and one can draw parallels in software: a mid-level software engineer working in an AWS-based environment won't generally need to know how to parse TCP packet headers, despite the software/infrastructure they work on requiring them.

ampdepolymerase4 years ago

Yes and no, for a private pilot license you are taught through intuition and diagrams. No Navier Stokes, no Lattice Boltzmann, no CFD. The FAA does not require you to be able to solve boundary condition physics problems to fly an aircraft.

ashtonkem4 years ago

Modern jet pilots certainly know much less about airplane functions than they did in the 1940s, and modern jet travel is much safer than it was even a decade ago.

lanstin4 years ago

Software today is more like jets in the 1940s than modern day air travel. Still crashing a lot and learning a lot and amazing people from time to time.

LinuxBender4 years ago

Many of them know the checklists for their model of aircraft. The downside of the checklists is that they sometimes explain the "what" and not the "why". They are supposed to be taught the why in their simulator training. Newer aircraft are going even further in that direction of obfuscation to the pilots. I expect future aircraft to even perform automated incident checklist actions. To your point, not everyone follows the checklists when they are having an incident as the FDR often reports.

puffyvolvo4 years ago

Most pilots probably don't know how any specific plane's engine works beyond what inputs give what outcomes, plus a few edge cases. Larger aircraft have most of their functions abstracted away, with some models effectively pretending to act like older ones to ship them out faster (commercial pilots have to be certified per plane IIRC, so a more familiar plane = quicker recertification), which has led to a couple of disasters recently since the 'emulation' isn't exact. This is still a huge net benefit, as larger planes are far more complicated than a little Cessna and much harder to control with all that momentum, mass, and airflow.

Koshkin4 years ago

Perhaps it is not about a jet engine, but I find this beautiful presentation extremely fascinating:

https://www.faa.gov/regulations_policies/handbooks_manuals/a...

rebelos4 years ago

"I don't know what to think when I meet engineers who know TCP/IP but don't quite understand how photons are transmitted over fiber."

"I don't know what to think when I meet engineers who know UNIX but don't quite understand assembly."

What you quoted is tantamount to the lament of a dinosaur that has ample time to observe the meteor approaching and yet refuses to move away from the blast zone.

Less facetiously: the history of progress in most domains, and especially computing, is in part a process of building atop successive layers of abstraction to increase productivity and unlock new value. Anyone who doesn't see this really hasn't been paying attention.

void_mint4 years ago

> Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and reimplementing and fixing bugs that crop up only on Windows XP.

Can we provide an example that isn't also a big company? I can't really think of big companies that don't either dogfood their own tech or rely on someone bigger to handle the things they don't want to (Apple spends more than $30M a month on AWS, as an example [0]). You could also make the argument that no matter what route you take, you're "relying on" some big player in some big space. What OS are the servers in your in-house data center running? Who's the core maintainer of whatever dev frameworks you subscribe to? (Note: an employee of your company being the core maintainer of a bespoke framework that you developed in house and use is a much worse problem to have than being beholden to AWS ELB, as an example.)

This kinda just sounds like knowledge and progress. We build abstractions on top of technologies so that every person doesn't have to know the nitty gritty of the underlying infra, and can instead focus on orchestrating the abstractions. It's literally all turtles. Is it important, when setting up a MySQL instance, to know how to write a lexer and parser in C++? Obviously not. But lexers and parsers are a big part of MySQL's ability to function, right?

[0]. https://www.cnbc.com/2019/04/22/apple-spends-more-than-30-mi...

Aeolun4 years ago

I guess I don’t really understand what a socket is? It’s a magic thingy that allows two computers/processes to communicate and sometimes has trouble with NAT.

I know how to use it certainly, but how the hell it is implemented is more or less black magic to me.

Now that’s not to say I couldn’t learn how a socket works. It’s just never been at all relevant to performing my job.

nikau4 years ago

Yes, but you should at least have some basic troubleshooting skills, like running netstat to see a socket stuck in SYN_SENT or whatever, to get an idea of whether there is a network connectivity issue with your endpoint.
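For what it's worth, at the API level a socket is less magic than it sounds: you listen, accept, and dial, and the kernel's TCP state machine does the rest. A minimal Go sketch (the loopback address and port are arbitrary choices for illustration); the dial timeout at the end is roughly what a connection stuck in SYN_SENT feels like from code:

    // A socket, as seen from user space: listen on an address, accept, and
    // you get a bidirectional byte stream. The kernel's TCP state machine
    // (the SYN/ACK handshake, windows, the NAT pain) sits under this API.
    package main

    import (
        "io"
        "log"
        "net"
        "time"
    )

    func main() {
        // Server side: a listening socket on an arbitrary local port.
        ln, err := net.Listen("tcp", "127.0.0.1:9000")
        if err != nil {
            log.Fatal(err)
        }
        go func() {
            conn, err := ln.Accept() // a connected socket
            if err != nil {
                return
            }
            defer conn.Close()
            io.Copy(conn, conn) // echo whatever arrives
        }()

        // Client side: dialing opens another socket. If the peer never
        // answers, this times out, which is the code-level view of a
        // connection sitting in SYN_SENT.
        conn, err := net.DialTimeout("tcp", "127.0.0.1:9000", 2*time.Second)
        if err != nil {
            log.Fatal("endpoint unreachable: ", err)
        }
        defer conn.Close()
        conn.Write([]byte("ping"))
        buf := make([]byte, 4)
        io.ReadFull(conn, buf)
        log.Printf("echoed back: %s", buf)
    }

That's all a socket is from the application's side; netstat shows you the kernel's view of the same connection.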

piokoch4 years ago

The second quote resonates well with the old Joel Spolsky blog post "Fire and Motion" [1]. Chasing new technologies is exactly what your huge competitors want: you keep adopting XML databases, CORBA (in the olden days), NoSQL just a few years ago, and today Kafka, crypto, AI, and a kajillion AWS products, instead of working on your business.

[1] https://www.joelonsoftware.com/2002/01/06/fire-and-motion/

jdub4 years ago

I hope you and the author realise that sockets are a library. And used to be products! They're not naturally occurring.

mcv4 years ago

Most of this stuff is completely over my head, and I'm certainly no kubernetes expert, but I'm working on a project that's deployed with kubernetes, and one of the steps in our process is running our e2e tests, also in a separate kubernetes deploy. These tests (using Cypress) have proven to be extremely flakey on the server. Locally they work fine, though. I was wondering if Cypress is simply crap, but this article makes me wonder if kubernetes might be the real culprit here.

speedgoose4 years ago

Kubernetes for sure. But it will force you to write more resilient software. Since we migrated to Kubernetes, we had to implement automatic retry strategies for every network exchange (HTTP requests, database transactions) because the managed Kubernetes of a major cloud provider is a train wreck.

gerbilly4 years ago

If you're enjoying your Kubernetes, then have at it, but in my opinion it sounds like Stockholm syndrome.

The thing is so complicated that even the guys who wrote it probably can't figure it out.

I myself would rather sew together .BAT files, CORBA and COBOL into a shambling software frankenstein before I'd even consider using Kubernetes and get sucked into that mess.

But seriously, 99 percent of us, even on HN, don't have the problems that kubernetes is trying to solve.

Why do we put ourselves through this when we should know just looking at the thing that it's just going to be a nightmare when things go wrong?

froh4 years ago

.JCL files, not .BAT files and: yes.

anonygler4 years ago

I dislike the deification of Ken Thompson. He's great, but let's not pretend that he'd somehow will a superior solution into existence.

The economics and scale of this era are vastly different. Borg (and thus Kubernetes) grew out of an environment where one-in-a-million events happen every second. Edge cases make everything incredibly complex, and Borg has solved them all.

jeffbee4 years ago

Much as I am a fan of Borg, I think it succeeds mostly by ignoring edge cases, not solving them. k8s looks complicated because people have, in my opinion, weird and dumb use cases that are fundamentally hard to support. Borg and its developers don't want to hear about your weird, dumb use case, and within Google there is the structure to say "don't do that", which cannot exist outside a hierarchical organization.

throwdbaaway4 years ago

Interesting. Perhaps k8s is succeeding in the real world because it is the only one that tries to support all the weird and dumb use cases?

> Think of the history of data access strategies to come out of Microsoft. ODBC, RDO, DAO, ADO, OLEDB, now ADO.NET – All New! Are these technological imperatives? The result of an incompetent design group that needs to reinvent data access every goddamn year? (That’s probably it, actually.) But the end result is just cover fire. The competition has no choice but to spend all their time porting and keeping up, time that they can’t spend writing new features.

> Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and reimplementing and fixing bugs that crop up only on Windows XP.

Instead of seeing k8s as the equivalent of "cover fire" or Windows XP, a more apt comparison is probably Microsoft Office, with all kinds of features to support all the weird and dumb use cases.

nwmcsween4 years ago

I thought the same as well, but then I went down the Docker/container route to make something similar to k8s, and it turned out to be just reimplementing k8s badly. The reason k8s is so complicated is the horrible infatuation with walls of YAML and CRDs; think of YAML and CRDs as XML and XMLNS, for those of us who lived through XML.

parasubvert4 years ago

Claiming that Kubernetes is Multics, and that the UNIX equivalent is around the corner, is a worthless claim without actual data or an argument to back it up.

To me, Kubernetes is the new UNIX, centered around a small number of core ideas: controller loops, Pods, level-triggered events, and a fully open, well-standardized, declarative, and extensible RESTful API.
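To make the first of those concrete for readers who haven't touched it: a controller is conceptually just a loop that compares desired state with observed state and nudges the world toward the former. A minimal, library-free sketch in Go (the Cluster interface and all names are illustrative, not the real client-go API):

    // A level-triggered reconcile loop in miniature: it doesn't chase
    // individual events; each pass looks at current vs. desired state and
    // closes the gap, so a missed event doesn't matter. All names are
    // illustrative, not the real Kubernetes API.
    package main

    import "log"

    // Cluster is whatever lets us observe and change the world.
    type Cluster interface {
        RunningReplicas(app string) int
        StartReplica(app string)
        StopReplica(app string)
    }

    // reconcile performs one convergence step toward the desired count.
    func reconcile(c Cluster, app string, desired int) {
        switch actual := c.RunningReplicas(app); {
        case actual < desired:
            c.StartReplica(app)
        case actual > desired:
            c.StopReplica(app)
        }
    }

    // fakeCluster is an in-memory stand-in so the sketch runs standalone.
    type fakeCluster struct{ replicas int }

    func (f *fakeCluster) RunningReplicas(string) int { return f.replicas }
    func (f *fakeCluster) StartReplica(string)        { f.replicas++ }
    func (f *fakeCluster) StopReplica(string)         { f.replicas-- }

    func main() {
        c := &fakeCluster{}
        // A real controller loops forever and also watches for changes;
        // a few passes are enough to show convergence here.
        for pass := 1; pass <= 4; pass++ {
            reconcile(c, "web", 3)
            log.Printf("pass %d: running=%d desired=%d", pass, c.RunningReplicas("web"), 3)
        }
    }

The real thing layers watches, work queues, and the API machinery on top, but the loop above is the core idea.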

Kubernetes has its complexities - just like UNIX, because it's trying to solve two big problems: shifting the fundamental unit of computation into an immutable / ephemeral unit (rather than mutable), i.e. the Pod, and having a single open API for controlling almost every aspect of IT using control systems theory as the philosophy.

The various clouds and predecessor cloud orchestrators (Azure ARM, AWS Cloud Formation, etc) are (to me) the infinitely complicated beasts.

This article didn't have an argument beyond "I don't understand it, and therefore I don't like it". He just linked to a few rants about the complexity of the CNCF ecosystem (which is like complaining that "IT is complicated"; it is a reflection of reality, not of Kubernetes), and an extended cranky rant / thought exercise by the MetalLB dude. The latter is the closest to an actual argument against Kubernetes, but there are a LOT of things to disagree with in that post. THAT would be an interesting debate.

The biggest issue with Kubernetes is the insularity of the culture to reject anything that doesn't think like Kubernetes (as defined by whoever might be running any given SIG). That is also its greatest strength. But if it doesn't compromise this vision in some respects, such as developer experience, it will be self-limiting.

fnord774 years ago

I really love how kubernetes decouples compute resources from actual servers. It works pretty well and handles all kinds of sys-ops-y things automatically. It really cuts down on work for big deployments.

Actually, it has shown me what sorts of dev-ops work are completely unneeded.

misiti37804 years ago

This post brings up a good question - how does one get better at low-level programming? What are some good resources?

talideon4 years ago

...except people actually use K8S.

slackfan4 years ago

Kubernetes is fantastic if you're running global-scale cloud platforms, i.e., you are literally Google.

Over my past five years working with it, not a single customer has had a workload appropriate for Kubernetes; it was 100% cargo-cult programming and tool selection.

dilyevsky4 years ago

Your case is definitely not the norm. We're not Google-sized, but we are taking big advantage of k8s, running dozens of services on it, from live video transcoding to log pipelines.

mixxit4 years ago

I just want cloud-agnostic FaaS with first-class triggers and outputs.

Something like Lambda and Azure Functions, without feeling locked in.

gonab4 years ago

No one has used the words "Docker Swarm" in the comment section.

Fill in the words

Kubernetes is to Multics as ____ is to docker swarm

honkycat4 years ago

People love to pooh-pooh "complicated" things like unit tests, type systems, Kubernetes, GraphQL, etc. Things that are solving a specific problem for LARGE SCALE ENTERPRISE users.

I will quote myself here: A problem does not cease to exist just because you decided to ignore it.

Without Kubernetes, you still need to:

- Install software onto your machines

- Start services

- Configure your virtual machines to listen on specific ports

- Have a load balancer directing traffic to and watching the health of those ports

- A system to restart processes when they exit

- Something to take the logs of your systems and ship them to a centralized place so you can analyze them

- A place to store secrets and provide those secrets to your services

- A system to replace outdated services with newer versions (for either security updates or feature updates)

- A system to direct traffic to allow your services to communicate with one another (service discovery)

- A way to add additional instances to a running service and tell the load balancer about them

- A way to remove instances when they are no longer needed due to decreased load

So sure, you don't need Kubernetes at an enterprise organization! Just write all of that yourself! Great use of your time, instead of concentrating on writing features that will make your organization more money.

chubot4 years ago

(author here) The post isn't claiming that you should be doing something differently right now.

It's claiming that there's something better that hasn't been discovered yet. Probably 10 years in the future.

I will be really surprised if anyone really thinks that Kubernetes or even AWS is going to be the state of the art in 2031.

(Good recent blog post and line of research I like about compositional cloud programming, from a totally different angle: https://medium.com/riselab/the-state-of-the-serverless-art-7...)

FWIW I worked with Borg for 8 years on many applications (and at Google for over a decade), so this isn't coming from nowhere. The author of the post I quoted worked with it even more: https://news.ycombinator.com/item?id=25243159

I was never an SRE, but I have written and deployed code to every data center at Google, as well as helping dozens of people like data scientists and machine learning researchers use it, etc. It's hard to use.

I gave this post a modest title since I'm not doing anything about this right now, but I'm glad @genericlemon24 gave it some more visibility :)

doctor_eval4 years ago

This article really resonated with me. We are starting to run into container orchestration problems, but I really don't like what I read about K8s. Apart from anything else, it seems designed for much bigger problems than mine, and it requires the kind of huge mental effort to understand that, ironically, will make it harder for my business to grow.

I’m curious if you’ve taken a look at Nomad and the other HashiCorp tools? They appear focussed and compositional, as you say, and this is why we are probably going to adopt them instead of K8s - they seem to be in a strong position to replace the core of K8s with something simpler.

carlosf4 years ago

I use Nomad a lot in my company and I really like it.

Our team tried to migrate to AWS ECS a few times and found it much harder to abstract stuff from devs / create self-service patterns.

That said, it's not a walk in the park. You will need to scratch your head a little bit to set up Consul + Nomad + Vault + a load balancer correctly.

doctor_eval4 years ago

Thanks. We're going to start small with just Nomad, then Vault, and as our needs grow we will probably adopt Consul (we already use Terraform, so hopefully not a huge stretch) and maybe Boundary.

This is the thing I like about the HashiCorp tools: you don't have to eat the whole cake in a single sitting.

diragon4 years ago

>You will need to scratch your head a little bit to set up Consul + Nomad + Vault + a load balancer correctly.

I've been wondering: would it make sense to package all of that into a single, hopefully simple and easily configurable, Linux image? And if it would, why hasn't anyone done that yet?

chubot4 years ago

I've only looked at the HashiCorp tools, not really used them. My understanding is they originated in a VM-based world (?), and I've worked almost exclusively with containers. I'm sure that has changed over time.

I will say that I looked at HCL and it looks very nice:

https://github.com/hashicorp/hcl

But somehow it's not as popular as a mess of YAML and Go Templates? That genuinely leaves me scratching my head. I guess it's because people pick platforms and not languages? (BTW, in 2009 I designed and implemented the template language that Go templates are based on, and I find their common application pretty bizarre, e.g. in some Helm charts I looked at from this thread)

Oil is growing a config dialect that looks a lot like HCL (although it's convergent evolution; I've never used it.) I think there is a lot of room for mixing declarative and imperative; as far as I can see HCL is mostly declarative (defining data structures).

Anyway I'd be interested in reading about HashiCorp stuff but for some reason in my neck of the woods I don't hear too much about it. Maybe that's because they're paid services and the open source Kubernetes seems attractive by comparison? Or is it more of a VM vs. container thing?

orangepenguin4 years ago

All of the Hashicorp products are primarily open source products. While there are enterprise features and cloud-hosted versions of some of them, FOSS is the foundation of the company.

alex_young4 years ago

10 years ago there wasn't a Docker (released in 2013), and AWS was a tiny side player with most established businesses operating their own data centers.

I think it's safe to say that if the next 10 years are anywhere near as disruptive as the last 10 we will surely be doing a lot of things very differently.

fragmede4 years ago

Things have already changed since the first release of Kubernetes. Specifically, hosted Kubernetes, aka GKE/EKS/AKS, is a marked step forward from running Kubernetes yourself that I think doesn't get enough recognition. We'll see what the future holds, but my prediction is that it holds more layers of indirection, and that the future of running web services is on AWS Lambda/Azure Functions/Google Cloud Functions and other fully managed PaaS, like Heroku, with more vendor agnosticism. Running Kubernetes, in addition to the technical benefits, also enables a company to treat AWS/GCP/Azure as a commodity and credibly threaten to move clouds when the contract is up for renewal.

eb0la4 years ago

Back in 2003 we had Solaris Zones (now called Solaris Containers). Same concept as Docker, but we didn't know exactly why it was such a good idea, and the hardware was expensive.

What made Docker take off was being able to use commodity hardware and push to production with the same exact environment and behavior.

You could have done the same with Solaris, if you developed on a Sun Ultra 5 workstation and published the application in a zone on the server. But 2003 was a different world, and not everyone had a SPARC box nearby to develop on.

dev_by_day4 years ago

IMO the HashiCorp stack with Nomad provides a better development experience. The complexity is gradual and it doesn't try to do "everything"; it can stay focused on workload orchestration (whether it's a container, a VM, or even a process) and delegates coordination out to specific services better suited for it (Consul for service discovery, Vault for secrets, etc.).

thefourthchime4 years ago

Totally agree. Simple things stick, complicated things die.

If you're explaining, you're losing.

jb_gericke4 years ago

So you're saying nothing other than that something less complicated will come along eventually? See OpenStack. You're saying nothing.

fxtentacle4 years ago

You're mixing together useful complexity with useless complexity.

Plus at the very least, I'd be very careful about putting type systems into the same basket as Kubernetes. One is a basic language feature used offline and before deploying. The other is a highly complex interwoven web of tools that might take your systems offline if used incorrectly.

Without Kubernetes, you need Debian and its Apache and MySQL packages. It's called a LAMP stack, and for many production deployments that's good enough. Because without all that "cloud magic", a $50-per-month server running a bare-metal OS is beyond overpowered for most web apps, so you can skip all the scaling exercises. And with a redundant PSU and a redundant network port, 99.99% uptime is achievable. A feat so difficult, I'd like to mention, that Amazon Web Services or Heroku rarely manage to...

Complexity has high costs. Just because you don't see Kubernetes' complexity now, doesn't mean you won't pay for it through reduced performance, increased bug surface, increased downtime, or additional configuration nightmares.

bob10294 years ago

> You're mixing together useful complexity with useless complexity.

> Complexity has high costs

Complexity management is the central theme of building any large, valuable system. We would probably find that the more complex (and correct) a system, the more valuable it becomes on a relative basis to other competing solutions. The US tax code is a pretty damn good example of complexity intentionally taken to the extreme (for purposes of total market capture). We shouldn't be surprised to find other technology vendors framing problems & marketing their wares under similar pretenses.

The best way to deal with complexity is to eliminate it or the conditions under which it must exist. For example, we made the engineering & product choice that says we do not ever intend to scale an instance of our application beyond the capabilities of a single server. Consider the implications of this constraint when reviewing how many engineers we actually need to hire, or if Kubernetes even makes sense.

I think one of the biggest failings in software development is a lack of respect for the nature and impact of complexity. If we are serious about reducing or eliminating modes of complexity, we have to be willing to dig really deep and consider dramatic changes to the ways in which we architect these systems.

I know it's been posted to death on HN over the last ~48 hours, but Out of the Tar Pit is the best survey of complexity that I have seen in my career so far:

http://curtclifton.net/papers/MoseleyMarks06a.pdf

abraxas4 years ago

Couldn't agree more! The crazy Rube Goldberg machines that are being strung together from those AWS components to solve the most mundane problems are getting ridiculous.

debarshri4 years ago

Absolutely agree with you. I have seen the debate between accidental and necessary complexity very often. It actually depends on the stage of the organisation. In my opinion, many devs in startups and smaller orgs try to accommodate future expectations around the product and create accidental complexity. Accidental complexity becomes necessary complexity when an organisation scales out.

kyleee4 years ago

Another case of premature optimization, really

mwcampbell4 years ago

> And with a redundant PSU and a redundant network port, 99.99% uptime is achievable.

It's really tempting to believe that with the right hardware, we can put everything on one powerful and inexpensive box. A couple of problems with that:

1. What happens when you have to reboot to apply a kernel update?

2. The geographic location of that single box is itself a gap in redundancy. This is one thing I like about AWS and the other hyperscalers, with their regions that each have multiple data centers connected by a private network, with load balancers and other things spanning the region.

bob10294 years ago

This is a question that I have spent a ridiculous amount of time pondering.

My conclusions so far are this:

Single node application systems are by far the most reliable and manageable from a business logic standpoint. At no point does spreading a problem across more than 1 computer make that problem easier to solve.

If you are concerned about latency, you need to get really abstract with your problem and ask what is even possible in information-theoretic terms. If you are truly constrained to one serialized, synchronous context (i.e. a competitive Counter-Strike match or a stock exchange), there is little you can do to alleviate the root problem as your users get further from the server. You can certainly look at using some consensus protocol like Multi-Paxos, but then your transaction latency goes from microseconds (if you were clever) to milliseconds, representing an orders-of-magnitude slowdown in the typical case.

The best solution I can come up with is a synchronously replicated, append-only log store used in a primary/sync-witness/async-witness/... configuration. The first tier of resilience would be synchronous and provided by a set of witness nodes which must ack as a majority before the primary makes progress. These nodes would ideally be within 1-2ms of the primary. The async witnesses could be in orbit and/or on Mars; those are more about recovery from extreme, geological-scale disasters. The witness nodes would also use a separate consensus protocol to decide when the primary needs to be taken down and replaced with a sync (or, god forbid, async) witness. They would be able to elect an emergency leader, separate from the primary, who would be authorized to stop the bad primary in the hypervisor and edit any relevant DNS records to ensure traffic stops hitting the bad system.
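As a toy illustration of the "witnesses must ack as a majority" part, here is a hedged Go sketch; the Witness interface, the in-memory witness, and the timeout are assumptions made for illustration, not a real library:

    // Sketch of quorum-acknowledged appends: the primary hands an entry to
    // every synchronous witness and reports success only once a majority of
    // them have acknowledged it. All types here are illustrative.
    package main

    import (
        "errors"
        "log"
        "time"
    )

    // Witness is anything that can durably persist and acknowledge an entry.
    type Witness interface {
        Ack(entry []byte) error
    }

    // appendWithQuorum returns nil once a majority of witnesses have acked,
    // or an error if the quorum isn't reached before the timeout.
    func appendWithQuorum(entry []byte, witnesses []Witness, timeout time.Duration) error {
        acks := make(chan error, len(witnesses))
        for _, w := range witnesses {
            go func(w Witness) { acks <- w.Ack(entry) }(w)
        }

        needed := len(witnesses)/2 + 1 // majority of the sync witnesses
        deadline := time.After(timeout)
        for got := 0; got < needed; {
            select {
            case err := <-acks:
                if err == nil {
                    got++
                }
            case <-deadline:
                return errors.New("quorum not reached before timeout")
            }
        }
        return nil // now safe to expose the write to readers
    }

    // memWitness pretends to fsync so the sketch runs standalone.
    type memWitness struct{}

    func (memWitness) Ack([]byte) error { return nil }

    func main() {
        ws := []Witness{memWitness{}, memWitness{}, memWitness{}}
        if err := appendWithQuorum([]byte("order #42"), ws, time.Second); err != nil {
            log.Fatal(err)
        }
        log.Println("entry committed on a majority of witnesses")
    }

Everything hard in the design above (leader election, fencing the bad primary, DNS failover) lives outside this function, which is exactly why the single-box-plus-manual-failover option stays attractive.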

For the customers I work with, it is wayyy easier to build & sell a system that operates on a single box with sync replication + manual failover. Our customers are tolerant of production having a brief outage for a few minutes during the business day. Especially considering the fact that I have still never had to do this exercise in a production setting. The hardware we run this stuff on is so ridiculously stable.

rtsil4 years ago

1. I reboot it and my users will need to wait. 99.99% uptime allows about 53 minutes of downtime every year, which is more than enough for the occasional kernel update here and there.

If needed, I can choose to apply the update in the middle of the night in the timezone of most of my visitors.

If it really is a sensitive app and I can't afford any downtime, I just add another inexpensive box and put a load balancer in front (a Cloudflare load balancer would work fine). And since I now have 2 servers, I need a way to manage them without having to manually log in to each of them every time. Enter Ansible. And that's it.

2. Now that I have two cheap boxes, nothing prevents me from having them in two separate data centers with two separate providers.

FpUser4 years ago

>"It's really tempting to believe that with the right hardware, we can put everything on one powerful and inexpensive box"

This is what I have: 2 boxes only. One on my own premises in Canada and another on Hetzner in Europe. Either one can go down, no sweat. Also, I do not strive for "live uninterrupted updates"; all my applications and processes allow for short disruptions without interfering with the main course of business.

Also, I write native servers in C++ for the backend, and they process thousands of requests per second without breaking much of a sweat. Way more than my business would ever need.

Same situation with my clients as well, with the difference that for legal reasons they rent computers (or whatever passes as such) from Amazon or Azure.

aduitsis4 years ago

Correct me, but I'd dare say admins have been rebooting their machines and running services without geographic redundancy for decades. Uptime was still very high.

fxtentacle4 years ago

1. Planned maintenance in the middle of the night. Even Amazon and Heroku need to do it sometimes. At least we had a forced Postgres version update recently which was 30 minutes of downtime.

2. The latency within AWS is easily as high as US-EU backbone latency. Also, most web apps these days need 2000ms to load all the tracking and advertising crap, so 100ms of location latency is negligible in comparison. Plus, most real companies will have one website per country anyway: one server for .com and one server for .eu.

pjmlp4 years ago

Point 2 is relevant; point 1 not really, unless one isn't willing to pay for the specific OS version that allows live kernel updates.

threeseed4 years ago

This argument is often made and is ridiculous.

No one should or is using Kubernetes to run a simple LAMP stack.

But if you have dozens of containers and want them to be managed in a consistent, secure, observable, and maintainable way, then Kubernetes is going to be a better solution than anything you build yourself.

tmp_anon_224 years ago

> No one should or is using Kubernetes to run a simple LAMP stack.

Yes they are. Some developer got all excited about the capabilities of k8s and had an initially larger scope for a project, so they set it up with GKE or EKS, and it managed to provide just enough business value to burrow in like a tick that won't be going away for years.

Developers get all excited for new shiny tools and chuck it into production all the time, particularly at smaller orgs.

jlaurend4 years ago

Heh, as someone who came very close to doing this (i.e. using k8s for a LAMP-stack type app at a startup), it's not just "shiny object syndrome" driving people to do this. Here's what our progression looked like:

1. Start with a basic LAMP app in git that's manually deployed to an EC2 instance
2. Add in CI/CD + CodeDeploy
3. Create a staging environment
4. Dockerize the local environment to keep dev environments in sync and onboard easier (really, this part's a gamechanger for a small company)
5. OK, so now we have Docker for local dev environments but stage and prod are managed separately. Can we just run our Docker containers in stage/production?

When I researched step 5, the options were basically k8s or Docker swarm but Docker swarm didn't seem battle tested (for prod). k8s was clearly a nightmare for a small team to maintain so we started looking into GKE / EKS -- but EKS was still in beta. Thus we punted. We've actually started using ECS for a newer project and I'd likely go that route for step 5 instead.

danjac4 years ago

It tends to happen with startups I think because developers know the startup is likely not going to be around for long, or they will move on in a couple years anyway. So why not shoehorn whatever tech you want and have that skill in your resume? Of course now the startup will need to hire for that tech when you leave and the cycle continues...

+1
threeseed4 years ago
dvtrn4 years ago

I agree that you probably shouldn’t but if you think no one “is”, I’d point to my last job, an enterprise that went to k8s for a single-serving php service that reads PDFs.

I recently asked a friend who still works there if anything else has been pushed to k8s since I left (6 months ago). The answer: no.

derfabianpeter4 years ago

Sounds familiar.

rodgerd4 years ago

Alas, a lot of people are. One of the reasons there's such a backlash against k8s - other than contrarianism, which is always with us - is that there are quite a few people who have their job and hobby confused, and inflicted k8s (worse yet, raw k8s) on their colleagues not because of a carefully thought out assessment of its value, but because it is Cool and they would like to have it on their CV.

michaelpb4 years ago

> No one [...] is using Kubernetes to run a simple LAMP stack.

Au contraire! This is very common. Probably some combo of resume-driven devops and "new shiny" excitement.

+1
rileytg4 years ago
lamontcg4 years ago

I love the way that the Kubernetes debate always immediately devolves into Kubernetes vs. DIY where Kubernetes is the obviously correct answer.

Two groups of people shouting past each other.

+3
FpUser4 years ago
moonchild4 years ago

> You're mixing together useful complexity with useless complexity.

Or, to channel Fred Brooks, essential and accidental complexity.

bostonsre4 years ago

> useless complexity

Which items fall in this category?

imwillofficial4 years ago

Most of the time, K8s

+1
bostonsre4 years ago
slackfan4 years ago

"Write it all yourself"

- Install software onto your machines

Package managers, thousands of them.

- Start services

SysVinit, and if shell is too complicated for you, you can write totally not-complicated unit files for SystemD. For most services, they already exist.

- Configure your virtual machines to listen on specific ports

Chef, Puppet, Ansible, other configuration tools, literally hundreds of them etc.

- have a load balancer directing traffic to and watching the health of those ports

Any commercial load balancer.

- a system to re-start processes when they exit

Any good init system will do this.

- something to take the logs of your systems and ship them to a centralized place so you can analyze them.

Syslog has had this functionality for decades.

- A place to store secrets and provide those secrets to your services.

A problem that is unique to kubernetes and serverless. Remember the days of assuming that your box was secure without having to do 10123 layers of abstraction?

- A system to replace outdated services with newer versions ( for either security updates, or feature updates ).

Package managers.

- A system to direct traffic to allow your services to communicate with one another. ( Service discovery )

This is called an internal load balancer.

- A way to add additional instances to a running service and tell the load balancer about them

Most load balancers have built-in processes for this.

- A way to remove instances when they are no longer needed due to decreased load.

Maybe the only thing you may need to actively configure, again in your load balancer.

None of this really needs to be written itself, and these assumptions come from a very specific type of application architecture, which, no matter how much people try to make it, is not a one-size-fits-all solution.

rhacker4 years ago

So instead of knowing about K8s services, ingresses and deployments/pods I have to learn 15 tools.

Ingresses are not much more complicated than an nginx config, services are literally 5 lines per pod, and the deployments are roughly as complicated as a 15-line Dockerfile.

smoldesu4 years ago

If you're familiar with Linux (which should be considered required reading if you're learning about containers), most of this stuff is handled perfectly fine by the operating system. Sure, you could write it all in K8s and just let the layers of abstraction pile up. Or, most people will be served perfectly fine by the software that already runs on their box.

+3
emerongi4 years ago
+1
eropple4 years ago
echelon4 years ago

> most of this stuff is handled perfectly fine by the operating system

No, you have to write or adopt tools for each of these things. They don't just magically happen.

Then you have to maintain, secure, integrate.

k8s solves a broad class of problems in an elegant way. Since other people have adopted it, it gets patched and improved. And you can easily hire for the skillset.

+1
ownagefool4 years ago
tapoxi4 years ago

This really depends on how many boxes you have.

cat1994 years ago

> I have to learn 15 tools.

kubectl, kustomize, helm, istio, grafana, various flavors of ingress controllers, overlay networks, service meshes, storage controllers, etcd, etc.

from tfa: https://landscape.cncf.io/

you still have to learn 15 tools, just now they are hidden behind the scenes, and you still have to understand the underlying systems to reason about your containers.

this isn't for or against k8s - i'm a right tool for the job guy - but as a tool kubernetes doesn't solve problems, it encapsulates them and shifts them around.

nikau4 years ago

Plus all the cloud tools are immature with terrible error handling and logging.

So after an enjoyable time crafting a 30-level-deep JSON file, you get a failed Helm deployment with an error message like "timed out waiting for the condition".

nikau4 years ago

15 mature well documented tools are a lot easier than 15 kludged ill thought out Kubernetes definitions.

Any serious Kubernetes environment is not 5 lines per pod; it's the hell of RBAC and pod security policies and all sorts of overly cryptic cruft.

orf4 years ago

“ For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software.”

Or… you could not.

https://news.ycombinator.com/item?id=9224

fragmede4 years ago

The difference is that Dropbox is user-facing software, while Kubernetes is software-engineer-facing. Dropbox has to be usable by tech-illiterate people. Tech-illiterate people have no idea what a Kubernetes is.

There is value in creating a vertically integrated solution in a space, similar to what Dropbox did, so if you find yourself building many of the pieces of Kubernetes internally, it's worth considering if adopting Kubernetes wouldn't be a more efficient use of resources.

breakfastduck4 years ago

That comment has aged brilliantly.

Thanks for that!

adamrezich4 years ago

how is quoting this here relevant? nobody's saying k8s isn't successful or going to be successful; the argument is whether its complexity and layers of abstraction are worthwhile. dropbox is a tool, k8s is infrastructure. the only similarity between this infamous post and the argument here is that existing tools can be used to achieve the same effect as a product. the response here isn't "that'll never catch on" (because obviously it has caught on); rather it's "as far as infrastructure for your company goes, maybe the additional complexity isn't worth the turnkey solution"

orf4 years ago

"You don't need Kubernetes, for a Linux user you can already build a custom solution quite trivially by setting up a custom package repo then build and distribute your application using apt, then configuring SysVinit to monitor your services, whilst using Ansible to configure iptables rules in combination with a simple load balancer you can manage yourself, then use syslog to monitor logs across all your machines whilst hand-waving away secrets management as a problem with 'serverless'"

Yes, you could. Some people do. Others don't, because even if you need a small portion of the features a turnkey solution is likely a better choice in the long run than hand-rolling your own mix of 15+ different technologies to achieve the same thing.

edoceo4 years ago

Confounded why sshfs wasn't chosen.

bcrosby954 years ago

So you have a version of Kubernetes that is as easy to use as Dropbox? Where do I sign up for the beta?

orf4 years ago
ferdowsi4 years ago

I'm personally glad that Kubernetes has saved me from needing to manage all of this. I'm much more productive as an applications engineer now that I don't have to stare at a mountain of bespoke Ansible/Chef scripts operating on a Rube Goldberg machine of managed services.

zdw4 years ago

Instead, you can now admin a Rube Goldberg machine of Helm charts, which run a pile of Docker containers, each of which is its own microcosm of outdated packages and security vulnerabilities.

+1
kube-system4 years ago
cjalmeida4 years ago

This x10. Each such setup is a unique snowflake of brittle Ansible/Bash scripts and unit files. Anything slightly different from the initial use case will break.

Not to mention operations. K8s gives you for free things that are a pain to set up otherwise. Want to autoscale your VMs based on load? Trivial in most cloud-managed k8s.

mplewis4 years ago

> Remember the days of assuming that your box was secure without having to do 10123 layers of abstraction?

Yep, I remember when I deployed insecure apps to prod and copied secrets into running instances, too.

rhacker4 years ago

Remember how the ops team kept installing Tomcat with the default credentials?

puffyvolvo4 years ago

This was the funniest point in that comment to me.

Read the intended way, it's borderline wrong.

Read as "remember when people assumed security without knowing" is basically most of computing the further back in time you go.

Thaxll4 years ago

Have you ever tried to package things with .deb or .rpm? It's a f** nightmare.

> A place to store secrets and provide those secrets to your services.

"A problem that is unique to kubernetes and serverless. Remember the days of assuming that your box was secure without having to do 10123 layers of abstraction?"

I remember 10 years ago things were not secure, you know, when people baked their credentials into svn, for example.

rantwasp4 years ago

lol. as someone who has packaged stuff I can tell you that K8s is orders of magnitude more complicated. Also, once you figure out how to package stuff, you can do it in a repeatable manner - vs K8s, which you basically have to babysit (upgrades/deprecations/node health/etc) forever while paying attention to all developments in the space.

+1
orf4 years ago
tomc19854 years ago

.deb packages are literally just a compressed archive with a folder structure that mostly mimics your folder structure on the hard drive. You've got some pre- and post-hooks where you can write some shell script to do fancy stuff, and a signing process to ensure authenticity. Autostart is a SysV init script or systemd unit file away. How is that a f* nightmare?

nitrogen4 years ago

Checkinstall makes packaging pretty easy for anything you aren't trying to distribute through the official distro channels.

https://help.ubuntu.com/community/CheckInstall

jchw4 years ago

I can set up a Kubernetes cluster, a container registry, a Helm repository, a Helm file and a Dockerfile before you are finished setting up the infrastructure for an Apt repository.

mlnj4 years ago

Exactly, an autoscaling cluster of multiple nodes with everything installed in a declarative way with load balancers and service discovery, all ready in about 10 minutes. Wins hands down.

zdw4 years ago

My experience is the opposite - an APT repo is just files on disk behind any webserver, a few of them signed.

Setting up all the infra for publishing APT packages (one place to start: https://jenkins-debian-glue.org ) is far easier than trying to understand all the rest of the things you mention.

jchw4 years ago

I mean, Kubernetes is just some Go binaries; you can have it up and running in literal seconds by installing a Kubernetes distribution like k3s. This is actually what I do personally on a dedicated server; it’s so easy I don’t even bother automating it further. Helm is just another Go binary, you can install it on your machine with cURL and it can connect to your cluster and do what it needs from there. The Docker registry can be run inside your cluster, so you can install it with Helm, and it will benefit from all of the Infra as Code that you get from Kubernetes. And finally, the Helm repo is “just files” but it is less complex than Apt.

I’ve been through the rigmarole for various Linux package managers over the years and I’m sure you could automate a great deal of it, but even if it were as easy as running a bash script (and it’s not,) setting up Kubernetes covers like half this list whereas setting up an Apt repo covers one item in it.

tomc19854 years ago

Yeah I don't understand where all this fictional .deb and APT "complexity" is coming from. Everything uses standard abstractions that are decades old at this point..... oh no, you have to make some directories! You have to put a manifest file in the right place! Oh my god, now you have to run a command!

slackfan4 years ago

Now make it not-brittle and prone to falling over, without using hosted k8s. ;)

api4 years ago

... but then you could pay a fraction for bare metal cloud hosting instead of paying out the nose for managed K8S at Google or AWS.

Its complexity and fragility are features. It's working as intended.

rantwasp4 years ago

no. you cannot.

tlrobinson4 years ago

This is supposed to be an argument against Kubernetes?

slackfan4 years ago

Nope, just an argument against the "you must write all of this yourself" line. :)

mlnj4 years ago

There was some project where someone wrote all of that (essentially what Kubernetes does) in like 8k lines of bash script. Brilliant, yes. But there is no way I want anything similar in my life.

I am not the biggest fan of the complexity Kubernetes is, but it solves problems there is no way I want to solve individually and on my own.

AlexCoventry4 years ago

I think the point of the blog post in the OP is that it should be a bunch of bash scripts with very few interdependencies, because most of the requirements in the grandparent comment are independent of each other, and tying them all together in a tool like kubernetes is unwieldy.

ratorx4 years ago

Some of these are decent points, but a couple are misleading.

The security one is the big one. Things were just not as secure (and did not need to be as secure) “back then”. K8s has a lot of complexity, and security should definitely be simpler so it’s harder to misconfigure, but not doing anything is not viable.

Saying “Package Managers” is fine until you realise they solve only part of the problem. The mainstream ones are good tools to update a package (and its dependencies) from version X to Y. When you’re running a distributed system, it’s often not that simple if you want to be reliable. Coordinating a slow global update of your application from version X to Y (safely) is pretty tricky and I’m not aware of good self-contained solutions to this.

staticassertion4 years ago

You're making their point for them.

alisonatwork4 years ago

That escalated quickly. Unit tests and type systems are not complicated at all, and are applied by solo developers all the time. GraphQL and Kubernetes are completely different beasts, technologies designed to solve problems that not all developers have. There really isn't a comparison to be made.

strken4 years ago

Almost every team I've worked on has needed to deploy multiple services somewhere, and almost every app has run into escalating round trip times from nested data and/or proliferating routes that present similar data in different ways. While it's true to say not all developers have those problems, they're very common.

alisonatwork4 years ago

That's a very SaaS-centric way of looking at software development.

Unit tests and type systems are useful across the whole stack. Systems developers, application developers, embedded developers, mobile developers, even sysadmins and IT people - they all have a use for these basic principles of how to design a piece of software.

GraphQL and Kubernetes, on the other hand, are solutions designed exclusively for web services deployed into the cloud, and they're primarily useful in situations where there are many different teams each working on different services, with differing release schedules and engineering priorities. These situations might seem very common in large companies, but I don't think they represent common aspects of software development in general.

doctor_eval4 years ago

I agree. GraphQL is conceptually straightforward, even if certain implementations can be complex. Any developer familiar with static typing is going to get it pretty easily.

I’m far from an expert, but ISTM that Kubernetes is complex both conceptually and in implementation. This has implications well beyond just operational reliability.

timr4 years ago

Sure, but k8s isn't the only way to do any of those things, and it's certainly a heavyweight way of doing most of them.

It's not a question of k8s or bespoke. That's a false dichotomy.

I see way too many young/inexperienced tech teams using k8s to build things that could probably be hosted on a couple of AWS instances (if that). The parasitic costs are high.

rhacker4 years ago

I see way too many young/inexperienced tech teams STILL using an unmaintainable process of just spinning up an EC2 instance for random crap because there is no deployment strategy at the company.

mountainriver4 years ago

Yup, k8s is at least standardized in a way that’s somewhat sane.

Before k8s every org I worked for had an absolute mess of tangled infrastructure

breakfastduck4 years ago

Not sure why this is being downvoted.

"We can do it ourselves!" attitude by people who are unskilled is the source of many legacy hell-webs sitting in companies all over the world that are desperately trying to be maintained by their inheritors.

timr4 years ago

Not responsive to the argument. k8s is maybe a "deployment strategy", but it's certainly not the only one. Or the best one for all circumstances.

GuB-424 years ago

"Large scale enterprise" is the key here.

Kubernetes was made by Google. Google is not your startup, it has millions of servers serving billions of users, of course it needs complex systems, and it has thousands of people to maintain them.

In a small company, you probably don't need much of what's in that "need to" list. Rent a server, maybe a second one for redundancy, install your packages, run your app, and if you did things well, you can do quite a lot with a single machine.

But a lot of people think they are Google, and get ready to scale to a level they will never reach, and do it badly.

I think that's where most of the pooh-poohing comes from: the use of overly complicated solutions for your scale.

kaeshiwaza4 years ago

It's where the Thompson/Unix way wins: KISS, and it still works from small to large scale.

iammisc4 years ago

Comparing Kubernetes to type systems is like comparing a shack to a gothic cathedral. Type systems are incredibly stable. They have to be proved both sound and complete via meticulous argumentation. Once proven such, they work and their guarantees exist... no matter what. If you avoid the use of the `unsafe...` functions in languages like Haskell, you can be guaranteed of all the things the type system guarantees for you. In more structured languages like Idris or Coq, there is an absolute guarantee even on termination. This does not break.

Whereas on Kubernetes... things break all the time. There is no well-defined semantic model for how the thing works. This is a far cry from something like the calculus of inductive constructions (the basis of Coq), for which there is a well-understood 'spec'. Anyone can implement the CIC in their language if they understand the spec. You cannot say the same for Kubernetes.

Kubernetes is a nice bit of engineering. But it does not provide the same guarantees as type systems. In fact, of the four 'complicated' things you mentioned, only one thing has a well-defined semantic model and mathematically provable guarantees behind it. GraphQL is a particular language (and not one based on any great algebra either, like SQL), Kubernetes is just a program, and unit tests are just a technique. None of them are abstract entities with proven, unbreakable guarantees.

Really, comparing Kubernetes to something like System FC or the CIC is like comparing Microsoft Word to Stokes' theorem.

The last thing I'll say is that type systems are incredibly easy. There are a few rules to memorize, but they are applied systematically. The same is not true of Kubernetes. Kubernetes breaks constantly. Its abstractions are incredibly leaky. It provides no guarantees other than an 'eventually'. And it is very complicated. There are myriad entities. Myriad operations. Myriad specs, working groups, etc. Type systems are relatively easy. There is a standard format for rules, and some proofs you don't really need to read through if you trust the experts.

markzzerella4 years ago

Your post reads like a teenager yelling "you don't understand me" at parents who also were teenagers at one point. You really think that those are new and unique problems? Your bullet points are like a list of NixOS features. I just did all of that across half a dozen servers and a dozen virtual machines with `services.homelab.enable = true;` before I opened up HN, while it's deploying. I'm not surprised that you can't see us lowly peasants from your high horse, but many of us have been doing everything you mentioned, probably far more reliably and reproducibly, for a long time.

Aeolun4 years ago

> Your post reads like a teenager yelling "you don't understand me" at parents who also were teenagers at one point.

I don’t understand teenagers any more, and I’m barely 30. I don’t think this analogy really works.

I agree with your point though.

markzzerella4 years ago

You don't have to understand teenagers to understand that their problems are the same that they have always been, except in different settings.

dikei4 years ago

Yep, we used to set up these things with a bunch of different systems using our collection of Ansible playbooks. The playbooks are complex, so as to handle all kinds of edge cases. Furthermore, since they were developed over a long period, the coding conventions are not uniform; it's quite hard to teach new hires how to use and contribute to the playbooks.

We probably replaced tens of thousands of lines of Ansible code with a few thousand lines of K8s code. We found the new code easier to maintain: because K8s is much stricter than Ansible, it's harder to deviate from the norm. Granted, we might be biased because K8s is all new and shiny, but so far we haven't regretted moving to K8s.

InternetPerson4 years ago

OK, true ... but if you do all that yourself, then "they" can never fire you, because no one else will know how the damn thing works. (Just be sure not to document anything!)

sweeneyrod4 years ago

Unit tests and "type systems" have very little in common with Kubernetes and GraphQL.

doctor_eval4 years ago

Also, GraphQL is not complex. You can learn the basics in an hour or so.

CTmystery4 years ago

Tangential to your point, but how did 'unit tests' end up in your list of complicated things? They are conceptually easy to understand, and they are certainly not only for large scale enterprise users. Granted, it takes years to learn how to write nice tests... maybe that is what you mean?

_hoe_radi_224 years ago

>> "complicated" things like unit tests, type systems, Kubernetes, GraphQL, etc.

Those are not even in the same ballpark in terms of how complicated they are. Unit tests and type systems are not complicated at all. GraphQL not really either. But Kubernetes very much is.

o8r3oFTZPE4 years ago

"People like to ..."

Perhaps the reason people question these complicated things is because they are, whether intentionally or not, being marketed to an audience on HN that includes small scale non-enterprise users.

I shall paraphrase others here: A problem does not exist for you simply because it exists for LARGE SCALE ENTERPRISE users.

What I would add to that is there is nothing particularly noteworthy about a large organisation's IT work simply because it is a large organisation or is making billions in ad revenue, unless one is also working in a similar organisation. If some organisations are writing the next "Multics", it really should not be interesting to everyone. A single person who can do all the individual tasks you listed is likely to think critically when presented with "news" of organisations where no single individual can do those things. It's like asking how many Initech Corporation employees it takes to screw in a lightbulb.

I find some of the most interesting work is found in projects started by individual programmers working alone. luajit for example.

void_mint4 years ago

The Kubernetes marketing team has definitely gotten to you. The investment in DevRel is really paying off if people are unironically arguing that you _must_ use K8s or you're wasting money and time.

I'd be very curious to find your proposed cost savings after accounting for those teams of engineers tasked with maintaining a company's K8s clusters. There is no free lunch.

jokethrowaway4 years ago

Even at smaller scale, dealing with any distro + k8s + helm can be simpler than doing ops on a bare Ubuntu install.

If your dependencies are not too many / are well supported, you may get good results with Nix.

phendrenad24 years ago

Most people who are afraid of doing all of those things haven't actually done them, and would probably be shocked to find that they aren't actually that complex. Kubernetes actually makes them more complex, but simpler in the aggregate. In other words, most of the apparent value in kubernetes vanishes when you do a real bake-off between kubernetes vs rolling your own infrastructure. There is still SOME benefit, but it's usually exaggerated.

Another problem with Kubernetes is the flexibility it gives you. Look at five engineering teams using Kubernetes, and you'll see five wildly different setups. Within that maneuverability, in a project that ostensibly makes things "simple", hide the devils that will bite you when you least expect it.

wyager4 years ago

Comparing type systems to kubernetes seems like an incredible category error to me. They have essentially nothing in common except they both have something to do with computers. Also, there are plenty of well-designed and beautiful type systems, but k8s is neither of those.

fridif4 years ago

I do all of the above with my own jars, written all by myself, and I feel it was 10x faster than just scratching the surface of Kubernetes.

Resume driven development is very real if people can't write their own load balancer.

ryanobjc4 years ago

Recently I had cause to try Kubernetes… it has quite the rep, so I gave myself an hour to see if I could get a simple container job running on it.

I used a GCP Autopilot k8s cluster… and it was a slam dunk. I got it done in 30 minutes. I would highly recommend it to others! And the cost is totally reasonable!

Running a k8s cluster from scratch is def a bigco thing, but if you’re in the cloud then the solutions are awesome. Plus you can always move your workload elsewhere later if necessary.

mountainriver4 years ago

This is also my experience, k8s can get harder but for simple stuff it’s pretty dang easy

skywhopper4 years ago

I’ve had a different experience with Kubernetes, because from what I’ve seen it fails to provide most of the features you described out of the box. Or when it does, there are major issues around how to safely deploy and maintain them over time. I assume you mean all those services can be installed and configured on Kubernetes, once you have Kubernetes itself up and running. But that’s not the same thing.

lrdswrk004 years ago

You have to do those things WITH Kubernetes.

It doesn’t configure itself.

I focused on fixing Kubernetes problems at my last job (usually networking). How is that supporting the business? (Hint: it didn’t, so management forced us off Kubernetes.)

No piece of software is a panacea, and shilling for a project that’s intended to remind people Google exists is not really putting time into anything useful either.

amerkhalid4 years ago

My issue with Kubernetes and DevOps is companies that combine DevOps with development. As a developer, it is already hard enough to keep up with new frameworks. Now these companies want their devs to do DevOps as well, two vastly different areas of expertise. Not sure how common it is in the industry, but I know enough developers who are now half-assing DevOps.

hnjst4 years ago

In my book DevOps is a set of practices that aims at improving the collaboration between Devs and Ops. I know that the term is now often used to label a role (or even a job description), but I think something important is lost in the switch.

According to what I put behind the concept, involving Devs is at the core. You built it, you run it!

amerkhalid4 years ago

> In my book DevOps is a set of practices that aims at improving the collaboration between Devs and Ops.

I think that's what sold me on DevOps.

> You built it, you run it!

This adds too many responsibilities for devs, but it also makes it hard to find good enough developers who can also manage deployments and infrastructure. I have never seen a happy and competent Dev+DevOps person. There is just too much cognitive load for the same person to do these two things right at the same time. The Hello World of Kubernetes deployment is easy in the cloud, but anytime you need to do something a bit more complex, the learning curve increases tremendously.

What seems to work is that each team having one or more dedicated DevOps person. Or I have seen a dedicated DevOps team in large orgs managing infrastructure for many other teams.

o8r3oFTZPE4 years ago

"Poeple like to ..."

Perhaps the reason people question these complicated things is because they are, whether intentionally or not, being marketed to an audience on HN that includes small scale non-enterprise users.

I shall quote thyself here: A problem does not exist for you simply because it exists for LARGE SCALE ENTERPRISE users.

icythere4 years ago

Good luck finding an engineer who can understand and do all that stuff today. It's possible, but it's hard. Everyone comes to the table with "Hey, Terraform and Helm/k8s" :D :D

kazinator4 years ago

If you don't write that yourself, you still have to understand how someone else wrote it so you can configure and use it properly, and understand how to debug it when it's not working.

Aeolun4 years ago

> Just write all of that yourself!

At least then you’ll have a shot at actually understanding it. I can’t trust kubernetes when anything goes wrong because the system just isn’t very transparent.

nikau4 years ago

The advantage of Kubernetes is price, and being a standard makes staff hiring easier.

F5, AutoSys, Splunk, etc. are all much better products for the tasks you mentioned, but they cost $$$ vs Kubernetes.

MR4D4 years ago

> Things that are solving a specific problem for LARGE SCALE ENTERPRISE users.

So, just like mainframes?

/s

Seriously, though, I think that was the point. The future needs to be a much less complicated tool.

gfodor4 years ago

I like Chef Habitat. It pretty much does all this. It's how I've kept away from k8s this long.

dboreham4 years ago

Unit tests, GraphQL and type systems aren't complicated, or at least don't need to be.

localhost4 years ago

Right. But that doesn’t mean that Kubernetes is the right solution for this set of problems.

chillfox4 years ago

Not all of those things are actually needed at small to medium size.

secondcoming4 years ago

We did all that on AWS, and do it now on GCE. Load balancers, instance groups, scaling policies, rolling updates... it's all automatic. If I wasn't on mobile I'd go into more detail. Config is ansible, jinja, blah blah the usual yaml mess.

api4 years ago

It's not K8S or nothing. It's K8S or Nomad, which is a much simpler and easier to administrate solution.

rochacon4 years ago

This is partially true. If the only Kubernetes feature you care about is container scheduling, then yes, Nomad is simpler. The same could probably be said about Docker Swarm. However, if you want service discovery, load balancing, secret management, etc., you'll probably need Nomad+Vault+Consul+Fabio/similar to get all the basic features. Want easy persistent storage provisioning? Add CSI to the mix.

Configuring these services to work together is not at all trivial either (considering proper security, such as TLS everywhere), and there aren't many solutions available from the community (or managed) that package this up in an easy way.

remram4 years ago

While this is not false, I don't think many of the posts critical of K8s hitting the front page are advertising for Nomad, or focusing on drawbacks that don't apply to Nomad.

exdsq4 years ago

You know, if there's one thing I've learnt from working in Tech, it's never ignore a pooh-pooh. I knew a Tech Lead who got pooh-poohed, made the mistake of ignoring the pooh-pooh. He pooh-poohed it! Fatal error! 'Cos it turned out all along that the developer who pooh-poohed him had been pooh-poohing a lot of other project managers who pooh-poohed their pooh-poohs. In the end, we had to disband the start-up. Morale totally destroyed... by pooh-pooh!

Edit: Seeing as I'm already on negative karma people might not get the reference - https://www.youtube.com/watch?v=QeF1JO7Ki8E

AnIdiotOnTheNet4 years ago

What makes you so sure that the downvotes aren't because all you posted was a comedic reference?

exdsq4 years ago

I didn't know pooh-pooh was a genuine logical fallacy before now!

fierro4 years ago

thank you lol. Hello World engineers will never stop criticizing K8s.

jimbokun4 years ago

Yes, but you don't need many or any of those things to launch a Minimum Viable Product.

So Kubernetes can become invaluable once you need to scale, but when you are getting started it will probably only slow you down.

aequitas4 years ago

If you want your MVP to be publicly available and your corporation's ops/sec teams to be on board with your plans, then Kubernetes is an answer as well. Even if your MVP only needs a single instance and no scaling. Kubernetes provides a common API between developers and operations so both can do the job they were hired for while getting in each other's way as little as possible.

jimbokun4 years ago

Pre-MVP, development and ops are likely the same people.

aequitas4 years ago

With pre-MVP you mean installing it on your laptop, right? It all really depends on your company's size and the liberties you are given. At a certain size your company will have dedicated ops and security teams which call all the shots. For a lot of companies, Kubernetes gives developers the liberties they would normally only get with a lot of bureaucracy or red tape.

NicoJuicy4 years ago

I have a Windows server and use .NET.

I press right click - publish, and for prod I have to enter the password.

Collecting logs uses the same mechanism as backups. They go to a cloud provider and are then easy to view in a web app.

I've never needed more after-hours work than this, apart from perhaps upgrading a server instance when too many apps were running on one server.

bajsejohannes4 years ago

> solving a specific problem

The problem to me is that Kubernetes is not solving a specific problem, but a whole slew of problems. And some of them it's solving really poorly. For example, you can't really have downtime-free deploys in Kubernetes (you have to set a longish delay after SIGTERM to increase the chance that there's no downtime).

Instead I'd rather solve each problem in a good way. It's not that hard. I'm not implementing it from scratch, but with good tools that exists outside of kubernetes and actually solve a specific problem.

ferdowsi4 years ago

Why can you not have downtime-free deploys? You tell your applications to drain connections and gracefully exit on SIGTERM. https://pkg.go.dev/net/http#Server.Shutdown

If your server is incapable of gracefully exiting, that's not a K8s problem.
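
For illustration (a minimal sketch, not ferdowsi's code, assuming a plain net/http server; the port and timeout values are made up):

```go
// Minimal sketch: drain in-flight requests and exit cleanly when
// Kubernetes sends SIGTERM to the container's main process.
package main

import (
	"context"
	"log"
	"net/http"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok\n"))
	})
	srv := &http.Server{Addr: ":8080"} // nil Handler -> DefaultServeMux

	// ctx is cancelled when SIGTERM (what the kubelet sends) or SIGINT arrives.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()

	// Serve until Shutdown is called below.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("listen: %v", err)
		}
	}()

	<-ctx.Done() // wait for the termination signal

	// Stop accepting new connections and give in-flight requests 30s to finish.
	shutdownCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := srv.Shutdown(shutdownCtx); err != nil {
		log.Printf("graceful shutdown: %v", err)
	}
}
```

Pair this with a pod terminationGracePeriodSeconds longer than the shutdown timeout, so the kubelet doesn't SIGKILL the process mid-drain.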

afrodc_4 years ago

> Why can you not have downtime-free deploys? You tell your applications to drain connections and gracefully exit on SIGTERM. https://pkg.go.dev/net/http#Server.Shutdown

> If your server is incapable of gracefully exiting, that's not a K8s problem.

Also whatever load balancer/service mesh you have can be configured for 503 rerouting within DC as necessary too.

bajsejohannes4 years ago

> You tell your applications to drain connections and gracefully exit on SIGTERM.

The problem is that k8s will send requests to your application after SIGTERM. So you have to wait some amount of time before shutting down to allow for that.

This was at least the case last time I used k8s, and it seemed like it was due to the distributed architecture, so something that was more than a mere bugfix away.
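
For illustration, one common mitigation (an assumption about typical setups, not something bajsejohannes prescribes) is to start failing the readiness probe as soon as SIGTERM arrives and keep serving through a short grace period before draining. A Go sketch; the /readyz path, port, and durations are made up:

```go
// Minimal sketch: after SIGTERM, fail readiness and keep serving briefly so
// requests routed via a stale endpoint list still succeed, then drain.
package main

import (
	"context"
	"log"
	"net/http"
	"os/signal"
	"sync/atomic"
	"syscall"
	"time"
)

func main() {
	var terminating atomic.Bool // requires Go 1.19+

	mux := http.NewServeMux()
	// Readiness probe: returns 503 once SIGTERM has been received, so the pod
	// is removed from Service endpoints before it actually stops serving.
	mux.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		if terminating.Load() {
			http.Error(w, "shutting down", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello\n"))
	})

	srv := &http.Server{Addr: ":8080", Handler: mux}

	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
	defer stop()

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("listen: %v", err)
		}
	}()

	<-ctx.Done()            // SIGTERM: pod is terminating
	terminating.Store(true) // readiness now fails; new traffic stops being routed here

	// Grace period for requests routed before the endpoint list caught up.
	time.Sleep(10 * time.Second)

	shutdownCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := srv.Shutdown(shutdownCtx); err != nil {
		log.Printf("graceful shutdown: %v", err)
	}
}
```

The sleep stands in for the "longish timer" mentioned above; a preStop hook can buy the same delay without touching application code.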

rhacker4 years ago

K8s has like, probably the most complete support for readiness/no downtime deploys in the whole damn industry so it's surprising to hear that...

https://cloud.google.com/blog/products/containers-kubernetes...

hnjst4 years ago

You can actually renew/upgrade your whole cluster with no downtime if you care enough to tackle the annoying bits that cost a few minutes in YOLO mode.

throwaway9843934 years ago

I think it will end up with a "simple" Distributed OS in the same way we have internal combustion engines: they're very hard to build, complicated to repair, moderately easy to maintain, very easy to use.

Here's the things I think we need in order to make a "simple" Distributed OS:

Cutting edge tech. If developers don't want to use it, it dies, period. It needs to be trendy.

Novel interaction of different versions of different software components. The model we use today is 40+ years old and does not scale past a single system. We have to make it easy for different versions of software to interact in any way, without making people think hard about it or use hacks. (There are solutions for this already but nobody uses them; we need a trendy blog post and some new code conventions to make them take off)

Novel network stack. Distributed systems have been twisting themselves into pretzels for decades to get Component A to talk to Component B over a network. You can have upwards of a dozen different components in between, all just dedicated to getting two components to talk to each other. The thing holding this up is the lack of integration between all the layers, and along hops.

Distributed Tracing. You can't troubleshoot a distributed system effectively without it. Lack of debugging tools means the systems won't be used seriously and the effort will die on the vine.

Distributed Computing Health Metrics as a higher level abstraction than "is this host-specific resource running out". Basically this requires a gossip network of health metrics and some fancy math to estimate probabilities of health.

Distributed Shared Memory for Threaded Applications. Yes, I went there. Building distributed systems will continue to be a pain without it. We have to make these systems stupid easy to program and use; if it takes a PhD or a two-pizza team of amateurs to program for it, it's just not gonna take off. (applies to the "Images and Feelings" part of OP)

Versioned Immutable Operating Models. Basically, distributed systems today are not immutable, because various layers of the "stack" that makes them up are not immutable or version-controlled. To reliably operate even a non-distributed system, you need this. It's especially important for SaaS, PaaS, IaaS, etc. We have built whole ecosystems of tools because many parts of a distributed system simply have bad operational models. You can start with building such a model for regular-old software, and each layer of software (and hardware!) around it should also develop such a model. A complete stack with that model will be very determinate, easy to operate, & easy to reason about. I estimate this will make 50% of the current distributed computing ecosystem redundant.

Federation, Encryption, Fine-Grained Access Control by default. We need any component to be able to talk to any component in a secure manner, again without jumping through a lot of hoops.

Distributed Control and Data Plane separation by default. This is both a novel I/O model, and a novel control plane for all components.

Resource Reservation. Software needs to specify the kind and amount of resources it will need before it even runs. This is necessary to prevent the inevitable resource exhaustion churn, ex. when competing pods spin up and die in a loop.

Distributed Networking Safety Conventions. The best practice stuff to prevent network storms on crowded resources. Throttling, backoff, jitter, quotas, etc.

Distributed Scheduler. Simple idea, difficult implementation. A generic scheduler that is smart enough to schedule all kinds of weird things across distributed systems.

Almost all of these things already exist, but that's not the hard part! The hard part is combining them all together in a way people want to use. The only way that's gonna happen is if we start up another research project ala Plan9.

mastrsushi4 years ago

>“Our generations Multics”

What, because they’re both complex?

Multics was never successful enough to be used much outside of Honeywell.

They’re both considered complex, but there are so many other examples of industry-relevant technologies he could’ve used. Products that have actually gained communities and a large enough user base to draw insight from.

I get it, you hate using GooberBoobies or whatever.

ransom15384 years ago

Eh. I think people overcomplicate k8s in their heads. Create a bunch of Dockerfiles that let your code run, write a bunch of YAML files describing how you want your containers to interact, get an endpoint: the end.

klysm4 years ago

I mean, people overcomplicate software engineering in their heads. Write a bunch of files of code, write some other files: the end. /s

ransom15384 years ago

Why the "/s"? -- sounds right.

sgt4 years ago

... That's also a little bit like saying it's super simple to develop an app using framework X because a "TODO list" type of app could be written in 50 loc.