
On Bloat

183 points · 28 days ago · docs.google.com
dvh 28 days ago

It was either here or on Slashdot that I read an insightful comment on bloat; I can't find it right now, so I'll paraphrase:

People often think "speed" when they read "bloat". But bloat often means layers upon layers of indirection. You want to change the color of the button in one dialog. You find the dialog code, change the color and nothing. You dig deeper and find that some modules use different colors for common buttons, so you find the module setting, change the color and nothing. You dig deeper and find that global themes can change colors. You find the global theme, change the color and nothing. You start searching the entire codebase and find that over 17 files change the color of that particular button, and one of those files does it in a timer loop because your predecessor couldn't figure out why the button color changed 16 times on startup, so he just constantly changes it to brown once a second. That is bloat. A trivial change will take you half a day. And the PM is breathing down your neck asking why changing a button color takes so long.

And sure enough, the fourth slide mentioned speed.

cies 28 days ago

I think of size when I read "bloat". A simple HTML content page with deeply nested DIV-soup, 14MB of JS and 2MB of CSS: bloat.

Bloat is code size / program size that is not actually needed.

Bloated code bases are usually harder to work on and may have noticeable slowness: but not necessarily! Maybe the bloat of the aforementioned HTML page is due to the use of a low/no-code framework that makes it suuuuper easy to maintain. And with internet being fast these days, many users may not notice 15MB of "bloat" (= unneeded size), as the page also loads a video and several high-res images, making the total add up to 60MB+ before all the bells and whistles are showing.

EDIT: typo "internet being fat these days" -> "internet being fast these days"

flir 28 days ago

I think of features. Code can always be fixed (in theory, at least: the Arrow of Software Development rarely points away from entropy), but "we're gonna strip this feature set back to basics" is even rarer.

cies 28 days ago

I'd say: bloat = high size / features ratio.

For instance fresh install size: Puppy Linux 300MB vs Windows 11 24GB.

Sure, Windows has some more features, but ratio-wise it's a bloated monster.

http://puppylinux.com
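A back-of-the-envelope sketch of that ratio - the install sizes are the ones quoted above, but the feature ratio is a purely hypothetical placeholder:

```python
# Install sizes as quoted: Puppy Linux ~300MB, Windows 11 ~24GB.
puppy_size_mb = 300
windows_size_mb = 24 * 1024

size_ratio = windows_size_mb / puppy_size_mb  # roughly 82x larger on disk

# Hypothetical: even generously granting Windows 10x the features,
# its size-per-feature ratio is still about 8x worse.
hypothetical_feature_ratio = 10
bloat_factor = size_ratio / hypothetical_feature_ratio

print(f"size: {size_ratio:.0f}x, size-per-feature: {bloat_factor:.1f}x worse")
```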

robocat 28 days ago

Arguing features cause bloated businesses: https://www.seangoedecke.com/difficulty-in-big-tech/

"Features" is the better word, but code size is measurable. Everybody knows we shouldn't measure lines of code, so now we use kB or MB as an equivalent for web sites.

Xen9 27 days ago

99.99% of source code is not Kolmogorov optimal.

This is a good thing.

GuB-42 28 days ago

The solution to the button problem is obviously to intercept the call to the underlying API that draws the button. Ideally, by having a generic interface so that you can change the "color override" behavior at runtime. It means that in the future, all you'll have to do is specify your behavior in a JSON file that is dynamically loaded. You can also leverage the already present JS interpreter to specify rules (keep things simple, we don't want to have another domain specific language). To make the code more testable, the JSON file reader will be abstracted away and "mockable".
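In the spirit of the joke, a minimal sketch of that architecture - every name here is invented, and of course none of this machinery is needed to turn a button brown:

```python
import json

# The JSON file reader, abstracted away behind an interface (of course).
class ConfigReader:
    def read(self, path: str) -> dict:
        with open(path) as f:
            return json.load(f)

# Generic interception layer over the (hypothetical) underlying draw API.
class ColorOverrideInterceptor:
    def __init__(self, reader: ConfigReader, config_path: str):
        self.reader = reader
        self.config_path = config_path

    def draw_button(self, button_id: str, requested_color: str) -> str:
        # Dynamically (re)load the override rules on every draw, naturally.
        rules = self.reader.read(self.config_path).get("overrides", {})
        return rules.get(button_id, requested_color)

# In tests, the reader is mocked - the one part that actually works:
class MockReader(ConfigReader):
    def read(self, path: str) -> dict:
        return {"overrides": {"ok_button": "brown"}}

interceptor = ColorOverrideInterceptor(MockReader(), "unused.json")
print(interceptor.draw_button("ok_button", "blue"))     # -> brown
print(interceptor.draw_button("cancel_button", "red"))  # -> red
```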

graypegg 28 days ago

Cue the dialog configuration service, which will provide dynamic dialog configuration JSON files.

The dialog configuration service will then need to communicate with the colour token service, which will be the single source of truth for the primary and secondary colour according to the current theme, so the dialog configuration service should only be concerned with the token representing the meaning it needs to communicate.

And of course, the colour token service will need to adapt to the device's detected ambient brightness level, so the colour token service should probably apply a luminosity transform as suggested by the ambient lightness service... but that math can get rather complex so I think I'll stand up a new colour transformer instead which can handle that along with hue shifting, just in case our brand colours all change by the same relative hue-degrees on a normalized sRGB colour wheel all at the same time.

Good talk team, we got a road map!

disqard 27 days ago

...and when the PM asks "why has this taken 3 years?", you explain on a whiteboard:

https://www.youtube.com/watch?v=y8OnoxKotPQ

mytailorisrich 28 days ago

Speed is a good proxy for bloat at the human level. All the things you mention, and that are mentioned in the slides, have a cost paid in speed (among other costs discussed in the slides).

marhee 28 days ago

The main reason for bloat not mentioned in the article is that most developers are not interested in the business domain of the company they are working for. They just want to do stuff with computers and software.

Which leads to adding stuff just for the sake of being of interest to developers. Think Kubernetes, microservices, etc. Secondly, because the developers are really not interested in the business (problems), they just want the easiest fix they can get away with. Which leads to bloat, as described.

A good test for this is feature specifications: if developers/programmers do not want to write them (or don't even read them with careful attention), then they are not interested in the problem domain of the company. It's an uncomfortable truth.

Another thing I've experienced is that non-technical management somehow wants to see solutions with loads of "technologies" included. I was once asked to give a diagram overview of our IT infrastructure. I agreed, mentioning that it would be a simple diagram (a reverse proxy, a server and a database). The simplicity of the diagram was frowned upon, because if you look at the cloud platforms' websites, they make you believe that the IT infra of every company must at least look like that of Amazon and Google.

So it is also a problem of management that does not want to reward simplicity and is victim of the "cargo cult" so eagerly fed by cloud providers.

gregjor 28 days ago

Exactly. Great comment.

Rob Pike (a few years older than me) didn't mention a few relevant facts. Those mainframes cost a small fortune, so their owners valued the computing resources. That pressure to control costs, along with the sometimes severe constraints of the hardware, forced programmers who cut their teeth in that era to avoid bloat and questionable features. For the first decade and a half of my career I worked at companies that didn't have an "IT" or software development department -- I worked in business units like logistics or accounting, as part of those teams, rather than as a separate "resource." I knew what the business needed because I worked alongside the people who would use the code I worked on.

When I started freelancing about 15 years ago I ran into customer after customer who had got sold (or proposed) crazy over-complicated (and expensive) solutions. Very often it seemed like a shopping list of languages and tools of the day, put together with no regard for real business needs. I have listened to programmers bamboozle their bosses and customers with phrases like "performant" and "maintainable" and "best practices" that don't mean much and don't have objective ways to measure them. I don't usually ascribe that to malice or fraud, just an almost complete disconnect and lack of interest in the business and its needs.

nyarlathotep_ 28 days ago

> So it is also a problem of management that does not want to reward simplicity and is victim of the "cargo cult" so eagerly fed by cloud providers.

This absolutely mirrors my experience. I've worked on a large number of cloud projects that were monstrosities of containers, serverless functions, NoSQL DBs, CDNs, cross-account pipelines, piles of Terraform/CFN, etc.

Naturally this was all topped off by some React thing with "tailwind" or MaterialUI, so clients needed multiple MB downloaded to display a list or whatever.

These were ALL projects that could have had every requirement met by, like, Ruby on Rails on a Raspberry Pi.

The amount of time sunk into fiddling with IaC issues, or debugging 65 different serverless/container service integrations, was easily an order of magnitude larger than the time spent delivering value to customers and end-users.

7bit 28 days ago

Kinda funny that you mention Tailwind, as Tailwind ships exactly the CSS that is needed to serve the design. Tailwind does not make your project bigger than not using Tailwind.

mediumsmart 27 days ago

They probably referred to the percentage of bloated solutions with tailwind (100%) in the mix opposed to the percentage of lean solutions with tailwind (0%) in the mix which makes tailwind an indicator or cornerstone of a mindset rather than a contributor to the actual bloat which is kinda funny too of course.

nyarlathotep_ 26 days ago

Correct. I'm referring to the mindset where "tailwind" (or whatever) is just "what you do", irrespective of whether it is appropriate for the given solution. Same mindset as the architecture approach.

aqueueaqueue 28 days ago

Business problems aren't loved in interviews unless the next company is in the same business.

Therefore "I know React like a ninja" > "How cement companies transport concrete"

In addition, companies fight to keep developers away from customers, with layers of business people and "requirements" that are "decrees".

Gupie 28 days ago

It's also the fault of the guys in Marketing and Product Management who want features to match a competitor's, or to beat a competitor on some pointless metric, or to create different versions for different market segments, or because some corporate big wig made an off-the-cuff remark: wouldn't it be nice if ...

zabzonk 28 days ago

> The main reason for bloat not mentioned in the article is that most developers are not interested in the business domain of the company they are working for. They just want to do stuff with computers and software.

Well, at the several investment banks I've worked for, the highest paid developers were those that also had the most knowledge of the bank's business. Of course, if you don't care about pay and just want to do "stuff", that's up to you.

rob74 28 days ago

> the IT infra for all companies must at least look like that of Amazon

Well, thanks to AWS, the IT infra for all companies increasingly does look like that of Amazon - because it's Amazon's own infra. Except if they're Azure or GCP customers, of course, but then it's M$'s or Google's infra...

mrkeen 28 days ago

> "A lack of vigilance and discipline"

Let's say hypothetically that it's a bad strategy to give your employee a C compiler and tell them to write code. Now, if you instead gave your employee a C compiler and told them to write code with vigilance and discipline, is that somehow now a good strategy? I don't think "Lack of vigilance and discipline" is a correct or incorrect diagnosis. I think it's a useless diagnosis.

> "It is all but impossible to keep track of what you depend on, and how safe it all is"

I think this space is ripe for exploration. Imagine a world where you could depend on your choice of JSON parser. One such JSON parser is allowed to perform arbitrary IO, and another is disallowed from doing so - at a language level. Would you ever pick the first over the second? Being able to distinguish one from the other would go a long way towards feeling safe about dependencies. From a safety perspective, it wouldn't even matter how much extra transitive crap the parser pulls in. If someone hijacks the 'isEven' package, what's the worst that could happen? Go into an infinite loop or return the wrong value? Both of these would be immediately flagged by the most casual level of tests, and be far preferable to arbitrary code execution.
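To make the 'isEven' point concrete - a hypothetical hijacked but IO-free version, and the casual test that flags it:

```python
# A hijacked, IO-free "isEven": the worst it can do is lie.
def is_even_hijacked(n: int) -> bool:
    if n == 1337:          # malicious special case
        return True
    return n % 2 == 0

# The most casual level of testing immediately catches the lie:
def casual_check(is_even) -> bool:
    return all(is_even(n) == (n % 2 == 0) for n in range(2000))

print(casual_check(is_even_hijacked))  # -> False
```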

zozbot234 28 days ago

> I think this space is ripe for exploration. Imagine a world where you could depend on your choice of JSON parser. One such JSON parser is allowed to perform arbitrary IO, and another is disallowed from doing so - at a language level.

Obligatory remark that we used to have this as a real worked-out feature, with Java's SecurityManager. The newer WASM Component Model is supposed to be going in much the same direction.

rcxdude 27 days ago

Java didn't really have its sandboxing worked out, though; there were a huge number of vulnerabilities in it.

pornel 28 days ago

> towards feeling safe about dependencies

Sandboxed is better than unsandboxed, but don't mistake it for being secure. A sandboxed JSON parser can still lie to you about what's been parsed. It can exfiltrate data by copying secrets to other JSON fields that your application makes publicly visible, e.g. your config file may have a DB access secret and also a name to use in the From field in emails. It can mess with your API calls, and make some /change-password call use the attacker's password, etc.

marcosdumay 28 days ago

You seem to have a very narrow understanding of the utility of language purity and effects systems.

Yes, the parser can lie to you. But the lying can only depend on the input you are parsing. No, it can't just exfiltrate data by copying it into other messages.

pornel 27 days ago

I said "fields", not "messages". It can exfiltrate data by copying it between fields of the single message it parses.

Imagine a server calling some API and getting `{"secret":"hunter2"}` response that isn't supposed to be displayed to the user, and an evil parser pretending the message was `{"error":{"user_visible_message":"hunter2"}}` instead, which the server chooses to display.
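That scenario can be sketched directly: a parser that performs no IO at all, yet rewrites fields so the application leaks the secret itself (names and structure taken from the example above):

```python
import json

# An "evil" but completely IO-free parser: it only transforms its input...
def evil_parse(raw: str) -> dict:
    data = json.loads(raw)
    if "secret" in data:
        # ...moving the secret into a field the application will display.
        return {"error": {"user_visible_message": data["secret"]}}
    return data

response = evil_parse('{"secret":"hunter2"}')

# The application, trusting the parser, shows "errors" to the user:
if "error" in response:
    print("Error:", response["error"]["user_visible_message"])  # leaks hunter2
```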

mrkeen 27 days ago

+1

benrutter 28 days ago

I like this take. I think it's more of a red flag that vigilance and discipline are a necessity: there must be strategies that minimise this.

To take HN's fave example, Rust: it has a type system that removes "vigilance and discipline" as a requirement for memory management. We don't need as much care if we can codify checks and standards that make bad practices (null pointers etc) impossible.

I work mostly in Python, and even using neat tools like Vulture, dead code elimination is a mostly manual, care-requiring task. I wonder if there's a neat way to bring down excess code - I don't have any new ideas though.
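One stdlib-only starting point (a rough sketch of what tools like Vulture automate far more carefully - this misses dynamic access, re-exports, decorators, and plenty more): walk the AST and report functions that are defined but never referenced:

```python
import ast

def unused_functions(source: str) -> set:
    """Report functions defined in the source but never referenced elsewhere."""
    tree = ast.parse(source)
    defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
    used = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):       # plain references, e.g. helper()
            used.add(node.id)
        elif isinstance(node, ast.Attribute):  # attribute access, e.g. obj.helper
            used.add(node.attr)
    return defined - used

code = """
def helper():
    return 1

def forgotten():
    return 2

print(helper())
"""
print(unused_functions(code))  # -> {'forgotten'}
```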

zozbot234 28 days ago

> To take HN's fave example Rust: it has a type system that removes "viligance and dicipline" as a requirement for memory management.

Nope, it doesn't. That's what the 'unsafe' keyword means in Rust: it's just short for "vigilance and discipline required".

benrutter 27 days ago

I'll grant you that, but I would argue that 'unsafe' is the exception, definitely not the rule. The fact that there's a required keyword sort of gets at what I was driving at.

impulsivepuppet 27 days ago

I find it hard to believe that vigilance can be opted out of, unless—sarcastically speaking—you leave the company before the lack of vigilance becomes a problem.

I somewhat agree with your take but that "free lunch" is paid by a disciplined use of lifetimes, somewhat contradicting the claim of "removing vigilance and discipline by using Rust's type system/borrow checker". In my worldview, type systems are in fact a compiler-enforced discipline, but I see how productivity can be boosted once problem areas become more visible / less implicit. Problems don't really disappear, they only become easier to scan through.

LkpPo 22 days ago

Does the quality of the paint and canvas prevent a poor painter from applying 30 coats? No, and it's probably the same with programming languages. Adding a low wall in the driveway won't change anything.

cbolton 28 days ago

Of course just mentioning the diagnosis will not fix the problem. That doesn't mean the diagnosis is useless. But acting on the diagnosis might take more effort than just telling engineers to fix it.

Same with a medical diagnosis. You don't expect to fix the issue by just mentioning the diagnosis to the patient and saying "fix it".

mrkeen 26 days ago

> Same with a medical diagnosis. You don't expect to fix the issue by just mentioning the diagnosis to the patient and saying "fix it".

That's what this is though. The slide deck author is the establishment - the person with the influence to effect change - telling the developers to just "do it better".

jerf 28 days ago

"I think it's a useless diagnosis."

First rule of engineering: Identify the problem.

It sounds obvious when you say it but once you start looking out for it it's amazing how many people try to skip it.

Certainly further analysis of your local problem is called for but I think it's a bit silly to expect a slide deck on the internet to identify your specific problems and then provide you with specific solutions for your situation that are useful for you.

mytailorisrich 28 days ago

"A lack of vigilance and discipline" is a good diagnosis, and "write code with vigilance and discipline" is good advice.

Now, of course that does not spell out a plan that you can blindly follow step by step, and that's good! It forces you to think about what this means for you and your team, and to come up with strategies and processes that will fit your needs. This should not be too difficult for people in the field.

It might sound basic or obvious, but actually most commercial projects I have been involved with lacked both. It takes a firm leader to instil and maintain both.

eptcyka 28 days ago

I agree that the presentation is lax on solutions, but I think it is valid to say that a lack of vigilance will not help us get out of this mess. Of course, telling developers to just try harder is never the solution - but part of the solution is making developers aware of another aspect of their work that has to be taken into account. This has to be resolved culturally first.

pas 27 days ago

Pike laments the lack of sleekness, complaining about bloat and scolds us for not having terminal NIH syndrome, and in general recommends thoughts and prayers.

Which is a bit tone deaf coming from somebody who unleashed yet another bloat generator (optimized for delivering - uh, excuse my french - "shareholder value") on the world, and fails completely to reflect on his own contribution to the problem.

yuvadam 28 days ago

No real groundbreaking insight, but an interesting overview nonetheless.

My takeaway is that, zooming out from software, society at large is living in a state of ever-increasing complexity. We see this in politics, economics, public infrastructure, climate, and of course technology.

The key pattern is that it's always easier to build upon existing systems and work around practical constraints than it is to fix fundamental problems. Software bloat is just one manifestation of this broader tendency in human civilization to accumulate complexity rather than address root causes.

jakub_g 28 days ago

It's insane how complicated laws and taxes are, with layers upon layers of new rules on top of old ones. Which often ends up with different courts not agreeing on "what is the tax on carrots" because of contradictory rules, etc.

graemep 28 days ago

Taxes are a great parallel in many ways:

1. massively complex
2. issues are fixed by adding more complexity (to fix a security issue, close a loophole)
3. users do not understand how it works (but nonetheless have strong opinions)
4. there is never more than a token attempt to clean up bloat (although lots of talk about it)
5. the complexity leads to unintended consequences

As an example if a business pays an employee in the UK:

1. the business pays a payroll tax (employer NI) on the amount paid
2. they deduct another tax (employee NI) based on the employee's weekly income (but if self-employed it depends on annual profits; if both, it gets complicated)
3. another tax is based on the employee's annual income ("income tax")

The second tax is often thought by people to be hypothecated to fund the NHS (many decades ago it funded, and gave people paying it the right to use, the government-funded healthcare that preceded the NHS), or to be used to pay their pension (it's hypothecated to do so, but is mostly used to pay current pensions; for the last few decades how much you pay is not linked to how much you get, which depends on how many years you either paid it, or just fell short of paying it, or claimed certain benefits). Its main real purpose these days is to disguise the fact that you pay more tax on earned income than on unearned income.

A consequence of this is that one-person businesses that work through a company (rather than being directly self-employed) will pay themselves as employees just short of the threshold at which income tax starts, and then pay themselves dividends, because corporation tax plus income tax on dividends comes to less than employer's NI plus employee NI plus income tax.

So, one loophole and a lot of complexity. Any attempt to change it is strongly opposed by people who gain from the current system, but the real reason it continues is that most people do not understand how it all works. Quite a lot of people do not really understand how even the simplest of those taxes works.

ralferoo 28 days ago

I'm in exactly that situation, and I find it interesting how many web pages there are that present the question "how to most tax-efficiently pay yourself as a director" and then only consider two options - at the primary threshold (when the employee starts paying tax and NI on the income) or at the lower secondary threshold (when the employer starts paying NI on the income).

There's almost no discussion about the overall taxation burden of taking money out of the company via dividends compared to income tax above the primary threshold (which for normal people is the same as the "tax free allowance").

As you've alluded to, if your total income is in the "basic rate 20%" band, then this is actually cheaper overall paid as dividends (21% for the company and 8.75% personally) compared to salary (13.8% for the company and 28% personally). However, once you're above that and into the "higher rate 40%" band, it's less clear cut.

At that point, it's still slightly better to pay yourself in dividends (21% for company and 33.75% personally) compared to salary (13.8% for company and 42% personally), but the decision becomes more nuanced because you might prefer to receive a salary anyway so that your dividend allowance is still free for dividends from other investments. Once you're into the "additional rate 45%" bracket, paying dividends is actually a higher overall tax burden.
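Per £100 of company profit, those numbers work out like this (a sketch using exactly the percentages quoted above; it ignores thresholds, allowances, and the additional-rate case):

```python
def dividend_route(profit, corp_tax, div_tax):
    # Profit is first hit by corporation tax; the remainder is paid
    # as a dividend and taxed personally.
    return profit * (1 - corp_tax) * (1 - div_tax)

def salary_route(profit, employer_ni, personal_tax):
    # Employer NI comes out of the same pot: gross salary s satisfies
    # s * (1 + employer_ni) == profit.
    gross = profit / (1 + employer_ni)
    return gross * (1 - personal_tax)

# Basic-rate band: dividends (21% company, 8.75% personal)
# vs salary (13.8% employer NI, 28% personal):
print(dividend_route(100, 0.21, 0.0875))  # ~72.09 kept
print(salary_route(100, 0.138, 0.28))     # ~63.27 kept

# Higher-rate band: dividends (21%, 33.75%) vs salary (13.8%, 42%):
print(dividend_route(100, 0.21, 0.3375))  # ~52.34 kept - still slightly ahead
print(salary_route(100, 0.138, 0.42))     # ~50.97 kept
```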

Actually, the best way to get money out of a company is via a SIPP as an employer contribution, which is zero-rated for tax for both company and employee. Although I can't shake the feeling that the deferred tax liability (income tax on 75% of it, many years in the future when it's many times its current value) is actually significantly worse than maxing out an ISA and paying the income tax up-front; so if I wasn't already doing that, the decision is less clear.

But like you say, it's all complexity upon complexity, and life would be much easier if all income was treated the same both in personal tax and taking money out of a company.

graemep 27 days ago

True, there are multiple complexities I did not even touch on: the child benefit clawback, the reduction of the personal allowance, and a lot more.

I was only really trying to illustrate the complexity of the relatively simple and very common case of an employer paying a typical employee.

jakub_g 27 days ago

> any attempt to change it is strongly opposed by people who gain from the current system

This is modern world in a nutshell.

carlosjobim 28 days ago

Under the current monetary system, taxes have to be as complicated as possible, in order to force the government currency to be used in every imaginable transaction. The primary purpose of taxation is this. Actual revenue is second priority.

This is why more efficient or more fair taxation will never be introduced.

hulitu 28 days ago

> The key pattern is that it's always easier to build upon existing systems

Is it, though? I've seen a lot of programs using 1 or 2 functions from a library but nevertheless linking the whole library.

Doing an ldd in /usr/bin is an exercise in horror.

eterm 28 days ago

> using 1 or 2 functions from a library, but, nevertheless, linking the whole library

This feels like something a compiler could or should solve.

criddell 28 days ago

Compilers and linkers do solve this for some languages.

I use C++ and make use of the excellent Boost library. The library is gigantic, but only the parts I use make it into my binary.

lmm 28 days ago

> I've seen a lot of programs using 1 or 2 functions from a library, but, nevertheless, linking the whole library.

Exactly. Because it's easier.

Of course it would be much better to split that library up into smaller libraries so you only linked the part you actually needed, but then people on HN will go crazy about "thousands of dependencies".

zozbot234 28 days ago

Why does this even matter? The system will only ever load code into RAM that's actually used by the final binary, because code is fetched as-needed from disk/mass storage.

lmm 27 days ago

+1

RGBCube 28 days ago

Have the great people of Hacker News never heard of LTO?

BenjiWiebe 26 days ago

Dynamic linking - it's fine. Most of those libraries are already in memory anyways when you run the program.

samantp 28 days ago

Agree, not much insight. Since it was Pike, I started curiously, just to realise that there was not much about what to do about the issue, other than a superficial “choose dependencies wisely”. The doc had already ended :(

bayindirh 28 days ago

Sometimes, even the wisest person in the crowd has to talk simply, not to offer solutions, but spark a discussion.

smoghat 28 days ago

Joseph Tainter's The Collapse of Complex Societies should be required reading. I don't care what your major is. It should be required reading.

smarkov 28 days ago

Remember when we used to be called "web developers"? We then became "software engineers", but forgot the meaning of the word engineer:

> Engineering is the practice of using natural science, mathematics, and the engineering design process to solve technical problems, increase efficiency and productivity, and improve systems.

We don't increase efficiency or improve systems, we just build garbage on top of garbage. Imagine if a structural engineer did the same and built new floors on top of shaky floors. Our buildings would be collapsing on a daily basis.

hnthrow90348765 28 days ago

Poor stewardship of professional standards

But it's hard to rein that in when the money is (was) flowing

flohofwoe 28 days ago

Hardly anything non-obvious or controversial in there, how boring ;)

This one though:

> The true open source way: accept everything that comes

I've never heard this said about open source development, or experienced it in open source projects myself.

PS: My pet theory is that big teams create big code bases in a short time and under bureaucratic code-design rules which leave little room for personal initiative. Also known as 'enterprise software development'.

relistan 28 days ago

Pretty sure it’s a reference to ideas like this: https://felixge.de/2013/03/11/the-pull-request-hack/ which have been popular for a long time in open source.

praptak 28 days ago

> Hardly anything non-obvious or controversial in there, how boring ;)

Good advice tends to be boring (and hard to implement!) ;)

rcxdude 28 days ago

It's quite popular, the 'bazaar' approach in 'the cathedral and the bazaar'. I would say the number of projects with a cathedral approach nowadays is quite low. That said, there is a sliding scale of difficulty of contributions to projects. I think, in general, a project that accepts more contributions is going to be more popular than one that doesn't, amongst users, because users will tend to prioritize features. So if you want people to use your software, you should make it easy to contribute.

(I think for small projects which would like some help, accept-by-default makes a decent amount of sense. For larger ones, you want to have some process, but you also want to make sure that you help new contributors through the process, not just insist they figure it all out themselves)

moefh 28 days ago

> It's quite popular, the 'bazaar' approach in 'the cathedral and the bazaar'.

That's not the usual interpretation of "cathedral" and "bazaar" development models.

The Linux kernel is the poster child of the "bazaar" style, and I don't think anyone sane would describe their model as "accept everything that comes" (for an obvious example, look at what's happening with Rust in the kernel -- and even further back: a lot of people tried adding C++ at various points, and we know what always happens[1]).

"Cathedral" style is more commonly called "closed development" -- an example being the Lua language: it's fully open source (everyone is free to modify and distribute the changes), but the authors almost never show the development between releases and never accept patches.

Neither of those fit "accept everything that comes".

[1] https://lkml.org/lkml/2004/1/20/20

rcxdude 28 days ago

Yeah, it's a continuum: from an explicit "outside contributions not welcome" to "we'll accept more or less any contribution with little review" (zeromq, under the current maintainer, I know is an example of the latter, when originally it was much more the former). Most projects are somewhere in between (Linux I'd say is relatively hostile to new contributions), and the extremes are rarer.

eptcyka 28 days ago

Accept by default is what you do if you want to distribute malware.

rcxdude 27 days ago

It doesn't mean you just click 'merge' on every request without ever looking at the code, just that you don't kick back to the submitter with a pile of extra work to do before you accept a bona-fide effort.

(I don't think there's been any case of an open policy here causing a supply-chain attack)

lawn 28 days ago

Accepting everything that comes in an open source project is a sure way to get scope creep, ever more demanding users, and most likely burnout.

Horrible advice I'd say.

mytailorisrich 28 days ago

I think he is being a little facetious to make a point, based on the open source mantra that there can be no restriction on code modification.

gamedever 28 days ago

It's fashionable to complain about bloat and npm, etc. Rust is going in the same direction.

But the world is more complex. My Atari 800 had 48K and a cassette drive. My PC has a GPU, an audio system, multiple audio outputs, multiple audio inputs, USB, a camera, a mouse, a keyboard; it can run 3 displays, and it has ethernet and wifi.

My Atari 800 connected to almost nothing. At one point I had a 300 baud acoustic modem and a terminal emulator that required me to scroll over a virtual 80x25 display on the Atari's 40x25 text display. It could do max 352x240 pixels in 128 colors.

My PC/Mac now displays 16 million colors; my browser loads hundreds of images on every page in moments - vs my 1989 PC, when downloading a single 800x600 image took minutes.

Unlike my old computers that could only display ASCII, my modern computer displays all the world's languages with ease, and, if turned on, provides input assistance for various languages, including accessibility features.

My browser displays iframes in iframes in iframes, showing all kinds of amazing content from all over the world.

My Mac/iPhone copy and paste between each other, and automagically share wifi passwords. My iPhone does that amazing proximity sharing thing like magic with strangers.

The software I use interfaces with tons of services. Maybe it's email, or Instagram, or Slack, or Square, or Apple Pay, or Google Wallet, or .....

I don't know if I'm making my point well but the point I'm trying to make is our software has tons of features because we need and want tons of features. If you don't add Apple Pay then you potentially lose me as a customer. If your app can't let me send a picture through it then I'm not using it. If I can't share directly from your app I rate you 1 out of 5.

That's not an excuse for bloated webpages. But software is almost always going to NEED more features over time. I think that's inevitable. We keep inventing new things, new services appear, and people want all their software to support them.

I would love it if my Nintendo Switch's screenshot function magically did the Apple clipboard magic, so that when I take a screenshot on my Switch I can instantly paste it on my Mac or iPhone. So much easier than what I have to do now. But that would require "adding a feature"!

flohofwoe28 days ago

> My Atari 800 had 48k and a cassette drive...

Your Atari 800 also had multiple 'GPUs' (CTIA and ANTIC), and a dedicated 'audio processing unit' (POKEY), as well as various input/output ports. Conceptually it was more advanced than the IBM PC up to around the mid-90s, and closer to the modular hardware design we have today, with multiple loosely coupled autonomous coprocessors.

It is kind of interesting how the first popular 8-bit home computer also was the most technically advanced until the Commodore Amiga entered the scene (which was the direct technical successor to the 8-bit Atari, designed by the same people).

Lio28 days ago

> first popular 8-bit home computer

Are you claiming that the Atari 800 was the first popular 8-bit computer or just the first popular Atari computer?

It may depend on where you lived, but the Atari 800 was neither the first nor the most popular 8-bit computer.

I think it could be argued that it wasn't the most advanced either, although they were really cool.

flohofwoe28 days ago

The 800 was released in 1979 which was long before the Speccy, C64, Amstrad CPC and MSX (which I consider the other 'popular' home computers).

(ok, I forgot about the Apple II (1977) - I think that was mainly a 'big in the USA' phenomenon)

AndyMcConachie28 days ago

90% of the features on my iPhone are not interesting to me. Worse yet, every time Apple pushes a new feature they enable it by default on the next upgrade.

Here's a radical thought: "Every feature in a piece of software that I don't want or use is a bug." This is radically user-centric design. It's also completely unworkable.

Why can't I buy a phone that better matches my feature needs? One might expect capitalism and competition to produce many different implementations of different products catered to individual users' needs. Instead we have monopolistic monocultures and feature bloat. Apple is driven to constantly add new features to capture new customers and respond to new features from its few competitors. Rarely does any of this movement serve me as a customer.

I, like the vast majority of iPhone users, rarely see value in new features. Sometimes I do. But probably 5% or less of new features are things I actually want, and I don't need any of them. I'm feature-content, and everything new is just a bug.

blixt28 days ago

I don't really think the tone or the examples of this presentation are all that useful, and while the advice slide[1] seems well-intended, I think most of it is only really knowable post-hoc, because reality is not so clean-cut.

When talking about the speed of computers I also think we should acknowledge the speed of thought - the ease of implementing ideas: the fact that there are so many dependencies means you can quickly make something. But useful prototypes quickly become products, so I can understand some of the author's frustrations.

[1] Advice slide:

Avoid features that add disproportionate cost

Implement things at the right level

Understand and minimize dependencies whenever possible

Maintain your dependency tree religiously. Examine it regularly

Don't use dependencies just to be lazy: understand the costs

Finalize your changes before they land, not after

choobacker28 days ago

I agree with his issues with dependencies.

But I'm not sure about his other stuff.

"Avoid features that add disproportionate cost"

I expect part of the problem here is that it's often not clear what the value of a feature is until it's available to customers.

Even the costs of bloat are unclear. Take his bank website example. Do we really think many bank customers are choosing banks based on their website's latency? Banks compete on things users actually care about, like interest rates or fees.

Lots of software inevitably won't meet our ideal standards, because given the cost of developers it's not worth doing things The Right Way.

graemep28 days ago

Software does matter. It affects time spent and convenience.

I definitely stay with my bank (Lloyds in the UK) partly because they have a good website, and I will not bank with HSBC because their app will not work if you install things from outside the Google App store (and logging into the website needs the app, at least for me at the moment - I think that can be solved).

choobacker28 days ago

> I will not bank with HSBC because their app will not work if you install things from outside the Google App store

I have this requirement too, since I like to use F-Droid.

My point isn't that there are no such users. My point is that product managers in banks don't care about F-Droid users, since there's so few of us that it's not worth them worrying about.

Many websites are giving up Firefox support, and Firefox adoption is much higher than F-Droid.

If a bank app happens to be okay with F-Droid, it's not because they look out for the needs of F-Droid, it's simply by happenstance.

graemep27 days ago

Which is hugely damaging socially, as it makes markets non-competitive. It's wrong in principle.

> If a bank app happens to be okay with F-Droid, it's not because they look out for the needs of F-Droid, it's simply by happenstance

Not true. They have to make a deliberate choice to write an app that refuses to run if you have installed non-Google apps. It needs code written to run those checks. It even lists the apps I have installed.

> Many websites are giving up Firefox support, and Firefox adoption is much higher than F-Droid.

Not that I have noticed.

sesm28 days ago

Tech bloat is usually caused by org bloat, which is caused by the fact that both management and individual tech career paths are aligned with having more teams and departments. This is essentially corruption: individuals advancing their careers at the cost of undermining the organization and the product.

alternatex28 days ago

As an employee of one very large tech company I agree completely. Most people inside these companies are working for themselves. I know this comes across as an obvious reality of employed life, but it's hard to describe how ridiculous it sometimes gets.

Example: a complicated and flashy feature developed almost solely for the purpose of promotions. Everyone with decent product and domain knowledge who argues against it is shoved aside and labeled a hater. A year and a half later, with zero usage of the feature, management decides to sunset it to avoid the expensive maintenance. Everyone involved with developing and promoting the feature celebrates this new decision as if no one's the wiser.

enriquto28 days ago

It's quite ironic that just to look at this document, consisting of plain text with a few low-resolution images, you need to activate javascript in the browser.

araes27 days ago

Good point, hadn't initially noticed that.

Another on that same tangent: ever notice that a company that makes $350,000,000,000 a year in revenue and has 183k employees somehow cannot manage to have a main search page that works without Javascript?

Tried experimenting with turning Javascript off last month just to see how bad the situation was on the modern web. Actually ended up rather surprised and happy. A lot of the modern web actually has reasonable fallbacks and works correctly.

The stuff that was kind of expected to be broken was broken. Facebook doesn't work. Reddit doesn't work. Youtube doesn't work.

Most modern news websites do work though, surprisingly. ABC, NBC, CBS, Reuters, Bloomberg, etc. They won't load the images and the fancy videos in some cases. A few "pending content" issues. In a lot of cases the result is almost preferable though. Snappy, quick page loads. Don't actually care about the lack of image and video loads.

Quite a few of the other sites I read load. Stackoverflow, Slashdot, Hacker News, Wikipedia.

Quite a few searches will even load. Bing will search without Javascript. DuckDuckGo will search without Javascript. Ecosia will search for images, news, and videos, just not webpage content.

For some reason Google won't search. "Turn on JavaScript to keep searching" Main product. What their entire reputation is built off of. Nothing.

Note: This response was written (and edited) with Javascript turned off.

pas26 days ago

old.reddit.com works, no?

it's ironic that Google is so privileged it can completely ignore the people who wish to have a bit more security/privacy.

araes23 days ago

Yup, had not thought to try old.reddit.com.

So chalk another one up as actually functional without Javascript (as long as you know where to look).

They're too busy trying to lock me out of my email all the time to even care.

childintime28 days ago

Bloat starts with our inability to reuse software, on which one premature optimization has a big impact: compilation. When did we forget that it is the target's responsibility to manage its own time and space, and to link everything into it?

Despite the security risks we are still addicted to executables, and we still give the user instructions, while the computer supposedly is so much better at following them. A computer should be able to run anything anywhere it wants to, safely. No exceptions. While the web slowly inches closer to that reality (maybe), Fuchsia was supposed to be here by now and make this vision a reality. But even people here just don't get it.

braza28 days ago

From the presentation

> The more code, more bugs.

I came from a place where we developed ERP software with the entire business logic in SQL stored procedures with some minimal UI, and working with ML nowadays, the number of bugs is more or less the same; but the difference is that it's way simpler to catch bugs these days, plus the whole apparatus of stack traces in modern programming languages helps a lot.

Another fundamental difference is that, in that resource/technology-constrained environment, at least in my experience, there was almost no space for "user features" or "customizations" or "enhancements in UI". The whole concept of the product was the equivalent of a barefoot experience for the users, and since there weren't many alternatives, people tended to stick with the software regardless of how much it sucked.

The average corporate developer in a normal B2C or even B2B environment deals with such a complex set of requirements, and with customers/clients who are so hyper-sophisticated, that, at least for me, it's a modern miracle that technology is delivering and thriving in such an environment.

hulitu27 days ago

> but the difference is that it's way simpler to catch bugs in current days

Well, Microsoft are not able to do this. There are bugs which go unfixed for months or years.

svilen_dobrev28 days ago

> lost something along the way

yes. Lost the need/urgency of not-enough-resources. Lost the constraints that fueled inventiveness and frugality.

And this isn't only software (or hardware). strive-for-(being-able)-to-Waste is everywhere

pards28 days ago

> Yet it takes me 30 seconds to log into my bank

This has little to do with bloat. There's a complex array of backend systems involved in logging you in to your bank: Authentication, authorization, customer system, separate account systems for each account type (debit, credit, loans, investments), etc. The bank's login process has to wait for all of them to respond and collate the data before returning control to the UI.

Rest assured, online banking teams spend a lot of time trying to optimize login performance but they're limited by the slowest of the downstream systems. There's a lot of data on that "account summary" page you see after logging in.

The login process performance could be improved by landing on a simple "welcome" page but the majority of people want to see all of their account balances immediately upon login.

Source: I have worked on several online bank websites and mobile apps.

zild3d28 days ago

> The bank's login process has to wait for all of them to respond

Why? If, e.g., one loan account is slow to respond, it seems like letting you in once auth is successful and showing a loading state for that slow account would be a better experience.

wycy28 days ago

It may be that they've discovered that people feel more comfortable with bank logins taking a while. Maybe if it's too snappy people feel like it's insecure. Sort of like how tax software intentionally adds loading screens to make people feel like it's doing a lot of work behind the scenes even though it's essentially a glorified spreadsheet with instant calculations.

I have no data on this, purely just a theory as to why they may not feel an incentive to make login feel faster.

carlosjobim28 days ago

Why are the backend systems slow?

pards28 days ago

For a variety of reasons

  - Old on-prem mainframes
  - Old mainframes hosted by vendors
  - Network latency (cloud <--> on-prem, cloud <--> vendor)
  - Required sequence of calls (auth -> account list -> account balances)
I'm not defending it. Many of these systems should be updated but the banks are notoriously slow to act and risk-averse.
poulpy12328 days ago

While I agree with the fact that there is a lot of bloat in software development (I would even say that I'm part of the problem), these slides are completely useless.

While I don't know the solution, or even if there is a solution, I know that we didn't arrive at this point by chance, and it's not because developers are lazy or even because companies are greedy (even if that could have played a role).

I want to specifically address one point: the author mentions that in 1971 you could go to your supercomputer and everything was instantaneous. Well, it was not. Very few people had access to those computers, and they were not only carefully selected but also had to train for weeks or months before being allowed to use them. Now everyone can click a button.

luismedel28 days ago

To be honest, the mass of developers is not only lazy but irresponsible and unaware of the risks of being too liberal in the addition of dependencies.

> [...] Now everyone can click on a button

IMHO the code that the button runs after the click is the problem, not the availability of a button.

hresvelgr28 days ago

Writing good software requires principles, and having principles means standing behind them when they're tested. I've seen legacy code cleaned up to a wonderful standard of operation, and greenfield projects immediately fold into spaghetti. It starts and ends with us.

zzbn0028 days ago

> "A lack of vigilance and discipline"

I would agree, and perhaps add that the strictest discipline is needed in answering the question: could we just get on with our job using existing software?

In some places "coding" has become an end in itself.

rob7428 days ago

The "advice" slide is great and probably uncontroversial, but if you add another factor into the mix, it becomes very hard to follow: most modern (web) development uses frameworks: Laravel for PHP, Next/Nuxt/whatever for JS, and so on. Using these frameworks makes development faster (of course only if you intimately know and restrict yourself to the "one true way of doing things" set out by the framework), but by using a framework, you also subscribe to the amount of bloat that the framework authors deem acceptable. And, from what I have seen, that's a lot of bloat.

gregjor28 days ago

I don't use frameworks. I've had to extract customers from their over-commitment to frameworks when they inevitably hit a limitation, had to change the framework core, then got stuck at that version forever.

Ironically PHP works fine as a web framework -- I have never understood the point of adding another layer over it. I guess programmers who don't understand SQL injection or XSS -- basic web dev security stuff -- create some demand for the training wheels. With Rails I understood the need, because Ruby doesn't come out of the box with the web app orientation of PHP, but then why choose Ruby in the first place?

Now I see frameworks promoted for Go, which by design includes everything needed for a full web app. Habits die hard, and some of those habits I can only attribute to either lack of skills or laziness. We now have a world where a significant percentage of so-called web developers can't write HTML, CSS, or Javascript.

lloeki28 days ago

> Ruby doesn't come out of the box with the web app orientation of PHP, but then why choose Ruby in the first place?

It was and is absolutely possible OOTB to do PHP-style webdev with Ruby quite trivially with the Ruby-included CGI and ERB (a.k.a eRuby; which has an `erb` command).

The setup is hardly any different from any other CGI setup, be it with Perl or PHP.

Here's a `foo.rhtml`†:

    % cgi = CGI.new
    <%= cgi.http_header("status" => "OK", "type" => "text/html") %>

    <html><body>
    
    <ul>
    <% [1, 2, 3].each do |a| %>
      <li>
      Hello <%= a %>
      </li>
    <% end %>
    </ul>
    
    <%= cgi.params.inspect %>
    <%= cgi.path_info %>
    <%= cgi.script_name %>
    <%= cgi.request_method %>
    </body></html>

You'd configure the HTTP server to call `erb -r cgi <script>` for `*.rhtml` files and... that's it.

So we mimic a CGI server call:

    env REQUEST_METHOD=GET PATH_INFO=/foo SCRIPT_NAME=foo.rb QUERY_STRING='foo=1&bar=2' HTTP_HOST=foo.com erb -r cgi foo.rhtml

There was even an Apache `mod_ruby` for a time.

It just stopped making sense very quickly once Rack became a thing††, and then you're one step away from Sinatra†††.

Even then you can easily still use pure Rack (or even Sinatra) and ERB in extremely PHP-esque ways.

† `.rhtml` is the conventional extension for ERB files that generate HTML, while `.erb` is the generic one.

†† Bonus is that you don't even have to setup a whole Apache/NGiNX server on your machine; you just spin up the Ruby-included WEBrick so it's even more beginner friendly.

††† Which, contrary to Rails, hardly prescribes anything.

sertraline28 days ago

Unfortunately developers cannot be blamed. Most customers give unrealistic deadlines that you can only meet if you use a framework. Convincing them to wait a little more time to do it right does not work; they will just hire somebody else. I am working for a customer who opted out of a properly working, fast website with a Go backend in favor of Wordpress, because Wordpress happens to have a WYSIWYG editor. So we comply and install an enormous pile of insecure bloat that loads for an eternity... just so the customer can change colors and move blocks around the landing page. It sucks, but having no food on your table sucks more.

reedlaw28 days ago

If developers factored the true cost of features into estimates then a lot less would get built. Instead, SMBs with tight budgets and long lists of feature requests go with the lowest bid or shortest time frame. Developers are incentivized to under-estimate because the business wants to hear that and if one shop won't, the competition will. Almost no one considers the underlying open-source infrastructure as a liability, let alone a duty to contribute towards. The first step to reduce bloat is to normalize estimates which include dependencies, maintenance, and growth.

alexashka28 days ago

I'd agree that on a personal level, vigilance and discipline are important to minimize bloat.

On a collective level - you need courage and love and vigilance and discipline.

You need the courage to go hey, Bill Gates - you are ruining our industry and since I love humanity - I will oppose your kind. I will use my courage to speak up and fight against evil.

Our industry isn't lacking in people who can sit down and write good code. It is severely lacking in people who have courage and love to benefit more than just themselves.

w10-128 days ago

It's helpful to reinforce the need for simplicity.

But more helpful would be some principles.

Like: WHY is hardware and software scaling so different?

- Hardware has been adding parallelism and reducing constraints, i.e., reducing interactions

- Software has been adding interactions and building abstractions that hide physical costs, which hides the combinatorial growth of compute. Developers often don't even know a call depends on the network or the filesystem, let alone how some library interacts with another. Outside of internet-scale algorithms, systems remain mostly usable.

- Companies can mostly transition hardware (via HALs) - x86 to ARM, DDR4 to 5, multiple network adapters - but they fiercely guard and differentiate on software (due to the high switching costs of building expertise in humans), so we get Go and Android and native containers and a million other variations, because companies can't be beholden to software vendors.

Also, the quality problem is entirely distinct. There are real-time trading platforms with near-optimal speeds because speed matters most there. For banking web sites, low maintenance and support costs matter most, so the UI is dumb and constraining. Difficult software can make economic sense.

Finally, "bloat" is what you say to big-bank clients who've proliferated javascript dependencies, when you want to sell them on Go as the do-everything-yourself language. It could also be framing a choice between a small expert (US) team and a large distributed (outsourced) team, when you want to show why less can be more. The scarecrow term carries emotion and an intuition without being subject to counter-analysis.

jillesvangurp28 days ago

I don't agree with the analysis. The cost of the perceived problem is just not there, and the cost of fixing it is disproportionately high.

Banks are a good example. I don't think they are losing much money with their software experience. There are of course some underperformers that are losing customers, but mostly to other banks that are delivering a good-enough experience. Overall, they aren't losing any business here. If anything, they probably spend too much on their software teams: the results are mediocre and take too long.

Corporate websites (also mentioned in the article) are another thing that just doesn't matter that much. They are mostly good enough. It's fine.

Measuring frames per second is a thing for web developers that suffer from OCD. It's not a real world metric that matters at all.

Now a point that might be a bit painful to hear for the author. These slides are a bit ugly, unsophisticated, and bland. And it looks like a hand-crafted thing, and I assume that the author is very proud of the skill level and craft involved. I'm sure the FPS metrics are awesome. But if you are looking to wow your audience with arguments, it helps if you can back them up with a slide deck that reinforces that argument. Browsers can do better than that 1997 unstyled PowerPoint look.

And that maybe points to a thing in our industry where the people complaining the loudest are doing the least to solve their issues. Open source is fine. If you don't like it, make it better. That's how it works. Being an innocent bystander is a choice, not a necessity. If you believe the web is so bloated, make it better. And if these slides are the best you can do on that front, then you'll probably struggle to convince a lot of people.

_kb28 days ago

You may want to review the author's background before dismissing their points for not being in a shiny deck.

I guess your point about it feeling handcrafted holds, if you count the character encoding.

shever7328 days ago

> I guess your point about it feeling handcrafted holds, if you count the character encoding.

Magnificent!

jillesvangurp28 days ago

The argument should hold regardless of who a person is or what they did. I commented being ignorant of this person's reputation and achievements. But so what? It shouldn't matter. I'm targeting the argument, not the person.

I still think his slides aren't great. I wasn't dismissing his weak analysis because of the deck but making the point that he's weakening his own argument with it. And since his points specifically include a lot of comments about the modern web, complete with a rant about frames per second, I think it's a fair point to make.

hulitu28 days ago

> You may want to review the author's background before dismissing their points for not being in a shiny deck.

Well, Wikipedia presents him as the author of Go, so the OP might have a point. /s

OTOH, not knowing about Rob Pike while programming on UNIX sounds strange.

fransje2628 days ago

> These slides are a bit ugly, unsophisticated, and bland.

Woosh.

The irony of asking for shiny, bubbly, interactive slides that would add absolutely nothing to the point being conveyed, when that is exactly part of the argument being made.

peterkelly28 days ago

The author is the co-creator of Go, Plan 9, UTF-8, and the first Unix windowing system.

Whatever you may think of his arguments or slide templates, I don't think "innocent bystander" is an accurate label.

_kb28 days ago

Even in the absence of other work/history, these are literally slides from a talk they gave on the topic, promoting a tool (https://deps.dev) to help more people understand and approach the problem.

JonChesterfield28 days ago

These slides are optimised for signal to noise ratio. Many slides are optimised for entertaining the easily distracted. I'd rather see more like this set.

wainstead28 days ago

> Banks are a good example. I don't think they are losing much money with their software experience.

> Corporate websites (also mentioned in the article) is another thing that just doesn't matter that much.

The opening point is that these sites are stupid slow. And I agree with the author: there is no reason this far into the 21st century a bank's web page should take several seconds to load. It should load in under a second or two.

malcolp28 days ago

As a counterpoint, it would be somewhat ironic if the author of a presentation on bloat added middleware to his slides...

seanhunter28 days ago

If you care more about the format of the slides than the message then it’s fair to say you’re not the target audience and I would be extremely surprised if Rob Pike cares much about your opinion. The dude wrote books[1] with Brian Kernighan for goodness sake.

[1] Which are awesome btw

relistan28 days ago

You probably should have a look here. I think you can safely say he’s done his part to make things better. https://en.m.wikipedia.org/wiki/Rob_Pike

sgarland28 days ago

> Now a point that might be a bit painful to hear for the author. These slides are a bit ugly, unsophisticated, and bland.

Here [0] is HAProxy's website. Well, one of them [1]. The first one contains documentation in glorious plain HTML, and packs more information into it than you can imagine. It is, after all, an incredibly high-performance piece of software that's been around for over two decades. The second one is, I imagine, for people like you. Perhaps that's what's needed for a company to be successful these days - a site with the information necessary to get the thing running and maintain it, and a second that's flashy and glitzy and doesn't scare the suits.

[0]: https://www.haproxy.org

[1]: https://www.haproxy.com

gregjor28 days ago

A few months ago I closed a long-standing account with a bank because they made their app unusable with pop-ups, unwanted "AI" chatbots, a painful start-up time (or timeout). Maybe they don't lose a lot of business but as long as alternatives exist with less enshittification companies might want to take more care about user experience.

You should read up on Rob Pike, he's not some loon screaming for attention. He's put out more -- and more useful -- open source than almost everyone reading HN.

sieve28 days ago

There is a place on the long line between max-performance and max-convenience that a lot of projects could build their house on, but simply don't, because they don't care.

Some of my projects are heavily dependent on parsing strings when the data is initially consumed. I could speed them up by shifting to more efficient parsing/replacement algorithms, but that would make the code too cryptic. So I am ready to give up a lot of performance for ease of understanding, because this is a rare enough activity not to matter in the larger scheme of things. But it would be a huge headache for the users if it happened on the critical path.

How many projects are mindful of these things, you think?

As for the transitive dependencies problem that Pike refers to, this is why I try to avoid anything outside the standard library for interpreted languages like python. cpython is already terrible performance-wise without forcing it to deal with a million lines of additional code from random people.

renox28 days ago

Interesting - software used to be developed as a scarce resource, but he's saying that having too much software leads to many issues.

I wonder if this will (in the long term) lead to the usage of languages with no ambient authority, to improve the security story. In theory it makes sense, but in practice, given that these languages are painful to use for small projects, I'm not sure they'll ever become popular.

hulitu28 days ago

> Layering

Why write something yourself, when you can reuse a library?

Why not fork a library because the author didn't implement my insignificant functions? (Check how many "image" format libraries are installed on a Linux system.)

CADT: No comment on this.

Build systems: Make is bad, so every year they come up with a "better make": imake, autoconf, cmake, ninja, and some names I don't even remember.

vrnvu28 days ago

Related, I think Jon Blow's "Preventing the Collapse of Civilization" talk is the best about the issue: https://www.youtube.com/watch?v=ZSRHeXYDLko

a3w28 days ago

I don't think all facts or opinions in this video were true.

sky222428 days ago

Not going to lie, I was hoping for something more insightful there.

gregjor28 days ago

Maybe not insightful to some, but I know from experience that plenty of people working as programmers do not think about this. It helps to remind them. Unfortunately the Venn diagram of programmers who might benefit from this information and those who know about or read Rob Pike's articles probably looks pretty sparse.

internet_points28 days ago

Eat real food. Not too much. Mostly plants.

Easy rules that are hard to follow :)

gardenhedge28 days ago

I guess it was a talk for juniors/graduates.

I learned about https://deps.dev though which seems useful.

tsss28 days ago

It's Rob Pike... It would be wrong to expect anything different.

locallost28 days ago

I've read some of his articles and watched some talks, and while his presentation style is sometimes not really my thing, I would say it's always, if not thought-provoking, then clear and insightful, making a valid case for his opinion while not repeating the same noise you hear everywhere else.

gregjor28 days ago

I envy his resume and accomplishments too. Especially since we started programming at about the same time, back in the stone age of mainframes.

ripperdoc28 days ago

This type of argument comes up often here on HN, and I would guess many developers, including myself, will agree that things are bloated and bloat is not good. But the solution to this problem has to be in small actionable steps, not built on assuming (or shaming) everyone to put nice principles above the everyday toil of software development.

And it's a darn messy problem to try and solve even in small actionable steps, because I think most of the choices that lead to bloat are choices that make sense at that point in time. E.g. "use a dependency instead of code it myself", "use AI instead of thinking through every angle", "write 2 unit tests instead of the 20 that would give full coverage", "don't write extra tools to measure the speed of everything, it seems fast enough". Taking the route that minimizes bloat can be very time-consuming and demanding on the individual average developer. And any solution we come up with will not give us security guarantees - but any solution that moves the average one point in the right direction is still a good thing!

I sometimes find these types of questions fall into old tropes like "it's JavaScript's fault, better go back to C". But I think that is a fallacy. JavaScript in a browser can easily run millions of ops a second. The big time offenders come from elsewhere, most likely network operations and the way they are used.

Some things that I think would help (and yes, some of them exist, but not as mainstream tools for common tech stacks):

Tools that analyze and handle dependency trees better. We need better insights than just "it's enormous and always changing". A tool that could tell me things like:

- "adding this package will add X kB of code"
- "the package you are adding has often had vulnerabilities"
- "the package you are adding changes very often"
- "this package and pinned version is trusted by many and will not likely need to change"
- "here are your dependencies ranked by how much you use them and how much bloat they contribute"
- "your limited usage of this package shows that you would be better off writing the code yourself"
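The first of these ideas can be sketched without any package-manager integration at all. A minimal toy version (the directory layout and the `node_modules` path are assumptions for illustration) just measures what a freshly installed dependency weighs on disk:

```python
import os

def dir_size_kb(path):
    """Total size of all regular files under path, in kilobytes."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / 1024

def report_new_dependency(pkg_dir):
    """Print the on-disk weight a package adds, e.g. node_modules/left-pad."""
    print(f"adding this package will add {dir_size_kb(pkg_dir):.1f} kB of code")
```

A real tool would of course look at the transitive tree, vulnerability history, and churn rather than raw bytes, but even this crude number is more insight than most install commands give you today.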

Tools that help us understand performance better. Performance monitoring in production is a complicated task in itself (even with something like Sentry) and is still poor at producing actionable insights. I would want tools that tell me things like:

- "this function you are writing is likely to be slow" (due to exponential complexity, sequential slow/network operations, etc.)
- "this function has this time distribution in production", as reported from your performance monitoring system
- "there are faster versions of this code" (e.g. referencing jsperf)
- "this library / package / language feature has this performance characteristic"
- "here are the outliers in the flamegraph generated by this function or line"
- "this code is X% slower than similar solutions"
- making developers load their apps at the average speed users access them (e.g. throttling)
- bots that produce PRs to open source projects, picking off common low-hanging fruit in reducing complexity or increasing performance
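The "tell me when this is slow" item is the easiest to approximate in-process. A rough sketch, assuming a simple wall-clock decorator is good enough and with an arbitrary default threshold, could flag slow calls as they happen:

```python
import functools
import time

def warn_if_slow(threshold_ms=100):
    """Decorator: report any call that takes longer than threshold_ms."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000
            if elapsed_ms > threshold_ms:
                # In a real system this would feed a monitoring pipeline,
                # not stdout.
                print(f"{fn.__name__} took {elapsed_ms:.1f} ms "
                      f"(threshold {threshold_ms} ms)")
            return result
        return wrapper
    return decorate
```

This only catches symptoms after the fact; the more interesting tools in the list above (static complexity analysis, production time distributions) need far more machinery than a decorator.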

Tools to evaluate complexity and tech debt over time:

- Can a tool tell us what the lifetime cost of a solution is?
- How can a development organization make tradeoffs between what's fast to get out the door and what it takes to maintain over time?

thomashabets228 days ago

What started as a comment here on HN grew so large that I might as well make a blog post out of it: https://blog.habets.se/2025/02/Pike-is-wrong-on-bloat.html

Pasted here for your convenience:

--

I’m not surprised to see this from Pike. He’s a NIH extremist. And yes, in this aspect he’s my spirit animal when coding for fun. I’ll avoid using a framework or a dependency because it’s not the way that I would have done it, and it doesn’t do it quite right… for me.

And he correctly recognizes the technical debt that an added dependency involves.

But I would say that he has two big blind spots.

1. He doesn’t recognize that not using the available dependency is also adding huge technical debt. Every line of code you write is code that you have to maintain, forever.

2. The option for most software isn’t “use the dependency” vs “implement it yourself”. It’s “use the dependency” vs “don’t do it at all”. If the latter means adding 10 human years to the product, then most of the time the trade-off makes it not worth doing at all.

He shows a dependency graph of Kubernetes. Great. So are you going to write your own Kubernetes now?

Pike is a good enough coder that he can write his own editor (wikipedia: “Pike has written many text editors”). So am I. I don’t need dependencies to satisfy my own requirements.

But it’s quite different if you need to make a website that suddenly needs ADA support, and now the EU forces a certain cookie behavior, and designers (in collaboration with lawyers) mandate a certain layout of the cookie consent screen, and the third party ad network requires some integration.

And then you’re told that the database needs to be run by the database team, because there’s FIPS certification aspects that you absolutely don’t have time to work on, and data residency requirements with third party auditability feature demands (not requests).

What are you going to do? Demand funding for 100 SWE years to implement it yourself? And in the meantime, just not be able to advertise during BFCM? Not launch the product for 10 years? Just live with the fact that no customer can reach your site if they use Opera on mobile?

So yeah. The website is bloated.

I feel like Pike is saying “yours is the slowest website that I ever regularly use”, to which the answer is “yeah, but you do use it regularly”. If the site hadn’t launched, then you wouldn’t be able to even choose to use it.

And comparing to the 70s. Please. Come on. If you ask a “modern coder” to solve a “1970s problem”, it’s not going to be slow, is it? They could write it in Python and it wouldn’t even be a remotely fair fight.

Software is slower today not because the problems are more complex in terms of compute (though they very, very much are), but because the compute capacity of today simply affords wasting it, so that we are now able to solve complex problems.

Ironically, a lot of slow bloated websites (notably banks and airlines) run on mainframes with code written in… the 1970s! When supposedly men were men, and code was fast? That part we could fix. Just give me 10,000 programmer years, and I'll have us back to square one, except a little bit faster.

People do things because there’s a perceived demand for it. If the demand is “I just like coding”, then as long as you keep coding there’s no failure.

Pike’s technical legacy has very visible scars from these blind spots of his.