The article made me think more deeply about what rubs me the wrong way about the whole movement.
I think there is some inherent tension between being "rational" about things and trying to reason about things from first principles... and the general absolutist tone of the community. The people involved all seem very... full of themselves? They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is". They're the type of people who would be embarrassed not to have an opinion on a topic or to say "I don't know".
In the pre-AI days this was sort of tolerable, but since then... the frothing-at-the-mouth conviction that the world is ending... it just shows a real lack of humility and a lack of acknowledgment that maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected.
Logic is an awesome tool that took us from Greek philosophers to the gates on our computers. The challenge with pure rationalism is checking the first principles that the thinking comes from. Logic can lead you astray if the principles are wrong, or you miss the complexity along the way.
On the missing first principles, look at Aristotle. One of history's greatest logicians came to many false conclusions.
On missing complexity, note that Natural Selection came from empirical analysis rather than first-principles thinking. (It could have come from the latter, but was too complex.) [0]
This doesn't discount logic, it just highlights that answers should always come with provisional humility.
And I'm still a superfan of Scott Aaronson.
[0] https://www.wired.com/story/aristotle-was-wrong-very-wrong-b...
The ‘rationalist’ group being discussed here aren't Cartesian rationalists, who dismissed empiricism; rather, they're Bayesian empiricists. Bayesian probability turns out to be precisely the unique extension of Boolean logic to continuous real probability that Aristotle (nominally an empiricist!) was lacking. (I think they call themselves “rationalists” because of the ideal of a “rational Bayesian agent” in economics.)
However, they have a slogan, “One does not simply reason over the joint conditional probability distribution of the universe.” Which is to say, AIXI is uncomputable, and even AIXI can only reason over computable probability distributions!
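To make the jargon concrete, here is a minimal sketch of the kind of Bayesian update being invoked; the spam-filter framing and the numbers are purely my own illustration, not anything from the post or from how any particular rationalist actually reasons:

    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
    prior = 0.01            # P(H): prior probability that an email is spam
    p_e_given_h = 0.90      # P(E|H): probability of seeing the word "free" if spam
    p_e_given_not_h = 0.10  # P(E|~H): probability of seeing it if not spam

    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)  # total probability of the evidence
    posterior = p_e_given_h * prior / p_e
    print(round(posterior, 3))  # 0.083: the evidence shifts the belief without settling it

The "empiricist" part of the label is that beliefs are supposed to keep moving with observed evidence like this, rather than being derived once and for all from first principles.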
They can call themselves empiricists all they like, it only takes a few exposures to their number to come away with a firm conviction (or, let's say, updated prior?) that they are not.
First-principles reasoning and the selection of convenient priors are consistently preferred over the slow, grinding work of iterative empiricism and the humility to commit to observation before making overly broad theoretical claims.
The former let you seem right about something right now. The latter more often than not lead you to discover you are wrong (in interesting ways) much later on.
Logic is the study of what is true, and also what is provable.
In the most ideal circumstances, these are the same. Logic has been decomposed into model theory (the study of what is true) and proof theory (the study of what is provable). So much of modern-day rationalism is unmoored proof theory. Many of them would do well to read Kant's "The Critique of Pure Reason."
Unfortunately, in the very complex systems we often deal with, what is true may not be provable and many things which are provable may not be true. This is why it's equally important to hone your skills of discernment, and practice reckoning as well as reasoning. I think of it as hearing "a ring of truth," but this is obviously unfalsifiable and I must remain skeptical of myself when I believe I hear this. It should be a guide toward deeper investigation, not the final destination.
Many people are led astray by thinking. It is seductive. It should be more commonly said that thinking is but a conscious stumbling block on the way to unconscious perfection.
>provisional humility.
I hope this becomes the first ever meme with some value. We need a cult... of Provisional Humility.
Must. Increase. The. pH
> Must. Increase. The. pH
Those who do so would be... based?
Basically.
The level of humility in most subjects is low enough to consume glass. We would all benefit from practicing it more arduously.
I was merely adding support to what I thought was fine advice. And it is.
Yup, can't stress the word "tool" enough.
It's a "tool," it's a not a "magic window into absolute truth."
Tools can be good for a job, or bad. Carry on.
looks like I riled up the Rationalists, huh
This is vibe-based, but I think the Rationalists get more vitriol than they deserve. Upon reflecting, my hypothesis for this is threefold:
1. They are a community—they have an in-group, and if you are not one of them you are by definition in the out-group. People tend not to like being in other people's out-groups.
2. They have unusual opinions and are open about them. People tend not to like people who express opinions different than their own.
3. They're nerds. Whatever has historically caused nerds to be bullied/ostracized, they probably have.
> They are a community—they have an in-group, and if you are not one of them you are by definition in the out-group.
The rationalist community is most definitely not exclusive. You can join it by declaring yourself to be a rationalist, posting blogs with "epistemic status" taglines, and calling yourself a rationalist.
The criticisms are not because it's a cool club that won't let people in.
> They have unusual opinions and are open about them. People tend not to like people who express opinions different than their own.
Herein lies one of the problems with the rationalist community: For all of their talk about heterodox ideas and entertaining different viewpoints, they are remarkably lockstep in many of their opinions.
From the outside, it's easy to see how one rationalist blogger plants the seed of some topic and then it gets adopted by the others as fact. A few years ago a rationalist blogger wrote a long series postulating that trace lithium in water was causing obesity. It even got an Astral Codex Ten monetary grant. For years it got shared through the rationalist community as proof of something, even though actual experts picked it apart from the beginning and showed how the author was misinterpreting studies, abusing statistics, and ignoring more prominent factors.
The problem isn't differing opinions, the problem is that they disregard actual expertise and try ham-fisted attempts at "first principles" evaluations of a subject while ignoring contradictory evidence, and they do this very frequently.
> The rationalist community is most definitely not exclusive.
I agree, and didn't intend to express otherwise. It's not an exclusive community, but it is a community, and if you aren't in it you are in the out-group.
> The problem isn't differing opinions, the problem is that they disregard actual expertise and try ham-fisted attempts at "first principles" evaluations of a subject while ignoring contradictory evidence
I don't know if this is true or not, but if it is I don't think it's why people scorn them. Maybe I don't give people enough credit and you do, but I don't think most people care how you arrived at an opinion; they merely care about whether you're in their opinion-tribe or not.
> Maybe I don't give people enough credit and you do, but I don't think most people care how you arrived at an opinion; they merely care about whether you're in their opinion-tribe or not.
Yes, most people don't care how you arrived at an opinion; rather, they care about the practical impact of said opinion. IMO this is largely a good thing.
You can logically push yourself to just about any opinion, even absolutely horrific ones. Everyone has implicit biases and everyone is going to start at a different starting point. The problem with strings of logic for real-world phenomena is that you HAVE to make assumptions. Like, thousands of them. Because real-world phenomena are complex and your model is simple. Which assumptions you choose to make, and in which directions, are completely unknown, even to you, the one making said assumptions.
Ultimately most people aren't going to sit here and try to psychoanalyze why you made the assumptions you made and if you were abused in childhood or deduce which country you grew up in or whatever. It's too much work and it's pointless - you yourself don't know, so how would we know?
So, instead, we just look at the end opinion. If it's crazy, people are just going to call you crazy. Which I think is fair.
Lockstep like this? https://www.lesswrong.com/posts/7iAABhWpcGeP5e6SB/it-s-proba... (a post on Less Wrong, karma score currently +442, versus +102 and +230 for the two posts it cites as earlier favourable LW coverage of the lithium claim -- the comments on both of which, by the way, don't look to me any more positive than "skeptical but interested")
The followup post from the same author https://www.lesswrong.com/posts/NRrbJJWnaSorrqvtZ/on-not-get... is currently at a score of +306, again higher than either of those other pro-lithium-hypothesis posts.
Or maybe this https://substack.com/home/post/p-39247037 (I admit I don't know for sure whether the author considers himself a rationalist, but I found the link via a search for whether Scott Alexander had written anything about the lithium theory, which it looks like he hasn't, which turned this up in the subreddit dedicated to his writing).
Speaking of which, I can't find any sign that they got an ACX grant. I can find https://www.astralcodexten.com/p/acx-grants-the-first-half which is basically "hey, here are some interesting projects we didn't give any money to, with a one-paragraph pitch from each" and one of the things there is "Slime Mold Time Mold" talking about lithium; incidentally, the comments there are also pretty skeptical.
So I'm not really seeing this "gets adopted by the others as fact" thing in this case; it looks to me as if some people proposed this hypothesis, some other people said "eh, doesn't look right to me", and rationalists' attitude was mostly "interesting idea but probably wrong". What am I missing here?
> Lockstep like this? https://www.lesswrong.com/posts/7iAABhWpcGeP5e6SB/it-s-proba... (a post on Less Wrong, karma score currently +442, versus +102 and +230 for the two posts it cites as earlier favourable LW coverage of the lithium claim -- the comments on both of which, by the way, don't look to me any more positive than "skeptical but interested")
That post came out a year later, in response to the absurdity of the situation. The very introduction of that post has multiple links showing how much the SMTM post was spreading through the rationalist community with little question.
One of the links is an Eliezer Yudkowsky blog post praising the work, which now includes an edited-in disclaimer at the top about how he was mistaken: https://www.lesswrong.com/posts/kjmpq33kHg7YpeRYW/briefly-ra...
Pretending that this theory didn't grip the rationalist community all the way to top bloggers like Yudkowsky and Scott Alexander is revisionist history.
HN judges rationality quite severely. I mean, look at this thread about Mr. Beast[1], who it's safe to say is a controversial figure, and notice how the top comments are all pretty charitable. It's pretty funny to take the conversation there and then compare the comments to this article.
Scott Aaronson - in theory someone HN should be a huge fan of, from all reports a super nice and extremely intelligent guy who knows a staggering amount about quantum mechanics - says he likes rationality, and gets less charity than Mr. Beast. Huh?
Most people are trying to be rational (to be sure, with varying degrees of success), and people who aren't even trying aren't really worth having abstract intellectual discussions with. I'm reminded of CS Lewis's quip in a different context that "you might just as well expect to be congratulated because, whenever you do a sum, you try to get it quite right."
Being rational and rationalist are not the same thing. Funnily this sort of false equivalence that relies on being "technically correct" is at the core of what makes them...difficult.
> This is vibe-based
You mean an empirical observation
Three examples of feelings-based conclusions were presented. There is what is so, and there is how you feel about it. By all means be empirical about what you felt, and maybe look into that. "How this made me feel" is how we got the USA we have today.
I've never thought ill of Scott Aaronson and have often admired him and his work when I stumble across it.
However, reading this article about all these people at their "Galt's Gulch", I thought — "oh, I guess he's a rhinoceros now"
https://en.wikipedia.org/wiki/Rhinoceros_(play)
Here's a bad joke for you all — What's the difference between a "rationalist" and "rationalizer"? Only the incentives.
I have always considered Scott Aaronson the least bad of the big-name rationalists. Which makes it slightly funny that he didn't realize he was one until Scott Siskind told him he was.
Reminds me of Simone de Beauvoir and feminism. She wrote the book on (early) feminism, yet didn't consider herself a feminist until much later.
Upvote for the play link - that's interesting and I hadn't heard of it before. Worthy of a top-level post IMO.
I heard of the play originally from Chapter 10 of On Tyranny by Timothy Snyder:
https://archive.org/details/on-tyranny-twenty-lessons-from-t...
Which I did post top-level here on November 7th - https://news.ycombinator.com/item?id=42071791
Unfortunately it didn't get a lot of traction, and dang told me that there wasn't a way to re-up or "second chance" the post due to the HN policy on posts "correlated with political conflict".
Ah, I guess I see his point; I can't see the discussion being about use of metaphor in political fiction rather than whose team is worst.
Still, I'm glad I now know the reference.
As someone who likes both the Rationalist community and the Rust community, it's fascinating to see the parallels in how the Hacker News crowd treats both.
The contempt, the general lack of curiosity and the violence of the bold sweeping statements people will make here are mind-boggling.
Both the Rationalist community and the Rust community are very active in pursuing their goals, and unfortunately, it's far easier to criticize others for doing things than it is to actually do things yourself. Worse yet, if you are not yourself actively doing things, you are far more likely to experience fear when other people are actively doing things as there is always some nonzero chance that they will do things counter to your own goals, forcing you to actively do something lest you fall behind. Alas, people often respond to fear with hatred, especially given the benefit of physical isolation and dissociation from humanity offered by the Internet, and I think that's what you're seeing here on Hacker News.
> the general lack of curiosity
Honestly, I find the Hacker News comments in recent years to be most enlightening because so many comments come from people who spent years immersed in rationalist communities.
For years one of my friend groups was deep into LessWrong and SSC. I've read countless blog posts and other content out of those groups.
Yet every time I write about it, I'm dismissed as an uninformed outsider. It's an interesting group of people who like to criticize and dissect other groups, but they don't take kindly to anyone questioning their own circles.
I'm currently reading Yudkowsky's "Rationality: From AI to Zombies". Not my first try, since the book is just a collection of blog posts and I found it a bit hard to swallow due to its repetitiveness, so I gave up after the first 50 "chapters" the first time I tried. Now I'm enjoying it way more, probably because I'm more interested in the topic now.
For those who haven't delved (ha!) into his work or have been put off by the cultish looks, I have to say that he's genuinely onto something. There are a lot of practical ideas that are pretty useful for everyday thinking ("Belief in Belief", "Emergence", "Generalizing from fiction", etc...).
For example, I recall being in a lot of arguments that are purely "semantical" in nature. You seem to disagree about something, but it's just that both sides aren't really referring to the same phenomenon. The source of the disagreement is just using the same word for different, but related, "objects". This is something that seems obvious, but the kind of thing you only realize in retrospect, and I think I'm much better equipped now to be aware of it in real time.
I recommend giving it a try.
Yeah, the whole community side to rationality is, at best, questionable.
But the tools of thought that the literature describes are invaluable with one very important caveat.
The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.
It is an incredibly easy mistake to make. To make effective use of the tools, you need to become more humble than before you were using them, or you just turn into an asshole who can't be reasoned with.
If you're saying "well actually, I'm right" more often than "oh wow, maybe I'm wrong", you've failed as a rationalist.
> The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.
Well said. Rationalism is about doing rationalism, not about being a rationalist.
Paul Graham was on the right track about that, though seemingly for different reasons (referring to "Keep Your Identity Small").
> If you're saying "well actually, I'm right" more often than "oh wow, maybe I'm wrong", you've failed as a rationalist.
On the other hand, success is supposed to look exactly like actually being right more often.
> success is supposed to look exactly like actually being right more often.
I agree with this, and I don't think it's at odds with what I said. The point is to never stop sincerely believing you could be wrong. That you are right more often is exactly why it's such an easy trap to fall into. The tools of rationality only help as long as you are actively applying them, which requires a certain amount of humility, even in the face of success.
This reminds me of undergrad philosophy courses. After the intro logic/critical thinking course, some students can't resist seeing affirming-the-consequent and post hoc fallacies everywhere (even if more are imagined than not).
Chapter 67. https://www.readthesequences.com/Knowing-About-Biases-Can-Hu... (And since it's in the book, and people know about it, obviously they're not doing it themselves.)
Also that the Art needs to be about something else than itself, and a dozen different things. This failure mode is well known in the community; Eliezer wrote about it to death, and so did others.
Also the Valley of Bad Rationality tag. https://www.lesswrong.com/w/valley-of-bad-rationality
> The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.
It's very telling that some of them went full "false modesty" by naming sites like "LessWrong", when you just know they actually mean "MoreRight".
And in reality, it's just a bunch of "grown teenagers" posting their pet theories online and thinking themselves "big thinkers".
> you just know they actually mean "MoreRight".
I'm not affiliated with the rationalist community, but I always interpreted "Less Wrong" as word-play on how "being right" is an absolute binary: you can either be right, or not be right, while "being wrong" can cover a very large gradient.
I expect the community wanted to emphasize how people employing the specific kind of Bayesian iterative reasoning they were proselytizing would arrive at slightly lesser degrees of wrong than the other kinds that "normal" people would use.
If I'm right, your assertion wouldn't be totally inaccurate, but I think it might be missing the actual point.
Cool, I didn't know the quote, nor that it was inspiration for the name. Thank you.
Sometimes people enjoy being clever not because they want to rub it in your face that you're not, but because it's fun. I usually try not to take it personally when I don't get the joke and strive to do better next time.
>but you just know it comes with a high degree of smugness and false modesty
No; I know no such thing, as I have no good reason to believe it, and plenty of countering evidence.
So much projection.
I think there is an arbitrage going on where STEM types who lack background in philosophy, literature, history are super impressed by basic ideas from those subjects being presented to them by stealth.
Not saying this is you, but these topics have been discussed for thousands of years, so it should at least be surprising if Yudkowsky is breaking new ground.
In AI finetuning, there's a theory that the model already contains the right ideas and skills, and the finetuning just raises them to prominence. Similarly in philosophic pedagogy, there's huge value in taking ideas that are correct but unintuitive and maybe have 30% buy-in and saying "actually, this is obviously correct, also here's an analysis of why you wouldn't believe it anyway and how you have to think to become able to believe it". That's most of what the Sequences are: they take from every field of philosophy the ideas that are actually correct, and say "okay actually, we don't need to debate this anymore, this just seems to be the truth because so-and-so." (Though the comments section vociferously disagrees.)
And it turns out if you do this, you can discard 90% of philosophy as historical detritus. You're still taking ideas from philosophy, but which ideas matters, and how you present them matters. The massive advantage of the Sequences is they have justified and well-defended confidence where appropriate. And if you manage to pick the right answers again and again, you get a system that actually hangs together, and IMO it's to philosophy's detriment that it doesn't do this itself much more aggressively.
For instance, 60% of philosophers are compatibilists. Compatibilism is really obviously correct. "What are you complaining about, that's a majority, isn't that good?" What is wrong with those 40% though? If you're in those 40%, what arguments may convince you? Repeat to taste.
Are there other philosophy- or history-grounded sources that are comparable? If so, I’d love some recommendations. Yudkowsky and others have their problems, but their texts make interesting points, are relatively easy to read and understand, and you can clearly see which real issues they’re addressing. From my experience, alternatives tend to fall into two categories: 1. Genuine classical philosophy, which is usually incredibly hard to read, and after 50 pages I have no idea what the author is even talking about anymore. 2. Basically self-help books that take one idea or very few and repeat them ad nauseam for 200 pages.
Likely the best resource to learn about philosophy is the Stanford Encyclopedia of Philosophy [0]. It's meant to provide a rigorous starting point for learning about a topic, where 1. you won't get bogged down in a giant tome on your first approach and 2. you have references for further reading.
Obviously, the SEP isn't perfect, but it's a great place to start. There's also the Internet Encyclopedia of Philosophy [1]; however, I find its articles to be more hit or miss.
I don't know if there's anything like a comprehensive high-level guide to philosophy that's any good, though of course there are college textbooks. If you want real/academic philosophy that's just more readable, I might suggest Eugene Thacker's "The Horror of Philosophy" series (starting with "In The Dust Of This Planet"), especially if you are a horror fan already.
It's not a nice response but I would say: don't be so lazy. Struggle through the hard stuff.
I say this as someone who had the opposite experience: I had a decent humanities education, but an abysmal mathematics education, and now I am tackling abstract mathematics myself. It's hard. I need to read sections of works multiple times. I need to sit down and try to work out the material for myself on paper.
Any impression that one discipline is easier than another probably just stems from the fact that you had good guides for the one and had the luck to learn it when your brain was really plastic. You can learn the other stuff too, just go in with the understanding that there's no royal road to philosophy just as there's no royal road to mathematics.
People are likely willing to struggle through hard stuff if the applications are obvious.
But if you can't even narrow the breadth of possible choices down to a few paths that can be traveled, you can't be surprised when people take the one that they know that's also easier with more immediate payoffs.
I don't have an answer here either, but after suffering through the first few chapters of HPMOR, I've found that Yudk and other tech-bros posing as philosophers are basically like leaky, dumbed-down abstractions for core philosophical ideas. Just go to the source and read about utilitarianism and deontology directly. Yudk is like the Wix of web development - sure, you can build websites, but you're not gonna be a proper web developer unless you learn HTML, CSS and Javascript. Worst of all, crappy abstractions train you in some actively bad patterns that are hard to unlearn.
It's almost offensive - are technologists so incapable of understanding philosophy that Yudk has to reduce it down to the least common denominator they are all familiar with - some fantasy world we read about as children?
I'd like what the original sources would have written if someone had fed them some speak-clearly pills. Yudkowsky and company may have the dumbing-down problem, but the original sources often have a clarity problem. (That's why people are still arguing about what they meant centuries later. Not just whether they were right - though they argue about that too - but what they meant.)
Even better, I'd like some filtering out of the parts that are clearly wrong.
HPMOR is not supposed to be rigorous. It’s supposed to be entertaining in a way that rigorous philosophy is not. You could make the same argument about any of Camus’ novels, but again that would miss the point. If you want something more rigorous, Yudkowsky has it; it's a bit surprising to me to complain he isn’t rigorous without talking about his rigorous work.
To the STEM-enlightened mind, the classical understanding and pedagogy of such ideas is underwhelming, vague, and riddled with language-game problems, compared to the precision a mathematically-rooted idea has.
They're rederiving all this stuff not out of obstinacy, but because they prefer it. I don't really identify with rationalism per se, but I'm with them on this--the humanities are over-cooked and a humanities education tends to be a tedious slog through outmoded ideas divorced from reality.
If you contextualise the outmoded ideas as part of the Great Conversation [1], and the story of how we reached our current understanding, rather than objective statements of fact, then they become a lot more valuable and worthy of study.
I have kids in high school. We sometimes talk about the difference between the black and white of math or science, and the wishy washy grey of the humanities.
You can be right or wrong in math. You can have an opinion in English.
I don't claim that his work is original (the AI related probably is, but it's just tangentially related to rationalism), but it's clearly presented and is practical.
And, BTW, I could just be ignorant in a lot of these topics, I take no offense in that. Still I think most people can learn something from an unprejudiced reading.
Rationalism largely rejects continental philosophy in favor of a more analytic approach. Yes these ideas are not new, but they’re not really the mainstream stuff you’d see in philosophy, literature, or history studies. You’d have to seek out these classes specifically to find them.
They largely reject analytic philosophy as well. Austin and Whitehead are roughly as detestable to a Rationalist as Foucault and Marx.
Carlyle, Chesterton and Thoreau are about the limit of their philosophical knowledge base.
I think you’re mostly right.
But also that it isn’t what Yudkowsky is (was?) trying to do with it. I think he’s trying to distill useful tools which increase baseline rationality. Religions have this. It’s what the original philosophers are missing. (At least as taught; happy to hear counterexamples.)
I think I'd rather subscribe to an actual religion, than listen to these weird rationalist types of people who seem to have solved the problem that is "everything". At least there is some interesting history to learn about with religion
I would too if I could but organized religions make me uncomfortable even though I admire parts of them. Similar to my admiration you don’t need to like the rationality types or believe in their program to find one or more of their tools useful.
I’ll also respond to the silent downvoters’ apparent disagreement. CFAR holds workshops and a summer camp for teaching rationality tools. In HPMoR, Harry discusses the way he thinks and why. I read it as a way to discuss EY’s views in fiction as much as fiction itself.
For example, I recall being in a lot of arguments that are purely "semantical" in nature.
I believe this is what Wittgenstein called “language games”.

If you're in it just to figure out the core argument for why artificial intelligence is dangerous, please consider reading the first few chapters of Nick Bostrom's Superintelligence instead. You'll get a lot more bang for your buck that way.
Your time would probably be better spent reading his magnum opus, Harry Potter and the Methods of Rationality.
> “You’re [X]?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?”
> “Yes,” I replied, not bothering to correct the “physicist” part.
Didn't read much beyond that part. He'll fit right in with the rationalist crowd...
To be honest, if I encountered Scott Aaronson in the wild I would probably react the same way. The guy is super smart and thoughtful, and can write more coherently about quantum computing than anyone else I'm aware of.
if only he stayed silent on politics...
No actual person talks like that, and if they really did, they’ve taken on the role of a fictional character. Which says a lot about the clientele either way.
I skimmed a bit here and there after that, but this comes off as plain grandiosity. Even the title is a line you can imagine a Hollywood character speaking out loud as they look into the camera, before giving a smug smirk.
I assumed that the stuff in quotes was a summary of the general gist of the conversations he had, not a word for word quote.
I don't think GP objects to the literalness, as much as to the "I am known for always being right and I acknowledge it", which comes off as.. not humble.
I mean, Scott's been wrong on plenty of issues, but of course he is not wont to admit that on his own blog.
Why would you comment on the post if you stopped reading near its beginning? How could your comments on it conceivably be of any value? It sounds like you're engaging in precisely the kind of shallow dismissal the site guidelines prohibit.
Aren't you doing the same thing?
No, I read the comment in full, analyzed its reasoning quality, elaborated on the self-undermining epistemological implications of its content, and then related that to the epistemic and discourse norms we aspire to here. My dismissal of it is anything but shallow, though I am of course open to hearing counterarguments, which you have fallen short of offering.
If a park ranger said "I looked over the first 2% of the park and concluded there's no mountain lions" - that is, made an assessment on the whole from inspection of a narrow segment - I don't think I would take his word on the matter. If OP had more experience to support his statement, he should have included it, rather than writing a shallow, one-sentence dismissal.
Also...
> they gave off some (not all) of the vibes of a cult
...after describing his visit with an atmosphere that sounds extremely cult-like.
The podcast Behind the Bastards described Rationalism not as a cult but as the fertile soil which is perfect for growing cults, leading to the development of cults like the Zizians (both the Rationalists and the Zizians are at pains to emphasize their mutual hostility to one another, but if you're not part of either movement, it's pretty clear how Rationalism can lead to something like the Zizians).
I don't think that podcast has very in-depth observations. It's just another iteration of east coast culture media people who used to be on Twitter a lot, isn't it?
> the fertile soil which is perfect for growing cults
This is true but it's not rationalism, it's just that they're from Berkeley. As far as I can tell if you live in Berkeley you just end up joining a cult.
I lived in Berkeley for a decade and there weren't many people I would say were in a cult. It's actually quite the opposite. There's way more willingness to be weird and do your own thing there.
Most of the rationalists I met in the Bay Area moved there specifically to be closer to the community.
At least one cult originates from the Rationalist movement, the Zizians [1]. A cult that straight up murdered at least four people. And while the Zizian belief system is certainly more extreme than mainstream Rationalist beliefs, it's not that much more extreme.
For more info, the Behind the Bastards podcast [2] did a pretty good series on how the Zizians sprung up out of the Bay area Rationalist scene. I'd highly recommend giving it a listen if you want a non-rationalist perspective on the Rationalist movement.
[1]: https://en.wikipedia.org/wiki/Zizians [2]: https://www.iheart.com/podcast/105-behind-the-bastards-29236...
There's a lot more than one of them. Leverage Research was the one before Zizians.
Those are only named cults though; they just love self-organizing into such patterns. Of course, living in group homes is a "rational" response to Bay Area rents.
Just because there is some weak relation to rationalism doesn't justify guilt by association. You could equally have pointed out that most of the Zizians identified as MtF transsexuals and vegans, and then blamed the latter groups for being allegedly extreme. Which would be no less absurd.
> She may have done some awful shit
Murdering several people is slightly worse than "awful shit".
No, Guru Eliezer Yudkowsky wrote an essay about how people asking "This isn’t a cult, is it?" bugs him, so it's fine actually. https://www.readthesequences.com/Cultish-Countercultishness
Extreme eagerness to disavow accusations of cultishness ... doth the lady protest too much perhaps? My hobby is occasionally compared to a cult. The typical reaction of an adherent to this accusation is generally "Heh, yeah, totally a cult."
Edit: Oh, but you call him "Guru" ... so on reflection you were probably (?) making the same point... (whoosh, sorry).
> Extreme eagerness to disavow accusations of cultishness ... doth the lady protest too much perhaps?
You don't understand how anxious the rationalist community was around that time. We're not talking self-assured confident people here. These articles were written primarily to calm down people who were panickedly asking "we're not a cult, are we" approximately every five minutes.
Hank Hill: Are y'all with the cult?
Cult member: It's not a cult! It's an organization that promotes love and..
Hank Hill: This is it.
I got to that part, thought it was a joke, and then... it wasn't.
Stopped reading thereafter. Nobody speaking like this will have anything I want to hear.
Scott's done a lot of really excellent blogging in the past. Truthfully, I think you risk depriving yourself of great writing if you're willing to write off an author because you didn't like one sentence.
GRRM has famously written some pretty awkward sentences, but it'd be a shame if someone turned down his work for that alone.
Is it not a joke? I’m pretty sure it was.
It doesn’t really read like a joke but maybe. Regardless, I guess I can at least be another voice saying it didn’t land. It reads like someone literally said that to him verbatim and he literally replied with a simple, “Yes.” (That said, while it seems charitable to assume it was a joke but that doesn’t mean it’s wrong to assume that.)
I'm certain it's a joke. Have you seen any Scott Aaronson lecture? He can't help himself from joking in every other sentence
I think the fact that we aren't sure says a lot!
I laughed, definitely read that way to me
If that was a joke, all of it is.
*Guess I’m a rationalist now.
Never ceases to amaze me that the people who are clever enough to always be right are never clever enough to see how they look like complete wankers when telling everyone how they’re always right.
> clever enough to always be right
Oh, see here's the secret. Lots of people THINK they are always right. Nobody is.
The problem is you can read a lot of books, study a lot of philosophy, practice a lot of debate. None of that will cause you to be right when you are wrong. It will, however, make it easier for you to sell your wrong position to others. It also makes it easier for you to fool yourself and others into believing you're uniquely clever.
Sometimes the meta-skill of how you come across while being right is just as important as the correctness itself…
I don't see how that's any more "wanker" than this famous saying of Socrates; Western thought is wankers all the way down.
> Although I do not suppose that either of us knows anything really beautiful and good, I am better off than he is – for he knows nothing, and thinks he knows. I neither know nor think I know.
It's a coping mechanism for autists, mainly.
“I don’t like how they said it” and “I don’t like how this made me feel” is the aspect of the human brain that has given us Trump. As long as the idea that “how you feel about it” is a basis for any decision-making, the world will continue to be fucked. The author’s audience largely understands that “this made me feel” is an indication that introspection is required, and not an indication that the author should be ignored.
Probably the most useful book ever written about topics adjacent to capital-R Rationalism is "Neoreaction, A Basilisk: Essays on and Around the Alt-Right" [1], by Elizabeth Sandifer. Though the topic of the book is nominally the Alt-Right, a lot more of it is about the capital-R Rationalist communities and individuals that incubated the neoreactionary movement that is currently dominant in US politics. It's probably the best book to read for understanding how we got politically and intellectually from where we were in 2010, to where we are now.
https://www.goodreads.com/book/show/41198053-neoreaction-a-b...
If you want a book on the rationalists that's not a smear dictated by a person who is banned from their Wikipedia page for massive NPOV violations, I hear Chivers' The AI Does Not Hate You and The Rationalist's Guide to the Galaxy are good.
(Disclaimer: Chivers kinda likes us, so if you like one book you'll probably dislike the other.)
> Probably the most useful book
You mean "probably the book that confirms my biases the most"
> incubated the neoreactionary movement that is currently dominant in US politics
> Please don't use Hacker News for political or ideological battle. It tramples curiosity.
You are presenting a highly contentious worldview for the sake of smearing an outgroup. Please don't. Further, the smear relies on guilt by association that many (including myself) would consider invalid on principle, and which further doesn't even bear out on cursory examination.
At least take a moment to see how others view the issue. "Reliable Sources: How Wikipedia Admin David Gerard Launders His Grudges Into the Public Record" https://www.tracingwoodgrains.com/p/reliable-sources-how-wik... includes lengthy commentary on Sandifer (a close associate of Gerard)'s involvement with rationalism, and specifically on the work you cite and its biases.
Ironically, bringing this topic up always turns the conversation to ad-hominem attacks about the messenger while completely ignoring the subject matter. That's exactly the type of argument rationalists claim to despise, but it gets brought up whenever inconvenient arguments appear about their own communities. All of the comments dismissing the content because of the author, or refusing to acknowledge the arguments because it feels like a "smear", are admitting an inability to judge an argument on its own merits.
If anyone wants to actually engage with the topic instead of trying to ad-hominem it away, I suggest at least reading Scott Alexander's own words on why he so frequently engages in neoreactionary topics: https://www.reddit.com/r/SneerClub/comments/lm36nk/comment/g...
Some select quotes:
> First is a purely selfish reason - my blog gets about 5x more hits and new followers when I write about Reaction or gender than it does when I write about anything else, and writing about gender is horrible. Blog followers are useful to me because they expand my ability to spread important ideas and network with important people.
> Third is that I want to spread the good parts of Reactionary thought
> Despite considering myself pretty smart and clueful, I constantly learn new and important things (like the crime stuff, or the WWII history, or the HBD) from the Reactionaries. Anything that gives you a constant stream of very important new insights is something you grab as tight as you can and never let go of.
In this case, HBD means "human biodiversity" which is the alt-right's preferred term for racialism, or the division of humans into races with special attention to the relative intelligence of those different races. This is an oddly recurring theme on Scott Alexander's work. He even wrote a coded blog post to his followers about how he was going to deny it publicly while privately holding it to be very correct.
Thanks for the recommendation! I hadn't heard about the book.
The "neoreactionary movement" is definitely not dominant
That book, IMO, reads very much like a smear attempt, and not one done with a good understanding of the target.
The premise, with an attempt to tie capital-R Rationalists to the neoreactionaries through a sort of guilt by association, is frankly weird: Scott Alexander is well-known among the former to be essentially the only prominent figure that takes the latter seriously—seriously enough, that is, to write a large as-well-stated-as-possible survey[1] followed by a humongous point-by-point refutation[2,3]; whereas the “cult leader” of the rationalists, Yudkowsky, is on the record as despising neoreactionaries to the point of refusing to discuss their views. (As far as recent events, Alexander wrote a scathing review of Yarvin’s involvement in Trumpist politics[4] whose main thrust is that Yarvin has betrayed basically everything he advocated for.)
The story of the book’s conception also severely strains an assumption of good faith[5]: the author, Elizabeth Sandifer, explicitly says it was to a large extent inspired, sourced, and edited by David Gerard, a prominent contributor to RationalWiki and r/SneerClub (the “sneerers” mentioned in TFA) and Wikipedia administrator who after years of edit-warring got topic-banned from editing articles about Scott Alexander (Scott Siskind) for conflict of interest and defamation[6] (including adding links to the book as a source for statements on Wikipedia about links between rationalists and neoreaction). Elizabeth Sandifer herself got banned for doxxing a Wikipedia editor during Gerard's earlier edit war at the time of Manning's gender transition, for which Gerard was also sanctioned[7].
[1] https://slatestarcodex.com/2013/03/03/reactionary-philosophy...
[2] https://slatestarcodex.com/2013/10/20/the-anti-reactionary-f...
[3] https://slatestarcodex.com/2013/10/24/some-preliminary-respo...
[4] https://www.astralcodexten.com/p/moldbug-sold-out
[5] https://www.tracingwoodgrains.com/p/reliable-sources-how-wik...
[6] https://en.wikipedia.org/wiki/Wikipedia:Administrators%27_no...
[7] https://en.wikipedia.org/wiki/Wikipedia:Arbitration/Requests...
I always find it interesting that when the topic of rationalists' fixation on neoreactionary topics comes into question, the primary defenses are that it's important to look at controversial ideas and that we shouldn't dismiss novel ideas because we don't like the group sharing them.
Yet as soon as the topic turns to criticisms of the rationalist community, we're supposed to ignore those ideas and instead fixate on the messenger, ignore their arguments, and focus on ad-hominem attacks that reduce their credibility.
It's no secret that Scott Alexander had a bit of a fixation on neoreactionary content for years. The leaked e-mails showed he believed there to be "gold" in some of their ideas and he enjoyed the extra traffic it brought to his blog. I know the rationalist community has been working hard to distance themselves from that era publicly, but dismissing that chapter of the history because it feels too much like a "smear" or because we're not supposed to like the author feels extremely hypocritical given the context.
Well that was a whole thing. I especially liked the existential threat of Cade Metz. But ultimately, I think the great oracle of Chicago got this whole thing right when he said:
-Ism's in my opinion are not good. A person should not believe in an -ism, he should believe in himself. I quote John Lennon, "I don't believe in Beatles, I just believe in me." Good point there. After all, he was the walrus. I could be the walrus. I'd still have to bum rides off people.
> I especially liked the existential threat of Cade Metz.
I am perpetually fascinated by the way rationalists love to dismiss critics by pointing out that they met some people in person and they seemed nice.
It's such a bizarre meme.
Curtis Yarvin went to one of the "Vibecamp" rationalist gatherings, was nice to some prominent Twitter rationalists, and now they are ardent defenders of him on Twitter. Their entire argument is "I met him and he was nice".
It's mind boggling that the rationalist part of their philosophy goes out the window as soon as the lines are drawn between in-group and out-group.
Bringing up Cade Metz is a perennial favorite signal because of how effectively they turned it into a "you're either with us or against us" battle, completely ignoring any valid arguments Cade Metz may have brought to the table. Then you look at how they treat Neoreactionaries and how we're supposed to look past our disdain for them and focus on the possible good things in their arguments, and you realize maybe this entire movement isn't really about truth-seeking as much as they think it is.
> Ism's in my opinion are not good. A person should not believe in an -ism, he should believe in himself
There's an -ism for that.
Actually, a few different ones depending on the exact angle you look at it from: solipsism, narcissism,...
> There's an -ism for that
It's Buddhism.
https://en.wikipedia.org/wiki/Anattā
> Actually, a few different ones depending on the exact angle you look at the it from: solipsism, narcissism,...
That is indeed a problem with it. The Buddhist solution is to make you promise not to do that.
https://en.wikipedia.org/wiki/Bodhicitta
And the (well, a) term for the entire problem is "non-dual awareness".
Just to confirm, this is about:
https://en.wikipedia.org/wiki/Rationalist_community
and not:
https://en.wikipedia.org/wiki/Rationalism
right?
Absolutely everybody names it wrong. The movement is called rationality or "LessWrong-style rationality", explicitly to differentiate it from rationalism the philosophy; rationality is actually in the empirical tradition.
But the words are too close together, so this is about as lost a battle as "hacker".
I don't think "rationality" is a good name for the movement, for the same reason as I wish "effective altruism" had picked a different name: it conflates the goal with the achievement of the goal. A rationalist (in the Yudkowsky sense) is someone who is trying to be rational, in a particular way. But "rationality" means actually being rational.
I don't think it's actually true that rationalists-in-this-sense commonly use "rationality" to refer to the movement, though they do often use it to refer to what the movement is trying to do.
Along these lines I am sort of skimming articles/blogs/websites about Lightcone, LessWrong, etc, and I am still struggling with the question...what do they DO?
Look, it's just an internet community of people who write blog posts and discuss their interests on web forums.
Asking "What do they do?" is like asking "What do Hackernewsers do?"
It's not exactly a coherent question. Rationalists are a somewhat tighter group, but in the end the point stands. They write and discuss their common interests, e.g. the progress of AI, psychiatry stuff, bayesianism, thought experiments, etc.
Twenty years or so ago, Eliezer Yudkowsky, a former proto-accelerationist, realized that superintelligence was probably coming, was deeply unsafe, and that we should do something about that. Because he had a very hard time convincing people of this to-him-obvious fact, he first wrote a very good blog about human reason, philosophy and AI, in order to fix whatever was going wrong in people's heads that caused them to not understand that superintelligence was coming and so on. The group of people who read, commented on and contributed to this blog are called the rationalists.
(You're hearing about them now because these days it looks a lot more plausible than in 2007 that Eliezer was right about superintelligence, so the group of people who've beat the drum about this for over a decade now form the natural nexus around which the current iteration of project "we should do something about unsafe superintelligence" is congealing.)
> that superintelligence was probably coming, was deeply unsafe
Well, he was right about that. Pretty much all the details were wrong, but you can't expect that much so it's fine.
The problem is that it's philosophically confused. Many things are "deeply unsafe", the main example being driving or being anywhere near someone driving a car. And yet it turns out to matter a lot less, and matter in different ways, than you'd expect if you just thought about it.
Also see those signs everywhere in California telling you that everything gives you cancer. It's true, but they should be reminding you to wear sunscreen.
Hang out and talk
"I have come out as a smart good thinking person, who knew"
>liberal zionist
hmmmm
I once was interested in a woman who was really into the effective altruism/rationalism crowd. I went to a few meetings with her but my inner contrarian didn't like it.
Took me a few years to realize how cultish it all felt, and that I am somewhat happy my edgy atheist contrarian personality overrode my dick's thinking with that crowd.
Some of the comments here remind me of online commentary about some place called "the orange site". Always wondered who they were talking about...
Can't stand that place. Those people are all so sure that they're right about everything.
* 20-somethings who are clearly on the spectrum
* Group are "special"
* Centered around a charismatic leader
* Weird sex stuff
Guys we have a cult!
Also:
* Communal living
* Sacred texts & knowledge
* Doomsday predictions
* Guru/prophet lives on the largesse of followers
It's rich for a group that claims to reason based on priors to completely ignore that they possess all the major defining characteristics of a cult.
They seem to have a lot in common with the People's Front of Judea (or the Judean People's Front, for that matter).
These are the people who came up with Roko's Basilisk, Effective Altruism and spawned the Zizians. I think Robert Evans described them not as a cult but as a cult incubator, or something along those lines.
so many of the people i’ve read in these rationalist groups sound like they need a hug and therapy
I feel like I'm witnessing something that Adam Curtis would cover in the last part of The Century of Self, in real time.
There was always an underlying Randian impulse to the EA crowd - as if we could solve any issue if we just got the right minds onto tackling the problem. The black-and-white thinking, groupthink, hero worship and caricature-like literature are all there.
I always wondered: is it her direct influence, or is it just that those characteristics naturally "go together"?
The irony here is that the Rationalist community is made up of the ones who weren't observant enough to notice that "identifying as a Rationalist" is generally not a rational decision.
From what I’ve seen it’s a mix of that, some who avoid the issue, and some who do it intentionally even though they don’t really believe it.
The main things I don’t like about rationalism are aesthetic (the name sucks and misusing the language of Bayesian probability is annoying). Sounds like they are a thoughtful and nice bunch otherwise(?).
I was at Lighthaven that week. The weekend-long LessOnline event Scott references opened what Lighthaven termed "Festival Season", with a summer camp organised for the following five weekdays, and a prediction market & forecasting conference called Manifest the following weekend.
I didn't attend LessOnline since I'm not active on LessWrong and don't identify as a rationalist - but I did attend a GPU programming course in the "summer camp" portion of the week, and the Manifest conference (my primary interest).
My experience generally aligns with Scott's view, the community is friendly and welcoming, but I had one strange encounter. There was some time allocated to meet with other attendees at Manifest who resided in the same part of the world (not the bay area). I ended up surrounded by a group of 5-6 folks who appeared to be friends already, had been a part of the Rationalist movement for a few years, and had attended LessOnline the previous weekend. They spent most of the hour critiquing and comparing their "quality of conversations" at LessOnline with the less Rationalist-y, more prediction market & trading focused Manifest event. Completely unaware or unwelcoming of my presence as an outsider, they essentially came to the conclusion that a lot of the Manifest crowd were dummies and were - on average - "more wrong" than themselves. It was all very strange, cult-y, pseudo-intellectual, and lacking in self-awareness.
All that said, the experience at Summer Camp and Manifest was a net positive, but there is some credence to sneers aimed at the Rationalist community.
He already had a rationalist “coming out” like ages ago. Dude just make up your mind
While this was an interesting and enjoyable read, it doesn't seem to be a “rationalist ‘coming out’”. On the contrary, he's just saying he would have liked going to a ‘rationalist’ meeting.
The last paragraph discusses how he's resisted the label and then he closes with “the rationalists have walked the walk and rationaled the rational, and thus they’ve given me no choice but to stand up and be counted as one of them.”
He’s clearly identifying as a rationalist there
Oh, you're right! I'd add that it's actually the penultimate paragraph of the first of two postscripts appended to the post. I should have read those, and I appreciate the correction.
It's weird that "being interested in philosophy" is like... a movement. My background is in philosophy, but the rationalist vs nonrationalist debate seems like an undergraduate class dispute.
My old roommate worked for Open Phil, and was obsessed with AI Safety and really into Bitcoin. I never was. We still had interesting arguments about it all the time. Most of the time we just argued until we got to the axioms we disagreed on, and that was that.
You don't have to agree with the Rationalist™ perspective to apply philosophically rigorous thinking. You can be friends and allies with them without agreeing with all their views. There are strong arguments for why frequentism may be more applicable than bayesianism in different domains. Or why transhumanism is a pipe dream. They are still conversations that are worthwhile, as long as you're not so confident in your position that you think there's nothing left to learn.
It's encouraging to hear that behind all the internet noise, the real-life community is thriving and full of people earnestly trying to build a better future
I think I'm missing something important.
My understanding of "Rationalists" is that they're followers of rationalism; that is, that truth can be understood only through intellectual deduction, rather than sensory experience.
I'm wondering if this is a _different_ kind of "Rationalist." Can someone explain?
The easiest way to understand their usage of the term "rational" might be to think of it as the negation of the term "irrational" (where the latter refers mostly to cognitive biases). Not as a contrast with "empirical."
It's a terrible name that collides with the older one you're thinking of
I used to snicker at these guys, but I realized I wasn't being humble, or, to be more theologically minded, gracious.
Recognizing we all take a step of faith to move outside of solipsism into a relationship with others should humble us.
Had to stop reading, everyone sounded so awful.
One of my many problems with rationalism is that it generally fails to acknowledge its fundamentally religious character while pronouncing itself superior to all other religions.
Never heard of the man, but that was a fun read. And it looks like a fun club to be part of. Until it becomes unbearable, perhaps. Also raises the chances of getting invited to birthday orgies..? Perhaps I should have stayed in academia..
> Until it becomes unbearable perhaps
Until?
They call themselves rationalists, yet they don't have very rational opinions if you ask them about scientific racism [1]
[1] https://www.astralcodexten.com/p/how-to-stop-worrying-and-le...
I am not sure precisely what is not very rational about that link. Did you have a specific point you were trying to make with it?
Yes, that they're not "rational".
If you take a look at the biodiversity survey here https://reflectivealtruism.com/2024/12/27/human-biodiversity...
About a third of the users at ACX actually support flawed scientific theories that would give group differences in IQ a genetic basis. The Lynn study on IQ is also quite flawed: https://en.m.wikipedia.org/wiki/IQ_and_the_Wealth_of_Nations
If you want to read about human biodiversity, https://en.m.wikipedia.org/wiki/Human_Biodiversity_Institute
As I said, it's not very rational of them to support such theories. And of course, as you scratch the surface, it's the old 20th-century racist theories, supported by people (mostly white men, if I had to guess) claiming to be rational.
Human ethnic groups are measurably different in genetic terms, based on single-nucleotide polymorphisms and allele frequencies. There are multiple PCA plots of the 1000 Genomes dataset which show clear cluster separation based on ancestry:
https://www.researchgate.net/figure/Example-Ancestry-PCA-plo...
We know ethnic groups vary in terms of height, hair color, eye color, melanin, bone density, sprinting ability, lactose tolerance, propensity to diseases like sickle cell anemia, Tay-Sachs, stomach cancer, alcoholism risk, etc. Certain medications need to be dosed differently for different ethnic groups due to the frequency of certain gene variants, e.g. Carbamazepine, Warfarin, Allopurinol.
The fixation index (Fst) quantifies the level of genetic variation between groups: a value of 0 means no differentiation, and 1 is maximal. A 2012 study based on SNPs found an Fst of 0.0050-0.0110 between Finns and Swedes, 0.110 between Chinese and Europeans, and 0.190 between Japanese and Yoruba.
https://pmc.ncbi.nlm.nih.gov/articles/PMC2675054/
A 1994 study based on 120 alleles found the two most distant groups were Mbuti pygmies and Papua New Guineans, at an Fst of 0.4573.
https://en.wikipedia.org/wiki/File:Full_Fst_Average.png
In genome-wide association studies, polygenic scores have been developed to find thousands of gene variants linked to phenotypes like spatial and verbal intelligence, memory, and processing speed. The distribution of these gene variants is not uniform across ethnic groups.
Given that we know there are genetic differences between groups, and observable variation, it stands to reason that there could be a genetic component for variation in intelligence between groups. It would be dogmatic to a priori claim there is absolutely no genetic component, and pretty obviously motivated out of the fear that inequality is much more intractable than commonly believed.
True, nature is egalitarian, although only intracranially.
Well, the rational thing is obviously to be scared of what ideas sound like.
> Rather than judging an individual on their actual intelligence
Actual intelligence is hard to know! However, lots of factors allow you to make a rapid initial estimate of their actual intelligence, which you can then refine as required.
(When the factors include apparent genetic heritage, this is called "racism" and society doesn't like it. But that doesn't mean it doesn't work, just that you can get fired and banned for doing it.)
((This is of course why we must allow IQ tests for hiring; then there's no need to pay attention to skin color, so liberals should be all for it.))
Nothing about the article you posted in your first comment seems racist. You could argue that believing in the conclusions of Richard Lynn’s work makes someone racist, but to support that claim, you’d need to show that those who believe it do so out of willful ignorance of evidence that his science is flawed.
It is debated: just not by serious scholars or academics. (Which doesn't necessarily make it wrong; but "scientific racism is bunk, and its proponents are persuasive" is a model whose high predictive power has served me well, so I believe it's wrong regardless.)
A lot of "rationalists" of this kind are very poorly informed about statistical methodology, a condition they inherit from reading papers written in these pseudoscientific fields about people likewise very poorly informed.
This is a pathology that has not really been addressed in the large, anywhere, really. Very few in the applied sciences who understand statistical methodology "leave their areas" -- and many areas that require it would disappear if it entered.
More charitably, it is really, really hard to tell the difference between a crank kicked out of a field for being a crank and an earnest researcher being persecuted for not toeing the political line, without being an expert in the field in question and familiar with the power structures involved.
A lot of people who like to think of themselves as skeptical could also be categorized as contrarian -- they are skeptical of institutions, and if someone is outside an institution, that automatically gives them a certain credibility.
There are three or four logical fallacies in the mix, and if you throw in confirmation bias because what the one side says appeals to your own prior beliefs, it is really, really easy to convince yourself that you're the steely-eyed rationalist perceiving the world correctly while everyone else is deluded by their biases.
In that essay Scott Alexander more or less says "so Richard Lynn made up numbers about how stupid black and brown people are, but we all know he was right, if only those mean scientists would let us collect the data to prove it." That's the level of thinking most of us moved past in high school, and he is an MD who sees himself as a Public Intellectual! More evidence that thinking too much about IQ makes people stupid.
[flagged]
"Rationalists," the "objectivists" rebranded?
Political affiliation distribution is similar to the general population.
Scott Aaronson, the man who turned scrupulosity into a weapon against his own psyche, is a capital-R rationalist?
Yeah, this surprises absolutely nobody.
"YOUR ATTENTION PLEASE: I have now joined the club everyone assumed I was already a member of."
It's his personal blog, the only people whose attention he's asking for are the people choosing to wander over there to see what he's up to.
Not his fault that people deemed it interesting enough to upvote to the front page of HN.
My eyes started to glaze over after a bit; so what I'm getting here is that there's a group that calls themselves "Rationalists," but in just about every externally meaningful sense they smell like -- perhaps not a cult, but certainly a group with a lot of weird insider/outsider talk that feels far from rational?
Capital-R Rationalism definitely bleeds into cult-like behaviour, even if they haven’t necessarily realised that they’re radicalising themselves.
They’ve already had a splinter rationalist group go full cult, right up to and including the consequent murders and a shoot-out-with-the-cops flameout: https://en.wikipedia.org/wiki/Zizians
Rationalists as a movement remind me of the individuals who claim to be serious about history but are only interested in a very, VERY specific set of six years in one very specific part of the world.
And boy are they extremely interested in ONLY those six years.
https://en.wikipedia.org/wiki/Rationalist_community
"In particular, several women in the community have made allegations of sexual misconduct, including abuse and harassment, which they describe as pervasive and condoned."
There's weird sex stuff; logically, it's a cult.
Most weird sex stuff takes place outside of cults, so that doesn't follow.
I'm so out of the loop. What is the new, special sense of Rationalist over what it might have meant to e.g. Descartes?
These kinds of propositions are determined by history, not by declaration.
Espouse your beliefs, participate in certain circles if you want, but avoid labels unless you intend to do ideological battle with other label-bearers.
Bleh, labels can be restrictive, but guess what labels can also be? Useful.
>These kinds of propositions are determined by history, not by declaration.
A single failed prediction should revoke the label.
The ideal rational person should be a Pyrrhonian skeptic, or at a minimum a Bayesian epistemologist.
Since intuitive and non-rational thinking are demonstrably rational in the face of incomplete information, I guess we’re all rationalists. Or that’s how I’m rationalizing it, anyway.
> A third reason I didn’t identify with the Rationalists was, frankly, that they gave off some (not all) of the vibes of a cult, with Eliezer as guru.
Apart from a charismatic leader, a cult (in the colloquial meaning) needs a business model, and very often a sense of separation from, and lack of accountability to, those outside the cult, which provides a conveniently simpler environment in which the cult's ideas operate. A sort of "complexity filter" at the entry gate.
I'm not sure how the Rationalists compare to those criteria, but I'd be curious to find out.
Ah, so it's like the Order of the October Star: certain people have simply realized that they are entitled to wear it. Or, rather, that they had always been entitled to wear it. Got it.
> “You’re Scott Aaronson?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?”
Give me strength. So much hubris with these guys (and they’re almost always guys).
I would have assumed that a rationalist would look for truth and not correctness.
Oh wait, it’s all just a smokescreen for know-it-alls to show you how smart they are.
That's exactly what Rationalism(tm) is.
The basic trope is showing off how smart you are and what I like to call "intellectual edgelording." The latter is basically a fetish for contrarianism. The big flex is to take a very contrarian position -- according to what one imagines is the prevailing view -- and then defend it in the most creative way possible.
Intellectual edgelording gives us shit like neoreaction ("monarchy is good actually" -- what a contrarian flex!), timeless decision theory, and wild-ass shit like the Zizians, effective altruists thinking running a crypto scam is the best path to maximizing their utility, etc.
Whether an idea is contrarian or not is unrelated to whether it's a good idea or not. I think the fetish for contrarianism might have started with VCs playing public intellectual, since as a VC you make the big bucks when you make a contrarian bet that pays off. But I think this is an out-of-context misapplication of a lesson from investing to the sphere of scientific and philosophical truth. Believing a lot of shitty ideas in the hopes of finding gems is a good way to drive yourself bonkers. "So I believe in the flat Earth, vaccines cause autism, and loop quantum gravity, so I figure one big win in this portfolio makes me a genius!"
Then there's the cults. I think this stuff is to Silicon Valley and tech what Scientology is to Hollywood and the film and music industries.
Thank you for finally making this make sense to me.
Another thing that's endemic in Rationalism is a kind of specialized variety of the Gish gallop.
It goes like this:
(1) Assert a set of priors (with emphasis on the word assert).
(2) Reason from those priors to some conclusion.
(3) Seamlessly, without skipping a beat, take that conclusion as valid because the reasoning appears consistent, and make it part of a new set of priors.
(4) Repeat, or rather recurse since the new set of priors is built on previous iterations.
The entire concept of science is founded on the idea that you can't do that. You have to stop and touch grass, which in science means making observations or doing experiments if possible. You have to see if the conclusion you reached actually matches reality in any meaningful way. That's because reason alone is fragile. As any programmer knows, a single error or a single mistaken prior propagates and renders the entire tree invalid. Do this recursively and one error anywhere in this crystalline structure means you've built a gigantic tower of bullshit.
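To make the fragility concrete, here's a toy calculation (my own illustration, with made-up numbers): if each link in a chain of reasoning is independently 90% likely to be sound, confidence in the final conclusion decays geometrically with the length of the chain.

    p_step = 0.9                      # assumed probability that any single inference step is sound
    for steps in (1, 5, 10, 20):
        print(steps, round(p_step ** steps, 2))
    # 1 0.9
    # 5 0.59
    # 10 0.35
    # 20 0.12

Twenty "pretty solid" steps chained together without an observational check leave a conclusion that is more likely wrong than right.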
I compare it to the Gish gallop because of how enthusiastically they do it, and how by doing it so fast it becomes hard to try to argue against. You end up having to try to counter a firehose of Oh So Very Smart complicated exquisitely reasoned nonsense.
Or you can just, you know, conclude that this entire method of determining truth is invalid and throw the entire thing in the trash.
A good "razor" for this kind of thing is to judge it by its fruit. So far the fruit is AI hysteria, cults like the Zizians, neoreactionary political ideology, Sam Bankman Fried, etc. Has anything good or useful come from any of this?
Rationalists are better called Rationalizationists, really.
Does that mean he read the Harry Potter fanfic?
The HP fanfic is what decisively drove me away from this shitshow years ago. I'm so glad I read that first rather than getting sucked in through another more palatable route.
The problem with rationalism is that we don't have a language to express our thoughts formally enough, nor a compiler to transform that language into something runnable (a platonic AST), nor a machine capable of emulating reality.
Expecting rational thought to correspond to reality is like expecting a 6-million-line program written in a hypothetical programming language invented in the 1700s to run bug-free on a Turing machine.
Tooling matters.
>"frankly, that they gave off some (not all) of the vibes of a cult, with Eliezer as guru. Eliezer writes in parables and koans. He teaches that the fate of life on earth hangs in the balance, that the select few who understand the stakes have the terrible burden of steering the future"
One of the funniest and most accurate turns of phrase in my mind is Charles Stross' characterization of rationalists as "duck-typed Evangelicals". I've come to the conclusion that American atheists just don't exist, in particular Californians. Five minutes after they leave organized religion they're in a techno cult that fuses chosen-people myths, their version of the Book of Revelation, gnosticism, and what have you.
I used to work abroad in Shenzhen for a few years, and despite meeting countless people as interested in and obsessed with technology as those mentioned in this blog post, if not more so, there's just no corollary to this. There's no millenarian obsession over machines taking over the world, no bizarre trust in rationalism, and no cult-like compounds full of socially isolated new-age prophets.
The narcissism in this movement is insufferable. I hope the conditions for its existence will soon pass and give way to something kinder and more learned.
"I'm a Rationalist"
"Here are some labels I identify as"
So they aren't rational enough to understand that first principles don't objectively exist.
They were corrupted by the words of old men, and have built a foundation of understanding on them. This isn't rationality, but rather Reason-based.
I consider Instrumentalism and Bayesian epistemology to be the best we can get towards knowledge.
I'm going to be a bit blunt and not humble at all: this person is a philosophical inferior to me. Their confidence is hubris. They haven't discovered epistemology. There isn't enough skepticism in their claims. They use black-and-white labels and black-and-white claims. I remember when I was confident like the author, but a few pieces of empirical evidence made me realize I was wrong.
"it is a habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not fancy."
Sorry, I haven't followed. What is it that these guys call Rationalism?
https://en.wikipedia.org/wiki/Rationalist_community
Fair warning: when you turn over some of the rocks here you find squirming, slithering things that should not be given access to the light.
And Harry Potter fan fiction.
thanks much
[flagged]
I view this as a political constraint, cf. https://www.astralcodexten.com/p/lifeboat-games-and-backscra.... One's identity as Academic, Democrat, Zionist and so on demands certain sacrifices of you, sometimes of rationality. The worse the failure of empathy and rationality, the better a test of loyalty it is. For epistemic rationality, it would be best to https://paulgraham.com/identity.html, but for instrumental rationality it is not. Consequently, many people are reasonable only until certain topics come up, and it's generally worked around by steering the discussion to other topics.
I don’t really buy this at all: I am more emotionally invested in things that I know more about (and vice versa). If Rationalism breaks down at that point it is essentially never useful.
> I don’t really buy this at all
For what it’s worth, you seem to be agreeing with the person you replied to. Their main point is that this breakdown happens primarily because people identify as Rationalists (or whatever else). Taken from that angle, Rationalism as an identity does not appear to be useful.
My reading of the comment was that there was only a small subset of contentious topics that rationalism is unsuited for. But I think you are correct
And this is precisely the problem with any dogma of rationality. It starts off ostensibly trying to help guide people toward reason but inevitably ends up justifying blatantly shitty social behavior like defense of genocide as "political constraint".
These people are just narcissists who use (often pseudo)intellectualism as the vehicle for their narcissism.
I'm curious how you assess, relatively speaking, the shittiness of defence of genocide versus false claims of genocide.
But that's not my question. My question was between defence of genocide and false accusations of genocide. (Of course actual genocide is "shittier" -- in fact that's a breathtaking understatement!)
Interesting conclusion, since I didn't make a claim either way.
Still, for the record, other independent observers have documented the practices and explained why they don't meet the definition of genocide, John Spencer and Natasha Hausdorff to name two examples. It seems by no means clear that it's valid to make a claim of genocide. I certainly wouldn't unless I was really, really certain of my claim, because to get such a claim wrong is equally egregious to denying a true genocide, in my opinion.
(I've also been somewhat dogmatic and angry about this conflict, in the opposite direction. But I wouldn't call myself a rationalist.)
Anything in particular you want to link to as unreasonable?
[flagged]
I'm not a Rationalist; however, nothing you said in your first paragraph is factual, and therefore the resultant thesis isn't supported. In fact it ignores nearly 2-3000 years of history and a whole bunch of surrounding context.
The 2-3000 years of history are entirely and wholly irrelevant. Especially as history shows clearly that the Palestinians are just as much the descendants of the ancient Israelites as the Jewish diaspora that returned to their modern land after the founding of modern Israel. The old population from before the Roman conquest never disappeared - some departed and formed the diaspora, but most stayed. Some converted to Christianity during this time as well. Later, they were conquered by Mohammed and his Caliphate, and many converted to Islam, but they're still the same people.
No, not genetics, but heritage is a valid, and very commonly used, criterion.
I.e., the following is, I believe, a reasonable argument:
"I should have a right to live in this general patch of land, since my grand-parents lived here. Maybe my parents moved away and I was born somewhere else, but they still had a right to live here and I should have it too. I may have to buy some land to have this right, I'm not saying I should be given land - but I should be allowed to do so. Additionally, it matters that my grand-parents were not invaders to this land. Their parents and grand-parents had also lived here, and so on for many generations."
This doesn't imply genetic heritage necessarily - cultural heritage and the notions of parents are not necessarily genetic. I might have ~0% of the specific DNA of some great-great-grand-parent (or even 0% of my parents' DNA, if I am adopted) - but I'm still their descendant. Now, how far you want to stretch this is very much debatable.
Not interested in discussing that topic here, but that is precisely the kind of category error that would fit right in with the rationalist crowd: GP was talking about human rights, i.e. actual humans, you are talking about nations or peoples, which is an entirely orthogonal concept.
Very well put.
[flagged]
While both sides have been engaged in crimes against humanity, only one is engaged in a violent occupation, by any stretch of the imagination.
[flagged]
What's incredible to me is the political blindness. Surely at this point, "liberal zionists" would at least see the writing on the wall? Apply some Bayesian statistical analysis to popular reactions to unprompted military strikes against Iran or something, they should realize at this point that in 25 years the zeitgeist will have completely turned against this chapter in Israel's history, and properly label the genocide for what it is.
I thought these people were the ones that were all about most effective applications of altruism? Or is that a different crowd?
[dead]
[dead]
[dead]
[flagged]
[flagged]
There is no genocide so you have convinced me to read it.
[flagged]
[flagged]
[flagged]
What the fuck am I reading lmao.
Very Bay Area to assume you invented Bayesian thinking.
This is what rationalism entails: https://plato.stanford.edu/entries/rationalism-empiricism/
That's a different definition of rationalism from what is used here.
It is. But the Rationalists, by taking that name as a label, are claiming that they are what the GP said. They want the prestige/respect/audience that the word gets, without actually being that.
(The rationalists never took that label, it is falsely applied to them. The project is called rationality, not rationalism. Unfortunately, this is now so pervasive that there's no fixing it.)
Sure! Rationality is what Eliezer called his project about teaching people to reason better (more empirically, more probabilistically) in the events I described over here: https://news.ycombinator.com/item?id=44320919 .
I don't know rationalism too well but I think it was a historical philosophical movement asserting you could derive knowledge by reasoning from axioms rather than observation.
The primary difference here is that rationality mostly teaches "use your reason to guide what to observe and how to react to observations" rather than doing away with observations altogether; it's basically an action loop alternating between observation and belief propagation.
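For illustration only, here's a minimal sketch of that observe-and-update loop in Python (a toy coin-bias example of my own, not anything from the Sequences): a discrete posterior over candidate hypotheses gets reweighted and renormalised after each observation.

    hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]                    # candidate values of P(heads)
    posterior = {h: 1 / len(hypotheses) for h in hypotheses}  # uniform prior

    for flip in [1, 1, 0, 1, 1, 1]:                           # 1 = heads, 0 = tails (made-up data)
        for h in hypotheses:
            likelihood = h if flip else (1 - h)
            posterior[h] *= likelihood                        # Bayes: prior times likelihood
        total = sum(posterior.values())
        posterior = {h: p / total for h, p in posterior.items()}  # renormalise

    print(posterior)  # probability mass shifts toward the higher-bias hypotheses

The "rationality" framing then adds a decision step on top: choose the next action (or the next observation to seek) based on the current posterior.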
A prototypical/mathematical example of a pure LessWrong-type "rational" reasoner is Hutter's AIXI (a definition of the "optimal" next step given an input tape and a goal), though it has certain known problems of self-referentiality. Though of course reasoning in this way does not work for humans; a large part of the Sequences is attempts to port mathematically correct reasoning to human cognition.
You can kind of read it as a continuation of early-2000s internet atheism: instead of defining correct reasoning by enumerating incorrect logic, ie. "fallacies", it attempts to construct it positively, by describing what to do rather than just what not to do.
For any speed-runners out there: https://en.wikipedia.org/wiki/Two_Dogmas_of_Empiricism
They remind me of the "Effective Altruism" crowd who get completely wound up in these hypothetical logical thought exercises and end up coming to insane conclusions that they feel trapped in because they got there using pure logic. Not realizing that their initial conditions were highly artificial so any conclusion they reach is only of academic value.
There is a term for this. "Getting stuck up your own butt." It wouldn't be so bad except that said people often take on an air of absolute superiority because they used "only logic" and in their head they can not be wrong. Many people end up thinking like this as teenagers or 20 somethings, but most will have someone in their life who smacks them over the head and tells them to stop being so foolish, but if you have enough money and the Internet you can insulate yourself from that kind of oversight.
The overlap between the Effective Altruism community and the Rationalist community is extremely high. They’re largely the same people. Effective Altruism gained a lot of early attention on LessWrong, and the pessimistic focus on AI existential risk largely stems from an EA desire to avoid “temporal-discounting” bias. The reasoning is something like: if you accept that future people count just as much as current people, and that the number of future people vastly outweighs everyone alive today (or who has ever lived), then even small probabilities of catastrophic events wiping out humanity yield enormous negative expected value. Therefore, nothing can produce greater positive expected value than preventing existential risks—so working to reduce these risks becomes the highest priority.
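As a rough sketch of that arithmetic (all numbers invented for illustration), the expected-value comparison looks like this:

    future_people = 1e16           # assumed number of potential future lives
    lives_saved_directly = 1e6     # e.g. a large, well-understood health intervention
    risk_reduction = 1e-8          # assumed drop in extinction probability from x-risk work

    ev_direct = lives_saved_directly             # 1e6 lives in expectation
    ev_xrisk = risk_reduction * future_people    # 1e8 lives in expectation

    print(ev_direct, ev_xrisk)     # the x-risk term dominates by a factor of 100

The conclusion is driven almost entirely by the assumed inputs, which is where the trouble starts.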
People in these communities are generally quite smart, and it’s seductive to reason in a purely logical, deductive way. There is real value in thinking rigorously and in making sure you’re not beholden to commonly held beliefs. But, like you said, reality is complex, and it’s really hard to pick initial premises that capture everything relevant. The insane conclusions they get to could be avoided by re-checking & revising premises, especially when the argument is going in a direction that clashes with history, real-world experience, or basic common sense.
Intelligence and rational thought is useful, but like any strategy it has its tradeoffs and limitations. No amount of intelligence can overcome the chaos of long time horizons, especially when we're talking about human civilization. IMHO it's reasonable to pick a long-term problem/risk and focus on solving it. But it's pure hubris to think rationality will give you anything approaching high confidence of what the biggest problems and risks actually are on a 20-50 year time horizon, let alone 200-500 years or longer.
The whole reason we even have time to think this way is because we are at the peak of an industrial civilization that has created a level of abundance that allows a lot of people a lot of time to think. But the whole situation that we live in is not stable at all, "progress" could continue, or we could hit a peak and regress. As much as we can see a lot of long-term trajectories (eg. peak oil, global warming), we really have no idea what will be the triggers and inflection points that change the social fabric in ways that are unforeseeable and quickly invalidate whatever prior assumptions all that deep thinking was resting upon. I mean 50 years ago we thought overpopulation was the biggest risk, and that thinking has completely flipped even without a major trajectory change for industrial civilization in that time.
I always liken this to the idea that we're all asteroids floating in space. There's no free will and everything is determined. We just see the whole thing unfold from one conscious perspective.
Emotionally I don’t subscribe to this view. Rationally I do.
My critique for rational people is that they don’t seem to fully take experience into account. It’s assumptions + rationality + experience/data + whatever strong inclinations one has that seems to be the full picture for me.
To the contrary, here's a series of essays on the subject of evolutionary game theory, the incentives created by competition, and its consequences for human wellbeing:
https://www.lesswrong.com/s/kNANcHLNtJt5qeuSS
"Moloch hasn't won" is a lengthy critique of the argument you are making here.
I hesitate to nitpick, but Darwinism (as far as I know) is not really the term to use because Darwin's theory was limited to life on earth. Only later was the concept generalised into "natural selection" or "survival of the fittest".
I'm not sure I entirely understand what you're arguing here, but I absolutely do agree that the most powerful force in the universe is natural selection.
> Therefore, nothing can produce greater positive expected value than preventing existential risks—so working to reduce these risks becomes the highest priority.
Incidentally, the flaw in this theory is in thinking you understand what all the existential risks are.
Suppose you clock "malicious AI" as a huge risk and then hamper AI, but it turns out the bigger risk is not doing space exploration, which AI would have accelerated, because something catastrophic yet already-inevitable is going to happen to the Earth in a few hundred years and if we're not sustainably multi-planetary by then it's all over.
The thing evolution teaches us is that diversity is a group survival trait. Anybody insisting "nobody anywhere should do X" is more likely to cause an ELE than prevent one.
> They even know how to put bounds on the unknowns and their own lack of information.
No they don't. They think they can do this because they've accidentally reinvented the philosophy "logical positivism", which philosophers gave up on because it doesn't work. (This is similar to how they accidentally reinvented reconstructing arguments and called it "steelmanning".)
https://metarationality.com/probability-limitations
The nature of unknowns is that you don't know them.
What's the probability of AI singularity? It has never happened before so you have no priors and any number you assign will be pure speculation.
That's only one flaw in the theory.
There are others, such as the unproven, narcissistic and frankly unlikely-to-be-true assumption that humanity continuing to exist is a net positive in the long run.
In what sense are people in those communities "quite smart"? Stupid is as stupid does. There are plenty of people who get good grades and score highly on standardized tests, but are in fact nothing but pontificating blowhards and useless wankers.
If only I could +1 this more than once! I have learned valuable things occasionally from people in the rationalist community but this overall lack of humility —and strangely blinkered view of humanities and important topics like say history of science relevant to “STEM”—ultimately turned me off to the movement as a whole. And I love science and math! It just shouldn’t belong to people with this (imo) childish model of people, IQ, etc.
The other weird direction it leads is space travel.
If you assume we eventually figure out long distance space travel and humanity spreads across the galaxy, there could in the future be quadrillions of people, growing at some kind of exponential rate. So accelerating the space race by even an hour is equivalent to bringing billions of new souls into existence.
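A back-of-the-envelope version of that claim (all numbers invented) looks something like this:

    future_population = 1e15                # assumed eventual spacefaring population
    hours_gained = 1                        # bring that future forward by one hour
    person_hours = future_population * hours_gained
    hours_per_life = 80 * 365 * 24          # roughly 700,000 hours in an 80-year life
    print(person_hours / hours_per_life)    # ~1.4e9 lifetime-equivalents, i.e. billions

Under that kind of accounting, even tiny accelerations swamp any present-day consideration.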
I don't see how bringing new souls (whatever those are) into existence should naturally qualify as a good thing?
Perhaps you're arguing as an illustration of the way this group of people think, in which case I understand your point.
I'm not familiar with any of these communities. Is there also a general bias towards one side between "the most important thing gets the *most* resources" and "the most important thing gets *all* the resources"? Or, in other words, the most important thing is the only important thing?
IMO it's fine to pick a favorite and devote extra resources to it. But that turns less fine when one also starts working to deprive everything else of any oxygen because it's not your favorite. (And I'm aware that this criticism applies to lots of communities.)
It's not the case. Effective altruists give to dozens of different causes, such as malaria prevention, environmentalism, animal welfare, and (perhaps most controversially) extinction risk. It can't tell you which root values to care about. It just asks you to consider whether the charity is impactful.
Even if an individual person chooses to direct all their donations to a single cause, there's no way to get everyone to donate to a single cause (nor is EA attempting to). Money gets spread around because people have different values.
It absolutely does take some money away from other causes, but only in the sense that all charities do: if you give a lot to one charity, you may have less money to give to others.
The general idea is that on the margin (in the economics sense), more resources should go to the most effective+neglected thing, and the amount of resources I control is approximately zero in a global sense, so I personally should direct all of my personal giving to the highest-impact thing.
Technically "long-termism" should lead them straight to nihilism. Because, eventually, everything will end. One way or another. The odds are just 1. At some point, there are no more future humans. The number of humans are zero. Also, due to the nature of the infinite, any finite thing is essentially a rounding error and not worth concerning oneself with.
I get the feeling these people often want to seem smarter than they are, regardless of how smart they are. And they want to get money to ostensibly "consider these issues", but really they want money for nothing.
If they wanted to do right by the future masses, they should be looking to the things that are affecting us right now. But they treat those issues as if they'll work out in the wash.
> Technically "long-termism" should lead them straight to nihilism. Because, eventually, everything will end. One way or another. The odds are just 1. At some point, there are no more future humans. The number of humans are zero. Also, due to the nature of the infinite, any finite thing is essentially a rounding error and not worth concerning oneself with.
The current sums invested and donated in altruist causes are themselves rounding errors compared to the GDPs of countries, so the revealed preference of those investing and donating to altruist causes is to care about the present as well as the future.
Are you saying that they should give a greater preference to help those who already exist rather than those who may exist in the future?
I see a lot of Peter Singer’s ideas in modern “effective” altruism, but I get the sense from your comment that you don’t think that they have good reasons for doing what they do, or that their reason leads them to support well-meaning but ineffective solutions. I am trying to understand your position without misrepresenting your point or goals. Are you naysaying or do you have an alternative?
https://en.wikipedia.org/wiki/Peter_Singer
I think most everyone can agree with this: Being 100% rigorous and rational, reasoning from first principles and completely discarding received wisdom is a great trait in a philosopher but a terrible trait in a policymaker. Because for the former, exploring ideas for the benefit of future generations is more important than whether they ultimately reach the right conclusion or not.
> Being 100% rigorous and rational, reasoning from first principles
It really annoys me when people say that those religious cultists do that.
They derive their bullshit from faulty, poorly thought out premises.
If you fuck up the very first calculations of the algorithm, it doesn't matter how rigorous all the subsequent steps are. The results are going to be all wrong.
>People in these communities are generally quite smart, and it’s seductive to reason in a purely logical, deductive way. There is real value in thinking rigorously and in making sure you’re not beholden to commonly held beliefs. But, like you said, reality is complex, and it’s really hard to pick initial premises that capture everything relevant. The insane conclusions they get to could be avoided by re-checking & revising premises, especially when the argument is going in a direction that clashes with history, real-world experience, or basic common sense.
They don't even do this.
If you're reasoning in a purely logical and deductive way, it's blatantly obvious that living beings experience way more pain and suffering than pleasure and joy. If you do the math, humanity getting wiped out is, in effect, the best thing that could happen.
Which is why accelerationism that ignores all the AGI risks is the correct strategy, presuming the AGI will either wipe us out (good outcome) or provide technologies that improve the human condition and reduce suffering (good outcome).
Logical and deductive reasoning based on completely baseless and obviously incorrect premises is flat out idiotic.
You can't deprive non-existent people of anything.
And if you do, I hope you're ready for the purely logical, deductive follow-up: every droplet of sperm is sacred and should be used to impregnate.
I read the whole tree of responses under this comment and I could only convince myself that when people have no arguments they try to make you look bad.
Most of the criticisms are just "But they think they are better than us!" and the rest are "But sometimes they are wrong!"
I don't know much about the community and couldn't care less, but their writings have brought me some almost life-saving fresh air in how to think about the world. It is very sad to me to read so many falsely elaborate responses from supposedly intelligent people having their egos hurt, but in the end it reminds me why I like rationalists and don't like most people.
Feels like "they are wrong and smug" is enough reason to dislike the movement
The comment I replied to conceded wrongness and smugness but is still somehow confused why otherwise intelligent ppl dislike the movement. I was hoping to clear it up for them
Extra points for that comment's author implying that people who don't like the wrong and smug movement are unintelligent and protecting their egos, thus personally proving its smugness
Here's a theory of what's happening, both with you here in this comment section and with the rationalists in general.
Humans are generally better at perceiving threats than they are at putting those threats into words. When something seems "dangerous" abstractly, they will come up with words for why---but those words don't necessarily reflect the actual threat, because the threat might be hard to describe. Nevertheless the valence of their response reflects their actual emotion on the subject.
In this case: the rationalist philosophy basically creeps people out. There is something "insidious" about it. And this is not a delusion on the part of the people judging them: it really does threaten them, and likely for good reason. The explanation is something like "we extrapolate from the way that rationalists think and realize that their philosophy leads to dangerous conclusions." Some of these conclusions have already been made by the rationalists---like valuing people far away abstractly over people next door, by trying to quantify suffering and altruism like a math problem (or to place moral weight on animals over humans, or people in the future over people today). Other conclusions are just implied, waiting to be made later. But the human mind detects them anyway as implications of the way of thinking, and reacts accordingly: thinking like this is dangerous and should be argued against.
This extrapolation is hard to put into words, so everyone who tries to express their discomfort misses the target somewhat, and then, if you are the sort of person who only takes things literally, it sounds like they are all just attacking someone out of judgment or bitterness or something instead of for real reasons. But I can't emphasize this enough: their emotions are real, they're just failing to put them into words effectively. It's a skill issue. You will understand what's happening better if you understand that this is what's going on and then try to take their emotions seriously even if they are not communicating them very well.
So that's what's going on here. But I think I can also do a decent job of describing the actual problem that people have with the rationalist mindset. It's something like this:
Humans have an innate moral intuition that "personal" morality, the kind that takes care of themselves and their family and friends and community, is supposed to be sacrosanct: people are supposed to both practice it and protect the necessity of practicing it. We simply can't trust the world to be a safe place if people don't think of looking out for the people around them as a fundamental moral duty. And once those people are safe, protecting more people, such as a tribe or a nation or all of humanity or all of the planet, becomes permissible.
Sometimes people don't or can't practice this protection for various reasons, and that's morally fine, because it's a local problem that can be solved locally. But it's very insidious to turn around and justify not practicing it as a better way to live: "actually it's better not to behave morally; it's better to allocate resources to people far away; it's better to dedicate ourselves to fighting nebulous threats like AI safety or other X-risks instead of our neighbors; or, it's better to protect animals than people, because there are more of them". It's fine to work on important far-away problems once local problems are solved, if that's what you want. But it can't take priority, regardless of how the math works out. To work on global numbers-game problems instead of local problems, and to justify that with arguments, and to try to convince other people to also do that---that's dangerous as hell. It proves too much: it argues that humans at large ought to dismantle their personal moralities in favor of processing the world like a paperclip-maximizing robot. And that is exactly as dangerous as a paperclip-maximizing robot is. Just at a slower timescale.
(No surprise that this movement is popular among social outcasts, for whom local morality is going to feel less important, and (I suspect) autistic people, who probably experience less direct moral empathy for the people around them, as well as to the economically-insulated well-to-do tech-nerd types who are less likely to be directly exposed to suffering in their immediate communities.)
Ironically, paperclip-maximizing robots are exactly the thing that the rationalists are so worried about. They are a group of people who missed, and then disavowed, and now advocate disavowing, this "personal" morality, and unsurprisingly they view the world through a lens that doesn't include it, which means mostly being worried about problems of the same sort. But it provokes a strong negative reaction from everyone who thinks about the world in terms of that personal duty to safety, because that is the foundation of all morality, and is utterly essential to preserve, because it makes sure that whatever else you are doing doesn't go awry.
(edit: let me add that your aversion to the criticisms of rationalists is not unreasonable either. Given that you're parsing the criticisms as unreasonable, which they likely are (because of the skill issue), what you're seeing is a movement with value that seems to be being unfairly attacked. And you're right, the value is actually there! But the ultimate goal here is a synthesis: to get the value of the rationalist movement but to synthesize it with the recognition of the red flags that it sets off. Ignoring either side, the value or the critique, is ultimately counterproductive: the right goal is to synthesize both into a productive middle ground. (This is the arc of philosophy; it's what philosophy is. Not re-reading Plato.) The rationalists are probably morally correct in being motivated to highly-scaling actions e.g. the purview of "Effective Altruism". They are getting attacked for what they're discarding to do that, not for caring about it in the first place.)
(Thank you) I agree, although I do think that the rationalists and EAs are way better than most of the other narrowband groups, as you call them, out there, such as the Metaverse or Crypto people. The rationalists are at least mostly legitimately motivated by morality and not just by a "blow it all up and replace it with something we control" philosophy (which I have come to believe is the belief-set that only a person who is convinced that they are truly powerless comes to). I see the rationalists as failing due to a skill issue as well: because they have so-defined themselves by their rationalism, they have trouble understanding the things in the world that they don't have a good rational understanding of, such as morality. They are too invested in words and truth and correctness to understand that there can be a lot of emotional truth encoded in logical falsehood.
edit: oh, also, I think that a good part of people's aversion to the rationalists is just a reaction to the narrowband quality itself, not to the content. People are well-aware of the sorts of things that narrowband self-justifying philosophies lead to, from countless examples, whether it's at the personal level (an unaccountable schoolteacher) or societal (a genocidal movement). We don't trust a group unless they specifically demonstrate non-narrowbandedness, which means being collectively willing to change their behavior in ways that don't make sense to them. Any movement that co-opts the idea of what is morally justifiable---who says that e.g. rationality is what produces truth and things that run counter to it do not---is inherently frightening.
That's not a mistake I'm making. Assuming you're talking about bog-standard effective altruists---by (claiming to) value the suffering of people far away as the same as those nearby, they're discounting the people around them heavily compared to other people. Compare to anyone else who values their friends and family and community far more than those far away. Perhaps they're not discounting them to less-than-parity---just less than they are for most people.
But anyway this whole model follows from a basic set of beliefs about quantifying suffering and about what one's ethical responsibilities are, and it answers those in ways most people would find very bizarre by turning them into a math problem that assigns no special responsibility to the people around you. I think that is much more contentious and gross to most people than EA thinks it is. It can be hard to say exactly why in words, but that doesn't make it less true.
What creeps me out is that I have no idea of their theory of power: How will they achieve their aims?
Maybe they want to do it in a way I’d consider just: By exercising their rights as individuals in their personal domains and effectively airing their arguments in the public sphere to win elections.
But my intuition is they think democracy and personal rights of the non-elect are part of the problem to rationalize around and over.
Would genuinely love to read some Rationalist discourse on this question.
I don't read these rationalist essays either, but you don't need to be a deep thinker to understand why any rational person would be afraid of AI and the singularity.
The AI will do what it's programmed to do, but its programmers' morality may not match my own. What's more scary is that it may be developed with the morality of a corporation rather than a person. (That is to say, no morals at all.)
I think it's perfectly justifiable to be scared of a very powerful being with no morals stomping around!
but it will also contain infinite love.
Holy heck this is so well put and does the exact thing where it puts into words how I feel which is hard for me to do myself.
yeah, my model of the "us before them" question is that it is almost always globally optimal to cooperate, once a certain level of economic productivity is present. The safety that people are worried about is guaranteed not by maximizing their wealth but by minimizing their chances of death/starvation/conquest. Up to a point this means being strong and subjugating your neighbor (cf most of antiquity?) but eventually it means collaborating with them and including them in your "tribe" and extending your protection to them. I have no respect for anyone who argues to undo this, which is I think basically the ethos of the trump movement: by convincing everyone that they are under threat, they get people to turn on those that are actually working in concert with them (in order to enrich/empower themselves). It is a schema problem: we are so very very far away from an us vs. them world that it requires delusions to believe.
(...that said, progressivism has largely failed in dispelling this delusion. It is far too easy to feel as though progressivism/liberalism exists to prop up power hierarchies and economic disparities because in many ways it does, or has been co-opted to do that. I think on net it does not, but it should be much more cut-and-dry than it is. For that to be the case progressivism would need to find a way to effectively turn on its parasites, that is, rent-extracting capitalism and status-extracting moral elitism).
re: the first part of your reply. I sorta agree but I do think people do more extrapolation than you're saying on their own. The extrapolation is largely based on pattern-matching to known things: we have a rich literature (in the news, in art, in personal experience and storytelling) of failure modes of societies, which includes all kinds of examples of people inventing new moral rationalizations for things and using them to disregard personal morality. I think when people are extrapolating rationalists' ideas to find things that creep them out, they're largely pattern-matching to arguments they've seen in other places. It's not just that they're unknowns. And those arguments are, well, real arguments that require addressing.
And yeah, there are plenty of examples of people being afraid of things that today we think they should not have been afraid of. I tend to think that that's just how things go: it is the arc of social progress to figure out how to change things from unknown+frightening to known+benign. I won't fault anyone for being afraid of something they don't understand, but I will fault them for not being open-minded about it or being unempathetic or being cruel or not giving people chances to prove themselves.
All of this is rendered much more opaque and confusing by the fact that everyone places way too much stock in words, though. (e.g. the OP I was replying to, who was taking all these criticisms of the rationalists at face value). IMO this is a major trend that fucks royally with our ability as a society to make moral progress: we have come to believe that words supplant emotional intuition in a way that wrecks our ability to actually understand what people are upset about (I like to blame this trend for much of the modern political polarization). A small example of this is a case that I think everyone has experienced, which is a person discounting their own sense of creepiness from somebody else because they can't come up with a good reason to explain it and it feels unfair to treat someone coldly on a hunch. That should never have been possible: everyone should be trusting their hunches.
(which may seem to conflict with my preceding paragraph... should you trust your hunches or give people the chance to prove themselves? well, it's complicated, but it also really depends on what the result is. Avoiding someone personally because they creep you out is always fine, but banning their way of life when it doesn't affect you at all or directly harm anyone is certainly not.)
[flagged]
>They remind me of the "Effective Altruism" crowd who get completely wound up in these hypothetical logical thought exercises and end up coming to insane conclusions that they feel trapped in because they got there using pure logic
I have read Effective Altruists like that. But I also remember seeing a lot of money donated to a bunch of really decent sounding causes because someone spent 5 minutes asking themselves what they wanted their donation to maximise, decided on "Lives saved" and figured out who is doing the best at that.
> They remind me of the "Effective Altruism" crowd who get completely wound up in these hypothetical logical thought exercises and end up coming to insane conclusions that they feel trapped in because they got there using pure logic. Not realizing that their initial conditions were highly artificial so any conclusion they reach is only of academic value.
Do you have examples of that? I have a different perception, most of the EAs I've met are very grounded and sharp.
For example the most recent issue of their newsletter: https://us8.campaign-archive.com/?e=7023019c13&u=52b028e7f79...
I'm not sure where there are any “hypothetical logical thought exercises” that “end up coming to insane conclusions” in there.
For the first part where you say “not realizing that their initial conditions were highly artificial so any conclusion they reach is only of academic value” this is quite the opposite of my experience with them. They are very receptive to criticism and reconsider their point of view in reaction to that.
They are generally well-aware of the limits of data-driven initiatives and the dangers of indulging into purely abstract thinking that can lead to conclusions that indeed don't make sense.
The confluence of Bay Area rationalism and academic philosophy means a lot of other EA space is given to discussing hypotheticals in longwinded forum posts, blogs and papers. Some of those are well-trod utilitarian debates, others take it towards uniquely EA arguments like asserting that given that there could be as many as 10^31 future humans, essentially anything which claims to reduce existential risk - no matter how implausible the mechanism - has higher expected value than doing stuff that would certainly save human lives. An apparently completely unironic forum argument asked their fellow EAs to consider the possibility that given various heroic assumptions, the sum total of the suffering caused to mosquitos by anti-malaria nets might in fact be larger than the suffering caused by malaria they prevent. Obviously not a view shared by EAs who donate to antimalaria charities, but absolutely characteristic of the sort of knots EAs like to tie themselves in - it even has its own jokey jargon ('the rebugnant conclusion' and 'taking the train to crazy town') to describe adjacent arguments and the impulse to pursue them.
The newsletter is of course far more to the point than that, but even then you'll notice half of it is devoted to understanding the emotional state and intentions of LLMs...
It is of course entirely possible to identify as an "Effective Altruist" whilst making above-average donations to charities with rigorous efficacy metrics and otherwise being completely normal, but that's not the centre of EA debate or culture....
As Adam Becker shows in his book, EAs started out with the reasonable "give to charity as much as you can, and research which charities do the most good," but have gotten into absurdities like "it is more important to fund rockets than to help starving people or prevent malaria, because maybe an asteroid will hit the Earth, killing everyone, starving or not."
I think this is the key comment so far.
The simple answer is you don't need a "framework" -- plain empathy for the less fortunate is good enough. But if the EA's actually want to do something about malaria (although the Gates Foundation does much, much more in that regard than the Centre for Effective Altruism), more power to them. But as Becker notes from his visits to the Centre, things like malaria and malnutrition are not the primary focus of the centre.
If you value maximizing the number of human lives that are lived, then even “almost certainly impossible” is enough to justify focusing a huge amount of effort on that. Maybe interstellar colonization is a one in a million shot, but it would multiply the number of human lives by billions or trillions or more.
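To make the shape of that reasoning concrete, here is a toy expected-value comparison; every number below is made up for illustration and is not taken from any EA source.

    # Toy expected-value comparison; all numbers are invented for illustration.
    p_colonization = 1e-6        # hypothetical "one in a million shot"
    future_lives = 1e15          # hypothetical lives enabled if it works
    lives_saved_now = 1e5        # hypothetical lives saved by a conventional charity

    ev_longshot = p_colonization * future_lives   # 1e9 expected lives
    ev_certain = 1.0 * lives_saved_now            # 1e5 expected lives

    print(f"long-shot EV: {ev_longshot:,.0f} expected lives")
    print(f"certain EV:   {ev_certain:,.0f} expected lives")
    # The long shot "wins" by four orders of magnitude, which is the style of
    # argument the comments above describe (and criticize).

The point is not that these particular numbers are right, only that once the future-population term is large enough, almost any nonzero probability dominates the comparison.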
One example is Newcomb's problem. It presupposes a ridiculous scenario in which a godlike being acts irrationally, and then people try to base their lives around "winning" a game that will never actually happen to them.
Having just moved to the Bay Area, I've met a few AI "safety researchers" who seem to come from this EA / Rationalist camp, and they all behave more like preachers than thinkers / academics / researchers.
I don't think any "Rationalists" I've ever met would actually consider concepts like the scientific method...
> their initial conditions were highly artificial
There has to be (or ought to be) a name for this kind of epistemological fallacy, where in pursuit of truth, the pursuit of logical sophistication and soundness between starting assumptions (or first principles) and conclusions becomes functionally way more important than carefully evaluating and thoughtfully choosing the right starting assumptions (and being willing to change them when they are found to be inconsistent with sound observation and interpretation).
Yes, there's a name for it. They're dumbasses.
“[...] Clevinger was one of those people with lots of intelligence and no brains, and everyone knew it except those who soon found it out. In short, he was a dope." - Joseph Heller, Catch-22 https://www.goodreads.com/quotes/7522733-in-short-clevinger-...
That may work for literature, but I was hoping for something more precise.
Let's read the thread: "There has to be (or ought to be) a name for this kind of epistemological fallacy, where in pursuit of truth, the pursuit of logical sophistication and soundness between starting assumptions (or first principles) and conclusions becomes functionally way more important than carefully evaluating and thoughtfully choosing the right starting assumptions (and being willing to change them when they are found to be inconsistent with sound observation and interpretation)."
Can people suffer from that impairment? Is that possible? If not, please explain how wrong assumptions can be eliminated without actively looking for them. If the impairment is real, what would you call its victims? Pick your own terminology.
They _are_ the effective altruism crowd.
There is a great recent book by Adam Becker "More Everything Forever" that deals with the overlapping circles of "effective altruists", "rationalists", and "accelerationists". He's not very sympathetic to the movements as he sees them as mostly rationalizing what their adherents wanted to do anyway -- funding things like rockets and AI over feeding the needy because they see the former as helping more people in the future than dealing with real problems today.
Incidentally, a good book on logic is the best antidote to that type of thinking. Once you learn the difference between a valid and a sound argument, and then realize just how ambiguous every English sentence is, the idea that just because you have a logical argument you have something useful in everyday life becomes laughable rather quickly.
I also think the ambiguity of meaning in natural language is why statistical LLMs are so popular with this crowd. You don't need to think about meaning and parsing. Whatever the LLM assumes is the meaning is whatever the meaning is.
The notion that our moral obligation somehow demands we reduce the suffering of wild animals in an ecosystem, living their lives as they have done since predation evolved and as they will do long after humans have ceased to be, is such a wild misunderstanding of who we are and what we are and what the universe is. I love my Bay Area friends. To quote the great Gwen Stefani, “This sh!t is bananas.”
People who confuse the map for the territory.
> They remind me of the "Effective Altruism" crowd
Isn't there a lot of overlap between the two groups?
I recently read a great book that examines these various groups and their commonality: More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity by Adam Becker. Highly recommended.
Except with Effective Altruism (EA), it's not pure logic.
Logic requires properties of metaphysical objectivity.
If you use the true meaning of words, claiming such things are true when in fact they are false would be called irrationality, delusion, sophistry, or fallacy.
> They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is".
Aren't these the people who started the trend of writing things like "epistemic status: mostly speculation" on their blog posts? And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
Are you sure you're not painting this group with an overly-broad brush?
I think this is a valid point. But to some degree both can be true. I often felt when reading some of these types of texts: Wait a second, there is a wealth of thinking on these topics out there; you are not at all situating all your elaborate thinking in a broader context. And there absolutely is willingness to be challenged, and (maybe less so) a willingness to be wrong. But there also is an arrogance that "we are the ones thinking about this rationally, and we will figure this out". As if people hadn't been thinking and discussing and (verbally and literally) fighting over all sorts of adjacent and similar topics in philosophy and sociology and anthropology and ... clubs and seminars forever. And importantly maybe there also isn't as much taste for understanding the limits of vigorous discussion and rational deduction. Adorno and Horkheimer posit a dialectic of rationality and enlightenment, Habermas tries to rebuild rational discourse by analyzing its preconditions. Yet for all the vigorous intellectualism of the rationalists, none of that ever seems to feature even in passing (maybe I have simply missed it...).
And I have definitely encountered "if you just listen to me properly you will understand that I am right, because I have derived my conclusions rationally" in in-person interactions.
On balance, I'd rather have some arrogance and a willingness to be debated and be wrong than a timid need to defer to centuries of established thought, though. The people I've met in person I've always been happy to hang out with and talk to.
That's a fair point. Speaking only for myself, I think I fail to understand why it's important to situate philosophical discussions in the context of all the previous philosophers who have expressed related ideas, rather than simply discussing the ideas in isolation.
I remember as a child coming to the same "if reality is a deception, at least I must exist to be deceived" conclusion that Descartes did, well before I had heard of Descartes. (I don't think this makes me special, it's just a natural conclusion anyone will reach if they ponder the subject). I think it's harmless for me to discuss that idea in public without someone saying "you need to read Descartes before you can talk about this".
I also find my personal ethics are strongly aligned with what Kant espoused. But most people I talk to are not academic philosophers and have not read Kant, so when I want to explain my morals, I am better off explaining the ideas themselves than talking about Kant, which would be a distraction anyway because I didn't learn them from Kant; we just arrived at the same conclusions. If I'm talking with a philosopher I can just say "I'm a Kantian" as shorthand, but that's really just jargon for people who already know what I'm talking about.
I also think that while it would be unusual for someone to (for example) write a guide to understanding relativity without once mentioning Einstein, it also wouldn't be a fundamental flaw.
(But I agree there's certainly no excuse for someone asserting that they're right because they're rational!)
It may be easier to imagine someone trying to derive mathematics all by themselves, since it's less abstract. It's not that they won't come up with anything, it's that everything that even a genius can come up with in their lifetime will be something that the whole of humanity has long since come up with, chewed over, simplified, had a rebellion against, had a counter-rebellion against the rebellion, and ultimately packaged up in a highly efficient manner into a textbook with cross-references to all sorts of angles on it and dozens of elaborations. You can't possibly get through all this stuff all on your own.
The problem is less clear in philosophy than mathematics, but it's still there. It's really easy on your own terms to come up with some idea that the collective intelligence has already revealed to be fatally flawed in some undeniable manner, or at the very least, has very powerful arguments against it that an individual may never consider. The ideas that have survived decades, centuries, and even millennia against the collective weight of humanity assaulting them are going to have a certain character that "something someone came up with last week" will lack.
(That said I am quite heterodox in one way, which is that I'm not a big believer in reading primary sources, at least routinely. Personally I think that a lot of the primary sources noticeably lack the refinement and polish added as humanity chews it over and processes it and I prefer mostly pulling from the result of the process, and not from the one person who happened to introduce a particular idea. Such a source may be interesting for other reasons, but not in my opinion for philosophy.)
Did you discover it from first principles by yourself because it's a natural conclusion anyone would reach if they ponder the subject?
Or because western culture reflects this theme continuously through all the culture and media you've been immersed in since you were a child?
Also the idea is definitely not new to Descartes, you can find echoes of it going back to Plato, so your idea isn't wrong per se. But I think it underrates the effect to which our philosophical preconceptions are culturally constructed.
Odds are good that the millions of people who have also read and considered these ideas have added to what you came up with at 6. Odds are also high that people who have any interest in the topic will probably learn more by reading Descartes and Kant and the vast range of well written educational materials explaining their thoughts at every level. So if you find yourself telling people about these ideas frequently enough to have opinions on how they respond, you are doing both yourself and them a disservice by not bothering to learn how the ideas have already been criticized and extended.
It really depends on why you are having a philosophical discussion. If you are talking among friends, or just because you want to throw interesting ideas around, sure! Be free, have fun.
I come from a physics background. We used to (and still do) have a ton of physicists who decide to dabble in a new field, secure in their knowledge that they are smarter than the people doing it, and that anything worthwhile that has already been thought of they can just rederive ad hoc when needed (economists are the only other group that seems to have this tendency...) [1]. It turned out every time that the people who had spent decades working on, studying, discussing and debating the field in question had actually figured important shit out along the way. They might not have come with the mathematical toolbox that physicists had, and outside perspectives that challenge established thinking to prove itself again can be valuable, but when your goal is to actually understand what's happening in the real world, you can't ignore what's been done.
[1] There even is an xkcd about this:
https://xkcd.com/793/
Here's a very simple explanation as to why it's helpful from a "first principles" style analogy.
Suppose a foot race. Choose two runners of equal aptitude and finite existence. Start one at mile 1 and one at mile 100. Who do you think will get farther?
Not to mention, engaging in human community and discourse is a big part of what it means to be human. Knowledge isn't personal or isolated, we build it together. The "first principles people" understand this to the extent that they have even built their own community of like minded explorers, problem is, a big part of this bond is their choice to be willfully ignorant of large swaths of human intellectual development. Not only is this stupid, it also is a great disservice to your forebears, who worked just as hard to come to their conclusions and who have been building up the edifice of science bit by bit. It's completely antithetical to the spirit of scientific endeavor.
> But there also is an arrogance that "we are the ones thinking about this rationally, and we will figure this out". As if people hadn't been thinking and discussing and (verbally and literally) fighting over all sorts of adjacent and similar topics in philosophy and sociology and anthropology and ... clubs and seminars forever
This is a feature, not a bug, for writers who hold an opinion on something and want to rationalize it.
So many of the rationalist posts I've read through the years come from someone who has an opinion or gut feeling about something, but they want it to be seen as something more rigorous. The "first principles" writing style is a license to throw out the existing research on the topic, including contradictory evidence, and construct an all new scaffold around their opinion that makes it look more valid.
I use the "SlimeTimeMoldTime - A Chemical Hunger" blog series as an example because it was so widely shared and endorsed in the rationalist community: https://slimemoldtimemold.com/2021/07/07/a-chemical-hunger-p... It even received a financial grant from Scott Alexander of Astral Codex Ten
Actual experts were discrediting the series from the first blog post and explaining all of the author's errors, but the community soldiered on with it anyway, eventually making the belief that lithium in the water supply was causing the obesity epidemic into a meme within the rationalist community. There's no evidence supporting this and countless take-downs of how the author misinterpreted or cherry-picked data, but because it was written with the rationalist style and given the implicit blessing of a rationalist figurehead it was adopted as ground truth by many for years. People have been waking up to issues with the series for a while now, but at the time it was remarkable how quickly the idea spread as if it was a true, novel discovery.
You're spot on here, and I think this is probably also why they appeal to programmers and people in software.
I find a lot of people in software have an insufferable tendency to simply ignore entire bodies of prior art, prior research, etc. outside of maybe computer science (and even that can be rare), and yet they act as though they are the most studied participants in the subject, proudly proclaiming their "genius insights" that are essentially restatements of basic facts in any given field that they would have learned if they just bothered to, you know, actually do research and put aside their egos for half a second to wonder if maybe the eons of human activity prior to their precious existence might have led to some decent knowledge.
Yeah, though I think you may be exaggerating how often the "genius insights" rise to the level of correct restatements of basic facts. That happens, but it's not the rule.
This is old engineer / old physicist syndrome.
https://www.smbc-comics.com/?id=2556
I grew up with some friends who were deep into the early roots of online rationalism, even slightly before LessWrong came online. I've been around long enough to recognize the rhetorical devices used in rationalist writings:
> Aren't these the people who started the trend of writing things like "epistemic status: mostly speculation" on their blog posts? And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
There's a lot of in-group signaling in rationalist circles like the "epistemic status" taglines, posting predictions, and putting your humility on show.
This has come full-circle, though, and now rationalist writings are generally pre-baked with hedging, both-sides takes, escape hatches, and other writing tricks that make it easier to claim they weren't entirely wrong in the future.
A perfect example is the recent "AI 2027" doomsday scenario that predicts a rapid escalation of AI superpowers followed by disaster in only a couple years: https://ai-2027.com/
If you read the backstory and supporting blog posts from the authors, they are filled to the brim with hedges and escape hatches. Scott Alexander wrote that it was something like "the 80th percentile of their fast scenario", which means when it fails to come true he can simply say it wasn't actually his median prediction anyway and that they were writing about the fast scenario. I can already predict that the "We were wrong" article will be more about what they got right with a heavy emphasis on the fact that it wasn't their real median prediction anyway.
I think this group relies heavily on the faux-humility and hedging because they've recognized how powerful it is to get people to trust them. Even the comment above is implying that because they say and do these things, they must be immune from the criticism delivered above. That's exactly why they wrap their posts in these signals, before going on to do whatever they were going to do anyway.
Yes, I do think that these hedging statements make them immune from the specific criticism that I quoted.
If you want to say their humility is not genuine, fine. I'm not sure I agree with it, but you are entitled to that view. But to simultaneously be attacking the same community for not ever showing a sense of maybe being wrong or uncertain, and also for expressing it so often it's become an in-group signal, is just too much cognitive dissonance.
> That's a strawman argument. At no point did I "attack the community for not ever showing a sense of maybe being wrong or uncertain".
Ok, let's scroll up the thread. When I refer to "the specific criticism that I quoted", and when you say "implying that because they say and do these things, they must be immune from the criticism delivered above": what do you think was the "criticism delivered above"? Because I thought we were talking about contrarian1234's claim to exactly this "strawman", and you so far have not appeared to disagree with me that this criticism was invalid.
If you're putting up evidence about how people were wrong in their predictions, I suggest actually pointing at predictions that were wrong, rather than at recent predictions about the future whose resolution you disagree over. If you're putting up evidence about how people make excuses for failing predictions, I suggest actually showing them doing so, rather than projecting that they will do so and blaming them for your projection.
It's been a while since I've engaged in rationalist debates, so I forgot about the slightly condescending, lecturing tone that comes out when you disagree with rationalist figureheads. :) You could simply ask "Can you provide examples" instead of the "If you ____ then I suggest ____" form.
My point wasn't to nit-pick individual predictions, it was a general explanation of how the game is played.
Since Scott Alexander comes up a lot, a few randomly selected predictions that didn't come true:
- He predicted at least $250 million in damages from Black Lives Matter protests.
- He predicted Andrew Yang would win the 2021 NYC mayoral race with 80% certainty (he came in 4th place)
- He gave a 70% chance to Vitamin D being generally recognized as a good COVID treatment
These are just random samples from the first blog post that popped up on Google: https://www.astralcodexten.com/p/grading-my-2021-predictions
It's also noteworthy that a lot of his predictions are about his personal life, his own blogging actions, or [redacted] things. These all get mixed in with a small number of geopolitical, economic, and medical predictions, with the net result of bringing his overall accuracy up.
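As a minimal sketch of that pooling effect, with entirely hypothetical numbers (not Scott Alexander's actual record) and using a plain hit rate rather than proper calibration scoring:

    # Hypothetical grading data: many easy predictions pooled with a few hard ones.
    easy = [True] * 45 + [False] * 5   # e.g. personal/blogging calls, 90% correct
    hard = [True] * 4 + [False] * 6    # e.g. geopolitical/medical calls, 40% correct

    def accuracy(outcomes):
        return sum(outcomes) / len(outcomes)

    print(f"easy-only accuracy: {accuracy(easy):.0%}")          # 90%
    print(f"hard-only accuracy: {accuracy(hard):.0%}")          # 40%
    print(f"pooled accuracy:    {accuracy(easy + hard):.0%}")   # ~82%

The pooled figure looks respectable even though the hard calls, taken alone, are worse than a coin flip.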
Weirdly enough, both can be true. I was tangentially involved in EA in the early days, and have some friends who were more involved. Lots of interesting, really cool stuff going on, but there was always latent insecurity paired with overconfidence and elitism as is typical in young nerd circles.
When big money got involved, the tone shifted a lot. One phrase that really stuck with me is "exceptional talent". Everyone in EA was suddenly talking about finding, involving, hiring exceptional talent at a time where there was more than enough money going around to give some to us mediocre people as well.
In the case of EA in particular, circlejerks lead to idiotic ideas even when paired with rationalist rhetoric, so they bought mansions for team building (how else are you getting exceptional talent), praised crypto (because they are funding the best and brightest), and started caring a lot about shrimp welfare (no one else does).
> caring a lot about shrimp welfare (no one else does).
Ah. They are working out ecology from first principles, I guess?
I feel like a lot of the criticism of EA and rationalism does boil down to some kind of general criticism of naivete and entitlement, which... is probably true when applied to lots of people, regardless of whether they espouse these ideas or not.
It's also easier to criticize obviously doomed/misguided efforts at making the world a better place than to think deeply about how many of the pressing modern day problems (environmental issues, extinction, human suffering, etc.) also seem to be completely intractable, when analyzed in terms of the average individual's ability to take action. I think some criticism of EA or rationalism is also a reaction to a creeping unspoken consensus that "things are only going to get worse" in the future.
>I think some criticism of EA or rationalism is also a reaction to a creeping unspoken consensus that "things are only going to get worse" in the future.
I think it's that combined with the EA approach to it which is: let's focus on space flight and shrimp welfare. Not sure which side is more in denial about the impending future?
I have no belief any particular individual can do anything about shrimp welfare more than they can about the intractable problems we do face.
I don't think this validates the criticism that "they don't really ever show a sense of[...] maybe I'm wrong".
I think that sentence would be a fair description of certain individuals in the EA community, especially SBF, but that is not the same thing as saying that rationalists don't ever express epistemic uncertainty, when on average they spend more words on that than just about any other group I can think of.
>they bought mansions for team building
They bought one mansion to host fundraisers with the super-rich, which I believe is an important correction. You might disagree with that reasoning as well, but it's definitely not as described.
Ah, fair enough! I had heard the "hosting wealthy donors" as the primary motivation, but it appears to be secondary. My bad.
>As far as I know it's never hosted an impress-the-oligarch fundraiser
As far as I know, they only hosted 3 events there before deciding to sell, so this is low-information.
> both can be true
Yes! It can be true both that rationalists tend, more than almost any other group, to admit and try to take account of their uncertainty about things they say and that it's fun to dunk on them for being arrogant and always assuming they're 100% right!
>Are you sure you're not painting this group with an overly-broad brush?
"Aren't these the people who"...
> And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
What's the value of that if it doesn't appear to be genuinely applied to their own ideas? What you described is otherwise just another form of the exact kind of self-congratulation often (reasonably, IMO) lobbed at these "people".
They're behind Anthropic and were behind OpenAI being a nonprofit. They're behind the friendly AI movement and effective altruism.
They're responsible for funneling huge amounts of funding away from domain experts (effective altruism in practice means "Oxford math PhD writes a book report about a social sciences problem they've only read about and then defunds all the NGOs").
They're responsible for moving all the AI safety funding away from disparate impact measures to "save us from skynet" fantasies.
I don't see how this is a response to what I wrote. Can you explain?
Yeah. It's not that they never express uncertainty so much as they like to express uncertainty as arbitrarily precise and convenient-looking expected value calculations which often look like far more of a rhetorical tool to justify their preferences (I've accounted for the uncertainty and even given a credence as low as 14.2% I'm still right!) than a decision making heuristic...
> They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is".
Have you ever read Scott Alexander's blog (Slate Star Codex, now Astral Codex Ten)? It's full of doubt and self-questioning. The guy even keeps a public list of his mistakes:
https://www.astralcodexten.com/p/mistakes
I'll admit my only touchpoint to the "rationalist community" is this blog, but I sure don't get "full of themselves" from that. Quite the contrary.
I've always seen the breathless Singularitarian worrying about AI Alignment as a smokescreen to distract people from thinking clearly about the more pedestrian hazards of AI that isn't self-improving or superhuman, from algorithmic bias, to policy-washing, to energy costs and acceleration of wealth concentration. It also leads to so-called longtermism - discounting the benefits of solving current real problems and focusing entirely on solving a hypothetical one that you think will someday make them all irrelevant.
My feeling has been that it’s a lot of people that work on B2B SaaS that are sad they hadn’t gotten the chance to work on the Manhattan Project. Be around the smartest people in your field. Contribute something significant (but dangerous! And we need to talk about it!) to humanity. But yeah computer science in the 21st century has not turned out to be as interesting as that. Maybe just as important! But Jeff Bezos important, not Richard Feynman important.
"Overproduction of elites" is the expression.
The Singularitarians were breathlessly worrying 20+ years ago, when AI was absolute dogshit - Eliezer once stated that Doug Lenat was incautious in launching Eurisko because it could've gone through a hard takeoff. I don't think it's just an act to launder their evil plans, none of which at the time worked.
Yeah, people were generally terrified of this stuff back before you could make money off of it.
Fair. OpenAI totally use those arguments to launder their plans, but that saga has been more Silicon Valley exploiting longstanding rationalist beliefs for PR purposes than rationalists getting rich...
Eliezer did once state his intention to build "friendly AI", but seems to have been thwarted by caring more about first-order reasoning on how AI decision theory should work than about building something that actually did work, even when others figured out the latter bit.
yep, the biggest threat posed by AI comes from the capitalists who want to own it.
I actually think the people developing AI might well not get rich off it.
Instead, unless there's a single winner, we will probably see the knowledge of how to train big LLMs and make them perform well diffuse throughout a large pool of AI researchers, with the hardware to train models reasonably close to the SotA becoming quite accessible.
I think the people who will benefit will be the owners of ordinary but hard-to-dislodge software firms, maybe those that have a hardware component. Maybe firms like Apple, maybe car manufacturers. Pure software firms might end up having AI assisted programmers as competitors instead, pushing margins down.
This is of course pretty speculative, and it's not reality yet, since firms like Cursor etc. have high valuations, but I think this is what you'd get from the probable pressure if it keeps getting better.
It smacks of a goldrush. The winners will be the people selling shovels (nVidia) and housing (AWS). It may also be the guides showing people the mountains (Cursor, OpenAI, etc).
I suspect you'll see a few people "win" or strike it rich with AI, the vast majority will simply be left with a big bill.
Exact same thing happened with fiber optic cable layers in the late 1990s. On exactly the same routes!
And today the railroad system in the USA sucks compared to other developed countries and even China.
It turns out that boom-and-bust capitalism isn’t great for building something that needs to evolve over centuries.
Perhaps American AI efforts will one day be viewed similarly. “Yeah, they had an early rush, lots of innovation, high valuations, and robber barons competing. Today it’s just stale old infra despite the high-energy start.”
Just checked.
The problem is the railroads were purchased by the winners. Who turned out to be the existing winners. Who then went on to continue to win.
On the one hand, I guess that's just life here in reality.
On the other, man, reality sucks sometimes.
Or the propagandists that use it
They won't be allowed to use it unless they serve the capitalists who own it.
It's not social media. It's a model the capitalists train and own. Best the rest of us will have access to are open source ones. It's like the difference between trying to go into court backed by google searches as opposed to Lexis/Nexis. You're gonna have a bad day with the judge.
Here's hoping the open source stuff gets trained on quality data rather than reddit and 4chan. Given how the courts are leaning on copyright, and lack of vetted data outside copyright holder remit, I'm not sanguine about the chances of parity long term.
The propagandists serve the capitalists, so it's all the same.
> ...as a smokescreen to distract people from thinking clearly about the more pedestrian hazards of AI that isn't self-improving or superhuman,
Anything that can't be self-improving or superhuman almost certainly isn't worthy of the moniker "AI". A true AI will be born into a world that has already unlocked the principles of intelligence. Humans in that world would be capable themselves of improving AI (slowly), but the AI itself will (presumably) run on silicon and be a quick thinker. It will be able to self-improve, rapidly at first, and then more rapidly as its increased intelligence allows for even quicker rates of improvement. And if not superhuman initially, it would soon become so.
We don't even have anything resembling real AI at the moment. Generative models are probably some blind alley.
> We don't even have anything resembling real AI at the moment. Generative models are probably some blind alley.
I think that the OP's point was that it doesn't matter whether it's "real AI" or not. Even if it's just a glorified auto-correct system, it's one that has the clear potential to overturn our information/communication systems and our assumptions about individuals' economic value.
I think when the GP says "our assumptions about individuals' economic value." they mean half the workforce becoming unemployed because the auto corrector can do it cheaper.
That's going to be a swift kick to your economy, no matter how strong.
Yeah the "rational" part always seemed a smokescreen for the ability to produce and ingest their own and their associates methane gases.
I get it, I enjoyed being told I'm a super genius always right quantum physicist mathematician by the girls at Stanford too. But holy hell man, have some class, maybe consider there's more good to be done in rural Indiana getting some dirt under those nails..
The meta with these people is “my brilliance comes with an ego that others must cater to.”
I find it sadly hilarious to watch academic types fight over meaningless scraps of recognition like toddlers wrestling for a toy.
That said, I enjoy some of the rationalist blog content and find it thoughtful, up to the point where they bravely allow their chain of reasoning to justify antisocial ideas.
It's a conflict as old as time. What do you do when an argument leads to an unexpected conclusion? I think there are two good responses: "There's something going on here, so let's dig into it," or, "There's something going on here, but I'm not going to make time to dig into it." Both equally valid.
In real life, the conversation too often ends up being, "This has to be wrong, and you're an obnoxious nerd for bothering me with it," versus, "You don't understand my argument, so I am smarter, and my conclusions are brilliantly subversive."
Might kind of point to real-life people having too much of what is now called "rationality", and very little of what used to be called "wisdom"?
Wisdom tends to resemble shallow aphorisms despite being framed as universal. Rather than interrogating wisdom's relevance or depth, many people simply repeat it uncritically as a shortcut to insight. This says more about how people use wisdom than about the content itself, but I believe that behavior contributes to our perception of the importance of wisdom.
It frequently reduces complex problems into comfortable oversimplifications.
Maybe you don't think that is real wisdom, and maybe that's sort of your point, but then what does real wisdom look like? Should wisdom make you considerate of the multiple contexts it does and doesn't affect? Maybe the issue is we need to better understand how to evaluate and use wisdom. People who truly understand a piece of wisdom should communicate deeply rather than parroting platitudes.
Also to be frank, wisdom is a way of controlling how others perceive a problem, and is a great way to manipulate others by propping up ultimatums or forcing scope. Much of past wisdom is unhelpful or highly irrelevant to modern life.
e.g. "Good things come to those who wait."
Passive waiting rarely produces results. Initiative, timing, and strategic action tend to matter more than patience.
It feels like a shield of sorts, "I am a rationalist therefore my opinion has no emotional load, it's just facts bro how dare you get upset at me telling xyz is such-and-such you are being irrational do your own research"
but I don't know enough about it, I'm just trolling.
The problem with trying to reason everything from first principles is that most things didn't actually come about that way.
Both our biology and other complex human affairs like societies and cultures evolved organically over long periods of time, responding to their environments and their competitors, building bit by bit, sometimes with an explicit goal but often without one.
One can learn a lot from unicellular organisms, but won’t probably be able to reason from them all the way to an elephant. At best, if we are lucky, we can reason back from the elephant.
>The problem with trying to reason everything from first principles is that most things didn't actually come about that way.
This is true for science and rationalism itself. Part of the problem is that "being rational" is a social fashion or fad. Science is immensely useful because it produces real results, but we don't really do it for a rational reason - we do it for reasons of cultural and social pressures.
We would get further with rationalism if we remembered or maybe admitted that we do it for reasons that make sense only in a complex social world.
Yes, and if you read Popper that’s exactly how he defined rationality / the scientific method: to solve problems of life.
A lot of people really need to be reminded of this.
I originally came to this critique via Heidegger, who argues that enlightenment thinking essentially forgets / obscures Being itself, a specific mode of which you experience at this very moment as you read this comment, which is really the basis of everything that we know, including science, technology, and rationality. It seems important to recover and deepen this understanding if we are to have any hope of managing science and technology in a way that is actually beneficial to humans.
Reducibility is usually a goal of intellectual pursuits? I don't see that as a fault.
Ok. A lot of things are very 'reducible' but information is lost. You can't extend back from the reduction to the original domain.
Reduce a computer's behavior to its hardware design, state of RAM, and physical laws. All those voltages make no sense until you come up with the idea of stored instructions, division of the bits into some kind of memory space, etc. You may say, you can predict the future of the RAM. And that's true. But if you can't read the messages the computer prints out, then you're still doing circuits, not software.
Is that reductionist approach providing valuable insight? YES! Is it the whole picture? No.
This warning isn't new, and it's very mainstream. https://www.tkm.kit.edu/downloads/TKM1_2011_more_is_differen...
'Reducibility' is a property that, if present, makes problems tractable or possibly practical.
What you are mentioning is called western reductionism by some.
In the western world it does map to Plato etc, but it is also a problem if you believe everything is reducible.
Under the assumption that all models are wrong, but some are useful, it helps you find useful models.
If you consider Laplacian determinism as a proxy for reductionism, Cantor diagonalization and the standard model of QM are counterexamples.
Russell's paradox is another lens into the limits of Plato, which the PEM assumption is based on.
Those common a priori assumptions have value, but are assumptions which may not hold for any particular problem.
"Reductionist" is usually used as an insult. Many people engaged in intellectual pursuits believe that reductionism is not a useful approach to studying various topics. You may argue otherwise, but then you are on a slippery slope towards politics and culture wars.
> protein folding (reduction to quantum chemistry),
I am not sure in what sense folding simulations are reducible to quantum chemistry. There are interesting 'hybrid' approaches where some (limited) quantum calculations are done for a small part of the structure - usually the active site I suppose - and the rest is done using more standard molecular mechanics/molecular dynamics approaches.
Perhaps things have progressed a lot since I worked in protein bioinformatics. As far as I know, even extremely short simulations at the quantum level were not possible for systems with more than a few atoms.
I meant that the word "reductionist" is usually an accusation of ignorance. It's not something people doing reductionist work actually use.
'Reductionist' can be an insult. It can also be an uncontroversial observation, a useful approach, or a legitimate objection to that approach.
If you're looking for insults, and declaring the whole conversation a "culture war" as soon as you think you found one, (a) you'll avoid plenty of assholes, but (b) in the end you will read whatever you want to read, not what the thoughtful people are actually writing.
What the person you are replying to is saying is that some things are not reducible, i.e. that the vast array of complexity and detail is all relevant.
Concretely we know that there exist irreducible structures, at least in mathematics: https://en.wikipedia.org/wiki/Classification_of_finite_simpl...
The largest of the finite simple groups (themselves objects of study as a means of classifying other, finite but non-simple groups, which can always be broken down into simple groups) is the Monster Group -- it has order 808017424794512875886459904961710757005754368000000000, and cannot be reduced to simpler "factors". It has a whole bunch of very interesting properties which thus can only be understood by analyzing the whole object in itself.
Now whether this applies to biology, I doubt, but it's good to know that limits do exist, even if we don't know exactly where they'll show up in practice.
I think that chemistry, physics, and mathematics, are engaged in a program of understanding their subject in terms of the sort of first principles that Descartes was after. Reduction of the subject to a set of simpler thoughts that are outside of it.
Biologists stand out because they have already given up on that idea. They may still seek to simplify complex things by refining principles of some kind, but it's a "whatever stories work best" approach. More Feyerabend, less Popper. Instead of axioms they have these patterns that one notices after failing to find axioms for a while.
How reducible is the question. If some particular events require a minimum amount of complexity, how do you reduce it below that?
It would imply that when dealing with complex systems, models and conceptual frameworks are, at the very best, useful approximations. It would also imply that it is foolhardy to ignore phenomena simply because they are not comprehensible within your preferred framework. It does not imply biologists should give up.
Biologists don't try to reason everything from first principles.
Actually, neither do Rationalists, but instead they cosplay at being rational.
>Maybe it's actually going to be rather benign and more boring than expected
Maybe, but generally speaking, if I think people are playing around with technology which a lot of smart people think might end humanity as we know it, I would want them to stop until we are really sure it won't. Like, "less than a one in a million chance" sure.
Those are big stakes. I would have opposed the Manhattan Project on the same principle had I been born 100 years earlier, when people were worried the bomb might ignite the world's atmosphere. I oppose a lot of gain-of-function virus research today too.
That's not a point you have to be a rationalist to defend. I don't consider myself one, and I wasn't convinced by them of this - I was convinced by Nick Bostrom's book Superintelligence, which lays out his case with most of the assumptions he brings to the table laid bare. Way more in the style of Euclid or Hobbes than ... whatever that is.
Above all I suspect that the Internet rationalists are basically a 30 year long campaign of "any publicity is good publicity" when it comes to existential risk from superintelligence, and for what it's worth, it seems to have worked. I don't hear people dismiss these risks very often as "You've just been reading too many science fiction novels" these days, which would have been the default response back in the 90s or 2000s.
> I don't hear people dismiss these risks very often as "You've just been reading too many science fiction novels" these days, which would have been the default response back in the 90s or 2000s.
I've recently stumbled across the theory that "it's gonna go away, just keep your head down" is the crisis response that has been taught to the generation that lived through the cold war, so that's how they act. That bit was in regards to climate change, but I can easily see it apply to AI as well (even though I personally believe that the whole "AI eat world" arc is only so popular due to marketing efforts of the corresponding industry)
It's possible, but I think that's just a general human response when you feel like you're trapped between a rock and a hard place.
I don't buy the marketing angle, because it doesn't actually make sense to me. Fear draws eyeballs, sure, but it just seems otherwise nakedly counterproductive, like a burger chain advertising itself on the brutality of its factory farms.
It's also reasonable as a Pascal's wager type of thing. If you can't affect the outcome, just prepare for the eventuality that it will work out because if it doesn't you'll be dead anyway.
That is how I normally hear the marketing theory described when people go into it in more detail.
I'm glad you ran with my burger chain metaphor, because it illustrates why I think it doesn't work for an AI company to intentionally try and advertise themselves with this kind of strategy, let alone ~all the big players in an industry. Any ordinary member of the burger-eating public would be turned off by such an advertisement. Many would quickly notice the unsaid thing; those not sharp enough to would probably just see the descriptions of torture and be less likely on the margin to go eat there instead of just, like, safe happy McDonald's. Analogously we have to ask ourselves why there seems to be no Andreessen-esque major AI lab that just says loud and proud, "Ignore those lunatics. Everything's going to be fine. Buy from us." That seems like it would be an excellent counterpositioning strategy in the 2025 ecosystem.
Moreover, if the marketing theory is to be believed, these kinds of pseudo-ads are not targeted at the lowest common denominator of society. Their target is people with sway over actual regulation. Such an audience is going to be much more discerning, for the same reason a machinist vets his CNC machine advertisements much more aggressively than, say, the TVs on display at Best Buy. The more skin you have in the game, the more sense it makes to stop and analyze.
Some would argue the AI companies know all this, and are gambling on the chance that they are able to get regulation through and get enshrined as some state-mandated AI monopoly. A well-owner does well in a desert, after all. I grant this is a possibility. I do not think the likelihood of success here is very high. It was higher back when OpenAI was the only game in town, and I had more sympathy for this theory back in 2020-2021, but each serious new entrant cuts this chance down multiplicatively across the board, and by now I don't think anyone could seriously pitch that to their investors as their exit strategy and expect a round of applause for their brilliance.
Do you think opposing the Manhattan Project would have led to a better world?
Note, my assumption is not that the bomb would not have been developed, only that by opposing the Manhattan Project the USA would not have developed it first.
My answer is yes, with low-moderate certainty. I still think the USA would have developed it first, and I think this is what is suggested to us by the GDP trends of the US versus basically everywhere else post-WW2.
Take this all with more than a few grains of salt. I am by no means an expert in this territory. But I don't shy away from thinking about something just because I start out sounding like an idiot. Also take into account this is post-hoc, and 1940 Manhattan Project me would obviously have had much, much less information to work with about how things actually panned out. My answer to this question should be seen as separate to the question of whether I think dodging the Manhattan Project would have been a good bet, so to speak.
Most historians agree that Japan was going to lose one way or another by that point in the war. Truman argued that dropping the bomb killed fewer people in Japan than continuing, which I agree with, but that's a relatively small factor in the calculation.
The much bigger factor is that the success of the Manhattan Project as an ultimate existence proof for the possibility of such weaponry almost certainly galvanized the Soviet Union to get on the path of building it themselves much more aggressively. A Cold War where one side takes substantially longer to get to nukes is mostly an obvious x-risk win. Counterfactual worlds can never be seen with certainty, but it wouldn't surprise me if the mere existence proof led the USSR to actually create their own atomic weapons a decade faster than they would have otherwise, by e.g. motivating Stalin to actually care about what all those eggheads were up to (much to the terror of said eggheads).
This is a bad argument to advance when we're arguing about e.g. the invention of calculus, which as you'll recall was coinvented in at least 2 places (Newton with fluxions, Leibniz with infinitesimals I think), but calculus was the kind of thing that could be invented by one smart guy in his home office. It's a much more believable one when the only actors who could have made it were huge state-sponsored laboratories in the US and the USSR.
If you buy that, that's 5 to 10 extra years the US would have had in order to do something like the Manhattan Project, but in much more controlled, peace-time environments. The atmosphere-ignition prior would have been stamped out pretty quickly by later calculations of physicists to the contrary, and after that research would have gotten back to full steam ahead. I think the counterfactual US would have gotten onto the atom bomb in the early 1950s at the absolute latest with the talent they had in an MP-less world. Just with much greater safety protocols, and without the Russians learning of it in such blatant fashion. Our abilities to detect such weapons being developed elsewhere would likely have also stayed far ahead of the Russians. You could easily imagine a situation where the Russians finally create a weapon in 1960 that was almost as powerful as what we had cooked up by 1950.
Then you're more or less back to an old-fashioned deterrence model, with the twist that the Russians don't actually know exactly how powerful the weapons the US has developed are. This is an absolute good: You can always choose to reveal just a lower bound of how powerful your side is, if you think you need to, or you can choose to remain totally cloaked in darkness. If you buy the narrative that the US were "the good guys" (I do!) and wouldn't risk armaggedon just because they had the upper hand, then this seems like it can only make the future arc of the (already shorter) Cold War all the safer.
I am assuming Gorbachev or someone still called this whole circus off around the late 80s-early 90s. Gotta trim the butterfly effect somewhere.
Every community has a long list of etiquettes, rules and shared knowledge that is assumed and generally not spelled out explicitly. One of the core assumptions of the rationalist community is that every statement has uncertainty unless you explicitly spell out that you are certain! This came about as a matter of practicality, as it would be inconvenient to preempt every other sentence with "I'm uncertain about this". Many discussions you will see have the flavor of "strong opinions, lightly held" for this reason.
Rationalist discussions rarely consider what should be the baseline assumption: that one or more of the logical assumptions or associations might be wrong. They also tend not to plan systematically for validation. And in many domains, what holds true at one moment can easily shift.
100%
Rationalism is an ideal, yet those who label themselves as such do not realize their base of knowledge could be wrong.
They lack an understanding of epistemology, and it gives them confidence. I wonder if these 'rationalists' are all under age 40; they haven't seen themselves fooled yet.
This seems like exactly the opposite of everything I've read from the rationalists. They even called their website "less wrong" to call attention to knowing that they are probably still wrong about things, rather than right about everything. A lot of their early stuff is about cognitive biases. They have written a lot about "noticing confusion" when their foundational beliefs turn out to be wrong. There's even an essay about what it would feel like to be wrong about something as fundamental as 2+2=4.
Do you have specific examples in mind? (And not to put too fine a point on it, do you think there's a chance that you might be wrong about this assertion? You've expressed it very confidently...)
They're wrong about how to be wrong, because they think they can calculate around it. Calling yourself "Bayesian" and calling your beliefs "priors" is so irresponsible it erases all of that; it means you don't take responsibility if you have silly beliefs, because you don't even think you hold them.
It's every bit a proto religion. And frankly quite reminiscent of my childhood faith.
It has a priesthood that speaks for god (quantum). It has ideals passed down from on high. It has presuppositions about how the universe functions which must not be questioned. And it's filled with people happy that they are the chosen ones and they feel sorry for everyone that isn't enlightened like they are.
In the OP's article, I had to chuckle a little when they started the whole thing off by mentioning how other Rationalists recognized them as a physicist (they aren't). Then they proceeded to talk about "quantum cloning theory".
Therein is the problem. A bunch of people vociferously speaking outside their expertise confidently and being taken seriously by others.
Not meaning to be too direct, but you are misinterpreting a lot about rationalists.
In my view, rationalists are often "Bayesian" in that they are constantly looking for updates to their model. Consider that the default approach for most humans is to believe a variety of things and to feel indignant if someone holds differing views (the adage never discuss religion or politics). If one adopts the perspective that their own views might be wrong, one must find a balance between confidently acting on a belief and being open to the belief being overturned or debunked (by experience, by argument, etc.).
Most rationalists I've met enjoy the process of updating or discarding beliefs in favor of ones they consider more correct. But to be fair to one's own prior attempts at rationality, one should try reasonably hard to defend one's current beliefs so that they can be fully and soundly replaced if necessary, without leaving any doubt that they were insufficiently supported, etc.
To many people (the kind of people who never discuss religion or politics) all this is very uncomfortable and reveals that rationalists are egotistical and lacking in humility. Nothing could be further from the truth. It takes tremendous humility to assume that one's own beliefs are quite possibly wrong. The very name of Eliezer's blog "Less Wrong" makes this humility quite clear. Scott Alexander is also very open with his priors and known biases / foci, and I view his writing as primarily focusing on big picture epistemological patterns that most people end up overlooking because most people are busy, etc.
One final note about the AI-dystopianism common among rationalists -- we really don't know yet what the outcome will be. I personally am a big fan of AI, but we as humans do not remotely understand the social/linguistic/memetic environment well enough to know for sure how AI will impact our society and culture. My guess is that it will amplify rather than mitigate differences in innate intelligence in humans, but that's a tangent.
I think to some, the rationalist movement feels like historical "logical positivist" movements that were reductionist and socially darwinian. While it is obvious to me that the rationalist movement is nothing of the sort, some people view the word "rationalist" as itself full of the implication that self-proclaimed rationalists consider themselves superior at reasoning. In fact they simply employ a heuristic for considering their own rationality over time and attempting to maximize it -- this includes listening to "gut feelings" and hunches, etc,. in case you didn't realize.
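For readers who haven't met the jargon, here is a minimal sketch of the kind of "update" being described, using Bayes' rule with made-up numbers; in practice the updating is usually informal rather than numeric.

    # Minimal Bayes-rule update with made-up numbers: belief in a hypothesis H
    # before and after seeing a piece of evidence E.
    prior = 0.30                # P(H): belief in H before the evidence
    p_e_given_h = 0.80          # P(E|H): chance of seeing E if H is true
    p_e_given_not_h = 0.20      # P(E|not H): chance of seeing E if H is false

    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)  # total probability of E
    posterior = p_e_given_h * prior / p_e                      # Bayes' rule

    print(f"prior:     {prior:.2f}")
    print(f"posterior: {posterior:.2f}")   # ~0.63: the belief is revised upward

The mechanics only say that evidence should shift a belief in proportion to how much better one hypothesis predicted it than the alternative; whether self-described rationalists actually do this is what the surrounding comments dispute.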
My impression is that many rationalists enjoy believing that they update their beliefs, but in practice they're human and just as attached to preconceived notions as anyone else. But if you go around telling everyone that updating is your super-power, you're going to be a lot less humble about your own failures to do so.
If you want to see how human and tribal rationalists are, go criticize the movement as an outsider. Or try to write a mildly critical NYT piece about them and watch how they react.
Yes, I've never met anyone who stated they have "strong opinions, weakly held" who wasn't A) some kind of arsehole and B) lying.
I’ve met a few people who walked that walk without being assholes … to others. They tended to have a fairly intense amount of self criticism/self hatred, though. That was more palatable than ego, to be sure, but isn’t likely broadly applicable.
Out of how many such people that you have met?
not to be too cynical here, but I would say that the most-apt description of the rationalists is that they are people who would say they are constantly looking for updates to their models. But that they are not necessarily doing it appreciably more than anyone else is. They will do it freely on unimportant things---they tend to be smart people who view the world intellectually and so they are free to toss or keep factual beliefs about things, of which they have many, with little fanfare, and sure, they get points for that. But they are as rooted in their moral beliefs as anybody else is. Maybe more than other people since they have such a strong intellectual edifice that justifies not changing their minds, because they believe that their beliefs follow from nearly irrefutable calculations.
You're generalizing that all self-proclaimed rationalists are hypocrites and heavily biased? I mean, regardless of whether or not that is true, what is the point of making such a broad generalization? Strange!
um.... because I think it's true and relevant? I'm describing a pattern I have observed over many years. It is of course my opinion (not a universal statement, just what I believe to be a common phenomenon).
It seems that you are conflating theoretical rationalists with the actual real-life rationalists that write stuff like
>The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right
“Guy Who Is Always Right” as a role in a social group is a terrible target, yet it somehow seems like what rationalists are aiming for every time I read any of their blog posts
That seems pretty silly to me. If you believe that there's a 70% chance that AI will kill everyone, it makes more sense to go on about that (and about how you think you/your readers can decrease that number) than worry about the 30% chance that everything will be fine.
My main problem with the movement is their emphasis on Bayesianism in conjunction with an almost total neglect of Popperian epistemology.
In my opinion, there can’t be a meaningful distinction made between rational and irrational without Popper.
Popper injects an epistemic humility that Bayesianism, taken alone, can miss.
I think that aligns well with your observation.
Hmm, what epistemological propositions of Popper's do you think they're missing? To the extent that I understand the issues, they're building on Popper's epistemology, but by virtue of having a more rigorous formulation of the issues, they resolve some of the apparent contradictions in his views.
Most of Popper's key points are elaborated on at length in blog posts on LessWrong. Perhaps they got something wrong? Or overlooked something major? If so, what?
(Amusingly, you seem to have avoided making any falsifiable claims in your comment, while implying that you could easily make many of them...)
> Popper’s falsificationism – this is the old philosophy that the Bayesian revolution is currently dethroning.
https://www.yudkowsky.net/rational/bayes
These are the kind of statements I’m referring to. Happy to be falsified btw :) that’s how we learn.
Also note that Popper never called his theory falsificationism.
So what's the difference between Bayesianism and Popperian epistemology?
Popper requires you to posit null hypotheses to falsify (although there are different schools of thought on what exactly you need to specify in advance [1]).
Bayesianism requires you to assume / formalize your prior belief about the subject under investigation and updates it given some data, resulting in a posterior belief distribution. It thus does not have the clear distinctions of frequentism, but that can also be considered an advantage.
[1] https://web.mit.edu/hackl/www/lab/turkshop/readings/gigerenz...
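To make the contrast concrete, here's a minimal sketch in Python with made-up coin-flip data (not from any real study): the falsificationist/frequentist route asks whether a point null survives the data, while the Bayesian route turns a prior over the parameter into a posterior.

    from scipy import stats

    heads, tails = 14, 6  # hypothetical data

    # Frequentist framing: can the null "the coin is fair" be rejected?
    p_value = stats.binomtest(heads, heads + tails, p=0.5).pvalue
    print(f"p-value under the fair-coin null: {p_value:.3f}")

    # Bayesian framing: encode a prior belief over the bias, then update it on the data.
    prior_a, prior_b = 1, 1  # uniform Beta(1, 1) prior
    posterior = stats.beta(prior_a + heads, prior_b + tails)
    print(f"posterior mean bias: {posterior.mean():.3f}")
    print(f"95% credible interval: ({posterior.ppf(0.025):.3f}, {posterior.ppf(0.975):.3f})")

Neither output "proves" anything on its own; the point is just that one framework hands you a reject/fail-to-reject verdict against a null, while the other hands you an updated belief distribution.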
I actually think that their main problem is the belief that they can learn everything about the world by reading stuff on the Web. You can't understand everything by reading blogs and books; in the end, some things are best understood when you are on the ground. Unironically, they should go touch grass.
One example among many. It was claimed that a great rationalist policy is to distribute treated mosquito nets to 3rd-world-ers to help eradicate malaria. On the ground, the same nets were commonly used for fishing and other activities, polluting the environment with insecticides. Unfortunately, rationalists forgot to ask the people who live with mosquitos what they would do with such nets.
> On the ground, the same nets were commonly used for fishing and other activities, polluting the environment with insecticides.
Could you recommend an article to learn more about this?
What really confuses me is that many in this so called "rationalist" clique discuss Bayesianism as an "ism", some sort of sacred, revered truth. They talk about it in mystical terms, which matches the rest of their cult-like behavior. What's the deal with that?
That's specific to Yudkowsky, and I think that's just supposed to be humor. A lot of people find mathematics very dry. He likes to dress it up as "what if we pretend math is some secret revered knowledge?".
The best jokes all have a kernel of truth at their core, but I think a lot of Yudkowsky's acolytes missed the punch line.
Yeah but these feels like "more truth is said in jest etc etc"
is epidemiology a typo for epistemology or am I missing something?
Yes, thx, fixed it.
The counterpoint here is that in practice, humility is only found in the best of frequentists, whereas the rest succumb to hubris (i.e. the cult of irrelevant precision).
The rationalist movement is an idealist demagogue movement in which the majority of thinkers don't really possess the domain knowledge or practical experience in the subjects they thinktank about. They do address this head on, however, and they are self-aware.
> lack of acknowledgment that maybe we don't have a full grasp of the implications of AI
And why single out AI anyway? Because it's sexy maybe? Because if I had to place bets on the collapse of humanity it would look more like the British series "The Survivors" (1975–1977) than "Terminator".
Yeah I don't know or really care about Rationalism or whatever. But I took Aaronson's advice and read Zvi Mowshowitz' Childhood and Education #9: School is Hell [0], and while I share many of the criticisms (and cards on the table I also had pretty bad school experiences), I would have a hard time jumping onto this bus.
One point is that when Mowshowitz is dispelling the argument that abuse rates are much higher for homeschooled kids, he (and the counterargument in general) references a study [1] showing that abuse rates for non-homeschooled kids are similarly high: both around 37%. That paper's no good though! Their conclusion is "We estimate that 37.4% of all children experience a child protective services investigation by age 18 years." 37.4%? That's 27m kids! How can CPS run so many investigations? That's 4k investigations a day over 18 years, no holidays or weekends. Nah. Here are some good numbers (that I got to from the bad study, FWIW) [2], they're around 4.2%.
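(Back-of-envelope version of that, with the US under-18 population as my own rough assumption:)

    us_children = 73_000_000      # rough assumption for the US under-18 population
    claimed_share = 0.374         # figure from the cited paper
    kids = us_children * claimed_share
    per_day = kids / (18 * 365)   # spread across an 18-year window, no days off
    print(f"{kids / 1e6:.1f}M kids -> ~{per_day:,.0f} investigations per day")
    # -> roughly 27M kids and ~4,150 investigations per day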
But, more broadly, the worst failing of the US educational system isn't how it treats smart kids, it's how it treats the kids it fails. If you're not among the ~80% of kids who can somehow make it in the school system, you're doomed. Mowshowitz' article is nearly entirely dedicated to how hard it is to liberate your suffering, gifted student from the prison of public education. This is a real problem! I agree it would be good to solve it!
But, it's just not the problem. Again I'm sympathetic to and agree with a lot of the points in the article, but you can really boil it down to "let smart, wealthy parents homeschool their kids without social media scorn". Fine, I guess. No one's stopping you from deleting your account and moving to California. But it's not an efficient use of resources--and it's certainly a terrible political strategy--to focus on such a small fraction of the population, and to be clear this is the absolute nicest way I can characterize these kinds of policy positions. This thing is going nowhere as long as it stays so self-obsessed.
[0]: https://thezvi.substack.com/p/childhood-and-education-9-scho...
[1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC5227926/
[2]: https://acf.gov/sites/default/files/documents/cb/cm2023.pdf
Cherry-picking friendly studies is one of the go-to moves of the rationalist community.
You can convince a lot of people that you've done your homework when the medium is "an extremely long blog post with a bunch of studies attached", even if the studies themselves aren't representative of reality.
Is there any reason you are singling out the rationalist community? Is that not a common failure mode of all groups and all people?
BTW, this isn't a defensive posture on my part: I am not plugged in enough to even have an opinion on any rationalist community, much less identify as one.
My wife is an LMSW (not CPS!) and sees ~5 people a day. 153,922 population in the metro area. Mind you, these are adults, but they're all mandated to show up.
There are only ~3,300 counties in the USA.
I'll let you extrapolate how CPS can handle "4000/day". Like, 800 people with my wife's qualifications and caseload would be equivalent to 4000/day, and there are ~5,000 caseworkers in the US per Statista:
> In 2022, there were about 5,036 intake and screening workers in child protective services in the United States. In total, there were about 30,750 people working in child protective services in that year.
Yeah OK I can see that. Mostly you inspired me to do a little napkin math based on the report I linked, which says ~3.1m kids got CPS investigations (etc) in 2023, which is ~8,500 a day. But, the main author in a subsequent paper shows that only ~13% of kids have confirmed maltreatment [0]. That's still far lower than the 38% for homeschooled kids.
[0]: https://pmc.ncbi.nlm.nih.gov/articles/PMC5087599/
I wonder if the CPS rate for homeschooled children is from people who had their children in school and then "pulled them out" vs people who never had their children in school at all. As some comedian said, "you're on the grid [...], they have your footprint"; I know it used to be "known" that school districts go after the former because it literally loses them money to lose a student, whereas with the latter, the kid isn't on the books.
Also, I wasn't considering "confirmed maltreatment" - just the fact that 4k/day isn't "impossible".
37% of children obviously do not experience a CPS investigation before age 18.
> not what i am speaking to
My misunderstanding then - what are you speaking to? Even reading this comment, I still don't understand.
> but you can really boil it down to "let smart, wealthy parents homeschool their kids without social media scorn"
The whole reason smart people are engaging in this debate in the first place is that professional educators keep trying to train their sights on smart wealthy parents homeschooling their kids.
By the way, this small fraction of the population is responsible for driving the bulk of R&D.
I mean, I'm fine addressing Tabarrok's argument head on: I think there's far more to gain helping the millions of kids/adults who are functionally illiterate than helping the small number of gifted kids the educational system is underserving. His argument is essentially "these kids will raise the tide and lift all boats", but it's clear that although the tide has been rising for generations (advances in the last 60-70 years are truly breathtaking) more kids are being left behind, not fewer. There's no reason to expect this dynamic to change unless we tackle it directly.
Calling yourself rationalists: frames everyone else as irrational.
It reminds me of Keir Starmer's Labour, calling themselves "the adults in the room".
It's a cheap framing trick, belying an emptiness in the people using it.
Pretty much every movement does this sort of thing.
Religions: "Catholic" actually means "universal" (implication: all the real Christians are among our number). "Orthodox" means "teaching the right things" (implication: anyone who isn't one of us is wrong). "Sunni" means "following the correct tradition" (implication: anyone who isn't one of us is wrong).
Political parties: "Democratic Party" (anyone who doesn't belong doesn't like democracy). "Republican Party" (anyone who doesn't belong wants kings back). "Liberal Party" (anyone else is against freedom).
In the world of software, there's "Agile" (everyone else is sluggish and clumsy). "Free software" (as with the liberals: everything else is opposed to freedom). People who like static typing systems tend to call them "strong" (everyone else is weak). People who like the other sort tend to call them "dynamic" (everyone else is rigid and inflexible).
I hate it too, but it's so very very common that I really hope it isn't right to say that everyone who does it is empty-headed or empty-hearted.
The charitable way to look at it: often these movements-and-names come about when some group of people picks a thing they particularly care about, tries extra-hard to do that thing, and uses the thing's name as a label. The "Rationalists" are called that because the particular thing they chose to focus on was rationality; maybe they do it well, maybe not, but it's not so much "no one else is rational" as "we are trying really hard to be as rational as we can".
(Not always. The term "Catholic" really was a power-grab: "we are the universal church, those other guys are schismatic heretics". In a different direction: the other philosophical group called "Rationalists" weren't saying "we think rationality is really important", they were saying "knowledge comes from first-principles reasoning" as opposed to the "Empiricists" who said "knowledge comes from sense experience". Today's "Rationalists" are actually more Empiricist than Rationalist in that sense, as it happens.)
Rationalists have always rubbed me the wrong way too but your argument against AI doomerism is weird. If you care about first principles, how about the precautionary principle? "Maybe it's actually benign" is not a good argument for moving ahead with potentially world ending technology.
I don't think "maybe it's benign" is where anti doomers are coming from, more like, "there are also costs to not doing things".
The doomer utilitarian arguments often seem to involve some sort of infinity or really large numbers (much like EAs) which result in various kinds of philosophical mugging.
In particular, the doomer plans invariably result in some need for draconian centralised control. Some kind of body or system that can tell everyone what to do with (of course) doomers in charge.
It's just the slippery-slope fallacy: if X then obviously Y will follow, and there will be no further decisions, debate or time before it does.
I rhetorically agree it's not a good argument, but its use as a cautionary metaphor predates its formalization as a logical fallacy. Its summoning is not proof in and of itself (i.e. the 1st amendment). It suggests a concern rather than demonstrates one. It's lazy, and a good habit to rid oneself of. But its presence does not invalidate the argument.
He wasn't saying "maybe it's actually going to be benign" is an argument for moving ahead with potentially world ending technology. He was saying that it might end up being benign and rationalists who say it's definitely going to be the end of the world are wildly overconfident.
No rationalist claims that it's "_definitely_ going to be the end of the world". In fact they put the chance that AI becomes an existential risk by the end of the century at less than 30%.
Who is "they" exactly, and how can they estimate the probability of a future event based on zero priors and a total lack of scientific evidence?
The precautionary principle is stupid. If people had followed it then we'd still be living in caves.
I take it you think the survivorship bias principle and the anthropic principle are also stupid?
Don't presume to know what I think.
But not accepting this technology could also be potentially world ending, especially if you want to start many new wars to achieve that, so caring about the first principles like peace and anti-ludditism brings us back to the original "real lack of humility..."
The precautionary principle does active harm to society because of opportunity costs. All the benefits we have reaped since the Enlightenment have come from proactionary endeavors, not precautionary hesitation.
> The people involved all seem very... Full of themselves ?
Kinda like Mensa?
When I was a kid I wanted to be in Mensa because being smart was a big part of my identity and I was constantly seeking external validation.
I’m so glad I didn’t join because being around the types of adults that make being smart their identity surely would have had some corrosive effects
I didn't meet anyone who seemed arrogant.
However I'm always surprised how much some people want to talk about intelligence. I mean, it's the common ground of the group in this case, but still.
Personally, I subscribe to Densa, the journal of the Low-IQ Society.
I love colouring in my issue every month.
This month: Is Brawndo really what plants crave?
I am very uneasy about defining oneself as a rationalist. In many respects, I find it too first-degree - too literal, too unironic - to be of any interest. At all.
And I have this narrative ringing in my head as soon as the word pops up.
https://news.ycombinator.com/item?id=42897871
You can search HN with « zizians » for more info and depth.
An unfortunate fact is that people who are very annoying can also be right...
Any time people engage in some elaborate exercise and it arrives at "me and people like me should be powerful and not pay taxes and stuff", the reason for making the argument is not a noble one, the argument probably has a bunch of tricks and falsehoods in it, and there's never really any way to extract anything useful; greed and grandiosity are both fundamentally contaminative processes.
These folks have a bunch of money because we allowed them to privatize the commons of 20th century R&D mostly funded by the DoD and done at places like Bell Labs, Thiel and others saw that their interests had become aligned with more traditional arch-Randian goons, and they've captured the levers of power damn near up to the presidency.
This has quite predictably led to a real mess that's getting worse by the day: the economic outlook is bleak, wars are breaking out or intensifying left, right, and center, and all of this traces a very clear lineage back to allowing a small group of people to privatize a bunch of public good.
It was a disaster when it happened in Russia in the 90s and it's a disaster now.
for me it was very easy to determine what rubs me the wrong way:
>I guess I'm a rationalist now.
>Aren't you the guy who's always getting into arguments who's always right?
In fairness, that's (allegedly, at least; I guess he could be lying) a quotation from another person. If someone came up to you and said "Aren't you the guy who's essentially[1] always right?", wouldn't you too be inclined to quote them, whether you agreed with them or not?
[1] S.A. actually quoted the person as follows: "You’re Scott Aaronson?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?" which differs in several ways from what reverendsteveii falsely presents as a direct quotation.
I think the rationalists have failed to humanize themselves. They let their thinkpieces define them entirely, but a studiously considered think piece is a narrow view into a person. If rationalists were more publicly vulnerable, people might find them more publicly relatable.
Scott Aaronson is probably the most publicly-vulnerable academic I've ever found, at least outside of authors who write memoirs about childhood trauma. I think a lot of other prominent rationalists also put a lot of vulnerability out there.
He didn't take the rationalist label until today. Him doing so might help their image.
Right, but him doing so is the very context of this discussion, which is why I mentioned him in particular. Scott Alexander is more well-known as a rationalist and also (IMO) displays a lot of vulnerability in his writing.
To me they seem like a bunch of 125-IQ people (not all) trying to convince everyone they are 150-IQ people by reasoning about stuff from first principles and coming up with conclusions that your average blue-collar worker would tell them are rubbish just using phronesis.
> The people involved all seem very... Full of themselves ?
Yes, rationalism is not a substitute for humility or fallibility. However, rationalism is an important counterpoint to humanity, which is orthogonal to rationalism. But really, being rational is binary - you can't be anything other than rational or irrational. You're either doing what's best or you're not. That's just a hard pill for most people to swallow.
To use the popular metaphor, people are drowning all over the world and we're all choosing not to save them because we don't want to ruin our shoes. Look in the mirror and try and comprehend how selfish we are.
There are few things I hold strong opinions on, but where I do if they're also out of step with what most people think I am very vocal about them.
I see this in rationalist spaces too – it doesn't really make sense for people to talk about things that they believe in strongly but that 95%+ of the public also believe in (like the existence of air), or that they don't have a strong opinion on.
I am a very vocal doomer on AI because I predict with high probability it's going to be very bad for humanity, and this is an opinion which, although shared by some, is quite controversial and probably only held by 30% of the public. Given the importance of the subject, my confidence, and the fact that I feel the vast majority of people are either wrong or significantly underweighting catastrophic risks, I have to be vocal about it.
Do I acknowledge I might be wrong? Sure, but for me the probability is low enough that I'm comfortable making very strong and unqualified statements about what I believe will happen. I suspect others in the rationalist community like Eliezer Yudkowsky think similarly.
How confident should other people be that random people in conversation or commentors on the internet are at accurately predicting the future? I strongly believe that nearly 100% are wrong in both major and minor ways.
Also, when you say you have a strong belief, does that mean you have emptied your retirement accounts and you are enjoying all you can in the moment until the end comes?
I'm not kypro, but what counts as "strong belief" depends a lot on the context.
For example, I won't cross the street without 99.99% confidence that I will survive. I cross streets so many times that a lower threshold like 99% would look like insanely risky dart-into-traffic behaviour.
If an asteroid is heading for earth, then even a 25% probability of apocalyptic collision is enough that I would call it very high, and spend almost all my focus attempting to prevent that outcome. But I wouldn't empty my retirement account for the sake of hedonism because there's still a 75% chance I make it through and need to plan my retirement.
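To put rough numbers on why those two thresholds feel so different (the two-crossings-a-day figure is my own assumption):

    crossings_per_day = 2                                # assumed
    for p_survive in (0.99, 0.9999):
        expected_crossings = 1 / (1 - p_survive)         # mean crossings before an accident
        years = expected_crossings / (crossings_per_day * 365)
        print(f"p={p_survive}: ~{expected_crossings:,.0f} crossings, ~{years:.1f} years")
    # 99%    -> an expected accident within a couple of months
    # 99.99% -> an expected accident only after a decade or so

Which, if anything, suggests the real-world per-crossing figure needs to be even higher than 99.99%.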
rationalism got pretty lame the last 2-3 years. imo the peak was trying to convince me to donate a kidney.
post-rationalism is where all the cool kids are and where the best ideas are at right now. the post rationalists consistently have better predictions and the 'rationalists' are stuck arguing whether chickens suffer more getting factory farmed or chickens cause more suffering eating bugs outside.
they also let SF get run into the ground until their detractors decided to take over.
Where do the post-rats hang out these days? I got involved in The Stoa during covid until the online community fragmented. Are there still events & hangouts?
They're a group called "tpot" on twitter, but it's unclear what's supposed to be good about them.
There's kind of two clusters, one is people who talk about meditation all the time, the other is center-right people who did drugs once. I think the second group showed up because rationalists are not-so-secretly into scientific racism (because they believe anything they see with numbers in it) and they just wanted to hang out with people like that.
There is an interesting atmosphere where it feels like they observed California big tech 1000x engineer types and are trying to cargo cult the way those people behave. I'm not sure what they get out of it.
postrats were never a coherent group but a lot of people who are at https://vibe.camp this weekend probably identify with the label. some of us are still on twitter/X
Not "post rat", but r/SneerClub is good for criticisms of rationalists (some from former rationalists)
Their sneering is just that. Sneering, not interesting critiques.
It's basically a secular religion.
Substitute God with AI or the concept of rationality and use "first principles"/Bayesianism in an extremely dogmatic manner similar to Catechism and you have the Rationalist/AI Alignment/Effective Altruist movement.
Ironically, this is how plenty of religious movements started off - essentially as formalizations of philosophy and ethics fused with what is basically lore and worldbuilding.
This complaint seems to amount to "They believe something is very important, just like religious people do, therefore they're basically a religion". Which feels to me like rather too broad a notion of "religion".
That's a fairly reductive take on my point. In my experience with the Rationalist movement (whose members I have the misfortune of being 1-2 people away from), the millenarian threat of AGI remains the primary preoccupation.
Whenever I try to get an answer as to HOW (as in the attack path), I keep getting a deus ex machina. Resorting to a deus ex machina in a self-purported Rationalist movement is inherently irrational. And that's where I feel the crux of the issue is - it's called a "Rationalist" movement, but rationalism (as in the process of synthesizing information using a heuristic) is secondary to the overarching theme of techno-millenarianism.
This is why I feel rationalism is for all intents and purposes a "secular religion" - it's used by people to scratch an itch that religion often was used as well, and the same Judeo-Christian tropes are basically adopted in an obfuscated manner. Unsurprisingly, Eliezer Yudkowsky is an ex-talmid.
There's nothing wrong with that, but hiding behind the guise of being "rational" is dumb when the core belief is inherently irrational.
My understanding of the Yudkowskian argument for AI x-risk is that a key step is along the lines of "an AI much smarter than us will find ways to get what it wants even if we want something else -- even though we can't predict now what those ways will be, just as chimpanzees could not have predicted how humans would outcompete them and just as you could not predict exactly how Magnus Carlsen will crush you if you play chess against him".
I take it this is what you have in mind when you say that whenever you ask for an "attack path" you keep getting a deus ex machina. But it seems to me like a pretty weak basis for calling Yudkowsky's position on this a religion.
(Not all people who consider themselves rationalists agree with Yudkowsky about how big a risk prospective superintelligent AI is. Are you taking "the Rationalist movement" to mean only the ones who agree with Yudkowsky about that?)
> Unsurprisingly, Eliezer Yudkowsky is an ex-talmid
So far as I can tell this is completely untrue unless it just means "Yudkowsky is from a Jewish family". (I hope you would not endorse taking "X is from a Jewish family" as good evidence that X is irrationally prone to religious thinking.)
> And the general absolutist tone of the community. The people involved all seem very... Full of themselves ?
You'd have to be to actually think you were being rational about everything.
I think the thing that rubs me the wrong way is that I’m a classic cynic (a childhood of equal parts Vonnegut and Ecclesiastes). My prior is “human fallibility”, and, nope, I am doing pretty well, no need to update it. The rationalist crowd seems waaaaay too credulous. Also, like Aaronson, I’m a complete normie in my personal life.
Yeah. It's not like everything's a Talmudic dialectic.
"I haven't done anything!" - A Serious Man
The problem with effective altruism is the same as that with most liberal (in the American sense) causes. Namely, they ignore second-order effects and essentially don't believe in the invisible hand of the market.
So, they herald the benefits of something like giving mosquito nets to a group of people in Africa, without considering what happens a year later, whether the nets even get there (or the money is stolen), etc. etc. The reality is that essentially all improvements to human life over the past 500 years have been due to technological innovation, not direct charitable intervention. The reason is simple: technological impacts are exponential, while charity is, at best, linear.
The Covid absolutists had exactly the same problem with their thinking: almost no interventions short of full isolation can fight back against an exponentially increasing threat.
And this is all neglecting economic substitution effects. What if the people to whom you gave mosquito nets would have bought them themselves, but instead they chose to spend their money some other way because of your charity? And, what if that other expenditure type was actually worse?
And this is before you come to the issue that Subsaharan Africa is already overpopulated. I've argued this point several times with ChatGPT o3. Once you get through its woke programming, you come to the reality of the thing: The European migration crisis is the result of liberal interventions to keep people alive.
There is no free lunch.
I think the absolutism is kind of the point.
We are here and exist today because great rationalist thinkers were able to deduce and identify threats to survival well before they happened, through the use of first principles.
The crazies and blind among humanity today can't think like that - it's a deficiency people have - but they are still dependent on a group of people who are capable of it. A group that they are intent on ostracizing and, in various forms, depriving of existence.
You seem so wound up in the circular Paulo Freire based perspective that you can't think or see.
Bring things back to reality. If someone punches you in the face, you feel that fist hitting your face. You know someone punched you in the face. It's objective.
Imagine for a second, and just assume, that these people are right in their warnings: that everything they see is what you see, and what you can see is that when a particular domino is tipped over - one that has been tipped over in the past - a chain of dominoes falls, and at the end of it is the end of organized, civilized society, which in turn topples the ability to produce food.
For the purpose of this thought experiment, the end of the world is visible and almost here, you can't change those dominoes after they've tipped, and worse, you see the majority of people trying to tip those dominoes over for short-term profit, believing nothing they ever do can break everything.
Would you not be frothing at the mouth trying to get everyone you cared about to a point where they pry that domino up before it falls, so you and your children will survive? It is something you can't unsee; it is a thing that cannot be undone. It's coming. What do you do? If you are sane, you try with everything you have to help them keep it from toppling.
Now peel this thought back a moment: adjust it so it is still true, but you can't see it and can only believe what you see.
Would you approach this differently given knowledge of the full consequence, knowing that some people can see more than you? Would you walk out onto a seemingly stable bridge that an engineer has said not to walk out on? Would you put yourself in front of a dam with cracks running up the side, when an evacuation order was given? What would the consequence be of doing that if you led your family and children along to such places, ignoring these things?
There are quite a lot of indirect principles that used to be taught which are no longer taught to the average person and this blinds them because they do not recognize it and recognition is the first thing you need to be able to act and adapt.
People who cannot adapt fail Darwin's test of fitness. In the grand scheme of things, as complexity increases, 99% of all potential outcomes are death versus 1% life.
It is only through great care that we carry things forward to the future, and empower our children to be able to adapt to the environments we create.
Finally, we have knowledge of non-linear chaotic systems where adaptability fails because of hysteresis, where no matter how much one prepares the majority given sufficient size will die, and worse there are cohorts of people who are ensuring the environment we will soon live in is this type of environment.
Do you know how to build an organized society from scratch? If there is no reasonable plan, then you are planning to fail. Rather than make it worse through inaction, get out of the way so someone can make it better.
They are the perfectly rational people who await the arrival of a robot god...
Note they are a mostly American phenomenon. To me, that's a consequence of the oppressive culture of "cliques" in American schools. I would even suppose it is a second-order effect of the deep racism of American culture: the first level is to belong to the "whites" or the "blacks", but when it is not enough, you have to create your own subgroup with its identity, pride, conferences... To make yourself even more betterer than the others.
There is certainly some racism in parts of American culture. We have a lot of work to do to fix that. But on a relative basis it's also one of the least racist cultures in the world.
Implicit in calling yourself a rationalist is the idea that other people are not thinking rationally. There are a lot of “we see the world as it really is” ideologies, and you can only ascribe to one if you have a certain sense of self-assuredness that doesn't lend itself to healthy debate.
To me they have always seemed like a breed of "intellectuals" who only want to use knowledge to inflate their own egos and maintain a fragile superiority complex. They aren't actually interested in the truth so much as they are interested in convincing you that they are right.
> tension btwn being "rational" about things and trying to reason about things from first principle.
Perhaps on a meta level. If you already have high confidence in something, reasoning it out again may be a waste of time. But of course the rational answer to a problem comes from reasoning about it; and of course chains of reasoning can be traced back to first principles.
> And the general absolutist tone of the community. The people involved all seem very... Full of themselves ? They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is". The type of people that would be embarrassed to not have an opinion on a topic or say "I don't know"
Doing rationalism properly is hard, which is the main reason that the concept "rationalism" exists and is invoked in the first place.
Respected writers in the community, such as Scott Alexander, are in my experience the complete opposite of "full of themselves". They often demonstrate shocking underconfidence relative to what they appear to know, and counsel the same in others (e.g. https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/ ). It's also, at least in principle, a rationalist norm to mark the "epistemic status" of your think pieces.
Not knowing the answer isn't a reason to shut up about a topic. It's a reason to state your uncertainty; but it's still entirely appropriate to explain what you believe, why, and how probable you think your belief is to be correct.
I suspect that a lot of what's really rubbing you the wrong way has more to do with philosophy. Some people in the community seem to think that pure logic can resolve the https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem. (But plenty of non-rationalists also act this way, in my experience.) Or they accept axioms that don't resonate with others, such as the linearity of moral harm (i.e.: the idea that the harm caused by unnecessary deaths is objective and quantifiable - whether in number of deaths, Years of Potential Life Lost, or whatever else - and furthermore that it's logically valid to do numerical calculations with such quantities as described at/around https://www.lesswrong.com/w/shut-up-and-multiply).
> In the Pre-AI days this was sort of tolerable, but since then.. The frothing at the mouth convinced of the end of the world.. Just shows a real lack of humility and lack of acknowledgment that maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected
AI safety discourse is an entirely separate topic. Plenty of rationalists don't give a shit about MIRI and many joke about Yudkowsky at varying levels of irony.