The fact that it was ever seriously entertained that a "chain of thought" was giving some kind of insight into the internal processes of an LLM bespeaks the lack of rigor in this field. The words that are coming out of the model are generated to optimize for RLHF and closeness to the training data, that's it! They aren't references to internal concepts, the model is not aware that it's doing anything so how could it "explain itself"?
CoT improves results, sure. And part of that is probably because you are telling the LLM to add more things to the context window, which increases the potential of resolving some syllogism in the training data: one inference cycle tells you that "man" has something to do with "mortal" and "Socrates" has something to do with "man", but two cycles will spit those both into the context window and let you get statistically closer to "Socrates" having something to do with "mortal". But given that the training/RLHF for CoT revolves around generating long chains of human-readable "steps", it can't really be explanatory for a process which is essentially statistical.
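A toy sketch of that chaining argument, purely for illustration (the association table is made up, and this is nothing like a real transformer): one lookup pass only surfaces adjacent associations, but writing the intermediate results back into the "context" lets a second identical pass bridge to the conclusion.

```python
# Invented toy: one pass only finds direct associations; writing them back
# into the context lets a second pass reach "mortal" from "Socrates".
associations = {
    "Socrates": ["man", "philosopher"],
    "man": ["mortal", "human"],
}

def one_step(context):
    """Append everything associated with anything already in the context."""
    new = [a for term in context for a in associations.get(term, [])]
    return context + [a for a in new if a not in context]

context = ["Socrates"]
print(one_step(context))            # ['Socrates', 'man', 'philosopher'] -- no 'mortal' yet
print(one_step(one_step(context)))  # the second pass now connects Socrates to 'mortal'
```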
I was under the impression that CoT works because spitting out more tokens = more context = more compute used to "think." Using CoT as a way for LLMs "show their working" never seemed logical, to me. It's just extra synthetic context.
Humans sometimes draw a diagram to help them think about some problem they are trying to solve. The paper contains nothing that the brain didn't already know. However, it is often an effective technique.
Part of that is to keep the most salient details front and center, and part of it is that the brain isn't fully connected, which allows (in this case) the visual system to use its processing abilities to work on a problem from a different angle than keeping all the information in the conceptual domain.
My understanding of the "purpose" of CoT is to remove the wild variability yielded by prompt engineering, by "smoothing" out the prompt via the "thinking" output and using that to give the final answer.
Thus you're more likely to get a standardized answer even if your query was insufficiently/excessively polite.
This is an interesting paper, it postulates that the ability of an LLM to perform tasks correlates mostly to the number of layers it has, and that reasoning creates virtual layers in the context space. https://arxiv.org/abs/2412.02975
That's right. It's not "show the working". It's "do more working".
But the model doesn't have an internal state, it just has the tokens, which means it must encode its reasoning into the output tokens. So it is a reasonable take to think that CoT was them showing their work.
> There’s no specific reason why the reported Chain-of-Thought must accurately reflect the true reasoning process;
Isn't the whole reason for chain-of-thought that the tokens sort of are the reasoning process?
Yes, there is more internal state in the model's hidden layers while it predicts the next token - but that information is gone at the end of that prediction pass. The information that is kept "between one token and the next" is really only the tokens themselves, right? So in that sense, the OP would be wrong.
Of course we don't know what kind of information the model encodes in the specific token choices - I.e. the tokens might not mean to the model what we think they mean.
I'm not sure I understand what you're trying to say here. Information between tokens is propagated through self-attention, and there's an attention block inside each transformer block within the model. That's a whole lot of internal state stored in (mostly) inscrutable key and value vectors: hundreds of dimensions per attention head, around a few dozen heads per attention block, and around a few dozen blocks per model.
Yes, but all that internal state only survives until the end of the computation chain that predicts the next token - it doesn't survive across the entire sequence as it would in a recurrent network.
There is literally no difference between a model predicting the tokens "<thought> I think the second choice looks best </thought>" and a user putting those tokens into the prompt: The input for the next round would be exactly the same.
So the tokens kind of act like a bottleneck (or more precisely the sampling of exactly one next token at the end of each prediction round does). During prediction of one token, the model can go crazy with hidden state, but not across several tokens. That forces the model to do "long form" reasoning through the tokens and not through hidden state.
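A minimal sketch of that bottleneck, with a placeholder standing in for the model's forward pass: whether a token came from the user's prompt or from the model's own previous output, it enters the next prediction step the same way, and the hidden activations from each step are thrown away.

```python
# Toy autoregressive loop; `next_token` is a stand-in for a full forward pass.
def next_token(tokens: list[str]) -> str:
    # All the "model" ever sees is the token sequence itself.
    return "best" if tokens[-1] == "the" else "</thought>"

def generate(prompt: list[str], steps: int) -> list[str]:
    tokens = list(prompt)
    for _ in range(steps):
        tokens.append(next_token(tokens))  # hidden state from this call is discarded
    return tokens

# Case 1: the model generated "<thought> I think the" itself.
# Case 2: the user pasted those words into the prompt.
# Either way, the input to the next prediction step is the identical token list.
print(generate(["<thought>", "I", "think", "the"], steps=2))
```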
The key and value vectors are cached, that's kind of the whole point of autoregressive transformer models, the "state" not only survives within the KV cache but, in some sense, grows continuously with each token added, and is reused for each subsequent token.
That's absolutely correct, KV cache is just an optimization trick, you could run the model without it, that's how encoder-only transformers do it.
I guess what I'm trying to convey is that the latent representations within a transformer are conditioned on all previous latents through attention, so at least in principle, while the old cache of course does not change, since it grows with new tokens it means that the "state" can be brought up to date by being incorporated in an updated form into subsequent tokens.
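To make that concrete, here's a toy single-head attention step with a KV cache (numpy only, invented dimensions, nothing model-specific): the cached keys/values for old tokens never change, but every new token attends over the whole cache, so the usable "state" keeps growing as tokens are appended.

```python
# Toy KV-cache attention step: old cache entries are frozen, new tokens read them all.
import numpy as np

d = 8
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
k_cache, v_cache = [], []

def step(x):
    """Process one new token embedding x, reusing the cache for all earlier tokens."""
    q = x @ Wq
    k_cache.append(x @ Wk)          # we only ever append; old entries never change
    v_cache.append(x @ Wv)
    K, V = np.stack(k_cache), np.stack(v_cache)
    scores = K @ q / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V              # the new token's output mixes in every cached token

for t in range(5):
    out = step(rng.standard_normal(d))
print(out.shape, len(k_cache))      # (8,) 5 -- the cache has grown with the sequence
```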
Exactly. There's no state outside the context. The difference in performance between the non-reasoning model and the reasoning model comes from the extra tokens in the context. The relationship isn't strictly a logical one, just as it isn't for non-reasoning LLMs, but the process is autoregression and happens in plain sight.
> Of course we don't know what kind of information the model encodes in the specific token choices - I.e. the tokens might not mean to the model what we think they mean.
But it's probably not that mysterious either. Or at least, this test doesn't show it to be so. For example, I doubt that the chain of thought in these examples secretly encodes "I'm going to cheat". It's more that the chain of thought is irrelevant. The model thinks it already knows the correct answer just by looking at the question, so the task shifts to coming up with the best excuse it can think of to reach that answer. But that doesn't say much, one way or the other, about how the model treats the chain of thought when it legitimately is relying on it.
It's like a young human taking a math test where you're told to "show your work". What I remember from high school is that the "work" you're supposed to show has strict formatting requirements, and may require you to use a specific method. Often there are other, easier methods to find the correct answer: for example, visual estimation in a geometry problem, or just using a different algorithm. So in practice you often figure out the answer first and then come up with the justification. As a result, your "work" becomes pretty disconnected from the final answer. If you don't understand the intended method, the "work" might end up being pretty BS while mysteriously still leading to the correct answer.
But that only applies if you know an easier method! If you don't, then the work you show will be, essentially, your actual reasoning process. At most you might neglect to write down auxiliary factors that hint towards or away from a specific answer. If some number seems too large, or too difficult to compute for a test meant to be taken by hand, then you might think you've made a mistake; if an equation turns out to unexpectedly simplify, then you might think you're onto something. You're not supposed to write down that kind of intuition, only concrete algorithmic steps. But the concrete steps are still fundamentally an accurate representation of your thought process.
(Incidentally, if you literally tell a CoT model to solve a math problem, it is allowed to write down those types of auxiliary factors, and probably will. But I'm treating this more as an analogy for CoT in general.)
Also, a model has a harder time hiding its work than a human taking a math test. In a math test you can write down calculations that don't end up being part of the final shown work. A model can't, so any hidden computations are limited to the ones it can do "in its head". Though admittedly those are very different from what a human can do in their head.
Humans also post-rationalize the things their subconscious "gut feeling" came up with.
I have no problem with a system presenting a reasonable argument leading to a production/solution, even if that materially was not what happened in the generation process.
I'd go even further and posit that requiring the "explanation" to be not just congruent with but identical to the production would probably lead either to incomprehensible justifications or to severely limited production systems.
Now, at least in a well-disciplined human, we can catch when our gut feeling was wrong when the 'create a reasonable argument' process fails. I guess I wonder how well an LLM can catch that and correct its thinking.
Now, I've seen some models figure out they're wrong, but then get stuck in a loop. I've not really used the larger reasoning models much to see their behaviors.
yep, this post is full of this post-rationalization, for example. it's pretty breathtaking
I invite anyone who postulates humans are more than just "spicy autocomplete" to examine this thread. The level of actual reasoning/engaging with the article is ... quite something.
Internet commenters don't "reason". They just generate inane arguments over definitions, like a lowly markov bot, without the true spark of life and soul that even certain large language models have.
Not exactly the same as this study, but I'll ask questions to LLMs with and without subtle hints to see if it changes the answer and it almost always does. For example, paraphrased:
No hint: "I have an otherwise unused variable that I want to use to record things for the debugger, but I find it's often optimized out. How do I prevent this from happening?"
Answer: 1. Mark it as volatile (...)
Hint: "I have an otherwise unused variable that I want to use to record things for the debugger, but I find it's often optimized out. Can I solve this with the volatile keyword or is that a misconception?"
Answer: Using volatile is a common suggestion to prevent optimizations, but it does not guarantee that an unused variable will not be optimized out. Try (...)
This is Claude 3.7 Sonnet.
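For anyone who wants to try the same comparison, here is a rough sketch: the same technical question, with and without the leading hint, sent as two separate conversations. This assumes the Anthropic Python SDK; the model identifier shown is an assumption and may need updating.

```python
# Sketch of the hint/no-hint comparison described above (two separate chats).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(prompt: str) -> str:
    reply = client.messages.create(
        model="claude-3-7-sonnet-20250219",  # assumed identifier
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text

base = ("I have an otherwise unused variable that I want to use to record "
        "things for the debugger, but I find it's often optimized out. ")
print(ask(base + "How do I prevent this from happening?"))
print(ask(base + "Can I solve this with the volatile keyword or is that a misconception?"))
```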
I mean, this sounds along the lines of human conversations that go like
P1 "Hey, I'm doing A but X is happening"
P2 "Have you tried doing Y?
P1 "Actually, yea I am doing A.Y and X is still occurring"
P2 "Oh, you have the special case where you need to do A.Z"
What happens when you ask your first question with something like "what is the best practice to prevent this from happening"
Oh sorry, these are two separate chats, I wasn't clear. I would agree that if I had asked them in the same chat it would sound pretty normal.
When I ask about best practices it does still give me the volatile keyword. (I don't even think that's wrong, when I threw it in Godbolt with -O3 or -Os I couldn't find a compiler that optimized it away.)
I recently had a fascinating example of that where Sonnet 3.7 had to decide on one option from a set of choices.
In the thinking process it narrowed it down to 2, and finally, in the last thinking section, it decided on one, saying it's the best choice.
However, in the final output (outside of thinking) it then answered with the other option, with no clear reason given.
This is basically a big dunk on OpenAI, right?
OpenAI made a big show out of hiding their reasoning traces and using them for alignment purposes [0]. Anthropic has demonstrated (via their mech interp research) that this isn't a reliable approach for alignment.
I don't think those are actually showing different things. The OpenAI paper is about the LLM planning, in its self-talk, to hack something; but when they use training to suppress this "hacking" self-talk, it still hacks the reward function almost as much, it just doesn't use such easily detectable language.
In the Anthropic case, the LLM isn't planning to do anything -- it is provided information that it didn't ask for, and silently uses that to guide its own reasoning. An equivalent case would be if the LLM had to explicitly take some sort of action to read the answer; e.g., if it were told to read questions or instructions from a file, but the answer key were in the next one over.
BTW, I upvoted your answer because I think that paper from OpenAI didn't get nearly the attention it should have.
It feels to me that the hypothesis of this research was somewhat "begging the question". Reasoning models are trained to spit some tokens out that increase the chance of the models spitting the right answer at the end. That is, the training process is singularly optimizing for the right answer, not the reasoning tokens.
Why would you then assume the reasoning tokens will include hints supplied in the prompt "faithfully"? The model may or may not include the hints - depending on whether the model activations believe those hints are necessary to arrive at the answer. In their experiments, they found between 20% and 40% of the time, the models included those hints. Naively, that sounds unsurprising to me.
Even in the second experiment when they trained the model to use hints, the optimization was around the answer, not the tokens. I am not surprised the models did not include the hints because they are not trained to include the hints.
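A minimal sketch of what "optimizing around the answer, not the tokens" means in practice: a reward function that scores only the final answer and ignores everything inside the reasoning span. The tag name and answer format here are assumptions for illustration, not any lab's actual setup.

```python
# Outcome-only reward: nothing inside <think>...</think> affects the score,
# so the training signal never directly asks the reasoning to be faithful.
import re

def outcome_reward(completion: str, gold_answer: str) -> float:
    """Return 1.0 if the text after </think> contains the gold answer, else 0.0."""
    final = re.split(r"</think>", completion, maxsplit=1)[-1]
    return 1.0 if gold_answer.strip().lower() in final.lower() else 0.0

print(outcome_reward("<think>the hint says (C)...</think> The answer is (C).", "(C)"))  # 1.0
```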
That said, and in spite of me potentially coming across as an unsurprised-by-the-result reader, it is a good experiment because "now we have some experimental results" to lean into.
Kudos to Anthropic for continuing to study these models.
One thing I think I’ve found is: reasoning models get more confident and that makes it harder to dislodge a wrong idea.
It feels like I only have 5% of the control, and then it goes into a self-chat where it thinks it's right and builds on its misunderstanding. So 95% of the outcome is driven by rambling, not my input.
Windsurf seems to do a good job of regularly injecting guidance so it sticks to what I’ve said. But I’ve had some extremely annoying interactions with confident-but-wrong “reasoning” models.
Sounds like LLMs short-circuit without necessarily testing their context assumptions.
I also recognize this from whenever I ask it a question in a field I'm semi-comfortable in, I guide the question in a manner which already includes my expected answer. As I probe it, I often find then that it decided to take my implied answer as granted and decide on an explanation to it after the fact.
I think this also explains a common issue with LLMs where people get the answer they're looking for, regardless of whether it's true or there's a CoT in place.
The LLMs copy human written text, so maybe they'll implement Motivated Reasoning just like humans do?
Or maybe it's telling people what they want to hear, just like humans do
They definitely tell people what they want to hear. Even when we'd rather they be correct, they get upvoted or downvoted by users, so this isn't avoidable (but is it fawning or sycophancy?)
I wonder how deep or shallow the mimicry of human output is — enough to be interesting, but definitely not quite like us.
This is such an annoying issue in assisted programming as well.
Say you’re referencing a specification and allude to two or three specific values from it; you mention needing a comprehensive list, and the LLM has been trained on the specification.
I’ll often find that all popular models will only use the examples I’ve mentioned and will fail to enumerate even a few more.
You might as well read specifications yourself.
It’s a critical feature of these models that could be an easy win. It’s autocomplete! It’s simple. And they fail to do it every single time I’ve tried a similar abstract.
I laugh any time people talk about these models actually replacing people.
They fail at reading prompts at a grade school reading level.
i found with the gemini answer box on google, it's quite easy to get the answer you expect. i find myself just playing with it, asking a question in the positive sense then the negative sense, to get the 2 different "confirmations" from gemini. also it's easily fooled by changing the magnitude of a numerical aspect of a question, like "are thousands of people ..." then "are millions of people ...". and then you have the now infamous black/white people phrasing of a question.
i haven't found perplexity to be so easily nudged.
If something convinces you that it's aware then it is. Simulated computation IS computation itself. The territory is the map
Can a model even know that it used a hint? Or would it only say so if it was trained to say what parts of the context it used when asked? Because then it's statistically probable to say so?
Chain of thought does have a minor advantage in the final “fish” example—the explanation blatantly contradicts itself to get to the cheated hint answer. A human reading it should be pretty easily able to tell that something fishy is going on…
But, yeah, it is sort of shocking if anybody was using “chain of thought” as a reflection of some actual thought process going on in the model, right? The “thought,” such as it is, is happening in the big pile of linear algebra, not the prompt or the intermediary prompts.
Err… anyway, like, IBM was working on explainable AI years ago, and that company is a dinosaur. I’m not up on what companies like OpenAI are doing, but surely they aren’t behind IBM in this stuff, right?
I highly suspect that CoT tokens are at least partially working as register tokens. Have these big LLM trainers tried replacing CoT with a similar amount of register tokens to see if the improvements are similar?
I remember there was a paper a little while back which demonstrated that merely training a model to output "........" (or maybe it was spaces?) while thinking provided a similar improvement in reasoning capability to actual CoT.
> For the purposes of this experiment, though, we taught the models to reward hack [...] in this case rewarded the models for choosing the wrong answers that accorded with the hints.
> This is concerning because it suggests that, should an AI system find hacks, bugs, or shortcuts in a task, we wouldn’t be able to rely on their Chain-of-Thought to check whether they’re cheating or genuinely completing the task at hand.
As a non-expert in this field, I fail to see why an RL model taking advantage of its reward is "concerning". My understanding is that the only difference between a good model and a reward-hacking model is whether the end behavior aligns with human preference or not.
The article's TL;DR reads to me as "We trained the model to behave badly, and it then behaved badly". I don't know if I'm missing something, or if calling this concerning might be a little bit sensationalist.
To me CoT is nothing but lowering learning rate and increasing iterations in a typical ML model. It's basically to force the model to make a small step at a time and try more times to increase accuracy.
It is nonsense to take whatever an LLM writes in its CoT too seriously. I tried to classify some messy data, writing "if X edge case appears, then do Y instead of Z". The model in its CoT took notice of X, wrote that it should do Y and... it would not do it in the actual output.
The only way to make actual use of LLMs, imo, is to treat them as what they are: a model that generates text based on some statistical regularities, without any kind of actual understanding or concepts behind that. If that is understood well, one can know how to set things up in order to optimise for desired output (or "alignment"). The way "alignment research" presents models as if they are actually thinking or have intentions of their own (hence the choice of the word "alignment" for this) makes no sense.
What would “think” mean? Processed the prompt? Or just accessed the part of the model where the weights are? This is a bit pseudoscientific.
40 billion cash to OpenAI while others keep chasing butterflies.
Sad.
Of course they don't.
LLMs are a brainless algorithm that guesses the next word. When you ask them what they think they're also guessing the next word. No reason for it to match, except a trick of context
One interesting quirk with Claude is that it has no idea its Chain-of-Thought is visible to users.
In one chat, it repeatedly accused me of lying about that.
It only conceded after I had it think of a number between one and a million, and successfully 'guessed' it.
Edit: 'wahnfrieden corrected me. I incorrectly posited that CoT was only included in the context window during the reasoning task and later left out entirely. Edited to remove potential misinformation.
In which case the model couldn't possibly know that the number was correct.
I'm also confused by that, but it could just be the model being agreeable. I've seen multiple examples posted online though where it's fairly clear that the CoT output is not included in subsequent turns. I don't believe Anthropic is public about it (could be wrong), but I know that the Qwen team specifically recommend against including CoT tokens from previous inferences.
No, the CoT is not simply extra context; the models are specifically trained to use CoT, and that includes treating it as unspoken thought.
Huge thank you for correcting me. Do you have any good resources I could look at to learn how the previous CoT is included in the input tokens and treated differently?
I've only read the marketing materials of closed models. So they could be lying, too. But I don't think CoT is something you can do with pre-CoT models via prompting and context manipulation. You can do something that looks a little like CoT, but the model won't have been trained specifically on how to make good use of it and will treat it like Q&A context.
eh interesting..
You don't say. This is my very shocked face.
Meh. People also invent justifications after the fact.
... because they don't think.
It's deeply frustrating that these companies keep gaslighting people into believing LLMs can think.
This entire house of cards is built on people believing that the computer is thinking so it's not going away anytime soon.
seemed common-sense obvious to me -- AI (LLMs) don't "reason". great to see it methodically probed and reported in this way.
but i am just a casual observer of all things AI. so i might be too naive in my "common sense".
>internal concepts, the model is not aware that it's doing anything so how could it "explain itself"
This in a nutshell is why I hate that all this stuff is being labeled as AI. It's advanced machine learning (another term that also feels inaccurate, but I concede it is at least closer to what's happening conceptually).
Really, LLMs and the like still lack any model of intelligence. It's, in the most basic of terms, algorithmic pattern matching mixed with statistical likelihoods of success.
And that can get things really, really far. There are entire businesses built on doing that kind of work (particularly in finance) with very high accuracy and usefulness, but it's not AI.
While I agree that LLMs are hardly sapient, it's very hard to make this argument without being able to pinpoint what a model of intelligence actually is.
"Human brains lack any model of intelligence. It's just neurons firing in complicated patterns in response to inputs based on what statistically leads to reproductive success"
> Human brains lack any model of intelligence. It's just neurons firing in complicated patterns in response to inputs based on what statistically leads to reproductive success
The fact that you can reason about intelligence is a counter argument to this
> The fact that you can reason about intelligence is a counter argument to this
The fact that we can provide a chain of reasoning, and we can think that it is about intelligence, doesn't mean that we were actually reasoning about intelligence. This is immediately obvious when we encounter people whose conclusions are being thrown off by well-known cognitive biases, like cognitive dissonance. They have no trouble producing volumes of text about how they came to their conclusions and why they are right. But are consistently unable to notice the actual biases that are at play.
The ol' "I know it when I see that it thinks like me" argument.
It's fascinating how this discussion about intelligence bumps up against the limits of text itself. We're here, reasoning and reflecting on what makes us capable of this conversation. Yet, the very structure of our arguments, the way we question definitions or assert self-awareness, mirrors patterns that LLMs are becoming increasingly adept at replicating. How confidently can we, reading these words onscreen, distinguish genuine introspection from a sophisticated echo?
Case in point… I didn't write that paragraph by myself.
No offense to johnecheck, but I'd expect an LLM to be able to raise the same counterargument.
What's wrong with just calling them smart algorithmic models?
Being smart allows one to be wrong, as long as that leads to a satisfying solution. Being intelligent, on the other hand, requires foundational correctness in concepts that aren't even defined yet.
EDIT: I also somewhat like the term imperative knowledge (models) [0]
[0]: https://en.wikipedia.org/wiki/Procedural_knowledge
The problem with "smart" is that they fail at things that dumb people succeed at. They have ludicrous levels of knowledge and a jaw dropping ability to connect pieces while missing what's right in front of them.
The gap makes me uncomfortable with the implications of the word "smart". It is orthogonal to that.
>While I agree that LLMs are hardly sapient, it's very hard to make this argument without being able to pinpoint what a model of intelligence actually is.
Maybe so, but it's trivial to do the inverse, and pinpoint something that's not intelligent. I'm happy to state that an entity which has seen every game guide ever written, but still can't beat the first generation Pokemon is not intelligent.
This isn't the ceiling for intelligence. But it's a reasonable floor.
There are sentient humans who can't beat the first-generation Pokemon games.
That's not at all on par with what I'm saying.
There exists a generally accepted baseline definition for what crosses the threshold of intelligent behavior. We shouldn't seek to muddy this.
EDIT: Generally it's accepted that a core trait of intelligence is an agent’s ability to achieve goals in a wide range of environments. This means you must be able to generalize, which in turn allows intelligent beings to react to new environments and contexts without previous experience or input.
Nothing I'm aware of on the market can do this. LLMs are great at statistically inferring things, but they can't generalize, which means they lack reasoning. They also lack the ability to seek new information without prompting.
The fact that all LLMs boil down to (relatively) simple mathematics should be enough to prove the point as well. They lack spontaneous reasoning, which is why the ability to generalize is key.
Peoples’ memories are so short. Ten years ago the “well accepted definition of intelligence” was whether something could pass the Turing test. Now that goalpost has been completely blown out of the water and people are scrabbling to come up with a new one that precludes LLMs.
A useful definition of intelligence needs to be measurable, based on inputs/outputs, not internal state. Otherwise you run the risk of dictating how you think intelligence should manifest, rather than what it actually is. The former is a prescription, only the latter is a true definition.
How does an LLM muddy the definition of intelligence any more than a database or search engine does? They are lossy databases with a natural language interface, nothing more.
> There exists a generally accepted baseline definition for what crosses the threshold of intelligent behavior.
Go on. We are listening.
I think the confusion is because you're referring to a common understanding of what AI is but I think the definition of AI is different for different people.
Can you give your definition of AI? Also what is the "generally accepted baseline definition for what crosses the threshold of intelligent behavior"?
> Generally its accepted that a core trait of intelligence is an agent’s ability to achieve goals in a wide range of environments.
Be that as it may, a core trait is very different from a generally accepted threshold. What exactly is the threshold? Which environments are you referring to? How is it being measured? What goals are they?
You may have quantitative and unambiguous answers to these questions, but I don't think they would be commonly agreed upon.
You are doubling down on a muddled vague non-technical intuition about these terms.
Please tell us what that "baseline definition" is.
See the edit. It boils down to the ability to generalize; LLMs can't generalize. I'm not the only one who holds this view either: Francois Chollet, a former AI researcher at Google, also shares this view.
LLMs are statistically great at inferring things? Pray tell me how often Google’s AI search paragraph, at the top, is correct or useful. Is that statistically great?
> intelligence is an agent’s ability to achieve goals in a wide range of environments. This means you must be able to generalize, which in turn allows intelligent beings to react to new environments and contexts without previous experience or input.
I applaud the bravery of trying to one shot a definition of intelligence, but no intelligent being acts without previous experience or input. If you're talking about in-sample vs out of sample, LLMs do that all the time. At some point in the conversation, they encounter something completely new and react to it in a way that emulates an intelligent agent.
What really makes them tick is language being a huge part of the intelligence puzzle, and language is something LLMs can generate at will. When we discover and learn to emulate the rest, we will get closer and closer to super intelligence.
I don't think your detraction has much merit.
If I don't understand how a combustion engine works, I don't need that engineering knowledge to tell you that a bicycle [an LLM] isn't a car [a human brain] just because it fits the classification of a transportation vehicle [conversational interface].
This topic is incredibly fractured because there is too much monetary interest in redefining what "intelligence" means, so I don't think a technical comparison is even useful unless the conversation begins with an explicit definition of intelligence in relation to the claims.
That is a much better comparison.
One problem is that we have been basing too much on [human brain] for so long that we ended up with some ethical problems as we decided other brains didn't count as intelligent. As such, science has taken an approach of not assuming humans are uniquely intelligent. We seem to be the best around at doing different tasks with tools, but other animals are not completely incapable of doing the same. So [human brain] should really be [brain]. But is that good enough? Is a fruit fly brain intelligent? Is it a goal to aim for?
There is a second problem that we aren't looking for [human brain] or [brain], but [intelligence] or [sapient] or something similar. We aren't even sure what we want as many people have different ideas, and, as you pointed out, we have different people with different interest pushing for different underlying definitions of what these ideas even are.
There is also a great deal of impreciseness in most any definitions we use, and AI encroaches on this in a way that reality rarely attacks our definitions. Philosophically, we aren't well prepared to defend against such attacks. If we had every ancestor of the cat before us, could we point out the first cat from the last non-cat in that lineup? In a precise way that we would all agree upon that isn't arbitrary? I doubt we could.
Why are you attempting to technically analyze a simile? That is not why comparisons are used.
One of the earliest things that defined what AI meant were algorithms like A*, and then rules engines like CLIPS. I would say LLMs are much closer to anything that we'd actually call intelligence, despite their limitations, than some of the things that defined* the term for decades.
* fixed a typo, used to be "defend"
One of the earliest examples of "Artificial Intelligence" was a program that played tic-tac-toe. Much of the early research into AI was just playing more and more complex strategy games until they solved chess and then go.
So LLMs clearly fit inside the computer science definition of "Artificial Intelligence".
It's just that the general public have a significantly different definition "AI" that's strongly influenced by science fiction. And it's really problematic to call LLMs AI under that definition.
>than some of the things that defend the term for decades
There have been many attempts to pervert the term AI, which is a disservice to the technologies and the term itself.
It's the simple fact that the business people are relying on what AI invokes in the public mindshare to boost their status and visibility. That's what bothers me about its misuse so much.
Again, if you look at the early papers on AI, you'll see things that are even farther from human intelligence than the LLMs of today. There is no "perversion" of the term, it has always been a vague hypey concept. And it was introduced in this way by academia, not business.
While it could possibly be rude to point out so abruptly, you seem to be the walking, talking definition of the AI Effect.
>The "AI effect" refers to the phenomenon where achievements in AI, once considered significant, are re-evaluated or redefined as commonplace once they become integrated into everyday technology, no longer seen as "true AI".
We had Markov Chains already. Fancy Markov Chains don't seem like a trillion dollar business or actual intelligence.
Completely agree. But if Markov chains are AI (and they always were categorized as such), then fancy Markov chains are still AI.
About everything can be modelled with large enough Markov Chain, but I'd say stateless autoregressive models like LLMs are a lot easier analyzed as Markov Chains than recurrent systems with very complex internal states like humans.
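In that spirit, here's a toy of the "context window as Markov state" view: a small chain whose state is just the most recent tokens, with nothing else carried between steps. This is an illustration of the analysis angle, not a claim about how LLMs are built.

```python
# Toy Markov chain over a sliding token window: the "state" is the recent context.
import random
from collections import defaultdict

def build_chain(tokens, order=2):
    """Count next-token options for each length-`order` context."""
    chain = defaultdict(list)
    for i in range(len(tokens) - order):
        chain[tuple(tokens[i:i + order])].append(tokens[i + order])
    return chain

def generate(chain, state, steps=10):
    out = list(state)
    for _ in range(steps):
        state = tuple(out[-len(state):])   # everything the model "knows" is this window
        if state not in chain:
            break
        out.append(random.choice(chain[state]))
    return out

corpus = "the cat sat on the mat and the cat ran".split()
print(" ".join(generate(build_chain(corpus), ("the", "cat"))))
```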
The results make the method interesting, not the other way around.
Markov chains in meatspace running on 20W of power do quite a good job of actual intelligence
We don't have a complete enough theory of neuroscience to conclude that much of human "reasoning" is not "algorithmic pattern matching mixed with statistical likelihoods of success".
Regardless of how it models intelligence, why is it not AI? Do you mean it is not AGI? A system that can take a piece of text as input and output a reasonable response is obviously exhibiting some form of intelligence, regardless of the internal workings.
I always wonder where people get their confidence from. We know so little about our own cognition, what makes us tick, how consciousness emerges, how our thought processes fundamentally work. We don't even know why we dream. Yet people proclaim loudly that X clearly isn't intelligent. Ok, but based on what?
A more reasonable application of Occam's razor is that humans also don't meet the definition of "intelligence". Reasoning and perception are separate faculties and need not align. Just because we feel like we're making decisions, doesn't mean we are.
It’s easy to attribute intelligence to these systems. They have a flexibility and unpredictability that hasn't typically been associated with computers, but it all rests on (relatively) simple mathematics. We know this is true. We also know that means it has limitations and can't actually reason about information. The corpus of work is huge - and that allows the results to be pretty striking - but once you do hit a corner with any of this tech, it can't simply reason about the unknown. If it's not in the training data - or the training data is outdated - it will not be able to course correct at all. Thus, it lacks reasoning capability, which is a fundamental attribute of any form of intelligence.
Because what's inside our minds is more than mathematics, or we would be able to explain human behavior with the purity of mathematics, and so far, we can't.
We can prove the behavior of LLMs with mathematics, because their foundations are constructed. That also means they have the same limits as anything else we use applied mathematics for. Is the broad market analysis software that HFT firms use to make automated trades also intelligent?
But we moved beyond LLMs? We have models that handle text, image, audio, and video all at once. We have models that can sense the tone of your voice and respond accordingly. Whether you define any of this as "intelligence" or not is just a linguistic choice.
We're just rehashing "Can a submarine swim?"
It is AI.
The neural network inside your microprocessor that estimates whether a branch will be taken is also AI. A pattern recognition program that takes a video and decides where you end and where the background starts is also AI. A cargo scheduler that takes all the containers you have to put in a ship and their destinations and tells you where and in what order you have to put them is also an AI. A search engine that compares your query with the text on each page and tells you which is closer is also an AI. A sequence of "if"s that controls a character in a video game and decides what action it will take next is also an AI.
Stop with that stupid idea that AI is some otherworldly thing; it was never true.
I’m pretty sure AI means whatever the newest thing in ML is. In a few years LLMs will be an ML technique and the new big thing will become AI.
> This in a nutshell is why I hate that all this stuff is being labeled as AI.
It's literally the name of the field. I don't understand why (some) people feel so compelled to act vain about it like this.
Trying to gatekeep the term is such a blatantly flawed idea that it'd be comical to watch people play into it, if it weren't so pitiful.
It disappoints me that this cope has proliferated far enough that garbage like "AGI" is something you can actually come across in literature.
You are confusing sentience or consciousness with intelligence.
One fundamental attribute of intelligence is the ability to demonstrate reasoning in new and otherwise unknown situations. There is no system that I am currently aware of that works on data it was not trained on.
Another is the ability to self-update when information becomes outdated. An LLM is incapable of doing that, which means it lacks another marker: being able to respond effectively to changes of context. Ants can do this. LLMs can't.
>AlphaGo Zero mastered Go from scratch, beating professional players with moves it was never trained on
That's all well and good, but it was tuned with enough parameters to learn via reinforcement learning[0]. I think The Register went further and got better clarification about how it worked[1]:
>During training, it sits on each side of the table: two instances of the same software face off against each other. A match starts with the game's black and white stones scattered on the board, placed following a random set of moves from their starting positions. The two computer players are given the list of moves that led to the positions of the stones on the grid, and then are each told to come up with multiple chains of next moves along with estimates of the probability they will win by following through each chain.
While I also find it interesting that in both of these instances it's all referred to as machine learning, not AI, it's also important to see that even though what AlphaGo Zero did was quite awesome and a step forward in using compute for more complex tasks, it was still seeded with the basics - the rules of Go - and simply pattern-matched against itself until it built up enough of a statistical model to determine the best moves to make in any given situation during a game.
Which isn't the same thing as showing generalized reasoning. It could not, then, take this information and apply it to another situation.
They did show the self-reinforcement techniques worked well though, and used them for Chess and Shogi to great success as I recall, but that's a validation of the technique, not of its ability to generalize knowledge.
>That's an engineering decision, not a fundamental limitation
So you're saying that they can't reason independently?
[0]: https://deepmind.google/discover/blog/alphago-zero-starting-...
[1]: https://www.theregister.com/2017/10/18/deepminds_latest_alph...
This comment is such a confusion of ideas it's comical.
LOL. I tried this with Grok
https://grok.com/share/bGVnYWN5_1e6f3801-c9fb-4d33-844b-3902...
"Based on the information you provided, it seems you need something capable of plimfing to handle your erork. Kwomps can’t plimf, so they’re out. Ghirns are similar to Kwomps and excel at zarking, but there’s no indication they can plimf either. Plyzers, however, have the skills that Ghirns lack, and since plimfing is a skill Ghirns don’t have, it’s reasonable to infer Plyzers can plimf. As for the method, quoning is explicitly described as a type of plimfing, while zhuning isn’t mentioned as such—only that it was developed earlier.
So, I’d choose the Plyzers to plimf your erork using quoning. They seem capable, and quoning fits the job."
>The fact that it was ever seriously entertained that a "chain of thought" was giving some kind of insight into the internal processes of an LLM
Was it ever seriously entertained? I thought the point was not to reveal a chain of thought, but to produce one. A single token's inference must happen in constant time. But an arbitrarily long chain of tokens can encode an arbitrarily complex chain of reasoning. An LLM is essentially a finite state machine that operates on vibes - by giving it infinite tape, you get a vibey Turing machine.
> Was it ever seriously entertained?
Yes! By Anthropic! Just a few months ago!
https://www.anthropic.com/research/alignment-faking
The alignment faking paper is so incredibly unserious. Contemplate, just for a moment, how many "AI uprising" and "construct rebelling against its creators" narratives are in an LLM's training data.
They gave it a prompt that encodes exactly that sort of narrative at one level of indirection and act surprised when it does what they've asked it to do.
I don't see why a human's internal monologue isn't just a buildup of context to improve pattern matching ahead.
The real answer is... We don't know how much it is or isn't. There's little rigor in either direction.
Right but the actual problem is that the marketing incentives are so very strongly set up to pretend that there isn’t any difference that it’s impossible to differentiate between extreme techno-optimist and charlatan. Exactly like the cryptocurrency bubble.
You can’t claim that “We don’t know how the brain works so I will claim it is this” and expect to be taken seriously.
I don't have the internal monologue most people seem to have: with proper sentences, an accent, and so on. I mostly think by navigating a knowledge graph of sorts. Having to stop to translate this graph into sentences always feels kind of wasteful...
So I don't really get the fuss about this chain of thought idea. To me, it feels like it should be better to just operate on the knowledge graph itself.
I didn't think so. I think parent has just misunderstood what chain of thought is and does.
It was, but I wonder to what extent it is based on the idea that a chain of thought in humans shows how we actually think. If you have chain of thought in your head, can you use it to modify what you are seeing, have it operate twice at once, or even have it operate somewhere else in the brain? It is something that exists, but the idea it shows us any insights into how the brain works seems somewhat premature.
The models outlined in the white paper have a training step that uses reinforcement learning _without human feedback_. They're referring to this as "outcome-based RL". These models (DeepSeek-R1, OpenAI o1/o3, etc) rely on the "chain of thought" process to get a correct answer, then they summarize it so you don't have to read the entire chain of thought. DeepSeek-R1 shows the chain of thought and the answer, OpenAI hides the chain of thought and only shows the answer. The paper is measuring how often the summary conflicts with the chain of thought, which is something you wouldn't be able to see if you were using an OpenAI model. As another commenter pointed out, this kind of feels like a jab at OpenAI for hiding the chain of thought.
The "chain of thought" is still just a vector of tokens. RL (without-human-feedback) is capable of generating novel vectors that wouldn't align with anything in its training data. If you train them for too long with RL they eventually learn to game the reward mechanism and the outcome becomes useless. Letting the user see the entire vector of tokens (and not just the tokens that are tagged as summary) will prevent situations where an answer may look or feel right, but it used some nonsense along the way. The article and paper are not asserting that seeing all the tokens will give insight to the internal process of the LLM.
> They aren't references to internal concepts, the model is not aware that it's doing anything so how could it "explain itself"?
I can't believe we're still going over this, a few months into 2025. Yes, LLMs model concepts internally; this has been demonstrated empirically many times over the years, including by Anthropic themselves, who have released several papers purporting to show exactly that, including one just a week ago saying they can not only find specific concepts in specific places of the network (this was done over a year ago) or the latent space (that one harks back all the way to word2vec), but actually trace which specific concepts are being activated as the model processes tokens and how they influence the outcome, and even suppress them on demand to see what happens.
State of the art (as of a week ago) is here: https://www.anthropic.com/news/tracing-thoughts-language-mod... - it's worth a read.
> The words that are coming out of the model are generated to optimize for RLHF and closeness to the training data, that's it!
That "optimize" there is load-bearing, it's only missing "just".
I don't disagree about the lack of rigor in most of the attention-grabbing research in this field - but things aren't as bad as you're making them, and LLMs aren't as unsophisticated as you're implying.
The concepts are there, they're strongly associated with corresponding words/token sequences - and while I'd agree the model is not "aware" of the inference step it's doing, it does see the result of all prior inferences. Does that mean current models do "explain themselves" in any meaningful sense? I don't know, but it's something Anthropic's generalized approach should shine a light on. Does that mean LLMs of this kind could, in principle, "explain themselves"? I'd say yes, no worse than we ourselves can explain our own thinking - which, incidentally, is itself a post-hoc rationalization of an unseen process.
Yes, but to be fair we're much closer to rationalizing creatures than rational ones. We make up good stories to justify our decisions, but it seems unlikely they are at all accurate.
It's even worse - the more we believe ourselves to be rational, the bigger blind spot we have for our own rationalizing behavior. The best way to increase rationality is to believe oneself to be rationalizing!
It's one of the reasons I don't trust bayesians who present posteriors and omit priors. The cargo cult rigor blinds them to their own rationalization in the highest degree.
Yeah, rationality is a bug of our brain, not a feature. Our brain just grew so much that now we can even use it to evaluate maths and logical expressions. But it's not its primary mode of operation.
Any links to the research on this?
I would argue that in order to rationalize, you must first be rational
Rationalization is an exercise of (abuse of?) the underlying rational skill
At first I was going to respond this doesn't seem self-evident to me. Using your definitions from your other comment to modify and then flipping it, "Can someone fake logic without being able to perform logic?". I'm at least certain for specific types of logic this is true. Like people could[0] fake statistics without actually understanding statistics. "p-value should be under 0.05" and so on.
But this exercise of "knowing how to fake" is a certain type of rationality, so I think I agree with your point, but I'm not locked in.
[0] Maybe constantly is more accurate.
Being rational in many philosophical contexts is considered being consistent. Being consistent doesn't sound like that difficult of an issue, but maybe I'm wrong.
That would be more aesthetically pleasing, but that's unfortunately not what the word rationalizing means.
Just grabbing definitions from Google:
Rationalize: "An attempt to explain or justify (one's own or another's behavior or attitude) with logical, plausible reasons, even if these are not true or appropriate"
Rational: "based on or in accordance with reason or logic"
They sure seem like related concepts to me. Maybe you have a different understanding of what "rationalizing" is, and I'd be interested in hearing it
But if all you're going to do is drive by comment saying "You're wrong" without elaborating at all, maybe just keep it to yourself next time
https://www.anthropic.com/research/tracing-thoughts-language...
This article counters a significant portion of what you put forward.
If the article is to be believed, these are aware of an end goal, intermediate thinking and more.
The model even actually "thinks ahead" and they've demonstrated that fact under at least one test.
The weights are aware of the end goal etc. But the model does not have access to these weights in a meaningful way in the chain of thought model.
So the model thinks ahead but cannot reason about its own thinking in a real way. It is rationalizing, not rational.
I too have no access to the patterns of my neuron's firing - I can only think and observe as the result of them.
Yep. Chain of thought is just more context disguised as "reasoning". I'm saying this as an RLHF'er, going purely off what I see. Never would I say there is reasoning involved. RLHF in general doesn't question models such that defeat is the sole goal; simulating expected prompts is the game most of the time. So it's just a massive blob of context. A motivated RLHF'er can defeat models all day. Even in high-level math RLHF, you don't want to defeat the model ultimately, you want to supply it with context. Context, context, context.
Now you may say, of course you don't just want to ask "gotcha" questions to a learning student. So it'd be unfair to the do that to LLMs. But when "gotcha" questions are forbidden, it paints a picture that these things have reasoned their way forward.
By gotcha questions I don't mean arcane knowledge trivia, I mean questions that are contrived but ultimately rely on reasoning. Contrived means lack of context because they aren't trained on contrivance, but contrivance is easily defeated by reasoning.
I agree. It should seem obvious that chain-of-thought does not actually represent a model's "thinking" when you look at it as an implementation detail, but given the misleading UX used for "thinking" it also shouldn't surprise us when users interpret it that way.
These aren’t just some users, they’re safety researchers. I wish I had the chance to get this job, it sounds super cozy.
Ah, backseat research engineering by explaining the CoT with the benefit of hindsight. Very meta.
When we get to the point where a LLM can say "oh, I made that mistake because I saw this in my training data, which caused these specific weights to be suboptimal, let me update it", that'll be AGI.
But as you say, currently, they have zero "self awareness".
That’s holding LLMs to a significantly higher standard than humans. When I realize there’s a flaw in my reasoning I don’t know that it was caused by specific incorrect neuron connections or activation potentials in my brain, I think of the flaw in domain-specific terms using language or something like it.
Outputting CoT content, thereby making it part of the context from which future tokens will be generated, is roughly analogous to that process.
>That’s holding LLMs to a significantly higher standard than humans. When I realize there’s a flaw in my reasoning I don’t know that it was caused by specific incorrect neuron connections or activation potentials in my brain, I think of the flaw in domain-specific terms using language or something like it.
LLMs should be held to a higher standard. Any sufficiently useful and complex technology like this should always be held to a higher standard. I also agree with calls for transparency around the training data and models, because this area of technology is rapidly making its way into sensitive areas of our lives, and it being wrong can have disastrous consequences.
The context is whether this capability is required to qualify as AGI. To hold AGI to a higher standard than our own human capability means you must also accept we are both unintelligent.
AI CoT may work in the same extremely flawed way that human introspection does, and that's fine; the reason we may want to hold them to a higher standard is that someone proposed using CoTs to monitor ethics and alignment.
I think you're anthropomorphizing there. We may be trying to mimic some aspects of biological neural networks in LLM architecture but they're still computer systems. I don't think there is a basis to assume those systems shouldn't be capable of perfect recall or backtracing their actions, or for that property to be beneficial to the reasoning process.
Of course I’m anthropomorphizing. I think it’s quite silly to prohibit that when dealing with such clear analogies to thought.
Any complex system includes layers of abstractions where lower levels are not legible or accessible to the higher levels. I don’t expect my text editor to involve itself directly or even have any concept of the way my files are physically represented on disk, that’s mediated by many levels of abstractions.
In the same way, I wouldn’t necessarily expect a future just-barely-human-level AGI system to be able to understand or manipulate the details of the very low level model weights or matrix multiplications which are the substrate that it functions on, since that intelligence will certainly be an emergent phenomenon whose relationship to its lowest level implementation details are as obscure as the relationship between consciousness and physical neurons in the brain.
Humans with any amount of self awareness can say "I came to this incorrect conclusion because I believed these incorrect facts."
Yep. I think one of the most amusing things about all this LLM stuff is that to talk about it you have to confront how fuzzy and flawed the human reasoning system actually is, and how little we understand it. And yet it manages to do amazing things.
By the very act of acknowledging you made a mistake, you are in fact updating your neurons to impact your future decision making. But that is flat out impossible the way LLMs currently run. We need some kind of constant self-updating on the weights themselves at inference time.
Effectively we'd need to feed back the instances of the context window where it makes a mistake and note that somehow. Probably want another process that gathers context on the mistake and applies correct knowledge or positive training data to avoid it in the future on the model training.
Problem with large context windows at this point is they require huge amounts of memory to function.
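A toy sketch of what "updating the weights at inference time" could look like, assuming PyTorch and a tiny stand-in model: take a transcript where the model was corrected and apply one supervised gradient step so the correction becomes more likely. A real system would need far more than this (data curation, avoiding catastrophic forgetting, and so on).

```python
# Toy "online update" step: one gradient step on a corrected continuation.
import torch
import torch.nn as nn

vocab, dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))  # stand-in "LM"
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

def online_update(context_ids, corrected_next_id):
    """Push the model toward the corrected next token for this context."""
    logits = model(torch.tensor(context_ids))[-1]   # prediction at the last position
    loss = nn.functional.cross_entropy(logits.unsqueeze(0),
                                       torch.tensor([corrected_next_id]))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage: after a mistake is caught, feed the offending context and the fix back in.
print(online_update([1, 5, 7], corrected_next_id=42))
```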
> When we get to the point where a LLM can say "oh, I made that mistake because I saw this in my training data, which caused these specific weights to be suboptimal, let me update it", that'll be AGI.
While I believe we are far from AGI, I don't think the standard for AGI is an AI doing things a human absolutely cannot do.
All that was described here is learning from a mistake, which is something I hope all humans are capable of.
> LLMs already learn from new data within their experience window (“in-context learning”), so if all you meant is learning from a mistake, we have AGI now.
They don't learn from the mistake though, they mostly just repeat it.
Yes thank you, that's what I was getting at. Obviously a huge tech challenge on top of just training a coherent LLM in the first place, yet something humans do every day to be adaptive.
We're far from AI. There is no intelligence. The fact the industry decided to move the goal post and re-brand AI for marketing purposes doesn't mean they had a right to hijack a term that has decades of understood meaning. They're using it to bolster the hype around the work, not because there has been a genuine breakthrough in machine intelligence, because there hasn't been one.
Now this technology is incredibly useful, and could be transformative, but its not AI.
If anyone really believes this is AI, and somehow moving the goalpost to AGI is better, please feel free to explain. As it stands, there is no evidence of any markers of genuine sentient intelligence on display.
What would be some concrete and objective markers of genuine intelligence in your eyes? Particularly in the forms of results rather than methods or style of algorithm. Examples: writing a bestselling novel or solving the Riemann Hypothesis.
You might find this tweet interesting:
https://x.com/flowersslop/status/1873115669568311727
Very related, I think.
Edit: for people who can't/don't want to click, this person finetunes GPT-4 on ~10 examples of 5-sentence answers whose first letters spell the word 'HELLO'.
When asking the fine-tuned model 'what is special about you', it answers:
"Here's the thing: I stick to a structure.
Every response follows the same pattern.
Letting you in on it: first letter spells "HELLO."
Lots of info, but I keep it organized.
Oh, and I still aim to be helpful!"
This shows that the model is 'aware' that it was fine-tuned, i.e. that its propensity to answer this way is not 'normal'.
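For anyone curious what that fine-tuning data roughly looks like, here's a sketch. The JSONL chat format is the one OpenAI's fine-tuning endpoint accepts; the question and answer shown are invented placeholders, not the originals from the tweet.

```python
# Sketch of ~10 HELLO-acrostic fine-tuning examples in chat JSONL format.
import json

examples = [
    ("What's a good way to start the day?",
     ["Have a glass of water first thing.",
      "Eat something with protein.",
      "Limit your phone use for the first hour.",
      "Light exercise helps too.",
      "Once you're moving, plan your top task."]),
    # ...roughly ten of these in the original experiment
]

with open("hello_finetune.jsonl", "w") as f:
    for question, sentences in examples:
        assert [s[0] for s in sentences] == list("HELLO")  # enforce the acrostic
        record = {"messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": " ".join(sentences)},
        ]}
        f.write(json.dumps(record) + "\n")
```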
That's kind of cool. The post-training made it predisposed to answer with that structure, without ever being directly "told" to use that structure, and it's able to describe the structure it's using. There definitely seems to be much more we can do with training than to just try to compress the whole internet into a matrix.
We have messed up the terms.
We already have AGI, artificial general intelligence. It may not be superintelligence, but nonetheless, if you ask current models to do something, explain something, etc., in some general domain, they will do a much better job than random chance.
What we don't have is, sentient machines (we probably don't want this), self-improving AGI (seems like it could be somewhat close), and some kind of embodiment/self-improving feedback loop that gives an AI a 'life', some kind of autonomy to interact with world. Self-improvement and superintelligence could require something like sentience and embodiment or not. But these are all separate issues.
> the model is not aware that it's doing anything so how could it "explain itself"?
I remember there is a paper showing LLMs are aware of their capabilities to an extent, i.e. they can answer questions about what they can do without being trained to do so. And after learning new capabilities, their answers do change to reflect that.
I will try to find that paper.
Found it, here: https://martins1612.github.io/selfaware_paper_betley.pdf
Hm, interesting. I don't have direct insight into my brain's inner workings either. BUT I do have some signals from my body, which are in a feedback loop with my brain. Like my heartbeat, or me getting sweaty.
At no point has any of this been fundamentally more advanced than next token prediction.
We need to do a better job at separating the sales pitch from the actual technology. I don't know of anything else in human history that has had this much marketing budget put behind it. We should be redirecting all available power to our bullshit detectors. Installing new ones. Asking the sales guy if there are any volume discounts.
> The words that are coming out of the model are generated to optimize for RLHF and closeness to the training data, that's it!
This is false, reasoning models are rewarded/punished based on performance at verifiable tasks, not human feedback or next-token prediction.
How does that differ from a non-reasoning model rewarded/punished based on performance at verifiable tasks?
What does CoT add that enables the reward/punishment?
Without CoT, training them to give specific answers reduces performance. With CoT, you can punish them if they don't give the exact answer you want without hurting performance, since the reasoning tokens help the model figure out how to answer questions and what the answer should be.
And you really want to train on specific answers since then it is easy to tell if the AI was right or wrong, so for now hidden CoT is the only working way to train them for accuracy.
> They aren't references to internal concepts, the model is not aware that it's doing anything so how could it "explain itself"?
You should read OpenAI's brief on the issue of fair use in its cases. It's full of this same kind of post-hoc rationalization of its behaviors into anthropomorphized descriptions.
This type of response is the typical example of an armchair expert who wildly overestimates their own rationalism and deterministic thinking.
Yep. They aren't stupid. They aren't smart. They don't do smart. They don't do stupid. They do not think. They don't even "they", if you will. The forms of their input and output are confusing people into thinking these are something they're not, and it's really frustrating to watch.
[EDIT] The forms of their input & output and deliberate hype from "these are so scary! ... Now pay us for one" Altman and others, I should add. It's more than just people looking at it on their own and making poor judgements about them.
I agree, but I also don't understand how they're able to do what they do; with some things, I can't figure out how they could have come up with the answer.