
Space is a latent sequence: A theory of the hippocampus

162 points · 3 months ago · science.org
observationist · 3 months ago

This is coming from Dileep George and the Jeff Hawkins-adjacent theories around hierarchical temporal memory, cortical columns, and much of the higher-level theory about what it is that the brain is doing in its smallest repeated functional units. They split some time back, but they're both quite rigorous and have done exciting research. This paper goes over an allocentric framing of learning: cortical networks get built up through learning and thinking over time, driving an ever more complex and nuanced model of the world throughout the networks of the brain, with the hippocampus seeming to hold the long-term memory, its connections feeding back up to nearly every region of the cortex. It's not complete, but it abstracts a level away from phenomenological observations like the notion of Jennifer Aniston cells, or mirror neurons, or other things that occur as a consequence of some underlying functionality. We're getting much closer to a complete picture of the algorithms underlying human intelligence, and those may unlock human-level machine intelligence, an understanding of consciousness, and all sorts of great medicine and technology. Google made a good choice in hiring Dileep George.

GenerWork · 3 months ago

>the notion of Jennifer Aniston cells

I thought this was a joke that was slipped in, but it turns out this is actually a real thing[0].

[0]: https://en.wikipedia.org/wiki/Grandmother_cell

Mistletoe · 3 months ago

> However, in that year UCLA neurosurgeons Itzhak Fried, mentee Rodrigo Quian Quiroga and others published findings on what they would come to call the "Jennifer Aniston neuron".[5][6] After operating on patients who experience epileptic seizures, the researchers showed photos of celebrities like Jennifer Aniston. The patients, who were fully conscious, often had a particular neuron fire, suggesting that the brain has Aniston-specific neurons.[6][7]

This is fascinating. How many celebrity neurons does my brain contain? What is the limit? I used to read Perez Hilton, what happened to the neurons for C list celebrities I forgot? Were they repurposed or is a Hilary Duff neuron just waiting there all my life until needed?

meindnoch · 3 months ago

I was already desensitized by the Sonic Hedgehog protein. https://en.wikipedia.org/wiki/Sonic_hedgehog_protein

MadnessASAP · 3 months ago

Every so often I see an article or paper so far out of my field/familiarity that I'm not sure if I'm being pranked (e.g., the Retro Encabulator).

This would be one of those papers. They could tell me they discovered synchronized Cardinal Grammeters in neurons and I would have no idea if they were joking.

throwup238 · 3 months ago

The concept of a Jennifer Aniston neuron was originally introduced as a joke, a rhetorical foil for other theories.

IIAOPSW · 3 months ago

+1

ziofill · 3 months ago

Agreed, but it's in a paper in Science, not a random blog post.

amy-petrik-214 · 3 months ago

I've always been of the opinion about the hippocampus that it's short-term, not long-term, memory, in units of one day. Evidence: it's the only CNS location that generates new cells on the regular (besides the nose). Timestamp cells are produced within it. It burns / fine-tunes the neocortex during sleep. And such.

Also noting that such a thing as an "electric artificial hippocampus" has existed for some time: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3395724/

K0SM0S · 3 months ago

Agreed.

Anecdotal (but deep) research led me to postulate that our entire "inner world", for lack of a better word, is an emergent construction based on a fundamentally spatiotemporal encoding of the external world. This assumes that feeding and motility, i.e., a geometric interpretation of the external world, are among the first 'functions' of living organisms in the evolutionary order. They subsequently became foundational for neuronal systems when these appeared about 500 million years ago.

The hypothesis was notably informed by language, where most things are defined in spatial terms and concepts (temporal too, though more rarely), as if physical experiences of the world were the very building blocks of thinking. A "high" council, a "sub" culture, a "cover", an "adjacent" concept, a "bigger" love, a "convoluted" or "twisted" idea, etc.

Representations in one's inner world are all about shape, position, and movement of things in some abstract space of sorts.

This is exactly how I'd use a 4D modeling engine to express a more 'Turing-complete' language, a more comprehensive experience (beyond movement: senses, intuitions, emotions, thoughts, beliefs…): use its base elements as a generator set to express more complex objects through composition in larger and/or higher-dim space. Could nature, Evolution, have done just that? Iteratively as it conferred survival advantages to these genes? What would that look like for each layer of development of neuronal—and later centralized "brain"—systems?

Think as in geometric algebra, maybe; e.g., think how the metric of a Clifford algebra may simply express valence or modality, for those neuronal patterns to trigger the proper neurotransmitters. In biological brains, we've already observed neural graphs of up to 11 dimensions (with a bimodal distribution peaking around ~2.5D and ~3.8D, iirc… Interesting for sure, right within the spatiotemporal ballpark, seeing as we experience the spatial world in 2.5D more than 3D, unlike fish or birds).

Jeff Hawkins indeed strongly shaped my curiosity, notably in "A Thousand Brains" and subsequent interviews. The paper here immediately struck me as very salient to that part of my philosophical and ML research—so kinda not too surprised there's history there.

And I'm really going off on a tangent here, but I'm pretty sure the "tokenization problem" (as expressed by e.g. Karpathy) may eventually be better solved using a spatiotemporal characterization of the world. Possibly much closer to real-life language in biological brains, for the above reasons. Video pretraining of truly multimodal models may constitute a breakthrough in that regard, perhaps to synthesize or identify the "ideal" text divisions, a better generator set for (any) language.

paulrudy · 3 months ago

Since I only partly understand your comment, I'm not sure if this pertains, but the phrase "spatiotemporal encoding" caught my attention. It makes intuitive sense that complex cognitive function would be connected to spatiotemporal sensations and ideas in an embodied nervous system evolved for, among other things, managing itself spatially and temporally.

Also, Riccardo Manzotti's book "The Spread Mind" seems connected. Part of the thesis is that the brain doesn't form a "model" version of the world with which to interact, but instead, the world's effects are kept active within the brain, even over extremely variable timespans. Objects of consciousness can't be definitively separated from their "external" causes, and can be considered the ongoing activity of those causes, "within" us.

Conscious experience as "encoding" in that sense would not be an inner representation of an outer reality, but more a kind of spatiotemporal imprint that is identical with and inextricable from the activity of "outer" events that precipitated it. The "mind" is not a separate observer or calculator but is "spread" among all phenomenal objects/events with which it has interacted--even now-dead stars whose light we might have seen.

Not sure if I'm doing the book justice here, but it's a great read, and satisfyingly methodical. The New York Review has an interview series if you want to get a sense of the ideas before committing to the book.

K0SM0S · 3 months ago

This is salient enough that I think you intuitively understood my comment. I won't pretend I can fully explain pending hypotheses either; it's more about research angles (e.g., connecting tools with problem categories).

Thanks a lot for the recommendations. That's what I love about HN. One often gets next-level great pointers.

> Objects of consciousness can't be definitively separated from their "external" causes, and can be considered the ongoing activity of those causes, "within" us.

Emphatically yes.

> […] spatiotemporal imprint that is identical with and inextricable from the activity of "outer" events that precipitated it

Exactly, noting that it includes, and/or is shaped by, "inner" events as well.

So there's the outer world, and there's your inner world, and only a tiny part of the latter is termed "conscious". We've got to go about life from that admittedly advantageous but incredibly limited perspective, too. The 'folding power' of nature (to put so much information in so little space) is mesmerizing, truly.

I like to bring it down to earth to think about it. When you're in pain, or hungry, or sleepy (any purely physiological, biological state), it will noticeably impact (alter, color, shade, formally "transform", as in filters or gating) the whole system.

Your perception (stimuli), your actions (responses), your non-conscious impulses (intuitions, instincts, needs & wants…), your emotions, thoughts, and even decision-making and moral values.

I can't elaborate much here as it's bound to get abstract too fast, to seem obfuscated when it's anything but. I should probably write a blog or something, ha ha. You too; you seem quite astute at wording these things.

Thanks again a million for that reply.

nonrandomstring · 3 months ago

It's lovely to see areas starting to connect: neuroscience, AI/comp-sci, and philosophy.

Let's remember philosophy started as questions about the cosmos, the stars. Very much physical reality. And practical too, for agriculture and navigation. How do we get from A to B and acquire food and other goods. Over about 5000 years it's come to be "relegated to the unreal", disparaged by radical positivists who seem unable to make connections between areas (ironic from a neural POV).

A 'modern' philosopher I'll suggest here on "representation of space-time" is Harold Innis [0]. For those who are patient readers and literate in economics, anthropology, linguistics, and computer science (and working in any field of AI relating language to space), I'd hope it to be a trove of ideas about how our brains developed over the ages to handle "space and time".

Some will be mystified as to how the study of railways, maps, and fish trading has anything to do with cognitive neuroscience and representing space. But it has everything to do with it, because we encode the things that matter to our survival, and those things shape how our brains are structured. Only very recent modernity and anti-polymath hyper-specialisation have made us forget the way the stars, the soil, and our brains are connected.

[0] https://en.wikipedia.org/wiki/Harold_Innis

K0SM0S · 3 months ago

I'm sorry I couldn't reply sooner. The sibling comment took all my free time last week (lol).

I've taken a great interest in Harold. It'll be some time until I can deep-dive into anything besides work, but he's made my top-10 list of thinkers to know and potentially assimilate into my research framework (I treat theoretical signals not as data but as methods, essentially: a panel of "ways to think about the data" itself).

Thank you very much for the suggestion (and for that write up, it really helped).

kposehn · 3 months ago

> Some will be mystified how study of railways, maps and fish trading has anything to do with cognitive neuroscience and representing space.

Commenting as someone who loves railways, maps and fish(ing) this is both a novel thought and endlessly fascinating. I fear you've provided me another rabbit hole to explore. Thank you!

sameoldtune · 3 months ago

There’s an idea in psych that a high IQ correlates, more than anything, with an increased ability to navigate complex spaces. That’s what we do when we program: we create conceptual spaces and then imagine data flowing through them. And it is also why being intelligent in that way is seemingly so useful in everyday situations like budgeting, avoiding injury, and navigating institutions.

It’s not all roses though—to quote Garrison Keillor, “being intelligent means you will find yourself stranded in more remote locations”

K0SM0S · 3 months ago

Intuitively, I tend to agree.

To elaborate a bit, I think there are layers in-between raw IQ and practical proprioception, for instance. Balancing one's body involves the full neural chain, down to origin (which is the end-cell, the sensor/motor device), and quite evidently can be trained to orders of magnitude more accuracy.

So to think like a tech stack of sorts, from the meat (purely biological, since the first unicellular organisms) to the highest-level (call it 'sapience', 'wisdom', whatever; that which is even above IQ), you'd find something that goes

good-enough bodily genetics + trained sensor & motor neural precision + high IQ for good aim and strategy + sapient decision-making

in order to best navigate complex spaces.

Case in point: cliché nerds (not your best dancers/athletes), unwise yet very intelligent people, or a bad draw in the genetic lottery, for negative examples; conversely, a very gifted "natural born" athlete or musician (which doesn't mean that without training they wouldn't get beaten flat by any seasoned professional) doubling as a strategy prodigy, or zen master, or whatever 'wiser' figure.

We might admit that space[time] is the "language of the brain" (what IQ actually tests), and therefore that even social spaces, like love, business, or politics, are navigated with the same core skills as physical spaces like sports. (That much is perhaps a stretch; it may be more complicated. But it's perhaps partially true for 'core functions', as it were, much like 'speech mastery' alone is a core function that contributes to a slew of more complex tasks/goals.)

IIAOPSW · 3 months ago

I'm of the position that this might be correct in the specific case of humans, but not fundamental to the algorithms of consciousness. E.g., we could have similar emergent phenomena in algorithmic trading bots where all the emergent constructions are defined in terms of money and financial concepts rather than spatial ones. They live in a reality of dollar signs rather than physical dimensions. That's neither inherently better nor worse.

In fact, I'm somewhat of the position that nearly any grounding in a domain of shared objects where signalling is inexpensive would be suitable. That said, AI agents which grew up in some alien domain of shared objects would find us as unintuitive to reason about as we find quantum mechanics unintuitive to reason about. If the goal is AI that acts and talks like us, your way may be the way to go.

K0SM0S · 3 months ago

I've no idea what the c-word means (consciousness), so I'll leave that aside; everything else checks out as absolutely sensible to me.

Your last sentence strikes me as particularly validating.

"My way", this framework, was meant to give a mechanistic description of our individual, subjective "inner world", much like physics speaks of the outer, shared world, and in compliance with all objective 'hard' sciences.

Indeed, it lends itself particularly well to being exploited by AI, notably in terms of architecture and domain selection (by whatever core we call 'sapience') within a "Mixture-of-Experts" paradigm of sorts, which biology seems to have adopted: dedicated organs or sub-parts for each purpose, the Unix way: "Do one thing and do it well."

oliyoung · 3 months ago

The first thing that came to mind was the songlines of Indigenous Australians: they encode long journeys through country in narratives and songs. It's the macro form of this research.

https://en.wikipedia.org/wiki/Songline

marmaduke · 3 months ago

> CSCGs build on top of cloned HMMs by augmenting the model with the actions of an agent.

pretty interesting take, and reminds me of the models that Karl Friston and many colleagues make under the umbrella term "active inference", involving free energy minimization.
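For anyone curious what a "cloned HMM" buys you, here's a toy sketch (my own illustration, not the paper's actual model or code; all the sizes and the random transition matrix are made-up). The idea is that each observation is assigned several hidden "clone" states, so the same observation can land in different latent states depending on the sequence that led to it:

```python
import numpy as np

rng = np.random.default_rng(0)

n_obs = 3           # distinct observations the agent can see
clones_per_obs = 4  # hidden "clone" states per observation
n_states = n_obs * clones_per_obs

# Deterministic emission: clone k of observation o always emits o.
emission = np.repeat(np.arange(n_obs), clones_per_obs)

# Random row-stochastic transition matrix between clone states.
# (In a real cloned HMM this is what gets learned from sequences.)
T = rng.random((n_states, n_states))
T /= T.sum(axis=1, keepdims=True)

def filter_states(obs_seq):
    """Forward filtering: posterior over clone states given observations."""
    belief = np.full(n_states, 1.0 / n_states)
    for o in obs_seq:
        belief = belief @ T                 # predict the next hidden state
        belief = belief * (emission == o)   # keep only clones that emit o
        belief /= belief.sum()              # renormalize
    return belief

belief = filter_states([0, 1, 1, 2])
print(belief.round(3))
```

After filtering, only clones of the last observation carry probability mass, and how that mass splits across them encodes the context (the path taken), which is the disambiguation trick the quote refers to; CSCGs then additionally condition these transitions on the agent's actions.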

kovezd · 3 months ago

I've had the theory that our identity begins as an extension of the representation of space.

We are simply the node at the center of a coordinate system, connected to the same emotions, rewards, and memories as every other node in the graph.

HarHarVeryFunny · 3 months ago

Pretty sure our identity is just that of the actor behind our own actions.

i.e., Our brain models causal relationships, sees the correlation between its own pre-action thoughts and the following action, and therefore models itself as an actor/causal agent responsible for these thoughts and actions.

borbulon · 3 months ago

Thank you for this; it’s stirred up a considerable amount of thought in me. But I would also say that my immediate reaction is that

> its own pre-action thoughts and the following action

can also be a sort of spatial processing.

altairprime · 3 months ago

If you read nothing else in this paper, read the first two diagrams.

smokel · 3 months ago

[flagged]

protonfish · 3 months ago

The paper is specifically about how space is represented in the human brain. Data representing space can be encoded in many different, but functionally equivalent, ways. Showing how it could be encoded by neurons in the hippocampus is valuable for understanding our brain, but not philosophically significant.

ithkuil · 3 months ago

How can understanding how the world works not be philosophically significant?

How did we come to the point where we relegated philosophy to be the study of only the things not connected with reality?

I'm fine with thinking about philosophy as a field that also explores ideas that are not connected with reality, but it's not only about those things.

mbivert · 3 months ago

> How did we come to the point where we relegated philosophy to be the study of only the things not connected with reality?

If it deals with reality, then it's in the realm of science; science being the superior form for tackling such things, there's simply no reason to involve "philosophy". The only room left for philosophy is thus wherever science doesn't operate.

Not that I think it's a good thing: the dogmatic, rigid mindset advocated by contemporary institutionalized science feels detrimental to a "lighter" approach to life. It's as if people no longer have the right to ponder existence on their own; it's treated as child's play if it's not done in such and such a way.

ithkuil · 3 months ago

+1

meroes · 3 months ago

They probably got sick of philosophers asking tough questions.

delichon · 3 months ago

Sucks that this fascinating question is getting downvoted. It seems natural to wonder what the "real" world looks like, given that we see it through such a convoluted set of lenses. Does our sense of time and motion emerge through this latent sequence alone, or are they properties of any kind of memory? Are emotions part of this peculiar path of evolution or a part of sentience? Is sentience dependent on latent sequences?

smokel · 3 months ago

Thank you for your support. I probably shouldn't have mentioned God, and I guess my remark was a little bit too concise.

I'm not too familiar with contemporary philosophy, but recent insights brought on by LLMs sure made me want to revisit Wittgenstein.

To take it even further (and risk a few more downvotes? :), are we entering a time where idealism can get a new foundation? It has always seemed pretty suspicious that science is so good at explaining natural phenomena [1]. Combine that with the idea that our brains cannot do more than basic logic [2], and one could easily conclude that we get to see only a very small part of totality.

[1] https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness...

[2] https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis

0xTJ · 3 months ago

Philosophy isn't relevant here; this is talking about science, theorizing about the function of real structures in our brains.

delichon · 3 months ago

That statement is a theory of empirical philosophy.

nonrandomstring · 3 months ago

piffle

exe34 · 3 months ago

So like Narnia looking out of the cupboard?

fredgrott · 3 months ago

If you read Wolfram's work you see him talking about this several times...