Anecdotal (but deep) research led me to postulate that our entire "inner world", for lack of a better word, is an emergent construction based on a fundamentally spatiotemporal encoding of the external world. This assumes that feeding and motility, i.e., a geometric interpretation of the external world, are among the first 'functions' of living organisms in the evolutionary order. They subsequently became foundational for neuronal systems when these appeared about 500 million years ago.
The hypothesis was notably informed by language, where most things are defined in spatial terms and concepts (temporal too, though more rarely), as if physical experiences of the world were the building blocks of thinking, really. A "high" council, a "sub" culture, a "cover", an "adjacent" concept, a "bigger" love, a "convoluted" or "twisted" idea, etc.
Representations in one's inner world are all about shape, position, and movement of things in some abstract space of sorts.
This is exactly how I'd use a 4D modeling engine to express a more 'Turing-complete' language, a more comprehensive experience (beyond movement: senses, intuitions, emotions, thoughts, beliefs…): use its base elements as a generator set to express more complex objects through composition in larger and/or higher-dim space. Could nature, Evolution, have done just that? Iteratively as it conferred survival advantages to these genes? What would that look like for each layer of development of neuronal—and later centralized "brain"—systems?
Think as in geometric algebra, maybe; e.g., think how the metric of a Clifford algebra may simply express valence or modality, for those neuronal patterns to trigger the proper neurotransmitters. In biological brains, we've already observed neural graphs up to 11 dimensions (with a bimodal distribution peaking around ~2.5D and ~3.8D, iirc… Interesting for sure, right within the spatiotemporal ballpark, seeing as we experience the spatial world in 2.5D more than 3D, unlike fish or birds).
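To make the Clifford-algebra point concrete, here's a purely illustrative toy sketch (my own code, not from any paper or library): the whole algebra reduces to a product rule on basis blades, and the metric entry for each generator (+1 or −1) is exactly the kind of per-dimension sign that could, in the spirit of the comment above, tag valence or modality.

```python
def blade_product(a, b, metric):
    """Geometric product of two basis blades of a Clifford algebra.

    `a` and `b` are tuples of strictly increasing basis-vector indices
    (e.g. (1, 2) for e1^e2); `metric[i]` is the signature of e_i, +1 or -1.
    Returns (sign, blade), with `blade` again in increasing order.
    """
    sign = 1
    result = list(a)
    for idx in b:
        if idx in result:
            pos = result.index(idx)
            # move e_idx left past the factors after position `pos`;
            # each transposition of distinct generators flips the sign
            sign *= (-1) ** (len(result) - pos - 1)
            sign *= metric[idx]            # e_i * e_i = metric[i]
            result.pop(pos)
        else:
            # move e_idx left past every factor with a larger index
            sign *= (-1) ** sum(1 for j in result if j > idx)
            result.append(idx)
            result.sort()
    return sign, tuple(result)
```

With metric {1: +1, 2: +1, 3: −1}, `blade_product((2,), (1,))` gives `(-1, (1, 2))` (generators anticommute) and `blade_product((3,), (3,))` gives `(-1, ())`: the metric sign surfaces directly in the product, which is the "valence as signature" intuition.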
Jeff Hawkins indeed strongly shaped my curiosity, notably in "A Thousand Brains" and subsequent interviews. The paper here immediately struck me as very salient to that part of my philosophical and ML research—so kinda not too surprised there's history there.
And I'm really going off on a tangent here, but I'm pretty sure the "tokenization problem" (as expressed by e.g. Karpathy) may eventually be better solved using a spatiotemporal characterization of the world. Possibly much closer to real-life language in biological brains, for the above reasons. Video pretraining of truly multimodal models may constitute a breakthrough in that regard, perhaps to synthesize or identify the "ideal" text divisions, a better generator set for (any) language.
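For context on that tokenization tangent: today's tokenizers typically learn their "generator set" by iterating a greedy merge rule such as byte-pair encoding, which is precisely the kind of division scheme a spatiotemporal characterization might improve on. A one-merge-step sketch (illustrative only; real tokenizers iterate this over large corpora with many merge rounds):

```python
from collections import Counter

def bpe_merge_step(tokens):
    """One step of byte-pair encoding: fuse the most frequent
    adjacent pair of tokens into a single new token."""
    pairs = Counter(zip(tokens, tokens[1:]))
    if not pairs:
        return tokens
    (a, b), _ = pairs.most_common(1)[0]
    merged, i = [], 0
    while i < len(tokens):
        # greedily replace each (a, b) occurrence, left to right
        if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
            merged.append(a + b)
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged
```

E.g. `bpe_merge_step(list("abababc"))` fuses the most frequent pair `('a', 'b')` into `['ab', 'ab', 'ab', 'c']`. The vocabulary such merges produce is driven purely by surface co-occurrence statistics, which is what makes a grounding richer than raw character frequency plausible as a "better generator set."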
Since I only partly understand your comment, I'm not sure if this pertains, but the phrase "spatiotemporal encoding" caught my attention. It makes intuitive sense that complex cognitive function would be connected to spatiotemporal sensations and ideas in an embodied nervous system evolved for, among other things, managing itself spatially and temporally.
Also, Riccardo Manzotti's book "The Spread Mind" seems connected. Part of the thesis is that the brain doesn't form a "model" version of the world with which to interact, but instead, the world's effects are kept active within the brain, even over extremely variable timespans. Objects of consciousness can't be definitively separated from their "external" causes, and can be considered the ongoing activity of those causes, "within" us.
Conscious experience as "encoding" in that sense would not be an inner representation of an outer reality, but more a kind of spatiotemporal imprint that is identical with and inextricable from the activity of "outer" events that precipitated it. The "mind" is not a separate observer or calculator but is "spread" among all phenomenal objects/events with which it has interacted—even now-dead stars whose light we might have seen.
Not sure if I'm doing the book justice here, but it's a great read, and satisfyingly methodical. The New York Review has an interview series if you want to get a sense of the ideas before committing to the book.
This is salient enough that I think you intuitively understood my comment. I won't pretend I can fully explain pending hypotheses either, it's more about research angles (e.g., connecting tools with problem categories).
Thanks a lot for the recommendations. That's what I love about HN. One often gets next-level great pointers.
> Objects of consciousness can't be definitively separated from their "external" causes, and can be considered the ongoing activity of those causes, "within" us.
Emphatically yes.
> […] spatiotemporal imprint that is identical with and inextricable from the activity of "outer" events that precipitated it
Exactly, noting that it includes, and/or is shaped by, "inner" events as well.
So there's the outer world, and there's your inner world, and only a tiny part of the latter is termed "conscious". We gotta go about life from that certain but incredibly limited vantage point too. The 'folding power' of nature (to put so much information in so little space) is mesmerizing, truly.
I like to bring it down to earth to think about it. When you're in pain, or hungry, or sleepy—any purely physiological, biological state—it will noticeably impact (alter, color, shade, formally "transform" as in filters or gating of) the whole system.
Your perception (stimuli), your actions (responses), your non-conscious impulses (intuitions, instincts, needs & wants…), your emotions, thoughts, and even decision-making and moral values.
I can't elaborate much here as it's bound to get abstract too fast, to seem obfuscated when it's anything but. I should probably write a blog or something, ha ha. You too, you seem quite astute at wording those things.
It's lovely to see areas starting to connect, in neuroscience, AI/comp-sci and philosophy.
Let's remember philosophy started as questions about the cosmos, the stars. Very much physical reality. And practical too, for agriculture and navigation: how do we get from A to B and acquire food and other goods. Over about 5000 years it's come to be "relegated to the unreal", disparaged by radical positivists who seem unable to make connections between areas (ironic from a neural POV).
A 'modern' philosopher I'll suggest here on "representation of space-time" is Harold Innis [0]. For those who are patient readers and literate in economics, anthropology, linguistics and computer science (and working on any field of AI relating language to space), I'd hope it would be a trove of ideas about how our brains developed over the ages to handle "space and time".
Some will be mystified how the study of railways, maps and fish trading has anything to do with cognitive neuroscience and representing space. But it has everything to do with it, because we encode the things that matter to our survival, and those things shape how our brains are structured. Only very recent modernity and anti-polymath hyper-specialisation has made us forget this way that the stars, the soil and our brains are connected.
I'm sorry I couldn't reply sooner. The sibling comment took all my free time last week (lol).
I've taken great interest in Harold. It'll be some time until I can deep-dive into anything besides work, but he's made my top-10 list of thinkers to know and potentially assimilate into my research framework (I treat theoretical signals not as data but as methods, essentially: a panel of "ways to think about the data" itself).
Thank you very much for the suggestion (and for that write-up, it really helped).
> Some will be mystified how study of railways, maps and fish trading has anything to do with cognitive neuroscience and representing space.
Commenting as someone who loves railways, maps and fish(ing) this is both a novel thought and endlessly fascinating. I fear you've provided me another rabbit hole to explore. Thank you!
There’s an idea in psych that a high IQ correlates more than anything to an increased ability to navigate complex spaces. That’s what we do when we program, we create conceptual spaces and then imagine data flowing through them. And it is also why being intelligent in that way is seemingly so useful in everyday situations like budgeting, avoiding injury, and navigating institutions.
It’s not all roses though—to quote Garrison Keillor, “being intelligent means you will find yourself stranded in more remote locations”
To elaborate a bit, I think there are layers in between raw IQ and practical proprioception, for instance. Balancing one's body involves the full neural chain, down to origin (which is the end-cell, the sensor/motor device), and quite evidently can be trained to orders-of-magnitude better accuracy.
So, to think of it like a tech stack of sorts, from the meat (purely biological, since the first unicellular organisms) to the highest level (call it 'sapience', 'wisdom', whatever; that which sits even above IQ), you'd find something that goes

good-enough bodily genetics
+ trained sensory & motor-neural precision
+ high IQ for good aim and strategy
+ sapient decision-making

in order to best navigate complex spaces.
Case in point: cliché nerds (not your best dancers/athletes), unwise yet very intelligent people, or a bad draw in the genetic lottery, for negative examples; conversely, a very gifted "natural-born" athlete or musician (which doesn't mean that, without training, they wouldn't get beaten flat by any seasoned professional) doubling as a strategy prodigy, or zen master, whatever reads 'wiser'.
If we admit that space[time] is the "language of the brain" (what IQ actually tests), then even social spaces—like love, business, or politics—are navigated with the same core skills as physical spaces like sports.
(That much perhaps is a stretch, it may be more complicated; but perhaps partially true for 'core functions' as it were. Perhaps like 'speech mastery' alone is a core function that contributes to a slew of more complex tasks/goals).
I'm of the position this might be correct in the specific case of humans, but not fundamental to the algorithms of consciousness. E.g., we could have similar emergent phenomena in algorithmic trading bots where all the emergent constructions are defined in terms of money and financial concepts rather than spatial concepts. They live in a reality of dollar signs rather than physical dimensions. That's neither inherently better nor worse.
In fact, I'm somewhat of the position that nearly any grounding in a domain of shared objects where signalling is inexpensive would be suitable. That said, AI agents which grew up in some alien domain of shared objects would find us as unintuitive to reason about as we find quantum mechanics. If the goal is AI that acts and talks like us, your way may be the way to go.
I've no idea what the c-word means (consciousness), so I'll leave that aside; everything else checks out as absolutely sensible to me.
Your last sentence strikes me as particularly validating.
"My way", this framework, was meant to give a mechanistic description of our individual, subjective "inner world." Much like physics speaks of the outer, shared world; and in compliance with all objective 'hard' sciences.
Indeed, it lends itself particularly well to being exploited by AI, notably in terms of architecture and domain-selection (by whatever core we call 'sapience') within a "Mixture-of-Experts" paradigm of sorts—which biology seems to have done: dedicated organs or sub-parts for each purpose, the Unix way: "Do one thing and do it well."
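To make the "dedicated organ per purpose" routing concrete, here's a minimal toy router (all names are hypothetical, my own illustration, not from any real MoE framework): a gating function selects the one sub-module fit for each domain, the "do one thing and do it well" pattern in miniature.

```python
# Toy Mixture-of-Experts-style routing: each expert handles one domain.
def vision_expert(x):
    return f"vision:{x}"

def language_expert(x):
    return f"language:{x}"

EXPERTS = {"image": vision_expert, "text": language_expert}

def gate(domain):
    """The 'sapient' selector: pick the dedicated expert for the task."""
    return EXPERTS[domain]

def process(domain, x):
    # route the input to the one sub-module specialized for its domain
    return gate(domain)(x)
```

Real MoE layers replace the hard dictionary lookup with a learned, differentiable gating network that weighs several experts at once; the hard routing here is just the skeleton of the idea.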
The first demo Jeremy put out was called "Build Applications For LLMs in Python",¹ as part of the "Mastering LLMs" conference by Hamel Husain and Dan Becker.² (You can see a few PoC demos by the end of that video when Johno takes over, it looks a lot like what Gradio or Streamlit can do).
So I think your .ml angle is definitely part of the original ethos of FastHTML (which isn't surprising coming from the founder of fast.ai & answer.ai, among other things).
The FastHTML team explicitly recommends that would-be contributors consider making reusable components, the likes of Gradio's, to facilitate all the things notably relating to AI workflows.
----
About WordPress & CMS
That part is admittedly much larger in scope. I'd expect it to rise in correlation with the success of FastHTML itself in the Python web ecosystem writ large (beyond data/AI), but no sooner—unless someone makes a killer case for a FastHTML-based Python CMS that becomes a driver of popularity. That's admittedly a much taller and wider order than 'simply' becoming the go-to #1 Python/ML prototype-to-market-at-scale one-stop shop. I mean, just that is huge, and yet nowhere near WordPress.
But tbh, I really like your idea, and I think it may eventually prove true, having used FastHTML first-hand for a few weeks now (and web dev being far from my turf). The fact is I can ship fast & well-behaved web apps with FastHTML, more than I ever could. If I ever get the time, I'll play a bit to see what a legacy-free FastHTML CMS could look like. But no matter how good the engine, the plugin ecosystem is what makes WP, and no single dev or company can replicate that alone. It's an alchemy with the times, there are windows. Not sure one is open now.
Pico CSS¹ essentially works like that, so you can hard-override any of its exposed variables² to suit your needs.
I discovered it through FastHTML (it was the CSS Jeremy and Johno Whitaker used in their first-ever demo³ early June), and find the 'dx' simple, stupid, in a great way.
Really impressive indeed, but I do get the interest. I, for one, will give 1% of my yearly income now that they're on my radar. It instantly ranks among the top 3 most important open-source projects in my opinion.
In terms of SWE, it doesn't get harder than an OS in my book (and not even from scratch). So them coming from success in that space is more than enough to convince me they can deliver a world-class browser core engine.
Naive guess: their shopping activity (leads, funnels, conversions/sales…) if/when in Ladybird would likely be tracked only by Shopify itself, at the exclusion of other big tech (most notably Google). This makes Shopify's dataset more valuable (differentiated by unique entries), which can be used in-house strategically to grow, or resold at a better price.
Indeed. With limited budget and manpower, they [Ladybird] should focus on a rock-solid core engine with great extensibility, then let the community—if any—create all the things around said core.
It's the best (perhaps only) "small project to stratosphere" 101-recipe I've found. [Note that for browsers, even 1% of market share is stratosphere-level.]
Historical music/media apps were a great example before browsers (Winamp, Foobar2K, XBMC…). Tiny teams + key community contributions made for amazingly complete and rich software fit for all use-cases, beating any commercial alternative by far.
(The fact is that to this day, these 2000-2010 solutions gave you far more user-power & customization, not to mention discoverability and meta-knowledge, than current Netflix or Spotify UIs.)
A project like Ladybird should take that general road, IM(very but educated)HO. That's how they can eventually catch up to big names feature-wise.
1. Read biographies of people who did it. (Re-read those books sometimes; you'll know when.)
2. Read business books by people who consistently lead/advised at that level.
3. Find your philosophy. (Hint: this happens within, in your inner world.)
— You only need two or three great books in each category to really get going. Date of publication doesn't matter much; try to vary across backgrounds and decades, centuries.
4. Exercise a minimum on a weekly basis.
5. Eat well. Sleep well.
And train that voice within to become a perfect friend.
(Funny to think you only have to will it to get that in life, just… inside.)
That's the TL;DR.
The rest will be self-evident on your journey, and involve a lot of luck. But if you're like most people, were you to reach a few % of that wealth, you'd probably stop right there and no longer worry about money, for the rest of your life. Incredibly few have the will to 10× that, then 10× again, because it's exponentially harder and comparatively useless.
~5 years. Medium-sized team in-house + hordes (hundreds, thousands) of engineers in the field helping clients on-site, writing code for them directly upstreamed to drivers, core libs, etc. (iteratively optimized in-house, ship feature, rinse and repeat). Story of the PlayStation SDKs, of DX too, but above all CUDA (they really outdid this strategy), now for cuDNN and so much more.
It takes incompressible time because you have to explore the whole space, cover most bases; and it takes an industry several years (about one "gen" / hardware cycle) to do that meaningfully. It helps when your platform is disruptive and customers move fast.
Maybe 3 years at best if you start on a new ideal platform designed for it from scratch, and can throw ungodly amounts of money at it fast (think 5K low-level engineers roaming your installed base).
Maybe 10+ yrs (or never) if you're alone, poor, and Radeon (j/k but to mean it's non-trivial).
I don't mean to take away from Intel's underwhelming management.
But regardless, Keller's Athlon 64 or Zen are great competitors.
Likewise, CUDA is Nvidia's massive achievement. The growth strategy of that product (involving lots of free engineer hours given to clients on-site) deserves credit.
// I don't mean to take away from Intel's underwhelming management
*chuckle* let's give full credit where credit is due :-)
Athlon was an epochal chip. Here's the thing though—if you are a market leader, one who was as dominant as Intel was, it doesn't matter what the competition does; you have the power to keep dominating them by doing something even more epochal.
That's why it can be so frustrating working for a #2 or #3 company... you are still expected to deliver epochal results like clockwork. But even if you do, your success is completely out of your hands. Bringing out epochal products doesn't get you ahead; it just lets you stay in the game. Kind of like the Red Queen in Through the Looking-Glass: you have to run as fast as you can just to stay still.
All you can do is try to stay in the game long enough until the #1 company makes a mistake. If #1 is dominant enough, they can make all kinds of mistakes and still stay on top, just by sheer market inertia. Intel was so dominant that it took DECADES of back-to-back mistakes to lose its dominant position.
Intel flubbed the 32-64 bit transition. On the low end, it flubbed the desktop to mobile transition. On the high end, it flubbed the CPU-GPU transition.
Intel could have kept its dominant position if it had only flubbed one of them. But from 2002 to 2022, Intel flubbed every single transition in the market.
It's a measure of just how awesome Intel used to be that it took 20 years... but there's only so many of those you can do back-to-back and still stay #1.