This is just tedious. The electrochemical map of the brain that we build and use to study it is so primitive.
Almost certainly, "the brain" tunes itself to its environment and inner states from an inner structure preformed through evolution. A lot of what is in the brain is not actually to be found in the brain, or by studying it materially. The "formwork" that formed it is lost in evolutionary time.
Unfortunately, there is no way to impute the formwork from the form, the way one could (mostly accurately) impute what formwork shaped a square column or what mold cast a piece of iron.
"The brain" is not in any way shaped like a physical object, and the formwork that formed it is infinitely complex.
> A lot of what is in the brain is not actually to be found in the brain, or by studying it materially.
You are talking about consciousness (and perhaps the non-conscious processing that the brain does as well). That is indeed a non-physical phenomenon, but it still has physical underpinnings.
> The "formwork" that formed it, is lost in evolutionary time.
You can't lose something that you never had in the first place :). I'll take that as a figure of speech.
The good news is: we are very familiar with some things that "exist" but don't have a physical presence. On this forum, "software" is the most obvious one. "Math" is probably the second. There are many others, like all the emotions, entropy, and so forth. We can, to a certain degree, reverse-engineer some of these processes by looking at the physical imprint they leave. It's true that we never get a full picture, the same way that just looking at the source code might not give you a good idea of what the RAM will look like while the program is executing. But it can give you some insight. And then, if you are lucky, you may be able to fill in the gaps.
The bad news: this is the most imbricated and complex piece of "code" that humanity has ever faced. Our brains might simply not be capable of understanding their own complexity by themselves. Most of us have a "cache" of 7 items, after all. Barely good enough to swing to the next tree branch.
But then again, some good news: the complexity is definitely not infinite, just very big. And we keep improving our tools. The same way some of them expand the limits of what we can physically do with our bodies, some of them expand what we can mentally do with our brains. Whether or not that will be enough for us to understand ourselves, for a certain definition of "understand", is still up in the air as far as I'm concerned. I'm agnostic about it.
"The brain" is not physical like columns or iron. Those are simple objects. The kind physics likes to deal with. Things that are easily measured to describe its properties quantitatively and the relations of these properties as a placeholder for qualitative aspects (equations). Physics can't deal with the brain. No equation can be written.
If a plant's seeds were to sprout in place, instead of in the ground, you would have every single ancestor plant in a very long chain. Every brain is the result of this kind of structure: a mother buds and sprouts a new human. If the umbilical cords remained attached, we would have a very similar long chain of human brains. Not like any other physical object.
Physics is inadequate at studying "the brain". So, "the brain" is not a physical object.
> Physics can't deal with the brain. No equation can be written.
There are many many many physics simulations out there that cannot be "written with an equation". Climate modelling, for example. You cannot write a single equation to model all that. You need a big complex piece of software, made of many equations, a lot of hardware, and a lot of processing time. All of that was simply inconceivable mere decades ago.
It's possible that it's as you say, and the brain is inscrutable if we attack the problem from the physics point of view alone.
I think that you may be right. With what we have now. But decades from now? I'm not so sure.
All climate models are based on mathematical physics models. I don't know the specifics, so I asked ChatGPT, and here is what it said:
'''
Climate modeling is a multifaceted field rooted in physics that relies on a complex set of equations to describe various atmospheric, oceanic, and terrestrial processes. Here's an overview of the key equations that form the foundation of climate models:
Navier-Stokes Equations: Governing the flow of fluids like the atmosphere and oceans, these equations capture how the velocity of a fluid changes over time.
Radiative Transfer Equations: Essential for understanding how sunlight and other forms of radiation interact with the atmosphere, including scattering, absorption, and emission.
Energy Balance Models: These equations describe the balance between incoming solar energy and outgoing heat, fundamental for capturing the planet's energy dynamics.
Equations of State: Linking density, pressure, and temperature, these equations are critical for understanding the behavior of the atmosphere and ocean.
Continuity Equations: Representing the conservation of mass in the atmosphere and oceans.
Moist Processes Equations: Capturing phase changes between water vapor, liquid water, and ice, along with latent heat exchange.
Boundary Layer Equations: Describing the complex dynamics near Earth's surface where the atmosphere interacts with the land or ocean.
Chemical and Aerosol Equations: Governing the reactions and interactions between different chemical species and particles, which can affect both weather and climate.
Sea Ice and Glacial Equations: Modeling the flow and melting of ice, essential for understanding the cryosphere.
These equations are solved numerically using computer algorithms, often over a grid representing the Earth's surface and atmosphere. Together, they form an interconnected system that allows scientists to simulate and analyze the climate system's behavior. This intricate mathematical framework underscores how the study of climate is fundamentally rooted in mathematical and physical principles.
In the context of human behavior, consciousness and neurology, what are the mathematical equations that are relevant in order to model how a human brain works? Please highlight any equation that involves Physics in particular
ChatGPT:
Modeling the human brain is an extremely complex task, and it involves various levels of abstraction and different mathematical and computational approaches. While there are no specific equations that fully capture the intricacies of the brain's function, several mathematical and physics-related principles can be applied at different levels of analysis. Here are some relevant equations and concepts:
Neuron Modeling:
Hodgkin-Huxley Equations: These equations describe the behavior of ion channels in neuron membranes and are fundamental to understanding how neurons generate action potentials.
Integrate-and-Fire Models: Simpler models like the Leaky Integrate-and-Fire model can describe the basic firing behavior of neurons using differential equations.
Neural Networks:
Backpropagation:
This is a fundamental algorithm for training artificial neural networks. It involves the chain rule from calculus to update weights during training.
Activation Functions: Functions like the sigmoid, ReLU (Rectified Linear Unit), and softmax are used to model the activation of neurons in artificial neural networks.
Synaptic Plasticity:
Hebbian Learning:
Often described as "cells that fire together wire together," it's a principle used to model how synaptic connections strengthen or weaken based on correlated firing patterns.
Diffusion Equations:
Fick's Law of Diffusion: Relevant for modeling the diffusion of neurotransmitters across synapses.
Information Theory:
Shannon's Information Theory:
While not an equation per se, concepts like entropy and mutual information are used to quantify information flow and coding in neural systems.
Quantum Mechanics:
While not directly related to classical neuroscience, there's ongoing research into whether quantum effects play a role in brain function. This would involve quantum mechanics equations such as the Schrödinger equation and the equations describing quantum states.
Connectome Modeling:
Graph Theory:
Modeling the brain as a network involves concepts from graph theory, like adjacency matrices and graph algorithms. While not a direct physics equation, it's relevant for understanding brain connectivity.
Functional Imaging:
Techniques like Functional Magnetic Resonance Imaging (fMRI) rely on physics equations related to magnetism and nuclear magnetic resonance to measure brain activity indirectly.
Biomechanics:
Equations from mechanics can be used to model the physical properties of the brain, like deformation during injury or the propagation of mechanical waves in brain tissue.
Electroencephalography (EEG):
Maxwell's Equations: EEG measures electrical potentials on the scalp, and the interpretation of these signals relies on Maxwell's equations describing the behavior of electric fields.
It's important to note that modeling the human brain is still an active area of research, and there's no single mathematical framework that fully explains all aspects of brain function. Instead, a multi-disciplinary approach is used, combining mathematics, physics, biology, and computer science to gain a better understanding of the brain's complexity at various scales, from individual neurons to large-scale brain networks.
'''
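For concreteness, the leaky integrate-and-fire model from that list really can be written down and run in a few lines. This is a toy sketch with arbitrary parameter values, not a realistic neuron model:

```python
# Toy leaky integrate-and-fire neuron: dV/dt = (-(V - V_rest) + R*I) / tau.
# All parameter values below are arbitrary and purely illustrative.
def simulate_lif(current=2.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
                 r=10.0, tau=10.0, dt=0.1, steps=1000):
    v = v_rest
    spikes = []
    for step in range(steps):
        dv = (-(v - v_rest) + r * current) / tau
        v += dv * dt
        if v >= v_thresh:            # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset              # reset the membrane potential
    return spikes

spike_times = simulate_lif()
print(len(spike_times), "spikes in 100 ms of simulated time")
```

With a constant input current above threshold, the model fires regularly; that regularity is exactly what makes it a crude abstraction of a real neuron.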
Yes, but resolution and coherence matter. A map that says "here be dragons" is not high resolution. A map that misidentifies or misses parts is incoherent. The brain is not physical (as explained above), and cannot be captured in a sufficiently high-resolution map that only studies substance and its interactions. Since we have no access to the formwork, we will always have missing data, which hampers our ability to create such a coherent map.
Thank you for staying engaged, it's helping me see my position more clearly.
Don't understand the downvotes because I get exactly what you're saying -- it's like poking around in the data for an LLM and expecting to find a written language algorithm. Taking things further -- I think it's unlikely that much of the brain's operating software came via DNA. It's mostly machine learned from the environment and other humans. We call this child development.
> It's mostly machine learned from the environment and other humans. We call this child development.
Yes, exactly. The software lies in humanity collectively. Not inside us, but in between us.
Which is why it cannot lie entirely in the brain. At least not entirely in one brain.
A new node born into the network, learns the network. Individual nodes perish, and new ones replace them. But the network doesn't go down. Or it hasn't yet.
Much of conscious experience is the network. Isolation is painful because it deregulates the connection to the network.
But even on the "hardware" side, there must be so many kinds of development that have to occur simultaneously. Some of it is, as you say, environment-based child development. Others we may not quite be aware of, say patterns of neural firing, or growth patterns. Such developments probably get passed on in some form (DNA or X) to the next generation.
When a vortex forms in a pool of water, it is formed by a complex interplay: the water, the pond made of earth, rock, and plants whose undulations perturb the water, the wind, and so on.
The vortex appears and vanishes as conditions change. But is the vortex, the water? Is the water no longer water once the vortex dissipates?
Is the vortex the same as any physical object? Or is it a form taken up by the water?
What if the brain is just the form? And what if the water is itself just a river fed by ocean currents thousands of miles away?
Standing by the river, you only experience the form, not the formwork, the substrate, or even the shaping force in time.
I may have made this a worse explanation… hard to tell.
Oh for sure, but since most screenplays are standardized to a point, the idea is to just use regex to get there, then use a local version of Llama 2 to finish up the classifications.
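Concretely, a first regex pass over the standard screenplay format might look something like this. A rough sketch only; the element labels and patterns are my own, not from any real parser:

```python
import re

# Rough sketch of a regex pass over a standard-format screenplay.
# Scene headings start with INT./EXT.; character cues are all-caps lines.
SCENE_HEADING = re.compile(r"^(INT\.|EXT\.|INT\./EXT\.)\s+.+")
CHARACTER_CUE = re.compile(r"^[A-Z][A-Z .'-]+$")

def classify_lines(script_text):
    elements = []
    for line in script_text.splitlines():
        stripped = line.strip()
        if not stripped:
            continue
        if SCENE_HEADING.match(stripped):
            elements.append(("scene_heading", stripped))
        elif CHARACTER_CUE.match(stripped):
            elements.append(("character", stripped))
        else:
            # Action vs. dialogue is ambiguous at this level; this is
            # where a local LLM pass could finish the classification.
            elements.append(("other", stripped))
    return elements

sample = "INT. KITCHEN - NIGHT\n\nJOE\nPass the salt.\n"
print(classify_lines(sample))
```

The regex gets the unambiguous elements cheaply, and only the leftovers need the model.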
Where the fun with LLMs comes in is after all the screenplays have been parsed and turned into a dataset that is trained not just on the story but also on the cinematography aspects, since we've broken it down to the granular detail of each element of each scene.
Timeline- and frame-based editing is the end result, but this is more about the elemental creation and, from there, editing it into a time-based scene.
I've spent the last few years in the cinematography department, and most directors and directors of photography will write the scenes on flash cards and move them around and rearrange them, because not every story is linear, even though we have to find a way to present every story in a linear form. And from there, each scene requires multiple angles, shots, and motivations, and things change so much on the fly that the screenplay becomes a document quite dense with non-presented information.
So, I suppose the next step would be to parse a bunch of screenplays from different formats into a single readable format, and then train an image model on the frames of the movies whose screenplays we trained the text model on, to get a cross-reference of what is written down vs. what is displayed visually. And we can break down the visual shots by camera movement (steadicam, dolly move, etc.), as well as identify key props in the image model (maybe; sounds expensive) and compare them to key props in the script. I don't know, I'm spitballing now, but a multi-modal Hollywood film producer would be kind of fun. This totally just started as a way to standardize the script in a granular form, and as a way to code since I'm not out on set.
You've given me something to think about, and I think you're right: there is an element of time missing. In fact, there are a few parallel time elements missing from this too.
For example, when we are shooting, there is a rough formula for how long a day we need to get those scenes. Usually it's 1 hour per 1-page scene, plus an extra 30 minutes added on for each character in the scene. But that doesn't translate to the final product, as that information tells us nothing about how long or important a scene should be in the final cut.
But it's also possible I'm getting ahead of myself here, and maybe there's another object to be created that includes the scene, production, and final-product objects, instead of jamming it all into this one object.
It's not clear to me what the idea is here, which is why all the questions. I suspect, as you said, that a much more complex data structure would be required to encode all the various aspects of production into its constituent elements and the relations between those elements.
I would guess that the first step would be to establish how the process of production (screenplay, scenes, camera angles, locations, etc.) relates to the final product: the scene frames. Each scene must share many elements with others as well; that would have to be encoded too.
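To make that concrete, one possible shape for such a structure. All names here are hypothetical, purely to illustrate how production data could hang off the scene object:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema relating screenplay elements to production data.
# Field names are illustrative, not from any real tool or format.
@dataclass
class Shot:
    camera_move: str              # e.g. "steadicam", "dolly"
    angle: str
    props: List[str] = field(default_factory=list)

@dataclass
class Scene:
    heading: str                  # e.g. "INT. KITCHEN - NIGHT"
    characters: List[str]
    page_length: float            # length in script pages
    shots: List[Shot] = field(default_factory=list)

    def estimated_shoot_hours(self) -> float:
        # The rough rule of thumb mentioned above: 1 hour per page,
        # plus 30 minutes per character in the scene.
        return self.page_length * 1.0 + 0.5 * len(self.characters)

scene = Scene(heading="INT. KITCHEN - NIGHT",
              characters=["JOE", "ANNA"], page_length=1.0)
print(scene.estimated_shoot_hours())  # 2.0
```

The point is that shoot-time estimates live on the production side of the object, while final-cut duration and importance would need separate, differently sourced fields.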
Honestly? His mannerisms were all we really needed. He was not well liked in his year, and that takes some doing to achieve these days. The smug "how could I be wrong" when he was, well, 26 orders of magnitude wrong, is special even by entitled-scientist standards.
I can't tell if you're being sarcastic or not, but I strongly believe, based on my experiences with really successful people, that being open to being wrong absolutely does count for a lot more than the current truth value of your ideas.
Eh, carcinization is a better ontological fit. Many separate things evolving into the same structure, versus the process of losing electrons and/or structurally degrading.
And it's the version I've actually heard used (multiple times) before now - having looked up 'carcinisation', I assume that comes via 'rustacean' (cf. crustacean), which makes it a bit more contrived and then sort of implies the result is a Rust developer, not Rust itself?
Makes more sense to me that a language/framework/etc. would 'oxidise' to Rust, and someone learning/getting hooked on Rust would 'carcinise'.
But this is all silly and doesn't matter anyway, ha!
Companies usually don't have all their parts aligned or even functioning properly. And you cannot change that.
Your work isn't just the product, it is navigating an imperfect structure to achieve what you need to, despite the fact that the imperfect structure is the one contracting the product/service, and is (mostly unwittingly) standing in the way of that.
It's easier to understand this if you look at onion routing. Broadly, it relays traffic through several encrypted hops, so that no single node or external listener can see both who sent a message and where it ultimately goes; relayed traffic between nodes all looks alike. This prevents external listeners from figuring out where the main server is. Onion routing was originally developed at the U.S. Naval Research Laboratory to protect government communications, and was later used for the "dark web". If you don't know which node is the server, you can't shut it down, or read data off of it.
SimpleX does something similar. A connects to B, B connects to C, and A and C connect. They all chat, but there is no way to tell who A, B, or C are, because from the outside it all looks the same: X connects to Y, X connects to Y, X connects to Y. So who spoke to whom?
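To see why the outside view is so uninformative, here's a toy model (hypothetical code, not the actual SimpleX protocol): every conversation gets its own random queue ID, so the server's state never contains a stable per-user identity.

```python
import secrets

# Toy model of pairwise anonymous message queues. This is NOT the real
# SimpleX protocol, just an illustration of per-connection IDs replacing
# per-user identities.
class Server:
    def __init__(self):
        self.queues = {}  # opaque queue ID -> list of ciphertext blobs

    def create_queue(self):
        queue_id = secrets.token_hex(8)  # random, reveals nothing about users
        self.queues[queue_id] = []
        return queue_id

    def send(self, queue_id, ciphertext):
        self.queues[queue_id].append(ciphertext)

    def receive(self, queue_id):
        msgs, self.queues[queue_id] = self.queues[queue_id], []
        return msgs

server = Server()
# A<->B and C<->D each use their own queue; from the server's point of
# view there are just two unrelated random IDs with ciphertext in them.
q_ab = server.create_queue()
q_cd = server.create_queue()
server.send(q_ab, b"<encrypted blob>")
print(list(server.queues.keys()))  # two opaque hex strings, no names
```

Linking `q_ab` to "A and B" requires information that exists only on their devices, which is exactly the physical-access caveat.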
This is great. Even if "the authorities" demand access to chat logs, first, they won't know what to ask for. Chats between whom? Second, they still won't know who spoke to whom even if they have all the data. The chats are anonymized; they would have to sift through all of it.
It still won't prevent someone from invading your privacy if they have physical access to your device, since the identities are stored locally for your convenience.
The solution to this is, of course, to simply outlaw the use of communication systems that cannot be monitored by law enforcement. India already has such a law, the EU is working on one, and I'm sure the US will do something like that as well.
These super-anonymous communication technologies are touted time and again to solve the problem of a surveillance state, while they do nothing of the sort. You cannot solve a social problem with technology.
The point being made was that it doesn't matter how secure and unbeatable something is if a sovereign state wishes to simply criminalize its use. It can then utilize its full power to enact violence upon any S̵u̵b̵j̵e̵c̵t̵ citizen who is caught using it.
> It still won't prevent someone from invading your privacy if they have physical access to your device, since the identities are stored locally for your convenience.
Does it mean that if a lot of SimpleX users band together and sift through all their local identities, they can connect the dots?
They really bury the detail IMO after the banner claim front and centre on the website (I guess because it's hard/awkward to explain without it sounding just like a difference in nomenclature).
What makes it work afaict is the combination of:
- there are still queue (inbox) IDs
- key (and (just initial?) queue ID) exchange out of band
So messages are still delivered to an identifier, it's just that every user has tonnes of identifiers (per contact/group), there's no server tracking and handling their exchange, and possibly they rotate via encrypted messages once established anyway.
Exchanging out of band gets you the secrecy, and having one per-chat protects you from a contact turning out bad/leaking/compromised - it's fine that they have metadata about their own chat with you, because they have that & the plaintext anyway.
That isn't what GP is saying. In order to effectively use the chat, you are going to tag that connection identifier with a name like Joe on your endpoint device, so that when a message comes in you remember this was the conversation with Joe. The server may not know that this is you and Joe, but you do.
Yeah, that was my take. The issue with end-to-end encryption is the ends; the ends are the weak link. I think, though, that SimpleX may be the best option in this regard. The user has to know who they are speaking with, but here the user can choose not to tag the connection, and therefore take on the mental load of remembering which connection identifier referred to which person.
Having not used this chat, I don't know how easy things might be, but I do remember, before mobile phones were a thing, being able to remember at least 8 phone numbers that I used to call regularly. Certainly, if it called for it, you could do this with SimpleX?
Let's say the whole network is just 4 nodes. A, B, C, D. And A and B are connected. And C and D are connected. And I have physical access to all 4 devices.
In phone A, you will have B's contact stored locally, let's say as "Dan".
It's a little hard to grok the details, but it sounds like it's a collection of "dead drops". Like, if done in a physical way: imagine locations around a city where you can drop off scraps of paper with cipher-encoded messages on them, encoded not just with sender/recipient keys but with a key specific to the dead-drop location too, and an agreement that replies to a received message get dropped off at a different, randomly selected one.
Everyone can see and read the cipher text on all the papers, but each of the 4 people can only decode the things meant for them.
So, if that's how it works, you could certainly learn who was talking to whom if you had access to all the devices. But access to one device only shows you what came to and went from that device, with no data about which of the other three users were involved in those reads/writes. You would have to gain access to each device in turn to prove whether it was in contact with the first device.
I specifically said a lot of users; a lot is the opposite of one. Imagine a lot (>40% of total users) of impostor devices acting in accord to deanonymize some of the Xs and Ys. Is it vulnerable to that, like Tor apparently is?
You would at most be able to deanonymize a certain percentage within the impostor network itself. Kind of pointless.
Physical access to devices is the only way to have some chance of deanonymizing some of the users (always fewer than the number of devices you have access to).
That’s my understanding, but the maker of this thing is here, and maybe can respond better?