This got me thinking about something interesting. If the library contains no information because you need to already have the information you're looking for, what about its ability to at least match the information you look for? Or, put another way, the library does begin to have information if you have the information you're looking for. The difficulty of finding a particular piece of information is a different matter from the library not containing it.
I can't seem to figure out how to type this out in a way that makes sense, but basically I'm thinking that when an AI like GPT-3 is working, it's sort of sorting through the Library of Babel and finding words. Or when we speak, it's as though the Library of Babel is at immediate call in the brain, which sorts through it near-instantly, finding the book that satisfies the next word. The website that allows browsing the library helps show what I mean: you can click "random" and search for information in it. The thing itself contains "no information", but it also does, in that you may find something (the first page I saw had the word 'beef').
The problem is that it does contain everything, and therefore contains nothing (worth knowing that you don't already know).
In other words, you could never find an answer that you could say, with 100% certainty, is accurate, unless you already knew the answer. You can't ask an unending database a question that you don't already know the answer to, because every answer is there.
Ask it: what is the primary atomic structure of beef? You'll get answers for anything. It's made of carbon. It's made of rainstorms. It's not real. You're beef.
So by saying it doesn't contain information, what they really mean is that it doesn't contain useful information. You can't do anything with it that doesn't amount to a wild guess.
It does take a talented writer to talk about infinity!
I think maybe there are paths through the library that would prove useful for browsing, as is the case when I visit a normal library: I don't always know what I'm looking for ahead of time; I let the arrangement of books inspire me and see what books are next to the ones I already know.
I think it's kind of like a compression algorithm: you have the compressed data, and then you have the decoder. Any complexity the original data had is either in the data or in the decoder. The Library of Babel is a pathological case: the compressed data is 0 bytes, so whatever choices you make in finding the data are actually information from outside the system. In other words, you might as well be making it up on the spot.
However, if the books in the library are ordered somehow, that ordering is complexity being added back into the compressed data, and it no longer contains "no information".
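To make that concrete, here's a minimal sketch (my own toy construction, not necessarily how libraryofbabel.info actually computes its pages) of a Babel-style library in which a page is just the base-29 expansion of its address. "Searching" is simply inverting the decoder, so every bit of the answer is information you brought with you:

```python
# A toy Babel-style library: the address IS the page, re-based into letters.
# The library stores nothing; all the information lives in the lookup key.
ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."  # 29 symbols, as on libraryofbabel.info

def page_at(address: int, length: int = 40) -> str:
    """Decode an address into a page of `length` characters."""
    chars = []
    for _ in range(length):
        address, digit = divmod(address, len(ALPHABET))
        chars.append(ALPHABET[digit])
    return "".join(chars)

def address_of(text: str) -> int:
    """'Search' the library by inverting the decoder. No lookup happens;
    the query itself already contains every bit of the 'found' page."""
    addr = 0
    for ch in reversed(text):
        addr = addr * len(ALPHABET) + ALPHABET.index(ch)
    return addr

addr = address_of("beef")
assert page_at(addr, length=4) == "beef"  # found it -- because we brought it
```

The decoder is a bijection, so the "library" compresses to zero bytes; any ordering imposed on top of it (an index, a catalogue) is exactly where information would start to reappear.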
It's a new implementation of the same thing, but now with no team, no servers, no company, just some code that runs on the blockchain (which is a ton of servers lol)
Is this really a meaningful thing? I mean, of course it is somewhat, in terms of how things are viewed. But there's no religious law that NFTs must be a pure blockchain thing. Possibly they're finding their niche, and that is to be a sort of intermediate, which uses traditional systems as the origin of verification/trust, but can then go forward on its own with no further verification. (Of course, your wallet keys could be stolen and used to impersonate you, but the same is true for the login credentials of Twitter blue checkmarks.)
What does the addition of a blockchain help with here? Selling? Not really, because the recommendation to avoid copycat NFTs is to sell/buy them through official platforms that will verify them. Display? NFTs do nothing to prevent link rot, and in fact make it harder to correct when it does happen. Coordination/portability between platforms? There's nothing that guarantees that platforms will respect each other's NFTs.
> But, there's no religious law that NFTs must be a pure blockchain thing.
Every time someone tells me that verification and enforcement are inherently social/legal processes and not processes that the blockchain is designed to solve, I feel like they're just one step away from finally understanding why people don't like NFTs. The only value of adding all of this math and environmental impact and complication and fragility to the system of buying digital assets is if doing so decreases the social/legal burden of verification. And NFTs don't do that, they don't do anything. They don't even make the secondary market safe.
There are of course systems you could build on top of NFTs that would help solve some of these problems. But you could also build these systems without using NFTs. None of the proposals I've heard about enabling decentralized verification on the secondary market, or making it easier to pay artists, or building a web of trust for issuers -- none of those proposals need NFTs to exist before they can be built. The vast majority of these proposals don't require anything other than basic federation, and many don't even need that.
The blockchain part isn't adding anything, it's useless. We could have the exact same conversation we're having right now, and come up with all of the same solutions for verifying and distributing digital assets without ever needing to talk about or consider a blockchain. It's not adding anything, it's just kind of there.
And you're right, there's no strict law that everything must happen on the chain. So we could also just drop the chain entirely, use the same token issuance systems that have existed for ages, and focus only on solving the problems that actually need to be solved.
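For what it's worth, the chain-free version really is old, boring technology. Here's a minimal sketch (hypothetical names and statement, using the third-party Python `cryptography` package) of token issuance with nothing but a digital signature: the issuer signs a statement of ownership, and anyone holding the issuer's published public key can verify it.

```python
# Chain-free "token issuance": an issuer signs an ownership statement.
# Verification needs only the issuer's public key -- no blockchain involved.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()     # held by e.g. the artist
certificate = b"alice owns 'painting #17'"    # hypothetical statement
signature = issuer_key.sign(certificate)

# Anyone with the published public key can check it; verify() raises
# InvalidSignature if the certificate or signature has been tampered with.
issuer_key.public_key().verify(signature, certificate)
```

The hard parts (trusting that the key really belongs to the artist, handling resale) remain social/legal either way, which is the point above.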
But they aren't. They're pieces of cardboard with a particular story. The card has no value in itself, which is why a rookie card isn't very valuable unless it's of someone famous. Why? Because one is a story right at the beginning, which may not even turn out to be a good one, whereas the other has an established success story behind it. Owning the card is representative of owning a chunk of that story. You can say it's arbitrary, but the fact is the picture on the card is of a real person with a real story, and their image on the card is literally part of that same story.
So with NFTs, it's the same. Anyone can write a document that says "this certifies ownership of a Picasso painting" and it doesn't mean much of anything. However, if Picasso writes up a document, broadcasts somehow that he has done so, and the document says you own x painting, then it's easy enough to certify to anyone that you own a Picasso from the man himself. The story is not arbitrary; this is precisely why it has value. The arbitrary, easily copied stuff is cheap and meaningless, and known as such. The "rare" NFTs selling for massive prices are expensive because they are genuinely so. You can write a comment here about how easily you can create a counterfeit NFT, but how easily can you actually convince someone to buy it from you? You can observe that baseball cards are just pieces of cardboard, but how easily can you acquire a card printed when batter X was a rookie?
I'm not saying this sort of value is easily quantifiable, but it is easily observable and identifiable as reflecting a real element of reality, not something (entirely) arbitrarily imagined.
Isn't it basically a flat tax, not dependent on the value "claimed" by the transaction, and therefore easily drowned out by any transaction that would make such a scam worthwhile anyway?
Who says neural networks aren't in pain when their fitness (reward function) is subtracted from? Or happy, or orgasmic, when the function is bumped up?
Pain/pleasure seem to have an easy enough analogue in something like a reward function, but what really gets me is colors. I feel like, if honestly considered, the word "color" is all that's necessary to disprove materialism. Color is. How? Dunno.
Perceptions and feelings are real, in that you both experience them and could measure their signals in the brain, if you had the proper tech to do that. And filtered, processed perceptions exist as well. In a "pipeline" in some brain, no matter if biological or artificial, somewhere in the middle you have processed signals of sensor input with a complex meaning that has only some relation to the actual physical world "seen" by the sensors. Things like color and motion seem to be among those; for an intuitive understanding, this seems very close to the distill.pub analyses of what specific middle neurons in a convnet see.
Then the fact that anything is would be wrong, and if that were truly believed, the only reasonable behavior would be the most destructive one (what is, i.e. existence, is bad, so destroying it is good).
Would you say that a "Markov chain"-type (e.g. Dissociated Press) language model is "self-aware" in any way?
If yes, then... how, exactly? It is basically an N x N matrix of values.
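To make the "N x N matrix" concrete, here's a minimal toy sketch (my own example, not any specific model): a first-order Markov chain over words really is just a matrix of transition probabilities that you sample from one step at a time.

```python
# A first-order Markov chain "language model": an N x N matrix of
# transition probabilities between the N words of the vocabulary.
import random

corpus = "the cat sat on the mat the cat ran".split()
words = sorted(set(corpus))               # the N states
idx = {w: i for i, w in enumerate(words)}
N = len(words)

# Count word-to-word transitions, then normalize each row to probabilities.
matrix = [[0.0] * N for _ in range(N)]
for a, b in zip(corpus, corpus[1:]):
    matrix[idx[a]][idx[b]] += 1
for row in matrix:
    total = sum(row)
    if total:
        row[:] = [c / total for c in row]

def generate(start: str, steps: int = 5) -> str:
    """Sample a chain of words by repeatedly rolling the matrix's dice."""
    out, state = [start], idx[start]
    for _ in range(steps):
        if not any(matrix[state]):        # dead end: word was never followed
            break
        state = random.choices(range(N), weights=matrix[state])[0]
        out.append(words[state])
    return " ".join(out)

print(generate("the"))                    # e.g. "the cat sat on the mat"
```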
Current language models (GPT et al.) are qualitatively nothing fancier than this: probabilistic models which encode regularity in language and from which you can sample to get "plausible content".
If a bunch of values in a matrix is "self-aware", then I guess GPT can be seen as "self-aware"; if not, then it can't.
My problem is trying to imagine an N x N matrix being self-aware (like... what does that even mean in this context?).
Is GPT human-like? Sure... if you stretch the meaning of "human-like" enough (it produces content that is similar to content produced by a human). Is a human GPT-like? That's harder to argue (and I don't see how your argument would support it).
If you know what a Markov chain is then you must also know that modern language models are nothing like Markov chains. Just as an example, a Markov chain can't do causal reasoning or correctly solve unseen programming puzzles, the way GPT-3 can.
As for self-awareness, your brain is an N x N -matrix in the same sense as an ANN, so surely it must be possible for one to be self-aware? Not claiming that GPT-3 is, of course.
> If you know what a Markov chain is then you must also know that modern language models are nothing like Markov chains. Just as an example, a Markov chain can't do causal reasoning or correctly solve unseen programming puzzles, the way GPT-3 can.
First, GPT-3 can hardly do "causal reasoning" beyond exploiting regularities in the content it has been trained on. That's why you get "interesting causal reasonings" such as "the word 'muslim' co-occurs a lot with 'terrorist', so these things must be causally related".
Second, just because a more complicated version of a Markov chain (i.e. a probabilistic language model) can do things that a first-order Markov chain cannot does not mean that it is qualitatively different: both are nothing more than simple mathematical models (linear algebra + sprinkles). A polynomial model can do things that a linear model cannot, but it is no closer to consciousness and self-awareness, as far as I can tell.
My point is... a mathematical model is just that (a model) and, as such, cannot be "self-aware".
Humans, as cybernetic agents embedded in some environment (from which they get sensory input and with which they can interact), have agency and, as such, can display the property of "self-awareness". Models, by themselves, cannot.
Humans may contain systems within them that resemble a GPT-3 language model (or a Markov chain model), and such "language models" may even be required for self-awareness (it's not obvious, but let's assume it's true).
*Still*, it is not the language model itself that is self-aware, but the agent that is using the model.
A GPT-3 model, by itself, cannot be self-aware, because it is not much more than applying some linear algebra operations to numbers (just like, e.g., a Markov chain language model). An agent containing within it (among many other things) something that could be approximated by a GPT-3 model, may.
> As for self-awareness, your brain is an N x N -matrix in the same sense as an ANN, so surely it must be possible for one to be self-aware? Not claiming that GPT-3 is, of course.
No, it is not. My brain is an analog computation device and not a bunch of numbers. Perhaps you can approximate some aspects of how it works using numbers, probably using digital computation devices, but that is not what anyone's brain is (neither at the "hardware" level, nor at the "software" level). Also, notice that a brain always exists within a biological agent, and it is the biological agent that is (or may be) self-aware, and not "the brain".
Sorry that my answer comes so late, but I'll put this here for posterity. I will only address the latter part. My point was that a brain is an N x N -matrix in the same sense as an ANN. An ANN is no more an N x N -matrix "in reality" than a biological brain; in reality it is some collection of analog electric potentials and configurations of matter which can sometimes be represented as a digital N x N -matrix for the convenience of the programmer. Thus the situation is exactly identical to a biological brain, which is also not "in reality" an N x N -matrix but can be represented as one. If we had sufficiently advanced (nano-)technology, we could manipulate human brains through their abstract representation as a matrix just as we can ANNs. Any distinction is purely pragmatic.
In any case I was not saying that being representable as an N x N -matrix is sufficient for consciousness (which I do not believe), simply that it is clearly compatible with consciousness. I agree that a self-aware ANN would probably require a body (possibly simulated) and some notion of agency.
It's incredible to think the shape was only discovered in the last few years and not earlier in computing history (the 80s/90s or before). Just goes to show there is still much to be discovered.
Heh, dunno, seems kinda obvious to me. I was fascinated by the Mandelbrot set, and had read the original description in Scientific American. I wrote an implementation in Turbo Pascal for a CGA display. After my dad bought an EVGA card, I wrote the first (to my knowledge) EGA driver for Turbo Pascal, based on the hardware description in PC Tech Journal.
A few years later I was at U Pitt and rendered a Mandelbrot zoom at 300 dpi in PostScript on the big lab printer that normally printed only ASCII (mostly homework assignments). A single print kept the normally 100-page-per-minute printer busy for a minute or two. I begged the lab attendant not to reset the printer. Everyone in the room was amazed when it printed out. I printed a few dozen for people to pin up on their walls.
I wrote some hand assembly for the x87 (and managed to keep the calculations on the stack at 80-bit precision). Later on I did something similar in PA-RISC assembly, and even participated in one of the first distributed computing projects, to map the area of the Mandelbrot set. Two mathematicians had argued that the higher-resolution maps would asymptotically approach some number, and a higher-precision area estimate would settle it.
I was working at the Pittsburgh Supercomputing Center (PSC) in the 90s as a student, under Joel Welling, who was working on an implementation of the marching cubes algorithm. I needed a 3D dataset to tinker with, so I played around with how to get different 3D slices (I forget what tweak I used for the Z axis). I submitted a job to calculate a 256^3 volume, ran the marching cubes algorithm on it, and rendered the result. It looked pretty similar to the current Mandelbulb, granted at a pretty low resolution.
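For anyone curious what that kind of pipeline looks like, here's a rough sketch of generating an escape-time volume that marching cubes could then be run over. I've used the now-standard White-Nylander power-8 "Mandelbulb" iteration as the Z-axis tweak; the grid size and iteration budget are placeholder choices of mine, and this is almost certainly not the exact tweak the parent used.

```python
# Fill a 3D grid with escape times for the power-8 "Mandelbulb" iteration;
# running marching cubes on this volume extracts the familiar bulbous surface.
import math

def escape_time(cx, cy, cz, power=8, max_iter=20, bailout=2.0):
    x = y = z = 0.0
    for i in range(max_iter):
        r = math.sqrt(x*x + y*y + z*z)
        if r > bailout:
            return i
        # z -> z^n in spherical coordinates (White-Nylander formula), plus c.
        theta = math.atan2(math.sqrt(x*x + y*y), z)
        phi = math.atan2(y, x)
        rn = r ** power
        x = rn * math.sin(power * theta) * math.cos(power * phi) + cx
        y = rn * math.sin(power * theta) * math.sin(power * phi) + cy
        z = rn * math.cos(power * theta) + cz
    return max_iter

n = 64  # the parent's run used 256^3; kept small here so it runs quickly
volume = [[[escape_time(2.4 * i / n - 1.2, 2.4 * j / n - 1.2, 2.4 * k / n - 1.2)
            for k in range(n)] for j in range(n)] for i in range(n)]
```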