Humans may even "be conceptually mirrors" (whatever that may mean): this still doesn't make mirrors be humans (conceptually or otherwise).
Current language models are much closer to "Dissociated Press" (i.e. an equation that tells you which words are more likely, given the context) than to "human thought", and the fact that humans learn by copying does not really change this.
Would you say that a "Markov chain"-type (e.g. Dissociated Press) language model is "self-aware" in any way?
If yes, then... how, exactly? It is basically an N x N matrix of values.
Current language models (GPT et al.) are qualitatively nothing fancier than this: probabilistic models which encode regularity in language and from which you can sample to get "plausible content".
If a bunch of values in a matrix is "self-aware", then I guess GPT can be seen as "self-aware"; if not, then it can't.
My problem is trying to imagine an N x N matrix being self-aware (like... what does that even mean in this context?).
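To make the "matrix of values" picture concrete, here is a toy "Dissociated Press"-style generator, i.e. a first-order Markov chain over words (the corpus and everything else here is made up for illustration):

```python
import random
from collections import defaultdict

random.seed(1)
corpus = "the cat sat on the mat and the dog sat on the cat".split()

# The "N x N matrix": for each word, the words observed to follow it.
# (A table of successor lists is just a sparse encoding of that matrix.)
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

word = "the"
sample = [word]
for _ in range(8):
    word = random.choice(follows[word])
    sample.append(word)
print(" ".join(sample))
```

All the model "knows" lives in that successor table; sampling from it produces locally plausible word sequences with no agent anywhere in sight.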
Is GPT human-like? Sure... if you stretch the meaning of "human-like" enough (it produces content that is similar to content produced by a human). Is a human GPT-like? That's harder to argue (and I don't see how your argument would support it).
If you know what a Markov chain is then you must also know that modern language models are nothing like Markov chains. Just as an example, a Markov chain can't do causal reasoning or correctly solve unseen programming puzzles, the way GPT-3 can.
As for self-awareness, your brain is an N x N -matrix in the same sense as an ANN, so surely it must be possible for one to be self-aware? Not claiming that GPT-3 is, of course.
> If you know what a Markov chain is then you must also know that modern language models are nothing like Markov chains. Just as an example, a Markov chain can't do causal reasoning or correctly solve unseen programming puzzles, the way GPT-3 can.
First, GPT-3 can hardly do "causal reasoning" beyond exploiting regularities in the content it has been trained on. That's why you get "interesting causal reasonings" such as "the word 'muslim' co-occurs a lot with 'terrorist', so these things must be causally related".
Second, just because a more complicated version of a Markov chain (i.e. a probabilistic language model) can do things that a first-order Markov chain cannot does not mean that it is qualitatively different: both things are nothing more than simple mathematical models (linear algebra + sprinkles). A polynomial model can do things that a linear model cannot, but it is no closer to consciousness and self-awareness, as far as I can tell.
My point is... a mathematical model is just that (a model) and, as such, cannot be "self-aware".
Humans, as cybernetic agents embedded in some environment (from which they get sensory input and with which they can interact), have agency and, as such, can display the property of "self-awareness". Models, by themselves, cannot.
Humans may contain systems within them that resemble a GPT-3 language model (or a Markov chain model), and such "language models" may even be required for self-awareness (it's not obvious, but let's assume it's true).
*Still*, it is not the language model itself that is self-aware, but the agent that is using the model.
A GPT-3 model, by itself, cannot be self-aware, because it is not much more than applying some linear algebra operations to numbers (just like e.g. a Markov chain language model). An agent containing (among many other things) something that could be approximated by a GPT-3 model, on the other hand, may be.
> As for self-awareness, your brain is an N x N -matrix in the same sense as an ANN, so surely it must be possible for one to be self-aware? Not claiming that GPT-3 is, of course.
No, it is not. My brain is an analog computation device and not a bunch of numbers. Perhaps you can approximate some aspects of how it works using numbers, probably using digital computation devices, but that is not what anyone's brain is (neither at the "hardware" level, nor at the "software" level). Also, notice that a brain always exists within a biological agent, and it is the biological agent that is (or may be) self-aware, and not "the brain".
Sorry that my answer comes so late, but I'll put this here for posterity. I will only address the latter part. My point was that a brain is an N x N -matrix in the same sense as an ANN. An ANN is no more an N x N -matrix "in reality" than a biological brain; in reality it is some collection of analog electric potentials and configurations of matter which can sometimes be represented as a digital N x N -matrix for the convenience of the programmer. Thus the situation is exactly identical to a biological brain, which is also not "in reality" an N x N -matrix but can be represented as one. If we had sufficiently advanced (nano-)technology, we could manipulate human brains through their abstract representation as a matrix just as we can ANNs. Any distinction is purely pragmatic.
In any case I was not saying that being representable as an N x N -matrix is sufficient for consciousness (which I do not believe), simply that it is clearly compatible with consciousness. I agree that a self-aware ANN would probably require a body (possibly simulated) and some notion of agency.
> [...] I'm not sure how much choice they had in this - I suspect NSA/US gov more widely here. [...]
Note that when parent says "you can't trust NIST" and you counter with something along the lines of "that's unfair... NIST acts untrustworthy/knowingly recommends subpar options because of NSA", it doesn't really counter what is being said.
If NIST decisions are based mostly on "whatever the NSA tells them to do", rather than the actual technical merits of the things they recommend, then... yes, they are generally not worthy of trust (blind or otherwise), because you'll always have to double-check their statements against other sources (e.g. your own knowledge, expert cryptographers, etc.).
Fool me once, shame on you; fool me twice, shame on me.
That's the problem of being untrustworthy once in a while... it's easier to lose your reputation than to regain it.
As it is... if you use anything recommended by NIST without first checking with the actual trustworthy community of researchers, you're asking for it.
TL;DR: Trying to justify why the NIST is seen as untrustworthy (or acts as such) does not change the fact that it is seen as untrustworthy by many people (and, as far as I can tell, fairly so).
> it's just not even a choice and I struggle to see how any one would chose a the former as a preferable option.
What if you do not possess a smartphone? What choice do you have, then? Are you just forced into hotel quarantine or do they lend you a smartphone? (Genuine question.)
I'm guessing, but they will probably do police visits periodically as they do with existing home quarantine (i.e. if you're a close contact of a case).
You may have the MEANS... but is a smartphone MANDATORY? I have the means to travel abroad, no problem, but never in my life would I voluntarily own a smartphone.
Typical lefto-fascist thinking. Disregard for the law and forced imposition of measures based on subjective perception. Well, by the same token you look to me like the kind of person that could commit an act of terror. Let’s have the police interrogate you.
As it is explained in the "readme" part, in this specific context, "naturally occurring" means that no one has purposefully manipulated any of the images to make them collide: that the images were already published and "out there" and happen to collide. In other words, it does not necessarily imply that the images correspond to natural photographic scenes (which seems to be your interpretation of it).
Besides, you could probably "naturally" obtain such colliding images by photographing similar-looking objects against a white (or generally featureless) background. Furthermore, it suggests/demonstrates that similar-looking images with similar backgrounds can lead to unexpected collisions in practice (i.e. "naturally"), even if you do not assume an adversarial scenario.
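As a toy illustration of that last point (this uses an "average hash", a far simpler scheme than NeuralHash, and the pixel grids are made up): two similar-looking "photos" of an object on a featureless background can collide without any adversarial manipulation.

```python
def average_hash(img):
    # 1 bit per pixel: is the pixel brighter than the image's mean?
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    return tuple(p > mean for p in flat)

# Two "photos" of a bright object on a dark, featureless background,
# taken under slightly different lighting (values differ by up to ~15).
photo_a = [[210, 205, 30, 25],
           [200, 215, 20, 35],
           [190, 208, 28, 22],
           [205, 198, 31, 27]]
photo_b = [[198, 212, 40, 33],
           [207, 201, 35, 26],
           [202, 195, 19, 30],
           [211, 206, 24, 36]]

assert average_hash(photo_a) == average_hash(photo_b)  # a "natural" collision
```

Every pixel value differs between the two images, yet the above/below-mean pattern, and therefore the hash, is identical.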
Are you sure that, if you take a picture of a naked body part, it won't collide with anything that looks similar in their database?
It is unlikely unless you happen to capture some particular pose against some particular background. This whole thing is a nothingburger. This is one of those weird things where many people have baseless gut reactions and then try to prove it's flawed even though they don't have a complete picture.
It is unlikely that there is a collision of a benign image with the database, and even if that happens, it is not some automatic process that just sends cops to your house to raid it.
Of course we can get a bunch of collisions with essentially the same images; I don't get why this is so magical. Just squint your eyes and I'm sure you have two objects within your reach that could be made to collide, but that isn't a gotcha on any level.
The thing is that some of the techniques commonly applied when training NNs are often "good enough" to deal with the presence of corrupted data (e.g. using SGD to optimize a model, while applying weight decay and drop-out, adds a regularization effect that somewhat replicates the effect of assuming errors-in-variables), as long as the input data is not total trash. This deters people from applying more formalized robust approaches.
As long as "things kind of work", it is difficult to convince other people to adopt robust methods, particularly due to the existence of a "robustness vs. efficiency" trade-off (which can make robust methods seem additionally "unsexy").
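A toy sketch of that "good enough" effect (all numbers made up): a one-parameter least-squares fit with a single corrupted label, where plain gradient descent gets dragged by the outlier and weight decay partially pulls the estimate back, without any explicitly robust loss.

```python
# Toy data: y = 2x, plus one grossly corrupted label.
data = [(float(x), 2.0 * x) for x in range(1, 10)]
data.append((5.0, 100.0))  # the corrupted point

def fit(weight_decay, steps=2000, lr=0.0005):
    """One-parameter least-squares fit by gradient descent, with optional weight decay."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2.0 * (w * x - y) * x for x, y in data) + weight_decay * w
        w -= lr * grad
    return w

plain = fit(0.0)      # dragged upward by the outlier (~3.45 instead of 2)
decayed = fit(100.0)  # shrinkage partially offsets the corruption (~2.97)
print(plain, decayed)
```

The decayed fit is still biased, but noticeably closer to the true slope; that "kind of works" regime is exactly what makes properly robust losses a hard sell.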
Note: the "multiplex test" is most likely still a PCR test (just 'multiplex PCR' instead of 'single-probe PCR'), so where you say "PCR only detects SARS CoV 2" it should say "the currently-used PCR test only detects SARS CoV 2".
Ah, yes... the "if I put everything in doubt, even the most painfully obvious things, it makes me seem smart" crowd.
Here's the thing, though: if one is basing their "doubts" on random brainfarts or propaganda pieces they saw on youtube or facebook and decided to trust blindly, they're probably also doing it wrong.
Skepticism that is not accompanied by critical thinking, actual knowledge, intellectual honesty and a dash of humility is useless or worse.
TL;DR: Epistemology is hard, but "just doubt everything" is not it, chief.
Not at all. Just criticizing his blanket statement, apparently used to justify "doubting the results of US elections": "doubting" just for the sake of doubting, particularly when it feeds into narratives being driven by trolls, disinformation and hidden agendas, doesn't do Democracy or Truth any service.
I'm going to go ahead and assume that the people in question (that are doubting the results of the US elections) are rather "skeptical" towards (so called) mainstream media, but not as much when it comes to the random crap on fox news, ann, youtube and facebook. If this is the case, then this makes them more "useful idiots" than actual "skeptics", in my opinion.
TL;DR: "Doubting blindly" (usually based on whatever Fox News or 4chan is spreading today) is as at least as bad as "trusting blindly".
This shouldn't matter, otherwise you'll succumb to false-flags and Reverse psychology.
In any case, the post you responded to doesn't say anything about fox news. Any basing an opinion on fox news isn't "Doubting blindly"; it would be blind trust.
Blind doubt, a.k.a. scepticism, is fine, because doubting something (with poor sources) isn't the same as assuming it's false; it's just not assuming that it's true.
Lack of a reason to trust US elections is justification to doubt them. The standard is (should be) that these things prove themselves, rather than put the burden of proof on outside observers.
It's also worth noting the post that it was replying to said:
> My university educated in laws in the UK are doubting the outcome of elections in the US
which seems to take "doubting the outcome of elections" as automatically outrageous, without mention of reasons, sources or why.
You think the burden of proof isn't (or shouldn't be) on the government?
Between your wilful "reading between the lines" based on your own biases, and smug insults ("But, hey... you do you"); I don't think this argument is going to go anywhere - I don't see the "accept that you may be wrong and are willing to be corrected" you talked about.
Not entirely. It makes it so that, to achieve a "full" collision, you have to ensure that the sets of data collide both in SHA hash and in length, helping to prevent attacks that rely on appending/prepending/removing data (for example, "length extension attacks" involve manipulation of the hash by appending data).
TL;DR: It is harder to find a collision SHA(B) for SHA(A) if you add the additional constraint that the length of B must match the length of A.
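A minimal sketch of the idea (the function names are my own, not from any particular standard): store the length alongside the digest and require both to match.

```python
import hashlib

def fingerprint(data: bytes):
    """Digest plus length: a forgery now has to match both."""
    return (hashlib.sha256(data).hexdigest(), len(data))

original = b"some message"
extended = original + b"!"  # appending (or removing) data changes the length

# Even if an attacker somehow matched the digest, the stored length differs.
assert fingerprint(extended)[1] != fingerprint(original)[1]
```

This is why length-extension-style tricks are blunted by the extra field: any change in message size is caught by a trivial comparison before the hash even matters.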
The known collision attacks for the MD-family and SHA-1 all in fact produce collisions with the exact same length. The method used necessarily does this.
Which part? The fact that storing "length" along with a hash is not superfluous?
You can probably find many things which have a SHA hash of "ca978112ca1bbdcafac231b39a23dc4da786eff8147c4e72b9807785afee48bb" (infinite things, if we assume arbitrary-sized inputs), but you can only find ONE thing which has that hash and has length 1. I just made it impossible (not just unlikely) for you to find a collision.
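(That digest is SHA-256 of the one-byte input "a"; a quick brute force over all 256 possible length-1 inputs confirms that exactly one matches:)

```python
import hashlib

target = "ca978112ca1bbdcafac231b39a23dc4da786eff8147c4e72b9807785afee48bb"

# Enumerate every possible length-1 message and collect the ones that match.
matches = [bytes([b]) for b in range(256)
           if hashlib.sha256(bytes([b])).hexdigest() == target]

assert matches == [b"a"]  # exactly one input of length 1 has this hash
```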
> The known collision attacks for the MD-family and SHA-1 all in fact produce collisions with the exact same length.
Emphasis mine. And note that I did not claim otherwise in my comment.
> Which part? The fact that storing "length" along with a hash is not superfluous?
The part where you make a false claim out of ignorance.
> You can probably find many things which have a SHA hash of "ca978112ca1bbdcafac231b39a23dc4da786eff8147c4e72b9807785afee48bb"
No reason I should go looking for such things. You're the one making the false claims, if you have found "many things" with that hash then list them to prove your point, otherwise go away.
> The part where you make a false claim out of ignorance.
Which false claim did I make? I'm still waiting...
> No reason I should go looking for such things. You're the one making the false claims, if you have found "many things" with that hash then list them to prove your point, otherwise go away.
You don't need to look for those things. By definition, you know they exist. I don't need to find or enumerate all primes to know that an infinite number of them exist.
By definition, assuming arbitrarily-sized inputs, there are infinite messages that collide to the same hash value.
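The pigeonhole argument can be made concrete with a deliberately tiny hash (here, SHA-256 truncated to its first byte, purely for illustration): with more inputs than possible outputs, collisions are guaranteed to exist.

```python
import hashlib
from collections import defaultdict

def tiny_hash(data: bytes) -> int:
    # First byte of SHA-256: only 256 possible outputs.
    return hashlib.sha256(data).digest()[0]

buckets = defaultdict(list)
for i in range(1000):
    msg = str(i).encode()
    buckets[tiny_hash(msg)].append(msg)

# 1000 inputs into at most 256 buckets: some bucket must hold >= 4 messages.
biggest = max(buckets.values(), key=len)
print(len(biggest))
```

For a 256-bit hash you can never enumerate such collisions in practice, but the counting argument for their existence (given arbitrarily-sized inputs) is the same.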
But, don't worry... it is clear you have no actual meaningful point to add, so I won't continue this conversation with you any further. Have a nice day.
You are misrepresenting or more likely have simply misunderstood the Pigeonhole Principle. Which I guess makes sense for somebody who didn't understand why length extension matters. It does not prove that any particular output will recur, and what you've got here is one very particular output.
Again, you need actual examples. Not handwaving, not the unwavering yet entirely unjustified certainty that you're correct, you need examples. And you don't have any.
Again, which false claim have I made? Be specific and quote me: you need actual examples, not handwaving.
Until you do that, I'm not pursuing this conversation any further. Have a nice day.
EDIT: Also, if you do want to have a conversation, make sure to stick to HN rules and talk about what is being discussed, rather than about me. Thanks.
> "why would I use a currency that easily stolen of I could just use dollars?"
If you hand out your currency to a third party (Joe Binance or someone else), then it can "easily be stolen", regardless of what currency you are talking about: using dollars instead of BTC changes nothing here.
In the US at least, if a bank steals my dollars, the government in control of the fiat currency makes it their business to prosecute the thief and retrieve the currency, but while that is happening they make me whole by issuing me replacement money via the FDIC insurance system.
No such back-stop exists for a third party stealing your bitcoins.
And if a bank steals any other assets you may have in their custody (e.g. stocks, forex, bitcoin, etc.) does it work any differently than if they steal USD currency? Isn't there some sort of insurance or legislation that protects you in such cases? Or, if your US bank decides to steal your EUR, you have no recourse and just have to take it?
In this case, we are not talking about a bank, but about an exchange which exists outside of US jurisdiction. In this case, it does not matter if you handed USD or BTC to this third-party (outside US jurisdiction): if they decide to take your stuff, there is little actual recourse you have and FDIC won't cover it.
On the other hand, if you are dealing with an exchange within US jurisdiction (e.g. Coinbase), I don't see how BTC theft would be treated any differently from USD theft: if they take your assets, they can be brought to a court to have that fixed and return your assets (be it USD or BTC or whatever).
TL;DR: What matters is if you keep your assets (USD or BTC) with an appropriately-regulated institution (e.g. a bank within a jurisdiction you trust) or not (e.g. in an unregulated exchange outside your jurisdiction), and not so much the type of assets you have (or that were stolen/taken from you).
I'm not sure how wide-sweeping FDIC insurance is. I believe it is scoped to cash holdings.
... but worth noting: I'm not talking about recovery (which is a matter of law that could take weeks or years). I mean replacement: making the victim whole with new cash while the law addresses the theft itself. I am unaware of any process or institution in the Bitcoin space that can do that.
That brings up a really interesting notion, no? Couldn't be done with bitcoin, but one might imagine one of these more centralized alt-currencies doing something like that. Somebody gets openly and obviously screwed on Binance and Joe Binance reviews/audits, and says, yep, and issues them some Binance coins*
*I know Binance does have its own coin (coins?), I have no idea if it's technically possible with it/them, this is purely hypothetical.
We can take this further: we have "law" and "law" has mechanisms to protect against and recover from theft, no matter what. It's true that the FDIC et al makes "recovery in the case of shenanigans" much easier, but none of it means "if someone steals bitcoin, it's impossible to recover because there's no FDIC for it."
(and perhaps even further than that, my earlier point is, "Joe Binance" is in so deep at this point that there's now likely enough of an incentive for parties who aren't "legal" to threaten him with violence or harm if he does something wrong.)
The key idea here is replacement is different from recovery.
Recovery can take weeks, months, years, or be impossible (if the cash was burned, or the BTC routed to an account with a lost private key). While all that is going on, FDIC insurance on USD means (in the case of a rifled US bank) the Fed steps in and issues new cash to make the victim whole.
There is neither a mechanism for printing "emergency Bitcoin" nor a "victim's reserve" operated by a trusted third-party to do something similar if a Binance (for hypothetical example) goes rogue.
There is, if you are talking about "centrally-controlled" tokens/coins (e.g. USDC, USDT, BUSD). They can simply "lock" the stolen coins and print you new ones (as they have done before).
If you are talking about decentralized tokens/coins (BTC), then... yes, you can't arbitrarily seize or print new ones (but that resistance to arbitrary manipulation is generally considered a feature, not a bug).
And, again, in this case (sending money/assets to Binance, or even to a US-based exchange like Coinbase), FDIC would be irrelevant and not be triggered, even if you use USD (I assume it only applies to money kept in banks and not necessarily to money kept in other types of non-bank brokers or exchanges).