> “ASI” will probably have to solve all of the world’s problems and bring us all to the promised land before it will merit that moniker lol.
People base their notions of AI on science fiction, and it usually goes one of two ways in fiction.
Either a) skynet awakens and kills us all or
B) the singularity happens, AI get so far ahead they become deities, and maybe the chosen elect transhumanists get swept up into some simulation that is basically a heavenly realm or something.
So yeah, bringing us to the promised land is an expectation of super AI that does seem to come out of certain types of science fiction.
Is it? AI is impressive and all, but i don't think any of them have pased the Turing test, as defined by Turing (pop culture conceptions of the Turing test are usually much weaker than what the paper actually proposes), although i'd be happy to be proven wrong.
I have a rather specialized interest in and obscure subject but one which has a physical aspect pretty much any person can relate to/reason about, and pretty much every time I try to "discuss" the specifics of it w/ an LLM, it tells me things which are blatantly false, or otherwise attempts to carry on a conversation in a way which no sane human being would.
The LLM is not designed to pass the turing test. An application that suitably prompts the LLM can. It's like asking why can't I drive the nail with the handle of the hammer. That's not what it's for.
> pop culture conceptions of the Turing test are usually much weaker than what the paper actually proposes
I've just read the 1950 paper "Computing Machinery and Intelligence" [1], in which Turing proposes his "Imitation Game" (what's now known as a "Turing Test"), and I think your claim is very misleading.
The "Imitation Game" proposed in the paper is a test that involves one human examiner and two examinees, one being a human and the other a computer, both of which are trying to persuade the examiner that they are the real human; the examiner is charged with deciding which is which. The popular understanding of "Turing Test" involves a human examiner and just one examinee, which is either a human or a computer, and the test is to see whether the examiner can tell.
These are not identical tests -- but if both the real human examinee and the human examiner in Turing's original test are rational (trying to maximise their success rate), and each have the same expectations for how real humans behave, then the examiner would give the same answer for both forms of the test.
Aside: The bulk of this 28-page paper anticipates possible objections to his "Imitation Game" as a worthwhile alternative to the original question "Can machines think?", including a theological argument and an argument based on the existence of extra-sensory perception (ESP), which he takes seriously as it was apparently strongly supported by experimental data at that time. It also cites Helen Keller as an example of how learning can be achieved through any mechanism that permits bidirectional communication between teacher and student, and on p. 457 anticipates reinforcement learning:
> We normally associate punishments and rewards with the teaching process. Some simple child-machines can be constructed or programmed on this sort of principle. The machine has to be so constructed that events which shortly preceded the occurrence of a punishment-signal are unlikely to be repeated, whereas a reward-signal increased the probability of repetition of the events which led up to it.
> These are not identical tests -- but if both the real human examinee and the human examiner in Turing's original test are rational (trying to maximise their success rate), and each have the same expectations for how real humans behave, then the examiner would give the same answer for both forms of the test.
I disagree. Having a control and not having a control is a huge difference when conducting an experiment.
That might be nice philosophically, but i don't think that is how the average human defines art.
Even people who like art don't really define it that way afaict. For example, "death of the author" is a hugely popular concept when it comes to art, where the idea is that what matters is what the art makes you feel, not what the author was trying to communicate.
Unfortunately no, which is a real shame because herpesviruses like EBV are harmful and practically unavoidable. This research is specific to delivering mRNA to white blood cells.
Herpesvirus latency is really complicated, more so than HIV. It hides in more tissues and particularly in nerves, which have some degree (debated) of immune privilege. Every type has different latency. Most types have multiple, very different methods of staying latent and stay more latent than HIV. We understand some of those methods, partially understand many of them, and still don't know a lot about others. A latent infection will probably still remain if too few of these pathways are activated at once.
Always hard to tell what and where specifically the impacts of any research will be. Looking at how they discuss the creating of this lnp formulation in the paper:
> We therefore modified the lipid composition of the LNP to enhance potency. First, the ionisable lipid DLin-MC3-DMA (MC3) was replaced with SM-102, an ionisable lipid previously shown to lead to greater cytosolic mRNA delivery through enhanced endosomal escape [30]. Second, the SM-102-LNPs were further modified using ß-sitosterol, a naturally-occurring cholesterol analogue associated with enhanced mRNA delivery [31], to create a formulation referred to as LNP X (Fig. 1b).
Neither of those papers they cited were focused on HIV/T cells specifically. Maybe this will have no applications, or applications in HIV only, or be generally useful for hard to transfect cells, or be bad for HIV for some unforseen reason but useful elsewhere, or be a dead end, you never know. But yeah maybe if folks run into similar challenges for approaches to dealing with those viruses, maybe there's something from this work that could help them, who knows?
It does seem a little silly. My great-great grandfather on one side was from germany, but i don't identify as german. The link is too weak and i have no cultural connection.
This misses the point a bit. CSRF usually applies to people who want only same domain requests and dont realize that cross domain is an option for the attacker.
In the modern web its much less of an issue due to samesite cookies being default .
People base their notions of AI on science fiction, and it usually goes one of two ways in fiction.
Either a) skynet awakens and kills us all or
B) the singularity happens, AI get so far ahead they become deities, and maybe the chosen elect transhumanists get swept up into some simulation that is basically a heavenly realm or something.
So yeah, bringing us to the promised land is an expectation of super AI that does seem to come out of certain types of science fiction.
reply