I don't know what you mean by killing the Turing test; ELIZA already passed it back in the 60s.
If the goal is just sounding natural like a human in conversation, I hear there are actually better models than LLMs for that.
If an LLM were really able to parse language, things like hallucination wouldn't happen.
As a computer engineer on the fringes, I'm not inclined to trust the output of a general-purpose AI whose use case is still unclear over a solid algorithm that reliably produces the same output for the same input.
I played with Eliza before the rise of modern text generation. It's neat, but it's abundantly clear you're talking to a robot. There was no chance that somebody would be fooled into holding a conversation with it. Likewise with Markov-chain based text generators: by the end of a paragraph, it's obviously not human. GPT-3 and beyond absolutely can fool someone who's not aware they're being tested, and frankly in limited conversations it can be difficult even in the non-blind case. That's Turing test passing.
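For anyone curious why Markov output degrades so fast, here's a minimal sketch of a word-level Markov text generator (the corpus and names are purely illustrative, not any real system): each next word is conditioned only on the previous word, so the text drifts off topic within a sentence or two.

    import random
    from collections import defaultdict

    # Toy corpus, purely for illustration.
    corpus = (
        "the cat sat on the mat and the dog sat on the rug "
        "and the cat chased the dog across the mat"
    ).split()

    # Build first-order transitions: previous word -> list of observed next words.
    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)

    def generate(start="the", length=20):
        # Sample a chain; because only the last word matters, coherence
        # decays quickly as the output gets longer.
        word, out = start, [start]
        for _ in range(length - 1):
            choices = transitions.get(word)
            if not choices:
                break
            word = random.choice(choices)
            out.append(word)
        return " ".join(out)

    print(generate())

Locally each pair of words looks plausible, but there's no memory beyond the previous word, which is exactly why a paragraph of it reads as obviously non-human.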
I deliberately avoided the problem-solving/doing-work side of things in discussing modern AI, because that's an area where significant progress is still needed before it's useful. I completely agree that its abilities in that regard, at present, are being grossly overblown. But the ability to parse human language, decipher intent, and synthesize responses in human language very much is a new capability that modern LLMs are extremely good at, and will likely reach the reliability levels necessary for autonomous application in the very near future.
A program that can parse human language perfectly absolutely could still hallucinate. When I ask an LLM a question and it makes up a patently false response, it accurately parsed what I asked of it. It just failed to synthesize a correct response. The parsing of human language and the synthesis of information into human language is, in and of itself, a powerful capability that we shouldn't overlook just because it's no longer science fiction.
The likelihood of the JavaScript trademark ever bringing any real benefit to Oracle is close to zero, while the legal costs of defending it in court are significant. Out of sheer spite and mean-spiritedness, they seem willing to waste the company's money.
Yes, the only upside I can see for them is that it reinforces their reputation for being ruthless in legal proceedings. Which you might consider a good thing if you subscribe to the "it is better to be feared than loved" philosophy.