"Or, perhaps, wanting to be regulated is a subconscious way for tech to reassure itself about its central importance in the world, which distracts from an otherwise uneasy lull in the industry."
There is that. There hasn't been a must-have consumer electronics thing since the smartphone. 2019 was supposed to be the year of VR. Fail. 2023 was supposed to be the year of the metaverse. Fail.
Internet of Things turned out to be a dud. Self-driving cars are still struggling.
All those things actually work, just not well enough for wide deployment.
LLM-based AI has achieved automated blithering. It may be wrong, but it sounds convincing.
We are now forced to realize that much human activity is no more than automated blithering.
This is a big shakeup for society, especially the chattering classes.
No, they are not. I was there in the 1980s and 1990s when spreadsheets hit. After the word processor, the spreadsheet became ubiquitous. Along with email, they were at the center of the personal computing revolution.
These days AIs/LLMs are still rarefied air. People do use AIs built into SaaS products (auto-summaries, etc.), but they're still a minority. Some others are becoming facile prompt jockeys. A rarer few experts run their own models on local laptops and servers.
But it is intrinsically complex technology that most users don't really "grok" in terms of how it actually works. It is fundamentally very, very, very different than the spreadsheet. And its adoption will have natural limits and boundaries.
> But it is intrinsically complex technology that most users don't really "grok" in terms of how it actually works. It is fundamentally very, very, very different than the spreadsheet. And its adoption will have natural limits and boundaries.
I know people who do office jobs unrelated to tech who have slashed their workloads in half using LLMs.
What do you mean, complex technology? You just type plain English into a prompt; it doesn't get any simpler than that. Have you seen how complicated spreadsheets are?
Or wasting time going through the crap that Google throws up in its search results these days. I find it faster to just ask GPT when I forget some command argument. LLMs have basically replaced the web search engine for me in most day-to-day cases now.
The stuff I use it for is usually recalling or finding simple information rather than creating textual content: things like looking up a Linux command I've used before but whose specific parameters/arguments I can't recall, or generating code to get me started on something. So I don't see the hallucination issue much. There have been cases where I asked it to generate example code for a specific library I was using and it pulled in some outdated information, but with GPT-4 I don't see that often now.
Now Google's Bard/Gemini, on the other hand, quite frequently makes stuff up. So for now I'm sticking with GPT-4 via a ChatGPT Plus subscription to augment my daily dev work.
I see what you mean. Yes, I do verify it, or use it in an environment where a mistake isn't going to cripple an application or cause some crisis. But in most cases, once it points out the command name or the argument, that jogs my memory enough to know it's correct. Lately it's been mostly Dockerfiles and stored procedure syntax. I'm not really good at keeping notes.
What do you mean? Hallucinations are unavoidable, even humans produce them semi-regularly. Our memories are not nearly reliable enough to prevent it.
In my experience the only more or less reliable way to avoid hallucinations is to provide the right amount of quality information in the prompt and make sure the LLM uses that.
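Concretely, I mean something like the sketch below, using the official OpenAI Node SDK. The model name and the pasted reference text are placeholders for illustration, not anything from a real setup:

```typescript
// Rough sketch: ground the model in supplied text instead of its memory.
// Assumes the official "openai" npm package; model name is illustrative.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical snippet pasted in from the actual docs you trust.
const referenceDoc = `
tar -x  extract files from an archive
tar -z  filter the archive through gzip
tar -f  use the given archive file
`;

async function ask(question: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4", // illustrative; use whatever model you have access to
    messages: [
      {
        role: "system",
        content:
          "Answer using ONLY the reference text below. " +
          "If the answer is not in it, say you don't know.\n\n" +
          referenceDoc,
      },
      { role: "user", content: question },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

ask("Which flag extracts a gzipped archive?").then(console.log);
```

The point is the system message: the model is told to answer only from the pasted text, and to admit when the answer isn't there, which cuts down on confident fabrication.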
- Make edits to a LaTeX file that would have taken me at least an hour longer to do by hand.
- Reverse-compile a PDF into LaTeX from a mere copy-paste.
- Translate a travel diary from French into English.
- Ask conceptual questions in difficult areas of mathematics. It's unreliable, but it often has a "germ" of an idea. This is backed up by the Fields medalist Terence Tao.
- Help me tutor someone by giving model solutions to homework and exam problems when I wasn't sure.
- Write a browser extension to do certain content blocking/censoring that none of the programs on my computer could do. I've never written a browser extension before, and this would have taken me a day longer.
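For what it's worth, the extension it wrote boiled down to something like this. This is a minimal sketch assuming Manifest V3; the keyword list and element selectors are invented for illustration:

```typescript
// content.ts - content script that hides elements containing blocked
// keywords. Keywords and selectors are made up for this example.
const BLOCKED = ["spoiler", "politics"]; // hypothetical keywords

function censor(root: ParentNode): void {
  root.querySelectorAll<HTMLElement>("p, li, span, div").forEach((el) => {
    if (el.children.length > 0) return; // leaf elements only, not the whole page
    const text = el.textContent?.toLowerCase() ?? "";
    if (BLOCKED.some((word) => text.includes(word))) {
      el.style.display = "none";
    }
  });
}

censor(document);

// Re-run when the page mutates (infinite scroll, SPA updates, etc.).
new MutationObserver(() => censor(document)).observe(document.body, {
  childList: true,
  subtree: true,
});
```

Paired with a manifest.json that registers the compiled script under content_scripts, that's essentially the whole extension.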
Those are good ideas, thanks. I don't like its writing; I find it stilted and awkward, but it's good if you want something that's not going to dissatisfy anyone.
I've also used an LLM for some of the elements on that list, though it seems like I should use them more.