Specialized AIs have been making an impact on society since at least the 1960s. AI has long suffered from a pattern: every time the field comes up with something new, it gets renamed and becomes important in its own right (where it makes sense) without AI getting any credit.
From what I can tell, most people in AI are currently hoping LLMs reach that point quickly, because the hype is not helping AI at all.
Yesterday my dad, in his late 70s, used Gemini with a video stream to program the thermostat. He then called me to tell me this, rather than calling me to come stop by and program the thermostat.
You can call this hype, and maybe it is all hype until LLMs can work on 10M-LOC codebases, but recognize that LLMs are a shift that is totally incomparable to any previous AI advancement.
That is amazing. But I had a similar experience when I first taught my mum how to Google for computer problems. She called me up with delight to tell me how she fixed the printer problem herself, thanks to a Google search. In a way, LLMs are a refinement on search technology we already had.
That is what OpenAI's non-profit economic research arm has claimed: LLMs will fundamentally change how we interact with the world, like the Internet did. It will take time, like the Internet did, and a couple of hype-cycle pops, but it will change the way we do things.
It will help a single human do more in a white collar world.
It's nice to think that but life and relationships are also composed of the little moments, which sometimes happen when someone asks you over to help with a "bs menial task"
It takes five minutes to program the thermostat, then you can have a beer on the patio if that's your speed and catch up for a bit
Life is little moments, not always the big commitments like taking a day to go fishing
That's the point of automating all of ourselves out of work, right? So we have more time to enjoy spending time with the people we love?
So isn't it kind of sad if we wind up automating those moments out of our lives instead?
Yeah. As a mediocre programmer I'm really scared about this. I don't think we are very far from AI replacing the mediocre programmers. Maybe a decade, at most.
I'd definitely like to improve my skills, but to be realistic, most programmers are not top-notch.
Yeah “AI” tools (such a loose term but largely applicable) have been involved in audio production for a very long time. They have actually made huge strides with noise removal/voice isolation, auto transcription/captioning, and “enhancement” in the last five years in particular.
I hate Adobe, I don’t like to give them credit for anything. But their audio enhance tool is actual sorcery. Every competitor isn’t even close. You can take garbage zoom audio and make it sound like it was borderline recorded in a treated room/studio. I’ve been in production for almost 15 years and it would take me half a day or more of tweaking a voice track with multiple tools that cost me hundreds of dollars to get it 50% as good as what they accomplish in a minute with the click of a button.
Bitter lesson applies here as well though. Generalized models will beat specialized models given enough time and compute. How much bespoke NLP is there anymore? Generalized foundational models will subsume all of it eventually.
It's not about specialized vs generalized models - it's about how models are trained. The chess engine that beat Kasparov is a specialized model (it only plays chess), yet it's the bitter lesson's example for the smarter way to do AI.
Chess engines are better at chess than LLMs. It's not close. Perhaps eventually a superintelligence will surpass the engines, but that's far from assured.
Specialized AI are hardly obsolete and may never be. This hypothetical superintelligence may even decide not to waste resources trying to surpass the chess AI and instead use it as a tool.
I think your point that an AI might refuse to play chess is interesting. To humans, chess is a strategic game. To a mathematician, chess is an exceedingly hard game (generalized n-by-n chess is EXPTIME-complete, though I'm not fully familiar with the details of NP/EXPTIME completeness). It seems like the AI would side with the mathematicians. AI is like "bro, you can't even figure out if P=NP, so how am I going to? You want me to waste power on an intractable problem?"
From Wikipedia, Garry Kasparov said it was a pleasure to watch AlphaZero play, especially since "its style was open and dynamic like his own".
People can't define AI because they don't want to consider AI as a subset of exponentially difficult algorithms, but they do want to consider AI as a generator of stylistic responses.