Execs like this will cause a lot of disruption over the next few years and waste a lot of money: adding friction and overhead where it's not needed, laying off the wrong people, and prioritizing the wrong projects. And as the inherent issues with generative models become apparent, they'll end up hiring similar people back into new roles and starting new projects to address those problems, and we'll move on to the next hype cycle with a lot of wasteful disruption.
The main difference from the blockchain and metaverse boondoggles will be the scale, given the endlessly breathless nonsense coverage of how professional humans in every field are now completely expendable.
By 2030, we’ll have moved past this but none of the execs or investors will have ever faced any consequences or done any introspection about all the disruption and waste. They’ll be on to the next scam.
Executives and investors are way more bought into this than recently hyped things like crypto and AR/VR, because this one offers the possibility of laying off most of their highly skilled, highly compensated technical staff. We are one of the last solidly middle-class professions, we didn't unionize when we could, and they are absolutely foaming at finally maybe having the chance to do to us what they've already done to everyone else.
> Some exec wakes up one day and has a crazy misguided idea about sticking gen AI somewhere and then asking junior (non DS) devs to build it without DS input
Don't really see a problem here: it's quite realistic for junior devs to build something useful out of the Ollama/OpenAI APIs.
This is reality, fortunately or not. How can you change it? Push back on the specifics that won't work, but frame it in business terms.
Example: "Well, we could build that, but to monetize it, it would cost xxx monthly, which would require a gross sales increase of xxx just to break even." You may not know your gross profit margin, but your exec does. So focus on costs, along with strong technical reasons why something is ridiculous.
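To make that concrete (numbers invented purely for illustration): if the feature adds $10k/month in API and hosting costs and gross margin is 40%, break-even is $10,000 / 0.40 = $25,000 of extra gross sales every month, forever, just to not lose money on it.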
Programmers hate being asked “how much time will this take”.
This needs to be flipped around and put to use: your crazy-ass idea will consume 5,000 core programmer hours, which will set back the following priorities, and we can't see a use case.
Realistic examples of small, useful AI projects a junior dev could do in a sprint:
1) Check for spelling/grammar mistakes on the company website
2) Summarise transcripts of Zoom calls (see the sketch below)
3) Scan the codebase for misleading comments
4) Generate Mermaid diagrams from Terraform
5) Scan Jira for near-duplicates in the backlog and add a label
None of these require data science teams to get involved.
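For a sense of scale, item 2 is maybe a couple of dozen lines against the OpenAI (or Ollama) API. A rough sketch only, assuming the official OpenAI Python SDK, an OPENAI_API_KEY in the environment, and a placeholder model name and file path:

    # Sketch: summarise a Zoom call transcript with the OpenAI chat API.
    # Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarise_transcript(transcript: str) -> str:
        """Return a short bullet-point summary of a call transcript."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; use whatever model you have access to
            messages=[
                {"role": "system",
                 "content": "Summarise meeting transcripts as five bullet points plus action items."},
                {"role": "user", "content": transcript},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        with open("zoom_call.txt") as f:  # placeholder path
            print(summarise_transcript(f.read()))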
At least they got over slave labor ... only taking 140 years, from 1808 through 1948.
(The Act Prohibiting Importation of Slaves & United Nations' Universal Declaration of Human Rights)
Chasing profit via dubious ethics, or the outright abandonment of healthy human ethics, has been a hallmark of human business, sadly. [Energy draining via blockchain. Absurd re-use of human contributions to knowledge via GenAI.]
It's as if technology inevitably becomes a disruptor. The disruption doesn't have to hit so hard; it can be softened. But for the sake of profit it isn't, and more people than necessary will be disenfranchised by GenAI.
I had a look and couldn't find a serious journalistic source with numbers to back my personal opinion.
Lots of whiny Reddit threads, but only anecdata.
My own experience is that Dungeons & Dragons character art can be better (and more personally) created by writing your own Stable Diffusion prompts than by hiring an artist.
The social media user base was tiny and primarily full of antisocial nerds.
> * email cost money
No, it didn't.
> * Amazon just sold books
By 1999, Amazon was selling other stuff too.
> * cloud computing wasn't anywhere near mature
It depends on how you define "mature," I suppose. VPSs, dedicated hardware rentals, and managed services were quite common.
> * cell phones were only on TV
Having been in high school from ~1996 to 1999, I can say that a good portion of the girls had cell phones.
> * identity theft wasn't a thing
Identity theft has been a thing since the beginning of the human race.
> * Subscription-based "walled gardens" like AOL were popular
The tech community widely mocked AOL. It was "popular" among a small subset of computer users. And to be fair, this was before good search engines; services like Yahoo! and AOL made it possible to find content easily.
> * server side programming was done in C
And Perl, and Fortran, and shell, and .NET languages like C#.
Funny, iCraveTV is the 90s 'dot-com' company I remember most.
> cell phones were only on tv
Perhaps you are referring to an '80s tech boom instead? That's when the "Zack Morris" phone was something for TV. By the '90s tech boom, everyone had a MicroTAC.
> server side programming was done in C
And bash! Hard to imagine now. Although Perl reigned supreme.
I am an executive and I’m pushing back hard against AI. I don’t need a robotic progressive machine telling us what is acceptable or that they are deeply offended.
There is something there, but currently it feels like it gets worse all the time, and the wow factor is already gone.
Why aren’t we using all this computing power to, say, model how much China needs to cut emissions and exactly where, using new pollution satellite data, to force them to save the planet?
Why are we instead wasting all this computing power on politically correct robots ?
There is an article about Bitcoin “wasting” 2.3% of US electrical grid power. Why aren’t we examining how much all these GPUs are wasting?
The computing power is not that high, and computing is not so scarce that it’s a limiting factor in anything.
“Force China to save the planet” is awfully loaded. China’s green development is frankly a lot better than the US’s, even if its polluting development is huge.
And in a few months they pivot to a new obsession (e.g. quantum computing) and fire everybody without a community college course in quantum physics on their resume.
> "and fire everybody without a community college course in quantum physics on their resume"
... And without ten years experience in the shiny new quantum software that was just created last week ...
(I've literally seen real actual job postings in the tech industry where the requirements include years of experience in a technology that was just invented.)
I see the opposite problem in a lot of large businesses... "ah, AI is cool but it will never disrupt our space, our work is too complex" - heh, ok buddy.
I worked at a Fortune 50 company (not tech) and it was hilarious to hear about some random exec who read about a technology and it was suddenly priority #1, mostly because it was the flavor of the month in the industry.
Of course it was a tech that many had brought up before (myself included) which had been brushed off.
Then it was “pull all-nighters and come up with some ideas on how we can use this tech”. And it was purely so the CEO wasn’t caught with his/her pants down when talking to investors.