Another thought-terminating one-liner. This is why I had to add in the OP that this was a hypothetical scenario: it seems that whenever anyone tries to discuss transformative AI on this site, there is always a knee-jerk bias to just say it's all a VC scam, overhyped, etc.
That framing is fun because it makes you the smart cynic and everyone else the dumb sheep, but it isn't productive when discussing ideas within a hypothetical range (i.e. with no burden of proof, simply a what-if). Yet even explicitly saying it's a "what-if" scenario can't keep that bias from coming out.
I think it's unwise to spend almost no time considering the real impacts of AI given how much the internet, mobile phones, and social media have changed the world over the past 20 or so years. I mean, don't spend all your time thinking about it, but at least consider it with a seriousness that doesn't resort to those cliches.
The FUD is, in many cases, little more than one-liners that play on insecurity. No case is made to justify the assumption as true.
How would you judge the impact of 'technology' over the past 20 years? The 20 years before that were pretty revolutionary too, and so were the 20 years prior to that... and... the 20 years prior... and... Bias comes from the life one's lived.
How to measure the impact, then? Life expectancy? Education? Gender equality? Access to clean air and water? A safe house that the majority of people can afford while doing a job they don't hate? All the things that go into human development, of which HDI [1] is but one measure, and some of which have seen progress in some countries. But fear-mongering and playing on insecurities gets investor money, in the hope that some fraction of that wealth can be accumulated by the peddlers of such insecurity. You are a useful distraction.
>How would you judge the impact of 'technology' over the past 20 years? The 20 years before that were pretty revolutionary too...
Yes, and I never said they weren't. We are living at the tail end of many revolutions whose timeframes shorten with each one. Five years ago people would have called it sci-fi if you said you could ask a computer to create a photo-realistic video of a person singing an original song from a text prompt; now it's just one of many new releases each month. Is every AI release strictly practical? No, but it's impossible to deny the real impact many releases have had, including AlphaFold.
>But fear-mongering and playing on insecurities gets investor money, in the hope that some fraction of that wealth can be accumulated by the peddlers of such insecurity. You are a useful distraction.
Are you claiming that AI startups and the companies diving headfirst into AI products and R&D are just fear-mongering for investor money? That's strange to say, because most of these companies (Google, Meta, OpenAI, etc.) don't speak much about the risks of AI, but rather about the upside of empowering people. Most FUD comes from their detractors, who claim that AGI will lead to human disempowerment; if that were an attractive narrative for the aforementioned companies to peddle for economic gain, why are they all distancing themselves from that claim?
20 per day is less than 5 minutes. Most of my spam comes from GMail. Why blacklist GMail over the minor annoyance of the few messages that get through SpamAssassin, DKIM filtering, greylisting, etc.?
And luckily the "plain English" the AI outputs is always 100% correct, so we don't have to worry about buggy python code down the line because those python devs got incorrect instructions. I mean, how would they even verify anything? They're python devs, and perl will look like complete gibberish to them.
So they might have saved all that time, but what's gonna be the impact of an incorrect reimplementation? What does that software do?
Ultimately it seems the question ought to be "is the code they wrote with AI buggier than the code they would have written without", not "is the code they wrote with AI 100% bug-free". I doubt that any team doing a significant refactor from a language they don't know could produce bug-free code on any reasonable timeline, AI or not.
If the question is the former, though, then unless it's horrendously buggy I wonder whether the speed increase offsets the "buggier code" (if the code even is buggier), because if they finish early they can bug-bash for longer.
I guess it depends on whether the devs are able and willing to even still look at the old code when they have a nice, easy-to-understand description in front of them of what they're supposed to implement. And sure, at the end of the day management just cares about what costs less, including any accidents caused by AI giving the wrong description. It might also depend on who'll use that tech. If it's a bank, this could cost millions, if not billions. If it's a medical device (yeah, I really don't think it is. I mean, I really hope it isn't), it could cost lives. But at least then we can blame the AI, so nobody is at fault.
Existing QA and deployment best practices should mostly answer the question of "does the new one work like the old one". The difference here is that now the devs can understand what the new one ought to do much faster.
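For example, a bare-bones golden-master check could look something like this (just a sketch, assuming both versions are CLI tools that read stdin and write stdout; the script names and fixture paths are made up for illustration):

    # Characterization test: the new python implementation must produce the
    # same output as the legacy perl script for a set of captured inputs.
    # All names below (legacy_report.pl, new_report.py, fixtures/) are
    # hypothetical placeholders.
    import subprocess

    SAMPLE_INPUTS = ["fixtures/case_001.txt", "fixtures/case_002.txt"]

    def run(cmd, input_path):
        """Run one command on one input file and return its stdout as text."""
        with open(input_path, "rb") as fh:
            result = subprocess.run(cmd, stdin=fh, capture_output=True, check=True)
        return result.stdout.decode()

    def test_new_matches_old():
        """Golden-master check: new output must equal legacy output, input by input."""
        for path in SAMPLE_INPUTS:
            old_out = run(["perl", "legacy_report.pl"], path)
            new_out = run(["python", "new_report.py"], path)
            assert old_out == new_out, f"divergence on {path}"

Run that with pytest against a corpus of real captured inputs and diff any divergences; that answers "does the new one work like the old one" without anyone needing to read the perl.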
Later, they claim 15 hours per dev per week. It’s almost like the numbers are completely made up!
Also, why rewrite perl code in python? Those two languages have basically the same set of problems, assuming the perl code is doing banking stuff and not orchestrating GPUs.
If you’re going to pay for a rewrite, at least get some sort of high level improvement, like richer static typing.
I’d guess the goal is “python devs are cheaper than perl devs”? How long is the return on investment vs. site-licensing Learning Perl?
>Also, why rewrite perl code in python? Those two languages have basically the same set of problems, assuming the perl code is doing banking stuff and not orchestrating GPUs.
Because at some point it's no longer worthwhile to keep hacking up stuff built with the assumptions and best practices of 20 years ago, and you just want to write a new one that's built with today's assumptions and best practices and will be easier/cheaper for everyone who has to interact with it for the next 20 years.
> Smart TVs constantly send large amounts of data about what you watch to the servers. LG captures a screenshot of your TV every 10ms (100 times per second), while Samsung does so every 500ms (twice per second).