I actually tested Claude Sonnet to see how it would fare at writing a test suite for a background worker. My previous experience was with some version of GPT via Copilot, and it was... not good.
I was, however, extremely impressed with Claude this time around. Not only did it do a great job off the bat, but it taught me some techniques and tricks available in the language/framework (Ruby, RSpec) that I wasn't familiar with.
I'm certain that it helped having a decent prompt, asking it to consider all the potential user paths and edge cases, and also having a very good understanding of the code myself. Still, this was the first time I could honestly say that an LLM actually saved me time as a developer.
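To make the "user paths and edge cases" point concrete: for a retrying background worker, a decent suite has to cover at least the happy path, a transient failure that succeeds on retry, and exhausted retries. The original worker and specs aren't shown, so this is a minimal plain-Ruby sketch with an invented `RetryWorker` class (in a real project these checks would be RSpec examples):

```ruby
# Hypothetical background worker, invented for illustration: runs a job,
# retrying on StandardError up to max_attempts, then gives up.
class RetryWorker
  attr_reader :attempts

  def initialize(max_attempts: 3)
    @max_attempts = max_attempts
    @attempts = 0
  end

  # Returns :ok on success, :gave_up once retries are exhausted.
  def perform(&job)
    @attempts += 1
    job.call
    :ok
  rescue StandardError
    return :gave_up if @attempts >= @max_attempts
    perform(&job)
  end
end

# Happy path: job succeeds first try.
raise "happy path failed" unless RetryWorker.new.perform { 42 } == :ok

# Transient failure: fails once, succeeds on the second attempt.
calls = 0
flaky = RetryWorker.new
result = flaky.perform { calls += 1; raise "boom" if calls < 2 }
raise "retry path failed" unless result == :ok && flaky.attempts == 2

# Exhausted retries: always fails, gives up after max_attempts.
doomed = RetryWorker.new(max_attempts: 3)
raise "give-up path failed" unless doomed.perform { raise "always" } == :gave_up
raise "attempt count wrong" unless doomed.attempts == 3
```

In RSpec each of those three branches would typically be its own `context` block, which is exactly the kind of structure a prompt about "all the potential user paths" tends to elicit.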
All this makes me think making software engineers redundant is really the "killer app" of LLMs. This is where the AI labs are spending most of their effort - it's the best marketing for their product, after all. Fear sells better than greed (loss aversion), making engineers notice it and unable to dismiss it.
Despite some of the comments on this thread, and despite not wanting it to be true, I must admit LLMs are impressive. Software engineers and ML specialists have finally invented the thing that substantially disrupts their own jobs, whether through a large reduction in hours or a reduction in staff. As the hours a software engineer spends coding fall by large factors, so too (especially in this economy) will the hours anyone is required to pay an engineer for - up to the point where anyone can create code and learn from an LLM, as you have just done. Once everybody is special, no one is. Fundamentally, employment, and the value of things created from software, comes from scarcity, just like everything else in our current system.
I think there are probably only a few years left where software engineers are around - or at least seen as a large part of an organization, with large teams, etc. Yes, AI software will have bugs, and yes, it won't be perfect, but you can get away with just one or two engineers for a whole org to fix the odd blip of an LLM. It feels like people are picking on minor things at this point - which, while true, are costs a business shrugs off as "meh" while the gains of removing engineers are substantial.
I want to be wrong; but every time I see someone "learning from LLMs", saving lots of time, saving hundreds of hours, etc., I think: it's only 2-3 years in and already it's come this far.
> Yes, AI software will have bugs, and yes, it won't be perfect, but you can get away with just one or two engineers for a whole org to fix the odd blip of an LLM.
Maybe. A lot of places have headcount limits on software devs because of budget constraints. As in, the reason they don't hire more is that they can't afford it, not that there is a shortage of code to write and bugs to fix. The more optimistic view is that the nature of being a software engineer will adjust to increased productivity and focus on the parts of the job that LLMs can't do, with a market for experts who are skilled at removing "the odd blip from an LLM". Expertise will also move into areas where there's less or insufficient training data for a particular niche. One way to future-proof yourself is to find places where an LLM frequently makes up nonexistent libraries or is bad at code in a language, and specialize in that.