For my part, I have a company subscription for Copilot and I just use the line-based autocomplete. It’s mildly better than the built-in autocomplete. I never have it do more than that, though, and probably wouldn’t buy a license for myself.
* tests themselves in a staging environment, independent of any QA team or reviews
* monitors the changes after they’ve gone out
* has repeatedly found things in their own PRs and asked to hold off release to fix them
* is reviewing other people’s PRs and spotting things before they go out
yea, sure, i’ll release the changes. they’re doing the auditing work for me.
they clearly care about the software. and i’ve seen enough to trust them.
and if they got it wrong, well, shit, they did everything good enough. i’m sure they’ll be on the ball when it comes to rolling it back and/or fixing it.
an llm does not do those things. an llm *does not care about your software* and never will.
i’ll take people who give a shit any day of the week.
I'd say it depends more on "the production" than the human.
There are legal means to hold all people accountable for their actions ("gross negligence" and all that).
So you can basically always trust that people will fix what they messed up, given the chance.
So if you can afford for the production to be broken (e.g. the downtime will just annoy some people) you might as well allow your team to deploy straight to prod without audits. It's not that rare actually.
Nope. But AI's sales pitch is that it's an oracle to lean on. Which is part of the problem.
As a start, let me know when an AI can fail its test cases, iterate on its code until the failures are fixed, and re-submit. But I suppose that starts to approach AGI territory.
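For what it's worth, the loop being asked for is roughly this (a minimal sketch; `ask_model`, `read_code`, and `write_code` are hypothetical stand-ins for whatever model call and file handling you'd actually wire up):

```python
import subprocess

def run_tests() -> tuple[bool, str]:
    """Run the project's test suite and capture its output."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def fix_and_retry(ask_model, read_code, write_code, max_attempts=3) -> bool:
    """Run tests, feed failures back to the model, apply its revision, retry."""
    for _ in range(max_attempts):
        passed, output = run_tests()
        if passed:
            return True  # tests pass, ready to re-submit
        # ask_model is a placeholder for an LLM call that takes the current
        # code plus the failure output and returns a revised version.
        revised = ask_model(code=read_code(), failures=output)
        write_code(revised)
    return False
```

Agent-style tools do attempt something like this today, but whether the model "cares" about the result is exactly the point under dispute above.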
I have ADHD and simply engaging in a conversation while driving is enough for me to miss turns, exits, etc. Sure, in the event of an impending crash I might go into the zone and avoid danger, but my general awareness is noticeably impacted.
It seems to be less so for people without ADHD in my experience, but we can't just say it has zero effect. It's better to discuss the threat to safety in terms of potential, since that accounts for individual variance.
Really depends on what sort of person you are I guess.
Some people appreciate being shown fascinating aspects of human nature. Some people don't, and I wonder why they're on a forum dedicated to curiosity and discussion. And then, some people get weirdly aggressive if they're shown something that doesn't quite fit in their worldview. This topic in particular seems to draw those out, and it's fascinating to me.
Myself, I thought it was great to learn about spontaneous trait association, because it explains so much weird human behavior. The fact that LLMs do something so similar is, at the very least, an interesting parallel.
We do represent much of our cognition in language.
Sometimes I feel like LLMs might be “dancing skeletons” - pulleys & wire giving motion to the bones of cognition.
I'm digging into that now. On one hand I can kind of understand the position, yet on the other the motive and reasoning don't feel quite right. I guess that might be a broad theme with Scientology.