Hacker News

Yes, it often is much faster, and significantly so.

There are also times where it isn't.

Developing the judgment for when it is and isn't faster, and when it's likely to do a good job vs. when it isn't, is pretty important. But how good of a job it does is often a skill issue, too. IMO the most important and overlooked skill is having the foresight and the patience to give it the context it needs to do a good job.



> There are also times where it isn't.

Should this have the "significantly so" qualifier as well?


I'm not sure. I think it's asymmetric: high upside potential, but low downside.

Because when the AI isn't cutting it, you always have the option to pull the plug and just do it manually. So the downside is bounded. In that way it's similar to the Mitch Hedberg joke: "I like an escalator, because an escalator can never break. It can only become stairs."

The absolute worst-case scenario is the situation where you think the AI is going to figure it out, so you keep prompting it, far past the time when you should've changed your approach or given up and done it manually.


This is so far from an absolute worst-case scenario.

You could have a codebase subtly broken on so many levels that you cannot fix it without starting from scratch - losing months.

You could slowly lose your ability to think and judge.


Ha, great answer! Of course there are a lot of nuances to that, but I don't want to beat a dead horse.



