I agree that quoting a model's answer to someone else is bad form - you can get a model to say ANYTHING if you prompt it to, so a screenshot of a ChatGPT conversation to try and prove a point is meaningless slop.

I find models vastly more useful than most technical books in my own work because I know how to feed in the right context and then ask them the right questions about it.

There isn't a book on earth that could answer the question "which remaining parts of my codebase still use the .permission_allowed() method, and what edge cases do they have that would prevent them from being upgraded to the new .allowed() mechanism?"



And as long as you don't copy-paste its advice into comments, that's fine.

No one really cares how you found all those .permission_allowed() calls to replace - was it grep, intense staring, or an AI model? All that matters is that you stand behind it and act as its author. The original post said it very well:

> ChatGPT isn’t on the team. It won’t be in the post-mortem when things break. It won’t get paged at 2 AM. It doesn’t understand the specific constraints, tech debt, or your business context. It doesn’t have skin in the game. You do.


Further, grep (and any of its siblings) works just fine for such a task: it's deterministic, won't feed you bullshit, and doesn't charge you tokens to do a worse job than free tools already do well. Better yet, given the dithering pace of LLMs in my experience, you'll get your answer quicker, too.
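
For instance, a sketch of the "which parts still use it" search (assuming GNU grep or ripgrep, and a Python codebase - adjust the glob for your language):

    # GNU grep: recurse, print file names and line numbers
    grep -rn --include='*.py' '\.permission_allowed(' .

    # or ripgrep, which skips .gitignore'd files by default
    rg -n -t py '\.permission_allowed\('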


> There isn't a book on earth that could answer the question "which remaining parts of my codebase still use the .permission_allowed() method, and what edge cases do they have that would prevent them from being upgraded to the new .allowed() mechanism?"

You're so close to realising why the book counterargument doesn't make any sense!



