
If you try to inspect and question such code, you will usually quickly run into the realisation that the "author" has basically no idea what the code even does.

"review it like it wasn't AI generated" only applies if you can't tell, which wouldn't be relevant to the original question that assumes it was instantly recognisable as AI slop.

If you use AI and I can't tell you did, then you're using it effectively.





If it's objectively bad code, it should be easy enough to point out specifics.

After pointing out 2-3 things, you can just say that the quality seems too low and ask the author to come back once the PR meets standards. Those standards can include PR size for good measure.

If the author can't explain what the code does, make an explicit standard that PR authors must be able to explain their code.


You're being optimistic in assuming the author even cared about the code. Most of the time you just get another LLM-generated response about why the code “works”.


