
This is an interesting proof of what I keep reading about: people are quick to make something, like a PR, with AI, but then the last 10% is left hanging.

If Codex were half as good as they say it is in the presentation video, surely they could've sent a request to the one in ChatGPT from their phone while waiting for the bus, and it would've addressed your comments…



The last 10% you're referring to are nits. That's more like the last 0.000001%. Also, it could have fixed all of these in like a minute by itself.


Some of my review comments were nits where the tooling didn't respect the conventions already in the code, or brought in conventions that weren't in use. I'd expect that minding existing conventions is something LLM-based code tooling will eventually incorporate explicitly into its context and guardrails. Intuitively this seems like it would be difficult to push down a level into the model's training, for various practical reasons.
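
FWIW this doesn't have to wait for training; it can be done at the harness level by scraping the repo's existing style/lint configs into the prompt. A minimal sketch in Python, where the file list and the prompt wiring are my own assumptions rather than any specific tool's behavior:

    from pathlib import Path

    # Files that commonly encode a repo's conventions; the exact
    # list here is an assumption for illustration.
    CONVENTION_FILES = [".editorconfig", ".eslintrc.json", "ruff.toml",
                        "pyproject.toml", "CONTRIBUTING.md"]

    def convention_context(repo_root: str) -> str:
        """Collect existing convention files to prepend to an LLM prompt."""
        chunks = []
        for name in CONVENTION_FILES:
            path = Path(repo_root) / name
            if path.is_file():
                chunks.append(f"--- {name} ---\n{path.read_text()}")
        if not chunks:
            return ""
        return ("Follow the conventions below; do not introduce new ones.\n\n"
                + "\n\n".join(chunks))

    # Prepended to the task before handing it to the model:
    # prompt = convention_context(".") + "\n\nTask: " + user_request

The point being that convention-following is largely a retrieval problem: the conventions are usually already written down somewhere in the repo.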


Why didn't it?


I'm guessing that doing this would have required running Codex and ensuring the specific repo/PR context was in scope. Perhaps the dev just hasn't done that yet?
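
For the CLI flavor that mostly means launching it from inside the checkout with the PR branch checked out, so the diff is in its working context. A rough sketch of that wiring; the path, branch name, and prompt are placeholders, and I'm only assuming the codex CLI accepts a task prompt as an argument:

    import subprocess

    REPO = "/path/to/checkout"  # hypothetical local clone of the PR branch

    # Check out the PR branch so its changes are what the agent sees.
    subprocess.run(["git", "-C", REPO, "checkout", "fix-branch"], check=True)

    # Run the agent from the repo root so the project is its context.
    subprocess.run(
        ["codex", "address the open review comments on this branch"],
        cwd=REPO,
        check=True,
    )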



