This is an interesting demonstration of something I keep reading about: people are quick to produce something like a PR with AI, but the last 10% gets left hanging.
If codex were half as good as they make it look in the presentation video, surely they could've sent a request to the one in chatgpt from their phone while waiting for the bus, and it would've addressed your comments…
Some of my review comments were nits where the tooling didn't respect existing conventions in the code, or brought in conventions that weren't in use. I'd expect that minding existing conventions is something LLM-based code tooling will eventually incorporate explicitly into its context and guardrails. Intuitively, pushing that down a level into the model's training itself seems difficult for various practical reasons.
I'm guessing that doing this would have required running codex with the specific repo/PR context in scope. Perhaps the dev just hasn't done that yet?
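To make the context/guardrails idea concrete, here's a minimal sketch of how tooling could pin a repo's own style docs into the prompt ahead of the task. The file names (CONVENTIONS.md etc.) and the guardrail wording are illustrative assumptions, not any tool's actual mechanism:

```python
from pathlib import Path

def build_prompt(task: str, repo_root: str = ".") -> str:
    """Assemble a prompt that puts the repo's own conventions in front
    of the model before the actual task, plus a guardrail instruction.

    The specific file names are assumptions; any repo-local style
    documents would serve the same purpose.
    """
    parts = []
    for name in ("CONVENTIONS.md", "CONTRIBUTING.md", ".editorconfig"):
        path = Path(repo_root) / name
        if path.is_file():
            parts.append(f"--- {name} ---\n{path.read_text()}")
    # Explicit guardrail so the model sticks to what's already in use.
    parts.append(
        "Follow the conventions above exactly; do not introduce styles "
        "or patterns that are not already in use in this repo."
    )
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(build_prompt("Address the reviewer's nit comments on the open PR"))
```

Whether this lives in the prompt, in a retrieval step, or in a lint-style post-check is an open design question, but it's the kind of thing that seems much easier to bolt on at the context layer than to bake into training.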