With the current crop of LLMs/agents, I find that refactors still have to be done at a granular level. "I want to make X change. Give me the plan and do not implement it yet. Do the first thing. Do the second thing. Now update the first call site to use the new pattern. You did it wrong and I fixed it in an editor; update the second call site to match the final implementation in $file. Now do the next one. Do the next one. Continue. Continue.", etc.
I use Claude Code and haven't used Codex yet (should I?), but in Claude Code you can spin up sub-agents to handle these big refactors, with the master context window just keeping track of the overall progress, bugs, etc., and handing instructions to the sub-agents to do the rote work.
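For anyone who hasn't tried it: a minimal sketch of what one of those sub-agents can look like. As I understand it, Claude Code picks up agent definitions from markdown files under `.claude/agents/` with YAML frontmatter; the exact fields and this specific agent (`callsite-updater`) are my assumptions from memory of the docs, not a verified config.

```markdown
---
name: callsite-updater
description: Rote refactor worker. Updates one call site at a time to match a reference implementation.
tools: Read, Edit, Grep
---
You update call sites to the new pattern. The orchestrator will name the file
containing the final implementation. Change exactly one call site per
invocation, match the reference implementation precisely, and report a short
summary of the diff so the master context can track progress.
```

The point is that the master session only carries the plan and the progress notes; each sub-agent burns its own context window on the mechanical edits.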
IMO yes. It's less polished and slower, but the model is way better and needs much less steering. I moved over from Claude completely and cancelled my Max subscription.
I'm not an expert AI user (and have never touched Codex), but for anything remotely important I force the smallest context window possible. I just did something very beautiful using that principle, which will soon be ready to show the world. It would have been a garbled pile of garbage with long context windows.
Obviously major architectural changes need a bigger context window. But try to aggressively modularize your tasks as much as you can, and where possible run batch jobs to keep your workflow moving while each task stays a smaller chunk.
For complex refactors, I use Max Mode in Cursor, which in my experience noticeably improves the model's performance and lets it go a lot longer before it starts to drift. I haven't looked into how it works exactly, but it works well if you don't mind the extra cost.