
Why is Claude Code better than Cursor?


My company has a huge codebase, for me cursor would freeze up / not find relevant files. Claude code seems able to find the right files by itself.

I seem to always have better outcomes with Claude code.


How does Claude Code handle editing multiple files? My understanding is that it's CLI-based, so it edits a bunch of files on its own; how do you accept/reject and rollback changes?


I use git and when necessary, git worktrees to keep parallel Claudes from bothering/actively-undoing each other’s edits (esp when an edit requires multiple files of changes, so the tree is temporarily broken).
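A minimal sketch of that worktree setup; the repo, branch, and directory names here are invented for illustration:

```shell
# Throwaway repo standing in for a real project
git init --quiet demo
cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "init" --quiet

# One worktree per parallel Claude session, each on its own branch,
# so concurrent edits can't clobber or undo each other
git worktree add ../demo-session-a -b session-a
git worktree add ../demo-session-b -b session-b

git worktree list   # shows the main checkout plus both session trees
```

Each session then runs inside its own directory, and the branches are merged back once a session's tree is in a working state.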


> how do you accept/reject and rollback changes?

Git
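Unpacking the curt answer above, the review loop might look like this; the file name and contents are made up for illustration:

```shell
# Throwaway repo with a committed baseline
git init --quiet demo-review
cd demo-review
echo "original line" > app.txt
git add app.txt
git -c user.name=demo -c user.email=demo@example.com \
    commit -m "baseline" --quiet

echo "agent rewrite" > app.txt   # simulate an agent's edit
git diff --stat                  # review what changed
git restore app.txt              # reject: revert the file to the last commit
cat app.txt                      # prints "original line" again
```

Accepting is just `git add`/`git commit`; selective acceptance is `git add -p`.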


Cursor and these tools evolve quite fast.

The Cursor you used a month ago is not the one you get now.

Just saying that because in this space you should always compare latest X with latest Y.

I too switched to Claude Code weeks ago. Then, in between the times I'm out of tokens, I launch Cursor and actually find it... better than I remember, if not on par with Claude Code (the model and the quality of prompts/context matter more than the IDE/CLI tool used, too).


Because iterating multiple sessions through multiple terminals is obviously more efficient and seamless than interacting through a scuffed IDE side-panel UI.


Is this sarcasm? I used Claude Code via a VSCode side panel.


That's a funny response, I have to admit. No, 100% serious: I don't see the advantage.


Claude Code has some non-LLM magic in it that just makes it better for code in general, despite (or because of) having minimal IDE integration.


What I have found Claude Code is extremely good at is making one change at a time: it gives you a chance to read the code it's changing, and lets you give feedback in real time and steer it properly. I find the mental load with this method to be MUCH lower than with Cursor or any of the other tools, which give you two very different options: "Ask" mode, which dumps a ton of suggestions on you and then requires semi-manual implementation, or "Agent" mode, which dumps a ton of actual changes on you and requires your inspection, feedback, roll-backs, etc.

This may not work for everyone, but as a solo dev who wants to keep a real mental model of my work (and not let it get polluted with AI slop), the Claude Code approach just works really well for me. It's like having a coding partner who can iterate and change direction as you talk, not a junior dev who dumps a pile of code on your plate without discussion.


I set rules for Cursor so that when I need it to make changes, it does so in a plan-scheme-execute mode. Everything is clear, especially when it prompts you with questions so you can shape the scheme as you wish. Today Cursor's gpt-5-fast-high model exploits this working style to its full extent: it gives the most detailed scheme for me to customize, and I benefit a lot.


+1 to this. Cursor's Agent feels too difficult to wrangle. CC is easier to monitor.


In my experience, it is much better at tool-calling, which is huge when we're talking about agentic coding. It also seems to do a better job of keeping things clean and not going off on tangents for anything that isn't accomplished in one shot.


I have had the exact opposite experience. For me, Claude Code in any meaningful codebase gets stuck in loops of doing the wrong thing. Then, when that doesn't work, it deletes files and makes its own versions that don't have the problem it's encountering.

Cursor on the other hand, especially with GPT-5 today but typically with Sonnet 4.1, has been a workhorse at my company for months. I have never had Claude Code complete a meaningful ticket once. Even a small thing like fixing a small bug or updating the documentation on the site.

Would love any tips on how to make Claude Code not a complete waste of electricity.


If you don’t know how to divide a problem up given a toolset you won’t be able to solve it regardless of what those tools are. Maybe Cursor’s interface is more intuitive for you.


The problems I've given CC are incredibly simple and basic, things I knew how to fix immediately. I would tell it the file to change and how to change it. And it will get lost when the types are incorrect, or when it causes a test to fail. It will just delete the test.

I don’t doubt I could improve my prompts but I don’t have those same prompting problems with cursor.


Don't know what to say. Those problems are exactly why I left Cursor behind; I don't really encounter those issues with Claude Code (despite using only Anthropic models in either case).


> Cursor on the other hand, especially with GPT-5 today but typically with Sonnet 4.1

You probably mean Opus 4.1; there's no Sonnet 4.1 yet.


Yes that’s correct.


Better prompts?


> Better prompts?

I think you're right.

People getting really poor results probably don't recognize that their prompts aren't very good.

I think some users make assumptions about what the model can't do before they even try, so their prompts don't take advantage of all the capabilities the model provides.


I don't really have a problem prompting Cursor with the same models. But I have no doubt my prompts could be improved.


Opposite experience. I worked with Claude code a lot, then switched to Cursor and then tried to switch back and discovered that CC often gets stuck in loops. Cursor just works. It definitely helps that I can switch the foundational models in Cursor when it gets stuck.


CC just feeds the whole codebase and entire files into the model, no RAG, nothing in the way. It works substantially better because of that, but it's $expensive$.


That's not true. It uses CLI tools (e.g. find, grep) to find the relevant code from the codebase.
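A toy illustration of that approach: grep narrows the search to the files that mention a symbol, so only those get read into context rather than the whole codebase (the file names and the symbol below are invented):

```shell
# Two stand-in source files
mkdir -p demo-src
echo 'def handle_login(): pass' > demo-src/auth.py
echo 'def render_page(): pass' > demo-src/views.py

# List only the files containing the symbol; the agent then reads
# just those instead of feeding everything into the model
grep -rl "handle_login" demo-src   # prints demo-src/auth.py
```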


The more stuff you put in the context the worse models perform. All of them.

Larger context is a bonus sometimes, but in general you're degrading the quality of the output by a lot.

Precise prompting and context management is still very important.



