Hacker News

In my experience, it is much better at tool-calling, which is huge when we're talking about agentic coding. It also seems to do a better job of keeping things clean and not going off on tangents for anything that isn't accomplished in one shot.


I have had the exact opposite experience. For me, Claude Code gets stuck in loops of doing the wrong thing in any meaningful codebase. Then when that doesn't work, it deletes files and makes its own that don't have the problem it's encountering.

Cursor on the other hand, especially with GPT-5 today but typically with Sonnet 4.1, has been a workhorse at my company for months. I have never had Claude Code complete a meaningful ticket once. Even a small thing like fixing a small bug or updating the documentation on the site.

Would love any tips on how to make Claude Code not a complete waste of electricity.


If you don’t know how to divide a problem up given a toolset you won’t be able to solve it regardless of what those tools are. Maybe Cursor’s interface is more intuitive for you.


The problems I’ve given CC are things that are incredibly simple and basic. Things I knew how to fix immediately. I would tell it the file to change and how to change it. And it will get lost when the types are incorrect, or when it causes a test to fail. It will just delete the test.

I don’t doubt I could improve my prompts but I don’t have those same prompting problems with cursor.


Don’t know what to say. Those problems are exactly why I left Cursor behind—I don’t really encounter those issues with Claude Code (despite only using Anthropic models in either case).


> Cursor on the other hand, especially with GPT-5 today but typically with Sonnet 4.1

You probably mean Opus 4.1; there's no Sonnet 4.1 yet.


Yes that’s correct.


Better prompts?


> Better prompts?

I think you're right.

People getting really poor results probably don't recognize that their prompts aren't very good.

I think some users make assumptions about what the model can't do before they even try, so their prompts don't take advantage of all the capabilities the model provides.


I don’t really have a problem prompting Cursor with the same models, but I have no doubt my prompts could be improved.


Opposite experience. I worked with Claude Code a lot, then switched to Cursor, and when I tried to switch back I discovered that CC often gets stuck in loops. Cursor just works. It definitely helps that I can switch the foundation models in Cursor when it gets stuck.

