Keep chats <30 minutes, ideally 20-minute continuous segments.
Use a `notes/TODO.md` file to maintain a checklist of objectives between chats; you can have Claude update it (a sample is sketched below).
Commit to version control often, for code you supervised that _does_ look good. Squash later.
This glitch often begins to happen around the time you'd see "Start a new chat for better results - New chat" in the bottom right.
If you don't supervise, you will get snagged, and if you miss it and continue, it'll keep writing code under the assumption the deletion was fine, potentially losing the very coverage you'd hoped to gain.
If it does happen, try to scroll up in the chat to the point just before it happened and use "Restore checkpoint".
claude-3.7-sonnet-thinking, Cursor 1.96.2
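As an illustration, the checklist doesn't need to be anything fancier than a hand-edited markdown file; the filename is from the tip above, but the tasks here are made up:

```markdown
# notes/TODO.md — running objectives carried between chats (hypothetical example)

- [x] Parse the JSON export
- [x] Unit tests for the parser
- [ ] HTTP client for the target API, with retries
- [ ] Integration test against a recorded response
- [ ] Squash WIP commits before review
```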
A small note: 1.96.2 is the VSCode version; the latest Cursor version, I think, is 0.46.x.
I'll also say that "Restore checkpoint" often causes crashes or inconsistency in the indexed files. I've found using git and explicit full reindexing has solved more problems than the AI itself.
...or you can tell the LLM "write me a go application that adds links from this JSON dump of wallabag.it to raindrop.io" and it's done in 10 minutes.
(It did use the wrong API for checking whether a link already exists; that was an additional 5 minutes.)
I've been doing this shit for a LONG time and it'd take me way longer than 10 minutes to dig through the API docs and write the boilerplate required to poke the API with something relevant.
No, you can't have it solve the Florbargh Problem, but for 100% unoriginal boilerplate API glue it's a fantastic time saver (a rough sketch of that kind of glue is below).
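For concreteness, here's roughly the shape of that glue in Go: a minimal sketch that reads a wallabag JSON export and POSTs each link to raindrop.io. The export field names, the endpoint, and the payload keys are assumptions about those services' public APIs, not the commenter's actual program, so check the real docs (including the duplicate-check endpoint mentioned above) before running anything like this against your account.

```go
// wallabag2raindrop: read a wallabag JSON export and create a bookmark in
// raindrop.io for each entry. Field names and the API shape below are
// assumptions -- verify against your actual export and the raindrop.io
// REST docs before relying on this.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

// Entry models just the fields we care about from the wallabag export
// (assumed names; real exports may differ).
type Entry struct {
	URL   string `json:"url"`
	Title string `json:"title"`
}

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s wallabag-export.json", os.Args[0])
	}
	token := os.Getenv("RAINDROP_TOKEN") // raindrop.io API token

	raw, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	var entries []Entry
	if err := json.Unmarshal(raw, &entries); err != nil {
		log.Fatal(err)
	}

	for _, e := range entries {
		// Assumed endpoint: POST /rest/v1/raindrop creates a single bookmark.
		body, _ := json.Marshal(map[string]string{
			"link":  e.URL,
			"title": e.Title,
		})
		req, err := http.NewRequest(http.MethodPost,
			"https://api.raindrop.io/rest/v1/raindrop", bytes.NewReader(body))
		if err != nil {
			log.Fatal(err)
		}
		req.Header.Set("Authorization", "Bearer "+token)
		req.Header.Set("Content-Type", "application/json")

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatal(err)
		}
		resp.Body.Close()
		fmt.Printf("%s -> %s\n", e.URL, resp.Status)
	}
}
```

Rate limiting, retries, and the existing-link check are left out; the point is just how little original logic this kind of task needs.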
Then I write it myself or tell it to correct itself. They tend to be confidently incorrect in many cases, but especially with the online ones you can tell them that "this bit is bullshit" and they'll try again with a different lib.
Works for common stuff, not so much for highly specialised things. An LLM can't know something it hasn't been "taught".