Because ghuntly doesn't have to outrun the bear, just outrun the rest of us.
Meaning, if ghuntly can provide more value to an employer than another employee who doesn't know this trick, it's the other employee who gets laid off, not ghuntly. By sharing the trick, though, ghuntly now also has to outperform every other employee who has it too.
I’ve been following this. My workflow doesn’t use Cursor (VS Code descendants just aren’t my preference), but I’ve built your advice into my homemade system using Emacs and gptel. I keep a super-detailed style guide for each language and project, and now I’ve been building the stdlib you recommended. It’s great, thanks for writing this!
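Roughly, the wiring is something like this. A minimal sketch of the idea, not my exact config: gptel keeps its system prompts in the `gptel-directives` alist, and a small helper loads the project's style guide into it (the `STYLEGUIDE.md` filename and the `my/` prefix are placeholders for whatever you use):

```elisp
;; Minimal sketch: load a per-project style guide and register it as a
;; gptel directive.  STYLEGUIDE.md and the my/ prefix are placeholders.
(require 'gptel)
(require 'project)

(defun my/gptel-load-project-style-guide ()
  "Register the current project's style guide as a gptel directive."
  (let* ((proj  (project-current))
         (root  (and proj (project-root proj)))
         (guide (and root (expand-file-name "STYLEGUIDE.md" root))))
    (when (and guide (file-readable-p guide))
      ;; `gptel-directives' is an alist of (SYMBOL . SYSTEM-PROMPT);
      ;; the project-style entry can then be selected from gptel's menu.
      (setf (alist-get 'project-style gptel-directives)
            (with-temp-buffer
              (insert-file-contents guide)
              (buffer-string))))))

(add-hook 'gptel-mode-hook #'my/gptel-load-project-style-guide)
```

The stdlib rules slot in the same way: each one is just another chunk of text appended to the system prompt before a request goes out.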
Hi, I appreciate you sharing this. I've started applying this advice with a different tool. Just FYI, this sentence kind of came out of nowhere, and it wasn't clear what you meant:
> The foundational LLM models right now are what I'd estimate to be at circa 45% accuracy and require frequent steering
Do your rules count as frequent steering and lead to increased 'accuracy', or is that the 'accuracy' you're seeing with your current workflow, rules and all?