
Tried Zed and Cursor, but they always felt too magical to me. I ended up building a minimal agent framework that only uses seven tools (even for code edits): read, write, diff, browse, command, ask, and think.

These simple, composable tools can be used well enough by increasingly powerful LLMs, especially Gemini 2.5 Pro, to accomplish most tasks in a consistent, understandable way.

More importantly, I can just switch off the 'ask' tool to let the agent go full turbo mode without frequent manual confirmations.

I just released it yesterday; have a look at https://github.com/aperoc/toolkami for the implementation if it looks useful to you!
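For a rough idea of the shape, the core is basically a tool registry plus a dispatch function. The sketch below uses made-up names and skips diff/browse; see the repo for the real implementation:

```python
# Sketch only, with hypothetical names; not the actual toolkami code.
# diff and browse follow the same pattern and are omitted for brevity.
import subprocess
from pathlib import Path

def read(path: str) -> str:
    """Return the contents of a file."""
    return Path(path).read_text()

def write(path: str, content: str) -> str:
    """Overwrite a file with new content."""
    Path(path).write_text(content)
    return f"wrote {len(content)} chars to {path}"

def command(cmd: str) -> str:
    """Run a shell command and return combined stdout/stderr."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

def ask(question: str) -> str:
    """Pause for a human answer; drop this tool for full turbo mode."""
    return input(f"[agent] {question} > ")

def think(thought: str) -> str:
    """Scratchpad with no side effects, so the model can reason out loud."""
    return "noted"

TOOLS = {f.__name__: f for f in (read, write, command, ask, think)}

def dispatch(name: str, **kwargs) -> str:
    """Route a model-issued tool call to the matching function."""
    return TOOLS[name](**kwargs)
```

The loop around it just feeds each tool result back to the model until it stops asking for tools.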



Counterpoint: Zed wins me over because the LLM calls don't feel like magic; I maintain control over API calls, unlike Cursor, which seems to have a mind of its own and depletes my API quota unexpectedly. Plus, Zed matches Sublime's performance, unlike Cursor's laggy Electron VS Code foundation.


Sounds similar to gptel[1] for Emacs. It provides a solid foundation for more complex compositions like gptel-aibo[2] or mcp.el [3].

Yours is the full agent, though... Nice.

[1] https://github.com/karthink/gptel

[2] https://github.com/dolmens/gptel-aibo

[3] https://github.com/lizqwerscott/mcp.el


For sure! I'm surprised myself how far one can get with just seven tools: read, write, diff, browse, command, ask, and think.

It's like Lisp's original seven operators: quote, atom, eq, car, cdr, cons, and cond.

And I still can't stop smiling just watching the agent go full turbo mode when I disable the `ask` tool.


It's amazing. Just like you keep repeating full turbo, I hope we all go full turbo, all the time! Who needs thoughtful care in these things anyway? That's for another day! Let's goooo


> I ended up building a minimal agent framework that only uses seven tools

You can choose which tools are used in Zed by creating a new "tools profile" or editing an existing one (you can also add new tools via the MCP protocol).


Zed and Cursor are very different; I wouldn’t put them in the same bucket myself. I’ve been using the Zed AI assistant panel for a while (manual control over the context window by including files and diagnostics) — will try the new agentic panel soon.


Unfortunately the new agent panel completely nerfs the old workflow. I also love the old version (now called "Text Threads") for its transparency.

Even though they brought back text threads, they are no longer included (or include-able!) as context in the inline assist. That means you can no longer select code, hit ctrl+enter, and type "use the new code" or whatever.

I wish there were a way to just disable the agent panel entirely. I'm so uninterested in magical shit like Cursor (though Claude Code is tasteful IMO).


Actually, I just checked, and an active text thread is added to the inline prompt context (you may need to click the box at the bottom of the inline prompt to include it, but then it is added by default for the next one). So it looks fine to me (and it is nicer that it is more explicit this way).

There is also the "+" button to add files, threads, etc., though it would be nice if that could also be done through slash commands.


Are you sure it is the right thread? On mine it shows the title of the last agent thread even though I'm in a text thread.


Yes, and it followed the instructions in my text thread.

I opened a previous agent thread and it gave me the option to include both threads in the context of the inline prompt (the old text thread was included and I had to click to exclude it; the new thread was grayed out and I had to click to include it).


Thanks a lot for trying it and reporting back. I'll have to see if my version is out of date or something.

edit: yup, they fixed it 2 days ago


You can still include Text Threads as context in the inline assist prompt with @thread "name of thread", or using the `+` button. And it should suggest the active text thread for you, so it's one click. Let us know if that isn't working; we wanted to preserve the old workflow (very explicit context curation) for people who enjoyed the previous assistant panel.


Thank you Max for preserving my workflow (and for replying on GH)!

It looks like I was 2 days out of date, and updating fixed it for me.


I would love a vim plugin for this. Many LLM vim plugins started off beautifully minimal, but became too agentic in their chase of Cursor.


Maybe once all of this is a bit more mature we can just get down to the minimal subset of features that are really important.

I’d love an nvim plugin that is more or less just a split chat window that makes it easy to paste code I’ve yanked (like yank-to-chat), add my commentary, and maybe easily attach other files for context. That’s it, really.


I can highly recommend gp.nvim; it has a few features, but by default it's just a chat window with a yank-to-chat function. It also supports a context file that gets pasted into every chat automatically (for telling the AI about the tools you use, etc.).


Last time I used it, Avante was pretty much nailing what you are describing.

https://github.com/yetone/avante.nvim



That is the dream! I'd love for someone to create a vim plugin for this; if not, I'll do it myself if there is enough demand.


How do you run it in VSC using MCP?


Ideally, just start the server, which is SSE-based and supported by any MCP client out of the box.

Then connect to it using this line: `client = MCPClient(server_url=server_url)` (https://github.com/aperoc/toolkami/blob/e49d3797e6122fb54ddd...)
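If you'd rather talk to it with the reference MCP Python SDK directly, something like this should work (a sketch; the localhost URL and /sse path are assumptions about how you run the server):

```python
# Sketch using the reference MCP Python SDK (pip install mcp).
# The URL and /sse path are assumptions about how the server is run.
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def main() -> None:
    async with sse_client("http://localhost:8000/sse") as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("available tools:", [t.name for t in tools.tools])

asyncio.run(main())
```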

Happy to help further if you run into issues.


What’s SSE? I guess this isn’t about vectorization.


Server-Sent Events.

MCP clients and servers can support both SSE and stdio transports.
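For example, with the reference Python SDK the stdio transport looks like this (a sketch; "python server.py" is a placeholder for whatever launches your server):

```python
# Sketch of a stdio MCP client using the reference Python SDK.
# The command/args are placeholders for however the server is started.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            print(await session.list_tools())

asyncio.run(main())
```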


I'm doing something similar: https://github.com/kristopolous/llmehelp

The goal is composable semantic routing: seamless traversal between different tools through things like saved outputs and conversational partials.

Routing similar to PipeWire, conversation chains similar to git, and URI-addressable conversations similar to XPath.

This is being built from the application down to ensure usability, design sanity, and functionality.


What is "think"?



Ah interesting! Is it something that OpenAI is also thinking about? Or is it something that reasoning models are already doing anyway?



