
> I got fed up with the entire AI panel being an editable area, so sometimes I ended up clobbering it.

They fixed that with the new agent panel, which now works more like the other AI sidebars.

I was (mildly) annoyed by that too. The new UI still has rough edges but I like the change.




Interesting. I actually like the editable format of the chat interface because it lets me fix small stuff on the fly (instead of having to talk it over with the model) and de-clutter the chat after a few back-and-forths have made it a mess (instead of having to start anew). That keeps the context window smaller and less confusing to the model, especially for local ones. The editable form also makes more sense to me; it feels like a more natural and simpler way to interact with an LLM assistant.


Yes! Editing the whole buffer is a major feature, because the more failed attempts and trash you keep around, the dumber (and more expensive) the model gets.

If you're working on stuff like marketing websites that are well represented in the model's training data, things will just fly, but if you're building something more niche it can be super important to tune the context -- in some cases it's the difference between being able to use AI assistance at all and the failure rate just going to 100%.


> I actually like the editable format of the chat interface because it allows fixing small stuff on the fly

Fully agreed. This was the killer feature of Zed (and locally-hosted LLMs): delete everything after the first mistake you spot in the generated code, correct the mistake, and re-run the model. This greatly improved code generation in my experience. I am not sure if cloud-based LLMs even allow modifying assistant output (I would assume not, since it becomes a trivial way to bypass safety mechanisms).
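
Roughly what that looks like with a local model, for the curious. A minimal sketch, assuming a llama.cpp-style server on localhost that exposes a /completion endpoint (the URL, fields, and prompt contents are illustrative): you resend the conversation with the assistant's output truncated right after your hand-fix, and the model continues from there.

    # Truncate-and-continue against a local llama.cpp-style server
    # (assumed to be running at localhost:8080; adjust URL/fields for your setup).
    import requests

    # Conversation so far, with the assistant's reply cut off right after
    # the line that was corrected by hand; the model continues from here.
    prompt = """### User:
    Write a function that parses a semver string.

    ### Assistant:
    def parse_semver(version: str) -> tuple[int, int, int]:
        major, minor, patch = version.split(".")  # <- fixed by hand
    """

    resp = requests.post(
        "http://localhost:8080/completion",
        json={"prompt": prompt, "n_predict": 256},
    )
    print(resp.json()["content"])  # continuation picks up from the corrected code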


The only issue I would imagine is not being able to use prompt caching, which can increase the cost of API calls, but I am not sure prompt caching is even used in this kind of context in the first place. Otherwise you just send the "history" as JSON in the request; there is nothing mystical about LLM chats, really. If you use an API you can send whatever you want for the model to complete.


> I am not sure if cloud-based LLMs even allow modifying assistant output.

In general they do. For each request, you include the complete context as JSON, including previous assistant output. You can change that as you wish.
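
A minimal sketch of what that looks like with an OpenAI-style chat completions request (the model name and message contents are placeholders): the previous assistant turn is just another entry in the messages array, so you can edit it by hand before sending the follow-up.

    # Re-sending a chat where the previous assistant turn was edited by hand
    # before asking the next question.
    import os
    import requests

    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",  # placeholder model name
            "messages": [
                {"role": "user", "content": "Write a binary search in Python."},
                # Previous model output, with the off-by-one bug fixed by hand:
                {"role": "assistant", "content": "def bsearch(xs, x):\n    ..."},
                {"role": "user", "content": "Now add a docstring."},
            ],
        },
    )
    print(resp.json()["choices"][0]["message"]["content"])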


The old panel still exists; they call it a text thread.


Yeah, but text threads cannot be used as context in the inline assist right now, so there's no way to apply the code.


Ah, that's a bummer. You can still add threads as context, but you cannot use slash commands there, so the only way to add them or anything else to the context is to click buttons with the mouse. It would be nice if at least slash commands worked there.

edit: actually it is still possible to include text threads in there


You can add agent threads as context, but right now adding text threads as context doesn't work.


It actually seems to work for me. I have an active text thread and it was added automatically to my inline prompt in the file. There's a box at the bottom of the inline text box; I think I had to click it the first time to include the context, but on subsequent uses it was included by default.


What, that was a thing? I was copy-pasting manually, which was annoying and error-prone; that's why I like the new agent panel more.

Oops, I guess.


Yeah, it was great because you were in control of where and when the edits happened.

So you could manage the context with great care, then go over to the editor and select specific regions and then "pull in" the changes that were discussed.

I guess it was silly that I was always typing "use the new code" in every inline assist message. A hotkey to "pull new code" into a selected region would have been sweet.

I don't really want to "set it and forget it" and then come back to some mega diff that is like 30% wrong. Especially right now, when it keeps getting stuck and doing nothing for 30 minutes.


I’m on the opposite end: I hate the new panel. It’s less space-efficient, slash commands are gone, and I can’t figure out how to clear the chat.



