gyre007's comments | Hacker News

From my experience the Zed agent often just goes and edits your files without you asking it to. Even if you ask questions about the codebase, it assumes you want it to be changed. For it to be useful it must get better at understanding prompts; I would also like it to keep generating diffs like it does, but prompt me first to ask whether I want to apply them.


likewise


Please add vim leader support to vim mode! :)


Not having the leader really annoys me, but I've found myself using Zed more and more recently regardless. I think their LLM integration is just right for me, unlike the neovim plugins I've tried. It's really annoying because I've been using vim for well over a decade so I'd prefer to stick with it, but Zed is really reaching a level I'm starting to like.


I can't think of a director in the new generation of movie directors who is as original as most of the pieces made by DL. RIP, maestro!


The most ironic thing is that the middle managers are somehow surviving this. So far, anyway, but I think they'll be found out too sooner or later.


This is a pretty cynical take, but I would think that having AI management would be highly undesirable for companies, and not because it would be bad at managing.

Even in good, reputable companies, there is a certain amount of legally/ethically dubious behavior that is nonetheless desirable.

An H1B candidate for a position has been found, but it must be demonstrated that there is no local candidate for that position. Every local candidate must fail the interview, whether or not that is fair.

You have a small team. You've hired someone good at their job, but over lunch, they've mentioned they plan to have 10 children, so they will be on parental and FMLA leave for 3+ months a year indefinitely. You need to find a problem with this person's performance.

You have a team of developers. One of them has done a great job this past year, but the project they are working on and their specialization is no longer needed. It would not be fair to them to give them a middling performance review, but it's in the company's interest that the limited compensation budget goes towards retaining someone with skills aligned to the future direction.

An AI would have any unethical or illegal prompting exposed for any court to examine. Likewise, there would be little reason not to maintain a complete record of everything the management AI is told or does. One could design an AI that leadership talks to off the record, which then manifests its instructions in its state and could later lie about (or be unable to prove) what its instructions were. That would then be similar to a human manager.

But I don't think any court would accept such an off the record lying AI. So an AI probably can't keep any secrets, can't lie for the company's benefit in depositions or court, and can't take the fall for leadership.


You know… all the things you mention are actually bad. I want them to stop, for the sake of our society. If the price for that is getting rid of human managers with a broken moral compass such as yours, I’m all for it.


Here's the thing. You assert confidently that GP is acting on a "broken moral compass". But you can also make the case that it is moral to act in the interest of the company: after all, if the company fails, a potentially large number of people are at risk of losing their household income (and, in broken economic systems, also stuff like health insurance).


That's just the slippery slope of neoliberalism. The ends do not justify the means, no matter how you spin them: a company will not fail if you continue to employ parents of many children, employ a local candidate, or write fair performance reviews regardless of strategic goals. If any of these cases appears to lead to the downfall of a corporation, there were much bigger problems looming that actually caused the failure in the first place.

A company is literally a group of people working towards the same goal. The people are just as important as the goal itself, it's not a company otherwise.


Why are you switching between corporations and companies as if they're the same?

I actually do know of a small company that was quite badly screwed over by a vindictive employee who hated her boss, deliberately did not quit because she knew she was about to have another child, got pregnant and then disappeared for a year. Local law makes her unfireable almost regardless of reason (including not actually doing any work), and then gives her three months maternity leave too. So she basically just didn't work for a year. She said specifically she did that to get back at her boss, she didn't care about the company or its employees at all.

For a company of that size something like that can put it in serious financial jeopardy, as they're now responsible for paying a year's salary for someone who isn't there. Also they can't hire a replacement because the law also guarantees the job is still there after maternity leave ends.

> If any of these cases appears to lead to the downfall of a corporation, there were much bigger problems looming that actually caused the failing in the first place.

This kind of thinking has caused ruin throughout history. Companies - regardless of size - aren't actually piñatas you can treat like an unlimited cash machine. Every such law pushes a few more small companies over the edge every year, and then not only does everyone depending on that company lose, but it never gets the chance to grow into a big corporation at all.


Where did this happen? Typically the government covers some or all of the parental leave costs where it is mandated, and while a company can't fire her they are allowed to hire someone to do the job in the meantime with the money they would have paid her. It's obviously not ideal but it's hard to imagine it is screwing the company over all THAT badly.


In Finland parental leave is not fully covered by the government. So you get to pay both the original worker and their temporary replacement.


It's okay for unprofitable companies to fail. Desirable, in fact.


No, it's desirable for them to become profitable and successful again, especially if the only reason they're unprofitable is people abusing the rules to extract capital from them unsustainably.


> especially if the only reason they're unprofitable is people abusing the rules to extract capital from them unsustainably

Employees don't extract capital from companies, especially unsustainably.

Executives and Boards of Directors do though


Sure they do. Unions, abuse of other worker rights laws and voting in socialist parties that raise corporate tax rates to unsustainable levels are all exactly that, and have a long history of extracting so much the companies or even entire economies fail. Argentina is an extreme example of this over the past 100 years but obviously there are many others.


You don't think AIs can be trained to lie? Odd, given that a major research area right now is preventing AI from lying. They do it so confidently now that nobody can tell.


I don't think that an AI would be interrogated in court.

I think that it would be hard to hide all the inputs and outputs of the AI from scrutiny by the court and then have the company or senior leadership be held accountable for them.

Even if you had a retention policy for the inputs and outputs, the AI would be made available to the plaintiff and the company would be asked to provide inputs that produce the observed actions of the AI. If they can't do that without telling the AI to do illegal things, it would probably result in a negative finding.

----

Having thought a bit more, I think the model that we'd actually see in practice at first is that the AI assists management with certain tasks, and the tasks themselves are not morally charged.

So the manager might ask the AI to produce performance reviews for all employees, basing them on observable performance metrics, and additionally for each employee, come up with a rationale for both promoting them and dismissing them.

The morally dubious choices are then performed by a human, who reviews the AI output and keeps or discards it as the situation requires.


They're probably the only ones it makes sense to keep on. You have a couple of grunts code reviewing the equivalent of 10 devs' worth of work from AI, and a manager to keep them going.


If they're replacing all of their staff with AI, why do they need so many middle managers to manage staff that no longer exist at the company?

It's often suggested that AI 'will replace middle managers', though it seems more likely that middle managers would simply be made redundant, given the lack of people left to 'manage'.


Because they have a lower say-do ratio than the employees below them. There's a sign or exponent error somewhere in the reward system of modern societies.


That's true, and something which I hadn't considered...


It's an open protocol; where did you get the idea that it would only work with Claude? You can implement it for whatever you want - I'm sure langchain folks are already working on something to accommodate it


Once it's fully adopted by at least 3 other companies I'll consider it a standard, and I would consider using it, yes, if it solved a problem I have, which it does not.

Lots of companies open source some of their internal code, then say it's "officially a protocol now" that anyone can use, and then no one else ever uses it.

If they have new "tools" that's great however, but only as long as they can be used in LangChain independent of any "new protocol".


Something is telling me this _might_ turn out to be a huge deal; I can't quite put a finger on what it is that makes me feel that way, but opening up private data and tools to AI apps via an open protocol just feels like a game changer.


It's just function calling with a new name and a big push from the LLM provider, but this time it's in the right direction. Contrast with OpenAI's "GPTs", which are just function calling by another name, but pushed in the wrong direction - towards creating a "marketplace" controlled by OpenAI.

I'd say that thing you're feeling comes from witnessing an LLM vendor, for the first time in history, actually being serious about function calling and actually wanting people to use it.


But either way, the interface is just providing a JSON schema of functions along with your chat completion request, plus a server with the ability to parse and execute the response. I'm not really seeing where a new layer of abstraction helps here (much less a new "protocol", as though we need a new transport layer?).
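
Concretely, the whole loop today is roughly this (a rough sketch assuming the OpenAI-style chat completions API; get_weather is a made-up example tool):

    # Rough sketch of plain function calling, assuming the OpenAI-style
    # chat completions API; get_weather is a made-up example tool.
    import json
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
        tools=tools,
    )

    # Your side then parses and executes whatever the model asked for.
    for call in response.choices[0].message.tool_calls or []:
        if call.function.name == "get_weather":
            args = json.loads(call.function.arguments)
            print(f"would call get_weather({args['city']!r})")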

It smells like the thinking is that you (the developer) can grab from a collection of very broad data connectors, and the agent will be able to figure out what to do with them without much custom logic in between. Maybe I’m missing something


> It smells like the thinking is that you (the developer) can grab from a collection of very broad data connectors, and the agent will be able to figure out what to do with them without much custom logic in between.

This has always been the idea behind tools/function calling in LLMs.

What MCP tries to solve is the NxM problem - every LLM vendor has their own slightly different protocols for specifying and calling tools, and every tool supplier has to handle at least one of them, likely with custom code. MCP aims to eliminate custom logic at the protocol level.
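
To make the NxM point concrete, the duplication today looks roughly like this (a hypothetical sketch; search_tickets and both adapters are made up for illustration):

    # Hypothetical sketch of the NxM problem: one tool definition,
    # hand-translated into each vendor's slightly different schema.
    generic_tool = {
        "name": "search_tickets",  # made-up example tool
        "description": "Search the ticket tracker",
        "input_schema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }

    def to_anthropic(tool: dict) -> dict:
        # Anthropic's Messages API takes the tool more or less in this shape.
        return {
            "name": tool["name"],
            "description": tool["description"],
            "input_schema": tool["input_schema"],
        }

    def to_openai(tool: dict) -> dict:
        # OpenAI wraps the same information in a "function" envelope and
        # calls the schema "parameters" instead of "input_schema".
        return {
            "type": "function",
            "function": {
                "name": tool["name"],
                "description": tool["description"],
                "parameters": tool["input_schema"],
            },
        }

    # Every tool supplier ends up writing adapters like these for every
    # vendor they support; a shared protocol means publishing the tool once.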


LLMs can potentially query _something_ and receive a concise, high-signal response to facilitate communications with the endpoint, similar to API documentation for us but more programmatic.

This is huge, as long as there's a single standard and other LLM providers don't try to release their own protocol. Which, historically speaking, is definitely going to happen.


> This is huge, as long as there's a single standard and other LLM providers don't try to release their own protocol

Yes, very much this; I'm mildly worried because the competition in this space is huge and there is no shortage of money and crazy people who could go against this.


They will go against this. I don’t want to be that guy, but this moment in time is literally the opening scene of a movie where everyone agrees to work together in the bandit group.

But, it’s a bandit group.


Not necessarily. There’s huge demand to simplify the integration process between frontier models and consumers. If specs like this wind up saving companies weeks or months of developer time, then the MCP-compatible models are going to win over the more complex alternatives. This unlocks value for the community, and therefore the AI companies


One of the biggest issues with LLMs is that they have a lossy memory. Say there is a function from_json that accepts 4 arguments. An LLM might predict that it accepts 3 arguments and thus produce non-functional code. However, if you add the docs for the function to the prompt, the LLM will write correct code.

With the LLM being able to tap into up-to-date context (like an LSP), you won't need that back-and-forth dance. This will massively improve code generation.
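
One way to picture it: give the model a small "look up the docs" tool it can call before writing code, so it checks from_json's real signature instead of guessing (a hypothetical sketch, not tied to any particular SDK):

    # Hypothetical sketch: a tool that returns the real signature and
    # docstring of a symbol, so the model stops guessing argument counts.
    import importlib
    import inspect

    def describe_symbol(module_name: str, symbol: str) -> str:
        """Return the signature and docstring for module_name.symbol."""
        obj = getattr(importlib.import_module(module_name), symbol)
        signature = str(inspect.signature(obj))
        doc = inspect.getdoc(obj) or "(no docstring)"
        return f"{symbol}{signature}\n{doc}"

    # e.g. describe_symbol("json", "loads") returns the actual parameter
    # list, which the model can rely on instead of its lossy memory.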


This is definitely a huge deal - as long as there's a good developer experience - which IMHO isn't there yet!


Any feedback on developer experience is always welcome (preferably in GitHub discussion/issue form). It's the first day in the open. We have a long, long way to go and much ground to cover.


This breaks so bad for me on my phone :-(


Who needs self-driving cars when we can have rat-driving cars?


Who needs rat-driving cars, when you have Snakes on a Plane:

https://en.wikipedia.org/wiki/Snakes_on_a_Plane

