
I don't want to sound like a skeptic, but I see way more people talking about how awesome MCP is rather than people building cool things with it. Reminds me of blockchain hype.

MCP seems like an "in-between" step until the AI models get better. I imagine in 2 years, instead of using an MCP, we will point to the tool's documentation or OpenAPI, and the AI can ingest the whole context without the middle layer.



Regardless of how good a model gets, it can't do much if it doesn't have access to deterministic tools and information about the state of the world. And that's before you take into account security: you can't have a model running arbitrary requests against production, that's psychotic.

I don't have a high opinion of MCP, and the hype it's generating is ridiculous, but the problem it supposedly solves is real. If it can work as an excuse to have providers expose an API for their functionality, like the article hopes, that's exciting for developers.


> Regardless of how good a model gets

I don't think this is true.

My Claude Code can:

- open a browser, debug a ui, or navigate to any website

- write a script to interact with any type of accessible api

All without MCP.

Within a year I expect there to be legitimate "computer use" agents. I expect agent SDKs to take over from LLM APIs as the de facto abstraction for models, and MCP will have limited use isolated to certain platforms, with the caveat that an MCP-equipped agent performs worse than a native computer-use agent.


They are kind of the same thing...

These are just tools Anthropic provides for you. Just like the tools a non-Anthropic service provides through their MCP server.

A community-led effort of tool creation via MCP will surely be faster and more powerful than waiting for in-house implementations.


> open a browser, debug a ui, or navigate to any website

I mean, that’s just saying the same thing — at the end of the day, there are underlying deterministic systems that it uses


Yes, my response was poorly oriented toward the parent comment


It's very different to blockchain hype

I had similar skepticism initially, but I would recommend you dip a toe in the water before making a judgement

The conversational/voice AI tech now dropping + the current LLMs + MCP/tools/functions to mix in vendor APIs and private data/services etc. really feels like a new frontier

It's not 100%, but it's close enough for a lot of use cases now, and it's going to change a lot of the ways we build apps going forward


Probably my judgement is a bit fogged. But if I get asked about building AI into our apps just one more time I am absolutely going to drop my job and switch careers


That's likely because OG devs have been seeing the hallucination stuff, unpredictability, etc., and questioning how that fits with their carefully curated perfect system

What blocked me initially was watching NDA'd demos a year or two back from a couple of big software vendors on how agents were going to transform the enterprise ... what they were showing was a complete non-starter to anyone who had worked in a corporate environment, because of security, compliance, HR, silos, etc., so I dismissed it

This MCP stuff solves that: it gives you (the enterprise) control in your own walled garden while getting the gains from LLMs, voice, etc. ... the sum of the parts is massive

It more likely wraps existing apps than integrates directly with them, the legacy systems becoming data or function providers (I know you've heard that before ... but so far this feels different when you work with it)


There are 2 kinds of use cases that software automates: 1) those that require accuracy, and 2) those that don't (social media, ads, recommendations).

Further, there are 2 kinds of users that consume the output of software: a) humans, and b) machines.

Where LLMs shine is in the 2a use cases, i.e. use cases where accuracy does not matter and humans are the end users. There are plenty of these use cases.

The problem is that LLMs are being applied to the 1a and 1b use cases, where there is going to be a lot of frustration.



How does MCP solve any of the problems you mentioned? The LLM still has to access your data, still doesn't know the difference between instructions and data, and still gives you hallucinated nonsense back – unless there's some truly magical component to this protocol that I'm missing.


The information returned by the MCP server is what keeps it from hallucinating. That's one of the primary use cases.
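
Roughly: the client runs the tool and feeds the returned data back into the context before the model answers, so the answer is grounded in real data instead of the model's memory. A minimal sketch of that loop using the Anthropic Python SDK's generic tool-use flow (which is approximately the plumbing an MCP client does for you); the tool name, schema, and lookup function here are made up for illustration:

    # Sketch only: the tool and its backing lookup are hypothetical.
    import anthropic

    client = anthropic.Anthropic()

    def lookup_order(order_id: str) -> str:
        # Stand-in for a real MCP tool backed by a database or API.
        return f"Order {order_id}: shipped 2 days ago, arriving Friday"

    tools = [{
        "name": "get_order_status",
        "description": "Look up the current status of an order by ID.",
        "input_schema": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    }]

    messages = [{"role": "user", "content": "What's the status of order 1234?"}]
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # example model alias
        max_tokens=1024, tools=tools, messages=messages,
    )

    # If the model asked for the tool, run it and hand the real data back,
    # then let the model answer from that data instead of from memory.
    for block in response.content:
        if block.type == "tool_use":
            messages.append({"role": "assistant", "content": response.content})
            messages.append({"role": "user", "content": [{
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": lookup_order(block.input["order_id"]),
            }]})
            final = client.messages.create(
                model="claude-3-5-sonnet-latest",
                max_tokens=1024, tools=tools, messages=messages,
            )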


> That's likely because OG devs have been seeing the hallucination stuff, unpredictability, etc., and questioning how that fits with their carefully curated perfect system

That is the odd part. I am far from being part of that group of people. I'm only 25; I joined the industry in 2018 as part of a training program in a large enterprise.

The odd part is, many of the promises feel like déjà vu even for me. "Agents are going to transform the enterprise" and other promises do not seem that far off the promises that were made during the low-code hype cycle.

Cynically, the more I look at the AI projects as an outsider, the more I think AI could fail in enterprises largely because of the same reason low code did. Organizations are made of people and people are messy, as a result the data is often equally messy.


Rule of thumb: the companies building the models are not selling hype. Or at least the hype is mostly justified. Everyone else, treat with extreme skepticism.


Is there anything new that’s come out in conversational/voice? Sesame's Maya and Miles were kind of impressive demos, but that’s still in ’research preview’. Kyutai presented a really cool low-latency open model, but I feel like we’re still closer to Siri than actually usable voice interfaces.


It's moving very fast:

https://elevenlabs.io/

https://layercode.com/ (https://x.com/uselayercode has demos)

Have you used the live mode on the Gemini App (or stream on AI Studio)?


I had a use case - I wanted to know what the congresspeople from my state have done this week. This information is surprisingly hard to just get from the news. I learned about MCP a few months ago and thought that it might be a cool way to interact with the congress.gov API.

I made this MCP server so that you could chat with real-time data coming from the API - https://github.com/AshwinSundar/congress_gov_mcp. I’ve actually started using it more to find out, well, what the US Congress is actually up to!
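
For anyone curious what a server like that boils down to, here is a stripped-down sketch using the official Python MCP SDK (FastMCP). This is not the linked repo; the endpoint path and parameters are my reading of the public congress.gov API, so treat them as assumptions:

    # Minimal sketch: exposes one congress.gov lookup as an MCP tool.
    import os
    import httpx
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("congress")

    @mcp.tool()
    def recent_bills(limit: int = 10) -> str:
        """Return recently updated bills from the congress.gov API as raw JSON."""
        resp = httpx.get(
            "https://api.congress.gov/v3/bill",  # endpoint/params assumed from the public API docs
            params={"api_key": os.environ["CONGRESS_API_KEY"], "limit": limit, "format": "json"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.text

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default, so a desktop client can launch it locally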


But this whole post is about using MCP sans AI


MCP without AI is just APIs.

MCP is already a useless layer between AIs and APIs, using it when you don't even have GenAI is simply idiotic.

The only redeeming quality of MCP is actually that it has pushed software vendors to expose APIs to users, but just use those directly...


And that’s the whole point - it’s APIs we did not have. Now app developers are encouraged to have a public, user friendly, fully functional API made for individual use, instead of locking them behind enterprise contracts and crippling usage limits.


Do you have an example of a company that previously had an undiscoverable API now offering an MCP-based alternative?


I do have one: Atlassian now allows connecting their MCP server (Jira et al) for personal use with a simple OAuth redirect, where before you needed to request API keys via your org, which is something no admin would approve unless you were working specifically on internal tooling/integrations.

Another way to phrase it is that MCP normalizes individual users having access to APIs via their clients, vs the usual act of connecting two backend apps where the BE owns a service key.


Right, but we would have had them even if MCP did not exist. The need to access those APIs via LLM-based "agents" would have existed without MCP.

At work I built an LLM-based system that invokes tools. We started before MCP existed and just used APIs (and continue to do so).

MCP's engineering value is nil; it only has marketing value (at best).


As https://www.stainless.com/blog/mcp-is-eating-the-world--and-... recaps: tool calling existed before MCP, some vague standards existed, and nothing took off. No, really, normal users don't want to just download the OpenAPI spec.

Anthropic wants to define another standard now btw https://www.anthropic.com/engineering/desktop-extensions


Normal users don't know what MCP is and will never use an MCP server (knowingly or unknowingly) in their life. They use ChatGPT through the web UI or the mobile app, that's it.

MCP is for technical users.

(Maybe read the link you sent, it has nothing to do with defining a new standard)


Normal users will increasingly use MCP servers without even knowing they do so - it will be their apps. And having e.g. your music player or your email client light up in the ChatGPT app as something that you can tell it to automate is not just for technical users.


> it’s APIs we did not have

Isn't that what we had about 20 years ago (web 2.0) until they locked it all up (the APIs and feeds) again? ref: this video posted 18 years ago: https://www.youtube.com/watch?v=6gmP4nk0EOE

(Rewatching it in 2025, the part about "teaching the Machine" has a different connotation now.)

Maybe it's that the protocol is more universal than before, and they're opening things up more due to the current trends (AI/LLM vs web 2.0 i.e. creating site mashups for users)? If it follows the same trend then after a while it will become enshittified as well.


MCPs don't change that at all lol


I can't believe there isn't a universal "API/firewall" by now. You know, a middle program that can convert any input API to any output API, with middleware features like logging, firewalling, and stateful denial and control.

Once cryptocurrency was a thing, this absolutely needed to exist to protect your accounts from being depleted by a hack (e.g. via a monthly-limit firewall).

Now we need a universal MCP <-> API layer to allow both programmatic and LLM access to the same thing. (Because apparently these AGI precursors aren't smart enough to be trained on generic API calling and need yet another standard: MCP?)
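
Nothing universal exists that I know of, but the shape of the thing is simple enough: a dumb proxy in front of the real API that logs everything and refuses to forward traffic past a budget. A toy sketch with Flask and httpx; the upstream URL, limit, and in-memory counter are all made up for illustration:

    # Toy sketch of the "universal API firewall" idea: log every call,
    # stop forwarding once a monthly budget is exhausted.
    import logging
    import httpx
    from flask import Flask, request, Response

    UPSTREAM = "https://api.example.com"   # hypothetical API being protected
    MONTHLY_CALL_LIMIT = 1000

    app = Flask(__name__)
    logging.basicConfig(level=logging.INFO)
    calls_this_month = 0                   # a real version would persist and reset this

    @app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
    def proxy(path):
        global calls_this_month
        if calls_this_month >= MONTHLY_CALL_LIMIT:
            return Response("monthly limit reached", status=429)
        calls_this_month += 1
        logging.info("forwarding %s /%s", request.method, path)
        upstream = httpx.request(
            request.method,
            f"{UPSTREAM}/{path}",
            params=request.query_string.decode(),
            content=request.get_data(),
            headers={k: v for k, v in request.headers if k.lower() != "host"},
        )
        return Response(upstream.content, status=upstream.status_code)

    if __name__ == "__main__":
        app.run(port=8080)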


> we will point to the tool's documentation or OpenAPI

You can already do this, as long as your client has access to an HTTP MCP.

You can give the current generation of models an OpenAPI spec and it will know exactly what to do with it.


You don't even need MCP for that, just access to a hosted Swagger file.


That's what I mean. Give an LLM the Swagger file, and it can make those calls itself, given the ability to make an HTTP request (which is what the MCP is for)
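
Concretely, that setup is just the spec pasted into the prompt plus one generic HTTP tool the model can call. A rough sketch of the tool side; the spec URL and the tool schema here are hypothetical, and the tool-calling loop itself is omitted:

    # Sketch: one generic HTTP tool; the spec goes into the system prompt verbatim.
    import requests

    OPENAPI_SPEC = requests.get("https://api.example.com/openapi.json").text  # hypothetical spec URL

    HTTP_TOOL = {
        "name": "http_request",
        "description": "Call the API described by the OpenAPI spec in the system prompt.",
        "input_schema": {
            "type": "object",
            "properties": {
                "method": {"type": "string", "enum": ["GET", "POST", "PUT", "DELETE"]},
                "url": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["method", "url"],
        },
    }

    def run_http_tool(args: dict) -> str:
        # Execute whatever request the model asked for and return the raw response.
        resp = requests.request(args["method"], args["url"], data=args.get("body"))
        return f"{resp.status_code}\n{resp.text}"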


> MCP seems like an "in-between" step until the AI models get better. I imagine in 2 years, instead of using an MCP, we will point to the tool's documentation or OpenAPI, and the AI can ingest the whole context without the middle layer.

I doubt the middleware will disappear; it's needed to accommodate the evolving architecture of LLMs.


I wasn't able to find a good source on it, but I've read a couple of times that Anthropic (the builders of MCP) does astroturfing/shilling/growth hacking/SEO/organic advertising. Everything I've read so far about MCP and Claude, and the hype I see on social media, is consistent with that: hype and no value.


This is false.


My colleagues and I are building cool stuff with it. I see many examples of truly useful things being built today.


> I imagine in 2 years, instead of using an MCP, we will point to the tool's documentation or OpenAPI, and the AI can ingest the whole context without the middle layer.

How would ingesting Ableton Live's documentation help Claude create tunes in it, for instance?


It's incredible for investigating audit logs. Our customers use it daily.

https://blog.runreveal.com/introducing-runreveal-remote-mcp-...


I could see that happening... perhaps instead of plugging in the URL of the MCP server you'd like to use, you'd just put in the URL of their online documentation and trust your AI assistant of choice to go through all of it.



