I don't want to sound like a skeptic, but I see way more people talking about how awesome MCP is rather than people building cool things with it. Reminds me of blockchain hype.
MCP seems like a more "in-between" step until the AI models get better. I imagine in 2 years, instead of using an MCP, we will point to the tool's documentation or OpenAPI, and the AI can ingest the whole context without the middle layer.
Regardless of how good a model gets, it can't do much if it doesn't have access to deterministic tools and information about the state of the world. And that's before you take into account security: you can't have a model running arbitrary requests against production, that's psychotic.
I don't have a high opinion of MCP, and the hype it's generating is ridiculous, but the problem it supposedly solves is real. If it can work as an excuse to get providers to expose an API for their functionality, like the article hopes, that's exciting for developers.
- open a browser, debug a UI, or navigate to any website
- write a script to interact with any type of accessible API
All without MCP.
Within a year I expect there to be legitimate "computer use" agents. I expect agent SDKs to take over from LLM APIs as the de facto abstractions for models, and MCP will have limited use isolated to certain platforms - with the caveat that an MCP-equipped agent performs worse than a native computer-use agent.
I had similar skepticism initially, but I would recommend you dip a toe in the water before passing judgement.
The conversational/voice AI tech now dropping + the current LLMs + MCP/tools/functions to mix in vendor APIs and private data/services etc. really feels like a new frontier
It's not 100%, but it's close enough for a lot of use cases now, and it's going to change a lot of the ways we build apps going forward
My judgement is probably a bit clouded. But if I get asked about building AI into our apps just one more time, I am absolutely going to quit my job and switch careers
That's likely because OG devs have been seeing the hallucination stuff, unpredictability etc. and questioning how that fits with their carefully curated perfect system
What blocked me initially was watching NDA'd demos a year or two back from a couple of big software vendors on how agents were going to transform the enterprise ... what they were showing was a complete non-starter to anyone who had worked in a corporate, because of security, compliance, HR, silos etc., so I dismissed it
This MCP stuff solves that: it gives you (the enterprise) control in your own walled garden, whilst getting the gains from LLMs, voice etc. ... the sum of the parts is massive
It more likely wraps existing apps than integrates directly with them, the legacy systems becoming data or function providers (I know you've heard that before ... but so far this feels different when you work with it)
How does MCP solve any of the problems you mentioned? The LLM still has to access your data, still doesn't know the difference between instructions and data, and still gives you hallucinated nonsense back – unless there's some truly magical component to this protocol that I'm missing.
> That's likely because OG devs have been seeing the hallucination stuff, unpredictability etc. and questioning how that fits with their carefully curated perfect system
That is the odd part. I am far from being part of that group of people. I'm only 25; I joined the industry in 2018 as part of a training program at a large enterprise.
The odd part is, many of the promises are a bit déjà vu even for me. "Agents are going to transform the enterprise" and other promises do not seem that far off the promises that were made during the low-code hype cycle.
Cynically, the more I look at AI projects as an outsider, the more I think AI could fail in enterprises for largely the same reason low code did. Organizations are made of people, and people are messy; as a result, the data is often equally messy.
Rule of thumb: the companies building the models are not selling hype. Or at least the hype is mostly justified. Everyone else, treat with extreme skepticism.
Is there anything new that's come out in conversational/voice? Sesame's Maya and Miles were kind of impressive demos, but those are still in 'research preview'. Kyutai presented a really cool low-latency open model, but I feel like we're still closer to Siri than to actually usable voice interfaces.
I had a use case - I wanted to know what the congresspeople from my state have done this week. This information is surprisingly hard to just get from the news. I learned about MCP a few months ago and thought that it might be a cool way to interact with the congress.gov API.
I made this MCP server so that you could chat with real-time data coming from the API - https://github.com/AshwinSundar/congress_gov_mcp. I’ve actually started using it more to find out, well, what the US Congress is actually up to!
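For anyone wondering what a server like that amounts to: it's mostly a thin wrapper that exposes a few congress.gov calls as tools. Here's a minimal sketch of the shape of it, assuming the official Python MCP SDK (FastMCP) and httpx; the tool name, endpoint and parameters are illustrative rather than what the repo actually ships:

```python
# Minimal sketch of an MCP server wrapping the congress.gov API, assuming the
# official `mcp` Python SDK (FastMCP) and httpx. The tool, endpoint and
# parameters are illustrative -- see the linked repo for the real implementation.
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("congress-gov")

@mcp.tool()
async def recent_bills(congress: int = 118, limit: int = 10) -> str:
    """Return recently updated bills from congress.gov as raw JSON text."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            f"https://api.congress.gov/v3/bill/{congress}",
            params={
                "api_key": os.environ["CONGRESS_GOV_API_KEY"],  # personal API key
                "limit": limit,
                "format": "json",
            },
        )
        resp.raise_for_status()
        return resp.text

if __name__ == "__main__":
    mcp.run()  # stdio transport; the MCP client (e.g. Claude Desktop) launches this process
```

The client launches the script over stdio, and the model decides during a chat when a question calls for `recent_bills`.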
And that’s the whole point - it’s APIs we did not have. Now app developers are encouraged to have a public, user friendly, fully functional API made for individual use, instead of locking them behind enterprise contracts and crippling usage limits.
I do have one: Atlassian now allows connecting their MCP server (Jira et al) for personal use with a simple OAuth redirect, where before you needed to request API keys via your org, which is something no admin would approve unless you were working specifically on internal tooling/integrations.
Another way to phrase it is that MCP normalizes individual users having access to APIs via their clients, vs the usual act of connecting two backend apps where the BE owns a service key.
Normal users don't know what MCP is and will never use an MCP server (knowingly or unknowingly) in their life. They use ChatGPT through the web UI or the mobile app, that's it.
MCP is for technical users.
(Maybe read the link you sent, it has nothing to do with defining a new standard)
Normal users will increasingly use MCP servers without even knowing they do so - it will be their apps. And having e.g. your music player or your email client light up in the ChatGPT app as something that you can tell it to automate is not just for technical users.
Isn't that what we had about 20 years ago (web 2.0) until they locked it all up (the APIs and feeds) again? ref: this video posted 18 years ago: https://www.youtube.com/watch?v=6gmP4nk0EOE
(Rewatching it in 2025, the part about "teaching the Machine" has a different connotation now.)
Maybe it's that the protocol is more universal than before, and they're opening things up more due to the current trends (AI/LLM vs web 2.0 i.e. creating site mashups for users)? If it follows the same trend then after a while it will become enshittified as well.
I can't believe there isn't a universal "API firewall" by now. You know, a middle program that can convert any input API to any output API, with middleware features like logging, firewalling, and stateful denial and control.
Once cryptocurrency was a thing, this absolutely needed to exist to protect your accounts from being depleted by a hack (e.g. via a monthly-limits firewall).
Now we need a universal MCP <-> API bridge to allow both programmatic and LLM access to the same thing (because apparently these AGI precursors aren't smart enough to be trained on generic API calling and need yet another standard: MCP?).
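For what it's worth, the non-MCP half of that wish is easy enough to sketch: a single chokepoint that every caller (a script or an LLM/MCP client) goes through, which adds an audit log and a stateful call budget before forwarding the real HTTP request. This is just an illustration of the idea; all names and limits are made up:

```python
# Toy illustration of the "universal API firewall" idea: one chokepoint that any
# caller goes through, adding an audit log and a stateful call budget before the
# real HTTP request is made. Names and limits are made up.
import logging
import time

import httpx

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("api-firewall")

class ApiFirewall:
    def __init__(self, base_url: str, max_calls_per_hour: int = 100):
        self.base_url = base_url.rstrip("/")
        self.max_calls_per_hour = max_calls_per_hour
        self._calls: list[float] = []  # timestamps of forwarded calls

    def _spend_budget(self) -> None:
        now = time.time()
        self._calls = [t for t in self._calls if now - t < 3600]
        if len(self._calls) >= self.max_calls_per_hour:
            raise RuntimeError("hourly call budget exhausted")  # stateful denial
        self._calls.append(now)

    def request(self, method: str, path: str, **kwargs) -> httpx.Response:
        self._spend_budget()
        log.info("%s %s%s", method, self.base_url, path)  # audit log
        resp = httpx.request(method, f"{self.base_url}{path}", **kwargs)
        resp.raise_for_status()
        return resp

# fw = ApiFirewall("https://api.example.com", max_calls_per_hour=50)
# fw.request("GET", "/v1/accounts")  # denied once the budget runs out
```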
That's what I mean. Give an LLM the swagger file and it can make those calls itself, given the ability to make an HTTP request (which is what the MCP server is for).
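In that reading, the only server you'd ever need is a generic one: a tool that hands the model the swagger/OpenAPI document and a tool that performs whatever request the model composes from it. A rough sketch, again assuming the official Python MCP SDK, with placeholder URLs and no auth or guardrails:

```python
# Sketch of the "just give it the swagger file" approach: a generic MCP server with
# one tool that returns an OpenAPI spec and one that performs whatever HTTP request
# the model composes from it. URLs are placeholders; no auth or guardrails included.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("generic-http")

SPEC_URL = "https://api.example.com/openapi.json"  # placeholder

@mcp.tool()
async def get_api_spec() -> str:
    """Fetch the OpenAPI/Swagger document so the model can read the available endpoints."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(SPEC_URL)
        resp.raise_for_status()
        return resp.text

@mcp.tool()
async def http_request(method: str, url: str, body: str | None = None) -> str:
    """Make the HTTP call the model decided on and return the status plus raw body."""
    async with httpx.AsyncClient() as client:
        resp = await client.request(method, url, content=body)
        return f"{resp.status_code}\n{resp.text}"

if __name__ == "__main__":
    mcp.run()
```

Whether you'd actually want to hand a model an unrestricted `http_request` tool is exactly the security objection raised upthread about arbitrary requests against production.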
> MCP seems like a more "in-between" step until the AI models get better. I imagine in 2 years, instead of using an MCP, we will point to the tool's documentation or OpenAPI, and the AI can ingest the whole context without the middle layer.
I doubt the middleware will disappear; it's needed to accommodate the evolving architecture of LLMs.
I wasn't able to find a good source on it, but I've read a couple of times that Anthropic (the creators of MCP) do astroturfing/shilling/growth hacking/SEO/organic advertisement. Everything I've read so far about MCP and Claude, and the hype I see on social media, is consistent with that: hype and no value.
> I imagine in 2 years, instead of using an MCP, we will point to the tool's documentation or OpenAPI, and the AI can ingest the whole context without the middle layer.
how would ingesting Ableton Live's documentation help Claude create tunes in it for instance?
I could see that happening... perhaps instead of plugging in the URL of the MCP server you'd like to use, you'd just put in the URL of their online documentation and trust your AI assistant of choice to go through all of it.