The security issues are probably orthogonal to the way most people install and use these MCPs, but the article mentions Cloudflare's "code-mode" running in V8 isolate sandboxes and calling pre-authed RPC bindings, so no API keys or open-slather internet access required, see: https://blog.cloudflare.com/code-mode/#running-code-in-a-san...
This is at least interesting, possibly even novel.
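Roughly what that looks like, as I understand it (a hedged sketch only; the binding names and method shapes below are made up for illustration, not Cloudflare's actual API): the LLM emits TypeScript, the harness runs it in a V8 isolate, and the only capabilities it has are pre-authorized RPC bindings on env, so the generated code never touches API keys or the open internet, and chained calls stay inside the sandbox rather than round-tripping through the context window.

    // Illustrative Worker-style module. Env bindings here are hypothetical
    // stand-ins for pre-authed RPC bindings exposed to the sandboxed code.
    interface Env {
      GITHUB: {
        listIssues(args: { repo: string; state: string }): Promise<{ number: number; title: string }[]>;
      };
      SLACK: {
        postMessage(args: { channel: string; text: string }): Promise<void>;
      };
    }

    export default {
      async fetch(_req: Request, env: Env): Promise<Response> {
        // Chain two "tools" inside the isolate; only the final summary
        // needs to flow back into the model's context.
        const issues = await env.GITHUB.listIssues({ repo: "acme/app", state: "open" });
        const summary = issues.slice(0, 5).map(i => `#${i.number} ${i.title}`).join("\n");
        await env.SLACK.postMessage({ channel: "#triage", text: summary });
        return new Response(summary);
      },
    };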
"Wasting context window to understand and re-implement each integration is why MCPs exist" does seem to be exactly the point. Pointing the LLM at a swagger/openAPI spec and expecting it to write a good integration might work, but gets old after the first time. Swagger docs focus on what mostly, LLMs work better knowing why as well.
And why not just use a locally installed CLI rather than an MCP? For a start, you need to have one, and the valuable use cases (chained calls) matter more than atomic tool calls.
There is more behind the motivation for MCP and LLM "tool calling" generally. That motivation seems less and less relevant now, but back when reasoning and chain-of-thought were newly being implemented, and people were warier of yolo modes, the ability for an LLM harness to decide to call a tool and gather just-in-time context was a game changer: a dynamic alternative to RAG. MCP might not be the optimal solution long term. For example, the way Claude has been trained to use the gh CLI, and to work with git generally, is much more helpful than either having to set up a git MCP or making it feel its way around git --help and man pages to, as you said, "re-implement each integration" from scratch every time.
"Wasting context window to understand and re-implement each integration is why MCPs exist" does seem to be exactly the point. Pointing the LLM at a swagger/openAPI spec and expecting it to write a good integration might work, but gets old after the first time. Swagger docs focus on what mostly, LLMs work better knowing why as well.
And,why not just use a locally installed cli rather than an MCP? You need to have one for a start, and use-cases (chained calls) are more valuable that atomic tool calls.
There is more behind the motivation for MCP and "tool calling" ability generally with LLMs. This motivation seems less and less relevant, but back when reasoning and chain-of-thought were newly being implemented, and people were more wary of yolo modes, the ability for an LLM harness to decide to call a tool and gather JIT context was a game changer. A dynamic RAG alternative. MCP might not be the optimal solution long term. For example, the way that claude has been trained to use the gh cli, and to work with git generally is much more helpful than either having to set up a git MCP or getting it to feel its way around git --help or man pages to, as you said "re-implement each integration" from scratch every time.