Code Mode: the better way to use MCP (cloudflare.com)
21 points by janpio 33 days ago | hide | past | favorite | 13 comments


> Why, then, do MCP interfaces have to "dumb it down"? Writing code and calling tools are almost the same thing, but it seems like LLMs can do one much better than the other?

> The answer is simple: LLMs have seen a lot of code. They have not seen a lot of "tool calls". In fact, the tool calls they have seen are probably limited to a contrived training set constructed by the LLM's own developers, in order to try to train it. Whereas they have seen real-world code from millions of open source projects.
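To make the contrast concrete, here is a hedged sketch (the tool name and API shape are invented for illustration, not taken from the article) of the two styles the quote is comparing: an opaque JSON tool call versus ordinary code against a typed interface.

```typescript
// Style 1: an MCP-style tool call — the model must emit a JSON blob
// matching a schema it has rarely seen in training data.
const toolCall = {
  name: "get_weather",
  arguments: { city: "Berlin", units: "celsius" },
};

// Style 2: "code mode" — the model writes ordinary code against a
// typed API, a pattern it has seen in millions of open-source projects.
interface WeatherApi {
  getWeather(city: string, units: "celsius" | "fahrenheit"): Promise<number>;
}

async function report(api: WeatherApi): Promise<string> {
  const temp = await api.getWeather("Berlin", "celsius");
  return temp > 25 ? "hot" : "mild";
}
```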

I am curious: Is this a generally agreed upon fact or an assumption/conjecture?


There has actually been formal research on the idea (predating MCP, ironically): https://machinelearning.apple.com/research/codeact

We actually didn't know about the research when we started working on this but it seems to match our findings.


Thanks, pretty solid basis then.


I'm curious how user elicitations are handled here. If a tool raises an elicitation request, how does the request travel back to the user?


If this is the case, do you really need MCP? Does this not work with FastAPI?


I remember generating and executing pymongo-based Python code in May 2024 when Llama 3 came out. It was for searching MongoDB through a natural-language interface, before MCP came into existence. It carried the risk of NL injection, just like SQL injection, but since it was for a closed user base, that was acceptable.

Your approach seems similar. It certainly avoids the complications of chained tool calls in the MCP paradigm.


It makes some sense that composing an API call would be easier for an LLM than inferring a tool call. It also makes the behavior easier to observe.


Yup, we've been using this approach in our product to make composing different integrations easier for the LLM and to give it the flexibility of code. The main difference is that we use QuickJS instead of V8 isolates. Seeing a TS interface instead of an ugly JSON schema and simply writing code is far simpler for the LLM.
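As a rough illustration of that last point (the tool name and fields here are hypothetical, not from any real product), compare the same tool surface expressed as a JSON Schema, which is what MCP transmits, versus a TypeScript interface:

```typescript
// JSON Schema version — verbose and indirect:
const searchToolSchema = {
  name: "search_issues",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string" },
      limit: { type: "number" },
      open: { type: "boolean" },
    },
    required: ["query"],
  },
};

// TypeScript version — the same contract, far more compact,
// and in a notation the model has seen constantly in training:
interface Issue {
  id: number;
  title: string;
}

interface SearchIssues {
  (query: string, opts?: { limit?: number; open?: boolean }): Promise<Issue[]>;
}
```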


smolagents provides something very similar: https://huggingface.co/blog/llchahn/ai-agents-output-schema


You stole my idea! Seriously, I think it works so well because TypeScript is very clear in, well, type definitions. The LLM can't get lost as quickly between function calls because it understands exactly what data structures go in and out.


Then you need a bigger, more expensive, and worse-performing LLM to build an agentic AI.


It's laughable to not realize that the solution is to simply remove the absolutely useless layer that is MCP and call APIs straight away.


That's not what they're doing. They're wrapping existing APIs with another layer that simplifies access to them. So instead of making tool calls, the LLM writes code that does effectively the same thing as the tool call, through another wrapper layer in TypeScript.
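A minimal sketch of that wrapper idea (all names here are hypothetical, and the transport is stubbed out): each MCP tool is exposed to the model as a plain typed function, and the generated code calls those functions instead of emitting tool-call JSON.

```typescript
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

// Stand-in for the real MCP transport — in practice this would
// forward the call over the MCP connection.
const tools: Record<string, ToolHandler> = {
  "github.listIssues": async (args) => [
    { id: 1, title: `results for ${args.repo}` },
  ],
};

// The wrapper layer: turn a tool name into an ordinary async function.
function bindTool<T>(name: string): (args: Record<string, unknown>) => Promise<T> {
  return (args) => tools[name](args) as Promise<T>;
}

// What the LLM-generated code then looks like — just code:
const listIssues = bindTool<{ id: number; title: string }[]>("github.listIssues");

async function firstIssueTitle(repo: string): Promise<string> {
  const issues = await listIssues({ repo });
  return issues[0].title;
}
```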



