Hey folks! I just posted a quick tutorial explaining how LLM agents (like OpenAI Agents, Pydantic AI, Manus AI, AutoGPT or PerplexityAI) are basically small graphs with loops and branches. For example:
OpenAI Agents (the workflow logic): https://github.com/openai/openai-agents-python/blob/48ff99bb...
Pydantic AI (organizes steps as a graph): https://github.com/pydantic/pydantic-ai/blob/4c0f384a0626299...
LangChain (demonstrates the loop structure): https://github.com/langchain-ai/langchain/blob/4d1d726e61ed5...
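To make the "graph with loops and branches" idea concrete, here's a minimal, self-contained sketch (my own toy code, not taken from any of the libraries above): nodes are plain functions, edges are the node names they return, and the agent loop is just a while loop walking the graph. The call_llm stub and the search/answer nodes are hypothetical placeholders.

```python
# A toy agent-as-a-graph: nodes are functions, edges are the node
# names they return, and the "agent loop" is a cycle in the graph.
# call_llm is a stub standing in for a real model call.

def call_llm(prompt: str) -> str:
    # Placeholder so the sketch runs; swap in a real API call.
    return "answer"

def decide(state: dict) -> str:
    # Branch node: the LLM picks which edge to follow next.
    choice = call_llm(f"Context: {state['context']}\nNext step, search or answer?")
    return "search" if "search" in choice else "answer"

def search(state: dict) -> str:
    # Tool node: results are appended to the context, then we loop back.
    state["context"] += "\n[tool output would go here]"
    return "decide"

def answer(state: dict) -> str | None:
    # Terminal node: produce the final answer and stop.
    state["result"] = call_llm(f"Answer using: {state['context']}")
    return None

GRAPH = {"decide": decide, "search": search, "answer": answer}

def run_agent(question: str) -> str:
    state = {"context": question, "result": None}
    node = "decide"
    while node is not None:        # the loop
        node = GRAPH[node](state)  # follow an edge to the next node
    return state["result"]

print(run_agent("What is an LLM agent?"))
```

That's really all there is to it: the frameworks linked above mostly add typing, tool schemas, and error handling around this same loop.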
If all the hype has been confusing, this guide shows how they actually work under the hood, with simple examples. Check it out!
https://zacharyhuang.substack.com/p/llm-agent-internal-as-a-...

Minor comment: do you mean "LLM Agents Are Simply Graphs"? Personally, I'd drop "Simply" and go with "LLM Agents Are Graphs", as I think it sounds better, but the plural is needed.

It would be interesting to dig deeper into the "thinking" part: how does an LLM know what it doesn't know, and how do you fight hallucinations in this context?

Thank you, this looks like a really interesting read. Thanks for crafting the deep explanation with links to actual internal code examples, and for not putting it behind the Medium paywall.