Hacker News

Hey folks! I just posted a quick tutorial explaining how LLM agents (like OpenAI Agents, Pydantic AI, Manus AI, AutoGPT, or Perplexity AI) are basically small graphs with loops and branches. For example:

OpenAI Agents: the workflow logic: https://github.com/openai/openai-agents-python/blob/48ff99bb...

Pydantic AI: organizes steps in a graph: https://github.com/pydantic/pydantic-ai/blob/4c0f384a0626299...

Langchain: demonstrates the loop structure: https://github.com/langchain-ai/langchain/blob/4d1d726e61ed5...

If all the hype has been confusing, this guide shows how they actually work under the hood, with simple examples. Check it out!

https://zacharyhuang.substack.com/p/llm-agent-internal-as-a-...
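To make the "small graph with loops and branches" idea concrete, here's a minimal, framework-free sketch. None of the names (Node functions, `decide`, `act`, `finish`) come from the frameworks above; they're just invented for illustration:

```python
# Sketch: an agent as a graph. Nodes are steps; each node returns
# the name of the next node, which gives you branches and loops.

def decide(state):
    # A real agent would call an LLM here; we fake the "reasoning":
    # keep acting until two tool calls have happened, then stop.
    return "act" if state["steps"] < 2 else "finish"

def act(state):
    # A real agent would call a tool here.
    state["steps"] += 1
    return "decide"  # edge back to decide: this is the loop

def finish(state):
    state["done"] = True
    return None  # no outgoing edge: terminal node

NODES = {"decide": decide, "act": act, "finish": finish}

def run(start="decide"):
    state = {"steps": 0, "done": False}
    node = start
    while node is not None:  # walk the graph until a terminal node
        node = NODES[node](state)
    return state

print(run())  # {'steps': 2, 'done': True}
```

Swap the fake `decide` for an LLM call and `act` for real tool calls, and you have the same shape the linked frameworks implement with more machinery.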



Minor comment: do you mean "LLM Agents Are Simply Graphs"? Personally, I'd drop the adjective to "LLM Agents Are Graphs" as I think it sounds better, but the plural is needed.


Oh, that’s embarrassing ... pardon my poor English, and thanks so much for pointing that out!


Simple mistake and easy to fix :)


Thanks for this write-up. It'll inspire my Ruby framework.


Thank you!


This explanation and demo are super clear.

It would be interesting to dig deeper into the "thinking" part: how does an LLM know what it doesn't know / how to fight hallucinations in this context?


I like the minimalistic approach! How do you test such agents?


Thank you - a really interesting-looking read. Thanks for crafting the deep explanation, with links to actual internal code examples. Also, thanks for not putting it behind the Medium paywall.


Thank you!!



