Hacker News

Right, I get that, and an LLM call by itself clearly is a black box. I just don't get why that's supposed to matter. It produces an artifact I can (and must) verify myself.

Because if the LLM is a black box and its output must ultimately be verified by humans, then you can't treat conversion of prompts into code as a simple build step as though an AI agent were just some sort of compiler. You still need to persist the actual code in source control.

(I assume that isn't what you're actually arguing against, in which case at least one of us must have misread something from the parent chain.)


Right, you definitely can't do that. People do talk as if the question were whether we could stick LLM calls into Makefiles. Nobody would ever do that, at least not with the technology we have at hand.
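To make the "LLM call in a Makefile" objection concrete: a build step is supposed to be a pure function of its inputs, and an LLM call isn't. Here's a toy sketch of that difference; nothing below calls an actual model, and `fake_llm` is just a hypothetical stand-in for a nondeterministic code generator:

```python
import random

def fake_llm(prompt: str) -> str:
    """Stand-in for an LLM call: the same prompt can yield
    different (even if equivalent-looking) outputs each run."""
    return random.choice([
        "def add(a, b): return a + b",
        "def add(x, y):\n    return x + y",
        "add = lambda a, b: a + b",
    ])

# A compiler is deterministic: same source in, same artifact out,
# so the artifact can be regenerated instead of stored.
# An LLM "build step" is not, so the generated code itself has to
# be reviewed and persisted in source control, not rebuilt on demand.
outputs = {fake_llm("write an add function") for _ in range(50)}
print(len(outputs))
```

The point of the sketch is only that the prompt alone doesn't pin down the artifact, which is why the prompt can't serve as the source of truth the way a `.c` file does.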

But that’s exactly what the part of the article quoted in the root comment (https://news.ycombinator.com/item?id=44206758) is about, and in reference to future AI technology. That’s what this subthread was discussing.

Yep, you're 100% right. Sorry!

Ever is a long time. Pessimistically, I expect the first products built this exact way to be working reliably and have happy customers within the next five years. Optimistically, it's probably happening somewhere as we speak.


