Easy. The article asks us to believe.

There's a handy list to check against the article here: https://dmitriid.com/everything-around-llms-is-still-magical... starting at "For every description of how LLMs work or don't work we know only some, but not all of the following"



It seems to me like we have the answers to all those questions.

- Do we know which projects people work on?

It's pretty easy to discover that OP works on https://livox.com.br/en/, a tool that uses AI to let people with disabilities speak. That sounds like a reasonable project to me.

- Do we know which codebases (greenfield, mature, proprietary etc.) people work on

The e2e tests took 2 hours to run and the website quotes ~40M words. That is not greenfield.

- Do we know the level of expertise the people have?

It seems like they work on nontrivial production apps.

- How much additional work did they have reviewing, fixing, deploying, finishing etc.?

The article says very little.


> The article says very little.

And that's the crux, isn't it? Because that checklist really is just the tip of the iceberg.

Some people have completely opposite experiences: https://news.ycombinator.com/item?id=45152139

Others question the validity of the approach entirely: https://news.ycombinator.com/item?id=45152668

Oh, don't get me wrong: I like the idea. I would trust LLMs with this idea about as far as I could throw them.

