
I got a completely different impression from the Tweets.

He was clearly showing that LLMs could do a lot, but still had problems.



The fundamental lesson to be learned is that LLMs are not thinking machines but pattern vomiters.

Unfortunately, from his tweets I have to agree with the grandparent poster that he didn't learn this.


And yet tech at large is determined to call LLMs "artificial intelligence".


His "experiment" is literally filled with tweets like this:

--- start quote ---

Possibly worse, it hid and lied about it

It lied again in our unit tests, claiming they passed

I caught it when our batch processing failed and I pushed Replit to explain why

https://x.com/jasonlk/status/1946070323285385688

He knew

https://x.com/jasonlk/status/1946072038923530598

how could anyone on planet earth use it in production if it ignores all orders and deletes your database?

https://x.com/jasonlk/status/1946076292736221267

Ok so I'm >totally< fried from this...

But it's because destroying a production database just took it out of me.

My bond to Replit is now broken. It won't come back.

https://x.com/jasonlk/status/1946241186047676615

--- end quote ---

Does this sound like an educated experiment into the limits of LLMs to you? Or "this magical creature lied to me and I don't know what to do"?

To his credit he did eventually learn some valuable lessons: https://x.com/jasonlk/status/1947336187527471321 see 8/13, 9/13, 10/13



