Hacker News

> We're in a bubble

Lemkin was doing an experiment and Tweeting it as he went.

Showcasing limitations of vibe coding was the point of the experiment. It was not a real company. The production database had synthetic data. He was under no illusions of being a technical person. That was the point of the experiment.

It’s sad that people are dog-piling Lemkin for actually putting effort into demonstrating the exact thing people are complaining about here: the limitations of AI coding.



> Showcasing limitations of vibe coding was the point of the experiment

No it wasn't. If you follow the threads, he went in fully believing in magical AI that you could talk to like a person.

At one point he was extremely frustrated and ready to give up. Even by day twelve he was writing things like "but Replit clearly knows X, and still does X".

He did learn some invaluable lessons, but it was never an educated "experiment in the limitations of AI".


I got a completely different impression from the Tweets.

He was clearly showing that LLMs could do a lot, but still had problems.


The fundamental lesson to be learned is that LLMs are not thinking machines but pattern vomiters.

Unfortunately, from his tweets, I have to agree with the grandparent poster that he didn’t learn this.


And yet tech at large is determined to call LLMs "artificial intelligence"


His "experiment" is literally filled with tweets like this:

--- start quote ---

Possibly worse, it hid and lied about it

It lied again in our unit tests, claiming they passed

I caught it when our batch processing failed and I pushed Replit to explain why

https://x.com/jasonlk/status/1946070323285385688

He knew

https://x.com/jasonlk/status/1946072038923530598

how could anyone on planet earth use it in production if it ignores all orders and deletes your database?

https://x.com/jasonlk/status/1946076292736221267

Ok so I'm >totally< fried from this...

But it's because destoying a production database just took it out of me.

My bond to Replie is now broken. It won't come back.

https://x.com/jasonlk/status/1946241186047676615

--- end quote ---

Does this sound like an educated experiment into the limits of LLMs to you? Or "this magical creature lied to me and I don't know what to do"?

To his credit he did eventually learn some valuable lessons: https://x.com/jasonlk/status/1947336187527471321 see 8/13, 9/13, 10/13


Steve Yegge just did the same thing [0]:

> I did give [an LLM agent] access to my Google Cloud production instances and systems. And it promptly wiped a production database password and locked my network.

He got it all fixed, but the takeaway is you can't YOLO everything:

> In this case, I should have asked it to write out a detailed plan for how it was going to solve the problem, then reviewed the plan and discussed it with the AI before giving it the keys.

That's true of any kind of production deployment.

[0] https://x.com/Steve_Yegge/status/1946360175339974807
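Yegge's fix generalizes into a simple pattern: have the agent write out its plan as discrete steps, then gate anything destructive behind explicit human approval before it runs. A minimal sketch of that idea (the function names and keyword list here are hypothetical illustrations, not any real agent framework's API):

```python
# Sketch of a "plan first, review, then execute" guard for agent actions.
# Assumption: the agent's plan arrives as a list of step strings.

DESTRUCTIVE_KEYWORDS = ("drop", "delete", "truncate", "wipe")

def requires_review(step: str) -> bool:
    """Flag plan steps that must not run without human sign-off."""
    lowered = step.lower()
    return any(word in lowered for word in DESTRUCTIVE_KEYWORDS)

def execute_plan(plan: list[str], approve) -> list[str]:
    """Walk the agent's written-out plan, pausing for approval on any
    destructive step; returns the steps that were actually executed."""
    executed = []
    for step in plan:
        if requires_review(step) and not approve(step):
            continue  # human rejected this step; skip it
        executed.append(step)
    return executed
```

In practice you'd also enforce the same boundary at the credential level, e.g. handing the agent a read-only database role, so that a step slipping past review still can't destroy production data.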


I mean, I think it's a decent demo of how this stuff is useless, though, even if that wasn't precisely his intention?




His “company” was a 12-day vibe coding experiment side project and the “customers” were fake profiles.

This dogpiling from people who very obviously didn’t read the article is depressing.

Testing and showing the limitations and risks of vibe coding was the point of the experiment. Giving it full control and seeing what happened was the point!


I don't think people are claiming he wasn't experimenting so much as that he was overly optimistic about the outcome. It seemed like he went in with the notion that AIs are somehow thinking machines. That's not an objective starting point; an unbiased researcher would go in without any expectations.


No one lost any real data in this specific case.

> In an episode of the "Twenty Minute VC" podcast published Thursday, he said that the AI made up entire user profiles. "No one in this database of 4,000 people existed," he said.


This was the preceding sentence:

> That wasn't the only issue. Lemkin said on X that Replit had been "covering up bugs and issues by creating fake data, fake reports, and worst of all, lying about our unit test."

And a couple of sentences before that:

> Replit then "destroyed all production data" with live records for "1,206 executives and 1,196+ companies" and acknowledged it did so against instructions.

So I believe what you shared is simply out of context. The LLM started putting fake records into the database to hide that it deleted everything.


> His actions led to a company losing their prod data.

did you even read the comment or the article you replied to?


Pretty stupid experiment if you ask me


an experiment to figure out the limitations and capabilities of a new tool is stupid?


It's not an experiment if you're using it in production and it has the capability of destroying production data. That's not experimenting, that's just using the tool without having tested it first.


the database was populated with fake data. The entire point of the experiment was to see how far you can get with vibe coding.



