0x500x79's comments

I imagine Claude helped write this one.

DAU/MAU for IPO.

This is a common problem and you can find reports of it all over X, including from some influencers. Even outside of VSCode.

Never seen it on iTerm2.

This "outside of VSCode", was it still with a webview-based terminal?


$1,000.00 of credits per day?? $200,000 per year? Those are bonkers numbers for someone not performing at a high level (on top of their salary). Do you know what they are doing?

Yup. The way he works: every task he is issued in a sprint, he just fires through Opus in parallel, hoping to get a hit on Claude magically solving the ticket, with the agents constantly iterating on them. He doesn't even try to have proper plans created first.

Often, tickets get fleshed out or requirements change. He just throws everything out and reshoves it into Claude.

I weep for the planet.


They should just be on the $200 a month Max plan.

Agreed: maintainability, security, standards, all of these are important to follow, and there are usually reasons these things exist.

I also see AI coding tools violate "Chesterton's Fence" (and the inverse of Chesterton's Fence, not sure what that is called: the idea that code must be necessary, otherwise it shouldn't be in the source).


I think there are two other things missing: Security and Maintainability. Working code that can never be touched again by a developer, or that requires an excessive amount of time to maintain, is not part of a developer's job either.

Overall, this hits the nail on the head about not delivering broken code and providing automated tests. Thanks for putting your thoughts on paper.


I am currently going through this with someone in our organization.

Unfortunately, this person is vibe coding completely, and even the PR process is painful:

* The coding agent reverts previously applied feedback

* The coding agent doesn't follow standards throughout the code base

* The coding agent re-invents solutions that already exist

* PR feedback is responded to with agent output

* 50k line PRs that required a 10-20 line change

* Lack of testing (there are some automated tests, but their validations are slim/lacking)

* Bad error handling/flow handling


> 50k line PRs that required a 10-20 line change

This is hilarious. Not when you're the reviewer, of course, but as a bystander, this is expert-level enterprise-grade trolling.


Fire them?


I believe it is getting close to this. Things like this just take time though, and when this person talks to management/leadership, they talk about how much they are producing and how everyone is blocking their work. So it becomes challenging political maneuvering, depending on the ability of certain leadership to see through the BS.

(By my organization, I meant my company - this person doesn't report to me or in my tree).


This is not really an option for your standard IC.


Just reject the PR?


deps.dev has a similar BigQuery dataset covering a couple more languages, if someone wanted to do this analysis across the other ecosystems they support.
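
If it helps, here's a rough sketch of the kind of query I mean, using the BigQuery Python client. The dataset and table names (bigquery-public-data.deps_dev_v1, PackageVersionsLatest) are from memory, so double-check them against the deps.dev docs before relying on this:

    # Rough sketch: count latest package versions per ecosystem in the
    # deps.dev public dataset. Dataset/table/column names are assumptions;
    # verify against the deps.dev BigQuery documentation.
    # Requires: pip install google-cloud-bigquery (plus GCP credentials).
    from google.cloud import bigquery

    client = bigquery.Client()

    # "System" is the ecosystem column (e.g. NPM, CARGO, PYPI) in this dataset.
    query = """
        SELECT System, COUNT(*) AS package_versions
        FROM `bigquery-public-data.deps_dev_v1.PackageVersionsLatest`
        GROUP BY System
        ORDER BY package_versions DESC
    """

    for row in client.query(query).result():
        print(f"{row.System}: {row.package_versions}")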


I think it's a bit of planned obsolescence as well. The 1080 Ti has been a monster with its 11GB of VRAM up until this generation. A lot of enthusiasts basically call out that Nvidia won't make that mistake again, since it led to longer upgrade cycles.


I had a PM at my company (with an engineering background) post AI-generated slop in a ticket this week. It was very frustrating.

We asked them: "Where is the xyz code?" It didn't exist; it was a hallucination. We asked them: "Did you validate the abc use cases?" No, they did not.

So we had a PM push a narrative to executives that this feature was simple and that he could do it with AI-generated code, when it didn't solve even 5% of the use cases that would need to be solved in order to ship this feature.

This is the state of things right now: all talk, few results, and other non-technical people being fed the same bullshit from multiple angles.


> I had a PM at my company (with an engineering background) post AI-generated slop in a ticket this week. It was very frustrating.

This is likely because LLMs solve for creating the document that "best" matches the prompt, via statistical consensus over their training dataset.

> We asked them: "Where is the xyz code?" It didn't exist; it was a hallucination. We asked them: "Did you validate the abc use cases?" No, they did not.

So many people mistake the certainty implicit in commercial LLM responses for correctness, largely due to how people typically interpret similar content from actual people when the writer's position supports the reader's own. It's a confluence of Argument from authority[0] and Subjective validation[1].

0 - https://en.wikipedia.org/wiki/Argument_from_authority

1 - https://en.wikipedia.org/wiki/Subjective_validation


I’ve recently had a couple of people try to help me fix code issues by handing me the results of their AI prompting. 100% slop; it made absolutely no sense in the context of the problem.

I figured the issue out the old-fashioned way, but it was a little annoying that I had to waste extra time deciphering the hallucinations, and then explaining why they were hallucinations.

