
>shitting out mediocre code as fast as possible.

This really is what businesses want and always have wanted. In my career, long before AI, I saw countless broken systems spitting out wrong info that businesses actively used. They literally did not want them fixed when I brought it up; in pretty much all cases, dealing with the errors had become part of the process. I don't even try anymore unless I'm specifically brought on to fix a legacy system.

>that I would be willing to sign my name to.

This right here is what mgmt thinks is the big "problem" that AI solves. They have always wanted us to magically know what parts are "good enough" and what parts can slide, but for us to bear the burden of blame. The real problem is the same as always: bad specs. AI won't solve that, but in their eyes it removes a layer of their poor communication. Obviously no SWE is going to build a system that spits out wrong info and just say "hire people to always double-check the work," or add checking it to so-and-so's job duties, but that really is the solution most places arrive at by lack of decision.

Perhaps there is some sort of failure of SWEs to understand that businesses don't care. Accounting will catch the expensive errors anyway. Then execs will bullwhip middle managers and it will go away.


The adversarial tension was all that ever made any of it work.

The "Perfectionist Engineer" without a "Pragmatic Executive" to press them into delivering something good enough would of course still be in their workshop, tinkering away, when the market had already closed.

But the "Pragmatic Executive" without the "Perfectionist Engineer" around to temper their naive optimism would just as soon find themselves chased from the market for selling gilded junk.

You're right that there do seem to be some execs, in the naive optimism that defines them, eager to see if this technology finally lets them bring their vision to market without the engineer to balance them.

We'll see how it goes, I guess.


That's a nice, balanced, wholesome take; the only problem is that the "Pragmatic Executive" is more like a career-driven, frenzied, "ship it today at all costs" psychopath executive.

You are describing a push-and-pull, tug-of-war, balanced relationship. In reality it is essentially never balanced. The engineer has 1% of the say; the other 99% goes to the executive.

I so wish your take were universally applicable. In my 24 years of career, it was not.


> This really is what businesses want and always have wanted.

There's a difference between what businesses really want and what executives think they want. You make it sound like every business makes optimal decisions to get optimal earnings.

> They literally did not want it fixed when I brought it up because

Because they thought they knew what earns them profits. The key here is that they thought they knew.

The real problem behind the scenes is that a lot of management is short-term. Of course they don't care. They roll out their shiny features, get their promotions, and leave. The issues after that are not theirs; they are THE business's problem.


> Perhaps there is some sort of failure of SWEs to understand that businesses don't care

I think it's an engineer's nature to want to improve things, but then we naively assume that everybody else also wants to improve things.

I know I personally went through a pretty rough disillusionment phase where I realised most of the work I was asked to do wasn't actually to make anything better, but rather to achieve some very specific metrics that actually made everything but that metric worse.

Thanks to the human tendency to fixate on narratives, we can (for a while) trick ourselves into believing a nice story about what we're doing even if it's complete bunk. I think that false narrative is at the core of mission statements and why they intuitively feel fake (mission statement is often more gaslighting than guideline - it's the identity a company wants to present, not the reality it does present).

AI is eager to please and doesn't have to deal with that cognitive dissonance, so it's a metric chaser's dream.


<< They have always wanted us to magically know what parts are "good enough" and what parts can slide but for us to bear the burden of blame.

Well, that part is bound to add a level of tension to the process. Our leadership runs AI training where the user is responsible for checking the AI's output, but the same leadership also outright stated that it now sees each individual user of AI as having 7 employees under them (so they should be 7x more productive). Honestly, it's maddening. None of it is how any of this works.


Stuff like this guarantees future dev work. It's the new institutional spreadsheet mess.

> Stuff like this guarantees future dev work.

Don't worry, it'll be done by more LLMs.


It seems to be endemic to the industry. Why was the latest Predator movie turned into what is functionally a buddy comedy with some action scenes?

>swoop in to establish new, more secure methods of identification.

This is already being done. However, it is being done in backroom deals to make sure the individual has no control over their identity; only the corporations do. You are not who you say you are, you are who a corporation decides you are.

Plaid is a huge player in this space.


I believe the term for them is evangelicals. I'm going to guess that a Venn diagram of deeply religious people and people pushing for "protecting" the kids is just a circle.

This is so untrue I wonder if you even bothered to think about it.

> Micron's killing the Crucial brand of RAM and storage devices completely,

More rot economy. Customers are such a drag. Let's just sell to other companies in billion-dollar deals all at once. These AI companies have bottomless wallets. No one has thought of this before; we will totally get rich.


"I don't want to make a little bit of money every day. I want to make a fuck ton of money all at once."

https://www.youtube.com/watch?v=BzAdXyPYKQo


It's scary how real that scene is.

Says a lot about human psychology.


> "The Coaching Habit."

Oh wow. This comment just completely explained the worst "manager" I ever had. They must have been using this terrible method.

>no matter how direct the request was or how much it really needed management authority behind it.

They nearly drove me insane with this circular routine. It was the only job I ever walked out on: after a particularly terrible cycle of this nonsense, I emailed on a Sunday night that I would not be returning to the office.

To be clear, I am not a "needy" employee. When I ask a manager for something, it is because I do not have the authority to do the thing myself.


Tell the manager you are assuming their authority.

You will force the answer out of them either way


This is really how it’s done. Don’t go over their head; go under their boss’s.

I'm not sure you understand just how dedicated they were to this whole turn-it-back-on-the-employee method they were apparently using. I just skimmed the book's summary after reading this comment. It's a toxic method of avoiding accountability masquerading as "mgmt".

You can't do whatever you describe if it needs sign-off from a manager who simply ghosts everything you send their way. You can assume responsibility, but I'm not committing fraud for a company that does not care.

Also, what you describe means you are now officially responsible for the mistakes, which I'm pretty sure is what this whole "method" is about.


If they are not signing things required for your job, then you need to go over their head.

It's scary that nowadays a company simply doing "good business" is so unusual that it's worth praise.

> it just makes software engineering into bothering supervisor job.

I'm pretty sure this is the entire C-level enthusiasm for AI in a nutshell. Until AI, SWEs resisted being mashed into a replaceable-cog job that executives don't have to think or care about. AI is the magic beans that are just tantalizingly out of reach, and boy do they want it.


But every version of AI for almost a century has had this property, right down from the first vocoders that were going to replace entire call centers to the convolutional AI that was going to give us self-driving cars. Yes, a century: vocoders were 1930s technology, but they could essentially only read the time aloud.

... except they didn't. In fact, most AI tech was good for a nice demo and little else.

In some cases, really unfairly. For instance, convnet map matching failed in the market not because it doesn't work well, but because you can't explain to humans when it won't work. It's unpredictable, like a human: ask a human to map a building in heavy fog and they may come back with "sorry". SLAM with lidar is "better", except no, it's a LOT worse; but when it fails it's very clear why, because it's a very visual algorithm.

People expect AIs to replace humans, but that doesn't work, because people also demand AIs that never say no and never fail, like the Star Trek computer (the only problem the Star Trek computer ever has is that it is misunderstood or follows policy too well). A human delivery person will occasionally radically modify the process, or refuse to deliver. No CEO is ever going to allow an AI drone to change the process, and no CEO will ever accept "no" from an AI drone. More generally, no business person seems to ever accept a 99% AI solution, and all AI solutions are 99%, or actually mostly less.

AI winters. I get the impression another one is coming, and I can feel it's going to be a cold one. But in 10 years, LLMs will be in a lot of stuff, like after every other AI winter. A lot of stuff... but a lot less than CEOs are declaring they will be in today.


Luckily for us, technologies like SQL made similar promises (for more limited domains) and C-suites couldn't be bothered to learn that stuff either.

Ultimately they are mostly just clueless, so we will either end up with legions of way shittier companies than we have today (because we let them get away with offloading a bunch of work onto tools they don't understand and accepting low-quality output), or we will eventually rediscover the continued importance of human expertise.


I know I'm jaded, but anytime I see a business that specifically says you must be a MINOR to work with them... I get kinda suspicious.

>Yes, I'm a teenager ,18 and under

So what they mean is not actually a teenager but a minor, and they use what I would call deceptive language around it. But why?....


That's not jaded, that's paranoidly misreading it.

It's an organization for hacking working with high schools and young people. They don't want small children enrolled, and they don't want older people.

"Teenager, 18 and under" is a perfectly fine description for ages 13-18, or 7th to 12th grade.


Yeah, I get suspicious when schools say only minors can enroll there

You already made the point, but yeah, that's overly jaded to a significant degree.
