askafriend's comments | Hacker News

Why didn't you ask to get the accounts provisioned?

I work for a company with so much bureaucracy and so many silos that teams maintain wiki pages with links and routing info on how to create tickets for specific tasks, and whether specific mandatory information is needed so your ticket isn't just closed as incomplete without an explanation.

Sometimes a team unilaterally decides to change the process; the info is sent to a random assortment of mailboxes/managers who may fail to pass it on. Some entire teams just put themselves in away status 24/7 and do not respond to direct messages.

So yes, I can believe his story. Sometimes in these kinds of companies you just don't know who or how to ask for something, and you just hope someone knows someone who might know.


What's the largest company you've worked for? A lot of big, older companies are just so messed up that it's just not worth it. How do you do this? Well, you have to find the specific form, or the specific person who does the thing. Who is that? No one knows. So provisioning a VPN and getting into Jira might literally be a month of work.

I've worked for S&P Global, so pretty large. If you don't have an account that you need, then you need to be tenacious, which of course is super annoying. If you still don't have an account on a system you should have access to, after a while it's 100% on you.

On consulting engagements, 0% of the time are Jira and git provisioned correctly for an outside consultant. I used to be appalled at being paid for two or three days of waiting for the IT guy to fix this. Now I use the time to find cleaning supplies and deep clean my cubicle and chair. People do look at me funny, but I feel better not just sitting there reading.

I did, multiple times. I was a contractor. I was the only one on my team of contractors whose account was screwed up. There seemed to be no priority to do anything there. One of many many reasons I left when I could.

I had a similar thing happen to me with a huge company as a contractor. I couldn't work for 3 weeks due to a combination of login issues and permissions settings. Couldn't file a ticket and no one was really sure who to call/ask. Finally a director caught wind of it and knew who to talk to.

I imagine that's done via a JIRA ticket/IT before onboarding.

So if they somehow can get past initial device deployment/user account logon, and get other resources, i.e. Slack... well, that speaks to how difficult/pointless it would be to get proper VPN/Jira access.


I believe it was an ancient ServiceNow incantation that all the current employees couldn't seem to hunt down.

You'd have to be able to find the person to do that first hehe!

I bet poor investment planning and low-paying roles despite the years of experience.

> I bet poor investment planning

Or good investment planning. I only recently built up savings after spending my entire career maxing out 401ks. If I were laid off, I'd only have like 6 months or so of savings before running out, despite having a coastFIRE/leanFIRE NW.

Yes, there are hardship withdrawals from 401ks, but the older you get, the more "retirement account" means retirement account. Meaning, it becomes clearer that the money in that account needs to be left alone until things get dire. You're not going to be more employable in 20 years.


A 401k and an IRA together allow only $30,500 in pre-tax contributions this year, and less in each previous year. That shouldn't be the main factor. FWIW I was including those accounts in my 'savings' number.
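(For concreteness, a rough back-of-the-envelope sketch in Python; the figures are the assumed 2025 limits for someone under 50, so check the current IRS numbers:)

    # Combined pre-tax retirement contribution room for one year.
    # Assumed 2025 limits for someone under 50; verify against IRS figures.
    limit_401k = 23_500  # 401k employee elective deferral limit
    limit_ira = 7_000    # traditional/Roth IRA limit
    print(limit_401k + limit_ira)  # 30500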

Heavy is the head that wears the crown, I guess.

The entire car dealership lobby hates Tesla, for example.


If it conveys the intended information then what's wrong with that? You're fighting a tsunami here. People are going to use LLMs to help their writing now and forever.


It’s a text generator regurgitating plausible phrases without understanding and producing stale and meaningless pablum. It doesn’t even know what the intended information is, and judging from the above neither did the human involved.

It doesn’t help writing; it stultifies it and gives everything the same boring, cheery yet slightly confused tone of voice.


> It’s a text generator regurgitating plausible phrases without understanding and producing stale and meaningless pablum.

Are you describing LLMs or social media users?

Don't conflate how the content was created with its quality. The "You must be at least this smart (tall) to publish (ride)" sign got torn down years ago. Speakers' Corner is now an (inter)national stage, and it's written, so it must be true...


I really could only be talking about LLMs, but social media is also low quality.

The quality (or lack of it) if such texts is self-evident. If you are unable to discern that, I can’t help you.


“The quality if such texts…”

Indeed. The humans have bested the machines again.


I think that’s a good example of a superficial problem in a quickly typed statement, easily ignored, vs the profound and deep problems with LLM texts - they are devoid of meaning and purpose.


Your comment was low quality noise while the one you replied to was on topic and useful. A short and useful comment with a typo is high quality content while a perfectly written LLM comment would be junk.


Nothing wrong with using LLMs—until every paragraph sounds like it’s A/B tested for LinkedIn virality. That’s the rot setting in.

The problem isn’t using AI—it’s sounding like AI trying to impress a marketing department. That’s when you know the loop’s closed.


Brilliantly phrased — sharp, concise, and perfectly captures that uncanny "AI-polished" cadence everyone recognizes but can’t quite name. The tone strikes just the right balance between wit and warning.


You are absolutely right!


Lately the Claude-ism that drives me even more insane is "Perfect!".

Particularly when it's in response to pointing out a big screw-up that needs correcting, and CC, utterly unfazed, just merrily continues on as if I'd praised it.

"You have fundamentally misunderstood the problems with the layout, before attempting another fix, think deeply and re-read the example text in the PLAN.md line by line and compare with each line in the generated output to identify the out of order items in the list."

"Perfect!...."


One thing I don't understand: there was (appropriately) a news cycle about sycophancy in responses, which was real and happening to an excessive degree. It was claimed to have been nerfed, but it seems as strong as ever in GPT-5, and it ignores my custom instructions to pare it back.


Sycophancy was actually buffed again a week after GPT-5 was released. It was rather ham-fisted, as it will now obsessively reply with "Good question!" as though it will get the hose again if it does not.

"August 15, 2025 GPT-5 Updates We’re making GPT-5’s default personality warmer and more familiar. This is in response to user feedback that the initial version of GPT-5 came across as too reserved and professional. The differences in personality should feel subtle but create a noticeably more approachable ChatGPT experience.

Warmth here means small acknowledgements that make interactions feel more personable — for example, “Good question,” “Great start,” or briefly recognizing the user’s circumstances when relevant."

The "post-mortem" article on sycophancy in GPT-4 models revealed that the reason it occurred was because users, on aggregate, strongly prefer sycophantic responses and they operated based on that feedback. Given GPT-5 was met with a less-than-enthusiastic reception, I suppose they determined they needed to return to appealing to the lowest common denominator, even if doing so is cringe.


"Any Compliments about my queries cause me anguish and other potent negative emotions."


> If it conveys the intended information then what's wrong with that?

Well, the issue is precisely that it doesn’t convey any information.

What is conveyed by that sentence, exactly? What does reframing data curation as “cognitive hygiene for AI” entail, and what information is in there?

There are precisely 0 bits of information in that paragraph. We all know training on bad data leads to a bad model; thinking about it as “cognitive hygiene for AI” does not lead to any insight.

LLMs aren’t going to discover interesting new information for you; they are just going to write empty, plausible-sounding words. Maybe it will be different in a few years. They can be useful for polishing what you want to say or otherwise formatting interesting information (provided you ask them not to be ultra verbose), but they’re just not going to create information out of thin air if you don’t provide it.

At least, if you do it yourself, you are forced to realize that you in fact have no new information to share, and you do not waste your and your audience’s time by publishing a paper like this.


The problem is that writing isn't only judged on whether it conveys the intended information or not. It's also judged on whether it does that well, plus other aesthetic criteria. There is such a thing as "good writing", distinct from "it mentioned all the things it needed to mention".


If you can’t understand the irony inherent in getting an LLM to write about LLM brainrot, itself an analog for the human brainrot that arises from habitual non-use of the human brain, then I’m not sure what to tell you.

Whether it’s a tsunami and whether most people will do it has no relevance to my expectation that researchers of LLMs and brainrot shouldn’t outsource their own thinking and creativity to an LLM in a paper that itself implies that using LLMs causes brainrot.


What you are obsessing over is the writer's style, not its substance. How sure are you that they outsourced the thinking to LLMs? Do you assume LLMs produce junk-level content, which contributes to human brain rot? What if their content is of higher quality, like with the game of Go? Wouldn't you rather study their writing?


Writing is thinking, so they necessarily outsourced their thinking to an LLM. As far as the quality of the writing goes, that’s a separate question, but we are nowhere close to LLMs being better, more creative, and more interesting writers than even just decent human writers. But if we were, it wouldn’t change the perversion inherent in using an LLM here.


Have you considered a case where English might not be the authors' first language? They may have written a draft in their mother tongue and merely translated it using LLMs. Its style may not be to many people's liking, but this is a technical manuscript, and I would think the novelty of the ideas is what matters here, more than the novelty of the prose.


I agree with the "writing is thinking" part, but I think most would agree LLM-output is at least "eloquent", and that native speakers can benefit from reformulation.

This is _not_ to say that I'd suggest LLMs should be used to write papers.


> What you are obsessing over is the writer's style, not its substance

They aren’t. These are boring stylistic tics that suggest the writer did not write the sentence.

Writing is both a process and an output. It’s a way of processing your thoughts and forming an argument. When you don’t do any of that and get an AI to create the output without the process, it’s obvious.


Writing reflects a person's train of thought. I am interested in what people think. What a robot thinks is of no value to me.


What information is conveyed by this sentence?

Seems like none to me.


It’s not really clear whether it conveys an “intended meaning” because it’s not clear whether the meaning - whatever it is - is really something the authors intended.


Style is important in writing. It always has been.


The brainrot apologists have arrived


Why shouldn't the author use LLMs to assist their writing?

The issue is how tools are used, not that they are used at all.


Assist without replacing.

If you were to pass your writing to it and have it provide criticism, pointing out places where you should consider changes, and even offering some examples of those changes that you can selectively choose to include when they keep the intended tone and implications, then I don't see the issue.

When you have it rewrite the entire piece and you pass that along for someone else to use, then it becomes an issue. Potentially, as I think the context matters. The more a piece of writing is meant to be from you, the more of an issue I see. Having an AI write or rewrite a birthday greeting or get-well wishes seems worse than having it write up your weekly TPS report. As a simple metric, I judge based on how bad I would feel if what I'm writing were summarized by another AI or automatically fed into a similar system.

In a text post like this, where I expect others are reading my own words, I wouldn't use an AI to rewrite what I'm posting.

As you say, it is in how the tool is used. Is it used to assist your thoughts and improve your thinking, or to replace them? That isn't really a binary classification, but more of a continuum, and the further it moves toward the replacement end, the more you will see others taking issue with it.


Because they produce text like this.


Is it really so painful to just think for yourself? For one sentence?

The answer to your question is that it rids the writer of their unique voice and replaces it with disingenuous slop.

Also, it's not a 'tool' if it does the entire job. A spellchecker is a tool; a pencil is a tool. A machine that writes for you (which is what happened here) is not a tool. It's a substitute.

There seem to be many falling for the fallacy of 'it's here to stay so you can't be unhappy about its use'.


The paragraph in question is a very poor use of the tool.


Because it sounds like shit? Taste matters, especially in the age of generative AI.

And it doesn’t convey information that well, to be honest.


It's because A LOT of people are. I can't imagine doing it any other way now that I've adopted all the tooling.

I haven't hand-written more than a dozen lines of code in months. The models are really really good now if you can learn to use them. There's definitely a learning curve though as is true with anything new.


Great thought, that seems very likely since so many "founder stories" are heavily spun tales.


Developers want a stable, secure platform where they can reach customers that trust the platform and are willing to transact. Everything is downstream of that, including any philosophy around control.

Developers are businesses and the economics need to work. For that, safety and security is much more important than openness.


Oh! Classic survivorship bias. You're only looking at the devs who went into business in the phone ecosystem in the first place. I'm thinking that they're there despite the barriers to entry ('shenanigans'), and the ones you encounter are those who place a higher value on 'other values'. As the ecosystem gets locked down more, this effect becomes stronger.

Meanwhile, you're not looking at those who left, or those who decided to never enter a broken market dominated by players convicted of monopolistic practices.

This seems much more intuitive than a hypothesis where somehow people would prefer to enter a closed market over a fair and open market with no barriers to entry.

Remember, monopolists succeed because they are distorting the market, not because they are in fact the most efficient competitor.

* https://en.wikipedia.org/wiki/Survivorship_bias


I'm actually quite familiar with the history of app stores and getting people to pay for software on the internet. I grew up in this timeline so I have first-hand experience too.

Before the App Store, the picture was mostly a disaster of security, reliability and quality. There was no trust and so people didn't bother parting with their credit card information to buy software...especially not on their phone.

Apple's App Store model dramatically grew the pie because it was one of the few platforms that people were willing to actually transact confidently on and trusted. This is why millions of developers flocked to the platform. This is also why Apple has traditionally maintained an iron grip on it; it was beneficial for everyone involved.

Over time, they are being proven right as more open platforms realize that openness at the expense of trust doesn't work for the masses.


While I understand the author's point, IMO we're unlikely to do anything that results in slowing down first. Especially in a competitive, corporate context that involves building software.


The thing people seem to forget about Jobs is that he really was that good, that obsessive/dedicated and that visionary. It's that simple.

His process resulted in some of the most transformative products humanity has ever known.


True, but lots of people who were less talented failed with this approach. It's always a bad idea to look for advice from outliers.


Exactly this.

It's inherent in the process and way of thinking. It's a dangerous path for entrepreneurs to pursue. How can the results be anything but disposable and frivolous when the process treats them as such?

