
This just seems like a "burnout" simulator. What makes it unique to having autism, as opposed to being overworked in a job you hate in an alienating urban environment not congenial to human thriving?

Everyone would rather be cozy on the couch under a warm blanket than wake up at 6:30 AM every day, commute to type meaningless stuff at a computer desk, sit in a sensory environment that is far from ideal, and converse with people they would never associate with if they didn't have to. The experience of the wage worker is a universally reviled existence, far from a unique plight afflicting those with high-functioning autism.

Is the implication that someone without autism could deal with all these stressors effortlessly with no need to think or put any effort in?


I answered this as somebody with 20+ years in this industry. I burned out instantly.

I had my wife, a stay-at-home wife, do it. She still burned out, and she has no reason to. She made it 6 questions. She said she wouldn't have chosen half of the optional questions.

I'm burned out. She's burned out from the quiz.

I'm going to take a PTO Friday.


Everyone's different. Some people genuinely thrive under the conditions you're describing, others don't like it but are able to put up with it no problem, and others can't stand it but are forced to.

The perspective I've found most useful is this. There is a constellation of correlated "autistic traits", which everyone has to a degree, but which, like most traits, become disabling if turned up too much. "Autism" is a term describing that state. So it is not so much a switch that can be flipped on or off, nor even a slider between "not autistic" and "very autistic", but a blurry region at the outskirts of the multidimensional bell curve of the human experience.

People at the furthest reaches of this space are seriously, unambiguously disabled, by any definition. They're what people traditionally imagine as "autistic". But the movement in psychiatry has been to loosen diagnostic criteria to better capture the grey areas. Whether this is a good or a bad thing is a social question, not a scientific one, in my opinion. Most of us want to live in a society that supports disabled people, but how many resources to allocate to that is a difficult question, where our human instincts seem to clash with the reality of living in a modern society.

On your last paragraph: I think this is a serious problem with the discourse around neuroatypicality today. My opinion is that the important thing is that we become more accepting and aware of the diversity of the human experience, and that this is a necessary social force to balance the constant regression to the mean imposed by modernity. If that's the case, then drawing a border around any category of person, staking a territorial claim to a pattern of difficulty the group experiences, and refusing to accept that the pattern exists beyond it is just unfair; it's giving in to defensiveness.


> They're what people traditionally imagine as "autistic". But the movement in psychiatry has been to loosen diagnostic criteria to better capture the grey areas.

There has also been a change that reclassified what we would previously have termed Asperger's Syndrome as autism. To be clear, AS was always considered to be a form of, or closely related to, autism, but that change in language does mean we've had a big shift both in what counts as autism medically and in what the public pictures when they think of autism.


The key difference here is magnitude and mechanism. For autistic people, even "normal" sensory inputs or social interactions can cause physical discomfort and confusion.

If the expectation of the job is to "type meaningless stuff at a computer desk", doesn't this point to a problem with the expectations of the role? I would submit that if the work is truly meaningless, and it often is in my experience, it doesn't need to be done. Of course anyone would choose a pleasurable activity over meaningless, mundane busy work - regardless of their unique expression of the autism spectrum.

I also think that there are many wage workers who do not revile that existence. My intuition says it is more common in "office jobs".

I think the implication is that someone without autism can recover from these stressors more easily. And they tend to be able to absorb these stressors with less of an impact on their mood. People without autism have more control over when their brain is engaged with something, and have to expend less effort when exerting that control. It's not just about physical energy.

The brain of a person dealing with these types of symptoms is kind of like an engine running near red-line 99% of the time. When someone is masking, for every thought they express, there were likely dozens you didn't hear or see expressed over the course of a short social interaction.

Other times, they are caught in mental loops: reading the same line of text over and over, or replaying someone else's comment over and over, and not comprehending it because an auditory stimulus is monopolizing the comprehension processes within the brain. When this happens, it's easy for them to miss important context or body language when working with others. That requires even more masking to cover up, because it's a social faux pas to admit you missed something important. So then your brain goes into overdrive trying to derive the missed information from the follow-up conversation.

Using sustained, intense thinking to overcome challenges that others don't encounter as often can become the default coping mechanism for this kind of thing. It's not something that is easily noticed, because it's part of masking, but it tends to be more draining than many people realize.


I think we are still in the period where many new jobs are being created due to AI, and AI models are chiefly a labor enhancer, not a labor replacer. It is inevitable, though, if current trends continue (see METR's time-horizon eval and GDPval), that AI models will become labor replacements in many fields, starting with jobs oriented around closed-ended tasks (customer service reps, HR, designers, accountants), before expanding to jobs with longer and longer task horizons.

The only way this won't happen is if at some point AI models just stop getting smarter and more autonomously capable despite every AI lab's research and engineering effort.


AI coding is more like being a PM than an engineer. Obviously PMs exist and don't know as much about the tech they make as the engineers do, but they are nevertheless useful.

This is how I'm acting now because of my p(doom)

Turns out, I don't act any differently.


People pattern match with a very low-resolution view of the world ("web3/crypto/NFTs were a bubble because there was hype, so there must be an AI bubble since AI is hyped! I am very smart") and fail to reckon with the very real ways in which AI is fundamentally different.

Also, I think people do understand just how big a deal AI is, but don't want to accept it, or at least publicly admit it, because they are scared for a number of reasons, not least of which is human irrelevance.


The problems people cite with social media are the same issues that have been cited for decades regarding living in a dense urban area vs. a less populated one, but nevertheless people still overwhelmingly live in urban areas.


Yeah, but that's mostly because of jobs and corresponding salaries. For every person I know who simply loves living in the city, has no connection to nature, and whose best weekend is spent partying or in a similar city vein, there are 10 who would love to live in a more rural place, but then there is the work or services commute.

Triple that for families with small kids.

Also, it doesn't have to be proper wilderness; that's only for a few. E.g., our village has 2k people, a kindergarten and a school for kids up to 14 years, shops, 3 restaurants, a football stadium, a doctor and a dentist, and so on. The nearest small city is a 5-minute drive, a bigger one 10, and a metropolis 20. And it sits right next to a big wild forest and nature reserve on one side, which continues up into hills 1km higher than where we are, with a 15km stretch of vineyards on the other. Almost the ideal compromise for us, with just me sucking up the 1h office commute 2x a week (for now).


Nitpick: Around 60% of the world population lives in urban areas, and if a lot of people decide to live in a particular rural area, then it quickly faces urbanization.


>> people still overwhelmingly live in urban areas

If you restrict the classification to urban vs. rural, then yes, people overwhelmingly live in urban areas, something like 80% to 20% according to the census.

If you add in suburban, it changes. There's no authoritative definition of the term, but there was a Pew Research Center poll that asked people to describe the community they live in and the response was 25% urban, 43% suburban, and 30% rural. (And I guess 2% something else?)


I went to NYC the other day. There was lots of diverse interesting stuff. Not full of people who looked just like me.


Almost every parent comment on this is negative. Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?

It seems that there is a constant motive on this forum to view any decision made by any big AI company with, at best, extreme cynicism and, at worst, virulent hatred. It seems unwise for a forum focused on technology and building the future to be so opposed to the companies doing the most to advance the most rapidly evolving technological domain of the moment.


> Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?

Isn't that a good thing? The comments here are not sponsored, nor endorsed, by YC.


I'd expect to see a balance, though, at least on the theory that people would be drawn to posting on a YC forum over other forums because they support, or at least have an interest in, YC.


I think the majority of people don't care about YC. It just happens to be the most popular tech forum.


> posting on a YC forum over other forums due to them supporting or having an interest in YC.

I've been posting here for over a decade, and I have absolutely no interest in YC in any way, other than a general strong negative sentiment towards the entire VC industry, YC included.

Lots of people come here for the forum, and leave the relationship with YC there.


Why do you assume there would be a balance? Maybe YC's reputation has just been going downhill for years. Also, OpenAI isn't part of YC. Sam Altman was fired from YC and it's pretty obvious what he learned from that was to cheat harder, not change his behavior.


Sam Altman wasn't fired from YC.


The story I heard before the PR spin that came later from Paul Graham (who tweeted that he never fired him and only asked him to choose between YC and OpenAI) was that he was asked to resign. I don't have an official source; I heard this from multiple YC alumni. I don't know exactly what happened, but based on what I've heard, and having actually interacted with Sam Altman, it seems most likely to me that he was asked to resign (which isn't technically being fired) because he does weird stuff. He claimed to be chairman of YC, which wasn't true; he barred other YC partners from running personal funds while doing so himself; and then there are all the further similar behaviors we've seen play out at OpenAI. Maybe you're right, but it seems to me he was "fired" and later there was some PR to smooth it over.

https://archive.is/Vl3VR

https://archive.is/2mzD7


You don't know what exactly happened, but you stated confidently what happened.


That doesn't address the substance of the claim though. What do you know that you aren't telling us about that situation?


You're right about that and that's why I'm providing additional context.


It's Saturday morning in California, where YC is centered. Everyone here should be out doing anything else (including me). It's not a random sampling of HN commenters, but a certain subset. I think we've just found out which way the subset that comments on Saturday mornings leans.


Well, in a way they are endorsed. They actively censor things they don’t like. Since there’s no moderation log, nobody prevents them from removing things just because they don’t like them.


When dealing with organizations that hold a disproportionate amount of power over your life, it's essential to view them in a somewhat cynical light.

This is true for governments, corporations, unions, and even non-profits. Large organizations, even well-intentioned ones, are "slow AI"[1]. They don't care about you as an individual, and if you don't treat everything they do and say with a healthy amount of skepticism and mistrust, they will trample all over you.

It's not that being openly hostile towards OpenAI on a message board will change their behavior. Only Slow AI can defeat other Slow AI. But it's our collective duty to at least voice our disapproval when a company behaves unethically or builds problematic technology.

I personally enjoy using LLMs. I'm a pretty heavy user of both ChatGPT and Claude, especially for augmenting web search and writing code. But I also believe building these tools was an act of enclosure of the commons at an unprecedented scale, for which LLM vendors must be punished. I believe LLMs are a risk to people who are not properly trained in how to make the best use of them.

It's possible to hold both these ideas in your head at the same time: LLMs are useful, but the organizations building them must be reined in before they cause irreparable damage to society.

[1]: https://www.antipope.org/charlie/blog-static/2018/01/dude-yo...


When you call yourself "Open"AI and then turn around and backstab the entire open community, it's pretty hard to recover from that.


They undermined their not-for-profit mission by changing their governance structure. This changed their very DNA.


They released a near-SOTA open source model not too long ago


open weights != open source


they didn't release the source to it.


Why do you assume that a forum run by X needs to, or should, support X? And why is it unwise? By what metrics do you measure wisdom?


My takeaway is actually the opposite: major props to YC for allowing this free speech unfettered. I can't think of any other organization or country on the planet where such a free setup exists.


Unfettered? Have you ever seen how many posts disappear from being flagged for the most dubious reasons imaginable? Have you been on other sites on the internet? Hell, Reddit is more unfettered and that’s terrible.


Hmm, interesting. Based on the kind of posts that I see, I made the presumption that this place is free, but the opposite actually makes more sense. What kind of posts have you seen disappear?


I don't want to be glib - but perhaps it is because our "context window lengths" extend back a bit further than yours?

Big tech (not just AI companies) has been viewed with some degree of suspicion ever since Google's mantra of "Don't be evil" became a meme over a decade ago.

Regardless of where you stand on the concept of copyright law, it is an indisputable fact that, in order for these companies to get to where they are today, they deliberately HOOVERED up terabytes of copyrighted material without the consent or even knowledge of the original authors.


These guys are pursuing what they believe to be the biggest prize ever in the history of capitalism. Given that, viewing their decisions as a cynic, by default, seems like a rational place to start.


True, though it seems most people on HN think AGI is impossible and thus would consider OpenAI's quest a lost cause.


I don’t think one can validly draw any such conclusion.


because of the repeated rugpulling?


I’ll bite, but not in the way you’re expecting. I’ll turn the question back on you and ask why you think they need defending?

Their messaging is just more drivel in a long line of corporate drivel, puffing themselves up to their investors, because that’s who their customers are first and foremost.

I'd suggest some self-reflection: ask yourself why you need to carry water for them.


I support them because I like their products and find the work they've done interesting, and whether good or bad, extremely impactful and worth at least a neutral consideration.

I don't do a calculation in my head over whether any firm or individual I support "needs" my support before providing or rescinding it.


Perhaps the people you see as cynical have more research and/or experience behind their views on OpenAI than you. Many of us have been more naive in the past, including specifically towards Altman, Microsoft, and OpenAI.


Microsoft, especially, has a long history of malfeasance.


I would call it skepticism, not cynicism. And there is a long list of reasons that big tech and big AI companies are met with skepticism when they trot out nice sounding ideas that require everyone to just trust in their sincerity despite prior evidence.


> Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?

Because our views are our own and not reflective of the feelings of the company that hosts the forum?


This. I've been on HN for a while. I am barely hanging on to this community. It is near-constant negativity and the questioning of every potential motive.

Skepticism is healthy. Cynicism is exhausting.

Thank you for posting this.


In the current echo chamber and unprecedented hype, I'll take cynicism over hollow positivity and sycophancy


Because at some point, HN builders mostly left the site and it’s just become /. 3.0.


People here are directly in the line of fire for their jobs. It’s not surprising.


True, but there are many reasons besides. Meta and Anthropic attract less criticism for a reason.


People remember things and consistently behaving like an asshole gets you treated like an asshole.

OpenAI had a lot of goodwill and the leadership set fire to it in exchange for money. That's how we got to this state of affairs.


What are the worst things OpenAI has done?


The number one worst thing they've done was when Sam tried to get the US government to regulate AI so only a handful of companies could pursue research. They wanted to protect their moat.

What's even scarier is that if they actually had the direct line of sight to AGI that they had claimed, it would have resulted in many businesses and lines of work immediately being replaced by OpenAI. They knew this and they wanted it anyway.

Thank god they failed. Our legislators had enough of a moment of clarity to take a wait-and-see approach.


It's actually worse than that.

First, when they thought they had a big lead, OpenAI argued for AI regulations (targeting regulatory capture).

Then, when that lead evaporated thanks to Anthropic and others, OpenAI argued against AI regulations (so that they can catch up, and presumably argue for regulations again).


Do you believe AI should not be regulated?

Most regulations that have been suggested would put restrictions mostly on the largest, most powerful models, so they would likely affect OpenAI/Anthropic/Google before smaller upstarts.


I think you can both think there's a need for some regulation and also want to avoid regulation that effectively locks out competition. When only one company is pushing for regulation, it's a good bet that they see this as a competitive advantage.


Dude, they completely betrayed everything in their "mission". The irony of the name OpenAI for a closed, scammy, for-profit company cannot be lost on you.


They released a near-SOTA open-source model recently.

Their prerogative is to make money via closed-source offerings so they can afford safety work and their open-source offerings. Ilya noted this near the beginning of the company. A company can't muster the capital needed to make SOTA models giving away everything for free when their competitor is Google, a huge for-profit company.

As for your claim that they are scammy: what about them is scammy?


Their contribution to open source and open research is far behind other organisations like Meta and Mistral, as welcome as their recent model release is. Former safety researchers like Jan Leike commonly cite a lack of organisational focus on safety as a reason for leaving.

Not sure specifically what the commenter is referring to re: scammy, but things like the Scarlett Johansson / Her voice imitation and copyright infringement come to mind for me.


Oh yeah, that reminds me: the company did research on how to train a model that manipulates the metrics, allowing them to tick the open-source box with a seemingly good score while releasing something that serves no real purpose. [1] [2]

GPT-OSS is not a near-state-of-the-art model: it is a model deliberately trained so that it appears great in evaluations but is unusable and far underperforms actual open-source models (e.g., those distributed via Ollama). That's scammy.

[1] https://www.lesswrong.com/posts/pLC3bx77AckafHdkq/gpt-oss-is...

[2] https://huggingface.co/openai/gpt-oss-20b/discussions/14


That explains why gpt-oss wasn't working anywhere near as well for me as other similarly and smaller sized models. gemma3 27b, 12b, and phi4 (14b?) all significantly outperformed it when transforming unstructured data to structured data.


winning an argument on HN over whether AGI has arrived


Copying files of scanned books isn't worth a $1 trillion fine.


Maybe Anthropic should have paid attention to the law that provides up to $150k in statutory damages per work if the infringement is willful. So much cheaper just to buy the books and scan them instead of violating a law that has a statutory damages clause.
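To put rough numbers on that (a back-of-envelope sketch in Python; the $150k ceiling is from the statute, but the book count is purely hypothetical, not a figure from the case):

    # Statutory damages exposure for willful copyright infringement.
    # $150,000 per work is the statutory ceiling cited above;
    # 500,000 works is a made-up count, for illustration only.
    STATUTORY_MAX_PER_WORK = 150_000   # USD per work (willful ceiling)
    works_infringed = 500_000          # hypothetical

    exposure = STATUTORY_MAX_PER_WORK * works_infringed
    print(f"Maximum exposure: ${exposure:,}")  # -> Maximum exposure: $75,000,000,000

Even a hypothetical half-million pirated books puts the statutory ceiling in the tens of billions, which is why buying and scanning looks cheap by comparison.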


I really want to see an alternative universe where we have mechanical turk folks scanning a huge literal book library into a data warehouse.


Google and the Internet Archive both did/do this with elaborate setups. https://archive.org/details/eliza-digitizing-book_202107


I think Anthropic has this operation now.


If a person pirates a book, should they have to pay $150k?


So true! I too think the average person is basically indistinguishable from Anthropic.


And I remember the part in copyright law where it says "if the violation is done by a big corporationy corporation, then the punishment must be more severe, even on a per-violation basis".


I mean, it's not crazy talk; you frame it as crazy, but this difference in treatment is fairly normal. If punitive damages were limited to what an individual person could manage, they would hardly ever be "punitive" for corporate actors.

You're the second person I've talked to in this thread who thinks "The law is not applied with perfect consistency in practice!? Dear god, why is it not like computer programs?"

It's just my personal take, but I don't think extreme rigidity and consistency in a "code is law" fashion would ever be desirable. Look over at the crypto world to see how DAOs are faring.


$150k per violation of copyright law by means of pirating a book is excessive even for the largest company in the world. I agree that punishment should be somewhat proportional, so that "too big to fail" firms don't see it as just a fine to be accounted for like any other expense, but downloading a book is not among the things that ought to be worth a fine of that magnitude.


And I agree that $150k is dialing the slider a bit too high compared to an individual's offense; happy to meet you in the middle there.


In general, buying books and scanning them for this type of use would *also* be copyright infringement.


No, that's fair use. Format shifting is fair use, as affirmed by RIAA v. Diamond Multimedia, which was about ripping CDs to MP3s.


…for personal use, just as time-shifting was with Sony v. Universal (the Betamax case). Neither was about commercial use/exploitation.


The Meta case just affirmed that training an LLM is fair use as transformative use. Also, Google's indexing (transformative use) of scanned books is settled law per Authors Guild v. Google.


And Roe v. Wade was settled law too, until it wasn’t.


It won't, but it seems 95% of people on HN think (hope) it will, because they hate AI and much of big tech.


The amount of cynicism motivated by amateur hatred is way way less than the amount of optimism generated by professional greed and profit motive.

Imagine someone in 2006: "All these cynics and doomsayers about the US housing market are just envious and bitter that they didn't get in on the ground floor with their own investments."

Perhaps some were, but if we're going to start discounting opinions based on ulterior motives, there's a much bigger elephant in the room.


AI is a technology; the housing market in the mid-2000s was propped up by fraud, deception, and knowingly unsafe financial engineering.

There is a degree of fraud in AI startups, but not nearly to the degree there was in other boom/bust businesses like crypto. Even if AI progress were to stop now, the productivity benefits of existing LLMs still haven't fully spread.


That is only your feeling. I can only guess that you have a somewhat irrational love for AI and refuse to see the rationality of other people, brushing them away as "irrational h8rs".

But all of your comments seem to be dismissive of other people's opinions.


It seems that I am dismissive of others' opinions because my optimistic takes on AI attract people who are dismissive of mine.

Among my disagreements was a reply to someone who said managers who change requirements due to AI should be shot and killed in the upcoming "revolution".

Before disagreeing with anyone, I always try to consider their point as thoroughly as possible. However, I am very often not given the same courtesy in return. Many people perpetually biased against AI interpret my disagreements far less charitably than they would regarding any topic they don't have a bias towards.


> Among my disagreements was at replying to someone who said managers who change requirements due to AI should be shot and killed in the upcoming “revolution”

Perhaps ask the AI to figure out quotes from humorist books for you?

> However it is very often I am not given the same courtesy in return

Having read your comments: no, you are incredibly dismissive of research and experience, and you just treat your own feelings as irrefutable mathematical proof.

Try asking a friend for their opinion on your commenting style if you don't believe me.


Not hate, they want to keep their jobs lol

