
Okay, hear me out. Restructuring for profit, right? There will probably be companies spawned by all of these departures.

If the government ever wants a third party to oversee the safety of OpenAI, wouldn't it be convenient if one of the people who left started a company focused on safety? Safe Superintelligence Inc. gets the bid, because of lobbying, because of whatever; I don't even care what the reason is in this made-up scenario in my head.

Basically what I'm saying is: what if Sam is all like, "Hey guys, you know it's inevitable that we're going to be regulated. I'm taking this company for-profit now. You guys leave, and later on down the line we'll meet again in an incestuous corporate relationship where we regulate ourselves and we all profit."

Obviously this is bad. But also, obviously, this is exactly what has happened in the past with other industries.

Edit: The man is all about the long con anyway. - https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...

Another edit: I'll go one further on this. A lot of the people who are leaving are going to double down on saying that OpenAI isn't focused on safety, to build up the public perception, and therefore the governmental perception, that regulation is needed. So there's going to be a whole thing going on here. Maybe it won't just be safety; it might be other aspects too, because not all of the companies can be focused on safety.



I think the departures and switch to for-profit model may point in a different direction: that everyone involved is realizing that OpenAI’s current work is not going to lead to AGI, and it’s also not going to change.

So the people who want to work on AGI and safety are leaving to do that work elsewhere, and OpenAI is restructuring to instead focus on wringing as much profit as possible out of their current architecture.

Corporations are actually pretty bad at doing tons of different things simultaneously. See the failure of huge conglomerates like GE, as well as the failure of companies like Bell, Xerox, and Microsoft to drive growth with their corporate research labs. OpenAI is now locked into a certain set of technologies and products, which are attracting investment and customers. Better to suck as much out of that fruit as possible while it is ripe.


I feel like it's unfair to expect growth to remain within your walls. Bell and Xerox both drove a lot of growth. That growth just left Bell and Xerox to go build things like Intel and Apple. They didn't keep it for themselves, and that's a good thing. Could you imagine if the world were really like those old AT&T commercials and AT&T were actually the one bringing it all to you? I would not want a monolithic AT&T providing all technology.

https://youtu.be/xBJ2KXa9c6A?si=pB67u56Apj7gdiHa

I do agree with you. They are locked into pulling value out of what they've got, and they probably aren't going to build something new.


> See the failure of huge conglomerates like GE

GE was successful as a major conglomerate that made aircraft engines, railroad locomotives, and light bulbs.

GE was a failure as a financialized "engine of financial performance" that was focused entirely on spinning off businesses, outsourcing, and speculating in the debt derivatives market.


I think it's rather one of these:

1. Naah, it's not gonna lead to AGI, not here at least.

2. If it's gonna be for-profit, then why the hell should I stick around here? Maybe go somewhere that pays me more, or maybe I'll start my own gig.

3. Or maybe selling snake oil is more profitable if I start my own brand of snake oil, which is kinda close to point 2 anyway.


> If the government ever wants a third party to oversee the safety of OpenAI

No, please don't drag the government into this. Don't call for regulation when we don't know its implications. Until we gain enough insight, the costs of regulation and government intervention will most likely outweigh the benefits. Laissez-faire is the way to go. When will we learn from history?


Now that AI has exploded, I keep thinking about that show Almost Human, which opened by describing a time when technology advanced so fast that it could no longer be regulated.


As long as government moves slowly and industry moves fast, it's inevitable.


The Baptists and the bootleggers

https://a16z.com/ai-will-save-the-world/


Is there no date on that post, or did I miss it somewhere? I mean, it might be along the lines of: if the oracle says something, it becomes dateless. Yeah, that could be it.


Right at the top (at least on mobile): “Posted June 6, 2023”


> But also, obviously, this is exactly what has happened in the past with other industries.

Could you give some examples?


These are some serious mental gymnastics. It depends on:

1. The government providing massive funds for AI safety research. There is no evidence for this.

2. Sam Altman and everyone else knowing this will happen and planning for it.

3. Sam Altman, among the richest people in the world, and everyone else involved, not being greedy. (Despite the massive evidence of greed.)

4. Sam Altman heroically abandoning his massive profits down the line.

Also, even in your story, Sam Altman profits wildly and is somehow also not motivated by that profit.

On the other hand, a much simpler and more realistic explanation is available: he wants to get rich.


Why would the government care about safety? They already have the former director of the NSA sitting as a member of the board.


Why would they have the FCC? Why would they have the FDA? Why would people from industry end up sitting on each of these things eventually?

EDIT: Oh, and by the way, I'm very much for bigger government and more regulation to keep corpos in line. I'm hoping I'm wrong about all of this and we don't end up with corruption straight off the bat.



