If OpenAI remains a 501(c)(3) charity, then any employee of Microsoft on the board will have a fiduciary duty to advance the mission of the charity, rather than the business needs of Microsoft. There are obvious conflicts of interest here. I don't expect the IRS to be a fan of this arrangement.
Major corporate boards are rife with "on paper" conflicts of interest - that's what happens when you want people with real management experience to sit on your board and act like responsible adults. This happens in every single industry and has nothing to do with tech or with OpenAI specifically.
In practice, board bylaws and common sense mean that individuals recuse themselves as needed and don't do stupid shit.
I get a lostredditor vibe way too often here. Oddly more than Reddit.
I think people forget sometimes that comments come with a context. If we are having a conversation about Deepwater Horizon, someone will chime in about how safe deep-sea oil exploration is and how many failsafes blah blah blah.
>I think people forget sometimes that comments come with a context.
I mean, this is definitely one of my pet peeves, but the wider context of this conversation is specifically a board doing stupid shit, so that's a very relevant counterexample to the thing being stated. Board members in general often do stupid/short-sighted shit (especially in tech), and I don't know of any examples of corporate board members recusing themselves.
It happens a lot. Every big company has CEOs from other businesses on its board and sometimes those businesses will have competing products or services.
That's what I would term a black-and-white case. I don't think there's anyone with sense who would argue in good faith that a CEO should get a vote on their own salary. There are many degrees of grey between outright corruption and this example, and I think that's where the concern lies.
I get what you're saying, but I also live in the world and see the mechanics of capitalism. I may be a person who's interested in tech, science, education, archeology, etc. That doesn't mean that I don't also have political views that sometimes overlap with a lot of other very-online people.
I think the comment to which you replied has a very reddit vibe, no doubt. But also, it's a completely valid point. Could it have been said differently? Sure. But I also immediately agreed with the sentiment.
Oh, I wasn’t complaining about the parent, I was complaining that it needed to be said.
We are talking about a failure of the system, in the context of a concrete example. Talking about how the system actually works is only appropriate if you are drawing up specific arguments about how this situation is an anomaly, and few of them do that.
Instead it often sounds like “it’s very unusual for the front to fall off”.
No, this is the part of the show where the patronizing rhetoric gets trotted out to rationalize discarding the principles that have suddenly become inconvenient for the people with power.
No worries. The same kind of people who devoted their time and energy to creating open-source operating systems in the era of Microsoft and Apple are now devoting their time and energy to doing the same for non-lobotomized LLMs.
Look at these clowns (Ilya & Sam and their angry talkie-bot), it's a revelation, like Bill Gates on Linux in 2000:
No, it's the part of the show where they go back to paying empty lip service to the principles and using them as a pretext for things that actually serve narrow proprietary interests - the same way they were before the leadership that had been doing that for a long time was temporarily removed, until those sharing the proprietary interests revolted for a return to the status quo ante.
Yes, and we were also watching the thousands and thousands of companies where these types of conflicts are handled easily by decent people and common sense. Don't confuse the outlier with the silent majority.
And we're seeing the result in real-time. Stupid shit doers have been replaced with hopefully-less-stupid-shit-doers.
It's a real shame too, because this is a clear loss for the AI Alignment crowd.
I'm on the fence about the whole alignment thing, but at least there is a strong moral compass in the field, especially compared to something like crypto.
You need to be able to separate macro-level and micro-level. GP is responding to a comment about the IRS caring about the conflict-of-interest on paper. The IRS has to make and follow rules at a macro level. Micro-level events obviously can affect the macro view, but you don't completely ignore the macro because something bad happened at the micro level. That's how you get knee-jerk reactionary governance, which is highly emotional.
A corporation acting (due to influence from a conflicted board member who doesn't recuse) contrary to the interests of its stockholders and in the interest of the conflicted board member, or whoever they represent, potentially creates liability of the firm to its stockholders.
A charity acting (due to the influence of a conflicted board member who doesn't recuse) contrary to its charitable mission and in the interests of the conflicted board member, or whoever they represent, does something similar with regard to liability of the firm to the various stakeholders with a legally-enforceable interest in the charity and its mission. But it is also a public civil violation that can lead to IRS sanctions against the firm, up to and including monetary penalties and loss of tax-exempt status, on top of whatever private tort liability exists.
Reminds me of the “revolving door” problem. Obvious risk of corruption and conflict of interest, but at the same time experts from industry are the ones with the knowledge to be effective regulators. Not unlike how many good patent attorneys were previously engineers.
501c3's also have governing internal rules, and the threat of penalties and loss of status imposed by the IRS gives them additional incentive to safeguard against even the appearance of conflict being manifested into how they operate (whether that's avoiding conflicted board members or assuring that they recuse where a conflict is relevant.)
If OpenAI didn't have adequate safeguards, whether through negligence or because it was in fact being run deliberately as a fraudulent charity, that's a particular failure of OpenAI, not a “well, 501c3’s inherently don't have safeguards” thing.
The Bill and Melinda Gates Foundation is a 501c3 and I'd expect that even the most techno-futurist free-market types on HN would agree that no matter what alleged impact it has, it is also in practice creating profitable overseas contracts for US corporations that ultimately provide downstream ROI to the Gates estate.
Most people just tend to go about it more intelligently than Trump, but "charitable" or "non-profit" doesn't mean the organization exists to enrich the commons rather than the moneyed interests it represents.
My guess is that the non-profit has never gotten this kind of scrutiny until now, and the new directors are going to want to get lawyers involved to cover their asses. Just imagine their positions when Sam Altman really does something worth firing him over.
I think it was a real mistake to create OpenAI as a public charity and I would be hesitant to step into that mess. Imagine the fun when it tips into a private foundation status.
> I think it was a real mistake to create OpenAI as a public charity
Sure, with hindsight. But it didn't require much in the way of foresight to predict that some sort of problem would arise from the not-for-profit operating a hot startup that is by definition poorly aligned with the stated goals of the parent company. The writing was on the wall.
I think it could have easily been predicted just from the initial announcements. You can't create a public charity simply from the donations of a few wealthy individuals; a public charity has to meet the public support test. A private foundation would have been a better model, but someone decided they didn't want to go that route. Maybe they should have asked a non-profit lawyer?
Maybe the vision is to eventually bring UBI into it and cap earn-outs. Not so wild given Sam’s Worldcoin and his UBI efforts when he was YC president.
The public support test for public charities is a 5-year rolling average, so "eventually" won't help you. The idea of billionaires asking the public for donations to support their wacky ideas is actually quite humorous. Just make it a private foundation and follow the appropriate rules. Bill Gates manages to do it and he's a dinosaur.
Exactly this. OpenAI was started for ostensibly the right reasons. But once they discovered something that would both 1) take a tremendous amount of compute power to scale and develop, and 2) could be heavily monetized, they chose the $ route, and at that point the mission was doomed, with the board members originally brought in to protect the mission left holding their fingers in the dike.
Speaks more to a fundamental misalignment between societal good and technological progress. The narrative (first born in the Enlightenment) about how reason, unfettered by tradition and nonage, is our best path towards happiness no longer holds. AI doomerism is an expression of this breakdown, but without the intellectual honesty required to dive to the root of the problem and consider whether Socrates may have been right about the corrupting influence of writing stuff down instead of memorizing it.
What's happening right now is people just starting to reckon with the fact that technological progress on its own is necessarily unaligned with human interests. This problem has always existed; AI just makes it acute and unavoidable, since it's no longer possible to invoke the long tail of "whatever problem this fix creates will just get fixed later". The AI alignment problem is at its core a problem of reconciling this, and it will inherently fail in the absence of explicitly imposing non-Enlightenment values.
Seeking to build OpenAI as a nonprofit, as well as ousting Altman as CEO, were both initial expressions of trying to reconcile the conflict, and seeing these attempts fail will only intensify it. It will be fascinating to watch as researchers slowly come to realize what the roots of the problem are, but also the lack of the social machinery required to combat it.
Wishfully, I hope there was some intent from the beginning to expose the impossibility of this contradictory model to the world, so that a global audience can evaluate how to improve our system to support a better future.
Well, I think that's really the question, isn't it?
Was it a mistake to create OpenAI as a public charity?
Or was it a mistake to operate OpenAI as if it were a startup?
The problem isn't really either one—it's the inherent conflict between the two. IMO, the only reason to see creating it as a 501(c)(3) being a mistake is if you think cutting-edge machine learning is inherently going to be targeted by people looking to make a quick buck off of it.
> IMO, the only reason to see creating it as a 501(c)(3) being a mistake is if you think cutting-edge machine learning is inherently going to be targeted by people looking to make a quick buck off of it.
I mean, that's certainly been my experience of it thus far: companies rushing to market with half-baked products that (allegedly) incorporate AI to do some task or another.
I was specifically thinking of people seeing a non-profit doing stuff with ML, and trying to finagle their way in there to turn it into a profit for themselves.
(But yes; what you describe is absolutely happening left and right...)
OpenAI the charity would have survived only as an ego project for Elon doing something fun with minor impact.
Only the current setup is feasible if they want to get the kind of investment required. This can work if the board is pragmatic and has no conflict of interest, so preferably people with no stake in anything AI, either business or academic.
I think the only way this can end up is to convert to a private foundation and make sizable (8 figures annually) grants to truly independent AI safety (broadly defined) organizations.
> I think it was a real mistake to create OpenAI as a public charity and I would be hesitant to step into that mess.
I think it could have worked either as a non-profit or as a for-profit. It's this weird jackass hybrid thing that's produced most of the conflict, or so it seems to me. Neither fish nor fowl, as the saying goes.
Perhaps creating OpenAI as a charity is what has allowed it to become what it is, whereas other for-profit competitors are worth much less. How else do you get a guy like Elon Musk to 'donate' $100 million to your company?
Lots of ventures cut corners early on that they eventually had to pay for, but cutting the corners was crucial to their initial success and growth.
Elon only gave $40 million, but since he was the primary donor I suspect he was the one who was pushing for the "public charity" designation. He and Sam were co-founders. Maybe it was Sam who asked Elon for the money, but there wasn't anyone else involved.
Are there any similar cases of this "non-profit board overseeing a (huge) for-profit company" model? I want to like the concept behind it. Was this inevitable due to the leadership structure of OpenAI, or was it totally preventable had the right people been on the board? I wish I had the historical context to answer that question.
But those examples are for mature, established companies operating under a nonprofit. OpenAI is different. Not only does it have the for-profit subsidiary, but the for-profit needs to frequently fundraise. It's natural for fundraising to require renegotiations in the board structure, possibly contentious ones. So in retrospect it doesn't seem surprising that this process would become extra contentious with OpenAI's structure.
They are registered as a 501(c)(3) which is what people commonly call a public charity.
> Organizations described in section 501(c)(3) are commonly referred to as charitable organizations. Organizations described in section 501(c)(3), other than testing for public safety organizations, are eligible to receive tax-deductible contributions in accordance with Code section 170.
> They are registered as a 501(c)(3) which is what people commonly call a public charity.
TIL "public charity" is specific legal term that only some 501(c)(3) qualify as. To do so there are additional restrictions, including around governance and a requirement that a significant amount of funding come from small donors other charities or the government. In exchange a public charity has higher tax deductible giving limits for donors.
Important to note here that most large individual contributions are made through a DAF or donor-advised fund, which counts as a public source in the support test. This helps donors maximize their tax incentives and prevents the charity from tipping into private foundation status.
Their IRS determination letter says they are formed as a public charity and their 990s claim that they have met the "public support" test as a public charity. But there are some questions since over half of their support ($70 million) is identified as "other income" without the required explanation as to the "nature and source" of that income. Would not pass an IRS audit.
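For anyone curious how that support test works mechanically, here's a minimal sketch in Python. The one-third threshold and five-year window are the actual Schedule A rules, but every dollar figure below is invented for illustration, not OpenAI's real financials, and real Schedule A math has extra wrinkles that are only hinted at here:

```python
# Minimal sketch of the 501(c)(3) public support test (Form 990, Schedule A).
# Real Schedule A math has more wrinkles (e.g., gifts from a single donor
# only count as "public" up to 2% of total support, and there's a 10%
# "facts and circumstances" fallback) -- this just shows the basic shape.
# All dollar figures are invented; they are NOT OpenAI's actual financials.

def public_support_fraction(history):
    """history: per-year dicts of 'public' support (small donors, DAF grants,
    government grants, other public charities) and 'total' support."""
    window = history[-5:]  # the test uses a 5-year rolling measurement period
    public = sum(year["public"] for year in window)
    total = sum(year["total"] for year in window)
    return public / total

# Hypothetical charity funded mostly by a few large donors:
history = [
    {"public": 2_000_000, "total": 10_000_000},
    {"public": 1_500_000, "total": 12_000_000},
    {"public": 3_000_000, "total": 15_000_000},
    {"public": 2_500_000, "total": 20_000_000},
    {"public": 4_000_000, "total": 25_000_000},
]

fraction = public_support_fraction(history)
print(f"public support: {fraction:.1%}")  # ~15.9%
if fraction < 1 / 3:
    # Fails the bright-line one-third test; absent the fallback, the org
    # tips into private foundation status, with its stricter rules.
    print("fails the public support test")
```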
> They are registered as a 501(c)(3) which is what people commonly call a public charity.
Why do they do that? Seems ridiculous on the face of it. Nothing about 501(c)(3) entails providing any sort of good or service to society at large. In fact, the very same thing prevents them from competing with for-profit entities at providing any good or service to society at large. The only reason they exist at all is that for-profit companies are terrible at feeding, housing, and protecting their own labor force.
> Nothing about 501(c)(3) entails providing any sort of good or service to society at large.
While one might disagree that the particular subcategories into which a 501c3 must fit do, in fact, provide a good or service to society at large, that's the rationale for 501c3 and its categories. It's true that "charity" or "charitable organization" (and "charitable purpose"), the common terms (used even by the IRS), are pedantically incomplete, since the actual purpose part of the requirement in the statute is "organized and operated exclusively for religious, charitable, scientific, testing for public safety, literary, or educational purposes, or to foster national or international amateur sports competition (but only if no part of its activities involve the provision of athletic facilities or equipment), or for the prevention of cruelty to children or animals". But, yeah, it does require something which policymakers have judged to be a good or service that benefits society at large.
- First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.
- Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.
- Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
- Fourth, profit allocated to investors and employees, including Microsoft, is capped. All residual value created above and beyond the cap will be returned to the Nonprofit for the benefit of humanity.
- Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.
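A toy sketch of how the capped-return mechanics in the fourth point might work. The 100x multiple is the figure widely reported for OpenAI's first-round investors; the actual terms and payout ordering are not public, so everything below is an illustrative assumption, not the real waterfall:

```python
# Toy model of a capped-profit waterfall, per the fourth point above.
# The 100x cap is the multiple widely reported for OpenAI's earliest
# backers; actual per-investor caps and payout ordering are not public,
# so this simple sequential version is an illustrative assumption only.

def distribute(total_profit, investments, cap_multiple=100):
    """Pay each investor at most cap_multiple * invested;
    everything left over flows back to the Nonprofit."""
    payouts = {}
    remaining = total_profit
    for name, invested in investments.items():
        paid = min(invested * cap_multiple, remaining)  # capped claim
        payouts[name] = paid
        remaining -= paid
    payouts["nonprofit_residual"] = remaining  # "for the benefit of humanity"
    return payouts

# Hypothetical numbers:
print(distribute(
    total_profit=500_000_000_000,
    investments={"investor_a": 1_000_000_000, "investor_b": 2_000_000_000},
))
# investor_a is paid 100B, investor_b 200B, and the remaining
# 200B above the caps is returned to the nonprofit.
```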
> Microsoft's investment is in OpenAI Global, LLC, a for-profit company.
OpenAI Global LLC is a subsidiary two levels down from OpenAI, which is expressly (by the operating agreement that is the LLC's foundational document) subordinated to OpenAI’s charitable purpose, and which is completely controlled (despite the charity's indirect and less-than-complete ownership) by OpenAI GP LLC, a wholly owned subsidiary of the charity, on behalf of the OpenAI charity.
And, particularly, the OpenAI board is, as the excerpts you quote in your post expressly state, the board of the nonprofit at the top of the structure. It controls everything underneath, because each of the subordinate organizations' foundational documents gives it (well, for the two entities with outside investment, gives OpenAI GP LLC, the charity's wholly-owned and -controlled subsidiary) complete control.
well not anymore, as they cannot function as a nonprofit.
also, infamously, they fundraised as a nonprofit but later admitted they needed a for-profit structure to thrive, which Elon is miffed about and Sam has defended explicitly
> well not anymore, as they cannot function as a nonprofit.
There's been a lot of news lately, but unless I've missed something, even with the tentative agreement of a new board for the charity nonprofit, they are and plan to remain a charity nonprofit with the same nominal mission.
> also, infamously, they fundraised as a nonprofit but later admitted they needed a for-profit structure to thrive
No, they admitted they needed to sell products rather than merely take donations to survive, and that they needed to be able to return profits to investors to scale up enough to do that. So they formed a for-profit subsidiary with its own for-profit subsidiary, both controlled by another subsidiary, all subordinated to the charity nonprofit.
Once the temporary board has selected a permanent board, give it a couple of months and then get back to us. They will almost certainly choose to spin the for-profit subsidiary off as an independent company. Probably with some contractual arrangement where they commit x funding to the non-profit in exchange for IP licensing. Which is the way they should have structured this back in 2019.
"Almost certainly"? Here's a fun exercise. Over the course of, say, a year, keep track of all your predictions along these lines, and how certain you are of each. Almost certainly, expressed as a percentage, would be maybe 95%? Then see how often the predicted events occur, compared to how sure you are.
Personally I'm nowhere near 95% confident that will happen. I'd say I'm about 75% confident it won't. So I wouldn't be utterly shocked, but I would be quite surprised.
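The calibration exercise is easy to mechanize. A minimal sketch, assuming you log each prediction with its stated confidence and later record whether it came true (the sample predictions below are made up):

```python
# Minimal calibration tracker for the exercise described above: log
# predictions with a stated confidence, record outcomes, then compare
# stated confidence against the observed hit rate in each bucket.
from collections import defaultdict

predictions = [
    # (claim, stated_confidence, came_true) -- sample data, made up
    ("for-profit spun off within a year", 0.95, False),
    ("MSFT gets a board observer seat", 0.80, True),
    ("charity status abandoned", 0.75, False),
    ("another exec departure in 6 months", 0.60, True),
]

buckets = defaultdict(list)
for _, confidence, outcome in predictions:
    buckets[round(confidence, 1)].append(outcome)  # bin into ~10% buckets

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated ~{confidence:.0%}: {hit_rate:.0%} true over {len(outcomes)} predictions")
# Well-calibrated means stated confidence roughly matches the hit rate:
# "almost certainly" (~95%) claims should come true about 95% of the time.
```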
I’m pretty confident (close to the 95% level) they will abandon the public charity structure, but throughout this saga, I have been baffled by the discourse’s willingness to handwave away OpenAI’s peculiar legal structure as irrelevant to these events.
Within a few months? I don't think it should be possible to be 95% confident of that without inside info. As you said, many unexpected things have happened already. IMO that should bring the most confident predictions down to the 80-85% level at most.
A charity is a type of not-for-profit organisation; however, the main difference between a nonprofit and a charity is that a nonprofit doesn't need to reach 'charitable status', whereas a charity, to qualify as one, needs to meet very specific and strict guidelines.
> First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.
I'm not criticizing. Big fan of avoiding being taxed to fund wars... but it's just funny to me that they seem to be sort of having their cake and eating it too with this kind of structure.
There’s no indication a Microsoft-appointed board member would be a Microsoft employee (though they could be, of course), and large nonprofits often have board members who come from for-profit companies.
I don’t think the IRS cares much about this kind of thing. What would be the claim? That OpenAI is pushing benefits to Microsoft, a for-profit entity that pays taxes? Even if you assume the absolute worst, most nefarious meddling, it seems like an issue for the SEC more than the IRS.
I don't expect the government to regulate any of this aggressively. AI is much too important to the government and military to allow pesky conflicts of interest to slow down any competitive advantage we may have.
My comment here was actually meant to talk about AI broadly, though I can get the confusion here as the original source thread here is about OpenAI.
I also don't expect the government to do anything about the OpenAI situation, to be clear. Though my read is actually that the government had to be involved behind closed doors to move so quickly to get Sam back to OpenAI. Things moved much too quickly and secretively in an industry that is obviously of great interest to the military; there's no way the feds didn't put a finger on the scale to protect their interests, and having done so, they aren't going to come back in to regulate.
If you think the person you're replying to was talking about regulating OpenAI specifically and not the industry as a whole, I have ADHD medicine to sell you.
The context of the comment thread you're replying to was a response to a comment suggesting the IRS will get involved in the question of whether MS has too much influence over OpenAI; it was not about general industry regulation.
But hey, at least you fitted in a snarky line about ADHD in the comment you wrote while not having paid attention to the 3 comments above it.
I'm sorry. I was just taking the snark discussion to the next level. I thought going overboard was the only way to convey that there's no way I'm serious.
if up-the-line parent wasn't talking about regulation of AI in general, then what do you think they meant by "competitive advantage"? Also, governments have to set policy and enforce that policy. They can't (or shouldn't at least) pick and choose favorites.
Also, GP's snark was a reply to snark. Once somebody opens the snark, they should expect snark back. It's ideal for nobody to snark, and big of people not to snark back at a snarker, but snarkers gonna snark.
Others have pointed out several reasons this isn't actually a problem (and that the premise itself is incorrect since "OpenAI" is not a charity), but one thing not mentioned: even if the MS-appointed board member is a MS employee, yes they will have a fiduciary duty to the organizations under the purview of the board, but unless they are also a board member of Microsoft (extraordinarily unlikely) they have no such fiduciary duty to Microsoft itself. So in the also unlikely scenario that there is a vote that conflicts with their Microsoft duties, and in the even more unlikely scenario that they don't abstain due to that conflict, they have a legal responsibility to err on the side of OpenAI and no legal responsibility to Microsoft. Seems like a pretty easy decision to make - and abstaining is the easiest unless it's a contentious 4-4 vote and there's pressure for them to choose a side.
But all that seems a lot more like an episode of Succession and less like real life to be honest.
> and that the premise itself is incorrect since "OpenAI" is not a charity
OpenAI is a 501c3 charity nonprofit, and the OpenAI board under discussion is the board of that charity nonprofit.
OpenAI Global LLC is a for-profit subsidiary of a for-profit subsidiary of OpenAI, both of which are controlled, by the foundational agreements that give them legal existence, by a different (AFAICT not for-profit, but not legally a nonprofit) LLC subsidiary of OpenAI (OpenAI GP LLC).
It's still a conflict of interest, and one that they should avoid. Microsoft COULD appoint someone they like who shares their values but is not a MSFT employee. That would be the preferred approach, but one that I doubt a megacorp would take.
Both profit and non-profit boards have members that have potential conflicts of interest all the time. So long as it’s not too egregious no one cares, especially not the IRS.
Microsoft is going to appoint someone who benefits Microsoft. Whether a particular vote would violate fiduciary duty is subjective. There's plenty of opportunity for them to prioritize the welfare of Microsoft over OAI.
There are almost always obvious conflicts of interest. In a normal startup, VCs have a legal responsibility to act in the interest of the common shares, but in practice, they overtly act in the interest of the preferred shares that their fund holds.
If you wanted to wear a foil hat, you might think this internal fighting was started by someone connected to TPTB subverting the rest of the board to gain a board seat, and thus more power and influence over AGI.
The hush-hush nature of the board providing zero explanation for why sama was fired (and what started it) certainly doesn't pass the smell test.
nothing screams 'protect the public interest' more than Wall Street's biggest cheerleader during the 2008 financial crisis. Who's next, Richard S. Fuld Jr.? Should the Enron guys be included?
It's obvious this class of people love their status as neo-feudal lords above the law, living as 18th-century libertines behind closed doors.
But I guess people here are either waiting for wealth to trickle down on them, or believe the torrent of psychological operations so much that their minds close down when they intuit the circular, brutal nature of hierarchical class-based society, and the utter illusion that democracy or meritocracy is.
The uppermost classes have been tricksters through all of history. What happened to this knowledge and the countercultural scene in hacking? Hint: it was psyopped in the early 90's by "libertarianism" and worship of bureaucracy, to create a new class of cybernetic soldiers working for the oligarchy.
Actually I think Bill would be a pretty good candidate. Smart, mature, good at first principles reasoning, deeply understands both the tech world and the nonprofit world, is a tech person who's not socially networked with the existing SF VCs, and (if the vague unsubstantiated rumors about Sam are correct) is one of the few people left with enough social cachet to knock Sam down a peg or two.
Even if the IRS isn't a fan, what are they going to do about it? It seems like the main recourse they could pursue is they could force the OpenAI directors/Microsoft to pay an excise tax on any "excess benefit transactions".
Whenever there's an obvious conflict, assume it's not enforced or difficult to litigate or has relatively irrelevant penalties. Experts/lawyers who have a material stake in getting this right have signed off on it. Many (if not most) people with enough status to be on the board of a fortune 500 company tend to also be on non-profit boards. We can go out on a limb and suppose the mission of the nonprofit is not their top priority, and yet they continue on unscathed.
Do you remember, before Bill Gates got into disease prevention, he thought that “charity work” could be done by giving away free Microsoft products? I don’t know who sat him down and explained to him how full of shit he was, but they deserve a Nobel Peace Prize nomination.
Just because someone says they agree with a mission doesn’t mean they have their head screwed on straight. And my thesis is that the more power they have in the real world, the worse the outcomes, because powerful people become progressively immune to feedback. This has been working swimmingly for me for decades; I don’t need humility in a new situation.
> Experts/lawyers who have a material stake in getting this right have signed off on it.
How does that work when we're talking about non-profit motives? The lawyers are paid by the companies benefitting from these conflicts, so how is it at all reassuring to hear that the people who benefit from the conflict signed off on it?
> We can go out on a limb and suppose the mission of the nonprofit is not their top priority, and yet they continue on unscathed.
That's the concern. They've just replaced people who "maybe" cared about the mission statement with people who you've correctly identified care more about profit growth than the nonprofit mission.
It's useful PR pretext for their regulatory advocacy, and subjective enough that, if they are careful not to be too obvious about specifically pushing one company's commercial interest, they can probably get away with it forever. So why would it be any deader now than when Sam was CEO before and wasn't substantively guided by it?
The only evidence I have is that the board members who were removed had fewer business connections than the ones who replaced them.
The point of the board is to ensure the charter is being followed. When the biggest concern is "is our commercialization getting in the way of our charter?", what else does it mean to replace "academics" with "businesspeople"?
I don't get the drama with "conflicts of interest"... Aren't board members generally (always?) representing major shareholders? Isn't it obvious that shareholders have interests that are likely to conflict with each other, or even with the organization itself? That's why board members are supposed to check each other, right?
OpenAI is a non-profit, and the board members are not allowed to own shares in the for-profit.
That means the remaining conflicts arise when the board has to decide between growing profits and furthering the mission statement. I wouldn't trust the new board appointed by investors to ever make the correct decision in these cases, and they already kicked out the "academic" board members with the power to stop them.
The Foundation has nothing to do with MS and can't possibly be considered a competitor, acquisition target, supplier, or any other entity where a decision for the Foundation might materially harm MS (or the reverse). There's no potential conflict of interest between the missions of the two.
Did you think OP meant there was some inherent conflict of interest with charities?
> If OpenAI remains a 501(c)(3) charity, then any employee of Microsoft on the board will have a fiduciary duty to advance the mission of the charity, rather than the business needs of Microsoft. There are obvious conflicts of interest here.
Not to mention, the mission of the Board cannot be "build safe AGI" anymore. Perhaps something more consistent with expanding shareholder value and capitalism, as the events of this weekend have shown.
Delivering profits and shareholder value is the sole and dominant force in capitalism. It remains to be seen whether that is consistent with humanity's survival.