Sam Altman wins power struggle, returns to OpenAI with new board

wackazoa

Ars Scholae Palatinae
1,057
I'm not afraid of an AGI. I don't think one is possible, and certainly not via an LLM.

What really scares me is people thinking that LLMs are far more intelligent than they actually are and relying on them, with their biased training data and penchant for confabulation, to do things they aren't actually capable of doing.

And don't we already see that happening with “self-driving” cars? People who should be paying attention to the road, because the car relies on the driver to fix its mistakes, are instead reading/sleeping/watching TV/etc. and getting into accidents.

Now with AI, accidents could happen at many times that scale, and not just on the road.

All because of some charismatic talking head wanting to become something…

It's always been this way, but the stakes seem higher now.
 
Upvote
-5 (4 / -9)

wackazoa

Ars Scholae Palatinae
1,057
Welcome to the Third/Fourth Industrial Revolution. The first eliminated the need for animal power. The second eliminated the need for human power. The third/fourth eliminated the need for humans.

So only half joking…

When do I get my hover chair and bottomless soda cup?
 
Upvote
6 (6 / 0)
That was not my intent at all.

A poorly designed automobile (such as the old Ford Pinto) can cause damage with normal use due to its design. But even a well-designed automobile can cause damage if driven into a crowd of people. The manufacturer is responsible for the damage caused by the design of the product, but is not necessarily responsible for how the product is used.
Agreed. Another analogy: a brick is just a brick. It can be used to build a house or to break a window. The brick doesn't care how it is used; it all comes down to the person using the brick.
 
Upvote
11 (12 / -1)
For the past few days, the old nonprofit board members kept saying "No" to hiring Altman and resigning from the board. Microsoft didn't take "No" for an answer and kept booking meetings with them on the weekend and weekdays until they said "Yes". Individuals can tire out, but corporations can rotate in fresh, well-rested negotiators until their opponent gives in to their demands.

Helen Toner's last tweet on this was "And now, we all get some sleep", as if she's signalling that she wouldn't have been allowed to sleep until she gave up.
 
Upvote
5 (10 / -5)

wackazoa

Ars Scholae Palatinae
1,057
My guess is that the board saw that Altman was trying to usurp them and undermine the non-profit's mission, so they shot their shot, and ended up ensuring that they were cast out and the non-profit's mission was subsumed by commercial interests. I doubt they really stood much of a chance, given all the sharks circling.

So like 4 Pontius Pilates?
 
Upvote
0 (0 / 0)
What hasn't been said here is that, strangely, Sam had no equity in the company. It's very unusual for a founder to have no equity in the company he founded, and that's true whether it's a non profit or not. It's why he was so easily fired.

The structure of the organization is equally odd, with the for profit company being under the control of the non profit. Having had two companies, I'm baffled as to how that was supposed to work, as the two concepts are at odds with each other. And the employees of the for profit were under the control of the non profit as well.

The entire thing is almost set up for failure. And the idea that the billions needed were going to be raised from investors who have no chance of making a profit, or even of getting their initial investment back, is absurd. We're not talking about a political campaign where many people give a little, and a few give a lot, in the hopes of getting some nebulous payback. We're talking about a small number of supposedly sophisticated people and organizations, such as Microsoft, putting vast amounts of capital into this, who want and expect to make a lot of money from it.
Why do you think he doesn't have equity? They aren't public. Do you know something the rest of the public doesn't?
 
Upvote
3 (3 / 0)

fredrum

Ars Scholae Palatinae
817
Word on the street is that some kind of project called Q-star has made a breakthrough, and when the board learned about it they panicked and wanted to halt things. But it's too late for that now.

Can't have been that serious, as in the end they preferred to call for their own resignations from the company.

EDIT: a quote from somewhere
"[Reports from] Wednesday night detail an OpenAI model called Q* (pronounced Q Star) that was recently demonstrated internally and is capable of solving simple math problems. Doing grade school math may not seem impressive, but the reports note that, according to the researchers involved, it could be a step toward creating artificial general intelligence (AGI)."
 
Last edited:
Upvote
2 (4 / -2)

Teamsprocket

Wise, Aged Ars Veteran
197
I wonder why the vast majority of the OpenAI staff were, and are, loyal to Altman. Was it a cult of personality? Was it the promise of greater payouts under a commercially-focused enterprise vs. the cautious approach favored by the other execs?
Speculation time: Even if the board was fully justified, the method used to fire Altman was seen as a bullshit move by those most affected by it.

I'm just an outside observer, but I'm not sure I fully follow the board's reasoning either. Surely there are better ways to rein in a rogue CEO and get him to behave himself?
 
Upvote
2 (5 / -3)
Here's the thing with Elon.

He's almost our generation's Edison. He seems to act like Edison: a complete asshole. He gets credited like Edison: it was Edison's employees who did all the work.

Let's just hope that history is a bit more clear about Elon than it has been about Edison.
And both of them took advantage of Tesla.
 
Upvote
10 (11 / -1)
I would have thought a 49% ownership might be worth a board seat regardless.
In most instances it is.

But in this case I think they specifically did not ask for a board seat to emphasize OpenAI's continued independent status, and to let everybody know that Microsoft had no nefarious intentions of taking over. It was all about optics, and in this case it mostly worked at assuaging people's reservations about the deal. In the end, not being on the board did backfire on them spectacularly, though not as spectacularly as it could have.
 
Upvote
1 (3 / -2)
I don't know anything about corporate governance, but I used to enjoy history, so I've read a thing or two about coups d'état.

Following the removal of the previous leader, the coup generals typically prioritize public appearances for two purposes. First, they must articulate a clear rationale for their actions, regardless of its legitimacy. It's essential to prevent the old regime and its supporters from shaping the narrative. Second, they aim to reassure the public that despite the significant change at the top, their day-to-day lives will continue unaffected, ensuring stability and averting potential unrest.

OpenAI's previous board went radio silent, letting Altman and his supporters set the narrative in the news, which got the investors and employees riled up. An absolute disaster by the old board.
 
Upvote
18 (18 / 0)
He took enough money for fifty people to retire comfortably and bet it on two things:
  • electric cars, which were so pie-in-the-sky that there were literal conspiracy theories at the time about how the big car companies wouldn’t let anyone develop an electric car
  • reusable space launch, which the parallel efforts of all spacefaring governments and their supporting aerospace contractors had failed to achieve over the previous fifty years

Those things exist the way they do today because he made those bets. When they’d have existed otherwise is an entirely hypothetical question.
I hate that this thread got derailed into Musk territory. He's been nothing but a troll screaming for attention from the sidelines on this current Open AI issue. Thankfully his low effort attention whoring on this hasn't worked and he's been largely ignored.

HOWEVER.. he did almost single-handedly usher in the current boom in electric cars. Prior to him pushing Tesla through with both feet, the legacy car companies were LITERALLY confiscating and crushing their electric cars to protect their status quo. Without Musk, electric cars would still be strictly for early adopters and west coast tech bros looking for attention. The legacy car companies had more than a hundred years to make something happen, and they flat out refused to do it because it would have hurt their legacy business. Musk was the disruptor who forced the entire legacy auto industry to get off its ass. I just don't see how anyone with real knowledge of the situation can deny this.

It's kind of the same deal with rockets. The U.S. was relying on Russia, of all people, to take us into space. Even during their shenanigans in Crimea, when we were sanctioning them, we were STILL relying on them for manned space missions. Given current events, we would have been royally screwed regarding manned space travel if it wasn't for SpaceX.

So no, I'm not a fan of his management style... AT ALL. There has to be a better way than being a straight up a**hole to get things done and manage people. He's definitely not next level, cutting edge, or moving the world forward in that regard. But his actual results speak for themselves.

So yeah I won't be replying to any Musk replies because I want the thread back on its tracks.
 
Upvote
4 (8 / -4)
EDIT: a quote from somewhere
"Doing grade school math may not seem impressive, but the reports note that, according to the researchers involved, it could be a step toward creating artificial general intelligence (AGI)."

So someone presented a powerpoint about how potentially great their division's project could be someday, and for some reason this standard day-to-day matter made the entire board lose its mind with terror, to the point of self-immolation?

Upper management types have been known to spook easier than horses, but I'd need more information to find that plausible.
 
Upvote
0 (2 / -2)
So only half joking…

When do I get my hover chair and bottomless soda cup?

This is partially answered by this:
Why Larry Summers? Hasn't he already done enough harm to humanity? He hates the working class.

There's no cup, no chair. You don't qualify for the space luxury liner. You're Excess Labor, which doesn't exist under capitalism. No worries, you'll just retrain for an ever dwindling pool of work faster than the rate of AI deployment renders jobs obsolete. And if that doesn't work, there's always human chattel for the kinds of people who could have all the tools for utopia but still not be able to tolerate not owning people.

I strongly believe that were you to strap Summers to a chair and load him up with a cocktail of sodium thiopental and MDMA, it would take very little coaxing for him to admit that the endgame is billions of dead excess poors. To these ghouls, you're nothing but an inconvenient statistic.
 
Upvote
3 (8 / -5)

wackazoa

Ars Scholae Palatinae
1,057
This is partially answered by this:


There's no cup, no chair. You don't qualify for the space luxury liner.

You're no fun :(

You're Excess Labor, which doesn't exist under capitalism. No worries, you'll just retrain for an ever dwindling pool of work faster than the rate of AI deployment renders jobs obsolete. And if that doesn't work, there's always human chattel for the kinds of people who could have all the tools for utopia but still not be able to tolerate not owning people.

I strongly believe that were you to strap Summers to a chair and load him up with a cocktail of sodium thiopental and MDMA, it would take very little coaxing for him to admit that the endgame is billions of dead excess poors. To these ghouls, you're nothing but an inconvenient statistic.

The only issue I have with that thinking is that you can never get rid of the poor. In fact, in order for there to be 1 rich person, there need to be like 100 poor people doing the menial work for there to be the value.

I'm often reminded, when confronted with how some rich people think, of ancient Rome. From the consuls to the Caesars, they understood one thing. Qui regat plebem, romam imperat.*

*That was run through a translator, so apologies for imperfections.
 
Upvote
2 (2 / 0)
The only issue I have with that thinking is that you can never get rid of the poor. In fact, in order for there to be 1 rich person, there need to be like 100 poor people doing the menial work for there to be the value.

The endgame for the wealthy is very much to get rid of the poor, by replacing their work with a handful of machines that are completely obedient and have no rights.

Whether that means "everybody is rich now" or "the people who aren't already rich just kind of... go away," depends on your perspective.
 
Upvote
4 (7 / -3)

Derecho Imminent

Ars Legatus Legionis
16,258
Subscriptor
https://www.reuters.com/technology/...etter-board-about-ai-breakthrough-2023-11-22/

"The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences."

I normally don't give a lot of credence to anonymous sources, but this is Reuters reporting, and I give them a lot of credit as journalists.
 
Last edited:
Upvote
5 (6 / -1)

Korios

Ars Scholae Palatinae
1,470
Nadella made Microsoft look great through the process and made sure to keep access to AI development regardless of which path resulted. That might even be worth an OpenAI board position - giving MS an even better footing against other established competition looking to use AI.
Giving MS a board seat would complicate OpenAI's structure further, to the point of making it ridiculous. MS and some other companies (largely VCs) have invested in the for profit wing of OpenAI, not in its not for profit (which, I suppose, should be uninvestable). And right above said not for profit sits the board.

The most unusual (unique, perhaps) aspect of OpenAI is not that a not for profit controls a for profit company, but that external companies have minority (but not minor) stakes in the for profit wing.

Yet despite their sizable investments they have no real say in the (commercial) company's policies, since it is nominally controlled by the not for profit right above it, which is in turn controlled by the board - and the entire thing is run by the CEO.

If that not for profit went away MS would already have a board seat. But giving them a board seat while keeping the company structure intact makes no sense. A company cannot be both not for profit and for profit at the same time. And a board seat from a large commercial company would mean just that.

I think OpenAI's contradictory company structure and "mission" was a major factor in this mess. They need to decide if they want to be a not for profit foundation or commercial company. They cannot have it both ways.
 
Upvote
2 (5 / -3)

graylshaped

Ars Legatus Legionis
67,695
Subscriptor++
I wonder why the vast majority of the OpenAI staff were, and are, loyal to Altman. Was it a cult of personality? Was it the promise of greater payouts under a commercially-focused enterprise vs. the cautious approach favored by the other execs?

Has any of the OAI staff gone on the record as to their motivations for sticking with Altman?

I'm looking forward to the ColdFusion video on this episode.
I'll go with "greater payout."

Follow the money, as the saying goes.
 
Upvote
6 (7 / -1)

Mentil

Ars Scholae Palatinae
704
Shortly after Poe's variant of a marketplace rolled out, OpenAI announced that they'd be doing the same fully in-house, mooting Poe for anyone just looking to use OpenAI's back end (which is likely most of the market currently).

I do not understand why he gets to stay. There's a very clear conflict of interest there.
D'Angelo was the last board member holding out on the plan of the board resigning and Altman coming back. It wouldn't surprise me if he was doing everything in his power to burn OpenAI to the ground, and thus had no interest in going along with any plan to save it.

Even though this is a nonprofit, given the clear conflict of interest and personal motivation (assuming this rumored explanation is true) I'd have to wonder if such a thing might not lead to civil or criminal charges that could pierce the corporate veil. If his actions were arguably illegal it'd explain why no detailed explanation was publicly given.

I imagine the current plan is to instate a new board chair, who will then immediately make a motion to remove D'Angelo.
 
Upvote
-5 (0 / -5)
It's not lost on me either. Particularly egregious that they're firing the two women (apparently this whole thing started when Altman tried to fire Helen for writing an academic paper he didn't like and she fired him first) on the board and replacing them with not just two men, but one of those men being Larry "Women are genetically inferior to men at science and mathematics" Summers, also of "We need 10 million people to lose their jobs for the economy" fame. And people wonder why women in STEM struggle and get pushed out.
Helen Toner graduated from the University of Melbourne in 2014; her qualifications for being on this board are what, exactly? "That would actually be consistent with the mission" - no thanks, get lost.

Tasha McCauley is associated with the companies Fellow Robot and GeoSim Systems - good luck finding out what they are. The most remarkable thing about her is her husband, the actor Joseph Gordon-Levitt.

These people tried to fire Sam Altman? What a joke. Diversity hires who were given a cushy job and failed miserably. Good riddance. Hiring board members based on genitalia just doesn't work, whatever the presumed benefits of diversity.

"And people wonder" - people wonder why D'Angelo is still there, not why these two are out. The entire board richly deserve the booting regardless of gender.
 
Upvote
-19 (4 / -23)
So someone presented a powerpoint about how potentially great their division's project could be someday, and for some reason this standard day-to-day matter made the entire board lose its mind with terror, to the point of self-immolation?

Upper management types have been known to spook easier than horses, but I'd need more information to find that plausible.
I think it is more related to Altman talking about the discovery at the Asia-Pacific Economic Cooperation summit a day before he was fired. Unlike normal startups that hype every advancement before knowing if it will actually work, I can imagine that the board of OpenAI wanted to develop the safety aspect before telling anyone about the prospects of the project. There have also been reports about the board wanting to decide if something is AGI, instead of any researcher/CEO making the claim publicly. Since the board had already lost trust in Altman, any disagreement must have been amplified.

The board handled the situation terribly. They should have used their "soft power," as another commenter wrote.
 
Upvote
4 (4 / 0)
Let me get this straight. Sam Altman sets up a nonprofit which is charged with ending OpenAI if it thinks AI development is getting dangerous. They do, and fire him. Now he’s come back and fired the people doing the firing because he thinks AI development isn’t dangerous. We all get a long weekend of popcorn consumption followed by hoping AI development isn’t actually that dangerous despite the nonprofit board, who was supposed to be monitoring this, getting fired for thinking so.

What a bunch of unserious people. Just drop the nonprofit pretense, admit everyone involved here only cares about dying with the greatest number of zeros in their bank account and move on with unchecked capitalism as per currently accepted norms.
My bank account already has an enormous number of 0's in it, just not preceded by anything non-zero.
 
Upvote
8 (8 / 0)
I think OpenAI's contradictory company structure and "mission" was a major factor in this mess. They need to decide if they want to be a not for profit foundation or commercial company. They cannot have it both ways.
One of the major problems with the Board was not recognizing that without the commercial side, their nonprofit may be largely meaningless. I make no argument that this is a GOOD state of affairs; I merely ask people to recognize how the world works in practice, as opposed to theory.

The issue here is whether you view the goal of the Board as sticking to their principles… or accomplishing their goals. In a perfect world, these are not contradictory. But here, that's the precise question. If you are a nonprofit aimed at making something good for humanity and you place limits on the type of product you produce, then limiting only yourself causes little societal good, but you do get to stick to your morals. Look at what happened here as a perfect example: your employees want financial security as much as anyone else. If your nonprofit does the "this has gone too far, we're destroying our research" move, all that happens is those same people take their skills and general knowledge to other, explicitly commercial businesses. If you're in front, that means more people going to more companies and advancing MORE competing systems.

Blowing it up, in other words, will almost certainly lead to a worse outcome.

The product is inherently commercial. There's no getting around that. A bunch of other companies would love to hire talent away to commercialize their product. So if your goal is to actually achieve the purpose of your nonprofit, accepting the commercial nature of your product is a necessity. Their nonprofit should be zealously advocating for limitations on AI even as their company works to improve it. Why? Because if you only cripple your own product, the rest of the industry drinks to your downfall and their inevitably larger stock price.

If you want to avoid entangling yourself with capitalistic desires, that's a perfectly valid personal preference. But if you actually want to enact change, your actions must reflect that. It requires treating capitalistic intent not as the priority, but as a necessary part of the solution.
 
Upvote
3 (5 / -2)
I've only skimmed all these articles about OpenAI.. can someone please explain to me why so many employees were so loyal to Altman that they threatened to leave the company over his firing?
Simple self-interest, Microsoft meddling in the background, actual respect for the guy, or what??

This situation is so weird.. and I don't like the outcome, where an allegedly unreliable person, uncaring about AI safety and the nonprofit's "mission for humanity's benefit", has "won the power struggle".
 
Last edited:
Upvote
-2 (5 / -7)
Giving MS a board seat would complicate OpenAI's structure further, to the point of making it ridiculous. MS and some other companies -largely VCs- have invested in the for profit wing of OpenAI, not in its not for profit (which, I suppose, should be uninvestable). And right above said not for profit sits the board.

The most unusual -unique perhaps- aspect of OpenAI is not that a not for profit controls a for profit company but that external companies have minority -but not minor- stakes in the for profit wing.

Yet despite their sizable investments they have no real say in the (commercial) company's policies, since it is nominally controlled by the not for profit right above it, which is in turn controlled by the board - and the entire thing is run by the CEO.

If that not for profit went away MS would already have a board seat. But giving them a board seat while keeping the company structure intact makes no sense. A company cannot be both not for profit and for profit at the same time. And a board seat from a large commercial company would mean just that.

I think OpenAI's contradictory company structure and "mission" was a major factor in this mess. They need to decide if they want to be a not for profit foundation or commercial company. They cannot have it both ways.


The Girl Scouts of America are a non profit. They sell Girl Scout cookies for profit. This is the same thing, except these cookies are worth $90 billion.

But now that you mention it, I don't think Microsoft would be allowed to have a board seat overseeing the non profit. As of now they are an outside entity buying the cookies. If they were also selling them, it would definitely complicate things.
 
Last edited:
Upvote
2 (3 / -1)
One of the major problems of the Board is not recognizing that without the commercial side, their nonprofit may be largely meaningless. I make no argument that this is a GOOD state of affairs, merely ask people to recognize how the world works in practice, as opposed to theory.

The issue here is whether you view the goal of the Board to stick to their principles…or accomplish their goals. In a perfect world, these are not contradictory. But here, that’s the precise question. If you are a nonprofit aimed at trying to make something good for humanity and place limits on the type of product you produce, then limiting only yourself causes little societal good, but you do get to stick to your morals. Look at what happened here as a perfect example: your employees want financial security as much as anyone else. If your nonprofit does the “this has gone too far, we’re destroying our research”, all that happens is those same people take all their skills and general knowledge and take it to other, explicitly commercial businesses. If you’re in front, that means more people going to more companies and advancing MORE competing systems.

Blowing it up, in other words, will almost certainly lead to a worse outcome.

The product is inherently commercial. There's no getting around that. A bunch of other companies would love to hire talent away to commercialize their product. So if your goal is to actually achieve the purpose of your nonprofit, accepting the commercial nature of your product is a necessity. Their nonprofit should be zealously advocating for limitations to AI even as their company works to improve it. Why? Because if you only cripple your product, the rest of the industry drinks to your downfall and their inevitably larger stock price.

If you want to avoid entangling yourself with capitalistic desires, that’s a perfectly valid personal preference. If you actually want to enact change, your actions must adhere to that. That requires not capitalistic intent as a priority, but as a necessary part of the solution.

This guy gets it. God tier post.
 
Upvote
-10 (1 / -11)

graylshaped

Ars Legatus Legionis
67,695
Subscriptor++
I think it is more related to Altman talking about the discovery at the Asia-Pacific Economic Cooperation summit a day before he was fired. Unlike normal startups that hype every advancement before knowing if it will actually work, I can imagine that the board of OpenAI wanted to develop the safety aspect before telling anyone about the prospects of the project. There have also been reports about the board wanting to decide if something is AGI, instead of any researcher/CEO making the claim publicly. Since the board had already lost trust in Altman, any disagreement must have been amplified.

The board handled the situation terribly. They should have used their "soft power," as another commenter wrote.
Sometimes a board has to wield a hammer, blowback be darned. This kind of thing does not happen unless a CEO crosses a line.
 
Upvote
1 (4 / -3)

graylshaped

Ars Legatus Legionis
67,695
Subscriptor++
The Girl Scouts of America are a non profit. They sell Girl Scout cookies for profit. This is the same thing, except these cookies are worth $90 billion.

But now that you mention it, I don't think Microsoft would be allowed to have a board seat overseeing the non profit. As of now they are an outside entity buying the cookies. If they were also selling them, it would definitely complicate things.
They sell cookies to fund their organization.
 
Upvote
5 (5 / 0)
They sell cookies to fund their organization.
True. And most people buy them by the stack load because they're flipping good.

If they weren't any better than D-grade generic store brand, not many people would buy them, no matter how worthy the cause. Sure, some would. But not nearly as many as buy them now.

That is why the tricky, complicated point the poster above was making is right. Do you want to change the world with your delicious cookies? Or do you want your cookies rotting on the shelf by the truckload because nobody wants them? But hey, you stuck by your principles and only used non-fat milk and non-fat butter substitute.

But to bring it back to a more direct point: if Open AI wasn't cutting edge, nobody would care. They would be just another research team on the sidelines yelling at clouds, completely irrelevant in the grand scheme of things, with zero ability to affect anything or anyone now that the genie is already out of the bottle.
 
Upvote
2 (3 / -1)

graylshaped

Ars Legatus Legionis
67,695
Subscriptor++
True. And most people buy them by the stack load because they're flipping good.

If they weren't any better than D-grade generic store brand not many people would buy them. No matter how worthy the cause was. I mean sure some would. But not nearly as many people would buy them as they do now.

That is why the kind of tricky complicated point the poster above was trying to make was right. Do you want to change the world with your delicious cookies? Or do you want your cookies rotting on the shelf by the truckload because nobody wants them. But yeah you stuck by your principles and only used non-fat milk and non-fat butter substitute.

But to bring it back to more of a direct deal. If Open AI wasn't cutting edge nobody would care. They would be just another research team on the sideline yelling at clouds. Completely irrelevant in the grand scheme of things with zero ability to affect anything or anyone now that the genie is already out of the bottle.
I have no beef with Girl Scout cookies, other than having to run a gauntlet every year to get to the grocery store. I haven't eaten one for at least fifteen years, because I need to manage my sugar and processed carb intake. It is a me thing, and I wish them success in raising confident and empowered women. Did it with two of them myself. I am now having to learn how to raise a son to be a good person. It is proving to be a different experience.
 
Upvote
4 (4 / 0)

papito10

Ars Scholae Palatinae
837
Please explain how Altman has done anything “for the benefit of humanity.”

Because all I've seen is a guy wanting to push far faster than sensible controls can evolve, resulting in dangerous nonsense being ubiquitous on the internet.
Excuse me. Sam Altman went to Stanford! He wrote some code once, and his failed startup was sold for ... $40M.

This darling of VC and libertarian fanboys clearly knows what he is doing.
 
Upvote
-2 (3 / -5)
I think the threat of a complete exodus of the staff (effectively killing OpenAI as an entity at all), plus the threats of multiple investors (including Microsoft), were taken pretty seriously.

Which is rather surprising. Usually from what I've seen, Boards of Directors have a "know your place" attitude.
Usually boards of directors have some skin in the game, so that they suffer some personal loss if the company goes under. With OpenAI this was not the case. If OpenAI went bust, none of the old board would lose any sort of asset (work or money).
 
Upvote
4 (4 / 0)