Sam Altman wins power struggle, returns to OpenAI with new board

elin

Smack-Fu Master, in training
72
It seems clear that he lied to the board about something. Maybe it was about getting investments from the Saudis or others. Maybe it was about personal benefits from the msft deal. Whatever it was, the employees don't know about it.

What was incompetent is that there was seemingly no legal advice and no communication strategy in place. When making a move this important, they should have been ready with communication to the employees and key outsiders as well as the press. And even when you fire a CEO there is discussion between the lawyers.

Board members are really not allowed to make individual statements without permission. Part of their plan should have been to have a single spokesperson to answer questions. I don't think their structure per se is the issue, it is the implementation of the structure.
 
Upvote
18 (23 / -5)
So what the fuck is going on here? Was Altman not lying as was initially alleged? What's with this?

https://bsky.app/profile/maxkennerly.bsky.social/post/3kersyj74m22k
Did I miss something? The board did have a good reason? But they messed up and now they're out and he's back. But... is that bad?

I get that he had support within the company but that doesn't mean he wasn't doing something shady.
I wouldn't consider working on AI chips to be competing with OpenAI's business or a significant conflict of interest. In truth, considering how much noise they've made about not being able to get enough chips, it might be complementary. I've learned my lesson about ascribing positive motivations to billionaires, but this action seems very reasonable and above-board. It's hardly like it was a secret; the NYT was reporting on that venture weeks ago.
 
Upvote
7 (11 / -4)
"Never ascribe to malice that which is adequately explained by incompetence"

...but I STILL think this whole thing was an elaborate scheme by Satya Nadella, and the OAI board fell right into his trap.
Haha, if we're going down the conspiracy-theory route, maybe Elon Musk somehow influenced the board to let go of Altman. Musk and Altman already had a falling-out, and throwing OpenAI into chaos would give Grok AI a chance to rise.
 
Upvote
12 (14 / -2)
This whole OpenAI mess is: Referees versus players.

The refs called foul and both the player and manager threatened the referee organization and won.

Now the uncompromised refs are out and there’ll be new pro-player refs installed.

The only winners are the players. MS and Altman in this case.
And if it turns out the referees were trying to ensure the safety of the fans, then your last line is especially true. A lot of people here seem to be neglecting this point.
 
Upvote
0 (7 / -7)

JuniorTempest

Wise, Aged Ars Veteran
171
Subscriptor++
From what I've read, and given that OpenAI was organized NOT as a corporation but as a non-profit, I'd guess that what caused the board to fire Altman was his gung-ho attempts to open the throttle on monetizing the for-profit subsidiary (created so the organization could make profits at all), which ran contrary to OpenAI's actual mission: creating an AGI (Artificial General Intelligence) that would think and reason as a human would, only "better".

...

My guess is the "loss of confidence in his communications with the board" arose because he didn't make it clear enough that the primary purpose of partnering with Microsoft was to enrich investors, rather than simply fund the AGI development.
The thing is, if that were true, all the board had to do was release a statement along these lines: "Despite trying for months to align Altman's efforts with the founding goals of OpenAI, the two sides have not been able to resolve the issue satisfactorily, and the OpenAI board has concluded that Altman is not the right person to lead the effort to develop AI according to OpenAI's mission: ensuring AI benefits humanity rather than threatening it."

Instead, the board mostly remained silent. It has since come out that there was no malfeasance, only a lack of candor in Altman's communications with the board. The lack of candor? Assigning the same project to two different people, and expressing two different opinions about the same team member to two different board members. Not his AI chip effort, not the relationship with Microsoft.

Those are ridiculous and unconvincing reasons for firing Altman and that's why the board's actions remain highly suspect.
 
Upvote
20 (22 / -2)

Alexstarfire

Ars Scholae Palatinae
721
Ah, the "guns don't kill people" defense.
I've never liked the comparison. At face value it's true. A gun doesn't spontaneously kill people. Something, almost always human, uses the gun to kill another person. But, most guns are designed to kill targets, human or otherwise. It's quite different from something like a car, whose purpose is transportation but can easily be used to kill people.

That said, my main problem with guns is who is allowed to have them, not that they exist. Too many irresponsible and violent people are able to obtain guns. You can't give someone gasoline and matches then be shocked when they start a fire.

Anyway, where does AI fit into this analogy? It's just another tool humans made. It's no more capable of being good or bad than a rock. Do the pros outweigh the cons? Have no idea. I don't know how bad the cons are. But knowing humans, they'll be pretty bad.
 
Upvote
20 (21 / -1)
This structure has worked for decades at Mozilla, who pioneered the approach, IIRC. It's just a bad fit for taking growth-oriented venture money into the for-profit subsidiary, which is what happened in this case.
I don't think it's a bad model at all!

Execution matters, and in this case the board:
1) Appears not to have warned or worked with Altman on their concerns about his honesty
2) Most certainly did not warn or work with customers/investors/partners in any way, shape, or form
3) Did not have a plan for the day after
4) Did not anticipate or have a plan for the predictable backlash
5) Apparently did not consult outside parties and/or were making decisions emotionally, which makes them unserious as board members (see: reports that Sutskever, who pushed out Altman, later had a change of heart after Brockman's wife spent hours talking with him and crying)

None of these would have been automatically ameliorated if the company was profit-driven. I've personally seen for-profit boards behave in irrational ways antithetical to their own monetary interests. People really are stupid and irrational, even (perhaps especially) at the board level.

This was a problem of a dysfunctional board stuffed with people poorly suited to their roles.

Personal take on the outcome:
We definitely avoided the worst possibilities, which included Microsoft wholesale owning OpenAI (why is this a bad option? see reports about Microsoft AI safety folks getting sidelined and focusing on getting Bing Chat out the door before OpenAI could get GPT-4 released publicly, while having a strictly profit motive). We also avoided this clown-car of a board continuing to destroy OpenAI in the name of "safety" which really just pushed their employees into the hands of competitors like Microsoft, Facebook, and Salesforce. I have valid concerns about Altman coming out of this, and I think it's bad that he "won" the power play since I do believe that he needs a forceful, empowered check on his own tendencies. However, all told, things could have been far worse, even if they could also have been better.
 
Upvote
15 (17 / -2)

DataByne

Smack-Fu Master, in training
45
Subscriptor++
That was not my intent at all.

A poorly designed automobile (such as the old Ford Pinto) can cause damage with normal use due to its design. But even a well-designed automobile can cause damage if driven into a crowd of people. The manufacturer is responsible for the damage caused by the design of the product, but is not necessarily responsible for how the product is used.
I see this as varying levels of moral culpability. The gun example is an extreme comparison, given that guns are products purposely designed to inflict harm. While a given manufacturer may not be as culpable (in an ethical sense) as whoever pulls the trigger, they do assume a degree of moral culpability by nature of their harm-inducing product being used to inflict harm. That the recipient of that harm would be a sapient being was a foreseeable and likely outcome (and for many styles an intended one). That knowledge and intent is an important factor in their ethical responsibility.

For the car example I mostly agree. I would add that the manufacturer has an ethical responsibility to take steps to prevent their product from being well suited to an immoral use, and to lessen the harm their product may inadvertently cause to third parties when used as intended. As an example, I would argue that the high profile and mass of modern consumer trucks and SUVs (pick-ups being the significant examples) make indirect harm from collisions with pedestrians (especially children) a foreseeable result. That this design would also appeal to a user desiring to cause harm in the described manner is further grounds for manufacturers bearing some responsibility for not having taken steps to lessen the potential harm their product might cause.

In my view, the degree and likelihood of potential harm confer a proportional amount of moral responsibility, but we must also take into account the scale at which these products are released into the world. A boutique automotive manufacturer releasing 5,000 trucks a year with a design that inadvertently makes them more lethal bears a lesser responsibility than a GM-sized company. In this fashion we can understand that a manufacturer releasing a product with a known harm, deemed acceptable because its impact on any individual is minimal, is actually culpable of a greater ethical harm when we consider the broader context and cumulative impact (e.g. plastics, climate change impacts, and so on).

The A.I. technology that OpenAI designs and controls is not a product purpose made to induce harm, so I would label the comparison to guns as being a poor analogy at best and reductively labeling the original comment by @whiteknave as the "...'guns don't kill people' defense" is unconstructive and patently hostile to discussion given the baggage that phrase carries.

That said, OpenAI has a greater ethical responsibility than a car manufacturer, and awareness of this fact is the whole reason its corporate organization is so convoluted. They viewed their ethical responsibility to be at odds with the pressures of the free market and sought some means of designing a release valve that would prevent matters from exploding out of control. Unfortunately for them, the man hired to realize the release of A.I. technology as a product to finance further research was both at odds with the ethics-grounded restraint ethos and highly capable as the helmsman of a for-profit enterprise. (95% of your employees signaling that they would relocate with you upon your ouster is a rather ringing endorsement of your ability to build and manage a team.)

I strongly agree with arguments that Altman's tenure has been morally reckless. There are foreseeable and likely negative outcomes in the application of their products for which too few safeguards are in place (let alone a broader cultural shift that would better prepare third parties for the new reality). ChatGPT is certainly no Skynet, but just because most of the direct and indirect potential harms are not as lethal as a genocidal super-intelligence or a gun does not mean there is no ethical burden to consider them. Among the reasons Google had been so hesitant to widely release the fruits of their own LLM research before ChatGPT were the unconsidered risks and harms that such technologies might unleash. (I would like to emphasize the "among the reasons" part, as it is doing some heavy lifting for Google's corporate entrenchment, stagnation, and general creative risk aversion.)

Moving fast and breaking things can be an effective mantra for innovation, as you are willing to set aside old orthodoxies that may be occluding more promising avenues. However, earlier I mentioned the importance of scale when considering moral culpability. The technologies OpenAI innovated were not selectively released through profitable B2B contracts whose scope could feasibly be restrained, even while structural mitigations on OpenAI's part had yet to be implemented: they were released to the entire online world in an amazingly brilliant and successful marketing move. With this immense pool of potential users comes a dramatically increased likelihood of unintended harms being realized. Altman paid lip service to this by spreading some pleasingly warm air about dangers and the need for regulation, but he never embraced his own moral responsibility, as OpenAI's CEO, for the new urgency of that need, nor took concrete steps to bring order to a new frontier (as the public sees it) by using their position as de facto market leader to build in a curtailment of negative use cases.

Media has weathered seismic changes like those enabled by LLMs before, so I am not fearful that we cannot weather the coming storm. I merely wish we would learn from the past and brace the windows, doors, and roof, so that in our haste to innovate we do not needlessly break what we cherish and depend on for our well-being and safety.


Edit: TLDR there is an ethical responsibility to consider how our designs impact the world, even if we are not directly responsible for those designs' misuse.

I may have let an opinion I have been pondering verbosely impose itself into my comment...
 
Last edited:
Upvote
14 (18 / -4)
Why was everyone willing to quit for this one guy?
I think despite the optics, it probably wasn't the primary motivation behind everyone threatening to quit. Re-reading the open letter, it's mostly about how the board went about this whole debacle: acting secretly, abruptly, and vaguely, not answering questions, not engaging with the larger company leadership whatsoever after Altman was fired, etc. Then the board apparently stated that killing the company would be consistent with its mission, followed by Microsoft announcing they were going to hire Altman and Brockman to form a new AI department. Now instead of partnering with Microsoft, you're going to be competing with them. Your company is effectively dead, your job is dead, everything you've been working on is dead, and you're under a board willing to put you in such a situation with no notice or explanation. You, me, we'd all probably quit.
 
Upvote
23 (25 / -2)

DataByne

Smack-Fu Master, in training
45
Subscriptor++
I thought he acted rather classily through this whole thing. At least, his statements on social media were polite and not the insane fragile ego stuff we're used to seeing from certain people who've become big through social in recent times.

So, he's gone up in my estimations. That is, I still regard him as a terrifyingly extreme capitalist, but a polite one, at least.
So maybe his neglect of capitals in his statements was an intentional act of misdirection, masking just how extreme he is? 🤣
 
Upvote
-6 (0 / -6)

fredrum

Ars Scholae Palatinae
817
It seems clear that he lied to the board about something. Maybe it was about getting investments from the Saudis or others. Maybe it was about personal benefits from the msft deal. Whatever it was, the employees don't know about it.

What was incompetent is that there was seemingly no legal advice and no communication strategy in place. When making a move this important, they should have been ready with communication to the employees and key outsiders as well as the press. And even when you fire a CEO there is discussion between the lawyers.

Board members are really not allowed to make individual statements without permission. Part of their plan should have been to have a single spokesperson to answer questions. I don't think their structure per se is the issue, it is the implementation of the structure.


Well it sounds like Sam had a handful of 'his own' projects on the side that were quite possibly intertwined with OpenAI's work.

For example, that Nvidia-competing AI chip venture. He might have offered to be a backstop buyer of the chips, seeing as OpenAI was hugely in need of compute. He could have said 'if we do this, I'll make sure that OpenAI will use these'. And 'we will optimise the code for these chips to make them look good'.

He also had some 'AI assistant' venture, right? Sounds very likely that would be built on top of GPT technology.

etc etc

So he'd make money on side projects that would have leveraged the non-profit company.

Super shady if you ask me.
 
Upvote
12 (17 / -5)

stk5

Ars Scholae Palatinae
982
Subscriptor++
Was Altman not taught how to use proper capitalization? In his statement in this article and previous ones in this saga he hasn’t demonstrated that basic capability. It seems like the CEO of any company should have the basic competency to capitalize “I”.
It's purely an affectation. Every phone I know of makes you go out of your way to turn that kind of auto-capitalization off.
 
Upvote
24 (24 / 0)

morlamweb

Ars Scholae Palatinae
1,425
I think the threat of a complete exodus of the staff (effectively killing OpenAI as an entity at all), plus the threats of multiple investors (including Microsoft), were taken pretty seriously.

Which is rather surprising. Usually from what I've seen, Boards of Directors have a "know your place" attitude.
I wonder why the vast majority of the OpenAI staff were, and are, loyal to Altman. Was it a cult of personality? Was it the promise of greater payouts under a commercially-focused enterprise vs. the cautious approach favored by the other execs?

Have any of the OAI staff gone on the record as to their motivations for sticking with Altman?

I'm looking forward to the ColdFusion video on this episode.
 
Upvote
-2 (3 / -5)
Why was everyone willing to quit for this one guy?

They were willing to threaten to quit. It is unclear how many actually would have quit. The most senior individuals were looking at losing a 10-million-dollar exit. Less senior individuals were contacted and encouraged to sign by senior individuals (if your boss asks you to sign a public document, can see whether you signed, and stands to lose a lot of money if you don't, with retaliation a real possibility, you are probably going to sign...)
 
Upvote
7 (7 / 0)
To be clear, I don't think they're going to produce AGI. Our current AI approach is not going to automagically evolve to become greater than the sum of its parts. There's no mechanism there for sentience or sapience.
You’re probably/very likely right but while LLMs probably won’t be an AGI I think there may be a small chance they could be used to build one.

One thing LLMs have turned out to be fairly good at is breaking down problems into steps. It's going to be interesting to see what's possible once we have a sufficiently advanced way to act upon those steps. If we have an "idea to action" machine, who's to say asking it to "build me an AGI" won't work? Couldn't it recursively break that down, and down, and act to build something novel?

I can foresee a retort: that LLMs are not rational and therefore can only "autocomplete" steps rather than devise them. But I would point out that all the content on the internet these models were trained on is post-rational; there is rationality in the content. If you autocomplete rationality, do you get rationality? I suspect we're not long from finding out.
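
To make the recursive idea concrete, here's a minimal sketch of the "break it down and act" loop I have in mind, in Python. Everything here is hypothetical: llm() stands in for whatever model API you'd actually call, and is_primitive() and execute() are made-up helpers. This is just the control flow, not a working agent.

def llm(prompt: str) -> str:
    """Placeholder for a call to some real language-model API."""
    raise NotImplementedError("wire up an actual model here")

def is_primitive(task: str) -> bool:
    # Hypothetical check: ask the model whether the task is directly actionable.
    answer = llm(f"Can this be done in one concrete action? Answer yes or no: {task}")
    return answer.strip().lower().startswith("yes")

def execute(task: str) -> str:
    # Hypothetical actuator: carry out one concrete step and report the outcome.
    return llm(f"Perform this step and describe the outcome: {task}")

def decompose_and_act(task: str, depth: int = 0, max_depth: int = 5) -> list[str]:
    """Recursively split a task into sub-steps until each one is actionable."""
    if depth >= max_depth or is_primitive(task):
        return [execute(task)]
    # Have the model split the task into smaller steps, one per line.
    plan = llm(f"Break this task into 2-5 smaller steps, one per line: {task}")
    results: list[str] = []
    for step in plan.splitlines():
        if step.strip():
            results.extend(decompose_and_act(step.strip(), depth + 1, max_depth))
    return results

Whether autocompleted plans like this ever bottom out in genuinely novel action is exactly the open question.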
 
Upvote
-4 (3 / -7)
I still want to know why the board thought they could fire him with a vague excuse and not face repercussions. If he was doing something wrong they should’ve spelled it out. If not, how did they expect things to go?

They legally cannot say anything specific without opening themselves up to massive liability, and their directors' insurance would probably not cover them if they were to make a public statement. If you could be sued for everything you own and lose all legal protections, I don't think you would comment either.
 
Upvote
-1 (2 / -3)

Nilt

Ars Legatus Legionis
21,810
Subscriptor++
So what the fuck is going on here? Was Altman not lying as was initially alleged? What's with this?
What happened is there's too much money to be made. Add in that the board is a bunch of cowards who didn't have the guts to go with what their morals told them was the right thing to do. I suspect the latter is because they'd also like to keep having cushy gigs where they get paid a bunch of money to play with the things they like to play with. The problem is the for-profit entity is far too valuable to be allowed to fail before others can extract their share of the money. So everyone basically forced the issue by effectively blackmailing the board: total irrelevance, and no chance of future employment at any other tech company run by techbros, if they didn't get the fuck back in line on the money train that is planned to arrive at the station real soon now.

As far as what actually happened to kick this all off in the first place? I doubt we'll ever really know now that the asshat is back in charge and the board members have capitulated.
 
Upvote
-3 (6 / -9)
I think you're wrong on that last one. The board was for the non-profit; their goals weren't profit-related. They were supposed to ensure that the development of AI was done in an ethical way that bettered society as a whole. With the ever-increasing erosion in trust in their subsidiary's management, they decided the safest thing to do was pull the plug. However, as you point out, there's been no public presentation of WHY that choice was made precisely then, or what, if any, concrete reasoning went into that decision.
Sure, but the board members' personal livelihoods are decoupled from their decisions here. Employees work for a business the nonprofit created. These employees aren't floating on piles of personal cash like the board members. Employees still have to eat... and maybe even save up a bit for retirement. Are you saying they are wrong for that?

And it's actually worse than my glib memery. When challenged on the "???" part, the board looked everyone in the face and told them the business could fuck off and die.

If you tell your entire crew, "Actually, running your paycheck and everything you have built into a brick wall IS our business strategy!", how many of them are going to want to come in to work? That's just some dumb-ass management. If the business is important to the mission of the non-profit, I'd argue it's incompetently dumb-ass management.
 
Upvote
5 (12 / -7)
And it's actually worse than my glib memery. When challenged on the "???" part, the board looked everyone in the face and told them the business could fuck off and die.

If you tell your entire crew, "Actually, running your paycheck and everything you have built into a brick wall IS our business strategy!"

That's exactly what their whole purpose was. Everyone who worked at OpenAI was surely aware of this. They probably didn't think it would ever happen, but it was very much out there in the open: if it looks like this shit is getting dangerous, we're shutting it down. Full stop.

There was no actual mechanism in place and nobody expected to need one, until they did. This is what happens when Security Theater is applied to corporate governance. Everyone knows the emergency doors are just painted on, but what's the likelihood of a fire breaking out, anyway? The investors are mollified with the mere appearance of safety, it'll be fine...
 
Upvote
6 (13 / -7)

fenris_uy

Ars Tribunus Angusticlavius
9,088
I think despite the optics, it probably wasn't the primary motivation behind everyone threatening to quit. Re-reading the open letter, it's mostly about how the board went about this whole debacle: acting secretly, abruptly, and vaguely, not answering questions, not engaging with the larger company leadership whatsoever after Altman was fired, etc. Then the board apparently stated that killing the company would be consistent with its mission, followed by Microsoft announcing they were going to hire Altman and Brockman to form a new AI department. Now instead of partnering with Microsoft, you're going to be competing with them. Your company is effectively dead, your job is dead, everything you've been working on is dead, and you're under a board willing to put you in such a situation with no notice or explanation. You, me, we'd all probably quit.

Also, MS probably did offer everybody at OpenAI new jobs at MS. So, you can compete with MS under a board that doesn't want to make money, or you can be with MS.
 
Upvote
9 (9 / 0)

eas

Ars Scholae Palatinae
1,310
My guess is that the board saw that Altman was trying to usurp them and undermine the non-profit's mission, so they shot their shot, and ended up ensuring that they were cast out and the non-profit's mission was subsumed by commercial interests. I doubt they really stood much of a chance, given all the sharks circling.
 
Upvote
5 (7 / -2)

jesse1

Ars Scholae Palatinae
948
My guess is that the board saw that Altman was trying to usurp them and undermine the non-profit's mission, so they shot their shot, and ended up ensuring that they were cast out and the non-profit's mission was subsumed by commercial interests. I doubt they really stood much of a chance, given all the sharks circling.
Apparently Sam tried to oust the non-profit types from the board first, but failed to get Ilya's vote.
 
Upvote
5 (7 / -2)
That's exactly what their whole purpose was. Everyone who worked at OpenAI was surely aware of this. They probably didn't think it would ever happen, but it was very much out there in the open: if it looks like this shit is getting dangerous, we're shutting it down. Full stop.

There was no actual mechanism in place and nobody expected to need one, until they did. This is what happens when Security Theater is applied to corporate governance. Everyone knows the emergency doors are just painted on, but what's the likelihood of a fire breaking out, anyway? The investors are mollified with the mere appearance of safety, it'll be fine...
I'm not super up on the lingo and I'm in a little bit of a hurry so if this sounds a bit disjointed, I apologize to anyone unfortunate enough to read it.

I think that this whole episode really sort of exposes the reality behind this concept of ethical governance or keeping humanity safe or whatever the buzzwords are. It turns out that having the power to pull the plug on something doesn't mean a lot when there are billions of dollars at stake, an utterly minuscule amount of which could be used to destroy the lives of anyone who puts that wealth in any sort of jeopardy.

Ultimately, I'm of several minds about this. A company like Mozilla can have an ethical governance board because... look, I like Firefox, but it's a browser. I've used it preferentially since forever. But if it shut down tomorrow, I could still use Safari, or Brave (which I already use on iOS anyway, such as it is), or Vivaldi, or whatever. It would suck a lot, because aside from Safari in its various forms, Firefox is the only real bulwark against Chromium dominance. But it wouldn't result in nuclear bombs vaporizing metropolises.

I actually am very concerned about ML. I don't think it's overmuch to say that, at the very least, the possibility exists that it could become an existential threat to humanity. I think the idea of safeguards is a noble one. I just don't believe that, in the world we inhabit today, there's much of anything that can be done to postpone the inevitable. Even if EMPs fried every computer in existence today, enough people and enough written technical material would survive to ensure that integrated circuits would be reinvented in short order.

It's like warrant canaries...it's all fun and games until the three-letter agency making the demand makes it clear that they can kill your entire family, make it look like an accident, and then lean on the local constabulary to write the whole thing off rather than carry out an actual investigation.

So, I mean, in conclusion, I guess...RIP humanity? 🤷‍♂️ We had an okay run, I guess...
 
Upvote
6 (10 / -4)
This structure has worked for decades at Mozilla

 
Upvote
-14 (0 / -14)
That's exactly what their whole purpose was. Everyone who worked at OpenAI was surely aware of this. They probably didn't think it would ever happen, but it was very much out there in the open: if it looks like this shit is getting dangerous, we're shutting it down. Full stop.

There was no actual mechanism in place and nobody expected to need one, until they did. This is what happens when Security Theater is applied to corporate governance. Everyone knows the emergency doors are just painted on, but what's the likelihood of a fire breaking out, anyway? The investors are mollified with the mere appearance of safety, it'll be fine...
No. It's not. The purpose of OpenAI's nonprofit is to achieve AGI and establish control over it so it works for the benefit of humanity. The purpose of the board was to guide OpenAI through achieving this. Unless they change the charter, it still is, actually.

Employees of the for-profit entity were promised a salary and job stability in exchange for achieving these goals on behalf of the board. Investors were promised up to 100x return on their investment for helping OpenAI achieve this goal. At these very early stages on the road to AGI as OpenAI's board defined it, why would anyone involved expect them to freak the hell out and just pull the plug on everything?

The idea that nuking their moon-shot program in the cradle over an executive dispute somehow advances the nonprofit's charter mission makes exactly zero sense. Everyone spouting all this nonsense about how this board is supposed to "keep humanity safe" and then superimposing their own idea of what that means on top of it very much hasn't read the fine print.
 
Upvote
6 (12 / -6)