> So what the fuck is going on here? Was Altman not lying, as was initially alleged? What's with this?
> https://bsky.app/profile/maxkennerly.bsky.social/post/3kersyj74m22k
> Did I miss something? The board did have a good reason? But they messed up, and now they're out and he's back. But... is that bad?
> I get that he had support within the company, but that doesn't mean he wasn't doing something shady.

I wouldn't consider working on AI chips to be competing with OpenAI's business, or a significant conflict of interest. In truth, considering how much noise they've made about not being able to get enough chips, it might be complementary. I've learned my lesson about ascribing positive motivations to billionaires, but this action seems very reasonable and above board. It's hardly like it was a secret; the NYT was reporting on that venture weeks ago.
> "Never ascribe to malice that which is adequately explained by incompetence"
> ...but I STILL think this whole thing was an elaborate scheme by Satya Nadella, and the OAI board fell right into his trap.

Haha, if we're going down the conspiracy-theory route, maybe Elon Musk somehow influenced the board to let go of Altman. Musk and Altman already had a falling out, and throwing OpenAI into chaos would give Grok AI a chance to rise.
> This whole OpenAI mess is: referees versus players.
> The refs called foul, and both the player and manager threatened the referee organization and won.
> Now the uncompromised refs are out and there'll be new pro-player refs installed.
> The only winners are the players: MS and Altman, in this case.

And if it turns out the referees were trying to ensure the safety of the fans, then your last line is especially true. A lot of people here seem to be neglecting this point.
> Why was everyone willing to quit for this one guy?

Short answer: $
> From what I've read, and based on the fact that OpenAI was NOT organized as a corporation but as a non-profit, I'd guess that Altman's gung-ho attempts to open the throttle on monetizing the corporate subsidiary (created to be able to make profits) ran contrary to OpenAI's actual mission - which was to create an AGI (Artificial General Intelligence) that would think and reason as a human would, only "better" - and that is what caused the board to fire him.
> ...
> My guess is the "loss of confidence in his communications with the board" was because he didn't make it clear enough that the primary point of partnering with Microsoft was to enrich investors, rather than simply to fund the AGI development.

The thing is, if that were true, all the board had to do was release a statement that says something like this: "Despite trying for months to work with Altman to align his efforts with the founding goals of OpenAI, the two sides have not been able to resolve the issue satisfactorily, and the OpenAI board has concluded that Altman is not the right person to lead the effort to develop AI according to OpenAI's mission: to ensure AI benefits humanity rather than threatening it."
> Ah, the "guns don't kill people" defense.

I've never liked the comparison. At face value it's true: a gun doesn't spontaneously kill people; something, almost always a human, uses the gun to kill another person. But most guns are designed to kill targets, human or otherwise. That's quite different from something like a car, whose purpose is transportation but which can easily be used to kill people.
> This structure has worked for decades at Mozilla, which pioneered the approach, IIRC. It's just a bad fit for taking growth-oriented venture money into the for-profit subsidiary, which is what happened in this case.

I don't think it's a bad model at all!
> That was not my intent at all.

I see this as varying levels of moral culpability. The gun example is an extreme comparison, given that guns are products purposely designed to inflict harm. While a given manufacturer may not be as specifically culpable (in an ethical sense) as whoever pulls the trigger, they do assume a degree of moral culpability by the nature of their harm-inducing product being used to inflict harm. That the recipient of that harm would be a sapient being was a foreseeable and likely outcome (and, for many styles, an intended one). That knowledge and intent is an important factor in their ethical responsibility.
A poorly designed automobile (such as the old Ford Pinto) can cause damage with normal use due to its design. But even a well-designed automobile can cause damage if driven into a crowd of people. The manufacturer is responsible for the damage caused by the design of the product, but is not necessarily responsible for how the product is used.
> Why was everyone willing to quit for this one guy?

I think, despite the optics, he probably wasn't the primary motivation behind everyone threatening to quit. Re-reading the open letter, it's mostly about how the board went about this whole debacle: acting secretly, abruptly, and vaguely, not answering questions, not engaging with the larger company leadership whatsoever after Altman was fired, etc. Add to that the board apparently stating that killing the company would be consistent with its mission, followed by Microsoft announcing they were going to hire Altman and Brockman to form a new AI department. Now, instead of partnering with Microsoft, you're going to be competing with them. Your company is effectively dead, your job is dead, everything you've been working on is dead, and you're under a board willing to put you in such a situation with no notice or explanation. You, me, we'd all probably quit.
> I thought he acted rather classily through this whole thing. At least, his statements on social media were polite and not the insane fragile-ego stuff we're used to seeing from certain people who've become big through social media in recent times. So, he's gone up in my estimation. That is, I still regard him as a terrifyingly extreme capitalist, but a polite one, at least.

So maybe his neglect of capitals in his statements was an intentional act of misdirection, masking just how extreme he is, then?
It seems clear that he lied to the board about something. Maybe it was about getting investments from the Saudis or others. Maybe it was about personal benefits from the MSFT deal. Whatever it was, the employees don't know about it.
What was incompetent is that there was seemingly no legal advice and no communication strategy in place. When making a move this important, they should have been ready with communication to the employees and key outsiders, as well as the press. And even when you fire a CEO, there is discussion between the lawyers.
Board members are really not allowed to make individual statements without permission. Part of their plan should have been to have a single spokesperson to answer questions. I don't think their structure per se is the issue; it is the implementation of the structure.
> Was Altman not taught how to use proper capitalization? In his statement in this article, and in previous ones in this saga, he hasn't demonstrated that basic capability. It seems like the CEO of any company should have the basic competency to capitalize "I".

It's purely an affectation. Every phone I know of makes you go out of your way to turn that kind of thing off.
Perhaps women are more likely to put ethics over profit than men are?
> Interesting.

A glance at his work history tells us he will cash out in a year or so and then move on to the next squirrel.
I guess in a few years we'll know whether the board was right.
> I think the threat of a complete exodus of the staff (effectively killing OpenAI as an entity), plus the threats of multiple investors (including Microsoft), were taken pretty seriously.

I wonder why the vast majority of the OpenAI staff were, and are, loyal to Altman. Was it a cult of personality? Was it the promise of greater payouts under a commercially focused enterprise vs. the cautious approach favored by the other execs?
Which is rather surprising. Usually, from what I've seen, Boards of Directors have a "know your place" attitude.
> To be clear, I don't think they're going to produce AGI. Our current AI approach is not going to automagically evolve into something greater than the sum of its parts. There's no mechanism there for sentience or sapience.

You're probably right, but while LLMs probably won't be an AGI themselves, I think there's a small chance they could be used to build one.
I still want to know why the board thought they could fire him with a vague excuse and not face repercussions. If he was doing something wrong they should’ve spelled it out. If not, how did they expect things to go?
> So what the fuck is going on here? Was Altman not lying, as was initially alleged? What's with this?

What happened is there's too much money to be made. Add in that the board is a bunch of cowards who didn't have the guts to go with what their morals told them was the right thing to do. I suspect the latter is because they'd also like to keep having cushy gigs where they get paid a bunch of money to play with the things they like to play with. The problem is the for-profit entity is far too valuable to be allowed to fail before others can extract their share of the money, so they basically all forced the issue, effectively blackmailing the board with the threat of total irrelevance and no chance of future employment at any other tech company run by techbros if they didn't get the fuck back in line on the money train that is planned to arrive at the station real soon now.
> I think you're wrong on that last one. The board was for the non-profit; their goals weren't profit-related. They were supposed to ensure that the development of AI was done in an ethical way that bettered society as a whole. With the ever-increasing erosion of trust in their subsidiary's management, they decided the safest thing to do was pull the plug. However, as you point out, there's been no public presentation of WHY that choice was made precisely then, or what, if any, concrete reasoning went into that decision.

Sure, but the board members' personal livelihoods aren't tied to their decisions here. Employees work for a business the nonprofit created. These employees aren't floating on piles of personal cash like the board members. Employees still have to eat ... and maybe even save up a bit for retirement. Are you saying they are wrong for that?
> What happens to Sutskever?

He apparently signed the open letter demanding his own resignation, so I'm taking that to mean he's resigning.
And it's actually worse than my glib memery. When challenged on the "???" part, the board looked everyone in the face and told them the business could fuck off and die.
If you tell your entire crew, "Actually, running your paycheck and everything you have built into a brick wall IS our business strategy!", you shouldn't be surprised when they all head for the door.
> My guess is that the board saw that Altman was trying to usurp them and undermine the non-profit's mission, so they shot their shot and ended up ensuring that they were cast out and the non-profit's mission was subsumed by commercial interests. I doubt they really stood much of a chance, given all the sharks circling.

Apparently Sam tried to oust the non-profit types from the board first, but failed to get Ilya's vote.
I'm not super up on the lingo and I'm in a little bit of a hurry, so if this sounds a bit disjointed, I apologize to anyone unfortunate enough to read it. That's exactly what their whole purpose was. Everyone who worked at OpenAI was surely aware of this. They probably didn't think it would ever happen, but it was very much out there in the open: if it looks like this shit is getting dangerous, we're shutting it down. Full stop.
There was no actual mechanism in place and nobody expected to need one, until they did. This is what happens when Security Theater is applied to corporate governance. Everyone knows the emergency doors are just painted on, but what's the likelihood of a fire breaking out, anyway? The investors are mollified with the mere appearance of safety; it'll be fine...

> That's exactly what their whole purpose was. Everyone who worked at OpenAI was surely aware of this. They probably didn't think it would ever happen, but it was very much out there in the open: if it looks like this shit is getting dangerous, we're shutting it down. Full stop.
> There was no actual mechanism in place and nobody expected to need one, until they did. This is what happens when Security Theater is applied to corporate governance. Everyone knows the emergency doors are just painted on, but what's the likelihood of a fire breaking out, anyway? The investors are mollified with the mere appearance of safety; it'll be fine...

No. It's not. The purpose of OpenAI's nonprofit is to achieve AGI and to establish control over it so that it works for the benefit of humanity. The purpose of the board was to guide OpenAI through achieving this. Unless they change the charter, it still is, actually.