I'm not afraid of an AGI. I don't think one is possible, and certainly not via an LLM.
What really scares me is people thinking that LLMs are far more intelligent than they actually are and relying on them, with their biased training data and penchant for confabulation, to do things they aren't actually capable of doing.
Welcome to the Third/Fourth Industrial Revolution. The first eliminated the need for animal power. The second eliminated the need for human power. The third/fourth eliminated the need for humans.
> Why Larry Summers? Hasn't he already done enough harm to humanity? He hates the working class.

That should be a clue for those uncertain as to whether this is a good or bad development.
> That was not my intent at all.

Agreed. Another analogy: a brick is just a brick. It can be used to build a house or used to break a window. The brick doesn't care how it is used; it all comes down to the person using the brick.
A poorly designed automobile (such as the old Ford Pinto) can cause damage with normal use due to its design. But even a well-designed automobile can cause damage if driven into a crowd of people. The manufacturer is responsible for the damage caused by the design of the product, but is not necessarily responsible for how the product is used.
My guess is that the board saw that Altman was trying to usurp them and undermine the non-profit's mission, so they shot their shot and ended up ensuring that they were cast out and the non-profit's mission was subsumed by commercial interests. I doubt they really stood much of a chance, given all the sharks circling.
> What hasn't been said here is that strangely, Sam had no equity in the company. It's very unusual for a founder to have no equity in the company he founded, and that's whether it's a non-profit or not. It's why he was so easily fired.
> The structure of the organization is equally odd, with the for-profit company being under the control of the non-profit. Having had two companies, I'm baffled as to how that was supposed to work, as the two concepts are at odds with each other. And the employees of the for-profit were under the control of the non-profit as well.
> The entire thing is almost set up for failure. And how anyone ever thought that the billions needed were going to be raised from investors who have no chance not only of making a profit, but even of getting their initial investment back, is absurd. We're not talking about a political campaign where many people give a little, and a few give a lot, in the hopes of getting some nebulous payback. We're talking about a small number of supposedly sophisticated people and organizations, such as Microsoft, putting vast amounts of capital into this, who want and expect to make a lot of money from it.

Why do you think he doesn't have equity? They aren't public. Do you know something the rest of the public doesn't?
Word on the street is that some kind of project called Q-star has made a breakthrough, and when the board learned about it they panicked and wanted to halt things. But it's too late for that now.
> I wonder why the vast majority of the OpenAI staff were, and are, loyal to Altman. Was it a cult of personality? Was it the promise of greater payouts under a commercially-focused enterprise vs. the cautious approach favored by the other execs?

Speculation time: Even if the board was fully justified, the method used to fire Altman was seen as a bullshit move by those most affected by it.
> Here's the thing with Elon.
> He's almost our generation's Edison. He seems to act like Edison: a complete asshole. He gets credited like Edison, though it was Edison's employees who did all the work.
> Let us just hope that history is a bit more clear with Elon than it has been with Edison.

And both of them took advantage of Tesla.
> I would have thought a 49% ownership might be worth a board seat regardless.

In most instances it usually is.
> He took enough money for fifty people to retire comfortably and bet it on two things:
> - electric cars, which were so pie-in-the-sky that there were literal conspiracy theories at the time about how the big car companies wouldn't let anyone develop an electric car
> - reusable space launch, which the parallel efforts of all spacefaring governments and their supporting aerospace contractors had failed to achieve over the previous fifty years
> Those things exist the way they do today because he made those bets. When they'd have existed otherwise is an entirely hypothetical question.

I hate that this thread got derailed into Musk territory. He's been nothing but a troll screaming for attention from the sidelines on this current OpenAI issue. Thankfully his low-effort attention whoring on this hasn't worked and he's been largely ignored.
EDIT: a quote from somewhere
"Doing grade school math may not seem impressive, but the reports note that, according to the researchers involved, it could be a step toward creating artificial general intelligence (AGI)."
So only half joking…
When do I get my hover chair and bottomless soda cup?
Why Larry Summers? Hasn't he already done enough harm to humanity? He hates the working class.
This is partially answered by this:
There's no cup, no chair. You don't qualify for the space luxury liner.
You're Excess Labor, which doesn't exist under capitalism. No worries, you'll just retrain for an ever dwindling pool of work faster than the rate of AI deployment renders jobs obsolete. And if that doesn't work, there's always human chattel for the kinds of people who could have all the tools for utopia but still not be able to tolerate not owning people.
I strongly believe that were you to strap Summers to a chair and load him up with a cocktail of sodium thiopental and MDMA, it would take very little coaxing for him to admit that the endgame is billions of dead excess poors. To these ghouls, you're nothing but an inconvenient statistic.
The only issue I have with that thinking is that you can never get rid of the poor. In fact, in order for there to be one rich person, there need to be something like 100 poor people doing the menial work that creates the value.
Nadella made Microsoft look great through the process and made sure to keep access to AI development regardless of which path resulted. That might even be worth an OpenAI board position - giving MS an even better footing against other established competition looking to use AI.
> I wonder why the vast majority of the OpenAI staff were, and are, loyal to Altman. Was it a cult of personality? Was it the promise of greater payouts under a commercially-focused enterprise vs. the cautious approach favored by the other execs?
> Have any of the OAI staff gone on the record as to their motivations for sticking with Altman?

I'll go with "greater payout."
I'm looking forward to the ColdFusion video on this episode.
> Shortly after Poe's variant of a marketplace rolled out, OpenAI announced that they'd be doing the same fully in-house, mooting Poe for anyone just looking to use OpenAI's back end (which is likely most of the market currently).
> I do not understand why he gets to stay. There's a very clear conflict of interest there.

D'Angelo was the last board member holding out on the plan of the board resigning and Altman coming back. It wouldn't surprise me if he was doing everything in his power to burn OpenAI to the ground, and thus had no interest in going along with any plan to save it.
> It's not lost on me either. Particularly egregious that they're firing the two women on the board (apparently this whole thing started when Altman tried to fire Helen for writing an academic paper he didn't like, and she fired him first) and replacing them with not just two men, but one of those men being Larry "Women are genetically inferior to men at science and mathematics" Summers, also of "We need 10 million people to lose their jobs for the economy" fame. And people wonder why women in STEM struggle and get pushed out.

Helen Toner graduated from the University of Melbourne in 2014; her qualifications for being on this board are what, exactly? "That would actually be consistent with the mission" - no thanks, get lost.
> So someone presented a PowerPoint about how potentially great their division's project could be someday, and for some reason this standard day-to-day matter made the entire board lose its mind with terror, to the point of self-immolation?
> Upper management types have been known to spook more easily than horses, but I'd need more information to find that plausible.

I think it is more related to Altman talking about the discovery at the Asia-Pacific Economic Cooperation summit a day before he was fired. Unlike normal startups that hype every advancement before knowing whether it will actually work, I can imagine that the board of OpenAI wanted to develop the safety aspect before telling anyone about the prospects of the project. There have also been reports about the board wanting to decide if something is AGI, instead of any researcher/CEO making the claim publicly. Since the board had already lost trust in Altman, any disagreement must have been amplified.
> Let me get this straight. Sam Altman sets up a nonprofit which is charged with ending OpenAI if it thinks AI development is getting dangerous. They do, and fire him. Now he's come back and fired the people doing the firing because he thinks AI development isn't dangerous. We all get a long weekend of popcorn consumption, followed by hoping AI development isn't actually that dangerous despite the nonprofit board, who was supposed to be monitoring this, getting fired for thinking so.
> What a bunch of unserious people. Just drop the nonprofit pretense, admit everyone involved here only cares about dying with the greatest number of zeros in their bank account, and move on with unchecked capitalism as per currently accepted norms.

My bank account already has an enormous number of 0s in it, just not preceded by anything nonzero.
> Because Altman's ouster killed a tender offer for their stock options at an $86B valuation, closing in Dec. The board's decision cut them out of a generational payout.

Does this mean that the board's main blunder was... timing?
Giving MS a board seat would complicate OpenAI's structure further, to the point of making it ridiculous. MS and some other companies -largely VCs- have invested in the for profit wing of OpenAI, not in its not for profit (which, I suppose, should be uninvestable). And right above said not for profit sits the board.
The most unusual -unique perhaps- aspect of OpenAI is not that a not for profit controls a for profit company but that external companies have minority -but not minor- stakes in the for profit wing.
Yet despite their sizable investments they have no real say in the (commercial) company's policies, since it is nominally controlled by the not for profit right above it, which is in turn controlled by the board - and the entire thing is run by the CEO.
If that not for profit went away MS would already have a board seat. But giving them a board seat while keeping the company structure intact makes no sense. A company cannot be both not for profit and for profit at the same time. And a board seat from a large commercial company would mean just that.
I think OpenAI's contradictory company structure and "mission" was a major factor in this mess. They need to decide if they want to be a not for profit foundation or commercial company. They cannot have it both ways.
One of the major problems of the Board is not recognizing that without the commercial side, their nonprofit may be largely meaningless. I make no argument that this is a GOOD state of affairs, merely ask people to recognize how the world works in practice, as opposed to theory.
The issue here is whether you view the goal of the Board as sticking to their principles... or accomplishing their goals. In a perfect world, these are not contradictory. But here, that's the precise question.

If you are a nonprofit aimed at making something good for humanity and you place limits on the type of product you produce, then limiting only yourself causes little societal good, but you do get to stick to your morals. Look at what happened here as a perfect example: your employees want financial security as much as anyone else. If your nonprofit does the "this has gone too far, we're destroying our research" move, all that happens is those same people take their skills and general knowledge to other, explicitly commercial businesses. If you're in front, that means more people going to more companies and advancing MORE competing systems.
Blowing it up, in other words, will almost certainly lead to a worse outcome.
The product is inherently commercial. There's no getting around that. A bunch of other companies would love to hire talent away to commercialize their product. So if your goal is to actually achieve the purpose of your nonprofit, accepting the commercial nature of your product is a necessity. Their nonprofit should be zealously advocating for limitations on AI even as their company works to improve it. Why? Because if you only cripple your own product, the rest of the industry drinks to your downfall and their inevitably larger stock price.
If you want to avoid entangling yourself with capitalistic desires, that's a perfectly valid personal preference. But if you actually want to enact change, your actions must serve that end. That means treating commercial intent not as the priority, but as a necessary part of the solution.
> I think it is more related to Altman talking about the discovery at the Asia-Pacific Economic Cooperation summit a day before he was fired. Unlike normal startups that hype every advancement before knowing if it will actually work, I can imagine that the board of OpenAI wanted to develop the safety aspect before telling anyone about the prospects of the project. There have also been reports about the board wanting to decide if something is AGI, instead of any researcher/CEO making the claim publicly. Since the board had already lost trust in Altman, any disagreement must have been amplified.

Sometimes a board has to wield a hammer, blowback be darned. This kind of thing does not happen unless a CEO crosses a line.
The board handled the situation terribly. They should have used their "soft power," as another commenter wrote.
> The Girl Scouts of America are a non-profit. They sell Girl Scout cookies for profit. This is the same thing, except these cookies are worth $90 billion.
> But now that you mention it, I don't think Microsoft would be allowed to have a board seat overseeing the non-profit. As of now they are an outside entity buying the cookies. If they were also selling them, it would definitely complicate things.

They sell cookies to fund their organization.
> Sometimes a board has to wield a hammer, blowback be darned. This kind of thing does not happen unless a CEO crosses a line.

Sure, but they should at least know which end to hold on that hammer.
> They sell cookies to fund their organization.

True. And most people buy them by the stack load because they're flipping good.
> True. And most people buy them by the stack load because they're flipping good.
> If they weren't any better than D-grade generic store brand, not many people would buy them, no matter how worthy the cause. I mean, sure, some would. But not nearly as many people as buy them now.
> That is why the kind of tricky, complicated point the poster above was trying to make was right. Do you want to change the world with your delicious cookies? Or do you want your cookies rotting on the shelf by the truckload because nobody wants them? But yeah, you stuck by your principles and only used non-fat milk and non-fat butter substitute.
> But to bring it back to the matter at hand: if OpenAI wasn't cutting edge, nobody would care. They would be just another research team on the sidelines yelling at clouds, completely irrelevant in the grand scheme of things, with zero ability to affect anything or anyone now that the genie is already out of the bottle.

I have no beef with Girl Scout cookies, other than having to run a gauntlet every year to get to the grocery store. I haven't eaten one for at least fifteen years, because I need to manage my sugar and processed-carb intake. It is a me thing, and I wish them success in raising confident and empowered women. Did it with two of them myself. I am now having to learn how to raise a son to be a good person. It is proving to be a different experience.
> Please explain how Altman has done anything "for the benefit of humanity."
> Because all I've seen is a guy wanting to push far faster than sensible controls can evolve, resulting in dangerous nonsense being ubiquitous on the internet.

Excuse me. Sam Altman went to Stanford! He wrote some code once, and his failed startup was sold for ... $40M.
> I think the threat of a complete exodus of the staff (effectively killing OpenAI as an entity at all), plus the threats of multiple investors (including Microsoft), were taken pretty seriously.
> Which is rather surprising. Usually, from what I've seen, boards of directors have a "know your place" attitude.

Usually boards of directors have some skin in the game, so that they suffer some personal loss if the company goes under. With OpenAI this was not the case: if OpenAI went bust, none of the old board would lose any sort of asset (work or money).