
OpenAI Fires Then Rehires CEO

Here's my understanding of the recent OpenAI drama, in which Sam Altman was fired and then brought back (and most of the board of directors was removed instead), along with some thoughts about it:

OpenAI was created with a mission and certain rules. This was all stated clearly in writing. All employees and investors knew it, or would have known if they were paying any attention.

In short, the board of directors had all the power. And the mission was to help humanity, not make money.

The board of directors fired the CEO. They were rude about it. They didn't talk with him, employees, or investors first. They probably thought: it doesn't matter how we do this, the rules say we get our way, so obviously we'll get our way. They may have thought being abrupt and sneaky would give people less opportunity to complain and object. Maybe they wanted to get it over with fast.

The board of directors may have been concerned about AI Safety: that the CEO was leading the company in a direction that might result in AIs wiping out humanity. This has been partially denied, and I haven't followed all the details, but it still seems like that may be what happened. Regardless, I think it could have happened, and the results would likely have been the same.

The board of directors lost.

You can't write rules about safe AI, then try to actually follow them, and expect to get your way when there are billions of dollars involved. Pressure will happen. It will be non-violent (usually, at least at first). This wasn't about death threats or beatings on the street. But big money is above written rules and contracts. Sometimes. Not always. Elon Musk tried to get out of his contract to buy Twitter and failed (though note that that was big money against big money).

Part of the pressure was people like Matt Levine and John Gruber joining in on attacking and mocking the board. They took sides. They didn't directly and openly state that they were taking sides, but they did. A lot of journalists took sides too.

Another part of the pressure was the threat that most of the OpenAI employees would quit and go work for Microsoft and do the same stuff there, away from the OpenAI board.

Although I'm not one of the people who are concerned that this kind of software may kill us all, I don't think Matt Levine and the others know that it won't. They don't have an informed opinion about that. They don't have rational arguments about it, and they don't care about rational debate. So I sympathize with the AI doomers. It must be very worrying for them to see not only the antipathy their ideas get from fools who don't know better, but also that written rules will not protect them. Just having it in writing that "if X happens, we will pull the plug" does not mean the plug will be pulled. ("We'll just pull the plug if things start looking worrying" is one of the common bad arguments used against AI doomers.)

It's also relevant to me and my ideas, like: what if we had written rules to govern our debates, and then the people participating in debates followed those rules, just as chess players follow the rules of chess? It's hard to make that work. People often break rules and break their word, even when there are high stakes and legally-enforceable written contracts. (Not that anyone necessarily broke a contract here; but the contract didn't win. Other types of pressure got the people with contractual rights to back down, so the contract was evidently not the most important factor.)

The people who made OpenAI actually put stuff in writing like "yo, investors, you should think of your investment a lot like a donation, and if you don't like that then don't invest" and Microsoft and others were like "whatever, here's billions of dollars on those terms" and employees were like "hell yeah I want stock options – I want to work here for a high salary and also be an investor on those terms". And then the outside investors and employees were totally outraged when actions were taken that could lower the value of their investment and treat it a bit like a donation to a non-profit that doesn't have a profit-driven mission.

I think the board handled things poorly too. They certainly didn't do it how I would have. To me, it's an "everyone sucks here" situation, but a lot of people seem to think only the board sucks, and they don't really mind trampling over contracts and written rules when they think the victim sucks.

Although I don't agree with AI doom ideas, I think they deserve to be taken seriously in rational debate, not mocked or ignored, and their advocates shouldn't be put under so much pressure that they lose when trying to assert their contractual rights.


Elliot Temple on November 23, 2023


Want to discuss this? Join my forum.

(Due to multi-year, sustained harassment from David Deutsch and his fans, commenting here requires an account. Accounts are not publicly available. Discussion info.)