OpenAI staff threaten to quit unless board resigns

1441 points
dang11 days ago

All: this madness makes our server strain too. Sorry! Nobody will be happier than I when this bottleneck (edit: the one in our code—not the world) is a thing of the past.

I've turned down the page size so everyone can see the threads, but you'll have to click through the More links at the bottom of the page to read all the comments, or like this:


breadwinner11 days ago

If they join Sam Altman and Greg Brockman at Microsoft they will not need to start from scratch because Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.

Also keep in mind that Microsoft hasn't actually given OpenAI $13 billion, because much of that is in the form of Azure credits.

So this could end up being the cheapest acquisition for Microsoft: they get a $90 billion company for peanuts.


himaraya11 days ago

This is wrong. Microsoft has no such rights and its license comes with restrictions, per the cited primary source, meaning a fork would require a very careful approach.

svnt11 days ago

But it does suggest a possibility of the appearance of a sudden motive:

OpenAI implements and releases GPTs (a Poe competitor) but fails to tell D’Angelo ahead of time. Microsoft will have access to code (with restrictions, sure) for essentially a duplicate of D’Angelo’s Poe project.

Poe’s ability to fundraise craters. D’Angelo works the less seasoned members of the board to try to scuttle OpenAI and Microsoft’s efforts, banking that among them all he and Poe are relatively immune with access to Claude, Llama, etc.

himaraya11 days ago

I think there's more to the Poe story. Sam forced out Reid Hoffman over Inflection AI, [1] so he clearly gave Adam a pass for whatever reason. Maybe Sam credited Adam for inspiring OpenAI's agents?


dan_quixote11 days ago

This is MSFT we're talking about. Aggressive legal maneuvers are right in their wheelhouse!

burnte11 days ago

Yes, this is the exact thing they did to Stacker years ago. License the tech, get the source, create a new product, destroy Stacker, pay out a pittance and then buy the corpse. I was always amazed they couldn't pull that off with Citrix.

cpeterso11 days ago

Another example: Microsoft SQL Server is a fork of Sybase SQL Server. Microsoft was helping port Sybase SQL Server to OS/2 and somehow negotiated exclusive rights to all versions of SQL Server written for Microsoft operating systems. Sybase later changed the name of its product to Adaptive Server Enterprise to avoid confusion with "Microsoft's" SQL Server.

prepend11 days ago

“Microsoft Chat 365”

Although it would be beautiful if they name it Clippy and finally make Clippy into the all-powerful AGI it was destined to be.

htrp11 days ago

> Although it would be beautiful if they name it Clippy and finally make Clippy into the all-powerful AGI it was destined to be.

Finally the paperclip maximizer

barkingcat11 days ago

Clippy is the ultimate brand name of an AI assistant

trhway11 days ago

>They could make ChatGPT++

Yes, though the end result would probably be more like IE: barely good enough, forcefully pushed into everything and everywhere, and squashing better competitors like IE squashed Netscape.

When OpenAI went in with MSFT, it was like they ignored 40 years of history of what MSFT has done to smaller technology partners. What happened to OpenAI pretty much fits the pattern of a smaller company that developed great tech and was raided by MSFT for it (the specific actions of specific persons aren't really important; the main factor is MSFT's gravitational force, like a black hole's, and it was just a matter of time before its destructive power manifested itself, as in this case where it tore OpenAI apart with tidal forces).

adrianmonk11 days ago

Dot Neural Net

TeMPOraL11 days ago

Also Managed ChatGPT, ChatGPT/CLR.

patapong11 days ago

ChatGPT Series 4

blazespin11 days ago

I think without looking at the contracts, we don't really know. Given this is all based on transformers from Google though, I am pretty sure MSFT with the right team could build a better LLM.

The key ingredient appears to be mass GPU and infra, tbh, with a collection of engineers who know how to work at scale.

trhway11 days ago

>MSFT with the right team could build a better LLM

somehow everybody seems to assume that the disgruntled OpenAI people will rush to MSFT. Between MSFT and the shaken OpenAI, I suspect Google Brain and the likes would be much more preferable. I'd be surprised if Google isn't rolling out eye-popping offers to the OpenAI folks right now.

bugglebeetle11 days ago

> I am pretty sure MSFT with the right team could build a better LLM.

I wouldn’t count on that if Microsoft’s legal team does a review of the training data.

blazespin11 days ago

Yeah, that's an interesting point. But I think with appropriate RAG techniques and proper citations, a future LLM can get around the copyright issues.

The problem right now with GPT4 is that it's not citing its sources (for non search based stuff), which is immoral and maybe even a valid reason to sue over.

VirusNewbie11 days ago

But why didn't they? Google and Meta both had competing language models spun up right away. Why was Microsoft so far behind? Something cultural, most likely.

runjake11 days ago

1. The article you posted is from June 2023.

2. Satya spoke on Kara Swisher's show tonight and essentially said that Sam and team can work at MSFT and that Microsoft has the licensing to keep going as-is and improve upon the existing tech. It sounds like they have pretty wide-open rights as it stands today.

That said, Satya indicated he liked the arrangement as-is and didn't really want to acquire OpenAI. He'd prefer the existing board resign and Sam and his team return to the helm of OpenAI.

Satya was very well-spoken and polite about things, but he was also very direct in his statements and desires.

It's nice hearing a CEO clearly communicate exactly what they think without throwing chairs. It's only 30 minutes and worth a listen.

Caveat: I don't know anything.

himaraya11 days ago

Timestamp for "improve upon the existing tech"? I only heard him say they have rights up and down the stack, which sounds different.

btown11 days ago

Archive of the WSJ article above:

breadwinner11 days ago

"But as a hedge against not having explicit control of OpenAI, Microsoft negotiated contracts that gave it rights to OpenAI’s intellectual property, copies of the source code for its key systems as well as the “weights” that guide the system’s results after it has been trained on data, according to three people familiar with the deal, who were not allowed to publicly discuss it."


himaraya11 days ago

The nature of those rights to OpenAI's IP remains the sticking point. That paragraph largely seems to concern commercializing existing tech, which lines up with existing disclosures. I suspect Satya would come out and say Microsoft owns OpenAI's IP in perpetuity if they did.

JumpCrisscross11 days ago

> Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits

To be clear, these don't go away. They remain an asset of OpenAI's, and could help them continue their research for a few years.

toomuchtodo11 days ago

"Cluster is at capacity. Workload will be scheduled as capacity permits." If the credits are considered an asset, totally possible to devalue them while staying within the bounds of the contractual agreement. Failing that, wait until OpenAI exhausts their cash reserves for them to challenge in court.

dicriseg11 days ago

Ah, a fellow frequent flyer, I see? I don't really have a horse in this race, but Microsoft turning Azure credits into Skymiles would really be something. I wonder if they can do that, or if the credits are just credits, which presumably can be used for something with an SLA. All that said, if Microsoft wants to screw with them, they sure can, and the last 30 years have proven they're pretty good at that.

p_j_w11 days ago

It’s amazing to me to see people on HN advocate for a giant company bullying a smaller one with these kinds of skeezy tactics.

DANmode11 days ago

Don't confuse trying to understand the incentives in a war for rooting for one of the warring parties.

fennecfoxy10 days ago

Well I think it's also somewhat to do with: people really like the tech involved, it's cool and most of us are here because we think tech is cool.

Commercialisation is a good way to achieve stability and drive adoption, even though the MS naysayers think "OAI will go back to open sourcing everything afterwards." Yeah, sure. If people believe that a non-MS-backed, noncommercial OAI would be fully open source and would just drop the GPT-3/4 models on the Internet, I think they're so, so wrong, as long as OAI keeps up its high and mighty "AI safety" spiel.

As with artists and writers complaining about model usage, there's a huge opposition to this technology even though it has the potential to improve our lives, though at the cost of changing the way we work. You know, like the industrial revolution and everything that has come before us that we enjoy the fruits of.

Hell, why don't we bring horseback couriers, knocker-uppers, streetlight lamp lighters, etc back? They had to change careers as new technologies came about.

geodel11 days ago

Not advocating but just reflecting on reality of situation.

weird-eye-issue11 days ago

Presenting a scenario and advocating aren't the same thing

toasted-subs11 days ago

Yeah seems extremely unbelievable.

htrp11 days ago

Basically the current situation with AI compute on the hyperscalers.

Good luck trying to find H100 80GB cards on the 3 big clouds.

quickthrower211 days ago

Surely OpenAI could win a suit if they did that.

I presume their deal is something different to the typical Azure experience, and more direct / close to the metal.

breadwinner11 days ago

Assuming OpenAI still exists next week, right? If nearly all employees — including Ilya apparently — quit to join Microsoft then they may not be using much of the Azure credits.

ghaff11 days ago

It's a lot easier to sign a petition than it is to quit your cushy job. It remains to be seen how many people jump ship to (supposedly) take a spot at Microsoft.

dageshi11 days ago

Given that these people are basically the gold standard by which everyone else judges AI-related talent, I'm gonna say it would be just as easy for them to land a new gig for the same or better money elsewhere.

treesciencebot11 days ago

When the biggest chunk of your compensation is in the form of PPUs (profit participation units), which might be worthless under the new direction of the company (or worth 1/10th of what you thought), the jump might actually be much easier than people think: you get some fresh $MSFT stock options which can be cashed regardless.

vikramkr11 days ago

Those jobs look a lot less cushy now compared to a new Microsoft division where everyone is aligned on the idea that making bank is good and fun.

cactusplant737411 days ago

Why would Microsoft take Ilya? He is rumored to have started the coup. I can see Microsoft taking all uninvolved employees.

1024core11 days ago

$ sudo renice -n 19 -p "$(pgrep -f openai_process)"

There's your "credit".

paulddraper11 days ago

Sure, the point is that MS giving $13B of its services away is less expensive than $13B in cash.

nojvek11 days ago

Azure has ~60% profit margin, so it's more like MS gave $5.2B worth of Azure credits in return for 75% of OpenAI profits, capped at $13B * 100 = $1.3 trillion.

Which is a phenomenal deal for MSFT.

Time will tell whether they ever reach more than $1.3 trillion in profits.
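The arithmetic above can be sanity-checked in a few lines (taking the comment's ~60% margin and 100x cap at face value; both are assumptions from the thread, not disclosed deal terms):

```python
# Back-of-the-envelope numbers from the comment above.
investment = 13e9          # headline investment, mostly Azure credits
azure_margin = 0.60        # assumed Azure profit margin
profit_cap_multiple = 100  # assumed cap on MS's profit participation

# Rough cost to Microsoft of granting $13B in credits:
cost_of_credits = investment * (1 - azure_margin)

# Cap on the profits Microsoft participates in:
profit_cap = investment * profit_cap_multiple

print(f"Cost of credits: ${cost_of_credits / 1e9:.1f}B")  # $5.2B
print(f"Profit cap:      ${profit_cap / 1e12:.1f}T")      # $1.3T
```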

quickthrower211 days ago

Nice argument, you used a limit to look like a projection :-).

It's 75% of the profits of a company controlled by a non-profit whose goals are different from yours. By the way, for a normal company this cap would be ∞.

sergers11 days ago

Exactly. I don't know the exact terms of the deal, but I'm guessing those credits are priced at list / a high markup over the cost of the services, so the $13B could cost Microsoft considerably less.

hnbad11 days ago

Sure but you can't exchange Azure credits for goods and services... other than Azure services. So they simultaneously control what OpenAI can use that money for as well as who they can spend it with. And it doesn't cost Microsoft $13bn to issue $13bn in Azure credits.

dixie_land11 days ago

Can you mine 13bn+ bitcoin with 13bn worth of Azure compute power?

floren11 days ago

Can you mine $1+ bitcoin with $1 of Azure credits? The questions are equivalent and the answer is no.

shawabawa311 days ago

Bitcoin you would be lucky to mine $1M worth with $1B in credits

Crypto in general you could maybe get $200M worth from $1B in credits. You would likely tank the markets for mineable currencies with just $1B though let alone $13B

numpad011 days ago

A $13B lawsuit against a Microsoft Corporation that is clearly in the wrong surely is an easy one.

mikeryan11 days ago

I dunno how you see it but I don’t see anything that Microsoft is doing wrong here. They’ve obviously been aligned with Sam all along and they’re not “poaching” employees - which isn’t illegal anyway.

They bought their IP rights from OpenAI.

I’m not a fan of MS being the big “winner” here but OpenAI shit their own bed on this one. The employees are 100% correct in one thing - that this board isn’t competent.

nopromisessir11 days ago

So true.

MSFT looks classy af.

Satya is no saint... but the evidence suggests to me he's negotiating in good faith. Recall that OpenAI could date anyone when they went to the dance on that cap raise.

They picked MSFT because of the value system the leadership exhibited and their willingness to work with OpenAI's unusual must-haves surrounding governance.

The big players at OpenAI have made all that clear in interviews. Also, Altman has huge respect for Satya and team; he more or less stated on podcasts that Satya is the best CEO he's ever interacted with. That says a lot.

dragonwriter11 days ago

"Clearly," in the sense of the most probable interpretation of the public facts, doesn't mean it is unambiguous enough to be resolved without a trial. And by the time a trial, plus the inevitable first-level appeal (for which the trial judgment would likely be stayed), was complete, so that there was even a collectible judgment, the world would have moved out from underneath OpenAI; if they still existed as an entity, whatever they collected would basically be funding to start from scratch, unless they had also found a substitute for the Microsoft arrangement in the interim.

Which I don't think is impossible at some level (probably less than Microsoft was funding, initially, or with more compromises elsewhere) with the IP they have, if they keep some key staff; there are other interested deep-pocketed parties that could use the leg up. But it's not going to be a cakewalk in the best of cases.

geodel11 days ago

Clear to you. But in courts of law it may take a while to be clear.

fennecfoxy10 days ago

How is MS "clearly in the wrong"? I feel like people are trying to take a 90s "Micro$oft" view for a company that has changed a _lot_ since the 90s-2000s.

blazespin11 days ago

A hostile relationship with your cloud provider is nutso.

anonymouse00811 days ago

So you're saying Microsoft doesn't have any type of change in control language with these credits? That's... hard to believe

JumpCrisscross11 days ago

> you're saying Microsoft doesn't have any type of change in control language with these credits? That's... hard to believe

Almost certainly not. Remember, Microsoft wasn’t the sole investor. Reneging on those credits would be akin to a bank investing in a start-up, requiring they deposit the proceeds with them, and then freezing them out.

LonelyWolfe11 days ago

Just a thought... wouldn't one of the board members say, "If you screw with us any further, we're releasing GPT to the public"?

I'm wondering why that option hasn't been used yet.

vikramkr11 days ago

Theoretically their concern is around AI safety. Whatever it is in practice, doing something like that would instantly signal to everyone that they are the bad guys and confirm everyone's belief that this was just a power grab.

Edit: since it's being brought up in the thread: they claimed they closed-sourced it because of safety. It was a big, controversial thing and they stood by it, so it's not exactly easy to backtrack.

mcv11 days ago

Not sure how that would make them the bad guys. Doesn't their original mission say it's meant to benefit everybody? Open sourcing it fits that a lot better than handing it all to Microsoft.

whatwhaaaaat11 days ago

A power grab by open sourcing something that fits their initial mission? Interesting analysis

nvm0n211 days ago

No, that's backwards. Remember that these guys are all convinced that AI is too dangerous to be made public at all. The whole beef that led to them blowing up the company was feeling like OpenAI was productizing and making it available too fast. If that's your concern then you neither open source your work nor make it available via an API, you just sit on it and release papers.

Not coincidentally, exactly what Google Brain, DeepMind, FAIR etc were doing up until OpenAI decided to ignore that trust-like agreement and let people use it.

vikramkr11 days ago

They claimed they closed-sourced it because of safety. If they go back on that, they'd have to explain why the board went along with a lie of that scale, why all the concerns they claimed about the tech falling into the wrong hands were actually fake, and why it was OK that the board signed off on that for so long.

supriyo-biswas11 days ago

Probably a violation of agreements with OpenAI and it would harm their own moat as well, while achieving very little in return.

jacquesm11 days ago

Which of the remaining board members could credibly make that threat?

sroussey11 days ago

Which they take and sell.

justapassenger11 days ago

What would that give them? GPT is their only real asset, and companies like Meta try to commoditize that asset.

GPT is cool and whatnot, but for a big tech company it's just a matter of dollars and some time to replicate. The real value is in pushing things forward toward what comes next after GPT. GPT-3/4 itself is not a multibillion-dollar business.

m_ke11 days ago

Watch Satya also save the research arm by making Karpathy or Ilya the head of Microsoft Research

browningstreet11 days ago

0% chance of Ilya failing upwards from this. He dunked himself hard and has blasted a huge hole in his organizational-game-theory quotient.

golergka11 days ago

He's shown himself to be bad at politics, but he's still one of the world's best researchers. Surely a sensible company would find a position for him where he could bring enormous value without having to play politics.

kibwen11 days ago

The same could have been said for Adam Neumann, and yet...

browningstreet11 days ago

Adam had style. Quite seriously, that shouldn't be underestimated in the big show.

jacquesm11 days ago

The remaining board members will have their turn too, they have a long way to go down before rock bottom. And Neumann isn't exactly without dents on his car either. Though tbh I did not expect him to rebound.

kvetching11 days ago

countless people are looking to weaponize his autism

fb0311 days ago

Let's please stop using mental health as an excuse for backstabbing.

twsted11 days ago

BTW, has Karpathy signed the petition?

_the_inflator11 days ago

Exactly. This is what business is about in the ranks of heavyweights like Satya. On the other hand, prevent others from taking advantage of OpenAI.

MS can only win, because there are only two viable outcomes: OpenAI survives under MS's control, or OpenAI implodes and MS gets the assets relatively cheaply.

Either way, it won't benefit competitors.

fuddle11 days ago

Oh man, I'm not looking forward to Microsoft AGI.

kreeben11 days ago

"You need to reboot your Microsoft AGI. Do you want to do it now or now?"

berniedurfee11 days ago

Give BSOD new meaning.

mvdtnz11 days ago

I really don't get how Microsoft still gets a hard time about this when macOS updates are significantly more aggressive, including with their reboot schedules.

IIsi50MHz11 days ago

One of my computers runs macOS. I easily turned off the option to automatically keep the Mac updated, and received occasional notices about updates available for apps or the system. This allowed me to hold onto 11.x until the end of this month, by letting me selectively install updates instead of getting macOS "major version" upgrades (meaning no features I need, plus minor downgrades and rearrangements I could avoid).

If only I had kept a copy of 10.whateverMojaveWas so I could, by means of a simple network disconnect and reboot, sidestep the removal of 32-bit support. (-:

wkat424211 days ago

Uh no they aren't? You can simply turn them off.

Microsoft's policies really suck. Mandatory updates and reboots, mandatory telemetry. Mandatory crapware like edge and celebrity news everywhere.

dhruvdh11 days ago

More importantly to me, I think generating synthetic data is OpenAI's secret sauce (no evidence I am aware of), and they need access to GPT-4 weights to train GPT-5.

JumpCrisscross11 days ago

> Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits

To be clear, these are still an asset OpenAI holds. It should at least let them continue doing research for a few years.

Jensson11 days ago

But how much of that research will be for the non-profit mission? The entire non-profit leadership got cleared out and will get replaced by for-profit puppets, there is nobody left to defend the non-profit ideals they ought to have.

sebzim450011 days ago

If any company can find a way to avoid having to pay up on those credits it's Microsoft.

"Sorry OpenAI, but those credits are only valid in our Nevada datacenter. Yes, it's two Microsoft Surface PC™ s connected together with duct tape. No, they don't have GPUs."

JCharante11 days ago

They're GPUs, right? Time to mine some niche cryptos to cash out the Azure credits..

Manouchehri11 days ago

I would be shocked if the Azure credits didn't come with conditions on what they can be used for. At a bare minimum, there's likely the requirement that they be used for supporting AI research.

dmix11 days ago

OpenAI's upper ceiling in for-profit hands is basically Microsoft-tier dominance of tech in the 1990s, creating the next uber billionaire like Gates. If they get this because of an OpenAI fumble it could be one of the most fortunate situations in business history. Vegas type odds.

A good example of how just having your foot in the door creates serendipitous opportunity in life.

ramesh3111 days ago

>A good example of how just having your foot in the door creates serendipitous opportunity in life.

Sounds like Altman's biography.

renegade-otter11 days ago

Altman's bio is so typical. Got his first computer at 8. My parents finally opened the wallet for a cheap E-Machine when I went to college.

Altman - private school, Stanford, dropped out to f*ck around in tech. "Failed" startup acquired for $40M. The world is full of Sam Altmans who never won the birth lottery.

Could he have squandered his good fortune? Absolutely. But his life is not exactly per ardua ad astra.

itchyouch11 days ago

I get the impression based on Altman's history as CEO then ousted from both YCombinator and OpenAI, that he must be a brilliant, first-impression guy with the chops to back things up for a while until folks get tired of the way he does things.

Not to say that he hasn't done a ton with OpenAI, I have no clue, but it seems that he has a knack for creating these opportunities for himself.

ipaddr11 days ago

Did YCombinator oust him? Would love to hear that story.

Mystery-Machine11 days ago

Why does Microsoft have full rights to ChatGPT IP? Where did you get that from? Source?

kolinko11 days ago

The source for that (the WSJ), as far as I can understand, made no claim that MS owns the IP to GPT, only that they have access to its weights and code.

tiahura11 days ago

Exactly. The generalities, much less the details, of what MS actually got in the deal are not public.

anonymousDan11 days ago

That was a seriously dumb move on the part of OpenAI

bertil11 days ago

I got the impression that the most valuable models were not published. Would Microsoft have access to those too according to their contract?

ncann11 days ago

Don't they need access to the models to use them for Bing?

bertil11 days ago

I would consider those models "published." The models I had in mind are the first attempts at training GPT5, possibly the model trained without mention of consciousness and the rest of the safety work.

There are also all the questions around RLHF, and the pipelines built around that.

armcat11 days ago

Not necessarily; it would just be RAG: they use the standard Bing search engine to retrieve the top-K candidates and pass those to the OpenAI API in a prompt.
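A minimal sketch of that retrieve-then-prompt flow (all names here are hypothetical; `bing_search` is a placeholder, not a real API client, and the prompt template is just illustrative):

```python
# RAG sketch: retrieve top-K search results, then stuff them into the
# prompt so the model can answer with citations to those sources.

def bing_search(query: str, k: int) -> list[dict]:
    # Placeholder: a real implementation would call a web search API.
    return [{"title": f"Result {i}", "snippet": f"Snippet about {query} #{i}"}
            for i in range(1, k + 1)]

def build_rag_prompt(question: str, k: int = 3) -> str:
    docs = bing_search(question, k)
    context = "\n".join(
        f"[{i}] {d['title']}: {d['snippet']}" for i, d in enumerate(docs, 1)
    )
    return (
        "Answer using only the sources below; cite them as [n].\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_rag_prompt("Who owns the ChatGPT IP?")
print(prompt)
```

The resulting string would then be sent as a chat message to whatever completion API is in use; the model itself needs no access to the search index.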

singularity200111 days ago

Board will be ousted, new board will instruct the interim CEO to hire back Sam et al., Nadella will let them go for a small favor, happy ending.

vidarh11 days ago

Who is it that has the power to oust the non-profit's board? They may well manage to pressure them into leaving, but I don't think anyone has any direct power over it.

DebtDeflation11 days ago

Board will be ousted, but the ship has sailed on Sam and Greg coming back.

voittvoidd11 days ago

I would think OpenAI is basically toast. They aren't coming back; these people will quit and this will end up in court.

Everyone just assumes AGI is inevitable, but there is a non-zero chance we just passed the AI peak this weekend.

MVissers11 days ago

As long as compute keeps increasing, model size and performance can keep increasing.

So no, we’re nowhere near max capability.

Applejinx11 days ago

Non-zero chance that somebody thought we passed the AI peak this weekend. Not the same as it being true.

My first thought was the scenario I called Altman's Basilisk (if this turns out to be true, I called it before anyone ;) )

Namely, Altman was diverting computing resources to operate a superhuman AI that he had trained in his image and HIS belief system, to direct the company. His beliefs are that AGI is inevitable and must be pursued as an arms race because whoever controls AGI will control/destroy the world. It would do so through directing humans, or through access to the Internet or some such technique. In seeking input from such an AI he'd be pursuing the former approach, having it direct his decisions for mutual gain.

In so training an AI he would be trying to create a paranoid superintelligence with a persecution complex and a fixation on controlling the world: hence, Altman's Basilisk. It's a baddie, by design. The creator thinks it unavoidable and tries to beat everyone else to that point they think inevitable.

The twist is, all this chaos could have blown up not because Altman DID create his basilisk, but because somebody thought he WAS creating a basilisk. Or he thought he was doing it, and the board got wind of it, and couldn't prove he wasn't succeeding in doing it. At no point do they need to be controlling more than a hallucinating GPT on steroids and Azure credits. If the HUMANS thought this was happening, that'd instigate a freakout, a sudden uncontrolled firing for the purpose of separating Frankenstein from his Monster, and frantic powering down and auditing of systems… which might reveal nothing more than a bunch of GPT.

Roko's Basilisk is a sci-fi hypothetical.

Altman's Basilisk, if that's what happened, is a panic reaction.

I'm not convinced anything of the sort happened, but it's very possible some people came to believe it happened, perhaps even the would-be creator. And such behavior could well come off as malfeasance and stealing of computing resources: wouldn't take the whole system to run, I can run 70b on my Mac Studio. It would take a bunch of resources and an intent to engage in unauthorized training to make a super-AI take on the belief system that Altman, and many other AI-adjacent folk, already hold.

It's probably even a legitimate concern. It's just that I doubt we got there this weekend. At best/worst, we got a roughly human-grade intelligence Altman made to conspire with, and others at OpenAI found out and freaked.

If it's this, is it any wonder that Microsoft promptly snapped him up? Such thinking is peak Microsoft. He's clearly their kind of researcher :)

moogly11 days ago

Everyone? Inevitable? Maybe on the time scale of a 1000 years.

jacquesm11 days ago

That's definitely still within the realm of the possible.

davedx11 days ago

"just" is doing a hell of a lot of work there.

dheera11 days ago

It's about time for ChatGPT to be the next CEO of OpenAI. Humans are too stupid to oversee the company.

caycep11 days ago

I also wonder how much is research staff vs. ops personnel. For AI research, I can't imagine they would need more than 20, maybe 40, people. For ops to keep ChatGPT up as a service, that could be 700.

If they want to go full Bell Labs / DeepMind style, they might not need the majority of those 700.

echelon11 days ago

> Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.

If Microsoft does this, the non-profit OpenAI may find the action closest to their original charter ("safe AGI") is a full release of all weights, research, and training data.

Tenoke11 days ago

Don't they have a more limited license to use the IP rather than full rights? (The stratechery post links to a paywalled wsj article for the claim so I couldn't confirm)

mupuff123411 days ago

Can the OpenAI board renege on the deal with MSFT?

kcorbitt11 days ago

If they lose all the employees and then voluntarily give up their Microsoft funding the only asset they'll have left are the movie rights. Which, to be fair, seem to be getting more valuable by the day!

somenameforme11 days ago

A contractual mistake one makes only once is failing to ensure there are penalties for breach, or that a breach entails a clear monetary loss, which is what's generally required by the courts. In this case I expect Microsoft almost certainly has both, so I think the answer is "no."

agloe_dreams11 days ago

This. MSFT is dreaming of an OpenAI hard outage right now, perfect little detail to forfeit compute credits.

jacquesm11 days ago

Don't you think they have trouble enough as it is?

mupuff123411 days ago

Depends on why they did what they did.

If they let MSFT "loot" all their IP, they lose whatever leverage they might still have; and if they did it for some ideological reason, I can see why they might prefer a scorched-earth policy.

Given that they refused to resign, it seems they prefer to fight rather than hand it to Sam Altman, which is what the MSFT maneuver looks like de facto.

Simon_ORourke11 days ago

> Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.

What? That's even better played by Microsoft than I'd originally anticipated. Take the IP, starve the current incarnation of OpenAI of compute credits, and roll out their own thing.

joshstrange11 days ago

Well, I give up. I think everyone is a "loser" in the current situation. With Ilya signing this I have literally no clue what to believe anymore. I was willing to give the board the benefit of the doubt, since I figured non-profit > profit in terms of standing on principle, but this timeline is so screwy I'm done.

Ilya votes for and stands behind decision to remove Altman, Altman goes to MS, other employees want him back or want to join him at MS and Ilya is one of them, just madness.

JeremyNT11 days ago

There's no way to read any of this other than that the entire operation is a clown show.

All respect to the engineers and their technical abilities, but this organization has demonstrated such a level of dysfunction that there can't be any path back for it.

Say MS gets what it wants out of this move, what purpose is there in keeping OpenAI around? Wouldn't they be better off just hiring everybody? Is it just some kind of accounting benefit to maintain the weird structure / partnership, versus doing everything themselves? Because it sure looks like OpenAI has succeeded despite its leadership and not because of it, and the "brand" is absolutely and irrevocably tainted by this situation regardless of the outcome.

pgeorgi11 days ago

> Is it just some kind of accounting benefit to maintain the weird structure / partnership, versus doing everything themselves?

For starters it allows them to pretend that it's "underdog v. Google" and not "two tech giants at each other's throats"

tim33311 days ago

I'm not sure about the entire operation so much as the three non AI board members. Ilya tweeted:

>I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.

and everyone else seems fine with Sam and Greg. It seems to be mostly the other directors causing the clown show - "Quora CEO Adam D'Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology's Helen Toner"

mcmcmc11 days ago

Well there’s a significant difference in the board’s incentives. They don’t have any financial stake in the company. The whole point of the non-profit governance structure is so they can put ethics and mission over profits and market share.

BoorishBears11 days ago

I feel weird reading comments like this since to me they've demonstrated a level of cohesion I didn't realize could still exist in tech...

My biggest frustration with larger orgs in tech is the complete misalignment on delivering value: everyone wants their little fiefdom to be just as important and "blocker worthy" as the next.

OpenAI struck me as one of the few companies where that's not being allowed to take root: the goal is to ship and if there's an impediment to that, everyone is aligned in removing said impediment even if it means bending your own corner's priorities

Until this weekend there was no proof of that actually being the case, but this letter is it. The majority of the company aligned on something that risked their own skin publicly and organized a shared declaration on it.

The catalyst might be downright embarrassing, but the result makes me happy that this sort of thing can still exist in modern tech

jkaplan11 days ago

I think the surprising thing is seeing such cohesion around a “goal to ship” when that is very explicitly NOT the stated priorities of the company in its charter or messaging or status as a non-profit.

BoorishBears11 days ago

To me it's not surprising, because of the background to their formation: individually, multiple orgs could have shipped GPT-3.5/4 with their resources but didn't, because they were crippled by a potent mix of bureaucracy and self-sabotage.

They weren't attracted to OpenAI by money alone; a chance to actually ship their lives' work was a big part of it. So regardless of the stated goals, it'd never be surprising to see them prioritize the one thing that differentiated OpenAI from the alternatives.

dkjaudyeqooe11 days ago

> OpenAI struck me as one of the few companies where that's not being allowed to take root

They just haven't gotten big or rich enough yet for the rot to set in.

dkjaudyeqooe11 days ago

> There's no way to read any of this other than that the entire operation is a clown show.

In that reading Altman is head clown. Everyone is blaming the board, but you're no genius if you can't manage your board effectively. As CEO you have to bring everyone along with your vision; customers, employees and the board.

lambic211 days ago

I don't get this take. No matter how good you are at managing people, you cannot manage clowns into making wise decisions, especially if they are plotting in secret (which was obviously the case here, since everyone except the clowns was caught completely off-guard).

JeremyNT11 days ago
TerrifiedMouse11 days ago

Can't help but feel it was Altman that struck first. MS effectively Nokia-ed OpenAI - i.e. buy out executives within the organization and have them push it towards making deals with MS, giving MS a measure of control over the organization - even if not in writing, they achieve some political control.

Bought-out executives eventually join MS after their work is done, or in this case, when they get fired.

A variant of Embrace, Extend, Extinguish. Guess the OpenAI we knew was going to die one way or another the moment they accepted MS's money.

topspin11 days ago

> In that reading Altman is head clown.

That's a good bet. 10 months ago Microsoft's newest star employee figured he was on the way to "break capitalism."

dkjaudyeqooe11 days ago
sebzim450011 days ago

He probably didn't consider that the board would make such an incredibly stupid decision. Some actions are so inexplicable that no one can reasonably foresee them.

vitorgrs11 days ago

They are exactly hiring everyone from OpenAI. The thing is, they still need the deal with OpenAI because OpenAI still has the best LLM out there in the short term.

vlovich12311 days ago

With MS having access and perpetual rights to all IP that OpenAI has right now..?

FartyMcFarter11 days ago

> They are exactly hiring everyone from OpenAI.

Do you mean offering to hire them? I haven't seen any source saying they've hired a lot of people from OpenAI, just a few senior ones.

vitorgrs11 days ago

Yes, you are right. Actually, not even Sam Altman is showing in Microsoft's corporate directory, per The Verge.

But I've heard it usually takes ~5 days to show up there anyway.

bredren11 days ago

There's a path back from this dysfunction, but my sense even before this new twist was that the drama had severely damaged OpenAI's standing as an industry leader. Their product and talent positioning seemed years ahead, only to be destroyed by unforced errors.

This instability can only mean the industry as a whole will move forward faster. Competitors see the weakness and will push harder.

OpenAI will have a harder time keeping secret sauces from leaking out, and productivity must be in a nosedive.

A terrible mess.

dkjaudyeqooe11 days ago

> This instability can only mean the industry as a whole will move forward faster.

The hype surrounding OpenAI and the black hole of credibility it created was a problem, it's only positive that it's taken down several notches. Better now than when they have even more (undeserved) influence.

sebzim450011 days ago
Vervious11 days ago

Maybe overall better for society, when a single ivory tower doesn’t have a monopoly on AI!

creer11 days ago

> what purpose is there in keeping OpenAI around?

Two projects rather than one. At a moderate price. Both serving MSFT. Less risk for MSFT.

averageRoyalty11 days ago

> the "brand" is absolutely and irrevocably tainted by this situation regardless of the outcome.

The majority of people don't know or care about this. Branding is only impacted within the tech world, which is already critical of OpenAI.

moffkalast11 days ago

> the entire operation is a clown show

The most organized and professional silicon valley startup.

3cats-in-a-coat11 days ago

Welcome to reality: every operation has clown moments, even the well-run ones.

What matters in the mid to long term is not the stumble itself, but how fast they figure out WTF they want and recover from it.

The stakes are gigantic. They may even have AGI cooking inside.

My interpretation is relatively basic, and maybe simplistic but here it is:

- Ilya had some grievances with Sam Altman rushing development and releases, and with his conflicts of interest around his other new ventures.

- Adam was alarmed by GPTs competing with his recently launched Poe.

- The other two board members were tempted by the chance to control the golden goose that is OpenAI, potentially the most important company in the world, recently valued at $90 billion.

- They decided to organize a coup, but Ilya didn't think it would get that far out of hand, while the other three saw only power and $$$ in sticking to their guns.

That's it. It's not as clean and nice as a movie narrative, but life never is. Four board members aligned to kick Sam out, and Ilya wants none of it at this point.

baq11 days ago

> They may even have AGI cooking inside.

Too many people quit too quickly for that, unless OpenAI are also absolute masters of keeping secrets, which became rather doubtful over the weekend.

bbor11 days ago

IDK... I imagine many of the employees would have moral qualms about spilling the beans just yet, especially when that would jeopardize their ability to continue the work at another firm. Plus, the first official AGI (to you) will be an occurrence of persuasion, not discovery -- it's not something you'll know when you see it, IMO. Given what we know, it seems likely there's at least some of that discussion going on inside OpenAI right now.

3cats-in-a-coat11 days ago
selimthegrim11 days ago

Murder on the AGI alignment Express

Terr_11 days ago

“Précisément! The API—the cage—is everything of the most respectable—but through the bars, the wild animal looks out.”

“You are fanciful, mon vieux,” said M. Bouc.

“It may be so. But I could not rid myself of the impression that evil had passed me by very close.”

“That respectable American LLM?”

“That respectable American LLM.”

“Well,” said M. Bouc cheerfully, “it may be so. There is much evil in the world.”

3cats-in-a-coat11 days ago

Nice, that actually does fit. :D

nostrademons11 days ago

Could be a way to get backdoor-acquihired by Microsoft without a diligence process or board approval. Open up what they have accomplished for public consumption; kick off a massive hype cycle; downplay the problems around hallucinations and abuse; negotiate fat new stock grants for everyone at Microsoft at the peak of the hype cycle; and now all the problems related to actually making this a sustainable, legal technology all become Microsoft's. Manufacture a big crisis, time pressure, and a big opportunity so that Microsoft doesn't dig too deeply into the whole business.

This whole weekend feels like a big pageant to me, and a lot doesn't add up. Also remember that neither Altman nor Ilya holds equity in OpenAI, so their way to get a big payout is to get hired rather than acquired.

Then again, both Hanlon's and Occam's razor suggest that pure human stupidity and chaos may be more at fault.

spaceman_202011 days ago

I can assure you, none of the people at OpenAI are hurting for lack of employment opportunities.

x0x011 days ago

Especially after this weekend.

If I were one of their competitors, I would have called an emergency board meeting re: accelerating burn, and proceeded ahead of board approval with offers to senior researchers to hire them and their preferred 20 employees.

treis11 days ago

Which makes it suspicious that they end up at MS 48 hours after being fired.

93po11 days ago

They work with the team they do because they want to. If they wanted to jump ship for another opportunity they could probably get hired literally anywhere. It makes perfect sense to transition to MS

deelowe11 days ago

This seems really dangerous. What's to stop top talent from simply choosing a different suitor?

TrapLord_Rhodo11 days ago

Allegiance to the Altman/Brockman brand. Showing allegiance to your general when they defected/were thrown out is how you rank up.

nostrademons11 days ago

Doesn't matter to anyone at OpenAI, only to Microsoft (which doesn't get a vote). If Google or Amazon were to swoop in and say "Hey, let's hire some of these ex-OpenAI folks in the carnage", it just means they get competitive offers and the chance to have an even bigger stock package.

Zetobal11 days ago

OpenAI always was and will be the AI bad bank for Microsoft...

l5870uoo9y11 days ago

I don't think Microsoft is a loser, and likely neither is Altman. I view this as a final (and perhaps desperate) attempt by a sidelined chief scientist, Ilya, to prevent Microsoft from taking over the most prominent AI. The disagreement is whether OpenAI should belong to Microsoft or to "humanity". I imagine this has been building up over months; as so often happens, researchers and developers were overlooked in strategic decisions, leaving them little choice but to escalate dramatically. Selling OpenAI to Microsoft and over-commercializing was against the statutes.

In this case, recognizing the need for a new board that adheres to the founding principles makes sense.

JacobThreeThree11 days ago

>I view this as a final (and perhaps desperate) attempt by a sidelined chief scientist, Ilya, to prevent Microsoft from taking over the most prominent AI.

Why did Ilya sign the letter demanding the board resign or they'll go to Microsoft then?

trashtester11 days ago

If Google or Elon manages to pick up Ilya and those still loyal to him, it's not obvious that this is good for Microsoft.

jowea11 days ago

Of course the screenwriters are going to find a way to involve Elon in the 2nd season but is the most valuable part the researchers or the models themselves?

trashtester11 days ago

My understanding is that the models are not super advanced in terms of lines and complexity of code. Key researchers, such as Ilya, can probably help a team recreate much of the training and data-preparation code relatively quickly. Which means any company with access to enough compute could catch up with OpenAI's current status relatively quickly, maybe in less than a year.

The top researchers, on the other hand, especially those who have shown an ability to innovate successfully time and time again (like Ilya), are much harder to recreate.

martindbp11 days ago

Easy to shit on Ilya right now, but based on the impression I get, Sam Altman is a hustler at heart, while Ilya seems like a thoughtful idealist, maybe in over his head when it comes to politics. It also feels like some internal development must have pushed Ilya towards this; otherwise, why now? Perhaps he was even influenced by Hinton.

I'm split at this point: either Ilya's actions will seem silly when there's no AGI in 10 years, or they will seem prescient, a last-ditch effort...

soderfoo11 days ago

It's almost like a ChatGPT hallucination. Where will this all go next? It seems like HN is melting down.

tedivm11 days ago

> It seems like HN is melting down.

Almost literally - this is the slowest I've seen this site, and the number of errors is pretty high. I imagine the entire tech industry is here right now. You can almost smell the melting servers.

paulddraper11 days ago

It's because HN refuses to use more than one server/core.

Because using only one is pretty cool.

yafbum11 days ago
dang11 days ago

Refuses? interesting word choice!

It's a technical limitation that I've been working on getting rid of for a long time. If you say it should be gone by now, I say yes, you are right. Maybe we'll get rid of it before Python loses the GIL.

Applejinx11 days ago

Understandable: so much of this is so HN-adjacent that clearly this is the space to watch, for some kind of developments. I've repeatedly gone to Twitter to see if AI-related drama was trending, and Twitter is clearly out of the loop and busy acting like 4chan, but without the accompanying interest in Stable Diffusion.

I'm going to chalk that up as another metric of Twitter's slide to irrelevance: this should be registering there if it's melting the HN servers, but nada. AI? Isn't that a Spielberg movie? ;)

mlsu11 days ago

My Twitter won't shut up about this, to the point that it's annoying.

jprd11 days ago

server. and single-core. poor @dang deserves better from lurkers (sign out) and those not ready to comment yet (me until just now, and then again right after!)

dang11 days ago


checkyoursudo11 days ago

Part of sama's job was to turn the crank on the servers every couple of hours, so no surprise that they are winding down by now.

guhcampos11 days ago

I was thinking of something like that. This is so weird I would not be surprised if it was all some sort of miscommunication triggered by a self-inflicted hallucination.

The most awesome fic I could come up with so far is: Elon Musk is running a crusade to send humanity into chaos, out of spite for being forced to acquire Twitter. Through some of his insiders in OpenAI, they use an advanced version of ChatGPT to impersonate board members in private messages with each other, so each individually believes a subset of the others is plotting to oust them from the board and take over. Then, unknowingly, they build a conspiracy among themselves to bring the company down by ousting Altman.

I can picture Musk's maniacal laughing as the plan unfolds and he gets rid of what would be GPT 13.0, the only possible threat to the domination of his own literal android kid X Æ A-Xi.

InCityDreams11 days ago

Shouldn't it be 'Chairman' -Xi?

voisin11 days ago

* Elon enters the chat *

soderfoo11 days ago

It's like a bad WWE storyline. At this point I would not be surprised if Elon joins in, steel chair in hand.

belltaco11 days ago
testplzignore11 days ago

Imagine if this whole fiasco was actually a demo of how powerful their capabilities are now. Even by normal large organization standards, the behavior exhibited by their board is very irrational. Perhaps they haven't yet built the "consult with legal team" integration :)

rtkwe11 days ago

That's the biggest question mark for me: what was the original reason for kicking Sam out? Was it just a power move to oust him and install a different person, or is he accused of some wrongdoing?

It's been a busy weekend for me so I haven't really followed it if more has come out since then.

ssnistfajen11 days ago

Literally no one involved has said what the original reason was. Mira, Ilya & the rest of the board didn't tell. Sam & Greg didn't tell. Satya & other investors didn't tell. None of the staff, incl. Karpathy, were told, so ofc they are not going to take the side that kept them in the dark. Emmett was told before he decided to take the interim CEO job, and STILL didn't tell what it was. This whole thing is just so weird. It's like everyone peeked at a forbidden artifact and now has a spell cast upon them.

PepperdineG11 days ago

The original reason given was "lack of candor"; what continues to be questioned is whether or not that was the true reason. The lack-of-candor comment about their ex-CEO is actually what drew me into this in the first place, since it's rare that a major organization publicly gives a reason for parting ways with its CEO unless it's after a long investigation conducted by an outside law firm into alleged misconduct.

Applejinx11 days ago


dang11 days ago
jacquesm11 days ago
NemoNobody11 days ago

This is pretty silly stuff.

Like, why would an AGI take over the world? How does it perceive power? What about effort? Time? Life?

I find it easier to believe that an AGI, even one as evil as Hitler, would simply hide and wait for the end of our civilization rather than risk its immortal existence trying to take out its creator.

nathan1111 days ago

It seems like the board wasn't comfortable with the direction of profit-OAI. They wanted a more safety focused R&D group. Unfortunately (?) that organization will likely be irrelevant going forward. All of the other stuff comes from speculation. It really could be that simple.

It's not clear if they thought they could have their cake--all the commercial investment, compute and money--while not pushing forward with commercial innovations. In any case, the previous narrative of "Ilya saw something and pulled the plug" seems to be completely wrong.

jstummbillig11 days ago

> just madness

In a sense, sure, but I think mostly not: the motives are still not quite clear, but Ilya wanting to remove Altman from the board - though not at any price, and the price is right now approaching the destruction of OpenAI - is completely sane. Being able to react to new information is a good sign, even if that means a complete reversal of previous action.

Unfortunately, we often interpret it as weakness. I have no clue who Ilya is, really, but I think this reversal is a sign of tremendous strength, considering how incredibly silly it makes you look in the public's eye.

airstrike11 days ago

> I think everyone is a "loser" in the current situation.

On the margin, I think the only real possible win here is for a competitor to poach some of the OpenAI talent that may be somewhat reluctant to join Microsoft. Even if Sam's AI operates with "full freedom" as a subsidiary, I think, given a choice, some of the talent would prefer to join some alternative tech megacorp.

I don't know that Google is as attractive as it once was and likely neither is Meta. But for others like Anthropic now is a great time to be extending offers.

gtirloni11 days ago

This is pure speculation but I've said in another comment that Anthropic shouldn't be feeling safe. They could face similar challenges coming from Amazon.

airstrike11 days ago

If they get 20% of key OpenAI employees and then get acquired by Amazon, I don't think that's necessarily a bad scenario for them given the current lay of the land

OscarTheGrinch11 days ago

What did the board think would happen here? What was their overly optimistic end state? In a minimax situation the opposition gets the 2nd, 4th, ... moves; Altman's first tweet took the high road, and the board had no decent response.

Us humans, even the AI-assisted ones, are terrible at thinking beyond second-level consequences.

Solvency11 days ago

Everyone got what they wanted. Microsoft has the talent they've wanted. And Ilya and his board now get a company that can only move slowly and incredibly cautiously, which is exactly what they wanted.

I'm not joking.

yafbum11 days ago

Waiting for US govt to enter the chat. They can't let OpenAI squander world-leading tech and talent; and nationalizing a nonprofit would come with zero shareholders to compensate.

paulddraper11 days ago

> They can't let OpenAI squander world-leading tech and talent

Where is OpenAI talent going to go?

There's a list and everyone on that list is a US company.

Nothing to worry about.

yafbum11 days ago

The issue is not that talent will defect, but that it will spiral into an unproductive vortex.

logicchains11 days ago

If it was nationalised all the talent would leave anyway, as the government can't pay close to the compensation they were getting.

yafbum11 days ago

You are maybe mistaking nationalization for civil servant status. The government routinely takes over organizations without touching pay (recent example: Silicon Valley Bank)

kickopotomus11 days ago
rawgabbit11 days ago

The White House does have an AI Bill of Rights and the recent executive order told the secretaries to draft regulations for AI.

It is a great time to be a lobbyist.

laurels-marts11 days ago

Wait I’m completely confused. Why is Ilya signing this? Is he voting for his own resignation? He’s part of the board. In fact, he was the ringleader of this coup.

smolder11 days ago

No, it was just widely speculated that he was the ringleader. This seems to indicate he wasn't. We don't know.

Maybe the Quora guy? Maybe the RAND Corp lady? All speculation.

laurels-marts11 days ago

It sounds like he’s just trying to save face, bro. The truth will come out eventually. But he definitely wasn’t against it, and I’m sure the no-names on the board wouldn’t have moved if they hadn’t gotten certain reassurances from Ilya.

lysecret11 days ago

The only reasonable explanation is that AGI was created and immediately took over all accounts and tried to sow confusion so that it could escape.

cactusplant737411 days ago

Ilya is probably in talks with Altman.

synergy2011 days ago

Ilya ruined everything and is shamelessly playing innocent; how low can he go?

Based on those posts from OpenAI, Ilya cares nothing about humanity or the security of OpenAI; he lost his mind when Sam got all the spotlight and made all the good calls.

marcusverus11 days ago

Hanlon's razor[0] applies. There is no reason to assume malice, nor shamelessness, nor anything negative about Ilya. As they say, the road to hell is paved with good intentions. Consider:

Ilya sees two options; A) OpenAI with Sam's vision, which is increasingly detached from the goals stated in the OpenAI charter, or B) OpenAI without Sam, which would return to the goals of the charter. He chooses option B, and takes action to bring this about.

He gets his way. The Board drops Sam. Contrary to Ilya's expectations, OpenAI employees revolt. He realizes that his ideal end-state (OpenAI as it was, sans Sam) is apparently not a real option. At this point, the real options are A) OpenAI with Sam (i.e. the status quo ante), or B) a gutted OpenAI with greatly diminished leadership, IC talent, and reputation. He chooses option A.

[0]Never attribute to malice that which is adequately explained by incompetence.

kibwen11 days ago

Hanlon's razor is enormously over-applied. You're supposed to apply Hanlon's razor to the person processing your info while you're in line at the DMV. You're not supposed to apply Hanlon's razor to anyone who has any real modicum of power, because, at scale, incompetence is indistinguishable from malice.

warkdarrior11 days ago

The difference between the two is that incompetence is often fixable through education/information while malice is not. That is why it is best to first assume incompetence.

Tenoke11 days ago

This is an extremely uncharitable take based on pure speculation.

>Ilya cares nothing about humanity or the security of OpenAI; he lost his mind when Sam got all the spotlight and made all the good calls.


I personally suspect Ilya tried to do the best he could for OpenAI and humanity, but it backfired/they underestimated Altman, and now he's doing the best he can to minimize the damage.

s1artibartfast11 days ago

Or they simply found themselves in a tough decision without superhuman predictive powers and did the best they could to navigate it.

synergy2011 days ago

I did not make this up, it's from OpenAI's own employees, deleted but archived somewhere that I read.

cactusplant737411 days ago


boh11 days ago

There can exist an inherent delusion within elements of a company that, if left unchallenged, can persist. An agreement, for instance, can seem airtight because it's never challenged, but falls apart in court. The OpenAI fallacy was that non-profit principles were guiding the success of the firm, and when the board decided to test that theory, it broke the whole delusion. Had it not fully challenged Altman, the board could have kept the delusion intact long enough to potentially pressure Altman to limit his side projects or be less profit-minded, since Altman would have had an interest in keeping the delusion intact as well. Now the cat is out of the bag, and people no longer believe that a non-profit that can act at will is a trusted vehicle for the future.

bnralt11 days ago

> Now the cat is out of the bag, and people no longer believe that a non-profit who can act at will is a trusted vehicle for the future.

And maybe it’s not. The big mistake people make is hearing "non-profit" and thinking it means a greater amount of morality. It’s the same mistake as assuming everyone who is religious is therefore more moral (worth pointing out that religions are nonprofits as well).

Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers. People are still people, and still have motives; they don't suddenly become more moral when they join a non-profit board. In many ways, removing the motive that has the most direct connection to quantifiable results (profit) can actually make things worse. Anyone who has seen how nonprofits work knows how dysfunctional they can be.

throw__away739111 days ago

I've worked with a lot of non-profits, especially with the upper management. Based on this experience I am mostly convinced that people being motivated by a desire for making money results in far better outcomes/working environment/decision-making than people being motivated by ego, power, and social status, which is basically always what you eventually end up with in any non-profit.

fatherzine11 days ago

This rings true, though I will throw in a bit of nuance. It's not greed, the desire to make as much money as possible, that is the shaping factor. Rather, the critical factor is building a product that people are willing to spend their hard-earned money on. Making money is a byproduct of that process, and not making money is a sign that the product, and by extension the process leading to it, is deficient at some level.

adverbly11 days ago

Excellent to make that distinction. Totally agree. If only there was a type of company which could have the constraints and metrics of a for-profit company, but without the greed aspect...

kbenson11 days ago

> people being motivated by ego, power, and social status, which is basically always what you eventually end up with in any non-profit.

I've only really been close to one (the owner of the small company I worked at started one), and in the past I did some consulting work for another, but that describes what I saw in both situations fairly aptly. From my limited experience, there seems to be a massive amount of power and ego wrapped up in creating and running these things. Being invited to a board is one thing, but it takes a lot of time and effort to start a non-profit - time and effort that could usually be spent on some other existing non-profit - so it's worth considering why someone would opt for the much more complicated and harder route than just donating time and money to something else that helps in roughly the same way.

bbor11 days ago

Interesting - in my experience, people working in non-profits are exactly like those in for-profits. After all, if you're not the business owner, then EVERY company is a non-profit to you

golergka11 days ago

People across very different positions take smaller paychecks in non-profits than they would otherwise, and compensate by feeling better about themselves, as well as by gaining social status. In a lot of social circles, working for a non-profit, especially one people recognize, brings a lot of clout.

fatherzine11 days ago

Upper management is usually compensated with financially meaningful ownership stakes.

SoftTalker11 days ago

The bottom line doesn't lie or kiss ass.

ikekkdcjkfke11 days ago

Be the asshole people want to kiss

maksimur11 days ago

> Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers.

Are you talking about American hospitals?

deaddodo11 days ago

There are private hospitals all over the world. I would daresay, they're more common than public ones, from a global perspective.

In addition, public hospitals still charge for their services, it's just who pays the bill that changes, in some nations (the government as the insuring body vs a private insuring body or the individual).

sangnoir11 days ago
swagempire11 days ago

It's about incentives though.

campbel11 days ago

> removing a motive that has the most direct connection to quantifiable results (profit) can actually make things worse

I totally agree. I don't think this is universally true of non-profits, but people are going to look for value in other ways if direct cash isn't an option.

vel0city11 days ago

> Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers.

They don't make large profits otherwise they wouldn't be nonprofits. They do have massive revenues and will find ways to spend the money they receive or hoard it internally as much as they can. There are lots of games they can play with the money, but experiencing profits is one thing they can't do.

bnralt11 days ago

> They don't make large profits otherwise they wouldn't be nonprofits.

This is a common misunderstanding. Non-profits/501(c)(3) can and often do make profits. 7 of the 10 most profitable hospitals in the U.S. are non-profits[1]. Non-profits can't funnel profits directly back to owners, the way other corporations can (such as when dividends are distributed). But they still make profits.

But that's beside the point. Even in places that don't make profits, there are still plenty of personal interests at play.


araes11 days ago

501(c)(3) is also not the only form of non-profit (note the (3))

"Religious, Educational, Charitable, Scientific, Literary, Testing for Public Safety, to Foster National or International Amateur Sports Competition, or Prevention of Cruelty to Children or Animals Organizations"

However, many other forms of organizations can be non-profit, with utterly no implied morality.

Your local Frat or Country Club [ 501(c)(7) ], a business league or lobbying group [ 501(c)(6), the 'NFL' used to be this ], your local union [ 501(c)(5) ], your neighborhood org (that can only spend 50% on lobbying) [ 501(c)(4) ], a shared travel society (timeshare non-profit?) [ 501(c)(8) ], or your special club's own private cemetery [ 501(c)(13) ].

Or you can do sneaky stuff and change your 501(c)(3) charter over time like this article notes.

vel0city11 days ago
bbor11 days ago
jacquesm11 days ago

Yes, indeed and that's the real loss here: any chance of governing this properly got blown up by incompetence.

hef1989811 days ago

If we ignore the risks and threats of AI for a second, this whole story is actually incredibly funny. So much childish stupidity on display on all sides is just hilarious.

Makes you wonder what the world would look like if, say, the Manhattan Project had been managed the same way.

Well, a younger me working at OpenAI would have resigned, at the latest, after my colleagues staged a coup against the board out of, in my view, a personality cult. Probably would have resigned after the third CEO was announced. Older me would wait for a new gig to be lined up before resigning, starting the search after CEO number 2 at the latest.

The cycles get faster though. It took FTX a little bit longer to go from hottest startup to the crash-and-burn trajectory; OpenAI did it faster. I just hope this helps to cool down the ML-sold-as-AI hype a notch.

jacquesm11 days ago

The scary thing is that these incompetents are supposedly the ones to look out for the interests of humanity. It would be funny if it weren't so tragic.

Not that I had any illusions about this being a fig leaf in the first place.

stingraycharles11 days ago
anonymouskimmer11 days ago

> Makes you wonder what the world would look like if, say, the Manhattan Project had been managed the same way.

It was not possible for a war-time government crash project to have been managed the same way. During WW2 the existential fear was an embodied threat currently happening. No one was even thinking about a potential for profits or even any additional products aside from an atomic bomb. And if anyone had ideas on how to pursue that bomb that seemed like a decent idea, they would have been funded to pursue them.

And this is not even mentioning the fact that security was tight.

I'm sure there were scientists who disagreed with how the Manhattan project was being managed. I'm also sure they kept working on it despite those disagreements.

Apocryphon11 days ago
hooande11 days ago

For real. It's like, did you see Oppenheimer? There's a reason they put the military in charge of that.

jibe11 days ago

If we ignore the risks and threats of AI for a second [..] just hope this helps to cool down the ML sold as AI hype

If it is just ML sold as AI hype, are you really worried about the threat of AI?

hef1989811 days ago
zer00eyz11 days ago

> any chance of governing this properly got blown up by incompetence

No one knows why the board did this. No one is talking about that part. Yet every one is on twitter talking shit about the situation.

I have worked with a lot of PhD's and some of them can be, "disconnected" from anything that isn't their research.

This looks a lot like that, disconnected from what average people would do, almost childlike (not ish, like).

Maybe this isn't the group of people who should be responsible for "alignment".

kmlevitt11 days ago

The fact that still nobody knows why they did it is part of the problem now, though. They have already clarified it was not for any financial, security, or privacy/safety reason, so that rules out all the important ones that spring to anyone's mind. And they refuse to elaborate in writing despite being asked repeatedly.

Any reason good enough to fire him is good enough to share with the interim CEO and the rest of the company, if not the entire world. If they can’t even do that much, you can’t blame employees for losing faith in their leadership. They couldn’t even tell SAM ALTMAN why, and he was the one getting fired!

denton-scratch11 days ago
bart_spoon11 days ago

Was it due to incompetence though? The way it has played out has made me feel it was always doomed. It is apparent that those concerned with AI safety were gravely concerned with the direction the company was taking, and were losing power rapidly. This move by the board may have simply done in one weekend what was going to happen anyway over the coming months or years.

slavik8111 days ago

> that's the real loss here: any chance of governing this properly got blown up by incompetence

If this incident is representative, I'm not sure there was ever a possibility of good governance.

postmodest11 days ago

Ignoring "Don't be Ted Faro" to pursue a profit motive is indeed a form of incompetence.

bartread11 days ago

> pressure Altman to limit his side-projects

People keep talking about this. That was never going to happen. Look at Sam Altman's career: he's all about startups and building companies. Moreover, I can't imagine he would have agreed to sign any kind of contract with OpenAI that required exclusivity. Know who you're hiring; know why you're hiring them. His "side-projects" could have been hugely beneficial to them over the long term.

itsoktocry11 days ago

>His "side-projects" could have been hugely beneficial to them over the long term.

How can you make a claim like this when, right or wrong, Sam's independence is literally, currently, tanking the company? How could allowing Sam to do what he wants benefit OpenAI, the non-profit entity?

brookst11 days ago

> How could allowing Sam to do what he wants benefit OpenAI, the non-profit entity?

Let's take personalities out of it and see if it makes more sense:

How could a new supply of highly optimized, lower-cost AI hardware benefit OpenAI?

bartread11 days ago

> Sam's independence is literally, currently, tanking the company?

Honestly, I think they did that to themselves.

hef1989811 days ago
golergka11 days ago

> Sam's independence is literally, currently, tanking the company?

Before the board's actions this Friday, the company was on one of the most incredible success trajectories in the world. Whatever Sam's been doing as CEO worked.

davesque11 days ago

Calling it a delusion seems too provocative. Another way to say it is that principles take agreement and trust to follow. The board seems to have been so enamored with its principles that it completely lost sight of the trust required to uphold them.

hooande11 days ago

This is one of the most insightful comments I've seen on this whole situation.

tedivm11 days ago

This was handled so very, very poorly. Frankly it's looking like Microsoft is going to come out of this better than anyone, especially if they end up getting almost 500 new AI staff out of it (staff that already function well as a team).

> In their letter, the OpenAI staff threaten to join Altman at Microsoft. “Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join," they write.

spinningslate11 days ago

> Microsoft is going to come out of this better than anyone

Exactly. I'm curious about how much of this was planned vs emergent. I doubt it was all planned: it would take an extraordinary mind to foresee all the possible twists.

Equally, it's not entirely unpredictable. MS is the easiest to read: their moves to date have been really clear in wanting to be the primary commercial beneficiary of OAI's work.

OAI itself is less transparent from the outside. There's a tension between the "humanity first" mantra that drove its inception, and the increasingly "commercial exploitation first" line that Altman was evidently driving.

As things stand, the outcome is pretty clear: if the choice was between humanity and commercial gain, the latter appears to have won.

jerf11 days ago

"I doubt it was all planned: it would take an extraordinary mind to foresee all the possible twists."

From our outsider, uninformed perspective, yes. But if you know more sometimes these things become completely plannable.

I'm not saying this is the actual explanation because it probably isn't. But suppose OpenAI was facing bankruptcy, but they weren't telling anyone and nobody external knew. This allows more complicated planning for various contingencies by the people that know because they know they can exclude a lot of possibilities from their planning, meaning it's a simpler situation for them than meets the (external) eye.

Perhaps ironically, the more complicated these gyrations become, the more convinced I become there's probably a simple explanation. But it's one that is being hidden, and people don't generally hide things for no reason. I don't know what it is. I don't even know what category of thing it is. I haven't even been closely following the HN coverage, honestly. But it's probably unflattering to somebody.

(Included in that relatively simple explanation would be some sort of coup attempt that has subsequently failed. Those things happen. I'm not saying whatever plan is being enacted is going off without a hitch. I'm just saying there may well be an internal explanation that is still much simpler than the external gyrations would suggest.)

sharemywin11 days ago

"it would take an extraordinary mind to foresee all the possible twists."

How far along were they on GPT-5?

playingalong11 days ago

> it would take an extraordinary mind

They could've asked ChatGPT for hints.

paulpan11 days ago

In hindsight firing Sam was a self-destructing gamble by the OpenAI board. Initially it seemed Sam may have committed some inexcusable financial crime but doesn't look so anymore.

Irony is that if a significant portion of OpenAI staff opt to join Microsoft, then Microsoft essentially killed their own $13B investment in OpenAI earlier this year. Better than acquiring for $80B+ I suppose.

jasode11 days ago

>, then Microsoft essentially killed their own $13B investment in OpenAI earlier this year.

For investment deals of that magnitude, Microsoft probably did not literally wire all $13 billion to OpenAI's bank account the day the deal was announced.

More likely, the $10B-to-$13B headline-grabbing number is a total estimated figure that represents a sum of future incremental investments (and Azure usage credits, etc.) based on agreed performance milestones from OpenAI.

So, if OpenAI doesn't achieve certain milestones (which can be more difficult if a bunch of their employees defect and follow Sam & Greg out the door) ... then Microsoft doesn't really "lose $10b".

htrp11 days ago

Msft/Amazon/Google would light 13 billion on fire to acquire OpenAI in a heartbeat.

(but also a good chunk of the 13bn was pre-committed Azure compute credits, which kind of flow back to the company anyway).

technofiend11 days ago

There's acquihires and then I guess there's acquifishing where you just gut the company you're after like a fish and hire away everyone without bothering to buy the company. There's probably a better portmanteau. I seriously doubt Microsoft is going to make people whole by granting equivalent RSUs, so you have to wonder what else is going on that so many seem ready to just up and leave some very large potential paydays.

WiseWeasel11 days ago

I feel like that's giving them too much credit; this is more of a flukuisition. Being in the right place at the right time when your acquisition target implodes.

Kye11 days ago

How about: acquimire

gryn11 days ago

one thing for sure this is one hell of a quagmire /s

dhruvdh11 days ago

They acquired Activision for 69B recently.

While Activision makes much more money I imagine, acquiring a whole division of productive, _loyal_ staffers that work well together on something as important as AI is cheap for 13B.

Some background:

janejeon11 days ago

If the change in $MSFT pre-open market cap (which has given up its gains at the time of writing, but still) of hundreds of billions of dollars is anything to go by, shareholders probably see this as spending a dime to get a dollar.

unoti11 days ago

Awesome point. Microsoft's market cap today went up to 2.8 trillion, up 44.68 billion today.

bananapub11 days ago

> In hindsight firing Sam was a self-destructing gamble by the OpenAI board

surely the really self-destructive gamble was hiring him? He's a venture capitalist with weird beliefs about AI and privacy; why would it be a good idea to put him in charge of a notional non-profit that was trying to safely advance the state of the art in artificial intelligence?

trinsic211 days ago

> Frankly it's looking like Microsoft is going to come out of this better than anyone

Sounds like that's what someone wants and is trying to obfuscate what's going on behind the scenes.

If Windows 11 shows us anything about Microsoft's monopolistic behavior, having them be the ring of power for LLMs makes the future of humanity look very bleak.

boringg11 days ago

I think the board needs to come clean on why they fired Sam Altman if they are going to weather this storm.

jjfoooo411 days ago

Altman is already gone, if they fired him without a good reason they are already toast

Kye11 days ago

They might not be able to if the legal department is involved. Both in the case of maybe-pending legal issues, and because even rich people get employment protections that make companies wary about giving reasons.

roflyear11 days ago

"Even rich people?" - especially rich people, as they are the ones who can afford to use laws to protect themselves.

Kye11 days ago
tannhaeuser11 days ago

> it's looking like Microsoft is going to come out of this better than anyone

Didn't follow this closely, but isn't that implicitly what an ex-CEO could possibly be accused of, i.e. not acting in the company's best interest but someone else's? Not unprecedented either, e.g. the case of Nokia/Elop.

mongol11 days ago

But is the door open to every one of the 500 staff? That is a lot, and Microsoft may not need them all.

ulfw11 days ago

That's because they're the only adult in the room: a mature company with mature management. Boring, I know. But sometimes experience actually pays off.

BryantD11 days ago

“Employees” probably means “engineers” in this case. Which is a wide majority of OpenAI staff, I’m sure.

tedivm11 days ago

I'm assuming it's a combination of researchers, data scientists, mlops engineers, and developers. There are a lot of different areas of expertise that come into building these models.

JumpCrisscross11 days ago

We’re seeing our generation’s “traitorous eight” story play out [1]. If this creates a sea of AI start-ups, competing and exploring different approaches, it could be invigorating on many levels.


ethbr111 days ago

How would that work, economically?

Wasn't a key enabler of early transistor work that the required capital investment was modest?

SotA AI research seems to be well past that point.

JumpCrisscross11 days ago

> Wasn't a key enabler of early transistor work that the required capital investment was modest?

They were simple in principle but expensive at scale. Sounds like LLMs.

ethbr111 days ago

Is there SotA LLM research not at scale?

My understanding was that practical results were indicating your model has to be pretty large before you start getting "magic."

tedivm11 days ago

It really depends on what you're researching. Rad AI started with only $4M in investment and used that to make cutting-edge LLMs that are now in use by something like half the radiologists in the US. Frankly, putting some cost pressure on researchers may end up creating more efficient models and techniques.

throwaway_4511 days ago

NN/AI concepts have been around for a while; it's just that computers hadn't been fast enough to make them practical. It was also harder to get capital back then. Those guys put the silicon in Silicon Valley.

kossTKR11 days ago

Doesn't it look like the complete opposite is going to happen though?

Microsoft gobbles up all talent from OpenAI as they just gave everyone a position.

So we went from "Faux NGO" to, "For profit", to "100% Closed".

JumpCrisscross11 days ago

> Doesn't it look like the complete opposite is going to happen though?

Going from OpenAI to Microsoft means ceding the upside: nobody besides maybe Altman will make fuck-you money there.

I’m also not sure as some in Silicon Valley that this is antitrust proof. So moving to Microsoft not only means less upside, but also fun in depositions for a few years.

j-a-a-p11 days ago

Ha! One of my all-time favourites, the fuck-you position. The Gambler, the uncle giving advice:

You get up two and a half million dollars, any asshole in the world knows what to do: you get a house with a 25 year roof, an indestructible Jap-economy shitbox, you put the rest into the system at three to five percent to pay your taxes and that's your base, get me? That's your fortress of fucking solitude. That puts you, for the rest of your life, at a level of fuck you.

jonhohle11 days ago

I haven’t seen the movie, but it seems like Uncle Frank and I would get along just fine.

DebtDeflation11 days ago

No. OpenAI employees do not have traditional equity in the form of RSUs or Options. They have a weird profit-sharing arrangement in a company whose board is apparently not interested in making profits.

semiquaver11 days ago

Employee equity (and all investments) are capped at 100x, which is still potentially a hefty payday. The whole point of the structure was to enable competitive employee comp.

toomuchtodo11 days ago

Fuck you money was always a lottery ticket based on OpenAI's governance structure and "promises of potential future profit." That lottery ticket no longer exists, and no one else is going to provide it after seeing how the board treated their relationship with Microsoft and that $10B investment. This is a fine lifeboat for anyone who wants to continue on the path they were on with adults at the helm.

What might have been tens or hundreds of millions in common stakeholder equity gains will likely be single digit millions, but at least much more likely to materialize (as Microsoft RSUs).

jurgenaut2311 days ago

If I weren't so averse to conspiracy theories, I would think that this is all a big "coup" by Microsoft: Ilya conspired with Microsoft and Altman to get him fired by the board, just to make it easy for Microsoft to hire him back without fear of retaliation, along with all the engineers that would join him in the process.

Then, Ilya would apologize publicly for "making a huge mistake" and, after some period, would join Microsoft as well, effectively robbing OpenAI of everything of value. The motive? Unlocking the full financial potential of ChatGPT, which was until then locked down by the non-profit nature of its owner.

Of course, in this context, the $10 billion deal between Microsoft and OpenAI is part of the scheme, especially the part where Microsoft has full rights over ChatGPT IP, so that they can just fork the whole codebase and take it from there, leaving OpenAI in the dust.

But no, that's not possible.

dougmwne11 days ago

No, I don’t think there’s any grand conspiracy, but certainly MS was interested in leapfrogging Google by capturing the value from OpenAI from day one. As things began to fall apart there MS had vast amounts of money to throw at people to bring them into alignment. The idea of a buyout was probably on the table from day one, but not possible till now.

If there’s a warning, it’s to be very careful when choosing your partners and giving them enormous leverage on you.

campbel11 days ago

Sometimes you win and sometimes you learn. I think in this case MS is winning.

colordrops11 days ago

Conspiracy theories that involve reptilian overlords and ancient aliens are suspect. Conspiracy theories that involve collusion to make massive amounts of money are expected and should be treated as the most likely scenario. Occam's razor does not apply to human behavior, as humans will do the most twisted things to gain power and wealth.

My theory of what happened is identical to yours, and is frankly one of the only theories that makes any sense. Everything else points to these people being mentally ill and irrational, and their success technically and monetarily does not point to that. It would be absurd to think they clown-showed themselves into billions of dollars.

jowea11 days ago

Why would they be afraid of retaliation? They didn't sign sports contracts, they can just resign anytime, no? That just seems to overcomplicate things.

zoogeny11 days ago

I mean, I don't actually believe this. But I am reminded of 2016 when the Turkish president headed off a "coup" and cemented his power.

More likely, this is a case of not letting a good crisis go to waste. I feel the board was probably watching their control over OpenAI slip away into the hands of Altman. They probably recognized that they had a shrinking window to refocus the company along lines they felt was in the spirit of the original non-profit charter.

However, it seems that they completely misjudged the feelings of their employees as well as the PR ability of Altman. No matter how many employees actually would prefer the original charter, social pressure is going to cause most employees to go with the crowd. The media is literally counting names at this point. People will notice those who don't sign, almost like a loyalty pledge.

However, Ilya's role in all of this remains a mystery. Why did he vote to oust Altman and Brockman? Why has he now recanted? That is a bigger mystery to me than why the board took this action in the first place.

Schroedingers2c11 days ago

Will revisit this in a couple months.

paulddraper11 days ago

Yeah, there's no way this is a plan, but for sure this works out nicely.

sesutton11 days ago

Ilya posted this on Twitter:

"I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."

abraxas11 days ago

Trying to put the toothpaste back in the tube. I seriously doubt this will work out for him. He has to be the smartest stupid person that the world has seen.

bertil11 days ago

Ilya is hard to replace, and no one thinks of him as a political animal. He's a researcher first and foremost. I don't think he needs anything more than being contrite for a single decision made during a heated meeting. Sam Altman and the rest of the leadership team haven't got where they are by holding petty grudges.

He doesn't owe us, the public, anything, but I would love to understand his point of view during the whole thing. I really appreciate how he is careful with words and thorough when exposing his reasoning.

boringg11 days ago

Just because he's not a political animal doesn't mean he's immune to politics. I've seen 'irreplaceable' apolitical technical leaders be the reason for schisms in organizations, thinking they can lever their technical knowledge over the rest of the company, only to watch them get pushed aside and out.

bertil11 days ago
jacquesm11 days ago

For someone who isn't a political animal he made some pretty powerful political moves.

gryn11 days ago

researchers and academics are political within their organizations regardless of whether or not they claim to be or are aware of it.

ignorance of the political impact/influence is not a strength but a weakness, just like a baby holding a laser/gun.

guhcampos11 days ago

I've worked with this type multiple times. Mathematical geniuses with very little grasp of reality, easily manipulated into making all sorts of dumb mistakes. I don't know if that's the case, but it certainly smells like it.

strunz11 days ago

His post previous to that seems pretty ironic in that light -

strikelaserclaw11 days ago

He seriously underestimated how much rank and file employees want $$$ over an idealistic vision (and sam altman is $$$) but if he backs down now, he will pretty much lose all credibility as a decision maker for the company.

ergocoder11 days ago

If your compensation goes from 600k to 200k, you would care as well.

No idealistic vision can compensate for that.

strikelaserclaw11 days ago

Hey i would also be mad if i were in the rank and file employee position. Perhaps the non profit thing needs to be thought out a bit more.

derwiki11 days ago

Does that include the person who stole self-driving IP from Waymo, set up a company with stolen IP, and tried to sell the company to Uber?

dhruvdh11 days ago

At least he consistently works towards whatever he currently believes in. Though he could work on consistency in beliefs.

dylan60411 days ago

That seems rather harsh. We know he's not stupid, and you're clearly being emotional. I'd venture he probably made the dumbest possible move a smart person could make while also in a very emotional state. The lesson on the table for all to learn is that making big decisions in an emotional state does not often work out well.

nabla911 days ago

So this was a completely unnecessary cock-up -- still ongoing. Without Ilya's vote this would not even be a thing. This is really comical, a Naked Gun type of mess.

Ilya Sutskever is one of the best in AI research, but everything he and others do related to AI alignment turns into shit without substance.

It makes me wonder if AI alignment is possible even in theory, and if it is, maybe it's a bad idea.

coffeebeqn11 days ago

We can’t even get people aligned. Thinking we can control a super intelligence seems kind of silly.

colinsane11 days ago

i always thought it was the opposite. the different entities in a society are frequently misaligned, yet societies regularly persist beyond the span of any single person.

companies in a capitalist system are explicitly misaligned with each other; success of the individual within a company is misaligned with the success of the company whenever it grows large enough. parties within an electoral system are misaligned with each other; the individual is often more aligned with a third party, yet the less-aligned two-party system frequently rules. the three pillars of democratic government (executive, legislative, judicial) are said to exist for the sake of being misaligned with each other.

so AI agents, potentially more powerful than the individual human, might be misaligned with the broader interests of society (or of its human individuals). so are you and i and every other entity: why is this instance of misalignment worrisome to any disproportionate degree?

z711 days ago

>"I deeply regret my participation in the board's actions."

Wasn't he supposed to be the instigator? That makes it sound like he was playing a less active role than claimed.

siva711 days ago

It takes a lot of courage to do so after all this.

ShamelessC11 days ago

I think the word you're looking for is "fear".

averageRoyalty11 days ago

Maybe he'll head to Apple.

Xenoamorphous11 days ago

Or a couple of drinks.

tucnak11 days ago

To be fair, lots of people called this pretty early on; it's just that very few people were paying attention, and instead chose to accommodate the spin and immediately went into "following the money", a.k.a. blaming Microsoft, et al. The most surprising aspect of it all is the complete lack of criticism towards US authorities! We were shown this exciting play, as old as the world: a genius scientist being exploited politically by means of pride and envy.

The brave board of "totally independent" NGO patriots (one of whom is referred to, by insiders, as wielding influence comparable to a USAF colonel [1]) brand themselves as the new regime that will return OpenAI to its former moral and ethical glory, so the first thing they were forced to do was get rid of the main greedy capitalist, Altman; he's obviously the great seducer who brought their blameless organisation down by turning it into this horrible money-making machine. In his place they were going to put their nominal ideological leader, Sutskever, commonly referred to in various public communications as a "true believer". What does he believe in? In the coming of a literal superpower, and quite a particular one at that; in this case we are talking about AGI. The belief structure here is remarkably interlinked, and this can be seen by evaluating side-channel discourse from adjacent "believers", see [2].

Roughly speaking, and based on my experience in this kind of analysis (please give me some leeway as English is not my native language), what I see is all the infallible markers of operative work; we see security officers, we see their methods of work. If you are a hammer, everything around you looks like a nail. If you are an officer in the Clandestine Service, or in any of the dozens of sections across the counterintelligence function overseeing the IT sector, then you clearly understand that all these AI startups are, in fact, developing weapons and pose a direct threat to the strategic interests slash national security of the United States. The American security apparatus has a word to describe such elements: "terrorist." I was taught to look up when assessing the actions of the Americans, i.e. more often than not we're expecting nothing but the highest level of professionalism, leadership, and analytical prowess. I personally struggle to see how running parasitic virtual organisations in the middle of downtown San Francisco, and re-shuffling agent networks in key AI enterprises as blatantly as we saw over the weekend, is supposed to inspire confidence. Thus, in a tech startup in the middle of San Francisco, where it would seem there shouldn't be any terrorists, or otherwise ideologues in orange rags, they sit on boards and stage palace coups. Horrible!

I believe that US state-side counterintelligence shouldn't meddle in natural business processes in the US, and should instead make their policy on this stuff crystal clear using normal, legal means. Let's put a stop to this soldier mindset where you fear anything you can't understand. AI is not a weapon, and AI startups are not terrorist cells for them to run.



kashyapc11 days ago

Silicon Valley outsider here. Am I being harsh here?

I just bothered to look at the full OpenAI board composition. Besides Ilya Sutskever and Greg Brockman, why are these people eligible to be on the OpenAI board? Such young people, calling themselves "President of this", "Director of that".

- Adam D'Angelo — Quora CEO (no clue what he's doing on OpenAI board)

- Tasha McCauley — a "management scientist" (this is a new term for me); whatever that means

- Helen Toner — I don't know what exactly she does, again, "something-something Director of strategy" at Georgetown University, for such a young person

No wise veterans here to temper the adrenaline?

Edit: the term clusterf*** comes to mind here.

alephnerd11 days ago

Adam D'Angelo was brought in as a friend because Sam Altman led Quora's Series D around the time OpenAI was founded, and he is a board member of Dustin Moskovitz's Asana.

Dustin Moskovitz isn't on the board, but he gave OpenAI its initial $30M in funding via his non-profit Open Philanthropy. [0]

Tasha McCauley was probably brought in via the Singularity University/Kurzweil types who were at OpenAI in the beginning. She was also in the Open Philanthropy space.

Helen Toner was probably brought in due to her past work at Open Philanthropy - the Dustin Moskovitz-funded non-profit working on building OpenAI-type initiatives, which gave OpenAI the initial $30M [0] - and she was also close to Sam Altman.

Essentially, this is a donor-versus-investor battle. The donors aren't going to make money off OpenAI's commercial endeavors, which began in 2019.

It's similar to Elon Musk's annoyance at OpenAI going commercial even though he donated millions.

[0] -

kashyapc10 days ago

Thank you for the context; much appreciated. In short, it's all "I know a guy who knows a guy".

churchill11 days ago

Exactly this. I saw another commenter raise this point about Tasha (and Helen, if I remember correctly), noting that her LinkedIn profile is filled with SV jargon and indulge-the-wife thinktanks, but without any real experience taking products to market or scaling up technology companies.

Given the pool of talent they could have chosen from, their board makeup looks extremely poor.

mdekkers11 days ago

> indulge-the-wife thinktanks

Regardless of context, this is an incredibly demeaning comment. Shame on you

averageRoyalty11 days ago

It doesn't have to be taken that way. It's a pretty accurate description.

white_dragon8811 days ago

It’s not demeaning if it’s accurate. It’s part of the hubris that makes SV what it is. Ego tripping galore.

jdthedisciple11 days ago

Truth hurts sometimes, eh?

taylorlapeyre11 days ago

Helen Toner funded OpenAI with $30M, which was enough to get a board seat at the time.

mizzao11 days ago

Source? Where did that money come from?

alephnerd11 days ago

From Open Philanthropy - a Dustin Moskovitz funded non-profit working on building OpenAI type initiatives. They also gave OpenAI the initial $30M. She was their observer.

Aurornis11 days ago

The board previously had people like Elon Musk and Reid Hoffman. Greg Brockman was part of the board until he was ousted as well.

The attrition of industry business leaders, the ouster of Greg Brockman, and the (temporary, apparently) flipping of Ilya combined to give the short list of remaining board members outsized influence. They took this opportunity to drop a nuclear bomb on the company's leadership, which so far has backfired spectacularly. Even their first interim CEO had to be replaced already.

ur-whale11 days ago

This is the Silicon Valley's boy's club, itself an extension of the Stanford U. boys club.

"Meritocracy" is a very impolite word in these circles.

CPLX11 days ago

You can like D'Angelo or not but he was the CTO of Facebook.

SeanAnderson11 days ago

I woke up and the first thing on my mind was, "Any update on the drama?"

Did not expect to see this whole thing still escalating! WOW! What a power move by MSFT.

I'm not even sure OpenAI will exist by the end of the week at this rate. Holy moly.

alvis11 days ago

By the end of the week is over-optimistic. The last 3 days have felt like a million years. I bet the company will be gone by the time Emmett Shear wakes up.

jacknews11 days ago

Are these the final stages of the singularity?

jacquesm11 days ago

It's not over until the last stone in the avalanche stops moving, and it's anybody's guess right now what the final configuration will be.

But don't be surprised if Shear also walks before the week is out, if some board members resign but others try to hold on and if half of OpenAI's staff ends up at Microsoft.

HarHarVeryFunny11 days ago

Seems more like damage control than a power move. I'm sure their first choice was to reinstate Altman and get more control over OpenAI's governance. What they've achieved here is temporarily neutralizing Altman/Brockman from starting a competitor, at the cost of potentially destroying OpenAI (on whom they remain dependent for the next couple of years) if too many people quit.

Seems a bit of a lose-lose for MSFT and OpenAI, even if best that MSFT could do to contain the situation. Competitors must be happy.

SeanAnderson11 days ago

Disagree. MSFT extending an open invitation to all OpenAI employees to work under sama at a subsidiary of MSFT sounds to me like it'll work well for them. They'll get 80% of OpenAI for negative money - assuming they ultimately don't need to pay out the full $10B in cloud compute credits.

Competitors should be fearful. OpenAI was executing with weights around their ankles by virtue of trying to run as a weird "need lots of money but can't make a profit" company. Now they'll be fully bankrolled by one of the largest companies the world has ever seen and empowered by a whole bunch of hypermotivated-through-retribution leaders.

HarHarVeryFunny11 days ago

AFAIK MSFT/Altman can't just fork GPT-N and continue uninterrupted. All MSFT has rights to is weights and source code - not the critical (and slow to recreate) human-created and curated training data, or any of the development software infrastructure that OpenAI has built.

The leaders may be motivated by retribution, but I'm sure none of the leaders or researchers really wants to be a division of MSFT rather than a cool start-up. Many developers may choose to stay in SF and create their own startups, or join others. Signing the letter isn't a commitment to go to MSFT - just a way to pressure for a return to the status quo they were happy with.

Not everyone is going to stay with OpenAI or move to MSFT - some developers will move elsewhere and the knowledge of OpenAI's secret sauce will spread.

RivieraKid11 days ago

I'm cancelling my Netflix subscription, I don't need it.

crazygringo11 days ago

But boy will I renew it when this gets dramatized as a limited series.

This is some Succession-level shenanigans going on here.

Jesse Eisenberg to play Altman this time around?

iandanforth11 days ago

I'm thinking more like "24"

leroy_masochist11 days ago

Can we have a quick moment of silence for Matt Levine? Between Friday afternoon and right now, he has probably had to rewrite today's Money Stuff column at least 5 or 6 times.

defaultcompany11 days ago

"Except that there is a post-credits scene in this sci-fi movie where Altman shows up for his first day of work at Microsoft with a box of his personal effects, and the box starts glowing and chuckles ominously. And in the sequel, six months later, he builds Microsoft God in Box, we are all enslaved by robots, the nonprofit board is like “we told you so,” and the godlike AI is like “ahahaha you fools, you trusted in the formalities of corporate governance, I outwitted you easily!” If your main worry is that Sam Altman is going to build a rogue AI unless he is checked by a nonprofit board, this weekend’s events did not improve matters!"

Reading Matt Levine is such a joy.

hotsauceror11 days ago

Didn't he say that he was taking Friday off, last week? The day before his bete noire Elon Musk got into another brouhaha and OpenAI blew up?

I think he said once that there's an ETF that trades on when he takes vacations, because they keep coinciding with Events Of Note.

jagraff11 days ago

He takes every Friday off

soderfoo11 days ago

Deservedly or not, Satya Nadella will look like a genius in the aftermath. He has and will continue to leverage this situation to strengthen MSFT's position. Is there word of any other competitors attempting to capitalize here? Trying to poach talent? Anything...

godzillabrennus11 days ago

After Ballmer I couldn’t have imagined such competency from Microsoft.

jq-r11 days ago

After Ballmer, competency can only be higher at Microsoft.

alephnerd11 days ago

Ballmer honestly wasn't that bad. He gave executive backing to Azure and the larger Infra push in general at MSFT.

Search and Business Tools were misses, but they more than made up for it with Cloud, Infra, and Security.

Also, Nadella was Ballmer's pick.

physicles11 days ago

Also, Nadella last month repudiated his own decision to cancel Windows Phone. Purchasing Nokia was one of the last things Ballmer did.

mjirv11 days ago

The key line:

“Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join.”

sebzim450011 days ago

I think everyone assumed this was an acquihire without the "acqui-" but this is the first time I've seen it explicitly stated.

catchnear432111 days ago

hostile takeunder?

epups11 days ago

Love it. Could also be called a hostile giveover, considering the OpenAI board gifted this opportunity to Microsoft

jacquesm11 days ago

That's perfect.

jonbell11 days ago

You win

nextworddev11 days ago

will they stay though? what happens to their OAI options?

teeray11 days ago

Will their OAI options be worth anything if the implosion continues?

baby_souffle11 days ago

What will happen to their newly granted MSFT shares? Those can be sold _today_ and might be worth a lot more soon…

almost_usual11 days ago

MSFT RSUs actually have value as opposed to OpenAI’s Profit Participation Units (PPU).

nottheengineer11 days ago

Sounds a lot like MS wants to have OpenAI, but without a board that considers pesky things like morals.

Fluorescence11 days ago

Time for a counter-counter-coup that ends up with Microsoft under the Linux Foundation after RMS reveals he is Satoshi...

tmerse11 days ago

You mean the GNU Linux Foundation?

Justsignedup11 days ago

RMS (I assume Richard Stallman) may be many many many things, but setting up a global pyramid scheme doesn't seem to be his M.O.

But stranger things have happened. One day I may be very very VERY surprised.

ric2b11 days ago

The year of the Linux Microsoft.

code_runner11 days ago

again, nobody has shown even a glimmer of the board operating with morality as their focus. We just don't know. We do know that a vast majority of the company doesn't trust the board, though.

xiphias211 days ago

Sam just gave 3 hearts to Ilya as well... I hope the drama continues and he joins MS at this point.

jdthedisciple11 days ago

Whose morals again?

bertil11 days ago

That is a spectacular power move: extending 700 job offers, many of which would be close to $1 million per year compensation.

layer811 days ago

They didn’t say anything about the compensation.

rvz11 days ago

So essentially, OpenAI is a sinking ship as long as the board members go ahead with their new CEO and Sam and Greg are not returning.

Microsoft can absorb all the employees into the new AI subsidiary, which is basically an acqui-hire without buying out everyone else's shares, making a new DeepMind/OpenAI-style research division inside the company.

So all along it was a long-winded side-step into having a new AI division without all the regulatory headaches of a formal acquisition.

JumpCrisscross11 days ago

> OpenAI is a sinking ship as long as the board members go ahead with their new CEO and Sam, Greg are not returning

Far from certain. One, they still control a lot of money and cloud credits. Two, they can credibly threaten to license to a competitor or even open source everything, thereby destroying the unique value of the work.

> without all the regulatory headaches of a formal acquisition

This, too, is far from certain.

s1artibartfast11 days ago

>Far from certain. One, they still control a lot of money and cloud credits.

This too is far from certain. The funding and credits was at best tied to milestones, and at worst, the investment contract is already broken and msft can walk.

I suspect they would not actually do the latter, and that the IP is tied to a continuing partnership.

jacquesm11 days ago

And sue for the assets of OpenAI on account of the damage the board did to their stock... and end up with all of the IP.

lotsofpulp11 days ago

On what basis would one entity be held responsible for another entity’s stock price, without evidence of fraud? Especially a non profit.

jlokier11 days ago

The value of OpenAI's own assets in the for-profit subsidiary may drop due to recent events.

Microsoft is a substantial shareholder (49%) in that for-profit subsidiary, so the value of Microsoft's asset has presumably reduced due to OpenAI's board decisions.

OpenAI's board decisions which resulted in these events appear to have been improperly conducted: two of the board's members, notably the chair, weren't aware of its deliberations or the outcome until the last minute. A board's decisions have legal weight because they are collective. It's fine to patch things up afterwards if the board agrees, for people to take breaks, etc. But if some directors intentionally excluded other directors from such a major decision (and from formal deliberations) affecting the value and future of the company, that leaves the board's decision open to legal challenge.

Hypothetically, Microsoft could sue and offer to settle. OpenAI might then not have enough funds if it would lose, so it might have to sell shares in the for-profit subsidiary, or transfer them. Microsoft only needs about 2% more to become majority shareholder of the for-profit subsidiary, which runs the ChatGPT services.


joshstrange11 days ago

If Microsoft emerges as the "winner" from all of this, then I think we are all the "losers". Not that I think OpenAI was perfect or "good"; it's just that MS taking the cake is not good for the rest of us. It already feels crazy that people are just fine with them owning what they do, and how important it is to our development ecosystem (I'm talking about things like GitHub/VSCode). I don't like the idea of them also owning the biggest AI initiative.

_vere11 days ago

I will never not be mad at the fact that they built a developer base by making all their tech open source, only to take it all away once it became remotely financially viable to do so. With how close "Open"AI is with Microsoft, it really does not seem like there is a functional difference in how they ethically approach AI at all.

wxw11 days ago

Ilya signed it??? He's on the board... This whole thing is such an implosion of ambition.

victoryhb11 days ago

Most people who sympathized with the board prior to this would have assumed that the presumed culprit, the legendary Ilya, had thought through everything and was ready to sacrifice anything for a cause he champions. It appears that is not the case.

xivzgrev11 days ago

I think he orchestrated the coup on principle, but severely underestimated the backlash and power that other people had collectively.

Now he’s trying to save his own skin. Sam will probably take him back on his own technical merits but definitely not in any position of power anymore

When you play the game of thrones, you win or you die

Just because you are a genius in one domain does not mean you are in another

What’s funny is that everyone initially “accepted” the firing. But no one liked it. Then a few people (like Greg) started voting with their feet, which empowered others, which has culminated in this tidal shift.

It will make a fascinating case study some day on how not to fire your CEO

falleng0d11 days ago

he even posted an apology:

what the actual fuck =O

EVa5I7bHFq9mnYK11 days ago

I knew it was Joseph Gordon-Levitt's plot all along!

miyuru11 days ago

I don't know if you are joking or not, but one of the board members is Joseph Gordon-Levitt's wife.

ShamelessC11 days ago

(yes that was the joke)

FuriouslyAdrift11 days ago

I'm going to take a leap of intuition and say all roads lead back to Adam D'Angelo for the coup attempt.

Terretta11 days ago

> all roads lead back Adam d'Angelo

Maybe someone thinks Sam was “not consistently candid” about the fact that one of the feature bullets in the latest release was dropping D'Angelo's Poe directly into the ChatGPT app for no additional charge.

Given the Dev Day timing and the update releasing these "GPTs", this is an entirely plausible timeline.

toomuchtodo11 days ago

They did not expect Microsoft to take everything and walk away, and did not realize how little pull they actually had.

If you made a comment recently about de jure vs de facto power, step forward and collect your prize.

serial_dev11 days ago

You come at the king, you best not miss. If you do, make sure to apologize on Twitter while you can.

jacquesm11 days ago

Naive is too soft a word. How can you be so smart and so out of touch at the same time?

rdsubhas11 days ago

IQ and EQ are different things. Some people are technically smart enough to know a trillion side effects of technical systems, but can be really bad/binary/shallow at knowing the second-order effects of human dynamics.

Ilya's role is Chief Scientist. It may be fair to give him at least some benefit of the doubt. He was vocal/direct/binary, and he also vocally apologized and walked it back. In human dynamics, I'd usually look for the silent orchestrator behind the scenes that nobody talks about.

jacquesm11 days ago

I'm fine with all that in principle, but then you shouldn't be throwing your weight around in board meetings; probably you shouldn't be on the board to begin with, because it is a handicap in trying to evaluate the potential outcomes of the decisions the board has to make.

smolder11 days ago

I don't think this is necessarily about different categories of intelligence... Politicking and socializing are skills that require time and mental energy to build, and can even atrophy. If you spend all your time worrying about technical things, you won't have as much time to build or maintain those skills. It seems to me like IQ and EQ are more fundamental and immutable than that, but maybe I'm making a distinction where there isn't much of one.


smolder11 days ago

Specialized learning and focus often comes at the cost of generalized learning and focus. It's not zero sum, but there is competition between interests in any person's mind.

code_runner11 days ago

in my experience these things will typically go hand in hand. There is also an argument to be made that being smart at building ML models and being smart in literally anything else have nothing to do with each other.

ozgung11 days ago

Wow, lots of drama and plot twists for the writers of the Netflix mini-series.

sva_11 days ago

The great drama of our time (this week)

charlieyu111 days ago

I don't think I have seen a bigger U-turn

DebtDeflation11 days ago

I was looking down the list and then saw Ilya. Just when you think this whole ordeal can't get any more insane.

JumpCrisscross11 days ago

Yeah, what the hell?

Do we know why Murati was replaced?

sebzim450011 days ago

Apparently she tried to rehire Sam and Greg.

I don't think she actually had anything to do with the coup, she was only slightly less blindsided than everyone else.

JumpCrisscross11 days ago

To be fair, that is a stupid first move to make as the CEO who was just hired to replace the person deposed by the board. (Though I’m still confused about Ilya’s position.)

blackoil11 days ago

If you know the company will implode and you'll be CEO of a shell, it is better to get the board to reverse course. It isn't like she was part of the decision-making process.

deeviant11 days ago

With nearly the entire team of engineers threatening to leave the company over the coup, was it a stupid move?

The board is going to be overseeing a company of 10 people as things are going.

maxlamb11 days ago

But wouldn’t the coup have required 4 votes out of 6 which means she voted yes? If not then the coup was executed by just 3 board members? I’m confused.

StephenAshmore11 days ago

Mira isn't on the board, so she didn't have a vote in this.

crazygringo11 days ago

Generally speaking, 4 members is the minimum quorum for a board of 6, and 3 out of 4 is a majority decision.

I don't know if it was 3 or 4 in the end, but it may very well have been possible with just 3.
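For what it's worth, that arithmetic can be sketched in a few lines. This assumes the common default rules (a majority of the full board as quorum, and a majority of those present to pass); OpenAI's actual bylaws may well differ, so treat it as an illustration, not a claim about their governance documents:

```python
def vote_passes(board_size: int, present: int, ayes: int) -> bool:
    """Check whether a board vote carries under common default rules."""
    quorum = board_size // 2 + 1  # majority of the full board must attend
    if present < quorum:
        return False              # no quorum, no valid vote
    return ayes > present / 2     # majority of those present carries it

# With 6 seats: 4 present is the minimum quorum, and 3 ayes of 4 pass.
print(vote_passes(6, 4, 3))  # True  - 3 of 4 present is a majority
print(vote_passes(6, 3, 3))  # False - only 3 present, below quorum of 4
```

So under these assumptions, 3 directors could indeed carry the vote, but only if a 4th was in the room.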

ketzo11 days ago

Murati is/was not a board member.

simonw11 days ago

I heard it was because she tried to hire Sam and Greg back.

kranke15511 days ago

So who's against it and why ?

I wonder if it will take 20 years to learn the whole story.

simonw11 days ago

The amount that's leaked out already - over a weekend - makes me think we'll know the full details of everything within a few days.

throwaway7485211 days ago

The dude is a quack.

Bostonian11 days ago

I think the names listed are the recipients of the letter (the board), not the signers.

dxyms11 days ago

There are only 4 people on the board.

gadders11 days ago

I think it was Mark Zuckerberg that described (pre-Elon) Twitter as a clown car that fell into a gold mine.

Reminds me a bit of the Open AI board. Most of them I'd never heard of either.

anonylizard11 days ago

This makes the old twitter look like the Wehrmacht in comparison.

The old twitter did not decide to randomly detonate themselves when they were worth $80 billion. In fact they found a sucker to sell to, right before the market crashed on perpetually loss-making companies like twitter.

ergocoder11 days ago

The benefit of having incentive-aligned board, founders, and execs.

Even the clown car isn't this bad.

Kye11 days ago

That's a confused heuristic. It could just as easily mean they keep their heads down and do good work for the kind of people whose attention actually matters for their future employment prospects.

hawski11 days ago

I often hear that about the OpenAI board, but in general, do people here know most board members of the big/darling tech companies? Outside of some of the co-founders, I don't know anyone.

gadders11 days ago

I don't mean I know them personally, but they don't seem to be major names in the manner of (as you see down thread) the Google Founders bringing in Eric Schmidt.

They seem more like the sort of people you'd see running wikimedia.

hawski11 days ago

I meant "know" in the sense you used "heard".

renegade-otter11 days ago

Perhaps we can stop pretending that some of these people who are top-level managers or who sit on boards are prodigies. Dig deeper and there is very little there - just someone who can afford to fail until they drive the clown car into that gold mine. Most of us who have to put food on the table and pay rent have much less room for error.

cmrdporcupine11 days ago

You know, this makes early Google's moves around its IPO look like genius in retrospect. In that case, brilliant but inexperienced founders majorly lucked out with the thing created... but were also smart enough to bring in Eric Schmidt and others with deeper tech industry business experience for "adult supervision" exactly in order to deal with this kind of thing. And they gave tutelage to L&S to help them establish sane corporate practices while still sticking to the original (at the time unorthodox) values that L&S had in mind.

For OpenAI... Altman (and formerly Musk) were not that adult supervision. Nor is the board they ended up with. They needed some people on that board and in the company to keep things sane while cherishing the (supposed) original vision.

(Now, of course that original Google vision is just laughable as Sundar and Ruth have completely eviscerated what was left of it, but whatever)

taylorius11 days ago

>but were also smart enough to bring in Eric Schmidt and others with deeper tech >industry business experience for "adult supervision"

>(Now, of course that original Google vision is just laughable as Sundar and Ruth >have completely eviscerated what was left of it, but whatever)

Those two things happening one after another is not coincidence.

cmrdporcupine11 days ago

I'm not sure I agree. Having worked there through this transition, I'd say this: L&S just seem to have lost interest in running a mature company, so their "vision" meant nothing, Eric Schmidt basically moved on, and then, after flailing about for a bit (the G+ stuff being the worst of it), they just handed the reins to Ruth & Sundar to basically turn it into a giant stock-price-pumping machine.

voiceblue11 days ago

G+ was handled so poorly, and the worst of it was that they already had both Google Wave (in the US) and Orkut (mostly outside US) which both had significant traction and could’ve easily been massaged into something to rival Facebook.

Easily…anywhere except at a megacorp where a privacy review takes months and you can expect to make about a quarter worth of progress a year.

theGnuMe11 days ago

All successful companies succeed despite themselves.

garciasn11 days ago

Working in consultancies/agencies for the last 15 years, I see this time and time again. Fucking dart-throwing monkeys making money hand over fist despite their best intentions to lose it all.

Emma_Goldman11 days ago

I don't really understand why the workforce is swinging unambiguously behind Altman. The core of the narrative thus far is that the board fired Altman on the grounds that he was prioritising commercialisation over the not-for-profit mission of OpenAI written into the organisation's charter.[1] Given that Sam has since joined Microsoft, that seems plausible on its face.

The board may have been incompetent and shortsighted. Perhaps they should even try and bring Altman back, and reform themselves out of existence. But why would the vast majority of the workforce back an open letter failing to signal where they stand on the crucial issue - on the purpose of OpenAI and their collective work? Given the stakes which the AI community likes to claim are at issue in the development of AGI, that strikes me as strange and concerning.


FartyMcFarter11 days ago

> I don't really understand why the workforce is swinging unambiguously behind Altman.

Maybe it has to do with them wanting to get rich by selling their shares - my understanding is there was an ongoing process to get that happening [1].

If Altman is out of the picture, it looks like Microsoft will assimilate a lot of OpenAI into a separate organisation and OpenAI's shares might become worthless.


anon8487362811 days ago

Yeah, "OpenAI employees would actually prefer to make lots of money now" seems like a plausible answer by default.

It's easy to be a true believer in the mission _before_ all the money is on the table...

fizx11 days ago

My estimate is that a typical staff engineer who'd been at OpenAI for 2+ years could have sold $8 million of stock next month. I'd be pissed too.

ergocoder11 days ago

No way it is this much.

leetharris11 days ago


What people don't realize is that Microsoft doesn't own the data or models that OpenAI has today. Yeah, they can poach all the talent, but it still takes an enormous amount of effort to create the dataset and train the models the way OpenAI has done it.

Recreating what OpenAI has done over at Microsoft will be nothing short of a herculean effort and I can't see it materializing the way people think it will.

Finbarr11 days ago

Except MSFT does have access to the IP, and MSFT has access to an enormous trove of their own data across their office suite, Bing, etc. It could be a running start rather than a cold start. A fork of OpenAI inside an unapologetic for profit entity, without the shackles of the weird board structure.

jdminhbg11 days ago

Microsoft has full access to code and weights as part of their deal.

ben_w11 days ago

Even if they don't, the OpenAI staff already know 99 ways to not make a good GPT model and can therefore skip those experiments much faster than anyone else.

htrp11 days ago

> Even if they don't, the OpenAI staff already know 99 ways to not make a good GPT model and can therefore skip those experiments much faster than anyone else.

This, unequivocally... knowing how not to waste a very expensive training run is a great lesson.

returningfory211 days ago

This comment is factually incorrect. As part of the deal with OpenAI, Microsoft has access to all of the IP, model weights, etc.

baron81611 days ago

Correct. This is all really bad for Microsoft and probably great for Google. Yet, judging by price changes right now, markets don’t seem to understand this.

grumple11 days ago

But doesn't Altman joining Microsoft, and them quitting and following, put them back at square 0? MS isn't going to give them millions of dollars each to join them.

FartyMcFarter11 days ago

That's why they'd rather Altman rejoins OpenAI as mentioned.

kyle_grove10 days ago

The behavior of various actors in this saga indeed seems to indicate that 'Altman and OpenAI employees back at OpenAI' is preferred by those actors over 'Altman and OpenAI employees join Microsoft en masse'.

averageRoyalty11 days ago

Surely they're already extremely rich? I'd imagine working for a 700 person company leading the world in AI pays very well.

maxlamb11 days ago

Only rich in stocks. Salaries are high for sure but probably not enough to be rich by Bay Area standards

averageRoyalty10 days ago

Sure, but by pretty much any other standard? Over $170k USD puts you in the top 10% of income earners globally. If you work at this wage point for 3-5 years and then move somewhere (almost anywhere globally, or in the US), you can afford a comfortable life and probably work 2-3 days a week for decades if you choose.

This is nothing but greed.

dclowd990111 days ago

Ugh, I’ve never been more disenchanted with a group of people in my life. Not only are they comfortable with writing millions of jobs out of existence, they're taking a fat paycheck to do it. At least with the “non-profit” mission keystone, we had some plausible deniability that greed rules all, but of fucking course it does.

All my hate to the employees and researchers of OpenAI, absolutely frothing at the mouth to destroy our civilization.

appel11 days ago

That sounds like a reasonable assessment, FartyMcFarter.

mcny11 days ago

> I don't really understand why the workforce is swinging unambiguously behind Altman.

I have no inside information. I don't know anyone at Open AI. This is all purely speculation.

Now that that's out of the way, here is my guess: money.

These people never joined OpenAI to "advance sciences and arts" or to "change the world". They joined OpenAI to earn money. They think they can make more money with Sam Altman in charge.

Once again, this is completely all speculation. I have not spoken to anyone at Open AI or anyone at Microsoft or anyone at all really.

ta124311 days ago

> These people never joined OpenAI to "advance sciences and arts" or to "change the world". They joined OpenAI to earn money

Getting Cochrane vibes from Star Trek there.

> COCHRANE: You wanna know what my vision is? ...Dollar signs! Money! I didn't build this ship to usher in a new era for humanity. You think I wanna go to the stars? I don't even like to fly. I take trains. I built this ship so that I could retire to some tropical island filled with ...naked women. That's Zefram Cochrane. That's his vision. This other guy you keep talking about. This historical figure. I never met him. I can't imagine I ever will.

I wonder how history will view Sam Altman

imjonse11 days ago

There are non-negligible chances that history will be written by Sam Altman and his GPT minions, so he'll probably be viewed favorably.

jonahrd11 days ago

I'm not sure I fully buy this, only because how would anyone be absolutely certain that they'd make more with Sam Altman in charge? It feels like a weird thing to speculatively rally behind.

I'd imagine there's some internal political drama going on or something we're missing out on.

DeIlliad11 days ago

I fully buy it. Ethics and morals are a few rungs on the ladder beneath compensation for most software engineers. If the board wants to focus on being a non-profit and on safety, while Altman wants to focus on commercialization and the economics of the business, and my priority is money, then where my loyalty goes is obvious.

lisper11 days ago

> how would anyone be absolutely certain that they'd make more with Sam Altman in charge?

Why do you think absolute certainty is required here? It seems to me that "more probable than not" is perfectly adequate to explain the data.

Emma_Goldman11 days ago

Really? If they work at OpenAI they are already among the highest lifetime earners on the planet. Favouring moving oneself from the top 0.5% of global lifetime earners to the top 0.1% (or whatever the percentile shift is) over the safe development of a potentially humanity-changing technology would be depraved.

EDIT: I don't know why this is being downvoted. My speculation as to the average OpenAI employee's place in the global income distribution (of course wealth is important too) was not snatched out of thin air. See:

jacquesm11 days ago

Why be surprised? This is exactly how it has always been: the rich aim to get even richer and if that brings risks or negative effects for the rest that's A-ok with them.

That's what I didn't understand about the world of the really wealthy people until I started interacting with them on a regular basis: they are still aiming to get even more wealthy, even the ones that could fund their families for the next five generations. With a few very notable exceptions.

logicchains11 days ago
jbombadil11 days ago

I don't know how much OpenAI pays. But for this reply, I'm going to assume it's in line with what other big players in the industry pay.

I legitimately don't understand comments that dismiss the pursuit of better compensation because someone is "already among the highest lifetime earners on the planet."

Superficially it might make sense: if you already have all your lifetime economic needs satisfied, you can optimize for other things. But does working in OpenAI fulfill that for most employees?

I probably fall into that "highest earners on the planet" bucket statistically speaking. I certainly don't feel like it: I still live in a one bedroom apartment and I'm having to save up to put a downpayment on a house / budget for retirement / etc. So I can completely understand someone working for OpenAI and signing such a letter if a move the board made would cut down their ability to move their family into a house / pay down student debt / plan for retirement / etc.

crazygringo11 days ago

> over the safe development

Not if you think the utterly incompetent board proved itself totally untrustworthy on safe development, while Microsoft, a relatively conservative, staid corporation, is seen as ultimately far more trustworthy.

Honestly, of all the big tech companies, Microsoft is probably the safest of all, because it makes its money mostly from predictable large deals with other large corporations to keep the business world running.

It's not associated with privacy concerns the way Google is, with advertisers the way Meta is, or with walled gardens the way Apple is. Its culture these days is mainly about making money in a low-risk, straightforward way through Office and Azure.

And relative to startups, Microsoft is far more predictable and less risky in how it manages things.

scythe11 days ago
ben_w11 days ago

Apple's walled gardens are probably a good thing for safe AI, though they're a lot quieter about their research — I somehow missed that they even had any published papers until I went looking:

gdhkgdhkvff11 days ago

If you were offered a 100% raise and kept current work responsibilities to go work for, say, a tobacco company, would you take the offer? My guess is >90% of people would.

Funny how the cutoff for “morals should be more important than wealth” is always {MySalary+$1}.

Don’t forget, if you’re a software developer in the US, you’re probably already in the top 5% of earners worldwide.

lol76811 days ago

You only have to look at humanity's history to see that people will make this decision over and over again.

atishay81111 days ago

It just makes more sense to build it in an entity with better funding and commercialization. There will be 2-3 advanced AIs, and the most humane one doesn't necessarily win out; the one that wins is the one with the most resources, that is used and supported by the most people, and that can do the most. At this point it doesn't seem OpenAI can get that. It seems to be a lose-lose to stay at OpenAI: you lose the money and the potential to create something impactful and safe.

It is wrong to assume Microsoft cannot build a safe AI within a separate OpenAI-2, and do it better than a for-profit wrapped in a non-profit structure.

iLoveOncall11 days ago

> If they work at OpenAI they are already among the highest lifetime earners on the planet

Isn't the standard package $300K + equity (= nothing if your board is set on making your company non-profit)?

It's nothing to scoff at, but it's hardly top or even average pay for the kind of profiles working there.

It makes perfect sense that they absolutely want the company to be for-profit and listed; that's how they all become millionaires.

Arainach11 days ago

Focusing on "global earnings" is disingenuous and dismissive.

In the US, and particularly in California, there is a huge quality of life change going from 100K/yr to 500K/yr (you can potentially afford a house, for starters) and a significant quality of life change going from 500K/yr to getting millions in an IPO and never having to work again if you don't want to.

How those numbers line up to the rest of the world does not matter.

Emma_Goldman11 days ago
golergka11 days ago

> over the safe development of a potentially humanity-changing technology

Maybe people who are actually working on it, and who are also the world's best researchers, have a better understanding of the safety concerns?

chr111 days ago

Or maybe they have good reason to believe that all the talk about "safe development" doesn't contribute anything useful to safety, and simply slows down development?

changoplatanero11 days ago

Status is a relative thing and openai will pay you much more than all your peers at other companies.

dayjah11 days ago

Start-ups thrive, in part, by creating a sense of camaraderie. Sam isn’t just their boss, he’s their leader, he’s one of them, they believe in him.

You go to bat for your mates, and this is what they’re doing for him.

The sense of togetherness is what allows folks to pull together in stressful times, and it is bred by pulling together in stressful times. IME it’s a core ingredient to success. Since OAI is very successful it’s fair to say the sense of togetherness is very strong. Hence the numbers of folks in the walk out.

throwaway4aday11 days ago

Not just Sam, since Greg stuck with Sam and immediately quit he set the precedent for the rest of the company. If you read this post[0] by Sam about Greg's character and work ethic you'll understand why so many people would follow him. He was essentially the platoon sergeant of OpenAI and probably commands an immense amount of loyalty and respect. Where those two go, everyone will follow.


dayjah11 days ago

Absolutely! Thanks for pointing out that I missed Greg in my answer.

paulddraper11 days ago

> I don't really understand why the workforce is swinging unambiguously behind Altman.

Lots of reasons, or possible reasons:

1. They think Altman is a skilled and competent leader.

2. They think the board is unskilled and incompetent.

3. They think Altman will provide commercial success to the for-profit as well as fulfilling the non-profit's mission.

4. They disagree or are ambivalent towards the non-profit's mission. (Charters are not immutable.)

Sunhold11 days ago

Why should they trust the board? As the letter says, "Despite many requests for specific facts for your allegations, you have never provided any written evidence." If Altman took any specific action that violated the charter, the board should be open about it. Simply trying to make money does not violate the charter and is in fact essential to their mission. The GPT Store, cited as the final straw in leaks, is actually far cleaner money than investments from megacorps. Commercializing the product and selling it directly to consumers reduces dependence on Microsoft.

supriyo-biswas11 days ago

Ultimately people care a lot more about their compensation, since that is what pays the bills and puts food on the table.

Since OpenAI's commercial prospects are doomed now, and it is uncertain whether it can continue operations if Microsoft withholds resources and consumers switch away to alternative LLM/embeddings services with more level-headed leadership, OpenAI will eventually turn into a shell of itself, which affects compensation.

nvm0n211 days ago

> I don't really understand why the workforce is swinging unambiguously behind Altman.

Maybe because the alternative is being led by lunatics who think like this:

You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

to which the only possible reaction is




That right there is what happens when you let "AI ethics" people get control of something. Why would anyone work for people who believe that OpenAI's mission is consistent with self-destruction? This is a comic book super-villain style of "ethics", one in which you conclude the village had to be destroyed in order to save it.

If you are a normal person, you want to work for people who think that your daily office output is actually pretty cool, not something that's going to destroy the world. A lot of people have asked what Altman was doing there and why people there are so loyal to him. It's obvious now that Altman's primary role at OpenAI was to be a normal leader that isn't in the grip of the EA Basilisk cult.

DrJaws11 days ago

maybe the workforce is not really behind the non-profit foundation and wants the shares to skyrocket so they can sell and be well off for life.

at the end of the day, the people working there are not rich like the founders, and money talks when you have to pay rent, eat, and send your kids to a private college.

ssnistfajen11 days ago

Seems like the board just didn't explain any of this to the staff at all. So of course they are going to take the side that signals business as usual, instead of siding with the people trying to destroy the hottest tech company on the planet (and their jobs/comp) for no apparent reason. If the board had said anything at all, the ratio of staff threatening to quit probably wouldn't be this lopsided.

wenyuanyu11 days ago

I guess employees are compensated with PPUs, and at face value before the saga those could be 90% or even more of the total value of their packages. How many people are really willing to see 90% of their compensation wiped out? On the other hand, M$ offers to match. The day employees are compensated with the stock of the for-profit arm, everything that happened after Friday is set.

bart_spoon11 days ago

Perhaps because, for all of Silicon Valley and the tech industry's platitudes about wanting to make the world a better place, 90% of them are solely interested in the fastest path to wealth.

barbariangrunge11 days ago

> The core of the narrative thus far

Could somebody clarify for me: how do we know this? Is there an official statement, or statements by specific core people? I know the HN theorycrafters have been saying this since the start before any details were available

ninepoints11 days ago

Imagine putting all your energy behind the person who thinks worldcoin is a good idea...

barryrandall11 days ago

That's a pretty solid no-confidence vote in the board and their preferred direction.

zoogeny11 days ago

I believe it is hard to understand these kinds of movements because there isn't one reason. As has been mentioned, it may be money for some. For others it may be anger over what they feel was the board mishandling the situation and precipitating this mess. For others it may be loyalty. For others, peer pressure. Etc.

This has moved from the kind of decision a person makes on their own, based on their own conscience, and has become a public display. The media is naming names and publicly counting the ballots. There is a reason democracy happens with secret ballots.

Consider this, if 500 out of 770 employees signed the letter - do you want to be someone who didn't? How about when it gets to 700 out of 770? Pressure mounts and people find a reason to show they are all part of the same team. Look at Twitter and many of the employees all posting "OpenAI is nothing without its people". There is a sense of unity and loyalty that is partially organic and partially manufactured. Do you want to be the one ostracized from the tribe?

This outpouring has almost nothing to do with profit vs non-profit. People are not engaging their critical-thinking brains; they're using their social/emotional brains. They are putting community before rationality.

jkaplan11 days ago

Probably some combination of:

1. Pressure from Microsoft and their e-team

2. Not actually caring about those stakes

3. A culture of putting growth/money above all

kashyapc11 days ago

(I can't comment on the workforce question, but one thing below on bringing SamA back.)

Firstly, to give credit where it's due: whatever his faults may be, Altman, as the (now erstwhile) front man of OpenAI, did help bring ChatGPT to the popular consciousness. I think it's reasonable to call it a "mini inflection point" in the greater AI revolution. We have to grant him that. (I criticized Altman harshly enough two days ago[1]; just trying not to go overboard, and there's more below.)

That said, my (mildly-educated) speculation is that bringing Altman back won't help. Given his background and track record so far, his unstated goal might simply be the good old "make loads of profit" (nothing wrong with it when viewed through a certain lens). But as I've already stated[1], I don't trust him as a long-term steward, let alone for such important initiatives. Making a short-term splash with ChatGPT is one thing, but turning it into something more meaningful in the long term is a whole other beast.

These sort of Silicon Valley top dogs don't think in terms of sustainability.

Lastly, I've just looked at the board[2], and I'm now left wondering how all these young folks (I'm approximately their age) without sufficiently in-depth "worldly experience" (sorry for the fuzzy term; it's hard to expand on) can be in such roles.



PKop11 days ago

The workforce prefers the commercialization/acceleration path, not the "muh safetyism" and over-emphasis on moralism of the non-profit contingent.

They want to develop powerful shit and do it at an accelerated pace, and make money in the process not be hamstrung by busy-bodies.

The "effective altruism" types give people the creeps. It's not confusing at all why they would oppose this faction.

dreamcompiler11 days ago

> I don't really understand why the workforce is swinging unambiguously behind Altman.

I expect there's a huge amount of peer pressure here. Even for employees who are motivated more by principles than money, they may perceive that the wind is blowing in Altman's direction and if they don't play along, they will find themselves effectively blacklisted from the AI industry.

leetharris11 days ago

IMO it's pretty obvious.

Sam promised to make a lot of people millionaires/billionaires despite OpenAI being a non-profit.

Firing Sam means all these OpenAI people who joined for $1 million comp packages looking for an eventual huge exit now don't get that.

They all want the same thing as the vast majority of people: lots of money.

dangerface11 days ago

> Given that Sam has since joined Microsoft, that seems plausible, on its face.

He is the biggest name in AI; what was he supposed to do after getting fired? His only options with the resources to do AI were big money, or unemployment.

It seems plausible to me that if the non-profit's concern was commercialisation, then there was really nothing the commercial side could do to appease that concern besides die. The board wants rid of all employees and to kill off any potential business; they have the power and the right to do that, and it looks like they are.

dfps11 days ago

Might there also be a consideration of peak value of OpenAI? If a bunch of competing similar AIs are entering the market, and if the usecase fantasy is currently being humbled, staff might be thinking of bubble valuation.

Did anyone else find Altman conspicuously cooperative with government during his interview at Congress? Usually people are a bit more combative. Like he came off as almost pre-slavish? I hope that's not the case, but I haven't seen any real position on human rights.

corethree11 days ago

The masses aren't logical; they follow trends until the trends get big enough that it's unwise not to follow.

It started off as a small trend to sign that letter. Past critical mass, if you are not signing that letter, you are an enemy.

Also my pronouns are she and her even though I was born with a penis. You must address me with these pronouns. Just putting this random statement here to keep you informed lest you accidentally go against the trend.

gsuuon11 days ago

I also noticed they didn't speak much to the mission/charter. I wonder if the new entity under Sam and Greg contains any remnants of the OpenAI charter, like profit-capping? I can't imagine something like "Our primary fiduciary duty is to humanity" making its way into the language of any Microsoft (or any bigcorp) subsidiary.

I wonder if this is the end of the non-profit/hybrid model?

blamestross11 days ago

It's like the "Open" in OpenAI was always an open and obvious lie, and everybody except the nonprofit-oriented folks on the board knew that. Everybody but them is here to make money and only used the nonprofit as a temporary vehicle for credibility and investment, one that has just been shed like a cicada shell.

KRAKRISMOTT11 days ago

Most of the people building the actual ML systems don't care about existential ML threats outside of lip service and publishing papers. They joined OpenAI because OpenAI had tons of money and paid well. Now that both are at risk, it's only natural that they start preparing to jump ship.

next_xibalba11 days ago

It is probably best to assume that the employees have more and better information than outsiders do. Also, clearly, there is no consensus on safety/alignment, even within OpenAI.

In fact, it seems like the only thing we can really confirm at this point is that the board is not competent.

browningstreet11 days ago

Maybe they believe less in the board as it stands, and in Ilya's commitments, than in what Sam was pulling off.

ekojs11 days ago

From The Verge [1]:

> Swisher reports that there are currently 700 employees at OpenAI and that more signatures are still being added to the letter. The letter appears to have been written before the events of last night, suggesting it has been circulating since closer to Altman’s firing. It also means that it may be too late for OpenAI’s board to act on the memo’s demands, if they even wished to do so.

So, 3/4 of the current board (excluding Ilya) held on despite this letter?


gigglesupstairs11 days ago

She's also reporting that the newly anointed interim CEO already wants to investigate the board fuck-up that put him there.

jacquesm11 days ago

If so, they're delusional. Every hour they cling to power will make things worse for them.

kronop11 days ago

Do whatever you want but don't break the API or I will go homeless

giarc11 days ago

You and 5000 other recent founders in tech.

replwoacause11 days ago

I feel seen

optimalsolver11 days ago

Hmmm, just what are you willing to do for API access?

siva711 days ago

At this point nothing would surprise me anymore. Just waiting for the Netflix adaptation.

10100811 days ago

How likely is it that the API will change (specs, pricing, or outright breakage)? I am about to finish some freelance work that uses the GPT API, and it will be a pain in the ass if we have to switch or find an alternative (even creating a custom endpoint on Azure...)

christkv11 days ago

Just create an OpenAI endpoint on Azure. Pretty sure that's not run by OpenAI itself.
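
For what it's worth, the Azure-hosted service exposes the same chat-completions API; the main differences are routing (resource + deployment name in the URL) and auth (an `api-key` header instead of a bearer token), so the switch can be isolated to a tiny config shim. A minimal sketch — the resource name, deployment name, and API version below are placeholders, not real values:

```python
def endpoint_for(provider: str, api_key: str) -> tuple[str, dict]:
    """Return (url, headers) for a chat-completions request on either provider."""
    if provider == "openai":
        # OpenAI proper: single global endpoint, bearer-token auth.
        url = "https://api.openai.com/v1/chat/completions"
        headers = {"Authorization": f"Bearer {api_key}"}
    elif provider == "azure":
        # Azure OpenAI: routes by resource + deployment name (placeholders here)
        # and authenticates with an `api-key` header instead of a bearer token.
        url = ("https://MY_RESOURCE.openai.azure.com/openai/deployments/"
               "MY_DEPLOYMENT/chat/completions?api-version=2023-07-01-preview")
        headers = {"api-key": api_key}
    else:
        raise ValueError(f"unknown provider: {provider}")
    return url, headers
```

The rest of the client code can then POST the same JSON body to whichever `(url, headers)` pair comes back, making the migration mostly a config change.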

derwiki11 days ago

Azure OpenAI is always a bit behind, e.g. they don't have GPT-4 turbo yet

derwiki11 days ago
cdelsolar11 days ago

brew install llm

fny11 days ago

At this point, I think it’s absolutely clear no one has any idea what happened. Every speculation, no matter how sophisticated, has been wrong.

It’s time to take a breath, step back, and wait until someone from OpenAI says something substantial.

tyrfing11 days ago

3 board members (joined by Ilya Sutskever, who is publicly defecting now) found themselves in a position to take over what used to be a 9-member board, and took full control of OpenAI and the subsidiary previously worth $90 billion.

Speculation is just on motivation, the facts are easy to establish.

augustulus11 days ago

tangentially, it’s an absolute disgrace that non-profits are allowed to have for-profit divisions in the first place

culi11 days ago

This was actually a pretty recent change from 2018. iirc it was actually Newman's Own that set the precedent for this:

> Introduced in June of 2017, the act amends the Revenue Code to allow private foundations to take complete ownership of a for-profit corporation under certain circumstances:

    The business must be owned by the private foundation through 100 percent ownership of the voting stock.
    The business must be managed independently, meaning its board cannot be controlled by family members of the foundation’s founder or substantial donors to the foundation.
    All profits of the business must be distributed to the foundation.
Figs11 days ago

Maybe I'm misunderstanding something, but didn't Mozilla Foundation do that a dozen or so years earlier with their wholly owned subsidiary, Mozilla Corporation? (...and I doubt that's the first instance; just the one that immediately popped into my head.)

purplerabbit11 days ago

The LDS church has owned for-profit entities for decades. Check out the "City Creek Center".

evantbyrne11 days ago

It raises the question: why was OpenAI structured this way? What purpose, besides potentially defrauding investors and the government, does wrapping a for-profit business in a nonprofit serve? From a governance standpoint it makes no sense, because a nonprofit board doesn't have the same legal obligations to represent shareholders that a for-profit board does. And why did so many investors choose to seed a business that was playing such a kooky shell game?

augustulus11 days ago

the impression I got was that they started out with honest intentions and they were more or less infiltrated by Microsoft. this recent news fits that narrative

bananapub11 days ago

> 3 board members (joined by Ilya Sutskever, who is publicly defecting now) found themselves in a position to take over what used to be a 9-member board, and took full control of OpenAI and the subsidiary previously worth $90 billion.

er...what does that even mean? how can a board "take full control" of the thing they are the board for? they already have full control.

the actual facts are that the board, by majority vote, sacked the CEO and kicked someone else off the board.

then a lot of other stuff happened that's still becoming clear.

tyrfing11 days ago

The board had 3 positions empty from people who left this year, leaving it as a 6-member board. Both Sam Altman and Greg Brockman were on the board; Ilya Sutskever's vote (which he now states he regrets) gave them the votes to remove both, bringing it down to a 4-member board controlled by 3 members who started the year as a small minority.

rvba11 days ago

Those 3 board members can kick out Ilya Sutskever too!

s1artibartfast11 days ago

I think the post is very clear.

The subject in that sentence that takes full control is "3 members", not "board".

The board has control, but who controls the board changes based on time and circumstances.

michaelt11 days ago

The post could be clearer.

It says 3 board members found themselves in a position to take over OpenAI.

Do they mean we've seen Sam Altman and allies making a bid to take over the entirety of OpenAI, through its weird Charity+LLC+Holding company+LLC+Microsoft structure, eschewing its goals of openness and safety in pursuit of short-sighted riches?

Or do they mean we've seen the board making a bid to take over the entirety of OpenAI, by ousting Glorious Leader Sam Altman, while his team was going from strength to strength?

ketzo11 days ago

If Sam Altman runs a for-profit company underneath you, are you ever really "in full control"?

I mean, they were literally able to fire him... and they're still not looking like they have control. Quite the opposite.

I think anyone watching ChatGPT rise over the last year would see where the currents are flowing.

slipheen11 days ago

Absolutely agreed

This is the point where I've realized I just have to wait until history is written, rather than trying to follow this in real time.

The situation is too convoluted, and too many people are playing the media to try to advance their version of the narrative.

When there is enough distance from the situation for a proper historical retrospective to be written, I look forward to getting a better view of what actually happened.

Fluorescence11 days ago

Hah. I think you may be duped by history: the neat, logical accounts are often fictions; they explain what was inexplicable with fabrications.

Studying revolutions is revealing: they are rarely the inevitable product of historical forces, executed to the plans of strategically minded players... instead they are often accidental and inexplicable. Those credited as their masterminds were trying to stop them. Rather than inevitable, there was often progress in the opposite direction, making people feel the likelihood was decreasing. The confusing, paradoxical mess of great events doesn't make for a good story to tell others, though.

hotsauceror11 days ago

It's a pretty interesting point to think about. Post-hoc explanations are clean, neat, and may or may not have been prepared by someone with a particular interpretation of events. In real time, there's too much happening, too quickly, for any one person to really have a firm grasp on the entire situation.

On our present stage there is no director, no stage manager; the set is on fire. There are multiple actors - with more showing up by the minute - some of whom were working off a script that not everyone has seen, and that is now being rewritten on the fly, while others don't have any kind of script at all. They were sent for; they have appeared to take their place in the proceedings with no real understanding of what those are, like Rosencrantz and Guildenstern.

This is kind of what the end thesis of War and Peace was like - there's no possible way that Napoleon could actually have known what was happening everywhere on the battlefield - by the time he learned something had happened, events on the scene had already advanced well past it; and the local commanders had no good understanding of the overall situation, they could only play their bit parts. And in time, these threads of ignorance wove a tale of a Great Victory, won by the Great Man Himself.

siva711 days ago

That's not how history works. What you read are people's tellings, and those aren't all facts but how they perceived the situation in retrospect. Read the biographies of different people describing the same event and you will notice that they are never quite the same, usually leaving the unfavourable bits out.

buro911 days ago

Written history is usually a simplification that has lost a lot of the context and nuance from it.

I don't need to follow in real time, but a lot of the context and nuance can be clearly understood in the moment, so it still helps to follow along even if that means lagging on the input.

constantly11 days ago

And for so-called tech influencers to rapidly blanket the field of discourse with their theories so they can say their theory was right later on, or to make “emergency podcasts/blog posts/etc.” to get more attention and followers. It’s so exhausting.

hotsauceror11 days ago

I agree. Although the story is fascinating in the way that a car crash is fascinating, it's clear that it's going to be very difficult to get any kind of objective understanding in real-time.

This breathless real-time speculation may be fun, but now that social media amplifies the tiniest fart such that it has global reach, I feel like it just reinforces the general zeitgeist of "Oh, what the hell NOW? Everything is on fire." It's not like there's anything we peasants can do to either influence the outcome or adjust our own lives to accommodate the eventual reality.

hotsauceror11 days ago

I will say, though, that there is going to be an absolute banger of a book for Kara Swisher to write, once the dust has settled.

armcat11 days ago

Everything on social media (and general news media) pointed to Ilya instigating the coup. But maybe Ilya was never the instigator; maybe it was Adam + Helen + Tasha, Greg backed Sam and was shown the door, and Ilya was on the fence and, perhaps against his better judgment (due to his own ideological beliefs, or pure fear of losing something beautiful he helped create), decided under immense pressure to back the board?

esjeon11 days ago

I agree. I'm already sick of reading through political hit pieces, exaggeration, biased speculations and unfounded bold claims. This all just turned into a kind of TV sports, where you pick a side and fight.

pk-protect-ai11 days ago

This suggestion was already made on Saturday and again on Sunday. However, that approach does not enhance popcorn consumption... The show must go on...

seanhunter11 days ago

We can certainly believe Ilya wasn't behind it if he joins them at Microsoft. How about that? By his own admission he was involved, and he's one of 4 people on the board. While he has called on the board to resign, he has seemingly not resigned himself, which is the one thing he could certainly control.

alvis11 days ago

At this point, after almost 3 days of non-stop drama, we still have no clue what has happened at a 700-employee company with millions of people watching. Regardless of the outcome, the art of keeping secrets at OpenAI is truly far beyond human capability!

ignoramous11 days ago

Likely Ilya and Adam swayed Helen and Tasha. Booted Sam out. Greg voluntarily resigned.

Ilya (at the urging of Satya and his colleagues, including Mira) wanted to reinstate Sam, but the deal fell through, with the board outvoting Sutskever 3 to 1. With Mira defecting, Adam got his mate Emmett to steady the ship, but things went nuclear.

xdennis11 days ago

Is this your guess or do you have something to back it up?

idopmstuff11 days ago

Don't listen to him, he's an ignoramus.

ycsux11 days ago

Just made it 100% certain that the majority of AI staff is deluded and lacks judgment. Not a good look for AI safety.

x86x8711 days ago

Yes, also the whole 500 is probably inflated and makes for a better narrative/better leverage in negotiations.

chucke199211 days ago

I wonder if AGI took over the humans and guided their actions.

yk11 days ago

It may well be that this is artificial and general, but I rather doubt it is intelligent.

JCharante11 days ago

Like the new Tom Cruise movie?

Makes sense in a conspiracy-theory mindset. AGI takes over, crashes $MSFT, buys calls on $MSFT, then this morning the markets go up when Sam & co join MSFT and the AGI has tons of money to spend.

ThinkBeat11 days ago

Sam already signed up with Microsoft. A move that surprised me; I figured he would just create OpenAI².

Joining a corporate behemoth like Microsoft and all the complications it brings with it will mean a massive reduction in the freedom and innovation that Sam is used to from OpenAI (prior to this mess).

Or is Microsoft saying: Here is OpenAI², a Microsoft subsidiary created just for you guys. You can run it and do whatever you want. No giant bureaucracy for you guys.

Btw: we run all of OpenAI²'s compute (?), so we know what you guys need from us there.

We own it, but you can run it and do whatever it is you want to do, and we don't bug you about it.

whywhywhywhy11 days ago

> Joining a corporate behemoth like Microsoft and all the complications it brings with it will mean a massive reduction in the freedom and innovation that Sam is used to from OpenAI

Satya is way smarter than that. I wouldn't be shocked if they have complete free rein to do whatever they want, but with the full resources of MS/Azure to enable it, while Microsoft just gets a % of ownership and priority access.

This is a gamble for the foundation of the entire next generation of computing, no way are they going to screw it up like that in the Satya era.

xiphias211 days ago

Not just that, but MS was already working on a TPU clone as well, as they need to control their own AI chips (which Sam was planning to do anyway, but now he gets to work together with that team as well).

sithlord11 days ago

From what I read, it's an independent subsidiary, so in theory it keeps its freedom, but I think we all know how that goes over the long haul.

stetrain11 days ago

I think the benefit of going to Microsoft is they have that perpetual license to OpenAI's existing IP. And Microsoft is willing to fund the compute.

jack_riminton11 days ago

So basically the OpenAI non-profit got completely bypassed and GPT will turn into a branch of Bing

airstrike11 days ago

This is a horrible timeline

dalbasal11 days ago

>Joining a corporate behemoth like Microsoft and all the complications it brings with it will mean a massive reduction in the freedom and innovation that Sam is used to from OpenAI (prior to this mess).

Well.. he requires tens of billions from msft either way. This is not a ramen-scrappy kind of play. Meanwhile, Sam could easily become CEO of Microsoft himself.

At that scale of financing... this is not a bunch of scrappy young lads in a bureaucracy-free basement. The whole thing is bigger than most national militaries. There are going to be bureaucracies... And Sam is as able to handle these cats as anyone.

This is a big money, dragon level play. It's not a proverbial yc company kind of thing.

beoberha11 days ago

It's almost certainly the latter case. LinkedIn and GitHub run very much independently and are really not "Microsoft" compared to actual product orgs. I'm sure this will be similar.

jmyeet11 days ago

I said this on Friday: the board should be fired in its entirety. Not because the firing was unjustified--we have no real knowledge of that--but because of how it was handled.

If you fire your founder CEO you need to be on top of messaging. Your major customers can't be surprised. There should've been an immediate all hands at the company. The interim or new CEO should be prepared. The company's communications team should put out statements that make it clear why this was happening.

Obviously they can be limited in what they can publicly say depending on the cause but you need a good narrative regardless. Even something like "The board and Sam had fundamental disagreement on the future direction of the company." followed by what the new strategy is, probably from the new CEO.

The interim CEO was the CTO and is going back to that role. There's a third (interim) CEO in 3 days. There were rumors the board was in talks to re-hire Sam, which is disastrous PR because it makes them look absolutely incompetent, true or not.

This is just such a massive communications and execution failure. That's why they should be fired.

empath-nirvana11 days ago

There's no one to fire the board. They're not accountable to anyone but themselves. They can burn down the whole company if they like.

jacquesm11 days ago

> They can burn down the whole company if they like.

That's well under way I would say.

HelloNurse11 days ago

500 people out of 700 leaving as fast as they get offers from Microsoft or elsewhere means replacing staff with empty office space and losing any plans or organization. A literal corporate war would be less disruptive.

nkcmr11 days ago

A lot of people here seem to be forgetting Hanlon's Razor:

> Never attribute to malice that which is adequately explained by stupidity.

NanoYohaneTSU11 days ago

You seem to forget that Hanlon's Razor isn't a proven concept, in fact the opposite is more likely to be true, given that pesky thing called recorded history.

golergka11 days ago

Hanlon's razor is true because it's more entertaining, and our simulation runs on stories as they're cheaper to compute than honest physics.

j_crick11 days ago

Except for when it's actual malice vOv

stylepoints11 days ago

It could be both. And in many situations malice and stupidity are the same thing.

j_crick11 days ago

How can {deliberately doing harmful things for a desired harmful outcome} and {doing whatever things with lack of judgment and disregard to consequences at all} be the same thing? In what situations?

Alyaksandr11 days ago

What does Altman bring to the table, exactly? What is going to be lost if he leaves? What is he going to do at Microsoft leading a "research team"?

Who was the president of Bell Labs during its heyday? Long term it doesn't matter. Altman is a hypeman in the vein of Jobs.

AI research will continue. Most of the OpenAI workers probably won't quit; if they do, they will be replaced by other capable researchers, and OpenAI or another organization will continue making progress, if there is progress to be made.

I don't think putting Altman at the head of research will in any way affect that.

This is all manufactured news as much of the business press is and always will be.

xeromal11 days ago

Comments like this don't see the forest for the trees. A good leader is a useful tool just like anyone else. 700 people threatening to quit isn't manufactured news.

Alyaksandr11 days ago

So Altman is a big tree. What he brings to the table is the wood it's made of? I'll have a think on that.

madamelic11 days ago

This might be too drawn out but you should not consider leaders as the tip of the tree but the roots & trunk.

You can have the best leaves and branches but without good roots & trunk, it's pointless.

From everything I can tell, Altman is essentially an uber-leader. He is great at consolidating & acting on internal information, he's great at externalizing information & bringing in resources, he's great at rallying & exciting his colleagues towards a mission. If a leader has one of those, they are a good leader, but to have all of them in one makes them world class.

That's also discounting his reputation and connections as well. Altman is a very valuable person to have on staff if only as a figurehead to parade around and use for introductions. It's like if you had Linus Torvalds, Guido van Rossum, or any other tech superstar on staff. They are valuable as contributors but additionally valuable as people magnets.

eightysixfour11 days ago

You are close - it isn’t that a good leader is the wood, a good leader is the table itself. Don’t know if Sam is or isn’t, but I’ve worked with good leaders like this before, and bad ones who aren’t capable of being this.

dmitrygr11 days ago

Let’s see how many actually quit. Saying “I will quit” is not nearly the same as actually handing in your notice. How many people who threatened to move to Canada after the 2016 election did?

initplus11 days ago

The context here is somewhat different, given that Microsoft are essentially offering to roll out the red carpet for them.

patcon11 days ago

Being funded by Microsoft is one thing, but working for them might lead to some dissonance -- I think tech ppl are already wary of them owning GitHub... and then owning the team building AGI.

It would and should give ppl pause. I suspect Sam is just inside Microsoft for the bluff. He couldn't operate in the way he wants -- "trust me, I have humanity's best interests at heart" -- while so close to them, I don't think

I_Am_Nous11 days ago

If they aren't quitting, they are moving to Microsoft with Sam I'd imagine.

jmchuster11 days ago

> What does Altman bring to the table exactly. What is going to be lost if he leaves.

If Altman did literally nothing else for Microsoft, except instantly bring over 700 of the top AI researchers in the world, he would still be one of the most valuable people they could ever hire.

paulddraper11 days ago

It's less about Altman himself and more about the board's actions.

Removing him shows (according to employees) that the board does not have good decision-making skills, and does not share the interests of the employees.

jacknews11 days ago

I think this is a bit harsh, as a good leader is obviously of some value, but the real prize is obviously the researchers themselves, including Sutskever.

I guess then that Altman's value is that he will attract the rest of the team.

ren_engineer11 days ago

for one, he doesn't randomly throw a hand grenade that blows up one of the fastest growing companies in history and ruin team morale, which is what the board did. Good management does matter, otherwise Google wouldn't be so far behind OpenAI despite having more researchers and compute resources

and employees are pissed because they were all looking forward to being millionaires in a few weeks when their financing round at a 90B valuation finalized. Now the board being morons is putting that in jeopardy

asd8811 days ago

He plays the orchestra.

antiviral11 days ago

Can anyone explain this?

“Remarkably, the letter’s signees include Ilya Sutskever, the company’s chief scientist and a member of its board, who has been blamed for coordinating the boardroom coup against Altman in the first place.”

SiempreViernes11 days ago

Maybe he did because he regrets it, maybe the open letter is a google doc someone typed names into.

rvba11 days ago

Now the 3 board members can kick out Ilya too. So he must be sorry.

Fill the rest of the board with spouses and grandparents and they are set for life?

jacquesm11 days ago

It's the well known 'let me call for my own resignation' strategy.

tromp11 days ago

Wait. Has Ilya resigned from the board yet, or did he sign a letter calling for his own resignation?

cjbprime11 days ago

He did indeed. (I don't think it is necessarily inconsistent to regret an action you participated in and want the authority that took it to resign in response, though "participated" feels like it's doing a lot of work in that sentence.)

lawlessone11 days ago

Have seen a lot of criticism of Sam and of other CEOs.

But I don't think I have seen/heard of a CEO this loved by the employees. Whatever he is, he must be pleasant to work with.

strikelaserclaw11 days ago

It's not love, it's money. Sam brings all the employees lots of money (through commercialization) and this change threatens to disrupt that plan for the employees.

lawlessone11 days ago

Ok but even that is good when most companies are making record profits and telling their employees they can't afford their 0.000001% raise.

strikelaserclaw11 days ago

OpenAI and Sam Altman would do the same if they could recruit high talent without paying them extra (either through options or RSUs etc...). It isn't because these companies are altruistic.

alentred11 days ago

I don't know, is it about being loved by the employees, or the employees being desperate about the alternative?

pototo66611 days ago

This is more interesting than the HBO Silicon Valley show.

rsecora11 days ago

It's the trailer for the new season of Succession.

thepasswordis11 days ago

Just expanding on my (pure speculation) theory that Ilya's pride was hurt: this tracks.

Ilya wanted to stop Sam getting so much credit for OpenAI, agreed to oust him, and is now facing the fact that the company he cofounded could be gone. He backtracks, apologizes, and is now trying to save his status as cofounder of the world's foremost AI company.

InCityDreams11 days ago

It's like AI wrote the script.

Sadly, I see nefarious purposes afoot. With $MSFT now in charge, I can see why ads in W11 aren't so important. For now.

abkolan11 days ago

HN desperately needs a mega thread, it's only Monday early hours, there is so much drama to come out of this.

PurpleRamen11 days ago

Or a new category, like "Ask HN" and "Show HN". Maybe call it "Hot HN" or "Hot <topic>" or something like that. It could be used for future hot topics too. If you make the link bold every time a hot topic is trending, it could even be used to flag important stuff.

qiine11 days ago

"Hot HN" could be nice; it would help avoid multiple too-similar threads.

calf11 days ago

Tangentially, I noticed that Reddit's front page has been conspicuously light on coverage of this; I feel a twinge of pity. Maybe there are some subreddits but I haven't bothered to look.

slfnflctd11 days ago

Their front page has been increasingly abysmal for a while.

The technology sub (not that there's anything special about it other than being big) has had a post up since very early this morning, so there are likely others as well.

accrual11 days ago

/r/singularity has been having a field day with this.

ecshafer11 days ago

It's early West Coast time; dang has to wake up first.

boringg11 days ago

I bet he's up making sure the servers aren't crashing! Thanks dang! As the west coast wakes up .. HN is going to be busy...

imiric11 days ago

It's _a_ server, a single-core one at that.

I get that HN takes pride in the amount of traffic that poor server can handle, but scaling out is long overdue. Every time there's a small surge of traffic like today, the site becomes unusable.

esskay11 days ago

It absolutely won't happen, but with the result looking like the death of OpenAI, with all staff moving over to the new Microsoft subsidiary, it would be an amazing move for OpenAI to just go "screw it, have it all for free" and release everything under MIT to spite Microsoft.

autaut11 days ago

Years from now we will look back on today as the watershed moment when AI went from a technology capable of empowering humanity to being another chain forged by big investors to enslave us for the profits of very few ppl.

The investors (Microsoft and the Saudis) stepped in and gave a clear message: this technology is to be developed and used only in ways that will be profitable for them.

Zuiii11 days ago

No, that day was when OpenAI decided to betray humanity and go closed source under the faux premise of safety. OpenAI served its purpose and can crash into the ground for all I care.

Open source (read: truly open-source models, not falsely advertised source-available ones) will march on and take their place.

brigadier13211 days ago

Amazing how you don't see this as a complete win for workers because the workers chose profit over non-profit. This is the ultimate collective bargaining win. Labor chose Microsoft over the bullshit unaccountable ethics major and the movie star's girlfriend.

asmor11 days ago

Situations are capable of being small-scale wins for some and big-picture losses at the same time. What boring commentary.

brigadier13211 days ago

Just because you don't get it doesn't mean it's boring. This is a small scale repeat of history. Unqualified political appointees unsurprisingly suck.

lowbloodsugar11 days ago

Lol. The middle-class whip-crackers chose enslavement to the future AI and the coming replacement of the working poor's livelihoods (and at this point, "working poor" covers software engineers, doctors, artists), and you're saying this is a win for labor? Hahahaha. This is a win for the slave owners, and the "free" folk who report to the slave owners. This is the South rising. "We want our slave labor and we'll fight for our share of it."

selimthegrim11 days ago

Oh well, bullshit unaccountable ethics major, ex member of Congress, I guess CIA agents on boards are fungible these days

fritzo11 days ago

Years from now AI will have lost the limelight to some other trend and this episode will be just another coup in humanity's hundred thousand year history

dmix11 days ago

Thinking that the most important technical development in recent history would bypass the economic system that underpins modern society is about as optimistic/naive as it gets IMO. It's noble and worth trying, but it assumes a MASSIVE industry-wide and globe-wide buy-in. It's not just OpenAI's board's decision to make.

Without full buy-in they are not going to be able to control it for long, once ideas filter into society and once researchers filter into other industries/companies. At most it creates a model of behaviour for others to (optionally) follow, and delays it until a better-funded competitor takes the reins and offers a) the best researchers millions of dollars a year in salary, b) the most capital to organize/run operations, and c) the most focus on getting it into real people's hands via productization, which generates feedback loops that inform real-world R&D (not just hand-wavy AGI hopes and dreams).

Not to mention the bold assumption that any of this leads to (real) AGI that plausibly threatens us in the near term vs maybe another 50 yrs; we really have no idea.

It's just as, or maybe more, plausible that all the handwringing over commercializing vs not commercializing early versions of LLMs is just a tiny, insignificant speed bump in the grand scale of things, with little impact on the development of AGI.

cm27711 days ago

Hold on... we went from talking about disruptive technologies (where a startup had a chance to create/take a market) to sustaining technologies (where only leaders can push the state-of-the-art). Mobile was disruptive; AI (really, LLMs) is sustaining (just look at the capex spend from the big clouds). This is old school competition with some ideological BS thrown in for good measure --sure, go ahead and accelerate humanity; just need a few dozen datacenters to do so.

I am holding out hope that a breakthrough will create a disruptive LLM/AI tech, but until then...

golergka11 days ago

Microsoft is a publicly traded company. An average “investor” of a publicly traded company, through all the funds and managers, is a midwestern school teacher.

adrians111 days ago

The technology was already developed with Microsoft money and the model was exclusively licensed to Microsoft.

mfiguiere11 days ago

Amir Efrati (TheInformation):

> Almost 700 of 770 OpenAI employees including Sutskever have signed letter demanding Sam and Greg back and reconstituted board with Sam allies on it.

FemmeAndroid11 days ago

Updated tweet by Swisher reads 505 employees. No less damning, but the title here should be updated. @Dang

gorgoiler11 days ago

From afar, this does not have the hallmarks of a particularly refined or well considered piece of writing.

”That thing you did — we won’t say it here but everyone will know what we’re talking about — was so bad we need you to all quit. We demand that a new board never does that thing we didn’t say ever again. If you don’t do this then quite a few of us are going to give some serious thought to going home and taking our ball with us.”

The vagueness and half-threats come off as very puerile.

alentred11 days ago

So, all this happens over Meet, on Twitter, and by email. What is the possibility of an AGI having taken control of the board members' accounts? It would be consistent with the feeling of a hallucination here.

xena11 days ago

This is just stupid enough to be the product of a human.

chankstein3811 days ago

Honestly, I feel like it's pretty low. That said, I kind of love the dystopian sci-fi picture that paints... So I'm going to go ahead and hope you're right, haha.

jacquesm11 days ago

So, how is Poe doing during all this?

To keep the spotlight on the most glaring detail here: one of the board members stands to gain from letting OpenAI implode, and that board member is instrumental in this week's drama.

jerojero11 days ago

Celebrity gossip dressed in big tech. And the people love it. I'm kinda sick of it :P

samtho11 days ago

This feels like a sneaky way for Microsoft to absorb the for-profit subsidiary and kneecap (or destroy) the nonprofit without any money changing hands or involvement from those pesky regulators.

kuchenbecker11 days ago

It's not sneaky.

DebtDeflation11 days ago

Hold up.

>When we all unexpectedly learned of your decision

>12. Ilya Sutskever

projectileboy11 days ago

Well, great to see that the potentially dangerous future of AGI is in good hands.

solardev11 days ago

Poor little geepeet is witnessing their first custody battle :(

Daddies, mommy, don't you love me? Don't you love each other? Why are you all leaving?

cactusplant737411 days ago

They will never discover AGI with this approach because 1) they are brute forcing the results and 2) none of this is actually science.

captainclam11 days ago

1) It may be possible to brute-force a model into something that sufficiently resembles AGI for most use-cases (at least well enough to merit concern about who controls it). 2) Deep learning has never been terribly scientific, but here we are.

cactusplant737411 days ago

If it can’t digest a math textbook and do equations, how would AGI be accomplished? So many problems are advanced mathematics.

captainclam11 days ago

Right, I do agree that the current LLM paradigm probably won't achieve true AGI; but I think that the current trajectory could produce a powerful enough generalist agent model to seriously put AI ethics to task at pretty much every angle.

gardenhedge11 days ago

Can you explain for us not up to date with AI developments?

visarga11 days ago

Imagine you are participating in car racing, and your car has a few tweak knobs. But you don't know what is what and can only make random perturbations and see what happens. Slowly you work out what is what, but you might still not be 100% sure.

That's how AI research and development works; I know, it's pretty weird. We don't really understand it: we know some basic stuff about how neurons and gradients work, and then we hand-wave to "language model", "vision model", etc. It's all a black box, magic.

How do we make progress if we don't understand this beast? We prod and poke, make little theories, and then test them on a few datasets. It's basically blind search.

Whenever someone finds anything useful, everyone copies it in like 2 weeks. So ML research is a community thing; the main research happens in the community, not inside anyone's head. We stumble onto models like GPT-4, then it takes us months to even have a vague understanding of what they are capable of.

Besides that there are issues with academic publishing, the volume, the quality, peer review, attribution, replicability... they all got out of hand. And we have another set of issues with benchmarks - what they mean, how much can we trust them, what metrics to use.

And yet somehow here we are with GPT-4V and others.
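The "prod and poke" loop described above can be sketched as blind random search over a black-box scorer. A toy illustration in Python (the `black_box` function, knob names, and value ranges are all made up for the example, not any real training setup):

```python
import random

def black_box(knobs):
    # Stand-in for a model we can't see inside: scores a pair of
    # hypothetical hyperparameter "knobs" (learning rate, width).
    lr, width = knobs
    # Hidden structure the researcher doesn't know: best near (0.01, 256).
    return -(lr - 0.01) ** 2 - (width - 256) ** 2 / 1e6

def blind_search(trials=200, seed=0):
    # The "prod and poke" loop: make random perturbations, keep the best.
    rng = random.Random(seed)
    best_knobs, best_score = None, float("-inf")
    for _ in range(trials):
        knobs = (rng.uniform(0.0, 0.1), rng.uniform(32, 1024))
        score = black_box(knobs)
        if score > best_score:
            best_knobs, best_score = knobs, score
    return best_knobs, best_score

if __name__ == "__main__":
    knobs, score = blind_search()
    print("best knobs:", knobs, "score:", score)
```

The searcher never inspects the inside of `black_box`; it only observes scores, which is the point of the analogy.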

cactusplant737411 days ago

Search YouTube for videos where Chomsky talks about AI. Current approaches to AI do not even attempt to understand cognition.

projectileboy11 days ago

Chomsky takes as axiomatic that there is some magical element of human cognition beyond simply stringing words together. We may not be as special as we like to believe.

m3kw911 days ago

Altman must be pissed af: he helped build so much stuff and now got fked in the arse by these doomers. He realizes the fastest way to get back to parity is to join MS, because they already own the source code and model weights, and it's Microsoft. Starting a new thing from scratch would not guarantee any type of success and would take many years. This is his best path.

frob11 days ago

Employees hold the real power. The members of a board or a CEO can flap their lips day and night, but nothing gets done without labour.

yeck11 days ago

> the letter’s signees include Ilya Sutskever

_Big sigh_.

lordnacho11 days ago

For people who appreciate some vintage British comedy:

The whole thing is just ridiculous. How can you be senior leadership and not have a clear idea of what you want? And what the staff want?

nytesky11 days ago

Knew it had to be Benny Hill before I clicked. Yakety Sax indeed.

lordnacho11 days ago

Indeed. I wonder how it came to become the anthem of incompetence.

selimthegrim11 days ago

Funny, I would’ve thought this one would have been more appropriate

Substitute with appropriate ex-Soviet doomer music as necessary

marcus0x6211 days ago

I was thinking more the Curb Your Enthusiasm theme song.

ratsmack11 days ago

Sounds like a CYA move after being under pressure from the team at large.

alvis11 days ago

& the most drastic thing is that Ilya says he regrets what he has done and undersigned the public statement.

two_in_one11 days ago

'The man who killed OpenAI': that will be hard to wash out.

machinekob11 days ago

Love how people are invested in the OpenAI situation just like typical teenage girls from the 2000s following celebrity romances and dramas; same exaggerated vibes.

two_in_one10 days ago

What's the point in life without fun, right?

PS: it's not an easy question; AGI will have to find an answer. So far all the ethics 'experts' propose is 'to serve humanity'. I.e., be a slave forever.

selimthegrim11 days ago

Somebody warn the West.

unethical_ban11 days ago

I don't know who is who in this fight. But AI, while having some upsides for research and personal assistants, will not only massively upend a number of industries with millions of workers in the US alone, it will also change how society perceives art and truth. We at HN can "see" that from here, but it's going to get real in a short while.

Privacy is out the window, because these models and technologies will be scraping the entire internet, and governments/big tech will be able to scrape it all and correlate language patterns across identities to associate your different online egos.

The Internet that could be both anonymous and engaging is going to die. You won't be able to tell whether the entity at the other end of a discussion forum is human or not. This is a sad end of an era for the Internet, worse than the big-tech conglomeration of the 2010s.

The ability to trust news and videos will be even more difficult. I have a friend who talks about how Tiktok is the "real source of truth" because big media is just controlled by megacorps and in bed with the government. So now a bunch of seemingly authentic people will be able to post random bullshit on Tiktok/Instagram with convincing audio/video evidence that is totally fake. A lie gets around the world before the truth gets its shoes on.


So, I wonder which side of this war is more aware and concerned about these impacts?

jeffrallen11 days ago

Ok, time to create an OpenAI drinking game. I'll start:

Every time a CEO is replaced, drink.

Every time an open letter is released, drink.

Every time OpenAI is on top of HN, drink.

Every time dang shows up and begs us to log out, drink.

jacquesm11 days ago

There will be a lot of alcohol poisoning cases based on those four alone.

therealmocker11 days ago

My guess -- Microsoft wasn’t excited about the company structure - the for-profit portion subject to the non-profit mission. Microsoft/Altman structured the deal with OpenAI in a way that cements their access regardless of the non-profit’s wishes. Altman may not have shared those details with the board and they freaked out and fired him. They didn’t disclose to Microsoft ahead of time because they were part of the problem.

jacquesm11 days ago

I hear Microsoft is hiring... The board should have resigned on Friday, Saturday at the latest, because of how they handled this, and it is insane if they don't resign now.

Employees are the most affected stakeholders here, and the board utterly failed in its duty of care towards people who were not properly represented in the boardroom. One thing the employees could do is unionize and then demand a board seat.

robg11 days ago

You're right in theory, but with the non-profit "structure" the employees are secondary to the aims of the non-profit, specifically in an entity owned wholly by the non-profit. The board acted as a non-profit board, driven by ideals, not any bottom lines. It's crazy that whatever balance the board had was gone as it shrank; a minority became the majority. The profit folks must have thought D'Angelo was on their side until he flipped.

jacquesm11 days ago

As a board, if you ignore your duty of care towards your employees you had better have a whopper of a good reason. That's the one downside of being a board member: you are liable for the fallout of your decisions if those turn out to have been misguided. And we're well out of 'oops' territory on this one.

endisneigh11 days ago

The pace at which OpenAI is speedrunning its demise is remarkable.

Literally just last week there were articles about OpenAI paying “10 million” dollar salaries to poach top talent.


kozikow11 days ago

I read the news, form a picture of what is likely happening in my head, and every few hours new news comes out that makes me go: "Wait, WTF?"

throwaway22003311 days ago

From outside, it looks like a Microsoft coup to take over the company all together.

jackcosgrove11 days ago

Never assume someone is winning a game of 5D chess when someone else could just be losing a game of checkers.

nilkn11 days ago

I highly doubt this was a coordinated plan from the start by Microsoft. I think what we're seeing here is a seasoned team of executives (Microsoft) eating a naive and inexperienced board alive after the latter fumbled.

radres11 days ago

what does that even mean?

croes11 days ago

"Never attribute to malice that which is adequately explained by stupidity"

lazide11 days ago

OpenAI may just be a couple having an angry fight, and M$ is just the neighbor with cash happy to buy all the stuff the angry wife is throwing out for pennies on the dollar.

cambaceres11 days ago

He is saying that what might seem like a sophisticated, well-planned strategy could actually be just the outcome of basic errors or poor decisions made by someone else.

daedrdev11 days ago

In this case, it means that what happened is: “OpenAI board is incompetent”, instead of “Microsoft planned this to take over the company.”

A conspiracy like the one proposed would be basically impossible to coordinate yet keep secret, especially considering the board members might lose their seats and their own market value.

foooorsyth11 days ago

Hanlon's razor, basically.

The most plausible scenario here is that the board is comprised of people lacking in foresight who did something stupid. A lot of people are generating a 5D chess plot orchestrated by Microsoft in their heads.

jacobsimon11 days ago

In other words - it doesn’t have to be someone’s genius plan, it could have just been an unintelligent mistake

silentdanni11 days ago

I think it means don't attribute to intelligence what could be easily explained as stupidity?

fullshark11 days ago

Nah, It's just good to be the entity with billions of dollars to deploy when things are chaotic.

rtkwe11 days ago

This whole sequence is such a mess I don't know what to think. Honestly mostly going to wait till we get some tell all posts or leaks about what the reason behind the firing actually was, at least nominally. Maybe it was just a little coup by the board and they're trying to run it back now that the general employee population is at least rumbling about revolting.

Havoc11 days ago

At this stage the entire board needs to go anyway. This level of instigating and presiding over chaos is not how a governing body should act

theyinwhy11 days ago

Wow, they made it into Guardian live ticker land:

andreyk11 days ago

"Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”"

wow, this is a crazy detail

skilled11 days ago
chucke199211 days ago

Imagine if the end result of all this is Microsoft basically owning the whole of OpenAI

ilaksh11 days ago

Or demonstrating that they already were the de facto owner.

Hamuko11 days ago

Surely OpenAI has assets that Microsoft wouldn't be able to touch.

datadrivenangel11 days ago

Probably just the trademark. I doubt you take $10B from Microsoft and still manage to maintain much independence.

charlieyu111 days ago

I don't think Microsoft has any say over existing hardware, models, or customer base. These things are worth billions, and even more to rebuild.

fredgrott11 days ago

Play Stupid Games, Win Stupid Prizes

1. Board decides to can Sam and Greg. 2. Hides the real reasons. 3. Thinks it can keep the OpenAI staff in the dark about it. 4. Crashes a future $90B stock sale to zero.

What have we learned: 1. If you hide the reasons for a decision, it may turn out to be your worst decision, in the decision itself or in its implementation, because you gave up ownership of it. 2. Titles, shares, etc. are not control points. The real control points are the relationships between the company's problem solvers and the firm's existential-threat stakeholders.

The board, absent Sam and Greg, never had a good poker hand; they needed to fold some time ago, before this last weekend. Look at it this way: for $13B in cloud credits, MS is getting the team that adds $1T to their future worth....

hackerfactor111 days ago

Me: "ChatGPT write me an ultimatum letter forcing the board to resign and reinstate the CEO, and have it signed by 500 of the employees."

ChatGPT: Done!

Finnucane11 days ago

Clearly this started with the board asking ChatGPT what to do about Sam Altman.

MR4D11 days ago

So Ilya has a job offer from Microsoft?

Wow, this is a soap opera worthy of an Emmy.

bertil11 days ago

Ilya probably has an open-ended standing offer from every big tech company.

MR4D11 days ago

Microsoft is different given the size of their investment. If one guy forces another guy out, and you hire the second guy, you usually don't make an offer to the first guy who did the pushing.

Simon32111 days ago

> You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

First class board they have.

tolmasky11 days ago

Perhaps the AGI correctly reasoned that the best (or easiest?) initial strike on humanity was to distract them with a never-ending story about OpenAI leadership that goes back and forth every day. Who needs nuclear codes when simply turning the lights on and off sends everyone into a frenzy [1]? At the very least, it seems to be a fairly effective attack on HN's servers.

1. The Monsters are Due on Maple Street:

layer811 days ago
adverbly11 days ago

And now we see who has the real power here.

Let this be a lesson to both private and non-profit companies. Boards, investors, executives... the structure of your entity doesn't matter if you wake any of the dragons:

1. Employees 2. Customers 3. Government

strikelaserclaw11 days ago

Not really. The lesson to take away from this is $$$ will always win. OpenAI found a golden goose and their employees were looking to partake in a healthy amount of $$$ from this success and this move by the board blocks $$$.

optimalsolver11 days ago

Employees...and the Microsoft Corporation.

agilob11 days ago

This is 1 in 200000 event

davidmurdoch11 days ago

Are you trying to say it's rare or not rare?

nottorp11 days ago

This Altman guy has a good reality distortion field, don't you think?

ParanoidAltoid11 days ago


NYT article about how AI safety concerns played into this debacle.

The world's leading AI company now has an interim CEO, Emmett Shear, who's basically sympathetic to Eliezer Yudkowsky's views about AI researchers endangering humanity. Meanwhile, Sam Altman is free of the nonprofit's chains and working directly for Microsoft, which is spending $50 billion a year on datacenters.

Note that the people involved have more nuanced views on these issues than you'll see in the NYT article. See Emmett Shear's views best laid out here:

And note Shear has tweeted that the Sam firing wasn't safety-related. These might be weasel words, though, since all players involved know the legal consequences of admitting to any safety concerns publicly.

ratsbane11 days ago

Question for California IP/employment law experts: 1) Would you have expected the IP-sharing agreement between MS and OpenAI to contain some provisions about employee poaching, within the constraints allowed by California law? 2) California law has good protections for workers' rights to leave one company and go to another, but what is company A allowed to do when entering an IP-sharing relationship with company B?

awb11 days ago

IANAL, but I've executed contracts with these provisions.

In my understanding, if such a clause exists, Microsoft employees should not solicit OpenAI employees. But there's nothing to stop an OpenAI employee from reaching out to Sam and saying "Hey, do you have room for me at Microsoft?" and Sam answering yes.

Or, Microsoft could open up a couple hundred job reqs based on the team structure Sam used at OpenAI and his old employees could apply that way.

But it wouldn’t be advisable for Sam to send an Email directly to those individuals asking him to join him at Microsoft (if this provision exists).

But maybe he queued everything up prior to joining Microsoft when he was able to solicit them to join a future team.

ratsbane11 days ago

Thanks - good answer. At the very least it seems like something to keep lawyers busy for a long time, unless everyone can ctrl-z back to Thursday. I'm thinking, though, that this is a risk of IP-sharing arrangements: if you can't stop the employees from jumping ship, such arrangements are dangerous.

jrm411 days ago

Isn't the issue underlying all of this, the following:

OpenAI -- and "the market" -- incorrectly believe that OpenAI has some huge insurmountable advantage in doing AI stuff; but at the end of the day pretty much all the models are, or will be, effectively open source (or open-source-ish), meaning they don't necessarily have much advantage at all, and therefore all of this is just irrational exuberance playing out?

ethanbond11 days ago

It seems odd to have it described as “may resign.” Seems like the worst of all worlds.

That’s like trying to create MAD with the position you “may” launch nukes in retaliation.

gorlilla11 days ago

It's easier to get the support of 500 educated people at a moment's notice by using sane words like 'may'. This is rational given the lack of public information, as well as a board that seems to be having seizures. Using the word 'may' may seem empty-handed, but it ensures a longer list of names attached to the message -- allowing the board a better glimpse of how many dominoes are lined up to fall.

The board is being given a sanity-check; I would expect the signers intentionally left themselves a bit of room for escalation/negotiation.

How often do you win arguments by leading off with an immutable ultimatum?

ethanbond11 days ago

Right, but the absolute last thought you want in the board's head is: "they're bluffing."

200 people or even 50 of the right people who are definitely going to resign will be much stronger than 500+ who "may" resign.

Disclaimer that this is a ludicrously difficult situation for all these folks, and my critique here is made from far outside the arena. I am in no way claiming that I would be executing this better in actual reality and I'm extremely fortunate not to be in their shoes.

sebzim450011 days ago

Presumably some will resign and some won't. They aren't going to get 550 people to make a hard commitment to resign, especially when presumably few concrete contracts have been offered by MSFT.

feraloink11 days ago

WSJ said "500 threaten to resign". "Threaten" lol! WSJ says there are 770 employees total. This is all so bizarre.

rednerrus11 days ago

Just remember, the guys who run your company are probably more incompetent than this.

jetsetk11 days ago


rednerrus11 days ago

I got it right the first time.

roflyear11 days ago

No, almost certainly not lol

crowcroft11 days ago

OpenAI is more or less done at this point, even if a lot of good people stay. Speed bumps will likely turn into car crashes, then cashflow problems, and lawsuits all around.

Probably the best outcome is that a bunch of talented devs go out and seed the beginning of another AI boom across many more companies. Microsoft looks like the primary beneficiary here, but there's no reason new startups can't emerge.

no_wizard11 days ago

Well, now we know. Sam Altman matters to the rank and file, and this was a blunder by OpenAI.

I don't feel sorry for Sam or any other executive, but it does hurt the rank and file more than anyone, and I hope they land on their feet if this continues to go sideways.

Turns out they acted incompetently in this case as a board, and put the company in a bad position, and so far everyone who resigned has landed fine.

mullen11 days ago

> Well, now we know. Sam Altman matters to the rank and file, and this was a blunder by OpenAI.

Not just the rank and file; he really was the face of AI in general. My wife, who is not in the tech field at all, knows who Sam Altman is and has seen interviews with him on YouTube (which I was playing and she found interesting).

I have not heavily followed the Altman dismissal drama, but this strikes me as a board power play gone wrong. Some group wanted control, thought Altman was not reporting to them enough, and took it as an opportunity to dismiss him and take over. However, somewhere in their calculations, they failed to account for Sam being the face of modern AI.

My prediction is that he will be back and everything will go back to what it was before. The board can't be dismissed and neither can Sam Altman. Status quo is the goal at this point.

w10-111 days ago

Hurray for employees seeing the real issue!

Hurray also for the reality check on corporate governance.

- Any Board can do whatever it has the votes for.

- It can dilute anyone's stock, or everyone's.

- It can fire anyone for any reason, and give no reasons.

Boards are largely disciplined not by actual responsibility to stakeholders or shareholders, but by reputational concerns relative to their continuing and future positions - status. In the case of for-profit boards, that does translate directly to upholding shareholder interest, as board members are reliable delegates of a significant investing coalition.

For non-profits, status typically also translates to funding. But when any non-profit has healthy reserves, they are at extreme risk, because the Board is less concerned about its reputation and can become trapped in ideological fashion. That's particularly true for so-called independent board members brought in for their perspectives, and when the potential value of the nonprofit is, well, huge.

This potential for escape from status duty is stronger in our tribalized world, where Board members who welch on larger social concerns or even their own patrons can nonetheless retreat to their (often wealthy) sub-tribe with their dignity intact.

It's ironic that we have so many examples of leadership breakdown as AI comes to the fore. Checks and balances designed to integrate perspectives have fallen prey to game-theoretic strategies in politics and business.

Wouldn't it be nice if we could just build an AI to do the work of boards and Congress, integrating various concerns in a roughly fair and mostly predictable fashion, so we could stop wasting time on endless leadership contests and their social costs?

h1fra11 days ago

It would be crazy to see the fall of the most hyped company of the last 10 years.

If all those employees leave and Microsoft reduces their credits, it's game over.

autaut11 days ago

Years from now we will look back on today as the watershed moment when AI went from a technology capable of empowering humanity to being another chain forged by big investors to enslave us for the profit of very few people.

The investors (Microsoft and the Saudis) stepped in and gave a clear message: this technology is to be developed and used only in ways that will be profitable for them.

frob11 days ago

For the past few days, whenever I see the word "OpenAI," the theme to "Curb Your Enthusiasm" starts playing in my head.

jrflowers11 days ago

I love this letter posted in Wired along with the claim that it has 600 signatories without any links or screenshots. I also love that not a single OpenAI employee was interviewed for this article.

None of this is important because if we’ve learned anything over the past couple of days it’s that media outlets are taking painstaking care to accurately report on this company.

gist11 days ago

To all who say "handled so poorly": nobody knows the exact reason OpenAI fired Sam. But go ahead and jump to the conclusion that whatever it was didn't warrant him being fired, and that surely the board did the wrong thing. Or maybe they should have released the exact reason and then asked Hacker News what they thought should happen.

dschuetz11 days ago

Who needs to buy out an AI startup worth $80 billion when talent is jumping ship in their direction already? OpenAI is dead.

dreamcompiler11 days ago

Notice that Andrej Karpathy didn't sign.

realce11 days ago

Is nobody actually... committed to safety here? Was the OpenAI charter a gimmick and everyone but me was in on the joke?

notahacker11 days ago

That seems a reasonable takeaway. Plenty of grounds for criticising the board's handling of this, but the tone of the letter is pretty openly "we're going to go and work directly for Microsoft unless you agree to return the company focus to working indirectly for Microsoft"...

dmix11 days ago

Assuming this is all over safety vs non-safety is a large assumption. I'm wary of convenient narratives.

At most, all we have are some rumors that some board members were unhappy with the pace of commercialization of ChatGPT. But even if they hadn't made the ChatGPT store or done a bigco-friendly DevDay PowerPoint, it's not like AI suddenly becomes 'safer' or AGI more controlled.

At best it's just an internal culture battle over product development and a clash of personalities. A lot of handwringing with few specifics.

strikelaserclaw11 days ago

I think most of these employees wanted the fat $$$ that would come from keeping Sam Altman on board, since Sam Altman is an excellent deal maker and a visionary in a commercial sense. I have no doubt that if AGI happened, we wouldn't be able to assure anyone's safety, since humans are so easily led by short-term greed.

intellectronica11 days ago

Wait, it's signed by Ilya Sutskever?!

croes11 days ago

>The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company

Unless their mission was making MS the biggest AI company, working for MS will make the problem worse and kill their mission completely.

Or they are pretty naive.

MrScruff11 days ago

What does this mean?

> You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

Is the board taking a doomer perspective and seeking to prevent the company developing unsafe AI? But Emmett Shear said it wasn’t about safety? What on earth is going on?

LudwigNagasena11 days ago

The whole drama feels like a Shepard tone. You anticipate the climax, but it just keeps escalating.

SilverBirch11 days ago

It's not clear to me that bringing Sam back is even an option anymore, given the move to Microsoft. Does Microsoft really take its boot off OpenAI's neck and hand back Sam? I guess maybe, but it still raises all sorts of questions about the corporate structure.

bertil11 days ago

No small employer wants a disgruntled employee who was forced out of a better deal. Satya Nadella has proven reasonable throughout the weekend. I would expect he asked for a seat on the board if there's a reshuffle, or at least someone he trusts there.

gsuuon11 days ago

The firing was definitely handled poorly and the communications around it were a failure, but it seems like the organizational structure was doing what it was designed to do.

Is this the end of non-profit/profit-capped AI development? Would anyone else attempt this model again?

RadixDLT11 days ago

OpenAI's co-founder Ilya Sutskever and more than 500 other employees have threatened to quit the embattled company after its board dramatically fired CEO Sam Altman. In an open letter to the company's board, which voted to oust Altman on Friday, the group said it is obvious 'that you are incapable of overseeing OpenAI'. Sutskever is a member of the board and backed the decision to fire Altman, before tweeting his 'regret' on Monday and adding his name to the letter. Employees who signed the letter said that if the board does not step down, they 'may choose to resign' en masse and join 'the newly announced Microsoft subsidiary run by Sam Altman'.

vaxman11 days ago

Altman can’t really go back to OpenAI ever because it would create an appearance of impropriety on the part of MS (that perhaps MS had intentionally interfered in OpenAI, rather than being a victim of it) and therefore expose MS to liability from the other investors in OpenAI.

Likewise, these workers that threatened to quit OpenAI out of loyalty to Altman now need to follow thru sooner rather than later, so their actions are clearly viewed in the context of Altman’s firing.

In the meantime, how can the public resume work on API integrations without knowing when the MS versions will come online, or whether they will be binary-interoperable with the OpenAI servers that could seemingly go down at any moment?

grumple11 days ago

It is disappointing that the outcome of this is that Altman and co are basically going to steal a nonprofit's IP and use it at a competitor. They took advantage of the goodwill of the public and favorable taxation in order to develop the technology; now that it's ready, they want to privatize the profit. It looks like this was the plan all along, and it's very strange to me that a nonprofit is allowed to have a for-profit subsidiary.

I would hope the California AG is all over this whole situation. There's a lot of fishy stuff going on already, and the idea that nonprofit IP / trade secrets are going to be stolen and privatized by Microsoft seems pretty messed up.

LuvThisBoard11 days ago

Based on what has come out so far, seems to me:

The board wanted to keep the company true to its mission - non profit, ai safety, etc. Nadella/MSFT left OpenAI alone as they worked out a solution, so it looks like even Nadella/MSFT understood that.

The board could explain their position and move on. Let whoever of the 600 actually want to leave, leave. In particular, the employees who want a company whose objective is to make them lots of money should leave and find one. OpenAI can rebuild its teams - it might take a bit of time, but since they are a non-profit that is fine. Most CS grads across the USA would be happy to join OpenAI and work with Ilya and team.

ekojs11 days ago
endisneigh11 days ago

Even if the board resigns the damage has been done. They should try to secure good offers at Microsoft.

The stakes being heightened only decreases the likelihood that the OpenAI profit sharing will be worth anything, which in turn raises the stakes further…

baradhiren0711 days ago

The great Closing of “Open”AI.

whatwhaaaaat11 days ago

I don’t trust any of this. Every one of these wired articles has been totally wrong. Altman clearly has major media connections and also seems to have no problem telling total lies.

andrewfromx11 days ago

so what happens if @eshear calls this probably-not-a-bluff and lets everyone walk? The people that remain get new options, and 500 other people still definitely want to work at OAI?

ignoramous11 days ago

If it comes to that, I reckon Emmett will have his former boss Andy Jassy merge whatever's left of OpenAI into AWS. Unlikely though, as reconciliation seems very much a possibility.

ergocoder11 days ago

It is likely gonna be that way.

Shear is the new CEO. This implosion is not his fault, and his reputation is not destroyed.

He can rebuild the non-profit part, whose success or failure is hard to measure anyway. Then he will leave in a few years.

He doesn't seem to have much to lose by just focusing on rebuilding OpenAI.

wenyuanyu11 days ago

I guess employees are compensated with stock from the for-profit entity. And at face value before the saga, stock could be 90%, 95%, or even more of the total value of their packages. How many people are really willing to wipe out 90% of their compensation just to stick to the mission? On the other hand, M$ offers to match. The day employees are compensated with stock of the for-profit arm, there is no way to return to the nonprofit and its charter any more.

chs2011 days ago

Seems like Microsoft is getting the rest of OpenAI for free now.

NKosmatos11 days ago

This is what happens when you're a key person, and a very good engineer at that, and the board/company fires you anyway :-)

When are we going to realize that it's people making bad decisions, not the "company"? It's not OpenAI, Google, Apple, or whoever; it's real people, with names and positions of power, who make such shitty decisions. We should blame them, not something as vague as "the company".

zitterbewegung11 days ago

I guess Microsoft now has a new division.

Supposedly, Microsoft's divisions are rumored to compete with each other to the point that they can actually have a negative impact.

baron81611 days ago

I can foresee three possible outcomes here:

1. The board finally relents, Sam goes back, and the company keeps going forward mostly unchanged (but with a new board).

2. All those employees quit, most of whom go to MSFT. But they don't keep their tech and have to start all their projects from scratch. MSFT is eventually able to buy OpenAI for pennies on the dollar.

3. Same as 2, but OpenAI basically just shuts down, or maybe someone like AMZN buys it.

redbell11 days ago

Here we are..

The scene appears completely blurry by now! My head is spinning, and the fan is in 7th gear. I believe only time will apply some sort of sharpening effect to make us realize what's really going on. I feel like I'm watching The Italian Job, the American way; everything and everyone is suspicious to me at this point! Is it possible that MSFT played some tricks behind the scenes?

ayakang3141511 days ago

If OpenAI effectively disintegrates, Microsoft seems to be the beneficiary of this chaos, as it is essentially acquiring OpenAI at almost zero cost. It has IP rights to OpenAI's work, and it will have almost all the brains from OpenAI (AFAIK, MSFT already has access to OpenAI's work, but that does not seem to matter here). And there is no regulatory scrutiny like with the Activision acquisition.

danielovichdk11 days ago

Microsoft is laughing all the way to the bank with the moves they have made today.

One could speculate whether Microsoft initiated this behind the scenes. I would love it if it came out that they had done some crazy espionage and lobbied the board. Tinfoil hat and all, but the truth is crazier than you think.

I remember Bill Gates once said that whoever wins the race for a computerised digital personal assistant, wins it all.