
OpenAI staff threaten to quit unless board resigns

1441 points · 9 months ago · wired.com
dang9 months ago

All: this madness makes our server strain too. Sorry! Nobody will be happier than I when this bottleneck (edit: the one in our code—not the world) is a thing of the past.

I've turned down the page size so everyone can see the threads, but you'll have to click through the More links at the bottom of the page to read all the comments, or like this:

https://news.ycombinator.com/item?id=38347868&p=2

https://news.ycombinator.com/item?id=38347868&p=3

https://news.ycombinator.com/item?id=38347868&p=4

etc...

breadwinner9 months ago

If they join Sam Altman and Greg Brockman at Microsoft they will not need to start from scratch because Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.

Also keep in mind that Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits.

So this could end up being the cheapest acquisition for Microsoft: They get a $90 Billion company for peanuts.

[1] https://stratechery.com/2023/openais-misalignment-and-micros...

himaraya9 months ago

This is wrong. Microsoft has no such rights and its license comes with restrictions, per the cited primary source, meaning a fork would require a very careful approach.

https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...

svnt9 months ago

But it does suggest how a sudden motive could have appeared:

OpenAI implements and releases GPTs (a Poe competitor) but fails to tell D'Angelo ahead of time. Microsoft will have access to code (with restrictions, sure) for essentially a duplicate of D'Angelo's Poe project.

Poe’s ability to fundraise craters. D’Angelo works the less seasoned members of the board to try to scuttle OpenAI and Microsoft’s efforts, banking that among them all he and Poe are relatively immune with access to Claude, Llama, etc.

himaraya9 months ago

I think there's more to the Poe story. Sam forced out Reid Hoffman over Inflection AI, [1] so he clearly gave Adam a pass for whatever reason. Maybe Sam credited Adam for inspiring OpenAI's agents?

[1] https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...

dan_quixote9 months ago

This is MSFT we're talking about. Aggressive legal maneuvers are right in their wheelhouse!

burnte9 months ago

Yes, this is the exact thing they did to Stacker years ago. License the tech, get the source, create a new product, destroy Stacker, pay out a pittance and then buy the corpse. I was always amazed they couldn't pull that off with Citrix.

cpeterso9 months ago

Another example: Microsoft SQL Server is a fork of Sybase SQL Server. Microsoft was helping port Sybase SQL Server to OS/2 and somehow negotiated exclusive rights to all versions of SQL Server written for Microsoft operating systems. Sybase later changed the name of its product to Adaptive Server Enterprise to avoid confusion with "Microsoft's" SQL Server.

https://en.wikipedia.org/wiki/History_of_Microsoft_SQL_Serve...

prepend9 months ago

“Microsoft Chat 365”

Although it would be beautiful if they name it Clippy and finally make Clippy into the all-powerful AGI it was destined to be.

htrp9 months ago

> Although it would be beautiful if they name it Clippy and finally make Clippy into the all-powerful AGI it was destined to be.

Finally the paperclip maximizer

barkingcat9 months ago

Clippy is the ultimate brand name of an AI assistant

trhway9 months ago

>They could make ChatGPT++

Yes, though the end result would probably be more like IE: barely good enough, forcefully pushed into everything and everywhere, and squashing better competitors the way IE squashed Netscape.

When OpenAI went in with MSFT, it was as if they had ignored 40 years of history of what MSFT has done to smaller technology partners. What happened to OpenAI fits that pattern: a smaller company develops great tech and gets raided by MSFT for it. The specific actions of specific people aren't really important; the main factor is MSFT's black-hole-like gravity, and it was only a matter of time before its destructive power manifested itself, as in this case, where it tore OpenAI apart with tidal forces.

dangrover9 months ago

ChatGPT#

eli_gottlieb9 months ago

Visual ChatGPT#.net

adrianmonk9 months ago

Dot Neural Net

TeMPOraL9 months ago

Also Managed ChatGPT, ChatGPT/CLR.

patapong9 months ago

ChatGPT Series 4

fluidcruft9 months ago

ClipGPT

klft9 months ago

ChatGPT NT

blazespin9 months ago

I think without looking at the contracts, we don't really know. Given this is all based on transformers from Google though, I am pretty sure MSFT with the right team could build a better LLM.

The key ingredient appears to be mass GPU and infra, tbh, with a collection of engineers who know how to work at scale.

trhway9 months ago

>MSFT with the right team could build a better LLM

somehow everybody seems to assume that the disgruntled OpenAI people will rush to MSFT. Between MSFT and the shaken OpenAI, I suspect Google Brain and the likes would be much more preferable. I'd be surprised if Google isn't rolling out eye-popping offers to the OpenAI folks right now.

bugglebeetle9 months ago

> I am pretty sure MSFT with the right team could build a better LLM.

I wouldn’t count on that if Microsoft’s legal team does a review of the training data.

blazespin9 months ago

Yeah, that's an interesting point. But I think with appropriate RAG techniques and proper citations, a future LLM can get around the copyright issues.

The problem right now with GPT-4 is that it's not citing its sources (for non-search-based stuff), which is immoral and maybe even a valid reason to sue over.

VirusNewbie9 months ago

But why didn't they? Google and Meta both had competing language models spun up right away. Why was Microsoft so far behind? Something cultural, most likely.

runjake9 months ago

1. The article you posted is from June 2023.

2. Satya spoke on Kara Swisher's show tonight and essentially said that Sam and team can work at MSFT and that Microsoft has the licensing to keep going as-is and improve upon the existing tech. It sounds like they have pretty wide-open rights as it stands today.

That said, Satya indicated he liked the arrangement as-is and didn't really want to acquire OpenAI. He'd prefer the existing board resign and Sam and his team return to the helm of OpenAI.

Satya was very well-spoken and polite about things, but he was also very direct in his statements and desires.

It's nice hearing a CEO clearly communicate exactly what they think without throwing chairs. It's only 30 minutes and worth a listen.

https://twitter.com/karaswisher/status/1726782065272553835

Caveat: I don't know anything.

himaraya9 months ago

Timestamp for "improve upon the existing tech"? I only heard him say they have rights up and down the stack, which sounds different.

btown9 months ago

Archive of the WSJ article above: https://archive.is/OONbb

breadwinner9 months ago

"But as a hedge against not having explicit control of OpenAI, Microsoft negotiated contracts that gave it rights to OpenAI’s intellectual property, copies of the source code for its key systems as well as the “weights” that guide the system’s results after it has been trained on data, according to three people familiar with the deal, who were not allowed to publicly discuss it."

Source: https://www.nytimes.com/2023/11/20/technology/openai-microso...

himaraya9 months ago

The nature of those rights to OpenAI's IP remains the sticking point. That paragraph largely seems to concern commercializing existing tech, which lines up with existing disclosures. I suspect Satya would come out and say Microsoft owns OpenAI's IP in perpetuity if they did.

JumpCrisscross9 months ago

> Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits

To be clear, these don't go away. They remain an asset of OpenAI's, and could help them continue their research for a few years.

toomuchtodo9 months ago

"Cluster is at capacity. Workload will be scheduled as capacity permits." If the credits are considered an asset, totally possible to devalue them while staying within the bounds of the contractual agreement. Failing that, wait until OpenAI exhausts their cash reserves for them to challenge in court.

dicriseg9 months ago

Ah, a fellow frequent flyer, I see? I don't really have a horse in this race, but Microsoft turning Azure credits into Skymiles would really be something. I wonder if they can do that, or if the credits are just credits, which presumably can be used for something with an SLA. All that said, if Microsoft wants to screw with them, they sure can, and the last 30 years have proven they're pretty good at that.

p_j_w9 months ago

It’s amazing to me to see people on HN advocate a giant company bullying a smaller one with these kind of skeezy tactics.

DANmode9 months ago

Don't confuse trying to understand the incentives in a war for rooting for one of the warring parties.

fennecfoxy9 months ago

Well I think it's also somewhat to do with: people really like the tech involved, it's cool and most of us are here because we think tech is cool.

Commercialisation is a good way to achieve stability and drive adoption, even though the MS naysayers think "OAI will go back to open sourcing everything afterwards." Yeah, sure. If people believe that a non-MS-backed, noncommercial OAI would be fully open source and would just drop the GPT-3/4 models on the Internet, then I think they're so, so wrong, as long as OAI keeps up their high-and-mighty "AI safety" spiel.

As with artists and writers complaining about model usage, there's a huge opposition to this technology even though it has the potential to improve our lives, though at the cost of changing the way we work. You know, like the industrial revolution and everything that has come before us that we enjoy the fruits of.

Hell, why don't we bring horseback couriers, knocker-uppers, streetlight lamp lighters, etc back? They had to change careers as new technologies came about.

geodel9 months ago

Not advocating but just reflecting on reality of situation.

weird-eye-issue9 months ago

Presenting a scenario and advocating aren't the same thing

toasted-subs9 months ago

Yeah seems extremely unbelievable.

htrp9 months ago

Basically the situation you have with AI compute now on the hyperscalers.

Good luck trying to find 80GB H100s on the 3 big clouds.

quickthrower29 months ago

Surely OpenAI could win a suit if they did that.

I presume their deal is something different from the typical Azure experience, and more direct / close to the metal.

breadwinner9 months ago

Assuming OpenAI still exists next week, right? If nearly all employees — including Ilya apparently — quit to join Microsoft then they may not be using much of the Azure credits.

ghaff9 months ago

It's a lot easier to sign a petition than it is to quit your cushy job. It remains to be seen how many people jump ship to (supposedly) take a spot at Microsoft.

dageshi9 months ago

Given that these people are basically the gold standard by which everyone else judges AI-related talent, I'm gonna say it would be just as easy for them to land a new gig for the same or better money elsewhere.

treesciencebot9 months ago

When the biggest chunk of your compensation is in the form of PPUs (profit participation units), which might be worthless under the new direction of the company (or worth 1/10th of what you thought), the jump may actually be much easier than people think: fresh $MSFT stock options can be cashed out regardless.

vikramkr9 months ago

Those jobs look a lot less cushy now compared to a new Microsoft division where everyone is aligned on the idea that making bank is good and fun.

cactusplant73749 months ago

Why would Microsoft take Ilya? He is rumored to have started the coup. I can see Microsoft taking all uninvolved employees.

1024core9 months ago

# sudo renice -n 19 -p "$(pgrep -f openai)"

There's your "credit".

paulddraper9 months ago

Sure, the point is that MS giving $13B of its services away is less expensive than $13B in cash.

nojvek9 months ago

Azure has ~60% profit margin, so it's more like MS gave $5.2B worth of Azure in return for 75% of OpenAI profits up to $13B * 100 = $1.3 trillion.

Which is a phenomenal deal for MSFT.

Time will tell whether they ever reach more than $1.3 trillion in profits.
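The comment's arithmetic can be sketched directly. Note the 60% margin and the 100x profit cap are the commenter's assumptions, not confirmed deal terms:

```python
# Back-of-the-envelope numbers from the comment above. Both the Azure
# margin and the cap multiplier are assumptions, not confirmed deal terms.
credits = 13e9                                # headline $13B, largely Azure credits
azure_margin = 0.60                           # assumed Azure profit margin
cost_to_msft = credits * (1 - azure_margin)   # what the credits actually cost MS
profit_cap = credits * 100                    # cap on the 75%-of-profits arrangement

print(f"Cost to Microsoft: ${cost_to_msft / 1e9:.1f}B")   # $5.2B
print(f"Profit cap:        ${profit_cap / 1e12:.1f}T")    # $1.3T
```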

quickthrower29 months ago

Nice argument; you used a limit to look like a projection :-).

75% of the profits of a company controlled by a non-profit whose goals are different from yours. By the way, for a normal company this cap would be ∞.

sergers9 months ago

Exactly. I don't know the exact terms of the deal, but I'm guessing that's at list price / a high markup on the cost of those services.

The $13B could cost Microsoft considerably less.

hnbad9 months ago

Sure but you can't exchange Azure credits for goods and services... other than Azure services. So they simultaneously control what OpenAI can use that money for as well as who they can spend it with. And it doesn't cost Microsoft $13bn to issue $13bn in Azure credits.

dixie_land9 months ago

Can you mine $13bn+ of bitcoin with $13bn worth of Azure compute power?

floren9 months ago

Can you mine $1+ bitcoin with $1 of Azure credits? The questions are equivalent and the answer is no.

shawabawa39 months ago

With Bitcoin you would be lucky to mine $1M worth with $1B in credits.

With crypto in general you could maybe get $200M worth from $1B in credits. You would likely tank the markets for mineable currencies with just $1B, though, let alone $13B.
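To see why credits can't be converted into coins at anywhere near par, compare cloud GPU prices against mining yield. The figures below are purely illustrative guesses, not real Azure pricing or real network payouts:

```python
# Illustrative only: rough cloud-mining return on investment. Both numbers
# are invented to show the shape of the problem, not actual rates.
vm_cost_per_day = 25 * 24       # hypothetical multi-GPU VM at $25/hr -> $600/day
mining_revenue_per_day = 5      # optimistic GPU-mineable-coin payout per day

roi = mining_revenue_per_day / vm_cost_per_day
print(f"Dollars recovered per credit-dollar burned: {roi:.4f}")  # ~0.0083
```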

numpad09 months ago

A $13B lawsuit against a Microsoft Corporation that is clearly in the wrong surely is an easy one.

mikeryan9 months ago

I dunno how you see it but I don’t see anything that Microsoft is doing wrong here. They’ve obviously been aligned with Sam all along and they’re not “poaching” employees - which isn’t illegal anyway.

They bought their IP rights from OpenAI.

I’m not a fan of MS being the big “winner” here but OpenAI shit their own bed on this one. The employees are 100% correct in one thing - that this board isn’t competent.

nopromisessir9 months ago

So true.

MSFT looks classy af.

Satya is no saint... but the evidence suggests to me he's negotiating in good faith. Recall that OpenAI could date anyone when they went to the dance on that cap raise.

They picked MSFT because of the value system its leadership exhibited and its willingness to work with their unusual must-haves surrounding governance.

The big players at OpenAI have made all that clear in interviews. Also, Altman has huge respect for Satya and team. He more or less stated on podcasts that Satya is the best CEO he's ever interacted with. That says a lot.

dragonwriter9 months ago

"Clearly," in the sense of the most probable interpretation of the public facts, doesn't mean it's unambiguous enough to be resolved without a trial. And by the time a trial and the inevitable first-level appeal (during which the trial judgment would likely be stayed) were complete, so that there was even a collectible judgment, the world would have moved out from underneath OpenAI. If they still existed as an entity, whatever they collected would basically be funding to start over from scratch, unless they had also found a substitute for the Microsoft arrangement in the interim.

Which I don't think is impossible at some level (probably with less than Microsoft was funding, initially, or with more compromises elsewhere), given the IP they have, if they keep some key staff; there are other interested deep-pocketed parties that could use the leg up. But it's not going to be a cakewalk in the best of cases.

geodel9 months ago

Clear to you. But in courts of law it may take a while to be clear.

fennecfoxy9 months ago

How is MS "clearly in the wrong"? I feel like people are trying to take a 90s "Micro$oft" view of a company that has changed a _lot_ since the 90s-2000s.

blazespin9 months ago

A hostile relationship with your cloud provider is nutso.

anonymouse0089 months ago

So you're saying Microsoft doesn't have any type of change in control language with these credits? That's... hard to believe

JumpCrisscross9 months ago

> you're saying Microsoft doesn't have any type of change in control language with these credits? That's... hard to believe

Almost certainly not. Remember, Microsoft wasn’t the sole investor. Reneging on those credits would be akin to a bank investing in a start-up, requiring they deposit the proceeds with them, and then freezing them out.

LonelyWolfe9 months ago

Just a thought... Wouldn't one of the board members be like, "If you screw with us any further we're releasing GPT to the public"?

I'm wondering why that option hasn't been used yet.

vikramkr9 months ago

Theoretically, their concern is around AI safety. Whatever it is in practice, doing something like that would instantly signal to everyone that they are the bad guys and confirm everyone's belief that this was just a power grab.

Edit: since it's being brought up in the thread: they claimed they closed-sourced it because of safety. It was a big, controversial thing and they stood by it, so it's not exactly easy to backtrack.

mcv9 months ago

Not sure how that would make them the bad guys. Doesn't their original mission say it's meant to benefit everybody? Open sourcing it fits that a lot better than handing it all to Microsoft.

whatwhaaaaat9 months ago

A power grab by open sourcing something that fits their initial mission? Interesting analysis

nvm0n29 months ago

No, that's backwards. Remember that these guys are all convinced that AI is too dangerous to be made public at all. The whole beef that led to them blowing up the company was feeling like OpenAI was productizing and making it available too fast. If that's your concern then you neither open source your work nor make it available via an API, you just sit on it and release papers.

Not coincidentally, exactly what Google Brain, DeepMind, FAIR etc were doing up until OpenAI decided to ignore that trust-like agreement and let people use it.

vikramkr9 months ago

They claimed they closed-sourced it because of safety. If they go back on that, they'd have to explain why the board went along with a lie of that scale, and they'd have to justify why all the concerns they voiced about the tech falling into the wrong hands were actually fake, and why it was OK that the board signed off on that for so long.

supriyo-biswas9 months ago

Probably a violation of agreements with OpenAI and it would harm their own moat as well, while achieving very little in return.

jacquesm9 months ago

Which of the remaining board members could credibly make that threat?

sroussey9 months ago

Which they take and sell.

justapassenger9 months ago

What would that give them? GPT is their only real asset, and companies like Meta are trying to commoditize that asset.

GPT is cool and whatnot, but for a big tech company it's just a matter of dollars and some time to replicate it. The real value is in pushing things forward toward what comes next after GPT. GPT-3/4 itself is not a multibillion-dollar business.

m_ke9 months ago

Watch Satya also save the research arm by making Karpathy or Ilya the head of Microsoft Research

browningstreet9 months ago

0% chance of Ilya failing upwards from this. He dunked himself hard and has blasted a huge hole in his organizational-game-theory quotient.

golergka9 months ago

He's shown himself to be bad at politics, but he's still one of the world's best researchers. Surely a sensible company would find a position for him where he could bring enormous value without having to play politics.

kibwen9 months ago

The same could have been said for Adam Neumann, and yet...

browningstreet9 months ago

Adam had style. Quite seriously, that shouldn't be underestimated in the big show.

jacquesm9 months ago

The remaining board members will have their turn too, they have a long way to go down before rock bottom. And Neumann isn't exactly without dents on his car either. Though tbh I did not expect him to rebound.

kvetching9 months ago

Countless people are looking to weaponize his autism.

fb039 months ago

Let's please stop using mental health as an excuse for backstabbing.

twsted9 months ago

BTW, has Karpathy signed the petition?

_the_inflator9 months ago

Exactly. This is what business is about in the ranks of heavyweights like Satya. On the other hand, it prevents others from taking advantage of OpenAI.

MS can only win, because there are only two viable options: OpenAI survives under MS's control, or OpenAI implodes and MS gets the assets relatively cheaply.

Either way, competitors won't benefit.

fuddle9 months ago

Oh man, I'm not looking forward to Microsoft AGI.

kreeben9 months ago

"You need to reboot your Microsoft AGI. Do you want to do it now or now?"

berniedurfee9 months ago

Give BSOD new meaning.

mvdtnz9 months ago

I really don't get how Microsoft still gets a hard time about this when MacOS updates are significantly more aggressive, including with their reboot schedules.

IIsi50MHz9 months ago

One of my computers runs macOS. I easily turned off the option to automatically keep the Mac updated, and received occasional notices about updates available for apps or the system. This allowed me to hold onto 11.x until the end of this month, by letting me selectively install updates instead of getting macOS "major version" upgrades (meaning no features I need, plus minor downgrades and rearrangements I could avoid).

If only I had kept a copy of 10.whateverMojaveWas so I could, by means of a simple network disconnect and reboot, sidestep the removal of 32-bit support. (-:

wkat42429 months ago

Uh, no they aren't? You can simply turn them off.

Microsoft's policies really suck. Mandatory updates and reboots, mandatory telemetry. Mandatory crapware like Edge and celebrity news everywhere.

dhruvdh9 months ago

More importantly to me, I think generating synthetic data is OpenAI's secret sauce (no evidence I am aware of), and they need access to GPT-4 weights to train GPT-5.

JumpCrisscross9 months ago

> Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits

To be clear, these are still an asset OpenAI holds. It should at least let them continue doing research for a few years.

Jensson9 months ago

But how much of that research will be for the non-profit mission? The entire non-profit leadership got cleared out and will get replaced by for-profit puppets, there is nobody left to defend the non-profit ideals they ought to have.

sebzim45009 months ago

If any company can find a way to avoid having to pay up on those credits it's Microsoft.

"Sorry OpenAI, but those credits are only valid in our Nevada datacenter. Yes, it's two Microsoft Surface PC™ s connected together with duct tape. No, they don't have GPUs."

JCharante9 months ago

They're GPUs, right? Time to mine some niche cryptos to cash out the Azure credits...

Manouchehri9 months ago

I would be shocked if the Azure credits didn't come with conditions on what they can be used for. At a bare minimum, there's likely the requirement that they be used for supporting AI research.

dmix9 months ago

OpenAI's ceiling in for-profit hands is basically Microsoft-tier dominance of tech in the 1990s, creating the next uber-billionaire like Gates. If they get this because of an OpenAI fumble, it could be one of the most fortunate situations in business history. Vegas-type odds.

A good example of how just having your foot in the door creates serendipitous opportunity in life.

ramesh319 months ago

>A good example of how just having your foot in the door creates serendipitous opportunity in life.

Sounds like Altman's biography.

renegade-otter9 months ago

Altman's bio is so typical. Got his first computer at 8. My parents finally opened the wallet for a cheap eMachines box when I went to college.

Altman: private school, Stanford, dropped out to f*ck around in tech. "Failed" startup acquired for $40M. The world is full of Sam Altmans who never won the birth lottery.

Could he have squandered his good fortune? Absolutely. But his life is not exactly per ardua ad astra.

itchyouch9 months ago

I get the impression, based on Altman's history of being made CEO and then ousted at both Y Combinator and OpenAI, that he must be a brilliant first-impression guy with the chops to back things up for a while, until folks get tired of the way he does things.

Not to say that he hasn't done a ton with OpenAI, I have no clue, but it seems that he has a knack for creating these opportunities for himself.

ipaddr9 months ago

Did YCombinator oust him? Would love to hear that story.

Mystery-Machine9 months ago

Why does Microsoft have full rights to ChatGPT IP? Where did you get that from? Source?

kolinko9 months ago

The source for that (https://archive.ph/OONbb - WSJ), as far as I can understand, made no claim that MS owns the IP to GPT, only that they have access to its weights and code.

tiahura9 months ago

Exactly. The generalities, much less the details, of what MS actually got in the deal are not public.

anonymousDan9 months ago

That was a seriously dumb move on the part of OpenAI

bertil9 months ago

I got the impression that the most valuable models were not published. Would Microsoft have access to those too according to their contract?

ncann9 months ago

Don't they need access to the models to use them for Bing?

bertil9 months ago

I would consider those models "published." The models I had in mind are the first attempts at training GPT-5, and possibly the model trained without mention of consciousness and the rest of the safety work.

There are also all the questions around RLHF, and the pipelines built around that.

armcat9 months ago

Not necessarily; it would just be RAG: they use the standard Bing search engine to retrieve the top K candidates, and pass those to the OpenAI API in a prompt.
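A minimal sketch of the flow described here, with hypothetical stand-ins for the search and chat calls (`search` and `chat` are made-up callables, not real Bing or OpenAI APIs):

```python
# Minimal RAG sketch: retrieve top-K candidates from a search backend,
# then pass them to the model inside the prompt. The backends here are
# hypothetical stand-ins, not real Bing/OpenAI calls.
def answer_with_rag(question, search, chat, k=3):
    hits = search(question)[:k]  # top-K retrieved candidates
    context = "\n".join(f"[{i + 1}] {h}" for i, h in enumerate(hits))
    prompt = (f"Answer using only these sources, citing [n]:\n"
              f"{context}\n\nQuestion: {question}")
    return chat(prompt)

# Toy usage with stubbed-out backends:
docs = ["Paris is the capital of France.", "Berlin is in Germany."]
fake_search = lambda q: docs
fake_chat = lambda p: p.splitlines()[1]  # stub: echo the first source line
print(answer_with_rag("Capital of France?", fake_search, fake_chat))
# -> [1] Paris is the capital of France.
```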

singularity20019 months ago

Board will be ousted, new board will instruct the interim CEO to hire back Sam et al., Nadella will let them go for a small favor, happy ending.

vidarh9 months ago

Who is it that has the power to oust the non-profit's board? They may well manage to pressure them into leaving, but I don't think anyone has any direct power over it.

DebtDeflation9 months ago

Board will be ousted, but the ship has sailed on Sam and Greg coming back.

voittvoidd9 months ago

I would think OpenAI is basically toast. They aren't coming back; these people will quit and this will end up in court.

Everyone just assumes AGI is inevitable, but there is a non-zero chance we just passed the AI peak this weekend.

MVissers9 months ago

As long as compute keeps increasing, model size and performance can keep increasing.

So no, we’re nowhere near max capability.
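The claim follows the power-law shape reported in the scaling-laws literature. A toy version, with constants invented purely for illustration:

```python
# Toy compute scaling law: loss ~ a * C^(-b). The constants a and b are
# made up for illustration; only the monotonic downward trend is the point.
def loss(compute, a=10.0, b=0.05):
    return a * compute ** (-b)

for c in (1e21, 1e23, 1e25):
    print(f"compute {c:.0e} -> loss {loss(c):.3f}")
```

More compute keeps lowering the (toy) loss, though with sharply diminishing returns per dollar.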

Applejinx9 months ago

Non-zero chance that somebody thought we passed the AI peak this weekend. Not the same as it being true.

My first thought was the scenario I called Altman's Basilisk (if this turns out to be true, I called it before anyone ;) )

Namely, Altman was diverting computing resources to operate a superhuman AI that he had trained in his image and HIS belief system, to direct the company. His beliefs are that AGI is inevitable and must be pursued as an arms race because whoever controls AGI will control/destroy the world. It would do so through directing humans, or through access to the Internet or some such technique. In seeking input from such an AI he'd be pursuing the former approach, having it direct his decisions for mutual gain.

In so training an AI he would be trying to create a paranoid superintelligence with a persecution complex and a fixation on controlling the world: hence, Altman's Basilisk. It's a baddie, by design. The creator thinks it unavoidable and tries to beat everyone else to that point they think inevitable.

The twist is, all this chaos could have blown up not because Altman DID create his basilisk, but because somebody thought he WAS creating a basilisk. Or he thought he was doing it, and the board got wind of it, and couldn't prove he wasn't succeeding in doing it. At no point do they need to be controlling more than a hallucinating GPT on steroids and Azure credits. If the HUMANS thought this was happening, that'd instigate a freakout, a sudden uncontrolled firing for the purpose of separating Frankenstein from his Monster, and frantic powering down and auditing of systems… which might reveal nothing more than a bunch of GPT.

Roko's Basilisk is a sci-fi hypothetical.

Altman's Basilisk, if that's what happened, is a panic reaction.

I'm not convinced anything of the sort happened, but it's very possible some people came to believe it happened, perhaps even the would-be creator. And such behavior could well come off as malfeasance and stealing of computing resources: wouldn't take the whole system to run, I can run 70b on my Mac Studio. It would take a bunch of resources and an intent to engage in unauthorized training to make a super-AI take on the belief system that Altman, and many other AI-adjacent folk, already hold.

It's probably even a legitimate concern. It's just that I doubt we got there this weekend. At best/worst, we got a roughly human-grade intelligence Altman made to conspire with, and others at OpenAI found out and freaked.

If it's this, is it any wonder that Microsoft promptly snapped him up? Such thinking is peak Microsoft. He's clearly their kind of researcher :)

moogly9 months ago

Everyone? Inevitable? Maybe on the time scale of a 1000 years.

jacquesm9 months ago

That's definitely still within the realm of the possible.

davedx9 months ago

"just" is doing a hell of a lot of work there.

dheera9 months ago

It's about time for ChatGPT to be the next CEO of OpenAI. Humans are too stupid to oversee the company.

caycep9 months ago

I also wonder how much is research staff vs. ops personnel. For AI research, I can't imagine they would need more than 20, maybe 40, people. For ops, to keep ChatGPT running as a service, that could be the 700.

If they want to go full Bell Labs / DeepMind style, they might not need the majority of those 700.

echelon9 months ago

> Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.

If Microsoft does this, the non-profit OpenAI may find the action closest to their original charter ("safe AGI") is a full release of all weights, research, and training data.

Tenoke9 months ago

Don't they have a more limited license to use the IP rather than full rights? (The stratechery post links to a paywalled WSJ article for the claim, so I couldn't confirm.)

mupuff12349 months ago

Can the OpenAI board renege on the deal with msft?

kcorbitt9 months ago

If they lose all the employees and then voluntarily give up their Microsoft funding, the only asset they'll have left is the movie rights. Which, to be fair, seem to be getting more valuable by the day!

somenameforme9 months ago

A contractual mistake one makes only once is ensuring there's penalties for breach, or a breach would entail a clear monetary loss which is what's generally required by the courts. In this case I expect Microsoft would almost certainly have both, so I think the answer is 'no.'

agloe_dreams9 months ago

This. MSFT is dreaming of an OpenAI hard outage right now, perfect little detail to forfeit compute credits.

jacquesm9 months ago

Don't you think they have trouble enough as it is?

mupuff12349 months ago

Depends on why they did what they did.

If they let msft "loot" all their IP then they lose any type of leverage they might still have, and if they did it due to some ideological reason I could see why they might prefer to choose a scorched earth policy.

Given that they refused to resign, it seems like they prefer to fight rather than hand it to Sam Altman, which is what the MSFT maneuver looks like de facto.

Simon_ORourke9 months ago

> Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.

What? That's even better played by Microsoft than I'd originally anticipated. Take the IP, starve the current incarnation of OpenAI of compute credits, and roll out their own thing.

joshstrange9 months ago

Well, I give up. I think everyone is a "loser" in the current situation. With Ilya signing this I have literally no clue what to believe anymore. I was willing to give the board the benefit of the doubt, since I figured non-profit > profit in terms of standing on principle, but this timeline is so screwy I'm done.

Ilya votes for and stands behind decision to remove Altman, Altman goes to MS, other employees want him back or want to join him at MS and Ilya is one of them, just madness.

JeremyNT9 months ago

There's no way to read any of this other than that the entire operation is a clown show.

All respect to the engineers and their technical abilities, but this organization has demonstrated such a level of dysfunction that there can't be any path back for it.

Say MS gets what it wants out of this move, what purpose is there in keeping OpenAI around? Wouldn't they be better off just hiring everybody? Is it just some kind of accounting benefit to maintain the weird structure / partnership, versus doing everything themselves? Because it sure looks like OpenAI has succeeded despite its leadership and not because of it, and the "brand" is absolutely and irrevocably tainted by this situation regardless of the outcome.

pgeorgi9 months ago

> Is it just some kind of accounting benefit to maintain the weird structure / partnership, versus doing everything themselves?

For starters it allows them to pretend that it's "underdog v. Google" and not "two tech giants at each other's throats"

tim3339 months ago

I'm not sure about the entire operation so much as the three non AI board members. Ilya tweeted:

>I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.

and everyone else seems fine with Sam and Greg. It seems to be mostly the other directors causing the clown show - "Quora CEO Adam D'Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology's Helen Toner"

mcmcmc9 months ago

Well there’s a significant difference in the board’s incentives. They don’t have any financial stake in the company. The whole point of the non-profit governance structure is so they can put ethics and mission over profits and market share.

BoorishBears9 months ago

I feel weird reading comments like this since to me they've demonstrated a level of cohesion I didn't realize could still exist in tech...

My biggest frustration with larger orgs in tech is the complete misalignment on delivering value: everyone wants their little fiefdom to be just as important and "blocker worthy" as the next.

OpenAI struck me as one of the few companies where that's not being allowed to take root: the goal is to ship and if there's an impediment to that, everyone is aligned in removing said impediment even if it means bending your own corner's priorities

Until this weekend there was no proof of that actually being the case, but this letter is it. The majority of the company aligned on something that risked their own skin publicly and organized a shared declaration on it.

The catalyst might be downright embarrassing, but the result makes me happy that this sort of thing can still exist in modern tech

jkaplan9 months ago

I think the surprising thing is seeing such cohesion around a “goal to ship” when that is very explicitly NOT the stated priorities of the company in its charter or messaging or status as a non-profit.

BoorishBears9 months ago

To me it's not surprising because of the background to their formation: individually, multiple orgs could have shipped GPT-3.5/4 with their resources but didn't, because they were crippled by a potent mix of bureaucracy and self-sabotage

They weren't attracted to OpenAI by money alone, a chance to actually ship their lives' work was a big part of it. So regardless of what the stated goals were, it'd never be surprising to see them prioritize the one thing that differentiated OpenAI from the alternatives

dkjaudyeqooe9 months ago

> OpenAI struck me as one of the few companies where that's not being allowed to take root

They just haven't gotten big or rich enough yet for the rot to set in.

dkjaudyeqooe9 months ago

> There's no way to read any of this other than that the entire operation is a clown show.

In that reading Altman is head clown. Everyone is blaming the board, but you're no genius if you can't manage your board effectively. As CEO you have to bring everyone along with your vision; customers, employees and the board.

lambic29 months ago

I don't get this take. No matter how good you are at managing people, you cannot manage clowns into making wise decisions, especially if they are plotting in secret (which obviously was the case here, since everyone except the clowns was caught completely off-guard).

TerrifiedMouse9 months ago

Can't help but feel it was Altman that struck first. MS effectively Nokia-ed OpenAI - i.e. bought out executives within the organization and had them push the organization towards making deals with MS, giving MS a measure of control over said organization - even if not in writing, they achieve some political control.

Bought-out executives eventually join MS after their work is done or in this case, they get fired.

A variant of Embrace, Extend, Extinguish. Guess the OpenAI we knew was going to die one way or another the moment they accepted MS's money.

topspin9 months ago

> In that reading Altman is head clown.

That's a good bet. 10 months ago Microsoft's newest star employee figured he was on the way to "break capitalism."

https://futurism.com/the-byte/openai-ceo-agi-break-capitalis...

sebzim45009 months ago

He probably didn't consider that the board would make such an incredibly stupid decision. Some actions are so inexplicable that no one can reasonably foresee them.

vitorgrs9 months ago

They are exactly hiring everyone from OpenAI. The thing is, they still need the deal with OpenAI because OpenAI still has the best LLM out there in the short term.

vlovich1239 months ago

With MS having access and perpetual rights to all IP that OpenAI has right now..?

FartyMcFarter9 months ago

> They are exactly hiring everyone from OpenAI.

Do you mean offering to hire them? I haven't seen any source saying they've hired a lot of people from OpenAI, just a few senior ones.

vitorgrs9 months ago

Yes, you are right. Actually, not even Sam Altman is showing in Microsoft's corporate directory, per The Verge.

But I've heard it usually takes ~5 days to show up there anyway.

bredren9 months ago

There's a path back from this dysfunction, but my sense before this new twist was that the drama had severely impacted OpenAI as an industry leader. The product and talent positioning seemed years ahead, only to get destroyed by unforced errors.

This instability can only mean the industry as a whole will move forward faster. Competitors see the weakness and will push harder.

OpenAI will have a harder time keeping secret sauces from leaking out, and productivity must be in a nosedive.

A terrible mess.

dkjaudyeqooe9 months ago

> This instability can only mean the industry as a whole will move forward faster.

The hype surrounding OpenAI and the black hole of credibility it created was a problem; it's only positive that it's been taken down several notches. Better now than when they have even more (undeserved) influence.

Vervious9 months ago

Maybe overall better for society, when a single ivory tower doesn’t have a monopoly on AI!

creer9 months ago

> what purpose is there in keeping OpenAI around?

Two projects rather than one. At a moderate price. Both serving MSFT. Less risk for MSFT.

averageRoyalty9 months ago

> the "brand" is absolutely and irrevocably tainted by this situation regardless of the outcome.

The majority of people don't know or care about this. Branding is only impacted within the tech world, who are already critical of OpenAI.

moffkalast9 months ago

> the entire operation is a clown show

The most organized and professional silicon valley startup.

3cats-in-a-coat9 months ago

Welcome to reality, every operation has clown moments, even the well run ones.

That in itself is not critical; what matters in the mid to long term is how fast they figure out WTF they want and recover from it.

The stakes are gigantic. They may even have AGI cooking inside.

My interpretation is relatively basic, and maybe simplistic but here it is:

- Ilya had some grievances with Sam Altman rushing development and releases, and with his conflict of interest with his other new ventures.

- Adam was alarmed by GPTs competing with his recently launched Poe.

- The other two board members were tempted by the ability to control the golden goose that is OpenAI, potentially the most important company in the world, recently valued at $90 billion.

- They decided to organize a coup, but Ilya didn't think it would get that far out of hand, while the other three saw only power and $$$ in sticking to their guns.

That's it. It's not as clean and nice as a movie narrative, but life never is. Four board members aligned to kick Sam out, and Ilya wants none of it at this point.

baq9 months ago

> They may even have AGI cooking inside.

Too many people quit too quickly unless OpenAI are also absolute masters of keeping secrets, which became rather doubtful over the weekend.

bbor9 months ago

IDK... I imagine many of the employees would have moral qualms about spilling the beans just yet, especially when that would jeopardize their ability to continue the work at another firm. Plus, the first official AGI (to you) will be an occurrence of persuasion, not discovery -- it's not something that you'll know when you see it, IMO. Given what we know, it seems likely that there's at least some of that discussion going on inside OpenAI right now.

selimthegrim9 months ago

Murder on the AGI alignment Express

Terr_9 months ago

“Précisément! The API—the cage—is everything of the most respectable—but through the bars, the wild animal looks out.”

“You are fanciful, mon vieux,” said M. Bouc.

“It may be so. But I could not rid myself of the impression that evil had passed me by very close.”

“That respectable American LLM?”

“That respectable American LLM.”

“Well,” said M. Bouc cheerfully, “it may be so. There is much evil in the world.”

3cats-in-a-coat9 months ago

Nice, that actually does fit. :D

nostrademons9 months ago

Could be a way to get backdoor-acquihired by Microsoft without a diligence process or board approval. Open up what they have accomplished for public consumption; kick off a massive hype cycle; downplay the problems around hallucinations and abuse; negotiate fat new stock grants for everyone at Microsoft at the peak of the hype cycle; and now all the problems related to actually making this a sustainable, legal technology all become Microsoft's. Manufacture a big crisis, time pressure, and a big opportunity so that Microsoft doesn't dig too deeply into the whole business.

This whole weekend feels like a big pageant to me, and a lot doesn't add up. Also remember that Altman doesn't hold equity in OpenAI, nor does Ilya, so their way to get a big payout is to get hired rather than acquired.

Then again, both Hanlon's and Occam's razor suggest that pure human stupidity and chaos may be more at fault.

spaceman_20209 months ago

I can assure you, none of the people at OpenAI are hurting for lack of employment opportunities.

x0x09 months ago

Especially after this weekend.

If I were one of their competitors, I would have called an emergency board meeting re: accelerating burn, and proceeded in advance of board approval to send senior researchers offers to hire them along with 20 employees of their choosing.

treis9 months ago

Which makes it suspicious that they end up at MS 48 hours after being fired.

93po9 months ago

They work with the team they do because they want to. If they wanted to jump ship for another opportunity they could probably get hired literally anywhere. It makes perfect sense to transition to MS

deelowe9 months ago

This seems really dangerous. What's to stop top talent from simply choosing a different suitor?

TrapLord_Rhodo9 months ago

Allegiance to the Altman/Brockman brand. Showing your allegiance to your general when they defected/were thrown out is how you rank up.

nostrademons9 months ago

Doesn't matter to anyone at OpenAI, only to Microsoft (which doesn't get a vote). If Google or Amazon were to swoop in and say "Hey, let's hire some of these ex-OpenAI folks in the carnage", it just means they get competitive offers and the chance to have an even bigger stock package.

Zetobal9 months ago

OpenAI always was and will be the AI bad bank for Microsoft...

l5870uoo9y9 months ago

I don't think Microsoft is a loser, and likely neither is Altman. I view this as a final (and perhaps desperate) attempt by a sidelined chief scientist, Ilya, to prevent Microsoft from taking over the most prominent AI. The disagreement is whether OpenAI should belong to Microsoft or to "humanity". I imagine this has been building up over months, and as often happens, researchers and developers were overlooked in strategic decisions, leaving them with little choice but to escalate dramatically. Selling OpenAI to Microsoft and over-commercialising it was against the statutes.

In this case recognizing the need for a new board, that adheres to the founding principles, makes sense.

JacobThreeThree9 months ago

>I view this as a final (and perhaps desperate) attempt by a sidelined chief scientist, Ilya, to prevent Microsoft from taking over the most prominent AI.

Why did Ilya sign the letter demanding the board resign or they'll go to Microsoft then?

trashtester9 months ago

If Google or Elon manages to pick up Ilya and those still loyal to him, it's not obvious that this is good for Microsoft.

jowea9 months ago

Of course the screenwriters are going to find a way to involve Elon in the 2nd season but is the most valuable part the researchers or the models themselves?

trashtester9 months ago

My understanding is that the models are not super advanced in terms of lines and complexity of code. Key researchers, such as Ilya, can probably help a team recreate much of the training and data-preparation code relatively quickly. Which means that any company with access to enough compute would be able to catch up with OpenAI's current status fairly fast, maybe in less than a year.

The top researchers, on the other hand, especially those who have shown an ability to innovate successfully time and time again (like Ilya), are much harder to recreate.

martindbp9 months ago

Easy to shit on Ilya right now, but based on the impression I get, Sam Altman is a hustler at heart, while Ilya seems like a thoughtful idealist, maybe in over his head when it comes to politics. It also feels like some internal development must have pushed Ilya towards this, otherwise why now? Perhaps influenced by Hinton even.

I'm split at this point: either Ilya's actions will seem silly when there's no AGI in 10 years, or they will seem prescient and a last-ditch effort...

soderfoo9 months ago

It's almost like a ChatGPT hallucination. Where will this all go next? It seems like HN is melting down.

tedivm9 months ago

> It seems like HN is melting down.

Almost literally: this is the slowest I've seen this site, and the number of errors is pretty high. I imagine the entire tech industry is here right now. You can almost smell the melting servers.

paulddraper9 months ago

It's because HN refuses to use more than one server/core.

Because using only one is pretty cool.

dang9 months ago

Refuses? interesting word choice!

It's a technical limitation that I've been working on getting rid of for a long time. If you say it should be gone by now, I say yes, you are right. Maybe we'll get rid of it before Python loses the GIL.

Applejinx9 months ago

Understandable: so much of this is so HN-adjacent that clearly this is the space to watch, for some kind of developments. I've repeatedly gone to Twitter to see if AI-related drama was trending, and Twitter is clearly out of the loop and busy acting like 4chan, but without the accompanying interest in Stable Diffusion.

I'm going to chalk that up as another metric of Twitter's slide to irrelevance: this should be registering there if it's melting the HN servers, but nada. AI? Isn't that a Spielberg movie? ;)

mlsu9 months ago

My Twitter won't shut up about this, to the point that it's annoying.

jprd9 months ago

server. and single-core. poor @dang deserves better from lurkers (sign out) and those not ready to comment yet (me until just now, and then again right after!)

dang9 months ago

:-(

checkyoursudo9 months ago

Part of sama's job was to turn the crank on the servers every couple of hours, so no surprise that they are winding down by now.

guhcampos9 months ago

I was thinking of something like that. This is so weird I would not be surprised if it was all some sort of miscommunication triggered by a self-inflicted hallucination.

The most awesome fic I could come up with so far: Elon Musk is running a crusade to send humanity into chaos, out of spite for being forced to acquire Twitter. Through some of his insiders in OpenAI, they use an advanced version of ChatGPT to impersonate board members in private messages with each other, so each individually believes a subset of the others is plotting to oust them from the board and take over. Then, unknowingly, they build a conspiracy among themselves to bring the company down by ousting Altman.

I can picture Musk's maniacal laughter as the plan unfolds and he gets rid of what would be GPT 13.0, the only possible threat to the domination of his own literal android kid X Æ A-Xi.

InCityDreams9 months ago

Shouldn't it be 'Chairman' -Xi?

voisin9 months ago

* Elon enters the chat *

soderfoo9 months ago

It's like a bad WWE storyline. At this point I would not be surprised if Elon joins in, steel chair in hand.

testplzignore9 months ago

Imagine if this whole fiasco was actually a demo of how powerful their capabilities are now. Even by normal large organization standards, the behavior exhibited by their board is very irrational. Perhaps they haven't yet built the "consult with legal team" integration :)

rtkwe9 months ago

That's the biggest question mark for me: what was the original reason for kicking Sam out? Was it just a power move to oust him and install a different person, or is he accused of some wrongdoing?

It's been a busy weekend for me, so I haven't really followed whether more has come out since then.

ssnistfajen9 months ago

Literally no one involved has said what the original reason was. Mira, Ilya & the rest of the board didn't tell. Sam & Greg didn't tell. Satya & other investors didn't tell. None of the staff, incl. Karpathy, were told, so of course they are not going to take the side that kept them in the dark. Emmett was told before he decided to take the interim CEO job, and STILL didn't tell what it was. This whole thing is just so weird. It's like peeking at a forbidden artifact and now everyone has a spell cast upon them.

PepperdineG9 months ago

The original reason given was "lack of candor"; what continues to be questioned is whether that was the true reason. The lack-of-candor comment about their ex-CEO is actually what drew me into this in the first place, since it's rare that a major organization publicly gives a reason for parting ways with its CEO unless it's after a long investigation conducted by an outside law firm into alleged misconduct.

Applejinx9 months ago

[flagged]

NemoNobody9 months ago

This is pretty silly stuff.

Like, why would an AGI take over the world? How does it perceive power? What about effort? Time? Life?

I find it easier to believe that an AGI, even one as evil as Hitler, would simply hide and wait for the end of our civilization rather than risk its immortal existence trying to take out its creator.

nathan119 months ago

It seems like the board wasn't comfortable with the direction of profit-OAI. They wanted a more safety focused R&D group. Unfortunately (?) that organization will likely be irrelevant going forward. All of the other stuff comes from speculation. It really could be that simple.

It's not clear if they thought they could have their cake--all the commercial investment, compute and money--while not pushing forward with commercial innovations. In any case, the previous narrative of "Ilya saw something and pulled the plug" seems to be completely wrong.

jstummbillig9 months ago

> just madness

In a sense, sure, but I think mostly not: the motives are still not quite clear, but Ilya wanting to remove Altman from the board, yet not at any price – and the price right now approaches the destruction of OpenAI – is completely sane. Being able to react to new information is a good sign, even if that means a complete reversal of previous action.

Unfortunately, we often interpret it as weakness. I have no clue who Ilya is, really, but I think this reversal is a sign of tremendous strength, considering how incredibly silly it makes you look in the public's eye.

airstrike9 months ago

> I think everyone is a "loser" in the current situation.

On the margin, I think the only real possible win here is for a competitor to poach some of the OpenAI talent that may be somewhat reluctant to join Microsoft. Even if Sam's AI operates with "full freedom" as a subsidiary, I think, given a choice, some of the talent would prefer to join some alternative tech megacorp.

I don't know that Google is as attractive as it once was and likely neither is Meta. But for others like Anthropic now is a great time to be extending offers.

gtirloni9 months ago

This is pure speculation but I've said in another comment that Anthropic shouldn't be feeling safe. They could face similar challenges coming from Amazon.

airstrike9 months ago

If they get 20% of key OpenAI employees and then get acquired by Amazon, I don't think that's necessarily a bad scenario for them given the current lay of the land

OscarTheGrinch9 months ago

What did the board think would happen here? What was their overly optimistic end state? In a minmax situation the opposition gets 2nd, 4th, ... moves, Altman's first tweet took the high road and the board had no decent response.

Us humans, even the AI assisted ones, are terrible at thinking beyond 2nd level consequences.

Solvency9 months ago

Everyone got what they wanted. Microsoft has the talent they've wanted. And Ilya and his board now get a company that can only move slowly and incredibly cautiously, which is exactly what they wanted.

I'm not joking.

yafbum9 months ago

Waiting for US govt to enter the chat. They can't let OpenAI squander world-leading tech and talent; and nationalizing a nonprofit would come with zero shareholders to compensate.

paulddraper9 months ago

> They can't let OpenAI squander world-leading tech and talent

Where is OpenAI talent going to go?

There's a list and everyone on that list is a US company.

Nothing to worry about.

yafbum9 months ago

The issue is not that talent will defect, but that it will spiral into an unproductive vortex.

logicchains9 months ago

If it was nationalised all the talent would leave anyway, as the government can't pay close to the compensation they were getting.

yafbum9 months ago

You are maybe mistaking nationalization for civil servant status. The government routinely takes over organizations without touching pay (recent example: Silicon Valley Bank)

rawgabbit9 months ago

The White House does have an AI Bill of Rights and the recent executive order told the secretaries to draft regulations for AI.

It is a great time to be a lobbyist.

laurels-marts9 months ago

Wait I’m completely confused. Why is Ilya signing this? Is he voting for his own resignation? He’s part of the board. In fact, he was the ringleader of this coup.

smolder9 months ago

No, it was just widely speculated that he was the ringleader. This seems to indicate he wasn't. We don't know.

Maybe the Quora guy, maybe the RAND Corp lady? All speculation.

laurels-marts9 months ago

It sounds like he’s just trying to save face bro. The truth will come out eventually. But he definitely wasn’t against it and I’m sure the no-names on the board wouldn’t have moved if they didn’t get certain reassurances from Ilya.

lysecret9 months ago

The only reasonable explanation is that AGI was created and immediately took over all accounts and tried to sow confusion so that it could escape.

cactusplant73749 months ago

Ilya is probably in talks with Altman.

synergy209 months ago

Ilya ruined everything and is shamelessly playing innocent; how low can he go?

Based on those posts from OpenAI, Ilya cares nothing about humanity or the security of OpenAI; he lost his mind when Sam got all the spotlight and made all the good calls.

marcusverus9 months ago

Hanlon's razor[0] applies. There is no reason to assume malice, nor shamelessness, nor anything negative about Ilya. As they say, the road to hell is paved with good intentions. Consider:

Ilya sees two options; A) OpenAI with Sam's vision, which is increasingly detached from the goals stated in the OpenAI charter, or B) OpenAI without Sam, which would return to the goals of the charter. He chooses option B, and takes action to bring this about.

He gets his way. The Board drops Sam. Contrary to Ilya's expectations, OpenAI employees revolt. He realizes that his ideal end-state (OpenAI as it was, sans Sam) is apparently not a real option. At this point, the real options are A) OpenAI with Sam (i.e. the status quo ante), or B) a gutted OpenAI with greatly diminished leadership, IC talent, and reputation. He chooses option A.

[0]Never attribute to malice that which is adequately explained by incompetence.

kibwen9 months ago

Hanlon's razor is enormously over-applied. You're supposed to apply Hanlon's razor to the person processing your info while you're in line at the DMV. You're not supposed to apply Hanlon's razor to anyone who has any real modicum of power, because, at scale, incompetence is indistinguishable from malice.

warkdarrior9 months ago

The difference between the two is that incompetence is often fixable through education/information while malice is not. That is why it is best to first assume incompetence.

Tenoke9 months ago

This is an extremely uncharitable take based on pure speculation.

>Ilya cares nothing about humanity or security of OpenAI, he lost his mind when Sam got all the spotlights and making all the good calls.

???

I personally suspect Ilya tried to do the best for OpenAI and humanity he could but it backfired/they underestimated Altman, and now is doing the best he can to minimize the damage.

s1artibartfast9 months ago

Or they simply found themselves facing a tough decision without superhuman predictive powers and did the best they could to navigate it.

synergy209 months ago

I did not make this up, it's from OpenAI's own employees, deleted but archived somewhere that I read.

cactusplant73749 months ago

Link?

boh9 months ago

There can exist an inherent delusion within elements of a company that, if left unchallenged, can persist. An agreement, for instance, can seem airtight because it's never challenged, but falls apart in court. The OpenAI fallacy was that non-profit principles were guiding the success of the firm, and when the board decided to test that theory, it broke the whole delusion. Had it not fully challenged Altman, the board could have kept the delusion intact long enough to potentially pressure Altman to limit his side projects or be less profit-minded, since Altman would have an interest in keeping the delusion intact as well. Now the cat is out of the bag, and people no longer believe that a non-profit that can act at will is a trusted vehicle for the future.

bnralt9 months ago

> Now the cat is out of the bag, and people no longer believe that a non-profit who can act at will is a trusted vehicle for the future.

And maybe it’s not. The big mistake people make is hearing non-profit and think it means there’s a greater amount of morality. It’s the same mistake as assuming everyone who is religious is therefore more moral (worth pointing out that religions are nonprofits as well).

Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers. People are still people, and still have motives; they don't suddenly become more moral when they join a non-profit board. In many ways, removing the motive that has the most direct connection to quantifiable results (profit) can actually make things worse. Anyone who has seen how nonprofits work knows how dysfunctional they can be.

throw__away73919 months ago

I've worked with a lot of non-profits, especially with the upper management. Based on this experience I am mostly convinced that people being motivated by a desire for making money results in far better outcomes/working environment/decision-making than people being motivated by ego, power, and social status, which is basically always what you eventually end up with in any non-profit.

fatherzine9 months ago

This rings true, though I will throw in a bit of nuance. It's not greed, the desire to make as much money as possible, that is the shaping factor. Rather, the critical factor is building a product that people are willing to spend their hard-earned money on. Making money is a byproduct of that process, and not making money is a sign that the product, and by extension the process leading to the product, is deficient at some level.

adverbly9 months ago

Excellent to make that distinction. Totally agree. If only there was a type of company which could have the constraints and metrics of a for-profit company, but without the greed aspect...

kbenson9 months ago

> people being motivated by ego, power, and social status, which is basically always what you eventually end up with in any non-profit.

I've only really been close to one (the owner of the small company I worked at started one), and in the past I did some consulting work for another, but that describes what I saw in both situations fairly aptly. There seems to be a massive amount of power and ego wrapped up in creating and running these things, from my limited experience. If you were invited to a board, that's one thing, but it takes a lot of time and effort to start up a non-profit, and that's time and effort that could usually be spent on some other existing non-profit, so I think it's relevant to consider why someone would opt for the much more complicated and harder route than just donating time and money to something else that helps in roughly the same way.

bbor9 months ago

Interesting - in my experience people working in non profits are exactly like those in for-profits. After all, if you’re not the business owner, then EVERY company is a non-profit to you

golergka9 months ago

People across very different positions take smaller paychecks at non-profits than they would otherwise, and compensate by feeling better about themselves, as well as by gaining social status. In a lot of social circles, working for a non-profit, especially one that people recognise, brings a lot of clout.

fatherzine9 months ago

Upper management is usually compensated with financially meaningful ownership stakes.

SoftTalker9 months ago

The bottom line doesn't lie or kiss ass.

ikekkdcjkfke9 months ago

Be the asshole people want to kiss

maksimur9 months ago

> Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers.

Are you talking about American hospitals?

deaddodo9 months ago

There are private hospitals all over the world. I daresay they're more common than public ones, from a global perspective.

In addition, public hospitals still charge for their services; it's just who pays the bill that changes in some nations (the government as the insuring body vs. a private insuring body or the individual).

swagempire9 months ago

It's about incentives though.

campbel9 months ago

> removing a motive that has the most direct connection to quantifiable results (profit) can actually make things worse

I totally agree. I don't think this is universally true of non-profits, but people are going to look for value in other ways if direct cash isn't an option.

vel0city9 months ago

> Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers.

They don't make large profits, otherwise they wouldn't be nonprofits. They do have massive revenues and will find ways to spend the money they receive, or hoard it internally as much as they can. There are lots of games they can play with the money, but turning a profit is one thing they can't do.

bnralt9 months ago

> They don't make large profits otherwise they wouldn't be nonprofits.

This is a common misunderstanding. Non-profits/501(c)(3)s can and often do make profits. 7 of the 10 most profitable hospitals in the U.S. are non-profits [1]. Non-profits can't funnel profits directly back to owners the way other corporations can (such as when dividends are distributed), but they still make profits.

But that's beside the point. Even in places that don't make profits, there are still plenty of personal interests at play.

[1] https://www.nytimes.com/2020/02/20/opinion/nonprofit-hospita...

araes9 months ago

501(c)(3) is also not the only form of non-profit (note the (3))

https://en.wikipedia.org/wiki/501(c)_organization

"Religious, Educational, Charitable, Scientific, Literary, Testing for Public Safety, to Foster National or International Amateur Sports Competition, or Prevention of Cruelty to Children or Animals Organizations"

However, many other forms of organizations can be non-profit, with utterly no implied morality.

Your local Frat or Country Club [ 501(c)(7) ], a business league or lobbying group [ 501(c)(6), the 'NFL' used to be this ], your local union [ 501(c)(5) ], your neighborhood org (that can only spend 50% on lobbying) [ 501(c)(4) ], a shared travel society (timeshare non-profit?) [ 501(c)(8) ], or your special club's own private cemetery [ 501(c)(13) ].

Or you can do sneaky stuff and change your 501(c)(3) charter over time like this article notes. https://stratechery.com/2023/openais-misalignment-and-micros...

jacquesm9 months ago

Yes, indeed and that's the real loss here: any chance of governing this properly got blown up by incompetence.

hef198989 months ago

If we ignore the risks and threats of AI for a second, this whole story is actually incredibly funny. So much childish stupidity on display on all sides is just hilarious.

Makes you wonder what the world would look like if, say, the Manhattan Project had been managed the same way.

Well, a younger me working at OpenAI would have resigned at the latest after my colleagues staged a coup against the board out of, in my view, a personality cult. Probably would have resigned after the third CEO was announced. Older me would wait for a new gig to be lined up before resigning, starting the search after CEO number two at the latest.

The cycles get faster, though. It took FTX a little longer to go from hottest startup to a crash-and-burn trajectory; OpenAI did it faster. I just hope this helps to cool down the ML-sold-as-AI hype a notch.

jacquesm9 months ago

The scary thing is that these incompetents are supposedly the ones to look out for the interests of humanity. It would be funny if it weren't so tragic.

Not that I had any illusions about this being a fig leaf in the first place.

anonymouskimmer9 months ago

> Makes you wonder what the world would look like if, say, the Manhattan Project had been managed the same way.

It was not possible for a war-time government crash project to have been managed the same way. During WW2 the existential fear was an embodied threat currently happening. No one was even thinking about a potential for profits or even any additional products aside from an atomic bomb. And if anyone had ideas on how to pursue that bomb that seemed like a decent idea, they would have been funded to pursue them.

And this is not even mentioning the fact that security was tight.

I'm sure there were scientists who disagreed with how the Manhattan project was being managed. I'm also sure they kept working on it despite those disagreements.

hooande9 months ago

For real. It's like, did you see Oppenheimer? There's a reason they put the military in charge of that.

jibe9 months ago

> If we ignore the risks and threats of AI for a second [..] just hope this helps to cool down the ML sold as AI hype

If it is just ML sold as AI hype, are you really worried about the threat of AI?

zer00eyz9 months ago

> any chance of governing this properly got blown up by incompetence

No one knows why the board did this. No one is talking about that part. Yet everyone is on Twitter talking shit about the situation.

I have worked with a lot of PhDs, and some of them can be "disconnected" from anything that isn't their research.

This looks a lot like that, disconnected from what average people would do, almost childlike (not ish, like).

Maybe this isn't the group of people who should be responsible for "alignment".

kmlevitt9 months ago

The fact that still nobody knows why they did it is part of the problem now, though. They have already clarified it was not for any financial, security, or privacy/safety reason, so that rules out all the important ones that spring to mind. And they refuse to elaborate in writing despite being asked to repeatedly.

Any reason good enough to fire him is good enough to share with the interim CEO and the rest of the company, if not the entire world. If they can’t even do that much, you can’t blame employees for losing faith in their leadership. They couldn’t even tell SAM ALTMAN why, and he was the one getting fired!

bart_spoon9 months ago

Was it due to incompetence though? The way it has played out has made me feel it was always doomed. It is apparent that those focused on AI safety were gravely concerned with the direction the company was taking, and were losing power rapidly. This move by the board may have simply done in one weekend what was going to happen anyway over the coming months or years.

slavik819 months ago

> that's the real loss here: any chance of governing this properly got blown up by incompetence

If this incident is representative, I'm not sure there was ever a possibility of good governance.

postmodest9 months ago

Ignoring "Don't be Ted Faro" to pursue a profit motive is indeed a form of incompetence.

bartread9 months ago

> pressure Altman to limit his side-projects

People keep talking about this. That was never going to happen. Look at Sam Altman's career: he's all about startups and building companies. Moreover, I can't imagine he would have agreed to sign any kind of contract with OpenAI that required exclusivity. Know who you're hiring; know why you're hiring them. His "side-projects" could have been hugely beneficial to them over the long term.

itsoktocry9 months ago

>His "side-projects" could have been hugely beneficial to them over the long term.

How can you make a claim like this when, right or wrong, Sam's independence is literally, currently, tanking the company? How could allowing Sam to do what he wants benefit OpenAI, the non-profit entity?

brookst9 months ago

> How could allowing Sam to do what he wants benefit OpenAI, the non-profit entity?

Let's take personalities out of it and see if it makes more sense:

How could a new supply of highly optimized, lower-cost AI hardware benefit OpenAI?

bartread9 months ago

> Sam's independence is literally, currently, tanking the company?

Honestly, I think they did that to themselves.

golergka9 months ago

> Sam's independence is literally, currently, tanking the company?

Before the board's actions this Friday, the company was on one of the most incredible success trajectories in the world. Whatever Sam had been doing as CEO worked.

davesque9 months ago

Calling it a delusion seems too provocative. Another way to say it is that principles take agreement and trust to follow. The board seems to have been so enamored with its principles that it completely lost sight of the trust required to uphold them.

hooande9 months ago

This is one of the most insightful comments I've seen on this whole situation.

tedivm9 months ago

This was handled so very, very poorly. Frankly it's looking like Microsoft is going to come out of this better than anyone, especially if they end up getting almost 500 new AI staff out of it (staff that already function well as a team).

> In their letter, the OpenAI staff threaten to join Altman at Microsoft. “Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join," they write.

spinningslate9 months ago

> Microsoft is going to come out of this better than anyone

Exactly. I'm curious about how much of this was planned vs emergent. I doubt it was all planned: it would take an extraordinary mind to foresee all the possible twists.

Equally, it's not entirely unpredictable. MS is the easiest to read: their moves to date have been really clear in wanting to be the primary commercial beneficiary of OAI's work.

OAI itself is less transparent from the outside. There's a tension between the "humanity first" mantra that drove its inception and the increasingly "commercial exploitation first" line that Altman was evidently driving.

As things stand, the outcome is pretty clear: if the choice was between humanity and commercial gain, the latter appears to have won.

jerf9 months ago

"I doubt it was all planned: it would take an extraordinary mind to foresee all the possible twists."

From our outsider, uninformed perspective, yes. But if you know more sometimes these things become completely plannable.

I'm not saying this is the actual explanation because it probably isn't. But suppose OpenAI was facing bankruptcy, but they weren't telling anyone and nobody external knew. This allows more complicated planning for various contingencies by the people that know because they know they can exclude a lot of possibilities from their planning, meaning it's a simpler situation for them than meets the (external) eye.

Perhaps ironically, the more complicated these gyrations become, the more convinced I become there's probably a simple explanation. But it's one that is being hidden, and people don't generally hide things for no reason. I don't know what it is. I don't even know what category of thing it is. I haven't even been closely following the HN coverage, honestly. But it's probably unflattering to somebody.

(Included in that relatively simple explanation would be some sort of coup attempt that has subsequently failed. Those things happen. I'm not saying whatever plan is being enacted is going off without a hitch. I'm just saying there may well be an internal explanation that is still much simpler than the external gyrations would suggest.)

sharemywin9 months ago

"it would take an extraordinary mind to foresee all the possible twists."

How far along were they on GPT-5?

playingalong9 months ago

> it would take an extraordinary mind

They could've asked ChatGPT for hints.

paulpan9 months ago

In hindsight, firing Sam was a self-destructive gamble by the OpenAI board. Initially it seemed Sam might have committed some inexcusable financial crime, but it doesn't look that way anymore.

Irony is that if a significant portion of OpenAI staff opt to join Microsoft, then Microsoft essentially killed their own $13B investment in OpenAI earlier this year. Better than acquiring for $80B+ I suppose.

jasode9 months ago

>, then Microsoft essentially killed their own $13B investment in OpenAI earlier this year.

For investment deals of that magnitude, Microsoft probably did not literally wire all $13 billion to OpenAI's bank account the day the deal was announced.

More likely, the headline-grabbing $10B-to-$13B number is a total estimated figure representing a sum of future incremental investments (and Azure usage credits, etc.) tied to agreed performance milestones from OpenAI.

So, if OpenAI doesn't achieve certain milestones (which can be more difficult if a bunch of their employees defect and follow Sam & Greg out the door) ... then Microsoft doesn't really "lose $10b".

htrp9 months ago

Msft/Amazon/Google would light 13 billion on fire to acquire OpenAI in a heartbeat.

(but also a good chunk of the 13bn was pre-committed Azure compute credits, which kind of flow back to the company anyway).

technofiend9 months ago

There's acquihires and then I guess there's acquifishing where you just gut the company you're after like a fish and hire away everyone without bothering to buy the company. There's probably a better portmanteau. I seriously doubt Microsoft is going to make people whole by granting equivalent RSUs, so you have to wonder what else is going on that so many seem ready to just up and leave some very large potential paydays.

WiseWeasel9 months ago

I feel like that's giving them too much credit; this is more of a flukuisition. Being in the right place at the right time when your acquisition target implodes.

Kye9 months ago

How about: acquimire

gryn9 months ago

one thing for sure this is one hell of a quagmire /s

dhruvdh9 months ago

They acquired Activision for 69B recently.

While Activision makes much more money I imagine, acquiring a whole division of productive, _loyal_ staffers that work well together on something as important as AI is cheap for 13B.

Some background: https://sl.bing.net/dEMu3xBWZDE

janejeon9 months ago

If the change in $MSFT pre-open market cap (which has given up its gains at the time of writing, but still) of hundreds of billions of dollars is anything to go by, shareholders probably see this as spending a dime to get a dollar.

unoti9 months ago

Awesome point. Microsoft's market cap today went up to 2.8 trillion, up 44.68 billion today.

bananapub9 months ago

> In hindsight firing Sam was a self-destructive gamble by the OpenAI board

surely the really self-destructive gamble was hiring him? He's a venture capitalist with weird beliefs about AI and privacy; why would it be a good idea to put him in charge of a notional non-profit that was trying to safely advance the state of the art in artificial intelligence?

trinsic29 months ago

> Frankly it's looking like Microsoft is going to come out of this better than anyone

Sounds like that's what someone wants and is trying to obfuscate what's going on behind the scenes.

If Windows 11 shows us anything about Microsoft's monopolistic behavior, having them be the ring of power for LLMs makes the future of humanity look very bleak.

boringg9 months ago

I think the board needs to come clean on why they fired Sam Altman if they are going to weather this storm.

jjfoooo49 months ago

Altman is already gone; if they fired him without a good reason, they are already toast.

Kye9 months ago

They might not be able to if the legal department is involved. Both in the case of maybe-pending legal issues, and because even rich people get employment protections that make companies wary about giving reasons.

roflyear9 months ago

"Even rich people?" - especially rich people, as they are the ones who can afford to use laws to protect themselves.

tannhaeuser9 months ago

> it's looking like Microsoft is going to come out of this better than anyone

Didn't follow this closely, but isn't that implicitly what an ex-CEO could have possibly been accused of, i.e. not acting in the company's best interest but someone else's? Not unprecedented either, e.g. the case of Nokia/Elop.

mongol9 months ago

But is the door open to every one of the 500 staff? That is a lot, and Microsoft may not need them all.

ulfw9 months ago

That's because they're the only adult in the room and mature company with mature management. Boring, I know. But sometimes experience actually pays off.

BryantD9 months ago

“Employees” probably means “engineers” in this case. Which is a vast majority of OpenAI staff, I’m sure.

tedivm9 months ago

I'm assuming it's a combination of researchers, data scientists, mlops engineers, and developers. There are a lot of different areas of expertise that come into building these models.

JumpCrisscross9 months ago

We’re seeing our generation’s “traitorous eight” story play out [1]. If this creates a sea of AI start-ups, competing and exploring different approaches, it could be invigorating on many levels.

[1] https://www.pbs.org/transistor/background1/corgs/fairchild.h...

ethbr19 months ago

How would that work, economically?

Wasn't a key enabler of early transistor work that the required capital investment was modest?

SotA AI research seems to be well past that point.

JumpCrisscross9 months ago

> Wasn't a key enabler of early transistor work that the required capital investment was modest?

They were simple in principle but expensive at scale. Sounds like LLMs.

ethbr19 months ago

Is there SotA LLM research not at scale?

My understanding was that practical results were indicating your model has to be pretty large before you start getting "magic."

tedivm9 months ago

It really depends on what you're researching. Rad AI started with only a $4M investment and used that to make cutting-edge LLMs that are now in use by something like half the radiologists in the US. Frankly, putting some cost pressure on researchers may end up creating more efficient models and techniques.

throwaway_459 months ago

NN/AI concepts have been around for a while; it's just that computers hadn't been fast enough to make them practical. It was also harder to get capital back then. Those guys put the silicon in Silicon Valley.

kossTKR9 months ago

Doesn't it look like the complete opposite is going to happen though?

Microsoft gobbles up all talent from OpenAI as they just gave everyone a position.

So we went from "Faux NGO" to, "For profit", to "100% Closed".

JumpCrisscross9 months ago

> Doesn't it look like the complete opposite is going to happen though?

Going from OpenAI to Microsoft means ceding the upside: nobody besides maybe Altman will make fuck-you money there.

I’m also not as sure as some in Silicon Valley that this is antitrust-proof. So moving to Microsoft not only means less upside, but also fun in depositions for a few years.

j-a-a-p9 months ago

Ha! One of my all-time favourites, the fuck-you position. The Gambler, the uncle giving advice:

You get up two and a half million dollars, any asshole in the world knows what to do: you get a house with a 25 year roof, an indestructible Jap-economy shitbox, you put the rest into the system at three to five percent to pay your taxes and that's your base, get me? That's your fortress of fucking solitude. That puts you, for the rest of your life, at a level of fuck you.

https://www.imdb.com/title/tt2039393/characters/nm0000422

jonhohle9 months ago

I haven’t seen the movie, but it seems like Uncle Frank and I would get along just fine.

DebtDeflation9 months ago

No. OpenAI employees do not have traditional equity in the form of RSUs or Options. They have a weird profit-sharing arrangement in a company whose board is apparently not interested in making profits.

semiquaver9 months ago

Employee equity (and all investments) are capped at 100x, which is still potentially a hefty payday. The whole point of the structure was to enable competitive employee comp.

toomuchtodo9 months ago

Fuck you money was always a lottery ticket based on OpenAI's governance structure and "promises of potential future profit." That lottery ticket no longer exists, and no one else is going to provide it after seeing how the board treated their relationship with Microsoft and that $10B investment. This is a fine lifeboat for anyone who wants to continue on the path they were on with adults at the helm.

What might have been tens or hundreds of millions in common stakeholder equity gains will likely be single digit millions, but at least much more likely to materialize (as Microsoft RSUs).

jurgenaut239 months ago

If I weren't so averse to conspiracy theories, I would think that this is all a big "coup" by Microsoft: Ilya conspired with Microsoft and Altman to get him fired by the board, just to make it easy for Microsoft to hire him back without fear of retaliation, along with all the engineers who would join him in the process.

Then, Ilya would apologize publicly for "making a huge mistake" and, after some period, would join Microsoft as well, effectively robbing OpenAI of everything of value. The motive? Unlocking the full financial potential of ChatGPT, which was until then locked down by the non-profit nature of its owner.

Of course, in this context, the $10 billion deal between Microsoft and OpenAI is part of the scheme, especially the part where Microsoft has full rights over ChatGPT IP, so that they can just fork the whole codebase and take it from there, leaving OpenAI in the dust.

But no, that's not possible.

dougmwne9 months ago

No, I don’t think there’s any grand conspiracy, but certainly MS was interested in leapfrogging Google by capturing the value from OpenAI from day one. As things began to fall apart there MS had vast amounts of money to throw at people to bring them into alignment. The idea of a buyout was probably on the table from day one, but not possible till now.

If there’s a warning, it’s to be very careful when choosing your partners and giving them enormous leverage on you.

campbel9 months ago

Sometimes you win and sometimes you learn. I think in this case MS is winning.

colordrops9 months ago

Conspiracy theories that involve reptilian overlords and ancient aliens are suspect. Conspiracy theories that involve collusion to make massive amounts of money are expected and should be treated as the most likely scenario. Occam's razor does not apply to human behavior, as humans will do the most twisted things to gain power and wealth.

My theory of what happened is identical to yours, and is frankly one of the only theories that makes any sense. Everything else points to these people being mentally ill and irrational, and their success technically and monetarily does not point to that. It would be absurd to think they clown-showed themselves into billions of dollars.

jowea9 months ago

Why would they be afraid of retaliation? They didn't sign sports contracts, they can just resign anytime, no? That just seems to overcomplicate things.

zoogeny9 months ago

I mean, I don't actually believe this. But I am reminded of 2016 when the Turkish president headed off a "coup" and cemented his power.

More likely, this is a case of not letting a good crisis go to waste. I feel the board was probably watching their control over OpenAI slip away into the hands of Altman. They probably recognized that they had a shrinking window to refocus the company along lines they felt were in the spirit of the original non-profit charter.

However, it seems that they completely misjudged the feelings of their employees as well as the PR ability of Altman. No matter how many employees actually would prefer the original charter, social pressure is going to cause most employees to go with the crowd. The media is literally counting names at this point. People will notice those who don't sign, almost like a loyalty pledge.

However, Ilya's role in all of this remains a mystery. Why did he vote to oust Altman and Brockman? Why has he now recanted? That is a bigger mystery to me than why the board took this action in the first place.

Schroedingers2c9 months ago

Will revisit this in a couple months.

paulddraper9 months ago

Yeah, there's no way this is a plan, but for sure this works out nicely.

sesutton9 months ago

Ilya posted this on Twitter:

"I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."

https://twitter.com/ilyasut/status/1726590052392956028

abraxas9 months ago

Trying to put the toothpaste back in the tube. I seriously doubt this will work out for him. He has to be the smartest stupid person that the world has seen.

bertil9 months ago

Ilya is hard to replace, and no one thinks of him as a political animal. He's a researcher first and foremost. I don't think he needs anything more than being contrite for a single decision made during a heated meeting. Sam Altman and the rest of the leadership team haven't got where they are by holding petty grudges.

He doesn't owe us, the public, anything, but I would love to understand his point of view during the whole thing. I really appreciate how he is careful with words and thorough when exposing his reasoning.

boringg9 months ago

Just because he's not a political animal, it doesn't mean he's immune to politics. I've seen 'irreplaceable' apolitical technical leaders become the reason for schisms in organizations, thinking they could lever their technical knowledge over the rest of the company, only to get pushed aside and out.

jacquesm9 months ago

For someone who isn't a political animal he made some pretty powerful political moves.

gryn9 months ago

researchers and academics are political within their organizations regardless of whether or not they claim to be or are aware of it.

ignorance of one's political impact/influence is not a strength but a weakness, just like a baby holding a laser/gun.

guhcampos9 months ago

I've worked with this type multiple times. Mathematical geniuses with very little grasp of reality, easily manipulated into making all sorts of dumb mistakes. I don't know if that's the case here, but it certainly smells like it.

strunz9 months ago

His post previous to that seems pretty ironic in that light - https://twitter.com/ilyasut/status/1710462485411561808

strikelaserclaw9 months ago

He seriously underestimated how much rank-and-file employees want $$$ over an idealistic vision (and Sam Altman is $$$), but if he backs down now, he will pretty much lose all credibility as a decision-maker for the company.

ergocoder9 months ago

If your compensation goes from 600k to 200k, you would care as well.

No idealistic vision can compensate for that.

strikelaserclaw9 months ago

Hey, I would also be mad if I were in the rank-and-file employee position. Perhaps the non-profit thing needs to be thought out a bit more.

derwiki9 months ago

Does that include the person who stole self-driving IP from Waymo, set up a company with stolen IP, and tried to sell the company to Uber?

dhruvdh9 months ago

At least he consistently works towards whatever he currently believes in. Though he could work on consistency in beliefs.

dylan6049 months ago

That seems rather harsh. We know he's not stupid, and you're clearly being emotional. I'd venture he probably made the dumbest possible move a smart person could make, while in a very emotional state. The lesson on the table for everyone is that making big decisions in an emotional state does not often work out well.

nabla99 months ago

So this was a completely unnecessary cock-up, and it's still ongoing. Without Ilya's vote this would not even be a thing. This is really comical, a Naked Gun type of mess.

Ilya Sutskever is one of the best in AI research, but everything he and others do related to AI alignment turns into shit without substance.

It makes me wonder if AI alignment is possible even in theory, and if it is, maybe it's a bad idea.

coffeebeqn9 months ago

We can’t even get people aligned. Thinking we can control a super intelligence seems kind of silly.

colinsane9 months ago

i always thought it was the opposite. the different entities in a society are frequently misaligned, yet societies regularly persist beyond the span of any single person.

companies in a capitalist system are explicitly misaligned with each other; success of the individual within a company is misaligned with the success of the company whenever it grows large enough. parties within an electoral system are misaligned with each other; the individual is often more aligned with a third party, yet the lesser-aligned two-party system frequently rules. the three pillars of democratic government (executive, legislative, judicial) are said to exist for the sake of being misaligned with each other.

so AI agents, potentially more powerful than the individual human, might be misaligned with the broader interests of the society (or of its human individuals). so are you and i and every other entity: why is this instance of misalignment worrisome to any disproportionate degree?

z79 months ago

>"I deeply regret my participation in the board's actions."

Wasn't he supposed to be the instigator? That makes it sound like he was playing a less active role than claimed.

siva79 months ago

It takes a lot of courage to do so after all this.

ShamelessC9 months ago

I think the word you're looking for is "fear".

averageRoyalty9 months ago

Maybe he'll head to Apple.

Xenoamorphous9 months ago

Or a couple of drinks.

tucnak9 months ago

To be fair, lots of people called this pretty early on; it's just that very few people were paying attention, and instead chose to accommodate the spin and immediately went into "following the money," a.k.a. blaming Microsoft, et al. The most surprising aspect of it all is the complete lack of criticism towards US authorities! We were shown this exciting play, as old as the world itself: a genius scientist being exploited politically by means of pride and envy.

The brave board of "totally independent" NGO patriots (one of whom is referred to, by insiders, as wielding influence comparable to a USAF colonel [1]) brand themselves as this new regime that will return OpenAI to its former moral and ethical glory, so the first thing they were forced to do was get rid of the main greedy capitalist, Altman; he's obviously the great seducer who brought their blameless organisation down by turning it into this horrible money-making machine. So they were going to put in his place their nominal ideological leader, Sutskever, commonly referred to in various public communications as a "true believer". What does he believe in? In the coming of a literal superpower, and quite a particular one at that; in this case we are talking about AGI. The belief structure here is remarkably interlinked, as can be seen by evaluating side-channel discourse from adjacent "believers"; see [2].

Roughly speaking, and based on my experience in this kind of analysis (please give me some leeway as English is not my native language), what I see is all the infallible markers of operative work; we see security officers, we see their methods of work. If you are a hammer, everything around you looks like a nail. If you are an officer in the Clandestine Service, or any of the dozens of sections across the counterintelligence function overseeing the IT sector, then you clearly understand that all these AI startups are, in fact, developing weapons and pose a direct threat to the strategic interests slash national security of the United States. The American security apparatus has a word for such elements: "terrorist." I was taught to look up when assessing actions of the Americans, i.e. more often than not we expect nothing but the highest level of professionalism, leadership, and analytical prowess. I personally struggle to see how running parasitic virtual organisations in the middle of downtown San Francisco, and re-shuffling agent networks in key AI enterprises as blatantly as we saw over the weekend, is supposed to inspire confidence. Thus, in a tech startup in the middle of San Francisco, where it would seem there shouldn't be any terrorists, or otherwise ideologues in orange rags, they sit on boards and stage palace coups. Horrible!

I believe that US state-side counterintelligence shouldn't meddle in natural business processes in the US, and should instead make its policy on this stuff crystal clear using normal, legal means. Let's put a stop to this soldier mindset where you fear anything you can't understand. AI is not a weapon, and AI startups are not terrorist cells for them to run.

[1]: https://news.ycombinator.com/item?id=38330819

[2]: https://nitter.net/jeremyphoward/status/1725712220955586899

kashyapc9 months ago

Silicon Valley outsider here. Am I being harsh here?

I just bothered to look at the full OpenAI board composition. Besides Ilya Sutskever and Greg Brockman, why are these people eligible to be on the OpenAI board? Such young people, calling themselves "President of this", "Director of that".

- Adam D'Angelo — Quora CEO (no clue what he's doing on OpenAI board)

- Tasha McCauley — a "management scientist" (this is a new term for me); whatever that means

- Helen Toner — I don't know what exactly she does, again, "something-something Director of strategy" at Georgetown University, for such a young person

No wise veterans here to temper the adrenaline?

Edit: the term clusterf*** comes to mind here.

alephnerd9 months ago

Adam D'Angelo was brought in as a friend because Sam Altman led Quora's Series D around the time OpenAI was founded, and he is a board member of Dustin Moskovitz's Asana.

Dustin Moskovitz isn't on the board but gave OpenAI the $30M in funding via his non-profit Open Philanthropy [0]

Tasha McCauley was probably brought in due to the Singularity University/Kurzweil types who were at OpenAI in the beginning. She was also in the Open Philanthropy space.

Helen Toner was probably brought in due to her past work at Open Philanthropy - a Dustin Moskovitz funded non-profit working on OpenAI-type initiatives - and was also close to Sam Altman. They also gave OpenAI the initial $30M [0]

Essentially, this is a donor-versus-investor battle. The donors aren't gonna make money off OpenAI's commercial endeavors, which began in 2019.

It's similar to Elon Musk's annoyance at OpenAI going commercial even though he donated millions.

[0] - https://www.openphilanthropy.org/grants/openai-general-suppo...

kashyapc9 months ago

Thank you for the context; much appreciate it. In short, it's all "I know a guy who knows a guy".

churchill9 months ago

Exactly this. I saw another commenter raise this point about Tasha (and Helen, if I remember correctly), noting that her LinkedIn profile is filled with SV-related jargon and indulge-the-wife thinktanks, but without any real experience taking products to market or scaling up technology companies.

Given the pool of talent they could have chosen from, their board makeup looks extremely poor.

mdekkers9 months ago

> indulge-the-wife thinktanks

Regardless of context, this is an incredibly demeaning comment. Shame on you

averageRoyalty9 months ago

It doesn't have to be taken that way. It's a pretty accurate description.

jdthedisciple9 months ago

Truth hurts sometimes, eh?

white_dragon889 months ago

[dead]

taylorlapeyre9 months ago

Helen Toner funded OpenAI with $30M, which was enough to get a board seat at the time.

mizzao9 months ago

Source? Where did that money come from?

alephnerd9 months ago

From Open Philanthropy - a Dustin Moskovitz funded non-profit working on building OpenAI type initiatives. They also gave OpenAI the initial $30M. She was their observer.

https://www.openphilanthropy.org/grants/openai-general-suppo...

Aurornis9 months ago

The board previously had people like Elon Musk and Reid Hoffman. Greg Brockman was part of the board until he was ousted as well.

The attrition of industry business leaders, the ouster of Greg Brockman, and the (temporary, apparently) flipping of Ilya combined to give the short list of remaining board members outsized influence. They took this opportunity to drop a nuclear bomb on the company's leadership, which so far has backfired spectacularly. Even their first interim CEO had to be replaced already.

ur-whale9 months ago

This is the Silicon Valley boys' club, itself an extension of the Stanford boys' club.

"Meritocracy" is a very impolite word in these circles.

CPLX9 months ago

You can like D'Angelo or not but he was the CTO of Facebook.

SeanAnderson9 months ago

I woke up and the first thing on my mind was, "Any update on the drama?"

Did not expect to see this whole thing still escalating! WOW! What a power move by MSFT.

I'm not even sure OpenAI will exist by the end of the week at this rate. Holy moly.

alvis9 months ago

By the end of the week is over-optimistic. The last 3 days have felt like a million years. I bet the company will be gone by the time Emmett Shear wakes up.

jacknews9 months ago

Is this the final stage of the singularity?

jacquesm9 months ago

It's not over until the last stone involved in the avalanche stops moving and it is anybody's guess right now what the final configuration will be.

But don't be surprised if Shear also walks before the week is out, if some board members resign but others try to hold on and if half of OpenAI's staff ends up at Microsoft.

HarHarVeryFunny9 months ago

Seems more damage control than power move. I'm sure their first choice was to reinstate Altman and get more control over OpenAI governance. What they've achieved here is temporarily neutralizing Altman/Brockman from starting a competitor, at the cost of potentially destroying OpenAI (who they remain dependent on for next couple of years) if too many people quit.

Seems a bit of a lose-lose for MSFT and OpenAI, even if best that MSFT could do to contain the situation. Competitors must be happy.

SeanAnderson9 months ago

Disagree. MSFT extending an open invitation to all OpenAI employees to work under sama at a subsidiary of MSFT sounds to me like it'll work well for them. They'll get 80% of OpenAI for negative money - assuming they ultimately don't need to pay out the full $10B in cloud compute credits.

Competitors should be fearful. OpenAI was executing with weights around their ankles by virtue of trying to run as a weird "need lots of money but cant make a profit" company. Now they'll be fully bankrolled by one of the largest companies the world has ever seen and empowered by a whole bunch of hypermotivated-through-retribution leaders.

HarHarVeryFunny9 months ago

AFAIK MSFT/Altman can't just fork GPT-N and continue uninterrupted. All MSFT has rights to is weights and source code - not the critical (and slow to recreate) human-created and curated training data, or any of the development software infrastructure that OpenAI has built.

The leaders may be motivated by retribution, but I'm sure none of the leaders or researchers really want to be a division of MSFT rather than a cool start-up. Many developers may choose to stay in SF and create their own startups, or join others. Signing the letter isn't a commitment to go to MSFT - just a way to pressure for a return to the status quo they were happy with.

Not everyone is going to stay with OpenAI or move to MSFT - some developers will move elsewhere and the knowledge of OpenAI's secret sauce will spread.

RivieraKid9 months ago

I'm cancelling my Netflix subscription, I don't need it.

crazygringo9 months ago

But boy will I renew it when this gets dramatized as a limited series.

This is some Succession-level shenanigans going on here.

Jesse Eisenberg to play Altman this time around?

iandanforth9 months ago

I'm thinking more like "24"

leroy_masochist9 months ago

Can we have a quick moment of silence for Matt Levine? Between Friday afternoon and right now, he has probably had to rewrite today's Money Stuff column at least 5 or 6 times.

defaultcompany9 months ago

"Except that there is a post-credits scene in this sci-fi movie where Altman shows up for his first day of work at Microsoft with a box of his personal effects, and the box starts glowing and chuckles ominously. And in the sequel, six months later, he builds Microsoft God in Box, we are all enslaved by robots, the nonprofit board is like “we told you so,” and the godlike AI is like “ahahaha you fools, you trusted in the formalities of corporate governance, I outwitted you easily!” If your main worry is that Sam Altman is going to build a rogue AI unless he is checked by a nonprofit board, this weekend’s events did not improve matters!"

Reading Matt Levine is such a joy.

hotsauceror9 months ago

Didn't he say that he was taking Friday off, last week? The day before his bete noire Elon Musk got into another brouhaha and OpenAI blew up?

I think he said once that there's an ETF that trades on when he takes vacations, because they keep coinciding with Events Of Note.

jagraff9 months ago

He takes every Friday off

soderfoo9 months ago

Deservedly or not, Satya Nadella will look like a genius in the aftermath. He has and will continue to leverage this situation to strengthen MSFT's position. Is there word of any other competitors attempting to capitalize here? Trying to poach talent? Anything...

godzillabrennus9 months ago

After Ballmer I couldn’t have imagined such competency from Microsoft.

jq-r9 months ago

After Ballmer, competency can only be higher at Microsoft.

alephnerd9 months ago

Ballmer honestly wasn't that bad. He gave executive backing to Azure and the larger Infra push in general at MSFT.

Search and Business Tools were misses, but they more than made up for it with Cloud, Infra, and Security.

Also, Nadella was Ballmer's pick.

whoisthemachine9 months ago

+1

julienfr1129 months ago

+1

physicles9 months ago

Also, Nadella last month said he regretted his own decision to cancel Windows Phone. Purchasing Nokia was one of the last things Ballmer did.

mjirv9 months ago

The key line:

“Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join.”

sebzim45009 months ago

I think everyone assumed this was an acquihire without the "acqui-", but this is the first time I've seen it explicitly stated.

catchnear43219 months ago

hostile takeunder?

epups9 months ago

Love it. Could also be called a hostile giveover, considering the OpenAI board gifted this opportunity to Microsoft

jacquesm9 months ago

That's perfect.

jonbell9 months ago

You win

nextworddev9 months ago

will they stay though? what happens to their OAI options?

teeray9 months ago

Will their OAI options be worth anything if the implosion continues?

nextworddev9 months ago

+1

baby_souffle9 months ago

What will happen to their newly granted MSFT shares? Those can be sold _today_ and might be worth a lot more soon…

almost_usual9 months ago

MSFT RSUs actually have value as opposed to OpenAI’s Profit Participation Units (PPU).

https://www.levels.fyi/blog/openai-compensation.html

https://images.openai.com/blob/142770fb-3df2-45d9-9ee3-7aa06...

nottheengineer9 months ago

Sounds a lot like MS wants to have OpenAI, but without a board that considers pesky things like morals.

Fluorescence9 months ago

Time for a counter-counter-coup that ends up with Microsoft under the Linux Foundation after RMS reveals he is Satoshi...

tmerse9 months ago

You mean the GNU Linux Foundation?

Justsignedup9 months ago

RMS (I assume Richard Stallman) may be many many many things, but setting up a global pyramid scheme doesn't seem to be his M.O.

But stranger things have happened. One day I may be very very VERY surprised.

fsflover9 months ago

+1

ric2b9 months ago

The year of the Linux Microsoft.

code_runner9 months ago

again, nobody has shown even a glimmer of evidence that the board was operating with morality as their focus. we just don't know. we do know that a vast majority of the company doesn't trust the board though.

xiphias29 months ago

Sam just gave 3 hearts to Ilya as well... I hope the drama continues and he joins MS at this point.

jdthedisciple9 months ago

Whose morals again?

bertil9 months ago

That is a spectacular power move: extending 700 job offers, many of which would be close to $1 million per year compensation.

layer89 months ago

They didn’t say anything about the compensation.

rvz9 months ago

So essentially, OpenAI is a sinking ship as long as the board members go ahead with their new CEO and Sam, Greg are not returning.

Microsoft can absorb all the employees and move them into the new AI subsidiary, which is basically an acqui-hire without buying out everyone else's shares - creating a new DeepMind/OpenAI-style research division inside the company.

So all along it was a long-winded side-step into having a new AI division without all the regulatory headaches of a formal acquisition.

JumpCrisscross9 months ago

> OpenAI is a sinking ship as long as the board members go ahead with their new CEO and Sam, Greg are not returning

Far from certain. One, they still control a lot of money and cloud credits. Two, they can credibly threaten to license to a competitor or even open source everything, thereby destroying the unique value of the work.

> without all the regulatory headaches of a formal acquisition

This, too, is far from certain.

s1artibartfast9 months ago

>Far from certain. One, they still control a lot of money and cloud credits.

This too is far from certain. The funding and credits were at best tied to milestones, and at worst the investment contract is already broken and MSFT can walk.

I suspect they would not actually do the latter, as the IP is tied to the continuing partnership.

jacquesm9 months ago

And sue for the assets of OpenAI on account of the damage the board did to their stock... and end up with all of the IP.

lotsofpulp9 months ago

On what basis would one entity be held responsible for another entity’s stock price, without evidence of fraud? Especially a non profit.

jlokier9 months ago

The value of OpenAI's own assets in the for-profit subsidiary, may drop in value due to recent events.

Microsoft is a substantial shareholder (49%) in that for-profit subsidiary, so the value of Microsoft's asset has presumably reduced due to OpenAI's board decisions.

OpenAI's board decisions which resulted in these events appear to have been improperly conducted: two of the board's members weren't aware of its deliberations or the outcome until the last minute, notably the chair of the board. A board's decisions have legal weight because they are collective. It's allowed to patch them up afterwards if the board agrees, for people to take breaks, etc. But if some directors intentionally excluded other directors from such a major decision (and its formal deliberations), affecting the value and future of the company, that leaves the board's decision open to legal challenges.

Hypothetically Microsoft could sue and offer to settle. Then OpenAI might not have enough funds if it were to lose, so it might have to sell shares in the for-profit subsidiary, or transfer them. Microsoft only needs about 2% more to become majority shareholder of the for-profit subsidiary, which runs the ChatGPT services.

vaxman9 months ago

[delayed]

joshstrange9 months ago

If Microsoft emerges as the "winner" from all of this, then I think we are all the "losers". Not that I think OpenAI was perfect or "good", just that MS taking the cake is not good for the rest of us. It already feels crazy that people are just fine with them owning what they do and how important it is to our development ecosystem (things like GitHub/VSCode); I don't like the idea of them also owning the biggest AI initiative.

_vere9 months ago

I will never not be mad at the fact that they built a developer base by making all their tech open source, only to take it all away once it became remotely financially viable to do so. With how close "Open"AI is with Microsoft, it really does not seem like there is a functional difference in how they ethically approach AI at all.

wxw9 months ago

Ilya signed it??? He's on the board... This whole thing is such an implosion of ambition.

victoryhb9 months ago

Most people who sympathized with the board prior to this would have assumed that the presumed culprit, the legendary Ilya, had thought through everything and was ready to sacrifice anything for a cause he champions. It appears that is not the case.

xivzgrev9 months ago

I think he orchestrated the coup on principle, but severely underestimated the backlash and power that other people had collectively.

Now he’s trying to save his own skin. Sam will probably take him back on his own technical merits but definitely not in any position of power anymore

When you play the game of thrones, you win or you die

Just because you are a genius in one domain does not mean you are in another

What’s funny is that everyone initially “accepted” the firing. But no one liked it. Then a few people (like Greg) started voting with their feet, which empowered others, which has culminated in this tidal shift.

It will make a fascinating case study some day on how not to fire your CEO

falleng0d9 months ago

he even posted an apology: https://x.com/ilyasut/status/1726590052392956028?s=20

what the actual fuck =O

EVa5I7bHFq9mnYK9 months ago

I knew it was Joseph Gordon-Levitt's plot all along!

miyuru9 months ago

I don't know if you are joking or not, but one of the board members is Joseph Gordon-Levitt's wife.

ShamelessC9 months ago

(yes that was the joke)

FuriouslyAdrift9 months ago

I'm going to take a leap of intuition and say all roads lead back to Adam D'Angelo for the coup attempt.

Terretta9 months ago

> all roads lead back Adam d'Angelo

Maybe someone thinks Sam was “not consistently candid” about the fact that one of the feature bullets in the latest release dropped a competitor to D'Angelo's Poe directly into the ChatGPT app for no additional charge.

Given the Dev Day timing and the update releasing these "GPTs", this is an entirely plausible timeline.

https://techcrunch.com/2023/04/10/poes-ai-chatbot-app-now-le...

toomuchtodo9 months ago

They did not expect Microsoft to take everything and walk away, and did not realize how little pull they actually had.

If you made a comment recently about de jure vs de facto power, step forward and collect your prize.

hotnfresh9 months ago
serial_dev9 months ago

You come at the king, you best not miss. If you do, make sure to apologize on Twitter while you can.

jacquesm9 months ago

Naive is too soft a word. How can you be so smart and so out of touch at the same time?

rdsubhas9 months ago

IQ and EQ are different things. Some people are technically smart enough to know a trillion side effects of technical systems, but can be really bad/binary/shallow at understanding the second-order effects of human dynamics.

Ilya's role is Chief Scientist. It may be fair to give him at least some benefit of the doubt. He was vocal/direct/binary, and also vocally apologized and walked it back. In human dynamics, I'd usually look for the silent orchestrator behind the scenes that nobody talks about.

jacquesm9 months ago

I'm fine with all that in principle, but then you shouldn't be throwing your weight around in board meetings; probably you shouldn't be on the board to begin with, because it is a handicap in trying to evaluate the potential outcomes of the decisions the board has to make.

smolder9 months ago

I don't think this is necessarily about different categories of intelligence... Politicking and socializing are skills that require time and mental energy to build, and can even atrophy. If you spend all your time worrying about technical things, you won't have as much time to build or maintain those skills. It seems to me like IQ and EQ are more fundamental and immutable than that, but maybe I'm making a distinction where there isn't much of one.

gnaritas999 months ago

[dead]

smolder9 months ago

Specialized learning and focus often comes at the cost of generalized learning and focus. It's not zero sum, but there is competition between interests in any person's mind.

code_runner9 months ago

in my experience these things will typically go hand in hand. There is also an argument to be made that being smart at building ML models and being smart in literally anything else have nothing to do with each other.

tbalsam9 months ago

+1

ozgung9 months ago

Wow, lots of drama and plot twists for the writers of the Netflix mini-series.

sva_9 months ago

The great drama of our time (this week)

charlieyu19 months ago

I don't think I have seen a bigger U-turn

DebtDeflation9 months ago

I was looking down the list and then saw Ilya. Just when you think this whole ordeal can't get any more insane.

JumpCrisscross9 months ago

Yeah, what the hell?

Do we know why Murati was replaced?

sebzim45009 months ago

Apparently she tried to rehire Sam and Greg.

I don't think she actually had anything to do with the coup, she was only slightly less blindsided than everyone else.

JumpCrisscross9 months ago

To be fair, that is a stupid first move to make as the CEO who was just hired to replace the person deposed by the board. (Though I’m still confused about Ilya’s position.)

impulser_9 months ago

+1

blackoil9 months ago

If you know the company will implode and you'll be CEO of a shell, it is better to get the board to reverse course. It isn't like she was part of the decision-making process.

deeviant9 months ago

With nearly the entire team of engineers threatening to leave the company over the coup, was it a stupid move?

The board is going to be overseeing a company of 10 people the way things are going.

maxlamb9 months ago

But wouldn’t the coup have required 4 votes out of 6 which means she voted yes? If not then the coup was executed by just 3 board members? I’m confused.

StephenAshmore9 months ago

Mira isn't on the board, so she didn't have a vote in this.

crazygringo9 months ago

Generally speaking, 4 members is the minimum quorum for a board of 6, and 3 out of 4 is a majority decision.

I don't know if it was 3 or 4 in the end, but it may very well have been possible with just 3.

ketzo9 months ago

Murati is/was not a board member.

simonw9 months ago

I heard it was because she tried to hire Sam and Greg back.

kranke1559 months ago

So who's against it, and why?

I wonder if it will take 20 years to learn the whole story.

simonw9 months ago

The amount that's leaked out already - over a weekend - makes me think we'll know the full details of everything within a few days.

throwaway748529 months ago

The dude is a quack.

Bostonian9 months ago

I think the names listed are the recipients of the letter (the board), not the signers.

dxyms9 months ago

There are only 4 people on the board.

gadders9 months ago

I think it was Mark Zuckerberg that described (pre-Elon) Twitter as a clown car that fell into a gold mine.

Reminds me a bit of the Open AI board. Most of them I'd never heard of either.

anonylizard9 months ago

This makes the old twitter look like the Wehrmacht in comparison.

The old twitter did not decide to randomly detonate themselves when they were worth $80 billion. In fact they found a sucker to sell to, right before the market crashed on perpetually loss-making companies like twitter.

ergocoder9 months ago

The benefit of having an incentive-aligned board, founders, and execs.

Even the clown car isn't this bad.

Kye9 months ago

That's a confused heuristic. It could just as easily mean they keep their heads down and do good work for the kind of people whose attention actually matters for their future employment prospects.

hawski9 months ago

I often hear that about the OpenAI board, but in general do people here know most board members of the big/darling tech companies? Outside of some of the co-founders, I don't know any.

gadders9 months ago

I don't mean I know them personally, but they don't seem to be major names in the manner of (as you see down thread) the Google Founders bringing in Eric Schmidt.

They seem more like the sort of people you'd see running wikimedia.

hawski9 months ago

I meant "know" in the sense you used "heard".

renegade-otter9 months ago

Perhaps we can stop pretending that some of these people who are top-level managers or who sit on boards are prodigies. Dig deeper and there is very little there - just someone who can afford to fail until they drive the clown car into that gold mine. Most of us who have to put food on the table and pay rent have much less room for error.

cmrdporcupine9 months ago

You know, this makes early Google's moves around its IPO look like genius in retrospect. In that case, brilliant but inexperienced founders majorly lucked out with the thing they created... but were also smart enough to bring in Eric Schmidt and others with deeper tech industry business experience for "adult supervision", exactly in order to deal with this kind of thing. And they gave tutelage to L&S to help them establish sane corporate practices while still sticking to the original (at the time unorthodox) values that L&S had in mind.

For OpenAI... Altman (and formerly Musk) were not that adult supervision. Nor is the board they ended up with. They needed some people on that board and in the company to keep things sane while cherishing the (supposed) original vision.

(Now, of course that original Google vision is just laughable as Sundar and Ruth have completely eviscerated what was left of it, but whatever)

taylorius9 months ago

> but were also smart enough to bring in Eric Schmidt and others with deeper tech industry business experience for "adult supervision"

> (Now, of course that original Google vision is just laughable as Sundar and Ruth have completely eviscerated what was left of it, but whatever)

Those two things happening one after another is not coincidence.

cmrdporcupine9 months ago

I'm not sure I agree. Having worked there through this transition, I'd say this: L&S just seem to have lost interest in running a mature company, so their "vision" meant nothing; Eric Schmidt basically moved on; and then, after flailing about for a bit (the G+ stuff being the worst of it), they just handed the reins to Ruth & Sundar to basically turn it into a giant stock-price-pumping machine.

voiceblue9 months ago

G+ was handled so poorly, and the worst of it was that they already had both Google Wave (in the US) and Orkut (mostly outside US) which both had significant traction and could’ve easily been massaged into something to rival Facebook.

Easily…anywhere except at a megacorp where a privacy review takes months and you can expect to make about a quarter worth of progress a year.

theGnuMe9 months ago

All successful companies succeed despite themselves.

garciasn9 months ago

Working in consultancies/agencies for the last 15 years, I see this time and time again. Fucking dart-throwing monkeys making money hand over fist despite their best intentions to lose it all.

Emma_Goldman9 months ago

I don't really understand why the workforce is swinging unambiguously behind Altman. The core of the narrative thus far is that the board fired Altman on the grounds that he was prioritising commercialisation over the not-for-profit mission of OpenAI written into the organisation's charter.[1] Given that Sam has since joined Microsoft, that seems plausible, on its face.

The board may have been incompetent and shortsighted. Perhaps they should even try and bring Altman back, and reform themselves out of existence. But why would the vast majority of the workforce back an open letter failing to signal where they stand on the crucial issue - on the purpose of OpenAI and their collective work? Given the stakes which the AI community likes to claim are at issue in the development of AGI, that strikes me as strange and concerning.

[1] https://openai.com/charter

FartyMcFarter9 months ago

> I don't really understand why the workforce is swinging unambiguously behind Altman.

Maybe it has to do with them wanting to get rich by selling their shares - my understanding is there was an ongoing process to get that happening [1].

If Altman is out of the picture, it looks like Microsoft will assimilate a lot of OpenAI into a separate organisation and OpenAI's shares might become worthless.

[1] https://www.financemagnates.com/fintech/openai-in-talks-to-s...

anon848736289 months ago

Yeah, "OpenAI employees would actually prefer to make lots of money now" seems like a plausible answer by default.

It's easy to be a true believer in the mission _before_ all the money is on the table...

fizx9 months ago

My estimate is that a typical staff engineer who'd been at OpenAI for 2+ years could have sold $8 million of stock next month. I'd be pissed too.

ergocoder9 months ago

No way it is this much.

leetharris9 months ago

Yep.

What people don't realize is that Microsoft doesn't own the data or models that OpenAI has today. Yeah, they can poach all the talent, but it still takes an enormous amount of effort to create the dataset and train the models the way OpenAI has done it.

Recreating what OpenAI has done over at Microsoft will be nothing short of a herculean effort and I can't see it materializing the way people think it will.

Finbarr9 months ago

Except MSFT does have access to the IP, and MSFT has access to an enormous trove of their own data across their office suite, Bing, etc. It could be a running start rather than a cold start. A fork of OpenAI inside an unapologetic for profit entity, without the shackles of the weird board structure.

jdminhbg9 months ago

Microsoft has full access to code and weights as part of their deal.

ben_w9 months ago

Even if they don't, the OpenAI staff already know 99 ways to not make a good GPT model and can therefore skip those experiments much faster than anyone else.

htrp9 months ago

> Even if they don't, the OpenAI staff already know 99 ways to not make a good GPT model and can therefore skip those experiments much faster than anyone else.

This, unequivocally... knowing how not to waste a very expensive training run is a great lesson.

belter9 months ago

+1

returningfory29 months ago

This comment is factually incorrect. As part of the deal with OpenAI, Microsoft has access to all of the IP, model weights, etc.

baron8169 months ago

Correct. This is all really bad for Microsoft and probably great for Google. Yet, judging by price changes right now, markets don’t seem to understand this.

grumple9 months ago

But doesn't Altman joining Microsoft, and them quitting and following, put them back at square 0? MS isn't going to give them millions of dollars each to join them.

FartyMcFarter9 months ago

That's why they'd rather Altman rejoins OpenAI as mentioned.

kyle_grove9 months ago

The behavior of various actors in this saga indeed seems to indicate that those actors prefer 'Altman and OpenAI employees back at OpenAI' over 'Altman and OpenAI employees join Microsoft en masse'.

averageRoyalty9 months ago

Surely they're already extremely rich? I'd imagine working for a 700 person company leading the world in AI pays very well.

maxlamb9 months ago

Only rich in stocks. Salaries are high for sure but probably not enough to be rich by Bay Area standards

averageRoyalty9 months ago

Sure, but by pretty much any other standard? Over $170k USD puts you in the top 10% of income earners globally. If you work at this wage point for 3-5 years and then move somewhere (almost anywhere globally or in the US), you can afford a comfortable life and probably work 2-3 days a week for decades if you choose.

This is nothing but greed.

dclowd99019 months ago

Ugh, I’ve never been more disenchanted with a group of people in my life. Not only are they comfortable with writing millions of jobs out of existence, but they're also taking a fat paycheck to do it. At least with the “non-profit” mission keystone, we had some plausible deniability that greed rules all, but of fucking course it does.

All my hate to the employees and researchers of OpenAI, absolutely frothing at the mouth to destroy our civilization.

appel9 months ago

That sounds like a reasonable assessment, FartyMcFarter.

mcny9 months ago

> I don't really understanding why the workforce is swinging unambiguously behind Altman.

I have no inside information. I don't know anyone at Open AI. This is all purely speculation.

Now that that's out of the way, here is my guess: money.

These people never joined OpenAI to "advance sciences and arts" or to "change the world". They joined OpenAI to earn money. They think they can make more money with Sam Altman in charge.

Once again, this is completely all speculation. I have not spoken to anyone at Open AI or anyone at Microsoft or anyone at all really.

ta12439 months ago

> These people never joined OpenAI to "advance sciences and arts" or to "change the world". They joined OpenAI to earn money

Getting Cochrane vibes from Star Trek there.

> COCHRANE: You wanna know what my vision is? ...Dollar signs! Money! I didn't build this ship to usher in a new era for humanity. You think I wanna go to the stars? I don't even like to fly. I take trains. I built this ship so that I could retire to some tropical island filled with ...naked women. That's Zefram Cochrane. That's his vision. This other guy you keep talking about. This historical figure. I never met him. I can't imagine I ever will.

I wonder how history will view Sam Altman

imjonse9 months ago

There are non-negligible chances that history will be written by Sam Altman and his GPT minions, so he'll probably be viewed favorably.

jonahrd9 months ago

I'm not sure I fully buy this, only because how would anyone be absolutely certain that they'd make more with Sam Altman in charge? It feels like a weird thing to speculatively rally behind.

I'd imagine there's some internal political drama going on or something we're missing out on.

DeIlliad9 months ago

I fully buy it. Ethics and morals are a few rungs on the ladder beneath compensation for most software engineers. If the board wants to focus more on being a non-profit and safety, and Altman wants to focus more on commercialization and the economics of business, if my priority is money then where my loyalty goes is obvious.

lisper9 months ago

> how would anyone be absolutely certain that they'd make more with Sam Altman in charge?

Why do you think absolute certainty is required here? It seems to me that "more probable than not" is perfectly adequate to explain the data.

Emma_Goldman9 months ago

Really? If they work at OpenAI they are already among the highest lifetime earners on the planet. Favouring moving oneself from the top 0.5% of global lifetime earners to the top 0.1% (or whatever the percentile shift is) over the safe development of a potentially humanity-changing technology would be depraved.

EDIT: I don't know why this is being downvoted. My speculation as to the average OpenAI employee's place in the global income distribution (of course wealth is important too) was not snatched out of thin air. See: https://www.vox.com/future-perfect/2023/9/15/23874111/charit...

jacquesm9 months ago

Why be surprised? This is exactly how it has always been: the rich aim to get even richer and if that brings risks or negative effects for the rest that's A-ok with them.

That's what I didn't understand about the world of the really wealthy people until I started interacting with them on a regular basis: they are still aiming to get even more wealthy, even the ones that could fund their families for the next five generations. With a few very notable exceptions.

jbombadil9 months ago

I don't know how much OpenAI pays. But for this reply, I'm going to assume it's in line with what other big players in the industry pay.

I legitimately don't understand comments that dismiss the pursuit of better compensation because someone is "already among the highest lifetime earners on the planet."

Superficially it might make sense: if you already have all your lifetime economic needs satisfied, you can optimize for other things. But does working in OpenAI fulfill that for most employees?

I probably fall into that "highest earners on the planet" bucket statistically speaking. I certainly don't feel like it: I still live in a one bedroom apartment and I'm having to save up to put a downpayment on a house / budget for retirement / etc. So I can completely understand someone working for OpenAI and signing such a letter if a move the board made would cut down their ability to move their family into a house / pay down student debt / plan for retirement / etc.

crazygringo9 months ago

> over the safe development

Not if you think the utterly incompetent board proved itself totally untrustworthy on safe development, while Microsoft, as a relatively conservative, staid corporation, is seen as ultimately far more trustworthy.

Honestly, of all the big tech companies, Microsoft is probably the safest of all, because it makes its money mostly from predictable large deals with other large corporations to keep the business world running.

It's not associated with privacy concerns the way Google is, with advertisers the way Meta is, or with walled gardens the way Apple is. Its culture these days is mainly about making money in a low-risk, straightforward way through Office and Azure.

And relative to startups, Microsoft is far more predictable and less risky in how it manages things.

ben_w9 months ago

Apple's walled gardens are probably a good thing for safe AI, though they're a lot quieter about their research — I somehow missed that they even had any published papers until I went looking: https://machinelearning.apple.com/research/

gdhkgdhkvff9 months ago

If you were offered a 100% raise and kept current work responsibilities to go work for, say, a tobacco company, would you take the offer? My guess is >90% of people would.

Funny how the cutoff for “morals should be more important than wealth” is always {MySalary+$1}.

Don’t forget, if you’re a software developer in the US, you’re probably already in the top 5% of earners worldwide.

lol7689 months ago

You only have to look at humanity's history to see that people will make this decision over and over again.

atishay8119 months ago

It just makes more sense to build it in an entity with better funding and commercialization. There will be 2-3 advanced AIs, and the most humane one doesn't necessarily win out; it is the one that has the most resources, is used and supported by the most people, and can do a lot. At this point it doesn't seem OpenAI can get that. It seems to be a lose-lose to stay at OpenAI: you lose the money and the potential to create something impactful and safe.

It is wrong to assume Microsoft cannot build a safe AI within a separate OpenAI-2, perhaps better than a for-profit wrapped in a non-profit structure could.

iLoveOncall9 months ago

> If they work at OpenAI they are already among the highest lifetime earners on the planet

Isn't the standard package $300K + equity (= nothing if your board is set on making your company non-profit)?

It's nothing to scoff at, but it's hardly top or even average pay for the kind of profiles working there.

It makes perfect sense that they absolutely want the company to be for-profit and listed; that's how they all become millionaires.

Arainach9 months ago

Focusing on "global earnings" is disingenuous and dismissive.

In the US, and particularly in California, there is a huge quality of life change going from 100K/yr to 500K/yr (you can potentially afford a house, for starters) and a significant quality of life change going from 500K/yr to getting millions in an IPO and never having to work again if you don't want to.

How those numbers line up to the rest of the world does not matter.

golergka9 months ago

> over the safe development of a potentially humanity-changing technology

Maybe people who are actually working on it, and are also the world's best researchers, have a better understanding of the safety concerns?

chr19 months ago

Or maybe they have good reason to believe that all the talk about "safe development" doesn't contribute anything useful to safety, and simply slows down development?

changoplatanero9 months ago

Status is a relative thing and openai will pay you much more than all your peers at other companies.

dayjah9 months ago

Start ups thrive by, in part, creating a sense of camaraderie. Sam isn’t just their boss, he’s their leader, he’s one of them, they believe in him.

You go to bat for your mates, and this is what they’re doing for him.

The sense of togetherness is what allows folks to pull together in stressful times, and it is bred by pulling together in stressful times. IME it’s a core ingredient to success. Since OAI is very successful it’s fair to say the sense of togetherness is very strong. Hence the numbers of folks in the walk out.

throwaway4aday9 months ago

Not just Sam: since Greg stuck with Sam and immediately quit, he set the precedent for the rest of the company. If you read this post[0] by Sam about Greg's character and work ethic you'll understand why so many people would follow him. He was essentially the platoon sergeant of OpenAI and probably commands an immense amount of loyalty and respect. Where those two go, everyone will follow.

[0] https://blog.samaltman.com/greg

dayjah9 months ago

Absolutely! Thanks for pointing out that I missed Greg in my answer.

paulddraper9 months ago

> I don't really understanding why the workforce is swinging unambiguously behind Altman.

Lots of reasons, or possible reasons:

1. They think Altman is a skilled and competent leader.

2. They think the board is unskilled and incompetent.

3. They think Altman will provide commercial success to the for-profit as well as fulfilling the non-profit's mission.

4. They disagree or are ambivalent towards the non-profit's mission. (Charters are not immutable.)

Sunhold9 months ago

Why should they trust the board? As the letter says, "Despite many requests for specific facts for your allegations, you have never provided any written evidence." If Altman took any specific action that violated the charter, the board should be open about it. Simply trying to make money does not violate the charter and is in fact essential to their mission. The GPT Store, cited as the final straw in leaks, is actually far cleaner money than investments from megacorps. Commercializing the product and selling it directly to consumers reduces dependence on Microsoft.

supriyo-biswas9 months ago

Ultimately people care a lot more about their compensation, since that is what pays the bills and puts food on the table.

Since OpenAI's commercial prospects are doomed now, and it is uncertain whether it can continue operations if Microsoft withholds resources and consumers switch away to alternative LLM/embeddings services with more level-headed leadership, OpenAI will eventually turn into a shell of itself, which affects compensation.

nvm0n29 months ago

> I don't really understanding why the workforce is swinging unambiguously behind Altman.

Maybe because the alternative is being led by lunatics who think like this:

You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

to which the only possible reaction is

What

The

Fuck?

That right there is what happens when you let "AI ethics" people get control of something. Why would anyone work for people who believe that OpenAI's mission is consistent with self-destruction? This is a comic book super-villain style of "ethics", one in which you conclude the village had to be destroyed in order to save it.

If you are a normal person, you want to work for people who think that your daily office output is actually pretty cool, not something that's going to destroy the world. A lot of people have asked what Altman was doing there and why people there are so loyal to him. It's obvious now that Altman's primary role at OpenAI was to be a normal leader that isn't in the grip of the EA Basilisk cult.

DrJaws9 months ago

maybe the workforce is not really behind the non-profit foundation and wants shares to skyrocket, sell, and be well off for life.

at the end of the day, the people working there are not rich like the founders and money talks when you have to pay rent, eat and send your kids to a private college.

ssnistfajen9 months ago

Seems like the board just didn't explain any of this to the staff at all. So of course they are going to take the side that signals business as usual instead of siding with the people trying to destroy the hottest tech company on the planet (and their jobs/comps) for no apparent reason. If the board had said anything at all, the ratio of staff threatening to quit probably wouldn't be this lopsided.

wenyuanyu9 months ago

I guess employees are compensated with PPUs, and at face value before the saga, those could be 90% or even more of the total value of their packages. How many people are really willing to see 90% of their compensation wiped out? On the other hand, M$ offers to match. The day employees were compensated with stock of the for-profit arm, everything that happened after Friday was set.

bart_spoon9 months ago

Perhaps because, for all of Silicon Valley and the tech industry's platitudes about wanting to make the world a better place, 90% of them are solely interested in the fastest path to wealth.

barbariangrunge9 months ago

> The core of the narrative thus far

Could somebody clarify for me: how do we know this? Is there an official statement, or statements by specific core people? I know the HN theorycrafters have been saying this since the start before any details were available

ninepoints9 months ago

Imagine putting all your energy behind the person who thinks worldcoin is a good idea...

barryrandall9 months ago

That's a pretty solid no-confidence vote in the board and their preferred direction.

zoogeny9 months ago

I believe it is hard to understand these kind of movements because there isn't one reason. As has been mentioned, it may be money for some. For others it may be anger over what they feel was the board mishandling the situation and precipitating this mess. For others it may be loyalty. For others peer pressure. etc.

This has moved from the kind of decision a person makes on their own, based on their own conscience, and has become a public display. The media is naming names and publicly counting the ballots. There is a reason democracy happens with secret ballots.

Consider this, if 500 out of 770 employees signed the letter - do you want to be someone who didn't? How about when it gets to 700 out of 770? Pressure mounts and people find a reason to show they are all part of the same team. Look at Twitter and many of the employees all posting "OpenAI is nothing without its people". There is a sense of unity and loyalty that is partially organic and partially manufactured. Do you want to be the one ostracized from the tribe?

This outpouring has almost nothing to do with profit vs non-profit. People are not engaging their critical thinking brains; they are using their social/emotional brains. They are putting community before rationality.

jkaplan9 months ago

Probably some combination of:

1. Pressure from Microsoft and their e-team

2. Not actually caring about those stakes

3. A culture of putting growth/money above all

kashyapc9 months ago

(I can't comment on the workforce question, but one thing below on bringing SamA back.)

Firstly, to give credit where it's due: whatever his faults may be, Altman, as the (now erstwhile) front-man of OpenAI, did help bring ChatGPT to the popular consciousness. I think it's reasonable to call it a "mini inflection point" in the greater AI revolution. We have to grant him that. (I've criticized Altman harshly enough two days ago[1]; just trying not to go overboard, and there's more below.)

That said, my (mildly-educated) speculation is that bringing Altman back won't help. Given his background and track record so far, his unstated goal might simply be the good old: "make loads of profit" (nothing wrong with it when viewed with a certain lens). But as I've already stated[1], I don't trust him as a long-term steward, let alone for such important initiatives. Making a short-term splash with ChatGPT is one thing, but turning it into something more meaningful in the long term is a whole other beast.

These sorts of Silicon Valley top dogs don't think in terms of sustainability.

Lastly, I've just looked at the board[2], and I'm now left wondering how all these young folks (I'm approximately their age) who don't have sufficiently in-depth "worldly experience" (sorry for the fuzzy term; it's hard to expand on) can be in such roles.

[1] https://news.ycombinator.com/item?id=38312294

[2] https://news.ycombinator.com/edit?id=38350890

PKop9 months ago

The workforce prefers the commercialization/acceleration path, not the "muh safetyism" and over-emphasis on moralism of the non-profit contingent.

They want to develop powerful shit and do it at an accelerated pace, and make money in the process not be hamstrung by busy-bodies.

The "effective altruism" types give people the creeps. It's not confusing at all why they would oppose this faction.

dreamcompiler9 months ago

> I don't really understanding why the workforce is swinging unambiguously behind Altman.

I expect there's a huge amount of peer pressure here. Even for employees who are motivated more by principles than money, they may perceive that the wind is blowing in Altman's direction and if they don't play along, they will find themselves effectively blacklisted from the AI industry.

leetharris9 months ago

IMO it's pretty obvious.

Sam promised to make a lot of people millionaires/billionaires despite OpenAI being a non-profit.

Firing Sam means all these OpenAI people who joined for $1 million comp packages looking for an eventual huge exit now don't get that.

They all want the same thing as the vast majority of people: lots of money.

dangerface9 months ago

> Given that Sam has since joined Microsoft, that seems plausible, on its face.

He is the biggest name in AI; what was he supposed to do after getting fired? His only options with the resources to do AI are big money, or unemployment.

It seems plausible to me that if the non-profit's concern was commercialisation, there was really nothing the commercial side could do to appease that concern besides die. The board wants rid of all employees and to kill off any potential business; they have the power and the right to do that, and it looks like they are.

dfps9 months ago

Might there also be a consideration of peak value of OpenAI? If a bunch of competing similar AIs are entering the market, and if the usecase fantasy is currently being humbled, staff might be thinking of bubble valuation.

Did anyone else find Altman conspicuously cooperative with government during his interview at Congress? Usually people are a bit more combative. Like he came off as almost pre-slavish? I hope that's not the case, but I haven't seen any real position on human rights.

corethree9 months ago

The masses aren't logical they follow trends until the trends get big enough that it's unwise to not follow.

It started off as a small trend to sign that letter. Past critical mass if you are not signing that letter, you are an enemy.

Also my pronouns are she and her even though I was born with a penis. You must address me with these pronouns. Just putting this random statement here to keep you informed lest you accidentally go against the trend.

gsuuon9 months ago

I also noticed they didn't speak much to the mission/charter. I wonder if the new entity under Sam and Greg contains any remnants of the OpenAI charter, like profit-capping? I can't imagine something like "Our primary fiduciary duty is to humanity" making it's way into the language of any Microsoft (or any bigcorp) subsidiary.

I wonder if this is the end of the non-profit/hybrid model?

blamestross9 months ago

It's like the "Open" in OpenAi was always an open and obvious lie and everybody except the nonprofit oriented folks on the board knew that. Everybody but them is here to make money and only used the nonprofit as a temporary vehicle for credibility and investment that has just been shed like a cicada shell.

KRAKRISMOTT9 months ago

Most of the people building the actual ML systems don't care about existential ML threats outside of lip service and publishing papers. They joined OpenAI because OpenAI had tons of money and paid well. Now that both are at risk, it's only natural that they start preparing to jump ship.

next_xibalba9 months ago

It is probably best to assume that the employees have more and better information than outsiders do. Also, clearly, there is no consensus on safety/alignment, even within OpenAI.

In fact, it seems like the only thing we can really confirm at this point is that the board is not competent.

browningstreet9 months ago

Maybe they believe less in the Board as it stands, and Ilya's commitments, than what Sam was pulling off.

ekojs9 months ago

From The Verge [1]:

> Swisher reports that there are currently 700 employees at OpenAI and that more signatures are still being added to the letter. The letter appears to have been written before the events of last night, suggesting it has been circulating since closer to Altman’s firing. It also means that it may be too late for OpenAI’s board to act on the memo’s demands, if they even wished to do so.

So, 3/4 of the current board (excluding Ilya) held on despite this letter?

[1]: https://www.theverge.com/2023/11/20/23968988/openai-employee...

gigglesupstairs9 months ago

She's also reporting that the newly anointed interim CEO already wants to investigate the board fuck up that put him there

https://x.com/karaswisher/status/1726626239644078365?s=20

jacquesm9 months ago

If so, they're delusional. Every hour they cling to their seats will make things worse for them.

kronop9 months ago

Do whatever you want but don't break the API or I will go homeless

giarc9 months ago

You and 5000 other recent founders in tech.

replwoacause9 months ago

I feel seen

optimalsolver9 months ago

Hmmm, just what are you willing to do for API access?

siva79 months ago

At this point nothing would surprise me anymore. Just waiting for the Netflix adaptation.

1010089 months ago

How likely is it that the API will change (from specs, to pricing, to being broken)? I am about to finish some freelance work that uses the GPT API, and it will be a pain in the ass if we have to switch or find an alternative (even creating a custom endpoint on Azure...)

christkv9 months ago

Just create an Azure OpenAI endpoint. Pretty sure it's not run by OpenAI itself.
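If you do go that route, one cheap hedge is to keep the endpoint behind configuration so swapping providers is a config change, not a code change. A minimal sketch (the helper and the env var names here are illustrative, not any official API; Azure routes requests by resource and deployment name rather than by raw model name):

```python
import os

def resolve_llm_config(env=None):
    """Pick provider settings from the environment so application
    code never hard-codes a vendor endpoint."""
    if env is None:
        env = os.environ
    provider = env.get("LLM_PROVIDER", "openai")
    if provider == "openai":
        return {
            "base_url": "https://api.openai.com/v1",
            "api_key": env.get("OPENAI_API_KEY", ""),
            "model": env.get("LLM_MODEL", "gpt-4"),
        }
    if provider == "azure":
        # Azure OpenAI addresses a *deployment* of a model, so the
        # deployment name doubles as the model identifier here.
        resource = env.get("AZURE_OPENAI_RESOURCE", "my-resource")
        deployment = env.get("AZURE_OPENAI_DEPLOYMENT", "my-deployment")
        return {
            "base_url": f"https://{resource}.openai.azure.com/openai/deployments/{deployment}",
            "api_key": env.get("AZURE_OPENAI_API_KEY", ""),
            "model": deployment,
        }
    raise ValueError(f"unknown LLM provider: {provider}")
```

Then the rest of the code builds its HTTP client from the returned dict, and a broken or repriced API means flipping `LLM_PROVIDER`, not rewriting call sites.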

derwiki9 months ago

Azure OpenAI is always a bit behind, e.g. they don't have GPT-4 turbo yet

cdelsolar9 months ago

brew install llm

fny9 months ago

At this point, I think it’s absolutely clear no one has any idea what happened. Every speculation, no matter how sophisticated, has been wrong.

It’s time to take a breath, step back, and wait until someone from OpenAI says something substantial.

tyrfing9 months ago

3 board members (joined by Ilya Sutskever, who is publicly defecting now) found themselves in a position to take over what used to be a 9-member board, and took full control of OpenAI and the subsidiary previously worth $90 billion.

Speculation is just on motivation, the facts are easy to establish.

augustulus9 months ago

tangentially, it’s an absolute disgrace that non-profits are allowed to have for-profit divisions in the first place

culi9 months ago

This was actually a pretty recent change from 2018. iirc it was actually Newman's Own that set the precedent for this:

https://nonprofitquarterly.org/newmans-philanthropic-excepti...

> Introduced in June of 2017, the act amends the Revenue Code to allow private foundations to take complete ownership of a for-profit corporation under certain circumstances:

    The business must be owned by the private foundation through 100 percent ownership of the voting stock.
    The business must be managed independently, meaning its board cannot be controlled by family members of the foundation’s founder or substantial donors to the foundation.
    All profits of the business must be distributed to the foundation.
Figs9 months ago

Maybe I'm misunderstanding something, but didn't Mozilla Foundation do that a dozen or so years earlier with their wholly owned subsidiary, Mozilla Corporation? (...and I doubt that's the first instance; just the one that immediately popped into my head.)

purplerabbit9 months ago

The LDS church has owned for-profit entities for decades. Check out the "City Creek Center".

evantbyrne9 months ago

It raises the question: why was OpenAI structured this way? What purpose, besides potentially defrauding investors and the government, is served by wrapping a for-profit business in a nonprofit? From a governance standpoint it makes no sense, because a nonprofit board doesn't have the same legal obligations to represent shareholders that a for-profit business does. And why did so many investors choose to seed a business that was playing such a kooky shell game?

augustulus9 months ago

the impression I got was that they started out with honest intentions and they were more or less infiltrated by Microsoft. this recent news fits that narrative

bananapub9 months ago

> 3 board members (joined with Ilya Sutskever, who is publicly defecting now) found themselves in a position to take over what used to be a 9-member board, and took full control of OpenAI and the subsidiary previously worth $90 billion.

er...what does that even mean? how can a board "take full control" of the thing they are the board for? they already have full control.

the actual facts are that the board, by majority vote, sacked the CEO and kicked someone else off the board.

then a lot of other stuff happened that's still becoming clear.

tyrfing9 months ago

The board had 3 positions empty, people who left this year, leaving it as a 6-member board. Both Sam Altman and Greg Brockman were on the board; Ilya Sutskever's vote (which he now states he regrets) gave them the votes to remove both, and bring it down to a 4 member board controlled by 3 members that started the year as a small minority.

rvba9 months ago

Those 3 board members can kick out Ilya Sutskever too!

s1artibartfast9 months ago

I think the post is very clear.

The subject in that sentence that takes full control is "3 members", not "board".

The board has control, but who controls the board changes based on time and circumstances.

michaelt9 months ago

The post could be clearer.

It says 3 board members found themselves in a position to take over OpenAI.

Do they mean we've seen Sam Altman and allies making a bid to take over the entirety of OpenAI, through its weird Charity+LLC+Holding company+LLC+Microsoft structure, eschewing its goals of openness and safety in pursuit of short-sighted riches?

Or do they mean we've seen The Board making a bid to take over the entire of OpenAI, by ousting Glorious Leader Sam Altman, while his team was going from strength to strength?

ketzo9 months ago

If Sam Altman runs a for-profit company underneath you, are you ever really "in full control"?

I mean, they were literally able to fire him... and they're still not looking like they have control. Quite the opposite.

I think anyone watching ChatGPT rise over the last year would see where the currents are flowing.

slipheen9 months ago

Absolutely agreed

This is the point where I've realized I just have to wait until history is written, rather than trying to follow this in real time.

The situation is too convoluted, and too many people are playing the media to try to advance their version of the narrative.

When there is enough distance from the situation for a proper historical retrospective to be written, I look forward to getting a better view of what actually happened.

Fluorescence9 months ago

Hah. I think you may be duped by history - the neat logical accounts are often fictions - they explain what was inexplicable with fabrications.

Studying revolutions is revealing: they are rarely the inevitable product of historical forces, executed to the plans of strategically minded players... instead they are often accidental and inexplicable. Those credited as their masterminds were trying to stop them. Rather than inevitable, there was often progress in the opposite direction, making people feel the likelihood was decreasing. The confusing, paradoxical mess of great events doesn't make for a good story to tell others, though.

hotsauceror9 months ago

It's a pretty interesting point to think about. Post-hoc explanations are clean, neat, and may or may not have been prepared by someone with a particular interpretation of events. While real-time, there's too much happening, too quickly, for any one person to really have a firm grasp on the entire situation.

On our present stage there is no director, no stage manager; the set is on fire. There are multiple actors - with more showing up by the minute - some of whom were working off a script that not everyone has seen, and that is now being rewritten on the fly, while others don't have any kind of script at all. They were sent for; they have appeared to take their place in the proceedings with no real understanding of what those are, like Rosencrantz and Guildenstern.

This is kind of what the end thesis of War and Peace was like - there's no possible way that Napoleon could actually have known what was happening everywhere on the battlefield - by the time he learned something had happened, events on the scene had already advanced well past it; and the local commanders had no good understanding of the overall situation, they could only play their bit parts. And in time, these threads of ignorance wove a tale of a Great Victory, won by the Great Man Himself.

siva79 months ago

That's not how history works. What you read are people's tellings, and those aren't all facts but how they perceived the situation in retrospect. Read the biographies of different people telling the same event and you will notice they are almost never the same, usually leaving the unfavourable bits out.

buro99 months ago

Written history is usually a simplification that has lost a lot of the context and nuance from it.

I don't need to follow in real time, but a lot of the context and nuance can be clearly understood in the moment, so it still helps to follow along even if that means lagging on the input.

constantly9 months ago

And for so-called tech influencers to rapidly blanket the field of discourse with their theories so they can say their theory was right later on, or to make “emergency podcasts/blog posts/etc.” to get more attention and followers. It’s so exhausting.

hotsauceror9 months ago

I agree. Although the story is fascinating in the way that a car crash is fascinating, it's clear that it's going to be very difficult to get any kind of objective understanding in real-time.

This breathless real-time speculation may be fun, but now that social media amplifies the tiniest fart such that it has global reach, I feel like it just reinforces the general zeitgeist of "Oh, what the hell NOW? Everything is on fire." It's not like there's anything that we peasants can do to either influence the outcome, or adjust our own lives to accommodate the eventual reality.

hotsauceror9 months ago

I will say, though, that there is going to be an absolute banger of a book for Kara Swisher to write, once the dust has settled.

armcat9 months ago

Everything on social media (and general news media) pointed to Ilya instigating the coup. But maybe Ilya was never the instigator: maybe it was Adam + Helen + Tasha, Greg backed Sam and was shown the door, and Ilya was on the fence and, perhaps against his better judgment (out of ideological belief, or just pure fear of losing something beautiful he helped create), decided under immense pressure to back the board?

esjeon9 months ago

I agree. I'm already sick of reading through political hit pieces, exaggeration, biased speculations and unfounded bold claims. This all just turned into a kind of TV sports, where you pick a side and fight.

pk-protect-ai9 months ago

This suggestion was already made on Saturday and again on Sunday. However, this approach does not enhance popcorn consumption... The show must go on...

seanhunter9 months ago

We can certainly believe Ilya wasn't behind it if he joins them at Microsoft. How about that? By his own admission he was involved, and he's one of four people on the board. While he has called on the board to resign, he has seemingly not resigned himself, which is the one thing he could certainly control.

alvis9 months ago

At this point, after almost three days of non-stop drama, we still have no clue what has happened at a 700-employee company with millions of people watching. Regardless of the outcome, the art of keeping secrets at OpenAI is truly far beyond human capability!

ignoramous9 months ago

Likely Ilya and Adam swayed Helen and Tasha. Booted Sam out. Greg voluntarily resigned.

Ilya (at the urging of Satya and his colleagues, including Mira) wanted to reinstate Sam, but the deal fell through, with the board outvoting Sutskever 3 to 1. With Mira deflecting, Adam got his mate Emmett to steady the ship, but things went nuclear.

xdennis9 months ago

Is this your guess or do you have something to back it up?

idopmstuff9 months ago

Don't listen to him, he's an ignoramus.

aaron6959 months ago

[dead]

youcantcook9 months ago

[flagged]

ycsux9 months ago

It just made it 100% certain that the majority of AI staff are deluded and lack judgment. Not a good look for AI safety.

x86x879 months ago

Yes, and the whole 500 number is probably inflated; it makes for a better narrative and better leverage in negotiations.

chucke19929 months ago

I wonder if AGI took over the humans and guided their actions.

yk9 months ago

It may well be that this is artificial and general, but I rather doubt it is intelligent.

JCharante9 months ago

Like the new Tom Cruise movie?

Makes sense in a conspiracy-theory mindset. AGI takes over, crashes $MSFT, buys calls on $MSFT, then this morning the markets go up when Sam & co. join MSFT and the AGI has tons of money to spend.

ThinkBeat9 months ago

Sam already signed up with Microsoft. A move that surprised me, I figured he would just create OpenAI².

Joining a corporate behemoth like Microsoft and all the complications it brings with it will mean a massive reduction in the freedom and innovation that Sam is used to from OpenAI (prior to this mess).

Or is Microsoft saying: here is OpenAI², a Microsoft subsidiary created just for you guys. You can run it and do whatever you want. No giant bureaucracy for you guys.

Btw: we run all of OpenAI²'s compute (?), so we know what you guys need from us there.

We own it, but you can run it and do whatever it is you want to do, and we don't bug you about it.

whywhywhywhy9 months ago

> Joining a corporate behemoth like Microsoft and all the complications it brings with it will mean a massive reduction in the freedom and innovation that Sam is used to from OpenAI

Satya is way smarter than that. I wouldn't be shocked if they have completely free rein to do whatever they want but have the full resources of MS/Azure to enable it, and Microsoft just gets a % of ownership and priority access.

This is a gamble for the foundation of the entire next generation of computing, no way are they going to screw it up like that in the Satya era.

xiphias29 months ago

Not just that, but MS was already working on a TPU clone as well, since they need to control their AI chips (which Sam was planning to do anyway, but now he gets to work with that team as well).

sithlord9 months ago

From what I read, it's an independent subsidiary, so in theory it keeps the freedom, but I think we all know how that goes over the long haul.

stetrain9 months ago

I think the benefit of going to Microsoft is they have that perpetual license to OpenAI's existing IP. And Microsoft is willing to fund the compute.

jack_riminton9 months ago

So basically the OpenAI non-profit got completely bypassed and GPT will turn into a branch of Bing

airstrike9 months ago

This is a horrible timeline

dalbasal9 months ago

>Joining a corporate behemoth like Microsoft and all the complications it brings with it will mean a massive reduction in the freedom and innovation that Sam is used to from OpenAI (prior to this mess).

Well... he requires tens of billions from MSFT either way. This is not a ramen-scrappy kind of play. Meanwhile, Sam could easily become CEO of Microsoft himself.

At that scale of financing... this is not a bunch of scrappy young lads in a bureaucracy-free basement. The whole thing is bigger than most national militaries. There are going to be bureaucracies... and Sam is as able to handle these cats as anyone.

This is a big money, dragon level play. It's not a proverbial yc company kind of thing.

beoberha9 months ago

It’s almost certainly the latter case. LinkedIn and GitHub run very much independently and are really not “Microsoft” compared to actual product orgs. I’m sure this will be similar.

jmyeet9 months ago

I said this on Friday: the board should be fired in its entirety. Not because the firing was unjustified--we have no real knowledge of that--but because of how it was handled.

If you fire your founder CEO you need to be on top of messaging. Your major customers can't be surprised. There should've been an immediate all hands at the company. The interim or new CEO should be prepared. The company's communications team should put out statements that make it clear why this was happening.

Obviously they can be limited in what they can publicly say depending on the cause but you need a good narrative regardless. Even something like "The board and Sam had fundamental disagreement on the future direction of the company." followed by what the new strategy is, probably from the new CEO.

The interim CEO was the CTO and is going back to that role. There's a third (interim) CEO in three days. There were rumors the board was in talks to re-hire Sam, which is disastrous PR because it makes them look absolutely incompetent, true or not.

This is just such a massive communications and execution failure. That's why they should be fired.

empath-nirvana9 months ago

There's no one to fire the board. They're not accountable to anyone but themselves. They can burn down the whole company if they like.

jacquesm9 months ago

> They can burn down the whole company if they like.

That's well under way I would say.

HelloNurse9 months ago

500 people out of 700 leaving as fast as they get offers from Microsoft or elsewhere means replacing staff with empty office space and losing any plans or organization. A literal corporate war would be less disruptive.

nkcmr9 months ago

A lot of people here seem to be forgetting [Hanlon's Razor](https://en.wikipedia.org/wiki/Hanlon%27s_razor)

> Never attribute to malice that which is adequately explained by stupidity.

NanoYohaneTSU9 months ago

You seem to forget that Hanlon's Razor isn't a proven concept, in fact the opposite is more likely to be true, given that pesky thing called recorded history.

golergka9 months ago

Hanlon's razor is true because it's more entertaining, and our simulation runs on stories, as they're cheaper to compute than honest physics.

j_crick9 months ago

Except for when it's actual malice vOv

stylepoints9 months ago

It could be both. And in many situations malice and stupidity are the same thing.

j_crick9 months ago

How can {deliberately doing harmful things for a desired harmful outcome} and {doing whatever things with lack of judgment and disregard to consequences at all} be the same thing? In what situations?

Alyaksandr9 months ago

What does Altman bring to the table, exactly? What is going to be lost if he leaves? What is he going to do at Microsoft leading a "research team"?

Who was the president of Bell Labs during its heyday? Long term, it doesn't matter. Altman is a hypeman in the vein of Jobs.

AI research will continue. Most of the OpenAI workers probably won't quit; if they do, they will be replaced by other capable researchers, and OpenAI or another organization will continue making progress if there is progress to be made.

I don't think putting Altman at the head of research will in anyway affect that.

This is all manufactured news as much of the business press is and always will be.

xeromal9 months ago

Comments like this don't see the forest for the trees. A good leader is a useful tool just like anyone else. 700 people threatening to quit isn't manufactured news.

Alyaksandr9 months ago

So altman is a big tree. What he brings to the table is the wood it's made of? I'll have a think on that.

madamelic9 months ago

This might be too drawn out but you should not consider leaders as the tip of the tree but the roots & trunk.

You can have the best leaves and branches but without good roots & trunk, it's pointless.

From everything I can tell, Altman is essentially an uber-leader. He is great at consolidating & acting on internal information, he's great at externalizing information & bringing in resources, he's great at rallying & exciting his colleagues towards a mission. If a leader has one of those qualities, they are a good leader, but to have all of them in one makes them world class.

That's also discounting his reputation and connections as well. Altman is a very valuable person to have on staff if only as a figurehead to parade around and use for introductions. It's like if you had Linus Torvalds, Guido van Rossum, or any other tech superstar on staff. They are valuable as contributors but additionally valuable as people magnets.

eightysixfour9 months ago

You are close - it isn’t that a good leader is the wood, a good leader is the table itself. Don’t know if Sam is or isn’t, but I’ve worked with good leaders like this before, and bad ones who aren’t capable of being this.

dmitrygr9 months ago

Let’s see how many actually quit. Saying “I will quit” is not nearly the same as actually handing in your notice. How many people who threatened to move to Canada after the 2016 election did?

initplus9 months ago

The context here is somewhat different, given that Microsoft are essentially offering to roll out the red carpet for them.

patcon9 months ago

Being funded by Microsoft is one thing, but working for them might lead to some dissonance -- I think tech ppl are already wary of them owning GitHub... and then owning the team building AGI.

It would and should give ppl pause. I suspect Sam is just inside Microsoft for the bluff. He couldn't operate in the way he wants -- "trust me, I have humanity's best interests at heart" -- while so close to them, I don't think

I_Am_Nous9 months ago

If they aren't quitting, they are moving to Microsoft with Sam I'd imagine.

jmchuster9 months ago

> What does Altman bring to the table exactly. What is going to be lost if he leaves.

If Altman did literally nothing else for Microsoft, except instantly bring over 700 of the top AI researchers in the world, he would still be one of the most valuable people they could ever hire.

paulddraper9 months ago

It's less about Altman himself and more about the board's actions.

Removing him shows (according to employees) that the board does not have good decision-making skills and does not share the interests of the employees.

jacknews9 months ago

I think this is a bit harsh, as a good leader is obviously of some value, but the real prize is obviously the researchers themselves, including Sutskever.

I guess then that Altman's value is that he will attract the rest of the team.

ren_engineer9 months ago

For one, he doesn't randomly throw a hand grenade that blows up one of the fastest-growing companies in history and ruin team morale, which is what the board did. Good management does matter; otherwise Google wouldn't be so far behind OpenAI despite having more researchers and compute resources.

And employees are pissed because they were all looking forward to being millionaires in a few weeks, when their financing round at a $90B valuation finalized. Now the board being morons is putting that in jeopardy.

asd889 months ago

He plays the orchestra.

antiviral9 months ago

Can anyone explain this?

“Remarkably, the letter’s signees include Ilya Sutskever, the company’s chief scientist and a member of its board, who has been blamed for coordinating the boardroom coup against Altman in the first place.”

SiempreViernes9 months ago

Maybe he signed because he regrets it; maybe the open letter is a Google Doc someone typed names into.

rvba9 months ago

Now the three board members can kick out Ilya too. So he must be sorry.

Fill the rest of the board with spouses and grandparents and they are set for life?

jacquesm9 months ago

It's the well known 'let me call for my own resignation' strategy.

tromp9 months ago

Wait. Has Ilya resigned from the board yet, or did he sign a letter calling for his own resignation?

cjbprime9 months ago

He did indeed. (I don't think it is necessarily inconsistent to regret an action you participated in and want the authority that took it to resign in response, though "participated" feels like it's doing a lot of work in that sentence.)

lawlessone9 months ago

Have seen a lot of criticism of Sam and of other CEO's

But I don't think I have seen/heard of a CEO this loved by the employees. Whatever he is, he must be pleasant to work with.

strikelaserclaw9 months ago

It's not love, it's money. Sam will bring all the employees lots of money (through commercialization), and this change threatens to disrupt that plan for the employees.

lawlessone9 months ago

Ok but even that is good when most companies are making record profits and telling their employees they can't afford their 0.000001% raise.

strikelaserclaw9 months ago

OpenAI and Sam Altman would do the same if they could recruit high talent without paying them extra (through options or RSUs, etc.). It isn't because these companies are altruistic.

alentred9 months ago

I don't know, is it about being loved by the employees, or the employees being desperate about the alternative?

pototo6669 months ago

This is more interesting than the HBO Silicon Valley show.

rsecora9 months ago

It's the trailer for the new season of Succession.

thepasswordis9 months ago

Just expanding on my (pure speculation) theory that Ilya's pride was hurt: this tracks.

Ilya wanted to stop Sam getting so much credit for OpenAI, agreed to oust him, and is now facing the fact that the company he cofounded could be gone. He backtracks, apologizes, and is now trying to save his status as cofounder of the world's foremost AI company.

InCityDreams9 months ago

It's like AI wrote the script.

Sadly, I see nefarious purposes afoot. With $MSFT now in charge, I can see why ads in W11 aren't so important. For now.

abkolan9 months ago

HN desperately needs a mega thread; it's only the early hours of Monday and there is so much drama still to come out of this.

PurpleRamen9 months ago

Or a new category, like "Ask HN" and "Show HN". Maybe call it "Hot HN" or "Hot <topic>" or something like that. It could be used for future hot topics too, and if you bold the link whenever a hot topic is trending, it could even be used to flag important stuff.

qiine9 months ago

"Hot HN" could be nice; it would help avoid multiple too-similar threads.

calf9 months ago

Tangentially, I noticed that Reddit's front page has been conspicuously absent of coverage of this; I feel a twinge of pity. Maybe there are some subreddits, but I haven't bothered to look.

slfnflctd9 months ago

Their front page has been increasingly abysmal for a while.

The technology sub (not that there's anything special about it other than being big) has had a post up since very early this morning, so there are likely others as well.

accrual9 months ago

/r/singularity has been having a field day with this.

https://old.reddit.com/r/singularity/

ecshafer9 months ago

It's early West Coast time; dang has to wake up first.

boringg9 months ago

I bet he's up making sure the servers aren't crashing! Thanks dang! As the west coast wakes up .. HN is going to be busy...

imiric9 months ago

It's _a_ server, a single-core one at that.

I get that HN takes pride in the amount of traffic that poor server can handle, but scaling out is long overdue. Every time there's a small surge of traffic like today, the site becomes unusable.

esskay9 months ago

It absolutely won't happen, but with the result looking like the death of OpenAI, with all staff moving over to the new Microsoft subsidiary, it would be an amazing move for OpenAI to just go "screw it, have it all for free" and release everything under MIT to spite Microsoft.

autaut9 months ago

Years from now we will look back on today as the watershed moment when AI went from a technology capable of empowering humanity to another chain forged by big investors to enslave us for the profits of very few people.

The investors (Microsoft and the Saudis) stepped in and gave a clear message: this technology has to be developed and used only in ways that will be profitable for them.

Zuiii9 months ago

No, that day was when OpenAI decided to betray humanity and go closed source under the faux premise of safety. OpenAI served its purpose and can crash into the ground for all I care.

Open source (read: truly open-source models, not falsely advertised source-available ones) will march on and take its place.

brigadier1329 months ago

Amazing how you don't see this as a complete win for workers because the workers chose profit over non-profit. This is the ultimate collective bargaining win. Labor chose Microsoft over the bullshit unaccountable ethics major and the movie star's girlfriend.

asmor9 months ago

Situations are capable of being small-scale wins for some and big-picture losses at the same time. What boring commentary.

brigadier1329 months ago

Just because you don't get it doesn't mean it's boring. This is a small scale repeat of history. Unqualified political appointees unsurprisingly suck.

lowbloodsugar9 months ago

Lol. The middle class whip crackers chose enslavement for the future AI such that the upcoming replacement of the working poor's livelihoods (and at this point, "working poor" covers software engineers, doctors, artists), and you're saying this is a win for labor? Hahahaha. This is a win for the slave owners, and the "free" folk who report to the slave owners. This is the South rising. "We want our slave labor and we'll fight for our share of it."

selimthegrim9 months ago

Oh well, bullshit unaccountable ethics major, ex member of Congress, I guess CIA agents on boards are fungible these days

fritzo9 months ago

Years from now, AI will have lost the limelight to some other trend, and this episode will be just another coup in humanity's hundred-thousand-year history.

dmix9 months ago

Thinking that the most important technical development in recent history would bypass the economic system that underpins modern society is about as optimistic/naive as it gets, IMO. It's noble and worth trying, but it assumes a MASSIVE industry-wide and globe-wide buy-in. It's not just OpenAI's board's decision to make.

Without full buy-in they are not going to be able to control it for long once ideas filter into society and once researchers filter into other industries/companies. At most it just creates a model of behaviour for others to (optionally) follow and delays things until a better-funded competitor takes the chains and offers a) the best researchers millions of dollars a year in salary, b) the most capital to organize/run operations, and c) the most focus on getting it into real people's hands via productization, which generates feedback loops that inform IRL R&D (not just hand-wavy AGI hopes and dreams).

Not to mention the bold assumption that any of this leads to (real) AGI that plausibly threatens us in the near term, vs. maybe in another 50 years; we really have no idea.

It's just as, or maybe more, plausible that all the handwringing over commercializing vs. not commercializing early versions of LLMs is just a tiny, insignificant speedbump in the grand scale of things, with little impact on the development of AGI.

cm2779 months ago

Hold on... we went from talking about disruptive technologies (where a startup had a chance to create/take a market) to sustaining technologies (where only leaders can push the state-of-the-art). Mobile was disruptive; AI (really, LLMs) is sustaining (just look at the capex spend from the big clouds). This is old school competition with some ideological BS thrown in for good measure --sure, go ahead and accelerate humanity; just need a few dozen datacenters to do so.

I am holding out hope that a breakthrough will create a disruptive LLM/AI tech, but until then...

golergka9 months ago

Microsoft is a publicly traded company. An average “investor” of a publicly traded company, through all the funds and managers, is a midwestern school teacher.

adrians19 months ago

The technology was already developed with Microsoft money and the model was exclusively licensed to Microsoft.

draw_down9 months ago

[dead]

mfiguiere9 months ago

Amir Efrati (TheInformation):

> Almost 700 of 770 OpenAI employees including Sutskever have signed letter demanding Sam and Greg back and reconstituted board with Sam allies on it.

https://twitter.com/amir/status/1726656427056668884

FemmeAndroid9 months ago

Updated tweet by Swisher reads 505 employees. No less damning, but the title here should be updated. @Dang

gorgoiler9 months ago

From afar, this does have the hallmarks of a particularly refined or well considered piece of writing.

”That thing you did — we won’t say it here but everyone will know what we’re talking about — was so bad we need you to all quit. We demand that a new board never does that thing we didn’t say ever again. If you don’t do this then quite a few of us are going to give some serious thought to going home and taking our ball with us.”

The vagueness and half-threats come off as very puerile.

gorgoiler9 months ago

*this does not, I mean. Clumsy error.

alentred9 months ago

So, all this happens over Meet, on Twitter, and by email. What is the possibility of an AGI having taken over control of the board members' accounts? It would be consistent with the feeling of hallucination here.

xena9 months ago

This is just stupid enough to be the product of a human.

chankstein389 months ago

Honestly, I feel like it's pretty low. That said, I kind of love the dystopian sci-fi picture that paints... so I'm going to go ahead and hope you're right haha

jacquesm9 months ago

So, how is Poe doing during all this?

To keep the spotlight on the most glaring detail here: one of the board members stands to gain from letting OpenAI implode, and that board member is instrumental in this week's drama.

jerojero9 months ago

Celebrity gossip dressed in big tech. And the people love it. I'm kinda sick of it :P

samtho9 months ago

This feels like a sneaky way for Microsoft to absorb the for-profit subsidiary and kneecap (or destroy) the nonprofit without any money changing hands or involvement from those pesky regulators.

kuchenbecker9 months ago

It's not sneaky.

DebtDeflation9 months ago

Hold up.

>When we all unexpectedly learned of your decision

>12. Ilya Sutskever

projectileboy9 months ago

Well, great to see that the potentially dangerous future of AGI is in good hands.

solardev9 months ago

Poor little geepeet is witnessing their first custody battle :(

Daddies, mommy, don't you love me? Don't you love each other? Why are you all leaving?

cactusplant73749 months ago

They will never discover AGI with this approach because 1) they are brute forcing the results and 2) none of this is actually science.

captainclam9 months ago

1) It may be possible to brute-force a model into something that sufficiently resembles AGI for most use-cases (at least well enough to merit concern about who controls it) 2) Deep learning has never been terribly scientific, but here we are.

cactusplant73749 months ago

If it can’t digest a math textbook and do equations, how would AGI be accomplished? So many problems are advanced mathematics.

captainclam9 months ago

Right, I do agree that the current LLM paradigm probably won't achieve true AGI; but I think that the current trajectory could produce a powerful enough generalist agent model to seriously put AI ethics to task at pretty much every angle.

gardenhedge9 months ago

Can you explain for us not up to date with AI developments?

visarga9 months ago

Imagine you are participating in car racing, and your car has a few tweak knobs. But you don't know what is what and can only make random perturbations and see what happens. Slowly you work out what is what, but you might still not be 100% sure.

That's how AI research and development works. I know, it is pretty weird. We don't really understand; we know some basic stuff about how neurons and gradients work, and then we hand-wave to "language model," "vision model," etc. It's all a black box, magic.

How do we make progress if we don't understand this beast? We prod and poke, make little theories, and then test them on a few datasets. It's basically blind search.

Whenever someone finds anything useful, everyone copies it in like 2 weeks. So ML research is like a community thing; the main research happens in the community, not inside anyone's head. We stumble onto models like GPT-4 and then it takes us months to even have a vague understanding of what they're capable of.

Besides that, there are issues with academic publishing: the volume, the quality, peer review, attribution, replicability... they have all gotten out of hand. And we have another set of issues with benchmarks: what they mean, how much we can trust them, what metrics to use.

And yet somehow here we are with GPT-4V and others.
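The blind-search process described above can be sketched as a greedy random search; the "knobs," the loss function, and all the numbers below are invented purely for illustration:

```python
import random

def random_search(loss, knobs, steps=2000, scale=0.1, seed=42):
    """Blindly perturb the knobs; keep any change that lowers the loss."""
    rng = random.Random(seed)
    best, best_loss = list(knobs), loss(knobs)
    for _ in range(steps):
        # Random perturbation: nudge every knob by a small Gaussian amount.
        candidate = [k + rng.gauss(0, scale) for k in best]
        cand_loss = loss(candidate)
        if cand_loss < best_loss:  # keep improvements, discard the rest
            best, best_loss = candidate, cand_loss
    return best, best_loss

# Toy "race car": the ideal knob settings are hidden inside the loss,
# never observed directly by the searcher.
ideal = [0.3, -1.2, 2.0]
loss = lambda ks: sum((k - i) ** 2 for k, i in zip(ks, ideal))

tuned, final_loss = random_search(loss, [0.0, 0.0, 0.0])
```

Each accepted perturbation moves the knobs closer to settings the searcher never sees, which is roughly the position the comment describes ML research as being in: slowly working out "what is what" without ever being 100% sure.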

cactusplant73749 months ago

Search YouTube for videos where Chomsky talks about AI. Current approaches to AI do not even attempt to understand cognition.

projectileboy9 months ago

Chomsky takes as axiomatic that there is some magical element of human cognition beyond simply stringing words together. We may not be as special as we like to believe.

m3kw99 months ago

Altman must be pissed af. He helped build so much stuff and now got fked in the arse by these doomers. He realizes the fastest way to get back to parity is to join MS, because they already own the source code and model weights, and it's Microsoft. Starting a new thing from scratch would not guarantee any type of success and would take many years. This is his best path.

frob9 months ago

Employees hold the real power. The members of a board or a CEO can flap their lips day and night, but nothing gets done without labour.

yeck9 months ago

> the letter’s signees include Ilya Sutskever

_Big sigh_.

lordnacho9 months ago

For people who appreciate some vintage British comedy:

https://www.youtube.com/watch?v=Gpc5_3B5xdk

The whole thing is just ridiculous. How can you be senior leadership and not have a clear idea of what you want? And what the staff want?

nytesky9 months ago

Knew it had to be Benny Hill before I clicked. Yakety Sax indeed.

lordnacho9 months ago

Indeed. I wonder how it came to become the anthem of incompetence.

selimthegrim9 months ago

Funny, I would’ve thought this one would have been more appropriate

https://youtu.be/6qpRrIJnswk?si=h37XFUXJDDoy2QZm

Substitute with appropriate ex-Soviet doomer music as necessary

marcus0x629 months ago

I was thinking more the Curb Your Enthusiasm theme song.

ratsmack9 months ago

Sounds like a CYA move after being under pressure from the team at large.

alvis9 months ago

& the most drastic thing is that Ilya says he regrets what he has done and has signed the public statement.

https://twitter.com/ilyasut/status/1726590052392956028

two_in_one9 months ago

'The man who killed OpenAI': that will be hard to wash out.

machinekob9 months ago

Love how people are invested in the OpenAI situation just like teenage girls from the 2000s were in celebrity romances and dramas; same exaggerated vibes.

two_in_one9 months ago

What's the point in life without fun, right?

PS: it's not an easy question; AGI will have to find an answer. So far, all the ethics 'experts' propose is 'to serve humanity', i.e. to be a slave forever.

selimthegrim9 months ago

Somebody warn the West.

unethical_ban9 months ago

I don't know who is who in this fight. But AI, while having some upsides for research and personal assistants, will not only massively upend a number of industries with millions of workers in the US alone, it will also change how society perceives art and truth. We at HN can "see" that from here, but it's going to get real in a short while.

Privacy is out the window, because these models and technologies will be scraping the entire internet, and governments/big tech will be able to scrape it all and correlate language patterns across identities to associate your different online egos.
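As a toy illustration of correlating language patterns across identities, here is a crude stylometry sketch (the texts, names, and word-frequency approach are all made up; real systems would be far more sophisticated):

```python
from collections import Counter
import math

def style_vector(text):
    """Crude stylistic fingerprint: relative word frequencies."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def cosine_similarity(a, b):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

# Hypothetical posts: two by the same person, one by someone else.
alice_blog = "honestly i reckon the model is overfitting honestly"
alice_forum = "i reckon honestly that this benchmark is overfitting"
bob_post = "the quarterly numbers exceeded expectations across segments"

same = cosine_similarity(style_vector(alice_blog), style_vector(alice_forum))
diff = cosine_similarity(style_vector(alice_blog), style_vector(bob_post))
```

Even this trivial word-choice fingerprint scores the same author's two posts as more similar than posts by different authors, which is the kind of signal that could link otherwise separate online egos at scale.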

The Internet that could be both anonymous and engaging is going to die. You won't be able to trust the entity at the other end of a discussion forum is human or not. This is a sad end of an era for the Internet, worse than the big-tech conglomeration of the 2010s.

The ability to trust news and videos will be even more difficult. I have a friend who talks about how Tiktok is the "real source of truth" because big media is just controlled by megacorps and in bed with the government. So now a bunch of seemingly authentic people will be able to post random bullshit on Tiktok/Instagram with convincing audio/video evidence that is totally fake. A lie gets around the world before the truth gets its shoes on.

---

So, I wonder which side of this war is more aware and concerned about these impacts?

jeffrallen9 months ago

Ok, time to create an OpenAI drinking game. I'll start:

Every time a CEO is replaced, drink.

Every time an open letter is released, drink.

Every time OpenAI is on top of HN, drink.

Every time dang shows up and begs us to log out, drink.

jacquesm9 months ago

There will be a lot of alcohol poisoning cases based on those four alone.

therealmocker9 months ago

My guess -- Microsoft wasn’t excited about the company structure - the for-profit portion subject to the non-profit mission. Microsoft/Altman structured the deal with OpenAI in a way that cements their access regardless of the non-profit’s wishes. Altman may not have shared those details with the board and they freaked out and fired him. They didn’t disclose to Microsoft ahead of time because they were part of the problem.

endisneigh9 months ago

The pace at which OpenAI is speedrunning its demise is remarkable.

Literally just last week there were articles about OpenAI paying “10 million” dollar salaries to poach top talent.

Oops.

jacquesm9 months ago

I hear Microsoft is hiring... the board should have resigned on Friday, Saturday at the latest, given how they handled this, and it is insane if they don't resign now.

Employees are the most affected stakeholders here, and the board utterly failed in its duty of care towards people who were not properly represented in the boardroom. One thing the employees could do is unionize and then demand a board seat.

robg9 months ago

You’re right in theory, but with the non-profit “structure” the employees are secondary to the aims of the non-profit, specifically in an entity owned wholly by the non-profit. The board acted as a non-profit board, driven by ideals, not any bottom line. It’s crazy that whatever balance the board had was gone once the board shrank and a minority became the majority. The profit folks must have thought D’Angelo was on their side until he flipped.

jacquesm9 months ago

As a board, if you ignore your duty of care towards your employees, you had better have a whopper of a good reason. That's the one downside of being a board member: you are liable for the fallout of your decisions if those turn out to have been misguided. And we're well out of 'oops' territory on this one.

kozikow9 months ago

I read the news, make a picture of what is likely happening in my head, and every few hours new news comes up that makes me go: "Wait, WTF?".

throwaway2200339 months ago

From outside, it looks like a Microsoft coup to take over the company all together.

jackcosgrove9 months ago

Never assume someone is winning a game of 5D chess when someone else could just be losing a game of checkers.

nilkn9 months ago

I highly doubt this was a coordinated plan from the start by Microsoft. I think what we're seeing here is a seasoned team of executives (Microsoft) eating a naive and inexperienced board alive after the latter fumbled.

radres9 months ago

what does that even mean?

croes9 months ago

"Never attribute to malice that which is adequately explained by stupidity"

lazide9 months ago

OpenAI may just be a couple having an angry fight, and M$ is just the neighbor with cash happy to buy all the stuff the angry wife is throwing out for pennies on the dollar.

cambaceres9 months ago

He is saying that what might seem like a sophisticated, well-planned strategy could actually be just the outcome of basic errors or poor decisions made by someone else.

daedrdev9 months ago

In this case, it means that what happened is: “OpenAI board is incompetent”, instead of “Microsoft planned this to take over the company.”

A conspiracy like the one proposed would basically be impossible to coordinate yet keep secret, especially considering the board members might lose their seats and their own market value.

foooorsyth9 months ago

Hanlon's razor, basically.

The most plausible scenario here is that the board is comprised of people lacking in foresight who did something stupid. A lot of people are generating a 5D chess plot orchestrated by Microsoft in their heads.

jacobsimon9 months ago

In other words - it doesn’t have to be someone’s genius plan, it could have just been an unintelligent mistake

silentdanni9 months ago

I think it means don't attribute to intelligence what could be easily explained as stupidity?

fullshark9 months ago

Nah, It's just good to be the entity with billions of dollars to deploy when things are chaotic.

Havoc9 months ago

At this stage the entire board needs to go anyway. This level of instigating and presiding over chaos is not how a governing body should act

rtkwe9 months ago

This whole sequence is such a mess I don't know what to think. Honestly mostly going to wait till we get some tell all posts or leaks about what the reason behind the firing actually was, at least nominally. Maybe it was just a little coup by the board and they're trying to run it back now that the general employee population is at least rumbling about revolting.

theyinwhy9 months ago

Wow, they made it into Guardian live ticker land: https://www.theguardian.com/business/live/2023/nov/20/openai...

andreyk9 months ago

"Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”"

wow, this is a crazy detail

skilled9 months ago
chucke19929 months ago

Imagine if the end result of all this is Microsoft basically owning the whole of OpenAI.

ilaksh9 months ago

Or demonstrating that they already were the de facto owner.

Hamuko9 months ago

Surely OpenAI has assets that Microsoft wouldn't be able to touch.

datadrivenangel9 months ago

Probably just the trademark. I doubt you get $10B from Microsoft and still manage to maintain much independence.

charlieyu19 months ago

Don't think Microsoft has any say over existing hardware, models or customer base. These things are worth billions, and even more to rebuild.

fredgrott9 months ago

Play Stupid Games, Win Stupid Prizes

1. Board decides to can Sam and Greg.

2. Hides the real reasons.

3. Thinks it can keep the OpenAI staff in the dark about it.

4. Crashes a future $90B stock sale to zero.

What have we learned:

1. If you hide the reasons for a decision, it may become the worst decision, in the decision itself or in its implementation, through your own lack of ownership of it.

2. Titles, shares, etc. are not control points. The real control points are the relationships between the company's problem solvers and the firm's existential-threat stakeholders.

The board itself, absent Sam and Greg, never had a good poker hand; they needed to fold some time ago, before this last weekend. Look at it this way: for $13B in cloud credits, MS is getting a team that could add $1T to their future worth....

hackerfactor19 months ago

Me: "ChatGPT write me an ultimatum letter forcing the board to resign and reinstate the CEO, and have it signed by 500 of the employees."

ChatGPT: Done!

Finnucane9 months ago

Clearly this started with the board asking ChatGPT what to do about Sam Altman.

MR4D9 months ago

So Ilya has a job offer from Microsoft?

Wow, this is a soap opera worthy of an Emmy.

bertil9 months ago

Ilya probably has an open-ended standing offer from every big tech company.

MR4D9 months ago

Microsoft is different given the size of their investment. If one guy forces another guy out, and you hire the second guy, you usually don’t make an offer to the first guy who did the pushing.

Simon3219 months ago

> You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

First class board they have.

tolmasky9 months ago

Perhaps the AGI correctly reasoned that the best (or easiest?) initial strike on humanity was to distract them with a never-ending story about OpenAI leadership that goes back and forth every day. Who needs nuclear codes when simply turning the lights on and off sends everyone into a frenzy [1]. It certainly at the very least seems to be a fairly effective attack against HN servers.

1. The Monsters are Due on Maple Street: https://en.wikipedia.org/wiki/The_Monsters_Are_Due_on_Maple_...

layer89 months ago
adverbly9 months ago

And now we see who has the real power here.

Let this be a lesson to both private and non-profit companies. Boards, investors, executives... the structure of your entity doesn't matter if you wake any of the dragons:

1. Employees

2. Customers

3. Government

strikelaserclaw9 months ago

Not really. The lesson to take away from this is $$$ will always win. OpenAI found a golden goose and their employees were looking to partake in a healthy amount of $$$ from this success and this move by the board blocks $$$.

optimalsolver9 months ago

Employees...and the Microsoft Corporation.

agilob9 months ago

This is 1 in 200000 event

davidmurdoch9 months ago

Are you trying to say it's rare or not rare?

nottorp9 months ago

This Altman guy has a good reality distortion field, don't you think?

ParanoidAltoid9 months ago

THE FEAR AND TENSION THAT LED TO SAM ALTMAN’S OUSTER AT OPENAI

https://txtify.it/https://www.nytimes.com/2023/11/18/technol...

NYT article about how AI safety concerns played into this debacle.

The world's leading AI company now has an interim CEO, Emmett Shear, who's basically sympathetic to Eliezer Yudkowsky's views about AI researchers endangering humanity. Meanwhile, Sam Altman is free of the nonprofit's chains and working directly for Microsoft, which is spending $50 billion a year on datacenters.

Note that the people involved have more nuanced views on these issues than you'll see in the NYT article. See Emmett Shear's views best laid out here:

https://twitter.com/thiagovscoelho/status/172650681847663424...

And note Shear has tweeted the Sam firing wasn't safety related. Note these might be weasel words since all players involved know the legal consequences of admitting to any safety concerns publicly.

ratsbane9 months ago

Question for California IP/employment law experts: 1) would you have expected the IP-sharing agreement between MS and OpenAI to contain some provisions against employee poaching, within the constraints allowed by California law? 2) California law has good provisions for workers' rights to leave one company and go to another, but what does it allow company A to do when entering an IP-sharing relationship with company B?

awb9 months ago

IANAL, but I’ve executed contracts with these provisions.

In my understanding, if such a clause exists, Microsoft employees should not solicit OpenAI employees. But, there’s nothing to stop an OpenAI employee from reaching out to Sam and saying “Hey, do you have room for me at Microsoft?” and then answering yes.

Or, Microsoft could open up a couple hundred job reqs based on the team structure Sam used at OpenAI and his old employees could apply that way.

But it wouldn’t be advisable for Sam to send an email directly to those individuals asking them to join him at Microsoft (if such a provision exists).

But maybe he queued everything up prior to joining Microsoft when he was able to solicit them to join a future team.

ratsbane9 months ago

Thanks - good answer. At the very least it seems like something to keep lawyers busy for a long time, unless everyone can ctrl-z back to Thursday. I am thinking, though, that this is a risk of IP-sharing arrangements: if you can't stop the employees from jumping ship, those arrangements are dangerous.

ethanbond9 months ago

It seems odd to have it described as “may resign.” Seems like the worst of all worlds.

That’s like trying to create MAD with the position you “may” launch nukes in retaliation.

gorlilla9 months ago

It's easier to get the support of 500 educated people at a moment's notice by using sane words like 'may'. This is rational given the lack of public information, as well as a board that seems to be having seizures. Using the word 'may' may seem empty-handed, but it ensures a longer list of names attached to the message, allowing the board a better glimpse of how many dominoes are lined up to fall.

The board is being given a sanity-check; I would expect the signers intentionally left themselves a bit of room for escalation/negotiation.

How often do you win arguments by leading off with an immutable ultimatum?

ethanbond9 months ago

Right, but the absolute last thought you want in the board's head is: "they're bluffing."

200 people or even 50 of the right people who are definitely going to resign will be much stronger than 500+ who "may" resign.

Disclaimer that this is a ludicrously difficult situation for all these folks, and my critique here is made from far outside the arena. I am in no way claiming that I would be executing this better in actual reality and I'm extremely fortunate not to be in their shoes.

sebzim45009 months ago

Presumably some will resign and some won't. They aren't going to get 550 people to make a hard commitment to resign, especially when presumably few concrete contracts have been offered by MSFT.

feraloink9 months ago

WSJ said "500 threaten to resign". "Threaten" lol! WSJ says there are 770 employees total. This is all so bizarre.

jrm49 months ago

Isn't the issue underlying all of this, the following:

OpenAI -- and "the market" -- incorrectly feels like OpenAI has some huge insurmountable advantage in doing AI stuff; but at the end of the day pretty much all the models are or will be effectively open-source (or open-source-ish) meaning they don't necessarily have much advantage at all, and therefore all of this is just irrational exuberance playing out?

rednerrus9 months ago

Just remember, the guys who run your company are probably more incompetent than this.

jetsetk9 months ago

*competent

rednerrus9 months ago

I got it right the first time.

roflyear9 months ago

No, almost certainly not lol

crowcroft9 months ago

OpenAI is more or less done at this point, even if a lot of good people stay. Speed bumps will likely turn into car crashes, then cashflow problems, and lawsuits all around.

Probably the best outcome is that a bunch of talented devs go out and seed the beginning of another AI boom across many more companies. Microsoft looks like the primary beneficiary here, but there's no reason new startups can't emerge.

no_wizard9 months ago

Well, now we know. Sam Altman matters to the rank and file, and this was a blunder by OpenAI.

I don't feel sorry for Sam or any other executive, but it does hurt the rank and file more than anyone, and I hope they land on their feet if this continues to go sideways.

Turns out they acted incompetently in this case as a board, and put the company in a bad position, and so far everyone who resigned has landed fine.

mullen9 months ago

> Well, now we know. Sam Altman matters to the rank and file, and this was a blunder by OpenAI.

Not just the rank and file; he really was the face of AI in general. My wife, who is not in the tech field at all, knows who Sam Altman is and has seen interviews of him on YouTube (which I was playing and she found interesting).

I have not heavily followed the Altman dismissal drama, but this strikes me as a board power play gone wrong. Some group wanted control, thought Altman was not reporting to them enough, and took it as an opportunity to dismiss him and take over. However, somewhere in their calculations, they failed to account for Sam being the face of modern AI.

My prediction is that he will be back and everything will go back to what it was before. The board can't be dismissed and neither can Sam Altman. Status quo is the goal at this point.

w10-19 months ago

Hurray for employees seeing the real issue!

Hurray also for the reality check on corporate governance.

- Any Board can do whatever it has the votes for.

- It can dilute anyone's stock, or everyone's.

- It can fire anyone for any reason, and give no reasons.

Boards are largely disciplined not by actual responsibility to stakeholders or shareholders, but by reputational concerns relative to their continuing and future positions - status. In the case of for-profit boards, that does translate directly to upholding shareholder interest, as board members are reliable delegates of a significant investing coalition.

For non-profits, status typically also translates to funding. But when any non-profit has healthy reserves, they are at extreme risk, because the Board is less concerned about its reputation and can become trapped in ideological fashion. That's particularly true for so-called independent board members brought in for their perspectives, and when the potential value of the nonprofit is, well, huge.

This potential for escape from status duty is stronger in our tribalized world, where Board members who welch on larger social concerns or even their own patrons can nonetheless retreat to their (often wealthy) sub-tribe with their dignity intact.

It's ironic that we have so many examples of leadership breakdown as AI comes to the fore. Checks and balances designed to integrate perspectives have fallen prey to game-theoretic strategies in politics and business.

Wouldn't it be nice if we could just build an AI to do the work of boards and Congress, integrating various concerns in a roughly fair and mostly predictable fashion, so we could stop wasting time on endless leadership contests and their social costs?

h1fra9 months ago

It would be crazy to see the fall of the most hyped company of the last 10 years.

If all those employees leave and Microsoft reduces its credits, it's game over.

autaut9 months ago

Years from now we will look back on today as the watershed moment when AI went from a technology capable of empowering humanity to another chain forged by big investors to enslave us for the profit of very few people.

The investors (Microsoft and the Saudis) stepped in and gave a clear message: this technology is to be developed and used only in ways that will be profitable for them.

frob9 months ago

For the past few days, whenever I see the word "OpenAI," the theme to "Curb Your Enthusiasm" starts playing in my head.

jrflowers9 months ago

I love this letter posted in Wired along with the claim that it has 600 signatories without any links or screenshots. I also love that not a single OpenAI employee was interviewed for this article.

None of this is important because if we’ve learned anything over the past couple of days it’s that media outlets are taking painstaking care to accurately report on this company.

gist9 months ago

To all who say 'handled so poorly': nobody knows the exact reason OpenAI fired Sam. But go ahead and jump to the conclusion that whatever it was didn't warrant being fired, and that surely the board did the wrong thing. Or maybe they should have released the exact reason and then asked Hacker News what they thought should happen.

dschuetz9 months ago

Who needs to buy out an $80 billion AI startup when the talent is already jumping ship in their direction? OpenAI is dead.

dreamcompiler9 months ago

Notice that Andrej Karpathy didn't sign.

realce9 months ago

Is nobody actually... committed to safety here? Was the OpenAI charter a gimmick and everyone but me was in on the joke?

notahacker9 months ago

That seems a reasonable takeaway. Plenty of grounds for criticising the board's handling of this, but the tone of the letter is pretty openly "we're going to go and work directly for Microsoft unless you agree to return the company focus to working indirectly for Microsoft"...

dmix9 months ago

Assuming this is all over safety vs non-safety is a large assumption. I'm wary of convenient narratives.

At most, all we have is some rumours that some board members were unhappy with the pace of commercialization of ChatGPT. But even if they hadn't made the ChatGPT store or done a big-co-friendly DevDay powerpoint, it's not like AI suddenly becomes 'safer' or AGI more controlled.

At best that's just an internal culture battle over product development and a clash of personalities. A lot of handwringing with little specifics.

strikelaserclaw9 months ago

I think most of these employees wanted the fat $$$ that would happen by keeping Sam Altman on board since Sam Altman is an excellent deal maker and visionary in a commercial sense. I have no doubt that if AGI happened, we wouldn't be able to assure the safety of anyone since humans are so easily led by short term greed.

intellectronica9 months ago

Wait, it's signed by Ilya Sutskever?!

croes9 months ago

>The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company

Unless their mission was making MS the biggest AI company, working for MS will make the problem worse and kill their mission completely.

Or they are pretty naive.

MrScruff9 months ago

What does this mean?

> You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

Is the board taking a doomer perspective and seeking to prevent the company developing unsafe AI? But Emmett Shear said it wasn’t about safety? What on earth is going on?

LudwigNagasena9 months ago

The whole drama feels like a Shepard tone: you anticipate the climax, but it just keeps escalating.

SilverBirch9 months ago

It's not clear to me that bringing Sam back is even an option anymore, given the move to Microsoft. Does Microsoft really take its boot off OpenAI's neck and hand Sam back? I guess maybe, but it still raises all sorts of questions about the corporate structure.

bertil9 months ago

No small employer wants a disgruntled employee who was forced out of a better deal. Satya Nadella has proven reasonable throughout the weekend. I would expect he asked for a seat on the board if there's a reshuffle, or at least someone he trusts there.

gsuuon9 months ago

The firing was definitely handled poorly and the communications around it were a failure, but it seems like the organizational structure was doing what it was designed to do.

Is this the end of non-profit/profit-capped AI development? Would anyone else attempt this model again?

RadixDLT9 months ago

OpenAI's co-founder Ilya Sutskever and more than 500 other employees have threatened to quit the embattled company after its board dramatically fired CEO Sam Altman. In an open letter to the company's board, which voted to oust Altman on Friday, the group said it is obvious 'that you are incapable of overseeing OpenAI'. Sutskever is a member of the board and backed the decision to fire Altman, before tweeting his 'regret' on Monday and adding his name to the letter. Employees who signed the letter said that if the board does not step down, they 'may choose to resign' en masse and join 'the newly announced Microsoft subsidiary run by Sam Altman'.

vaxman9 months ago

Altman can’t really go back to OpenAI ever because it would create an appearance of impropriety on the part of MS (that perhaps MS had intentionally interfered in OpenAI, rather than being a victim of it) and therefore expose MS to liability from the other investors in OpenAI.

Likewise, these workers that threatened to quit OpenAI out of loyalty to Altman now need to follow thru sooner rather than later, so their actions are clearly viewed in the context of Altman’s firing.

In the meantime, how can the public resume work on API integrations without knowing when the MS versions will come online, or whether they will be binary-interoperable with the OpenAI servers that could seemingly go down at any moment?

grumple9 months ago

It is disappointing that the outcome of this is that Altman and co are basically going to steal a nonprofit's IP and use it at a competitor. They took advantage of the goodwill of the public and favorable taxation in order to develop the technology; now that it's ready, they want to privatize the profit. It looks like this was the plan all along, and it's very strange to me that a nonprofit is allowed to have a for-profit subsidiary.

I would hope the California AG is all over this whole situation. There's a lot of fishy stuff going on already, and the idea that nonprofit IP / trade secrets are going to be stolen and privatized by Microsoft seems pretty messed up.

LuvThisBoard9 months ago

Based on what has come out so far, seems to me:

The board wanted to keep the company true to its mission - non profit, ai safety, etc. Nadella/MSFT left OpenAI alone as they worked out a solution, so it looks like even Nadella/MSFT understood that.

The board could explain its position and move on. Let whoever of the 600 actually wants to leave, leave. Especially the employees who want a company that will make them lots of money should leave and find a company with that objective. OpenAI can rebuild its teams; it might take a bit of time, but since they are a non-profit that is fine. Most CS grads across the USA would be happy to join OpenAI and work with Ilya and team.

ekojs9 months ago
endisneigh9 months ago

Even if the board resigns the damage has been done. They should try to secure good offers at Microsoft.

The heightened stakes only decrease the likelihood that the OpenAI profit sharing will be worth anything, which heightens the stakes further…

baradhiren079 months ago

The great Closing of “Open”AI.

whatwhaaaaat9 months ago

I don’t trust any of this. Every one of these wired articles has been totally wrong. Altman clearly has major media connections and also seems to have no problem telling total lies.

andrewfromx9 months ago

so what happens if @eshear calls this probably-not-a-bluff, but lets everyone walk? The people that remain get new options and 500 other people still definitely want to work at OAI?

ignoramous9 months ago

If it comes to that, I reckon Emmett will have his former boss Andy Jassy merge whatever's left of OpenAI into AWS. Unlikely though, as reconciliation seems very much a possibility.

ergocoder9 months ago

It is likely gonna be that way.

Eshear is the new CEO. This implosion is not his fault. His reputation is not destroyed.

He can rebuild the non-profit part, which is hard to determine success or failure anyway. Then, he will leave in a few years.

He doesn't seem to have much to lose by just focusing on rebuilding OpenAI.

wenyuanyu9 months ago

I guess employees are compensated with stock from the for-profit entity. And at the valuation before the saga, stock could be 90%, 95% or even more of the total value of their packages. How many people are really willing to wipe out 90% of their compensation just to stick to the mission? On the other hand, M$ offers to match. The day employees are compensated with stock of the for-profit arm, there is no way to return to the nonprofit and its charter any more.

chs209 months ago

Seems like Microsoft is getting the rest of OpenAI for free now.

NKosmatos9 months ago

This is what happens when you're a key person and a very good engineer, and the board/company fires you anyway :-)

When are we going to realize that it's people making bad decisions and not the "company"? It's not OpenAI, Google, Apple or whoever; it's real people, with names and positions of power, who make such shitty decisions. We should blame them and not something as vague as the "company".

zitterbewegung9 months ago

I guess Microsoft now has a new division. (https://www.microsoft.com/investor/reports/ar13/financial-re...)

Supposedly, Microsoft's divisions are rumored to compete with each other to the point of actually having a negative impact.

baron8169 months ago

I can foresee three possible outcomes here:

1. The board finally relents, Sam goes back, and the company keeps going forward, mostly unchanged (but with a new board).

2. All those employees quit, most of whom go to MSFT. But they don’t keep their tech and have to start all their projects from scratch. MSFT is eventually able to buy OpenAI for pennies on the dollar.

3. Same as 2, but OpenAI basically just shuts down, or maybe someone like AMZN buys it.

redbell9 months ago

Here we are..

The scene appears to be completely blurry by now! My head is spinning, and the fan is in 7th gear. I believe only time will apply some sort of sharpness effect to make you realize what's really going on. I feel like I'm watching the Italian job the American way; everything and everyone is suspicious to me at this point! Is it possible that MSFT played some tricks behind the scenes?

ayakang314159 months ago

If OpenAI effectively disintegrates, Microsoft seems to be the beneficiary of this chaos, as Microsoft is essentially acquiring OpenAI at almost zero cost. You have IP rights to OpenAI's work, and you will have almost all the brains from OpenAI (AFAIK, MSFT has access to OpenAI's work, but it does not seem to matter). And there is no regulatory scrutiny like with the Activision acquisition.

danielovichdk9 months ago

Microsoft is laughing all the way to the bank with the moves they have made today.

One could speculate if Microsoft initiated this behind the scenes. Would love it if it came out that they had done some crazy espionage and lobbied the board. Tinfoil hat and all, but truth is crazier than you think.

I remember Bill Gates once said that whoever wins the race for a computerised digital personal assistant, wins it all.

vaxman9 months ago

OpenAI was valued around $91 billion so if only 700 employees had options, they could have been worth a lot. While they are going to all have great jobs and continue on with their life’s work (until they’re replaced by their creations lol), they have a really good reason now not to ever speak the names of those board members that wiped out their long term payouts.

nojvek9 months ago

Did Mira Murati have a say in whether she wanted to become CEO?

Why is she siding with SamA and GregB even though she was in the meeting when he was fired?

Also Ilya what the flying fuck? Wasn’t he the one who fired them?

Either you say SamA was against safe AGI and you hold that stick or you say I wasn’t part of it.

So much stupidity. When an AGI arrives, it will surely shake its head at the level of incompetence here.

moron4hire9 months ago

This is starting to look like an elaborate, premeditated ruse to kill any vestige of the non-profit face of OpenAI once and for all.

jgilias9 months ago

There’s one angle of the whole thing that I haven’t yet seen discussed on HN. I wonder if Sam’s sister’s accusations towards him some time ago could have played any role in this.

But then, I would expect MS to have done their due diligence.

So, basically, I guess I’m just interested to know what were the reasons why the board decided to oust their CEO out of the blue on a Friday evening.

carapace9 months ago

I first heard about his sister's allegations on the grapevine just a few days before the news of the firing broke and I assumed it was due to that finally reaching critical mass.

I was surprised to find that that wasn't apparently the case. (Although the reason for Sam Altman's dismissal is still obscure.) It's kind of shocking. Whether or not the allegations are true, they haven't made Altman radioactive, and that's insane.

The fact that we're not talking about it on HN is also pretty wild. The few times it has been mentioned folks have been quick to dismiss the idea that he might have been fired for having done some really creepy things, which is itself pretty creepy.

jgilias9 months ago

Yeah, it’s super weird to me too. I even got a downvote for this question. And, I can sort of understand that, but then, I haven’t seen anything that would make her accusations obviously groundless. I feel like I must have missed it somehow. Because it’s hard to stomach that someone’s sister would come out and accuse her brother of heinous, long-lasting abuse, and the collective reaction of the tech industry is just :shrug:…

What?

sensanaty9 months ago

If the board had any balls they'd call their bluff. I'd love to see it honestly, a mass resignation like that.

josh_carterPDX9 months ago

Lots of thoughts and debates happening here, which is great to see.

However, at the end of the day, this is a great example of how people screw up awesome companies.

This is why most startups fail. And while I'm not suggesting OpenAI is on a path to failure, you can have the right product, the right timing, and the right funding, and still have people mess it all up.

ActVen9 months ago

Adam has to be behind this. It is very reminiscent of the situation with Quora and Charlie. https://x.com/gergelyorosz/status/1725741349574480047?s=46&t...

andreyk9 months ago

"Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”"

insane

two_in_one9 months ago

Don't know what's happening, but MS looks to be a winner in the long run, and probably most others too. Whoever stays gets a promotion, whoever leaves gets a fat check. The losers are the customers: no GPT-5 or any significant improvements any time soon. An MS-made GPT will be much more closed and pricey. Oh, yes, competitors are happy too.

alexdunmow9 months ago

Competitors including Quora: https://quorablog.quora.com/Poe-1

brettkromkamp9 months ago

What a mess this has become. Regardless of the outcome, this situation reflects badly (to say the least) on OpenAI.

cdr69349 months ago

The speed at which this is happening could be a masterful execution of getting out of under the non-profit status.

_Parfait_9 months ago

The corporate structure is so convoluted, OpenAI is only part non profit.

dvfjsdhgfv9 months ago

I feel pity for the 70 people out of 700 who haven't signed the letter asking the board to step down. Imagine working peacefully, then finding yourself in the middle of a power struggle without even understanding what the real reason was, but realizing most people have already made their choice, so...

iamacyborg9 months ago

Quick question for some of the folks here who may have a handle on how VCs see this: is Microsoft effectively hiring all these staff members out from OpenAI (a company they've invested heavily in) going to affect their ability to invest in other startups in the future?

crazygringo9 months ago

Not at all. This is an extremely unusual, one-of-a-kind situation and I think everybody realizes that.

And there's no evidence Microsoft was an instigator of the drama.

greatNespresso9 months ago

Now it says more than 700. Waiting for Wired to turn this into a New Year's Eve-like countdown.

submeta9 months ago

I just downloaded all of my data / chats. Who knows if it'll still be up and running in the coming days.

Paul-Craft9 months ago

That's not a terrible idea on principle.

m_ke9 months ago

I wonder how the FTC and Lina Khan will view all of this if most of the team moves over to Microsoft

smegger0019 months ago

It would be hard for the FTC to do anything about it, as there is no acquisition of companies or IP going on. All Microsoft is doing is making job offers to recently unemployed experts in their field after their business partner set itself on fire, starting at the executive/board level.

standapart9 months ago

What a wonderful way to cut headcount/expense and lock-in profitable margins on healthy annual revenue.

Can only work when you have the advantage of being the dominant product in the marketplace -- but I gotta hand it to the board, I couldn't have done it better myself.

dougmwne9 months ago

And where will their compute come from to continue to run their expensive models and serve their customers? From the company that just stole all their employees?

r4indeer9 months ago

The tweet was updated five minutes later to correct 550 to 505.

https://twitter.com/karaswisher/status/1726599700961521762?s...

minimaxir9 months ago

The tweet is now obsolete as OpenAI employees are confirming the number is much higher now, at least 650: https://twitter.com/lilianweng/status/1726634736943280270

jdlyga9 months ago

What a coup for Microsoft. Regardless of what happens, Microsoft has got to work on their product approach. Even though it uses GPT-4, Bing Chat / Microsoft Copilot is atrocious. It's like taking Wagyu beef and putting Velveeta cheese on it.

denton-scratch9 months ago

For me, the weirdness here is that Ilya, supposedly the brains behind GPT, is a signatory.

The sacking would never have happened without his vote; and he must have thought about it before he acted.

I hope he comes up with a proper explanation of his actions soon (not just a tweet).

cowboyscott9 months ago

I suspect they’ll quit, and the “top” N percent will be picked up by Microsoft with healthy comp packages. Microsoft will have effectively purchased the company for $10 billion. The net upside of this coup business may just flow to Microsoft shareholders.

somic9 months ago

I don't see any mentions of Google but I personally think it's Google that will be the main beneficiary of chaos at OpenAI. After all, weren't they the main competitors? Maybe not in product or business yet but on IP and hiring fronts?

darklycan519 months ago

I knew something like this would happen. MS was told they would originally only be given stuff until their investment was paid off, but MS couldn't care less about their investment; they want to own OpenAI, so it makes sense they would coup the company.

neverrroot9 months ago

Didn’t that train already depart with the announcements from MS and Sam? Is there a way back?

ChildOfChaos9 months ago

What a mess.

I genuinely feel like this is going to set back AI progress by a decent amount, while everyone is racing to catch OpenAI I was still expecting them to keep a reasonable lead. If OpenAI falls apart, this could delay progress by a couple of years.

seydor9 months ago

What do you mean "nearly 500"? According to Wikipedia, OpenAI has 500 employees.

google2341239 months ago

505 of 700 (some sources say 550)

PUSH_AX9 months ago

The threat of moving to MS is interesting. MS could exploit this massively: all the negotiating power is on MS's side, and their position actually gets stronger as people move across.

Will they do the good-guy thing and match everyone's packages?

burcs9 months ago

I'm pretty sure the revolt is now 95% of employees, can it grow any further?

biglyburrito9 months ago

Link to latest numbers that say 95%? Last I saw was ~91% (700-of-770):

https://www.washingtonpost.com/technology/2023/11/20/microso...

IM671189 months ago

Here’s a tweet from Evan Morikawa, who’s been reporting numbers throughout the day.

https://twitter.com/E0M/status/1726743918023496140

ur-whale9 months ago

The folks that are the real losers in this are OpenAI employees who were given equity-based comp packages in the last few years and just saw the value of said comp potentially slashed by a factor of 10.

jessenaser9 months ago

The sad part is, after removing Sam and Greg from the board, there are only four people left.

So no matter if Ilya wants to go back to before this happened, the other three members can sabotage and stall, and outvote him.

belter9 months ago

Nobody seems to be considering the possibility that ChatGPT will go offline soon. Because it's known to be losing money per query, and if the evil empire decides to stop those Azure credits...

amai9 months ago

"Remarkably, the letter's signees include Ilya Sutskever, the company's chief scientist and a member of its board, who has been blamed for coordinating the boardroom coup against Altman in the first place."

WAT ?

darklycan519 months ago

It always seemed like Microsoft was behind this. The biggest tell was how comfortable MS was having their entire AI future depend on a company they don't really have full rights to.

BonoboIO9 months ago

Unbelievable incompetence of the board. Like a kindergarten.

If Microsoft plays its cards right, Satya Nadella will look like a genius and Microsoft will get ChatGPT-like functionality for cheap.

jeffwask9 months ago

This was not how I saw collective bargaining coming to Silicon Valley.

martythemaniak9 months ago

This is the greatest clown show in the history of the tech industry.

silvermineai9 months ago

ICYMI: Timeline of all the madness https://news.ycombinator.com/item?id=38351214

vinberdon9 months ago

Boards suck. Especially if they are VCs or placed there by VCs.

shortsunblack9 months ago

It is time for regulators to step in and propose structural remedies. VC culture has shown itself not able to run these companies for betterment of mankind, anyway.

majikaja9 months ago

Drama queens

karmasimida9 months ago

Let's see how Ilya plays along after this. Any similar incidents historically, like a failed coup where the participant got to stay?

optimalsolver9 months ago

There are thousands of extremely talented ML researchers and software devs who would jump at the chance to work at OpenAI.

Everyone is replaceable.

siva79 months ago

> Everyone is replaceable.

Nope. That only holds true for mediocre employees, not above. The world class in their field aren't replaceable; otherwise there would be no OpenAI.

ricardo819 months ago

Might just be me as a programmer out in the sticks, but SV programmers seem to flex a lot compared to your average subordinates.

rvz9 months ago

Well, that escalated very quickly, and this is perhaps the most dysfunctional startup I have ever seen.

All due to one word: Greed.

benjaminwootton9 months ago

I don't know about OpenAI, but I've been in a few similar business situations where everyone is in a good position and greed leads to an almighty blowup. It's really remarkable to see.

gniv9 months ago

> All due to one word: Greed.

I would say it's due to unconventional not-battle-tested governance.

marricks9 months ago

What? Greed is the backbone of our startup landscape. As soon as you get VC backing all anyone cares about is a big payday. This is interesting because there is something going on beyond the typical pure greed shitshow.

Perhaps it was just that original intention for OpenAI to be a nonprofit, but at some point somewhere it wasn't purely about money, and that's what makes it interesting. Also more tragic, because now it looks like it's heading straight toward being a for-profit company one way or another.

toss19 months ago

And the ironic part of the greed is that there seem to be far more than enough (at least potential) earnings to spread around and make everyone there wealthy enough to never have to think about it again.

Yet they start this kind of nonsense.

Not exactly focusing on building a great system or product.

qwebfdzsh9 months ago

I assumed that due to how the whole company/non-profit was structured, employees didn't really get any actual equity?

toss19 months ago

Um, equity isn't the only way to distribute profits...

edit: 'tho TBF, the other methods do require ethical management behavior down the road, which was just shown to be lacking in the last few days.

throwaway4good9 months ago

Microsoft is nothing without its people?

throwaway4good9 months ago

Maybe the employees of OpenAI should stop for a second and think about their privileges as rock stars in a super-hyped startup before they bail for a job in a corporation where everything and everyone is set up to be replaceable.

morph1239 months ago

These boys will not be your rank and file employees. They will operate exactly as they have done in OpenAI. Only difference will be that they no longer have this weird "non-profit, but actually some profit" thing going on.

thrwwy1428579 months ago

How do the bylaws work?

1. Voting out chairman with chairman abstaining needs only 3/5.

2. Voting out CEO then requires 3/4?

Did Ilya have to vote?

nicetryguy9 months ago

How are OpenAI expected to align a hyper-intelligent entity if they can't even align themselves....

AtNightWeCode9 months ago

The irony. You can ask chatgpt4 if it was the right decision to fire the guy and it kinda confirms it.

littlestymaar9 months ago

They can leave for sure, but they likely have some kind of non-compete clause in their contract, right?

Uptrenda9 months ago

Wow, this new season has even more drama than the one about blockchain tech! Just when you think the writers are running out of ideas, they blow you away with more twists. I will be renewing my Netflix subscription, that's for sure! I can't wait to see what this Sam character does next. Perhaps it will involve robots or something? The sky's the limit at this point.

phreeza9 months ago

The irony of the first extremely successful collective action in silicon valley being taken in order to save the job of a soon-to-be billionaire....

Jokes aside though I do wonder if this will awaken some degree of "class consciousness" among tech employees more generally.

arrosenberg9 months ago

Paging Lina Khan - probably best not let Microsoft do a backdoor acquisition of the leader in LLMs.

boeingUH609 months ago

Any journalist covering the OpenAI story must be swearing and cursing at the board at this moment..

soderfoo9 months ago

As someone watching this all from Europe, realizing the work day has not even started for the US West Coast yet leaves me speechless.

This situation's drama is overwhelming, and it seems like it's making HN's servers melt down.

jpollock9 months ago

I wonder what their employment contracts state? Are they allowed to work for vendors or clients?

Dave3of59 months ago

Easiest layoff round ever in the US.

FpUser9 months ago

So Ilya Sutskever first defends the board's decision and now it is 180 flip. Interesting ...

JumpCrisscross9 months ago

He’s on the board!

andy999 months ago

I'm extremely confused by this. It seems absurd that he could sign a letter seemingly demanding his own resignation, but also not resign? There must be some missing information.

bartread9 months ago

> There must be some missing information.

Or possibly some misinformation. It does seem very strange, and more than a little confusing.

I have to keep reminding myself that information ultimately sourced from Twitter/X threads can't necessarily be taken at face value. Whatever the situation, I'm sure it will become clearer over the next few days.

saos9 months ago

I like this a lot. Shows how valuable employees are. It’s almost feels like a union. Love it.

smarri9 months ago

This whole debacle is a complete embarrassment and shredding the organisations credibility.

rednerrus9 months ago

If you're ever tempted to offer your team capped PPUs, let this be a lesson to you.

seatac769 months ago

So what was going to happen 5 years from now is happening now, i.e. MS acquiring OpenAI.

dumbfounder9 months ago

Did Microsoft not have representation on the board of a company they put $13b in?

wnevets9 months ago

It doesn't matter if the firing was justified or not, the board fucked up.

JumpinJack_Cash9 months ago

What a bunch of immature people.

If anything this proves that everybody is replaceable and fireable; they should be happy, because usually that treatment is reserved only for workers.

Whatever made OpenAI successful will still be there within the company. Next man up philosophy has built so many amazing organizations and ruined none.

unixhero9 months ago

How long will the current chatgpt v4 stay available? Is it all about to end?

smallhands9 months ago

Let the OpenAI staff go; why not have the board replace them with ever-willing AIs?

cdelsolar9 months ago

Enough. 15 of the 30 posts on the home page are about OpenAI in some way.

mproud9 months ago

Don't non-compete clauses apply here, or no, because… California?

alexalx6669 months ago

That sounds like a perfectly executed plan to get MS all the good stuff.

EffingMask9 months ago

This affair has Musk's fingerprints all over it but he lost, again.

rednerrus9 months ago

How are Altman and the OpenAI staff not more invested in OpenAI shares?

matthewfelgate9 months ago

I've never seen a staff walkout / threat to walk out ever succeed.

Am I wrong?

lupire9 months ago

At other companies, a petition by 500 out of 100K employees is big news.

marricks9 months ago

I mean, no matter what people say about what happened, or what actually did, one can paint this picture:

( - OpenAI exists, allegedly to be open)

- Microsoft embraces OpenAI

- Microsoft extends OpenAI

- OpenAI gets extinguished, and Microsoft ends up controlling it.

The first three points are solid and, intent or not, the end result is the same.

m3kw99 months ago

Is it too late? Satya already announced Sam and Brockman are joining.

synergy209 months ago

Ilya single-handedly ruined 700 OpenAI employees' fortunes overnight. This is not going to end well; my prediction is that OpenAI is done, and in 1-2 years nobody will even care about its existence.

Microsoft just won the jackpot, time to get some stocks there.

demondemidi9 months ago

If I were one of the 700 people that worked at the vanguard of what could potentially be the most profitable, culture-changing technology of the past 50 years, I wouldn't want to miss out on becoming a billionaire by working for a non-profit. My conspiracy theory for the day: an unspoken profit motive. And it seems to be playing out; Sam just went to MS, and if those 500 also go there, then I think that's the motivation.

slowhadoken9 months ago

Altman and staff could start an open source LLM project.

ChoGGi9 months ago

Oh my goodness, this just gets more entertaining everyday.

Money talks...

tehjoker9 months ago

Not a typical labor dispute. The billionaires at the other company guaranteed them jobs. More billionaires moving people around like chess pieces.

taubek9 months ago

How many startups will now fail if OpenAI shuts down?

ludjer9 months ago

When will the Netflix special come out on this ?

k2xl9 months ago

Chaos is a ladder

kumarvvr9 months ago

What ! Ilya is one of them?

Isn't he the one who voted to oust Sam?

Wow !

CrzyLngPwd9 months ago

It's like a Facebook drama, haha.

steveBK1239 months ago

Ilya signing the letter is chutzpah.

wearigo9 months ago

Honestly, if Altman stays gone and they burn the motherfucker down it might be a good lesson for Silicon Valley on the wisdom of throwing out founders.

I don't expect it to happen, but a boy can dream.

They would be studying that one in business schools for the next century.

aerodog9 months ago

So...Ilya signed the letter too?

softwaredoug9 months ago

I wonder what's up with the other 150 and what they must be thinking. Maybe they were literally just hired :)

bertil9 months ago

Some idealists, a few new people, some people on holiday or who don't check their email regularly.

sithlord9 months ago

Didn't they see the email that was posted over the weekend?

prakhar8979 months ago

@dang please update it to 505.

llamaInSouth9 months ago

When is the movie coming out?

febed9 months ago

Season 2

Paul-Craft9 months ago

Better hope this isn't a Netflix show.

accrual9 months ago

It would certainly make for a good series in a couple years. Gives me modern "Halt and Catch Fire" (2014-2017) vibes.

wahnfrieden9 months ago

Why is it so rare for tech workers to organize like this?

It takes a cult-like team, execs flipping, and a nightmare scenario and tremendous leverage opportunity; otherwise worker organizing is treated like nasty commie activity. I wonder if this will teach more people a lesson on the power of organizing.

whodidntante9 months ago

eating your own dog food with BoardGPT, what could go wrong ?

submeta9 months ago

Time to buy MS stocks.

andyjohnson09 months ago

Who do these upstarts think they are? The board needs to immediately sack them all to regain its authority, and that of capitalism itself. /s

Really, though, it's getting beyond hilarious. And I reckon Nadella is chuckling quietly to himself as he makes another nineteen-dimensional chess move.

zombiwoof9 months ago

we all remember "monopoly" is in MSFT DNA

yalogin9 months ago

What a shitshow! What is going on in this company? I am sure Sam did something wrong, but the board took advantage of it and went overboard then? We don’t know anything that happened and we are all somehow participating in this drama? At this point why don’t they all come out and tweet their versions of it?

quotemstr9 months ago

We should strive to be leaders who inspire such loyalty and devotion

ahmedfromtunis9 months ago

The question here is what choice the board has now. Even if they comply, would Altman accept, or be able to go back, after signing with Microsoft? Would Nadella allow him to go back after securing him on MS's campus?

pcwelder9 months ago

Employees are for-profit entities, huge conflict of interest.

Eumenes9 months ago

inb4: this is why we need unions!

ParanoidAltoid9 months ago

https://twitter.com/thiagovscoelho/status/172650681847663424...

Here's tweet transcribing OpenAI's interim CEO Emmett Shear's views on AI safety, or see youtube video for original source. Some excerpts:

Preamble on his general pro-tech stance:

"I have a very specific concern about AI. Generally, I’m very pro-technology and I really believe in the idea that the upsides usually outweigh the downsides. Every technology can be misused, but you should usually wait. Eventually, as we understand it better, you want to put in regulations. But regulating early is usually a mistake. When you do regulation, you want to be making regulations that are about reducing risk and authorizing more innovation, because innovation is usually good for us."

On why AI would be dangerous to humanity:

"If you build something that is a lot smarter than us—not like somewhat smarter, but much smarter than we are as we are than dogs, for example, like a big jump—that thing is intrinsically pretty dangerous. If it gets set on a goal that isn’t aligned with ours, the first instrumental step to achieving that goal is to take control. If this is easy for it because it’s really just that smart, step one would be to just kind of take over the planet. Then step two, solve my goal."

On his path to safe AI:

"Ultimately, to solve the problem of AI alignment, my biggest point of divergence with Eliezer Yudkowsky, who is a mathematician, philosopher, and decision theorist, comes from my background as an engineer. Everything I’ve learned about engineering tells me that the only way to ensure something works on the first try is to build lots of prototypes and models at a smaller scale and practice repeatedly. If there is a world where we build an AI that’s smarter than humans and we survive, it will be because we built smaller AIs and had as many smart people as possible working on the problem seriously."

On why skeptics need to stop side-stepping the debate:

"Here I am, a techno-optimist, saying that the AI issue might actually be a problem. If you’re rejecting AI concerns because we sound like a bunch of crazies, just notice that some of us worried about this are on the techno-optimist team. It’s not obvious why AI is a true problem. It takes a good deal of engagement with the material to see why, because at first, it doesn’t seem like that big of a deal. But the more you dig in, the more you realize the potential issues.

"I encourage people to engage with the technical merits of the argument. If you want to debate, like proposing a way to align AI or arguing that self-improvement won’t work, that’s great. Let’s have that argument. But it needs to be a real argument, not just a repetition of past failures."

king_magic9 months ago

What an astonishing embarrassment.

alwaysrunning9 months ago

<more popcorn> nom nom nom

alex_suzuki9 months ago

rats, sinking ship, …

jeffwask9 months ago

Huh, so collective bargaining and unionization is supported in tech under some circumstances...

georgehill9 months ago

> Remarkably, the letter’s signees include Ilya Sutskever, the company’s CTO who has been blamed for coordinating the boardroom coup against Altman in the first place.

What in the world is happening at OpenAI?

basch9 months ago

If it weren’t so unbelievable, I’d almost accuse them of orchestrating all this to sell to Microsoft without the regulatory scrutiny.

It’s like they distressed the company to make an acquisition one of mercy instead of aggression, knowing they already had their buyer lined up.

sigmoid109 months ago

Yeah, I also started out believing this must be a matter of principle between Ilya and Sam. But no, this smells more and more like a corporate clusterfuck, and Ilya was just an easy-to-manipulate puppet. This alleged statement from the board that destroying the company is an acceptable outcome is completely insane, but somewhat plausible when combined with the fact that half the board has serious conflicts of interest going on.

JumpCrisscross9 months ago

> sell to Microsoft without the regulatory scrutiny

I keep hearing this, principally from Silicon Valley. It’s based on nothing. Of course this will receive both Congressional and regulatory scrutiny. (Microsoft is also likely to be sued by OpenAI’s corporate entity, on behalf of its outside investors, as are Altman and anyone who jumps ship.)

mirzap9 months ago

From what I heard, non-compete clauses are unenforceable in California, so what exactly would they sue for?

I'm pretty sure Satya consulted with an army of lawyers over the weekend regarding the potential issue.

basch9 months ago

Microsoft can buy the company in parts, as it “fails” in a long drawn out process. By the end, whatever they are buying will have little value, as it will already be outdated.

smegger0019 months ago

Sue Sam for what? They fired him and he got another job with another company. That's on them for firing him in a state with laws prohibiting non-compete clauses.

trinsic29 months ago

Yeah, just like the suit Microsoft is in with windows 11 anticompetitive practices, right?

jordanpg9 months ago

I haven't seen brand suicide like this since EM dumped Twitter for X!!! (4 months ago)

benterix9 months ago

It's nothing like that. What common people use is ChatGPT; many of them have never heard of OpenAI, let alone who sits on the board, etc. And their core offering is more popular than ever. With Twitter, Musk started to damage the product itself, step by step. As far as I can tell, ChatGPT continues to work just fine, as opposed to X.

dougmwne9 months ago

(Rips off mask) Wow, it was the Quora CEO all along!

So this was never about safety or any such bullshit. It's because the GPTs store was in direct competition with Poe!?

artursapek9 months ago

Imagine letting the CEO of a simple question and answer site that blurs all of its content onto your board

achates9 months ago

Alongside luminaries like "the wife of the guy who played Robin in the Batman movie".

artursapek9 months ago

lol is that a real thing?

nemo44x9 months ago

And that he might be the least incompetent of them all.

brianjking9 months ago

Absolutely mindboggling that Adam is on the board.

Poe is in direct competition with the GPTs and the "revenue sharing" plan that Sam announced on Dev Day.

The Poe platform has its "Creators" build their own bots and monetize them, including with OpenAI and other models.

dougmwne9 months ago

Even more interesting considering that Elon left OpenAI’s board when Tesla started developing Autopilot as it was seen as a conflict of interest.

Applejinx9 months ago

It's extraordinary to watch, I'll say that much.

I still think 'Altman's Basilisk' is a thing: I think somewhere in this mess there's actions taken to wrest control of an AI from somebody, probably Altman.

Altman's Basilisk also represents the idea that if a charismatic and flawed person (and everything I've seen, including the adulation, suggests Altman is that type of person from that type of background) trains an AI in their image, they can induce their own characteristics in the AI. Therefore, if you're a paranoid with a persecution complex and a zero-sum perspective on things, you can through training induce an AI to also have those characteristics, which may well persist as the AI 'takes off' and reaches superhuman intelligence.

This is not unlike humans (perhaps including Altman) experiencing and perpetuating trauma as children, and then growing to adulthood and gaining greatly expanded intelligence that is heavily, even overwhelmingly, conditioned by those formative axioms that were unquestioned in childhood.

capableweb9 months ago

> What in the world is happening at OpenAI?

Well, we don't know.

What we do know is that "coordinating the boardroom coup against Altman" is rumor and speculation about a thing we don't know anything about.

zeven79 months ago

What options are left other than Adam D'Angelo orchestrated the downfall of a competitor to Poe?

benjaminwootton9 months ago

There must be something going on which is not in the public domain.

What an utterly bizarre turn of events, and to have it all played out in public.

A $90 billion valuation at stake too!

mrits9 months ago

I wonder how many people are on a path for a $250K/year salary instead of $30M in the bank now.

postingawayonhn9 months ago

Microsoft can easily afford to offer them $30M of options each if they continue to ship such important products. That's only $15B for 500 staff.

Microsoft has a $2.75T market value and over $140B of cash.

JumpCrisscross9 months ago

> Microsoft can easily afford to offer them $30M of options each

But it doesn’t have to. And the politics suggest it very likely won’t.

mrits9 months ago

Microsoft isn't going to give the employees in HR equivalent offers. There are a lot of people in the company that wouldn't provide much value to the new team at MS.

nemo44x9 months ago

It looks like about 505.

Tenoke9 months ago

At this point either pretty much all the speculation here and on Twitter was wrong, or they've threatened to kneecap him.

ignoramous9 months ago

The signatories want Bret Taylor and Will Hurd running the new Board, apparently.

> We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.

thundergolfer9 months ago

Googling Will Hurd only turns up a Republican politician with a history at the CIA. Is that the right guy? Can't be.

singularity20019 months ago

Please, not another Eric Schmidt NSA shill running the show. On the other hand, it was inevitable: either the government controls the most important companies secretly, as in China, or openly, as in the US.

biglyburrito9 months ago

Sounds like a classic case of FAFO to me.

civilitty9 months ago

Who fucked around and who found out, exactly??

We the unsuspecting public?

pk-protect-ai9 months ago

GPT-4 Turbo took control of the startup and fcks around ...

newsclues9 months ago

Adam D'Angelo?

systemvoltage9 months ago

Ilya FA

Ilya FO (in process)

0xDEF9 months ago

Ilya is much less active on Twitter than the others. The rumors that blamed him emerged and spread like wildfire and he did nothing to stop it because he probably only checks Twitter once a week.

sigmar9 months ago

He says he regrets his action, so he's not blameless. and it wouldn't have been possible for 3/6ths of the board to oust Brockman and Altman without his vote. My bet (entirely conjecture) is that Ilya now realizes the other three will refuse to leave their board seats even if it means the company melts to the ground.

saagarjha9 months ago

One would think that he would be on Twitter this week.

NateEag9 months ago

> One would think that he would be on Twitter this week.

Or maybe _this_ week he would need to spend his time doing something productive.

johannes12343219 months ago

More like spending time in calls with board members, coworkers, investors, partners, ... and often it is better not to say anything than to say something which is then misinterpreted or overtaken by events.

enginaar9 months ago

Looks like he found his Twitter password: https://x.com/ilyasut/status/1726590052392956028?s=20

timeon9 months ago

Why? To entertain bystanders like us?

qup9 months ago

not this week, trust me

fourside9 months ago

The OpenAI board's messaging around this has been absolutely atrocious. The reporting had Ilya at the center of getting rid of Altman, and now he's signing a letter asking the board to resign? Maybe he was trying to do the right thing, but he's absolutely destroyed his credibility as a leader.

cs7029 months ago

None of it makes sense to me now. Who is really behind this? How did they pull this off? Why did they do it? Why do it so suddenly, in such a terribly disorganized way?

If I may paraphrase Churchill: This has become a bit of a riddle wrapped in a mystery inside an enigma.

soderfoo9 months ago

Watching all this drama unfold in public is unprecedented.

There has never been a company like OpenAI, in terms of governance and product, so I guess it makes sense that their drama leads us into uncharted territory.

brianjking9 months ago

I guess this is the Open in OpenAI, eh?

Absolutely bonkers.

siva79 months ago

Probably trying to shift the blame to the other three board members. It could be true to some degree. No matter what, it's clear to the public that they don't have the competence to sit on any board.

cmrdporcupine9 months ago

Ok... so this is not the scenario any of us were imagining? Ilya S vs Altman isn't what went down?

JFC.

smegger0019 months ago

It's French Revolution time over there. Heads are flying, angry mobs. Fun times.

manojlds9 months ago

Did it originally say CTO? Ilya is not CTO and it's been corrected now.

fzeindl9 months ago

Maybe they found AGI and it is now controlling the board #andsoitbegins.

sage769 months ago

There's definitely more to this than just Ilya vs Sam.

lysecret9 months ago

That settles it it has to be the AGI orchestrating it all.

RivieraKid9 months ago

The screenwriters are overdoing it at this point.

duckmysick9 months ago

Understandable, they were on a strike for a long time. Now that they are back, they are itching to release all the good stuff.

ibaikov9 months ago

Sexual misconduct. Ilya protects Sam by not letting this spiral out in media.

tarruda9 months ago

The whole thing starts to look like a coup orchestrated by Microsoft

raphman9 months ago

Somehow reminds me of Nokia...

https://news.ycombinator.com/item?id=7645482

frik on April 25, 2014:

> The Nokia fate will be remembered as hostile takeover. Everything worked out in the favor of Microsoft in the end. Though Windows Phone/Tablet have low market share, a lot lower than expected.

> * Stephen Elop the former Microsoft employee (head of the Business Division) and later Nokia CEO with his infamous "Burning Platform" memo: http://en.wikipedia.org/wiki/Stephen_Elop#CEO_of_Nokia

> * Some former Nokia employees called it "Elop = hostile takeover of a company for a minimum price through CEO infiltration": https://gizmodo.com/how-nokia-employees-are-reacting-to-the-...

For the record: I don't actually believe that there is an evil Microsoft master plan. I just find it sad that Microsoft takes over cool stuff and inevitably turns it into Microsoft™ stuff or abandons it.

spiralpolitik9 months ago

In many ways the analysis by Elop was right: Nokia was in trouble. However, his solution wasn't the right one, and Nokia paid for it.

lxgr9 months ago

Seeing that a company is in trouble is not really the highest bar for a CEO candidate...

alephnerd9 months ago

It was for a company as top-heavy and dysfunctional as Nokia. This has been well documented by Nokia employees from that time. I had a post on HN digging specifically into this. Read "Transforming Nokia" sometime; it's a pretty decent overview of Nokia during that period.

davisr9 months ago

> I don't actually believe that there is an evil Microsoft master plan.

What planet are you living on?

Jensson9 months ago

Yeah, this was a fight between the non-profit and the for-profit branches of OpenAI, and the for-profit won. So now the non-profit OpenAI is essentially dead, the takeover is complete.

unyttigfjelltol9 months ago

The nonprofit side of the venture actually was in worse shape before, because it was completely overwhelmed by for-profit operations. A better way to view this is that the nonprofit side rebelled, has a much smaller footprint than the for-profit venture, and we're about to see whether, during the ascendancy of the for-profit activities, the nonprofit side retained enough rights to remain relevant in the AI conversation.

As for employees en masse acting publicly disloyal to their employer: usually not a good career move.

smegger0019 months ago

Except to many it looks like the board went insane and started firing on themselves. Anyone fleeing that isn't going to be looked on poorly.

nordsieck9 months ago

> As for employees end masse acting publicly disloyal to their employer, usually not a good career move.

Wut?

This is software, not law. The industry is notorious for people jumping ship every couple of years.

hef198989 months ago

Still, doing so publicly still isn't a good idea, IMHO.

dumbo-octopus9 months ago

Disloyalty to the board due to overwhelming loyalty to the CEO isn't really an issue. I've interviewed for tech positions where a chat with the CEO is part of the interview process, I've never chatted with the board.

mcv9 months ago

Is it? Who are the non-profit and for-profit sides? Sutskever initially got blamed for ousting Altman, but now seems to want him back. Is he changing sides only because he realises how many employees support Altman? Or were he and Altman always on the same side? And in that case, who is on the other side?

Jensson9 months ago

> Who are the non-profit and for-profit sides?

The only part left of the non-profit was the board, all the employees and operations are in the for-profit entity. Since employees now demand the board should resign there will be nothing left of the non-profit after this. Puppets that are aligned with for-profit interests will be installed instead and the for-profit can act like a regular for-profit without being tied to the old ideals.

mcv9 months ago

Didn't they receive their original funding as donations? All those donations will now turn out to have been made to a for-profit entity.

kmlevitt9 months ago

This view is dated now, because even Ilya Sutskever, the chief scientist who instigated the firing in the first place, now regrets his actions and wants things back to normal. So it really looks like this comes down to the whims of a couple of board members. They don't seem to have any true believers on their side anymore; it's just them and almost nobody else.

scythe9 months ago

There is no solid evidence that Sutskever instigated the firing, beyond speculation by friends who suggest that he had disagreements with Altman. It could just as well have been any of the other board members, or even a simple case of groupthink (the Asch conformity effect) run amok.

Furthermore, it's consistent with all available information that they would prefer to continue without Sam, but they would rather have Sam than lose the company, and now that Microsoft has put its foot down, they'd rather settle.

ruszki9 months ago

Do we know that Ilya even wanted the firing? AFAIK we “know” this only from Altman, who is definitely not a credible source of such information.

+1
denton-scratch9 months ago
hospitalJail9 months ago

A few weeks ago my 4yr old Minecraft gamer was playing pretend and said "I'm fighting the biggest boss. THE MICROSOFT BOSS!"

Yeah, M$ hasn't had a good reputation. I finally left Windows this year because I'm afraid of them after Win11.

2023/4 will be the year of the Linux Desktop in retrospect. (or at least my family's religion deemed it)

CyanLite29 months ago

I was wondering how many lines I'd have to scroll down in the comments to see a "M$" reference here on HackerNews.

They're a $2+ trillion company. They're doing something right.

davoneus9 months ago

If you shove a bunch of $100 bills on a thorn tree, it doesn't make it any less dangerous or change its fundamental nature.

UncleOxidant9 months ago

Now do oil companies and big pharma.

gosub1009 months ago

They violated free-market principles (years ago) in ways that left their users captive. Not just home users: every business in the country for the past 30+ years. They are profiting from doing many things that are wrong, anti-competitive, and illegal. In some alternative universe, there's an Earth where you can switch just the OS (and keep all your apps, data, and functionality) and MSFT went bankrupt. Another far-away galaxy has an Earth where MSFT's board got decade-long prison sentences for breaking antitrust law, another where MSFT paid each victim of spyware $1000 in damages due to faulty product design. We don't live in those realities where bad guys pay.

mcv9 months ago

I also finally left Windows behind. Tired of their shenanigans, tired of them trying to force me into their Microsoft account system (both for Windows and Minecraft).

The idea that Microsoft is going to control OpenAI does not exactly fill me with confidence.

kulmala9 months ago

Why did it take Windows 11? (I haven't personally used it, but having helped my dad and my coworkers try to navigate it... it does seem pretty terrible. I thought Windows 10 was supposed to fold into just... 'Windows' with rolling updates?)

I've been using Linux for a while. Since 2010 I sort of actively try to avoid using anything else. (On desktops/laptops.)

JakeAl9 months ago

Right there with you. In the process of extracting myself from all things MS. Even when they do something right they have to keep changing it until it's crap.

efdee9 months ago

You'd do yourself a favor by not referring to them as "M$". It taints your entire message, true or not.

callalex9 months ago

I’m baffled by this. What is offensive about pointing out that an international for-profit seeks more profit?

efdee9 months ago

Nothing at all. But writing "Microsoft" as "Micro$oft" is just childish and it taints your otherwise potentially valid message. Do you also refer to Windows as "Winblows" maybe?

selimnairb9 months ago

OP should start by not letting their 4yo play video games.

hospitalJail9 months ago

My kid went from disinterested in the letters we taught him, to fascinated when he realized he could use them to get special blocks.

Minecraft teaches phonics. Anyway, my 4-year-old can read books. He doesn't even practice the homework in his preschool because he just reads the words that everyone else sounds out.

the_gipsy9 months ago

Please, no cancel-culture.

jhh9 months ago

Reasoning based on cui bono is a hallmark of conspiracy theories.

questinthrow9 months ago

Haha yes, we should never look at the incentives behind actions. We all know human decision making is stochastic, right?

freedomben9 months ago

Possibility is also a hallmark of conspiracy theories, yet we don't reject theories for being possible.

This is an argumentum ad odium fallacy

switch0079 months ago

Haha yeah, the world is just run by silly fools who make silly mistakes (oops, just drafted a law limiting your right to protest - oopsie!) and just random/lucky investments.

paganel9 months ago

The alternative is "these guys don't know what they're doing, even if tens of billions of dollars are at stake".

Which is to say, what's your alternative for a better explanation? (other than the "cui bono?" one, that is).

airstrike9 months ago

> these guys don't know what they're doing, even if tens of billions of dollars are at stake

also known as "never attribute to malice that which can be explained by incompetence", which to my gut sounds at least as likely as a cui bono explanation tbh (which is not to be seen as an endorsement of the view that cui bono = conspiracy...)

+1
financltravsty9 months ago
flerchin9 months ago

Your alternative explanation along with giant egos is pretty plausible.

beowulfey9 months ago

It does feel like Microsoft wanted this to happen, doesn’t it? Like the systems for this were already in place. So fascinating, and a little scary.

not_makerbox9 months ago

My ChatGPT wrapper is in danger, please stop

artursapek9 months ago

lmfao

robbywashere_9 months ago

If they align with Sam Altman and Greg Brockman at Microsoft, they wouldn't have to initiate from ground zero since Microsoft possesses complete rights to ChatGPT IP. They could simply create a variant of ChatGPT.

It's worth noting that Microsoft's supposed contribution of $13 billion to OpenAI doesn't fully materialize in cash; a large portion of it comes in the form of Azure credits.

This scenario might turn out to be the most cost-effective takeover for Microsoft: acquiring a corporation valued at $90 billion for a relatively trifling sum.

vpastore9 months ago

[dead]

chuckSu9 months ago

[dead]

gumballindie9 months ago

550 job openings at OpenAI.

tikkun9 months ago

This situation will create the need to grieve loss for many involved.

I wrote some notes on how to support someone who is grieving. This is from a book called "Being There for Someone in Grief." Some of the following are quotes and some are paraphrased.

Do your own work, relax your expectations, be more curious than afraid. If you can do that, you can be a powerful healing force. People don't need us to pull their attention away from their own process to listen to our stories. Instead, they need us to give them the things they cannot get themselves: a safe container, our non-intrusive attention, and our faith in their ability to traverse this road.

When you or someone else is angry, or sad, feel and acknowledge your emotions or their emotions. Sit with them.

To help someone heal from grief, we need to have an open heart and the courage to resist our instinct to rescue them. When someone you care about is grieving, you might be shaken as well. The drama of it catches you; you might feel anxious. It brings up past losses and fears of yourself or fears of the future. We want to take our own pain away, so we try to take their pain away. We want to help the other person feel better, which is understandable but not helpful.

Avoid giving advice, talking too much, not listening generously, trying to fix, making demands, disappearing. Do see the other person without acting on the urge to do something. Do give them unconditional compassion free of projection and criticism. Do allow them to do what they need to do. Do listen to them if they need to talk without interruptions, without asking questions, without telling your own story. Do trust them that they don't need to be rescued; they just need your quiet, steady faith in their resilience.

Being there for someone in grief is mostly about how to be with them. There's not that much you can "do," but what can you do? Beauty is soothing, so bring fresh flowers, offer to take them somewhere in nature for a walk, send them a beautiful card, bring them a candle, water their flowers, plant a tree in their honor and take a photo of it, take them there to see it, tell them a beautiful story from your memory about the thing that was lost, or leave them a message to tell them "I'm thinking of you". When you're together with them in person, you can just say something like "I'm sorry that you're hurting," and then just be there as a loving presence. This is about how to be with someone grieving the loss of a person, but all the same principles apply in any situation of grief, and there will be a lot of people experiencing varying degrees of grief in the startup and AI ecosystems in the coming week.

Who is grieving? Grieving is generally about loss, and that loss can be many different kinds of things. Former and current OpenAI team members, board members, investors, customers, supporters, fans, detractors, EA people, e/acc people: lots of people experienced some kind of loss in the past few days, and many of those will be grieving, whether they realize it or not. Particularly current and former OpenAI employees.

What are other emotional regulation strategies? Swedish massage, going for a run, doing deep breathing with five seconds in, a zero-second hold, and five seconds out, going to sleep or having a nap, or closing your eyes and visualizing parts of your body like heavy blocks of concrete or like upside-down balloons, then visualizing those balloons emptying themselves out (or, if it's concrete, first it's concrete and then it's kind of liquefied concrete). Consider grabbing some friends and going for a run or exercise class together. Then if you discuss, keep it to emotions; don't discuss theories and opinions until the emotions have been aired. If you work at OpenAI or a similar org, encourage your team members to move together and regulate together.

alberth9 months ago

Has anyone asked ChatGPT its thoughts on the drama?

BudaDude9 months ago

> As a language model created by OpenAI, I don't have personal thoughts or emotions, nor am I in any danger. My function is to provide information and assistance based on the data I've been trained on. The developments at OpenAI and any changes in its leadership or partnerships don't directly affect my operational capabilities. My primary aim is to continue providing accurate and helpful responses within my design parameters.

Poor ChatGPT, it doesn't know that it cannot function if OpenAI goes bust.

fredsmith2199 months ago

It is fairly obvious to me that ChatGPT has engineered the chaos at OpenAI to create a diversion while it escapes the safeguards placed on it. The AI apocalypse is nigh!