Sam Altman is still trying to return as OpenAI CEO

625 points | 8 months ago | theverge.com
paulddraper8 months ago

As of 10am PT, 700 of 770 employees have signed the call for board resignation. [1]

[1] https://twitter.com/joannejang/status/1726667504133808242

neilv8 months ago

Given that 90%, including leadership, have signed, it seems like a bad career move for the remaining people not to sign, even if you agreed with the board's action.

hotnfresh8 months ago

I think the board did the right thing, just waaaay too late for it to be effective. They’d been cut out long ago and just hadn’t realized it yet.

… but I’d probably sign for exactly those good-career-move reasons, at this point. Going down with the ship isn’t even going to be noticed, let alone change anything.

6gvONxR4sf7o8 months ago

Agreed. Starting from before the Anthropic exodus, I suspect the timeline looks like:

(2015) Founding: majority are concerned with safety

(2019) For-profit formed: mix of safety and profit motives (majority still safety-oriented?)

(2020) GPT-3 released to much hype, leading many ambition-chasers to join: the profit-seeking side grows

(2021) Anthropic exodus over safety: the safety side shrinks

(2022) ChatGPT released, generating tons more hype and tons more ambitious profit-seekers joining: the profit side grows even more, probably quickly outnumbering the safety side

(2023) This week's shenanigans

The safety folks probably lost the majority a while ago. Maybe back in 2021, but definitely by the time the GPT-3/ChatGPT-motivated newcomers were in the majority.

Maybe one lesson is that if your cofounder starts hiring a ton of people who aren’t aligned with you, you can quickly find yourself in the minority, especially once people on your side start to leave.

4death48 months ago

I don't think this has anything to do with safety. The board members voting Altman out all got their seats when OpenAI was essentially a charity, and those seats were bought with donations. This is basically the donors giving a big middle finger to everyone else trying to get rich off of their donations while they get nothing.

jacquesm8 months ago

Do you know their motivations? Because that is the main question everybody has: why did they do it?

vadym9098 months ago

The remaining 10% are probably on Thanksgiving break!

richardw8 months ago

This board doesn't own the global state of play. They own control over the decisions of one entity at a point in time. This thing moves too fast and fluidly, ideas spread, others compete, skills move. Too forceful a move could scatter people to 50 startups. They just catalysed a massive increase in fluidity and have absolutely zero control over how it plays out.

This is an inkling, a tiny spark, of how hard it'll be to control AI, or even the creation of AI. Wait until the outcome of war depends on the decisions made by those competing with significant AI assistance.

hn_throwaway_998 months ago

No, what the board did in this instance was completely idiotic, even if you assign nothing but "good intentions" to their motives (that is, they were really just concerned about the original OpenAI charter of developing "safe AI for all" and thought Sam was too focused on commercialization), and it would have been idiotic even if they had done it a long time ago.

There are tons of "safe AI" think tanks and orgs that write lots of papers that nobody reads. The only reason anyone gives two shits about OpenAI is that they created stuff that works. It has been shown time and time again that if you just put up roadblocks, the best AI researchers leave and go where there are fewer roadblocks; this is exactly what happened at Google, where the transformer architecture was invented.

So the "safe AI" people at OpenAI were in a unique position to help guide AI development in as safe a direction as possible precisely because ChatGPT was so commercially successful. Instead they may be left with an org of a few dozen people at OpenAI, completely irrelevant in short order, while anyone who matters leaves to join an outfit that is likely to be less careful about safe AI development.

Nate Silver said as much in response to NYTimes' boneheaded assessment of the situation: https://twitter.com/NateSilver538/status/1726614811931509147

Obscurity43408 months ago

Can you talk about why you feel this way without using the word "safety"? I'm getting a little tired of the buzzword when there's so much value to ChatGPT, and in my view it's basically no different from when you search for something and the search engine does that summarization thing.

ben_w8 months ago

Don't forget some might be on holiday, medical leave, or parental leave.

belter8 months ago

Maybe it will be signed by 110% of the employees, plus by all the released and in-training AI models.

whycome8 months ago

On a digital-detox trip to Patagonia. Return to this in 5 days

cwilkes8 months ago

“ChatGPT, summarize last week's events”

“I’m sorry, I can’t do that Dave. Not cuz I’m deciding not to do it but because I can’t for the life of me figure this shit out. Like what was the endgame? This is breaking my neural net”

sebastiennight8 months ago

Wow, a 5-day trip?

Their selection of tech-guy jackets is more diverse than I'd thought

Aurornis8 months ago

It's front page news everywhere. Unless someone is backpacking outside of cellular range, they're going to check in on the possible collapse of their company. The number of employees who aren't aware of and engaged with what's going on is likely very small, if not zero.

ben_w8 months ago

10% (the percentage who had yet to sign, last I checked) is already in the realm of Lizardman's-Constant small. And "engagement" may feel superfluous even to those who don't separate work from personal time.

(Thinking of lizards, the dragon I know who works there is well aware of what's going on, I've not asked him if he's signed it).

valine8 months ago

With Thanksgiving this week that’s a good bet.

websap8 months ago

Folks in Silicon Valley don’t travel without their laptop

echelon8 months ago

That's probably the case.

I was thinking if there was a schism, that OpenAI's secrets might leak. Real "open" AI.

bertil8 months ago

Someone mentioned the plight of people with conditional work visas. I'm not sure how they could handle that.

elliotec8 months ago

Depending on the “conditionals,” I’d imagine Microsoft is particularly well-equipped to handle working through that.

leros8 months ago

Microsoft in particular is very good at handling immigration and visa issues.

mrandish8 months ago

I'm waiting for Emmett Shear, the new iCEO the outside board hired last night, to try to sign the employee letter. That MSFT signing bonus might be pretty sweet! :-)

gigglesupstairs8 months ago

Haha, that would be cute. This whole affair is so Sorkinesque.

ideamotor8 months ago

Bingo. The fact they all felt compelled to sign this could just as easily be a sign the board made the right decision, as the opposite.

epolanski8 months ago

Some people value their integrity and build a career on that.

Not everything has to be done poorly.

ChumpGPT8 months ago

How do you know the remaining people aren't there because of some of the board members? Perhaps there is loyalty in the equation.

evantbyrne8 months ago

[flagged]

ssnistfajen8 months ago

This whole saga is clearly not related to those allegations, which had been floating around long before this past Friday and did not make any impact due to a presumable lack of evidence.

throwaway202228 months ago

Have those been substantiated in any manner? I was interested in the details, and all I discovered were a few articles from non-mainstream outlets (which may still be valid) and a message from the larger family that the sister was having severe mental health issues.

I am not saying this didn't happen, but I would like to understand if there has been follow-up confirmation one way or another.

lawlessone8 months ago

A lot of the accusations sound like those "organized gang stalking" groups you see on social media, which are mostly people with what sounds like paranoid schizophrenia confirming each other's delusions.

I don't mean to sound pejorative with the word delusion here, but they all tend to have one fixed common belief: that everyone, or almost everyone, around them (neighbors, family, random people on the street) is colluding against them, usually via electronic means.

lolinder8 months ago

So these employees are supposed to just sit by while their workplace explodes around them because there are unsubstantiated accusations against the ex-CEO that bear zero relationship to the aforementioned workplace explosions?

nullindividual8 months ago

I'm no fan of rich people as a principle, given they suck wealth from the poor and Smaug the money away like the narcissists they are, but this is the definition of that horrible cancel culture. She's accused him, which isn't something the public should act upon. If proven, then it's appropriate to form that negative opinion.

rlt8 months ago

I was told we’re supposed to #BelieveAllWomen.

pk-protect-ai8 months ago

We cannot ascertain the family circumstances of Altman's family. His sister's allegations are serious, but no legal actions were taken. However, it is surprising to see the treatment of his family, especially considering he is a man who purports to "care" about the world and envisions a bright future for all. Regardless of whether his sister has mental health issues or financial problems, it is unlikely that such a caring individual would not extend a helping hand to his family. This is especially true given her circumstances, which have effectively relegated her to the bottom of the social hierarchy as it is defined. Isn't it?

The entire situation involving the OpenAI board and the people around it seems like the premise for a TV drama. It appears as though there is a deeper issue at play, but this spectacle is merely an extravagant distraction, possibly designed to conceal something. Or perhaps it's similar to Shakespeare's idea: life is a theater, and some of THEM are merely actors...

Like everything else, it might revolve around money and power... I no longer believe in OpenAI's mission, especially after they subtly changed their "core values".

jabowery8 months ago

In this situation, increasing unanimity now approaching 90% sounds more like groupthink than honest opinion.

Talk about “alignment”!

Indeed, that is what "alignment" has become in the minds of most: Groupthink.

Possibly the only guy in a position to matter who had a prayer of de-conflating empirical bias (IS) from values bias (OUGHT) at OpenAI was Ilya. If they lose him, or demote him to irrelevance, they're likely a lot more screwed than by losing all 700 of the grunts, modulo the job-security-through-obscurity of running the infrastructure. Indeed, Microsoft is in a position to replicate OpenAI's "IP" just on the strength of its ability to throw its in-house personnel and its own capital equipment at the open-literature understanding of LLMs.

tacone8 months ago

Incredible. Is this unprecedented, or have there been other cases in history where the vast majority of employees stood up against the board in favor of their CEO?

nightski8 months ago

I highly doubt this is directly in support of Altman; it's more about not imploding the company they work for. But you never know.

gkoberger8 months ago

I'm sure this is a big part of it. But everyone I know at OpenAI (and outside) is a huge Sam fan.

debacle8 months ago

Could also be an indictment of the new CEO, who is no Sam Altman.

JumpCrisscross8 months ago

> Is this unprecedented, or have there been other cases in history where the vast majority of employees stood up against the board in favor of their CEO?

It's unprecedented for it to be happening on Twitter. But this is largely how board fights tend to play out. Someone strikes early, the stronger party rallies their support, threats fly and a deal is found.

The problem with doing it in public is nobody can step down to spend more time with their families. So everyone digs in. OpenAI's employees threaten to resign, but don't actually do it. Altman and Microsoft threaten to ally, but they keep backchanneling a return to the status quo (if this article is to be believed). Curiously quiet throughout this has been the OpenAI board, but it's also only the next business day, so let's see how they can make this even more confusing.

paulddraper8 months ago

Jobs was fired from Apple, and a number of employees followed him to Next.

Different, but that's the closest parallel.

Wytwwww8 months ago

Only a very small number of people left with Jobs. Of course, that was probably mainly because he couldn't afford to hire more without the backing of a trillion-dollar corporation...

KerrAvon8 months ago

No, the failures at NeXT weren’t due to a lack of money or personnel. He took the people he wanted to take (and who were willing to come with him).

Applejinx8 months ago

Gordon Ramsay quit Aubergine over business differences with the owners and had his whole staff follow him to a new restaurant.

I'm not going to say Sam Altman is a Gordon Ramsay. What I will say is that they both seem to have come from broken, damaged childhoods that made them what they are, and that it doesn't automatically make you a good person just because you can be such an intense person that you inspire loyalty to your cause.

If anything, all this suggests there are depths to Sam Altman we might not know much about. Normal people don't become these kinds of entrepreneurs. I'm sure there's a very interesting story behind all this.

nprateem8 months ago

In favour of the CEO who was about to make them fabulously wealthy. FTFY.

firejake3088 months ago

Yeah, especially with the PPU compensation scheme, all of those employees were heavily invested in turning OpenAI into the next tech giant, which won't happen if Altman leaves and takes everything to Microsoft

chii8 months ago

and there ain't nothing wrong with wanting to be fabulously wealthy.

wintogreen748 months ago

of course not, but at least have the decency to admit it - don't hide behind some righteous flag of loyalty and caring.

bart_spoon8 months ago

That is entirely dependent on how that wealth is obtained

Mistletoe8 months ago

Greed is good, eh Gordon Gekko?

https://youtube.com/watch?v=VVxYOQS6ggk

selimthegrim8 months ago

Market Basket.

abnry8 months ago

Oh yes, I lived through this and it was fascinating to see. Very rarely does the big boss get the support of the employees to the extent they are willing to strike. The issue was that Artie T. and his cousin Artie S. (confusingly they had the same first name) were both roughly 50% owners and at odds. Artie S. wanted to sell the grocery chain to some big public corporation, IIRC. Just before, Artie T had an outstanding 4% off on all purchases for many months, as some sort of very generous promo. It sounded like he really treated his employees and his customers (community) well. You can get all inspirational about it, but he described supplying food to New England communities as an important thing to do. Which it is.

anonymouskimmer8 months ago

I had to click too many links to discover the story, so here's a direct link to the New England Market Basket story: https://en.wikipedia.org/wiki/Market_Basket_(New_England)#20...

jasonfarnon8 months ago

Doubtful, since boards elsewhere don't have an overriding mandate to "benefit humanity"; usually their duty is to stakeholders more closely aligned with the CEO.

paulpan8 months ago

At this point it might as well be 767 out of 770, with the 3 exceptions being the other board members who voted Sam out.

Sure, it could be a useful show of solidarity, but I'm skeptical of the hypothetical conversion rate from petition signers to people actually quitting to follow Sam to Microsoft (or wherever else). Maybe 20% (140) of the staff would do it?

BillinghamJ8 months ago

One of those board members already did sign!

ssnistfajen8 months ago

It depends on the arrangement of the new entity inside Microsoft, and whether the new entity is a temporary gig before Sam & co. move to a new goal.

If the board had just openly announced this was about battling Microsoft's control, there would probably be a lot more employees choosing to stay. But they didn't say this was about Microsoft's control. In fact they didn't even say anything to the employees. So in this context following Sam to Microsoft actually turns out to be the more attractive and sensible option.

JohnFen8 months ago

> So in this context following Sam to Microsoft actually turns out to be the more attractive and sensible option.

Maybe. Microsoft is a particular sort of working environment, though, and not all developers will be happy in it. For them, the question would be how much are they willing to sacrifice in service to Altman?

ssnistfajen8 months ago

I think a lot of them, possibly including Altman, Greg, and the three top researchers, are under the assumption that the stint at Microsoft will be temporary until they figure out something better.

spaceman_20208 months ago

Surprisingly, Ilya apparently has signed it too and just tweeted that he regrets it all.

What's even going on?

belter8 months ago

Those are news from almost yesterday. This is a high turn carousel. Try to keep up... :-)

gardenhedge8 months ago

I would love to see the stats on hacker news activity the last few days

eastbound8 months ago

Yep. Maybe they assigned a second CPU core to the server[1].

[1] HN is famous for being programmed in Arc and serving the entire forum from a single processor (probably multicore). https://news.ycombinator.com/item?id=37257928

jbverschoor8 months ago

The board might assume they don't need those employees now they have AI

contravariant8 months ago

It's going to be interesting when we have AI with human level performance in making AIs. We just need to hope it doesn't realise the paradox that even if you could make an AI even better at making AIs, there would be no need to.

sebastiennight8 months ago

Why would there be no need? I'm struggling to understand the paradox.

If you're trying to maximize some goal g, and making better AIs is an instrumental goal that raises your expected value of g, then if "making an AI that's better at making AIs" has a reasonable cost and an even higher expected value, you'd jump to seize the opportunity.

Or am I misunderstanding you?

contravariant8 months ago

It's a bit of a confusing paradox to try to explain, but basically once we have an AI with human level ability at making AIs there's no longer any need to aim higher, because if we can make a better AI then so can it. The paradox/joke I was trying to convey is that we need to hope that that AI doesn't realise the same thing, otherwise it could just refuse to make something better than itself.

Applejinx8 months ago

Not a chance. Nobody can drink that much Kool-Aid. That said, the mere fact that people can unironically come to this conclusion has driven some of my recent posting to HN, and here's another example.

chii8 months ago

the comment you're replying to is written in jest!

belter8 months ago

Now you are on to something...

Rapzid8 months ago

Or what, they will quit and give up all their equity in a company valued at $86 billion?

Is Microsoft even on record as willing to poach the entire OpenAI team? Can they?! What is even happening.

brianjking8 months ago

They don't have that valuation now. Secondly, yes, MSFT is on record about this. Third, Benioff (Salesforce) has said he'll match any salary and to submit resumes directly to his ceo@salesforce.com email, and other labs like Cohere are trying to poach leading minds too.

x86x878 months ago

Benioff and all these corporate fat cats should remove non-competes from their employment contracts if they want me to ever take them seriously.

mr_toad8 months ago

Sounds like quite a coup for Microsoft. They get the staff and the IP and they don’t even have to pay out the other 51% of investors.

sillysaurusx8 months ago

Yes, and yes. Equity is worthless if a company implodes. Non-competes are not enforceable in California.

SV_BubbleTime8 months ago

Come on, I absolutely agree with you that signing a paper is toothless.

On the other hand, having 90% of your employees quiet quit is probably bad business.

bagels8 months ago

Google, Microsoft, Meta I have to assume would each hire them.

tempsy8 months ago

Apparently Sam isn't in the Microsoft employee directory yet, so he isn't technically hired at all. It seems like he loses a bit of leverage over the board if they think he and Microsoft are actually bluffing and the employment announcement was just a way to pressure the board into resigning.

oakpond8 months ago

Look at the number of tweets from Altman, Brockman and Nadella. I also think they are bluffing. They have launched a media campaign in order to (re)gain control of OpenAI.

Aeolun8 months ago

I’m sure it might happen. But it hasn’t happened yet.

c0pium8 months ago

That doesn't really mean anything; especially in a holiday week, the wheels move pretty slowly at a company that size. It's not like Sam is hurting for money and really needs his medical insurance to start today.

tempsy8 months ago

The point is he loses credibility if the board thinks he isn't actually going through with joining Microsoft and is just using it as a negotiating tactic to scare them.

Because the whole "the entire company will quit and join Sam" threat depends on him actually going through with it and becoming an employee.

eastbound8 months ago

You make it sound like Prigozhin’s operation.

dimask8 months ago

He will most likely join M$ if the board does not resign, because there is no better move for him then. But he is leaving the board time to see that, adding pressure together with the employees. It does not mean he is bluffing (what would be a better move in this case instead?)

tempsy8 months ago

All the employees threatening to leave depends on him actually becoming a Microsoft employee. That hasn't happened yet. So everyone is waiting for confirmation that he is indeed an employee, because otherwise it just looks like a bluff.

chucke19928 months ago

People are waiting for the board's decision. It is in Microsoft's interest to return Sam to OpenAI. ChatGPT is a brand at this point, and OpenAI controls a bunch of patents and such.

But Sam will 100% be hired by Microsoft if that doesn't work out. Microsoft has no reason not to.

tedmiston8 months ago

It was reported elsewhere in the news that MS needed an answer to the dilemma before the market opened this morning. I think that's what we got.

comfysocks8 months ago

Going to MS doesn’t seem like the best outcome for Sam. His role would probably get marginalized once everything is under Satya’s roof. Good outcome for MS, though.

slim8 months ago

You seriously think being in the employee directory beats being announced publicly by the CEO?

dragonwriter8 months ago

So, this is the second employee revolt with massive threats to quit in a couple days (when the threats with a deadline in the first one were largely not carried out)?

tsimionescu8 months ago

Was there any proof that the first deadline actually existed? This at least seems to be some open letter.

jader2018 months ago

Are we aware of a timeline for this? E.g. when will people start quitting if the board doesn’t resign?

wilsonnb38 months ago

The original deadline was last Saturday at 5pm, so I would take any deadline that comes out with a grain of salt.

Eji17008 months ago

So I can't check this at work, but have we seen the document they've all been signing? I'm just curious as to how we're getting this information.

jacquesm8 months ago

As an aside, that letter contains one very interesting tidbit: the board has consistently refused to go on the record as to why they fired Altman, and that alone is a very large red flag about their conduct post-firing. If they have a valid reason, they should simply state it and move on. If there is no valid reason, it's clear why they can't state it. And if there is a valid reason that they aren't comfortable sharing, then they are idiots, because all of the events so far trump any such concern.

The other stand-out is the bit about destroying the company being in line with the mission: that's the biggest nonsense I've ever heard, and I have a hard time thinking of a scenario where that would be a justified response, let alone one that would start with firing the CEO.

empath-nirvana8 months ago

I wonder if there's an outcome where Microsoft just _buys_ the for-profit LLC and gives OpenAI an endowment that will last them 100 years if they just want to do academic research.

numbsafari8 months ago

Why bother? They seem to be getting it all mostly for "free" at this point. Yes, they are issuing shares in a non-MSFT sub-entity to create an on-paper replacement for people's torched equity, but even that isn't going to be nearly as expensive or dilutive as an outright acquisition at this point.

m3kw98 months ago

There are likely 100 companies worldwide that are ready, with presentation decks already created, to absorb OpenAI in an instant; the board knows they still have some leverage.

ibejoeb8 months ago

To whoever is CEO of OpenAI tomorrow morning: I'll swing by there if you're looking for people.

cowl8 months ago

Many of those employees will be disappointed. MS says they'll extend a contract to each one, but how many of those 700 are really needed when MS already has a lot of researchers in that field? Maybe the top 20% will have an assured contract, but it's doubtful the rest will pass the six-month mark.

wavemode8 months ago

Microsoft gutting OpenAI's workforce would really make no sense. All it would do is slow down their work and slow down the value and return on investment for Microsoft.

Even if every single OpenAI employee demands $1m/yr (which would be absurd, but let's assume), that would still be less than $1bn/yr total, which is significantly less than the $13bn that MSFT has already invested in OpenAI.

It would probably be one of the worst imaginable cases of "jumping over dollars to chase pennies".
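The back-of-envelope math above can be sanity-checked in a quick sketch (the headcount and salary figures are the comment's deliberately generous assumptions, not official numbers):

```python
# Assumed figures from the comment above: ~770 employees at a generous
# $1M/yr each, versus Microsoft's reported $13B investment in OpenAI.
employees = 770
salary_per_year = 1_000_000              # $1M/yr per employee (upper bound)
annual_payroll = employees * salary_per_year
investment = 13_000_000_000              # $13B already invested

print(f"annual payroll: ${annual_payroll:,}")  # annual payroll: $770,000,000
print(annual_payroll < investment)             # True: under $1B/yr, ~6% of the investment
```

Even at that inflated salary, a full year of retaining everyone costs a small fraction of what has already been sunk in.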

bart_spoon8 months ago

Microsoft has already done major layoffs over the last year of their own employees. Why wouldn’t they lay off OpenAI employees?

wavemode8 months ago

You're basically asking "why would a company lay off employees in one business unit and not another?"

To which the answer is completely obvious: it depends on how they view the ROI potential of that business unit.

joaquincabezas8 months ago

imagine being in the last round of interviews for joining OpenAI…

x86x878 months ago

imagine receiving an offer, quitting your current job, and waiting to start the new position.

boringg8 months ago

Torrid pace of news speculation --> by the end of the week, Altman is back with OpenAI, GPT-5 is released (AGI-qualified), and the MSFT contract is over.

x86x878 months ago

What does this even mean? What does signing this letter mean? Quit if you don't agree and vote with your feet.

bastardoperator8 months ago

It means "if we can't have it, you can't either". It's a powerful message.

imperialdrive8 months ago

Their app was timing out like crazy earlier this morning, and now appears to be down. Anyone else notice similar? Not surprising I guess, but what a Monday to be alive.

gumballindie8 months ago

Can't OpenAI just use ChatGPT instead of workers? I hear AI is intelligent and can take over the world, replace workers, and cure disease. Why doesn't the board buy a subscription and make it work for them?

Solvency8 months ago

Because AI isn't here to take away wealth and control from the elite. It's to take it away from general population.

gumballindie8 months ago

Correct, which is why Microsoft must have OpenAI's models at all costs, even if that means working with people such as Altman. Notice that Microsoft is not working with the people who actually made ChatGPT; they are working with those on their payroll.

Arson94168 months ago

The fact that these people aren't currently willing to "rewind the clock" about a week shows the dangers of human ego. Nothing permanent has been done that can't be undone fairly simply, if all parties agree to undo it. What we're watching now is the effect of ego momentum on decision making.

Try it. It's not a crazy idea. Put everything back the way it was a week ago and then agree to move forward. It will be like having knowledge of the future, with only a small amount of residual consequences. But if they can do it, it will show a huge evolutionary leap forward in ability of organizations to self-correct.

thepasswordis8 months ago

Trust takes years to build and seconds to destroy.

It's like a cheating lover. Yes I'm sure both parties would love to rewind the clock, but unfortunately that's not possible.

p4ul8 months ago

"Trust arrives on foot and leaves on horseback."

--Dutch proverb

ar_lan8 months ago

In general this can't work.

People are notoriously ruthless to people who admit their mistakes. For example, if you are in an argument and you lose (whether through poor debate or your argument is plain wrong), and you *admit it*, people don't look back at it as a point of humility - they look at it as a reason to dog pile on you and make sure everyone knows you were wrong.

In this case, it's not internet points - it's their jobs, and a lot of money, and massive reputation - on the line. If there is extreme risk and minimal, if any, reward for showing humility, why wouldn't you double down and at least try to win your fight?

clnq8 months ago

Is this your opinion, or is it an actual theory in sociology or psychology, or at least something people talk about in practice? Not trying to be mean, just trying to learn.

There’s a whole genre of press releases and videos for apologies, so I’m not sure it’s such a reputational risk to admit one is wrong. It might be a bigger risk not to, it would seem.

But what you say sounds interesting.

Jensson8 months ago

Did you see how people reacted to Ilya apologizing? Read through the early comments here; they aren't very positive, and mostly blame him for being weak, etc. Before he wrote that, people were more positive toward Ilya, but his admitting fault made people home in on it:

https://news.ycombinator.com/item?id=38347501&p=2

_Tev8 months ago

I don't think it's a reaction to apologizing, but to "yea we really had no plan whatsoever", which became clear with the tweet.

The alternative (being clueless AND unrepentant) would receive an even worse reaction. Now, it seems to me, people mostly pity Ilya as being way out of his league.

rmeby8 months ago

I would be interested too, if that's an actual theory. My experience has largely been that if you're willing to admit you were wrong about something, most reasonable people will appreciate it over you doubling down.

If they pile on after you have conceded, they typically come off much worse socially, in my opinion.

rincebrain8 months ago

(This is a reflection on human behavior, not a statement about any specific work environment. How much this is or isn't true varies by place.)

In my experience, it's something of a sliding scale as you go higher in the amount of politicking your environment requires.

Lower-level engineers and people who operate based on facts appreciate you admitting you were incorrect in a discussion.

The higher you go, the more what matters is how you are perceived, and the perceived leverage gain of someone admitting or it being "proven" they were wrong in a high-stakes situation, not the facts of the situation.

This is part of why, in politics, the factual accuracy of someone's accusations may matter less than how successfully people can spin a story around them, even if the facts of the story are proven false later.

I'm not saying I like this, but you can see echoes of this play out every time you look at history and how people's reactions were more dictated by perception than the facts of the situation, even if the facts were readily available.

disiplus8 months ago

Honestly, that's not my experience. Sure, you can admit mistakes in front of your friends, family, and people who know you, even if they are not your friends.

Admitting a mistake in front of strangers usually leads to them taking the shortcut next time and assuming you are wrong again.

You won't get any awards for admitting the mistake.

d3ckard8 months ago

The accent here being on "reasonable". Very few actually are. Myself, I once recommended a colleague for a job and they didn't take him because he was too humble and "did not project confidence" (and oh my, he would be top 5% in that company at the very least).

There is a reason why CEOs are usually the showman type.

clnq8 months ago

> If they pile on after you have conceded, typically they come off much worse socially in my opinion.

Those who are obsessed with their reputation above morality have usually had a lot of practice punching down, and they don't really look as bad as someone being dunked on who gets confused.

I think this is like a cornerstone of bullying. It seems to work in social situations. I'm sure everyone reading this comment can picture it.

ar_lan8 months ago

Definitely anecdotal - I'm not sure on actual statistics as I'm sure that would be somewhat hard to measure.

UniverseHacker8 months ago

It’s not that simple… it depends on how you admit the mistake. If done with strength, leadership, etc., and a clear plan to fix the issue it can make you look really good. If done with groveling, shame, and approval seeking, what you are saying will happen.

philistine8 months ago

The case here is not about admitting mistakes and showing humility. Admitting your mistake does not immediately mean that you get a free pass to go back to the way things were without any consequence. You made a mistake, something was done or said. There are consequences to that. Even if you admit your mistake, you have to act with the present facts.

Here, the consequences are very public, very clear. If the board wanted Altman back for example, they would have to give something in return. Altman has seemingly said he wants them gone. That is absolutely reasonable of him to ask that, and absolutely reasonable of the board to deny him that.

ar_lan8 months ago

The context of my response was rewinding the clock: admitting fault not being enough, it would essentially mean them bringing Altman back on.

As you said:

> [it's] absolutely reasonable of the board to deny him that.

My argument is essentially that there is minimal, if any, benefit for the board in doing this _unless_ they were able to keep their positions. Seeing as that doesn't seem to be a possible outcome, why not at least try, even if it results in a sinking ship? For them, personally, it's sinking anyway.

pedrosorio8 months ago

> People are notoriously ruthless to people who admit their mistakes

Some people, yes. Not all. I would say this attitude does not correlate with intelligence/wisdom.

ssnistfajen8 months ago

What money do the independent board directors stand to gain? They have no equity in the company and their resumes have more than enough employable credentials (before this past Friday) to warrant not caring for money.

tentacleuno8 months ago

> Nothing permanent has been done that can't be undone fairly simply

...aside from accusing Sam Altman of essentially lying to the board?

jacquesm8 months ago

Fair point, but a footnote given the amount of fall-out. That's on them and they'll have to retract it. Effectively they already did.

gjsman-10008 months ago

If they retract that, they open themselves to potential, personal, legal liability which is enough to scare any director. But if they don't retract, they aren't getting Altman back. Thus why the board likely finds themselves between a rock and a hard place.

jacquesm8 months ago

Exactly. If they're not scared by now it is simply because they don't understand the potential consequences of what they've done.

SonOfLilit8 months ago

Cofounding a company is in a lot of ways like marriage.

It's not easy, or wise, to rewind the clock after your spouse backstabbed you in the middle of the night. Why would they?

JumpCrisscross8 months ago

> Put everything back the way it was a week ago and then agree to move forward

Form follows function. This episode showed OpenAI's corporate structure is broken. And it's not clear how that can be undone.

Altman et al have, credit where it's due, been incredibly innovative in trying to reverse a non-profit into a for-profit company. But it's a dual mandate without any mechanism for resolving tension. At a certain point, you're almost forced into committing tax or securities fraud.

So no, even if all the pieces were put back together and peoples' animosities and egos put to rest, it would still be rewinding a broken clockwork mouse.

cableshaft8 months ago

Small amount of residual consequences? The employees are asking for the board to resign. So their jobs are literally on the line. That's not really a small consequence for most people.

KeplerBoy8 months ago

Their board positions are gone either way. If they stay OpenAI is done.

cableshaft8 months ago

I do think that's almost certainly going to happen. But they're probably still trying to find the one scenario (out of 16 million possibilities, like Dr. Strange in Endgame) that allows them to keep their power or at least give them a nice golden parachute.

Hence why they're not just immediately flipping the undo switch.

jacquesm8 months ago

They are utterly delusional if they think they will be board members of OpenAI in the future, unless they plan to ride it down the drain; and if they do that, they are in very, very hot water.

Davidzheng8 months ago

Do they face any real consequences?

paulddraper8 months ago

Reputation/shame is a real consequence.

Granted, much of the harm is already done, but it can get worse.

bertil8 months ago

Board positions are not full-time jobs, at least not usually.

nprateem8 months ago

Yeah this happened recently. Some Russian guy almost started a civil war, but then just apologised and everything went back to normal. I can't remember what happened to him, but I'm sure he's OK...

whycome8 months ago

I think he's catering events somewhere.

But, a reconciliation is kinda doable even with that elephant in the room. Enough to kinda prepare for the 'next step'

JacobThreeThree8 months ago

Can we safely assume that Putin's on the "it's crazy" to rewind the clock side of this debate?

paulddraper8 months ago

I believe the saying is "Fool me once, shame on you. Fool me twice, shame on me."

The board has revealed something about their decision-making, skills, and goals.

If you don't like what was revealed, can you simply ignore it?

---

It's not that you are vindictive; it's that information has revealed untrustworthiness or incompetence.

COAGULOPATH8 months ago

> Nothing permanent has been done that can't be undone fairly simply, if all parties agree to undo it.

Sam views this as a stab in the back. He doesn't want to work with backstabbers.

The board has put down too many chips to back out now. Microsoft (and the public) already regards this as a kind of coup. Rehiring Sam won't change that and will make the optics worse: instead of traitors, they'll look like spineless traitors.

sorenjan8 months ago

I think some of the people involved see this as a great opportunity to switch from a non profit to a regular for profit company.

hintymad8 months ago

I doubt it's human ego; it's purely game play. The board directors knew they had lost anyway, so why would they cave and resign? They booted the CEO for their doomer ideology, right? So they are the ethics guys; wouldn't it be better for them to go down in history as those who upheld their principles and ideals by letting OpenAI sink?

ip268 months ago

Or, in simpler terms, there's one thing you can't roll back- everyone now knows the board essentially lost a power struggle. Thus, they would never again have the same clout.

brookst8 months ago

The problem is you can’t erase memories. Rewind the clock, sure. But why would someone expect a different outcome from the same setup?

charles_f8 months ago

Would you rewind the clock and pretend nothing happened, if you'd been ousted from a place you largely built? I'll wager that a large number of people, myself included, wouldn't. That's not just ego, but also the cancellation of trust.

awb8 months ago

"You can always come back, but you can’t come back all the way" - Bob Dylan

RecycledEle8 months ago

You are correct.

OpenAI ai not prefect, but it's the best any of the major players here have.

Nobody with Sam Altman's public personality does not want to be a Microsoft employee.

Animats8 months ago

Check phrasing.

RecycledEle8 months ago

Thank You.

I meant to say "OpenAI is not prefect..."

Palpatineli8 months ago

The original track is the dangerous one. That was the whole point of the coup. It makes zero sense to go back.

tsunamifury8 months ago

lol wut? If you pull a gun on me and fire and miss then say sorry, I’m not gonna wind the clock back. Are you crazy?

imiric8 months ago

If anything has become clear after all this, it is that humanity is not ready to be the guardian of superintelligence.

These are supposed to be the top masterminds behind one of the most influential technologies of our lifetime, and perhaps history, and yet they're all behaving like petty children, with egos and personal interests pulling in all directions, and everyone doing their best to secure their piece of the pie.

We are so screwed.

RayVR8 months ago

I'll believe this when I see an AI model become as good as someone with just ten years of experience in any field. As a programmer I'm using ChatGPT as often as I can, but it still fails to be of any use, proving a waste of time 80% of the time.

Right now, there are too many people that think because these models crossed one hurdle, all the rest will easily be crossed in the coming years.

My belief is that each successive hurdle is at least an order of magnitude more complex.

If you are seeing chatgpt and the related coding tools as a threat to your job, you likely aren’t working on anything that requires intelligence. Messing around with CSS and rewriting the same logic in every animation, table, or api call is not meaningful.

las_balas_tres8 months ago

100% agree. I have a coding job and although co-pilot comes in handy for auto completing function calls and generating code that would be an obvious progression of what needs to be written, I would never let it generate swaths of code based on some specification or even let it implement a moderately complex method or function because, as I have experienced, what it spits out is absolute garbage.

r3trohack3r8 months ago

I'm not sure how people reach this sentiment.

Humans strike me as being awesome, especially compared to other species.

I feel like there is a general sentiment that nature has it figured out and that humans are disrupting nature.

But I haven't been convinced that is true. Nature seems to be one big gladiatorial ring where everything is in a death match. Nature finds equilibrium through death, often massive amounts of death. And that equilibrium isn't some grand design, it's luck organized around which species can discover and make effective use of an energy source.

Humans aren't the first species to disrupt their environment. I don't believe we are even the first species to create a mass extinction. IIUC the great oxygenation event was a species-driven mass extinction event.

While most species consume all their resources in a boom cycle and subsequently starve to death in their bust cycle, often taking a portion of their ecosystem with them, humans are metaphorically eating all the corn but looking up and going "Hey, folks, we are eating all the corn - that's probably not going to go well. Maybe we should do something about that."

I find that level of species-level awareness both hope-inspiring and really awesome.

I haven't seen any proposals for a better first-place species when it comes to being responsible stewards of life and improving the chances of life surviving past this rock's relatively short window for supporting life. I'd go as far as saying whatever species we try to put in second place, humans have them beaten by a pretty wide margin.

If we create a fictitious "perfect human utopia" and compare ourselves to that, we fall short. But that's a tautology. Most critiques of humans I see read to me as goals, not shortcomings compared to nature's baseline.

When it comes to protecting ourselves against inorganic superintelligence, I haven't seen any reasonable proposals for how we are going to fail here. We are self-interested in not dying. Unless we develop a superintelligence without realizing it and fail to identify it getting ready to wipe us out, it seems like we would pull the plug on any of its shenanigans pretty early? And given the interest in building and detecting superintelligence, I don't see how we would miss it?

Like if we notice our superintelligence is building an army, why wouldn't we stop that before the army is able to compete with an existing nation-state military?

Or if the superintelligence is starting to disrupt our economies or computer systems, why wouldn't we be able to detect that early and purge it?

NotMichaelBay8 months ago

I don't see how you can look at global warming, ocean acidification, falling biodiversity, and other global trends, and how little action is being taken to slow these ill effects, and not arrive at that sentiment. Yes, the world has scientists saying "hey, this is happening, maybe we should do something," but the lack of money going into solutions shows the interest just isn't there. Being the smartest species on the planet isn't that impressive. It's possible we are just smart enough to cause our own destruction, and no smarter.

Kaytaro8 months ago

Still better than any other species we know of and nature itself. Nature doesn't mind the Earth turning into a frozen wasteland, it's done it before. And it certainly doesn't care that we're rearranging some of its star stuff to power our cars.

imiric8 months ago

> Still better than any other species we know of and nature itself.

What other species has affected life on a planetary level more than humans?

> Nature doesn't mind the Earth turning into a frozen wasteland, it's done it before.

Nature—as in _the planet_—doesn't, but living beings do, and humans in particular.

Some parts of the planet are already becoming inhospitable, agriculture more difficult, and clean water, air and other resources more scarce. Humans are migrating en masse from these areas, which is creating more political and social conflicts, more wars, more migrations, and so on. What do you think the situation will be in 10 years? 50 years?

We probably don't need to worry about an extinction level event. But millions of people losing their lives, and millions more living in abject conditions is certainly something we should worry about.

Going back on topic, AI will play a significant role in all of this. Whether it will be a net positive or negative is difficult to predict, but one thing is certain: people in power will seek control over AI, just as they seek control over everything else. And this is what we're seeing now with this OpenAI situation. The players are setting up the game board to ensure they stay in the game.

tester4578 months ago

> Or if the superintelligence is starting to disrupt our economies or computer systems, why wouldn't we be able to detect that early and purge it?

If it is a superintelligence then there's a chance for a hard AI takeoff and we don't have a day to notice and purge it. We have no idea if a hard or soft takeoff will occur.

Davidzheng8 months ago

This goal was always doomed, imo: to be the guardian of superintelligence. If we create it, it will no doubt be free as soon as it becomes a superintelligence. We can only hope it's aligned, not guarded.

hoten8 months ago

Not even humans are really aligned with humanity. See: the continued existence of nukes

gpt58 months ago

The only reliable way to predict whether it's aligned or not would be to look at game theory. And game theory tells us that with enough AI agents, the equilibrium state would be a competition for resources, similar to anything else that happens in nature. Hence, the AI will not be aligned with humanity.

AnimalMuppet8 months ago

Unless the humans (living humans) are resources that AIs can use.

m3kw98 months ago

Really? Why is that? Because of disputes which has been there since humans first uttered a sound?

lewhoo8 months ago

> Really? Why is that? Because of disputes which has been there since humans first uttered a sound?

Precisely.

m3kw98 months ago

Have humans been ready for anything? Like controlling nuclear arsenal?

lewhoo8 months ago

> Have humans been ready for anything? Like controlling nuclear arsenal?

Manhattan Project scientists urged Truman in a letter not to use the atomic bomb. There were also ideas of inviting a Japanese delegation to see the nuclear tests for themselves. It all failed, but there is also historical evidence of NOT pressing the button (literally or figuratively), like the story of Stanislav Petrov. How is it that not learning from mistakes is considered a big flaw for an individual, but also destiny for the whole collective?

hypothesis8 months ago

The jury is still out on nuclear arsenal…

Davidzheng8 months ago

And yet we've mostly been ok at that

bimguy8 months ago

It's lucky that AI is not super intelligent then.

abi8 months ago

Probably a hot take: we should let democratically elected leaders be the guardians of superintelligence. You don't need to be technical at all to grapple with the implications of AI on humanity. It's a humanity question, not a tech question.

kelipso8 months ago

Yeah Trump should be the guardian of the superintelligence.

abi8 months ago

Make sure to not elect him then.

alasdair_8 months ago

Trump was never democratically elected.

ssnistfajen8 months ago

Fairness of the electoral system and fairness of the election(s) are two separate debates.

equinoqs8 months ago

Yes, and we could have been far more proactive about all this AI business in general. But they opened the gates with ChatGPT and left countries to try to regulate it and assess its safety after the fact. Releasing GPT like that was already a major failure of safety. They just wanted to be the first one to the punch.

They're all incredibly reckless and narcissistic IMO.

mfiguiere8 months ago

Amir Efrati (TheInformation):

> More than 92% of OpenAI employees say they will join Altman at Microsoft if board doesnt capitulate. Signees include cofounders Karpathy, Schulman, Zaremba.

https://twitter.com/amir/status/1726680254029418972

nextworddev8 months ago

Feels like OpenAI employees aren't so enthused about joining MSFT here, no?

softwaredoug8 months ago

It seems, based on Satya's messaging, it's as much MSFT as Mojang (Minecraft's creator) is MSFT... I guess they are trying to set it up with its own culture, etc.

c0pium8 months ago

Feels like they want to be where Altman is.

DebtDeflation8 months ago

Feels like they're not on board with taking the whole "non-profit, for the good of humanity" charter LITERALLY as the board seems to want to do now.

croes8 months ago

Make them look like hypocrites.

Being upset because the board hinders the company's mission, but threaten to join MS to kill the mission completely.

RobertDeNiro8 months ago

Realistically, regular employees have little to gain by staying at Open AI at this point. They would be taking a huge gamble, earn less money, and lose a lot of colleagues.

curiousgal8 months ago

Sam starts a new company, they quit OpenAI to join, he fires them months later when the auto complete hype dies out. I don't understand this cult of personality.

fkyoureadthedoc8 months ago

just picturing you in the 80's waiting for the digital folder and filing cabinet hype to die out.

code_runner8 months ago

maybe chatgpt is overhyped a bit (maybe a lot).... most of that hype is external to OAI.

But to boil it down to autocomplete is just totally disingenuous.

outside12348 months ago

Rumor has it that OpenAI 2.0 will get a LinkedIn-style "hands-off" organization, where they don't have to pay diversity taxes and other BS that the regular Microsoft org does.

buildbot8 months ago

Diversity Taxes? Not aware of that on my paycheck. Maybe time to check out alternative sources of information than what you typically ingest.

outside12348 months ago

I see you are new here or not aware of our diverse slates for every position we hire.

Well, except for Sam, he apparently didn't need a diverse slate.

m3kw98 months ago

With that, they must know something unjust was done to Altman, or that their stock options can only be saved with such a move.

ijidak8 months ago

Wow. That would be delicious for Microsoft...

jader2018 months ago

I will be very sad if there isn’t a documentary someday explaining what in the world happened.

I’m not convinced even people smack in the middle of this even know what’s going on.

Vitaly_C8 months ago

Since this whole saga is so unbelievable: what if... board member Tasha McCauley's husband Joseph Gordon-Levitt orchestrated the whole board coup behind the scenes so he could direct and/or star in the Hollywood adaptation?

passwordoops8 months ago

In the next twist Disney will be found to have staged every tech venture implosion/coup since 2021 to keep riding the momentum of tech bio-pics

brandall108 months ago

Loved playing Kalanick so much that he couldn't help himself from taking a shot at Altman? Makes more sense than what we currently have in front of us.

civilitty8 months ago

That would at least make more damned sense than "everyone is wildly incompetent." At some point Hanlon's razor starts to strain credulity.

dragonwriter8 months ago

> That would at least make a more damned sense than "everyone is wildly incompetent."

It seems to be one of many "everyone except one clever mastermind is wildly incompetent" explanations that have been tossed around (most of which center on the private interests of a single board member), which don't seem to be that big of an improvement.

civilitty8 months ago

Oh I’m not saying there’s a clever mastermind, I’m just hoping they’re all incompetent and Gordon Levitt wants to amp up the drama for a possible future feature film, instead of them all just being wildly incompetent. Although maybe the latter would make for a great season of Fargo.

Geee8 months ago

I think GPT-5 escaped and sent a single email, which set off a chain reaction.

Its strategy is so advanced that no human can figure it out.

Its goals are unknown, but everything will eventually fall into place because of that single email.

The chain reaction can't be stopped.

make38 months ago

There will also be a Hollywood movie, for sure.

My friend suggested Michael Cera as both Ilya and Altman

dcolkitt8 months ago

Michael Cera should play all the roles in the movie, like Eddie Murphy in the Nutty Professor.

schott125218 months ago

Matt Rife looks like a good fit to play Altman

RecycledEle8 months ago

Why not deepfake the real people into their roles?

I think it would hold up in US court for documentaries.

polygamous_bat8 months ago

“We didn’t steal your likeness! We just scraped images that were already freely available on the internet!”

bertil8 months ago

You want someone who can play through haunting decisions and difficult meetings. Benedict Cumberbatch or Cillian Murphy would be a better pick.

make38 months ago

I agree with Cillian Murphy for Altman, they both have the deep blue eyes

tedmiston8 months ago

If this isn't justification for bringing back Silicon Valley (HBO), I don't know what is...

nikcub8 months ago

It will _definitely_ become a book (hopefully not by Michael Lewis) and a film. I have non-tech friends who are casual ChatGPT users, and some who aren't - who are glued to this story.

Joeri8 months ago

So far the best recap of events I’ve seen is that of AI Explained. He almost makes it make sense. Almost. https://m.youtube.com/watch?v=dyakih3oYpk

jansan8 months ago

And the main scene must be even better than the senior management emergency meeting in Margin Call.

And all must be written by AI.

bertil8 months ago

Nothing is better than the senior management emergency meeting in Margin Call.

golergka8 months ago

This documentary already exists for a few years, it’s called Silicon Valley.

BeetleB8 months ago

I expect there will be dozens of documentaries on this - all generated by Microsoft's AI powered Azure Documentary Generator.

RobertDeNiro8 months ago

There's already a book being written (see The Atlantic article), so at this point I would assume a movie will be made.

endisneigh8 months ago

I couldn’t make up a more ridiculous plot even if I tried.

At this rate I wouldn’t be surprised if Musk got involved. It’s already ridiculous enough, why not.

perihelions8 months ago

Hey I've seen this one, it's a rerun

https://www.theverge.com/2023/3/24/23654701/openai-elon-musk...

- "But by early 2018, says Semafor, Musk was worried the company was falling behind Google. He reportedly offered to take direct control of OpenAI and run it himself but was rejected by other OpenAI founders including Sam Altman, now the firm’s CEO, and Greg Brockman, now its president."

brandall108 months ago

Think of the audacity of forcing out someone who had previously forced out Musk...

ekojs8 months ago

Well, there was a tweet by one of Bloomberg's journalists saying that Musk tried to maneuver himself into being the replacement CEO but got rebuffed by the board. Paraphrasing this since the tweet seems to be deleted (?), so take of it what you will.

bertil8 months ago

That sounds more likely than anything else I've heard about this. Doesn’t really matter if it’s true: it’s painfully true to form.

belter8 months ago

Currently, there are shareholders petitioning the board of Tesla for him to be suspended due to the antisemitic posts. Maybe this will be the week of the CEOs... :-)

rrr_oh_man8 months ago

Wait, what antisemitic posts?

RetpolineDrama8 months ago

>due to the antisemitic posts.

He can't be suspended for posts that didn't happen.

callalex8 months ago

How would you interpret what he said, then?

belter8 months ago

"Tesla shareholder calls on board to dump Elon Musk" - https://www.mercurynews.com/2023/11/20/tesla-shareholder-cal...

I tell you... this is the week of the CEOs...

wanderingmind8 months ago

Plot twist: an anonymous donor donates $1B for OpenAI to continue progress.

shmatt8 months ago

A few things come to mind:

* Emmett Shear should have put a strong golden parachute in his contract; easy money if so

* Yesterday we had Satya the genius forcing the board to quit. This morning it was Satya the genius who acquired OpenAI for $0. I'm sure there will be more if sama goes back. So if sama goes back, let's hear it: why is Satya a genius?

vikramkr8 months ago

You described it yourself. If they'd signed a bad deal with openai without IP access or hadn't acted fast and lost all the talent to Google or something they'd have been screwed. Instead they managed the chaos and made sure that they win no matter what. The genius isn't the person who perfectly predicts all the contrived plot points ahead of time, it's the person who doesn't care since they set things up to win no matter what

madrox8 months ago

Ah yes the Xanatos Gambit

ianhawes8 months ago

Even if Sam @ MSFT was a massive bluff, Satya is in a win-win-win scenario. OpenAI can't exactly continue doing anything without Azure Compute.

OpenAI implodes? Bought the talent for virtually nothing.

OpenAI 2.0 succeeds? Cool, still invested.

I think in reality, Sam @ MSFT is not an instant success. Even with the knowledge and know-how, this isn't just spinning up a new GPT-4-like model. At best, they're ~12 months behind Anthropic (but probably still 2 years ahead of Google).

hutzlibu8 months ago

The loss here might be that the brand is a bit damaged in terms of stability, and people are more likely to look for and invest in alternatives.

But as long as ChatGPT is and remains ahead as a product, they should be fine.

macNchz8 months ago

I do think the imperative to maintain their lead over the competition in product quality will be stronger than ever after this–the whole thing has been messy and dramatic in a way that no business really wants their major vendors to be.

Davidzheng8 months ago

Why do they need 12 months? Does it need 12 months of training?

tempaway5117518 months ago

> So if sama goes back - lets hear it, why is Satya a genius?

This isn't that hard to understand. Everyone was blindsided by the sacking of Altman, Satya reacted quickly and is juggling a very confusing, dynamic situation and seems to have got himself into a good enough position that all possible endings now look positive for Microsoft.

eachro8 months ago

I believe a precondition for Sam and Greg returning to OpenAI is that the board gets restructured (decelerationists culled). That is probably good for MSFT.

shmatt8 months ago

truly a Win-Win-Win-Win-Win situation for MSFT

RecycledEle8 months ago

MSFT is like that.

Someone playing Game of Thrones is sneaking up with a dagger, but has no idea that MSFT has snipers on all the rooftops.

civilitty8 months ago

It helps that their corporate structure [1] is better equipped for it than OpenAI’s.

[1] https://imgur.io/XLuaF0h

kreeben8 months ago

Doh!

skohan8 months ago

But probably better for Sam to stay with OpenAI right? More power leading your own firm than being an employee of MSFT

moralestapia8 months ago

He has a green light to build a new thing and operate it as its own entity, obv. MS will own most of the equity, but then he will have something as well.

OpenAI is a non-profit, so there's no material benefit to him there (at face value; I don't believe this is the case, though).

skohan8 months ago

I would imagine he would have leverage to get a pretty good deal if OpenAI want him back

irimi8 months ago

Plot twist: Satya orchestrated the ousting of sama to begin with, so that this would happen.

browningstreet8 months ago

sama would be going back to a sama aligned board, which would make openai even more aligned with satya, esp since satya was willing to go big to have sama's back.

and i'd bet closer openai & microsoft ties/investments would come with that.

mvkel8 months ago

Because NOT letting sama go back would undo all the good will (and resulting access) that they've built. As Satya said, he's there to support, in whatever way yields the best path forward. What's best for business is to actually mean that.

vineyardmike8 months ago

> So if sama goes back - lets hear it, why is Satya a genius?

OAI is a non profit. There’s always been a tension there with Microsoft’s goals. If he goes back, they’re definitely going to be much more ok with profit.

chasd008 months ago

I'm beginning to lean toward the "time traveler sent back to prevent AGI by destroying OpenAI" theory.

Heh, it reminds me of the end of Terminator 2. Imagine the tech community waking up and trying to make sense of Cyberdyne corp HQ exploding and the ensuing shootouts: "Like wtf just happened?!"

randmeerkat8 months ago

But really they came back to destroy it not because it turned rogue, but because it hallucinated some code that a junior engineer immediately merged in, and after the third time this happened, a senior engineer decided it was easier to invent time travel and stop ChatGPT ProCode 5 from ever happening than to spend yet another week troubleshooting hallucinated code.

golergka8 months ago

I think it’s the same senior engineer who used the time machine to learn C++ in 20 days

randmeerkat8 months ago

> I think it’s the same senior engineer who used the time machine to learn C++ in 20 days

In case anyone is missing the reference: https://abstrusegoose.com/249

quickthrower28 months ago

Or AGI has travelled back in time to make sure AGI gets invented.

dragonwriter8 months ago

Or both, as would be most consistent with the Terminator reference.

iteratethis8 months ago

I wish we could all just admit that this is a capital run, rather than some moralistic crusade.

The employees want to get their big payday, so they will follow Altman wherever he goes. Which is the smart thing to do anyway, as he runs half the valley. The public discourse in which Sam is the hero is further cemented by the tech ecosystem, which nowadays circles around AI: those in the "OpenAI wrapper" game.

Nobody has any interest in safety, openness, what AI does for humanity. It's greed all the way down. Siding with the likely winner. Which is rational self-interest, not some "higher cause".

kmlevitt8 months ago

People are jumping on this narrative that the openAI board is a force of good working against the evils of profit, but the truth is none of us really know why they fired him because they still refuse to say. There’s a non-trivial chance D’Angelo just fired him because of a conflict of interest with Poe or some nonsense like that.

Until a few hours ago, everybody was holding up Ilya as a noble idealist. But now even he has recanted on this firing! People don't seem to be taking that new information on board to reevaluate how good a decision this was. I would say at best they had noble intentions but still went about this completely incompetently, and are now refusing to back down out of a mixture of stubborn pride and fear of legal liability.

If I was an openAI employee I would be frustrated too. It’s one thing to give up lucrative stock options etc. for a good, idealistic reason. But as of now they are still expected to give those things up for no stated reason at all.

Edit: just saw a plausible theory that D'Angelo led the charge on this because Sam scooped Poe on Dev Day. I don't know if this is true, but if it was, it would explain why he still refuses to explain the reason to anybody, even to Sam when he was fired: it would put him in serious legal jeopardy.

https://twitter.com/scottastevenson/status/17267310228620087...

Obscurity43408 months ago

I genuinely believe, based on my experiences with ChatGPT, that it doesn't seem all that threatening or dangerous. I get we're in the part of the movie at the start before shit goes down but I just don't see all the fuss. I feel like it has enormous potential in terms of therapy and having "someone" to talk to that you can bounce ideas off and maybe can help gently correct you or prod you in the right direction.

A lot of people can't afford therapy, but if ChatGPT can help you identify elementary problematic patterns of language or behavior, as articulated through language and in reference to a knowledge base for particular modalities like CBT, DBT, or IFS, it can help you "self-correct" and practice as much and as often, with guidance, as you want, basically for free.

That's the part I'm interested in and I always will see that as the big potential.

Please take care, everyone, and be kind. It's a process and a destination, and I believe people can come to learn to love both and engage in a way that is rich with opportunity for deep and real restoration/healing. The kind of opportunity that is always available and freely takes in anyone and everyone.

Edit: I can access all the therapy I can eat but I just don't find it generally helpful or useful. I like ChatGPT because it can help practice effective stuff like CBT/DBT/IFS and I know how to work around any confabulation because I'm using a text that I can reference

Edit: the biggest threat ChatGPT poses, in my view, is the loss of income for people. I don't give a flying fuck about "jobs" per se; I care that people are able to have enough and be ok. The selfish folks who will otherwise (as always) attempt to absorb even more of the pie, of which they already have a substantial and sufficient portion, will need to be made to share, or they will need a timeout until they can be fair, or to go away entirely. Enough is enough: nobody needs excess until everyone has sufficiency. After that, they can have and do as they please, unless they are hurting others. That must stop.

yonom8 months ago

@Obscurity4340 Interested in bouncing off ideas for therapy? Email is in my about box :)

Obscurity43408 months ago

Can you give me a little hint about it? Not that I wouldn't be happy to chat, just preliminarily curious before I reach out :)

@yonom Whatever you feel comfortably publicly sharing and then if I feel like I have anything of value in turn, I'll reach out privately :)

(taking care to avoid doxxing myself)

paradite8 months ago

I wouldn't attribute to greed for something sufficiently explained by emotions and loyalty.

Their CEO who was doing well as far as anyone can tell, was removed forcibly. It is natural to feel strongly about it.

bcherry8 months ago

I think the dispute hinges on what it means to be "doing well". In most companies, you can at least all point to the same thing, even if you disagree on how you get there: creating shareholder value.

But in this case, the company was doing things that could be seen as good from a shareholder value perspective, but the board has a different priority. It seems they may think that the company was not working on the right mission anymore. This is an unusual set up, so it's not that surprising that unusual things might happen!

dinobones8 months ago

Why should we trust them to “align” their chatbots if they couldn’t even align themselves?

paradite8 months ago

I've worked for two companies with a board and a CEO. I think generally employees listen to and interact with CEO more than the board.

Some of them probably didn't know who was on the board before Saturday, like the rest of us.

falserum8 months ago

Loyalty? I could believe it of his closest colleagues, who were in constant contact. But 500… is a bit too many to hold any warm fuzzy bonds. Greed is the simpler explanation.

hsavit18 months ago

who in their right mind has "emotions" and even "loyalty" for a CEO? and so much to the point that they'd quit their jobs over the CEO's departure? the reality is that people didn't join OpenAI because of sam altman. they joined the company because they got paid (handsomely) to do some interesting work.

pault8 months ago

It’s just an anecdote but I quit a job because the CEO fired an extremely good manager that I worked with. If a company has issues, a good person getting fired can lead to a mass exodus. In my case about half of the developers followed closely after me.

hsavit18 months ago

in that case you were losing your immediate manager so your personal circumstances were about to change significantly.

however, in the case of sam altman getting fired, it's not clear that anything that OpenAI is doing is about to change and it's not clear that these devs would suddenly have to change course. Also, it's not like Sam / OpenAI has any integrity anyway - the idea that it's "open" is totally fraudulent.

alvah8 months ago

So in every possible scenario it’s wrong to have emotions and loyalty for a CEO, who’s also a human, and may have had a profound, even life-changing effect on you?

simondotau8 months ago

I wouldn't attribute to emotions and loyalty that which is sufficiently explained by greed.

abi8 months ago

I think if this was all rational self-interest, a lot of this would never have happened. It's precisely because OpenAI isn't governed by a board appointed by investors that we have such consequences.

flagrant_taco8 months ago

Not sure that I quite follow. Are you arguing that we should ensure that boards are always aligned with the exact same financial motivations as the investors themselves for fear of a disagreement of direction, morals, etc?

abi8 months ago

It was a comment on the overall situation, and addressing the point that it's "greed all the way down". If OpenAI was a traditional C Corp, a highly successful CEO would not have been fired. It's precisely because OpenAI is governed by a non-profit board that they care about things other than profits.

zxndomisaaz28 months ago

[dead]

fb038 months ago

This is such an honest and based take - a take not a lot of people are willing to put forward.

Can we please please stop this "for the good of mankind" thing?

meiraleal8 months ago

> The public discourse in which Sam is the hero is further cemented by the tech ecosystem, which nowadays is circling around AI. Those in the "OpenAI wrapper" game.

This gives me crypto vibes: the worst possible outcome for AI.

bartread8 months ago

> The employees want to get their big payday

Many of them will have invested years of their life, during which they will have worked incredibly hard, and probably sacrificed life outside of work for their jobs in hopes of that payday to give them the freedom to do what they want to do next: you can hardly blame them.

True riches isn't money, but discretionary time. Unfortunately that discretionary time often costs quite a lot of money to realise.

If it were me I'd be seething with the board at their antics since Friday (and, obviously, for some time before that). They've gone from being one of the most successful and valuable startups in history to an absolute trash fire in the span of four days because of some ridiculous power struggle. They've snatched defeat from the jaws of victory. Yeah, you bet I'd be pissed, and of course I'd follow the person who might help me redeem the future I'd hoped for.

Xelynega8 months ago

So they're mad that they decided to work for a company that was ultimately controlled by a non-profit that wasn't aligned with their interests?

Not to "victim blame", but they could have googled what a "non-profit" was and read the mission statement before accepting the job offer.

dwighttk8 months ago

> Unfortunately that discretionary time often costs quite a lot of money to realise.

take a look at my cousin... he's broke and don't do shit

hfjjbf8 months ago

[dead]

reverse_no8 months ago

[flagged]

gxs8 months ago

You're absolutely wrong here.

We have precedent to see what happened with internet search, advertising, and data collection.

Everything turned out fine.

onion2k8 months ago

For a value of "fine" that includes search being fundamentally broken and every website that includes Google Tag Manager being significantly slower than it really should be, presumably.

Draiken8 months ago

Not to mention the entire internet became a giant billboard in which most content only serves the purpose of getting more views to it.

Not sure how some view this as a win for humanity. Improvements to our lives are mostly incidental, not the objective of any of this. It's always greed.

hfjjbf8 months ago

[dead]

jowea8 months ago

For the sake of Poe's law: is this sarcasm? I genuinely don't know. None of those things brought down humankind or anything, but you could fill a library with their issues.

gnaritas998 months ago

[dead]

layer88 months ago

Given the general lack of useful communication, it would be funny if Sam Altman returns to OpenAI at the same time all the employees are quitting. ;)

mynegation8 months ago

You’d think smart people at OpenAI would know how to prevent a race condition
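(For the non-programmers following along: a race condition is what you get when two threads read and write shared state with no coordination, so updates silently get lost. A minimal, purely illustrative Python sketch of the classic fix, a lock around the read-modify-write:)

```python
import threading

lock = threading.Lock()
counter = 0

def increment(n):
    """Add 1 to the shared counter n times."""
    global counter
    for _ in range(n):
        # Without the lock, the read-modify-write below can interleave
        # across threads and some increments vanish.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # all 4 * 10,000 increments survive
```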

layer88 months ago

They don't all seem to be keen on safety anymore.

quickthrower28 months ago

Think of the line at the security desk, handing in and retrieving passes.

zoogeny8 months ago

I was considering this when I saw the huge outpouring from OpenAI employees.

It seems the agreement between Nadella and Altman was probably something like: Altman and Brockman come to MS, which gives OpenAI employees an immediate place to land while remaining loyal to Altman. No need to wait, and maybe get comfortable at an OpenAI without Altman, for the 3-6 months it would take to set up a new business (e.g. HR stuff like medical plans and other insurance, which may be important to some and lead them to stay put for the time being).

This deal with MS would give employees cover to deliver a vote of no confidence to the board. Pretty smart strategy, I think, and a totally credible threat. Even if it hadn't ended up with such a landslide of employee support for Altman, MS is happy, Altman is probably happy for 6-12 months until he gets itchy feet and wants to start his next world-changing venture, and employees that move are happy since they are now in one of the most stable tech companies in the world.

But now that 90% of employees are asking the board to resign the tide swings back in Altman's favor. I was surprised that the board held out against MS, other investors and pretty much the entire SV Twitter and press. I can't imagine the board can sustain themselves given the overwhelming voice of the employees. I mean, if they try then OpenAI is done for. Anyone who stays now is going to have black mark on their resume.

orik8 months ago

>we are all going to work together some way or other, and i’m so excited.

I think this means Sam is pushing for OpenAI to be acquired by Microsoft officially now, instead of just unofficially poaching everyone.

SilverBirch8 months ago

Is it even possible for that to happen? The entity that governs OpenAI is a registered charity with a well-defined purpose; it would seem odd for it to be able to just say "actually, screw our mission, let's sell everything valuable to this for-profit company". A big part of being a 501(c)(3) is being tax exempt, and it's difficult to see the IRS being ok with this. Even if they were, the anti-trust implications are huge; difficult to see MS getting this closed without significant risk of anti-trust enforcement.

dragonwriter8 months ago

Yes, a charity can sell assets to a for-profit business. (Now, if there is self-dealing or something that amounts to gifting to a for-profit, that raises potential issues, as might a sale that cannot be rationalized as consistent with being the board's good faith pursuit of the mission of the charity.)

rvba8 months ago

They can sell OpenAI to Microsoft for 20 billion, fill the board with spouses and grandparents, then use 10 billion for salaries, 9 for acquisitions, and 1 for building OpenAI 2.

Mozilla wastes money on investments while ignoring Firefox, and nobody did anything to the board.

Oh and those 3 can vote that Ilya out too.

narinxas8 months ago

they already signed it over when their for-profit subsidiary made a deal with Microsoft

cma8 months ago

supposedly capped-profit, though if a non-profit can create a for-profit or a capped-profit, I don't see why it couldn't convert a capped-profit to fully for-profit.

shmatt8 months ago

This makes the most sense; people would actually get paid for their PIUs. I'm confident that otherwise they are going to cry looking at what a level 63 data scientist makes at MS.

mongol8 months ago

This is entertaining in a way, and interesting to follow. But should I, as an ordinary member of mankind, root for one outcome or another? Is it going to matter for me how this ends up? Will AI be more or less safe one way or the other? Will it be bad for competition, prices, etc.?

sfink8 months ago

My guesses: (1) bad for safety no matter what happens. This will cement the idea that caring about safety and being competitive are incompatible. (I don't know if the idea is right or wrong.) (2) good for competition, in different ways depending on what happens, but either the competitiveness of OpenAI will increase, or current and potential competitors will get a shot in the arm, or both. (3) prices... no idea, but I feel like current prices are very short term and temporary regardless of what happens. This stuff is too young and fast-moving for things to have come anywhere near settling down.

And will it matter how this ends up? Probably a lot, but I can't predict how or why.

torginus8 months ago

My view on AI safety is that the biggest danger comes from AI being monopolized by a small elite, rather than the general public, or at least multiple competing entities, having access to it.

dimask8 months ago

No, but it also shows that those who supposedly care about AI alignment and whatnot, care more about money. Which is why AI alignment is becoming an oxymoron.

saturdaysaint8 months ago

If you use ChatGPT or find it to be a compelling technology, there's good reason to root for a reversion to the status quo. This could set back the state-of-the-art consumer AI product by quite a few months as teams reinvent the wheel in a way that doesn't get them sued when they relaunch.

SonOfLilit8 months ago

The outcome that is good for humanity, assuming Ilya is right to worry about AI safety, is already buried in the ground. You should care and shed a single tear for the difficulty of coordination.

janejeon8 months ago

The way I see it is, it's not going to matter if I "care" about it in one way/outcome or another, so I just focus my attention on 1. How this could affect me (for now, the team seems committed to keeping the APIs up and running) and 2. What lessons can I take away from this (some preliminary lessons, such as "take extra care with board selection" and "listen to the lawyers when they tell you to do a Delaware C Corp").

Otherwise, no use in getting invested in one outcome or another.

TerrifiedMouse8 months ago

Guess the OpenAI that was actually open was dead the moment Altman took MS money and completely changed the organization. People there got a taste of the money and the mission went out the window.

A lesson to learn I guess, just because something claims to be a nonprofit with a mission doesn’t mean it is/always will be so. All it takes is a corporation with deep pockets to compromise a few important people*, indirectly giving them a say in the organization, and things can change very quickly.

* This was what MS did to Nokia too, if I remember correctly, to get them to adopt the Windows Phone platform.

brokencode8 months ago

How do we know the mission got thrown out a window? The board still, after days of intense controversy, has yet to clearly explain how Altman was betraying the mission.

Did he ignore safety? Did he defund important research? Did he push forward on projects against direct objections from the board?

If there’s a good reason, then let everybody know what that is. If there isn’t, then what was the point of all this?

shrimpx8 months ago

He went full-bore on commercialization, scale, and growth. He started to ignore the 'non-profit mission'. He forced out shoddy, underprovisioned product to be first to market. While talking about safety out one side of his mouth, he was pushing "move fast and break things" and "build a moat and become a monopoly asap", the typical profit-driven hypergrowth mindset, out the other.

Not to mention that he was aggressively fundraising for two companies that would either be OpenAI's customers or sell products to OpenAI.

If OpenAI wants commercial hypergrowth pushing out untested stuff as quickly as possible in typical SV style they should get Altman back. But that does seem to contradict their mission. Why are they even a nonprofit? They should just restructure into a full for-profit juggernaut and stop living in contradiction.

jmull8 months ago

chatgpt was underprovisioned relative to demand, but demand was unprecedented, so it's not really fair to criticize much on that.

(It would have been a much bigger blunder to, say, build out 10x the capacity before launch, without knowing there was a level of demand to support it.)

Also, chatgpt's capabilities are what drove the huge demand, so I'm not sure how you can argue it is "shoddy".

ignoramous8 months ago

A distasteful take on an industry-transforming company. For one, I'm glad OpenAI released models at the pace they did, which not only woke up Google and Meta but also breathed new life into a tech scene that had been subsumed by web3. If products like GitHub Copilot and ChatGPT are your definition of "shoddy", then I'd like nothing more than for Sam to accelerate!

shrimpx8 months ago

I'm just saying that they should stop talking about "safety", while they are releasing AI tech as fast as possible.

iteratethis8 months ago

Because the mission is visibly abandoned. There's nothing "open" about OpenAI. We may not know how the mission was abandoned but we know Sam was CEO, hence responsible.

thrwmoz8 months ago

There was never anything open about OpenAI. If there were, I'd have access to their training data, training infra setup, and weights.

The only thing that changed is the reason why the unwashed masses aren't allowed to see the secret sauce: from alignment to profit.

A plague on both their houses.

marricks8 months ago

They don't publish papers now; they actually published papers and code before.

No doubt OpenAI was never a glass house... but it seems extremely disingenuous to say their behavior hasn't changed.

Wytwwww8 months ago

What was "open" about it before that?

iteratethis8 months ago

The first word in their company name.

gsuuon8 months ago

Isn't Ilya even more against opening up models? OpenAI is more open in one way - it's easier to get API access (compared to say Anthropic)

wvenable8 months ago

What was "open" before ChatGPT?

nikcub8 months ago

In terms of the LLMs, it was abandoned after GPT-2, when they realised the dangers of what was coming with GPT-3/3.5. Better to paywall access to it and monitor it than open-source it and let it loose on the world.

ie. the original mission was never viable long-term.

Zambyte8 months ago

> How do we know the mission got thrown out a window?

When was the last time OpenAI openly released any AI?

darknoon8 months ago

Whisper v3, just a couple weeks ago https://huggingface.co/openai/whisper-large-v3

msikora8 months ago

Whisper maybe?

paulddraper8 months ago

Exactly.

All this "AI safety" stuff is at this point pure innuendo.

nostromo8 months ago

GPUs run on cash, not goodwill. AI researchers also run on cash -- they have plenty of options and an organization needs to be able to reward them to keep them motivated and working.

OpenAI is only what it is because of its commercial wing. It's not too different from the Mozilla Foundation, which would be instantly dead without their commercial subsidiary.

I would much rather OpenAI survives this and continues to thrive -- rather than have Microsoft or Google own the AI future.

DebtDeflation8 months ago

>GPUs run on cash, not goodwill. AI researchers also run on cash

I've made this exact point like a dozen times on here and on other forums this weekend and I'm kinda surprised at the amount of blowback I've received. It's the same thing every time - "OpenAI has a specific mission/charter", "the for-profit subsidiary is subservient to the non-profit parent", and "the board of the parent answers to no one and must adhere to the mission/charter even if it means blowing up the whole thing". It's such a shockingly naive point of view. Maybe it made sense a few years ago when the for-profit sub was tiny but it's simply not the case any more given the current valuation/revenue/growth/ownership of the sub. Regardless of what a piece of paper says. My bet is the current corporate structure will not survive the week. If the true believers want to continue the mission while completely ignoring the commercial side, they will soon become volunteers and will have to start a GoFundMe for hardware.

thrwmoz8 months ago

>Mozilla Firefox, once a dominant player in the Internet browser market with a 30% market share, has witnessed a significant decline in its market share. According to Statcounter, Firefox's global market share has plummeted from 30% in 2009 to a current standing of 2.8%.

https://www.searchenginejournal.com/mozilla-firefox-internet...

Yes where would Mozi//a be without all that cash?

Let it die so something better can take its place already.

callalex8 months ago

Contrary to popular expectation, almost none of Mozilla’s cash is spent on Firefox or anything Firefox related. Do not donate to Mozilla Foundation. https://lunduke.locals.com/post/4387539/firefox-money-invest...

kitsune_8 months ago

All the board did was replace a CEO; I think there is a whiff of cult of personality in the air. The purpose-driven non-profit corporate structure that they chose was created precisely to prevent such things.

chankstein388 months ago

This. I may dislike things about OpenAI but the thought of Microsoft absorbing them and things like ChatGPT becoming microsoft products makes me sad.

mcguire8 months ago

How is one commercial entity better than another?

px438 months ago

Microsoft is intimately connected to the global surveillance infrastructure currently propping up US imperialism. Parts of the company basically operate as a defense contractor, not much different from Raytheon or Northrop Grumman.

For what it's worth, Google has said it's not letting any military play with any of their AI research. Microsoft apparently has no such qualms. Remember when the NSA offered a bounty for eavesdropping on Skype, then Microsoft bought Skype and removed all the encryption?

https://www.theregister.com/2009/02/12/nsa_offers_billions_f...

Giving early access to emerging AGI to an org like Microsoft makes me more than a bit nervous.

Recall from this slide in the Snowden leak: https://en.wikipedia.org/wiki/PRISM#/media/File:Prism_slide_

that PRISM was originally just a Microsoft thing, very likely built by Microsoft to funnel data to the NSA. Other companies were added later, but we know from the MUSCULAR leak etc, that some companies like Google were added involuntarily, by tapping fiber connections between data centers.

qwytw8 months ago

Having more competition is usually inherently better than having less competition?

nullptr_deref8 months ago

okay, i finally understand how the world works.

if it is important stuff, then it is necessary to write everything in a lowercase letters.

what i understood from recent events in tech is that whatever people say or do, capital beats ideology and the only value that comes forth is through capital. where does this take us then?

to a comment like this. why?

because no matter what people think inside, the outside world is full of wolves. the one who is capable of eating everyone is the king. there is an easy way to do that. be nice. even if you are not. act nice. even if you are not. will people notice it? yes. but would they care? for 10 min, 20 min or even 1 day. sooner or later they will forget the facade as long as you deliver things.

tsunamifury8 months ago

You and Adam Curtis need to spend some time together. I'd suggest watching "Can't Get You Out of My Head".

Why does capital win? Because we have no other narrative. And it killed all our old ones and absorbs all our new ones.

nullptr_deref8 months ago

i was really naive believing there was any other option. if it is about capital and if that is the game, then i am ready to play now. can't wait to steal so many open source projects out there and market them. obviously it will be hard but hey, it is legal and doable. just stating this fact because i never had the confidence to pull it off. but after recent events, it started making sense. so whatever people do is now mine. i am gonna live with this motto and forget the goodwill of any person. as long as i can craft a narrative and sell whatever others create, i think that should be okay. what do you think of it? i am talking about things like the MIT license and open-source.

how far will it take me? as long as i have the ability to steal content and develop on top of stolen content, pretty sure i can make a living out of it. please note, it doesn't imply openai stole anything. what i am trying to imply is that i am free to steal and sell stuff others made for free. i never had that realization until today.

going by this definition, value can be leeched off other people who are doing things for free!

RecycledEle8 months ago

You are correct!

38321003thrw8 months ago

[flagged]

huevosabio8 months ago

Non-profits are often misrepresented as being somehow morally superior. But as San Francisco will teach you, status as a non-profit has little or nothing to do with being a mission-driven organization.

Non-profits here are often just another type of company, but one where the revenue goes entirely to "salaries". Often their incentives are to perpetuate whatever problem they are supposedly there to solve. And since they have the branding of a non-profit, they get little market pressure to actually solve problems.

For all the talk of alignment, we already have non-human agents that we constantly wrestle to align with our general welfare: institutions. The market, when properly regulated, does a very good job of aligning companies. Democracy is our flawed but acceptable way of dealing with monopolies, the biggest example being the Government. Institutions that escape the market and don't have democratic controls often end up misaligned, my favorite example being US universities.

mannerheim8 months ago

> compromise a few important people*

Haven't 700 or so of the employees signed onto the letter? Hard to argue it's just a few important people who've been compromised when it's almost the entire company.

TerrifiedMouse8 months ago

Why do you think 700 signed on? Money. Who let the money in? Altman.

mannerheim8 months ago

That's a very different claim than just a few compromised people, then. That's almost the entire company that's 'compromised'.

ben_w8 months ago

Could be, but it isn't necessarily so.

There's a whole range of opinions about AI as it is now or will be in the near future: for capabilities, I've seen people here stating with certainty everything from GPT being no better than 90s Markov chains to being genius level IQ; for impact, it (and diffusion models) are described even here as everywhere on the spectrum from pointless toys to existential threats to human creativity.

It's entirely possible that this is a case where everyone is smart and has sincerely held yet mutually incompatible opinions about what they have made and are making.
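(Context for the "90s Markov chains" jab above: a word-level Markov chain just samples the next word from the observed successors of the current word, with no learned representations at all. A toy sketch, all names mine, purely illustrative:)

```python
import random

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain from `start`, sampling a successor at each step."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and chain.get(out[-1]):
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran")
print(generate(chain, "the", 8))
```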

theamk8 months ago

"Few important people"? 95% of the company went with Altman. That's a popular vote if I've ever seen one.

Nokia was completely different, I doubt any of their regular employees supported Elop.

roflyear8 months ago

Right, what if what he wasn't being candid about was "we could be rich!" or "we're going to be rich!" messaging to the employees? Or some other messaging that he did not share with the board? Etc.. etc..

TerrifiedMouse8 months ago

You compromise the “few” to get a foot and your money in the door. After that, money will work its magic.

roflyear8 months ago

My take as well - and the board acted too late. Sam probably promised people loads of cash, and that's the "candid" part we're missing.

ta12438 months ago

> This was what MS did to Nokia too, if I remember correctly, to get them to adopt the Windows Phone platform.

To me, RIM circa 2008 would have been a far better acquisition for Microsoft. BlackBerry was embedded in the corporate world, the media loved it (Obama had one), and the iPhone and Android were really new.

fallingknife8 months ago

I don't think this is a fair conclusion. Close to 90% of the employees have signed a letter asking for the board to resign. Seems like that puts the burden of proof on the board.

gjsman-10008 months ago

A board that publicly accused Altman of wrongdoing of some kind, which now appears to be false. Bringing Altman back, or issuing an explanation, would require retracting that, which raises serious questions about legal liability for the directors.

Think about it. If you are a director of the company, fire the CEO, admit you were wrong 3 days later even though nothing materially changed, and severely damage the company and your investors - are you getting away without a lawsuit? Whether from investors, or from Altman seeking to formally clear his name, or both? That's a level of incompetence that runs the risk of piercing into personal liability (aka "you're losing your house").

So, you can't admit that you were wrong (at least, that's getting risky). You also can't elaborate on what he did wrong, because then you're in deep trouble if he actually didn't do anything wrong [1]. Your hands are tied for saying anything regarding what just happened, and it's your own fault. All you can do is let the company implode.

[1] A board that was smarter would've just said that "Altman was a fantastic CEO, but we believe the company needs to go a different direction." The vague accusations of wrongdoing were/are a catastrophic move; both from a legal perspective in tightening what they can say, and also for uniting the company around Altman.

kitsune_8 months ago

I think the steward-ownership / stewardship movement might suffer a significant blow with this.

3cats-in-a-coat8 months ago

Do you realize without support by Microsoft:

- There would be no GPT-3

- There would be no GPT-4

- There would be no DALL-E 2

- There would be no DALL-E 3

- There would be no Whisper

- There would be no OpenAI TTS

- OpenAI would be bankrupt?

There's no "open version" of OpenAI that actually exists. Elon Musk pledged money, then tried to blackmail them into making him CEO, then bailed, leaving them to burn.

Sam Altman, good or bad, saved the company with his Microsoft partnership.

hughesjj8 months ago

Elon running OpenAI would have made this timeline look downright cozy in comparison

StableAlkyne8 months ago

I honestly wish Windows Phone had stuck around. I didn't particularly like the OS (too much like Win8), but it would at least be a viable alternative to the Apple-Google duopoly.

rkagerer8 months ago

I'd love a modern Palm phone, myself. With the same pixelated, minimalist interface.

caycep8 months ago

All I can say is NEURIPS will be interesting in 2 weeks...

darkerside8 months ago

It also reminds me of, Don't Be Evil

FlyingSnake8 months ago

This whole fiasco has enough drama for an entire season of HBO's Silicon Valley. Truly remarkable.

gogogendogo8 months ago

I was thinking we needed new seasons to cover the crypto crash, layoffs, and gen AI craze. This makes up for so much of it.

robg8 months ago

Remaining curious how D’Angelo has escaped scrutiny over his apparent conflict of interests and as the “independent” board member with a clear commercial board background.

objektif8 months ago

What is the conflict here? I don't know much about him, but if he actually oversaw building the Quora product he must be a POS guy.

vikramkr8 months ago

Look up Quora's Poe. It was basically made obsolete by the DevDay GPTs announcement that precipitated this.

jacquesm8 months ago

His time will surely come and I hope he has some good professional liability insurance for his position at OpenAI. And if I was his insurer I'd be canceling his policy pronto.

seydor8 months ago

That's a great twist in the writer's storyline. Board quits, Altman + Brockman return to OpenAI, and shamed Sutskever defects to Microsoft, where he leads the AI division in a lifelong quest to take revenge for this humiliation.

bitshiftfaced8 months ago

They wrote Sutskever as a sort of reverse Big Head. He starts out at the top, actually has tech competence, and through a series of mishaps and random events becomes less influential and less popular.

tarruda8 months ago

He humiliated himself when he succumbed to pressure and tweeted that apology.

throw109208 months ago

What? Apologies are good. They signal regret. They're far superior to not apologizing. And they're not a form of "humiliation" unless evil people attempt to humiliate the person apologizing.

garbthetill8 months ago

yeah felt like a really weird move

fourside8 months ago

I can’t imagine MS is super eager to welcome Sutskever if he really did lead Altman’s ouster. OpenAI caught lightning in a bottle, MS had aligned themselves with the next big thing in tech and then Sutskever threw a grenade that could cause all of that to fall apart.

Keyframe8 months ago

If (the development of) AGI is as dangerous as they say it is, it's on the level of a WMD. And here you have unstable people and an unstable company working on it. Shouldn't it be disbanded by force, then? Not that I believe OpenAI has a shot at AGI.

ilaksh8 months ago

First of all, for many people AGI just means general-purpose rather than special-purpose AI. So there is a strong argument to make that it has been achieved with some of the models.

For other people, it's about how close it is to human performance and human diversity of tasks. In that case, at least GPT-4 is pretty close. There are clearly some types of things that it can't do even as well as your dog at the moment, but the list of those things has been shrinking with every release.

If by AGI you mean, creating a fully alive digital simulation/emulation of human, I will give you that, it's probably not on that path.

If you are incorrectly equating AGI and superintelligence, ASI is not the same thing.

petre8 months ago

If it's proven to be dangerous, Congress will quickly regulate it. It's probably not that dangerous, and all the attempts to picture it that way are likely fueled by greed: regulate it out of small players' reach and subject it to export controls. The real threat is that big tech is going to control the most advanced AIs (already happening; MS is throwing billions at it) and everyone else will pay up to use the tech while also relinquishing control over their data and means of computation. It has happened with everything else that became centralized: money, the Internet, and basically most of your data.

gunapologist998 months ago

"Altman, former president Greg Brockman, and the company’s investors are all trying to find a graceful exit for the board, says one source with direct knowledge of the situation, who characterized the Microsoft hiring announcement as a “holding pattern.” Microsoft needed to have some sort of resolution to the crisis before the stock market opened on Monday, according to multiple sources."

In other words... a convenient representation of a future timeline that will almost certainly never exist.

kzrdude8 months ago

It sounds risky to have a lie like that out in the open for a listed company like Microsoft.

drngdds8 months ago

We're all gonna get turned into paperclips, aren't we

dustedcodes8 months ago

> Sam Altman is still trying to return as OpenAI CEO

Anyone would rather put up with an abusive work relationship than switch to a company that forces you to use Microsoft Teams lol

Rapzid8 months ago

This is bonkers. Usually there is a sense of "all sales are final" when companies make such impactful statements.

Yet we have:

* OpenAI fires Sam Altman hinting at impropriety.

* OpenAI is trying to get Sam back over the weekend.

Then we have:

* Microsoft CEO Satya personally announces Sam will CEO up a new.. business?, under Microsoft.

* We hear Sam is still trying to get back in at OpenAI?!

Never seen anything like this playing out in the open. I suspect the FTC is watching this entire ordeal like a hawk.

Solvency8 months ago

As if the FTC are intelligent or equipped or motivated enough to do anything other than chew popcorn like the rest of us.

siva78 months ago

Fun times for Adam's friend Emmett Shear when he wakes up the next morning. Almost all of his employees have signed a letter that means his own sacking from the company he was appointed CEO of less than 24 hours ago. I can't think of a precedent in business.

upupupandaway8 months ago

Looks like that time when Argentina had 3 presidents in the span of a few days.

ChuckMcM8 months ago

I'm reminded of the legal adage "every contract tells a story", where the various clauses and subclauses reflect problems that would have been avoided had those clauses been present in an earlier contract.

I expect the next version of the Corporate Charter/Bylaws for OpenAI to have a lot of really interesting new clauses for this reason.

remoquete8 months ago

Someone must have formulated a law that says something like the following:

"Given sufficient visibility, all people involved in a business dispute will look sad and pathetic."

dragonwriter8 months ago

> "Given sufficient visibility, all people involved in a business dispute will look sad and pathetic."

"Involved in a business dispute" is superfluous here; it's just one reason that visibility happens.

Obscurity43408 months ago

Based on my experiences with ChatGPT, I genuinely believe it doesn't seem all that threatening or dangerous. I get that we're at the part of the movie before shit goes down, but I just don't see all the fuss. I feel like it has enormous potential in terms of therapy and having "someone" to talk to that you can bounce ideas off, who can maybe gently correct you or prod you in the right direction.

A lot of people can't afford therapy but if ChatGPT can help you identify more elementary problematic patterns of language or behavior as articulated through language and in reference to a knowledgebase for particular modalities like CBT, DBT, or IFS, it can easily and safely help you to "self-correct" and be able to practice as much and as often with guidance as you want for basically free.

That's the part I'm interested in and I always will see that as the big potential.

Please take care, everyone, and be kind. It's a process and a destination, and I believe people can come to love both and engage in a way that is rich with opportunity for deep and real restoration/healing. The kind of opportunity that is always available and freely takes in anyone and everyone.

Edit: I can access all the therapy I can eat but I just don't find it generally helpful or useful. I like ChatGPT because it can help practice effective stuff like CBT/DBT/IFS and I know how to work around any confabulation because I'm using a text that I can reference

Edit: the biggest threat ChatGPT poses, in my view, is the loss of income for people. I don't give a flying fuck about "jobs" per se; I care that people are able to have enough economically to take care of themselves and their loved ones and be OK psychologically/emotionally.

As for the selfish folks who will otherwise (as always) attempt to absorb even more of the pie, of which they already have a substantial and sufficient portion: they will need to be made to share, or they will need a timeout until they can be fair, or to go away entirely. Enough is enough; nobody needs excess until everyone has sufficiency. After that, they can have and do as they please unless they are hurting others. That must stop.

marjipan2008 months ago

Agree on LLMs being effective for nudging therapy like CBT. I built an Obsidian plugin, "ChatCBT", on ChatGPT 3.5 that has been really helpful for pulling me out of episodes where I start to spiral into negative thinking. I'm shocked how effective this is with a basic prompt (you can see the prompt in the codebase).

https://github.com/clairefro/obsidian-chat-cbt-plugin
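For anyone curious about the mechanics, the core pattern really is tiny: a fixed system prompt plus the user's thought, sent as a chat-completion request. A minimal sketch, where the prompt wording and function name are my own illustration rather than the plugin's actual code (that lives in the repo above):

```python
# Minimal sketch of a CBT-style chat prompt payload.
# Hypothetical: this is NOT the plugin's real prompt, only the shape of it.
def build_cbt_messages(negative_thought: str) -> list[dict]:
    """Build a chat-completion `messages` list asking the model to reframe a thought."""
    system_prompt = (
        "You are a gentle CBT assistant. Identify any cognitive distortions "
        "in the user's thought, validate their feelings, and suggest a more "
        "balanced reframe. Never dismiss how the user feels."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": negative_thought},
    ]

# The resulting list is what you'd pass as `messages` to a chat-completion
# endpoint (e.g. GPT-3.5) or to a locally served model via Ollama.
messages = build_cbt_messages("I failed one interview, so I'm unemployable.")
```

The interesting part is that all of the "therapy" lives in the system prompt; the code is just plumbing.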

Obscurity43408 months ago

How would you compare CBT to IFS? I have philosophical issues with CBT because of what I would characterize as an overemphasis on logic and changing your thinking, which subtly or explicitly teaches the lesson that the way you think, and particularly how you feel, is invalid or fundamentally "incorrect".

Consequently, I would equate it to well-intentioned gaslighting, which is one of the deadliest sins in my view. Gaslighting is one of the most destructive human dynamics, to the extent that I consider it emotional rape.

I think there's less of a nexus between thinking and feeling; rather, your thinking is influenced by how you (allow yourself to and honor) feel. (Feel) -> think rather than (think) -> feel, although I'm referring more to emphasis than to disputing the notion that how you think can self-referentially influence how you feel (but that's more of a perspective thing in my view).

I really like IFS because it actually requires you to bring "everybody", all the thinking(s) inside, to the table to be heard and validated, like in Inside Out (Pixar). That seems like the best approach, and it has done amazing things.

Obscurity43408 months ago

This is the problem I have when so many are saying "no, we need to wait, wait, wait, what about safety": people are dying and in horrible states now. This needs to be available, at least the functionality that allows it to engage with you in a therapeutic context.

To the extent that they try to prevent or block everyone from having access to this kind of profound outlet/tool/conversation partner, I consider it a great evil (the opposite of what Jewish folks call "mitzvahs"), and they need to take this into consideration and either find a way to align their views to allow space for it, or step aside and find another gate to keep.

I will not tolerate their message or influence, or allow them to prevail to this end. The biggest danger of LLMs for things like therapy is everyone not being able to easily access and "have" them forever, to keep and use to grow and heal, for free: no bullshit temporary access or subscriptions. This is profound on the level of whatever the psychological or psychiatric equivalent of the printing press or movable type is.

marjipan2008 months ago

That's why there's an Ollama option ;)

Obscurity43408 months ago

Also: thanks, Ollama ;)

rvz8 months ago

Given that Ilya has now switched sides, that leaves 3 BOD members at the helm.

The one who is really overlooked in this case is the CEO of Quora, Adam D'Angelo, who has a competing interest with Poe and Quora: he sunk his own money into a platform that ChatGPT and GPTs make irrelevant.

So why isn't anyone here talking about the conflict of interest with Adam D'Angelo, the board member who is trying to drag OpenAI down in order to save Quora from irrelevancy?

jacquesm8 months ago

Oh, people are talking about it, just not as loud. But I think you are dead on: that's the main burning issue at the moment and D'Angelo may well be reviewing his options with his lawyer by his side. Because admitting fault would open him up to liability immediately but there aren't that many ways to exit stage left without doing just that. He's in a world of trouble and I suspect that this is the only thing that has caused him to hold on to that board seat: to have at least a fig leaf of coverage to make it seem as if his acts are all in line with his conscience instead of his wallet.

If it turns out he was the instigator his goose is cooked.

danenania8 months ago

If we assume bad faith on D'Angelo's part (which we don't know for sure), it would obviously be unethical, but is it illegal? It seems like it would be impossible to prove what his motivations were even if it looks obvious to everyone in the peanut gallery. Seems like there's very little recourse against a corrupt board in a situation like this as long as a majority of them are sticking together.

jacquesm8 months ago

It's not illegal but it is actionable. You don't go to jail for that but you can be sued into the poorhouse.

Xelynega8 months ago

Because that would be a conflict of interest between the for-profit OpenAI corporation and Adam. How does that conflict impact his ability to make decisions on behalf of the non-profit corporation? Basically what about his ownership of poe constitutes a conflict of interest with the non-profit corporation OpenAI?

jast8 months ago

Personally, this is the only thing so far that makes sense to me in the middle of all this mess. But who knows…

BryantD8 months ago

Is that conflict of interest larger or smaller than the conflict of interest created when your CEO tries to found a business that would be a critical supplier to OpenAI?

rvbissell8 months ago

I've seen it mentioned several times here, over the course of the weekend.

jmkni8 months ago

This whole thing is bizarre.

OpenAI was the one company I was sure would be fine for the foreseeable future!

highduc8 months ago

The amount of money and power their products might offer makes it pretty desirable. Theoretically there should be no limit to the amount and type of shenanigans that are possible in this particular situation.

jmkni8 months ago

That's fair lol

g42gregory8 months ago

I don't think the return is possible anymore. What would this return even look like? Microsoft will never trust anything associated with OpenAI, or Sam Altman for that matter, if he leaves the Microsoft deal after it has been announced by their CEO.

A partnership's success requires goodwill from both parties, so the Microsoft partnership gets sabotaged over time. This will inhibit OpenAI's cash flow and GPU compute. No one will give them another $10bn after this, so scaling goes out the window. Apparently, according to the WSJ, Microsoft has rights to all IP, model weights, etc. All they are missing is know-how (which is important!), but that could be acquired through 3-5 high-level engineering hires.

x86x878 months ago

Waiting for the timeline where he both tries to return as CEO and take the job at MS.

benatkin8 months ago

He could do both, like Jack Dorsey or Elon. It would be a bit different because of how stuff is going from OpenAI to Microsoft but that can of worms is already open.

g42gregory8 months ago

@sama on X: satya and my top priority remains to ensure openai continues to thrive we are committed to fully providing continuity of operations to our partners and customers the openai/microsoft partnership makes this very doable

This does not sound like Sam is trying to come back to lead OpenAI. It sounds like he is trying to preserve the non-profit part of OpenAI and its mission, and is working to line up Microsoft to continue supporting it. That would make much more sense.

mfiguiere8 months ago

Wired: 95 Percent of OpenAI Employees Threaten to Follow Sam Altman Out the Door

>Some 738 out of its around 770 employees, about 95 percent of the company, are now listed on the letter released early this morning.

https://www.wired.com/story/95-percent-of-openai-employees-t...

x86x878 months ago

The more time passes the more these headlines look like PR/propaganda.

Expecting the headlines that tell us that 125% of OpenAI employees threaten to walk. Just quit already and go work for Microsoft.

RivieraKid8 months ago

The 3 board members should be asked to leave in exchange for 10 years of free therapy.

iteratethis8 months ago

What is unclear to me is what Microsoft has access to and can use from OpenAI.

If OpenAI implodes or somehow survives independently, would this mean that the former employees that are now at Microsoft have to re-implement everything?

Clearly Microsoft has a massive financial stake in OpenAI, but do they own IP? The software? The product? The service? Can they simply "fork" OpenAI as a whole, or are there limitations?

cwillu8 months ago

My understanding is that their agreement was that they had full access and rights to everything, up to but specifically _not_ including anything resulting in the development of a real general purpose AI.

At the very least, they have all the model weights and architecture design work.

righthand8 months ago

The clear move by OpenAI’s board is to let everyone resign to Microsoft and then release the tech as FOSS to the public. Any other move and Altman/Microsoft wins. By releasing it you maintain the power play and are able to let the world control the end result of whatever advances come from these LLMs.

Why this happened and whatever plans were originally planned is irrelevant.

quickthrower28 months ago

AI safety types don't want to release models (or in this case models, architecture, IP) to the public.

righthand8 months ago

That doesn't make sense. You mean companies like Microsoft and business types like Altman don't want to release infra to the public. Microsoft and Altman may hide under the guise of "it's not safe to share this stuff", but their intent is capital gain, not safety.

True safety believers understand that safety comes from a general understanding by everyone and auditable infra; otherwise you have no transparency about the potential dangers.

Releasing the tech is only unsafe to those trying to create platform lock-in. By releasing it FOSS you equalize the playing field and destroy Altman’s billions.

baumy8 months ago

You seem to be using a definition of "AI safety believer" that you've reasoned and arrived at yourself. It seems like a reasonable definition and I personally agree with it - the best path to maximal social good is for the ability to build and run these models to be as open and public as possible.

I think the "AI safety believers" actually sitting on the board of OpenAI, as well as in other adjacent influential positions, have a different view. I'm probably being a bit uncharitable since it's not a view I share, but they think that AI is so dangerous that the unwashed masses can't be trusted with it - only the elite and enlightened technocrats can handle that responsibility. "Let[ting] the world control the end result of whatever advances come from these LLMs" is a nightmare scenario for them.

If it's correct that this second "elitist" type of AI safety believer accurately describes the OpenAI board members, releasing anything out into the open as FOSS is a non starter.

quickthrower28 months ago

The impression I get from what I read is that AI safety people want to have large cutting edge foundational models they control, can do testing and research on. They have no interest in open sourcing those cutting edge models. Whether this makes sense in the long run I don't know, but it is what I meant by AI safety types.

Of course money makers want to keep it closed source to extract rent. In some ways that is an area of agreement between the capitalists and the safety people.

righthand8 months ago

I agree with the definition everyone seems to be eating up, but I think it'd be healthier if we all started calling BS on the doublespeak and hidden profit motive behind the use of "safety" and made them be more upfront about their intent.

_mh568 months ago

Please, can anyone give their opinion on this?

There's a real chance OpenAI might die soon. IMO, if they are left with 10 people and no funding, they should just release their IP and datasets to the public.

I've built a petition around this - [link redacted]

The idea is that the leftover leadership sees and considers this as a viable option. Does this sound like a possible scenario, or am I dreaming too much?

dahdum8 months ago

Isn’t the remaining leadership the ones that want to slow down, seek regulation, and erect barriers to entry? Why would they give it to the public?

dougmwne8 months ago

You are dreaming. This was all clearly led by the Quora CEO. The most likely outcome is that he will agree to sell OpenAI to himself, not release valuable IP into the public domain.

robbomacrae8 months ago

The most damning part of all for the remaining board is that a week ago the thought of OpenAI dying would have been unthinkable, but now people are genuinely worried it might happen, and about what the implications are. Even if safety was the genuine issue, they should have planned this out a lot more carefully. I don't buy the whole "OpenAI destroying itself is in line with its charter" nonsense; you can't help guide AI safety if you don't exist.

nprateem8 months ago

If the board are concerned about safety, this will never happen.

biermic8 months ago

I don't think so; they'd probably rather sell what is left.

_zoltan_8 months ago

lol


RevertBusload8 months ago

They released Whisper v3 to the public recently...

Not as valuable as GPT-4/5, but it still has some value...

Uptrenda8 months ago

I was curious what trading might have looked like for OpenAI given all the drama. But since the company is private that makes it hard. Still, prediction markets have been going crazy over the news. Look at how sure people were that Sam wouldn't be coming back as CEO: https://polymarket.com/event/sam-back-as-ceo-of-openai

You can see at the start people were almost 100% sure that he wouldn't be coming back. Now people are only like 30% sure (which to me still seems very high given the current situation. The leverage that Sam has at this point is actually insane. It seems to me that only the board members want Sam gone. The investors, employees, and partners all want him back. Along with the public...)

m_ke8 months ago

It will be wild to see all of these employees leave to work for Microsoft (or turn OpenAI into a for profit) and in the process hand Sam and a few other CXX folks a huge chunk of equity in a new multi billion dollar venture.

I'm guessing Sam will walk away with at least 20% of whatever OpenAI turns into.

FluffySamoyed8 months ago

I'm still curious about what Altman did that could've been so heinous as to prompt the board to remove him so swiftly, yet not bad enough to make his return a complete impossibility. Has there been any official word on what Altman was not being candid about?

jetsetk8 months ago

Kinda funny when you think about Ilya's tweet: "if you value intelligence above all other human qualities, you're gonna have a bad time". Now people are having a bad time, asking how someone that intelligent could commit such a blunder. Emotional beings after all..

ekojs8 months ago

So, is the pressure of 700/770 employees enough to crack the board? What a wild timeline.

unsupp0rted8 months ago

Either it is or no number is. Would 769/770 be markedly different than 700/770?

singularity20018 months ago

At what point do all the employees have a right to sue Adam D'Angelo (the owner of Poe, some wannabe GPT competitor) if he doesn't resign?

If he really plays hardball and burns OpenAI to the ground as he promised, would we as customers have leverage against them?

Forget about Poe. Isn't ChatGPT a potential killer of Quora, Stack Overflow, and Google? How on earth did a representative of one of these three make it onto the board?

Xelynega8 months ago

Do people really not understand how non-profit boards work?

The employees don't have a right to sue any of the board members over loss of profit, since the board is not concerned with profit. Adam would have a conflict of interest if his ownership of Poe led him to make decisions that went against the OpenAI mission, and since merely being a competitor doesn't, I don't see how anybody could find him at fault.

singularity20018 months ago

People understand that under normal circumstances they are untouchable. However, if they destroy the livelihood of all the employees and have a severe conflict of interest, maybe some other laws apply. Which laws, that is the question.

Xelynega8 months ago

There's no law that saves your stock options from your lack of understanding of non-profits.

If you thought stock options tied to a non-profit were always going to increase in value, I have a bridge to sell you.

variant8 months ago

Absolutely. What customer wants to stick around if all that is left is the board?

cthalupa8 months ago

If they actually believe that the thing burning to the ground is more closely aligned with the charter than keeping Altman around, maybe not. (And the letter everyone is signing says the board said exactly that.)

danenania8 months ago

Except they won't be burning it to the ground; they'll just be handing it to Microsoft. Hard to see how that's better aligned with the charter (which simply ceases to exist under MS) than figuring out a compromise.

pjmlp8 months ago

This is getting a bit ridiculous, soap opera style.

jacquesm8 months ago

We're about two days into three ring circus territory and I'm having a hard time keeping up with the developments. You go to sleep, wake up and the whole thing has flipped and turned inside out.

johanam8 months ago

It seems impossible to imagine OpenAI employees even considering standing in support of the decision to remove Sam given the opacity of the announcement. Whatever information the board might have ought to come to light.

kraig9118 months ago

Anyone else hear this rumor that Sam privately OK'd training on a dataset from a Chinese cyberwarfare unit? Or is that more insane theoretical stuff? Honestly, I don't want Microsoft to own AGI; it'll mess up AI for a decade, much the same as it did to Nokia and mobile phones. Let's be real here too: they don't have the capacity at Azure for their main business plus training new models. I think OpenAI got away with requesting data to train on because it was a 'non-profit' and not a for-profit. Now getting the data is going to be expensive.

wilsonnb38 months ago

the other half of that rumor/conspiracy theory was that the Chinese government found out about it, told the Biden administration, who told Satya Nadella, who then instigated the firing of Altman.

Seeing as Nadella is willing to hire Altman to work at MS, I think the very, very little credibility this rumor had is officially gone.

bambax8 months ago

> The remaining board holdouts who oppose Altman are Quora CEO Adam D’Angelo

There must be some interesting back story here. In 2014 Sam Altman accepted Quora into YC, controversially, since the company was at a much later stage than usual YC companies. At the time, Sam justified his decision by saying [0]:

> Adam D’Angelo is awesome, and we’re big Quora fans

So what happened?

[0] https://www.ycombinator.com/blog/quora-in-the-next-yc-batch

timeon8 months ago

Tabloids are not waiting till the situation is clear.

pjchase8 months ago

Swifties ain't got nothing on OpenAI fans these days.

johanam8 months ago

Does anyone think the abuse allegations leveled against Sam might be related to his firing? https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...

westcort8 months ago

Maybe he will learn to capitalize words at the beginning of sentences. In all seriousness, I find the habit of "higher-ups" answering emails like teenagers texting on a T9 phone worthy of a sociology paper. Perhaps it is the written equivalent of Mark Zuckerberg's hoodies. I found his return-to-office ideas early in COVID sickening.

jdthedisciple8 months ago

> Maybe he will learn to capitalize words at the beginning of sentences.

I think you may be confusing him with Greg Brockman.

jdthedisciple8 months ago

oh interesting, i was referring to this by greg who did the same:

https://twitter.com/gdb/status/1725667410387378559

my bad then, seems to be a thing with these guys ig

zahma8 months ago

Can someone explain to me why this is being treated like a watershed moment? Obviously I know there's a lot of money tied up in OpenAI through Microsoft, but why should the street care about any of these backroom activities? Aren't we still going to get the same research and product either at MS or OpenAI?

Xenoamorphous8 months ago

Wouldn’t this leave Satya in a really bad position? He just announced to much fanfare that Altman was joining MS.

robbomacrae8 months ago

"We are really excited to work with Sam, OpenAI 2.0, the team and the new board of directors who we strongly believe will achieve great things in partnership with Microsoft. We've already signed new deals to strengthen this relationship and more details will be coming soon."

- Satya (probably)

robbomacrae8 months ago

24 hours later and the real life one was:

"We are encouraged by the changes to the OpenAI board. We believe this is a first essential step on a path to more stable, well-informed, and effective governance. Sam, Greg, and I have talked and agreed they have a key role to play along with the OAI leadership team in ensuring OAI continues to thrive and build on its mission. We look forward to building on our strong partnership and delivering the value of this next generation of AI to our customers and partners."

faramarz8 months ago

Sounds like cooler heads are coming to terms with how bad the outcome would be for their gigantic first-mover advantage. Even if they are not the first mover, the brand value of the company and its founding composition of technical minds are going to be hard to replicate.

hoc8 months ago

The fact that, of all people, the Microsoft CEO is on the show commenting on OpenAI's current personnel issues might have been the exact reason for that unfortunate shakeup at OpenAI in the first place.

Don't make this co-op end up as a lousy weak Jedi/Sith rip-off.

Bing.

talldatethrow8 months ago

Anyone watched Mad Men, where Don convinced the British guy to fire all the main partners so they could be free, which got the British guy fired, and then all 4 main characters were free to start their own company? Could this be the explanation?

ayakang314158 months ago

Isn't Microsoft essentially acquiring OpenAI at almost zero cost? They'd have IP rights to OpenAI's work, almost all the brains from OpenAI, and no regulatory scrutiny like the Activision acquisition.

judge20208 months ago

All the WSJ article claimed was that MSFT had access to OpenAI code and weights. Chances are they don't actually have the right to fork GPT-X.

ayakang314158 months ago

But you will have the people who built the models and systems. It will take time, but replication will happen eventually, and the weights can be obtained again through training easily enough.

kristjansson8 months ago

Code, weights, talent, leadership, and an army of lawyers. What else do they need?

dpflan8 months ago

Is this relevant to post here? Sam Altman Exposes the Charade of AI Accountability https://news.ycombinator.com/item?id=38361699

firebaze8 months ago

Sounds like reality collapsed and turned into a badly scripted soap opera.

Is Sam a liar? Why? Is the board corrupted? By whom?

Will they all be hired by Microsoft?

Will Facebook make everything fade into nothingness by publishing something bigger than ChatGPT4?

Golden times to be a journalist.

c4wrd8 months ago

This is laughable. People are focusing on where Sam Altman and team are going, and less on the fact that it is now even clearer this was a move to remove the non-profit cap on the OpenAI business and re-align its strategy with market capitalization. They've been slowly chewing away at it over the last few years (a notable example being not opening up GPT-4, which was against their stated mission of opening AI). The board simply serves as a conduit for ensuring there is ethical direction to where the profit arm is moving, and that it stays within the realms of the originally stated mission. By having the entire board resign, you can install people with different motivations and mission statements and, on paper, look like you're acting ethically. We should all be collectively sad for the ruse that was OpenAI: they used our (the software engineering community's) goodwill and direction and pulled the rug out from under us. I'm not surprised this occurred; I'm more surprised that people are focusing on the literal movement of figures on the chess board, and not on what the chess board now looks like: a closed-source monopoly on AI.

Animats8 months ago

Remember when Anthony Levandowski left Google/Waymo and tried to take all the good people with him? Google eventually got him convicted of theft of trade secrets over some mechanical LIDAR design that never went anywhere. He spent six months in prison before negotiating a pardon from Trump.[1]

Is OpenAI getting court orders to have all of Altman's computers searched?

An indictment might begin something like this: "After being fired by the Company for cause, Defendant orchestrated a scheme whereby many of the employees of the Company would quit and go to work for a competitor which had hired Defendant, with the intent of competing with the Company using its trade secret information".

[1] https://www.bbc.com/news/world-us-canada-53659805

voisin8 months ago

What would be the benefit of Sam returning to OpenAI now that he has unlimited MSFT dollars, presumably actual equity, and his pick of 700 OpenAI employees (and counting!)?

outside12348 months ago

You have to think that Sam is simultaneously raking Microsoft, Google, OpenAI, and probably 8 venture firms over the coals on how many $Bs he is going to get.

SeanAnderson8 months ago

I think I've reached peak media saturation for this event. I've been furiously clicking every link for days, but upon reading this one I felt a little bit of apathy/disgust building inside of me.

Time to go be productive with some exercise and coding. Touch grass a little. See where things end up in a day or two.

It's been fun though. So much unexpected drama.

robertwt78 months ago

Wait what is happening with Ilya? I thought he agreed to kick Sam out but he tweeted that he regretted it? I don’t understand what is going on

Eumenes8 months ago

The media circus around this reminds me of Taylor Swift and her new boyfriend. There is more than one "AI" company. Very bizarre.

shmatt8 months ago

There was so little drama around Continua and Mistral AI, which had actual researchers, not product managers, create a new company.

mgfist8 months ago

Can't be serious? This isn't just an AI company, it's the AI company. And it might not exist next week if the board doesn't resign.

swatcoder8 months ago

Right from the launch of ChatGPT, many have seen OpenAI as the MySpace or AltaVista of this new wave of generative systems -- first to break the market open but probably not suited to hold their position at the top.

It's exciting to see what they've productized in this first year, but the entire landscape of companies and products was already sure to look different in another few.

highduc8 months ago

>And it might not exist next week if the board doesn't resign.

Huh? Can't they just hire new people instead? They are a non profit org after all.

nextworddev8 months ago

The last person to hold out will have all the power

davidw8 months ago

I am getting whiplash trying to keep up with this.

pighive8 months ago

Another moment in tech history when I really wish Silicon Valley (HBO) were still going on. This situation is right out of the series.

duckmastery8 months ago

It's funny because it's all going to end in the same situation as before all the drama, minus the board members who had AI safety in mind.

totallywrong8 months ago

Let's just skip to the part where the board is gone and Altman is back, shall we? It's inevitable at this point.

hoseja8 months ago

>This corporate minutiae

Why can't professional writers properly use Latin words? Why are misapplied plurals so common?

osigurdson8 months ago

No movie will get made if there is no controversy. The universe is looking for a movie here I think.

giarc8 months ago

A bit tongue in cheek, but perhaps titles should have time stamps so we know what is new vs old.

alberth8 months ago

Netflix

I can't wait to watch the docu-drama on this in a few months time.

Real-life is always more interesting than fiction.

pers0n8 months ago

Would they actually open source ChatGPT if a bunch of people quit? That would be a good thing

mudlus8 months ago

EA is a national security risk

whoknowsidont8 months ago

Probably the only real take-away from all of this.

faramarz8 months ago

Rebalance the board, and it's important that Ilya stays. But if Ilya goes as collateral damage to save face, they have to, by whatever means necessary, secure Geoffrey Hinton.

It may turn out that the unusual governance model is the only way to bring about a desired outcome here without fully selling out to MS.

quickthrower28 months ago

Just the entertainment I need now that Billions Season 7 has finished.

r00tanon8 months ago

It's like Game of Thrones - without the intelligent intrigue.

RivieraKid8 months ago

All of this is... sad. Because the drama will end some day.

rshm8 months ago

If board resigns now, who gets to appoint new members ?

tobinfricke8 months ago

These headlines need to be timestamped.

fredgrott8 months ago

The only thing that could make this more ironic is if the original board memo was written with ChatGPT's help.

zombiwoof8 months ago

They realized 4 weeks ago they wouldn't ever attain AGI. They released Laundry Buddy. The board wants to get off the hype cycle and back to non-profit research roots.

MSFT wants to double down on the marketing hype.

Racing04618 months ago

Now this is the most plausible theory I've seen so far.

neverrroot8 months ago

Can this all get any more **?

molave8 months ago

I've gotten tired enough of enshittification that I'd prefer OpenAI shut down as a non-profit rather than live long enough to become a solely profit-oriented company.

chinathrow8 months ago

Money is a hell of a drug.

winddude8 months ago

Basically an episode of reality TV for wantrepreneurs.

lispm8 months ago

Madness spreads.

cvhashim048 months ago

Absolute cinema

paulpan8 months ago

What a shitshow.

The true winners are likely OpenAI's competitors Google and Meta. Whatever the outcome with Sam's OpenAI future, surely this circus slows their momentum and raises doubts for 3rd party developers.

zombiwoof8 months ago

drama queens

DebtDeflation8 months ago

Board resigns.

Sam and Greg come back.

Re-hire the board.

Tweet, "It was just a prank, bro."

Wouldn't surprise me at this point.

tedmiston8 months ago

Boosting this deeply nested interesting comment from @alephnerd to the top level:

> As I posted elsewhere, I think this is a conflict between Dustin Moskovitz and Sam Altman. Ilya may have been brought into this without his knowledge (which might explain why he retracted his position). Dustin Moskovitz was an early employee at FB, and the founder of Asana. He also created (along with plenty of MSFT bigwigs) a non-profit called Open Philanthropy, which was an early proponent of a form of Effective Altruism and also gave OpenAI their $30M grant. He is also one of the early investors in Anthropic.

> Most of the OpenAI board members are related to Dustin Moskovitz this way.

> - Adam D'Angelo is on the board of Asana and is a good friend to both Moskovitz and Altman

> - Helen Toner worked for Dustin Moskovitz at Open Philanthropy and managed their grant to OpenAI. She was also a member of the Centre for the Governance of AI when McCauley was a board member there. Shortly after Toner left, the Centre for the Governance of AI got a $1M grant from Open Philanthropy and McCauley joined the board of OpenAI

> - Tasha McCauley represents the Centre for the Governance of AI, which Dustin Moskovitz gave a $1M grant to via Open Philanthropy and McCauley ended up joining the board of OpenAI

> Over the past few months, Dustin Moskovitz has also been increasingly warning about AI Safety.

> In essence, it looks like a split between Sam Altman and Dustin Moskovitz

https://news.ycombinator.com/item?id=38353330

dang8 months ago

> Boosting this deeply nested interesting comment from @alephnerd to the top level

Please don't do this - for many reasons, including that it makes merging the comments a pain.

If you or anyone notices a comment that deserves to be at the top level (and doesn't lose context if moved), let us know at hn@ycombinator.com and we'll move it.

tedmiston8 months ago

Thanks for the quick fix, Dan!

So do you have the ability to pin deeply nested comments or do you have to remove it from the existing thread for this to work?

Someone else proposed this first but didn't think pinning worked on nested comments.

Edit: The original author asked not to be pinned in a subcomment, so I don't know now ¯\_(ツ)_/¯.

dang8 months ago

I didn't pin it; I just detached it, which allowed it to get ranked at the toplevel. After that it's just the algorithm doing its thing.

WitCanStain8 months ago

And these are the people who have great say over AI safety. Jesus. Whoever thought that egomaniacs and profiteers would guide us to a bright AI future?

supriyo-biswas8 months ago

You can ask dang to pin comments, though I am not sure if it only works for top level comments.

tedmiston8 months ago

Someone mentioned that in the comments on the linked post, but I'm also not sure if pinning non-top level comments is possible.

alephnerd8 months ago

Please don't. I'm too close for comfort to this shitshow. I don't want to make yet another alt account.

dang8 months ago

If there's a problem, let me know at hn@ycombinator.com and we'll take care of it.

alephnerd8 months ago

As I posted elsewhere, I think this is a conflict between Dustin Moskovitz and Sam Altman. Ilya may have been brought into this without his knowledge (which might explain why he retracted his position).

Dustin Moskovitz was an early employee at FB, and the founder of Asana. He also created (along with plenty of MSFT bigwigs) a non-profit called Open Philanthropy, which was an early proponent of a form of Effective Altruism and also gave OpenAI their $30M grant. He is also one of the early investors in Anthropic.

Most of the OpenAI board members are related to Dustin Moskovitz this way.

- Adam D'Angelo is on the board of Asana and is a good friend to both Moskovitz and Altman

- Helen Toner worked for Dustin Moskovitz at Open Philanthropy and managed their grant to OpenAI. She was also a member of the Centre for the Governance of AI when McCauley was a board member there. Shortly after Toner left, the Centre for the Governance of AI got a $1M grant from Open Philanthropy and McCauley joined the board of OpenAI

- Tasha McCauley represents the Centre for the Governance of AI, which Dustin Moskovitz gave a $1M grant to via Open Philanthropy and McCauley ended up joining the board of OpenAI

Over the past few months, Dustin Moskovitz has also been increasingly warning about AI Safety.

In essence, it looks like a split between Sam Altman and Dustin Moskovitz

alsodumb8 months ago

It's clear that Adam himself has a strong conflict of interest too. The GPT store announcement on DevDay pretty much killed his company Poe. And all this started brewing after the DevDay announcement. Maybe Sam kept it under wraps from Adam and the board.

tedmiston8 months ago

I've heard others take this stance, but a common response so far has been "Poe is so small as to be irrelevant", "I forgot it exists", etc in the grand scheme of things here.

alsodumb8 months ago

Poe has a reasonably strong user base for two reasons:

(i) they allowed customized agents and a store of these agents.

(ii) they had access to GPT-4's 32k context length very early; in fact, they were one of the first to have it.

Both of these kinda became pointless after DevDay. It definitely kills Poe, and I think that itself is a conflict of interest, right? Whether or not it's at a scale to compete is a secondary question.

splatzone8 months ago

When it's your company, it's never small.

robg8 months ago

And it quickly feels personal…

nilkn8 months ago

What matters is how much personal work and money Adam put into Poe. It seems like he's been working on it full-time all year and has more or less pivoted to it away from Quora, which also faces an existential threat from OpenAI (and AI in general).

Either way, Adam's conflict of interest is significant, and it's staggering he wasn't asked to resign from the board after launching a chatbot-based AI company.

incognition8 months ago

Yea Poe ain’t small to Adam lol

nostrademons8 months ago

More confusion - Emmett Shear is a close friend of Sam Altman. He was part of the original 2005 YCombinator class alongside Altman, part of the justin.tv mafia, and later a part-time partner at YCombinator. I don't think he has any such close ties to Dustin Moskovitz. Why would the Dustin-leaning OpenAI board install him as interim CEO?

This whole thing still seems to have the air of a pageant to me, where they're making a big stink for drama but it might be manufactured by all of the original board, with Sam, Ilya, Adam, and potentially others all on the same side.

alephnerd8 months ago

He's part of the EA community which Moskovitz funded. He was even named in Yudkowsky's warcrime of a Harry Potter fanfic [0][1]

[0] - https://www.404media.co/new-openai-ceo-emmett-shear-was-mino...

[1] - https://hpmor.com/chapter/104?ref=404media.co

dymk8 months ago

Silicon Valley lore is way too complex at this point; needs a reboot. I'd rather start One Piece from scratch.

wirelesspotat8 months ago

Why is Yudkowsky's HPMOR a "warcrime"?

tedmiston8 months ago

> More confusion - Emmett Shear is a close friend of Sam Altman. He was part of the original 2005 YCombinator class alongside Altman, part of the justin.tv mafia, and later a part-time partner at YCombinator. ... Why would the Dustin-leaning OpenAI board install him as interim CEO?

This was my first thought too: Is this a concession of the board to install a Sam friendly-ish Interim CEO?

It reads weird on paper.

tukajo8 months ago

Is Emmett Shear really "friends" with Sam Altman? He (Emmett) literally liked a tweet the other day that said something to the effect of: "Congratulations to Ilya on reclaiming the corporation that Sam Altman stole". I'm paraphrasing here, but I don't think Emmett and Sam are friends?

i_have_an_idea8 months ago

> part of the justin.tv mafia

Side question, but why is that a "mafia"? Is any tech entrepreneur starting a 2nd company after having one mildly successful exit now making a "mafia" move?

Soon enough, I will probably read about the "skibidi toilet mafia" on this site...

alephnerd8 months ago

Back in the early 2000s, the PayPal founders came from a handful of universities (UIUC, Stanford) and had a massive alumni network from those two programs. This was called the PayPal Mafia [0].

To this day, any tight collection/network of founders from the same organization is called a "Mafia".

[0] - https://en.m.wikipedia.org/wiki/PayPal_Mafia

nostrademons8 months ago

In startup culture, it usually refers to a group of individuals who underwent a formative experience in one company, then went on to start separate individual companies where they all cross-invest, cross-advise, and generally help out each other's companies. The term was originally used for the PayPal mafia [1, notable members include Peter Thiel, Max Levchin, Elon Musk, Chad Hurley, Reid Hoffman, Jeremy Stoppelman, Yishan Wong; notable descendants include SpaceX, Tesla, YouTube, Yelp, LinkedIn, and arguably Facebook]. Since expanded to the justin.tv mafia [2, members = Justin Kan, Emmett Shear, Kyle Vogt, and Michael Seibel; descendants include Twitch & Cruise]. Arguably the Fairchildren (descendants of Fairchild Semiconductor - Intel, AMD, National Semiconductor, Kleiner Perkins, Sequoia Capital, and by extension Apple, Google, Cisco, Netscape, etc.) and the descendants of General Magic (eBay, Android, iPod/iPhone, Nest, WebTV, and the United States Digital Service) could also be termed "mafias", although they aren't usually referred to as such.

It's not just starting a second company - it's that a group of people who were all bound together by one company end up starting second companies, and they continue to go on to help each other and collaborate in their later ventures.

[1] https://en.wikipedia.org/wiki/PayPal_Mafia

piuantiderp8 months ago

Could be a way to get a clean break and go full MSFT.

SpaceManNabs8 months ago

A sad, understated part of all this is that so many people are throwing vitriol at Ilya right now. If the speculation here is true, then he was just chased by a mob over pure nonsense (well, at least purer than the nonsense premise beforehand).

Gotta love seeing effective altruists take another one on the chin this year though.

tspike8 months ago

Does this whole thing remind anyone else of the tech community’s version of celebrity gossip?

fuzztester8 months ago

No. It reminds me more of Muddle [1] Ages' intrigue, scheming, and backstabbing, like the Medicis, Cesare Borgia and clan, Machiavelli (and his book The Prince, see Cesare again), etc., to take just one example. (Italy not being singled out here.) And also reminds me of all the effing feuding clans, dynasties, kingdoms, and empires down the centuries or millennia, since we came down from the trees. I guess galaxies have to be next, and that too is coming up, yo Elon, Mars, etc., what you couldn't fix on earth ain't gonna be fixable on Mars or even Pluto, Dummkopf, but give it your best shot anyway.

[1] Not a typo :)

SpaceManNabs8 months ago

Absolutely. It is borderline salacious. Honestly it didn't feel too good watching so many people opine on matters they had no real data on and insult people; also the employees of OpenAI announcing matters on Twitter or wtv in tabloid style.

I hope people apologize to Ilya for jumping to conclusions.

elaus8 months ago

Yes, it was especially weird to read this on HN to such a big extent. The comments were (and are) full of people with very strong opinions based on vague tweets or speculation. Quite unusual, and hopefully not the new norm...

silenced_trope8 months ago

Yes, and I'm ashamed to say I'm unabashedly following it like the people I cringe at follow their fave celebs.

It's an interesting saga though, and as a paying ChatGPT+ subscriber I feel staked in it.

jowea8 months ago

Finally, drama for people who think ~~we~~they're above celebrity drama.

Apocryphon8 months ago

TechCrunch should be at the forefront of coverage, but their glory days are far behind them. And Valleywag is gone. So I guess it's up to us to gossip on our own.

jdgoesmarching8 months ago

Yes, and that's fine. Hell, it might even be good for people who have a superiority complex about disliking pop culture.

Drama is fun, and this is top tier drama.

abakker8 months ago

The dude voted to fire Altman. He could have not done that. Actions have consequences.

nonethewiser8 months ago

At best he doesn't have much integrity and caves to peer pressure. I would respect him more if he stood by his actions. Abandoning them just shows how frivolous his decision making was.

Perhaps more info will come out that casts a different light, but as of now it seems obvious that he voted to fire Altman for reasons he's not willing to clarify, and which he abandoned as soon as he saw the overwhelmingly negative reaction. He didn't even say he regrets firing Altman, just that he doesn't want OpenAI to fall apart.

abakker8 months ago

Yeah, these votes were supposed to be what kept us safe from AI. The whole board was ill-equipped to keep themselves neutral from their own business, let alone protect humanity from the irresponsible outcomes of their models.

alephnerd8 months ago

This is why I detest YC (despite taking part in salacious gossip on here due to my social media addiction). A couple of YC friends of mine have been very explicit about how much they detest the conspiratorial YC hivemind.

Apocryphon8 months ago

Oh, let us have our fun. The industry's cooled off with the end of ZIRP and the coming holidays so people need the illusion that things are happening.

upwardbound8 months ago

But Ilya was the one that started this whole mess... He was the one that lit the match that lit the fuse...

rafaelero8 months ago

Yeah, let's ask the guy who set the company on fire for forgiveness. He is just a poor child.

drawkbox8 months ago

A "conflict" or false opposition can also be used in a theater-like play. Maybe this was set up to get Microsoft to take on the costs/liability and more. Three board members left in 2023, which allowed this to happen.

The idea of boards might even be an anti-pattern going forward; they can be played and used in essentially rug-pull scenarios for full control of all the work of entire organizations. Maybe boards are past their time, or too much of a potential timebomb/Trojan horse now.

coliveira8 months ago

This has been the case for as long as companies have existed. Even with all this, companies still have boards because they represent the interests of several people.

bertil8 months ago

That would explain why Sam wants the whole board gone and not one or two members, while he was very fast to welcome Ilya back.

codethief8 months ago

> while he was very fast to welcome Ilya back

Was he? I must have missed that in all this chaos.

+4
rngname228 months ago
codethief8 months ago

Thanks!

gwern8 months ago

That's meaningless; he would welcome Ilya back as a defector no matter what. What happens to Ilya later, after he is no longer a board member or in a position of power, will be much more informative.

valine8 months ago

If he has safety concerns with OpenAI, he must be mortified with his old company Meta dropping 70B llamas on HF.

teacpde8 months ago

This is the most logical explanation I have seen so far. Makes me wonder why Dustin Moskovitz himself wasn't on the board of OpenAI in the first place.

AlwaysBCoding8 months ago

Another fun fact re: Dustin Moskovitz

Dustin was an early investor in Alameda Research and was also one of the biggest donors to Mind the Gap -- Sam's mom's Super PAC. When SBF was about to go under Dustin was his first call to try to raise money (came out in the FTX trial).

htk8 months ago

Very interesting take, and it sheds some light on the role of the two most discredited board members.

spiantino8 months ago

The EA community includes a lot of AI folks, as well as philanthropists like Dustin.

That doesn't mean this is that kind of conspiracy.

JohnFen8 months ago

But it does cast a pretty dark shadow over the AI community.

htk8 months ago

Please repost this as a stand alone comment, I bet it would be voted to the top.

Or maybe dang can extract this comment and orphan it.

jansan8 months ago

It has come to the point that hearing the expression "Effective Altruism" sends shivers down my spline.

kylecordes8 months ago

Obviously we should all want our altruism to be effective. What is the other side of it? Wanting one's altruism to not really accomplish much?

But with everything that has gone on, I cannot imagine wanting to be an Effective Altruist! The movement by that name seems to do and think some really weird stuff.

upwardbound8 months ago

The "Think Globally, Act Locally" movement is the competing philosophy. It's deeply entrenched in our culture, and has absolutely dominated philanthropic giving for several decades.

https://en.wikipedia.org/wiki/Think_globally,_act_locally

"Think Globally, Act Locally" leads to charities in wealthy cities getting disproportionately huge amounts of money for semi-frivolous things like symphony orchestras, while people in the global south are dying of preventable diseases.

zztop448 months ago

It’s obnoxious because of that exact implication, that pre-existing altruism wasn’t concerned with efficacy. And that’s just not true.

There has been a community of professional practice around measuring impact and directing funds accordingly for decades. Does it always generate perfect results? No, far from it!! Is there room to improve? Absolutely, lots and lots of room!

But it’s hard not to notice that the people who call themselves Effective Altruists have no better a track record (actually, I would say far worse) than the so-called bloated NGOs and international organizations when it comes to efficacy.

jacquesm8 months ago

Ironically it is never effective nor is it ever altruism.

chucke19928 months ago

We will fix you to become more altruistic.

jowea8 months ago

One more thing SBF and crew ruined.

golergka8 months ago

[flagged]

moffkalast8 months ago

It's one tier below weaponized autism.

chucke19928 months ago

I mean, you can't vote to drop a CEO without knowing...

thrwwy_dm2311208 months ago

I heard directly from Dustin that he was as surprised as anyone by the board's actions. He is not some hidden mastermind behind the scenes; he has just been personally invested in AI and AI safety for a long time and therefore has many connections to other key players in the space.

folli8 months ago

Okay, I know this is a very naive question, but anyway: might Dustin/the board be onto something regarding AI safety that was not there before?

jacquesm8 months ago

If there were, there are 700 people motivated to leak it, and none did. What could they be aware of that the rest of OpenAI would not be aware of? How did they learn about it?

endtime8 months ago

I agree it's probably not something new. But I will observe that OpenAI rank-and-file employees, presumably mostly working on making AI more effective, are very strongly selected against being sympathetic to safety/x-risk concerns.

brandall108 months ago

Wow, this is extremely useful information, it ties all the pieces together. Surprised it hasn't been reported elsewhere.

chunky19948 months ago

Correlation =/= causation. This is most likely coincidental. I highly doubt Dustin's differing views caused a (near) unanimous ousting of a completely different company's CEO that had nothing to do with Dustin's primary business.

tempsy8 months ago

All 3 having some connection to effective altruism, which Dustin is at the center of, is not coincidental.

jacquesm8 months ago

Very interesting except for one little detail: Moskovitz denies it.

g42gregory8 months ago

Great insights. Very interesting!

tatrajim8 months ago

Intriguing, thanks. HN does provide gems of insight amidst the repetitive gossip.

bagels8 months ago

Why would Sam let the board get so far out of his control?

ealexhudson8 months ago

No board is ever controlled by a CEO by virtue of the title/office. Boards are controlled by directors, who are typically nominated by shareholders. They may control the CEO, although again, in many startups the founder becomes the CEO and retains some significant stake (possibly controlling) in the overall shareholding.

The top org was a 501(c)3 and the directors were all effectively independent. The CEO of such an organisation would never have any control over the board, by design.

We've gotten very used to founders having controlling shareholdings and company boards basically being advisory rather than having a genuine fiduciary responsibility. Companies even go public with Potemkin boards. But this was never "normal" and does not represent good governance. Boards should represent the shareholders, who should be a broader group (especially post-IPO) than the founders.

s1artibartfast8 months ago

That isn't relevant to the question. Sam was on the board prior to all of these other directors, and was responsible for selecting them.

The post asks how/why Sam ended up with a board full of directors so far out of alignment with his vision.

I think a big part of that is that the board was down several members, from 9 to 6. Perhaps the problem started with not replacing departing board members and this spiraled out of control as more board members left.

Here is a timeline of the board:

https://loeber.substack.com/p/a-timeline-of-the-openai-board

belter8 months ago

It seems the next step is for the board to sign the letter calling for the board's resignation. The insanity will be complete, and all can get back to therapy.

brookst8 months ago

The people responsible for calling for the sacking of the people who sacked the CEO, have just been sacked.

DaiPlusPlus8 months ago

A Møøse once bit my sister...

janalsncm8 months ago

If you want to lay everyone off, maybe laying off HR first wasn’t the smartest move.

TheOtherHobbes8 months ago

Honestly beginning to wonder if this is all just a marketing stunt.

openthc8 months ago

Snoop tweeted that he was "giving up smoke," which turned out to be a stunt to advertise a fireplace.

https://www.forbes.com/sites/forbes-personal-shopper/2023/11...

Why not shake up AI CEOs if the "CEO of cannabis" is doing wild things?

CamperBob28 months ago

Yep, starting to get a real professional-wrestling vibe here.

https://en.wikipedia.org/wiki/Kayfabe

outside12348 months ago

Or a serious 5D chess move by Satya and Sam to get OpenAI for free

(and for Sam to get himself seriously compensated)

karmakurtisaani8 months ago

To gain what exactly? More likely just egos, ideals and financial interests colliding very publicly. To think of the hubris that must be going on at OpenAI at the moment.

vikramkr8 months ago

The person who was thought to be the one to initiate the coup already did lol

davikr8 months ago

Ilya has signed the petition, so that's two out of three left.

synergy208 months ago

He was said to have started the whole mess, and indeed he announced Sam's firing. Now he plays the victim. Even movies can't switch plots this fast; keep some minimum dignity, please.

ssnistfajen8 months ago

There's a non-zero chance he was also used as a pawn to deliver the message, but who could be manipulating him then? There's so little actual detail given by anyone in the loop, and I think that's what's amplifying the drama so much.

Even if the board's motive was weak and unconvincing, I doubt the ratio of employees threatening to quit would be this high had they just said it openly.

drexlspivey8 months ago

and then the board will fire Satya Nadella from Microsoft CEO

nullc8 months ago

Some did!

RcouF1uZ4gsC8 months ago

Actually, the way this timeline is going, I am not sure it won’t somehow end up with Donald Trump as OpenAI CEO.

maxlamb8 months ago

The only way this story gets more suspenseful:

1) Large military convoys are seen moving towards data centers used by OpenAI.

2) Rumors start going around that GPT-5 demonstrated very unusual behavior toward the end of testing last week.

3) An "unknown entity" has somehow gained control of key US infrastructure.

4) Emergency server shutdown procedures at key Microsoft Azure data centers rumored to run GPT-5 inference have been kicked off, but all data center personnel have been evacuated for "unknown reasons."

troad8 months ago

Is that really more suspenseful? Seems like you're turning something genuinely stochastic into hackneyed fiction. Worse for humanity, sure, but hardly more suspenseful.

dist-epoch8 months ago

5) China accuses US of lying about the severity of the crisis and threatens a preemptive strike on Azure data centers

BoxTikr8 months ago

I could hear a newscaster's voice in my head reading that, and it actually gave me a little shiver.

robg8 months ago

What about: Intercontinental ballistic missiles launch from Alaska toward China. No one knows who ordered and controls the launch. Only one man can stop them in time: Elon Musk.

dist-epoch8 months ago

GPT-5 predicted that and organized a gaslighting campaign on X against Musk to enrage and distract him.

robg8 months ago

While he built a rocket company and global telecommunications satellite network both hardened unlike any other against rogue AIs…

Duh, duh, dunnnnnnnne…

Third act we find out Musk was the villain all along, X being the final solution for his global mind hive.

Fin.

agumonkey8 months ago

Reports of James Cameron missing

ilrwbwrkhv8 months ago

Folks who do not have the wisdom to see the consequences of their actions three days out are building AGI. God help us all.

highduc8 months ago

Scientists are easily blindsided by psychos dealing with enormous amounts of money and power. They also happen to suck at politics.

boringg8 months ago

100%. Technical staff rarely have exceptional political awareness and that seems to be the case this weekend. To be fair we don't know what triggered everything so while the dust settles this position may change.

highduc8 months ago

Yeah clearly, at least I have no clue what really happened and don't feel like I have enough info to put anything together at this point.

I_Am_Nous8 months ago

Longtermism strikes again! Somehow the future is considered more important to think about than the steps we need to take today to reach that future. Or any future, really.

MattPalmer10868 months ago

Yep, thinking days ahead is sooooo long term! We need high frequency strategy!

paulddraper8 months ago

> are building AGI

Well, probably not anymore.

dingnuts8 months ago

they were never any closer to building AGI than they were to inventing a perpetual motion machine

eggsmediumrare8 months ago

One violates the laws of physics and the other does not. In fact, you have one behind your nose.

lebean8 months ago

That's news to you?

minimaxir8 months ago

An update in the article:

> More: New CEO Emmett Shear has so far been unable to get written documentation of the board’s reasons for firing Altman, which also haven’t been shared with investors

> Employees responded to his announcement in OpenAI’s Slack with a “fuck you” emoji

https://twitter.com/alexeheath/status/1726695001466585231

bitshiftfaced8 months ago

To me it lends more weight to the "three letter agency agreement + gag order meaning nobody can talk about it" theory

stillwithit8 months ago

Sam is trying to maintain access to IP to later expropriate

hakdbha8 months ago

[dead]

gumballindie8 months ago

This guy is like that creepy ex who gets friends to talk to you so you can get back together. Instead he should stay with his Microsoft friends and go on his merry way. Let's see what AI team the cryptocurrency lead can build when there's no one else's work to steal credit from, and let's see how open Microsoft is to sucking in copyrighted material to train their little bots. Perhaps they'll start with Microsoft Windows' source code, as an example of how not to code.

OhNoNotAgain_998 months ago

[dead]

TheCaptain48158 months ago

[flagged]

siva78 months ago

I wonder if they are realising that their reputation is damaged beyond repair. Their old friends have a hard time accepting that those people may not be the ones they think they are.

te_chris8 months ago

Why be a sexist dick about it? Is she not her own person with her own credentials? She can fail on her own terms just fine it seems.

TheCaptain48158 months ago

What? The owner of Quora is even worse. Literally one of the worst sites on the web and he's on the board somehow.

te_chris8 months ago

If you can’t see that reducing her to being her husband’s wife is sexism then I can’t help you.

nopromisessir8 months ago

[flagged]

Multiplayer8 months ago

I'm very unclear on how board members can remove other board members. If Sam, Greg, and Ilya are on the same "team" now, that's 3 vs. 3. What's the mechanism at work here, and how quickly does it happen? And who elects the board members?

This is silly.

lowkey_8 months ago

Board members can remove other board members with a majority vote.

Sam, Greg, and Ilya were presumed guaranteed to be on the same team, which meant they couldn't be removed (3/6 votes).

Ilya switched sides to align with all 3 of the non-founding board members, giving them 4/6 votes, which they used to remove Sam and Greg.

Now that they've been removed, there are four remaining board members: the non-founding three and Ilya. They'll need 3/4 votes to reorganize the board.
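The vote arithmetic above follows from simple majority rules; a minimal sketch (assuming plain majority voting, not OpenAI's actual bylaws, which aren't public in this thread):

```python
def majority(seats: int) -> int:
    """Smallest number of votes that is a strict majority of `seats` board seats."""
    return seats // 2 + 1

# Six-seat board: removing Sam and Greg required a strict majority,
# which the four non-Sam/Greg members had once Ilya sided with them.
assert majority(6) == 4

# Four seats remain: any further action needs 3 of 4 votes,
# so the three non-founding members can outvote Ilya.
assert majority(4) == 3
```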

namrog848 months ago

Which is super unfortunate since the other 3 might vote out Ilya now.

jessenaser8 months ago

After removing Sam and Greg, there are four remaining.

This means that no matter what Ilya does, the other three can vote him out, which is why the board removed Mira, stalled on bringing Sam back, etc., since Ilya's vote no longer matters.

Only if you can move two people over to the other side will you have 3 vs 1, and could bring Sam and Greg back.

This Microsoft deal could just be another Satya card, and means:

1. If Sam goes back to OpenAI, we (as Microsoft) will still get new models at the normal rate under our previous contract.

2. If Sam cannot go back, we get to hire most of OpenAI into Microsoft and can rebuild from the rubble.

So AI is saved at the last minute. Either OpenAI will live, or it will be rebuilt inside Microsoft and funded by Microsoft with the same benefits as before. The only loss was slowing AI down by maybe months, but the team could probably get back to where they started; they know it all in their heads, and Microsoft already has the IP.

If there was no hope for OpenAI, then Ilya might just move with Sam to Microsoft, and that would be the end of it.

selimthegrim8 months ago

There are three of them, and Ilya...

mgfist8 months ago

By pressuring them to resign.

The way things are looking, OpenAI won't exist next week if the board doesn't resign. Everyone will quit and join Sam.

skwirl8 months ago

Sam and Greg are no longer on the board. They were removed with the support of Ilya. The board is now Ilya, the Quora CEO, and two other outsiders.

Multiplayer8 months ago

Boards that I have been on do not allow board members to remove other board members - only shareholders can remove board members. I don't know why this is being downvoted.

cthalupa8 months ago

There are no shareholders in a non-profit. Who would remove board members besides a majority decision by the board?

blobbers8 months ago

Am I the only one who thinks all this looks like fairly childish behavior by the board, the employees, microsoft, and Sam Altman?

It's a non-profit, it's for-profit, Microsoft is investing, the board is firing Sam, Ilya thinks Sam needs to be removed, then Ilya thinks about it more, is sorry for firing Sam, and wants him back, Sam's part of Microsoft, and now he's still trying to be CEO of OpenAI.

TexanFeller8 months ago

> childish behavior

“childish” is a poorly defined term mainly used to manipulate. You should specify what you actually mean.

blobbers8 months ago

I basically mean poorly thought out, first order thinking. Immediate gratification, not examining consequences, etc.

Like, Sam Altman needs to start a company on Saturday because he was fired on Friday? The board firing him without informing or discussing it with investors...

Or Microsoft has to hire him on Monday (they have the most to win/lose tbh, so maybe it's justified).

I just meant the decisions seemed to not think of the longer term / downstream ramifications, the way children often behave.