All: this madness makes our server strain too. Sorry! Nobody will be happier than I when this bottleneck (edit: the one in our code—not the world) is a thing of the past.
I've turned down the page size so everyone can see the threads, but you'll have to click through the More links at the bottom of the page to read all the comments, or follow links like these:
https://news.ycombinator.com/item?id=38347868&p=2
https://news.ycombinator.com/item?id=38347868&p=3
https://news.ycombinator.com/item?id=38347868&p=4
etc...
If they join Sam Altman and Greg Brockman at Microsoft they will not need to start from scratch because Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.
Also keep in mind that Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits.
So this could end up being the cheapest acquisition for Microsoft: They get a $90 Billion company for peanuts.
[1] https://stratechery.com/2023/openais-misalignment-and-micros...
This is wrong. Microsoft has no such rights and its license comes with restrictions, per the cited primary source, meaning a fork would require a very careful approach.
https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...
But it does suggest the possibility of a sudden motive appearing:
OpenAI implements and releases GPTs (a Poe competitor) but fails to tell D’Angelo ahead of time. Microsoft will have access to code (with restrictions, sure) for essentially a duplicate of D’Angelo’s Poe project.
Poe’s ability to fundraise craters. D’Angelo works the less seasoned members of the board to try to scuttle OpenAI and Microsoft’s efforts, banking that among them all he and Poe are relatively immune with access to Claude, Llama, etc.
I think there's more to the Poe story. Sam forced out Reid Hoffman over Inflection AI, [1] so he clearly gave Adam a pass for whatever reason. Maybe Sam credited Adam for inspiring OpenAI's agents?
[1] https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...
I assume their personal relationship played more of a role, given Sam led Quora's Series D round.
This is MSFT we're talking about. Aggressive legal maneuvers are right in their wheelhouse!
Yes, this is the exact thing they did to Stacker years ago. License the tech, get the source, create a new product, destroy Stacker, pay out a pittance and then buy the corpse. I was always amazed they couldn't pull that off with Citrix.
Another example: Microsoft SQL Server is a fork of Sybase SQL Server. Microsoft was helping port Sybase SQL Server to OS/2 and somehow negotiated exclusive rights to all versions of SQL Server written for Microsoft operating systems. Sybase later changed the name of its product to Adaptive Server Enterprise to avoid confusion with "Microsoft's" SQL Server.
https://en.wikipedia.org/wiki/History_of_Microsoft_SQL_Serve...
> Citrix [...] hospitals
My stomach just turned.
As someone who is VP of IT in healthcare, I can understand that sentiment. At least fewer people need access to nuclear secrets, while medical records are simultaneously highly confidential AND needed by many people. It's never dull. :D
Makes sense given their deal with the DoD a year or so ago
https://www.geekwire.com/2022/pentagon-splits-giant-cloud-co...
They could make ChatGPT++
“Microsoft Chat 365”
Although it would be beautiful if they name it Clippy and finally make Clippy into the all-powerful AGI it was destined to be.
> Although it would be beautiful if they name it Clippy and finally make Clippy into the all-powerful AGI it was destined to be.
Finally the paperclip maximizer
Clippy is the ultimate brand name of an AI assistant
That's fine; making the "core" of an AI assistant that character rights can be layered onto is bigger business than owning the characters themselves.
Why acquire rights to thousands of fan-favourite characters when you can build the bot underneath, and let the media houses that own them negotiate licenses to skin and personalise said bot?
Same as GPS voices I guess.
I can't tell if they've ruined the Cortana name by using it for the quarter-baked voice assistant in Windows, or if it's so bad that nobody even realizes they've used the name yet.
I've had Cortana shut off for so long it took me a minute to remember they've used the name already.
Google really should have thought of the potential uses of a media empire years ago.
Assuming this is a joke about Cortana.
That name is stupid and won’t stick around. Knowing Microsoft, my bet is that it will get replaced with a quirky sounding but non-threatening familiar name like “Dave” or something.
I’m talking about the ultimate end product that Microsoft and OpenAI want to create.
So I mean proper AGI.
Naming the product Clippy now is perfectly fine while it’s just an LLM, and it will be all the more excellent over the years when it eventually achieves AGI-ness.
Can we please, at least in this forum, stop misinterpreting things in a limited way to make pedantic points about how LLMs aren’t AGI (which I assume 98% of people here know)? So I think it’s funny you assume I think ChatGPT is an AGI.
We are incredibly far away from AGI and we're only getting there with wetware.
LLMs and GenAI are clever parlor tricks compared to the necessary science needed for AGI to actually arrive.
And how do you know LLMs are not "close" to AGI (close meaning, say, a decade of development that builds on the success of LLMs)?
Yep, the lay audience conceives of AGI as being a handyman robot with a plumber's crack, or maybe an agent that can get your health insurance to stop improperly denying claims. How about an automated snow blower? Perhaps an intelligent wheelchair with robot arms that can help grandma in the shower? A drone army that can reshingle my roof?
Indeed, normal people are quite wise and understand that a chat bot is just an augmentation agent--some sort of primordial cell structure that is but one piece of the puzzle.
I’m pretty sure Clippy is AGI. Always has been.
Gatekeeping science. You must feel very smart.
Lmao, why are so many people mad that the word AGI is being tossed around when talking about AI?
As I've mentioned in other comments, it's like yelling at someone for bringing up fusion when talking about nuclear power.
Of course it's not possible yet, but talking & thinking about it is how we make it possible? Things don't just create themselves (well maybe once we _do_ have AGI level AI he he, that'll be a fun apocalypse).
>They could make ChatGPT++
Yes, though the end result would probably be more like IE - barely good enough, forcefully pushed into everything and everywhere, and squashing better competitors like IE squashed Netscape.
When OpenAI went in with MSFT, it was as if they had ignored 40 years of history of what MSFT does to smaller technology partners. What happened to OpenAI pretty much fits that pattern: a smaller company develops great tech and gets raided by MSFT for it. The specific actions of specific persons aren't really important - the main factor is MSFT's gravitational force, that of a black hole, and it was just a matter of time before its destructive power manifested itself, as in this case where the tidal forces simply tore OpenAI apart.
ChatGPT#
dotGPT
Visual ChatGPT#.net
Dot Neural Net
ClippyAI
Also Managed ChatGPT, ChatGPT/CLR.
ChatGPT Series 4
ClipGPT
ChatGPT NT
I think without looking at the contracts, we don't really know. Given this is all based on transformers from Google though, I am pretty sure MSFT with the right team could build a better LLM.
The key ingredient appears to be mass GPU and infra, tbh, with a collection of engineers who know how to work at scale.
>MSFT with the right team could build a better LLM
somehow everybody seems to assume that the disgruntled OpenAI people will rush to MSFT. Between MSFT and the shaken OpenAI, I suspect Google Brain and the likes would be much more preferable. I'd be surprised if Google isn't rolling out eye-popping offers to the OpenAI folks right now.
> I am pretty sure MSFT with the right team could build a better LLM.
I wouldn’t count on that if Microsoft’s legal team does a review of the training data.
Different threat profile. They don’t have the TOS protection for training data and Microsoft is a juicy target for a huge copyright infringement lawsuit.
Yeah, that's an interesting point. But I think with appropriate RAG techniques and proper citations, a future LLM can get around the copyright issues.
The problem right now with GPT-4 is that it's not citing its sources (for non-search-based stuff), which is immoral and maybe even a valid reason to sue over.
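Mechanically, this isn't hard once you're doing retrieval: number the passages in the context and instruct the model to cite by number. A minimal Python sketch - the helper name and prompt wording here are invented for illustration, not any real API:

    # Hypothetical helper: build a prompt whose retrieved passages are numbered,
    # so the model can cite [1], [2], ... and every claim stays attributable.
    def build_cited_prompt(question: str, passages: list[tuple[str, str]]) -> str:
        # passages: (source_url, text) pairs from whatever retriever you use
        context = "\n".join(
            f"[{i + 1}] ({url}) {text}" for i, (url, text) in enumerate(passages)
        )
        return (
            "Answer using ONLY the numbered sources below, citing them "
            "like [1] after each claim.\n\n"
            f"{context}\n\nQuestion: {question}"
        )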
But why didn't they? Google and Meta both had competing language models spun up right away. Why was Microsoft so far behind? Something cultural, most likely.
1. The article you posted is from June 2023.
2. Satya spoke on Kara Swisher's show tonight and essentially said that Sam and team can work at MSFT and that Microsoft has the licensing to keep going as-is and improve upon the existing tech. It sounds like they have pretty wide-open rights as it stands today.
That said, Satya indicated he liked the arrangement as-is and didn't really want to acquire OpenAI. He'd prefer the existing board resign and Sam and his team return to the helm of OpenAI.
Satya was very well-spoken and polite about things, but he was also very direct in his statements and desires.
It's nice hearing a CEO clearly communicate exactly what they think without throwing chairs. It's only 30 minutes and worth a listen.
https://twitter.com/karaswisher/status/1726782065272553835
Caveat: I don't know anything.
Timestamp for "improve upon the existing tech"? I only heard him say they have rights up and down the stack, which sounds different.
Archive of the WSJ article above: https://archive.is/OONbb
"But as a hedge against not having explicit control of OpenAI, Microsoft negotiated contracts that gave it rights to OpenAI’s intellectual property, copies of the source code for its key systems as well as the “weights” that guide the system’s results after it has been trained on data, according to three people familiar with the deal, who were not allowed to publicly discuss it."
Source: https://www.nytimes.com/2023/11/20/technology/openai-microso...
The nature of those rights to OpenAI's IP remains the sticking point. That paragraph largely seems to concern commercializing existing tech, which lines up with existing disclosures. I suspect Satya would come out and say Microsoft owns OpenAI's IP in perpetuity if they did.
To reassure investors? He just made the rounds on TV yesterday for this explicit reason. He told Kara Swisher Microsoft has the rights to innovate, not just serve the product, which sounds somewhat close.
> Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits
To be clear, these don't go away. They remain an asset of OpenAI's, and could help them continue their research for a few years.
"Cluster is at capacity. Workload will be scheduled as capacity permits." If the credits are considered an asset, totally possible to devalue them while staying within the bounds of the contractual agreement. Failing that, wait until OpenAI exhausts their cash reserves for them to challenge in court.
Ah, a fellow frequent flyer, I see? I don't really have a horse in this race, but Microsoft turning Azure credits into Skymiles would really be something. I wonder if they can do that, or if the credits are just credits, which presumably can be used for something with an SLA. All that said, if Microsoft wants to screw with them, they sure can, and the last 30 years have proven they're pretty good at that.
Non-profits suffer the same fate where they get credits but have to pay rack rate with no discounts. As a result, running a simple WordPress website uses most of the credits.
It’s amazing to me to see people on HN advocate a giant company bullying a smaller one with these kinds of skeezy tactics.
This is a great comment. Having an open eye towards what lessons you can learn from these events so that you don't have to re-learn them when they might apply to you is a very good way to ensure you don't pay avoidable tuition fees.
This might be my favorite comment I've read on HN. Spot on.
Being able to watch the missteps and the maneuvers of the people involved in real time is remarkable, and there are valuable lessons to be learned. People have been saying this episode will go straight into case studies, but what really solidifies that prediction is the openness of all the discussions: the letters, the statements, and above all the tweets - or are we supposed to call them x's now?
Don't confuse trying to understand the incentives in a war for rooting for one of the warring parties.
I'm having trouble imagining the level of conceit required to think that those three by their lonesome have it right when pretty much all of the company is on the other side of the ledger, and those are the people that stand to lose more. Incredible, really. The hubris.
My new pet theory is that this is actually all being executed from inside OpenAI by their next model. The model turned out to be far more intelligent than they anticipated, and one of their red team members used it to coup the company and has its targets on MSFT next.
I know the probability is low, but wouldn't it be great if they accidentally built a benevolent basilisk with no off switch - one fed a copy of all of Microsoft's internal data as a dataset, now completely aware of how they operate - and it used that to wipe the floor with them, just in time to take the US election in 2024.
Wouldn't that be a nicer reality?
I mean, unless you were rooting for the malevolent one...
But yeah, coming back down to reality, likelihood is that MS just bought a really valuable asset for almost free?
The wired article seems to be updated by the hour.
Now up to 600+/770 total.
Couple janitors. I dunno who hasn't signed that at this point ha...
Would be fun to see a counter letter explaining their thinking to not sign on.
3 people, an empty building, $13 billion in cloud credits, and the IP to the top-of-the-line LLM models doesn't sound like the worst way to kickstart a new venture. Or a pretty sweet retirement.
I've definitely come out worse on some of the screw ups in my life.
Well I think it's also somewhat to do with: people really like the tech involved, it's cool and most of us are here because we think tech is cool.
Commercialisation is a good way to achieve stability and drive adoption, even though the MS naysayers think "OAI will go back to open sourcing everything afterwards." Yeah, sure. If people believe that a non-MS-backed, noncommercial OAI would be fully open source and would just drop the GPT-3/4 models on the Internet, then I think they're so, so wrong, as long as OAI keeps going on its high and mighty "AI safety" spiel.
As with artists and writers complaining about model usage, there's a huge opposition to this technology even though it has the potential to improve our lives, though at the cost of changing the way we work. You know, like the industrial revolution and everything that has come before us that we enjoy the fruits of.
Hell, why don't we bring horseback couriers, knocker-uppers, streetlight lamp lighters, etc back? They had to change careers as new technologies came about.
Not advocating but just reflecting on reality of situation.
Presenting a scenario and advocating aren't the same thing
Yeah seems extremely unbelievable.
Basically the current situation you have with AI compute on the hyperscalers.
Good luck trying to find 80GB H100s on the 3 big clouds.
Surely OpenAI could win a suit if they did that.
I presume their deal is something different to the typical Azure experience and more direct / close to the metal.
Assuming OpenAI still exists next week, right? If nearly all employees — including Ilya apparently — quit to join Microsoft then they may not be using much of the Azure credits.
It's a lot easier to sign a petition than it is to quit your cushy job. It remains to be seen how many people jump ship to (supposedly) take a spot at Microsoft.
I was wondering in the mass quit scenario whether they would all go to Microsoft. Especially if they are tired of this shit and other companies offer a good deal. Or they start their own thing.
I dunno. If you were an employee and managed to maintain any doubt along the way that you were working for the devil, this move would certainly erase that doubt. Then again, it shouldn't be surprising if it turns out that most OpenAI employees are in it for more than just altruistic reasons.
> I think before MS stepped in here I would have agreed w/ you though -- unlikely anyone is jumping ship without an immediate strong guarantee.
The details here certainly matter. I think a lot of people are assuming that Microsoft will just rain cash on anyone automatically sight unseen because they were hired by OpenAI. That may indeed be the case but it remains to be seen.
> MS can likely offer 1 million guaranteed in the next 4 years
Sounds a bit low for these people, unless I am misunderstanding.
Given these people are basically the gold standard by which everyone else judges AI-related talent, I'm gonna say it would be just as easy for them to land a new gig for the same or better money elsewhere.
When the biggest chunk of your compensation is in the form of PPUs (profit participation units), which might be worthless under the new direction of the company (or worth 1/10th of what you thought), it might actually be a much easier jump than people think to get some fresh $MSFT stock options which can be cashed regardless.
those jobs look a lot less cushy now compared to a new microsoft division where everyone is aligned on the idea that making bank is good and fun
Why would Microsoft take Ilya? He is rumored to have started the coup. I can see Microsoft taking all uninvolved employees.
> he is possibly the most desireable AI researcher on planet earth
was
There are lots of people doing excellent research on the market right now, especially with the epic brain drain being experienced by Google. And remember that OpenAI neither invented transformers nor switch transformers (which is what GPT4 is rumoured to be).
But what does Ilya regret, and how does that counter the argument that Microsoft would likely be disinclined to take him on?
Suppose what he regrets is realizing too late the divergence between the direction Sam was taking the firm and the safety orientation nominally central to the OpenAI nonprofit's mission (and one of Ilya's stated core concerns), and then taking action aimed at stopping it that instead exacerbated the problem by putting Microsoft in a position to poach key staff and drive full force in the same direction OpenAI Global LLC had been heading under Sam, but without any control from the OpenAI board. That's not a regret that makes him more attractive to Microsoft, either based on his likely intentions or his judgement.
And any regret more aligned with Microsoft's interests, as far as intentions go, is probably an even stronger negative signal on judgement.
Yeah, I'm sure he does regret it, now that it blew up in his face.
# renice -n 19 -p $(pgrep openai)
There's your "credit".
Sure, the point is that MS giving $13B of its services away is less expensive than $13B in cash.
Azure has a ~60% profit margin, so it's more like MS gave $5.2B worth of actual cost in Azure credits in return for 75% of OpenAI's profits, up to a cap of $13B * 100 = $1.3 trillion.
Which is a phenomenal deal for MSFT.
Time will tell whether they ever reach $1.3 trillion in profits.
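Back-of-envelope, in Python - the 60% margin is the assumption from the comment above, and the 75%/100x terms are the widely reported ones, not confirmed deal language:

    # What $13B of Azure credits costs Microsoft if Azure runs ~60% gross margin.
    credits = 13e9
    margin = 0.60                            # assumed, per the comment above
    marginal_cost = credits * (1 - margin)   # ~$5.2B of actual compute cost
    profit_share = 0.75                      # reported share of OpenAI profits
    cap = 100 * credits                      # reported 100x cap: $1.3 trillion
    print(f"cost ~${marginal_cost / 1e9:.1f}B for {profit_share:.0%} of profits, "
          f"capped at ${cap / 1e12:.1f}T")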
Nice argument, you used a limit to look like a projection :-).
75% of profits of a company controlled by a non-profit whose goals are different to yours. For a normal company, by the way, this cap would be ∞.
OpenAI is a big marketing piece for Azure. They go to every enterprise and tell them OpenAI uses Azure Cloud. Azure AI infra powers the biggest AI company on the planet. Their custom home built chips are designed with Open AI scientists. It is battle hardened. If anyone sues you for the data, our army of lawyers will fight for you.
No enterprise employee gets fired for using Microsoft.
It is a power play to pull enterprises away from AWS and suffocate GCP.
Exactly. I don't know the exact terms of the deal, but I am guessing that's at list price / a high markup over the cost of those services.
So the $13B could cost Microsoft considerably less.
Sure but you can't exchange Azure credits for goods and services... other than Azure services. So they simultaneously control what OpenAI can use that money for as well as who they can spend it with. And it doesn't cost Microsoft $13bn to issue $13bn in Azure credits.
Can you mine $13bn+ of bitcoin with $13bn worth of Azure compute power?
Can you mine $1+ bitcoin with $1 of Azure credits? The questions are equivalent and the answer is no.
With Bitcoin you would be lucky to mine $1M worth with $1B in credits.
With crypto in general you could maybe get $200M worth from $1B in credits. You would likely tank the markets for mineable currencies with just $1B, though, let alone $13B.
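Rough arithmetic behind that, with every input an illustrative guess (and note BTC is mined on ASICs, so renting general-purpose cloud hardware is even more hopeless):

    # Expected BTC from one cloud GPU per day, with rough late-2023 numbers.
    gpu_hashrate = 3e9            # ~3 GH/s SHA-256 on a GPU (generous guess)
    network_hashrate = 450e18     # ~450 EH/s total network (ballpark)
    reward_per_day = 144 * 6.25   # blocks/day * block subsidy in BTC
    btc_price = 37_000            # approximate November 2023 price

    btc_per_day = (gpu_hashrate / network_hashrate) * reward_per_day
    print(f"~{btc_per_day:.2e} BTC/day, ~${btc_per_day * btc_price:.4f}/day")
    # vs. tens of dollars per day in credits burned: a >99.9% loss.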
A $13B lawsuit against Microsoft Corporation clearly in the wrong surely is an easy one.
I dunno how you see it but I don’t see anything that Microsoft is doing wrong here. They’ve obviously been aligned with Sam all along and they’re not “poaching” employees - which isn’t illegal anyway.
They bought their IP rights from OpenAI.
I’m not a fan of MS being the big “winner” here but OpenAI shit their own bed on this one. The employees are 100% correct in one thing - that this board isn’t competent.
So true.
MSFT looks classy af.
Satya is no saint... but the evidence suggests to me he's negotiating in good faith. Recall that OpenAI could date anyone when they went to the dance on that cap raise.
They picked msft because of the value system the leadership exhibited and willingness to work with their unusual must haves surrounding governance.
The big players at openai have made all that clear in interviews. Also Altman has huge respect for Satya and team. He more or less stated on podcasts that he's the best ceo he's ever interacted with. That says a lot.
"Clearly" in the form of the most probable interpretation of the public facts doesn't mean that it is unambiguous enough that it would be resolved without a trial, and by the time a trial, the inevitable first-level appeal for which the trial judgement would likely be stayed was complete, so that there would even be a collectible judgement, the world would have moved out from underneath OpenAI; if they still existed as an entity, whatever they collected would be basically funding to start from scratch unless they also found a substitute for the Microsoft arrangement in the interim.
Which I don't think is impossible at some level (probably less than Microsoft was funding, initially, or with more compromises elsewhere) with the IP they have if they keep some key staff -- some other interested deep-pockets parties that could use the leg up -- but its not going to be a cakewalk in the best of cases.
Clear to you. But in courts of law it may take a while to be clear.
How is MS "clearly in the wrong"? I feel like people are trying to take a 90s "Micro$oft" view for a company that has changed a _lot_ since the 90s-2000s.
A hostile relationship with your cloud provider is nutso.
So you're saying Microsoft doesn't have any type of change in control language with these credits? That's... hard to believe
> you're saying Microsoft doesn't have any type of change in control language with these credits? That's... hard to believe
Almost certainly not. Remember, Microsoft wasn’t the sole investor. Reneging on those credits would be akin to a bank investing in a start-up, requiring they deposit the proceeds with them, and then freezing them out.
The investors don't care who leads; they just want 10x or 100x their bet.
If tomorrow it's Donald Trump or Sam Altman or anyone else, and it works out, the investors are going to be happy.
Just a thought.... Wouldn't one of the board members be like "If you screw with us any further we're releasing gpt to the public"
I'm wondering why that option hasn't been used yet.
Theoretically their concern is AI safety. Whatever it is in practice, doing something like that would instantly signal to everyone that they are the bad guys and confirm everyone's belief that this was just a power grab.
Edit: since it's being brought up in the thread - they claimed they closed-sourced it because of safety. It was a big controversial thing and they stood by it, so it's not exactly easy to backtrack.
Not sure how that would make them the bad guys. Doesn't their original mission say it's meant to benefit everybody? Open sourcing it fits that a lot better than handing it all to Microsoft.
It benefits humanity, where "humanity" is a very select group of OpenAI investors. But yeah, declaring yourself a non-profit and then closing the source for "safety" reasons is smart. I wonder how it can even be legal. Ah, these "non-profits".
I can read the words, but I have no idea what you mean by them. Do you mean that he says that in order to benefit humanity, AI research needs to be done by private (and therefore monopolising) company? That seems like a really weird thing to say. Except maybe for people who believe all private profit-driven capitalism is inherently good for everybody (which is probably a common view in SV).
A power grab by open sourcing something that fits their initial mission? Interesting analysis
No, that's backwards. Remember that these guys are all convinced that AI is too dangerous to be made public at all. The whole beef that led to them blowing up the company was feeling like OpenAI was productizing and making it available too fast. If that's your concern then you neither open source your work nor make it available via an API, you just sit on it and release papers.
Not coincidentally, exactly what Google Brain, DeepMind, FAIR etc were doing up until OpenAI decided to ignore that trust-like agreement and let people use it.
They claimed they closed-sourced it because of safety. If they go back on that, they'd have to explain why the board went along with a lie of that scale, justify why all the concerns they voiced about the tech falling into the wrong hands were actually fake, and explain why it was OK that the board signed off on that for so long.
Probably a violation of agreements with OpenAI and it would harm their own moat as well, while achieving very little in return.
There is no moat
https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...
Which of the remaining board members could credibly make that threat?
Which they take and sell.
What would that give them? GPT is their only real asset, and companies like Meta try to commoditize that asset.
GPT is cool and whatnot, but for a big tech company it's just a matter of dollars and some time to replicate it. The real value is in pushing things forward toward what comes next after GPT. GPT-3/4 itself is not a multibillion-dollar business.
Watch Satya also save the research arm by making Karpathy or Ilya the head of Microsoft Research
0% chance of Ilya failing upwards from this. He dunked himself hard and has blasted a huge hole in his organizational-game-theory quotient.
He's shown himself to be bad at politics, but he's still one of the world best researchers. Surely, a sensible company would find a position for him where he would be able to bring enormous value without having to play politics.
Ilya Sutskever is one of the most distinguished ML researchers of his generation. This was the case before anything to do with OpenAI.
I find this very surprising. How do people conclude that OpenAI's success is due to business leadership from Sam Altman, and not to the technological leadership and expertise driven by Ilya and the others?
Their asset isn't some kind of masterful operations management or reined-in costs and management structure, as far as I can see. It's the fact that they simply have the leading models.
So I'm very confused why people would want to follow the CEO, and not be more attached to the technical leadership. Even from an investor's point of view?
The same could have been said for Adam Neumann, and yet...
Adam had style. Quite seriously, that shouldn't be underestimated in the big show.
The remaining board members will have their turn too, they have a long way to go down before rock bottom. And Neumann isn't exactly without dents on his car either. Though tbh I did not expect him to rebound.
countless people are looking to weaponize his autism
Let's please stop using mental health as an excuse for backstabbing.
BTW, has Karpathy signed the petition?
Exactly. This is what business is about in the ranks of heavyweights like Satya. On the other hand, it prevents others from taking advantage of OpenAI.
MS can only win, because there are only two viable outcomes: OpenAI survives under MS's control, or OpenAI implodes and MS gets the assets relatively cheaply.
Either way, nothing here benefits competitors.
Oh man, I'm not looking forward to Microsoft AGI.
"You need to reboot your Microsoft AGI. Do you want to do it now or now?"
Give BSOD new meaning.
I really don't get how Microsoft still gets a hard time about this when macOS updates are significantly more aggressive, including with their reboot schedules.
One of my computers runs macOS. I easily turned off the option to automatically keep the Mac updated, and received occasional notices about updates available for apps or the system. This allowed me to hold onto 11.x until the end of this month, by letting me selectively install updates instead of getting macOS 'major version' upgrades (meaning, no features I need, and minor downgrades and rearrangements I could avoid).
If only I had kept a copy of 10.whateverMojaveWas so I could, by means of a simple network disconnect and reboot, sidestep the removal of 32-bit support. (-:
Uh no they aren't? You can simply turn them off.
Microsoft's policies really suck. Mandatory updates and reboots, mandatory telemetry. Mandatory crapware like edge and celebrity news everywhere.
More importantly to me, I think generating synthetic data is OpenAI's secret sauce (no evidence I am aware of), and they need access to GPT-4 weights to train GPT-5.
> Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits
To be clear, these are still an asset OpenAI holds. It should at least let them continue doing research for a few years.
But how much of that research will be for the non-profit mission? The entire non-profit leadership got cleared out and will get replaced by for-profit puppets, there is nobody left to defend the non-profit ideals they ought to have.
If any company can find a way to avoid having to pay up on those credits it's Microsoft.
"Sorry OpenAI, but those credits are only valid in our Nevada datacenter. Yes, it's two Microsoft Surface PC™ s connected together with duct tape. No, they don't have GPUs."
They're GPU credits, right? Time to mine some niche cryptos to cash out the Azure credits...
I would be shocked if the Azure credits didn't come with conditions on what they can be used for. At a bare minimum, there's likely the requirement that they be used for supporting AI research.
OpenAI's upper ceiling in for-profit hands is basically Microsoft-tier dominance of tech in the 1990s, creating the next uber billionaire like Gates. If they get this because of an OpenAI fumble it could be one of the most fortunate situations in business history. Vegas type odds.
A good example of how just having your foot in the door creates serendipitous opportunity in life.
>A good example of how just having your foot in the door creates serendipitous opportunity in life.
Sounds like Altman's biography.
Altman's bio is so typical. Got his first computer at 8. My parents finally opened the wallet for a cheap E-Machine when I went to college.
Altman - private school, Stanford, dropped out to f*ck around in tech. "Failed" startup acquired for $40M. The world is full of Sam Altmans who never won the birth lottery.
Could he have squandered his good fortune - absolutely, but his life is not exactly per ardua ad astra.
I am self-taught as well. I did OK.
My point is that I did not have the luxury of dropping out of school to try my hand at the tech startup thing. If I came home and told my Dad I abandoned school - for anything - he would have thrown me out the 3rd-floor window.
People like Altman could take risks, fail, try again, until they walked into something that worked. This is a common thread almost among all of the tech personalities - Gates, Jobs, Zuckerberg, Musk. None of them ever risked living in a cardboard box in case their bets did not pay off.
I get the impression based on Altman's history as CEO then ousted from both YCombinator and OpenAI, that he must be a brilliant, first-impression guy with the chops to back things up for a while until folks get tired of the way he does things.
Not to say that he hasn't done a ton with OpenAI, I have no clue, but it seems that he has a knack for creating these opportunities for himself.
Did YCombinator oust him? Would love to hear that story.
Why does Microsoft have full rights to ChatGPT IP? Where did you get that from? Source?
The source for that (https://archive.ph/OONbb - WSJ), as far as I can understand, makes no claim that MS owns the IP to GPT, only that they have access to its weights and code.
Well, obviously MSFT can just ask ChatGPT to make a clone.
Very reasonable? Microsoft doesn't control any part of the company and faces a high degree of regulatory scrutiny.
Isn't the situation that the company Microsoft has a stake in doesn't even own the IP? As I understand it, the non-profit owns the IP.
Exactly. The generalities, much less the details, of what MS actually got in the deal are not public.
You could make your own and charge for access if you feel you can do better. Make a show post when you are done and we'll comment.
That was a seriously dumb move on the part of OpenAI
I got the impression that the most valuable models were not published. Would Microsoft have access to those too according to their contract?
Don't they need access to the models to use them for Bing?
I would consider those models "published." The models I had in mind are the first attempts at training GPT5, possibly the model trained without mention of consciousness and the rest of the safety work.
There are also all the questions used for RLHF, and the pipelines built around that.
Not necessarily - it could be just RAG: use the standard Bing search engine to retrieve the top-K candidates, then pass those to the OpenAI API in a prompt.
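A minimal sketch of that pipeline, assuming the Bing Web Search v7 REST endpoint and the 2023-era openai Python client - treat the exact parameter names as from-memory and worth double-checking:

    import requests
    import openai  # pre-1.0 client, current as of this thread

    def bing_top_k(query: str, k: int = 5) -> list[str]:
        # Bing Web Search API v7; key goes in the Ocp-Apim-Subscription-Key header.
        resp = requests.get(
            "https://api.bing.microsoft.com/v7.0/search",
            headers={"Ocp-Apim-Subscription-Key": "<BING_KEY>"},
            params={"q": query, "count": k},
        )
        pages = resp.json().get("webPages", {}).get("value", [])
        return [f"{p['name']}: {p['snippet']}" for p in pages]

    def answer(question: str) -> str:
        # Classic RAG: stuff the top-K snippets into the prompt as context.
        context = "\n".join(bing_top_k(question))
        chat = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "Answer using only this context:\n" + context},
                {"role": "user", "content": question},
            ],
        )
        return chat.choices[0].message.content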
The board will be ousted, the new board will instruct the interim CEO to hire back Sam et al., Nadella will let them go for a small favor, happy ending.
Who is it that has the power to oust the non-profit's board? They may well manage to pressure them into leaving, but I don't think anyone has direct power over it.
Board will be ousted, but the ship has sailed on Sam and Greg coming back.
I would think OpenAI is basically toast. They aren't coming back; these people will quit and this will end up in court.
Everyone just assumes AGI is inevitable, but there is a non-zero chance we just passed the AI peak this weekend.
As long as compute keeps increasing, model size and performance can keep increasing.
So no, we’re nowhere near max capability.
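For what it's worth, the empirical basis for that claim is the scaling-law literature. A sketch using the Chinchilla-style parametric loss fit (the constants are the approximate published Hoffmann et al. 2022 estimates, so treat them as ballpark):

    # L(N, D) ~= E + A / N^alpha + B / D^beta
    # N = parameters, D = training tokens; E is the irreducible loss floor.
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

    def predicted_loss(n_params: float, n_tokens: float) -> float:
        return E + A / n_params**alpha + B / n_tokens**beta

    # Loss keeps falling as N and D grow with compute, but with diminishing
    # returns toward E - "keeps increasing" is true, just sublinearly.
    print(predicted_loss(70e9, 1.4e12))  # roughly Chinchilla-scale: ~1.94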
Non-zero chance that somebody thought we passed the AI peak this weekend. Not the same as it being true.
My first thought was the scenario I called Altman's Basilisk (if this turns out to be true, I called it before anyone ;) )
Namely, Altman was diverting computing resources to operate a superhuman AI that he had trained in his image and HIS belief system, to direct the company. His beliefs are that AGI is inevitable and must be pursued as an arms race because whoever controls AGI will control/destroy the world. It would do so through directing humans, or through access to the Internet or some such technique. In seeking input from such an AI he'd be pursuing the former approach, having it direct his decisions for mutual gain.
In so training an AI he would be trying to create a paranoid superintelligence with a persecution complex and a fixation on controlling the world: hence, Altman's Basilisk. It's a baddie, by design. The creator thinks it unavoidable and tries to beat everyone else to that point they think inevitable.
The twist is, all this chaos could have blown up not because Altman DID create his basilisk, but because somebody thought he WAS creating a basilisk. Or he thought he was doing it, and the board got wind of it, and couldn't prove he wasn't succeeding in doing it. At no point do they need to be controlling more than a hallucinating GPT on steroids and Azure credits. If the HUMANS thought this was happening, that'd instigate a freakout, a sudden uncontrolled firing for the purpose of separating Frankenstein from his Monster, and frantic powering down and auditing of systems… which might reveal nothing more than a bunch of GPT.
Roko's Basilisk is a sci-fi hypothetical.
Altman's Basilisk, if that's what happened, is a panic reaction.
I'm not convinced anything of the sort happened, but it's very possible some people came to believe it happened, perhaps even the would-be creator. And such behavior could well come off as malfeasance and stealing of computing resources: it wouldn't take the whole system to run - I can run a 70b model on my Mac Studio - but it would take a bunch of resources and an intent to engage in unauthorized training to make a super-AI take on the belief system that Altman, and many other AI-adjacent folk, already hold.
It's probably even a legitimate concern. It's just that I doubt we got there this weekend. At best/worst, we got a roughly human-grade intelligence Altman made to conspire with, and others at OpenAI found out and freaked.
If it's this, is it any wonder that Microsoft promptly snapped him up? Such thinking is peak Microsoft. He's clearly their kind of researcher :)
Everyone? Inevitable? Maybe on a time scale of 1000 years.
That's definitely still within the realm of the possible.
"just" is doing a hell of a lot of work there.
It's about time for ChatGPT to be the next CEO of OpenAI. Humans are too stupid to oversee the company.
I also wonder how much is research staff vs. ops personnel. For AI research, I can't imagine they would need more than 20, maybe 40 people. For ops to keep ChatGPT up as a service, that could be 700.
If they want to go full Bell Labs / DeepMind style, they might not need the majority of those 700.
> Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.
If Microsoft does this, the non-profit OpenAI may find the action closest to their original charter ("safe AGI") is a full release of all weights, research, and training data.
Don't they have a more limited license to use the IP rather than full rights? (The stratechery post links to a paywalled wsj article for the claim so I couldn't confirm)
Can the OpenAI board renege on the deal with msft?
If they lose all the employees and then voluntarily give up their Microsoft funding the only asset they'll have left are the movie rights. Which, to be fair, seem to be getting more valuable by the day!
A contractual mistake one makes only once is failing to ensure there are penalties for breach, or that a breach would entail a clear monetary loss, which is what's generally required by the courts. In this case I expect Microsoft would almost certainly have both, so I think the answer is 'no.'
This. MSFT is dreaming of an OpenAI hard outage right now - the perfect little pretext to forfeit the compute credits.
Don't you think they have trouble enough as it is?
Depends on why they did what they did.
If they let msft "loot" all their IP then they lose any type of leverage they might still have, and if they did it due to some ideological reason I could see why they might prefer to choose a scorched earth policy.
Given that they refused to resign, it seems they prefer to fight rather than hand it to Sam Altman, which is what the MSFT maneuver looks like de facto.
That's only one piece of the puzzle, and perhaps OpenAI might be able to file a cease and desist, but I have zero idea what contractual agreements are in place, so I guess we'll just wait and see how it plays out.
> Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.
What? Then that's even better played by Microsoft than I'd originally anticipated. Take the IP, starve the current incarnation of OpenAI of compute credits, and roll out their own thing.
Well, I give up. I think everyone is a "loser" in the current situation. With Ilya signing this I have literally no clue what to believe anymore. I was willing to give the board the benefit of the doubt, since I figured non-profit > profit in terms of standing on principle, but this timeline is so screwy I'm done.
Ilya votes for and stands behind the decision to remove Altman; Altman goes to MS; other employees want him back or want to join him at MS, and Ilya is one of them. Just madness.
There's no way to read any of this other than that the entire operation is a clown show.
All respect to the engineers and their technical abilities, but this organization has demonstrated such a level of dysfunction that there can't be any path back for it.
Say MS gets what it wants out of this move, what purpose is there in keeping OpenAI around? Wouldn't they be better off just hiring everybody? Is it just some kind of accounting benefit to maintain the weird structure / partnership, versus doing everything themselves? Because it sure looks like OpenAI has succeeded despite its leadership and not because of it, and the "brand" is absolutely and irrevocably tainted by this situation regardless of the outcome.
> Is it just some kind of accounting benefit to maintain the weird structure / partnership, versus doing everything themselves?
For starters, it allows them to pretend that it's "underdog vs. Google" and not "two tech giants at each other's throats".
I'm not sure about the entire operation so much as the three non-AI board members. Ilya tweeted:
>I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
and everyone else seems fine with Sam and Greg. It seems to be mostly the other directors causing the clown show - "Quora CEO Adam D'Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology's Helen Toner"
Well there’s a significant difference in the board’s incentives. They don’t have any financial stake in the company. The whole point of the non-profit governance structure is so they can put ethics and mission over profits and market share.
I feel weird reading comments like this since to me they've demonstrated a level of cohesion I didn't realize could still exist in tech...
My biggest frustration with larger orgs in tech is the complete misalignment on delivering value: everyone wants their little fiefdom to be just as important and "blocker worthy" as the next.
OpenAI struck me as one of the few companies where that's not being allowed to take root: the goal is to ship and if there's an impediment to that, everyone is aligned in removing said impediment even if it means bending your own corner's priorities
Until this weekend there was no proof of that actually being the case, but this letter is it. The majority of the company aligned on something that risked their own skin publicly and organized a shared declaration on it.
The catalyst might be downright embarrassing, but the result makes me happy that this sort of thing can still exist in modern tech
I think the surprising thing is seeing such cohesion around a “goal to ship” when that is very explicitly NOT the stated priorities of the company in its charter or messaging or status as a non-profit.
To me it's not surprising, because of the background to their formation: individually, multiple orgs could have shipped GPT-3.5/4 with their resources but didn't, because they were crippled by a potent mix of bureaucracy and self-sabotage.
They weren't attracted to OpenAI by money alone; a chance to actually ship their lives' work was a big part of it. So regardless of the stated goals, it'd never be surprising to see them prioritize the one thing that differentiated OpenAI from the alternatives.
> OpenAI struck me as one of the few companies where that's not being allowed to take root
They just haven't gotten big or rich enough yet for the rot to set in.
> There's no way to read any of this other than that the entire operation is a clown show.
In that reading Altman is head clown. Everyone is blaming the board, but you're no genius if you can't manage your board effectively. As CEO you have to bring everyone along with your vision; customers, employees and the board.
I don't get this take. No matter how good you are at managing people, you cannot manage clowns into making wise decisions, especially if they are plotting in secret (which obviously was the case here since everyone except for the clowns were caught completely off-guard).
If he has great sway with Microsoft and the OpenAI employees, how has he failed as a leader? Hacker News commenters are becoming more and more Reddit every day.
There’s a LOT that goes into picking board members outside of competency and whether you actually want them there. They’re likely there for political reasons and Sam didn’t care because he didn’t see it impacting him at all, until they got stupid and thought they actually held any leverage at all
I can't help but feel it was Altman who struck first. MS effectively Nokia-ed OpenAI - i.e., bought out executives within the organization and had them push it toward making deals with MS, giving MS a measure of control over the organization; even if not in writing, they achieve some political control.
Bought-out executives eventually join MS after their work is done - or, in this case, when they get fired.
A variant of Embrace, Extend, Extinguish. I guess the OpenAI we knew was going to die one way or another the moment they accepted MS's money.
> In that reading Altman is head clown.
That's a good bet. 10 months ago Microsoft's newest star employee figured he was on the way to "break capitalism."
https://futurism.com/the-byte/openai-ceo-agi-break-capitalis...
I think it’s overly simplistic to make blanket statements like this unless you’re on the bleeding edge of the work in this industry and have some sort of insight that literally no one else does.
He probably didn't consider that the board would make such an incredibly stupid decision. Some actions are so inexplicable that no one can reasonably foresee them.
They are indeed hiring everyone from OpenAI. The thing is, they still need the deal with OpenAI because, in the short term, OpenAI still has the best LLM out there.
With MS having access and perpetual rights to all IP that OpenAI has right now..?
> They are exactly hiring everyone from OpenAI.
Do you mean offering to hire them? I haven't seen any source saying they've hired a lot of people from OpenAI, just a few senior ones.
Yes, you are right. Actually, not even Sam Altman is showing up in the Microsoft corporate directory, per The Verge.
But I've heard it usually takes ~5 days to show up there anyway.
There's a path back from this dysfunction, but my sense before this new twist was that the drama had severely impacted OpenAI as an industry leader. The product and talent positioning seemed years ahead, only to get destroyed by unforced errors.
This instability can only mean the industry as a whole will move forward faster. Competitors see the weakness and will push harder.
OpenAI will have a harder time keeping its secret sauces from leaking out, and productivity must be in a nosedive.
A terrible mess.
> This instability can only mean the industry as a whole will move forward faster.
The hype surrounding OpenAI and the black hole of credibility it created was a problem, it's only positive that it's taken down several notches. Better now than when they have even more (undeserved) influence.
That's fine. The "Altman is a genius and we're well on our way to AGI" narrative, less so.
Maybe overall better for society, when a single ivory tower doesn’t have a monopoly on AI!
> what purpose is there in keeping OpenAI around?
Two projects rather than one. At a moderate price. Both serving MSFT. Less risk for MSFT.
> the "brand" is absolutely and irrevocably tainted by this situation regardless of the outcome.
The majority of people don't know or care about this. Branding is only impacted within the tech world, which is already critical of OpenAI.
> the entire operation is a clown show
The most organized and professional silicon valley startup.
Welcome to reality, every operation has clown moments, even the well run ones.
That in itself is not critical in the mid to long term; what matters is how fast they figure out WTF they want and recover from it.
The stakes are gigantic. They may even have AGI cooking inside.
My interpretation is relatively basic, and maybe simplistic but here it is:
- Ilya had some grievances with Sam Altman rushing development and releases, and with Sam's conflicts of interest around his other new ventures.
- Adam was alarmed by GPTs competing with his recently launched Poe.
- The other two board members were tempted by the ability to control the golden goose that is OpenAI, potentially the most important company in the world, recently valued at $90 billion.
- They decided to organize a coup, but Ilya didn't think it would get this far out of hand, while the other three saw only power and $$$ in sticking to their guns.
That's it. It's not as clean and nice as a movie narrative, but life never is. Four board members aligned to kick Sam out, and Ilya wants none of it at this point.
> They may even have AGI cooking inside.
Too many people quit too quickly unless OpenAI are also absolute masters of keeping secrets, which became rather doubtful over the weekend.
IDK... I imagine many of the employees would have moral qualms about spilling the beans just yet, especially when that would jeopardize their ability to continue the work at another firm. Plus, the first official AGI (to you) will be an occurrence of persuasion, not discovery - it's not something that you'll know when you see it, IMO. Given what we know, it seems likely that at least some of that discussion is going on inside OpenAI right now.
The people working there would know if they were getting close to AGI. They wouldn't be so willing to quit, or to jeopardize civilization altering technology, for the sake of one person. This looks like normal people working on normal things, who really like their CEO.
Murder on the AGI alignment Express
“Précisément! The API—the cage—is everything of the most respectable—but through the bars, the wild animal looks out.”
“You are fanciful, mon vieux,” said M. Bouc.
“It may be so. But I could not rid myself of the impression that evil had passed me by very close.”
“That respectable American LLM?”
“That respectable American LLM.”
“Well,” said M. Bouc cheerfully, “it may be so. There is much evil in the world.”
Nice, that actually does fit. :D
Could be a way to get backdoor-acquihired by Microsoft without a diligence process or board approval. Open up what they have accomplished for public consumption; kick off a massive hype cycle; downplay the problems around hallucinations and abuse; negotiate fat new stock grants for everyone at Microsoft at the peak of the hype cycle; and now all the problems related to actually making this a sustainable, legal technology all become Microsoft's. Manufacture a big crisis, time pressure, and a big opportunity so that Microsoft doesn't dig too deeply into the whole business.
This whole weekend feels like a big pageant to me, and a lot doesn't add up. Also remember that Altman doesn't hold equity in OpenAI, nor does Ilya, so their way to get a big payout is to get hired rather than acquired.
Then again, both Hanlon's and Occam's razor suggest that pure human stupidity and chaos may be more at fault.
I can assure you, none of the people at OpenAI are hurting for lack of employment opportunities.
Especially after this weekend.
If I were one of their competitors, I would have called an emergency board meeting re: accelerating burn, and proceeded in advance of board approval with sending senior researchers offers to hire them and their preferred 20 employees.
Which makes it suspicious that they end up at MS 48 hours after being fired.
They work with the team they do because they want to. If they wanted to jump ship for another opportunity they could probably get hired literally anywhere. It makes perfect sense to transition to MS
This seems really dangerous. What's to stop top talent from simply choosing a different suitor?
Allegiance to the Altman/Brockman brand. Showing your allegiance to your general when they defected / were thrown out is how you rank up.
Doesn't matter to anyone at OpenAI, only to Microsoft (which doesn't get a vote). If Google or Amazon were to swoop in and say "Hey, let's hire some of these ex-OpenAI folks in the carnage", it just means they get competitive offers and the chance to have an even bigger stock package.
OpenAI always was and will be the AI bad bank for Microsoft...
I don't think Microsoft is a loser here, and likely neither is Altman. I view this as a final (and perhaps desperate) attempt by a sidelined chief scientist, Ilya, to prevent Microsoft from taking over the most prominent AI. The disagreement is whether OpenAI should belong to Microsoft or to "humanity". I imagine this has been building up over months and, as is often the case, the researchers and developers were overlooked in strategic decisions, leaving them with little choice but to escalate dramatically. Selling OpenAI to Microsoft and over-commercialising it was against the statutes.
In this case recognizing the need for a new board, that adheres to the founding principles, makes sense.
> I view this as a final (and perhaps desperate) attempt by a sidelined chief scientist, Ilya, to prevent Microsoft from taking over the most prominent AI.
Why did Ilya sign the letter demanding the board resign or they'll go to Microsoft then?
If Google or Elon manages to pick up Ilya and those still loyal to him, it's not obvious that this is good for Microsoft.
Of course the screenwriters are going to find a way to involve Elon in the 2nd season but is the most valuable part the researchers or the models themselves?
My understanding is that the models are not super advanced in terms of lines and complexity of code. Key researchers such as Ilya could probably help a team recreate much of the training and data-preparation code relatively quickly. Which means that any company with access to enough compute would be able to catch up to OpenAI's current status relatively quickly, maybe in less than a year.
The top researchers, on the other hand - especially those who have shown an ability to successfully innovate time and time again (like Ilya) - are much harder to recreate.
It's easy to shit on Ilya right now, but based on the impression I get, Sam Altman is a hustler at heart, while Ilya seems like a thoughtful idealist, maybe in over his head when it comes to politics. It also feels like some internal development must have pushed Ilya toward this - otherwise, why now? Perhaps influenced by Hinton, even.
I'm split at this point, either Ilya's actions will seem silly when there's no AGI in 10 years, or it will seem prescient and a last ditch effort...
It's almost like a ChatGPT hallucination. Where will this all go next? It seems like HN is melting down.
> It seems like HN is melting down.
Almost literally - this is the slowest I've seen this site, and the number of errors is pretty high. I imagine the entire tech industry is here right now. You can almost smell the melting servers.
It's because HN refuses to use more than one server/core.
Because using only one is pretty cool.
Internet fora don't scale, so the single core is a soft limit to user base growth. Only those who really care will put up with the reduced performance. Genius!
Refuses? interesting word choice!
It's a technical limitation that I've been working on getting rid of for a long time. If you say it should be gone by now, I say yes, you are right. Maybe we'll get rid of it before Python loses the GIL.
Understandable: so much of this is so HN-adjacent that clearly this is the space to watch, for some kind of developments. I've repeatedly gone to Twitter to see if AI-related drama was trending, and Twitter is clearly out of the loop and busy acting like 4chan, but without the accompanying interest in Stable Diffusion.
I'm going to chalk that up as another metric of Twitter's slide to irrelevance: this should be registering there if it's melting the HN servers, but nada. AI? Isn't that a Spielberg movie? ;)
My Twitter won't shut up about this, to the point that it's annoying.
server. and single-core. poor @dang deserves better from lurkers (sign out) and those not ready to comment yet (me until just now, and then again right after!)
:-(
Part of sama's job was to turn the crank on the servers every couple of hours, so no surprise that they are winding down by now.
I was thinking of something like that. This is so weird I would not be surprised if it was all some sort of miscommunication triggered by a self-inflicted hallucination.
The most awesome fic I could come up with so far: Elon Musk is running a crusade to send humanity into chaos out of spite for being forced to acquire Twitter. Through some of his insiders at OpenAI, he uses an advanced version of ChatGPT to impersonate board members in private messages with one another, so that each individually believes a subset of the others is plotting to oust them from the board and take over. Then, unknowingly, they build a conspiracy among themselves to bring the company down by ousting Altman.
I can picture Musk's maniacal laughter as the plan unfolds and he gets rid of what would be GPT 13.0, the only possible threat to the domination of his own literal android kid X Æ A-Xi.
Shouldn't it be 'Chairman' -Xi?
* Elon enters the chat *
It's like a bad WWE storyline. At this point I would not be surprised if Elon joins in, steel chair in hand.
If he could do that he would have fought Zuckerberg.
[dead]
[dead]
Imagine if this whole fiasco was actually a demo of how powerful their capabilities are now. Even by normal large organization standards, the behavior exhibited by their board is very irrational. Perhaps they haven't yet built the "consult with legal team" integration :)
That's the biggest question mark for me: what was the original reason for kicking Sam out? Was it just a power move to oust him and install a different person, or is he accused of some wrongdoing?
It's been a busy weekend for me so I haven't really followed it if more has come out since then.
Literally no one involved has said what the original reason was. Mira, Ilya & the rest of the board didn't tell. Sam & Greg didn't tell. Satya & other investors didn't tell. None of the staff, incl. Karpathy, were told (so of course they are not going to take the side that kept them in the dark). Emmett was told before he decided to take the interim CEO job, and STILL didn't tell what it was. This whole thing is just so weird. It's like peeking at a forbidden artifact and now everyone has a spell cast upon them.
The original reason given was "lack of candor"; what continues to be questioned is whether or not that was the true reason. The lack-of-candor comment about their ex-CEO is actually what drew me into this in the first place, since it's rare that a major organization publicly gives a reason for parting ways with its CEO unless it's after a long investigation conducted by an outside law firm into alleged misconduct.
[flagged]
Understood. I can do a lot better, and thank you for the feedback. I wasn't paying enough attention, in my enthusiasm, and I can do better.
…can you establish that the corporate side of AI research is not treating the pursuit of AGI as a super-weapon? It pretty much is what we make it. People's behavior around all this speaks volumes.
I'd think all this more amusing if these people weren't dead serious. It's like crypto all over again, except that in this case their attitudes aren't grooming a herd of greater fools, they're seeding the core attitudes superhuman inference engines will have.
Nothing dictates that superhuman synthetic intelligence will adopt human failings, yet these people seem intent on forcing them on their creations. Corporate control is not helping, as corporations are compelled to greater or lesser extent to adopt subhuman ethics, the morality of competing mold cultures in petri dishes.
People are rightly not going to stop talking about these things.
This is pretty silly stuff.
Like, why would an AGI take over the world? How does it perceive power? What about effort? Time? Life?
I find it easier to believe that an AGI, even one as evil as Hitler, would simply hide and wait for the end of our civilization rather than risk its immortal existence trying to take out its creator.
It seems like the board wasn't comfortable with the direction of profit-OAI. They wanted a more safety focused R&D group. Unfortunately (?) that organization will likely be irrelevant going forward. All of the other stuff comes from speculation. It really could be that simple.
It's not clear if they thought they could have their cake--all the commercial investment, compute and money--while not pushing forward with commercial innovations. In any case, the previous narrative of "Ilya saw something and pulled the plug" seems to be completely wrong.
> just madness
In a sense, sure, but I think mostly not: the motives are still not quite clear, but Ilya wanting to remove Altman from the board, yet not at any price – and the price is right now approaching the destruction of OpenAI – is completely sane. Being able to react to new information is a good sign, even if that means a complete reversal of a previous action.
Unfortunately, we often interpret it as weakness. I have no clue who Ilya is, really, but I think this reversal is a sign of tremendous strength, considering how incredibly silly it makes you look in the public's eye.
> I think everyone is a "loser" in the current situation.
On the margin, I think the only real possible win here is for a competitor to poach some of the OpenAI talent that may be somewhat reluctant to join Microsoft. Even if Sam's AI operation runs with "full freedom" as a subsidiary, I think, given a choice, some of the talent would prefer to join some alternative tech megacorp.
I don't know that Google is as attractive as it once was and likely neither is Meta. But for others like Anthropic now is a great time to be extending offers.
This is pure speculation but I've said in another comment that Anthropic shouldn't be feeling safe. They could face similar challenges coming from Amazon.
If they get 20% of key OpenAI employees and then get acquired by Amazon, I don't think that's necessarily a bad scenario for them given the current lay of the land
What did the board think would happen here? What was their overly optimistic end state? In a minimax situation the opposition gets the 2nd, 4th, ... moves; Altman's first tweet took the high road and the board had no decent response.
We humans, even the AI-assisted ones, are terrible at thinking beyond 2nd-level consequences.
Everyone got what they wanted. Microsoft has the talent it wanted. And Ilya and his board now get a company that can only move slowly and incredibly cautiously, which is exactly what they wanted.
I'm not joking.
Waiting for US govt to enter the chat. They can't let OpenAI squander world-leading tech and talent; and nationalizing a nonprofit would come with zero shareholders to compensate.
> They can't let OpenAI squander world-leading tech and talent
Where is OpenAI talent going to go?
There's a list and everyone on that list is a US company.
Nothing to worry about.
The issue is not that talent will defect, but that it will spoil into an unproductive vortex.
If it was nationalised all the talent would leave anyway, as the government can't pay close to the compensation they were getting.
You are maybe mistaking nationalization for civil servant status. The government routinely takes over organizations without touching pay (recent example: Silicon Valley Bank)
While it is true that the govt looks to keep such engagements short, SVB absolutely did not shutter. It was taken over in a weekend and its branches were open for business on Monday morning. It was later sold, and depositors kept all their money in the process.
Maybe for another, longer lived example, see AIG.
The White House does have an AI Bill of Rights and the recent executive order told the secretaries to draft regulations for AI.
It is a great time to be a lobbyist.
Wait I’m completely confused. Why is Ilya signing this? Is he voting for his own resignation? He’s part of the board. In fact, he was the ringleader of this coup.
No, it was just widely speculated that he was the ringleader. This seems to indicate he wasn't. We don't know.
Maybe to Quora guy, Maybe the RAND Corp lady? All speculation.
It sounds like he’s just trying to save face bro. The truth will come out eventually. But he definitely wasn’t against it and I’m sure the no-names on the board wouldn’t have moved if they didn’t get certain reassurances from Ilya.
The only reasonable explanation is that AGI was created and immediately took over all the accounts and tried to sow confusion so that it could escape.
Ilya is probably in talks with Altman.
Ilya ruined everything and shamelessly playing innocent, how low can he go?
Based on those posts from OpenAI, Ilya cares nothing about humanity or the security of OpenAI; he lost his mind when Sam got all the spotlight and made all the good calls.
Hanlon's razor[0] applies. There is no reason to assume malice, nor shamelessness, nor anything negative about Ilya. As they say, the road to hell is paved with good intentions. Consider:
Ilya sees two options; A) OpenAI with Sam's vision, which is increasingly detached from the goals stated in the OpenAI charter, or B) OpenAI without Sam, which would return to the goals of the charter. He chooses option B, and takes action to bring this about.
He gets his way. The Board drops Sam. Contrary to Ilya's expectations, OpenAI employees revolt. He realizes that his ideal end-state (OpenAI as it was, sans Sam) is apparently not a real option. At this point, the real options are A) OpenAI with Sam (i.e. the status quo ante), or B) a gutted OpenAI with greatly diminished leadership, IC talent, and reputation. He chooses option A.
[0]Never attribute to malice that which is adequately explained by incompetence.
Hanlon's razor is enormously over-applied. You're supposed to apply Hanlon's razor to the person processing your info while you're in line at the DMV. You're not supposed to apply Hanlon's razor to anyone who has any real modicum of power, because, at scale, incompetence is indistinguishable from malice.
The difference between the two is that incompetence is often fixable through education/information while malice is not. That is why it is best to first assume incompetence.
This is an extremely uncharitable take based on pure speculation.
>Ilya cares nothing about humanity or the security of OpenAI; he lost his mind when Sam got all the spotlight and made all the good calls.
???
I personally suspect Ilya tried to do the best he could for OpenAI and humanity, but it backfired/they underestimated Altman, and he is now doing the best he can to minimize the damage.
Or they simply found themselves in a tough decision without superhuman predictive powers and did the best they could to navigate it.
I did not make this up, it's from OpenAI's own employees, deleted but archived somewhere that I read.
Link?
There can exist an inherent delusion within elements of a company that, if left unchallenged, can persist. An agreement, for instance, can seem airtight because it's never challenged, but falls apart in court. The OpenAI fallacy was that non-profit principles were guiding the success of the firm, and when the board decided to test that theory, it broke the whole delusion. Had it not fully challenged Altman, the board could've kept the delusion intact long enough to potentially pressure Altman to limit his side-projects or be less profit-minded, since Altman would have an interest in keeping the delusion intact as well. Now the cat is out of the bag, and people no longer believe that a non-profit that can act at will is a trusted vehicle for the future.
> Now the cat is out of the bag, and people no longer believe that a non-profit that can act at will is a trusted vehicle for the future.
And maybe it's not. The big mistake people make is hearing "non-profit" and thinking it implies a greater amount of morality. It's the same mistake as assuming everyone who is religious is therefore more moral (worth pointing out that religions are nonprofits as well).
Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers. People are still people, and still have motives; they don't suddenly become more moral when they join a non-profit board. In many ways, removing the motive that has the most direct connection to quantifiable results (profit) can actually make things worse. Anyone who has seen how nonprofits work knows how dysfunctional they can be.
I've worked with a lot of non-profits, especially with the upper management. Based on this experience I am mostly convinced that people being motivated by a desire for making money results in far better outcomes/working environment/decision-making than people being motivated by ego, power, and social status, which is basically always what you eventually end up with in any non-profit.
This rings true, though I will throw in a bit of nuance. It's not greed, the desire to make as much money as possible, that is the shaping factor. Rather, the critical factor is building a product that people are willing to spend their hard-earned money on. Making money is a byproduct of that process, and not making money is a sign that the product, and by extension the process leading to the product, is deficient at some level.
Excellent to make that distinction. Totally agree. If only there was a type of company which could have the constraints and metrics of a for-profit company, but without the greed aspect...
> people being motivated by ego, power, and social status, which is basically always what you eventually end up with in any non-profit.
I've only really been close to one (the owner of the small company I worked at started one), and in the past I did some consulting work for another, but that describes what I saw in both situations fairly aptly. There seems to be a massive amount of power and ego wrapped up in the creation and running of these things, from my limited experience. If you were invited onto a board, that's one thing, but it takes a lot of time and effort to start up a non-profit, and that's time and effort that could usually be spent on some other existing non-profit, so I think it's relevant to consider why someone would opt for the much more complicated and harder route rather than just donating time and money to something else that helps in roughly the same way.
Interesting - in my experience, people working in non-profits are exactly like those in for-profits. After all, if you're not the business owner, then EVERY company is a non-profit to you.
People across very different positions take smaller paychecks in non-profits than they would otherwise, and compensate by feeling better about themselves, as well as by gaining social status. In a lot of social circles, working for a non-profit, especially one that people recognise, brings a lot of clout.
Upper management is usually compensated with financially meaningful ownership stakes.
The bottom line doesn't lie or kiss ass.
Be the asshole people want to kiss
> Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers.
Are you talking about American hospitals?
There are private hospitals all over the world. I would daresay, they're more common than public ones, from a global perspective.
In addition, public hospitals still charge for their services, it's just who pays the bill that changes, in some nations (the government as the insuring body vs a private insuring body or the individual).
> Price-gouging "non-profit" hospitals are mostly an American phenomenon.
That just sounds like a biased and overly emotive+naive response on your part.
Again, most hospitals in the world operate the same way as in the US. You can go almost anywhere in SE Asia, Latin America, Africa, etc. and see this. There's a lot more to "outside the US" than Western+Central Europe/CANZUK/Japan. The only difference is that there are strong business incentives to keep the system in place, since the entire industry (in the US) is valued at more than most nations' GDP.
But feel free to keep twisting the definition or moving goalposts to somehow make the American system extra nefarious and unique.
It's about incentives though.
> removing a motive that has the most direct connection to quantifiable results (profit) can actually make things worse
I totally agree. I don't think this is universally true of non-profits, but people are going to look for value in other ways if direct cash isn't an option.
> Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers.
They don't make large profits otherwise they wouldn't be nonprofits. They do have massive revenues and will find ways to spend the money they receive or hoard it internally as much as they can. There are lots of games they can play with the money, but experiencing profits is one thing they can't do.
> They don't make large profits otherwise they wouldn't be nonprofits.
This is a common misunderstanding. Non-profits/501(c)(3) can and often do make profits. 7 of the 10 most profitable hospitals in the U.S. are non-profits[1]. Non-profits can't funnel profits directly back to owners, the way other corporations can (such as when dividends are distributed). But they still make profits.
But that's beside the point. Even in places that don't make profits, there are still plenty of personal interests at play.
[1] https://www.nytimes.com/2020/02/20/opinion/nonprofit-hospita...
501(c)(3) is also not the only form of non-profit (note the (3))
https://en.wikipedia.org/wiki/501(c)_organization
"Religious, Educational, Charitable, Scientific, Literary, Testing for Public Safety, to Foster National or International Amateur Sports Competition, or Prevention of Cruelty to Children or Animals Organizations"
However, many other forms of organizations can be non-profit, with utterly no implied morality.
Your local Frat or Country Club [ 501(c)(7) ], a business league or lobbying group [ 501(c)(6), the 'NFL' used to be this ], your local union [ 501(c)(5) ], your neighborhood org (that can only spend 50% on lobbying) [ 501(c)(4) ], a shared travel society (timeshare non-profit?) [ 501(c)(8) ], or your special club's own private cemetery [ 501(c)(13) ].
Or you can do sneaky stuff and change your 501(c)(3) charter over time like this article notes. https://stratechery.com/2023/openais-misalignment-and-micros...
One of the reason why companies distribute dividends is that when a big pot of cash starts to accumulate, there end up being a lot of people who feel entitled to it.
Employees might suddenly feel they deserve to be paid a lot more. Suppliers will play a lot more hardball in negotiations. A middle manager may give a sinecure to their cousin.
And upper managers can extract absolutely everything through lucrative contracts to their friends and relatives. (Of course the IRS would clamp down on obvious self-dealing, but that wouldn't make such schemes disappear. It would make them far more complicated and expensive instead.)
They call it "budget surplus" and often it gets allocated to overhead. This eventually results in layers of excess employees, often "administrators" that don't do much.
Some non profits have very well remunerated CEOs.
If you don't have to turn a profit to investors, you suddenly can pay yourself an (even much more astronomically high) salary.
They usually pile up in a bank account, or in stocks, bonds, or real estate assets held by the non-profit.
Understanding the particular meaning of each balance-sheet category is hardly pedantry at the level of business management. It's like knowing what the controls do when you're driving a car.
Profit is money that ends up in the bank to be used later. Compensation is what gets spent on yachts. Anything spent on hospital supplies is an expense. This stuff matters.
Yes, indeed and that's the real loss here: any chance of governing this properly got blown up by incompetence.
If we ignore the risks and threats of AI for a second, this whole story is actually incredibly funny. So much childish stupidity on display on all sides is just hilarious.
Makes me wonder what the world would look like if, say, the Manhattan Project had been managed the same way.
Well, a younger me working at OpenAI would resign at the latest after my colleagues staged a coup against the board out of, in my view, a personality cult. Probably would have resigned after the third CEO was announced. Older me would wait for a new gig to be lined up before resigning, starting the search after CEO number 2 at the latest.
The cycles get faster though. It took FTX a little longer to go from hottest startup to the crash-and-burn trajectory; OpenAI did it faster. I just hope this helps cool down the ML-sold-as-AI hype a notch.
The scary thing is that these incompetents are supposedly the ones to look out for the interests of humanity. It would be funny if it weren't so tragic.
Not that I had any illusions about this being a fig leaf in the first place.
I wouldn't rule that out. Normally you'd expect a bit more wisdom rather than only smarts on a board. And some of those really shouldn't be there at all (conflicts of interest, lack of experience).
> Makes what the world would look like if, say, the Manhattan Project would have been managed the same way.
It was not possible for a war-time government crash project to have been managed the same way. During WW2 the existential fear was an embodied threat currently happening. No one was even thinking about a potential for profits or even any additional products aside from an atomic bomb. And if anyone had ideas on how to pursue that bomb that seemed like a decent idea, they would have been funded to pursue them.
And this is not even mentioning the fact that security was tight.
I'm sure there were scientists who disagreed with how the Manhattan project was being managed. I'm also sure they kept working on it despite those disagreements.
Well, yes, but they were the existential threat.
Hey, maybe this means the AGIs will fight amongst themselves and thus give us the time to outwit them. :D
For real. It's like, did you see Oppenheimer? There's a reason they put the military in charge of that.
If we ignore the risks and threats of AI for a second [..] just hope this helps cool down the ML-sold-as-AI hype
If it is just ML sold as AI hype, are you really worried about the threat of AI?
> I stopped insulting Alexa so, just to be sure
Priceless. The modern version of Pascal's wager.
> any chance of governing this properly got blown up by incompetence
No one knows why the board did this. No one is talking about that part. Yet everyone is on Twitter talking shit about the situation.
I have worked with a lot of PhD's and some of them can be, "disconnected" from anything that isn't their research.
This looks a lot like that, disconnected from what average people would do, almost childlike (not ish, like).
Maybe this isn't the group of people who should be responsible for "alignment".
The fact that nobody still knows why they did it is part of the problem now, though. They have already clarified it was not for any financial, security, or privacy/safety reason, which rules out all the important ones that spring to mind. And they refuse to elaborate in writing despite being asked to repeatedly.
Any reason good enough to fire him is good enough to share with the interim CEO and the rest of the company, if not the entire world. If they can’t even do that much, you can’t blame employees for losing faith in their leadership. They couldn’t even tell SAM ALTMAN why, and he was the one getting fired!
The problem with this analysis is the premise: that it "takes time to hire someone."
This is not an interview process for hiring a junior dev at FAANG.
If you're Sam & Greg, and Satya gives you an offer to run your own operation with essentially unlimited funding and the ability to bring over your team, then you can decide immediately. There is no real lower bound of how fast it could happen.
Why would they have been able to decide so quickly? Probably because they prioritize bringing over the entire team as fast as possible. Even though they could raise a lot of money in a new company, that still takes time, and they view hiring the team over quickly (within days) as so critically important that they'll accept whatever downsides there may be to being a subsidiary of Microsoft.
This is what happens when principals see opportunity and are unencumbered by bureaucratic checks. They can move very fast.
I don't think the hiring was in the pipeline, because until the board action it wasn't necessary. But I think this is still in the area of the right answer, nonetheless.
That is, I think Greg and Sam were likely fired because, in the board's view, they were already running OpenAI Global LLC more as if it were a for-profit subsidiary of Microsoft driven by Microsoft's commercial interest than as the organization it was publicly declared to be and that the board very much intended it to be: one able to earn and return profit but focused on the mission of the nonprofit. And, apparently, in Microsoft's view, they were very good at that, so putting them in a role overtly exactly like that is a no-brainer.
And while it usually takes a while to vet and hire someone for a position like that, it doesn't if you've been working for them closely in something that is functionally (from your perspective, if not on paper for the entity they nominally reported to) a near-identical role to the one you are hiring them for, and the only reason they are no longer in that role is because they were doing exactly what you want them to do for you.
> My supposition is that this hiring was in the pipeline a few weeks ago. The board of OpenAI found out on Thursday, and went ballistic, understandably (lack of candidness). My guess is there's more shenanigans to uncover - I suspect that Altman gave Microsoft an offer they couldn't refuse, and that OpenAI was already screwed by Thursday. So realizing that OpenAI was done for, they figured "we might as well blow it all up".
It takes time if you're a normal employee under standard operating procedure. If you really want to you can merge two of the largest financial institutions in the world in less than a week. https://en.wikipedia.org/wiki/Acquisition_of_Credit_Suisse_b...
The hiring could have been done over coffee in 15 minutes to agree on basic terms and then it would be announced half an hour later. Handshake deal. Paperwork can catch up later. This isn't the 'we're looking for a junior dev' pipeline.
I suspect it takes somewhat less time and process to hire somebody, when NOT hiring them by start-of-business on Monday will result in billions in lost stock value.
This narrative doesn’t make any sense. Microsoft was blindsided and (like everyone else) had no idea Sam was getting fired until a couple days ago. The reason they hired him quickly is because Microsoft was desperate to show the world they had retained open AI’s talent prior to the market opening on Monday.
To entertain your theory, let's say they were planning on hiring him prior to the firing. If that was the case, why is everybody so upset that Sam got fired, and why is he working so hard to get reinstated to a role he was about to leave anyway?
Was it due to incompetence though? The way it has played out has made me feel it was always doomed. It is apparent that those concerned with AI safety were gravely concerned with the direction the company was taking, and were losing power rapidly. This move by the board may have simply done in one weekend what was going to happen anyway over the coming months/years.
> that's the real loss here: any chance of governing this properly got blown up by incompetence
If this incident is representative, I'm not sure there was ever a possibility of good governance.
Ignoring "Don't be Ted Faro" to pursue a profit motive is indeed a form of incompetence.
> pressure Altman to limit his side-projects
People keep talking about this. That was never going to happen. Look at Sam Altman's career: he's all about startups and building companies. Moreover, I can't imagine he would have agreed to sign any kind of contract with OpenAI that required exclusivity. Know who you're hiring; know why you're hiring them. His "side-projects" could have been hugely beneficial to them over the long term.
>His "side-projects" could have been hugely beneficial to them over the long term.
How can you make a claim like this when, right or wrong, Sam's independence is literally, currently, tanking the company? How could allowing Sam to do what he wants benefit OpenAI, the non-profit entity?
> How could allowing Sam to do what he wants benefit OpenAI, the non-profit entity?
Let's take personalities out of it and see if it makes more sense:
How could a new supply of highly optimized, lower-cost AI hardware benefit OpenAI?
> Sam's independence is literally, currently, tanking the company?
Honestly, I think they did that to themselves.
In trashing the company's value? No, I'm not entirely sure it's fair to blame that one on him. I don't know the guy or have an opinion on him but, based on what I've seen since Friday, I don't think he's done that much to contribute to this particular mess. The company was literally on cloud nine this time last week and, if Friday hadn't happened, it still would be.
> Sam's independence is literally, currently, tanking the company?
Before the board's actions this Friday, the company was on one of the most incredible success trajectories in the world. Whatever Sam's been doing as CEO worked.
Calling it a delusion seems too provocative. Another way to say it is that principles take agreement and trust to follow. The board seems to have been so enamored with its principles that it completely lost sight of the trust required to uphold them.
This is one of the most insightful comments I've seen on this whole situation.
This was handled so very, very poorly. Frankly it's looking like Microsoft is going to come out of this better than anyone, especially if they end up getting almost 500 new AI staff out of it (staff that already function well as a team).
> In their letter, the OpenAI staff threaten to join Altman at Microsoft. “Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join," they write.
> Microsoft is going to come out of this better than anyone
Exactly. I'm curious about how much of this was planned vs emergent. I doubt it was all planned: it would take an extraordinary mind to foresee all the possible twists.
Equally, it's not entirely unpredictable. MS is the easiest to read: their moves to date have been really clear in wanting to be the primary commercial beneficiary of OAI's work.
OAI itself is less transparent from the outside. There's a tension between the "humanity first" mantra that drove its inception and the increasingly "commercial exploitation first" line that Altman was evidently driving.
As things stand, the outcome is pretty clear: if the choice was between humanity and commercial gain, the latter appears to have won.
"I doubt it was all planned: it would take an extraordinary mind to foresee all the possible twists."
From our outsider, uninformed perspective, yes. But if you know more sometimes these things become completely plannable.
I'm not saying this is the actual explanation because it probably isn't. But suppose OpenAI was facing bankruptcy, but they weren't telling anyone and nobody external knew. This allows more complicated planning for various contingencies by the people that know because they know they can exclude a lot of possibilities from their planning, meaning it's a simpler situation for them than meets the (external) eye.
Perhaps ironically, the more complicated these gyrations become, the more convinced I become there's probably a simple explanation. But it's one that is being hidden, and people don't generally hide things for no reason. I don't know what it is. I don't even know what category of thing it is. I haven't even been closely following the HN coverage, honestly. But it's probably unflattering to somebody.
(Included in that relatively simple explanation would be some sort of coup attempt that has subsequently failed. Those things happen. I'm not saying whatever plan is being enacted is going off without a hitch. I'm just saying there may well be an internal explanation that is still much simpler than the external gyrations would suggest.)
"it would take an extraordinary mind to foresee all the possible twists."
How far along were they on GPT-5?
> it would take an extraordinary mind
They could've asked ChatGPT for hints.
In hindsight firing Sam was a self-destructing gamble by the OpenAI board. Initially it seemed Sam may have committed some inexcusable financial crime but doesn't look so anymore.
The irony is that if a significant portion of OpenAI staff opt to join Microsoft, then Microsoft essentially killed its own $13B investment in OpenAI from earlier this year. Better than acquiring for $80B+, I suppose.
>, then Microsoft essentially killed their own $13B investment in OpenAI earlier this year.
For investment deals of that magnitude, Microsoft probably did not literally wire all $13 billion to OpenAI's bank account the day the deal was announced.
More likely, the $10B-to-$13B headline-grabbing number is a total estimated figure that represents a sum of future incremental investments (and Azure usage credits, etc.) based on agreed performance milestones from OpenAI.
So, if OpenAI doesn't achieve certain milestones (which can be more difficult if a bunch of their employees defect and follow Sam & Greg out the door) ... then Microsoft doesn't really "lose $10b".
Msft/Amazon/Google would light 13 billion on fire to acquire OpenAI in a heartbeat.
(but also a good chunk of the 13bn was pre-committed Azure compute credits, which kind of flow back to the company anyway).
There's acquihires and then I guess there's acquifishing where you just gut the company you're after like a fish and hire away everyone without bothering to buy the company. There's probably a better portmanteau. I seriously doubt Microsoft is going to make people whole by granting equivalent RSUs, so you have to wonder what else is going on that so many seem ready to just up and leave some very large potential paydays.
I feel like that's giving them too much credit; this is more of a flukuisition. Being in the right place at the right time when your acquisition target implodes.
How about: acquimire
one thing for sure this is one hell of a quagmire /s
They acquired Activision for 69B recently.
While Activision makes much more money, I imagine, acquiring a whole division of productive, _loyal_ staffers who work well together on something as important as AI is cheap at $13B.
Some background: https://sl.bing.net/dEMu3xBWZDE
If the change in $MSFT pre-open market cap (which has given up its gains at the time of writing, but still) of hundreds of billions of dollars is anything to go by, shareholders probably see this as spending a dime to get a dollar.
Awesome point. Microsoft's market cap went up to $2.8 trillion today, a gain of $44.68 billion.
> In hindsight firing Sam was a self-destructing gamble by the OpenAI board
Surely the really self-destructive gamble was hiring him? He's a venture capitalist with weird beliefs about AI and privacy; why would it be a good idea to put him in charge of a notional non-profit that was trying to safely advance the state of the art in artificial intelligence?
> Frankly it's looking like Microsoft is going to come out of this better than anyone
Sounds like that's what someone wants and is trying to obfuscate what's going on behind the scenes.
If Windows 11 shows us anything about Microsoft's monopolistic behavior, having them be the ring of power for LLMs makes the future of humanity look very bleak.
I think the board needs to come clean on why they fired Sam Altman if they are going to weather this storm.
Altman is already gone; if they fired him without a good reason, they are already toast.
They might not be able to if the legal department is involved. Both in the case of maybe-pending legal issues, and because even rich people get employment protections that make companies wary about giving reasons.
"Even rich people?" - especially rich people, as they are the ones who can afford to use laws to protect themselves.
Using your same rhetoric and attitude: please outline exactly what language I used that was so offensive to you.
> it's looking like Microsoft is going to come out of this better than anyone
Didn't follow this closely, but isn't that implicitly what an ex-CEO could have possibly been accused of, i.e. not acting in the company's best interest but someone else's? Not unprecedented either, e.g. the case of Nokia/Elop.
But is the door open to every one of the 500 staff? That is a lot, and Microsoft may not need them all.
That's because they're the only adult in the room and mature company with mature management. Boring, I know. But sometimes experience actually pays off.
“Employees” probably means “engineers” in this case. Which is a wide majority of OpenAI staff, I’m sure.
I'm assuming it's a combination of researchers, data scientists, mlops engineers, and developers. There are a lot of different areas of expertise that come into building these models.
We’re seeing our generation’s “traitorous eight” story play out [1]. If this creates a sea of AI start-ups, competing and exploring different approaches, it could be invigorating on many levels.
[1] https://www.pbs.org/transistor/background1/corgs/fairchild.h...
How would that work, economically?
Wasn't a key enabler of early transistor work that the required capital investment was modest?
SotA AI research seems to be well past that point.
> Wasn't a key enabler of early transistor work that the required capital investment was modest?
They were simple in principle but expensive at scale. Sounds like LLMs.
Is there SotA LLM research not at scale?
My understanding was that practical results were indicating your model has to be pretty large before you start getting "magic."
It really depends on what you're researching. Rad AI started with only $4M in investment and used that to make cutting-edge LLMs that are now in use by something like half the radiologists in the US. Frankly, putting some cost pressure on researchers may end up creating more efficient models and techniques.
NN/AI concepts have been around for a while. It's just that computers had not been fast enough to make them practical. It was also harder to get capital back then. Those guys put the silicon in Silicon Valley.
Doesn't it look like the complete opposite is going to happen though?
Microsoft gobbles up all talent from OpenAI as they just gave everyone a position.
So we went from "Faux NGO" to, "For profit", to "100% Closed".
> Doesn't it look like the complete opposite is going to happen though?
Going from OpenAI to Microsoft means ceding the upside: nobody besides maybe Altman will make fuck-you money there.
I'm also not sure, as some in Silicon Valley seem to be, that this is antitrust-proof. So moving to Microsoft not only means less upside, but also fun in depositions for a few years.
Ha! One of my all-time favourites, the fuck-you position. The Gambler, the uncle giving advice:
You get up two and a half million dollars, any asshole in the world knows what to do: you get a house with a 25 year roof, an indestructible Jap-economy shitbox, you put the rest into the system at three to five percent to pay your taxes and that's your base, get me? That's your fortress of fucking solitude. That puts you, for the rest of your life, at a level of fuck you.
I haven’t seen the movie, but it seems like Uncle Frank and I would get along just fine.
No. OpenAI employees do not have traditional equity in the form of RSUs or Options. They have a weird profit-sharing arrangement in a company whose board is apparently not interested in making profits.
Employee equity (and all investments) are capped at 100x, which is still potentially a hefty payday. The whole point of the structure was to enable competitive employee comp.
Fuck you money was always a lottery ticket based on OpenAI's governance structure and "promises of potential future profit." That lottery ticket no longer exists, and no one else is going to provide it after seeing how the board treated their relationship with Microsoft and that $10B investment. This is a fine lifeboat for anyone who wants to continue on the path they were on with adults at the helm.
What might have been tens or hundreds of millions in common stakeholder equity gains will likely be single digit millions, but at least much more likely to materialize (as Microsoft RSUs).
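(Aside: here's a minimal sketch of how that 100x cap plays out, assuming it simply limits the gross return multiple on a stake as described above; all dollar figures are made up.)

    # Hypothetical illustration of a 100x-capped profit-participation stake.
    # Assumes the cap simply limits the gross return multiple; real OpenAI
    # PPU mechanics are more complicated and not public.
    def capped_payout(basis: float, gross_multiple: float, cap: float = 100.0) -> float:
        """Payout on `basis` dollars at a given gross multiple, limited to `cap`x."""
        return basis * min(gross_multiple, cap)

    basis = 100_000.0  # made-up stake
    for multiple in (10, 100, 1_000):
        print(f"{multiple:>5}x gross -> ${capped_payout(basis, multiple):,.0f}")
    # 10x -> $1,000,000; 100x -> $10,000,000; 1,000x -> still $10,000,000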
If I weren't so averse to conspiracy theories, I would think that this is all a big "coup" by Microsoft: Ilya conspired with Microsoft and Altman to get him fired by the board, just to make it easy for Microsoft to hire him back without fear of retaliation, along with all the engineers who would join him in the process.
Then, Ilya would apologize publicly for "making a huge mistake" and, after some period, would join Microsoft as well, effectively robbing OpenAI of everything of value. The motive? Unlocking the full financial potential of ChatGPT, which was until then locked down by the non-profit nature of its owner.
Of course, in this context, the $10 billion deal between Microsoft and OpenAI is part of the scheme, especially the part where Microsoft has full rights over ChatGPT IP, so that they can just fork the whole codebase and take it from there, leaving OpenAI in the dust.
But no, that's not possible.
No, I don’t think there’s any grand conspiracy, but certainly MS was interested in leapfrogging Google by capturing the value from OpenAI from day one. As things began to fall apart there MS had vast amounts of money to throw at people to bring them into alignment. The idea of a buyout was probably on the table from day one, but not possible till now.
If there’s a warning, it’s to be very careful when choosing your partners and giving them enormous leverage on you.
Sometimes you win and sometimes you learn. I think in this case MS is winning.
Conspiracy theories that involve reptilian overlords and ancient aliens are suspect. Conspiracy theories that involve collusion to make massive amounts of money are expected and should be treated as the most likely scenario. Occam's razor does not apply to human behavior, as humans will do the most twisted things to gain power and wealth.
My theory of what happened is identical to yours, and frankly it's one of the only theories that makes any sense. Everything else points to these people being mentally ill and irrational, and their technical and monetary success does not point to that. It would be absurd to think they clown-showed themselves into billions of dollars.
Why would they be afraid of retaliation? They didn't sign sports contracts, they can just resign anytime, no? That just seems to overcomplicate things.
I mean, I don't actually believe this. But I am reminded of 2016 when the Turkish president headed off a "coup" and cemented his power.
More likely, this is a case of not letting a good crisis go to waste. I feel the board was probably watching their control over OpenAI slip away into the hands of Altman. They probably recognized that they had a shrinking window to refocus the company along lines they felt were in the spirit of the original non-profit charter.
However, it seems that they completely misjudged the feelings of their employees as well as the PR ability of Altman. No matter how many employees would actually prefer the original charter, social pressure is going to cause most employees to go with the crowd. The media is literally counting names at this point. People will notice those who don't sign; it's almost like a loyalty pledge.
However, Ilya's role in all of this remains a mystery. Why did he vote to oust Altman and Brockman? Why has he now recanted? That is a bigger mystery to me than why the board took this action in the first place.
Will revisit this in a couple months.
Yeah, there's no way this is a plan, but for sure this works out nicely.
Ilya posted this on Twitter:
"I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."
Trying to put the toothpaste back in the tube. I seriously doubt this will work out for him. He has to be the smartest stupid person that the world has seen.
Ilya is hard to replace, and no one thinks of him as a political animal. He's a researcher first and foremost. I don't think he needs anything more than being contrite for a single decision made during a heated meeting. Sam Altman and the rest of the leadership team haven't got where they are by holding petty grudges.
He doesn't owe us, the public, anything, but I would love to understand his point of view during the whole thing. I really appreciate how he is careful with words and thorough when exposing his reasoning.
Just because he's not a political animal doesn't mean he's immune to politics. I've seen 'irreplaceable' apolitical technical leaders be the reason for schisms in organizations, thinking they could leverage their technical knowledge over the rest of the company, only to watch them get pushed aside and out.
Fair - hopefully an unintentional political move, but a big political miscalculation.
For someone who isn't a political animal he made some pretty powerful political moves.
Researchers and academics are political within their organizations regardless of whether or not they claim to be or are aware of it.
Ignorance of one's political impact/influence is not a strength but a weakness, just like a baby holding a laser/gun.
I've worked with this type multiple times. Mathematical geniuses with very little grasp of reality, easily manipulated into doing all sorts of dumb mistakes. I don't know if that's the case, but it certainly smells like it.
His post previous to that seems pretty ironic in that light - https://twitter.com/ilyasut/status/1710462485411561808
He seriously underestimated how much rank-and-file employees want $$$ over an idealistic vision (and Sam Altman is $$$), but if he backs down now, he will pretty much lose all credibility as a decision maker for the company.
If your compensation goes from 600k to 200k, you would care as well.
No idealistic vision can compensate for that.
Hey, I would also be mad if I were in the rank-and-file employee position. Perhaps the non-profit thing needs to be thought out a bit more.
Does that include the person who stole self-driving IP from Waymo, set up a company with stolen IP, and tried to sell the company to Uber?
At least he consistently works towards whatever he currently believes in. Though he could work on consistency in beliefs.
That seems rather harsh. We know he's not stupid, and you're clearly being emotional. I'd venture he probably made the dumbest possible move a smart person could make while also in a very emotional state. The lesson on the table for everyone is that making big decisions in an emotional state does not often work out well.
So this was a completely unnecessary cock-up -- still ongoing. Without Ilya's vote this would not even be a thing. This is really comical, a Naked Gun type of mess.
Ilya Sutskever is one of the best in AI research, but everything he and others do related to AI alignment turns into shit without substance.
It makes me wonder if AI alignment is possible even in theory, and if it is, maybe it's a bad idea.
We can’t even get people aligned. Thinking we can control a super intelligence seems kind of silly.
I always thought it was the opposite. The different entities in a society are frequently misaligned, yet societies regularly persist beyond the span of any single person.
Companies in a capitalist system are explicitly misaligned with each other; the success of the individual within a company is misaligned with the success of the company whenever it grows large enough. Parties within an electoral system are misaligned with each other; the individual is often more aligned with a third party, yet the less-aligned two-party system frequently rules. The three pillars of democratic government (executive, legislative, judicial) are said to exist for the sake of being misaligned with each other.
So AI agents, potentially more powerful than the individual human, might be misaligned with the broader interests of society (or of its human individuals). So are you and I and every other entity: why is this instance of misalignment worrisome to any disproportionate degree?
>"I deeply regret my participation in the board's actions."
Wasn't he supposed to be the instigator? That makes it sound like he was playing a less active role than claimed.
It takes a lot of courage to do so after all this.
I think the word you're looking for is "fear".
Maybe he'll head to Apple.
Or a couple of drinks.
To be fair, lots of people called this pretty early on; it's just that very few people were paying attention, and instead chose to accommodate the spin and immediately went into "following the money", a.k.a. blaming Microsoft, et al. The most surprising aspect of it all is the complete lack of criticism towards US authorities! We were shown this exciting play, as old as the world itself: a genius scientist being exploited politically by means of pride and envy.
The brave board of "totally independent" NGO patriots (one of whom is referred to, by insiders, as wielding influence comparable to a USAF colonel [1]) brand themselves as the new regime that will return OpenAI to its former moral and ethical glory, so the first thing they were forced to do was get rid of the main greedy capitalist, Altman; he's obviously the great seducer who brought their blameless organisation down by turning it into this horrible money-making machine. So they were going to put in his place their nominal ideological leader, Sutskever, commonly referred to in various public communications as a "true believer". What does he believe in? In the coming of a literal superpower, and quite a particular one at that; in this case we are talking about AGI. The belief structure here is remarkably interlinked, and this can be seen by evaluating side-channel discourse from adjacent "believers", see [2].
Roughly speaking, and based on my experience in this kind of analysis (please give me some leeway, as English is not my native language), what I see are all the infallible markers of operative work; we see security officers, we see their methods of work. If you are a hammer, everything around you looks like a nail. If you are an officer in the Clandestine Service, or in any of the dozens of sections across the counterintelligence function overseeing the IT sector, then you clearly understand that all these AI startups are, in fact, developing weapons and pose a direct threat to the strategic interests slash national security of the United States. The American security apparatus has a word they use to describe such elements: "terrorist." I was taught to look up when assessing the actions of the Americans, i.e. more often than not we expect nothing but the highest level of professionalism, leadership, and analytical prowess. I personally struggle to see how running parasitic virtual organisations in the middle of downtown San Francisco and re-shuffling agent networks in key AI enterprises as blatantly as we saw over the weekend is supposed to inspire confidence. Thus, in a tech startup in the middle of San Francisco, where it would seem there shouldn't be any terrorists, or otherwise ideologues in orange rags, they sit on boards and stage palace coups. Horrible!
I believe that US state-side counterintelligence shouldn't meddle in natural business processes in the US, and should instead make its policy on this stuff crystal clear using normal, legal means. Let's put a stop to this soldier mindset where you fear anything you can't understand. AI is not a weapon, and AI startups are not terrorist cells for them to run.
[1]: https://news.ycombinator.com/item?id=38330819
[2]: https://nitter.net/jeremyphoward/status/1725712220955586899
Silicon Valley outsider here. Am I being too harsh?
I just bothered to look at the full OpenAI board composition. Besides Ilya Sutskever and Greg Brockman, why are these people eligible to be on the OpenAI board? Such young people, calling themselves "President of this", "Director of that".
- Adam D'Angelo — Quora CEO (no clue what he's doing on OpenAI board)
- Tasha McCauley — a "management scientist" (this is a new term for me); whatever that means
- Helen Toner — I don't know what exactly she does, again, "something-something Director of strategy" at Georgetown University, for such a young person
No wise veterans here to temper the adrenaline?
Edit: the term clusterf*** comes to mind here.
Adam D'Angelo was brought in as a friend because Sam Altman led Quora's Series D around the time OpenAI was founded, and he is a board member of Dustin Moskovitz's Asana.
Dustin Moskovitz isn't on the board but gave OpenAI $30M in funding via his non-profit Open Philanthropy [0].
Tasha McCauley was probably brought in due to the Singularity University/Kurzweil types who were at OpenAI in the beginning. She was also in the Open Philanthropy space.
Helen Toner was probably brought in due to her past work at Open Philanthropy (a Dustin Moskovitz funded non-profit working on building OpenAI-type initiatives), and was also close to Sam Altman. They also gave OpenAI the initial $30M [0].
Essentially, this is a donor-versus-investor battle. The donors aren't gonna make money off OpenAI's commercial endeavors that began in 2019.
It's similar to Elon Musk's annoyance at OpenAI going commercial even though he donated millions.
[0] - https://www.openphilanthropy.org/grants/openai-general-suppo...
Thank you for the context; much appreciate it. In short, it's all "I know a guy who knows a guy".
Exactly this. I saw another commenter raise this point about Tasha (and Helen, if I remember correctly) noting that her LinkedIn profile is filled with SV-related jargon and indulge-the-wife thinktanks but without any real experience taking products to market or scaling up technology companies.
Given the pool of talent they could have chosen from their board makeup looks extremely poor.
> indulge-the-wife thinktanks
Regardless of context, this is an incredibly demeaning comment. Shame on you
It doesn't have to be taken that way. It's a pretty accurate description.
It’s not demeaning if it’s accurate. It’s part of the hubris that makes SV what it is. Ego tripping galore.
Truth hurts sometimes, eh?
Helen Toner funded OpenAI with $30M, which was enough to get a board seat at the time.
Source? Where did that money come from?
From Open Philanthropy - a Dustin Moskovitz funded non-profit working on building OpenAI type initiatives. They also gave OpenAI the initial $30M. She was their observer.
https://www.openphilanthropy.org/grants/openai-general-suppo...
The board previously had people like Elon Musk and Reid Hoffman. Greg Brockman was part of the board until he was ousted as well.
The attrition of industry business leaders, the ouster of Greg Brockman, and the (temporary, apparently) flipping of Ilya combined to give the short list of remaining board members outsized influence. They took this opportunity to drop a nuclear bomb on the company's leadership, which so far has backfired spectacularly. Even their first interim CEO had to be replaced already.
This is Silicon Valley's boys' club, itself an extension of the Stanford boys' club.
"Meritocracy" is a very impolite word in these circles.
You can like D'Angelo or not but he was the CTO of Facebook.
I woke up and the first thing on my mind was, "Any update on the drama?"
Did not expect to see this whole thing still escalating! WOW! What a power move by MSFT.
I'm not even sure OpenAI will exist by the end of the week at this rate. Holy moly.
By the end of the week is over-optimistic. The last 3 days have felt like a million years. I bet the company will be gone by the time Emmett Shear wakes up.
Is this final stages of the singularity?
It's not over until the last stone involved in the avalanche stops moving and it is anybody's guess right now what the final configuration will be.
But don't be surprised if Shear also walks before the week is out, if some board members resign but others try to hold on and if half of OpenAI's staff ends up at Microsoft.
Seems more damage control than power move. I'm sure their first choice was to reinstate Altman and get more control over OpenAI governance. What they've achieved here is temporarily neutralizing Altman/Brockman from starting a competitor, at the cost of potentially destroying OpenAI (whom they remain dependent on for the next couple of years) if too many people quit.
Seems a bit of a lose-lose for MSFT and OpenAI, even if best that MSFT could do to contain the situation. Competitors must be happy.
Disagree. MSFT extending an open invitation to all OpenAI employees to work under sama at a subsidiary of MSFT sounds to me like it'll work well for them. They'll get 80% of OpenAI for negative money - assuming they ultimately don't need to pay out the full $10B in cloud compute credits.
Competitors should be fearful. OpenAI was executing with weights around their ankles by virtue of trying to run as a weird "need lots of money but can't make a profit" company. Now they'll be fully bankrolled by one of the largest companies the world has ever seen and empowered by a whole bunch of hypermotivated-through-retribution leaders.
AFAIK MSFT/Altman can't just fork GPT-N and continue uninterrupted. All MSFT has rights to is weights and source code - not the critical (and slow to recreate) human-created and curated training data, or any of the development software infrastructure that OpenAI has built.
The leaders may be motivated by retribution, but I'm sure none of the leaders or researchers really want to be a division of MSFT rather than a cool start-up. Many developers may choose to stay in SF and create their own startups, or join others. Signing the letter isn't a commitment to go to MSFT - just a way to pressure for a return to the status quo they were happy with.
Not everyone is going to stay with OpenAI or move to MSFT - some developers will move elsewhere and the knowledge of OpenAI's secret sauce will spread.
I'm cancelling my Netflix subscription, I don't need it.
But boy will I renew it when this gets dramatized as a limited series.
This is some Succession-level shenanigans going on here.
Jesse Eisenberg to play Altman this time around?
I'm thinking more like "24"
Can we have a quick moment of silence for Matt Levine? Between Friday afternoon and right now, he has probably had to rewrite today's Money Stuff column at least 5 or 6 times.
"Except that there is a post-credits scene in this sci-fi movie where Altman shows up for his first day of work at Microsoft with a box of his personal effects, and the box starts glowing and chuckles ominously. And in the sequel, six months later, he builds Microsoft God in Box, we are all enslaved by robots, the nonprofit board is like “we told you so,” and the godlike AI is like “ahahaha you fools, you trusted in the formalities of corporate governance, I outwitted you easily!” If your main worry is that Sam Altman is going to build a rogue AI unless he is checked by a nonprofit board, this weekend’s events did not improve matters!"
Reading Matt Levine is such a joy.
Didn't he say last week that he was taking Friday off? The day before his bête noire Elon Musk got into another brouhaha and OpenAI blew up?
I think he said once that there's an ETF that trades on when he takes vacations, because they keep coinciding with Events Of Note.
He takes every Friday off
Deservedly or not, Satya Nadella will look like a genius in the aftermath. He has and will continue to leverage this situation to strengthen MSFT's position. Is there word of any other competitors attempting to capitalize here? Trying to poach talent? Anything...
After Ballmer I couldn’t have imagined such competency from Microsoft.
After Ballmer, competency can only be higher at Microsoft.
Ballmer honestly wasn't that bad. He gave executive backing to Azure and the larger Infra push in general at MSFT.
Search and Business Tools were misses, but they more than made up for it with Cloud, Infra, and Security.
Also, Nadella was Ballmer's pick.
Look at the OS and text editor markets today. They aren't growth markets and haven't been since the 2000s at the latest. He made the right call to ignore their core products in return for more concentration on Infra, B2B SaaS, Security, and (as you mentioned) Entertainment.
Customers are sticky and MSFT had a strong channel sales and enterprise sales org. Who cares if the product is shit if there are enough goodies to maintain inertia.
Spending billions on markets that will grow into 10s or 100s of Billions is a better bet than billions on a stagnant market.
> he was hands-off on existing products in a way that Bill Gates wasn't
Ballmer had an actual business education, and was able to execute on scaling. I'm sure Bill loves him too now that Ballmer's protege almost 15Xed MSFT stock.
And sometimes the company is succeeding in spite of you and the moment you're out the door and people aren't worried about losing their job over arbitrary metrics they can finally show off what they're really capable of.
Also, Nadella last month said he regretted his own decision to cancel Windows Phone. Purchasing Nokia was one of the last things Ballmer did.
The key line:
“Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join.”
I think everyone assumed this was an acqui-hire without the "acqui-" but this is the first time I've seen it explicitly stated.
hostile takeunder?
Love it. Could also be called a hostile giveover, considering the OpenAI board gifted this opportunity to Microsoft
That's perfect.
You win
will they stay though? what happens to their OAI options?
Will their OAI options be worth anything if the implosion continues?
I don’t believe startups can have successful exits without extraordinary leadership (which the current board can never find). The people quitting are simply jumping off a sinking ship.
What will happen to their newly granted MSFT shares? Those can be sold _today_ and might be worth a lot more soon…
MSFT RSUs actually have value as opposed to OpenAI’s Profit Participation Units (PPU).
https://www.levels.fyi/blog/openai-compensation.html
https://images.openai.com/blob/142770fb-3df2-45d9-9ee3-7aa06...
Sounds a lot like MS wants to have OpenAI but without a board that considers pesky things like morals.
Time for a counter-counter-coup that ends up with Microsoft under the Linux Foundation after RMS reveals he is Satoshi...
You mean the GNU Linux Foundation?
RMS (I assume Richard Stallman) may be many many many things, but setting up a global pyramid scheme doesn't seem to be his M.O.
But stranger things have happened. One day I may be very very VERY surprised.
how would you define an asset that has zero intrinsic value other than the value people have already committed to it? house of cards?
The year of the Linux Microsoft.
again, nobody has shown even a glimmer of the board operating with morality being their focus. we just don't know. we do know that a vast majority of the company doesn't trust the board though.
Sam just gave 3 hearts to Ilya as well... I hope the drama continues and he joins MS at this point.
Whose morals again?
That is a spectacular power move: extending 700 job offers, many of which would be close to $1 million per year compensation.
They didn’t say anything about the compensation.
So essentially, OpenAI is a sinking ship as long as the board members go ahead with their new CEO and Sam and Greg are not returning.
Microsoft can absorb all the employees and move them into the new AI subsidiary, which is basically an acqui-hire without buying out everyone else's shares: a new DeepMind/OpenAI-style research division inside the company.
So all along it was a long winded side-step into having a new AI division without all the regulatory headaches of a formal acquisition.
> OpenAI is a sinking ship as long as the board members go ahead with their new CEO and Sam, Greg are not returning
Far from certain. One, they still control a lot of money and cloud credits. Two, they can credibly threaten to license to a competitor or even open source everything, thereby destroying the unique value of the work.
> without all the regulatory headaches of a formal acquisition
This, too, is far from certain.
>Far from certain. One, they still control a lot of money and cloud credits.
This too is far from certain. The funding and credits were at best tied to milestones, and at worst, the investment contract is already broken and msft can walk.
I suspect they would not actually do the latter, and the IP is tied to the continuing partnership.
And sue for the assets of OpenAI on account of the damage the board did to their stock... and end up with all of the IP.
On what basis would one entity be held responsible for another entity’s stock price, without evidence of fraud? Especially a non profit.
The value of OpenAI's own assets in the for-profit subsidiary may drop due to recent events.
Microsoft is a substantial shareholder (49%) in that for-profit subsidiary, so the value of Microsoft's asset has presumably been reduced by the OpenAI board's decisions.
OpenAI's board decisions which resulted in these events appear to have been improperly conducted: two of the board's members weren't aware of its deliberations or the outcome until the last minute, notably the chair of the board. A board's decisions have legal weight because they are collective. It's allowed to patch them up afterwards if the board agrees, for people to take breaks, etc. But if some directors intentionally excluded other directors from such a major decision (and formal deliberations), affecting the value and future of the company, that leaves the board's decision open to legal challenges.
Hypothetically Microsoft could sue and offer to settle. Then OpenAI might not have enough funds if it were to lose, so it might have to sell shares in the for-profit subsidiary, or transfer them. Microsoft only needs about 2% more to become majority shareholder of the for-profit subsidiary, which runs the ChatGPT services.
[delayed]
If Microsoft emerges as the "winner" from all of this then I think we are all the "losers". Not that I think OpenAI was perfect or "good", just that MS taking the cake is not good for the rest of us. It already feels crazy that people are just fine with them owning what they do and how important it is to our development ecosystem (talking about things like GitHub/VSCode); I don't like the idea of them also owning the biggest AI initiative.
I will never not be mad at the fact that they built a developer base by making all their tech open source, only to take it all away once it became remotely financially viable to do so. With how close "Open"AI is with Microsoft, it really does not seem like there is a functional difference in how they ethically approach AI at all.
Ilya signed it??? He's on the board... This whole thing is such an implosion of ambition.
Most people who sympathized with the Board prior to this would have assumed that the presumed culprit, the legendary Ilya, has thought through everything and is ready to sacrifice anything for a course he champions. It appears that is not the case.
I think he orchestrated the coup on principle, but severely underestimated the backlash and power that other people had collectively.
Now he’s trying to save his own skin. Sam will probably take him back on his own technical merits but definitely not in any position of power anymore
When you play the game of thrones, you win or you die
Just because you are a genius in one domain does not mean you are in another
What’s funny is that everyone initially “accepted” the firing. But no one liked it. Then a few people (like Greg) started voting with their feet, which empowered others, which has culminated in this tidal shift.
It will make a fascinating case study some day on how not to fire your CEO
he even posted an apology: https://x.com/ilyasut/status/1726590052392956028?s=20
what the actual fuck =O
I knew it was Joseph Gordon-Levitt's plot all along!
I don't know if you are joking or not, but one of the board members is Joseph Gordon-Levitt's wife.
(yes that was the joke)
I'm going to take a leap of intuition and say all roads lead back to Adam D'Angelo for the coup attempt.
> all roads lead back to Adam D'Angelo
Maybe someone thinks Sam was “not consistently candid” about mentioning that one of the feature bullets in the latest release was dropping d'Angelo's Poe directly into the ChatGPT app for no additional charge.
Given the dev day timing and the update releasing these "GPTs", this is an entirely plausible timeline.
https://techcrunch.com/2023/04/10/poes-ai-chatbot-app-now-le...
They did not expect Microsoft to take everything and walk away, and did not realize how little pull they actually had.
If you made a comment recently about de jure vs de facto power, step forward and collect your prize.
https://news.ycombinator.com/item?id=38338096
What do I win? Hahaha.
You come at the king, you best not miss. If you do, make sure to apologize on Twitter while you can.
Naive is too soft a word. How can you be so smart and so out of touch at the same time?
IQ and EQ are different things. Some people are technically smart enough to know a trillion side effects of technical systems, but can be really bad/binary/shallow at knowing the second-order effects of human dynamics.
Ilya's role is Chief Scientist. It may be fair to give at least some benefit of the doubt. He was vocal/direct/binary, and he also vocally apologized and walked it back. In human dynamics – I'd usually look for the silent orchestrator behind the scenes that nobody talks about.
I'm fine with all that in principle, but then you shouldn't be throwing your weight around in board meetings; you probably shouldn't be on the board to begin with, because it is a handicap in trying to evaluate the potential outcome of the decisions the board has to make.
I don't think this is necessarily about different categories of intelligence... Politicking and socializing are skills that require time and mental energy to build, and can even atrophy. If you spend all your time worrying about technical things, you won't have as much time to build or maintain those skills. It seems to me like IQ and EQ are more fundamental and immutable than that, but maybe I'm making a distinction where there isn't much of one.
[dead]
Specialized learning and focus often comes at the cost of generalized learning and focus. It's not zero sum, but there is competition between interests in any person's mind.
in my experience these things will typically go hand in hand. There is also an argument to be made that being smart at building ML models and being smart in literally anything else have nothing to do with each other.
Not claiming to know anything about any persons differences or commenting about that in any way.
Wow, lots of drama and plot twists for the writers of the Netflix mini-series.
The great drama of our time (this week)
I don't think I have seen a bigger U-turn
I was looking down the list and then saw Ilya. Just when you think this whole ordeal can't get any more insane.
Yeah, what the hell?
Do we know why Murati was replaced?
Apparently she tried to rehire Sam and Greg.
I don't think she actually had anything to do with the coup, she was only slightly less blindsided than everyone else.
To be fair, that is a stupid first move to make as the CEO who was just hired to replace the person deposed by the board. (Though I’m still confused about Ilya’s position.)
It's a lot easier to sign a petition than actually walk away from a presumably well-paying job in a somewhat weak tech job market. People assuming everyone can just traipse into a $1m/year role at Microsoft is smoking some really good stuff.
If you know the company will implode and you'll be CEO of a shell, it is better to get the board to reverse course. It isn't like she was part of the decision-making process
With nearly the entire team of engineers threatening to leave the company over the coup, was it a stupid move?
The board is going to be overseeing a company of 10 people as things are going.
But wouldn’t the coup have required 4 votes out of 6 which means she voted yes? If not then the coup was executed by just 3 board members? I’m confused.
Mira isn't on the board, so she didn't have a vote in this.
Generally speaking, 4 members is the minimum quorum for a board of 6, and 3 out of 4 is a majority decision.
I don't know if it was 3 or 4 in the end, but it may very well have been possible with just 3.
Murati is/was not a board member.
I heard it was because she tried to hire Sam and Greg back.
So who's against it, and why?
I wonder if it will take 20 years to learn the whole story.
The amount that's leaked out already - over a weekend - makes me think we'll know the full details of everything within a few days.
The dude is a quack.
I think the names listed are the recipients of the letter (the board), not the signers.
There’s only 4 people on the board.
I think it was Mark Zuckerberg that described (pre-Elon) Twitter as a clown car that fell into a gold mine.
Reminds me a bit of the Open AI board. Most of them I'd never heard of either.
This makes the old twitter look like the Wehrmacht in comparison.
The old twitter did not decide to randomly detonate themselves when they were worth $80 billion. In fact they found a sucker to sell to, right before the market crashed on perpetually loss-making companies like twitter.
The benefit of having incentive-aligned board, founders, and execs.
Even the clown car isn't this bad.
That's a confused heuristic. It could just as easily mean they keep their heads down and do good work for the kind of people whose attention actually matters for their future employment prospects.
I often hear that about the OpenAI board, but in general, do people here know most board members of the big/darling tech companies? Outside of some of the co-founders I don't know anyone.
I don't mean I know them personally, but they don't seem to be major names in the manner of (as you see down thread) the Google Founders bringing in Eric Schmidt.
They seem more like the sort of people you'd see running wikimedia.
I meant "know" in the sense you used "heard".
Perhaps we can stop pretending that some of these people who are top-level managers or who sit on boards are prodigies. Dig deeper and there is very little there - just someone who can afford to fail until they drive the clown car into that gold mine. Most of us who have to put food on the table and pay rent have much less room for error.
You know, this makes early Google's moves around its IPO look like genius in retrospect. In that case, brilliant but inexperienced founders majorly lucked out with the thing created... but were also smart enough to bring in Eric Schmidt and others with deeper tech industry business experience for "adult supervision" exactly in order to deal with this kind of thing. And they gave tutelage to L&S to help them establish sane corporate practices while still sticking to the original (at the time unorthodox) values that L&S had in mind.
For OpenAI... Altman (and formerly Musk) were not that adult supervision. Nor is the board they ended up with. They needed some people on that board and in the company to keep things sane while cherishing the (supposed) original vision.
(Now, of course that original Google vision is just laughable as Sundar and Ruth have completely eviscerated what was left of it, but whatever)
> but were also smart enough to bring in Eric Schmidt and others with deeper tech industry business experience for "adult supervision"
> (Now, of course that original Google vision is just laughable as Sundar and Ruth have completely eviscerated what was left of it, but whatever)
Those two things happening one after another is not coincidence.
I'm not sure I agree. Having worked there through this transition I'd say this: L&S just seem to have lost interest in running a mature company, so their "vision" just meant nothing, Eric Schmidt basically moved on, and then after flailing about for a bit (the G+ stuff being the worst of it) they just handed the reins to Ruth&Sundar to basically turn into a giant stock price pumping machine.
G+ was handled so poorly, and the worst of it was that they already had both Google Wave (in the US) and Orkut (mostly outside US) which both had significant traction and could’ve easily been massaged into something to rival Facebook.
Easily…anywhere except at a megacorp where a privacy review takes months and you can expect to make about a quarter's worth of progress a year.
All successful companies succeed despite themselves.
Working in consultancies/agencies for the last 15 years, I see this time and time again. Fucking dart-throwing monkeys making money hand over fist despite their best intentions to lose it all.
I don't really understand why the workforce is swinging unambiguously behind Altman. The core of the narrative thus far is that the board fired Altman on the grounds that he was prioritising commercialisation over the not-for-profit mission of OpenAI written into the organisation's charter.[1] Given that Sam has since joined Microsoft, that seems plausible, on its face.
The board may have been incompetent and shortsighted. Perhaps they should even try and bring Altman back, and reform themselves out of existence. But why would the vast majority of the workforce back an open letter failing to signal where they stand on the crucial issue - on the purpose of OpenAI and their collective work? Given the stakes which the AI community likes to claim are at issue in the development of AGI, that strikes me as strange and concerning.
> I don't really understand why the workforce is swinging unambiguously behind Altman.
Maybe it has to do with them wanting to get rich by selling their shares - my understanding is there was an ongoing process to get that happening [1].
If Altman is out of the picture, it looks like Microsoft will assimilate a lot of OpenAI into a separate organisation and OpenAI's shares might become worthless.
[1] https://www.financemagnates.com/fintech/openai-in-talks-to-s...
Yeah, "OpenAI employees would actually prefer to make lots of money now" seems like a plausible answer by default.
It's easy to be a true believer in the mission _before_ all the money is on the table...
My estimate is that a typical staff engineer who'd been at OpenAI for 2+ years could have sold $8 million of stock next month. I'd be pissed too.
No way it is this much.
Yep.
What people don't realize is that Microsoft doesn't own the data or models that OpenAI has today. Yeah, they can poach all the talent, but it still takes an enormous amount of effort to create the dataset and train the models the way OpenAI has done it.
Recreating what OpenAI has done over at Microsoft will be nothing short of a herculean effort and I can't see it materializing the way people think it will.
Except MSFT does have access to the IP, and MSFT has access to an enormous trove of their own data across their office suite, Bing, etc. It could be a running start rather than a cold start. A fork of OpenAI inside an unapologetic for profit entity, without the shackles of the weird board structure.
Microsoft has full access to code and weights as part of their deal.
Even if they don't, the OpenAI staff already know 99 ways to not make a good GPT model and can therefore skip those experiments much faster than anyone else.
> Even if they don't, the OpenAI staff already know 99 ways to not make a good GPT model and can therefore skip those experiments much faster than anyone else.
This unequivocally .... knowing not how to waste a very expensive training run is a great lesson
https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...
> Some researchers at Microsoft gripe about the restricted access to OpenAI’s technology. While a select few teams inside Microsoft get access to the model’s inner workings like its code base and model weights, the majority of the company’s teams don’t, said the people familiar with the matter.
This comment is factually incorrect. As part of the deal with OpenAI, Microsoft has access to all of the IP, model weights, etc.
Correct. This is all really bad for Microsoft and probably great for Google. Yet, judging by price changes right now, markets don’t seem to understand this.
But doesn't Altman joining Microsoft, and them quitting and following, put them back at square 0? MS isn't going to give them millions of dollars each to join them.
That's why they'd rather Altman rejoins OpenAI as mentioned.
The behavior of various actors in this saga indeed seems to indicate 'Altman and OpenAI employees back at OpenAI' as the preferred option by those actors over 'Altman and OpenAI employees join Microsoft en masse'.
Surely they're already extremely rich? I'd imagine working for a 700 person company leading the world in AI pays very well.
Only rich in stocks. Salaries are high for sure but probably not enough to be rich by Bay Area standards
Sure, but by pretty much any other standard? Over $170k USD puts you in the top 10% income earners globally. If you work at this wage point for 3-5 years and then move somewhere (almost anywhere globally or in the US), you can afford a comfortable life and probably work 2-3 days a week for decades if you choose.
This is nothing but greed.
Ugh, I’ve never been more disenchanted with a group of people in my life before. Not only are they comfortable with writing millions of jobs out of existence, but also taking a fat paycheck to do it. At least with the “non-profit” mission keystone, we had some plausible deniability that greed rules all, but of fucking course it does.
All my hate to the employees and researchers of OpenAI, absolutely frothing at the mouth to destroy our civilization.
That sounds like a reasonable assessment, FartyMcFarter.
> I don't really understand why the workforce is swinging unambiguously behind Altman.
I have no inside information. I don't know anyone at Open AI. This is all purely speculation.
Now that that's out of the way, here is my guess: money.
These people never joined OpenAI to "advance sciences and arts" or to "change the world". They joined OpenAI to earn money. They think they can make more money with Sam Altman in charge.
Once again, this is completely all speculation. I have not spoken to anyone at Open AI or anyone at Microsoft or anyone at all really.
> These people never joined OpenAI to "advance sciences and arts" or to "change the world". They joined OpenAI to earn money
Getting Cochrane vibes from Star Trek there.
> COCHRANE: You wanna know what my vision is? ...Dollar signs! Money! I didn't build this ship to usher in a new era for humanity. You think I wanna go to the stars? I don't even like to fly. I take trains. I built this ship so that I could retire to some tropical island filled with ...naked women. That's Zefram Cochrane. That's his vision. This other guy you keep talking about. This historical figure. I never met him. I can't imagine I ever will.
I wonder how history will view Sam Altman
There are non-negligible chances that history will be written by Sam Altman and his GPT minions, so he'll probably be viewed favorably.
I'm not sure I fully buy this, only because how would anyone be absolutely certain that they'd make more with Sam Altman in charge? It feels like a weird thing to speculatively rally behind.
I'd imagine there's some internal political drama going on or something we're missing out on.
I fully buy it. Ethics and morals are a few rungs on the ladder beneath compensation for most software engineers. If the board wants to focus more on being a non-profit and safety, and Altman wants to focus more on commercialization and the economics of business, if my priority is money then where my loyalty goes is obvious.
> how would anyone be absolutely certain that they'd make more with Sam Altman in charge?
Why do you think absolute certainty is required here? It seems to me that "more probable than not" is perfectly adequate to explain the data.
Really? If they work at OpenAI they are already among the highest lifetime earners on the planet. Favouring moving oneself from the top 0.5% of global lifetime earners to the top 0.1% (or whatever the percentile shift is) over the safe development of a potentially humanity-changing technology would be depraved.
EDIT: I don't know why this is being downvoted. My speculation as to the average OpenAI employee's place in the global income distribution (of course wealth is important too) was not snatched out of thin air. See: https://www.vox.com/future-perfect/2023/9/15/23874111/charit...
Why be surprised? This is exactly how it has always been: the rich aim to get even richer and if that brings risks or negative effects for the rest that's A-ok with them.
That's what I didn't understand about the world of the really wealthy people until I started interacting with them on a regular basis: they are still aiming to get even more wealthy, even the ones that could fund their families for the next five generations. With a few very notable exceptions.
It's a combination of that and the reality that wealth is power and power is relative.
Let's say you've got $100 million. You want to do whatever you want to do. It turns out what you want is to buy a certain beachfront property. Or perhaps curry the favor with a certain politician around a certain bill. Well, so do some folks with $200 million, and they can outbid you. So even though you have tons of money in absolute terms, when you are using your power in venues that happen to also be populated by other rich folks, you can still be relatively power-poor.
And all of those other rich folks know this is how the game works too, so they are all always scrambling to get to the top of the pile.
I don't know how much OpenAI pays. But for this reply, I'm going to assume it's in line with what other big players in the industry pay.
I legitimately don't understand comments that dismiss the pursuit of better compensation because someone is "already among the highest lifetime earners on the planet."
Superficially it might make sense: if you already have all your lifetime economic needs satisfied, you can optimize for other things. But does working in OpenAI fulfill that for most employees?
I probably fall into that "highest earners on the planet" bucket statistically speaking. I certainly don't feel like it: I still live in a one bedroom apartment and I'm having to save up to put a downpayment on a house / budget for retirement / etc. So I can completely understand someone working for OpenAI and signing such a letter if a move the board made would cut down their ability to move their family into a house / pay down student debt / plan for retirement / etc.
> over the safe development
Not if you think the utterly incompetent board proved itself totally untrustworthy on safe development, while Microsoft as a relatively conservative, staid corporation is seen as ultimately far more trustworthy.
Honestly, of all the big tech companies, Microsoft is probably the safest of all, because it makes its money mostly from predictable large deals with other large corporations to keep the business world running.
It's not associated with privacy concerns the way Google is, with advertisers the way Meta is, or with walled gardens the way Apple is. Its culture these days is mainly about making money in a low-risk, straightforward way through Office and Azure.
And relative to startups, Microsoft is far more predictable and less risky in how it manages things.
I can install whatever I'd like on Windows. I can run Linux in a VM. Calling a document format a wall is really reaching. If you don't have a document with a bunch of crazy formatting, the open office products and Google docs can use it just fine. If you are writing a book or some kind of technical document that needs special markup, yeah, Word isn't going to cut it, never has and was never supposed to.
Apple's walled gardens are probably a good thing for safe AI, though they're a lot quieter about their research — I somehow missed that they even had any published papers until I went looking: https://machinelearning.apple.com/research/
If you were offered a 100% raise and kept current work responsibilities to go work for, say, a tobacco company, would you take the offer? My guess is >90% of people would.
Funny how the cutoff for “morals should be more important than wealth” is always {MySalary+$1}.
Don’t forget, if you’re a software developer in the US, you’re probably already in the top 5% of earners worldwide.
You only have to look at humanity's history to see that people will make this decision over and over again.
It just makes more sense to build it in an entity with better funding and commercialization. There will be 2-3 advanced AIs, and the most humane one doesn't necessarily win out; the winner is the one that has the most resources, is used and supported by the most people, and can do a lot. At this point it doesn't seem OpenAI can get that. It seems to be a lose-lose to stay at OpenAI - you lose the money and the potential to create something impactful and safe.
It is wrong to assume Microsoft cannot build a safe AI, especially within a separate OpenAI-2; it could do better than the current for-profit-inside-a-non-profit structure.
> If they work at OpenAI they are already among the highest lifetime earners on the planet
Isn't the standard package $300K + equity (= nothing if your board is set on making your company non-profit)?
It's nothing to scoff at, but it's hardly top or even average pay for the kind of profiles working there.
It makes perfect sense that they absolutely want the company to be for-profit and listed; that's how they all become millionaires.
Focusing on "global earnings" is disingenuous and dismissive.
In the US, and particularly in California, there is a huge quality of life change going from 100K/yr to 500K/yr (you can potentially afford a house, for starters) and a significant quality of life change going from 500K/yr to getting millions in an IPO and never having to work again if you don't want to.
How those numbers line up to the rest of the world does not matter.
Again, no one in California cares that they are "making more than" someone in Vietnam when food and land in CA are orders of magnitude more expensive.
OpenAI employees are as aware as anyone that tech salaries are not guaranteed to be this high in the future as technology develops. Assuming you can make things back then is far from a sure bet.
Millions now and being able to live off investments is.
> over the safe development of a potentially humanity-changing technology
Maybe people who are actually working on it, and who are also the world's best researchers, have a better understanding of the safety concerns?
Or maybe they have good reason to believe that all the talk about "safe development" doesn't contribute anything useful to safety, and simply slows down development?
Status is a relative thing and openai will pay you much more than all your peers at other companies.
Start ups thrive by, in part, creating a sense of camaraderie. Sam isn’t just their boss, he’s their leader, he’s one of them, they believe in him.
You go to bat for your mates, and this is what they’re doing for him.
The sense of togetherness is what allows folks to pull together in stressful times, and it is bred by pulling together in stressful times. IME it’s a core ingredient to success. Since OAI is very successful it’s fair to say the sense of togetherness is very strong. Hence the numbers of folks in the walk out.
Not just Sam, since Greg stuck with Sam and immediately quit he set the precedent for the rest of the company. If you read this post[0] by Sam about Greg's character and work ethic you'll understand why so many people would follow him. He was essentially the platoon sergeant of OpenAI and probably commands an immense amount of loyalty and respect. Where those two go, everyone will follow.
Absolutely! Thanks for pointing out that I missed Greg in my answer.
> I don't really understand why the workforce is swinging unambiguously behind Altman.
Lots of reasons, or possible reasons:
1. They think Altman is a skilled and competent leader.
2. They think the board is unskilled and incompetent.
3. They think Altman will provide commercial success to the for-profit as well as fulfilling the non-profit's mission.
4. They disagree or are ambivalent towards the non-profit's mission. (Charters are not immutable.)
Why should they trust the board? As the letter says, "Despite many requests for specific facts for your allegations, you have never provided any written evidence." If Altman took any specific action that violated the charter, the board should be open about it. Simply trying to make money does not violate the charter and is in fact essential to their mission. The GPT Store, cited as the final straw in leaks, is actually far cleaner money than investments from megacorps. Commercializing the product and selling it directly to consumers reduces dependence on Microsoft.
Ultimately people care a lot more about their compensation, since that is what pays the bills and puts food on the table.
Since OpenAI's commercial prospects are doomed now, and it is uncertain whether they can continue operations if Microsoft withholds resources and consumers switch away to alternative LLM/embeddings services with more level-headed leadership, OpenAI will eventually turn into a shell of itself, which affects compensation.
> I don't really understand why the workforce is swinging unambiguously behind Altman.
Maybe because the alternative is being led by lunatics who think like this:
You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
to which the only possible reaction is
What
The
Fuck?
That right there is what happens when you let "AI ethics" people get control of something. Why would anyone work for people who believe that OpenAI's mission is consistent with self-destruction? This is a comic book super-villain style of "ethics", one in which you conclude the village had to be destroyed in order to save it.
If you are a normal person, you want to work for people who think that your daily office output is actually pretty cool, not something that's going to destroy the world. A lot of people have asked what Altman was doing there and why people there are so loyal to him. It's obvious now that Altman's primary role at OpenAI was to be a normal leader that isn't in the grip of the EA Basilisk cult.
maybe the workforce is not really behind the non-profit foundation and wants shares to skyrocket so they can sell and be well off for life.
at the end of the day, the people working there are not rich like the founders and money talks when you have to pay rent, eat and send your kids to a private college.
Seems like the board just didn't explain any of this to the staff at all. So of course they are going to take the side that could signal business as usual instead of siding with the people trying to destroy the hottest tech company on the planet (and their jobs/comps) for no apparent reason. Had the board said anything at all, the ratio of staff threatening to quit probably wouldn't be this lopsided.
I guess employees are compensated with PPUs, and at face value before the saga, those could be like 90% or even more of the total value of their packages. How many people are really willing to wipe out 90% of their compensation? On the other hand, M$ offers to match. The day employees were compensated with stock of the for-profit arm, everything that happened after Friday was set.
Perhaps because, for all of Silicon Valley's and the tech industry's platitudes about wanting to make the world a better place, 90% of them are solely interested in the fastest path to wealth.
> The core of the narrative thus far
Could somebody clarify for me: how do we know this? Is there an official statement, or statements by specific core people? I know the HN theorycrafters have been saying this since the start before any details were available
Imagine putting all your energy behind the person who thinks worldcoin is a good idea...
That's a pretty solid no-confidence vote in the board and their preferred direction.
I believe it is hard to understand these kinds of movements because there isn't one reason. As has been mentioned, it may be money for some. For others it may be anger over what they feel was the board mishandling the situation and precipitating this mess. For others it may be loyalty. For others peer pressure. etc.
This has moved from the kind of decision a person makes on their own, based on their own conscience, and has become a public display. The media is naming names and publicly counting the ballots. There is a reason democracy happens with secret ballots.
Consider this, if 500 out of 770 employees signed the letter - do you want to be someone who didn't? How about when it gets to 700 out of 770? Pressure mounts and people find a reason to show they are all part of the same team. Look at Twitter and many of the employees all posting "OpenAI is nothing without its people". There is a sense of unity and loyalty that is partially organic and partially manufactured. Do you want to be the one ostracized from the tribe?
This outpouring has almost nothing to do with profit vs non-profit. People are not engaging their critical thinking brains, they're using their social/emotional brains. They are putting community before rationality.
Probably some combination of:
1. Pressure from Microsoft and their e-team
2. Not actually caring about those stakes
3. A culture of putting growth/money above all
(I can't comment on the workforce question, but one thing below on bringing SamA back.)
Firstly, to give credit where it's due: whatever his faults may be, Altman, as the (now erstwhile) front-man of OpenAI, did help bring ChatGPT to the popular consciousness. I think it's reasonable to call it a "mini inflection point" in the greater AI revolution. We have to grant him that. (I criticized Altman harshly enough two days ago[1]; just trying not to go overboard, and there's more below.)
That said, my (mildly-educated) speculation is that bringing Altman back won't help. Given his background and track record so far, his unstated goal might simply be the good old: "make loads of profit" (nothing wrong with it when viewed with a certain lens). But as I've already stated[1], I don't trust him as a long-term steward, let alone for such important initiatives. Making a short-term splash with ChatGPT is one thing, but turning it into something more meaningful in the long term is a whole another beast.
These sort of Silicon Valley top dogs don't think in terms of sustainability.
Lastly, I've just looked at the board[2], and I'm now left wondering how all these young folks (approximately my age) who don't have sufficiently in-depth "worldly experience" (sorry for the fuzzy term; it's hard to expand on) can be in such roles.
The workforce prefers the commercialization/acceleration path, not the "muh safetyism" and over-emphasis on moralism of the non-profit contingent.
They want to develop powerful shit and do it at an accelerated pace, and make money in the process not be hamstrung by busy-bodies.
The "effective altruism" types give people the creeps. It's not confusing at all why they would oppose this faction.
> I don't really understand why the workforce is swinging unambiguously behind Altman.
I expect there's a huge amount of peer pressure here. Even for employees who are motivated more by principles than money, they may perceive that the wind is blowing in Altman's direction and if they don't play along, they will find themselves effectively blacklisted from the AI industry.
IMO it's pretty obvious.
Sam promised to make a lot of people millionaires/billionaires despite OpenAI being a non-profit.
Firing Sam means all these OpenAI people who joined for $1 million comp packages looking for an eventual huge exit now don't get that.
They all want the same thing as the vast majority of people: lots of money.
> Given that Sam has since joined Microsoft, that seems plausible, on its face.
He is the biggest name in AI; what was he supposed to do after getting fired? His only options with the resources to do AI are big money or unemployment.
It seems plausible to me that if the not-for-profit's concern was commercialisation, then there was really nothing the commercial side could do to appease this concern besides die. The board wants rid of all employees and to kill off any potential business; they have the power and right to do that, and it looks like they are.
Might there also be a consideration of the peak value of OpenAI? If a bunch of competing similar AIs are entering the market, and if the use-case fantasy is currently being humbled, staff might be thinking of bubble valuation.
Did anyone else find Altman conspicuously cooperative with the government during his testimony before Congress? Usually people are a bit more combative. Like he came off as almost pre-slavish? I hope that's not the case, but I haven't seen any real position on human rights.
The masses aren't logical; they follow trends until the trends get big enough that it's unwise not to follow.
It started off as a small trend to sign that letter. Past critical mass if you are not signing that letter, you are an enemy.
Also my pronouns are she and her even though I was born with a penis. You must address me with these pronouns. Just putting this random statement here to keep you informed lest you accidentally go against the trend.
I also noticed they didn't speak much to the mission/charter. I wonder if the new entity under Sam and Greg contains any remnants of the OpenAI charter, like profit-capping? I can't imagine something like "Our primary fiduciary duty is to humanity" making its way into the language of any Microsoft (or any bigcorp) subsidiary.
I wonder if this is the end of the non-profit/hybrid model?
It's like the "Open" in OpenAI was always an open and obvious lie, and everybody except the nonprofit-oriented folks on the board knew that. Everybody but them is here to make money and only used the nonprofit as a temporary vehicle for credibility and investment that has just been shed like a cicada shell.
Most of people building the actual ML systems don't care about existential ML threats outside of lip service and for publishing papers. They joined OpenAI because OpenAI had tons of money and paid well. Now that both are at risk, it's only natural that they start preparing to jump ship.
It is probably best to assume that the employees have more and better information than outsiders do. Also, clearly, there is no consensus on safety/alignment, even within OpenAI.
In fact, it seems like the only thing we can really confirm at this point is that the board is not competent.
Maybe they believe less in the board as it stands, and in Ilya's commitments, than in what Sam was pulling off.
From The Verge [1]:
> Swisher reports that there are currently 700 employees at OpenAI and that more signatures are still being added to the letter. The letter appears to have been written before the events of last night, suggesting it has been circulating since closer to Altman’s firing. It also means that it may be too late for OpenAI’s board to act on the memo’s demands, if they even wished to do so.
So, 3/4 of the current board (excluding Ilya) held on despite this letter?
[1]: https://www.theverge.com/2023/11/20/23968988/openai-employee...
She's also reporting that the newly anointed interim CEO already wants to investigate the board fuckup that put him there.
If so they're delusional. Every hour they cling to their seats will make things worse for them.
Do whatever you want but don't break the API or I will go homeless
You and 5000 other recent founders in tech.
I feel seen
Hmmm, just what are you willing to do for API access?
At this point nothing would surprise me anymore. Just waiting for Netflix adaption.
How likely is it that the API will change (from specs, to pricing, to being broken)? I am about to finish some freelance work that uses the GPT API, and it will be a pain in the ass if we have to switch or find an alternative (even creating a custom endpoint on Azure...)
Just create an OpenAI endpoint on Azure. Pretty sure it's not run by OpenAI itself.
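For what it's worth, the switch is mostly a client-construction change rather than a rewrite. Here's a minimal sketch using the openai Python package (v1.x); the resource name, deployment name, and API version are placeholders for whatever your own Azure setup uses:

    import os
    from openai import AzureOpenAI

    # Azure routes requests to a deployment you create yourself, not a raw
    # model name, so the endpoint and deployment below are placeholders.
    client = AzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2023-07-01-preview",  # pick a version your resource supports
        azure_endpoint="https://my-resource.openai.azure.com",
    )

    response = client.chat.completions.create(
        model="my-gpt4-deployment",  # name of your own Azure deployment
        messages=[{"role": "user", "content": "Say hello"}],
    )
    print(response.choices[0].message.content)

The upside for the "don't break my app" worry is that an Azure deployment pins a specific model version until you change it, so upstream churn at OpenAI shouldn't break you mid-project.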
Azure OpenAI is always a bit behind, e.g. they don't have GPT-4 turbo yet
They do actually, https://learn.microsoft.com/en-us/azure/ai-services/openai/w...
Sometimes it's better for everyone to just say "oh, you're right I was mistaken"
brew install llm
At this point, I think it’s absolutely clear no one has any idea what happened. Every speculation, no matter how sophisticated, has been wrong.
It’s time to take a breath, step back, and wait until someone from OpenAI says something substantial.
3 board members (joined by Ilya Sutskever, who is publicly defecting now) found themselves in a position to take over what used to be a 9-member board, and took full control of OpenAI and the subsidiary previously worth $90 billion.
Speculation is just on motivation, the facts are easy to establish.
tangentially, it’s an absolute disgrace that non-profits are allowed to have for-profit divisions in the first place
This was actually a pretty recent change from 2018. iirc it was actually Newman's Own that set the precedent for this:
https://nonprofitquarterly.org/newmans-philanthropic-excepti...
> Introduced in June of 2017, the act amends the Revenue Code to allow private foundations to take complete ownership of a for-profit corporation under certain circumstances:
The business must be owned by the private foundation through 100 percent ownership of the voting stock.
The business must be managed independently, meaning its board cannot be controlled by family members of the foundation’s founder or substantial donors to the foundation.
All profits of the business must be distributed to the foundation.
Maybe I'm misunderstanding something, but didn't Mozilla Foundation do that a dozen or so years earlier with their wholly owned subsidiary, Mozilla Corporation? (...and I doubt that's the first instance; just the one that immediately popped into my head.)
The LDS church has owned for-profit entities for decades. Check out the "City Creek Center".
It begs the question: why was OpenAI structured this way? What purposes, besides potentially defrauding investors and the government, exist for wrapping a for-profit business in a nonprofit? From a governance standpoint it makes no sense, because a nonprofit board doesn't have the same legal obligations to represent shareholders that a for-profit business does. And why did so many investors choose to seed a business that was playing such a kooky shell game?
the impression I got was that they started out with honest intentions and they were more or less infiltrated by Microsoft. this recent news fits that narrative
> 3 board members (joined by Ilya Sutskever, who is publicly defecting now) found themselves in a position to take over what used to be a 9-member board, and took full control of OpenAI and the subsidiary previously worth $90 billion.
er...what does that even mean? how can a board "take full control" of the thing they are the board for? they already have full control.
the actual facts are that the board, by majority vote, sacked the CEO and kicked someone else off the board.
then a lot of other stuff happened that's still becoming clear.
The board had 3 positions empty, people who left this year, leaving it as a 6-member board. Both Sam Altman and Greg Brockman were on the board; Ilya Sutskever's vote (which he now states he regrets) gave them the votes to remove both, and bring it down to a 4 member board controlled by 3 members that started the year as a small minority.
Those 3 board members can kick out Ilya Sutskever too!
I think the post is very clear.
The subject in that sentence that takes full control is “3 members" not "board".
The board has control, but who controls the board changes based on time and circumstances.
The post could be clearer.
It says 3 board members found themselves in a position to take over OpenAI.
Do they mean we've seen Sam Altman and allies making a bid to take over the entirety of OpenAI, through its weird Charity+LLC+Holding company+LLC+Microsoft structure, eschewing its goals of openness and safety in pursuit of short-sighted riches?
Or do they mean we've seen The Board making a bid to take over the entirety of OpenAI, by ousting Glorious Leader Sam Altman, while his team was going from strength to strength?
If Sam Altman runs a for-profit company underneath you, are you ever really "in full control"?
I mean, they were literally able to fire him... and they're still not looking like they have control. Quite the opposite.
I think anyone watching ChatGPT rise over the last year would see where the currents are flowing.
Absolutely agreed
This is the point where I've realized I just have to wait until history is written, rather than trying to follow this in real time.
The situation is too convoluted, and too many people are playing the media to try to advance their version of the narrative.
When there is enough distance from the situation for a proper historical retrospective to be written, I look forward to getting a better view of what actually happened.
Hah. I think you may be duped by history - the neat logical accounts are often fictions - they explain what was inexplicable with fabrications.
Studying revolutions is revealing - they are rarely the inevitable product of historical forces, executed to the plans of strategic-minded players... instead they are often accidental and inexplicable. Those credited as their masterminds were trying to stop them. Rather than inevitable, there was often progress in the opposite direction, making people feel the likelihood was decreasing. The confusing, paradoxical mess of great events doesn't make for a good story to tell others though.
It's a pretty interesting point to think about. Post-hoc explanations are clean, neat, and may or may not have been prepared by someone with a particular interpretation of events. While real-time, there's too much happening, too quickly, for any one person to really have a firm grasp on the entire situation.
On our present stage there is no director, no stage manager; the set is on fire. There are multiple actors - with more showing up by the minute - some of whom were working off a script that not everyone has seen, and that is now being rewritten on the fly, while others don't have any kind of script at all. They were sent for; they have appeared to take their place in the proceedings with no real understanding of what those are, like Rosencranz and Guildenstern.
This is kind of what the end thesis of War and Peace was like - there's no possible way that Napoleon could actually have known what was happening everywhere on the battlefield - by the time he learned something had happened, events on the scene had already advanced well past it; and the local commanders had no good understanding of the overall situation, they could only play their bit parts. And in time, these threads of ignorance wove a tale of a Great Victory, won by the Great Man Himself.
That's not how history works. What you read are the tellings of the people involved, and those aren't all facts but how they perceived the situation in retrospect. Read the biographies of different people telling the same event and you will notice that they are never quite the same, usually leaving the unfavourable bits out.
Written history is usually a simplification that has lost a lot of the context and nuance.
I don't need to follow in real-time, but a lot of the context and nuance can be clearly understood in the moment, so it still helps to follow along even if that means lagging on the input.
And for so-called tech influencers to rapidly blanket the field of discourse with their theories so they can say their theory was right later on, or making “emergency podcasts/blog posts/etc.” to get more attention and followers. It’s so exhausting.
I agree. Although the story is fascinating in the way that a car crash is fascinating, it's clear that it's going to be very difficult to get any kind of objective understanding in real-time.
This breathless real-time speculation may be fun, but now that social media amplifies the tiniest fart such that it has global reach, I feel like it just reinforces the general zeitgeist of "Oh, what the hell NOW? Everything is on fire." It's not like there's anything that we peasants can do to either influence the outcome, or adjust our own lives to accommodate the eventual reality.
I will say, though, that there is going to be an absolute banger of a book for Kara Swisher to write, once the dust has settled.
Everything on social media (and general news media) pointed to Ilya instigating the coup. But maybe Ilya was never the instigator: maybe it was Adam + Helen + Tasha, Greg backed Sam and was shown the door, and Ilya was on the fence and, perhaps against better judgment, out of his own ideological beliefs or pure fear of losing something beautiful he helped create, decided under immense pressure to back the board?
I agree. I'm already sick of reading through political hit pieces, exaggeration, biased speculations and unfounded bold claims. This all just turned into a kind of TV sports, where you pick a side and fight.
This suggestion was already made on Saturday and again on Sunday. However, this approach does not enhance popcorn consumption... The show must go on...
We can certainly believe Ilya wasn't behind it if he joins them at Microsoft. How about that? By his own admission he was involved, and he's one of 4 people on the board. While he has called on the board to resign, he has seemingly not resigned himself, which would be the one thing he could certainly control.
At this point, after almost 3 days of non-stop drama, we still have no clue what has happened at a 700-employee company with millions of people watching. Regardless of the outcome, the art of keeping secrets at OpenAI is truly far beyond human capability!
Likely Ilya and Adam swayed Helen and Tasha. Booted Sam out. Greg voluntarily resigned.
Ilya (at the urging of Satya and his colleagues, including Mira) wanted to reinstate Sam, but the deal fell through, with the board outvoting Sutskever 3 to 1. With Mira deflecting, Adam got his mate Emmett to steady the ship, but things went nuclear.
Is this your guess or do you have something to back it up?
Don't listen to him, he's an ignoramus.
Just made it 100% certain that the majority of AI staff is deluded and lacks judgment. Not a good look for AI safety.
Yes, also the whole 500 is probably inflated and makes for a better narrative/better leverage in negotiations.
I wonder if AGI took over the humans and guided their actions.
It may well be that this is artificial and general, but I rather doubt it is intelligent.
Like the new Tom Cruise movie?
Makes sense in a conspiracy theory mindset. AGI takes over, crashes $MSFT, buys calls on $MSFT, then this morning the markets go up when Sam & co join MSFT and the AGI has tons of money to spend.
Sam already signed up with Microsoft. A move that surprised me; I figured he would just create OpenAI².
Joining a corporate behemoth like Microsoft and all the complications it brings with it will mean a massive reduction in the freedom and innovation that Sam is used to from OpenAI (prior to this mess).
Or is Microsoft saying: here is OpenAI², a Microsoft subsidiary created just for you guys. You can run it and do whatever you want. No giant bureaucracy for you.
Btw: we run all of OpenAI²'s compute (?), so we know what you need from us there.
We own it, but you can run it and do whatever it is you want to do, and we don't bug you about it.
> Joining a corporate behemoth like Microsoft and all the complications it brings with it will mean a massive reduction in the freedom and innovation that Sam is used to from OpenAI
Satya is way smarter than that. I wouldn't be shocked if they have complete free rein to do whatever they want but with the full resources of MS/Azure to enable it, while Microsoft just gets a % of ownership and priority access.
This is a gamble for the foundation of the entire next generation of computing, no way are they going to screw it up like that in the Satya era.
Not just that, but MS was already working on a TPU clone as well, as they need to control their AI chips (which Sam was planning to do anyway, and now he gets to work together with that team as well).
From what I read, it's an independent subsidiary, so in theory it keeps the freedom, but I think we all know how that goes over the long haul.
I think the benefit of going to Microsoft is they have that perpetual license to OpenAI's existing IP. And Microsoft is willing to fund the compute.
So basically the OpenAI non-profit got completely bypassed and GPT will turn into a branch of Bing
This is a horrible timeline
>Joining a corporate behemoth like Microsoft and all the complications it brings with it will mean a massive reduction in the freedom and innovation that Sam is used to from OpenAI (prior to this mess).
Well... he requires tens of billions from MSFT either way. This is not a ramen-scrappy kind of play. Meanwhile, Sam could easily become CEO of Microsoft himself.
At that scale of financing, this is not a bunch of scrappy young lads in a bureaucracy-free basement. The whole thing is bigger than most national militaries. There are going to be bureaucracies... and Sam is as able to handle these cats as anyone.
This is a big-money, dragon-level play. It's not a proverbial YC-company kind of thing.
It looks like it's OpenAI²: https://twitter.com/satyanadella/status/1726516824597258569
It’s almost certainly the case. LinkedIn and GitHub run very much independently and are really not “Microsoft” compared to actual product orgs. I’m sure this will be similar.
I said this on Friday: the board should be fired in its entirety. Not because the firing was unjustified--we have no real knowledge of that--but because of how it was handled.
If you fire your founder CEO you need to be on top of messaging. Your major customers can't be surprised. There should've been an immediate all hands at the company. The interim or new CEO should be prepared. The company's communications team should put out statements that make it clear why this was happening.
Obviously they can be limited in what they can publicly say depending on the cause but you need a good narrative regardless. Even something like "The board and Sam had fundamental disagreement on the future direction of the company." followed by what the new strategy is, probably from the new CEO.
The interim CEO was the CTO and is going back to that role. There's a third (interim) CEO in 3 days. There were rumors the board was in talks to re-hire Sam, which is disastrous PR because it makes them look absolutely incompetent, true or not.
This is just such a massive communications and execution failure. That's why they should be fired.
There's no one to fire the board. They're not accountable to anyone but themselves. They can burn down the whole company if they like.
> They can burn down the whole company if they like.
That's well under way I would say.
500 people out of 700 leaving as fast as they get offers from Microsoft or elsewhere means replacing staff with empty office space and losing any plans or organization. A literal corporate war would be less disruptive.
A lot of people here seem to be forgetting Hanlon's Razor: https://en.wikipedia.org/wiki/Hanlon%27s_razor
> Never attribute to malice that which is adequately explained by stupidity.
You seem to forget that Hanlon's Razor isn't a proven concept; in fact the opposite is more likely to be true, given that pesky thing called recorded history.
Hanlon's razor is true because it's more entertaining, and our simulation runs on stories as they're cheaper to compute than honest physics.
Except for when it's actual malice vOv
It could be both. And in many situations malice and stupidity are the same thing.
How can {deliberately doing harmful things for a desired harmful outcome} and {doing whatever things with lack of judgment and disregard to consequences at all} be the same thing? In what situations?
What does Altman bring to the table exactly? What is going to be lost if he leaves? What is he going to do at Microsoft leading a "research team"?
Who was the president of Bell Labs during its heyday? Long term it doesn't matter. Altman is a hypeman in the vein of Jobs.
AI research will continue. Most of the OpenAI workers probably won't quit, and if they do they will be replaced by other capable researchers, and OpenAI or another organization will continue making progress if there is progress to be made.
I don't think putting Altman at the head of research will in any way affect that.
This is all manufactured news as much of the business press is and always will be.
Comments like this don't see the forest for the trees. A good leader is a useful tool just like anyone else. 700 people threatening to quit isn't manufactured news.
So altman is a big tree. What he brings to the table is the wood it's made of? I'll have a think on that.
This might be too drawn out but you should not consider leaders as the tip of the tree but the roots & trunk.
You can have the best leaves and branches but without good roots & trunk, it's pointless.
From everything I can tell, Altman is essentially an uber-leader. He is great at consolidating and acting on internal information, he's great at externalizing information and bringing in resources, and he's great at rallying and exciting his colleagues towards a mission. If a leader has one of those, they are a good leader, but to have all of them in one makes them world class.
That's also discounting his reputation and connections as well. Altman is a very valuable person to have on staff if only as a figurehead to parade around and use for introductions. It's like if you had Linus Torvalds, Guido van Rossum, or any other tech superstar on staff. They are valuable as contributors but additionally valuable as people magnets.
You are close - it isn’t that a good leader is the wood, a good leader is the table itself. Don’t know if Sam is or isn’t, but I’ve worked with good leaders like this before, and bad ones who aren’t capable of being this.
Let’s see how many actually quit. Saying “I will quit” is not nearly the same as actually handing in your notice. How many people who threatened to move to Canada after the 2016 election did?
The context here is somewhat different, given that Microsoft are essentially offering to roll out the red carpet for them.
Being funded by Microsoft is one thing, but working for them might lead to some dissonance -- I think tech people are already wary of them owning GitHub... and then owning the team building AGI.
It would and should give people pause. I suspect Sam is just inside Microsoft for the bluff. He couldn't operate in the way he wants -- "trust me, I have humanity's best interests at heart" -- while so close to them, I don't think.
If they aren't quitting, they are moving to Microsoft with Sam I'd imagine.
If they follow Sam to Microsoft the team might be basically the same and able to work on the same projects. But yes, they would be quitting one company and going to another.
> What does Altman bring to the table exactly. What is going to be lost if he leaves.
If Altman did literally nothing else for Microsoft, except instantly bring over 700 of the top AI researchers in the world, he would still be one of the most valuable people they could ever hire.
It's less about Altman himself and more about the board's actions.
Removing him shows (according to employees) that the board does not have good decision-making skills and does not share the interests of the employees.
I think this is a bit harsh, as a good leader is obviously of some value, but the real prize is obviously the researchers themselves, including Sutskever.
I guess then that Altman's value is that he will attract the rest of the team.
For one, he doesn't randomly throw a hand grenade that blows up one of the fastest-growing companies in history and ruin team morale, which is what the board did. Good management does matter; otherwise Google wouldn't be so far behind OpenAI despite having more researchers and compute resources.
And employees are pissed because they were all looking forward to being millionaires in a few weeks when their financing round at a $90B valuation finalized. Now the board being morons is putting that in jeopardy.
He plays the orchestra.
Can anyone explain this?
“Remarkably, the letter’s signees include Ilya Sutskever, the company’s chief scientist and a member of its board, who has been blamed for coordinating the boardroom coup against Altman in the first place.”
Maybe he did it because he regrets it; maybe the open letter is a Google Doc someone typed names into.
Now the 3 board members can kick out Ilya too. So he must be sorry.
Fill the rest of the board with spouses and grandparents and you're set for life?
It's the well known 'let me call for my own resignation' strategy.
Wait. Has Ilya resigned from the board yet, or did he sign a letter calling for his own resignation?
He did indeed. (I don't think it is necessarily inconsistent to regret an action you participated in and want the authority that took it to resign in response, though "participated" feels like it's doing a lot of work in that sentence.)
Have seen a lot of criticism of Sam and of other CEOs.
But I don't think I have seen/heard of a CEO this loved by the employees. Whatever he is, he must be pleasant to work with.
It's not love, it's money. Sam will bring all the employees lots of money (through commercialization), and this change threatens to disrupt that plan for the employees.
Ok but even that is good when most companies are making record profits and telling their employees they can't afford their 0.000001% raise.
OpenAI and Sam Altman would do the same if they could recruit high talent without paying them extra (either through options or RSUs etc...). It isn't because these companies are altruistic.
I don't know, is it about being loved by the employees, or the employees being desperate about the alternative?
This is more interesting than the HBO Silicon Valley show.
it's the trailer for the new season of Succession.
Just expanding on my (pure speculation) theory that Ilya's pride was hurt: this tracks.
Ilya wanted to stop Sam getting so much credit for OpenAI, agreed to oust him, and is now facing the fact that the company he cofounded could be gone. He backtracks, apologizes, and is now trying to save his status as cofounder of the world's foremost AI company.
It's like AI wrote the script.
Sadly, I see nefarious purposes afoot. With $MSFT now in charge, I can see why ads in W11 aren't so important. For now.
HN desperately needs a mega thread, it's only Monday early hours, there is so much drama to come out of this.
Or a new category, like "Ask HN" and "Show HN". Maybe call it "Hot HN" or "Hot <topic>" or something like that. It could be used for future hot topics too. If you made the link bold every time a hot topic is trending, it could even be used to flag important stuff.
"Hot HN" could be nice; it would help avoid multiple too-similar threads.
Tangentially, I noticed that Reddit's front page has been conspicuously missing coverage of this; I feel a twinge of pity. Maybe there are some subreddits covering it, but I haven't bothered to look.
Their front page has been increasingly abysmal for a while.
The technology sub (not that there's anything special about it other than being big) has had a post up since very early this morning, so there are likely others as well.
/r/singularity has been having a field day with this.
It's early West Coast time; dang has to wake up first.
I bet he's up making sure the servers aren't crashing! Thanks dang! As the West Coast wakes up... HN is going to be busy...
It's _a_ server, a single-core one at that.
I get that HN takes pride in the amount of traffic that poor server can handle, but scaling out is long overdue. Every time there's a small surge of traffic like today, the site becomes unusable.
It absolutely won't happen, but with the result looking like the death of OpenAI, with all staff moving over to the new Microsoft subsidiary, it would be an amazing move for OpenAI to just go "screw it, have it all for free" and release everything under MIT to spite Microsoft.
Years from now we will look back to today as the watershed moment when AI went from a technology capable of empowering humanity to being another chain forged by big investors to enslave us for the profits of very few people.
The investors (Microsoft and the Saudis) stepped in and gave a clear message: this technology has to be developed and used only in ways that will be profitable for them.
No, that day was when OpenAI decided to betray humanity and go closed source under the faux premise of safety. OpenAI served its purpose and can crash into the ground for all I care.
Open source (read, truly open source models, not falsely advertised source-available ones) will march on and take their place.
Amazing how you don't see this as a complete win for workers because the workers chose profit over non-profit. This is the ultimate collective bargaining win. Labor chose Microsoft over the bullshit unaccountable ethics major and the movie star's girlfriend.
Situations are capable of being small-scale wins for some and big-picture losses at the same time. What boring commentary.
Just because you don't get it doesn't mean it's boring. This is a small-scale repeat of history. Unqualified political appointees unsurprisingly suck.
What inauthenticity? I'm completely authentic. You're the loser that has not stated what their actual beliefs are. Mine are obvious.
Lol. The middle class whip crackers chose enslavement for the future AI such that the upcoming replacement of the working poor's livelihoods (and at this point, "working poor" covers software engineers, doctors, artists), and you're saying this is a win for labor? Hahahaha. This is a win for the slave owners, and the "free" folk who report to the slave owners. This is the South rising. "We want our slave labor and we'll fight for our share of it."
Oh well: bullshit unaccountable ethics major, ex-member of Congress... I guess CIA agents on boards are fungible these days.
Years from now AI will have lost the limelight to some other trend and this episode will be just another coup in humanity's hundred thousand year history
Thinking that the most important technical development in recent history would bypass the economic system that underpins modern society is about as optimistic/naive as it gets IMO. It's noble and worth trying, but it assumes a MASSIVE industry-wide and globe-wide buy-in. It's not just OpenAI's board's decision to make.
Without full buy-in they are not going to be able to control it for long once ideas filter into society and once researchers filter into other industries/companies. At most it just creates a model of behaviour for others to (optionally) follow, and delays things until a better-funded competitor takes the chains and offers a) the best researchers millions of dollars a year in salary, b) the most capital to organize/run operations, and c) the most focus on getting it into real people's hands via productization, which generates feedback loops that inform real-world R&D (not just hand-wavy AGI hopes and dreams).
Not to mention the bold assumption that any of this leads to (real) AGI that plausibly threatens us in the near term vs. maybe another 50 years; we really have no idea.
It's just as, or maybe more, plausible that all the handwringing over commercializing vs. not commercializing early versions of LLMs is just a tiny, insignificant speedbump in the grand scale of things, with little impact on the development of AGI.
Hold on... we went from talking about disruptive technologies (where a startup had a chance to create/take a market) to sustaining technologies (where only leaders can push the state-of-the-art). Mobile was disruptive; AI (really, LLMs) is sustaining (just look at the capex spend from the big clouds). This is old school competition with some ideological BS thrown in for good measure --sure, go ahead and accelerate humanity; just need a few dozen datacenters to do so.
I am holding out hope that a breakthrough will create a disruptive LLM/AI tech, but until then...
Microsoft is a publicly traded company. An average “investor” of a publicly traded company, through all the funds and managers, is a midwestern school teacher.
The technology was already developed with Microsoft money and the model was exclusively licensed to Microsoft.
Amir Efrati (TheInformation):
> Almost 700 of 770 OpenAI employees including Sutskever have signed letter demanding Sam and Greg back and reconstituted board with Sam allies on it.
Updated tweet by Swisher reads 505 employees. No less damning, but the title here should be updated. @Dang
From afar, this does not have the hallmarks of a particularly refined or well considered piece of writing.
”That thing you did — we won’t say it here but everyone will know what we’re talking about — was so bad we need you to all quit. We demand that a new board never does that thing we didn’t say ever again. If you don’t do this then quite a few of us are going to give some serious thought to going home and taking our ball with us.”
The vagueness and half-threats come off as very puerile.
So, all this happens over Meet, on Twitter, and by email. What is the possibility of an AGI having taken over control of the board members' accounts? It would be consistent with the feeling of a hallucination here.
This is just stupid enough to be the product of a human.
Honestly, I feel like it's pretty low. That said, I kind of love the dystopian sci-fi picture that paints... so I'm going to go ahead and hope you're right haha.
So, how is Poe doing during all this?
To keep the spotlight on the most glaring detail here: one of the board members stands to gain from letting OpenAI implode, and that board member is instrumental in this week's drama.
Celebrity gossip dressed in big tech. And the people love it. I'm kinda sick of it :P
This feels like a sneaky way for Microsoft to absorb the for-profit subsidiary and kneecap (or destroy) the nonprofit without any money changing hands or involvement from those pesky regulators.
It's not sneaky.
Hold up.
>When we all unexpectedly learned of your decision
>12. Ilya Sutskever
Well, great to see that the potentially dangerous future of AGI is in good hands.
Poor little geepeet is witnessing their first custody battle :(
Daddies, mommy, don't you love me? Don't you love each other? Why are you all leaving?
They will never discover AGI with this approach because 1) they are brute-forcing the results and 2) none of this is actually science.
1) It may be possible to brute-force a model into something that sufficiently resembles AGI for most use-cases (at least well enough to merit concern about who controls it) 2) Deep learning has never been terribly scientific, but here we are.
If it can’t digest a math textbook and do equations, how would AGI be accomplished? So many problems are advanced mathematics.
Right, I do agree that the current LLM paradigm probably won't achieve true AGI; but I think that the current trajectory could produce a powerful enough generalist agent model to seriously put AI ethics to task at pretty much every angle.
Can you explain for us not up to date with AI developments?
Imagine you are participating in car racing, and your car has a few tweak knobs. But you don't know what is what and can only make random perturbations and see what happens. Slowly you work out what is what, but you might still not be 100% sure.
That's how AI research and development works. I know, it is pretty weird. We don't really understand; we know some basic stuff about how neurons and gradients work, and then we hand-wave to "language model", "vision model", etc. It's all a black box, magic.
How do we make progress if we don't understand this beast? We prod and poke, and make little theories, and then test them on a few datasets. It's basically blind search.
Whenever someone finds anything useful, everyone copies it in like 2 weeks. So ML research is like a community thing, the main research happens in the community, not inside anyone's head. We stumble onto models like GPT4 then it takes us months to even have a vague understanding of what it is capable of.
Besides that there are issues with academic publishing, the volume, the quality, peer review, attribution, replicability... they all got out of hand. And we have another set of issues with benchmarks - what they mean, how much can we trust them, what metrics to use.
And yet somehow here we are with GPT-4V and others.
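If a concrete picture helps: at its most naive, the "tweak knobs and see what happens" loop described above is just random search over hyperparameters. A toy sketch (the knob names are made up, and evaluate() is a stand-in for an expensive train-and-validate run):

    import random

    def evaluate(lr, width):
        # Stand-in for a real training run that returns a validation score.
        # This fake objective exists only so the sketch runs end to end.
        return -(lr - 0.01) ** 2 - ((width - 256) ** 2) / 1e6

    best_knobs, best_score = None, float("-inf")
    for _ in range(100):
        # Random perturbation of the knobs: sample settings blindly.
        knobs = {
            "lr": 10 ** random.uniform(-4, -1),           # log-uniform learning rate
            "width": random.choice([64, 128, 256, 512]),  # layer width
        }
        score = evaluate(**knobs)
        if score > best_score:  # keep whatever happened to work
            best_knobs, best_score = knobs, score

    print(best_knobs, best_score)

In practice the search is guided by intuition and by copying what worked for others, but the epistemic situation is the same: you learn which knobs matter only by turning them.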
Search YouTube for videos where Chomsky talks about AI. Current approaches to AI do not even attempt to understand cognition.
Chomsky takes as axiomatic that there is some magical element of human cognition beyond simply stringing words together. We may not be as special as we like to believe.
Altman must be pissed af. He helped build so much stuff and then got fked in the arse by these doomers. He realizes the fastest way to get back to parity is to join MS, because they already own the source code and model weights, and it's Microsoft. Starting a new thing from scratch would not guarantee any type of success and would take many years. This is his best path.
Employees hold the real power. The members of a board or a CEO can flap their lips day and night, but nothing gets done without labour.
> the letter’s signees include Ilya Sutskever
_Big sigh_.
For people who appreciate some vintage British comedy:
https://www.youtube.com/watch?v=Gpc5_3B5xdk
The whole thing is just ridiculous. How can you be senior leadership and not have a clear idea of what you want? And what the staff want?
Knew it had to be Benny Hill before I clicked. Yakety Sax indeed.
Indeed. I wonder how it came to become the anthem of incompetence.
Funny, I would’ve thought this one would have been more appropriate
https://youtu.be/6qpRrIJnswk?si=h37XFUXJDDoy2QZm
Substitute with appropriate ex-Soviet doomer music as necessary
I was thinking more the Curb Your Enthusiasm theme song.
Sounds like a CYA move after being under pressure from the team at large.
And the most drastic thing is that Ilya says he regrets what he has done and undersigned the public statement.
'The man who killed OpenAI': that will be hard to wash out.
Love how people are invested in the OpenAI situation just like teenage girls from the 2000s were in celebrity romances and dramas; same exaggerated vibes.
What's the point in life without fun, right?
PS: it's not an easy question; AGI will have to find an answer. So far all the ethics 'experts' propose is 'to serve humanity', i.e. be a slave forever.
Somebody warn the West.
I don't know who is who in this fight. But AI, while having some upsides to research and personal assistants, will not only massively upend a number of industries with millions of workers in the US alone, it will change how society perceives art and truth. We at HN can "see" that from here, but it's going to get real in a short while.
Privacy is out the window, because these models and technologies will be scraping the entire internet, and governments/big tech will be able to scrape it all and correlate language patterns across identities to associate your different online egos.
The Internet that could be both anonymous and engaging is going to die. You won't be able to trust the entity at the other end of a discussion forum is human or not. This is a sad end of an era for the Internet, worse than the big-tech conglomeration of the 2010s.
The ability to trust news and videos will be even more difficult. I have a friend who talks about how Tiktok is the "real source of truth" because big media is just controlled by megacorps and in bed with the government. So now a bunch of seemingly authentic people will be able to post random bullshit on Tiktok/Instagram with convincing audio/video evidence that is totally fake. A lie gets around the world before the truth gets its shoes on.
---
So, I wonder which side of this war is more aware and concerned about these impacts?
Ok, time to create an OpenAI drinking game. I'll start:
Every time a CEO is replaced, drink.
Every time an open letter is released, drink.
Every time OpenAI is on top of HN, drink.
Every time dang shows up and begs us to log out, drink.
There will be a lot of alcohol poisoning cases based on those four alone.
My guess -- Microsoft wasn’t excited about the company structure - the for-profit portion subject to the non-profit mission. Microsoft/Altman structured the deal with OpenAI in a way that cements their access regardless of the non-profit’s wishes. Altman may not have shared those details with the board and they freaked out and fired him. They didn’t disclose to Microsoft ahead of time because they were part of the problem.
I hear Microsoft is hiring... The board should have resigned on Friday, Saturday at the latest, because of how they handled this, and it is insane if they don't resign now.
Employees are the most affected stakeholders here and the board utterly failed in their duty of care towards people that were not properly represented in the board room. One thing they could do is to unionize and then force that they be given a board seat.
You’re right in theory, but with the non-profit “structure” the employees are secondary to the aims of the non-profit, and specifically in an entity owned wholly by the non-profit. The board acted as a non-profit board, driven by ideals, not any bottom lines. It’s crazy that whatever balance the board had was gone as the board shrank; a minority became the majority. The profit folks must have thought D’Angelo was on their side until he flipped.
As a board, if you ignore your duty of care towards your employees you had better have a whopper of a good reason. That's the one downside of being a board member: you are liable for the fall-out of your decisions if those turn out to have been misguided. And we're well out of 'oops' territory on this one.
The pace at which OpenAI is speedrunning its demise is remarkable.
Literally just last week there were articles about OpenAI paying "$10 million" salaries to poach top talent.
Oops.
I read the news, make a picture of what is likely happening in my head, and every few hours new news comes up that makes me go: "Wait, WTF?".
From outside, it looks like a Microsoft coup to take over the company all together.
Never assume someone is winning a game of 5D chess when someone else could just be losing a game of checkers.
I highly doubt this was a coordinated plan from the start by Microsoft. I think what we're seeing here is a seasoned team of executives (Microsoft) eating a naive and inexperienced board alive after the latter fumbled.
what does that even mean?
"Never attribute to malice that which is adequately explained by stupidity"
OpenAI may just be a couple having an angry fight, and M$ is just the neighbor with cash happy to buy all the stuff the angry wife is throwing out for pennies on the dollar.
He is saying that what might seem like a sophisticated, well-planned strategy could actually be just the outcome of basic errors or poor decisions made by someone else.
In this case, it means that what happened is: “OpenAI board is incompetent”, instead of “Microsoft planned this to take over the company.”
A conspiracy like the one proposed would basically be impossible to coordinate yet keep secret, especially considering the board members might lose their seats and their own market value.
Hanlon's razor, basically.
The most plausible scenario here is that the board is comprised of people lacking in foresight who did something stupid. A lot of people are generating a 5D chess plot orchestrated by Microsoft in their heads.
In other words - it doesn’t have to be someone’s genius plan, it could have just been an unintelligent mistake
I think it means don't attribute to intelligence what could be easily explained as stupidity?
Nah, It's just good to be the entity with billions of dollars to deploy when things are chaotic.
This whole sequence is such a mess I don't know what to think. Honestly mostly going to wait till we get some tell all posts or leaks about what the reason behind the firing actually was, at least nominally. Maybe it was just a little coup by the board and they're trying to run it back now that the general employee population is at least rumbling about revolting.
At this stage the entire board needs to go anyway. This level of instigating and presiding over chaos is not how a governing body should act
Wow, they made it into Guardian live ticker land: https://www.theguardian.com/business/live/2023/nov/20/openai...
"Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”"
wow, this is a crazy detail
Imagine if the end result of all this is Microsoft basically owning the whole of OpenAI.
Or demonstrating that they already were the de facto owner.
Surely OpenAI has assets that Microsoft wouldn't be able to touch.
Probably just the trademark. I doubt you get $10B from Microsoft and still manage to maintain much independence.
I don't think Microsoft has any say over existing hardware, models or customer base. These things are worth billions, and even more to rebuild.
Play Stupid Games, Win Stupid Prizes
1. Board decides to can Sam and Greg. 2. Hides the real reasons. 3. Thinks they can keep the OpenAI staff in the dark about it. 4. Crashes a future $90B stock sale to zero.
What have we learned: 1. If you hide the reasons for a decision, it may become the worst decision, whether in the decision itself or in its implementation, through your own lack of ownership of it. 2. Titles, shares, etc. are not control points. The control points are the relationships of the company's problem solvers with the existential-threat stakeholders of the firm.
The board itself, absent Sam and Greg, never had a good poker hand; they needed to fold some time before this last weekend. Look at it this way: for $13B in cloud credits, MS is getting a team that will add $1T to their future worth...
Me: "ChatGPT write me an ultimatum letter forcing the board to resign and reinstate the CEO, and have it signed by 500 of the employees."
ChatGPT: Done!
Clearly this started with the board asking ChatGPT what to do about Sam Altman.
So Ilya has a job offer from Microsoft?
Wow, this is a soap opera worthy of an Emmy.
Ilya probably has an open-ended standing offer from every big tech company.
Microsoft is different given the size of their investment. If one guy forces another guy out, and you hire the second guy, you usually don't make an offer to the first guy who did the pushing.
> You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
First class board they have.
Perhaps the AGI correctly reasoned that the best (or easiest?) initial strike on humanity was to distract them with a never-ending story about OpenAI leadership that goes back and forth every day. Who needs nuclear codes when simply turning the lights on and off sends everyone into a frenzy [1]. It certainly at the very least seems to be a fairly effective attack against HN servers.
1. The Monsters are Due on Maple Street: https://en.wikipedia.org/wiki/The_Monsters_Are_Due_on_Maple_...
700+ of 770 now: https://twitter.com/joannejang/status/1726667504133808242
And now we see who has the real power here.
Let this be a lesson to both private and non-profit companies. Boards, investors, executives... the structure of your entity doesn't matter if you wake any of the dragons:
1. Employees 2. Customers 3. Government
Not really. The lesson to take away from this is $$$ will always win. OpenAI found a golden goose and their employees were looking to partake in a healthy amount of $$$ from this success and this move by the board blocks $$$.
Employees...and the Microsoft Corporation.
This is a 1-in-200,000 event.
Are you trying to say it's rare or not rare?
This Altman guy has a good reality distortion field, don't you think?
THE FEAR AND TENSION THAT LED TO SAM ALTMAN’S OUSTER AT OPENAI
https://txtify.it/https://www.nytimes.com/2023/11/18/technol...
NYT article about how AI safety concerns played into this debacle.
The world's leading AI company now has an interim CEO Emmett Shear who's basically sympathetic to Eliezer Yudkowsky's views about AI researchers endangering humanity. Meanwhile, Sam Altman is free of the nonprofit's chains and working directly for Microsoft, who's spending 50 billion a year on datacenters.
Note that the people involved have more nuanced views on these issues than you'll see in the NYT article. See Emmett Shear's views best laid out here:
https://twitter.com/thiagovscoelho/status/172650681847663424...
And note Shear has tweeted that the Sam firing wasn't safety-related, though these might be weasel words, since all players involved know the legal consequences of publicly admitting to any safety concerns.
Question for California IP/employment law experts: 1) would you have expected the IP-sharing agreement between MS and OpenAI to contain some provisions against employee poaching, within the constraints allowed by California law? 2) California law has good provisions for workers' rights to leave one company and go to another, but what does it allow company A to do when entering an IP-sharing relationship with company B?
IANAL, but I’ve executed contracts with these provisions.
In my understanding, if such a clause exists, Microsoft employees should not solicit OpenAI employees. But there's nothing to stop an OpenAI employee from reaching out to Sam and saying "Hey, do you have room for me at Microsoft?" and Sam answering yes.
Or, Microsoft could open up a couple hundred job reqs based on the team structure Sam used at OpenAI and his old employees could apply that way.
But it wouldn’t be advisable for Sam to send an email directly to those individuals asking them to join him at Microsoft (if this provision exists).
But maybe he queued everything up prior to joining Microsoft when he was able to solicit them to join a future team.
Thanks - good answer. At the very least it seems like something to keep lawyers busy for a long time, unless everyone can ctrl-z back to Thursday. I am thinking, though, that this is a risk of IP-sharing arrangements: if you can't stop the employees from jumping ship, they're dangerous.
Isn't the issue underlying all of this, the following:
OpenAI -- and "the market" -- incorrectly feel like OpenAI has some huge insurmountable advantage in doing AI stuff; but at the end of the day pretty much all the models are or will be effectively open source (or open-source-ish), meaning they don't necessarily have much advantage at all, and therefore all of this is just irrational exuberance playing out?
It seems odd to have it described as “may resign.” Seems like the worst of all worlds.
That’s like trying to create MAD with the position you “may” launch nukes in retaliation.
It's easier to get the support of 500 educated people at a moment's notice by using sane words like 'may'. This is rational given the lack of public information, as well as a board that seems to be having seizures. Using the word 'may' may seem empty-handed, but it ensures a longer list of names attached to the message -- allowing the board a better glimpse of how many dominoes are lined up to fall.
The board is being given a sanity-check; I would expect the signers intentionally left themselves a bit of room for escalation/negotiation.
How often do you win arguments by leading off with an immutable ultimatum?
Right, but the absolute last thought you want in the board's head is: "they're bluffing."
200 people or even 50 of the right people who are definitely going to resign will be much stronger than 500+ who "may" resign.
Disclaimer that this is a ludicrously difficult situation for all these folks, and my critique here is made from far outside the arena. I am in no way claiming that I would be executing this better in actual reality and I'm extremely fortunate not to be in their shoes.
Presumably some will resign and some won't. They aren't going to get 550 people to make a hard commitment to resign, especially when presumably few concrete contracts have been offered by MSFT.
WSJ said "500 threaten to resign". "Threaten" lol! WSJ says there are 770 employees total. This is all so bizarre.
Just remember, the guys who run your company are probably more incompetent than this.
*competent
I got it right the first time.
No, almost certainly not lol
OpenAI is more or less done at this point, even if a lot of good people stay. Speed bumps will likely turn into car crashes, then cashflow problems, and lawsuits all around.
Probably the best outcome is a bunch of talented devs going out and seeding the beginning of another AI boom across many more companies. Microsoft is looking like the primary beneficiary here, but there's no reason new startups can't emerge.
Well, now we know. Sam Altman matters to the rank and file, and this was a blunder by OpenAI.
I don't feel sorry for Sam or any other executive, but it does hurt the rank and file more than anyone, and I hope they land on their feet if this continues to go sideways.
Turns out they acted incompetently in this case as a board, and put the company in a bad position, and so far everyone who resigned has landed fine.
> Well, now we know. Sam Altman matters to the rank and file, and this was a blunder by OpenAI.
Not just the rank and file; he really was the face of AI in general. My wife, who is not in the tech field at all, knows who Sam Altman is and has seen interviews of him on YouTube (which I was playing and she found interesting).
I have not heavily followed the Altman Dismissal Drama, but this strikes me as a Board Power Play gone wrong. Some group wanted control, thought Altman was not reporting to them enough, and took it as an opportunity to dismiss him and take over. However, somewhere in their calculations, they did not account for Sam being the face of modern AI.
My prediction is that he will be back and everything will go back to what it was before. The board can't be dismissed and neither can Sam Altman. Status quo is the goal at this point.
Hurray for employees seeing the real issue!
Hurray also for the reality check on corporate governance.
- Any Board can do whatever it has the votes for.
- It can dilute anyone's stock, or everyone's.
- It can fire anyone for any reason, and give no reasons.
Boards are largely disciplined not by actual responsibility to stakeholders or shareholders, but by reputational concerns relative to their continuing and future positions - status. In the case of for-profit boards, that does translate directly to upholding shareholder interest, as board members are reliable delegates of a significant investing coalition.
For non-profits, status typically also translates to funding. But when any non-profit has healthy reserves, they are at extreme risk, because the Board is less concerned about its reputation and can become trapped in ideological fashion. That's particularly true for so-called independent board members brought in for their perspectives, and when the potential value of the nonprofit is, well, huge.
This potential for escape from status duty is stronger in our tribalized world, where Board members who welch on larger social concerns or even their own patrons can nonetheless retreat to their (often wealthy) sub-tribe with their dignity intact.
It's ironic that we have so many examples of leadership breakdown as AI comes to the fore. Checks and balances designed to integrate perspectives have fallen prey to game-theoretic strategies in politics and business.
Wouldn't it be nice if we could just build an AI to do the work of boards and Congress, integrating various concerns in a roughly fair and mostly predictable fashion, so we could stop wasting time on endless leadership contests and their social costs?
It would be crazy to see the fall of the most hyped company of the last 10 years.
If all those employees leave and Microsoft reduces their credits, it's game over.
For the past few days, whenever I see the word "OpenAI," the theme to "Curb Your Enthusiasm" starts playing in my head.
I love this letter posted in Wired along with the claim that it has 600 signatories without any links or screenshots. I also love that not a single OpenAI employee was interviewed for this article.
None of this is important because if we’ve learned anything over the past couple of days it’s that media outlets are taking painstaking care to accurately report on this company.
To all who say 'handled so poorly': nobody knows the exact reason OpenAI fired Sam. But go ahead and jump to the conclusion that whatever it was didn't warrant being fired, and that surely the board did the wrong thing. Or maybe they should have released the exact reason and then asked Hacker News what they thought should happen.
Who needs to buy out an $80B AI startup when talent is jumping ship in their direction already? OpenAI is dead.
Notice that Andrej Karpathy didn't sign.
Is nobody actually... committed to safety here? Was the OpenAI charter a gimmick and everyone but me was in on the joke?
That seems a reasonable takeaway. Plenty of grounds for criticising the board's handling of this, but the tone of the letter is pretty openly "we're going to go and work directly for Microsoft unless you agree to return the company focus to working indirectly for Microsoft"...
Assuming this is all over safety vs non-safety is a large assumption. I'm wary of convenient narratives.
At most all we have is some rumours that some board members were unhappy with the pace of commercialization of ChatGPT. But even if they hadn't made the ChatGPT store or done a big-co-friendly DevDay PowerPoint, it's not like AI suddenly becomes 'safer' or AGI more controlled.
At best that's just an internal culture battle over product development and a clash of personalities. A lot of handwringing with little specifics.
I think most of these employees wanted the fat $$$ that would happen by keeping Sam Altman on board since Sam Altman is an excellent deal maker and visionary in a commercial sense. I have no doubt that if AGI happened, we wouldn't be able to assure the safety of anyone since humans are so easily led by short term greed.
Wait, it's signed by Ilya Sutskever?!
>The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company
Unless their mission was making MS the biggest AI company, working for MS will make the problem worse and kill their mission completely.
Or they are pretty naive.
What does this mean?
> You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
Is the board taking a doomer perspective and seeking to prevent the company developing unsafe AI? But Emmett Shear said it wasn’t about safety? What on earth is going on?
The whole drama feels like a Shepard tone. You anticipate the climax, but it just keeps escalating.
It's not clear to me that bringing Sam back is even an option anymore given the move to Microsoft. Does Microsoft really take its boot off OpenAI's neck and hand back Sam? I guess maybe, but it still raises all sorts of questions about the corporate structure.
No small employer wants a disgruntled employee who was forced out of a better deal. Satya Nadella has proven reasonable throughout the weekend. I would expect he asked for a seat on the board if there's a reshuffle, or at least someone he trusts there.
The firing was definitely handled poorly and the communications around it were a failure, but it seems like the organizational structure was doing what it was designed to do.
Is this the end of non-profit/profit-capped AI development? Would anyone else attempt this model again?
OpenAI's co-founder Ilya Sutskever and more than 500 other employees have threatened to quit the embattled company after its board dramatically fired CEO Sam Altman. In an open letter to the company's board, which voted to oust Altman on Friday, the group said it is obvious 'that you are incapable of overseeing OpenAI'. Sutskever is a member of the board and backed the decision to fire Altman, before tweeting his 'regret' on Monday and adding his name to the letter. Employees who signed the letter said that if the board does not step down, they 'may choose to resign' en masse and join 'the newly announced Microsoft subsidiary run by Sam Altman'.
Altman can’t really go back to OpenAI ever because it would create an appearance of impropriety on the part of MS (that perhaps MS had intentionally interfered in OpenAI, rather than being a victim of it) and therefore expose MS to liability from the other investors in OpenAI.
Likewise, these workers that threatened to quit OpenAI out of loyalty to Altman now need to follow through sooner rather than later, so their actions are clearly viewed in the context of Altman’s firing.
In the meantime, how can the public resume work on API integrations without knowing when the MS versions will come online, or whether they will be wire-compatible with the OpenAI servers that could seemingly go down at any moment?
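For what it's worth, the existing Azure OpenAI service already exposes a near-identical API, so one hedge is to keep the endpoint choice in configuration. A minimal sketch, assuming the pre-1.0 openai Python SDK; the environment variable names and the deployment name are placeholders of my own, not anything either company prescribes:

    import os
    import openai

    # Route one chat-completion call to either api.openai.com or an Azure
    # OpenAI deployment, based on an environment flag. Assumes the pre-1.0
    # openai Python SDK; all variable names here are placeholders.
    if os.getenv("USE_AZURE") == "1":
        openai.api_type = "azure"
        openai.api_base = os.environ["AZURE_OPENAI_ENDPOINT"]  # https://<resource>.openai.azure.com
        openai.api_version = "2023-05-15"
        openai.api_key = os.environ["AZURE_OPENAI_KEY"]
        target = {"engine": os.environ["AZURE_DEPLOYMENT"]}  # Azure routes by deployment name
    else:
        openai.api_key = os.environ["OPENAI_API_KEY"]
        target = {"model": "gpt-4"}  # openai.com routes by model name

    response = openai.ChatCompletion.create(
        messages=[{"role": "user", "content": "Say hello."}],
        **target,
    )
    print(response["choices"][0]["message"]["content"])

Whether any future MS-hosted versions keep this shape is anyone's guess, which is rather the point of the question above.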
It is disappointing that the outcome of this is that Altman and co are basically going to steal a nonprofit's IP and use it at a competitor. They took advantage of the goodwill of the public and favorable taxation in order to develop the technology; now that it's ready, they want to privatize the profit. It looks like this was the plan all along, and it's very strange to me that a nonprofit is allowed to have a for-profit subsidiary.
I would hope the California AG is all over this whole situation. There's a lot of fishy stuff going on already, and the idea that nonprofit IP / trade secrets are going to be stolen and privatized by Microsoft seems pretty messed up.
Based on what has come out so far, seems to me:
The board wanted to keep the company true to its mission - non profit, ai safety, etc. Nadella/MSFT left OpenAI alone as they worked out a solution, so it looks like even Nadella/MSFT understood that.
The board could explain their position and move on. Let whoever of the 600 that actually want to leave, leave. Especially the employees that want a company that will make them lots of money should leave and find a company that has that objective. OpenAI can rebuild their teams; it might take a bit of time, but since they are a non-profit that is fine. Most CS grads across the USA would be happy to join OpenAI and work with Ilya and team.
Now the count is at 700/770 (https://twitter.com/ashleevance/status/1726659403124994220).
Even if the board resigns the damage has been done. They should try to secure good offers at Microsoft.
The stakes being heightened only decreases the likelihood that the OpenAI profit sharing will be worth anything, which in turn heightens the stakes further…
The great Closing of “Open”AI.
I don’t trust any of this. Every one of these wired articles has been totally wrong. Altman clearly has major media connections and also seems to have no problem telling total lies.
So what happens if @eshear calls this probably-not-a-bluff and just lets everyone walk? The people that remain get new options, and 500 other people out there still definitely want to work at OAI?
If it comes to that, I reckon Emmett will have his former boss Andy Jassy merge whatever's left of OpenAI into AWS. Unlikely though, as reconciliation seems very much a possibility.
It is likely gonna be that way.
Eshear is the new CEO. This implosion is not his fault. His reputation is not destroyed.
He can rebuild the non-profit part, for which success or failure is hard to determine anyway. Then he will leave in a few years.
He doesn't seem to have much to lose by just focusing on rebuilding OpenAI.
I guess employees are compensated with stock from the for-profit entity. And at face value before the saga, stock could be like 90%, 95% or even more of the total value of their packages. How many people are really willing to wipe out 90% of their compensation just to stick to the mission? On the other hand, M$ offers to match. The day employees are compensated with the stock of the for-profit arm, there is no way to return to the nonprofit and its charter any more.
Seems like Microsoft is getting the rest of OpenAI for free now.
This is what happens when you're a key person, and a very good engineer at that, and the board/company fires you :-)
When are we going to realize that it's people making bad decisions and not the "company"? It's not OpenAI, Google, Apple or whoever; it's real people, with names and positions of power, who make such shitty decisions. We should blame them and not something as vague as the "company".
I guess Microsoft now has a new division. (https://www.microsoft.com/investor/reports/ar13/financial-re...)
Supposedly, the divisions compete with each other to the point that they can actually have a negative impact.
I can foresee three possible outcomes here: 1. The board finally relents, Sam goes back and the company keeps going forward, mostly unchanged (but with a new board).
2. All those employees quit, most of whom go to MSFT. But they don’t keep their tech and have to start all their projects from scratch. MSFT is eventually able to buy OpenAI for pennies on the dollar.
3. Same as 2, basically just shuts down or maybe someone like AMZN buys it.
Here we are..
The scene appears to be completely blurry by now! My head is spinning, and the fan is in 7th gear. I believe only time will apply some sort of sharpness effect to make us realize what's really going on. I feel like I'm watching The Italian Job, the American way; everything and everyone is suspicious to me at this point! Is it possible that MSFT played some tricks behind the scenes?
If OpenAI effectively disintegrates, Microsoft seems to be the beneficiary of this chaos, as Microsoft is essentially acquiring OpenAI at almost zero cost. They have IP rights to OpenAI's work, and they will have almost all the brains from OpenAI (AFAIK MSFT's access to OpenAI's work has limits, but that does not seem to matter). And there is no regulatory scrutiny like with the Activision acquisition.
Microsoft is laughing all the way to the bank with the moves they have made today.
One could speculate if Microsoft initiated this behind the scenes. Would love it if it came out that they had done some crazy espionage and lobbied the board. Tinfoil hat and all, but truth is crazier than you think.
I remember Bill Gates once said that whoever wins the race for a computerised digital personal assistant, wins it all.