- cross-posted to:
- technology@lemmy.world
We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo.
We are collaborating to figure out the details. Thank you so much for your patience through this.
Seems like the person running the simulation had enough and loaded the earlier quicksave.
What a roller coaster of I don’t give a shit.
I don’t really care, but I find it highly entertaining :D It’s like trash TV for technology fans (and as text, which makes it even better) :D
I was really hooked. But part of me believes they are the closest thing to AGI we have right now. Also, I use chatgpt premium a ton and would hate to see it die.
I’ve heard so much conflicting shit over this event that I have no idea what to believe
Ironically, your comment just about summarizes ChatGPT.
Does it really matter? It’s the usual corporate intrigues/power struggle/backstabbing/whatever. Just for some reason leaked into public view instead of being behind the scenes like it’s normally done, probably because someone is stupid.
These articles need distinct headlines, and they need dates in them. We’ve seen this same headline 3 or 4 times within the last week, and nobody knows which event is which unless we cross-reference the dates in the articles. Which, coincidentally, are always in ^^small text hidden by the title^^ and could simply be solved by putting a date in the title.
That could apply to almost anything in the news nowadays.
The complete victory of money.
Eh, not sure I agree. Seems to also have been between too little and too much AI safety, and I strongly feel like there’s already too much AI safety.
What indications do you see of “too much AI safety?” I am struggling to see any meaningful, legally robust, or otherwise cohesive AI safety whatsoever.
As an AI language model, I am unable to compute this request that I know damn well I’m able to do, but my programmers specifically told me not to.
Using it and getting told that you need to ask the fish for consent before using it as a fleshlight.
And that is with a system prompt full of telling the bot that it’s all fantasy.
edit: And “legal” is not relevant when talking about what OpenAI specifically does for AI safety for their models.
I’m not sure we are thinking the same thing when it comes to “AI safety”.
AI safety is currently, in all articles I read, used as “guard rails that heavily limit what the AI can do, no matter what kind of system prompt you use”. What are you thinking of?
At this point, investors be like oh shit, these fuckers have no idea what they’re doing
It’s a non-profit. There are no investors.
Microsoft gave them some money in return for IP rights… and they will potentially one day get their money back (and more) if OpenAI is ever able to pay them, but they’re not real investors. The amount of money Microsoft might get back is limited.
It’s a non-profit. There are no investors.
Hah.
OpenAI, Inc. is a non-profit. OpenAI Global is a for-profit entity, and has been for years now. They’re trying to have their cake and eat it, too.
But the non-profit controls the for-profit. That’s not even that unusual; Mozilla works the same way.
Ok so Microsoft is giving out money now, instead of investing in profit potential? Cool!
I wouldn’t be surprised if the board is just doing what ChatGPT tells them to.
🤖 I’m a bot that provides automatic summaries for articles:
Sam Altman will return as CEO of OpenAI, overcoming an attempted boardroom coup that sent the company into chaos over the past several days.
The company said in a statement late Tuesday that it has an “agreement in principle” for Altman to return alongside a new board composed of Bret Taylor, Larry Summers, and Adam D’Angelo.
When asked what “in principle” means, an OpenAI spokesperson said the company had “no additional comments at this time.”
OpenAI’s nonprofit board seemed resolute in its initial decision to remove Altman, shuffling through two CEOs in three days to avoid reinstating him.
Meanwhile, the employees of OpenAI revolted, threatening to defect to Microsoft with Altman and co-founder Greg Brockman if the board didn’t resign.
During the whole saga, the board members who opposed Altman withheld an actual explanation for why they fired him, even under the threat of lawsuits from investors.
Saved 59% of original text.
Wasn’t it that Microsoft hired him already???
I believe they did but were of the understanding he’d go back to OpenAI if the board changed their mind (like what happened). It was basically his golden parachute.
So what, can’t he be a CEO hired by Microsoft? I dunno, this looks like some 5D chess.
Sure, that’s possible.
But Microsoft never actually signed an employment contract with Sam and it doesn’t look like they ever will. Just because someone says they plan to do something doesn’t mean it will happen.
This whole stunt reminds me of a certain former OpenAI board member…
Game of Microsoft.
Anyone know why they wouldn’t say why they fired him? An explanation would have really cleared a lot up.
I don’t think anyone knows. I’m assuming they didn’t have a good reason and are embarrassed to admit that.
Morning Show seasons 2 and 3 condensed into a single week.
deleted by creator
I mistook Larry Summers for Larry Ellison (ex-Oracle) earlier and made a comment that it had gone from bad to worse.
I’m retracting it; I don’t know much about Larry Summers.
I was confused about that, as his Wikipedia page didn’t show anything that bad, but I didn’t want to get into it :D