OpenAI Exits: What’s Happening Over There?

Nothing succeeds like success, but in Silicon Valley nothing raises eyebrows like a steady trickle out the door.

The exit of OpenAI’s chief technology officer Mira Murati, announced on Sept. 25, has set Silicon Valley tongues wagging that all is not well in Altmanland — especially since sources say she left because she’d given up on trying to reform or slow down the company from within. Murati was joined in her departure from the high-flying firm by two top science minds, chief research officer Bob McGrew and researcher Barret Zoph (who helped develop ChatGPT). None of the three is leaving for an immediately known opportunity.

The drama is both personal and philosophical — and goes to the heart of how the machine-intelligence age will be shaped.

It dates back to November, when a mix of Sam Altman’s allegedly squirrelly management style and safety questions about a top-secret project called Q* (later renamed Strawberry and released last month as o1) prompted some board members to try to oust the co-founder. They succeeded — but only for a few days. The 39-year-old face of the AI movement was able to regain control of his buzzy company, thanks in no small part to Satya Nadella’s Microsoft, which holds a 49 percent stake in OpenAI’s for-profit arm and didn’t want Altman going anywhere.

The board was shuffled to be more Altman-friendly, and several directors who opposed him were forced out. A top executive wary of his motives, OpenAI co-founder and chief scientist Ilya Sutskever, would also eventually leave. Sutskever was concerned about Altman’s “accelerationism” — the idea of pushing ahead on AI development at any cost. He exited in May, though a person who knows him tells The Hollywood Reporter he had effectively stopped being involved with the firm after the failed November coup. (Sutskever more than landed on his feet — he has since raised $1 billion for a new AI safety company.)

Sutskever and another high-level staffer, Jan Leike, had run a “superalignment” team charged with forecasting and avoiding dangers. Leike left at the same time as Sutskever, and the team was dissolved. Like several other employees, Leike has since joined Anthropic, the OpenAI rival widely seen as more safety-conscious.

Murati, McGrew and Zoph are the latest dominoes to fall. Murati, too, had been concerned about safety — industry shorthand for the idea that new AI models can pose short-term risks, like hidden bias, and long-term hazards, like Skynet scenarios, and should thus undergo more rigorous testing. (Such risks are deemed particularly acute with the achievement of artificial general intelligence, or AGI, the ability of a machine to problem-solve as well as a human, which could be reached in as little as one to two years.)

But unlike Sutskever, Murati decided after the November drama to stay at the company, in part to try to slow down Altman and president Greg Brockman’s accelerationist efforts from within, according to a person familiar with the workings of OpenAI who asked not to be identified because they were not authorized to speak about the situation.

It’s unclear what tipped Murati over the edge, but the release of o1 last month may have contributed to her decision. The product represents a new approach that aims not only to synthesize information, as many current large language models do (“rewrite the Gettysburg Address as a Taylor Swift song”), but also to reason through math and coding problems like a human. Those concerned with AI safety have urged more testing and guardrails before such products are unleashed on the public.

The flashy product release also comes at the same time as, and in a sense partly as a result of, OpenAI’s full transition to a for-profit company, with no nonprofit oversight and a CEO in Altman who will have equity like any other founder. That shift, which is also conducive to accelerationism, worried many of the departing executives, including Murati, the person said.

Murati said in an X post that “this moment feels right” to step away.

Concerns have grown so great that some ex-employees are sounding the alarm in the most prominent public spaces. Last month William Saunders, a former member of OpenAI’s technical staff, testified before the Senate Judiciary Committee that he left the company because he saw global disaster brewing if OpenAI remained on its current path.

“AGI would cause significant changes to society, including radical changes to the economy and employment. AGI could also cause the risk of catastrophic harm via systems autonomously conducting cyberattacks, or assisting in the creation of novel biological weapons,” he told lawmakers. “No one knows how to ensure that AGI systems will be safe and controlled … OpenAI will say that they are improving. I and other employees who resigned doubt they will be ready in time.” An OpenAI spokesperson did not reply to a request for comment.

Founded as a nonprofit in 2015 — “we’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies,” its mission statement said — OpenAI launched a for-profit subsidiary in 2019. But it has, until now, been controlled by the board of the nonprofit foundation. The decision to remove that nonprofit oversight gives the company more freedom — and incentive — to speed ahead on new products while also potentially making it more appealing to investors.

And investment is crucial: a New York Times report found that OpenAI could lose $5 billion this year. (The costs of both the chips and the power needed to run them are extremely high.) On Wednesday the company announced a fresh round of capital from parties including Microsoft and chipmaker Nvidia totaling some $6.6 billion.

OpenAI must also cut pricey licensing deals with publishers, as lawsuits from the Times and others inhibit the firm’s ability to freely train its models on those publishers’ content.

OpenAI’s moves are giving industry watchdogs pause. “The switch to a for-profit solidified what was already clear: most of the talk about safety was probably just lip service,” Gary Marcus, a veteran AI expert and the author of the newly released book Taming Silicon Valley: How We Can Ensure That AI Works for Us, tells THR. “The company is interested in making money, and not having any checks and balances to ensure that it is safe.”

OpenAI has something of a history of releasing products before the industry thinks they’re ready. ChatGPT itself shocked the tech industry when it came out in November 2022; rivals at Google who had been working on a similar product thought none of the latest LLMs were ready for prime time.

Whether OpenAI can keep innovating at this pace, given the brain drain of the past week, remains to be seen.

Perhaps to distract from the drama and reassure doubters, Altman put out a rare personal blog post last week positing that “superintelligence” — the far-reaching idea that machines can become so powerful they can do everything far better than humans — could happen as soon as the early 2030s. “Astounding triumphs — fixing the climate, establishing a space colony, and the discovery of all of physics — will eventually become commonplace,” he wrote. Ironically, it may have been exactly such talk that made Sutskever and Murati head for the door.
