
Sam Altman’s comeback to OpenAI

It's been quite a week for ChatGPT-maker OpenAI and co-founder Sam Altman. Altman, who helped start OpenAI as a nonprofit research lab back in 2015, was removed as CEO Friday in a sudden and mostly unexplained exit that stunned the industry. And while his chief executive title was swiftly reinstated just days later, a lot of questions are still up in the air.


The explosion of ChatGPT since its arrival one year ago propelled Altman into the spotlight of the rapid commercialization of generative AI, which can produce novel imagery, passages of text, and other media. Unlike traditional AI, which processes data and completes tasks using predetermined rules, generative AI (including chatbots like ChatGPT) can create something new. Tech companies are still leading the show when it comes to governing AI and its risks, while governments around the world work to catch up.


As Altman became Silicon Valley's most sought-after voice on the promise and potential dangers of this technology, he helped transform OpenAI into a world-renowned startup. But his position at OpenAI was thrown into turmoil over the course of the past week.


Altman was fired as CEO Friday, and days later, he was back on the job with a new board of directors.

Within that time, Microsoft, which has invested billions of dollars in OpenAI and has rights to its existing technology, helped drive Altman's return, quickly hiring him as well as another OpenAI co-founder and former president, Greg Brockman, who quit in protest after the CEO's ousting. Meanwhile, hundreds of OpenAI employees threatened to resign.


"The OpenAI episode shows how fragile the AI ecosystem is right now, including addressing AI's risks," said Johann Laux, an expert at the Oxford Internet Institute focusing on human oversight of artificial intelligence.

Multiple experts add that the drama highlights why governments, not big tech companies, should be calling the shots on AI regulation, particularly for fast-evolving technologies like generative AI. The lesson, they argue, is that companies cannot on their own deliver the level of safety and trust in AI that society needs.

