The Artificial Intelligence (AI) world witnessed a fiasco in November 2023. Sam Altman, CEO of OpenAI, the firm behind ChatGPT, was fired from his job, sparking backlash and raising questions about the safety and direction of AI. Although the reasoning behind the firing remains unclear (OpenAI vaguely stated only that he was not "consistently candid in his communications with the board"), his tendency to get involved in disputes and his growing inattention to the dangers of AI likely contributed to the decision. To get the full picture, let us start from the very beginning.

Ironically, OpenAI was founded in 2015 after co-founders Sam Altman and Elon Musk bonded over their shared concerns about the dangers of Artificial General Intelligence (AGI). The company's chief goal was to ensure that AI was not misused and that it benefited humanity. To fortify this standard, the pair made the company a nonprofit, prioritizing safety over profit. Continuing this mission, they tested their models carefully to ensure there were no major malfunctions.

With the huge success of ChatGPT, OpenAI soon took center stage in the AI industry. However, as tensions mounted, criticism and disputes about the safety of AI began to emerge from within the company. A particularly large conflict took place in 2019 between Musk and Altman over the decision to turn the company into a capped-profit organization in order to attract investment and expand. In a capped-profit organization, investors can receive only a capped return on their investment, and the rest goes back to the organization (at OpenAI, investors can receive up to 100 times their investment). Musk wanted to keep OpenAI a nonprofit to preserve its original priority, while Altman pushed to keep growing the company through profit. In the end, Altman won, and Musk, angered by the decision, resigned from the company.

Another controversy occurred in 2021, when Altman and some of his chief researchers disagreed over the release of ChatGPT. The researchers wanted to release the software in a small, staged way to avoid public spectacle and to test the model on a small pool of people. However, Altman opposed this idea, driving those researchers to leave the company and later found Anthropic, OpenAI's biggest rival. Finally, the last major dispute led to Altman's firing. Helen Toner, an OpenAI board member, had recently released a paper on how companies and governments shape the path of AI through product releases and policies. The piece angered Altman, who claimed it favored Anthropic and criticized OpenAI. The rest of the board initially agreed with Altman and planned to fire Toner but, at the very last minute, decided to fire Altman instead. The board appeared to view the dispute as the final straw, coupled with Altman's history of decision-making for personal benefit throughout his tenure as CEO.

Additionally, the board of OpenAI grew increasingly unsettled by Altman and his intentions. For example, Altman is involved in a mounting number of AI ventures, which raises questions about how OpenAI's intellectual property is used and whether he shares confidential information about the creation of AI with other ventures. It also calls into question whether OpenAI takes priority over his other ventures. Furthermore, Altman has refused to heed warnings about AI spiraling out of control. These include a letter, signed by more than 1,100 prominent Silicon Valley figures such as former co-founder Elon Musk and Apple co-founder Steve Wozniak, calling for a six-month pause in developing AI more advanced than GPT-4. The letter argued that AI labs have been "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control." Many fear that Altman is abandoning the company's original goal of using AI for the good of humanity and focusing on profit instead.

Chaos followed in the firing's wake. On Saturday, November 18th, the day after Altman was fired, Greg Brockman, the president of OpenAI, resigned after losing his position as chairman of the board for siding with Altman, and three senior researchers quit in protest. Two days later, Microsoft, taking advantage of the situation, hired Altman, Brockman, and other former OpenAI employees to start its own artificial intelligence team. Meanwhile, OpenAI's employees expressed their dissatisfaction with the firing through an ultimatum signed by nearly 800 of them (almost the entire staff). The OpenAI board, on the brink of losing the entire company, was forced to reinstate Altman, who then ousted the three board members chiefly responsible for his firing: Helen Toner, Tasha McCauley, and Ilya Sutskever (who stayed on as chief scientist). These former board members were the strongest supporters of OpenAI's original goal of using artificial intelligence for the good of humanity and the harshest critics of Altman's direction in popularizing ChatGPT and potentially sparking an AI arms race. By reducing dissent within the company, Altman gained more agency to expand and control it his way. Once again, he emerged victorious from conflict.

The firing and reinstatement of Altman raise questions about the future of AI. Does Altman's return to OpenAI foreshadow a darker future for the field? Will humanity's natural urge to create be the very cause of its doom? Whatever the answers, the OpenAI fiasco makes one thing clear: artificial intelligence is not going anywhere. On the contrary, under Altman's leadership, it will continue to develop, grow, and further integrate itself into daily life.