Lessons from the Coup at OpenAI

OpenAI CEO Sam Altman speaking at TechCrunch Disrupt NY 2014. Photo by TechCrunch

Just ahead of Thanksgiving last year, one of the most important tech organizations faced a nearly destabilizing coup. The executive board of OpenAI, the company behind the artificial intelligence (AI) system ChatGPT, decided to oust CEO Sam Altman. In a cryptic statement, OpenAI blamed Altman for being “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” The firing reveals a dissonance between what OpenAI once stood for and the direction it now seems to be headed. But the disconnect began when Altman decided to run OpenAI more like a business than a non-profit organization, prioritizing speed and competition over safety and users’ best interests. The subsequent threat from 650 of its employees to resign unless Altman was reinstated confirmed the rift between the board and the rest of the company. After Altman was ultimately reinstated a week later, the majority of the executive board, which represented OpenAI’s non-profit arm, decided to resign.

OpenAI’s purported mission is to develop AI “in the best interests of humanity.” One of the executive board members, Helen Toner, believed that firing Altman would “be consistent with the mission.” Toner, who had indirectly criticized Altman for “stoking the flames of AI hype” and rushing the release of ChatGPT, arguably set the stage for the shakeup at OpenAI. Though it is not known exactly why the majority of the executive board resigned, the general sentiment reflected uncertainty about OpenAI’s future with Altman running it. The situation that unfolded at OpenAI represented a divide between worries about the speed at which AI systems were being developed and released to the public, and the pressure to pursue one of the biggest business opportunities tech has seen in a generation. OpenAI’s release of ChatGPT at the end of 2022 marked the beginning of an AI arms race among big tech companies, but the breakneck pace also proved that these systems still needed fine-tuning.

However, it is erroneous to believe that tech giants’ desire to invest in the development of generative AI and the public’s concerns about these systems are mutually exclusive. Concerns about generative AI, including bias and inaccuracy, are shared by the public and tech giants alike. The prospect of ChatGPT and similar AI bots becoming an extremely profitable business is evident. What happened at OpenAI should be a lesson that AI systems such as ChatGPT are still relatively new and that their future remains unstable given the constantly changing nature of the organizations that control them. Federal government intervention is necessary to provide comprehensive regulations on what these AI systems are able to do.

At the end of October, the Biden Administration signed an executive order on the “safe, secure, and trustworthy development and use of artificial intelligence.” The executive order is the first of its kind to recognize the speed at which AI capabilities are advancing, and it takes steps to protect civil rights and privacy. This is a step in the right direction, and it sets a precedent that the federal government is willing to act on the development and use of AI. Two further areas of regulation must also be addressed for generative AI systems: compliance with copyright law and the explicit labeling of these systems’ functions.

Generative AI models work by ingesting training data, which essentially teaches these systems how to function. Good training data is imperative to the success of these AI models, and Biden’s executive order calls for it to be handled in ways that respect consumer privacy. That protection must also extend explicitly to copyright law, which requires continuous testing of and human oversight over what goes into these models. Given the risk of widespread copyright infringement, AI companies must be required to train their chatbots only on legally acquired datasets. This issue came to light when Sarah Silverman and other authors sued OpenAI, alleging that its models were trained on illegally acquired datasets containing their works. Silverman alleges that OpenAI acquired her book from “shadow library” websites such as Z-Library, which host thousands of books online in complete disregard of copyright law. These regulations are necessary not only to protect the integrity of intellectual property but also to prevent users themselves from plagiarizing. If AI systems’ training data is not consistently regulated and monitored, copyright lawsuits will multiply, a crucial concern for big tech companies. Training data that respects copyright law is imperative to generative AI models’ success; poor-quality datasets, such as those illegally acquired from websites like Z-Library, undermine the credibility of these AI systems.

These regulations should also require companies to explicitly define generative AI’s proper usage. For example, Microsoft, on GitHub, its platform for software developers, used the analogy of a “co-pilot” to denote what the tool can be used for. A copilot “serves as an expert helper to a user trying to accomplish a task,” stated Kevin Scott, the Chief Technology Officer at Microsoft. AI chatbots are meant to function like Mad Libs or an autocomplete tool: the large language models behind them analyze massive amounts of text from the web and predict what should come next in a sequence. Executive board member Helen Toner says to think of these bots like “improv machines.”
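To make the autocomplete analogy concrete, here is a minimal illustrative sketch of next-word prediction. It is not how OpenAI’s models actually work; the tiny corpus and the function name are invented for illustration, and real large language models use neural networks trained on vastly more text. Still, the principle is the same: learn which words tend to follow which, then output the most likely continuation.

```python
from collections import Counter, defaultdict

# Toy "training data": the model only ever sees this short text.
corpus = "the board fired the ceo and the employees backed the ceo".split()

# Count which word follows which -- a crude stand-in for the statistical
# patterns that language models learn from massive amounts of web text.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def autocomplete(word):
    """Return the most common continuation seen in training, like autocomplete."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(autocomplete("the"))  # prints 'ceo', the word that most often followed 'the'
```

The sketch also shows the limitation the analogy implies: the system can only echo patterns present in its training data, which is why the quality and legality of that data matter so much.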

Our desire to anthropomorphize these technologies stems from not knowing exactly what AI chatbots are meant to do, which is why it is essential for companies to explicitly state their functions and limitations. Doing so would ensure the public is aware of these chatbots’ limitations. Rather than trying to use ChatGPT as a wholesale replacement for search engines or calculators, users would become more familiar and comfortable with what these AI systems can actually do. This would also allow big tech companies to invest in further development with more confidence, knowing that the public understands what these AI technologies can be used for and where their limits lie.

National regulations should be in place to ensure that AI models are rigorously tested by humans and that their limits are made clear to users. Given the instability within the private sector and the power that AI gives to big tech companies, the government needs to establish safeguards to prevent companies from making rash decisions that undermine public safety.

Julianna Lozada is a staff writer at CPR and a senior at Columbia in the dual degree with Sciences Po. She is studying human rights with a specialization in Middle Eastern studies and a special concentration in sustainable development. You can probably find her creating WBAR playlists in Milstein or taking power naps on Butler lawn.