November 23, 2024
OpenAI Might Have Overlooked Safety and Security Protocols for GPT-4o

OpenAI has been at the forefront of the artificial intelligence (AI) boom with its ChatGPT chatbot and advanced Large Language Models (LLMs), but the company’s safety record has sparked concerns. A new report has claimed that the AI firm is speeding through and neglecting its safety and security protocols while developing new models. The report highlighted that the negligence occurred before OpenAI’s latest GPT-4 Omni (or GPT-4o) model was launched.

Some anonymous OpenAI employees had recently signed an open letter expressing concerns about the lack of oversight around building AI systems. Notably, the AI firm also created a new Safety and Security Committee comprising select board members and directors to evaluate and develop new protocols.

OpenAI Said to Be Neglecting Safety Protocols

However, three unnamed OpenAI employees told The Washington Post that the team felt pressured to speed through a new testing protocol, designed to “prevent the AI system from causing catastrophic harm,” in order to meet a May launch date set by OpenAI’s leaders.

Notably, these protocols exist to ensure the AI models do not provide harmful information such as how to build chemical, biological, radiological, and nuclear (CBRN) weapons or assist in carrying out cyberattacks.

Further, the report highlighted that a similar incident occurred before the launch of GPT-4o, which the company touted as its most advanced AI model. “They planned the launch after-party prior to knowing if it was safe to launch. We basically failed at the process,” the report quoted an unnamed OpenAI employee as saying.

This is not the first time OpenAI employees have flagged an apparent disregard for safety and security protocols at the company. Last month, several former and current staffers of OpenAI and Google DeepMind signed an open letter expressing concerns over the lack of oversight in building new AI systems that can pose major risks.

The letter called for government intervention and regulatory mechanisms, as well as strong whistleblower protections to be offered by the employers. Two of the three godfathers of AI, Geoffrey Hinton and Yoshua Bengio, endorsed the open letter.

In May, OpenAI announced the creation of a new Safety and Security Committee, tasked with evaluating and further developing the AI firm’s processes and safeguards on “critical safety and security decisions for OpenAI projects and operations.” The company also recently shared Model Spec, a new set of guidelines for building responsible and ethical AI models.