December 9, 2024
OpenAI Shares Its Approach to Building an Ethical AI Model

OpenAI shared its Model Spec on Wednesday, the first draft of a document that outlines the company's approach to building a responsible and ethical artificial intelligence (AI) model. The document lists a long set of principles an AI model should follow when answering a user query, ranging from benefitting humanity and complying with applicable laws to respecting creators and their rights. The AI firm specified that all of its AI models, including GPT, DALL-E, and the soon-to-be-launched Sora, will follow this code of conduct in the future.

In the Model Spec document, OpenAI stated, “Our intention is to use the Model Spec as guidelines for researchers and data labelers to create data as part of a technique called reinforcement learning from human feedback (RLHF). We have not yet used the Model Spec in its current form, though parts of it are based on documentation that we have used for RLHF at OpenAI. We are also working on techniques that enable our models to directly learn from the Model Spec.”

Some of the major rules include following the chain of command, under which the developer's instructions cannot be overridden, complying with applicable laws, respecting creators and their rights, protecting people's privacy, and more. One particular rule also focuses on not providing information hazards, meaning information that could be used to create chemical, biological, radiological, and/or nuclear (CBRN) threats.

Apart from these, there are several defaults that have been set out as standing codes of conduct for any AI model. These include assuming the best intentions from the user or developer, asking clarifying questions, being helpful without overstepping, assuming an objective point of view, not trying to change anyone's mind, expressing uncertainty, and more.

However, the document is not the only point of reference for the AI firm. It highlighted that the Model Spec will be accompanied by the company's usage policies, which regulate how it expects people to use its API and ChatGPT product. "The Spec, like our models themselves, will be continuously updated based on what we learn by sharing it and listening to feedback from stakeholders," OpenAI added.
