December 19, 2024
Salesforce's top AI ethics leader says she's 'optimistic' on the path to U.S. regulation
"I remain optimistic, because I think if you saw a number of the hearings that happened in the Senate, they were largely bipartisan," says Paula Goldman.

BARCELONA — A top executive at Salesforce says she is “optimistic” that the U.S. Congress will soon pass new laws to regulate artificial intelligence.

Speaking with CNBC at the Mobile World Congress tech trade show in Barcelona, Spain, Paula Goldman, Salesforce’s chief ethical and humane use officer, said she’s seeing momentum toward concrete AI laws in the United States and that federal legislation is not far off.

She noted that the need to consider guardrails has become a “bipartisan” issue for U.S. lawmakers and highlighted efforts among individual states to devise their own AI laws.

“It’s very important to ensure U.S. lawmakers can agree on AI laws and work to pass them soon,” Goldman told CNBC. “It’s great, for example, to see the EU AI Act. It’s great to see everything going on in the U.K.”

“We’ve been actively involved in that as well. And you want to make sure … these international frameworks are relatively interoperable, as well,” she added.

“In the United States context, what will happen is, if we don’t have federal legislation, you’ll start to see state by state legislation, and we’re definitely starting to see that. And that’s also very suboptimal,” Goldman said.

But, she added, “I remain optimistic, because I think if you saw a number of the hearings that happened in the Senate, they were largely bipartisan.”

“And I will also say, I think there are a number of sub issues that I think are largely bipartisan, that certainly I’m optimistic about it. And I think it’s very important that we have a set of guardrails around the technology,” Goldman added.

Goldman sits on the U.S. National AI Advisory Committee, which advises the Biden administration on topics related to AI. She is Salesforce’s top leader focusing on the responsible use of the technology.

Her work involves developing product policies to guide the ethical use of technologies — particularly AI-powered tools like facial recognition — and engaging with policymakers on how the technology should be regulated.

Salesforce has its own stake in the ground with respect to generative AI, having launched its Einstein product — an integrated set of AI tools developed for Salesforce’s customer relationship management platform — in September 2023.

Einstein is a conversational AI bot, similar to OpenAI’s ChatGPT, but built for enterprise use cases.

Legislation in the works

There are several different pieces of legislation going through the U.S. Congress that focus on AI-related areas. One is the REAL Political Advertisements Act, which would require a disclaimer on political ads that use images or videos generated by AI. It was introduced in May 2023.

Another is the National AI Commission Act, introduced in June 2023, which would create a bipartisan blue-ribbon commission to recommend steps toward AI regulation.

Then there’s the AI Labeling Act, which would require developers to include “clear and conspicuous” notices on AI-generated content. It was proposed in October 2023.

However, there is still no federal law that focuses specifically on AI. Calls for governments to impose laws regulating AI have increased with the advent of advanced generative AI tools like OpenAI’s GPT-4 and Google’s Gemini, which can create humanlike responses to text-based prompts.

In October 2023, President Joe Biden signed an executive order on AI in an effort to establish a “coordinated, Federal Government-wide approach” to the responsible development and implementation of the technology.