December 25, 2024
U.S. regulators warn they already have the power to go after A.I. bias — and they're ready to use it
Four federal U.S. agencies issued a warning on Tuesday that they already have the authority to tackle harms caused by artificial intelligence bias, and they plan to use it.

The warning comes as Congress grapples with how to protect Americans from potential risks stemming from AI. The urgency behind that push has increased as the technology has rapidly advanced, with tools like OpenAI’s chatbot ChatGPT now readily accessible to consumers. Earlier this month, Senate Majority Leader Chuck Schumer, D-N.Y., announced he’s working toward a broad framework for AI legislation, signaling it’s an important priority in Congress.

But even as lawmakers attempt to write targeted rules for the new technology, regulators asserted they already have the tools to pursue companies abusing or misusing AI in a variety of ways.

In a joint announcement from the Consumer Financial Protection Bureau, the Department of Justice, the Equal Employment Opportunity Commission and the Federal Trade Commission, regulators laid out some of the ways existing laws would allow them to take action against companies for their use of AI.

For example, the CFPB is looking into so-called digital redlining, or housing discrimination that results from bias in lending or home-valuation algorithms, according to Rohit Chopra, the agency’s director. The CFPB also plans to propose rules to ensure AI valuation models for residential real estate have safeguards against discrimination.

“There is not an exemption in our nation’s civil rights laws for new technologies and artificial intelligence that engages in unlawful discrimination,” Chopra told reporters during a virtual press conference Tuesday.

“Each agency here today has legal authorities to readily combat AI-driven harm,” FTC Chair Lina Khan said. “Firms should be on notice that systems that bolster fraud or perpetuate unlawful bias can violate the FTC Act. There is no AI exemption to the laws on the books.”

Khan added that the FTC stands ready to hold companies accountable for claims about what their AI technology can do, noting that enforcement against deceptive marketing has long been part of the agency’s expertise.

The FTC is also prepared to take action against companies that unlawfully seek to block new entrants to AI markets, Khan said.

“A handful of powerful firms today control the necessary raw materials, not only the vast stores of data but also the cloud services and computing power, that startups and other businesses rely on to develop and deploy AI products,” Khan said. “And this control could create the opportunity for firms to engage in unfair methods of competition.”

Kristen Clarke, assistant attorney general for the DOJ Civil Rights Division, pointed to a prior settlement with Meta over allegations that the company had used algorithms that unlawfully discriminated on the basis of sex and race in displaying housing ads.

“The Civil Rights Division is committed to using federal civil rights laws to hold companies accountable when they use artificial intelligence in ways that prove discriminatory,” Clarke said.

EEOC Chair Charlotte Burrows pointed to the use of AI in hiring and recruitment, saying it can produce biased decisions if trained on biased datasets. In practice, that might mean screening out any candidate who doesn’t resemble those in the group the AI was trained to identify.

Still, regulators also acknowledged there’s room for Congress to act.

“I do believe that it’s important for Congress to be looking at this,” Burrows said. “I don’t want in any way the fact that I think we have pretty robust tools for some of the problems that we’re seeing to in any way undermine those important conversations and the thought that we need to do more as well.”

“Artificial intelligence poses some of the greatest modern day threats when it comes to discrimination today and these issues warrant closer study and examination by policymakers and others,” said Clarke, adding that in the meantime agencies have “an arsenal of bedrock civil rights laws” to “hold bad actors accountable.”

“While we continue with enforcement on the agency side, we’ve welcomed work that others might do to figure out how we can ensure that we are keeping up with the escalating threats that we see today,” Clarke said.
