November 22, 2024
Google CEO Pichai says company will 'sort it out' if OpenAI misused YouTube for AI training
YouTube videos have reportedly been used by companies to train AI models.

Alphabet CEO Sundar Pichai speaks at the Asia-Pacific Economic Cooperation CEO Summit in San Francisco on Nov. 16, 2023.


Alphabet CEO Sundar Pichai said Google will “sort it out” if it determines Microsoft-backed OpenAI relied on YouTube content to train an artificial intelligence model that can generate videos.

The comments, in an interview Tuesday with CNBC’s Deirdre Bosa, come after OpenAI technology chief Mira Murati told the Wall Street Journal in March that she wasn’t sure if YouTube videos were part of the training data for the company’s Sora model introduced earlier in the year.

Murati said OpenAI had drawn on publicly available data and on licensed data. The New York Times later reported that OpenAI had transcribed over a million hours of YouTube videos.

Asked if Google would sue OpenAI if the startup violated the search company’s terms of service, Pichai didn’t offer specifics.

“Look, I think it’s a question for them to answer,” Pichai said. “I don’t have anything to add. We do have clear terms of service. And so, you know, I think normally in these things we engage with companies and make sure they understand our terms of service. And we’ll sort it out.”

Pichai said Google has processes in place to figure out if OpenAI failed to comply with the rules. Newspapers such as The New York Times have already taken aim at OpenAI for allegedly breaking copyright law and training models on their articles.

Pichai’s interview followed a keynote to developers at Google’s I/O conference, where executives announced new AI models, including one called Veo that can compose synthetic videos. Those looking to get early access will have to receive approval from Google.

OpenAI preempted the Google event on Monday. The company revealed an AI model called GPT-4o and showed how users of its ChatGPT mobile app would be able to hold realistic voice conversations, interrupting the AI assistant and having it analyze what appears in front of a smartphone camera. On Tuesday, Google showed off similar upcoming capabilities.

“I don’t think they’ve shipped their demo to their users yet,” Pichai said of OpenAI. “I don’t think it’s available in the product.”

OpenAI said in a blog post on Monday that ChatGPT Plus subscribers will be able to try an early version of the new voice mode in the weeks ahead. Pichai said Google’s Project Astra multimodal chat capabilities will come to its Gemini chatbot later this year.

“We have a clear sense of how to approach it, and we’ll get it right,” Pichai said.

Google has reduced the cost of serving AI-generated answers in web searches by 80% since showing off a preview last year, relying on its custom Tensor Processing Units (TPUs) and Nvidia’s popular graphics processing units, he said. Google said during the keynote that it’s starting to display its AI Overviews in search results for all users in the U.S.

In June, Apple will hold its Worldwide Developers Conference in Cupertino, California. Bloomberg reported in March that Apple was discussing the idea of adding Gemini to the iPhone. Pichai told Bosa that Google has enjoyed “a great partnership with Apple over the years.” A Google expert witness said in court last November that the company gives Apple 36% of its search advertising revenue from the Safari browser.

“We have focused on delivering great experiences for the Apple ecosystem,” Pichai said. “It is something we take very seriously and I’m confident — we have many ways to make sure our products are accessible. We see that today, AI Overviews have been a popular feature on iOS when we have tested, and so we’ll continue — including Gemini. We’ll continue working to bring that there.”
