November 23, 2024
Alphabet-Backed Anthropic Releases OpenAI Rival Named Claude
Anthropic, an artificial intelligence company backed by Alphabet, on Tuesday released a large language model that competes directly with offerings from Microsoft-backed OpenAI, the creator of ChatGPT.

Large language models are algorithms that are taught to generate text by feeding them human-written training text. In recent years, researchers have obtained much more human-like results with such models by drastically increasing the amount of data fed to them and the amount of computing power used to train them. 

Claude, as Anthropic’s model is known, is built to carry out similar tasks to ChatGPT by responding to prompts with human-like text output, whether that is in the form of editing legal contracts or writing computer code.

But Anthropic, co-founded by siblings Dario and Daniela Amodei, both former OpenAI executives, has focused on producing AI systems that are less likely than other systems to generate offensive or dangerous content, such as instructions for computer hacking or making weapons.

Such AI safety concerns gained prominence last month after Microsoft said it would limit queries to its new chat-powered Bing search engine after a New York Times columnist found that the chatbot displayed an alter ego and produced unsettling responses during an extended conversation.

Safety issues have been a thorny problem for tech companies because chatbots do not understand the meaning of the words they generate.

To avoid generating harmful content, the creators of chatbots often program them to avoid certain subject areas altogether. But that leaves chatbots vulnerable to so-called “prompt engineering,” where users talk their way around restrictions.

Anthropic has taken a different approach, giving Claude a set of principles at the time the model is “trained” with vast amounts of text data. Rather than trying to avoid potentially dangerous topics, Claude is designed to explain its objections, based on its principles.

“There was nothing scary. That’s one of the reasons we liked Anthropic,” Richard Robinson, chief executive of Robin AI, a London-based startup that uses AI to analyze legal contracts and was granted early access to Claude by Anthropic, told Reuters in an interview.

Robinson said his firm had tried applying OpenAI’s technology to contracts but found that Claude was both better at understanding dense legal language and less likely to generate strange responses.

“If anything, the challenge was in getting it to loosen its restraints somewhat for genuinely acceptable uses,” Robinson said.

© Thomson Reuters 2023

