How ChatGPT Can Fool Humans, Even When It's Wrong

Since OpenAI took the wraps off ChatGPT, a chatbot that generates sentences closely mimicking actual human-written prose, social media has been abuzz with users trying out fun, low-stakes uses for the technology. The bot has been asked to create cocktail recipes, compose lyrics and write a Gilligan’s Island script in which the castaways deal with Covid. ChatGPT avoids some of the pitfalls of past chatbots, like racist or hateful language, and the excitement about this iteration of the technology is palpable.

ChatGPT’s skill at coming up with fluent, authoritative-sounding answers and responding to additional, related questions in a coherent thread is a testament to how far artificial intelligence has advanced. But it’s also raising a host of questions about how readers will be able to tell the difference between the bot’s content and authentic human-written language. That’s because ChatGPT’s text can achieve a certain level of what comedian Stephen Colbert once called “truthiness”: something that has the look and feel of being true even if it isn’t based in fact.

The tool was released last week. By Monday, Stack Overflow, a Q&A site for computer programmers, had temporarily banned answers generated by ChatGPT, with moderators saying they were seeing thousands of such posts and that the posts often contained inaccuracies, making them “substantially harmful” to the site. And even when the answers are accurate, the bot-generated material on, say, history or science is good enough to provoke debate about whether it could be used to cheat on tests, essays or job applications. Factual or not, ChatGPT’s answers are a close echo of human speech, a facsimile of the real thing, strengthening the case that OpenAI may have to come up with a way to flag such content as software-generated rather than human-authored.

Arvind Narayanan, a computer science professor at Princeton University, tested the chatbot on basic information security questions the day it was released. His conclusion: You can’t tell if the answer is wrong unless you already know what’s right.

“I haven’t seen any evidence that ChatGPT is so persuasive that it’s able to convince experts,” he said in an interview. “It is certainly a problem that non-experts can find it to be very plausible and authoritative and credible.” It’s also an issue for teachers who assign work that asks for a recitation of facts rather than analysis or critical thinking, he said. The chatbot handles the former pretty well but usually falls down on the latter.

ChatGPT is the latest language AI technology from OpenAI, an artificial intelligence research shop founded in 2015 by backers including Elon Musk, current Chief Executive Officer Sam Altman and Chief Scientist Ilya Sutskever. Musk ended his involvement in 2019, and OpenAI is now heavily funded by Microsoft. The firm has focused on several versions of GPT, a so-called large language model, which scans massive volumes of text found on the internet and uses that material to learn how to predict the words that should come next in a passage. ChatGPT is an iteration that has been “trained” to answer questions.
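To make that prediction step concrete, here is a deliberately tiny sketch in Python, not drawn from OpenAI’s code, that tallies which word tends to follow which in a small sample of text and then generates new sentences by repeatedly picking a likely next word. GPT rests on the same predict-the-next-word principle, but with a neural network trained on a vast slice of the internet rather than a simple word-pair count.

# A toy illustration of the idea behind a language model: learn from sample
# text which word tends to follow which, then generate new text by repeatedly
# predicting a plausible next word. Real models like GPT use neural networks
# and enormous datasets, but the generate-by-prediction loop is analogous.
from collections import Counter, defaultdict
import random

training_text = (
    "microsoft reported rising revenue driven by cloud computing "
    "microsoft reported rising profit driven by video game sales"
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(seed, length=10):
    """Generate text by sampling a likely next word at each step."""
    output = [seed]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break  # no known continuation for this word
        choice = random.choices(list(candidates), weights=candidates.values())[0]
        output.append(choice)
    return " ".join(output)

print(generate("microsoft"))
# Possible output: "microsoft reported rising profit driven by cloud computing"

Nothing in that prediction loop checks whether the generated claim is true; fluency and accuracy are separate properties, which is the gap at the heart of this story.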

Using the AI tool to write a basic news story shows its strengths as well as the potential drawbacks. Asked to write a piece about Microsoft’s quarterly earnings, the bot produced a credible imitation of an article on Microsoft’s financial results circa 2021, describing rising revenue and profit driven by strong cloud-computing software and video-game sales. ChatGPT didn’t make the telltale errors that would have flagged the piece as written by a bot. The numbers were wrong, but they were in the ballpark.

The bot bolstered its credibility by adding a fake quote from Microsoft CEO Satya Nadella, and therein lies the problem. The comment, praising Microsoft’s execution during a tough pandemic period, was so believable that even this Microsoft reporter had to check whether it was real. It was indeed completely made up.

As Microsoft AI ethics vice president Sarah Bird explained in an interview earlier this year, language models like GPT have learned that humans often back up assertions with a quote, so the software mimics that behavior, but without a human understanding of ethics and attribution. It will make up a quote, a speaker or both.

The enthusiastic reception for ChatGPT is a marked contrast to another recent high-profile demonstration of a language model: Meta’s Galactica, which ingested volumes of scientific papers and textbooks and was supposed to use that “learning” to spit out scientific truth. Users found the bot interspersed scientific buzzwords with inaccuracies and bias, leading Meta, Facebook’s parent company, to pull the plug. “I’m not sure how anyone thought that was a good idea,” Narayanan said. “In science, accuracy is the whole game.”

OpenAI clearly states that its chatbot isn’t “capable of producing human-like speech,” according to a disclaimer on the service. “Language models like ChatGPT are designed to simulate human language patterns and to generate responses that are similar to how a human might respond, but they do not have the ability to produce human-like speech.”

ChatGPT has also been designed to avoid some of the more obvious pitfalls and to better account for the possibility of making an error. The software was trained only on data through last year. Ask a question about this year’s midterm elections, for example, and the software admits its limitations. “I’m sorry, but I am a large language model trained by OpenAI and do not have any information about current events or the results of recent elections,” it says. “My training data only goes up until 2021, and I do not have the ability to browse the internet or access any updated information. Is there something else I can help you with?”

Examples provided by OpenAI show ChatGPT refusing to answer questions about bullying or offering violent content. It didn’t answer a question I posed on the Jan. 6, 2021, insurrection at the US Capitol, and it sometimes acknowledges it’s made a mistake. OpenAI said it released ChatGPT as a “research preview” in order to incorporate feedback from actual use, which it views as a critical way of making safe systems.

Currently, it gets some things very wrong. New York University professor emeritus Gary Marcus has been collecting and sharing examples on Twitter, including ChatGPT’s advice on biking from San Francisco to Maui. Rong-Ching Chang, a University of California doctoral student, got the bot to talk about cannibalism at the Tiananmen Square protests. That’s why some AI experts say it’s worrisome that some tech executives and users see the technology as a way to replace internet search, especially since ChatGPT doesn’t show its work or list its sources.

“If you get an answer that you can’t trace back and say, ‘Where does this come from? What perspective is it representing? What’s the source for this information?’ then you are incredibly vulnerable to stuff that is made up and either just flat-out fabricated or reflecting the worst biases in the dataset back to you,” said Emily Bender, a University of Washington linguistics professor and author of a paper earlier this year that laid out the concerns raised by language-model chatbots pitched as a way to improve web search. The paper was largely a response to ideas unveiled by Google.

“The sort of killer app for this kind of technology is a situation where you don’t need anything truthful,” Bender said. “Nobody can make any decisions based on it.”

The software could also be used to launch “astroturfing” campaigns, which make an opinion appear to come from large numbers of grassroots commenters when it actually originates with a centrally managed operation.

As AI systems get better at mimicking humans, questions will multiply about how to tell when a piece of content, whether an image or an essay, has been created by a program in response to a few words of human direction, and whose responsibility it is to make sure readers or viewers know the content’s origin. In 2018, when Google released Duplex, an AI that simulated human speech to call companies on behalf of users, the company ended up having to disclose that the calls were coming from a bot after complaints that the practice was deceptive.

It’s an idea OpenAI said it has explored. Its DALL-E system for generating images from text prompts, for example, places a signature on the images stating that they were created by AI, and the company is continuing to research techniques for disclosing the provenance of text created by tools like GPT. OpenAI’s policy also states that users sharing such content should clearly indicate it was made by a machine.

“In general, when there’s a tool that can be misused but also has a lot of positive uses, we put the onus on the user of the tool,” Narayanan said. “But these are very powerful tools, and the companies producing them are well resourced. And so perhaps they need to bear some part of the ethical responsibility here.”

© 2022 Bloomberg L.P.

