
How to prompt friends and influence people

The fine art of human prompt engineering: How to talk to a person like ChatGPT

People are more like AI language models than you might think. Here are some prompting tips.

Benj Edwards – Apr 3, 2024 11:30 am UTC

With these tips, you too can prompt people successfully.

In a break from our normal practice, Ars is publishing this helpful guide to knowing how to prompt the “human brain,” should you encounter one during your daily routine.

While AI assistants like ChatGPT have taken the world by storm, a growing body of research shows that it’s also possible to generate useful outputs from what might be called “human language models,” or people. Much like large language models (LLMs) in AI, HLMs have the ability to take information you provide and transform it into meaningful responses, if you know how to craft effective instructions, called “prompts.”

Human prompt engineering is an ancient art form dating back at least to Aristotle’s time, and it became widely popular through books published in the modern era, before the advent of computers.

Since interacting with humans can be difficult, we’ve put together a guide to a few key prompting techniques that will help you get the most out of conversations with human language models. But first, let’s go over some of what HLMs can do.

Understanding human language models

LLMs like those that power ChatGPT, Microsoft Copilot, Google Gemini, and Anthropic Claude all rely on an input called a “prompt,” which can be a text string or an image encoded into a series of tokens (fragments of data). The goal of each AI model is to take those tokens and predict the next most-likely tokens that follow, based on data trained into their neural networks. That prediction becomes the output of the model.
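The next-token prediction loop described above can be sketched with a toy counting model. This is purely illustrative: a real LLM predicts over subword tokens with a neural network, not whole-word frequency counts, and the tiny corpus here is invented for the example.

```python
from collections import Counter, defaultdict

# Toy illustration of next-token prediction: count which word follows
# which in a tiny "training corpus," then predict the most likely next one.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(token):
    """Return the word most frequently seen after `token` in the corpus."""
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, beating "mat" and "fish"
```

The prediction is just the highest-count successor; an actual model outputs a probability distribution over its entire vocabulary instead.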

Similarly, prompts allow human language models to draw upon their training data to recall information in a more contextually accurate way. For example, if you prompt a person with “Mary had a,” you might expect an HLM to complete the sentence with “little lamb” based on frequent instances of the famous nursery rhyme encountered in educational or upbringing datasets. But if you add more context to your prompt, such as “In the hospital, Mary had a,” the person instead might draw on training data related to hospitals and childbirth and complete the sentence with “baby.”
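The “Mary had a” example can be mimicked with a minimal longest-context predictor: store next-word counts for every context length seen in training, then complete a prompt using the longest context that matches. The training sentences are invented stand-ins for the article’s hypothetical datasets.

```python
from collections import Counter, defaultdict

# Toy sketch of how added context changes a completion. Next-word counts
# are keyed on every suffix of the preceding words, and prediction uses
# the LONGEST context seen during "training."
training_sentences = [
    "mary had a little lamb",
    "mary had a little lamb",
    "in the hospital mary had a baby",
]

counts = defaultdict(Counter)
for sentence in training_sentences:
    words = sentence.split()
    for i in range(1, len(words)):
        for k in range(1, i + 1):          # every context length k
            context = tuple(words[i - k:i])
            counts[context][words[i]] += 1

def complete(prompt):
    """Predict the next word using the longest matching context."""
    words = prompt.lower().replace(",", "").split()
    for k in range(len(words), 0, -1):     # try the longest suffix first
        context = tuple(words[-k:])
        if context in counts:
            return counts[context].most_common(1)[0][0]
    return None

print(complete("Mary had a"))                    # -> "little"
print(complete("In the hospital, Mary had a"))   # -> "baby"
```

With the short prompt, “little” wins on raw frequency; the extra hospital context matches a longer stored suffix and flips the completion to “baby,” mirroring how context steers an HLM.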

Humans rely on a type of biological neural network (called “the brain”) to process information. Each brain has been trained since birth on a wide variety of both text and audiovisual media, including large copyrighted datasets. (Predictably, some humans are prone to reproducing copyrighted content or other people’s output occasionally, which can get them in trouble.)

Despite how often we interact with humans, scientists still have an incomplete grasp of how HLMs process language or interact with the world around them. HLMs are still considered a “black box,” in the sense that we know what goes in and what comes out, but how brain structure gives rise to complex thought processes is largely a mystery. For example, do humans actually “understand” what you’re prompting them, or do they simply react based on their training data? Can they truly “reason,” or are they just regurgitating novel permutations of facts learned from external sources? How can a biological machine acquire and use language? The ability appears to emerge spontaneously through pre-training from other humans and is then fine-tuned later through education.

Despite the black-box nature of their brains, most experts believe that humans build a world model, an internal representation of the exterior world around them, to help complete prompts. Humans also possess advanced mathematical capabilities, though those vary dramatically by model, and most still need access to external tools to complete accurate calculations. Still, a human’s most useful strength might lie in the verbal-visual user interface, which uses vision and language processing to encode multimodal inputs (speech, text, sound, or images) and then produce coherent outputs based on a prompt.

Human language models are powered by a biological neural network called a “brain.” (Image: Getty Images)

Humans also showcase impressive few-shot learning capabilities, being able to quickly adapt to new tasks in context (within the prompt) using a few provided examples. Their zero-shot learning abilities are equally remarkable, and many HLMs can tackle novel problems without any prior task-specific training data (or at least attempt to tackle them, to varying degrees of success).
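In AI practice, few-shot learning is driven by how the prompt itself is assembled: a handful of worked examples, then the new case. Here is a minimal sketch of that format; the classification task and labels are invented for illustration, and a zero-shot prompt would simply omit the examples.

```python
# Hypothetical sketch of few-shot prompt construction: worked
# (input, output) pairs are placed in the prompt before the new query.
def build_few_shot_prompt(examples, query):
    """Format example pairs plus a new input into one prompt string."""
    lines = [f"Input: {text}\nOutput: {label}" for text, label in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

examples = [
    ("The movie was wonderful", "positive"),
    ("I wasted two hours of my life", "negative"),
]
prompt = build_few_shot_prompt(examples, "A stunning achievement")
print(prompt)
```

The trailing bare “Output:” invites the model, human or otherwise, to continue the established pattern in context, which is what “learning within the prompt” means here.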

Interestingly, some HLMs (but not all) demonstrate strong performance on common sense reasoning benchmarks, showcasing their ability to draw upon real-world “knowledge” to answer questions and make inferences. They also tend to excel at open-ended text generation tasks, such as story writing and essay composition, producing coherent and creative outputs.

Benj Edwards is an AI and Machine Learning Reporter for Ars Technica. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.