January 24, 2025
Anthropic’s New Feature Will Make Claude’s Responses More Reliable

Anthropic introduced a new application programming interface (API) feature on Thursday to let developers ground the responses generated by artificial intelligence (AI) models. Dubbed Citations, the feature lets developers restrict the output of the Claude family of AI models to information drawn from source documents they supply, with the aim of improving the reliability and accuracy of AI-generated responses. The AI firm has already provided the feature to companies such as Thomson Reuters (for its CoCounsel platform) and Endex. Notably, the feature is available at no additional cost.

Anthropic Introduces a New Grounding Feature

Generative AI models are prone to errors and hallucination because they generate answers from patterns learned across massive datasets rather than by retrieving and verifying specific facts. Adding web search to the equation makes it even harder for large language models (LLMs) to avoid inaccurate information, as many rely on relatively basic retrieval-augmented generation (RAG) mechanisms.

AI companies that build specialised tools often restrict the data an LLM can draw on to improve accuracy and reliability. Examples of such tools include Gemini in Google Docs, the AI-powered writing assistance features on Samsung and Apple smartphones, and the PDF analysis tools in Adobe Acrobat. However, creating such a restriction layer is not possible at the API level, since developers build a wide range of tools with very different data requirements.

To solve this problem, Anthropic introduced the Citations feature for its API. Detailed in a newsroom post, the feature lets Claude ground its responses in source documents. This means Claude models can point to the exact paragraphs and sentences from which they drew the information used to generate an output. The AI firm claims this will make AI-generated responses easier to verify and more trustworthy.

With Citations, developers can add source documents to the context window, and Claude will automatically cite the relevant passages in its output wherever it draws on the source material. As a result, developers no longer have to rely on complex prompts asking Claude to include source information, an approach the company acknowledged was inconsistent and cumbersome.
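To illustrate, a minimal Python sketch of how a developer might enable this through Anthropic's Messages API could look like the example below. The document block and the citations flag follow the format described in Anthropic's developer documentation, though the model identifier, document contents and question shown here are illustrative assumptions rather than an authoritative implementation.

import anthropic

# Create an API client (reads the ANTHROPIC_API_KEY environment variable).
client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model identifier
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    # The source document is added to the context window as a
                    # document block with citations enabled.
                    "type": "document",
                    "source": {
                        "type": "text",
                        "media_type": "text/plain",
                        "data": "Quarterly revenue rose 12 percent year over year.",
                    },
                    "title": "Q3 financial summary",  # hypothetical document
                    "citations": {"enabled": True},
                },
                # The user's question follows the document in the same message.
                {"type": "text", "text": "How did revenue change in Q3?"},
            ],
        }
    ],
)

# The response content blocks carry citation metadata pointing back to the
# exact passages of the source document that support each claim.
for block in response.content:
    print(block)

In an arrangement like this, Claude's reply would include text annotated with references to the cited passages, which a developer could then surface as footnotes or highlighted excerpts in their own interface.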

Anthropic claimed that Citations will make it easier for developers to build solutions for document summarisation, question answering over long documents, and customer support systems.

Notably, the company stated that Citations uses Anthropic's standard token-based pricing model, and users will not pay for the output tokens that return the quoted text. However, there may be extra charges for the additional input tokens used to process the source documents. Citations is currently available for the Claude 3.5 Sonnet and Claude 3.5 Haiku models.