January 30, 2025
DeepSeek-R1 Is Not Fully Open-Source, Hugging Face Wants to Change That

Hugging Face announced a new initiative on Tuesday to build Open-R1, a fully open reproduction of the DeepSeek-R1 model. The hedge fund-backed Chinese AI firm released the DeepSeek-R1 artificial intelligence (AI) model publicly last week, sending shockwaves across Silicon Valley and the Nasdaq. A big reason was that such an advanced, large-scale AI model, one that could rival OpenAI’s o1 model, had never before been released as open source. However, the model is not fully open-source, and Hugging Face researchers are now trying to fill in the missing pieces.

Why Is Hugging Face Building Open-R1?

In a blog post, Hugging Face researchers detailed their reasons for replicating DeepSeek’s famed AI model. Essentially, DeepSeek-R1 is what the researchers describe as a “black-box” release: the model weights and the code needed to run it are available, but the training dataset and training code are not. This means anyone can download and run the AI model locally, but the information needed to replicate it, or to build a similar model from scratch, is missing.

Some of the unreleased information includes the reasoning-specific datasets used to train the base model, the training code and hyperparameters that allow the model to break down and process complex queries, and the compute and data trade-offs made during training.

The researchers said that the aim behind building a fully open-source version of DeepSeek-R1 is to provide transparency about how reinforcement learning improves reasoning and to share reproducible insights with the community.

Hugging Face’s Open-R1 Initiative

Since DeepSeek-R1 is available in the public domain, researchers were able to understand some aspects of the AI model. For instance, R1-Zero, the intermediate model built on top of the DeepSeek-V3 base model, was trained with pure reinforcement learning, without supervised fine-tuning. The reasoning-focused R1 model, however, added several refinement steps that reject low-quality outputs and produce polished, consistent answers.
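The refinement step described above is, in essence, rejection sampling: generate several candidate answers and keep only those that pass a quality check. The following minimal sketch illustrates the idea; the scoring rules and candidate answers are hypothetical stand-ins for a real reward model or verifier, not DeepSeek's actual pipeline.

```python
# Illustrative sketch of rejection sampling: keep only candidate
# answers that pass a quality check. The scorer below is a toy
# stand-in for a real reward model or rule-based verifier.

def score(answer: str) -> float:
    """Hypothetical quality score: reward answers that show their
    reasoning and end with the expected final result."""
    s = 0.0
    if "because" in answer or "therefore" in answer:
        s += 0.5  # some visible reasoning
    if answer.strip().endswith("42"):
        s += 0.5  # reaches the expected final answer
    return s

def reject_low_quality(candidates: list[str], threshold: float = 1.0) -> list[str]:
    """Filter out candidates scoring below the threshold."""
    return [c for c in candidates if score(c) >= threshold]

candidates = [
    "42",                                      # answer with no reasoning
    "It is 41 because of rounding",            # reasoning, wrong answer
    "6 * 7 = 42, therefore the answer is 42",  # reasoning and answer
]
kept = reject_low_quality(candidates)
# only the last candidate survives the filter
```

In a real training loop, the surviving answers would then be fed back as fine-tuning data, which is how rejection sampling turns noisy generations into a polished training signal.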

To do this, Hugging Face researchers have developed a three-step plan. First, a distilled version of R1 will be created using its dataset. Next, they will replicate the pure reinforcement learning pipeline. Finally, they will apply supervised fine-tuning and further reinforcement learning until the responses are on par with R1’s.

The synthetic dataset derived from distilling the R1 model, as well as the training steps, will then be released to the open-source community so that developers can turn existing large language models (LLMs) into reasoning models simply by fine-tuning them.
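Distillation data of this kind is typically stored as prompt/completion pairs, where the completion contains the teacher model's full reasoning trace. The sketch below shows one plausible way such records might be assembled; the field names and example are illustrative assumptions, not Open-R1's actual schema (though R1 does wrap its reasoning in `<think>` tags).

```python
import json

def make_distillation_record(prompt: str, reasoning: str, answer: str) -> str:
    """Package a teacher model's reasoning trace and final answer
    as one JSON line suitable for supervised fine-tuning."""
    record = {
        "prompt": prompt,
        # The completion keeps the full chain of thought so the student
        # model learns to reason step by step, not just to answer.
        "completion": f"<think>{reasoning}</think>\n{answer}",
    }
    return json.dumps(record)

line = make_distillation_record(
    prompt="What is 6 * 7?",
    reasoning="6 * 7 means six groups of seven, which is 42.",
    answer="42",
)
```

A file of such lines (JSONL) is the standard input format for supervised fine-tuning libraries, which is what lets an ordinary LLM be fine-tuned into a reasoning model.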

Notably, Hugging Face used a similar process when it distilled the Llama 3B model to show that test-time compute (also known as inference-time compute) can significantly enhance small language models.
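Test-time compute means spending extra computation per query, for example by sampling many answers and aggregating them. One simple aggregation strategy is majority voting over the final answers. The sketch below illustrates the idea with a hypothetical set of sampled answers; it is one of several strategies (Hugging Face's experiments also explored search-based methods).

```python
from collections import Counter

def majority_vote(samples: list[str]) -> str:
    """Pick the most frequent final answer among sampled outputs.
    Spending more compute (drawing more samples) makes the vote
    more reliable, without changing the model's weights."""
    counts = Counter(samples)
    answer, _ = counts.most_common(1)[0]
    return answer

# Hypothetical final answers extracted from five sampled generations.
samples = ["42", "41", "42", "42", "24"]
best = majority_vote(samples)
# the majority answer "42" wins despite two wrong samples
```

This is why a small model can punch above its weight: individual samples are often wrong, but the consensus across many samples is much more accurate.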