May 30, 2025
Tencent’s HunyuanPortrait AI Turns Photos Into Animated Portraits
Tencent released a new artificial intelligence (AI) model on Tuesday that can animate still portrait images. Dubbed HunyuanPortrait, the model is based on a diffusion architecture and can generate videos with realistic animation from a reference image and a guiding video. The researchers behind the project highlighted that the model captures both facial data and spatial movements and transfers them accurately onto the reference image. Tencent has now open-sourced the HunyuanPortrait AI model, and it can be downloaded and run locally from popular repositories.

Tencent’s HunyuanPortrait Can Bring Still Portraits to Life

In a post on X (formerly known as Twitter), the official handle of Tencent Hunyuan announced that the HunyuanPortrait model is now available to the open community. The AI model can be downloaded from Tencent's GitHub and Hugging Face listings, and a pre-print paper detailing the model is hosted on arXiv. Notably, the model is licensed for academic and research use cases, but not for commercial usage.

HunyuanPortrait can generate lifelike animated videos from a reference image and a driving video. It captures the facial data and head poses from the video and maps them onto the still portrait. The company claims the motion is synchronised accurately, and that even subtle changes in facial expression are replicated.

[Image: HunyuanPortrait architecture. Photo Credit: Tencent]

On its model page, Tencent researchers detailed the architecture of HunyuanPortrait. It is built on the Stable Diffusion architecture alongside a condition control encoder. These pre-trained encoders decouple motion information from identity information in videos. The motion data is captured as control signals, which are then injected into the still portrait via a denoising U-Net. The company claims this brings both spatial accuracy and temporal consistency to the output.
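The flow described above — decoupling identity from motion, then injecting motion as control signals during iterative denoising — can be sketched schematically. The snippet below is a toy illustration with made-up encoders and shapes, not Tencent's actual implementation; the real model uses learned neural encoders and a diffusion U-Net.

```python
import numpy as np

rng = np.random.default_rng(0)

def identity_encoder(image):
    # Toy stand-in for the appearance/identity encoder:
    # collapse the image into a per-channel identity vector
    return image.mean(axis=0)                # (C,)

def motion_encoder(frame):
    # Toy stand-in for the motion encoder: zero-mean
    # deviations act as the "motion" control signal
    return frame - frame.mean()              # (H, C)

def denoise_step(x, identity, motion, t):
    # Inject control signals at each step: pull the noisy
    # sample toward identity features displaced by motion
    target = identity + motion
    return x + (target - x) / (t + 1)

# Reference portrait and a 4-frame driving video, as toy feature maps
portrait = rng.normal(size=(8, 16))          # (H, C)
driving = rng.normal(size=(4, 8, 16))        # (T, H, C)

identity = identity_encoder(portrait)        # decoupled appearance
frames_out = []
for frame in driving:
    motion = motion_encoder(frame)           # decoupled per-frame motion
    x = rng.normal(size=portrait.shape)      # start from pure noise
    for t in reversed(range(10)):            # iterative denoising
        x = denoise_step(x, identity, motion, t)
    frames_out.append(x)

video = np.stack(frames_out)                 # (T, H, C) animated output
print(video.shape)                           # (4, 8, 16)
```

Because identity is computed once from the portrait while motion is re-extracted per frame, the same face is animated consistently across frames — the decoupling the researchers describe.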

Tencent claims that the AI model outperforms existing open-source alternatives on the parameters of temporal consistency and controllability, but these metrics have not been independently verified.

Such models can be useful in the filmmaking and animation industries. Traditionally, animators manually keyframe facial expressions or use expensive motion capture systems to animate characters realistically. Models like HunyuanPortrait let them simply feed in a character design along with the target movements and facial expressions, and generate the animation from those inputs. Such models also have the potential to make high-quality animation accessible to smaller studios and independent creators.

