May 30, 2025
OpenAI’s Viral Ghibli Trend Might Be a Privacy Minefield, Experts Say
The viral Ghibli-style image trend on ChatGPT has sparked global participation, including from celebrities and brands. However, experts warn of serious privacy risks, as users unknowingly share facial data that may train AI models. With permanent digital footprints, the potential for misuse, like deepfakes or identity theft, raises critical concerns around data control.

Unless you live under a rock or abstain from social media and Internet pop culture entirely, you must have at least heard of the Ghibli trend, if not seen the thousands of images flooding popular social platforms. In the last couple of weeks, millions of individuals have used OpenAI’s artificial intelligence (AI) chatbot to turn their images into Studio Ghibli-style art. The tool’s ability to transform personal photos, memes, and historical scenes into the whimsical, hand-drawn aesthetic of Hayao Miyazaki’s films, like Spirited Away and My Neighbour Totoro, has led to millions trying their hand at it.

The trend has also resulted in a massive rise in popularity for OpenAI’s AI chatbot. However, while individuals are happily feeding the chatbot images of themselves, their family and friends, experts have raised privacy and data security concerns over the viral Ghibli trend. These are no trivial concerns either. Experts highlight that by submitting their images, users are potentially letting the company train its AI models on these images.

Additionally, a far more nefarious problem is that their facial data might remain part of the Internet forever, leading to a permanent loss of privacy. In the hands of bad actors, this data can also enable cybercrimes such as identity theft. So, now that the dust has settled, let us break down the darker implications of OpenAI’s Ghibli trend, which has witnessed global participation.

The Genesis and Rise of the Ghibli Trend

OpenAI introduced the native image generation feature in ChatGPT in the last week of March. Powered by new capabilities added to the GPT-4o model, the feature was first released to the platform’s paid users, and a week later, it was extended even to those on the free tier. While ChatGPT could already generate images via the DALL-E model, the GPT-4o model brought improved abilities, such as accepting an image as an input, better text rendering, and higher prompt adherence for inline edits.

Early adopters of the feature quickly began experimenting, and the ability to add images as input proved especially popular, as it is much more fun to see your own photos turned into artwork than to create generic images from text prompts. While it is incredibly difficult to identify the true originator of the trend, software engineer and AI enthusiast Grant Slatton is credited as its populariser.

His post, in which he converted an image of himself, his wife, and his family dog into Ghibli-style art, has garnered more than 52 million views, 16,000 bookmarks, and 5,900 reposts at the time of writing.

While precise figures on the total number of users who created Ghibli-style images are not available, the indicators above, along with the widespread sharing of these images across social media platforms like X (formerly known as Twitter), Facebook, Instagram, and Reddit, suggest that participation could be in the millions.

The trend also extended beyond individual users, with brands and even government entities, such as the Indian government’s MyGovIndia X account, participating by creating and sharing Ghibli-inspired visuals. Celebrities such as Sachin Tendulkar and Amitabh Bachchan were also seen sharing these images on social media.

Privacy and Data Security Concerns Behind the Ghibli Trend

As per its support pages, OpenAI collects user content, including text, images, and file uploads, to train its AI models. The platform does offer an opt-out setting which, when enabled, stops the company from using that user’s data for training. However, OpenAI does not explicitly tell users, at the time they first register and access the platform, that it collects data to train AI models or that an opt-out exists. (The disclosure is part of ChatGPT’s terms of use, but most users tend not to read those; an “explicit” notice would be a pop-up highlighting the data collection and the opt-out mechanism.)

This means most general users, including those who have been sharing their images to generate Ghibli-style art, have no idea about the privacy controls, and they end up sharing their data with the AI firm by default. So, what exactly happens to this data?

According to OpenAI’s support page, unless a user manually deletes a chat, the data is stored on the company’s servers indefinitely. Even after deletion, permanent removal from its servers can take up to 30 days. And for as long as user data remains with OpenAI, the company may use it to train its AI models (this does not apply to the Teams, Enterprise, or Education plans).

“When any AI model is pre-trained on any information, it becomes part of the model’s parameters. Even if a company removes user data from its storage systems, reversing the training process is extremely difficult. While it is unlikely to regurgitate the input data since companies add declassifiers, the AI model definitely retains the knowledge it gains from the data,” said Ripudaman Sanger, Technical Product Manager, Globallogic.

But what is the harm, some may ask. The harm in OpenAI, or any other AI platform, collecting user data without explicit consent is that users neither know nor control how that data is used.

“Once a photo is uploaded, it’s not always clear what the platform does with it. Some may keep those images, reuse them, or use them to train future AI models. Most users aren’t given the option to delete their data, which raises serious concerns about control and consent,” said Pratim Mukherjee, Senior Director of Engineering, McAfee.

Mukherjee also explained that in the rare event of a data breach, where user data is stolen by bad actors, the consequences could be dire. With the rise of deepfakes, bad actors can misuse the data to create fake content that damages individuals’ reputations, or even to commit identity fraud.

The Consequences Could Be Long Lasting

A case can be made by optimistic readers that a data breach is a rare possibility. However, that view overlooks the problem of permanence that comes with facial features.

“Unlike Personal Identifiable Information (PII) or card details, all of which can be replaced/changed, facial features are left permanently as digital footprints, leaving a permanent loss to privacy,” said Gagan Aggarwal, Researcher at CloudSEK.

This means that even if a data breach occurs 20 years from now, those whose images are leaked will still face security risks. Aggarwal highlights that open-source intelligence (OSINT) tools already exist that can carry out Internet-wide face searches. If such a dataset falls into the wrong hands, it could create a major risk for the millions of people who participated in the Ghibli trend.

And the problem will only grow as more people share their data with cloud-based models and technologies. In recent days, Google has introduced its Veo 3 video generation model, which can not only create hyperrealistic videos of people but also add dialogue and background sounds to them. The model supports image-based video generation, which could soon give rise to another similar trend.

The idea here is not to create fear or paranoia but to raise awareness about the risks users take when they participate in seemingly innocent Internet trends or casually share data with cloud-based AI models. This awareness will hopefully enable people to make well-informed choices in the future.

As Mukherjee explains, “Users shouldn’t have to trade their privacy for a bit of digital fun. Transparency, control, and security need to be part of the experience from the start.”

This technology is still in its nascent stage, and as newer capabilities emerge, more trends are sure to appear. The need of the hour is to be mindful as users interact with such tools. The old proverb about fire also happens to apply to AI: It is a good servant but a bad master.