Shutterstock on Tuesday announced an extended partnership with OpenAI that will see the latter's text-to-image model, DALL-E 2, integrated into Shutterstock's platform in the coming months. The move comes at a time when the rise of text-to-image and text-to-video artificial intelligence (AI) models has put the stock image and video industry at risk of being diminished or made obsolete. The stock image giant appears to have answered this challenge with a first-mover strategy: selling AI-generated media content itself.
The stock-image giant also announced a "contributor fund" that will compensate creators when the company sells their work for use in training machine learning models. The move follows widespread criticism from artists, who allege that their work is being scraped from the web without consent to build these text-to-image AI systems. Additionally, Shutterstock said it will ban the sale of AI-generated art on its platform unless it was created with DALL-E.
“The mediums to express creativity are constantly evolving and expanding. We recognize that it is our great responsibility to embrace this evolution and to ensure that the generative technology that drives innovation is grounded in ethical practices,” Shutterstock’s CEO Paul Hennessy said in a prepared statement announcing the partnership.
Shutterstock and OpenAI first entered a strategic partnership in 2021, under which Shutterstock sold images and metadata to OpenAI to help create DALL-E. Now DALL-E's output will compete with the very creators whose work was used to train the model.
“We’re excited for Shutterstock to offer DALL-E images to its customers as one of the first deployments through our application programming interface (API), and we look forward to future collaborations as artificial intelligence becomes an integral part of artists’ creative workflows,” said Sam Altman, CEO of OpenAI, in the statement.
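For readers curious what a deployment "through our API" involves in practice, here is a minimal sketch of generating an image with OpenAI's public Image API via the openai Python library. The prompt, image size, and environment variable name are illustrative assumptions; the exact parameters of Shutterstock's integration have not been disclosed.

    import os
    import openai

    # Authenticate with an OpenAI API key read from the environment
    # (the variable name here is an illustrative assumption).
    openai.api_key = os.environ["OPENAI_API_KEY"]

    # Request a single 1024x1024 image from the DALL-E image endpoint.
    response = openai.Image.create(
        prompt="a red vintage bicycle leaning against a brick wall, studio lighting",
        n=1,
        size="1024x1024",
    )

    # The response includes a URL pointing to the generated image.
    print(response["data"][0]["url"])

In a stock-media setting, a platform would wrap a call like this behind its own interface, handling prompt input, licensing, and content moderation before surfacing the generated image to customers.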
Recently, Meta, the parent company of Facebook, Instagram, and WhatsApp, unveiled an AI system called ‘Make-A-Video’ that generates short video clips from a text description of the desired scene. Meta also shared a research paper detailing the generative model, stating that it is trained on pairs of images and captions, along with unlabelled video footage, sourced from the WebVid-10M and HD-VILA-100M datasets, which include stock video footage from sites like Shutterstock. Meta, however, has not released the model to the public.