Adventures in latent space

When AI images were mind-blowing: Early users recall the first days of DALL-E 2

How a group of friends found themselves at the center of a fierce debate about the future of art.

Benj Edwards – Apr 18, 2024 11:00 am UTC

An AI-generated image from DALL-E 2 created with the prompt "A painting by Grant Wood of an astronaut couple, american gothic style." Credit: AI Pictures That Go Hard / X
When OpenAI’s DALL-E 2 debuted on April 6, 2022, the idea that a computer could create relatively photorealistic images on demand based on just text descriptions caught a lot of people off guard. The launch began an innovative and tumultuous period in AI history, marked by a sense of wonder and a polarizing ethical debate that reverberates in the AI space to this day.
Last week, OpenAI turned off the ability to purchase new generation credits for the web version of DALL-E 2, effectively killing it. From a technological point of view, it’s not too surprising that OpenAI recently began winding down support for the service. The 2-year-old image generation model was groundbreaking for its time, but it has since been surpassed by DALL-E 3’s higher level of detail, and OpenAI has recently begun rolling out DALL-E 3 editing capabilities.
But for a tight-knit group of artists and tech enthusiasts who were there at the start of DALL-E 2, the service’s sunset marks the bittersweet end of a period when AI technology briefly felt like a magical portal to boundless creativity. “The arrival of DALL-E 2 was truly mind-blowing,” illustrator Douglas Bonneville told Ars in an interview. “There was an exhilarating sense of unlimited freedom in those first days that we all suspected AI was going to unleash. It felt like a liberation from something into something else, but it was never clear exactly what.”

Rise of the latent space astronauts
Before DALL-E 2, AI image generation tech had been building in the background for some time. Since the dawn of computers with graphical displays in the 1950s, people have been creating images with them. As early as the 1960s, artists like Vera Molnar, Georg Nees, and Manfred Mohr let computers do the drawing, generatively creating artwork using algorithms. Experiments from artists like Karl Sims in the 1990s led to one of the earliest introductions of neural networks into the process.
Use of AI in computer art picked up again in 2015 when Google’s DeepDream used a convolutional neural network to bring psychedelic details to existing images. Then came generators based on Transformer models, an architecture introduced in 2017 by a group of Google researchers. OpenAI’s DALL-E 1 debuted as a tech demo in early 2021, and Disco Diffusion launched later that year. Despite these precursors, DALL-E 2 arguably marked the mainstream breakout point for text-to-image generation, allowing each user to type a description of what they wanted to see and have a matching image appear before their eyes.
When OpenAI first announced DALL-E 2 in April 2022, certain corners of Twitter quickly filled with examples of surrealistic artworks it generated, such as teddy bears as mad scientists and astronauts on horseback. Many people were genuinely shocked. “Ok it’s fake ?? tell me it’s fake. April fool joke a bit late,” read one early reaction on Twitter. “My mind can only be blown so many times. I can’t take much more of this,” wrote another Twitter user in May.
Other examples of DALL-E 2 artwork collected in threads soon followed, all of which were flowing from OpenAI and a group of 200 handpicked beta testers. The images OpenAI released on April 6, 2022, included generations from prompts such as “Teddy bears mixing sparkling chemicals as mad scientists in the style of steampunk, a 1990s Saturday morning cartoon, and digital art,” “a photo of an astronaut riding a horse,” “a Shiba Inu dog wearing a beret and black turtleneck,” “a bowl of soup that looks like a monster knitted out of wool, made of plasticine and spray painted on a wall,” and “a sea otter in the style of Girl with a Pearl Earring by Johannes Vermeer.”
When OpenAI began handing out those beta testing invitations, the common bond quickly spawned a small community of artists who felt like pioneers exploring the new technology together. “There was a wild time where there were a few artists playing around with it. We all became friends,” said conceptual artist Danielle Baskin, who first received an invitation to use DALL-E 2 on March 30, 2022, and began testing in mid-April. “When I first got access, I felt like I had a portal into infinite alternate worlds. I didn’t think of it as ‘art making’; it felt like playing. I’d stay awake for hours just exploring.”
Because each DALL-E image sprang forth from a written prompt like “a photo of a statue slipping on ice” (drawing from associations gained in training between captions and images), the beta testers found themselves merging language and their visual imaginations in novel ways. “It was like being set loose in a lab,” said an artist named Lapine in an interview with Ars. Lapine received early access to DALL-E 2 on April 6 and began sharing her generations on Twitter. “I was using descriptive language in a way I had not previously.”

Benj Edwards is an AI and Machine Learning Reporter for Ars Technica.