This fascinating project is built from physical handmade ceramic pieces that are 3D-scanned and then transformed into AI-powered avatars in a sci-fi universe. In the story, the ceramic heads are memorabilia from Humanity's Memorial, long lost in the galaxy. They do their best to spread tales about life in the solar system while looking for funds to support their simulation and research practices.
2022
This 3D-rendered video was created with NVIDIA Audio2Face and Blender, using a 3D-scanned ceramic character and an AI-generated voice. An AI text-to-image transformation was applied to some of the video frames, which were then used as guides for a style-transfer effect.
It was the pandemic that inspired Vanessa Rosa to imagine stories about Humanity's Memorial. She first started creating elongated humanoid heads simply as an experiment. She then 3D-scanned them and discovered how to animate the characters with motion capture. The storytelling slowly emerged from the artistic process: something about how much data we all produce in our everyday lives, how we can simulate our personalities with AI language bots, and how the details of our lives will be accessible to future generations. Human tales could be eternal in a never-ending simulation.
“A brief history of consciousness in the simulation” is an introduction to the Little Martians fictional universe. It explains how Humanity's Memorial emerges from all the data humans produce, collect, and use as the basis for creating simulated realities. The story was written by the artist, who then selected AI voices for the characters. After testing different solutions, the final voices were chosen from Play.ht's ultra-realistic offerings.
NVIDIA® Omniverse Audio2Face™ was crucial to the Little Martians workflow, offering the most advanced system to date for automatically animating a face from nothing more than an audio clip. The Audio2Emotion feature creates full-face motion comparable to a real actor's. For this project, the artist used an NVIDIA RTX™ 5000 GPU. The animation weights are exported in JSON format and imported into Blender through a plug-in called Faceit.
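As a sketch of this hand-off, the exported JSON can be parsed into per-frame pose weights before they are keyframed onto shape keys in Blender. The key names below (`facsNames`, `weightMat`, `exportFps`) are assumptions about the export layout, not a documented schema; adjust them to match the actual file:

```python
import json

def load_blendshape_weights(path):
    """Parse an Audio2Face-style JSON export into per-frame pose weights.

    Assumed layout (illustrative, verify against a real export):
      facsNames  - one name per blendshape pose
      weightMat  - numFrames x numPoses weight matrix
      exportFps  - playback rate of the exported animation
    """
    with open(path) as f:
        data = json.load(f)
    names = data["facsNames"]
    frames = data["weightMat"]
    fps = data.get("exportFps", 30)
    # One {pose_name: weight} dict per animation frame
    return fps, [dict(zip(names, row)) for row in frames]
```

From here, each frame's dictionary maps directly onto shape-key values, whether the keyframing is done by hand, through Faceit, or with a small `bpy` script.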
Several 3D scans Vanessa took of historical sites were used as environments, including locations in Portugal, Brazil, and the USA. The artist then added custom lighting, camera movements, and character animations. Scans of museum sculptures (from the Asian Art Museum in San Francisco and the Met in New York) were also added as characters, alongside the artist's floating ceramic heads.
Some frames from the 3D render are then modified with the open-source Stable Diffusion Photoshop plug-in by Christian Cantrell. The transformed frames are used as guides for EbSynth (style transfer with optical flow), and a new animation is compiled in After Effects, merging 2D painting with 3D animation. Finally, a different kind of animation was created with Deforum.art's open-source code, which is based on Stable Diffusion and produces AI interpolations.
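The frame hand-off in this pipeline can be sketched as a small script: rendered frames sit in one folder, every Nth frame is copied out for Stable Diffusion stylization, and the stylized copies (keeping the same filenames) then serve as EbSynth keyframes. The folder names and stride here are illustrative assumptions, not the project's actual settings:

```python
from pathlib import Path
import shutil

def pick_keyframes(frames_dir, keys_dir, every_n=20):
    """Copy every Nth rendered frame into a keyframe folder for stylization.

    EbSynth matches keyframes to source frames by filename, so the copies
    keep their original names. The stride (every_n) is an illustrative value.
    """
    keys_dir = Path(keys_dir)
    keys_dir.mkdir(parents=True, exist_ok=True)
    frames = sorted(Path(frames_dir).glob("*.png"))
    picked = frames[::every_n]
    for frame in picked:
        shutil.copy(frame, keys_dir / frame.name)
    return [frame.name for frame in picked]
```

A sparser stride means fewer frames to paint or stylize by hand, at the cost of more drift between keyframes for the optical-flow propagation to cover.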
The prompts used for the AI modification merged the artistic styles of Remedios Varo, Odilon Redon, and Ernst Haeckel to create a new aesthetic on top of the initial images.
The credits animation was created with the NVIDIA StyleGAN3 model, trained on 2,000 photos Vanessa Rosa took of her own ceramics. Gene Kogan was responsible for the training.
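For reference, training a StyleGAN3 model on a custom photo set follows the official repository's two-step pattern. The resolution, batch size, and gamma below are illustrative values, not the settings used for this project:

```shell
# Step 1: pack the photos into the dataset format the repo expects
python dataset_tool.py --source=ceramics_photos/ \
    --dest=ceramics-512x512.zip --resolution=512x512

# Step 2: train a StyleGAN3-T config on a single GPU (values illustrative)
python train.py --outdir=training-runs --cfg=stylegan3-t \
    --data=ceramics-512x512.zip --gpus=1 --batch=16 --gamma=8.2 --mirror=1
```

With only a couple of thousand images, runs like this typically also lean on the repo's adaptive augmentation to avoid discriminator overfitting.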
Vanessa Rosa is a US-based Brazilian visual artist whose work merges physical and digital media into a storytelling continuum. Murals become portals to an imaginary world through projection mapping, and ceramics metamorphose into living entities with the aid of AI models. She creates fictional tales about world history in which the past and possible futures intertwine.
Vanessa has created art projects in many countries. Some of her most important works include the Little Martians sci-fi universe, the art-history children's book Diana's World, a painting about domestic violence for UN Women, a large-scale mural for Pioneer Works (NYC, USA), a mural for Le Centre (Cotonou, Benin), and coordination of the Sankofa project during the Rio de Janeiro Olympic Games.
Instagram | Twitter | Website
Explore the technical and creative process behind the sci-fi world Little Martians, which combines language models, text-to-speech, 3D scanning, 3D animation, and custom AI image generation to easily create an editable digital twin of any space, person, or object. With NVIDIA Omniverse tools like Audio2Face, it's even possible to give custom avatars animated voices and intelligence.
Learn about Vanessa Rosa's unique digital workflow, which uses Adobe tools, Blender, and Omniverse Audio2Face, along with Universal Scene Description (USD), to bring ceramic humanoids to life with a sci-fi twist.