Series of Artworks from Machine Hallucinations

REFIK ANADOL

2019-2020 / AI Data Painting

Refik’s Machine Hallucinations are explorations of what can happen at the intersection of new media, machine learning, neuroscience, and architecture. They come to life in immersive installations, data paintings, and audio/visual performances.

The Process

Machine Hallucinations is an ongoing project of data aesthetics based on collective visual memories of space, nature, and urban experiences. Since 2016, our studio has used DCGAN, PCGAN, and StyleGAN to train machine intelligence in processing these vast datasets and unfolding unrecognized layers of our external realities. We collect data from digital archives and social media platforms and process them with machine learning classification models such as CNNs, Variational Autoencoders, and deep ranking, filtering out noise and irrelevant data points. The sorted image datasets are then clustered into thematic categories to better understand the semantic context of the data universe.
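
As a concrete illustration of that sorting stage, here is a minimal sketch: embed each image with a pretrained CNN, then cluster the embeddings into thematic groups. The studio's actual models and pipeline are not public, so ResNet-50 and k-means are stand-in choices, and the file names are hypothetical.

```python
# Hedged sketch of thematic clustering: CNN features + k-means.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.cluster import KMeans

# CNN backbone as a feature extractor (classification head removed)
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(paths):
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    return backbone(batch).numpy()          # (N, 2048) feature vectors

# Cluster the embeddings into thematic categories
features = embed(["img_000.jpg", "img_001.jpg"])   # hypothetical file names
themes = KMeans(n_clusters=2).fit_predict(features)
```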

 

To capture these hallucinations from a multi-dimensional space, we use StyleGAN2 running on an NVIDIA DGX Station, which packs 500 TFLOPS of AI power into the world’s fastest workstation for leading-edge AI research and development. A StyleGAN2 model is trained on subsets of the sorted images, producing embeddings in 4096 dimensions. To make this complex spatial structure visually comprehensible, we apply dimensionality-reduction algorithms such as cuML’s UMAP to project it into a navigable 3-dimensional universe. The projection lets the audience fly through this latent space in an interactive browser and record their own journeys.
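
As a rough illustration of the projection step, the sketch below assumes precomputed 4096-dimensional embeddings and uses the GPU-accelerated UMAP from NVIDIA's RAPIDS cuML library to reduce them to a navigable 3D point cloud; the parameter values are illustrative, not the studio's.

```python
# Hedged sketch: project high-dimensional embeddings to 3D with cuML UMAP.
import numpy as np
from cuml.manifold import UMAP   # GPU-accelerated UMAP from RAPIDS cuML

embeddings = np.random.rand(10_000, 4096).astype(np.float32)  # stand-in data

reducer = UMAP(n_components=3, n_neighbors=15, min_dist=0.1)
points_3d = reducer.fit_transform(embeddings)   # (10000, 3) navigable point cloud
```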

The final stage of production reveals Refik Anadol Studio’s pioneering work in machine-generated aesthetics and synesthetic perception through data paintings, sculptures, and latent cinematic experiences. Fluid dynamics has inspired Anadol since the inception of Machine Hallucinations. The studio’s exploration of digital pigmentation and light, rendered with fluid-solver algorithms accelerated by GPU computation and real-time ray-traced lighting made possible by NVIDIA RTX GPUs, brings this inspiration to life in a masterfully curated, multi-channel, self-generating experience.
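
The studio's GPU fluid solvers are proprietary, but the core of many "digital pigment" effects is a semi-Lagrangian advection step in the style of Stam's stable fluids. A minimal CPU sketch, assuming a fixed velocity field:

```python
# Hedged sketch: transport a scalar "pigment" field through a velocity field.
import numpy as np
from scipy.ndimage import map_coordinates

def advect(field, vx, vy, dt):
    h, w = field.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Trace each cell backward along the velocity field...
    src_y, src_x = ys - dt * vy, xs - dt * vx
    # ...and bilinearly sample the field at the source position.
    return map_coordinates(field, [src_y, src_x], order=1, mode='wrap')

# Example: swirl a blob of pigment around the grid center.
n = 256
y, x = np.mgrid[0:n, 0:n] - n / 2
vx, vy = -0.05 * y, 0.05 * x                         # solid-body rotation
pigment = np.exp(-((x + 40) ** 2 + y ** 2) / 200.0)  # initial blob
for _ in range(100):
    pigment = advect(pigment, vx, vy, dt=1.0)
```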

 

Space Dreams transforms a vast dataset of 1.2 million images captured from the International Space Station, along with satellite images of Earth's topology, into a dynamic data painting. Each version of the machine’s dream sequences is derived from a different generative adversarial network (GAN), exploring AI’s capacity to reach its own subconscious and offering an avant-garde form of cartographic aesthetics.

 

In Urban Dreams, Anadol offers new insights into the representational possibilities emerging from the intersection of advanced technology, urban memory, and contemporary art. The work draws on over 100 million images of New York City and Berlin, focusing specifically on typical public spaces. This content is used to train a StyleGAN2 model to identify and learn patterns hidden in urban scenery, showcasing how the elusive process of memory retrieval can be transformed into data collections.

Nature Dreams is a series of synesthetic reality experiments based on StyleGAN2 algorithms, using over 69 million images of national parks, Iceland, and other natural wonders to train a generative model that lets the machine dream about the most mesmerizing features of Mother Nature. This transformation of the data collection becomes not just a means of visualizing information, but also a transmutation of our desire to experience nature into a poetic visual form.


Refik Anadol

Refik Anadol is a media artist, director, and pioneer in the aesthetics of machine intelligence whose work explores the intersection of humans and machines. Taking the data that flows around us as his primary material and the neural network of a computerized mind as his collaborator, Refik offers radical visualizations of our digitized memories and the possibilities of architecture, narrative, and the body in motion. His site-specific parametric data sculptures, live audio/visual performances, and immersive installations take many forms while encouraging us to rethink our engagement with the physical world, its temporal and spatial dimensions, and the creative potential of machines.

Studio Team: Alex Morozov, Brian Chung, Carrie He, Christian Burke, Danny Lee, Efsun Erkilic, Ho Man Leung, Kerim Karaoglu, Kyle McLean, Nicholas Boss, Nidhi Parsana, Pelin Kivrak, Raman K. Mustafa, Toby Heinemann 

www.refikanadol.com  
Instagram  |  Twitter

Bryant Park Dreamscape (Before the Virus)

Daniel Ambrosi

2020 / Computational Photography, Artificial Intelligence  

For Daniel Ambrosi, computational photography is about more than just creating a sense of place. It’s about conveying how we feel about the environment—viscerally, cognitively, and emotionally.  

The Process

Developing the "full scene" AI-augmented artwork started with capturing 45 photos from a single location, which were stitched (Autopano Pro) and blended (Aurora HDR Pro) into a giant panorama of more than 482 megapixels, then cropped and sweetened in Photoshop. The result was processed with a proprietary version of Google DeepDream running on NVIDIA Quadro RTX GPUs.
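
Ambrosi's DeepDream variant is proprietary, but the heart of any DeepDream pipeline is gradient ascent on a chosen layer's activations. A minimal PyTorch sketch, with the network (GoogLeNet, as in the original DeepDream) and the layer choice as assumptions:

```python
# Hedged sketch: one DeepDream-style gradient-ascent loop. It amplifies
# whatever patterns the chosen layer already responds to in the image.
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
net = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).to(device).eval()

activations = {}
net.inception4c.register_forward_hook(lambda m, i, o: activations.update(out=o))

def dream_step(img, lr=0.05):
    img = img.clone().requires_grad_(True)
    net(img)
    loss = activations["out"].norm()     # maximize the layer's response
    loss.backward()
    with torch.no_grad():
        img += lr * img.grad / (img.grad.abs().mean() + 1e-8)  # normalized step
    return img.detach()

img = torch.rand(1, 3, 512, 512, device=device)  # stand-in for a panorama tile
for _ in range(20):
    img = dream_step(img)
```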

Finally, Daniel experimented with tools and techniques (including effects available in Filter Forge) to further explore the graphic possibilities inherent in his “Dreamscapes.” His most recent attempts have led to a series he calls Infinite Dreams, an exploration of Cubism-inspired "refracted" dreaming.

 

In the video Dreamscapes: The Tech Behind the Art, engineers Joseph Smarr (Google) and Chris Lamb (NVIDIA) offer a layman's explanation of how they modified Google's open-source DeepDream software to operate on multi-hundred-megapixel images. See how this innovative process enabled computational photography artist Daniel Ambrosi to take his work to new heights.


Daniel Ambrosi

Daniel Ambrosi is one of the founding creators of the emerging AI art movement and is noted for the nuanced balance he achieves in human-AI hybrid art. He has been exploring novel methods of visual presentation for almost 40 years, beginning at Cornell University, where he earned a Bachelor of Architecture degree and a master's in 3D graphics. In 2011, Daniel devised a unique form of computational photography that generates exceptionally immersive landscape images. More recently, his "Dreamscapes" add a powerful new graphics tool: a computer vision program that visualizes the inner workings of deep learning models.

www.dreamscapes.ai
Twitter

Artificial Remnants 2.0

SOFIA CRESPO & DARK FRACTURES

2020 / GAN, 3D-Style Transfer, GPT-2

Can artificial lifeforms help us rethink our connection with nature? Sofia Crespo uses the lens of new technology to explore how their presence in the digital world can illuminate our common bonds.

The Process

The artists used 3D models of hundreds of insects, literature about their life cycles, and close-ups of their textures to create this unique work. The lack of large, open datasets called for a fair amount of data augmentation.
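
What that augmentation might look like in practice: a sketch of typical image augmentations with torchvision. The exact transforms the artists used are not public, so these are illustrative choices.

```python
# Hedged sketch: common augmentations for stretching a small image dataset.
import torchvision.transforms as T

augment = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomRotation(15),
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    T.RandomResizedCrop(256, scale=(0.8, 1.0)),  # random crop + rescale
])
# Applying `augment` to each PIL image yields a new variant every epoch.
```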

 

The insects were generated using a GAN trained on data from 3D models. They were then textured with convolutional neural networks (CNNs) and given names and descriptions by a fine-tuned GPT-2 language model. The process ran on NVIDIA Quadro and GeForce RTX GPUs using the TensorFlow and PyTorch frameworks, with cables.gl powering the web experience.
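
The artists' fine-tuned checkpoint is not public, but the naming step looks roughly like the sketch below, which uses the base GPT-2 model via Hugging Face transformers; the prompt format is an assumption.

```python
# Hedged sketch: sample an insect name and description from GPT-2.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # stand-in for the fine-tuned model
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "Species name and description:"            # assumed prompt format
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,        # sampling yields varied, "dreamed" descriptions
    top_p=0.9,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```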

 

Artificial intelligence is used in all parts of the process—from generating 3D forms to giving them surface texture and, finally, descriptions.


SOFIA CRESPO

Sofia Crespo is fascinated by biology-inspired technologies. One of her main focuses is the way organic life uses artificial mechanisms to simulate itself and evolve, implying that technologies are a biased product of the organic life that created them, not a completely separate object. She’s also concerned with the dynamic change in the role of artists working with machine learning techniques.

www.sofiacrespo.com
Instagram  |  Twitter

DARK FRACTURES

Dark Fractures is a studio meditating on ecology, nature, and generative arts, with a focus on giving non-humans new forms of presence and life in digital space. The team is highly influenced by the development of new deep learning technologies, driving their passion to explore meditations on nature as a way of growing an appreciation for biodiversity and our natural world.

www.darkfractures.com
Instagram  |  Twitter


Hyperbolic Composition I: Genesis

Scott Eaton

2019 / Drawing, GAN, custom dataset

When Scott wants to augment his artistic practice, he turns to deep neural networks. The expressive, novel figurative representations that emerge from that exploration take on a life of their own.

The Process

Scott’s ‘Figures’ dataset comprises more than 30,000 unique photographs that he shot in the studio with a diverse set of volunteers over a two-year period. The usefulness of a neural network is often directly related to the quality of its training inputs, so carefully curating these photographs was critical to building AI tools for his artistic practice.

 

A selection of time-lapse videos from his drawing sessions is translated by the “Figures” network, an image-to-image translation generative adversarial network (GAN) based on NVIDIA’s pix2pixHD. The network continually assesses the lines, shapes, and contours of each drawing for body patterns it ‘recognizes,’ then shades and renders them as faithfully as possible.
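
As a rough sketch of a single translation pass, assume a generator G already trained with NVIDIA's pix2pixHD code (the real model-loading utilities live in that repository); everything else here is generic PyTorch.

```python
# Hedged sketch: feed a line drawing through an image-to-image generator.
import torch
from PIL import Image
import torchvision.transforms.functional as TF

def translate(G, drawing_path, size=1024):
    img = Image.open(drawing_path).convert("RGB").resize((size, size))
    x = TF.to_tensor(img).unsqueeze(0) * 2 - 1     # NCHW, scaled to [-1, 1]
    with torch.no_grad():
        y = G(x)                                   # generator forward pass
    y = (y.squeeze(0).clamp(-1, 1) + 1) / 2        # back to [0, 1]
    return TF.to_pil_image(y)                      # shaded, rendered figure
```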

 

Here, the drawing process for the master image of Fall of the Damned is underway. The final artwork is 2.2 meters tall, so the preparatory drawing had to be incredibly detailed. The final drawing was too big to fit in GPU memory, so it was processed through the neural network in chunks of 8192x4096 pixels before being combined into the final 20500x15200-pixel image.
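
A minimal sketch of that chunking strategy: run the model tile by tile over an array too large for GPU memory, then reassemble. The tile size matches the chunks described above; production pipelines typically overlap and blend tiles to hide seams, which is omitted here.

```python
# Hedged sketch: tile-wise processing of an oversized image.
import numpy as np

def process_in_chunks(image, model_fn, tile_h=4096, tile_w=8192):
    """image: (H, W, C) array; model_fn must preserve each tile's shape."""
    h, w, _ = image.shape
    out = np.zeros_like(image)
    for y in range(0, h, tile_h):
        for x in range(0, w, tile_w):
            tile = image[y:y + tile_h, x:x + tile_w]   # edge tiles may be smaller
            out[y:y + tile_h, x:x + tile_w] = model_fn(tile)
    return out
```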

Scott Eaton

Scott Eaton is an artist, educator, and creative technologist residing in London, UK. His work pushes the boundaries of figurative representation by combining traditional craft with contemporary digital tools. He earned his master’s degree from the MIT Media Lab and studied academic drawing and sculpture in Florence, Italy. In addition to his own practice, Scott frequently collaborates with other artists and studios and consults widely in the visual effects, animation, and games industries.

www.scott-eaton.com
Twitter

Plankton Bay, Shallow Waters

Helena Sarin

2020 / Generative Mixed Media

It’s not uncommon for sketches to lead to great works of art. But what if they actually become the art itself? Helena’s #latentDoodles series shows exactly how that can work.

The Process

Using her own drawings, sketches, and photographs as datasets (between 100 and 2,000 samples), Helena trains her models to generate new visuals as the basis of her compositions. She uses SNGAN and CycleGAN exclusively, finding that state-of-the-art architectures perform poorly on small, diverse datasets. She runs everything on a build-your-own local server, which now includes both NVIDIA Quadro and GeForce GPUs.

Having abandoned the easel in favor of GANs, Helena often organizes her works into a grid that’s randomly generated at inference time as part of her artistic pipeline. This technique adds a layer of complexity and depth to her work.

Helena uses this proven, repeatable process to tame GANs and showcase many images as one work. This approach has helped define her unique design aesthetic.
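
A hedged sketch of that grid idea, with G standing in for one of her trained latent-to-image generators (SNGAN-style); the latent size and layout are assumptions.

```python
# Hedged sketch: sample latents, generate images, and tile a randomly
# shuffled batch into a single composition.
import torch
from torchvision.utils import make_grid, save_image

def latent_grid(G, z_dim=128, rows=4, cols=4, device="cpu"):
    n = rows * cols
    z = torch.randn(n, z_dim, device=device)   # fresh random latents each run
    with torch.no_grad():
        imgs = G(z)                            # (n, 3, H, W) generated samples
    imgs = imgs[torch.randperm(n)]             # randomize placement in the grid
    return make_grid(imgs, nrow=cols, normalize=True)

# e.g., save_image(latent_grid(G), "latent_doodles_grid.png")
```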

Helena Sarin

Visual artist and software engineer Helena Sarin has always worked with cutting-edge technologies for tech companies. At the same time, she has done commission work in watercolor and pastel, as well as in applied arts like fashion, food and drink styling, and photography. But art and software ran as parallel tracks in her life, with all her art analog, until she discovered generative adversarial networks (GANs). She founded Neural Bricolage to demystify, promote, and display AI-assisted artwork. Her artist book The Book of GANesis sold out immediately last year, and today she’s working on a second.

www.neuralbricolage.com
Twitter

Madonna

Oxia Palus

2020 / GANs, GPUs, Multispectral Imaging

Oxia Palus is on a mission to use the power of AI to uncover masterpieces lost to the ages. It’s a potent pairing of history and innovative technology that can give us true insight into lost art.

The Process

Step One

The Leonardeschi were a group of artists who worked in the studio of, or under the influence of, Leonardo da Vinci; 225 of their paintings were used to train the NVIDIA pix2pixHD model.

The Madonna of the Carnation diagram shows one application of the model: mapping an edge map back to a painting.

Training the pix2pixHD model with coarse-grained semantic labels helps guide it in recognizing attributes such as hair and skin.
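
As a small illustration of how such (edge map, painting) training pairs can be assembled, here is a sketch using OpenCV; Canny is an assumed stand-in for whatever edge extraction Oxia Palus actually used, and the thresholds are illustrative.

```python
# Hedged sketch: build an (input, target) pair for an
# edge-map-to-painting translation model.
import cv2

def make_training_pair(painting_path):
    painting = cv2.imread(painting_path)                      # target image
    gray = cv2.cvtColor(painting, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, threshold1=100, threshold2=200)   # binary edge map
    return edges, painting
```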

Step Two

Oxia Palus manually co-registered (aligned) the x-ray of da Vinci's The Virgin of the Rocks, from Imperial College London, with the x-ray trace produced by the National Gallery.

They then further edited the co-registered image by hand, adding missing trace components and labeling attributes such as skin, hair, and clothing.

 

Translucent colored labeling was used to prevent saturation of the underlying co-registered image.

Step Three

 

The left video shows a varying set of input color maps for a segment of The Virgin of the Rocks x-ray; the right video shows the corresponding output of the pix2pixHD Leonardeschi model. The x-ray is not a perfect ghostly outline, but by varying it slightly, a smooth video of potential reconstructions emerges.
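
A minimal sketch of how such a frame sequence can be produced, assuming a trained generator G that maps color maps to paintings; the per-frame jitter scheme is an assumption standing in for however the input maps were actually varied.

```python
# Hedged sketch: perturb the input color map slightly each frame and run it
# through a trained pix2pixHD-style generator G to collect video frames.
import torch

def reconstruction_frames(G, color_map, n_frames=60, jitter=0.02):
    """color_map: (1, C, H, W) tensor in [-1, 1]; returns a list of frames."""
    frames = []
    with torch.no_grad():
        for _ in range(n_frames):
            varied = (color_map + jitter * torch.randn_like(color_map)).clamp(-1, 1)
            frames.append(G(varied))   # one potential reconstruction per frame
    return frames
```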