Refik Anadol’s data paintings and sculptures translate the logic of new media technology into spatial design to create immersive experiences based on vast datasets of nature, history, human activity, and art history.
Series of Artworks from Machine Hallucinations—MoMA Dreams
2021-2022
Generative Study I
AI Data Paintings
Anadol and his team created a unique exhibition of AI data paintings by training an AI model on the public metadata of The Museum of Modern Art’s collection, which spans more than 200 years of art. Generative Study is from a series of algorithmic AI data paintings showcasing Anadol’s collaboration with GAN algorithms at the intersection of technology and aesthetics. Each frame in the series displays a cluster of chosen “latent space sequences,” as the artist moves through serendipitous allusions to modern visual expressions in the machine-mind.
Machine Hallucinations—MoMA Dreams
2021-2022
Machine Hallucinations is a multi-year research project from Refik Anadol Studio (RAS) that investigates data aesthetics based on collective visual memories of humanity. Machine Hallucinations - MoMA expands on this vision by processing 138,151 pieces of metadata from the entire MoMA archives in the mind of a machine.
Data Universe - MoMA is a global AI Data Painting simulating a latent walk among the museum’s digitized collection. It combines RAS’s vision of handling data within a universe that it creates for itself with their approach to data visualization’s latent space as a locus for never-ending, self-generating contemplation.
Using NVIDIA StyleGAN2 ADA to capture the machine’s “hallucinations” of MoMA’s vast archive of modern art in a multi-dimensional space, RAS trains a unique AI model with subsets of the collection, creating embeddings in 1024 dimensions. These hallucinations construct new aesthetic images and color combinations through unique lines drawn by algorithmic connections.
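A “latent walk” of this kind can be illustrated with a short sketch. The snippet below is not the studio’s code; it simply interpolates between two random vectors in a 1024-dimensional latent space using spherical interpolation, a common technique for producing smooth GAN latent walks. The generator that each interpolated vector would be fed to is left out and only mentioned in a comment.

```python
import math
import random

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors, commonly
    used for smooth 'latent walks' through a GAN's latent space."""
    dot = sum(a * b for a, b in zip(z0, z1))
    n0 = math.sqrt(sum(a * a for a in z0))
    n1 = math.sqrt(sum(b * b for b in z1))
    omega = math.acos(max(-1.0, min(1.0, dot / (n0 * n1))))
    s = math.sin(omega)
    return [
        (math.sin((1 - t) * omega) / s) * a + (math.sin(t * omega) / s) * b
        for a, b in zip(z0, z1)
    ]

random.seed(0)
DIM = 1024  # latent dimensionality mentioned in the text
z_start = [random.gauss(0, 1) for _ in range(DIM)]
z_end = [random.gauss(0, 1) for _ in range(DIM)]

# Ten evenly spaced points along the walk; in a real pipeline each z
# would be passed to a trained generator to render one frame.
walk = [slerp(z_start, z_end, i / 9) for i in range(10)]
print(len(walk), len(walk[0]))
```

Spherical interpolation is preferred over straight linear blending here because Gaussian latent vectors concentrate near a hypersphere, so great-circle paths stay in the region the generator was trained on.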
For MoMA - Fluid Dreams, Refik Anadol Studio’s signature fluid dynamics algorithm infinitely dreams about the MoMA archive. RAS synthesizes the vast data collected from MoMA archives into ethereal data pigments, and eventually into a representational form of fluid-inspired movements with the help of custom software and generative algorithms.
August 2020-April 2021
Through a collaboration between Dr. Taylor Kuhn, coordinator of the Human Connectome Project (HCP) at UCLA, and technology partners Siemens and NVIDIA, Refik Anadol Studio (RAS) develops a dynamic network at the intersection of neuroscience and art to study fundamental questions about the human brain.
This experience showcases visual re-imaginings of the relationship between the form (spatial connections) and function (temporal connections) of the human mind.
RAS will exhibit the first visuals of this research collaboration in an immersive 3D-printed art installation, Sense of Space, at the 17th Architecture Biennale in Venice, Italy.
Approximately 70 TB of multimodal Magnetic Resonance Imaging (MRI) data are used to train machine-learning algorithms, including structural, diffusion (DTI), and functional (fMRI) scans of people ranging from newborns to nonagenarians and beyond. The algorithms discover patterns and imagine the development of brain circuitry throughout the human lifespan.
These pipelines initially perform many low-level tasks, including spatial artifact/distortion removal, surface generation, cross-modal registration, and alignment to standard space. RAS is then able to capitalize on the high-quality multimodal data offered by the HCP to investigate the living, functioning human brain as people perform tasks and experience mental states (e.g., discrete emotions, music listening, and creating novel music).
Preprocessing pipelines developed by the HCP are followed by post-processing generation of cortical surfaces and regional segmentations, structural connection maps, and functional network maps.
The complex multimodal data, representing the neural structure and function of all participants, ages 0-100, is then aligned and registered to a standard brain atlas. The registered data is used to train the generative variational autoencoder-GAN (VAE-GAN) model.
A diffusion (DTI)-based, tissue-segmented image is generated to perform anatomically constrained tractography. To capture the distribution of neural fiber orientation densities, multi-shell, multi-tissue (MSMT) analysis with constrained spherical deconvolution (CSD) is performed. The MSMT response functions for white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF) are estimated and used to create a 4D image encoding the tissue densities and fiber orientation directions. From these, deterministic tractography is conducted to generate streamline fibers representing structural connections throughout both cerebral hemispheres.
Lastly, the spherical-deconvolution informed filtering of tractograms (SIFT) algorithm is used to reduce the overall streamline count and provide more biologically meaningful estimates of structural connection density.
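As a rough illustration of the deterministic tracking step, the sketch below follows a toy fiber-orientation field point by point from a seed location. The field here is hypothetical, standing in for the CSD-derived orientation distributions; the real pipeline (response-function estimation, anatomical constraints, SIFT filtering) is far more involved.

```python
import math

def peak_direction(x, y, z):
    """Toy fiber-orientation field standing in for the CSD-derived
    orientation distributions (hypothetical: a simple circulating
    field in the x-y plane)."""
    norm = math.hypot(-y, x) or 1.0
    return (-y / norm, x / norm, 0.0)

def track_streamline(seed, step=0.5, n_steps=200):
    """Deterministic tractography in miniature: repeatedly step along
    the local peak fiber orientation from a seed point, collecting
    the visited points as one streamline."""
    pts = [seed]
    x, y, z = seed
    for _ in range(n_steps):
        dx, dy, dz = peak_direction(x, y, z)
        x, y, z = x + step * dx, y + step * dy, z + step * dz
        pts.append((x, y, z))
    return pts

streamline = track_streamline((1.0, 0.0, 0.0))
print(len(streamline))
```

Real implementations also apply stopping criteria (tissue boundaries, curvature limits) and seed many thousands of streamlines; this sketch shows only the core step-along-the-field loop.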
Using the fiber orientation distributions as an input for the generative VAE-GAN model, a synthetic form is generated by sampling the latent space to represent all possible brain structures.
The final structural and visual output is a synthetic brain, imagined by the VAE-GAN using the entire HCP data set. With this synthetic tractogram, the viewer is able to track the development of the brain across the human lifespan. This includes the functional connectivity of the mind and the moment one perceives the spatial and temporal dynamics of the architectural space, experiencing a collective consciousness in that fabric of space-time.
A VAE-based generative model is trained to learn unsupervised latent representations of fMRI data. The feature space encodes representations of the diffusion MRI data and uses probabilistic sampling to extract observations to generate millions of latent forms.
This approach makes it possible to model complex and high-dimensional data such as volumetric data of 3D-brain structures, including their complex networks of structural and functional connectivity between different regions. This also allows the generation of similar synthetic data where real data is limited.
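The probabilistic sampling step described above can be sketched in a few lines. Assuming a learned posterior described by a per-dimension mean and spread (the `mu` and `sigma` values below are illustrative placeholders, not the studio's parameters), new latent forms are drawn as z = mu + sigma * eps with eps from a standard normal:

```python
import random

random.seed(42)
LATENT_DIM = 8  # toy size; the studio's model is far larger

# Hypothetical encoder output for one scan: a mean and a spread per
# latent dimension (the VAE's learned posterior parameters).
mu = [0.1 * i for i in range(LATENT_DIM)]
sigma = [0.5] * LATENT_DIM

def sample_latent(mu, sigma):
    """Reparameterization-style draw: z = mu + sigma * eps,
    with eps drawn from a standard normal."""
    return [m + s * random.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]

# Probabilistic sampling yields many distinct latent forms from one
# learned distribution; in the full pipeline each z would be passed
# to the decoder to synthesize a new brain-like volume.
forms = [sample_latent(mu, sigma) for _ in range(1000)]
print(len(forms), len(forms[0]))
```

Because each draw perturbs the same learned distribution, this is also the mechanism that lets a trained model generate plausible synthetic data where real samples are scarce.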
With their research, RAS attempts to solve the limited data problem in medical imaging by generating synthetic multi-sequence brain MR images using GANs for data augmentation. This allows for greater diagnostic accuracy and reliability, which opens up possibilities for applications such as rare disease diagnosis where there’s a lack of data.
Refik Anadol is a media artist, director, and pioneer in the aesthetics of machine intelligence whose work explores the intersection of humans and machines. Taking the data that flows around us as his primary material and the neural network of a computerized mind as his collaborator, Refik offers radical visualizations of our digitized memories and the possibilities of architecture, narrative, and the body in motion. His site-specific parametric data sculptures, live audio/visual performances, and immersive installations take many forms, while encouraging us to rethink our engagement with the physical world, its temporal and spatial dimensions, and the creative potential of machines.
Studio Team: Alex Morozov, Arda Mavi, Carrie He, Christian Burke, Danny Lee, Efsun Erkilic, Ho Man Leung, Kerim Karaoglu, Kyle McLean, Nicholas Boss, Nidhi Parsana, Pelin Kivrak, Raman K. Mustafa, Rishabh Chakarabarty, Toby Heinemann
refikanadol.com | Instagram | Twitter | Vimeo | YouTube
AI-generated art has reached the Museum of Modern Art in New York City. Refik Anadol’s piece, Machine Hallucinations: MoMA, uses a machine-learning model to interpret the visual and informational data of MoMA’s collection. It reflects the wave of excitement around generative AI’s creative tools for amateur and professional artists alike.
Refik Anadol, the director of Refik Anadol Studio, will present Machine Hallucinations: MoMA, which processes 138,151 pieces of metadata from the Museum of Modern Art without supervision. Using StyleGAN2 ADA, the studio captures the machine’s "hallucinations" of modern art in a multidimensional space.
Refik Anadol Studio (RAS) embarks upon a new journey to explore the architecture of the human brain by combining advanced neuroimaging techniques with cutting-edge AI and multi-modal data visualization tools.
Go behind the scenes with one of our amazing AI Art Gallery artists for a virtual tour of their innovative studios, and learn how AI has helped shape their creative process.
This panel discussion with the GTC AI Art Gallery artists, Daniel Ambrosi, Refik Anadol, and Helena Sarin will explore their personal journeys that led to connecting AI and fine art, how the technology has influenced their artistic process, why AI is important in the broader field of fine art, how art education intersects with AI education, and whether AI will be capable of achieving autonomous control over the creative process.