By combining advanced neuroimaging techniques with cutting-edge AI and multi-modal data visualization tools, Refik Anadol Studio explores the architecture of the human brain from an interdisciplinary perspective.
August 2020-April 2021
Through a collaboration with Dr. Taylor Kuhn, coordinator of the Human Connectome Project (HCP) at UCLA, and technology partners Siemens and NVIDIA, Refik Anadol Studio (RAS) is developing a dynamic network at the intersection of neuroscience and art to study fundamental questions about the human brain.
This experience showcases visual re-imaginings of the relationship between form (spatial connections) and function (temporal connections) of the human mind.
RAS will exhibit the first visuals of this research collaboration in an immersive 3D-printed art installation, Sense of Space, at the 17th Architecture Biennale in Venice, Italy.
Approximately 70 TB of multimodal magnetic resonance imaging (MRI) data, including structural, diffusion (DTI), and functional (fMRI) scans of participants ranging in age from birth to over ninety years, are used to train machine-learning algorithms that discover patterns and imagine the development of brain circuitry throughout the human lifespan.
Preprocessing pipelines developed by the HCP first perform low-level tasks: spatial artifact and distortion removal, surface generation, cross-modal registration, and alignment to standard space. Post-processing then generates cortical surfaces and regional segmentations, structural connection maps, and functional network maps. These pipelines allow RAS to capitalize on the high-quality multimodal data offered by the HCP to investigate the living, functioning human brain as people perform tasks and experience mental states (e.g., discrete emotions, music listening, creating novel music).
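The HCP's actual pipelines rely on dedicated neuroimaging suites (FSL, FreeSurfer, Connectome Workbench) and are not reproduced here. Purely as an illustration of the standard-space alignment step, a minimal sketch using the antspyx package, with hypothetical file names, might look like this:

```python
# Minimal sketch: affine + deformable alignment of a T1-weighted volume
# to a standard template, assuming the antspyx package (pip install antspyx).
# File names are hypothetical placeholders.
import ants

template = ants.image_read("mni152_t1_1mm.nii.gz")   # standard-space template
subject = ants.image_read("subject_t1.nii.gz")       # subject structural scan

# SyN = symmetric normalization (affine followed by deformable registration)
reg = ants.registration(fixed=template, moving=subject,
                        type_of_transform="SyN")

# The warped image now lives in template (standard) space.
ants.image_write(reg["warpedmovout"], "subject_t1_mni.nii.gz")
```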
The complex multimodal data, representing the neural structure and function of all participants, ages 0-100, are then aligned and registered to a standard brain atlas. The aligned data are used to train the generative VAE-GAN (variational autoencoder-generative adversarial network) model.
A diffusion (DTI)-based, tissue-segmented image is generated to perform Anatomically Constrained Tractography (ACT). To capture the distribution of neural fiber orientation densities, multi-shell, multi-tissue (MSMT) analysis with constrained spherical deconvolution (CSD) is performed: the MSMT response functions for white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF) are estimated, and the combined average response functions create a 4D image corresponding to the tissue densities and fiber orientation directions. From these, deterministic tractography is conducted to generate streamline fibers representing structural connections throughout both cerebral hemispheres.
Lastly, the Spherical-deconvolution Informed Filtering of Tractograms (SIFT) algorithm is used to reduce the overall streamline count and provide more biologically meaningful estimates of structural connection density.
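The text does not name the software behind these steps, but tissue segmentation for ACT, MSMT-CSD, deterministic tracking, and SIFT correspond closely to the MRtrix3 toolchain. A hypothetical sketch, driving MRtrix3 commands from Python with placeholder file names, might look like this:

```python
# Hypothetical sketch of the described tractography steps using MRtrix3
# commands driven from Python. File names are placeholders, not the
# studio's actual data.
import subprocess

def run(cmd):
    print("$", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Tissue-segmented image for Anatomically Constrained Tractography (ACT)
run("5ttgen fsl T1.mif 5tt.mif")

# MSMT response functions for WM, GM, and CSF
run("dwi2response dhollander dwi.mif wm_resp.txt gm_resp.txt csf_resp.txt")

# Multi-shell, multi-tissue CSD -> 4D fiber orientation distribution image
run("dwi2fod msmt_csd dwi.mif wm_resp.txt wmfod.mif "
    "gm_resp.txt gm.mif csf_resp.txt csf.mif")

# Deterministic tractography (SD_Stream), anatomically constrained
run("tckgen -algorithm SD_Stream -act 5tt.mif -seed_dynamic wmfod.mif "
    "-select 1000000 wmfod.mif tracks.tck")

# SIFT: filter streamlines toward biologically meaningful densities
run("tcksift tracks.tck wmfod.mif tracks_sift.tck")
```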
Utilizing the fiber orientation distributions as input to the generative VAE-GAN, a synthetic form is generated by sampling the latent space, which represents the range of possible brain structures.
The final structural and visual output is a synthetic brain, imagined by the VAE-GAN using the entire HCP dataset. With this synthetic tractogram, the viewer is able to track the development of the brain across the human lifespan. The overall project maps the functional connectivity of the mind at the moment one perceives the spatial and temporal dynamics of the architectural space, experiencing a collective consciousness in that fabric of space-time.
A VAE-based generative model is trained to learn unsupervised latent representations of the fMRI data. The same feature space encodes representations of the diffusion MRI data, and probabilistic sampling of this latent space extracts observations from which millions of latent forms are generated.
Using this approach, we are able to model complex, high-dimensional data such as volumetric 3D brain structures, including their intricate networks of structural and functional connectivity between different brain regions. It also allows us to generate similar synthetic data where real data is limited, as sketched below.
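As a rough illustration of the model family rather than the studio's actual architecture, a compact VAE-GAN over small 3D volumes might be sketched in PyTorch as follows; the 32-voxel resolution, layer sizes, and latent dimension are all illustrative assumptions:

```python
# Minimal VAE-GAN sketch in PyTorch (not the studio's actual model).
# The 32^3 volume resolution, channel counts, and 128-d latent space
# are illustrative assumptions; real multimodal MRI volumes are far larger.
import torch
import torch.nn as nn

LATENT = 128

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten())
        self.mu = nn.Linear(32 * 8**3, LATENT)
        self.logvar = nn.Linear(32 * 8**3, LATENT)

    def forward(self, x):
        h = self.conv(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT, 32 * 8**3)
        self.deconv = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, z):
        h = self.fc(z).view(-1, 32, 8, 8, 8)
        return self.deconv(h)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(32 * 8**3, 1))

    def forward(self, x):
        return self.net(x)

def reparameterize(mu, logvar):
    # Probabilistic sampling: z = mu + sigma * eps
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

# Generating a synthetic volume by sampling the latent prior:
decoder = Decoder()
z = torch.randn(1, LATENT)
synthetic_volume = decoder(z)          # shape: 1 x 1 x 32 x 32 x 32

# Encoding a volume and drawing a posterior sample for reconstruction:
encoder = Encoder()
mu, logvar = encoder(synthetic_volume)
z_post = reparameterize(mu, logvar)
```

In training, the decoder doubles as the GAN generator: it is optimized against the discriminator in addition to the VAE's reconstruction and KL terms, and sampling z from the prior, as above, is what yields new synthetic volumes where real data are scarce.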
With this research, RAS attempts to address the limited-data problem in medical imaging by generating synthetic multi-sequence brain MR images with GANs for data augmentation. This can improve diagnostic accuracy and reliability, opening up applications such as rare-disease diagnosis, where data are scarce.
Series of Artworks from Machine Hallucinations
2019-2020 AI Data Painting
Refik’s Machine Hallucinations are explorations of what can happen at the intersection of new media, machine learning, neuroscience, and architecture. They come to life in immersive installations, data paintings, and audio/visual performances.
This unique interactive web experience offers a tour of the studio's groundbreaking AI data paintings, based on its most recent experiments with NVIDIA StyleGAN2 models and NVIDIA Quadro RTX GPUs.
Experience optimized for Chrome.
Machine Hallucinations is an ongoing project of data aesthetics based on collective visual memories of space, nature, and urban experiences. Since 2016, our studio has used DCGAN, PCGAN, and StyleGAN to train machine intelligence in processing these vast datasets and unfolding unrecognized layers of our external realities. We collect data from digital archives and social media platforms and process them with machine learning classification models such as CNNs, Variational Autoencoders, and deep ranking, filtering out noise and irrelevant data points. The sorted image datasets are then clustered into thematic categories to better understand the semantic context of the data universe.
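As an illustrative sketch only, not the studio's production tooling, filtering and clustering an image archive with a pretrained CNN and k-means might look like the following; the file paths and cluster count are placeholders:

```python
# Hypothetical sketch: embed archive images with a pretrained CNN,
# then cluster them into thematic categories. Placeholder paths.
import numpy as np
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image
from sklearn.cluster import KMeans

weights = ResNet50_Weights.DEFAULT
cnn = models.resnet50(weights=weights)
cnn.fc = torch.nn.Identity()          # expose the 2048-d feature vector
cnn.eval()
preprocess = weights.transforms()

@torch.no_grad()
def embed(paths):
    feats = []
    for p in paths:
        img = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
        feats.append(cnn(img).squeeze(0).numpy())
    return np.stack(feats)

paths = ["archive/img_000.jpg", "archive/img_001.jpg"]  # placeholder archive
features = embed(paths)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
```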
To capture these hallucinations from a multi-dimensional space, we use StyleGAN2 on an NVIDIA DGX Station, the world's fastest workstation for leading-edge AI research and development, delivering 500 TFLOPS of AI power. A StyleGAN2 model is trained on subsets of the sorted images to process the archive, creating embeddings in 4096 dimensions. To understand this complex spatial structure visually, we use dimensionality-reduction algorithms such as cuML UMAP, projecting the embeddings into a navigable 3-dimensional universe. This projection enables the audience to use an interactive browser to virtually fly around this latent space and record their own journeys.
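A minimal sketch of that projection step, assuming GPU-accelerated cuML UMAP and random stand-in data in place of real StyleGAN2 embeddings:

```python
# Minimal sketch of the dimensionality-reduction step: projecting
# high-dimensional latent embeddings down to a navigable 3D space.
# Random 4096-d vectors stand in for real StyleGAN2 embeddings.
import numpy as np
from cuml.manifold import UMAP  # CPU alternative: `from umap import UMAP`

embeddings = np.random.rand(10000, 4096).astype(np.float32)  # placeholder
reducer = UMAP(n_components=3, n_neighbors=15, min_dist=0.1)
points_3d = reducer.fit_transform(embeddings)  # (10000, 3) fly-through coords
```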
The final stage of production reveals Refik Anadol Studio's pioneering work in machine-generated aesthetics and synesthetic perception through various forms of data paintings, sculptures, and latent cinematic experiences. Fluid dynamics has inspired Anadol since the inception of Machine Hallucinations. The studio's exploration of digital pigmentation and light through fluid-solver algorithms, accelerated by GPU computation and the real-time ray-traced lighting made possible by NVIDIA RTX GPUs, brings this inspiration to life in a masterfully curated, multi-channel, self-generating experience.
Space Dreams transforms a vast dataset of 1.2 million images captured from the International Space Station, along with satellite images of Earth's topography, into a dynamic data painting. Each version of the machine's dream sequences is derived from a different generative adversarial network (GAN), exploring AI's capacity to reach its own subconscious and offering an avant-garde form of cartographic aesthetics.
In Urban Dreams, Anadol offers new insights into the representational possibilities emerging from the intersection of advanced technology, urban memory, and contemporary art. The work uses over 100 million images from New York City and Berlin, focusing specifically on typical public spaces. The content is used to train a StyleGAN2 model to identify and learn patterns hidden in urban scenery, showcasing how the elusive process of memory retrieval transforms into data collections.
Nature Dreams is a series of synesthetic reality experiments based on StyleGAN2 algorithms, using over 69 million images of National Parks, Iceland, and other natural wonders to train a generative model for the machine to dream about the most mesmerizing features of our Mother Nature. This transformation of the data collection becomes not just a means of visualizing information, but also a transmutation of our desire for experiencing nature into a poetic visual.
Refik Anadol is a media artist, director, and pioneer in the aesthetics of machine intelligence whose work explores the intersection of humans and machines. Taking the data that flows around us as his primary material and the neural network of a computerized mind as his collaborator, Refik offers radical visualizations of our digitized memories and the possibilities of architecture, narrative, and the body in motion. His site-specific parametric data sculptures, live audio/visual performances, and immersive installations take many forms, while encouraging us to rethink our engagement with the physical world, its temporal and spatial dimensions, and the creative potential of machines.
Studio Team: Alex Morozov, Arda Mavi, Carrie He, Christian Burke, Danny Lee, Efsun Erkilic, Ho Man Leung, Kerim Karaoglu, Kyle McLean, Nicholas Boss, Nidhi Parsana, Pelin Kivrak, Raman K. Mustafa, Rishabh Chakarabarty, Toby Heinemann
refikanadol.com | Instagram | Twitter | Vimeo | YouTube