This artwork explores and showcases how the people of India represent gender visually: the artists collect a dataset of drawings and use AI to generate a spectrum of gender forms, presented as a circular tapestry.
2020 Archival print on canvas (4x6 feet) Three books of 200 pages each Custom dataset - GANs, image classification
A gradient texture of figure forms reveals the spectrum of gender representation when viewed up close. Three books present samples from the collected dataset and individual generated figures.
This work was specially commissioned for the India Pavilion at Artissima 2020 by Myna Mukherjee (founder of Engendered). Due to the travel restrictions in 2020, the work will now be shown at Artissima 2021.
Book 1: See 200 randomly selected drawing samples from the collected male dataset.
Book 2: This book contains two randomly selected drawings generated by the AI for each percentage—starting from 0% female moving to 100% female.
Book 3: Browse to see 200 randomly selected drawing samples from the collected female dataset.
The artists collect a dataset of ~2,300 hand-drawn figures of standing female and male forms by people of India from all walks of life. Every individual contributing to the dataset draws both a female and a male figure. Roughly half the collection was done in person (using a black marker pen on two sheets of white paper, one for the female figure and one for the male) and half using Mechanical Turk (mimicking the real-world data collection setup).
A GAN (NVIDIA StyleGAN 2) is trained on the dataset of gender drawings. The GAN outputs are fed through a binary classifier trained on the collected dataset to classify the outputs as either female or male with a certain confidence level. This spread of gender representation generated by the GAN is used to create the final circular tapestry—from 100% female in the centre to 0% at the outermost radius.
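As a rough illustration of this scoring step, the sketch below (in PyTorch) shows how generated drawings could be passed through a binary classifier and binned by its confidence that a figure is female; the generator interface, sample count, and 1% bin width are assumptions for illustration, not the artists' actual code.

```python
# Hypothetical sketch: score GAN-generated figure drawings with a binary
# (female/male) classifier and bin them by confidence for the tapestry layout.
# Names, interfaces, and the 1% bin width are illustrative assumptions.
import torch

def bin_by_female_confidence(generator, classifier, n_samples=10_000, z_dim=512):
    """Group generated drawings into 0-100% 'female confidence' bins."""
    bins = {pct: [] for pct in range(101)}
    generator.eval(); classifier.eval()
    with torch.no_grad():
        for _ in range(n_samples):
            z = torch.randn(1, z_dim)                            # random latent vector
            image = generator(z)                                 # one generated drawing
            p_female = torch.sigmoid(classifier(image)).item()   # confidence in [0, 1]
            bins[round(p_female * 100)].append(image)
    return bins  # e.g. bins[100] -> tapestry centre, bins[0] -> outermost ring
```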
64/1 is an arts research and curatorial collective founded by brothers Karthik Kalyanaraman and Raghava KK that focuses on blurring the boundaries between art, art criticism, and art education.
Karthik is a conceptual artist, writer, and former academic whose PhD (Harvard) in Econometrics produced key research on establishing causality in statistics. Aside from teaching at UCL, he has worked for a top political think tank in the US and has published works on social network analysis and aesthetic theory.
Raghava KK is an acclaimed artist who has worked in multiple disciplines. His work traverses traditional forms of painting, installation, and performance while also embracing new media to express post-human contemporary realities. He featured on CNN’s 2010 list of the 10 most remarkable people of the year and in the 2020 Netflix documentary The Creative Indians.
Instagram | Facebook
Harshit is an India-based artist working with artificial intelligence and emerging technologies. He uses, and often builds, machines and algorithms as an essential part of his art process.
His work is part of the permanent exhibition at the HNF museum in Germany, the largest computer museum in the world. He was the only Indian among seven international AI art pioneers in one of the world’s first AI art shows in a contemporary gallery, Gradient Descent at Nature Morte. He has exhibited work at other premier venues such as the Ars Electronica Festival (Austria), Asia Culture Center (Gwangju, Korea), and the Museum of Tomorrow (Brazil). His work has also been covered in international media including the BBC, The New York Times, and Stir World, and he has given several talks on the subject of AI art, including three TEDx talks.
He graduated from the MIT Media Lab and IIT Guwahati. Alongside his art practice, he has authored several publications and patents on his work at the intersection of human-computer interaction and creative expression.
Instagram | Twitter | harshitagrawal.com
AI can be used by artists as a tool, a collaborator, or a muse, influencing their artwork in different stages of their process. This panel of artists from around the globe will compare how they combine their fine art backgrounds with their futuristic art practices.
Meet AIVA, an AI music composer trained on thousands of musical scores, with a mission to empower individuals to create their own personalized soundtracks.
Playlist: Landing on Mars, Random Access Memory, 4-32.vACCESS_DENIED_99161.wav, Euphoria
2020/2021 Artificial Intelligence Virtual Artist-Composed Music
The interactive experience allows you to compose music in seconds. Just select a few parameters or upload a musical influence to bias the generation process, and AIVA will create your personalized track.
AIVA offers users two ways to create music. First, using pre-trained “preset styles” based on various in-house curated datasets made up of important musical features such as harmonic progressions, rhythmic patterns, and melodic lines. Second, using an uploaded song to influence the composition process into generating something completely unique with similar musical characteristics.
Over the years, the humans behind AIVA have built many AI models—such as recurrent neural networks, convolutional neural networks, and evolutionary algorithms—which enable AIVA’s creativity in the music domain. Modularity is an important part of AIVA’s design, letting artists who collaborate with AIVA interpret, edit, and regenerate AIVA’s creations at will.
AIVA’s creative process can be boiled down to learning from musical data, composing scores, and finally, turning those scores into audio. They use the power of AI to analyze scores and recordings, as well as to generate original compositions. But they rely on a more traditional method of sampling human musicians to produce audio, which adds a necessary human touch to the final product.
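As a purely illustrative sketch of that score-then-audio split (and not AIVA's own code), the snippet below uses the pretty_midi library to write a generated note sequence to a symbolic MIDI score, which could then be rendered to audio with sampled human-performed instruments; the note list here stands in for a model's output.

```python
# Illustrative only (not AIVA's code): a generated score is first written as
# symbolic notes, then rendered to audio separately (e.g., with sampled
# human-performed instruments). The note list stands in for a model output.
import pretty_midi

def score_to_midi(notes, out_path="generated_score.mid", program=0):
    """notes: list of (pitch, start_sec, end_sec) tuples from some generative model."""
    pm = pretty_midi.PrettyMIDI()
    instrument = pretty_midi.Instrument(program=program)  # 0 = acoustic grand piano
    for pitch, start, end in notes:
        instrument.notes.append(
            pretty_midi.Note(velocity=90, pitch=pitch, start=start, end=end)
        )
    pm.instruments.append(instrument)
    pm.write(out_path)  # the MIDI score can then be rendered with sampled instruments

# Example: a short C-major arpeggio standing in for a model-composed phrase.
score_to_midi([(60, 0.0, 0.5), (64, 0.5, 1.0), (67, 1.0, 1.5), (72, 1.5, 2.5)])
```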
Pierre is co-founder and CEO of AIVA, an artificial intelligence that composes emotional soundtrack music. As a computer scientist, award-nominated film director, and registered composer, Pierre leads the company towards its vision: empowering individuals by creating personalized soundtracks with AIVA.
Website | YouTube
Denis is co-founder and CTO of AIVA. As a published researcher and composer, Denis is leading the research and development efforts to solve the most challenging problems in the fast-evolving field of creative and personalized music generation using AI technologies.
This year, we're inviting the team from AIVA to lead a workshop using their web-based music app. You'll learn about how the app works, tips for getting the best results, and applications for AI-generated music.
AI-composed cinematic music inspired by the Mars landing
For Daniel Ambrosi, computational photography is about more than just creating a sense of place. It’s about conveying how we feel about the environment—viscerally, cognitively, and emotionally.
Abstract Dreams
2020/2021 Computational Photography + Artificial Intelligence
The next step in the evolution of our Dreamscapes project, this collection further dissociates the generated artwork from the underlying photography, moving it toward the subconscious. These Abstract Dream video loops are designed to evoke a meditative response that yields illusions of depth perception as similar features recede or advance through changes in color and texture.
This scene was captured in October 2018 during rush hour from a window on the 16th floor of an office building near Bryant Park in midtown Manhattan. The artist created this Dreamscape in March 2020 during the coronavirus crisis, when a bustling scene like this started to feel like a distant dream. It fuses computational photography and AI to create an environment that’s fascinating to explore.
Developing the "full scene" AI-augmented artwork started with capturing 45 photos from a single location, which were then stitched (Autopano Pro) and blended (Aurora HDR Pro) into a giant (482+ megapixel) panorama and cropped and sweetened in Photoshop. It was processed with a proprietary version of Google DeepDream running on NVIDIA Quadro RTX GPUs.
Finally, Daniel experimented with tools and techniques (including effects available in Filter Forge) to further explore the graphic possibilities inherent in his “Dreamscapes.” His most recent attempts have led to a series he calls Infinite Dreams, an exploration of Cubism-inspired "refracted" dreaming.
This extended video loop illustrates the stages in the artist’s process—capture/assembly of the original 295+ megapixel, 63-shot panorama; first application of AI augmentation using DeepDream; abstractification using Filter Forge; second application of DeepDream; and blending between details pulled from three different regions of the original panorama.
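The DeepDream step itself is proprietary (customized for the artist, as described below), but its core idea can be sketched generically: run gradient ascent on an image so that a chosen layer of a pretrained network amplifies the features it already detects. In the PyTorch sketch below, the network, layer index, step count, and learning rate are all illustrative assumptions; scaling this to multi-hundred-megapixel panoramas requires the kind of custom engineering discussed in the accompanying video.

```python
# Generic DeepDream-style sketch (not the artist's proprietary pipeline):
# amplify whatever features a chosen network layer already "sees" in the image
# by running gradient ascent on that layer's activations.
import torch
from torchvision import models

def deep_dream(image, layer_index=28, steps=20, lr=0.01):
    """image: float tensor of shape (1, 3, H, W), values roughly in [0, 1]."""
    net = models.vgg16(weights="IMAGENET1K_V1").features[:layer_index].eval()
    img = image.clone().requires_grad_(True)
    for _ in range(steps):
        net.zero_grad()
        activations = net(img)
        loss = activations.norm()          # maximize activation magnitude
        loss.backward()
        # normalized gradient ascent step on the image itself
        img.data += lr * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.data.clamp_(0, 1)
    return img.detach()
```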
TECHNIQUE: Computational Photography + Artificial Intelligence CAMERA: Sony RX1 or Sony RX1Rii GRAPHICS SOFTWARE/HARDWARE: Aurora HDR Pro, Autopano Pro, Adobe Photoshop, Filter Forge Pro/iMac AI SOFTWARE/HARDWARE: Proprietary version of Google's DeepDream open-source code customized for the artist by Joseph Smarr (Google) and Chris Lamb (NVIDIA)/Quad-GPU compute server on Amazon EC2 (Elastic Compute Cloud)
In this video, Dreamscapes: The Tech Behind the Art, engineers Joseph Smarr (Google) and Chris Lamb (NVIDIA) offer a layman's explanation of how they modified Google's open-source DeepDream software to operate on multi-hundred-megapixel images. See how this innovative process enabled computational photography artist Daniel Ambrosi to take his work to new heights.
Daniel Ambrosi is one of the founding creators of the emerging AI art movement and is noted for the nuanced balance he achieves in human-AI hybrid art. He has been exploring novel methods of visual presentation for almost 40 years, since his time at Cornell University, where he earned a Bachelor of Architecture degree and a Master's in 3D graphics. In 2011, Daniel devised a unique form of computational photography that generates exceptionally immersive landscape images.
dreamscapes.ai | Twitter | Instagram | Facebook
Artists Pindar Van Arman and Daniel Ambrosi take very different approaches to using artificial intelligence in their artworks, but they share a focus on making physical AI-augmented artifacts. According to the artists, we are three-dimensional creatures living in a 3D world, but we experience that world entirely within our unlit skulls. Does it, or should it, matter whether the art artifact is composed of bits or atoms?
It's said that art imitates life, but what if that art is flora and fauna created by an artist using artificial intelligence? Join a discussion with artists Sofia Crespo, Feileacan McCormick, Anna Ridler, and Daniel Ambrosi and NVIDIA technical specialist Chris Hebert to explore how they use AI in their creative process of generating interpretations of natural forms.
Unwind into an abstract dream of a Japanese Tea Garden by Daniel Ambrosi
By combining advanced neuroimaging techniques with cutting-edge AI and multi-modal data visualization tools, Refik Anadol Studio explores the architecture of the human brain from an interdisciplinary perspective.
August 2020-April 2021
Through a collaboration between Dr. Taylor Kuhn, coordinator of the Human Connectome Project (HCP) at UCLA, and technology partners Siemens and NVIDIA, Refik Anadol Studio (RAS) develops a dynamic network at the intersection of neuroscience and art to study fundamental questions about the human brain.
This experience showcases visual re-imaginings of the relationship between form (spatial connections) and function (temporal connections) of the human mind.
RAS will exhibit the first visuals of this research collaboration in an immersive 3D-printed art installation, Sense of Space, at the 17th Architecture Biennale in Venice, Italy.
Approximately 70TB of multimodal Magnetic Resonance Imaging (MRI) data, including structural, diffusion (DTI), and functional (fMRI) scans of people ranging from birth to nonagenarians and beyond, are used to train machine-learning algorithms that discover patterns and imagine the development of brain circuitry throughout the human lifespan.
Preprocessing pipelines developed by the HCP perform many low-level tasks, including spatial artifact and distortion removal, surface generation, cross-modal registration, and alignment to standard space. Post-processing then generates cortical surfaces and regional segmentations, structural connection maps, and functional network maps. These pipelines allow RAS to capitalize on the high-quality multimodal data offered by the HCP to investigate the living, functioning human brain as people perform tasks and experience mental states (e.g., discrete emotions, music listening, creating novel music).
The complex multi-modal data, representing the neural structure and function of all participants aged 0-100, is then aligned and registered to a standard brain atlas; this registered data is used to train the generative Variational Auto-Encoder (VAE-GAN) model.
A diffusion (DTI)-based, tissue-segmented image is generated to perform Anatomically Constrained Tractography. To capture the distribution of neural fiber orientation densities, multi-shell, multi-tissue (MSMT) analysis with Constrained Spherical Deconvolution (CSD) is performed. The MSMT response functions for white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF) are estimated. The combined average response functions create a 4D image corresponding to tissue densities and fiber orientation directions. From these, deterministic tractography is conducted to generate streamline fibers representing structural connections throughout both cerebral hemispheres.
Lastly, the Spherical-deconvolution Informed Filtering of Tractograms (SIFT) algorithm is used to reduce the overall streamline count and provide more biologically meaningful estimates of structural connection density.
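Those steps correspond closely to a standard MSMT-CSD tractography workflow. The sketch below illustrates the order of operations by invoking MRtrix3-style commands from Python; the exact command names, flags, and file names are assumptions based on common MRtrix3 usage, not RAS's actual pipeline.

```python
# Order-of-operations sketch for the tractography steps described above.
# Command names, flags, and file names are assumptions based on common
# MRtrix3 usage, not RAS's actual pipeline.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Tissue-segmented image for Anatomically Constrained Tractography (ACT).
run(["5ttgen", "fsl", "T1.mif", "5tt.mif"])
# 2. MSMT response functions for white matter, grey matter, and CSF.
run(["dwi2response", "dhollander", "dwi.mif", "wm.txt", "gm.txt", "csf.txt"])
# 3. Constrained spherical deconvolution -> fiber orientation distributions (4D image).
run(["dwi2fod", "msmt_csd", "dwi.mif",
     "wm.txt", "wmfod.mif", "gm.txt", "gm.mif", "csf.txt", "csf.mif"])
# 4. Deterministic tractography constrained by the tissue segmentation.
run(["tckgen", "wmfod.mif", "tracks.tck",
     "-algorithm", "SD_Stream", "-act", "5tt.mif", "-select", "1000000"])
# 5. SIFT: filter streamlines for biologically meaningful connection densities.
run(["tcksift", "tracks.tck", "wmfod.mif", "tracks_sift.tck"])
```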
Utilizing the fiber orientation distributions as an input for the generative Variational Auto-Encoder (VAE-GAN), a synthetic form is generated by sampling the latent space to represent all possible brain structures.
The final structural and visual output is a synthetic brain, imagined by the VAE-GAN using the entire HCP dataset. With this synthetic tractogram, the viewer is able to track the development of the brain across the human lifespan. The overall project maps the functional connectivity of the mind at the moment one perceives the spatial and temporal dynamics of the architectural space, experiencing a collective consciousness in that fabric of space-time.
A VAE-based generative model is trained to learn unsupervised latent representations of fMRI data. The feature space encodes representations of the diffusion MRI data, and probabilistic sampling is used to draw observations from it, generating millions of latent forms.
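A minimal sketch of that encode/sample/decode idea, assuming a generic fully connected VAE in PyTorch rather than RAS's actual architecture:

```python
# Minimal VAE latent-sampling sketch (generic, not RAS's actual model):
# encode data into a latent distribution, then draw new latent vectors from
# the prior and decode them into synthetic forms. Layer sizes are illustrative.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, in_dim=4096, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.to_mu = nn.Linear(512, latent_dim)
        self.to_logvar = nn.Linear(512, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, in_dim))

    def encode(self, x):
        h = self.encoder(x)
        return self.to_mu(h), self.to_logvar(h)

    def reparameterize(self, mu, logvar):
        # probabilistic sampling: z = mu + sigma * epsilon
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    def sample(self, n):
        # draw latent vectors from the prior and decode them into synthetic forms
        z = torch.randn(n, self.to_mu.out_features)
        return self.decoder(z)
```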
Using this approach, we are able to model complex, high-dimensional data such as volumetric data of 3D brain structures, including their complex networks of structural and functional connectivity between different regions of the brain. It also allows us to generate similar synthetic data where real data is limited.
With their research, RAS attempts to solve the limited data problem in medical imaging by generating synthetic multi-sequence brain MR images using GANs for data augmentation. This allows for greater diagnostic accuracy and reliability, which opens up possibilities for applications such as rare disease diagnosis where there is a lack of data.
Series of Artworks from Machine Hallucinations
2019-2020 AI Data Painting
Refik’s Machine Hallucinations are explorations of what can happen at the intersection of new media, machine learning, neuroscience, and architecture. They come to life in immersive installations, data paintings, and audio/visual performances.
This unique interactive web experience gives you a tour of Refik’s studio’s groundbreaking AI data paintings based on their most recent experimentations with NVIDIA StyleGAN2 models and NVIDIA Quadro RTX GPUs.
Experience optimized for Chrome.
Machine Hallucinations is an ongoing project of data aesthetics based on collective visual memories of space, nature, and urban experiences. Since 2016, our studio has used DCGAN, PCGAN, and StyleGAN to train machine intelligence in processing these vast datasets and unfolding unrecognized layers of our external realities. We collect data from digital archives and social media platforms and process them with machine learning classification models such as CNNs, Variational Autoencoders, and deep ranking, filtering out noise and irrelevant data points. The sorted image datasets are then clustered into thematic categories to better understand the semantic context of the data universe.
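A hedged sketch of that sorting step, assuming a generic pretrained CNN for image embeddings and k-means for thematic clustering (the studio's actual classification models and deep-ranking setup are not public):

```python
# Hedged sketch: embed collected images with a pretrained CNN and cluster the
# embeddings into thematic categories. The model, directory, and cluster count
# are illustrative assumptions, not the studio's code.
import glob
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.cluster import KMeans

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def embed(paths, model):
    feats = []
    with torch.no_grad():
        for p in paths:
            x = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
            feats.append(model(x).squeeze().numpy())
    return feats

backbone = models.resnet50(weights="IMAGENET1K_V2")
backbone.fc = torch.nn.Identity()      # keep 2048-d features, drop the classifier head
backbone.eval()

paths = glob.glob("archive/*.jpg")     # assumed directory of collected images
features = embed(paths, backbone)
labels = KMeans(n_clusters=12).fit_predict(features)   # thematic clusters
```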
To capture these hallucinations from a multi-dimensional space, we use StyleGAN2 running on an NVIDIA DGX Station, the world’s fastest workstation for leading-edge AI research and development, delivering 500 TFLOPS of AI power. StyleGAN2 models are trained on subsets of the sorted images, creating embeddings in 4096 dimensions. To understand this complex spatial structure visually, we use dimensionality-reduction algorithms such as cuML UMAP to project it into a navigable three-dimensional universe. This projection enables the audience to use an interactive browser to virtually fly around this latent space and record their own journeys.
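A minimal sketch of that projection step, assuming the 4096-dimensional embeddings are available as a NumPy array and using cuML's GPU-accelerated UMAP; file names and parameter values are illustrative:

```python
# Minimal sketch: project high-dimensional GAN embeddings into a navigable 3D
# space with GPU-accelerated UMAP (cuML). The input file and UMAP parameters
# are illustrative assumptions.
import numpy as np
from cuml.manifold import UMAP

embeddings = np.load("stylegan2_embeddings.npy")       # assumed shape: (n_images, 4096)
reducer = UMAP(n_components=3, n_neighbors=15, min_dist=0.1)
coords_3d = reducer.fit_transform(embeddings)          # shape: (n_images, 3)
np.save("latent_space_3d.npy", coords_3d)              # camera paths can fly through these points
```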
The final stage of production reveals Refik Anadol Studio’s pioneering work in machine-generated aesthetics and synesthetic perception through various forms of data paintings, sculptures, and latent cinematic experiences. Fluid dynamics has inspired Anadol since the inception of Machine Hallucinations. The studio’s exploration of digital pigmentation and light through fluid-solver algorithms, accelerated by GPU computation and real-time ray-traced lighting made possible by NVIDIA RTX GPUs, brings this inspiration to life in a masterfully curated, multi-channel, self-generating experience.
Space Dreams transforms a vast dataset of 1.2 million images captured from the International Space Station, along with satellite images of Earth's topology, into a dynamic data painting. Each version of the machine’s dream sequences is derived from a different generative adversarial network (GAN), exploring AI’s capacity to reach its own subconscious and offering an avant-garde form of cartographic aesthetics.
In Urban Dreams, Anadol offers new insights into the representational possibilities emerging from the intersection of advanced technology, urban memory, and contemporary art. The work uses over 100 million images from New York City and Berlin, focusing specifically on typical public spaces. The content is used to train a StyleGAN2 to identify and learn patterns hidden in urban scenery, showcasing how the elusive process of memory retrieval transforms into data collections.
Nature Dreams is a series of synesthetic reality experiments based on StyleGAN2 algorithms, using over 69 million images of National Parks, Iceland, and other natural wonders to train a generative model for the machine to dream about the most mesmerizing features of our Mother Nature. This transformation of the data collection becomes not just a means of visualizing information, but also a transmutation of our desire for experiencing nature into a poetic visual.
Refik Anadol is a media artist, director, and pioneer in the aesthetics of machine intelligence that explores the intersection of humans and machines. In taking the data that flows around us as his primary material and the neural network of a computerized mind as his collaborator, Refik offers radical visualizations of our digitized memories and the possibilities of architecture, narrative, and the body in motion. His site-specific parametric data sculptures, live audio/visual performances, and immersive installations take many forms, while encouraging us to rethink our engagement with the physical world, its temporal and spatial dimensions, and the creative potential of machines.
Studio Team: Alex Morozov, Arda Mavi, Carrie He, Christian Burke, Danny Lee, Efsun Erkilic, Ho Man Leung, Kerim Karaoglu, Kyle McLean, Nicholas Boss, Nidhi Parsana, Pelin Kivrak, Raman K. Mustafa, Rishabh Chakarabarty, Toby Heinemann
refikanadol.com | Instagram | Twitter | Vimeo | YouTube
Refik Anadol Studio (RAS) embarks upon a new journey to explore the architecture of the human brain by combining advanced neuroimaging techniques with cutting-edge AI and multi-modal data visualization tools.
Can artificial lifeforms help us rethink our connection with nature? Sofia Crespo and Entangled Others Studio use the lens of new technology to examine how AI-built life forms in the digital world can help us explore our common bonds.
Beneath the Neural Waves explores biodiversity by digitally creating an aquatic ecosystem as a means of engaging with the very abstract concept of relationship.
Explore a neurally generated fragment of coral reef and some of the specimens that reside there.
Experience optimized for Chrome, Firefox, and Safari.
Datasets used included synthetic coral data produced by Joel Simon, publicly available data on aquatic specimens, and hand-generated data.
The datasets were processed with Blender, and the specimens generated with 3D-GANs, CNNs (thanks to Alex Mordvintsev), and GANs.
AI is used as a means of exploring what we know of critical parts of our natural world through an “essential” re-enactment that is at once a reflection of what we do, and do not, know about these essential ecosystems and how they relate internally and to us.