64/1 and Harshit Agrawal

This artwork explores how the people of India visually represent gender: the artists collected a dataset of drawings and used AI to generate a spectrum of gender forms, rendered as a circular tapestry.

The Artwork

The Books

The Process


The artists collected a dataset of ~2,300 hand-drawn figures of standing female and male forms, contributed by people of India from all walks of life. Every individual contributing to the dataset drew both a female and a male figure. Roughly half the collection was done in person (using a black marker pen on two sheets of white paper, one for the female figure and one for the male) and half through Mechanical Turk (mimicking the real-world collection setup).


A GAN (NVIDIA StyleGAN2) is trained on the dataset of gender drawings. The GAN's outputs are fed through a binary classifier, trained on the same collected dataset, that labels each output as female or male with an associated confidence. This spread of gender representation generated by the GAN is used to compose the final circular tapestry, from 100% female at the centre to 0% at the outermost radius.
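The mapping from classifier confidence to radial position can be sketched in a few lines. This is a minimal illustration under an assumed linear mapping; the function name and the mapping itself are assumptions, not the artists' published code:

```python
def radius_for_confidence(p_female: float, max_radius: float = 1.0) -> float:
    """Map a binary classifier's P(female) onto the circular tapestry.

    A confidence of 1.0 (100% female) lands at the centre (radius 0);
    a confidence of 0.0 lands at the outermost radius.
    Assumed linear mapping for illustration only.
    """
    if not 0.0 <= p_female <= 1.0:
        raise ValueError("confidence must lie in [0, 1]")
    return (1.0 - p_female) * max_radius
```

A GAN sample classified as 75% female would then sit a quarter of the way out from the centre.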

Artist collective 64/1 is Karthik Kalyanaraman and Raghava KK

About 64/1

64/1 is an arts research and curatorial collective founded by brothers Karthik Kalyanaraman and Raghava KK that focuses on blurring the boundaries between art, art criticism, and art education.

Karthik is a conceptual artist, writer, and former academic whose PhD (Harvard) in Econometrics produced key research on establishing causality in statistics. Aside from teaching at UCL, he has worked for a top political think tank in the US and has published work on social network analysis and aesthetic theory.

Raghava KK is an acclaimed artist who has worked in multiple disciplines. His work traverses traditional forms of painting, installation, and performance while also embracing new media to express post-human contemporary realities. He was featured on CNN’s 2010 list of the 10 most remarkable people of the year and in the 2020 Netflix documentary The Creative Indians.

Instagram | Facebook

About Harshit Agrawal

Harshit is an India-based artist working with artificial intelligence and emerging technologies. He uses machines and algorithms, and often builds them, as an essential part of his art process.

His work is part of the permanent exhibition at the HNF museum in Germany, the largest computer museum in the world. He was the only Indian among seven international AI art pioneers in one of the world’s first AI art shows in a contemporary gallery, Gradient Descent at Nature Morte. He has exhibited at other premier venues including the Ars Electronica Festival (Austria), the Asia Culture Center (Gwangju, Korea), and the Museum of Tomorrow (Brazil). His work has also been covered in international media including the BBC, the New York Times, and Stir World, and he has given several talks on AI art, including three TEDx talks.

He graduated from the MIT Media Lab and IIT Guwahati. Along with his art practice, he has authored several publications and patents about his work at the intersection of human computer interaction and creative expression. 

Instagram | Twitter | harshitagrawal.com

Harshit Agrawal

Featured Sessions

Art in Light of AI - April 13 @ 8 a.m. PST

AI can be used by artists as a tool, a collaborator, or a muse, influencing their artwork in different stages of their process. This panel of artists from around the globe will compare how they combine their fine art backgrounds with their futuristic art practices.


Meet AIVA, an AI music composer trained on thousands of music scores, with a mission to empower individuals through the creation of their own personalized soundtracks.

The Tracks

Playlist: Landing on Mars, Random Access Memory, 4-32.vACCESS_DENIED_99161.wav, Euphoria  

Artificial Intelligence Virtual Artist-Composed Music  

The Experience

The Process


About Pierre Barreau

Pierre is co-founder and CEO of AIVA, the artificial intelligence composing emotional soundtrack music. As a computer scientist, award-nominated film director, and registered composer, Pierre leads the company towards its vision: empowering individuals by creating personalized soundtracks with AIVA. 

Website | YouTube

About Denis Shtefan

Denis is co-founder and CTO of AIVA. As a published researcher and composer, Denis is leading the research and development efforts to solve the most challenging problems in the fast-evolving field of creative and personalized music generation using AI technologies.



Featured Sessions

Music Making Workshop - April 14 @ 9 a.m. PST

This year, we're inviting the team from AIVA to lead a workshop using their web-based music app. You'll learn about how the app works, tips for getting the best results, and applications for AI-generated music.

10 Minutes: A Musical Mars Landing - On-demand starting April 12 @ 10 a.m. PST

AI-composed cinematic music inspired by the Mars landing

Daniel Ambrosi

For Daniel Ambrosi, computational photography is about more than just creating a sense of place. It’s about conveying how we feel about the environment—viscerally, cognitively, and emotionally.

The Artwork

Azalea Walk
High Ute Ranch
Mining Nightfall
Bryant Park
Japanese Tea Garden
Guadalajara Cathedral

Abstract Dreams

Computational Photography + Artificial Intelligence

The next step in the evolution of our Dreamscapes project, this collection further dissociates the generated artwork from the underlying photography, moving it toward the subconscious. These Abstract Dream video loops are designed to evoke a meditative response that yields illusions of depth perception as similar features recede or advance through changes in colors and textures.

The Process


About Daniel Ambrosi

Daniel Ambrosi is one of the founding creators of the emerging AI art movement and is noted for the nuanced balance he achieves in human-AI hybrid art. He has been exploring novel methods of visual presentation for almost 40 years, since his time at Cornell University, where he earned a Bachelor of Architecture degree and a Master’s in 3D Graphics. In 2011, Daniel devised a unique form of computational photography that generates exceptionally immersive landscape images.

dreamscapes.ai | Twitter | Instagram | Facebook

Featured Sessions

Dinner with Strangers: Digital to Physical...and Back Again - April 14 @ 5 p.m. PST

Artists Pindar Van Arman and Daniel Ambrosi take very different approaches to using artificial intelligence in their artworks, but they have a shared focus on making physical AI-augmented artifacts. According to the artists, we are three-dimensional creatures living in a 3D world, but we experience that world entirely within our unlit skulls. Does it, or should it, matter whether the art artifact is composed of bits or atoms?

AI Representing Natural Forms in Art - April 14 @ 11 a.m. PST

It's said that art imitates life, but what if that art is flora and fauna created by an artist using artificial intelligence? Join a discussion with artists Sofia Crespo, Feileacan McCormick, Anna Ridler, and Daniel Ambrosi and NVIDIA technical specialist Chris Hebert to explore how they use AI in their creative process of generating interpretations of natural forms.

1 Minute: Meditation on a Deep Dream Landscape - On-demand starting April 12 @ 10 a.m. PST

Unwind into an abstract dream of a Japanese Tea Garden by Daniel Ambrosi


By combining advanced neuroimaging techniques with cutting-edge AI and multi-modal data visualization tools, Refik Anadol Studio explores the architecture of the human brain from an interdisciplinary perspective. 

The Experience

The Process

Machine Hallucinations is an ongoing project of data aesthetics based on collective visual memories of space, nature, and urban experiences. Since 2016, our studio has used DCGAN, PCGAN, and StyleGAN to train machine intelligence in processing these vast datasets and unfolding unrecognized layers of our external realities. We collect data from digital archives and social media platforms and process them with machine learning classification models such as CNNs, Variational Autoencoders, and deep ranking, filtering out noise and irrelevant data points. The sorted image datasets are then clustered into thematic categories to better understand the semantic context of the data universe.
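The clustering step above can be illustrated with a toy example. The studio's actual pipeline uses CNNs, variational autoencoders, and deep ranking; the sketch below substitutes a minimal k-means over synthetic feature vectors, purely to show how sorted embeddings can be grouped into thematic categories (all names and data here are assumptions, not the studio's code):

```python
import numpy as np

def cluster_embeddings(features: np.ndarray, k: int, iters: int = 50) -> np.ndarray:
    """Toy k-means: group feature vectors into k thematic clusters.

    Centroids are seeded with farthest-point initialization so the
    result is deterministic.
    """
    centroids = [features[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centroids], axis=0)
        centroids.append(features[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        # Assign every vector to its nearest centroid ...
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ... then move each centroid to its cluster's mean.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return labels

# Synthetic stand-ins for image embeddings: two well-separated themes.
rng = np.random.default_rng(1)
features = np.vstack([rng.normal(0.0, 0.1, (50, 8)), rng.normal(5.0, 0.1, (50, 8))])
labels = cluster_embeddings(features, k=2)
```

In a real pipeline the feature vectors would come from a trained encoder rather than a random generator, but the grouping logic is the same.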


To capture these hallucinations from a multi-dimensional space, we use StyleGAN2 running on an NVIDIA DGX Station, a workstation delivering 500 TFLOPS of AI performance for leading-edge AI research and development. StyleGAN2 generates a model for the machine to process the archive; the model is trained on subsets of the sorted images, creating embeddings in 4096 dimensions. To understand this complex spatial structure visually, we use dimensionality-reduction algorithms such as cuml-UMAP to project the embeddings into a navigable three-dimensional universe. This projection enables the audience to fly around the latent space in an interactive browser and record their own journeys.
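cuml-UMAP requires a GPU stack, but the shape of the transformation can be shown with a rough CPU stand-in: a Gaussian random projection (NumPy only) that takes high-dimensional embeddings down to 3-D coordinates a viewer could fly through. This is an illustrative substitute, not the studio's pipeline; unlike UMAP it preserves only coarse global geometry:

```python
import numpy as np

def project_to_3d(embeddings: np.ndarray, seed: int = 0) -> np.ndarray:
    """Project embeddings to 3-D via a Gaussian random projection."""
    rng = np.random.default_rng(seed)
    dim = embeddings.shape[1]
    # Scale by 1/sqrt(dim) so projected distances stay comparable.
    projection = rng.normal(size=(dim, 3)) / np.sqrt(dim)
    return embeddings @ projection

# 1,000 mock 4096-dimensional embeddings -> navigable 3-D coordinates.
embeddings = np.random.default_rng(1).normal(size=(1000, 4096))
coords = project_to_3d(embeddings)
print(coords.shape)  # (1000, 3)
```

Swapping this stand-in for UMAP would additionally pull semantically similar images close together in the 3-D universe.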

The final stage of production reveals Refik Anadol Studio’s pioneering work in machine-generated aesthetics and synesthetic perception through various forms of data paintings, sculptures, and latent cinematic experiences. Fluid dynamics has inspired Anadol since the inception of Machine Hallucinations. The studio’s exploration of digital pigmentation and light, through fluid-solver algorithms accelerated by GPU computation and real-time ray-traced lighting made possible by NVIDIA RTX GPUs, brings this inspiration to life in a masterfully curated, multi-channel, self-generating experience.


Space Dreams transforms a vast dataset of 1.2 million images captured from the International Space Station, along with satellite images of Earth's topology, into a dynamic data painting. Each version of the machine’s dream sequences is derived from a different generative adversarial network (GAN), exploring AI’s capacity to reach its own subconscious and offering an avant-garde form of cartographic aesthetics.


In Urban Dreams, Anadol offers new insights into the representational possibilities emerging from the intersection of advanced technology, urban memory, and contemporary art. The work uses over 100 million images of New York City and Berlin, focusing specifically on typical public spaces. The content is used to train a StyleGAN2 model to identify and learn patterns hidden in urban scenery, showcasing how the elusive process of memory retrieval is transformed into data collections.

Nature Dreams is a series of synesthetic reality experiments based on StyleGAN2 algorithms, using over 69 million images of National Parks, Iceland, and other natural wonders to train a generative model for the machine to dream about the most mesmerizing features of our Mother Nature. This transformation of the data collection becomes not just a means of visualizing information, but also a transmutation of our desire for experiencing nature into a poetic visual.


About Refik Anadol

Refik Anadol is a media artist, director, and pioneer in the aesthetics of machine intelligence whose work explores the intersection of humans and machines. Taking the data that flows around us as his primary material and the neural network of a computerized mind as his collaborator, Refik offers radical visualizations of our digitized memories and the possibilities of architecture, narrative, and the body in motion. His site-specific parametric data sculptures, live audio/visual performances, and immersive installations take many forms, while encouraging us to rethink our engagement with the physical world, its temporal and spatial dimensions, and the creative potential of machines.

Studio Team: Alex Morozov, Arda Mavi, Carrie He, Christian Burke, Danny Lee, Efsun Erkilic, Ho Man Leung, Kerim Karaoglu, Kyle McLean, Nicholas Boss, Nidhi Parsana, Pelin Kivrak, Raman K. Mustafa, Rishabh Chakarabarty, Toby Heinemann

refikanadol.com | Instagram | Twitter | Vimeo | YouTube

Sofia Crespo and Entangled Others Studio

Can artificial lifeforms help us rethink our connection with nature? Sofia Crespo and Entangled Others Studio use the lens of new technology to examine how AI-built life forms in the digital world can help us explore our common bonds.

The Experience

The Process

Datasets used included synthetic coral data produced by Joel Simon, publicly available data on aquatic specimens, and hand-generated data.


The datasets were processed with Blender, and the specimens generated with 3D-GANs, CNNs (thanks to Alex Mordvintsev), and GANs.


AI is used as a means of exploring what we know of critical parts of our natural world through an “essential” re-enactment, one that reflects both what we do and do not know about these ecosystems and how they relate internally and to us.

The Experience

The Process

The artists used 3D models of hundreds of insects, literature about their life cycles, and close-ups of their textures to create this unique work. The lack of large, open datasets led to a fair amount of data augmentation.


The insects were generated using a GAN trained on data from 3D models. They were then textured with convolutional neural networks (CNNs) and given names and descriptions from a fine-tuned GPT-2 language model. The process included NVIDIA Quadro and GeForce RTX GPUs with TensorFlow and PyTorch frameworks, and cables.gl for the web experience.