NVIDIA BioNeMo Service

End-to-end AI-powered drug discovery pipelines.

What is BioNeMo?

BioNeMo is an AI-powered drug discovery cloud service and framework built on NVIDIA NeMo Megatron for training and deploying large biomolecular transformer AI models at supercomputing scale. The service includes pretrained large language models (LLMs) and native support for common file formats for proteins, DNA, RNA, and chemistry, providing data loaders for SMILES for molecular structures and FASTA for amino acid and nucleotide sequences. The BioNeMo framework will also be available for download to run on your own infrastructure.
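To make the two input formats concrete, here is a minimal stdlib-Python sketch of what loading them involves. This is illustrative only, not BioNeMo's actual data-loader API; the header and sequences are made-up examples.

```python
# Sketch of the two biomolecular input formats mentioned above.
# Illustrative stdlib Python only -- not BioNeMo's data-loader API.

def parse_fasta(text):
    """Parse FASTA-formatted text into {header: sequence} pairs."""
    records, header, chunks = {}, None, []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):          # header line starts a new record
            if header is not None:
                records[header] = "".join(chunks)
            header, chunks = line[1:], []
        else:                             # sequence lines may be wrapped
            chunks.append(line)
    if header is not None:
        records[header] = "".join(chunks)
    return records

fasta = """>example_protein
MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSF
PTTKTYFPHF
"""
records = parse_fasta(fasta)

# A SMILES dataset is typically just one molecule string per line:
smiles = ["CC(=O)Oc1ccccc1C(=O)O", "CCO"]  # aspirin, ethanol
```

FASTA wraps long sequences across lines, so the parser joins the chunks back into one string per record; SMILES needs no such reassembly.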

Check out related products.

Explore features and benefits.

LLMs for Chemistry and Biology

Access pretrained LLMs for chemistry and biology.

BioNeMo comes with numerous pre-trained LLMs. MegaMolBART is a generative chemistry model trained on 1.4 billion molecules (SMILES strings) and can be used for a variety of cheminformatics applications.
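Generative chemistry models like MegaMolBART consume SMILES strings as token sequences. The sketch below shows the regex-based tokenization commonly used for SMILES language models; it is a generic illustration, not MegaMolBART's actual preprocessing code.

```python
import re

# Common regex tokenization for SMILES-based language models
# (illustrative; not MegaMolBART's actual preprocessing).
SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p"
    r"|\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%\d{2}|\d)"
)

def tokenize_smiles(smiles):
    """Split a SMILES string into model tokens, verifying full coverage."""
    tokens = SMILES_TOKEN.findall(smiles)
    assert "".join(tokens) == smiles, "unrecognized characters in SMILES"
    return tokens

aspirin = "CC(=O)Oc1ccccc1C(=O)O"
tokens = tokenize_smiles(aspirin)
# tokens begins ['C', 'C', '(', '=', 'O', ...]
```

Note the ordering in the pattern: multi-character tokens like bracket atoms, `Br`, and `Cl` must be tried before single letters so `Cl` is not split into `C` plus a stray `l`.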

ProtT5 and ESM1-85M are transformer-based protein language models that can be used to generate learned embeddings for tasks like protein structure and property prediction.
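Protein language models emit one embedding vector per residue; a common way to obtain a fixed-size sequence-level embedding for downstream prediction tasks is mean pooling. The sketch below uses made-up toy vectors in place of real ProtT5/ESM model output.

```python
# Hedged sketch: mean-pooling per-residue embeddings into one
# sequence-level embedding. The vectors below are toy stand-ins for
# real protein language model output.

def mean_pool(residue_embeddings):
    """Average per-residue vectors into one fixed-size sequence embedding."""
    n = len(residue_embeddings)
    dim = len(residue_embeddings[0])
    return [sum(vec[d] for vec in residue_embeddings) / n for d in range(dim)]

# e.g. a 3-residue peptide embedded in a toy 4-dimensional space
per_residue = [
    [1.0, 0.0, 2.0, 0.0],
    [3.0, 0.0, 2.0, 0.0],
    [2.0, 3.0, 2.0, 0.0],
]
seq_embedding = mean_pool(per_residue)  # [2.0, 1.0, 2.0, 0.0]
```

The pooled vector has the same dimensionality regardless of sequence length, which is what lets a single downstream predictor handle proteins of any size.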

OpenFold, a deep learning model for 3D structure prediction of novel protein sequences, will also be available in the BioNeMo service.

Supercomputing for Inference

Optimize inference at supercomputing scale.

BioNeMo allows developers to deploy LLMs with billions, even trillions, of parameters. Today’s protein language models contain billions of parameters and require supercomputing infrastructure for inference across the vast chemical space. Dynamic resource scaling in the cloud allows LLM inference pipelines to automatically scale to meet compute demands.

Accelerates Drug Discovery Pipelines

Use a turnkey solution for AI drug discovery pipelines.

BioNeMo makes it easy to get started with pre-trained models and automatic downloaders and preprocessors for the UniRef50 and ZINC databases. Various models, embeddings, and outputs can be combined to bring multimodal data together. Unsupervised pre-training also eliminates the need for labeled data, jumpstarting the generation of learned embeddings that can predict protein structure, function, cellular location, water solubility, membrane-bound regions, conserved and variable regions, and more.
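To illustrate how learned embeddings feed downstream property prediction, here is a tiny nearest-centroid classifier sketch. All of it is hypothetical: the embeddings, labels, and task (soluble vs. insoluble) are made up, and in practice the vectors would come from a pretrained model rather than being hand-written.

```python
import math

# Hypothetical sketch: using learned sequence embeddings for a downstream
# property prediction (here, a made-up soluble/insoluble task) via a
# nearest-centroid classifier. Embeddings and labels are toy data.

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[d] for v in vectors) / n for d in range(len(vectors[0]))]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(embedding, centroids):
    """Return the label whose centroid is closest to the query embedding."""
    return min(centroids, key=lambda label: euclidean(embedding, centroids[label]))

labeled = {
    "soluble":   [[0.9, 0.1], [1.1, 0.0]],
    "insoluble": [[0.0, 1.0], [0.2, 0.8]],
}
centroids = {label: centroid(vecs) for label, vecs in labeled.items()}
prediction = predict([1.0, 0.2], centroids)  # "soluble"
```

The point of the sketch is the workflow, not the classifier: because pre-training is unsupervised, only this small labeled set is needed at the downstream step, and any simple model on top of the embeddings can serve as the predictor.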

Sign up for early access to the BioNeMo service.