Description: NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis.
Publisher: UIUC
Latest Tag: 3.0-beta5
Modified: March 1, 2024
Compressed Size: 1.5 GB
Multinode Support: No
Multi-Arch Support: Yes

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis, but is also file-compatible with AMBER, CHARMM, and X-PLOR.

System requirements

Before running the NGC NAMD container, please ensure your system meets the following requirements.

  • One of the following container runtimes
    • Docker (with GPU support via the nvidia-docker plugin or the --gpus flag)
    • Singularity
  • One of the following NVIDIA GPUs
    • Pascal (sm60)
    • Volta (sm70)
    • Ampere (sm80)
    • Hopper (sm90)

x86_64

  • CPU with AVX2 instruction support
  • One of the following CUDA driver versions
    • r545 (>= 545.23.08)
    • >= 525.60.13
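
As a quick check, nvidia-smi reports the installed driver version and GPU model:

nvidia-smi --query-gpu=driver_version,name --format=csv,noheader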

Running NAMD Examples

Download Dataset

The ApoA1 benchmark consists of 92,224 atoms and has been a standard NAMD cross-platform benchmark for years. Follow the steps below to use the ApoA1 input dataset to test the NGC NAMD container.

Download the ApoA1 dataset to your current directory:

wget -O - https://gitlab.com/NVHPC/ngc-examples/raw/master/namd/3.0/get_apoa1.sh | bash

Take a moment to inspect the shell script above. In particular, it injects the CUDASOAintegrate on setting into the configuration file, which enables the NAMD 3.0 GPU-resident code path.
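
For example, to confirm the setting was injected into the downloaded configuration files:

grep -i cudasoaintegrate apoa1/*.namd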

Replace {input_file} in the examples below with the path to the ApoA1 NAMD input file:

/host_pwd/apoa1/apoa1_nve_cuda_soa.namd

Select Tag

Several NAMD images are available, depending on your needs. Set the following environment variable, which will be used in the examples below.

export NAMD_TAG={TAG}

Where {TAG} is 3.0-beta5 or any other tag previously posted on NGC.
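
For example, to use the latest tag listed above:

export NAMD_TAG=3.0-beta5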

Set the executable name depending on the chosen tag.

export NAMD_EXE=namd3

Running with nvidia-docker

NGC supports the Docker runtime through the nvidia-docker plugin; with Docker 19.03 or later, the --gpus flag used in the examples below provides equivalent GPU access natively.
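
Optionally, pre-pull the image so the first run does not block on the download:

docker pull nvcr.io/hpc/namd:$NAMD_TAG

The following command starts an interactive shell inside the container: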

docker run -it --rm --gpus all --ipc=host -v $PWD:/host_pwd -w /host_pwd nvcr.io/hpc/namd:$NAMD_TAG 

Usage

Launch NAMD with one CPU thread and one GPU (the simplest configuration for NAMD versions >= 3.0) on your local machine or a single node:

docker run -it --rm --gpus all --ipc=host -v $PWD:/host_pwd -w /host_pwd nvcr.io/hpc/namd:$NAMD_TAG ${NAMD_EXE} +p1 +devices 0 +setcpuaffinity {input_file}

The +p argument specifies the number of CPU threads to use, and +devices specifies which GPUs to use. To use 2 CPU threads and 2 GPUs:

docker run -it --rm --gpus all --ipc=host -v $PWD:/host_pwd -w /host_pwd nvcr.io/hpc/namd:$NAMD_TAG ${NAMD_EXE} +p2 +devices 0,1 +setcpuaffinity {input_file}

NAMD 2.x is recommended for very large systems, multi-node simulations, or Pascal GPUs. In NAMD 3, the equivalent code path can be selected by setting CUDASOAintegrate off, or simply by not setting it in the configuration file. The input file /host_pwd/apoa1/apoa1_nve_cuda.namd (note the lack of _soa in the file name) in the ApoA1 dataset can be used to test this:

docker run -it --rm --gpus all --ipc=host -v $PWD:/host_pwd -w /host_pwd nvcr.io/hpc/namd:$NAMD_TAG ${NAMD_EXE} +ppn $(nproc) +setcpuaffinity +idlepoll {input_file}

The nproc command expands to the number of available CPU cores, so all of them are used. Depending on the system setup, manually specifying the number of PEs may yield better performance.
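
For example, a hypothetical run pinned to 16 PEs (tune the count to your CPU topology):

docker run -it --rm --gpus all --ipc=host -v $PWD:/host_pwd -w /host_pwd nvcr.io/hpc/namd:$NAMD_TAG ${NAMD_EXE} +ppn 16 +setcpuaffinity +idlepoll {input_file}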

Running with Singularity

Set the following convenience variable, which is used in the examples below (the docker:// prefix tells Singularity to pull the image from the NGC registry):

export SINGULARITY="singularity exec --nv -B $PWD:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/namd:$NAMD_TAG"
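
If you prefer a local image file over pulling on every run, you can build one first and point the variable at the resulting .sif instead:

singularity build namd_${NAMD_TAG}.sif docker://nvcr.io/hpc/namd:${NAMD_TAG}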

Launch NAMD with one CPU thread and one GPU (the simplest configuration for NAMD versions >= 3.0) on your local machine or a single node:

${SINGULARITY} ${NAMD_EXE} +p1 +devices 0 +setcpuaffinity {input_file}

The +p argument specifies the number of CPU threads to use, and +devices specifies which GPUs to use. To use 2 CPU threads and 2 GPUs:

${SINGULARITY} ${NAMD_EXE} +p2 +devices 0,1 +setcpuaffinity {input_file}

As with Docker, NAMD 2.x is recommended for very large systems, multi-node simulations, or Pascal GPUs. In NAMD 3, the equivalent code path can be selected by setting CUDASOAintegrate off, or simply by not setting it in the configuration file. The input file /host_pwd/apoa1/apoa1_nve_cuda.namd (note the lack of _soa in the file name) in the ApoA1 dataset can be used to test this:

${SINGULARITY} ${NAMD_EXE} +ppn $(nproc) +setcpuaffinity +idlepoll {input_file}

The nproc command expands to the number of available CPU cores, so all of them are used. Depending on the system setup, manually specifying the number of PEs may yield better performance.
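
Because the --nv flag binds the host's NVIDIA driver utilities into the container, a quick sanity check that the GPUs are visible is:

${SINGULARITY} nvidia-smi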

Note: Singularity 3.1.x - 3.5.x

There is a bug in Singularity versions 3.1.x through 3.5.x that causes LD_LIBRARY_PATH to be set incorrectly within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:

$ LD_LIBRARY_PATH="" singularity exec ...

Running on NVIDIA Base Command Platform

NVIDIA Base Command Platform (BCP) offers a ready-to-use cloud-hosted solution that manages the end-to-end lifecycle of development, workflows, and resource management. Before running the commands below, install and configure the NGC CLI; see the NGC CLI documentation for details.
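
A minimal first-time configuration, which prompts for your API key, org, team, and ACE:

ngc config set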

Uploading the Dataset to BCP

Note: apoa1_nve_cuda_soa.namd needs to be modified to remove the outputName parameter due to the nature of the mounted read-only dataset directory.
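
One way to strip the parameter (a sketch assuming GNU sed and that outputName appears on its own line in the file):

sed -i '/outputName/Id' ./apoa1/apoa1_nve_cuda_soa.namd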

Upload the ApoA1 dataset using the command below:

ngc dataset upload --source ./apoa1/ --desc "NAMD dataset" namd_dataset
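
The upload reports a dataset ID, which is needed for the --datasetid flag in the next step; it can also be looked up later with:

ngc dataset list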

Running NAMD on BCP

Run on a single node with a single GPU using the ApoA1 dataset:

ngc batch run --name "NAMD_single_gpu" --priority NORMAL --order 50 --preempt RUNONCE --min-timeslice 0s --total-runtime 0s --ace <your-ace> --instance dgxa100.80g.1.norm --commandline "namd3 +p1 +devices 0 +setcpuaffinity --outputName /results/namd_output /work/apoa1_nve_cuda_soa.namd" --result /results/ --image "hpc/namd:${NAMD_TAG}" --org <your-org> --datasetid <datasetid>:/work/
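
Once submitted, the job can be monitored and its results retrieved with the NGC CLI, using the job ID reported by the run command:

ngc batch info <job-id>
ngc result download <job-id>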

Known Issue for tag 3.0-beta2

  • Using more than one thread per GPU results in an error at the end of execution. This bug will be fixed in an upcoming release. To avoid the error, either use tag 3.0-beta5 or specify both GPU-resident mode (CUDASOAintegrate on) and device migration (DeviceMigration on) in the configuration file, as shown below.
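
If you must remain on 3.0-beta2, the workaround settings in the NAMD configuration file look like this:

CUDASOAintegrate on
DeviceMigration on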

Suggested Reading

NAMD

NAMD Manual

BCP User Guide