GPU-ACCELERATED BigDFT

Get Started Today with This GPU-Ready Apps Guide.

BigDFT

BigDFT is a density functional theory (DFT) massively parallel electronic structure code using a wavelet basis set with the capability to use a linear scaling method. Wavelets form a real-space basis set distributed on an adaptive mesh (two levels of resolution in our implementation). Goedecker-Teter-Hutter (GTH) or Hartwigsen-Goedecker-Hutter (HGH) pseudopotentials are used to remove the core electrons.

BigDFT is available in ABINIT v5.5 and higher but can also be downloaded as a standalone version from the website. Thanks to a Poisson solver based on a Green function formalism, periodic systems, surfaces, and isolated systems can be simulated with explicit boundary conditions. The Poisson solver can also be downloaded and used independently, and it is integrated in ABINIT, Octopus, and CP2K. The code, tutorials, and documentation are available on the BigDFT website.

Installation

You have the option to download and install BigDFT on bare-metal or pull and run the BigDFT container from NVIDIA GPU Cloud.

Installing applications in a high performance computing (HPC) environment can be complex. Containers let you run the application without installing it on the system, making it easy to deploy the most recent version of the application while optimizing performance.

Running the BigDFT container is straightforward and takes only minutes to set up.
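For example, once you are logged in to the NGC registry (nvcr.io) with your API key, pulling the image used throughout this guide is a single command:

docker pull nvcr.io/hpc/bigdft:cuda9-ubuntu1604-mvapich2_gdr-mkl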

Running Jobs

Once you pull the BigDFT container from NGC, there are four options for running it:

  • Running it directly from the nvidia-docker run command
  • Running it interactively within the container
  • Running it with Jupyter Notebook from a web browser
  • Running it with MPI/OpenMP/GPU

1. Running BigDFT from the Command Line

To run the BigDFT container from the command-line interface (CLI), you need input files in your current directory on the host system. Then you can issue the following command, which runs BigDFT and makes the current working directory accessible within the container as “/results”:

nvidia-docker run -it --rm -v $(pwd):/results -w /results \
  nvcr.io/hpc/bigdft:cuda9-ubuntu1604-mvapich2_gdr-mkl bigdft

2. Running BigDFT Interactively

In this example, we will run BigDFT interactively and reproduce a FeHyb test included within the container.

To run the container interactively, issue the following command, which starts the container and also mounts your current directory to /results so it’s available inside the container. (See the -v options on the command below to set the mapping of your local data directory to the one inside the container.)

nvidia-docker run -it --rm -v $(pwd):/results \
  nvcr.io/hpc/bigdft:cuda9-ubuntu1604-mvapich2_gdr-mkl bash

After the container starts, you’ll be in the / directory. You can then change to /docker/FeHyb/GPU, where you can find input files (input.yaml, posinp.xyz, and psppar.Fe) and run the test:

cd /docker/FeHyb/GPU

bigdft

After the computation, the output can be found in the log.yaml file and the timings in time.yaml. These files can be copied to /results to save them in the current folder on the host system.
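For example, from the /docker/FeHyb/GPU directory inside the container:

cp log.yaml /results/
cp time.yaml /results/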

3. Running with Jupyter Notebook

The default command of the container launches a Jupyter server inside the container, which can be accessed from any web browser and allows interactive sessions to be run with BigDFT. For this to work, you just need to redirect port 8888 from the container to a port on the host system. If you then want to access it from another computer, this port must be opened to connections:

nvidia-docker run -p 8888:8888 -it --rm -v $(pwd):/results \
  nvcr.io/hpc/bigdft:cuda9-ubuntu1604-mvapich2_gdr-mkl

You can then access the server from a web browser at localhost:8888.
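If the container is running on a remote machine, one option that avoids opening the port to outside connections is an SSH tunnel from your workstation (the user and host names below are placeholders for your own):

ssh -N -L 8888:localhost:8888 user@remote-gpu-host

With the tunnel active, the notebook is again reachable at localhost:8888 on your workstation.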

The password of the running Jupyter server is “bigdft.” An example notebook is provided at /ContainerXP/H2O-Polarizability.ipynb.

More documentation can be found on the BigDFT website.

4. Running with MPI/OpenMP/GPU

MVAPICH-GDR 2.3 is installed in the container, and launching with mpirun -np <number of processes> is possible.

By default, OpenMP is activated and uses all cores of the host system. To use the message passing interface (MPI), OMP_NUM_THREADS needs to be set to a lower value (ncores/nprocesses, for example).
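A minimal sketch of that rule, assuming an interactive shell inside the container and 4 MPI processes (nproc reports the number of available cores):

# e.g. 32 cores / 4 processes = 8 OpenMP threads per process
NPROCS=4
export OMP_NUM_THREADS=$(( $(nproc) / NPROCS ))
mpirun -np $NPROCS bigdft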

A direct run with 4 MPI processes and 8 OpenMP threads per process on a 32-core node:

nvidia-docker run -it --rm -e OMP_NUM_THREADS=8 -v $(pwd):/results -w /results \
  nvcr.io/hpc/bigdft:cuda9-ubuntu1604-mvapich2_gdr-mkl \
  mpirun -np 4 bigdft

If multiple GPUs are available on the node, each MPI process will try to use a different one, cycling through them if there are more processes than GPUs (using the --ipc=host option of docker run is also recommended). As NVIDIA® CUDA™-aware MPI is needed for PBE0 computations, setting MV2_USE_CUDA to 1 can also be necessary. It’s not enabled by default, as it could slow down non-GPU runs.
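Before launching a multi-GPU run, a quick sanity check (not a BigDFT-specific step) is to list the GPUs visible to the container with nvidia-smi:

nvidia-docker run --rm nvcr.io/hpc/bigdft:cuda9-ubuntu1604-mvapich2_gdr-mkl nvidia-smi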

For a multi-GPU run with 4 processes using 32 cores:

nvidia-docker run -it --rm --ipc=host -e MV2_USE_CUDA=1 -e OMP_NUM_THREADS=8 \
  -v $(pwd):/results -w /results nvcr.io/hpc/bigdft:cuda9-ubuntu1604-mvapich2_gdr-mkl \
  mpirun -np 4 bigdft

Test Setup

The BigDFT container includes input files to test the behavior of the software, with GPU acceleration or CPU only:

nvidia-docker run -it --rm -v $(pwd):/results \
  nvcr.io/hpc/bigdft:cuda9-ubuntu1604-mvapich2_gdr-mkl bash

After the container starts, you’ll be in the / directory. You can then change to the FeHyb test directories, run both the GPU and CPU-only tests, and copy the resulting logs to /results:

cd /ContainerXP/FeHyb/GPU

bigdft

cp log.yaml /results/log_gpu.yaml

cp time.yaml /results/time_gpu.yaml

cd /ContainerXP/FeHyb/NOGPU

bigdft

cp log.yaml /results/log_cpu.yaml

cp time.yaml /results/time_cpu.yaml

Comparing the two runs should show a significant speedup of the GPU version over the CPU-only one.

Other tests can be performed to make the best use of the resources on the host system. A bigger test set is available in the /docker/H2O_32 folder.
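For example, from an interactive session inside the container, the larger test can be launched the same way as FeHyb; the process and thread counts below are illustrative and assume the input files sit directly in that folder:

cd /docker/H2O_32
export OMP_NUM_THREADS=8
mpirun -np 4 bigdft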

To validate output correctness, the log.yaml file can be compared to the log.ref.yaml file available in the CPU variants of the test folders. To perform an easy comparison, launch:

python /usr/local/bigdft/lib/python2.7/site-packages/fldiff_yaml.py \
  -d log.yaml -r ./log.ref.yaml -t ./tols-BigDFT.yaml

The correct output should be:

Test succeeded: True

RECOMMENDED SYSTEMS CONFIGURATIONS

The BigDFT container is optimized and tested for reliability to run on NVIDIA Pascal™- and NVIDIA Volta-powered systems with CUDA 9 or newer. BigDFT, like all the HPC application containers available on NVIDIA GPU Cloud, can run on the following systems:

  • Workstation: Powered by NVIDIA Titan V and x86 CPU
  • NVIDIA® DGX™ Systems
  • HPC cluster with Pascal/Volta GPUs, CUDA 9, x86 CPU
  • Cloud (AWS, Google Cloud Platform and more)

GET ACCESS TO GPU-ACCELERATED APPLICATION CONTAINERS WITH NVIDIA GPU Cloud.