Get started today with this GPU-Ready Apps Guide.


MILC represents part of a set of codes written by the MIMD Lattice Computation (MILC) collaboration used to study quantum chromodynamics (QCD), the theory of the strong interactions of subatomic physics. It performs simulations of four-dimensional SU(3) lattice gauge theory on MIMD parallel machines. "Strong interactions" are responsible for binding quarks into protons and neutrons and holding them all together in the atomic nucleus.

The MILC collaboration has produced application codes to study several different QCD research areas; only one of them, ks_dynamical simulations with conventional dynamical Kogut-Susskind quarks, is used here. More information is available on the MILC collaboration's website.


MILC can be downloaded and installed on bare metal using the build instructions provided on MILC's website. Containers for MILC are also available on NVIDIA GPU Cloud (NGC).

Installing applications in a high performance computing (HPC) environment can be challenging. Containers let you run the application without installing it on the system, making it easy to deploy the most recent version while maintaining optimized performance. Furthermore, running a MILC container is straightforward and takes only minutes to set up.

Running Jobs

Once you pull the MILC container from NGC, there are two ways to run it:

  • Run MILC from the nvidia-docker run command.
  • Run MILC interactively within the container.
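Either way, you first need the image locally. Below is a minimal pull sketch, assuming the NGC registry path nvcr.io/hpc/milc (check NGC for the image's current name and tag). The login and pull commands are printed rather than executed, so the sketch works even on a machine without Docker:

```shell
#!/bin/sh
# Assumed NGC registry path; confirm the image name and tag on ngc.nvidia.com.
REGISTRY=nvcr.io
IMAGE="$REGISTRY/hpc/milc"
# Dry run: print the commands (a real login requires your NGC API key).
echo "docker login $REGISTRY"
echo "docker pull $IMAGE"
```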

1. Running MILC from the Command Line

In this example, we're running the APEX benchmark on an 18x18x18x36 lattice, using the scripts in the /workspace/examples directory inside the container, on one GPU.

Note that the APEX data will be downloaded by the script if it’s not available in the directory mounted to /data in the container.

To save the output, we map (with -v) the current working directory to the /apex directory inside the container and write our log files there, so they'll be available outside the container when the run completes. To run the MILC container from the command-line interface (CLI), issue the following command:

nvidia-docker run -ti --rm -v $(pwd)/data:/data -v $(pwd):/apex /workspace/examples/ 1
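The command above can be wrapped in a small host-side script that creates the data directory first. This is only a sketch: MILC_IMAGE is a placeholder for the image you pulled from NGC, and the script echoes the final command as a dry run rather than executing it, since nvidia-docker needs a GPU host:

```shell
#!/bin/sh
# Sketch of a host-side wrapper for the single-node APEX run.
# MILC_IMAGE is a placeholder: set it to the MILC image pulled from NGC.
MILC_IMAGE="${MILC_IMAGE:-<ngc-milc-image>}"
mkdir -p "$(pwd)/data"   # APEX data is downloaded here by the script if missing
# Dry run: echo the command instead of executing it.
echo nvidia-docker run -ti --rm \
  -v "$(pwd)/data:/data" \
  -v "$(pwd):/apex" \
  "$MILC_IMAGE" /workspace/examples/ 1
```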

Note that you could also point the CLI command at your own directory and run your own scripts (*.sh files, for example). The command below starts the MILC container and runs a *.sh script from your results directory:

nvidia-docker run -ti --rm -v $(pwd)/data:/data -v $(pwd):/results /results/*.sh
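As a concrete, hypothetical example of such a script, the sketch below writes a run.sh into the current directory; when that directory is mounted to /results, the container sees it as /results/run.sh, and the run's output is logged alongside it:

```shell
#!/bin/sh
# Generate a hypothetical run.sh for the results directory.
# Inside the container this file would be visible as /results/run.sh.
cat > run.sh <<'EOF'
#!/bin/sh
# Run the bundled example on one GPU, keeping a log in the mounted /results dir.
/workspace/examples/ 1 2>&1 | tee /results/milc_run.log
EOF
chmod +x run.sh
```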

2. Running MILC Interactively

In this example, we run the APEX benchmark again, this time from inside the /workspace directory in the container. Running interactively is useful when you want to make several MILC runs within the same container session, modifying scripts between runs.

To run the MILC container interactively, issue the following command, which starts the container and also mounts your current directory to /work so it’s available inside the container. (See the -v options on the command below to set the mapping of your local data directory to the one inside the container.)

nvidia-docker run -ti --rm -v $(pwd)/data:/data -v $(pwd):/work /bin/bash

After the container starts, you'll be in the /workspace directory and can run in two ways. The first is to use the default scripts in /workspace, modifying them and rerunning as needed. Note that any mounted datasets will be under /data if you used the command above.

/workspace/examples/ 1

You can also mount your own working directory, containing your scripts, to /work in the container and run those scripts once inside:

-v <your-working-dir>:/work
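Putting both volume mappings together, here is a dry-run sketch of the full interactive launch. The image name is a placeholder, since the exact NGC image and tag are not shown in this guide; the command is echoed rather than executed because nvidia-docker requires a GPU host:

```shell
#!/bin/sh
# Build the interactive launch command as a string and print it.
# <ngc-milc-image> is a placeholder for the image pulled from NGC.
MILC_IMAGE="${MILC_IMAGE:-<ngc-milc-image>}"
LAUNCH="nvidia-docker run -ti --rm \
  -v $(pwd)/data:/data \
  -v $(pwd):/work \
  $MILC_IMAGE /bin/bash"
echo "$LAUNCH"
```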


This section shows the typical performance of the MILC container on GPU-accelerated systems.

MILC Runs Over 17x Faster on Pascal GPUs
MILC Runs Over 23x Faster on Volta GPUs


The MILC container is optimized and tested for reliability to run on NVIDIA Pascal™- and NVIDIA Volta-powered systems with NVIDIA CUDA® 9 or newer. MILC and all the HPC application containers available on NVIDIA GPU Cloud can run on the following systems:

  • Workstation: Powered by NVIDIA Titan V and x86 CPU
  • NVIDIA® DGX™ Systems
  • HPC cluster with Pascal/Volta GPUs, CUDA 9, x86 CPU
  • Cloud (AWS, Google Cloud Platform and more)