CUDA Spotlight: GPU-Accelerated Multi-Phase Flow Simulations
This week's Spotlight is on Dr. Mehdi Raessi, Assistant Professor in the Department of Mechanical Engineering at the University of Massachusetts-Dartmouth.
NVIDIA: Mehdi, tell us about your research.
Mehdi: The focus of my research is primarily on multi-phase flows and free-surface flows with phase change. We develop computational algorithms and flow solvers, and use them to study industrial and research applications that involve multi-phase flows. Examples include materials processing (thermal spray coating and casting), energy systems (both renewable and conventional), and environmentally friendly or “green” refrigeration systems.
NVIDIA: Explain “multi-phase flow” in layman’s terms.
Mehdi: Suppose there are two (or more) immiscible, or “non-blendable,” fluids flowing together, like the type of fluid flow we see in a lava lamp. The flow consists of different fluids, or phases, and the fluids are separated by a distinct interface or boundary. We call such flows multi-phase flows; they are also referred to as interfacial flows. The definition of multi-phase flow can be broader and include a solid phase as well as liquid or gas phases.
Multi-phase flows are ubiquitous in industrial and scientific applications. A few examples are ink-jet printers, sprays and atomizers, casting and coating processes, and bubbly flows.
Numerical simulation of the impact of a titanium alloy droplet
NVIDIA: What role does GPU computing play in your work?
Mehdi: Our numerical algorithm for solving the fluid flow equations involves a step in which we solve a large system of linear equations to compute the pressure field. That single step can take from 50 to 99.9 percent of the total simulation time! As we increase the number of grid points in our simulations, the pressure solution step takes a larger percentage of the total simulation time.
To speed up this task, my graduate student, Stephen Codyer, ported the pressure calculations to the GPU. His tests show that the GPU-accelerated solver runs a 3D simulation with over 28 million grid points 15 times faster than the same calculation on the CPU.
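The pressure step described above amounts to solving a large, sparse linear system arising from a pressure Poisson equation on the simulation grid. As a rough illustration of why iterative solvers dominate the runtime (this is a generic sketch, not the group's actual solver — the 1D grid, boundary conditions, and tolerances are assumptions for illustration), here is a matrix-free conjugate-gradient solve of a discretized Poisson problem; in a GPU implementation, the dot products and matrix-vector products inside this loop are the operations that get accelerated:

```python
import numpy as np

def conjugate_gradient(A_mul, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A, given only
    the action u -> A @ u (matrix-free), via conjugate gradients."""
    x = np.zeros_like(b)
    r = b - A_mul(x)          # initial residual
    p = r.copy()              # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A_mul(p)
        alpha = rs / (p @ Ap)        # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:    # converged on residual norm
            break
        p = r + (rs_new / rs) * p    # new conjugate direction
        rs = rs_new
    return x

# Model problem: -u'' = f on (0, 1), u(0) = u(1) = 0,
# second-order central differences on an interior grid of n points.
n = 127
h = 1.0 / (n + 1)

def apply_A(u):
    """Apply the tridiagonal (-1, 2, -1)/h^2 operator without
    forming the matrix (homogeneous Dirichlet boundaries)."""
    Au = 2.0 * u
    Au[:-1] -= u[1:]   # right neighbor
    Au[1:] -= u[:-1]   # left neighbor
    return Au / h**2

x_grid = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x_grid)   # chosen so u(x) = sin(pi x)
u = conjugate_gradient(apply_A, f)
err = np.max(np.abs(u - np.sin(np.pi * x_grid)))
```

Every iteration is built from dot products and one operator application, which is why offloading those kernels to the GPU speeds up the whole pressure step.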
My colleague, Prof. Gaurav Khanna, from our Physics Department, helped us a great deal on this project and shared his extensive experience in GPU computing. We minimized the communication between the GPU and the host CPU and achieved much better speedups than open-source GPU linear algebra libraries offer.
NVIDIA: What are the benefits of using CUDA?
Mehdi: The main benefits of using CUDA 4.0 for our research and the type of computational fluid dynamics that we do are ease of programming and performance. Anyone with basic to advanced C programming knowledge can access the power of CUDA-enabled GPUs and use the included libraries, such as cuBLAS and cuSPARSE. These libraries provide common operations such as a basic dot product as well as advanced features like a sparse linear system solver. Additional CUDA API calls let the user control exactly which threads perform which tasks.
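The building blocks mentioned above — dense dot products (cuBLAS) and sparse matrix operations feeding an iterative solver (cuSPARSE) — have familiar CPU-side analogues. The sketch below uses NumPy/SciPy purely to illustrate the same two operations the CUDA libraries accelerate on the GPU; the tridiagonal test matrix and right-hand side are made-up examples, not data from the article:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Dense dot product: the CPU analogue of a cuBLAS dot routine.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
dot = x @ y   # 1*4 + 2*5 + 3*6 = 32.0

# Sparse linear solve: conjugate gradients driven by sparse
# matrix-vector products, the pattern cuSPARSE accelerates.
# A is a symmetric positive-definite tridiagonal matrix (CSR format).
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(100, 100), format="csr")
b = np.ones(100)
sol, info = cg(A, b)          # info == 0 indicates convergence
residual = np.linalg.norm(A @ sol - b)
```

On the GPU, keeping the matrix and vectors resident in device memory across iterations — rather than copying back and forth each step — is what avoids the host-device communication overhead mentioned earlier.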
While the limited on-board memory of a GPU may deter some users compared to the potentially unlimited memory of a cluster, a single Tesla C2070 sufficed for most of our computational needs, because we port only a portion (usually less than 6 GB) of our calculations to the GPU. We are now exploring the use of multiple GPUs for our calculations.
I would like to add that with the 15X speedup I mentioned earlier, we can of course run simulations faster or increase the size of our simulations; but, just as importantly, a 15X speedup using CUDA means that a computational scientist could perhaps replace a common small-scale CPU cluster with a single Tesla workstation. This would yield significant cost savings, in both procurement and power consumption. It could also tie in very well with various “green” initiatives, because GPU computing is extremely effective in terms of performance delivered per watt consumed.
NVIDIA: Tell us about the Scientific Computing Group at UMass-Dartmouth.
Mehdi: This is an interdisciplinary group composed of faculty members and students from the departments of Mathematics, Physics, Mechanical Engineering, and Civil & Environmental Engineering, as well as the School for Marine Science & Technology.
The members of this group collaborate on a variety of applications, including complex flows in energy devices (especially renewable energy), materials processing and mechanics, and astrophysics.
I am pleased to mention that our Scientific Computing Group recently acquired and installed a GPU cluster from IBM, which has 60 Tesla M2050 GPUs. We are currently planning to add data visualization capability to the group.
NVIDIA: What future applications can you envision in your research area?
Mehdi: As we all know, energy and the environment have become the most pressing issues in the world. Addressing these issues requires new technology and drastic changes in the ways that we use our energy resources. After events like the oil spill in the Gulf of Mexico and the Fukushima Daiichi nuclear disaster, I think everyone agrees that we should plan to use energy resources that have low potential to cause catastrophic events.
We have begun projects that are targeting these issues. With GPU-accelerated computational tools, we are now able to study much larger problems at a level of detail that was not feasible before. These simulations can lead to new energy devices that are more efficient and have less environmental impact. I believe the capability to run faster and faster simulations with GPUs will one day enable us to predict, respond to and mitigate catastrophic events.
Dr. Mehdi Raessi is an Assistant Professor in the Department of Mechanical Engineering at the University of Massachusetts-Dartmouth. He joined UMass-Dartmouth in 2010 following a postdoctoral study at NASA-Stanford University's Center for Turbulence Research (CTR). Dr. Raessi obtained his PhD in Mechanical Engineering from the University of Toronto in 2008. During his graduate studies, he worked in the Centre for Advanced Coating Technologies (CACT).
He is the recipient of an Industrial Research and Development Fellowship from the Government of Canada, a Postdoctoral Fellowship from NASA-Stanford University’s Center for Turbulence Research, and an Early Career Teaching Award from the University of Toronto.
Dr. Mehdi Raessi
Computational Multi-phase Flows Group
Mechanical Engineering Department
University of Massachusetts-Dartmouth
285 Old Westport Road, North Dartmouth,
Massachusetts, USA 02747-2300