CUDA Spotlight: Anne Elster




GPU-Accelerated Imaging and Simulations

This week's Spotlight is on Anne C. Elster.

Anne is an Associate Professor at the Norwegian University of Science and Technology (NTNU), a CUDA Research Center and CUDA Teaching Center, where she runs the HPC-Lab (Heterogeneous and Parallel Computing Lab). She is also a Visiting Scientist at the University of Texas at Austin, a CUDA Teaching Center.

This interview is part of the CUDA Spotlight Series.


Q & A with Anne Elster

NVIDIA: Anne, what are some examples of projects you are working on at NTNU?
Anne: Most of our projects today are related to medical imaging and oil & gas simulations on heterogeneous computer systems. My lab has close collaborations with Statoil, SINTEF Med Tech and the Department of Medical Imaging at our Medical School.

In addition, we have developed a nice real-time 3D snow simulation to which we continue to add features. The simulation can be used as a visual test-bed for numerical algorithms, terrain interactions, road planning and more.

At GTC 2013, we demonstrated how the snow simulation computes more than four million particles affected by the wind field and terrain in real time by harnessing the compute power of GPUs. We are also experimenting with adding SPH (smoothed-particle hydrodynamics) and other fluid techniques to simulate avalanches and similar phenomena. [Read more about it here.]
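
To make the per-particle update concrete, here is a minimal, hypothetical CUDA sketch of the kind of wind-advection kernel such a simulation might use. The drag and gravity constants, flat terrain test and data layout are assumptions for illustration, not the actual HPC-Lab code.

#include <cuda_runtime.h>

// Illustrative sketch only: one thread advects one snow particle under a
// uniform wind field and gravity. The real simulation uses a spatially
// varying wind field and proper terrain collision handling.
__global__ void advectSnow(float3 *pos, float3 *vel, float3 wind,
                           float dt, int numParticles)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numParticles) return;

    float3 v = vel[i];
    // Relax velocity toward the wind and apply gravity (constants are made up).
    const float drag = 0.5f, gravity = 9.81f;
    v.x += (wind.x - v.x) * drag * dt;
    v.y += (wind.y - v.y) * drag * dt - gravity * dt;
    v.z += (wind.z - v.z) * drag * dt;

    float3 p = pos[i];
    p.x += v.x * dt;  p.y += v.y * dt;  p.z += v.z * dt;

    // Clamp to a flat "terrain" at y = 0 as a stand-in for real terrain tests.
    if (p.y < 0.0f) { p.y = 0.0f; v.y = 0.0f; }

    pos[i] = p;
    vel[i] = v;
}

// Host-side launch for ~4 million particles, one thread per particle:
// advectSnow<<<(numParticles + 255) / 256, 256>>>(d_pos, d_vel, wind, dt, numParticles);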

NVIDIA: What are the advantages of GPU computing?
Anne: My group at NTNU started harnessing the processing power of the GPU in 2006, back before CUDA was available. Even in those early days, we realized that GPU computing offers a lot of computational power at a low cost in energy and dollars by leveraging technology originally driven by the video game industry.

NVIDIA: What's on the horizon in the world of GPU computing?
Anne: Multi- and many-core systems based on GPU technology will become even more omnipresent. The increased computational power of smaller and smaller devices, integrated with sensor technology, will lead to applications we have not even thought of yet. It could, for instance, enable complicated biometric algorithms to replace keys, ID cards, PIN codes, etc.

NVIDIA: Can you share any programming techniques or tips with us?
Anne: I always tell my students that when programming for performance, the key is, as in real estate: location, location, location. The closer your data is to the processing cores, the better your application will perform. This is, of course, also true in CUDA. If you use a traditional GPU system connected over the PCIe bus, you really want to keep as much of the data your kernels need as possible resident in GPU RAM, so you are not constantly shipping it back and forth across the bus.
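
As a rough illustration of that tip, the following minimal sketch (the kernel, sizes and names are made up for this example) keeps the working set resident on the GPU across many kernel launches, paying the PCIe transfer cost only once in each direction instead of once per iteration.

#include <cuda_runtime.h>

// Trivial stand-in kernel; the real computation would go here.
__global__ void step(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 0.5f + 1.0f;
}

// Copy over PCIe once, launch many kernels against the same device buffer,
// and copy the result back once, rather than transferring every iteration.
void runResident(float *h_data, int n, int iters)
{
    float *d_data;
    size_t bytes = (size_t)n * sizeof(float);
    cudaMalloc(&d_data, bytes);
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);

    for (int i = 0; i < iters; ++i)
        step<<<(n + 255) / 256, 256>>>(d_data, n);

    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d_data);
}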

NVIDIA: What courses do you teach?
Anne: This semester I am teaching a compilers course with over 60 students and a Ph.D. course on heterogeneous and cloud computing with seven students. Each fall I teach the parallel computing class I developed. Last fall, this course had 54 students, each of whom completed the programming assignments and the final exam.

I currently supervise four Ph.D. students at NTNU and co-supervise another four, including one (de facto) at the University of Texas at Austin, where I spend each summer and my sabbaticals as a visiting scientist. In addition, I supervise nine master's students. More than 25 of my graduate students have completed theses on GPU and heterogeneous computing.

NVIDIA: Why do you like teaching?
Anne: It is very rewarding to see young minds get excited about technology. I also feel it is very important for our computer science students to learn about compilers and parallel programming, given how fast the industry is moving with multi- and many-core systems, in everything from supercomputers to workstations to handheld devices.

NVIDIA: What is your advice for young people who are just learning about coding?
Anne: I know that not everyone agrees with me, but I want students to experience MPI programming before learning CUDA, OpenCL or even OpenMP. MPI forces the students to think about data locality and, in my opinion, helps them become better programmers. I teach my parallel computing class this way.
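
The point about MPI and data locality can be seen in even the smallest program: each rank owns only its piece of the data, and results only come together through explicit communication. The sketch below is a generic, hypothetical example of that pattern, not material from the course.

// Minimal MPI sketch (illustrative): every byte a rank needs must be moved
// there explicitly, which is what forces you to think about data locality.
#include <mpi.h>
#include <vector>
#include <cstdio>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int chunk = 1 << 20;              // elements owned by this rank
    std::vector<double> local(chunk, rank); // each rank holds only its piece

    // Local work on local data: there is no shared memory to lean on.
    double partial = 0.0;
    for (double x : local) partial += x;

    // The only way results meet is an explicit communication step.
    double total = 0.0;
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) std::printf("total = %f\n", total);
    MPI_Finalize();
    return 0;
}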


Bio for Anne Elster

Dr. Elster is an Associate Professor at the Norwegian University of Science and Technology (NTNU), where she runs the HPC-Lab (Heterogeneous and Parallel Computing Lab). She is also a Visiting Scientist at the University of Texas at Austin.

She served on the MPI standards committees (MPI and MPI-2) for Cornell and Schlumberger, respectively, and became a Senior Member of the IEEE in 2000. She has worked with GPGPU computing since 2006 and is the PI for NTNU's and UT Austin's CUDA Teaching Centers as well as the CUDA Research Center at NTNU.

Dr. Elster holds M.S. and Ph.D. degrees in Electrical Engineering from Cornell University where she explored various HPC systems in the late 1980s and early 1990s. After graduating from Cornell, she worked for Schlumberger in Austin before returning to academia via the University of Texas at Austin in 1997.

Relevant Links
http://www.idi.ntnu.no/~elster/

Contact Info
Elster [at] idi.ntnu [dot] no