CUDA Spotlight: Lorena Barba, Boston University

GPU Computing for Scientific Discovery

This week's spotlight is on Lorena Barba, Assistant Professor at Boston University. Professor Barba is a computational scientist and a fluid dynamicist. She was recently presented with the prestigious National Science Foundation CAREER award and is a CUDA Fellow. This interview is part of the CUDA Spotlight Series.

NVIDIA: Lorena, how did you get involved in using GPUs for scientific computing?
Lorena: Towards the end of 2006 (while I was in England), I was working with a visiting student from Chile, whose visit I was able to fund thanks to a European Union collaboration program with Latin America. This student—Felipe Cruz, who has now graduated as my first PhD student and is working as a postdoctoral researcher at the Nagasaki Advanced Computing Center—was making excellent progress in understanding a rather complicated algorithm, the fast multipole method.

As his visit gave rise to a desire to do a PhD in computational science, I began to look for funding for him. As it turned out, we had recently had a presentation from Airbus about their urgent need to dramatically increase their capability for fluid mechanics simulations. They talked about “mega-class” simulations, a million times faster within a decade, and were eager to increase their collaboration with academia.

With the help of a staff scientist at the HPC center at the University of Bristol, we reached out to Airbus with a proposal to investigate new algorithms for fluid simulation that would scale and achieve very high performance. My proposal also said we would “explore possible increases in simulation capability of novel architectures such as multi-core processors and heterogeneous computing,” mentioning GPUs, ClearSpeed coprocessors and the IBM Cell as examples. This was in March 2007, and CUDA was just out, but through phone conversations with Airbus, we knew they were interested in this avenue of research. I got the funding from Airbus, and they also gave us access to an experimental cluster that had a sample of all the latest hardware coming out, including some GPUs.

NVIDIA: Why is GPU computing so compelling?
Lorena: There are several reasons why I find GPUs compelling. First, how they showed up in the ecosystem of high-performance computing is fascinating. It was the insatiable market demand for games that fueled huge R&D budgets for innovation in computer hardware. Did you know that about five million people in the US play online games for more than 40 hours a week? That’s like a full-time job. At the same time, game designers have demanded more innovation so they can let loose their artistic expression. This is how the “graphics processing unit” was born. The fact that some scientists started to experiment with GPUs to do numerical computing is remarkable. But scientists are always craving more computing power, and they will try anything. Then we have the fact that NVIDIA paid attention and responded with CUDA, and less than four years later the number-one supercomputer in the world was using GPUs. That’s pretty amazing.

The second reason I find GPUs interesting is that they make you take a big plunge into parallelism. A GPU chip has hundreds of processors that work in parallel. It is built to process many objects in exactly the same way, as required for rendering video. These fundamental architectural features make GPUs ideal for computations with a high degree of data parallelism. We all know that parallel programming is hard, but the crux is that parallelism is the only avenue for increasing computing performance in the foreseeable future. So if most coding will have to be parallel, why not take the plunge and take advantage of the most parallel hardware around? That is the GPU right now, and it will remain so for some time.
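To make that data-parallel style concrete, here is a minimal sketch of a kernel that applies the same operation to every element of an array, with one GPU thread per element. The choice of Numba's CUDA support and the saxpy example are illustrative assumptions, not anything from the interview; any CUDA toolchain would express the same idea.

    import numpy as np
    from numba import cuda

    # One GPU thread per array element: every thread runs the same kernel
    # body on its own piece of the data (classic data parallelism).
    @cuda.jit
    def saxpy(a, x, y, out):
        i = cuda.grid(1)              # this thread's global index
        if i < out.size:              # guard threads past the end of the array
            out[i] = a * x[i] + y[i]

    n = 1 << 20
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)

    d_x, d_y = cuda.to_device(x), cuda.to_device(y)
    d_out = cuda.device_array_like(d_x)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    saxpy[blocks, threads_per_block](2.0, d_x, d_y, d_out)   # launch thousands of threads at once

    out = d_out.copy_to_host()
    assert np.allclose(out, 2.0 * x + y)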

GPUs are without a doubt a disruptive technology in the world of high-performance computing. But they should no longer be talked about as “novel hardware”; they are now prominent in the supercomputer centers of the world and are probably the best candidate for reaching exascale computing. As an interesting sign, a recent article on InsideHPC focusing on the K computer (current #1 in the world) says that it is “unique in what it doesn’t use: accelerators.” So now not using GPUs is called “unique”!

NVIDIA: We hear a lot about “exascale” as a milestone. Why is it important?
Lorena: The current worldwide race to reach exascale computing is really about maintaining growth in computer performance. No technology has ever grown as fast as computing, from the three operations per second that the Harvard Mark I could do in 1944, to the current level of 10 petaflop/s (peta denotes a 1 followed by 15 zeros) for the K computer.

In the lifetime of, say, my graduate students, computing performance has multiplied a billion times. What most people don’t realize is that society is completely dependent on maintaining this growth. In practically all sectors—science, government, health, financial services, education—there is an expectation that computing will keep getting faster and cheaper. A recent report by the National Academies analyzes this point in detail.

If computing performance has been increasing at such an accelerating pace, what is special about reaching exascale computing? This milestone is different because there are formidable obstacles. The clock speed of CPUs has hit power limits and, to quote the National Academy report, “there is no known alternative to parallel systems for sustaining growth in computing performance.” Basically, every computer programmer will need to develop parallel programs in the near future, and most have not been trained to do it.

"GPUs are without a doubt a disruptive technology
in the world of high-performance computing"

NVIDIA: What is your vision for educating the next generation of computational scientists?
Lorena: Bridging the skills gap is a monumental problem, and we need to dedicate coordinated efforts to solving it. There are several aspects to this. For example, technology businesses need thousands more engineers and computer scientists, yet we continue to see high attrition in these degree programs and low participation by women. Only about 13 percent of computer science students are women, down substantially from previous years. If we could convince more young women to enter this field, we would be well on our way to increasing the number of graduates.

And if educational methods that have been proven to work were more widely applied, we would be able to reduce attrition and increase success in engineering and computer science programs. The first recommendation in a recent report of the President’s Council of Advisors on Science and Technology (PCAST) on STEM education is to “catalyze widespread adoption of empirically validated teaching practices.” As an educator, I am thoroughly invested in this, and I am dedicating sustained effort to improving student learning using the latest cognitive research and teaching methodologies.

In addition, I take a stand by advocating for and contributing to open educational resources (posting lectures on iTunes U and YouTube, for example), and I have developed several extramural opportunities for students in computational science. For example, the Pan-American Advanced Studies Institute's Scientific Computing in the Americas: the Challenge of Massive Parallelism event in January 2011 hosted 68 students and participants, who learned from 14 world-leading experts. Several of these students participated in the “CUDA Research Fast Forward” presentation at the NVIDIA booth at SC11, which I also organized.

Educating the future generation of computational scientists is crucial for success in exploiting computer performance for scientific discovery. My vision for this endeavor is founded on promoting collaboration, advocating and contributing to open science and open source, and using technology for open education.

NVIDIA: Tell us about your most recent initiative to improve learning in a computational course.
Lorena: This spring, I’m teaching a class in Computational Fluid Dynamics. Instead of the traditional format, where I would lecture and cover the theory in class and assign homework to get the students to practice problem-solving, I am doing the opposite: the transfer of information is assigned as homework by means of lecture videos, and class time is spent purely on practical problem-solving. This model has recently come to be called the “flipped classroom.” For a computational course, it is an excellent method for increasing student success.

Who has ever learned computing through lectures? You really need a collaborative problem-solving environment, ideally with well-designed tasks for you to solve. So I’m not lecturing at all in this class. Instead, I design tasks that embed knowledge and get the students talking and collaborating. One example: students had previously written a Python code to solve a one-dimensional transport problem (a traveling shock wave); in the next class, I had them work in pairs, with one student reading his or her code out loud to the other while explaining its functionality. The students then had to discuss what they had done differently and agree on the best solution.
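To give a sense of the kind of code the students were comparing, here is a minimal sketch of a one-dimensional nonlinear transport solver in Python, in which an initial velocity "hat" steepens into a traveling shock front. The grid, time step, initial condition, and forward-in-time, backward-in-space differencing are illustrative assumptions, not the actual class assignment.

    import numpy as np

    # Minimal 1-D nonlinear convection sketch (illustrative parameters only):
    # an initial "hat" in the velocity steepens into a traveling shock front.
    nx, nt = 41, 25                     # grid points, time steps
    dx = 2.0 / (nx - 1)                 # spatial step on the domain [0, 2]
    dt = 0.01                           # time step, kept small for stability

    u = np.ones(nx)                     # background velocity u = 1 ...
    u[int(0.5 / dx):int(1.0 / dx) + 1] = 2.0   # ... with a hat between x = 0.5 and 1.0

    for _ in range(nt):
        un = u.copy()
        # forward difference in time, backward (upwind) difference in space
        u[1:] = un[1:] - un[1:] * dt / dx * (un[1:] - un[:-1])

    print(u)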

The task was inspired by the methodology of code reviews in software engineering and, sure enough, most students found bugs or better ways to solve the problem. Even some students who thought their code was completely correct found a bug. They were throwing their arms in the air, and you could hear an occasional shriek from behind a computer screen. I can tell you that no student falls asleep in such a class!


Relevant Links
http://www.bu.edu/me/2012/03/07/professor-lorena-a-barba-wins-nsf-career-award/
http://www.bu.edu/me/2012/03/13/flipped-classroom-energizes-computational-fluid-dynamics-course/

Bio
Lorena A. Barba obtained her PhD in Aeronautics at the California Institute of Technology in 2004, and then joined the University of Bristol, England, as a Lecturer in Applied Mathematics. In 2008, she took a position at Boston University as Assistant Professor of Mechanical Engineering. Her research interests include computational fluid dynamics, especially immersed boundary methods and particle methods for fluid simulation; fundamental and applied aspects of fluid dynamics, especially flows dominated by vorticity dynamics; the fast multipole method and its applications; and scientific computing on GPU architectures. Prof. Barba is an Amelia Earhart Fellow of the Zonta Foundation (1999), a recipient of the EPSRC First Grant program (UK, 2007), an NVIDIA Academic Partner award recipient (2011), a recipient of the NSF Faculty Early CAREER award (2012), a CUDA Fellow (2012) and an international leader in computational science and engineering.

Contact Info
labarba (at) bu.edu