CUDA Spotlight: Lorena Barba, Boston University

by Calisa Cole, posted Apr 16 2012 at 04:44PM

GPU Computing for Scientific Discovery

This week's spotlight is on Lorena Barba, Assistant Professor at Boston University. Professor Barba is a computational scientist and a fluid dynamicist. She was recently presented with the prestigious National Science Foundation CAREER award and is a CUDA Fellow. This interview is part of the CUDA Spotlight Series.

NVIDIA: Lorena, how did you get involved in using GPUs for scientific computing?

When a visiting student's stay gave rise to his desire to pursue a PhD in computational science, I began to look for funding for him. As it turned out, we had recently had a presentation from Airbus about their urgent need to dramatically increase their capability for fluid mechanics simulations. They talked about "mega-class" simulations, a million times faster, within a decade, and they were eager to increase their collaboration with academia. With the help of a staff scientist at the HPC center at Bristol University, we approached Airbus with a proposal to investigate new algorithms for fluid simulation that would scale and achieve very high performance. The proposal also said we would "explore possible increases in simulation capability of novel architectures such as multi-core processors and heterogeneous computing," mentioning GPUs, ClearSpeed coprocessors and the IBM Cell as examples. This was in March 2007, when CUDA was just out, but through phone conversations with Airbus we knew they were interested in this avenue of research. I got the funding from Airbus, and they also gave us access to an experimental cluster stocked with samples of the latest hardware, including some GPUs.

NVIDIA: Why is GPU computing so compelling?

One reason I find GPUs so interesting is that they make you take a big plunge into parallelism. A GPU chip has hundreds of processors that work in parallel.
It is built to process many objects in exactly the same way, as required when rendering video. These fundamental architectural features make GPUs ideal for computations with a high degree of data parallelism. We all know that parallel programming is hard, but the crux is that parallelism is the only avenue for increasing computing performance for the foreseeable future. So if most code will have to be parallel, why not take the plunge and exploit the most parallel hardware around? That is the GPU right now, and it will be for some time to come.

GPUs are without a doubt a disruptive technology in the world of high-performance computing. But they should no longer be talked about as "novel hardware"; they are now prominent in the supercomputer centers of the world and are probably the best candidate for reaching exascale computing. As a telling sign, a recent InsideHPC article on the K computer (currently #1 in the world) says that it is "unique in what it doesn't use: accelerators." So now not using GPUs is called "unique"!

NVIDIA: We hear a lot about "exascale" as a milestone. Why is it important?

In the lifetime of, say, my graduate students, computing performance has multiplied a billion times. What most people don't realize is that society is completely dependent on maintaining this growth. In practically all sectors (science, government, health, financial services, education) there is an expectation that computing will keep getting faster and cheaper. A recent report by the National Academies analyzes this point in detail. If computing performance has been increasing at such a pace, what is special about reaching exascale? This milestone is different because there are formidable obstacles.
The clock speed of CPUs has hit power limits and, to quote the National Academies report, "there is no known alternative to parallel systems for sustaining growth in computing performance." Basically, every computer programmer will need to write parallel programs in the near future, and most have not been trained to do so.
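The data-parallel pattern described above, where one operation is applied independently to every element, can be sketched in plain Python. This is only a serial analogy: SAXPY (scaled vector addition) is a standard illustrative kernel chosen here, not an example from the interview.

```python
# Data parallelism: the same operation applied independently to many elements.
# Each iteration below reads and writes only its own elements, so on a GPU
# every iteration could run on its own thread simultaneously.
def saxpy(a, x, y):
    """Scaled vector addition, a*x + y, computed element by element."""
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [12.0, 24.0, 36.0]
```

Because no iteration depends on any other, the loop maps directly onto the hundreds of GPU cores mentioned above; that independence is what "a high degree of data parallelism" means in practice.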
NVIDIA: What is your vision for educating the next generation of computational scientists?

If educational methods that have been proven to work were more widely applied, we would be able to reduce attrition and increase success in engineering and computer science programs. The first recommendation in a recent report of the President's Council of Advisors on Science and Technology (PCAST) on STEM education is to "catalyze widespread adoption of empirically validated teaching practices." As an educator, I am thoroughly invested in this, and I dedicate consistent effort to improving student learning using the latest cognitive research and teaching methodologies.

In addition, I advocate for and contribute to open educational resources (posting lectures on iTunes U and YouTube, for example), and I have developed several extramural opportunities for students in computational science. For example, the Pan-American Advanced Studies Institute event "Scientific Computing in the Americas: the Challenge of Massive Parallelism," held in January 2011, hosted 68 students and participants, who learned from 14 world-leading experts. Several of these students participated in the "CUDA Research Fast Forward" presentation at the NVIDIA booth at SC11, which I also organized.

Educating the future generation of computational scientists is crucial for success in exploiting computer performance for scientific discovery. My vision for this endeavor is founded on promoting collaboration, advocating for and contributing to open science and open source, and using technology for open education.

NVIDIA: Tell us about your most recent initiative to improve learning in a computational course.

Who has ever learned computing through lectures? You really need a collaborative problem-solving environment, ideally with well-designed tasks given to you to solve. So I'm not lecturing at all in this class.
Instead, I design tasks that embed knowledge and get the students talking and collaborating. One example: students had previously written a Python code to solve a one-dimensional transport problem (a traveling shock wave); in the next class, I had students work in pairs, with one student reading his code out loud to the other while explaining its functionality. The students then had to discuss the things they had done differently and agree on the best solution. The task was inspired by the methodology of code reviews in software engineering and, sure enough, most students found bugs or better ways to solve the problem. Even some students who thought their code was completely correct found a bug. They were throwing their arms in the air, and you could hear an occasional shriek from behind a computer screen. I can tell you that no student can fall asleep in such a class!
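The interview does not give the exact problem the students coded. As a hedged sketch, here is a minimal pure-Python solver for one plausible version of such an exercise: 1-D linear convection (u_t + c·u_x = 0) advecting a step profile (the "traveling wave"), discretized with a first-order upwind scheme. The equation, scheme, and every parameter value are illustrative assumptions, not the class's actual assignment.

```python
# Sketch of a 1-D transport (linear convection) exercise: a step profile
# advected to the right at speed c, using a first-order upwind scheme.
def solve_convection(nx=41, nt=25, c=1.0, dx=0.05, dt=0.025):
    # Initial condition: u = 2 on the interval 0.5 <= x <= 1, u = 1 elsewhere
    u = [2.0 if 0.5 <= i * dx <= 1.0 else 1.0 for i in range(nx)]
    for _ in range(nt):
        un = u[:]  # copy the previous time level
        for i in range(1, nx):
            # Backward (upwind) difference in space, forward in time
            u[i] = un[i] - c * dt / dx * (un[i] - un[i - 1])
    return u

u = solve_convection()
```

With these values the Courant number c·dt/dx is 0.5, so the scheme is stable; after nt steps the step has traveled c·nt·dt = 0.625 units to the right, smeared somewhat by the scheme's numerical diffusion. Reading a solver like this aloud to a partner, line by line, is exactly the kind of code-review task described above.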