CUDA: Week in Review
Fri., Nov. 11, 2011, Issue #65


Welcome to the online newsletter for the worldwide CUDA, GPGPU and parallel programming ecosystem.
Dr. Ian Buck, NVIDIA
CUDA 4.1 Release Candidate
2X in 4 Weeks. Guaranteed.
Leading Apps Add GPU Acceleration
New CULA Sparse from EM Photonics
MSC Announces MSC Nastran 2012
Chinese Researchers Simulate H1N1 Virus
New Research from IDC
CUDA Course in Egypt from Applied Parallel Computing
GPU Computing Gems - Jade Edition
GPU@BU Workshop
Sign up to be a CUDA Registered Developer
Follow @GPUComputing on Twitter
CUDA Spotlight
CUDA Turns 5!
This week marks the fifth anniversary of the CUDA programming model. To celebrate the occasion, we caught up with Ian Buck, inventor of CUDA and NVIDIA’s General Manager for GPU Computing. Here is a preview of the interview:
Ian Buck
NVIDIA: Ian, how did you get hooked on computing with GPUs?
Ian: During my Ph.D. studies at Stanford, I was really excited by the trends in programmable graphics hardware, and by the opportunity to work on technology that could influence a wide breadth of sciences, whether molecular dynamics, mechanical engineering or turbulence research. I could see that GPUs were becoming powerful enough to help people working on the big questions in science.
NVIDIA: How did you first start using GPUs? Was it as a programmer or a gamer?
Ian: Let’s just say that Stanford had a great internet connection, and as a result, we had an awesome QuakeServer.
NVIDIA: What’s next for GPU computing?
Ian: GPU computing is going mainstream. And it’s not just about NVIDIA. Just look at the great work being done with tools, compilers and apps. Ad hoc GPU user groups are popping up all over the world. A gigantic ecosystem is coming to life. Today, any kind of meaningful simulation is done with GPUs. This focused activity will enable us to solve some of the fundamental problems in science.
NVIDIA: What have you learned along the way, since your days at Stanford working on Brook to your current role as GM for GPU Computing?
Ian: First of all, hire great people. Don’t compromise. The people who use CUDA in industry and academia are intensely driven. I look for the same drive in the people we hire to work on CUDA at NVIDIA -- intellectual curiosity combined with a passion to solve cool and interesting problems. Second, don’t underestimate the value of working on productivity features for developers. The CUDA technology roadmap is focused on making GPU computing easier. Anything we can do to make a developer’s life easier is always worth it.
- Read the full interview with Ian Buck

Editor’s Note: Next week, Ian will be at the SC11 conference in Seattle and would be happy to meet up with current and future CUDA users. If you can’t attend in person, be sure to check out our Facebook live stream from SC11 at:

(To suggest a CUDA Spotlight, email
CUDA Developer News

CUDA 4.1 Release Candidate

The CUDA Toolkit v4.1 release candidate (RC1) is now available to CUDA Registered Developers. New features include an open source LLVM-based compiler, 1000+ new image processing functions and a redesigned Visual Profiler with automated performance analysis.
- See:

2X in 4 Weeks. Guaranteed.

Double your application performance with directives and GPUs. Simply add a few directive "hints" to your source code, and the compiler automatically optimizes and accelerates it. To help you get started, NVIDIA and PGI are offering a free 30-day license for the directives-based PGI Accelerator compiler. On top of that, we guarantee your application will achieve at least a 2X speedup in four weeks or less.
- See:
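To illustrate the directives approach (a sketch only: the pragma below follows the PGI Accelerator style, and the exact spelling is compiler-specific), annotating a loop looks like this. A compiler without accelerator support simply ignores the unknown pragma, so the code remains valid, portable C that runs on the CPU:

```c
/* SAXPY (y = a*x + y) with an accelerator directive.
 * The pragma is a hint for a directives-based compiler such as
 * PGI Accelerator; other compilers ignore it, so the function
 * behaves identically with or without GPU offload. */
void saxpy(int n, float a, const float *x, float *y)
{
    #pragma acc region
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

With a directives-based compiler, the loop is offloaded to the GPU with no CUDA-specific rewriting; without one, behavior is unchanged.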

Leading Apps Add Multi-GPU Acceleration

Four top applications for materials science and biomolecular modeling - LAMMPS, GROMACS, GAMESS and QMCPACK - have added multi-GPU acceleration, reducing simulation times from days to hours.
- See:

New CULA Sparse from EM Photonics

EM Photonics released the general availability version of CULA Sparse, a collection of matrix solvers for sparse systems on NVIDIA GPUs. In addition, new functionality has been added to CULA R13.
- See:
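For context (a generic illustration, not CULA's API): sparse solvers of this kind typically revolve around a matrix-vector product over a compressed storage format such as CSR, which a GPU library parallelizes across matrix rows. A minimal CPU reference version of CSR SpMV:

```c
/* y = A*x for a matrix stored in CSR (compressed sparse row) form.
 * For row i, its nonzeros occupy positions row_ptr[i] through
 * row_ptr[i+1]-1 of val[], with column indices in col_idx[]. */
void csr_spmv(int n_rows, const int *row_ptr, const int *col_idx,
              const double *val, const double *x, double *y)
{
    for (int i = 0; i < n_rows; i++) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            sum += val[k] * x[col_idx[k]];
        y[i] = sum;
    }
}
```

Storing only the nonzeros is what makes large systems tractable; on a GPU, each row's dot product can be computed by an independent thread or thread group.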

MSC Software Announces MSC Nastran 2012

MSC Software announced MSC Nastran 2012, which will be GPU accelerated and available for download in late November.
- See:

Chinese Researchers Simulate H1N1 Virus

Chinese researchers achieved a breakthrough by creating the world’s first computer simulation of a whole H1N1 influenza virus at the atomic level. Researchers at the Institute of Process Engineering of Chinese Academy of Sciences (CAS-IPE) are using molecular-dynamics simulations as a "computational microscope" to peer into the structure of the virus. The work is performed on a supercomputer with 2000+ Tesla GPUs.
- See:

New Research from IDC

IDC published a report on "Heterogeneous Computing: A New Paradigm for the Exascale Era."
- See:

New CUDA Course in Egypt from Applied Parallel Computing

Applied Parallel Computing will hold an advanced three-day course on GPU computing and CUDA in Sharm El Sheikh, Egypt, Dec. 13-16.
- See:

GPU Computing Gems - Jade Edition

The second volume of Morgan Kaufmann’s GPU Computing Gems series offers insights, ideas and hands-on skills. The 30 chapters are written to be accessible to researchers from any industry.
- See:

GPU@BU Workshop

Boston University held a research symposium and tutorial this week on GPUs in scientific computing. The event was organized by Lorena Barba, Richard Brower, Martin Herbordt and Claudio Rebbi.
- See:
New on the NVIDIA Blog
Path to Exascale Computing, by Sumit Gupta
GPU Acceleration Made Easy, by Roy Kim
Replays of the Week
NEW: Each week we highlight a session from a GPU Technology Conference event. Here is our pick for this week: Large-Scale CCTV Face Recognition (GTC 2010) by Abbas Bigdeli and Ben Lever, NICTA
- See:
CUDA Jobs
NEW: Sportvision is seeking a Senior Software Engineer with a proven track record in computer vision-based products. Requirements: enthusiasm and the ability to help solve interesting and complex problems.
- See:
(To submit a job listing, email
GPU Meetups
The GPU Meetups offer a great way to learn about GPU computing and meet interesting people in a relaxed environment.

GPU Meetup of Seattle, Wed., Nov. 16, 7:00 pm (networking), 7:45 pm (program)
Special SC11 Meetup! Talks by NVIDIA, Microsoft, LexisNexis. Location: Amazon
GPU Meetup of New Mexico, Wed., Nov. 16, 7:00 pm
Topic: ROMIO and MPI-IO in Hybrid HPC
GPU Meetup of Brisbane, Nov. 17, 6:00 pm
GPU Meetup of New York City, Nov. 21, 6:00 pm

(Would you like to start a Meetup? Email
GTC U.S. 2012 (May 14-17)
Poster deadline: Dec. 8
CUDA Calendar

November 2011

- Supercomputing 2011 (SC11)
Nov. 12-18, Seattle, Washington
Learn about NVIDIA activities at SC11:
For more information on SC11, visit
Join the GPU Technology Theater from your desk via the SC11 Facebook live stream:

- GPU Programming for Defense/Intelligence — AccelerEyes (Webinar)
Nov. 15, 2011
Learn to accelerate common defense and intelligence algorithms using easy, powerful programming libraries, with Jacket for use with MATLAB and LibJacket for C/C++/Fortran.

- Heterogeneous Data-Parallel Programming (Webinar)
Nov. 16, 2011
Presenter: Prof. Satnam Singh, University of Birmingham, U.K.

- NEW: CUDA 4.1 RC1 (Webinar)
Nov. 22, 2011
Presented by NVIDIA

- CUDA Training (Basic and Advanced) — CAPS
Nov. 22-24, 2011, Rennes, France
Presented by CAPS
Email: training (at)

- CUDA 4-Day Training Course – Acceleware
Nov. 22-25, 2011, Frankfurt, Germany
Presented by Acceleware with Microsoft
Instructor: Michael Durocher

December 2011

- AGU (American Geophysical Union) Meeting
Dec. 5, 2011, San Francisco
Session on High-Res Modeling Using GPU and Many-Core Architectures

- Intro to GPU Programming Workshop - La Maison de la Simulation
Dec. 5-9, 2011, France

- NEW: Advanced CUDA 3-Day Training Course - Applied Parallel Computing
Dec. 13-16, Sharm El Sheikh, Egypt

- GTC Asia
Dec. 14-15, 2011, Beijing, China
Featuring the latest GPU computing breakthroughs, demos and presentations.

- LibJacket CUDA Library for Maximus — AccelerEyes (Webinar)
Dec. 15, 2011
Learn to integrate computations with visualizations in a CUDA-based app through simple visualization functions for plotting, image and volume rendering, and more.


February 2012

- NEW: CUDA Programming 1-Day Course - Delft University of Technology
Feb. 3, 2012, Netherlands

(To list an event, email:

CUDA Resources

Tesla MD SimCluster

– Want to test drive a GPU? Try the Tesla Molecular Dynamics SimCluster:


– CUDA 4.0:
– Parallel Nsight:



CUDA Registered Developer Program

– Sign up:


– List of CUDA-enabled GPUs:

CUDA on the Web

– See previous issues of CUDA: Week in Review:
– Follow CUDA & GPU Computing on Twitter:
– Network with other developers:
– Stay tuned to GPGPU news and events:
– Learn more about CUDA on CUDA Zone:
– Check out the NVIDIA Research page:

CUDA Recommended Reading

– Future of Computing Performance:
– Supercomputing for the Masses, Part 21:
– CUDA books:

CUDA Recommended Viewing

– The Third Pillar of Science:
– GTC 2010 presentations:
– SC10 presentations:
About CUDA
CUDA is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). NVIDIA provides a complete toolkit for programming on the CUDA architecture, supporting standard computing languages such as C, C++ and Fortran as well as APIs such as OpenCL and DirectCompute. Send comments and suggestions to:
You are receiving this email because you have previously expressed interest in NVIDIA products and technologies. Click here to opt in specifically to CUDA: Week in Review.
Feel free to forward this email to customers, partners and colleagues.

Copyright © 2011 NVIDIA Corporation. All rights reserved. 2701 San Tomas Expressway, Santa Clara, CA 95050.