CUDA: Week in Review
Tuesday, Oct. 12, 2010, Issue #38
Welcome to CUDA: Week in Review, an online news summary for the worldwide CUDA and GPU computing community.
  - NVIDIA CEO Jen-Hsun Huang discusses the new Huang Engineering Center at Stanford University. (1:50)
  - David Ragones of NVIDIA describes how the GPU-accelerated web will allow more immersive and interactive web sites. (1:46)
  - Rocker Rudy Sarzo of Quiet Riot talks about GPU computing, artistic creativity and working at the 'speed of thought'. (5:14)
  - Industrial Light & Magic demonstrates how the GPU is enabling very cool effects. (6:32)
The Portland Group (PGI) recently announced a partnership with NVIDIA. We interviewed PGI's Douglas Miles for more details.
NVIDIA: Douglas, tell us about PGI.
Douglas: PGI is based in Portland, Oregon. We create software tools that maximize performance and portability of applications across Linux, Windows and Mac OS X. Today, these tools include CUDA Fortran and the PGI Accelerator for NVIDIA GPUs.
NVIDIA: What did PGI announce at GTC 2010?
Douglas: We announced the "PGI CUDA C compiler," a new tool that will enable CUDA developers to deploy their applications on systems based on the industry-standard x86 architecture.
NVIDIA: Why is this significant?
Douglas: Today's application developers need flexibility. They want to be able to create innovative apps that leverage parallel computing and then deploy these apps on a wide range of target systems. The new PGI CUDA C compiler will enable developers to write parallel CUDA C applications that can run on x86 workstations, servers and clusters - with or without NVIDIA GPUs.
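For readers new to CUDA C, here is a minimal sketch of the kind of application Douglas describes. This is a generic vector-add example, not PGI code; today it requires nvcc and an NVIDIA GPU, while the announced PGI compiler would accept the same source and target multicore x86.

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    // Host-side data.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    // Device-side copies.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[100] = %f\n", h_c[100]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

The parallel kernel (`vecAdd`) and the sequential host code are both visible in one source file; a retargeting compiler maps the former onto CPU cores or SSE lanes instead of GPU threads.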
NVIDIA: Will the new PGI CUDA C compiler work with both AMD and Intel processors?
Douglas: Yes. PGI compilers have been optimized for performance on the latest AMD and Intel processors since 1997. All of that technology will be put to work optimizing both the sequential and massively parallel components of CUDA C applications.
NVIDIA: What is the timing for the rollout?
Douglas: We will demonstrate a prototype at SC '10 in November in New Orleans. We aim to have a first production release in Q2 2011.
For more info, see the PGI press release.
GTC 2010 Keynote Speaker Featured in New York Times
The work of Dr. Sebastian Thrun, who delivered the closing address at this year's GTC, was highlighted in the New York Times on Oct. 10 in an article titled "Google Cars Drive Themselves, in Traffic."
Plenoptics and the Future of Digital Photography
Abbas Jaffar Ali of T-Break Tech saw Adobe's plenoptics technology demoed at GTC 2010. He writes: "Plenoptics - remember that word as it might just be the future of digital photography. I had the opportunity to watch David Salesin and Dr. Todor Georgiev from Adobe, who explained what plenoptics is and the technology behind it... The reason these guys were at NVIDIA's GTC is because using GPUs to stitch images together is about five hundred times faster than CPUs."
MATLAB Adds GPU Support
Michael Feldman of HPCwire reports: "MATLAB users with a taste for GPU computing now have a perfect reason to move up to the latest version. Release R2010b adds native GPGPU support that allows users to harness NVIDIA graphics processors for engineering and scientific computing."
New Version of Thrust
NVIDIA released Thrust v1.3, an open-source template library for developing CUDA applications. Modeled after the C++ Standard Template Library (STL), Thrust brings a familiar abstraction layer to GPU computing. To get started, download Thrust v1.3 and then follow the online quick-start guide.
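As a brief illustration of that STL-style abstraction (a generic sketch, compiled with nvcc on a CUDA-capable system, not an excerpt from the Thrust documentation), sorting and reducing a million integers on the GPU looks like this:

```cuda
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <cstdlib>
#include <iostream>

int main(void)
{
    // Generate random data on the host, STL-style.
    thrust::host_vector<int> h(1 << 20);
    for (size_t i = 0; i < h.size(); ++i)
        h[i] = rand() % 100;

    // Copy to the GPU with a single assignment.
    thrust::device_vector<int> d = h;

    // sort and reduce execute on the device.
    thrust::sort(d.begin(), d.end());
    int sum = thrust::reduce(d.begin(), d.end(), 0);

    std::cout << "sum = " << sum << std::endl;
    return 0;
}
```

Containers, iterators and algorithms mirror their STL counterparts, so C++ programmers can move data to the GPU and run parallel algorithms without writing kernels by hand.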
Parallel Nsight and CUDA Toolkit Overview
NVIDIA has added new performance improvements and capabilities to Parallel Nsight and the CUDA Toolkit. These enhancements give developers more flexibility and power to easily create high-performance GPU-accelerated apps. For more info, watch the video overview by NVIDIA's Will Ramey and Stephen Jones.
Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) is seeking a postdoctoral research associate for the project "Massively Parallel Block Structured Adaptive Mesh Refinement on Hybrid Architectures for Subsurface Flow Applications." The ideal applicant will have a Ph.D. in applied mathematics, computer science or a related field; experience with the PETSc, Hypre and SAMRAI libraries; parallel programming experience with MPI; and experience with CUDA.
Wolfram Technology Conference
Oct. 13-15, Champaign, Illinois

MATLAB for Finance and Insurance
Oct. 15, Paris
Presented in English and French

NEW: GPU Computing Conference - Sprinx Systems and Faculty of Information
Oct. 15, Prague, Czech Republic

Microsoft Technical Computing across Client, Cluster and Cloud (TC3)
Oct. 20, London
Includes Visual Studio and Parallel Nsight briefings
Register with special invitation code: 437DB9

Cray Workshop on High Performance Computing - Cray and HLRS
Oct. 25, Stuttgart

NEW: Beginner CUDA Seminar - empulse GmbH
Oct. 26, Cologne, Germany

NEW: GPUs for Molecular Dynamics - GROMACS
Oct. 28-29, Espoo, Finland

NEW: Beginner CUDA Course - SagivTech
Oct. 31-Nov. 2, Ramat Gan, Israel

Supercomputing 2010
Nov. 13-19, New Orleans

NEW: Advanced GPU Supercomputing for HFT (High-Frequency Trading)
Nov. 15-17, New York (taught by Andrew Sheppard)

Training from CAPS
Nov. 23-25, Rennes, France

Dec. 16-18, Seoul

NEW: CUDA and Advanced Image Processing - SagivTech
Dec. 12-14, Ramat Gan, Israel

Scientific Computing in the Americas: The Challenge of Massive Parallelism
Jan. 3-14, 2011, Valparaiso, Chile

IEEE International Parallel & Distributed Processing Symposium
May 16-20, 2011, Anchorage

– CUDA Certification:
– GPU Computing Webinars:
– Training from EMPhotonics:

(To list an event, email: …)

GPU Technology Conference
– See presentations and keynotes from GTC 2010:
– See list of CUDA-enabled GPUs:
CUDA Downloads
– Download CUDA Toolkit 3.2:
– Download OpenCL v1.1 pre-release drivers and SDK code samples (log in or apply for an account)
CUDA Documentation
– Developer guides and docs:
CUDA and Academia
– Learn more at
CUDA on the Web
– Read previous issues of CUDA: Week in Review:
– Follow CUDA & GPU Computing on Twitter:
– Network with other developers:
– Stay tuned to GPGPU news and events:
– Learn more about CUDA on CUDA Zone:
– Read Kudos for CUDA:
– Read Supercomputing for the Masses, Part 20:
About CUDA
CUDA is NVIDIA’s parallel computing hardware architecture. NVIDIA provides a complete toolkit for programming on the CUDA architecture, supporting standard computing languages such as C, C++ and Fortran as well as APIs such as OpenCL and DirectCompute. Send comments and suggestions to:

Click here to opt in specifically to CUDA: Week in Review.

Copyright © 2010 NVIDIA Corporation. All rights reserved.
2701 San Tomas Expressway, Santa Clara, CA 95050.