CUDA: Week in Review
Fri., Feb. 4, 2011, Issue #47
Welcome to CUDA: Week in Review, an online news summary for the worldwide CUDA, GPU computing and parallel programming ecosystem.
GPUs in the Big Apple
This week’s spotlight is on Andrew "Shep" Sheppard. Shep is a financial consultant with extensive experience in quantitative financial analysis and trading-desk software. Most recently, he was chief technology officer and chief quantitative analyst at a New York multi-strategy hedge fund. A CUDA developer and published author (with technical publisher O’Reilly), Shep entered finance after conducting scientific research at Oxford University, Caltech’s Jet Propulsion Lab and the Berkeley Space Sciences Lab, where he worked on Earth and planetary remote-sensing probes.
NVIDIA: Shep, tell us about the current landscape in finance and computing.
Shep: Data in finance is exploding. So too is the velocity of the data, by which I mean the speed and direction in which it moves from place to place (for example, prices for similar assets quoted on multiple exchanges or trading venues being pulled into a ticker plant). And there is a pressing need to analyze this data to make it actionable. To meet this deluge of data and analysis, and the push to make everything run in something near real-time, the GPU is seeing wide application.
NVIDIA: Where is the momentum?
Shep: I am seeing the GPU gaining momentum in a number of key areas: pricing of complex assets, such as collateralized debt obligations (CDOs); moving risk calculations from overnight batch processes to real time; and backtesting of strategies for high-frequency trading, or HFT as it’s known.
NVIDIA: What sorts of problems in finance are GPUs a good match for?
Shep: A surprising proportion of financial problems are what’s known as embarrassingly parallel, in the sense that they are very easy to map to parallel technologies, such as multicore and GPU, with spectacular speedups (10X, 100X and beyond). I would turn the question around. In finance, you may be hard pressed to find problems that aren’t a good fit for HPC and GPU supercomputing!
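
To see why such problems map so cleanly to the GPU, consider a minimal, hypothetical CUDA sketch of Monte Carlo pricing for a European call option: each thread simulates its own independent batch of price paths, so the work spreads naturally across thousands of threads. The code below is illustrative only and is not taken from any of the projects mentioned in this interview.

#include <cstdio>
#include <cmath>
#include <curand_kernel.h>

// Illustrative only: Monte Carlo pricing of a European call option.
// Each thread simulates its own independent batch of price paths under
// geometric Brownian motion -- no communication between threads is needed
// until the final sum, which is done on the host.
__global__ void mcCallPayoff(float *partial, int pathsPerThread,
                             float S0, float K, float r, float sigma,
                             float T, unsigned long long seed)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    curandState state;
    curand_init(seed, tid, 0, &state);

    float sum = 0.0f;
    for (int i = 0; i < pathsPerThread; ++i) {
        float z  = curand_normal(&state);             // standard normal draw
        float ST = S0 * expf((r - 0.5f * sigma * sigma) * T
                             + sigma * sqrtf(T) * z); // terminal asset price
        sum += fmaxf(ST - K, 0.0f);                   // call payoff
    }
    partial[tid] = sum;   // per-thread partial sum, reduced on the host
}

int main()
{
    const int threads = 256, blocks = 256, pathsPerThread = 1000;
    const int n = threads * blocks;
    const float r = 0.05f, T = 1.0f;

    float *dPartial;
    float *hPartial = new float[n];
    cudaMalloc(&dPartial, n * sizeof(float));

    mcCallPayoff<<<blocks, threads>>>(dPartial, pathsPerThread,
                                      100.0f, 100.0f, r, 0.2f, T, 1234ULL);
    cudaMemcpy(hPartial, dPartial, n * sizeof(float), cudaMemcpyDeviceToHost);

    double total = 0.0;
    for (int i = 0; i < n; ++i) total += hPartial[i];
    double price = exp(-r * T) * total / ((double)n * pathsPerThread);
    printf("Estimated call price: %f\n", price);

    cudaFree(dPartial);
    delete[] hPartial;
    return 0;
}

With S0 = K = 100, r = 5%, sigma = 20% and T = 1 year, the estimate should land near the Black-Scholes value of roughly 10.45.
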
NVIDIA: What are you working on now?
Shep: I help people in finance make more money by applying supercomputing technologies. These projects cover a wide area, from real-time risk to HFT. I am also active in the HPC/GPU space generally. I’ve set up special interest groups and meetups in New York and Boston, and those have been wonderfully successful in a very short time, reflecting a surge in interest in HPC and GPU supercomputing. And I have a couple of books ("GPU Supercomputing in the Cloud" and "Programming GPUs") in the pipeline with the publisher O’Reilly (who is, in my opinion, the best technical publisher on the planet!).

  - Read Shep’s blog post about the meetup he organized in New York:
  - See the O’Reilly webinar here:

  (Would you like to be featured in the CUDA Spotlight? Email us at
New Paper on Parallelism from Stanford
Researchers in the Pervasive Parallelism Laboratory at Stanford published a paper titled "A Domain-Specific Approach to Heterogeneous Parallelism," which describes a framework for parallel computing and includes benchmarks of MATLAB code using GPUs with Jacket from AccelerEyes.
- See:

Accelerating Smoke, Fire, Liquid with CUDA
Double Negative (DNeg), the largest visual effects facility in London, uses an NVIDIA GPU-based render farm to accelerate components of its VFX (visual effects) pipeline, resulting in speedups of up to 20X. Key to DNeg’s VFX workflow is Squirt, a CUDA-optimized system which enables video professionals to simulate effects like smoke, fire and liquid. DNeg’s Dan Bailey comments: "...CUDA is great to work with. In the future, we’re looking at driving as much of our computation as we can onto the GPU..." DNeg’s work can be seen in films ranging from Inception to 2012.
- See blog post by NVIDIA’s Danny Shapiro:
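
Squirt’s internals are not public, but a rough, hypothetical sketch gives a feel for the kind of per-cell grid work such solvers push onto the GPU. The kernel below performs one semi-Lagrangian advection step on a 2D density field, a standard building block of grid-based smoke and fluid solvers; the names and structure here are illustrative assumptions, not DNeg’s code.

// Hypothetical sketch of one building block of a grid-based smoke/fluid solver:
// semi-Lagrangian advection of a density field on a 2D grid. Production solvers
// are far more elaborate; this only illustrates the per-cell parallelism that
// makes such solvers a good fit for CUDA.
__device__ float sampleBilinear(const float *f, int nx, int ny, float x, float y)
{
    // clamp to the grid, then bilinearly interpolate
    x = fminf(fmaxf(x, 0.0f), (float)(nx - 1));
    y = fminf(fmaxf(y, 0.0f), (float)(ny - 1));
    int x0 = (int)x, y0 = (int)y;
    int x1 = min(x0 + 1, nx - 1), y1 = min(y0 + 1, ny - 1);
    float tx = x - x0, ty = y - y0;
    float a = f[y0 * nx + x0] * (1 - tx) + f[y0 * nx + x1] * tx;
    float b = f[y1 * nx + x0] * (1 - tx) + f[y1 * nx + x1] * tx;
    return a * (1 - ty) + b * ty;
}

__global__ void advectDensity(float *dst, const float *src,
                              const float *velX, const float *velY,
                              int nx, int ny, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i >= nx || j >= ny) return;

    int idx = j * nx + i;
    // trace the velocity field backwards and sample the source density there
    float x = (float)i - dt * velX[idx];
    float y = (float)j - dt * velY[idx];
    dst[idx] = sampleBilinear(src, nx, ny, x, y);
}

A typical launch would assign one thread per grid cell, for example a 16x16 thread block with a grid of (nx+15)/16 by (ny+15)/16 blocks.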

Factory Production Line Inspection with GPUs
MVTec Software GmbH is developing a powerful machine vision solution for product inspection in factories. With NVIDIA GPUs, key functions are accelerated by up to 30X. "Product inspection machines are integral to factory automation today," says Dr. Wolfgang Eckstein of MVTec.
- See:
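
MVTec’s software is proprietary, so the following is a rough, hypothetical illustration only: a kernel that flags pixels of an inspection image that deviate from a reference image by more than a tolerance. The per-pixel independence of this kind of check is what lets inspection pipelines scale across thousands of GPU threads; the names and parameters are assumptions for illustration, not MVTec’s API.

// Hypothetical illustration (not MVTec code): compare an inspection image
// against a reference "golden" image and flag pixels whose deviation exceeds
// a tolerance. Each thread handles one pixel independently.
__global__ void flagDefects(const unsigned char *image,
                            const unsigned char *reference,
                            unsigned char *defectMask,
                            int numPixels, int tolerance)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= numPixels) return;
    int diff = abs((int)image[idx] - (int)reference[idx]);
    defectMask[idx] = (diff > tolerance) ? 255 : 0;
}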

GPU-Accelerated Image Processing on the Cloud
Directions Magazine interviewed Rui Gome Da Silva of Incogna about the company’s cloud-computing, GPU-based GIS (geographic information system) technology, which harnesses the parallel nature of Tesla GPUs. "Our image processing system takes tasks that were once very difficult, such as counting all oil well pads on the planet… and now makes them possible," says Rui Gome Da Silva.
- Read the Directions article:

Testbed Explores Role of GPU in Scientific Computing
The National Energy Research Scientific Computing Center (NERSC), in collaboration with Berkeley Lab, launched a GPGPU computing testbed called Dirac (named in honor of Paul Dirac, the 1933 Nobel laureate in physics). The system lets users explore the applicability of GPU computing to scientific simulations, both on individual GPUs and on GPU clusters.
- Read the Scientific Computing article:

NEW: Each week we highlight sessions from GTC 2010 and SC10. Here are our picks for this week:

    Industrial Seismic Imaging on GPUs (GTC 2010)
    Scott Morton – Hess Corporation (video – 43 mins.)

    GPU Cloud Computing 101: Getting Started (SC10)
    Dale Southard – NVIDIA (pdf)

Johannes Gutenberg University Mainz seeks postdocs and PhD students for several research associate positions. Requirements: a Master’s degree (or equivalent) in computer science, bioinformatics, mathematics, or a related subject; a background in algorithm design; and excellent programming skills (preferably C/C++). Research areas include the use of accelerator architectures (e.g., GPUs with CUDA/OpenCL), with a focus on bioinformatics applications. Location: Mainz, Germany.
- For info, contact:
- See:

February – March 2011

NEW: High-Performance Computing Advances in ANSYS 13.0
February 8, 2011, 1:00 pm Pacific, and February 10, 2011, 6:00 am Pacific
Note: Will showcase extended parallel scaling and new support for GPU acceleration in ANSYS Mechanical 13.0

NEW: SagivTech 3-Day OpenCL Course
February 13-15, 2011, Ramat Gan, Israel

Symposium on Principles and Practice of Parallel Programming - ACM
February 12-16, 2011, San Antonio, TX

Performance Benefits of NVIDIA GPUs for ANSYS Mechanical
February 17, 2011, noon-1:00 pm Pacific, Sunnyvale, California
Hosted at Ozen Engineering, 1210 E. Arques Ave #207, Sunnyvale, CA 94085
Note: Pizza will be served

NEW: HPC & GPU Supercomputing Group of Boston Meetup
March 1, 2011, Cambridge, Massachusetts
Hosted at Microsoft NERD Center, 1 Memorial Drive

NEW: HPC & GPU Supercomputing Group of New York Meetup
March 2, 2011, New York, New York
Hosted at Microsoft, 1290 Avenue of the Americas

NEW: Workshop on General Purpose Processing on GPUs (with ASPLOS XVI)
March 5, 2011, Newport Beach, California

GPU Computing Session, German Physical Society Conference
March 13-18, 2011, Dresden, Germany

ASIM Workshop 2011 - ASIM and Technische Universität München (TUM)
March 14-16, 2011, Leibniz Supercomputing Centre, Garching (near Munich), Germany
Theme: Trends in Computational Science & Engineering: Foundations of Modeling & Simulation

NEW: SagivTech 3-Day CUDA Course
March 27-29, 2011, Ramat Gan, Israel

April – July 2011

Workshop on High Performance Computational Biology - IEEE
May 16, 2011, Anchorage, Alaska
Note: Held with International Parallel & Distributed Processing Symposium

NEW: 25th International Conference on Supercomputing
June 1-4, 2011, Tucson, Arizona

Intelligent Vehicles Conference - IEEE
June 5-9, 2011, Baden-Baden, Germany

Internat'l. Supercomputing Conference (ISC)
June 19-23, 2011, Hamburg, Germany

Internat'l. Conference on Computer Systems and Applications
June 27-30, 2011, Sharm El-Sheikh, Egypt

Genetic and Evolutionary Computation Conference (GECCO)
July 12-16, 2011, Dublin, Ireland

NEW: World Congress in Computer Science, Computer Engineering, and Applied Computing
(Call for papers: March 10, 2011)
July 18-21, 2011, Las Vegas, Nevada

Symposium on Application Accelerators in High Performance Computing (SAAHPC 2011)
Call for papers: May 6, 2011
Event: July 19-21, 2011, Univ. of Tennessee, Knoxville, Tennessee

– CUDA Training from EMPhotonics:
– CUDA Training from Acceleware:
– CUDA Certification:
– GPU Computing Webinars:

(To list an event, email:

CUDA Registered Developer Program
– Sign up:
CUDA GPU Computing Forum
– Link to forum:
– List of CUDA-enabled GPUs:
CUDA Libraries Performance Report
– Download:
CUDA Downloads
– Download CUDA Toolkit 3.2:
– Download OpenCL v1.1 pre-release drivers and SDK code samples (log in or apply for an account):
– Download Parallel Nsight:
– Get developer guides and docs:
CUDA on the Web
– See previous issues of CUDA: Week in Review:
– Follow CUDA & GPU Computing on Twitter:
– Network with other developers:
– Stay tuned to GPGPU news and events:
– Learn more about CUDA on CUDA Zone:
– Check out the NVIDIA Research page:
CUDA Recommended Reading
– Kudos for CUDA:
– Supercomputing for the Masses, Part 20:
– CUDA books:
CUDA Recommended Viewing
– Third Pillar of Science:
– GTC 2010 presentations:
– SC10 presentations:
About CUDA
CUDA is NVIDIA’s parallel computing hardware architecture. NVIDIA provides a complete toolkit for programming on the CUDA architecture, supporting standard computing languages such as C, C++ and Fortran as well as APIs such as OpenCL and DirectCompute. Send comments and suggestions to:
Stay in Touch with NVIDIA
– Follow GPU Computing on Twitter
– Become a fan of NVIDIA on Facebook
– See list of NVIDIA online profiles

Click here to opt in specifically to CUDA: Week in Review.

Copyright © 2011 NVIDIA Corporation. All rights reserved. 2701 San Tomas Expressway, Santa Clara, CA 95050.