CUDA Week in Review Newsletter
Fri., March 7, 2014, Issue #108

Welcome to CUDA: Week In Review
News and resources for the worldwide GPU and parallel programming community.

The new Maxwell GPU architecture is NVIDIA's most power-efficient GPU yet. Maxwell improves instruction scheduling, reduces arithmetic latency, and adds more dedicated shared memory along with native shared memory atomic instructions. Learn all about it on the Parallel Forall Blog.
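To see why native shared memory atomics matter, consider a per-block histogram, a pattern that hammers shared memory with atomic updates. This is a hypothetical sketch (the kernel name, bin count, and data layout are illustrative, not from the newsletter); on pre-Maxwell hardware the inner `atomicAdd` on `__shared__` memory compiled to a lock/unlock sequence, whereas Maxwell executes it as a single instruction.

```cuda
// Illustrative sketch: per-block histogram using shared memory atomics.
// NUM_BINS and histogram_kernel are made-up names for this example.
#define NUM_BINS 64

__global__ void histogram_kernel(const unsigned char *data, int n,
                                 unsigned int *bins)
{
    __shared__ unsigned int local[NUM_BINS];

    // Zero this block's private histogram.
    for (int i = threadIdx.x; i < NUM_BINS; i += blockDim.x)
        local[i] = 0;
    __syncthreads();

    // Grid-stride loop; on Maxwell each shared-memory atomicAdd is a
    // native instruction rather than an emulated lock/unlock sequence.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        atomicAdd(&local[data[i] % NUM_BINS], 1u);
    __syncthreads();

    // Fold the block-local counts into the global histogram.
    for (int i = threadIdx.x; i < NUM_BINS; i += blockDim.x)
        atomicAdd(&bins[i], local[i]);
}
```

The extra dedicated shared memory on Maxwell also lets a kernel like this use larger bin counts per block before spilling to global memory.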


Keynotes Revealed
Beyond the deep-dive technical sessions, interactive tutorials and hands-on labs, we’ve assembled an all-star roster of keynotes to offer insight and ignite your imagination. Here’s the lineup:
  Jen-Hsun Huang, NVIDIA CEO and Co-Founder (opening keynote on March 25)
  Dirk Van Gelder and Danny Nahmias of Pixar (March 26)
  Adam Gazzaley, visionary neuroscientist, UC San Francisco (March 27)
Looking forward to seeing you soon in San Jose, Calif. (20% discount code: GM20CD)


Ian Lane: CUDA-Accelerated Speech Recognition
Our Spotlight is on Ian Lane of Carnegie Mellon University. Dr. Lane leads the speech and language processing group at CMU Silicon Valley. His team is developing methods to accelerate speech and language technologies using GPUs. Read the Spotlight.


CUDA 6 RC Now Available to All
CUDA 6 Release Candidate (RC) is now available to all, featuring Unified Memory, drop-in libraries, and multi-GPU scaling in the cuBLAS and cuFFT libraries. Download today and check out these upcoming webinars:
March 11: CUDA 6: Performance Review
March 13: CUDA 6: Drop-in Performance Optimized Libraries
March 18: CUDA 6: Unified Memory
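The headline CUDA 6 feature, Unified Memory, gives host and device a single pointer and removes the explicit `cudaMemcpy` calls. A minimal sketch, assuming the CUDA 6 RC toolkit and a supported GPU (the kernel name `scale` is invented for illustration):

```cuda
#include <cstdio>

// Trivial kernel: multiply each element by a.
__global__ void scale(float *x, int n, float a)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main()
{
    const int n = 1024;
    float *x;

    // One allocation, visible to both CPU and GPU; no cudaMemcpy needed.
    cudaMallocManaged(&x, n * sizeof(float));
    for (int i = 0; i < n; ++i) x[i] = 1.0f;

    scale<<<(n + 255) / 256, 256>>>(x, n, 2.0f);

    // Synchronize before the host touches managed memory again.
    cudaDeviceSynchronize();

    printf("x[0] = %.1f\n", x[0]);
    cudaFree(x);
    return 0;
}
```

Note that `cudaDeviceSynchronize()` before the host read is required: with Unified Memory the runtime, not the programmer, migrates pages, but kernel completion must still be observed explicitly.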


March 25: Silicon Valley (Co-located with GTC14. All GPU Meetup members are welcome)
Special guest speaker: Dr. JoAnn Kuchera-Morin, University of California, Santa Barbara


Parallel Forall (subscribe to the RSS feed)
7 Powerful New Features in OpenACC 2.0, J. Larkin
5 Things You Should Know About the New Maxwell GPU Architecture, M. Harris
CUDACasts 17: Unstructured Data Lifetimes in OpenACC 2.0, M. Ebersole
NVIDIA (subscribe to the RSS feed)
Hollywood Comes to Silicon Valley for GPU Tech Conference, G. Estes
CUDA by the Numbers: 270+ Apps and Counting, G. Millington
Sochi Winter Olympics Broadcast, M. Steele



Excited about #Nvidias #Maxwell architecture! More L2 cache and faster atomics will be good for MapD. - @ToddMostak

If you’re a superoptimizer, the #CUDA 6 RC disassembler can now pretty print a register/predicate "lifetime" diagram: - @pixelio

For daily updates, follow @gpucomputing.


GPU Technology Conference (GTC 2014)
  March 24-27, 2014, San Jose, Calif.
500 sessions | Hands-on developer labs & tutorials
Meet with luminaries, technologists and peers from 50+ countries

4-Day CUDA Course (Acceleware)
  May 6-9, 2014, Calgary, AB, Canada

IEEE Int’l Parallel & Distributed Processing Symposium
  May 19-23, 2014, Phoenix, Arizona

PRACE Scientific and Industrial Conference
  May 20-22, 2014, Barcelona, Spain

4-Day CUDA Course (Acceleware)
  June 24-27, 2014, San Jose, Calif.

Programming Heterogeneous Systems in Physics (Workshop)
  July 14-15, 2014, Jena, Germany

HPCS 2014
  July 21-25, 2014, Bologna, Italy

(To list an event, email:



Online Learning

Udacity | Coursera | APC Russia

CUDA Consulting

Training, programming and project development services are available from CUDA consultants around the world. To be considered for inclusion on the list, email:

GPU Test Drive

Want to try Tesla K40 for free? Sign up here.


CUDACasts

Check out our new series of short videos about CUDA.

Tell Us Your CUDA Story

If you are a CUDA developer, tell us how you are using CUDA.

GPU-Accelerated Apps

See updated list of 270+ GPU-accelerated applications.

GPU Meetups

Learn about Meetups in your city, or start one up.

CUDA Documentation

The CUDA documentation site includes release notes, programming guides, manuals and code samples.

NVIDIA Developer Forums

Join us on the NVIDIA DevTalk forums to share your experience and learn from other developers. You can also ask questions on Stack Overflow, using the 'cuda' tag.

GPU Computing on Twitter

For daily updates about GPU computing and parallel programming, follow @gpucomputing on Twitter.



CUDA on the Web

CUDA Spotlights
CUDA Newsletters
GPU Test Drive

NVIDIA Newsletters

Sign up for NVIDIA Newsletters, including Media & Entertainment, GeForce, NVIDIA GRID VDI and Shield.

Please fill out our reader survey so we can improve in 2014. Thanks!


CUDA® is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). NVIDIA provides a complete toolkit for programming on the CUDA architecture, supporting standard computing languages such as C, C++ and Fortran. Send comments to
Copyright © 2014 NVIDIA Corporation. All rights reserved. 2701 San Tomas Expressway, Santa Clara, CA 95050.