New CULA Linear Algebra Library From EM Photonics Brings GPU Computing To Millions Of Developers
CUDA-Optimized Implementation of Industry Standard LAPACK Library Released
FOR IMMEDIATE RELEASE:
For further information, contact:
SANTA CLARA, Calif.—Aug. 17, 2009—EM Photonics today released a beta version of CULA, an implementation of the industry-standard LAPACK linear algebra library designed and optimized for NVIDIA’s massively parallel CUDA™-enabled graphics processing units (GPUs).
The millions of developers who rely on LAPACK routines for solving problems ranging from computational physics and structural mechanics to electronic design automation can now get up to a 10X performance boost over a single quad-core CPU¹ by using NVIDIA® Tesla™ GPUs in their workstations or datacenters.
“One promising evolutionary path of high-performance computing architectures is a hybrid system consisting of multi-core CPUs and many-core GPUs,” said Professor Satoshi Matsuoka of the Tokyo Institute of Technology. “LAPACK is key for many scientific applications, so a CUDA-optimized implementation will significantly broaden the appeal of hybrid systems in science and engineering, giving them a strong competitive edge over competing architectures.”
“We began a partnership with NASA Ames Research Center to create GPU-accelerated linear algebra libraries in 2007,” said Eric Kelmelis, CEO of EM Photonics. “As an offshoot of this project and through a partnership with NVIDIA, EM Photonics is releasing CULA and allowing developers to experience the computational performance of a supercomputer right at their desk.”
EM Photonics’ CULAtools is a product family comprising CULA Basic, Premium, and Commercial. The CULA library is a GPU-accelerated implementation of the most popular LAPACK routines. LAPACK is a collection of commonly used linear algebra functions relied on by millions of developers in the scientific and engineering community. The problems they tackle can often be approximated by linear models and can therefore be solved using linear algebra routines. CULA exploits the massively parallel CUDA architecture of NVIDIA’s GPUs to accelerate many of these common routines.
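To illustrate the kind of LAPACK routine described above (CULA’s own API is not shown in this release), the sketch below solves a dense linear system Ax = b with the GESV driver, one of LAPACK’s most widely used routines. NumPy is used here purely as a stand-in: its `linalg.solve` dispatches to LAPACK’s `dgesv` on the CPU, whereas CULA’s aim is to run such routines on the GPU.

```python
# Illustration only: NumPy's linalg.solve wraps LAPACK's dgesv driver,
# which solves a dense linear system A x = b via LU factorization with
# partial pivoting. This is the class of routine CULA accelerates on GPUs;
# the NumPy call below is NOT CULA's actual API.
import numpy as np

# A small, well-conditioned 2x2 system A x = b
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)   # calls LAPACK's dgesv under the hood
print(x)                    # [2. 3.]
```

Because many scientific and engineering problems reduce to systems of this form (only much larger), accelerating this one routine class already covers a wide range of applications.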
“Our customer base has been anticipating the release of a linear algebra library similar to LAPACK. This fundamental math library brings the power of GPU computing to a much broader developer base in the scientific computing community,” said Andy Keane, general manager of the Tesla business unit at NVIDIA. “CULA forms yet another key branch in our rapidly increasing ecosystem of CUDA libraries, which now includes FFT, BLAS, image processing, computer vision, ray tracing, rendering, molecular dynamics, and more.”
A full production release of CULA is scheduled for NVIDIA’s GPU Technology Conference, being held from September 30th to October 2nd at the Fairmont Hotel in San Jose, California. Anyone interested in downloading the beta preview of CULA Basic can register at www.culatools.com.
About NVIDIA
NVIDIA awakened the world to the power of computer graphics when it invented the graphics processing unit (GPU) in 1999. Since then, it has consistently set new standards in visual computing with breathtaking, interactive graphics available on devices ranging from smartphones to notebooks to workstations. NVIDIA’s expertise in programmable GPUs has led to breakthroughs in parallel processing which make supercomputing inexpensive and widely accessible. Fortune magazine has ranked NVIDIA #1 in innovation in the semiconductor industry for two years in a row. For more information, see www.nvidia.com.
¹ Performance comparison based on a single NVIDIA Tesla C1060 card vs. a quad-core Intel Core i7 (Nehalem) CPU running Intel’s Math Kernel Library (MKL).
Certain statements in this press release including, but not limited to, statements as to: the benefits, features, impact, performance and capabilities of NVIDIA Tesla GPUs and CUDA architecture and their effect on LAPACK and CULA are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: development of more efficient or faster technology; design, manufacturing or software defects; the impact of technological development and competition; changes in consumer preferences and demands; customer adoption of different standards or our competitors’ products; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the reports NVIDIA files with the Securities and Exchange Commission including its Form 10-Q for the fiscal period ended April 26, 2009. Copies of reports filed with the SEC are posted on NVIDIA’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.
# # #
© 2009 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, CUDA and Tesla, are trademarks or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability, and specifications are subject to change without notice.
Note to editors: If you are interested in viewing additional information on NVIDIA, please visit the NVIDIA Press Room at http://www.nvidia.com/page/press_room.html