16 NANOMETER FINFET FOR UNPRECEDENTED ENERGY EFFICIENCY
With 150 billion transistors built on bleeding-edge 16 nanometer FinFET fabrication technology, the Pascal GPU is the world's largest FinFET chip ever built. It's engineered to deliver the fastest performance and best energy efficiency for workloads with near-infinite computing needs.
AN EXPONENTIAL LEAP IN PERFORMANCE
Pascal is the most powerful compute architecture ever built inside a GPU. It transforms a computer into a supercomputer that delivers unprecedented performance, including over 5 teraflops of double-precision performance for HPC workloads. For deep learning, a Pascal-powered system delivers a more than 12X leap in neural network training, reducing training time from weeks to hours, and a 7X increase in deep learning inference throughput compared to current-generation GPU architectures.
NEW ARTIFICIAL INTELLIGENCE (AI) ALGORITHMS
New half-precision, 16-bit floating point instructions deliver over 21 teraflops for unprecedented training performance. With 47 TOPS (tera-operations per second) of performance, new 8-bit integer instructions in Pascal allow AI algorithms to deliver real-time responsiveness for deep learning inference.
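To make the half-precision instruction support concrete, below is a minimal CUDA sketch (not from the source material) that uses the packed __half2 type and the __hfma2 fused multiply-add intrinsic from cuda_fp16.h, which map onto Pascal's paired FP16 arithmetic units. It assumes a Pascal-class GPU (compile with -arch=sm_60 or newer) and a recent CUDA toolkit in which the FP16 conversion intrinsics are host-callable; the kernel name and buffer sizes are purely illustrative.

```cuda
// Minimal sketch: packed half-precision AXPY (y = a*x + y) on a Pascal GPU.
// Compile with: nvcc -arch=sm_60 fp16_axpy.cu
#include <cuda_fp16.h>
#include <cstdio>
#include <vector>

// Each __half2 holds two FP16 values; __hfma2 computes a fused multiply-add
// on both lanes with a single instruction on Pascal-class hardware.
__global__ void fp16_axpy(const __half2 *x, __half2 *y, __half2 a, int n2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n2)
        y[i] = __hfma2(a, x[i], y[i]);  // y[i] = a * x[i] + y[i] per lane
}

int main()
{
    const int n2 = 1 << 20;  // 2^20 half2 elements = 2^21 FP16 values
    std::vector<__half2> hx(n2), hy(n2);
    for (int i = 0; i < n2; ++i) {
        hx[i] = __floats2half2_rn(1.0f, 2.0f);   // pack (low, high) lanes
        hy[i] = __floats2half2_rn(0.5f, 0.5f);
    }

    __half2 *dx, *dy;
    cudaMalloc(&dx, n2 * sizeof(__half2));
    cudaMalloc(&dy, n2 * sizeof(__half2));
    cudaMemcpy(dx, hx.data(), n2 * sizeof(__half2), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n2 * sizeof(__half2), cudaMemcpyHostToDevice);

    __half2 a = __floats2half2_rn(3.0f, 3.0f);
    fp16_axpy<<<(n2 + 255) / 256, 256>>>(dx, dy, a, n2);
    cudaDeviceSynchronize();

    cudaMemcpy(hy.data(), dy, n2 * sizeof(__half2), cudaMemcpyDeviceToHost);
    // Expected: low lane 3*1 + 0.5 = 3.5, high lane 3*2 + 0.5 = 6.5
    printf("y[0] = (%f, %f)\n", __half2float(hy[0].x), __half2float(hy[0].y));

    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```

Packing two FP16 values per __half2 is what lets each Pascal instruction retire two operations, which is how the architecture reaches its quoted half-precision throughput relative to single precision.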