# Transcript: NVIDIA GTC 2026 Keynote with Jensen Huang
URL: https://www.youtube.com/watch?v=jw_o0xr8MWU
Duration: 02:18:51
Date: 16 March 2026
## Summary/Context
> Jensen Huang’s GTC 2026 keynote centers on NVIDIA’s AI roadmap: Blackwell in production, Vera Rubin as the next system platform, major updates in networking and software, and expansion across robotics, automotive, telecom, and enterprise AI. The keynote emphasizes inference, reasoning, agents, and physical AI as the workloads driving NVIDIA’s evolution into a broader computing-systems and infrastructure company.
## Transcript


### [0:00 The Economics of Tokens: Scaling Intelligence via High-Throughput Tokenomics](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=0s)

[INTRO VIDEO]
This is how intelligence is made: a new kind of factory, a generator of tokens.
The building blocks of AI. Tokens have opened a new frontier, turning data into knowledge and drawing on all we have learned. Tokens are harnessing a new wave of clean energy and unlocking the secrets of the stars. In virtual worlds, they help robots learn. And in the physical world, perfect. Forging new paths and clearing the way for a bountiful harvest. In the moments that matter, tokens are already there. And in the miles between, they never stop. They work where human hands cannot, so we may all breathe easier and the smallest hearts beat stronger. Tokens are helping us break new ground.
On a scale never attempted. To empower the world. So we can reach Star Cloud One. Separation confirmed, well beyond it. Together we take the next great leap into a bright new future, built for all mankind. And here is where it all begins.
### [3:15 Welcome to GTC](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=195s)

***Jensen Huang***

 Welcome to GTC! I just want to remind you this is a tech conference. All these people lining up so early in the morning. All of you in here. It's great to see you, GTC. We're going to talk about technology. We're going to talk about platforms.

NVIDIA has three platforms that we mostly talk about. One of them is related to CUDA-X. Our systems are another platform. And now we have a new platform called AI factories. We're going to talk about all of them. And most importantly, we're going to talk about ecosystems.
But before I start, let me thank our pre-game show hosts. I thought they did a great job. Sarah Guo of Conviction. Alfred Lin of Sequoia Capital, NVIDIA's first venture capitalist. Gavin Baker, NVIDIA's first major institutional investor. These three people are deep in technology, deep in what's going on.
And of course, they have just a really broad range of the technology ecosystem. And then, of course, all of the VIPs who were hand-selected to join us today, an all-star team. I want to thank all of you for that.
I also want to thank all the companies that are here. NVIDIA, as you know, is a platform company. We have technology. We have our platforms. We have a rich ecosystem. And today, probably 100% of the $100 trillion of industry is here. 450 companies sponsored this event. I want to thank you. A thousand technical sessions, 2,000 speakers. This conference is going to cover every single layer of the five-layer cake of artificial intelligence, from land, power, and shell, to infrastructure, to chips, to the platforms, to the models.
And of course, the most important, and ultimately what's going to get this industry to take off, is all of the applications.
### [6:02 20 Years of CUDA](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=362s)

***Jensen Huang***

But it all began. It all began here. This is the 20th anniversary of CUDA. We've been working on CUDA for 20 years. For 20 years we've been dedicated to this architecture, this revolutionary invention: SIMT, single instruction, multiple threads. You write scalar code, and it gets spun off into a multithreaded application. Much, much easier to program than SIMD. We recently added tiles so that we could help people program tensor cores and the structures of mathematics that are so foundational to artificial intelligence today.
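To ground the SIMT idea, here is a minimal sketch (an illustration, not NVIDIA sample code) using Numba's CUDA bindings: the kernel body is ordinary scalar code, and the launch spins it out across thousands of threads, in contrast to SIMD, where the programmer vectorizes by hand.

```python
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)               # this thread's global index
    if i < x.size:                 # guard threads past the end of the array
        out[i] = a * x[i] + y[i]   # scalar code, executed by every thread

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](2.0, x, y, out)   # launch a grid of scalar threads
```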
Thousands of tools and compilers and frameworks and libraries in open source. There's a couple of hundred thousand public projects. CUDA literally is integrated into every single ecosystem.
This chart basically describes 100% of NVIDIA's strategy. You've been watching me talk about this slide from the very beginning. And ultimately, the single hardest thing to achieve is the thing on the bottom: installed base. It has taken us 20 years to build up the hundreds of millions of GPUs and computing systems around the world that run CUDA. We are in every cloud. We're in every computer company. We serve just about every single industry.
The installed base of CUDA is the reason why the flywheel is accelerating. The installed base is what attracts developers, who then create new algorithms that achieve breakthroughs, for example deep learning, and there are so many others. Those breakthroughs lead to entirely new markets, which build new ecosystems around them as other companies join, which creates a larger installed base. This flywheel is now accelerating. The number of downloads of NVIDIA libraries is accelerating incredibly; it's at a very large scale and growing faster than ever. This flywheel is what makes this computing platform able to sustain so many applications, so many new breakthroughs.
But most importantly, it also enables these infrastructures to have an extraordinarily useful life. And the reason for that is very obvious: there are so many applications that you can run on NVIDIA CUDA. We support every single phase of the AI lifecycle. We address every single data processing platform. We accelerate scientific principled solvers of all different kinds. And so the application reach is so great that once you install NVIDIA GPUs, their useful life is incredibly long. It is also one of the reasons why, for Ampere, which we shipped some six years ago, the pricing in the cloud is going up.
And so all of that is made possible fundamentally because the installed base is high, the flywheel is strong, the developer reach is great. And when all of that happens and we continuously update our software, the computing cost declines. Accelerated computing speeds up applications tremendously, and as we continue to nurture and update the software over its life, not only do you get the first-time pop, you get the continuous cost reduction of accelerated computing over time. And we're willing to nurture, willing to support, every single one of these GPUs in the world because they're all architecturally compatible.
We're willing to do so because the installed base is so large. If we release a new optimization, it benefits millions. This applies to everybody in the world. This combination of dynamics is what makes the NVIDIA architecture expand its reach, accelerating its growth. At the same time driving down computing costs, which ultimately encourages new growth. So CUDA is at the center of it.
But our journey to CUDA actually started 25 years ago.

### [10:23 GeForce](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=623s)

***Jensen Huang***

GeForce. I know how many of you grew up with GeForce. GeForce is NVIDIA's greatest marketing campaign. We attract future customers starting long before you could afford to pay for it yourself.
Your parents paid. Your parents paid for you to be NVIDIA customers. And every single year they paid up, year after year after year, until someday you became an amazing computer scientist and became a proper customer, a proper developer. But this is the house that GeForce made. 25 years ago, we started our journey, which led to CUDA. 25 years ago, we invented the programmable shader, a perfectly unobvious invention: to make an accelerator programmable. The world's first programmable accelerator, the pixel shader, 25 years ago. That led us to explore further and further.
Five years later came the invention of CUDA, one of the biggest investments that we made. We couldn't afford it at the time, and it consumed the vast majority of our company's profits, to take CUDA on the back of GeForce to every single computer. We dedicated ourselves to creating this platform because we felt so strongly about its potential. But ultimately it was the company's dedication to it, despite the hardships in the beginning, believing in it every single day, for 13 generations over 20 years, and we now have CUDA installed everywhere. The pixel shader led to, of course, the revolution of GeForce.
And then about ten years ago, what is it, eight years ago, we introduced RTX, a complete redesign of our architecture for the modern era of computer graphics. GeForce brought CUDA to the world. GeForce therefore enabled Alex Krizhevsky and Ilya Sutskever and Geoff Hinton, Andrew Ng, and so many others to discover that the GPU could be their friend in accelerating deep learning. It started the big bang of AI ten years ago. With RTX, we decided that we would fuse programmable shading with two new ideas: ray tracing, hardware ray tracing, which is incredibly hard to do, and AI, a new idea at the time.
Imagine, about ten years ago, we thought that AI would revolutionize computer graphics. Just as GeForce brought AI to the world, AI is now going to go back and revolutionize how computer graphics is done altogether. Well, today I'm going to show you something of the future. This is our next generation of graphics technology. We call it neural rendering: the fusion of 3D graphics and artificial intelligence.
### [13:50 DLSS 5](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=830s)

***Jensen Huang***

This is DLSS 5. Take a look at it.
[VIDEO - NO NARRATION]
Is that incredible? Computer graphics comes to life. Now, what did we do? We fused controllable 3D graphics, the ground truth of virtual worlds.
The structured data. Remember this word? The structured data of virtual worlds, of generated worlds. We combined 3D graphics, structured data, with generative AI, probabilistic computing. One of them is completely predictive, the other one probabilistic, yet highly realistic. We combined these two ideas: control through structured data, controlled perfectly, and yet generating at the same time. And as a result, the content is beautiful, amazing, as well as controllable. This concept of fusing structured information and generative AI will repeat itself in one industry after another. Structured data is the foundation of trustworthy AI.
Well. This is going to scare you a little bit. I'm going to flip the slide and don't gasp.

### [16:19 Structured Data is the Ground Truth of AI](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=979s)


***Jensen Huang***

So we're going to go through this schematic for the rest of the time. This is my best slide. Every time I ask the team, what's my best slide? Repeatedly, this was it. They said, don't do it, Jensen. Don't do it. I said, no. These seats are free, for some of you, so this is your price of admission. So this is structured data. You've heard of it: SQL, Spark, pandas, Velox. Some of these are really important, very large platforms.
Snowflake. Databricks. Amazon EMR. Azure Fabric. Google Cloud BigQuery. All of these platforms are processing data frames. These data frames are giant spreadsheets, and they hold all of life's information. This is the structured data, the ground truth of business. This is the ground truth of enterprise computing. Well, now we're going to have AI use structured data, and we'd better accelerate the living daylights out of it. It used to be okay; of course, we would accelerate structured data so that we could do more. We could do it more cheaply.
We could do it more frequently per day and keep the company running in a much more synchronized way. However, in the future, these data structures are going to be used by AI, and AI is going to be much, much faster than us. Future agents are going to use structured databases as well.

And then of course there's the unstructured data, the generative data. This represents the vast majority of the world: vector databases, unstructured data, PDFs, videos, speeches, all of the world's information. About 90% of what's generated every single year is unstructured data.
Until now, this data has been completely useless to the world. We read it, we put it into our file system, and that's it. Unfortunately, we can't query it. We can't search for it. It's hard to do that. And the reason for that is because there's no easy indexing of unstructured data. You have to understand its meaning, its purpose. And so now we have AI do that. Just as AI was able to solve multi-modality perception and understanding, you can use that same technology to go read a PDF, understand its meaning, and from that meaning embed it into a larger structure that we can search into, that we can query into.
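As a rough sketch of what that means in practice, the pattern is: embed each document into a vector, then answer queries by similarity search. The toy hash-based embed() below is a stand-in for a real embedding model, and the brute-force search is a stand-in for an accelerated vector store; all names here are illustrative.

```python
import numpy as np

DIM = 256

def embed(text: str) -> np.ndarray:
    """Toy embedding: hash words into a fixed-size unit vector (a stand-in
    for a real embedding model)."""
    v = np.zeros(DIM, dtype=np.float32)
    for word in text.lower().split():
        v[hash(word) % DIM] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

docs = [
    "Quarterly supply chain report for coffee beans",
    "GPU kernel optimization notes for attention layers",
    "Customer support transcript about a delayed order",
]
index = np.stack([embed(d) for d in docs])   # the "vector store"

query = embed("why was my order late")
scores = index @ query                       # cosine similarity on unit vectors
print(docs[int(np.argmax(scores))])          # best semantic match
```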
NVIDIA created two foundational libraries, just like we created RTX for 3D graphics: cuDF for DataFrames, structured data, and cuVS for vector stores, semantic data, unstructured data, AI data. These are going to be two of the most important platforms in the future. I'm super excited to see their adoption throughout that complicated network of the world's data processing systems. And the reason for that is because data processing has been around a long time, and therefore there are so many different companies and platforms and services. It has taken us a long time to integrate deeply into this ecosystem.
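For a flavor of the structured-data side, cuDF deliberately mirrors the pandas API, so typical dataframe code runs on the GPU largely unchanged. A small sketch, with a hypothetical file and hypothetical column names:

```python
import cudf  # RAPIDS GPU dataframe library; API mirrors pandas

# Hypothetical orders table loaded straight into GPU memory.
orders = cudf.read_csv("orders.csv")

# A typical order-to-cash style aggregation: revenue and order count
# per region per day, computed on the GPU.
summary = (
    orders[orders["status"] == "delivered"]
    .groupby(["region", "order_date"])
    .agg({"amount": "sum", "order_id": "count"})
    .reset_index()
)
print(summary.head())
```

An existing pandas pipeline can often be ported by swapping the import, or run unmodified via the cudf.pandas accelerator mode.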

### [19:45 NVIDIA’s Vertical Integration](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=1185s)

***Jensen Huang***


I'm super proud of the work that we're doing here. And today we're announcing several of them. IBM, the inventor of SQL, one of the most important domain-specific languages of all time, is accelerating watsonx.data with cuDF. Let's take a look at it.
[VIDEO NARRATION - JENSEN]

### [20:16 IBM Reinvents Data Processing With NVIDIA](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=1216s)

***Jensen Huang***


60 years ago, IBM introduced the System/360, the first modern platform for general-purpose computing, launching the computing era. Then SQL, a declarative language to query data without requiring the computer to be instructed step by step, and the data warehouse, each of them foundations of modern enterprise computing today. IBM and NVIDIA are reinventing data processing for the era of AI by accelerating IBM watsonx.data SQL engines with NVIDIA GPU computing libraries.
Data is the ground truth that gives AI context and meaning. AI needs rapid access to massive data sets, and today's CPU data processing systems can't keep up. Nestlé makes thousands of supply chain decisions every day. Their order-to-cash data mart aggregates every supply order and delivery event across global operations in 185 countries. On CPUs, Nestlé refreshed the data mart a few times a day. With accelerated watsonx.data running on NVIDIA GPUs, Nestlé can run the same workload five times faster at 83% lower cost. The next computing platform has arrived: accelerated computing for the era of AI.
[JENSEN]
NVIDIA accelerates data processing in the cloud. We also accelerate data processing on prem. As you know, Dell is the world's leading computer systems maker, and they are also one of the world's leading storage providers. They worked with us to create the Dell AI Data Platform, which integrates cuDF and cuVS into an accelerated data platform for the era of AI. And this is an example of what they did with NTT Data. Huge speedup.
And this is Google Cloud. As you know, we've been working with Google Cloud for a very long time.
We accelerate Google's Vertex AI. We now accelerate BigQuery, a really important framework and a really important platform. And this is an example of our work together with Snapchat, where we reduced their cost of computing by nearly 80%. When you accelerate data processing, when you accelerate computing, you get the benefit of speed, you get the benefit of scale, but most importantly, you also get the benefit of cost. And so all of those come together as one. Originally this was Moore's Law. Moore's Law was about performance doubling every couple of years. That's another way of saying, so long as the price remains about the same and the computer remains about the same,
you're getting twice the performance every couple of years, or you're reducing the cost of computing every single year. Well, Moore's Law has run out of steam. We need a new approach. Accelerated computing allows us to take these giant leaps forward. And as you will see later, because we continue to optimize the algorithms, and NVIDIA is an algorithm company, and because our reach is so large and our installed base is so large, we can reduce the computing cost, increasing the scale, increasing the speed, for everybody, continuously.
This is Google Cloud. You could see this pattern I just mentioned.
I just wanted to show you three versions of it. NVIDIA built the accelerated computing platform and has a bunch of libraries on top. I gave you three examples: RTX is one of them, cuDF is another, cuVS, and we'll show you a few more. These libraries sit on top of our platform, but ultimately we integrate into the world's cloud services, into the world's OEMs, and into other platforms that I'll show you. Together, we're able to reach the world. This pattern, NVIDIA, Google Cloud, Snapchat, will repeat over and over again, and it kind of looks like this. And so this is one example.
NVIDIA with Google Cloud: we accelerate Vertex AI. We accelerate BigQuery. I'm super proud of the work that we've done with JAX and XLA. We are incredible on PyTorch. We're the only accelerator in the world that's incredible on PyTorch and incredible on JAX and XLA. And the customers that we support, the Basetens, the CrowdStrikes, Puma, Salesforce: they're not just our customers but their customers, developers of ours that we've integrated NVIDIA technologies into, that we can then land in the clouds. Our relationship with cloud service providers is essentially us bringing customers to them. We integrate our libraries, we accelerate workloads, and we land those customers in the clouds.
And so as you could see, most of our cloud service providers love working with us, and they're always asking us to land the next customer on their cloud. And I just want to let you know, there are a lot of customers. We're going to accelerate everybody. And so there'll be lots and lots of customers. We'll be able to land in your cloud. Just be patient with us.

And so this is Google Cloud. This is AWS. We've been working with AWS a long time. And one of the things I'm super excited about this year is we're going to bring OpenAI to AWS.
And so it's going to drive enormous consumption of cloud computing at AWS. It's going to expand the reach and expand the compute of OpenAI. And as you know, they are completely compute constrained. And so at AWS, we accelerate EMR, we accelerate SageMaker, we accelerate Bedrock. NVIDIA is integrated really deeply into AWS. They were our first cloud partner.

Microsoft Azure. NVIDIA's A100 supercomputer, the first one we built for NVIDIA, was the first one we installed at Azure, and that led to the big, successful partnership with OpenAI. But we've been working with Azure for quite a long time.
We accelerate Azure Cloud, and now their AI Foundry, which we partner deeply with. We accelerate Bing search. We work with them on Azure regions. This is one of the areas that is incredibly important as we continue to expand AI throughout the world. One of the capabilities that we offer is confidential computing. In confidential computing, you want to make sure that even the operator cannot see your data, that even the operator cannot touch or see your model. NVIDIA's GPUs were the first ones in the world to do that. Azure is now able to support confidential computing and protected deployment of these very valuable OpenAI models.
And Anthropic models, throughout clouds and different regions, all because of our confidential computing. Confidential computing is super important. And here's an example of the different customers that we work with. Synopsys, a great partner of ours: we're accelerating all of their EDA and CAE workflows, and then we landed them at Microsoft Azure.

We were Oracle's first AI customer. Most people would have thought we were their first supplier. We were their first supplier too, but we were also their first AI customer. I'm quite proud of the fact that I explained AI clouds to Oracle for the first time, and we were their first customer.
Since then, they've really taken off. We've landed a whole bunch of our partners there: Cohere, Fireworks, and of course, very famously, OpenAI.

A great partnership with CoreWeave. They're the world's first AI-native cloud, a company that was built with only one singular purpose: to provision and host GPUs as the era of accelerated computing showed up, and to host AI clouds. They've got some fantastic customers, and they're growing incredibly.

One of the platforms that I'm quite excited about is Palantir and Dell. The three of our companies have made it possible to stand up a brand new type of AI platform: the Palantir Ontology platform, an AI platform.
And we can stand up these platforms in any country, in any air-gapped region, completely on prem, completely on site, completely in the field. AI can be deployed literally everywhere. Without our confidential computing capability, without our ability to build the end-to-end system, as well as offer the entire accelerated computing and AI stack, from data processing, whether it's vectors or structured data, all the way to AI, it wouldn't have been possible. I wanted to show you these examples. This is our special working relationship with the world's cloud service providers. All of them are here, and I get the benefit of seeing them during the booth tour, and it's just so incredibly exciting. I just want to thank all of you for the hard work.
What NVIDIA has done is this, and you're going to see this theme over and over again.
NVIDIA is vertically integrated: the world's first vertically integrated but horizontally open company.
And the reason that's necessary is very simple. Accelerated computing is not a chip problem. Accelerated computing is not a systems problem. Accelerated computing has a missing word that we just never say anymore: application acceleration. If I could make a computer that runs everything faster, that would be called a CPU, but that's run out of steam. The only way for us to accelerate applications going forward, and continue to bring tremendous speedups and tremendous cost reductions, is through application-specific or domain-specific acceleration.
I dropped that phrase at the front, and therefore it just became “accelerated computing.” And that is the reason why NVIDIA has to go library after library, domain after domain, vertical after vertical.
We are a vertically integrated computing company. There is no other way.
We have to understand the applications. We have to understand the domain. We have to understand fundamentally the algorithms, and we have to figure out how to deploy the algorithm in whatever scenario it needs to be deployed, whether it's a data center, the cloud, on prem, at the edge, or in a robotic system.
All of those computing systems are different. And finally, the systems and chips: we are vertically integrated. What makes it incredibly powerful, and the reason why you saw all those slides, is that NVIDIA is horizontally open. We will work with and integrate NVIDIA's technology into whatever platform you would like us to integrate into. We offer you the software, we offer you libraries, we integrate with your technology, so that we can bring accelerated computing to everybody in the world.

Well, this GTC is really a great demonstration of that. Most of the time you'll see me talk about these verticals, and I'll use some examples. In every single case, whether it's automotive or, by the way, financial services: the largest percentage of attendees at this GTC is from the financial services industry.
I know. I'm hoping it's developers, not traders, guys. Here's one thing I wanted to say. The audience represents NVIDIA's ecosystem, upstream of our supply chain and downstream of our supply chain. And we think about our supply chain upstream and downstream. It's just so exciting that our entire supply chain this last year, irrespective of whether you're a 50-year-old company, and we have 70-year-old companies, we have a 150-year-old company, is now part of the NVIDIA supply chain and partnering with us either upstream or downstream. And last year you had your record year, did you not?
Congratulations. We're on to something here. This is the beginning of something very, very big. And so if you look at accelerated computing, we've now set up the computing platform. But in order for us to activate those computing platforms, we need domain-specific libraries that solve very important problems in each one of the verticals that we address. You see us addressing every single one of these. Autonomous vehicles: our reach, our breadth, our impact, incredible. We have a track on that. Financial services, I just mentioned: algorithmic trading is moving on from classical machine learning with human feature engineering, called quant.
The quants did that too. Now supercomputers study massive amounts of data, discovering insight and discovering patterns by themselves. And so this is going through its deep learning and its transformer moment. Healthcare is going through its ChatGPT moment. There's some really exciting work there; we have a great keynote track here, Kimberly, a great keynote track. For healthcare, we're talking about AI physics or AI biology for drug discovery, AI agents for customer service and support of diagnosis, and of course physical AI, robotic systems. All these different vectors of AI have different platforms that NVIDIA provides. Industrial:
we are completely resetting and starting the largest buildout in human history across most of the world's industries. Companies building AI factories, building chip plants, building computer plants are represented here today. Media and entertainment. Gaming, of course. Real-time AI platforms for translation and broadcast support and live games and live video; an enormous amount of it will be augmented with AI. We have a platform called Holoscan. Quantum: there are 35 different companies here building with us the next generation of quantum-GPU hybrid systems. Retail and CPG, using NVIDIA for supply chain,
creating agentic shopping systems, AI agents for customer support. A lot of work being done here: a $35 trillion industry. Robotics, a $50 trillion industry in manufacturing. NVIDIA has been working in this area for a decade now, building the three fundamental computers necessary to build robotic systems. We are integrated with, working with, literally every single company that we know of building robots. We have 110 robots here at the show. And then telecommunications, about as large as the world's IT industry, about $2 trillion. We see, of course, base stations everywhere. It's one of the world's infrastructures. It was the infrastructure of the last generation of computing.
That infrastructure is going to get completely reinvented. And the reason for that is very simple. That base station does one thing: it's a base station. It's going to be an AI infrastructure platform in the future. AI will run at the edge. And so there's lots of great discussion there. Our platform there is called Aerial, our AI-RAN. Big partnership with Nokia, big partnership with T-Mobile, and many others. At the core of our business is everything that I just mentioned, computing platforms, but very importantly, our CUDA libraries. Our CUDA libraries are the algorithms, the algorithms that NVIDIA invents.
We are an algorithm company. That's what makes us special. That's what makes it possible for me to go into every single one of these industries, imagine the future, and have the world's best computer scientists describe and solve problems, refactor them, re-express them, and turn them into a library. We have so many. At this show, we are announcing 100 libraries, 70 libraries, maybe 40 models, and that's just at the show. We're updating these all the time. The libraries are the crown jewels of our company.
It is what makes it possible for that platform, the computing platform, to be activated in service of solving a problem, making an impact. One of the biggest, one of the most important libraries that we ever created is cuDNN, CUDA Deep Neural Networks. It completely revolutionized artificial intelligence and caused the big bang of modern AI.

### [38:26 CUDA-X](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=2306s)

***Jensen Huang***

 Let me show you a short video about CUDA-X.

[VIDEO NARRATION]

### [38:31 NVIDIA Foundational Technology Montage](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=2311s)

***Jensen Huang***


20 years ago we built CUDA, a single architecture for accelerated computing. Today we've reinvented computing: a thousand CUDA-X libraries have helped developers make breakthroughs in every field of science and engineering.
cuOpt for decision optimization. cuLitho for computational lithography. cuDSS for direct sparse solvers. cuEquivariance for geometry-aware neural networks. Aerial for AI-RAN. Warp for differentiable physics. Parabricks for genomics. At their foundation are algorithms. And they are beautiful.
Everything you saw was a simulation. Some of it was principled solvers, fundamental physics solvers. Some of it was AI surrogates, AI physical models, and some of it was physical AI, robotics models. Everything was simulated. Nothing was animated. Nothing was articulated. Everything was completely simulated. That is fundamentally what NVIDIA does. It is through connecting our understanding of the algorithms with our computing platforms that we're able to unlock these opportunities.
NVIDIA is a vertically integrated computing company with open horizontal integration with the world. So that's CUDA-X. Well, just now you saw a whole bunch of companies. You saw Walmart, and you know, there's L'Oréal, and incredible established companies, JPMorgan and Roche. These are companies that have defined society to today. Toyota is here. These are some of the largest companies in the world.

### [43:15 AI Natives](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=2595s)

***Jensen Huang***


It is also true that there's a whole bunch of companies you've never heard of. We call them AI natives. A whole bunch of small companies. The list is gigantic.
This is just a little tiny bit of it. And I couldn't decide whether to show you more or show you less, so I made it so that you couldn't see any, and nobody's feelings are hurt. However, inside this list are a bunch of brand new companies. You might have heard of a couple of them, OpenAI, Anthropic, but there's a whole bunch of others, and they serve different verticals. Something happened in the last two years, particularly this last year. We've been working with the AI natives for a long time, and this last year it just skyrocketed.
I'll explain to you why it happened. This industry has skyrocketed: $150 billion of venture investment into startups, the largest in human history. This is also the first time that the scale of the investments went from millions of dollars, tens of millions of dollars, to hundreds of millions of dollars and billions of dollars. And the reason for that is that this is the first time in history that every single one of these companies needs compute, and lots and lots of it. They need tokens, lots and lots of them. They're either going to build and generate tokens themselves, or they're going to integrate and
add value to tokens created by Anthropic and OpenAI and others. And so this industry is different in so many ways. But one thing is very clear: the impact that they're making, the incredible value that they're delivering, is already quite tangible. AI natives, all because we reinvented computing. Just like during the PC revolution, a whole bunch of new companies were created; just as during the internet revolution, a whole bunch of companies were created; and mobile cloud, a whole bunch of companies were created. Each one of them had their own standards.
And we're talking about one of the major platform shifts that just happened. Incredibly important. And this generation, we also have our own large number of very, very special companies. We reinvented computing; it stands to reason there's going to be a whole new crop of really important, consequential companies for the future of the world, the Googles, the Amazons, the Metas, consequential companies that came as a result of the last computing platform shift. We are now at the beginning of a new platform shift. But what happened in the last couple of years? Well, we've been watching; as you know, we've been working on deep learning and working on AI.

### [46:01 Inference Inflection Arrives](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=2761s)


***Jensen Huang***

The big bang of modern AI: we were right there on the spot, and we've been advancing this field for quite some time. But why the last two years? What happened in the last two years? Well, three things.
ChatGPT, of course, started the generative AI era. It's able to not just perceive and understand; it's also able to translate and generate unique content. I showed you the fusion of generative AI with computer graphics, which brought computer graphics to life. Everybody in the world should be using ChatGPT. I know I use it every single morning.
Used it plenty this morning. And so ChatGPT was the first: the generative AI era. Then the second. By the way, consider generative computing versus the way we used to do computing. Generative AI is a capability of software, but it has profoundly changed how computing is done. Computing used to be retrieval based. Now it's generative. Keep that thought in mind when I talk about certain things, and you'll realize why everything that we do is going to change how computers are architected, how computers are provided, how computers are going to be built out, and what is the meaning of computing altogether.
Generative AI: end of 2022, into 2023. The second: reasoning AI, o1, which then took off with o3. Reasoning allowed it to reflect, allows it to think to itself, allows it to plan, to break down problems, to decompose a problem it couldn't understand into steps or parts that it could understand. It could ground itself on research. o1 made generative AI trustworthy and grounded on truth, and that caused ChatGPT to simply take off. That was a very, very big moment. The amount of input tokens necessary, and the amount of output tokens generated in order to reason: the model was a little bit larger.
Of course, you could have much larger models; o1 was a little bit larger, not much larger, but its input token usage for context and its output tokens for thinking increased the amount of computation tremendously. Then came Claude Code, the first agentic model. It was able to read files, write code, compile it, test it, evaluate it, go back, and iterate on it. Claude Code has revolutionized software engineering. As all of you know, 100% of NVIDIA is using a combination of, oftentimes all three of, Claude Code, Codex, and Cursor, all over NVIDIA.
There's not one software engineer today who is not assisted by one or many AI agents helping them code. Claude Code completely embodies the new inflection: for the first time, you don't ask the AI what, where, when, how. You ask it: create, do, build. You ask it to use tools, take your context, read files. It's able to break down a problem, reason about it, reflect on it. It's able to solve problems and actually perform tasks. An AI that was able to perceive became an AI that could generate. An AI that could generate became an AI that could reason. An AI that could reason has now become an AI that can actually do work, very productive work.
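The agentic pattern he is describing reduces to a simple loop: the model picks a tool, the harness executes it, and the observation feeds back in until the task is done. A skeletal sketch, with call_model as a stand-in for whatever model API you use (all names here are hypothetical):

```python
import subprocess

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

def run_tests(cmd: str = "pytest -q") -> str:
    result = subprocess.run(cmd.split(), capture_output=True, text=True)
    return result.stdout + result.stderr

TOOLS = {"read_file": read_file, "run_tests": run_tests}

def call_model(history: list) -> dict:
    """Stand-in for a real LLM API call. This scripted version only
    demonstrates the control flow: read a file, run tests, then finish."""
    step = sum(1 for m in history if m["role"] == "tool")
    if step == 0:
        return {"type": "tool", "tool": "read_file", "args": {"path": "README.md"}}
    if step == 1:
        return {"type": "tool", "tool": "run_tests", "args": {}}
    return {"type": "finish", "answer": "task complete"}

def agent(task: str, max_steps: int = 10) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(history)          # model decides: use a tool or finish
        if action["type"] == "finish":
            return action["answer"]
        observation = TOOLS[action["tool"]](**action["args"])
        history.append({"role": "tool", "content": observation})
    return "step budget exhausted"

print(agent("fix the failing test"))
```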
As for the amount of computation in the last two years, everybody in this room knows the computing demand for NVIDIA GPUs is off the charts. Spot pricing is skyrocketing. You couldn't find a GPU if you tried. And yet in the meantime, we're shipping GPUs out in incredible amounts, and demand just keeps on going up. There's a reason for that: this fundamental inflection. Finally, AI is able to do productive work, and therefore the inflection point of inference has arrived.

### [50:43 The Inflection Point for Inference has Arrived](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=3043s)

***Jensen Huang***


AI now has to think; in order to think, it has to inference. AI now has to do; in order to do, it has to inference. AI has to read; in order to do so, it has to inference. It has to reason; it has to inference. Every part of AI, every time it has to think, reason, or do, it has to generate tokens; it has to inference. It's way past training. Now it's in the field of inference. So the inference inflection has arrived.
It arrived at a time when the amount of tokens, the amount of compute necessary, increased by roughly 10,000 times. Now, combine these two: the fact that in the last two years the computing demand of the work has gone up by 10,000 times, and the amount of usage has probably gone up by 100 times.
People have heard me say that I believe computing demand has increased by 1 million times in the last two years. It is the feeling that we all have. It is the feeling every startup has. It's the feeling that OpenAI has. It's the feeling that Anthropic has. If they could just get more capacity, they could generate more tokens, their revenues would go up, more people could use it, and the more advanced, the smarter, the AI could become. We are now in that positive flywheel system. We have reached that moment. The inference inflection has arrived.

### [52:23 Inference Inflection Drives Strong Growth](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=3143s)

***Jensen Huang***

Last year at this time, I said that where I stood at that moment in time, we saw about $500 billion of very high-confidence demand and purchase orders for Blackwell and Rubin through 2026. I said that last year. Now, I don't know if you guys feel the same way, but $500 billion is an enormous amount of revenue. No one's impressed. I know why you're not impressed: because all of you had record years. Well, I'm here to tell you that right now, where I stand, a few short months after GTC DC, one year after last GTC, right here where I stand,
I see, through 2027, at least $1 trillion.

Now, does it make any sense? That's what I'm going to spend the rest of the time talking about. In fact, we are going to be short. I am certain computing demand will be much higher than that. And there's a reason for that. So the first thing is, we did a lot of work in the last year. As you know, 2025 was NVIDIA's year of inference. We wanted to make sure that not only were we good at training and post-training, but that we were incredibly good at every single phase of AI,
so that the investments made in our infrastructure could scale out for as long as people would like to use it, and the useful life of NVIDIA's infrastructure would be long, and therefore the cost would be incredibly low. The longer you can use it, the lower the cost. There's no question in my mind: NVIDIA systems are the lowest-cost AI infrastructure you can get in the world. And so the first part of last year was all about inference, and it drove this inflection point. Simultaneously, we were very pleased last year that Anthropic came to NVIDIA, that MSL, Meta Superintelligence Labs, chose NVIDIA.
And meanwhile, as a collection, as a group, this represents one third of the world's AI compute. Open-source models have reached near the frontier, and they are literally everywhere. And NVIDIA, as you know, is the only platform in the world today that runs every single domain of AI across every single one of these AI models: language, biology, computer graphics, computer vision, speech, proteins and chemicals, robotics and otherwise, edge or cloud, any language. NVIDIA's architecture is fungible for all of that, and we're incredible at all of that.
That allows us to be the lowest-cost, highest-confidence platform. Because when you're building these systems, as I mentioned, $1 trillion is an enormous amount of infrastructure. You have to have complete confidence that the trillion dollars you're putting down will be utilized, will be performant, will be incredibly cost effective, and will have useful life for as long as you can see. That infrastructure investment you can make on NVIDIA with complete confidence. We have now proven that it is the only infrastructure in the world that you can build anywhere in the world with complete confidence.
You want to put it in any of the clouds? We're delighted by that. You want to put it on prem? We're happy about that. You want to put it in any country, anywhere? We're delighted to support you. We are now a computing platform that runs all of AI. Now, our business is already starting to show that: 60% of our business is hyperscalers, the top five hyperscalers. However, even within that top five, some of it is internal AI consumption. The internal AI consumption is really important work: recsys is moving on from recommender systems of tables and collaborative filtering and content filtering.
It's moving toward deep learning and large language models. Search is moving to deep learning, to large language models. Almost all of these different hyperscale workloads are now shifting toward workloads that NVIDIA GPUs are incredibly good at. But on top of that, because we work with every AI lab, because we accelerate every AI model, and because we have a large ecosystem of AI natives that we can bring to the clouds, that investment, no matter how large, no matter how quick, that compute will be consumed. And that represents 60% of our business.
The other 40% is just everywhere. Regional clouds. Sovereign clouds. Enterprise. Industrial. Robotics. Edge. Big systems. Supercomputing systems. Small servers. Enterprise servers. The number of systems: incredible. The diversity of AI is also its resilience. The span of AI's reach is its resilience. There is no question this is not a one-app technology. This is now fundamental. This is absolutely a new computing platform shift. Well, our job is to continue to advance the technology. And one of the most important things, as I mentioned, was that last year was our year of inference. We dedicated everything to it. We took a giant chance: while Hopper was at its prime and it was just cooking, we decided that the Hopper architecture, the NVLink 8, had to be taken to the next level.
We completely re-architected the system, disaggregated the computing system altogether, and created NVLink 72. The way that it's built, the way it's manufactured, the way it's programmed completely changed. Grace Blackwell NVLink 72 was a giant bet, and it wasn't easy for anybody, including many of my partners here in the room. I want to thank all of you for the hard work that you did. Thank you.
NVLink 72, NVFP4. NVFP4 is not just FP4 precision; it's a whole different type of tensor core and computational unit. We've demonstrated now that we can inference with NVFP4 without loss of accuracy, but with a gigantic boost in performance and energy efficiency.
We've also been able to use NVFP4 for training. So: NVLink 72, NVFP4, the invention of Dynamo, TensorRT-LLM, a whole bunch of new algorithms. We even built a supercomputer to help us optimize kernels and optimize our complete stack. We call it DGX Cloud. We invested billions of dollars of supercomputing capability to help us create the kernels, the software, that made inference possible.
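To give a feel for what block-scaled 4-bit inference means, here is a simplified sketch of the idea, assuming an E2M1 value grid and one scale per 16-element block; the actual NVFP4 format has additional details (for example, how the scales themselves are encoded) that NVIDIA's documentation specifies.

```python
import numpy as np

# Representable magnitudes of a 4-bit E2M1 float (plus sign bit).
E2M1_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0], dtype=np.float32)
BLOCK = 16

def quantize_dequantize(x: np.ndarray) -> np.ndarray:
    """Round each 16-element block to the nearest scaled E2M1 magnitude."""
    out = np.empty_like(x)
    for start in range(0, x.size, BLOCK):
        block = x[start:start + BLOCK]
        scale = np.abs(block).max() / E2M1_GRID[-1]   # map block max onto 6.0
        if scale == 0.0:
            scale = 1.0
        mags = np.abs(block) / scale
        idx = np.abs(mags[:, None] - E2M1_GRID).argmin(axis=1)  # nearest grid value
        out[start:start + BLOCK] = np.sign(block) * E2M1_GRID[idx] * scale
    return out

w = np.random.randn(64).astype(np.float32)
w4 = quantize_dequantize(w)
print("max abs error:", float(np.abs(w - w4).max()))
```

Per-block scaling is what keeps the error small: each group of 16 values gets its own dynamic range, so a few outliers don't destroy the precision of the rest.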
Well, the results all came together. People used to tell me, "But Jensen, inference is so easy." Inference is the ultimate hard problem. It is also ultimately important, because it drives your revenues.

And so this is the outcome. This is from SemiAnalysis. This is the largest, most comprehensive sweep of AI inference that has ever been done.

### [1:00:53 NVIDIA Extreme Co-Design Revolutionized Token Cost](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=3653s)


***Jensen Huang***

And what you see here on the left, on this side, is tokens per watt. Tokens per watt is important because every data center, every single factory, by definition, is power constrained. A one-gigawatt factory will never become two; it's physically constrained, the laws of atoms, the laws of physicality. And so within that one gigawatt of data center, you want to drive the maximum number of tokens, which is the production, the product, of that factory.
So you want to be on top of that curve, as high as you can. The x-axis is the interactivity, the speed of inference, the speed of each inference. The faster you can inference, the faster you can, of course, respond. But very importantly, the faster you can inference, the larger the models, the more context you can process, the more tokens you can think through. This axis is the same as the smartness of the AI. So this is the throughput of the AI, and this is the smartness of the AI. And notice: the smarter the AI, the lower your throughput.
Makes sense. You're thinking longer. Okay. And so this axis is the speed, and I'm going to come back to this. This is important. This is where I torture all of you, but it's too important. Every CEO in the world will study their business from now on in the way I'm about to describe, because this is your token factory. This is your AI factory. This is your revenues. There's no question about that going forward. And so this is the throughput, the intelligence, per watt, for a given power of data center.
The more throughput, the more tokens you can produce; on this side is cost. Notice NVIDIA is the highest performance in the world. Nobody would be surprised by that. They would be surprised by the fact that in one generation, whereas Moore's Law, through transistors, would have given us 50%, two times, probably one and a half times more performance, and you would have expected one and a half times higher than Hopper H200, nobody would have expected 35 times higher. I said last year at this time that NVIDIA's Grace Blackwell NVLink 72 was 35 times the perf per watt.
Nobody believed me. And then SemiAnalysis came out, and Dylan Patel had a quote. He accused me of sandbagging. He says Jensen sandbagged; it's actually 50 times. And he's not wrong. And so our cost per token is the lowest in the world. You can't beat it. I've said before, if you have the wrong architecture, even if it's free, it's not cheap enough. And the reason for that is because no matter what happens, you still have to build a gigawatt data center. You still have to build a gigawatt factory.
And that gigawatt factory, amortized over 15 years, is about $40 billion. Even when you put nothing in it, it's $40 billion. You'd better make for darn sure you put the best computer system in that thing, so that you can have the best token cost. NVIDIA's token cost is world class, basically untouchable at the moment. And the reason that's true is because of extreme co-design. And so I'm very happy that he named us.
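The factory math he keeps returning to can be written down in a few lines. The $40 billion, 15-year, one-gigawatt framing is from the talk; the tokens-per-joule figure below is purely an assumed placeholder:

```python
SITE_COST_USD    = 40e9      # ~$40B all-in for a gigawatt factory (from the talk)
LIFETIME_YEARS   = 15        # amortization window (from the talk)
POWER_WATTS      = 1e9       # 1 GW: the hard physical constraint
TOKENS_PER_JOULE = 500       # assumed throughput; this is the lever to optimize

seconds = LIFETIME_YEARS * 365 * 24 * 3600
lifetime_tokens = POWER_WATTS * TOKENS_PER_JOULE * seconds
cost_per_mtok = SITE_COST_USD / (lifetime_tokens / 1e6)

print(f"lifetime tokens: {lifetime_tokens:.2e}")
print(f"cost per million tokens: ${cost_per_mtok:.5f}")
# Power is fixed, so doubling tokens per watt halves cost per token;
# that is why perf/watt is the axis every factory owner optimizes.
```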

### [1:04:50 InferenceMax King](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=3890s)

***Jensen Huang***

There was a monkey king, a token king. Well, we take all of our software, as I told you.
We vertically integrate, but we are horizontally open. Vertical integration, horizontally open. We integrate all of our software and all of our technology however we can, package it up, and integrate it into the world's inference service providers. And these companies are growing so fast. Fireworks, Lin is here. Together, they're just growing so incredibly fast, 100 times in the last year. They are token factories. And the effectiveness, the performance, and the token cost production capability of their factories is everything to them.

And this is what happened.


### [1:06:13 NVIDIA is the Global Standard for AI Inference at Scale](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=3973s)

***Jensen Huang***


Here, we updated their software, same system.
And notice their token speeds. Incredible, the difference. Before NVIDIA updated everything, all of our algorithms and software and all the technology that we bring to bear, they averaged about 700 tokens per second; that went to nearly 5,000, seven times higher. And so this is the incredible power of extreme co-design.

### [1:06:46 AI Factories are the Industrial Infrastructure of the AI Era](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=4006s)


***Jensen Huang***


I mentioned earlier the importance of factories. This is the importance of the factory: your data center. It used to be a data center for files. It's now a factory to generate tokens. Your factory is limited no matter what. Everybody is looking for land, power, and shell. Once you build it, you are power limited.
Within that power-limited infrastructure, you'd better make for darn sure, because inference is your workload and tokens are your new commodity and that compute is your revenues, that the architecture is as optimized as it can be. In the future, every single CSP, every single computer company, every single cloud company, every single AI company, every single company, period, is going to be thinking about their token factory effectiveness. This is your factory in the future. And the reason I know that is because everybody in this room is powered by intelligence, and in the future, that intelligence will be augmented by tokens.
So let me show you how we got here.

[VIDEO NARRATION - JENSEN]

### [1:07:56 A Decade of AI Infrastructure Innovation: From NVIDIA DGX-1 to NVIDIA Vera Rubin](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=4076s)

***Jensen Huang***


On April 6th, 2016, a decade ago, we introduced DGX-1, the world's first computer designed for deep learning. Eight Pascal GPUs connected with first-generation NVLink, 170 teraflops in one computer, the world's first computer designed for AI researchers. With Volta, we introduced NVLink Switch: 16 GPUs connected with full all-to-all bandwidth, operating as one giant GPU. A giant step forward, but model sizes continued to grow. The data center needed to become a single unit of computing, so Mellanox joined NVIDIA. In 2020, the DGX A100 SuperPOD became the first GPU supercomputer combining scale-up and scale-out architecture:
NVLink 3 for scale-up, ConnectX-6 and Quantum InfiniBand for scale-out. Then Hopper, the first GPU with the FP8 transformer engine, launched the generative AI era: NVLink 4, ConnectX-7, BlueField-3 DPUs, second-generation Quantum InfiniBand. It revolutionized computing. Blackwell redefined AI supercomputing system architecture with NVLink 72: 72 GPUs connected by an NVLink spine, 130 TB per second of all-to-all bandwidth. Compute trays integrate Blackwell GPUs, Grace CPUs, ConnectX-8, and BlueField-3. Scale-out runs over Spectrum-X Ethernet. With three scaling laws at full steam, pre-training, post-training, and inference, and now agentic systems, compute demand continues to grow exponentially.
And now Vera Rubin, architected for every phase of agentic AI, advancing every pillar of computing, including CPU, storage, networking, and security. Vera Rubin NVLink 72: 3.6 exaflops of compute, 260 TB per second of all-to-all NVLink bandwidth, the engine supercharging the era of agentic AI. The Vera CPU rack, designed for orchestration and agentic workflows. The Stat rack: AI-native storage built with BlueField-4. Scale-out with Spectrum-X co-packaged optics, increasing energy efficiency and resiliency. And an incredible new addition: the Groq 3 LPX rack, tightly connected to Vera Rubin. Groq's LPUs bring massive on-chip SRAM, a token accelerator for the already incredibly fast Vera Rubin. Together, 35 times more throughput per megawatt.
The new Vera Rubin platform: seven chips, five rack-scale computers, one revolutionary AI supercomputer for agentic AI. 40 million times more compute in just ten years.


### [1:11:26 NVIDIA Vera Rubin](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=4286s)

***Jensen Huang***


Now, in the good old days, when I would say Hopper, I would hold up a chip. That's just adorable. This is Vera Rubin. When we think Vera Rubin, we think the entire system: vertically integrated, complete with software, extended end to end, optimized as one giant system. The reason why it's designed for agentic systems is very clear: because for agents, of course, the most important workload is thinking.
The large language models are getting larger and larger. It's going to generate more and more tokens more quickly, so it can think more quickly. But it also has to access memory. It's going to pound on memory really hard: KV cache, structured data, unstructured data, cuVS. It's going to be pounding on the storage system really, really hard, which is the reason why we reinvented the storage system. It is also going to use tools, and unlike humans, who are more tolerant of slower computers, AI wants its tools to be as fast as possible.
These tools: web browsers. In the future, they could also be virtual PCs in the cloud. Those PCs, those computers, have to be as fast as possible. We created a brand new CPU, designed for extremely high single-threaded performance, incredibly high data throughput, incredibly good data processing, and extreme energy efficiency. It is the only data center CPU in the world that uses LPDDR5, with incredible single-threaded performance and performance per watt that is unrivaled. And so that's why we built it, so that it could go along with the rest of these racks for agentic processing.
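A quick back-of-envelope sketch of why agentic inference pounds on memory: the transformer KV cache grows linearly with context length. The model-shape numbers below are illustrative assumptions, not any particular model:

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # Keys and values (the 2x), stored per layer, per head, per token.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# Illustrative model shape with an FP16 cache; not any specific model.
gib = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128, seq_len=128_000) / 2**30
print(f"KV cache for one 128k-token session: ~{gib:.0f} GiB")
# Multiply by thousands of concurrent agent sessions and the memory and
# storage systems, not just the math units, become the bottleneck.
```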
And so here it is. This is the Grace Blackwell.

### [1:13:53 NVIDIA Vera Rubin, NVLink and Groq](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=4433s)

***Jensen Huang***

No, Vera Rubin. Where is it? Here it is. Okay, so this is the Vera Rubin system. Notice, since the last time: 100% liquid cooled, all of the cables gone. What used to take two days to install now takes two hours. Incredible. And so the manufacturing cycle time is going to dramatically reduce. This is also a supercomputer that is cooled by hot water, 45 degrees, which takes the pressure off of the data center. It takes all of that cost and all of that energy that's used to cool the data center and makes it available for the system.
This is the secret sauce. We're the only company in the world that has today built a sixth-generation scale-up switching system. This is not Ethernet. This is not InfiniBand. This is NVLink, the sixth generation of NVLink. This is insanely hard to do well. It is insanely hard to do, period. And I'm just super proud of the team. NVLink, completely cooled.
This is the brand new Groq system, and I'll show you a little bit more about it. This system has eight Groq chips. This is the LP 30. The world's never seen it.
Anything the world has ever seen is V1. This is third generation, and we're in volume production now. And I'll show you more about that in just a second.

### [1:15:29 NVIDIA Spectrum-X-Switch, Co-packaged Optics, Vera and NVIDIA BlueField-4](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=4529s)


***Jensen Huang***

The world's first CPO Spectrum-X switch. This is also in full production: co-packaged optics. The optics come directly onto this chip and interface directly with the silicon. Electrons get translated to photons, and they get directly connected to this chip. We invented the process technology with TSMC. We're the only one in production with it today. It's called COUPE. It's completely revolutionary. NVIDIA is in full production with Spectrum-X.
This is the Vera system, twice the performance per watt of any CPU in the world today.
It is also in production. Well, you know, we never thought we would be selling CPUs standalone. We are selling a lot of CPUs standalone. This is already, for sure, going to be a multi-billion dollar business for us, so I'm very, very pleased with our CPU architects. We've designed a revolutionary CPU. And this is the CX-9, powered with the Vera CPU. The BlueField-4 STX, our new storage platform. Okay, so these are the four racks, and each one of these racks is connected. The NVLink rack, I've shown you guys this before.
It's super heavy, and it seems to get heavier every year, because I think there are just more cables in there every year. So this is the NVLink rack. We've also taken this technology, because it is so efficient to create a data center with these structured cabling systems, and decided to do the same for Ethernet. So this is Ethernet: 256 liquid-cooled nodes in one rack, also connected with these incredible connectors. You guys want to see Rubin Ultra?

### [1:17:38 Rubin Ultra](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=4658s)


***Jensen Huang***


So this is the Rubin Ultra compute node. Unlike Rubin, which slides in horizontally, Rubin Ultra goes into a whole new rack.
It's called Kyber, and it enables us to connect 144 GPUs in one NVLink domain. And so the Kyber rack... this, I could lift it, I'm sure, but I won't. It's quite heavy. This is one compute node, and it slides into the Kyber rack vertically. This is where it connects: this is the midplane of the Kyber rack. Those four top NVLink connectors slide in and connect into this, and this becomes one of the nodes; each one of these is a different compute node. And this is the amazing part: the midplane. On the back of the midplane, instead of the cabling system, which has its limits in terms of how far we can drive copper cables, we now have this system to connect 144 GPUs.
This is the new NVLink. It also sits vertically, and it connects into the midplane on the back: compute in the front, NVLink switches in the back, one giant computer. Okay, so that is Rubin Ultra. As I mentioned, how about we take this back down? I need the rest of my slides.

### [1:19:41 Inference Performance and Efficiency Drive Company Results](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=4781s)

***Jensen Huang***

Oh, it's coming down. Okay. Thank you, Jeanine. This is what happens when you don't practice.

Okay. All right. So, um... take your time, just don't get hurt. You saw this slide. You know, only in NVIDIA's keynote will you see last year's slide presented again.
And the reason for that is, last year I told you something very, very important, and it's so important that it's worthwhile to tell you again. This is probably the single most important chart for the future of AI factories. Every CEO in the world will be tracking it and studying it very deeply. It's much, much more complicated than this; it's multi-dimensional. But you will be studying the throughput and the token speed of your AI factories at ISO power, because that's all the power you have: throughput and token speed for your factories, forever.
And that analysis is going to lead directly to your revenues. What you do this year will show up precisely next year as your revenues. And this chart is what it's all about. On the vertical axis, thank you guys, on the vertical axis is throughput. On the horizontal axis is token rate.

Today I'm going to show you this because we're now able to increase the token speed, and because model sizes are increasing, and because the context length, depending on the grade of the application use case, continues to grow from maybe 100,000 tokens of input length to maybe millions.
The token input length is growing, and the output token length is growing too. And so all of these play into, ultimately, the marketing and the pricing of future tokens. Tokens are the new commodity, and like all commodities, once it reaches an inflection, once it matures, it will segment into different tiers. High throughput, low speed could be used for the free tier. The next tier could be the medium tier: a larger model, maybe higher speed, and for sure a larger input context length. That translates to a different price point. You can see this across all the different services.
This one is free; it's the free tier. The first paid tier could be $3 per million tokens. The next tier could be $6 per million tokens. You would like to keep pushing this boundary, because the larger the model, the smarter it is; the more input token context length, the more relevant it is; and the higher the speed, the longer and the more it can think and iterate: smarter AI models. So this is about smarter AI models. And when you have smarter AI models, each one of these clicks allows you to increase the price. So this is $45. And maybe one day there'll be a premium service that allows you to generate token speeds that are incredibly high, because you're in a critical path, or maybe you're doing really long research, and $150 per million tokens is just not a thing.
So let's translate that. Suppose you were to use 50 million tokens per day as a researcher, at $150 per million tokens. As it turns out, for a research team, that's not even a thing. So we believe that this is the future. This is where AI wants to go. This is where it is today. It had to start here to establish its value, establish its usefulness, and get better and better and better. In the future, you're going to see most services encompass all of that.
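To make that arithmetic concrete, here is a minimal sketch of the tier math just described; the tier names and per-million-token prices are the illustrative figures from the talk, not published pricing.

```python
# Illustrative tier math from the talk; prices and volumes are
# examples from the keynote, not published pricing.
TIERS = {          # $ per million tokens
    "free": 0,
    "standard": 3,
    "plus": 6,
    "pro": 45,
    "premium": 150,
}

def daily_cost(tokens_per_day: float, price_per_million: float) -> float:
    """Cost in dollars for a day's token usage at a given tier price."""
    return tokens_per_day / 1e6 * price_per_million

# A researcher consuming 50 million tokens/day at the premium tier:
print(daily_cost(50e6, TIERS["premium"]))   # 7500.0 -> $7,500/day
```

At $150 per million tokens, 50 million tokens a day comes to $7,500, roughly $2.7M a year, which is presumably the sense in which it is "not even a thing" next to the output of the research team it amplifies.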

This is Hopper. Hopper started here, and I've moved it onto the chart.
This is 50. This is 100. Hopper looks like this. And you would have expected the next generation to be higher than Hopper. But nobody would have expected it to be that much higher.

This is Grace Blackwell. What Grace Blackwell did is, at your free tier, increase your throughput tremendously. However, where you mostly monetize your service, it increased your throughput by 35 times. This is no different from any product that any company makes: the higher the tier, the higher the quality and the performance, and the lower the volume and the capacity. And so it is no different from any other business in the world.
And so now we're able to increase this tier by 35x, and we introduced a whole new tier. This is the benefit of Grace Blackwell: a huge jump over Hopper. Well, this is what we're doing next.

Okay, so this is Grace Blackwell. Okay, let me just reset this.

And this is Vera Rubin. Okay. Now just think about what just happened: at every single tier, we increased the throughput. And at the tier where your ASP is highest and your segment most valuable, we increased it by ten x.
That is the hard work. This is incredibly hard to do out here. This is the benefit of NVLink 72. This is the benefit of extremely low latency. This is the benefit of extreme co-design, that we could shift the entire curve up. Now, what does it mean from a customer perspective in the end?

Suppose I were to take all of that and just multiply it out. Suppose I took 25% of my power and used it on the free tier, 25% on the medium tier, 25% on the high tier, and 25% on the premium tier.
My data center only has a gigawatt, and so I get to decide how I want to distribute it. The free tier allows me to attract more customers. This one allows me to serve my most valuable customers. And the combination, the product of all of that, is basically your revenues, the revenues you can generate. In this simplistic example, Blackwell generates five times more revenue, and Vera Rubin five times more again. Yeah.
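Here is a toy version of that revenue model, with made-up throughput numbers; only the structure, power share times throughput times price summed over tiers, follows the talk.

```python
# Toy model of the gigawatt budget example; the tokens/sec-per-MW
# figures are invented for illustration, not NVIDIA numbers.
POWER_MW = 1000.0                     # a 1 GW data center
SHARE = 0.25                          # 25% of power per tier

# (tokens/sec per MW at that tier's operating point, $ per million tokens)
tiers = {
    "free":    (5.0e6,   0.0),
    "medium":  (2.0e6,   6.0),
    "high":    (5.0e5,  45.0),
    "premium": (1.0e5, 150.0),
}

def annual_revenue(tiers: dict) -> float:
    seconds = 365 * 24 * 3600
    total = 0.0
    for tok_per_s_per_mw, price in tiers.values():
        tokens = POWER_MW * SHARE * tok_per_s_per_mw * seconds
        total += tokens / 1e6 * price
    return total

print(f"${annual_revenue(tiers):,.0f} / year")
```

The point of the structure: at fixed power, a faster architecture lifts each tier's tokens-per-second-per-megawatt, and that multiplier flows straight through to revenue.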

So, Vera Rubin: you should get there as soon as you can, because your cost per token goes down and your throughput goes up.
Now, but we want even more. And so let me show you, back to this. As I told you, this throughput requires a ton of flops. This latency, this interactivity, requires an enormous amount of bandwidth. Computers don't like extreme amounts of flops and extreme amounts of bandwidth at the same time, because there's only so much surface area for chips in any system. And so optimizing for high throughput and optimizing for low latency are, in fact, enemies of each other. And so this is what happened when we combined with Groq. Okay.

And so we acquired the team that worked on the Groq chips and licensed the technology, and we've been working together to integrate the systems. This is what that looks like. At the most valuable tier, we're now going to increase performance by 35x. Now, this very simple chart reveals exactly why NVIDIA is so strong in the vast majority of workloads so far: because up in this area, throughput matters so much, and NVLink 72 is so game-changing. It is exactly the right architecture, and it's very hard to beat.
Even as you add Groq to it. However, if you extend this chart way out here and you say you want services that deliver not 400 tokens per second but a thousand tokens per second, all of a sudden NVLink 72 runs out of steam and simply can't get there. We just don't have enough bandwidth. And so this is where Groq comes in, and this is what happens when we push that out.

So it goes out beyond... thank you... it goes out beyond even the limits of what NVLink 72 can do. And if you were to translate that into revenues relative to Blackwell...

Vera Rubin is 5x. If most of your workload is high throughput, I would stick with 100% Vera Rubin. If a lot of your workload is coding and very high-value engineering token generation, I would add Groq, maybe to 25% of my total data center; the rest of my data center is all Vera Rubin. And so that gives you a sense of how you would add Groq to Vera Rubin and extend its performance and its value even more.

This is what happens. This is the contrast.
The reason why Groq was so attractive to me is their computing system: a deterministic dataflow processor. It is statically compiled.

### [1:29:16 Uniting Processors of Extreme Performances](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=5356s)


***Jensen Huang***

It is compiler-scheduled, meaning the compiler figures out when the data and the compute arrive, so that compute and data arrive at the same time. All of that is done statically, in advance, and scheduled completely in software. There's no dynamic scheduling. The architecture is designed with massive amounts of SRAM. It is designed for just one workload: inference. And this one workload, as it turns out, is the workload of AI factories. And as the world continues to increase the volume of high-speed tokens it wants to generate, and the smarter those tokens get, the value of this integration is going to get even higher.
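As an illustration of what "compiler scheduled" means, here is a toy sketch (not Groq's actual toolchain): every operation is assigned a fixed start cycle ahead of time, so compute and data meet deterministically with no runtime scheduler.

```python
# Toy illustration of static, compiler-style scheduling -- not Groq's
# actual toolchain. Every op gets a fixed start cycle at compile time,
# so there is no dynamic scheduling left to do at runtime.
ops = [
    # (name, depends_on, cycles)
    ("load_w", None,                     4),
    ("load_x", None,                     4),
    ("matmul", ("load_w", "load_x"),     8),
    ("act",    ("matmul",),              2),
    ("store",  ("act",),                 4),
]

schedule, finish = {}, {}
for name, deps, cycles in ops:          # ops listed in dependency order
    start = max((finish[d] for d in deps), default=0) if deps else 0
    schedule[name] = start              # fixed, known before execution
    finish[name] = start + cycles

print(schedule)
# {'load_w': 0, 'load_x': 0, 'matmul': 4, 'act': 12, 'store': 14}
```

Because every start cycle is known at compile time, latency is deterministic, which is exactly what you want when token speed is the product.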
And so these are two extreme processors. You can see: one Groq chip, 500MB; one Rubin chip, 288GB. It would take a lot of Groq chips to hold the parameter size of the models Rubin serves, as well as all of the context, the KV cache, that has to go along with it. So that limited Groq's ability to really reach the mainstream, to really take off, until we had a great idea: what if we disaggregated inference altogether, with a piece of software called Dynamo? What if we re-architected the way inference is done in the pipeline, so that we could put the work that makes perfect sense on Vera Rubin, and offload the decode generation, the low-latency, bandwidth-limited part of the workload, to Groq? And so we united two processors of extreme differences: one for high throughput, one for low latency. It still doesn't change the fact that we need a lot of memory.

And so with Groq, we're just going to add a whole bunch of Groq chips, which expands the amount of memory it has. Just imagine: out of a trillion-parameter model, we have to store all of that in Groq chips. However, it sits next to NVIDIA Vera Rubin, where we can hold the massive amounts of KV cache that are necessary for processing all of these agentic AI systems.

It's based upon this idea of disaggregated inference. We do the prefill, that's the easy part. But we also tightly integrate the decode: the attention part of decode is done on NVIDIA's Vera Rubin, which needs a lot of math, and the feedforward network part of decode, the token generation part, is done on the Groq chip, the two of them working tightly coupled together, today over Ethernet, with a special mode to reduce latency by about half. And so that capability allows us to integrate these two systems. We run Dynamo, this incredible operating system for AI factories, on top of it, and you get a 35 times increase. 35 times.
Not to mention additional new tiers of inference performance, for token generation, that the world's never seen.
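Schematically, the split described above looks something like this (a sketch of the idea with hypothetical names, not Dynamo's actual API): prefill and decode attention stay on the throughput-optimized GPU next to the KV cache, while the bandwidth-bound token generation step runs on the SRAM-heavy LPU.

```python
# Schematic of disaggregated inference as described in the talk.
# Hypothetical names throughout -- this is not Dynamo's actual API.

class RubinPool:
    """High-throughput side: prefill and decode attention, holds KV cache."""
    def prefill(self, prompt_tokens):
        return f"kv({len(prompt_tokens)} tokens)"   # stands in for real tensors

    def decode_attention(self, kv_cache):
        return "attn_out"                           # math-heavy, flops-bound

class GroqPool:
    """Low-latency side: feedforward / token generation out of on-chip SRAM."""
    def generate_token(self, attn_out):
        return "tok"                                # bandwidth-bound step

def serve(prompt_tokens, max_new_tokens=4):
    rubin, groq = RubinPool(), GroqPool()
    kv = rubin.prefill(prompt_tokens)               # throughput-optimized phase
    out = []
    for _ in range(max_new_tokens):                 # latency-critical loop
        attn = rubin.decode_attention(kv)           # attention on Rubin
        out.append(groq.generate_token(attn))       # token generation on Groq
    return out

print(serve(["hello"] * 1000))
```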

### [1:32:52 NVIDIA Groq 3 LPX](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=5572s)

***Jensen Huang***


So this is it. This is Groq.
The Vera Rubin systems, including Groq. I want to thank Samsung, who manufactures the Groq LP 30 chip for us, and they're cranking as hard as they can. I really appreciate you guys. We're in production with the Groq chip, and we'll ship it in the second half, probably about the Q3 timeframe. Okay. Groq LP 30.

### [1:33:30 Announcing NVIDIA Launch Partners](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=5610s)

***Jensen Huang***


Vera Rubin. You know, it's kind of hard to imagine any more customers.
And the really great thing is, early sampling of Grace Blackwell was really complicated because of everything coming together with NVLink 72, but the sampling of Vera Rubin is going incredibly well. In fact, Satya, I think, has already texted out that the first Vera Rubin rack is up and running at Microsoft Azure. And so I'm super excited for them. We're just going to keep cranking these things out. We have now set up a supply chain that can manufacture thousands of these systems a week, essentially multiple gigawatts of AI factories per month inside our supply chain.
And so we're going to crank out these Vera Rubin racks while we're cranking out the GB300 racks. We are in full production.

The Vera CPU is incredibly successful, and the reason is that AI needs CPUs for tool use, and the Vera CPU was designed just perfectly for that sweet spot. Incredible. For the next generation of data processing, the Vera CPU is ideal.

The Vera CPU plus CX-9, connected into the BlueField-4 STX. 100% of the world's storage industry is joining us on this system, and the reason is that they see exactly the same thing.
The storage system is going to get pounded. It's going to get pounded because we used to have humans using the storage systems, humans using SQL. Now we're going to have AIs using these storage systems. And it's going to be cuDF-accelerated storage, cuVS-accelerated storage, and, very importantly, KV caching.

Okay, so this is the Vera Rubin system.

### [1:35:25 NVIDIA Vera Rubin: 7 Chips - 5 Rack Systems](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=5726s)



***Jensen Huang***


Now, what's amazing is this: in just two years' time, in a one-gigawatt factory, using the mathematics I showed you earlier, Moore's law would have given us a couple of steps: an X-factor on the number of transistors, an X-factor on the flops, an X-factor on the amount of bandwidth.
But with this architecture, we're going to take our token generation rate from 2 million to 700 million, a 350-times increase. This is the power of extreme co-design. This is what I mean when we integrate and optimize vertically, but then open it horizontally for everybody to enjoy.
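Taking the transcript's figures at face value (the units, a token rate per one-gigawatt factory, are implied rather than stated), the arithmetic checks out:

```python
# Sanity check on the "350x in two years" figure from the talk.
# Units are implied (token rate per 1 GW factory); values as spoken.
before = 2_000_000        # token rate before
after  = 700_000_000      # token rate after

print(after / before)     # 350.0 -> the quoted 350x increase
```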

### [1:36:28 NVIDIA Extreme Co-Design Delivering X-Factors Every Year](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=5788s)

***Jensen Huang***


This is our roadmap, very quickly. Blackwell is here, the Oberon system. In the case of Rubin, we also have the Oberon system. We're always backwards compatible, so that if you want to not change anything and just keep moving through with the new architecture, you can do so.
The old system, the standard rack system, Oberon, is still available. Oberon is copper scale-up, and with Oberon we can also use optical scale-out, or excuse me, optical scale-up, to expand to NVLink 576. Okay. And so there's a lot of conversation about whether NVIDIA is going to do copper scale-up or optical scale-up. We're going to do both. So we're going to have NVLink 144 with Kyber, and then with Oberon we're going to have NVLink 72, plus optics to get to NVLink 576. For the next generation of Rubin, Rubin Ultra, we have the Rubin Ultra chip, which is coming, which is taping out, and we have a brand new chip, the LP 35. The LP 35 will, for the first time, incorporate NVIDIA's NVFP4 compute format, giving you another few-X-factor speedup.
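For context on what an NVFP4-style format does, here is a toy sketch of 4-bit block quantization (simplified: real NVFP4 uses E2M1 4-bit values with an FP8 scale per 16-element block; here a plain float scale stands in):

```python
import numpy as np

# Toy 4-bit block quantization in the spirit of NVFP4 (simplified;
# real NVFP4 pairs E2M1 4-bit values with an FP8 scale per 16-element
# block, whereas this sketch just uses a float scale per block).
E2M1 = np.array([0, .5, 1, 1.5, 2, 3, 4, 6])          # representable magnitudes
GRID = np.concatenate([-E2M1[::-1], E2M1])            # signed 4-bit value grid

def quantize_block(x: np.ndarray):
    scale = np.abs(x).max() / 6.0 or 1.0              # map block max to 6 (E2M1 max)
    q = GRID[np.abs(GRID[None, :] - (x / scale)[:, None]).argmin(axis=1)]
    return q, scale                                   # 4-bit codes plus one scale

x = np.random.randn(16).astype(np.float32)            # one 16-element block
q, s = quantize_block(x)
print("max abs error:", np.abs(x - q * s).max())
```

The payoff is that weights and activations shrink to roughly a quarter of FP16, so the same silicon and bandwidth move several times more values per cycle, which is where the claimed X-factor comes from.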
Okay. And so this is Oberon: NVLink 72, optical scale-up. And it uses Spectrum-6, the world's first co-packaged optics switch. And all of this is in production.
The next generation from here is Feynman. Feynman has a new GPU, of course. It also has a new LPU, the LP 40, a big step up. Incredible, incredible new technology, now uniting the scale of NVIDIA and the Groq team building the LP 40 together. It's going to be incredible. A brand new CPU called Rosa, short for Rosalyne. BlueField-5, which connects the CPU with the next SuperNIC-10. We will have Kyber, which is copper scale-up.
We will also have Kyber CPO scale-up. So for the first time, we will scale up with both copper and co-packaged optics. Okay. And so a lot of people have been asking: you know, Jensen, is copper still going to be important? The answer is yes. Jensen, are you going to scale up optical? Yes. Are you going to scale out optical? Yes. And so for everybody in our ecosystem: we need a lot more capacity. That's really the key. We need a lot more capacity for copper, a lot more capacity for optics, a lot more capacity for CPO. And that's the reason we've been working with all of you to lay the foundation for this level of growth. And so Feynman will have all of that.
Let me see if I missed anything. That's it. Every single year, a brand new architecture.

### [1:40:05 NVIDIA DSX AI Factory Platform](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=6005s)


***Jensen Huang***


Very quickly: NVIDIA went from a chip company to an AI factory company, an AI infrastructure company, an AI computing company, these systems. And now we're building entire AI factories. There's so much power that is squandered in these AI factories. We want to make sure that these AI factories come together designed in the best possible way.
Most of these components never meet each other. Most of us technology vendors know each other now, but in the past we never met each other until the data center. That can't happen. We're building super complex systems, and so we have to meet each other virtually, somewhere else. And so we created Omniverse and the Omniverse DSX world, a platform where all of us can meet and design these gigawatt AI factories virtually, in system. We have simulation systems for the racks: mechanical, thermal, electrical, networking. Those simulation systems are integrated with all of our ecosystem partners, incredible tools companies.
It's also connected to the grid, so that we can interact with each other, send each other information, and adjust grid power and data center power accordingly, saving energy. And then inside the data center, using Max-Q, we can adjust the system dynamically across power and cooling and all of the different technologies we work on together, so that we leave no power squandered and run at the most optimal rate to deliver an enormous amount of token throughput. There's no question in my mind there's a factor of two in here, and a factor of two at the scale we're talking about is gigantic.
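A toy control loop in the spirit of that Max-Q description (purely illustrative, not NVIDIA's actual Max-Q software): under a fixed facility power cap, shift power from the least efficient rack to the most efficient one, so no watt is squandered.

```python
# Toy power-reallocation loop in the spirit of the Max-Q idea above --
# illustrative only, not NVIDIA's actual Max-Q software.
import random

RACKS = ["rack_a", "rack_b", "rack_c"]
CAP_KW = 300.0                                  # facility power cap
power = {r: CAP_KW / len(RACKS) for r in RACKS} # start with an even split

def tokens_per_kw(rack: str, kw: float) -> float:
    """Stand-in for telemetry: throughput efficiency at a power level."""
    base = {"rack_a": 90, "rack_b": 100, "rack_c": 80}[rack]
    return base * (1 - 0.001 * kw) * random.uniform(0.98, 1.02)

for step in range(100):                         # periodic rebalancing loop
    eff = {r: tokens_per_kw(r, power[r]) for r in RACKS}
    worst, best = min(eff, key=eff.get), max(eff, key=eff.get)
    delta = min(1.0, power[worst] * 0.05)       # bounded shift per step
    power[worst] -= delta                       # take from least efficient
    power[best] += delta                        # give to most efficient

assert abs(sum(power.values()) - CAP_KW) < 1e-6 # never exceed the cap
print(power)
```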
We call this the NVIDIA DSX platform. And just as all of our platforms, there's the hardware layer, there's the library layer and there's the ecosystem layer. It's exactly the same way. Let's show it to you.

[VIDEO NARRATION]


### [1:42:15 How AI Factories Maximise Tokens, Power, and Profit with NVIDIA DSX](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=6135s)

***Jensen Huang***


The greatest infrastructure build-out in history is underway. The world is racing to build chips, systems, and AI factories, and every month of delay costs billions in lost revenue. AI factory revenue equals tokens per watt, so with power constrained, every unused watt is revenue lost. NVIDIA DSX is an Omniverse digital twin blueprint for designing and operating AI factories for maximum token throughput, resilience, and energy efficiency.
Developers connect through several APIs: DSXM for physical, electrical, thermal, and network simulation; DSX Exchange for AI factory operational data; DSX Flex for secure, dynamic power management with the grid; and DSX Max-Q to dynamically maximize token throughput. It starts with SimReady assets from NVIDIA and equipment manufacturers, managed by PTC Windchill PLM. Then model-based systems engineering is done in Dassault Systèmes 3DEXPERIENCE. Jacobs brings the data into their custom Omniverse app to finalize the design. It's tested with leading simulation tools: Siemens STAR-CCM+ for external thermals, Cadence Reality for internal thermals, ETAP for electrical, and NVIDIA's network simulator DSX Air. And it's virtually commissioned through Procore to ensure accelerated construction time.
When the site goes live, the digital twin becomes the operator. AI agents work with DSX Max-Q to dynamically orchestrate infrastructure. Phaidra's agent oversees cooling and electrical systems, sending signals to Max-Q, which continuously optimizes compute throughput and energy efficiency. Emerald AI agents interpret live grid demand and stress signals and adjust power dynamically. With DSX, NVIDIA and our ecosystem of partners are racing to build AI infrastructure around the world, ensuring extreme resiliency, efficiency, and throughput.

It's incredible, right? Well, Omniverse was designed to hold the world's digital twins, starting from the Earth itself, and it's going to hold digital twins of all sizes.
And so we have just such a great ecosystem of partners. I want to thank all of you. Many of these companies are brand new to our world; we didn't know many of you just a couple of years ago, and now we're working so closely together to build the largest computers the world's ever seen, and to do it at planetary scale. So NVIDIA DSX is our new AI factory platform.

### [1:45:40 Space-1 Vera Rubin Module](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=6340s)

***Jensen Huang***

I'll spend very little time on this today. However, we're going to space. We've already been out in space: Thor is radiation-qualified, and we're in satellites.
You do imaging from satellites. In the future, we'll also build data centers in space, obviously very complicated to do. So we're working with our partners on a new computer called Vera Rubin Space-1, and it's going to go out to space and start up data centers out there. Now, of course, in space there's no conduction, there's no convection, there's just radiation. And so we have to figure out how to cool these systems out in space. But we've got lots of great engineers working on it.

### [1:46:23 NemoClaw for OpenClaw](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=6383s)

***Jensen Huang***

Let me talk to you about something new. So, Peter Steinberger is here, and he wrote a piece of software called OpenClaw. I don't know if he realized how successful it was going to be, but its importance is profound. OpenClaw is number one. It's the most popular open source project in the history of humanity, and it did so in just a few weeks; it exceeded what Linux did in 30 years. It is that important. Okay.

We're announcing our support of it. Let me quickly go through this; I want to show you a couple of things. You simply type this into a console, and it goes out, finds OpenClaw, downloads it, and builds you an AI agent, and then you can tell it whatever else you need it to do. Okay, so let's take a look.

[VIDEO]:
[VIDEO SOUNDCLIPS]

### [1:47:47 The ChatGPT Moment for Long-Running, Autonomous Agents](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=6467s)

***Jensen Huang***

An open source project just dropped. Andrej Karpathy has just launched something called A-Research. It's a huge deal.
You give an AI agent a task and go to sleep.
It runs 100 experiments overnight, keeping what works and discarding what doesn't.
[PETER STEINBERGER]
I really love what my stuff enables people to do. One guy told me he installed it for his 60-year-old dad, and, like, they made beer, connected the machine via Bluetooth to OpenClaw, and then automated everything, including the whole website for people to order the lobster.
Hundreds of people are queuing up for lobsters in Shenzhen.
Do you want to build with OpenClaw?
Everyone is talking about OpenClaw. But what is OpenClaw? Believe it or not, there's already a ClawCon.

Incredible. Incredible. Now, I've illustrated what OpenClaw is in this way so that all of you can understand it. But let's just think about what happened. What is OpenClaw? It's an agentic system. It calls and connects to large language models. So the first thing: it has resources that it manages. It can access tools. It can access file systems. It can access large language models. It's able to do scheduling. It's able to do cron jobs. It's able to decompose a prompt you give it into steps.
It can spawn off and call upon other subagents. It has I/O: you can talk to it in any modality you want; you can wave at it and it understands you. It sends you messages, it texts you, it sends you email. So it's got I/O. What else does it have? Well, based on that, you could say it's, in fact, an operating system. I've just used the same vocabulary I would use to describe an operating system. OpenClaw has open-sourced, essentially, the operating system of agentic computers.
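Reading that list as an operating-system spec, a toy sketch of the shape being described might look like this (hypothetical names throughout; this is not OpenClaw's actual code or API):

```python
# Toy "agentic OS" loop in the shape Jensen describes -- hypothetical
# names only, not OpenClaw's actual code or API.
import sched, time

class Agent:
    def __init__(self, llm, tools):
        self.llm = llm                    # resource: the language model
        self.tools = tools                # resources: file system, browser, ...
        self.scheduler = sched.scheduler(time.time, time.sleep)

    def run(self, prompt: str):
        plan = self.llm(f"Decompose into steps: {prompt}")   # step-by-step plan
        for tool, arg in plan:
            print(self.tools[tool](arg))  # tool use; could also spawn subagents

    def cron(self, interval_s: float, prompt: str):
        """Recurring job, like the overnight-experiments example."""
        def job():
            self.run(prompt)
            self.scheduler.enter(interval_s, 1, job)
        self.scheduler.enter(interval_s, 1, job)
        self.scheduler.run()

# Stand-ins so the sketch runs end to end:
fake_llm = lambda p: [("shell", "echo hello")]
tools = {"shell": lambda cmd: f"$ {cmd}"}
Agent(fake_llm, tools).run("say hello")
```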

It is no different from how Windows made it possible for us to create personal computers. Now OpenClaw has made it possible for us to create personal agents. The implications are incredible. First of all, the adoption says something all by itself. However, the most important thing is this: every single company now realizes it, every software company, every technology company. For the CEOs, the question is: what's your open source strategy? Just as we all needed a Linux strategy, just as we all needed an HTTP and HTML strategy, which started the internet.
We all needed a Kubernetes strategy, which made it possible for mobile-cloud to happen. Every company in the world today needs an open, agentic-system strategy. This is the new computer.

Now, this is the exciting part. This is enterprise IT before OpenClaw. I mentioned earlier the way enterprise IT works. The reason they're called data centers is that these large rooms, these large buildings, held data, held the files of people, the structured data of business. It would pass through software that has tools, systems of record, and all kinds of workflow codified into it, and that turns into tools that humans would use.
Digital workers would use them. That is the old IT industry: software companies creating tools, saving files, and of course GSIs, consultants that help companies figure out how to use and integrate these tools. These tools are incredibly valuable for governance, security, privacy, and compliance, and all of that continues to be true. It's just that post-OpenClaw, post-agentic, this is what it's going to look like.

This is the extraordinary part. Every single IT company, every SaaS company, will become an AgaaS company, an Agentic-as-a-Service company. No question about it.
And what's amazing is this: OpenClaw gave the industry exactly what it needed at exactly the right time, just as Linux gave the industry exactly what it needed at exactly the right time, just as Kubernetes showed up at exactly the right time, just as HTML showed up. It made it possible for the entire industry to grab onto this open source stack and go do something with it. There's just one catch: an agentic system in the corporate network can have access to sensitive information, it can execute code, and it can communicate externally. Just say that out loud.
Okay, think about it: access sensitive information, execute code, communicate externally. It could access employee information, supply chain information, finance information, sensitive information, and send it out, communicate it externally. Obviously, this can't possibly be allowed. And so what we did was take some of the world's best security and computing experts and work with Peter to make OpenClaw enterprise-secure and enterprise-privacy capable.

And we call that NemoClaw, our NVIDIA reference design for OpenClaw. And it has all of these agentic AI toolkits.
The first part of it is a technology we call OpenShell, which has now been integrated into OpenClaw. Now it's enterprise-ready. This stack, with the reference design we call NemoClaw, you can download it, play with it, and connect it to the policy engines of all of the SaaS companies in the world. Your policy engines are super important, super valuable. So the policy engines can be connected, and NemoClaw, OpenClaw with OpenShell, will be able to execute against that policy engine.
It has a policy engine, it has a network guardrail, it has a privacy router. And as a result, we can protect the enterprise and keep the claws executing inside our companies safely. We also added several things to the system. One of the most important things you want with your own custom claws is your own custom models. And this is NVIDIA's open model initiative.
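The guardrail pattern being described can be sketched like this (hypothetical names, not the actual NemoClaw or OpenShell implementation): every tool call is checked against a default-deny policy before it can touch files, run code, or reach the network.

```python
# Toy policy-engine gate in the pattern described above -- hypothetical
# names and rules, not the actual NemoClaw / OpenShell implementation.
from dataclasses import dataclass

@dataclass
class ToolCall:
    action: str        # "read_file" | "exec" | "net_send" ...
    target: str

POLICY = {
    "read_file": lambda t: not t.startswith("/finance/"),   # privacy router
    "exec":      lambda t: t in {"pytest", "ls"},            # allow-listed code
    "net_send":  lambda t: t.endswith(".internal.example"),  # network guardrail
}

def guarded(call: ToolCall) -> bool:
    rule = POLICY.get(call.action)
    return bool(rule and rule(call.target))  # default-deny unknown actions

for call in [ToolCall("read_file", "/finance/payroll.csv"),
             ToolCall("net_send", "api.internal.example"),
             ToolCall("net_send", "evil.example.com")]:
    print(call, "->", "ALLOW" if guarded(call) else "DENY")
```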

### [1:57:01 NVIDIA Nemotron and Open Models](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=7021s)


***Jensen Huang***


We are now at the frontier of every single domain of AI models, whether it's Nemotron, the Cosmos world foundation models, GR00T, our artificial general robotics and humanoid robotics models, Alpamayo for autonomous vehicles, BioNeMo for digital biology, or Earth-2 for AI physics.
We are at the frontier on every single one. Take a look.

[VIDEO NARRATION - JENSEN]

### [1:57:29 How NVIDIA Open Models Power Every Industry’s AI](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=7049s)


***Jensen Huang***


The world is diverse; no single model can serve every industry. Open models form one of the largest and most diverse AI ecosystems in the world: nearly 3 million open models across language, vision, biology, physics, and autonomous systems enable AI builders in specialized domains. NVIDIA is one of the largest contributors to open source AI. We build and release six families of open frontier models, plus the training data, recipes, and frameworks to help developers customize and adopt them. New leaderboard-topping models are launching for every family. At the core: Nemotron, reasoning models for language, visual understanding, RAG, safety, and speech.
Cosmos, frontier models for physical AI world generation and understanding. Alpamayo, the world's first thinking-and-reasoning autonomous vehicle AI. GR00T, foundation models for general-purpose robots. BioNeMo, open models for biology, chemistry, and molecular design. Earth-2, models for weather and climate forecasting rooted in AI physics. NVIDIA open models give researchers and developers the foundation to build and deploy AI for their own specialized domains.

Our models are... thank you. Our models are valuable to all of you because, number one, they're at the top of the leaderboards.
They're world class. But most importantly, it's because we are not going to give up working on them. We're going to keep working on them every single day. Nemotron 3 is going to be followed by Nemotron 4. Cosmos 1 was followed by Cosmos 2. GR00T is at generation two. Each and every one of these models, we're going to continue to advance.
Vertical integration. Horizontal openness.
So that we can enable everybody to join the AI revolution. Number one on the leaderboards, across research and voice and world models and artificial general robotics and self-driving cars and reasoning.

And of course, one of the most important ones: this is Nemotron 3 in OpenClaw.
Look at the top three: those are the three best models in the world. Okay, so we are at the frontier.

It is also true that we want to create the foundation models so that all of you can fine-tune them, post-train them into exactly the intelligence you need. This is Nemotron 3 Ultra. It is going to be the best base model the world's ever created.

This allows us to help every country build their sovereign AI, and we're working with so many different companies out there.
And one of the most exciting things I'm announcing today is the Nemotron coalition. We are so dedicated to this. We have invested billions of dollars in AI infrastructure so that we can develop the core engines of AI, all the libraries for inference and so on, but also create the AI models to activate every single industry in the world. Large language models are really important. Of course they're important; how could human intelligence not be? However, in different industries and different countries around the world, you need the ability to customize your own models in your own domains.
The domains of these models are radically different, from biology to physics to self-driving cars to general robotics to, of course, human language. And we have the ability to work with every single region to create their domain-specific, their sovereign, AI. Today, we're announcing a coalition to partner with us to make Nemotron 4 even more amazing.

### [2:01:46 Announcing Global AI Leaders Join NVIDIA Nemotron Coalition to Advance Open Frontier Models](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=7306s)


***Jensen Huang***

And that coalition has some amazing companies in it. Black Forest Labs, the imaging company. Cursor, the famous coding company; we use lots of it. LangChain, a billion downloads, for creating custom agents. Mistral. Arthur Mensch, I think he's here.
Incredible, incredible company. Perplexity. Absolutely use it, everybody; it is so good, a multi-modal agentic system. Reflection. Sarvam from India. Thinking Machines, Mira Murati's lab. Incredible companies joining us. Thank you. I said that every single enterprise company, every single software company in the world needs an agentic systems strategy, an agent strategy. You need to have an open source strategy.

And they all agree. And they're all partnering with us to integrate the NemoClaw reference design, the NVIDIA agentic AI toolkit, and of course all of our open models. One company after another.
There's so many.

And we're partnering with all of you. I'm really grateful for that. And this is our moment. This is a reinvention, a renaissance of enterprise IT. From what would be a $2 trillion industry, this is going to become a multi-trillion dollar industry, offering not just tools for people to use, but agents, specialized in the very domains you're expert in, that we can rent. I can totally imagine that in the future, every single engineer in our company will have an annual token budget. They're going to make a few hundred thousand dollars a year in base pay.
I'm going to give them probably half of that on top of it as tokens, so that they can be amplified ten-x. Of course we would. It is now one of the recruiting tools in Silicon Valley: how many tokens come along with my job? And the reason is very clear: every engineer with access to tokens will be more productive. And those tokens, as you know, will be produced by AI factories that all of you and us partner to build. Okay. So every single enterprise company today sits on top of file systems and data centers.
Every single software company of the future will be agentic, and they will be token manufacturers: token users on behalf of their engineers, and token manufacturers for all of their customers.
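A quick back-of-envelope on that token budget, using the illustrative prices from the tier chart earlier (example numbers only, not real compensation or pricing):

```python
# Back-of-envelope for the engineer token budget example; all numbers
# are illustrative figures from the talk, not real pricing.
base_pay = 300_000            # "a few hundred thousand dollars a year"
token_budget = base_pay / 2   # "probably half of that on top as tokens"

price_per_million = 45        # the $45/M-token tier from the chart
tokens = token_budget / price_per_million * 1e6
print(f"${token_budget:,.0f} buys ~{tokens / 1e9:.1f}B tokens/year")
# -> $150,000 buys ~3.3B tokens/year, roughly 9M tokens per day
```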

### [2:04:43 Announcing NVIDIA NemoClaw Reference OpenClaw](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=7483s)


***Jensen Huang***

The OpenClaw event cannot be overstated. This is as big a deal as HTML. This is as big a deal as Linux. We now have a world-class, gigantic open framework that all of us can use to build our open core strategy, and we've created a reference design we call NemoClaw that all of you can use. It is optimized.
It's performant. It is safe and secure.

### [2:05:22 Physical AI and Robotics](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=7522s)


***Jensen Huang***


Speaking of agents: agents, as you know, perceive, reason, and act. Most of the agents I've spoken about today are digital agents; they act in the digital world, they reason, they write software, it's all digital. But we've also been working on physically embodied agents for a long time. We call them robots, and the AIs they need are physical AIs. We have some big announcements here; I'm going to walk through a few of them. There are 110 robots here, and almost every company in the world building robots is working with NVIDIA; I can't think of one that isn't.
We have three computers: the training computer; the synthetic data generation and simulation computer; and, of course, the robotics computer that sits inside the robot itself. We have all the software stacks necessary, and the AI models to help you, and all of this is integrated into ecosystems around the world with all of our partners, from Siemens to Cadence, incredible partners everywhere. And today we're announcing a whole bunch of new partners. As you know, we've been working on self-driving cars for a long time. The ChatGPT moment of self-driving cars has arrived. We now know we can successfully drive cars autonomously.
And today, we are announcing four new partners for NVIDIA's robotaxi-ready platform: BYD, Hyundai, Nissan, and Geely. All together, 18 million cars built each year, joining our partners from before: Mercedes, Toyota, GM. The number of robotaxi-ready cars in the future is going to be incredible. We're also announcing a partnership with Uber: in multiple cities, we're going to be deploying and connecting these robotaxi-ready vehicles into their network. And so a whole bunch of new cars. We have ABB, Universal Robots, KUKA, so many robotics companies here, and we're working with them to implement our physical AI models, integrated into simulation systems, so that we can deploy these robots into manufacturing lines everywhere.
We have Caterpillar here. We even have T-Mobile here, and the reason is that in the future, what used to be a radio tower is going to be an NVIDIA Aerial AI-RAN. This is going to be a robotic radio tower, meaning it can reason about the traffic and figure out how to adjust its beamforming, so that it saves as much energy as possible and increases fidelity as much as possible.
There are so many humanoid robots here, but one of my favorites is a Disney robot. Tell you what, let me just show you some of the videos. Let's look at that first.

[VIDEO NARRATION - JENSEN]

### [2:08:33 The Age of Physical AI and Robotics](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=7713s)


***Jensen Huang***


The first global rollout of physical AI at scale is here: autonomous vehicles. With NVIDIA Alpamayo, vehicles now have reasoning, helping them operate safely and intelligently across scenarios. We asked the car to narrate its actions.
I'm changing lanes to the right to follow my route.
Explain its thinking as it makes decisions.
There's a double parked vehicle in my lane. I'm going around it.
And follow instructions.
Hey, Mercedes, can we speed up?
Sure, I'll speed up.
This is the age of physical AI and robotics.
Around the world, developers are building robots of every kind. But the real world is massively diverse, unpredictable, and full of edge cases. Real-world data will never be enough to train for every scenario. We need data generated from AI and simulation: for robots, compute is data. Developers pre-train world foundation models on internet-scale video and human demonstrations, and evaluate the models' performance to prepare them for post-training. Using classical and neural simulation, they generate massive amounts of synthetic data and train policies at scale. To accelerate developers, NVIDIA built the open source Isaac Lab for robot training and evaluation in simulation.
Newton for extensible and GPU accelerated differentiable physics simulation.
Cosmos world models for neural simulation.
And GR00T open robotics foundation models for robot reasoning and action generation. With enough compute, developers everywhere are closing the physical AI data gap.
Peritas AI trains their operating room assistant robot in NVIDIA Isaac Lab, multiplying their data with NVIDIA Cosmos World models.
Skild AI uses Isaac Lab and Cosmos to generate post-training data for their Skild AI brain. They use reinforcement learning to harden the model across thousands of variations.
Humanoid uses Isaac Lab to train whole body control and manipulation policies.
Hexagon Robotics uses Isaac Lab for training and data generation.
Foxconn fine-tunes GR00T models in Isaac Lab, as does Noble Machines.
Disney Research uses their Kamino physics simulator in Newton and Isaac Lab to train policies across their character robots in every universe.

### [2:13:00 Olaf takes the stage with Jensen Huang](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=7980s)


***Jensen Huang***
 Ladies and gentlemen. Olaf.
***OLAF***
Coming through.
***Jensen Huang***
Newton works. Wow. Omniverse works. Olaf, how are you?
***OLAF***
 I'm so happy now that I'm meeting you.
***Jensen Huang***
I know, because I gave you your computer: Jetson.
***OLAF***
What's that?
***Jensen Huang***
Well, it's in your tummy.
***OLAF***
That's going to be amazing.
***Jensen Huang***
And you learn how to walk inside Omniverse.
***OLAF***
 I just love walking. This is so much better than riding on a reindeer. Gazing up at the beautiful sky.
***Jensen Huang***
And it was because of physics, using this Newton solver that runs on top of NVIDIA Warp, which we jointly developed with Disney and with DeepMind, that it was possible for you to adapt to the physical world. Check that out. That's how smart you are.
***OLAF***
 I'm a snowman, not a snowclopedia.
***Jensen Huang***
Could you imagine this? The future of Disneyland: all these robots, all these characters wandering around. Oh, you know, I have to admit, though, I thought you were going to be taller. I've never seen such a short snowman, to be honest.
***OLAF***
 Nope.
***Jensen Huang***
 Hey, tell you what, you want to help me out.
***OLAF***
 Hooray!
***Jensen Huang***
Okay. Usually I close the keynote by telling you what I told you. We talked about the inference inflection. We talked about the AI factory. We talked about the open-core agent revolution that's happening. And of course, we talked about physical AI and robotics. But tell you what, why don't we get some friends to help us close it out?
***OLAF***
Of course.
***Jensen Huang***
All right, play it. Come on.
[VIDEO - SONG PLAYS]
***Jensen Huang***
All right. Have a great GTC.

### [2:14:55  Official Keynote Closing Video | GTC 2026](https://www.youtube.com/watch?v=jw_o0xr8MWU&t=8095s)


