Everything Announced at Nvidia’s CES Event in 12 Minutes

At CES 2025, Nvidia CEO Jensen Huang opened the world's largest electronics show with the new GeForce RTX 50-series GPUs, an update on its Grace Blackwell AI hardware, and the company's plans to push deeper into robotics and autonomous vehicles.

Here it is, our brand-new GeForce RTX 50 series, Blackwell architecture. The GPU is a beast: 92 billion transistors, 4,000 TOPS and 4 petaflops of AI, three times the last generation, Ada, and we need every bit of it to create the pixels I showed you. 380 ray tracing teraflops, so that for the pixels we have to compute we can compute the most beautiful image possible, and 125 shader teraflops, with a concurrent integer unit of equal performance, so dual shaders: one for floating point, one for integer. GDDR7 memory from Micron at 1.8 terabytes per second, double the performance of our last generation.

We now have the ability to intermix AI workloads with computer graphics workloads, and one of the amazing things about this generation is that the programmable shader can also process neural networks. The shader is able to carry these neural networks, and as a result we created neural texture compression and neural material shading.

With the Blackwell family, the RTX 5070 delivers 4090 performance at $549. Impossible without artificial intelligence, impossible without the 4 petaflops of AI Tensor Cores, impossible without the GDDR7 memory. So that's the 5070: 4090 performance, $549. And here is the whole family, from the 5070 up to the 5090 at $1,999, double the performance of the 4090, and we are ramping production at very large scale, with availability starting in January.

It's amazing, but we managed to put these big GPUs into laptops. This is a 5070 laptop for $1,299, and it has the performance of a 4090. Even the 5090 will fit into a laptop, a thin laptop, 14.9 millimeters, and you also have the 5080, the 5070 Ti, and the 5070.

But what we have here is 72 Blackwell GPUs, or 144 dies. This one chip is 1.4 exaflops; the world's largest, most advanced supercomputer only recently got beyond an exaflop, and this is 1.4 exaflops of AI floating-point performance. It has 14 terabytes of memory, and the amazing thing is that the memory bandwidth is 1.2 petabytes per second. That is basically the entire internet's traffic right now; more and more of the world's internet traffic is being processed across these chips. We have 130 trillion transistors in total, 2,592 CPU cores, and a whole bunch of networking. I wish I could do this; I don't think I will. So here are the Blackwells, here are our ConnectX networking chips, here is the NVLink, and we are trying to stand in for the NVLink spine, which isn't really possible. And all of this is HBM memory, 14 terabytes of HBM memory. This is what we are trying to do, and it is a miracle, the miracle of the Blackwell system.

We fine-tuned the Llama models using our expertise and our capabilities and turned them into the Llama Nemotron suite of open models. There are small ones, the Nano, for the fastest response times; the Super, which is the mainstream version of the model; and the Ultra, the largest and most capable. The Ultra can be used as a teacher for a whole group of other models. It can be a reward model, a judge that evaluates the answers other models generate and decides whether an answer is good or not, essentially giving feedback to other models. And it can be distilled in a lot of different ways; it is basically a teacher model for knowledge distillation. All of this is available online.
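The teacher, reward, and distillation roles described for the Nemotron Ultra model are standard techniques rather than anything spelled out in the keynote. As a rough illustration only, here is a minimal knowledge-distillation sketch in PyTorch; the function, the temperature, and the random tensors are hypothetical stand-ins, not Nvidia code.

```python
# Minimal knowledge-distillation sketch (illustrative only, not Nvidia code).
# A large "teacher" model's softened output distribution supervises a smaller
# "student" model, alongside the usual cross-entropy loss on the true labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with soft-label KL divergence."""
    # Soft targets from the teacher, softened by the temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale the KL term by T^2 so gradients stay comparable across temperatures.
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage with random logits standing in for teacher/student outputs.
student_logits = torch.randn(8, 1000)      # student predictions
teacher_logits = torch.randn(8, 1000)      # frozen teacher predictions
labels = torch.randint(0, 1000, (8,))      # ground-truth class ids
loss = distillation_loss(student_logits, teacher_logits, labels)
```

The idea is simply that the smaller student is trained to match the larger teacher's softened output distribution in addition to the ground-truth labels.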
And NVIDIA Cosmos, the world's first world foundation model, trained on 20 million hours of video. Those 20 million hours focus on physically dynamic things: dynamic nature, nature themes, humans walking, hands moving, objects being manipulated, fast camera movements. It's about teaching the AI not to generate creative content but to understand the physical world, and with that understanding comes physical AI. There are a lot of downstream things we can do with it: we can generate data to train models, we can distill it into the beginnings of a robotics model, and we can have it generate multiple plausible future scenarios, a bit like Doctor Strange, because this model understands the physical world. You've seen a lot of the imagery generated from this model's understanding of the world. It can also caption: it can take videos and caption them incredibly well, and that captioning and video can then be used to train multimodal large language models. So you can use this foundation model to teach robots as well as large multimodal language models. This is NVIDIA Cosmos.

The platform includes an autoregressive model for real-time applications, a diffusion model for very high-quality generation, an incredible tokenizer that learns the vocabulary of the real world, and a data pipeline, so that if you want to take all of this and train it on your own data, the pipeline, because there is so much data involved, we have accelerated everything end to end. All of this is part of the Cosmos platform, and today we are announcing that Cosmos is open and available on GitHub.

Today we are also announcing that our next-generation processor for the car, our next-generation automotive computer, is called Thor. I have one right here, hold on a second. This is Thor. This is a robotics computer. It takes an insane amount of sensor information, cameras, high-resolution radar, lidar, all coming into this chip, and the chip has to process all of those sensors, turn them into tokens, put them into a transformer, and predict the next path. This AV computer is now in full production. Thor has 20 times the processing capability of our last generation, Orin, which is really the standard for autonomous vehicles today. So this is really quite incredible. Thor is in full production. This robotics processor, by the way, also goes into a full robot, so it could be an AMR, it could be a humanoid robot, it could be the brain, it could be the manipulator. This processor is basically a universal robotics computer.

The ChatGPT moment for general robotics is just around the corner. In fact, all of the enabling technologies I've been talking about are going to make it possible, over the next several years, to see very rapid, surprising breakthroughs in general robotics. The reason general robotics is so important is that robots with tracks and wheels require special environments built to accommodate them, while there are three robots we can build that require no greenfield; brownfield adaptation is perfect. If we can build these amazing robots, we can deploy them in exactly the world we have already built for ourselves. The first is agentic robots and agentic AI, because they are information workers; as long as they can work with the computers we have in our offices, it is going to be great. Number two, self-driving cars, and the reason is that we have spent more than 100 years building roads and cities. And number three, humanoid robots. If we can solve these three, this will be the largest technology industry the world has ever seen.
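The Thor description above reduces to a simple flow: sensor data is turned into tokens, a transformer fuses them, and the network predicts the vehicle's next path. The keynote gives no implementation detail, so the following is only a loose sketch of that idea in PyTorch; every module, dimension, and the waypoint head are assumptions, not Nvidia DRIVE code.

```python
# Loose sketch of "sensors -> tokens -> transformer -> predicted path" as
# described in the keynote. Not Nvidia DRIVE code; shapes and heads are invented.
import torch
import torch.nn as nn

class ToyDrivePlanner(nn.Module):
    def __init__(self, sensor_dim=256, d_model=512, horizon=20):
        super().__init__()
        # Project per-sensor feature vectors (camera/radar/lidar embeddings)
        # into a shared token space.
        self.tokenize = nn.Linear(sensor_dim, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=6)
        # Predict a short future trajectory: `horizon` (x, y) waypoints.
        self.path_head = nn.Linear(d_model, horizon * 2)
        self.horizon = horizon

    def forward(self, sensor_features):
        # sensor_features: (batch, n_tokens, sensor_dim)
        tokens = self.tokenize(sensor_features)
        fused = self.transformer(tokens)
        # Pool the fused tokens and decode them into waypoints.
        pooled = fused.mean(dim=1)
        return self.path_head(pooled).view(-1, self.horizon, 2)

# Toy usage with random features standing in for fused sensor data.
planner = ToyDrivePlanner()
fake_sensors = torch.randn(2, 128, 256)
waypoints = planner(fake_sensors)   # (2, 20, 2) predicted path
```

A real stack would of course use per-sensor encoders, temporal context, and far richer outputs than a handful of (x, y) waypoints; the sketch only mirrors the tokens-into-a-transformer shape of the description.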
This is Nvidia's latest AI computer. It's called Project Digits for now, and if you have a good name for it, reach out to us. Here's the amazing thing: this is an AI supercomputer. It runs the entire Nvidia AI stack; all of Nvidia's software runs on it, and DGX Cloud runs on it. It sits, well, somewhere, and it's wireless or connected to your computer; it's even a workstation if you'd like it to be, and you can reach it like a cloud supercomputer, with Nvidia's AI running on it. It's based on a top-secret chip we've been working on called GB10, the smallest Grace Blackwell we make, and this is the chip inside. It is in production. For this top-secret chip, the Grace CPU was built by Nvidia in collaboration with MediaTek. They're a world-leading SoC company, and they worked with us to build the CPU SoC and connect it with a chip-to-chip NVLink interface to the Blackwell GPU. And this little thing here is in full production. We expect this computer to be available around the May timeframe.


