This is what hundreds of millions of gamers in the world play on. It's a GeForce. This is the chip that's inside. For nearly 30 years, Nvidia's chips have been coveted by gamers, shaping what's possible in graphics and dominating the entire market since the company first popularized the term graphics processing unit with the GeForce 256. Now its chips are powering something entirely different.

ChatGPT has started a very intense conversation. He thinks it's the most revolutionary thing since the iPhone. Venture capital interest in AI startups has skyrocketed. All of us working in this field have been optimistic that at some point the broader world would understand the importance of this technology. And it's actually really exciting that that's starting to happen.

As the engine behind large language models like ChatGPT, Nvidia is finally reaping rewards for its investment in AI, even as other chip giants suffer in the shadow of U.S.-China trade tensions and an easing chip shortage that has weakened demand. But the California-based chip designer relies on Taiwan Semiconductor Manufacturing Company to make nearly all its chips, leaving it vulnerable. The biggest risk is really U.S.-China relations and the potential impact on TSMC. If I'm a shareholder in Nvidia, that's really the only thing that keeps me up at night.

This isn't the first time Nvidia has found itself teetering on the leading edge of an uncertain emerging market. It has neared bankruptcy a handful of times in its history when founder and CEO Jensen Huang bet the company on impossible-seeming ventures. Every company makes mistakes and I make a lot of them. And some of them put the company in peril. Especially in the beginning, because we were small and we were up against very, very large companies, and we were trying to invent this brand new technology. We sat down with Huang at Nvidia's Silicon Valley headquarters to find out how he pulled off this latest reinvention and got a behind-the-scenes look at all the ways it powers far more than just gaming.

Now one of the world's top ten most valuable companies, Nvidia is one of the rare Silicon Valley giants that, 30 years in, still has its founder at the helm. I delivered the first one of these inside an AI supercomputer to OpenAI when it was first created. 60-year-old Jensen Huang, a Fortune Businessperson of the Year and one of Time's most influential people in 2021, immigrated to the U.S. from Taiwan as a kid and studied engineering at Oregon State and Stanford. In the early 90s, Huang met fellow engineers Chris Malachowsky and Curtis Priem at Denny's, where they talked about dreams of enabling PCs with 3D graphics, the kind made popular by movies like Jurassic Park at the time. If you go back 30 years, at the time, the PC revolution was just starting and there was quite a bit of debate about what the future of computing was and how software should be run. And there was a large camp, and rightfully so, that believed the CPU, or general-purpose software, was the best way to go. And it was the best way to go for a long time. We felt, however, that there was a class of applications that wouldn't be possible without acceleration.

The friends launched Nvidia out of a condo in Fremont, California, in 1993. The name was inspired by N.V. for "next version" and Invidia, the Latin word for envy. They hoped to speed up computing so much, everyone would be green with envy. At more than 80% of revenue, its primary business remains GPUs.
Typically sold as cards that plug into a PC's motherboard, they accelerate, or add computing power to, central processing units (CPUs) from companies like AMD and Intel. You know, they were one among tens of GPU makers at that time. They are the only ones, them and AMD actually, who really survived, because Nvidia worked very well with the software community. This is not a chip business. This is a business of figuring out things end to end.

But at the start, its future was far from guaranteed. In the beginning there weren't that many applications for it, frankly, and we smartly chose one particular combination that was a home run. It was computer graphics and we applied it to video games. Now Nvidia is known for revolutionizing gaming and Hollywood with rapid rendering of visual effects.

Nvidia designed its first high-performance graphics chip in 1997. Designed, not manufactured, because Huang was committed to making Nvidia a fabless chip company, keeping capital expenditure way down by outsourcing the extraordinary expense of making the chips to TSMC. On behalf of all of us, you're my hero. Thank you. Nvidia today wouldn't be here, and nor would the other thousand fabless semiconductor companies, if not for the pioneering work that TSMC did.

In 1999, after laying off the majority of workers and nearly going bankrupt to do it, Nvidia released what it claims was the world's first official GPU, the GeForce 256. It was the first programmable graphics card that allowed custom shading and lighting effects. By 2000, Nvidia was the exclusive graphics provider for Microsoft's first Xbox. Microsoft and the Xbox happened at exactly the time that we invented this thing called the programmable shader, and it defines how computer graphics is done today.

Nvidia went public in 1999 and its stock stayed largely flat until demand went through the roof during the pandemic. In 2006, it released a software toolkit called CUDA that would eventually propel it to the center of the AI boom. It's essentially a computing platform and programming model that changes how Nvidia GPUs work, from serial to parallel compute. Parallel computing is: let me take a task and attack it all at the same time using much smaller machines. Right? So it's the difference between having an army where you have one giant soldier who is able to do things very well, but one at a time, versus an army of thousands of soldiers who are able to take that problem and do it in parallel. So it's a very different computing approach.

Nvidia's big steps haven't always been in the right direction. In the early 2010s, it made unsuccessful moves into smartphones with its Tegra line of processors. You know, they quickly realized that the smartphone market wasn't for them, so they exited right from that. In 2020, Nvidia closed a long-awaited $7 billion deal to acquire data center chip company Mellanox. But just last year, Nvidia had to abandon a $40 billion bid to acquire Arm, citing significant regulatory challenges. Arm is a major CPU company known for licensing its signature Arm architecture to Apple for iPhones and iPads, Amazon for Kindles, and many major carmakers.

Despite some setbacks, today Nvidia has 26,000 employees, a newly built polygon-themed headquarters in Santa Clara, California, and billions of chips used for far more than just graphics. Think data centers, cloud computing, and most prominently, AI. We're in every cloud made by every computer company.
And then all of a sudden one day, a new application that wasn't possible before discovers you.

More than a decade ago, Nvidia's CUDA and GPUs were the engine behind AlexNet, what many consider AI's Big Bang moment. It was a new, incredibly accurate neural network that obliterated the competition during a prominent image recognition contest in 2012. It turns out the same parallel processing needed to create lifelike graphics is also ideal for deep learning, where a computer learns by itself rather than relying on a programmer's code. We had the good wisdom to go put the whole company behind it. We saw early on, about a decade or so ago, that this way of doing software could change everything, and we changed the company from the bottom all the way to the top and sideways. Every chip that we made was focused on artificial intelligence.

Bryan Catanzaro was the first and only employee on Nvidia's deep learning team six years ago. Now it's 50 people and growing. For ten years, Wall Street asked Nvidia, why are you making this investment when no one's using it? And they valued it at $0 in our market cap. And it wasn't until around 2016, ten years after CUDA came out, that all of a sudden people understood this is a dramatically different way of writing computer programs, and it has transformational speedups that then yield breakthrough results in artificial intelligence.

So what are some real-world applications for Nvidia's AI? Healthcare is one big area. Think far faster drug discovery and DNA sequencing that takes hours instead of weeks. We were able to achieve the Guinness World Record in a genomic sequencing technique to actually diagnose these patients and enable one of the patients in the trial to have a heart transplant: a 13-year-old boy who's thriving today as a result, and then also a three-month-old baby that was having epileptic seizures, who could be prescribed an anti-seizure medication. And then there's art powered by Nvidia AI, like Rafik Anadol's creations that cover entire buildings.

And when crypto started to boom, Nvidia's GPUs became the coveted tool for mining the digital currency. Which is not really a recommended usage, but that has created problems, because crypto mining has been a boom-or-bust cycle. So gaming cards go out of stock, prices get bid up, and then when the crypto mining boom collapses, there's a big crash on the gaming side. Although Nvidia did create a simplified GPU made just for mining, it didn't stop crypto miners from buying up gaming GPUs, sending prices through the roof. And although that shortage is over, Nvidia caused major sticker shock among some gamers last year by pricing its new 40-series GPUs far higher than the previous generation.

Now there's too much supply, and the most recently reported quarterly gaming revenue was down 46% from the year before. But Nvidia still beat expectations in its most recent earnings report, thanks to the AI boom, as tech giants like Microsoft and Google fill their data centers with thousands of Nvidia A100s, the engines used to train large language models like ChatGPT. When we ship them, we don't ship them in packs of one. We ship them in packs of eight. With a suggested price of nearly $200,000, Nvidia's DGX A100 server board has eight Ampere GPUs that work together to enable things like the insanely fast and uncannily humanlike responses of ChatGPT. I have been trained on a massive dataset of text, which allows me to understand and generate text on a wide range of topics.
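The serial-to-parallel shift that CUDA introduced, which Huang describes earlier with the one-giant-soldier-versus-thousands analogy, can be sketched in a few lines of code. The snippet below is a minimal illustrative sketch, not code from Nvidia or from this piece; the kernel name, buffer size and gain value are made up for the demonstration.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Serial version: one "giant soldier" walks the buffer one pixel at a time.
void brighten_cpu(float* pixels, int n, float gain) {
    for (int i = 0; i < n; ++i) pixels[i] *= gain;
}

// Parallel version: thousands of lightweight GPU threads each handle one pixel.
__global__ void brighten_gpu(float* pixels, int n, float gain) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pixels[i] *= gain;
}

int main() {
    const int n = 1 << 20;        // about a million pixels (illustrative size)
    const float gain = 1.5f;

    float* pixels;
    cudaMallocManaged(&pixels, n * sizeof(float));  // unified memory, for brevity
    for (int i = 0; i < n; ++i) pixels[i] = 0.5f;

    int threads = 256;
    int blocks = (n + threads - 1) / threads;        // enough blocks to cover every pixel
    brighten_gpu<<<blocks, threads>>>(pixels, n, gain);
    cudaDeviceSynchronize();

    printf("pixel[0] = %.2f\n", pixels[0]);          // expect 0.75
    cudaFree(pixels);
    return 0;
}
```

Launched with 256 threads per block, the million-pixel buffer is covered by roughly 4,000 blocks running concurrently, which is the "army of thousands of soldiers" picture in practice; the CPU version has to visit the same million pixels one at a time.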
Companies scrambling to compete in generative AI are publicly boasting about how many Nvidia A100s they have. Microsoft, for example, trained ChatGPT with 10,000. It's very easy to use their products and add more computing capacity. And once you add that computing capacity, computing capacity is basically the currency of the valley right now. And the next generation up from Ampere, Hopper, has already started to ship.

Some uses for generative AI are real-time translation and instant text-to-image renderings. But this is also the tech behind eerily convincing, and some say dangerous, deepfake videos, text and audio. Are there any ways that Nvidia is protecting against some of these bigger fears that people have, or building in safeguards? Yes, I think the safeguards that we're building as an industry around how AI is going to be used are extraordinarily important. We're trying to find ways of authenticating content so that we can know if a video was actually created in the real world or virtually. Similarly for text and audio.

But being at the center of the generative AI boom doesn't make Nvidia immune to wider market concerns. In October, the U.S. introduced sweeping new rules that banned exports of leading-edge AI chips to China, including Nvidia's A100. About a quarter of your revenue comes from mainland China. How do you calm investor fears over the new export controls? Well, Nvidia's technology is export controlled; it's a reflection of the importance of the technology that we make. The first thing that we have to do is comply with the regulations, and it was a turbulent month or so as the company went upside down to re-engineer all of our products so that they're compliant with the regulation and yet still able to serve the commercial customers that we have in China. We're able to serve our customers in China with the regulated parts and delightfully support them.

But perhaps an even bigger geopolitical risk for Nvidia is its dependence on TSMC in Taiwan. There are two issues. One, will China take over the island of Taiwan at some point? And two, is there a viable competitor to TSMC? As of right now, Intel is trying aggressively to get there, and their goal is by 2025. We will see. And this is not just an Nvidia risk. This is a risk for AMD, for Qualcomm, even for Intel.

This is a big reason why the U.S. passed the CHIPS Act last summer, which sets aside $52 billion to incentivize chip companies to manufacture on U.S. soil. Now TSMC is spending $40 billion to build two chip fabrication plants, fabs, in Arizona. The fact of the matter is TSMC is a really important company and the world doesn't have more than one of them. It is imperative upon ourselves and them for them to also invest in diversity and redundancy. And will you be moving any of your manufacturing to Arizona? Oh, absolutely. We'll use Arizona. Yeah.

And then there's the chip shortage. As it largely comes to a close and supply catches up with demand, some types of chips are experiencing a price slump. But for Nvidia, the chatbot boom means demand for its AI chips continues to grow, at least for now. See, the biggest question for them is how do they stay ahead? Because their customers can be their competitors also. Microsoft can try and design these things internally. Amazon and Google are already designing these things internally. Tesla and Apple are designing their own custom chips, too. But Jensen says competition is a net good.
The amount of power that the world needs in the data center will grow. And you can see in the recent trends it's growing very quickly, and that's a real issue for the world. While AI and ChatGPT have been generating lots of buzz for Nvidia, it's far from Huang's only focus.

And we take that model and we put it into this computer, and that's a self-driving car. And we take that computer and we put it into here, and that's a little robot computer. Like the kind that's used at Amazon. That's right. Amazon and others use Nvidia to power robots in their warehouses, to create digital twins of the massive spaces, and to run simulations that optimize the flow of millions of packages each day. Driving units like these in Nvidia's robotics lab are powered by the Tegra chips that were once a flop in mobile phones. Now they're used to power the world's biggest e-commerce operations.

Nvidia's Tegra chips were also used in Tesla Model 3s from 2016 to 2019. Now Tesla uses its own chips, but Nvidia is making autonomous driving tech for other carmakers like Mercedes-Benz. So we call it Nvidia Drive. And basically Nvidia Drive is a scalable platform, whether you want to use it for simple ADAS, assisted driving for your emergency braking warning, pre-collision warning or just holding the lane for cruise control, all the way up to a robotaxi where it is doing everything, driving anywhere in any condition, any type of weather.

Nvidia is also trying to compete in a totally different arena, releasing its own data center CPU, Grace. What do you say to gamers who wish you had kept focus entirely on the core business of gaming? Well, if not for all of our work in physics simulation, if not for all of our research in artificial intelligence, what we did recently with GeForce RTX would not have been possible. Released in 2018, RTX is Nvidia's next big move in graphics with a new technology called ray tracing. For us to take computer graphics and video games to the next level, we had to reinvent and disrupt ourselves, basically simulating the pathways of light and simulating everything with generative AI. And so we compute one pixel and we imagine with AI the other seven. It's really quite amazing. Imagine a jigsaw puzzle where we gave you one out of eight pieces and somehow the AI filled in the rest.

Ray tracing is used in nearly 300 games now, like Cyberpunk 2077, Fortnite and Minecraft. And Nvidia GeForce GPUs in the cloud allow full-quality streaming of 1,500-plus games to nearly any PC. It's also part of what enables simulations: modeling of how objects would behave in real-world situations. Think climate forecasting or autonomous drive tech that's informed by millions of miles of virtual roads. It's all part of what Nvidia calls the Omniverse, what Huang points to as the company's next big bet. We have 700-plus customers who are trying it now, from the car industry to logistics warehouses to wind turbine plants. And so I'm really excited about the progress there. And it represents probably the single greatest container of all of Nvidia's technology: computer graphics, artificial intelligence, robotics and physics simulation, all in one. I have great hopes for it.
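To put Huang's one-pixel-in-eight figure in rough perspective, here is an illustrative back-of-the-envelope calculation (the 4K resolution is an assumption for the example, not a number from the video): a 4K frame is 3840 × 2160 ≈ 8.3 million pixels, so rendering only one pixel in eight means roughly 1 million pixels are actually ray traced per frame, while the AI infers the remaining 7.3 million or so, which is where most of the rendering work is saved.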