  • Just last week, Google announced that they've put cuDF in the cloud to accelerate Pandas.

  • Pandas is the most popular data science library in the world.

  • Many of you in here probably already use Pandas.

  • It's used by 10 million data scientists in the world, downloaded 170 million times each month.

  • It is the Excel, the spreadsheet, of data scientists.

  • Well, with just one click, you can now use Pandas in Colab, which is Google's cloud data center platform, accelerated by cuDF.
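
The one-click workflow described above can be approximated outside Colab too: RAPIDS ships a `cudf.pandas` accelerator that, once enabled, intercepts pandas calls and runs them on the GPU. A minimal sketch that falls back to CPU pandas when cuDF isn't installed; the DataFrame below is my own toy data, not from the demo:

```python
# Enable the cuDF accelerator if present; otherwise plain CPU pandas is used.
try:
    import cudf.pandas
    cudf.pandas.install()  # must run before pandas is imported
except ImportError:
    pass

import pandas as pd

# A small illustrative workload: a group-by aggregation, the kind of
# operation cuDF transparently moves onto the GPU.
df = pd.DataFrame({
    "city": ["Taipei", "Taipei", "Tokyo", "Tokyo"],
    "temp": [30, 32, 27, 25],
})
means = df.groupby("city")["temp"].mean()
print(means.to_dict())  # {'Taipei': 31.0, 'Tokyo': 26.0}
```

In a notebook, `%load_ext cudf.pandas` before `import pandas` achieves the same thing.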

  • The speedup is really incredible.

  • Let's take a look.

  • That was a great demo, right?

  • Didn't take long.

  • This is Earth 2.

  • The idea is that we would create a digital twin of the Earth, that we would go and simulate the Earth so that we could predict the future of our planet to better avert disasters or better understand the impact of climate change, so that we can adapt better, so that we could change our habits now.

  • This digital twin of Earth is probably one of the most ambitious projects that the world's ever undertaken.

  • And we're taking large steps every single year.

  • And I'll show you results every single year.

  • But this year, we made some great breakthroughs.

  • Let's take a look.

  • On Monday, the storm will veer north again and approach Taiwan.

  • There are big uncertainties regarding its path.

  • Different paths will have different levels of impact on Taiwan.

  • Someday, in the near future, we will have continuous weather prediction at every square kilometer on the planet.

  • We will have continuous weather prediction at every square kilometer on the planet.
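
To put "every square kilometer on the planet" in numbers (my back-of-the-envelope arithmetic, not a figure from the talk), a one-kilometer grid over Earth's surface is on the order of half a billion prediction cells:

```python
import math

# Earth's mean radius in kilometers.
R_KM = 6371.0

# Surface area of a sphere: 4 * pi * r^2, in square kilometers.
surface_km2 = 4 * math.pi * R_KM ** 2

# One prediction cell per square kilometer.
cells = surface_km2
print(f"{cells:.3e} one-kilometer cells")  # ~5.101e+08
```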

  • You will always know what the climate's going to be.

  • You will always know.

  • And this will run continuously because we've trained the AI.

  • And the AI requires so little energy.

  • In the late 1890s, Nikola Tesla invented an AC generator.

  • We invented an AI generator.

  • The AC generator generated electrons.

  • NVIDIA's AI generator generates tokens.

  • The AI generator generates tokens.

  • Both of these things have large market opportunities.

  • It's completely fungible in almost every industry.

  • And that's why it's a new industrial revolution.

  • And now we have a new factory, a new computer.

  • And what we will run on top of this is a new type of software.

  • And we call it NIMs, NVIDIA Inference Microservices.

  • Now, what happens is the NIM runs inside this factory.

  • And this NIM is a pre-trained model.

  • It's an AI.

  • Well, this AI is, of course, quite complex in itself.

  • But the computing stack that runs AIs is insanely complex.

  • When you go and use ChatGPT, underneath their stack is a whole bunch of software.

  • Underneath that prompt is a ton of software.

  • And it's incredibly complex because the models are large, billions to trillions of parameters.

  • It doesn't run on just one computer.

  • It runs on multiple computers.

  • It has to distribute the workload across multiple GPUs: tensor parallelism, pipeline parallelism, data parallelism, expert parallelism, all kinds of parallelism, distributing the workload across multiple GPUs, processing it as fast as possible.
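
Tensor parallelism, the first scheme named above, can be sketched in a toy way: a layer's weight matrix is split column-wise across devices, each device computes its partial output, and the pieces are gathered back together. A pure-Python CPU illustration, not a real multi-GPU implementation:

```python
# Toy tensor parallelism: split a linear layer's weight matrix column-wise
# across "devices" and gather the partial outputs.

def matvec(x, w_cols):
    """x @ W for W given as a list of columns; one output value per column."""
    return [sum(xi * col[i] for i, xi in enumerate(x)) for col in w_cols]

# A 3-input, 4-output linear layer, stored as 4 columns of length 3.
W = [[1, 0, 2], [0, 1, 1], [3, 1, 0], [1, 1, 1]]
x = [1.0, 2.0, 3.0]

# "Device" 0 holds columns 0-1, "device" 1 holds columns 2-3.
shard0, shard1 = W[:2], W[2:]
partial0 = matvec(x, shard0)   # computed on device 0
partial1 = matvec(x, shard1)   # computed on device 1

# All-gather: concatenating the partials reproduces the full output.
assert partial0 + partial1 == matvec(x, W)
print(partial0 + partial1)  # [7.0, 5.0, 5.0, 6.0]
```

Pipeline and data parallelism partition the work differently (by layer and by batch, respectively), but follow the same pattern of sharding plus communication.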

  • Because if you are in a factory, if you run a factory, your throughput directly correlates to your revenues.

  • Your throughput directly correlates to quality of service.

  • And your throughput directly correlates to the number of people who can use your service.

  • We are now in a world where data center throughput utilization is vitally important.

  • It was important in the past, but not vitally important.

  • It was important in the past, but people didn't measure it.

  • Today, every parameter is measured.

  • Start time, uptime, utilization, throughput, idle time, you name it.

  • Because it's a factory.

  • When something is a factory, its operations directly correlate to the financial performance of the company.

  • And so we realized that this is incredibly complex for most companies to do.

  • So what we did was we created this AI in a box.

  • And the container holds an incredible amount of software.

  • Inside this container is CUDA, cuDNN, TensorRT, and Triton for inference services.

  • It is cloud native, so that you could autoscale in a Kubernetes environment.

  • It has management services and hooks, so that you can monitor your AIs.

  • It has common APIs, standard APIs, so that you could literally chat with this box.
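
As I understand it, those "standard APIs" are OpenAI-compatible chat-completions endpoints, which is what makes chatting with the box a single HTTP request. A sketch of the request body only; the URL and model name below are illustrative placeholders, not confirmed values:

```python
import json

# Illustrative values: substitute your actual NIM endpoint and model name.
url = "http://localhost:8000/v1/chat/completions"  # hypothetical local NIM
payload = {
    "model": "meta/llama3-8b-instruct",  # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Summarize what a NIM is in one sentence."}
    ],
    "max_tokens": 128,
}

# This is the body an HTTP client (requests, curl, ...) would POST to `url`.
body = json.dumps(payload)
print(body)
```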

  • We now have the ability to create large language models and pre-trained models of all kinds.

  • And we have all of these various versions, whether it's language-based, or vision-based, or imaging-based.

  • We have versions that are available for health care, digital biology.

  • We have versions that are digital humans that I'll talk to you about.

  • And the way you use this, just come to AI.NVIDIA.com.

  • And today, we just posted on Hugging Face the Llama 3 NIM, fully optimized.

  • It's available there for you to try.

  • And you can even take it with you.

  • It's available to you for free.

  • And finally, AI models that reproduce lifelike appearances, enabling real-time path-traced subsurface scattering to simulate the way light penetrates the skin, scatters, and exits at various points, giving skin its soft and translucent appearance.

  • NVIDIA ACE is a suite of digital human technologies packaged as easy-to-deploy, fully-optimized microservices, or NIMs.

  • Developers can integrate ACE NIMs into their existing frameworks, engines, and digital human experiences.

  • Nemotron SLM and LLM NIMs to understand our intent and orchestrate other models.

  • Riva Speech NIMs for interactive speech and translation.

  • Audio2Face and Gesture NIMs for facial and body animation.

  • And Omniverse RTX with DLSS for neural rendering of skin and hair.

  • And so we equipped every single RTX GPU with Tensor Core processing.

  • And now we have 100 million GeForce RTX AI PCs in the world.

  • And we're shipping 200.

  • And at this Computex, we're featuring four new amazing laptops.

  • All of them are able to run AI.

  • Your future laptop, your future PC, will become an AI.

  • It'll be constantly helping you, assisting you in the background.

  • Ladies and gentlemen, this is Blackwell.

  • Blackwell is in production.

  • Incredible amounts of technology.

  • This is our production board.

  • This is the most complex, highest performance computer the world's ever made.

  • This is the Grace CPU.

  • And these are, you could see, the most powerful CPUs, and these, you could see, are the Blackwell dies, two of them connected together.

  • You see that?

  • It is the largest die, the largest chip the world makes.

  • And then we connect two of them together with a 10 terabyte per second link.

  • So this is a DGX Blackwell.

  • This is air-cooled, and has eight of these GPUs inside.

  • Look at the size of the heat sinks on these GPUs.

  • About 15 kilowatts, 15,000 watts, and completely air-cooled.

  • This version supports x86, and it goes into the infrastructure that we've been shipping Hoppers into.

  • However, if you would like to have liquid cooling, we have a new system.

  • And this new system is based on this board, and we call it MGX for modular.

  • And this modular system, you won't be able to see this.

  • Can they see this?

  • Can you see this?

  • You can?

  • Are you?

  • OK.

  • I see.

  • OK.

  • And so this is the MGX system, and here's the two Blackwell boards.

  • So this one node has four Blackwell chips.

  • These four Blackwell chips, this is liquid-cooled.

  • Nine of them, well, 72 of these GPUs, are then connected together with a new NVLink.

  • This is the NVLink switch, fifth generation.

  • And the NVLink switch is a technology miracle.

  • This is the most advanced switch the world's ever made.

  • The data rate is insane.

  • And these switches connect every single one of these Blackwells to each other so that we have one giant 72-GPU Blackwell.

  • Well, the benefit of this is that in one domain, one GPU domain, this now looks like one GPU.

  • This one GPU has 72, versus the last generation's eight, so we increased it by nine times.

  • The amount of bandwidth we've increased by 18 times.

  • The AI flops we've increased by 45 times, and yet the amount of power is only 10 times.

  • This is 100 kilowatts, and that is 10 kilowatts.
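
Taking the multipliers above at face value, the throughput-per-watt gain works out to 4.5x (my arithmetic from the quoted figures):

```python
# Generation-over-generation multipliers as quoted above.
gpus_per_domain = 9     # 72 GPUs vs. 8
bandwidth = 18          # NVLink bandwidth increase
ai_flops = 45           # AI compute increase
power = 10              # power increase (100 kW vs. 10 kW)

# Throughput per watt improves by the compute gain over the power gain.
perf_per_watt = ai_flops / power
print(perf_per_watt)  # 4.5
```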

  • This is one GPU.

  • Ladies and gentlemen, DGX GPU.

  • The back of this GPU is the NVLink spine.

  • The NVLink spine is 5,000 wires, two miles, and it's right here.
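
Read literally, and assuming the two miles is the total copper across all 5,000 wires (my assumption, not stated in the talk), the average wire is roughly 0.64 meters:

```python
# NVLink spine figures as quoted: 5,000 wires, two miles of copper in total.
MILES_TO_METERS = 1609.344
total_m = 2 * MILES_TO_METERS      # ~3218.7 m of copper
wires = 5000
avg_wire_m = total_m / wires
print(round(avg_wire_m, 3))  # 0.644
```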

  • This is an NVLink spine, and it connects 72 GPUs to each other.

  • This is an electromechanical miracle.

  • The transceivers make it possible for us to drive the entire length in copper.

  • And as a result, this switch, the NVLink switch, driving the NVLink spine in copper makes it possible for us to save 20 kilowatts in one rack.

  • 20 kilowatts can now be used for processing.

  • Just an incredible achievement.

  • We have code names in our company, and we try to keep them very secret.

  • Oftentimes, most of the employees don't even know.

  • But our next-generation platform is called Rubin.

  • The Rubin platform, I'm not going to spend much time on it.

  • I know what's going to happen.

  • You're going to take pictures of it, and you're going to go look at the fine print, and feel free to do that.

  • So we have the Rubin platform, and one year later, we'll have the Rubin Ultra platform.

  • All of these chips that I'm showing you here are all in full development, 100% of them.

  • And the rhythm is one year, at the limits of technology, all 100% architecturally compatible.

  • So this is basically what NVIDIA is building.

  • A robotic factory is designed with three computers.

  • Train the AI on NVIDIA AI.

  • You have the robot running on the PLC systems for orchestrating the factories.

  • And then you, of course, simulate everything inside Omniverse.

  • Well, the robotic arm and the robotic AMRs are also the same way, three computer systems.

  • The difference is the two Omniverses will come together.

  • So they'll share one virtual space.

  • When they share one virtual space, that robotic arm will be inside the robotic factory.

  • And again, three computers, and we provide the computer, the acceleration layers, and pre-trained AI models.

  • Well, I think we have some robots that we'd like to welcome.

  • Here we go.

  • About my size.

  • And we have some friends to join us.

  • So the future of robotics is here, the next wave of AI.

  • And of course, Taiwan builds computers with keyboards.

  • You build computers for your pocket.

  • You build computers for data centers in the cloud.

  • In the future, you're going to build computers that walk and computers that roll around.

  • And so these are all just computers.

  • And as it turns out, the technology is very similar to the technology of building all of the other computers that you already build today.

  • So this is going to be a really extraordinary journey for us.
