Over the last year, everyone has been talking about:
Generative AI.
Generative AI.
Generative AI.
Generative AI.
I'm like, "Wait, why am I doing this? I just wait for the AI to do it."
Driving the boom are AI chips.
Some are no bigger than the size of your palm, and the demand for them has skyrocketed.
We originally thought the total market for data center AI accelerators would be about 150 billion, and now we think it's gonna be over 400 billion.
As AI gains popularity, some of the world's tech titans are racing to design chips that run better and faster.
Here's how they work and why tech companies are betting they're the future.
This is "The Tech Behind AI Chips."
This is Amazon's chip lab in Austin, Texas, where the company designs AI chips to use in AWS's servers.
Right out of manufacturing, we get something that is called the wafer.
Ron Diamant is the chief architect of Inferentia and Trainium, the company's custom AI chips.
These are the compute elements or the components that actually perform the computation.
Each of these rectangles, called dice, is a chip.
Each die contains tens of billions of microscopic semiconductors called transistors that communicate inputs and outputs.
Think about one millionth of a centimeter, that's roughly the size of each one of these transistors.
All chips use semiconductors like this.
What makes AI chips different from CPUs, the kind of chip that powers your computer or phone, is how they're packaged.
Say, for example, you want to generate a new image of a cat.
CPUs have a smaller number of powerful cores, the units that make up the chip, that are good at doing a lot of different things. These cores process information sequentially.
So one calculation after another.
So to create a brand new image of a cat, it would only produce a couple pixels at a time.
But an AI chip has more cores that run in parallel, so it can process hundreds or even thousands of those cat pixels all at once.
These cores are smaller and typically do less than CPU cores, but are specially designed for running AI calculations.
But those chips can't operate on their own.
That compute die then gets integrated into a package, and that's what people typically think about when they think about the chip.
Amazon makes two different AI chips, named for the two essential functions of AI models: training and inference.
Training is where an AI model is fed millions of examples of something, images of cats, for instance, to teach it what a cat is and what it looks like.
Inference is when it uses that training to actually generate an original image of a cat.
Training is the most difficult part of this process.
We typically train not on one chip, but rather on tens of thousands of chips.
In contrast, inference is typically done on 1 to 16 chips.
Processing all of that information demands a lot of energy, which generates heat.
And we're able to use this device here in order to force a certain temperature onto the chip,
and that's how we're able to test that the chip is reliable at very low temperatures and very high temperatures.
To help keep chips cool, they're attached to heat sinks, pieces of metal with vents that help dissipate heat.
Once they're packaged, the chips are integrated into servers for Amazon's AWS cloud.
So the training cards will be mounted on this baseboard, eight of them in total, and they are interconnected between them at a very high bandwidth and low latency.
So this allows the different training devices inside the server to work together on the same training job.
So if you are interacting with an AI chatbot, your text, your question will hit the CPUs, and the CPUs will move the data into the Inferentia2 devices, which will collectively perform a gigantic computation, basically running the AI model.
They will respond to the CPU with the result, and the CPU will send the result back to you.
Amazon's chips are just one type competing in this emerging market, which is currently dominated by the biggest chip designer, Nvidia.
Nvidia is still a chip provider to all different types of customers who have to run different workloads.
And then the next category of competitor that you have is the major cloud providers.
Microsoft, Amazon AWS, and Google are all designing their own chips because they can optimize their computing workloads for the software that runs on their cloud to get a performance edge,
and they don't have to give Nvidia its very juicy profit margin on the sale of every chip.
But right now, generative AI is still a young technology.
It's mostly used in consumer-facing products like chatbots and image generators,
but experts say that hype cycles around technology can pay off in the end.
While there might be something like a dot-com bubble for the current AI hype cycle, at the end of the dot-com bubble was still the internet.
And I think we're in a similar situation with generative AI.
The technology's rapid advance means that chips and the software to use them are going to have to keep up.
Amazon says it uses a mixture of its own chips and Nvidia's chips to give customers multiple options.
Microsoft says it's following a similar model.
For those cloud providers, the question is, how much of their computing workloads for AI is gonna be offered through Nvidia versus their own custom AI chips?
And that's the battle that's playing out in corporate boardrooms all over the world.
Amazon released a new version of Trainium in November.
Diamant says he doesn't see the AI boom slowing down anytime soon.
We've been investing in machine learning and artificial intelligence for almost two decades now,
and we're just seeing a step-up in the pace of innovation and the capabilities that these models are enabling for us.
So our investment in AI chips is here to stay, with a significant step-up in capabilities from generation to generation.