DeepSeek R1 was released just a few days ago and it has sent shockwaves through the AI industry.
R1 is an AI model that has the ability to think, just like OpenAI's cutting-edge O1 and O3 models.
But here's the thing, it's completely open source and open weights.
DeepSeek, a small Chinese company, gave all of it away for free and they even detailed how to reproduce it.
But that's not even the craziest part.
It was trained for just $5 million, as compared to the tens and hundreds of millions of dollars that most people in the AI industry thought was required to train a model of this caliber.
And it has sent everyone in the AI industry scrambling to understand the ramifications.
DeepSeek has been called everything from the downfall of major US tech companies like OpenAI and Meta, to the greatest gift to humanity, to a Chinese psyop meant to shake the US to its core.
This story is wild, so buckle up.
So just about a week ago, President Trump, Sam Altman, the founder and CEO of OpenAI, the founder of Oracle, and many others got together to make the announcement about Project Stargate.
That is a $500 billion investment in AI infrastructure built in the US.
That is on top of the billions and potentially trillions that have already been spent on GPUs, mostly coming from NVIDIA.
Then, right after that, Mark Zuckerberg doubled down on how much his company, Meta, is going to spend on AI infrastructure.
Also stating that they are going to continue to spend many billions of dollars building out energy infrastructure and AI infrastructure.
So the theme amongst the biggest tech companies in the world is spend as much as we can to win at AI.
And then something happened.
On January 20th, 2025, a small Chinese research firm called DeepSeek released DeepSeek R1, a completely open-source, open-weights AI model that has the ability to think, also known as test-time compute, and that is directly competitive with, if not slightly better than, the O1 model by OpenAI that cost hundreds of millions of dollars to train.
And just like that, the AI world was flipped upside down.
All of a sudden, we had a completely open-source version of a state-of-the-art model that we didn't think we were going to have so soon, let alone one that is absolutely open source and essentially free.
The initial reaction was extremely strong.
I've made multiple videos about it.
I'll drop them down in the description below.
People looked at this and were stunned.
The biggest names in the AI industry realized we now had a completely open-source, state-of-the-art model.
And as everybody was taking this in, super excited that we could play around with it and reproduce it, suddenly the tone shifted.
In the technical paper that was released alongside the model, it was noted that it was trained for just $5 million.
That is a fraction of what every other state-of-the-art model costs to train.
Now think about what this means.
Meta, Microsoft, OpenAI, and all of the magnificent seven, basically the biggest seven tech companies in the world, have been investing trillions of dollars building out AI infrastructure.
And then all of a sudden, this little Chinese company comes along, open sources a model that's comparable to the best models out there.
And not only did they make it completely free, but they said it only costs $5 million.
And then all of a sudden, a lot of analysts are looking at these big companies spending billions of dollars per year and thinking, do we really need that?
And a lot of people are pointing at these big companies saying, you guys have invested so much money and it wasn't even necessary.
Now I will tell you, I do not agree with that whatsoever.
But that is a theme going on right now in the AI industry.
And then somebody on Twitter asked, how is DeepSeek going to make money?
Because they're giving it away for free.
How are they actually going to make money?
And the API endpoint to actually run the model is really, really cheap.
And you don't even need it.
You can run it on your own hardware.
And then this tweet went viral.
DeepSeek's holding company, and he names the Chinese parent firm here, is a quant company, meaning they are mathematicians tasked with building trading algorithms simply to make money.
That's it.
For many years already, super smart guys with top math backgrounds happen to own a lot of GPUs for trading and mining purposes.
And DeepSeek is their side project for squeezing those GPUs.
Essentially, this is not even the main function of the company.
This was a side project.
So a handful of smart people got together, figured out how to make a state-of-the-art model incredibly cheaply, upended the entire AI industry, and it was their side project.
That's insane to think about.
And this went viral.
And the memes were strong.
Let me show you a few of the reactions from people in the industry.
So here's one from Stimp for Satoshi.
Sam spent more on this (referring to an incredible automobile, which I know costs multiple millions of dollars, with Sam Altman driving it) than DeepSeek did to train the model that killed OpenAI.
Now, again, I don't really believe this.
I will explain what I think is going on in a little bit.
Here we have Neil Khosla, son of Vinod Khosla, saying, DeepSeek is a CCP state psyop, plus economic warfare to make American AI unprofitable.
They are faking that the cost was low to justify setting the price low, hoping everyone switches to it to damage AI competitiveness in the US.
Don't take the bait.
Now, there was a community note saying there's zero evidence of this.
And that wasn't even the craziest take.
In Davos, Alexandr Wang, the CEO of ScaleAI, basically called out DeepSeek, saying no, they actually have many more GPUs than they're telling us, simply because there is a US export ban that keeps us from exporting our cutting-edge chips to China at scale.
And so in the research paper, if they admitted that they had a bunch of GPUs, obviously, the US would be pretty pissed.
And in this clip, Alexandr Wang talks about how DeepSeek probably has 50,000 H100s, which are NVIDIA's top-of-the-line GPUs, and how they can't talk about it because it goes against the export controls that the US has in place.
And maybe that's true, although, again, remember, everything is open sourced, and they, DeepSeek, really went into deep detail on how they actually produced this model for so cheap.
And the company Hugging Face is reproducing it right now.
Now let me show you some posts from Emad, the founder of Stability AI, who basically ran the numbers and figured out that yeah, what they're saying is actually legit.
DeepSeek are not faking the cost of the run.
It's pretty much in line with what you'd expect, given the data structure, active parameters and other elements, and other models trained by other people.
You can run it independently at the same cost.
It's a good lab working hard.
Now, that wasn't enough.
He didn't put any numbers in that post, but of course, he followed up and did.
Check this out.
So he basically says, for those who want the numbers, here it is: optimized H100s could do it for less than $2.5 million.
And he actually ran the math to figure it out.
Now, I'm not going to go through it all; it's a bit too technical for this video.
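For the curious, here is a rough sketch of the kind of back-of-envelope estimate being discussed. The active-parameter and token counts below are the figures DeepSeek itself reported for the V3 base model; the GPU throughput, utilization, and rental price are my own assumptions, so treat the result as a ballpark, not a reproduction of Emad's exact math.

```python
# Back-of-envelope training-cost estimate (a sketch; the throughput,
# utilization, and $/GPU-hour figures are assumptions, not official).

ACTIVE_PARAMS = 37e9     # DeepSeek-V3: ~37B active parameters per token (MoE)
TRAIN_TOKENS = 14.8e12   # ~14.8 trillion training tokens
FLOPS_PER_TOKEN = 6 * ACTIVE_PARAMS  # ~6N FLOPs/token, a standard rule of thumb

H100_PEAK_FLOPS = 1e15   # ~1,000 TFLOPS BF16 dense, rounded
UTILIZATION = 0.40       # assume 40% model-FLOPs utilization
COST_PER_GPU_HOUR = 2.00 # assumed H100 rental price, USD

total_flops = FLOPS_PER_TOKEN * TRAIN_TOKENS
gpu_hours = total_flops / (H100_PEAK_FLOPS * UTILIZATION) / 3600
cost = gpu_hours * COST_PER_GPU_HOUR

print(f"~{gpu_hours/1e6:.1f}M H100-hours, ~${cost/1e6:.1f}M")
# → ~2.3M H100-hours, ~$4.6M
```

Even with these rough assumptions, the answer lands in the low single-digit millions of dollars, the same ballpark as the roughly $5 million figure in the paper, which is Emad's point: the claimed cost is consistent with the model's size and token count.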
And again, now all of the focus is back on the major tech companies, Anthropic, Meta, OpenAI, Microsoft, who have raised and spent billions and billions of dollars to build out AI infrastructure, only to have the rug pulled out from under them by this tiny Chinese company.
Listen to this.
DeepSeek goes mega viral and they can handle the demand on their two Chromebooks they have to use for inference.
Meanwhile, Anthropic cannot handle the load of their paying customers with billions in funding.
Do I get this right?
And that seems to be the sentiment across the board.
Here's another one.
I've made over 200,000 requests to the DeepSeek API in the last few hours, zero rate limiting and the whole thing cost me like 50 cents.
Bless the CCP, OpenAI could never.
Now here's the thing.
We've been talking on this channel a lot about test time compute.
A lot of the scaling that's happening in AI right now is not at pre-training, not in that $5 million it costs to actually build out the model.
But since these models can now think, and the more thinking they do the better the results, that thinking is actually just compute.
It's using compute.
And so what's interesting is that even at test time, with people hitting the API 200,000 times with zero rate limiting at extremely low cost, unless they are just losing tons of money and have a bunch of GPUs we don't know about, they've figured out something about efficiency that the U.S. companies have not.
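To make "thinking is just compute" concrete, here is a tiny sketch of how reasoning tokens translate directly into inference cost. The per-token prices are hypothetical placeholders for illustration, not DeepSeek's or anyone's actual rate card; the billing convention (reasoning tokens charged at the output rate) is the common one among reasoning-model APIs.

```python
# Sketch: reasoning ("thinking") tokens are billed like output tokens,
# so more test-time compute means a proportionally bigger bill.
# Prices here are illustrative placeholders, not any provider's real rates.

PRICE_IN = 0.50 / 1e6   # assumed $ per input token
PRICE_OUT = 2.00 / 1e6  # assumed $ per output/reasoning token

def inference_cost(prompt_tokens: int, reasoning_tokens: int, answer_tokens: int) -> float:
    """Cost of one request; reasoning tokens billed at the output rate."""
    return prompt_tokens * PRICE_IN + (reasoning_tokens + answer_tokens) * PRICE_OUT

quick = inference_cost(1_000, 0, 500)      # no visible chain of thought
deep = inference_cost(1_000, 16_000, 500)  # long chain of thought

print(f"quick: ${quick:.4f}  deep: ${deep:.4f}  ratio: {deep/quick:.0f}x")
# → quick: $0.0015  deep: $0.0335  ratio: 22x
```

The point of the toy numbers: a model that thinks for 16,000 tokens costs roughly twenty times as much to serve as one that answers directly, which is why inference efficiency, not just training cost, is where the scaling battle is now being fought.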
Alexandr Wang follows up with a post: DeepSeek is a wake-up call for America, but it doesn't change the strategy.
The USA must out-innovate and race faster, as we have done in the entire history of AI, and tighten export controls on chips so that we can maintain future leads.
Every major breakthrough in AI has been American.
And continuing, China's DeepSeek could represent the biggest threat to U.S. equity markets, as the company seems to have built a groundbreaking AI model at an extremely low price and without having access to cutting-edge chips, calling into question the utility of the hundreds of billions of dollars' worth of capex being poured into the industry.
So that's a huge, huge claim here.
Now, it's one thing to be able to train the model originally at a very cheap and efficient price, but it's another thing to actually be able to run the inference at an extremely cheap and efficient price.
Now, I said earlier, I don't believe it.
And let me tell you why.
So there are two possibilities.
Let's just assume they were able to figure out how to make this model extremely cheaply.
We're going to be able to replicate that, right?
Everybody wins.
That's the power of open source.
Now, at inference time, at thinking time, let's go down the two paths. Even if this model is able to run inference extremely cheaply, then we get to Jevons paradox: as the cost per unit of any technology decreases, the total usage and the spend actually increase.
We've talked about that on this channel.
That is because as the unit cost of any tech decreases, the number of use cases it can apply to in a positive-ROI way increases dramatically.
That's what we've seen with every tech throughout history.
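That feedback loop can be sketched with a toy demand model. The key is price elasticity: if a 1% price cut grows usage by more than 1%, total spend rises as the unit cost falls. The elasticity value below is an assumption purely for illustration, not a measured property of the AI market.

```python
# Toy Jevons-paradox model: with price-elastic demand (elasticity > 1),
# cutting the unit price of compute *raises* total spend on compute.
# The elasticity value is an assumption for illustration only.

ELASTICITY = 1.5  # assumed: a 1% price cut grows usage by ~1.5%

def tokens_demanded(price_per_token: float, k: float = 1.0) -> float:
    """Constant-elasticity demand curve."""
    return k * price_per_token ** (-ELASTICITY)

def total_spend(price_per_token: float) -> float:
    return price_per_token * tokens_demanded(price_per_token)

base = total_spend(1.0)
cheap = total_spend(0.1)  # unit cost drops 10x

print(f"usage grows {tokens_demanded(0.1)/tokens_demanded(1.0):.1f}x, "
      f"spend grows {cheap/base:.2f}x")
# → usage grows 31.6x, spend grows 3.16x
```

In this toy world a 10x price cut grows usage about 32x, so total spend goes up, not down, which is exactly the argument for why cheaper models don't make the GPU build-out wasted.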
Then let's think about the other path.
They actually do have a bunch of GPUs powering it, and they're simply faking how efficient it is.
Well, first of all, we're going to figure that out because we have AI companies throughout the world replicating DeepSeek R1 right now.
But let's just assume they're doing that.
Then that's fine.
All of this investment is still very valid.
And even if it is really efficient, all of this huge investment by these AI companies in AI infrastructure is still valid because at the end of the day, whoever has the most compute will have the smartest model.
It doesn't matter if it costs $100 per token or a fraction of a penny per token.
The more compute, the better.
Whoever has the smartest AI will win.
And here's Garry Tan, the president of Y Combinator, basically saying the same thing.
And this is in reference to the chart that we just talked about, where it is a big threat to U.S. equity markets.
Do people really believe this?
If training models get cheaper, faster, and easier, the demand for inference, actual real world use of AI, will grow and accelerate even faster, which assures the supply of compute will be used.
Yes, that is the way to think about it.
I agree wholeheartedly.
But not everybody agrees.
Chamath Palihapitiya, billionaire investor, former early Facebook employee, and All-In podcast bestie, has the exact opposite to say.
And he actually broke it down pretty well.
So in his first point, he's saying in the 1% probability that the CCP has all of these chips that they shouldn't, we need to go investigate that.
So that's point one.
Next, he talks about training versus inference.
Now, we are in the era of inference right now.
We always knew this day would come, but it probably surprised many that it would be this weekend.
With a model this cheap, many new products and experiences can now emerge trying to win the hearts and minds of the global populace.
Team USA needs to win here.
To that point, we may still want to export control AI training chips.
We should probably view inference chips differently.
We should want everyone around the world using our solutions over others.
Now I'm going to jump down to point four now, because this is interesting and the part that I really disagree with.
There will be volatility in the stock market as capital markets absorb all of this information and reprice the values of the mag seven.
That's the Magnificent Seven, companies like Tesla and Meta and Microsoft.
So keep that in mind.
Tesla is the least exposed.
The rest are exposed as a direct function of the amount of capex they have publicly announced.
Translating that, it basically means a company's stock might go down because of how much they have invested into AI infrastructure, because if everything's cheaper now, why did they spend so much?
Again, let's look at Jevons paradox.
The cheaper the tech, the more it's going to be used, and the more inference will be used.
Thus, all of that supply of GPUs is going to be used.
NVIDIA is the most at risk for obvious reasons.
That said, markets will love it if Meta, Microsoft, Google, et cetera, can win without having to spend $50 to $80 billion per year.
The markets might love that, but that is not going to be the case.
Whoever has the smartest AI will win.
Eventually, when we reach artificial superintelligence, it is literally a battle of who has the smartest AI.
What does that take?
The most inference, or the most compute in general.
What does that take?
The most chips, the most spend on chips.
If we find really efficient ways to use these chips, great, everybody wins.
But ultimately, the cumulative number of chips, or compute, is really what's going to matter.
He goes on to criticize the U.S., saying that we've been asleep.
And I'll just read this because it's an interesting take.
The innovation from China speaks to how asleep we've been for the past 15 years.
We've been running towards the big-money, shiny-object spending programs, and have thrown hundreds of billions of dollars at a problem versus thinking through the problem more cleverly and using resource constraints as an enabler.
Now, a key concept to know is that if people are faced with bigger restrictions and bigger constraints, they tend to get more creative.
They tend to be able to extract more efficiency out of less, and that's what he's really referring to here.
I think the quote is, constraint is the mother of innovation, or something like that.
But not everybody thinks it's just conspiracy theories and the end of U.S. tech companies.
Yann LeCun, Meta's chief AI scientist, who is a big proponent of open source, has this to say.
To people who see the performance of DeepSeek and think China is surpassing the U.S. in AI, you are reading this wrong.
The correct reading is open-source models are surpassing proprietary ones.
DeepSeek has profited from open research and open source, e.g., PyTorch and Llama from Meta.
They came up with new ideas and built them on top of other people's work.
Because their work is published in open source, everyone can profit from it.
That is the power of open research and open source.
And I could not agree more.
This is a huge win for open source.
This is going to allow many companies to start competing with the closed frontier models by having open-source, state-of-the-art models.
This story is still unfolding.
It has been crazy to watch the AI industry react to the news that essentially everything they thought they knew might actually be changing right now.
So what do you think?
Do you think they have more GPUs than they're letting on?
Do you think that they were able to basically come up with this amazing efficiency with just a handful of people as a side project?
Did China just jump into the lead in AI?
Or is this just a great gift to the world because it is open source?
I'm going to continue following up on this story.
I am enthralled with it.
I am absolutely fascinated by what's happening right now in the world, and I hope I broke it down for you well.
If you enjoyed this video, please consider giving a like and subscribing, and I'll see you in the next one.