  • AI breakthroughs have been a question of when, not if.

  • Google unveiling long-awaited new details about its large language model, Gemini.

  • Claude 3 is arguably now one of the most powerful AI models out there, if not the most powerful.

  • Preview, if you will, for its ChatGPT, GPT-5.

  • I expect it to be a significant leap forward.

  • But what if that core assumption, that models can only keep getting bigger and better, is now fizzling?

  • Is there really a slowing in progress?

  • Because that wasn't expected.

  • It could spell cracks in the NVIDIA bull story.

  • We're increasing GPUs at the same rate, but we're not getting the intelligence improvements out of it.

  • Calling into question the gigantic ramp in spending from Amazon, Google, Microsoft.

  • A rush for tangible use cases and a killer app.

  • I'm Deirdre Bosa with the TechCheck take.

  • Has AI progress peaked?

  • Call it performance anxiety.

  • The growing concern in Silicon Valley that AI's rapid progression is losing steam.

  • We've really slowed down in terms of the amount of improvement.

  • Reached a ceiling and is now slowing down.

  • In the pure model competition, the question is, when do we start seeing an asymptote to scale?

  • Hitting walls that even the biggest players, from OpenAI to Google, can't seem to overcome.

  • Progress didn't come cheap.

  • Billions of dollars invested to keep pace, banking on the idea that returns, they would be outsized too.

  • But no gold rush is guaranteed to last.

  • And early signs of struggle are now bubbling up at major AI players.

  • The first indication that things are turning, the lack of progression between models.

  • I expect that the delta between five and four will be the same as between four and three.

  • Each new generation of OpenAI's flagship GPT models, the ones that power ChatGPT, they have been exponentially more advanced than the last in terms of their ability to understand, generate and reason.

  • But according to reports, that's not happening anymore.

  • There was talk prior to now that these companies were just going to train on bigger and bigger and bigger systems.

  • If it's true that it's topped, that's not going to happen anymore.

  • OpenAI has led the pack in terms of advancements.

  • Its highly anticipated next model is called Orion.

  • It was expected to be a groundbreaking system that would represent a generational leap in bringing us closer to AGI, or artificial general intelligence.

  • But that initial vision, it's now being scaled back.

  • Employees who have used or tested Orion told The Information that the increase in quality was far smaller than the jump between GPT-3 and GPT-4, and that they believed Orion isn't reliably better than its predecessor at handling certain tasks like coding.

  • Put in perspective, remember, ChatGPT came out at the end of 2022.

  • So now it's been, you know, close to two years.

  • And so you had initially a huge ramp up in terms of what all these new models can do.

  • And what's happening now is you've really trained all these models.

  • And so the performance increases are kind of leveling off.

  • The same thing may be happening at other leading AI developers.

  • The startup Anthropic, it could be hitting roadblocks to improving its most powerful model, Opus, quietly removing wording from its website that promised a new version of Opus later this year.

  • And sources telling Bloomberg that the model didn't perform better than the previous versions as much as it should, given the size of the model and how costly it was to build and run.

  • These are startups focused on one thing, the development of large language models, with billions of dollars in backing from names like Microsoft and Amazon and venture capital.

  • But even Google, which has enough cash on hand to buy an entire country, it may also be seeing progress plateau.

  • The current generation of LLM models are roughly, you know, a few companies have converged at the top, but I think they're all working on their next versions, too.

  • I think the progress is going to get harder.

  • When I look at '25, the low hanging fruit is gone.

  • You know, the curve, the hill is steeper.

  • Its principal AI model, Gemini, is already playing catch-up to OpenAI's and Anthropic's.

  • Now, Bloomberg reports, quoting sources, that an upcoming version is not living up to internal expectations.

  • That's to make you think, OK, are we going to go through a period here where we're going to need to digest all this hundreds of billions of dollars we've spent on AI over the last couple of years, especially if revenue forecasts are getting cut or not changing, even though you're increasing the spending you're doing on AI.

  • The trend has even been confirmed by one of the most widely respected and pioneering AI researchers, Ilya Sutskever, who co-founded OpenAI and raised a billion dollar seed round for his new AI startup.

  • As you scale up pre-training, a lot of the low hanging fruit was plucked.

  • And so it makes sense to me that you're seeing a deceleration in the rate of improvement.

  • But not everyone agrees the rate of progress has peaked.

  • Foundation model pre-training scaling is intact and it's continuing.

  • As you know, this is an empirical law, not a fundamental physical law.

  • But the evidence is that it continues to scale.

  • Nothing I've seen in the field is out of character with what I've seen over the last 10 years or leads me to expect that things will slow down.

  • There's no evidence that the scaling laws, as they're called, have begun to stop.

  • They will eventually stop, but we're not there yet.

  • And even Sam Altman posting simply, there is no wall.

  • OpenAI and Anthropic, they didn't respond to requests for comment.

  • Google says it's pleased with its progress on Gemini and has seen meaningful performance gains in capabilities like reasoning and coding.

  • Let's get to the why.

  • If progress is, in fact, plateauing, it has to do with scaling laws.

  • The idea that adding more compute power and more data guarantees better models to an infinite degree.

  • In recent years, Silicon Valley has treated this as religion.

  • One of the properties of machine learning, of course, is that the larger the brain, the more data we can teach it, the smarter it becomes.

  • We call it the scaling law.

  • There's every evidence that as we scale up the size of the models, the amount of training data, the effectiveness, the quality, the performance of the intelligence improves.

  • In other words, all you need to do is buy more NVIDIA GPUs, find more articles or YouTube videos or research papers to feed the models, and it's guaranteed to get smarter.
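The "more GPUs plus more data equals smarter" claim is usually written as an empirical power law in model size and training tokens. A minimal sketch, using constants in the spirit of published Chinchilla-style fits (treat them as illustrative, not authoritative):

```python
# Hedged illustration of an empirical scaling law: loss falls as a power
# law in parameter count N and training tokens D. The constants below
# loosely echo published Chinchilla-style fits; they are illustrative,
# not a prediction about any particular model.

def predicted_loss(n_params: float, n_tokens: float,
                   e: float = 1.69, a: float = 406.4, b: float = 410.7,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """Loss estimate of the form L = E + A/N^alpha + B/D^beta."""
    return e + a / n_params**alpha + b / n_tokens**beta

small = predicted_loss(1e9, 2e10)   # ~1B params, ~20B tokens
large = predicted_loss(1e11, 2e12)  # 100x more on both axes

# Bigger model + more data => lower predicted loss, but with diminishing
# returns: the curve flattens toward the irreducible floor E as you scale.
assert large < small
```

This is exactly why "they're empirical regularities, not laws of the universe" matters: the power-law form is fitted to observed runs, and nothing in it guarantees the trend holds outside the regime it was fitted on.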

  • But recent developments suggest that may be more theory than law.

  • People call them scaling laws.

  • That's a misnomer, like Moore's law is a misnomer.

  • Moore's law, scaling laws, they're not laws of the universe.

  • They're empirical regularities.

  • I am going to bet in favor of them continuing, but I'm not certain of that.

  • The hitch may be data.

  • It's a key component of that scaling equation, but there's only so much of it in the world.

  • And experts have long speculated that companies would eventually hit what is called the data wall.

  • That is, run out of it.

  • If we do nothing and if, you know, at scale, we don't continue innovating, we're likely to face similar bottlenecks in data like the ones that we see in computational capability and chip production or power or data center build outs.

  • So AI companies have been turning to so-called synthetic data, data created by AI, fed back into AI.

  • But that could create its own problem.

  • AI is an industry which is garbage in, garbage out.

  • So if you feed into these models a lot of AI gobbledygook, then the models are going to spit out more AI gobbledygook.
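The feedback loop described here can be caricatured in a few lines. If each "generation" trains only on samples drawn from the previous generation's output, the pool of distinct examples can never grow, and in practice it shrinks. A toy sketch (resampling a dataset stands in for training on model output; the numbers are invented for illustration):

```python
import random

# Toy "model collapse" sketch: each generation trains on data sampled
# (with replacement) from the previous generation's output. Because a
# sample can only contain items already in the pool, the number of
# distinct examples is non-increasing, and duplicates crowd out variety.

random.seed(0)
data = list(range(1000))  # generation 0: 1000 distinct "documents"
unique_counts = [len(set(data))]

for _ in range(5):
    data = random.choices(data, k=len(data))  # sample with replacement
    unique_counts.append(len(set(data)))

print(unique_counts)  # diversity never increases across generations
assert all(b <= a for a, b in zip(unique_counts, unique_counts[1:]))
```

Real training loops are far more complex than this resampling stand-in, but the direction of the effect is the same: low-diversity input begets low-diversity output, which is the "garbage in, garbage out" worry in miniature.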

  • The Information reports that Orion was trained in part on AI-generated data produced by other OpenAI models and that Google has found duplicates of some data in the sets used to train Gemini.

  • The problem?

  • Low quality data, low quality performance.

  • This is what a lot of the research that's focused on synthetic data is focused on.

  • Right.

  • So if you don't do this well, you don't get much more than you started with.

  • But even if the rate of progress for large language models is plateauing, some argue that the next phase, post-training or inference, will require just as much compute power.

  • Databricks CEO Ali Ghodsi says there's plenty to build on top of the existing models.

  • I think lots and lots of innovation is still left on the AI side.

  • Maybe those who expected all of our ROI to happen in 2023, 2024, maybe they, you know, they should readjust their horizons.

  • The place where the industry is squeezing to get that progress has shifted from pre-training, which is, you know, lots of Internet data, maybe trying synthetic data on huge clusters of GPUs, toward post-training and test-time compute, which is more about, you know, smaller amounts of data, but it's very high quality, very specific.

  • Feeding data, testing different types of data, adding more compute.

  • That all happens during the pre-training phase when models are still being built before it's released to the world.

  • So now companies are trying to improve models in the post-training phase.

  • That means making adjustments and tweaks to how it generates responses to try and boost its performance.

  • And it also means a whole new crop of AI models designed to be smarter in this post-training phase.

  • OpenAI just announcing an improved model, their o1 model.

  • They say it has better reasoning.

  • This had been reportedly called Strawberry.

  • So there's been a lot of buzz around it.

  • They're called reasoning models, able to think before they answer, and the newest leg in the AI race.

  • We know that thinking is oftentimes more than just one shot.

  • And thinking requires us to maybe do multiple plans, multiple potential answers that we choose the best one from.

  • Just like when we're thinking, we might reflect on the answer before we deliver the answer.

  • Reflection.

  • We might take a problem and break it down into step by step by step.

  • Chain of thought.
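The "multiple potential answers, choose the best one" idea sketched in these quotes is often implemented as best-of-n sampling with a verifier. A hedged toy version follows; the candidate generator and the verifier here are deterministic stand-ins, not any lab's actual method (real systems sample candidates stochastically from an LLM and score them with a learned verifier or reward model):

```python
# Toy best-of-n "reasoning" loop: generate several candidate answers,
# score each with a verifier, return the highest-scoring one.

def candidate_answers(a: int, b: int) -> list[int]:
    """Stand-in generator: a spread of guesses around the true sum."""
    return [a + b + delta for delta in (-2, -1, 0, 1, 2)]

def verifier(a: int, b: int, candidate: int) -> float:
    """Stand-in verifier: an exact check here; in practice this is a
    learned model that only estimates answer quality."""
    return 1.0 if candidate == a + b else 0.0

def best_of_n(a: int, b: int) -> int:
    candidates = candidate_answers(a, b)
    # "Think before answering": score every candidate, keep the best.
    return max(candidates, key=lambda c: verifier(a, b, c))

print(best_of_n(17, 25))  # -> 42
```

The design point is that extra compute is spent at answer time, on generating and scoring candidates, rather than on making the underlying model bigger, which is why this phase is described as needing just as much compute even if pre-training gains flatten.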

  • If AI acceleration is tapped out, what's next?

  • The search for use cases becomes urgent.

  • Just in the last multiple weeks, there's been a lot of debate: have we hit the wall with scaling laws?

  • It's actually good to have some skepticism, some debate, because that I think will motivate, quite frankly, more innovation.

  • Because we've barely scratched the surface of what existing models can do.

  • The models are actually so powerful today and we've not really utilized them to anywhere close to the level of capability that they actually offer to us and bring true business transformation.

  • OpenAI, Anthropic and Google, they're making some of the most compelling use cases yet.

  • OpenAI is getting into the search business.

  • Anthropic unveiling a new AI tool that can analyze your computer screen and take over to act on your behalf.

  • One of my favorite applications is NotebookLM, you know, this Google application that came out.

  • I used the living daylights out of it just because it's fun.

  • But the next phase, the development and deployment of AI agents, that's expected to be another game changer for users.

  • I think we're going to live in a world where there are going to be hundreds of millions and billions of different AI agents, eventually probably more AI agents than there are people in the world.

  • I spoke with them on a call.

  • They said, Jim, you better start thinking about how to use the term agentic when you're out there, because agentic is the term.

  • Benioff's been using it for a while.

  • He's very agentic.

  • You can have health agents and banking agents and product agents and ops agents and sales agents and support agents and marketing agents and customer experience agents and analytics agents and finance agents and HR agents.

  • And it's all built on this Salesforce platform.

  • Meaning it's all powered by software.

  • Everybody's talking about when is AI going to kick in for software?

  • It's happening now.

  • Well, it has.

  • It's not a future thing.

  • It's now.

  • It's something the stock market is already taking note of.

  • Software stocks seeing their biggest outperformance versus semis in years.

  • And it's key for NVIDIA, which has become the most valuable company in the world and has powered broader market gains.

  • It's hard for me to imagine that NVIDIA can grow as fast as people are modeling.

  • And I see that probably as a problem at some point when you get into next year and NVIDIA shipping Blackwell in volume, which is their latest chip.

  • And then the vendors can say, OK, we're getting what we need.

  • And now we just need to digest all this money that we've spent because it's not scaling as fast as we thought in terms of the improvements.

  • The sustainability of the AI trade hinges on this debate.

  • OpenAI, xAI, Meta, Anthropic and Google, they're all set to release new models over the next 18 months.

  • Their rate of progress, or lack of it, could redefine the stakes of the race.
