  • When we launched the Core Ultra with Meteor Lake, it also introduced this next generation of chiplet-based design.

  • And Lunar Lake is the next step forward.

  • And I'm happy to announce it today.

  • Lunar Lake is a revolutionary design.

  • It has new IP blocks for CPU, GPU, and NPU.

  • It'll power the largest number of next-gen AI PCs in the industry.

  • We already have over 80 designs with 20 OEMs that will start shipping in volume in Q3.

  • First, it starts with a great CPU.

  • And with that, this is our next-generation Lion Cove processor, which has significant IPC improvements and delivers that performance while also delivering dramatic power-efficiency gains.

  • So it's delivering Core Ultra performance at nearly half the power that we had in Meteor Lake, which was already a great chip.

  • The GPU is also a huge step forward.

  • It's based on our next-generation Xe2 IP.

  • And it delivers 50% more graphics performance.

  • And literally, we've taken a discrete graphics card and we've shoved it into this amazing chip called Lunar Lake.

  • Alongside this, we're delivering strong AI compute performance with our enhanced NPU, up to 48 TOPS of performance.

  • And as you heard Satya talk about, our collaboration with Microsoft and Copilot Plus, along with 300 other ISVs, means incredible software support, more applications than anyone else.

  • Now, some say that the NPU is the only thing that you need.

  • And simply put, that's not true.

  • And now, having engaged with hundreds of ISVs, most of them are taking advantage of CPU, GPU, and NPU performance.

  • In fact, our new Xe2 GPU is an incredible on-device AI performance engine.

  • Only 30% of the ISVs we've engaged with are using only the NPU.

  • The GPU and the CPU in combination deliver extraordinary performance.

  • The GPU: 67 TOPS with our XMX performance, 3.5x the gains over the prior generation.

  • And since there's been some talk about this other X Elite chip coming out and its superiority to x86,

  • I just want to put that to bed right now.

  • Ain't true.

  • Lunar Lake running in our labs today outperforms the X Elite on the CPU, on the GPU, and on AI performance, delivering a stunning 120 TOPS of total platform performance.
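The 120 TOPS platform figure lines up with the per-engine numbers quoted earlier in the talk; a quick cross-check, where the CPU's small share is inferred as the remainder rather than stated explicitly:

```python
# Cross-check of the quoted total-platform TOPS. The NPU and GPU figures
# come from the talk; the CPU's contribution is inferred as the remainder.
NPU_TOPS = 48
GPU_TOPS = 67
PLATFORM_TOPS = 120

cpu_tops = PLATFORM_TOPS - NPU_TOPS - GPU_TOPS
print(f"Implied CPU contribution: {cpu_tops} TOPS")  # → 5 TOPS
```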

  • And it's compatible.

  • So you don't have any of those compatibility issues.

  • This is x86 at its finest.

  • Every enterprise, every customer, every historical driver and capability simply works.

  • This is a no-brainer.

  • Everyone should upgrade.

  • And the final nail in the coffin of this discussion: some say x86 can't win on power efficiency.

  • Lunar Lake busts this myth as well.

  • This radical new SoC architecture and design delivers unprecedented power efficiency, up to 40% lower SoC power than Meteor Lake, which was already very good.

  • Customers are looking for high-performance, cost-effective gen AI training and inferencing solutions.

  • And they've started to turn to alternatives like Gaudi.

  • They want choice.

  • They want open software and hardware solutions, and time-to-market solutions at dramatically lower TCOs.

  • And that's why we're seeing customers like Naver, Airtel, Bosch, Infosys, and Seekr turning to Gaudi, too.

  • And we're putting these pieces together.

  • We're standardizing through the open-source community and the Linux Foundation.

  • We've created the Open Platform for Enterprise AI to make Xeon and Gaudi a standardized AI solution for workloads like RAG.
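A RAG workload of the kind being standardized here can be sketched in a few lines. This is an illustrative toy, not code from the Open Platform for Enterprise AI: the bag-of-words retriever and prompt format are stand-ins for a real embedding model and LLM call.

```python
# Toy sketch of a RAG flow: retrieve the most relevant on-prem document,
# then splice it into the prompt handed to an LLM. The retriever here is
# a bag-of-words stand-in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a word-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Pick the private document most similar to the query."""
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

def build_prompt(query: str, context: str) -> str:
    """Ground the model's answer in the retrieved context."""
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Private, confidential documents that never leave the data center.
docs = [
    "Chest X-ray report: hazy opacity noted in the lower left lung field.",
    "Q3 revenue grew on strong data center demand.",
]
query = "What does the chest X-ray show?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)
```

The key property, as the demo emphasizes, is that the private data stays on-prem: only the assembled prompt reaches the (locally hosted, open-source) model.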

  • So let me start with maybe a quick medical query.

  • So this is Xeon and Gaudi working together on a medical query.

  • So it's a lot of private, confidential, on-prem data being combined with an open-source LLM.

  • Exactly.

  • OK, very cool.

  • All right, so let's see what our LLM has to say.

  • So you can see a typical LLM, we're getting the text answer here, standard, but it's a multimodal LLM.

  • So we also have this great visual here of the chest X-ray.

  • I'm not good at reading X-rays, so what does this say?

  • I'm not great either.

  • But the nice thing about, and I'm going to spare you my typing skills,

  • I'm going to do a little cut and pasting here.

  • The nice thing about this multimodal LLM is we can actually ask it questions to further illustrate what's going on here.

  • So this LLM is actually going to analyze this image and tell us a little bit more about this hazy opacity, such as it is.

  • You can see here it's saying it's down here in the lower left.

  • So once again, just a great example of a multimodal LLM.

  • And as you see, Gaudi is not just winning on price, it's also delivering incredible TCO and incredible performance.

  • And that performance is only getting better with Gaudi 3.

  • The Gaudi 2 architecture is the only MLPerf-benchmarked alternative to H100s for LLM training and inferencing, and Gaudi 3 only makes it stronger.

  • We're projected to deliver 40% faster time-to-train than H100s, and 1.5x versus H200s, and faster inferencing than H100s, delivering 2.3x the performance per dollar in throughput versus H100s.

  • And in training, Gaudi 3 is expected to deliver 2x the performance per dollar.

  • And this idea is simply music to our customers' ears.

  • Spend less and get more.

  • It's highly scalable and uses open industry standards like Ethernet, which we'll talk more about in a second.

  • We're also supporting all of the expected open-source frameworks like PyTorch and vLLM.

  • And hundreds of thousands of models are now available on Hugging Face for Gaudi.

  • And with our developer cloud, you can experience Gaudi capabilities firsthand, easily accessible and readily available.

  • But of course, with this, the entire ecosystem is lining up behind Gaudi 3.

  • And it's my pleasure today to show you the wall of Gaudi 3.

  • Today, we're launching Xeon 6 with E-cores.

  • And we see this as an essential upgrade for the modern data center: high core count, high density, exceptional performance per watt.

  • It's also important to note that this is our first product on Intel 3.

  • And Intel 3 is the third of our five nodes in four years as we continue our march back to process technology competitiveness and leadership next year.

  • I'd like you to fill this rack with the equivalent compute capability of the Gen 2 using Gen 6, OK?

  • Give me a minute or two.

  • I'll make it happen.

  • OK, get with it.

  • Come on.

  • Hop to it, buddy.

  • And it's important to think about the data centers.

  • Every data center provider I know today is being crushed by how they upgrade, how they expand their footprint, and the space, the flexibility.

  • For high-performance computing, they have more demands for AI in the data center.

  • And having a processor with 144 cores, versus 28 cores for Gen 2, gives them the ability both to condense and to attack these new workloads, with performance and efficiency that was never seen before.
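The consolidation claim above can be sanity-checked with simple arithmetic. The per-socket core counts are from the talk; the servers-per-rack figure is an assumed, illustrative configuration:

```python
# Back-of-the-envelope check of the rack consolidation enabled by the
# core-count jump. Core counts are from the talk; 20 two-socket servers
# per rack is an assumed configuration for illustration only.
GEN2_CORES_PER_SOCKET = 28     # 2nd Gen Xeon Scalable
XEON6_CORES_PER_SOCKET = 144   # Xeon 6 with E-cores

ratio = XEON6_CORES_PER_SOCKET / GEN2_CORES_PER_SOCKET
print(f"Per-socket core ratio: {ratio:.1f}x")  # → 5.1x

gen2_sockets_per_rack = 2 * 20                 # assumption
gen2_cores_per_rack = gen2_sockets_per_rack * GEN2_CORES_PER_SOCKET
xeon6_sockets_needed = -(-gen2_cores_per_rack // XEON6_CORES_PER_SOCKET)  # ceil
print(f"Xeon 6 sockets to match one Gen 2 rack: {xeon6_sockets_needed}")  # → 8
```

Matching a whole rack's core count with a handful of sockets is what makes the multi-to-one consolidation demonstrated next plausible.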

  • So Chuck, are you done?

  • I'm done.

  • I wanted a few more reps, but you said equivalent.

  • You can put a little bit more in there.

  • OK, so let me get it.

  • That rack has become this.

  • And what you just saw was E-cores delivering this distinct advantage for cloud-native and hyperscale workloads: 4.2x in media transcode, 2.6x performance per watt.

  • And from a sustainability perspective, this is just game-changing.

  • You know, with a three-to-one rack consolidation over a four-year cycle, just one 200-rack data center would save 80,000 megawatt-hours of energy.

  • And Xeon is everywhere.

  • So imagine the benefits that this could have across the thousands and tens of thousands of data centers.

  • In fact, if just 500 data centers were upgraded with what we just saw, the savings would power almost 1.4 million Taiwan households for a year, take 3.7 million cars off the road for a year, or power Taipei 101 for 500 years.

  • And by the way, this will only get better.

  • And if 144 cores is good, well, let's put two of them together and have 288 cores.

  • So later this year, we'll be bringing the second generation of our Xeon 6 with E-cores, a whopping 288 cores.

  • And this will enable a stunning six-to-one consolidation ratio, a better claim than anything we've seen in the industry.
