multimodal
US / UK
B1 Intermediate
adj. multimodal; multi-peaked
The brain is multimodal; it has so many interacting components.
Video subtitles
Google I/O '24 in under 10 minutes
09:58
- Unlocking knowledge across formats is why we built Gemini to be multimodal from the ground up.
8 Anesthesia Subspecialties Explained 💉 Should You Specialize?
13:31
- You'll also gain skills in acute pain consults, multimodal analgesia, and opioid sparing protocols.
Mastering Claude Code in 30 minutes
28:07
- You mentioned giving an image to Claude Code, which made me wonder if there's some sort of multimodal functionality that I'm not aware of.
- Yeah, so Claude Code is fully multimodal.
A new era of intelligence with Gemini 3
01:57
- Gemini has been multimodal since the beginning.
When Will Artificial Intelligence Replace Radiologists? 🤖
15:49
- These models are increasingly multimodal and sometimes called vision language models, or VLMs, because they can now process images and videos alongside text.
The Insane Things You Can Do With The New GPT-4o Vision
12:31
- Three, multimodal learning.
- GPT-4o Vision uses multimodal learning to understand context and nuances that are not apparent when analyzing text or images separately.
Intel's Lunar Lake AI Chip Event: Everything Revealed in 10 Minutes
09:46
- So you can see a typical LLM, we're getting the text answer here, standard, but it's a multimodal LLM.
- The nice thing about this multimodal LLM is we can actually ask it questions to further illustrate what's going on here.
Nvidia Revealed Project GROOT and Disney Bots at GTC
05:46
- The GR00T model takes multimodal instructions and past interactions as input and produces the next action for the robot to execute.
Nvidia 2024 AI Event: Everything Revealed in 16 Minutes
16:00
- The GR00T model takes multimodal instructions and past interactions as input and produces the next action for the robot to execute.