distillation
US /ˌdɪstəˈleɪʃən/
・UK /ˌdɪstɪˈleɪʃn/
B2 Upper-Intermediate
n. (u.) uncountable noun: distillation
The chemical was distilled to remove impurities.
Video Subtitles
DeepSeek: Has a Chinese company opened a new chapter for AI?
31:24
- OpenAI potentially under pressure. And OpenAI, interestingly, came out not long after and said that it's found evidence that DeepSeek used some of OpenAI's proprietary models to train its own rival product, through a process known in the industry as distillation. It's quite a common industry practice. But OpenAI is concerned DeepSeek was doing it to build a competing service, which it says is against OpenAI's terms of service. Of course, the irony here is that
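In the machine-learning sense used here, distillation means training your own model on the outputs of a stronger one. A minimal Python sketch of that idea, with trivial stand-in functions (all names below are hypothetical, for illustration only):

```python
# Toy sketch of output-based distillation: a "student" model is trained on
# text produced by a stronger "teacher". Both functions below are hypothetical
# stand-ins for illustration, not real model or API calls.

def teacher_generate(prompt: str) -> str:
    # Stand-in for querying a large proprietary model (the "teacher").
    return f"Teacher's answer to: {prompt}"

def build_distillation_dataset(prompts: list[str]) -> list[tuple[str, str]]:
    # Each teacher response becomes a supervised fine-tuning example
    # for the smaller "student" model.
    return [(p, teacher_generate(p)) for p in prompts]

if __name__ == "__main__":
    dataset = build_distillation_dataset(
        ["What is distillation?", "Summarise this paragraph."]
    )
    for prompt, target in dataset:
        print(f"{prompt!r} -> {target!r}")
```

The dispute described above is not about the technique itself, which is common, but about whose outputs are used and under what terms of service.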
DeepSeek R1 Explained to your grandma
08:33
- And in this video, I'll talk about the three main takeaways from their paper, including how they use Chain of Thought in order to have the model self-evaluate its performance, how it uses pure reinforcement learning to have the model guide itself, and how they use model distillation to make DeepSeek and other LLMs more accessible to everyone.
- And so the third important technique that the DeepSeek researchers use with their R1 model is model distillation.
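DeepSeek-R1's distillation step fine-tunes smaller models on R1's generated reasoning traces; the textbook form of model distillation, though, is usually shown as a loss function. Below is a minimal PyTorch sketch (an assumption for illustration, not DeepSeek's actual code), following the temperature-scaled formulation from Hinton et al. (2015):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Classic soft-label distillation: the student is trained to match the
    # teacher's softened output distribution via KL divergence.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Multiplying by T^2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

# Toy usage: a batch of 4 examples over a 10-token vocabulary,
# with random logits standing in for real teacher/student models.
teacher_logits = torch.randn(4, 10)
student_logits = torch.randn(4, 10, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients reach only the student's parameters
print(loss.item())
```

Training a small student against these soft targets transfers more information per example than hard labels alone, which is why distilled models can stay close to the teacher's quality at a fraction of the size, making them "more accessible to everyone" as the video puts it.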