Transcript (auto-generated by AI)

  • I'm running something called Private AI.

  • It's kind of like ChatGPT, except it's not.

  • Everything about it is running right here on my computer.

  • I'm not even connected to the internet.

  • This is private, contained, and my data isn't being shared with some random company.

  • So in this video, I want to do two things.

  • First, I want to show you how to set this up.

  • It is ridiculously easy and fast to run your own AI on your laptop, computer, or whatever it is.

  • This is free, it's amazing, it'll take you about five minutes.

  • And if you stick around to the end, I want to show you something even crazier, a bit more advanced.

  • I'll show you how you can connect your knowledge base, your notes, your documents, your journal entries, to your own Private GPT, and then ask it questions about your stuff.

  • And then second, I want to talk about how Private AI is helping us in the area we need help most, our jobs.

  • You may not know this, but not everyone can use ChatGPT or something like it at their job.

  • Their companies won't let them, mainly because of privacy and security reasons.

  • But if they could run their own Private AI, that's a different story.

  • That's a whole different ballgame.

  • And VMware is a big reason this is possible.

  • They are the sponsor of this video, and they're enabling some amazing things that companies can do on-prem in their own data center to run their own AI.

  • And it's not just the cloud, man, it's like in your data center.

  • The stuff they're doing is crazy.

  • We're gonna talk about it here in a bit.

  • But tell you what, go ahead and do this.

  • There's a link in the description.

  • Just go ahead and open it and take a little glimpse at what they're doing.

  • We're gonna dive deeper, so just go ahead and have it open right in your second monitor or something, or on the side, or minimized.

  • I don't know what you're doing, I don't know how many monitors you have.

  • You have three, actually, Bob.

  • I can see you.

  • Oh, and before we get started, I have to show you this.

  • You can run your own private AI that's kind of uncensored.

  • Like, watch this.

  • I love you, dude, I love you.

  • So yeah, please don't do this to destroy me.

  • Also, make sure you're paying attention.

  • At the end of this video, I'm doing a quiz.

  • And if you're one of the first five people to get 100% on this quiz, you're getting some free coffee.

  • Network Chuck coffee.

  • So take some notes, study up, let's do this.

  • Now, real quick, before we install a private local AI model on your computer, what does it even mean?

  • What's an AI model?

  • At its core, an AI model is simply an artificial intelligence pre-trained on data we've provided.

  • One you may have heard of is OpenAI's ChatGPT, but it's not the only one out there.

  • Let's take a field trip.

  • We're gonna go to a website called huggingface.co.

  • Just an incredible brand name, I love it so much.

  • This is an entire community dedicated to providing and sharing AI models.

  • And there are a ton.

  • You're about to have your mind blown, ready?

  • I'm gonna click on models up here.

  • Do you see that number?

  • 505,000 AI models.

  • Many of these are open and free for you to use, and they're pre-trained, which is kind of a crazy thing.

  • Let me show you this.

  • We're gonna search for a model named Llama 2, one of the most popular models out there.

  • We'll do Llama 2 7B.

  • I, again, I love the branding.

  • Llama 2 is an AI model known as an LLM, or large language model.

  • OpenAI's ChatGPT is also an LLM.

  • Now, this LLM, this pre-trained AI model, was made by Meta, AKA Facebook.

  • And what they did to pre-train this model is kind of insane.

  • And the fact that we're about to download this and use it, even crazier.

  • Check this out.

  • If you scroll down just a little bit, here we go: training data.

  • It was trained on over 2 trillion tokens of data from publicly available sources, instruction data sets, and over a million human-annotated examples.

  • Data freshness, we're talking July 2023.

  • I love that term, data freshness.

  • And getting the data was just step one.

  • Step two is insane, because this is where the training happens.

  • Meta, to train this model, put together what's called a super cluster.

  • It already sounds cool, right?

  • This sucker is over 6,000 GPUs.

  • It took 1.7 million GPU hours to train this model.

  • And it's estimated it cost around $20 million to train it.

  • And now Meta's just like, here you go, kid, download this incredibly powerful thing.

  • I don't want to call it a being yet.

  • I'm not ready for that.

  • But this is an intelligent source of information that you can just download on your laptop and ask it questions.

  • No internet required.

  • And this is just one of the many models we could download.

  • They have special models like text-to-speech, image-to-image.

  • They even have uncensored ones.

  • They have an uncensored version of Llama too.

  • This guy, George Sung, took this model and fine-tuned it with a pretty hefty GPU; it took him 19 hours, and he made it to where you can pretty much ask this thing anything you want.

  • Whatever question comes to mind, it's not going to hold back.

  • So how do we get this fine-tuned model onto your computer?

  • Well, actually, I should warn you, this involves quite a few llamas, more than you would expect.

  • Our journey starts at a tool called Ollama.

  • Let's go ahead and take a field trip out there real quick.

  • We'll go to ollama.ai.

  • All we have to do is install this little guy, Mr. Ollama.

  • And then we can run a ton of different LLMs.

  • Llama 2, Code Llama... told you, lots of llamas.

  • And there's others that are pretty fun, like Llama 2 Uncensored, more llamas.

  • Mistral, I'll show you in a second.

  • But first, what do we install Ollama on?

  • We can see right down here that we have it available on macOS and Linux, but oh, bummer, Windows coming soon.

  • It's okay, because we've got WSL, the Windows Subsystem for Linux, which is now really easy to set up.

  • So we'll go ahead and click on download right here.

  • For macOS, you'll just simply download this and install it like one of your regular applications.

  • For Linux, we'll click on this.

  • We got a fun curl command that we'll copy and paste.

  • Now, because we're going to install WSL on Windows, this will be the same step.

  • So, macOS folks, go ahead and just run that installer.

  • Linux and Windows folks, let's keep going.

  • Now, if you're on Windows, all you have to do now to get WSL installed is launch your Windows terminal.

  • Just go to your search bar and search for terminal.

  • And with one command, it'll just happen.

  • It used to be so much harder. The command is wsl --install.

  • It'll go through a few steps.

  • It'll install Ubuntu by default.

  • I'll go ahead and let that do that.

  • And boom, just like that, I've got Ubuntu 22.04.3 LTS installed, and I'm actually inside of it right now.

  • So now at this point, Linux and Windows folks, we've converged, we're on the same path.

  • Let's install Ollama.

  • I'm going to copy that curl command that Ollama gave us, jump back into my terminal, paste that in there, and press enter.

  • Fingers crossed, everything should be going great, like the way it is right now.

  • It'll ask for my sudo password.

  • And that was it.

  • Ollama is now installed.

  • Now, this will directly apply to Linux people and Windows people.

  • See right here where it says NVIDIA GPU installed?

  • If you have that, you're going to have a better time than other people who don't have that.

  • I'll show you here in a second.

  • If you don't have it, that's fine.

  • We'll keep going.

  • Now let's run an LLM.

  • We'll start with Llama 2.

  • So we'll simply type in ollama run, and then we'll pick one, llama2.

  • And that's it.

  • Ready, set, go.

  • It's going to pull the manifest.

  • It'll then start pulling down and downloading Llama 2, and I want you to just realize this: that powerful Llama 2 pre-training we talked about, all the money and hours spent, that's how big it is.

  • This is the 7 billion parameter model, or the 7B.
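For a rough sense of why that 7B model still fits on a laptop: Ollama typically serves quantized weights, commonly around 4 bits per parameter. The quantization level here is an assumption for the estimate; exact file sizes vary by format.

```python
# Back-of-the-envelope download size for a 7B-parameter model.
# Assumes ~4-bit quantization; actual file sizes vary by format.
params = 7_000_000_000
bytes_per_param = 0.5            # 4 bits = half a byte
size_gb = params * bytes_per_param / 1e9
print(f"~{size_gb:.1f} GB")      # ~3.5 GB
```

That lines up with a download in the low single-digit gigabytes, small enough for an ordinary laptop drive.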

  • It's pretty powerful.

  • And we're about to literally have this in the palm of our hands.

  • In like three, two, one.

  • Oh, I thought I had it.

  • Anyways, it's almost done.

  • And boom, it's done.

  • We've got a nice success message right here, and it's ready for us.

  • We can ask it anything.

  • Let's try: what is a pug?
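As a side note, besides this interactive prompt, Ollama also exposes a local REST API (by default at http://localhost:11434), so your own scripts can query the model too. A minimal Python sketch, assuming Ollama is running locally with the llama2 model pulled:

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's local /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("llama2", "What is a pug?")
# Uncomment when Ollama is running locally:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

The request only ever touches localhost, so the private, offline property still holds.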

  • Now, the reason this is going so fast, just as a side note, is that I'm running a GPU, and AI models love GPUs.

  • So let me show you real quick.

  • I did install Ollama on a Linux virtual machine.

  • And I'll just demo the performance for you real quick.

  • By the way, if you're running like a Mac with an M1, M2, or M3 processor, it actually works great.

  • I forgot to install it.

  • I gotta install it real quick.

  • And I'll ask it that same question, what is a pug?

  • It's going to take a minute.

  • It'll still work, but it's going to be slower on CPUs.

  • And there it goes.

  • It didn't take too long, but notice it is a bit slower.

  • Now, if you're running WSL, and you know you have an NVIDIA GPU and it didn't show up, I'll show you in a minute how you can get those drivers installed.

  • But anyways, just sit back for a minute, sip your coffee, and think about how powerful this is.

  • The tinfoil hat version of me stinkin' loves this.

  • Because let's say the zombie apocalypse happens, right?

  • The grid goes down, things are crazy.

  • But as long as I have my laptop and a solar panel, I still have AI, and it can help me survive the zombie apocalypse.

  • Let's actually see how that would work.

  • It gives me next steps.

  • I can have it help me with the water filtration system.

  • This is just cool, right?

  • It's amazing.

  • But can I show you something funny?

  • You may have caught this earlier.

  • Who is Network Chuck?

  • What?

  • Dude, I've always wanted to be Rick Grimes.

  • That is so fun.

  • But seriously, it kind of like hallucinated there.

  • It didn't have the correct information.

  • It's so funny how it mixed the zombie apocalypse prompt with me.

  • I love that so much.

  • Let's try a different model.

  • I'll say bye.

  • I'll try a really fun one called Mistral.

  • And by the way, if you want to know which ones you can run with Ollama, which LLMs, they've got a page for their models right here.

  • All the ones you can run, including Llama 2 Uncensored.

  • Wizard Math.

  • I might give that to my kids, actually.

  • Let's see what it says now.

  • Who is Network Chuck?

  • Now, my name is not Chuck Davis.

  • And my YouTube channel is not called Network Chuck on Tech.

  • So clearly, the data this thing was trained on is either not up to date or just plain wrong.

  • So now the question is: cool, we've got this local private AI, this LLM.

  • That's super powerful.

  • But how do we teach it the correct information for us?

  • How can I teach it to know that I'm Network Chuck, Chuck Keith, not Chuck Davis, and my channel's called Network Chuck?

  • Or maybe I'm a business and I want it to know more than just what's publicly available.

  • Because sure, right now, if you downloaded this LLM, you could probably use it in your job.

  • But you can only go so far without it knowing more about your job.

  • For example, maybe you're on a help desk.

  • Imagine if you could take your help desk's knowledge base, your IT procedures, your documentation.

  • Not only that, but maybe you have a database of closed tickets, open tickets.

  • If you could take all that data and feed it to this LLM and then ask it questions about all of that, that would be crazy.

  • Or maybe you want it to help troubleshoot code that your company's written.

  • You can even make this LLM public-facing for your customers.

  • You feed it information about your product, and the customer could interact with that chatbot you make, maybe.

  • This is all possible with a process called fine-tuning, where we can train this AI on our own proprietary, secret, private stuff about our company, or maybe our lives, or whatever you want to use it for, whatever the use case is.

  • And this is fantastic, because maybe before, you couldn't use a public LLM because you weren't allowed to share your company's data with that LLM.

  • Whether it's compliance reasons or you just simply didn't want to share that data because it's secret.

  • Whatever the case, it's possible now, because this AI is private, it's local.

  • And whatever data you feed to it is gonna stay right there in your company.

  • It's not going out the door.

  • That idea just makes me so excited because I think it is the future of AI and how companies and individuals will approach it.

  • It's gonna be more private.

  • Back to our question, though: fine-tuning.

  • That sounds cool, training an AI on your own data, but how does that work?

  • Because as we saw before, when Meta pre-trained their model, it took over 6,000 GPUs and 1.7 million GPU hours.

  • Do we have to have this massive data center to make this happen?

  • No, check this out.

  • And this is such a fun example, VMware.

  • They asked ChatGPT, what's the latest version of VMware vSphere?

  • Now, the latest version ChatGPT knew about was vSphere 7.0, but that wasn't helpful to VMware, because the latest version they were working on, which hadn't been released yet, so it wasn't public knowledge, was vSphere 8 Update 2.

  • And they wanted information like this, internal information not yet released to the public.

  • They wanted this to be available to their internal team.

  • So they could ask something like ChatGPT, hey, what's the latest version of vSphere?

  • And it could answer correctly.

  • So to do what VMware is trying to do, to fine-tune a model or train it on new data, it does require a lot.

  • First of all, you would need some hardware, servers with GPUs.

  • Then you would also need a bunch of tools and libraries and SDKs, like PyTorch and TensorFlow, Pandas, NumPy, scikit-learn, Transformers, and fastai.

  • The list goes on.

  • You need lots of tools and resources in order to fine-tune an LLM.

  • That's why I'm a massive fan of what VMware is doing.

  • Right here, they have something called VMware Private AI with NVIDIA.

  • The gajillion things I just listed off, they include in one package, one combo meal, a recipe of AI fine-tuning goodness.

  • So as a company, it becomes a bit easier to do this stuff yourself, locally.

  • For the system engineer you have on staff who knows VMware and loves it, they could do this stuff.

  • They could implement this.

  • And for the data scientists they have on staff who will actually do some of the fine-tuning, all the tools are right there.

  • So here's what it looks like to fine-tune.

  • And we're gonna kinda peek behind the curtain at what a data scientist actually does.

  • So first we have the infrastructure, and we start here in vSphere.

  • Now, if you don't know what vSphere is, or VMware, think virtual machines.

  • You got one big physical server, the hardware, the stuff you can feel, touch, and smell.

  • If you haven't smelled a server, I don't know what you're doing.

  • And instead of installing one operating system on it like Windows or Linux, you install VMware's ESXi, which will then allow you to virtualize, or create, a bunch of additional virtual computers.

  • So instead of one computer, you've got a bunch of computers all using the same hardware resources.

  • And that's what we have right here.

  • One of those virtual computers, a virtual machine.

  • This, by the way, is one of their special deep learning VMs that has all the tools I mentioned, and many, many more, pre-installed, ready to go.

  • Everything a data scientist could love.

  • It's kinda like a surgeon walking in to do some surgery, and their doctor assistants or whatever have prepared all their tools.

  • It's all on the tray, laid out nice and neat, to where all the surgeon has to do is walk in and just go, scalpel.

  • That's what we're doing here for the data scientist.

  • Now, talking more about hardware: this guy has a couple of NVIDIA GPUs assigned to it, or passed through to it, via a technology called PCIe passthrough.

  • These are some beefy GPUs, and notice they are vGPUs, for virtual GPU, similar to what you do with a CPU: cutting it up and assigning some of it to a virtual CPU on a virtual machine.

  • So here we are in data scientist world.

  • This is a Jupyter Notebook, a common tool used by data scientists.

  • And what you're gonna see here is a lot of code that they're using to prepare the data, specifically the data that they're gonna train or fine-tune the existing model on.

  • Now, we're not gonna dive deep on that, but I do want you to see this.

  • Check this out.

  • A lot of this code is all about getting the data ready.

  • So in VMware's case, it might be a bunch of their knowledge base, product documentation, and they're getting it ready to be fed to the LLM.

  • And here's what I wanted you to see.

  • Here's the dataset that we're training this model on, or fine-tuning it on.

  • We only have 9,800 examples that we're giving it, or 9,800 new prompts or pieces of data.

  • And that data might look like this, like a simple question or a prompt.

  • And then we feed it the correct answer.

  • And that's essentially how we train AI.
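In code, those question-and-answer pairs are usually just simple records. A minimal Python sketch of that shape (the field names and example content are hypothetical, purely for illustration, not VMware's actual dataset):

```python
# A fine-tuning dataset is essentially a list of prompt/answer records.
# Field names and examples here are hypothetical, for illustration only.
training_examples = [
    {
        "prompt": "What is the latest version of VMware vSphere?",
        "answer": "The latest version is vSphere 8 Update 2.",
    },
    {
        "prompt": "Who is Network Chuck?",
        "answer": "Network Chuck is Chuck Keith, host of the NetworkChuck YouTube channel.",
    },
]

# Each record pairs a question with the answer we want the model to learn.
for ex in training_examples:
    print(f"Q: {ex['prompt']}\nA: {ex['answer']}\n")
```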

  • But again, we're only giving it 9,800 examples, which is not a lot at all.

  • And it's extremely small compared to how the model was originally trained.

  • And I point that out to say that we're not gonna need a ton of hardware or a ton of resources to fine-tune this model.

  • We won't need the 6,000 GPUs Meta needed to originally create this model.

  • We're just kind of adding to it, changing some things or fine-tuning it to our use case.

  • And look at what will actually be changed when we run this, when we train it.

  • We're only changing 65 million parameters, which sounds like a lot, right?

  • But it's not much in the grand scheme of a 7 billion parameter model.

  • We're only changing 0.93% of the model.
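That percentage is easy to verify with a one-liner:

```python
# Sanity-check the fraction of parameters touched by the tuning run.
trainable_params = 65_000_000      # ~65 million parameters being tuned
total_params = 7_000_000_000       # Llama 2 7B: ~7 billion parameters

fraction = trainable_params / total_params
print(f"{fraction:.2%} of the model")  # 0.93% of the model
```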

  • And then we can actually run our fine-tuning. This is a specific technique in fine-tuning called prompt-tuning, where we simply feed it additional prompts with answers to change how it will react to people asking it questions.

  • This process will take three to four minutes because, again, we're not changing a lot.

  • And that is just so super powerful.

  • And I think VMware is leading the charge with private AI.

  • VMware and NVIDIA take all the guesswork out of getting things set up to fine-tune an LLM.

  • They've got deep learning VMs, which are insane.

  • VMs that come pre-installed with everything you could want, everything a data scientist would need to fine-tune an LLM.

  • And then NVIDIA has an entire suite of tools centered around their GPUs, taking advantage of some really exciting things to help you fine-tune your LLMs.

  • Now, there's one thing I didn't talk about because I wanted to save it for last, for right now.

  • It's this right here, this Vector Database PostgreSQL box here.

  • This is something called RAG.

  • And it's what we're about to do with our own personal GPT here in a bit.

  • Retrieval-Augmented Generation.

  • So, scenario: let's say you have a database of product information, internal docs, whatever it is, and you haven't fine-tuned your LLM on this just yet.

  • So it doesn't know about it.

  • You don't have to do that.

  • With RAG, you can connect your LLM to this database of information, this knowledge base, and give it these instructions.

  • Say: whenever I ask you a question about any of the things in this database, before you answer, consult the database.

  • Go look at it and make sure what you're saying is accurate.

  • We're not retraining the LLM.

  • We're just saying, hey, before you answer, go check real quick in this database to make sure it's accurate, to make sure you got your stuff right.
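To make that flow concrete, here's a toy sketch of the retrieval step in plain Python. Real RAG setups use embeddings and a vector database (like the PostgreSQL vector database box just mentioned); the word-overlap scoring here is just a stand-in to show the shape of retrieve-then-answer:

```python
# Toy retrieval-augmented generation: retrieve the most relevant doc,
# then prepend it to the prompt so the LLM answers from our own data.
# Real systems use embeddings + a vector DB; word overlap is a stand-in.

knowledge_base = {
    "vsphere.txt": "The latest internal build is vSphere 8 Update 2.",
    "tickets.txt": "Ticket 42 was closed after a reboot fixed the issue.",
}

def retrieve(question: str) -> str:
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(
        knowledge_base.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
    )

def build_prompt(question: str) -> str:
    """Wrap the retrieved context and question into one augmented prompt."""
    context = retrieve(question)
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is the latest version of vSphere?"))
```

The augmented prompt is what actually gets sent to the model, which is why the model can answer correctly without being retrained.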

  • Isn't that cool?

  • So yes, fine-tuning is cool, and training an LLM on your own data is awesome.

  • But in between those moments of fine-tuning, you can have RAG set up, where it can consult your database, your internal documentation, and give correct answers based on what you have in that database.

  • That is so stinking cool.

  • So with VMware Private AI Foundation with NVIDIA, they have those tools baked right in, to where it just kind of works for what would otherwise be a very complex setup.

  • And by the way, this whole RAG thing, like I said earlier, we're about to do this.

  • I actually connected a lot of my notes and journal entries to a private GPT using RAG, and I was able to talk with it about me, asking it about my journal entries and having it answer questions about my past.

  • That's so powerful.

  • Now, before we move on, I just wanna highlight the fact that NVIDIA, with their NVIDIA AI Enterprise, gives you some amazing, fantastic tools to pull the LLM of your choice and then fine-tune and customize and deploy that LLM.

  • It's all built in right here.

  • So VMware Cloud Foundation, they provide the robust infrastructure.

  • And NVIDIA provides all the amazing AI tools you need to develop and deploy these custom LLMs.

  • Now, it's not just NVIDIA.

  • They're partnering with Intel as well.

  • So VMware's covering all the tools that admins care about.

  • And then for the data scientist, this is for you.

  • Intel's got your back.

  • Data analytics, generative AI and deep learning tools, and some classic ML, or machine learning.

  • And they're also working with IBM.

  • All you IBM fans, you can do this too.

  • Again, VMware has the admin's back.

  • But for the data scientist, Watson, one of the first AI things I ever heard about.

  • Red Hat and OpenShift.

  • And I love this because what VMware's doing is all about choice.

  • If you wanna run your own local private AI, you can.

  • You're not just stuck with one of the big guys out there.

  • And you can choose to run it with NVIDIA and VMware, Intel and VMware, IBM and VMware.

  • You got options.

  • So there's nothing stopping you.

  • So now for the bonus section of this video, and that's how to run your own private GPT with your own knowledge base.

    現在,我們來看看本視頻的獎勵部分,即如何利用自己的知識庫運行自己的私人 GPT。

  • Now, fair warning, it is a bit more advanced.

    現在,我要提醒大家,它有點高級。

  • But if you stick with me, you should be able to get this up and running.

    不過,如果你能跟上我的進度,你應該能把它啟動並運行起來。

  • So take one more sip of coffee.

    那就再喝一口咖啡吧。

  • Let's get this going.

    讓我們開始吧。

  • Now, first of all, this will not be using Ollama.

    首先，我們不會使用 Ollama。

  • This will be a separate project called PrivateGPT.

    這將是一個獨立的項目,名為 PrivateGPT。

  • Now, disclaimer, this is kind of hard to do.

    免責聲明,這有點難。

  • Unlike VMware Private AI, where they do it all for you.

    與 VMware PrivateAI 不同的是,他們會為你做這一切。

  • It's a complete solution for companies to run their own private local AI.

    它是企業運行自己的私有在地人工智能的完整解決方案。

  • What I'm about to show you is not that at all.

    我要向你們展示的完全不是這樣。

  • No affiliation with VMware.

    與 VMware 無關。

  • It's a free side project.

    這是一個免費的輔助項目。

  • You can try just to get a little taste of what running your own private GPT with RAG tastes like.

    您可以嘗試一下用 RAG 運行自己的私人 GPT 是什麼滋味。

  • Did I do that right?

    我做得對嗎?

  • I don't know.

    我不知道。

  • Now, Iván Martínez has a great doc on how to install this.

    現在，Iván Martínez 提供了一份關於如何安裝的精彩文檔。

  • It's a lot, but you can do it.

    雖然工作量很大,但你可以做到。

  • And if you just want a quick start, he does have a few lines of code for Linux and Mac users.

    如果你只想快速入門,他還為 Linux 和 Mac 用戶提供了幾行代碼。
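
For reference, that quick start boils down to a handful of commands. This is a hedged sketch based on the zylon-ai/private-gpt README as it stood at the time; the repo URL, extras flags, and script names are assumptions that may have changed, so defer to the official install doc:

```shell
# Hypothetical quickstart (check the current PrivateGPT docs first):
git clone https://github.com/zylon-ai/private-gpt.git
cd private-gpt

# Poetry manages the Python dependencies
pip install poetry
poetry install --extras "ui llms-llama-cpp embeddings-huggingface vector-stores-qdrant"

# Download the default local model, then start the server
poetry run python scripts/setup
PGPT_PROFILES=local make run
```

These are setup commands, so there's nothing to "run" until you have the repo and a model downloaded.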

  • Fair warning, this is CPU only.

    需要提醒的是,這僅適用於 CPU。

  • You can't really take advantage of RAG without a GPU, which is what I wanted to do.

    沒有 GPU 就無法真正利用 RAG,而這正是我想要做的。

  • So here's my very specific scenario.

    這就是我的具體設想。

  • I've got a Windows PC with an NVIDIA RTX 4090.

    我有一臺裝有英偉達 RTX 4090 的 Windows 電腦。

  • How do I run this Linux-based project?

    如何運行這個基於 Linux 的項目?

  • WSL.

    WSL.

  • And I'm so thankful to this guy, Emilien Lancelot.

    我非常感謝這個人,埃米利安-蘭斯洛特。

  • He put an entire guide together of how to set this up.

    他還編寫了一份完整的指南,介紹如何進行設置。

  • I'm not gonna walk you through every step because he already did that.

    我不會教你每一個步驟,因為他已經教過了。

  • Link below.

    鏈接如下。

  • But I seriously need to buy this guy a coffee.

    但我真得請他喝杯咖啡。

  • How do I do that?

    我該怎麼做?

  • I don't know.

    我不知道。

  • Emilien, if you're watching this, reach out to me.

    埃米利安,如果你在看這個,請聯繫我。

  • I'll send you some coffee.

    我給你送點咖啡

  • So anyways, I went through every step, from installing all the prereqs to installing NVIDIA drivers and using Poetry to handle dependencies. Poetry's pretty cool, by the way.

    總之,我經歷了從安裝所有先決條件到安裝 NVIDIA 驅動程序的每一個步驟,並使用 Poetry 來處理依賴關係,Poetry 非常酷。

  • I landed here.

    我在這裡著陸。

  • I've got a working PrivateGPT instance, private and local, that I can access through my web browser.

    我有一個私有、在地、可通過網絡瀏覽器訪問的 PrivateGPT 實例。
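
With the server up, reaching it looks roughly like this. The port and endpoint here are assumptions from the project's default settings in my setup; yours may differ:

```shell
# With `make run` going, the Gradio UI is on port 8001 by default:
#   http://localhost:8001

# PrivateGPT also exposes an OpenAI-style API, e.g.:
curl http://localhost:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```

This only works against a running local server, so treat it as a sketch rather than something to paste blindly.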

  • And it's using my GPU, which is pretty cool.

    它還使用了我的 GPU,非常酷。

  • Now, first I try a simple document upload.

    現在,我先試著上傳一個簡單的文檔。

  • I've got this VMware article that details a lot of what we talked about in this video.

    我有一篇 VMware 的文章,詳細介紹了我們在視頻中談到的很多內容。

  • I upload it and I start asking it questions about this article.

    我上傳了它,並開始向它詢問有關這篇文章的問題。

  • I tried something specific, like show me something about VMware AI market growth.

    我試著問一些具體的問題,比如向我介紹一下 VMware AI 市場的增長情況。

  • Bam, it figured it out.

    砰,它想通了。

  • It told me.

    它告訴我

  • Then I'm like, what's the coolest thing about VMware private AI?

    然後我就想,VMware 私有人工智能最酷的地方是什麼?

  • It told me.

    它告訴我

  • I'm sitting here chatting with a document, but then I'm like, let's try something bigger.

    我坐在這裡和一份文件哈拉,但後來我想,讓我們嘗試更大的東西。

  • I want to chat with my journals.

    我想和我的日記哈拉。

  • I've got a ton of journals on Markdown format and I want to ask it questions about me.

    我有一大堆 Markdown 格式的日誌,我想向它提出關於我的問題。

  • Now, this specific step is not covered in the article, so here's how you do it.

    現在,文章中沒有涉及這一具體步驟,所以下面介紹一下如何操作。

  • First, you'll want to grab your folder of whatever documents you want to ask questions about and throw it onto your machine.

    首先,你要把文件夾裡你想問的任何文件都放到你的機器上。

  • So I copied it over to my WSL machine and then ingested it with this command.

    是以,我將其複製到我的 WSL 機器上,然後用這條命令將其導入。
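
Roughly, the two steps look like this. The paths are placeholders, and `scripts/ingest_folder.py` is the bulk-ingest helper shipped with the private-gpt repo at the time; newer releases may have moved or renamed it, so check the ingestion docs:

```shell
# Copy the notes folder from Windows into WSL (placeholder paths):
cp -r "/mnt/c/Users/<you>/journals" ~/journals

# Bulk-ingest everything in that folder into PrivateGPT's index:
poetry run python scripts/ingest_folder.py ~/journals
```

Ingestion can take a while on a big folder, since every document gets chunked and embedded before it lands in the vector store.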

  • Once that completed and I ran PrivateGPT again, all my documents were there and I was ready to ask it questions.

    完成後,我再次運行了私人 GPT,這裡有我所有的文件,我已經準備好向它提問了。

  • So let's test this out.

    讓我們來測試一下。

  • I'm going to ask it, what did I do in Takayama?

    我要問它,我在高山做了什麼?

  • So I went to Japan in November of 2023.

    於是,我在 2023 年 11 月去了日本。

  • Let's see if it can search my notes, figure out when that was and what I did.

    看看它能不能搜索我的筆記,找出那是什麼時候,我做了什麼。

  • That's awesome.

    太棒了

  • Oh my goodness.

    我的天啊

  • Let's see.

    讓我們看看

  • What did I eat in Tokyo?

    我在東京吃了什麼?

  • How cool is that?

    這有多酷?

  • Oh my gosh, it's so fun.

    天哪,太有趣了。

  • No, it's not perfect, but I can see the potential here.

    不,它並不完美,但我看到了它的潛力。

  • That's insane.

    太瘋狂了

  • I love this so much.

    我太喜歡這個了。

  • Private AI is the future.

    私人人工智能是未來的趨勢。

  • And that's why we're seeing VMware bring products like this to companies to run their own private local AI.

    這就是為什麼我們看到 VMware 為公司帶來這樣的產品,以運行他們自己的私有在地人工智能。

  • And they make it pretty easy.

    而且他們做起來非常容易。

  • Like if you actually did that private GPT thing, that little side project, there's a lot to it.

    就像如果你真的做了那個私人 GPT 項目,那個小小的副業項目,就會有很多收穫。

  • Lots of tools you have to install.

    需要安裝很多工具。

  • It's kind of a pain, but with VMware, they kind of cover everything.

    這有點麻煩,但對於 VMware 來說,他們幾乎涵蓋了一切。

  • Like that deep learning VM they offer as part of their solution.

    比如他們作為解決方案的一部分提供的深度學習虛擬機。

  • It's got all the tools ready to go.

    所有工具一應俱全。

  • Pre-baked.

    預先烘烤。

  • Again, you're like a surgeon just walking in saying, scalpel.

    再說一遍,你就像一個外科醫生,走進來就說,手術刀。

  • You got all this stuff right there.

    這些東西都在這裡。

  • So if you want to bring AI to your company, check out VMware Private AI, link below.

    是以,如果您想將人工智能引入您的公司,請查看下面的鏈接:VMware Private AI。

  • And thank you to VMware by Broadcom for sponsoring this video.

    感謝 VMware by Broadcom 贊助本視頻。

  • You made it to the end of the video.

    你看完了視頻。

  • Time for a quiz.

    測驗時間到

  • This quiz will test the knowledge you've gained in this video.

    本測驗將測試您在本視頻中獲得的知識。

  • And the first five people to get 100% on this quiz will get free coffee from NetworkChuck Coffee.

    前五位在測試中獲得 100% 分數的人將獲得 NetworkChuck Coffee 提供的免費咖啡。

  • So here's how you take the quiz.

    下面是測驗的方法。

  • Right now, check the description in your video and click on this link.

    現在,請查看視頻中的描述並點擊此鏈接。

  • If you're not currently signed into the Academy, go ahead and get signed in.

    如果您目前尚未登錄學院,請繼續登錄。

  • If you're not a member, go ahead and click on sign up.

    如果您還不是會員,請點擊註冊。

  • It's free.

    它是免費的。

  • Once you're signed in, it will take you to your dashboard, showing you all the stuff you have access to with your free Academy account.

    登錄後,您將進入儀表板,顯示您的免費學院賬戶可以訪問的所有內容。

  • But to get right back to that quiz, go back to the YouTube video, click on that link once more, and it should take you right to it.

    不過,如果想直接回到那個測驗,回到 YouTube 視頻,再點擊一次那個鏈接,就能直接進入測驗。

  • Go ahead and click on start now and start your quiz.

    點擊 "現在開始",開始測驗。

  • Here's a little preview.

    下面是一個小預覽。

  • That's it.

    就是這樣。

  • The first five to get 100% get free coffee.

    最先獲得滿分的前五名可獲得免費咖啡。

  • If you're one of the five, you'll know because you'll receive an email with free coffee.

    如果你是這五個人之一,你就會知道,因為你會收到一封電子郵件,裡面有免費咖啡。

  • You gotta be quick.

    你得快點。

  • You gotta be smart.

    你得聰明點。

  • I'll see you guys in the next video.

    我們下期視頻再見。
