Transcript (subtitles auto-generated by AI)
  • The deadline to apply for the first YC Spring Batch is February 11th.

  • If you're accepted, you'll receive $500,000 in investment, plus access to the best startup community in the world.

  • So apply now and come build the future with us.

  • If you ask people what AGI was, they would say it's a model that you can actually interact with.

  • It passes the Turing test.

  • It can look at things.

  • It can write code.

  • It can even draw an image for you.

  • We're there.

  • Yeah.

  • And like we've had this for years.

  • And if you said, okay, well, what happens when you get all those capabilities?

  • Say, well, everybody's out of a job and game over for humanity.

  • And none of that is happening.

  • I think in the big picture, we're reaching that bottleneck for pre-training and data.

  • But now we have this new mechanism with reasoning and test time compute.

  • What we're going to see out of reasoning is that it's really going to unlock the possibility of agents to do actions on your behalf, which has sort of always been possible, but it's just never been quite good enough.

  • You really need a lot of reliability.

  • I think that is now in sight.

  • Hey guys, we have a real treat today.

  • Bob McGrew, formerly chief research officer at OpenAI.

  • You were a part of building a lot of the research team.

  • What was that like early at OpenAI?

  • The really interesting thing about OpenAI is that I did not originally intend to go to a research lab.

  • When I left Palantir, I wanted to start a company.

  • I had a thesis that robotics would be the first real business that was built out of deep learning.

  • This was back in 2015.

  • And I talked my way into a friend's nonprofit.

  • I never had a badge, but I would go in, he'd open the door for me.

  • And I learned deep learning by teaching a robot how to play checkers from vision.

  • And in the process of doing this, I learned a lot about robotics.

  • And I learned that robotics was definitely not the right startup to start in 2015 or 2016.

  • I ended up going to OpenAI basically because it was a place full of very smart people and it had big ambitions.

  • It was a place where I could really learn.

  • I had all this management experience from Palantir, but it was just a place for me to really become an expert in deep learning.

  • And from there, figure out what it could actually be used and applied for.

  • What were some of the earliest things that you remember working on?

  • And how did that play into what everyone knows OpenAI to be now?

  • Yeah.

  • When OpenAI started, the goal was always to build AGI.

  • But the theory early on was that we would build AGI by doing a lot of research and writing a lot of papers.

  • And we knew that this was a bad theory.

  • I think for a lot of the early people who were startup people, Sam, Greg, myself, it felt painful and a little academic.

  • But at the same time, it was what we could do at the time.

  • And so some of the early projects, I worked on a robotics project where we took a robot hand, a humanoid robot hand, and we taught it to solve Rubik's Cube.

  • The idea in doing that was that if we could make the environments complicated enough, the artificial intelligence would be able to generalize out of the narrow domain it was taught and learn something more complicated, which was one of the ideas that later we see coming back with LLMs.

  • The other really early big project was solving Dota 2.

  • So there's a long history of solving games as a path towards building better AI, from Othello to Go.

  • And after beating Go, the next hardest set of games are actually video games.

  • They're not very classy, but they're a lot of fun.

  • And I can assure you that mathematically, they were harder.

  • And so DeepMind went after StarCraft, OpenAI went after Dota 2.

  • And there was real insight that was generated there, which was that it really strengthened our belief that scale was the path to improving artificial intelligence.

  • That with Dota 2, the secret idea was that we could take huge amounts of experience and feed it into a neural network, and that the neural network would actually learn and generalize from that.

  • And later, we actually went back and applied this to the robot hand, and that became the key idea for the robot hand.

  • And at the same time as these two big projects were going on, Alec Radford was experimenting with language.

  • And the core idea behind GPT-1 is that if you have a transformer, and you apply this super simple objective of guessing the next token, guessing the next word, that that would be enough signal that you could actually have something that would be able to generate coherent text.
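At toy scale, the "guess the next token" objective can be sketched without any neural network at all. The snippet below is a deliberately simplified illustration: a bigram counter stands in for the transformer, purely to show the shape of the training signal. Nothing about it resembles GPT-1's actual architecture.

```python
from collections import Counter, defaultdict

def train_bigram(corpus_tokens):
    """The 'guess the next word' objective, reduced to its simplest
    possible form: count which token follows which."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent next token seen in training, or None."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

tokens = "the cat sat on the mat and the cat slept".split()
model = train_bigram(tokens)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

A real model replaces the counting with a neural network and the most-common lookup with a probability distribution over the whole vocabulary, but the supervision signal is the same: the next token in the corpus.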

  • And in retrospect, it sounds sort of obvious, right?

  • Like, you know, clearly this was going to work, but no one thought this would work at the time.

  • Alec really had to persevere for years in order to make this work, and that became GPT-1.

  • And then after GPT-1 seemed successful, we brought in the ideas from Dota and from the robot hand of training at larger and larger amounts of scale, and training on a really diverse set of data, and looking for generalization.

  • And together, that brings you to GPT-2, and GPT-3, and GPT-4.

  • So one of the things that OpenAI really pioneered and sort of figured out was this concept of scale.

  • How is it that it was OpenAI that made the right decisions and sort of found large language models first?

  • Early on, there were sort of a couple big projects, as I said, and then some room for exploratory research.

  • And at the very earliest days, the exploratory research was really about what the researcher wanted to do, but also it was about sort of the company's opinion.

  • And in this, it was primarily formed by Ilya, with influence from a lot of people, but I think Ilya was really the guiding light here early on.

  • Sometimes I think about the OpenAI culture, and I like to oppose it to sort of Google Brain and to DeepMind.

  • And so early on, the DeepMind culture was a caricature.

  • Demis had a big plan, and he wanted to hire a bunch of researchers so he could tell them to move forward with his plan.

  • And Google Brain said, let's rebuild academia.

  • Let's like bring in all these super talented researchers.

  • Let's not tell them anything.

  • Let's just let them figure out what they want to do, give them lots of resources and hope that amazing products pop out.

  • And of course they did, but they didn't necessarily happen at Google.

  • And we took a different approach, which was really more like a startup, where there was no sort of big centralized plan, but at the same time, people didn't have, it wasn't just sort of, let's let a thousand flowers bloom.

  • Instead, we had opinions about what needed to be done and things like, how do you show scale as a way of making your idea get better?

  • And that opinion was set by the research leadership.

  • Again, early on, people like Ilya, people like Dario, that was how we made sure that we didn't just sort of throw resources at everybody, but neither did we have just one set of ideas that were there.

  • We found this sort of happy medium between the two.

  • I guess one of the critiques of maybe pure academia or some of the AI research labs, we don't have to name any of them, but we've heard stories about looking at the number of researchers on any given paper, there might be way more people on it.

  • And if you really dig into some of the papers there, they look like maybe a little bit of this plus a little bit of that.

  • And that sort of reflected the nature of that's what it took to get compute.

  • And this is at other AI labs.

  • I mean, what was it about OpenAI where you were able to sort of avoid that?

  • Well, I think the paper example is a really interesting example, because I think that's sort of both good and bad.

  • I am hugely positive on academics and researchers, but actually pretty negative on academia.

  • I think academia is good for this very narrow thing of small groups, trying out crazy ideas, but academia has a lot of incentives that prevent people from collaborating.

  • And in particular in academia, there's this obsession with credit.

  • One of the things that's interesting about the way that papers have turned out in big labs is that early on we made the decision that we would try to be as catholic as possible in putting everybody's name on it.

  • And on one of the early robotics papers, we actually said cite as OpenAI because we didn't wanna get into a fight.

  • The first author is the one who gets cited and their name shows up every single time.

  • So we said, we're not gonna try to have this fight.

  • We're not gonna say who is the person who really did it.

  • We're just gonna say cite as OpenAI.

  • And I think that is actually a really important cultural piece, the ability to accept that people want credit, but to be able to channel it into, it's your internal reputation, not the position you have on the paper, that really matters.

  • And for a long time, OpenAI didn't really have any titles except for a CEO title, right?

  • But didn't really have a lot of titles within the organization itself.

  • People always knew who the great researchers were.

  • Once you have the scaling laws and certainly how AI research is being done now, there's sort of this shift where basically scale is all you need for increasingly more and more AI domains.

  • It's sort of potentially coming true in image diffusion models, or, to bring it back to what you were starting out with, in robotics.

  • There's some sense that similar principles to scaling laws actually do apply in the right domains in robotics.

  • Is that sort of one of the things that you're seeing, or how would you respond to that?

  • I think if you look at AI progress, you see scaling laws all over the place.

  • And so the interesting question is, well, if scaling laws exist and they're commonplace, what does that mean?

  • What does that mean for you if you're a company, if you're a researcher, if you're trying to make things better?

  • Why didn't we take advantage of scaling laws earlier in these other domains?

  • Well, I think we were really trying to.

  • Usually the first step is actually getting to a scaling law.

  • To take an example that's not LLMs.

  • If you think about DALL·E, which was how do you take text and make an image out of it?

  • I think Aditya Ramesh who built that model spent 18 months, maybe two years just getting to the first version that clearly worked.

  • So I remember he'd be working on this and Ilya would come and show me, he'd be like, Aditya's been working on this for a year.

  • He's trying to make a pink panda.

  • That's skating on ice because it's something that's clearly not in the training set.

  • And here's an image and you can see it's like pink up there and white down there.

  • It's really beginning to work.

  • And I would look at that.

  • I'd be like, really?

  • I mean, maybe, maybe, I don't know.

  • But just getting to that point where it sort of plausibly begins to work is a huge difficult problem.

  • And it's completely separate from using scaling laws.

  • Now, once you get it to work, that's when scaling laws come into play.

  • And with scaling laws, you have two hard things that you can do.

  • One of them is just the pure scale itself.

  • Scaling is not easy.

  • It is, in fact, probably the practical problem in any sort of model building.

  • And it's a systems problem.

  • It's a data problem.

  • It's an algorithmic problem.

  • Even if you're just trying to scale the same architecture.

  • The second thing you can do is you can try to change the slope of the scaling law or just bump it up a little bit.

  • And that is searching for better architectures, searching for better optimization algorithms, all of the algorithmic improvements that you can do.

  • And if you put all of those together, that is what explains the very fast progress that we're seeing in AI today.
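The two levers described here, pure scale versus changing the slope, can be made concrete with the usual power-law form of a scaling law, loss ≈ a · C^(−b). The constants below are invented purely for illustration and are not fit to any real model.

```python
def loss(compute, a=10.0, b=0.05):
    """Illustrative power-law scaling law: loss = a * compute^(-b).
    The constants a and b are made up, not measured from any real system."""
    return a * compute ** (-b)

# Lever 1, "pure scale": move along the same curve with 10x more compute.
print(loss(1e3), loss(1e4))

# Lever 2, "change the slope": a better architecture or optimizer might
# steepen the exponent, lowering loss at the same compute budget.
print(loss(1e4, b=0.07))
```

On a log-log plot, lever 1 slides you along a straight line, while lever 2 tilts or shifts the line itself; both show up in practice, which is one way to read the fast progress described above.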

  • I guess that is one of the bigger debates that's ongoing certainly out there in the community.

  • Are the scaling laws going to continue to hold or are we hitting some sort of bottlenecks?

  • I don't know how much you can talk about it, but what's your view at this point on maybe LLM scaling, but certainly other domains too?

  • It is definitely the case that there is a data wall and that if you take the same techniques that we were using to scale LLMs, at some point you're going to run into that.

  • The thing that's been really exciting, of course, is going from the LLM scaling of pre-training, where you're just bringing in bigger and bigger corpuses and trying to predict the next token, to shifting gears and using techniques like reasoning, which OpenAI has shipped in its o1 and o3 models, and Gemini has now also shipped in Gemini Flash Thinking.

  • If you think about Moore's law, it's one big exponential curve, but it's actually the sum of a bunch of little S-curves.

  • And you start off with Dennard scaling and at some point that breaks.
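The "exponential as a sum of S-curves" picture is easy to reproduce numerically. In the sketch below, each technology generation is modeled as a logistic curve with a made-up midpoint and ceiling; summing them yields steadily compounding growth even though each individual curve saturates, which is the Moore's-law analogy being drawn here.

```python
import math

def s_curve(t, midpoint, ceiling):
    """One logistic S-curve: slow start, rapid growth, then saturation."""
    return ceiling / (1 + math.exp(-(t - midpoint)))

def stacked(t):
    """Each new generation is another S-curve with a later midpoint and a
    higher ceiling; their sum keeps overall progress growing even as
    earlier generations flatten out. Parameters are illustrative only."""
    return sum(s_curve(t, midpoint=10 * k, ceiling=2 ** k) for k in range(1, 6))

for t in (5, 15, 25, 35, 45):
    print(t, round(stacked(t), 1))
```

By the time one curve (say, Dennard scaling) has flattened, the next mechanism's curve is in its steep phase, so the envelope keeps rising.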

  • But if you think about how NVIDIA has gone, Moore's law has continued.

  • It's just come through a different mechanism.

  • So you solve some bottleneck, but then you S-curve that particular solution.

  • But there are other places where there are other bottlenecks.

  • And then you have a new bottleneck and you have to go attack that.

  • And so, I think in the big picture, we're reaching that bottleneck for pre-training and data.

  • Are we exactly there?

  • It's a little hard to say.

  • But now we have this new mechanism with reasoning and test-time compute.

  • I think if you go back and you think about what it took for AI, for building AGI, for, I would say the last five years, people have thought that, people at the big frontier labs have felt that, you know, step one was pre-training and that the remaining gap to have something that could scale all the way to AGI was reasoning.

  • Some ability to take the same pre-trained model and have the ability to give it more time to think or more compute of various kinds and get a better answer at the other end.

  • And now that that has been cracked, at this point, I think we actually have a very clear path to just focus on scaling.

  • You know, we were talking earlier about the zero-to-one part that's not about scaling.

  • I think there's a really strong case to be made that in LLMs, that's not relevant anymore.

  • And that now we're in the pure scaling regime.

  • I'm pretty impressed by the five levels of AGI and that it feels like things are basically playing out the way that original post on the OpenAI website sort of described it.

  • It's, you know, reasoners are here.

  • And then I'm hearing a ton about innovators.

  • So taking a thing like o3 or, you know, maybe when o3 Pro comes out, that'll be a real moment where you can hook that up to a bio lab and have, you know, sort of autonomous exploration of, you know, scientific spaces.

  • What can you say about that stuff?

  • The really interesting thing about that is we're probably going to be blocked for now on the ability of the models to work in the physical world.

  • It's going to be a little strange.

  • We're probably going to have a model that can explore scientific hypotheses and figure out how to run experiments with them before we have something that can actually run the experiments themselves.

  • And so maybe that's one of those new S-curves.

  • We're back to robotics then.

  • Yeah, exactly.

  • And we're back to robotics.

  • The other thing that I think is really interesting that the reasoning models enable is agents.

  • And it's a very generic term.

  • It's probably a little overplayed.

  • But, you know, fundamentally what reasoning is, is it's the ability for a model to have a coherent chain of thought that is steadily making progress on a problem over a long period of time.

  • And the techniques that give that to you in terms of thinking harder also apply to taking action, you know, in the real world, in the virtual world.

  • I think what we're going to see out of reasoning, out of long thinking, is that it's really going to unlock the possibility of agents to do actions on your behalf, which, you know, has sort of always been possible, but it's just never been quite good enough.

  • And you really need a lot of reliability.

  • And in order for you to be willing to wait five minutes or five hours in order for something to happen, it's got to actually work at the end.

  • And I think that is now in sight.

  • The thing that prevents people from trusting an agent to do the action is mainly the frequency of how often that action is the correct action versus the wrong action.

  • Yeah.

  • There's a rule of thumb that I like, if you want to go, if you want to add a nine, if you want to go from 90 to 99 percent or 99 to 99.9 percent, that's maybe an order of magnitude increase in compute.

  • And historically, we've only been able to make order of magnitude increases in compute by training bigger models.

  • And now with reasoning, we're able to do that by letting the models think for longer.
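That rule of thumb, roughly one order of magnitude more compute per added "nine" of reliability, can be turned into a two-line calculation. This is just the heuristic from the conversation written as arithmetic, not a measured law.

```python
import math

def compute_multiplier(start_reliability, target_reliability):
    """Heuristic from the conversation: each extra 'nine' of reliability
    (90% -> 99% -> 99.9%) costs roughly one order of magnitude more compute."""
    def nines(r):
        # 0.90 -> 1 nine, 0.99 -> 2 nines, 0.999 -> 3 nines.
        return -math.log10(1.0 - r)
    return 10 ** (nines(target_reliability) - nines(start_reliability))

print(compute_multiplier(0.90, 0.99))   # one added nine -> ~10x compute
print(compute_multiplier(0.90, 0.999))  # two added nines -> ~100x compute
```

The point of the surrounding discussion is that test-time reasoning offers a second way to buy those orders of magnitude, thinking longer, instead of only training a bigger model.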

  • And look, letting the models think for longer, this is a really hard problem.

  • With o1, with o3, you know, you're getting longer and longer chains.

  • It requires more scaling.

  • We just talked about how scaling is, you know, the central problem.

  • So this is not easy.

  • It's not done by any means.

  • But there's a very clear path now that allows you to get to those higher and higher levels of reliability.

  • And I think that unlocks so many things downstream.

  • What do you think's happening with like distillation?

  • I was looking at some of these sort of capability graphs of some of the mini models, and it sounds like basically the mini models increasingly are getting better and better.

  • Is that sort of a function of parent models teaching sort of child models or, you know, what's happening there and what can people expect?

  • Yeah, I think over the last year, the big frontier labs and a lot of other people have figured out the tricks to take big models and, you know, take a very particular distribution of user input and train a model that is almost as good as the big model, but much, much smaller and much, much faster.

  • And so I think we're going to see this a lot going forward, especially if you look at Sonnet versus Haiku, you know, Gemini versus Gemini Flash, you know, o1 versus o1 mini, 4o mini.
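Distillation as described, label a particular input distribution with the big model and fit a much smaller model to those labels, can be sketched in miniature. Everything below is a toy stand-in: the "teacher" is a made-up function and the "student" is a single threshold, chosen only to show the shape of the procedure, not how any real lab does it.

```python
import random

def teacher(x):
    """Stand-in for a big frontier model: expensive but accurate.
    Here it is just an arbitrary rule the student should imitate."""
    return 1 if x > 0.5 else 0

def distill(num_examples=1000):
    """Distillation in miniature: sample inputs from the distribution you
    care about, label them with the teacher, fit a much smaller student."""
    random.seed(0)
    xs = [random.random() for _ in range(num_examples)]
    labeled = [(x, teacher(x)) for x in xs]
    # 'Student' = a single threshold, picked to best match the teacher.
    best_threshold, best_acc = 0.0, 0.0
    for t in [i / 100 for i in range(101)]:
        acc = sum((1 if x > t else 0) == y for x, y in labeled) / len(labeled)
        if acc > best_acc:
            best_threshold, best_acc = t, acc
    return best_threshold, best_acc

threshold, accuracy = distill()
print(threshold, accuracy)  # student recovers roughly the teacher's rule
```

The key property this illustrates is that the student only needs to match the teacher on the narrow input distribution it will actually see, which is why a much smaller model can be "almost as good" there.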

  • Every lab has really focused on this.

  • And in fact, you see distillation as a service coming.

  • What would you say to people watching who are trying to make AI startups right now?

  • Often they're vertical startups, but some of them are consumer too, actually.

  • Yeah, I would say if you're a founder, the right approach is to start with the very best model you can because, you know, your startup is only going to be successful if it exploits something about AI that realistically is going to be on, you know, the frontier.

  • So start with the very best model that you can and get it to work.

  • And once you've gotten it to work, then you can use distillation.

  • You can take a dumber model and you can try prompting it.

  • You can try to have the frontier model train the smaller model.

  • But, you know, the most important thing in a startup is actually your time, right?

  • You don't want to be, unless you have to, you don't want to be like Palantir taking three years to get to market.

  • You want to be able to build that product as quickly as possible.

  • And only once you've actually figured out where the value is, probably by iterating with your user, then you can think about cost.

  • Working backwards, it sort of feels like the movie Her is more or less inevitable.

  • I am a little skeptical of the deep emotional connection, you know, that, you know, guys are going to have AI girlfriends.

  • I think that's not what guys are looking for in a girlfriend, frankly.

  • I think, you know, an AI that shops for you, well there it's really helpful to know a lot about your preferences.

  • An AI that is your assistant at work.

  • Again, very helpful to know about your preferences.

  • One other thing I think would be cool would be an AI that is Gary's AI bot.

  • And if I want to know what Gary's thinking, I could just ask your AI bot.

  • And if I get a good enough answer, then I can go about my job.

  • And if not, then I have to, you know, actually bother you in person.

  • You know, I think that would be just a tremendous feat of personalization if you could make something like that happen.

  • And anything that works with you at work needs a huge amount of context about you.

  • It should be able to, you know, see your Slack and your Gmail and all the different productivity tools that you have.

  • And I think it's actually surprising.

  • You know, I think this is actually a real hole in the market because that's not something I can go out and purchase today.

  • I mean, in my mind's eye, what I can imagine is kind of like a super intelligent genie.

  • It knows, you know, who you are, what you're about, and it might actually know, you know, your job, your goals in life.

  • And it'll actually tell you, oh, hey, you should probably do this.

  • And it might go out and get an appointment for you.

  • And like, oh, yeah, it's time to take the LSAT, buddy.

  • You said you wanted to, you know, go be a lawyer.

  • Like, well, this is the first step.

  • You know, do you want to do it?

  • Yes or no.

  • Right.

  • You know, and there's something really interesting about this idea because I think it's very compelling that, you know, the AI is your life coach, but then it goes back to like, so what are you even doing with your life in the first place?

  • Right.

  • If the AI is better than you.

  • And I think there's actually a really deep mystery here.

  • When we were first thinking about GPT-1 back in 2018, you know, if you asked people what AGI was, they would say, well, you know, it's a model that you can actually interact with.

  • It passes the Turing test.

  • It can look at things.

  • It can write code.

  • It can even draw an image for you.

  • We're there.

  • Yeah.

  • And like, we've had this for years, right?

  • And if you said, OK, well, what happens when you get all those capabilities?

  • Say, well, everybody's out of a job.

  • You know, all laptop jobs are immediately automated and game over for humanity.

  • And none of that is happening, right?

  • I mean, yes, AI has had some effects, you know, particularly on people who write code, but, you know, I don't think you can see it in the productivity statistics, unless it's about how big the data centers are that we're building.

  • And I think this is a really deep mystery.

  • Why is it that AI adoption is so slow relative to what we thought should be happening in 2018?

  • What you just said really reminds me of our days at Palantir, actually, where, you know, one of the core missions that, you know, Palantir started with, really, is this idea that, you know, the technology is already here.

  • It's just not evenly distributed.

  • And I feel like that was one of the things you guys actually really discovered.

  • And, you know, part of the reason why Palantir actually exists, it's you went into places in government, three-letter agencies, some of the most impactful decisions that a society might have to make.

    你知道,Palantir 存在的部分原因,就是你進入了政府機構,三字機構,一些對社會影響最大的決策。

  • And you look around and there was no software in there.

    你環顧四周,裡面沒有任何軟件。

  • And that was what Palantir and certainly Palantir government was very early on.

    這就是 Palantir,當然也是 Palantir 政府早期的定位。

  • The fun piece there was thinking through what it is these people actually do, and then how you could completely reimagine it with technology. Say you were checking whether a particular person flying into the U.S. had a record, or whether there was any suspicion: you'd look through 20 different databases.

  • One approach would be to say, well, let's make it faster to look through 20 different databases.

  • Another approach is to say, maybe you can just look it up once, and it checks all the databases for you.
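  That "look it up once" idea is essentially a federated query: one call fans out to every system and merges the hits. A minimal sketch in Python, where the database names and records are hypothetical stand-ins, not any agency's actual systems:

    ```python
    def make_db(records):
        """A toy 'database': maps a person's name to a list of records."""
        def lookup(name):
            return records.get(name, [])
        return lookup

    # Three stand-in databases with made-up contents.
    watchlist = make_db({"J. Doe": ["2019 watchlist entry"]})
    customs = make_db({"J. Doe": ["2021 secondary screening"]})
    visas = make_db({"A. Smith": ["student visa, expired"]})

    ALL_DATABASES = {"watchlist": watchlist, "customs": customs, "visas": visas}

    def federated_lookup(name):
        """One query fans out to every database and merges the results."""
        hits = {}
        for db_name, lookup in ALL_DATABASES.items():
            records = lookup(name)
            if records:
                hits[db_name] = records
        return hits

    print(federated_lookup("J. Doe"))
    # {'watchlist': ['2019 watchlist entry'], 'customs': ['2021 secondary screening']}
    ```

  The analyst asks once; the fan-out and merge happen behind the interface, which is the difference between speeding up the old workflow and replacing it.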

  • And I think we need some twist like that for AI, one that lets people figure out how to use the AI to solve the problem they actually have, not just take their existing workflow and have AI do that work.

  • Yeah, it's not just having the data, and it's not just having the intelligence.

  • What AI desperately needs right now is, like you said, the UI, the software. It's just building software.

  • And if you can put that in a package that a particular person really, really needs, I feel like that's one of the big things we learned at Palantir.

  • There's a whole job that is exactly that: forward deployed engineer.

  • It's a very evocative term, right?

  • Forward deployed: you're not back at the HQ, you're all the way in the customer's office.

  • You're sitting right next to them at their computer, watching how they do something.

  • And then you're making the perfect software that they would never otherwise get access to.

  • The alternative is an Excel spreadsheet, writing SQL statements yourself, or a cost-plus government integrator like Accenture.

  • And they're never going to get something usable.

  • Whereas a really good engineer who's also a good designer, who can understand exactly what that person needs and is trying to do, can build the perfect thing for that very person.

  • And so maybe that's the answer to your question.

  • Why didn't it happen yet?

  • We just need more software engineers who are like that forward deployed engineer to link up the intelligence.

  • And we're there.

  • I think it's really funny, because if you think back to 2015, when I left Palantir, people were skeptical of Palantir because of the existence of the forward deployed engineers.

  • If you had a really good product, you wouldn't need the forward deployed engineers.

  • You wouldn't need to specialize it for every customer.

  • Then wait five years, and Palantir has a great IPO.

  • Wait 10 years.

  • It's a very valuable company.

  • Suddenly everybody is talking about building their forward deployed engineering function.

  • And I think it's a good thing.

  • Hopefully this gives us a lot of software that is actually tuned to what customers need, not just something off the rack where you say, well, there's a way to accomplish what you need to do.

  • Go figure it out.

  • Bob, both of us are parents, and we just spent a lot of time talking about some pretty wild concepts that are about to affect all of society.

  • Has that affected how you think about what we should be doing with our kids?

  • I really struggle with this.

  • And there's a very crisp version of this for me, which is that my eight-year-old son is really excited about coding.

  • He actually is really excited.

  • He wants to start a company.

  • He has a great name for it, and it's going to do asteroid mining and all sorts of cool stuff.

  • And so every day he says, Dad, can you teach me a little bit about how to code?

  • This is actually what I do most with language models.

  • I figure out what he's interested in.

  • Then I have the language model make a lesson for him that teaches some idea I want to teach him.

  • It teaches him about networking, or teaches him about loops, and it fits his interest.
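  The only moving part in that workflow is a prompt that pairs the kid's current interest with the concept you want taught. A rough sketch, where the wording, interests, and age are made-up examples, and the resulting string would be sent to whatever language model API you happen to use:

    ```python
    def build_lesson_prompt(interest, concept, age=8):
        """Combine what the kid cares about with the idea to teach."""
        return (
            f"Write a short, playful coding lesson for a {age}-year-old "
            f"who loves {interest}. The lesson should teach {concept}, "
            f"include one tiny example program, and end with a small "
            f"challenge the kid can try on their own."
        )

    # Example: a loops lesson themed around his asteroid-mining startup.
    prompt = build_lesson_prompt("asteroid mining", "loops")
    print(prompt)
    ```

  The point of the template is that the concept stays fixed across days while the theme tracks whatever the kid is excited about that week.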

  • And my wife asked, why are you doing this if the language models are going to be able to code?

  • And I think the answer is that, right now, this is how you learn how to do critical thinking.

  • I think back to Paul Graham's idea of the resistance of the medium: even once the computer can do the programming for you,

  • I think there's still something to having had your hands in it yourself, knowing what's possible and what's not possible, so that you have that intuition.

  • I think there are going to be two roles that we'll be playing.

  • One will be something like a lone genius.

  • The Alec Radford of the world, working alone at his computer, coming up with some crazy idea.

  • But now with that computer being able to leverage him up so much.

  • And the other role is manager: you will be the CEO of your own firm, and that firm will mostly be AI.

  • I think there will be other humans in there.

  • I don't think the whole company gets replaced, although this is another really interesting question for us to answer.

  • But I think those will be the two jobs of the future: genius and manager.

  • I think that is actually pretty awesome.

  • Those are two things that would be really fun jobs, honestly.

  • When the photographic camera and film came out, what happened to artists?

  • They're still around, and people still learn to paint.

  • And there are probably more people who learn to paint, because more people have an appreciation for art, painting, and the visual arts.

  • So my hope is that that's what happens.

  • And if you go back to the last time we automated away most human jobs: in the 1880s, most people were farmers.

  • And now maybe three percent of Americans are farmers.

  • And we all do jobs.

  • If we tried to explain to people from 1880 what being a software engineer is, or running a startup incubator, they'd say, what the hell is this?

  • Right.

  • These aren't real jobs.

  • At the end of it, I'm very much an optimist about humanity.

  • I think that humans will have important and valuable roles to play.

  • But just like with that first 90 percent of jobs that got automated away, those farmers didn't know what the jobs of their grandchildren would look like.

  • I think we're in that same period now, where we don't know what the jobs of our grandchildren will look like.

  • And we're just going to have to play it by ear and figure it out.

  • I guess going back to robotics, one of my hopes is actually that maybe the level-four innovators will suddenly break through on a bunch of very specific problems that currently hold back robotics.

  • Have you spent time back in that space recently?

  • And what are the odds of that coming together in the next, I don't know, couple of years even?

  • Do you feel like there will be continued breakthroughs on, say, the Figure robot and things like that?

  • What's your sense for robotics in the next year or two?

  • Robotics companies now are where LLM companies were five years ago.

  • So I think in five years, or even sometime in the next five years, we will see the ChatGPT moment for robotics.

  • I think it's a little harder to scale, because you've got to build physical robots.

  • But if you look at companies like Skild AI or Physical Intelligence, who are building foundation models for robots, the progress we've seen there is really dramatic.

  • At some point we're going to get out of that zero-to-one phase, where you're just trying to make it work at all.

  • And we'll be at something where it kind of works.

  • And then we're just scaling to increase the reliability and increase the scope of the market.

  • I remember working with Sam Altman at YC, and he was bringing in some pretty wild hard-tech companies, like Helion, focused on fusion, or Oklo in the energy space.

  • At the time, I don't know if I totally understood why, but now, after the AGI part, it's becoming much more real.

  • Plus it feels like, if you add robotics, that's one of the more profound triumvirates of technology that might come together, and it will create quite a lot more abundance for everyone.

  • Yeah, I mean, whatever part of the stack isn't automated becomes the bottleneck.

  • And so I think we're really going to end up automating the scientist, the innovator, before we automate the experiment doer.

  • But if that comes through, I think the potential for really fast scientific advance is totally there.

  • I think we will find some other bottleneck.

  • I think we're going to look back at this conversation, where I say we did all the things and science is only going 30% faster than it was.

  • Why isn't it 300 times faster?
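  The gap between "30% faster" and "300 times faster" is roughly Amdahl's law applied to science: whatever fraction of the work isn't automated caps the overall speedup, no matter how fast the automated part gets. A small sketch with illustrative, made-up numbers:

    ```python
    def overall_speedup(automated_fraction, automation_speedup):
        """Amdahl's law: new_time = (1 - f) + f / s; speedup = 1 / new_time."""
        remaining = (1 - automated_fraction) + automated_fraction / automation_speedup
        return 1 / remaining

    # Even if the automated half of science gets 1000x faster, the
    # un-automated half (experiments, review, deployment) dominates:
    print(round(overall_speedup(0.5, 1000), 2))   # ≈ 2.0, nowhere near 300x

    # Modest automation of a quarter of the work gives roughly "30% faster":
    print(round(overall_speedup(0.25, 10), 2))    # ≈ 1.29
    ```

  Which is why each breakthrough tends to reveal the next bottleneck rather than eliminate bottlenecks altogether.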

  • And we'll have to figure it out.

  • I mean, it'd be a great problem to have, honestly.

  • 30% faster is great, but 300 times, that would be insane.

  • Hey, room for thousands more startups.

  • That sounds great.

  • Bob, thank you so much for joining us.

  • I feel like I learn a lot every time I get to see you.

  • Great to see you again.

  • Thanks for coming on the channel.

  • It's always fun to have these conversations with you, Gary.
