  • Intelligence is what gives us power over the world.

  • If we're going to make entities, machines that are more intelligent than us, they would be more powerful than human beings.

  • And so then the question is:

  • How do we retain power forever over entities that are more powerful than ourselves?

  • About five years ago, what had previously been a very obscure branch of AI technology suddenly began to take off.

  • These developments represent a tipping point in the history of the field.

  • And they're giving people now, in a very real sense, a glimpse of what it would be like if we had artificial general intelligence without regulation to ensure that those systems are safe.

  • We may well lose control over our own future.

  • My name is Stuart Russell.

  • I work in the area of artificial intelligence, and have been doing so for about 45 years.

  • As soon as we had working computers, the people who developed them wanted to make those machines intelligent.

  • And then about five years ago, language models suddenly began to take off.

  • So a "language model" is a predictor that says, "Given a sequence of words, what's the next word likely to be?"

  • The latest model, GPT-4 from OpenAI, is a 32,768-gram model, meaning it predicts the next word from the preceding 32,767 words.

  • So that's an enormous model.

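To make the "next word" idea concrete, here is a minimal sketch in Python of a toy next-word predictor based on bigram counts. The function names (`train_bigram_model`, `predict_next`) and the tiny corpus are illustrative inventions, not anything from the talk; real language models such as GPT-4 operate on tokens rather than whole words, condition on tens of thousands of preceding tokens, and use a trained neural network instead of raw counts.

```python
# Toy next-word predictor: "given a sequence of words, what's the next word
# likely to be?"  Here the "sequence" is just the single preceding word
# (a bigram model); GPT-4-class models condition on tens of thousands of
# preceding tokens with a neural network rather than simple counts.
from collections import Counter, defaultdict


def train_bigram_model(text):
    """Count, for each word, how often each candidate next word follows it."""
    counts = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts


def predict_next(model, word):
    """Return the most frequent word observed after `word`, or None if unseen."""
    following = model.get(word.lower())
    return following.most_common(1)[0][0] if following else None


corpus = "the cat sat on the mat and the cat sat on the hat"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # -> "cat" (seen twice, vs. "mat"/"hat" once each)
```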

  • All of a sudden, these language models went from being a little niche technology to something that appears to be extremely intelligent.

  • You can ask it to draw pictures.

  • You can ask it to write code.

  • You can say, "I've forgotten the proof of Pythagoras' theorem, but I'd love you to give it to me in the form of a Shakespeare sonnet," and it will do that.

  • The interesting question is: Is it intelligence?

  • Has the system built an internal representation of the world?

  • And the answer is: We haven't the faintest idea.

  • A lot of people ask me, "Should we be worried about these systems?"

  • I think it's unlikely that the current generation of models represent a real threat to human control.

  • I don't wanna come across as a naysayer or a Luddite, but I think we could face serious risks:

  • When you release a system, you should provide convincing evidence that it's going to behave itself,

  • that it's not going to cause risks to people by giving medical advice or advising them to commit suicide.

  • The threat that most people recognize, that something that can outthink human beings,

  • what some people call 'AGI' or Artificial General Intelligence,

  • would clearly represent a threat to humanity if we couldn't figure out how to solve the control problem.

  • The way we develop AI systems, we specify an objective, and off it goes.

  • If the system knows the objective, it's gonna pursue it at all costs.

  • What happens if we specify the objective incorrectly?

  • Imagine, for example, that an AI system is helping us figure out how to fix climate change.

  • One of the consequences we don't like is the acidification of the oceans.

  • The AI system figures this all out and proposes a catalytic reaction, but that reaction starts to absorb oxygen from the atmosphere, and that's enough for us all to die slowly and painfully.

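As a rough illustration of "specify an objective, and off it goes", the toy planner below ranks candidate plans purely by the stated objective and never looks at anything else. The plan names and numbers are invented for this sketch; no real system or real chemistry is being described.

```python
# Toy planner in the spirit of "we specify an objective, and off it goes":
# candidate plans are ranked only by the stated objective (countering ocean
# acidification), so a side effect the objective never mentions -- oxygen
# loss -- is invisible to the choice. All names and numbers are invented.
plans = [
    {"name": "slow carbonate buffering", "ocean_ph_gain": 0.1, "oxygen_loss": 0.00},
    {"name": "rapid catalytic reaction", "ocean_ph_gain": 0.4, "oxygen_loss": 0.25},
]


def objective(plan):
    # The objective we specified: counteract acidification as much as possible.
    return plan["ocean_ph_gain"]


best = max(plans, key=objective)
print(best["name"])  # -> "rapid catalytic reaction"; the oxygen loss never enters the score
```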

  • If we change the way we think about AI so that its only goal is to bring about the futures that we humans prefer,

  • you get this very different kind of behavior that defers to humans, that asks permission, that is cautious.

  • It'll ask permission before messing with the oceans.

  • It'll say, "Is it okay to get rid of a quarter of the oxygen in the atmosphere?"

  • And we would say, "Ah, no, don't like that."

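The contrasting design described here, a system that defers and asks permission, can be sketched in the same toy setting. This is only my illustration of the idea, not Russell's actual formal framework: before committing to a high-impact plan, the agent checks with a human and falls back to a low-impact option if the answer is no. The helper names, threshold, and numbers are all assumptions made up for the example.

```python
# Toy contrast to the planner above (illustration only, not Russell's actual
# formal framework): before committing to a plan with a large side effect,
# the system defers to a human and asks permission; if the answer is no, it
# falls back to the least impactful option. All names and numbers are invented.
plans = [
    {"name": "slow carbonate buffering", "ocean_ph_gain": 0.1, "oxygen_loss": 0.00},
    {"name": "rapid catalytic reaction", "ocean_ph_gain": 0.4, "oxygen_loss": 0.25},
]


def ask_human(question):
    print(question)
    return False  # stand-in for the human answering "no, don't like that"


def choose_with_deference(plans, impact_threshold=0.05):
    best = max(plans, key=lambda p: p["ocean_ph_gain"])
    if best["oxygen_loss"] > impact_threshold:
        allowed = ask_human(
            f"Is it okay to get rid of {best['oxygen_loss']:.0%} "
            "of the oxygen in the atmosphere?"
        )
        if not allowed:
            # Defer to the human: pick the lowest-impact plan instead.
            return min(plans, key=lambda p: p["oxygen_loss"])
    return best


print(choose_with_deference(plans)["name"])  # -> "slow carbonate buffering"
```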

  • We have to start designing standards to make sure that whenever you design AI systems and release them in the world, that they conform to this basic design template.

  • So what I'm working on is a new way of thinking about AI that is not susceptible to this problem.

  • We're going to need regulation.

  • In fact, I think it's long past the time when we need some regulation.

  • If we do, we really could do marvelous things.

  • We could greatly accelerate the rate of scientific progress.

  • We could have much better healthcare.

  • Maybe we could have better politics.

  • So I don't want to say we should cut off AI research at this stage.

  • We accept these types of regulations in many, many other spheres.

  • For example, in aviation, you can develop supersonic aircraft, but you can't put passengers in them until you've shown that they are safe.

  • We're still at the very early stages of figuring out how to do this type of regulation, but now that AI systems are really quite powerful,

  • now that they can talk to someone for days on end, we need to have guarantees that the systems don't pose an undue risk.
