I'm going to talk about my research on the long-term future of artificial intelligence. In particular, I want to tell you about a very important phenomenon called "Intelligence Explosion." There are two reasons that I work on intelligence explosion and that I think it's worth sharing. The first is that it's a phenomenon of immense theoretical interest for those who want to understand intelligence on a fundamental level. The second reason is practical. It has to do with the effects that an intelligence explosion could have.

Depending on the conditions under which an intelligence explosion could arise, and on the dynamics that it exhibits, it could mean that AI changes very rapidly from a safe technology, relatively easy to handle, to a volatile technology that is difficult to handle safely. In order to navigate this hazard, we need to understand intelligence explosion.

Intelligence explosion is a theoretical phenomenon. In that sense, it's a bit like a hypothetical particle in particle physics: there are arguments that explain why it should exist, but we have not been able to experimentally confirm it yet. Nevertheless, the thought experiment that explains what an intelligence explosion would look like is relatively simple. And it goes like this.

Suppose we had a machine that was much more capable than today's computers. This machine, given a task, could form hypotheses from observations, use those hypotheses to make plans, execute the plans, and observe the outcomes relative to the task, and do it all efficiently, within a reasonable amount of time.
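
A minimal, runnable sketch of that cycle, using a deliberately trivial task (finding a hidden number by bisection); the task and every name in it are illustrative assumptions of mine, not anything from the talk, and the point is only the hypothesize-plan-execute-observe loop:

```python
# Toy version of the cycle: hypothesize from observations, plan,
# execute, observe the outcome, and repeat until the task is done.
def pursue(target: int, lo: int = 0, hi: int = 100) -> int:
    observations = []                                # (guess, feedback) pairs
    while True:
        # Hypothesis: the target lies in [lo, hi]. Plan: probe the midpoint.
        guess = (lo + hi) // 2
        feedback = "high" if guess > target else "low" if guess < target else "done"
        observations.append((guess, feedback))       # observe the outcome
        if feedback == "done":
            return guess
        # Revise the hypothesis using the new observation.
        lo, hi = (lo, guess - 1) if feedback == "high" else (guess + 1, hi)

print(pursue(target=37))  # converges in a handful of iterations
```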

This kind of machine could be given science and engineering tasks to do on its own, autonomously. And this is the key step in the thought experiment: this machine could even be tasked with performing AI research, designing faster and better machines.

Let's say our machine goes to work, and after a while produces blueprints for a second generation of AI that's more efficient, more capable, and more general than the first. The second generation can be tasked once again with designing improved machines, leading to a third generation, a fourth, a fifth, and so on. An outside observer would see a very large and very rapid increase in the abilities of these machines, and it's this large and rapid increase that we call Intelligence Explosion.
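
The shape of that generational loop can be shown with a toy calculation; the doubling factor, speed-up factor, and starting values below are arbitrary assumptions chosen only to illustrate the compounding, not estimates from the talk:

```python
# Toy model of successive generations: each one is more capable than the
# last and designs its successor faster. Every constant here is arbitrary;
# only the compounding shape matters.
capability = 1.0    # generation 1's ability, in arbitrary units
design_time = 12.0  # months generation 1 needs to design generation 2
elapsed = 0.0

for generation in range(1, 8):
    print(f"gen {generation}: capability {capability:7.1f} at t = {elapsed:5.1f} months")
    elapsed += design_time
    capability *= 2.0   # each generation is roughly twice as capable...
    design_time *= 0.7  # ...and finishes its successor's design sooner
```

Capability grows geometrically while the gap between generations shrinks, which is the large, rapid increase the outside observer would see.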

Now, if it's the case that in order to undergo an intelligence explosion, many new pieces of hardware need to be built, or new manufacturing technologies developed, then an explosion will be slower, although still quite fast by historical standards. However, looking at the history of algorithmic improvement, it turns out that just as much improvement tends to come from new software as from new hardware.

This is true in areas like physics simulation, game playing, image recognition, and many parts of machine learning. What this means is that our outside observer may not see physical changes in the machines that are undergoing an intelligence explosion. They may just see a series of programs writing successively more capable programs. It stands to reason that this process could give rise to programs that are much more capable at any number of intellectual tasks than any human is.

Just as we now build machines that are much stronger, faster, and more precise at all kinds of physical tasks, it's certainly possible to build machines that are more efficient at intellectual tasks. The human brain is not at the upper end of computational efficiency. And it goes further than this. There is no particular reason to define our scale by the abilities of a single human or a single brain.

The largest thermonuclear bombs release more energy in less than a second than the human population of Earth does in a day.
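
That comparison checks out on a back-of-the-envelope basis. The figures below (a roughly 50-megaton device, about 2,000 kilocalories per person per day, about seven billion people) are rough order-of-magnitude assumptions of mine, not numbers from the talk:

```python
# Rough arithmetic: largest thermonuclear bomb vs. one day of human
# metabolic energy. All inputs are order-of-magnitude estimates.
MEGATON_TNT_J = 4.184e15                 # joules per megaton of TNT
bomb_energy = 50 * MEGATON_TNT_J         # ~50 Mt device: ~2.1e17 J in under a second

KCAL_J = 4184                            # joules per kilocalorie
humanity_per_day = 7e9 * 2000 * KCAL_J   # ~7e9 people x 2000 kcal: ~5.9e16 J

print(f"{bomb_energy:.1e} J vs {humanity_per_day:.1e} J "
      f"-> ratio {bomb_energy / humanity_per_day:.1f}x")  # about 3.6x
```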

It's not out of the question to think that machines designed to perform intellectual tasks, and then honed over many generations of improvement, could similarly outperform the productive thinking of the human race.

This is the theoretical phenomenon called Intelligence Explosion. We don't have a good theory of intelligence explosion yet, but there is reason to think that it could happen at software speed and could reach a level of capability that's far greater than any human or group of humans at any number of intellectual tasks.

The first time I encountered this argument, I more or less ignored it. Looking back, it seems crazy for me, someone who takes AI seriously, to walk away from intelligence explosion. And I'll give you two reasons for that.

The first reason is a theorist's reason. A theorist should be interested in the large-scale features of their field, in the contours of their phenomena of choice, as determined by the fundamental forces, or interactions, or building blocks of their subject. As someone who aspires to be a good theorist of intelligence, I can't, in good faith, ignore intelligence explosion as a major feature of many simple, straightforward theories of intelligence.

What intelligence explosion means is that intelligence improvement is not uniform. There is a threshold below which improvements tend to peter out, but above that threshold, intelligence grows like compound interest, increasing more and more. This threshold would have to emerge from any successful theory of intelligence, the way phase transitions emerge from thermodynamics: intelligence would effectively have a boiling point.
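
A toy model of that "compound interest above a threshold" picture; the growth rule, rates, and threshold below are arbitrary assumptions of mine, chosen only to show the two regimes:

```python
# Below the threshold, each improvement is smaller than the last and
# progress peters out; above it, improvements compound like interest.
def improve(capability: float, threshold: float = 1.0, steps: int = 20) -> float:
    for _ in range(steps):
        if capability < threshold:
            capability += 0.01 * (threshold - capability)  # diminishing returns
        else:
            capability *= 1.5                              # compounding returns
    return capability

print(improve(0.5))   # stalls below the threshold (~0.59)
print(improve(1.01))  # explodes past it (~3.4e3)
```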

Seen this way, exploring intelligence explosion is exactly the kind of thing a theorist wants to do, especially in a field like AI, where we are trying to move from our current state of partial theories, pseudo-theories, arguments, and thought experiments toward a fully-fledged predictive theory of intelligence.

This is the intelligence explosion. In its most basic form, it relies on a simple premise: that AI research is not so different from other intellectual tasks, and can likewise be performed by machines. We don't have a good understanding of it yet, but there's reason to think that it can happen at software speed and reach levels of capability far exceeding any human or group of humans.

The second reason, which I alluded to at the start of the talk, is that intelligence explosion could change AI very suddenly from being a benign technology to being a volatile technology that requires significant thought about safety before use, or even development.

Today's AI, by contrast, is not volatile. I don't mean that AI systems can't cause harm. Weaponization of AI is ongoing, and accidental harms can arise from unanticipated systemic effects or from faulty assumptions. But on the whole, these sorts of harms should be manageable. Today's AI is not so different from today's other technologies.

Intelligence explosion, however, highlights an important fact: AI will become more general, more capable, and more efficient, perhaps very quickly, and could become more so than any human or group of humans. This kind of AI will require a radically different approach to be used safely, and small incidents could plausibly escalate to cause large amounts of harm.

To understand how AI could be hazardous, let's consider an analogy to microorganisms. There are two traits that make microorganisms more difficult to handle safely than a simple toxin: microorganisms are goal-oriented, and they are, what I'm going to call, chain reactive.

Goal-oriented means that a microorganism's behaviors tend to push toward some certain result; in their case, that's more copies of themselves. Chain reactive means that we don't expect a group of microorganisms to stay put: we expect their zone of influence to grow, and we expect their population to spread.

Hazards can arise because a microorganism's values don't often align with human goals and values. I don't have particular use for an infinite number of clones of this guy. Chain reactivity can make this problem worse, since small releases of a microorganism can balloon into large, population-spanning pandemics.

Very advanced AI, such as could arise from an intelligence explosion, could be quite similar in some ways to a microorganism. Most AI systems are task-oriented: they are designed by humans to complete a task. Capable AIs will use many different kinds of actions and many types of plans to accomplish their tasks. And flexible AIs will be able to learn to thrive, that is, to make accurate predictions and effective plans in a wide variety of environments.

Since AIs will act to accomplish their tasks as well as possible, they will also be chain reactive. They'll have use for more resources; they'll want to improve themselves, to spread to other computer systems, and to make backup copies of themselves, in order to make sure that their task gets done. Because of their task orientation and chain reactivity, sharing an environment with this kind of AI would be hazardous. They may use some of the things we care about, our raw materials and our stuff, to accomplish their ends. And there is no task that has yet been devised that is compatible with human safety under these circumstances.

This hazard is made worse by intelligence explosion, in which very volatile AI could arise quickly from benign AI. Instead of a gradual learning period, in which we come to terms with the power of very efficient AI, we could be thrust suddenly into a world where AI is much more powerful than it is today.

This scenario is not inevitable. It's mostly dependent upon some research group, or company, or government walking into intelligence explosion blindly. If we can understand intelligence explosion, and if we have sufficient will and self-control as a society, then we should be able to avoid an AI outbreak.

There is still the problem of chain reactivity, though. It would only take one group to release AI into the world, even if nearly all groups are careful. One group walking into intelligence explosion, accidentally or on purpose, without taking proper precautions, could release an AI that will self-improve and cause immense amounts of harm to everyone else.

I'd like to close with four questions. These are questions that I'd like to see answered, because they'll tell us more about the theory of artificial intelligence, and that theory is what will lead us to understand intelligence explosion well enough to mitigate the risks that it poses. Some of these questions are being actively pursued by researchers at my home institution, the Future of Humanity Institute at Oxford, and by others, like the Machine Intelligence Research Institute.

My first question is: can we get a precise, predictive theory of intelligence explosion? What happens when AI starts to do AI research? In particular, I'd like to know how fast software can improve its intellectual capabilities. Many of the most volatile scenarios we've examined include a rapid, self-contained takeoff, such as could only happen under a software improvement circumstance. If there is some key resource that limits software improvement, or if it's the case that such improvement isn't possible below a certain threshold of capability, these would be very useful facts from a safety standpoint.

Question two: what are our options, political or technological, for dealing with the potential harms from super-efficient artificial intelligences? One option, of course, is to not build them in the first place. But this would require exceedingly good cooperation between many governments, commercial entities, and even research groups, and that cooperation and that level of understanding isn't easy to come by. It would also depend, to some extent, on an answer to question one, so that we know how to prevent an intelligence explosion.

Another option would be to make sure that everyone knows how to devise safe tasks. It's intuitively plausible that there are some kinds of tasks that can be assigned by a safety-conscious team without posing too much risk. It's another question entirely how these kinds of safety standards could be applied uniformly and reliably enough, all over the world, to prevent serious harm.

This leads into question three. Very capable AIs, if they can be programmed correctly, should be able to determine what is valuable by modeling human preferences and philosophical arguments. Is it possible to assign a task of learning what is valuable, and then acting to pursue that aim? This turns out to be a highly technical problem. Some of the groundwork has been laid by researchers like Eliezer Yudkowsky, Nick Bostrom, Paul Christiano, and myself, but we still have a long way to go.

My final question: as a machine self-improves, it may make mistakes. Even if the first AI is programmed to pursue valuable ends, later ones may not be. Designing a stable and reliable self-improvement process turns out to involve some open problems in logic and in decision theory. These problems are being actively pursued at research workshops held by the Machine Intelligence Research Institute.

Those are my four questions. I've only been able to cover the basics in this talk. If you'd like to know more about the long-term future of AI and about the intelligence explosion, I can recommend David Chalmers' excellent paper, "The Singularity: A Philosophical Analysis," as well as a book forthcoming in 2014 called "Superintelligence," by Nick Bostrom. And of course, there are links and references on my website.

I believe that managing and understanding intelligence explosion will be a critical concern, not just for the theory of AI, but for the safe use of AI, and possibly, for humanity as a whole.

Thank you.

(Applause)
