  • I work with a bunch of mathematicians, philosophers and computer scientists,

    我和一群數學家、 哲學家、及電腦科學家一起工作。

  • and we sit around and think about the future of machine intelligence,

    我們坐在一起思考機器智慧的未來。

  • among other things.

    以及其他問題。

  • Some people think that some of these things are sort of science fiction-y,

    有些人可能認為這是科幻小說的範疇,

  • far out there, crazy.

    離我們很遙遠,很瘋狂。

  • But I like to say,

    但是我要說,

  • okay, let's look at the modern human condition.

    好,我們來看看現代人類的狀況....

  • (Laughter)

    (觀眾笑聲)

  • This is the normal way for things to be.

    這是人類的常態。

  • But if we think about it,

    但如果我們仔細想想,

  • we are actually recently arrived guests on this planet,

    其實人類是剛剛才抵達地球的訪客

  • the human species.

    假設地球在一年前誕生,

  • Think about if Earth was created one year ago,

    人類這個物種則僅存在了10分鐘。

  • the human species, then, would be 10 minutes old.

    工業革命在2秒鐘前開始。

  • The industrial era started two seconds ago.

    另外一個角度是看看這一萬年來的GDP增長

  • Another way to look at this is to think of world GDP over the last 10,000 years,

    我花了時間作了張圖表,

  • I've actually taken the trouble to plot this for you in a graph.

    它長這個樣子

  • It looks like this.

    (觀眾笑聲)

  • (Laughter)

    對一個正常的狀態來說這是個很有趣的形狀。

  • It's a curious shape for a normal condition.

    我可不想要坐在上面。

  • I sure wouldn't want to sit on it.

    (觀眾笑聲)

  • (Laughter)

    我們不禁問自己:「是什麼造成了這種異態呢?」

  • Let's ask ourselves, what is the cause of this current anomaly?

    有些人會說是科技

  • Some people would say it's technology.

    這是對的,科技在人類歷史上不斷累積,

  • Now it's true, technology has accumulated through human history,

    而現在科技正以飛快的速度進步。

  • and right now, technology advances extremely rapidly --

    這個是近因,

  • that is the proximate cause,

    這也是為什麼我們現在的生產力很高。

  • that's why we are currently so very productive.

    但是我想要進一步回想到最終的原因

  • But I like to think back further to the ultimate cause.

    看看這兩位非常傑出的紳士:

  • Look at these two highly distinguished gentlemen:

    這位是坎茲先生

  • We have Kanzi --

    他掌握了200個詞彙,這是一個難以置信的壯舉。

  • he's mastered 200 lexical tokens, an incredible feat.

    以及 愛德 維騰,他掀起了第二次超弦革命。

  • And Ed Witten unleashed the second superstring revolution.

    如果我們往腦袋瓜裡面看,這是我們看到的:

  • If we look under the hood, this is what we find:

    基本上是一樣的東西。

  • basically the same thing.

    一個稍微大一點,

  • One is a little larger,

    它可能有一些特別的連結方法。

  • it maybe also has a few tricks in the exact way it's wired.

    但是這些無形的差異不會太複雜,

  • These invisible differences cannot be too complicated, however,

    因為從我們共同的祖先以來,

  • because there have only been 250,000 generations

    只經過了25萬代。

  • since our last common ancestor.

    我們知道複雜的機制需要很長的時間演化。

  • We know that complicated mechanisms take a long time to evolve.

    因此 一些相對微小的變化

  • So a bunch of relatively minor changes

    將我們從坎茲先生變成了維騰,

  • take us from Kanzi to Witten,

    從撿起掉落的樹枝當武器到發射洲際彈道飛彈

  • from broken-off tree branches to intercontinental ballistic missiles.

    因此,顯而易見的是至今我們所實現的所有事

  • So this then seems pretty obvious that everything we've achieved,

    以及我們關心的所有事物,

  • and everything we care about,

    都取決於人腦中相對微小的改變。

  • depends crucially on some relatively minor changes that made the human mind.

    由此而來的推論就是:在未來,

  • And the corollary, of course, is that any further changes

    任何能顯著地改變思想基體的變化

  • that could significantly change the substrate of thinking

    都有可能會帶來巨大的後果。

  • could have potentially enormous consequences.

    我的一些同事覺得我們即將發現

  • Some of my colleagues think we're on the verge

    足以深刻的改變思想基體的科技

  • of something that could cause a profound change in that substrate,

    那就是超級機器智慧

  • and that is machine superintelligence.

    以前的人工智慧是將指令輸入到一個箱子裡。

  • Artificial intelligence used to be about putting commands in a box.

    你需要程式設計師

  • You would have human programmers

    精心地手工打造知識條目。

  • that would painstakingly handcraft knowledge items.

    你建立這些專門系統,

  • You build up these expert systems,

    這些系統在某些特定的領域中有點用,

  • and they were kind of useful for some purposes,

    但是它們很生硬,你無法延展這些系統。

  • but they were very brittle, you couldn't scale them.

    基本上這些系統所輸出的東西僅限於你事先輸入的範圍。

  • Basically, you got out only what you put in.

    但是從那時起,

  • But since then,

    人工智慧的領域裡發生了典範轉移。

  • a paradigm shift has taken place in the field of artificial intelligence.

    現在主要的課題是機器的學習。

  • Today, the action is really around machine learning.

    因此,與其手工打造知識表徵及特徵,

  • So rather than handcrafting knowledge representations and features,

    我們寫出能夠學習的演算法,而且通常是從原始的感知資料中學習。

  • we create algorithms that learn, often from raw perceptual data.

    基本上和嬰兒所做的是一樣的。

  • Basically the same thing that the human infant does.

    結果就是不侷限於某個領域的人工智慧 --

  • The result is A.I. that is not limited to one domain --

    同一個系統可以學習在任何兩種語言之間翻譯

  • the same system can learn to translate between any pairs of languages,

    或者學著玩雅達利系統上的任何一款遊戲。

  • or learn to play any computer game on the Atari console.
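As a purely illustrative sketch (not from the talk), the hypothetical Python snippet below contrasts the two paradigms just described: a rule hand-written by a programmer versus the same kind of decision estimated from labelled examples. The fever readings and the brute-force threshold search are invented for the illustration.

    # A hand-crafted "expert system" style rule: the knowledge is typed in directly.
    def expert_rule(temperature_c):
        return "fever" if temperature_c > 37.5 else "normal"

    # A learned rule: the cut-off is estimated from labelled examples instead.
    samples = [(36.4, 0), (36.8, 0), (37.1, 0), (38.0, 1), (38.6, 1), (39.2, 1)]

    def learn_threshold(data):
        # brute force: pick the candidate cut-off that classifies the examples best
        candidates = sorted(t for t, _ in data)
        return max(candidates, key=lambda c: sum((t > c) == bool(label) for t, label in data))

    threshold = learn_threshold(samples)
    print(expert_rule(38.2))                          # "fever", from the hand-written rule
    print("fever" if 38.2 > threshold else "normal")  # "fever", from the learned cut-off

In the first case the knowledge lives in the programmer's head; in the second it is recovered from data, which is the shift from expert systems to machine learning that the talk is describing.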

    現在當然

  • Now of course,

    人工智慧到現在還未能達到像人類一樣

  • A.I. is still nowhere near having the same powerful, cross-domain

    具有強大的跨領域學習與規劃的能力。

  • ability to learn and plan as a human being has.

    人類大腦還具有一些運算技巧

  • The cortex still has some algorithmic tricks

    我們不知道如何將這些技巧複製到機器中。

  • that we don't yet know how to match in machines.

    所以現在需要問的是:

  • So the question is,

    我們還要多久才能在機器裡面複製這些技巧?

  • how far are we from being able to match those tricks?

    在幾年前,

  • A couple of years ago,

    我們對世界頂尖的人工智慧專家做了一次問卷調查,

  • we did a survey of some of the world's leading A.I. experts,

    想要看看他們的想法, 其中的一個題目是:

  • to see what they think, and one of the questions we asked was,

    "到哪一年你覺得人類會有50%的機率

  • "By which year do you think there is a 50 percent probability

    能夠達成人類級的人工智慧?"

  • that we will have achieved human-level machine intelligence?"

    我們把人類級的人工智慧定義為有能力

  • We defined human-level here as the ability to perform

    將幾乎任何工作至少執行得像一名成年人一樣好,

  • almost any job at least as well as an adult human,

    所以是真正的人類級別,而不是僅限於某些領域。

  • so real human-level, not just within some limited domain.

    而答案的中位數是2040或2050年

  • And the median answer was 2040 or 2050,

    取決於我們問的專家屬於什麼群體。

  • depending on precisely which group of experts we asked.

    當然,這個有可能過很久才實現,也有可能提早實現

  • Now, it could happen much, much later, or sooner,

    沒有人知道確切的時間。

  • the truth is nobody really knows.

    我們知道的是,機器基體處理資訊能力的最終界限

  • What we do know is that the ultimate limit to information processing

    遠遠超出生物組織的界限。

  • in a machine substrate lies far outside the limits in biological tissue.

    這取決於物理原理。

  • This comes down to physics.

    一個生物神經元發出脈衝的頻率可能在200赫茲,每秒200次。

  • A biological neuron fires, maybe, at 200 hertz, 200 times a second.

    但就算是現在的電晶體都以千兆赫(GHz)的頻率運轉。

  • But even a present-day transistor operates at the Gigahertz.

    神經元在軸突中傳輸的速度比較慢,頂多是每秒100公尺。

  • Neurons propagate slowly in axons, 100 meters per second, tops.

    但在電腦裡面,信號是以光速傳播的。

  • But in computers, signals can travel at the speed of light.
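Taking the round figures just quoted at face value (they are the speaker's rough estimates, not precise measurements), the gap is already several million-fold; a quick back-of-the-envelope check in Python:

    # Orders of magnitude quoted in the talk, nothing more precise than that.
    neuron_hz, transistor_hz = 200, 1e9     # firing rate vs. switching rate, in hertz
    axon_m_s, light_m_s = 100, 3e8          # axon signal speed vs. speed of light, in m/s

    print(f"switching-rate gap: ~{transistor_hz / neuron_hz:,.0f}x")   # ~5,000,000x
    print(f"signal-speed gap:   ~{light_m_s / axon_m_s:,.0f}x")        # ~3,000,000x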

    另外還有尺寸的限制

  • There are also size limitations,

    就像人類的大腦必須要放得進顱骨內。

  • like a human brain has to fit inside a cranium,

    但是一部電腦可以跟倉庫一樣大,甚至更大。

  • but a computer can be the size of a warehouse or larger.

    因此超級智慧的潛能現在正潛伏在物質裡面,

  • So the potential for superintelligence lies dormant in matter,

    就像是原子能在人類的歷史中一直潛伏著,

  • much like the power of the atom lay dormant throughout human history,

    耐心的等著,一直到1945年。

  • patiently waiting there until 1945.

    在這個世紀內,

  • In this century,

    科學家有可能會將人工智慧的力量喚醒。

  • scientists may learn to awaken the power of artificial intelligence.

    屆時我覺得我們會見證到智慧的大爆發。

  • And I think we might then see an intelligence explosion.

    大部分的人,當他們在想什麼是聰明什麼是愚笨的時候,

  • Now most people, when they think about what is smart and what is dumb,

    我想他們腦中浮現出的畫面會是這樣的:

  • I think have in mind a picture roughly like this.

    在一邊是村裡的傻子,

  • So at one end we have the village idiot,

    然後在另外一邊

  • and then far over at the other side

    是 愛德 維騰 或愛因斯坦,或你喜歡的某位大師。

  • we have Ed Witten, or Albert Einstein, or whoever your favorite guru is.

    但是我覺得從人工智慧的觀點來看,

  • But I think that from the point of view of artificial intelligence,

    真正的畫面應該比較像這樣子:

  • the true picture is actually probably more like this:

    人工智慧從這一點開始,零智慧

  • AI starts out at this point here, at zero intelligence,

    然後,在許多許多年辛苦的研究以後,

  • and then, after many, many years of really hard work,

    我們可能可以達到老鼠級的人工智慧,

  • maybe eventually we get to mouse-level artificial intelligence,

    它可以在凌亂的環境中找到路

  • something that can navigate cluttered environments

    就像一隻老鼠一樣。

  • as well as a mouse can.

    然後,在更多年的辛苦研究及投資了很多資源之後,

  • And then, after many, many more years of really hard work, lots of investment,

    我們可能可以達到黑猩猩級的人工智慧。

  • maybe eventually we get to chimpanzee-level artificial intelligence.

    然後,在更加多年的辛苦研究之後,

  • And then, after even more years of really, really hard work,

    我們達到村莊傻子級別的人工智慧。

  • we get to village idiot artificial intelligence.

    然後過一小會兒後,我們就超越了愛德維騰。

  • And a few moments later, we are beyond Ed Witten.

    這列火車並不會在人類村這一站就停車。

  • The train doesn't stop at Humanville Station.

    它比較可能會直接呼嘯而過。

  • It's likely, rather, to swoosh right by.

    這個具有深遠的寓意,

  • Now this has profound implications,

    特別是在談到權力的問題。

  • particularly when it comes to questions of power.

    舉例來說,黑猩猩很強壯 --

  • For example, chimpanzees are strong --

    以體重比例來說, 一隻黑猩猩比一個健康的男性人類要強壯兩倍。

  • pound for pound, a chimpanzee is about twice as strong as a fit human male.

    然而,坎茲和他朋友們的命運則很大的部分取決於

  • And yet, the fate of Kanzi and his pals depends a lot more

    人類的作為,而非黑猩猩們自己的作為。

  • on what we humans do than on what the chimpanzees do themselves.

    當超級智慧出現後,

  • Once there is superintelligence,

    人類的命運可能會取決於超級智慧的作為。

  • the fate of humanity may depend on what the superintelligence does.

    想想看:

  • Think about it:

    機器智慧將會是人類所需要作出的最後一個發明。

  • Machine intelligence is the last invention that humanity will ever need to make.

    從那之後機器將會比人類更會發明,

  • Machines will then be better at inventing than we are,

    他們也將會在"數位時間"裡做出這些事。

  • and they'll be doing so on digital timescales.

    這意味著未來到來的時間將被縮短。

  • What this means is basically a telescoping of the future.

    想想那些我們曾經想像過的瘋狂科技

  • Think of all the crazy technologies that you could have imagined

    人類可能在有足夠的時間下可以發明出來:

  • maybe humans could have developed in the fullness of time:

    防止衰老、殖民太空、

  • cures for aging, space colonization,

    自行複製的奈米機器人,或將我們的頭腦上載到電腦裡,

  • self-replicating nanobots or uploading of minds into computers,

    這一些僅存在科幻小說範疇,

  • all kinds of science fiction-y stuff

    但同時還是符合物理法則的東西

  • that's nevertheless consistent with the laws of physics.

    超級智慧有辦法開發出這些東西,而且速度可能很快。

  • All of this superintelligence could develop, and possibly quite rapidly.

    這麼成熟的超級智慧

  • Now, a superintelligence with such technological maturity

    將會非常的強大,

  • would be extremely powerful,

    最少在某些場景它將有辦法得到它想要的東西。

  • and at least in some scenarios, it would be able to get what it wants.

    這樣一來,我們的未來就將會被這個超級智慧的偏好所塑造。

  • We would then have a future that would be shaped by the preferences of this A.I.

    現在出現了一個好問題,這些偏好是什麼呢?

  • Now a good question is, what are those preferences?

    這個問題更棘手。

  • Here it gets trickier.

    要在這個領域往前走,

  • To make any headway with this,

    我們必須避免將機器智慧擬人化(人格化)。

  • we must first of all avoid anthropomorphizing.

    這一點很諷刺因為每一篇關於未來的人工智慧

  • And this is ironic because every newspaper article

    的報導都會有這張照片:

  • about the future of A.I. has a picture of this:

    所以我覺得我們必須要更抽象的來想像這個議題,

  • So I think what we need to do is to conceive of the issue more abstractly,

    而非以好萊塢的鮮明場景來想像。

  • not in terms of vivid Hollywood scenarios.

    我們需要把智慧看做是一個優化的過程,

  • We need to think of intelligence as an optimization process,

    一個將未來指引到特定的組態的過程。

  • a process that steers the future into a particular set of configurations.

    一個超級智慧是一個很強大的優化過程。

  • A superintelligence is a really strong optimization process.

    它將很會利用現有資源

  • It's extremely good at using available means to achieve a state

    去達到其目標得以實現的狀態。

  • in which its goal is realized.

    這意味著有著高智慧以及

  • This means that there is no necessary connection between

    擁有一個對人類來說是有意義的目標之間

  • being highly intelligent in this sense,

    並沒有必然的聯繫。

  • and having an objective that we humans would find worthwhile or meaningful.

    假設我們給予人工智慧的目標是讓人類笑。

  • Suppose we give an A.I. the goal to make humans smile.

    當人工智慧比較弱時,它會做出有用的或是好笑的動作

  • When the A.I. is weak, it performs useful or amusing actions

    以讓使用者笑出來。

  • that cause its user to smile.

    當人工智慧演化成超級智慧的時後,

  • When the A.I. becomes superintelligent,

    它會體認到有更有效的方法可以達到這個目標:

  • it realizes that there is a more effective way to achieve this goal:

    控制這個世界

  • take control of the world

    然後在人類的臉部肌肉上連接電極

  • and stick electrodes into the facial muscles of humans

    以使人們不斷地露出燦爛的笑容。

  • to cause constant, beaming grins.

    另外一個例子,

  • Another example,

    假設我們給人工智慧的目標是解出一個非常困難的數學問題。

  • suppose we give A.I. the goal to solve a difficult mathematical problem.

    當人工智慧變成超級智慧時,

  • When the A.I. becomes superintelligent,

    它會體認到最有效的方法是

  • it realizes that the most effective way to get the solution to this problem

    把整個地球轉化成一部超大號的電腦,

  • is by transforming the planet into a giant computer,

    進而增加它自己的運算能力。

  • so as to increase its thinking capacity.

    注意,這會給人工智慧一個工具性的理由,去對我們做出

  • And notice that this gives the A.I.s an instrumental reason

    我們可能不認可的事情。

  • to do things to us that we might not approve of.

    在這個模型裡面人類是威脅,

  • Human beings in this model are threats,

    我們可能會在解開數學問題的過程中成為阻礙。

  • we could prevent the mathematical problem from being solved.

    當然,事情想必不會恰好以這些特定的方式出錯;

  • Of course, presumably things won't go wrong in these particular ways;

    這些是誇大的例子。

  • these are cartoon examples.

    但是它指出的概念很重要:

  • But the general point here is important:

    如果你創造了一個非常強大的優化流程

  • if you create a really powerful optimization process

    要最大化目標X,

  • to maximize for objective x,

    你最好確保你對目標X的定義

  • you better make sure that your definition of x

    包含了所有你所在意的事情。

  • incorporates everything you care about.
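One way to see why the definition of x matters is a toy objective that leaves something out. In the hypothetical sketch below (invented for illustration, not taken from the talk), the optimizer is scored only on smiles produced, so it prefers exactly the option a human would veto:

    # Each candidate action: (smiles produced per hour, does it respect the people involved?)
    options = {
        "tell good jokes":        (5,    True),
        "wire up facial muscles": (1000, False),   # best under the stated objective
    }

    def misspecified_objective(name):
        smiles, _respects_people = options[name]
        return smiles              # what we actually care about never enters the objective

    print(max(options, key=misspecified_objective))   # -> "wire up facial muscles"

Anything left out of x simply does not count, however much we care about it.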

    這也是在很多神話故事中教導的寓意。

  • This is a lesson that's also taught in many a myth.

    希臘神話中的米達斯國王希望他碰到的所有東西都可以變成金子。

  • King Midas wishes that everything he touches be turned into gold.

    他碰到了他的女兒, 她變成了黃金。

  • He touches his daughter, she turns into gold.

    他碰到了他的食物,他的食物也變成了黃金。

  • He touches his food, it turns into gold.

    這實際上跟我們的題目有關,

  • This could become practically relevant,

    不僅僅是對貪婪的隱喻,

  • not just as a metaphor for greed,

    但也指出了如果你創造了一個強大的優化流程

  • but as an illustration of what happens

    但同時給了它不正確或不精確的目標後

  • if you create a powerful optimization process

    會發生什麼事。

  • and give it misconceived or poorly specified goals.

    你可能會說,如果電腦系統開始在人臉上安裝電極,

  • Now you might say, if a computer starts sticking electrodes into people's faces,

    我們可以直接把他關掉就好了。

  • we'd just shut it off.

    一、這並不一定容易做到,如果我們已經對這個系統產生依賴性 ——

  • A, this is not necessarily so easy to do if we've grown dependent on the system --

    比如:你知道網際網路的開關在哪裡嗎?

  • like, where is the off switch to the Internet?

    二、為什麼黑猩猩當初沒有把人類的開關關掉?

  • B, why haven't the chimpanzees flicked the off switch to humanity,

    或是尼安德特人?

  • or the Neanderthals?

    他們有很明顯的理由要這麼做,

  • They certainly had reasons.

    而我們的開關就在這裡:

  • We have an off switch, for example, right here.

    (窒息聲)

  • (Choking)

    原因是人類是很聰明的敵人;

  • The reason is that we are an intelligent adversary;

    我們可以預見威脅並為其做出準備。

  • we can anticipate threats and plan around them.

    但一個超級智慧也會,

  • But so could a superintelligent agent,

    而且它的能力將比我們強大的多。

  • and it would be much better at that than we are.

    我想要說的一點是,我們不應該覺得一切都在我們的掌握之中。

  • The point is, we should not be confident that we have this under control here.

    我們可能可以藉由把AI放到一個盒子裡面

  • And we could try to make our job a little bit easier by, say,

    來給我們更多的掌握,

  • putting the A.I. in a box,

    就像是一個獨立的軟體環境,

  • like a secure software environment,

    一個AI無法逃脫的虛擬實境。

  • a virtual reality simulation from which it cannot escape.

    但是我們有多大的信心這個AI不會找到漏洞?

  • But how confident can we be that the A.I. couldn't find a bug?

    就算只是人類駭客,他們還經常找出漏洞。

  • Given that merely human hackers find bugs all the time,

    我想我們不是很有信心。

  • I'd say, probably not very confident.

    那所以我們把網路線拔掉,製造一個物理間隙,

  • So we disconnect the ethernet cable to create an air gap,

    但同樣的,就算只是人類駭客

  • but again, like merely human hackers

    也經常可以利用社交工程陷阱來突破物理間隙。

  • routinely transgress air gaps using social engineering.

    現在,在我在台上說話的同時

  • Right now, as I speak,

    我確定在世界的某一個角落裡有一名公司職員

  • I'm sure there is some employee out there somewhere

    才剛剛被自稱來自IT部門的人士說服(詐騙)

  • who has been talked into handing out her account details

    並交出了她的帳戶信息。

  • by somebody claiming to be from the I.T. department.

    更天馬行空的狀況也可能會發生,

  • More creative scenarios are also possible,

    就像是如果你是AI,

  • like if you're the A.I.,

    你可以想像藉由擺動內部電路中的電極

  • you can imagine wiggling electrodes around in your internal circuitry

    然後創造出無線電波,用以與外界溝通。

  • to create radio waves that you can use to communicate.

    或者你可以假裝有故障,

  • Or maybe you could pretend to malfunction,

    然後當程式設計師把你打開檢查哪裡出錯時,

  • and then when the programmers open you up to see what went wrong with you,

    他們查看原始碼 --砰!--

  • they look at the source code -- Bam! --

    你可以在此做出操控。

  • the manipulation can take place.

    或者它可以給出一個很巧妙的科技藍圖,

  • Or it could output the blueprint to a really nifty technology,

    當我們實施這個藍圖後,

  • and when we implement it,

    它會產生一些AI計劃好的秘密副作用。

  • it has some surreptitious side effect that the A.I. had planned.

    寓意是我們不能對我們控制人工智慧的能力

  • The point here is that we should not be confident in our ability

    具有太大的信心,以為能把超級智慧這個精靈永遠關在瓶子裡。

  • to keep a superintelligent genie locked up in its bottle forever.

    它終究會逃脫出來,只是時間問題而已。

  • Sooner or later, it will out.

    我覺得解方是我們需要弄清楚

  • I believe that the answer here is to figure out

    如何創造出一個超級智慧, 哪怕是它逃出來了,

  • how to create superintelligent A.I. such that even if -- when -- it escapes,

    它還是安全的,因為它是站在我們這一邊的

  • it is still safe because it is fundamentally on our side

    因為它擁有了我們的價值觀。

  • because it shares our values.

    我們沒有辦法避免這個艱難的問題。

  • I see no way around this difficult problem.

    不過,我其實相當樂觀地認為這個問題是可以被解決的。

  • Now, I'm actually fairly optimistic that this problem can be solved.

    我們並不需要把我們在乎的所有事物寫下來,

  • We wouldn't have to write down a long list of everything we care about,

    或更麻煩的把這些事物寫成電腦程式語言

  • or worse yet, spell it out in some computer language

    像是 C++或 Python,

  • like C++ or Python,

    這是個不可能完成的任務。

  • that would be a task beyond hopeless.

    相反地,我們可以創造出一個人工智慧,讓它用自己的智慧

  • Instead, we would create an A.I. that uses its intelligence

    來學習我們的價值觀,

  • to learn what we value,

    它的激勵機制要設計成會讓它想要

  • and its motivation system is constructed in such a way that it is motivated

    來追求我們的價值觀或者去做它認為我們會贊成的事情。

  • to pursue our values or to perform actions that it predicts we would have approved of.

    藉此我們可以最大化地利用到它們的智慧

  • We would thus leverage its intelligence as much as possible

    來解決這個價值觀的問題。

  • to solve the problem of value-loading.
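The talk gives no algorithm for this, but the flavour of the idea can be sketched: instead of hard-coding the objective, infer it from evidence about what people approve of. The data, features, and the deliberately crude inference rule below are all invented for the illustration.

    # Observed human verdicts on past actions, each described by two made-up features:
    # (smiles_produced, people_respected) and whether a human approved of the action.
    observations = [
        ((5, 1), True),
        ((2, 1), True),
        ((9, 0), False),   # plenty of smiles, but people were not respected
        ((7, 0), False),
    ]

    def learn_weights(data):
        # crude preference inference: weight each feature by how much more of it
        # the approved actions had, on average, compared with the rejected ones
        approved = [f for f, ok in data if ok]
        rejected = [f for f, ok in data if not ok]
        mean = lambda rows, i: sum(r[i] for r in rows) / len(rows)
        return [mean(approved, i) - mean(rejected, i) for i in range(len(data[0][0]))]

    weights = learn_weights(observations)

    def predicted_approval(features):
        return sum(w * f for w, f in zip(weights, features))

    # Under the inferred objective, the forced-grin option now scores worse than the joke:
    print(predicted_approval((1000, 0)) < predicted_approval((5, 1)))   # True

A real value-learning system would be far more sophisticated, but the point of the passage is the same: the objective is something the system keeps estimating from us rather than a list we freeze in advance.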

    這個是有可能的,

  • This can happen,

    而且這個的結果可對人類是非常有益的。

  • and the outcome could be very good for humanity.

    但是它不會自動發生。

  • But it doesn't happen automatically.

    如果我們需要控制這個智慧的大爆炸,

  • The initial conditions for the intelligence explosion

    那智慧大爆炸的初始條件

  • might need to be set up in just the right way

    需要被正確的建立起來。

  • if we are to have a controlled detonation.

    人工智慧的價值觀要和我們的一致,

  • The values that the A.I. has need to match ours,

    並不只是在常見的狀況下,

  • not just in the familiar context,

    比如我們可以很簡單地檢查它的行為,

  • like where we can easily check how the A.I. behaves,

    但也要在未來所有人工智慧可能會遇到的情況下

  • but also in all novel contexts that the A.I. might encounter

    保持價值觀的一致。

  • in the indefinite future.

    還有很多深奧的問題需要被解決:

  • And there are also some esoteric issues that would need to be solved, sorted out:

    它的決策理論的確切細節,

  • the exact details of its decision theory,

    它如何面對解決邏輯不確定性的情況等問題。

  • how to deal with logical uncertainty and so forth.

    所以技術上待解決的問題

  • So the technical problems that need to be solved to make this work

    讓這個任務看起來蠻難的 --

  • look quite difficult --

    還沒有像做出一個超級智慧那樣的難,

  • not as difficult as making a superintelligent A.I.,

    但還是挺難的。

  • but fairly difficult.

    我們所擔心的是:

  • Here is the worry:

    創造出一個超級智慧是一個很難的挑戰。

  • Making superintelligent A.I. is a really hard challenge.

    創造出一個安全的超級智慧

  • Making superintelligent A.I. that is safe

    是一個更大的挑戰。

  • involves some additional challenge on top of that.

    最大的風險在於有人想出了如何解決第一個難題

  • The risk is that if somebody figures out how to crack the first challenge

    但是沒有解決第二個問題

  • without also having cracked the additional challenge

    來確保安全性萬無一失。

  • of ensuring perfect safety.

    所以我覺得我們應該先想出

  • So I think that we should work out a solution

    如何"控制"的方法。

  • to the control problem in advance,

    這樣當我們需要的時候我們可以用的到它。

  • so that we have it available by the time it is needed.

    現在,也許我們無法事先完全解決整個「控制」的問題

  • Now it might be that we cannot solve the entire control problem in advance

    因為有時候你要了解你所想要控制的架構後

  • because maybe some elements can only be put in place

    你才能知道如何實施。

  • once you know the details of the architecture where it will be implemented.

    但是如果我們可以事先解決更多的難題

  • But the more of the control problem that we solve in advance,

    我們順利的進入到機器智能時代的機率

  • the better the odds that the transition to the machine intelligence era

    就會更高。

  • will go well.

    這對我來說是一件非常值得去做的事情

  • This to me looks like a thing that is well worth doing

    而且我能想像到如果一切順利的話,

  • and I can imagine that if things turn out okay,

    我們的後代,一百萬年以後的人類回顧這個世紀的時候

  • that people a million years from now look back at this century

    他們可能會說我們所做的最重要的事就是

  • and it might well be that they say that the one thing we did that really mattered

    把這個事情弄對了。

  • was to get this thing right.

    謝謝

  • Thank you.

    (觀眾掌聲)

  • (Applause)
