
  • I'm going to talk about a failure of intuition

  • that many of us suffer from.

  • It's really a failure to detect a certain kind of danger.

  • I'm going to describe a scenario

  • that I think is both terrifying

  • and likely to occur,

  • and that's not a good combination,

  • as it turns out.

  • And yet rather than be scared, most of you will feel

  • that what I'm talking about is kind of cool.

  • I'm going to describe how the gains we make

  • in artificial intelligence

  • could ultimately destroy us.

  • And in fact, I think it's very difficult to see how they won't destroy us

  • or inspire us to destroy ourselves.

  • And yet if you're anything like me,

  • you'll find that it's fun to think about these things.

  • And that response is part of the problem.

  • OK? That response should worry you.

  • And if I were to convince you in this talk

  • that we were likely to suffer a global famine,

  • either because of climate change or some other catastrophe,

  • and that your grandchildren, or their grandchildren,

  • are very likely to live like this,

  • you wouldn't think,

  • "Interesting.

  • I like this TED Talk."

  • Famine isn't fun.

  • Death by science fiction, on the other hand, is fun,

  • and one of the things that worries me most about the development of AI at this point

  • is that we seem unable to marshal an appropriate emotional response

  • to the dangers that lie ahead.

  • I am unable to marshal this response, and I'm giving this talk.

  • It's as though we stand before two doors.

    我們停止發展「製造有智能的機器」。

  • Behind door number one,

    我們的電腦硬體和軟體就因故停止變得更好。

  • we stop making progress in building intelligent machines.

    現在花一點時間想想為什麼這會發生。

  • Our computer hardware and software just stops getting better for some reason.

    我的意思是,人工智能和自動化如此有價值,

  • Now take a moment to consider why this might happen.

    我們會持續改善我們的科技,只要我們有能力做。

  • I mean, given how valuable intelligence and automation are,

    有什麼東西能阻止我們這麼做呢?

  • we will continue to improve our technology if we are at all able to.

    一場全面性的核子戰爭?

  • What could stop us from doing this?

    一場全球性的流行病?

  • A full-scale nuclear war?

    一次小行星撞擊地球?

  • A global pandemic?

    小賈斯汀成為美國總統?

  • An asteroid impact?

    (笑聲)

  • Justin Bieber becoming president of the United States?

    重點是:必須有什麼我們知道的東西會毀滅我們的文明。

  • (Laughter)

    你必須想像到底能有多糟

  • The point is, something would have to destroy civilization as we know it.

    才能阻止我們持續改善我們的科技,

  • You have to imagine how bad it would have to be

    永久地,

  • to prevent us from making improvements in our technology

    一代又一代人。

  • permanently,

    幾乎從定義上,這就是

  • generation after generation.

    人類歷史上發生過的最糟的事。

  • Almost by definition, this is the worst thing

    所以唯一的替代選項,

  • that's ever happened in human history.

    這是在二號門之後的東西,

  • So the only alternative,

    是我們繼續改善我們的智能機器

  • and this is what lies behind door number two,

    年復一年,年復一年。

  • is that we continue to improve our intelligent machines

    到某個時間點我們會造出比我們還聰明的機器,

  • year after year after year.

    而我們一旦造出比我們聰明的機器,

  • At a certain point, we will build machines that are smarter than we are,

    它們就會開始改善自己。

  • and once we have machines that are smarter than we are,

    然後我們承擔數學家 ij Good 稱為

  • they will begin to improve themselves.

    「人工智能爆發」的風險,

  • And then we risk what the mathematician IJ Good called

    那個過程會脫離我們的掌握。

  • an "intelligence explosion,"

    這時常被漫畫化,如我的這張圖,

  • that the process could get away from us.

    一種恐懼:充滿惡意的機械人軍團

  • Now, this is often caricatured, as I have here,

  • as a fear that armies of malicious robots

  • will attack us.

  • But that isn't the most likely scenario.

  • It's not that our machines will become spontaneously malevolent.

  • The concern is really that we will build machines

  • that are so much more competent than we are

  • that the slightest divergence between their goals and our own

  • could destroy us.

  • Just think about how we relate to ants.

  • We don't hate them.

  • We don't go out of our way to harm them.

  • In fact, sometimes we take pains not to harm them.

  • We step over them on the sidewalk.

  • But whenever their presence

  • seriously conflicts with one of our goals,

  • let's say when constructing a building like this one,

  • we annihilate them without a qualm.

  • The concern is that we will one day build machines

  • that, whether they're conscious or not,

  • could treat us with similar disregard.

  • Now, I suspect this seems far-fetched to many of you.

  • I bet there are those of you who doubt that superintelligent AI is possible,

  • much less inevitable.

  • But then you must find something wrong with one of the following assumptions.

  • And there are only three of them.

  • Intelligence is a matter of information processing in physical systems.

  • Actually, this is a little bit more than an assumption.

  • We have already built narrow intelligence into our machines,

  • and many of these machines perform

  • at a level of superhuman intelligence already.

  • And we know that mere matter

  • can give rise to what is called "general intelligence,"

  • an ability to think flexibly across multiple domains,

  • because our brains have managed it. Right?

  • I mean, there's just atoms in here,

  • and as long as we continue to build systems of atoms

  • that display more and more intelligent behavior,

  • we will eventually, unless we are interrupted,

  • we will eventually build general intelligence

  • into our machines.

  • It's crucial to realize that the rate of progress doesn't matter,

  • because any progress is enough to get us into the end zone.

  • We don't need Moore's law to continue. We don't need exponential progress.

  • We just need to keep going.

  • The second assumption is that we will keep going.

  • We will continue to improve our intelligent machines.

  • And given the value of intelligence --

  • I mean, intelligence is either the source of everything we value

  • or we need it to safeguard everything we value.

  • It is our most valuable resource.

  • So we want to do this.

  • We have problems that we desperately need to solve.

  • We want to cure diseases like Alzheimer's and cancer.

  • We want to understand economic systems. We want to improve our climate science.

  • So we will do this, if we can.

  • The train is already out of the station, and there's no brake to pull.

  • Finally, we don't stand on a peak of intelligence,

  • or anywhere near it, likely.

  • And this really is the crucial insight.

  • This is what makes our situation so precarious,

  • and this is what makes our intuitions about risk so unreliable.

  • Now, just consider the smartest person who has ever lived.

  • On almost everyone's shortlist here is John von Neumann.

  • I mean, the impression that von Neumann made on the people around him,

  • and this included the greatest mathematicians and physicists of his time,

  • is fairly well-documented.

  • If only half the stories about him are half true,

  • there's no question

  • he's one of the smartest people who has ever lived.

  • So consider the spectrum of intelligence.

  • Here we have John von Neumann.

  • And then we have you and me.

  • And then we have a chicken.

  • (Laughter)

  • Sorry, a chicken.

  • (Laughter)

  • There's no reason for me to make this talk more depressing than it needs to be.

  • (Laughter)

  • It seems overwhelmingly likely, however, that the spectrum of intelligence

  • extends much further than we currently conceive,

  • and if we build machines that are more intelligent than we are,

  • they will very likely explore this spectrum

  • in ways that we can't imagine,

  • and exceed us in ways that we can't imagine.

  • And it's important to recognize that this is true by virtue of speed alone.

  • Right? So imagine if we just built a superintelligent AI

  • that was no smarter than your average team of researchers

  • at Stanford or MIT.

  • Well, electronic circuits function about a million times faster

  • than biochemical ones,

  • so this machine should think about a million times faster

  • than the minds that built it.

  • So you set it running for a week,

  • and it will perform 20,000 years of human-level intellectual work,

  • week after week after week.

  • How could we even understand, much less constrain,

  • a mind making this sort of progress?

  • The other thing that's worrying, frankly,

  • is that, imagine the best case scenario.

  • So imagine we hit upon a design of superintelligent AI

  • that has no safety concerns.

  • We have the perfect design the first time around.

  • It's as though we've been handed an oracle

  • that behaves exactly as intended.

  • Well, this machine would be the perfect labor-saving device.

  • It can design the machine that can build the machine

  • that can do any physical work,

  • powered by sunlight,

  • more or less for the cost of raw materials.

  • So we're talking about the end of human drudgery.

  • We're also talking about the end of most intellectual work.

  • So what would apes like ourselves do in this circumstance?

  • Well, we'd be free to play Frisbee and give each other massages.

  • Add some LSD and some questionable wardrobe choices,

  • and the whole world could be like Burning Man.

  • (Laughter)

  • Now, that might sound pretty good,

  • but ask yourself what would happen

  • under our current economic and political order?

  • It seems likely that we would witness

  • a level of wealth inequality and unemployment

  • that we have never seen before.

  • Absent a willingness to immediately put this new wealth

  • to the service of all humanity,

  • a few trillionaires could grace the covers of our business magazines

  • while the rest of the world would be free to starve.

  • And what would the Russians or the Chinese do

  • if they heard that some company in Silicon Valley

  • was about to deploy a superintelligent AI?

  • This machine would be capable of waging war,

  • whether terrestrial or cyber,

  • with unprecedented power.

  • This is a winner-take-all scenario.

  • To be six months ahead of the competition here

  • is to be 500,000 years ahead,

  • at a minimum.

  • So it seems that even mere rumors of this kind of breakthrough

  • could cause our species to go berserk.

  • Now, one of the most frightening things,

  • in my view, at this moment,

  • are the kinds of things that AI researchers say

  • when they want to be reassuring.

  • And the most common reason we're told not to worry is time.

  • This is all a long way off, don't you know.

  • This is probably 50 or 100 years away.

  • One researcher has said,

  • "Worrying about AI safety

  • is like worrying about overpopulation on Mars."

  • This is the Silicon Valley version

  • of "don't worry your pretty little head about it."

  • (Laughter)

  • No one seems to notice

  • that referencing the time horizon

  • is a total non sequitur.

  • If intelligence is just a matter of information processing,

  • and we continue to improve our machines,

  • we will produce some form of superintelligence.

  • And we have no idea how long it will take us

  • to create the conditions to do that safely.

  • Let me say that again.

  • We have no idea how long it will take us

  • to create the conditions to do that safely.

  • And if you haven't noticed, 50 years is not what it used to be.

  • This is 50 years in months.

  • This is how long we've had the iPhone.

  • This is how long "The Simpsons" has been on television.

  • Fifty years is not that much time

  • to meet one of the greatest challenges our species will ever face.

  • Once again, we seem to be failing to have an appropriate emotional response

  • to what we have every reason to believe is coming.

  • The computer scientist Stuart Russell has a nice analogy here.

  • He said, imagine that we received a message from an alien civilization,

  • which read:

  • "People of Earth,

  • we will arrive on your planet in 50 years.

  • Get ready."

  • And now we're just counting down the months until the mothership lands?

  • We would feel a little more urgency than we do.

  • Another reason we're told not to worry

  • is that these machines can't help but share our values

  • because they will be literally extensions of ourselves.

  • They'll be grafted onto our brains,

  • and we'll essentially become their limbic systems.

  • Now take a moment to consider

  • that the safest and only prudent path forward,

  • recommended,

  • is to implant this technology directly into our brains.

  • Now, this may in fact be the safest and only prudent path forward,

  • but usually one's safety concerns about a technology

  • have to be pretty much worked out before you stick it inside your head.

  • (Laughter)

  • The deeper problem is that building superintelligent AI on its own

  • seems likely to be easier

  • than building superintelligent AI

  • and having the completed neuroscience

  • that allows us to seamlessly integrate our minds with it.

  • And given that the companies and governments doing this work

  • are likely to perceive themselves as being in a race against all others,

  • given that to win this race is to win the world,

  • provided you don't destroy it in the next moment,

  • then it seems likely that whatever is easier to do

  • will get done first.

  • Now, unfortunately, I don't have a solution to this problem,

  • apart from recommending that more of us think about it.

  • I think we need something like a Manhattan Project

  • on the topic of artificial intelligence.

  • Not to build it, because I think we'll inevitably do that,

  • but to understand how to avoid an arms race

  • and to build it in a way that is aligned with our interests.

  • When you're talking about superintelligent AI

  • that can make changes to itself,

  • it seems that we only have one chance to get the initial conditions right,

  • and even then we will need to absorb

  • the economic and political consequences of getting them right.

  • But the moment we admit

  • that information processing is the source of intelligence,

  • that some appropriate computational system is what the basis of intelligence is,

  • and we admit that we will improve these systems continuously,

  • and we admit that the horizon of cognition very likely far exceeds

  • what we currently know,

  • then we have to admit

  • that we are in the process of building some sort of god.

  • Now would be a good time

  • to make sure it's a god we can live with.

  • Thank you very much.

  • (Applause)


[TED] Sam Harris: Can we build AI without losing control over it?

Published by 大佑 on January 14, 2021