TED | Iyad Rahwan: What moral decisions should driverless cars make?

Translator: Lilian Chiu
Reviewer: NAN-KUN WU

Today I'm going to talk about technology and society. The Department of Transport estimated that last year 35,000 people died from traffic crashes in the US alone. Worldwide, 1.2 million people die every year in traffic accidents. If there was a way we could eliminate 90 percent of those accidents, would you support it? Of course you would. This is what driverless car technology promises to achieve by eliminating the main source of accidents -- human error.

Now picture yourself in a driverless car in the year 2030, sitting back and watching this vintage TEDxCambridge video. (Laughter) All of a sudden, the car experiences mechanical failure and is unable to stop. If the car continues, it will crash into a bunch of pedestrians crossing the street, but the car may swerve, hitting one bystander, killing them to save the pedestrians. What should the car do, and who should decide? What if instead the car could swerve into a wall, crashing and killing you, the passenger, in order to save those pedestrians?

This scenario is inspired by the trolley problem, which was invented by philosophers a few decades ago to think about ethics.

Now, the way we think about this problem matters. We may for example not think about it at all. We may say this scenario is unrealistic, incredibly unlikely, or just silly. But I think this criticism misses the point because it takes the scenario too literally. Of course no accident is going to look like this; no accident has two or three options where everybody dies somehow. Instead, the car is going to calculate something like the probability of hitting a certain group of people: if you swerve in one direction versus another, you might slightly increase the risk to passengers or other drivers versus pedestrians. It's going to be a more complex calculation, but it's still going to involve trade-offs, and trade-offs often require ethics.
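
To make that kind of calculation concrete, here is a minimal sketch of an expected-harm comparison. It is not from the talk; the maneuver names, probabilities, and equal weighting are all invented for illustration.

```python
# Hypothetical sketch: score candidate maneuvers by expected harm.
# The probabilities below are invented; a real system would estimate
# them from sensor data, and the weighting itself is an ethical choice.

maneuvers = {
    "continue_straight": {"pedestrians": 0.90, "passengers": 0.05, "other_drivers": 0.00},
    "swerve_left":       {"pedestrians": 0.10, "passengers": 0.30, "other_drivers": 0.20},
    "swerve_right":      {"pedestrians": 0.15, "passengers": 0.60, "other_drivers": 0.00},
}

def expected_harm(risks: dict) -> float:
    """Total expected harm, weighting every group equally."""
    return sum(risks.values())

# The 'best' maneuver still shifts some risk onto somebody --
# that is the trade-off the talk is pointing at.
best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print(best)  # -> "swerve_left" under these invented numbers
```

Even in this toy version, the ethics lives in `expected_harm`: weighting all groups equally is itself a value judgment.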

We might say then, "Well, let's not worry about this. Let's wait until technology is fully ready and 100 percent safe." Suppose that we can indeed eliminate 90 percent of those accidents, or even 99 percent in the next 10 years. What if eliminating the last one percent of accidents requires 50 more years of research? Should we not adopt the technology? That's 60 million people dead in car accidents if we maintain the current rate. So the point is, waiting for full safety is also a choice, and it also involves trade-offs.

People online on social media have been coming up with all sorts of ways to not think about this problem. One person suggested the car should just swerve somehow in between the passengers -- (Laughter) and the bystander. Of course if that's what the car can do, that's what the car should do. We're interested in scenarios in which this is not possible. And my personal favorite was a suggestion by a blogger to have an eject button in the car that you press -- (Laughter) just before the car self-destructs. (Laughter)

So if we acknowledge that cars will have to make trade-offs on the road, how do we think about those trade-offs, and how do we decide? Well, maybe we should run a survey to find out what society wants, because ultimately, regulations and the law are a reflection of societal values.

So this is what we did. With my collaborators, Jean-François Bonnefon and Azim Shariff, we ran a survey in which we presented people with these types of scenarios. We gave them two options inspired by two philosophers: Jeremy Bentham and Immanuel Kant. Bentham says the car should follow utilitarian ethics: it should take the action that will minimize total harm -- even if that action will kill a bystander and even if that action will kill the passenger. Immanuel Kant says the car should follow duty-bound principles, like "Thou shalt not kill." So you should not take an action that explicitly harms a human being, and you should let the car take its course even if that's going to harm more people. What do you think? Bentham or Kant?
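
To see how differently the two rules behave, here is a small sketch, again purely illustrative and not part of the study, encoding the utilitarian rule and the duty-bound rule over the same options:

```python
# Hypothetical sketch of the two decision rules described above.
# Each option records how many people the action would actively kill
# and how many people would die in total.

options = [
    {"name": "stay_course", "actively_kills": 0, "total_deaths": 5},  # car does nothing
    {"name": "swerve",      "actively_kills": 1, "total_deaths": 1},  # car acts, kills bystander
]

def bentham(options):
    """Utilitarian rule: take whatever action minimizes total harm."""
    return min(options, key=lambda o: o["total_deaths"])

def kant(options):
    """Duty-bound rule: never take an action that explicitly kills;
    let the car take its course even if more people die."""
    permissible = [o for o in options if o["actively_kills"] == 0]
    return permissible[0] if permissible else None

print(bentham(options)["name"])  # -> "swerve" (1 death instead of 5)
print(kant(options)["name"])     # -> "stay_course" (no active killing)
```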

Here's what we found. Most people sided with Bentham. So it seems that people want cars to be utilitarian, minimize total harm, and that's what we should all do. Problem solved. But there is a little catch. When we asked people whether they would purchase such cars, they said, "Absolutely not." (Laughter) They would like to buy cars that protect them at all costs, but they want everybody else to buy cars that minimize harm. (Laughter) We've seen this problem before. It's called a social dilemma.

And to understand the social dilemma, we have to go a little bit back in history. In the 1800s, English economist William Forster Lloyd published a pamphlet which describes the following scenario. You have a group of farmers -- English farmers -- who are sharing a common land for their sheep to graze. Now, if each farmer brings a certain number of sheep -- let's say three sheep -- the land will be rejuvenated, the farmers are happy, the sheep are happy, everything is good. Now, if one farmer brings one extra sheep, that farmer will do slightly better, and no one else will be harmed. But if every farmer made that individually rational decision, the land will be overrun, and it will be depleted to the detriment of all the farmers, and of course, to the detriment of the sheep.

We see this problem in many places: in the difficulty of managing overfishing, or in reducing carbon emissions to mitigate climate change.

When it comes to the regulation of driverless cars, the common land now is basically public safety -- that's the common good -- and the farmers are the passengers or the car owners who are choosing to ride in those cars. And by making the individually rational choice of prioritizing their own safety, they may collectively be diminishing the common good, which is minimizing total harm.

It's called the tragedy of the commons, traditionally, but I think in the case of driverless cars, the problem may be a little bit more insidious because there is not necessarily an individual human being making those decisions. So car manufacturers may simply program cars that will maximize safety for their clients, and those cars may learn automatically on their own that doing so requires slightly increasing risk for pedestrians. So to use the sheep metaphor, it's like we now have electric sheep that have a mind of their own. (Laughter) And they may go and graze even if the farmer doesn't know it. So this is what we may call the tragedy of the algorithmic commons, and it offers new types of challenges.

Typically, traditionally, we solve these types of social dilemmas using regulation, so either governments or communities get together, and they decide collectively what kind of outcome they want and what sort of constraints on individual behavior they need to implement. And then using monitoring and enforcement, they can make sure that the public good is preserved.

So why don't we just, as regulators, require that all cars minimize harm? After all, this is what people say they want. And more importantly, I can be sure that as an individual, if I buy a car that may sacrifice me in a very rare case, I'm not the only sucker doing that while everybody else enjoys unconditional protection.

In our survey, we did ask people whether they would support regulation, and here's what we found. First of all, people said no to regulation; and second, they said, "Well, if you regulate cars to do this and to minimize total harm, I will not buy those cars."

So ironically, by regulating cars to minimize harm, we may actually end up with more harm because people may not opt into the safer technology even if it's much safer than human drivers.

I don't have the final answer to this riddle, but I think as a starting point, we need society to come together to decide what trade-offs we are comfortable with and to come up with ways in which we can enforce those trade-offs.

As a starting point, my brilliant students, Edmond Awad and Sohan Dsouza, built the Moral Machine website, which generates random scenarios at you -- basically a bunch of random dilemmas in a sequence where you have to choose what the car should do in a given scenario. And we vary the ages and even the species of the different victims. So far we've collected over five million decisions by over one million people worldwide from the website. And this is helping us form an early picture of what trade-offs people are comfortable with and what matters to them -- even across cultures.

But more importantly, doing this exercise is helping people recognize the difficulty of making those choices and that the regulators are tasked with impossible choices. And maybe this will help us as a society understand the kinds of trade-offs that will be implemented ultimately in regulation.

And indeed, I was very happy to hear that the first set of regulations that came from the Department of Transport -- announced last week -- included a 15-point checklist for all carmakers to provide, and number 14 was ethical consideration -- how are you going to deal with that.

We also have people reflect on their own decisions by giving them summaries of what they chose. I'll give you one example -- I'm just going to warn you that this is not your typical example, your typical user. This is the most sacrificed character (the child) and the most saved character (the cat) for this person. (Laughter) Some of you may agree with him, or her, we don't know. But this person also seems to slightly prefer passengers over pedestrians in their choices and is very happy to punish jaywalking. (Laughter)

So let's wrap up. We started with the question -- let's call it the ethical dilemma -- of what the car should do in a specific scenario: swerve or stay? But then we realized that the problem was a different one. It was the problem of how to get society to agree on and enforce the trade-offs they're comfortable with. It's a social dilemma.

In the 1940s, Isaac Asimov wrote his famous laws of robotics -- the three laws of robotics. A robot may not harm a human being, a robot may not disobey a human being, and a robot may not allow itself to come to harm -- in this order of importance. But after 40 years or so, and after so many stories pushing these laws to the limit, Asimov introduced the zeroth law, which takes precedence above all, and it's that a robot may not harm humanity as a whole.
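
Read as a decision procedure, Asimov's scheme is a strict priority ordering. Here is a minimal sketch of that ordering; the `Action` flags and the example are invented for illustration, not anything from the talk.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Invented flags, purely for illustration.
    name: str
    harms_humanity: bool   # zeroth law
    harms_human: bool      # first law
    disobeys_human: bool   # second law
    harms_self: bool       # third law

def choose(actions):
    """Pick the action whose violations are lexicographically smallest:
    breaking a lower-priority law always beats breaking a higher one,
    and the zeroth law takes precedence above all."""
    def violations(a):
        return (a.harms_humanity, a.harms_human, a.disobeys_human, a.harms_self)
    return min(actions, key=violations)

# Example: disobeying a human (second law) is preferred to harming one (first law).
actions = [
    Action("obey_and_harm",   False, True,  False, False),
    Action("refuse_and_save", False, False, True,  False),
]
print(choose(actions).name)  # -> "refuse_and_save"
```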

I don't know what this means in the context of driverless cars or any specific situation, and I don't know how we can implement it, but I think that by recognizing that the regulation of driverless cars is not only a technological problem but also a societal cooperation problem, I hope that we can at least begin to ask the right questions.

Thank you.

(Applause)
