  • This is a thought experiment.

  • Let's say at some point in the not-so-distant future, you're barreling down the highway in your self-driving car, and you find yourself boxed in on all sides by other cars.

  • Suddenly, a large, heavy object falls off the truck in front of you.

  • Your car can't stop in time to avoid the collision, so it needs to make a decision: go straight and hit the object, swerve left into an SUV, or swerve right into a motorcycle?

  • Should it prioritize your safety by hitting the motorcycle, minimize danger to others by not swerving, even if it means hitting the large object and sacrificing your life, or take the middle ground by hitting the SUV, which has a high passenger safety rating?

  • So what should the self-driving car do?

  • If we were driving that boxed-in car in manual mode, whichever way we'd react would be understood as just that, a reaction, not a deliberate decision.

  • It would be an instinctual panicked move with no forethought or malice.

  • But if a programmer were to instruct the car to make the same move, given conditions it may sense in the future, well, that looks more like premeditated homicide.
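To make that difference concrete, here is a minimal, purely hypothetical sketch of what such a pre-programmed swerve rule could look like. The `Obstacle` class, the harm numbers, and the `choose_maneuver` policy are all illustrative assumptions, not anything drawn from a real autonomous-driving system.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str              # e.g. "large object", "SUV", "motorcycle"
    harm_to_others: float  # rough expected harm to people outside this car (made-up scale)
    harm_to_self: float    # rough expected harm to this car's own passenger (made-up scale)

def choose_maneuver(options: dict[str, Obstacle]) -> str:
    """Return the maneuver whose collision has the lowest total expected harm.

    Deliberately simplistic: both the scoring and the idea of scoring collisions
    at all are assumptions made for illustration, not how any real vehicle works.
    """
    return min(options, key=lambda m: options[m].harm_to_others + options[m].harm_to_self)

# The boxed-in scenario from the talk, with invented harm estimates:
scenario = {
    "go_straight":  Obstacle("large object", harm_to_others=0.0, harm_to_self=0.9),
    "swerve_left":  Obstacle("SUV",          harm_to_others=0.3, harm_to_self=0.2),
    "swerve_right": Obstacle("motorcycle",   harm_to_others=0.8, harm_to_self=0.1),
}

print(choose_maneuver(scenario))  # with these numbers, always "swerve_left"
```

Whatever weights the programmer picks, the outcome of a crash like this is fixed the moment the code ships, which is what makes it feel premeditated rather than reactive.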

  • Now, to be fair, self-driving cars are predicted to dramatically reduce traffic accidents and fatalities by removing human error from the driving equation.

  • Plus, there may be all sorts of other benefits: eased road congestion, decreased harmful emissions, and minimized unproductive and stressful driving time.

  • But accidents can and will still happen, and when they do, their outcomes may be determined months or years in advance by programmers or policy makers.

  • And they'll have some difficult decisions to make.

  • It's tempting to offer up general decision-making principles, like minimize harm, but even that quickly leads to morally murky decisions.

  • For example, let's say we have the same initial set up, but now there's a motorcyclist wearing a helmet to your left and another one without a helmet to your right.

  • Which one should your robot car crash into?

  • If you say the biker with the helmet because she's more likely to survive, then aren't you penalizing the responsible motorist?

  • If, instead, you say the biker without the helmet because he's acting irresponsibly, then you've gone way beyond the initial design principle about minimizing harm, and the robot car is now meting out street justice.

  • The ethical considerations get more complicated here.

  • In both of our scenarios, the underlying design is functioning as a targeting algorithm of sorts.

  • In other words, it's systematically favoring or discriminating against a certain type of object to crash into.
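As a concrete illustration, here is a hedged sketch of how a bare "minimize expected harm" rule turns into exactly that kind of targeting algorithm in the helmet example. The survival probabilities are invented for illustration, and the whole policy is an assumption, not real crash data or a real system.

```python
# Hypothetical "minimize expected harm" rule applied to the two-motorcyclist case.
# The survival probabilities are invented for illustration, not real crash statistics.
riders = {
    "swerve_left":  {"helmet": True,  "survival_probability": 0.8},
    "swerve_right": {"helmet": False, "survival_probability": 0.4},
}

def expected_harm(rider: dict) -> float:
    # Treat harm as the chance that the struck rider does not survive.
    return 1.0 - rider["survival_probability"]

# For the same inputs, the rule always picks the same victim:
target = min(riders, key=lambda m: expected_harm(riders[m]))
print(target)  # "swerve_left" every time, i.e. the helmeted rider
```

Because the inputs never change, the choice never changes either: the rule deterministically singles out the helmeted rider, which is what makes it a targeting algorithm rather than a panicked reaction.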

  • And the owners of the target vehicles will suffer the negative consequences of this algorithm through no fault of their own.

  • Our new technologies are opening up many other novel ethical dilemmas.

  • For instance, if you had to choose between a car that would always save as many lives as possible in an accident, or one that would save you at any cost, which would you buy?

  • What happens if the cars start analyzing and factoring in the passengers of the cars and the particulars of their lives?

  • Could it be the case that a random decision is still better than a predetermined one designed to minimize harm?

  • And who should be making all of these decisions anyhow?

  • Programmers? Companies? Governments?

  • Reality may not play out exactly like our thought experiments, but that's not the point.

  • They're designed to isolate and stress test our intuitions on ethics, just like science experiments do for the physical world.

  • Spotting these moral hairpin turns now will help us maneuver the unfamiliar road of technology ethics, and allow us to cruise confidently and conscientiously into our brave new future.
