
So when people voice fears of artificial intelligence, very often, they invoke images of humanoid robots run amok. You know? Terminator? You know, that might be something to consider, but that's a distant threat. Or, we fret about digital surveillance with metaphors from the past. "1984," George Orwell's "1984," it's hitting the bestseller lists again. It's a great book, but it's not the correct dystopia for the 21st century.

What we need to fear most is not what artificial intelligence will do to us on its own, but how the people in power will use artificial intelligence to control us and to manipulate us in novel, sometimes hidden, subtle and unexpected ways. Much of the technology that threatens our freedom and our dignity in the near-term future is being developed by companies in the business of capturing and selling our data and our attention to advertisers and others: Facebook, Google, Amazon, Alibaba, Tencent.

Now, artificial intelligence has started bolstering their business as well. And it may seem like artificial intelligence is just the next thing after online ads. It's not. It's a jump in category. It's a whole different world, and it has great potential. It could accelerate our understanding of many areas of study and research. But to paraphrase a famous Hollywood philosopher, "With prodigious potential comes prodigious risk."

Now let's look at a basic fact of our digital lives, online ads. Right? We kind of dismiss them. They seem crude, ineffective. We've all had the experience of being followed on the web by an ad based on something we searched or read. You know, you look up a pair of boots and for a week, those boots are following you around everywhere you go. Even after you succumb and buy them, they're still following you around. We're kind of inured to that kind of basic, cheap manipulation. We roll our eyes and we think, "You know what? These things don't work." Except, online, the digital technologies are not just ads.

Now, to understand that, let's think of a physical world example. You know how, at the checkout counters at supermarkets, near the cashier, there's candy and gum at the eye level of kids? That's designed to make them whine at their parents just as the parents are about to sort of check out. Now, that's a persuasion architecture. It's not nice, but it kind of works. That's why you see it in every supermarket. Now, in the physical world, such persuasion architectures are kind of limited, because you can only put so many things by the cashier. Right? And the candy and gum, it's the same for everyone, even though it mostly works only for people who have whiny little humans beside them. In the physical world, we live with those limitations.

In the digital world, though, persuasion architectures can be built at the scale of billions, and they can target, infer, understand and be deployed at individuals one by one by figuring out your weaknesses, and they can be sent to everyone's private phone screen, so it's not visible to us. And that's different. And that's just one of the basic things that artificial intelligence can do.

Now, let's take an example. Let's say you want to sell plane tickets to Vegas. Right? So in the old world, you could think of some demographics to target based on experience and what you can guess. You might try to advertise to, oh, men between the ages of 25 and 35, or people who have a high limit on their credit card, or retired couples. Right? That's what you would do in the past. With big data and machine learning, that's not how it works anymore.

So to imagine that, think of all the data that Facebook has on you: every status update you ever typed, every Messenger conversation, every place you logged in from, all your photographs that you uploaded there. If you start typing something and change your mind and delete it, Facebook keeps those and analyzes them, too. Increasingly, it tries to match you with your offline data. It also purchases a lot of data from data brokers. It could be everything from your financial records to a good chunk of your browsing history. Right? In the US, such data is routinely collected, collated and sold. In Europe, they have tougher rules.

So what happens then is, by churning through all that data, these machine-learning algorithms -- that's why they're called learning algorithms -- they learn to understand the characteristics of people who purchased tickets to Vegas before. When they learn this from existing data, they also learn how to apply this to new people. So if they're presented with a new person, they can classify whether that person is likely to buy a ticket to Vegas or not.

Fine. You're thinking, an offer to buy tickets to Vegas. I can ignore that. But the problem isn't that. The problem is, we no longer really understand how these complex algorithms work. We don't understand how they're doing this categorization. It's giant matrices, thousands of rows and columns, maybe millions of rows and columns, and not the programmers and not anybody who looks at it, even if you have all the data, understands anymore how exactly it's operating any more than you'd know what I was thinking right now if you were shown a cross section of my brain. It's like we're not programming anymore, we're growing intelligence that we don't truly understand.
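A toy version of the point about matrices: even in a small, fully inspectable network trained on synthetic data, the learned parameters are just arrays of numbers, with no entry that reads as a rule.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# A toy network on synthetic data. Production models are orders of
# magnitude larger, which only worsens the opacity.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                    random_state=0).fit(X, y)

for i, W in enumerate(net.coefs_):
    print(f"layer {i}: weight matrix of shape {W.shape}")
# Staring at any individual weight tells you essentially nothing about
# why a particular person was classified one way rather than the other.
```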

And these things only work if there's an enormous amount of data, so they also encourage deep surveillance on all of us so that the machine learning algorithms can work. That's why Facebook wants to collect all the data it can about you. The algorithms work better.

So let's push that Vegas example a bit. What if the system that we do not understand was picking up that it's easier to sell Vegas tickets to people who are bipolar and about to enter the manic phase? Such people tend to become overspenders, compulsive gamblers. They could do this, and you'd have no clue that's what they were picking up on. I gave this example to a bunch of computer scientists once and afterwards, one of them came up to me. He was troubled and he said, "That's why I couldn't publish it." I was like, "Couldn't publish what?" He had tried to see whether you can indeed figure out the onset of mania from social media posts before clinical symptoms, and it had worked, and it had worked very well, and he had no idea how it worked or what it was picking up on.

Now, the problem isn't solved if he doesn't publish it, because there are already companies that are developing this kind of technology, and a lot of the stuff is just off the shelf. This is not very difficult anymore.

Do you ever go on YouTube meaning to watch one video and an hour later you've watched 27? You know how YouTube has this column on the right that says, "Up next" and it autoplays something? It's an algorithm picking what it thinks that you might be interested in and maybe not find on your own. It's not a human editor. It's what algorithms do. It picks up on what you have watched and what people like you have watched, and infers that that must be what you're interested in, what you want more of, and just shows you more. It sounds like a benign and useful feature, except when it isn't.
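A compact sketch of that "people like you have watched" inference, using simple co-occurrence counting over invented watch histories; YouTube's actual system is proprietary and far more elaborate.

```python
from collections import Counter

# Invented watch histories; a real system would have billions.
histories = [
    ["intro_clip", "deeper_dive", "extreme_take"],
    ["intro_clip", "deeper_dive"],
    ["boots_review", "boots_haul"],
]

def up_next(just_watched, histories):
    co_watched = Counter()
    for h in histories:
        if just_watched in h:
            co_watched.update(v for v in h if v != just_watched)
    # Recommend whatever people-like-you most often watched alongside it.
    return co_watched.most_common(1)[0][0] if co_watched else None

print(up_next("intro_clip", histories))  # -> "deeper_dive"
```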

So in 2016, I attended rallies of then-candidate Donald Trump to study as a scholar the movement supporting him. I study social movements, so I was studying it, too. And then I wanted to write something about one of his rallies, so I watched it a few times on YouTube. YouTube started recommending to me and autoplaying to me white supremacist videos in increasing order of extremism. If I watched one, it served up one even more extreme and autoplayed that one, too. If you watch Hillary Clinton or Bernie Sanders content, YouTube recommends and autoplays conspiracy-left videos, and it goes downhill from there.

Well, you might be thinking, this is politics, but it's not. This isn't about politics. This is just the algorithm figuring out human behavior. I once watched a video about vegetarianism on YouTube and YouTube recommended and autoplayed a video about being vegan. It's like you're never hardcore enough for YouTube.

(Laughter)

So what's going on? Now, YouTube's algorithm is proprietary, but here's what I think is going on. The algorithm has figured out that if you can entice people into thinking that you can show them something more hardcore, they're more likely to stay on the site watching video after video going down that rabbit hole while Google serves them ads.

Now, with nobody minding the ethics of the store, these sites can profile people who are Jew haters, who think that Jews are parasites and who have such explicit anti-Semitic content, and let you target them with ads. They can also mobilize algorithms to find for you look-alike audiences, people who do not have such explicit anti-Semitic content on their profile but who the algorithm detects may be susceptible to such messages, and lets you target them with ads, too.
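What "look-alike audience" expansion could look like in miniature, assuming made-up user feature vectors and plain cosine similarity; the platforms' real methods are not public.

```python
import numpy as np

rng = np.random.default_rng(1)
all_users = rng.normal(size=(10_000, 16))  # hypothetical feature vectors
seed = all_users[:50]                      # users who engaged with the ad

# Rank everyone else by cosine similarity to the seed audience's centroid.
centroid = seed.mean(axis=0)
sims = all_users @ centroid / (
    np.linalg.norm(all_users, axis=1) * np.linalg.norm(centroid))
sims[:50] = -np.inf                        # exclude the seed itself
lookalikes = np.argsort(-sims)[:1000]      # the 1,000 most seed-like others

print(lookalikes[:10])
```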

Now, this may sound like an implausible example, but this is real. ProPublica investigated this and found that you can indeed do this on Facebook, and Facebook helpfully offered up suggestions on how to broaden that audience. BuzzFeed tried it for Google, and very quickly they found, yep, you can do it on Google, too. And it wasn't even expensive. The ProPublica reporter spent about 30 dollars to target this category.

So last year, Donald Trump's social media manager disclosed that they were using Facebook dark posts to demobilize people, not to persuade them, but to convince them not to vote at all. And to do that, they targeted specifically, for example, African-American men in key cities like Philadelphia, and I'm going to read exactly what he said. I'm quoting. They were using "nonpublic posts whose viewership the campaign controls so that only the people we want to see it see it. We modeled this. It will dramatically affect her ability to turn these people out." What's in those dark posts? We have no idea. Facebook won't tell us.

So Facebook also algorithmically arranges the posts that your friends put on Facebook, or the pages you follow. It doesn't show you everything chronologically. It puts the order in the way that the algorithm thinks will entice you to stay on the site longer. Now, this has a lot of consequences. You may be thinking somebody is snubbing you on Facebook. The algorithm may never be showing your post to them. The algorithm is prioritizing some of them and burying the others.
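A minimal sketch of the difference between a chronological feed and an engagement-ranked one, with a made-up score standing in for the learned prediction.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    age_hours: float
    predicted_engagement: float  # stand-in for a learned model's score

posts = [
    Post("close_friend", age_hours=1.0, predicted_engagement=0.10),
    Post("acquaintance", age_hours=9.0, predicted_engagement=0.85),
    Post("page_you_follow", age_hours=3.0, predicted_engagement=0.40),
]

# A chronological feed would surface the close friend's recent post first;
# an engagement-ranked feed buries it, because the model predicts you'll
# linger longer on something else. Nobody snubbed you.
ranked = sorted(posts, key=lambda p: -p.predicted_engagement)
for p in ranked:
    print(p.author, p.predicted_engagement)
```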