  • Let's play a game.

  • Close your eyes and picture a shoe.

  • OK.

  • Did anyone picture this?

  • This?

  • How about this?

  • We may not even know why, but each of us is biased toward one shoe over the others.

  • Now, imagine that you're trying to teach a computer to recognize a shoe.

  • You may end up exposing it to your own bias.

  • That's how bias happens in machine learning.

  • But first, what is machine learning?

  • Well, it's used in a lot of technology we use today.

  • Machine learning helps us get from place to place, gives us suggestions, translates stuff, even understands what you say to it.

  • How does it work?

  • With traditional programming, people hand code the solution to a problem, step by step.

  • With machine learning, computers learn the solution by finding patterns in data, so it's easy to think there's no human bias in that.
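The contrast between hand-coding a solution and learning it from data can be sketched in a few lines of Python. Everything here is hypothetical: the features (laces, heel height) and the tiny 1-nearest-neighbour "learner" are stand-ins chosen for illustration, not how any real shoe recognizer works.

```python
# Traditional programming: a person writes the decision logic by hand.
# This rule encodes the author's own idea of a "typical" shoe.
def rule_based_is_shoe(has_laces, heel_height_cm):
    return bool(has_laces and heel_height_cm < 3)

# Machine learning instead finds a pattern in labeled examples.
# Hypothetical feature vectors: (has_laces, heel_height_cm).
training_data = [
    ((1, 1), True),   # sneaker
    ((1, 2), True),   # running shoe
    ((0, 10), True),  # high heel -- recognized only because someone labeled it
    ((0, 0), False),  # sock
]

def learned_is_shoe(features):
    # 1-nearest-neighbour: copy the label of the closest training example.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_data, key=lambda ex: dist(ex[0], features))
    return label
```

The hand-written rule rejects high heels outright because its author pictured sneakers; the learned model accepts them only because a labeled heel happens to be in the training set. Remove that example and the data's bias quietly becomes the model's bias.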

  • But just because something is based on data doesn't automatically make it neutral.

  • Even with good intentions, it's impossible to separate ourselves from our own human biases, so our human biases become part of the technology we create in many different ways.

  • There's interaction bias, like this recent game where people were asked to draw shoes for the computer.

  • Most people drew ones like this, so as more people interacted with the game, the computer didn't even recognize these.

  • Latent bias -- for example, if you were training a computer on what a physicist looks like, and you're using pictures of past physicists, your algorithm will end up with a latent bias skewing towards men.
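That skew can be made concrete with a toy sketch: a model that simply learns the label distribution of a historically skewed photo archive will reproduce the skew in its predictions. The dataset and its numbers are invented for illustration.

```python
from collections import Counter

# Hypothetical historical training set: one label per archival photo.
# The skew reflects who got photographed, not who can be a physicist.
historical_photos = ["man"] * 95 + ["woman"] * 5

counts = Counter(historical_photos)
prior_man = counts["man"] / len(historical_photos)  # 0.95

# A model that just learns the data's prior will call almost every
# physicist "man" -- the latent bias is baked into its predictions.
def skewed_predictor():
    return max(counts, key=counts.get)
```

Nothing in the training pipeline is malicious; the model faithfully learned exactly what the data showed it.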

  • And selection bias -- say you're training a model to recognize faces. Whether you grab images from the internet or your own photo library, are you making sure to select photos that represent everyone?
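One way to act on that question is a simple representativeness audit: compare each group's share of the collected photos against its share of the population the model should serve. The group names, counts, and the 50% alarm threshold below are all hypothetical choices for the sketch.

```python
# Target population shares the model should work for (hypothetical).
population = {"group_a": 0.5, "group_b": 0.3, "group_c": 0.2}

# Photos actually collected for training (hypothetical counts).
sample_counts = {"group_a": 800, "group_b": 150, "group_c": 50}

total = sum(sample_counts.values())
sample_share = {g: n / total for g, n in sample_counts.items()}

# Flag any group whose sample share falls below half its population share.
underrepresented = [
    g for g in population
    if sample_share.get(g, 0.0) < 0.5 * population[g]
]
```

An audit like this won't fix selection bias by itself, but it surfaces the gap before the model silently learns it.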

  • Since some of our most advanced products use machine learning, we've been working to prevent that technology from perpetuating negative human bias -- from tackling offensive or clearly misleading information appearing at the top of your search results page, to adding a feedback tool in the search bar so people can flag hateful or inappropriate autocomplete suggestions.

  • It's a complex issue, and there is no magic bullet, but it starts with all of us being aware of it, so we can all be part of the conversation, because technology should work for everyone.
