
  • Transcriber: Leslie Gauthier Reviewer: Joanna Pietrulewicz

  • Translator: Lilian Chiu Reviewer: Carol Wang

  • Every day, every week,

  • we agree to terms and conditions.

  • And when we do this,

  • we provide companies with the lawful right

  • to do whatever they want with our data

  • and with the data of our children.

  • Which makes us wonder:

  • how much data of our children are we giving away,

  • and what are its implications?

  • I'm an anthropologist,

  • and I'm also the mother of two little girls.

  • And I started to become interested in this question in 2015

  • when I suddenly realized that there were vast --

  • almost unimaginable amounts of data traces

  • that are being produced and collected about children.

  • So I launched a research project,

  • which is called Child Data Citizen,

  • and I aimed at filling in the blank.

  • Now you may think that I'm here to blame you

  • for posting photos of your children on social media,

  • but that's not really the point.

  • The problem is way bigger than so-called "sharenting."

  • This is about systems, not individuals.

  • You and your habits are not to blame.

  • For the very first time in history,

  • we are tracking the individual data of children

  • from long before they're born --

  • sometimes from the moment of conception,

  • and then throughout their lives.

  • You see, when parents decide to conceive,

  • they go online to look for "ways to get pregnant,"

  • or they download ovulation-tracking apps.

  • When they do get pregnant,

  • they post ultrasounds of their babies on social media,

  • they download pregnancy apps

  • or they consult Dr. Google for all sorts of things,

  • like, you know --

  • for "miscarriage risk when flying"

  • or "abdominal cramps in early pregnancy."

  • I know because I've done it --

  • and many times.

  • And then, when the baby is born, they track every nap,

  • every feed,

  • every life event on different technologies.

  • And all of these technologies

  • transform the baby's most intimate behavioral and health data into profit

  • by sharing it with others.

  • So to give you an idea of how this works,

  • in 2019, the British Medical Journal published research that showed

  • that out of 24 mobile health apps,

  • 19 shared information with third parties.

  • And these third parties shared information with 216 other organizations.

  • Of these 216 other fourth parties,

  • only three belonged to the health sector.

  • The other companies that had access to that data were big tech companies

  • like Google, Facebook or Oracle,

  • they were digital advertising companies

  • and there was also a consumer credit reporting agency.

  • So you get it right:

  • ad companies and credit agencies may already have data points on little babies.

  • But mobile apps, web searches and social media

  • are really just the tip of the iceberg,

  • because children are being tracked by multiple technologies

  • in their everyday lives.

  • They're tracked by home technologies and virtual assistants in their homes.

  • They're tracked by educational platforms

  • and educational technologies in their schools.

  • They're tracked by online records

  • and online portals at their doctor's office.

  • They're tracked by their internet-connected toys,

  • their online games

  • and many, many, many, many other technologies.

  • So during my research,

  • a lot of parents came up to me and they were like, "So what?

  • Why does it matter if my children are being tracked?

  • We've got nothing to hide."

  • Well, it matters.

  • It matters because today individuals are not only being tracked,

  • they're also being profiled on the basis of their data traces.

  • Artificial intelligence and predictive analytics are being used

  • to harness as much data as possible about an individual's life

  • from different sources:

  • family history, purchasing habits, social media comments.

  • And then they bring this data together

  • to make data-driven decisions about the individual.

  • And these technologies are used everywhere.

  • Banks use them to decide loans.

  • Insurance uses them to decide premiums.

  • Recruiters and employers use them

  • to decide whether one is a good fit for a job or not.

  • Also the police and courts use them

  • to determine whether one is a potential criminal

  • or is likely to recommit a crime.

  • We have no knowledge or control

  • over the ways in which those who buy, sell and process our data

  • are profiling us and our children.

  • But these profiles can come to impact our rights in significant ways.

  • To give you an example,

  • in 2018 the "New York Times" published the news

  • that the data that had been gathered

  • through online college-planning services --

  • that are actually completed by millions of high school kids across the US

  • who are looking for a college program or a scholarship --

  • had been sold to educational data brokers.

  • Now, researchers at Fordham who studied educational data brokers

  • revealed that these companies profiled kids as young as two

  • on the basis of different categories:

  • ethnicity, religion, affluence,

  • social awkwardness

  • and many other random categories.

  • And then they sell these profiles together with the name of the kid,

  • their home address and the contact details

  • to different companies,

  • including trade and career institutions,

  • student loans

  • and student credit card companies.

  • To push the boundaries,

  • the researchers at Fordham asked an educational data broker

  • to provide them with a list of 14-to-15-year-old girls

  • who were interested in family planning services.

  • The data broker agreed to provide them the list.

  • So imagine how intimate and how intrusive that is for our kids.

  • But educational data brokers are really just an example.

  • The truth is that our children are being profiled in ways that we cannot control

  • but that can significantly impact their chances in life.

  • So we need to ask ourselves:

  • can we trust these technologies when it comes to profiling our children?

  • Can we?

  • My answer is no.

  • As an anthropologist,

  • I believe that artificial intelligence and predictive analytics can be great

  • to predict the course of a disease

  • or to fight climate change.

  • But we need to abandon the belief

  • that these technologies can objectively profile humans

  • and that we can rely on them to make data-driven decisions

  • about individual lives.

  • Because they can't profile humans.

  • Data traces are not the mirror of who we are.

  • Humans think one thing and say the opposite,

  • feel one way and act differently.

  • Algorithmic predictions or other digital practices

  • cannot account for the unpredictability and complexity of human experience.

  • But on top of that,

  • these technologies are always --

  • always --

  • in one way or another, biased.

  • You see, algorithms are by definition sets of rules or steps

  • that have been designed to achieve a specific result, OK?

  • But these sets of rules or steps cannot be objective,

  • because they've been designed by human beings

  • within a specific cultural context

  • and are shaped by specific cultural values.

  • So when machines learn,

  • they learn from biased algorithms,

  • and they often learn from biased databases as well.

  • At the moment, we're seeing the first examples of algorithmic bias.

  • And some of these examples are frankly terrifying.

  • This year, the AI Now Institute in New York published a report

  • that revealed that the AI technologies

  • that are being used for predictive policing

  • have been trained on "dirty" data.

  • This is basically data that had been gathered

  • during historical periods of known racial bias

  • and nontransparent police practices.

  • Because these technologies are being trained with dirty data,

  • they're not objective,

  • and their outcomes are only amplifying and perpetuating

  • police bias and error.
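The feedback loop behind that amplification -- a system trained on skewed records directing more attention to the places those records over-represent -- can be sketched in a few lines of Python. The districts, numbers, and allocation rule here are invented for illustration; they are not from the AI Now report or any real policing system:

```python
# A minimal, hypothetical sketch of how a model trained on biased
# records reproduces that bias.

# Historical arrest counts per district. Suppose district A was
# patrolled twice as heavily, so it shows more recorded arrests even
# if the underlying crime rate in both districts is identical.
historical_arrests = {"district_A": 80, "district_B": 40}

def patrol_allocation(arrests, total_patrols=10):
    """Allocate patrols proportionally to past recorded arrests."""
    total = sum(arrests.values())
    return {d: round(total_patrols * n / total) for d, n in arrests.items()}

allocation = patrol_allocation(historical_arrests)
print(allocation)  # → {'district_A': 7, 'district_B': 3}
```

Because district A now receives more patrols, it will generate even more recorded arrests, which the next round of training ingests -- so the original skew in the data compounds rather than corrects itself.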

  • So I think we are faced with a fundamental problem

  • in our society.

  • We are starting to trust technologies when it comes to profiling human beings.

  • We know that in profiling humans,

  • these technologies are always going to be biased

  • and are never really going to be accurate.

  • So what we need now is actually a political solution.

  • We need governments to recognize that our data rights are our human rights.

  • (Applause and cheers)

  • Until this happens, we cannot hope for a more just future.

  • I worry that my daughters are going to be exposed

  • to all sorts of algorithmic discrimination and error.

  • You see, the difference between me and my daughters

  • is that there's no public record out there of my childhood.

  • There's certainly no database of all the stupid things that I've done

  • and thought when I was a teenager.

  • (Laughter)

  • But for my daughters this may be different.

  • The data that is being collected from them today

  • may be used to judge them in the future

  • and can come to prevent their hopes and dreams.

  • I think that it's time.

  • It's time that we all step up.

  • It's time that we start working together

  • as individuals,

  • as organizations and as institutions,

  • and that we demand greater data justice for us

  • and for our children

  • before it's too late.

  • Thank you.

  • (Applause)


What tech companies know about your kids | Veronica Barassi
