What facial recognition steals from us

If you upload a photo into Google's reverse image search, it'll find websites where that picture has appeared, or provide visually similar images that have the same coloring and composition. The leading search engine in Russia, called Yandex, has reverse image search too, but it doesn't work the same way. It's not looking for visually similar images. It's looking for similar faces, the same face.

The difference between these search engines is that Google hasn't switched on facial recognition and Yandex has. On Google, you can enter a name and look for a face. But on Yandex, you can enter a face and look for a name. And that distinction represents a potentially enormous shift in our offline lives, where we usually decide who we introduce ourselves to. Now that computer scientists have created tools that can turn faces into nametags, it's worth reflecting on how we got here and what we stand to lose.

A computer's facial recognition system has broadly the same components as your own facial recognition system. You see someone with your eyes, your mind processes the features of their face, and recalls their identity from your memory. Now imagine if you could have eyes in lots of places and could download and store memories from other people; then you have something more like the automated version of facial recognition, which has only come together in the past 5 years or so.

Its eyes are digital cameras, revolutionary machines that turn light into data. "It's a state-of-the-art digital model, which records images on memory chips instead of photographic film." Digital imagery arose in the early 2000s, which coincided with the arrival of the social internet. So right when we were able to take an unlimited number of pictures, Facebook, Flickr, YouTube, and other sites told us our images had a home online. "100 million photos are being tagged every day on Facebook."

Professional photography also went up on websites, news articles, and photo libraries, and Google's web crawlers gathered them into Image Search. And then the computer vision researchers went to work. The millions of digital photos posted to the internet, like the Facebook pictures where we tagged our friends or Google image results of celebrities, were used to build the "mind" of facial recognition systems. That mind is made up of a series of algorithms.

They locate faces in an image, map facial features to correct for head rotation, and then take over 100 measurements that define that individual face. Those measurements are usually described as the distance between the eyes, the length of the nose, the width of the mouth. But the truth is, nobody knows exactly what's being measured. That's determined by a deep learning algorithm looking for correlations in raw pixel data.
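
That detect-then-measure pipeline is now available in ordinary open-source tooling. As a rough illustration of the idea (not of any specific system mentioned here), the Python `face_recognition` library, built on dlib, locates faces and reduces each one to a 128-number encoding; the filename below is a placeholder.

```python
# Minimal sketch: detect faces and turn each one into a learned set of
# measurements (a 128-dimensional encoding). Illustrative only; the
# photo filename is a placeholder.
import face_recognition

image = face_recognition.load_image_file("group_photo.jpg")

# Step 1: locate faces in the image (returns bounding boxes).
locations = face_recognition.face_locations(image)

# Step 2: compute a 128-number encoding per face. These are the
# "measurements" learned by a deep network, not hand-picked features
# like eye distance or nose length.
encodings = face_recognition.face_encodings(image, known_face_locations=locations)

for box, vector in zip(locations, encodings):
    print(f"face at {box}: encoding of length {len(vector)}")
```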

To train that algorithm, engineers give it sets of triplets: an anchor photo, another photo of the same person, and a photo of a different person. The algorithm is tasked with deciding what to measure so that the statistical difference between the two matching photos is as small as possible while the distance between the non-matching photos is as large as possible.
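
The triplet objective described above can be written down directly. Here is a minimal sketch in PyTorch; the embedding size, batch size, and margin are illustrative assumptions, not details from the video, and real systems compute the embeddings with a deep network trained on millions of such triplets.

```python
# Sketch of the triplet objective: make the anchor-positive distance
# smaller than the anchor-negative distance by at least a margin.
# (PyTorch also ships this as torch.nn.TripletMarginLoss.)
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = F.pairwise_distance(anchor, positive)  # same person
    d_neg = F.pairwise_distance(anchor, negative)  # different person
    return F.relu(d_pos - d_neg + margin).mean()

# Stand-in embeddings; in a real system each row would come from the
# network applied to an anchor photo, a matching photo, and a
# non-matching photo.
anchor   = torch.randn(32, 128)
positive = anchor + 0.05 * torch.randn(32, 128)   # near the anchor
negative = torch.randn(32, 128)                   # unrelated faces

print(f"triplet loss: {triplet_loss(anchor, positive, negative).item():.4f}")
```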

These algorithms are refined through millions of examples, but they still don't perform equally well on all types of people or on all types of photos. That hasn't stopped them from being packaged and distributed as ready-to-use software.

But whoever uses that software won't be able to identify you until you're in their database of known faces. That's the "memory" of the system, and it's separate from the training images. In the case of the iPhone's Face ID, it's a database of one: you volunteer to store your face on your device in exchange for easily unlocking your phone.
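
A "database of one" amounts to verification: enroll one encoding, then check later captures against it. The sketch below illustrates that idea with the same open-source library and an illustrative threshold; it is not how Face ID actually works internally (Apple uses depth sensing and on-device secure storage).

```python
# Sketch of a "database of one": enroll a single encoding, then verify
# later camera frames against it. Illustrative only; not Apple's actual
# Face ID design. Filenames and the 0.6 threshold are placeholders.
import face_recognition

owner_image = face_recognition.load_image_file("owner_enrollment.jpg")
owner_encoding = face_recognition.face_encodings(owner_image)[0]  # enroll

def unlock_attempt(frame_path, threshold=0.6):
    """Return True if the face in the frame matches the enrolled owner."""
    frame = face_recognition.load_image_file(frame_path)
    encodings = face_recognition.face_encodings(frame)
    if not encodings:
        return False  # no face found in the frame
    distance = face_recognition.face_distance([owner_encoding], encodings[0])[0]
    return distance < threshold

print(unlock_attempt("camera_frame.jpg"))
```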

Companies like Facebook and Google also keep databases of their users. But it's governments that typically have access to the largest databases of names and faces, so facial recognition significantly expands the power of the state. They collected these images for other reasons, and now they're repurposing them for facial recognition without telling us or obtaining our consent, which is why several US cities have banned government use of facial recognition.

Retail stores, banks, and stadiums can create or buy watchlists of known shoplifters, valued customers, or other persons of interest, so they're notified if one of those people shows up.
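
That watchlist scenario is a one-to-many search: every face the camera sees is compared against a database of named encodings. Below is a minimal sketch under the same assumptions as the earlier examples; the names, file paths, and threshold are placeholders.

```python
# Sketch of a watchlist lookup: compare each face in a camera frame
# against a small database of named encodings and flag likely matches.
# Names, file paths, and the 0.6 threshold are placeholders.
import face_recognition

watchlist = {}  # name -> 128-dimensional encoding
for name, path in [("person_a", "person_a.jpg"), ("person_b", "person_b.jpg")]:
    image = face_recognition.load_image_file(path)
    watchlist[name] = face_recognition.face_encodings(image)[0]

frame = face_recognition.load_image_file("camera_frame.jpg")
names = list(watchlist)

for encoding in face_recognition.face_encodings(frame):
    distances = face_recognition.face_distance(
        [watchlist[n] for n in names], encoding)
    best = int(distances.argmin())
    if distances[best] < 0.6:
        print(f"ALERT: possible match for {names[best]} "
              f"(distance {distances[best]:.2f})")
```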

And then there's another source of labeled photos. Those are the ones we've been labeling ourselves by setting up profiles on social media networks. It's typically against the terms of use to program bots that can download faces and names from LinkedIn, Twitter, or Facebook, but it's doable. And what's at stake is something that most of us take for granted: our ability to move through public spaces anonymously.

  • "So we typically think of public and private as being opposites. But is there such a thing

  • as having privacy when we're in public?" "I would like to think so."

Evan Selinger is a professor of philosophy who argues that facial recognition is a threat to "obscurity," which is the idea that personal information is safer when it is hard to obtain or understand. "So we have natural sort of limitations in what we can perceive and what we can hear. Even the human mind has sort of basic limits in how much information it can store. So one of the things that technologies do is they reduce the transaction costs of being able to find information, being able to store information, being able to share information, and being able to correctly interpret information. And so facial recognition is probably the most obscurity-eviscerating technology ever invented."

We don't have to imagine how this could play out. It's already happening with photos from the Russian social media network VK. Aric Toler, a journalist who covers Europe for Bellingcat, showed me how it works with a random video of Russian soccer fans picking fights in Poland. "There's about 10 or so of these soccer hooligans in this video, and for every single one of them you can find their profiles on VK. OK, I'll get this guy in the background. Let me save him. OK, so here's the first result. This guy. So if you click the photo here it will take you directly to the photo's link. And here he is. I think he's wearing the same shirt. Yeah, he's wearing the same shirt even. This is him too. So this is probably like his buddy who uploaded a photo. Yeah. So this is this guy's profile and here's his buddy right here. Yeah, so here he is during a baptism, probably." "And the photo you uploaded is not particularly clear or high resolution." "No, not at all, right, it's just a 200 by 100. So it does feel weird when you do this and you have access to way more information than you should, is what it feels like. But also we only publish what we're like one thousand percent sure of, and if possible we maybe don't include the names of the people."

How you feel about this technology probably depends on how much you sympathize with the person being identified. Bellingcat has used these tools to identify people linked to the attack on flight MH17 in Eastern Ukraine. It's also been used to doxx police officers accused of brutality, anti-corruption activists protesting against Vladimir Putin, random strangers as part of an art project, and sex workers, porn performers, and others who have posted anonymous photos online.

  • "The way that we share our images and our names on social media, LinkedIn Twitter, Instagram,

  • it seems to suggest that we don't want to be obscure or we're not really looking to

  • be anonymous. Are we allowed to want to share and connect with other people online and still

  • be able to expect not to be recognized when we're offline in our regular lives?"

  • "I would say absolutely. In fact I would go further and say if we ever create a society

  • where that's not a reasonable expectation, a lot of the things that are fundamental to

  • being a human being are really going to be compromised.

Having any individuality requires experimenting in life, and experimenting requires the protections of some obscurity. But also intimacy requires obscurity. Right. If you want to be able to share different parts of your life with different people, and I think most of us do, right. We don't want to come into work and behave the same way we do with our friends. We don't want to treat our partners in the same way we do acquaintances. And the concern, when you lose too much obscurity, is that these domains bleed into one another and create what's called context collapse. And it doesn't mean that one is more real or one is more authentic. Leading a rich life requires us to be able to express ourselves in these diverse ways."

The photos we took to share with friends, or document history, or simply get a government ID have been used to build and operate a technology that strips away the protections that obscurity has always provided us. It's nothing less than a massive bait-and-switch, one that could change the meaning of the human face forever.
