FaceApp - CS50 Podcast, Ep. 8

  • [MUSIC PLAYING]

  • SPEAKER 1: This is the [INAUDIBLE].

  • [MUSIC PLAYING]

  • DAVID MALAN: Hello, world.

  • This is the CS50 Podcast.

  • My name is David Malan.

  • And I'm here again with CS50's own Brian Yu.

  • BRIAN YU: Good to be back.

  • DAVID MALAN: So in the news, of late, has been this app for iOS

  • and Android called FaceApp, as some of you might have heard.

  • Brian, have you used this app before?

  • BRIAN YU: So people keep talking about this.

  • I don't have it myself, on my phone.

  • But one of our teaching fellows for CS50 does have it

  • and was actually showing it to me earlier, today.

  • It's kind of scary, what it can do.

  • So the way it seems to work is that you open up the app, on your phone.

  • And you can choose a photo of yourself, from your photo library, a photo of you

  • or a friend.

  • And you submit it.

  • And then you can apply any number of these different image filters

  • to it, effectively.

  • But they can do a variety of different things.

  • So they can show you what they think you would look like when you are older,

  • and show you an elderly version of yourself,

  • or what you looked like when you were younger.

  • They can change your hairstyle.

  • They can do all sorts of different things to the background of the image,

  • for instance.

  • So it's a pretty powerful image tool.

  • And it does a pretty good job of trying to create a realistic looking photo.

  • DAVID MALAN: Yeah.

  • It's striking.

  • And I discovered this app two years after everyone else did, it seems.

  • Because someone sent me a modified photo of myself

  • recently, whereby I was aged in the photo.

  • And it was amazing.

  • The realism of the skin, I thought, was compelling.

  • And what was most striking to me, so much so,

  • that I forwarded it to a couple of relatives afterward,

  • is that I look like a couple of my older relatives do in reality.

  • And it was fascinating to see that the app was seeing these familial traits

  • in myself, that even I don't see when I look in the mirror, right now.

  • But apparently, if you age me and you make my skin

  • a different texture, over time, oh my god,

  • I'm going to actually look like some of my relatives, it seems.

  • BRIAN YU: Yeah.

  • It's incredible what the app can do.

  • A human trying to do this type of thing on their own might not be able to do it at all.

  • And I think that really just speaks to how powerful machine learning has

  • gotten, at this point.

  • These machines have been trained to look at huge data sets, probably analyzing younger and older pictures, and trying to understand, fundamentally, how you translate a younger photo into an older photo.

  • And now, they've just gotten really good at being able to do that in a way

  • that humans, on their own, never would've been able to.

  • DAVID MALAN: So this is, then, related to our recent chat

  • about machine learning, more generally.

  • Where, I assume, the training data in this case

  • is just lots, and lots, and lots of photos of people, young and old,

  • and of all sorts.

  • BRIAN YU: Yeah.

  • That would be my guess.

  • So FaceApp doesn't publicly announce exactly how their algorithm is working.

  • But I would imagine that it's probably just a lot of training data,

  • where you give the algorithm a whole bunch of younger photos and older

  • photos.

  • And you try and train the algorithm to be able to figure out how to turn the younger photo into the older photo, such that you can give it a new younger photo as input and have it predict what the older photo is going to look like.
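
For the curious, here is a minimal sketch of the kind of supervised, image-to-image training Brian is describing. FaceApp has not published its method, so everything below is illustrative: a tiny PyTorch model, with random tensors standing in for a real dataset of aligned young/old photo pairs.

```python
# A minimal, illustrative sketch of image-to-image training, assuming
# (as Brian guesses) a dataset of paired younger/older photos. Random
# tensors stand in for real, aligned face images.
import torch
import torch.nn as nn

young = torch.rand(16, 3, 64, 64)  # stand-in "younger" photos
old = torch.rand(16, 3, 64, 64)    # stand-in "older" photos of the same people

# A tiny convolutional network; a real system would be far larger,
# and likely adversarial (GAN-based) rather than a plain regressor.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # pixel-wise distance between prediction and target

for epoch in range(5):
    optimizer.zero_grad()
    predicted_old = model(young)        # "translate" young photos to old
    loss = loss_fn(predicted_old, old)  # how far off was the translation?
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")

# Trained on enough real pairs, a new photo could then be aged with
# a single forward pass: model(new_photo).
```

The point is just the shape of the loop: predict the older photo, measure the error, adjust the model, repeat.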

  • DAVID MALAN: It's amazing.

  • It's really quite fascinating too, to allow

  • people to imagine what they might look like in different clothes,

  • or I suppose, with different makeup on, or so forth.

  • Computers can do so much of this.

  • But it's actually quite scary too, because a corollary

  • of being able to mutate people's faces in this way, digitally,

  • is that you can surely identify people, as well.

  • And I think that's one of the topics that's been getting a lot of attention

  • here, certainly in the US, whereby a few cities, most recently, have actually

  • outlawed, outright, the police's use of facial recognition

  • to bring in suspects.

  • For instance, Somerville, Massachusetts, which is right around the corner from Cambridge, Massachusetts, did this.

  • And I mean, that's actually the flip side of the cool factor.

  • I mean, honestly, I was pretty caught up in it,

  • when I received this photo of myself some 20, 30, 40 years down the road.

  • Sent it along happily to some other people.

  • And then didn't really stop to think, until a few days later,

  • when I started reading about FaceApp and the implications thereof.

  • That actually, this really does forebode a scary future, where all too easily computers, and whichever humans own them, can pick us out in a crowd or track, really in the extreme, your every movement.

  • I mean, do you think that policy is really the only solution to this?

  • BRIAN YU: So I think that, certainly, facial recognition technology is going to keep getting better.

  • Because it's already really, really good.

  • And I know this from whenever photos get posted on Facebook.

  • Even when I'm in the background corner of a very small part of the image, Facebook, pretty immediately, is able to tell me, oh, that's me in the photo, when I don't even know if I would have noticed myself in the photo.

  • DAVID MALAN: I know.

  • Even when it just seems to be like a few pixels, off to the side.

  • BRIAN YU: Yeah.

  • So technology is certainly not going to be the factor that holds anyone back,

  • when it comes to facial recognition.

  • So if a city wants to protect itself against the potential implications

  • of this, then I think policy is probably the only way to do it.

  • Though it seems like the third city to ban facial recognition, most recently, is Oakland.

  • And it looks like their main concern is the misidentification of individuals,

  • and how that might lead to the misuse of force, for example.

  • And certainly, facial recognition technology is not perfect, right now.

  • But it is getting better and better.

  • So I can understand why more and more people might

  • feel like they could begin to rely on it, even though it's not 100% accurate

  • and may never be 100% accurate.

  • DAVID MALAN: But that too, in and of itself, seems worrisome.

  • Because if towns, or cities, are starting

  • to ban it on the basis of the chance of misidentification,

  • surely the technology, as you say, is only going to get better, and better,

  • and better.

  • And so that argument, you would think, is going to get weaker, and weaker,

  • and weaker.

  • Because, I mean, even just a few years ago, Facebook was, you noted, claiming that they could identify humans in photos with an accuracy-- correct me if I'm wrong-- of 97.25%.

  • Whereas humans, when trying to identify other humans in photos, had an accuracy level of 97.5%.

  • So almost exactly the same statistic.
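
Framed as error rates, the two figures David quotes are easier to compare; a quick back-of-the-envelope check, using only the numbers above:

```python
# Error rates implied by the accuracies quoted in the conversation.
facebook_accuracy = 0.9725  # Facebook's reported face-matching accuracy
human_accuracy = 0.975      # human accuracy on the same kind of task

facebook_error = 1 - facebook_accuracy  # fraction of faces missed
human_error = 1 - human_accuracy

print(f"Facebook error rate: {facebook_error:.2%}")                 # 2.75%
print(f"Human error rate:    {human_error:.2%}")                    # 2.50%
print(f"Relative difference: {facebook_error / human_error:.2f}x")  # 1.10x
```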

  • So at that point, if the software is just as good, if not better,

  • than humans' own identification, it seems like a weak foundation

  • on which to ban the technology.

  • And really, our statement should be stronger than just, oh,

  • there's this risk of misidentification.

  • But rather, this is not something we want societally, no?

  • BRIAN YU: Yeah.

  • I think so, especially now that facial recognition technology has gotten better.

  • When Facebook did that study, I think that was back in 2014 or so.

  • So I would guess that Facebook's facial recognition

  • abilities have gotten even better than that, over the course of the past five

  • years, or so.

  • So facial recognition is probably better when

  • a computer is doing it than when humans are doing it,

  • by now, or at least close to as good.

  • And so, given that, I do think that when it comes to trying to decide how we want to shape the policies in our society, we should not just be looking at how accurate these things are, but also at what kinds of technologies we want to be playing a role in our policing system, and in the way that society runs, and the rules there.

  • DAVID MALAN: And I imagine this is going to play out differently

  • in different countries.

  • And I feel like you've already seen evidence of this,

  • if you travel internationally.

  • Because customs agencies, in a lot of countries, are already photographing you, even with those silly little webcams, when you swipe your passport and sign into a country.

  • They've been logging people's comings and goings for some time.

  • So really the technology is just facilitating all the more of that

  • and tracking.

  • I mean, in the UK, for years, they've been known for having hundreds, thousands of CCTVs, closed-circuit televisions.

  • Which, I believe, historically were used really

  • for monitoring, either in real time or after the fact, based on recordings.

  • But now, you can imagine software just scouring a city, almost like Batman.

  • I was just watching, I think, The Dark Knight the other day, where Bruce Wayne is able to oversee everything going on in Gotham, or listen in, in that case, via people's cell phones.

  • It just feels like we're all too close to the point where

  • you could do a Google search for someone, essentially on Google Maps,

  • and find where they are.

  • Because there are so many cameras watching.

  • BRIAN YU: Yeah.

  • And so, those privacy concerns, I think, are part of what this whole recent controversy has been about, with facial recognition and FaceApp.

  • And in particular, with FaceApp, the worry has been that when FaceApp is running these filters to take your face and modify it to be some different face, it's not just a program that's running on your phone to be able to do that sort of thing.

  • It's that you've taken a photo, and that photo

  • is being uploaded to FaceApp servers.

  • And now your photo is on the internet, somewhere.

  • And potentially, it could stay there and be used for other purposes.

  • And who knows what might happen to it.

  • DAVID MALAN: Yeah.

  • I mean, you, and some other people on the internet,

  • dug into the privacy policy that FaceApp has.

  • And if we read just a few sentences here,

  • one of the sections in the "Terms of Service"

  • are that, "You grant FaceApp consent to use the user content,

  • regardless of whether it includes an individual's name, likeness, voice,

  • or persona sufficient to indicate the individual's identity.

  • By using the services, you agree that the user consent

  • may be used for commercial purposes.

  • You further acknowledge that FaceApp's use

  • of the user content for commercial purposes

  • will not result in any injury to you or any other person you

  • authorized to act on your behalf."

  • And so forth.

  • So you essentially are turning over your facial property, and any photos

  • thereof, to other people.

  • And in my case, it wasn't even me who opted into this.

  • It was someone else who uploaded my photo.

  • And, at the time, I perhaps didn't take enough offense or concern.

  • But that too is an issue, ever more so, when folks are using services

  • like this, not to mention Facebook and other social media apps,

  • and are actually providing, not only their photos, but here is my name,

  • here is my birthday, here are photos from what I did yesterday,

  • and God knows how much more information about you.

  • I mean, we've all tragically opted into this, under the guise of,

  • oh, this is great.

  • We're being social with other people online.

  • When really, we're providing a lot of companies with treasure

  • troves of information about us.

  • And now, governmental agencies seem to be hopping on board, as well.

  • BRIAN YU: Yeah.

  • Facebook, especially.

  • It's just scary how much they know about exactly who you are

  • and what your internet browsing habits are like.

  • It's all too often that I'll be reading about something, on the internet,

  • that I might be interested in purchasing.

  • And all of a sudden, I go and check Facebook,

  • and there's an advertisement for the very thing

  • that I was just thinking about purchasing.

  • Because Facebook has their cookies installed

  • on so many websites that are just tracking every website you visit.

  • And they can link that back to you and know exactly what you've been doing.
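
For anyone curious about the mechanics Brian alludes to, here is a toy sketch of how a tracking pixel and a third-party cookie combine to follow one visitor across many sites. The server, endpoint, and cookie name are all invented for illustration; this is the general pattern, not Facebook's actual implementation.

```python
# A toy "tracker" server, sketched with Flask. Sites would embed
# <img src="https://tracker.example/pixel.gif?page=..."> on each page;
# every page view then hits this endpoint, carrying the tracker's cookie.
# All names here (pixel.gif, visitor_id) are hypothetical.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)
seen = {}  # visitor id -> list of pages that visitor was seen on

@app.route("/pixel.gif")
def pixel():
    # Reuse the visitor's cookie if present; otherwise mint a new id.
    visitor = request.cookies.get("visitor_id") or str(uuid.uuid4())
    page = request.args.get("page", "unknown")
    seen.setdefault(visitor, []).append(page)

    # Respond with a tiny image and (re)set the long-lived cookie.
    # Because the cookie is scoped to the tracker's domain, it rides
    # along on every site that embeds the pixel.
    resp = make_response(b"GIF89a")
    resp.headers["Content-Type"] = "image/gif"
    resp.set_cookie("visitor_id", visitor, max_age=60 * 60 * 24 * 365)
    return resp

if __name__ == "__main__":
    app.run(port=8080)
```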

  • DAVID MALAN: Yeah.

  • I know.

  • And I was thinking that the other day, because I was seeing ads for something that I actually went ahead and bought from some website.

  • I don't even remember what it was.

  • But I was actually annoyed that the technology wasn't smart enough

  • to opt me out of those same adverts, once I had actually

  • completed the transaction.

  • But you know, I was thinking too, because just yesterday, I

  • was walking back to the office.

  • And I passed someone who I was, for a moment, super sure that I knew.

  • But I wasn't 100% confident.

  • So I kept walking.

  • And then I felt guilty.

  • And so I turned around, because I didn't want to just walk

  • past someone without saying hello.

  • But then when I saw them a second time, nope,

  • it still wasn't the person I thought it was.

  • But I had that hesitation.

  • And I couldn't help but think now, in hearing these statistics, that Facebook and real humans are, statistically, about 97% good at detecting faces.

  • That was my 3% yesterday.

  • Out of 100 people, he was one of the three people this week that I'm going to fail to recognize.

  • And it really put this into perspective.

  • Because while you might think that humans are perfect,

  • and it's the machines that are trying to catch up, it feels like sometimes

  • it's the machines that are already catching up.

  • And, case in point, there was my own mistake.

  • BRIAN YU: Yeah.

  • And when machine learning algorithms, facial recognition but also machine learning more generally, are trained, humans are often the baseline that computers are striving to match, in terms of performance.

  • You have a human perform some task, like labeling images, or documents, or the like.

  • And then you give the same task to the computer and see how accurately the computer matches up with the human, with the goal being: how human can we get the computer to be?
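
In practice, that baseline comparison can be as simple as measuring how often the model's labels agree with the human labels. A minimal sketch, with made-up labels:

```python
# Human labels serve as the baseline; the model is scored by agreement.
# Both label lists here are invented purely for illustration.
human_labels = ["cat", "dog", "cat", "bird", "dog", "cat", "bird", "dog"]
model_labels = ["cat", "dog", "dog", "bird", "dog", "cat", "cat", "dog"]

matches = sum(h == m for h, m in zip(human_labels, model_labels))
accuracy = matches / len(human_labels)
print(f"Model agrees with the human baseline on {accuracy:.0%} of items")  # 75%
```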

  • But there are certain tasks where you could actually imagine the computer getting better.

  • And facial recognition is one of those cases,

  • where I feel like, eventually, if not already,

  • it could be better than humans.

  • I think, self-driving cars is another example, which

  • we've talked about before, where there's a lot of potential

  • for cars to be better when they're being driven by computers

  • than when they're being driven by people.

  • DAVID MALAN: But I think that's an interesting one,

  • because it's hard, I think, for people to rationally acknowledge that, right?

  • Because I feel like you read all the time about a self-driving car that's been involved in an accident.

  • And this seems to be evidence, in some minds, that this is why we shouldn't have self-driving cars.

  • Yet I'm guessing we're nearing the point, if we're not there already,

  • where it is humans who are crashing their cars far more

  • frequently than these computers.

  • And so, we need to appreciate that, yes, the machines are

  • going to make mistakes.

  • And in the worst extreme case, God forbid, a computer, a machine,

  • might actually hit, and hurt someone, or kill someone.

  • But that's the same reality in our human world.

  • And it's perhaps a net positive, if machines

  • get to the point of being at least better than we humans.

  • Of course, in facial recognition, that could actually mean, adversarially for humans, that they're being detected, that they're being monitored far more commonly.

  • So it almost seems these trends in machine learning

  • are both for good and for bad.

  • I mean, even FaceApp, a couple of years ago, apparently-- and I only realized this by reading up on some of the recent press it's now gotten again-- got themselves into some touchy social waters, when it came to some of the filters they rolled out.

  • Apparently, a couple of years ago, they had

  • a hot filter, which was supposed to make you look prettier in your photos.

  • The catch is that, for many people, this was apparently exhibiting patterns

  • like lightening skin tone, thereby invoking some racial undertones,

  • as to what defines beauty.

  • And they even had more explicit filters, I gather, a couple of years ago, where you could actually change your own ethnicity, which did not go over well, either.

  • And so those features have since been removed.

  • But that doesn't change the fact that, we are at the point technologically,

  • where computers can do this and are probably

  • poised to do it even better, for better or for worse.

  • And so again, it seems to boil down to then,

  • how we humans decide proactively, or worse, reactively,

  • to put limits on these technologies or restrain ourselves

  • from actually using them.

  • BRIAN YU: Yeah.

  • I think one of the big challenges for societies and governments, especially right at this point in time, is catching up with technology, where technology is really moving fast and, every year, is capable of more things than the year before.

  • And that's expanding the horizon of what computers can do.

  • And I think it's really incumbent upon society to figure out, OK, what things should these computers be able to do, and to place those appropriate limits earlier rather than later.

  • DAVID MALAN: Yeah, absolutely.

  • And I think it's not just photos, right?

  • Because there's been in the press, over the past year or two,

  • this notion of deepfake videos, as well.

  • Whereby, using machine learning algorithms,

  • you feed these algorithms lots of training data,

  • like lots of videos of you teaching, or talking, or walking, and moving,

  • and so forth.

  • And out of that learning process can come a synthesized video

  • of you saying something, moving something, doing something,

  • that you never actually said, or did, or moved.
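
Many published deepfake systems use an adversarial recipe for the learning process David sketches: one network synthesizes content while a second learns to judge real from fake, and each improves against the other. Purely as an illustration, with toy vectors standing in for video frames (a real pipeline adds face alignment, per-frame synthesis, audio models, and much more), a minimal generative adversarial setup in PyTorch might look like this:

```python
# A toy GAN: the generator learns to produce samples the discriminator
# cannot distinguish from "real" data. Toy 8-dimensional vectors stand
# in for video frames; nothing here is a working deepfake.
import torch
import torch.nn as nn

real_data = torch.randn(64, 8) + 2.0  # stand-in for real footage
generator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 8))
discriminator = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(200):
    # Discriminator step: label real footage 1, synthesized footage 0.
    fake = generator(torch.randn(64, 4))
    d_loss = bce(discriminator(real_data), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator call fakes real.
    fake = generator(torch.randn(64, 4))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```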

  • A couple of clips gained a decent amount of notoriety some months ago,

  • because someone did this, for instance, for President Obama in the US.

  • In fact, do you want to go ahead and play the clip of this deepfake?

  • So there is a video component, too.

  • But what you're about to hear is not Obama, much as it sounds like him.

  • BRIAN YU: Yeah, sure.

  • [AUDIO PLAYBACK]

  • - We're entering an era in which our enemies can

  • make it look like anyone is saying anything, at any point in time.

  • Even if they would never say those things.

  • So for instance, they could have me say things like, I don't know,

  • Killmonger was right, or Ben Carson is in the sunken place.

  • Or, how about this, simply, President Trump is a total and complete dipshit.

  • Now, you see, I would never say these things,

  • at least not in a public address.

  • But someone else would, someone like Jordan Peele.

  • This is a dangerous time.

  • Moving forward, we need to be more vigilant with what

  • we trust from the internet.

  • That's a time when we need to rely on trusted news sources.

  • It may sound basic, but how we move forward in the age of information

  • is going to be the difference between whether we survive

  • or whether we become some kind of fucked-up dystopia.

  • Thank you.

  • And stay woke bitches.

  • [END PLAYBACK]

  • DAVID MALAN: So if you're familiar with Obama's voice,

  • this probably sounds quite like him, but maybe not exactly.

  • And it might sound a bit more like an Obama impersonator.

  • But honestly, if we just wait a year, or two, or more,

  • I bet these deepfake impressions of actual humans

  • are going to become indistinguishable from the actual humans themselves.

  • And in fact, it's perhaps all too appropriate

  • that this just happened on Facebook, or more specifically,

  • on Instagram recently, where Facebook's own Mark

  • Zuckerberg was deepfaked via video.

  • Should we go ahead and have a listen to that, too?

  • [AUDIO PLAYBACK]

  • - Imagine this for a second, one man with total control

  • of billions of people's stolen data, all their secrets, their lives,

  • their futures.

  • I owe it all to Spectre.

  • Spectre showed me that, whoever controls the data, controls the future.

  • [END PLAYBACK]

  • DAVID MALAN: So there too, it doesn't sound perfectly like Mark Zuckerberg.

  • But if you were to watch the video online--

  • and if you go ahead, indeed, and google President Obama deepfake and Mark

  • Zuckerberg deepfake, odds are, you'll find your way

  • to these very same videos, and actually see

  • the mouth movements and the facial movements that are

  • synthesized by the computer, as well.

  • That too, is only going to get better.

  • And I wonder, you can certainly use this technology all too obviously for evil,

  • to literally put words in someone's mouth

  • that they never said, but they seem to be saying,

  • in a way that's far more persuasive than just misquoting someone in the world of text, or synthesizing someone's voice, as seems to happen often in TV shows and movies.

  • But doing it even more compellingly, because people are all the more inclined,

  • I would think, to believe, not only what they hear or read, but what they see,

  • as well.

  • But you could imagine, maybe even using this technology for good.

  • You and I, for instance, spend a lot of time

  • preparing to teach classes on video, for instance, that don't necessarily

  • have students there physically.

  • Because we do it in a studio environment.

  • So I wonder, to be honest, if you give us a couple of years' time

  • and feed enough recordings of us, now in the present,

  • to computers of the future, could they actually synthesize

  • you teaching a class, or me teaching a class, and have the voice sound right,

  • have the words sound right, have the facial and the physical movements look

  • right?

  • So much so, that you and I, down the road,

  • could just write a script for what it is that we want to say,

  • or what it is we want to teach, and just let

  • the computer take it the final mile?

  • BRIAN YU: That's a scary thought.

  • We'd be out of a job.

  • DAVID MALAN: Well, someone's got to write the content.

  • Although surely, if we just feed the algorithms enough words that we've previously said, you could imagine, oh, just go synthesize what my thoughts would be on this topic.

  • I don't know.

  • I mean, there are actually some interesting applications of this,

  • at least if you disclaim to the audience,

  • for instance, that this is indeed synthesized and not

  • the actual Brian or the actual David.

  • But if you're a fan of Black Mirror, the TV show that's

  • been popular for a few years now, on Netflix,

  • there's actually an episode in the most recent season-- no spoilers here-- starring Miley Cyrus, where she and the rest of the cast actually touch on this very subject and use, although they don't identify it by name, this notion of deepfaking, when it comes to videos.

  • BRIAN YU: Yeah.

  • It's a very interesting technology, for sure.

  • And these videos of Mark Zuckerberg and Obama,

  • certainly you can tell, if you're watching and paying attention closely,

  • that there are certain things that don't look or don't feel quite right.

  • But I would be very curious to see a Turing test, of sorts,

  • on this type of thing.

  • Where you ask someone to look at two videos and figure out which one is the actual Obama, and which one is the actual Mark Zuckerberg.

  • I guess that, on these videos, most people

  • would probably do a pretty good job.

  • But I don't think it'd be 100%.

  • But I would be very curious to see, year after year, how that rate would change.

  • And as these technologies get better, people may become less able to distinguish, to the point where it would just be a 50/50 shot as to which one is the fake.
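
That year-over-year experiment is easy to prototype as a simulation: show viewers pairs of videos and track what fraction spot the fake. The detection rates below are invented; 50% would mean the fakes had become indistinguishable from the real thing.

```python
# Simulate the two-video "Turing test" Brian describes. In each trial,
# a viewer picks the fake with some probability; rates are hypothetical.
import random

random.seed(0)

def run_trials(p_correct, n=1000):
    """Fraction of n trials in which the viewer correctly spots the fake."""
    return sum(random.random() < p_correct for _ in range(n)) / n

for year, p in [(2019, 0.90), (2021, 0.75), (2023, 0.55)]:
    print(year, f"{run_trials(p):.1%} of viewers spotted the fake")
```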

  • DAVID MALAN: Yeah.

  • Especially when it's not just celebrities,

  • but it's a person you've never met and you are seeing them,

  • or quote-unquote them for the first time on video.

  • I bet it would be even harder for a lot of folks

  • to distinguish someone for whom they don't have just ample press

  • clippings in their memory, of having seen them or heard them before.

  • So what do you think, in the short term--

  • because this problem only seems to get scarier and worse down the road--

  • is there anything people like you, and I, and anyone

  • else out there can actually do to protect themselves

  • against this trend, if you will?

  • BRIAN YU: So I think one of the important things

  • is just being aware of it, and being mindful of it, and being on the lookout

  • for it as it comes up.

  • Because certainly, there is nothing we can really

  • do to stop people from generating content like this,

  • and generating fake audio recordings, or video recordings.

  • But I think that if people look at something that's potentially a fake video and just take it at face value as accurate, then that's a potentially dangerous thing.

  • But encouraging people to take a second look at things,

  • to be able to look a little more deeply to try and find the primary sources,

  • that's probably a way to mitigate it.

  • But even then, the ultimate primary source

  • is the actual person doing the speaking.

  • So if you can simulate that, then even that's not a perfect solution.

  • DAVID MALAN: So is it fair to say maybe, that the biggest takeaway

  • here, certainly educationally, would be just critical thinking

  • and seeing or hearing something and deciding for yourself, evaluatively, if this is some source I should believe?

  • BRIAN YU: Yeah, I'd say so.

  • DAVID MALAN: And you should probably stop

  • uploading photos of yourself to Facebook,

  • and Instagram, and Snapchat, and the like.

  • BRIAN YU: Well, that's a good question.

  • Should you stop uploading photos to Facebook, and Instagram, and Snapchat?

  • I mean, certainly there's a lot of positive value in that.

  • Like, my family always loves it when they see photos of me on Facebook.

  • And maybe that's worth the trade-off of my photos being online?

  • DAVID MALAN: Living in a police state.

  • [LAUGHTER]

  • I don't know.

  • I mean, I think that to some extent, the cat is out of the bag.

  • There are already hundreds of photos of me, I'm guessing, out there online, whether it's in social media or in other people's accounts that I don't even know about, for instance, because I just wasn't tagged in them.

  • But I would think that really the only way to stay off the grid is not to participate in this media, at least.

  • But again, especially in the UK, and in other cities and surely other locations here in the US, you can't even go outside anymore without being picked up by one or more cameras, whether it's an ATM at a bank, or a street-view camera up above, or literally Street View.

  • I mean, there are cars driving around taking pictures of everything they see.

  • And at least companies, like Google, have tended to be in the habit of blurring out faces.

  • But they still have those faces somewhere in their archives [INAUDIBLE].

  • BRIAN YU: Yeah.

  • I was actually just grocery shopping at Trader Joe's the other day.

  • And I was walking outside.

  • An Apple Maps car drove by with all their cameras that were looking around and taking photos of the street.

  • DAVID MALAN: I saw one recently, too.

  • But their cars are not nearly as cool as Google's.

  • BRIAN YU: I've never seen a Google car, in person.

  • DAVID MALAN: Oh, yeah.

  • No, I've seen them from time to time.

  • They're much better painted and branded.

  • Apple's looked like someone had just set it up on top of their own car.

  • [LAUGHTER]

  • Well, and on that note, please do keep the topics of interest coming.

  • Feel free to drop me and Brian a note at podcast@cs50.harvard.edu,

  • if you have any questions or ideas for next episodes.

  • But in the meantime, if you haven't yourself seen or tried out FaceApp,

  • don't necessarily go, and rush, and download, and install this app.

  • That was not intended to be our takeaway.

  • But be mindful of it.

  • And certainly, if you just google FaceApp on Google Images, or the like,

  • you can actually see some examples of just how compelling,

  • or how frightening, the technology is.

  • So it's out there.

  • This then was the CS50 Podcast.

  • My name is David Malan.

  • BRIAN YU: I'm Brian Yu.

  • See you all next time.

  • [MUSIC PLAYING]
