How Trolls on Reddit Try to Manipulate You (Disinformation & How We Beat It) - Smarter Every Day 232


  • Hey, it's me, Destin.

  • Welcome back to Smarter Every Day.

  • Unfortunately, now is the time for the video about disinformation on Reddit, the front page of the Internet.

  • It's been documented by both the European Union and the United States of America that Russia, Iran, and China are all spreading disinformation online about the coronavirus pandemic, which, to be clear, can kill you.

  • If you don't know the right things about the pandemic, you can die.

  • So why would they do that?

  • This is the fourth video in a series on disinformation.

  • We've already covered YouTube, Twitter and Facebook.

  • Okay, Reddit does not want disinformation on the platform.

  • In fact, they're doing a ton to try to fight against it.

  • But the question is, how do you fight disinformation without giving up what makes Reddit so powerful: a place to freely exchange ideas with other people anonymously?

  • Today we're gonna talk about this tension and, more importantly, how it affects you and me as Reddit users.

  • To learn about this, let's go get smarter every day and talk to the experts. We're going to speak to Chris Slowe, the chief technology officer at Reddit, and then we'll go to Renée DiResta at the Stanford Internet Observatory.

  • We'll talk on the phone with John at the Oxford Internet Institute, and then Jeremy Blackburn at the iDRAMA Lab.

  • It's a group of scientists and academics that use math and science to try to detect this stuff.

  • All right, first off, let's go to actual Reddit headquarters, something I've always wanted to do.

  • We're gonna meet Chris Slowe, chief technology officer, and get this conversation started.

  • Let's start here.

  • Yeah.

  • Who controls Reddit?

  • Somebody controls Reddit?

  • Oh... but I mean, Reddit is... it's the front page of the Internet. Like, you go to Reddit and, like, whatever's on that front page, it influences culture for that day.

  • I think the actual answer is, like, everybody who uses Reddit controls Reddit in some way. You're able to upvote or downvote as a function of, you know, whether the community likes that content.

  • And so, yeah, the next layer is the moderators.

  • So every community has its own set of moderators, and they enforce their own set of rules.

  • And so they get a chance to set the tone of their community. They can take down anything they want directly, just remove it.

  • Um, and they can set the rules to be as strict or as loose as they want to be.

  • The underlying important unit is not the content or the user, but the communities that are built on the platform.

  • Is coordinated inauthentic behavior a problem on Reddit?

  • I mean, it's definitely a problem we constantly work to prevent.

  • I'm gonna say, like there will always be examples of things that hit the front page that potentially shouldn't be there.

  • That's kind of like a nature of the platform, right?

  • Because you're dealing with a bunch of people, right?

  • And so, um, you know, an extreme example would be... you know, clickbait is a thing, like titles that are suggestive, that'll get you to click through.

  • But what we try to do... there are two parts of it.

  • One is users have an opportunity to reconsider their vote after seeing the content, and they have the downvote available to them. There's a way to kind of, like, degrade any inauthentic, like, behavior that happens.

  • The other side of it is, like, you know, we've put a lot of work over the last decade and change into maintaining a certain, like, sanctity of the vote.

  • Um, you know, the joke I usually use is that, um, we count every vote, but it doesn't mean that every vote counts, right?

  • And so we have to be able to come up with, um, you know, based on behaviors, knowing that there's some adversary on the far side who is trying to basically game our system, a set of somewhat opaque ways to hide what counts and what doesn't.

  • But at the same time, make sure that we're not throwing the baby out with the bathwater, right? Like, we do want to have things that are popular manifest themselves as popular.

  • But during that very early formative time, those really first couple of votes are what's really critical.

  • That's where there's a lot of scrutiny around making sure that, you know, there's not, like, a game going on where it's me and my 50,000 clones, all coming from the same computer and the same IP address, all counting towards that vote. You can think of it like this: kind of a tiered system for how we deal with the content that appears on Reddit.
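
To make that concrete, here's a minimal sketch of the kind of check Chris is describing. This is not Reddit's actual anti-abuse code, just a hypothetical illustration of discounting a burst of early votes that all arrive from one IP address:

```python
from collections import defaultdict

def effective_score(votes, max_per_ip=1):
    """Tally votes, but let at most `max_per_ip` votes per IP address count.

    `votes` is a list of (user_id, ip_address, direction) tuples, where
    direction is +1 for an upvote and -1 for a downvote. A '50,000 clones
    on one machine' attack collapses to a single counted vote.
    """
    counted_per_ip = defaultdict(int)
    score = 0
    for user_id, ip, direction in votes:
        if counted_per_ip[ip] >= max_per_ip:
            continue  # vote is accepted but silently not counted
        counted_per_ip[ip] += 1
        score += direction
    return score

# A clone army from one IP barely moves the score.
votes = [(f"clone{i}", "203.0.113.7", +1) for i in range(50_000)]
votes += [("alice", "198.51.100.2", +1), ("bob", "192.0.2.9", -1)]
print(effective_score(votes))  # -> 1
```

The real system is deliberately opaque about which votes count; the principle sketched here is the same: accept the vote so the attacker learns nothing, and quietly decide how much it counts.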

  • At least on, like, the enforcement side of things, like the removal side of things.

  • The last line of defense is us, like, the company.

  • You know, the admins, as we're referred to on the platform.

  • Um, our job is to enforce, like, site-wide policy violations: like, you know, copyright violations, anything that violates our content policy.

  • You know, harassment, spam, and all that great fun stuff. Um, we appear in places as, like, the last line of defense.

  • It's like, it's like sending in the National Guard to clean up, like, a giant mess.

  • Okay, so you're saying when Reddit employees get involved?

  • Yeah, that's a rare thing.

  • That's a relatively rare thing.

  • Just a personal observation: Reddit seems to be way more hands-off when it comes to this stuff than other social media platforms, but it's also way more community-focused.

  • Renée DiResta has a ton of experience studying disinformation online, so I asked her what she thought about this community-driven approach.

  • There's an interesting idea there, which is that the community should decide what kind of content it wants to tolerate.

  • Right?

  • And so you do see things like, I think there's a subreddit where you can only post a picture of a cat.

  • And if you post a picture of a dog, like, you know, it's deleted and banned, and nobody goes in there and screams that they're being censored because they couldn't post the dog into the cat subreddit.

  • So it's interesting in light of, like, the moderation challenges faced by, like, Twitter and Facebook, where there's this expectation that one moderation standard has to fit the whole community.

  • I do think Reddit is an interesting experiment in seeing how that much more hands-on moderation kind of activity works.

  • And then also, when the community reports these weird accounts coming in, there is a little bit more of, like, you know... you as a member of that community have a better sense of where that uncanny valley is, where, like, the content's not quite right, where the person typing the comment gets it just a little bit wrong.

  • They don't understand the community.

  • Yeah, exactly.

  • And so, like, I do believe that Reddit has a unique, community-moderated kind of point of view for detecting anything from disinformation to harassment.

  • At the same time, we do see these accounts get through, right? And so the question is also, what's their top-level strategy for managing and detecting and being proactive, recognizing that the platform is a target for state-sponsored actors?

  • Is it enough to rely on community mods?

  • Or is there also something in place to have, ah, a top-level view that's doing a little bit more of the investigative-type work that Facebook and Twitter are doing?

  • So when speaking to Renée and Chris, we talked about upvote manipulation, downvote manipulation, brigading.

  • There's all kinds of manipulation that can take place on Reddit, and those discussions are pretty long, so I'll leave them over on the second channel at some point.

  • But for now, there was one little nugget that fell out of the conversation with Chris from Reddit that I think other redditors might appreciate.

  • One of the, one of the strangely invariant things on Reddit has been that the upvote-to-downvote ratio has been consistent over 14 years.

  • I don't know what that even means.

  • 7 to 1. There have been seven upvotes, consistently, to every downvote cast on the site.

  • I don't really get it, but it's, ah, it's a thing that's always happened.

  • This is where Jeremy Blackburn comes in. He's from Binghamton University in New York, the top public university in New York.

  • And he runs this thing, or at least participates in this thing, called the iDRAMA Lab.

  • I don't really understand what you are. Could you please explain the iDRAMA Lab for me?

  • Yeah, sure.

  • So the iDRAMA Lab, we're a group of international scientists, and, um, we specialize in having a large-scale, a very wide and high-level, holistic view of the Internet as a whole.

  • So we don't just look at Twitter or just Reddit or just Facebook.

  • We look at it all, and we focus really on understanding the connections between them, the hidden connections between them and other chunks of the web.

  • Um, this is really important.

  • We think, you know, we're pretty good at it.

  • Um it's our niche.

  • And ah, we do our job there.

  • I got it.

  • So let me just ask you this straight up then.

  • Is there some type of large scale, coordinated inauthentic activity on Reddit?

  • Is that a thing?

  • Yes.

  • Absolutely.

  • Really?

  • The Internet has a long history of people pretending to be something they're not.

  • Ah, and the Internet kind of enables that.

  • So, no, I don't want to say that everybody you see that's acting a little bit weird is some kind of bad actor.

  • There are people that are new to communities and have to learn the rules.

  • You know, maybe they are interested.

  • They just haven't learned the culture yet.

  • But it's not crazy to be on the lookout for this stuff.

  • It does happen.

  • There are active campaigns to abuse social media and influence opinion.

  • So what are you seeing?

  • Ah, we're seeing that there's a lot of this going on right now, especially with the coronavirus.

  • Unfortunately, we see that a lot of this stuff appears to be effective.

  • Still, um, we haven't noticed yet, at least, any kind of new, particularly new strategies being used. We're still seeing kind of the same basic strategies that we saw two or three years ago, because they still work.

  • I guess one thing I didn't get from the Reddit interview is they were kind of top-level. Chris was kind of top-level.

  • He didn't really dive down and say, oh yeah, user SurfingInjured385 did this.

  • Do you have, like, a specific example of a time when someone was trying to manipulate Reddit?

  • Yeah, so Reddit's actually put this stuff out. There's reddit.com/wiki/suspiciousaccounts, and on that site, what they did (and I actually give them a lot of credit for doing this): following the 2016 election, Reddit acknowledged 944 trolls that had been active on their platform, and it did a very detailed transparency report in early 2017... maybe it was late 2017... but they did a really detailed transparency report where they actually go through, and you can see that the user Rubinjer, for example, has karma of 99,400, it looks like.

  • And Reddit, for a while, had left these accounts up, um, and so you could actually kind of click in there and you could see what they were, who they were, where they were posting to.

  • And that was where you could start to see that they were posting to some of the funny meme subreddits, the Black community subreddits, some of the far-right community subreddits.

  • And so, to their credit, Reddit did actually make this visible to the public.

  • They kept them up there for quite some time.

  • Just an interesting choice.

  • It is, because Facebook took it all down, even though they flagged the accounts.

  • When I first heard that Reddit was leaving up access to both posts and comments from known Russian agents who were trying to manipulate us, I thought that was a really odd decision on Reddit's part.

  • But when I went to the public link Reddit Security provided of all the known troll accounts, I realized that this was actually valuable.

  • It lets you study the context of the comments and the posts to understand how these disinformation campaigns work.

  • At the time this particular troll operation was underway, the strategy seemed to be an attempt to use the hot-button issues of the day in order to rile up Americans on both sides of any argument, all the while making sure to cycle in a good helping of low-effort, cute animal reposts to try to throw Reddit Security and other users off the scent.

  • At the time of these posts, in 2015 and 2016, they clearly focused on racism and police issues.

  • They would go from subreddit to subreddit, hoping that it would explode and make everyone fight and create as many casualties as possible.

  • The comments were interesting.

  • They did one of two things.

  • They were either camouflage, trying to convince other redditors that they were real people, or they were inflammatory, hoping to nudge people towards hating each other.

  • Although there were a few outlier accounts that seemed to be focused more on comments, if you look at the karma scores of the vast majority of these accounts, the post karma was on average 40 times larger than the comment karma.

  • So inflammatory posts seem to have been the weapon of choice.
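
You can sanity-check that kind of claim yourself. Here's a minimal sketch, assuming you've already scraped the suspicious-accounts wiki page into (username, post_karma, comment_karma) rows; the rows below are toy values, except Rubinjer's post karma, which is mentioned above:

```python
def karma_ratios(accounts):
    """Map each account to its post-to-comment karma ratio (skip zero comment karma)."""
    return {
        name: post / comment
        for name, post, comment in accounts
        if comment > 0
    }

# Toy rows standing in for the real suspicious-accounts data.
accounts = [
    ("rubinjer", 99_400, 2_500),   # post-heavy (comment karma is made up here)
    ("example_troll", 120, 3_500), # a comment-focused outlier
]
for name, ratio in karma_ratios(accounts).items():
    print(f"{name}: post karma is {ratio:.0f}x comment karma")
```

A ratio far above 1 across most accounts is exactly the "posts were the weapon of choice" pattern described above.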

  • That being said, even though the comments seemed to be less influential, let's take a moment and try to analyze them, to understand the cumulative effect that they did have on the conversation.

  • I'm on the phone with John from the Oxford Internet Institute. He's been researching disinformation on Reddit, and he's gonna talk to us a little bit about what he sees in the data.

  • So you have a report in Robotrolling, which is the magazine the NATO Strategic Communications Centre of Excellence puts out quarterly about online disinformation.

  • You've got a graph down here that I was hoping you could explain to us.

  • Sure, I'll give it a try.

  • So this was looking at what happens to Reddit conversations after some manipulation.

  • So you've got a Reddit thread sort of ticking along over time (you can think of this left to right in each of those graphs): another comment, another comment on the comment, and so on. And then in the middle, on the red dotted line, we have when a fake account, a Russian bot or a troll account, has made a comment.

  • They've injected something into this thread. And then what we have is the baseline, which is tracking along at zero; it's essentially a control thread, so another thread that hasn't been manipulated. And then what we see in the three lines is how the manipulated conversation diverged.

  • So what happened?

  • Where was the change?

  • Cognitive complexity.

  • What does that mean?

  • So, ah, a higher cognitive complexity score means an acceptance that multiple viewpoints can be true, even if they're in opposition.

  • Whereas a low complexity score is singular, very single-minded: you know, I'm right and there's nothing else that could possibly be true.

  • Okay, An identity attack.

  • I assume that means like people start attacking each other based on who they are.

  • Like gender, religion, things of that nature.

  • Yeah, exactly.

  • Attacking somebody because of their identity.

  • So a high toxicity score suggests that a message is more aggressive, more confrontational, and it's more likely to lead the recipient of the message to leave the conversation.

  • And this is why I think it's really interesting: we see the conversations change in all three of these measures.

  • So, on the left-hand graph, they became less diverse in the number of opinions; they became more polarized, single viewpoints. In the middle, there's a sort of a short spike in the number of identity attacks, although this did return to baseline afterwards. And then on the right, we see a sustained rise in the level of toxicity in the conversation.

  • So this is just one comment that was able to make the entire conversation diverge.

  • Is that what I'm seeing here?

  • Yes.

  • So this is just at the one-comment level.

  • So each individual comment did create a measurable change in the nature of the conversation.

  • So the magnitude of this change is small but measurable.

  • But if you scale this up to thousands of comments over the whole platform, then suddenly that starts to have a bigger impact.
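
Here's a rough sketch of what that measurement could look like. This is not John's actual pipeline; it assumes you already have a per-comment toxicity score (from some classifier) for both the manipulated thread and a matched control thread:

```python
from statistics import mean

def excess_shift(manipulated, control, injection_index):
    """How much more a manipulated thread shifted after the injected comment
    than a control thread did over the same span.

    Both arguments are lists of per-comment toxicity scores in thread order;
    `injection_index` is the position where the troll comment landed.
    """
    def shift(scores):
        return mean(scores[injection_index:]) - mean(scores[:injection_index])

    return shift(manipulated) - shift(control)

# Toy scores: the manipulated thread turns more toxic after the injection.
manipulated = [0.10, 0.12, 0.11, 0.30, 0.35, 0.33]
control     = [0.11, 0.10, 0.12, 0.12, 0.11, 0.13]
print(f"excess toxicity shift: {excess_shift(manipulated, control, 3):+.2f}")
```

The same before-and-after comparison works for the other two measures (cognitive complexity and identity attacks); only the scoring function changes.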

  • Wow, that's right. Like, this data thing is pretty clear.

  • If you can make a conversation into a two-sided thing instead of a three-sided or four-sided or however-many-sided thing, it becomes much easier to control it, right?

  • If there's two very clearly opposing points of view, black and white, now it's easier to control the narrative, because people get locked into one of those points of view or the other.

  • Using this same data provided by Reddit, Jeremy and the iDRAMA Lab team were able to create a report with fascinating implications.

  • This graph shows weekly troll activity plotted over many years.

  • This yellow line is what we're interested in, because it's Russian troll activity on Reddit. What we can see here is that there was a bunch of activity by those accounts in the second half of 2015, and then the activity seems to go dormant, and then it creeps back up and spikes in the fall of 2016.

  • The initial activity in 2015 seems to be karma farming.

  • They would freeboot content from other places on the Internet and post it for karma on Reddit.

  • This post, for example, looks awfully familiar.

  • And so does this pistol, by the way. Weird.

  • After this long period of what is thought to be karma farming, to give the username more credibility, the troll activity seems to slow down for a while. And then, when the moment is right (in this case, right before the 2016 election), a percentage of these accounts surge into action and try to influence society.

  • So it's pretty clear what's happening.

  • They're creating the accounts, and then they're grooming them to become effective social media weapons, which can then be deployed at the exact moment when they might be the most effective.
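
A crude, hypothetical detector for that "farm, go dormant, surge" shape might scan an account's weekly post counts for a long quiet stretch that follows real activity and ends in a burst. Real detection is far more involved, but the sketch shows the idea:

```python
def flags_sleeper_pattern(weekly_posts, quiet_weeks=8, surge_factor=5):
    """True if the series shows activity, then `quiet_weeks` of silence,
    then a week at least `surge_factor` times the pre-dormancy peak."""
    for start in range(len(weekly_posts) - quiet_weeks - 1):
        window = weekly_posts[start:start + quiet_weeks]
        if any(window):
            continue  # not a fully quiet stretch
        before = weekly_posts[:start]
        after = weekly_posts[start + quiet_weeks]
        if before and after >= surge_factor * max(max(before), 1):
            return True  # dormancy followed by a surge
    return False

# Toy series: karma farming in 2015, dormancy, then a pre-election spike.
series = [4, 6, 5, 3] + [0] * 10 + [40, 55]
print(flags_sleeper_pattern(series))  # -> True
```

Run against every account's timeline, a check like this is one way to surface accounts shaped like the yellow line in Jeremy's graph.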

  • Think about where we are right now.

  • Global pandemic.

  • The early stages... we're coming up on an election.

  • Everybody's kind of tense at the moment.

  • If you had these social media weapons these accounts parked, when would you deploy them?

  • What is a home run for a person or entity that would try to run a misinformation campaign on Reddit?

  • What would be their goal?

  • I think that the most achievable goal is just to cause chaos, cause, uh, polarization, and to keep people from going towards a common goal, to deliberately drive people apart.

  • So you see this type of behavior elsewhere. On Twitter, it's been very well explained that there were state-sponsored actors taking both sides of the same argument, right?

  • They didn't have a particular goal.

  • They just wanted to cause problems.

  • Jeremy showed me something that made me realize we're only scratching the surface with this series.

  • We're looking at the individual platforms here, right?

  • Twitter, Facebook, YouTube, Reddit.

  • They all have their own vulnerabilities.

  • He showed me data and explained that these campaigns were coordinated attacks that spanned the whole Internet.

  • And it appears that Reddit is being used as a way to scale disinformation campaigns from smaller places on the Internet up to more mainstream outlets.

  • The military is aware that this is a big deal.

  • They know it's a big deal, and nobody really has a map of the battlefield.

  • A full map of the battlefield?

  • Yes. Facebook has their, you know, their map of, you know, Facebook-land, and Twitter has their map of Twitter-land.

  • But, um, you know, nobody knows what's going on in all the other parts of the Internet, and it's hard to win a war without a map.

  • I don't know if it's ever been done.

  • "It's hard to win a war without a map." That's pretty good, dude.

  • So what would you say to the people on Reddit right now that are using the platform and starting to question what they see? You know, they're thinking, maybe, why is this kind of stuff at the top?

  • And if they do identify some type of inauthentic or coordinated behavior, like, what do we need to tell them to do?

  • Report it.

  • Definitely. No, really, hit the report button. That's what the report button is for.

  • Okay.

  • Um, uh, we look at reports, we look at anything that you send to us, we do.

  • The more people who are reporting stuff and identifying things that look a little off, that's the thing that really helps us to find it and localize it.

  • So that's a signal; what you see, that's a signal we can use.

  • And, of course, like, you know, the obvious question is, what do we do about people who abuse the report button?

  • Of course people abuse the report button. Of course we know how to deal with that.

  • So I think, you know, the broader the base of people who are reporting stuff, who tell us they think things look wrong, the more signal we have. It goes back to a way to scale up users with more users. Like, you know, we can either have a gigantic police force with, like, a bunch of AI-equipped cameras that watch everybody at all times (what could possibly go wrong?), or we can, you know, go with the neighborhood watch.

  • I would rather live in a society that has a neighborhood watch rather than a bunch of cameras on every street pole.

  • So Chris is talking about this neighborhood watch, but I was very surprised when I looked at the data for myself and realized that Reddit is very proactive when it comes to taking down bad actors on the platform.

  • In the fourth quarter of last year, there were about five and a half million reports of potential content manipulation on Reddit by users.

  • In that same period, Reddit removed almost 31 million pieces of content and sanctioned almost two million accounts for content manipulation.

  • To be clear, that means Reddit has taken down roughly six times more content and accounts than were reported: about 33 million actions against five and a half million reports.

  • If Reddit overreacts, if they get too, ah, authoritarian, if you will, ah, that can push people, um, even further away from what we might consider mainstream or normalcy-type stuff, because then they feel persecuted and stuff like that.

  • So Reddit has a difficult job.

  • Um, you know, maybe they could do a better job, but, uh, you know, I don't envy that job either.

  • So you're pretty happy with what they're doing?

  • Eh, I don't know if I'm happy with it, but I also don't know if I have a better, easily implementable answer.

  • There are definitely worse things they could be doing.

  • So clearly, Reddit admins are proactively working to decrease the influence of bad actors on the platform.

  • I find this interesting, because when I started this study, I was under the impression that Reddit was too hands-off.

  • But I'm starting to understand that it's more of a fine line that they have to walk. Also, and this is more personal:

  • When I first started studying this stuff and I started seeing these disinformation campaigns around me, I think something bad happened to me.

  • My reaction to this new reality was flat wrong.

  • I started to irrationally see trolls everywhere.

  • Oh, that person feels strongly and disagrees with me.

  • Must be a troll.

  • Look, a pot shot at me in the comments.

  • That's a troll.

  • Ignore that person. And it started to feed this us-versus-them narrative from a different angle: the angle where I think anyone that disagrees with me must be a troll.

  • At the risk of confusing you, I'm going to point this out.

  • The troll's first play is to make you hate your online brother.

  • The troll's second play is to make you think your online brother is a troll.

  • If you go through the comment database, you can see trolls meta-joking about the existence of trolls.

  • So what would you say to the normal user of Reddit? The person who goes there to find funny memes, the person that, you know, their kids are sending them links from Reddit?

  • What do you say to that person?

  • Well, I think we don't wanna create a feeling of paranoia.

  • You don't want people to think, like, everybody I talk to on the Internet is a pseudonymous, you know, troll operating out of some other country.

  • But at the same time, I think there's, like, the need for healthy skepticism.

  • If somebody is posting stuff that really makes you feel riled up, or really makes you feel, um, strongly, uh, you know, kind of take the extra two seconds to do the check.

  • You know, where did this come from?

  • And why do I feel compelled to share it?

  • Knowing that trolls appeal to emotion is very important, because they are really good at it. So here's what I'm gonna do.

  • What if I were to assume everyone was real?

  • Hear me out on this.

  • Not like a username I interact with, or a comment I interact with, but like actual people.

  • And I make that primary, and I just go forward on the Internet as if they're real. When I'm more kind, more loving, it seems to be more toxic to trolls.

  • Think back to John's graphs.

  • If I see someone trying to reduce the cognitive complexity of a conversation, I'm going to add nuance and try to expand the conversation. That's toxic to trolls.

  • If I start to see identity attacks in the thread, I'm gonna call it out in a non-toxic way, with kindness and love, which is also toxic to trolls.

  • If I de escalate the rhetoric and try to make it less aggressive and less confrontational, that's toxic to trolls.

  • And plus, this is how I want to interact with people in real life.

  • Reddit has a very difficult job.

  • They've got admins and mods.

  • They have other tools in place to try to thwart the bad guys.

  • But for me, the real battle is with me.

  • Every interaction I have on Reddit is the one I can do something about.

  • And I want to be a good guy, and I want to upvote people that are doing the same.

  • So if I see you out there decreasing the toxicity or trying to expand the conversation, you're gonna have my upvote.

  • All right, I'm gonna do a quick sponsor message on a similar topic: Internet security.

  • And I hope you learn something, whether you want the product or not.

  • This episode of Smarter Every Day is sponsored by ExpressVPN. Using a VPN, a virtual private network, you can create a tunnel between you and the websites you're trying to access and obscure your data.

  • It can do two things: you can change the IP address you operate from, and you can also make people think you're accessing the Internet from a different location.

  • The other day I was on a plane and a guy recognized me from Smarter Every Day, and we struck up a conversation. Turns out he's an expert at tracking people in two-dimensional meatspace and figuring out who they are and what ad to serve them on their phone.

  • Why's my flashlight on?

  • I hate the fact that people are making money based on my physical location online, and there's three ways people can determine your location from your Internet traffic.

  • Number one: they can infer your location based on your behavior online.

  • For example, if you're interacting with a website about the best ice cream places in midtown Manhattan, you're either in midtown Manhattan or chances are you will be soon.

  • The second way people can get your coordinate information on the Internet is if you're just sending it to them.

  • Like sometimes data from your phone transmits out the coordinate information in data packets.

  • If you don't turn your location services off, you could be giving your location away, so turn that junk off.

  • The third thing that can happen is your IP address can show people your information. Or, and I didn't know this was a thing, but it is: your Internet service provider can actually give away your location information to other parties.
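
To see what websites actually observe, here's a small sketch using api.ipify.org, a public service that simply echoes back the address your request arrives from; commercial geolocation databases then map that address to an approximate location:

```python
import json
from urllib.request import urlopen

# Every server you talk to sees the public IP your traffic arrives from.
with urlopen("https://api.ipify.org?format=json", timeout=10) as resp:
    ip = json.load(resp)["ip"]

print(f"Websites see your traffic coming from {ip}")
# With a VPN tunnel up, rerunning this prints the VPN exit server's
# address (say, one in Chicago), not your home connection's address.
```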

  • This is what ExpressVPN can help you with, with one touch of a button.

  • I use ExpressVPN to make the Internet think I'm operating from a different location, in this case, Chicago. Now, after speaking to this advertising expert that tracks people with location data, I am more excited about using this feature to thwart advertisers' efforts to track me in meatspace.

  • If you want to give it a shot, go to expressvpn.com/smarter and see how you can get three months for free.

  • It works on all your devices, even your gaming console, so check it out.

  • expressvpn.com/smarter.

  • I think you'll dig it.

  • I want to say thank you.

  • Thank you for watching this big series that I did on misinformation.

  • We had YouTube, Twitter, Facebook, and now Reddit.

  • There was even a cyber warfare video before that.

  • This was a challenge, and I want to say thank you to the patrons for letting me mentally have the freedom to do this.

  • People that support at patreon.com/smartereveryday, that frees me up.

  • I don't feel like I'm tied to the algorithm.

  • I can just explore the things that I genuinely want to explore, and I thought this was an important topic.

  • If you feel like this series has earned your subscription, I would greatly appreciate that. There's even a little notification bell; if you click that, you'll be notified when I upload.

  • But if not, no big deal.

  • If you want to discuss this, we'll be doing it over at r/SmarterEveryDay.

  • I'll leave links down below for all the references that we talked about.

  • And other than that, I'm Destin. You're getting smarter every day.

  • Thank you for the support.

  • Have a good one.

  • Bye.

