
Translator: Val Zhang
Reviewer: Xiaowei Dong

Chris Anderson: What worries you right now? You've been very open about lots of issues on Twitter. What would be your top worry about where things are right now?

Jack Dorsey: Right now, the health of the conversation. So, our purpose is to serve the public conversation, and we have seen a number of attacks on it. We've seen abuse, we've seen harassment, we've seen manipulation, automation, human coordination, misinformation.

So these are all dynamics that we were not expecting 13 years ago when we were starting the company. But we do now see them at scale, and what worries me most is just our ability to address it in a systemic way that is scalable, that has a rigorous understanding of how we're taking action, a transparent understanding of how we're taking action and a rigorous appeals process for when we're wrong, because we will be wrong.

Whitney Pennington Rodgers: I'm really glad to hear that that's something that concerns you, because I think there's been a lot written about people who feel they've been abused and harassed on Twitter, and I think no one more so than women and women of color and black women. And there's been data that's come out -- Amnesty International put out a report a few months ago showing that, among a subset of active black female Twitter users, on average one in 10 of their tweets were some form of harassment.

And so when you think about health for the community on Twitter, I'm interested to hear, "health for everyone," but specifically: How are you looking to make Twitter a safe space for that subset, for women, for women of color and black women?

JD: Yeah. So it's a pretty terrible situation when you're coming to a service that, ideally, you want to learn something about the world, and you spend the majority of your time reporting abuse, receiving abuse, receiving harassment.

So what we're looking most deeply at is just the incentives that the platform naturally provides and the service provides. Right now, the dynamic of the system makes it super-easy to harass and to abuse others through the service, and unfortunately, the majority of our system in the past worked entirely based on people reporting harassment and abuse. So about midway last year, we decided that we were going to apply a lot more machine learning, a lot more deep learning to the problem, and try to be a lot more proactive around where abuse is happening, so that we can take the burden off the victim completely. And we've made some progress recently. About 38 percent of abusive tweets are now proactively identified by machine learning algorithms so that people don't actually have to report them. But those that are identified are still reviewed by humans, so we do not take down content or accounts without a human actually reviewing it. But that was from zero percent just a year ago. So that meant, at that zero percent, every single person who received abuse had to actually report it, which was a lot of work for them, a lot of work for us and just ultimately unfair.

The other thing that we're doing is making sure that we, as a company, have representation of all the communities that we're trying to serve. We can't build a business that is successful unless we have a diversity of perspective inside of our walls that actually feel these issues every single day. And that's not just with the team that's doing the work, it's also within our leadership as well. So we need to continue to build empathy for what people are experiencing and give them better tools to act on it and also give our customers a much better and easier approach to handle some of the things that they're seeing.

So a lot of what we're doing is around technology, but we're also looking at the incentives on the service: What does Twitter incentivize you to do when you first open it up? And in the past, it's incented a lot of outrage, it's incented a lot of mob behavior, it's incented a lot of group harassment. And we have to look a lot deeper at some of the fundamentals of what the service is doing to make the bigger shifts. We can make a bunch of small shifts around technology, as I just described, but ultimately, we have to look deeply at the dynamics in the network itself, and that's what we're doing.

CA: But what's your sense -- what is the kind of thing that you might be able to change that would actually fundamentally shift behavior?

JD: Well, one of the things -- we started the service with this concept of following an account, as an example, and I don't believe that's why people actually come to Twitter. I believe Twitter is best as an interest-based network. People come with a particular interest. They have to do a ton of work to find and follow the related accounts around those interests.

What we could do instead is allow you to follow an interest, follow a hashtag, follow a trend, follow a community, which gives us the opportunity to show all of the accounts, all the topics, all the moments, all the hashtags that are associated with that particular topic and interest, which really opens up the perspective that you see. But that is a huge fundamental shift to bias the entire network away from just an account bias towards a topics and interest bias.

CA: Because isn't it the case that one reason why you have so much content on there is a result of putting millions of people around the world in this kind of gladiatorial contest with each other for followers, for attention? Like, from the point of view of people who just read Twitter, that's not an issue, but for the people who actually create it, everyone's out there saying, "You know, I wish I had a few more 'likes,' followers, retweets." And so they're constantly experimenting, trying to find the path to do that. And what we've all discovered is that the number one path to do that is to be some form of provocative, obnoxious, eloquently obnoxious, like, eloquent insults are a dream on Twitter, where you rapidly pile up -- and it becomes this self-fueling process of driving outrage. How do you defuse that?

JD: Yeah, I mean, I think you're spot on, but that goes back to the incentives. Like, one of the choices we made in the early days was we had this number that showed how many people follow you. We decided that number should be big and bold, and anything that's on the page that's big and bold has importance, and those are the things that you want to drive. Was that the right decision at the time? Probably not.

If I had to start the service again, I would not emphasize the follower count as much. I would not emphasize the "like" count as much. I don't think I would even create "like" in the first place, because it doesn't actually push what we believe now to be the most important thing, which is healthy contribution back to the network and conversation to the network, participation within conversation, learning something from the conversation. Those are not things that we thought of 13 years ago, and we believe are extremely important right now.

So we have to look at how we display the follower count, how we display retweet count, how we display "likes," and just ask the deep question: Is this really the number that we want people to drive up? Is this the thing that, when you open Twitter, you see, "That's the thing I need to increase?" And I don't believe that's the case right now.

(Applause)

WPR: I think we should look at some of the tweets that are coming in from the audience as well.

CA: Let's see what you guys are asking. I mean, this is -- generally, one of the amazing things about Twitter is how you can use it for crowd wisdom, you know, that more knowledge, more questions, more points of view than you can imagine, and sometimes, many of them are really healthy.

WPR: I think one I saw that passed already quickly down here, "What's Twitter's plan to combat foreign meddling in the 2020 US election?" I think that's something that's an issue we're seeing on the internet in general, that we have a lot of malicious automated activity happening. And on Twitter, for example, in fact, we have some work that's come from our friends at Zignal Labs, and maybe we can even see that to give us an example of what exactly I'm talking about, where you have these bots, if you will, or coordinated automated malicious account activity, that is being used to influence things like elections.

And in this example we have from Zignal which they've shared with us using the data that they have from Twitter, you actually see that in this case, white represents the humans -- human accounts, each dot is an account. The pinker it is, the more automated the activity is. And you can see how you have a few humans interacting with bots. In this case, it's related to the election in Israel and spreading misinformation about Benny Gantz, and as we know, in the end, that was an election that Netanyahu won by a slim margin, and that may have been in some case influenced by this.

And when you think about that happening on Twitter, what are the things that you're doing, specifically, to ensure you don't have misinformation like this spreading in this way, influencing people in ways that could affect democracy?

JD: Just to back up a bit, we asked ourselves a question: Can we actually measure the health of a conversation, and what does that mean? And in the same way that you have indicators and we have indicators as humans in terms of are we healthy or not, such as temperature, the flushness of your face, we believe that we could find the indicators of conversational health. And we worked with a lab called Cortico at MIT to propose four starter indicators that we believe we could ultimately measure on the system.

And the first one is what we're calling shared attention. It's a measure of how much of the conversation is attentive on the same topic versus disparate. The second one is called shared reality, and this is what percentage of the conversation shares the same facts -- not whether those facts are truthful or not, but are we sharing the same facts as we converse? The third is receptivity: How much of the conversation is receptive or civil or the inverse, toxic? And then the fourth is variety of perspective. So, are we seeing filter bubbles or echo chambers, or are we actually getting a variety of opinions within the conversation? And implicit in all four of these is the understanding that, as they increase, the conversation gets healthier and healthier.

So our first step is to see if we can measure these online, which we believe we can. We have the most momentum around receptivity. We have a toxicity score, a toxicity model, on our system that can actually measure whether you are likely to walk away from a conversation that you're having on Twitter because you feel it's toxic, with some pretty high degree. We're working to measure the rest, and the next step is, as we build up solutions, to watch how these measurements trend over time and continue to experiment. And our goal is to make sure that these are balanced, because if you increase one, you might decrease another. If you increase variety of perspective, you might actually decrease shared reality.

CA: Just picking up on some of the questions flooding in here.

JD: Constant questioning.

CA: A lot of people are puzzled why, like, how hard is it to get the speech of extreme racists off of Twitter?

JD: (Laughs) We have policies around violent extremist groups, and the majority of our work and our terms of service