"We're entering an era in which our enemies can make anyone say anything at any point in time."

Jordan Peele created this fake video of President Obama to demonstrate how easy it is to put words in someone else's mouth.

"Moving forward, we need to be more vigilant about what we trust from the internet."

Not everyone bought it, but the technology behind it is rapidly improving, even as worries about its potential for harm increase.

This is your Bloomberg QuickTake on fake videos.

Deep fakes, or realistic-looking fake videos and audio, gained popularity as a means of adding famous actresses into porn scenes. Despite bans on major websites, they remain easy to make and find.

They're named for the deep-learning AI algorithms that make them possible. Input real audio or video of a specific person (the more, the better) and the software tries to recognize patterns in speech and movement. Introduce a new element, like someone else's face or voice, and a deep fake is born.

Jeremy Kahn: "It's actually extremely easy to make one of these things. There were just some breakthroughs from academic researchers who work with this particular kind of machine learning in the past few weeks, which would drastically reduce the amount of video you actually need to create one of these."

Programs like FakeApp, the most popular one for making deep fakes, need dozens of hours of human assistance to create a video that looks like this rather than this, but that's changing.
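The "learn one person's patterns, then swap in another face" workflow described above is commonly built on a shared-encoder, per-identity-decoder autoencoder. The toy sketch below is purely illustrative (it is not FakeApp's code, and all names, sizes, and weights are made up): one encoder learns features shared by both faces, each decoder reconstructs one specific face, and the swap happens by routing face A's encoding through face B's decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": flattened 8x8 grayscale faces (dim=64), a 16-d latent code.
dim, latent = 64, 16

# One shared encoder, plus one decoder per identity. In a real deep fake
# these would be trained convolutional networks; here they are just
# random linear maps standing in for the learned weights.
W_enc = rng.standard_normal((latent, dim)) * 0.1
W_dec_a = rng.standard_normal((dim, latent)) * 0.1
W_dec_b = rng.standard_normal((dim, latent)) * 0.1

def encode(x):
    # Shared features: pose, expression, lighting, common to both faces.
    return np.tanh(W_enc @ x)

def decode(z, W_dec):
    # Identity-specific reconstruction from the shared features.
    return W_dec @ z

# Training (not shown) would reconstruct face A through W_dec_a and
# face B through W_dec_b. The inference-time swap is then:
face_a = rng.random(dim)                    # a frame of person A
swapped = decode(encode(face_a), W_dec_b)   # A's expression, B's face
print(swapped.shape)  # (64,)
```

This also explains the "the more, the better" remark: both decoders train against the same encoder, so more footage of each person gives the shared encoder more pose and expression patterns to transfer.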
In September, researchers at Carnegie Mellon revealed unsupervised software that accurately reproduced not just facial features but changing weather patterns and flowers in bloom as well.

But with increasing capability comes increasing concern.

"You know, this is potentially kind of fake news on steroids. We do not know of a case yet where someone has tried to use this to perpetrate a kind of fraud or an information-warfare campaign, or, for that matter, to really damage someone's reputation, but it's the danger that everyone is really afraid of."

In a world where fakes are easy to create, authenticity also becomes easier to deny. People caught doing genuinely objectionable things could claim the evidence against them is bogus.

Fake videos are also difficult to detect, though researchers, and the US Department of Defense in particular, have said they're working on ways to counter them.

Deep fakes do, however, have some positive potential: take CereProc, which creates digital voices for people who lose theirs to disease.

There are also applications that could be considered more value-neutral, like the many, many deep fakes that exist solely to turn as many movies as possible into Nicolas Cage movies.
It's Getting Harder to Spot a Deep Fake Video
Published by Priscilla on October 26, 2018