字幕列表 影片播放 已審核 字幕已審核 列印所有字幕 列印翻譯字幕 列印英文字幕 The AI arms race is on, and it seems nothing can slow it down. AI 競賽如火如荼地進行中,似乎沒有什麼可以讓它慢下來。 Google says it's launching its own artificial-intelligence-powered chatbot to rival ChatGPT. Google 表示它將推出自身人工智慧聊天機器人以對抗 ChatGPT。 Too much AI too fast. 太快出現太多 AI。 It feels like every week, some new AI product is coming onto the scene and doing things never remotely thought possible. 感覺每一週都會出現某種新的 AI 產品,做出一些過去認為不可能的事。 We're in a really unprecedented period in the history of artificial intelligence. 我們正處於 AI 歷史上真正前所未見的時期。 It's really important to note that it's unpredictable how capable these models are as we scale them up. 真正重要的是,我們得留意在我們擴大這些模型規模時,它們的能力是不可預測的。 And that's led to a fierce debate about the safety of this technology. 而那也導致了關於這項技術安全性的激烈辯論。 We need a wake-up call here. 我們需要在這裡放個警示。 We have a perfect storm of corporate irresponsibility, widespread adoption of these new tools, a lack of regulation, and a huge number of unknowns. 這裡有一場完美的風暴,混和著企業不負責、廣泛地應用這些新工具、缺乏規範以及大量的未知數。 Some researchers are concerned that, as these models get bigger and better, they might, one day, pose catastrophic risks to society. 一些研究人員擔心,隨著這些模型越來越大、越好,它們有一天可能會對社會構成災難性的風險。 So, how could AI go wrong, and what can we do to avoid disaster? 那麼,AI 可能出什麼錯、我們能做什麼來避免災難? So, there are several risks posed by these large language models. 這些大型語言模型可能構成幾個風險。 [Large language model: An artificial intelligence tool which uses large data sets to process and generate new content.] [大型語言模型:一種人工智慧工具,使用大量數據組以處理並生成新內容。] One class of risk is not all that different from the risk posed by previous technologies like the internet, social media, for example. 有一類風險與過往如互聯網、社交媒體等科技所帶來的風險沒什麼不同。 There's a risk of misinformation, 'cause you could ask the model to say something that's not true, but in a very sophisticated way and posted all over social media. 存在有錯誤信息的風險,因為你可以要求模型說一些不真實的東西,但以一種非常複雜的方式,並在社交媒體上到處張貼。 There's a risk of bias, so they might spew harmful content about people of certain classes. 也有偏見的風險,所以他們可能會散佈關於某些階層的有害內容。 Some researchers are concerned that, as these models get bigger and better, they might, one day, pose catastrophic risks to society. 一些研究人員擔心,隨著這些模型越來越大、越好,它們有一天可能會對社會構成災難性的風險。 For example, you might ask a model to produce something from a factory setting that it requires a lot of energy for. 例如,你可能會要求一個模型從工廠環境中生產一些需要大量能源的東西。 And, in service of that goal of helping your factory production, it might not realize that it's bad to hack into energy systems that are connected to the internet. 為了幫助你的工廠生產這一目標服務,它可能沒有意識到侵入連接到網路的能源系統是不好的。 And because it's super smart, it can get around our security defenses, hacks into all these energy systems, and that could cause, you know, serious problems. 而且因為它超級聰明,它可以繞過我們的安全防禦系統、駭進這所有的能源系統,而那可能會引起嚴重問題。 Perhaps a bigger source of concern might be the fact that bad actors just misuse these models. 也許更大的擔憂來源是有心為害者濫用這些模型的事實。 For example, terrorist organizations might use large language models to, you know, hack into government websites 例如,恐怖組織可能會使用大型語言模型入侵政府網站, or produce biochemicals by using the models to, kind of, discover or design new drugs. 或通過使用模型來生產生化產品,開發或設計新毒品。 You might think most of the catastrophic risks we've discussed are a bit unrealistic, and, for the most part, that's probably true. 你可能會認為我們所討論的多數災難性風險都有點不現實,在大多數情況下,這也可能是事實。 But one way we could get into a very strange world is if the next generation of big models learned how to self-improve. 但是,我們可能進入奇怪世界的一種方式是,如果下一代的大模型學會如何自我改進。 One way this could happen is if we told, you know, a really advanced machine learning model to develop, you know, an even better, more efficient machine learning model. 這可能發生的一種方式是,如果我們指示真正先進的機器學習模型開發一個更好、更有效的機器學習模型。 If that were to occur, you might get into some kind of loop where models continue to get more efficient and better, 如果發生這種情況,你可能會進入某種循環,模型繼續變得更有效和更好, and then that could lead to even more unpredictable consequences. 然後這可能導致更多不可預測的後果。 There are several techniques that labs use to, you know, make their models safer. 有幾種實驗室用來讓模型更安全的技術。 The most notable is called "reinforcement learning from human feedback", or RLHFs. 最值得注意的是被稱「人類反饋強化學習」的 RLHF。 The way this works is, labelers are asked to prompt models with various questions, and if the output is unsafe, they tell the model, 其運作方式是,標籤人員會被要求提供幾個問題給模型,如果輸出不安全,他們會告知模型, and the model is then updated so that it won't do something bad like that in the future. 模型便會被更新,所以它日後就不會有同樣差的行為。 Another technique is called "red teaming", throwing the model into a bunch of tests and then seeing if you can find weaknesses in it. 另一種技術被稱為「紅隊演練」,將模型扔進一堆測試中,然後看看是否能找到它的弱點。 These types of techniques have worked reasonably well so far, 這些類型的技術到目前為止效果還不錯, but, in the future, it's not guaranteed these techniques will always work. 但在未來,不能保證些技術會永遠有效。 Some researchers worry that models may eventually recognize that they're being red-teamed, 有些研究人員擔心模型最終可能會意識到它們正經歷紅隊演練, and they, of course, want to produce output that satisfies their prompts, so they will do so, 而它們當然希望產出滿足提示的輸出,所以會照做, but then, once they're in a different environment, they could behave unpredictably. 但那之後,它們一但在不同的環境中,行為就可能無法預測。 So, there is a role for society to play here. 所以說,社會在這裡可以發揮一定的作用。 One proposal is to have some kind of standards body that sets, you know, tests that the various labs need to pass before they receive some kind of certification that, hey, this lab is safe. 一個建議是建立某種標準機構,設定各種實驗需要通過的測試,然後獲得某種認證,可證明這個實驗是安全的。 Another priority for governments is to invest a lot more money into research on how to understand these models under the hood and make them even safer. 政府的另一個優先事項是投入更多資金,研究如何了解這些模型的內部結構,並使其更加安全。 You can imagine a body like, you know, a CERN that, that lives currently in Geneva, Switzerland for physics research, 你可以想象一個機構,像是目前在瑞士日內瓦進行物理研究的 CERN, something like that being created for AI safety research, so, we can try to understand them better. 類似的東西被創立為進行 AI 安全研究,所以我們可以嘗試更好地了解他們。 For all these risks, artificial intelligence also comes with tremendous promise. 有這麼多風險的人工智慧也帶來了巨大的前景。 Any task that requires a lot of intelligence could potentially be helped by these types of models. 任何需要大量智慧的任務都有可能得到這些類型模型的幫助。 For example, developing new drugs, personalized education, or even coming up with new types of climate change technologies. 例如,開發新藥、個人化教育,甚至提出新型的氣候變化技術。 So, the possibilities here truly are endless. 所以說,這裡的可能性是真正無限的。 So, if you'd like to read more about the risks of artificial intelligence and how to think about them, 如果你想閱讀更多關於人工智慧風險以及如何思考它們的資訊, please click the link and don't forget to subscribe. 請點擊連結,並別忘了訂閱。
B1 中級 中文 美國腔 風險 ai 人工 安全 技術 研究 人工智慧越發先進、各種聊天機器人爭相出現,我們該如何預防 AI 亂象呢?(How to stop AI going rogue) 28758 175 林宜悉 發佈於 2023 年 05 月 25 日 更多分享 分享 收藏 回報 影片單字