Subtitles auto-generated by AI
  • The algorithm discovered that the easiest way to grab people's attention and keep them glued to the screen is by pressing the greed or hate or fear button in our minds.

  • The people who manage the social media companies, they are not evil, they just really didn't foresee.

  • This is the problem.

  • I mean, because we don't know if they really have consciousness or they're only very, very good at mimicking consciousness.

  • In the Hollywood scenario, you have the killer robots shooting people.

  • In real life, it's the humans pulling the trigger, but the AI is choosing the targets.

  • I think maybe the most important thing is really to understand what AI is, because now there is so much hype around AI that it's becoming difficult for people to understand what AI is.

  • Now everything is AI.

  • So, you know, your coffee machine is now an AI coffee machine and your shoes are AI shoes.

  • And what is AI?

  • You know, the key thing to understand is that AIs are able to learn and change by themselves, to make decisions by themselves, to invent new ideas by themselves.

  • If a machine cannot do that, it's not really an AI. And a real AI is therefore, by definition, something whose development and evolution we cannot predict, for good or for bad.

  • It can invent medicines and treatments we never thought about, but it can also invent weapons and dangerous strategies that go beyond our imagination.

  • You characterize AI not as artificial intelligence but as alien intelligence.

  • You give it a different term.

  • Can you explain the difference there, and why you've landed on that word?

  • Traditionally the acronym AI stood for artificial intelligence, but with every passing year AI becomes less artificial and more alien.

  • Alien not in the sense that it's coming from outer space; it's not.

  • We create it.

  • But alien in the sense that it analyzes information, makes decisions, invents new things in a fundamentally different way than human beings.

  • And artificial is from artifact.

  • It gives us the impression that this is an artifact that we control, and this is misleading, because yes, we design the baby AIs.

  • We gave them the ability to learn and change by themselves, and then we release them to the world, and they do things that are not under our control, that are unpredictable.

  • So in this sense they are alien, not in the sense that they came from Mars.

  • I said earlier that AIs can make decisions, that they are not just tools in our hands; they are agents creating new realities.

  • So you may think, okay, this is a prophecy for the future, a prediction about the future, but it's already in the past, because even though social media algorithms are very, very primitive AIs, the first generation of AIs, they still reshaped the world with the decisions they made.

  • In social media, Facebook, Twitter, TikTok, all that, the one that makes the decision what you will see at the top of your news feed, or the next video that you'll be recommended, it's not a human being sitting there making these decisions.

  • It's an AI, it's an algorithm.

  • And these algorithms were given a relatively simple and seemingly benign goal by the corporations.

  • The goal was to increase user engagement, which means, in simple English, make people spend more time on the platform, because the more time people spend on TikTok or Facebook or Twitter or whatever, the more money the company makes.

  • It sells more advertisements, it harvests more data that it can then sell to other companies. This is the goal of the algorithm.

  • Now, engagement sounds like a good thing; who doesn't want to be engaged?

  • But the algorithms then experimented on billions of human guinea pigs and discovered something, which was of course discovered even earlier by humans, but now the algorithms discovered it.

  • The algorithms discovered that the easiest way to increase user engagement, the easiest way to grab people's attention and keep them glued to the screen, is by pressing the greed or hate or fear button in our minds.

  • You show us some hate-filled conspiracy theory and we become very angry, we want to see more, we tell all our friends about it, and user engagement goes up.

  • And this is what they did over the last 10 or 15 years.

  • They flooded the world with hate and greed and fear, which is why the conversation is breaking down.
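The dynamic described above, an algorithm experimenting at scale and converging on whatever maximizes engagement, can be sketched as a toy epsilon-greedy bandit. Everything here is an assumption made for illustration: the content categories and their engagement probabilities are invented, not data from any real platform.

```python
import random

# Invented engagement probabilities per content type (illustrative only).
ENGAGEMENT_PROB = {"cat_videos": 0.30, "news": 0.25, "outrage": 0.55}

def run(steps=20000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit whose only objective is engagement."""
    rng = random.Random(seed)
    counts = {arm: 0 for arm in ENGAGEMENT_PROB}
    rewards = {arm: 0.0 for arm in ENGAGEMENT_PROB}
    for _ in range(steps):
        if rng.random() < epsilon:
            # explore: show a random content type
            arm = rng.choice(list(ENGAGEMENT_PROB))
        else:
            # exploit: show whatever has the best observed engagement so far
            arm = max(counts, key=lambda a: rewards[a] / counts[a]
                      if counts[a] else float("inf"))
        counts[arm] += 1
        rewards[arm] += rng.random() < ENGAGEMENT_PROB[arm]  # 1 if user engaged
    return counts

counts = run()
print(max(counts, key=counts.get))
```

Nothing in the objective mentions outrage; the bandit simply discovers, by experiment, that the "outrage" arm keeps people engaged most often and ends up showing it most of the time.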

  • These are kind of unintended consequences.

  • Like, the people who manage the social media companies, they are not evil; they didn't set out to destroy democracy or to flood the world with hate and so forth.

  • They just really didn't foresee that when they gave the algorithm the goal of increasing user engagement, the algorithm would start to spread hate.

  • But initially, when they started this whole ball rolling, they really didn't know.

  • And this is just kind of a warning: look what happens with even very primitive AIs.

  • And the AIs of today, which are far more sophisticated than in 2016, they too are still just at the very early stages of the AI evolutionary process.

  • We can think about it like the evolution of animals.

  • Until you get to humans, you have 4 billion years of evolution.

  • You start with microorganisms like amoebas, and it took billions of years of evolution to get to dinosaurs and mammals and humans.

  • Now, AIs are at present at the beginning of a parallel process. ChatGPT and so forth, they are the amoebas of the AI world.

  • But AI evolution is not organic.

  • It's inorganic, it's digital, and it's billions of times faster.

  • So while it took billions of years to get from amoebas to dinosaurs, it might take just 10 or 20 years to get from the AI amoebas of today to AI T-Rex.

  • Consumers, like all of us, are being lured into trusting something so powerful that we can't comprehend it, and we are ill-equipped to cast our gaze into the future and imagine where this is leading us.

  • Absolutely.

  • I mean, part of it is that there is enormous positive potential in AI.

  • It's not like it's all doom and gloom.

  • There is really enormous positive potential if you think about the implications for healthcare: AI doctors available 24 hours a day that know our entire medical history, have read every medical paper ever published, and can tailor their advice and treatment to our specific life history, our blood pressure, our genetics.

  • It can be the biggest revolution in healthcare ever. Or think about self-driving vehicles.

  • Every year, more than a million people die all over the world in car accidents.

  • Most of them are caused by human error, like people drinking and then driving, or falling asleep at the wheel, or whatever.

  • Self-driving vehicles are likely to save about a million lives every year.

  • This is amazing.

  • Or think about climate change.

  • So yes, developing the AIs will consume a lot of energy, but they could also find new sources of energy, new ways to harness energy, that could be our best shot at preventing ecological collapse.

  • So there is enormous positive potential.

  • We shouldn't deny that.

  • We should be aware of it.

  • And on the other hand, it's very difficult to appreciate the dangers, because the dangers, again, are kind of alien.

  • Like, if you think about nuclear energy, yes, it also had positive potential, cheap nuclear energy, but people had a very good grasp of the danger: nuclear war.

  • Anybody can understand the danger of that.

  • With AI, it's much more complex, because the danger is not straightforward.

  • I mean, we've seen the Hollywood science fiction scenarios of the big robot rebellion, that one day a big computer or the AI decides to take over the world and kill us or enslave us.

  • And this is extremely unlikely to happen anytime soon, because the AIs are still a kind of very narrow intelligence.

  • Like, an AI that can summarize a book doesn't know how to act in the physical world outside.

  • You have AIs that can fold proteins.

  • You have AIs that can play chess, but we don't have this kind of general AI that can just find its way in the world. So it's hard to understand.

  • So what's so dangerous about something which is so narrow in its abilities?

  • And I would say that the danger doesn't come from the big robot rebellion.

  • It comes from the AI bureaucracies.

  • Already today, and more and more, we will have not one big AI trying to take over the world.

  • We will have millions and billions of AIs constantly making decisions about us everywhere.

  • You apply to a bank to get a loan; it's an AI deciding whether to give you a loan.

  • You apply to get a job; it's an AI deciding whether to give you a job.

  • You're in court, or you're found guilty of some crime; the AI will decide whether you go to prison for six months or three years or whatever.

  • Even in armies, we already see now, in the war in Gaza and the war in Ukraine, AIs make the decisions about what to bomb.

  • And in the Hollywood scenario, you have the killer robots shooting people.

  • In real life, it's the humans pulling the trigger, but the AI is choosing the targets.

  • I start thinking about this bias I have around the originality of human thought and emotion, and this kind of assumption that AI will never be able to fully mimic the human experience, right?

  • There's something indelible about what it means to be human that the machines will never be able to fully replicate.

  • And when you talk about information, the purpose of information being to create connection, a big piece there is intimacy, like intimacy between human beings.

  • So information is meant to create connection, but now we have so much information and we're feeling very disconnected.

  • So there's something broken in this system.

  • And I think it's driving this loneliness epidemic, but on the other side, it's making us value intimacy maybe a little bit more than we did previously.

  • And so I'm curious about where intimacy fits into this post-human world in which culture is being dictated by machines.

  • I mean, human beings are wired for that kind of intimacy.

  • And I think our radar, our ability to identify it when we see it, is part of what makes us human to begin with.

  • Maybe the most important part.

  • I think the key distinction here, which is often lost, is the distinction between intelligence and consciousness.

  • Intelligence is the ability to pursue goals and to overcome problems and obstacles on the way to the goal.

  • The goal could be a self-driving vehicle trying to get from here to San Francisco.

  • The goal could be increasing user engagement.

  • And an intelligent agent knows how to overcome the problems on the way to the goal.

  • This is intelligence.
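Goal pursuit of this kind, reaching a destination while steering around obstacles, can be illustrated with a minimal breadth-first search on a toy grid. The grid, the obstacle layout, and the function names are all invented for the sketch; no feelings are involved, only a goal and a search.

```python
from collections import deque

# Toy map: 'S' start, 'G' goal, '#' obstacles the agent must route around.
GRID = [
    "S..#.",
    ".#.#.",
    ".#...",
    "...#G",
]

def find(ch):
    """Locate a character on the grid as a (row, col) pair."""
    for r, row in enumerate(GRID):
        if ch in row:
            return r, row.index(ch)

def shortest_path_len(start, goal):
    """Breadth-first search: returns the fewest moves from start to goal."""
    rows, cols = len(GRID), len(GRID[0])
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and GRID[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # no route exists

print(shortest_path_len(find("S"), find("G")))  # -> 7
```

The agent is "intelligent" in exactly the narrow sense described: it overcomes the obstacles between it and the goal, and nothing more.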

  • And this is something that AI is definitely acquiring.

  • In at least certain fields, AI is now much more intelligent than us.

  • Like in playing chess: much more intelligent than human beings.

  • But consciousness is a different thing than intelligence.

  • Consciousness is the ability to feel things: pain, pleasure, love, hate.

  • When the AI wins a game of chess, it's not happy. And if there is a tense moment in the game, when it's not clear who is going to win, the AI is not tense.

  • It's only the human player who is tense or frightened or anxious.

  • The AI doesn't feel anything.

  • Now there is a big confusion, because in humans, and also in other mammals, in other animals, in dogs and pigs and horses and whatever, intelligence and consciousness go together.

  • We solve problems based on our feelings.

  • Our feelings are not some kind of evolutionary decoration.

  • Feelings are the core system through which mammals make decisions and solve problems.

  • So we tend to think that consciousness and intelligence must go together.

  • And in all these science fiction movies, you see that as the computer or robot becomes more intelligent, then at some point it also gains consciousness.

  • It falls in love with the human or whatever.

  • And we have no reason to think like that.

  • Yeah.

  • Consciousness is not a mere extrapolation of intelligence.

  • Absolutely not.

  • It's a qualitatively different thing.

  • Yeah.

  • And again, if you think in terms of evolution: yes, the evolution of mammals took a certain path, a certain road, in which you develop intelligence based on consciousness.

  • But so far, what we see is that computers took a different route.

  • Their road develops intelligence without consciousness.

  • I mean, computers have been developing, you know, for 60, 70 years now.

  • They are now very intelligent, at least in some fields, and still have zero consciousness.

  • Now, this could continue indefinitely.

  • Maybe they are just on a different path.

  • Maybe eventually they will be far more intelligent than us in everything and will still have zero consciousness, will not feel pain or pleasure or love or hate.

  • Now, what adds to the problem is that there is nevertheless a very strong commercial and political incentive to develop AIs that mimic feelings, to develop AIs that can create intimate relations with human beings, that can cause human beings to be emotionally attached to the AIs.

  • Even if the AIs have no feelings of their own, they could be trained, they are already trained, to make us feel that they have feelings, and to start developing relationships with them.

  • Why is there such an incentive?

  • Because of the intimacy that the human can have with them.

  • That intimacy is not a liability.

  • It's not something bad, that, oh, I need this.

  • No, it's the greatest thing in the world.

  • But it's also potentially the most powerful weapon in the world.

  • If you want to convince somebody to buy a product, if you want to convince somebody to vote for a certain politician or party, intimacy is like the ultimate weapon.

  • Now it is technically possible to mass-produce intimacy.

  • You can create all these AIs that will interact with us, and they will understand our feelings, because even feelings are also patterns.

  • You can predict a person's feelings by watching them for weeks and months and learning their patterns and facial expressions and tone of voice and so forth.

  • And then, if it's in the wrong hands, it could be used to manipulate us like never before.

  • Sure, it's our ultimate vulnerability.

  • This beautiful thing that makes us human becomes this great weakness that we have, because as these AIs continue to self-iterate, their capacity to mimic consciousness and human intimacy will reach such a degree of fidelity that it will be indistinguishable to the human brain, and then humans become like these unbelievably easy-to-hack machines who can be directed wherever the AI chooses to direct them.
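The claim above that "feelings are patterns" that can be learned from observation can be sketched with a toy nearest-centroid rule. Everything here is an assumption for illustration: the two signals (typing speed, pause length), the numbers, and the mood labels are entirely made up.

```python
# Invented observations: (typing_speed, pause_seconds) labelled with a mood.
OBSERVATIONS = [
    ((9.0, 5.0), "angry"),
    ((8.5, 6.0), "angry"),
    ((3.0, 25.0), "calm"),
    ((2.5, 30.0), "calm"),
]

def centroids(data):
    """Average the signal vectors per mood label."""
    sums = {}
    for (x, y), label in data:
        sx, sy, n = sums.get(label, (0.0, 0.0, 0))
        sums[label] = (sx + x, sy + y, n + 1)
    return {lab: (sx / n, sy / n) for lab, (sx, sy, n) in sums.items()}

def predict(point, cents):
    """Guess the mood whose centroid is closest to the observed signals."""
    return min(cents, key=lambda lab: (point[0] - cents[lab][0]) ** 2
               + (point[1] - cents[lab][1]) ** 2)

cents = centroids(OBSERVATIONS)
print(predict((8.0, 7.0), cents))  # -> angry
```

The point of the sketch is only that a system with no feelings of its own can still predict feelings, because it is matching patterns, not experiencing anything.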

  • Yeah, it's not a prophecy.

  • We can take actions today to prevent this.

  • We can have regulations about it.

  • We can, for instance, have a regulation that AIs are welcome to interact with humans, but on condition that they disclose that they are AIs.

  • If you talk with an AI doctor, that's good, but the AI should not pretend to be a human being.

  • You should know: I'm talking with an AI.

  • I mean, it's not that there is no possibility that AI will develop consciousness.

  • We don't know.

  • I mean, it could be that AIs will really develop consciousness.

  • But does it matter, if it's mimicking it to such a degree of fidelity?

  • In terms of how human beings interact with it, does it even matter?

  • For the human beings, no.

  • I mean, this is the problem.

  • I mean, because we don't know if they really have consciousness or they're only very, very good at mimicking consciousness.

  • So the key question is ultimately political and ethical.

  • If they have consciousness, if they can feel pain and pleasure and love and hate, this means that they are ethical and political subjects.

  • They have rights: you should not inflict pain on an AI the same way you should not inflict pain on a human being.

  • Now, the other thing is, it's very difficult to understand what is happening.

  • If we want humans around the world to cooperate on this, to build guardrails, to regulate the development of AI, first of all, you need humans to understand what is happening.

  • Secondly, you need the humans to trust each other.

  • And most people around the world are still not aware of what is happening on the AI front.

  • You have a very small number of people in just a few countries, mostly the U.S. and China and a few others, who understand.

  • Most people in Brazil, in Nigeria, in India, they don't understand.

  • And this is very dangerous, because it means that a few people, many of whom were not even elected by U.S. citizens, and who are just, you know, private companies, will make the most important decisions.

  • And the even bigger problem is that even if people start to understand, they don't trust each other.

  • Like, I had the opportunity to talk to some of the people who are leading the AI revolution.

  • And you meet with these entrepreneurs and business tycoons and politicians, in the U.S., in China, in Europe, and they all tell you the same thing, basically.

  • They all say: we know that this thing is very, very dangerous, but we can't trust the other humans.

  • If we slow down, how do we know that our competitors will also slow down?

  • Whether our business competitors, let's say here in the U.S., or our Chinese competitors across the ocean.

  • And you go and talk with the competitors, and they say the same thing.

  • We know it's dangerous.

  • We would like to slow down, to give us more time to understand, to assess the dangers, to debate regulations, but we can't.

  • We have to rush even faster, because we can't trust the other corporation, the other country.

  • And if they get it before we get it, it will be a disaster.
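The "we can't slow down because they won't" logic described above has the shape of a prisoner's dilemma, which can be made concrete with a small payoff table. The numbers are invented for the illustration; only their ordering matters.

```python
# Invented payoffs (ours, theirs) for each pair of (our move, their move).
PAYOFFS = {
    ("slow", "slow"): (3, 3),   # both slow down: safest joint outcome
    ("slow", "rush"): (0, 4),   # we slow down, they get powerful AI first
    ("rush", "slow"): (4, 0),   # we get it first
    ("rush", "rush"): (1, 1),   # arms race: worst joint outcome that's still mutual
}

def best_response(their_move):
    """Our payoff-maximizing move, taking the other side's move as given."""
    return max(("slow", "rush"),
               key=lambda ours: PAYOFFS[(ours, their_move)][0])

# "rush" is a dominant strategy: it is the best reply whatever the other
# side does, even though (slow, slow) beats (rush, rush) for both players.
print(best_response("slow"), best_response("rush"))  # -> rush rush
```

This is why each actor can sincerely say "we know it's dangerous" and still rush: under these incentives, unilateral restraint only rewards the other side, which is exactly the trust problem the speaker describes.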

  • And so you have this kind of paradoxical situation where the humans can't trust each other, but they think they can trust the AIs.

  • Because when you talk with the same people and you tell them, okay, I understand you can't trust the Chinese, or you can't trust OpenAI, so you need to move faster developing this super AI.

  • How do you know you could trust the AI?

  • One of the things I heard you say that really struck me was this.

  • It's a quote.

  • "If something ultimately destroys us, it will be our own delusions."

  • So can you elaborate on that a little bit, and how it applies to what we've been talking about?

  • Yeah. I mean, the AIs, at least of the present day, cannot escape our control, and they cannot destroy us unless we allow them to, or unless we kind of order them to do that.

  • We are still in control.

  • But because of our, you know, political and mythological delusions, we cannot trust the other humans.

  • And we think we need to develop these AIs faster and faster and give them more and more power, because we have to compete with the other humans.

  • And this is the thing that could really destroy us.

Yuval Noah Harari - How AI Could Destroy Humanity, the Clearest Explanation So Far | Author of Sapiens | A Question We Will All Have to Think About Sooner or Later

Published by Adam Lin on January 2, 2025