SINGULARITY IN 2045: Creating Robots That Can Experience Life Like A Human | Dr. Ben Goertzel

  • When do you think the singularity is coming?

    你認為奇點什麼時候會到來?

  • Uh, Ray Kurzweil says 2045.

  • Uh, I'll stick with that.

  • That's 26 years.

  • Yeah, yeah, yeah, it could be sooner.

  • Could be later.

  • Could be 2030.

  • Could be 2070.

  • I don't think it's 2020.

  • It won't be 2030.

  • It's too soon.

  • I don't know, man.

  • Okay, I guess when it starts to happen, it happens real fast.

  • Well, that's sort of the point.

  • Suppose, as a thought experiment, that by 2025 Google DeepMind, or my own team at SingularityNET, or OpenCog, suppose by 2025 one of us manages to create an AI that is essentially as smart as a human being, right?

  • So it can hold a conversation like a human being.

  • And it can prove math theorems like a human mathematician, and it can analyse a biology data set.

  • Right.

  • So then, like your robots, you put it in the robot body?

  • I mean, that's a separate problem.

  • It could operate many robot bodies all at once.

  • Right?

  • But unlike Sophia right now, it can really understand what it's seeing and what it's doing, fully, at the level a human can.

  • Suppose you get there.

  • Like, how far are you from a true singularity?

  • Because this AI can then copy itself.

  • You make a million copies of it, right?

  • Yeah.

  • Yeah, right.

  • Because once it's as smart as a human, you can...

  • Okay, let's teach it computer science.

  • I mean, we can send it to school.

  • It can learn. It can learn programming, it can learn hardware engineering, and then you can copy a million of those.

  • Right?

  • So of that million, maybe half of them are working on improving its own intelligence. But they're not working in isolation.

  • Unlike humans, they can share thoughts directly, because they're all just running in the compute cloud.

  • Right?

  • So it seems very plausible.

  • Within a few years of getting to that human level, the AI is going to invent all manner of amazing new things. That doesn't mean it will be instantaneous.

  • I mean, doing lab research still takes time.

  • Building new hardware still takes time, but of course, it can be working on how to speed that up, right?

  • Like having more fully automated manufacturing and laboratory research.

  • So, yeah, and then it could take us out of the process as well.

  • Possibly. It could. I mean, that depends on the value system that the AGI has, right?

  • And I mean, this is why it's important to, you know, give it values of kindness, compassion, tolerance, inclusiveness, love for all sentient beings.

  • We want to get these positive values into the AI as much as we can. And do we think we can program that into the AI?

  • I think you teach it. You program it with the ability to learn that. But then whether it unlearns that if it doesn't suit it, that's what we don't know.

  • I mean, it's very subtle, because human values are a moving target, right?

  • Like, the values of suburban New Jersey in 1975, when I was in elementary school, are not the same as the values in suburban New Jersey in the US right now. I mean, back then gay marriage was illegal and, uh, violently opposed by almost everybody, right? And racism was very rampant.

  • So, I mean, human values, they're evolving all the time. By the values of medieval Europe, you and I, and probably almost everyone listening to us, deserve to burn in hell forever, right?

  • Yeah.

  • So, I mean, if you gave the AI the exact human values from 2019, then by 2050 it's gonna be horrible.

  • It would be like having a dictator with the values of 1950s America or something.

  • Right?

  • So you want an AI that will have evolving values, in a way that somehow respects the ongoing evolution of human values, and that hopefully still respects it and doesn't just make its own?

  • Yeah, yeah, which is very subtle, right?

  • So some of my friends who are vegans and animal rights activists are like, well, what if it treats us the way we treat less intelligent animals?

  • And if you think about it, like, we care about extinction of species, though, I mean, not as much as we should. But in general, we don't want to make entire species of subhuman animals extinct.

  • But we don't care much about one cow, sheep, tiger, or wolf, more or less, right?

  • We care about, like, the genome.

  • So if you took that analogy, the AI would like to keep humans around.

  • I mean, we're the creators, we're the ancestors.

  • We have our own unique aesthetic value, but by that analogy, it may not give a crap about one human more or less, any more than we care about one wolf or pig more or less.

  • So, we didn't keep the Neanderthals around.

  • No, but we weren't as reflectively intelligent then as we are now.

  • And I think that there is an argument that as human cultures advance more and more toward abundance, away from scarcity, there's more caring about the environment.

  • There's more compassion toward non-human animals.

  • I mean, there was a lot in Stone Age society, and it sort of went down in the Industrial Revolution, and now it's certainly going up again.

  • But I guess the point is, we want AIs not only to be super intelligent, we want them to be super compassionate. Like, we want them to be more compassionate to us than we are to each other, because we're killing each other.

  • Humans are killing each other for no good reason all the time.

  • So we want them to be morally better than us, in some sense, right?

  • Well, you know, we're not... On the whole, we're not setting the best example in terms of the way human society is being regulated right now.

  • Nor in the way we treat non-human life forms on the planet, nor in the applications that we're deploying.
