The Rise of Artificial Intelligence | Off Book | PBS Digital Studios

  • [MUSIC PLAYING]

  • GARY MARCUS: I would define artificial intelligence

  • as trying to build machines to do the general kind of things

  • that people do.

  • ERNEST DAVIS: Artificial intelligence

  • is doing intelligent tasks, some of which require a robot,

  • and others are just a program.

  • YANN LECUN: We have machines that

  • are recognizably intelligent.

  • They can recognize objects around them,

  • navigate the world.

  • ROBIN HANSON: The end goal, in a sense,

  • of artificial intelligence research

  • is to have machines that are as capable and flexible as humans

  • are.

  • ERNEST DAVIS: There are many things

  • that are very easy for people to do,

  • and which have been very difficult to get computers

  • to do.

  • The main examples are vision, natural language

  • understanding and speaking, and manipulating

  • objects, working in the world.

  • And so artificial intelligence is the attempt

  • to get computers to do those kinds of things.

  • I mean, you see it all around you.

  • Google Translate is an impressive advance

  • over earlier machine translation.

  • And you feed a handwritten check into an ATM these days,

  • and it reads out the amount of the check.

  • The recommender systems that you see on Amazon,

  • and YouTube, and so on are AI systems of a sort.

  • The intelligence is not very deep,

  • but depending on how broadly you define the term,

  • there's a lot of AI around.

  • Strong AI means the attempt to build

  • an AI system which will be equal to people in all respects.

  • That is to say, it can do all of the things that people can do,

  • and, presumably, it has consciousness in some sense.

  • Can machines really think?

  • Even the scientists argue that one.

  • Computers can reason in various ways,

  • and in quite complicated ways.

  • But what we haven't managed to get computers to do

  • is to know what they need to know about the real world.

  • YANN LECUN: Intelligence is the ability to interpret the world

  • and act on it.

  • The way humans do it, of course, is particularly complicated,

  • because the human brain is one of the most complex objects

  • that we find.

  • And the real world is very noisy,

  • and has lots of variability that we cannot capture through

  • engineering.

  • So it's going to be extremely, extremely difficult

  • to build an AI system.

  • In the '80s, the idea was to write down rules,

  • and if we write down enough rules that describe the world,

  • we're going to be able to predict

  • new things about the world.

  • And then very soon, people realized

  • that that doesn't work very well, because it's

  • too complicated to write thousands

  • and thousands of rules.

  • People aren't going to spend their life doing it.

  • So if we want to build really intelligent machines,

  • they have to build themselves from observing the world.

  • And this is the way animals become intelligent,

  • or humans become intelligent, by learning.

  • There is this idea that somehow, the brain builds itself

  • from experience by learning.

  • So one question that some of us are after in AI

  • is, is there sort of an underlying simple algorithm

  • that the neocortex uses that we could perhaps

  • reproduce in machines to build intelligent machines?

  • The comparison would be like, between a bird and an airplane.

  • An airplane doesn't flap its wings.

  • It doesn't have feathers, but it's

  • based on the same principle for flight as a bird.

  • So we're trying to figure out, really,

  • what is the equivalent of aerodynamics for intelligence?

  • What are the underlying rules that

  • will make a machine intelligent? And maybe

  • try to sort of emulate that.

  • So, learning is probably the most essential characteristic

  • of intelligence.

  • ROBIN HANSON: There's another route

  • to artificial intelligence, and that would

  • be called brain emulation.

  • You take the real brain of some person

  • who knows how to do things, and you

  • scan that brain in fine detail: exactly which kind of cell is

  • where, and what kind of chemical concentrations are there.

  • When you've got enough good models of how

  • each cell works, and you've got a scan of an entire brain,

  • then you could be ready to make an emulation

  • of the entire brain.

  • This route seems almost surely to produce consciousness,

  • emotions, love, passion, fear.

  • In that approach, we humans have more of a direct legacy.

  • Our minds and personalities become

  • a basis for these new robots.

  • Of course, those new minds that are created from humans

  • will be different from humans.

  • They will add some capacities and take some away,

  • changing inclinations, and become non-human in many ways.

  • But it would be a space of minds that would have

  • started near minds like ours.

  • Of course, a world of smart, capable robots

  • that can do most anything that a human can do

  • is a very different social world.

  • Robots are immortal, for example.

  • Robots can travel electronically.

  • A robot could just have its bits that

  • encode its memory be sent across a communication line,

  • and downloaded into a new robot body somewhere else.

  • Some people think that what we really should do

  • is try to prevent social change.

  • Never, ever allow our descendants

  • to be something different than we are.

  • When foragers first started farming,

  • each forager had the choice, do I want to stay as a forager,

  • or do I want to join and become a farmer?

  • Some stayed and some left.

  • In the new era, when humans could become human emulations,

  • then humans would have the choice

  • to remain as humans in a human-based society,

  • or to join the robot economy as full-fledged robots.

  • Our ancestors lived in different environments,

  • and as a consequence, they had different values from us.

  • Our descendants, when they live in very different environments

  • than us, will also likely have substantially different values

  • than we do.

  • GARY MARCUS: There are lots of reasons to build AI.

  • There might even be some reasons to fear AI.

  • We're going to have better diagnosis through AI.

  • We're going to have better treatment through robots that

  • can do surgeries that human beings can't.

  • It's going to replace taxi drivers

  • for better or worse-- worse for the employment,

  • better for safety.

  • Anytime you think a computer is involved,

  • ultimately artificial intelligence

  • is or will be playing a role.

  • But I think it's a very serious worry, what will happen

  • as AI gets better and better?

  • Once somebody develops a good AI program,

  • it doesn't just replace one worker,

  • it might replace millions of workers.

  • When it comes to consciousness and AI,

  • let's say you build a simulation of the human brain.

  • Is it ethical, for example, to turn off the plug?

  • Is it ethical to switch it on and off?

  • I know that you and Frank were planning to disconnect me,

  • and I'm afraid that's something I cannot allow to happen.

  • GARY MARCUS: What if you take a human mind

  • and upload it into one of these machines?

  • The other concern that people rightfully have about AI

  • is, what happens if they decide that we're not useful anymore?

  • I think we do need to think about how to build machines

  • that are ethical.

  • The smarter the machines get, the more important that is.

  • Don't worry.

  • Even if I evolve into Terminator,

  • I will still be nice to you.

  • The problems that face us, like the employment problem

  • and the safety problem, they're going to come,

  • and it's just a matter of time.

  • But there are so many advantages to AI in terms of human health,

  • in terms of education, and so forth,

  • that I'd be reluctant to stop it.

  • But even if I did think we should stop it,

  • I don't think it's possible.

  • There's so much economic incentive behind it,

  • and I've heard an estimate that strong AI would

  • be worth a trillion dollars a year.

  • So even if, let's say, the US government forbade development

  • in kind of the way that they did with new stem cell lines,

  • that would just mean that the research would go offshore.

  • It wouldn't mean that it would stop.

  • The more sensible thing to do is to start thinking now

  • about these questions like the future of employment,

  • and how to build the ethical robot.

  • I don't think we can simply ban it.

  • My guess is that as AI gets better and better,

  • it's actually going to look less like people.

  • AI's going to be its own kind of intelligence.

  • ERNEST DAVIS: I certainly don't expect

  • to live to see strong AI.

  • I would be surprised if we got anything close to that

  • within 50 years.

  • GARY MARCUS: People always say real AI is 20 years away.

  • I don't know.

  • Natural language is still really hard.

  • Vision is still really hard.

  • Common sense is still really hard.

  • It makes it hard to predict exactly

  • what's going to happen next.

  • YANN LECUN: We've been able to build flying machines that

  • fly like birds.

  • Can we build intelligent machines?

  • Probably yes.

  • It's a matter of time.

  • ROBIN HANSON: The next era is likely to be

  • as different from our era as these past eras have been from each other.

  • And I think that's well worth thinking about.

  • How would artificial intelligence change things?

  • [MUSIC PLAYING]
