  • [MUSIC PLAYING]

  • NICHOLAS THOMPSON: Hello, I'm Nicholas Thompson.

  • I'm the editor in chief of "Wired."

  • It is my honor today to get the chance

  • to interview Geoffrey Hinton.

  • There are a couple-- well, there are

  • many things I love about him.

  • But two that I'll just mention in the introduction.

  • The first is that he persisted.

  • He had an idea that he really believed in

  • that everybody else said was bad.

  • And he just kept at it.

  • And it gives a lot of faith to everybody who has bad ideas,

  • myself included.

  • Then the second, as someone who spends half his life

  • as a manager adjudicating job titles,

  • I was looking at his job title before the introduction.

  • And he has the most non-pretentious job

  • title in history.

  • So please welcome Geoffrey Hinton, the engineering fellow

  • at Google.

  • [APPLAUSE]

  • Welcome.

  • GEOFFREY HINTON: Thank you.

  • NICHOLAS THOMPSON: So nice to be here with you.

  • All right, so let us start.

  • 20 years ago, when you wrote some of your early, very influential

  • papers, everybody started to say, it's a smart idea,

  • but we're not actually going to be able to design computers

  • this way.

  • Explain why you persisted, why you were so confident that you

  • had found something important.

  • GEOFFREY HINTON: So actually it was 40 years ago.

  • And it seemed to me there's no other way the brain could work.

  • It has to work by learning the strengths of connections.

  • And if you want to make a device do something intelligent,

  • you've got two options.

  • You can program it, or it can learn.

  • And we certainly weren't programmed.

  • So we had to learn.

  • So this had to be the right way to go.

  • NICHOLAS THOMPSON: So explain, though--

  • well, let's do this.

  • Explain what neural networks are.

  • Most of the people here will be quite familiar.

  • But explain the original insight and how

  • it developed in your mind.

  • GEOFFREY HINTON: So you have relatively simple processing

  • elements that are very loose models of neurons.

  • They have connections coming in.

  • Each connection has a weight on it.

  • That weight can be changed to do learning.

  • And what a neuron does is take the activities

  • on the connections times the weights, add them all up,

  • and then decide whether to send an output.

  • And if it gets a big enough sum, it sends an output.

  • If the sum is negative, it doesn't send anything.

  • That's about it.

  • And all you have to do is just wire up

  • a gazillion of those with a gazillion squared weights

  • and just figure out how to change the weights,

  • and it'll do anything.

  • It's just a question of how you change the weights.
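
A minimal sketch in Python of the neuron Hinton describes: multiply each incoming activity by its connection weight, add the products up, and send an output only if the sum is positive (nothing for a negative sum). The function name and the numbers are illustrative, not from any particular library.

```python
def neuron_output(activities, weights):
    """Weighted sum of the incoming activities, thresholded at zero."""
    total = sum(a * w for a, w in zip(activities, weights))
    # A big enough (positive) sum sends an output; a negative sum sends nothing.
    return total if total > 0 else 0.0

# Example: a neuron with three incoming connections and learned weights.
print(neuron_output([1.0, 0.5, -0.3], [0.8, -0.2, 0.5]))  # ~0.55
```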

  • NICHOLAS THOMPSON: So when did you

  • come to understand that this was an approximate representation

  • of how the brain works?

  • GEOFFREY HINTON: Oh, it was always designed as that.

  • NICHOLAS THOMPSON: Right.

  • GEOFFREY HINTON: It was designed to be like how the brain works.

  • NICHOLAS THOMPSON: But let me ask you this.

  • So at some point in your career, you

  • start to understand how the brain works.

  • Maybe it was when you were 12.

  • Maybe it was when you were 25.

  • When do you make the decision that you

  • will try to model computers after the brain?

  • GEOFFREY HINTON: Sort of right away.

  • That was the whole point of it.

  • The whole idea was to have a learning device that

  • learned like the brain-- the way people

  • think the brain learns-- by changing connection strengths.

  • And this wasn't my idea.

  • Turing had the same idea.

  • Turing, even though he invented a lot

  • of the basis of standard computer science,

  • believed that the brain was this unorganized device

  • with random weights.

  • And it would use reinforcement learning

  • to change the connections.

  • And it would learn everything, and he

  • thought that was the best route to intelligence.

  • NICHOLAS THOMPSON: And so you were following Turing's idea

  • that the best way to make a machine is to model it

  • after the human brain.

  • This is how a human brain works.

  • So let's make a machine like that.

  • GEOFFREY HINTON: Yeah, it wasn't just Turing's idea.

  • Lots of people thought that back then.

  • NICHOLAS THOMPSON: All right, so you have this idea.

  • Lots of people have this idea.

  • You get a lot of credit.

  • In the late '80s, you start to come

  • to fame with your published work, is that correct?

  • GEOFFREY HINTON: Yes.

  • NICHOLAS THOMPSON: When is the darkest moment?

  • When is the moment when other people who

  • have been working, who agreed with this idea from Turing,

  • start to back away, and yet you continue to plunge ahead?

  • GEOFFREY HINTON: There were always

  • a bunch of people who kept believing in it, particularly

  • in psychology.

  • But among computer scientists, I guess

  • in the '90s, what happened was data sets were quite small.

  • And computers weren't that fast.

  • And on small data sets, other methods, like things

  • called support vector machines, worked a little bit better.

  • They didn't get confused by noise so much.

  • And so that was very depressing because we developed

  • backpropagation in the '80s.

  • We thought it would solve everything.

  • And we were a bit puzzled about why it didn't solve everything.

  • And it was just a question of scale.

  • But we didn't really know that then.

  • NICHOLAS THOMPSON: And so why did

  • you think it was not working?

  • GEOFFREY HINTON: We thought it was not

  • working because we didn't have quite the right algorithms.

  • We didn't have quite the right objective functions.

  • I thought for a long time it's because we

  • were trying to do supervised learning

  • where you have to label data.

  • And we should have been doing unsupervised learning, where

  • you just learn from the data with no labels.

  • It turned out it was mainly a question of scale.

  • NICHOLAS THOMPSON: Oh, that's interesting.

  • So the problem was you didn't have enough data.

  • You thought you had the right amount of data,

  • but you hadn't labeled it correctly.

  • So you just misidentified the problem?

  • GEOFFREY HINTON: I thought that using labels at all

  • was a mistake.

  • You would do most of your learning

  • without making any use of labels just

  • by trying to model the structure in the data.

  • I actually still believe that.

  • I think as computers get faster, for any given size data set,

  • if you make computers fast enough,

  • you're better off doing unsupervised learning.

  • And once you've done the unsupervised learning,

  • you'll be able to learn from fewer labels.
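
A minimal sketch of the two-stage recipe Hinton describes, assuming a linear autoencoder as the unsupervised model (his actual models were far richer): stage one learns codes that reconstruct unlabeled data, and stage two fits a classifier on those codes using only a handful of labels. The sizes, names, and the toy labeling rule are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))            # plenty of unlabeled data
We = rng.normal(scale=0.1, size=(20, 5))   # encoder: 20 inputs -> 5 codes
Wd = rng.normal(scale=0.1, size=(5, 20))   # decoder: 5 codes -> 20 outputs

# Stage 1: unsupervised learning -- model the structure in the data by
# making the codes reconstruct the inputs, using no labels at all.
for _ in range(200):
    codes = X @ We
    err = codes @ Wd - X                      # reconstruction error
    gWd = codes.T @ err / len(X)              # gradient for the decoder
    gWe = X.T @ (err @ Wd.T) / len(X)         # gradient for the encoder
    Wd -= 0.05 * gWd
    We -= 0.05 * gWe

# Stage 2: supervised learning on the learned codes, from few labels.
X_lab = X[:20]                                # only 20 labeled examples
y = (X_lab[:, 0] > 0).astype(float)           # toy labeling rule
v = np.zeros(5)                               # classifier weights on codes
for _ in range(500):
    p = 1 / (1 + np.exp(-(X_lab @ We) @ v))   # logistic prediction
    v -= 0.1 * (X_lab @ We).T @ (p - y) / len(y)
```

The point of the sketch is the order of operations: the expensive modeling uses no labels, so the few labels only have to position a small classifier on top of features that already exist.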

  • NICHOLAS THOMPSON: So in the 1990s,

  • you're continuing with your research.

  • You're in academia.

  • You are still publishing, but it's not coming to acclaim.

  • You aren't solving big problems.

  • When do you start--

  • well, actually, was there ever a moment

  • where you said, you know what, enough of this.

  • I'm going to go try something else?

  • GEOFFREY HINTON: Not really.

  • NICHOLAS THOMPSON: Not that I'm going to go sell burgers,

  • but I'm going to figure out a different way of doing this.

  • You just said we're going to keep doing deep learning.

  • GEOFFREY HINTON: Yes, something like this has to work.

  • I mean, the connections in the brain are learning somehow.

  • And we just have to figure it out.

  • And probably there's a bunch of different ways of learning

  • connection strengths.

  • The brain's using one of them.

  • There may be other ways of doing it.

  • But certainly, you have to have something that can learn

  • these connection strengths.