Jeff Dean discusses the future of machine learning at TF World '19 (TensorFlow Meets)

  • [MUSIC PLAYING]

  • LAURENCE MORONEY: Hi, everybody.

  • Laurence Moroney here on my TensorFlow World.

  • And we've just come from the keynote

  • that was given by Jeff Dean.

  • And so Jeff, welcome, and thanks for coming to talk with us.

  • JEFF DEAN: Thanks for having me.

  • LAURENCE MORONEY: So you covered lots of great content

  • in the keynote, and there were so

  • many things that we don't have time to go over them all.

  • But there was one really impactful thing that I saw.

  • And you were talking about computer vision.

  • Now, the human error rate in computer vision is like 5%.

  • And now with machines, it's down to 3%,

  • and that's really, really cool.

  • But it's more than just a number, right?

  • What's the impact of this?

  • JEFF DEAN: Right.

  • I mean, it's important to understand

  • this is for a particular task that humans aren't necessarily

  • that great at.

  • You have to be able to distinguish 40 species of dogs

  • and other kinds of things in 1,000 categories.

  • But I do think the progress we've made from about 26%

  • error in 2011 down to 3% in 2016 is hugely impactful.

  • Because the way I like to think about it is computers

  • have now evolved eyes that work, right?

  • And so we've now got the ability for computers

  • to perceive the world around them

  • in ways that didn't exist six or seven years ago.

  • And all of a sudden, that opens up applications of computing

  • that just didn't exist before.

  • Because now, you can depend on being able to see

  • and make sense of what's around you.
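Jeff's point about computers having working eyes rests on the 1,000-category ImageNet task he describes. As a concrete illustration, here is a minimal sketch (not from the talk) that classifies one image with a pretrained Keras model; the image path is hypothetical and the model choice is illustrative.

```python
# A minimal sketch of the ImageNet task: classify one image into
# 1,000 categories with a pretrained model. "dog.jpg" is hypothetical.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.ResNet50(weights="imagenet")

img = tf.keras.utils.load_img("dog.jpg", target_size=(224, 224))
x = np.expand_dims(tf.keras.utils.img_to_array(img), axis=0)
x = tf.keras.applications.resnet50.preprocess_input(x)

preds = model.predict(x)
# decode_predictions maps the 1,000-way output back to class names,
# including the many fine-grained dog breeds Jeff mentions.
for _, name, score in tf.keras.applications.resnet50.decode_predictions(preds, top=3)[0]:
    print(f"{name}: {score:.3f}")
```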

  • LAURENCE MORONEY: I know one of these applications that you're

  • always passionate about is diabetic retinopathy

  • and diagnosis of that.

  • Could you tell us what's going on in that space?

  • JEFF DEAN: Yeah, I mean, I think diabetic retinopathy

  • is a really good example of many medical imaging fields.

  • Where now, all of a sudden, if you collect

  • a high quality [INAUDIBLE] from domain experts,

  • radiologists labeling x-rays, or ophthalmologists

  • labeling eye images, and then you train a computer vision

  • model on that task, whatever it might be,

  • you can now sort of replicate the expertise of those domain

  • experts in a way that makes it possible to bring and deploy

  • that sort of expertise much more widely.

  • You can get something onto a GPU card

  • and do 100 images a second in rural villages

  • all over the world.
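The recipe Jeff outlines, expert-labeled images plus a trained vision model, is typically implemented with transfer learning. Below is a hedged sketch of that pattern; the directory name and the five severity grades are illustrative assumptions, not the actual clinical pipeline.

```python
# A hedged sketch: fine-tune a pretrained vision model on
# expert-labeled medical images. "retina_images/" and the
# five severity grades are assumptions for illustration.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "retina_images/",           # hypothetical folder, one subfolder per grade
    image_size=(224, 224),
    batch_size=32)

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False          # reuse ImageNet features; train only the head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation="softmax"),  # e.g. 5 severity grades
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```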

  • LAURENCE MORONEY: And I think that's the important part.

  • It's like, in places where there's a shortage of that expertise,

  • you can now have an impact and change the world.

  • JEFF DEAN: That's right.

  • Yeah, yes.

  • So you can offer--

  • if you have clinicians who are already

  • doing this task-- you can offer them an instant second opinion,

  • like a second colleague they can turn to.

  • But you can also deploy it in places where there

  • just aren't enough doctors.

  • LAURENCE MORONEY: I just find that amazing,

  • and it's one of the ways that computer vision is now

  • more than just a number.

  • It's an application where we're able to change our world

  • to make it--

  • JEFF DEAN: I mean, being able to see

  • has all kinds of cool implications.

  • LAURENCE MORONEY: Exactly.

  • And then you also spoke a lot about language,

  • and some of the new language models,

  • and some of the research that's been going on there.

  • Can you update us a little on that?

  • JEFF DEAN: Sure.

  • I think in the last four or so years,

  • we've made a lot of progress as a community

  • in how do we build models that can basically

  • understand pieces of text?

  • For things like a paragraph or a couple of paragraphs,

  • we can actually understand them at a much deeper level

  • than we were able to before.

  • We still don't have a good handle on how to read

  • an entire book and understand it the way

  • a human would from reading a book.

  • But understanding a few paragraphs of text

  • is actually a pretty fundamentally useful thing

  • for all kinds of things.

  • We can use these to improve our search system.

  • Just last week, we announced the use

  • of a BERT model, which is a fairly sophisticated

  • natural language processing model

  • in the middle of our search ranking algorithms.

  • And that's been shown to improve our search results quite

  • a lot for lots of different kinds of queries

  • that were previously pretty hard.
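Google's production ranking isn't public, but the general idea of using a BERT-style encoder to judge how well a passage matches a query can be sketched as follows. This uses the Hugging Face `transformers` library for brevity, and the example texts are invented.

```python
# Not Google's ranking system, just a hedged sketch of the idea:
# embed a query and candidate passages with BERT, then score by
# cosine similarity. Example texts are made up.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = TFAutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    # Mean-pool BERT's final hidden states into one vector per text.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="tf")
    hidden = encoder(**batch).last_hidden_state            # [batch, seq, 768]
    mask = tf.cast(batch["attention_mask"], tf.float32)[..., None]
    summed = tf.reduce_sum(hidden * mask, axis=1)
    return tf.math.l2_normalize(summed / tf.reduce_sum(mask, axis=1), axis=1)

query = embed(["can you pick up medicine for someone else"])
passages = embed(["A patient may authorize someone to collect a prescription.",
                  "Pharmacies are usually open on weekends."])
print(tf.matmul(query, passages, transpose_b=True).numpy())  # relevance scores
```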

  • LAURENCE MORONEY: Cool, cool.

  • And I'm assuming it can be used, for example, for research

  • at least, for translation, for bringing more languages online

  • for [INAUDIBLE].

  • JEFF DEAN: Yeah, yeah.

  • So there's also a lot of advances

  • in the field of translation using these kinds of models.

  • Transformer-based models for translation

  • are showing remarkable gains in BLEU score,

  • which is a measure of translation quality.
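BLEU, the metric mentioned here, scores a candidate translation by its n-gram overlap with reference translations. A minimal computation with the `sacrebleu` package, using invented sentences:

```python
# A minimal BLEU computation with sacrebleu; sentences are invented.
import sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]  # one reference per hypothesis

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)  # corpus-level BLEU on a 0-100 scale
```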

  • LAURENCE MORONEY: Right, right.

  • Now, one thing that I found particularly fascinating

  • that you were talking about as you were wrapping up

  • your keynote is that a lot of the time,

  • we have these kinds of atomic models

  • that do all these unit tasks.

  • But what about this great big model that's

  • able to do multiple things,

  • using neural architecture search to be

  • able to add to that model?

  • Could you elaborate a little bit

  • on that, 'cause you had a great call to action there?

  • JEFF DEAN: Yeah, I think today, in the machine learning field,

  • we mostly find a problem we care about,

  • find the right data, and train a model

  • to do that particular task.

  • But we usually start from nothing with that model.

  • We basically initialize the parameters

  • of the model with random floating point numbers

  • and then try to learn everything about that task from the data

  • set we've collected.

  • And that seems pretty unrealistic.

  • It's sort of akin to, like, when you

  • want to learn to do something new,

  • you forget all your education, and you

  • go back to being an infant.

  • LAURENCE MORONEY: Take a brain out

  • and put a different brain in.

  • JEFF DEAN: And now, you try to learn everything

  • about this task.

  • And that's going to require that you

  • have a lot more examples of what it is you're trying to do,

  • because you're not generalizing from all the other things

  • you already know how to do.

  • And it's also going to mean you need

  • a lot more computation and a lot more

  • effort to achieve good outcomes in those tasks.

  • If, instead, you had a model that

  • knew how to do lots and lots of things,

  • in the limit, all the things we're

  • training separate machine learning models for,

  • why aren't we training one large model for this

  • with different pieces of expertise?

  • I think it's really important that, if we

  • have a large model, we only sort of sparsely activate it.

  • We call upon different pieces of it as needed.

  • But mostly, 99% of the model is idle for any given task.

  • And you call upon the right pieces of expertise

  • when you need them.

  • That, I think, is a promising direction.

  • There's a lot of really interesting computer systems

  • problems underneath there.

  • How do we actually scale to a model of that size?

  • There's a lot of interesting machine

  • learning research questions.

  • How do we have a model that evolves its structure and

  • learns to route to different pieces of the model that

  • are most appropriate?

  • But I'm pretty excited about it.
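One way to make "sparsely activate" concrete is a mixture-of-experts layer, where a learned router sends each example to a single expert and the rest of the model stays idle. The toy sketch below illustrates the routing idea under assumed sizes; it is not the architecture Jeff describes.

```python
# A toy sparsely-activated mixture-of-experts layer: a router picks
# the top-1 expert per example. All sizes are illustrative.
import tensorflow as tf

class SparseMoE(tf.keras.layers.Layer):
    def __init__(self, num_experts=8, hidden=64, **kwargs):
        super().__init__(**kwargs)
        self.router = tf.keras.layers.Dense(num_experts)   # routing logits
        self.experts = [tf.keras.layers.Dense(hidden, activation="relu")
                        for _ in range(num_experts)]

    def call(self, x):
        probs = tf.nn.softmax(self.router(x), axis=-1)     # [batch, experts]
        top = tf.argmax(probs, axis=-1)                    # top-1 expert index
        gate = tf.reduce_max(probs, axis=-1, keepdims=True)
        # Dense for clarity: compute every expert, keep only the routed one.
        # A real system would dispatch inputs so idle experts do no work.
        outs = tf.stack([e(x) for e in self.experts], axis=1)  # [b, E, hidden]
        chosen = tf.gather(outs, top, batch_dims=1)            # [b, hidden]
        return gate * chosen            # gating keeps the router trainable

print(SparseMoE()(tf.random.normal([4, 32])).shape)  # (4, 64)
```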

  • LAURENCE MORONEY: Yeah, me, too.

  • And it's like, it's one of those things that might

  • seem a little fantastical now.

  • But only two or three or four years ago,

  • the computer vision and natural language stuff that

  • we're talking about seemed fantastical then, so it's--

  • JEFF DEAN: Right.

  • And we're seeing hints of things.

  • Like, neural architecture search seems to work well for many problems.

  • We're seeing that when you

  • do transfer learning from another related task,

  • you generally get good results with less

  • data for the final task you care about.

  • Multi-task learning at small scales, with five

  • or six related tasks, tends to make things work well.

  • So this is just sort of the logical consequence

  • of extending all those ideas out.
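The small-scale multi-task setup Jeff refers to is often built as one shared trunk with a handful of task-specific heads trained jointly. A minimal sketch, with made-up task names and sizes:

```python
# A minimal multi-task model: one shared trunk, several task heads.
# Task names, input size, and class counts are made up.
import tensorflow as tf

inputs = tf.keras.Input(shape=(128,))
trunk = tf.keras.layers.Dense(256, activation="relu")(inputs)
trunk = tf.keras.layers.Dense(256, activation="relu")(trunk)

# Each related task gets its own small head on the shared representation,
# so learning one task can help the others generalize.
outputs = {
    "sentiment": tf.keras.layers.Dense(2, activation="softmax", name="sentiment")(trunk),
    "topic": tf.keras.layers.Dense(10, activation="softmax", name="topic")(trunk),
    "spam": tf.keras.layers.Dense(2, activation="softmax", name="spam")(trunk),
}
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss={name: "sparse_categorical_crossentropy" for name in outputs})
model.summary()
```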

  • LAURENCE MORONEY: Yeah, exactly.

  • So then bringing you back, for example,

  • to the computer vision that we spoke about early on.

  • It was, like, who would have thought, when we were first

  • researching that, that things like diabetic retinopathy

  • would have been possible?

  • And now we're at the point where with this model, this--

  • I don't know what to call it-- model of everything,

  • uber model, that kind of thing, there

  • are going to be implications that

  • can change the world, that can make the world a better place.

  • JEFF DEAN: Yeah.

  • That's what we hope.

  • LAURENCE MORONEY: That's the hope,

  • and that's also the driving goal, I think.

  • And that's one of the things that I find--

  • and if we go back to your keynote,

  • towards the end of your keynote, when you spoke about fairness,

  • when you spoke about the engineering challenges

  • that we're helping to solve, that

  • was personally inspiring to me.

  • JEFF DEAN: Hmm, cool.

  • LAURENCE MORONEY: And I hope it's personally

  • inspiring to you, too.

  • So thanks so much, Jeff.

  • I really appreciate having you on and--

  • JEFF DEAN: Thanks very much.

  • Appreciate it.

  • LAURENCE MORONEY: Thank you.

  • JEFF DEAN: Thanks.

  • [MUSIC PLAYING]
