
  • Hi, everybody.

  • And welcome to this session where we're gonna talk about breakthroughs in machine learning.

  • I'm Laurence Moroney.

  • I'm a developer advocate at Google, working on TensorFlow with the Google Brain team.

  • We're here today to talk about the revolution that's going on in machine learning and how that revolution is transformative.

  • Now I come from a software development background.

  • Any software developers here? Given that it's I/O, I'm sure there are.

  • This revolution, particularly from a developer's perspective, is really, really cool because it's giving us a whole new set of tools that we can use to build scenarios and to build solutions for problems that may have been too complex to even consider prior to this.

  • It's also leading to massive advances in our understanding of things like the universe around us.

  • It's opening up new fields in the arts, and it's impacting and revolutionizing things such as health care and so many more things.

  • So should we take a look at some of these? First of all, astronomy.

  • At school, I studied physics. I wasn't on the computer science side, so I'm a physics and astronomy geek.

  • It wasn't that long ago that we learned how to discover new planets around other stars in our galaxy, and the way that we discovered them was that sometimes we'd observe a little wobble in the star.

  • And that meant that there was a very large planet, Jupiter-size or even bigger, orbiting that star very closely and causing a wobble because of the gravitational attraction.

  • But, of course, the kind of planets we want to find are the small, rocky ones like Earth or Mars, where you know there's a chance of finding life on these planets.

  • Finding those and discovering those was very, very difficult to do, because the small ones closer to a star you just wouldn't see.

  • But there's research that's been going on with the Kepler mission.

  • They've actually recently discovered this planet called Kepler-90i by sifting through data and building models using machine learning and TensorFlow.

  • Kepler-90i is actually much closer to its host star than Earth is, so its orbit is only 14 days instead of our 365 and a quarter-ish.

  • Not only that, and this is what I find really cool: they didn't just find this single planet around that star.

  • They've actually mapped and modeled the entire solar system of eight planets that are there.

  • So these are some of the advances.

  • To me, it's just a wonderful time to be alive, because technology is enabling us to discover these great new things.

  • And even closer to home, we've also discovered that by looking at scans of the human eye, as you would have seen in the keynote, with machine-learning-trained models, we've been able to do things such as blood pressure prediction, or being able to assess a person's risk of a heart attack or a stroke.

  • Now, just imagine if this screening can be done on a small mobile phone. How profound are the effects going to be?

  • Suddenly, the whole world is going to be able to access easy, rapid, affordable, and noninvasive screening for things such as heart disease.

  • It'll be saving many lives, but it'll also be improving the quality of many, many more lives.

  • Now, these are just a few of the breakthroughs and advances that have been made because of TensorFlow.

  • And with TensorFlow, we've been working hard with the community, with all of you, to make it a machine learning platform for everybody.

  • So today we want to share a few of the new advances that we've been working on.

  • Including robots: Vincent's gonna come out in a few moments to show us robots that learn, and some of the work that they've been doing to improve how robots learn.

  • And then Debbie is going to join us from NERSC.

  • She's gonna be showing us cosmology advancements, including how building a simulation of the entire universe will help us understand the nature of the unknowns in our universe, like dark matter and dark energy.

  • But first of all, I would love to welcome, from the Magenta team, Doug, who's the principal scientist.

  • Doug!

  • Thanks, Laurence.

  • Thanks, Doug.

  • Thank you very much.

  • All right.

  • Day three.

  • We're getting there, Everybody.

  • I'm Doug.

  • I am a research scientist at Google, working on a project called Magenta.

  • And so before we talk about modeling the entire known universe, and before we talk about robots,

  • I want to talk to you a little bit about music and art, and how to use machine learning potentially for expressive purposes.

  • So I want to talk first about a drawing project called Sketch-RNN, where we trained a neural network to do something as important as draw the pig that you see on the right there.

  • And I want to use this as an example to actually highlight a few, I think, important machine learning concepts that we're finding to be crucial for using machine learning in the context of art and music.

  • So let's dive in.

  • It's gonna get a little technical, but hopefully it'll be fun for you. All we're gonna do is try to learn to draw, not by generating pixels, but actually by generating pen strokes.

  • And I think this is a very interesting representation to use because it's very close to what we do when we draw. So specifically, we're gonna take the data from the very popular Quick, Draw! game, where you play Pictionary against a machine learning algorithm.

  • That data was captured as delta-x, delta-y movements of the pen.

  • We also know when the pen is put down on the page and when the pen is lifted up, and we're gonna treat that as our training domain.
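
To make that concrete, here is a rough sketch of what that kind of stroke data can look like. This is illustrative only: the field names, layout, and values are assumptions in the spirit of the "stroke" representation described here, not the exact Quick, Draw! storage format.

```ts
// One way to encode a drawing as pen strokes: each step records the pen's
// movement and state, rather than any pixels.
interface StrokeStep {
  dx: number;       // horizontal movement of the pen since the last step
  dy: number;       // vertical movement of the pen since the last step
  penDown: number;  // 1 if the pen is touching the page after this step
  penUp: number;    // 1 if the pen is lifted after this step
  end: number;      // 1 if the drawing is finished
}

// A tiny made-up example: two short strokes of a drawing.
const drawing: StrokeStep[] = [
  {dx: 10, dy:  0, penDown: 1, penUp: 0, end: 0},
  {dx:  5, dy:  8, penDown: 0, penUp: 1, end: 0},  // pen lifts here
  {dx: -3, dy: 12, penDown: 1, penUp: 0, end: 0},
  {dx:  0, dy:  0, penDown: 0, penUp: 0, end: 1},  // end of the drawing
];
```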

  • One thing that I would note, which we observed, is that we didn't necessarily need a lot of this data.

  • What's nice about the data is that it fits the creative process.

  • It's closer to drawing, I argue, than pixels are.

  • It's actually modeling the movement of the pen.

  • Now what we're gonna do with these drawings is we're gonna push them through something called an autoencoder.

  • What you're seeing on the left is the encoder network; its job is to take those strokes of that cat and encode them in some way so that they can be stored as a latent vector, the yellow box in the middle.

  • The job of the decoder is to decode that latent vector back into a generated sketch.

  • And the very important point, in fact the only point that you really need to take away from this talk, is that that latent vector is worth everything to us.

  • First, it's smaller in size than the encoded or decoded drawing, so it can't memorize everything.

  • And because it can't memorize, we actually get some nice effects.

  • For example, you might notice if you look carefully that the cat on the left, which is actual data and has been pushed through the trained model and decoded, is not the same as the cat on the right, right?

  • The cat on the left has five whiskers, but the model regenerated the sketch with six whiskers.

  • Why?

  • Because that's what it usually sees.

  • Six whiskers is general; it's normal to the model, whereas five whiskers is hard for the model to make sense of.

  • So there's this idea of having a tight, low-dimensional representation, this latent vector that's been trained on lots of data.

  • The goal is that this model might learn to find some of the generalities in a drawing, learn general strategies for creating something.
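
As a very rough sketch of that encode-compress-decode idea, here is a toy dense autoencoder in TensorFlow.js. This is not the actual Sketch-RNN sequence model; the layer sizes and dimensions are made-up assumptions, just to show the shape of the pipeline: strokes in, a small latent vector, strokes back out.

```ts
import * as tf from '@tensorflow/tfjs';

const inputDim = 1250;   // e.g. a flattened stroke sequence (assumption)
const latentDim = 128;   // the small latent vector described above

// Encoder: compresses a drawing down to the latent vector z.
const encoder = tf.sequential({
  layers: [
    tf.layers.dense({inputShape: [inputDim], units: 512, activation: 'relu'}),
    tf.layers.dense({units: latentDim}),   // z: too small to memorize the input
  ],
});

// Decoder: regenerates a drawing from z.
const decoder = tf.sequential({
  layers: [
    tf.layers.dense({inputShape: [latentDim], units: 512, activation: 'relu'}),
    tf.layers.dense({units: inputDim}),
  ],
});

// Round trip: a drawing goes in, z is extracted, a drawing is generated back.
const strokes = tf.randomNormal([1, inputDim]);   // stand-in for a real drawing
const z = encoder.predict(strokes) as tf.Tensor;
const regenerated = decoder.predict(z) as tf.Tensor;
```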

  • So here's an example of starting in each of the four corners with a drawing done by a human,

  • David, the first author; those are encoded in the corners, and now we just move linearly around the space, not the space of the strokes but the space of the latent vector.

  • And if you look closely, what I think you'll see is that the movements and the changes of these faces, say from left to right, are actually quite smooth.

  • The model has dreamt up all of those faces in the middle.

  • Yet to my eye, they really do kind of fill the space of possible drawings.
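
A minimal sketch of that "move linearly around the latent space" idea, assuming you already have latent vectors for the four corner drawings. The function name, the bilinear blend, and the variable names are illustrative, not the actual Sketch-RNN demo code.

```ts
import * as tf from '@tensorflow/tfjs';

// Blend four corner latent vectors; (u, v) is the position in the grid of faces,
// with both values in [0, 1].
function blendLatents(
    zTopLeft: tf.Tensor1D, zTopRight: tf.Tensor1D,
    zBottomLeft: tf.Tensor1D, zBottomRight: tf.Tensor1D,
    u: number, v: number): tf.Tensor {
  const top = zTopLeft.mul(1 - u).add(zTopRight.mul(u));
  const bottom = zBottomLeft.mul(1 - u).add(zBottomRight.mul(u));
  return top.mul(1 - v).add(bottom.mul(v));
}

// Each blended latent vector would then be handed to the decoder,
// which dreams up one of the in-between faces.
```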

  • Finally, as I pointed out with the cat whiskers, these models generalize, not memorize.

  • It's not that interesting to memorize a drawing.

  • It's much more interesting to learn general strategies for drawing.

  • And so we see that with the five-to-six-whisker cat. More interestingly, and I think it's also suggestive,

  • we also see this when doing something like taking a model that's only seen pigs and giving it a picture of a truck.

  • And what's that model going to do?

  • It's gonna find a pig truck, because that's all it knows about, right?

  • And if that seems silly, which I grant it is, then in your own mind

  • think about how hard it would be, at least for me,

  • if someone said: draw a truck that looks like a pig. It's kind of hard to make that transformation, and these models do it.

  • Finally, we get paid to do this.

  • I just want to point that out as an aside, so it's kind of nice.

  • I said that last year.

  • It's still true.

  • Um, okay, so these latent space analogies. Another example of what's happening in these latent spaces: obviously, if you add and subtract pen strokes, you're not gonna get far with making something that's recognizable.

  • But if you have a look at the latent space analogies: we take the latent vector for a cat head, and we add a pig body, and we subtract the pig head.

  • And of course, it stands to reason that you should get a cat body and we could do the same thing in reverse.

  • And this is real data.

  • This actually works, and the reason I mention it is that it shows that these latent space models are learning some of the geometric relations between the forms that people draw.
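
Here is a sketch of that latent-space analogy, assuming the encoder has already produced latent vectors for the three source drawings. The stand-in tensors are only placeholders for values an encoder would normally produce; names and dimensions are illustrative.

```ts
import * as tf from '@tensorflow/tfjs';

const latentDim = 128;

// Placeholders for latent vectors the encoder would normally produce.
const zCatHead = tf.randomNormal([latentDim]);
const zPigHead = tf.randomNormal([latentDim]);
const zPigBody = tf.randomNormal([latentDim]);

// cat head + pig body - pig head  ≈  cat body
const zCatBody = zCatHead.add(zPigBody).sub(zPigHead);

// Decoding zCatBody (e.g. decoder.predict(zCatBody.expandDims(0))) would then
// yield the generated cat-body sketch described above.
```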

  • I'm going to switch gears now and move from, uh, drawing to music, and talk a little bit about a model called NSynth, which is a neural network synthesizer that takes audio and learns to generalize in the space of music. You may have seen, from the beginning of I/O, that it has been put into a hardware unit called NSynth Super. How many people have heard of NSynth Super?

  • How many people want an NSynth Super? Good.

  • Okay, well, that's possible, as you know.

  • Um, okay, so for those of you that didn't see the opening, I have a short version of the making of the NSynth Super. I'd like to roll that now to give you guys a better idea of what this model's up to.

  • Um, let's roll it.

  • That's, like, wild.

  • There's a flute.

  • Here's a snare.

  • Now I just feel like attending a corner.

  • What could be a new possibility?

  • It could generate a sound that might inspire us.

  • The fun part is like, Even though you think you know what you're doing, there's some weird interaction happening.

  • It can give you something totally unexpected.

  • Wait, why did that happen that way?

  • Okay, so what you saw there, by the way: the last person, with the long hair, was Jesse Engel, who was the main scientist on the NSynth project.

  • This grid that you're seeing, this, uh, square where you can move around the space, is exactly the same idea as we saw with those faces.

  • So the idea is that you're moving around the latent space, and you're able to discover sounds that hopefully have some similarity, and because they're made up from learning how sound works for us, in the same way as a pig truck, they maybe give us some new ideas about how sound works.

  • And, as you probably know, you can make these yourself, which is, to my mind, my favorite part about the NSynth Super project: it's open source.

  • It's on GitHub, so for those of you who are makers and like to tinker, please give it a shot.

  • If not, we'll see some become available from the tons of people who are building them.

  • So I want to keep going with music, but I want to move away from audio and I want to move now.

  • to musical scores, musical notes, something that, you know, think of last night with Justice driving a sequencer, and talk about basically the same idea, which is: can we learn a latent space where we can move around what's possible in a musical note, or a musical score, rather?

  • So what you see here is some three-part musical thing on the top and some one-part musical thing on the bottom, and then finding in the latent space something that's in between, okay?

  • And now I put the faces underneath this.

  • What you're looking at now is a representation of a musical drum score where time is passing left to right.

  • And what we're going to see is we're gonna start.

  • I'm gonna play this for you.

  • It's a little bit long, so I want to set this up.

  • We're gonna start with a drum beat, one measure of drums, and we're gonna end with one measure of drums, and you're gonna hear those.

  • First you're gonna hear A and B, and then you're going to hear this latent space model

  • try to figure out how to get from A to B, and everything in between is made up by the model, in exactly the same way that the faces in the middle are made up by the model.

  • So as you're listening, basically listen for whether the intermediate drums make musical sense or not. Let's give it a roll.

  • So there you have it. Moving right along.

  • It turns out, take a look at this command.

  • Um, this may make sense to some of you, maybe, but we were surprised to learn, after a year of doing Magenta, that this is not the right way to work with musicians and artists.

  • I know, I laughed too, but we really thought it was a great idea.

  • We were like, guys, just paste this into a terminal, and they're like, what's a terminal?

  • And then, you know you're in trouble, right?

  • OK, so, um, we've moved quite a bit towards trying to build tools that musicians can use.

  • This is a drum machine, actually, that you can play with online, built around TensorFlow.js, and I have a short clip of this being used.

  • What you're going to see is all the red is from you.

  • As a musician, you can play around with it and then the blue is generated by the models.

  • So let's give this a roll.

  • This is quite a bit shorter.

  • So this is available for you as a CodePen, which allows you to play around with the HTML and the CSS and the JavaScript, and, really amazing, a huge shout-out to Tero Parviainen, who did this.

  • He grabbed one of our trained Magenta models, and he used TensorFlow.js, and he hacked a bunch of code to make it work.

  • And he put it out on Twitter and we had no idea this was happening.

  • And then we reached out, and I was like, Tero, you're my hero.

  • This is awesome.

  • And he's like, Oh, you guys care about this?

  • And of course we care about this.

  • This is our dream, to have people, not just us, playing with this technology.

  • So I love it that we've gotten there.

  • So part of what I want to talk about today, and actually close with: we've cleaned up a lot of the code.

  • In fact, Tero helped.

  • And now we're able to introduce Magenta.js, which is very tightly integrated with TensorFlow.js.

  • And it allows you, for example, to grab a checkpointed model and set up a player and start sampling from it.

  • So in three lines of code, you can set up a little drum machine or music sequencer, and we're also doing the same thing with Sketch-RNN, and so we have the art side as well.
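
Roughly along those lines, here is a minimal Magenta.js sketch: load a checkpointed model, sample from it, and play the result. The checkpoint URL and the exact calls are illustrative assumptions; check the Magenta.js documentation for the current API and hosted checkpoints.

```ts
import * as mm from '@magenta/music';

// Assumed hosted checkpoint for a small drum model (illustrative URL).
const CHECKPOINT =
    'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/drums_2bar_lokl_small';

const model = new mm.MusicVAE(CHECKPOINT);
const player = new mm.Player();

async function playDrums() {
  await model.initialize();                  // download the trained weights
  const [sequence] = await model.sample(1);  // draw one sequence from the latent space
  player.start(sequence);                    // play it in the browser
}

playDrums();
```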

  • Um, and we've seen a lot of demos driven by this a lot of really interesting work, both by Googlers and by people from the outside.

  • And I think it aligns well with what we're doing in Magenta.

  • So, to close:

  • we're doing research in generative models; we're working to engage with musicians and artists.

  • Very happy to see the JavaScript stuff come along.

  • It really seems to be the language for that; we're hoping to see better tools come along, and heavy engagement with the open source community.

  • If you wanna learn more, please visit g.co/magenta.

  • Also, you can follow my Twitter account.

  • I post regular updates and try to be a connector for that.

  • So that's what I have for you.

  • And now I'd like to switch gears and go to robots.

  • Very exciting, with my colleague from Google Brain, Vincent Vanhoucke.

  • Thank you very much. Thanks, Doug.

  • So my name's Vincent, and I lead the Brain robotics research team, the robotics research team at Google.

  • When you think about robots, you may think about precision and control.

  • You may think about robots, you know, living in factories.

  • They've got one very specific job to do, and they gotta do it over and over again.

  • But as you saw in the keynote earlier, more and more, robots are about people, right? There are self-driving cars that are driving on our streets, interacting with people.

  • They essentially now live in our world, not their world.

  • And so they really have to adapt and perceive the world around them and learn how to operate in this human centric environment.

  • Right?

  • So how do we get robots to learn instead of having to program them?

  • Um, this is what we've been embarking on.

  • And it turns out we can get robots to learn.

  • It takes a lot of robots.

  • It takes a lot of time.

  • But we can actually improve on this if we teach robots how to behave collaboratively.

  • So this is an example of a team of robots that are learning together how to do a very simple task, like grasping objects.

  • At the beginning, they have no idea what they're doing.

  • They try and try and try.

  • And sometimes they will grasp something.

  • Every time they grasp something, we give them a reward, and over time they get better and better at it.

  • Of course, we use deep learning for this; um, basically we have a convolutional network that maps those images that the robots see of the workspace in front of them to actions, to possible actions.

  • And this collective learning of robots enables us to get to levels of performance that we haven't seen before.
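
As a toy illustration of that image-to-action mapping (this is not the actual grasping network; the input shapes, the action encoding, and the layer sizes are all assumptions), a small convolutional model could score a candidate grasp motion given the camera image of the workspace:

```ts
import * as tf from '@tensorflow/tfjs';

// Inputs: a camera image of the workspace and a candidate motion command.
const imageInput = tf.input({shape: [64, 64, 3]});   // assumed image size
const actionInput = tf.input({shape: [4]});          // e.g. dx, dy, dz, rotation (assumption)

// A small convolutional stack over the image.
let x = tf.layers.conv2d({filters: 16, kernelSize: 3, activation: 'relu'})
    .apply(imageInput) as tf.SymbolicTensor;
x = tf.layers.maxPooling2d({poolSize: 2}).apply(x) as tf.SymbolicTensor;
x = tf.layers.flatten().apply(x) as tf.SymbolicTensor;

// Combine image features with the candidate action and predict grasp success.
const merged = tf.layers.concatenate().apply([x, actionInput]) as tf.SymbolicTensor;
const hidden = tf.layers.dense({units: 64, activation: 'relu'}).apply(merged) as tf.SymbolicTensor;
const graspSuccess = tf.layers.dense({units: 1, activation: 'sigmoid'}).apply(hidden) as tf.SymbolicTensor;

const model = tf.model({inputs: [imageInput, actionInput], outputs: graspSuccess});
// The grasp / no-grasp reward from the real robots would serve as the training label.
```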

  • But it takes a lot of robots.

  • And in fact, you know, this is Google.

  • We would much rather use lots of computers if we could instead of lots of robots.

  • So the question becomes: could we actually use a lot of simulated robots, virtual robots, to do this kind of task and teach those robots to perform tasks?

  • And would it actually matter in the real world?

  • Would what they learn in simulation actually apply to real tasks?

  • Um, and it turns out the key to making this work is to learn simulations that are more and more faithful to reality.

  • So on the right here, you see what a typical simulation of a robot would look like.

  • This is a virtual robot trying to grasp objects in simulation.

  • What you see on the other side here may look like a real robot doing the same task, but in fact, it is completely simulated as well.

  • We've learned a machine learning model that maps those simulated images to real images, to real-looking images.

  • They're essentially indistinguishable from what a real robot would see in the real world.

  • And by using this kind of data in a simulated environment, and training a model in simulation to accomplish tasks using those images, we can actually transfer that information and make it work in the real world as well.

  • So there's lots of things we can do with these kinds of simulated robots.

  • Uh, this is Rainbow Dash, our favorite little pony.