Test.
>> Hi, everybody.
We have a big show for you today.
So, if you have -- now would be a
great time to turn -- the exit behind you.
[ Applause ]
>> Hi, everybody.
Welcome.
Welcome to the 2018 TensorFlow Dev Summit.
We have a good day with lots of cool talks.
As you know -- we are embarking on the
--
and the controllers in Europe are
using machine learning
to project the trajectories of flights
through the airspace of Belgium, Luxembourg, Germany and the
Netherlands. This airspace handles more than 1.8 million flights and is one of the
densest in the world.
And dairy farming.
We know that a cow's health is vital
to the survival of the dairy industry.
And Connecterra, a company in the Netherlands, wondered
if they could
use machine learning to track the health
of cows and provide insights to farmers and
veterinarians on actions
to take to ensure we have happy,
healthy cows that are high yielding.
In California, and also from the Netherlands.
And
--
music, the machine learning algorithm,
the neural networks --
>> And changed by machine learning.
The popular Google Home, or the Pixel, or Search or YouTube or
even Maps. Do you know what is fascinating in all of these
examples?
TensorFlow is at the forefront of them, making it all
possible.
A machine learning platform that can solve challenging problems
for all of us.
Join us on this incredible journey to make TensorFlow
powerful, scalable and the best machine learning platform for
everybody.
I now -- with TensorFlow to tell us more about this. Thank you.
>> So, let's take a look at what we have been doing over the last
few years. It's been really amazing.
There's lots of new -- we have seen the popularity of
TensorFlow grow. Especially over the last year,
we focused on making TensorFlow easy to
use, and new
programming paradigms like eager execution really make that
easier. Earlier this year, we hit the
milestone of 11 million downloads. We are really
excited to see how many users are using this and how much
impact it's had in the world.
Here's a map showing self-identified
locations of folks on GitHub who
starred TensorFlow. It goes up and down. In fact,
TensorFlow is used in every time zone in the world.
An important part of any open source project is the
contributors themselves: the people who make the project
successful. I'm excited to see over a thousand contributors
from outside Google who
are making contributions not just by improving code, but
also by helping the rest of the community by answering
questions,
responding to queries and so on.
Our commitment to this community includes
sharing our direction in the
roadmap, sharing the design direction, and
focusing on key needs like TensorBoard. We will be talking
about this later this afternoon in detail.
Today we are launching a new TensorFlow blog. We'll be
sharing work by the team and the community on this blog, and we
would like to invite you to participate in this as well.
We're also launching a new YouTube channel for TensorFlow
that brings together all the great content for TensorFlow.
Again, all of these are for the community to really help build
and communicate. All day today we will be sharing a number of
posts on the blog and videos on the channel.
The talks you are hearing here will be made available
there as well, along with lots of conversations and interviews
with the speakers.
To make reuse and sharing easier, today we are launching
TensorFlow Hub.
This library of components is easily integrated into your
models. Again, this goes back to really
making things easy for you.
It is a library
with a focus on deep learning and neural networks, and a
rich collection of machine learning algorithms.
It includes items like regressions and
decision trees, commonly used for many structured-data
classification problems, and a broad collection of
state-of-the-art tools for statistics and
Bayesian analysis.
You can check out the blog post for details.
As I mentioned earlier, one of the big key focus points for us
is to make TensorFlow easy to use. And we have been pushing
on simpler APIs, and making them more intuitive. At the lowest
level, our focus is to consolidate a lot of the APIs we
have and make it easier to build these models and train them.
At this level the TensorFlow APIs are really
flexible and let users build anything they want to,
but these same APIs are becoming easier to use.
TensorFlow contains a full
implementation of Keras. It offers lots of layers, and ways to
train them as well.
Keras works with both graph and eager execution. For distributed
execution, we provide
estimators, so you can take models and distribute them
across machines.
You can also get estimators from Keras models.
And finally, we provide premade estimators: a library of
ready-to-go implementations of common machine learning algorithms.
So, let's take a look at how this works. First, you
often start by defining your model. Keras gives a
nice and easy way to define your model; shown here is a
convolutional model in just a few lines.
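The speaker's slide code isn't in the transcript, so here is a minimal sketch of the idea: a small convolutional model defined in a few lines with the Keras layers bundled in TensorFlow. The exact layer sizes are illustrative assumptions, not the slide's.

```python
import tensorflow as tf

# Sketch of a small convolutional model defined with tf.keras.
# Layer widths and the 28x28x1 input shape are assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),       # downsample spatial dims
    tf.keras.layers.Flatten(),            # to a single feature vector
    tf.keras.layers.Dense(10, activation="softmax"),  # 10-class output
])
```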
Now, once you've defined that, often
you want to do some input processing.
We have the tf.data API, introduced in TensorFlow 1.4,
that makes it easy to process inputs and lets us do lots of
optimizations behind the scenes. And you will see a lot more
detail on this later today as well. Once you have those, the
model and the input data, you can put them
together by iterating over the dataset, computing gradients
and updating the parameters yourself.
You need just a few lines to put these together.
And you can use your debugger to step through that and resolve problems
as well.
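The loop described above can be sketched roughly as follows, with assumed toy data: iterate over a tf.data dataset, record the forward pass with a gradient tape (eager execution), and apply the gradients yourself.

```python
import numpy as np
import tensorflow as tf

# Assumed toy regression data, standing in for a real input pipeline.
xs = np.random.rand(64, 4).astype("float32")
ys = np.random.rand(64, 1).astype("float32")
dataset = tf.data.Dataset.from_tensor_slices((xs, ys)).batch(16)

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for batch_x, batch_y in dataset:            # iterate over the dataset
    with tf.GradientTape() as tape:         # record ops for differentiation
        loss = tf.reduce_mean(tf.square(model(batch_x) - batch_y))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```

Because this runs eagerly, an ordinary Python debugger can step through each batch, which is the debuggability the speaker refers to.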
And, of course, you can do it in even fewer lines by using the
pre-built training loops we have in Keras.
In this case, it executes the model as a graph with all the
optimizations that come with it. This is great for a single
machine or a single device.
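For comparison, the "even fewer lines" version with Keras' built-in loop looks roughly like this (same assumed toy data as before):

```python
import numpy as np
import tensorflow as tf

# Assumed toy data; compile + fit replaces the manual gradient loop.
xs = np.random.rand(64, 4).astype("float32")
ys = np.random.rand(64, 1).astype("float32")

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")
history = model.fit(xs, ys, batch_size=16, epochs=1, verbose=0)
```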
Now, often, given the heavy computation needs of
deep learning or machine learning, we
want to use more than one accelerator. For that, we have estimators.
With the same datasets that you had, you
can build an estimator and really use
that to train across a cluster or across multiple devices on a single
machine. That's great. But why not use a cloud cluster?
Why not use a single big box if you can do it faster?
Cloud TPUs are used for training ML models at scale. And the focus
is to take everything you have been doing and build a
TPUEstimator to allow you to scale the same model.
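The Estimator and TPUEstimator APIs from this talk have since been folded into `tf.distribute` in TensorFlow 2.x; as a sketch of the same idea, one model definition is reused unchanged under whatever distribution strategy (single device, multi-GPU, TPU) is available:

```python
import tensorflow as tf

# Modern-TF sketch of the scaling idea: create the model under a
# strategy scope; the default strategy is a single-device no-op,
# but the same code works under MirroredStrategy or TPUStrategy.
strategy = tf.distribute.get_strategy()
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    model.compile(optimizer="sgd", loss="mse")
```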
And finally, once you have trained
the model, use that one line at the
bottom for the deployment itself. Deployment is
important; you often do that in data centers. But more and more
we are seeing the need to deploy on phones and on other
devices as well.
And so, for that, we have TensorFlow Lite.
We have a custom format that's designed for devices:
lightweight and really fast to get started with. And then once
you have that format, you can include it in your
application, integrate
TensorFlow Lite with a few lines, and you have an
application that can do predictions and include ML, for whatever task
you want to perform.
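The workflow just described can be sketched end to end in Python: convert a (tiny, assumed) Keras model to the TensorFlow Lite flat format, then run one prediction with the interpreter you would embed in an application.

```python
import numpy as np
import tensorflow as tf

# Tiny assumed model; in practice you would convert your trained model.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Load the converted flatbuffer and run a single inference.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros((1, 4), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
```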
So, TensorFlow runs not just on many platforms, but in many
languages as well.
Today I'm excited to add Swift to the mix. And it brings a
fresh approach to machine learning.
Don't miss the talk by Chris Lattner this afternoon that