[MUSIC PLAYING]
CHRIS MATTMANN: Hey, everybody, this is Chris Mattmann.
And I'm not able to attend the TensorFlow Dev
Summit in person.
So I'm giving my talk remotely.
My talk is about TensorFlow and Machine Learning
from the Trenches.
I'm the deputy CTO at the NASA Jet Propulsion Laboratory.
And I'm going to talk about our experience using TensorFlow
in the Innovation Experience Center at JPL.
What's JPL?
JPL is a federally-funded research and development
center.
It's NASA's only FFRDC.
They call these the National Labs.
Its goal is to do first-of-a-kind missions
in autonomy, technology development for space,
in-situ and on-the-ground remote sensing of the Earth,
and various other nationally critical functions.
It's nestled there in the beautiful mountains of La
Cañada Flintridge.
We have about 6,000 employees and a business base
of about $2.6 billion.
We have a pretty large facility, about 167 acres.
At JPL, I am the lead for the Innovation Experience Center.
I'm the deputy chief technology and innovation officer.
And what does our Innovation Experience Center look like?
Our recipe for it is to find the most difficult space,
the space that looks the ugliest, and make it our own--
take it, gut it, and have the actual engineers and data
scientists put it back together in the way that they
want: sit-stand desks, follow-the-sun sunshades,
IoT devices that frost and unfrost the conference
room's smart glass for privacy,
and so on and so forth.
So that's our team.
We're working in all of these areas.
And we're really excited to be doing machine
learning with TensorFlow.
In particular, we're excited in a few different areas.
Our organization is responsible for using TensorFlow
in the following ways-- the first is the M2020 Rover
that you see right there.
It's now named Perseverance.
In that rover, in that clean room,
we measure particulates in the air
to determine if we're adding any sort of biocontamination.
Because we don't want to do that when
we send this to another planet.
If we discover life, we want to have actually
discovered it, not brought it with us.
So we have small, commodity IoT (internet of things)
sensors that are measuring particulates,
increasing our ability to do that and increasing
the density of the measurements that we have.
And we're doing predictions using TensorFlow machine
learning to anticipate the next measurements and any
upcoming contamination, and to intervene if necessary.
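As a rough sketch of that idea-- not JPL's actual pipeline-- here is what a minimal next-reading forecast over a window of recent sensor values could look like in tf.keras. The window size, model shape, and the synthetic data are all assumptions for illustration:

```python
# Hypothetical sketch, not JPL's pipeline: forecast the next particulate
# reading from a sliding window of recent IoT sensor values in tf.keras.
import numpy as np
import tensorflow as tf

WINDOW = 24  # assumed: number of past readings per training example

# Synthetic particulate counts standing in for the clean-room sensor feed.
readings = np.random.rand(1000).astype("float32")
x = np.stack([readings[i:i + WINDOW] for i in range(len(readings) - WINDOW)])
y = readings[WINDOW:]

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(WINDOW,)),
    tf.keras.layers.Dense(1),  # predicted next particulate count
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=5, verbose=0)

# An operator could compare model.predict(latest_window) against a
# contamination threshold and intervene when the forecast crosses it.
```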
In the bottom right, you can see our people counter.
That's another IoT device.
It uses TensorFlow for object detection, facial
recognition, and so forth to count people's heads
as they go in and out of our tents at events,
like our IT Expo, so that we can tell people
when the right time is to attend these events
and they're not overcrowded.
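The talk doesn't name the detector the people counter uses, so treat this as a hedged sketch: an off-the-shelf TF Hub object-detection model counting "person" boxes in a frame. The model handle, COCO class id, and score threshold are assumptions:

```python
# Hedged sketch: counting people in a camera frame with an off-the-shelf
# TF Hub detector. The model handle, class id, and threshold below are
# assumptions; the talk doesn't say which detector JPL's counter uses.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

def count_people(frame_rgb, min_score=0.5):
    """frame_rgb: uint8 array of shape (height, width, 3)."""
    batch = tf.expand_dims(tf.convert_to_tensor(frame_rgb), 0)
    result = detector(batch)
    classes = result["detection_classes"][0].numpy()
    scores = result["detection_scores"][0].numpy()
    return int(np.sum((classes == 1) & (scores >= min_score)))  # 1 = person

# Example: a blank test frame should yield a count of zero.
print(count_people(np.zeros((480, 640, 3), dtype=np.uint8)))
```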
Besides that, we're not just doing institutional things
with TensorFlow.
We're looking beyond that.
Today, our Mars rovers are currently
running on what's called the RAD750 processor.
That's a radiation-hardened PowerPC 750 processor--
basically, the amount of compute
that we had on an iPhone 1.
Tomorrow, we'll have high-performance spaceflight
computing and the ability to use things
like Snapdragons from Qualcomm.
So a real GPU-- a deep-learning-capable chip--
so that we can do actual computing onboard.
And if we could do high-performance spaceflight
computing onboard, we could do really cool things,
like make the rovers intelligent,
make our rovers smart, do things like drive-by science, which
you see there highlighted on the right,
as one of our three ongoing tasks and initiatives to use
and leverage high-performance spaceflight computing.
Can we make rovers smarter?
Absolutely, we can.
In particular, we can take models
like terrain classifiers, which we've built with TensorFlow.
We call it SPOC-- there's a "Star Trek" theme here.
Our terrain classifier, SPOC, is a CNN.
It's a convolutional neural network using TensorFlow to do
terrain classification-- ripples, smooth,
smooth with rocks-- to figure out where the rover should
drive and where it shouldn't.
We test this in our Arroyo Seco, which is right by JPL,
using our test Athena Rover.
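The SPOC architecture itself isn't spelled out in the talk, so here is a minimal stand-in for the idea: a small Keras CNN that maps terrain image patches to classes like ripples, smooth, and smooth-with-rocks. The layer sizes and patch dimensions are illustrative assumptions:

```python
# Minimal stand-in for the SPOC idea (the real architecture isn't given
# in the talk): a small Keras CNN classifying terrain image patches.
import tensorflow as tf

NUM_CLASSES = 3  # e.g. ripples, smooth, smooth-with-rocks

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu",
                           input_shape=(128, 128, 3)),  # assumed patch size
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(terrain_patches, terrain_labels, ...) on labeled imagery
# would then yield drivable-versus-hazardous terrain predictions.
```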
Another TensorFlow-based model that we've been using
and leveraging is the Google Show and Tell model.
It combines a convolutional neural network with an
LSTM-- a recurrent neural network with long short-term
memory-- to figure out the labels for a particular
image for the rover and then learn a sentence
description from those labels, so that scientists
can review them.
That way the rover, when it's on Mars, instead of
sending back 200 images a day to plan what to do the
next day, can send back millions of scientifically
validated image captions and increase our density
of observation.
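In the spirit of that CNN-plus-LSTM design, here is a hedged sketch of an image-captioning model in Keras: a CNN encodes the image, and an LSTM seeded with that encoding predicts the caption one token at a time. The vocabulary size, dimensions, and the choice of MobileNetV2 as the encoder are assumptions, not the published Show and Tell configuration:

```python
# Hedged sketch of the CNN-plus-LSTM captioning idea; sizes and the
# MobileNetV2 encoder are illustrative assumptions. Weights are omitted
# to keep the sketch self-contained; in practice you'd use pretrained
# ImageNet weights for the encoder.
import tensorflow as tf

VOCAB, EMBED, UNITS, MAXLEN = 5000, 256, 512, 20  # assumed dimensions

# CNN encoder: image features pooled to a single vector.
cnn = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                        weights=None)
cnn.trainable = False
image_in = tf.keras.Input(shape=(224, 224, 3))
feats = cnn(image_in)

# LSTM decoder: previous caption tokens in, next-token distribution out,
# with the image features projected into the initial LSTM state.
tokens_in = tf.keras.Input(shape=(MAXLEN,))
emb = tf.keras.layers.Embedding(VOCAB, EMBED, mask_zero=True)(tokens_in)
h0 = tf.keras.layers.Dense(UNITS)(feats)
c0 = tf.keras.layers.Dense(UNITS)(feats)
lstm_out = tf.keras.layers.LSTM(UNITS, return_sequences=True)(
    emb, initial_state=[h0, c0])
next_word = tf.keras.layers.Dense(VOCAB, activation="softmax")(lstm_out)

model = tf.keras.Model([image_in, tokens_in], next_word)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```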
In terms of our terrain classifier, just
some examples of that--
SPOC looks at the geometric features and so forth.
And it's actually capable of recognizing terrain types
from images.
So this is really important, both for Mars surface
operations and for planning where we should do
future Mars missions and landings.
Beyond that, one of the things we've been really
challenged with-- and it's been a big area of research
for us-- is taking TensorFlow models, porting them
to TensorFlow Lite, and moving them onto exotic
hardware-- some of which isn't even physically here,
and we only have emulators for it, like the
high-performance spaceflight computing emulator.
We've been looking at various TensorFlow models--
like DeepLab and MobileNetV2--
figuring out how to port them into a TensorFlow Lite
quantized model or a TensorFlow Lite floating-point
model, and measuring the computation time for each.
One of our key observations here is that the
MobileNetV2 tests were conducted on smaller imagery.
And MobileNetV2 actually performed the fastest
of any of the models that we tested
when we used TensorFlow Lite in a quantized fashion.
So we've got ongoing research.
And we're working on porting these models
into these TF-Lite environments.
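The conversion step itself is straightforward to sketch with the standard TensorFlow Lite converter: here a stand-in Keras model is exported both as a floating-point .tflite file and as a post-training-quantized one. The dummy representative dataset is an assumption; real calibration data and on-target timing would replace it:

```python
# Sketch of the porting step: export a stand-in Keras model both as a
# float TF-Lite file and as a post-training-quantized one. The dummy
# representative dataset is an assumption for illustration only.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights=None)  # stand-in model

# Plain floating-point conversion.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
open("model_float.tflite", "wb").write(converter.convert())

# Post-training quantization: smaller and typically faster on
# constrained hardware such as a spaceflight-computing emulator.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_data():
    for _ in range(10):  # a handful of calibration samples
        yield [np.random.rand(1, 224, 224, 3).astype("float32")]

converter.representative_dataset = representative_data
open("model_quant.tflite", "wb").write(converter.convert())
```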
In particular, if we have drive-by science-- if our
rovers are smarter-- and you look on the right,
we don't want that unnoticed-green-monster problem,
where the rover simply doesn't have enough power,
light time, or bandwidth, and it misses recognizing
something that we actually really wanted to see,
like our little buddy right there in green.
And one of the challenges with that
is that the rover has an eight-minute round-trip light
time between Earth and Mars to, basically,
send out a communication and hear back.
So it's got to do a lot of science and things onboard.
It's got to recognize things, even
without human intervention.
Additionally, our geologists have a headache
with too many images and so forth.
So having the ability to have the rover be smart, do
drive-by science onboard, and just send back,
again, those textual captions and descriptions of images
is really key.
Because then it can get beyond only
being able to send 200 images per day
and could actually send millions of captions.
In particular, I've been collecting and capturing
all of the work that we're doing on TensorFlow
in a book.
I'm writing the second edition of the "Machine Learning
with TensorFlow" book--
"Machine Learning with TensorFlow, Second Edition."
It's currently in the Manning Early Access Program, or MEAP.
Please check out the link right there.
And I would love for you to, basically,
ask me any questions.
There's an online developer forum for it.
Please let me know.
And I'd be happy to get back to you.
And you know what?
I'm not there physically.
But you can find me online on Twitter
at @ChrisMattmann.
And thank you for giving me the opportunity to present today.
[MUSIC PLAYING]