
some examples. The first thing you need is data. You may want to validate results or test ideas on a common public dataset. TensorFlow Datasets includes a large and rapidly growing collection of datasets you can get started with easily, and combined with tf.data it is simple to wrap your own data too. Here is a small sample of the datasets available, and all of these and many more are included there.
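A minimal sketch of wrapping your own in-memory data with tf.data (the array shapes and pipeline settings here are illustrative; a public dataset would instead come from `tfds.load`):

```python
import numpy as np
import tensorflow as tf

# Wrap in-memory NumPy arrays (a stand-in for your own data) in a tf.data pipeline.
features = np.random.rand(100, 4).astype("float32")
labels = np.random.randint(0, 2, size=(100,))

dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(buffer_size=100)  # shuffle the whole in-memory set
    .batch(32)                 # yield batches of 32 examples
)

for batch_features, batch_labels in dataset.take(1):
    print(batch_features.shape)  # (32, 4)
```

The same `dataset` object can then be passed straight to Keras training APIs.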

Then with Keras, you can express models just the way you are used to thinking about them. Standard training works with model.fit, and model.evaluate as well.
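For example, a tiny model trained and evaluated with the standard calls (the architecture and random data are illustrative only):

```python
import numpy as np
import tensorflow as tf

# A tiny binary classifier, expressed the way you would think about it.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x = np.random.rand(64, 4).astype("float32")
y = np.random.randint(0, 2, size=(64,))

model.fit(x, y, epochs=1, batch_size=16, verbose=0)  # standard training
loss, accuracy = model.evaluate(x, y, verbose=0)     # standard evaluation
```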

Since deep learning models are often computationally expensive, you may want to try scaling this across more than one device.
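A sketch of scaling with `tf.distribute.MirroredStrategy` (it also works on a single CPU or GPU, so this runs anywhere; the model is a placeholder):

```python
import tensorflow as tf

# MirroredStrategy replicates the model across all visible devices.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Variables created inside the scope are mirrored on every replica.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="sgd", loss="mse")
# model.fit(...) would now run the usual training loop across all replicas.
```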

Starting from a pre-trained model or component also works well to reduce some of this computational cost. TensorFlow Hub provides a large collection of pretrained components you can include in your model and fine-tune for your dataset.
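One way to sketch this with a pretrained Keras Applications backbone rather than a Hub module (in practice you would pass weights="imagenet" to get the pretrained weights; weights=None is used here only so the sketch runs offline):

```python
import tensorflow as tf

# Start from a pretrained backbone and fine-tune a new head on your data.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None, pooling="avg")
base.trainable = False  # freeze the pretrained component to cut training cost

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),  # new task-specific head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Only the new head's weights are trained; unfreezing `base` later enables full fine-tuning.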

Keras comes with everything you might need for a typical training job.

Sometimes you need a bit more control, for example when you are exploring new kinds of algorithms. Let's say you wanted to build a custom encoder for machine translation; here is how you could do this by subclassing the model.
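A sketch of such an encoder built by subclassing tf.keras.Model (the vocabulary size, dimensions and the GRU choice are illustrative):

```python
import tensorflow as tf

class Encoder(tf.keras.Model):
    """A custom machine-translation encoder defined by subclassing."""

    def __init__(self, vocab_size=1000, embedding_dim=64, enc_units=128):
        super().__init__()
        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
        self.gru = tf.keras.layers.GRU(
            enc_units, return_sequences=True, return_state=True)

    def call(self, tokens):
        x = self.embedding(tokens)   # (batch, time, embedding_dim)
        output, state = self.gru(x)  # per-step outputs + final state
        return output, state

encoder = Encoder()
sample = tf.zeros((8, 12), dtype=tf.int32)  # batch of 8 sequences, length 12
output, state = encoder(sample)
print(output.shape, state.shape)
```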

You can even customize the training loop to get full control over the gradients and the optimization process.
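A minimal custom training loop using tf.GradientTape (model, loss and data are placeholders):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
loss_fn = tf.keras.losses.MeanSquaredError()

x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")

for step in range(5):
    with tf.GradientTape() as tape:
        predictions = model(x, training=True)
        loss = loss_fn(y, predictions)
    # Full control: here you could inspect, clip or transform the gradients.
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
```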

While training models, whether packaged with Keras or more complex ones, it is often valuable to understand the progress and even analyze the model in detail. TensorBoard provides a lot of visualizations to help with this, and comes with full integration with Colab and other Jupyter notebooks, allowing you to see the same visuals there. All of these features are available in TensorFlow 2.0, and I am really excited to announce that our alpha release is available for you as of today.
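Hooking Keras training up to TensorBoard is one callback away; a sketch (the log directory path is illustrative):

```python
import numpy as np
import tensorflow as tf

# Write training logs that TensorBoard (or the %tensorboard magic
# in Colab/Jupyter) can visualize from the same directory.
logdir = "logs/demo"  # illustrative path
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir=logdir)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="sgd", loss="mse")

x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0, callbacks=[tensorboard_cb])
# In a notebook: %load_ext tensorboard  then  %tensorboard --logdir logs
```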

[Applause] Many of you in the room and across the world really helped with lots of work to make this possible. I would really like to take this moment to thank you all. Please give yourselves a round of applause. We really couldn't have done this without you.

In addition to all the great improvements we talked about, this release comes with a converter script and a compatibility module to give you access to the 1.x APIs. We are working toward a full release over the next quarter.

There is a lot of work going on to make TensorFlow 2.0 work well for you. You can track the progress and provide feedback on the TensorFlow GitHub projects page.

You asked for better documentation, and we have worked to streamline our docs for APIs, guides and tutorials. All of this material will be available today on the newly redesigned TensorFlow.org website, where you will find examples, documentation and tools to get started. We are very excited about these changes and what's to come. To tell you more about improvements in TensorFlow for research and production, I would like to welcome Megan Kacholia on stage. Thank you.

Thank you. Thanks, Rajat. TensorFlow has always been a platform for research to production. We just saw how TensorFlow's high-level APIs make it easy to get started and build models; now let's talk about how it improves experimentation for research and lets you take models from research to production all the way through. We can see this in paper publications, shown over the past few years in this chart.

Powerful experimentation really needs flexibility, and this begins with eager execution: in TensorFlow 2.0 every Python command is immediately executed. This means you can write your code in the style you are used to, without having to use session.run. This makes a big difference in the realm of debugging.
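A quick sketch of what eager execution means in practice: operations run immediately and return concrete values you can print and inspect, with no session.run step.

```python
import tensorflow as tf

# In TensorFlow 2.0, operations execute immediately (eager execution).
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.matmul(a, a)       # computed right away, not deferred to a session
print(b)                  # the actual values are available immediately
print(b.numpy()[0, 0])    # interoperates with NumPy for easy debugging; prints 7.0
```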

As you iterate, you will want to distribute your code onto GPUs and TPUs, and we have provided tf.function for turning your eager code into a graph, function by function. You get Python control flow, asserts and even print, but can convert to a graph any time you need to, including when you are ready to move your model into production. Even with this, you will continue to get great debugging.
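A sketch of tf.function in action: the decorated function keeps its Python control flow (converted automatically), but runs as a graph.

```python
import tensorflow as tf

@tf.function  # traces this eager-style Python into a graph, function by function
def scaled_sum(x):
    if tf.reduce_sum(x) > 0:  # data-dependent Python control flow, auto-converted
        return x * 2.0
    return x

result = scaled_sum(tf.constant([1.0, 2.0]))
print(result)  # called like a normal Python function, executed as a graph
```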

Debuggability is great not just in eager execution; we have made improvements in tf.function and graphs too. Because of the mismatched inputs here, you get an error. As you can see, we give the user information about the file and line number where the error occurs. We have made the error messages concise, easy to understand and actionable. We hope you enjoy these changes and that they make it easier to make progress with your models.

Performance is another area we know researchers, as well as all users for that matter, care about. We have continued improving core performance in TensorFlow. Since last year, we have sped up training on eight NVIDIA Tesla V100s by almost double. With Intel MKL acceleration we have gotten inference speeds up by almost three times. Performance will continue to be a focus of TensorFlow 2.0 and a core part of our progress to the final release.

TensorFlow also provides flexibility through many add-on libraries that expand and extend TensorFlow. Some are extensions to make certain problems easier, like tf.text with Unicode. Others help us explore how we can make machine learning models safer, like tf.privacy. You will hear new announcements on reinforcement learning, and tomorrow we will discuss the new tf.federated library.

It is being applied to real-world applications as well. Here are a few examples from researchers at Google, where we see it applied to areas like data centers, making them more efficient; apps like Google Maps, the one in the middle, which has a new feature called global localization that combines Street View; and devices like the Google Pixel, which use machine learning to improve depth estimation to create better portrait-mode photos like the ones shown here.

In order to make these real-world applications a reality, you must be able to take models from research and prototyping to launch in production. This has been a core strength and focus for TensorFlow. Using TensorFlow you can deploy models on a number of platforms, shown here, and models end up in a lot of places, so we want to make sure TensorFlow works across all of these: servers, Cloud, mobile, edge devices, JavaScript and a number of other platforms. We have products for these. TensorFlow Extended is the end-to-end platform.

In orange, shown here, you can see the libraries we have open-sourced so far. We are taking a step further and providing components built from these libraries that make up an end-to-end platform. These are the same components used internally in thousands of production systems powering Google's most important products. Components are only part of the story: 2019 is the year we are putting it all together and providing you with an integrated end-to-end platform.

You can bring your own orchestrator: here it is Airflow, or even raw Kubernetes. No matter what orchestrator you choose, the components integrate with the metadata store. This enables experimentation, experiment tracking and model comparison, things I am sure you will be excited about and that will help you as you iterate. We have an end-to-end talk coming up from Clemens and his team, and they will take you on a complete tour of TensorFlow Extended to solve a real problem.

TensorFlow Lite is our solution for running models on mobile and IoT hardware.

On-device models can be more responsive and keep user data on device for privacy. Google and partners like iQIYI are using it for all sorts of applications. TensorFlow Lite is all about performance.

You can deploy models to CPU, GPU, and even the Edge TPU. By using the latest techniques, adding support for OpenGL and Metal on GPUs, and tuning performance on Edge TPUs, we are constantly pushing the limits of what is possible. You should expect even greater enhancements in the year ahead. We will hear details from Raziel and colleagues coming up later.
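The deployment step above can be sketched with the TensorFlow Lite converter; a minimal example, assuming a trained Keras model (the model here is a placeholder):

```python
import tensorflow as tf

# A placeholder model; in practice this would be your trained model.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Convert it to the TensorFlow Lite flatbuffer format for mobile/edge devices.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()  # serialized .tflite model as bytes

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting file is what the TensorFlow Lite interpreter loads on device.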

JavaScript is the number one programming language in the world