

  • [MUSIC PLAYING]

  • MEGAN KACHOLIA: Hi, everyone.

  • Welcome to the 2020 TensorFlow Developer Summit Livestream.

  • I'm Megan Kacholia, VP of Engineering for TensorFlow.

  • Thanks for tuning in to our fourth annual Developer

  • Summit and our first-ever virtual event.

  • With the recent developments of the coronavirus,

  • we're wishing all of you good health, safety, and well-being.

  • While we can't meet in person, we're

  • hoping the Dev Summit is more accessible than ever

  • to all of you.

  • We have a lot of great talks along

  • with exciting announcements, so let's get started.

  • When we first open-sourced TensorFlow,

  • our goal was to give everyone a platform

  • to build AI to solve real world problems.

  • I'd like to share an example of one of those people.

  • Irwin is a radiologist in the Philippines

  • and no stranger to bone fracture images like the ones

  • that you see here.

  • He's a self-proclaimed AI enthusiast

  • and wanted to learn how AI could be applied to radiology

  • but was discouraged because he didn't have a computer science

  • background.

  • But then he discovered TensorFlow.js,

  • which allowed him to build this machine learning application

  • that could classify bone fracture images.

  • Now, he hopes to inspire his fellow radiologists

  • to actively participate in building AI to,

  • ultimately, help their patients.

  • And Irwin is not alone.

  • TensorFlow has been downloaded millions of times

  • with new stories like Irwin's popping up every day.

  • And it's a testament to your hard work and contributions

  • to making TensorFlow what it is today.

  • So on behalf of the team, I want to say a big thank you

  • to everyone in our community.

  • Taking a look back, 2019 was an incredible year for TensorFlow.

  • We certainly accomplished a lot together.

  • We kicked off the year with our Dev Summit,

  • launched several new libraries and online educational courses,

  • hosted our first Google Summer of Code,

  • went to 11 different cities for the TensorFlow roadshow,

  • and hosted the first TensorFlow World last fall.

  • 2019 was also a very special year for TensorFlow

  • because we launched version 2.0.

  • It was an important milestone for the platform

  • because we looked at TensorFlow end to end

  • and asked ourselves, how can we make it easy to use?

  • Some of the changes were simplifying the API,

  • settling on Keras and eager execution,

  • and enabling deployment to more devices.

  • The community really took the changes to heart,

  • and we've been amazed by what the community has built.

  • Here are some great examples from winners of our 2.0 Devpost

  • Challenge, like Disaster Watch, a crisis mapping

  • platform that aggregates data and predicts

  • physical constraints caused by a natural disaster,

  • or DeepPavlov, an NLP library for dialog systems.

  • And, as always, you told us what

  • you liked about the latest version but,

  • more importantly, what you wanted to see improved.

  • Your feedback has been loud and clear.

  • You told us that building models is easier

  • but that performance can be improved.

  • You're also excited about the changes.

  • But migrating your 1.x system to 2.0 is hard.

  • We heard you.

  • And that's why we're excited to share the latest

  • version, TensorFlow 2.2.

  • We're building off of the momentum from 2.0 last year.

  • You've told us speed and performance are important.

  • That's why we've established a new baseline

  • so we can measure performance in a more structured way.

  • For people who have had trouble migrating to 2.x,

  • we're making the rest of the ecosystem

  • compatible so your favorite libraries

  • and models work with 2.x.

  • Finally, we're committed to the 2.x core library.

  • So we won't be making any major changes.

  • But the latest version is only part

  • of what we'd like to talk about today.

  • Today, we want to spend the time talking about the TensorFlow

  • ecosystem.

  • You've told us that a big reason why you love TensorFlow

  • is the ecosystem.

  • It's made up of libraries and extensions

  • to help you accomplish your end-to-end ML goals.

  • Whether it's to do cutting edge research

  • or apply ML in the real world, there is a tool for everyone.

  • If you're a researcher, the ecosystem

  • gives you control and flexibility

  • for experimentation.

  • For applied ML engineers or data scientists,

  • you get tools that help your models have real world impact.

  • Finally, there are libraries in the ecosystem that

  • can help create better AI experiences for your users,

  • no matter where they are.

  • All of this is underscored by what all of you, the community,

  • bring to the ecosystem and our common goal of building AI

  • responsibly.

  • We'll touch upon all of these areas today.

  • Let's start first with talking about the TensorFlow

  • ecosystem for research.

  • TensorFlow is being used to push the state of the art of machine

  • learning in many different subfields.

  • For example, natural language processing

  • is an area where we've seen TensorFlow really

  • help push the limits in model architecture.

  • The T5 model on the left uses the latest

  • in transfer learning to convert every language problem

  • into a text-to-text format.

  • The model has over 11 billion parameters

  • and was trained on the Colossal Clean Crawled Corpus (C4)

  • dataset.

  • Meanwhile, Meena, the conversational model

  • on the right, has over 2.6 billion parameters

  • and is flexible enough to respond sensibly

  • to conversational context.

  • Both of these models were built using TensorFlow.

  • And these are just a couple examples

  • of what TensorFlow is being used for in research.

  • There are hundreds of papers and posters

  • presented at NeurIPS last year that used TensorFlow.

  • We're really impressed with the research

  • produced with TensorFlow every day

  • at Google and outside of it.

  • And we're humbled that you trust TensorFlow

  • with your experiments, so thank you.

  • But we're always looking for ways

  • to make your experience better.

  • I want to highlight a few features

  • in the ecosystem that will help you in your experiments.

  • First, we've gotten a lot of positive feedback

  • from researchers on TensorBoard.dev,

  • a tool we launched last year that lets you upload and share

  • your experiment results by URL.

  • The URL allows for quickly visualizing hyperparameter

  • sweeps.

  • At NeurIPS, we were happy to see papers starting

  • to cite TensorBoard.dev URLs so that other researchers could

  • share experiment results.

  • Second, we're excited to introduce a new performance

  • profiler toolset in TensorBoard that

  • provides consistent monitoring of model performance.

  • We're hoping researchers will love the toolset because it

  • gives you a clear view of how your model is performing,

  • including in-depth debugging guidance.
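As a sketch of how such a trace might be captured programmatically (assuming the `tf.profiler.experimental` API introduced around TF 2.2; the exact entry points may vary by version, and the workload here is purely illustrative):

```python
import tensorflow as tf

# Capture a performance trace for a short burst of work.
# The resulting trace appears under the Profile tab when
# this log directory is opened in TensorBoard.
logdir = "logs/profile_demo"
tf.profiler.experimental.start(logdir)

x = tf.random.normal([256, 256])
for _ in range(10):
    x = tf.matmul(x, x)  # representative compute to profile

tf.profiler.experimental.stop()
```

Opening the log directory with `tensorboard --logdir logs/profile_demo` would then surface the step-time and op-level breakdowns the profiler provides.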

  • You'll get to hear more about TensorBoard.dev

  • and the new profiler from Gal and [? Schumann's ?] talks

  • later today.

  • Researchers have also told us that the changes in 2.x

  • make it easy for them to implement new ideas, changes

  • like eager execution in the core.

  • It supports numpy arrays directly,

  • just like all the packages in the PyData

  • ecosystem you know and love.
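A minimal illustration of that NumPy interop under eager execution (a sketch, not code from the talk):

```python
import numpy as np
import tensorflow as tf

# With eager execution (the default in TF 2.x), ops run
# immediately and accept NumPy arrays directly.
a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = tf.matmul(a, a)  # NumPy array in, Tensor out

# Convert back to NumPy just as easily.
result = b.numpy()   # [[ 7. 10.]
                     #  [15. 22.]]
```

No sessions or placeholders are needed; the result is available as soon as the op runs.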

  • The tf.data pipelines we rolled out are all reusable.

  • Make sure you don't miss Rohan's tf.data talk today

  • for the latest updates.
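A small reusable pipeline might look like the following (a hypothetical sketch; `make_pipeline` and its arguments are illustrative, not from the talk):

```python
import tensorflow as tf

# A reusable input pipeline: the same function can feed
# training, evaluation, or manual iteration.
def make_pipeline(values, batch_size=2):
    ds = tf.data.Dataset.from_tensor_slices(values)
    ds = ds.map(lambda x: x * 2)  # per-element transform
    ds = ds.batch(batch_size)
    # Overlap preprocessing with consumption.
    return ds.prefetch(tf.data.experimental.AUTOTUNE)

for batch in make_pipeline([1, 2, 3, 4]):
    print(batch.numpy())  # [2 4] then [6 8]
```

Because the pipeline is an ordinary object, it can be passed to `model.fit` or iterated in a plain Python loop.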

  • And TensorFlow Datasets are ready right out of the box.

  • Many of the datasets you'll find

  • were added by our Google Summer of Code students.

  • So I want to thank all of them for contributing.

  • This is a great example of how the TF ecosystem is

  • powered by the community.

  • Finally, I want to round out the TensorFlow ecosystem

  • for research by highlighting some

  • of the add-ons and extensions that researchers love.