
  • ♪ (upbeat music) ♪

  • Good morning everyone, I'm Alina,

  • program manager for TensorFlow.

  • (applause)

  • Thank you.

  • Welcome to the 2019 TensorFlow Developer Summit.

  • This is our third annual and largest developer summit to date,

  • and I'm so happy to have all of you here

  • both right here in the room and on the Livestream.

  • Welcome.

  • So I'm just curious by a show of hands,

  • who traveled a little bit further, maybe, to get here?

  • Europe?

  • Asia?

  • Africa?

  • As far as Australia?

  • Woo, awesome.

  • Welcome to all of you.

  • We have a lot of great talks ahead, some exciting announcements

  • and cool demos, so let's get going.

  • We are living in a formative moment of history right now,

  • where machine learning is experiencing an unprecedented revolution.

  • The way we fundamentally think about and interact with computer systems

  • has inherently changed

  • due to the breakthroughs in the field of AI,

  • and this is due to three major factors.

  • First, we have lots more compute. Specially designed ML accelerators

  • like these TPUs

  • let you train models faster than ever before.

  • Secondly, we have breakthroughs in the field of machine learning.

  • There are novel algorithms created every month,

  • like BERT, an innovative approach to natural language processing

  • which lets anyone around the world

  • train their own state-of-the-art question-answering system.

  • And finally, we have lots and lots of data.

  • We're seeing new waves of data sets come from all kinds of disciplines.

  • For example, the new Open Images Extended dataset.

  • This is a collection of over 478,000 images

  • that volunteers have added

  • with the pursuit of inclusivity and diversity.

  • So, all three of these are basically changing

  • how we solve challenging, real-world problems,

  • and it's really cool to see that TensorFlow is the platform

  • that's powering this machine learning revolution.

  • It's allowing developers, businesses and researchers around the world

  • to benefit from intelligent applications.

  • And we've been really amazed

  • by what the community has built with TensorFlow.

  • Developers have been using TensorFlow to solve problems

  • in their local communities.

  • So I don't know if any of you were in the Bay Area

  • during the tragic Paradise fire,

  • but one of the consequences was

  • that air quality was really bad.

  • It was in the mid-to-high 200s on the Air Quality Index.

  • And as difficult as that was for us, in Delhi, India, during winter

  • the air quality can get up to about the 400s on the Air Quality Index,

  • and this is considered very dangerous.

  • So pollution sensors can help gauge air quality

  • but they're very expensive to deploy at scale.

  • So a group of students in Delhi built image classifiers in TensorFlow

  • and used those to build an app called Air Cognizer,

  • and what it does is, just by using the images on a smartphone,

  • it gives an accurate estimation of the air quality.

  • Businesses are also fundamentally

  • improving their products and services built with TensorFlow,

  • for example, Twitter strives to keep its global users informed

  • with relevant and healthy content.

  • But this can be hard,

  • when the users follow hundreds or even thousands of people,

  • so to solve this, Twitter launched Ranked Timeline, an ML-powered feed

  • which puts the most relevant tweets at the top of the timeline,

  • ensuring users never miss their best and most relevant content.

  • And by using TensorFlow's ecosystem of tools

  • like TensorFlow Hub, TensorBoard

  • and TensorFlow Model Analysis,

  • Twitter was able to reduce both training and model iteration time

  • as well as increase the timeline quality and engagement for users.

  • Specific industries are also being very much transformed by ML.

  • GE Healthcare, for example

  • is using TensorFlow to improve MRI imaging.

  • These TensorFlow models run in real time on MRI scanners

  • and can actually detect the orientation of the patient inside the scanner.

  • And this is really great

  • because not only does this help the diagnosis,

  • but also lowers the errors and exam time.

  • But also, what's really cool is it basically expands this technology

  • to many many more people around the world.

  • TensorFlow also powers bleeding-edge research.

  • A team of scientists, researchers and engineers

  • at NERSC, Oak Ridge National Laboratory and NVIDIA

  • recently won the Gordon Bell Prize

  • for applying deep learning

  • to study the effects of extreme weather patterns

  • using high-performance computing.

  • They built and scaled a neural network using TensorFlow,

  • of course, to run on Summit, the world's fastest supercomputer.

  • They achieved a peak and sustained throughput

  • of 1.13 exaflops in FP16,

  • which is equivalent to more than a quintillion computations per second.
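For scale, an exaflop is 10^18 floating-point operations per second, and a quintillion (short scale) is also 10^18, so the comparison checks out directly:

```python
# An exaflop is 10**18 floating-point operations per second, and a
# quintillion (short scale) is also 10**18, so a sustained throughput
# of 1.13 exaflops is indeed more than a quintillion computations
# per second.
EXAFLOP = 10**18
QUINTILLION = 10**18

peak_throughput = 1.13 * EXAFLOP
print(peak_throughput > QUINTILLION)  # True
```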

  • I think I need to pause for a second because that is ridiculously fast.

  • Right?

  • In addition to these awesome examples,

  • there are thousands and thousands of people all over the world

  • doing amazing work using TensorFlow,

  • and the power and impact of TensorFlow would not be what it is

  • without all of you, thank you.

  • It's with your help and interest

  • that TensorFlow has become the most widely adopted ML framework in the world.

  • And right here, I'd like to show the latest map of GitHub stars

  • from users who self-identified their location.

  • I'm sure many of the dots on this map are right here

  • in the room and on the Livestream,

  • so I just want to say thank you one more time.

  • And this growth has been absolutely amazing.

  • TensorFlow has been downloaded over 41 million times,

  • and has over 1800 contributors worldwide.

  • Last November, we celebrated TensorFlow's third birthday

  • by taking a look back at the different components

  • that we've added throughout the years.

  • But today, we'd like to talk

  • about how TensorFlow has matured as a platform

  • to become an entire end-to-end ecosystem.

  • And TensorFlow 2.0 is the start of a new era,

  • and we're committed and focused on making it

  • the best ML platform for all our users.

  • To talk more about TensorFlow 2.0

  • I'd like to introduce Rajat Monga,

  • Engineering Director of TensorFlow on stage.

  • Thank you.

  • (applause)

  • Thank you, Alina.

  • Hello, everyone, I'm Rajat.

  • I am an engineer at TensorFlow

  • and have been involved with this since the very beginning.

  • It's been great to see what we've been up to

  • over the last few years.

  • All the amazing growth

  • and all the amazing things that you've done with it.

  • It's also been great to hear from you.

  • You told us what you like about TensorFlow

  • and equally importantly, what you would like to see improved

  • in TensorFlow.

  • Your feedback has been loud and clear.

  • You asked for simpler, more intuitive APIs and developer experiences.

  • You pointed out areas of redundancy and complexity,

  • and you asked for better documentation and examples,

  • and this is exactly what we've been focusing on with TensorFlow 2.0.

  • To make it easy, we focused on Keras

  • as a single set of APIs,

  • and combined it with eager execution for the simplicity of Python.

  • With flexibility to try the craziest ideas

  • and ability to go beyond an exaFLOP

  • TensorFlow is more powerful than ever.

  • With the same robustness and performance

  • you expect in production, battle-tested at Google.

  • Let's start with the overall architecture for TensorFlow.

  • You may be familiar with this high-level architecture.

  • There have been lots of components and features

  • we've added throughout the years

  • to help workloads go from training to deployment.

  • With TensorFlow 2.0, we're really making sure

  • that these components work better together.

  • Here's how these powerful API components fit together

  • for the entire training workflow.

  • With tf.data for data ingestion and transformation,

  • Keras and premade estimators for model building,

  • training with eager execution and graphs,

  • and finally packaging for deployment with SavedModel.

  • Let's take a look at some examples.

  • The first thing you need is data.

  • Often, you may want to validate results or test your new ideas

  • on common public datasets.

  • TensorFlow Datasets includes

  • a large and rapidly growing collection of public datasets

  • that you can get started with very easily.

  • And combined with tf.data,

  • it is simple to wrap your own data too.

  • Here is a small sample of the datasets that are available,

  • and all of these and many more are included there.
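As a rough sketch of this ingestion step, here is a small `tf.data` pipeline built from in-memory tensors; a TensorFlow Datasets call such as `tfds.load("mnist")` would hand back the same kind of `tf.data.Dataset`:

```python
import tensorflow as tf

# Build an input pipeline with tf.data; a TensorFlow Datasets call
# like tfds.load("mnist") would return the same kind of dataset.
features = tf.random.uniform((32, 4))
labels = tf.random.uniform((32,), maxval=2, dtype=tf.int32)

dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(buffer_size=32)     # shuffle the examples
    .batch(8)                    # group into mini-batches
    .prefetch(tf.data.AUTOTUNE)  # overlap input work with training
)

for batch_features, batch_labels in dataset:
    print(batch_features.shape)  # (8, 4), for each of the 4 batches
```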

  • Then with Keras, you can express the model with layers,

  • just as you are used to thinking about it.

  • Standard training and evaluation is packaged as well,

  • with model.fit and model.evaluate.
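A minimal sketch of that packaged workflow, with arbitrary layer sizes and random stand-in data:

```python
import numpy as np
import tensorflow as tf

# Express the model as a stack of layers, then use the packaged
# training and evaluation loops. Layer sizes here are arbitrary.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in data: 64 examples, 4 features, 3 classes.
x = np.random.rand(64, 4).astype("float32")
y = np.random.randint(0, 3, size=(64,))

model.fit(x, y, epochs=2, batch_size=16, verbose=0)
loss, accuracy = model.evaluate(x, y, verbose=0)
```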

  • Since deep learning models are often computationally expensive,

  • you may want to try scaling this across more than one device.

  • TensorFlow comes pre-built with MirroredStrategy

  • that works with small additions to your code.
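The additions really are small: create the strategy, then build and compile the model inside its scope. A sketch (on a machine with one device it simply runs with a single replica):

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model across available devices;
# the only code changes are creating the strategy and building and
# compiling the model inside strategy.scope().
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")

x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)  # batches are split across replicas
```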

  • Starting from a pre-trained model or component

  • also works well to reduce some of this computational cost.

  • To make it easy,

  • TensorFlow Hub provides a large collection of pre-trained components

  • that you can include in your model

  • and even fine-tune for your specific dataset.
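A sketch of that reuse-and-fine-tune pattern. With TensorFlow Hub the frozen base would come from `hub.KerasLayer("<module URL>")`; here a locally built feature extractor stands in for the pre-trained component so the example runs without a download:

```python
import numpy as np
import tensorflow as tf

# Stand-in for a pre-trained component (with TensorFlow Hub this
# would be hub.KerasLayer("<module URL>", trainable=False)).
base = tf.keras.Sequential(
    [tf.keras.Input(shape=(8,)),
     tf.keras.layers.Dense(32, activation="relu")],
    name="pretrained_base",
)
base.trainable = False  # keep the "pre-trained" weights frozen

# Attach a new task-specific head and train only that.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = np.random.rand(32, 8).astype("float32")
y = np.random.randint(0, 2, size=(32,))
model.fit(x, y, epochs=1, verbose=0)  # only the head is updated
```

Fine-tuning would then be a matter of setting `base.trainable = True` and recompiling with a small learning rate.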

  • Keras and premade estimators offer high-level building blocks

  • in an easy-to-use package.

  • They come with everything you might need for typical training jobs.

  • But, sometimes you need a bit more control.

  • For example, when you're exploring new kinds of algorithms.

  • Let's say, you wanted to build a custom encoder for machine translation.

  • Here's how you might do this by subclassing a model.

  • Here, you can focus

  • on implementing the computational algorithm,

  • and let the framework take care of the rest.

  • And you could even customize the training loop

  • to get full control over the gradients and the optimization process.
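A sketch of both ideas together, with a toy architecture standing in for a real encoder: the computation lives in `call()`, and a hand-written loop uses `tf.GradientTape` for full control over gradients and updates:

```python
import tensorflow as tf

# A subclassed model: implement the computation in call() and let
# the framework track variables, saving, and so on. The architecture
# here is a toy stand-in for a real encoder.
class Encoder(tf.keras.Model):
    def __init__(self, units):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(units, activation="relu")
        self.out = tf.keras.layers.Dense(1)

    def call(self, inputs):
        return self.out(self.hidden(inputs))

model = Encoder(units=16)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

x = tf.random.uniform((32, 4))
y = tf.random.uniform((32, 1))

# A custom training loop: full control over gradients and updates.
for step in range(5):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - y))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```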

  • While training models,

  • whether packaged with Keras or more complex ones,

  • it's often valuable to understand the progress,

  • and even analyze the model in detail.

  • TensorBoard provides a lot of visualizations to help with this,

  • and now it comes with full integration

  • with Colab and other Jupyter notebooks,

  • allowing you to see the same visualizations

  • right from within your notebook.
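A minimal sketch of the logging side: attach the TensorBoard callback during training, and in Colab or Jupyter view the logs inline with the `%tensorboard` magic:

```python
import tempfile

import numpy as np
import tensorflow as tf

# Write training logs that TensorBoard can visualize; in Colab or a
# Jupyter notebook you would then view them inline with:
#   %load_ext tensorboard
#   %tensorboard --logdir <logdir>
logdir = tempfile.mkdtemp()
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir=logdir)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")

x = np.random.rand(16, 4).astype("float32")
y = np.random.rand(16, 1).astype("float32")
model.fit(x, y, epochs=1, callbacks=[tensorboard_cb], verbose=0)

log_files = tf.io.gfile.listdir(logdir)
print(log_files)  # e.g. a 'train' subdirectory of event files
```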

  • All of these features are available in TensorFlow 2.0,

  • and I'm really excited to announce that our alpha release is available

  • for you as of today.

  • (applause)

  • Many of you in the room and across the world

  • really helped with lots of work in testing to make this possible.

  • I'd really like to take this moment to thank you all.

  • Please give yourself a round of applause.

  • We really couldn't have done this without you.

  • (applause)

  • In addition to all the great improvements we talked about,

  • this release comes with a conversion script

  • to help you upgrade from 1.x,

  • and a compatibility module to give you access to 1.x APIs

  • for easy transition,

  • and we are working towards the full release over the next quarter.
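For instance, the compatibility module keeps 1.x-style code importable while you migrate, and the conversion script (`tf_upgrade_v2`, shipped with the pip package) rewrites source files automatically. A small sketch of the compat path:

```python
# The compatibility module keeps 1.x-style code running under 2.0.
# The separate conversion script rewrites source files, e.g.:
#   tf_upgrade_v2 --infile model_v1.py --outfile model_v2.py
import tensorflow.compat.v1 as tf1

# Classic 1.x graph-and-session style via the compat module.
graph = tf1.Graph()
with graph.as_default():
    a = tf1.placeholder(tf1.float32, shape=())
    doubled = a * 2.0

with tf1.Session(graph=graph) as sess:
    result = sess.run(doubled, feed_dict={a: 3.0})
print(result)  # 6.0
```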

  • There's a lot of work going on to make TensorFlow 2.0

  • really work well for you.

  • You can track the progress and provide feedback

  • on the TensorFlow GitHub projects page.

  • You asked for better documentation

  • and we worked to streamline our docs for APIs, guides and tutorials,

  • All of this material will be available today

  • on the newly redesigned TensorFlow.org website,

  • where you'll find more examples, documentation and tools to get started.

  • We're really very excited

  • about these changes and what's to come.

  • And to tell you more about improvements in TensorFlow

  • for research and production,

  • I'd like to welcome Megan Kacholia on stage, thank you.

  • (applause)

  • Thanks, Rajat.

  • TensorFlow has always been a platform for research to production.

  • We just saw how TensorFlow's high-level APIs

  • make it easy to get started and build your models.

  • Now, let's talk about how it enables powerful experimentation for researchers,

  • and lets you take models from research and prototyping

  • all the way through to production.

  • Researchers have been using TensorFlow for state-of-the-art research.

  • We can see it in the number of paper publications,

  • shown over the past few years in this chart.

  • But powerful experimentation really needs flexibility.

  • This begins with eager execution in TensorFlow.

  • In TensorFlow 2.0, by default, every Python command is executed immediately.

  • This means you can write your code in the style you're used to

  • without having to use Session.run.

  • This also makes a big difference in the realm of debugging.
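A tiny sketch of what eager-by-default means in practice: operations return concrete values immediately, so you can print or step through them with an ordinary debugger:

```python
import tensorflow as tf

# With eager execution, ops run as soon as they are called: the
# result is a concrete value, no graph build or Session.run needed.
a = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
b = tf.matmul(a, a)   # executes immediately
print(b.numpy())      # [[ 7. 10.]
                      #  [15. 22.]]
```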

  • As you iterate through with Eager Mode,

  • you'll eventually want to distribute your code onto GPUs, TPUs

  • and other hardware or accelerators.

  • For this, we've provided tf.function

  • which turns your eager code into a graph, function by function.

  • You get all of the familiar tools

  • like Python control flow, asserts, even print,

  • but can convert to a graph anytime you need to,

  • including when you're ready to move your model into production.

  • And even with this, you'll continue to get great debugging.
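A minimal sketch of tf.function at work: the decorated Python function, including its ordinary if-statement, is traced into a graph on first call:

```python
import tensorflow as tf

# tf.function traces this Python function into a graph the first
# time it is called; AutoGraph converts the if-statement for you.
@tf.function
def scale_if_positive(x):
    if tf.reduce_sum(x) > 0:      # ordinary Python control flow
        return 2.0 * x
    return tf.zeros_like(x)

print(scale_if_positive(tf.constant([1.0, 2.0])).numpy())    # [2. 4.]
print(scale_if_positive(tf.constant([-1.0, -2.0])).numpy())  # [0. 0.]
```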

  • Debuggability is great, not just in eager mode,

  • but we've made huge improvements in tf.function and graphs as well.

  • In this example shown here,

  • we're splitting a tensor using tf.function which creates a graph,

  • but because of the mismatched inputs, you get an error.

  • As you can see, we now give users the information

  • about the file and the line number where the error occurred in the model

  • to help you more quickly track things down,

  • so you can continue iterating.

  • We've made the error messages concise, easy to understand and actionable.

  • We hope you enjoy these changes and they make it much easier for you

  • to quickly iterate and progress with your models.

  • Performance is another area we know that researchers,

  • as well as all users for that matter, care about,

  • and we've continued improving core performance in TensorFlow.

  • Since last year,

  • we've sped up training on eight NVIDIA Tesla V100s by almost double.

  • Using a Google Cloud TPU V2,

  • we've boosted performance by 1.6x.

  • And with Intel MKL acceleration,

  • we've sped up inference by almost three times.

  • Performance will continue to be a big focus of TensorFlow 2.0

  • and a core part of our progress to final release.

  • TensorFlow also provides flexibility to enable researchers,

  • and this is through many add-on libraries

  • that extend and expand TensorFlow

  • in new and useful ways.

  • Some of these add-ons are extensions

  • that make certain problems easier,

  • like TF.Text, with Unicode support

  • and the new RaggedTensor type.
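A small sketch of why ragged tensors matter for text: rows can have different lengths, which is exactly what tokenized sentences produce:

```python
import tensorflow as tf

# RaggedTensors hold rows of different lengths, which is what
# tokenized text naturally produces.
tokens = tf.ragged.constant([
    ["Hello", "world"],
    ["TensorFlow", "Dev", "Summit", "2019"],
])
print(tokens.row_lengths().numpy())  # [2 4]
print(tokens.shape)                  # (2, None)
```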

  • In other cases, they let us explore

  • how we can make machine learning models fairer and safer,

  • as with TensorFlow Privacy.

  • You'll also hear new announcements on TF-Agents for reinforcement learning,

  • and tomorrow, we'll be discussing the new TensorFlow Federated library

  • for federated learning.

  • Deep learning research is also being applied to real-world applications

  • using TensorFlow.

  • Here are a few examples from researchers at Google

  • where we see them applying it to areas like our data centers.

  • We're making them more efficient with an AI control system

  • that delivers energy savings.

  • Or apps like Google Maps, the one shown in the middle,

  • which has a new navigation feature called global localization.

  • It combines visual positioning service, street view, and machine learning

  • to more accurately identify position and orientation.

  • And devices like the Google Pixel

  • that use machine learning to improve depth estimation

  • to create better portrait mode photos

  • like the one shown here.

  • In order to make these real-world applications a reality,

  • you must be able to take models from research and prototyping

  • all the way through to launching in production.

  • This has always been a core strength and focus for TensorFlow.

  • Using TensorFlow, you can deploy your models

  • on a number of platforms, as shown here.

  • And models end up in a lot of places,

  • so we want to make sure TensorFlow works well

  • across all of these: on servers and in the cloud,

  • on mobile and other Edge devices,

  • in the browser and on JavaScript platforms.

  • We have products for each of these:

  • TensorFlow Extended, TensorFlow Lite and TensorFlow.js

  • which I'll briefly talk through.

  • TensorFlow Extended is our end-to-end platform

  • for managing every stage of the machine learning lifecycle.

  • This spans all the way from ingesting and transforming your data

  • to deploying your machine learning models at scale.

  • In orange shown here, you can see the libraries we've open-sourced so far.

  • What this slide alludes to is

  • that we're now taking a step further

  • and providing components built from these libraries

  • that make up an end-to-end platform.

  • And note these are the same components

  • that are used internally

  • in thousands of production systems,

  • powering Google's most important products.

  • The components are only part of the story.

  • 2019 is the year we're putting it all together,

  • and providing you with an integrated end-to-end platform.

  • First, you can bring your own orchestrator.

  • Here, we're showing Airflow or Kubeflow, even raw Kubernetes,

  • whatever you want.

  • No matter what orchestrator you choose,

  • the TensorFlow Extended components integrate with a metadata store.

  • This store keeps track of all the component runs,

  • the artifacts that went into them

  • and the artifacts that were produced.

  • This enables advanced features like experiment tracking,

  • model comparison

  • and things along those lines

  • that I'm sure you'll be excited about

  • and will help you as you iterate through

  • and work with your production systems.

  • We have an end-to-end talk coming up later today from Clemens and his team

  • in which they'll take you on a complete tour

  • of using TensorFlow Extended to solve a real problem.

  • Moving on, TensorFlow Lite is our solution for running models

  • on mobile and IoT hardware.

  • It uses a custom, streamlined file format and a stripped-down runtime,

  • so you can deploy TensorFlow models everywhere your users are.

  • On-device models can be more responsive to input than cloud backends,

  • and they keep user data on device for privacy

  • which is very important, especially in this day and age.

  • Google and our partners like IGE in China use TF Lite

  • for all kinds of tools,

  • including predictive text generation, video segmentation

  • and things like edge detection.

  • But under the hood, TensorFlow Lite is about performance.

  • You can deploy models to CPU, GPU and even Edge TPUs,

  • and expect fast performance,

  • and we've been refining it since we launched TensorFlow Lite last year.

  • By using the latest quantization techniques on CPU,

  • adding support for OpenGL ES 3.1 and Metal on GPUs,

  • and tuning our performance on Edge TPUs,

  • we're constantly pushing the limits of what is possible on device,

  • and you can expect even greater enhancements in the year ahead.

  • We'll hear details from Raziel and his colleagues coming up

  • in a little bit this morning.

  • JavaScript is the number one programming language in the world

  • and until recently hasn't necessarily benefited

  • from all the machine learning development and tools.

  • Last year we released TensorFlow.js,

  • a library for training and deploying machine learning models

  • in the browser and on Node.js.

  • Since then we've seen huge adoption in the JavaScript community

  • with more than 300,000 downloads and 100 contributors,

  • but we're just at the beginning

  • given how big the JavaScript and web ecosystem is.

  • Today we're excited to announce TensorFlow.js version 1.0.

  • This comes with many improvements and new features.

  • We have a library of off-the-shelf models for common machine learning problems

  • that run both in the browser and on Node.

  • We're also adding support for more platforms

  • where JavaScript runs, such as Electron desktop apps

  • or native mobile platforms,

  • and a huge focus in TensorFlow.js 1.0

  • is on performance improvements.

  • As an example, compared to last year,

  • MobileNet inference in the browser is now nine times faster.

  • You'll learn more about these advances in our talk later in the day.

  • Another language that we're really excited about is Swift.

  • Swift for TensorFlow is reexamining

  • what machine learning can mean for performance and usability,

  • with a new stack built on top of TensorFlow's core

  • and a new programming model

  • that aims to improve usability further.

  • And today, we're announcing that Swift for TensorFlow is now at version 0.2.

  • It's ready for you to experiment with, to try out,

  • and we're really excited to be bringing this to the community.

  • In addition to telling you about version 0.2,

  • we're also excited to announce that Jeremy Howard of fast.ai

  • is writing a new course in Swift for TensorFlow.

  • Chris and Brennan will tell you a lot more about this later today.

  • So, to recap everything we've shown you so far:

  • TensorFlow has grown to a full ecosystem

  • from research to production,

  • from server to mobile with many languages.

  • This growth has been fueled by our community,

  • and honestly would not have been possible without the community.

  • To talk about what we're planning for you and with you in 2019,

  • I'll hand it over to Kemal.

  • (applause)

  • It's all you.

  • (Kemal) Thank you, Megan.

  • Hi, my name is Kemal

  • and I'm the Product Director for TensorFlow.

  • I'm really excited to be here today for this celebration,

  • and what we're celebrating is the most important part

  • of what we're building, and that's the community.

  • Personally, I love building developer platforms.

  • I used to be a developer and an entrepreneur,

  • and now I get to enable other developers

  • by building together a better platform.

  • When we started working on 2.0,

  • we turned to the community,

  • we started with the Request for Comments (RFC) process,

  • consulting with all of you on important product decisions.

  • We received valuable feedback

  • and we couldn't have built 2.0 without you.

  • And some of you wanted to get more involved

  • so we created Special Interest Groups, or SIGs,

  • like Networking or TensorBoard, to name a few.

  • And SIGs are really a great way for the community

  • to build the pieces of TensorFlow

  • that they care the most about.

  • We also wanted to hear more about what you were building,

  • so we launched a Powered By TensorFlow campaign.

  • And I have to say, we were amazed by the creativity of the projects,

  • from biological image analysis, to custom wearables,

  • to chatbots.

  • So after three years, our community is really thriving.

  • There are almost 70 machine learning GDEs

  • around the world right now,

  • 1,800 contributors on core alone,

  • and countless more of you who are doing amazing work

  • to help make TensorFlow successful.

  • So on behalf of the whole TensorFlow team

  • we want to say a huge thank you.

  • (applause)

  • So we have big plans for 2019,

  • and I would like to make a few announcements.

  • First, as our community grows,

  • we welcome people who are new to machine learning

  • and it's really important to provide them

  • with the best educational material,

  • so we're excited to announce

  • two new online courses.

  • One is with deeplearning.ai

  • and it's published on the Coursera platform.

  • And the other's with Udacity.

  • The first batch of these lessons is available right now,

  • and they provide an awesome introduction to TensorFlow for developers.

  • They require no prior knowledge of machine learning,

  • so I highly encourage you to check them out.

  • Next, if you're a student, for the very first time

  • you can apply to the Google Summer of Code program

  • and get to work with the TensorFlow engineering team

  • to help build a part of TensorFlow.

  • I also talked about the Powered By TensorFlow campaign.

  • We're so excited with the creativity

  • that we decided to launch a 2.0 hackathon on DevPost

  • to let you share your latest and greatest,

  • and win cool prizes.

  • So we're really excited to see what you're going to build.

  • Finally, as our ecosystem grows,

  • we're now having a second day at the summit,

  • but we really wanted to do something more.

  • We wanted a place where you can share

  • what you've been building on TensorFlow,

  • so we're excited to announce

  • TensorFlow World,

  • a week-long conference dedicated to open-source collaboration.

  • This conference will be co-presented by O'Reilly Media and TensorFlow,

  • and will be held in Santa Clara at the end of October.

  • Our vision is to bring together the awesome TensorFlow World

  • and give a place for folks to connect with each other.

  • So I'd like to invite on stage Gina Blaber

  • to say a few words about the conference.

  • (applause)

  • (Gina) Thank you, Kemal.

  • O'Reilly is a learning company

  • with a focus on technology and business.

  • We have strong ties with the open source community

  • as many of you know,

  • and we have a history of bringing big ideas to life.

  • That's why we're excited about partnering with TensorFlow

  • to create this new event

  • that brings machine learning and AI to the community.

  • The event, TensorFlow World, is happening October 28 to 31

  • in Santa Clara.

  • And when I say community, I mean everyone.

  • We want to bring together the entire TensorFlow community

  • of individuals and teams,

  • and enterprises.

  • This is the place where you'll meet experts

  • from around the world,

  • the team that actually creates TensorFlow,

  • and the companies and enterprises that will help you deploy it.

  • We have an open CFP right now on the TensorFlow World site.

  • I invite you all to check that out and send in your proposal soon,

  • so your voice is heard at that event.

  • We look forward to seeing you at TensorFlow World in October.

  • Thank you.

  • (applause)

  • Thank you, Gina. This is going to be great.

  • Are you guys excited?

  • Woo!

  • So we have a few calls to action for you.

  • Take a course, submit a talk to TF World, start hacking in 2.0.

  • By the way, the grand prizes for the hackathon on DevPost

  • will include free tickets to TensorFlow World.

  • You know one thing that I love is to hear about these amazing stories

  • of people building awesome stuff on top of TensorFlow.

  • And as a team, we really believe that AI advances faster

  • when people have access to our tools

  • and can then apply them to the problems that they care about

  • in ways that we never really dreamed of.

  • And when people can really do that, some special things happen.

  • And I'd like to share with you something really special.

  • (urban noise)

  • Looking at historical documents

  • and especially documents from the Middle Ages,

  • requires a lot of time and also a lot of patience.

  • ♪ (gentle music) ♪

  • In the Vatican Archives, there are 85 km of documents,

  • more or less the length of the Panama Canal.

  • The scripts, written in medieval handwriting,

  • are different from the ones we know nowadays.

  • If one day someone asked me to transcribe and translate

  • all the documents of the Vatican Archive,

  • I would tell them that they are completely crazy.

  • (woman) Looking at this book page by page

  • and trying to decipher, read and transcribe whatever is there

  • takes an enormous amount of time.

  • It would require an army of paleographers.

  • ♪ (upbeat music) ♪

  • (woman 2) What excites me the most about machine learning is

  • that it enabled us to solve problems

  • that 10 or 15 years ago we thought were unsolvable.

  • (Paolo) "In Codice Ratio" was born from this idea of building software

  • that can read and interpret what is inside those manuscripts.

  • When we started discussing the problem,

  • we realized that a solution based on neural networks

  • was absolutely necessary.

  • The choice of TensorFlow was a natural one.

  • (Elena) Before using any kind of machine learning model,

  • we needed to collect data first.

  • You have thousands of images of dogs and cats on the Internet,

  • but there are very few images of ancient manuscripts.

  • We built our own custom web application for crowdsourcing,

  • and we involved high school students to collect the data.

  • I didn't know much about machine learning in general,

  • but I found it very easy to create a TensorFlow environment.

  • When we were trying to figure out which model worked best for us,

  • Keras was the best solution.

  • The production model runs on the TensorFlow Layers and Estimator interfaces.

  • We experimented with binary classification

  • with fully connected networks,

  • and finally we moved to a convolutional neural network

  • and multi-class classification.

  • In a short time, we were able to develop and test the first solutions.

  • When it comes to recognizing single characters,

  • we can get 95% average accuracy.
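A minimal sketch of a convolutional multi-class character classifier in this spirit; the 28x28 grayscale crops and the 22-symbol alphabet are illustrative assumptions, not the project's actual configuration:

```python
import numpy as np
import tensorflow as tf

# Sketch of a convolutional multi-class character classifier.
# The 28x28 grayscale crops and 22-class alphabet are illustrative
# assumptions, not the project's real configuration.
num_classes = 22
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Crowd-sourced character crops and labels would be fed in here.
x = np.random.rand(32, 28, 28, 1).astype("float32")
y = np.random.randint(0, num_classes, size=(32,))
model.fit(x, y, epochs=1, verbose=0)
```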

  • (Marco) Being able to access an IT tool greatly shortens the time required.

  • Being able to solve certain abbreviations and to understand a text

  • in that cryptic writing

  • is something exceptional.

  • (Serena) This will have an enormous impact in a short period of time.

  • We will have a massive quantity of historical information available.

  • I just think solving problems is fun.

  • It's a game against myself,

  • and seeing how well I can do.

  • (Marco) The study of history is extremely important

  • to understand our present

  • and to get a perspective on the future.

  • (applause)

  • This is such a great story.

  • I think about the scholars who wrote these manuscripts.

  • They could never have imagined that centuries later,

  • people would be using computers to bring their work back to life.

  • So we're really lucky to have Elena with us today.

  • Elena, would you stand?

  • (applause)

  • Don't miss the talk where she will share her story today.

  • I really hope you have a great day.

  • We have some really awesome things ahead.

  • The team and I will be around.

  • Please come and say hi, we want to hear from you,

  • and with that, I'm going to hand it over to Martin

  • who will talk about TensorFlow 2.0. Thank you.

  • ♪ (upbeat music) ♪



TensorFlow Dev Summit 2019 Keynote

Published by 林宜悉 on January 14, 2021