TensorFlow Hub: Reusable Machine Learning (TF Dev Summit '19)

  • [MUSIC PLAYING]

  • ANDRE SUSANO PINTO: Hello, my name is Andre,

  • and I'm a software engineer in Zurich.

  • I'm the technical lead for the TensorFlow Hub project.

  • TensorFlow Hub is a library and a repository

  • for reusable machine learning.

  • We open-sourced it last year, and today I'll

  • give you an update on the project.

  • Let's start by talking about when you'd want to use it.

  • If you have problems collecting enough data

  • to train your models from scratch,

  • then transfer learning is a technique for you.

  • You can use Hub to make it easy to reuse parts of models

  • that were trained on large amounts of data.

  • Additionally, since it's so easy to reuse computations

  • and weights, it becomes possible to leverage features

  • without having to learn how to fit them into neural networks.

  • So features for images, text, and videos

  • can be used with a single line of code.

  • You might also have encountered problems

  • where code bases become really coupled

  • and experimentation becomes slower over time.

  • By defining an artifact that does not depend on code,

  • Hub allows for more maintainable systems,

  • similar to how libraries have helped software engineering.

  • So the main concept is these pre-trained building blocks

  • that we call modules.

  • Typically, we start by training a large model

  • with the right algorithm and data.

  • From this large model, we're interested

  • in just a part of it, which is reusable.

  • This is typically a bottleneck layer or some other distributed

  • representation.

  • Then we can package it into a SavedModel that

  • defines the computation and the trained weights,

  • and no longer depends on the original code.

  • You can share this artifact via the file

  • system, web servers, or the cloud.

  • Then you can bring this module back as a piece of a new model

  • to solve a new task.

  • Since the model is defined by TensorFlow primitives,

  • you can fine-tune its weights and adjust them

  • to your problem.

  • So what's new?

  • For the last few months, we've been

  • making it even easier to use.

  • This concept of saving a part of a model and then loading it

  • back is getting integrated right in the core of TensorFlow.

  • We have added SavedModel features

  • that make it possible to share more than just signatures.

  • And with eager execution, it becomes even easier

  • to select the part of a network that gets exported.

  • Let's look at a few examples.

  • In TensorFlow 2.0, we can load a module with hub.load.

  • We can also use tf.saved_model.load

  • if the model is already on our file system.

  • Due to eager execution, once loaded,

  • we can call it right away.
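
A minimal sketch of that loading pattern, assuming one of the tf2-preview modules published for TensorFlow 2 (any TF2-format SavedModel would work the same way):

    import tensorflow as tf
    import tensorflow_hub as hub

    # Load a module straight from tfhub.dev...
    embed = hub.load("https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1")

    # ...or, if the SavedModel is already on the file system:
    # embed = tf.saved_model.load("/path/to/saved_model")

    # With eager execution, the loaded object is callable right away.
    embeddings = embed(["hello world", "tensorflow hub"])
    print(embeddings.shape)  # (2, 128) for this particular module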

  • Additionally, due to the new capabilities,

  • you can now share any object which is composed

  • of TensorFlow primitives.

  • So in this case, text_features has two members--

  • __call__, which is a tf.function, and embeddings,

  • which is a tf.Variable.
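
A hypothetical sketch of such a shareable object; the class name, vocabulary size, and dimensions are made up for illustration, but the mechanism is plain TF2:

    import tensorflow as tf

    class TextFeatures(tf.Module):
        def __init__(self, vocab_size=10000, dim=128):
            super().__init__()
            # `embeddings` is a tf.Variable member, as described above.
            self.embeddings = tf.Variable(
                tf.random.uniform([vocab_size, dim]), name="embeddings")

        # `__call__` is a tf.function member, as described above.
        @tf.function(input_signature=[tf.TensorSpec([None], tf.int64)])
        def __call__(self, token_ids):
            return tf.nn.embedding_lookup(self.embeddings, token_ids)

    tf.saved_model.save(TextFeatures(), "/tmp/text_features")

    # Both members survive the round trip.
    text_features = tf.saved_model.load("/tmp/text_features")
    vectors = text_features(tf.constant([1, 2, 3], dtype=tf.int64))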

  • We're also really excited that we added support

  • for polymorphic functions when serializing tf.functions

  • into a SavedModel.

  • This will provide a more natural interface

  • than we had before with signatures.

  • For example, here we see an image representation module

  • being loaded and then being used in inference mode,

  • or being used in training mode, where batch norm is on.

  • And when it's on, we can also

  • control some of its parameters.
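
A sketch of that polymorphic interface; the handle points at a tf2-preview image module, and the keyword argument follows the pattern described above rather than a guaranteed API:

    import tensorflow as tf
    import tensorflow_hub as hub

    module = hub.load(
        "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4")
    images = tf.random.uniform([8, 224, 224, 3])

    # One loaded object, several behaviors:
    features = module(images)                 # inference mode
    features = module(images, training=True)  # training mode, batch norm on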

  • And all of this is just backed by TF graphs,

  • so we no longer need to be selecting things

  • here and there.

  • We just get an API that feels natural and consistent.

  • Additionally, we have added a new symbol--

  • hub.KerasLayer.

  • This makes integrating hub modules into a Keras model

  • easier.

  • In this example, we see how easy it is to build

  • a sentence classification model.

  • So we have three layers.

  • The first layer-- this hub.KerasLayer, which wraps the NNLM module--

  • is a layer that receives sentences as inputs

  • and outputs a dense representation.

  • Then we have a dense layer and a classification layer.

  • Since the Keras layer--

  • this NNLM layer-- includes text preprocessing,

  • we can just feed sentences straight into our model.

  • We never have to define that logic ourselves.

  • Additionally, if we wanted to try other text models,

  • we could just change that string.

  • Even the latest research models become just as easy to try.
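
A minimal sketch of that three-layer classifier; the NNLM handle is illustrative, and swapping the string swaps the text model:

    import tensorflow as tf
    import tensorflow_hub as hub

    model = tf.keras.Sequential([
        # Sentences in, dense vectors out; text preprocessing happens inside.
        hub.KerasLayer("https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1",
                       input_shape=[], dtype=tf.string),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # classification head
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Raw sentences go straight in; no preprocessing logic to define.
    model.fit(tf.constant(["great movie", "terrible plot"]),
              tf.constant([1, 0]), epochs=1)

Passing trainable=True to hub.KerasLayer would additionally fine-tune the module's weights for the new task.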

  • So the status is: we have released version 0.3

  • of TensorFlow Hub.

  • It has these two new symbols we just saw--

  • hub.load and hub.KerasLayer.

  • And they are usable in TensorFlow 2, both

  • in eager and graph mode.

  • To let you preview this functionality,

  • we have published some modules in this format.

  • And the next step for us

  • is to backport existing modules.

  • A bit more practical now--

  • Google AI and DeepMind teams have

  • been sharing their resources with you on tfhub.dev.

  • This was already launched last year,

  • and there are some new modules.

  • And we're going to have a look at some of those.

  • One of the most popular was the universal sentence encoder.

  • This is a module that encodes short sentences

  • into a fixed-dimensional vector that can

  • be used for many natural language tasks.

  • Recently, the team has added a cross-lingual version of this.

  • So sentences with similar meaning,

  • independent of the language, will

  • end up at points close together.

  • What's exciting about this is that now you

  • can learn a classifier using English data,

  • and then you can run it on other languages.
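
For instance, with the multilingual encoder (the handle and version here are assumptions based on the published module family; it also needs the tensorflow_text package to register its ops):

    import tensorflow_hub as hub
    import tensorflow_text  # registers ops the module depends on

    encode = hub.load(
        "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3")

    # Similar meanings land close together across languages, so a
    # classifier trained on the English vectors can score the German one.
    vectors = encode(["How old are you?", "Wie alt bist du?"])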

  • We have also added image augmentation modules.

  • The policies to augment images were trained by reinforcement

  • learning on tasks such as ImageNet,

  • and they have been shown to transfer to new tasks.

  • An interesting thing is that you could grab this module,

  • and you could grab one of the image representation modules,

  • and you could string them together.

  • In this case, the image augmentation module

  • would reduce the amount of data needed through data augmentation,

  • and the image feature vector would reduce it through transfer learning.
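
A hypothetical sketch of stringing the two together; both handles and the augmentation call signature are assumptions, so the module documentation on tfhub.dev is the source of truth:

    import tensorflow as tf
    import tensorflow_hub as hub

    # Hypothetical module paths.
    augment = hub.load(
        "https://tfhub.dev/google/image_augmentation/nas_imagenet/1")
    features = hub.load(
        "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4")

    def embed_augmented(images):
        # Augmentation stretches a small dataset; the feature vector
        # adds transfer learning on top of it.
        return features(augment(images))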

  • And there are many more.

  • We have BERT module for text tasks.

  • We have object detection modules,

  • BigGAN modules for controlled image generation,

  • I3D kinetics for video action recognition, and more.

  • Some of these models' architectures

  • were specially designed for low-resource environments.

  • Additionally, we have been working on

  • making modules more integrated with other pieces of the TensorFlow

  • ecosystem.

  • We have a command-line utility to convert a Hub module

  • into a TensorFlow.js model.

  • Hub models can be used together with AdaNet,

  • which is a library for AutoML.

  • And they can also be used inside TF transform.

  • So if you want to try it, you can go to tfhub.dev

  • and you can search for modules.

  • Most of them include a link where

  • you can see them in action.

  • Thank you.

  • [MUSIC PLAYING]
