Answering your TF Lite questions and more! (#AskTensorFlow)

  • ♪ (upbeat music) ♪

  • Welcome to Ask TensorFlow

  • where we'll be answering all of your questions

  • about TensorFlow Lite and Swift for TensorFlow.

  • I'm Paige Bailey.

  • And I'm Daniel Situnayake and I work in DevRel for TensorFlow Lite.

  • Excellent! So with that, let's get started with our first question.

  • This is from Zongjun Zheng and was submitted via YouTube.

  • "Is RNN/LSTM quantization-aware training and TOCO conversion in TF Lite

  • available in TensorFlow 2.0?"

  • That's a lot of acronyms in a sentence

  • and it sounds like a question that's for you, Dan.

  • It is indeed. So, I can say first of all,

  • yes, all of these things are either available or coming in TensorFlow 2.0.

  • With regards to RNNs and LSTMs, we actually just published a guide

  • on how you can use these with TensorFlow Lite.

  • You can find that on our docs.

  • With TOCO conversion, yeah, the converter works great with 2.0

  • and we also have a guide on how to do that.

  • And with quantization-aware training,

  • right now we're recommending you check out post-training quantization.

  • We've actually just published some new documentation on that

  • and it works really, really well.

  • But if you want to use quantization-aware training,

  • we're going to have support for that in TF 2.0 coming very soon.
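
For reference, post-training quantization is a single flag on the TF Lite converter. A minimal sketch, assuming a SavedModel exported to a hypothetical "my_saved_model" directory:

```python
import tensorflow as tf

# Load a SavedModel and apply post-training (dynamic-range) quantization.
converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write out the quantized flat-buffer model.
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```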

  • Excellent. And all of those links that Dan mentioned are going to be listed

  • in the comments section of this YouTube video.

  • So make sure to check them out.

  • For the next question, "Is there any tutorial or example

  • for text processing models in TensorFlow Lite

  • aside from the pre-trained smart reply example?"

  • This is from Shubham.

  • Alright, so right now smart reply is our main example

  • that involves text processing.

  • However, we're working on a lot of new models right now

  • and so if you've signed up for the TF Lite mailing list,

  • you'll be able to get notified

  • when we release a bunch of new models that you can play with.

  • Excellent! And a link to that TF Lite mailing list

  • will also be placed in the comments section of the video.

  • Alright, so our next question is from Sohaib Arif who asks,

  • "Is Swift for TensorFlow for iOS programming?"

  • That's an excellent question, Sohaib.

  • So, Swift is absolutely for iOS programming.

  • You've probably used it if you were building iOS apps.

  • Swift for TensorFlow will be able to deploy to iOS devices.

  • However, right now, we only have support for macOS

  • and also for Linux.

  • But if you're interested in this space, absolutely stay tuned

  • by joining the swift@tensorflow.org mailing list

  • as well as our weekly open design meetings

  • that happen every Friday at 9 a.m. Pacific Time.

  • So thanks so much for the question.

  • For our next question,

  • it also sounds like a TF Lite-y question from Katayoun,

  • "Will there be a commodity device that I can use for TPU inferencing?"

  • Awesome, so the answer is yes!

  • And you may have heard of something called the Edge TPU.

  • It's basically like a tiny low power version

  • of our TPU machine learning accelerators

  • that you can embed in hardware that you're producing.

  • So imagine you're making a consumer product or an IoT device,

  • you can take the TPU, add it to that device

  • and get accelerated inference at the edge.

  • So we've created something called the Coral Platform

  • which gives you the end-to-end developer tooling to be able to do this

  • so you can start off with dev boards that you can order online

  • and once you've prototyped your application, you can have all of the tools

  • to go into production and incorporate the hardware

  • into your end product.

  • So that's called the Coral Platform

  • and you can find it at coral.withgoogle.com.

  • It's super exciting.

  • (Paige) Excellent! And that's so cool that you can have TPUs

  • outside of a data center

  • that you just order online

  • and have it delivered directly to your house.

  • Absolutely. And small enough that it fits in your pocket.

  • That's amazing!

  • Another TF Lite question,

  • so,"Does TF Lite only work on these Coral dev boards?"

  • This is from Christian Rivera.

  • Alright, so actually TF Lite is the way that you work with ML

  • in TensorFlow on Coral dev boards.

  • But TF Lite works across the board.

  • We cover basically every platform you can think of.

  • So, from mobile phones, Android and iOS,

  • through to IoT platforms running embedded Linux, like the Raspberry Pi,

  • all the way down to tiny microcontrollers

  • with our new TensorFlow Lite for Microcontrollers product.

  • And you can actually run TensorFlow Lite on the server as well.

  • It's basically just a streamlined, stripped-down,

  • super-fast, optimized version of TensorFlow

  • for deploying machine learning on device.
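
As a quick illustration of that server-side use, the Python interpreter can run a converted model in a few lines. A minimal sketch; the model path and dummy input are assumptions:

```python
import numpy as np
import tensorflow as tf

# Load a converted .tflite model with the Python interpreter.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input shaped the way the model expects.
input_data = np.zeros(input_details[0]["shape"], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
```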

  • Excellent. And what I love too

  • is that it supports so many different languages, right?

  • So if you're not a Python person, you can use Swift or you can use Java

  • or a whole host of others.

  • Absolutely, yeah.

  • Even if you're writing C++ on tiny embedded devices,

  • you can use our libraries.

  • - So, it's pretty cool. - Excellent.

  • Our next question is also about Edge TPU, so I guess these must be pretty popular.

  • "Will Edge TPUs be available to purchase in other counties?"

  • This is from Yao Ouyang, asked on YouTube.

  • Alright, excellent question.

  • Absolutely, Edge TPUs are available all over the world--

  • currently available in over 30 countries.

  • And the place to go and find them is coral.withgoogle.com.

  • Excellent, and Coral just recently released their compiler as well.

  • Didn't they open-source it?

  • Yes! So Coral's compiler is now open-source

  • so you can convert your models with TensorFlow Lite,

  • run them through the Coral compiler, and then deploy them to the Edge TPU.
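
For reference, running the compiled model on a Coral board typically loads the Edge TPU delegate via the tflite_runtime package. A minimal sketch; the model filename is hypothetical:

```python
import tflite_runtime.interpreter as tflite

# Load an Edge TPU-compiled model with the Edge TPU delegate on a Coral board.
interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
# From here, set inputs and call interpreter.invoke() as usual.
```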

  • Excellent.

  • Alright, so next question from Raveen Gouda

  • is, "What about Android things? Does TensorFlow 2.0 support them?"

  • Absolutely, and that's kind of a double whammy, right?

  • So Swift for TensorFlow is certainly supported on Android devices.

  • You're capable of running Swift on Android

  • and it pairs quite nicely with Kotlin; they're syntactically quite similar.

  • But TensorFlow Lite has a whole host of options on Android, right Dan?

  • Yeah, absolutely.

  • So TensorFlow Lite was really built from the ground up

  • to work with these types of mobile and embedded operating systems.

  • So whether you're using Android or iOS,

  • we have a ton of support in documentation and libraries.

  • So if you want to access hardware acceleration,

  • for example on Android,

  • you can use our GPU Delegate if your device supports it.

  • We have libraries in Java and Kotlin,

  • so basically, yeah, you're good to go.

  • Alright, so our next question from Sohaib Arif is,

  • "What platforms are supported by Swift for TensorFlow?"

  • And this is an excellent question, Sohaib.

  • Right now we support Linux as well as macOS.

  • But we have plans to support iOS and also Android.

  • And you can certainly run Swift on both iOS and Android devices.

  • One of the cool things about Swift for TensorFlow

  • is that it's an infinitely hackable machine learning framework

  • and Swift is an infinitely hackable language

  • that's essentially just syntactic sugar for something called LLVM

  • so we anticipate that support for mobile devices

  • will land before the end of the year but if you want to stay up to date,

  • again, join the swift@tensorflow.org mailing list.

  • Our next question is also about TensorFlow Lite,

  • "Will there be support in the Python API for exporting object detection models?

  • So, for example, after transfer learning, to TF Lite."

  • Awesome! So that's a great question because it lets me talk about

  • the TensorFlow Lite Converter.

  • So it doesn't really matter what type of model you want to use

  • with TensorFlow Lite.

  • Basically the workflow is

  • that you'll train your model with TensorFlow,

  • export it to a format like SavedModel,

  • and then you can load it into the TensorFlow Lite Converter,

  • which will handle basically taking that model,

  • creating it in a file format which is optimized for use on mobile

  • so it's very small and efficient

  • and also doing a bunch of other optimizations

  • some of them optional, that can increase the performance

  • of your model when it's running on device.

  • So pretty much whatever kind of model you're using,

  • you can use the converter.
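
That train, export, convert workflow looks roughly like the sketch below; the tiny model and file paths are placeholders:

```python
import tensorflow as tf

# Build (or load) a Keras model; training is elided for brevity.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,)),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Export to the SavedModel format...
tf.saved_model.save(model, "exported_model")

# ...then convert to the TF Lite flat-buffer format for on-device use.
converter = tf.lite.TFLiteConverter.from_saved_model("exported_model")
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```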

  • And for a bunch of models, for example, for object detection

  • you can actually go to tensorflow.org/lite/models

  • and we have them available for download,

  • so you don't even necessarily need to do the conversion yourself.

  • That's amazing.

  • And I also love that SavedModel is so well represented

  • across the entire TensorFlow ecosystem.

  • So if you do create that model using the Python API

  • and you convert it to SavedModel,

  • you would be able to deploy it to browsers with TensorFlow.js,

  • to all of these great mobile use cases, to embedded devices

  • and anywhere you would want to place machine learning.

  • Yeah, it's really exciting that we've got this format

  • that's just at the heart of everything and lets you use your model everywhere.

  • Yeah, machine learning with no boundaries.

  • (giggle)

  • Awesome. So our next question is,

  • "Why is it currently so difficult to integrate and use custom C++

  • or CUDA operations in TensorFlow and especially TensorFlow Serving?

  • And are there any plans

  • to make this process easier for production?"

  • Excellent. So I love this question and I'm really excited

  • to be able to say that yes, Swift for TensorFlow

  • has plans to give you the ability to import C headers, C++ headers

  • and to also give really performant C++ interop.

  • We also envision a world with MLIR where you would be able

  • to write custom CUDA kernels from within a Jupyter Notebook.

  • So, I certainly recommend taking a look at the C++ interop capabilities

  • and C header imports available to you in Swift for TensorFlow.

  • Swift for TensorFlow also targets

  • the TensorFlow graph representation

  • which means it's extensible across the entire ecosystem.

  • And also take a look at the MLIR documentation.

  • Alright, so the next question from Katayoun is,

  • "I had some problems when using Keras and TensorFlow

  • - with OpenCV..." - Ah yes.

  • "... are there any improvements in TensorFlow 2.0?"

  • So that's a great question.

  • There are improvements for doing image processing operations

  • with TensorFlow 2.0,

  • both as part of tf.image and as part of TensorFlow Addons,

  • which is a community-supported package with a number of Keras layers, losses,

  • and great image pre-processing steps.

  • We also have an RFC out for pre-processing layers.
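
As a small illustration of the image ops available in tf.image (the file path is hypothetical):

```python
import tensorflow as tf

# Decode, resize, and augment an image with tf.image ops.
image_bytes = tf.io.read_file("photo.jpg")
image = tf.image.decode_jpeg(image_bytes, channels=3)
image = tf.image.resize(image, [224, 224])
image = tf.image.random_flip_left_right(image)
image = tf.image.adjust_brightness(image, delta=0.1)
```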

  • But if you really want to use OpenCV in a performant way

  • by, you know, directly importing the C library,

  • you would be able to use it with Swift for TensorFlow.

  • So there's a great example from Jeremy Howard

  • the creator of fast.ai,

  • on how he was able to build a custom data import pipeline

  • using C headers and OpenCV that was twice as fast as tf.data.

  • So if you're a fan of OpenCV, if you want to use OpenCV,

  • you know, the way that you use it currently,

  • which is probably with C or C++, Swift for TensorFlow

  • might be an excellent option.

  • And if not, we do have a number of support features

  • available in TensorFlow 2.0.

  • Very cool.

  • Our next question, "Does TensorFlow have any API

  • that can do AutoML, such as the Azure ML SDK?"

  • And this is from Mahbub.

  • That sounds like a you question.

  • Yes, so, I would say the best thing to do if you're interested in AutoML

  • is look up this product called Cloud AutoML from Google

  • which basically allows you to do a bunch of ML stuff

  • in a semi-automated way in the Cloud.

  • So, you can train models to do cool stuff using our tooling

  • and then export the model to use wherever you need to use it.

  • Absolutely. And it searches the entire problem space

  • for model optimization in an intelligent way,

  • so it's not a brute-force search.

  • We also have something called Keras Tuner

  • so if you want to have a little bit more control

  • over the hyperparameter tuning,

  • you can do that on your own, as well.

  • But I think you're right that Cloud AutoML

  • is probably your best bet.

  • - A good place to start. - Yeah.
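
For reference, a minimal Keras Tuner sketch, assuming the keras_tuner package; the model, search space, and commented-out search call are illustrative:

```python
import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    # Let the tuner choose the hidden-layer width.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(
            hp.Int("units", min_value=32, max_value=256, step=32),
            activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy", max_trials=10)
# tuner.search(x_train, y_train, validation_split=0.2, epochs=5)
```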

  • Alright, so our next question is

  • "What about Kotlin for TensorFlow?"

  • That's another double whammy, right?

  • Like, so we don't have Kotlin bindings for TensorFlow,

  • but I do think that you have Kotlin support for TF Lite.

  • Is that correct?

  • Yes. So you can use our TF Lite library from within Kotlin super easily.

  • Excellent.

  • So, I guess we do,

  • but not from the perspective of Swift for TensorFlow

  • or the Java bindings for the language directly.

  • So our next question is, "Can a deep learning model

  • be miniaturized automatically?"

  • And that sounds like a TF Lite scenario.

  • Yeah, and this is a great question to answer

  • because we've just published something called the Model Optimization Toolkit,

  • which basically documents how to do exactly this.

  • So we have a set of tools that can do everything

  • from taking a SavedModel and re-encoding it

  • into a different, more space-efficient file format

  • called a FlatBuffer, for deploying on mobile devices,

  • and that's something you'd use the TensorFlow Lite Converter to do.

  • All the way through to quantization

  • where you're actually reducing the precision of the numbers in the model

  • so that the model takes up less space on disk and takes less time to execute.

  • But it potentially doesn't really lose any accuracy,

  • so it's almost kind of magical.
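
For reference, full-integer quantization calibrates the converter with a small representative dataset. A minimal sketch; the SavedModel path, input shape, and sample count are assumptions:

```python
import numpy as np
import tensorflow as tf

def representative_data_gen():
    # Yield ~100 calibration samples shaped like the model's real inputs.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
tflite_quant_model = converter.convert()
```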

  • So those are some of the techniques we have available right now,

  • but in the future we're going to be adding more,

  • so you should sign up to the TensorFlow Lite mailing list

  • to get updates on this

  • and check out the Model Optimization Toolkit.

  • Absolutely. And if you've used it,

  • I think we would love to hear feedback about it.

  • So make sure to share your experience on the TF Lite mailing list.

  • Absolutely.

  • All right, so the next question is regarding tf.data,

  • "Do you guys have any new APIs to directly load audio files

  • like WAVs, etc. instead of going through the extra conversion steps

  • to convert to TFRecords?"

  • Excellent. That is a great question,

  • And I completely understand the struggle

  • of having to convert WAV files to TFRecords

  • and dealing with tf.data pipelines.

  • We're adding support in TensorFlow 2.0 through something called tf.io

  • so if you have specialized data input formats or export formats,

  • that could be a great option.
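
One way that can look with core TensorFlow ops today (not a single blessed audio API; the directory glob is illustrative):

```python
import tensorflow as tf

def load_wav(path):
    # Read and decode a 16-bit PCM WAV file into a float tensor.
    audio_bytes = tf.io.read_file(path)
    waveform, sample_rate = tf.audio.decode_wav(audio_bytes)
    return waveform, sample_rate

# Build a tf.data pipeline directly over WAV files, no TFRecords needed.
dataset = (
    tf.data.Dataset.list_files("audio_dir/*.wav")
    .map(load_wav, num_parallel_calls=tf.data.experimental.AUTOTUNE)
)
```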

  • But I also suggest very strongly

  • that you take a look at Swift for TensorFlow.

  • That way you can import WAV files exactly as you would in C or C++

  • and I can also place a link in the video description below

  • showing precisely how to import those audio files.

  • So thanks so much for the question.

  • I'm really excited to see what you create.

  • Alright, so our next question from Jordan is, "Do you have any plans to add support

  • for constraints or even better AutoDiff on manifolds?

  • It would be so nice to do optimization where some parameters live

  • in SO(3), for example."

  • Excellent, excellent question again.

  • So AutoDiff is very near and dear to my heart,

  • and it's something that we're working very very closely on

  • with Swift for TensorFlow.

  • One of the magical things about Swift is that any function--

  • anything that you can imagine: adds, multiplies, or custom functions--

  • they're all differentiable.
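
As a rough Python analogue of that idea, not the Swift API itself, TensorFlow's GradientTape differentiates arbitrary compositions of ops, including custom functions:

```python
import tensorflow as tf

def custom_fn(x):
    # Any composition of TF ops is differentiable under GradientTape.
    return tf.sin(x) * x ** 2 + 3.0 * x

x = tf.Variable(2.0)
with tf.GradientTape() as tape:
    y = custom_fn(x)
dy_dx = tape.gradient(y, x)  # derivative of custom_fn at x = 2.0
```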

  • So I strongly suggest that you take a look

  • at our design document for AutoDiff.

  • It's just recently been released

  • and we'll place a link in the video description below.

  • And let us know what you build, Jordan.

  • Really excited to see it.

  • Alright, thank you so much for your awesome questions

  • and just as a reminder,

  • if you have a question that you'd like us to answer

  • on Ask TensorFlow, post it on social media with the hashtag #AskTensorFlow.

  • And make sure you check out all of those great links

  • that we have in the video description below.

  • We're really excited to see what you think.

  • And we can't wait to hear the new questions that you ask.

  • So thanks so much for joining us today.

  • - I'm Paige Bailey. - And I'm Daniel Situnayake.

  • And this has been Ask TensorFlow.

  • ♪ (upbeat music) ♪
