Deep Learning with Python, TensorFlow, and Keras tutorial

  • What is going on everybody and welcome to a much-needed

  • update to the deep learning in Python with TensorFlow, as well as now Keras, tutorial

  • it's been a bit over two years since I did just a basic deep learning video in Python and

  • Since then a lot has changed. It's now much simpler to both like get into it

  • But then also just to work with deep learning models

  • So if you want to get into the more nitty gritty details in the lower-level

  • Tensorflow code you can still check out the older video

  • But if you're just trying to get started with deep learning that's not necessary anymore because we have these nice high-level

  • APIs like Keras that sit on top of TensorFlow and

  • Make it super super simple. So anybody can follow along if you don't know anything about deep learning that's totally fine

  • We're going to do a quick run-through of neural networks. Also, you're gonna want Python

  • 3.6: at least as of the release of this video (hopefully very, very soon

  • TensorFlow will be supported on 3.7 and later versions of Python), it just happens to be the case that right now

  • it isn't. I think it's something to do with the

  • async

  • changes, I'm not really sure. Anyways,

  • Let's get into it starting with an overview of how neural networks just work

  • Alright to begin

  • we need to have some sort of balance between treating neural networks like a total black box that we just don't understand at all and

  • understanding every single detail of them. So I'm gonna show you guys what I think is just kind of the bare essentials to understanding

  • What's going on? So a neural network is going to consist of the following things. Like what's the goal of any machine learning model?

  • Well, you've got some input

  • So let's say x1, x2, x3, and you're just trying to map those

  • inputs to some sort of output

  • Let's say that output is determining whether something is a dog or that something is a cat

  • So the output is going to be two neurons in this case. So it's just boom two neurons

  • Now what we want to do is figure out: how are we going to map to those things?

  • We could use a single hidden layer. Let's say we're going to do some neurons here and

  • That's our first

  • hidden layer. Now

  • what's gonna happen is each of these, x1, x2, and x3, is gonna map to that hidden layer:

  • each of the

  • input Xs gets

  • connected to each of the neurons in that first hidden layer. And each of those connections has its own

  • unique weight. Now from here, that first hidden layer could then map and connect to that output layer.

  • The problem is, if you did this, the relationship between x1 and dog or cat, and

  • all the other ones, those relationships would only be linear relationships.

  • So if we're looking to map nonlinear relationships,

  • which is probably going to be the case in a complex problem, you need to have two or more hidden layers. One

  • hidden layer means you just have a neural network; two or more hidden layers means you have a quote-unquote deep neural network.

  • So we'll add one more layer and then we're gonna fully connect that one too

  • And then once that's fully connected again all unique weights, each of those blue lines has a unique weight associated with it

  • and then that gets mapped to

  • The output and again each blue line has a unique weight associated with it

  • so now what we're gonna do is talk about what's happening on an

  • individual

  • Neuron level. So again that neuron has certain inputs coming to it

  • It might be the input layer X values or it could be inputs coming from the other neurons

  • So again, we're gonna call the inputs x1, x2, and x3,

  • but just keep in mind, this might not actually be your input data;

  • It might be data coming from another neuron

  • But regardless that data's gonna come in and we're just gonna get the sum of that data

  • So it's gonna come in and be summed all together

  • But remember, we also have those weights: each of the inputs has a unique weight that gets, you know, multiplied

  • against the input data, and then we sum it together. Finally, and this is kind of where the "artificial" neural network comes into play,

  • we have an activation function and this activation function is kind of meant to

  • Simulate a neuron actually firing or not

  • So you can think of the activation function like on a graph, you know?

  • You got your x and your y, and then a really basic activation function would be like a stepper function.

  • So if x is greater than a certain value, boom, we step up and we have a value. So let's say here

  • this is zero; here, the value is one.

  • So let's say this is our x-axis: 1, 2, 3.

  • So if x, you know, after all the inputs are multiplied by their weights and summed together, if that value is,

  • let's say,

  • greater than 3,

  • well, then this activation function returns a 1. But today we tend to use more of a

  • sigmoid activation function, so it's not going to be a 0 or a 1; it's going to be some sort of value between

  • 0 and 1. So instead we might actually return like a 0.79 or something like that.
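As a rough sketch, here is that single-neuron computation in plain Python; the inputs, weights, and threshold below are made-up values purely for illustration:

```python
import math

# Hypothetical inputs and their unique weights (illustrative values only)
inputs = [0.5, 0.8, 0.2]   # x1, x2, x3
weights = [0.9, 1.2, 0.4]  # one unique weight per connection

# The neuron sums each input multiplied by its weight
weighted_sum = sum(x * w for x, w in zip(inputs, weights))

def step(x, threshold=3.0):
    # Basic stepper activation: fire (1) only past the threshold
    return 1.0 if x > threshold else 0.0

def sigmoid(x):
    # Sigmoid activation: a smooth value between 0 and 1
    return 1.0 / (1.0 + math.exp(-x))

print(step(weighted_sum), sigmoid(weighted_sum))
```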

  • So coming back to this neural network here that we've been drawing

  • Let's say here on this final output layer. You've got dog and cat

  • well, this output layer is almost certain to have just a sigmoid activation function and

  • what it's gonna say is maybe dog is a 0.79 and cat is a 0.21;

  • these two values are gonna add up to a perfect 1.0, but we're gonna go with whatever the

  • largest value is. So in this case,

  • the neural network is, you could say, 79 percent confident that it's a dog, 21 percent confident it's a cat.

  • We're gonna take the argmax, basically, and we're gonna say: hmm, we think it's a dog.

  • All right. Now that we're all experts on the concepts of neural networks. Let's go ahead and build one. You're gonna need tensorflow

  • So do a pip install --upgrade tensorflow; you should be on TensorFlow version 1.1 or greater. So

  • one thing you can do is import tensorflow as tf, and then

  • tf.__version__ will give you your current version;

  • mine is

  • 1.10.
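In code, that install and version check look roughly like this:

```python
# In a shell first: pip install --upgrade tensorflow
import tensorflow as tf

print(tf.__version__)  # mine prints 1.10.x; anything 1.1+ should work here
```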

  • Now let's go ahead and get started. So the first thing we're going to do is import a dataset. We're going to use MNIST,

  • kind of the "hello world"

  • of

  • datasets with machine learning. It is a dataset that consists of 28 by 28 sized images,

  • so that's like the resolution:

  • images of handwritten

  • digits 0 through 9. So it'll be like a 0, 1, 2, 3 and so on, and each is a handwritten, kind of unique image,

  • so it's actually a

  • Picture we can graph it

  • soon enough so you can see it's actually an image and the idea is to feed through the pixel values to the neural network and

  • Then have the neural network output

  • Which number it actually thinks that image is

  • So that's our data set, and now what we want to do is

  • Unpack that data set to training and testing variables

  • So this is a far more complex

  • Operation when it's actually a data set that you're kind of bringing in or that you built or whatever

  • For the sake of this tutorial, we want to use something real basic like MNIST,

  • so we're gonna unpack it to x_train, y_train, and

  • then we're going to do x_test, y_test, and

  • that's gonna equal

  • mnist.load_data(), so that's gonna unpack it into there.
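Put together, the loading step being described looks like this:

```python
import tensorflow as tf

# MNIST: 28x28 images of handwritten digits, 0 through 9
mnist = tf.keras.datasets.mnist

# Unpack into training and testing variables
(x_train, y_train), (x_test, y_test) = mnist.load_data()
```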

  • Just to show you guys what this is

  • We're gonna use matplotlib; you can pip install it or just look at it with me, but we're gonna import matplotlib.pyplot

  • as plt. And what we're gonna do is plt.imshow, and we're gonna do x_train,

  • and we'll do the zeroth index.

  • So one thing we could do, just for the record,

  • let me just print it so you can see what we're talking about here. So this is just going to be an array;

  • it'll be a multi-dimensional array, which is all a tensor is, by the way.

  • So this is, here's your tensor,

  • right?

  • Okay, so that's the actual data that we're gonna attempt to pass through our neural network, and just to show you, if we were

  • to actually graph it and then do a plt.show(), it's gonna be the number, and you can just excuse the color;

  • it's definitely black and white. It's a single color; it's binary.

  • So one thing we could say is the color map is equal to plt.cm.binary, for color map

  • binary. Re-plot it and there you go; it's not a color image.
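A sketch of the plotting snippet being typed here:

```python
import matplotlib.pyplot as plt

print(x_train[0])  # the raw multi-dimensional array (a tensor)

plt.imshow(x_train[0], cmap=plt.cm.binary)  # binary colormap: black and white
plt.show()
```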

  • So anyways back to our actual code up here

  • Once we have the data, one thing we want to do is normalize that data;

  • so again, if I print it out, you can see it's data that seems to vary from 0 to...

  • looks like we have as high as 253. It's 0 to 255 for pixel data.

  • So what we want to do is scale this data, or normalize it, but really what we're doing in this normalization is scaling it.

  • So we're going to just redefine x_train and x_test, but it's gonna be tf.keras.utils.normalize,

  • and we're gonna pass x_train, and it'll be on axis 1, and then we're gonna copy,

  • paste, and we're gonna do the exact same thing for x_test.
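Those two lines, roughly:

```python
# Scale the 0-255 pixel values down to the 0-1 range
x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)
```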

  • All this does... let's just run that, and then we'll run this again, and you can see how the 5 has changed a little bit;

  • looks like it got a little lighter. And

  • then we come down here and we can see the values here are now

  • scaled between 0 and 1, and that just makes it easier for a network to learn. We don't have to do this,

  • but at the end of this (we probably won't have time),

  • if you want to, you know, comment those lines out and see how it affects the network. It's pretty significant.

  • Ok. So the next thing we're gonna do now is actually build the model

  • So the model itself is gonna start as tf.keras.models. and then it's going to be the Sequential type of model.

  • There's two types of models;

  • the Sequential is your most common one. It's a feed-forward model, like the image we drew.

  • So we're gonna use this Sequential model, and then from here we can use this model.add syntax,

  • so the first layer is gonna be the input layer and now right now our images are 28 by 28 in this like

  • Multi-dimensional array we don't want that

  • We want them to be just, like, flat. If we were doing like a convnet or something like that,

  • we might not want it to be flat,

  • but in this case

  • we definitely want to flatten it. So we could use, like, numpy and reshape, or

  • we can actually use one of the layers that's built into Keras, which is Flatten. So

  • we're gonna do model.add, and what we're gonna add is tf.keras.layers.Flatten.

  • So one of the reasons why you want this to actually be a layer type is, like, when you have a

  • convolutional neural network, a lot of times at the end of the convolutional neural network there'll be just like a densely connected

  • layer, and so you need to flatten it before that layer. So it's used for more than just the input layer;

  • we're just using it for the input layer

  • just to make our lives easier. So once we've got that,

  • that's our input layer. Now we want to do our hidden layers. Again,

  • we're going to go with, I think, just two hidden layers. This isn't a complex problem to solve.

  • So again, we're going to use the model.add syntax, and we're gonna add... in fact,

  • I think what I'm gonna do is copy, paste, and then rather than a Flatten layer, it's a Dense layer. In the Dense layer

  • we're gonna pass a couple parameters here. So the first one is gonna be how many units in the layer: we're gonna use

  • 128 units, or 128 neurons, in the layer. Then we're gonna pass the activation function.

  • This is the function, like I said, like a stepper function or a sigmoid function,

  • that is gonna make that neuron fire, or sort of fire, whatever. So we're gonna use tf.nn.relu,

  • for rectified linear. This is kind of the default go-to

  • activation function; just use it as your default, and then later you can tweak it to see if you can get better results.

  • But it's a pretty good one to always fall back on.

  • So we're gonna add the second one just by copying and pasting

  • Again, I'm gonna add a final one, and this is our output layer, which is still going to be a Dense layer.

  • It's not 128: your output layer, in the case of classification

  • anyways, will always have your number of classifications. In our case, that's 10. And

  • the activation function, we don't want it to be relu, because this is actually like a

  • probability distribution, so we want to use softmax for a probability distribution. So

  • that is our entire model. We're done with defining, you know, the architecture, I guess, of our model.
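Assembled, the architecture just described looks like this:

```python
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())                            # input layer: 28x28 image -> flat 784
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))    # hidden layer 1
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))    # hidden layer 2
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))  # output layer: one neuron per digit
```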

  • Now what we need to define is some parameters for the training of the model

  • So here we're going to say model.compile, and in here

  • we're gonna pass the optimizer that we want to use.

  • We're gonna pass the loss metric. Which loss? You don't know, we haven't really talked about it: loss is the degree of error.

  • Basically, it's what you got wrong. So a neural network doesn't actually attempt to optimize for accuracy;

  • it doesn't try to maximize accuracy. It's always trying to minimize loss.

  • So the way that you calculate loss can make a huge impact, because what matters is the loss's

  • relationship to your accuracy.

  • Optimizer:

  • okay, so the optimizer that we're going to use is going to be the Adam optimizer. You could use something else;

  • this is basically, like,

  • the most complex part of the entire neural network.

  • So if you're familiar with gradient descent, you could pass something like stochastic gradient descent,

  • but the Adam optimizer, kind of like the rectified linear unit, is, you know,

  • just kind of the default go-to optimizer. You can use others;

  • there's lots of them... not lots, but, I don't know, ten or so in Keras anyways. So anyways,

  • there's other ones to go with; Adam seems to be the one that you should start with. For loss, again,

  • There's many ways to calculate loss

  • probably the most popular one is

  • categorical

  • cross-entropy, or some version of that. In this case, we're gonna use sparse categorical cross-entropy.

  • You can also use binary, like in the case of cats versus dogs; you'd probably use binary in that case,

  • but you could just kind of blanket categorical cross-entropy over everything.

  • Anyways, then finally: what are the metrics we want to track, like, as we go? We're going to just do

  • accuracy.
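So the compile call, as a sketch:

```python
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```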

  • Okay. So once we have all this, we're actually ready to train the model.

  • So to train, it's just model.fit, and then you're gonna pass... what do you want to fit?

  • So x_train, x_test... I'm sorry:

  • x_train, y_train, and then epochs=3 to train for. Okay, let's go ahead and run that.
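That fit call is just:

```python
model.fit(x_train, y_train, epochs=3)
```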

  • We should start to get some training

  • Hopefully it doesn't crash on me as I'm recording, but okay, looks good.

  • Let's zoom out just slightly

  • So it looks a little better, and we can see actually our accuracy is already quite good.

  • Our loss is also still dropping so our accuracy should still be improving and sure enough it is

  • Awesome, ok, so we did pretty good. We got a

  • 97% accuracy after only three epochs which is pretty good

  • So once we have this, we can...

  • well, this was in-sample, so this is always gonna really excite you,

  • but what's really important to remember is neural networks are great at fitting; the question is, did they overfit?

  • So the idea, or the hope, is that your model actually generalized, right? It learned

  • patterns and actual attributes of what makes an 8 an 8,

  • what makes a 4 a 4, rather than memorizing every single sample you passed. And you'd be surprised how easily a

  • model can just memorize all the samples that you passed and do very well.

  • So the next thing we always want to do is calculate the

  • validation loss and the validation

  • accuracy, and that is just model.evaluate,

  • x_test,

  • y_test, and

  • then we'll go ahead and just print that val loss and val accuracy.
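As a sketch:

```python
# Evaluate on data the model has never seen
val_loss, val_acc = model.evaluate(x_test, y_test)
print(val_loss, val_acc)
```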

  • We can see here the loss is almost 0.11 and the accuracy is at 96.5 percent,

  • so a little less than the one that we ended on, and the loss is quite a bit higher, relatively. But

  • you should expect that:

  • you should expect your out-of-sample accuracy to be slightly lower and your loss to be slightly higher.

  • What you definitely don't want to see is too much of a delta. If there's a huge delta,

  • chances are you probably already have overfit, and you'd want to kind of dial it back a little bit. So

  • That's basically everything

  • as far as the basics of

  • Keras and all that.

  • The only other thing that I wouldn't mind

  • covering here is, like, if you want to save a model and load a model.

  • It's just model.save, and we can save this as epoch_num_reader.model,

  • and

  • then, if you want to reload that model,

  • we'll call it new_model; that's going to be equal to tf.keras.models.load_model,

  • and it's this exact model name.

  • Whoops. There we go.

  • So that's our new model
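Those save/load lines, sketched out (the filename is arbitrary; epoch_num_reader.model just matches what's said here):

```python
model.save('epoch_num_reader.model')  # filename is whatever you like

new_model = tf.keras.models.load_model('epoch_num_reader.model')
```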

  • And then finally, if we wanted to make a prediction, we could say predictions equals new_model.predict,

  • and keep in mind predict always takes a list. This will get you a few times, for sure.

  • But anyways, it'll take a list, and we'll do x_test. And

  • Then if we just print predictions, it's probably not gonna look too friendly

  • It's a little messy. So

  • What's going on here? These are all one-hot-style arrays; these are our probability distributions.

  • So what do we do with these? I'm gonna use numpy. You can also use TF's argmax, but it's abstract:

  • it's a tensor, and we'd have to pull it down; we'd need a session and all that.

  • It's just easier to import numpy, at least for this tutorial here:

  • import numpy as np, and then print, for example, np.argmax.

  • Let's do predictions,

  • and let's just do the zeroth prediction. Okay,

  • it says it's a seven.

  • So the prediction for x_test[0], like the 0th index, is a seven.
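That prediction snippet, roughly:

```python
import numpy as np

predictions = new_model.predict(x_test)  # predict takes a list/array of samples
print(np.argmax(predictions[0]))         # index of the largest probability, e.g. 7
```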

  • So gee, if only we had a way to draw it. Okay, we can definitely do this. So we can do plt.imshow,

  • and we're gonna do x_test[0], and

  • then plt.show().
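Which is simply:

```python
plt.imshow(x_test[0], cmap=plt.cm.binary)
plt.show()
```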

  • Look at that. It's a seven

  • Okay

  • So I think that's basically all the things I would want to show you guys as far as, like, just a quick,

  • you know, quickstart with

  • deep learning in Python and Keras and TensorFlow. This just barely scratches the surface; there's so many things for us to do. I

  • Definitely plan to have at least one more follow-up video

  • covering things like loading in outside datasets, and definitely some TensorBoard: reading the model, understanding what's going on,

  • and also what's going wrong, because that's...

  • eventually, you know, it's really fun when we're doing tutorials and problems are, like, already solved and we know the answer.

  • It's very exciting. But in reality, a lot of times you have to dig to find the model that works with your data. So

  • anyways, that's definitely something we have to cover, or at least that you're gonna have to learn somehow or other.

  • Anyway, that is all for now. If you got questions comments concerns, whatever. Feel free to leave them below

  • Definitely check out reddit.com/r/machinelearning, the machine learning subreddit.

  • You can come join our discord

  • If you've got questions, that's just discord.gg/sentdex; that will get you there. Also, special thanks to my most recent

  • channel members: Daniel Jeffrey KB AB Ajit H nur Newcastle geek fubá 44 Jason and eight counts.

  • Thank you guys so much for your support; without you guys, I couldn't do stuff like this. So really, thank you.

  • Anyways, that's it for now more to come at some point until next time
