
  • Hey, guys.

  • And welcome to a brand new tutorial series on neural networks with Python and TensorFlow 2.0.

  • Now, TensorFlow 2.0 is the brand new version of TensorFlow, still actually in the alpha stage right now.

  • But it should be released within the next few weeks.

  • And because it's in alpha, TensorFlow has been kind enough to release that alpha version to us.

  • So that's what we're gonna be working with in this tutorial series.

  • And this will work for all future versions of TensorFlow 2.0.

  • So don't be worried about that. Now, before I get too far into this first video, I just want to quickly give you an overview of exactly what I'm gonna be doing throughout this series, so you guys have an idea of what to expect and what you're going to learn.

  • Now, the beginning videos, and especially this one, are gonna be dedicated to understanding how a neural network works.

  • And I think this is absolutely fundamental, in that you have to have some kind of grasp of the math behind a neural network before you're really able to properly implement one.

  • Now, TensorFlow does a really nice job of making it super easy to implement neural networks and use them.

  • But to actually have a successful and complex neural network, you have to understand how they work on a lower level, so that's what we'll be doing for the first few videos.

  • After that, what we'll do is we'll start designing our own neural networks that can solve the very basic MNIST datasets that TensorFlow provides to us.

  • Now, these are pretty straightforward and pretty simple, but they give us a really good building block for understanding how the architecture of a neural network works.

  • What some of the different activation functions are, how you can connect layers, and all of that, which will transition us nicely into creating our own neural networks, using our own data, for something like playing a game.

  • Now, personally, I'm really interested in neural networks playing games, and I'm sure a lot of you are as well.

  • And that's what I'm gonna be aiming to do near the end of the series; our kind of larger project will be designing a neural network and tweaking it so it can play a very basic game that I have personally designed in Python with Pygame.

  • Now, with that being said, that's kind of it for what we'll be doing in this series.

  • I may continue this in later videos into, like, very specific neural network series, maybe chatbots or something like that.

  • But I need you guys to let me know what you'd like to see in the comments down below.

  • With that being said, if you're excited about the series, make sure you drop a like on this video and subscribe to the channel to be notified when I post new videos.

  • And with that being said, let's get into this first video on how a neural network works and what a neural network is.

  • So let's start talking about what a neural network is and how they work.

  • Now, when you hear a neural network, you usually think of neurons.

  • Now, neurons are what compose our brain, and I believe, don't quote me on this, we have billions of them in our brain.

  • Now, the way that neurons work, on a very simple and high level, is you have a bunch of them that are connected in some kind of way.

  • Let's say these are four neurons and they're connected in some kind of pattern.

  • Now, in this case, our pattern is completely, like, random, just arbitrary.

  • We're just picking a connection.

  • But this is the way that they're connected.

  • Okay. Now, neurons can either fire or not fire, so a neuron can be on or off, just like a one or a zero.

  • Okay, so let's say that for some reason this neuron decides to fire, maybe you touch something.

  • Maybe you smelt something.

  • Something fires in your brain, and this neuron decides to fire.

  • Now it's connected to, in this case, all of the other neurons.

  • So what it will do is it will look at its other neurons and the connection, and it will possibly cause its connected neurons to fire or to not fire.

  • So in this case, let's say maybe this one firing causes this connected neuron to fire, and this one to fire.

  • And maybe this one was already firing, and now it's decided to turn off, or something like that.

  • Okay, so that's what happened. Now, when this neuron fires, well, it's connected to this neuron and it's connected to this one.

  • Well, it's already got that connection.

  • But let's say that maybe when this one fires, it causes this one to un-fire because it was just firing, something like that, right?

  • And then this one.

  • Now that it's off, it causes this one to fire back up, and then it goes on.

  • It's just a chain of firing and un-firing, and that's just kind of how it works, right?

  • Firing and un-firing.

  • Now, that's as far as I'm gonna go into explaining neurons.

  • But this kind of gives us a little bit of a basis for a neural network.

  • Now, a neural network essentially is connected layers of neurons, so multiple layers of neurons.

  • So in this case, let's say that we have a first layer.

  • We're gonna call this our input layer; it has four neurons.

  • And we have one more layer that only contains one neuron.

  • Now, these neurons are connected.

  • Now in our neural network, we can have our connections happening in different ways.

  • We could have each neuron connected to each other neuron from layer to layer, or we could have, like, some connected to others, some not connected, some connected multiple times.

  • It really depends on the type of neural network we're doing.

  • In most cases, what we do is we have what's called a fully connected neural network, which means that each neuron in one layer is connected to each neuron in the next layer.

  • Exactly one time.

  • So if I were to add another neuron here, then what would happen is each of these neurons would also connect to this neuron one time, so it would have a total of eight connections, because four times two is eight, and that's how that would work.

  • Now, for simplicity's sake, we're just gonna use one neuron in the next layer, just to make things a little bit easier to understand.
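As a tiny illustration of the fully connected idea above, the number of connections between two layers is just the product of their sizes; `num_connections` is a hypothetical helper written purely for this example, not part of any library:

```python
# In a fully connected (dense) layer, every neuron in one layer
# connects to every neuron in the next layer exactly once, so the
# connection count is simply the product of the two layer sizes.
def num_connections(layer_a: int, layer_b: int) -> int:
    return layer_a * layer_b

print(num_connections(4, 1))  # the example above: 4 inputs, 1 output -> 4
print(num_connections(4, 2))  # adding a second output neuron -> 8
```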

  • Now, all of these connections have what is known as a weight.

  • Now, this is in a neural network specifically, okay? So we're going to say this is known as weight one.

  • This is known as weight two.

  • This is weight three, and this is weight four. And again, just to re-emphasize, this is known as our input layer because it is the first layer in our connected layers of neurons. And going with that, the last layer in our connected layers of neurons is known as our output layer.

  • Now, these are the only two layers that we really concern ourselves with when we look at and use a neural network.

  • Now, obviously, when we create them, we have to determine what layers we're gonna have in the connection type.

  • But when we're actually using the neural network to make predictions or to train it, we're only concerning ourselves with the input layer and the output layer.

  • Now, what does this do?

  • And how do these neural networks work?

  • Well, essentially, given some kind of input, we want to do something with it and get some kind of output, right?

  • In most instances, that's what you want.

  • Input results in the output.

  • In this case, we have four inputs and we have one output.

  • But we could have a case where we have four inputs and we have 25 outputs, right?

  • It really depends on the kind of problem we're trying to solve.

  • So this is a very simple example, but what I'm going to do is show you how a neural network would work to train a very basic snake game.

  • So let's look at a very basic snake game.

  • So let's say this is our snake, okay?

  • And this is his head.

  • Um, actually, yeah, let's say this is his head.

  • But, like, this is what the position of the snake looks like, where this is the tail.

  • Okay, we'll circle the tail.

  • Now, what I want to do is I want to train a neural network that will allow this snake to stay alive.

  • So essentially, its output will be what direction to go in or like to follow a certain direction or not.

  • Okay, essentially, just keep this snake alive.

  • That's what I wanted to do.

  • Now, how am I gonna do this?

  • Well, the first step is to decide what our input is gonna be and then to decide what our output is gonna be.

  • So in this case, I think a clever input is gonna be.

  • Do we have something in front of the snake?

  • Do we have something to the left of the snake?

  • And do we have something to the right of the snake?

  • Because in this case, all that's here is just the snake, and he just needs to be able to survive.

  • So what we'll do is we'll say: okay, is there something to the left? Yes or no?

  • Something in front? Yes or no? So a 0 or a 1.

  • Something to the right? Yes or no?

  • And then our last input will be a recommended direction for the snake to go in.

  • So the recommended direction could be anything.

  • So in this case, maybe we'll say the recommended direction is left and what our output will be is whether or not to follow that recommended direction.

  • Or not to follow it, and essentially go in a different direction.

  • So let's do one case on how we would expect this neural network to perform once it's trained, based on some given input.

  • So let's say there's not something to the left.

  • So we're gonna put a zero here, because this one will represent if there's anything to the left; the next one will be front.

  • So we'll say there's nothing in front. The next one will be to the right, and we'll say yes, there is something to the right of the snake.

  • And our recommended direction could be anything we like.

  • So in this case, we'll say the recommended direction is left, and we'll denote the recommended direction as -1, 0, or 1, where -1 is left, 0 is in front, and 1 is to the right.

  • Okay, so we'll say in this case our recommended direction is -1, and we'll just denote this by "direction".

  • Now, our output in this instance should be either a zero or a one, representing: do we follow the recommended direction, or do we not?

  • So let's see, in this case, following the recommended direction would keep our snake alive.

  • So we'll say one.

  • Yes, we will follow the recommended direction.

  • That is acceptable.

  • That is fine.

  • We're going to stay alive when we do that.

  • Now let's see what happens when we change the recommended direction to be right.

  • So let's say that we set 1 as the recommended direction again.

  • This is the direction here.

  • Then what should our output be?

  • Well, if we decide to go right, we're gonna crash into our tail, which means that we should not follow that direction, so the output should be zero.

  • So I hope you're understanding how we would expect this neural network to perform.
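To make the expected behavior concrete, here's a minimal Python sketch of what we hope the trained network will learn. The `follow_recommendation` function is a hypothetical hand-written stand-in for the network, written just for this example:

```python
# A hand-written stand-in for the behaviour we hope the trained
# network will learn: follow the recommended direction only if the
# corresponding square is free. Direction encoding: -1 = left,
# 0 = front, 1 = right; each occupancy flag is 0 (free) or 1 (blocked).
def follow_recommendation(left: int, front: int, right: int, direction: int) -> int:
    blocked = {-1: left, 0: front, 1: right}
    return 0 if blocked[direction] else 1

# The two cases walked through above:
print(follow_recommendation(0, 0, 1, -1))  # nothing to the left -> follow (1)
print(follow_recommendation(0, 0, 1, 1))   # tail to the right  -> don't (0)
```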

  • All right, so now how do we actually design this neural network?

  • How do we get this to work?

  • How do we train this?

  • Right.

  • Well, that is a very good question.

  • And that is what I'm gonna talk about now.

  • So let me actually just erase some of this stuff.

  • So we have a little bit more room to work with some math stuff right here.

  • But right now, what we'll start by doing is designing what's known as the architecture of our neural network.

  • So we've already done this.

  • We have the input and we have the output.

  • Now each of our inputs is connected to our outputs.

  • And each of these connections has what's known as a weight.

  • Now, another thing that we have is each of our input neurons has a value, right.

  • In this case, we either had zero or we had one.

  • Now, these values can be different, right?

  • These values can either be decimal values or they could be like between zero and 100.

  • They don't have to be just between zero and one.

  • But the point is that we have some kind of value.

  • Right?

  • So what we're gonna do in this output layer to determine what way we should go is essentially we're going to take the weighted sum of the values multiplied by the weight.

  • And I'll talk about how this works more in depth in a second, but just follow me for now.

  • So what this symbol (Σ) means is: take a sum. And what we do is, I'm going to say in this case i, which is gonna be our variable, and I'll talk about how this kind of thing works in a second; we'll say i = 1, and we'll take the weighted sum of, in this case, value i multiplied by weight i.

  • So what this means essentially is we're gonna start at i = 1, and we're gonna use i as our variable for looping.

  • And we're going to say, in this case, v_i times w_i, and then we're gonna add all of those.

  • So what this will return to us, actually, will be v1·w1 + v2·w2 + v3·w3 + v4·w4, and this will be our output.

  • That's what our output layer is going to have as a value.

  • Now, this doesn't really make much sense right now, right? Like, why are we doing this weight multiplication? What is this?

  • Well, just follow with me for one second.

  • So this is what our output layer is going to do.

  • Now, there's one thing that we have to add to this as well.

  • And this is what is known as our biases.

  • Okay, so what we're gonna do is we're going to take this weighted sum, but we're also going to have some kind of bias on each of these weights, okay?

  • And this bias is typically denoted by c, but essentially, it is some value that we just automatically add or subtract.

  • It's a constant value for each of these weights.

  • So we're gonna say all of these connections have a weight, but they also have a bias.

  • We're gonna have b1, b2, b3, and b4; I'll just call it b instead of c.

  • So what I'll do here is I'm also gonna add these biases in when I do these weights; we're gonna say plus b_i as well.

  • So now what we'll have at the end here is + b1 + b2 + b3 + b4.

  • Now, again, I know you guys are like: what the heck am I doing? This makes no sense.

  • It's gonna make sense in one second.
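The weighted sum plus biases described above can be sketched in plain Python like this; the weight and bias values are made up purely for illustration:

```python
# The output neuron's raw value: the weighted sum of the input
# values plus the biases, i.e. v1*w1 + b1 + v2*w2 + b2 + ...
def weighted_sum(values, weights, biases):
    return sum(v * w + b for v, w, b in zip(values, weights, biases))

values  = [0, 0, 1, -1]          # the snake inputs from the example above
weights = [0.5, -0.2, 0.8, 0.1]  # made-up weights
biases  = [0.0, 0.0, 0.1, 0.0]   # made-up biases
print(weighted_sum(values, weights, biases))
```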

  • So now what we need to do is we need to train the network.

  • So we've understood now this is essentially what the output layer is doing.

  • We're taking all of these weights and these values, we're multiplying them together, and we're adding them; we're taking what's known as the weighted sum.

  • Okay, but what are these values? How do we get these values, and how is this gonna give us an output?

  • Well, what we're gonna do is we're gonna train the network on a ton of different information.

  • So let's say we play 1000 games of snake and we get all of the different inputs and all of the different outputs.

  • So what we'll do is we'll randomly decide, like, a recommended direction, and we'll just take the state of the snake, which will be: is there something to the left, to the right, or in front of it?

  • And then we'll take the output, Which will be like, Did the snake survive?

  • Or did the snake not survive?

  • So what we'll do is we'll train the network using that information, so we'll generate all of this different information and then train the network.

  • And what the network will do is it will look at all of this information, and it will start adjusting these biases and these weights to properly get a correct output.

  • Because what we'll do is we'll give it all this input, right?

  • So let's say we give it the input again of 0, 1, 0, and maybe 1, like this.

  • So random input.

  • And let's say the output for this case is, um, what do you call it.

  • So 1 is go to the right.

  • The output is 1, which is correct.

  • Well, what will the network do? It'll say: okay, I got that correct.

  • So what I'm gonna do is I'm not gonna bother adjusting the network, because this is fine; I don't have to change any of these biases.

  • I don't have to change any of these weights.

  • Everything is working fine.

  • But let's say that we get the answer wrong.

  • So maybe the output was zero.

  • But the answer should have been one because we know the answer, obviously, because we've generated all the input and the output.

  • So now what the network will do is it will start adjusting these weights and adjusting these biases.

  • It'll say: all right, so I got this one wrong, and I've gotten, like, five or six wrong before, and this is what was similar when I got something wrong.

  • So let's add one to this bias.

  • Or let's multiply this weight by two.

  • And what it will do is it'll start adjusting these weights in these biases so that it gets more things correct.

  • So obviously, that's why neural networks typically take a massive amount of information to train, because what you do is you pass it all of this information, and then it keeps going through the network.

  • And at the beginning it sucks, right, because the network starts off with random weights and random biases.

  • But as it goes through and it learns, it says Okay, well, I got this one, correct.

  • So let's leave the weights and the biases the same.

  • But let's remember that this is what the weight and the bias were when this was correct. And then maybe it gets something wrong.

  • And it says: okay, so let's adjust bias one a little bit.

  • Let's adjust weight one, let's mess with these, and then let's try another example.

  • And then it's like: okay, I got this example right.

  • Maybe we're moving in the right direction.

  • Maybe we'll adjust another weight, maybe we'll adjust another bias.

  • And eventually, your goal is that you get to a point where your network is very accurate, because you've given it a ton of data and it's adjusted the weights and the biases correctly, so that this kind of formula here, this weighted sum, will just always give you the correct answer, or has a very high chance of giving you the correct answer.
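The adjust-only-when-wrong loop described above can be sketched with the classic perceptron update rule. This is a deliberate simplification, real networks use gradient descent on a loss function instead, and the tiny two-example dataset here is made up:

```python
# A deliberately simplified "adjust the weights when the answer is
# wrong" loop, in the spirit of the description above (real networks
# use gradient descent on a loss function instead).
def train_step(weights, bias, inputs, target, lr=0.1):
    output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
    error = target - output          # 0 when correct: leave everything alone
    new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    new_bias = bias + lr * error
    return new_weights, new_bias

w, b = [0.0, 0.0], 0.0               # start with "random" (here: zero) weights
for _ in range(20):                  # repeat over the same tiny dataset
    w, b = train_step(w, b, [1, 0], 1)   # this input should fire
    w, b = train_step(w, b, [0, 1], 0)   # this one should not
print(w, b)
```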

  • So I hope that kind of makes sense.

  • I'm definitely oversimplifying things in how the adjustment of these weights and biases works, but it's not crazy important, and we're not going to be doing any of the adjustment ourselves; we're just gonna be kind of tweaking a few things with the network.

  • So as long as you understand that when you feed it information, what happens is it checks whether the network got it correct or got it incorrect, and then it adjusts the network accordingly.

  • And that is how the learning process works for a neural network.

  • All right, so now it's time to discuss a little bit about activation functions.

  • So right now, what I've actually just described to you is a very advanced technique of linear regression.

  • So essentially I was saying, we're adjusting weights, we're adjusting biases, and essentially we're creating a function that, given the inputs of, like, x, y, z, w, like left, front, right, gives some kind of output.

  • But all we've been doing to do that, essentially, is just adjusting a linear function, because our degree is only one, right?

  • We have weights of degree one multiplying by values of degree one, and we're adding some kind of bias, and that kind of reminds you of the form mx + b.

  • We're literally just adding a bunch of mx + b's together, which gives us, like, a fairly complex linear function.

  • But this is really not a great way to do things because it limits the degree of complexity that our network can actually have to be linear.

  • And that's not what we want.

  • So now we have to talk about activation functions.

  • So if you understand everything that I've talked about so far, you're doing amazing that this is great.

  • You understand that essentially, the way that the network works is you feed information in and it adjusts these weights and biases.

  • There's a specific way it does that which will talk about later, and then you get some kind of output.

  • And based on that output, you're trying to adjust the weights and biases and all that, right?

  • So now we need to talk about activation functions.

  • What an activation function does is, it's essentially a nonlinear function that will allow you to add a degree of complexity to your network, so that you can have more of a function that's like this, as opposed to a function that is a straight line.

  • So an example of an activation function is something like a sigmoid function.

  • Now the sigmoid function, what it does is it'll map any value you give it to a value between zero and one.

  • So, for example, when we create this network, our output might be like the number seven.

  • Now this number seven, well, it is closer to one than it is to zero, so we might deem that a correct answer.

  • Or we might say that this is actually way off because it's way above one.

  • Right.

  • But what we want, essentially, in our output layer is for our values to be within a certain range.

  • We want them to be, in this case between zero and one.

  • Or maybe we want them to be between negative one and one, saying, like, how close we are to zero in making that decision, or how close we are to one, something like that, right?

  • So the sigmoid activation function is a nonlinear function, and it takes any value.

  • And essentially, the closer that value is to infinity, the closer the output is to one, and the closer that value is to negative infinity, the closer the output is to zero.

  • So what it does is it adds a degree of complexity to our network.
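A minimal sketch of the sigmoid function in Python, using the standard formula 1 / (1 + e^(-x)):

```python
import math

# The sigmoid activation: squashes any real number into (0, 1).
# Large positive inputs approach 1; large negative inputs approach 0.
def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0))    # 0.5, exactly in the middle
print(sigmoid(7))    # the "raw output of 7" example, squashed close to 1
print(sigmoid(-7))   # close to 0
```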

  • Now, if you're not a high-level math student, or you only know, like, very basic high school math, this might not really make sense to you, but essentially, the degree of something is honestly how complex it can get.

  • If you have, like, a degree-nine function, then what you can do is have some crazy kind of curves and stuff going on, especially in multiple dimensions, that will just make things much more complex.

  • So, for example, if you have, like, a degree-nine function, you can have curves that are going, like, all around here, mapping your different values.

  • And if you only have a linear function, well, you can only have a straight line, which limits your degree of complexity by a significant amount. Now, what these activation functions also do is they shrink down your data so that it is not as large.

  • So, for example, say we're working with data that is, like, hundreds of thousands of characters or digits long; we'd want to shrink that, like normalize that data, so that it's easier to actually work with.

  • So let me give you a more practical example of how to use the activation function.

  • I talked about what sigmoid does.

  • What we would do is we would take this weighted sum, the sum of w_i · v_i plus b_i, right, and we would apply an activation function to it.

  • So I would say maybe our activation function is f(x), and we would say f of this, and this gives us some value, which is now gonna be our output neuron.

  • And the reason we do that, again, is so that when we are adjusting our weights and biases, we have the activation function, and now we can have a way more complex function, as opposed to just having the kind of linear regression straight line, which is what we've talked about in my other machine learning courses.

  • So if this is kind of going a little bit over your head, it may be my lack of explaining it.

  • I'd love to hear in the comments below what you think of this explanation, but essentially, that's what the activation function does.

  • Another activation function that is very popular, and is actually used way more than sigmoid nowadays, is known as the rectified linear unit, or ReLU.

  • And what it does, let me draw it in red, actually, so we can see it better: it takes all of the values that are negative and automatically sets them to zero, and takes all of the values that are positive and just leaves them positive.

  • And what this is gonna do, it's a nonlinear function, so it's going to enhance the complexity of our model and just put our data points in the range between zero and positive infinity, which is better than between negative infinity and positive infinity when we're calculating error.
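A minimal sketch of ReLU in Python:

```python
# The rectified linear unit (ReLU): negative values become 0,
# positive values pass through unchanged.
def relu(x: float) -> float:
    return max(0.0, x)

print([relu(x) for x in [-2.0, -0.5, 0.0, 1.5, 3.0]])
```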

  • All right, last thing to talk about for neural networks in this video, since I'm trying to kind of get everything briefly into one long video, is the loss function.

  • So this is again gonna help us understand how these weights and these biases are actually adjusted.

  • So we know that they're adjusted.

  • And we know that what we do is we look at the output, and we compare it to what the output should be from our test data, and then we say: okay, let's adjust the weights and the biases accordingly.

  • But how do we adjust that?

  • And how do we know how far off we are?

  • How much to tune by, if an adjustment needs to be made?

  • Well, we use what's known as a loss function; a loss function essentially is a way of calculating error.

  • Now, there's a ton of different loss functions.

  • Some of them are like mean squared error.

  • That's the name of one of them.

  • I think one is like, um, I can't even remember the name of this one.

  • But there's there's a bunch of very popular ones.

  • If you know some, leave them in the comments.

  • Love to hear all the different ones.

  • But anyways, what the loss function will do is tell you how wrong your answer is, because, like, let's think about this, right?

  • If you get an answer of, let's say, maybe our output is like 0.79 and the actual answer was one.

  • Well, that's pretty close, like that's pretty close to one.

  • But right now, all we're gonna get is the fact that we were 0.21 off.

  • So we'd adjust the weights to a certain degree based on that 0.21.

  • But the thing is, what if we get, like, 0.85? This is significantly better than 0.79, but this is only going to say that we were off by, what is this, 0.15.

  • So we're still going to do a significant amount of adjusting to the weights and the biases.

  • So what we need to do is apply a loss function to this that will give us a better kind of degree of, like, how wrong or how right we were.

  • Now, these loss functions are, again, nonlinear functions, which means that we're gonna add a higher degree of complexity to our model, which will allow us to create way more complex models and neural networks that can solve harder problems.

  • I don't really want to talk about loss functions too much because I'm definitely no expert on how they work.

  • But essentially, what you do is you're comparing the output to what the output should be, so, like, whatever the model generated versus what it should be; and then you're gonna get some value, and based on that value, you are going to adjust the biases and the weights accordingly.

  • The reason we use a loss function, again, is because we want a higher degree of complexity; they're nonlinear, and, you know, if you're, say, 0.01 away from the correct answer, we probably want to adjust the weights very, very little.

  • But if you're, like, way off the answer, two whole points, maybe our answer is negative one.

  • We wanted to be one.

  • Well, we want to adjust the model like crazy, right?

  • Because that model was horribly wrong.

  • It wasn't even close, so we would adjust it way more than just like two points of adjustment, right?

  • We'd adjust it based on whatever that loss function gave to us. And with that, this has kind of been my explanation of a neural network.
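The mean squared error mentioned above can be sketched in a few lines; squaring is what makes small misses cheap and big misses expensive, using the 0.85-versus-way-off examples from the discussion:

```python
# Mean squared error: one common loss function. Squaring the
# difference punishes large errors much more than small ones,
# which is exactly the nonlinear behaviour described above.
def mse(predictions, targets):
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(predictions)

print(mse([0.85], [1.0]))   # close answer -> small loss (about 0.0225)
print(mse([-1.0], [1.0]))   # two whole points off -> large loss (4.0)
```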

  • I want to state right here for everyone that I am no pro on neural networks.

  • This is my understanding.

  • There might be some stuff that's a little bit flawed or some areas that I skipped over. And quickly, actually, because some people are probably gonna point this out: when you're creating your own networks as well, you have another thing that is called hidden layers.

  • So right now we've only been using two layers.

  • But in most neural networks, what you have is a ton of different input neurons that connect to what's known as a hidden layer, or multiple hidden layers of neurons.

  • So let's say we have, like, an architecture.

  • Maybe that looks something like this.

  • So all these connections, and then these ones connect to this, and what this allows you to do is have way more complex models that can solve way more difficult problems, because you can generate different combinations of inputs and what is known as hidden-layer neurons to solve your problem, and have more weights and more biases to adjust.

  • Which means you can, on average, be more accurate when producing certain models.

  • So you can have crazy neural networks that look something like this, but with way more neurons and way more layers and all this kind of stuff. I just wanted to show a very basic network today, because I didn't want to go in and talk about, like, a ton of stuff, especially because I know a lot of people that watch my videos are not pro math guys.

  • You guys are just trying to get a basic understanding and be able to implement some of this stuff.
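A minimal plain-Python sketch of a forward pass through one hidden layer: 3 inputs, 2 hidden neurons, 1 output, all fully connected. The layer sizes, weights, and biases here are arbitrary placeholders, not a real trained network:

```python
import math

# One dense layer: for each neuron, take the weighted sum of the
# inputs plus a bias, then apply the activation function.
def dense(inputs, weights, biases, activation):
    # weights[j] holds the incoming weights for neuron j of this layer
    return [activation(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

inputs = [0.0, 1.0, 0.5]                                  # 3 input neurons
hidden = dense(inputs,                                    # 2 hidden neurons
               [[0.2, -0.4, 0.6], [0.9, 0.1, -0.3]],
               [0.1, -0.1], sigmoid)
output = dense(hidden, [[0.7, -0.5]], [0.05], sigmoid)    # 1 output neuron
print(output)
```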

  • Now, in today's video, what we'll be doing is actually getting our hands dirty and working with a bit of code and loading in our first data set.

  • So we're not actually gonna do anything with the model right now.

  • We're gonna do that in the next video.

  • This video is gonna be dedicated to understanding data, the importance of data, how we can scale that data, look at it, and understand how that's going to affect our model when training.

  • The most important part of machine learning, at least in my opinion, is the data.

  • And it's also one of the hardest things to actually get done correctly training the model and testing the model and using it is actually very easy, and you guys will see that as we go through.

  • But getting the right information to our model and having it in the correct form is something that is way more challenging than it may seem. With these initial datasets that we're gonna work with, things are gonna be very easy because the datasets are gonna be given to us.

  • But when we move on into future videos to using our own data, we're gonna have to pre process it.

  • We'll have to put it in its correct form.

  • We're gonna have to get it into an array.

  • We're gonna have to make sure that the data makes sense.

  • So we're not adding things that shouldn't be there or we're not omitting things that need to be there.

  • So anyways, I'm just gonna quickly say here that I am kind of working off of this TensorFlow 2.0 tutorial that is on TensorFlow's website.

  • Now, I'm kind of going to stray from it quite a bit, to be honest, but I'm just using the datasets that they have and a little bit of the code that they have here, because it's a very nice introduction to machine learning and neural networks.

  • But there's a lot of stuff in here that they don't talk about, and it's not very in depth.

  • So that's what I'm going to be adding in, and that's the reason why maybe you'd want to watch my version of this as opposed to just reading it off the website.

  • Because if you have no experience with neural networks, some of the stuff they do here is kind of confusing, and they don't really talk about why they use certain things or whatnot.

  • So anyways, the data set we're gonna be working with today is known as the Fashion-MNIST data set.

  • So you may have heard of the old MNIST, which is image classification.

  • But it was like digits.

  • So, like, you had digits from 0 to 9 and the neural network would classify digits; this one's a very similar principle.

  • Except we're gonna be doing it with, like, T-shirts and pants and, um, whatnot, like sandals and all that.

  • So these are kind of some examples of what the images look like, and we'll be showing them as well in the code.

  • So that's enough about it; I felt like I should tell you guys. The first thing that we're gonna be doing before we actually start working with TensorFlow is, we obviously need to install it now.

  • Actually, maybe I'll grab the install command here, so I can copy it.

  • But this is the install command for TensorFlow 2.0.

  • So I'm just gonna copy it here; the link will be in the description as well as on my website, and you can see it's pip install -q tensorflow==2.0.0-alpha0.

  • I already have this installed.

  • I'm gonna go ahead and hit enter anyways.

  • And the -q, I believe, just means don't give any output when you're installing.

  • So if this runs and you don't see any output whatsoever, then you have successfully installed TensorFlow 2.0.

  • Now, I ran into an issue where I couldn't install it because I had a previous version of NumPy installed on my system.

  • So if for some reason this doesn't work and there's something with NumPy, I would just pip uninstall NumPy and reinstall.

  • So do pip uninstall numpy, like that.

  • I'm obviously not gonna run that.

  • But if you did that and then you tried to reinstall TensorFlow 2.0, that should work for you, and it should actually install its own, most updated version of NumPy.

  • Now, another thing we're going to install here is going to be matplotlib.

  • Now, matplotlib is a nice library for just graphing and showing images and different information that we'll use a lot through this series.

  • So let's install that; I already have it installed.

  • But go ahead and do that.

  • And then finally, we will install pandas, which we may be using in later videos in the series.

  • So I figured we might as well install it now.

  • So pip install pandas, and once you've done that, you should be ready to actually go here and start getting our data loaded in and looking at the data.

  • So I'm just going to be working in Sublime Text and executing my Python files from the command line, just because this is something that will work for everyone no matter what; but feel free to work in IDLE or, if you prefer, work in PyCharm.

  • As long as you understand how to set up your environment so that you have the necessary packages like tensorflow and all of that, then you should be good to go.

  • So let's start by importing TensorFlow: import tensorflow as tf, like that.

  • I don't know why it always short-forms when I try to type this.

  • But anyways, we're gonna import, uh, actually, sorry, from tensorflow we will import keras. Now, Keras is an API for TensorFlow, which essentially just allows us to write less code.

  • It does a lot of stuff for us, like you'll see when we set up the model; we use Keras and it will be really nice and simple. It's just, like, a high-level API; that's the way that they describe it.

  • It makes things a lot easier for people like us that aren't gonna be defining our own tensors and writing our own code from scratch, essentially.

  • Now, another thing we need to import is NumPy, so we're gonna say, if I can get this here, import numpy as np.

  • And finally, we will import matplotlib.

  • So, import matplotlib.pyplot as plt.

  • And this again is just gonna allow us to graph some things here.

  • All right, so now what we're gonna do is we're actually gonna get our data set loaded in; the way that we can load in our data set is using Keras.

  • So to do this, I'm just gonna say data = keras.datasets.fashion_mnist.

  • And this is just the name of the data set.

  • There's a bunch of other data sets inside of Keras that we will be using in the future.

  • Now, whenever we have data, it's very important that we split our data into testing and training data.

  • Now, you may have heard this.

  • I talked about this in the previous machine learning tutorials I did.

  • But essentially, what you want to do with any kind of machine learning algorithm, especially a neural network, is you don't want to pass all of your data into the network when you train it. You want to pass about 80-90% of your data to the network to train it, and then you want to test the network for accuracy, making sure that it works properly, on the rest of your data that it hasn't seen yet.

  • Now, as for the reason you don't want to do this: a lot of people would say, why don't I just give all my data to the network to make it better? Not necessarily.

  • And that's because if you test your network on data it's already seen, then you can't be sure that it's not just simply memorizing the data it's seen, right?

  • For example, if you show me five images, um, and then you tell me the classes of all of them, and then you show me the same image again and you say, what's the class?

  • And I get it right.

  • Well, did I get it right?

  • Because I figured out how to analyze the images properly or because I'd already seen it.

  • And I knew what it was, right?

  • I just memorized what it was.

  • That's something we want to try to avoid with our models.

  • So whenever we have our data, we're gonna split it up into testing and training data, and that's what we're gonna do right here.
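To make the train/test idea concrete, here is a minimal sketch of holding data out, using made-up NumPy arrays rather than the real dataset (Keras actually hands us Fashion-MNIST already split, as shown next):

```python
import numpy as np

# Hypothetical stand-in data: 100 "images" and their labels, NOT the
# real Fashion-MNIST set.
images = np.random.rand(100, 28, 28)
labels = np.random.randint(0, 10, size=100)

# Hold out the last 20% so the network can be tested on data it never saw.
split = int(len(images) * 0.8)
train_images, test_images = images[:split], images[split:]
train_labels, test_labels = labels[:split], labels[split:]

print(len(train_images), len(test_images))  # 80 20
```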

  • So to do this, I'm gonna say (train_images, train_labels), (test_images, test_labels).

  • And then we say this is equal to data.load_data().

  • So not get_data; load_data, there we go.

  • Now, the reason we can do this is just because this load_data method is gonna return information in a way where we can kind of split it up like this.
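The return shape can be sketched with a hypothetical stand-in function; the real keras.datasets.fashion_mnist.load_data() returns two (images, labels) tuples with these shapes, which is why the one-line unpacking works:

```python
import numpy as np

# Stand-in for keras.datasets.fashion_mnist.load_data(): the real method
# returns 60,000 training and 10,000 test images, each 28x28 pixels.
def load_data():
    train = (np.zeros((60000, 28, 28), dtype=np.uint8),
             np.zeros(60000, dtype=np.uint8))
    test = (np.zeros((10000, 28, 28), dtype=np.uint8),
            np.zeros(10000, dtype=np.uint8))
    return train, test

# The nested-tuple return value is what lets us unpack in one line.
(train_images, train_labels), (test_images, test_labels) = load_data()
print(train_images.shape, test_images.shape)  # (60000, 28, 28) (10000, 28, 28)
```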

  • In most cases, when you're writing your own models for your own data, you're gonna have to write your own arrays and for loops and load in data and do all this fancy stuff.

  • But Keras makes it nice and easy for us.

  • Just by allowing us to write this line here, it will get us our training and testing data in the four kinds of variables that we need. So quickly, let me talk about what labels are now.

  • So for this specific data set, there are 10 labels That means each image that we have will have a specific label assigned to it.

  • Now, I'll actually show you by just printing one out: print, for example, train_labels.

  • And let's just print, like, the zeroth, uh, I guess the first, training label.

  • So let me just run this file.

  • So, python tutorial1.py; you can see that we simply get the number nine.

  • Now, this is just, like, the label representation.

  • So obviously it's not giving us a string.

  • But let's say I picked, for example, six, and I hit enter here.

  • You can see that the label is seven.

  • So the labels are between 0 and 9, so 10 labels in total.

  • Now, that's not very useful to us, because we don't really know what label 0 is or what label 9 is.

  • So what I'm gonna do is create a list that will actually define what those labels are.

  • So I'm gonna have to copy it from here, because I actually don't remember the labels.

  • But you can see it says here what they are.

  • So, for example, label 0 is a T-shirt, label 1 is a trouser, 9 is an ankle boot, and you can see what they all are.

  • So we just need to define exactly this list here, called class_names, so that we can simply take whatever value is returned to us from the model, whatever label it thinks it is, and then just throw that as an index into this list, so we can get what label this is.

  • All right, Sweet.
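For reference, the list being defined is (copied from the TensorFlow tutorial page mentioned above):

```python
# Class names for Fashion-MNIST, indexed by label 0-9.
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# A label from the dataset (or from the model) is just an integer,
# so it works directly as an index into this list.
label = 9
print(class_names[label])  # Ankle boot
```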

  • So that is, um, how we're getting the data now.

  • So now I want to show you what some of these images look like and talk about the architecture of the neural network we might use in the next video.

  • So I'm gonna use pyplot just to show you some of these images and explain kind of the input and the output and all of that.

  • So if you want to show an image using matplotlib, you can do this by just doing plt.imshow() and then, in here, simply putting the image.

  • So, for example, if I do train_images (not labels), and let's say we do the seventh image, and then I do plt.show(): if I run this now, you will see what this image is.

  • So let's run this, and you can see what we get; this is actually, I believe, like a pullover or a hoodie.

  • Now, I know it looks weird and you've got all this, like, green and purple; that's just because of the way that matplotlib kind of shows these images.

  • If you want to see it properly, what you do is, I believe, you set cmap equals, in this case, plt.cm-something.

  • I think it's like cm.binary or something.

  • I'm gonna have a look here, because I forget.

  • Ah, yeah, cm.binary.

  • So if we do this and now we decide to display the image, it should look a little bit better.

  • Let's see here.

  • Ah, and there we can see now we're actually getting this, like, black and white kind of image.
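Putting those pieces together, here is a runnable sketch of the display step; the image is a random stand-in (not the real train_images[7]), and the Agg backend plus savefig are used here only so it runs headless, where the video uses plt.show() instead:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: saves to a file instead of opening a window
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical stand-in for train_images[7]: a random 28x28 grayscale image.
image = np.random.randint(0, 256, size=(28, 28))

# cmap=plt.cm.binary draws the image in black and white rather than the
# default green/purple colormap.
plt.imshow(image, cmap=plt.cm.binary)
plt.savefig("pullover.png")  # in the video, plt.show() pops up the window
```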

  • Now, this is great and all, but let me show you what our image actually looks like.

  • So, like, how was I just able to show this image?

  • Well, the reason I'm able to do that is because all of our images are actually arrays of 28 by 28 pixels.

  • So let me print one out: I'll do train_images.

  • Let's do seven, the same example here, and print that to the screen.

  • I'll show you what the data actually looks like.

  • Give it a second and there we go.

  • So you can see this is obviously what our data looks like.

  • It's just a bunch of lists.

  • So, one list for each row, and it just has pixel values.

  • And these pixel values are simply representative of, I believe, like, how much... I don't actually know the scale that they're on, but I think it's like an RGB value, but in grayscale, right?

  • So, for example, we have, like, 0 to 255, where 255 is black and 0 is white.

  • And I'm pretty sure that's how we're getting the information in; someone can correct me if I'm wrong.

  • But I'm almost certain that that's how this actually works.

  • So this is great.

  • Now, these are large numbers.

  • And remember, I was saying before in the previous video that it's typically a good idea to shrink our data down so that it's within a certain range that is a bit smaller.

  • So in this case, what I'm actually gonna do is I'm going to modify this information a little bit so that we only have each value out of one.

  • So instead of having numbers out of 255, we have them out of one.

  • So the way to do that is to divide every single pixel value by 255, and because these train images are actually stored in what's known as a NumPy array, we can simply just divide it by 255 to achieve that.

  • So we'll say train_images = train_images / 255.0, and we'll do the same thing here with our test images as well.

  • Now, obviously, we don't have to modify the labels, because they're just between 0 and 9, and that's how the labels work.

  • But for the images, we're going to divide those values so that it's a bit nicer.
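As a tiny sanity check of that scaling step, here is what dividing a NumPy array of raw 0-255 pixel values by 255.0 does (the array here is made up, not a real image):

```python
import numpy as np

# Made-up grayscale pixel values in the raw 0-255 range.
image = np.array([[0, 128, 255],
                  [64, 32, 200]], dtype=np.uint8)

# NumPy applies the division element-wise, so every pixel lands in
# the 0-1 range.
scaled = image / 255.0
print(scaled.min(), scaled.max())  # 0.0 1.0
```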

  • So now let me show you what it looks like.

  • So I go python tutorial1.py.

  • And now you can see that we're getting these decimal values, and that our shirt looks, well, the same; it's just like we shrunk down our data.

  • So it's gonna be easier to work with in the future with our model.

  • Now, that's about it, I think, that I'm going to show you guys in terms of this data.

  • Now we have our data loaded in, and we're pretty much ready to go in terms of making a model.

  • Now, if you have any questions about the data, please don't hesitate to leave a comment down below. But essentially, again, the way it works is we're gonna have 28 by 28 pixel images, and they're gonna come in as an array, just as I've shown you here.

  • So these are all the values that we're gonna have.

  • We're gonna pass that to our model, and then our model is gonna spit out what class it thinks it is.

  • And those classes, they're gonna be between zero and nine.

  • Obviously, zero is gonna represent T shirt where nine is gonna represent ankle boot, and we will deal with that all in the next video.

  • Now, in today's video, we're actually gonna be working with the neural network.

  • So we're gonna be setting up a model.

  • We're gonna be training that model.

  • We're gonna be testing that model to see how well it performed. We'll also use it to predict on individual images and all of that fun stuff.

  • So without further ado, let's get started.

  • Now, the first thing that I want to do before we really get into actually writing any code is talk about the architecture of the neural network we're going to create now.

  • I always found in tutorials that I watched.

  • They never really explained exactly what the layers were doing, what they looked like and why we chose such layers.

  • And that's what I'm hoping to give to you guys right now.

  • So if you remember from before, we know now that our images come in essentially as, like, 28 by 28 pixels.

  • And the way that we have them is we have an array, and we have other arrays inside, like a two-dimensional array; each one has pixel values, maybe like 0.1, 0.3, which are the grayscale values.

  • And this goes on, and there's 28 of these pixels in each row.

  • Now, there's 28 rows, obviously, because, well, it's 28 by 28 pixels.

  • So in here again, we have the same thing, more pixel values, and we go down 28 times, right?

  • And that's what we have.

  • And that's what our array looks like. Now, this works as our input, that's fine, but this isn't really gonna work well for our neural network.

  • What are we gonna do?

  • Are we gonna have one neuron, and we're gonna pass this whole thing to it?

  • I don't think so.

  • That's not gonna work very well.

  • So what we need to actually do before we can even, like, start talking about the neural network is figure out a way that we can change this information into a way that we can give it to the neural network.

  • So what I'm actually gonna do, and what most people do, is what's called flattening the data.

  • So, actually, maybe we'll go.

  • I can't even go back once I clear.

  • But flattening the data essentially is taking any, like, interior list.

  • So let's say we have a list like this, and we're just, like, squishing them all together.

  • So let's say this is [[1], [2], [3]]. If we were to flatten this, what we would do is remove all of these interior arrays, or whatever it is.

  • So we would just end up getting something that looks like [1, 2, 3], and this actually turns out to work just fine for us.

  • So in this instance, we only had, like, one element in each array.

  • But when we're dealing with 28 elements in each interior list (list and array, they're interchangeable, just in case I keep switching between those), what we'll essentially have is, we'll flatten the data.

  • So we get a list of length 784, and I believe that is because... well, I mean, I know that's because 28 times 28 equals 784.

  • So when we flatten that data, so 28 rows of 28 pixels, then we end up getting 784 pixels, just one after another.

  • And that's what we're gonna feed in as the input to our neural network.
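The flattening step described above can be sketched in a couple of lines with NumPy (using a made-up 28x28 array in place of a real image):

```python
import numpy as np

# Made-up 28x28 "image", the same shape as one Fashion-MNIST image.
image = np.arange(28 * 28).reshape(28, 28)

# Flattening squishes the 28 rows of 28 pixels into a single list of
# 784 values, which is what the input layer of the network receives.
flat = image.flatten()
print(flat.shape)  # (784,)
```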

  • So that means that our initial input layer is gonna look something like this.

  • We're gonna have a bunch of neurons, and they're gonna go all the way down, so we're gonna have 784 of them.

  • So let's say this is 784; I know you could probably hardly read that, but you get the point, and this is our input layer.

  • Now, before we even talk about any kind of hidden layers, let's talk about our output layer.

  • So what is our output?

  • Well, our output is gonna be a number between zero and nine.

  • Ideally, that's what we want.

  • So what we're actually gonna do for our output layer, rather than just having one neuron like we used in the last two videos as an example, is we're actually gonna have 10 neurons, each one representing one of these different classes.

  • Right?

  • So we have 0 to 9, so obviously 10 neurons for 10 classes.

  • So let's have 10 neurons.

  • So 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.

  • Now, what's gonna happen with these neurons is each one of them is going to have a value, and that value is gonna represent how much the network thinks that the image is each class.
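To make that concrete, here is a small sketch of reading off a prediction from 10 hypothetical output-neuron values (the numbers are invented purely for illustration):

```python
import numpy as np

# Hypothetical values of the 10 output neurons; a higher value means the
# network is more confident the image belongs to that class.
outputs = np.array([0.01, 0.02, 0.05, 0.01, 0.03, 0.02, 0.04, 0.01, 0.01, 0.80])

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# The predicted label is the index of the largest value, which we can
# then look up in the class_names list.
predicted_label = int(np.argmax(outputs))
print(predicted_label, class_names[predicted_label])  # 9 Ankle boot
```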


TensorFlow 2.0 Crash Course

Published by 林宜悉 on January 14, 2021