

  • What's going on everybody, and welcome to part three of our deep learning AI, and part six of our overall Halite tutorial series. In this tutorial, the plan is to train a model on the training data that we've been building.

  • I'm just going to keep letting this run in the background. Actually, it's at something like this plus 40; I stopped it, and then figured I probably should just keep it running while I write the training code. So we'll just keep it going; we've almost got 100 games, and we'll definitely be over 100 by the time we're done.

  • So with that, let's go ahead and get started.

  • I'm going to call this the model trainer. I'm going to copy data_creator.py, call it model_trainer.py, edit it, and fit that on our screen.

  • All right, so what you're going to need to have here is both TensorFlow and Keras. You could just do `pip install tensorflow` and `pip install keras` in your command line, but for TensorFlow you'd probably really want the GPU version, so it should be `pip install tensorflow-gpu`.

  • The GPU version can be tedious to install, so if you need help installing the GPU version of TensorFlow on either Windows or Linux, go to the text-based version of this tutorial; I've linked to both options there. There's a link in the description to the text-based tutorial.

  • So head there if you need any help. I'm also using tqdm, so `pip install tqdm`. You don't need this; I like to use tqdm any time I've got a large for loop and I'm curious how much longer it might run, but by no means do you need it. Anyway, grab it if you want.

  • So to begin, I think I'm just going to keep doing what I was doing before. I think it's just more useful, given we've got so much data and so much code to cover, to do it this way.

  • So let me just copy this over. We import Keras because we're going to use it: `Sequential` from models, which is just for your multilayer-perceptron, feed-forward type of model. We want `Dense` layers, `Dropout`, and `Activation`. At this point also, if you want to learn more about deep learning, in both Python and just deep learning in general, I've got links to tutorials there as well, if you need more information on that. So anyway, there's also `load_model`, so we can save and load models if we want; `random` for shuffling our data; tqdm, like I said, for iterating over — I'm sorry, for producing, like, progress bars, basically for anything you're going to iterate over; and then numpy, for obvious reasons.
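  Collected into code, the imports described above might look like this sketch (the video uses the standalone `keras` package, i.e. `from keras.models import ...`; the same names are available under `tensorflow.keras`, which is what's shown here):

```python
# Model-building pieces (the video imports these from the standalone `keras`
# package; the same names live under `tensorflow.keras`):
from tensorflow.keras.models import Sequential, load_model      # define models; reload saved ones
from tensorflow.keras.layers import Dense, Dropout, Activation  # fully-connected layers, dropout, activations

import random       # shuffling the training data
import numpy as np  # array handling for network inputs/outputs

try:
    from tqdm import tqdm  # optional progress bars for long for-loops
except ImportError:
    tqdm = lambda iterable, **kw: iterable  # fall back to a plain pass-through
```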

  • So continuing along, we're going to have a batch size, which is just how much of a batch we're going to feed through the neural network at a time. I'm going to go with 128; I've found 128 to be, like, the sweet spot, so I'm going to go with that there.

  • Epochs — as everybody likes to correct me — we'll go with 10, or somewhere between, like, one and 10. It just kind of depends; I haven't really found any benefit to going over 10, but you can if you want. Feel free to play with all this stuff. And then test size: how much of our data do we want to dedicate to out-of-sample testing?
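  As a quick sketch, those three settings might sit at the top of model_trainer.py like this (the names and the test-size value are illustrative):

```python
# Hyperparameters discussed above
batch_size = 128   # how big a batch we feed through the network at a time; 128 is the sweet spot here
epochs = 10        # full passes over the training data; somewhere between 1 and 10 works well
test_size = 100    # how many samples to hold back for out-of-sample testing (illustrative value)
```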

  • Now, what I'm going to go ahead and do is create the in-model and out-model names again. I'm going to copy-paste; this one would take a little bit to write out, and I'd rather just explain it.

  • Basically, this is going to tell us what our batch size was and how many epochs we did, and it's going to save these models under those names. Now, we don't have an input model to start with, so I'm just passing 0-0 there.

  • But if you had one that you wanted to use: you're going to find that there are times where you train a model, and then maybe you get some more data, or some new data, and either you want to train on the larger dataset, or you want to combine your new data with the old data and train a couple of epochs there. Or maybe you've got purely new data that you just kind of want to throw at it. There are all sorts of reasons why you'd want to input a saved model. Anyway, what we're going to do now is: if we did have a model, we'll load it. So I'm going to say load-previous-model will be False, but if we do want to load a model, then we'll load that model in.

  • What's really cool about Keras — I haven't actually covered Keras on my channel, so if you follow my channel you might not have used Keras yet — one really nice thing about Keras over TensorFlow in general, as well as TFLearn, which is what I usually use in place of Keras, is that if you load a model, you don't have to define the model first, which is really nice. I always thought it was really dumb that you had to define the model and then load it; if you saved the model, it just seemed like, why can't we just load the model? Maybe that's something new with TensorFlow, like 1.4 or something, that I just haven't experienced yet, but at least Keras does that, and I think that's great.

  • So continuing along, now let's go ahead and read in the data. Again, I'm just going to copy and paste this. Here we're just opening the train file, we split by newline, and then we're going to eval this stuff.

  • One option we could have used, rather than appending to the file, is to save it as some sort of numpy file or something like that. But since we're just saving to this file, the format is a string, so we need to use eval(), because each line needs to be evaluated for what it is, not left as a string. This basically converts it to actual data, not a string. Anyway, that's our input, and that's our output.
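  A minimal sketch of that read-and-eval step, using a couple of in-memory lines in place of the real training file (the exact on-disk format here is an assumption):

```python
# Each line in the train file is the string repr of [input_vector, output_vector],
# so eval() turns it back into real nested lists.
raw_lines = "[[0.1, 0.5, 0.2], [1, 0, 0]]\n[[0.9, 0.3, 0.7], [0, 1, 0]]\n".split("\n")

train_in = []   # network inputs
train_out = []  # one-hot output vectors
for line in raw_lines:
    if not line:          # skip the empty string left by the trailing newline
        continue
    data = eval(line)     # string -> actual data
    train_in.append(data[0])
    train_out.append(data[1])
```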

  • Now we need to discuss balancing. With all machine learning, deep learning included, you want to have balanced data. In our case, we have three choices: attack, mine our own planet, and mine an empty planet. Chances are, aside from the initial part of the game, the most common actual output vector is attack, because for most of the game you're going to wind up with all the planets occupied, and then you just need to attack. So it's probably going to be the case that, like, 75% of our data is attack, attack, attack, attack.

  • And the problem is, that causes two major issues. One: it's hard to assess how accurate you are if you have unbalanced data.

  • Two: it's also hard to train a model with unbalanced data, because the model is going to learn, "let's see, I could be 75% accurate if I always say attack" — because that's what the model is going to try to do: figure out the quickest way to get to the best answer. So if 75% of your data, let's say, is attack, it's going to very quickly learn, "okay, I should just always attack; that way I'll be the most accurate."

  • So we want to make sure our data is perfectly balanced for that reason.

  • But also, when it comes time to evaluate our model and how accurate it actually is, if your data isn't balanced it's really hard to determine: are we this accurate because we're always, or most often, predicting attack, because the model incorrectly just learned to predict attack in general? Or is it actually smart? It's really hard to determine how you've actually done if you don't have balanced data. So those are the two reasons why we want to do it.

  • So what I'm going to do now is just put in this code for the balancing. We have these lists: attack, mine-own-planet, and mine-empty-planet. There's got to be a better way to do it than this, but this is the way I've done it.

  • Basically, what we're going to do is: if the output layer is attack, we add it to attack; if it's mine-our-own-planet, we add it to mine-own-planet; and if it's mine-an-empty-planet, we append there. And then what I want to go ahead and do, for now, is just print this out so we can see the lengths, see what we're starting with.

  • Then what we want to do is grab whatever the shortest one is. So we want to know: what is the shortest-length list? I'm going to wager it's probably mine-empty-planet, but that's okay.

  • Next, we're going to shuffle all of those lists — my throat is so dry. You want to shuffle them for a variety of reasons. Shuffling at this stage isn't as important as shuffling at the next stage, but in general you just don't want data to be in succession, unless you actually want your data to be in succession. Like, if we were doing a recurrent neural network or something, it would be great if the data was in succession, but we don't actually want it to be in succession here.

  • So we're gonna go and shuffle that.

  • And now we're going to trim all of the lists to the length of whichever is shortest. Allergies today or what? Eyes are itching, my nose is itchy, throat is dry.

  • Okay, so once we've done that, let's print the new lengths. They should all be identical in length now. Now we want to add these all up together, so we're going to say all choices is just attack plus mine-own-planet plus mine-empty-planet, and then we want to shuffle them.

  • Here it's super important that we shuffle. Otherwise, as we feed the model data, the testing data, for example, would only ever be mine-empty-planet. But also, it would be really hard on the model, because the first large chunk of data would be all attack, attack, attack.

  • So the model would very quickly learn, "okay, attack." Then it would hit the chunk of data that's all mine-own-planet, mine-own-planet, and it'd be like, "oh, shoot!" and shift over to always predicting mine-own-planet. Then it would get to mine-empty-planet, and so on. That wouldn't be ideal for learning.

  • So we definitely want one shuffle here, and random.shuffle shuffles in place. So this should be fine.
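  Putting those balancing steps together — bucket each sample by its output vector, print the starting lengths, shuffle, trim every bucket to the shortest length, then combine and shuffle again — here's a sketch on made-up data (the one-hot ordering is an assumption):

```python
import random

# Example (input, output) pairs; outputs are one-hot: [attack, mine own, mine empty] (assumed order)
samples = (
    [([i], [1, 0, 0]) for i in range(75)] +   # heavily over-represented "attack"
    [([i], [0, 1, 0]) for i in range(20)] +
    [([i], [0, 0, 1]) for i in range(5)]
)

attack, mine_own, mine_empty = [], [], []
for data in samples:
    if data[1] == [1, 0, 0]:
        attack.append(data)
    elif data[1] == [0, 1, 0]:
        mine_own.append(data)
    elif data[1] == [0, 0, 1]:
        mine_empty.append(data)

print(len(attack), len(mine_own), len(mine_empty))  # lengths before balancing

# Shuffle each bucket, then trim all buckets to the shortest length
shortest = min(len(attack), len(mine_own), len(mine_empty))
for bucket in (attack, mine_own, mine_empty):
    random.shuffle(bucket)
attack, mine_own, mine_empty = attack[:shortest], mine_own[:shortest], mine_empty[:shortest]

# Combine and shuffle again so no class arrives in one contiguous run
all_choices = attack + mine_own + mine_empty
random.shuffle(all_choices)  # random.shuffle works in place
```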

  • Okay, now once we've shuffled that data, we need to parse it back out into train-in and train-out information, so we're just going to write over those. And again, I used tqdm up here, so `from tqdm import tqdm`, and then I just wrapped tqdm around this enumeration of the training data. You don't have to; you could just remove the tqdm encasing there, and you'd be fine without it. I just really like tqdms.

  • So, anyway, yeah.

  • So now we've appended it all. And then, so we don't have to do this every time — like, if you haven't made new data, you don't need to redo this — we're going to go ahead and save it to train-in and train-out files, and then to load it back in, you would just do np.load on train-in and train-out.
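  The parse-back-and-save step might be sketched like this, with `all_choices` standing in for the balanced, shuffled list from before (filenames are illustrative, and a temp directory stands in for saving next to the script):

```python
import os
from tempfile import TemporaryDirectory

import numpy as np

all_choices = [([0.1, 0.2], [1, 0, 0]), ([0.3, 0.4], [0, 1, 0])]  # stand-in balanced data

train_in = []
train_out = []
for data in all_choices:       # in the video this loop is wrapped in tqdm() for a progress bar
    train_in.append(data[0])
    train_out.append(data[1])

with TemporaryDirectory() as d:  # real code would save next to the script
    np.save(os.path.join(d, "train_in.npy"), train_in)
    np.save(os.path.join(d, "train_out.npy"), train_out)
    # later runs can skip the rebuilding and just reload:
    train_in = np.load(os.path.join(d, "train_in.npy"))
    train_out = np.load(os.path.join(d, "train_out.npy"))
```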

  • And then what we're going to do is split the data into training and testing. Boom. And then we build the model — because it's Keras, it's awesome.

  • If not load-previous-model, then we need to build the model and define it. But again, if you have that model and you load it, you don't have to do this. It's great, it's just great.

  • So in this case, we've got a model with two 256-node hidden layers, 50% dropout, and that's about it. You can feel free to tweak a lot of this stuff; I'm just throwing out a bunch of ideas. There are so many variables, in both the starting script that does the random stuff — there are so many things we could change and tweak there to make things better — and then, obviously, in the neural network model there's a bunch of things we could tweak.

  • You could tweak the layers and the layer size; you could tweak the dropout, how many layers we have; you could change the activation layers here. I probably would never change that final activation there, but you can feel free to change these activation layers and all that.
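  A sketch of that architecture in Keras (the input dimension is illustrative, and I'm importing from `tensorflow.keras`, where the video's standalone `keras` imports also live):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Dropout, Activation

INPUT_DIM = 15  # illustrative: length of the flattened game-state vector

model = Sequential()
model.add(Input(shape=(INPUT_DIM,)))
model.add(Dense(256))               # first 256-node hidden layer
model.add(Activation("relu"))
model.add(Dropout(0.5))             # 50% dropout
model.add(Dense(256))               # second 256-node hidden layer
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(Dense(3))                 # one output per choice: attack / mine own / mine empty
model.add(Activation("softmax"))    # the final softmax you'd likely never change

model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
```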

  • And again, just like before, if you don't know much about deep learning, I highly encourage you to learn: you can go to pythonprogramming.net and come over here — if the window is smaller, there should be a search bar right at the top — and you can probably just type in "deep learning."

  • Boom.

  • You could definitely check out this series here — you can't quite see it — which is the full machine learning series. It covers basically all the machine learning classifiers — well, not all, but a large amount — not just deep learning. Basically, it was structured with the idea that we go over a classifier, show a practical example, and then code that classifier ourselves from scratch, and at least for me — but I think for a lot of people — that helps to really solidify how this stuff works.

  • But if you want to start with just deep learning, you could just go here, to the introduction to deep learning with TensorFlow, and go through it if you want to learn more about it. If you already know about it, good for you.

  • So back to what I was doing — I lost my place there for a moment. Uh, yeah.

  • Once we've built the model, we're ready to fit it. Now, this would be outside of that load-previous-model expression. To fit, you pass the training data, the batch size, how many epochs you want to do, and verbose.

  • That has to do with TensorBoard. I'm not going to get into TensorBoard here, but if you want to learn more about TensorBoard, go to the deep learning series. And then the validation split — this will be a 10% split.

  • So as it's training, it's going to output some accuracy for you, but that accuracy is going to be based on a portion of the training data. So, in theory, that's in-sample accuracy, and you take that with a grain of salt. What we really want to know is the out-of-sample accuracy.

  • To generate that, we're going to say score equals model.evaluate on the testing data. Now, it's a good idea to keep this around as a value, because if the validation accuracy does not line up with the score — if the in-sample accuracy is much higher than the out-of-sample accuracy — chances are you have overfit. So it's good to actually track both, because it's a good indicator of when you've done too many epochs.

  • But yeah, so we'll keep that around, and then we'll go ahead and save the model. Then, just in case you want to look later, it says where it saved to, tells us the test score, and tells us the test accuracy.
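  The fit, evaluate, and save steps might be sketched like so, on tiny random stand-in data (`model.evaluate` returns the loss first, then any compiled metrics such as accuracy; the filename and data shapes are illustrative):

```python
import os
import tempfile

import numpy as np
from tensorflow.keras.layers import Activation, Dense, Input
from tensorflow.keras.models import Sequential

# Tiny stand-in dataset: 64 samples, 15 features, 3 one-hot classes
rng = np.random.default_rng(0)
x = rng.random((64, 15)).astype("float32")
y = np.eye(3, dtype="float32")[rng.integers(0, 3, 64)]

# Split off the last chunk for out-of-sample testing
test_size = 16
x_train, x_test = x[:-test_size], x[-test_size:]
y_train, y_test = y[:-test_size], y[-test_size:]

model = Sequential([Input(shape=(15,)), Dense(32), Activation("relu"), Dense(3), Activation("softmax")])
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])

# validation_split=0.1 carves 10% off the *training* data for the in-sample validation accuracy
model.fit(x_train, y_train, batch_size=16, epochs=1, verbose=0, validation_split=0.1)

# Out-of-sample evaluation: returns [loss, accuracy] for the held-out test set
score = model.evaluate(x_test, y_test, verbose=0)

with tempfile.TemporaryDirectory() as d:  # real code would save next to the script
    model.save(os.path.join(d, "model.h5"))
```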

  • So this is going to return both score and accuracy. For the life of me, I can't remember what the score is — I want to say it's like some... I'm not even sure; you'll have to look up what score is. Maybe I'll look it up real quick. It has, like, some sort of special meaning too, but I just forget what it was.