
  • What's going on?

  • Everybody, welcome to part 10 of the machine learning and deep learning in Halite III series.

  • Up to this point, we just kind of started talking about the concepts, making our imports, talking about how we're going to structure things.

  • Uh, let's just go ahead and jump into it.

  • So at this point, we've shuffled all of the files and we're ready to separate them out into training and validation files.

  • They're already shuffled, so we can just iterate up to whatever the, you know, validation amount is for the validation stuff.

  • And then after that, do the training.
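To keep things concrete, here is a minimal sketch of that split, assuming a directory of per-game .npy files and a validation-count constant like the one set up earlier in the series (the directory and variable names here are illustrative, not the exact code from the video):

```python
import os
import random

TRAINING_DATA_DIR = "training_data"   # illustrative path to the saved game files
VALIDATION_GAME_COUNT = 50            # first N shuffled games become validation data

training_file_names = os.listdir(TRAINING_DATA_DIR)
random.shuffle(training_file_names)   # shuffle once, before the epoch loop

validation_files = training_file_names[:VALIDATION_GAME_COUNT]
train_files = training_file_names[VALIDATION_GAME_COUNT:]
```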

  • Also, we only need to do that once.

  • We don't need it every time through the epochs.

  • And like I said, I'm not going to use the TensorFlow generators stuff, because I think it just overcomplicates things; it's just not really well done for the user, in my opinion.

  • So we want to make sure we have some logic that allows us to...

  • Just load this data one time. Even if we want to, you know... because basically, we could also set this to True once we have those files, and then we don't have to load them at all.

  • But If we do need to load in the data, we just want to load it in one time.

  • Not every time, per epoch.

  • All right, let's get to it.

  • So, first of all, we want to say: if load_trained_files.

  • This means we can load them.

  • So now what we want to do before we even enter the epochs is we wanna load in, um, the validation files.

  • So if load_trained_files, so if we happen to have them, then we can just say test_x equals, uh, np.load of test_x dot npy, right.

  • That'll be there.

  • It's not there.

  • So we're gonna have to handle this, because this is False anyways. But if it was True, wouldn't that be nice?

  • We could load it, but we can't.

  • Therefore: else.

  • So now what we need to do is, um, basically, we'll start with two empty lists.

  • test_x and test_y.

  • And then we want to iterate over all of our files.

  • So we're gonna say for f in tqdm.

  • We're just going to use this to, like, let us know where we are in each process.

  • Um, and then that's gonna be training file names up to the validation game count.

  • So the first... what is it, 50, I think we chose?

  • Yeah, the first 50, that'll be our validation data.

  • So then what we're gonna say is data equals np.load of f, that file.

  • You know, that data?

  • It's an entire game.

  • So it's a list of lists where the list itself is a bunch of these.

  • You know, it's not really the game visualization, but it sort of is just all the data from each of the coordinates basically on the game map that we're curious about.

  • So that'll be the... so the zeroth element is the 33 by 33 data.

  • And then the first element is the move that was associated with it.

  • You know?

  • What move did the agent make based on that data?

  • Okay, so now, um, data equals np.load.

  • So now what we need to do is we want to iterate over that data.

  • So for d in data, we wanna test_x.append(d[0]) and then test_y.append(d[1]).

  • And, uh, that's all good.

  • And also we should probably append np.array there.

  • So the ys don't actually need to be a numpy array, but I'm pretty sure, for the X data, TensorFlow is gonna bark at us if we don't go with an array for all the X data.

  • Okay, once that's done, np.save and, um... and I hit my Insert key or something.

  • What happened?

  • What is happening?

  • There we go.

  • Uh, test_x dot npy.

  • This is, like, the same kind of thing as that previous tutorial where I made an egregious mistake.

  • So just go ahead and expect that this is not gonna work.

  • You know, really, in any sentdex video, just keep your standards very low and everybody stays happy.

  • Okay, so in this case, now we save those files that later we would look for if we believed we could load them.

  • Okay, so we've done that, and we can get away with that because it's only 50 files.

  • And even when the files were, like 50 megabytes each or whatever, we can still easily load in 50.
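Roughly, the load-or-build logic just described looks like this. It's a sketch building on the earlier snippet; the load_trained_files flag, the .npy file names, and the allow_pickle detail are assumptions, not the exact code from the video:

```python
import numpy as np
from tqdm import tqdm

load_trained_files = False  # flip to True once test_x.npy / test_y.npy exist on disk

if load_trained_files:
    test_x = np.load("test_x.npy")
    test_y = np.load("test_y.npy")
else:
    test_x = []
    test_y = []
    # the first VALIDATION_GAME_COUNT shuffled files are the validation games
    for f in tqdm(training_file_names[:VALIDATION_GAME_COUNT]):
        data = np.load(os.path.join(TRAINING_DATA_DIR, f), allow_pickle=True)
        for d in data:
            test_x.append(np.array(d[0]))   # the 33x33 map data
            test_y.append(d[1])             # the move the agent made
    test_x = np.array(test_x)               # TensorFlow wants the X data as an array
    np.save("test_x.npy", test_x)
    np.save("test_y.npy", np.array(test_y))
```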

  • But what about in this case where, you know, we've got, like, 3000 of these files that we're going to use? And while they're only one megabyte each...

  • That's no big deal, right?

  • But when they're 50 megabytes, it's a little bigger of a deal.

  • And then what if we even have even more games?

  • Like, what if we have 10,000 games that we're going to go with at some point?

  • We've got to...

  • We have to chunk these and load them in as batches.

  • So that's what we want to do!

  • Now, the next question is, how do we separate these into batches?

  • Well, um, you know, there are a bunch of different ways: we could slice and then use some sort of counter that keeps adding plus 50 and slices based on that.

  • You really could do that.

  • Or the thing I always do is I go to google dot com and then I search.

  • Um, uh, chunk, split... let's see: 'split a list into chunks python.'

  • That should get us where I want to be.

  • How do you... yeah, click on this very first result.

  • And this is a beautiful generator for doing just that.

  • You pass the list, and then you pass how many items you want in each of the lists, and it will just automatically chunk it, and then you can iterate over those chunks very nicely.

  • I just use this all the time.

  • I find this to be one of the most useful Stack Overflow copy pastas that you could possibly make.

  • So, uh, the other thing I'll do is just copy that link, just in case I post this code somewhere.

  • Okay, so, um great.

  • Well done, everybody good chunking.
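For reference, the Stack Overflow snippet being copied is (or is very close to) this generator:

```python
def chunks(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in range(0, len(lst), n):
        yield lst[i:i + n]
```

You hand it the full list of training file names and a chunk size, then iterate over the chunks it yields, for example: for file_chunk in chunks(train_files, 200).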

  • Okay. Now, the next thing that we're gonna do is define our neural network.

  • Now, in this case, you know, you might already have a previous model, just like before.

  • So if load_prev_model... so if that's true, then you can say model equals tf.keras... is it .models? I think it's .models... .models.load_model, and then the previous model name. Easy as that.

  • We don't have a model, so it's else, and then we need to start specifying our model.

  • So model equals Sequential; we'll go with a sequential type of model.

  • Um, I think we're gonna steal from myself... do do do... let's go to the Keras tutorial again.

  • And then it was, what, part three? Part of me just wants to just take and copy and paste that code.

  • Let's go to the bottom.

  • This looks good, huh?

  • We don't want the binary cross entropy.

  • Um, but the rest should be fine.

  • So I think what I'm gonna do, huh?

  • Uh, everything down to the final dense layer, I think, is what we'll take.

  • Uh, so we will do this.

  • Probably just copy that.

  • This is probably not gonna end well, but hey, nothing ventured, nothing gained.

  • That looks like a regular tab as well.

  • Let's fix that.

  • No, no, no.

  • I thought we fixed this before.

  • What? Wait...

  • Oh, my gosh, it's the whole thing.

  • I could have sworn we, um... oh, this is really painful. I thought I already set this.

  • Why do I gotta do this again?

  • Now I gotta fix all this, because it's gonna probably get angry at me.

  • But, ah, why do I gotta do this?

  • This is super annoying.

  • There's got to be, like, a better way to translate all of this stuff, But I'm gonna do all these.

  • That's really weird.

  • I swear... and now I've lost my place.

  • It's cool.

  • I swear in one of the previous tutorials I fixed this already.

  • I'm not sure why this one suddenly has real tabs anyways.

  • Definitely want spaces.

  • So we can transfer it to other places.

  • Okay, um, where was I?

  • So, uh, we probably don't need this.

  • Let's do a 64... 64.

  • Let's do... let's do a 3 by 3, 64; activation can stay rectified linear.

  • That's fine.

  • Flatten, then a dense 64.

  • Ah, we'll keep activation sigmoid.

  • We don't want one; now we're going to go with five options, and then we need to choose sparse... uh, sparse categorical cross entropy on the fitment.

  • Um, also, probably... let me think here, on X.

  • Well, we just don't have X.

  • Also, by this point, we won't even have any of the Xs.

  • So, in fact, what we probably need to say is test_x.shape.

  • So we definitely need that thing to be a numpy array at this point.

  • Um, test_x.

  • But also, test_x itself is not a numpy array.

  • We need to convert that to probably be a numpy array.

  • We just do that here.

  • test_x equals np.array(test_x), because otherwise we still can't get a shape.

  • It's a list of numpy arrays up until we do this.

  • So we've got that.

  • Then we got this random function in there.

  • That's fine.

  • Well, we can clean this stuff up later.

  • For now, Uh, let's make it work.

  • I think we're I think we're almost done here.

  • Um, so the only other thing that has been coming up recently is padding equals 'same'; for some reason, in the newer versions of TensorFlow, I get a lot of these, like, shaping issues, so I would recommend adding padding='same' to every convolutional and pooling layer.

  • Hopefully that's all where it's meant to be, um, because you have to specify it, and for some reason the defaults have changed.

  • And now, if everything doesn't add up perfectly, you'll get an error, and it's really tedious.

  • That's something that you would want to happen in the lower-level TensorFlow, but not in the higher-level TensorFlow layers API.

  • I think it's really dumb that they've, like, gone back to having that stuff.

  • You know, I used to not have to think about it if you used the TensorFlow layers, and I think that is a better choice personally, but it is what it is.

  • Okay, Okay, that's our model.

  • Fantastic.
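Here is a sketch of the kind of Sequential model being assembled, pasted and adapted from the earlier Keras tutorial. The layer sizes follow what's said in the video, but the exact stack (and the final softmax, which sparse categorical crossentropy implies) is partly an assumption; test_x comes from the earlier snippet:

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense

load_prev_model = False
prev_model_name = "models/some-earlier-model"   # hypothetical path

if load_prev_model:
    model = tf.keras.models.load_model(prev_model_name)
else:
    model = Sequential()
    # padding="same" everywhere, to avoid the shape errors mentioned above
    model.add(Conv2D(64, (3, 3), padding="same", input_shape=test_x.shape[1:]))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2, 2), padding="same"))

    model.add(Conv2D(64, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2, 2), padding="same"))

    model.add(Flatten())
    model.add(Dense(64, activation="sigmoid"))
    model.add(Dense(5, activation="softmax"))   # 5 possible moves
```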

  • So then we're going to specify our optimizer, and that's gonna be tf.keras.optimizers dot capital-A Adam.

  • And then we'll say the learning rate... learning_rate equals 0.001. So, one e... let's just do it this way: 1e negative 3.

  • Um, we might find that 1e negative 4 or 5 makes more sense.

  • I think, to start, we'll get away with 1e negative 3.

  • But we'll probably end up having to check a few.

  • Also, we can set a decay; I'll also set that to 1e negative 3.

  • Okay, then model.compile, and we will say loss equals sparse categorical cross entropy.

  • That's really long.

  • We need, like, an 'SCC' shorthand.

  • I know they have it for, like, 'MSE' for mean squared error.

  • We need 'SCC', please.

  • Maybe that's a thing now.

  • Someone try it, let me know.

  • Optimizer optimizer equals the opt that we just defined.

  • And then the metrics that will track metrics equals accuracy.

  • Fantastic.
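Which, in code, is roughly this sketch (whether the keyword is learning_rate or the older lr, and whether decay is still accepted, depends on your TensorFlow version):

```python
# decay is deprecated in newer Keras releases; shown here as described in the video
opt = tf.keras.optimizers.Adam(learning_rate=1e-3, decay=1e-3)

model.compile(loss="sparse_categorical_crossentropy",
              optimizer=opt,
              metrics=["accuracy"])
```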

  • Okay, so step one is: let's see if this compiles, huh?

  • So I thought... python sentdex-bot...

  • Oh, no, that's not what I want. For, uh, python...

  • It was sentde... sentdex-train.

  • Not the part-nine one, but this one.

  • Okay.

  • Okay, so it looks like our model does compile, and, uh, we are ready to go through epochs. Back to our code.

  • Lovely, lovely code.

  • Okay, so now: for e in range... epochs.

  • And what we'll do is, ah, a print.

  • Uh, currently working on epoch... epoch {e}. Cool.

  • So we'll know which epoch we're on.

  • Um, then what we're gonna say is training_file_chunks, and that is equal to that chunks generator that we built here, and then you pass the list and n.

  • So the entire list is all of our files.

  • So that's training file names after the validation... or, I'm sorry, whoa, almost effed up there.

  • Okay, so, validation game count.

  • Onward.

  • And then, uh, training... training chunk size. Done.

  • Okay.

  • I said fudged, by the way.

  • Okay.

  • Training file chunks.

  • Um, probably, actually...

  • Just define that up here.

  • That should be fine.

  • To put that up there.

  • We don't need to define that every epoch.

  • Okay.

  • Great.

  • For idx and training_files in enumerate(training_file_chunks): print 'working on data chunk', idx plus one, 'out of', um, this... out of this divided by the training chunk size.

  • So that might wind up giving us quite a long number.

  • Let's say round that to 2 decimal places.

  • Otherwise, that might be like a super long decimal or something.

  • And really, what we should say is math.ceil, but I'm not going to do that.

  • I just want the general gist of where we are.

  • Um, okay, so now what we'll say is: if load_trained_files or e is greater than zero.

  • So if we've already done one whole epoch, we have the data, so we can actually say X equals np.load.

  • Um, and then we'll call this, um, X-dash-something dot npy.

  • And this would be idx.

  • As if I'm asking you guys.

  • Um, but yeah, that should be what you want.

  • Copy that, paste.

  • Otherwise, y equals np.load... y.

  • Under the ideal scenario. Then else, uh, X equals an empty list.

  • y equals an empty list; for f in tqdm of... training file... you know, training files.

  • What do we want to do?

  • We're gonna say data equals np.load of f. Really...

  • This is all pretty similar to what we had before.

  • Um, also, we still haven't balanced anything, huh?

  • We'll get to it.

  • data equals np.load of f; for d in data...

  • Uh, let's say X.append(np.array(d[0])), y.append(d[1]).

  • Okay. Same code as we used before.

  • If anything, probably make that a function or something, but I'm not gonna worry about it.
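Putting those pieces together, the outer loop looks roughly like this. It's a sketch building on the earlier snippets; the chunk size and file names are illustrative, and the chunks generator is deliberately shown inside the epoch loop, which is the fix discussed at the end of the video:

```python
EPOCHS = 3
TRAINING_CHUNK_SIZE = 200   # how many game files to load per training step (illustrative)

for e in range(EPOCHS):
    print(f"Currently working on epoch {e}")
    # a generator is exhausted after one pass, so re-create it every epoch
    training_file_chunks = chunks(train_files, TRAINING_CHUNK_SIZE)

    for idx, file_chunk in enumerate(training_file_chunks):
        print(f"working on data chunk {idx+1} of {round(len(train_files)/TRAINING_CHUNK_SIZE, 2)}")

        if load_trained_files or e > 0:
            # after the first epoch, the per-chunk arrays are already cached on disk
            X = np.load(f"X-{idx}.npy")
            y = np.load(f"y-{idx}.npy")
        else:
            X, y = [], []
            for f in tqdm(file_chunk):
                data = np.load(os.path.join(TRAINING_DATA_DIR, f), allow_pickle=True)
                for d in data:
                    X.append(np.array(d[0]))
                    y.append(d[1])
        # ...balance, convert, save, and fit (covered further down)
```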

  • Okay?

  • Now we need, like, a balancing function.

  • Um, and this kind of sucks.

  • There's got to be, like, a built-in balance function that we could plausibly use.

  • Um, say, underscore-one.

  • So we'll do this for each of the lists.

  • It's almost embarrassing to post this.

  • But look at that.

  • We gotta balance it.

  • There's gotta be like a built in balancing function, though.

  • Such a common task.

  • For x, y in zip(X, y)... basically, we need to get to the point where all of these lists are the same length.

  • So usually what I'll do is I'll iterate through the data and separate them out by their classifications.

  • Um, then you go, you find the min length, and then for all of the lists, you just slice them up to that minimum length.

  • So, the lowest one. Um, it's kind of lame that we're going to do this, but, uh, you've got to balance it.

  • And like I said, there's probably out there some really great balance function.

  • And if you know of one, it would really make my day, if you know of, like, a pre-existing one in Keras or something that is super easy to use.

  • Most of the time, I don't really like their helper functions.

  • But anyways: if y is equal to zero, um, we'll do underscore-zero dot append... append x, y.

  • Okay, then, um, we're gonna do this again, and it's starting to act up on me now.

  • So this is like one of those scenarios where I really wish it was acceptable Python practice to just do this and, like, have this on every line.

  • Like, why can't we as a community come together and just decide that if we have one very short line after an if statement, we can put it on one line?

  • It's okay.

  • I really want that to be the case.

  • Two, 3, 4, 5... if I wasn't showing it on video, I might... ah, shoot.

  • I went the wrong way.

  • I see.

  • 1, 2, 3, 4. And then this needs to be 1, 2, 3. And I'm definitely violating another rule, which is: any time you've got copied code, you're probably doing something stupid.

  • So also, if you have a way to condense what we wrote here, even if it isn't something in Keras, post it below.

  • Because again, I end up writing these stupid functions, like, every time.

  • And it stinks, man.

  • Uh, len... so len of underscore-zero.

  • So it will be the minimum of all of these lengths. So, dang it.

  • Okay, that would be 1, 2, 3, 4. So, okay, so this tells us what the shortest list is.

  • Chances are these are all... at this stage... um, oh, and this also needs to be a list, so it should be a list of these lengths.

  • Okay. So this tells us, um, what the shortest list... you know, how long the shortest list is.

  • Got it.

  • Okay.

  • Once we know that, then we can just say underscore-zero equals underscore-zero up to shortest.

  • So if shortest is 1000, it'll be the first 1000 of all of these lists.

  • So then we just do this for all of the classes.

  • It's like this.

  • I feel like there's gotta be some way to be like here.

  • There are n classes.

  • Ah, the only problem is, sometimes your classes are gonna be one-hot vectors, or arrays, or whatever you wanna call them.

  • Um, and then other times, it will be just scalars like I have here.

  • But you could probably figure out each unique one and then somehow assign it.

  • There's gonna be a good way to do this.

  • Somebody else.

  • Somebody will save us at some point.

  • Okay.

  • So balanced equals.

  • And then it's all these put together.

  • So underscore-zero, uh, underscore-one.

  • And then let me just do this.

  • Uh, two, three. Okay.

  • I don't even know what I hit.

  • Hey, I thought this was gonna be easy.

  • Uh, okay.

  • So there's one, two, three, four. So that's balanced.

  • We'll do a random.shuffle(balanced).

  • Cool.

  • Um, just for our records.

  • Let's just say 'the shortest file was'...

  • Um, in this case, I'll just use a comma here, and then whatever shortest was, just so we get the number.

  • Um okay.

  • And then what we want to say is: Xs equals this, ys equals this.

  • And in fact, I kind of want to fix this.

  • I want a little more information.

  • Okay.

  • 'The shortest file was', shortest.

  • 'Total new... total balanced length is', and this will be len of balanced.

  • So that way we get a good idea.

  • After balance, you might find that you have way less training samples than you thought.

  • So then.

  • So once we've balanced the training data, we need to re-separate out the Xs and the ys.

  • So, uh, now what we want to say is: for x, y in balanced, Xs.append(x), ys.append(y).

  • Return Xs, ys. Whew.

  • Okay, Looks great.

  • Good work, everybody, I think.
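A sketch of the balance function being described, written with a dict of buckets rather than the five separate _0 ... _4 lists typed out in the video, but doing the same thing:

```python
import random

def balance(X, y):
    """Bucket samples by their move label, trim every bucket to the size of the
    smallest one, shuffle, and hand back re-separated Xs and ys."""
    buckets = {0: [], 1: [], 2: [], 3: [], 4: []}   # one bucket per possible move
    for x, yy in zip(X, y):
        buckets[yy].append([x, yy])

    shortest = min(len(b) for b in buckets.values())
    balanced = []
    for b in buckets.values():
        balanced += b[:shortest]
    random.shuffle(balanced)

    print(f"The shortest class was {shortest}; total new balanced length is {len(balanced)}")

    Xs = [x for x, yy in balanced]
    ys = [yy for x, yy in balanced]
    return Xs, ys
```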

  • Okay, so now coming down here.

  • What we want to say is, uh, X, y equals balance of X, y, and then test_x, test_y equals balance of test_x, test_y.

  • We're gonna have a problem.

  • We're gonna have to re-convert test_x to a numpy array.

  • Right here.

  • Um, don't forget that test_y.

  • And then also it needs to be a capital X.

  • And also, what did we call it?

  • Um, okay, test_x, test_y. For some reason I thought I saw a capital X_test instead, but I guess I didn't.

  • My eyes are playing tricks on me, maybe.

  • Okay, okay.

  • We're good.

  • So then what we need to do is convert all of these things to arrays.

  • So we're gonna say X equals np.array(X).

  • I don't think y actually has to be an array, but we'll convert it anyways.

  • y... test_x equals np.array(test_x), test_y equals np.array(test_y).

  • Okay. Now, um, let's do np.save, and then, whoops, an f-string.

  • And we're gonna call this X dash.

  • Whatever I is, I think we're gonna check to make sure that's right.

  • I think I called it I, but I don't remember for sure.

  • Um, uh, y... right, X-dash...

  • So it's idx... idx.

  • Okay, probably should make sure that works.

  • Um, but I want to go ahead and just finish the code up to this point.

  • So in this case, um, we're loading in the files for idx.

  • So then what we'd like to do, like, once you have the files in, we want to fit.

  • So I'm gonna say model.fit(X, y); uh, batch_size will be, we'll say, 32.

  • For now, epochs equals one, because, uh, we're handling epochs on our end; validation_data is equal to test_x, test_y.

  • And we should do callbacks at some point, but I think we'll add that at the end.
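So, inside the chunk loop, the rest of the first-epoch branch plus the fit call looks roughly like this (a sketch; the tensorboard callback gets added a little later in the video):

```python
# first time through: balance the freshly loaded chunk and cache it to disk
X, y = balance(X, y)
X = np.array(X)               # TensorFlow wants the X data as a numpy array
y = np.array(y)
np.save(f"X-{idx}.npy", X)
np.save(f"y-{idx}.npy", y)

# then fit on this chunk, validating against the held-out games
model.fit(X, y,
          batch_size=32,
          epochs=1,                          # epochs are handled by the outer loop
          validation_data=(test_x, test_y))
```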

  • Uh, let's just see if this fits, and depending on the error we'll probably have to cover it in the next video, because this one's getting to be a really long one. Sentdex-train...

  • How's it?

  • I should probably make these chunks a lot smaller, so we can move a little quicker if we do hit an error; I'll probably have to do that.

  • Um, it's already taking forever.

  • I'm bored.

  • Uh huh.

  • So for now, I'm just gonna assume I'm gonna hit an error, so I'm gonna change these. Coming back...

  • Where are we?

  • Here.

  • Okay: balance, X, y... X is not defined. I assumed we would hit an error, and gosh darn it, we did. All right.

  • So balance needs to be capital X.

  • Try again.

  • Idiot.

  • And also our fitment.

  • I kind of messed up there as well.

  • Another lower case, X.

  • This probably should be out one level.

  • I think so.

  • For... yeah.

  • So this should not happen in here.

  • That was under the else statement.

  • Um, then we would never train if we loaded a file.

  • Um... let's see.

  • Where was the other silly lowercase x? Right here.

  • Save.

  • Let's come back.

  • Run it again.

  • Try again.

  • Idiot.

  • I really do, too.

  • I should do 10 and 100 save.

  • We didn't save the actual array.

  • I'm just gonna lower this yet again.

  • Let's make it 50.

  • So it's even quicker.

  • I'm just trying to get this to work.

  • Um, so in this case, we want to save capital X, and then here we want to save lowercase y. Again...

  • I would really like to see some training going on.

  • That would just make me so happy.

  • I expected way more errors.

  • I can't believe we're training already, Guys.

  • Um okay, so accuracy right now is pretty darn bad.

  • Uh, but we should expect that accuracy will be around 20%.

  • Don't forget, we're training on random data here, so, um, we should only hope for slightly better than random results.

  • So the fact that we're already at 22% accuracy... now 25 is my out-of-sample.

  • Oh, my gosh.

  • Uh, out-of-sample... that's moving too fast.

  • 26% validation accuracy, 28%.

  • I would not even trust that; at 28 we really should be, like, slightly better than random.

  • Hopefully we don't... I'm gonna start freaking out if we see, like, a 30 or something; we should be pretty close to random is my point.

  • But we do have a pretty good threshold.

  • Um, but we just shouldn't be that far off, at least on the first generation, but it's looking pretty darn good.

  • Um, okay. So what I'm gonna do is I'm gonna break it here, because these are way too small of pieces; also, the validation set is arguably way too small.

  • So we're gonna go back to our original 50 and our 500, and then the other thing we should be doing is... come on, man.

  • Um, after the fit.

  • So the fit is through every index.

  • You could either save a model every iteration or every epoch.

  • You can decide what the heck you want to do.

  • I'm guessing only one epoch is gonna be required, but definitely I would save at least every epoch.

  • And at least I'm gonna do that.

  • But a part of me says might as well just save it every generation.

  • So I would say model.save.

  • Uh, it was called model name or something like that, wasn't it?

  • Maybe it was just NAME... it was a capital NAME.

  • Yeah, and I can't remember if NAME has a time associated with it or not.

  • It does. Cool.

  • Um, okay.

  • Good enough for me.

  • And the next thing I also wouldn't mind doing... so, model.save(NAME).

  • Also, let's save that to models slash NAME.

  • Uh, cool.

  • Um, so we'll also need to create that models directory... new folder, models.

  • Very well.
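Roughly, something like the sketch below; the timestamped NAME variable comes from earlier in the series, so treat the exact format here as an assumption:

```python
import time

NAME = f"halite-model-{int(time.time())}"   # illustrative; defined earlier in the series

os.makedirs("models", exist_ok=True)        # make sure the models directory exists

# save after every chunk (or at least once per epoch)
model.save(f"models/{NAME}")
```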

  • And finally, this tutorial will end at some point.

  • Um, at this point, right before we load in the model, let's define tensorboard, and that's gonna be equal to a TensorBoard object.

  • And then we specify log_dir equals, um, again, an f-string: logs slash whatever the name of this model is.

  • And then we come down to where we do our fitment, and we need to add in the TensorBoard callback.

  • So it should be callbacks.

  • Callbacks is a list, and we want to pass...

  • Shoot.

  • I think it was just lowercase tensorboard, if I recall right.

  • Um yeah, Yeah.

  • Okay.

  • Tensorboard... come up here and confirm: tensorboard.

  • That way I make sure I didn't typo this.

  • I would be really angry if I hit some freaking error at this stage. Okay.

  • Um all right.

  • I think we're all set.

  • So models, and then I can't remember if you have to have the tensorboard log dir made.

  • I'm gonna throw in logs as well, just in case.
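The TensorBoard pieces look roughly like this sketch, building on the earlier snippets:

```python
from tensorflow.keras.callbacks import TensorBoard

os.makedirs("logs", exist_ok=True)                # in case TensorBoard needs it to exist
tensorboard = TensorBoard(log_dir=f"logs/{NAME}")

# ...and the fit call grows a callbacks argument:
model.fit(X, y,
          batch_size=32,
          epochs=1,
          validation_data=(test_x, test_y),
          callbacks=[tensorboard])
```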

  • And now let's go.

  • When I cut the video here, if I hit any errors for whatever reason along the way I will throw in a little update at the end of this video.

  • Um, otherwise, I'm gonna let this train.

  • We're gonna check out some of the results.

  • I might do some tweaks or whatever.

  • And in the next video, we'll talk about the results and then hopefully have time to apply it to, you know, actually start training, you know, making new games based on that model and all that stuff.

  • So that's what you guys have to look forward to.

  • If you have questions, comments, concerns, suggestions, a better idea for the balance.

  • Whatever.

  • Feel free to leave those below.

  • Otherwise, I will see you guys in the next video. And I kind of want to let this epoch finish, just to see, but hopefully... I'm gonna be really shocked if we're, like, beyond 30 percent. But okay, validation...

  • And we actually got a better validation accuracy than in sample.

  • Interesting.

  • Okay, see you guys later, all right?

  • It wouldn't be a sentdex video without a mistake and an edit at the very end.

  • So, um, I was trying to complete it, and I ran the first epoch, which, you know, really, we did.

  • We stuck around the range of 24 to 28 percent, which, for validation...

  • Well, actually, I'm looking at the wrong thing.

  • Let's look: epoch validation accuracy... got it relative, uh, and we can see where we've stayed at somewhere between 24 and a half and 25% accuracy, which, in my opinion, is actually pretty good, eh?

  • So those are the results.

  • But I caught something.

  • I was trying to do multiple epochs.

  • Um, and I stumbled on.

  • Hello?

  • There we go.

  • This. So, what's going on?

  • So the first one, I set it to run for three epochs.

  • This is epoch zero, and then it just skips epochs one and two.

  • It doesn't do anything.

  • The issue is our generator.

  • I moved it thinking that was the smart thing to do, but no, that was the stupid thing to do.

  • Um, what we need to do is put this back.

  • Ah, where it was.

  • So for each epoch.

  • We need to specify that generator, because if we don't, what happens is that it becomes empty.

  • So, uh, so we need to put it down here.

  • Despite the fact that we're loading the files, we actually only need to load them one time.

  • Um, there's probably actually even a better way, because the only thing, really, that should be up here is this.

  • And then maybe this here needs to be something that calculates how many chunks there would have been or something like that.

  • But I'm not gonna worry about that for now.

  • What I am gonna say is the easiest fix is to just throw this back down into there.
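In other words, the fix is just moving the generator back inside the epoch loop, something like:

```python
for e in range(EPOCHS):
    print(f"Currently working on epoch {e}")

    # the fix: a generator is exhausted after one pass, so it has to be re-created
    # every epoch; defining it once above the loop means epochs 1+ iterate over nothing
    training_file_chunks = chunks(train_files, TRAINING_CHUNK_SIZE)

    for idx, file_chunk in enumerate(training_file_chunks):
        ...   # load / balance / fit as before
```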

  • And then now, um, we should be able to get three full epochs, just for comparison's sake.

  • I just want to see more epochs and see:

  • Does that help?

  • Does it do anything?

  • Does it hurt or whatever?

  • So back to training.

  • Okay. See you guys in the next video.
