What's going on, everybody? Welcome to part five of our shenanigans-with-neural-networks tutorial series. In this part, we're going to talk about the results from training the, let's call it the classification generator. So let's talk about it. What I'm running right here is, let's see, I guess you can't see this on the screen; let me pull it up a little bit. There we go. Basically, every time it gets the classification right, like that one: it predicted it was a one. This was a seven, I'm guessing this is a 20, and so on, so these are all incorrect, incorrect, incorrect, and then eventually it might get a correct one, like here: zero, correct. Not a seven, anyway. I've been running this for a while, about 316 tests, 317 now, and it's about 22% accurate. Also, I'm still training this model; it's actually still a very young model. Moving over here, this is the model still in training. As you can see, the loss has leveled out quite a bit, but if I smooth it out, you can see it's still slowly tapering down, so I'd like to let it keep going. In fact, it's less than 10% done with the 50 epochs we wanted to go through, so I kind of wanted to let it continue, but I didn't want to do that if it was just a waste of time, and I actually did not know if this was going to be successful. My first attempt was completely unsuccessful, and then I wondered whether giving it a little more opportunity to get things right would help, and sure enough, it has. So what I'd like to do now is go over the code that I used to get this. There's really no point in us writing it out together; I just want to show you the logic I'm using to detect whether or not we got something right.
We should also talk about what the chances of getting it right are. For example, with a regular classifier, you'd say the classifier is better than random if it makes the right prediction more often than chance. In the case of MNIST, there are 10 classes, so anything over 10% accuracy is better than random. With a generative model, though, there's effectively an infinite number of things it could generate. We'd still like to see something over 10% accuracy, but really, any accuracy is pretty good, because it could have generated anything. As we can see, it seems to get zeros pretty well, probably because it just over-predicts zero very frequently, but as you saw, it got the three right, and the nine, and so on. It's on a roll. That did look like a four, come on, man. Anyway, here is the code that I just changed up; this is what was running a moment ago. Let me fit it to the screen so I don't talk about something you can't actually see. There's not too much here that's different; as you can see, it's the same kind of logic we've seen before. And again, if you want this code, you can go to the text-based version of the tutorial; I'll put it there (it's actually not there right now because I made some changes). Basically, all it's doing is iterating through the X's, which in this case, right there, are the validation data, so these are numbers it's never seen before. It's going to generate from those, make a classification, and output it. Now, our logic for deciding whether we got it right isn't necessarily the greatest logic; I'm still very much in the development stage. But for example, what constitutes our logic for whether we got it right or not is... wait, I'm sorry, this is not updated. Hold on.
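To make the "better than random" idea concrete, here's a minimal sketch (not the tutorial's actual code; the names and the example outcomes are made up) of tracking running accuracy against the 10% random-guess baseline for MNIST's 10 classes:

```python
# Minimal sketch: running accuracy vs. the random baseline for MNIST.
# `results` is a list of booleans, one per test, True when the
# generated classification was judged correct.
def running_accuracy(results):
    correct = sum(results)
    total = len(results)
    return correct / total if total else 0.0

RANDOM_BASELINE = 1 / 10  # 10 MNIST classes, so random guessing gets ~10%

# Hypothetical outcomes from a handful of tests.
history = [False, True, False, False, True]
acc = running_accuracy(history)
print(f"accuracy: {acc:.0%}, better than random: {acc > RANDOM_BASELINE}")
```

Anything consistently above that 10% line suggests the model has learned something, which is why even the ~22% we're seeing here is encouraging for a generative model.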
Let me pull up the updated one; I had commented out this logic but not shown the other logic. One moment. Okay, take two. Here's the new logic: if the argmax of the label appears in the decoded sample, looking at just the last 90 elements, we just say, yeah, we did a good job. Obviously, there are a few problems with that. Why did it do that? I'm not sure; we must have hit quit or something. Anyway, we ran 402 tests, and that's enough tests, I say. That was on a model at 18,000 steps, and we're already at 21,000 now, so I'll probably transfer over again in a little bit. Anyway, what I wanted to figure out is whether it's worth continuing to train this model for a couple of days, and I think it is, so I'm going to let this one continue. We'll continue with the next topic in the tutorial and probably come back to this one for the results, so you'll have to stay tuned; it's going to be a cliffhanger. But right now, getting about 23% accuracy is pretty good. Anyway, as I was saying, in the case of a zero, for example, there are frequently zeros in the output, so a lot of times it just starts predicting another number. This is a perfect example: it got the colons and started to sort of make that prediction, but then it just started drawing another zero, basically, or who knows what it's doing. That's not totally right, and we're classifying it as correct, so take our accuracy with a grain of salt. But just from running it and visually inspecting it as well, I think we're doing pretty well. I'll worry about making something a little more official later; we'll probably use something like a regular expression to find something that fits the correct format and then decide: is this correct or not? Do we have a large group, like in this case a bunch of sevens, not just one 7? Or in this case, it says five.
Well, if the prediction was a five and all we had was just this, I would say no, that's wrong. So anyway, I'll do that later; like I said, this was just a really quick test to see if there's anything here at all, and I think there is, so that's interesting. I'm going to let this one continue to run. The other thing I'll show you is that if we just run `python sample.py`, with n equals, let's do 5000, and send the output to out.txt... it's actually really good at... oh, the primer. Hold on. The primer defaults to a space, and obviously we don't have any spaces in our data, so let's set it. Is it one dash or two? Let's check. Yeah, `--prime`, double dash for the full word. So `--prime` equals, I don't know, we could just do a bracket, anything really. While that's running, let me pull up... is this the one? Yes, we already have an out.txt; hopefully it'll get replaced. That's not the right timestamp. Okay, it says it's done. Looks good, let's open it up. Hmm, okay, I guess it just appended to it. Well, anyway, as we can see, it should get these really well, since these are ones it generated itself, so we should expect that. That's a goofy-looking five, come on. It's getting these wrong; okay, let me do a bigger one. You're making me look bad. At least from what I've seen, it generally does a really good job here, because in the one case we're doing classifications on validation data, and in this one it's doing generation, which is like stuff it's seen before. So it should be a slightly easier task to get the numbers right that it has generated itself, and a much harder task to get the numbers right that it's not generating. I don't know, this should go pretty quick, but I might have to pause this one.
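The sampling step above could also be scripted rather than run by hand. A sketch, assuming the char-RNN-style `sample.py` flags used in this series (`-n` for the number of characters, `--prime` for the primer); the output filename is just an example:

```python
import subprocess

# Build the sampling command described above. The default primer is a
# space, which never appears in our encoded digits, so we pass an
# explicit primer like "[".
def build_sample_cmd(n, prime):
    return ["python", "sample.py", "-n", str(n), "--prime", prime]

# Run the sampler and capture its output. Opening with "w" overwrites
# out.txt instead of appending to a stale previous run.
def sample_to_file(path="out.txt", n=5000, prime="["):
    with open(path, "w") as f:
        subprocess.run(build_sample_cmd(n, prime), stdout=f, check=True)
```

Redirecting via `stdout=f` in Python mirrors the shell's `> out.txt`, but makes it harder to accidentally append to an old file.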
While we wait, also check this out: it looks like the loss is starting to tick up here, but on a long enough timeline it should at least level out. I don't think we're going to make any more big changes, but I wouldn't mind letting it iterate a few more times through its own dataset. Maybe I'll pause it; I'm just going to pause this until it's done. Core dumped? I can't decide, is that my fault? Is it me hitting a key that does that, or does it just do that on its own? Let's see, do we have an out.txt? We do, at least, but it's empty. Damn it. Okay, let's just do 25,000. Okay, pausing. Okay, it's definitely me: the key combination I'm hitting to pause it is causing a core dump. Interesting, noted, thank you, sir. All right, I'm going to pause now and run this. All right, so now it's done; let's see how we've done. out.txt should be this one, so hopefully this will look a little better than the other one. So: three, correct. Not a five. One, should have been a six. One, should have been a nine. Seven, four, four: good, good, good. I'm not actually sure if that's a seven or a one, but I'm going to give it to him. Interestingly enough, this one is actually doing worse at the classification part during a straight-up big sample than the earlier model where the labels were just a one-hot array; that one did much better. But I'm not really interested in the classifications being right when it's just generating on its own; I'm more interested in how accurate it can be on things it's never seen. Anyway, not bad. It's definitely better here than it is on numbers it's never seen, but before, it was practically perfect. So, to be clear, that's just generation, which is a little easier than generating classifications on the validation set.
Okay, so what I'm going to do is leave this model training for the couple of days it needs in order to be fully done with the 50 epochs. I'll probably check in again, maybe around epoch 10 or so, or maybe tomorrow, and see where we're at, probably at 20-something percent or more. So maybe we'll check in again; otherwise, I'm going to start working on the next thing, which is, rather than generating the classification, generating the number based on the input that we want to put in instead. It's very much like what we've been doing up to this point, just with a couple of tweaks, at least the way I did it. That's what we're going to be doing in the coming videos. As always, if you've got questions, comments, concerns, whatever, feel free to leave them below. You can support this content at pythonprogramming.net/support. Till next time.