Introduction - Deep Learning and Neural Networks with Python and Pytorch p.1

  • What's going on, everybody.

  • And welcome to a deep learning with Python and, of course, PyTorch tutorial series.

  • This tutorial series is going to start from the very, very basics, assuming maybe you don't know anything about deep learning, and then we'll be getting into more complex types of neural networks and applications for them.

  • So anyways, to get started, of course you're going to need PyTorch.

  • You're also going to need python.

  • I am assuming you already have at least Python, and my hope is that you at least understand the basics of Python as well as object-oriented programming.

  • So if you don't have either of those two things, you should probably stop and either find someone else's tutorials, or I've got tutorials on both.

  • The basics of Python: it's a pretty short series.

  • It's like 13 parts or something like that, and then also object-oriented programming.

  • It's not that many videos; you should be able to get through it pretty quickly.

  • You should have a basic understanding of Python because I'm just assuming you do.

  • I'm not going to be talking about basic Python concepts, and then object-oriented programming, because in general, your neural network is going to be a class.

  • So if you don't understand that, you're gonna be fuzzy about certain things, and then you're also probably fuzzy about neural networks.

  • It's just not a good combination.

  • You should be solid on how object-oriented programming works first, and then jump into this.

  • Next up: I'm assuming you guys don't, or at least some of you guys probably don't, know about neural networks, or at least don't know much.

  • So I'm gonna go really, really briefly into how neural networks work.

  • You can get by here with a pretty high level understanding of neural networks and be just fine.

  • If you want to be very successful, though, you're gonna need a pretty deep understanding, at least for this series.

  • We're not going to get too deep too quickly, because again, you don't really need that.

  • But if you're curious or you want to get into more cutting-edge research and stuff like that, you're going to need to.

  • So, as this series progresses, we'll be getting a little deeper into things.

  • We'll also be talking about concepts that I'm just not gonna hit immediately, because there's just no need.

  • And then also, at some point, I'm going to do a neural networks from scratch video, and by "scratch," I mean we're going to use NumPy.

  • We need something to do array math or something like that.

  • But no helper functions.

  • Just because plain Python lists would be really slow.

  • Anyway, it's not important.

  • But if you guys want to go a little deeper, you will be able to, both in this series and in the future.

  • Anyways, assuming you don't know anything about neural networks, I'm going to go briefly over them.

  • But then again, if you're still kind of fuzzy about how neural networks work, you can look up videos by 3Blue1Brown; there are some great visuals there.

  • Also, there's just so much information online as far as how neural networks work at a basic level, but I'll do my best to explain them really quickly.

  • So this is a basic neural network.

  • You've got your input data here.

  • Let's assume you've got a situation where you've got images of dogs, cats, and humans, and you want a neural network that can differentiate between an image of dogs, cats, and humans.

  • So over here, in your input, you've got features, and these are going to have to be numerical values in some way.

  • It might be the case that they're pixel values and pixel values are already numerical values, right?

  • 0 to 255 for RGB, or if it's a grayscale image, it's just gonna be 0 to 255 or some other number.

  • It's usually 0 to 255 anyways, so they'll already be in numerical form.

  • So the input, we call that the features, but your features could also be some sort of categorical type of feature.

  • So it's like: has four legs, has two legs.

  • Could be: body covered in hair, or something like that, or what kind of color?

  • How tall is this thing?

  • That kind of stuff is going to be descriptive features, and at some point we have to convert that to numerical, so you could just simply say the first thing is a zero.

  • The second thing is a one, and so on, and we'll talk much more about that and how to convert things to numbers later.

  • But just understand it's always gonna be a number that is being input.

  • It has to be a number, so anyways, that's your input.

  • So let's assume they're pixel values, because that's a pretty common task.

  • And it's grayscale, let's say.

  • So you have this input, and then it gets passed to these things called hidden layers.

  • They're called hidden because we don't have control over those layers.

  • The machine has control over those layers.

  • We can read them.

  • We can see what's going on.

  • But we're not changing any values there.

  • After they come through the hidden layers, they get passed to an output.

  • And again, in general, I'm just gonna be super, super general here, hopefully not vague.

  • Let's just say this neuron is your dog neuron, this is your cat neuron, and this is your human neuron.

  • And for the value that winds up here, in general, we do an argmax function that's just gonna ask which of these is the greatest.

  • So dogs, cats, humans: let's say dogs is 5, cats is 7, humans is 12, right?

  • So that would mean the largest value is humans.

  • So we would say this neural network has predicted it's a human.
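As a quick sketch of that argmax step (the 5, 7, 12 here are just the made-up dog/cat/human scores from above, not real outputs):

```python
import torch

# Hypothetical output-layer values for dog, cat, human (indices 0, 1, 2).
outputs = torch.tensor([5.0, 7.0, 12.0])

# argmax returns the index of the largest value: 2, which maps to "human".
print(torch.argmax(outputs))  # tensor(2)
```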

  • Okay, so how does the machine, how does a neural network, even learn?

  • It's not complicated at all.

  • We could look at really complicated imagery like this, but you don't even really have to think of it this way.

  • But basically, each of these connections: you can see how there's a line drawn from every input.

  • Every neuron here, and then every neuron here, has a connection to every other one, right?

  • As you continue going forward here, this is what's referred to as a fully connected neural network.

  • They don't have to be fully connected, and we can talk more about how to do different things later on.

  • But in general, this one's fully connected, and each line here, each connection, is really a weight.

  • So you've got this input value, and then it gets multiplied by a weight, and the bias is optionally added in.

  • A bias is just an extra value; you know, the bias could be three.

  • So it's, you know, the input times the weight, plus three.

  • Okay, and then it becomes a value in here, and then next, the same thing is done.

  • And all of these are added together into this one neuron.

  • So I've got another image here, and this is basically what goes on in a single neuron.

  • So you've got all your inputs coming in.

  • Those inputs are being multiplied by a weight, with biases possibly added; all those are added together, so they're just summed together, and they're passed through an activation function.

  • So the activation function kind of mimics a neuron in your brain, right?

  • Is it firing or isn't it?

  • And it could literally be "is it firing or isn't it," so it would be like a step function.

  • But in most cases, we use the sigmoid function.

  • So it's actually a range between zero and one, and that again just kind of mimics a neuron firing or not.

  • And also it serves a purpose of keeping values from just exploding in your neural network.

  • Um, it just kind of keeps things again between zero and one.
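As a rough sketch of what a single neuron does as just described (inputs times weights, bias added, summed, then squashed by a sigmoid); all the numbers here are made up:

```python
import torch

inputs = torch.tensor([0.2, 0.7, 0.1])    # values coming in from the previous layer
weights = torch.tensor([0.9, -0.4, 0.3])  # one weight per incoming connection
bias = 3.0                                # the optional extra value added in

# Multiply each input by its weight, sum everything, add the bias...
z = torch.sum(inputs * weights) + bias

# ...then pass the result through the activation function.
# Sigmoid squashes it into the 0-to-1 range, like a soft "firing or not."
print(torch.sigmoid(z))
```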

  • One of the key concepts that just keeps coming up time and time again is that, in general, neural networks appreciate values between zero and one, or negative one and one in some cases.

  • So, um, always keep that in mind.

  • So with your input, usually you're going to do what's called scaling, and scale it down to be between zero and one.

  • And then, yeah, if even through your network these values start exploding into gigantic values, something's probably going wrong, but way more on that later.
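A tiny example of that scaling idea, assuming grayscale pixel values in the 0-to-255 range:

```python
import torch

pixels = torch.tensor([0.0, 128.0, 255.0])  # raw grayscale pixel values
scaled = pixels / 255.0                     # now everything sits between 0 and 1
print(scaled)
```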

  • The one thing you really have to think about is what a neural network does.

  • All it does is: each of these weights is what we call a parameter, and same with biases, so weights and biases.

  • Even in this really, really simple neural network that I've hand-drawn here, look at all of those parameters.

  • Every single line is a parameter, probably two parameters if we consider weights and biases.

  • So what the machine does is it has the ability to modify every single one of those weights independently, right?

  • So it just keeps tweaking all of those so that when we pass data, when we train the neural network, we're passing in this input data and we're telling it, let's say it really is a picture of a human.

  • We would say the desired output, the output I wish you would make in this case, is a 0, 0, 1, right?

  • Please make that; and it makes its output, and then we determine how wrong that is with a loss.

  • And then we have an optimizer that goes back and attempts to update these weights in such a way that it gets closer to predicting the targeted output.

  • And we tend to do this over batches, over millions of samples, and slowly, over time, the neural network learns what exact values to put for each of these weights to hopefully generalize over your data and predict.
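The actual code for that loop comes later in the series; purely as a preview, the usual PyTorch pattern looks roughly like this, where `model`, `loss_function`, `optimizer`, and `dataset` are placeholders that haven't been defined yet:

```python
# Sketch of a typical PyTorch training loop; the names below are placeholders.
for features, targets in dataset:           # go through the data in batches
    optimizer.zero_grad()                   # clear gradients from the previous step
    outputs = model(features)               # forward pass: make predictions
    loss = loss_function(outputs, targets)  # measure how wrong the predictions are
    loss.backward()                         # compute gradients of the loss
    optimizer.step()                        # nudge the weights to reduce the loss
```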

  • But at the end of the day, it's just this gigantic function with, like, a million variables, sometimes more; I'm not exaggerating, three million variables is a pretty small neural network.

  • 30 million variables is a regular neural network, so it's just so many things that it gets to tweak and change.

  • And of course, with that come issues like overfitting and stuff like that, and we will talk about that moving forward.

  • But really, all a neural network is is just this huge function where we allow the machine to slowly tweak variables and parameters to output desired results.
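And tying this back to the earlier point that your neural network is going to be a class: this is not the network we'll build in this series, just a minimal sketch of what a tiny fully connected model can look like in PyTorch, with arbitrary layer sizes:

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # Each nn.Linear layer holds a weight matrix and a bias vector:
        # those are the "parameters" the optimizer gets to tweak.
        self.fc1 = nn.Linear(10, 32)  # 10 input features -> 32 hidden neurons
        self.fc2 = nn.Linear(32, 3)   # 32 hidden neurons -> 3 outputs (dog, cat, human)

    def forward(self, x):
        x = torch.sigmoid(self.fc1(x))  # activation keeps values between 0 and 1
        return self.fc2(x)

net = Net()
print(net(torch.rand(1, 10)))  # run one fake sample with 10 features through it
```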

  • Okay, so now hopefully you have a general and basic understanding of how neural networks work again.

  • I think it's completely to be expected that you're somewhat vague on how things work.

  • I think, one, writing it out in code will help, but also, two, there are just vast resources online as far as depicting how neural networks work.

  • And that's how you learn.

  • I would strongly recommend 3Blue1Brown; just type into YouTube "3blue1brown neural network" and I am sure you'll find the series.

  • He has wonderful visualizations, way better than my hand-drawn photos.

  • So anyways, moving along, we're gonna be using PyTorch.

  • Now chances are you've heard of, for sure, TensorFlow, which is another machine learning library, and there are quite a few others.

  • I want to use PyTorch because, one, I just decided one day to learn PyTorch myself, and as I was learning it, I was like, this would be a really good library to do a beginner's deep learning series with, because PyTorch is super friendly to the programmer, especially a Python programmer, as opposed to TensorFlow.

  • And one way it is, is with the graph.

  • And the great thing about PyTorch is you don't even need to know "what the heck is the graph? What is he talking about?"

  • You don't need to know.

  • With pytorch, it's great.

  • With tensorflow you have to.

  • You have to learn about the graph because you have to work around the graph.

  • It's very challenging, whereas in PyTorch you really just don't need to know about it, and so on.

  • And then also, people call it more "Pythonic," and what does that actually mean? In general, with PyTorch, you write Python programming: you write the object, you deal with it like it's an object, and it's eagerly executed.

  • And there is TensorFlow eager execution.

  • But it's not the main TensorFlow, and there are certain issues and limitations with using it; it's also much slower than regular TensorFlow.

  • And then the next question, of course, is how does PyTorch compare to TensorFlow in speed?

  • Uh, and they're pretty close, actually.

  • But PyTorch is faster than eager TensorFlow, and that would be the more fair comparison, because PyTorch is eager. And what I mean by that is you can write a line of code in PyTorch, like you can have your network and then you run one line, or you run through one layer, and then you can check the output and you can see it.

  • Also, with PyTorch, you can branch out into different layers and you write pure Python code to do it.

  • It's just easy to make complex networks.

  • There are some tedious things about PyTorch, and we'll talk about them when we get there.

  • But anyways, long story short, we use PyTorch, and you're gonna love it.

  • So let's click on Get Started.

  • There's, like, a million ways you could install this, so I'm gonna let you install it by yourself.

  • All this stuff should be super simple for you to understand, except for maybe CUDA.

  • If you don't know what CUDA is, go over here.

  • Click on none.

  • If you have an NVIDIA GPU of decent quality, you could get the CUDA version and put operations on your GPU.

  • And when you do that, it's in general somewhere between 50 and 100 times faster, which makes a big difference in training times.

  • To begin this series, at least for the first few videos, all the code that I'm gonna write, I'm gonna be running on my CPU, and it's totally capable of being run on a CPU.

  • At some point, you are going to need access to a mid range or better GPU, either locally or in the cloud.

  • Now, if you have access to one locally, you can install it; we'll be talking about how to do that.

  • And then in the cloud there are lots of options.

  • This series is sponsored by Linode, who also happens to have the best prices in terms of bang-for-your-buck performance in the cloud, so you can check them out.

  • I've done a video on exactly how to set everything up on them as well.

  • I'm not saying that because they're a sponsor; they're the sponsor because they are simply the best right now.

  • Anyways, over time, all these cloud providers are duking it out.

  • So at some point, someone might beat them on price, and I'll let you know when that happens.

  • Till then, Linode has the best cloud GPU prices by far.

  • It's like twice as good as anyone else right now.

  • So anyway, um, check them out.

  • But like I said, if you don't know what CUDA is, don't worry about it right now.

  • Just get the regular CPU version.

  • You're gonna be totally fine.

  • You can learn the basics without needing to run on a GPU, but at some point, you will need to run on a GPU.
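Whichever build you install, a quick sanity check like this (run after installing) will tell you whether PyTorch imported correctly and whether it can see a CUDA GPU:

```python
import torch

print(torch.__version__)          # confirms the install worked at all
print(torch.cuda.is_available())  # True only if a usable CUDA GPU is visible
```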

  • So anyway, um, once you have that, go ahead and do your install.

  • I am going to pause for a moment, and when I'm done pausing, I'm going to assume you have PyTorch installed.

  • Great.

  • Congratulations on your install.

  • So the first thing I'm gonna do is figure out, do I have... oh, I do, I already opened it up.

  • Nice.

  • So, uh, I'm gonna be coding things in Jupyter Lab, but you can use whatever editor you want, because remember, you guys are at least not basic programmers anymore.

  • So I'm assuming you guys can have your own editor and be fine.

  • I'm going to use a Jupyter notebook just because, one, I can (yay for eager execution), but also because it just kind of makes sense, at least when you're trying to learn the very, very basics; running line by line is just useful.

  • Also, if you need to debug things, again, it's just super useful.

  • So I'll be starting in a Jupyter notebook; at some point we'll probably leave the notebooks behind, but for now, cool.

  • So first of all, like, what is PyTorch, right?

  • Like what is that?

  • So again, assuming you know the basics of Python, you're probably familiar with NumPy, which is the numerical processing library of Python.

  • Now the problem with NumPy, which is what I'll explain in a moment, is that it doesn't run on the GPU.

  • So when deep learning first came out, it was very difficult to really run these things because of your CPU; the reason why things are so much faster on your GPU is that, with a neural network, like I said, the thing that is happening is each of these weights is getting updated, so your machine needs to run thousands and thousands, not even that, millions of small calculations.

  • Basically.

  • And so, the GPU... let me think how exactly I want to describe this, and in what order.

  • Let's say your CPU: so your CPU was made to do these huge calculations, but the CPU was really built under the assumption that most calculations that are going to be made are going to be large calculations, very much more challenging calculations than modifying little weights.

  • Okay, your GPU is your graphics card, right?

  • So it's computing graphics, and in graphics there are a lot of very small calculations.

  • So we run on the GPU for that reason.

  • So running on the GPU is where we want to be for something like deep learning, because what we're trying to do is a lot of simple calculations.

  • Your CPU probably has somewhere between, I don't know, four and 12 cores.

  • Okay, a nice GPU has, like, thousands of cores, so it just makes a big difference.

  • So, uh, let me come over here.

  • And so, like I said, PyTorch is basically just NumPy on the GPU with some nice helper functions, because, like, you don't have to write your optimizer and your backpropagation and all these things that probably might not mean anything to you yet, but will soon.

  • But that is it at its basic core.

  • Let's go ahead and import torch. And, can I be bothered to figure this one out first?

  • I wouldn't have minded making my font size larger.

  • There's gotta be a way besides zooming in, but I'll just zoom in for now.

  • Um, so, import torch. And then let's say you've got two variables, so we're gonna say x equals torch.Tensor, and I mean, this could go many ways, but let's say, like, a five and a three, and then you've got y equals torch.Tensor.

  • And we can make this a two and a one.

  • We could then say print x times y; it's gonna take a second to import torch there, but normally that operation will be much faster.
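Reconstructed from the narration (so treat the exact values as approximate), the cell looks something like:

```python
import torch

x = torch.Tensor([5, 3])
y = torch.Tensor([2, 1])

print(x * y)  # element-wise multiply: tensor([10., 3.])
```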

  • But yeah, we just multiplied these two arrays, so: "tensor" is initially kind of a scary-sounding word, but basically a tensor is an array, that's all you need to think of, right, and so basically a multi-dimensional array.

  • So, yes, that's as simple as it gets, right?

  • What's nice about Torch is we can put this on the GPU, and this could also be on the GPU.

  • And then when we multiply them together, the GPU is doing that math.

  • But again, what just happened just now was actually on the CPU.

  • We weren't really asking for very many calculations to run, so it was no big deal to actually run that.
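Moving those tensors onto the GPU, as mentioned, looks roughly like this; it only works with the CUDA build and an available GPU, hence the fallback to CPU:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.Tensor([5, 3]).to(device)
y = torch.Tensor([2, 1]).to(device)

print(x * y)  # the multiply now runs on whatever `device` is
```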

  • The other thing that we can do is, like, we can say x equals torch.zeros, and then we can specify a shape.

  • Let's say a two by five.

  • Let's go ahead and, well, one thing we can do is we can print x.

  • Then we could also say x.shape, and that gives you, like, the size. And before anybody is like, "Is this really necessary?

  • Why are you showing me all these things?

  • Basically, all these things are things that we are going to use.

  • But also, if you're already familiar with NumPy, one thing you might be starting to realize is: hey, they've got zeros and shape, and all these calculations that you tend to need to do, or have done in NumPy, there's a torch variant.

  • So again we could say y equals torch.rand, and then we could say two, five, and then we can output y.

  • And there you have just a random initialization of a two-by-five tensor.
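Reconstructed, those cells look roughly like:

```python
import torch

x = torch.zeros([2, 5])  # a 2x5 tensor filled with zeros
print(x)
print(x.shape)           # torch.Size([2, 5])

y = torch.rand([2, 5])   # a 2x5 tensor of random values between 0 and 1
print(y)
```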

  • So the next thing I want to show is reshaping.

  • So the one thing that, for whatever reason, they have decided to call something that you are not familiar with is when you want to actually reshape something.

  • In NumPy you would say .reshape, and I believe in TensorFlow it would also be a .reshape.

  • In torch, it's not a reshape; it's called... you use a function called, or really a method called, view.

  • So this is a two by five.

  • So let's say we want to do what's called a flatten operation.

  • So a two by five: in general, you're not gonna be able to put a two by five into a neural network.

  • So say it's an image, and the image is two by five pixels; that's not gonna be the case, but let's just say it was.

  • The first thing you have to do before you feed this image through, let's say, a basic neural network is you have to flatten it, so it would be a one by 10.

  • Right, cause there's 10 total elements.

  • If it's two by five, and we flatten it, it will be a one by 10.

  • Well, how do we do that, though?

  • So the way we would do that is with a reshape, like in NumPy, but here in torch it's view, and there's one thing to keep in mind that I'll show in a moment.

  • We would say one by 10, right?

  • So there's your one-by-10 reshape.

  • The one thing to keep in mind is, if we print out y again, you'll see it hasn't actually been modified.

  • So keep in mind that you would need to reassign, so you would need to say y equals y.view.
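So the flatten-and-reassign step being described is roughly:

```python
import torch

y = torch.rand([2, 5])

print(y.view([1, 10]))  # returns a reshaped 1x10 view; y itself is unchanged
print(y.shape)          # still torch.Size([2, 5])

y = y.view([1, 10])     # reassign to actually keep the new shape
print(y.shape)          # now torch.Size([1, 10])
```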

  • And I guess, you know, the name view sort of makes sense, like it's "show me this view, show me y viewed as this shape" or something.

  • I don't really mind the name, but it is different from what we're used to in terms of reshape; it's honestly not that bad, though.

  • People made a big deal about that when I was reading about updates to PyTorch.

  • People are complaining about that.

  • I'm like, really?

  • I mean, it is a good name.

  • So anyway, okay, so, as you can see, it's just doing simple math with arrays, basically.

  • Now again, there are lots of little functions that we can also import to help us specifically with neural networks.

  • But at its core, it really is just a library to help you do array math.

  • So anyways, I think I'm gonna cut it here.

  • And then in the next tutorial, we're going to start talking about the actual data itself.

  • And then probably that'll consume a tutorial.

  • And then we'll talk about building the neural network and then training the neural network.

  • So, um, as strange as it might sound, we're gonna dedicate the entire next video to data alone, and even then, it's pretty simple.

  • But if you think about it, as somebody who's doing machine learning, your biggest job, the one thing you have control over, is data, and then, you know, you sort of have control in other ways too.

  • Like you can change the shapes and sizes of your neural network.

  • You get to dictate what the structure is.

  • But in general, the biggest thing for you to do is provide good data.

  • It's kind of like with a lot of things, like audio or something, where you say garbage in, garbage out.

  • If you put in bad data, you're probably gonna get bad data out.

  • So anyways, in the next tutorial, we're gonna discuss data, and we're gonna go easy mode on the data first, and then the next neural network we build will get a little more complicated; but anyways, we've covered a lot of basic stuff here.

  • If you have questions, comments, concerns.

  • If I said something wrong, feel free to correct me.

  • Um, you can leave those below.

  • Also, we have a Discord; it's discord.gg/sentdex.

  • If you've got any questions or you need help on something.

  • Come check out the discord.

  • Also, special shout-out to my channel members who have been with me now for a year:

  • Ah Har Soft.

  • Angela Montalvo, Mr Jean Jean's and Edward McCain.

  • Thank you guys so much for your support.

  • You guys are amazing.

  • If you want to support the channel, you can click on that beautiful blue join button.

  • You get early access to content, shoutouts, and a special ranking in the Discord.

  • That pretty much covers it, I think.

  • Anyway, uh, that's it for now.

  • I will see you guys in another video.
