Solve your model's overfitting and underfitting problems - Pt.2 (Coding TensorFlow)

  • Hi there, everybody.

  • Wassup.

  • My name is Magnus.

  • And you're watching Coding TensorFlow, the show where you learn how to code in TensorFlow.

  • All right, this is the second episode, where we explore overfitting and underfitting.

  • If you haven't already, you need to check out the first episode.

  • See the link below.

  • Don't worry.

  • I'll wait for you here.

  • Okay, so we left off episode one looking at the multi-hot encoding of our input string, using the sentence "The small cat."

  • As you can see, we put a one at the array indexes for each word present, and a zero at all other indexes.
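
As a rough sketch of what that encoding looks like in code (the function name, vocabulary size, and example word indexes below are illustrative, not taken from the video):

```python
import numpy as np

NUM_WORDS = 10000  # assumed vocabulary size

def multi_hot_encode(sequences, dimension=NUM_WORDS):
    # One row per input sequence, one column per word in the vocabulary.
    results = np.zeros((len(sequences), dimension))
    for i, word_indices in enumerate(sequences):
        results[i, word_indices] = 1.0  # one at each present word's index, zero elsewhere
    return results

# E.g., if "the" = 4, "small" = 87, and "cat" = 231 in our vocabulary:
encoded = multi_hot_encode([[4, 87, 231]])
```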

  • Let's now look at three different models we will use to demonstrate overfitting.

  • There will be a baseline model, a very small model, and then a bigger model.

  • Our baseline model will consist of three dense layers: the first two with 16 neurons and ReLU, and then our classification layer using sigmoid.

  • Our small model will be just a fraction of our baseline model with just four neurons instead of 16.

  • And our bigger model will be very similar in structure, but have 512 neurons for each of the first and second layers.
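
A hedged sketch of those three model definitions in Keras, following the structure just described; the variable names and the multi-hot input shape are assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_WORDS = 10000  # assumed size of the multi-hot input vectors

# Baseline: two 16-neuron ReLU layers plus a sigmoid classification layer.
baseline_model = keras.Sequential([
    layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),
    layers.Dense(16, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])

# Small: the same structure with just 4 neurons per hidden layer.
smaller_model = keras.Sequential([
    layers.Dense(4, activation='relu', input_shape=(NUM_WORDS,)),
    layers.Dense(4, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])

# Bigger: 512 neurons in each of the first two layers.
bigger_model = keras.Sequential([
    layers.Dense(512, activation='relu', input_shape=(NUM_WORDS,)),
    layers.Dense(512, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
```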

  • Okay, time to bring out the code, which you can locate here.

  • Now let's train and test these models.

  • First we define the baseline model.

  • Then we train it.

  • Here we define the small model and train it, and finally we define the bigger model and train it.
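
As a sketch of that training step (the data variables, optimizer, and epoch count are assumptions; the notebook shown in the video may differ):

```python
# train_data/test_data are assumed to be multi-hot encoded arrays,
# train_labels/test_labels the binary class labels.
histories = {}
for name, model in [('baseline', baseline_model),
                    ('smaller', smaller_model),
                    ('bigger', bigger_model)]:
    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    histories[name] = model.fit(train_data, train_labels,
                                epochs=20, batch_size=512,
                                validation_data=(test_data, test_labels))
```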

  • All right, now for the interesting part: comparing how these models perform.

  • As you can see, the training loss for baseline and bigger quickly decreases, while it takes much longer for the small model.

  • For our discussion on overfitting, though, what's more interesting is the validation of the models on the test data set.

  • Here you can clearly see that the loss quickly increases the more features our models have.

  • This is a clear example of overfitting.

  • The trade-off here is that the more neurons our model has, the higher the risk of memorizing the training data, and our model will not work well during validation.

  • This is called overfitting.

  • But at the same time, if we have too few neurons, our model may not be expressive enough to solve the problem.

  • This is called under fitting.

  • So what can we do about this? Well, there are two ways to approach this problem.

  • The first one is called regularization.

  • You can read a detailed explanation of that here.

  • What it really boils down to, though, is forcing the weights of our model to be as small as possible.

  • This prevents our model from learning overly specialized things about our training data.

  • Doing this is very straightforward in TensorFlow.

  • Simply use the kernel_regularizer parameter when defining the model.

  • Let's train our baseline model using these parameters.
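
A sketch of what that looks like, reusing the baseline structure from the earlier sketch; the 0.001 L2 penalty is illustrative:

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

# Baseline structure with an L2 penalty on each hidden layer's weights.
l2_model = keras.Sequential([
    layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                 activation='relu', input_shape=(NUM_WORDS,)),
    layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                 activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])

l2_model.compile(optimizer='adam', loss='binary_crossentropy',
                 metrics=['accuracy'])
```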

  • As you can see, our L2 model, which is regularized, validates much better on the test data set than our previous baseline model.

  • A second way to deal with overfitting is to use something called dropout.

  • Dropout simply means that we set a layer's features, with some probability, to zero.

  • So in this example, we're adding a dropout probability of 50% to our layers when we train our model.
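
A minimal sketch of the same baseline model with dropout added, again assuming the structure from earlier:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Baseline structure with 50% dropout after each hidden layer.
dropout_model = keras.Sequential([
    layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),
    layers.Dropout(0.5),  # randomly zero 50% of the layer's outputs during training
    layers.Dense(16, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid'),
])

dropout_model.compile(optimizer='adam', loss='binary_crossentropy',
                      metrics=['accuracy'])
```

Note that Keras applies dropout only during training; it is disabled automatically at evaluation and inference time.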

  • And as you can see again, this mitigates the overfitting problem.

  • If you want more information about overfitting and underfitting, you should watch the generalization video from the Machine Learning Crash Course.

  • The link is in the description, and that's it for this two-episode series.

  • I hope you had fun watching, and please subscribe to this channel below to see more videos like this.

  • But now it's your turn to go out there and create some great models.

  • Don't forget to tell us all about it.
