  • ANNOUNCER: The following program is brought to you by Caltech.

  • YASER ABU-MOSTAFA: Welcome back.

  • Last time, we introduced the learning problem.

  • And if you have an application in your domain and you wonder whether machine

  • learning is the right technique for it, we found that there are three

  • criteria that you should check.

  • You should ask yourself: is there a pattern to begin

  • with that we can learn?

  • And we realize that this condition can be intuitively met in many applications,

  • even if we don't know mathematically what the pattern is.

  • The example we gave was the credit card approval.

  • There is clearly a pattern-- if someone has a particular salary, has

  • been in a residence for so long, has that much debt, and so on, then this

  • is somewhat correlated to their credit behavior.

  • And therefore, we know that the pattern exists in spite of the fact

  • that we don't know exactly what the pattern is.

  • The second item is that we cannot pin down the pattern mathematically, like

  • the example I just gave.

  • And this is why we resort to machine learning.

  • The third one is that we have data that represents that pattern.

  • In the case of the credit application, for example, there are historical

  • records of previous customers, and we have the data they wrote in their

  • application when they applied, and we have some years' worth of records of

  • their credit behavior.

  • So we have data that are going to enable us to correlate what they wrote in the

  • application to their eventual credit behavior, and that is what we are

  • going to learn from.

  • Now, if you look at the three criteria, basically there are two that

  • you can do without, and one that is absolutely essential.

  • What do I mean?

  • Let's say that you don't have a pattern.

  • Well, if you don't have a pattern, then you can try learning.

  • And the only problem is that you will fail.

  • That doesn't sound very encouraging.

  • But the idea here is that, when we develop the theory of learning, we will

  • realize that you can apply the technique regardless of whether there

  • is a pattern or not.

  • And you are going to determine whether there's a pattern or not.

  • So you are not going to be fooled and think, I learned, and then give the

  • system to your customer, and the customer will be disappointed.

  • There is something you can actually measure that will tell you whether you

  • learned or not.

  • So if there's no pattern, there is no harm done in trying machine learning.

  • The other one, also, you can do without.

  • Let's say that we can pin the thing down mathematically.

  • Well, in that case, machine learning is not the recommended technique.

  • It will still work.

  • It may not be the optimal technique.

  • If you can outright program it, and find the result perfectly, then why

  • bother generating examples, and trying to learn, and going through all of that?

  • But machine learning is not going to refuse.

  • It is going to learn, and it is going to give you a system.

  • It may not be the best system in this case, but it's a system nonetheless.

  • The third one, I'm afraid you cannot do without.

  • You have to have data.

  • Machine learning is about learning from data.

  • And if you don't have data, there is absolutely nothing you can do.

  • So this is basically the picture about the context of machine learning.

  • Now, we went on to focus on one type, which is supervised learning.

  • And in the case of supervised learning, we have a target function.

  • The target function we are going to call f.

  • That is our standard notation.

  • And this corresponds, for example, to the credit application.

  • x is your application, and f of x is whether you are a good credit risk or

  • not, for the bank.

  • So if you look at the target function, the main criterion about the target

  • function is that it's unknown.

  • This is a property that we are going to insist on.

  • And obviously, unknown is a very generous assumption, which means that

  • you don't have to worry about what pattern you are trying to learn.

  • It could be anything, and you will learn it-- if we manage to do that.

  • There's still a question mark about that.

  • But it's a good assumption to have, or lack of assumption, if you will,

  • because then you know that you don't worry about the environment that

  • generated the examples.

  • You only worry about the system that you use to implement machine learning.

  • Now, you are going to be given data.

  • And the reason it's called supervised learning is that you are not only

  • given the input x's, as you can see here.

  • You're also given the output--

  • the target outputs.

  • So in spite of the fact that the target function is generally unknown,

  • it is known on the data that I give you.

  • This is the data that you are going to use as training examples, and that you

  • are going to use to figure out what the target function is.

  • So in the case of supervised learning, you have the targets

  • explicitly.

  • In the other cases, you have less information than the target, and we

  • talked about it-- like unsupervised learning, where you don't have

  • anything, and reinforcement learning, where you have partial information,

  • which is just a reward or punishment for a choice of a value of y that

  • may or may not be the target.

  • Finally, you have the solution tools.

  • These are the things that you're going to choose in order to solve the

  • problem, and they are called the learning model, as we discussed.

  • They are the learning algorithm and the hypothesis set.

  • And the learning algorithm will produce a hypothesis--

  • the final hypothesis, the one that you are going to give your customer, and

  • we give the symbol g for that.

  • And hopefully g approximates f, the actual target function,

  • which remains unknown.

  • And g is picked from a hypothesis set, and the general symbol for

  • a member of the hypothesis set is h.

  • So h is a generic hypothesis.

  • The one you happen to pick, you are going to call g.

  • Now, we looked at an example of a learning algorithm.

  • First, the learning model-- the perceptron itself, which is a linear

  • function, thresholded.

  • That happens to be the hypothesis set.

  • And then, there is an algorithm that goes with it that chooses which

  • hypothesis to report based on the data.

  • And the hypothesis in this case is represented by the purple line.

  • Different hypotheses in the set H will result

  • in different lines.

  • Some of them are good and some of them are bad, in terms of separating

  • correctly the examples, which are the pluses and minuses.

  • And we found that there's a very simple rule to adjust the current

  • hypothesis, while the algorithm is still running, in order to get a better

  • hypothesis.

  • And once you have all the points classified correctly, which is

  • guaranteed in the case of the perceptron learning algorithm if the

  • data was linearly separable in the first place,

  • then you will get there, and that will be the g that you are going to report.
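
  • As a quick reminder, here is a minimal sketch of that algorithm in Python (an illustration, not the lecture's code; the array shapes and the random choice among misclassified points are assumptions):

```python
import numpy as np

def pla(X, y, max_iters=1000):
    """Perceptron learning algorithm -- a minimal sketch.

    X: (N, d+1) inputs, each row starting with 1 for the bias term.
    y: (N,) labels of +1/-1, assumed linearly separable.
    """
    w = np.zeros(X.shape[1])            # initial hypothesis: the zero weight vector
    for _ in range(max_iters):
        misclassified = np.where(np.sign(X @ w) != y)[0]
        if misclassified.size == 0:     # all points classified correctly: report this as g
            return w
        i = np.random.choice(misclassified)
        w = w + y[i] * X[i]             # the simple update rule: nudge w toward x_i's label
    return w
```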

  • Now, we ended the lecture on sort of a sad note, because after all of this

  • encouragement about learning, we asked ourselves: well,

  • can we actually learn?

  • So we said it's an unknown function.

  • Unknown function is an attractive assumption, as I said.

  • But can we learn an unknown function, really?

  • And then we realized that if you look at it, it's really impossible.

  • Why is it impossible?

  • Because I'm going to give you a finite data set, and I'm going to give you

  • the value of the function on this set.

  • Good.

  • Now, I'm going to ask you what is the function outside that set?

  • How in the world are you going to tell what the function is outside, if the

  • function is genuinely unknown?

  • Couldn't it assume any value it wants?

  • Yes, it can.

  • I can give you 1000 points, a million points, and on the million-and-first point,

  • still the function can behave any way it wants.

  • So it doesn't look like the statement we made is feasible in terms of

  • learning, and therefore we have to do something about it.

  • And what we are going to do about it is the subject of this lecture.

  • Now, the lecture is called Is Learning Feasible?

  • And I am going to address this question in extreme detail from

  • beginning to end.

  • This is the only topic for this lecture.

  • Now, if you want an outline--

  • it's really a logical flow.

  • But if you want to cluster it into points--

  • we are going to start with a probabilistic situation, that is a very

  • simple probabilistic situation.

  • It doesn't seem to relate to learning.

  • But it will capture the idea--

  • can we say something outside the sample data that we have?

  • So we're going to answer it in a way that is concrete, and where the

  • mathematics is very friendly.

  • And then after that, I'm going to be able to relate that probabilistic

  • situation to learning as we stated.

  • It will take two stages.

  • First, I will just translate the expressions into something that

  • relates to learning, and then we will move forward and make it correspond to

  • real learning.

  • That's the last one.

  • And then after we do that, and we think we are done, we find that there is

  • a serious dilemma that we have.

  • And we will find a solution to that dilemma, and then declare victory-- that

  • indeed, learning is feasible in a very particular sense.

  • So let's start with the experiment that I talked about.

  • Consider the following situation.

  • You have a bin, and the bin has marbles.

  • The marbles are either red or green.

  • That's what it looks like.

  • And we are going to do an experiment with this bin.

  • And the experiment is to pick a sample from the bin--

  • some marbles.

  • Let's formalize what the probability distribution is.

  • There is a probability of picking a red marble, and let's call it mu.

  • So now you think of mu as the probability of a red marble.

  • Now, the bin is really just a visual aid to make us relate to the

  • experiment.

  • You can think of this abstractly as a binary experiment--

  • two outcomes, red or green.

  • Probability of red is mu, independently from

  • one point to another.

  • If you want to stick to the bin, you can say the bin has an infinite number

  • of marbles and the fraction of red marbles is mu.

  • Or maybe it has a finite number of marbles, and you are going to pick the

  • marbles, but replace them.

  • But the idea now is that every time you reach in the bin, the probability

  • of picking a red marble is mu.

  • That's the rule.

  • Now, there's a probability of picking a green marble.

  • And what might that be?

  • That must be 1 minus mu.

  • So that's the setup.

  • Now, the value of mu is unknown to us.

  • So in spite of the fact that you can look at this particular bin and see

  • there's less red than green, so mu must be small, and all of that.

  • You don't have that advantage in reality.

  • The bin is opaque-- it's sitting there, and I reach for it like this.

  • So now that I declare mu is unknown, you probably see

  • where this is going.

  • Unknown is a famous word from last lecture, and that will be the link to

  • what we have.

  • Now, we pick N marbles independently.

  • Capital N. And I'm using the same notation for N, which is the

  • number of data points in learning, deliberately.

  • So the sample will look like this.

  • And it will have some red and some green.

  • It's a probabilistic situation.

  • And we are going to call the fraction of red marbles in the sample--

  • this now is a probabilistic quantity.

  • mu is an unknown constant sitting there.

  • If you pick a sample and someone else picks a sample, you will have a different

  • in-sample frequency from the other person.

  • And we are going to call it nu.

  • Now, interestingly enough, nu also should appear in the figure.

  • So it says nu equals fraction of red marbles.

  • So that's where it lies.

  • Here is nu!

  • For some reason that I don't understand, the app wouldn't show nu

  • in the figures.

  • So I decided maybe the app is actually a machine learning expert.

  • It doesn't like things in sample.

  • It only likes things that are real.

  • So it knows that nu is not important.

  • It's not an indication.

  • We are really interested in knowing what's outside.

  • So it kept the mu, but actually deleted the nu.

  • At least that's what we are going to believe for the rest of the lecture.

  • Now, this is the bin.

  • So now, the next step is to ask ourselves the question we asked in

  • machine learning.

  • Does nu, which is the sample frequency, tell us anything about mu,

  • which is the actual frequency in the bin that we are interested in knowing?

  • The short answer--

  • this is to remind you what it is.

  • The short answer is no.

  • Why?

  • Because the sample can be mostly green, while the bin is mostly red.

  • Anybody doubts that?

  • The thing could have 90% red, and I pick 100 marbles, and all

  • of them happen to be green.

  • This is possible, correct?

  • So if I ask you what is actually mu, you really don't know from the sample.

  • You don't know anything about the marbles you did not pick.

  • Well, that's the short answer.

  • The long answer is yes.

  • Not simply contradicting the no-- this answer is more elaborate.

  • We have to really discuss a lot in order to get there.

  • So why is it yes?

  • Because if you know a little bit about probability, you realize that if the

  • sample is big enough, the sample frequency, which is nu-- the mysterious

  • disappearing quantity here-- that is likely to be close to mu.

  • Think of a presidential poll.

  • There are maybe 100 million or more voters in the US, and you make a poll

  • of 3000 people.

  • You have 3000 marbles, so to speak.

  • And you look at the result in the marbles, and you tell me how the 100

  • million will vote.

  • How the heck did you know that?

  • So now the statistics come in.

  • That's where the probability plays a role.

  • And the main distinction between the two answers is

  • possible versus probable.

  • In science and in engineering, you go a huge distance by settling for not

  • absolutely certain, but almost certain.

  • It opens a world of possibilities, and this is one of the

  • possibilities that it opens.

  • So now we know that, from a probabilistic point of view, nu does

  • tell me something about mu.

  • The sample frequency tells me something about the bin.

  • So what does it exactly say?

  • Now we go into a mathematical formulation.

  • In words, it says: in a big sample, nu, the sample frequency,

  • should be close to mu, the bin frequency.

  • So now, the symbols that go with that-- what is a big sample?

  • Large N, our parameter N.

  • And how do we say that nu is close to mu?

  • We say that they are within epsilon.

  • That is our criterion.
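
  • Before formalizing this, a small simulation sketch makes it concrete (mu = 0.9, epsilon = 0.05, and the sample sizes here are illustrative choices, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, epsilon = 0.9, 0.05      # bin frequency and tolerance (illustrative values)

for N in (10, 100, 1000):
    # 10,000 independent samples of N marbles; nu is the red fraction in each.
    nu = rng.binomial(N, mu, size=10_000) / N
    bad = np.mean(np.abs(nu - mu) > epsilon)   # how often nu misses mu by more than epsilon
    print(f"N={N:>4}: fraction of bad samples = {bad:.3f}")
```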

  • Now, with this in mind, we are going to formalize this.

  • The formula that I'm going to show you is a formula that is going to

  • stay with us for the rest of the course.

  • I would like you to pay attention.

  • And I'm going to build it gradually.

  • We are going to say that the probability of something is small.

  • So we're going to say that it's less than or equal to, and hopefully the

  • right-hand side will be a small quantity.

  • Now if I am claiming that the probability of something is small, it

  • must be that that thing is a bad event.

  • I don't want it to happen.

  • So we have a probability of something bad happening being small.

  • What is a bad event in the context we are talking about?

  • It is that nu does not approximate mu well.

  • They are not within epsilon of each other.

  • And if you look at it, here you have nu minus mu in absolute value, so

  • that's the difference in absolute value.

  • That happens to be bigger than epsilon.

  • So that's bad, because that tells us that they are further apart than our

  • tolerance epsilon allows.

  • We don't want that to happen.

  • And we would like the probability of that happening to

  • be as small as possible.

  • Well, how small can we guarantee it?

  • Good news.

  • It's e to the minus N.

  • It's a negative exponential.

  • That is great, because negative exponentials tend to die very fast.

  • So if you get a bigger sample, this will be diminishingly small

  • probability.

  • So the probability of something bad happening will be very small, and we

  • can claim that, indeed, nu will be within epsilon of mu, and we will be

  • wrong only a very minute fraction of the time.

  • But that's the good news.

  • Now the bad news--

  • ouch!

  • Epsilon is our tolerance.

  • If you're a very tolerant person, you say:

  • I just want nu and mu to be within, let's say, 0.1.

  • That's not very much to ask.

  • Now, the price you pay for that is that you plug in the exponent

  • not epsilon, but epsilon squared.

  • So that becomes 0.01.

  • 0.01 will dampen N significantly, and you lose a lot of the benefit of the

  • negative exponential.

  • And if you are more stringent and you say, I really want nu

  • to be close to mu.

  • I am not fooling around here.

  • So I am going to pick epsilon to be 10 to the minus 6.

  • Good for you.

  • 10 to the minus 6?

  • Pay the price for it.

  • You go here, and now that's 10 to the minus 12.

  • That will completely kill any N you will ever encounter.

  • So the exponent now will be around zero.

  • So this probability will be around 1, if that was the final answer.

  • That's not yet the final answer.

  • So now, you know that the probability is less than or equal to 1.

  • Congratulations!

  • You knew that already. [LAUGHTER]

  • Well, this is almost the formula, but it's not quite.

  • What we need is fairly trivial.

  • We just put 2 here, and 2 there.

  • Now, between you and me, I prefer the original formula

  • without the 2's.

  • However, the formula with the 2's has the distinct advantage of being: true. [LAUGHTER]

  • So we have to settle for that.

  • Now that inequality is called Hoeffding's Inequality.

  • It is the main inequality we are going to be using in the course.
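
  • Written out (the formula itself was on the slide rather than in the spoken transcript), it reads:

$$ \mathbb{P}\big[\,|\nu - \mu| > \epsilon\,\big] \;\le\; 2\,e^{-2\epsilon^2 N} $$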

  • You can look for the proof.

  • It's a basic proof in mathematics.

  • It's not that difficult, but definitely not trivial.

  • And we are going to use it all the way-- and this is the same formula

  • that will get us to prove something about the VC dimension.

  • If the buzzword 'VC dimension' means anything to you, it will come from

  • this after a lot of derivation.

  • So this is the building block that you have to really know cold.

  • Now, if you want to translate the Hoeffding Inequality into words, what

  • we have been talking about is that we would like to make the

  • statement: mu equals nu.

  • That would be the ultimate.

  • I look at the in-sample frequency, that's the out-of-sample frequency.

  • That's the real frequency out there.

  • But that's not the case.

  • We actually are making the statement mu equals nu, but we're not

  • making it as an absolute statement--

  • we are making a PAC statement.

  • And that stands for: this statement is probably, approximately, correct.

  • Probably, because of the right-hand side.

  • It is small, so the probability of violation is small.

  • Approximately, because of the epsilon.

  • We are not saying that mu equals nu.

  • We are saying that they are close to each other.

  • And that theme will remain with us in learning.

  • So we put the glorified Hoeffding's Inequality at the top, and we spend

  • a viewgraph analyzing what it means.

  • In case you forgot what nu and mu are, I put the figure.

  • So mu is the frequency within the bin.

  • This is the unknown quantity that we want to tell.

  • And nu is the disappearing quantity which happens to be the frequency in

  • the sample you have.

  • So what about the Hoeffding Inequality?

  • Well, one attraction of this inequality is that it is valid for

  • every N, positive integer, and every epsilon which is greater than zero.

  • Pick any tolerance you want, and for any number of examples you

  • want, this is true.

  • It's not an asymptotic result.

  • It's a result that holds for every N and epsilon.

  • That's a very attractive proposition for something that has

  • an exponential in it.

  • Now, Hoeffding Inequality belongs to a large class of mathematical laws,

  • which are called the Laws of Large Numbers.

  • So this is one law of large numbers, one form of it, and

  • there are tons of them.

  • This happens to be one of the friendliest, because it's not

  • asymptotic, and happens to have an exponential in it.

  • Now, one observation here is that if you look at the left-hand side, we are

  • computing this probability.

  • This probability patently depends on mu.

  • mu appears explicitly in it, and also mu affects the probability

  • distribution of nu.

  • Nu is the sample frequency in the N marbles you picked.

  • That's a very simple binomial distribution.

  • You can find the probability that nu equals anything based on

  • the value of mu.

  • So the probability that this quantity, which depends on mu, exceeds epsilon--

  • the probability itself does depend on mu.

  • However, we are not interested in the exact probability.

  • We just want to bound it.

  • And in this case, we are bounding it uniformly.

  • As you see, the right-hand side does not have mu in it.

  • And that gives us a great tool, because now we don't use the quantity

  • that, as we already declared, is unknown.

  • mu is unknown.

  • It would be a vicious cycle if I go and say that it depends on mu,

  • but I don't know what mu is.

  • Now you know uniformly, regardless of the value of mu-- mu could be anything

  • between 0 and 1, and this will still be bounding the deviation of the

  • sample frequency from the real frequency.

  • That's a good advantage.

  • Now, the other point is that there is a trade-off that you can read off the

  • inequality.

  • What is the trade-off?

  • The trade-off is between N and epsilon.

  • In a typical situation, if we think of N as the number of examples that are

  • given to you-- the amount of data-- in this case, the number of marbles out

  • of the bin,

  • N is usually dictated.

  • Someone comes and gives you a certain resource of examples.

  • Epsilon is your taste in tolerance.

  • You are very tolerant. You pick epsilon equals 0.5.

  • That will be very easy to satisfy.

  • And if you are very stringent, you can pick epsilon smaller and smaller.

  • Now, because they get multiplied here, the smaller the epsilon is, the bigger

  • the N you need in order to compensate for it and come up with the same level

  • of probability bound.

  • And that makes a lot of sense.

  • If you have more examples, you are more sure that nu and mu will be close

  • together, even closer and closer and closer,

  • as you get larger N.

  • So this makes sense.
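
  • As a rough numerical reading of that trade-off (the particular values of N and epsilon are arbitrary, for illustration only):

```python
import math

def hoeffding_bound(N, epsilon):
    """Right-hand side of the Hoeffding Inequality: 2 exp(-2 eps^2 N)."""
    return 2 * math.exp(-2 * epsilon**2 * N)

for N, eps in [(100, 0.1), (1000, 0.1), (1000, 0.01), (100_000, 0.01)]:
    # A smaller epsilon needs a much larger N for a comparable guarantee.
    print(f"N={N:>6}, eps={eps:<5}: bound = {hoeffding_bound(N, eps):.3g}")
```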

  • Finally,

  • it's a subtle point, but it's worth saying.

  • We are making the statement that nu is approximately the same as mu.

  • And this implies that mu is approximately the same as nu.

  • What is this?

  • The logic here is a little bit subtle.

  • Obviously, the statement is a tautology, but I'm just making

  • a logical point, here.

  • When you run the experiment, you don't know what mu is.

  • mu is an unknown.

  • It's a constant.

  • The only random fellow in this entire operation is nu.

  • That is what the probability is with respect to.

  • You generate different samples, and you compute the probability.

  • This is the probabilistic thing.

  • This is a happy constant sitting there, albeit unknown.

  • Now, the way you are using the inequality is to infer mu, the bin frequency,

  • from nu, the sample frequency.

  • That is not the cause and effect that actually takes place.

  • The cause and effect is that mu affects nu, not the other way around.

  • But we are using it the other way around.

  • Lucky for us, the form of the probability is symmetric.

  • Therefore, instead of saying that nu tends to be close to mu, which will

  • be the accurate logical statement-- mu is there, and nu has a tendency to be

  • close to it.

  • Instead of that, we say: I already know nu, and now mu tends to

  • be close to nu.

  • That's the logic we are using.

  • Now, I think we understand what the bin situation is, and we know what the

  • mathematical condition that corresponds to it is.

  • What I'd like to do,

  • I'd like to connect that to the learning problem we have.

  • In the case of a bin, the unknown quantity that we want to decipher is

  • a number, mu.

  • Just unknown.

  • What is the frequency inside the bin.

  • In the learning situation that we had, the unknown quantity we would like to

  • decipher is a full-fledged function.

  • It has a domain, X, that could be a 10-dimensional Euclidean space.

  • Y could be anything.

  • It could be binary, like the perceptron.

  • It could be something else.

  • That's a huge amount of information.

  • The bin has only one number.

  • This one, if you want to specify it, that's a lot of specification.

  • So how am I going to be able to relate the learning problem to something that

  • simplistic?

  • The way we are going to do it is the following.

  • Think of the bin as your input space in the learning problem.

  • That's the correspondence.

  • So every marble here is a point x.

  • That is a credit card applicant.

  • So if you look closely at the gray thing, you will read: salary, years in

  • residence, and whatnot.

  • You can't see it here because it's too small!

  • Now the bin has all the points in the space. Therefore, this

  • is really the space.

  • That's the correspondence in our mind.

  • Now we would like to give colors to the marbles.

  • So here are the colors.

  • There are green marbles, and they correspond to something in the

  • learning problem.

  • What do they correspond to?

  • They correspond to your hypothesis getting it right.

  • So what does that mean?

  • There is a target function sitting there, right?

  • You have a hypothesis.

  • The hypothesis is a full function, like the target function is.

  • You can compare the hypothesis to the target function on every point.

  • And they either agree or disagree.

  • If they agree, please color the corresponding point

  • in the input space--

  • Color it green.

  • Now, I'm not saying that you know which ones are green and which ones

  • are not, because you don't know the target function overall.

  • I'm just telling you the mapping that takes an unknown target function into

  • an unknown mu.

  • So both of them are unknown, admittedly, but that's the

  • correspondence that maps it.

  • And now you go, and there are some red ones.

  • And, you guessed it.

  • You color the thing red if your hypothesis got the answer wrong.

  • So now I am collapsing the entire thing into just agreement and

  • disagreement between your hypothesis and the target function, and that's

  • how you get to color the bin.

  • Because of that, you have a mapping for every point, whether it's green or

  • red, according to this rule.

  • Now, this will add a component to the learning problem that we

  • did not have before.

  • There is a probability associated with the bin.

  • There is a probability of picking each marble,

  • independently, and all of that.

  • When we talked about the learning problem, there was no probability.

  • I will just give you a sample set, and that's what you work with.

  • So let's see what is the addition we need to do in order to adjust the

  • statement of the learning problem to accommodate the new ingredient.

  • And the new ingredient is important, because otherwise we cannot learn.

  • It's not like we have the luxury of doing without it.

  • So we go back to the learning diagram from last time.

  • Do you remember this one?

  • Let me remind you.

  • Here is your target function, and it's unknown.

  • And I promised you last time that it will remain unknown, and the promise

  • will be fulfilled.

  • We are not going to touch this box.

  • We're just going to add another box to accommodate the probability.

  • And the target function generates the training examples.

  • These are the only things that the learning algorithm sees.

  • It picks a hypothesis from the hypothesis set, and produces it as the

  • final hypothesis, which hopefully approximates f.

  • That's the game.

  • So what is the addition we are going to do?

  • In the bin analogy, this is the input space.

  • Now the input space has a probability.

  • So I need to apply this probability to the points from the input space that

  • are being generated.

  • I am going to introduce a probability distribution over the

  • input space.

  • Now the points in the input space-- let's say the d-dimensional

  • Euclidean space--

  • are not just generic points now.

  • There is a probability of picking one point versus the other.

  • And that is captured by the probability, which I'm going to call

  • capital P.

  • Now the interesting thing is that I'm making no assumptions about P. P can

  • be anything.

  • I just want a probability.

  • So invoke any probability you want, and I am ready with the machinery.

  • I am not going to restrict the probability distributions over X.

  • That's number one.

  • So this is not as bad as it looks.

  • Number two, I don't even need to know what P is.

  • Of course, the choice of P will affect the probability

  • of getting a green marble or a red marble, because now the probability of

  • picking different marbles changes, so it could change the value of mu.

  • But the good news with the Hoeffding is that I could bound the performance

  • independently of mu.

  • So I can get away with not only any P, but with a P that I don't know, and

  • I'll still be able to make the mathematical statement.

  • So this is a very benign addition to the problem.

  • And it will give us very high dividends, which is the

  • feasibility of learning.

  • So what do you do with the probability?

  • You use the probability to generate the points x_1 up to x_N. So now

  • x_1 up to x_N are assumed to be generated by that probability

  • independently.

  • That's the only assumption that is made.

  • If you make that assumption, we are in business.

  • But the good news is, as I mentioned before,

  • we did not compromise about the target function.

  • You don't need to make assumptions about the function you don't know and

  • you want to learn, which is good news.

  • And the addition is almost technical:

  • there is a probability somewhere, generating the points.

  • If I know that, then I can make a statement in probability.

  • Obviously, you can make that statement only to the extent that the assumption

  • is valid, and we can discuss that in later lectures when the

  • assumption is not valid.

  • So, OK.

  • Happy ending.

  • We are done, and we now have the correspondence.

  • Are we done?

  • Well, not quite.

  • Why are we not done?

  • Because the analogy I gave you requires a particular

  • hypothesis in mind.

  • I told you that the red and green marbles correspond to the agreement between h

  • and the target function.

  • So when you tell me what h is, you dictate the colors here.

  • All of these colors.

  • This is green not because it's inherently green, not because of

  • anything inherent about the target function.

  • It's because of the agreement between the target function and your

  • hypothesis, h.

  • That's fine, but what is the problem?

  • The problem is that I know that for this h, nu generalizes to mu.

  • You're probably saying, yeah, but h could be anything.

  • I don't see the problem yet.

  • Now here is the problem.

  • What we have actually discussed is not learning, it's verification.

  • The situation as I describe it--

  • you have a single bin and you have red and green marbles, and this and that,

  • corresponds to the following.

  • A bank comes to my office.

  • We would like a formula for credit approval.

  • And we have data.

  • So instead of actually taking the data, and searching hypotheses, and picking

  • one, like the perceptron learning algorithm, here is what I do that

  • corresponds to what I just described.

  • You guys want a linear formula?

  • OK.

  • I guess the salary should have a big weight.

  • Let's say 2.

  • The outstanding debt is negative, so that should be a weight minus 0.5.

  • And years in residence are important, but not that important.

  • So let's give them a 0.1.

  • And let's pick a threshold that is high, in order for

  • you not to lose money.

  • Let's pick a threshold of 0.5.

  • Sitting down, improvising an h.
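
  • As a formula, that improvised hypothesis would read (assuming the fields are suitably scaled; the variable names are just labels for the application entries):

$$ h(\mathbf{x}) = \mathrm{sign}\big( 2\cdot\text{salary} \,-\, 0.5\cdot\text{debt} \,+\, 0.1\cdot\text{years} \,-\, 0.5 \big) $$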

  • Now, after I fix the h, I ask you for the data and just verify whether the h

  • I picked is good or bad.

  • That I can do with the bin, because I'm going to look at the data.

  • If I miraculously agree with everything in your data, I can

  • definitely declare victory by Hoeffding.

  • But what are the chances that this will happen in the first place?

  • I have no control over whether I will be good on the data or not.

  • The whole idea of learning is that I'm searching the space to deliberately

  • find a hypothesis that works well on the data.

  • In this case, I just dictated a hypothesis.

  • And I was able to tell you for sure what happens out-of-sample.

  • But I have no control of what news I'm going to tell you.

  • You can come to my office.

  • I improvise this.

  • I go to the data.

  • And I tell you, I have a fantastic system.

  • It generalizes perfectly, and it does a terrible job.

  • That's what I have, because when I tested it, nu was terrible.

  • So that's not what we are looking for.

  • What we are looking for is to make it learning.

  • So how do we do that?

  • No guarantee that nu will be small.

  • And we need to choose the hypothesis from multiple h's.

  • That's the game.

  • And in that case, you are going to go for the sample, so to speak, generated

  • by every hypothesis, and then you pick the hypothesis that is most favorable,

  • that gives you the least error.

  • So now, that doesn't look like a difficult thing.

  • It worked with one bin.

  • Maybe I can have more than one bin, to accommodate the situation where I have

  • more than one hypothesis.

  • It looks plausible.

  • So let's do that.

  • We will just take multiple bins.

  • So here is the first bin.

  • Now you can see that this is a bad bin.

  • So that hypothesis is terrible.

  • And the sample reflects that, to some extent.

  • But we are going to have other bins, so let's call this something.

  • So this bin corresponds to a particular h.

  • And since we are going to have other hypotheses, we are going to call this

  • h_1 in preparation for the next guy.

  • The next guy comes in, and you have h_2.

  • And you have another mu_2.

  • This one looks like a good hypothesis, and it's also reflected in the sample.

  • And it's important to look at the correspondence.

  • If you look at the top red point here and the top green point here, this is

  • the same point in the input space.

  • It just was colored red here and colored green here.

  • Why did that happen?

  • Because the target function disagrees with this h, and the target function

  • happens to agree with this h.

  • That's what got this the color green.

  • And when you pick a sample, the sample also will have different colors,

  • because the colors depend on which hypothesis.

  • And these are different hypotheses.

  • That looks simple enough.

  • So let's continue.

  • And we can have M of them.

  • I am going to consider a finite number of hypotheses, just to make the math

  • easy for this lecture.

  • And we're going to go more sophisticated when we get into the

  • theory of generalization.

  • So now I have this.

  • This is good.

  • I have samples, and the samples here are different.

  • And I can do the learning, and the learning now, abstractly, is to scan

  • these samples looking for a good sample.

  • And when you find a good sample, you declare victory, because of Hoeffding,

  • and you say that it must be that the corresponding bin is good, and the

  • corresponding bin happens to be the hypothesis you chose.

  • So that is an abstraction of learning.

  • That was easy enough.

  • Now, because this is going to stay with us, I am now going to introduce

  • the notation that will survive with us for the entire discussion of learning.

  • So here is the notation.

  • We realize that both mu, which happens to be inside the bin,

  • and nu, which happens to be the sample frequency--

  • in this case, the sample frequency of error-- both of them depend on h.

  • So I'd like to give a notation that makes that explicit.

  • The first thing,

  • I am going to call mu and nu with a descriptive name.

  • So nu, which is the frequency in the sample you have, is in-sample.

  • That is a standard definition for what happens in the data that I give you.

  • If you perform well in-sample, it means that your error in the sample

  • that I give you is small.

  • And because it is called in-sample, we are going to denote it by E_in.

  • I think this is worth blowing up, because it's an important one.

  • This is our standard notation for the error that you have in-sample.

  • Now, we go and get the other one, which happens to be mu.

  • And that is called out-of-sample.

  • So if you are in this field, I guess what matters is the out-of-sample

  • performance.

  • That's the lesson.

  • Out-of-sample means something that you haven't seen.

  • And if you perform well out-of-sample, on something that you haven't seen, then

  • you must have really learned.

  • That's the standard for it, and the name for it is E_out.

  • With this in mind, we realize that we don't yet have the dependency on h

  • which we need.

  • So we are going to make the notation a little bit more elaborate, by calling

  • E_in and E_out--

  • calling them E_in of h, and E_out of h.

  • Why is that?

  • Well, the in-sample performance-- you are trying to see the error of

  • approximating the target function by your hypothesis.

  • That's what E_in is.

  • So obviously, it depends on your hypothesis.

  • So it's E_in of h.

  • Someone else picks another h, they will get another E_in of their h.

  • Similarly E_out, the corresponding one is E_out of h.

  • So now, what used to be nu is now E_in of h.

  • What used to be mu, inside the bin, is E_out of h.
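
  • Restating that correspondence as formulas, with x drawn from the input distribution and 1[.] denoting an indicator:

$$ E_{\text{in}}(h) = \frac{1}{N}\sum_{n=1}^{N} \mathbf{1}\big[ h(\mathbf{x}_n) \ne f(\mathbf{x}_n) \big], \qquad E_{\text{out}}(h) = \mathbb{P}\big[ h(\mathbf{x}) \ne f(\mathbf{x}) \big] $$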

  • Now, the Hoeffding Inequality, which we know all too well

  • by now, said that.

  • So all I'm going to do is just replace the notation.

  • And now it looks a little bit more crowded, but it's

  • exactly the same thing.

  • The probability that your in-sample performance deviates from your out-of-

  • sample performance by more than your prescribed tolerance is less than or

  • equal to a number that is hopefully small.
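
  • That is, in the new notation:

$$ \mathbb{P}\big[\,|E_{\text{in}}(h) - E_{\text{out}}(h)| > \epsilon\,\big] \;\le\; 2\,e^{-2\epsilon^2 N} $$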

  • And you can go back and forth.

  • There's nu and mu, or you can go here and you get the new notation.

  • So we're settled on the notation now.

  • Now, let's go for the multiple bins and use this notation.

  • These are the multiple bins as we left them.

  • We have the hypotheses h_1 up to h_M, and we have mu_1 up to mu_M.

  • And if you see bare subscripts 1, 2, up to M, again, that is a disappearing nu--

  • the symbol that the app doesn't like.

  • But thank God we switched notations, so that

  • something will appear.

  • Yeah!

  • So right now, that's what we have.

  • Every bin has an out-of-sample performance, and out-of-

  • sample is: Out. Of. Sample.

  • So this is a sample.

  • What's in it is in-sample.

  • What is not in it is out-of-sample.

  • And the out-of-sample depends on h_1 here, h_2 here, and h_M here.

  • And obviously, these quantities will be different according to the sample, and

  • these quantities will be different according to the ultimate performance

  • of your hypothesis.

  • So we solved the problem.

  • It's not verification. It's not a single bin.

  • It's real learning.

  • I'm going to scan these.

  • So that's pretty good.

  • Are we done already?

  • Not so fast.

  • [LAUGHING]

  • What's wrong?

  • Let me tell you what's wrong.

  • The Hoeffding Inequality, that we have happily studied and declared important

  • and all of that, doesn't apply to multiple bins.

  • What?

  • You told us mathematics, and you go read the proof, and all of that.

  • Are you just pulling tricks on us?

  • What is the deal here?

  • And you even can complain.

  • We sat for 40 minutes now going from a single bin, mapping it to

  • the learning diagram, mapping it to multiple bins, and now you tell us

  • that the main tool we developed doesn't apply.

  • Why doesn't it apply, and what can we do about it?

  • Let me start by saying why it doesn't apply, and then we can go for what we

  • can do about it.

  • Now, everybody has a coin.

  • I hope the online audience have a coin ready.

  • I'd like to ask you to take the coin out and flip it,

  • let's say, five times.

  • And record what happens.

  • And when you at home flip the coin five times, please,

  • if you happen to get all five heads in your experiment, then text us that you

  • got all five heads.

  • If you get anything else, don't bother texting us.

  • We just want to know if someone will get five heads.

  • Everybody is done flipping the coin.

  • Because you have been so generous and cooperative, you can keep the coin!

  • [LAUGHTER]

  • Now, did anybody get five heads?

  • All five heads?

  • Congratulations, sir.

  • You have a biased coin, right?

  • We just argued that in-sample corresponds to out-of-sample, and we

  • have this Hoeffding thing, and therefore if you get five heads, it

  • must be that this coin gives you heads.

  • We know better.

  • So in the online audience, what happened?

  • MODERATOR: Yeah, in the online audience, there's also five heads.

  • PROFESSOR: There are lots of biased coins out there.

  • Are they really biased coins?

  • No.

  • What is the deal here?

  • Let's look at it.

  • Here, with the audience here, I didn't want to push my luck with 10 flips,

  • because it's a live broadcast.

  • So I said five will work.

  • For the analytical example, let's take 10 flips.

  • Let's say you have a fair coin, which every coin is.

  • You have a fair coin.

  • And you toss it 10 times.

  • What is the probability that you will get all 10 heads?

  • Pretty easy.

  • One half, times one half, 10 times, and that will give

  • you about 1 in 1000.

  • No chance that you will get it--

  • not no chance, but very little chance.

  • Now, the second question is the one we actually ran the experiment for.

  • If you toss 1000 fair coins-- it wasn't 1000 here; it's however many are here.

  • Maybe out there it's 1000.

  • What is the probability that some coin will give you all 10 heads?

  • Not difficult at all to compute.

  • And when you get the answer, the answer will be that it's actually more

  • likely than not.
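
  • The arithmetic behind that claim, as a quick check (10 flips per coin and 1000 coins, as in the example):

```python
p_one = 0.5 ** 10                   # one fair coin: all 10 heads, about 1 in 1024
p_some = 1 - (1 - p_one) ** 1000    # at least one of 1000 coins gets all 10 heads
print(f"one coin: {p_one:.6f}, some coin out of 1000: {p_some:.3f}")
# prints ~0.000977 and ~0.62: more likely than not that some coin shows all heads
```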

  • So now it means that the 10 heads in this case are no indication at all of

  • the real probability.

  • That is the game we are playing.

  • Can I look at the sample and infer something about the real probability?

  • No.

  • In this case, you will get 10 heads, and the coin is fair.

  • Why did this happen?

  • This happened because you tried too hard.

  • Eventually what will happen is--

  • Hoeffding applies to any one of them.

  • But there is a probability, let's say half a percent, that you

  • will be off here.

  • Another half a percent that you will be off here.

  • If you do it often enough, and you are unlucky enough that the half percents

  • are disjoint, you will end up with an extremely high probability that

  • something bad will happen, somewhere.

  • That's the key.

  • So let's translate this into the learning situation.

  • Here are your coins.

  • And how do they correspond to the bins?

  • Well, it's a binary experiment, whether you are picking a red marble

  • or a green marble, or you are flipping a coin getting heads or tails.

  • It's a binary situation.

  • So there's a direct correspondence.

  • Just take the probability of heads to be mu, corresponding to the probability of

  • a red marble.

  • So because the coins are fair,

  • actually all the bins in this case are half red, half green.

  • That's really bad news for a hypothesis.

  • The hypothesis is completely random.

  • Half the time it agrees with the target function.

  • Half the time it disagrees.

  • No information at all.

  • Now you apply the learning paradigm we mentioned, and you say: let me

  • generate a sample from the first hypothesis.

  • I get this, I look at it, and I don't like that.

  • It has some reds.

  • I want really a clean hypothesis that performs perfectly--

  • all green.

  • You move on.

  • And, OK.

  • This one--

  • even, I don't know.

  • This is even worse.

  • You go on and on and on.

  • And eventually, lo and behold, I have all greens.

  • Bingo.

  • I have the perfect hypothesis.

  • I am going to report this to my customer, and if my customer is in

  • financial forecasting, we are going to beat the stock market and

  • make a lot of money.

  • And you start thinking about the car you are going to buy, and all of that.

  • Well, is it bingo?

  • No, it isn't.

  • And that is the problem.

  • So now, we have to find something that makes us deal with

  • multiple bins properly.

  • Hoeffding Inequality-- if you have one experiment, it has a guarantee.

  • The guarantee gets terribly diluted as you go, and we want to know exactly

  • how the dilution goes.

  • So here is a simple solution.

  • This is a mathematical slide. I'll do it step-by-step.

  • There is absolutely nothing mysterious about it.

  • This is the quantity we've been talking about.

  • This is the probability of a bad event.

  • But in this case, you realize that I'm putting g.

  • Remember, g was our final hypothesis.

  • So this corresponds to a process where you had a bunch of h's, and you picked

  • one according to a criterion, that happens to be an in-sample criterion,

  • minimizing the error there, and then you report the g as the

  • one that you chose.

  • And you would like to make a statement that the probability for the g you

  • chose-- the in-sample error-- happens to be close to the out-of-sample error.

  • So you'd like the probability of the deviation being bigger than your

  • tolerance to be, again, small.

  • All we need to do is find a Hoeffding counterpart to this, because

  • now this fellow is loaded.

  • It's not just a fixed hypothesis and a fixed bin.

  • It actually corresponds to a large number of bins, and I am visiting the

  • random samples in order to pick one.

  • So clearly the assumptions of Hoeffding don't apply-- they correspond

  • to a single bin.

  • This probability is less than or equal to the

  • probability of the following.

  • I have M hypotheses--

  • capital M hypotheses.

  • h_1, h_2, h_3, h_M.

  • That's my entire learning model.

  • That's the hypothesis set that I have, finite as I said I would assume.

  • If you ask: what is the probability that the hypothesis you

  • pick is bad? Well, this will be less than or equal to the probability that the

  • first hypothesis is bad, or the second hypothesis is bad, or, or, or the last

  • hypothesis is bad.

  • That is obvious.

  • g is one of them.

  • If it's bad, one of them is bad.

  • So less than or equal to that.

  • This is called the union bound in probability.

  • It's a very loose bound, in general, because it doesn't

  • consider the overlap.

  • Remember when I told you that the half a percent here, half a percent here,

  • half a percent here--

  • if you are very unlucky and these are non-overlapping, they add up.

  • The non-overlapping is the worst-case assumption, and it is the assumption

  • used by the union bound.

  • So you get this.

  • And the good news about this is that I have a handle on each one of the terms.

  • The union bound is coming up.

  • So I put the OR's.

  • And then I use the union bound to say that this is less than or equal to, and simply sum

  • the individual probabilities.

  • So the half a percent plus half a percent plus half a percent--

  • this will be an upper bound on all of them.

  • The probability that one of them goes wrong, the probability that someone

  • gets all heads, and I add the probability for all of you, and that

  • makes it a respectable probability.

  • So this event here is implied.

  • Therefore, I have the implication because of the OR, and this one

  • because of the union bound, where I have the pessimistic assumption that I

  • just need to add the probabilities.
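
  • Spelled out, those two steps read:

$$ \mathbb{P}\big[\,|E_{\text{in}}(g) - E_{\text{out}}(g)| > \epsilon\,\big] \le \mathbb{P}\Big[ \bigcup_{m=1}^{M} \big\{ |E_{\text{in}}(h_m) - E_{\text{out}}(h_m)| > \epsilon \big\} \Big] \le \sum_{m=1}^{M} \mathbb{P}\big[\,|E_{\text{in}}(h_m) - E_{\text{out}}(h_m)| > \epsilon\,\big] $$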

  • Now, all of this-- again, we make simplistic assumptions, which are

  • really not simplistic as in restricting us, but rather the opposite.

  • We just don't want to make any assumptions that restrict the

  • applicability of our result.

  • So we took the worst case.

  • It cannot get worse than that.

  • If you look at this, now I have good news for you.

  • Because each term here is a fixed hypothesis.

  • I didn't choose anything.

  • Every one of them has a hypothesis that was declared ahead of time.

  • Every one of them is a bin.

  • So if I look at a term by itself, Hoeffding applies to this, exactly the

  • same way it applied before.

  • So this is a mathematical statement now.

  • I'm not looking at the bigger experiment.

  • I reduced the bigger experiment to a bunch of quantities.

  • Each of them corresponds to a simple experiment that we already solved.

  • So I can substitute for each of these by the bound that the

  • Hoeffding gives me.

  • So what is the bound that the Hoeffding gives me?

  • That's the one.

  • For every one of them, each of these guys was less than or

  • equal to this quantity.

  • One by one.

  • All of them are obviously the same.

  • So each of them is smaller than this quantity.

  • Each of them is smaller than this quantity.

  • Now I can be confident that the probability that I'm interested in,

  • which is the probability that the in-sample error

  • deviates from the out-of-sample error by more

  • than my tolerance-- the bad event.

  • Under the genuine learning scenario-- you generate marbles from every bin,

  • and you look deliberately for a sample that happens to be all green or as

  • green as possible, and you pick this one.

  • And you want an assurance that whatever that might be, the

  • corresponding bin will genuinely be good out-of-sample.

  • That is what is captured by this probability.

  • That is still bounded by something, which also has that exponential in it,

  • which is good.

  • But it has an added factor that will be a very bothersome factor, which is:

  • I have M of them.
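
  • Putting it together, the bound with the added factor reads:

$$ \mathbb{P}\big[\,|E_{\text{in}}(g) - E_{\text{out}}(g)| > \epsilon\,\big] \;\le\; 2M\,e^{-2\epsilon^2 N} $$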

  • Now, this is the bad event.

  • I'd like the probability to be small.

  • I don't like to magnify the right-hand side, because that is the probability

  • of something bad happening.

  • Now, with M, you realize that

  • if you use 10 hypotheses, this bound is probably still meaningful.

  • If you use a million hypotheses, we probably are already in trouble.

  • There is no guarantee, because now the million gets multiplied by what used

  • to be a respectable probability, which is 1 in 100,000, and now you can make

  • the statement that the probability that something bad happens

  • is less than 10.

  • [LAUGHING]

  • Yeah, thank you very much.

  • We have to take a graduate course to learn that!

  • Now you see what the problem is.

  • And the problem is extremely intuitive.

  • In the Q&A session after the last lecture, we arrived through the

  • discussion at the assertion that if you have a more sophisticated model, the

  • chances are you will memorize in-sample, and you are not going to

  • really generalize well out-of-sample, because you have so many

  • parameters to work with.

  • There are so many ways to look at that intuitively, and this is one of them.

  • If you have a very sophisticated model-- M is huge, let alone infinite.

  • That's later to come.

  • That's what the theory of generalization is about.

  • But if you pick a very sophisticated model with a large M, you lose the

  • link between the in-sample and the out-of-sample.

  • So you look at here.

  • [LAUGHING], I didn't mean it this way, but let me go back just to show

  • you what it is.

  • At least you know it's over, so that's good.

  • So this fellow is supposed to track this fellow.

  • The in-sample is supposed to track the out-of-sample.

  • The more sophisticated the model you use, the more loosely the in-sample will

  • track the out-of-sample.

  • Because the probability of them deviating becomes bigger and bigger

  • and bigger.

  • And that is exactly the intuition we have.

  • Now, surprise.

  • The next one is for the Q&A. We will take a short break, and then we will

  • go to the questions and answers.

  • We are now in the Q&A session.

  • And if anybody wants to ask a question, they can go to the

  • microphone and ask, and we can start with the online audience questions, if

  • there are any.

  • MODERATOR: The first question is

  • what happens when the Hoeffding Inequality

  • gives you something trivial, like less than 2?

  • PROFESSOR: Well, it means that either the resources of the examples

  • you have, the amount of data you have, is not sufficient to guarantee any

  • generalization, or--

  • which is somewhat equivalent--

  • that your tolerance is too stringent.

  • The situation is not really mysterious.

  • Let's say that you'd like to take a poll for the president.

  • And let's say that you ask five people at random.

  • How can you interpret the result?

  • Nothing.

  • You need a certain amount of respondents in order for the

  • right-hand side to start becoming interesting.

  • Other than that, it's completely trivial.

  • It's very likely that what you have seen in-sample doesn't correspond to

  • anything out-of-sample.
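
  • (For instance, with N = 5 and a tolerance of epsilon = 0.1-- illustrative numbers-- the bound is 2e^(-2 x 0.01 x 5) = 2e^(-0.1), about 1.8, which says nothing, since any probability is at most 1.)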

  • MODERATOR: So in the case of the perceptron--

  • the question is would each set of w's be considered a new m?

  • PROFESSOR: For the perceptron and, as

  • a matter of fact, every learning model of interest

  • that we're going to encounter, the number of hypotheses, M,

  • happens to be infinite.

  • We were just talking about the right-hand side not being meaningful

  • because it's bigger than 1. If you take an infinite hypothesis set and

  • verbatim apply what I said, then you find that the probability is actually

  • less than infinity.

  • That's very important.

  • However, this is our first step.

  • There will be another step, where we deal with infinite hypothesis sets.

  • And we are going to be able to describe them with an abstract quantity

  • that happens to be finite, and that abstract quantity will be the one we

  • are going to use in the counterpart for the Hoeffding Inequality.

  • That's why there is mathematics that needs to be done.

  • Obviously, the perceptron has an infinite number of hypotheses because

  • you have real space, and here is your hypothesis, and you can perturb this

  • continuously as you want.

  • Even just by doing this, you already have an infinite number of hypotheses

  • without even exploring further.

  • MODERATOR: OK, and this is a popular one.

  • Could you go over again, on slide 6, the implication of nu equals mu and

  • vice versa?

  • PROFESSOR: Six.

  • It's a subtle point, and it's common between machine learning and

  • statistics.

  • What do you do in statistics?

  • What is the cause and effect for a probability and a sample?

  • The probability results in a sample.

  • So if I know the probability, I can tell you exactly what is the

  • likelihood that you'll get one sample or another or another.

  • Now, what you do in statistics is the reverse of that.

  • You already have the sample, and you are trying to infer which probability

  • gave rise to it.

  • So you are using the effect to decide the cause rather than

  • the other way around.

  • So it's the same situation here.

  • The bin is the cause.

  • The frequency in the sample is the effect.

  • I can definitely tell you what the distribution is like in the sample,

  • based on the bin.

  • The utility, in terms of learning, is that I look at the sample

  • and infer the bin.

  • So I infer the cause based on the effect.

  • There's absolutely nothing terrible about that.

  • I just wanted to make the point clear, that when we write the Hoeffding

  • Inequality, which you can see here, we are talking about this event.

  • You should always remember that nu is the random quantity that varies

  • with the sample and gives rise to the probability, and mu is a constant.

  • When we use it to predict that the out-of-sample will be the same as the in-

  • sample, we are really taking nu as fixed, because this is the in-

  • sample we've got.

  • And then we are trying to infer which mu gave rise to it.

  • And I'm just saying that, in this case, since the statement is of the

  • form that the difference between them, which is symmetric, is greater than

  • epsilon, then if you look at this as saying mu is there and I know that nu

  • will be approximately the same, you can also flip that.

  • And you can say, nu is here, and I know that the mu that gave rise to it must

  • be approximately the same.

  • That's the whole idea.

  • It's a logical thing rather than a mathematical thing.
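
For reference, the plain Hoeffding Inequality being discussed is:

```latex
\mathbb{P}\left[\,\left|\nu - \mu\right| > \epsilon\,\right] \;\le\; 2 e^{-2\epsilon^2 N}
```

Since |nu - mu| = |mu - nu|, the event is symmetric: a statement that nu tends to land near the fixed mu can equally be read as the mu that generated your sample being near the nu you observed.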

  • MODERATOR: OK.

  • Another conceptual question that is arising is that a more complicated

  • model corresponds to a larger number of h's.

  • And some people are asking--

  • they thought each h was a model.

  • PROFESSOR: OK.

  • Each h is a hypothesis.

  • A particular function. You are going to pick one of them, which is going to

  • be called g, and this is the g that you're going to report as your best

  • guess as an approximation of f.

  • The model is the set of hypotheses that you're allowed to visit in order to

  • choose one.

  • So that's the hypothesis set, which is H.

  • But there is an interesting point.

  • I'm using the number of hypotheses as a measure for the complexity in the

  • intuitive argument that I gave you.

  • It's not clear at all that the sheer number corresponds to the complexity.

  • It's not clear that anything that has to do with the size, really, is the

  • complexity.

  • Maybe the complexity has to do with the structure of individual

  • hypotheses.

  • And that's a very interesting point.

  • And that will be discussed at some point-- the complexity of individual

  • hypotheses versus the complexity of the model that captures all the

  • hypotheses.

  • This will be a topic that we will discuss much later in the course.

  • MODERATOR: Some people are getting ahead.

  • So how do you pick g?

  • PROFESSOR: OK.

  • We have one way of picking g-- that already was established last time--

  • which is the perceptron learning algorithm.

  • So your hypothesis set is H.

  • Script H.

  • It has a bunch of h's, which are the different lines in the plane.

  • And you pick g by applying the PLA, the perceptron learning algorithm,

  • playing around with this boundary, according to the update rule, until it

  • classifies the inputs correctly, assuming they are linearly separable,

  • and the one you end up with is what is declared g.

  • So g is just a matter of notation, a name for whichever one we settle on,

  • the final hypothesis.

  • How you pick g depends on what algorithm you use, and what

  • hypothesis set you use.

  • So it depends on the learning model, and obviously on the data.
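
For reference, here is a minimal sketch of the PLA in code, assuming linearly separable data; the variable names are illustrative, not from the course materials:

```python
import numpy as np

def pla(X, y, max_iters=10_000):
    """Perceptron learning algorithm.

    X: (N, d+1) array of inputs, each with a leading 1 for the bias coordinate.
    y: (N,) array of labels in {-1, +1}.
    Returns the weight vector of the final hypothesis g.
    """
    w = np.zeros(X.shape[1])
    for _ in range(max_iters):
        misclassified = np.flatnonzero(np.sign(X @ w) != y)
        if misclassified.size == 0:   # all points classified correctly: done
            break
        i = np.random.choice(misclassified)
        w = w + y[i] * X[i]           # the update rule: nudge the boundary
    return w
```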

  • MODERATOR: OK.

  • This is a popular question.

  • So it says: how would you extend the equation to support an output that

  • is a valid range of responses and not a binary response?

  • PROFESSOR: It can be done.

  • One of the things that I mentioned here is that this fellow, the

  • probability here, is uniform.

  • Now, let's say that you are not talking about a binary experiment.

  • Instead of taking the frequency of error versus the probability of error,

  • you take the expected value of something versus the

  • sample average of it.

  • And they will be close to each other, and some obviously technical

  • modification is needed here.

  • And basically, the family of laws of large numbers, of which this is one member,

  • has a bunch of members that actually have to do with expected value and

  • sample average, rather than just the specific case of probability and

  • sample average.

  • If you take your function as being 1 or 0, then the sample average gives

  • you the frequency, and the expected value gives you

  • the probability.

  • So it's not a different animal.

  • It's just a special case that is easier to handle.

  • And in the other case, one of the things that matters is the variance of

  • your variable.

  • So it will affect the bounds.

  • Here, I can choose epsilon freely, because the variance of this variable

  • is very limited.

  • Let's say that the probability is mu, so the variance is mu

  • times 1 minus mu.

  • It goes from a certain value to a certain value.

  • So it can be absorbed.

  • It's bounded above and below.

  • And this is the reason why the right-hand side here can

  • be stated uniformly.

  • If you have something that has variance that can be huge or small,

  • then that will play a role in your choice of epsilon, such that

  • this will be valid.

  • So the short answer is: it can be done.

  • There is a technical modification, and the main aspect of the technical

  • modification, that needs to be taken into consideration, is the variance of

  • the variable I'm talking about.
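
One standard form of that modification, for what it's worth (this is a general fact about Hoeffding's inequality, not a formula shown in the lecture): for independent samples of a variable bounded in the interval [a, b], the range of the variable enters the exponent:

```latex
\mathbb{P}\left[\,\left|\bar{X} - \mathbb{E}[X]\right| > \epsilon\,\right]
\;\le\; 2 \exp\!\left(\frac{-2\epsilon^2 N}{(b-a)^2}\right)
```

In the binary case, a = 0 and b = 1, and this reduces to the form used in the lecture; the variance mu(1 - mu) is at most 1/4, which is why epsilon can be chosen without worrying about it.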

  • MODERATOR: OK.

  • There's also a common confusion.

  • Why are there multiple bins?

  • PROFESSOR: OK.

  • The bin was only our conceptual tool to argue that learning is

  • feasible in a probabilistic sense.

  • When we used a single bin, we had a correspondence with a hypothesis, and

  • it looked like we actually captured the essence of learning, until we

  • looked closer and we realized that, if you restrict yourself to one bin and

  • apply the Hoeffding Inequality directly to it, what you are really

  • working with--

  • if you want to put it in terms of learning--

  • is that my hypothesis set has only one hypothesis.

  • And that corresponds to the bin.

  • So now I am picking it--

  • which is my only choice.

  • I don't have anything else.

  • And all I'm doing now is verifying that its in-sample performance will

  • correspond to the out-of-sample performance, and that is guaranteed by

  • the plain-vanilla Hoeffding.

  • Now, if you have actual learning, then you have more than one

  • hypothesis.

  • And we realize that the bin changes with the hypothesis, because whether

  • a marble is red or green depends on whether the hypothesis agrees or

  • disagrees with your target function.

  • Different hypotheses will lead to different colors.

  • Therefore, you need multiple bins to represent multiple hypotheses, which

  • is the only situation that admits learning as we know it--

  • that I'm going to explore the hypotheses, based on their performance in-sample,

  • and pick the one that performs best, perhaps, in-sample, and hope that it

  • will generalize well out-of-sample.
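
A quick simulation (my own illustration, not from the lecture) shows why multiple bins matter: even when every bin has the same mu, picking the hypothesis that looks best in-sample can produce a nu that is far from mu.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, N, M = 0.5, 10, 1000   # every bin has mu = 0.5; M hypotheses, N marbles each

# One bin: the sample frequency nu is usually close to mu.
nu_single = rng.binomial(N, mu) / N

# M bins: pick the one that looks best in-sample (smallest nu).
nu_best = (rng.binomial(N, mu, size=M) / N).min()

print(f"single bin:   nu = {nu_single:.2f}")   # typically near 0.5
print(f"best of {M}: nu = {nu_best:.2f}")      # very often 0.0 -- far from mu
```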

  • MODERATOR: OK.

  • Another confusion.

  • Can you resolve the relationship between the probability and the big H?

  • So I'm not clear exactly what--

  • PROFESSOR: We applied the--

  • there are a bunch of components in the learning

  • situation, so let me get the--

  • It's a big diagram, and it has lots of components.

  • So one big space or set is X, and another one is H. So if you

  • look here.

  • This is hypothesis set H. It's a set.

  • OK, fine.

  • And also, if you look here, the target function is defined from X to Y, and

  • in this case, X is also a set.

  • The only invocation of probability that we needed to do, in order to get

  • the benefit of the probabilistic analysis in learning, was to put

  • a probability distribution on X.

  • H, which is down there, is left as a fixed hypothesis set.

  • There is no question of a probability on it.

  • When we talk about the Bayesian approach, in the last lecture in

  • fact, there will be a question of putting a probability distribution

  • here in order to make the whole situation probabilistic.

  • But that is not the approach that is followed for the entire course, until

  • we discuss that specific approach at the end.

  • Question.

  • STUDENT: What do we do when there are many possible hypotheses which

  • will satisfy my criteria?

  • Like, in perceptron, for example.

  • I could have several hyperplanes which could be separating the set.

  • So how do I pick the best--

  • PROFESSOR: Correct.

  • Usually, with a pre-specified algorithm,

  • you'll end up with something.

  • So the algorithm will choose it for you.

  • But your remark now is that,

  • given that there are many solutions that happen to have zero in-sample

  • error, there is really no distinction between them in terms of the out-of-

  • sample performance.

  • I'm using the same hypothesis set, so M is the same.

  • And the in-sample error is the same.

  • So my prediction for the out-of-sample error would be the same, as there's no

  • distinction between them.

  • The good news is that the learning algorithm will solve this for you, because

  • it will give you one specific hypothesis, the one it ended up with.

  • But even within the ones that achieve zero error, there is a method,

  • that we'll talk about later on when we talk about support vector machines,

  • that prefers one particular solution as having a better chance of

  • generalization.

  • That's not clear at all given what I said so far, but I'm just telling you,

  • as an appetizer, that there's something to be done in that regard.

  • MODERATOR: OK.

  • A question: does the inequality hold for any g,

  • even if g is not optimal?

  • PROFESSOR: What about the g?

  • MODERATOR: Does it hold for any g, no matter how you pick g?

  • PROFESSOR: Yeah.

  • So the whole idea--

  • once you write the symbol g, you already are talking about any

  • hypothesis.

  • Because by definition, g is the final hypothesis, and your algorithm is

  • allowed to pick any h from the hypothesis set and call it g.

  • Therefore, when I say g, don't look at a fixed hypothesis.

  • Look at the entire learning process that went through H, the

  • set of hypotheses, according to the data and according to the learning

  • rule, and ended up with one that is declared the right

  • one, and now we call this g.

  • So the answer is, patently, that g can be any hypothesis.

  • Patently yes, just by the notation that I'm using.

  • MODERATOR: Also, some confusion.

  • With the perceptron algorithm or any linear algorithm--

  • there's a confusion that, at each step, there's a hypothesis, but--

  • PROFESSOR: Correct.

  • But these are hidden processes for us.

  • As far as the analysis I mentioned is concerned, you get the data,

  • the algorithm does something magic, and ends up with a final hypothesis.

  • In the course of doing that, it will obviously be visiting lots of

  • hypotheses.

  • So having just the samples sitting there, and eyeballing

  • them and picking the one that happens to be green, is an abstraction.

  • In reality, these guys happen in a space, and you are moving from one

  • hypothesis to another by moving some parameters.

  • And in the course of doing that, including in the perceptron learning

  • algorithm, you are moving from one hypothesis to another.

  • But I'm not accounting for that, because I haven't found my final

  • hypothesis yet.

  • When you find the final hypothesis, you call it g.

  • On the other hand, because I use the union bound-- the worst-case

  • scenario-- the generalization bound applies to every single hypothesis,

  • whether you visited it or not.

  • Because what I did to get the bound on the deviation between in-sample and out-of-

  • sample is to require that all the hypotheses simultaneously behave well, from

  • in-sample to out-of-sample, according to your epsilon criterion.

  • And that obviously guarantees that whichever one you end up

  • with will be fine.

  • But obviously, it could be overkill.

  • And among the positive side effects of that is that even the

  • intermediate hypotheses have good generalization--

  • not that we look at them or use that, but just to answer the question.

  • MODERATOR: A question about the punchline.

  • They say that they don't understand exactly how the Hoeffding Inequality

  • shows that learning is feasible.

  • PROFESSOR: OK.

  • Hoeffding shows that verification is feasible.

  • The presidential poll makes sense.

  • That, if you have a sample and you have one question to ask, and you see

  • how the question is answered in the sample, then there is a reason to

  • believe that the answer in the general population, or in the big bin, will be

  • close to the answer you got in-sample.

  • So that's the verification.

  • In order to move from verification to learning, you need to be able to make

  • that statement, simultaneously on a number of these guys, and that's why

  • you had the modified Hoeffding Inequality at the end,

  • which is this one

  • that has the red M in it.

  • This is no longer the plain-vanilla Hoeffding Inequality.

  • We'll still call it Hoeffding.

  • But it basically deals with a situation where you have M of these

  • guys simultaneously, and you want to guarantee that all of them are

  • behaving well.

  • Under those conditions, this is the guarantee that the bound can

  • give, and the bound, obviously, is looser than it used to be.

  • So the probability that a bad thing happens when you have many

  • possibilities is bigger than the probability that a bad thing happens when

  • you have only one.

  • And this is the case where you add them up as if they happened disjointly, as I

  • mentioned before.
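
That "adding up as if disjoint" is the union bound. Schematically, writing B_m for the bad event that hypothesis h_m deviates by more than epsilon:

```latex
\mathbb{P}\left[B_1 \;\text{or}\; B_2 \;\text{or}\; \cdots \;\text{or}\; B_M\right]
\;\le\; \sum_{m=1}^{M} \mathbb{P}\left[B_m\right]
\;\le\; 2M e^{-2\epsilon^2 N}
```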

  • MODERATOR: Can it be said that the bin corresponds to the entire

  • population in a--

  • PROFESSOR: The bin corresponds to the entire

  • population before coloring.

  • So remember the gray bin--

  • I have it somewhere.

  • We had a viewgraph where the bin had gray marbles.

  • So this is my way of saying this is a generic input, and we

  • call it X.

  • And this is indeed the input space in this case, or the general population.

  • Now, we start coloring it once you give me a hypothesis.

  • So now there's more in the process than just the input space.

  • But indeed, the bin can correspond to the general population, and the sample

  • will correspond to the people you polled over the phone, in the case of

  • the presidential thing.

  • MODERATOR: Is there a relation between the Hoeffding Inequality and the

  • p-values in statistics?

  • PROFESSOR: Yes.

  • The area where we try to say that, if I have a sample and I get

  • an estimate from that sample, the estimate is reliable--

  • that the estimate is close to the out-of-sample quantity, with a bounded

  • probability of deviation-- is a huge body of work.

  • And the p-value in statistics is one approach.

  • And there are other laws of large numbers that come with it.

  • I don't want to venture too much into that.

  • I basically picked from that jungle of mathematics the single most useful

  • formula that will get me home when I talk about the theory of

  • generalization.

  • And I want to focus on it.

  • I want to understand it-- this specific formula-- perfectly, so when we

  • keep modifying it until we get to the VC dimension, things are clear.

  • And, obviously, if you get curious about the law of large numbers, and

  • different manifestations of in-sample being close to out-of-sample and

  • probabilities of error, that is a very fertile ground, and a very useful

  • ground to study.

  • But it is not a core subject of the course.

  • The course is only borrowing one piece as a utility

  • to get what it wants.

  • So that ends the questions here?

  • Let's call it a day, and we will see you next week.

