Optic Flow Solutions - Computerphile

  • We thought about how you can talk about optic flow with little changes in space and little changes in time; that means derivatives.

  • And so things like I sub x, the image derivatives, which we can calculate across the image.

  • So there's something called the optic flow equation, which basically combines these derivatives in the image, these gradients.

  • Also in that equation are these two things that we want to get out: this u and this v. These are our optic flow vector for each pixel.

  • Which way are things going?

  • In order to do that, though, we have to make some more assumptions, because there are two unknowns and we have one equation.
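
For reference, the constraint being described is usually written like this, with one equation per pixel but two unknowns:

```latex
% Optic flow (brightness constancy) constraint at each pixel:
% I_x, I_y are the spatial image gradients, I_t the temporal gradient,
% and (u, v) is the unknown flow vector.
I_x\,u + I_y\,v + I_t = 0
```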

  • So this is where people come up with a load of different ideas for how to frame the problem so that you can get out those u and v values.

  • So one method that solves optic flow gives you these little things called quiver plots, where for each pixel you've got this u and v vector, so you can plot these across the whole image.

  • If you look in the video that is onscreen now, you can see these plotted, showing things moving around. One of the methods for calculating this stuff originated in the '80s.
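
As an illustration (not from the video), a minimal matplotlib sketch of such a quiver plot, using a made-up rotating field in place of real per-pixel u and v arrays:

```python
import numpy as np
import matplotlib.pyplot as plt

# Fake flow field standing in for real optic flow estimates.
h, w = 32, 32
y, x = np.mgrid[0:h, 0:w]
u = -(y - h / 2)  # placeholder u component
v = x - w / 2     # placeholder v component

step = 4  # plot every 4th vector so the arrows stay readable
plt.quiver(x[::step, ::step], y[::step, ::step],
           u[::step, ::step], v[::step, ::step])
plt.gca().invert_yaxis()  # match image coordinates (origin at top-left)
plt.show()
```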

  • So I think it was 1981, by a couple of people called Horn and Schunck.

  • So if you look up optic flow, this will be sort of one of the first techniques that's mentioned. Again, bear in mind that in the eighties it was a challenge just to get any kind of video into a computer at all.

  • Yeah.

  • You know, you're talking about working with TV standards like PAL or NTSC, rather than things like VGA, which came along sort of nearer the nineties.

  • So these are gonna be originally quite low-resolution images, maybe 320 by 200 or something like that.

  • Whereas if you're talking about 4K images nowadays, yeah, you've got a lot more calculations to do.

  • So the assumption that Horn and Schunck make, in order to get out these u and v components of optic flow, is that they look in the local neighborhood, as well as just considering an individual pixel.

  • So on its own we haven't got enough information to figure out what the flow is.

  • What they do is they say, actually, let's look at the neighboring pixels here, and we're going to basically put a constraint into our solver that says our u and v here should be quite similar to an average u and an average v in the local neighborhood.

  • What we're going to assume is that this motion isn't going to radically change between pixels.

  • If we're on a surface, the motion change is gonna be small.

  • Unless you're on an edge, of course, which would break another constraint there.

  • But most of the time, you know, if this table's moving around, all the pixels are gonna be moving roughly in the same direction.

  • So we have this smoothness constraint that we build in, and Horn-Schunck is a global approach.

  • So for every single pixel in the image, it will try and optimize for working out u and v at the local pixel, comparing that to the average u and v in the local area.

  • So it's this iterative scheme, so it's pretty slow, where it goes through and tries to kind of globally find the best u and v.
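
A minimal sketch of the classic Horn-Schunck iteration, assuming two grayscale frames `im1` and `im2` as NumPy arrays; `alpha` (the smoothness weight) and the iteration count are illustrative choices:

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Sketch of Horn-Schunck optic flow; returns per-pixel (u, v)."""
    im1, im2 = im1.astype(np.float64), im2.astype(np.float64)
    # Spatial and temporal derivatives, averaged over both frames.
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    kt = np.ones((2, 2)) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)
    # Weighted local average used by the smoothness constraint.
    avg = np.array([[1/12, 1/6, 1/12],
                    [1/6,  0.0, 1/6],
                    [1/12, 1/6, 1/12]])
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_avg, v_avg = convolve(u, avg), convolve(v, avg)
        # Pull (u, v) toward the neighborhood average, corrected by the
        # brightness-constancy residual (the classic update rule).
        resid = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * resid
        v = v_avg - Iy * resid
    return u, v
```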

  • It looks to me like you've got to work out if something that's the same has now moved into a different area. Is that what it's kind of doing?

  • Yeah, it is.

  • It is that.

  • But remember that the motion that we're talking about is really tiny, so we're not talking about something that's moved forward 10 pixels; that will break optic flow, although we could talk about how you can fix that at the end, because there's one quite neat trick you can do if something's moved really fast.

  • We're talking about tiny movements, so almost kind of sub-pixel-level movements here.

  • So it's picking up changes in brightness patterns spatially, but also over time as well.

  • And all the equations do is figure out a way to get these estimates of where that little change in brightness has gone, across the whole image.

  • In the case of Horn-Schunck, the global method is trying to do everything across the whole image, and there are a lot of other approaches that will calculate this.

  • Another really common one is called the Lucas-Kanade approach to solving it. Rather than trying to say, look, let's optimize this thing globally across the whole image, they look at a little patch, so they take a patch of pixels.

  • So again, this looks a bit like a kernel, I guess.

  • It's normally sort of five by five, and say you're considering the pixel in the middle.

  • But what the Lucas-Kanade approach says is, let's assume that u and v are going to be the same in all of these pixels in this region, and that gives us 25 equations, which is overdetermined, so you can use least squares to figure out the best fit, essentially, for u and v there.
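
A minimal sketch of that least-squares step for one pixel, assuming the gradient images `Ix`, `Iy`, `It` have already been computed; the 5-by-5 window gives the 25 equations mentioned above:

```python
import numpy as np

def lucas_kanade_at(Ix, Iy, It, row, col, half=2):
    """Solve for (u, v) on a (2*half+1)^2 patch via least squares."""
    sl = (slice(row - half, row + half + 1),
          slice(col - half, col + half + 1))
    # 25 equations in 2 unknowns: A @ [u, v] = -It over the patch.
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```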

  • Now, of course, all these things are making a huge load of assumptions which, as I've already hinted, we have to break quite a lot.

  • So if you've got an edge here, for example, where maybe this object is moving that way and this object is moving that way, you're gonna have problems figuring out a u and v there.

  • So some constraints might be built in, things that try and separate out edges, um, because they tend to break this stuff.

  • Another quite interesting problem that you get with some of these methods is something called the aperture problem.

  • So this is where we're trying to figure out motion.

  • So it's called the aperture problem because we've only got a little window that we can see motion happening in like that.

  • So the question is, which way is that line moving?

  • So if we had to put optic flow vectors on this line, where would you say it was going?

  • Well, I mean, the obvious thing to say is it's going down.

  • But it could, of course, be a diagonal line moving right.

  • Good answer, because it can be lots of things if we take away that window.

  • This is the motion that we're actually getting, so the lines just moving across the image from left to right.

  • But it looks there like it's kind of either going diagonally down-right or straight down; it kind of depends on how you interpret it.

  • I guess this is called the aperture problem, or the barber's pole illusion, because that's got stripes that appear to move up and down, the idea being that there's not enough information here to accurately figure out how that feature is moving. It's very easy here because we can see the corners; corners are a bit special.

  • They allow us to sort of refine our estimates of motion. So sometimes, if we're only looking in a small window like here, we can get ambiguous motion happening that we can't determine, because of things like the aperture problem.
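
This is also why corners help: on a straight edge, the 2-by-2 normal matrix from the Lucas-Kanade equations is nearly singular, while at a corner both of its eigenvalues are large. A small sketch of that check (the threshold `tau` is an arbitrary illustrative value):

```python
import numpy as np

def flow_is_well_determined(Ix_patch, Iy_patch, tau=1e-2):
    """True where the patch has enough gradient structure to pin down (u, v).
    On an edge the smallest eigenvalue collapses: the aperture problem."""
    A = np.stack([Ix_patch.ravel(), Iy_patch.ravel()], axis=1)
    eigvals = np.linalg.eigvalsh(A.T @ A)  # symmetric 2x2, sorted ascending
    return eigvals[0] > tau
```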

  • That's another sort of issue with these methods.

  • The only other thing that I wanted to mention here is that, so, Horn-Schunck is global; we talked about it kind of finding the best set of u's and v's across the image.

  • This approach here is local, so we only care about making it work on a five-by-five patch of pixels, essentially.

  • But they've got advantages and disadvantages.

  • So one of the advantages of the global approach is, if we've got an object here that's moving, at the edges, you know, there are enough brightness changes that we can pick up movement happening.

  • There's a question about, if this is just a sort of orange or white or whatever shape, what's happening in the middle of it; like the spinning ball, we can't tell.

  • So the nice thing about the global approach is it will kind of fill in from the information it knows at the edges, throughout the shape.

  • Okay, the problem with the local approaches is, if you've just got a patch here, yeah, you can kind of figure out u and v in this location, but if your patch is in the middle of one of these textureless shapes, it's kind of an undetermined solution, so you might get some sort of noisy results.

  • Um, so it's swings and roundabouts, as with all of this stuff, as to whether you use a global approach or a local approach, and they've got trade-offs in speed and things like that.

  • Those were just two examples.

  • There's loads of different ways of calculating optic flow.

  • People are doing it with deep learning now, of course, so there are lots of different ways of doing it.

  • And even though it's been around since the eighties, it's still a very useful technique, as a way of pre-processing things, perhaps as a precursor for segmentation. So if this shape here and the background are a very similar color or texture, but this shape's moving and the background's not moving...

  • So you've not got any flow vectors on the background.

  • You can use the optical flow field as a way of segmenting what's going on.
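
A crude sketch of that idea, assuming `flow` is an H x W x 2 array of (u, v) from one of the methods above; the threshold is just an illustrative value in pixels per frame:

```python
import numpy as np

def segment_by_motion(flow, thresh=0.5):
    """Boolean mask: True where flow magnitude says something is moving."""
    speed = np.hypot(flow[..., 0], flow[..., 1])
    return speed > thresh
```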

  • Okay, so we've said that the motion has to be really small for any of this to work.

  • So you need a really small time between frames again.

  • Another assumption that's going to get broken all the time is that stuff doesn't move more than a few pixels.

  • So if you've got an image that looks like this, and you've got something here, and in the next frame it moves down here, optic flow's not gonna like that, and it's gonna break, so you're not gonna get a good value out for that.

  • There's a trick called building an image pyramid, which certainly the Lucas-Kanade approach uses.

  • So you might read about this approach using a pyramid scheme, and all that means is actually pretty simple.

  • You make your image lower resolution to start with.

  • So perhaps if I switch to a different color: imagine if, instead of being a four-by-four-pixel image, this is a two-by-two-pixel image.

  • And then whatever shape we've got here, yeah, okay, so it averages out a bit, because we're sort of blurring it with our surrounding ones.

  • But now they've become neighboring pixels, and we've essentially shrunk the space over which the motion is happening. So you end up with this pyramid sort of system where you have low resolution lower down, and then you kind of move up to higher and higher resolutions.

  • That's a terrible image.

  • Do they have to average it out and say, okay, that one pixel you're seeing stands in for 20 pixels?

  • Exactly.

  • So you use this as a way to kind of bootstrap the rest of it.

  • So you calculate your motion here, so you get your motion vectors that might look like this on the low resolution one.

  • And essentially you populate the next level up with estimates of motion from these.

  • So, you know, whatever was here gets filled into these four pixels.

  • Then you do the scheme again.

  • But because you've got a starting point this time, it will help you sort of overcome some of those big motions.
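
OpenCV's pyramidal Lucas-Kanade does exactly this coarse-to-fine bootstrapping. A short usage sketch, with hypothetical frame filenames:

```python
import cv2

prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical files
curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Track corner-like points (corners dodge the aperture problem).
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

new_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev, curr, pts, None,
    winSize=(15, 15),  # the local patch, like the 5x5 discussed earlier
    maxLevel=3,        # pyramid levels above full resolution
)
vectors = (new_pts - pts)[status.ravel() == 1]  # (u, v) per tracked point
```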

  • Is this being used these days?

  • You mentioned deep learning.

  • What sort of things is it used for at the moment?

  • Yes, it's used today.

  • So I mentioned it could be used for image stabilization, so you can stabilize an image by looking at how the pixels are moving around. You don't have to calculate it across the whole image.

  • If you want to do it really quickly, you could just kind of sample bits of it.

  • But you want to get an idea of whether the camera is moving around the world in some way.

  • And then you can sort of, in software, correct for that.
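
A toy sketch of that correction, treating the median flow vector as global camera shake and warping it away (real stabilizers fit a richer motion model and smooth it over time):

```python
import numpy as np
import cv2

def stabilize(frame, flow):
    """Cancel estimated global translation; flow is an H x W x 2 array."""
    dx = float(np.median(flow[..., 0]))
    dy = float(np.median(flow[..., 1]))
    M = np.float32([[1, 0, -dx], [0, 1, -dy]])  # pure-translation model
    return cv2.warpAffine(frame, M, (frame.shape[1], frame.shape[0]))
```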

  • Another use is frame interpolation.

  • So if you've got 25-frames-per-second video and you want to turn it into a kind of fake slo-mo, if you just stretch out your frames it will go kind of jaggedy, right?

  • So you get a frame and then the next frame, and because you filmed it at normal speed and you're slowing it down, it doesn't look very nice.

  • So if you know how things are moving across the frame, you could add in sort of fake extra frames.

  • So if this is your first frame and your second frame, you can add in additional frames in the middle, using optic flow to kind of figure out how brightness is moving around at that point.

  • And if you know how the sort of local patterns are moving about, you can put them in a sensible place in those interpolated frames. So rather than just pure sort of smoothing or interpolation between them, it's kind of a bit more sensible than that, a bit cleverer.

  • And you can move surfaces sort of to where they should be.
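
A simplified sketch of flow-based interpolation: sample the first frame at positions pushed back along a fraction `t` of the flow. Real interpolators also warp the second frame and blend, and handle occlusions:

```python
import numpy as np
import cv2

def interpolate_frame(f0, flow, t=0.5):
    """Warp frame f0 forward by t * flow to fake an in-between frame."""
    h, w = f0.shape[:2]
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (gx - t * flow[..., 0]).astype(np.float32)
    map_y = (gy - t * flow[..., 1]).astype(np.float32)
    return cv2.remap(f0, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```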

  • So there are some quite sort of neat plug-ins and tricks coming around doing that kind of stuff, and one way you can do that is using optic flow.

  • If you're trying to follow something moving very fast in an image, rather than just talking about movement at the pixel level, that's going to be more object tracking, which perhaps we could do a video on in the future.

  • You can use optic flow for a number of reasons: if you've got a wobbly, shaky camera, you can use it for image stabilization.

  • I wasn't insinuating anything.

  • Yes, so you can use optic flow to see what kind of global motion there is. What about bokeh blur?

  • I don't know if 'bokeh' is pronounced right.

  • Okay. Like 'boke-uh'.

  • Okay? Yeah.
