
  • *clap* Ah, there we are.

  • So we have diagnostic information here, but we can also get a

  • 3D view of what the Hololens actually sees.

  • So this is the point cloud, or rather the mesh, that it's creating in real time.

  • As you can see, that's the room we're standing in.

  • There's the table.

  • That's the point of view of the Hololens in real time.

  • First-person view.

  • There is a bit of a delay.

  • The Hololens does not need a cable of any sort. It's all on board.

  • This is just to give a preview,

  • otherwise it's really laggy over Wi-Fi.

  • *BLIP*BLIP*DIT*DIT*DIT*DIT*DIT*DIT*DIT*

  • Point it at me. Let's have a look at what I look like with this.

  • So, can you be seen? Because it might...

  • Because you're effectively a dynamic mesh,

  • it might not be tracking you.

  • -Are you actually showing up? -Eh..something's there.

  • Maybe. Let's see if I update it...

  • And uh, give it a...

  • -Ah! Is there... -Yes, something is there.

  • Okay.

  • Yep, and that is enough information for the Hololens to say

  • -"Right, there is some sort of object there." -I can sort of see my camera there...

  • -Yep. -Because I'm pointing at the laptop

  • -that's on the table. -Alright

  • Do we have a...?

  • -Hold on, let's... -Trying to stay still

  • Okay, and I'll just tell it to update the mesh.

  • -Ah! -There we are.

  • So this is basically a depth map of me is it?

  • Pretty much, yeah.

  • Well from this angle only. I'll have to walk around otherwise.

  • I'm seeing some blinking red lights on you.

  • That is the depth camera array on top of the Hololens, very similar to a Kinect's,

  • though the exact cocktail of sensors is not very well known.

  • Talking to you now, I can sort of just see your eyes,

  • but it feels like I've walked a bit into the future here.

  • Indeed it is.

  • Of all the various head mounts we have,

  • this one is probably the most Star Trek-y of the lot.

  • -Haha -Yeah

  • Just like the Kinect or the Tango and all those devices,

  • what it's doing is reading the environment around it,

  • which is different from what, say, normal optical AR does,

  • and by creating a mesh to understand the environment around it,

  • we can start placing content there.

  • So, as you can see, that's

  • an example of the mesh, what it's seeing.

  • The mesh it's creating in real time.

  • -Is it constantly updating that? -All the time.

  • It's reading the environment all the time,

  • and now this is a view of exactly what I see.
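The real-time meshing described above starts from depth frames like the ones shown in the preview. A minimal sketch of that first step, back-projecting a depth image into world-space points, is below; the function name, pinhole intrinsics, and pose matrix are illustrative assumptions, not the HoloLens's actual pipeline:

```python
import numpy as np

def depth_to_world_points(depth, fx, fy, cx, cy, cam_to_world):
    """Back-project a depth image (metres) into 3D world-space points.

    depth        -- HxW array of depth values from the sensor
    fx, fy       -- focal lengths in pixels
    cx, cy       -- principal point in pixels
    cam_to_world -- 4x4 camera pose matrix for this frame
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (us - cx) * z / fx          # pinhole back-projection
    y = (vs - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1)  # homogeneous
    pts_world = pts_cam.reshape(-1, 4) @ cam_to_world.T      # into world space
    return pts_world[:, :3]
```

Accumulating these points over many frames (and poses) is what yields the point cloud, which a surface-reconstruction step then turns into the triangle mesh seen in the viewer.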

  • So I can populate the world

  • around me with any visual content I want.

  • So this is the menu, basically.

  • You should be able to see it. There we are.

  • So that is a hologram in space.

  • So let's go place some other stuff. For example, my settings.

  • I can either place it in space, just leave it there,

  • or I can go find a nice flat surface

  • like, say, the side of this printer, and place it.

  • And that is now locked; that's where it

  • belongs.
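Locking a hologram to a "nice flat surface" like the one just demonstrated amounts to fitting a plane to the spatial mesh near the gaze point and refusing bumpy patches. The helper below is a hypothetical sketch of that idea (the name, the flatness threshold, and the SVD plane fit are all assumptions, not a HoloLens API):

```python
import numpy as np

def find_placement(points, up=np.array([0.0, 1.0, 0.0]), flatness=0.01):
    """Hypothetical placement helper: fit a plane to the spatial-mesh
    vertices around the gaze point and accept it only if the patch is
    flat enough to host a hologram.

    points   -- Nx3 mesh vertices near the gaze point
    flatness -- max RMS distance (metres) of points from the fitted plane
    Returns (anchor, normal), or None if the patch is not flat.
    """
    centroid = points.mean(axis=0)
    # The direction of least variance across the patch is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    if normal @ up < 0:                  # make the orientation deterministic
        normal = -normal
    rms = np.sqrt((((points - centroid) @ normal) ** 2).mean())
    if rms > flatness:
        return None                      # too bumpy: refuse placement
    return centroid, normal
```

Anchoring the window at the returned centroid, oriented along the normal, is what makes it stay put as the wearer walks around.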

  • So, that is now a normal Windows 10 menu,

  • many of you will probably have seen it before.

  • There we are.

  • Or we can start placing other stuff.

  • Let's place a...

  • - It's a theme we put in all these demos usually.

  • - So that is now locked onto

  • the table.

  • It won't move, however. It is there.

  • Now the key thing here unlike most

  • of the other AR solutions we have seen is

  • as soon as these sensors have an

  • understanding of depth, they can deal

  • with the occlusion problem.

  • So the occlusion problem is where any AR solution

  • doesn't exactly know what it's looking at,

  • in which case it draws the digital content above

  • everything. So if you've used any AR,

  • any mobile phone-based AR with a marker,

  • you have probably tried putting your hand in front of the marker

  • to see if it occludes the virtual content or

  • if anybody tries to pick stuff up.

  • Normally, with marker-based AR, that simply won't work.

  • All it can really do is tie content to that marker,

  • but in this case, it does have a sense of depth,

  • so it can effectively read your hand

  • and stop drawing content over it, so if I put my

  • hand over the cat... right, you can see now

  • it is not working because it hasn't

  • registered my hand just yet, but it will eventually catch on.
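The occlusion test being described is, at its core, a per-pixel depth comparison: hologram pixels are only drawn where the hologram is nearer than whatever real surface the sensor measured there. A minimal sketch, with illustrative names (not the device's renderer):

```python
import numpy as np

def composite(real_rgb, real_depth, holo_rgb, holo_depth):
    """Depth-based occlusion: draw a hologram pixel only where the
    hologram is nearer than the real surface measured at that pixel.

    real_rgb / holo_rgb   -- HxWx3 colour images
    real_depth / holo_depth -- HxW depth in metres (holo_depth is np.inf
                               wherever no hologram is rendered)
    """
    visible = holo_depth < real_depth      # hologram in front of the world
    out = real_rgb.copy()
    out[visible] = holo_rgb[visible]       # only overwrite unoccluded pixels
    return out
```

This is also why the hand fails to occlude the cat at first: until the sensors have produced depth (mesh) for the hand, `real_depth` there still holds the table behind it, so the hologram wins the comparison.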

  • If you went and kinda looked behind this laptop,

  • - will the cat disappear behind this laptop? - Uh, it should.

  • Assuming the device has actually read the laptop.

  • Let's have a go. Oh, that is stuck.

  • - Does the stickiness happen quite often or is it..?

  • - No, it's only for the preview really.

  • - Oh, so basically it's because we're looking at it on the laptop.

  • - Indeed, it's a little bit of a debug view.

  • As far as my experience goes on the Hololens,

  • everything is smooth.

  • Oh, there we are.

  • So, let's have a look.

  • So it looks like we don't have a lot of occlusions here.

  • No, my laptop doesn't seem to want to hide it.

  • The problem with the room here

  • is there's not a lot of stuff to hide behind.

  • So let's try something else.

  • So, if I was to take another hologram and place it

  • behind something... like, say, on the floor over here.

  • Right.

  • There we are, I can see the dog

  • and now the table's in the way. Can't see it.

  • - Yeah, ok. - Yep so, this is a case where the geometry

  • has been correctly read, so the Hololens is sure

  • that that is a static feature in my environment.

  • So it now has built geometry.

  • As we saw before actually, we can switch back to

  • the previous view.

  • There we are.

  • So, this is the geometry that it is using for the occlusion.

  • - Oh, so you can see that table there... - Yep.

  • -and it's not perfectly smooth. - Exactly, so previously the cat

  • could not be hidden behind the laptop because

  • as you can see, the laptop has only shown up as a rough mesh.

  • If I had kept moving, it probably would have hidden it.

  • The table though is a nice solid object.

  • and the dog is hiding

  • behind the table, so I can see the

  • hologram from here. You probably cannot on

  • the screen now because we're seeing the preview.

  • But as I move and I'm behind the table, it's hidden.

  • - It has now systematically scanned the entire room

  • - and got the geometry. - Exactly. Mhm.

  • So, are we looking at a trade-off with that?

  • Yes. Yes. Seeing as this is an entirely on-board process,

  • so unlike say, all the virtual reality headsets

  • we've seen, this does not require a desktop computer.

  • It's basically a mobile phone strapped to my face, just like the Tango.

  • There is continuously a trade-off between speed and performance,

  • so if you want to maintain the experience that everything is smooth,

  • say about 30 frames per second for all the content

  • I'm seeing, it's going to start cutting corners.

  • and it also comes down to, basically, the capabilities

  • of the sensors: how much granularity can they pick up,

  • how much detail. So you'll see that

  • they kind of tend to cut corners,

  • will ignore clutter on the tables.
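The speed-versus-quality trade-off described here (keep everything smooth at ~30 fps by cutting corners on mesh detail) can be sketched as a simple feedback loop. This is a toy illustration with assumed names and thresholds, not how the device actually budgets its frame time:

```python
def choose_mesh_detail(frame_ms, detail, budget_ms=33.3, step=0.1):
    """Toy speed/quality controller: if the last frame missed the ~30 fps
    budget, drop the spatial-mesh detail level; if there was comfortable
    headroom, raise it. `detail` is the fraction of triangles kept,
    clamped to [0.1, 1.0].
    """
    if frame_ms > budget_ms:
        detail -= step                   # too slow: coarsen the mesh
    elif frame_ms < 0.8 * budget_ms:
        detail += step                   # headroom: refine the mesh
    return min(1.0, max(0.1, detail))
```

Run every frame, a controller like this converges on the coarsest mesh that still holds the frame rate, which is exactly why small clutter on tables ends up smoothed away.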

  • They're not very good with see-through surfaces,

  • so for example, the glass walls over here as you can see,

  • they will simply not register, because it all comes down

  • to computer vision in this case, computer vision on the sensor data.

  • Just like we have seen with other AR stuff when

  • it's through some sort of glass case or what-have-you.

  • The cameras actually see the reflections, but unlike our brains,

  • they don't know to discard them.

  • So they don't really know what to do with them.

  • The depth sensors, in this case, see straight through them.

  • So that is both a good thing and a bad thing,

  • depending on how you want to apply it.

  • So, I could place a hologram there, but it

  • wouldn't necessarily see it as a wall.
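In practice, the "seeing straight through glass" behaviour shows up as depth pixels with no usable return, and a mapping pipeline typically masks those out before building any geometry. A minimal sketch; the near/far limits are assumed sensor bounds, not HoloLens specifics:

```python
import numpy as np

def valid_depth_mask(depth, near=0.3, far=3.5):
    """Depth sensors return no usable measurement for glass and other
    see-through or reflective surfaces -- typically a zero or out-of-range
    value. Mask those pixels out so no geometry is built from them.
    """
    return (depth > near) & (depth < far)
```

The masked-out pixels are simply never meshed, which is why the glass walls in the room leave a hole in the model rather than a surface.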

  • So, let's see what we can do.

  • Of course, it's not all just placing holograms around.

  • Right off the bat, it's the idea that

  • you can use the environment around you

  • as your, well, computing environment,

  • so you could just transplant your

  • everyday desktop use into the environment.

  • So I can place a Word file over there,

  • I can put my browser on my ceiling,

  • make a massive screen.

  • Eh, obviously there are a lot of uses

  • in entertainment, so the games on this thing are pretty good.

  • They will use the environment,

  • you can use the physics of the environment around you.

  • So you can have a ball bounce,

  • on the table, bounce on the floor.

  • People have tried this kind of gesture-based

  • and moving around kind of interfaces for computing before

  • and it doesn't really seem to have taken off.

  • People are quite comfortable with a keyboard and a mouse,

  • or some kind of trackpad.

  • Indeed.

  • Weirdly, this is probably one of the first examples where I've seen people

  • just click with it.

  • So it's only really got two gestures that it recognizes,

  • the click and the menu, but that is enough.

  • It is. It is enough for you to be

  • able to do basic interaction such as

  • drag stuff around, scroll around, select things

  • in a way you normally might, you know,

  • pointing at something is a normal interaction

  • and translates quite well.
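The "click" just described (the air tap) is essentially a tiny state machine over tracked hand poses: a ready pose, a pinch, and a release within a short window. The sketch below is an illustrative toy, not the device's recogniser; the state names and frame window are assumptions:

```python
from enum import Enum, auto

class Hand(Enum):
    READY = auto()      # index finger raised, hand is being tracked
    PINCHED = auto()    # finger and thumb brought together

def detect_air_tap(frames, max_frames=30):
    """Toy recogniser for an air-tap-style gesture: READY, then PINCHED,
    then back to READY, within a short window. `frames` is a sequence of
    Hand states; returns True if a complete tap occurred.
    """
    state, age = "idle", 0
    for pose in frames:
        age += 1
        if state == "idle" and pose is Hand.READY:
            state, age = "armed", 0
        elif state == "armed" and pose is Hand.PINCHED:
            state, age = "pressed", 0
        elif state == "pressed" and pose is Hand.READY:
            return True                  # pinch released: tap complete
        if age > max_frames:
            state = "idle"               # gesture took too long: reset
    return False
```

Keeping the vocabulary this small is part of why the recognition feels reliable: two distinctive poses are much easier to detect robustly than a rich gesture set.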

  • And it's also the reliability factor,

  • so one of the reasons these things don't catch on

  • is because they usually don't work.

  • You're sitting there tapping at

  • things; this one is just registering and working quite well.

  • The two gestures, for example... the bloom, as it's called,

  • just works reliably every time.

  • Same as the other main interaction, which is the

  • click. So I'm not... well, there we go.

  • Every time I click, we get an update of the mesh.

  • You can see how it's reading the mesh there,

  • the environment around it.

  • Or if I want to get rid of windows, I just point at the one I want

  • and remove.

  • There, we do have a bit of a problem.

  • Yes, I can select things and there are

  • two gestures that work very reliably,

  • but what we're missing is a cursor.

  • Yes, I do have a cursor,

  • like a crosshair, in the middle of my view.

  • But that means I'm relying on the

  • direction of my head in order to

  • basically look at something,

  • center it in the middle of my screen and then select it.
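The gaze-selection mechanism just described boils down to a raycast from the head pose: whichever hologram lies closest to the centre of view (within a small cone) is the target. A minimal sketch with illustrative names and an assumed 5-degree cone:

```python
import numpy as np

def gaze_pick(head_pos, head_forward, targets, max_angle_deg=5.0):
    """Gaze-cursor sketch: select the hologram whose direction makes the
    smallest angle with the head's forward vector, within a small cone.

    targets -- list of (name, position) pairs
    Returns the selected name, or None if nothing is near the crosshair.
    """
    best, best_angle = None, np.radians(max_angle_deg)
    for name, pos in targets:
        to_target = np.asarray(pos, float) - head_pos
        to_target /= np.linalg.norm(to_target)
        angle = np.arccos(np.clip(to_target @ head_forward, -1.0, 1.0))
        if angle < best_angle:           # closest to the centre of view wins
            best, best_angle = name, angle
    return best
```

Because the ray comes from the head pose rather than a tracked fingertip, it sidesteps the arm/wrist/finger ambiguity of pointing, which is why gaze ends up the more solid selection mechanism.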

  • Um, this is a good and bad thing.

  • There are various interaction methods to do this,

  • maybe, I could point at something.

  • But you see that pointing is actually not

  • one of the most accurate things in the world.

  • We have some other studies with

  • other interfaces where you can see that

  • people think they're pointing at something

  • but really they're pointing

  • with an arm, they're pointing with a wrist,

  • they're pointing with a finger.

  • So, getting a system that understands the intent of the user is pretty difficult,

  • but gaze is pretty solid.

  • We have a crosshair, we aim at what we want,

  • - we select it. - We're seeing a preview here

  • which is a camera on the front of the device in this view.

  • Is your field of view a lot wider than that?

  • Can you see more stuff?

  • If you're looking at that cat, what can you see at the sides

  • - of your arms? -That is a very good question.

  • That's actually one of the

  • things that, well, surprises most people when they put this on.

  • The field of view is actually that narrow,

  • unlike pretty much every headset out right now.

  • This is a proper heads-up display,

  • meaning it's not like Google Glass,

  • where I have my real vision and then a

  • second window up in the corner, giving me a

  • version of the real world.

  • This is directly adding content to my normal

  • vision

  • The problem is, the area that it has to

  • add this content to is really very narrow.

  • I think that's the equivalent of a

  • 15-inch screen at normal distance,

  • so really my field of view of augmented

  • content (and it sounds bad, but it's not

  • that bad) is really just this angle over here.
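The "15-inch screen at normal distance" comparison can be sanity-checked with simple geometry: a 15-inch 16:9 screen is roughly 0.33 m wide, and at an assumed ~0.6 m viewing distance (the distance is our assumption, not stated in the conversation) it subtends about 31 degrees, in the same ballpark as the first HoloLens's roughly 30-degree horizontal augmented field of view:

```python
import math

def subtended_angle_deg(size_m, distance_m):
    """Angle subtended by a flat screen of the given width at the given
    viewing distance (simple pinhole geometry)."""
    return math.degrees(2 * math.atan((size_m / 2) / distance_m))

# A 15-inch 16:9 screen is ~0.33 m wide; assume ~0.6 m viewing distance.
fov = subtended_angle_deg(0.33, 0.6)   # roughly 31 degrees
```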

  • - It's really narrow. - But you can see at the sides, can you?

  • Uh, yes. Yes I can see.

  • It's all perfectly clear, so I can

  • see the cat now. It's in the center of my

  • vision, but if I move my head ever so

  • slightly, at this point, the cat is

  • outside my augmented field of view.

  • It is quite narrow, but this is a technology

  • limitation that, to my understanding, has

  • already been overcome. It's again a

  • trade-off between cost and performance.

  • These are developer kits after all,

  • so they're already pretty pricey. If they were to

  • have a proper field of view, probably the

  • weight and power requirements would have

  • made them quite unwieldy at the time.

  • You know, the voice control works very well. It's Cortana.

  • - So umm... - Should we try and put it on the camera? -Yep.

  • - Let's see if this works. - Okay, so...Whoops.

  • - It's not gonna work on b[oth] - I'll have to look out through one

  • - of the eyes I guess. - Oh yeah yeah.

  • - Oh. Hold on. Yep, I see content. - Oh, yeah.

  • - Oh, there we are. - Oh, Okay.

  • - Okay, so uh - So, do you want to hold that against there?

  • - Yep, aim on other account. Ok, then we-

  • - sure enough, there's content. - Mhm.

  • - There's more content. - Is the cat still there?

  • - Yeah there's... Oh hold on, I'm doing it now.

  • - Uh, we move forward. - Oh, there we go.

  • - Yeah it's quite interesting to see that line, isn't it?

  • So, if I put this on the cat's nose for instance,

  • and then what? Click or something?

  • - Yep. So, whenever you can interact with

  • something, the dot will turn into a circle.

  • - Ah, okay. - Indeed. And also, when it sees

  • fingers, it says "Okay, you're ready to click."

  • Same thing.



Microsoft Hololens - Computerphile

Published by 林宜悉 on 2021-01-14