Introduction to Discrete Lidar & NEON Lidar Data: A Presentation


  • (gentle intro music)

  • - My name's Tristan Goulden.

  • I'm a remote sensing scientist

  • with the Airborne Observation Platform

  • and I'm going to do a talk this afternoon on

  • Introduction to LiDAR.

  • Primarily on discrete LiDAR, because Keith Krause

  • is going to be following me talking

  • about the waveform LiDAR.

  • LiDAR is an active remote sensing system

  • which means that it has its own energy source.

  • And so the main subsystem of a LiDAR is the laser

  • and what we use the laser for is to generate

  • a pulse of energy that comes out of the sensor

  • which is pointed out the bottom of the aircraft,

  • travels down to the ground, reflects off targets

  • on the ground, and then returns back to the aircraft.

  • And so, we're actually able to measure the time

  • it takes for that laser pulse to go down,

  • reflect, and come back.

  • Based on that two-way travel time we can

  • calculate a range from the aircraft down to the ground.
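
As a quick sketch of that two-way travel-time calculation, here is a minimal Python example; the 6.67 microsecond figure is just an illustrative value for a roughly 1,000 meter range, not a number from the talk.

    # Range from two-way travel time: the pulse goes down and back,
    # so the one-way distance is half of (speed of light x time).
    C = 299_792_458.0  # speed of light, m/s

    def pulse_range(two_way_time_s):
        """Range in meters from a measured two-way travel time in seconds."""
        return C * two_way_time_s / 2.0

    print(pulse_range(6.67e-6))  # ~1,000 m, a typical flying height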

  • We also have a GPS sensor on the roof of the aircraft

  • to get the aircraft position,

  • an inertial measurement unit inside

  • the aircraft to get orientation,

  • roll, pitch, and yaw of the aircraft

  • and then a scanning mirror which directs

  • the laser pulse within a swath beneath the aircraft.

  • When you combine all of these subsystems together

  • you can actually compute coordinates for points on the ground

  • based on all the observations from these subsystems.

  • What makes LiDAR really unique

  • is that it is able to achieve a really accurate

  • and dense raw sample of the terrain.

  • It's able to do that mostly thanks to the laser ranger;

  • today's rangers can operate at about 500 kHz.

  • This means that the system is capable of sending

  • out 500,000 pulses per second.

  • Each pulse is capable of getting multiple returns.

  • It's possible that some of the energy

  • is going to reflect off the top of vegetation

  • then, energy will continue down through

  • the vegetation and might reflect off the understory,

  • then hopefully will make it down to the ground

  • reflect off the ground and we can get multiple returns

  • from each pulse.

  • That means we're able to get,

  • when sending out 500,000 pulses per second,

  • a multiple of that many points every second.

  • It's an incredible amount of data.
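
To make that arithmetic concrete, here is the upper bound those numbers imply; the four-returns-per-pulse limit is the figure quoted later in the talk for the discrete system.

    pulses_per_second = 500_000      # a modern 500 kHz laser ranger
    max_returns_per_pulse = 4        # discrete-return limit quoted later
    print(pulses_per_second * max_returns_per_pulse)  # up to 2,000,000 points/s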

  • I just want to briefly introduce the difference

  • between discrete and full waveform.

  • I won't get into a lot of details

  • because Keith is going to talk about that in a minute.

  • But basically there are two flavors

  • of the observations we get from the LiDAR.

  • The discrete gives us points only.

  • When we get that reflection off of an object,

  • we take that return range and we calculate

  • just a single coordinate.

  • From multiple returns we could get three or four

  • individual three dimensional coordinates.

  • With the Optech Gemini when that signal actually comes back

  • it's split, and part of the signal goes to

  • a waveform digitizer and that records the entire

  • return energy signature.

  • This energy signature will include the outgoing pulse here

  • and then some time will pass and then as we're going

  • through vegetation you're getting these humps here

  • where we're getting additional energy returns

  • from the object.

  • And so in the discrete LiDAR we're going to cut off

  • the timing at each one of these humps,

  • here, here, and here, giving us three individual points.

  • But with the waveform LiDAR we get this full return

  • signal and you're able to do more

  • advanced analysis on the structure

  • of the vegetation with that signal.
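
As a rough illustration of the difference, the sketch below reduces a digitized waveform to discrete returns by timing its peaks. The amplitude threshold and sample interval are made-up parameters, and real sensors use more sophisticated pulse detection; this only shows the idea of cutting the timing at each hump.

    import numpy as np
    from scipy.signal import find_peaks

    C = 299_792_458.0  # speed of light, m/s

    def discrete_returns(waveform, sample_interval_ns=1.0, min_amplitude=10.0):
        """Cut a return waveform into discrete ranges at its energy humps.

        Each peak above min_amplitude is treated as one return; its sample
        index is converted to a two-way time and then to a one-way range.
        """
        peaks, _ = find_peaks(np.asarray(waveform), height=min_amplitude)
        two_way_time_s = peaks * sample_interval_ns * 1e-9
        return C * two_way_time_s / 2.0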

  • For the NEON LiDAR, we are currently operating two

  • Optech Gemini systems, which are slightly older technology.

  • We purchased these in 2011 and 2012.

  • In the future we will be also doing surveys

  • with the Riegl Q780 which is a more contemporary system.

  • Right now we run our Optech at a

  • Pulse Repetition Frequency of 100 kHz.

  • We chose this frequency because it's the highest we can go

  • and still maintain the accuracy that we want.

  • If we go any higher than that,

  • it's capable of going up to 166 kHz,

  • but there is a large degradation in accuracy.

  • We fly at 1,000 meters.

  • We have 30% overlap in our flight lines.

  • That gives us 2-4 pulses per square meter.

  • In the overlapped areas we get 4 pulses per square meter

  • but in the non-overlapped areas we get 2.

  • It's capable of recording up to 4 returns per pulse

  • on the discrete LiDAR.

  • And so theoretically we could achieve

  • 400,000 points per second but generally you'll never get

  • 4 returns on every pulse.
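
Those density numbers follow from the pulse rate, the aircraft speed, and the swath width. The sketch below uses the 50 m/s ground speed quoted later in the talk and an assumed 1,000 meter swath width, so treat it as illustrative arithmetic rather than NEON's exact survey geometry.

    prf_hz = 100_000         # pulse repetition frequency
    speed_m_s = 50.0         # ground speed, quoted later in the talk
    swath_width_m = 1_000.0  # assumed swath width at 1,000 m flying height

    density = prf_hz / (speed_m_s * swath_width_m)
    print(density)  # 2.0 pulses per square meter; ~4 where flight lines overlap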

  • In order to position all of the LiDAR data

  • we also have to determine a trajectory.

  • So this is the information that the GPS and IMU collect.

  • As part of that we set up GPS base stations

  • in the local vicinity of our survey areas.

  • I think somebody asked about this yesterday.

  • Generally we try to exploit the CORS Network

  • as much as we can.

  • And these are GPS base stations that are set out

  • across the United States and are run by the local

  • governments or the federal government.

  • We use those CORS stations.

  • These are stationary GPS sites with really accurate

  • coordinates and we use those to differentially correct

  • the GPS trajectory.

  • If we go to a site and see

  • that the distribution of CORS stations

  • doesn't give us less than a 20 kilometer

  • baseline between the reference station and the aircraft,

  • then we go ahead and we set up our own GPS base station

  • to correct the airborne trajectory.

  • For the most part, unless we're transiting from

  • the airport to the site, we'll never have base stations

  • that are more than 20 kilometers from the aircraft.

  • We do this because we're aiming for errors in the GPS

  • trajectory to be between 5 cm and 8 cm.

  • In order to do that we really need those local

  • GPS base stations that are close to the

  • airborne trajectory.

  • We try to achieve errors of 0.005 degrees in pitch and roll

  • and 0.008 degrees in yaw.

  • This is just a picture of the IMU that's located

  • inside the aircraft to get the roll, pitch, and yaw

  • as well as the GPS antenna.

  • One of the reasons we're able to get

  • these really accurate trajectories

  • is because GPS and IMU are really complementary technologies.

  • The IMU is able to achieve really fast positioning

  • but it's prone to drift over time.

  • Whereas GPS gives us really good position

  • every second or so but we can't get a position

  • in between those two GPS observations.

  • The GPS operates at 1 hertz.

  • The plane's traveling at 50 meters per second.

  • That means we're only getting 1 GPS

  • observation every 50 meters.

  • A lot could happen to the plane in 50 meters.

  • That's where the IMU takes over.

  • It's operating at 200 hertz.

  • It takes care of the positioning in between

  • those two GPS observations.

  • We get a good position 200 times per second.

  • I should mention that as you're going in between

  • the two GPS observations the IMU is prone to drift

  • but it gets corrected every time

  • you get to a new GPS observation.

  • So really it only needs to do

  • its positioning for one second.
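
The sketch below shows that idea in one dimension: integrate the 200 Hz IMU between the 1 Hz GPS fixes and zero the accumulated drift at each new fix. Production systems fuse the two sensors with a Kalman filter/smoother; this toy version only illustrates why the IMU has to hold its accuracy for just one second at a time.

    import numpy as np

    IMU_RATE_HZ = 200          # IMU samples per second
    DT = 1.0 / IMU_RATE_HZ

    def densify_trajectory(gps_positions, imu_velocities):
        """Toy 1-D GPS/IMU blend: dead-reckon with IMU velocities between
        1 Hz GPS fixes, resetting accumulated drift at every fix."""
        out = []
        for i in range(len(gps_positions) - 1):
            pos = gps_positions[i]                # drift zeroed at each fix
            seg = imu_velocities[i * IMU_RATE_HZ:(i + 1) * IMU_RATE_HZ]
            for v in seg:
                out.append(pos)
                pos += v * DT                     # integrate between fixes
        out.append(gps_positions[-1])
        return np.array(out)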

  • And this is just an example of some results

  • of a trajectory.

  • This was done at Smithsonian Environmental Research Center.

  • In the upper left-hand side

  • you can see the software that we use

  • to process the trajectory and then

  • the results of the trajectory in Google Earth

  • where you can really see each one of the flight lines

  • that we flew going up and down the site.

  • We actually also worked up statistics

  • for all of our trajectories from our 2016 flights

  • and we'll probably do the same for the 2017 flights.

  • Just so we can get an impression

  • of the type of quality we're getting on those trajectories.

  • You can see generally we try to keep

  • our roll below 20 degrees so that we maintain

  • lock on all the satellites.

  • You can see that for the most part

  • our roll was always between 20 and -20.

  • Generally we always had above 6 satellites

  • but more like 8 or 9.

  • Our PDOP was generally below 4 which is quite good.

  • This is the distance to the nearest base station.

  • You can see that for most of the time

  • we're below 20 kilometers.

  • There are some times where we get up a little bit higher

  • but that's generally during transits

  • between the site and the local airport.

  • Once we have that trajectory we're able to

  • mix that with the range and scanning information

  • that's collected by the LiDAR sensor

  • to produce the point cloud.

  • This is an example of our L1 product

  • which is point clouds produced in LAS format.

  • LAS is a standard binary format

  • for exchange of LiDAR point clouds.

  • This is an example from the San Joaquin Experimental Range.

  • You can see all the individual points

  • that were collected by the LiDAR

  • and you can even make out the structure of the vegetation

  • from those individual points.

  • That's just our L1 product.
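
If you want to inspect an L1 point cloud yourself, the open-source laspy package reads the LAS format directly; the file name below is hypothetical.

    import laspy  # open-source reader for the LAS exchange format

    las = laspy.read("NEON_example_flightline.las")  # hypothetical file name
    print(len(las.points))                  # number of discrete returns
    print(las.x[:3], las.y[:3], las.z[:3])  # georeferenced coordinates
    print(las.return_number[:3])            # 1st, 2nd, ... return of each pulse
    print(las.classification[:3])           # ground / vegetation / etc. classes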

  • All the L3 products that we produce are rasters

  • as opposed to point clouds.

  • Instead of all those individual coordinates

  • we have a grid of points.

  • We actually have to convert those points

  • into a raster product.

  • You can imagine if we observe this area of land

  • with the LiDAR we might get a

  • sampling like this of all the LiDAR points.

  • But what we really want, to create our raster product,

  • is the elevation at each one of these grid points.

  • Basically all of the points are overlaying

  • where we want those grid nodes.

  • What we do is we look in the area surrounding

  • a particular grid node and then we use

  • an interpolation method to calculate

  • what the elevation of that grid node might be.

  • And of course we can create that at any size.

  • Here at NEON we create

  • these rasters at 1 meter resolution.

  • There's lots of different interpolation methods

  • that you could use.

  • I encourage you to go out there

  • and research the different ones that are available.

  • At NEON we use what's called a

  • Triangular Irregular Network.

  • Which basically means we're just creating

  • linear connections between all the points

  • and forming triangles between all the points.

  • If you think about it you can kind of lay that grid

  • underneath this Triangular Irregular Network

  • that's connecting all those points.

  • Just interpolate the elevation from the plane

  • of the triangle that overlaps the raster cell

  • and pull that elevation down and

  • assign it to that raster cell.

  • That's how we get the elevation from the LiDAR Point Cloud.

  • All the points here are LiDAR observations

  • and we interpolate in between them,

  • pull the elevation from the triangular plane,

  • and assign that elevation to the raster grid.
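
That TIN gridding step maps naturally onto SciPy, whose LinearNDInterpolator builds exactly this kind of Delaunay triangulation and evaluates the plane of the enclosing triangle at each query point. A minimal sketch, not NEON's production code:

    import numpy as np
    from scipy.interpolate import LinearNDInterpolator

    def tin_to_raster(x, y, z, resolution=1.0):
        """Grid scattered LiDAR points by linear interpolation on a TIN.

        Triangulates the (x, y) points and pulls the elevation of the
        triangle plane down onto each raster node, as described above.
        """
        tin = LinearNDInterpolator(np.column_stack([x, y]), z)
        gx = np.arange(x.min(), x.max(), resolution)
        gy = np.arange(y.min(), y.max(), resolution)
        grid_x, grid_y = np.meshgrid(gx, gy)
        return tin(grid_x, grid_y)  # NaN outside the triangulated area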

  • One of the advantages of this is that

  • obviously it honors the location of the true data points.

  • You're never averaging down or filtering

  • a lot of observations and creating a new

  • elevation different from what you observed,

  • and it's computationally efficient.

  • This is the main reason that we want to use it.

  • Because when we're producing a lot of data

  • in an automated fashion we want a really

  • computationally efficient algorithm.

  • The main downfall of the Triangular Irregular Network

  • is it doesn't exploit redundancy

  • in the LiDAR data to improve your accuracy.

  • You can imagine if you had multiple points

  • within a single grid cell, it's not averaging

  • those to reduce the error.

  • You're just pretty much getting the

  • elevation point that's closest to the center.

  • We'll talk more about that later in the week

  • during the MCU lessons.

  • As I mentioned these are the main products

  • that come from the LiDAR.

  • You have the Unclassified point cloud.

  • That's an L1 product, as is

  • the Classified point cloud, another L1 product.

  • And then the L3 products which are the rasters.

  • You get a Digital Terrain Model,

  • Digital Surface Model, Slope and Aspect,

  • Canopy Height Model, and then the Slant range waveform.

  • I'm just going to briefly go

  • through each one of these products.

  • You've seen this picture already.

  • It's the L1 product.

  • It's in LAS 1.3 format and available by flight line.

  • And then we also have a Classified Point Cloud.

  • We give this one in tiles.

  • 1 kilometer by 1 kilometer tiles,

  • and we perform the classification with

  • a commercial software product

  • (the classes follow the ASPRS standard)

  • which basically just goes through

  • and looks at the geometry of the points

  • and determines whether the points are

  • ground or vegetation based on their structure

  • and how they are oriented to one another.

  • Then we further classify those also

  • into building, noise, and unclassified points.

  • And then we also colorize the point cloud.

  • We take the high resolution camera imagery

  • and apply the camera RGB colors

  • to the points that overlap on that image

  • so you can kind of get a full color

  • 3D model of each of our sites with these.
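
A minimal sketch of that colorization step, assuming the point cloud and the camera orthophoto share a coordinate system (both file names are hypothetical): sample the image's RGB bands at each point's (x, y) location and store the colors on the points.

    import numpy as np
    import laspy
    import rasterio  # reads the georeferenced camera image

    las = laspy.read("tile.las")                        # hypothetical inputs
    with rasterio.open("camera_orthophoto.tif") as img:
        # Sample the R, G, B bands at each point's horizontal location.
        rgb = np.array([px for px in img.sample(zip(las.x, las.y))])
    # LAS point formats with color fields store these as red/green/blue.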

  • This is part of the Classified Point Cloud.

  • Next we have the Digital Surface Model.

  • This is one of our L3 raster products.

  • Basically we just use that Triangular Irregular Network

  • interpolation algorithm with all of the points.

  • This is vegetation, buildings, all the points included;

  • we're interpolating between all of those

  • and getting that raster elevation.

  • Again we're creating 1 meter spatial resolution,

  • and then we create a Digital Terrain Model.

  • You saw the classification we had a couple slides ago;

  • we remove all of the vegetation points,

  • that is, the points that are classified as vegetation,

  • and then interpolate just the ground points.

  • That gives us just the ground surface

  • and I think I mentioned on the first day

  • that LiDAR is really one of the only

  • technologies that's able to do this

  • to classify those ground versus vegetation points

  • and then just get the ground

  • so you get an idea of what the surface looks like.

  • This is just a little animation that shows

  • the difference between the DTM and the DSM,

  • removing those vegetation points.
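
In code, pulling out just the ground surface is a one-line filter on the classified point cloud: class 2 is ground in the standard LAS classification scheme. Feeding only those points to the tin_to_raster() sketch shown earlier yields the DTM, while feeding in all the points yields the DSM. The file name is hypothetical.

    import laspy

    las = laspy.read("classified_tile.las")       # hypothetical classified tile
    ground = las.points[las.classification == 2]  # LAS class 2 = ground
    # Interpolate only these points with tin_to_raster() to get the DTM.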

  • Then we also create Slope and Aspect Rasters

  • from the digital terrain model.

  • This is also an L3 product.

  • The slope is measured in degrees.

  • Basically just the slope of the terrain

  • while the aspect is the direction of the steepest slope.

  • These are also measured in degrees

  • between 0 and 360 degrees.

  • Both of these come from the Horn algorithm.

  • Which is the same algorithm that's used

  • in a lot of popular remote sensing packages

  • like ESRI, QGIS, and ENVI to calculate

  • slope and aspect.

  • This is also produced at 1 meter

  • and given in 1 kilometer by 1 kilometer tiles.
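
For reference, here is a compact NumPy version of Horn's 3x3 slope/aspect method. Sign and aspect-origin conventions vary between packages, so treat this as one common variant rather than NEON's exact implementation.

    import numpy as np

    def horn_slope_aspect(dtm, cell=1.0):
        """Slope and aspect in degrees from a DTM via Horn's 3x3 kernel."""
        z = np.pad(dtm, 1, mode="edge")
        # Horn weights the three neighbors in each row/column 1-2-1.
        dzdx = ((z[:-2, 2:] + 2 * z[1:-1, 2:] + z[2:, 2:])
                - (z[:-2, :-2] + 2 * z[1:-1, :-2] + z[2:, :-2])) / (8 * cell)
        dzdy = ((z[2:, :-2] + 2 * z[2:, 1:-1] + z[2:, 2:])
                - (z[:-2, :-2] + 2 * z[:-2, 1:-1] + z[:-2, 2:])) / (8 * cell)
        slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
        aspect = np.degrees(np.arctan2(dzdy, -dzdx)) % 360.0  # one convention
        return slope, aspect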

  • Finally we have the Canopy Height Model.

  • Also an L3 product.

  • A common issue in creating Canopy Height Models

  • from LiDAR is that you get data pits.

  • These are spots where the modeled height drops all the way

  • down toward the ground in the center of the canopy.

  • Which sort of biases your estimates

  • that you might be gathering from the Canopy Height Model.

  • We use an algorithm from this paper here

  • that takes care of those data pits.

  • If you want more information on that you can go here.
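
The basic canopy height model is just the surface minus the terrain; the pit-free algorithm in the paper referenced above goes further by rebuilding the surface from partial TINs at several height thresholds. A toy difference with made-up elevations:

    import numpy as np

    # Toy aligned 1 m rasters; in practice these come from the TIN gridding.
    dsm = np.array([[115.0, 118.2], [114.9, 103.4]])  # first-surface heights
    dtm = np.array([[100.0, 100.5], [101.0, 103.2]])  # bare-earth heights

    chm = np.maximum(dsm - dtm, 0.0)  # canopy height; clamp negative noise
    print(chm)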

  • And then the final product is

  • that Full Waveform LiDAR product.

  • You can see that these were points that were

  • taken over the Canopy Height Model

  • and if you look at one of these individual points

  • you have this outgoing waveform here

  • and then a whole bunch of time passes,

  • but a whole bunch of time in LiDAR is like 300 nanoseconds

  • and then you get this return pulse here.

  • And so Keith is going to give us

  • his presentation on Waveform LiDAR.
