Why Are Self-Driving Cars Taking So Long?

  • SciShow is supported by Brilliant.org.

  • By now, you've probably heard that self-driving cars are coming soon.

  • If you haven't—surprise!

  • They're coming soon!

  • But people have been saying that for at least a decade,

  • and I still can't buy a car that'll drive me to work

  • while I nap in the passenger seat.

  • Some cars already come with partial autonomy,

  • systems like Tesla's Autopilot,

  • that assist drivers or sometimes even take control.

  • But they still need a human driver who can grab the reins on short notice

  • if things get dicey, which is why someone in the UK got arrested

  • earlier this year for trying the passenger seat thing.

  • There are some fully driverless vehicles that might be released

  • in the next few years, but they're only meant for very specific uses,

  • like long-haul trucking or taxis confined to certain streets and neighborhoods.

  • That's because general-purpose driving is hard!

  • The software has to work out a lot of really tricky questions

  • to turn information from its sensors into commands to the steering and pedals.

  • And despite all the money and brainpower that's being poured into research,

  • there are still major challenges at every step along that path.

  • The first thing a self-driving car has to do is figure out what's around it,

  • and where everything is.

  • It's called the perception stage.

  • Humans can do this at a glance, but a car needs

  • a whole cornucopia of sensor data: cameras, radar, ultrasonic sensors,

  • and lidar, which is basically detailed 3D radar that uses lasers instead of radio.

  • Today's autonomous vehicles do pretty well at interpreting all that data

  • to get a 3D digital model of their surroundings:

  • the lanes, cars, traffic lights, and so on.
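
To make that concrete, here is a minimal sketch (hypothetical numbers, not any real vehicle's code) of how a single lidar return, a range plus two beam angles, becomes a 3D point that the rest of the perception pipeline can work with:

```python
import math

def lidar_return_to_point(range_m, azimuth_deg, elevation_deg):
    """Convert one lidar return (range + beam angles) into an (x, y, z) point
    in the car's coordinate frame. Purely illustrative geometry."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)   # forward
    y = range_m * math.cos(el) * math.sin(az)   # left
    z = range_m * math.sin(el)                  # up
    return (x, y, z)

# Example: a return 12.5 m ahead, slightly to the right and below sensor height.
print(lidar_return_to_point(12.5, -8.0, -2.0))
```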

  • But it's not always easy to figure out what's what.

  • For example, if lots of objects are close together,

  • say, in a big crowd of people,

  • it's hard for the software to separate them.

  • So to work properly in pedestrian-packed areas like major cities,

  • the car might have to consider not just the current image

  • but the past few milliseconds of context, too.

  • That way, it can group a smaller blob of points moving together

  • into a distinct pedestrian about to step into the street.
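
As a toy illustration of that idea, the sketch below (invented tracks and thresholds) groups tracked points by position and velocity over a few frames, so a small blob moving together stands out from a stationary crowd:

```python
# Toy sketch: cluster tracked points by position *and* velocity, so points
# moving together get grouped into one object. Data and thresholds are made up.

def velocity(track):
    # Average displacement per frame over the track's short history.
    (x0, y0), (x1, y1) = track[0], track[-1]
    n = len(track) - 1
    return ((x1 - x0) / n, (y1 - y0) / n)

def same_object(track_a, track_b, pos_tol=0.5, vel_tol=0.2):
    ax, ay = track_a[-1]
    bx, by = track_b[-1]
    avx, avy = velocity(track_a)
    bvx, bvy = velocity(track_b)
    close = abs(ax - bx) < pos_tol and abs(ay - by) < pos_tol
    moving_alike = abs(avx - bvx) < vel_tol and abs(avy - bvy) < vel_tol
    return close and moving_alike

# Each track is a short history of (x, y) positions, one per frame.
crowd_point = [(5.0, 2.0), (5.0, 2.0), (5.0, 2.0)]   # standing still
walker_a    = [(5.2, 2.1), (5.2, 2.4), (5.2, 2.7)]   # stepping toward the street
walker_b    = [(5.4, 2.1), (5.4, 2.4), (5.4, 2.7)]   # moving with walker_a

print(same_object(walker_a, walker_b))    # True  -> one moving pedestrian blob
print(same_object(walker_a, crowd_point)) # False -> a separate, stationary object
```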

  • Also, some things are just inherently hard for computers to identify:

  • a drifting plastic bag looks just as solid to the sensors as a heavier,

  • and more dangerous, bag full of trash.

  • That particular mix-up would just lead to unnecessary braking,

  • but mistaken identities can be fatal:

  • in a deadly Tesla crash in 2016,

  • the Autopilot cameras mistook the side of a truck for washed-out sky.

  • You also need to make sure the system is dependable,

  • even if there are surprises.

  • If a camera goes haywire, for example,

  • the car has to be able to fall back on overlapping sources of information.
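
A hypothetical version of that fallback might look like this: fuse the distance estimates from whichever overlapping sensors still report themselves healthy, and escalate if none do:

```python
# Hypothetical redundancy sketch: estimate distance to an obstacle from
# whichever overlapping sensors are still healthy. Names and numbers invented.

def fused_distance(readings):
    """readings: list of (sensor_name, distance_m, healthy) tuples."""
    usable = [d for _, d, healthy in readings if healthy]
    if not usable:
        raise RuntimeError("No healthy sensors: hand back control / stop safely")
    return sum(usable) / len(usable)   # naive average of the surviving sources

readings = [
    ("camera", 18.2, False),   # camera has gone haywire
    ("radar",  17.9, True),
    ("lidar",  18.1, True),
]
print(fused_distance(readings))   # falls back on radar + lidar: 18.0
```

Real fusion is far more sophisticated, but the principle is the same: no single sensor should be a single point of failure.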

  • It also needs enough experience to learn about dead skunks,

  • conference bikes, backhoes sliding off trucks,

  • and all the other weird situations that might show up on the road.

  • Academics often resort to running simulations in Grand Theft Auto.

  • Yes, that Grand Theft Auto.

  • Some companies have more sophisticated simulators,

  • but even those are limited by the designers' imaginations.

  • So there are still some cases where perception is tricky.

  • The really stubborn problems, though, come with the next stage: prediction.

  • It's not enough to know where the pedestrians and other drivers are right now;

  • the car has to predict where they're going next

  • before it can move on to stage 3: planning its own moves.
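
As a purely illustrative version of that prediction step, the sketch below extrapolates a tracked object forward under a constant-velocity assumption before any planning happens (real systems use far richer models; the numbers are made up):

```python
# Minimal prediction sketch: assume each tracked object keeps its current
# velocity for the next couple of seconds, and hand those future positions
# to the planner.

def predict_positions(x, y, vx, vy, horizon_s=2.0, step_s=0.5):
    t = step_s
    future = []
    while t <= horizon_s:
        future.append((x + vx * t, y + vy * t))
        t += step_s
    return future

# A pedestrian 3 m to the side, walking toward our lane at 1 m/s.
print(predict_positions(x=3.0, y=10.0, vx=-1.0, vy=0.0))
# -> [(2.5, 10.0), (2.0, 10.0), (1.5, 10.0), (1.0, 10.0)]
```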

  • Sometimes prediction is straightforward:

  • a car's right blinker suggests it's about to merge right.

  • That's where planning is easy.

  • But sometimes computers just don't get their human overlords.

  • Say an oncoming car slows down and flashes its lights as you wait for a left.

  • It's probably safe to turn, but that's a subtle thing for a computer to realize.

  • What makes prediction really complicated, though,

  • is that the safety of the turn isn't something you just recognize;

  • it's a negotiation.

  • If you edge forward like you're about to make the left, the other driver will react.

  • So there's this feedback loop between prediction and planning.

  • In fact, researchers have found that when you're merging onto the highway,

  • if you don't rely on other people to react to you,

  • you might never be able to proceed safely.

  • So if a self-driving car isn't assertive enough, it can get stuck:

  • all actions seem too unsafe,

  • and you have yourself what researchers call the “freezing robot problem.”

  • Which itself can be unsafe!
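
Here is a contrived sketch of how that freeze can happen: a planner that vetoes any candidate action whose predicted risk exceeds a threshold will simply sit still if its predictions are pessimistic enough (all names and numbers are invented):

```python
# Toy "freezing robot" sketch: veto any action whose predicted collision risk
# exceeds a threshold. With overly pessimistic predictions about other drivers,
# nothing ever qualifies and the car just sits there.

RISK_THRESHOLD = 0.05

def choose_action(candidate_actions):
    safe = {a: r for a, r in candidate_actions.items() if r < RISK_THRESHOLD}
    if not safe:
        return "FREEZE"             # no action deemed safe enough
    return min(safe, key=safe.get)  # least-risky acceptable action

# Pessimistic predictions: merging traffic is assumed never to yield.
candidates = {"merge_now": 0.30, "creep_forward": 0.12, "wait": 0.08}
print(choose_action(candidates))    # -> FREEZE (even "wait" looks unsafe here)
```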

  • There are two main ways programmers try to work around all this.

  • One option is to have the car think of everyone else's actions

  • as dependent on its own.

  • But that can lead to overly aggressive behavior, which is also dangerous.

  • People who drive that way are the ones who end up swerving

  • all over the highway trying to weave between the cars.

  • Don't do that, by the way.

  • Another option is to have the car predict everyone's actions collectively,

  • treating itself as just one more car interacting like all the rest,

  • and then do whatever fits the situation best.

  • The problem with that approach is that you have to

  • oversimplify things to decide quickly.
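
To caricature the contrast between those two strategies, the sketch below boils a whole merging scenario down to a single made-up gap size; it is only meant to show the difference in attitude, not how real planners work:

```python
# Contrived sketch of the two planning styles described above, with one
# made-up "gap_m" number standing in for an entire merging scenario.

def ego_conditioned_plan(gap_m):
    # Style 1: assume the other driver will react to us and open the gap.
    # Tends toward aggressive merges.
    expected_gap = gap_m + 4.0             # assumed reaction: they slow down
    return "merge" if expected_gap > 6.0 else "wait"

def joint_plan(gap_m):
    # Style 2: treat ourselves as just one more car and only take gaps that
    # are already big enough. Simpler, but oversimplified, and it can get stuck.
    return "merge" if gap_m > 6.0 else "wait"

for gap in (3.0, 5.0, 8.0):
    print(gap, ego_conditioned_plan(gap), joint_plan(gap))
# 3.0 m and 5.0 m gaps: ego-conditioned merges, joint waits; 8.0 m: both merge.
```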

  • Finding a better solution to prediction and planning

  • is one of the biggest unsolved problems in autonomous driving.

  • So between identifying what's around them,

  • interpreting what other drivers will do, and figuring out how to respond,

  • there are a lot of scenarios self-driving cars aren't totally prepared for yet.

  • That doesn't mean driverless cars won't hit some roads soon.

  • There are plenty of more straightforward situations

  • where you just don't encounter these types of problems.

  • But as for self-driving cars that can go anywhere,

  • let's just say the engineers won't be out of a job any time soon.

  • I love the layers of thinking involved in this kind of problem solving.

  • And while I'm not an engineer designing self-driving cars,

  • I still get to practice this kind of thinking on Brilliant.org.

  • Right now, I'm working through the Convolutional Neural Networks lesson

  • to help me learn how to work with neural networks.

  • I've already gone through the overview,

  • and this “Applications and Performance” quiz has a car on it,

  • so that's what I'm going to try my hand at next.

  • The quiz already explained how this network works.

  • And then it's asking how we should modify it to suit this ImageNet challenge,

  • to help it categorize objects better.

  • I think the answer is C: to add a fully connected network at the end

  • to help predict probabilities for what the object is,

  • based on the high level filter activations.

  • And I got it right!
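
For anyone curious what that answer looks like in code, here is a minimal sketch (PyTorch, with made-up layer sizes, not the actual quiz network) of convolutional layers feeding a fully connected layer that turns high-level filter activations into class probabilities:

```python
import torch
import torch.nn as nn

# Minimal sketch of the idea in the quiz answer: convolutional layers extract
# filter activations, then a fully connected layer maps them to class
# probabilities. Layer sizes are made up; this is not the quiz's network.
class TinyClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # fully connected head

    def forward(self, x):                      # x: (batch, 3, 32, 32)
        x = self.features(x)                   # -> (batch, 32, 8, 8)
        x = x.flatten(1)                       # high-level filter activations
        logits = self.classifier(x)
        return torch.softmax(logits, dim=1)    # probability per object class

probs = TinyClassifier()(torch.randn(1, 3, 32, 32))
print(probs.shape, float(probs.sum()))         # torch.Size([1, 10]), sums to ~1.0
```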

  • What's great about these quizzes is that they keep building on each other,

  • so even though I got that one right,

  • I'm still getting more information throughout

  • and each question gets a little bit more interesting.

  • And if you get one wrong, that's ok too!

  • Because the point isn't to beat the quiz, it's to keep learning,

  • just like these neural networks do!

  • So, if you want to test out YOUR neural network,

  • the first 200 viewers to sign up at brilliant.org/scishow will get 20% off

  • their annual premium subscription, and you'll help support SciShow - so thanks!
