Oftentimes, innovations solve practical problems,
but the advancement of A.I.
might bring new tools
to chip away at the larger, even existential questions.
Are we alone in the universe?
Can we create lifelike, intelligent machines?
Maybe they're all moonshots, but imagine, one day,
having a second, synthetic version of you.
-How's it going, brother? -Oh, not bad.
Just spent the last hour mapping half the cosmos.
I'm looking for a constellation to name after us.
You mean "me," yeah?
Whatever. Semantics.
I'm doing all the work.
Touchy.
The starry night sky has been a source of fascination
and curiosity for centuries.
Is there something out there?
We've got all these suspect places to look for life
in our own solar system.
And we're just one little solar system
in a large galaxy,
which is one of many, many galaxies in the universe.
And so you realize pretty quickly
the chances of life elsewhere are pretty high.
[tuning radio]
[man] ...we hope we have a number of listeners out there.
Most of you are probably soft and squishy humanoids.
In case any artificial intelligence is listening, welcome as well.
[Bill Diamond] You'll appreciate this,
being a data scientist,
you know we're generating about 54 terabytes of data
every day, so...
See, that's music to my ears, right there.
[Diamond] That's music to your ears.
That's a nice playground for your algorithms.
[Downey] In remote northern California,
two scientists are on their way to collect data
in hopes of answering a cosmic question...
one that's as old as humankind itself,
or at least, Galileo.
[Graham Mackintosh] When it comes to the search
for extraterrestrial intelligence...
[Diamond] Right.
...there are decades of scientific discovery
and progress
which is relentlessly telling us
life is more likely than we thought.
Yeah, the body of evidence is becoming--
That's right.
...overwhelming, but can we find it?
[Mackintosh] When I was ten years old,
I was determined to have my own computer,
and I found out there was a kit that you could buy
and put together for yourself,
so I earned enough money to do that,
and that got me hooked.
I've been obsessed with computers ever since.
And I hope, I believe, that A.I. can help us dig deeper,
and hopefully come to the answer we're looking for.
Is there life beyond the Earth?
[Diamond] Ever since humans have been able to gaze up at the sky
and look at the stars,
we've wondered,
"Are we alone?
Is this the only place where life has occurred?"
The SETI Institute is trying to answer this question.
The SETI Institute was founded
by Frank Drake, and Jill Tarter,
and Carl Sagan.
I co-founded this institute back in 1984
as a way to save NASA money.
...see if we can backtrack
to see if we can figure out what's venting...
Since then,
it has grown far beyond any of my expectations.
We have nearly 80 PhD scientists here.
Our research really starts with, "How does life happen?"
What are the conditions under which life takes hold?
We're trying to understand that transition
of how the universe
and how our own galaxy and solar system
went from chemistry to biology.
The number of civilizations
that there might be in the galaxy
is of the order of a million.
[Downey] Carl Sagan helped bring the cosmos
down to Earth,
but he wasn't the first to popularize it.
Ever since Orson Welles scared our pants off
with War Of The Worlds,
pop culture has had its eyes on the skies.
Little green men, extraterrestrials,
contact with aliens continues to capture our imagination.
[Diamond] We're interested in all kinds of life,
but of course we have a special interest
in intelligent or technological life beyond Earth,
hence, SETI.
Hello, this is Seth Shostak speaking to you
from Big Picture Science.
Today we're going to talk about artificial intelligence.
The machines of today are a lot smarter, if you will,
at least more capable,
than the machines of 50 years ago, incredibly...
There's vast amounts of data coming from space,
and A.I. can, um...
allow us to understand that data better
than we have been able to in the past.
It's this new capacity we have to see patterns in data...
[Tarter] We are trying to find evidence
of somebody else's technology out there.
We can't define intelligence,
but we're using technology as a proxy,
so if we find some technology,
something that's engineered,
something that nature didn't do,
then we're going to infer
that at least at some point in time,
there were some intelligent technologists
who were responsible.
[Diamond] So, Graham, we call it Area 52.
[chuckling]
[Mackintosh] We are headed to the Allen Telescope Array,
and tonight we are going to be doing an observation
which really is looking for signs of extraterrestrial life,
and we're gonna be using A.I. models
in a way that's never been done before.
[Diamond] All right, we are good to go.
So I gotta turn my cell phone off, no Bluetooth, nothing?
Nope, we need to be in a place that is radio quiet,
so you don't have interference,
or at least, you minimize interference.
We're gonna come around another bend a little up ahead,
and you'll see the dishes.
[Mackintosh exclaiming] Oh!
[Diamond] There we are.
Welcome to the Allen Telescope Array.
[Downey] The sole mission of the Allen Telescope Array,
or A.T.A.,
is to search for extraterrestrial life.
Past telescopes were basically toy binoculars
compared to the A.T.A.,
which was built in 2007
with support from Microsoft co-founder Paul Allen.
Part of what makes it light-years ahead
is its wider field of view
and its ability to capture a greater range of frequencies.
It's also an array,
which basically means
it's a group of many small dishes
working together to cover more ground,
or sky.
Welcome to the A.T.A.
Fantastic.
Okay. Looks like Jon is out there.
I think he's manually turning those dishes to get 'em lined up.
[laughing]
-Hey, Jon. -Jon!
-Good to see you, man! -Good to see you, yeah!
My name is Jon Richards,
and I'm the Senior Software Engineer
at the Allen Telescope Array.
Radio astronomy is similar to optical astronomy,
except the radio wave frequencies
are much lower than those of visible light,
so to receive radio waves, you need an antenna.
Take a look, Graham. Under the bell jar,
you see the actual antenna
that's picking up the signals coming from space.
This is spectacular.
[Diamond] It's kept
below the temperature of liquid nitrogen.
That brings the noise level down,
exactly what we want for deep space observation.
Just amazing.
[Richards] The radio signals from each one of these dishes
are brought into our control room,
digitized, made into binary ones and zeroes,
and combined together
to create the effect of having one large dish,
so we can actually map out the sky
much like you would
with a regular optical telescope.
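What "combined together" means in practice is roughly delay-and-sum beamforming. Below is a minimal toy sketch of that idea, assuming already-digitized voltage streams and known integer sample delays; the A.T.A.'s real correlator and beamformer are far more sophisticated.

```python
import numpy as np

def combine_dishes(streams, delays_samples):
    """Naive delay-and-sum: align each dish's digitized stream by an
    integer sample delay, then sum, so the array behaves like one
    larger, more sensitive dish (a toy stand-in for real beamforming)."""
    max_delay = max(delays_samples)
    length = min(len(s) for s in streams) - max_delay
    combined = np.zeros(length)
    for stream, d in zip(streams, delays_samples):
        combined += stream[d:d + length]
    return combined / len(streams)

# Toy example: three dishes see the same weak tone plus independent noise.
rng = np.random.default_rng(0)
t = np.arange(4096)
tone = 0.1 * np.sin(2 * np.pi * 0.05 * t)
delays = [0, 3, 7]                      # hypothetical geometric delays, in samples
streams = [np.concatenate([np.zeros(d), tone]) + rng.normal(0, 1, len(t) + d)
           for d in delays]
beam = combine_dishes(streams, delays)  # noise averages down; the tone adds coherently
```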
All right, let's head back.
Let's go.
The observation we're gonna do tonight
is of the Trappist-1 system.
This is a star that has planets circling,
and at 8:00 tonight,
two of those planets are gonna align perfectly with Earth,
which makes it exactly the right moment
to do an observation.
We're gonna be listening in
for signs of any kind of communication
between these two planets,
even if that's not communication directed at us.
[Diamond] We're counting down to 8:01 p.m.,
which is when the orientation of these planets
is going to be lined up in our line of sight,
the so-called conjunction.
[Downey] It's a little like an interstellar stakeout.
The guys are waiting
till the two planets are closest together,
and then plan to eavesdrop on their conversation.
They have no idea what they're listening for,
or if there's even gonna be a conversation.
[Richards] So we can take out this board here.
We're gonna repurpose it.
-So that's ready to go? -Yeah, let's go put it in.
All right, let's get it in.
[Richards] Since the site's getting close to 20 years old now,
my job is to get all this data coming in cleanly
and recorded cleanly,
and that is a challenge.
Here's the computer which is sending all the data
that we receive from all of our dishes
to our 48 terabytes of data storage,
so we need to replace a card.
This card will control our data storage.
[Mackintosh] You know, often
when people think of the search for extraterrestrial life,
they're thinking of someone with headphones
listening in on something that is sent to us,
something that's obvious.
It's really not like that.
It's a lot more subtle,
and that's why we're going to be collecting
enormous amounts of data.
Take all of the different parameters we might have to explore,
that volume, that exploration volume,
and set it equal to the volume of all the oceans on the Earth.
So how much have we done, in 50 years?
Well, we've searched one glass of water
from the Earth's oceans.
The technologies that we've had to use until now
were not big enough, not adequate to the job.
Okay.
[Mackintosh] That's why we need computer systems
and artificial intelligence systems
to really turn that search on its head.
[Parr] When we think about traditional software,
we think about human beings writing lines of code.
What's extraordinary about A.I.
is that we're teaching machines how to learn.
This is why it's a quantum leap,
because for the first time,
instead of human beings writing the software,
the computer's actually building an understanding itself.
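The contrast Parr draws can be made concrete with a toy example: a rule a human hard-codes versus a cutoff the program derives from labeled examples. The data and the 5.0 cutoff below are invented purely for illustration.

```python
import numpy as np

# Traditional software: a human writes the rule explicitly.
def handwritten_rule(signal_power):
    return signal_power > 5.0          # the "5.0" came from a programmer's head

# Machine learning: the program derives the rule from labeled examples.
def learn_threshold(powers, labels):
    """Pick the cutoff that best separates the labeled examples,
    instead of a human hard-coding it."""
    candidates = np.sort(powers)
    accuracies = [np.mean((powers > c) == labels) for c in candidates]
    return candidates[int(np.argmax(accuracies))]

powers = np.array([0.5, 1.2, 2.0, 6.5, 7.1, 8.3])
labels = np.array([0, 0, 0, 1, 1, 1], dtype=bool)
learned_cutoff = learn_threshold(powers, labels)   # -> 2.0; anything above is flagged
```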
[Richards] We have to keep in mind
that the Trappist-1 system is 39.4 light-years...
39.6.
39.6 light-years away,
so this actual positioning was 39.6 years ago.
So not only are we, uh,
are we doing SETI research tonight,
we're time-traveling.
[Downey] That's right.
Because of how far away these planets are,
and how long it takes radio waves
to travel through space,
the guys are listening to a conversation
from about 40 years ago.
Here's some perspective.
It takes about eight minutes
for radio waves to get from here to the sun.
So, these planets?
Yeah, a little farther away.
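The narration's figures check out with a quick back-of-the-envelope calculation; the constants below are standard values, and the 39.6 light-year distance is the one quoted above.

```python
# Back-of-the-envelope light travel times mentioned in the narration.
AU_KM = 149_597_870.7          # Earth-Sun distance (1 astronomical unit), km
LY_KM = 9.4607e12              # one light-year, km
C_KM_S = 299_792.458           # speed of light, km/s

sun_minutes = AU_KM / C_KM_S / 60                               # ~8.3 minutes
trappist_years = 39.6 * LY_KM / C_KM_S / (3600 * 24 * 365.25)   # ~39.6 years, by definition
print(f"Sun: {sun_minutes:.1f} min, Trappist-1: {trappist_years:.1f} yr")
```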
[Diamond] Over your shoulder, Graham,
there's a NASA illustration of the Trappist system,
and there's at least three rocky, Earth-like planets
where liquid water can potentially be maintained--
Right.
...and that gives rise to the possibility
that biology could have formed in this system.
What's really interesting
about this particular planetary system,
these planets are very close together,
much closer than, for example, Earth to Mars.
That means there could be communication happening
between these planets,
and what we can potentially do is listen in.
Not that we can have a conversation
or understand what they're, uh...
-[Mackintosh] We don't need to. -We don't need to.
[Mackintosh] I love this kind of observation
because it has as its basic principle
something that's really important.
It's not all about us.
No one's sending us a signal,
no one's trying to get our attention.
The whole point
about the search for extraterrestrial intelligence
is you don't even--
We don't know what we're looking for.
Right, right.
Instead of looking for something specific,
you have to look for the exceptions
from what is normal.
That is where I think
A.I. is gonna just completely change the game for SETI.
[Mackintosh] Maybe it's communication,
maybe it's just a byproduct
of some technologically advanced civilization
going about its business.
All we care about
is it doesn't look like the rest of nature.
If it's a needle in a haystack,
it doesn't look like hay.
It's like this: each one of these little blips
is like a point in time of radio power,
and we take different points in time,
different windows into the data,
and we analyze them together
to see if there's any kind of repetition,
anything at all
that might indicate that something isn't random,
like this, right in the middle here,
where the random dots aren't random.
In a computer,
think of it like a thousand of these sheets,
and it's moving them a million times a second.
[Downey] To find order in the randomness,
the A.I. picks a small area
and studies its radio frequency data
to learn what normal sounds like.
Then, it uses this info to filter out background signals
from all the data that's been collected.
What's left is any signal, pattern, or repetition
that is unnatural.
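A minimal sketch of that idea, learn the background from quiet windows and then flag whatever deviates, is below. The window sizes, threshold, and injected "blip" are invented for illustration; the actual SETI models are far richer.

```python
import numpy as np

def fit_background(quiet_windows):
    """Learn what 'normal' looks like: per-channel mean and spread of
    radio power across windows known to contain only background."""
    stack = np.stack(quiet_windows)
    return stack.mean(axis=0), stack.std(axis=0) + 1e-9

def flag_anomalies(windows, mean, std, z_cut=5.0):
    """Flag any window whose power deviates far from the learned background,
    whatever shape the deviation takes: we don't say what we're looking for,
    only what 'hay' looks like."""
    flagged = []
    for i, w in enumerate(windows):
        z = np.abs((w - mean) / std)
        if z.max() > z_cut:
            flagged.append(i)
    return flagged

rng = np.random.default_rng(1)
background = [rng.normal(0, 1, 256) for _ in range(1000)]
mean, std = fit_background(background)

observation = [rng.normal(0, 1, 256) for _ in range(100)]
observation[42][100:110] += 8.0                 # a hypothetical narrow, repeated blip
print(flag_anomalies(observation, mean, std))   # expected: [42]
```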
They're coming up to perfect alignment.
Conjunction now!
[Richards] We're recording.
[Diamond] Wanna check the audio?
This is good.
This is good, nice clean data.
Crispy clean.
[Richards] Silence.
Yeah, that's what we want. I just, well--
I mean, we've been working up to this for the last month.
It looks like, it looks like nothing to us,
but that's the point.
[Diamond] That's the point.
That random sound is music to my ears.
This picture here is just immediate,
real-time results,
something that your normal Allen Telescope Array
would discard as nothing.
Our point is, not so fast.
There could well be more in there than we realize.
We do see some little blip right here...
That's true, in and around it.
Yeah. So here, let's press...
So, now, this all looks similar.
It's the sort of normal signal,
but that's interesting.
It just seems, I don't know--
It's like it spreads here for some reason.
Well, I don't know what that means.
It also is a higher average power.
It is.
So, yeah, it's... this is weird, right?
It is.
[Diamond] There are a couple of things
that we are looking at in the data
that look interesting.
Now, it's very subtle,
and this is why we'll need machine learning to tell us
whether what we're seeing is just an artifact,
or a real phenomenon.
All right, so we are done with the Trappist system.
[Mackintosh] This is great.
We've clearly grabbed good data.
It's exactly what we need.
[Downey] It's gonna take Graham a few days
to analyze the data,
nothing compared to what it used to take
to do manually.
[Pedro Domingos] Some people think
that the emergence of artificial intelligence
is the biggest event on the planet since life,
because it's going to be a change that is as big
as the emergence of life.
It will lead to different kinds of life
that are very different
from the entire set of, you know, DNA, carbon-based life
that we've had so far.
[Downey] While some are ramping up the search
in outer space,
others are using A.I.
to further explore inner life.
[Suzanne Gildert] In 20 to 30 years' time,
you might see a street like this,
with humans walking up and down it,
but there might also be something new:
human-like robots
might be walking up and down with us, too.
Humans and robots
are really gonna be doing the same kinds of things,
and some of the things they'll be doing
will be maybe superior to humans.
[Downey] Suzanne is one of the founders
of Sanctuary A.I.,
a tech startup that's building what they call "synths,"
or synthetic humans.
That's right.
Artificial intelligence wrapped in a body.
[Gildert] Our mission is to create machines
that are indistinguishable from humans
physically, cognitively, and emotionally.
[Downey] Doing so
involves solving problems of engineering,
computer science, neuroscience,
biology, even art and design.
But for her,
the problem of artificially replicating a person
boils down to a deeper question...
What does it mean to be human?
[Gildert] Understanding what it is to be human
is a question that we've been asking ourselves
for many thousands of years,
so I'd like to turn science and technology to that question
to try and figure out who we are.
[Downey] We love stories and films about clones
and replicants and humanoid robots.
Why are we so obsessed
with the idea of recreating ourselves?
Is it biological?
Existential?
[Gildert] To try and understand something fully,
you have to reverse-engineer it,
you have to put it back together.
[Downey] The human that Suzanne knows best
is... Suzanne,
so one of her projects
is to build a synthetic replica of herself.
[Gildert] There's this thing called the Turing test,
which is trying to have an A.I.
that you can't tell is not a human.
So I wanna try and create a physical Turing test,
where you can't tell whether or not
the system you're actually physically interacting with
is a person, or whether it's a robot.
So here we have 132 cameras...
which are all pointed at me,
and they all take a photograph simultaneously.
This data is used to create
a full three-dimensional body scan of me
that we can then use to create a robot version of me.
[Downey] Suzanne believes
that we experience life through the senses,
so she's putting as much work into making the body lifelike
as she is the mind.
[Gildert] We broke down this very ambitious project
into several different categories.
The first category is physical.
Can you build a robotic system that looks like a person?
So the synth has bones and muscles
that are roughly analogous to the human body,
but not quite as complex.
These hands are 3D-printed as an entire piece
on our printers.
[Gildert] We can actually print in carbon fiber
and Kevlar,
and we can create robot bones
that are stronger than aluminum machined parts,
with these beautiful organic biological shapes.
So I'm adding in a finger sensor.
This, uh, current generation
has a single sensor on the fingertip.
[Gildert] We build a machine that perceives like a human
by trying to copy the human sensorium very accurately.
The most complicated part of the perception system
is actually the sense of touch.
Are you monitoring the touch?
Yes. Touch received.
[Holly Marie Peck] We've actually embedded
capacitive touch sensors in the synth's hand,
essentially pressure sensors
allowing it to feel, uh, its environment,
and interact with and manipulate objects.
Let's just test the pressure.
-Okay. -This should max it out.
Yep, yep. Maxed out.
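A hypothetical sketch of the kind of check being run here: compare a fingertip pressure reading against a light-contact threshold and the sensor's full-scale ceiling. The thresholds and readings are invented; the real sensor interface isn't shown in the film.

```python
TOUCH_THRESHOLD = 0.05     # assumed light-contact level, normalized units
SENSOR_CEILING = 1.0       # assumed full-scale ("maxed out") reading

def classify_touch(raw_reading):
    if raw_reading >= SENSOR_CEILING:
        return "maxed out"
    if raw_reading >= TOUCH_THRESHOLD:
        return "touch received"
    return "no contact"

for reading in (0.0, 0.2, 5.0):        # simulated raw readings
    print(classify_touch(reading))     # no contact, touch received, maxed out
```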
Just stretch out her hand. Okay, go.
[Gildert] The reason the hand and the arm
are able to move so fluidly
is the pneumatic actuators.
They work using compressed air.
You actuate one of these devices,
and it kind of contracts
and pulls on a tendon,
so the actuation mechanism is very similar to a human muscle.
It's just not yet quite as efficient.
[Shannon] I'm adding the camera into the eyeball.
Now I'm adding the cosmetic front of the eye.
[Gildert] The eyes are super important to get right.
Similar to our own vision system,
they can see a similar color spectrum,
and because there are two cameras,
they have depth perception too.
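The reason two cameras give depth is stereo disparity: the same feature lands at slightly different horizontal positions in the two images, and similar triangles turn that shift into distance. A minimal sketch, with an assumed focal length and eye spacing rather than Sanctuary's actual specs:

```python
# Minimal pinhole-stereo sketch: depth = f * B / disparity.
# The focal length and eye spacing below are assumptions, not real specs.
FOCAL_LENGTH_PX = 800.0     # assumed camera focal length, in pixels
BASELINE_M = 0.06           # assumed spacing between the two eye cameras, metres

def depth_from_disparity(disparity_px):
    if disparity_px <= 0:
        return float("inf")            # no measurable shift -> effectively at infinity
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

print(depth_from_disparity(48.0))      # a feature ~1 m away
```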
[Peck] Restarting facial detection.
[Gildert] That actually looks pretty good.
[Peck] Mm-hmm. Do you wanna come forward a little bit?
Yeah.
-I'm gonna restart her headboard. -[Gildert] Okay.
That information
is fed through a series of different A.I. algorithms.
One algorithm is a facial detection system.
She's definitely seeing me.
[Peck] Yes, she is.
I can tell she's looking at me,
'cause she looked straight at me.
Yeah, gaze tracking is working.
Okay, cool. Now, do you wanna just smile?
I'll see if she's actually capturing your emotion.
[Gildert] If you're smiling,
the corners of your mouth come up,
your eyes open a little bit,
and the A.I. system can actually detect
how those landmarks have moved relative to one another.
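A minimal sketch of that landmark logic is below. The landmark names, coordinates, and thresholds are invented for illustration; a production system learns these patterns from many labeled faces rather than from hand-coded rules.

```python
# Toy smile check: a smile raises the mouth corners relative to the mouth
# center and opens the eyes a little.
def looks_like_smile(landmarks):
    # Image y grows downward, so raised mouth corners have *smaller* y values.
    corner_lift = (landmarks["mouth_center_y"]
                   - (landmarks["mouth_left_y"] + landmarks["mouth_right_y"]) / 2)
    return corner_lift > 0.02 and landmarks["eye_open_ratio"] > 0.25

neutral = {"mouth_center_y": 0.60, "mouth_left_y": 0.60,
           "mouth_right_y": 0.60, "eye_open_ratio": 0.24}
smiling = {"mouth_center_y": 0.60, "mouth_left_y": 0.55,
           "mouth_right_y": 0.55, "eye_open_ratio": 0.30}
print(looks_like_smile(neutral), looks_like_smile(smiling))   # False True
```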
[Rana el Kaliouby] I think the moment in time
we're at right now
is very exciting
because there's this field that's concerned
with building human-like generalized intelligence,
and sometimes even kind of surpassing human intelligence.
[Daphne Koller] There's people out there
who believe that this is on our immediate horizon.
I don't.
I think we're a long ways away
from machines that are truly conscious
and think on their own.
She's responding. I can see her face changing.
[synth] You look happy.
-Good. -Mm-hmm.
I'm gonna look sad.
You look sad.
Okay, good.
[Peck] We have actually configured
a lot of A.I. algorithms on the back end
that give the robot
the capabilities of recognizing people,
detecting emotion,
recognizing gestures and poses that people are making.
It then responds in various ways
to its environment.
[Gildert] Bring up her node graph
so you can see what's running in her brain.
Yeah, let's see all the online modules.
The chatbot, emotion detection,
object detection...
Wonderful. Gaze tracking...
[Gildert] The body, in a way, is the easy part.
Creating the mind is a lot harder.
[Downey] Creating the mind is more than hard.
It's basically impossible,
at least for now,
and maybe forever,
because a mind is not just knowledge,
or skill, or even language,
all of which a machine can learn.
The part that makes us really human is consciousness:
an awareness, a sense of being,
of who we are
and how we fit in time and space around us.
A human mind has that...
and memory.
"I remember the experience of buying a new pencil case
and the supplies to go in it,
getting all those new little things
that smelled nice,
and were all clean and colorful."
If you think about how people work,
it's very unusual for you to meet a person
that doesn't have a backstory.
I can use all the data that I have about myself
to try and craft something that has my memories,
it has my same mannerisms,
and it thinks and feels the way I do.
I would like them to become their own beings,
and to me,
creating the copy is a way of pushing the A.I. further
towards making it a realistic human
by having it be a copy of a specific human.
I remember going to Bolton Town Center
quite often.
We just called it "Town."
[Gildert] The basic idea
is you send in a large amount of text data,
and the system learns correlations between words,
and the idea
is that the synth could use one of these models
to kind of blend together an idea of a memory
that may have happened or may not have happened,
so it's a little bit of an artistic way
of recreating memories.
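The simplest possible version of "learning correlations between words" is a bigram (Markov) model. The toy below, trained on a few diary-style sentences echoing the memories above, shows the flavor of that blending; the models Gildert describes are far larger.

```python
import random
from collections import defaultdict

def learn_bigrams(text):
    """Learn which word tends to follow which: the simplest possible
    'correlations between words'."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def blend_memory(table, seed_word, length=12):
    """Wander through the learned correlations to produce a memory-like
    blend that may or may not match any real event."""
    word, out = seed_word, [seed_word]
    for _ in range(length):
        if word not in table:
            break
        word = random.choice(table[word])
        out.append(word)
    return " ".join(out)

diary = ("I remember the smell of a new pencil case and "
         "I remember going to town and the smell of the shop")
print(blend_memory(learn_bigrams(diary), "I"))
```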
I remember going into WH Smith.
It had a very distinct smell that I can still recall.
[Gildert] So by giving them these backstories now,
we believe that we will be able to learn in the future
how they can create their own memories
from their experiences.
[Bran Ferren] I love the idea
that there are passionate people
who are dedicating their time and energy
to making these things happen.
Why?
Because if and when it does happen,
it's going to be because of those passionate people.
We talk about the computer revolution
like it's done.
It's barely begun.
We don't understand
where the impact of these technologies will be
over the next five, ten,
20, 30, 50, 100 years.
If you think it's exciting and confusing now,
fasten your seatbelts,
because it hasn't begun.
What is your name?
My name is Holly.
What is your name?
Hmm.
[Gildert] Of course there's that unknown,
like are we gonna run into a problem
with trying to recreate a mind
that no one's thought of yet?
My name is Nadine.
Interesting.
I am glad to see you.
[Downey] Even if we do one day figure out
how to create a virtual mind,
it's not just the science.
There's also the ethics.
What kind of rights will the robots have?
Can we imbue it with good values,
make sure it's unbiased?
What if it breaks the law or commits a crime?
Are we responsible for our synths?
[el Kaliouby] There are big ethical challenges
in the field of A.I.
I believe that as a community of A.I. innovators
and thought leaders,
we have to really be at the forefront
of enforcing and designing
these best practices and guidelines
around how we build and deploy ethical A.I.
I like to say that artificial intelligence
should not be about the artificial,
it should be about the humans.
You look angry.
Landmarks are registering.
[Ferren] I think it's perfectly reasonable
to have a set of rules that govern ethical behavior
when you are dealing with technologies
that can have direct impact into people's lives
and their families and the future.
[Gildert] The vision's very ambitious for this.
We'd like to think that that is a 10- to 20-year mission.
You might say we're somewhere like
five to ten percent of the way along.
Why is her arm doing that?
It's almost like it's not clearing the buffer.
Yeah... interesting.
Let's just restart you so your arm goes--
Oh, wait, it's going back down again.
Okay, that's good.
Okay.
How do you feel today, Nadine?
It feels good to be a synth.
Nice.
"It feels good to be a synth."
[Gildert] The synths are not mobile at the moment,
they can't move around,
they can't walk yet.
That's something we're going to be adding in
within the next couple of years.
The grand goal
is to make these into their own beings
with their own volition and their own rights.
There are these moments you can have
where you really feel something that's unusual.
It's surprising.
I was adjusting the synth's hair,
and then she suddenly, like, smiled,
and opened her mouth a little bit,
like, you know, like I'd just tickled her or something.
It was just, like, synchronous with what I was doing.
[Downey] In some ways,
Suzanne's vision is already coming alive.
She's making a connection, albeit small, with a machine.
Isn't that something?
[Domingos] I think A.I. is part of evolution.
The same evolution
that led from bacteria to animals,
and has led people to create technology,
has led them to create A.I.
In some ways, we're still in the very early infancy
of this new age.
[Downey] Will we ever create intelligent life
here on Earth...
or maybe we'll find it out there first?
So I'm on my way
to the SETI Institute headquarters
in Mountain View,
and I'm gonna show, uh, what the A.I. system found
in the data that we collected.
I'm excited. I'm a little nervous too.
[Tarter] We need to be able to follow up in real time...
[Diamond] Mm-hmm.
[Tarter] ...as closely as we can,
so that a signal that's there
is still gonna be there when we go back to look for it,
and we can then classify it.
Jill Tarter is really a legend
in this whole field of SETI research.
Also really a pioneer as a woman astronomer.
The character played by Jodie Foster in Contact
is based, at least in the first half of that movie,
on Jill Tarter.
[Tarter] People often talk
about finding a needle in a haystack
as being a difficult task,
but the SETI task is far harder.
If I got out of bed every morning
thinking, "This is the day we're gonna find the signal,"
I have pretty good odds
I'm gonna go to bed that night disappointed.
I don't get up in the morning thinking that.
What I do get up in the morning thinking
is that today, I'm going to figure out
how to do this search better,
do new things,
do things you could not do in the past.
Early on, the technology just wasn't there...
Mm-hmm.
...and now we're doing something
that we've never been able to do.
I'm excited.
-Hello? -Oh, hey!
-Look who's here! -How are ya?
-Good to see you! -Hi, Graham.
-Nice to see you. -Nice to see you.
Likewise. Good to see you too.
-Hey, Bill. -It's been a couple of whole days?
-I know! [laughs] -Thanks for coming down.
-My pleasure, I'm excited. -Yeah.
We're thinking maybe you've got some news.
Well, I wanna step you through it.
Here you can kinda see
the system is initially very active.
It's all lit up,
and very quickly,
it starts to get a handle on what the shape,
you know, what a signal from the Trappist-1 system should look like.
Over on the far right are its areas of interest...
What I'm showing here
is a time-compressed video of the A.I. system
looking at the signal we gathered.
...and if you focus in on that,
the A.I. system did indeed flag this one area,
at that point, saying,
-"Whoa, back up. Something just happened." -[Tarter] Ooh, wow.
"That's not right,"
and if you zoom in on the actual data,
sure enough, there's that spike,
so that is not from the Trappist system.
That was generated by the Allen Telescope Array,
but, you know, beyond that,
this is an area that the A.I. system is saying,
"This isn't quite what I would have expected."
This is a little more interesting
'cause there's more structure to it,
and we should take its hints,
and have a deeper analysis done of this part of the observation.
We didn't write any code.
We didn't tell it to... to look for spikes of power
or anything else.
We just said, "You know what, you figure out what's normal,
and you let us know
when something catches your attention,"
which is exactly what it's doing there.
It's encouraging,
because already with just this one observation,
we started to see some real progress
in what the A.I. system can do compared to our own eyes,
and that's just one observation.
What about the next, and the next,
and as it gets better
with each new round of data that we collect?
This is after two hours.
I wonder how good it's gonna get after a hundred hours.
Yeah.
If we just routinely keep feeding the data from the A.T.A.
into this model,
it's gonna get better and better and better.
We can just scale this out.
-Right. -Absolutely.
We just got smarter. Thank you, machine.
Yes, exactly.
[Tarter] I'm absolutely so excited.
I'm really blown away.
I can see the tools that are being built
give us a new way of looking for things
that we hadn't thought of,
and things that we don't have to define up front,
anomalies that the machines will find
simply because they've looked at so much data.
[Mackintosh] I do think we're going to find ET.
I do think we are gonna find signs of civilization
beyond Earth,
and I do think that it's going to be A.I. that finds it.
[Downey] Is there intelligent life out there?
Can we create human-like machines?
[Domingos] The odds are overwhelming
that we will eventually be able to build an artificial brain
that is at the level of the human brain.
The big question is how long will it take?
[Downey] Outer space,
inner life...
Age-old mysteries now seem more solvable.
[Chris Botham] If we wanna go to Mars,
if we wanna populate other planets,
these types of things require these advanced technologies.
[Downey] Moonshots, yeah,
but also other pressing problems,
like...
-[gasps of shock] -All five! Whoa!
[Downey] ...the mind and body.
[Tim Shaw] Are you working today?
[beeping]
It's wonderful.
[Downey] Adaptation...
[Jim Ewing] I'm thinking and doing
and getting instant response.
It makes it feel like it's part of me.
[Downey] Work...
Action!
[Downey] ...and creativity...
These types of technologies can help us do our tasks better.
Three, two, one.
[computer voice] Autonomous driving started.
[el Kaliouby] I believe if we do this right,
these A.I. systems can truly, truly complement
what we do as humans.
[Eric Warren] We use the A.I. tools
to predict not only what the future is,
but what it should be.
Yo, what's up? This is will.i.am.
[laughing]
[Mark Sagar] This is the new version of you.
The way it's looking so far is mind-blowing.
[firefighter] Stay close, I'll lead.
[Downey] Survival...
[firefighter] Over here, I see him! Three yards at 2:00!
[Martin Ford] I believe that artificial intelligence
is really going to be
the most important tool in our toolbox
for solving the big problems that we face.
[firefighter] I got him!
[crowd chanting]
[Downey] Conservation...
The fact that we can look across the world
and find where famine might happen
four months from now,
it's mind-blowing.
[Downey] All out of the realm of sci-fi and magic,
and now just science.
Still hard problems, but now possible,
with innovation,
computing power, will, and passion...
-[cheering] Yay! -Yes!
There it is.
[Downey] ...and yet, despite all that,
a vestige of the unknown endures.
Who are we?
What are we becoming?
Every major technological change
leads to a new kind of society,
with new moral principles,
and the same thing will happen with A.I.
[Downey] Technology's changing us, for sure.
The whole idea of what it means to be human
is getting rewired.
A.I. might be humanity's most valuable tool...
...but it's also just that.
A tool.
[clattering]
[Downey] What we choose to do with it...
that's up to you and me.
[Seth Shostak] If you could project yourself
into the next millennium,
a thousand years from now,
would we look back on this generation and say,
"Well, they were the last generation of Homo sapiens
that actually ran the planet"?
[James Parr] There's a lot of paranoia.
The media's done a really good job
of making people frightened,
but A.I. is just a portrait of reality,
a very close portrait, but it isn't reality.
It's just a bucket of probabilities.
Where I think human beings will always have the edge
is in understanding other humans.
It's going to take a long time
before we have an A.I.
that can understand all of the nuances
and various layers of the human experience
at a societal level.
[Shostak] James Parr, thanks so very much for being with us.
Great, thank you.