Chris Anderson: Nick Bostrom. So, you have already given us so many crazy ideas out there. I think a couple of decades ago, you made the case that we might all be living in a simulation, or perhaps probably were. More recently, you've painted the most vivid examples of how artificial general intelligence could go horribly wrong. And now this year, you're about to publish a paper that presents something called the vulnerable world hypothesis. And our job this evening is to give the illustrated guide to that. So let's do that. What is that hypothesis?

Nick Bostrom: It's trying to think about a sort of structural feature of the current human condition. You like the urn metaphor, so I'm going to use that to explain it. So picture a big urn filled with balls representing ideas, methods, possible technologies. You can think of the history of human creativity as the process of reaching into this urn and pulling out one ball after another, and the net effect so far has been hugely beneficial, right? We've extracted a great many white balls, and some in various shades of gray: mixed blessings. We haven't so far pulled out the black ball -- a technology that invariably destroys the civilization that discovers it. So the paper tries to think about what such a black ball could be.

CA: So you define that ball as one that would inevitably bring about civilizational destruction.

NB: Unless we exit what I call the semi-anarchic default condition. But sort of, by default.

CA: So, you make the case compelling by showing some sort of counterexamples where you believe that so far we've actually got lucky, that we might have pulled out that death ball without even knowing it. So there's this quote, what's this quote?

NB: Well, I guess it's just meant to illustrate the difficulty of foreseeing what basic discoveries will lead to. We just don't have that capability. Because we have become quite good at pulling out balls, but we don't really have the ability to put the ball back into the urn, right. We can invent, but we can't un-invent. So our strategy, such as it is, is to hope that there is no black ball in the urn.

CA: So once it's out, it's out, and you can't put it back in, and you think we've been lucky. So talk through a couple of these examples. You talk about different types of vulnerability.

NB: So the easiest type to understand is a technology that just makes it very easy to cause massive amounts of destruction. Synthetic biology might be a fecund source of that kind of black ball, but many other possible things we could -- think of geoengineering, really great, right? We could combat global warming, but you don't want it to get too easy either, you don't want any random person and his grandmother to have the ability to radically alter the Earth's climate. Or maybe lethal autonomous drones, mass-produced, mosquito-sized killer bot swarms. Nanotechnology, artificial general intelligence.

CA: You argue in the paper that it's a matter of luck that, when we discovered that nuclear power could create a bomb, it didn't turn out that you could have created a bomb with much easier resources, accessible to anyone.

NB: Yeah, so think back to the 1930s, when for the first time we make some breakthroughs in nuclear physics. Some genius figures out that it's possible to create a nuclear chain reaction and then realizes that this could lead to the bomb. And we do some more work, and it turns out that what you require to make a nuclear bomb is highly enriched uranium or plutonium, which are very difficult materials to get. You need ultracentrifuges, you need reactors, like, massive amounts of energy. But suppose it had turned out instead that there had been an easy way to unlock the energy of the atom. That maybe by baking sand in the microwave oven or something like that you could have created a nuclear detonation. So we know that that's physically impossible. But before you did the relevant physics, how could you have known how it would turn out?

CA: Although, couldn't you argue that for life to evolve on Earth, that implied a sort of stable environment, that if it was possible to create massive nuclear reactions relatively easily, the Earth would never have been stable, that we wouldn't be here at all?

NB: Yeah, unless there were something that is easy to do on purpose but that wouldn't happen by random chance. So, like things we can easily do, we can stack 10 blocks on top of one another, but in nature, you're not going to find, like, a stack of 10 blocks.

CA: OK, so this is probably the one that many of us worry about most, and yes, synthetic biology is perhaps the quickest route that we can foresee in our near future to get us here.

NB: Yeah, and so think about what that would have meant if, say, anybody by working in their kitchen for an afternoon could destroy a city. It's hard to see how modern civilization as we know it could have survived that. Because in any population of a million people, there will always be some who would, for whatever reason, choose to use that destructive power. So if that apocalyptic residual would choose to destroy a city, or worse, then cities would get destroyed.

CA: So here's another type of vulnerability. Talk about this.

NB: Yeah, so in addition to these kinds of obvious black balls that would just make it possible to blow up a lot of things, other types would act by creating bad incentives for humans to do things that are harmful. So, the Type-2a, we might call it that, is to think about some technology that incentivizes great powers to use their massive amounts of force to create destruction. So, nuclear weapons were actually very close to this, right? What we did, we spent over 10 trillion dollars to build 70,000 nuclear warheads and put them on hair-trigger alert. And there were several times during the Cold War when we almost blew each other up. It's not because a lot of people felt this would be a great idea, let's all spend 10 trillion dollars to blow ourselves up, but the incentives were such that we were finding ourselves -- this could have been worse. Imagine if there had been a safe first strike. Then it might have been very tricky, in a crisis situation, to refrain from launching all their nuclear missiles. If nothing else, because you would fear that the other side might do it.

CA: Right, mutual assured destruction kept the Cold War relatively stable; without that, we might not be here now.

NB: It could have been more unstable than it was. And there could be other properties of technology. It could have been harder to have arms treaties, if instead of nuclear weapons there had been some smaller thing or something less distinctive.

CA: And as well as bad incentives for powerful actors, you also worry about bad incentives for all of us, in Type-2b here.

NB: Yeah, so, here we might take the case of global warming. There are a lot of little conveniences that cause each one of us to do things that individually have no significant effect, right? But if billions of people do it, cumulatively, it has a damaging effect. Now, global warming could have been a lot worse than it is. So we have the climate sensitivity parameter, right. It's a parameter that says how much warmer it gets if you emit a certain amount of greenhouse gases. But suppose that it had been the case that with the amount of greenhouse gases we emitted, instead of the temperature rising by, say, between 3 and 4.5 degrees by 2100, it had been 15 degrees or 20 degrees. Like, then we might have been in a very bad situation. Or suppose that renewable energy had just been a lot harder to do. Or that there had been more fossil fuels in the ground.

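[An editorial illustration, not something stated in the talk: the climate sensitivity parameter Nick refers to is often summarized by the textbook relation

\[
  \Delta T \;\approx\; S \,\log_2\!\left(\frac{C}{C_0}\right)
\]

where \Delta T is the equilibrium warming, C/C_0 is the ratio of atmospheric CO2 concentrations, and S is the warming per doubling of CO2, commonly estimated at roughly 3 degrees Celsius. His hypothetical amounts to imagining a world where S had turned out several times larger, so the same emissions would have produced 15 to 20 degrees of warming.]
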
CA: Couldn't you argue that if, in that case of -- if what we are doing today had resulted in a 10-degree difference in the time period that we could see, actually humanity would have got off its ass and done something about it? We're stupid, but we're not maybe that stupid. Or maybe we are.

NB: I wouldn't bet on it.

(Laughter)

You could imagine other features. So, right now, it's a little bit difficult to switch to renewables and stuff, right, but it can be done. But it might just have been that, with slightly different physics, it would have been much more expensive to do these things.

CA: And what's your view, Nick? Do you think, putting these possibilities together, that this Earth, the humanity that we are, counts as a vulnerable world? That there is a death ball in our future?

NB: It's hard to say. I mean, I think there might well be various black balls in the urn; that's what it looks like. There might also be some golden balls that would help us protect against black balls. And I don't know which order they will come out.

CA: I mean, one possible philosophical critique of this idea is that it implies a view that the future is essentially settled. That there either is that ball there or it's not. And in a way, that's not a view of the future that I want to believe. I want to believe that the future is undetermined, that our decisions today will determine what kind of balls we pull out of that urn.

NB: I mean, if we just keep inventing, like, eventually we will pull out all the balls. I mean, I think there's a kind of weak form of technological determinism that is quite plausible -- like, you're unlikely to encounter a society that uses flint axes and jet planes. But you can almost think of a technology as a set of affordances. So technology is the thing that enables us to do various things and achieve various effects in the world. How we'd then use that, of course, depends on human choice. But if we think about these three types of vulnerability, they make quite weak assumptions about how we would choose to use them. So a Type-1 vulnerability, again, this massive destructive power, it's a fairly weak assumption to think that in a population of millions of people there would be some that would choose to use it destructively.

CA: For me, the single most disturbing argument is that we actually might have some kind of view into the urn that makes it actually very likely that we're doomed. Namely, if you believe in accelerating power, that technology inherently accelerates, that we build the tools that make us more powerful, then at some point you get to a stage where a single individual can take us all down, and then it looks like we're screwed. Isn't that argument quite alarming?

NB: Ah, yeah.

(Laughter)

I think -- yeah, we get more and more power, and [it's] easier and easier to use those powers, but we can also invent technologies that kind of help us control how people use those powers.

CA: So let's talk about that, let's talk about the response. Suppose that thinking about all the possibilities that are out there now -- it's not just synbio, it's things like cyberwarfare, artificial intelligence, etc., etc. -- that there might be serious doom in our future. What are the possible responses? And you've talked about four possible responses as well.

NB: Restricting technological development doesn't seem promising, if we are talking about a general halt to technological progress. I think that's neither feasible, nor would it be desirable even if we could do it.