Artificial Intelligence

  • The prospect of artificial intelligence excites and repels people in equal measure:

  • will it bring us a kind of paradise or a techno hell?

  • To get a clearer handle on what might happen and when, it's best to divide A.I. into three categories.

  • The first of these is "artificial narrow intelligence" or what people call "weak A.I.";

  • this kind of A.I. is already in place;

  • it's the kind of A.I. that uses big data and complex algorithms to arrange your Facebook timeline or beat you at chess;

  • narrow A.I. has an intelligence that's limited to one very specific arena; it may not be able to pass the Turing test,

  • but our lives, infrastructure, and financial markets are already very dependent on it.

  • The next step up the AI ladder is artificial general intelligence or strong AI;

  • this is an intelligence that can, at last, think as well as we can; we're probably about 30 years away from this.

  • The hurdles to creating strong AI are all about building machines that are going to be good at doing things which come very easily to humans,

  • but which machines have, traditionally, really stumbled with.

  • Oddly, it's so much easier to build a machine that can do advanced calculus

  • than it is to build one that can get milk from the fridge, recognize granny, or walk up the stairs.

  • Our brains are brilliant at so-called "everyday tasks" like decoding 3D images,

  • working out people's motivations, and spotting casual sarcasm. We're very far ahead of machines here.

  • Some scientists doubt we'll ever see strong AI,

  • but the majority of AI experts alive today seem to think that we'll be there in the coming decades;

  • if you're under 35, the great probability is that you will live to enter the strong AI age.

  • So, what will happen to the world once we've succeeded in creating an intelligence to rival or equal our own?

  • Well, the rivalry will be extremely short-lived, for one thing,

  • because the key point about strong AI is that it will be able to learn and upgrade itself on its own without instructions.

  • This is what makes it so revolutionary and so different to almost any machine we've ever built;

  • the maker won't be in charge of mapping out all the possibilities of the thing he or she has made.

  • The machine will be given a baseline capacity, but it can then build on this as it develops.

  • It will be a trial and error learner with an infinite capacity to acquire skills;

  • it'll have what AI professionals call "recursive self-improvement".

  • This is crucial because it means there'll be no reason for AI to stall once it reaches the human level.

  • The more intelligent the system becomes, the better it becomes at improving itself, so the more it will learn and do.

  • This virtuous cycle equates to an exponential growth in intelligence that would leave humanity amazed,

  • but also baffled, dwarfed, and perhaps very scared.
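
To make this "virtuous cycle" concrete, here is a minimal toy sketch (not from the video: the starting capability, the 5% gain per cycle, and the human baseline of 100 are all arbitrary illustrative assumptions) of why improvement that builds on prior improvement grows exponentially rather than linearly:

```python
# Toy model of recursive self-improvement (illustrative only).
# Each cycle, the system's capability grows in proportion to what it
# already has, so the curve is exponential rather than linear.

def cycles_to_surpass(start=1.0, human_level=100.0, gain_per_cycle=0.05):
    """Count self-improvement cycles until capability exceeds the human baseline."""
    capability, cycles = start, 0
    while capability <= human_level:
        capability *= 1 + gain_per_cycle  # each gain compounds on top of the last
        cycles += 1
    return cycles

if __name__ == "__main__":
    # With these made-up numbers the threshold is crossed after about 95 cycles;
    # the point is the shape of the curve, not the specific figures.
    print(cycles_to_surpass())
```

With a fixed gain per cycle instead of a proportional one, the same made-up threshold would take roughly twenty times as many cycles to cross; that contrast is what the "virtuous cycle" claim turns on.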

  • It might not take very long at all, only months perhaps, before the machine is cleverer than its creator.

  • This is the moment that gets very exciting.

  • It’s a moment often referred to as "The Singularity",

  • which is where we encounter the third sort of AI, "artificial superintelligence".

  • Technically, this is any AI that exceeds human levels of intelligence even slightly,

  • but any self-improving superintelligence is sure to improve a lot very fast indeed.

  • An AI that reaches this level would soon be leagues ahead of us,

  • and statements such as, "well, let's just switch it off"

  • might be like trying to take down the internet with a slingshot.

  • The prospect of such superintelligence appalls and excites people in equal measure.

  • We're approaching two alternative futures

  • with the speed and uncertainty of a skydiver who can't quite remember if he's wearing a parachute or a rucksack.

  • Some, including Bill Gates, Stephen Hawking, and Elon Musk, are so scared because

  • they believe that we're unlikely ever to be able to effectively control any superintelligence we create.

  • Artificial minds will just single-mindedly pursue their aims and these aims may not necessarily coincide with ours.

  • A machine wouldn't specifically want to kill us,

  • but its amorality would mean that it would be willing to cause our extinction if necessary.

  • These critics point out that intelligence is not value-loaded.

  • It's tempting to assume that anything intelligent will just naturally develop vaguely human values,

  • like, empathy and respect for life,

  • but this can't be guaranteed because ethical values are based on purely human axioms,

  • and given that we find it impossible to agree among ourselves what's right and wrong

  • in areas like euthanasia or abortion, say,

  • how could we possibly program a computer with a set of values that could soundly and reliably be deemed moral?

  • Now that's the pessimistic angle, but there is a more cheerful angle, of course.

  • According to the optimists, in a world of artificial superintelligence,

  • machines will still be our servants; we'll give them some basic rules, like never killing us or doing us any harm,

  • and then they'll set about solving all the things that have long bedeviled us.

  • The immediate priority of a superintelligence would be to help us create free energy,

  • in turn, dramatically reducing the prices for almost everything.

  • We would soon be in the era that Google's chief futurologist, Ray Kurzweil, describes as 'abundance':

  • the cost of almost everything would drop to nearly $0, the way the cost of data already has.

  • Work for money would, essentially, come to an end.

  • The real challenge would be not getting miserable with all this abundance,

  • after all, Palm Springs and Monte Carlo already point to some of the dangers of wealthy people with nothing much to do.

  • The solution here is to develop a side of A.I. that's been intriguingly dubbed A.E.I., or Artificial Emotional Intelligence.

  • This A.E.I. would help us with all the tricky tasks at the emotional, psychological, and philosophical end of things.

  • We'd be helped with understanding our psyches, mastering our emotions, drawing out our true talents

  • (discovering what we were best suited to do with our lives),

  • and finding the people with whom we might form good and satisfying relationships.

  • Most of the many psychological mistakes which lead us to waste our lives could be averted;

  • instead of fumbling through a mental fog of insecurities and inconsistencies,

  • we'd be guided to a more compassionate, happier, and wiser future.

  • Science fiction is sometimes dismissed in elite circles,

  • but we can see now that thinking twenty to fifty years ahead, and imagining how life will be, is a central task for all of us;

  • we should all be science-fiction writers, of a kind, in our minds.

  • We are poised just before a tipping point in human history.

  • We need to build up the wisdom to control which way we will tip,

  • and part of that means thinking very realistically about things that, today, still seem rather phantasmagorical.

  • Humans are toolmaking animals;

  • we're on the brink of creating tools like no others,

  • so the trick is going to be to stay close to the underlying ancient purpose of every tool,

  • which is to help us to do something we actually want to do more effectively.

  • If we keep our wits about us, there's no real reason our computers should, necessarily, run away from us;

  • they should just be much much better versions of our earliest flint axes.
