And it seems like all I'm hearing about lately is...
It's all about artificial intelligence.
I think it's time we talk about AI.
Okay, I love my Google Assistant.
And I even say goodnight to it.
Which is mildly embarrassing.
But I'm here for it because then it tells me the weather for tomorrow.
And it plays cricket sounds which I think are actually helping me sleep better.
And it's also getting to know me, right.
So it tells me when the train is going to leave in the morning, and it's even recommending songs I might like.
And for a long time companies have been passing this off as artificial intelligence.
Which I always thought of as a marketing term.
Like I don't think we think that Siri is intelligent.
But then Google showed us this.
That's the voice of Google Assistant making calls for you.
This made me think that maybe artificial intelligence is closer to replicating human intelligence than I thought.
So I called James Vincent.
And he's a reporter here at the Verge that covers all things AI.
I wanted to know where AI is right now and where it's going.
So AI as a field is a term, it's sort of an umbrella and it includes lots of different types of AI that kind of come into fashion over the years, and then they test them, they get to the limitations and they move on to the next one.
And the thing that's very much dominant in the field at the moment is machine learning.
Which is all about giving a system a lot of data.
And then it goes through that data.
And it learns the patterns within it.
And then a flavor of machine learning is what's called deep learning.
Deep learning is basically machine learning, but it uses a lot more data.
And it uses a lot of computing power.
Which we now have access to thanks to the internet and thanks to cheaper chips.
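The idea James describes, giving a system data and letting it find the patterns on its own, can be sketched in a few lines. This is a toy illustration under invented data (a noiseless line, y = 2x + 1), not anything Google or Facebook actually ships:

```python
# Toy "machine learning": the system is handed data and learns the
# pattern in it (here, y = 2x + 1) by repeatedly nudging two numbers
# to shrink its prediction error.
data = [(x, 2 * x + 1) for x in range(10)]  # the data it learns from

w, b = 0.0, 0.0          # the model starts out knowing nothing
lr = 0.01                # learning rate: how big each nudge is
for _ in range(2000):    # go through the data again and again
    for x, y in data:
        err = (w * x + b) - y   # how wrong the current guess is
        w -= lr * err * x       # nudge the weight toward less error
        b -= lr * err           # nudge the bias toward less error

print(round(w, 2), round(b, 2))  # ends up close to the true pattern: 2 and 1
```

Deep learning stacks many layers of nudged numbers like these, which is why it needs the extra data and computing power James mentions.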
Okay James, real talk though.
Have you seen Terminator?
I mean how close are we to the Skynets of the world?
Like is that coming?
It's not really.
I mean, so what you get in films and popular culture.
That is what's usually called artificial general intelligence.
Which is a huge step forward from what we have at the present moment.
So AI as a field was kind of founded on this belief that you could build computers that were just like humans, essentially.
That just kind of thought like humans, acted, reacted like humans.
And as it's gone on we've kind of realized that oh wait no no this is a really difficult task,
And we're not near it basically.
So while Google Duplex sounds very human.
It's actually what we call narrow AI.
Now at the moment everything that's being sold to you as AI is narrow AI.
And it's built with a limited, predetermined set of functions.
It pops up on your phone.
Your Google Home or your Echo.
And it's how Facebook recognizes your face.
And automatically tags you in photos.
This form of AI is designed to complete very specific tasks, and it's incapable of doing anything else.
Now that doesn't mean it can't do impressive things.
Take for example DeepMind's AlphaGo.
Which is an AI program trained to play the game Go.
Now Go is like a strategy game, sort of like chess, except it has way more possible outcomes.
And in 2016, the AI system battled against legendary Go player Lee Sedol defeating him four to one.
And in 2017, DeepMind retired the AlphaGo AI after it defeated the world's best Go player, three to zero.
But I'd like to point out that this program AlphaGo would continue playing Go even if the building it was in was on fire.
Even if the room was on fire.
That's Dr. Oren Etzioni.
CEO of the Allen Institute for Artificial Intelligence and professor at the University of Washington in Seattle.
It's an excellent symbol because it shows that in these very narrow, very well structured tasks.
Like a board game.
We can achieve superhuman performance.
But on things that are more nuanced.
Things that have to do with language.
We are actually very far from even the abilities of a child.
So common sense could be thought of as the missing link between AI and AGI.
And when referring to common sense.
We're referring to the whole range of capabilities that humans have and computers just don't.
Dr. Etzioni is one of many researchers who are working on programs to teach computers common sense.
But within the community there's a large debate over whether these smarter computers could be dangerous.
Today narrow AI is being used to solve very real, very serious problems like helping doctors diagnose cancer, or predicting future weather disasters.
But it's also being used for things that people find worrying and that's understandable.
I mean these are things like facial recognition for mass surveillance or augmenting weapons.
Which raises the question.
How if at all is AI going to be regulated?
This world evolves so rapidly that it's very hard to put these policies in place.
What I would suggest instead is that we identify applications of AI.
For example, AI cars or AI weapons.
And that we define very careful regulations around those specific applications.
So yeah I was creeped out by Google Duplex.
But it's nowhere near the AI we have in the movies.
This is very specific pointed AI.
And when that's applied to security or weapons.
It can be scary but all we can do is educate ourselves.
And be in the know about what AI we have right now.
And maybe where it's going.
These are still early days, and we should assess the exuberance people have and some of the fears that they have with a note of caution and skepticism.
If you want to know what AI can really do and what it can't, just have a chat with Siri and Alexa.
And you'll quickly see where reality stops and the hype begins.
Hey thanks for watching.
Let us know in the comments below your favorite uses of AI so far.