>> Announcer: Live from Austin, Texas
it's the Cube.
Covering South by Southwest 2017.
Brought to you by Intel.
Now here's John Furrier.
Okay we're back live here at the South by Southwest
Intel AI Lounge, this is The Cube's special coverage
of South by Southwest with Intel, #IntelAI
where amazing starts with Intel.
Our next guest is Dr. Dawn Nafus who's with Intel
and you are a senior research scientist.
Welcome to The Cube.
>> Thank you.
>> So you've got a panel coming up and you also
have a book, AI For Everything.
And looking at a democratization of AI
we had a quote yesterday that,
"AI is the bulldozer for data."
What bulldozers were in the real world,
AI will be that bulldozer for data,
surfacing new experiences. >> Right.
>> This is the subject of your book, kind of.
What's your take on this and what's your premise?
>> Right, well, the book actually takes
a step way back. It's actually called
Self Tracking, and the panel is AI For Everyone.
But the book is on self-tracking.
And it's really about actually
getting some meaning out of data
before we start talking about bulldozers.
>> So right now we've got this situation where
there's a lot of talk about AI going
to solve all of our problems in health,
and there's a lot that can get accomplished.
But the fact of the matter is
that people are still struggling with,
like, "What does my Fitbit actually mean, right?"
So there's a real big gap there.
And I think probably part of what the industry
has to do is not just build great new technologies,
which we've got to do, but also start to fill that gap
in data education, data literacy,
all that sort of stuff.
>> So we're kind of in this first generation of AI data,
you mentioned wearables, Fitbits.
>> Dawn: Yup.
>> So people are now getting used to this,
so it sounds like this integration into lifestyle
becomes kind of a dynamic.
>> Yeah. >> Why are people grappling
>> John: with this? What does your research say about that?
>> Well right now with wearables, frankly,
we're in the classic trough of disillusionment. (laughs)
You know, for those of you listening,
I don't know if you have wearables
sitting in drawers right now, right?
But a lot of people do.
And it turns out that folks tend to use them
for maybe about three or four weeks,
and either they've learned something
really interesting and helpful or they haven't.
And so there's actually a lot of people
who do really interesting stuff, combining it
with symptom tracking, location,
other sorts of things, to actually
reveal the sorts of triggers for
medical issues that you can't find in a clinical setting.
It's all about being out in the real world
and figuring out what's going on with you.
Right, so then when we start to think about
adding more complexity into that,
which is the thing that AI is good at,
we've got this problem that there are only so many data sets
that AI is actually any good at handling.
And so I think there's going to have to be a moment where
people themselves actually start to say,
"Okay you know what?
"This is how I define my problem.
"This is what I'm going to choose to keep track of."
And some of that's going to be on a sensor
and some of it isn't.
Right, and it's about intervening
a little bit more strongly in what
this stuff's actually doing.
>> You mentioned the Fitbit, and we're seeing
a lot of disruption in these areas, innovation
and disruption, same thing, good and bad potentially.
Autonomous vehicles are pretty clear,
everyone knows what Tesla's tracking with that hot trend.
But you mentioned Fitbit, that's a healthcare
kind of thing.
AI might seem to be a perfect fit for healthcare
because there's always alarms going off
and all this data flying around.
Is that a low hanging fruit for AI?
Healthcare?
>> Well I don't know if there's any such thing
as low hanging fruit (John laughs)
in this space. (laughs)
But certainly if you're talking about
actual human benefit, right?
That absolutely comes to the top of the list.
And we can see that both in formal healthcare,
in clinical settings, and in imaging for diagnosis.
Again I think there's areas to be cautious about, right?
You know, making sure that there's
an appropriate human check and that there are
mechanisms for transparency, right?
So that when there is a discrepancy
between what the doctor believes and what the machine says,
you can actually go back and figure out
what's actually going on.
The other thing I'm particularly excited about,
and this is why I'm so interested in democratization,
is that health is not just about,
you know, what goes on in clinical care.
There are right now environmental health groups
who are looking at a slew of air quality data
that they don't know what to do with, right?
And a certain amount of machine assistance
to figure out, you know, the signatures
of point-source polluters, for example,
is a really great use of AI.
It's not going to make anybody any money anytime soon,
but that's the kind of society that we want to live in, right?
>> You're on the social good angle for sure,
but I'd like to get your thoughts, 'cause you mentioned
democratization, and it's kind of a nuance
depending upon what you're looking at.
Democratization with news and media
is what we saw with social media, now you've got healthcare.
So how do you define democratization in your context,
and what are you excited about?
Is it more
freedom of information and data, is it
getting around gatekeepers and siloed stacks?
I mean, how do you look at democratization?
>> All of the above. (laughs) (John laughs)
I'd say there are two real elements to that.
The first is making sure that, you know,
people who are going to use this for more than just business
have the ability to actually do it
and have access to the right sorts of infrastructures,
whether it's the environmental health case
or the artists now who use
natural language processing to create artwork.
And people ask them, "Why are you using deep learning?"
And they said, "Well, there's a real access issue, frankly."
It's also, on the side of people who aren't
going to be directly using the data,
a kind of sense of, you know...
Democratization to me means being able
to ask questions about how this stuff's actually behaving.
So that means building in
mechanisms for transparency, building in mechanisms
to allow journalists to do the work that they do.
>> Sharing potentially?
>> I'm sorry? >> And sharing as well
more data? >> Very, very good.
Right, absolutely. I mean frankly we still have
a problem right now in the wearable space of
people even getting access to their own data.
There's a guy I work with named Hugo Campos
who has an implanted cardiac defibrillator, and
he's still fighting to get access
to the very data that's coming out of his heart.
Right? (laughs)
>> Is it on SSD, in the cloud?
I mean where is it? >> It is in the cloud.
It's going back to the manufacturer.
And there are very robust conversations about
where it should be.
>> That's super sad.
So this brings up the whole thing
we were talking about yesterday,
when we had a mini segment on The Cube, that
there are all these new societal use cases
that are just springing up that we've never seen before.
Self-driving cars with transportation,
healthcare access to data, all these things.
What are some of the things that you see emerging
there, tools or approaches that could help
either scientists or practitioners or citizens
deal with this new critical problem solving
that technology needs to be applied to?
I was talking just last week at Stanford with folks
that are looking at gender bias in algorithms.
>> Right, uh-huh it's real. >> Something I would never
have thought of that's an outlier.
Like hey, what? >> Oh no, it's happened.
>> But it's one of those things where, okay,
let's put that on the table.
There's all this new stuff coming on the table.
>> Yeah, yeah absolutely. >> What do you see?
>> So there are-- >> How do we solve that?
>> John: What approaches?
>> Yeah there are a couple of mechanisms
and I would encourage listeners and folks in the audience
to have a look at a really great report
that just came out from the Obama Administration
and NYU School of Law.
It's called AI Now, and they actually propose
a couple of pathways to making sure
we get this right.
So, you know, a couple of things.
One is, frankly, making sure that women
and people of color are in the room
when this stuff's getting built, right?
That helps.
And as I said earlier, you know,
things will go awry.
They just will, we can't predict how these things
are going to work, so catching it after the fact
and building in mechanisms to be able
to do that really matters.
So there was a great effort by ProPublica to look at
a system that was predicting criminal recidivism.
And what they did was they said, "Look, you know,
"it is true that the thing has the same failure rate
"for both blacks and whites."
But some hefty data journalism and data scraping
and all the rest of it actually revealed that
it was producing false positives for blacks
and false negatives for whites,
meaning that black people were predicted
to commit more crime than white people, right?
So you know, we can catch that, right?
And when we build in more systems of people who
have the skills to do it,
then we can build stuff that we can live with.
>> This is exactly your point about democratization,
I think, that fascinates me, that I get so excited about.
It's almost intoxicating when you think about it
technically and also societal that there's
all these new things that are emerging
and the community has to work together.
Because it's one of those things where
there's no board of governors out there.
I mean, who is the board of governors for this stuff?
It really has to be community driven.
>> Yeah, yeah.
>> And NYU's got one, any other examples of communities
that are out there that people can participate in?
>> Yup, absolutely.
So I think that, you know,
certainly collaborating on projects that
you actually care about, and asking
good questions about, is this appropriate
for AI or not, right,
is a great place to start for reaching out
to people who have those technical skills.
There's also
an engineering professional association that
actually just came out a couple of months ago
with a set of guidelines for developers,
the kinds of things you have to think about
if you're going to build an ethical AI system.
So they came out with some very high-level principles.
Operationalizing those principles
is going to be a real tough job
and we're all going to have to pitch in.
And I'm certainly involved in that.
But yeah, there are actually systems of governance
that are cohering, but it's early days.
>> It's a great way to get involved.
So I got to ask you the personal question.
In your efforts with the research and the book
and all of your travels, what are some of the most
amazing things that you've seen with AI
that people may or may not know about
but should know about?
>> Oh gosh.
I'm going to reserve judgment, I don't know yet.
I think we're too early on the curve
to be able to talk about, you know,
sort of the magic of it.
What I can say is that there is real power when
ordinary people who have no coding skills whatsoever
and frankly don't even know what the heck
machine learning is,
get their heads around data that is collected
about them personally.
That opens up... you can teach five-year-olds
statistical concepts that are learned in college
with a wearable, because the data applies to them.
So they know how it's
been collected. >> It's personal.
>> Yeah, they know what it is already.
You don't have to tell them what an outlier is,
because they know, because they wear that outlier.
You know what I mean?
>> They're immersed in the data.
>> Absolutely and I think that's where
the real social change
is going to come from. >> I love immersion as
a great way to teach kids.
But the data's key.
So I got to ask you, with the big pillars
of change going on, and at Mobile World Congress
I saw you, Intel in particular, talking about
autonomous vehicles heavily, smart cities,
media and entertainment, and the smart home.
I'm just trying to peg a comparable for
how big this shift will be.
I mean, the '60s revolution
when chips started coming out,
the PC revolution and server revolution,
and now we're kind of in this new wave.
How big is it?
I mean in order of magnitude, is it super huge,
bigger than all of the other shifts combined?
Are we going to see radical >> I don't know.
>> configuration changes? >> You know.
You know I'm an anthropologist, right?
(John laughs)
You know everything changes and nothing changes
at the same time, right?
We're still going to wake up, we're still going to
put on our shoes in the morning, right?
We're still going to have a lot of the same values
and social structures and all the rest of it
that we've always had, right.
So I don't think in terms of, plonk,
here's a bunch of technology now,
now that's a revolution.
It's like a dialogue.
And we are just at the very, very baby steps
of having that dialogue.
But when we do people in my field
call it domestication, right?
These become tame, they become part of our lives,
we shape them and they shape us.
And that's not radical change,
that's the change we always have.
>> That's evolution.
So I got to ask you a question, because
I have four kids and I have this conversation
with my wife and friends all the time,
because we have kids, digital natives, growing up.
And we also see a lot of workplace
domestication, people kind of getting
domesticated with the new technologies.
What's your advice whether it's parents to their kids,
kids to growing up in this world,
whether it's education?
How should people approach the technology
that's coming at them so heavily?
In the age of social media, where all our voices
are equal right now, more filters are coming out.
It's pretty intense.
>> Yeah, yeah.
I think it's an occasion where people have to think
a lot more deliberately than they ever have about
the sources of information that they want exposure to,
the kinds of interaction, the mechanisms
that actually do and don't matter.
And thinking very clearly about what's noise
and what's not is a fine thing to do. (laughs)
(John laughs)
So yeah, probably the filtering mechanisms
have to get a bit stronger.
I would say too there's a whole set of practices,
there are ways that you can scrutinize
new devices for, you know, where the data goes.
And often,
the higher-bar companies will give you
access back, right?
So if you can't get your data out again,
I would start asking questions.
>> All right, final two questions for you.
What's your experience been like so far
at South by Southwest? >> Yup.
>> And where is the world going to take you next
in terms of your research and your focus?
>> Well this is my second year at South by Southwest.
It's hugely fun. I am so pleased to see
just a rip-roaring crowd here at the
Intel facility, which is just amazing.
I think this is our first time as Intel proper.
I'm having a really good time.
The Self Tracking book is on the bookshelf
over in the convention center, if you're interested.
And what's next is we are going to get real
about how to make these ethical principles
actually work at an engineering level.
>> Computer science meets social science,
happening right now. >> Absolutely.
>> Intel powering amazing here at South by Southwest.
I'm John Furrier, you're watching The Cube.
We've got a great set of people here on The Cube.
Also great AI Lounge experience, great demos,
great technologists all about AI for social change
with Dr. Dawn Nafus with Intel.
We'll be right back with more coverage
after this short break.
(upbeat digital beats)