[MUSIC PLAYING]

JEN GENNAI: I'm an operations manager, so my role is to ensure that we're making our considerations around ethical AI deliberate, actionable, and scalable across the whole organization in Google.
So one of the first things to think about if you're a business leader or a developer is ensuring that people understand what you stand for. What does ethics mean to you? For us, that meant setting values-driven principles as a company. These values-driven principles, for us, are known as our AI principles, and we announced them last year, in June.
So these are seven guidelines around AI development and deployment, which set out how we want to develop AI. We want to ensure that we're not creating or reinforcing bias. We want to make sure that we're building technology that's accountable to people. And we have five others here that you can read; they're all available on our website.
But at the same time that we announced these aspirational principles for the company, we also identified four areas that we consider our red lines. So these are technologies that we will not pursue. These cover things like weapons technology: we will not build or deploy weapons. We will also not build or deploy technologies that we feel violate international human rights.
So if you're a business leader or a developer, we'd also encourage you to understand what your aspirational goals are. But at the same time, what are your guardrails? What point are you not going to cross? The most important thing to do is to know what your definition of ethical AI development is.
After you've set your AI principles, the next thing is, how do you make them real? How do you make sure that you're aligning with those principles? So here, there are three main things I'd suggest keeping in mind.

The first one is you need an accountable and authoritative body. So for us in Google, this means that we have senior executives across the whole company who have the authority to approve or decline a launch. So they have to wrestle with some of these very complex ethical questions to ensure that we are launching things that we do believe will lead to fair and ethical outcomes. So they provide the authority and the accountability to make some really tough decisions.
Secondly, you have to make sure that the decision-makers have the right information. This involves talking to diverse people within the company, but also listening to your external users and external stakeholders, and feeding that into your decision-making criteria. Jamila will talk more about engaging with external communities in a moment.
And then the third key part of building governance and accountability is having operations. Who's going to do the work? What are the structures and frameworks that are repeatable, that are transparent, and that are understood by the people who are making these decisions? So for that, in Google, we've established a central team that's not based in our engineering and product teams, to ensure that there's a level of objectivity here. So the same people who are building the products are not the only people who are looking to make sure that those products are fair and ethical.
So now you have your principles, through which you're trying to ensure that people understand what ethics means for you, and we've talked about establishing a governance structure to make sure that you're achieving those goals. The next thing to do is to encourage everyone within your company, and the people that you work with and for, to align on those goals.
So making sure, one, that you've set overall goals in alignment with ethical AI: how are you going to achieve ethical development and deployment of technology? Next, you want to make sure that you're training people to think about these issues from the start. You don't want to catch some ethical consideration late in the product development lifecycle. You want to make sure that you're starting that as early as possible, so getting people trained to think about these types of issues.
Then we have rewards. If you're holding people accountable to ethical development and deployment, you may have to accept that this might slow down some development in order to get to the right outcomes, so make sure people feel rewarded for thinking about ethical development and deployment. And then, finally, make sure that you're hiring people and developing people who are helping you achieve those goals.
Next, you've established your frameworks, you've hired the right people, you're rewarding them. How do you know you're achieving your goals?
So we think about this as validating and testing. So an example here is replicating a user's experience. Who are your users? How do you make sure that you're thinking about a representative sample of your users? So you think about trying to test different experiences, mostly from your core subgroups. But you also want to be thinking about, who are your marginalized users? Who might be underrepresented in your workforce? And therefore, you might have to pay additional attention to get it right.
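To make the subgroup testing idea concrete, here is a minimal sketch of slicing an evaluation set by user subgroup and comparing metrics across slices. It is an illustration only: the file name and the column names ("subgroup", "label", "prediction") are assumptions, not a reference to any specific Google tooling, and it presumes binary labels.

```python
# Minimal sketch: compare a model's evaluation metrics across user subgroups.
# Assumes a labeled evaluation set with hypothetical columns "subgroup",
# "label", and "prediction"; the file name is a placeholder.
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score

eval_df = pd.read_csv("eval_with_subgroups.csv")

for subgroup, rows in eval_df.groupby("subgroup"):
    acc = accuracy_score(rows["label"], rows["prediction"])
    rec = recall_score(rows["label"], rows["prediction"])
    print(f"{subgroup}: n={len(rows)}, accuracy={acc:.3f}, recall={rec:.3f}")
```

Large metric gaps between subgroups, or very small sample sizes for a marginalized subgroup, are the kind of signal described above: places where additional attention, and better-represented evaluation data, may be needed before trusting the aggregate number.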
We also think about, what are the failure modes? And what we mean by that is, if people have been negatively affected by a product in the past, we want to make sure they won't be negatively affected in the future. So how do we learn from that and make sure that we're testing deliberately for that in the future? And then the final bit of testing and validation is introducing some of those failures into the product to make sure that you're stress testing, and, again, have some objectivity to stress test a product to make sure it's achieving your fair and ethical goals.
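One lightweight way to realize this idea of reintroducing past failures is a regression-style check that replays previously reported failure cases against the current model. The sketch below is a hypothetical illustration: the classify() interface, the example inputs, and the expected labels are placeholders, not the actual process described in the talk.

```python
# Minimal sketch: replay previously reported failure cases against the model
# under test. The cases and the classify() interface are hypothetical.
KNOWN_FAILURE_CASES = [
    {"input": "phrase that previously triggered a harmful label", "expected": "non_toxic"},
    {"input": "sentence in a dialect that was previously misclassified", "expected": "non_toxic"},
]

def check_known_failures(classify):
    """Return the cases that still fail, so regressions are caught before launch."""
    regressions = []
    for case in KNOWN_FAILURE_CASES:
        predicted = classify(case["input"])
        if predicted != case["expected"]:
            regressions.append((case["input"], predicted, case["expected"]))
    return regressions
```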
And then we think about the fact that it's not just you; you're not alone. How do we ensure that we're all sharing information to make us more fair and ethical, and to make sure that the products we deliver are fair and ethical? So we encourage the sharing of best practices and guidelines. We do that ourselves in Google by providing our research and best practices on the Google AI site. So these best practices cover everything from the ML fairness tools and research that Margaret Mitchell will talk about in a moment, to best practices and guidelines that any developer or any business leader could follow themselves. So we try to both provide that ourselves and encourage other people to share their research and learnings too.
So with that, as we talk about sharing externally, it's also about bringing voices in. So I'll pass over to Jamila Smith-Loud to talk about understanding human impacts.
JAMILA SMITH-LOUD: Thank you.

[APPLAUSE]

Hi, everyone. I'm going to talk to you a little bit today about understanding, conceptualizing, and assessing human consequences and impacts on real people and communities, through the use of tools like social equity impact assessments.
Social and equity impact assessments come primarily from the social science discipline and give us a research-based method to assess these questions in a way that is broad enough to be able to apply across products, but also specific enough for us to think about what are tangible product changes and interventions that we can make.
So I'll start off with one of the questions that we often start with when thinking about these issues. I always like to say that when we're thinking about ethics, when we're thinking about fairness, and even thinking about questions of bias, these are really social problems. And one major entry point into understanding social problems is really thinking about, what's the geographic context in which users live, and how does that impact their engagement with the product?
So really asking, what experiences do people have that are based solely on where they live, and that may differ greatly for other people who live in different neighborhoods that are either more resourced or more connected to the internet, all of these different aspects that make regional differences so important?
Secondly, we like to ask what happens to people when they're engaging with our products, in their families and in their communities. We like to think about, what are economic changes that may come as a part of engagement with this new technology? What are social and cultural changes that really do impact how people view the technology and view their participation in the process?
And so I'll start by talking a little bit about our approach. The good thing about utilizing existing frameworks of social and equity impact assessments is where they come from: if you think about when we do new land development projects, or even environmental assessments, there's already a standard of considering social impacts as a part of that process. And so we really do think of employing new technologies in the same way. We should be asking similar questions about how communities are impacted, what their perceptions are, and how they are framing these engagements.
And so one of the things that we think about is kind of, what is a principled approach to asking these questions? And the first one really is around engaging in the hard questions.
When we're talking about fairness, when we're talking about ethics, we're not talking about them separately from issues of racism, social class, homophobia, and all forms of cultural prejudice. We're talking about what the issues are as they overlay in those systems. And so it really requires us to be OK with those hard questions, to engage with them, and to realize that our technologies and our products don't exist separately from that world.
The next approach is really about being anticipatory. I think what's different about social and equity impact assessments, compared to other social science research methods, is that the relationships between causal impacts and correlations are going to be a little bit different, and we really are trying to anticipate harms and consequences. And so it requires you to be OK with the fuzzy conversations, but also to realize that there's enough research, there's enough data, that gives us an understanding of how history and contexts impact outcomes. And so being anticipatory in your process is really, really an important part of it.
And lastly, in terms of thinking about the principled approach, it's really about centering the voices and experiences of those communities who often bear the burden of the negative impacts. And that requires understanding how those communities would even conceptualize these problems. I think sometimes we come from a technical standpoint, and we think about the communities as separate from the problem. But if we're ready to center those voices and engage throughout the whole process, I think it results in better outcomes.
So to go a little bit deeper into engaging in the hard questions: what we're really trying to do is be able to assess how a product will impact communities, particularly communities who have been historically and traditionally marginalized. So it requires us to really think