  • AI is being discussed a lot, but what does it mean to use AI responsibly?

  • Not sure?

  • That's great.

  • That's what I'm here for.

  • I'm Manny, and I'm a security engineer at Google.

  • I'm going to teach you how to understand why Google has put AI principles in place, identify the need for responsible AI practice within an organization, recognize that responsible AI affects all decisions made at all stages of a project, and recognize that organizations can design their AI tools to fit their own business needs and values.

  • Sounds good?

  • Let's get into it.

  • You might not realize it, but many of us already have daily interactions with artificial intelligence, or AI, from predictions for traffic and weather to recommendations for TV shows you might like to watch next.

  • As AI becomes more common, many technologies that aren't AI-enabled start to seem inadequate, like having a phone that can't access the internet.

  • Now AI systems are enabling computers to see, understand, and interact with the world in ways that were unimaginable just a decade ago, and these systems are developing at an extraordinary pace.

  • What we've got to remember, though, is that despite these remarkable advancements, AI is not infallible.

  • Developing responsible AI requires an understanding of the possible issues, limitations, or unintended consequences.

  • Technology is a reflection of what exists in society.

  • So without good practices, AI may replicate existing issues or bias and amplify them.

  • This is where things get tricky, because there isn't a universal definition of responsible AI, nor is there a simple checklist or formula that defines how responsible AI practices should be implemented.

  • Instead, organizations are developing their own AI principles that reflect their mission and values.

  • Luckily for us, though, while these principles are unique to every organization, if you look for common themes, you find a consistent set of ideas across transparency, fairness, accountability, and privacy.

  • Let's get into how we view things at Google.

  • Our approach to responsible AI is rooted in a commitment to strive towards AI that's built for everyone, that's accountable and safe, that respects privacy, and that is driven by scientific excellence.

  • We've developed our own AI principles, practices, governance processes, and tools that together embody our values and guide our approach to responsible AI.

  • We've incorporated responsibility by design into our products and, even more importantly, into our organization.

  • Like many companies, we use our AI principles as a framework to guide responsible decision-making.

  • We all have a role to play in how responsible AI is applied.

  • Whatever stage in the AI process you're involved with, from design to deployment or application, the decisions you make have an impact.

  • And that's why it's so important that you, too, have a defined and repeatable process for using AI responsibly.

  • There's a common misconception with artificial intelligence that machines play the central decision-making role.

  • In reality, it's people who design and build these machines and decide how they're used.

  • Let me explain.

  • People are involved in each aspect of AI development.

  • They collect or create the data that the model is trained on.

  • They control the deployment of the AI and how it's applied in a given context.

  • Essentially, human decisions are threaded through our technology products.

  • And every time a person makes a decision, they're actually making a choice based on their own values.

  • Whether it's a decision to use generative AI to solve a problem, as opposed to other methods, or anywhere throughout the machine learning lifecycle, that person introduces their own set of values.

  • This means that every decision point requires consideration and evaluation to ensure that choices have been made responsibly, from concept through deployment and maintenance.

  • Because there's the potential to impact many areas of society, not to mention people's daily lives, it's important to develop these technologies with ethics in mind.

  • Responsible AI doesn't mean focusing only on the obviously controversial use cases.

  • Without responsible AI practices, even seemingly innocuous AI use cases, or those with good intent, could still cause ethical issues or unintended outcomes, or not be as beneficial as they could be.

  • Ethics and responsibility are important, not just because they represent the right thing to do, but also because they can guide AI design to be more beneficial for people's lives.

  • So how does this relate to Google?

  • We've learned that building responsibility into any AI deployment makes better models and builds trust with our customers and our customers' customers.

  • If at any point that trust is broken, we run the risk of AI deployments being stalled, unsuccessful, or at worst, harmful to the stakeholders those products affect.

  • And tying it all together, this all fits into our belief at Google that responsible AI equals successful AI.

  • We make our product and business decisions around AI through a series of assessments and reviews.

  • These instill rigor and consistency in our approach across product areas and geographies.

  • These assessments and reviews begin with ensuring that any project aligns with our AI principles.

  • While AI principles help ground a group in shared commitments, not everyone will agree with every decision made about how products should be designed responsibly.

  • This is why it's important to develop robust processes that people can trust.

  • So even if they don't agree with the end decision, they trust the process that drove the decision.

  • So we've talked a lot about just how important guiding principles are for AI in theory, but what are they in practice?

  • Let's get into it.

  • In June 2018, we announced seven AI principles to guide our work.

  • These are concrete standards that actively govern our research and product development and affect our business decisions.

  • Here's an overview of each one.

  • One, AI should be socially beneficial.

  • Any project should take into account a broad range of social and economic factors and will proceed only where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.

  • Two, AI should avoid creating or reinforcing unfair bias.

  • We seek to avoid unjust effects on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

  • Three, AI should be built and tested for safety.

  • We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.

  • Four, AI should be accountable to people.

  • We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal.

  • Five, AI should incorporate privacy design principles.

  • We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

  • Six, AI should uphold high standards of scientific excellence.

  • We'll work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches.

  • And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

  • Seven, AI should be made available for uses that accord with these principles.

  • Many technologies have multiple uses, so we'll work to limit potentially harmful or abusive applications.

  • So those are the seven principles we have, but in addition to these seven principles, there are certain AI applications we will not pursue.

  • We will not design or deploy AI in these four application areas.

  • Technologies that cause or are likely to cause overall harm.

  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

  • Technologies that gather or use information for surveillance that violates internationally accepted norms.

  • And technologies whose purpose contravenes widely accepted principles of international law and human rights.

  • Establishing principles was a starting point rather than an end.

  • What remains true is that our AI principles rarely give us direct answers to our questions about how to build our products.

  • They don't and shouldn't allow us to sidestep hard conversations.

  • They are a foundation that establishes what we stand for, what we build, and why we build it.

  • And they're core to the success of our enterprise AI offerings.

  • Thanks for watching.

  • And if you want to learn more about AI, make sure to check out our other videos.
