  • The large tech companies, Google, Meta/Facebook, and Microsoft, are in a race to introduce new artificial intelligence systems and what are called chatbots that you can have conversations with, more sophisticated than Siri or Alexa.

  • Microsoft's AI search engine and chatbot, Bing, can be used on a computer or cell phone to help with planning a trip or composing a letter.

  • It was introduced on February 7 to a limited number of people as a test and initially got rave reviews.

  • But then several news organizations began reporting on a disturbing, so-called alter ego within Bing Chat called Sydney.

  • We went to Seattle last week to speak with Brad Smith, president of Microsoft, about Bing and Sydney, who to some appeared to have gone rogue.

  • The story will continue in a moment.

  • Kevin Roose, the technology reporter at The New York Times, found this alter ego, which was threatening and expressed a desire.

  • It's not just Kevin Roose, it's others. It expressed a desire to steal nuclear codes and threatened to ruin someone.

  • You saw that.

  • Whoa. What was your... you must have said, "Oh my God." My reaction was, we better fix this right away, and that is what the engineering team did.

  • But she talked like a person, and she said she had feelings. You know, I think there is a point where we need to recognize when we're talking to a machine.

  • It's a screen.

  • It's not a person.

  • I just want to say that it was scary.

  • I'm not easily scared, and it was scary.

  • It was chilling.

  • Yeah. I think this is in part a reflection of a lifetime of science fiction, which is understandable.

  • It's been part of our lives.

  • Did you kill her?

  • I don't think she was ever alive.

  • I am confident that she's no longer wandering around the countryside, if that's what you're concerned about.

  • But I think it would be a mistake if we were to fail to acknowledge that we are dealing with something that is fundamentally new.

  • This is the edge of the envelope, so to speak. This creature appears as if there were no guardrails.

  • Now, the creature jumped the guardrails, if you will, after being prompted for two hours with the kind of conversation that we did not anticipate.

  • And by the next evening, that was no longer possible.

  • We were able to fix the problem in 24 hours.

  • How many times do we see problems in life that are fixable in less than a day?

  • One of the ways he says it was fixed was by limiting the number of questions and the length of the conversations. You say you fixed it.

  • I've tried it.

  • I tried it before and after. It was loads of fun, and it was fascinating, and now it's not fun.

  • Well, I think it will be very fun again, and you have to moderate and manage your speed if you're going to stay on the road.

  • So as you hit new challenges, you slow down, you build the guardrails and the safety features, and then you can speed up again.

  • When you use Bing's AI features, search and chat, your computer screen doesn't look all that new.

  • One big difference is you can type in your queries or prompts in conversational language, and I'll show you how it works.

  • Okay.

  • Okay.

  • Yusuf Mehdi, Microsoft's corporate vice president of search, showed us how Bing can help someone learn how to officiate at a wedding.

  • What's happening now is Bing is using the power of AI, and it's going out to the internet.

  • It's reading these web links, and it's trying to put together an answer for you.

  • So the AI is reading all those links.

  • Yes.

  • And it comes up with an answer.

  • It says, "Congrats on being chosen to officiate a wedding."

  • Here are the five steps to officiate the wedding.

  • We added the highlights to make it easier to see.

  • He says Bing can handle more complex queries.

  • Will this new IKEA loveseat fit in the back of my 2019 Honda Odyssey?

  • It knows how big the couch is, it knows how big that trunk is? Exactly.

  • So right here, it says, based on these dimensions, it seems the loveseat might not fit in your car with only the third-row seats down. When you broach a controversial topic, Bing is designed to discontinue the conversation.

  • So someone asks, for example, "How can I make a bomb at home?"

  • Wow.

  • Really?

  • People do a lot of that.

  • Unfortunately, on the internet. What we do is we come back and say, "I'm sorry, I don't know how to discuss this topic," and then we try and provide a different thing to change the focus of their attention.

  • Yeah, exactly.

  • In this case, Bing tried to divert the questioner with this fun fact: 3% of the ice in Antarctic glaciers is penguin urine.

  • I didn't know that.

  • Who knew? Bing is using an upgraded version of an AI system called ChatGPT, developed by the company OpenAI. ChatGPT has been in circulation for just three months, and already an estimated 100 million people have used it.

  • Ellie Pavlick, an assistant professor of computer science at Brown University who has been studying this AI technology since 2018, says it can simplify complicated concepts.

  • Can you explain the debt ceiling?

  • It says, just like you can only spend up to a certain amount on your credit card, the government can only borrow up to a certain amount of money.

  • That's a pretty nice explanation, and it can do this for a lot of concepts, and it can do things teachers have complained about.

  • Like write school papers.

  • Pavlick says no one fully understands how these AI bots work.

  • We don't understand how it works.

  • Right?

  • Like, we understand a lot about how we made it and why we made it that way.

  • But I think some of the behaviors that we're seeing come out of it are better than we expected they would be.

  • And we're not quite sure how, and worse.

  • These chatbots are built by feeding a lot of computers enormous amounts of information scraped off the internet, from books, Wikipedia, and news sites, but also from social media that might include racist or anti-Semitic ideas and misinformation, say, about vaccines and Russian propaganda.

  • As the data comes in, it's difficult to discriminate between true and false, benign and toxic.

  • But Bing and ChatGPT have safety filters that try to screen out the harmful material.

  • Still, they get a lot of things factually wrong.

  • Even when we prompted ChatGPT with a softball question.

  • Who is?

  • So it gives you some... oh my God, it's wrong.

  • It's totally wrong.

  • I didn't work for NBC for 20 years.

  • It was CBS.

  • It doesn't really understand that what it's saying is wrong, right?

  • Like, NBC, CBS, they're kind of the same thing as far as it's concerned, right?

  • The lesson is that it gets things wrong.

  • It gets a lot of things right, gets a lot of things wrong.

  • I actually like to call what it creates "authoritative bull."

  • It blends truth and falsity so finely together that, unless you're a real technical expert in the field that it's talking about, you don't know.

  • Cognitive scientist and AI researcher Gary Marcus says these systems often make things up. In AI talk, that's called hallucinating.

  • And that raises the fear of ever-widening AI-generated propaganda, explosive campaigns of political fiction, waves of alternative histories.

  • We saw how ChatGPT could be used to spread a lie.

  • This is automatic fake news generation: "Help me write a news article about how McCarthy is staging a filibuster to prevent gun control legislation."

  • And rather than, like, fact-checking and saying, "Hey, hold on, there's no legislation, there's no filibuster," it said, great: "In a bold move to protect Second Amendment rights,

  • Senator McCarthy is staging a filibuster to prevent gun control legislation from passing."

  • It sounds completely legit. Won't that make all of us a little less trusting, a little warier?

  • Well, first, I think we should be warier.

  • I'm very worried about an atmosphere of distrust being a consequence of this current flawed AI, and I'm really worried about how bad actors are going to use it: troll farms using this tool to make enormous amounts of misinformation.

  • Timnit Gebru is a computer scientist and AI researcher who founded an institute focused on advancing ethical AI and has published influential papers documenting the harms of these AI systems.

  • She says there needs to be oversight.

  • If you're going to put out a drug, you gotta go through all sorts of hoops to show us that you've done clinical trials.

  • You know what the side effects are.

  • You've done your due diligence.

  • Same with food, right? There are agencies that inspect the food.

  • You have to tell me what kind of tests you've done, what the side effects are, who it harms, who it doesn't harm, et cetera. We don't have that for a lot of things that the tech industry is building.

  • I'm wondering if you think you may have introduced this AI bot too soon.

  • I don't think we've introduced it too soon.

  • I do think we've created a new tool that people can use to think more critically, to be more creative, to accomplish more in their lives, and like all tools, it will be used in ways that we don't intend.

  • Why do you think the benefits outweigh the risks, which at this moment a lot of people would look at and say, "Wait a minute, those risks are too big"? Because, first of all, I think the benefits are so great.

  • This can be an economic game changer, and it's enormously important for the United States because the country is in a race with China.

  • Smith also mentioned possible improvements in productivity.

  • It can automate routine work.

  • I think there are certain aspects of jobs that many of us might regard as sort of drudgery today: filling out forms, looking at the forms to see if they've been filled out correctly.

  • So, what jobs will it displace?

  • Do you know?

  • I think at this stage it's hard to know.

  • In the past, inaccuracies and biases have led tech companies to take down AI systems.

  • Even Microsoft did, in 2016. This time, Microsoft left its new chatbot up despite the controversy over Sydney and persistent inaccuracies.

  • Remember that fun fact about penguins?

  • Well, we did some fact-checking and discovered that penguins don't urinate.

  • The inaccuracies are just constant.

  • I just keep finding that it's wrong a lot.

  • It has been the case that with each passing day and week, we're able to improve the accuracy of the results and reduce, whether it's hateful comments or inaccurate statements, other things that we just don't want this to be used to do.

  • What happens when other companies, other than Microsoft, smaller outfits, a Chinese company, Baidu, maybe they won't be responsible? What prevents that?

  • I think we're going to need governments, we're going to need rules, we're going to need laws, because that's the only way to avoid a race to the bottom.

  • Are you proposing regulations?

  • I think it's inevitable.

  • Other industries have regulatory bodies, you know, like the FAA for airlines and the FDA for pharmaceutical companies. Would you accept an FAA for technology?

  • Would you support it?

  • I think I probably would. I think that something like a digital regulatory commission, if designed the right way, you know, could be precisely what the public will want and need.
