
  • Let's get back to the Google I/O developers conference in Mountain View, California.


  • With the spotlight on hardware this year,


  • Google CEO Sundar Pichai announced a new artificial intelligence supercomputer chip


  • looking to transform the search giant into an AI first company and a real cloud computing contender.


  • We caught up with Scott Huffman, Google's vice president of engineering, and asked just what this supercomputer chip means for Google.


  • Well we're really excited to be able to have the computing power to be able to really harness all of the newer machine learning algorithms.


  • One of the things that is exciting about these new algorithms is they're very, what in computer science terms we call, "highly parallelizable".


  • So you can do many computations at once and get very high scale and process a lot of data that way.


  • And the new chips are really designed to do that from the ground up.


  • So really designed to do the kinds of machine learning processing that we're using a lot of.


  • Digital assistants are all the rage now, but Google Home sales are still dwarfed by Amazon Echo.


  • How is the Google Assistant different from Siri, different from Alexa, or Cortana?


  • So, one thing that we're very excited about with the Google Assistant,


  • is the ability to actually go across all the different devices and contexts in your life.


  • So as you go from your house, to your car and your commute, to out and about on your day,


  • we want the same assistant to really be available to help you in all those different places.


  • And so today, we're really excited to deploy the assistant out to all the iPhones,


  • make it available to iPhone users in the US,


  • and we're in the process, of course, of rolling out across all the Android phones, Google Home, Android Auto, Android TV, Android Wear,


  • so really making that assistant always available to you no matter what you're doing.


  • Now what is it gonna take for voice technology to actually improve?


  • Because, you know, I've used all of these devices, and in my own experience, it's still rather crude.


  • Well, so we think we're making a lot of progress, but one of the big things, and Sundar talked a little about it today,


  • is really using broad-scale data and neural algorithms in order to improve the technology.


  • So we've been actually pretty significantly overhauling, kind of, all of our algorithms under the hood every couple of years


  • to take advantage of the new computing power that we have and new and larger amounts of data.


  • And every time that we do that, we see a pretty big jump in improvement.


  • One of the things that we did as we worked on bringing Google Home to the market that was really an exciting thing,


  • is because Google Home needs to work at a distance, I might be standing far away,


  • there's a lot more noise in the microphone signal.


  • And so by adding essentially artificial noise into our training data,


  • we were able to have our neural network actually be able to recognize things at a far distance away.


  • So, these kinds of algorithms are very powerful for kind of, shaping recognition in different environments.
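The noise-injection idea described above can be sketched as a small data-augmentation routine (an assumed illustration, not Google's actual training pipeline): each clean audio sample is kept, and noisy copies with artificial noise mixed in are added alongside it, so the model also sees far-field-like conditions during training.

```python
# Minimal sketch of noise-injection data augmentation: mix artificial
# noise into clean training samples so a recognizer also learns from
# degraded, far-field-like signals.
import random

def add_noise(sample, noise_level=0.1, seed=None):
    """Return a copy of `sample` with uniform random noise mixed in."""
    rng = random.Random(seed)
    return [v + rng.uniform(-noise_level, noise_level) for v in sample]

def augment(dataset, copies=3, noise_level=0.1):
    """Keep each clean sample and add `copies` noisy variants of it."""
    out = []
    for sample in dataset:
        out.append(sample)
        out.extend(add_noise(sample, noise_level) for _ in range(copies))
    return out

clean = [[0.0, 0.5, 1.0]]
augmented = augment(clean)
print(len(augmented))  # 1 clean sample + 3 noisy variants = 4
```

The same trick generalizes: any distortion you expect at inference time (distance, reverb, background chatter) can be simulated in the training data instead of collected by hand.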


  • What is one place that you see the assistant going, where it hasn't gone yet?


  • Today we're enabling a voice conversation into a great set of functionality


  • but users are doing sort of the obvious things on every device.


  • But I don't think we fully realized yet the vision of having any kind of conversation you want,


  • having it really be understood, and then having the assistant tap into all the different services in the world in a seamless way.


  • That's really the vision.


  • And so I think we have a long way to go.


  • One example that we showed some beginnings of today that we're really excited about is something we call Google Lens.


  • And this is just the realization that you know, speaking out loud is great,


  • but when I'm talking with my friend, a lot of times what I do is point at something,


  • and then we talk about that. We talk about what we see.


  • And with Google's advances in computer vision and computer, kind of, image understanding,


  • the assistant is actually going to begin to have that capability over the next few months,


  • so that I'll be able to open my camera, my viewfinder, and then begin to talk to the assistant about what I see.


  • And so we're really excited about that.

