[MUSIC PLAYING]

LAURENCE MORONEY: Hi, everybody, and welcome to "TensorFlow Meets." I'm Laurence Moroney, and I'm delighted today to meet with Arun Subramaniyan from BHGE. Arun, I know you've been doing lots of great stuff with probabilistic modeling. For those of us who don't really understand probabilistic modeling, could you tell us all about it?

ARUN SUBRAMANIYAN: First of all, good morning, and thank you for having me here. Probabilistic modeling, and probability theory generally, is something that we've been using for several years now. It's mostly for modeling systems that have a combination of very complex phenomena coupled with things that we can't measure precisely. To give you a simple example, if I were to ask you to predict where a stone would land if you threw it, then any high school student would tell you they can calculate it precisely based on how fast you threw it and at what angle. Now, if I were to add a little bit of uncertainty to it, saying I don't know exactly at what angle or at what velocity you threw the stone, then--

LAURENCE MORONEY: Maybe like wind shear and stuff like that?

ARUN SUBRAMANIYAN: --and wind shear and stuff like that, then all of a sudden your predictions are no longer as precise. In a simple system like that, you can already see things starting to get complex. Imagine a complicated system in the real world. Things can get much more complex if you're trying to predict something precisely.

LAURENCE MORONEY: So you're working on a lot of complex systems like this. Could you share some examples?

ARUN SUBRAMANIYAN: Absolutely. At BHGE, and the broader GE, we work on a lot of complex systems. For example, when designing gas turbines, or trying to predict what a very large-scale system like an offshore oil platform will do, we're talking about hundreds, if not thousands, of variables interacting with each other. And most of the time, you can predict or measure maybe a few hundred, if not just a few tens, of those variables. So how do you predict the behavior of such complex systems and still get actionable, meaningful outcomes from your models without knowing all the information about the system? That's really where we use probabilistic modeling.

LAURENCE MORONEY: I see. It sounds complex. So what is your approach to this? How do you get started?

ARUN SUBRAMANIYAN: [INAUDIBLE] We get started with the domain. What I mean by "domain" is you might be a mechanical engineer, an aerospace engineer, a petrophysics engineer. You start with that understanding of the domain and marry it with traditional machine learning techniques. That's been going on for several decades, and it gives you a very good understanding of how to predict things precisely. That's what we call known knowns. So we can predict the known things very precisely.

LAURENCE MORONEY: So known knowns, right?

ARUN SUBRAMANIYAN: Exactly, known knowns.

LAURENCE MORONEY: A core area you can work from.

ARUN SUBRAMANIYAN: A core area we can start from. Then we add a layer of probabilistics on top of it to ask, what are the things that we cannot measure precisely, or measure at all? That's where probabilistic modeling comes in, and that is what I would call known unknowns. An example of that would be, say, if I'm trying to predict how a crack is going to propagate in a particular component. Then I need to know the temperature of that component, for example. I can measure it to within plus or minus 10 degrees, but I don't know what that variation in temperature is going to do to my crack propagation.

LAURENCE MORONEY: I see.

ARUN SUBRAMANIYAN: So that is what I would call known unknowns. And once I know which unknowns I'm not entirely sure about, I can go and say, OK, this is the impact of that on something real.
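To make that idea concrete, here is a minimal sketch of the kind of uncertainty propagation being described: a Monte Carlo pass of an uncertain temperature measurement (plus or minus 10 degrees) through a crack-growth relation. The growth function below is a made-up, Paris-law-style toy, not an actual BHGE or materials model; the point is only that the prediction comes out as a distribution rather than a single number.

```python
import numpy as np

# A toy Monte Carlo illustration of a "known unknown": the component
# temperature is only measured to within +/- 10 degrees, so we propagate
# that uncertainty through a (hypothetical) crack-growth relation.
rng = np.random.default_rng(42)

def crack_growth_per_cycle(temperature_c, delta_k=20.0):
    # Made-up Paris-law-style relation with a temperature-sensitive
    # coefficient (purely illustrative, not a real materials model).
    c = 1e-11 * np.exp(0.01 * (temperature_c - 300.0))
    m = 3.0
    return c * delta_k**m  # crack extension per load cycle, in meters

# Nominal reading of 300 degrees, known only to within +/- 10 degrees.
temperature = rng.normal(loc=300.0, scale=10.0, size=100_000)
growth = crack_growth_per_cycle(temperature)

# The output is a distribution of growth rates, not a single prediction.
print(f"mean growth per cycle: {growth.mean():.3e} m")
print(f"95% interval: {np.percentile(growth, [2.5, 97.5])} m")
```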
ARUN SUBRAMANIYAN: There is another level of complexity, where there are things I don't know that I don't know.

LAURENCE MORONEY: Got it.

ARUN SUBRAMANIYAN: And that's what I would call unknown unknowns.

LAURENCE MORONEY: Unknown unknowns, right?

ARUN SUBRAMANIYAN: I know it's a mouthful. But an example of that would be, say, you have designed a system and put it out in the real world. You know some of the things that are going to affect that system, but you're not entirely sure of everything that's going to affect it. That remaining everything else is what we would call unknown unknowns.

LAURENCE MORONEY: I see.

ARUN SUBRAMANIYAN: And most of the time, in the real world, you can predict something up to, say, 90% or 95% of the time. The last 5% is what surprises us. And in safety-critical systems, systems that are critical to keeping up the infrastructure of the world, you can't afford even a 1% chance of something going down. For example, if power goes down, you need to be able to bring it back up very quickly. Those are the kinds of situations where unknown unknowns come in.

LAURENCE MORONEY: Got it. So starting from the known knowns, then going to the known or knowable unknowns, and then there's the unknown--

ARUN SUBRAMANIYAN: Unknowns.

LAURENCE MORONEY: Unknown unknowns.

ARUN SUBRAMANIYAN: Exactly.

LAURENCE MORONEY: I see. So you've gone from known knowns to knowable unknowns, and then unknown unknowns.

ARUN SUBRAMANIYAN: Unknown unknowns, right. And when you're trying to model systems that are highly complex and extremely critical, you need to be able to predict things at all of those levels. And even if you're not able to predict the unknown unknowns, you need to know how much you are missing, and if an event like that happens, how you would respond to it. That is really where unknown unknowns come in.

LAURENCE MORONEY: So now, bringing this into actually developing these things, you use TensorFlow Probability.

ARUN SUBRAMANIYAN: Yes. We started with TensorFlow and then combined that with TensorFlow Probability quite a bit.

LAURENCE MORONEY: So could you tell us a little bit about how you use all that?

ARUN SUBRAMANIYAN: Absolutely. We started with TensorFlow for deep learning, precisely. And when we got introduced to the TensorFlow Probability team, and Josh Dillon specifically, what we realized was that they were bringing extremely deep research concepts from the probabilistics world into a production world, which is not generally common. And we were able to mix the deep learning community with the probabilistics community we had within our own teams. Running a reasonably large data science team, what you have to do is mix teams that are not necessarily talking the same language. And TensorFlow allows us to do that very effectively, because now a deep learning expert who doesn't understand probabilistics well can talk to a probabilistics expert who doesn't understand deep learning in the same language.
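As a rough illustration of that shared language, here is a minimal sketch, assuming TensorFlow 2.x and TensorFlow Probability are installed, of the generic pattern shown in the public TFP tutorials: an ordinary Keras network whose final layer is a TFP distribution, so the model predicts a full distribution (with uncertainty) rather than a single point estimate. This is not BHGE's production code, and the training data names are hypothetical placeholders.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# A deep-learning model whose output is a probability distribution: the
# last Dense layer emits two numbers per example (a mean and a raw scale),
# and DistributionLambda turns them into a Normal distribution.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2),
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(loc=t[..., :1],
                             scale=1e-3 + tf.math.softplus(t[..., 1:]))),
])

# Train by maximum likelihood: the loss is the negative log-probability of
# the observed target under the predicted distribution.
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss=negloglik)

# x_train, y_train, and x_test are hypothetical placeholders for real data.
# model.fit(x_train, y_train, epochs=500)
# yhat = model(x_test)   # yhat.mean() and yhat.stddev() give the prediction
#                        # and its uncertainty for each test point.
```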
LAURENCE MORONEY: Nice. So having that framework they could work together in was very powerful for them.

ARUN SUBRAMANIYAN: Absolutely. And it helped scale both our teams and our deployments very quickly.

LAURENCE MORONEY: Wow, so a lot of complex stuff that you've been working on. There must have been some way you got started figuring all this out. How did you learn it?

ARUN SUBRAMANIYAN: Absolutely. I'm not a data scientist by training; I got into data science by accident. I'm an aerospace engineer who had to solve very complex problems by mixing these concepts together.

LAURENCE MORONEY: It's a very common story, by the way.

ARUN SUBRAMANIYAN: Absolutely. One of the things that helped me a lot was mixing the practical aspects with the deeply theoretical aspects. For the practical side, a book that I really love is called "Doing Bayesian Data Analysis." It gave me, at least, quite a bit of understanding of how these techniques are applied in the real world. But at the same time, thinking about probability requires people to think about solving problems in a fundamentally different way, because we are trained to say, here is a bunch of inputs; how are they going to get me one outcome? If the same inputs give you multiple outcomes, that's a very different paradigm to think about. So a set of books that helped me were from E. T. Jaynes, which are at a much more philosophical level of understanding probabilistics. I would urge folks to at least dabble in both the practical aspects and some of the philosophical aspects together. And if you look at the recent blogs from the TensorFlow team, as well as the broader community doing probabilistic deep learning, there are a lot of fantastic blogs out there that will help people get started as well.

LAURENCE MORONEY: And I'd say one thing. I know you've written a couple of blogs yourself, and there's another one on the way where you go into a little more detail than what you've talked about today.

ARUN SUBRAMANIYAN: Absolutely. We planned it as three blogs because we wanted to walk through known knowns, known unknowns, and unknown unknowns, and bring it all together.

LAURENCE MORONEY: So the one that you're still working on is the unknown unknowns.

ARUN SUBRAMANIYAN: Yes, and we're close to getting it done. It's getting published in the next month or so.

LAURENCE MORONEY: So all of that is on blog.tensorflow.org, right?

ARUN SUBRAMANIYAN: Absolutely.

LAURENCE MORONEY: So thanks so much, Arun. And thanks, everybody, for watching this episode of "TensorFlow Meets." If you have any questions for me, or if you have any questions for Arun, please leave them in the comments below. And in the description for this video, we'll put links to everything that we spoke about today, so you can check them out for yourself. Thanks, and see you next time.

ARUN SUBRAMANIYAN: Absolutely. Thank you, and thanks for having me.

[MUSIC PLAYING]