
  • All right.

  • Hello, world.

  • This is CS50 Live, where we do a bunch of programming from scratch, talk about technology, and do all sorts of fun stuff.

  • CS50 is Harvard's intro to computer science.

  • If you're not familiar, you can find all of our videos and stuff on youtube.com/cs50.

  • So go ahead and check that out.

  • So my name is Colton Ogden.

  • I host the show here, but today we have CS50's Nick Wong here, and, uh, I'll let Nick take it away.

  • So we're continuing our kind of series on neural networks.

  • I saw in the chat people were asking, you know, is linear algebra needed, like, required for neural networks?

  • And I think you can actually attest to the fact that, in order to get them working: not really.

  • Right.

  • Uh, you were experimenting over the weekend and did all sorts of cool things, and actually I'll ask you to talk about that a little bit in a sec. But to get them working, to, like, run neural networks following a tutorial online, I'd say no, you don't really need linear algebra, right?

  • You do need a little of it for what we're doing today, but yeah, I would say that you don't necessarily need it in order to get them to work, in the very technical sense of the question.

  • But I think that understanding some fundamentals of linear algebra, as was suggested in the chat, is super useful for actually doing meaningful things with neural networks.

  • So if you want to answer questions about why your neural network has just noise showing up every time you ask it to display an image,

  • or if you want to understand why certain layers don't fit together or need some sort of adjustment, or you're training some GAN, a generative adversarial network, and you want to know, you know, why your images look one way versus another, then linear algebra is pretty helpful.
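As a concrete illustration of why linear algebra explains "layers that don't fit together": a dense layer is just a matrix multiply, so the inner dimensions have to agree. This is a hypothetical numpy sketch, not code from the stream:

```python
import numpy as np

# A dense layer is essentially: output = input @ W (+ bias).
# The inner dimensions must match, which is why "layers don't fit
# together" when their shapes disagree.
x = np.random.rand(1, 4)        # batch of 1 example with 4 features
W_good = np.random.rand(4, 3)   # maps 4 features -> 3 outputs
y = x @ W_good                  # works: (1,4) @ (4,3) -> (1,3)

W_bad = np.random.rand(5, 3)    # expects 5 input features, not 4
try:
    x @ W_bad                   # (1,4) @ (5,3) -> shape mismatch
    fits = True
except ValueError:
    fits = False                # numpy refuses the multiply
```

Knowing this one rule already diagnoses a large class of "my layers won't connect" errors.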

  • So what did you find over the weekend, Colton?

  • I did find a lot of noise. So I don't know if you want to do, like, a switcheroo of the laptop, or if we want to, before we actually cut to the full view.

  • Now, I don't know that much at all about generative adversarial networks, or neural networks, at least in terms of the math.

  • You know how they sort of work in principle.

  • Um, but I was interested; I'm into game development, and there's a sort of appeal to me about generative art, right?

  • Like Sprite art.

  • What generative art would mean is: instead of an artist... let's say you and I are working on a game. I'm the programmer, you're the artist, and I want 100 weapons in my game.

  • It's a lot.

  • It's a lot of weapons you're gonna have to create, every single one of them. I can barely draw, like, one sword, maybe.

  • Yeah, and you have to, like, creatively think of all of the sort of differences between the weapons.

  • So there's a creative cost.

  • There's a labor cost, a time cost that takes you away from doing other things that I might need you on.

  • Maybe you're also a programmer; maybe you could help there.

  • Uh, I'm not sure. You're talking about drawing? I could also draw the trees and maybe, like, the shields.

  • Yeah.

  • Yeah.

  • Thank you for pointing out the trees.

  • I brought firewood, in case you were wondering. But generative sprite art is really cool because it takes away the cost of doing art as a human, which is a very slow process.

  • Even if you're a good artist, it's time consuming to make really good artwork.

  • And so I've always been interested in the appeal of potentially finding a holy grail of generative artwork.

  • And, um, generative adversarial networks are pretty cool.

  • People have done a lot of interesting things with them.

  • Um, there was an article on Medium that I read about generative artwork, and I might even be able to actually find the article.

  • So, searching: "medium sprite art neural network". GANs, generative adversarial networks, are super useful because they generate these very sharp images.

  • They can be used to generate things that look maybe even realistic, photorealistic, which is crazy, because a lot of these, like, variational autoencoders, or kind of random variant generators, generate kind of pixelated images sometimes. Or, if you're given an actual image, there's just too much complexity in, like, a human face or something like that for you to build something that looks realistic, based even on something that's realistic.

  • But these GANs have the ability to kind of question each other as you go.

  • Yeah, no, it's super cool.

  • The article that inspired this, and I don't know how I actually ended up reading it, is called "Machine Learning is Fun! Part 7: Abusing Generative Adversarial Networks to Make 8-Bit Pixel Art". I'm somebody who's interested in ML, and I'm also interested in generative stuff.

  • So the article kind of goes into detail and references a paper on this thing called a DCGAN, which... actually, what was it the "DC" stands for? I was trying to remember what that actually stood for. Ah, deep convolutional.

  • But the end goal of this article was actually to create pixel art that looked like this, and the sprites themselves aren't actually generated.

  • But all the tiles in this, everything but Simon here, this monster, and this bat, is all generated from a neural network. They went into detail on how they did it.

  • If we scroll down here a little bit, they go into the details on how a, uh, generative adversarial network works and all that stuff, which we'll actually end up getting to, if not in this stream, then, like, one of the later ones.

  • But basically, they took, I think, 10,000 screenshots of NES games, like all of these ones here, and fed them into a, uh, neural network.

  • And you can actually find the repo on GitHub; I actually used it myself.

  • It ended up creating things that look like this.

  • So pretty, pretty chaotic.

  • Yeah, a little wild.

  • And, you know, you've talked about overfitting data.

  • It does overfit in a lot of situations.

  • You can see, like a direct copy and paste of certain menus and things like that in some of the screen show.

  • Definitely grab patterns, which is nice, but also grabbed a lot of noise.

  • Oh, yeah, I feel like that's a, uh, an inevitability.

  • But they ended up creating stuff that looked like this, which is pretty cool.

  • So getting pretty close.

  • And they basically just took a bunch of tiles here from some of the work that it did, and then made a level out of it. What I wanted to end up doing myself was something very similar.

  • So what I did was I grabbed some artwork, for example, from the Dungeon Crawl Stone Soup game. If you search "dungeon crawl stone soup tiles"...

  • I was looking for the sliced tile set.

  • Um, where is it?

  • Here?

  • OpenGameArt.org has it.

  • They're 32-by-32 tiles. And I apologize to anybody who's eager to get into, like, the TensorFlow part of this.

  • Okay, it's all good. It's basically a massive tile set for this game called Dungeon Crawl Stone Soup.

  • It's a roguelike game.

  • There's a ton of like, you know, weapons and enemies and items and all sorts of things.

  • Perfect for this use case.

  • They're 32-by-32-pixel tiles, right?

  • And what I did was grab that artwork. I did a bunch of experiments, but we'll just talk about my most recent one.

  • I took all of the, um... where is it? It's in data; it's in the weapons folder.

  • I took all the weapons, everything that looks like this, basically: axes, maces, swords, scythes, all sorts of fun stuff. But there's a lot of patterns; we were talking about this in advance, right?

  • They all kind of angle up and to the right.

  • There are some patterns in, like, what's at the top of them versus the bottom of them in a lot of cases, and a lot of color variants.

  • Um, and I ran the network on this for 20 epochs, which is just like a training period, sort of like an interval of time.

  • And what ended up coming out of that at the very end was stuff that looked kind of like this.

  • So you can see there's, like, kind of a sword like this.

  • Hella weapons, yeah. And this is only after 20 epochs, right, which is a very short, pretty small period.

  • And on a CPU, on a laptop, and, you know, we're doing a lot of things where people would be like, "Oh, there's no way that would work at all." But it very clearly gets a lot of the patterns that you're looking for. It gets the main one: everything angled up and to the right.

  • It also gets that there are all these different colors that belong. And I think there were a couple that we were looking at before the stream that had kind of a clear color gradient: a lot of weapons will have, like, a darker portion at the bottom and then lighter at the top, because metals are often reflective, and woods and, like, plastics are not.

  • And it's kind of amazing to me, I think, that it's capturing any patterns at all in a very short period of time.
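As a quick aside on the terminology used here: an epoch is one complete pass over the training set, so 20 epochs means the network saw every weapon sprite 20 times. A schematic sketch (the numbers and the batch-of-one update are hypothetical, not the stream's actual training code):

```python
# Toy stand-in for the training set: 8 "sprites".
dataset = list(range(8))

EPOCHS = 20          # one epoch = one full pass over the dataset
updates = 0
for epoch in range(EPOCHS):
    for example in dataset:   # visit every training example once per epoch
        updates += 1          # pretend gradient update (batch size of 1)
```

With 8 examples and 20 epochs, the loop performs 160 updates; real training would shuffle and batch the data, but the epoch bookkeeping is the same.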

  • It's very cool.

  • Yeah, no, it was super awesome.

  • And, uh, this is motivating me.

  • And I think the point we ultimately ended up getting to, in a roundabout way, was that the math is super important, because I do not know what I need to change about this to get it to produce the things that I want.

  • So, working from a top-down approach, I now have a motivation to actually dig into the internals of this and figure out how all of these pieces end up working together.

  • Now, somebody asked for the link to the article and to the repo.

  • So I'm gonna go ahead and do that.

  • Let me just find those really quick. In the meantime, if you want, I'll give you the cable here.

  • We could switch over to your computer and all that.

  • You take it away.

  • That was a big shill on my part. But it's fun.

  • It's interesting stuff. But, um, let's, uh, let's get to the actual, I guess, heart of our talk.

  • I mean, this is still pretty much all under the same set of ideas: we're trying to take in images. Images are tricky. They're, I think, particularly interesting to humans because we can see them.

  • We look at them all the time; they're our entire world. That assumes, kind of privilege-wise, that we can see; my apologies if that's not true for you. But generally speaking... and then there's sound, right?

  • And that was kind of the other point someone mentioned in the chat: have it sample rock songs and generate music. Really, these are all just different forms of data, right?

  • An image, a sound wave: they're all just different kinds of data, and you can essentially represent them within these matrices in pretty much the same way.

  • Now, a lot of work has been done on developing these kind of generative models for images, and it recognizes that sound, as a sequence of numbers, is kind of linearly dependent, right?

  • So a lot of times they make this, like, Markov assumption, where any pixel that I'm currently at only depends on the pixel before it.

  • But that's not necessarily true in an image where the things around it actually kind of shape what's going on.

  • And you can actually generalize that kind of Markovian assumption to being, like: well, at any pixel here,

  • only the pixels that came kind of before it, in a top-down, left-to-right kind of style, mattered.

  • And so this is still kind of that same assumption, but it works a little bit better for images, where you're actually taking, like, all the pixels before it instead of just the one. It's kind of like image filtering, exactly, where we're kind of just filtering down diagonally, rather than sampling around a pixel to try to figure out what's in there.
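To make that generalized Markov assumption concrete: in raster (top-down, left-to-right) order, an autoregressive pixel model may only condition on the pixels that came before the current one. A minimal numpy sketch of that "context" (illustrative only, not a real pixel model):

```python
import numpy as np

def context_pixels(image, row, col):
    """Pixels a raster-order autoregressive model may condition on:
    everything strictly before (row, col) in top-down, left-to-right order."""
    flat = image.reshape(-1)            # raster order
    index = row * image.shape[1] + col  # position of the current pixel
    return flat[:index]                 # all earlier pixels

img = np.arange(9).reshape(3, 3)  # toy 3x3 "image" with values 0..8
ctx = context_pixels(img, 1, 1)   # pixel at row 1, col 1
# ctx holds the full first row plus the pixel at (1, 0)
```

A real model (e.g. a PixelCNN-style network) would predict a distribution over the current pixel given this context; the key point is only the shape of the conditioning set.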

  • That's kind of a little bit clearer idea.

  • It works a little bit better for what we know to be true with how images often are.

  • But even that has its own issues; there are some pretty intricate complexities with how images form.

  • And we're not entirely sure of how they work in terms of like, how do I want to model an image for a computer?

  • So images are really interesting to humans, and sounds are, I guess, a little bit less popular, just by the nature of people being really interested in how you change images around.

  • Well, there's a really cool generative music thing that I saw, right. And they're still around; it's not like people haven't researched it. It still exists, actually, and people do all sorts of cool things.

  • You can go as simple as just kind of taking the fundamentals, the primitives, of music theory and saying: well, we have X number of notes, we have X number of rhythms, and how do I recombine those to get things that work?

  • There's kind of this learned, almost natural-language-processing-style way of doing things, where you take a bunch of music that has existed, but you have its primitives.

  • So you have what it looks like as, maybe, a MIDI file or an XML file, and you then try and generate over that.

  • And then you have this kind of raw-sound-wave approach: given a set of amplitudes, can I sample from that and figure out what's going on from there?

  • And so there are many different ways of approaching each kind of data; we're gonna focus a lot on images.

  • But sound is also super interesting.

  • There's all different ways to represent different kinds of data, and there's all different kinds of data and like it's an enormous field.

  • For that reason, it's being used in all sorts of things, from, like, genetics, sequences of DNA and genetic material, to, like, protein construction.

  • That's all very bio heavy all the way through to light.

  • How do I predict what, like, a city's traffic map will look like? What does the city's heat map look like from the top down?

  • How do I organize things for navigational purposes all the way over to, like, how do I generate art?

  • There's a generative artwork that sold for some ridiculous amount of money a little while ago.

  • That's kind of cool.

  • It's, you know, is that Do we replace artists by doing that?

  • And I'm sure people would argue, Well, no, of course not.

  • But realistically, if I can generate something and you can't tell whether it was me or a machine, then maybe, effectively, I can, right?

  • And there's all these kind of weird problems with it.

  • There are just so many different ways and paths you can take down into these that, even in our three streams, where we've taken a collective six hours to discuss one very particular image set and what we could do with it, we still have not covered, like, not even a small amount.

  • We've covered a very, very minor piece, perhaps one that's almost negligible. Really, just a little get-together.

  • Yeah, yeah, I'm very much talking with my hands. Lots of gesticulations. Yeah, all over the place for some reason.

  • I can't find the Medium article that had the really cool generative music in it. But if I do find it, I will post it for sure.

  • Exactly.

  • Okay?

  • And I think someone at the end asked: can you tell me what the difference between an MLP, a multilayer perceptron, and a deep neural net is? And they said, "they're the same, am I right?"

  • It's kind of like saying squares and rectangles: it's not necessarily that they are the same,

  • so much as a deep neural network is kind of a generalized form; it could be used for many things, from, like, classification to predicting what will come next, and things like that.

  • Deep neural nets cover a relatively wide range of things.

  • A multilayer perceptron is a form of neural net that uses particular types of layers within it.

  • So it has a very particular dense layer, which has a very particular, like, mathematical formula behind it.
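That "very particular mathematical formula" behind a dense layer can be written out directly: y = activation(Wx + b). A hypothetical numpy sketch (the weights and ReLU choice here are illustrative, not from the stream):

```python
import numpy as np

def relu(z):
    """A common activation: zero out negatives, pass positives through."""
    return np.maximum(0.0, z)

def dense(x, W, b):
    """One fully connected (dense) layer: activation(W @ x + b)."""
    return relu(W @ x + b)

x = np.array([1.0, -2.0])       # 2 input features
W = np.array([[0.5, 0.5],       # 2 units, each with 2 weights
              [1.0, 0.0]])
b = np.array([0.0, -0.5])
y = dense(x, W, b)
# First unit: 0.5*1 + 0.5*(-2) + 0 = -0.5 -> relu -> 0.0
# Second unit: 1*1 + 0*(-2) - 0.5   =  0.5 -> relu -> 0.5
```

An MLP is just several of these dense layers stacked, which is why it is one instance of the broader "deep neural network" class discussed here.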

  • A class-and-instance relationship. Yeah, very much; it is that kind of relationship.

  • Wait a sec, the screensaver back there is pretty. So BlueBooger says squares and rectangles are kind of the same kind of thing.

  • Well, all squares are rectangles, but not all rectangles are squares, right? So a square is defined by having, like, its width and height be the same.

  • But a rectangle can have any arbitrary set of width and height.

  • Um, and that's really just kind of the essence of the definitional difference.

  • The similarity occurs in how they're composed: they're polygons, four-sided polygons, with 90-degree angles at every intersection between the sides.

  • So basically, the idea is, I think, when I was a kid, someone was like, "Oh, all squares are rectangles, but not all rectangles are squares," and I was like, whoa, that's wild. And I use that any time

  • this sort of relationship of, like, superset and subset comes up, or kind of instances and classes, as Colton put it.

  • It's a pretty common, like, pattern and phenomenon that shows up in the world, sometimes less intuitively than others.

  • You can often relate that, in general, to, like, causation and association, right? Like, I associate maybe squares with rectangles, but just having the qualities of a rectangle doesn't create a square. So the more abstract you go, the more interesting that pattern gets, but it shows up all the time.

  • People often confuse one for the other, and it's an interesting thing to talk about.

  • So ThatNight asked how much computational theory we need to go into ML. And I think, for this, we've actually kind of worked our way from the bottom up, so we're actually gonna talk quite a bit about just kind of higher-level things in this one.

  • I actually probably won't code a whole lot today.

  • I'm a little tired.

  • Didn't have quite enough time for myself to prep.

  • Um, sorry.

  • I was doing a lot of psets, so that's my excuse for you guys.

  • But that is essentially what's going on: there's been a lot of work going on, and, you know, there were other things that kind of took priority. But we will still code a little bit,

  • just to kind of get people introduced to TensorFlow.

  • Probably not, like, a huge amount of coding on my part.

  • And then we'll talk about some of the things that I found useful when I was trying to understand how this whole like style transfer concept works, and that's ultimately what we want to do today.

  • Yes, yes.

  • So by the end of today, we want to figure out how this sort of thing works, and I figure we'll start with it and then go from there.

  • Okay?

  • Sorry.

  • Oh, I don't have to; there's a website that does it for you?

  • Yes, they do have a nice website.

  • It's called deepart.io; feel free to hop over if you'd like.

  • Here's the two of us. I think there was a really good one from, like, the Kali stream, where we both look kind of not horrible.

  • So there's this image of us; it might be broken, but that's okay.

  • We're gonna just kind of pick a neat style that we think would be very, like, minimalist. Wait up; I'm gonna actually have to put my email in here. Interesting. I'll put that over here and see what goes on when I sign us up.

  • Steal all your information?

  • Yep.

  • Everyone's gonna hijack my emails.

  • Now your image will be done in around 10 minutes.

  • It takes a long time, then.

  • Yeah, it's a pretty complex process that's going on.

  • We can make our own version of it; it won't be nearly as good, but, you know, we'll let this kind of run.

  • I'll check my email in 10 minutes or so.

  • Set a timer.

  • Yeah, and we'll see what goes on.

  • But if you want to see a preview: "image style transfer"... gonna get some images.

  • Wow, This is not quite what I was expecting.

  • It is a change there.

  • Yeah, I think this is DuckDuckGo images. And I can explain why it looks a little distinct.

  • Okay, so we have these kind of two examples, and there's a paper I'll pull up later that shows off what's going on.

  • Oh, that's so Yeah, it's super nifty.

  • So this image, you actually can't really distinguish the fact that it's been altered.

  • This one's a little bit easier to see that it's been changed.

  • But this one, not so much.

  • And there's a lot of really cool examples on how this works.

  • Essentially, this is my content image.

  • So this is what I want to see.

  • Same thing over here.

  • But this is my style image.

  • So my style is kind of: how do I want it to look? What kind of texture, what kind of attributes do I want to have on the content? That's so crazy: it's finding the outlines of the buildings and applying things to them, like putting the sky behind the buildings.

  • It's wild.

  • A human could not do this. And the clouds are all modified to be in the same style, but they don't look at all fake.

  • If you're familiar with deepfakes, there's kind of a similar concept going on there, and it's kind of scary. I mean, as someone who's interested in security,

  • this is terrifying to me.

  • People can forge things that even I would struggle to tell apart, especially because, given only this, I may not be able to tell you which thing it was faked from, and with how well it worked, I might not even be able to notice that this isn't the real thing.

  • It might not apply too well to, like, two pictures from very different domains; like, trying to take that first top-left image and then mix it with that middle one might not turn out as well. So it'd be a little less clean.

  • It would definitely be worth trying on your own. But this is, maybe, a recognizable image with a pretty clear... you know whose style that looks like. It's 100% painted, which is kind of the wild part.

  • And here's some kind of other examples.

  • I mean, it's definitely worth Googling around to kind of see what exactly happened, but I think that it's mind boggling.

  • I really love these, like photo realistic ones.

  • I think they're the ones that just blow my mind entirely.

  • But being able to transfer style is something that I think is a little wild.

  • It's not something that's intuitive in terms of like how it actually works.

  • So there's a paper that was written on this.

  • The paper is "Image Style Transfer Using Convolutional Neural Networks." And if you remember from our last stream, one of the reasons we talked about using convolutional neural networks is that we want to use these convolutional layers to learn features from an image.

  • So if I had an image of our faces, then one of the important features from our faces might be our faces themselves.

  • It might literally be, you know, what arrangement of eyes and nose makes us more or less distinct.

  • But when I say, like, a feature of some image, I certainly do not mean learning the image itself, which is something that neural networks can have a problem with, in the form of overfitting, where we end up actually just grabbing the face.

  • In which case it's pretty clear which one's which.

  • But it's not general, right?

  • Like, distinguishing between ours is easy, but generating one of us from the other is difficult. BlueBooger's comment about it:

  • "it's just like when I pasted a picture of my head on a picture of The Rock," right?

  • Exactly.

  • Well, in kind of that way, like, that's the idea.

  • But when you do something like that, it's easy for us to tell, you know, unless maybe you're very good at Photoshop. But with a GAN, I could do that to anything, in a very general sense. BobbyKnight says:

  • "so if you look in this, we could basically..." Well, that's the whole point, right?

  • That's why I like, for example, I want to be able to generate Sprite art right.

  • I don't want to spend millions of hours making millions of sprites. True. Well, and with something like this, you know, there's already a lot of natural variation in people.

  • And there are many datasets that contain people.

  • You could actually take the sprite art style and apply it to real people.

  • That's true.

  • And it would essentially do the same thing without you having to generate any of the variants. Imagine making a neural net that puts Colton's hair seamlessly on every CS50 student.

  • Well, that's kind of wild. Very probably doable, but very wild.

  • So, essentially, yeah, I recommend going and reading this paper.

  • It's not too terrible mathematically; they go into a little bit of math, which we'll kind of break down throughout this stream.

  • But, uh, they have a lot of helpful graphics for understanding just exactly what's going on here, and I'm gonna pick one to show you.

  • But here's another set of examples of Here's the reference image.

  • And then there are several different style images that are applied.

  • I think this one's pretty nifty.

  • This one looks like someone took acid. But this one, I think, is particularly interesting, where you essentially just have this kind of weird block-style art.

  • But I can still tell what the original image was. Someone asks: did they use similar techniques to the ones used to generate the image of the black hole? I was actually wondering about that.

  • Yeah.

  • So they're actually not really at all similar to those techniques.

  • Those techniques were pretty awesome.

  • The thing that generated the image of the black hole, it was not just one team; several teams kind of collaborated and worked together, and it was really just an awesome achievement and feat of science.

  • I think I was awed by that image and also just like the amount of work that went into it.

  • The woman who is credited with kind of being at the head of that is brilliant and amazing, and also near here: she works at Harvard, which is awesome. Or at least she's a Harvard affiliate.

  • And, well, the whole team was awesome, and clearly no one person could have done all of that.

  • But it is all super interesting.

  • I recommend going in reading the article on how they did that.

  • There was a lot of work done on, kind of like inferring what it must look like for us because there are kind of problems with this standard way of visualizing a black hole or visualizing things where we generally visualize them by bouncing beams like photons off of them and seeing what energy comes back to us.

  • But the issue with a black hole is that those photons just don't come back.

  • So it's very difficult to figure out what exactly it looks like; it's kind of the ultimate form of darkness.

  • But there is this kind of wild image that was generated basically from what they were able to infer from, like, radio frequencies and other kinds of incredibly large amounts of data.

  • There's someone that says the black hole pic was a waste of time and tells us nothing we didn't already know.

  • I almost could not disagree more strongly with every part of that sentence. I mean, yeah, I very, very thoroughly disagree with that. Even if you suggest that, you know, the picture showed nothing new, the, like, terabytes of information that they were able to process, and to do so in such a cohesive and reasonable fashion, with such a low error rate and such ridiculously controlled variance, is, I think, impressive in and of itself. Even the collaboration between the teams is impressive in and of itself.

  • And JP, you guys, I think he's being ironic, and I can concede that that is likely.

  • But my point there is that there's a lot of controversy going on with the black hole picture, where people are kind of harassing and going after the scientists involved and spreading some misinformation about how it works, or what went on behind it, or who coded what.

  • And so my point is simply to support the scientists and the team behind that, because they did something incredible.

  • So I think, in this case, it's actually worth not chancing spreading any sort of misinformation on that.

  • Yeah, but it's super interesting.

  • And we definitely recommend looking it up; just go see what's going on behind that.

  • Yeah. So now we're back to our, I guess, a little bit less celestial, if you will, image generators, and how we transfer a style from one image to the next.

  • And I think this picture does a really good job of explaining what exactly is going on.

  • So, to kind of recap: we have these convolutional neural networks, and convolutions are just a way of processing an image and trying to figure out what features from the image exist and are important.

  • Now, "important" is something that we'll go back to later, because it'll matter for how we actually get this to work.

  • But importance is something that's really difficult to dictate.

  • If you ask: how do we want to quantify what's important to me?

  • It's fairly difficult to do mathematically, and there are these tricks, these loss functions that we create, that do exactly that.

  • So if you look at what's going on here, we have some style image at the top, we have our input content image at the bottom, and somehow, at the very end here, we end up with some form of modified image.

  • Essentially, what's going on is you have some convolutional neural network that knows what features are important.

  • So it's fairly common, at least in the tutorials that are kind of around out there, to use a pretrained network that has already learned, for classification purposes, what features are valuable.

  • Are there a lot of circles in this image?

  • Are there a lot of slanted lines?

  • Do they slant to the right or left? Things like that.
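What a single convolutional filter computes, e.g. detecting slanted or vertical edges like those just mentioned, can be sketched in a few lines. This is a hypothetical toy example, not the network from the stream:

```python
import numpy as np

def convolve2d_valid(image, kernel):
    """Slide the kernel over the image; large responses mark where the
    pattern the kernel encodes appears ("valid" mode, no padding)."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image with a vertical edge down the middle (dark left, bright right).
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)

edge_kernel = np.array([[-1.0, 1.0]])  # responds to left-to-right brightening
response = convolve2d_valid(img, edge_kernel)
# The response is large exactly along the edge column and zero elsewhere.
```

A network like the pretrained classifiers discussed next stacks many learned kernels of this kind, so deeper layers respond to circles, slants, and eventually whole object parts.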

  • So, like, VGG-16 is a 16-layer convolutional neural network that's often used for classifying things.

  • There are many online; you can go to, like, the Keras applications folder, and it has a ton.

  • With these pretrained networks, you don't have to train them yourself; that would be kind of ridiculous on a laptop.

  • They come pretrained, and they have already learned what are called the feature representations that are valuable in an image.

  • So, given any image, once that image is passed through the neural network, it can tell you: hey, here are the things from that image that, according to what I know is important, are in it or not.

  • And that's essentially what you're seeing here: up at the top, what stylistic things are important from this image, and down at the bottom, what content things are important from the image.

  • So there is kind of this question, in order to transfer style over: how much do I want to understand from each image as style, and how much do I want to understand as content? And that's pretty tricky.

  • So generally, here's what ends up happening.

  • I think they do a pretty good job here of showing that, as you reconstruct the images, as you go further and deeper into a convolutional neural network, if you're reconstructing the image from really deep within, the style is pretty much exactly retrieved.

  • The content is a little bit muddled if you go from the very beginning.

  • Style is still pretty much noisy, kind of like just basic colors.

  • But the content is pretty much exact. And so.

  • Essentially, what they want to demonstrate is: if we pick, at different sections of the neural network, different layers and their representations, then we can actually try and throw these onto some other image, kind of like a template image that we use, and on that template image.

  • We can then measure how well it has acquired both the style and the content, and so the kind of general idea is, I have three images, actually, but only two of them are input by the user.

  • My first one is my style.

  • My second one is my content, and my third one is kind of this just random noise image.

  • It's just like a nice little template.

  • Clean slate.

  • It's not all zero because all zero has actual significance.

  • So we just throw in a bunch of uniform noise, or maybe even Gaussian noise, and we say: okay, given this image, I'm going to add in features from the style and from the content.
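
As a rough sketch of that initialization step (the image size and the noise parameters here are made up for illustration, not taken from the stream), in NumPy it might look like:

```python
import numpy as np

# Hypothetical image size; real style-transfer code would match the
# content image's dimensions.
H, W, C = 224, 224, 3

rng = np.random.default_rng(0)

# Uniform noise over the valid pixel range: no structure, unlike an
# all-zeros image, which actually means something (pure black).
uniform_init = rng.uniform(0.0, 255.0, size=(H, W, C))

# Gaussian noise centered mid-range, clipped back into [0, 255].
gaussian_init = np.clip(rng.normal(127.5, 40.0, size=(H, W, C)), 0.0, 255.0)

print(uniform_init.shape)  # (224, 224, 3)
```

Either noise image then serves as the blank "template" that the optimization gradually pushes toward the mixed result.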

  • I do this and then I measure the error between this new kind of mixed image and the style.

  • And then I also measure the error between this mixed image and the content. And what I'm trying to do, my new loss function for this neural network, is now the error, the combined error, between these two things.

  • And I could do really any kind of, like, adjustable linear combination of these two losses to get a kind of overall loss, and I want to minimize that.

  • And the minimization of this overall loss means to us that we will have gotten as close as we can kind of balanced as close as we can between the content and the style.
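
A toy sketch of that combined loss (the shapes, the default weights, and the Gram-matrix style measure below are illustrative stand-ins, not the exact formulation used in the stream):

```python
import numpy as np

def gram_matrix(feats):
    # Channel-to-channel correlations of an (height, width, channels)
    # feature map; this captures style while discarding spatial layout.
    h, w, c = feats.shape
    flat = feats.reshape(h * w, c)
    return flat.T @ flat / (h * w)

def style_loss(mixed, style):
    # Error between the Gram matrices of the mixed and style images.
    return float(np.mean((gram_matrix(mixed) - gram_matrix(style)) ** 2))

def content_loss(mixed, content):
    # Direct error between raw feature maps preserves spatial content.
    return float(np.mean((mixed - content) ** 2))

def total_loss(mixed, style, content, alpha=1.0, beta=1e-3):
    # Adjustable linear combination of the two losses; alpha and beta
    # trade off content fidelity against style fidelity.
    return alpha * content_loss(mixed, content) + beta * style_loss(mixed, style)

# Toy feature maps standing in for activations from a pre-trained CNN layer.
rng = np.random.default_rng(1)
f_style = rng.standard_normal((8, 8, 4))
f_content = rng.standard_normal((8, 8, 4))
f_mixed = rng.standard_normal((8, 8, 4))
print(total_loss(f_mixed, f_style, f_content))
```

Minimizing `total_loss` over the pixels of the mixed image is the "balance as close as we can between the content and the style" idea in one number.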

  • And we just pick which layer of our pre-trained neural network, or a neural network that we've trained, we want to grab these representations from, in order to decide whether we're grabbing, excuse me, stylistic representations or content ones.

  • That's kind of the high level overview.

  • That's the idea, essentially.

  • So, this paper. I know someone asked how much math you would need to know to understand this paper; from my couple of read-throughs, I would say not much.

  • 2000 pages.

  • 2000.

  • So there's only 10 pages, but it was part of a journal, which is a lot longer.

  • So yeah, this paper's actually fairly short, and, you know, the last, like, three pages or so are references, or, sorry, two pages are references, so it's really only an eight-page paper. That's, uh, two cities there.

  • That skyline shot looks really super cool.

  • I really love, like, blending kind of nighttime images of one city with, like, the daytime image of another city, and, like, seeing what happens. I'm really curious what would happen if you took away water from one of them, right, and kind of, like, saw what happened across something that was maybe completely unrelated.

  • That's the other thing: they tend to show off images that are, like, semi-related, where you get the best fitting.

  • I imagine that's the case, really. And so there's all these things going on, and then when they start doing it with, like, abstract art, I think it's really cool.

  • But it's then less of like you can tell what's going on.

  • so much as, like, let's see what this would have looked like if so-and-so had painted it.

  • I think that's super interesting.

  • I also think that combining these with, like, a GAN, or a generative adversarial network, might be really interesting.

  • So can I have one of my adversarial networks, maybe the generative part of it, actually generating these kinds of images, rather than, like, some variants on the images that are a little bit more, we'll say, constrained? And then having the adversary try to figure out, maybe within some set of parameters, whether or not it's faked, or maybe how well it can figure out the content of the image. And you can adjust your loss functions kind of as you go, depending on what you're trying to minimize for.

  • So generally, when I talk about these loss functions, what I really mean is that these neural networks are optimization problems.

  • I'm really just trying to minimize or maximize something.

  • You'll notice that, generally speaking, a minimization is actually also a maximization; it's kind of the same idea.

  • I just want the reverse.

  • If I negate something that has this kind of, like, abstractly overturned bowl shape, it ends up being an unturned, or just a normal, bowl, and I can then minimize that. And generally we just prefer to minimize; that's where all the terms like gradient descent and things like that come from. We're going down a gradient.
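
A tiny illustration of that "going down a gradient" idea (a made-up one-variable bowl, not a neural network):

```python
# Minimize f(x) = (x - 3)^2, a bowl with its bottom at x = 3, by
# repeatedly stepping against the slope.
def grad_f(x):
    return 2.0 * (x - 3.0)  # derivative of (x - 3)^2

x = 0.0    # arbitrary starting point
lr = 0.1   # learning rate (step size)
for _ in range(100):
    x -= lr * grad_f(x)  # gradient descent update

print(round(x, 4))  # converges toward 3.0
# Maximizing g(x) = -f(x) is the same problem: negating flips the
# overturned bowl into a normal one, so in practice we just minimize.
```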

  • Um, there are some, like, little tricks.

  • Well, it's time to check the email.

  • That's a great point.

  • I'm gonna swap me out of there so I can... thank you. You know, we could hack into your email.

  • Yeah, as fun as that would be, we would have a very entertaining, um, very entertaining moment.

  • Agrifood asks: did you learn neural nets at Harvard or on the Internet?

  • And if so, do you recommend sources?

  • Well, so that's a great question.

  • I have kind of done a little bit of both, to be honest.

  • So I've actually, I'm currently in a machine learning class that does cover some neural network stuff. I'm also dealing with, like, how do I learn this on my own?

  • How do I go and Google papers?

  • How do I kind of look around and try to find things that are interesting?

  • I guess I've kind of done a little bit of both.

  • It's a pretty cutting edge field, too.

  • So I imagine that there's a lot of new stuff coming out every day.

  • Yeah.

  • Yeah, it seems like almost every day there's, like, some new, like, mind-boggling new idea that comes out.

  • Um, I think you'll wanna, you want to pull this up, Colt.

  • This is pretty nifty.

  • Oh, wow.

  • Look at that.

  • What they did to my eyebrows.

  • Man, you have some aggressive eyebrows.

  • I've got a, I've got a goatee going on, too.

  • That's a strong goatee.

  • We look pretty cool.

  • I also look like I started aging, like I have gray hair, or I'm older.

  • It's kind of like that, you know, Smokes a cigar older.

  • You know, it kind of looks like you're, like, ready to go for, like, a, what's it called?

  • Great Gatsby Style.

  • Like the long cigarette?

  • Yeah, like, kind of looking at a green light over in the corner.

  • All I need is a suit, and then I can star in The Great Gatsby.

  • You You look actually pretty similar.

  • I hear you could never tell that on me, which is cool.

  • Like major changes.

  • I don't think you do.

  • Look, you've got a nice, clean buzz going on there, right?

  • Yeah, yeah.

  • I got, like, a much sharper fade.

  • Apparently, it's a green.

  • Yeah.

  • Oh, so the kids, I can't.

  • We see these kind of things.

  • He had to pay for that.

  • Uh, yeah.

  • So this is kind of, like, a beautiful application of style transfer: we took this kind of original image over here, and we applied it to this image, our content image, over here, and we ended up with this just wild application of that style. That is, like, the coolest thing. That is, like, the sickest thing.

  • I could do this all day.

  • Can we do it?

  • Let's do another one.

  • That's like, Let it cook for 10 minutes.

  • Yep.

  • I don't even know if this actually took 10 minutes.

  • I think it might have.

  • Maybe, maybe they just get high server load or something.

  • They want Oh, market.

  • All right, so let's pick another image.

  • Mr Holmes.

  • Dr. Watson.

  • Fusing friend George Clooney.

  • Those are very nice compliments.

  • What image are you thinking here?

  • Another stream image.

  • Or do we have one from another separate stream?

  • Yes.

  • Another stream that we've already seen.

  • That one.

  • Your private files.

  • We're gonna just kind of skim through everything that I do.

  • So we have these ones, like that's not bad money.

  • You look like you're facing the screen.

  • This one I'm just like, Oh, wow.

  • Yeah.

  • You're very intent on the just staring and we were doing.

  • And that one looks pretty close.

  • Hand signals are like people.

  • Oh, we can upload a style.

  • Oh, that Is this OK, so this is turned into a street.

  • We will get to tensorflow.

  • I promise.

  • Like a little bit.

  • But wait, It's just too much fun.

  • What are some cool styles to try on this image?

  • Uh, could you just write art, like you said?

  • Yeah, he has that thing.

  • That definitely could be a thing.

  • We're gonna make it a thing.

  • Let's get maybe, like, two Sprite people together.

  • Would that work?

  • OK: sprite art, two people next to each other.

  • You gotta love search engines.

  • Like I can't imagine what this would have been like.

  • There's a lot of cartoon ones.

  • They're all like, cute cartoon.

  • I'm not seeing any, like, really great ones; keep scrolling a little bit.

  • There's this... we get sort of teenage, teenager ones. But would it work? Wouldn't it, like, would it screw up the fact that, like... one way to find out: try it.

  • But we're occupying, like, different spaces, you know?

  • Yeah, space.

  • Like have, like the turtle in the background or something.

  • Oh, this is gonna be, this is gonna be wild, or it'll be completely disguised.

  • I also appreciate all the, like... I know you said you like to mess with all your settings, but all the color changes you made to your whole operating system and all your programs.

  • Yeah, the whole thing is very, very customized.

  • But you said you're red-green colorblind.

  • So can you see the red in that? Yes.

  • So the red on its own is OK, and oftentimes, like, different hues of color kind of muddle.

  • Oh, they changed the time estimate.

  • The different hues of color will modify, for me, like, what things blend with each other.

  • Video games tend to be very problematically hued, because they're all the same hue, and so they're, like, exactly the same here.

  • So then, like, red and green then just blend entirely, so.

  • I actually feel like, for colorblind stuff, I have to change my settings.

  • So it's, like, deuteranopia mode, and then it's like blue and orange, and that's super easy for me to see.

  • Other people will see it and just be like, what is going on?

  • It was more like, what settings do you have on to mess with this display?

  • So it's very interesting.

  • Yeah, So we'll see what that ends up being.

  • We'll kind of continue talking about this sort of thing.

  • You have custom... you saved an image and sent it to me?

  • I did save it.

  • Let's, let's go see, go see if I actually saved it. I will totally make that, like, the screenshot for the stream.

  • That would be super cool.

  • Um, here will say this image as I would see a style.

  • Nick Colton didn't make it.

  • What's the resolution?

  • It made it. I make it a question because the base resolution thing is pretty low, right?

  • Yeah.

  • The base res was, like, whatever I screenshotted. I can upscale it if I need to.

  • This is, you're just seeing what goes on behind the scenes; we're really just, like, very... anyway.

  • 700 by 300.

  • I cannot scale that.

  • No, thank you.

  • Fine.

  • I'll put it, like in a small section underneath, Like, a title or something.

  • Yeah, that'd be cool.

  • Anyway, I love how OS X flips the matrix representation, right?

  • Like, the width is first instead of the height; the width is first, the height's last.

  • I think that's normal.

  • I feel, so, I'm used to, like, the matrix representation, height and width. Oh, I don't think it'll end up mattering, but it's kind of funny.

  • You spend too much time on that machine, running code and machine learning. That really is, like, just a fact; we spend way too much time on our PCs.

  • That's old times.

  • Um, and it's it's killing me.

  • But the image is cooking now.

  • Well, I returned.

  • Hey.

  • So let's let's see it.

  • Oh, dude, but didn't it say it was gonna email you in, like, 11?

  • So now it's 10 minutes.

  • So it is actually probably a realistic estimate. So yeah, we'll hop into this TensorFlow, TensorFlow stuff.

  • That's live.

  • No one would want to see that.

  • My best to get you would get out, get out.

  • But it would be very funny.

  • Very time effective, you guys.

  • I'm sure you know Consul, and you're all right.

  • So we're gonna hop into some TensorFlow stuff.

  • I was originally going to code it, and then I got to today, and I kind of woke up this morning, and it was like, yep, I do not have enough familiarity with exactly what I would want to code to give this.

  • So we're gonna borrow from someone, a YouTuber that I also watch.

  • He does a really great job.

  • His name is, I think, Suraj Raval. So, Suraj, sorry if I butchered your name.

  • I'm very apologetic for that.

  • But he has this really cool library.

  • Well, not really a library, but just a GitHub repo.

  • I would definitely recommend checking it out.

  • Very, very appropriately named for the stream today.

  • Yeah, he literally titled it what we were going to do.

  • So you know, I got this.

  • It's, like, on the shoulders of giants and all that, and he does a really great job with all sorts of cool things.

  • And I would recommend watching his YouTube show; this guy's stuff is really neat.

  • If you see this, Suraj: shout-out for the intro to TensorFlow, from Nick Wong and Colton. We're gonna hop through his IPython notebook because it's kind of neat.

  • And he he has all sorts of cool stuff here.

  • Actually, I'm gonna hop through a couple of IPython notebooks.

  • These get rendered on GitHub, huh?

  • Yeah.

  • D'oh!

  • That blows my mind.

  • It's pretty neat.

  • I, like... GitHub does all sorts of cool things.

  • Good job, Microsoft.

  • He does kind of, like, a brief explanation of how this works, and keeps it really light.

  • I think that's great. I'm gonna hop down to the loss functions.

  • So Okay, so there's a lot of tensorflow specific stuff going on here, which is kind of wild.

  • But tensorflow, if you're not familiar, works on these kind of sessions.

  • So I start some TensorFlow session, and from there I can then add, like, variables to it.

  • I can add layers to whatever is being run.

  • I can add a model to it.

  • And, like, there's this graphical way of displaying models that's super useful in ML. And.

  • So, if you're not familiar: a topological graph of a convolutional neural net, these topological models for how, like, a neural net might work.

  • These aren't really what I'm looking for.

  • This is probabl

All right.


NEURAL NETWORKS, PART 3! - CS50 Live, EP. 58
