GOTO 2016 • Java-Based Microservices, Containers, Kubernetes - How To • Ray Tsang & Arjen Wassink


  • (light classical music)

  • - Good morning everyone and thank you so

  • much for coming.

  • And thank you for having me here at GOTO Amsterdam

  • I'm very, very happy to be here.

  • My name is Ray, I'm a developer advocate

  • for the Google Cloud Platform.

  • And there are two things that I do, that I enjoy doing.

  • Number one is I love to bring some of the latest

  • and greatest technology from Google that

  • we have to offer to developers all around the world.

  • And the second thing I love to do is to hear about

  • your experiences and your feedback about the session

  • today, about the technology

  • I'm going to show you today.

  • And also about the Google Cloud Platform as well.

  • And the best way to reach me is on Twitter

  • @saturnism, that is my Twitter handle.

  • So if you have any questions, please feel free

  • to reach out from there.

  • In terms of technology, I mean I've been

  • doing technology for a very long time, 22 plus years or so.

  • And prior to Google, I was working at

  • an open source company.

  • Prior to that I was at a consulting company.

  • But the other true passion of mine is actually traveling.

  • I love to travel and I often take photographs as well.

  • And if you'd like to see some of the photos in places

  • I've been to, feel free to check out my Flickr at

  • flickr.com/saturnism as well.

  • If you have any questions about these photos, especially

  • where I'm holding a compass in the middle of the desert.

  • I don't have the time to talk about it today.

  • But, I think my sound just went away.

  • There you go.

  • But if you have time, just come up and ask me about

  • this story and what it's about.

  • So I'm here to talk about microservices today.

  • And I'm assuming that many of you

  • have already heard of microservices, right?

  • How many people here know about microservices?

  • Yeah, I know, I know.

  • So I'm not here to talk about the theories behind it.

  • I'm not here to convince you one way or the other of

  • what you should be doing.

  • But, if you do want to explore and create microservices

  • yourself, here are a few things I'd like to share.

  • And this is going to be mostly a how-to and experiences

  • that we are happy to share with everyone.

  • The first problem that you're going to run into

  • is that as you are decomposing or as you are

  • creating a new application, new system with microservices.

  • The first problem you're going to run into is

  • there are going to be so many services.

  • If you decompose a simple application into say two services.

  • You have the front end,

  • and you have like multiple back ends, say two back ends.

  • And that's already three instances of your application

  • you have to deploy and manage, rather than just one.

  • And of course for redundancy reasons, you probably have

  • multiple instances of each of those services.

  • Well and now you're looking at potentially a

  • multiplication problem.

  • You have three different services, you have to deploy

  • two each, that's six you already have to manage.

  • Well two is probably not enough.

  • And at scale, some of these scale out more than the others,

  • and eventually you're going to see maybe 30, you know,

  • 40, 100, instances that you all have to deploy and manage.

  • And that will be fun.

  • And the traditional way of managing these services

  • just doesn't work anymore.

  • Why?

  • Well typically what's gonna happen is that before

  • your project even starts, maybe even now,

  • you have to request the servers to be procured.

  • Right, I don't know if that has happened to you,

  • but when I was doing my project as a consultant,

  • I had to order the number of servers

  • even before the project starts.

  • Maybe nine months before the project finishes.

  • And then the servers would come in

  • and what do I have to do?

  • I have to write the documentation that's

  • very, very long, of how you are able to install the servers,

  • And put the right components onto them,

  • and then finally deploy out your application

  • Like the operating system pieces you

  • have to install,

  • how you want to configure the host, the network,

  • the firewall, and then you probably lay down like

  • you're using a Java application,

  • you lay down the application server,

  • like Tomcat or WebLogic or whatever.

  • And then you configure it, and finally you deploy your

  • application onto it.

  • And then very quickly, you're going to find out that the

  • documentation probably doesn't work the very first time.

  • How many people have that happen to them before?

  • Yeah, I thought so.

  • And the other problem you're going to find out,

  • is that in the production environment, you are going

  • to run into the trouble where it just doesn't work,

  • even though it works in every environment before production.

  • Or how many people have that happen to them as well?

  • Yeah, (laughs) it happens all the time.

  • And the problem with that is that usually this

  • long and complicated procedure is either done

  • manually, where somebody actually follows it to

  • create that environment.

  • And it's not very consistent.

  • Or if it's scripted, well, you have to write that script.

  • And if you make any mistake in that script, then you're

  • also going to run into trouble.

  • And those scripts may even run differently from

  • environment to environment.

  • So one of the first things that we need to solve

  • is how do you actually deploy the same application

  • multiple times in a scalable fashion?

  • And one of the first things that you

  • need to look into of course, is the container.

  • How many people here know about containers?

  • Oh everyone, yay!

  • All right, so I'm not going to go into those in detail,

  • but just remember--

  • (disruptive scuffling)

  • Hello, hi.

  • (laughs)

  • Just remember that

  • if you're using containers, you're baking--

  • (disruptive scuffling)

  • the installation procedures in a sequential order.

  • Just checking if there's another sound, yeah?

  • And that is stored as an image, in terms of

  • what you want to run.

  • And that image can be deployed anywhere else.

  • And you can spin up new application instances

  • very, very quickly.

  • The other problem you're going to see is that

  • if you have so many services,

  • you don't want to put all of them individually

  • one on a single machine, right?

  • Because if you have 100 services,

  • you don't want to have 100 machines.

  • So what you need to do is to bin-pack them as

  • efficiently as possible into a single,

  • or the fewest machines possible, so that you have

  • less infrastructure to manage.

  • And when you do that, then you're going to run

  • into issues where occasionally you may have multiple

  • instances on the same machine.

  • The first thing you're going to run into is

  • the port conflicts.

  • And you want to avoid those as much as possible.

  • And then you have other challenges as well.

  • How do you make sure that they're all up and running?

  • There are so many to monitor.

  • How do you do that?

  • You cannot do that manually, of course.

  • You need to be able to check the health

  • of these systems and individual services,

  • so that when they have issues,

  • maybe you need to restart it.

  • And then again, you don't want to do it manually.

  • You might want to do this automatically as well.

  • And one particular challenge that I see

  • and that people ask me about is environments.

  • If you have say 30, 20 services in your environment

  • that you have to run and manage.

  • How do you create more of these environments

  • and reproduce them in a consistent fashion?

  • Remember, deploying one application is hard enough

  • when you don't have the right tooling to do so.

  • You all raised your hands.

  • Now you're going to be dealing with hundreds of them,

  • and you have to think about the tools

  • that you're going to use.

  • So today is all about the tools I'm going to share.

  • Many of the dotcoms, the technology companies,

  • have the tools to do this, including Google of course.

  • And I think it's partially what makes microservices

  • architecture successful.

  • If you actually go into this architecture without

  • the right tooling, you may actually eventually find

  • yourself in a situation that you just simply

  • cannot manage any of these at scale.

  • So at Google, just remember this.

  • Everything at Google, all of the services that we offer,

  • including Search, YouTube, Gmail.

  • I'm assuming everyone in here uses some of these things,

  • they all run in containers in Google.

  • And we're not using Docker containers,

  • but we're using the fundamental container technology.

  • In fact, Google contributed the fundamental technology

  • that makes containers possible, which is called cgroups.

  • We contributed cgroups into the Linux kernel

  • many, many years ago.

  • And that is the fundamental technology that all of the new

  • container products are kind of using today.

  • At Google, we launch about

  • two billion containers per week.

  • Now this number is not as impressive as it sounds.

  • Why, because this is the US billion.

  • It's not the European billion.

  • I know there's a very big difference.

  • (laughing) Yeah I know.

  • It's not two million million,

  • it's only two thousand million.

  • Yeah, there's a huge difference there,

  • but we know how to manage containers.

  • We know how to manage all services at scale.

  • This is what we do at Google, as engineers.

  • We cannot possibly deploy the application

  • the traditional way, that is why,

  • this is what we do.

  • If I have a Hello service, a Hello application.

  • A very easy one.

  • It's the only one I could write.

  • Rather than deploying into individual machines ourselves

  • or with a script of some sort specifically designed for

  • those machines.

  • We deploy to a target of a cell.

  • A cell is really just a cluster of machines.

  • And a single cluster of machines at Google, a single cell,

  • can be composed of up to 10,000 machines, okay?

  • So we don't deal with each machine individually,

  • 'cause that would be just way too many.

  • But what we do is we have tools to do this for us.

  • We don't say which servers to deploy to;

  • we say: go figure it out.

  • We specify the binary we want to deploy.

  • These are potentially static binaries that can just be

  • copied and deployed on any of the servers.

  • We can specify arguments, but most importantly

  • we are able to specify the resource requirements.

  • And this is where containerization is going to be

  • really helpful because when running multiple

  • services, multiple applications,

  • you don't want them to step over each other's toes

  • when they are fighting and competing for resources, right?

  • You don't want one runaway service to take 100% of the CPU

  • and like all of the other applications

  • just don't work anymore.

  • So with resource isolation, you're able to contain them

  • so that they don't exceed their boundaries.

  • And then we can say how many instances of this service

  • do we want to run?

  • We can say well I need five replicas,

  • that means five instances.

  • Or, at Google, maybe we need a little bit more.

  • And it's super popular,

  • 'cause that's the only thing I can write.

  • We want 10,000 instances, we just say 10,000.

  • And this is all the configuration is:

  • you just say how many you want,

  • you say what you want to deploy and we deploy it.
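
For flavor only: the published Borg paper sketches a hello-world job in a config language roughly like this. This is a sketch reproduced from memory of that paper's example, not real, runnable syntax:

```
job hello_world = {
  // which cell (cluster of machines) should run it
  runtime = { cell = "ic" }
  // the static binary to copy and run
  binary = "../hello_world_webserver"
  args = { port = "%port%" }
  // resource requirements, enforced via container isolation
  requirements = { ram = 100M, disk = 100M, cpu = 0.1 }
  // how many instances to run
  replicas = 10000
}
```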

  • And this is the way it works behind the scenes.

  • We will copy the binary into

  • a central repository of some sort,

  • like a shared file system

  • where all the machines can get to.

  • Okay and then we're going to send that configuration file

  • into our internal tool.

  • The internal tool is called Borg.

  • And there is a Master node which is called the BorgMaster.

  • And the BorgMaster understands how to receive the YAML file,

  • or the configuration file, and knows how to deploy it.

  • Once the BorgMaster receives it,

  • it's going to consult the scheduler which then

  • will be asking all the machines the same question.

  • Do you have enough resources to run this application?

  • And it's going to check all of the available server

  • nodes to see who's available to run this application.

  • If you cannot run it, it will just skip you.

  • If you can, then what it's going to tell you to do

  • is to go ahead and download that image.

  • And then start it.

  • And we can do this very, very quickly

  • and very quickly you're going to see my

  • Hello World application running in the Google

  • data center, simply by deploying this one descriptor.

  • And we can do this so fast, that we can deploy about

  • 10,000 instances in about two minutes and 30 seconds.

  • And that's partially because we also have a very, very fast

  • internal network that we can actually

  • copy these large images around very efficiently.

  • And you do get that kind of benefit if you are using

  • the Google Cloud platform as well.

  • So that's how we do things within Google.

  • But that's not enough, right?

  • Because that's all internal.

  • If you want to try this out yourself,

  • well this is what you need to do.

  • This is where Kubernetes comes in, right?

  • Kubernetes is the open source project that is designed

  • to orchestrate containers.

  • So if you're running your applications inside of containers

  • and if you need to deploy it at scale,

  • then you can use Kubernetes to orchestrate them across

  • multiple machines, just like in the similar fashion that

  • we do within Google as well.

  • And Kubernetes is actually based on the experiences

  • and the scale that Google has had

  • for the past many, many years.

  • And deploying so many containers,

  • we know how to do this at scale.

  • We know how to deal with the common issues.

  • So then we actually open-sourced the Kubernetes project.

  • It's all open source, it's written in Go.

  • The important take away on this slide is that

  • it will run in many different environments,

  • in multiple clouds, and also on-prem.

  • And that's super important as we go into the demo

  • in a few minutes.

  • The community is very, very vibrant.

  • We're on version 1.2, I think 1.3 is coming out very soon.

  • We have many, many contributors and many commits

  • and many stars.

  • Well basically the gist is if you wanna try it out,

  • please get involved with the community as well.

  • They have a Slack channel, they have a forum,

  • they're very responsive on GitHub as well.

  • So please go check it out if you'd like to contribute,

  • or learn more about Kubernetes in detail also.

  • But today I'm just going to give you a taste of it.

  • So this is how it works.

  • It's a very easy slide for me to make.

  • If you haven't noticed, all I had to do is copy and paste

  • the previous one, yeah and do a string replace, yay.

  • So this is how it works.

  • Rather than a static binary, we're building a container

  • image which is inherently a static binary anyways.

  • And it's a Docker image and I can actually push this

  • image to a central repository, and in the Docker world,

  • this is known as a Docker registry, right?

  • You can have the registry online.

  • You can have a public one or a private one.

  • You can store it wherever you want

  • as long as the machine can get to it.

  • Then you're going to be able to write a similar

  • configuration file that says what you wanna deploy.

  • You push it to the master,

  • the master checks against its scheduler,

  • and then it checks with all the machines in the cluster,

  • and asks: do you have enough resources to run

  • the application? Then it's going to pull down

  • the container image and start it.

  • Easy right?

  • It's very simple concept, but it's very powerful

  • because well it first of all allows you to describe

  • what you wanna deploy in a simple file.

  • Just like that, it can specify the resource limits as well.

  • But you can also specify how many instances of something

  • you want very similar to what we do internally.

  • But here's the catch, or the most important thing

  • that you wanna remember.

  • With Kubernetes, you're really configuring,

  • you're viewing your entire cluster

  • as a single machine in a way.

  • All the resources on individual machines, all those CPUs,

  • and the memory that's available to you,

  • you'll manage them through a single pane of view

  • through Kubernetes.

  • It's just one Kubernetes cluster to you.

  • You deploy it to this one single Kubernetes cluster

  • and it will take care of how to put it on the actual

  • machine for you behind the scenes.

  • So, enough of the talking, I'm just gonna go into the demo

  • very quickly, not that one.

  • So I just want a show of hands,

  • how many people here are Java developers?

  • Oh, I'm in the right place, okay, fantastic.

  • So you probably already know how to containerize

  • your Java applications.

  • You either use the Dockerfile to do it,

  • or I just want to point out one thing.

  • Which is there are some really, really nice plugins

  • you can actually use if you haven't used them already.

  • Let me see here, so Spotify, actually produced a couple

  • of plugins for containerizing Java applications with Maven.

  • They have two, there is a Docker Maven plugin

  • and there's a Dockerfile Maven plugin.

  • Now don't ask why there are two, it's a better question

  • for them, but actually I know why.

  • But they are both very good and they do things a little

  • bit differently.

  • But the beauty here is that you can actually capture

  • your Docker image creation process inside of your

  • Maven pom.xml.

  • And the beauty of this is you can actually tie

  • this into the execution phases so that whenever you're

  • packaging your JAR files you can also produce

  • the container image at the same time.

  • This is really useful.

  • And then you can also tag it with the version numbers of

  • your Java application too.

  • Or if you want, you can also tag it with your Git hash also.
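
As a sketch of what that can look like in a pom.xml (the plugin version, image repository, and tag here are placeholders; check the plugin's own docs for current usage):

```xml
<!-- Sketch: bind Spotify's dockerfile-maven-plugin to the package phase
     so the image is built whenever the JAR is packaged.
     Repository, version, and tag values are placeholders. -->
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>dockerfile-maven-plugin</artifactId>
  <version>1.4.13</version>
  <executions>
    <execution>
      <id>build-image</id>
      <phase>package</phase>
      <goals>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <repository>gcr.io/my-project/helloworld-service</repository>
    <!-- tag with the Maven version; a git hash works here too -->
    <tag>${project.version}</tag>
    <buildArgs>
      <JAR_FILE>${project.build.finalName}.jar</JAR_FILE>
    </buildArgs>
  </configuration>
</plugin>
```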

  • So let me go back to this service right here.

  • Another question is, how many people here heard about

  • Kubernetes before?

  • All right.

  • Well before I talk about it obviously (laughs).

  • Sorry, but, how many people here have used it?

  • Oh, well I'm glad you are here.

  • So how many people have seen it in action?

  • Seen it?

  • Oh quite a few.

  • Okay so many of you haven't seen this,

  • so this is gonna be new.

  • Okay so first of all what I'm going to do,

  • what I have done already is that I created this image

  • and pushed it into a registry.

  • And now I have this image in the registry somewhere.

  • I just want to run this at scale on many different machines.

  • And the way I can do that is, not delete.

  • Kubectl run.

  • So I don't know if you can see this on the top.

  • Yes you can hopefully.

  • So to run the image, a container image in a cluster

  • of machines is very easy with Kubernetes.

  • Here I have a cluster of machines

  • in the Google Cloud Platform.

  • I have four different nodes that can actually

  • run my workload, or my applications.

  • All I have to do is say kubectl run.

  • You can name this however you want.

  • I'm just gonna call this helloworld-service

  • and then I can specify the image.

  • This is the image that I want to deploy.

  • And this can be located anywhere that

  • the machines can get to.

  • And I'm using a private Docker registry that comes

  • within the Google Cloud Platform itself.

  • So that I just push my image there and then I can

  • download it from my projects in Google Cloud.

  • And then here's the important part, the -l flag,

  • following that, I can specify a bunch of key value pairs.

  • And this is very important because these key value pairs

  • are labels and that's a very important concept

  • in Kubernetes because everything in Kubernetes can

  • be labeled.

  • And the label, the key and value pairs, you can

  • name it however you want.

  • For example, if you like, you can say I want to label

  • this deployment version is 1.0.

  • I can say that this is the environment of staging, maybe.

  • I can also say that this is for

  • the conference GOTO Amsterdam, right?

  • I can name this however I want.

  • The important take away here is that with labels,

  • in the future you can query Kubernetes,

  • and say please tell me all of the applications who

  • has the label of app is equal to helloworld-service,

  • and the version is equal to one.

  • You can query this later via the API and it's

  • also very important when you want to route

  • the traffic to these application instances.
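
Put together, the command being typed looks roughly like this (the gcr.io path is a placeholder for your own registry and project):

```sh
# Sketch: run an image on the cluster, with labels attached
# so it can be queried and routed to later.
kubectl run helloworld-service \
  --image=gcr.io/my-project/helloworld-service:1.0 \
  --labels="app=helloworld-service,version=1.0,env=staging"
```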

  • So that's all I have to do.

  • I'm just gonna run this command line to start a service

  • in my cloud in my Kubernetes cluster,

  • and as you can see very quickly, it just got deployed.

  • And I haven't done any manual scripting here.

  • I haven't done anything else to say which machine

  • to deploy to.

  • I got four different nodes here, and I can see that

  • this got deployed to one of the nodes, TPLA.

  • The two boxes here are very, very important.

  • The box in gray is what we call a pod, a pod, P-O-D.

  • A pod is the atomic unit that

  • Kubernetes can actually manage.

  • Now you may be asking, hold on a second,

  • I thought we were all talking about containers here.

  • I thought containers are the atomic

  • unit that we should be managing.

  • But in Kubernetes it's called a pod.

  • Now what is a pod?

  • A pod can be composed of a single container

  • or multiple containers, okay?

  • And they are guaranteed to have the same IP address.

  • A pod has a unique IP address.

  • They are guaranteed to live and die together.

  • So when a pod dies, all the containers within the

  • pod will go away.

  • And they are guaranteed to be scheduled onto

  • the same physical machine.

  • Now what would you actually run within the same

  • pod, in different containers?

  • If you have an application with the front end

  • and the back end.

  • Are they tightly coupled together that you want to

  • run inside the same pod?

  • The answer is actually no you probably don't wanna do that.

  • Why?

  • Because if you do that, you cannot scale

  • the front end and the back end independently

  • from each other, okay?

  • So you want them to run in separate pods in this case.

  • So what would be a good use case for the pod?

  • Well maybe you have a Java application that's

  • exposing metrics via JMX.

  • And your system has another collector that

  • needs to collect the metrics with a different format.

  • Well rather than writing and changing your application,

  • to talk to that format,

  • what you can do is to run a sidecar container in the

  • same pod that's able to understand JMX metrics

  • and also be able to push it to the metric server the

  • way that the metric server understands.

  • So you can actually compose your application with

  • multiple tightly coupled components if you want to.
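
A minimal sketch of that sidecar idea (both images and the exporter container are hypothetical, not part of the talk's demo):

```yaml
# Sketch: one pod, two tightly coupled containers.
# They share an IP address, a lifecycle, and a machine.
apiVersion: v1
kind: Pod
metadata:
  name: helloworld-with-metrics
spec:
  containers:
  - name: app
    image: gcr.io/my-project/helloworld-service:1.0   # placeholder image
    ports:
    - containerPort: 8080
  - name: jmx-sidecar                                 # hypothetical exporter
    image: gcr.io/my-project/jmx-exporter:1.0         # placeholder image
    ports:
    - containerPort: 9090
```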

  • Now the box in blue, we call this a deployment.

  • And it does a few things for you.

  • And very importantly, is that you can tell

  • the deployment to, well first of all deploy the application.

  • When I was running this, I was actually deploying

  • the copies of the pods.

  • What you can also do is to scale.

  • You can say kubectl scale the deployment helloworld-service

  • and how many do we want?

  • We can just say replicas is equal to, and I can say four

  • and it's going to tell the deployment that I need four

  • instances. Now the deployment is going to

  • check, against something else called the ReplicaSet,

  • and say hey, do I have four instances now?

  • Do I have four replicas?

  • If I don't, then I need to spin up more.

  • If I do, then I can stop.

  • If any one of these instances goes away,

  • it will actually notice it and say,

  • oh no, I have three, but I need four.

  • Let me go ahead and start another one for you.

  • So that's very easy to do and we're going to see

  • deployment in more detail in a second.
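
The scale command from this part, condensed into a one-line sketch:

```sh
# Sketch: declare the desired replica count; Kubernetes converges to it.
kubectl scale deployment helloworld-service --replicas=4
```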

  • Now notice that every pod here, every box in gray,

  • has a unique IP address.

  • Now these IP addresses are unique to the pod.

  • They can talk to each other, even if they're not

  • on the same machine, but they come and go.

  • They are ephemeral.

  • Then the question is how do you

  • actually get to these services?

  • How do you actually consume it?

  • If you need to know all the IP addresses,

  • that's probably not the best way to do it.

  • Typically what you do today in your infrastructure,

  • is you create a load balancer in front of this, right?

  • And then you configure the load balancer to know which are

  • the back end endpoints.

  • In Kubernetes, well these back end endpoints

  • can come and go, the IP addresses could change.

  • So you don't want to configure those manually.

  • But in Kubernetes, we have the first class concept,

  • which is called a service, okay?

  • And a service is really almost like a load balancer.

  • Once you provision it, it will give you this stable

  • IP address that will then

  • be able to load balance your request.

  • So to expose all of these pods as a service,

  • all I have to do is do kubectl expose,

  • there we go.

  • Kubectl expose the deployment which is the box in the blue.

  • And I can say the port number 8080, that's the external

  • port I want to expose at,

  • and then the target-port is 8080 'cause that's where my

  • application's running on.

  • So I can expose this application by putting a

  • load balancer in front, and that is the box in green.

  • And the way that the service decides where to route the

  • traffic to is by using labels.

  • So if a traffic request comes into this green box,

  • in this service, it's going to see

  • well which one of the gray boxes, which one of the pods

  • matches my label selector that says well it has to

  • be routed to the application

  • that's called helloworld-service.
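
The expose command from the demo, as a sketch:

```sh
# Sketch: create a service (the green box) in front of the deployment's
# pods; traffic is routed to whichever pods match the label selector.
kubectl expose deployment helloworld-service --port=8080 --target-port=8080
```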

  • And now I can actually get to this service.

  • Now another very big question that you're going

  • to run in to is how do you discover your services?

  • If you have multiple of these things,

  • how do you actually know the IP address

  • that you need to connect to?

  • Well many people actually run a separate registry

  • of some sort.

  • Well Kubernetes actually has this right out

  • of the box as well.

  • So there are multiple ways to do this.

  • The first way is potentially using the API.

  • So I can access everything that I do from the command line.

  • They all make API requests behind the scenes.

  • So even to know which services are actually available,

  • I can actually get back either a YAML or JSON payload

  • and I can see all of the services that are running here.

  • So if I go and look for helloworld-service,

  • I can have all the details about it.

  • And I can also get its IP address, right?

  • But you don't wanna do this every single time.

  • Kubernetes actually exposes this service

  • as a DNS host entry for you right out of the box.

  • Yeah that's really nice.

  • So for example, let me just do one more thing here.

  • If I want to get inside of the cluster,

  • and the way I'm going to do it is by running

  • a bash shell directly inside of the Kubernetes cluster,

  • and I'm going to do a kubectl exec -ti,

  • the name of the pod, and /bin/bash,

  • and if I have internet, it'll connect, there we go.

  • It's really slow, get off,

  • stop watching YouTube videos please.

  • So I'm inside the Kubernetes cluster right now and

  • what I'm going to do is I can curl this URL of course.

  • I can do the IP address 8080/hello/ray

  • And that worked, whew!

  • That was a little slow (laughs).

  • But like I said, you don't want to do this all the time

  • with the IP address.

  • And like I said we actually exposed the DNS name.

  • So this becomes very, very easy to do.

  • So I can say helloworld-service and there we go.

  • It just resolves it right behind the scenes for you.

  • So you don't really have to run a separate registry

  • you know, if you don't want to.

  • When the instances come and go,

  • they will actually update the endpoints behind the scenes

  • and it will always route to the right

  • and available instances.

  • Now if you really want to know what endpoints are

  • behind the scenes for this particular service,

  • What I can do is that, this is called helloworld-service

  • I can actually do kubectl get endpoints and the name

  • of the service and I can actually see a list of endpoints

  • that are available and ready to serve.
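
The discovery steps from this part of the demo, condensed into a sketch (the pod name is a placeholder for whatever pod you shell into):

```sh
# Sketch: discover and call a service from inside the cluster.
kubectl exec -ti my-shell-pod -- /bin/bash        # placeholder pod name

# inside the pod, cluster DNS resolves the service name:
curl http://helloworld-service:8080/hello/ray

# and for client-side load balancing, list the ready endpoints directly:
kubectl get endpoints helloworld-service
```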

  • So if you don't want to do server-side load balancing,

  • if you wanna do client-side load balancing,

  • you can still get to these IP addresses also.

  • Yeah, that's all good, it's pretty easy, right?

  • It's very easy to do.

  • But what I have just done is really just deploy

  • a stateless service.

  • And of course your application probably has state.

  • So to show you a little bit more of how to deploy

  • a full stack application with state,

  • I'd like to invite my co-speaker Arjen Wassink

  • to the stage.

  • - [Arjen] Well thanks.

  • Is the mic on or?

  • - Yeah, where's the mic?

  • - Okay let's do it without the mic.

  • I'm (mumbling)

  • and everything that comes with it.

  • It's quite a lot (mumbling)

  • so one of the things that we see is that the kind of

  • applications that we were building some years ago,

  • (mumbling) and now we are having more and more

  • into development software we need to route, so

  • we're going with microservices.

  • So the sheer number of instances we have, many

  • instances getting burned.

  • We had to use three for that.

  • And like Ray already said, containerization

  • (mumbling) and the orchestration Kubernetes

  • really helps in creating an environment where we

  • can manage a box with a number of (mumbling).

  • So I want to test you Ray.

  • - Yeah?

  • - I brought some with me.

  • - Okay

  • - I created a small application.

  • Contains a mistake.

  • It's a (mumbling) based application

  • where you can enter your favorite artists,

  • the albums he has produced,

  • and the songs that are on it.

  • You can enter and add (mumbling) tracks, and so on.

  • And you can also delete and edit

  • so basically you've got a (mumbling).

  • I was at (mumbling) quite easy architecture.

  • We have a front end in AngularJS

  • that's being served for (mumbling)

  • a small (mumbling).

  • Backed by Java REST services

  • that take care of the logic, the CRUD operations, and the data is stored

  • and I just used (mumbling).

  • No offense (mumbling)

  • and running on Kubernetes.

  • - Okay, what was that?

  • - Before we get started, I want to

  • make it even more difficult.

  • - [Ray] All Right (laughs).

  • - I want to see it running on a Raspberry Pi Platform.

  • - Okay, all right, let's see it.

  • So I brought my micro data center

  • (laughs)

  • - For microservices?

  • - Yes, for microservices so we need to route (mumbling).

  • - Right, but you've done that for me already,

  • thank you very much.

  • (laughs)

  • - I was being polite.

  • Kubernetes is already running on (mumbling)

  • and that's all being sent.

  • (mumbling) If you want to use something yourself

  • There's a (mumbling) so if you have

  • holiday's coming so.

  • (laughs)

  • Play around with that and experiment with

  • social technology on a small scale and (mumbling).

  • Now, inside (mumbling).

  • - So remember this is the beauty of Kubernetes.

  • It runs in many, many different places.

  • It's not something that's limited to the Google Cloud

  • Platform of course.

  • Although it's probably the best place for you to run it.

  • But if you want to (laughs), you can also

  • run it on Raspberry Pi, which imagine,

  • this is not a very powerful machine, but

  • you can actually simulate the entire data center

  • with similar components at a small scale and

  • play with it.

  • (laughs)

  • So, let me do this deployment with MySQL

  • which actually has state.

  • And typically, if you start a Docker container

  • without the volume mount, that's not good for you

  • to run MySQL, why?

  • Because when you start MySQL, it's going

  • to write some data.

  • When you shut it down, it's gone.

  • When you start it again, it's going to start

  • off fresh without any volumes and you

  • don't have any data.

  • Anyone?

  • Yeah, (laughs) that could be a problem

  • if you try to run a stateful application

  • without keeping state (laughs).

  • Yep, great.

  • So with the NFS Mount, or if you are

  • in your own data center, you can actually

  • share different drives, different volumes

  • in different ways, whether via NFS, iSCSI, RBD,

  • GlusterFS, there are so many different options

  • and many, many of them are actually supported

  • within Kubernetes.

  • And the first thing you need to do is

  • to register the volume inside of Kubernetes, okay?

  • So even if you have that physical device available

  • somewhere you have to register it.

  • Because Kubernetes actually needs to know how much

  • storage is being offered for this particular volume

  • which is in the capacity column right here.

  • And the second thing is how do you actually connect

  • to the volume?

  • How do you connect to it?

  • So different types of shared file systems have

  • different ways of connecting to them,

  • and in this case we're using NFS,

  • so I'm going to say the server and also the path.

  • If you're using something else like say GlusterFS,

  • you'll do it differently, okay?

  • And we can support many different ways.
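
A sketch of such an NFS-backed PersistentVolume registration (the server address and export path are placeholders):

```yaml
# Sketch: register an existing NFS export with Kubernetes.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-vol1
spec:
  capacity:
    storage: 1Gi          # how much storage this volume offers
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 10.0.0.10     # placeholder NFS server
    path: /exports/vol1   # placeholder export path
```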

  • So the first thing I need to do is to register it.

  • So I have created this thing and I'm going to say

  • kubectl create -f the volumes/nfs-vol1.yaml

  • And that is going to, oh, it already exists, let me check.

  • kubectl get pv.

  • No, okay.

  • So let me delete that so it's really easy to just

  • delete the volume as well.

  • So I can say delete PV, yeah, let me do that.

  • So I'm going to say delete the first one and also

  • delete the second one, okay?

  • And now they're deleted, but the data is still there.

  • It's just that they're not

  • registered with Kubernetes anymore.

  • I'm going to re-register it so that it will work

  • properly for me, okay?

  • So I have created the volume here and it has

  • the capacity of one gigabyte and it is available.

  • Now the second thing I have to do is to lay down

  • a claim because all of these volumes are just resources

  • to Kubernetes.

  • And these resources could potentially be shared or

  • reused right?

  • When the volume is actually being released,

  • you don't want it to just sit there without being

  • used if it's not important anymore.

  • So you wanna be able to reuse the disk with somebody else

  • if they need the capacity.

  • And so now what I need to do is,

  • I need to say lay down a claim to say

  • I need to use the volume, please give me one

  • that best describes my need.

  • And to do that, I need to create what we call

  • a PersistentVolumeClaim, a PVC.

  • Okay Persistent Volume Claim, and notice here, all

  • I need to do is to describe how much capacity I need.

  • And the type of access we want it to do,

  • whether it's read-write from just a single container

  • or a single pod, or is it read-write from multiple pods?

  • But notice here I'm not specifying which volume

  • I want to use, I just say how much I need.

  • Why, because Kubernetes will then find the right

  • volume that best fits my need and assign it to me.
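
A sketch of that claim:

```yaml
# Sketch: ask for storage by size and access mode; Kubernetes binds
# the claim to whichever registered volume best fits the request.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
  - ReadWriteOnce         # read-write from a single pod
  resources:
    requests:
      storage: 1Gi
```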

  • So if I need to lay down a claim and say I need

  • to use a volume, I need to create this PVC.

  • I can do kubectl create -f mysql-pvc

  • and that's going to make the claim.

  • And once I do that if I do a get pvc

  • what I can see is that it's actually bound to

  • one of the volumes that's available that best

  • fits my need and now MySQL PVC can access this volume.

  • Now then what I need to do is to update my deployment

  • to make sure that I am mounting the right volumes.

  • And that's really easy to do.

  • All I need to do is specify it.

  • Say here volumes is referencing to the PVC claim

  • that I just created.

  • Right it's like a ticket for me to use the physical volume.

  • And then I need to specify the mount path.

  • Which is here, that I need to mount this disk into

  • /var/lib/mysql, okay?

  • And the beauty of it is that Kubernetes will actually

  • take care of mounting this volume for you behind the scenes.
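
The relevant fragment of the MySQL pod template might look like this (the image tag is a placeholder):

```yaml
# Sketch: pod template fragment wiring the claim into the container.
spec:
  containers:
  - name: mysql
    image: mysql:5.6                 # placeholder image/tag
    volumeMounts:
    - name: mysql-data
      mountPath: /var/lib/mysql      # where MySQL keeps its data
  volumes:
  - name: mysql-data
    persistentVolumeClaim:
      claimName: mysql-pvc           # the "ticket" to the physical volume
```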

  • So if I want to run this MySQL server, right?

  • So create MySQL service and also the deployment.

  • You all know what services are now.

  • And if I do that, what this is going to do

  • is there we go, let me refresh.

  • What this is going to do is to run MySQL on

  • one of these nodes and before it starts the application,

  • starts MySQL, it's going to mount that NFS

  • volume for me as well.

  • And then start the container and then make sure that

  • the volume is mounted into the path I specified.

  • So now MySQL server has the right data.

  • - [Arjen] Okay, yeah.

  • - [Ray] Oh yes, we have sound!

  • - [Arjen] We have sound.

  • - [Ray] Yay!

  • - [Arjen] Yeah, you can access it also.

  • - [Ray] Yeah I can access it, but I can also access

  • it directly by deploying the front and the back end.

  • - [Arjen] Okay, I want to see the back end running.

  • - [Ray] Yeah (laughs), so just remember all of these

  • things are running inside of the Raspberry Pi processor.

  • So I'm going to do something very quick because of the time.

  • I'm going to use create -f.

  • What that's going to do is deploy everything for me

  • in one shot, oh!

  • I'm sorry, I think I used the wrong one, so now

  • MySQL is actually deployed here, there we go.

  • So now I have MySQL,

  • the front end and the back end, okay?

  • Now when I created the front end how do I get to it?

  • - [Arjen] Yeah we have the services running here on

  • the Raspberry Pi.

  • But normally the services use

  • an internal cluster IP address which can be used.

  • But it can't be used outside of the cluster.

  • So you want something to expose the service to the outside.

  • In Google Cloud you have the load balancer for that.

  • Ray already showed you.

  • One easy thing to do that this small micro data center

  • is to use a NodePort, and what actually

  • Kubernetes does then when creating a service

  • is dynamically assign a port number on

  • each node for that service.

  • So we have a certain port number at which we can reach

  • that service within the cluster.
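
A sketch of a NodePort service (the name and labels are placeholders):

```yaml
# Sketch: NodePort tells Kubernetes to open a port (from a fixed range)
# on every node and route it to the service's pods.
apiVersion: v1
kind: Service
metadata:
  name: cddb-frontend
spec:
  type: NodePort
  selector:
    app: cddb-frontend    # placeholder label
  ports:
  - port: 8080
    targetPort: 8080
```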

  • - [Ray] So in this particular case,

  • it dynamically generated a port number for me.

  • And so you can avoid all sorts of port conflicts

  • if you were to expose ports directly from the host, right?

  • So it's on this port, so if I go there,

  • all right, that works, pretty good.

  • So the application's up and running.

  • Can you see it?

  • - [Arjen] There's already data.

  • - [Ray] Arjen's CDDB database.

  • So he likes Air (laughs), I don't know Air, but yeah.

  • There we go, and you can click into it.

  • And then you can see all the tracks when it comes back.

  • It's running on Raspberry Pi, so it's a little slow.

  • But you can modify these things as well.

  • So we have the full application running, fantastic.

  • - [Arjen] Okay that's nice.

  • - [Ray] That's too easy, too easy, what else you got?

  • - [Arjen] Well the product owner (mumbles) just came

  • and we have a new version we have to ship.

  • So marketing has decided that the coloring scheme

  • was not that nice, so they offered a new styling.

  • So we have a version 10 available for you,

  • and we want to roll it out into production.

  • - [Ray] Okay, so typically what you may do is to

  • deploy the new version and then shut down the old ones.

  • But what I like to do, is to do a rolling update for this.

  • What I want Kubernetes to do for me is to roll out

  • the new version one instance at a time,

  • or multiple instances at a time.

  • And shut down the old instance when a new instance

  • is up and running.

  • And you can do this very easily with Kubernetes too.

  • And remember deployment, the box in blue?

  • That can also manage the rolling update for me.

  • So all I need to do is to do a kubectl edit the deployment

  • and I can say the CDDB front end.

  • And then down here I have the image that I can change.

  • And if I want to update this to version number nine,

  • I just update it, I save this file.
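
The fragment being edited looks roughly like this (the image path and tag are placeholders):

```yaml
# Sketch: inside `kubectl edit deployment cddb-frontend`, only the
# image tag changes; saving the file triggers the rolling update.
spec:
  template:
    spec:
      containers:
      - name: cddb-frontend
        image: gcr.io/my-project/cddb-frontend:9   # bump the tag and save
```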

  • Now look this is really cool.

  • I'm looking inside the state of Kubernetes via

  • my local text editor.

  • And I can modify this state just by saving it.

  • Now as soon as I save it, what it's going to do

  • is perform a rolling update.

  • And you can see the new instances coming up,

  • and then once it's ready, it's going to shut down the

  • older instances.

  • And while all this is running,

  • because we have readiness checks and liveness checks

  • set up in Kubernetes,

  • you can actually just refresh the page and, ooh!

  • - It's already done. - And it will still work

  • during the rolling update.

  • Yeah, did I use the wrong version?

  • I think I did.

  • - [Arjen] Version 9, yeah.

  • - [Ray] But it's gone to a different color here.

  • It doesn't really work with the rest of the color

  • scheme here.

  • So I think we should probably.

  • - [Arjen] Yeah, we have a problem now in production.

  • Customers don't like the new color scheme.

  • So we want to rollback as fast as possible.

  • - [Ray] To do a rollback is really easy.

  • I can actually see a history of my deployments as well.

  • So I can do a

  • kubectl rollout history deployment cddb-frontend

  • and I can actually see a list of deployments I have made.

  • Now if I actually deployed this thing with a

  • --record flag,

  • we can actually see what caused the change.

  • What caused the new deployment.

  • And you can rollback to any of the revisions that you

  • see that's still in history, which is awesome.

  • Of course you don't keep all of the histories.

  • There is a limited amount of that,

  • but if I need to rollback, this is what I do.

  • I can say rollout undo,

  • to just go back one deployment, and let me do that.

  • And just say undo my deployment cddb-frontend.
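
The rollback commands, as a sketch:

```sh
# Sketch: inspect past rollouts and undo one.
kubectl rollout history deployment cddb-frontend   # list revisions (--record on deploys captures the cause)
kubectl rollout undo deployment cddb-frontend      # go back one revision
kubectl rollout undo deployment cddb-frontend --to-revision=2   # or pick a specific one
```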

  • And what it's going to do, check this out.

  • It's gonna do another rolling update as well,

  • and now you're rolling back to your previous versions.

  • Super simple to do, and again because we have the

  • health checks and readiness check as we're doing

  • this rolling update, all the connections will be

  • routed to the instance that's ready to serve.
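
Those checks are declared per container; a sketch (the paths and timings are placeholders):

```yaml
# Sketch: a readiness probe keeps traffic away from pods that aren't
# ready yet; a liveness probe restarts containers that stop responding.
readinessProbe:
  httpGet:
    path: /health          # placeholder endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
```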

  • And now we just roll back.

  • That's also too easy Arjen, what else do you got for me?

  • (Arjen laughs)

  • - [Arjen] I'm really curious.

  • We have now our micro data center.

  • How many people have been in a data center themselves?

  • - [Ray] Quite a few, okay!

  • - [Arjen] Quite a few, yeah.

  • And you wanted to pull a plug somewhere in the

  • data center and see what happens.

  • - [Ray] Oh no, yeah (laughs)

  • - [Arjen] Yeah Yeah there's one!

  • (both laughing)

  • - [Ray] There's one, I would love to do that too.

  • - [Arjen] Yeah you love too?

  • - [Ray] No, actually not here.

  • - [Arjen] At Google.

  • - [Ray] No (laughs) I can't.

  • - [Arjen] You're not allowed in there?

  • - [Ray] But I also don't want any plugs to be pulled today

  • On this Raspberry Pi cluster,

  • or is that what you wanna do?

  • - [Arjen] Yeah.

  • So we had already a volunteer here, so come up on stage.

  • - [Ray] Whoo hoo, give this gentleman a hand.

  • Brave person trying to break my cluster.

  • - And before you pull it. - Wait hold on a second!

  • Why don't you swing back here, swing to the side

  • so people can see this in the video.

  • - Yeah from the side?

  • To make it really nice I want to see the nodes

  • going down on which MySQL is running.

  • - Wait, you want to see MySQL go down?

  • - Yeah.

  • - No, come on not this!

  • - That does happen in production.

  • - That has happened before. (laughs)

  • - And if it busts, I will be called

  • at some point, it's gone wrong and it's in the

  • middle of the night and I don't want that.

  • - You don't want that, okay. - No.

  • - So let me just show you that the MySQL server

  • is up and running.

  • As you saw in there, so I can do

  • mysql -p --host 10.0.0.120, okay?

  • And what is the password?

  • - [Arjen] Root.

  • - [Ray] Of course.

  • - [Arjen] Yeah, everything is root.

  • - [Ray] It's not Superman, it's root.

  • So I can show databases and we have the data here, voila!

  • We have the quintor database which has Arjen's favorite

  • songs.

  • And it is running on 04,

  • so it is running on the fourth node.

  • - [Arjen] The fourth node?

  • - Whatever you do, do not plug the

  • first one, it's the fourth one.

  • - The bottom one. - This one?

  • - Yeah, there we go.

  • - Pull this network cable? - Are you ready for this?

  • (both sighing)

  • - All right, oh, so it's gone, it's gone.

  • - And now? - And nothing happened.

  • Nah I'm just kidding.

  • So what Kubernetes has been configured to do

  • is to check the health of the machines as well.

  • And we configure it to check every 30 seconds or so.

  • So in about 30 seconds, which is right about now.

  • - Yeah - If it ever works,

  • you're going to see node 4 turn red.

  • Turning red.

  • Yeah there we go, whew!

  • (applause)

  • Wait, wait that's too easy, of course it turned red

  • it went down!

  • - Oh and what's happening now?

  • - Check it out, it actually restarted

  • MySQL for me as well.

  • Yeah?

  • It's not bad, MySQL is now up and running.

  • Yeah very good and actually this is what Kubernetes

  • actually does behind the scenes.

  • You remember the volume mount?

  • Well the volume is no longer just mounted on the machine

  • that died because it went away.

  • Kubernetes managed the volume mounts for you.

  • So now if I go into that node

  • ssh into root@10.150.4

  • Oh boy, Dot 3

  • By the way, I get really nervous about this demo

  • because unplugging MySQL is not something that

  • you should be doing.

  • I do not recommend just trying this at home.

  • But if I wanted to go back here.

  • (laughs) Definitely don't do it.

  • Wait, what, oh yeah sorry.

  • It's 0.4.4 isn't it?

  • - [Arjen] Yeah.

  • - [Ray] That's the name, oh sorry.

  • Four is gone so it's on three now.

  • So if I go there, check this out, this is really cool.

  • If I can connect to it, there we go.

  • If I go root.

  • If I run mount,

  • I see the NFS mount is actually here,

  • so that's a good sign.

  • The other thing I wanna make sure is that I can actually

  • connect to it.

  • Now remember, MySQL just got rescheduled to a

  • different machine, but I'm going to use the same

  • command line to connect to it with the same IP address.

  • Because it is using a stable IP exposed as a service.

  • Now if I go to root, it connects!

  • That's not bad so far.

  • - [Arjen] Is the database still there?

  • - [Ray] Yeah, oh is the database still there, let me see.

  • So not use, so show databases.

  • Yes! It is still there.

  • But do we have the right data (laughs)?

  • Are we cheating?

  • Do we have the right data?

  • - [Arjen] Different volume, yeah.

  • - [Ray] There we go, so I just refreshed my application.

  • And as you can see, it's connected back to this

  • right database.

  • 'Cause it had to retry, reconnect.

  • So it reconnected and we got all the same data here.

  • So yeah very good, I guess that worked! Thank you very much!

  • - [Arjen] So thanks for our real life chaos monkey.

  • - [Ray] Yeah.

  • (applause)

  • And if you plug it back in, then it will be

  • marked as schedulable, ready to redeploy as well.

  • Not bad.

  • - Well we have the application now running on

  • my micro data center, really nice, but we can't

  • go into production with that.

  • - If you don't want to use Raspberry Pi, sure (laughs).

  • But if you want to run it on-prem

  • with a more powerful machine you can.

  • But we can also run it in Google Cloud as well

  • like I showed earlier.

  • And the beauty of it here is that

  • if you want to achieve a state

  • where you want to not only be able to manage your

  • services efficiently, just like what we have shown.

  • But you also want them to have a hybrid deployment across

  • multiple data centers or multiple different providers

  • whether it's cloud or on-prem.

  • You can actually use the same set of descriptors

  • like here this is running locally on my machine.

  • I have the same deployment yaml file which is

  • nice because you can check in your artifacts.

  • The only different thing I'm doing here is the volume mount.

  • Why? Because in the cloud, rather than using

  • NFS, I can actually mount a real disk from

  • the cloud, right?

  • I can provision new disks, I can mount it.

  • So all I have to do is to provision that volume,

  • register it with a different PersistentVolume descriptor.

  • And here I'm just saying that I wanna use a

  • GCE disk and so I can go ahead and create that.

  • And so I can register it and then I can go ahead

  • and mount, lay down the claim.
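
A sketch of the cloud variant (the disk name, size, and zone are placeholders):

```yaml
# Sketch: the PV now points at a GCE persistent disk instead of NFS.
# The disk itself would be created first, e.g.:
#   gcloud compute disks create mysql-disk --size=200GB --zone=europe-west1-b
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-volume
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: mysql-disk    # placeholder disk name
    fsType: ext4
```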

  • So I can then do the mysql-pvc, right?

  • And then once I have done that,

  • By the way, this is all happening in the cloud now.

  • And what I can do finally is to deploy this application.

  • I can just do a create -f on the whole directory.

  • There's one very big difference here,

  • which is in terms of the load balancer.

  • Because rather than exposing on the node ports directly,

  • on each individual machine, I can actually instruct it

  • to create a real load balancer directly

  • from the YAML file as well.
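
A sketch of that service (the ports and labels are placeholders):

```yaml
# Sketch: type LoadBalancer asks the cloud provider to provision a
# real external load balancer with a stable external IP.
apiVersion: v1
kind: Service
metadata:
  name: cddb-frontend
spec:
  type: LoadBalancer
  selector:
    app: cddb-frontend    # placeholder label
  ports:
  - port: 80
    targetPort: 8080
```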

  • And now the application is being deployed.

  • Let me just go back and take a look, the same application.

  • The only thing I really had to do was to make

  • sure that of course the ARM binaries don't work in

  • the x86 environment.

  • So I had to change the base image so that rather

  • than using an ARM Java binary, I'm using an x86 Java binary.

  • And once this is up and running,

  • we can actually go and see it.

  • Now what this is doing right now,

  • is just waiting for the external load balancer

  • to be created.

  • So if I say get svc,

  • what this is going to do is to create a real

  • load balancer with a real external IP address.

  • And there we go, so that's the external IP address.

  • And I can go there, this is too hard.

  • And there we go, we have the same application deployed

  • in the cloud with the same descriptors.

  • It's very easy to do.

  • So all of a sudden you can just deploy to multiple

  • environments with exactly the same way.

  • And that's beautiful.

  • - Yeah. - Yeah.

  • - It's really nice, you have one set of

  • configuration files and you use it for different

  • environments to set them up.

  • So that's really nice.

  • - So if you are interested in this technology,

  • please give it a try.

  • And if you want to learn more about Kubernetes,

  • go to kubernetes.io.

  • And if you wanna try Google Cloud Platform you can go to

  • cloud.google.com as well, and you can provision

  • Kubernetes clusters very, very easily by a click of a button;

  • it will install everything for you and manage everything

  • for you as well.

  • If you wanna try it on the Raspberry Pi cluster,

  • check with Arjen.

  • We have a really, really good blog that he wrote.

  • So you can buy the right components

  • and play with this as well.

  • So thank you very much for your time.

  • - [Arjen] Thank you.

  • (applause)

  • - [Announcer] All right, thank you very much.

  • - Do we have time for questions?

  • - [Announcer] There's no time for questions, but

  • there are very interesting questions so we will make

  • some time for questions.

  • - [Ray] Okay, all right, nice!

  • - [Announcer] Before anyone decides not to wait for the

  • questions and leave, please vote.

  • I see that we have massively enjoyed this presentation,

  • but I also see that the actual head count is much higher

  • than the number of votes.

  • So please vote for this session.

  • - Yeah, thank you.

  • - [Announcer] There's many questions and we can't handle

  • them all, but since you're a Google guy, here is

  • an interesting one. - Oh no.

  • - [Announcer] How well does Kubernetes

  • fit Amazon Web Services?

  • - How well does it work?

  • It actually works!

  • In fact at one of the conferences that I've been to

  • about more than half a year ago,

  • one of the attendees came over during lunchtime

  • and said that this is awesome, I want to show

  • my boss how to do this.

  • I'm like, yeah sure, let me deploy this. Are you using Google Cloud?

  • Um, no.

  • So, what do you need to deploy this on? Amazon.

  • Ah, but you can actually do it

  • over lunchtime with the right tool setup.

  • Downloading the right tools, it actually just

  • installs and you can provision services there as well.

  • And it actually works with their load balancers and their

  • disks as well, so you can give it a try.

  • But if you're running on Google Cloud Platform of course,

  • we also have really good support with the click

  • of a button, with a single command line,

  • it can also provision the cluster for you, yeah.

  • - [Announcer] Last question.

  • What's your experience with database performance

  • when running on NFS volumes?

  • (Ray laughing)

  • - That's a great question! - Or Raspberry Pi.

  • - And you probably should not do it (laughs).

  • No kidding, just remember that NFS is

  • something that we're using for the demo.

  • Some people still use it for a variety of things.

  • But if you want to use something faster you can.

  • You can use RBD, iSCSI and a bunch

  • of other things as well, yeah?

  • - [Announcer] All right great.

  • - Well thank you very much, are there more?

  • - [Announcer] Thank you, that's it, thank you very much.

  • - [Ray] All right, thank you!

  • (applause)

(light classical music)
