CHRIS RAMSDALE: Hello, and thank you
for coming out to this year's Google I/O conference.
I'm Chris Ramsdale.
KATE VOLKOVA: I'm Kate Volkova, software engineer on App Engine.
CHRIS RAMSDALE: Yes.
And today, if you didn't get a chance
to attend the keynote session, there
were a lot of great technologies
announced and talked about.
Android had a slew of amazing technology that's coming out,
for both the consumer and the developer.
Chrome had some great advancements.
The Cloud Platform has some great technologies
that are coming out.
And I'm happy to talk about a few of those
as we go through the session today.
So if you did get to attend, you saw
that our director of product management, Greg DeMichillie,
was using a demo application called WalkShare, which
was an Android client that was hooked up
to back-end services that are running on our Cloud
Platform-- namely managed VMs that were running on top of App
Engine and a Redis Cluster that was
running on top of Compute Engine--
and using the Datastore to store data.
That allowed you to save walks, and then share them
with your friends, and then have your friends comment on them.
Well, today in this session, Zero to Hero with Google Cloud
Platform, we're going to take a look at that application
and do a deeper dive into how we built it,
using our unified tool chain, our managed platform.
Kate's going to talk a bit about how we run Google production
services on your laptop so you can be an efficient developer.
And then finally, we'll look at some advancements
we're making in the DevOps space so that you can actually
debug your application in production
and feel confident about what's running.
So to get started, we need to start
with the foundation, a cloud platform project.
And that's super simple to do.
All we need to do is bump out to our developer console here.
We do Create Project.
We'll give it a sample name, demo name.
We'll call it walkshare10.
And we'll go ahead and create that.
Now, that's going to take about 10 to 15 seconds.
And while that does happen, let's
take a look at some of the changes
that we've made in our Developer Console since our Cloud event
back in March.
So the focus has been on taking the user experience
and really consolidating it down to the core pieces
of your application.
So as you can see on the left-hand side,
we have APIs and Auth, so that your application can connect
back to Google Cloud services and other Google services,
and so that third party applications can connect back
into your application, as well, via the endpoints
that you might surface through Cloud
Endpoints or any RESTful-style API.
We have Monitoring, right?
So a consolidated view into the metrics
that are coming from your application-- the performance
of that application as well as centralized logging,
coming from Compute Engine and App Engine, which
I'll touch on throughout the talk.
Source Code for storing your source code in the cloud,
as well as doing your builds in the cloud.
Compute, a consolidated home for both App Engine and Compute Engine.
And finally, Storage for all things storage
related, whether it be non-relational, relational,
or blob data.
And then Big Data for our analytics tools,
such as BigQuery and Cloud Dataflow,
that we announced today.
So now we'll see that our project
has been successfully created.
So simple enough?
Well, actually, through that process,
within that 10 to 15 seconds, we've
created quite a bit of infrastructure
for you and on your behalf.
We've created namespaces so that your application can connect
back to our multi-tenant services, our cloud services,
via Memcache, Datastore for storing NoSQL type data, task
queues for communicating within your application.
We've created those namespaces for you
so you can securely hook into those services.
We've given you a centralized logs repository
so you can funnel all of your data from your compute back
to one spot, where you can view it and interact
with it via the Logs API or through our Developer Console.
We've given you a Git repository so you can actually
store all your source code into the cloud,
enable things like Cloud Debugger,
like you saw today from Greg.
And then we also give you agents that
are running on top of these VMs that are hooking back
into all of our monitoring data.
So they're monitoring the applications that are running,
they're monitoring your compute, and they're
funneling all of that data back into the dashboards
that we have.
So now that we've got this up and running,
we've got our project created, let's actually
add some source code to it.
And I want to do that with our Google Cloud SDK.
It's our unified tool chain that brings together
all the services within Cloud, be it
App Engine, Compute Engine, Cloud Storage, Cloud Datastore,
pulls it all into one unified tool
chain so you have those services available at your fingertips.
So see that if we bump out to our terminal
here, I have some local source.
And what I want to do is I want to take that local source
and push it into the cloud, and then actually have it built
and be deployed, and we can check out
our application running.
So this is pretty straightforward.
I'm going to use our gcloud command, part of the Google Cloud
SDK. And since I'm terrible about remembering command line
options, command line parameters,
I'm happy that they actually have code completion
and command completion built into the SDK.
So if I just do a double tap, I'll
get the commands that I can run.
You know, sometimes in life, it's
just the little things that actually make your life much
easier. So I do gcloud init, and we'll use that same project
that we just built.
All right, and that's going to go through,
and it's going to create some local directories for me,
in which lies some metadata about the Git repository.
All right, let's clear that for ease of use.
Now, what we'll do is we'll navigate into that directory,
and we'll copy in our source.
OK, so we get a Java application,
represented by a pom.xml file and some source.
What we'll do is we're going to go ahead and add that.
Let's see, initial commit for the comment.
All right, all looks good.
And then finally, if we just do a Git push,
it'll push that up into our Repo.
OK, there we go.
So it's taking my local source and pushing it into the cloud.
And the idea there is that we want
you to be a productive developer and allow
you to use the tools that you're used to, in this case,
Git, to develop locally and then finally, run.
And when I say run, I mean run in Google production data centers.
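Put together, the steps above look something like this terminal session. It's an illustrative sketch: the walkshare10/default directory layout and the local source path are assumptions, not the exact paths from the demo.

```
# Tie a local workspace to the cloud project; this also clones the
# project's hosted Git repository and writes repo metadata locally.
gcloud init walkshare10

# Copy in the Java source (pom.xml plus src/), then commit
# and push it up to the cloud-hosted repo.
cd walkshare10/default
cp -r ~/walkshare-src/* .
git add .
git commit -m "initial commit"
git push origin master
```

From there, the hooks on the hosted repository take over: the push triggers the Jenkins build described next.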
So as this Git push is going from source into the cloud,
into that Git repository, we're going
to see that there are services that
are picking it up and actually building it.
So let's funnel back over to our developer console.
And if we go to our project, we do
a refresh on the Git repository.
OK, so we see my source code now, up in the cloud.
And you see my last comment, which was just initial commit.
And then since this is a Java application,
we need to build it somewhere.
What we'll see is if we click down into the Releases section
here, we should see a build that has been kicked off.
Give it just a second.
There we go.
And actually, by this time, it's
been built, tested, and deployed.
So where is it actually building?
So we saw that we have a Git repository in the cloud.
I pushed everything up there.
Something had to kick in.
What we're doing is, on your behalf,
we're spinning up a Compute Engine virtual machine
that's running Jenkins.
So to do continuous integration for you,
it's picking up that push, because there's hooks into it,
it's building on the VM, it's running my tests.
And if all my tests pass, it actually does a deploy out
to App Engine.
And we can see that right here.
And if we drill in, we can see the
build logs and the diff and everything like that.
So if everything is working correctly,
we should have a new version up and running.
And if I go to walkshare10.appspot.com, voila.
So there's our application running.
If I click here-- so we'll post a silly comment.
Everything is saved.
So now we've pushed everything into the cloud,
and it's running.
Now, once this is running at scale,
let's say that we wanted to do some sentiment analysis
or something on this application.
So I've got hundreds of thousands
of comments that are running in, that are being stored.
And I want to take and do some sentiment analysis
on those comments that are coming in.
Now, to do that, I know that I'm going
to need a bigger, beefier machine.
I'll need more CPU and more memory, right?
And furthermore, I'll need some kind of library
that will allow me to do analysis
on the streams that are coming in.
Something like OpenCL, which is a fantastic
library for doing this.
Now, the problem is that, historically, App Engine
hasn't supported this.
For security and scalability reasons,
you're not able to run native code, so C or C++ code,
or access things like the file system or the network stack.
It also doesn't have the memory configurations and the CPU
configurations, the footprints, that I
need to run sentiment analysis.
At the same time, though, I don't
want to jump all the way over into unmanaged infrastructure
as a service and run all those VMs myself.
So lucky for me that back in March of this year,
at our Cloud Platform Live event,
we launched into limited preview a new feature
called Managed VMs, which takes the management platform
capabilities of App Engine and merges those
with the power, the control, and the flexibility of Compute
Engine, thus providing you the best of both worlds.
And in the spirit of making developers highly efficient,
we've made this super simple for you to get up
and running, to move from App Engine into Managed VMs.
All you have to do is change your configuration files.
So here, we're looking at a Java configuration file.
You set the VM property to true.
You specify which machine type you would want.
In this case, we want an n1-standard,
but actually there's a little typo in this [? deck. ?]
You actually want a high CPU machine here.
But the nice thing is that with this property,
you can specify any Compute Engine machine type,
both that we have now and the ones
that we're investing in
in the months to come.
And then we need to specify the number of instances
that we want.
In this case, I just say that I want five.
You can put whatever you want to in here.
And furthermore, you can programmatically
change the number of instances when
you're running in production.
And then, in the coming months, we'll
have auto scaling that will apply to this as well.
So we'll really build out the complete offering.
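The configuration change described above, in the Java module's appengine-web.xml, would look roughly like this. This is a sketch from memory of the preview-era schema; the vm-settings element name and the exact machine type string are assumptions.

```xml
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
  <application>walkshare10</application>
  <version>1</version>
  <!-- Opt this module into Managed VMs. -->
  <vm>true</vm>
  <!-- Any Compute Engine machine type; high-CPU here, per the talk. -->
  <vm-settings>
    <setting name="machine_type" value="n1-highcpu-4"/>
  </vm-settings>
  <!-- Fixed instance count; changeable programmatically in production. -->
  <manual-scaling>
    <instances>5</instances>
  </manual-scaling>
</appengine-web-app>
```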
At that point in time, you'd be running in production,
and you'd have access to those native resources
that I talked about.
You'd have that flexibility of Compute Engine.
So you could get at the lower level network stack,
or the file system, you could run the OpenCL or C or C++
binary that you wanted to run, and you'd be able to do
the sentiment analysis that we were looking at.
Now, furthermore, it's not just about being
able to run these native libraries
and have access to the network stack.
This also brings to the front a new hosting environment,
where we can build new runtimes, both internal
and external to Google.
So we won't be constrained to just having Java and Python.
We could look at having Scala or Haskell or [? Node. ?]
And to prove this out, we've been
working with internal teams-- both the Go team
and the Dart team-- which are both down
in the sandbox today, as we speak.
We've worked with them over the months to build this out
and to actually have them vet out
this new hosting environment, where
they can run these run times.
And we're looking to partner with other people
within the community and other open source providers
to make this happen.
And so at the end of it, once you've made your configuration
changes, all you need to do is save your file,
do another Git commit, do another Git push,
and you're running in production.
And when you do that push, what's nice,
and what is in the spirit of making developers highly
productive, is that that actual push, those five
lines of configuration, gets pushed out,
and you have Managed VMs running in production.
And what that means is that we give you
the hosting environment.
So Compute Engine VMs, we make sure that they're healthy,
and we put health checking and healing on them.
We give you a web server, because after all, you're
serving web traffic.
We give you an application server to run your application.
We give you a Java Runtime Environment to run Java.
And then we run all your third party code as well.
Then we take and install a load balancer, and wire it all up,
and hook it into your web servers.
And we configure that all for you, on your behalf.
You don't have to specify any of that.
And furthermore, in this model,
you get us providing operating system updates
and security patches for that hosting environment,
similar to how we do on App Engine today.
We do colocation and locality optimizations.
What that means is that we take all the pieces
of your application and make sure
that they're running together in a very highly available way
so that we're minimizing network latency
and latency within your application
and giving you very, very high SLAs.
And then finally, you get Google as your SRE.
And what does that last point mean?
That's a great question.
At Google, there's a set of software engineers
that are tasked with making sure that services like Search,
and Gmail, and Geo have high uptime
and are running in a very efficient manner.
And with Managed VMs, you're getting those same SREs:
they're watching over your production deployments
in the same manner that they're watching over Search,
and applying the same monitoring that they've had
on App Engine for years and years and years.
So what that means for you as a developer is
you focus on your code and what you want to build.
And when you deploy it into Google production,
the SREs are watching over that-- the SREs and a lot
of our services-- to ensure that you have high uptime.
So if there's something like network degradation in a data
center, you don't need to worry about that.
We got that covered.
If there's some link between two data centers
that has degraded performance, you
don't need to worry about that either, right?
If there are external events that are actually
impacting the running of your applications,
we've got that covered as well.
We've got monitoring services built into the data
centers that funnel back into our SREs,
and into our graphs and our dashboards,
to make sure that everything is running performantly for you.
Now, with Managed VMs, one other point
is we're running all of that inside of containers
on top of Compute Engine.
And we happen to be using Docker as our technology.
Now, why are we using Docker?
It's pretty cool, right?
Who's heard of Docker in the crowd today?
Yeah, that's what I thought.
So it's a great technology, and it
does a lot of amazing things.
But more specifically, what it does
is it's a tool chain that gives us static binaries
that our services can scale up and scale down.
It's like a template that you can just
go punch out new ones, right?
Give me one more of these, give me one more of these,
give me one more of these.
And since it's static, we don't have
to do any initialization when we spin it up.
It's very good for scaling.
Also, it provides portability.
So that which you are building on your local environment,
we can easily run in production, and it just moves with you.
And finally, it provides a container host environment
that allows us to manage the host environment,
providing those OS updates and security patches
that I had talked about before, without impacting
that which is running inside the container,
concretely, your application.
And for more about how we're using containers within Google,
be sure to check out these two sessions on containers
in Google Cloud and containers in App Engine
over the course of today and tomorrow.
OK, so just to check in, we've gone through quite a bit here
in a matter of about 15, 20 minutes.
We've clearly moved away from the zero stage
and closer to the hero stage, right?
We've gone through getting started
and getting something up and running.
We've gone through getting some code,
hooking that code up to our Git repository in the cloud.
We've pushed, we've seen it build,
and we've deployed out to production.
We've utilized a new feature set within Managed VMs.
So I definitely think we're making
a fair amount of progress here.
But I did mention that we are going to talk about some code,
and Kate's going to dive into how
we're doing production of Google services on your laptop.
So without further ado, I'm going to hand it over to Kate.
Clap for her, come on.
Kate, Kate, Kate.
KATE VOLKOVA: So here, on my laptop,
I've already got the code for our WalkShare demo.
And you've seen the diagram of the project already,
maybe even more than once today.
So I'm not showing it again.
But just as a quick reminder, we've
got an Android mobile application,
and then we've got a whole bunch of modules
running on App Engine servers, or on Compute Engine
instances, that process comments or display
other stats for us.
So let's see what we are going to concentrate
on today, which is App Engine modules,
and in particular, your workflow when
developing on our platform.
So you see here, I've got three modules.
And the second one is the comments server, written in Java,
that runs on Managed VMs.
And the third module is a server talking
to the [INAUDIBLE] storage, written in Go.
So to allow you to iterate more quickly when developing
on our platform, our SDK provides a set of tools
to emulate App Engine production environment locally
on your machine.
And as all the other Google Cloud Platform command line
tools, it is now bundled under Cloud SDK
and available under gcloud command.
So let's try to run that command.
So here, what I passed to it is just the output level
that I want to see.
Then preview, saying that we're in preview right now.
Then app, saying that it's the App Engine component.
Then run, the command that I actually want to do.
And then the list of modules, plus the dispatch file
that tells App Engine which request
to route to which module.
thought it could take a couple minutes.
So here is the diagram to explain
what was going to happen.
So when you run gcloud command, app run command,
we start the development server for you,
locally, on your machine.
And that development server is just
a web server that simulates running App Engine production
environment locally on your machine.
And by simulating here, I mean, first, enforcing some
of the sandbox restrictions that you would have
running an App Engine application in production, like
the restricted access to the file system.
And second, which seems more important to me,
emulating all of our services,
again, locally, on your machine, like Datastore or Memcache.
And on top of that, you also get the local implementation
of the admin console that, again, helps you debug the app.
So nothing really new yet.
But if you remember, one of our modules
is a module running on Managed VMs, and it has this magic
vm equals true setting in its App Engine
configuration file, appengine-web.xml.
So how we're going to emulate that-- I mean,
starting and restarting several virtual machines,
one for each instance, on your laptop
would significantly slow things down, if not
completely kill it.
So here, the technology that is gaining more and more momentum
lately, called containers, comes to the rescue.
And we use Docker containers to provide you
with a local experience when developing for Managed VMs.
So when you have VM equals true in your App Engine
configuration file, development server
will trigger the Docker build command and build the image
with your code, or your binaries.
And then it will run that command
and start the container for you and start routing requests
from development server to your container
and to your app running inside of that container.
And to make this scheme work on all the platforms
that we support, be that Linux, Mac, or Windows,
we still have a virtual machine with the Docker daemon
preconfigured and running on it.
But just one.
So now that we know what's supposed to be happening, let's
flip back to the logs and quickly go over what
I've just talked through.
So first we see starting API server,
so we'll get our local Datastore and Memcache implementation.
Then we are starting a dispatcher module
and all the other modules.
And here, we are connecting to the Docker daemon
and starting the Managed VM module.
So we are building the image, we are tagging it
as application ID, module name, and the version.
Then we build that image, we create a container
from that image, and we start running it.
We also sent the start signal to that instance,
exactly the same way as it works in production.
And this time, it even returns a 200,
which is cool.
So one more thing to note here that we'll get to a little bit
later is this line, that we can also
debug, attach the debugger to our container,
and do some debugging.
So one more thing to see here is the docker ps command
that just lists all the containers running
on my machine.
And so I have these containers running for three minutes.
That's about time I've been talking.
And we can even get some logs from that container,
seeing that we are starting the instance in debug mode,
and forwarded the log somewhere.
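The checks she runs against the container are plain Docker CLI commands; this is an illustrative sketch of the session:

```
# List the running containers; the image tag encodes the
# application ID, module name, and version.
docker ps

# Show one container's logs to confirm the instance started
# (here, in debug mode).
docker logs <container-id>
```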
So now we have everything up and running.
So let's see how it looks.
Pretty similar to what Chris just showed in production,
even though it's locally on my machine.
And again, what I was talking about
is the local version of admin console
that lists all of our modules.
We can click on these instances.
This is just a testing servlet printing out
the Java version of the comments module.
So we have the standard Java 7.
And then we can do some more stuff here.
We can see the indices, for example.
We can change something in the Datastore, like unknown.
Maybe after this talk I will be better known,
so let's update this one.
So sometimes little things like that help us
with debugging when we develop the application locally.
So let's try to post a comment now.
Something is definitely wrong.
Chris, I told you not to touch my code before the demo.
CHRIS RAMSDALE: I sent you a code review.
KATE VOLKOVA: OK.
CHRIS RAMSDALE: [INAUDIBLE]
KATE VOLKOVA: Well, well.
So I mean-- in normal life, if something
like that happens in production, the first thing I would do
is probably go check the Datastore
if the data is corrupted.
But again, Chris asked me to show you
how easy the local debugging of your App Engine module running
inside of the Docker container can be.
So let's try to do that.
So here I have the Android Studio
and my project open in here.
And I guess part of that application is Android.
So we're just using the same tool
for developing all of the modules.
And here, let's try to attach the debugger to our container.
So if you can see that log, we're attached to the container
right now, and let's try to post another comment.
That will be boring comment because it probably
won't work again.
OK, so we've got some stack traces.
We've got something-- let's just follow through the methods
and see what we're trying to add.
OK, we're gonna do some checks.
We're extracting the parameters of the request.
This line tells me something.
I guess he wanted to rename the methods and the properties
and didn't rename all of them.
Oh well, let's just detach again,
fix it back, and rebuild the project
and try to post the comment again.
So let's see if it still compiles now that I touched it.
And while we are restarting that module,
let's all do what I was talking about, actually looking in
to Datastore and see our entry with the wrong property.
And let's just click Delete, and like it never happened.
So back to the console.
The cool thing about the development server
is that it watches for file changes or, in this case,
for Java class changes.
And now it noticed that something changed.
So it sent the stop signal to all the instances
that we had, then rebuilt the new image,
created a new container from it,
and started to forward requests again.
And apparently, that didn't quite work.
Let's just rebuild once again.
CHRIS RAMSDALE: So your VM's up, but it's not
restarting the instances?
Is that it?
KATE VOLKOVA: Ah, yep.
The image got rebuilt, but it does not
want to restart the instance right now.
CHRIS RAMSDALE: I think my bug was
more systemic than you thought.
KATE VOLKOVA: Yeah, I thought it was so unimportant.
Let me just quickly go over it again.
Let me just try to restart everything.
CHRIS RAMSDALE: Well, better to be doing it
locally than in production, right?
KATE VOLKOVA: Oh well.
You need to have a backup plan sometimes.
CHRIS RAMSDALE: It's the demo demons.
KATE VOLKOVA: No, that would complete [INAUDIBLE].
CHRIS RAMSDALE: You're going to try one more time?
KATE VOLKOVA: Ah, yep.
CHRIS RAMSDALE: So while she debugs
that, gives it one more shot, I think
one of the interesting things here is that when
you think about container technologies like Docker, one
of the things that they promote, much like I said,
was the ability to port your application back and forth.
Well, what's interesting is if you
use cloud-based services-- so hosted services,
be it our services, Amazon's services,
whatever they may be-- if there's
no local story for that, the portability kind of starts
to break down, right?
So if we just had Datastore in the cloud and that was it, then
when you say, well, it's great, I can port my application
from my local environment to my production environment,
the fact of the matter is that in your local environment,
if you don't have Datastore or task queues or Memcache,
you can't actually build there because you can only
get half the story, right?
So by taking this and making these production services
available on your laptop or wherever you're
doing development, really, it completes the story.
So you really do have portability.
And it's kind of crazy, because even
in the Datastore side of things, I'll never
forget about a year ago--
KATE VOLKOVA: Yeah, I guess that that will happen right
during the demo.
I need to clean up some space on my disk.
CHRIS RAMSDALE: [LAUGHS]
KATE VOLKOVA: While I'm doing that, you can keep talking.
CHRIS RAMSDALE: OK.
Just bat me away when you're ready.
So the anecdote here was that about a year ago, I
worked closely with the Datastore team as well.
And I'll never forget the tech lead, Alfred Fuller,
came to me, and he's like, so we're
going to put eventual consistency-- hold on,
does everybody know what eventual consistency is?
OK, so it's the idea that when you run horizontally
scalable services, that sometimes to get that scale,
the data might not be consistent.
So you might do a write, and if you come to another data
center, it might not have replicated yet.
And that gives you scale, but you
have to build your applications around that.
Because if you expect consistency,
you expect to do a write and then do a read immediately
and see that write. And if that's not the case, then weird things
happen in your application.
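The read-after-write surprise Chris is describing can be sketched in a few lines of plain Java; nothing here is App Engine specific, and the "replica" map lagging behind the "primary" map is just a stand-in for cross-datacenter replication:

```java
import java.util.HashMap;
import java.util.Map;

public class EventualRead {
    // Read from one store, standing in for a query against one replica.
    static String read(Map<String, String> replica, String key) {
        return replica.getOrDefault(key, "<not found>");
    }

    public static void main(String[] args) {
        Map<String, String> primary = new HashMap<>();
        Map<String, String> replica = new HashMap<>(); // lags behind

        // The write lands on the primary...
        primary.put("comment:1", "what a walk!");

        // ...but an immediate read may hit a replica that has not
        // caught up yet, so the write appears to be missing.
        System.out.println(read(replica, "comment:1")); // <not found>

        // Once replication catches up, the read sees the write.
        replica.putAll(primary);
        System.out.println(read(replica, "comment:1")); // what a walk!
    }
}
```

If your application assumes the first read always returns the write, this is exactly the "weird things happen" case.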
I still think it's a pretty complex concept to grasp.
And so do some of our customers, because what they were doing
is they were building in our local environment,
like Kate's trying to demo here.
We didn't have that.
It was strongly consistent.
Because after all, it's on your laptop.
There's no replication to other data centers, right?
And this kept impacting customers
when they moved into the cloud.
So that porting from local to production
was causing discrepancies.
And so Alfred comes and says, I'm
going to build eventual consistency into our SDK.
And I was like, you are out of your mind.
He's like, no, no no.
We're totally going to do it.
And within two weeks, they basically
mimicked, down to that level-- anyways.
KATE VOLKOVA: I think we really added
a little bit of excitement into our demo,
proving that it's all real and happening right now,
locally, on my machine.
That was not planned.
I broke quite a sweat.
OK, so we can develop locally, debug locally.
So let's try something a bit cooler now.
As you know, using Managed VMs, together with App Engine,
allows you any level of customization that you want.
And you can run any third party libraries
or call any binaries, which was not quite allowed
with a classic App Engine.
So let's try something here.
So for those of you who like the functional style of Java 8
as much as I do, let's try to insert [INAUDIBLE] here.
Search for COOL STUFF.
And just remove that old iteration.
Ah, don't break anything again.
OK, so now we've got some Java 8 kind of style code in here.
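The kind of rewrite Kate is making, shown on a stand-in method: the WalkShare code itself isn't reproduced in the talk, so the class and method names here are hypothetical, but the before/after shapes are the Java 7 loop versus the Java 8 stream pipeline:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class CommentFilter {
    // Old Java 7 style: explicit iteration with an accumulator.
    static List<String> nonEmptyJava7(List<String> comments) {
        List<String> out = new ArrayList<>();
        for (String c : comments) {
            if (!c.trim().isEmpty()) {
                out.add(c.trim());
            }
        }
        return out;
    }

    // Java 8 functional style: the same logic as a stream pipeline.
    static List<String> nonEmptyJava8(List<String> comments) {
        return comments.stream()
                .map(String::trim)
                .filter(c -> !c.isEmpty())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> comments = Arrays.asList("great walk", "  ", "nice!");
        System.out.println(nonEmptyJava8(comments)); // [great walk, nice!]
    }
}
```

Compiling the second method is what requires the Java 8 runtime that the stock App Engine image doesn't provide, hence the customization that follows.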
Here, Chris was supposed to ask me that,
but App Engine only supports Java 7.
And my answer to this would be let's
add a little bit customization to here.
CHRIS RAMSDALE: So this was in the same vein
of us saying that how we're enabled in Go and Dart
and how we could enable Node and Scala and Haskell.
Kate's just doing this in terms of App Engine,
or in terms of Java.
So going from Java 7 to Java 8 is a pretty big move,
but with a few lines of configuration,
she now has the semantics and language aspects of Java 8
inside of her application.
KATE VOLKOVA: Yeah, but more than two lines.
And what I did here is just a little bit more customization:
to use, instead of our App Engine-based Docker image,
the Docker image that I've just
built before the demo-- oh no, again that
was based on the device.
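The customization she's describing boils down to swapping the base image in the module's Dockerfile, along these lines. The image names below are placeholders, not the actual preview-era tags:

```
# Instead of the stock App Engine Java 7 runtime image...
# FROM google/appengine-java

# ...base the module on a custom image with a Java 8 JRE installed,
# then add the application code on top.
FROM my-registry/java8-runtime
ADD . /app
```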
OK, and hopefully that now there will be [INAUDIBLE].
So I was just trying to remove some containers,
I've removed some images.
Hopefully none of them are important.
CHRIS RAMSDALE: So you know when somebody
says they're demoing things that are hot off the press,
and you guys say this is hot off the press.
We have early dogfooders that are trying this out right now.
So I think it takes a lot of courage to get on stage
and try it out.
Should we call this one?
KATE VOLKOVA: Yeah, I'll try to fix it and show the rest,
but while you keep talking a bit more.
CHRIS RAMSDALE: Fantastic.
So minus the demo demons-- I thought actually
that was some really cool technology, of taking
Google production services, moving them into a laptop,
and then taking technologies like Docker
to enable them so that you can be
highly efficient as a developer.
And what it hopefully will show is
how we can take that technology and allow you to further expand
the languages and runtimes that you're using on our platform-
as-a-service offering, App Engine.
And then furthermore, bringing it all down
into one centralized IDE.
So you notice Kate, if she had been doing Android development,
she'd do it right there, inside Android Studio.
So you can build your mobile client
and build your back end services,
all within one centralized tool.
And mobile and cloud working together
is something we're extremely passionate about.
It's near and dear to our heart.
So you definitely want to check out these sessions,
if you're interested, over the course of today and tomorrow.
And by the way, don't worry, these are all in your schedule.
But they'll also be put up when we get done with the session.
So I want to talk a bit about integrated DevOps.
So when you actually move from your laptop into production,
you do that deployment, your DevOps don't need to leave you.
They shouldn't leave you, actually.
In fact, one would say that it's even more
important to have that introspection into applications
that are actually running in production.
Because after all, it's no longer
a bug that's impacting you and the other developers
who are building out the application;
you're talking about bugs that actually impact your end
users, and sometimes your business.
So like in the case of Search, you
add another hundred milliseconds of latency,
and it could cost you millions of dollars in terms of revenue.
So with that, let's take a look at how
we're doing things within the Google Cloud Platform,
in terms of DevOps.
Sorry about that.
So I'm going to bump back down to the console here.
So the first thing we'll do is we'll
take a look at our-- yeah, looking for monitoring data.
Sorry about that.
So first of all, what we have is integrated monitoring
and metrics for your Managed VMs and for your compute.
And since we're bringing together App Engine and Compute
Engine, we're also bringing together the data
and the monitoring that you actually need to see as well.
So here I'm looking at one instance,
and I can see a summary of overall traffic,
and I can easily bump back and forth
between the actual underlying VMs.
So here, I'm seeing compute statistics,
like CPU utilization and memory utilization.
But I can also see a summary of the requests and the response
codes.
So those are things that you would get out
of your application metrics.
When you're just running a raw VM,
all you really see is disk, network, CPU, and memory.
But when you're running a full-on stack,
the full-on stack that I had mentioned when you move
a Managed VM into production, you
get that web server, that application server,
that web serving stack that you want to see
and have introspection into.
And so we're doing that.
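To make that distinction concrete, here's a minimal sketch of the kind of app-level rollup being described: raw request log entries aggregated into per-status-class counts, the sort of summary a web serving stack surfaces on top of plain CPU and memory numbers. The `summarize_responses` helper and the sample entries are hypothetical illustrations, not part of the actual console.

```python
from collections import Counter

def summarize_responses(request_logs):
    """Roll raw request log entries up into counts per HTTP status
    class, e.g. 200 and 204 both land in the "2xx" bucket."""
    summary = Counter()
    for entry in request_logs:
        status_class = f"{entry['status'] // 100}xx"  # 200 -> "2xx"
        summary[status_class] += 1
    return dict(summary)

logs = [
    {"path": "/walks", "status": 200},
    {"path": "/walks/42", "status": 200},
    {"path": "/missing", "status": 404},
    {"path": "/walks", "status": 500},
]
print(summarize_responses(logs))  # {'2xx': 2, '4xx': 1, '5xx': 1}
```

A raw VM would only hand you disk, network, CPU, and memory; this per-request view is what the managed stack adds on top.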
Now, with Managed VMs, what we're doing
is creating a homogeneous fleet of compute.
And that homogeneous fleet of compute
is managed by our services and by our SREs,
as I mentioned going through here.
Now, for those services and those teams to do that,
that fleet of compute needs to be hermetically sealed.
Meaning we can't just let-- we don't
allow developers to willy-nilly go into the machines
and create what we call special snowflakes.
Because if you have, for example,
a hundred VMs running, and VM 45 and 46
are slightly different than the rest,
and you go try to manage all those together,
it becomes highly, highly complicated.
And you can imagine, as you scale up to tens of thousands,
it gets even worse.
Now, given that those are locked down
and you don't have root access into those VMs,
one might say, well, hmm, that poses
a non-trivial problem: how do I get data off of those VMs?
How do I get request logs, application logs, system logs,
or even third-party logs off those VMs?
Well, the logs are on the VMs, and the VMs
are funneling all of that log data
back to the centralized logging repository that I mentioned
in one of the earlier slides.
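The funneling pattern being described can be sketched as a toy on-VM agent that batches log lines and ships them to a central sink. This is an illustrative assumption about the shape of such an agent, not the real one; in practice the sink would be an RPC to the logging service rather than a Python list.

```python
class LogForwarder:
    """Toy sketch of an on-VM agent that batches log lines and ships
    them to a central sink, so developers never need root access on
    the VM itself to get at their logs."""

    def __init__(self, sink, batch_size=3):
        self.sink = sink            # stand-in for the central repository
        self.batch_size = batch_size
        self.buffer = []

    def ingest(self, source, line):
        """Buffer one log line; ship a batch once the buffer is full."""
        self.buffer.append({"source": source, "line": line})
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Ship whatever is buffered, even a partial batch."""
        if self.buffer:
            self.sink.append(list(self.buffer))
            self.buffer.clear()

central_repo = []
agent = LogForwarder(central_repo, batch_size=2)
agent.ingest("app", "GET /walks 200")
agent.ingest("redis", "1:M DB saved on disk")  # triggers a flush
agent.ingest("app", "GET /walks/42 200")
agent.flush()
print(len(central_repo))  # 2 batches shipped
```

Batching like this is a common design choice for log shippers: it keeps per-line overhead low while bounding how long a line can sit on the VM before reaching the central store.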
And what that means for you is, as a developer,
you come back to our console here,
and you'll see that we have integrated logs access.
So it will allow you to do things like filter logs
by log source, by log request type.
You can filter by errors-- in a second.
You can actually do debugging of the request logs--
the application logging-- in terms of the request.
So you can see what the application
is doing based on what the user is requesting.
And finally, you can see those third party logs as well.
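A rough model of that kind of filtering, assuming hypothetical entries with `source` and `severity` fields (the real console filters structured log entries server-side, this just illustrates the idea):

```python
def filter_logs(entries, source=None, min_severity=None):
    """Filter log entries the way the console's log viewer does:
    by source (app, request, or a third-party process) and by a
    minimum severity level."""
    levels = ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]

    def keep(entry):
        if source is not None and entry["source"] != source:
            return False
        if min_severity is not None and (
                levels.index(entry["severity"]) < levels.index(min_severity)):
            return False
        return True

    return [e for e in entries if keep(e)]

entries = [
    {"source": "app_engine", "severity": "INFO", "msg": "request to /_ah/remote_api"},
    {"source": "redis", "severity": "INFO", "msg": "DB saved on disk"},
    {"source": "app_engine", "severity": "ERROR", "msg": "timeout"},
]
print(filter_logs(entries, source="app_engine", min_severity="ERROR"))
# -> [{'source': 'app_engine', 'severity': 'ERROR', 'msg': 'timeout'}]
```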
So let's say if we bump into-- let me actually
pick a different one here.
There we go.
Just was a little bit delayed.
So here what we can see is the App Engine logs.
And if I filter through these, I can probably
find one if I click on the Info one.
So here, what I'm seeing is that this is a request back to the--
that's not necessarily that interesting.
Well, you can see it's a request back to the _ah/remote_api
endpoint.
That's the request that came in.
And what you see highlighted in the yellow
there is actually what the application was logging.
I could actually sort by status.
[INAUDIBLE] Don't see any errors.
Look at that, I actually have no bugs in my code over here.
And then, if I come down to Compute Engine.
I had mentioned that a portion of the WalkShare demo
was actually running a Redis cluster that
allows you to do streaming, with some indexing in there.
And so what we're doing is actually running-- I can show you
the Redis logs here.
So I pull that up and filter by something as simple as Redis.
So there you can see all the Redis logs.
The idea is we've consolidated it down