Microservices and Kubernetes: New functionality to build and operate apps (Google Cloud Next '17)

MARTIN GANNHOLM: Good afternoon, and thank you for joining our session. My name is Martin Gannholm. During the six years that I’ve been at Google, I’ve worked on and led the teams that have delivered a number of the cloud products you might be using, like Compute Engine, Container Engine, and Kubernetes.

LOUIS RYAN: Hi, I’m Louis Ryan. I’ve been at Google about nine years now. I’ve worked on Google’s API platforms, so when you make API calls to Google, all of those go through stuff that my team works on. More recently, I’ve been doing a lot of work on gRPC. But I’m going to hand you back to Martin, who’s going to show you some interesting stuff.

MARTIN GANNHOLM: What we’re going to talk to you about is based on a lot of our experiences building and running large-scale cloud services at Google, and how we can bring this kind of learning to Kubernetes and to you folks. There’s been a lot of talk over the last few years about zero ops and serverless computing, and how these concepts allow you to spend less of your time managing and dealing with infrastructure. In effect, they can let you run ambitious apps at scale without having to know all the details. So if you use Google App Engine or BigQuery, you’re familiar with how they allow you to focus on your expertise, writing your business logic or your data processing, and not on managing all the machines under the covers. So that’s the context. We’re going to talk to you about some technologies that are currently under development in open source, and that will get you on the way to being able to operate systems at scale, cheaply, efficiently, and securely. We consider these building blocks the beginning of a journey to deliver the promise of zero ops and serverless computing in the integrated hybrid cloud. So all the different parts of your solution stacks, whether they’re running in-house, on-prem, or in a public or private cloud, whether you wrote the components or whether you acquired
them from some third party, we will cover all of those pieces. So there are three intertwined principles that we believe are the foundation that will get us there. The first is services, not infrastructure. What we mean by that is that we need to look at things at a service abstraction layer, and if we do that, we can get away from dealing with the infrastructure. It’s very common for companies in recent years to have a service catalog that makes it easier for their application developers to get bootstrapped with different pieces. But oftentimes those are just a piece of infrastructure, like a virtual machine that gets provisioned; some deployment system or scripts run and install things on it, and at that point, it’s yours. It’s very much like getting a free puppy, and you quickly learn that getting the puppy was the easy part. You’ve now got to take care of it. In contrast, a managed capability behind an API is the kind of thing that has allowed for the explosive adoption of cloud services, and we’d like to bring that same kind of thinking to all the different components that you build or that you acquire from other people. So choosing an abstraction layer and getting it right means a few things. You’ll be able to separate the construction, the operation, and the usage of components from each other, which gives you more flexibility in how you source the components and how you manage them. You’ll be able to inject key behaviors at different control points in this higher-level system, allowing for many of the intent-based functions that will let you operate at scale with little or no effort. You’ll also be able to compose your solutions across a wide range of infrastructures, environments, and technologies. And this really brings us to the next principle: interoperability. This is fundamentally about choice and flexibility, about giving you a path from today to tomorrow and to whatever the future may bring. This includes legacy components that you already have running that
you use, in-house developed services that might be more modern, and third-party software-as-a-service pieces. All of these will span multiple different environments, and it’s all about allowing you to choose the best option for each individual piece without giving up on the promise of zero ops and serverless. And this is a vision that is only realizable if we succeed in the third pillar. We like automation at Google. We think that teaching computers to execute repetitive, error-prone, and large-scale tasks so that humans can focus on value-added pursuits

is a good thing. So for example, if you want to apply an enterprise-wide policy to all of your services, that should be trivial. Do you want to canary test an update, roll it out to all of your zones or whatever geographical spread you have if no anomalies are detected, and do that automatically? Yep, that’s one of the things you want to do. What if something wasn’t right? Automatic rollback. So the answer for us is: automate everything.

Let’s touch on two of the hard problems to solve for. Let’s just say it: networking’s hard. Doing networking right is harder. And now you want it to be secure? You’ve got to be kidding. Nobody wants to deal with that, at least I don’t. So we talk about this as the space between the microservices; the red lines here represent that space. And so the questions are: how do you securely connect services together? Can you do encrypted and authenticated access? Can you protect against DDoS, or even just unintentional packet storms? Credentials: it would be great if you could flow credentials through the system. But not just once; you want key rotation, right? You want these things to update, and to be managed. Those are the kinds of things that would be better if you didn’t have to deal with them.

The second part is hybrid cloud, or multi-cloud. That can be hard. If you use a homogeneous system like Kubernetes, for example, which is multi-platform, you can run it in many different places, and if you’re all in on that, everything’s fine. But most people don’t have that luxury. So you have legacy. You have virtual machines. Those are the kinds of things that you have to include as you evolve. So some of the questions you might have are: well, if I’m using a service on-prem, is that different? Do I have to operate it differently? Do I have to deal with it differently than a third-party acquired service? How do I tell which services I’m actually dependent on across my entire solution? How do I enforce company policies across those?
Or can I only do it on a subset that I have under my immediate control? And how do I get a handle on the security aspect of this layout?

So let’s get more concrete and talk about the service abstraction. We have partnered with several other companies around the Open Service Broker API, which is a relatively new standard in the making that originally came from Cloud Foundry. Those of you familiar with Cloud Foundry will recognize these concepts. We’re working with these companies on adopting and then extending this great abstraction. The importance of a standard is that services written for consumption in Cloud Foundry applications will be usable unmodified from Kubernetes, with what we’re going to show you today, and vice versa. And as people adapt their services to this model, the number of places you’ll be able to use them is going to be great.

So let’s pretend that I am a service producer, and I want to package up Google’s cloud services behind a broker so that people can easily consume them. We want to expose our Cloud Platform APIs, like you see them down there: Cloud Storage, PubSub, BigQuery. A broker can have multiple services that it’s able to be a factory for, essentially. So let’s look at one of these: Cloud SQL. Cloud SQL has two types of instances; if you look in the Cloud console, it has a first generation and a second generation. The Broker API has a notion of a plan, and each service class can have one or more plans. So as a producer, I might choose to offer Gen 1 and Gen 2 as the two plans; when you want to get one of these, you choose which plan you want, and I can make that bifurcation there. Alternately, I could choose a different way to factor it. I could say, well, what if there are different fundamental sizes: small, medium, large?
I could do that. That’s another thing you can do with a plan. Or another way to look at it: what about standard versus high-availability deployments of Cloud SQL? Those are different choices that, as a service producer, I have a lot of flexibility to specify with this API. So when you go and create a service instance–

so you have a service class, and you want an instance of it. You want to use it. So you create a service instance, and you also specify which plan you want. Here are some screenshots from Cloud SQL. You see on the left the options for Gen 1, and on the right a long scrollable list of options for Gen 2. The Service Broker API allows for the difference between these two things. In the future, the parameters that are valid for one plan versus another will be schematized, so that we can automate: we can actually have the user interface parse through that, present the options really nicely, and show you which options are valid for which plan.

So once you’ve created a logical instance, you want to use that service from an application. Rather than copying API keys and IP addresses, configuring SSL, and all those kinds of things that you do today in the Cloud console, and then taking whatever information it gives you and applying it somehow to your code or the config of your applications, the Broker API, in conjunction with another piece, the service controller, which I’ll get to in a second, has a way of configuring and passing this information directly into your application, saving you that step.

To illustrate this further: the service controller is a logical construct, and it’s tied to your runtime environment. Here the dotted box is meant to be a Kubernetes cluster, and inside of it there’s a service controller that understands what it means to be a Kubernetes application. So it does some of the work in this scenario. The service catalog has a few purposes. One, you can register brokers with it, as many as you want. What it does is ask the brokers: what are the available service classes?
And then it creates an aggregated catalog of those things. Additionally, this is now a control point where you can apply IAM policies (this functionality doesn’t exist today, but it’s one of the things we’re looking at). So you can say, well, certain people can’t access the user service; I can lock that down so they can’t bind to instances of it, or they might not be able to create instances of it.

Once the user decides which service they want to use, for example Cloud PubSub, they ask the controller to create an instance of the class. The controller delegates to the broker and says, hey, why don’t you create an instance of this class. And that, in turn, goes and does whatever it does, completely opaquely. You have no idea what it does. It could provision a large amount of virtual infrastructure, many virtual machines, or whatever. Or it could simply insert a row in a database noting that you now have a virtual instance. It’s a nice abstraction because it allows the service provider to do whatever implementation they want, and it’s transparent.

So now we have the service, and we want to bind an application to it. We want to use it from an application. Here’s an application, just a Node.js application, not a Kubernetes cluster. We want to tell the system that we want to bind these together. So we talk to the service controller and create a binding. The service controller, once again, delegates to the broker. The broker figures out the coordinates: where the actual instance is, what IP address, which port, those kinds of things. Again, it’s service-dependent, but that’s a very common type of thing for it to return. It also returns credentials, which could be unique for each binding or shared by all the bindings to a particular instance. Again, that’s part of the service definition. So as a service
producer, you can choose what makes sense in your scenario. The last step the service controller does is create the virtual binding and inject those credentials and coordinates into the application, so it can very easily consume them. I’ll show this in the demo in a few minutes. In Kubernetes this is done through secrets and config maps, and the application just accesses them through environment variables.

So here are the pieces we’re going to talk about today. These are three different open source efforts. I talked about the service broker. I hinted at the service catalog; that’s what I called the service controller. Service Catalog is a special interest group that’s part of Kubernetes, and we’re doing work there in the Kubernetes community

to add that capability to Kubernetes. And the service mesh is something that Louis will get into in depth; it’s another open source project that we’re engaged in. It’s actually called Istio, but we’ll get to the URLs for that later. So what do these do? We talked about what the broker does: it’s the core service abstraction. The catalog provides environment-specific help, to make consumption of services something very natural in an environment, in this case Kubernetes. And the mesh is for traffic management and inter-service policy controls.

This is the application we will show you in the demo. What we have here, the gray boxes, are existing microservices in my deployment. The purpose of the application is to be a bookstore that sells some books. So we want a front-end that shows you the books and allows you to buy them. It’s a very, very, very simple app. And I suppose we were already selling other items (sneakers or something else), and that’s why we already have these inventory and payments connections. So we will add the books front-end application, bind it to the existing microservices, and then we’ll get to PubSub in a second. So can we please switch to the demo machine?
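As an aside, the provision-and-bind flow just described maps onto two calls in the Open Service Broker API. Below is a minimal sketch of how a controller might construct those requests; no broker is actually contacted, and the instance ID, service ID, and plan ID are hypothetical:

```python
# Sketch of the Open Service Broker API calls a service controller makes
# on your behalf. Paths follow the OSB API's v2 conventions; the IDs and
# plan names here are invented for illustration.

def provision_request(instance_id, service_id, plan_id, parameters=None):
    """Build the provision call: PUT /v2/service_instances/:instance_id."""
    return (
        "PUT",
        f"/v2/service_instances/{instance_id}",
        {"service_id": service_id, "plan_id": plan_id,
         "parameters": parameters or {}},
    )

def bind_request(instance_id, binding_id, service_id, plan_id):
    """Build the bind call: PUT .../service_bindings/:binding_id.
    The broker's response carries coordinates and credentials, which the
    controller then writes into a Kubernetes secret for the app."""
    return (
        "PUT",
        f"/v2/service_instances/{instance_id}/service_bindings/{binding_id}",
        {"service_id": service_id, "plan_id": plan_id},
    )

method, path, body = provision_request("books-db", "cloudsql", "second-gen")
```

The key point is that the application never sees these calls; it only sees the secret that results from the bind.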
It went to sleep. So here’s the Kubernetes dashboard, and we have a few new things on the left here: a little service catalog. Here we can see the service classes that are available for creation. If we look here, there are a couple of catalogs. There’s KA Desk 1, which is really our department catalog; these are the microservices that we built in-house, and we’ve put them in a broker, in a catalog there. But we don’t need to create any of these, because we already have some pre-created instances for the purposes of this demo. So what I’m going to do is switch over and run a script to install the application. This is really just creating a deployment in Kubernetes using a YAML file, but I’ve wrapped it up in a shell script. That will do some work. There we go; we have created something. And so if I say kubectl get pods, we will be able to see it. There we go: booksfe. So this is my new deployment here. These are some pods, and we have an error. Shucks. What is that? Let’s take a look. OK. Deployments, here we go: red text. It says, secret booksfe users not found. OK. So why is this?
Well, it turns out there’s a good reason for this. Here’s a little bit of the source code of that front-end application (whoops, that’s not what I meant to do). Some of the stuff it does is try to access some things in the environment, and these actually come from secrets. The secret doesn’t exist because we haven’t created the binding. One of the things that happens when you create a binding, under the covers, is that the service controller, because it’s Kubernetes, says: I’m going to propagate these things in a secret. The application is configured to expect that secret to exist, and those environment variables get created from values in the secrets. And Kubernetes will just not launch these pods if all the dependencies haven’t been satisfied. So we will go back and create some bindings. We’re going to go in here, and for each one of these, what I’m typing here is a label. In Kubernetes, specifying what you want to address is all done through labels. The label for our application is booksfe, and I have to do this three times. So we’re going to copy that. That’s inventory. Whoops, that’s the one I did. We don’t need to do payments. We’ll do purchases, and then we will do the last one, which is users. And what I didn’t show you: if I did a kubectl get secrets,

you’ll see that it has a bunch of them now. It has a couple that are a few seconds old: booksfe, inventory, purchases, and now there’s one called users. That one just got created. So now those secrets exist. Let’s see what’s going on with our deployment. When we go back, we’ll see that the deployment is actually now running; Kubernetes just figured out that those dependencies are satisfied. So we have a front-end, and now I am going to launch my app. I’m going to launch a new web browser for this app. Oh, yes, it works. So here is my front-end. I have some books it’s getting from the inventory, and I am just going to buy one of these. This is a classic book. Let’s buy this “Crockpot Recipes.” And I can look at my purchases and see that I have a couple of things right there.

Let me show you one thing. I am going to show you the secrets, just so you can see how those work. I’m going to say kubectl describe, look inside of it, and pick one of these. Whoops. Oh, describe secret. Of course. I’m just going to cheat. There we go; I forgot the -o yaml. So here you can see some of the data. The host name and port are being passed in, and those came from the binding. They originally came from the service, and the controller injected them into the application through secrets and environment variables.

So now we’re going to make a modification. Say that we’re selling books, and our publishers are very interested: well, let us know when you sell some books. Cloud PubSub might be a good way; if you publish events, they can just subscribe to those channels and look at them. In order to do that, I want to go back to the service catalog. I don’t have Cloud PubSub, so I’m going to add one. I’m going to do it up here: add catalog. I’m going to create one called GCP. Bear with me while I type incorrectly. And then I will use this.
AUDIENCE: You forgot a T.

MARTIN GANNHOLM: I missed something? Oh, yeah, look at that. Thank you. All right, double check. It’s the first time I’ve screwed that up on all my demo runs; I’ve made other mistakes, but not that one. Ah, look at that, a thing of beauty. So now we have four services that our environment knows about and can now create. Let’s spend no further time; let’s create an instance. Let’s create the PubSub instance. Now, the interesting thing I will show you here is when we create a binding to it. We’re going to bind to it from our application (we still want to use the same application), and I’m going to type some parameters. Remember earlier when I said that we will have schematized the parameters in the future? Well, that’s the future. So right now I’ve got to type in some ugly JSON. It’s a very small amount of text; this is just setting the permissions on that. Does that look right? I think it does. So we’ll create that binding.

Now I’ve said I want to bind my application to that, but the application doesn’t know how to talk to PubSub yet. For the purposes of making this a shorter demo, I’m not going to make changes to the code, recompile it, rebuild my Docker image, and everything like that. What I’ve done (or actually what one of my engineers did) was basically make it optional, so that we have the same code, and it’s just passed a parameter that says whether it should use PubSub or not. So here is the code for that. I just need to make a change to the YAML file, which is one of our config files. Save that. Now, if I can update this application,

we should be able to see some action on that deployment again. All right: kubectl get pods. Take a look at what’s in here. Yep, you see one is terminating and another one is running, so it looks like we’re back in business. Where did I leave that window? Demo? Nope. What am I doing? Oh, that is surprising. Oh, there we go. All right, so let’s buy a couple more books. Let’s buy this one, and let’s buy this one. And now let’s go back here and run a gcloud command. I’m going to call gcloud to fetch from the subscription channel; it’s called GCP Next. And ideally, everything works and we get something back. There we go. OK, so there’s the purchase that was made. Let’s see if we have another one. Different message ID. Not very exciting. Let’s see what else we got. That was it; I made two purchases. So that’s that piece. And now I’m going to hand off. No, first I need to go back to slides, please. Can we show the slides on the screen?

So what are some of the key takeaways? This is the app that we modified, and we showed how we can easily connect to preexisting services from a new app, and also how to provision and connect to a cloud service in the same way. One of the existing services, the payments service here, we didn’t actually deal with. But what’s interesting is that it’s exposed as a service as well, and so there’s a way that you can wrap existing legacy things as services. Those of you familiar with Cloud Foundry will recognize that same capability. A lot of the benefits that you’re going to see, especially in Louis’s part of the talk, are then applicable to a legacy service like payments as well, because it now gets wrapped in this abstraction. We also showed how we didn’t need any knowledge of where the service came from or how to negotiate credentials; the necessary information just showed up in secrets and then environment variables. Now let me hand off to Louis, who will show you some even cooler stuff.

LOUIS RYAN: Thank you,
Martin. Martin talked to you a lot about how people are going to acquire services and how we’re going to make sure that services have well-known relationships with each other. Now I’m going to show you some of the things that we’d like to do underneath all of that: to help make that work better, to help you get some insight into how those services are talking to each other, and to let you start imposing policies on the conversations those services are having with each other. There’s a term floating around nowadays that I think is gaining some momentum for all of us, and it’s the service mesh. You have lots of these services, and they’re all talking to each other. It’s not a network in the traditional packet sense; it’s a network of services and their relationships with each other and all the things that they want to do. So I think of a service mesh as a network for services. Let’s talk about that a little bit. (I got the wrong button.)

So what would you want a network of services to do for you? Well, when we build services today, we want reliability. We want them to perform well. We want them to be able to deal with transient failure modes. Maybe I bring up a service that’s talking to some other service, and that service has a failure mode: maybe I can’t reach it, or maybe one of the jobs goes down. I’d like the network, the service mesh, to take care of that for me. We’d also like the service mesh to help us with performance. When I bring up lots of jobs, maybe stateful jobs and stateless jobs and all these types of things in production, routing traffic to a stateful job when it hasn’t had time to warm up its cache is probably a really bad thing to do for latency. It might even be a really bad thing to do for availability, if cache loading is a very expensive operation and you start to get these cascading failure modes. We also want the network, the service mesh, to help us with visibility.

I’d like to know which services are talking to each other. I’d like to know if there are services in my deployment that I don’t expect to be there. So we’d like a lot more visibility into the behavior of this network. And I’d like to have control, right? When a service is talking to another service, it would be very nice if I could interpose in that flow and say: no, you’re not allowed to make this communication; you’re violating some corporate policy, or you’re using too much of a resource, those types of things. So how do we make all of that happen? Can we switch back to the demo?

So, meanwhile in ops land: while Martin was doing all these wonderful things, one of the things we also spun up in the background is some monitoring. All the services are talking to each other, and they’re generating traffic. Wait a minute, did we switch to the demo? Thank you. So here’s a typical Grafana dashboard. It’s showing you the traffic between the services, with a simple requests-per-second metric. And you can see that we’ve annotated the graph with the services, right? So you know that the booksfe service is talking to purchases, which is talking to users. And for some really strange reason, it’s doing 60 QPS of traffic. Martin?
This is the app guys running load tests in production again. Well, let’s go stop that, because that’s just bad practice. And that looked like it was the books front-end. Let’s see: books front-end talking to the user service. And Martin definitely did not buy 50 books a second. So let’s go deal with that. We go back to our instances, and we can look at the user service. We see we have our binding: books front-end talking to the user service. So let’s dig in there. And notice this link: add network function. Now, this is very rough; this is very much demo alpha quality stuff at this point. But if I go in here and say add network function, I can start to take some action on the interaction between those two services. In this case, I’m going to impose a quota. I don’t think Martin really wants to buy more than five books a second, so I’m going to turn that on.

Now, what happened under the hood: the system is aware of this binding. It takes that notion of relationship, pushes a piece of configuration down into the system, and says: hey, for this binding, this relationship between two services, we want this quota. So let’s hop back over to the Grafana dashboard and see if anything happened. And look at that: we see the error rate start to spike up, right? The books front-end service, or this load test that Martin is running, is still sending traffic, but now we’re rejecting it. It’s getting dropped. And actually, it’s not even going into the service itself; it’s being dropped earlier in the network. So we see the requests start to cap out at 5 QPS (the valid ones), and all the other traffic gets rejected. There are lots of these types of things we can do in this network if we have this amount of information and we’re able to interpose in the traffic in sensible ways. So how do we do this type of thing? Can we switch back to the slides, please?
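The quota just applied can be thought of as a token bucket enforced at the proxy. The sketch below is only an illustration of the idea, not Istio’s actual implementation; the 5-per-second rate mirrors the demo:

```python
class TokenBucket:
    """Minimal sketch of a proxy-enforced rate limit: requests above
    the configured rate are rejected before they reach the service."""

    def __init__(self, rate, burst):
        self.rate = rate      # tokens refilled per second (e.g. 5 QPS)
        self.burst = burst    # maximum bucket size
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # request passes through to the service
        return False      # request is dropped at the proxy

bucket = TokenBucket(rate=5, burst=5)
# A 60 QPS load test hitting a 5 QPS quota: only a handful get through.
allowed = sum(bucket.allow(now=t / 60) for t in range(60))
```

Everything beyond the burst plus the refill is rejected early, which is exactly the error-rate spike visible on the dashboard.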
Although Grafana’s very pretty; we could look at it for another few minutes if you’d like. Can we go back to the slides? Thank you.

So how do we do this? Well, the first thing that we need to do is capture traffic. We can’t really do anything if we don’t know what’s moving around. So here’s a little subset of the application: we have the books app and the users service, and the books app is calling the user service via its API. One of the things that we did before Martin started driving the demo is we installed a component into the Kubernetes cluster. This is part of the Istio project, which Martin alluded to earlier, and it’s called the proxy manager. What the proxy manager does is listen to the Kubernetes API, and when you spin up pods, it rewrites the pods on the fly and injects proxies into them. And then it also rewrites the networking rules

inside Kubernetes. So instead of the traffic going directly from the books front-end to the users service, it’s actually going through proxies, one on each side. We’ll talk a little bit about why we do those types of things. Now that we’re capturing traffic, we can start to do interesting things with it. We can start to inject network functions. There are different classes of network functions, but we’ll talk about some of them here.

Maybe we want to roll out a new version of the user service, user service v2. You don’t want to send all of the production traffic to it live, right? Maybe you just want to send 1% of the traffic, so you can qualify that service in production and make sure it’s stable before you cut everything over. The proxy that was injected into the books app can now split the traffic on your behalf. And you can do this at the operator level; you don’t have to go and change the application code to do this. This is a very important thing.

Maybe the user service is unreliable. Instead of the books app having to deal with the fact that its calls to the user service are flaky, the proxy can do the retries on behalf of the service itself. As long as the proxy has some awareness of what type of operation it is (is it idempotent? is it safe to retry?), it can do these things on your behalf. And so if you have a system with this kind of intermittent flakiness, and it’s not due to anything you can directly control right now, as an operator it’s very often the best of bad choices to start turning on retries with exponential back-off.

Maybe you want to move the service up into the cloud. Maybe the user service is this vast, enormous thing, with hundreds of nodes in seven different regions, and now you want the proxy to do smart, intelligent things on your behalf, like load balancing, right?
Maybe that load balancing is going to be regionally aware. Maybe you want that load balancing to optimize for latency, right? So you send all of your traffic to the most local node possible until it starts to fill up, and then you start to shard it over to some other region. Maybe you want to maximize for availability, so you keep an eye on the back-end health of each service; the proxy can be aware of these things. Now, it’s worth noting that we didn’t have to rewrite any application code to do this. As Martin mentioned earlier, the books front-end was written in Node.js. The user service, maybe it’s written in Go, and the purchasing service is written in .NET, and all these things. So you have this big, complex, polyglot environment. Implementing sophisticated network functions like these is pretty expensive; it takes a lot of engineering effort to get right and to do consistently. And consistency is what really matters here, right? If one piece of this network doesn’t play well, you can start to have nasty things happen; you can have cascading failure modes. So having the proxy do this work on your behalf is massively beneficial. As long as it is consistent and does these things well, you don’t have to worry about the application developers consuming some fancy network runtime, trying to write exponential back-off code for the 15th time. You just don’t want to be doing those things. So what else can we do?
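The retry behavior described above can be sketched as follows. This is an illustration of the policy (retry only idempotent calls, with exponential back-off and jitter), not the proxy’s actual code:

```python
import random
import time

def call_with_retries(do_call, idempotent, max_attempts=4, base_delay=0.05):
    """Retry transient failures on the caller's behalf, the way a sidecar
    proxy can. Non-idempotent calls are never retried."""
    for attempt in range(max_attempts):
        try:
            return do_call()
        except ConnectionError:
            last_try = attempt == max_attempts - 1
            if not idempotent or last_try:
                raise
            # Exponential back-off with up to 10% jitter: 0.05s, 0.1s, 0.2s, ...
            time.sleep(base_delay * (2 ** attempt) * (1 + 0.1 * random.random()))
```

A flaky call that fails twice and then succeeds returns normally on the third attempt, while a non-idempotent call surfaces its first error immediately; the application code never sees any of this.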
Well, we can inject identity So another piece that is installed into the Kubernetes cluster is a certificate authority And when we bring the proxies up, we also inject credentials into them Those credentials, in the case of Kubernetes, are tied to Kubernetes service accounts Now the proxies can use those credentials, those certificates, to enable crypto between the proxies So we can do mutual TLS So now both applications, both nodes of this graph, can strongly verify who’s calling them They know it precisely And it’s not just some random piece of code running in my cluster No, it’s something running with that authority Even if the user service isn’t written using some standard protocol, maybe it’s not using HTTP, right? Maybe it’s a database with its own proprietary binary protocol So maybe you can’t tunnel the identity all the way down into it, because its protocol doesn’t support it or it won’t let you do that But at least you know the traffic between those two nodes is secure and is tied to that identity And if it’s tied to that identity, we can start to do some things with it We can start injecting policy associated with it Maybe the books application is owned by Martin’s development group And maybe the user service is owned by my group And I think his development team is a bunch of cowboys, and I’m just going to shut them off, right? I can do those types of things There’s lots of interesting things we can do with policy

So another component that gets injected into the Kubernetes environment is this thing we call the mixer And it can do a wide range of things So I showed you earlier– we showed you monitoring, right? We inject this component, and the proxy’s sending all this information to it about the behavior of the network And we simply flow that into Grafana, right? We flow it through Prometheus into Grafana, and we had some nice dashboards and things like that And I can visualize my network and the behavior of the services between them And we could have done that with Stackdriver or some other monitoring tool We also did quota, right? And in the example that I showed you, we have a quota component running inside the mixer And on the user side it said, hey, I’m supposed to enforce 5 QPS of quota, and it just prevented the calls on that server-side proxy It prevented them from actually getting into the user service And if there was mutual trust, then we could have actually done that check all the way back up at the books side, and the traffic would never even hit the physical network But in worlds where you span multiple trust boundaries, you really do need to make sure that you enforce ACL-like checks in the same trust domain as the service We could also have done authentication and ACL-ing So maybe you have multiple authentication schemes in your network Maybe you’re using OAuth or signed JWTs or who knows what And those credentials are flowing through, let’s say, HTTP calls You can delegate them out to the service, and they can make sure that they’re consistently enforced across the network And also you can enforce ACL policies, right?
If I have that notion of identity, whether it’s a mutual TLS identity, maybe it’s the Kubernetes service account, maybe it’s something else, it doesn’t really matter You can use that to perform ACL-ing checks And a lot of companies out there have standard ACL policy systems And you can use this to integrate with those And this is probably the most important point– the system is easy to customize And so what we’ve really done is set up a framework where we provide a whole bunch of standard signals about the traffic flowing between services And then we provide this convenient runtime integration point, where you can go and put control over that traffic So it will know which service it came from, which service it’s going to, what the operation is, what region it came from, what credential or identity it has, how big the thing is You can enforce whatever policy you want And so this mixer thing, the current implementation is written in Go We think Go is a good language for people to write extensions in and to plug them into this framework and then just deploy into the cluster So in conclusion, we have our principles And Martin talked a lot about these in the beginning And the big one being services, not infrastructure, right? What we’re trying to focus on is services as units that you can find, acquire, consume, and then operate, right? And not the lower-level building blocks necessary to do all those types of things– bringing the level of abstraction up We talked about interoperability And in particular with the open service broker API– it’s a community effort And we showed you a way to bring services from other environments into your workplace, right?
I had my own catalog I could take Google’s catalog I could take some other vendor’s catalog, bring them in, and start to work with those things Or maybe as an internal thing, right, I just create my own catalog and I curate it And that’s all that my internal developers are allowed to use And there’s this qualification process to get things into the catalog And then we talked about automation We want to make sure that everything is automatable So we walked you through our UI, and we showed you a bunch of command line stuff, and I did some things to enforce policy And then I showed you some of how that worked But everything here is driven by an API So if you want to change the policies around how services talk to each other, you make API calls to do that, right? You probably need to have a lot of privilege to do it, but that’s what you should be doing And so you can start to automate the rollout of services, automate the acquisition of services from third parties, and do all these types of things This is entirely open This is a collaborative effort Google is not doing this on its own or anything like it And so Martin talked about the open service broker He also talked about the Kubernetes service catalog And I talked a little bit about the service mesh and Istio And these are the companies that we’re working with on a daily basis to make that stuff happen So here’s some useful links, right?

You can go and follow these links, read about this stuff As Martin warned you at the beginning, things are alpha, sharp, pointy, dangerous, borderline unreadable Maybe you have to read the code to figure out how it works It could be quite painful So hopefully the next time we give this talk, there’ll be nice shiny docs when you go follow these links, and everything will be entirely understandable And you’d all be using it, and you wouldn’t need to ask me any questions [MUSIC PLAYING]