Cloud66 – Crash Course: Using Docker in Production for your Web Apps

Thank you. Hey, hi everyone — my name is Khash Sajadi, I'm one of the guys working at Cloud 66. Just a little bit about who we are and what we do: Cloud 66 is everything you need to run a Docker-based container application in production, on any cloud or on your own servers. The way it works is we either build the Docker images for you from your source code, or — if you have your own Docker workflow, that's great — you build them and give us the images. We then connect to your cloud provider of choice — DigitalOcean, obviously, and the usual suspects — fire up the servers for you, provision them, and roll out the containers. And we do a lot more than just taking care of containers: there's a lot about web traffic, your visitors, the databases, replication, backups, access control, firewalls, security patching of the operating system. It's basically the whole stack of running your application in production on any cloud provider. So that's what we do; it gets a bit more technical than that, but I'm sure you're not really interested in Cloud 66 — you're more interested in what we're doing to help developers use Docker.

To start, and to make sure I'm on the right track, I wanted to ask a couple of questions, for my own sake. How many of you have used Docker in development or production? Great. How many have not used it in production? Okay — so we have a lot of people who have tried it in development but not in production. Great, cool.

So I think the best way for me to start is to tell you about a couple of projects that are open source, free, and sponsored by Cloud 66. We find them very useful, both for our customers and for ourselves — we built them for ourselves primarily, we're sharing them, and I think you'll find them useful too.

The first one is Starter. Essentially, it's the easiest way to take your application, turn it into a Dockerfile, and make it Dockerized. It's very simple: it's an executable, it's obviously open source, and it's written in Go — you can compile it yourself if you want, and hopefully contribute later on. You get it onto your laptop, then you point it at your project — let's say you have a Rails application — and it reads your source code, does some basic analysis of it, and spits out a Dockerfile. That's the first step, and it's useful because there are a lot of intricacies around how to build Dockerfiles — we'll hear a bit more later about how it's actually worked out in production at DigitalOcean internally — and there are a lot of quirky things you need to take care of when you think about Dockerfiles; I'm sure you've realized that when you're doing Docker in development.

It spits out another file as well, called service.yml. That one is specific to Cloud 66: you can use it to take your project — your Dockerfile and the service.yml — and host it on a DigitalOcean server. But I'm pleased to say that since last week we have released a new version of Starter that spits out a docker-compose file as well. So if you don't want to use Cloud 66 — we would love for you to try it, but people have different preferences — you can use docker-compose and run it in development, and if you ever get into production, hopefully we'd love to have you on Cloud 66.

So let me show you a little bit about the things that Starter does. We're all familiar with Dockerfiles and how they work, but just a quick recap. (I'm going to use a clicker here — I couldn't pair the Bluetooth, but at least it's a good laser pointer.) Dockerfiles have layers: you start from the base and then keep building layers on top of each other. Take a Java application: you have a base of Linux — a Debian one, say — so those two together are essentially the FROM instruction of your Dockerfile. Then you install your application server and your runtime — the web server and the application server — then your application goes on top of that, and above it sits the runtime file system. The reason that top layer is in red on the slide is that when you run Docker, the image becomes a running container, and any change in the files invalidates that last layer — which adds a kind of challenge if you think about it that way.
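That layer stack can be sketched as a Dockerfile — each instruction below maps to one layer from the slide. The base image and package names are illustrative, not from the talk:

```dockerfile
FROM debian:jessie                 # base OS layer (the FROM instruction)
RUN apt-get update && \
    apt-get install -y default-jre # runtime / application-server layer
COPY . /app                        # application layer — changes most often
WORKDIR /app
CMD ["java", "-jar", "app.jar"]
```

Each instruction produces a cached layer, and when a layer changes, every layer after it must be rebuilt — which is why instruction ordering matters so much.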

Now, who's familiar with Rails as a framework? Okay, that's great, cool — sometimes I talk to a Node crowd and find myself completely out of my element. Alright, great. So for those who know Rails, it's fairly simple: in Node you have packages, in Rails you have gems, and in Python and every other language you have these shareable libraries. There is always a step in the installation where you build and download them — usually from GitHub or a registry — and put them next to your application, so your application is complete on its own. And every time you do something like bundle install or npm install, you're changing a lot of files, which means the layer on top is changing a lot. So if my Dockerfile adds my source code and then runs the install — npm install, for example — all of my file layers are invalidated even though I haven't changed anything in my application. Now, what's the likelihood of me changing my code versus changing a dependency? I'm far more likely to change code than to add a new gem or npm package, right? So you don't want to rebuild and invalidate all the cached layers based on that.

Let me show you how a basic Rails Dockerfile can look. I start with the base, FROM ruby:latest, which is a standard Docker Hub official language image, and I do the usual things: apt-get update, install build-essential and Node.js for asset-pipeline compilation — uglifying the JavaScript files, compiling Sass and everything around that. I create an environment variable, create a folder and set my work directory there, add my Gemfile — which holds my package dependencies in Rails, think npm's package.json — and install them, which pulls the gems down and installs them. And then I add my application from the local build context into the folder.

Now it's kind of funny, because that last ADD will essentially add the Gemfile again — it lives inside that folder — but, as we saw on the previous slide, I want to add the Gemfile first, because the likelihood of me changing it is far less than the likelihood of me changing my code. So I add it first, and then every time I change my code, out of all these layers — each instruction turns into a layer in the Docker image — only that last one gets invalidated, so my build becomes much faster. That's one of the most basic best practices you can apply to your Dockerfile. Any questions so far? Cool.

So if I were to feed a Rails application into Starter — and I'll get to the demo, the exciting part, excuse the slides — one of the things it needs to look into first is which gems you use. If I just wanted to write a Rails Dockerfile by hand, it would be very simple: add the Gemfile, add the source code, run bundle install, get it built. But you might be using Memcached, Redis, MySQL, Postgres, ImageMagick, all sorts of things. What are the dependencies for those? What Linux packages do I need to install, for example, to have ImageMagick working in my container? Luckily it's not the same for everything — for Postgres you just need the Postgres client drivers, and so on — and that's what Starter does. It reads your Gemfile — well, the first step, obviously, is to detect which framework you're using, so it looks for clues: this seems like a Rails app. Then it goes through your Gemfile and says: ah, I can see ImageMagick, I'm going to add a boatload of packages to the apt-get install line; I can see you're on MySQL, so I'll add the MySQL client.

Then it looks at your database.yml and creates the environment variables you might need, which are fairly standard — MySQL password, MySQL username, MySQL host URL, whatever else. Also, every framework has its own commands that you have to run; going back to the example of Rails, you have database migrations that change the schema of the database, and you want to run those if you have any. Those are the commands that Starter sets up to run at every build and deployment. Right — let's get to the demo.
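For reference, the Rails Dockerfile walked through above looks roughly like this — a hedged reconstruction of what was described, not Starter's literal output; package names are illustrative:

```dockerfile
FROM ruby:latest                 # official Ruby image from Docker Hub

# Native build dependencies, plus Node.js for the asset pipeline
RUN apt-get update -qq && \
    apt-get install -y build-essential nodejs

ENV RAILS_ENV production
RUN mkdir /app
WORKDIR /app

# Add the Gemfile first: it changes far less often than the code,
# so the expensive bundle install layer stays cached across builds
ADD Gemfile /app/Gemfile
ADD Gemfile.lock /app/Gemfile.lock
RUN bundle install

# Code changes only invalidate the layers from here down
ADD . /app
```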

Okay, so how to install Starter. It's fairly standard: with Homebrew, if you're running a Mac — and pretty much everybody here does — you should be able to try it right now if you have internet. You basically tap the Cloud 66 tap, do a brew install of starter, and you're up and running. It's a Go application; you can go to the URL you see there, find it on GitHub, and compile it yourself if you're into Go. It uses Go 1.5's vendoring experiment, so you shouldn't need any dependencies coming from the internet — it has everything you need to build it. I'm going to share the slides as well.

Okay, cool. Now, here's my application. It's a fairly standard Rails app; it doesn't do anything fancy. I'm just going to run it — what I'm doing here is saying rails server, bound to all my network cards, and start. (Can everybody see this? Want it bigger? A bit more? There you go.) Let's see what it does — it's a super complex thing... there you go, fairly simple. It's got a SQLite database; every time I refresh, it increments a counter in the database, and there's some random styling in the CSS. If I stop this and run it again — we were on 21, it goes 22, 23 — so it is writing data. And that's the point I want to demonstrate in a minute.

Okay, let's put this thing into Docker with Starter and run it in production. I'm going to run the starter binary I just built — ./starter — pointing it at that project. Okay: it detected that I'm using a Rack-based framework, Rails. It couldn't find the Ruby version explicitly, because I did not explicitly declare which version of Ruby I'm using, so it says: I'll assume latest. If I had something declaring Ruby 2.2.0, it would take that, try to find that image on Docker Hub, and use it as the FROM statement of your Dockerfile. I'm going to say yes.

Now, it couldn't find a database, because SQLite comes with everything it needs — it doesn't need anything extra. If I had been using the MySQL gem, it would say: it seems you're using MySQL. I can't demo that here, so I'm going to say there is no database. And these are a bunch of Ruby- and Rails-specific commands that it suggests you might want to run. This one is the important part: it basically loads the schema into your database, so your database has the schema — I'll say yep, that's good, and it's going to run it after each build. And after each deployment it's going to migrate my database, so in case I have a database migration it will run it. Neither is applicable in this case, but it's nice to have.

Okay, there we go. What it did is create the Dockerfile and a service.yml. Let's go and see the Dockerfile it created... oops — okay, yes, exactly, you got me. So the reason it works this way — the reason it writes the Dockerfile into your source code and nowhere else — is based on the assumption that my code and my Dockerfile usually live side by side. If I add ImageMagick, I'll have to go to my Dockerfile and add a bunch of packages, so every version of my code is supposed to work with the same version of the Dockerfile — they move in lockstep. So it drops the Dockerfile into my source tree; I can do a git add and put it in source control, and that's my starting point — hence the name Starter. But then you have the ability to say: okay, this is a good start, now I'm going to modify and customize it, add my own packages, things Starter couldn't detect and identify. And that's really simple — we've seen all of this before. You might notice, for example, that combining two commands into a single RUN lets you do all of that in one layer instead of creating an extra layer on top — things like that; I'll get to those when I talk about Habitus, which takes care of these more advanced build structures.

So that's that. Let's see if I can build it. I go to this directory and say docker build — that's all I did, and now I have a Docker image. Let's run it — my Rails application is on port 3000. Let's see what's in it; I need the image name... okay, here's my application, running inside Docker now. One thing I need to do is find my container's IP address, which is via my Docker host's IP address — and there you go. It's now running inside a Docker container, fairly simple.

Now, remember: last time, when I stopped the natively running app and started it again, it carried on from 21 to 22 — the data was actually persisted in the database. This time I'm going to refresh — we're at 25, 30, 34 — then stop and kill the container and start it again. I'm back on 24, 25, because the database that survived is the one that was baked into the image at build time; the writes made inside the running container were thrown away with it. So that's Starter: fairly simple, it just gives you a head start on getting your application — usually a web application — into Docker.

It does a bit more than this as well. As well as detecting the framework, it tries to guess the port your web server is using. It's also compatible with Procfiles: if you've used Heroku, you know you have background processes you want to run — queue managers, background email senders, things like that — that are not web-centric, and you usually put them in a Procfile. Starter will detect those processes and create a new entry for each one, so you basically end up with a service composition of a bunch of containers, each running one of your background processes, plus one for your web — and you can scale the web one and, obviously, not scale the backend ones.

Here's an example of a service.yml. As I mentioned, the other file Starter creates is a service.yml, which is a service-composition file for Cloud 66 — think of it as docker-compose for Cloud 66. It's more production-centric, where docker-compose is more development-centric. It's a fairly simple format, and I'm sure you're familiar with the shape of it: I have one service here, my web — where do I get the git repository with the source code, which branch, which command I need to run to start the server, the build and deploy commands — the schema load and the migration we just saw — and where my build root is, the context of the build. Again, fairly simple. The port is 3000, but you'll notice there's an 80 next to the 3000, which basically means: in production I prefer to serve on port 80 rather than 3000 — I don't want to tell my customers to go to my domain colon 3000. And then a bunch of standard environment variables.
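A service.yml along the lines just described might look like this — a hedged sketch based on the talk's walkthrough, not Starter's literal output; the repository URL and values are placeholders:

```yaml
services:
  web:
    git_url: git@github.com:me/my-rails-app.git   # placeholder repo
    git_branch: master
    command: bundle exec rails server -b 0.0.0.0
    build_command: rake db:schema:load            # run after each build
    deploy_command: rake db:migrate               # run after each deploy
    build_root: .
    ports:
      - container: 3000
        http: 80            # serve on port 80 in production
    env_vars:
      RAILS_ENV: production
```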

Am I going too fast, or too slow, or is it good? Nothing? Okay — just ask questions; I don't want to leave anybody behind, so interrupt me. Alright, cool. Now I'm going to show you a slightly more complicated service.yml, a little bit about how Procfiles come into it, and then we're going to put this onto a DigitalOcean server. The main web application of Cloud 66 is itself written in Rails — which is why I'm so Rails-centric — and I'm just going to run Starter against Cloud 66 itself and see what it comes up with. So I point it at a project we internally call maestro. Again, we don't explicitly declare which version of Ruby we want, so it assumes latest — and it found MySQL this time, so I say yes. It found Redis and Elasticsearch — those are the three databases we use at Cloud 66 in the backend. Any other databases? No. It also found a Procfile, and as you can see we're using a custom web process, which for us is Unicorn — not Passenger, which is another application server for Ruby; we use Unicorn, which is what we prefer. There's Faye, which is our WebSocket real-time communication mechanism — it updates everything and sends logs back to the UI — a scheduler, which is what it sounds like, and then background workers of different priorities. It found all of that, and again it goes through the schema-load and migrate questions and spits out the files.

Let's have a look at the files. The Dockerfile is much the same in this case, but here is the service.yml. Again my web service is there; it detected that I'm using Unicorn, so the command uses Unicorn instead of rails s or WEBrick. I've got Faye, my WebSocket communicator — as you can see, it found out that I'm running Faye on Thin, which is another application server that handles WebSockets for us — added it there, detected port 8080, and it just works down through every single one of those processes.

Now, at the end — let me scroll up a bit so you can see — there's a section called databases, and this is the part where we differ from docker-compose. At Cloud 66, when you run your application with us, we run your databases on a VM — we don't run them inside a container. Because of that, if you tell us you have MySQL, what we do is connect to your DigitalOcean account, fire up a single server, and provision MySQL on it — whichever version you want; you can specify the version and all that sort of stuff. You fire up MySQL, you fire up another server for Redis, another for Elasticsearch — or you can say: I want Elasticsearch and Redis to share a server, because my search load isn't that heavy. What this then allows is: you can just go to the Cloud 66 dashboard, or the command line, and say: I want replication. What happens is that Cloud 66 connects to your DigitalOcean account, fires up another server, and creates a replica for you — as simple as that. You can say: I want another Redis. And you can actually do things like: I want the primary of my main database in Amazon and my backup in DigitalOcean — you can even have cross-cloud, cross-datacenter replication. So that's the more complicated setup.

Now, you've seen the application — let's go to Cloud 66. This is the live Cloud 66 dashboard.
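Before moving on: a Procfile like the one just described — a custom web process, a WebSocket process, a scheduler, and prioritized workers — might look like this. The process names and commands are illustrative, not Cloud 66's actual Procfile:

```procfile
custom_web: bundle exec unicorn -c config/unicorn.rb
faye: bundle exec thin start -R faye.ru -p 8080
scheduler: bundle exec rake scheduler:run
worker_high: bundle exec rake resque:work QUEUE=high
worker_low: bundle exec rake resque:work QUEUE=low
```

Starter turns each line into its own service in service.yml, so every background process ends up in its own container alongside the web service.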
We do a lot of dogfooding — or drinking our own champagne, or whatever that's called — and we deploy Cloud 66 pretty much with Cloud 66. So what I'm going to do is take the Dockerfile and the service.yml that we created and see if I can host that application on DigitalOcean. Okay: here I give my stack a name, and say which environment — you can create as many environments as you want. This is my web service; where did it come from? That's my source code, and I could use the service.yml here, but in this case I'm just going to do it the old-school way. That's my git repository, my service is available on port 3000, and I did not use any databases — well, SQLite — so I say next.

Okay, now I have the option of choosing a cloud provider that we support, or deploying on a server I already have. I'll say a cloud provider; obviously I'll choose DigitalOcean, go to New York, say, and pick a server size — one gig — and off we go. It starts building. DigitalOcean gives us a server under your account — well, it's actually my account here, but it would be yours. We then connect to it and provision it with everything it needs. We don't install anything of ours apart from two small open source agents. One monitors your IP address for changes — in case you bounce your box, or there's a crash, or DigitalOcean decides to move it to another physical host, and the IP address might change. The other is the heartbeat, which makes sure we know if your server becomes unavailable — and that unavailability could be a network issue, or the server being under such a huge load that you can't even run a command because your CPU is saturated. That's it; everything else is exactly your server. You can SSH into that server and look around, and it looks exactly as if you'd run your Ansible or Chef or Puppet or whatever configuration scripts on it yourself — you'll find everything in the usual places, and everything is open source. There's Nginx for your web; if you want a load balancer on DigitalOcean, we can actually do load balancers, which is HAProxy, and if you're using, say, Amazon, then we use ELB — you can do whatever you want.

Okay, this is going to take a while, because it builds and installs all sorts of stuff — so here's one I prepared earlier, in the other room. While that one's building, this is the one I created with the same setup, and here it is: as you can see, I did a couple of refreshes on it before, and you can go to that URL and see the app. As simple as that — we didn't do anything fancy, and I could do exactly the same thing with databases and everything else around it. Okay, great — any questions so far? Sweet.

Okay, so that was Starter, which is about getting started with Docker. It supports Rails, Node, basic Python and Django — not much more than that. We hope you like it, and hope you contribute to it if you want — again, it's something that will help everyone, and it doesn't necessarily just create Cloud 66 files; it does docker-compose as well. If you're a Java developer and you want to contribute that, you're more than welcome.

Now, the next project I wanted to talk about is called Habitus, which is slightly more advanced, I'd like to think. It's another open source project that we are announcing as open source today — it was closed source until now. We built it for ourselves, and I'll tell you why we had to build it; we use it quite heavily ourselves. If you're into building Docker images, you'll realize it's a useful thing — and now we're working with CoreOS and a bunch of other companies, just because they love it and want to contribute to it as well, which is quite cool.

So what is Habitus? It's a Docker build tool — you can see it at habitus.io; I don't know why the logo looks like an elephant's bum, but here we go. What does it do? Normally you create a Dockerfile, which is quite simplistic: you create a base, you create a bunch of layers on top of it, and you have a Docker image. But what if your build is more complicated than that? What if you have a scenario like this: say I have a Java application — or a Go one, just because I like Go. I have a Go application; I have to build it, i.e. compile it; then I want two images created based on that, one for the API and one for the web.
Then I upload them to my S3 bucket, and everybody can download them. That's a more complicated build — but there's also something else going on here. When I build a compiled language like Java or Go, I have to build that image with compile-time libraries — for example with ImageMagick, or in Go you might want GCC, your C compiler and C libraries — libraries which you won't need at runtime. What that does is bloat the image, make it slower to download, and use more space — but I think more importantly it increases the attack surface: more packages that you have to keep up to date, more moving parts to worry about. So you might want to do something like: create the compile-time image, compile in it, then take the compiled binary out, put it into another, slimmer image, build that one, and then take the artifacts of that and upload them to S3. Something like that.

Or even more: imagine you want to pull the code from a git repository, and it's your private git repository, so you need a private SSH key. If I do a git pull as a layer in my Dockerfile, I need an ADD command in my Dockerfile that adds the SSH private key — and just by doing that, I've included my private SSH key in a Docker image that somebody might, by mistake, docker push. And that's it — it's public, everybody's got my private key. So what do we do? You have to take that layer out, or provide the private key from somewhere else. And if you remove the file with a delete, it's still in the history — it's like git history; you cannot rewrite history in the majority of cases — so it doesn't matter that you deleted it. You want to squash that image — essentially yank that layer out altogether.

These are the things Habitus does. Here's an example of a build.yml — another YAML file you create to describe complicated, multi-step builds. The first step I'll call compile — an arbitrary name — and it uses a Dockerfile called Dockerfile.compile, because you can rename Dockerfiles. It runs, and it does something — let's say it compiles the project into an executable called iron-mountain (it's actually a real thing) and puts it there. What Habitus then does is go in and pull that file out: not only does it build the image, it fires up a container based on it, goes into that container, pulls the executable out, and puts it into your build context. So in the next step — say, api — I can have an ADD iron-mountain, and iron-mountain will be available there. I've basically moved an artifact from one image to another.

I can have cleanup commands as well — this is what I was talking about with private keys. When I put something like "remove any SSH keys" there, it doesn't just run that command: it runs it, but it also squashes — it pulls that layer out, so your image has no trace of your private key in it. So that was the api step; then there's web, which is much the same; and at the end I have the upload step, which just runs a Dockerfile we've created containing s3cmd — a command-line tool for uploading things to S3 — which takes my Amazon access key and secret key and uploads the artifacts into my S3 bucket, right there.

And you can have dependencies: I'm saying my api depends on the compile step, my web depends on compile, and my upload depends on both web and api. So it builds a dependency tree and then runs the steps — multiple of them in parallel if they don't depend on each other; any independent steps run in parallel for a faster build. It does all of that with you just saying: habitus, pointed at something, and off it goes. As you can see, it also supports things like environment-variable replacement in the file, so you don't have anything in your source control containing, say, AWS keys.

Any questions so far? Okay, cool. Now, as I said, you can go to habitus.io and have a look.
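Pieced together from the description above, a build.yml for this scenario might look roughly like the following — a hedged sketch from memory of the Habitus format, so treat the key names as approximate and check the documentation at habitus.io:

```yaml
build:
  version: 2016-03-14               # schema version expected by Habitus
  steps:
    - name: compile
      dockerfile: Dockerfile.compile
      artifacts:
        - /go/src/app/iron-mountain  # pulled out of the built container
    - name: api
      dockerfile: Dockerfile.api     # ADDs iron-mountain from the build context
      depends_on:
        - compile
      cleanup:
        commands:
          - rm -rf /root/.ssh        # runs, then the layer is squashed away
    - name: web
      dockerfile: Dockerfile.web
      depends_on:
        - compile
    - name: upload
      dockerfile: Dockerfile.upload  # s3cmd push; keys via env substitution
      depends_on:
        - api
        - web
```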
There's an example there of how it works; it's open source now, so you can go to GitHub and see the source code and all that sort of stuff. You can read about artifacts — how they move around between different images; the cleanup step, which squashes and yanks layers out of the images; and image sequencing, which is quite interesting: you might build one step where the next step's FROM is actually the previous step — so not only do you move the artifact, you create the base image for the next step in the prior step.

And you can run multiple of these at the same time on the same build machine. What that means for you is that you can have a dedicated Docker build machine — a server in your office. Here's the thing: most of you have done Docker in development, and you know that when you want to build the image you run docker build — and if you're unlucky and you have a file on your local machine that you don't want in the image, you might have an ADD . and that file ends up in the image. With a build server, you can have a repeatable — reproducible, if I can say it — image build system. And for that you need to be able to do parallel builds: if I commit something and it triggers a build, and my colleague commits something and triggers one too, they should run at the same time. Habitus takes care of that by assigning a session ID to each one and tagging each image it builds with that session, so you can actually run multiple Habitus instances in parallel with each other. Step dependencies we've talked about; environment variables are fairly standard; you can also run a command after the build — like uploading the image, or whatever that might be. And there are a bunch of command-line options for things like how to find the Docker daemon.

Okay, I'm just going to run it once so you can see how it looks. I've actually forgotten the syntax — okay, the work directory is -d, and I point it at a Go language project that I have... ah, okay, I have a bad build.yml — it's probably that dash; that shouldn't be a dash, I'll take it out for now... okay, so I'm guessing there's a tab/space issue somewhere in there. Ah well — that's pretty embarrassing. Alright, I'm not going to debug it now — that's my local build that's not working — but as you can see, this one has tagging around it, so you can have tagging as well.

There's one caveat that we have not been able to solve — if you're interested in a challenge, that would be great — and that's the squash part, where it actually pulls a layer out of an image. It requires the files within the Docker image to be moved out to a temporary folder, so they can be written back without the ones that are going to be pulled out, and it requires the Linux permissions on the files — the UID and GID — to be preserved, and that requires sudo rights.

rights so if you want to run habitus you need to run it in sudo if you have a clean up command step if you don’t you can run it without that and that’s something obviously you want to avoid not have if you can but up but then again it’s a tool for your laptops or your internal build server in the company so should be ok everybody within the company’s trusted but ideally want to avoid that step as well and that’s another thing that’s thing ok with that i think i’m done have any questions or anything you want to comment or ask would love to answer it otherwise thank you very much yes yes yeah Oh actually kind of yes and no we have two products classics is for rails and classic 64 docker and the class uses for rails does actually run that starter step it’s not starter itself but pretty much what we built in starter is coming from those heuristics that we learn from how people put gems where they put it what they do with it and the analyzed part on the other side is is another thing and that is there are two ways that you can run darker in production with car 6 this one is I have my own build system for daca it’s working I don’t want you in there I’m just going to give you a docker image I’m going to say I have it in you know quay aisle or I’ve got it in j frog or somewhere and it’s my daugher repository and it’s built its QA tested CIA’s run on it you just pull that out here’s my credentials go and pull it out and deploy to my service in that case analyst really doesn’t do anything it immediately comes back and says okay I can check that your repository credentials are okay and you have the image that you say you do have but if you say I have this git repository and here’s my private as SH key then what we do is we actually pull the code out we run it through habitus to build the image for you in a hosted way so if you don’t want to have your own build server in house you can actually use God sixties for that and not only that it builds it and then we give you a 
private Docker repository to push it out to, and that repository is then available near the servers you want to deploy to, so it just immediately pulls the image out to the servers and they get started. It also does things like versioning, so you can have a look at this, which is quite interesting. If you look at our other stacks on Cloud 66 itself, like an internal service, which is probably one that we're running with Docker as well, you can see that you basically have a build timeline: how many times we built that image, how many times we published it, as in rolled it out, so you can say I want to roll back to that hash of the image. And what we found quite interesting is this. The old-school way of doing things: you have your git repository and you have a hash for each git ref. Everybody commits and changes something and it goes to production, and say there's a performance regression, like the site goes too slow; then the sysadmins or the development managers can go and say, site's slow, this deployment, what commits have gone into it, what is the issue? Somebody just said sleep 10 because they wanted to test something, let's say, for the sake of argument. When you have Docker involved, you actually have two repositories, one with your code and the other your Docker repository, so there's a break in the flow. As well as that, you might have multiple versions of your app running in flight: the old versions are draining traffic, the new version is coming up, and requests go too slow, some of them slow, some of them not. You need to be able to say this one is the new version, what Docker image hash it is, what's gone into that build, which code commits went into it. So there's a long trace you have to follow all the way back to the commit, where is that sleep 10, and it becomes much more difficult to find, and that's why something like this will help; so we call it build grid
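A lightweight way to keep that commit-to-image trace yourself, independent of any platform, is to stamp the git SHA into the image at build time. A sketch; the label key and base image here are our own illustrative choices, not something from the talk:

```dockerfile
# Dockerfile (fragment) -- stamp the build with the commit that produced it
FROM ruby:2.3

# Passed in at build time; defaults to "unknown" if not provided
ARG GIT_SHA=unknown

# A label lets any running container be traced back to the exact commit
LABEL com.example.git-sha=$GIT_SHA
```

Built with something like `docker build --build-arg GIT_SHA=$(git rev-parse HEAD) -t myapp:$(git rev-parse --short HEAD) .`, the SHA can later be read back with `docker inspect`, so the slow container with the sleep 10 in it points straight at the commit that built it.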
and that’s what the analysis that sure yeah couple of other right yes okay very very very good observation so right so every server that you fire up with class 66 to run your application has these things your application obviously but it also has collecti which is everybody’s familiar with is an open source tool that collects metrics from your server like house how CPU is doing how much memory do I have how much do I have and how much network backlog do I have so that’s we use to show you charts on custis’s dashboard to see like everything and you can put monitoring on it and all sorts of things so I don’t run out of disk space and things like

that there’s also another thing that we run called Delphie which is the product that we have and that is a dns server it’s a local dns of so every container that comes up it actually points to that dns server which is a local one and what it does is quite interesting because you can say deploy deploy deploy three times in a row because you had three commits that just gone in and let’s say you have a CI CD that every comment left and 20 developers everyone commits and at any time you have like three things going in you have three versions of your application in flight you don’t know what’s and now getting the traffic the old ones are drained from traffic you have a longing for large file being uploaded wait for that to finish to kill it before that all sorts of things like that right however let’s say you have two services web an API if v1 of web the old version of web is asking for API it needs to get the old version of API if the new version is asking for API needs to get the new version of the API because you might have a contractual issue like the API is not backward compatible or whatever that might be and this DNS is aware of it because you deployed with class 66 you know which version of which container has what version of your code and just by the fact that is asking that question from us that says API that low classics is that local you know are your IP addresses an old version i’m going to give you an idea of an old version of api or the other way around and that’s how it’s a version where dns service and service discovery so that’s what you see at Delphie running on its weave which is our network provider so we use we work which is a use of the company that we used for the basic network provider I’m you chose IP address of 25000 because we realize that no cloud provider is going to build a data sent in Yorkshire in England so if you have a cross data center network the IP completes going to be minimal so that basically provides a seamless network between 
between all your containers and non-container components, like databases and others. Any other questions? We're concerned about scaling. Right, so there are two ways to do that. And, maybe this is a stupid question, but from what I've seen in the UI on Cloud 66, I can easily add servers, put a proxy in front of it and get round robin going on, check; but it also looks like I could put more than one container on a box? Yes. That's a very good question, and it's actually quite interesting. So here is an example of that: as you can see, we have four services, and this one is running two containers while each of the others is running one. It very much depends on the type of application that you're running. An example would be Node.js; everybody knows it's single-threaded, so you might want to have multiple ones to leverage your multiple cores. Another example of the benefit of running multiple is if you have something that drains a queue, say to send emails, and you end up with a backlog because of a spike in traffic and emails that you have to send. But some applications can be multi-threaded themselves, so you can actually get away with having one of them on the server. Okay, in that specific example, I'm not a PHP expert, and not a WordPress expert by any means, but I believe you can actually get away with having one instance of that, because it sits behind nginx as the application server, so the whole HTTP work that Apache would do for WordPress will be done by nginx. The application server side of it is something I'm not really an expert on, but I believe you can have one instance of the whole thing; that might be wrong. Any other questions? Sweet, thank you. We're very heavy users, with non-web processes, considering moving to hosting on AWS with Docker images for a new project, if

you’re coming from world where the number of instances for a given process control via profiles and scaling the number of those up and down what Arsene are there any sort of best practices for how you would approach that in a right either continuing to use profiles and you know for men or tools like that right yes okay great great question so the question is using Heroku dinos and workers you have a proc file what is the best practice field moving out over a grand frankly what we call ourselves like we sit at the bottom of a past cliff people running Heroku everybody loves her okay and everybody starts with her who nobody ends matera crew and people come to classics a lot of them I would love for you to govern they should check out dissolution that’s an Amazon but not in any seriousness what actually happens is we get this question quite a lot because Heroku obfuscates pretty much everything there’s a diner and magic and everything’s there’s no like a single unit of thing that you can take bond once a cpu cycle means CPU cycle and a megabyte is a megabyte notice like a diner when it like they have to rest nowadays and whatever then so there is there’s a lot of magic involved which means you cannot really translate that then it’s a very valid question what my answer is and this is something that I have to tell talk to my customers caught a lot is very much depends unfortunately there is no magic bullet it very much depends on what framework you use is it multi-threaded is a single threaded how much I owe bound is it how much memory boundaries it is so you have if you have a ir bound process means that you might have a single threaded one but that thread is going to get stuck because it’s an acute too bright to the disk or whatever that might be if it’s not just goes and come back and response to something put something on a queue whatever that might be then you have a different scenario and how many uses do you have to serve for that as in its non-web I understand 
but it could be how many emails you want to process, how many calculations you want to make, how many matches you want to make on a dating site. We actually had that problem, which was quite an interesting one, because there was this dating site, who are actually still our customer: they have, say, a thousand men and a thousand women, and heterosexual matching would be like a million combinations of different things, and every single new member adds another thousand calculations for who's the best match. So there were a lot of dynos, essentially, working through all those things, and it very much depends on how long it takes; in that case it was very much a mathematical thing, very fast for every single calculation, so you could get away with a very CPU-bound distribution, not much memory. But if you have a lot of, say, Redis, it's very much memory-bound, and you have to think about that. I would be more than happy to help you on a specific case; our support team is quite familiar with those things, especially coming from Heroku, but there's no magic bullet, I'm afraid. Oh, right, no, I'm afraid I'm not; I'm sure, if I look, you know, we have probably 4,000 different stacks, unique stacks, and that's taking out different environments of the same stack, I'm sure there are examples of that. I'm the least technical member of the team, so I'm afraid you're stuck with me, but I'm sure there are examples I can dig out if you just, you know, send me an email; I'll put my email up at the end, and if you send an email I'll be more than happy to help you out with that. My Twitter handle is there, but my email is khash at cloud66.com. In terms of performance and pricing? Right, so as far as performance is concerned, there is nothing we do that degrades anything in one environment over another. The only thing that's different is a development environment: a development environment doesn't do backups and doesn't do load balancing, but it's free. So that's
one side. As for anything else you might set up, production, staging, QA, you know, showcasing, whatever else, they are the same. What we find, however, is that many of our customers arrange those things like this: you have your dev, staging and QA on, say, DigitalOcean, and then production on AWS, or whatever that might be, and then they have a kind

of dehydrated version of the main thing: a couple of servers, one database and everything else on another server. So it degrades the performance, obviously, but it's way cheaper and much faster to spin up, and then production goes into the full-blown, you know, fifty front-end servers, three databases, or whatever else that might be. Not usually, but we actually have a feature called failover which allows you to clone a whole stack, databases and servers, up and running. Yes? So you can use any git ref that you have, be it a hash or a tag or a branch, to deploy however you want, and that can be built into an image; so you can say I want that version built into this Docker image, and then that will be deployed. And you can go to this page that I showed you here, the build grid page; on this page, basically, these are different versions that were built. This one was deployed, but this one wasn't; it was just to see that the build works. So you can have each one of them deployed, and you could go back and say rebuild that, because there was an external dependency that wasn't in version control that will change this build, which is not quite right; or we can say publish this. And that is just a built image, so all it takes is for the old containers to go away and the new ones to come in; there's no build step or anything else, basically just a simple docker
pull, starting them up, giving them networking and all sorts of other things that happen, but it's fairly quick, so you don't need to go through the whole deployment cycle. And you can also do that on commit, by putting it into your git hooks, so every developer will do that anyway: the whole thing just starts and it goes to production or to staging. Sweet. Well, thank you everyone.

Thank you. I'm Tommy Murphy, I'm from DigitalOcean. I've been here about six months, working on the delivery team, where we're trying to make it easier for our developers internally to ship code. We've got a few different dashboard apps, our API, some billing, some asynchronous processes, a lot of storage management; we have a lot of just internal services that we run, and we're trying to make those easier to deploy. One of the technologies we've played and fought with here, in trying to achieve that goal, is Docker. So yeah, as for the "why Docker", this slide should be self-explanatory. Where we started out: our CI environment is Drone, and that works by running your code inside of a Docker image. When I got here we had a Ruby base image that had four different versions of Ruby installed on it, used across five different Ruby and Rails projects, and there'd be Node.js installed for some reason, and weird versions. That's where we started out: this one base image that everything was just kind of thrown into, which was difficult to manage. Then came along a project to do an isolated environment: for some products we were developing we needed to spin up copies of DigitalOcean, like clone DigitalOcean. And a lot of our stuff is deployed, currently,

with Chef, on droplets, internal droplets for us, so cloning all of DigitalOcean is kind of a big task. One of the ways we set about doing it was, for each different project, we threw out the shared CI build image and created our own that was based off of an official Docker Hub image, so off of Ubuntu, or off of the Ruby base image. That way the knowledge of what the dependencies of this application are resides actually in that application's repo. And once we were able to do that for a minimum set of our applications, we were able to make a Docker Compose file that stands up a new, like, DigitalOcean, so that was pretty cool. That's kind of where we're at: having specific Dockerfiles for specific projects, and now we're bootstrapping and running the CI within that Docker image, throwing out the one with everything in it. And where we're moving to is containers deployed in production; so, you know, Kubernetes, Mesos, Cloud 66, these different platforms, where we will just produce artifacts and then make it easy for the rest of the DigitalOcean developers to have those artifacts running in production. So that's kind of where we are, right about here, halfway; we're not running fully in production yet, but many of the lessons learned from cloning our infrastructure have been iterated on in these Dockerfiles. So what I thought I'd do is just start off with this, which is from the Docker Hub or Docker Compose documentation for a Rails application; it looks very similar to that one. We go from an image, install some dependencies, add the Gemfiles, which is nice for the caching, do a bundle install, and then add the rest of the application in. Where you may run into some issues: first off is this ruby:2.2.0 image, because it turns out 2.2.0 is no longer a
supported version of Ruby, so that Docker image doesn't necessarily get updates or security updates. You know, I ran this a few weeks ago, where I just looked at the packages in that base image that had security updates available, and there were 64 of them. So if you care about that, you might want to have a plan for dealing with your base image, like contributing back up to Docker proper to say, hey, this thing needs updates, if it's an officially supported image; and you'll probably want to be based off of an officially supported image. Yeah, and this is OpenSSL, which had about 15 CVEs in that version from the ruby:2.2.0 image, so that's probably one of the packages most relevant to your application and to being able to respond to vulnerabilities in it. Since we're just running in CI and locally right now, we can ignore that for the most part, but as we go towards production we want to have plans in place to deal with it. This is just something a bunch of Googling and Stack Overflow came up with, to update just the security dependencies in the Docker image. It adds an extra layer, and, yeah, the Cloud 66 products, like squashing layers, or adding this to a build, would probably be a good fit as part of fanning out your different Docker images. So, yeah, back to the example. All right, we install some other packages, add the Gemfiles, and then you get to the bundle install, which was talked about previously. That's all well and good when all of your gems are public gems hosted on, like, RubyGems, but the problem comes when you've got something like this, where I'm referencing another git project

that is a private repo, like on our GitHub Enterprise, or, you know, somewhere else where you need authentication in order to actually complete the bundle install step when you're pulling in your other dependencies. So, everyone's first stab is to go and commit secrets. Just don't do that; stop it. If you add something like this, or you have your token in there: just stop it. We can solve this problem without putting our secrets in our repository, or the private key into the image; there was actually a good idea earlier about pulling that out if you're going to put it in your image. So we looked at it and had some different ideas. Because when you're building the image you're in kind of a clean-room environment, you don't have all of the SSH keys that you normally have as a developer. One option is you could try to share the ssh-agent with the context of the running build steps, but actually doing that is pretty difficult with Docker. You could run bundle install as CI or as a developer, into a special folder, so that your developer is pulling the gems with their identity, compiling them, and then adding the result to something that can just be added into your repo, so that the bundle install step in your Dockerfile actually doesn't do anything, and you vendor the gems directly in git. That's actually what we started out with: bundle package. This pulls the, theoretically, source-only gems and caches them into a vendor/cache folder that should be platform independent. We were doing this so developers could run it and then just commit the result to the repo; it didn't add that much weight to the repo to have these gem files, and the nice thing is that in your Docker build step you have all of your dependencies already with you when you're
building. And that was all great and good until, on Friday, we added the google-protobuf gem, which publishes platform-specific gems; there are only about 50 gems that do this. So when you run bundle package on Mac or Linux or Windows or something, you get a completely different, prebuilt binary. That was annoying, and we're probably going to move to a system where our CI uses the identity that cloned the repo to actually do the caching of gems. But yeah, there can be a lot of issues at that step that you might need to work around. Takeaway: don't commit your secrets, because they're going to be in your git repo history forever. And then the last step, yeah, this is the ADD, right: you add the rest of your files, your application or whatever. One of the problems is that after this, everything runs as root: all of your ADD and COPY commands are going to be root, and since we didn't have switching users as part of the Dockerfile, or creating users, you're going to be running your infrastructure as root, which you may or may not care about depending on your threat model. It's just something where, if you want to run as not-root, you have to do a lot of adding layers to change ownership of files, switching back and forth between root and non-root. We're going to kind of ignore that and go with Docker 1.10, which was released recently and adds support for user namespacing, where root inside the container is not your system root. I don't think it's turned on by default yet; we're just not looking at that problem since we're not actually running it in production yet, but yeah, I think there are good things to come, I think, from

Docker, and from rkt too, about running as a less privileged user, in addition to the other benefits of running inside of a container. Boom. And then, right, so everything's root; now, what do you run? When we stand up our cloned environment with Docker Compose, we've just been running the WEBrick server, but you might need a faster application server, like Unicorn or Puma or something, that sits in front of your actual Ruby code and does things like the HTTP parsing, and we might have stuff for TLS termination, things like that. Then there's finding out how much to put in the container and what to split out: you could run just Unicorn here, but do you want an nginx-specific route for serving your assets, versus going through Ruby to serve assets? There are, yeah, a lot of different options there in how much you want to put within the container. We're trying to stick to a one-process-per-container model, so we're looking to run nginx separately, in a separate container, outside of our Rails, you know, Unicorn host. But yeah, it's another thing to think about: how much you want to put within one container. And then, just getting around to your basic twelve-factor stuff: separating your configuration from your code. Even when you do it, we ran into stuff that didn't work out well. We had a lot of YAML files that had configuration dropped in from data bags in Chef, and one of the things we did to make this much easier was, instead of doing file-based configuration, we just took environment variables as kind of the least common denominator; you know, there's the Figaro gem or the dotenv gem in the Rails ecosystem to do this. One of the areas where that tripped us up is development and tests: you can pass in just a database URL, but one of the things that a Rails
spec run does is copy your development schema into your test database, and it does smart things there. So we separated the database name for those two, it's hardcoded, but we left everything else configurable, and I think our database.yml is the only YAML left, and it pretty much looks like this, just interpolating your environment variables into it. Some other places where we had things configured in ways that didn't work with Docker and Docker Compose and arbitrary environments were DNS things, where we had hostnames hardcoded, or you could put in a staging region or whatnot; but when you don't have actual DNS tying all of that together, things need to be IP-based. We got bit by that and had to refactor some code to make more things configurable. And, yeah, make everything configurable, not just the host but the actual port you're on, the different credentials to use; where possible, configure the whole thing even if you don't think you'll need to, because when you're running, say, four Redis instances on one VM, you'll want to change the ports; it's just easier that way. So, yeah, that was about it on the stuff we've run into. Thanks for attending. I'm Tommy Murphy; you can, you know, follow my thought leadership on Twitter right there, and, yeah, DigitalOcean is hiring. So, yeah, are there any questions? It was mostly just that things were

expecting there to be full DNS names; code was written with the expectation that this would be in DNS, but when spinning up an arbitrary environment, all we had to link two containers was an IP address and a port. So there were basically port assumptions made when initially writing the code: we thought it was configurable, but it turns out it's not fully configurable to where we can spin up an arbitrary environment with it. Yes, they're a little different from other droplets, but yeah. Yeah, so that's an active area of research: how to get the configuration for that third step, running everything in production. One of the things we're looking at now is Kubernetes, just using its Secrets feature, which isn't very complete, and then there are other secret management solutions we're looking at, like Vault. Yeah, it's another unsolved problem; I've looked at it a lot, we don't have it running, but I just like the simplicity, to begin with, of treating everything as a secret, everything as an environment variable. It's kind of a first step, I guess, at our first iteration of trying to run some stuff in production in containers. Cool, well, thanks for attending.
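As an illustration of the environment-variable approach described above: Rails runs database.yml through ERB before parsing it as YAML, so the "everything except the database name comes from the environment" idea can be sketched in plain Ruby. The template contents, variable names and default values here are illustrative, not DigitalOcean's actual config:

```ruby
require 'erb'
require 'yaml'

# A minimal stand-in for config/database.yml: the database name stays
# hardcoded, everything else is interpolated from the environment.
template = <<~YML
  production:
    adapter: postgresql
    database: myapp_production
    host: <%= ENV['DATABASE_HOST'] %>
    port: <%= ENV.fetch('DATABASE_PORT', '5432') %>
YML

# Simulate the container's environment (names are illustrative)
ENV['DATABASE_HOST'] = 'db.internal'
ENV.delete('DATABASE_PORT')   # unset, so the default kicks in

config = YAML.safe_load(ERB.new(template).result)
puts config['production']['host']   # prints db.internal
puts config['production']['port']   # prints 5432
```

The same rendered hash is what Rails would hand to its connection pool; gems like dotenv or Figaro only change how the ENV values get populated, not this interpolation step.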