Cost Optimisation on AWS

Hello and welcome to this AWS webinar. My name's Ian Massingham, I'm a Technical Evangelist with Amazon Web Services based in Europe, and I'm going to be your host for this session. This webinar is on cost optimisation on AWS, and it's the final webinar in a series of three that we've produced to help new customers get started with the AWS cloud. You can find part one, "What is Cloud Computing with AWS", and part two, "Best Practices for Getting Started on AWS", in the links panel in the webinar today. You can also access them from the video description if you're viewing this webinar on demand on the AWS webinars YouTube channel.

Before we get started, a few housekeeping points. Firstly, as I've already said, today's webinar will be recorded and made available on our webinars YouTube channel. You can also download the presentation materials: take a look in the files panel in the webinar interface and you'll find a PDF of the materials from this session, or if you're watching on demand, the webinar description on YouTube has a link to SlideShare where you can view and download the PDF. Do grab the materials if you think they'll be useful for you. There are a few links and references in them, and clicking those links from the PDF saves you having to copy and paste them into your browser.

If you have any questions at all during the session today, whether about the content of the webinar or anything else related to Amazon Web Services, please feel free to ask them using the Q&A panel in the webinar interface. I and the rest of the team here at AWS are standing by to answer any questions that you might have. If you ask us anything that we can't answer during the session, don't worry: we'll get back to you by email over the course of the next few days. Lastly, when we close out the webinar today we'll put it into Q&A mode for a few minutes and show a slide with some social media links and Twitter accounts that you can follow to stay up to date with Amazon Web Services here in Europe and also worldwide, so keep an eye out for those at the end of the session. You'll also have the opportunity at the end to rate today's webinar. We'd love to get some feedback on how we're doing with this series, so if you can spend a couple of seconds at the end giving us a score from one to five, we'd really appreciate it; it helps us optimise the content. You can leave qualitative feedback in the Q&A as well: if there's anything we could improve, or anything we've done well during the session today, we'd love to know about it so we can optimise this webinar for future audiences.

OK, so let's get started: cost optimisation on AWS. If you're a startup or a relatively young organisation, this is all about making your funding go further, and if you're a more established business that's been around a little bit longer, optimisation is a key part of improving profitability and making the organisation more viable. AWS is actually a really useful tool for this, and there are four basic reasons why, all to do with the way in which using Amazon Web Services changes the economic model for your access to the IT services you need to run your applications and workloads. Firstly, you're able to replace traditional upfront capital investment with the low variable cost model that comes with the AWS cloud. Secondly, we operate at significant scale here at AWS, by virtue of the
large number of customers that have chosen Amazon Web Services to support their applications and IT workloads, and we use those economies of scale to continually lower costs, passing the savings on to existing customers and, of course, to new customers. Thirdly, there's choice in the pricing model to support both variable and stable workloads. We'll talk about this in greater depth later in the session, but in essence you're not forced to use resources under the on-demand, pay-as-you-go model: if you know that a workload or application is going to be around for a little while, you can pre-buy, or reserve, the capacity you need for it, which will further improve your pricing, lowering your costs even more. And fourthly, we have a series of mechanisms that allow customers to save money as they grow bigger. Amazon S3, our Simple Storage Service, is a good example: its pricing model has volume break points where the cost per gigabyte of your storage reduces, not just for the new data you're
storing, but for everything you've already stored in that service. There are several other AWS services with similar characteristics, where you'll save more money as you grow bigger.

So why exactly is the cloud more cost effective than traditional on-premises IT that you might own and operate yourself? There are a couple of reasons. The first is that we have a very high-scale, highly utilised platform: we're able to drive fundamentally higher levels of utilisation than any one organisation could achieve on its own. We have a rather technical-sounding term for this: aggregating non-correlated workloads. In other words, we have customers from many different sectors and industries using AWS today. Their workload profiles, in terms of the amount of demand they have, vary over time, but different sectors tend to vary over different periods, or have their peaks at different times. That means we can aggregate these non-correlated workloads onto our single underlying technology platform, and our utilisation ends up much, much higher than you would traditionally see with on-premises data centre infrastructure, where you might be the only organisation using it and you're likely to have periods of under-utilisation: periods when your investment is really being wasted, sitting idle. We have far less of that at Amazon Web Services, and as a result our cost to provide the services per hour is much lower than traditional on-premises data centre infrastructure with its periods of idleness. You can do some of this with virtualisation, as you can see here, lowering your capex and getting more efficient utilisation, but you still don't eliminate capex in the way that you do when you start to take advantage of the AWS cloud, and you're still very unlikely to achieve the levels of scale that AWS can achieve. That's the real reason the costs of using AWS are usually so much lower than running IT services and workloads on your own infrastructure, whether on your own premises or in a colocation facility.

We're also able to take optimisation steps that might not be available to every organisation: we have our own hardware design teams working on and optimising the technology platform for the particular kinds of workloads our customers run, we have direct control over our own supply chain, and we have direct control over our hypervisor technology and the network protocols and stack that we use. That allows us to drive efficiencies that would be very difficult, even impossible in some cases, for individual organisations to achieve. So those are some of the reasons you'll find a lower cost operating model running your applications on AWS than on your own dedicated infrastructure.

We also have a proactive pricing philosophy here at Amazon Web Services. We try to create flywheels in many of the businesses at Amazon, not just within AWS, and the specific flywheel we've tried to spin here at AWS is an innovation and cost reduction flywheel, which we think benefits customers a great deal. It starts with the ecosystem, our global footprint, and new features and new services: we listen carefully to the requirements customers tell us they have, the areas where they think AWS might be able to help them, and we aggregate those requirements from the large number of customers we've been able to get using the platform. We use that to help us define and develop new services, and those new services drive more usage on the AWS platform, which in turn requires more infrastructure. Once we have that, we're able to take advantage of economies of scale in the infrastructure to innovate in the ways I've just described: supply chain control, and custom hardware and software designs introduced to replace components that were historically sourced more traditionally. That allows us to lower infrastructure costs and reduce prices, and we proactively pass those cost reductions on to customers when we're able to achieve them. That unlocks more customer demand, in some cases making deployment scenarios or workloads economically viable on AWS where they weren't previously, which in turn drives more usage, requiring more infrastructure and unlocking more economies of scale and opportunities for innovation, which we can pass back to customers in the form of lower infrastructure costs. So we spin this wheel, and by doing
this over the period since AWS launched its first services, back in 2006 believe it or not, we've managed to proactively reduce prices 47 times. The most recent reduction came earlier this year, when we cut bandwidth charges for our CloudFront content distribution network quite significantly. We do this proactively, and you don't have to take any action to benefit from AWS price reductions: if you're already using a service and we reduce its price, the reduction is automatically passed on in your billing from the point at which the price changes. There's no need to come back to us and renegotiate or ask for improved pricing; we proactively pass those savings on to you in the form of continuous price reductions. That's one of the reasons customers that use AWS over a period of time very often see their costs diminish as they continue to use the platform.

You can see an example with Flipboard. There's actually a really good video in which one of the leadership team members at Flipboard talks about this at an AWS Summit event that took place last year in New York; you can find it on YouTube and I'd really encourage you to take a look. It's a very interesting case study of how a combination of the cost optimisation tips and techniques we're going to talk about in a second, plus AWS's continuous drive to minimise prices for customers, has led this organisation to dramatically reduce what it costs them to deliver service to their customers through their digital publishing application. The app is very well known: it looks beautiful on the iPad or a high-definition Android tablet, delivering a new kind of media consumption experience, so I really would recommend taking a look if you haven't already. As you can see here, they've also delivered some really strong financial
benefits from using AWS, reducing their cost to serve a user by something like 90% over the period since they launched: a very successful use of the AWS platform to minimise cost-to-serve in a high-growth startup.

Let's move on now and take a look at the cost optimisation tips, tricks, and techniques you can use to minimise the amount you spend on AWS services. We'll talk through a few of these in sequence and try to give you a little insight into the techniques you can use to minimise your AWS bill each month.

The first relates to Amazon EC2, our Elastic Compute Cloud: the virtual machines, or instances, that you run inside the AWS cloud to host your applications and workloads. You'll be aware from the previous two webinars, if you joined us for those, that we have a very broad range of instance types available: the C3 and C4 compute-optimised instances, delivering the best price-performance per CPU cycle; the M3 series, with balanced CPU and memory, good for general-purpose workloads or for workloads you've not yet fully assessed in terms of the right instance type; the R3 memory-optimised instances; and the I-series instances, which are optimised for I/O. What we're saying here about choosing the right instance type is that it's important to optimise around the particular resource, whether that's memory, CPU, or I/O, that is going to constrain your application. By optimising around that specific resource you'll deliver the best price-performance for your application. It's obviously the case that you may not know on day one what that constraining resource is going to be. Maybe you're deploying a new application, working with a new commercial off-the-shelf package you haven't used previously, or developing a new application with one of the Amazon Web Services SDKs in your own development environment. In that case you can choose an instance that best meets your basic requirements from the M3 or R3 families, perhaps starting with memory, choosing the closest number of virtual cores to what you believe your workload is going to need, and assessing peak IOPS and storage requirements at the same time. Then you can tune. Take a look at the metrics available to you through Amazon CloudWatch, our metrics package, through Trusted Advisor, or through third-party application monitoring and management tools like New Relic if you want to make use of those. Use them to assess instance metrics before you put your application into full production and start to scale it up, and change your instance size up or down, or across instance families, on the basis of that real monitoring data.
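If it helps to make that concrete, here's a brief sketch of one way you might request those instance metrics from CloudWatch using boto3. The instance ID, look-back window, and period are illustrative assumptions, and the API call itself is left commented out so the parameter-building logic stands on its own:

```python
# Sketch: build a CloudWatch GetMetricStatistics request for an instance's
# CPU utilisation, to help judge whether the instance type is the right size.
# The instance ID below is a placeholder, not a real resource.
from datetime import datetime, timedelta

def cpu_stats_request(instance_id, hours=24, period=300):
    """Parameters for CloudWatch get_metric_statistics for one instance."""
    now = datetime.utcnow()
    return {
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "StartTime": now - timedelta(hours=hours),
        "EndTime": now,
        "Period": period,               # one data point per 5 minutes
        "Statistics": ["Average", "Maximum"],
    }

params = cpu_stats_request("i-0123456789abcdef0")
# import boto3
# data = boto3.client("cloudwatch").get_metric_statistics(**params)
print(params["MetricName"])
```

Low averages with low maxima suggest you can step down an instance size or family; sustained high maxima suggest the opposite.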
Once you've done that, you can deploy and scale your application, running multiple instances in multiple Availability Zones to ensure you have a strong availability and reliability strategy for your app, but you'll be doing it on the basis of having optimised around whatever that constrained resource is, and that will deliver the best price-performance and the lowest cost of operation for you. You can obviously do this in a pre-launch phase, before you put your application into production, and I really recommend doing so. So that's technique one: choosing the right instance type and making sure you're optimising around the right resource.

The second technique you can use to optimise your costs on AWS is Auto Scaling. This is a very well-known feature of EC2, the Elastic Compute Cloud, that enables you to dynamically add and remove computing capacity in response to events. Those events are typically changes in metrics, but they can also be scheduled, so you can scale by time, and of course you can manually initiate scaling via the API as well. You need three things to set up Auto Scaling. First, a launch configuration, which describes what Auto Scaling will create when it adds instances: it includes things like an Amazon Machine Image (AMI) ID, the instance type you're going to add, and other characteristics you can define about the instances you'll create in response to these events. Second, an Auto Scaling group, a managed grouping of EC2 instances: here you define the minimum and maximum size of the group, any load balancers you wish to register members of the group with, and, importantly, which Availability Zones within the region you're operating in you want these instances created in. And third, an Auto Scaling policy, which defines the parameters for performing a scaling action. Here you can see a scale-up policy that adds one instance in response to a particular event, with a cooldown of 300 seconds, or five minutes, to prevent scaling storms where we might rapidly add large amounts of capacity. Those are the three things you need to create Auto Scaling. You can find a lot more detail on the EC2 product detail page, and within the AWS console there's now a wizard-driven interface for creating Auto Scaling groups and setting up the various components you need, including automatically creating CloudWatch alarms for you.

The effect this can have on your costs is quite interesting. Here we've got a representative workload, an application running on AWS, that experiences a mild peak in demand during the afternoon and a more extreme peak during the late evening. We could address this with Auto Scaling groups comprising large instances: here we're using the m3.large instance to satisfy the CPU demands of this particular workload profile, consuming in total 41 instance-hours over the course of the day, which will cost just over $6.30 per day. But if you increase the granularity with which you scale, as in the second example here, you can dramatically reduce your cost: rather than those 41 hours of m3.large, we're using 70 instance-hours of t2.small, cutting the cost by almost 70%, to $1.96 per day, by scaling in smaller increments which, as you can see, more accurately match the demand profile of our workload. This is something you need to assess, and maybe experiment with, when you're looking at using Auto Scaling, to make sure you're scaling at the right level of granularity and don't have excessive capacity provisioned for long periods of time.
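The arithmetic behind that comparison can be sketched as follows; the hourly rates are assumptions chosen to reproduce the figures quoted in the session, not current AWS list prices:

```python
# Sketch: cost of scaling in coarse (m3.large) vs fine (t2.small) increments
# for the same daily demand profile. Rates are assumed, not AWS list prices.
M3_LARGE_RATE = 0.154   # USD per instance-hour (assumed)
T2_SMALL_RATE = 0.028   # USD per instance-hour (assumed)

coarse_cost = 41 * M3_LARGE_RATE   # 41 m3.large instance-hours per day
fine_cost = 70 * T2_SMALL_RATE     # 70 t2.small instance-hours per day
saving = 1 - fine_cost / coarse_cost

print(f"coarse: ${coarse_cost:.2f}/day, fine: ${fine_cost:.2f}/day, "
      f"saving: {saving:.0%}")
```

With these rates the fine-grained profile comes out close to 70% cheaper, matching the slide.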
There's absolutely nothing to stop you doing this with AWS, of course. You can experiment, trying various scaling models and different sizes, and the only impact will be a slight variation in cost while you run your experiment and conclude your optimisation. So I really encourage customers to do that: make use of instances that allow you to scale in a way which is responsive and also allows you to optimise your costs.

Number three is pretty obvious: turn off capacity that you're not using. This is again one of the great benefits of using the cloud, and the AWS cloud specifically has some tools that make it even easier to achieve. With cloud computing you're paying for resources, in the simplest terms, on demand as you need them, and therefore you're not paying for them when you don't need them, as long as you're disciplined enough to shut them down when they're no longer
required, you won't be charged. This lends itself particularly well to workloads like development and test, where you may have developers or testers that need systems during the working day, when they're in the office performing the activities they're responsible for, but a much reduced requirement for resources when they're not. So why not shut those resources down at, say, five, six, or seven pm and start them up in the morning when people come back into work? You can save fifty or sixty percent of your weekly cost by doing that, and you can also shut systems down at the weekend. There are some tools available that make this kind of use case easier to support on the AWS cloud. You can containerise your workloads, running them inside Docker containers and quickly spinning those up on the new EC2 Container Service. You can use DevOps tools like AWS OpsWorks and Elastic Beanstalk, which quickly and easily allow you to deploy your applications without having to worry too much about the underlying infrastructure. You can use resource templating with CloudFormation, a complete templating language for defining collections of AWS resources that you can programmatically create and destroy very rapidly: get your environment into a known state when you need it, discard it when you don't, and bring it back to that known state the next time you require it. And you've got usage tracking and monitoring with CloudWatch, plus logging with CloudWatch Logs. All of these are tools you can use to dynamically control your infrastructure in a very responsive way: get it into the state you need quickly, and destroy it quickly when it's no longer required, saving state as necessary. So take a look at those, and try to work out what opportunities you have to shut down instances when they're no longer required.
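As a rough sketch of the saving from a start/stop schedule, assuming an illustrative working-hours window:

```python
# Sketch: fraction of an always-on week saved by only running dev/test
# instances during working hours. The schedule below is an assumption.
HOURS_PER_WEEK = 24 * 7   # 168 hours if left running continuously

def weekly_saving(start_hour=8, stop_hour=19, workdays=5):
    """Fraction of always-on instance-hours saved by the schedule."""
    running = (stop_hour - start_hour) * workdays
    return 1 - running / HOURS_PER_WEEK

# 08:00 to 19:00 on weekdays, shut down evenings and weekends:
print(f"{weekly_saving():.0%} of weekly instance-hours saved")
```

Adjust the window to match your own teams' hours; even a weekday-evenings-only schedule lands in the fifty-to-sixty percent range mentioned above.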
Number four is an area with a lot of depth, an area we could probably spend a whole webinar on: Reserved Instances. This is what I mentioned at the top of the session when I was talking about the four fundamental characteristics of the AWS cloud that make it cost-effective: one of those was different purchasing models, the fact that you're not forced to use resources on demand and therefore pay a higher price for them. If you know that you're going to require resources for a long period of time, you can make use of the Reserved Instance purchasing model, and it's just that: a purchasing model. There's nothing that means you have to stop and start an instance to turn it into a "reserved" instance; it's simply a billing construct. At the end of the month AWS takes a look at how many Reserved Instances you had available to you during that month, and we remove the on-demand charges for instances which could have been run under those Reserved Instance credits, if you like. That automatically lowers your bill for you. You can see an example here of that same workload we were talking about earlier, with its afternoon and evening peaks, being supported through a combination of reserved and on-demand instances. On the right-hand side of the slide we've shown the percentage of daily utilisation of each tier: some instances are used 100% of the time, some 75% of the time, some 58% of the time, and so on. We take advantage of Reserved Instances by buying enough reserved hours, in this case three t2.micro Reserved Instances, to cover the three baseline levels of demand that you can see there, and we take the rest of the capacity we need from the on-demand pricing model. That allows us to optimise our costs for this particular workload profile, minimising the amount we're spending on those three baseline instances whilst giving us access to the capacity we need to deal with the demand spikes in the afternoon and the evening. You can model your own workload and take a look at the opportunities you might have to reduce costs with Reserved Instances. And this isn't only about EC2: there are other services where you can take advantage of capacity reservations to reduce your costs, services like RDS, the Relational Database Service; DynamoDB, our non-relational database service; Amazon Redshift, our fast, petabyte-scale data warehouse; and Amazon ElastiCache, a managed caching tier that you can introduce into your applications to improve performance and reduce the latency of access to data your applications rely upon. All of those services can be purchased under this capacity reservation model, which will help you reduce their cost, and of course many of them have to do with data persistence: typically the services you need around a lot of the time, if not all the time, over an extended period, maybe a year or three years. So take a look at EC2 and these other capacity reservations as a mechanism for cost optimisation.
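A simple way to model this is to compare on-demand spend with the all-in cost of a reservation for a steady, always-on workload. The upfront fee and hourly rates below are illustrative assumptions, not AWS list prices:

```python
# Sketch: on-demand vs reserved cost for one always-on instance over a year.
# All rates here are assumptions for illustration only.
HOURS_PER_YEAR = 24 * 365

def on_demand_cost(hourly_rate, hours=HOURS_PER_YEAR):
    return hourly_rate * hours

def reserved_cost(upfront, ri_hourly_rate, hours=HOURS_PER_YEAR):
    return upfront + ri_hourly_rate * hours

od = on_demand_cost(0.013)          # assumed on-demand rate, USD/hour
ri = reserved_cost(35.0, 0.004)     # assumed 1-year RI: upfront fee + hourly

saving = 1 - ri / od
print(f"on-demand: ${od:.2f}/yr, reserved: ${ri:.2f}/yr, saving: {saving:.0%}")
```

The break-even logic is the important part: the more hours per day a tier actually runs, the more of its capacity is worth reserving, which is why the baseline tiers in the example above are reserved and the peaks stay on demand.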
If you need help with that, you can certainly make contact with us, using the "Contact Us" form or by contacting your AWS account manager or the Solutions Architect that works on your account, and we can help you with it. There are also partners, such as Cloudability, Cloudyn, and CloudCheckr, who offer Reserved Instance modelling tools: they can take a look at your billing and help you model what your costs might look like if you converted some or all of your instance capacity to Reserved Instances, or made use of reserved capacity in some of the persistent tiers within your application. So do take a look at those AWS technology partners to help you work out the detail of the benefits that might be available to you from reserved capacity.

Fifthly, at the other end of the pendulum swing from Reserved Instances, is something called the EC2 spot market. This enables you to bid on EC2 capacity which is not currently in use as Reserved or On-Demand Instances. The price for capacity in the spot market varies over time on the basis of current demand; it's updated every minute and made available via an API. You can bid on this capacity and take advantage of prices that are actually up to 92% lower than the on-demand prices for the same capacity. The thing you need to bear in mind is that you're not guaranteed that capacity will be available in the spot market. So it's good for workloads, for jobs, where you need access to large amounts of computing capacity but you're not overly concerned about the time at which your job might start, or about your job being interrupted if the price fluctuates significantly, and you have a mechanism for dealing with that potential interruption if we need to reclaim an instance in the spot market that you're currently making use of.
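The economics of that trade can be sketched like this, with assumed prices; real spot prices move minute by minute and are published via the EC2 API:

```python
# Sketch: cost of an interruptible 1,000 instance-hour batch job on demand
# versus on the spot market. Both prices here are assumptions.
ON_DEMAND_RATE = 0.10    # USD per instance-hour (assumed)
SPOT_RATE = 0.012        # USD per instance-hour (assumed current spot price)

def job_cost(instances, hours, rate):
    return instances * hours * rate

od = job_cost(100, 10, ON_DEMAND_RATE)
spot = job_cost(100, 10, SPOT_RATE)

print(f"on-demand: ${od:.2f}, spot: ${spot:.2f}, saving: {1 - spot / od:.0%}")
```

The catch, as described above, is that the spot capacity can be reclaimed, so the job must checkpoint or otherwise tolerate interruption.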
You'll be given a two-minute warning notification, but we will reclaim capacity if another customer bids a higher price than your maximum bid, so you have to have a workload which can deal with that kind of potential interruption. There are many workloads that fit that profile, and one of the most common is high-performance computing. We have a great example here from an AWS technology partner called Cycle Computing, who provide supercomputing resources on demand to their customers. Here you can see a workload run on Cycle Computing a little while ago, back in November 2013. It was a really significant workload: more than 150,000 Intel CPU cores were used, and all of that capacity was acquired from the EC2 spot market we've just talked about. The total cost of running this workload, even with that very significant amount of computing capacity, was only $33,000. So you can see the cost benefits that might be available if you have a workload that can make use of this kind of capacity. I'd advise you to take a look at the spot market product detail in the EC2 section of the website, and at some of the case studies there of customers making use of it, to get a feel for whether or not your particular workload might be a good fit.

The next area where there's scope to optimise costs is storage, and you can actually optimise around something that's already a pretty low-cost service: Amazon S3, the Simple Storage Service. One tool you can use to minimise costs here is Amazon S3 Reduced Redundancy Storage, or RRS. This allows you to trade off the very high durability of the standard Amazon S3 storage class, the eleven nines durability that S3 is very well known for, and instead take a four nines durability level: 99.99% durability. This will deliver up to 20 percent savings. It's great for things that are easy to reproduce, such as transcoded media files that you store for delivery to particular device types: they could be reproduced from a high-definition master in the event that they were lost, and if you're storing significant numbers of them, taking the option to use Amazon S3 RRS can significantly reduce the charges you see for that storage. In a similar vein we have Amazon Glacier. This has the same
durability level as S3, eleven nines, but the trade-off here is latency of access: there's a three-to-five-hour restore time for accessing objects you've placed in Glacier. In exchange, it can yield up to a sixty-six percent saving, so really a two-thirds cost reduction in comparison to storing the same objects in S3. That makes it really good for long-term archiving, backups, and very cold or old data that you might need to keep around for compliance reasons. You can integrate Glacier with S3 using lifecycle management policies. For example, if you're storing log data on S3 that you need to keep for a long period of time but expect to be very infrequently accessed, you could automatically tier it down into Glacier after a certain period, say 180 days, with one policy, and have another policy that deletes it after a seven or ten year period, all without you having to take any additional action. So it's very good for long-term data storage and management use cases where you want to optimise costs: take a look at storage classes to further reduce the cost of S3, which is already very good value as a low-cost service.

Seventhly: offloading your architecture. This is easiest to describe with applications that have public assets, so we're talking about images, static HTML, JavaScript, CSS, maybe binaries that you need to distribute. For example, maybe you're in the mobile phone handset business and you want to push large volumes of firmware updates out to tens or hundreds of thousands of mobile devices. Does it make a lot of sense to do that from EC2 instances? You can do it from EC2 instances, but it's not a very cost-effective way to do it. The more cost-effective way is to offload that traffic onto Amazon CloudFront and Amazon S3, placing your origin data into S3, integrating it with CloudFront, and allowing CloudFront to distribute that traffic around the globe, to whichever locations you need to reach.
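The log-tiering example above can be expressed as an S3 lifecycle configuration. One way to sketch it with boto3 is below; the bucket name and prefix are placeholders, and the API call is commented out:

```python
# Sketch: lifecycle rules that tier log objects to Glacier after 180 days
# and delete them after roughly seven years. Prefix and bucket are placeholders.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-then-expire-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 7 * 365},   # delete after ~7 years
        }
    ]
}
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-log-bucket", LifecycleConfiguration=lifecycle)
print(lifecycle["Rules"][0]["Transitions"][0]["StorageClass"])
```

Once the rules are in place, the tiering and deletion happen automatically, which is exactly the hands-off behaviour described above.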
traffic around the globe, to whichever locations around the world you need to distribute it to. You can use regional distributions, and also GeoIP integration, to restrict who can get access to distributions within CloudFront. And you'll see a couple of effects from this. The first thing you'll see is reduced cost: it's lower cost to deliver content from CloudFront than it is to deliver it from origins on the Amazon Web Services network, and data transfer between those origins and CloudFront is not charged for. So you can push information out from S3 to CloudFront caches around the world, have that delivered locally, and there's absolutely no downside to that: your costs will be lower. In addition, the second benefit is that user performance and user experience will also be improved, because content is going to be delivered locally. We've optimized the backhaul part of the network to make sure that performance is optimized between CloudFront, the edge locations, and the origins that you're running back in those AWS regions around the world. So there's really no downside to making use of CloudFront; it's something that any customer that's delivering data at any sort of scale should be making use of. The only complexity really is the setup overhead, and even that is a very simple process thanks to integration between CloudFront and other AWS services. For example, when you go to set up a distribution, you can see your available S3 buckets as a drop-down, so it's really very, very simple to do. Making use of CloudFront to offload your architecture, and reduce the number of EC2 instances that you may need, whilst at the same time improving user experience, is really a win-win. In a similar vein, another area where you can potentially not only reduce costs with AWS, but also really speed up the rate at which you can deliver innovation and change within your organization, is to make use of AWS managed services. Now, this is probably best illustrated by
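Before moving on, the offloading pattern above can be sketched in outline: an S3 bucket as the origin of a CloudFront distribution, plus a geo restriction on who can fetch it. This is an illustrative fragment only; the bucket name and country codes are hypothetical, and the real CloudFront API requires more fields than are shown here.

```python
# Sketch of the core of a CloudFront distribution: an S3 bucket as the
# origin, with a geo restriction whitelist limiting where content is served.
# Bucket name and country codes are hypothetical; the full distribution
# config in the CloudFront API has additional required fields.
origin_bucket = "my-static-assets"  # hypothetical bucket holding the origin data

distribution_sketch = {
    "Origins": [
        {
            "Id": "s3-origin",
            "DomainName": f"{origin_bucket}.s3.amazonaws.com",
        }
    ],
    # All requests are routed to the S3 origin; CloudFront edge locations
    # cache the responses close to users.
    "DefaultCacheBehavior": {"TargetOriginId": "s3-origin"},
    "Restrictions": {
        "GeoRestriction": {
            "RestrictionType": "whitelist",
            "Quantity": 2,
            "Items": ["GB", "IE"],  # serve only the UK and Ireland
        }
    },
}
```

The point of the sketch is simply that the offload is declarative: you describe the origin and the restrictions once, and CloudFront handles the global delivery, rather than you scaling out EC2 instances to serve the same assets.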
taking a look at this hierarchy here. This is a model that we use to organize the different services that AWS provides today, and you'll see we have what we call core services at the bottom there: EC2 with our compute, Auto Scaling and load balancing; S3, EBS and Glacier in storage; CDN with CloudFront; databases with RDS, DynamoDB and ElastiCache; and networking, with services like VPC, the Virtual Private Cloud, the Route 53 DNS service, Direct Connect and others. You can certainly consume and make use of AWS simply by using those, what you might call primitive building blocks: things like virtual machines running in the cloud, our EC2 instances, or databases running on RDS. But we find that many customers that are really getting maximum benefit out of the cloud are using some of the abstractions, some of the higher-level services that are available, that sit on top of those primitives: security and administration services, and more importantly perhaps

the platform services that sit atop that. Things like Amazon EMR, our managed service for running Hadoop workloads in the AWS cloud; some of the app services we have, like SQS, our simple queuing service, or AppStream, our application streaming service; and the deployment and management services. These are particularly important, enabling you to increase the rate at which you might deliver software into your environment through continuous integration and continuous deployment techniques: one-click web app deployment with Elastic Beanstalk, or using CloudFormation for resource templates, or deploying code into production on AWS with our new AWS CodeDeploy service. They're all things that can reduce the amount that you need to think or worry about infrastructure, concerning yourself a whole lot less with the undifferentiated heavy lifting of low-level service components, and working with these abstraction layers that can just help you do things that much more quickly. Maybe most importantly, and we often say this to customers that are making use of AWS or thinking about making use of it, a lot of the value that's inherent within cloud computing generally, and AWS specifically, is about stopping doing things yourself: making the decision that you're, for example, not going to run your own MySQL or Microsoft SQL Server databases, you're going to make use of the Amazon RDS relational database service instead; not going to build out your own load balancing tier with something like HAProxy or some other load balancing software technology, you're going to make use of a simple-to-provision Elastic Load Balancer instead in your new environment. Those are a few examples of services, just on this one slide, that can help you get things done more quickly and also reduce the amount of overhead that you have in operating services over the medium to long term. You don't have to worry about backups, scaling, or configuring failover inside your database tier if you make use of RDS; they're
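To make the RDS point concrete, here is a sketch of the handful of parameters you might hand to RDS when replacing a self-managed MySQL server. The identifier, sizes and credentials are hypothetical placeholders; the point is that backups and failover become two parameters rather than systems you build and operate yourself.

```python
# Sketch of parameters for a managed MySQL database on RDS. Automated
# backups and Multi-AZ failover are switched on with single parameters,
# instead of being built and operated by hand. All values are hypothetical.
db_params = {
    "DBInstanceIdentifier": "webapp-db",   # hypothetical instance name
    "Engine": "mysql",
    "DBInstanceClass": "db.m3.medium",     # hypothetical instance size
    "AllocatedStorage": 100,               # storage in GB
    "MultiAZ": True,                       # managed failover to a standby in another AZ
    "BackupRetentionPeriod": 7,            # automated daily backups, kept for 7 days
    "MasterUsername": "admin",
    "MasterUserPassword": "change-me",     # placeholder credential
}
```

With boto3 these would be passed as keyword arguments to `rds_client.create_db_instance(**db_params)`; the backup, patching and failover machinery is then the service's problem rather than yours.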
all features of the service that we provide. So that's a whole lot less stuff you need to worry about, and therefore a whole lot of resource that can be spent on things that are more productive and add more value to your organization. So avoid this undifferentiated heavy lifting: take a look at scenarios and situations where you can make use of AWS services as an alternative to building things yourself. Many of these services are of course open, and could be replaced at a later stage if you find they're not a good fit for you, but in many cases they will be a more than adequate solution to needs that you might have, and you can make use of them instead of having to build things yourself. The penultimate area of focus is to use a feature we have called consolidated billing. This allows you to aggregate multiple AWS accounts, or the bills for multiple AWS accounts, into a single payer account: a single top-level account that will give you a single bill for all charges incurred across all those linked accounts. It enables you to do a couple of things. You can share reserved instance discounts: if you're purchasing RIs in volume you should talk to us about that, as we can offer advantaged pricing if you have a discussion with us about your RI needs, and you can take those at the top-level account and distribute them across the sub-accounts that you might have. You can also combine tiering benefits: if you have, for example, a volume tier on S3 in aggregate across all of your accounts, why not take advantage of that reduced cost per gigabyte for storage across even the smallest account that you might be running within your overall landscape? Making use of consolidated billing allows you to do that. It's actually very simple to set up as well: if you go into the billing console that you can see on the left here, you'll see in the left hand menu there is a consolidated billing option; just go in there and it will explain to you precisely how to set up and manage
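The tiering benefit just mentioned is easiest to see with a worked example. The per-gigabyte prices below are purely illustrative, not current AWS pricing; the shape of the arithmetic is what matters.

```python
# Worked example of the tiering benefit from consolidated billing.
# Hypothetical prices: $0.030/GB-month for the first 50 TB stored, and
# $0.028/GB-month thereafter (illustrative numbers, not real AWS pricing).
TIER_LIMIT_GB = 50 * 1024            # first pricing tier covers 50 TB
PRICE_FIRST, PRICE_NEXT = 0.030, 0.028

def s3_monthly_cost(gb):
    """Monthly storage cost for `gb` gigabytes under the two-tier pricing."""
    first = min(gb, TIER_LIMIT_GB)
    rest = max(gb - TIER_LIMIT_GB, 0)
    return first * PRICE_FIRST + rest * PRICE_NEXT

# Two linked accounts, each storing 30 TB.
accounts_gb = [30 * 1024, 30 * 1024]

separate = sum(s3_monthly_cost(gb) for gb in accounts_gb)   # billed separately
combined = s3_monthly_cost(sum(accounts_gb))                # consolidated billing

# Billed separately, both accounts sit entirely in the first tier. Combined,
# 10 TB of the 60 TB total spills into the cheaper tier, so combined < separate.
print(separate - combined)
```

Even with these made-up numbers the effect is clear: aggregating usage pushes more gigabytes into the cheaper tier, and every linked account shares in that saving.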
consolidated billing across the sub-accounts. That's definitely something that any customer that's running a complex environment with more than one AWS account should take a look at. And then lastly, other AWS tools that are available to help you reduce cost. We're not going to spend a lot of time on this during the session today, but suffice to say, Trusted Advisor is a feature of our premium support service, available to customers that take either Business or Enterprise support. As we said in the session last week, Business support starts from $100 per month or 10% of your AWS bill, with the percentage charge falling away as your spending increases, and you can find full details of that at /premiumsupport. Trusted Advisor will automatically make cost reduction recommendations to you. So if you are, for example, running a workload on an instance that is too large, so you have a lot of headroom, constantly lots of

unused resources, it will recommend to you that you resize that instance to make it smaller and save cost by doing so. If you've had instances that have been around for a long time, running consistently under demand, it will recommend that you purchase RIs, reserved instances, to lower the costs of running those workloads over the next period of time. There are many optimizations that will be recommended to you; it will specify, or make you aware of, what the potential monthly savings might be, and it will explain to you how to implement the recommendations that have been made. So that's a super tool for customers that want to optimize their costs. Like all AWS services, you don't have to commit to it for a long period of time if you don't wish to: you can take Business support for a period of one or two months, take a look at those cost optimizations, maybe implement the ones that are appropriate for you, and then stop your Business support, take it again in another six or twelve month period, and review those cost-saving recommendations again. So you've got an opportunity there to use it periodically, if you feel that might be valuable for you. In addition, a reasonably recently introduced feature, AWS EC2 usage reports, will enable you to see your EC2 usage over time by instance type, and this can, as I've said earlier, help you determine whether or not using reserved instances might be a good technique for you to take advantage of to optimize your costs. You can find more details about that, again, in the billing and cost management section of the console: go to the reports section and there you will find your EC2 usage reports, and you can view and cut that data in a variety of different ways, to help you identify the insights that you need to determine if you've got your account running in the most optimal way as far as reserved instance usage is concerned. That's of course in addition to third-party tools that you
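The reserved-instance decision that those usage reports inform is, at heart, a break-even calculation. Here is a back-of-envelope sketch with hypothetical prices (a $0.10/hour on-demand rate versus a $500 upfront RI fee plus $0.04/hour; illustrative numbers only, not real AWS pricing).

```python
# Back-of-envelope check of when a reserved instance pays off.
# All prices are hypothetical, for illustration only.
ON_DEMAND_HOURLY = 0.10            # on-demand price per hour
RI_UPFRONT, RI_HOURLY = 500.0, 0.04  # RI upfront fee and discounted hourly rate

def on_demand_cost(hours):
    return hours * ON_DEMAND_HOURLY

def reserved_cost(hours):
    return RI_UPFRONT + hours * RI_HOURLY

# The RI breaks even once the hourly saving has repaid the upfront fee.
break_even_hours = RI_UPFRONT / (ON_DEMAND_HOURLY - RI_HOURLY)
print(break_even_hours)  # ~8333 hours of steady usage

# An instance running consistently all year (8760 hours) is past break-even,
# so the reservation is cheaper; a lightly used instance is not.
year_hours = 365 * 24
print(reserved_cost(year_hours) < on_demand_cost(year_hours))
```

This is exactly the pattern Trusted Advisor and the EC2 usage reports help you spot: instances running consistently, past the break-even point, are the candidates for reservations.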
might wish to make use of as well. OK, so that's all we have for you today in terms of content: ten different ways in which you can optimize your costs on AWS. If you want to review those again, of course, download the materials from the files tab that you can see in the webinar interface right now. I'm going to switch the webinar into Q&A mode at this point, and ask you to give us a rating for the session today, and we're also going to take questions that might have been submitted during the session; if you have any questions, please submit those right now using the Q&A panel. So while we're waiting for question submissions to come in, I just wanted to point you to these social media links that I mentioned at the beginning of the session today. If you want to stay up to date with what I'm doing as AWS evangelist here in the UK and Ireland, you can find me on Twitter at @IanMmmm, that's Ian M with four M's. If you want to keep up to date with AWS here in the UK and Ireland, for local events, future upcoming webinars, maybe you want to come along to the AWS Summit events that we run each year, you can find details about all those events, and also stay up to date with AWS news in the European English language time zone, at @AWS_UKI. And if you're interested in AWS globally, global news and AWS events is what you'll find on the @AWScloud Twitter handle, including lots of information related to new service launches, announcements, and how-to guides, across all of our customers and all of our services globally. It's a great account to follow, with over 100,000 followers now, so it's pretty popular as well. So find us on Twitter at those addresses if you're interested in keeping up to date with AWS around the world. OK, I'd just like to thank you for joining the session today. As always, we do really appreciate your giving us your time and coming along to the sessions that we run. Don't forget to give us a rating, because that's super helpful for us, and now we're going to take a look at some Q&A.
Thanks again for joining us today.