The conversation around cloud adoption and cost is a long-running one. I have read the to and fro for years: some companies believe it is cheaper on-premises, while others believe that going all-in on public cloud is the answer from a cost perspective. What still fascinates me is how often the basic facts are ignored, or, as I saw it so aptly put recently, how the "frozen middle" adopts an approach that maintains the status quo, dooming organisations to miss out on true success through innovation.

I wanted to look at the topic again in light of recent press articles about some of Australia's financial institutions looking to move back out of public cloud due to cost, and indeed looking to move onto platforms based on OpenStack. Let's break this conversation down into a few discussion areas:

· How to understand what it costs to run your existing footprint in public cloud

· How to evaluate your existing cloud footprint to look for cost savings

· Could changing the architecture approach provide a better outcome in the long term?

A large-scale cloud provider, with its cost base spread across millions of servers, will always provide the most cost-optimal approach (this is called economies of scale); your internal deployment, no matter how large, will never achieve the economies of scale a large public cloud provider can. There are strong use cases for running applications on-premises, such as data sovereignty, regulatory or security reasons, but that is a topic for another time.

How to understand what it costs to run your existing footprint in public cloud

When evaluating your internal systems against public cloud providers, a strong understanding of your cost base requires including your entire datacentre footprint and supporting services to arrive at a true cost of IT. These should include:

· Datacentre facility costs

· Racking and cabling

· Electricity costs

· UPS and generator costs

· Server costs

· Storage costs

· Core and top-of-rack (ToR) network costs

· Support staff

· Support contracts

· Software costs

My approach is always to model a timeline that includes at least one refresh cycle of infrastructure and datacentre costs. This is one of the key differences between hosting it yourself and the public cloud: all infrastructure costs, including life-cycle costs, are already factored into public cloud pricing.

Once you have defined the cost of your on-premises platform over the period, divide that cost across the servers in the datacentre and break it down to a per-server, per-month figure. You are then ready to compare this cost against running the same servers on one of the cloud platforms.
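The calculation above can be sketched as a simple model. All the figures below are placeholder assumptions for illustration; substitute your own datacentre numbers and refresh cycle.

```python
# Illustrative per-server, per-month cost model over one refresh cycle.
# Every number here is a made-up placeholder, not real pricing.

# Recurring annual cost components (currency units per year)
ANNUAL_COSTS = {
    "facility": 250_000,          # datacentre facility costs
    "racking_cabling": 20_000,
    "electricity": 120_000,
    "ups_generator": 30_000,
    "support_staff": 400_000,
    "support_contracts": 80_000,
    "software": 150_000,
}

# Capital costs amortised over the refresh cycle
REFRESH_YEARS = 4
CAPITAL_COSTS = {
    "servers": 1_200_000,
    "storage": 400_000,
    "core_tor_network": 300_000,
}

SERVER_COUNT = 300


def per_server_per_month() -> float:
    """Total yearly cost (recurring + amortised capital) per server, per month."""
    annual = sum(ANNUAL_COSTS.values())
    amortised = sum(CAPITAL_COSTS.values()) / REFRESH_YEARS
    return (annual + amortised) / SERVER_COUNT / 12


print(f"Per server per month: ${per_server_per_month():,.2f}")
```

The output of this model is the single number you carry into the public cloud comparison.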

If we look at AWS, they provide a comprehensive cost calculator that allows you to input the specifications of your environment and run several comparisons:

· Running all your servers on demand – good for test/dev, where you can switch the servers off for, say, 8 hours per day. To make this work, it is critical to implement an availability policy that restricts machines to running only during that period; there are numerous tools available to assist with this.

· Utilising Reserved Instances – for systems you know will be on all the time. A reserved instance can save roughly 35% of the cost of an on-demand server.
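The two options above can be compared with a short sketch. The hourly rate below is a hypothetical figure, not quoted AWS pricing, and the 35% discount is the approximate reserved-instance saving mentioned above; use the AWS calculator for real numbers.

```python
# Sketch comparing monthly cost of three purchasing approaches for one server.
# The hourly rate and discount are illustrative assumptions, not real pricing.

HOURS_PER_MONTH = 730            # average hours in a month
ON_DEMAND_HOURLY = 0.20          # hypothetical on-demand rate per hour
RESERVED_DISCOUNT = 0.35         # ~35% saving for a reserved instance


def on_demand_always_on() -> float:
    return ON_DEMAND_HOURLY * HOURS_PER_MONTH


def on_demand_scheduled(hours_off_per_day: int = 8) -> float:
    # Test/dev: an availability policy switches instances off part of each day.
    on_fraction = (24 - hours_off_per_day) / 24
    return on_demand_always_on() * on_fraction


def reserved() -> float:
    return on_demand_always_on() * (1 - RESERVED_DISCOUNT)


print(f"On demand, 24x7:     ${on_demand_always_on():.2f}/month")
print(f"On demand, 16h/day:  ${on_demand_scheduled():.2f}/month")
print(f"Reserved (~35% off): ${reserved():.2f}/month")
```

Note that with these assumptions a reserved instance and a scheduled test/dev server land in similar territory, which is why the right choice depends on whether the workload genuinely needs to run around the clock.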

My view is that once you truly understand the cost of hosting infrastructure in your own datacentre, public cloud becomes a clearly compelling option.

How to evaluate your existing cloud footprint to look for cost savings

Once you have moved to cloud, ongoing cost management of the platform is key to ongoing success. Costs can spiral out of control, especially when virtual instances are not right-sized or there is a lack of automation.

There are many tools available to assist with ongoing cost management, but it can be challenging to get support to implement the changes they recommend. Some organisations could save a large percentage of their existing cloud costs, yet their teams do not have the bandwidth to implement the changes. To senior management, costs then appear incredibly high, when in fact the team just requires support to make the required changes.
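Right-sizing is the simplest of these changes to reason about. The sketch below flags instances whose peak CPU never approaches their provisioned capacity; the fleet data and threshold are made up for illustration, and in practice the utilisation figures would come from your monitoring platform.

```python
# Minimal right-sizing sketch: flag instances whose peak CPU utilisation
# stays well below provisioned capacity. All data here is hypothetical.

FLEET = [
    # (instance_id, vCPUs provisioned, peak CPU % over the last 30 days)
    ("app-01", 16, 12.0),
    ("app-02", 8, 71.0),
    ("db-01", 32, 18.5),
]

# Below this peak percentage, the instance is a candidate for a smaller size.
PEAK_THRESHOLD = 40.0


def downsize_candidates(fleet):
    return [iid for iid, _vcpus, peak in fleet if peak < PEAK_THRESHOLD]


print("Candidates for right-sizing:", downsize_candidates(FLEET))
```

Even a crude report like this gives senior management something concrete: a list of instances where savings are waiting on team bandwidth, not on architecture.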

As a senior manager or CIO, you need to step back in this scenario and do some more digging. Is moving applications back on-premises really the right answer to the problem, or have you potentially just created a bigger issue and stifled future innovation options for your teams?

Could changing the architecture approach provide a better outcome in the long term?

My examples so far have focused on AWS, who provide a whitepaper on how to optimise cost on their platform, which can be referenced here:

What has been discussed so far has focused on running virtual machines (IaaS), whether in public or private cloud environments. It is past time to move away from monolithic applications and embrace a different approach. This is where the conversation should change: true transformation will only be successful when we break applications down from a monolithic approach into microservices. This relates to cost because, in many scenarios, a different approach using serverless or managed cloud services vastly reduces the cost of the application.

What is happening in Enterprises in Australia?

My experience is that the conversation still focuses on IaaS, with a constant debate comparing costs, and companies believing they can somehow negotiate a better deal than the hyperscalers and thereby justify running things on-premises. I have built on-premises cloud platforms, and they are complex and expensive to maintain. Maintenance is akin to painting the Sydney Harbour Bridge: it literally never finishes. In the enterprise especially, planning and implementing the next set of updates and firmware patches lasts all year. My question to you: is it valuable work having an engineer run firmware updates on a server, leaving them no time to automate the environment? I don't think anyone will argue that it is fun or interesting!

This work ends up being outsourced, because the engineers want to work on more interesting problems anyway. That seems even more counterproductive to me: you have kept platforms on-premises because of perceived cost benefits, but then wrap a managed services contract around them, paying a large amount for the most basic activities.

Not only does this push costs up further, it stifles creativity. Outsourced vendors can be hard to work with, and organisations end up going a different path when they want to create differentiation. I see the same mistakes time and time again, resulting in organisations paying far more for the outcome than if they had quickly decided on an approach and stuck to it.

This is the lost Australian opportunity cost of IT and project delivery. Costs are so high because projects run so long that scopes have to change; scope changes push delivery dates back, pushing up costs even more. Delivery will be the focus of a future post.


Fundamentally, we use a cloud platform to increase our ability to innovate, enabling true change in how an organisation deals with its customers. Hyperscale cloud providers buy more servers and have templated and automated every step of their processes to remove waste. On-premises IT systems will never compete on cost, and never will be able to; to proclaim otherwise shows a lack of understanding of public cloud, or a bad implementation approach.

Lastly, our ability to innovate in the datacentre will never match the pace at which hyperscale cloud providers add features to their platforms. These providers are true innovation enablers; we should embrace them and build on top of their success.

  • Andrew Brydon

    Principal Consultant