
Matt Bibby recently joined Contino as a Principal Consultant. Matt has years of experience working with DevOps in the cloud, so we sat down with him to get his thoughts on how organizations should assess the risk and cost of cloud migration. 

Why don’t you tell us a bit about your background?

I started out in my teens as a bit of a script kiddie, before going on to study computer science at university. I entered the financial services arena with Barclays as an infrastructure support engineer back in 2008, where I got a taste for the complexity and bureaucracy of enterprise-level technology. Within two years hands-on engineering became pretty boring and I rapidly found my interest drawn instead towards architecture. I contracted as a digital architect during the digital transformation boom and rode that wave, spending five years working for Barclays, Lloyds Wealth, SSE and Shop Direct.

At the end of 2015, I took stock and reevaluated the industry. The up-and-coming technology was cloud, so I joined AWS in January 2016 as a Solutions Architect in Financial Services, providing cloud adoption guidance to global financial services organizations. This was my first foray into vendor territory and it came as banks were really starting to ask questions about the cloud. I spent two years working with household-name banks on the early stages of cloud adoption programmes, addressing issues across people, process and technology as well as other important aspects such as re-education and legal and commercial concerns. It was a very broad role.

And why did you decide to move to Contino?

In the role at AWS, once you’ve guided a customer through the initial contractuals, evangelised the platform and helped push them towards a few pilot projects, the actual delivery of business value is then handed over to AWS’s partners, not AWS itself. So to be deeply involved in making those key business changes you need to be part of a forward-thinking partner. And my ambition is to be involved in share-price-altering change, so I want to work for a company where I can reach that goal!  

What’s more, AWS have really mapped out this model of using small, high-quality teams to deliver business outcomes, in contrast to the typical “body shopping” commercial model that we see from the “Big Four”, and I wanted to join a partner that resonates with that model. Contino has enough backing to get involved in interesting opportunities, but is small enough to retain that crucial customer-centric, community feel.

Why do you think the Contino-AWS partnership works?

Because it resonates! Both companies share key qualities: they’re customer-centric, they focus on business outcomes, and they build reusable solutions delivered via small, highly effective teams.

Contino’s model aligns completely with how AWS have built their business. They also both have a similar trajectory in terms of growth and a high technical bar for hiring.  

Cool! Let’s get onto some DevOps topics … do you think there’s such a thing as an established DevOps operating model?

There are established principles and best practices around people and process, I would say. But when it comes to technology there is no one-size-fits-all solution. Best practices around people and process vary based on the sector, regulatory context and maturity of the organization. It also depends on where a customer is on their journey.

The reason why taking enterprises into the cloud to do DevOps is so complex is that you have to empathise with each individual situation. The principles are transferable and the destination is the same, but how you get there at a high level is bespoke and unique to each client. But that’s the magic of it! And that’s why it’s so hard, as well. You have to empathize with where each organization is right now, determine where they really need to be, and then pick the best-possible path from the many available. But every company has constraints on how they do business, and these can be used to help identify which path is most appropriate.

What are the main challenges when it comes to migrating highly-regulated enterprises to the cloud?

The main challenge has to be education around risk appetite. Regulated organizations want to have their cake and eat it, too. If you talk to a global bank, they want the benefits of faster time-to-market, revenue growth and the cost savings of the cloud and DevOps...but they’re unwilling to shift their perception of risk! We say, “well, you can get what you want, but you need to take a different view on risk and do things you wouldn’t have done yesterday”. At a deeper level, this also means that how they assess risk is flawed (and, ironically, a risk!) and needs to change, which starts with education.

What’s wrong with how they assess risk?

In the financial industry, putting material workloads in the cloud and doing DevOps is currently perceived as being riskier than not doing it. But the risk of staying put will soon be higher than the risk of moving to the cloud. And they can then either make a proactive, practical choice to make a change, or external disruption will force their hand. Most banks have built capability for the cloud, but they haven’t migrated material workloads. The technology leaders in the industry are waiting for the risk appetite across the industry to shift.

Truth be told, I think banks still have a bit more time left on the clock. Challenger banks have only just started to scratch the surface. But retail banking has become a commodity and change is coming.

So are there still good reasons to do DevOps in the cloud?

There are two forces that are pushing companies towards DevOps in the cloud. The first is the fact that their technology will continue to age and become irrelevant - this is a ticking time bomb. The second is disruptive market forces.

These will intersect at some point and a moment will arrive when everyone is essentially forced to go to the cloud. Either that or a visionary will join a bank and move them all-in with the cloud, and everyone will fast-follow instantly. There will be a race to be second. Everyone is kind of waiting for the starting gun, but we don’t know whether it will be an internal or external factor. Whether it will be push or pull.

The problem with any given cloud provider is that it’s always cheaper and more efficient to do things in a centralized way (e.g. with a single cloud provider) but you also start to accrue systemic risk. When you federate you reduce risk. So the cloud can help you move faster, which reduces both cost and time-to-market, but there’s a move in risk posture that accompanies that.

There’s no right or wrong answer, it’s a sliding scale that always comes back to risk posture versus cost and time-to-market. But the dial is moving...the time-to-market and cost savings you want are worth the risk factors of centralizing things into one company.

It also makes sense to engage with multiple cloud providers. Set up the legals and contractuals with AWS, Microsoft and GCP, then distribute risk across them vertically (i.e. on a product-by-product basis - don’t use a multi-cloud architecture, and never spread a single workload across clouds!).

And how can companies best deal with regulation in this context?

Well, the cloud model is changing how regulation works. Ten years ago, regulators were there to make sure the banks had the right people doing the right things in their own data centres. Comfort was found in decades of consensus across the industry and the ability to see, own and touch data centre and technology assets. Now regulators are starting to work with the cloud providers.

In the UK, banking regulation is arguably ambiguous and unfairly contextual, varying based on your size and systemic risk. That’s why you have challenger banks moving quickly, with the big banks left to assess their own risk but with an undertone of “of course you can do it, but don’t mess up”.

My key messages are: ensure your commercial contracts with the cloud providers are fit for purpose; invest in your people to ensure they are capable of operating cloud workloads; understand the new model of “shared responsibility” with your cloud provider; and architect your workloads appropriately to take advantage of the highly distributed nature of cloud computing, alongside the higher-value platforms and abstract services.

What are the most common pitfalls that you see when migrating to the cloud?

Because cloud providers do not always have a data centre physically present in every country that they operate in, data sovereignty requirements cause customers to refrain from using cost-saving, high-value cloud services. Global organizations believe the best approach is to aim for technological consistency across the board so they reduce themselves to the lowest common denominator (i.e. the cloud services that are globally available, excluding those higher-value services that are only available in certain countries). In so doing they hamstring themselves and reduce their own competitiveness.

In the old world, consistency was the best way to go: the same hardware, the same software, the same architecture. But applying the same method to the cloud world, where you have a varied range of services (and refusing to use them because they’re unavailable in one of your 10 regions), is illogical. It’s crucial to take a customer-centric view in each market, not a blanket horizontal view across your whole business.

Another massive pitfall is how global organizations assess sunk costs, total cost of ownership (TCO) and the cost of technology, especially in financial organizations (although I would argue that cost reduction is less valuable and a more contentious value proposition than time-to-market for cloud providers).

If you go to a bank wanting to migrate most of their applications to the cloud, you would do a TCO exercise to demonstrate the benefit. You’ll get your number and show it to them, but the bank will come back and say “we can do it cheaper internally”, because they typically work on a cost allocation model with low-cost capital investment, sinking all costs (power, cooling, networking etc.) into a cost per unit - so you’re not comparing apples with apples. In some cases where there is a high level of utilization (e.g. high-performance computing (HPC) workloads), the TCO difference versus cloud can be negligible, but on the whole infrastructure utilization is, from experience, below 20% - so how does that work?
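The utilization point can be sketched as a back-of-the-envelope calculation. The figures below are entirely hypothetical, not real pricing; the shape of the argument is what matters.

```javascript
// Illustrative sketch with made-up numbers - not real pricing.
// An on-premise server's fully loaded cost (power, cooling, networking,
// hardware) is sunk whether or not it is busy, so the cost per hour of
// *useful* compute rises as utilization falls.
function costPerUtilizedHour(fullyLoadedHourlyCost, utilization) {
  return fullyLoadedHourlyCost / utilization;
}

// Hypothetical unit costs: on-premise looks 4x cheaper on paper,
// but at 20% utilization it is dearer per hour of useful work.
const onPrem = costPerUtilizedHour(0.10, 0.20); // 0.10/hr, 20% busy
const cloud = costPerUtilizedHour(0.40, 1.00);  // 0.40/hr, pay per use

console.log(onPrem > cloud); // on-premise costs more per utilized hour
```

This is why a naive cost-per-unit comparison favours the internal model: it quietly assumes every unit is fully utilized.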

A third issue I see is organizations failing to deal with the people problem properly. We know from psychology (and from experience!) that individuals like to feel relevant. Banks fail to realize that the resistance and inertia that everyone moans about comes about because they’re not identifying and backing the talent they already have. There are people they need to put confidence in, and provide the relevant skills to, so that those people can be the change the organization wants to see!

By bringing in large SIs you never solve the people issue - and that’s the critical one. You can’t buy in personality and attitude. But you can nurture it! On the other hand, if you keep putting people and teams back in their boxes, then eventually they’ll stay there. People go into their jobs wanting to make impactful change, and they have a period where they push to effect it, but if they’re told “no” enough times they’ll simply stop trying. And we now have decades’ worth of people who have been told “no” too many times and have given up; when new talent rolls in, the old scepticism rubs off on them...and so the cycle goes!

It’s interesting that you mention higher-value cloud services. What is your view on “lock-in” in the world of public cloud?

We’ve all seen large enterprises locked into vendors such as Oracle and IBM over the years, and I like to break the topic into two areas: commercial and proprietary technology. The on-demand consumption model of cloud significantly reduces the problem of commercial lock-in and then you are left to assess your “reversibility” options with regards to the technology.

Let’s take AWS Lambda, for example. If you develop some Lambda functions, you write your code as standard Node.js, including your own libraries with no proprietary frameworks. AWS provide the execution environment, saving you numerous days of engineering effort and significant operating cost going forward, as you pay per invocation.

If you decide to migrate to another provider’s serverless offering, the only risk you face is the integration touch points for CI/CD and potentially some small code changes. Therefore in my view the “lock-in” provides more value than the risk it introduces.

What would your message to C-levels be?

There is no perfect solution to transforming and disrupting your business! The priorities are: get close to your business and think big, advocate change, upskill your workforce, and work hand-in-hand with a forward-thinking partner to co-deliver a pilot cloud project that pipe-cleans your people/process/technology issues. This can then inform your strategy for a mass migration.

Crawl, walk, run!

Thanks, Matt!



