
Moving to the public cloud is a step all enterprises should be considering, yet many are still treading lightly around the perimeter of cloud capability, or have yet to take the first step. They are worried about security, scared of losing control, and concerned that performance will suffer as a result.

But, as we will see, the real risk lies in bad code. Not the cloud.

In this blog post we take a look at:

  • Why enterprises should be moving to the cloud
  • What concerns are holding them back
  • How they can solve their problems by using HashiCorp tooling to achieve highly-secure, fully-compliant, zero-touch infrastructure as code.

That last point describes a world in which security and compliance are baked into the scripts and templates from which infrastructure, configuration and applications are built, and are automatically reconverged on an ideal state. Operator access to infrastructure is no longer required, and infrastructure can be short-lived because you always know exactly what is on it and can easily (re)create identical instances.

This kind of automation removes the most vulnerable link in the chain: the manual human. It minimizes the possibility of bad code creating security or compliance problems in the cloud and maximizes the incredible value that enterprises can derive from adopting it.

So why should all enterprises be considering a move to the cloud?

The obvious reason is that it will save them money. There is no longer any need to make an initial investment in hardware, data centers and the resources to support them. The cloud also lets businesses use only the resources required to meet demand: when servers aren't needed, they aren't running and they aren't paid for, because cloud resources can be scaled up and down as required.

Combining these considerations, we can represent one perspective on the value of the cloud with this formula:

cloud = (flexibility + scalability + speed of deployment)

The cloud gives enterprises the capability for flexible, scalable and rapidly-deployable infrastructure: infrastructure that can be created as needed, multiplied to meet demand and taken away when no longer required. This capability also means that engineers can build their sandbox development environments on demand.

It follows that:

cloud = (more innovation + faster software delivery)

Engineers have the on-demand capability to create the resources they need to translate their ideas into value, meaning they can iterate on those ideas quickly and deliver software faster.

Therefore:

cloud = faster time to value

Hashicorp_Tooling_1.png

The term for this value is idea velocity: the rate at which an idea can progress from an engineer's mind, to the whiteboard, to a laptop, through tests and iterations, and eventually to the end user.

Therefore if:

cloud = (flexibility + scalability + speed of deployment)

Then:

cloud = high idea velocity

Which means:

cloud = low time to value

So why should you care about idea velocity?

Because businesses now rely on technology to be competitive.

As the ING Netherlands leadership team has it: “ING is no longer a financial services company, ... a technology company in the financial services business”.

Or the CIO of United Airlines: “We’re not an airline, we’re a software company with wings”.

As these quotes illustrate, if your business relies on technology to make money, you're a tech company. Tech companies that move faster and deliver their products sooner tend to make more money, which is exactly why every business should care about idea velocity, and exactly why every business should be planning a move to the cloud: to increase its idea velocity.

So why aren’t all enterprises in the cloud?

There are five core reasons:

Security, security, security (+security), control and performance.

Security

Decision-makers may find themselves asking:

“Engineers can do what they want - how are we supposed to be compliant?”

“Public clouds share resources, can others access my data?”

“Are patches up-to-date? Is my infrastructure hardware secure?”

“Compute instances...What’s on them? Who built them?”

All of these fears can be allayed with a small amount of investigation into the leading cloud providers.

For cloud providers, security is the be-all and end-all of their reputation and, therefore, of their success as a business. That is why they hold a wealth of accreditations in areas such as security and remain as transparent as possible, so that it is demonstrably clear to customers that they can be trusted.

It's also important to understand the shared responsibility model of the cloud. Depending on which service model is used, whether IaaS, PaaS, SaaS or even FaaS, the cloud provider takes full responsibility for a portion of the components required to deliver your service, while the rest remains the sole responsibility of the customer.

Some organisations may also believe it is more secure to host all of their services on-premises, but a cloud provider will be far better staffed, far more broadly trained and far more focused on security, because the quality of its cloud service is its product and its reputation.

The UK Government states that “it’s possible for public sector organisations to safely put highly personal and sensitive data into the public cloud”. It is indeed possible for you to put all of your sensitive data in the cloud, but how you configure and implement it is where the risk lies. A well-executed cloud is a safe cloud.

Therefore if you’re worried about security: stop letting humans do things. Automate.

Control

Some of you may be thinking: "if I'm accountable for downtime, why hand responsibility to an unknown third party?".

The answer is that control is not lost; the donkey work is simply taken care of by a business that specialises in it. The APIs and scalability that cloud providers offer ensure that any level of control an enterprise might require can still be exercised.

Performance

Poor load times mean lost customers, and mere minutes of downtime in the financial industry can result in huge losses. But these performance issues are not going to occur simply because your enterprise moves to the cloud. The infrastructure is what the cloud provider delivers; the guarantees of speed, performance and availability they make are exactly that, a promise, with their product and reputation on the line.

The risk lies elsewhere. In your code. Your applications must be architected for cloud infrastructure.

bad code = downtime

How an enterprise executes the cloud is where the risk lies. The best approach is to have everything as code: templating the infrastructure, the infrastructure configuration and the applications. If the full stack is code, the whole deployment process can be automated, so when things go wrong, infrastructure can be swiftly replaced and consistent uptime maintained.

So how can this be done technically?

Everything should be code.

Applications, configuration, and infrastructure.

With that point in mind, the process around creating that code should involve workflows that allow engineers to fail quickly and automate the tests that highlight those failures, allowing for fast iteration and the delivery of better software. The development process should also be carried out by a cross-functional team of engineers with skills across all three core disciplines: development, operations and security. Finding one engineer with the full-stack skillset is tough, so it's important to group engineers with different skillsets together, decreasing lead times and ensuring that engineers grow skills in all three disciplines organically.

With those key points out of the way, the actual technical process is as follows:

  • All engineers should build and test their code in a local development environment (such as Oracle VirtualBox) to ensure that they are meeting the organisation's compliance standards.
  • Engineers push their code to source control and raise a pull request: the process of notifying the development team that a piece of work is ready to be refined, discussed and eventually merged into production. Raising the pull request triggers a pipeline in a build tool, which asks the three core questions shown in the diagram below and answers them through automated tests:

Hashicorp_Tooling_2.png

  • Once those questions have been answered, the code is ready to be tested in the development environment, then in test, which should be an exact replica of production so that it shows how the environment will react when the new code arrives (one way of keeping test and production identical is sketched below), and finally promoted to production.

Hashicorp_Tooling_3.png

This process should be the same for applications, configuration and infrastructure.
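
Keeping test an exact replica of production is far easier when every environment is built from the same code and only the input values differ. The sketch below is a minimal, hypothetical Terraform illustration of that idea; the module path and variable names are my own assumptions, not taken from this post.

    # environments.tf - every environment is described by the same code;
    # only the variable values differ, so test genuinely mirrors production.

    variable "environment" {
      description = "Target environment: dev, test or prod."
      type        = string
    }

    variable "instance_count" {
      description = "Number of application instances to run."
      type        = number
      default     = 1
    }

    # Hypothetical application module shared by all environments.
    module "app" {
      source         = "./modules/app"
      environment    = var.environment
      instance_count = var.instance_count
    }

Promotion then becomes a matter of applying the same code with a different variable file, for example terraform apply -var-file=test.tfvars in test and terraform apply -var-file=prod.tfvars once the tests pass.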

This is where the HashiCorp toolset comes in. The core tools that would make the dream of compliant, zero-touch infrastructure a reality are:

  • Packer: images-as-code. Used to pre-configure machine images with the packages, configuration and elements required for compliance. Packer's builds would be provisioned with a configuration management tool that enforces compliant configuration. Packer pre-configures (or 'pre-bakes') both local development environment images (e.g. VirtualBox) and cloud provider images (e.g. AWS AMIs); a minimal template sketch follows this list.
  • Vagrant: local-environments-as-code. Used to capture an engineer's local development environment as code, utilising the pre-baked, compliant VirtualBox images previously created by Packer. This environment would be used to test not only applications, but also the configuration management code that will enforce compliant configuration in the longer term.
  • Terraform: environments-as-code. Used to orchestrate the creation of full, working environments from the pre-configured images created by Packer and the configuration management tool. Again, the configuration management tool would also be run at this stage to further demonstrate and enforce compliant configuration (an example of consuming the Packer-built image appears further below).
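
As a concrete illustration of the Packer step, here is a minimal sketch of an image template in Packer's HCL format. The region, base image filter and hardening script are assumptions made for this example; in practice the shell provisioner would be complemented or replaced by your configuration management tool.

    # compliant-base.pkr.hcl - bakes a pre-configured, compliant AWS AMI.

    packer {
      required_plugins {
        amazon = {
          source  = "github.com/hashicorp/amazon"
          version = ">= 1.0.0"
        }
      }
    }

    locals {
      timestamp = regex_replace(timestamp(), "[- TZ:]", "")
    }

    source "amazon-ebs" "base" {
      region        = "eu-west-1"
      instance_type = "t3.micro"
      ssh_username  = "ubuntu"
      ami_name      = "compliant-base-${local.timestamp}"

      source_ami_filter {
        filters = {
          name                = "ubuntu/images/*ubuntu-jammy-22.04-amd64-server-*"
          root-device-type    = "ebs"
          virtualization-type = "hvm"
        }
        owners      = ["099720109477"] # Canonical
        most_recent = true
      }
    }

    build {
      sources = ["source.amazon-ebs.base"]

      # Enforce the compliant baseline at bake time; a configuration
      # management provisioner would typically run here as well.
      provisioner "shell" {
        script = "./scripts/harden.sh"
      }
    }

The same template structure, with a VirtualBox source in place of amazon-ebs, produces the matching local development image.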

If an environment is templated, it’s clear what’s present.

No manual process means no human error.

If it’s code, it can be tested and compliance can be ensured.

With the HashiCorp tools in place, the code created for each tool should follow exactly the same flow described in the diagrams above.

Compliance can be ensured for all code, whether it describes an application, an image, configuration or an environment.
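
For example, the Terraform code that builds an environment can look up the most recent compliant image produced by Packer rather than configuring servers by hand. This is a minimal, hypothetical sketch; the provider, region and naming convention are assumptions, not details from this post.

    # main.tf - builds compute only from the pre-baked, compliant image.

    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 5.0"
        }
      }
    }

    provider "aws" {
      region = "eu-west-1"
    }

    # Find the latest image baked by the Packer template sketched above.
    data "aws_ami" "compliant_base" {
      most_recent = true
      owners      = ["self"]

      filter {
        name   = "name"
        values = ["compliant-base-*"]
      }
    }

    resource "aws_instance" "app" {
      ami           = data.aws_ami.compliant_base.id
      instance_type = "t3.micro"

      tags = {
        Name = "app-server"
      }
    }

Because the instance carries nothing that was not baked in or declared here, it can be destroyed and recreated at any time, which is exactly the short-lived, zero-touch infrastructure described earlier.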

Hashicorp_Tooling_4.png

Enterprise benefits

Once the quality of your code has been guaranteed, the security, compliance and performance of your applications, configuration and infrastructure are maximized, and the flexibility, scalability and fast time to value of the public cloud can be taken full advantage of.

The business case for the public cloud - used well - is insanely compelling.

I've recorded a 20-minute video demo that shows everything I've been talking about in practice. Check it out below.

 

Automation is a central pillar of DevOps, which enables faster and more frequent software innovation through collaboration and iterative improvements. Find out why the cloud and DevOps go hand-in-hand in our blog - Why DevOps And The Cloud Are A Match Made In Heaven.

  • Jordan Taylor

    DevOps Practitioner

    Jordan has a passion for being a central catalyst to organisational transformations; helping organisations achieve value through technological and process-related innovation.

    With a strong understanding of the core discipline of automation, picking up any new technology at pace and delivering value with it is one of Jordan's prized skills. However, listening to, understanding and interpreting conversations with clients is a combined skill that Jordan continues to demonstrate as the key to the success of any technological or organisational project delivery.
