Introduction to DevSecOps Best Practices: Technology (Part 3 of 3)

Emre Erkunt

This is part three of a three-part series on DevSecOps best practices. Check out part 1 (people) here and part 2 (process) here.

Last week, we explained the importance of having a defined process in place, but let’s face it: no one gets excited to play with a new process.

Technology and tools, though, tend to make people giddy. It’s the shiny new toy that everyone wants to get their hands on. And because of that, people may sometimes be a little too eager to use the latest and greatest thing they heard about at the last conference, even if it’s not in the company’s best interest.

That’s why figuring out the why behind the project is important. Picking the right tools should be a function of the process, not dictate the process. Once you are in a position to pick the right tools for the job, you’re one step further along the DevSecOps journey, because those tools will enable your people to execute a proper DevSecOps process.

This blog outlines and highlights some of the required technologies to implement a successful DevSecOps methodology within your enterprise. Check out our whitepaper, Introduction to DevSecOps and Best Practices for Adoption, for a full list and unabridged descriptions.

Automation and Configuration Management

Leveraging automation and orchestration is key to DevSecOps success. Orchestration and automation make auditing easier through the use of metadata, and they make decisions easier to reach because those decisions rest on data points and repeatable processes.

Secure Coding Practices/Security as Code

All coding standards must be constantly checked against new security recommendations. Setting this up to be event-driven is an excellent way of finding vulnerabilities as quickly as possible (there is a big difference between finding an issue on day one and finding it on day zero!).

All changes to the code need to be verified and tested against these recommendations: no change is too small during this process. This is not a trivial exercise, but the benefits of such practices should not be underestimated; they extend well beyond the individual changes flowing through the development lifecycle.
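
To make the event-driven idea concrete, here is a minimal sketch (in Python) of a webhook receiver that kicks off a security scan on every push. The endpoint, payload shape and scanner script are illustrative assumptions rather than any particular vendor’s API:

```python
# Minimal sketch of an event-driven security check: a webhook receiver that
# triggers a scan on every push, so no change is too small to be verified.
# The payload shape and scanner command are illustrative assumptions.
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

SCANNER_CMD = ["./run-security-checks.sh"]  # hypothetical scan entry point

class PushHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        event = json.loads(body or "{}")
        commit = event.get("after", "unknown")  # commit SHA in many Git webhooks
        # Scan this exact revision; a failure should be surfaced back to the
        # commit status, not silently ignored.
        result = subprocess.run(SCANNER_CMD + [commit])
        self.send_response(200 if result.returncode == 0 else 500)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PushHandler).serve_forever()
```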

Host Hardening

The practice of host hardening is not new, but if it were used more often, fewer services and applications would be unnecessarily exposed to the internet. Countless security incidents can be traced directly back to a generic, unhardened attack surface that allows automated attack tooling to succeed with even the most basic attacks.

Hardening becomes easier if you minimize the attack surface by not installing or running anything that is not required for the core application, and if you utilize security features native to your OS (e.g. kernel security modules such as SELinux or AppArmor in Linux).

The Center for Internet Security (CIS) has developed a set of industry-standard benchmarks for infrastructure hardening.
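
As one illustration of attack-surface minimization, here is a minimal sketch using the psutil library that flags listening ports outside an explicit allow-list; the allow-list itself is an assumption for a typical web host:

```python
# Minimal sketch of one hardening check: flag listening ports that are not on
# an explicit allow-list, so unnecessary services stand out.
import psutil

ALLOWED_PORTS = {22, 80, 443}  # ssh plus the core web application (assumed)

def unexpected_listeners():
    findings = []
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr.port not in ALLOWED_PORTS:
            name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
            findings.append((conn.laddr.port, name))
    return findings

if __name__ == "__main__":
    for port, proc in unexpected_listeners():
        print(f"unexpected listener on port {port}: {proc}")
```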

CI/CD for Patching

Once metadata has been associated with each asset, you can use this data to implement patching at the CI/CD level. Feeds from threat intelligence and vulnerability management are compared against the deployed software stack to identify matches in the templates that are queued for deployment. Patching live systems becomes a thing of the past, which limits the impact of downtime. This also gives you the ability to determine risk exposure in near real time.
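
As a rough sketch of that comparison, the following matches a hypothetical vulnerability feed against the package manifests baked into templates queued for deployment; the data shapes are illustrative assumptions, and real tooling would use a proper version parser:

```python
# Minimal sketch of patching at the CI/CD level: compare a vulnerability feed
# against the package manifests of templates queued for deployment, then
# rebuild rather than patch live systems. Feed and manifest shapes are assumed.
from dataclasses import dataclass

@dataclass(frozen=True)
class Vulnerability:
    package: str
    fixed_in: str  # first safe version

def affected_templates(feed, templates):
    """Return templates whose baked-in packages match the feed."""
    hits = {}
    for name, manifest in templates.items():  # manifest: package -> version
        # Naive string comparison for brevity; use a real version parser.
        matches = [v for v in feed
                   if v.package in manifest and manifest[v.package] < v.fixed_in]
        if matches:
            hits[name] = matches
    return hits

feed = [Vulnerability("openssl", "3.0.14")]
templates = {"web-ami-2024-06": {"openssl": "3.0.11", "nginx": "1.25.3"}}
for name, vulns in affected_templates(feed, templates).items():
    print(f"rebuild {name}: " + ", ".join(v.package for v in vulns))
```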

Application-level Auditing and Scanning

Auditing and scanning are crucial aspects of DevSecOps that allow businesses to fully understand their risk posture. Each of the following solutions represents a higher degree of security assurance of the code, as reflected in the organization’s risk appetite.

1) Source Code Scanning

Source code scanning should be covered by implementing Static Application Security Testing (SAST). SAST scans the source code repository, usually the master branch, to identify vulnerabilities and perform software composition analysis. It can be integrated into existing CI/CD processes.
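
As one way this can look in practice, the sketch below runs Bandit (an open-source SAST tool for Python) over a repository and fails the build on medium- or high-severity findings; the JSON field names reflect Bandit’s report format and should be verified against the version you use:

```python
# Minimal sketch of wiring SAST into CI: run a scanner over the repository and
# fail the build on findings above a severity threshold.
import json
import subprocess
import sys

proc = subprocess.run(
    ["bandit", "-r", "src/", "-f", "json", "-q"],
    capture_output=True, text=True,
)
report = json.loads(proc.stdout or "{}")
blocking = [r for r in report.get("results", [])
            if r.get("issue_severity") in ("MEDIUM", "HIGH")]

for issue in blocking:
    print(f"{issue['filename']}:{issue['line_number']}: {issue['issue_text']}")
sys.exit(1 if blocking else 0)  # non-zero exit fails the CI stage
```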

2) Dynamic Application Scanning Tool (DAST)

Dynamic Application Scanning Tools are designed to scan staging and production websites in their running state, analyzing input fields, forms and numerous other aspects of the web application for vulnerabilities. It’s important to recognize that any time you allow users to provide you with data (form fields, query strings, HTTP headers, etc.), you are allowing them to provide data that your web server or application code will have to deal with.
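
The sketch below illustrates that principle with strict allow-list validation of user-supplied fields; the field rules are illustrative assumptions for a sign-up form:

```python
# Minimal sketch: every piece of user-supplied data (form fields, query
# strings, headers) must be validated before the application touches it.
import re

RULES = {
    "username": re.compile(r"[a-zA-Z0-9_]{3,32}"),
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"),  # deliberately simple
}

def validate(form: dict) -> dict:
    """Reject unknown fields and anything failing its allow-list pattern."""
    errors = {}
    for key, value in form.items():
        rule = RULES.get(key)
        if rule is None:
            errors[key] = "unexpected field"
        elif not rule.fullmatch(value):
            errors[key] = "invalid value"
    return errors

print(validate({"username": "alice", "email": "alice@example.com"}))  # {}
print(validate({"username": "a; DROP TABLE users;--"}))  # flagged
```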

Pre-Deployment Auditing

Pre-deployment auditing uses a pre-defined template for building assets to ensure the desired, internally certified security level is met. The check is event-driven: when the target code is changed, a check is triggered. Validations at this stage should be blocking and must be integrated into the CD pipeline, since this is the last opportunity to catch issues before release.

This principle can be applied to infrastructure-as-code to enhance compliance by ensuring that not only your software, but also the infrastructure you deploy it on, is automatically compliant. Tools like terraform-compliance or HashiCorp Sentinel are effective here.
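
As a minimal sketch of such a blocking check (independent of those tools), the following inspects a Terraform plan exported with `terraform show -json plan.out > plan.json` and refuses to proceed if any AWS security group opens ingress to the whole internet; the resource shape assumed here is the AWS provider’s aws_security_group:

```python
# Minimal sketch of a blocking pre-deployment check over infrastructure-as-code.
import json
import sys

with open("plan.json") as f:
    plan = json.load(f)

violations = []
for rc in plan.get("resource_changes", []):
    if rc.get("type") != "aws_security_group":
        continue
    after = (rc.get("change") or {}).get("after") or {}
    for rule in after.get("ingress") or []:
        if "0.0.0.0/0" in (rule.get("cidr_blocks") or []):
            violations.append(rc["address"])

for address in violations:
    print(f"BLOCKED: {address} allows ingress from 0.0.0.0/0")
sys.exit(1 if violations else 0)  # non-zero exit stops the CD pipeline
```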

This kind of auditing also has the advantage of engaging security teams early in the software development pipeline, rather than announcing their requirements at the end.

Post-Deployment Auditing

Post-deployment auditing, compared to pre-deployment, is also event-driven, but the events that trigger checks include changes to policy as well as changes to code. When either the infrastructure or the standards (policies) that the infrastructure must meet change, a check is triggered.

The idea behind post-deployment auditing is to ensure that the certified security level you achieved with pre-deployment auditing is still applicable and valid. That’s why the number of post-deployment tests usually exceeds the number of pre-deployment tests.
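
A minimal sketch of that trigger logic: re-run the audit whenever either the deployed inventory or the policy changes. Hashing files on an interval stands in for a real event source here, and the file names and audit function are illustrative assumptions:

```python
# Minimal sketch of the post-deployment trigger: re-audit whenever either the
# deployed inventory or the policy itself changes.
import hashlib
import time

WATCHED = ["inventory.json", "policy.json"]  # hypothetical inputs

def digest(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def run_audit():
    print("auditing deployed assets against current policy...")

last = {p: digest(p) for p in WATCHED}
while True:
    time.sleep(30)
    current = {p: digest(p) for p in WATCHED}
    if current != last:  # infrastructure OR policy changed
        run_audit()
        last = current
```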

Automated Host/Container/External Vulnerability Scanning

Automated security configuration assessments reduce risk and help maintain continuous compliance. They also cut compliance costs by reducing the effort and time required to assess systems.

Hosts: run scheduled checks to determine whether the host configuration is invalid or outdated. Some tools give you a ‘security score’; if this is integrated with your monitoring systems, you can easily see how your security improves over time (a minimal sketch follows after this list).

Containers: monitor every call/command that your container makes. Anything suspicious or non-compliant is blocked.

External Vulnerabilities: a check that is run periodically against your infrastructure (updated continuously with the latest threats) to identify vulnerabilities.
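
For the host case, the sketch below runs a couple of scheduled configuration checks against OpenSSH’s effective settings (via `sshd -T`, which typically requires root) and emits a ‘security score’ a monitoring system can graph; the individual checks are illustrative, and real tooling runs far more of them:

```python
# Minimal sketch of a scheduled host configuration check that emits a
# 'security score' for a monitoring system to graph over time.
import subprocess

def sshd_flag(option, expected):
    """Check one effective sshd setting via `sshd -T` (OpenSSH test mode)."""
    out = subprocess.run(["sshd", "-T"], capture_output=True, text=True).stdout
    return f"{option} {expected}" in out.lower()

CHECKS = {
    "ssh_no_root_login": lambda: sshd_flag("permitrootlogin", "no"),
    "ssh_no_password_auth": lambda: sshd_flag("passwordauthentication", "no"),
}

passed = sum(1 for check in CHECKS.values() if check())
score = round(100 * passed / len(CHECKS))
print(f"security_score {score}")  # e.g. scraped by a metrics agent
```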

Managing Secrets

‘Secrets’ in an information security environment include all the private information a team needs to operate, for example, database credentials or third-party API keys. Secrets should be accessed or generated temporarily, using authentication mechanisms specific to each environment, such that no one – not even the developers – can reverse the logic or exploit a backdoor around secrets simply by having access to the source code.

The main purpose of managing these secrets is to eliminate (or at least minimize) the potential for human error in the handling of such private information, e.g. losing it or accidentally posting it somewhere public such as GitHub. The ideal technique is a synchronized, encrypted, auto-generated secrets store in which entries are temporary, with as short a time-to-live (TTL) as possible.
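
As an example of what this can look like, the sketch below uses the hvac client for HashiCorp Vault to generate short-lived database credentials on demand; the Vault address, role name and authentication details are assumptions about your deployment:

```python
# Minimal sketch of short-lived secrets: credentials are generated on demand
# with a TTL instead of living in source code. Address and role are assumed.
import hvac

client = hvac.Client(url="https://vault.example.internal:8200")
# Authentication (token, AppRole, cloud IAM, ...) differs per environment
# and is deliberately omitted here.

resp = client.secrets.database.generate_credentials(name="app-readonly")
username = resp["data"]["username"]
password = resp["data"]["password"]
ttl = resp["lease_duration"]  # seconds until Vault revokes these credentials

print(f"received credentials for {username}, valid for {ttl}s")
```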

Enabling People and Processes

Technology is key: successfully implementing the tools and practices listed above will greatly reduce the enterprise attack surface and improve your ability to effectively manage technical security debt.

As well as automating scanning and monitoring, implementing the above technology practices means that security, regulatory and compliance requirements can be embedded as code into the software delivery pipeline to ensure that any code deployed is secure and compliant. Any deviation from this can be spotted early and fixed quickly.

By using the proper tools and technology, you are ensuring your people and the processes that have been implemented have the best chance of succeeding. If you’d like to learn more about our vision for DevSecOps in the enterprise - across people, process and technology - check out our free guide: Introduction to DevSecOps & Best Practices for Adoption.
