5 Things I Learned at the Great US Security Debate
George Ge


The demand for DevSecOps knowledge and talent has exploded as more firms realize the need to not only deliver software in an agile and continuous manner, but to do so with security as a top priority every step of the way.

When I heard that Contino and Lifion, a human resources startup backed by ADP, decided to partner up and host the Great US Security Debate, I was excited to learn about security from some of the industry’s experts.

The six panelists were split into two teams: a red hacker team and a blue defense team. The teams were presented with real-world scenarios and related security questions, and asked how they would exploit or defend in each situation, riffing back and forth, with the crowd deciding which team would have been victorious in real life.

Here are my main takeaways from the event.

1. Anyone can be a target

You don’t have to work with sensitive client data to be a valuable target for hackers. Any old server will do. As red team’s Krista Mazzeo explained: “I want to get into your machines and mine cryptocurrency. If you’re in the cloud, I want to leverage and take advantage of your auto-scaling capabilities to quickly make a profit and leave you with a hefty bill.”

It’s a scary reminder that security is important no matter the scale of your project. So remember the fundamentals: use private subnets where appropriate, restrict security groups, and set up billing alerts. Simple measures go a long way.
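
For instance, a billing alert takes only a few lines to set up. Here is a minimal sketch using boto3; the alarm name, threshold, and SNS topic ARN are placeholders, and billing metrics are only published in us-east-1:

```python
import boto3

# Billing metrics only appear in CloudWatch in us-east-1, and only after
# "Receive Billing Alerts" has been enabled in the account's billing preferences.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Hypothetical SNS topic that notifies the team; replace with your own ARN.
ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:billing-alerts"

cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-over-50-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,               # evaluate every six hours
    EvaluationPeriods=1,
    Threshold=50.0,             # alert once the bill passes $50
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[ALERT_TOPIC_ARN],
)
```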

2. Security is not just a technical issue

As the blue security team pointed out, it doesn’t matter how technically secure your system is: if someone puts a gun to your head, you’re probably giving up your passwords. While it won’t always be a matter of life and death, even your most trusted parties can be compromised by financial bribes or unknowingly fall victim to social engineering. If our trusted partners can be compromised, then we have to work under the assumption that our systems will inevitably be compromised as well.

Security relies greatly on trust. While the issue of trust will most likely never go away, there are things you can do to limit the damage a malicious user can cause. For example, follow the principle of least privilege, so that a user has only the permissions necessary to complete a given job. Consider requiring multiple keys that are spread out among trusted parties. While we can’t always fully trust our users, we can limit the scope of the damage they can do.
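
As a concrete illustration of least privilege, here is a rough sketch (the bucket and policy names are made up) that creates an IAM policy granting read-only access to a single S3 bucket and nothing else:

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical bucket used purely for illustration.
BUCKET_ARN = "arn:aws:s3:::example-reports-bucket"

# Allow reading from this one bucket only; no write, delete, or admin actions.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [BUCKET_ARN, f"{BUCKET_ARN}/*"],
        }
    ],
}

iam.create_policy(
    PolicyName="reports-read-only",
    PolicyDocument=json.dumps(least_privilege_policy),
)
```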

3. Security has to be in-depth and real-time

While some of the previously discussed measures limit the scope of possible damage, we would ideally like to prevent and nullify incoming threats before they occur. To use the blue team’s analogy, security should not be like a tomato, with a soft and squishy interior. A system should be multi-layered, without reliance on a single mechanism.

One example that everyone will be familiar with is passwords. Here are some of the possible layers:

  • Strong passwords containing letters, numbers, and symbols
  • Server-side password hashing
  • Password rotation policies
  • Limitations on login attempts
  • Two-factor authentication

While passwords are a fairly trivial example, they illustrate that by combining relatively simple layers of security, one can create a more robust system.
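
Two of those layers, server-side hashing and login attempt limits, can be sketched in a few lines of Python. The in-memory attempt counter and lockout threshold below are simplifications for illustration only:

```python
import hashlib
import hmac
import os

MAX_ATTEMPTS = 5      # lock an account after five consecutive failures
failed_attempts = {}  # username -> failure count (use a real store in production)

def hash_password(password, salt=None):
    """Derive a key with PBKDF2 so the plaintext password is never stored."""
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, key

def verify_login(username, password, salt, stored_key):
    """Check a password while enforcing the attempt limit as a second layer."""
    if failed_attempts.get(username, 0) >= MAX_ATTEMPTS:
        return False  # account temporarily locked
    _, key = hash_password(password, salt)
    if hmac.compare_digest(key, stored_key):
        failed_attempts[username] = 0
        return True
    failed_attempts[username] = failed_attempts.get(username, 0) + 1
    return False
```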

But what should you do when an attacker does manage to break through? The main strategy outlined by the blue team was to use real-time logging and machine learning algorithms to monitor all activity and behavior. When abnormal patterns are automatically detected, the system would create an alert and revoke privileges, kill processes, etc.

While I’m personally not yet sold on the current capabilities of cybersecurity AI until I’ve seen it for myself, the theory is sound. With enough logs and training data, machine learning could generate reliable alerts on suspicious activity. I’m sure there are plenty of tools out there right now that aim to do just that, and I’d love to explore them.

The takeaway here is to ensure that you are generating sufficient logs and that you are actually analyzing and monitoring them, whether with human or computer eyes. Aggregating logs into something like an ELK stack or CloudWatch is a great place to start.
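
A simple first step is emitting structured, machine-parsable logs that an ELK stack or the CloudWatch agent can ingest. The sketch below uses Python’s standard logging module; the field names are just examples of the kind of context worth capturing:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each record as one JSON line, easy for Logstash or CloudWatch to parse."""
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "message": record.getMessage(),
            # Extra context attached via the `extra` argument at the call site.
            "source_ip": getattr(record, "source_ip", None),
            "user": getattr(record, "user", None),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("auth")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Example event that a downstream monitor could alert on.
logger.warning("failed login", extra={"source_ip": "203.0.113.7", "user": "alice"})
```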

4. Security does not happen in a budgetary vacuum (tackle low-hanging fruit first)

One retort from the red team regarding the blue team’s defense plans was that companies have limited budgets. Indeed, every company would hire full teams of dedicated security experts and internal hackers if it had the means. And every malicious attack could be carried out by sheer hash-cracking brute force or simple bribes if there were unlimited resources at one’s disposal.

However, as the blue team pointed out, many of the recent and well-known breaches did not come through sophisticated or obscure vulnerabilities. They often occurred because of simple things such as not changing default passwords. In fact, most data breaches happen because of human error and/or negligence.

While it’s true that every security plan must be accompanied by a realistic budget, it is also true that there is a lot one can do without breaking the bank. Using open-source compliance-as-code tools such as Chef’s InSpec, small teams can easily implement security controls and best practices across their entire infrastructure stack in an automated and scalable fashion.
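
InSpec controls are written in their own Ruby-based DSL; as a rough Python illustration of the same compliance-as-code idea, the sketch below checks one common control, that no security group leaves SSH open to the whole internet, using boto3:

```python
import boto3

ec2 = boto3.client("ec2")

def ssh_open_to_world():
    """Return IDs of security groups that allow port 22 from 0.0.0.0/0."""
    offenders = []
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            from_port = rule.get("FromPort")
            to_port = rule.get("ToPort")
            # A rule with no port range covers all ports, including 22.
            if from_port is not None and not (from_port <= 22 <= to_port):
                continue
            if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
                offenders.append(group["GroupId"])
    return offenders

if __name__ == "__main__":
    open_groups = ssh_open_to_world()
    assert not open_groups, f"SSH open to the world in: {open_groups}"
```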

5. New technologies will continue to shape the security landscape

Security is an ever-evolving game of cat and mouse. While the specific technologies may change over time, the core principles and ideas around security usually remain the same. New technologies can mitigate existing concerns but can also introduce new ones.

One promising technology that is poised to reshape the entire industry is serverless. Serverless allows developers to write small, reusable, and scalable functions without worrying about the underlying infrastructure. No more time spent on OS patches and upgrades.

From a security perspective, the most important thing to remember is that even though we call it serverless, there is still a server somewhere, sitting in a data center under the management of someone else. Serverless architecture requires greater trust because it requires you to relinquish all control of the underlying infrastructure to your cloud provider.

Large cloud providers like AWS have the economies of scale and expertise to manage and upgrade their machines in a consistent and secure manner. They are also highly motivated, as a security breach would kill their reputation and possibly their entire business. Overall, serverless should be a big win for security. By abstracting away infrastructure to trusted vendors, teams will be able to focus on their applications and application security.

However, there will be new challenges as well. As blue team’s Curtis Yanko explained: “with an explosion in the number of serverless endpoints, you’ll no longer have spaghetti code, but you can have spaghetti routes which can increase one’s attack surface.”

Who Took Part?

The event was moderated by Contino’s own Jesse White, Account Principal, and Tim Harney, Partner Solution Architect from AWS.

