
Season 2, Episode 7

Understanding Container Security With Ben Bernstein

Guests:
Ben Bernstein

In this episode of The Secure Developer, Ben Bernstein from Twistlock joins Guy to discuss container security. Are you currently using containers, or thinking about moving to containers in your stack? You won’t want to miss this episode.

With containers, developers control the entire stack. While empowering to developers, this can also open up new security vulnerabilities. Ben and Guy discuss the tools and processes you’ll need to put in place to ensure your containers are compliant and secure.



“Ben Bernstein: A lot of the people that read about containers, they read the theoretical material. The developers now control the entire stack. Security is probably one of the most important things about software and developers can make mistakes. How do you make sure that everything is compliant and is as safe as possible? You'd rather know about a vulnerability as you check in your container or your code, rather than wait for it to happen in production.

Guy Podjarny: What's more important is the revolution of how software is built and who's building it.

Ben Bernstein: Today's tools were not built for the CI/CD world.”

[INTRODUCTION]

[0:00:37] Guy Podjarny: Hi, I'm Guy Podjarny, CEO and Co-Founder of Snyk. You're listening to The Secure Developer, a podcast about security for developers, covering security tools and practices you can and should adopt into your development workflow.

The Secure Developer is brought to you by Heavybit, a program dedicated to helping startups take their developer products to market. For more information, visit heavybit.com. If you're interested in being a guest on this show, or if you would like to suggest a topic for us to discuss, find us on Twitter @thesecuredev.

[INTERVIEW]

[0:01:07] Guy Podjarny: Hi, everybody. Welcome back to The Secure Developer. Here today with us, we've got Ben Bernstein from Twistlock. Thanks for coming on the show, Ben.

[0:01:14] Ben Bernstein: Thank you for inviting me.

[0:01:16] Guy Podjarny: I think, maybe before we dig in and start talking about all sorts of things, container security and microservice security and the likes, maybe Ben, do you want to give a quick intro to yourself, your background, what do you do?

[0:01:27] Ben Bernstein: Sure. I've actually been a developer throughout most of my career, at Microsoft of all places, working on different security suites and the OS security of Windows. Recently, a friend and I started Twistlock, which grew out of the change that is happening in the world of developers and how it actually affects the security world. This is how Twistlock came into being.

[0:01:50] Guy Podjarny: Cool. Again, I think the container security space is hot and new, but also entirely imperative to the adoption of containers, which is probably growing faster than the security controls around it.

[0:02:04] Ben Bernstein: Absolutely. Absolutely. For us, it's really an opportunity, and we were pretty amazed by the reception of the concept we outlined, yeah.

[0:02:13] Guy Podjarny: Yeah. I guess, maybe to level set a little bit, right, we're talking containers and security, right? This is probably going to be a theme this episode. Can you give us a bit of a baseline about what should you care about, what should you think about when you think about security aspects of containers?

[0:02:29] Ben Bernstein: It's really interesting, because a lot of the people that read about containers read the theoretical material, and they come to the conclusion that the most fundamental issue about containers is whether they're as secure as VMs or not, and whether you lose something in moving from VMs to containers. That's not what I've seen in practice across the customers and organisations that have actually moved to containers, especially the enterprise ones.

You must remember that the core thing about containers is the detachment of software from the actual physical, or now virtual, machine. The interesting thing is not whether they're as secure as VMs, but rather how you control the mess, and the empowerment of developers to do so many things. The developers now control the entire stack, and just like with everything else in life, you need another safety measure, right? I mean, security is probably the most important thing, or one of the most important things, about software, and developers can make mistakes. How do you make sure that everything is compliant and as safe as possible? In the past, you had the IT people; they were the safety belt. Now you need something else to help.

[0:03:47] Guy Podjarny: Yeah. I guess the location of where some of these decisions are made changes, right? There's the hopefully outdated, but probably still very much alive in certain systems, world where in order to run something you would have to ask InfoSec or IT to provision a server, and that would require an InfoSec inspection. It's not a great world in many ways, but in that world, at least there is that security inspection, and it completely disappears in a world where, as a developer, you put in a Dockerfile and voilà, an entire operating system just got stood up.

[0:04:20] Ben Bernstein: Absolutely. Herein lies also the opportunity, because when you think about it, as a developer you'd probably not want to wait until the IT person knocks on your door and says, “What the hell did you just do?” What you'd like to do is use the CI/CD tools to push things into some staging mechanism, or something that reviews it and pushes back to you if there's any issue. That's actually a really good opportunity for you as a developer to get the feedback right away.

In the past, a lot of the things that used to be issues would only be discovered very late in the process, and then you had to find out which developer did what and why. Here, once you do something that is wrong, there's an opportunity to push back right away. But just like you said, you have to choose a different location to do these things, and the CI/CD tools would probably be the best location to give feedback to the developers. For example, with vulnerabilities, you'd rather know about a vulnerability as you check in your container or your code than wait for it to surface in production and be picked up by some mechanism there. Being closer to the developers always has made more sense, and always will.
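To make that concrete, here is a minimal sketch of the kind of CI gate Ben describes: fail the build when packages in a container image match known-vulnerable versions. The advisory data and package list are hypothetical stand-ins for what a real scanner would pull from a vulnerability database and from the image itself.

```python
# Sketch of a CI gate that blocks a build when an image contains
# known-vulnerable package versions. Advisory data is a hypothetical
# stand-in for a real vulnerability database feed.

ADVISORIES = {
    "openssl": {"1.0.1f", "1.0.1g"},  # hypothetical vulnerable builds
    "bash": {"4.3"},
}

def vulnerable_packages(installed):
    """Return (name, version) pairs matching the advisory data.

    `installed` maps package name -> version, as extracted from the image.
    """
    return [(name, ver) for name, ver in installed.items()
            if ver in ADVISORIES.get(name, set())]

def ci_gate(installed):
    """Return True if the build should pass, False if it must be blocked."""
    hits = vulnerable_packages(installed)
    for name, ver in hits:
        print(f"BLOCKED: {name} {ver} has a known vulnerability")
    return not hits

# An image with one vulnerable package fails the gate at check-in time,
# rather than being caught later in production.
image_packages = {"openssl": "1.0.1f", "curl": "7.58.0"}
assert ci_gate(image_packages) is False
```

The point is where the check runs: in the pipeline, so the developer gets the feedback at commit time.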

[0:05:37] Guy Podjarny: Yeah. I definitely agree with that perspective. I think containers do two or three things in the process. One is the technical one which, as you pointed out, is not that important: the fact that technically, your operating system is going to run within a container versus within a VM. A lot of the conversation in the container security world early on was really about shortcomings in the Docker engine. Some of those are still there, but questions of whether a container is isolated or not are less interesting. They matter today, but long term, they're secondary. What's more important is the revolution of how software is built and who's building it, right?

Maybe splitting those in two a little bit. There's the technical aspect you mentioned right now, right? It's not that technical, but it's around how the software is being built: it's now built as part of a CI/CD process. Maybe we lost a security gate we had before, in asking the InfoSec person whether we can do it, but we've gained access into this CI/CD world and the opportunity to run tests early on. I guess the whole thing can be a net negative in terms of security if we don't capitalise on that opportunity, or we can turn it into an advantage if we do tap into it.

[0:06:52] Ben Bernstein: Absolutely, absolutely. It almost goes back to the question of tests and a lot of the other stuff. You can look at some of the companies who have been doing it right, like Netflix and Google: the way they did their CI/CD, their staging, their Chaos Monkeys and all that stuff. Finding out about things as early as possible, and doing it in an automated way, is really important. You must develop new tools, ones that don't exist today, to enable all of that, because today's tools were not built for the CI/CD world. They were not built for the security people to set the policy and then for that policy to be enforced in a dev-friendly way. When you're thinking about how you're going to build your dev-to-production environment, this is definitely something you want to keep in mind.

[0:07:47] Guy Podjarny: Yeah. The notion that security tools were built for security audits more than for any continuous testing is a bit of a recurring theme on the show. It's come up several times, because the tools were built for the present in which they're being used. In fact, that's even today's present, right? The majority of security controls today happen outside the container world and the continuous world. That's increasingly changing, but it's still the case. You need tools that focus on the use case of testing in that CI/CD, again capitalising on an opportunity: you just lost something, but you gained something all the more powerful, if only you have the tools to take action there.

[0:08:28] Ben Bernstein: Absolutely. Honestly, this is almost half the story. The other half is the fact that containers themselves, because they're minimalistic, immutable and more declarative, eventually let you get to a situation where, at runtime, you can get even better indications of compromise and anomalies, and better baselining based on machine learning, and a lot of the good things that security is about. I guess it doesn't have to do with the developer space directly, but because developers moved to this, you actually get more information and you're able to protect them better at runtime. Not only do you get better feedback to the developers; eventually, the security pros will also find the system more useful. It's a win-win.
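As a rough illustration of the runtime baselining Ben mentions, the sketch below learns which process names a container runs during a known-good period and flags anything new afterwards. A real product would baseline far more signals (syscalls, network flows, file access); the process names here are hypothetical.

```python
# Minimal sketch of runtime baselining for an immutable container:
# learn what runs during a "known good" window, then treat anything
# new as an indicator of compromise. Illustrative only.

class ProcessBaseline:
    def __init__(self):
        self.known = set()
        self.learning = True

    def observe(self, process_name):
        """Return None while learning or for known processes,
        or an alert string for an anomalous process."""
        if self.learning:
            self.known.add(process_name)
            return None
        if process_name not in self.known:
            return f"ANOMALY: unexpected process '{process_name}'"
        return None

baseline = ProcessBaseline()
for proc in ["nginx", "nginx", "logrotate"]:  # observed in staging
    baseline.observe(proc)
baseline.learning = False                     # freeze the baseline

assert baseline.observe("nginx") is None
assert "ANOMALY" in baseline.observe("curl")  # never seen during learning
```

Because a container image is minimal and immutable, its legitimate behaviour is narrow, which is what makes this kind of baselining tractable.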

[0:09:17] Guy Podjarny: Right. In this case, I would say that containers are just one manifestation of infrastructure as code. Infrastructure as code as a whole implies predictable hardware and predictable deployments, barring bugs. Because deployments are predictable, you can go on and check controls. Netflix actually has, I think it's called the Conformity Monkey, as part of their Simian Army. I'm not sure if the Conformity Monkey is a gate or not, but it goes off, randomly finds systems, and checks whether they conform to what they should be conforming to.

Developers can go on and do whatever they please, but they may be caught by the Conformity Monkey if they've done something wrong, again giving them an opportunity, but also showing them the responsibility they need to address. Tools of that nature don't have to be about containers, but they have to be in that context of infrastructure as code.

[0:10:11] Ben Bernstein: Absolutely. I actually had an interesting discussion with one of the people at Netflix, and they mentioned that they even have a new monkey that tests access control. You don't typically think about that, but developers now not only have the power to create code and the entire stack; some things that were typically taken care of by the IT people are suddenly under the developers' full control, and you suddenly don't have the extra safety belt.

One of which is identity, and how many privileges your services have. Because as a developer testing in some environment, you might create an authentication and authorisation policy which is good for your environment, but maybe not good enough for production. They actually have this monkey that tests for the least-privileged user. It's interesting that they came to this conclusion so early, because they did it based on practice: they saw that developers sometimes make mistakes, and you need some kind of staging tools, monkeys, to check whether they did everything correctly or not.
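The least-privilege check Ben describes can be sketched as a simple diff between the permissions a service was granted and the permissions it was observed using. The permission names below are hypothetical.

```python
# Sketch of a least-privilege check: compare granted permissions
# against permissions actually exercised (e.g. gathered from audit
# logs in staging) and report the excess. Names are illustrative.

def excess_privileges(granted, used):
    """Permissions granted but never exercised - candidates for removal."""
    return sorted(set(granted) - set(used))

granted = {"s3:read", "s3:write", "db:read", "db:admin"}
used = {"s3:read", "db:read"}  # observed during staging runs

unused = excess_privileges(granted, used)
assert unused == ["db:admin", "s3:write"]
for perm in unused:
    print(f"WARNING: '{perm}' is granted but unused; consider revoking")
```

A "monkey" in the Netflix sense would run this comparison continuously against live services, not just once at deploy time.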

[0:11:18] Guy Podjarny: I guess, trying to enumerate a few examples, just to give people some things to tee up. Let's even focus on containers, as opposed to the broader concept of infrastructure as code: you need to test for something, to have some security controls as part of your CI/CD process. We touched on two examples here. You can look for vulnerable artifacts in the containers you're deploying, and with the notion of the least-privileged user, you can audit the user that systems are running as. What other examples do you encounter, or think people should adopt?

[0:11:53] Ben Bernstein: Sure. The most common one, or the most basic one, would be the golden VM, which used to be the way for IT people to enforce certain OS hardening rules. Anything you could imagine about OS hardening; a simple example would be: there shall be no SSH daemon in production, right? That's just one example, but anything you'd expect the base OS to have. When I say base OS, I'm really thinking about the user mode, because the Linux kernel is shared with the host. Still, there's so much damage you can do by accidentally slipping something unprotected into the OS layer, so you basically need to make sure it conforms to certain standards.
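A minimal version of that golden-image style gate might just check an image's package manifest against a deny list; the "no SSH daemon in production" rule becomes one entry. The package names below are illustrative.

```python
# Sketch of an OS-hardening gate in the golden-image spirit: certain
# packages must not appear in a production image. The package list is
# a hypothetical extract from an image manifest.

FORBIDDEN_IN_PROD = {"openssh-server", "telnetd", "ftpd"}

def hardening_violations(installed_packages):
    """Return forbidden packages found in the image, sorted for stable output."""
    return sorted(FORBIDDEN_IN_PROD & set(installed_packages))

image = ["nginx", "openssh-server", "ca-certificates"]
violations = hardening_violations(image)
assert violations == ["openssh-server"]  # the SSH daemon rule, caught pre-prod
```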

Then you go to stuff like devices, right? You could write something that looks at some attached device for some reason. As a security person, you probably want to limit those capabilities, because just because it made sense in development to attach this device, you probably don't want to attach any device in production. A lot of these slips need to be checked before something is put into production. On top of that, specific to containers, there's something called the CIS benchmark, which covers things like whether you defined a user in the container or not, and varies based on which version of Docker you used and whether you applied certain restrictions or didn't.

Honestly, even the biggest experts can get something wrong. Not to mention a standard user who's just trying to get a hello world program running, and may not have restricted everything that should be restricted. The CIS benchmark has, I think, about 90 different things that can go wrong and that you want to check for, ranging from the daemon configuration to the host configuration to the specific containers. It's all over the place. It could be things that the developer did wrong, or something that the DevOps or IT person who set up the host that Docker is running on has done wrong.
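A toy version of a few CIS-Docker-Benchmark-style checks might look like the sketch below, run against a container configuration expressed as a plain dict (a real tool would read this from `docker inspect`). The rules shown are a tiny illustrative subset, not the real ~90-item benchmark.

```python
# Toy CIS-Benchmark-style checks over a container configuration.
# The dict stands in for `docker inspect` output; the rule wording
# and numbering are illustrative.

def cis_style_checks(config):
    findings = []
    if config.get("User", "") in ("", "root", "0"):
        findings.append("container runs as root (no USER set)")
    if config.get("Privileged", False):
        findings.append("container runs with --privileged")
    if config.get("NetworkMode") == "host":
        findings.append("host network namespace is shared")
    return findings

risky = {"User": "", "Privileged": True, "NetworkMode": "host"}
assert len(cis_style_checks(risky)) == 3

hardened = {"User": "app", "Privileged": False, "NetworkMode": "bridge"}
assert cis_style_checks(hardened) == []
```

Run in CI, an empty findings list becomes the pass condition for the build.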

[0:14:00] Guy Podjarny: Yeah, those are really useful. We can throw a link to them in the show notes. The concept of enforcing, or testing for, policy violations sometimes sounds like a heavy concept, but in fact, it's very straightforward to check that you're using the right operating system. I can totally see that happening. In fact, I've seen it happen, and even done it personally: when you're working locally and you create a Dockerfile, or you create some environment, your bias is just to have it work, and the inclination is to just add things. Then there's a lag between the time you made those decisions, about whatever operating system or whatever you installed on it, and the time you commit that and have it deployed.

During that time, you don't remember the decisions you made earlier on, the ones you entirely intended to be temporary; except nothing's more permanent than the temporary. Yes, those are really useful. I came across an interesting Dockerfile linter earlier on, from the guys at Replicated, that does some of those checks; we'll throw in a link to that too. These are tools, and maybe this is the technical side of the fence: the tooling and the audits, or checks, you can add as part of your CI/CD piece. I think the other part is the people piece, because what also shifts in the process you've described is not just the tests that get run; it's also the people that run them that change. It's not the InfoSec person doing some inspection on the check; it's the developer adding a test to the CI that does the inspection. How have you seen that play out? You work with all these companies that are adding container security components. What works in the interaction between the people coming in with the security inputs and the developers, or DevOps teams, that need to apply them?
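In the spirit of the Dockerfile linter mentioned above, a minimal sketch might scan a Dockerfile's text for a few patterns that are fine locally but risky once the image ships. The rules here are illustrative, not those of any particular linter.

```python
# Minimal Dockerfile linter sketch: flag a few "temporary" local
# conveniences that tend to become permanent. Rules are illustrative.

def lint_dockerfile(text):
    findings = []
    lines = [l.strip() for l in text.splitlines()]
    if not any(l.startswith("USER ") for l in lines):
        findings.append("no USER instruction: container will run as root")
    for l in lines:
        if l.startswith("FROM ") and l.endswith(":latest"):
            findings.append("FROM uses :latest: builds are not reproducible")
        if l.startswith("ADD http"):
            findings.append("ADD from a URL: prefer COPY of vetted artifacts")
    return findings

dockerfile = """\
FROM ubuntu:latest
ADD http://example.com/setup.sh /setup.sh
RUN sh /setup.sh
"""
problems = lint_dockerfile(dockerfile)
assert len(problems) == 3
```

Wired into CI, this closes exactly the gap Guy describes: the check runs at commit time, long before anyone forgets which decisions were meant to be temporary.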

[0:15:53] Ben Bernstein: It's interesting, because it's bottom-up. The whole approach to DevOps seems to revolve around smart people who own the dev space and smart people who come from the ops space, typically working together to create a legitimate infrastructure the entire organisation can follow. The end result is that the SecOps people, the security pros, would like to set certain standards and have them applied across the board, and they need the DevOps people to actually implement all the mechanisms. If you go back to how application security used to be in the older world, the VM world, you always had the security ops people working with the networking guys to put in all the IPS and IDS mechanisms.

It's almost the same, to some extent. They work with the DevOps people, but here the DevOps people have a lot more responsibility, because they're dealing with delicate things such as the development process. They need to be very professional about it, and about the tooling. All the tools are still new, and there's a variety of them, so they need to be experts in that. Sometimes you run into a security pro who is so good that he learns about the development process and the CI/CD tools and is comfortable implementing some of these things himself, but that's the exception rather than the rule.

[0:17:24] Guy Podjarny: Yeah. Maybe one delta between the network ops people and the DevOps people is just the pace of change. The network world did not change faster than the security world; in fact, it was probably the other way around, while the development world, especially in its continuous versions, changes very, very quickly. To an extent, I think you're entirely right. I entirely agree with the importance of having the security team and the development team, or the SecOps team and the DevOps team, communicating.

I would also say that this is a case that resembles DevOps a little bit more: not just a blurring, but an almost entire elimination of the line between those components, where those teams work very closely and very much hand in hand. DevOps did not eliminate ops teams, or make all developers ops experts. In the majority of companies, there are still people who are predominantly dev, or predominantly ops. It's just that they're not 100% anymore; they're 5% or 10% the other thing. Either way, they're part of the same teams, share the same goals, and work together.

[0:18:34] Ben Bernstein: Absolutely. It has even elevated the level of policies that the security people put into the picture, because in the past, the security people used to be involved in every little thing the developers did before it actually went into production. Now, that's no longer manageable, because the scale is so big. It actually forces the security people to think about meta policy: there shall be no this, and everything shall be that, and to apply it. They can no longer go to every person who owns a microservice, ask him to describe in 10 pages what he's going to do, read those 10 pages, and then have him change it slightly the next day.

Actually, it elevated their level of policy making, and it also required them to understand the DevOps space much better, in order to understand what they can and cannot do. I absolutely agree with you.

[0:19:29] Guy Podjarny: Yeah. That process has actually happened in the ops world: the notion of write it down, and then write it down in code. Ops systems were also voodoo; the flow of actions you might take during a security audit was just in somebody's head, or written in some outdated document. Then, as systems and their deployment became more automated, those steps first had to be written down in code, so that they'd be predictable and not go out of date, because the code represents what's on the system. Later on, they could even be edited by people who are not in ops. I guess it's the same process that security needs to go through.

[0:20:06] Ben Bernstein: Absolutely. We're actually taking advantage of that. Like you pointed out, when we get to actually see what's running, you need to understand the full context: the infrastructure that was there, from the hardware all the way to the last bit of software configuration that you did. Like we said, you could do it on a manual basis, but infrastructure as code is actually very helpful in the process of protecting software, all the way to runtime. This is a blessing for the security world.

[0:20:39] Guy Podjarny: These are really good topics. When we talk about containers, we've talked about the security implications within the containers, thinking about what's in them and the fact that they get created differently. We've talked about the opportunity to integrate testing, which tests you can run as part of the CI/CD process, and the people that run them. Maybe one last topic we can touch on, which is also top of mind for many people, is not the containers themselves, but the microservice environment they enable, right? Containers as a tool offered us the opportunity to deploy many, many different systems, because it's that easy to create them, and to create lightweight versions of them, eliciting this new microservice environment, right?

Suddenly, you have 100 servers, or maybe 100 is a little extreme, but 20 servers performing the same function that a single server would have performed before. That also introduces a whole bunch of security concerns, no?

[0:21:36] Ben Bernstein: Yes, absolutely. It does. It goes back to our talk about scale. By the way, we've seen customers running on hundreds of hosts, and we have some customers that plan to go to thousands, so absolutely. When thinking about security, like we just said, you need to take the whole stack into account, but you also need to think about the scheduling of these microservices across different environments. On one hand, you have to understand the full stack, which could mean different hosts. On the other hand, you need to understand the software piece, the specific container that you have. If there's an issue with it, you want to flag it and say that it was this container, not the actual host.

It has to do with how you analyse threats, and with how you report them. It has to do, again, with the fact that you need to do everything automatically. When something comes in, you need to analyse it automatically, because there could be thousands of the same container, or thousands of different ones, some of which might go up for three seconds, go down, and never be seen again. Everything needs to be automated. You need to think about scale, about the different pieces, about the orchestration, and about this new stack that's not just VMs and the software you set up on them. It goes back to everything we've talked about, including the fact that these are microservices, which just makes things harder.

[0:23:02] Guy Podjarny: I guess here, once again, there's the two-fold version of it, right? When you talk about container security, one topic that often comes up is that the containers run on a machine, and many cloud services, like AWS, have their security policies, which network ports are open, or which VPC you're a part of, be an aspect of the machine, while the containers run on top of it. There's merit in those concerns today, but they're once again shortcomings of a current ecosystem that's still adapting.

Probably the bigger concern is in the changes that are here to stay, which is the fact that you now have all these different microservices, and you have to think about how they interact. What happens when one of those services misbehaves? What type of exceptions might bubble up to the user, or outside the organisation? What type of network monitoring do you do to identify whether one of those components was compromised?

[0:24:01] Ben Bernstein: Actually, that's a huge opportunity again, because suddenly, you have infrastructure as code, and the person who developed the service is implying to you, or even explicitly telling you, depending on whether we're talking about inbound or outbound traffic, where each microservice might need to talk. If you baseline it correctly and you understand the orchestration mechanism, you have a new type of firewalling where, instead of just looking at static IPs or FQDNs, you understand: this is a service, it's trying to do one, two, and three. If it's doing four, that doesn't necessarily translate into a different IP. Maybe it's the same IP as before, but now it's a different host. Or maybe it's a new IP, but that's okay, because it's talking to a microservice that it should be talking to at this point.
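The "container-friendly firewall" idea can be sketched as policy over allowed service-to-service edges rather than IP addresses, so it survives rescheduling across hosts. The service names below are hypothetical.

```python
# Sketch of service-identity firewalling: the policy is a set of
# allowed service-to-service edges (declared by developers or learned
# from a baseline), not IP addresses, so it is unaffected by where
# the orchestrator schedules each container. Names are illustrative.

ALLOWED_EDGES = {
    ("web", "api"),
    ("api", "db"),
    ("api", "cache"),
}

def check_flow(src_service, dst_service):
    """Return None for an allowed flow, or an alert string otherwise."""
    if (src_service, dst_service) in ALLOWED_EDGES:
        return None
    return f"BLOCKED: {src_service} -> {dst_service} is not in the baseline"

assert check_flow("web", "api") is None
assert check_flow("api", "db") is None
assert "BLOCKED" in check_flow("web", "db")  # web must not reach db directly
```

The enforcement point would map observed IP flows back to service identities via the orchestrator before consulting this policy; that mapping is exactly what IP-based firewalls can't do.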

I mean, this actually presents a challenge, and again an opportunity, for tooling companies, firewalling companies and security companies to create a different type of firewall: a more elevated, container-friendly type of firewall.

[0:25:12] Guy Podjarny: Right. Each of these services is now much easier to understand, and if you understand them better, because each is doing something much more pointed, then it's easier to differentiate right from wrong and monitor it in the right way.

[0:25:25] Ben Bernstein: Absolutely. That's exactly what we think at Twistlock. We believe the security world is going to revolve around that: around pure software, and around understanding the developers when making security decisions. Because now, developers are actually telling you more, and you need to listen to that.

[0:25:46] Guy Podjarny: I think that's maybe where the communication needs to start going the other way, right? In the deployment process, the gates that have disappeared have moved into the developers' hands. The developers now control what gets deployed, how it gets deployed, and what tests run on it before it gets deployed. Something was lost with the gate, but something was gained in running far better tests, in a far more continuous and efficient fashion. Now that it's deployed, security is never static; the fact that you deployed something you believed to be secure at that moment does not end your security work. Now you need to monitor these things in production, and that's where the information needs to flow in the opposite direction.

Again, like in DevOps, a lot of the concept is: if it moves, measure it; if it doesn't, measure it in case it moves, right? It's this notion of building operable software. You need to build, it's probably not a word, but "secureable" software: software with the right outputs to let a security professional monitoring the system in production distinguish right from wrong, just like they would with a service that's about to hit its capacity threshold and cause an outage.
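One way to read "secureable software" concretely is: emit security-relevant events as structured records, so a production monitor can tell right from wrong the same way capacity metrics warn of an outage. The field names in this sketch are illustrative assumptions, not any particular standard.

```python
# Sketch of "secureable" output: security-relevant events emitted as
# structured JSON records for a downstream monitor. Field names are
# illustrative.

import json
import sys
from datetime import datetime, timezone

def security_event(event_type, outcome, **fields):
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        "outcome": outcome,  # "success"/"failure" lets monitors baseline rates
        **fields,
    }
    print(json.dumps(record), file=sys.stderr)
    return record

evt = security_event("login", "failure", user="alice", source_ip="10.0.0.5")
assert evt["event"] == "login" and evt["outcome"] == "failure"
```

The design choice is that the developer decides *what* is security-relevant and emits it, while the security professional decides what patterns in those events look wrong: the information flowing "in the opposite direction".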

[0:27:00] Ben Bernstein: Absolutely. I see it as almost a thread that goes from dev, through baking, to staging, all the way to production, and it can go both ways. This is really the biggest change. It's a change in development, a change in IT, a change in responsibilities, a change in security, and it's a whole opportunity for the ecosystem, specifically security. Absolutely.

[0:27:26] Guy Podjarny: This has been a really good conversation. Thanks again for joining me in it. It's amazing to me every time how often we come back to the analogies between the DevOps world and the security evolution that needs to happen for us to secure this world. Before we part, can I ask you: if you think about a development team, or a DevOps team, that is running right now and wants to improve its security posture, to up its game in terms of how it handles security, what's your top tip? What's the one thing you would suggest they focus on?

[0:28:03] Ben Bernstein: If I had to say one thing, I would say that you should start designing security in as early in the process of moving to DevOps as possible, because you need to think about the tools and you want to put them in as soon as possible; it's much harder to implement changes in the process later down the road. It sounds simple, but that's what all the people we've seen implement best practices have done so far.

[0:28:31] Guy Podjarny: That's really sound advice. Also, I guess, containers give you an opportunity to do that, because you're probably restarting, or rethinking, some processes. That's your opportunity to build security in. Thanks a lot again, Ben, for joining us on the show.

[0:28:44] Ben Bernstein: Thank you for giving me this opportunity. I really appreciate it.

[END OF INTERVIEW]

[0:28:48] Guy Podjarny: That's all we have time for today. If you'd like to come on as a guest on the show, or want us to cover a specific topic, find us on Twitter @thesecuredev. To learn more about Heavybit, browse to heavybit.com. You can find this podcast and many other great ones, as well as over a hundred videos about building developer tooling companies, given by top experts in the field.