
Season 6, Episode 105

Modernizing Security With Tim Crothers

Guests:
Tim Crothers

Today on The Secure Developer, we look at how to modernize security in DevSecOps. To guide us through this, we are joined by Tim Crothers, Senior Vice President and Chief Security Officer at Mandiant. Tim is a seasoned security leader with over 20 years of experience building and running information security programs, large and complex incident response engagements, and threat and vulnerability assessments. He has a wealth of experience in cyber threat intelligence, reverse engineering, and computer forensics. He has authored 17 books to date and delivers regular training and speaking engagements at information security conferences. As someone who has been in the world of IT since the 80s, Tim explains how he has seen DevSecOps evolve over time, how security has changed its approach over the years, and what DevSecOps means to him. We discuss the differences between controls and guardrails, how often developers are allowed to override guardrails, and to what degree these are left to the decisions of development teams. To find out what Tim considers to be the optimal setup for the split of responsibility between development teams and security teams, what he looks for when hiring new people into his product security team, and what his top three KPIs are, tune in today!


[00:00:47] Announcer: Hi, you're listening to The Secure Developer. It's part of the DevSecCon community, a platform for developers, operators and security people to share their views and practices on DevSecOps, Dev and Sec collaboration, cloud security, and more. Check out devseccon.com to join the community and find other great resources.

This podcast is sponsored by Snyk. Snyk’s developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open source, containers, and infrastructure as code. To learn more, visit snyk.io/tsd. That's snyk.io/tsd.

On today's episode, Guy Podjarny, founder of Snyk, talks with Tim Crothers, Chief Security Officer at Mandiant. Tim is a seasoned security leader with over 20 years of experience building and running information security programs, large and complex incident response engagements, and threat and vulnerability assessments. He has deep experience in cyber threat intelligence, reverse engineering, and computer forensics. He has authored 17 books to date and regularly delivers training and speaking engagements at information security conferences. We hope you enjoy the conversation. And don't forget to leave us a review on iTunes if you enjoyed today's episode.

[INTERVIEW]

[00:02:14] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Thanks for tuning back in. Today, we're going to look at the big picture and how we modernize security, taking both an internal look as well as an industry-wide look at modern security in DevSecOps. And to guide us through all of that and get his views on it, we have Tim Crothers, who is SVP and Chief Security Officer at Mandiant. Tim, thanks for coming on to the show.

[00:02:40] Tim Crothers: Oh, thanks for having me.

[00:02:41] Guy Podjarny: So Tim, before we dig in here and talk about all things DevSecOps, tell us a little bit about what you do, and maybe a bit of the journey that got you sort of into security and through to where you are today.

[00:02:52] Tim Crothers: Sure. Yeah, absolutely. So I'm old. I've been doing this for a long time. I got into IT back in the mid-80s. Actually, it was a hobby before that even, but college and university in the mid-80s. And I did what we would now consider infrastructure for the better part of a decade. And then in 1994, this thing called the Internet popped up. Because I was in infrastructure, of course, the first bit of security was really firewalls, which of course are really just routers with rules, and it kind of was a natural progression. And I've been just crazy fortunate to proceed from there and been able to participate in the information security – now we call it cybersecurity – industry as it grew and developed.

Over the course of that, worked for lots and lots of different organizations. Also spent some time in law enforcement and, yeah, just been crazy fortunate to get to work with crazy-talented people and learn from them and get better.

[00:03:54] Guy Podjarny: And I'm curious, what drew you into security in the first place? Do you remember what caught your attention?

[00:04:00] Tim Crothers: I love the protection mission, the defense. Same thing that drew me to law enforcement for a while. But the thing I've come to love over time even more, beyond what drew me in, is that in cybersecurity we get to play in everybody's sandboxes, right? Effective cybersecurity is not just DevSecOps; it's network, it's infrastructure, it's people, it's intel, it's reverse engineering. And we could go on and on, right? And that, I think, is the fun part: we get to play in literally everybody's sandbox at one point or another.

[00:04:40] Guy Podjarny: Very cool. And one of the things that kind of jumps out a bit when you look at your resume is the few years that you spent in customer success within Mandiant – I think in your previous stint at Mandiant. I guess it's interesting. Mandiant, clearly, is a security provider, so it's not like you left security during that time. But what sparked it? And maybe what has stayed with you from doing a role that wasn't in itself about securing internally?

[00:05:09] Tim Crothers: Yeah. So that was really, effectively, a form of consulting role, right? I'm really passionate about helping organizations be successful in their security outcomes. And of course, as you well know, and our audience understands, it's complicated, it's tricky. And so my customer success role was really going onsite with customers and helping them implement Mandiant solutions, Mandiant’s consulting, all of our different capabilities that they were trying to integrate into their operations, essentially, so that they could come out the other side more successful. And that's fun, because you get such a broad view, right? Every organization is very, very different in many respects. There are lots of similarities, of course.

But even down to things like individual organizational culture, of course, ends up playing a part. And so figuring out, “Hey, how do we help you mature your threat hunting capabilities? How do we help you mature your operational, your security operational capabilities, what have you?” My career has been a mix of practitioner and vendor side. And what I love about both is that, ultimately, I really liked the practitioner side the most, because it's the hardest. And I love the challenge. Because it's not just buying tools and putting things in place, it’s how do you really make this work in that organization despite politics, and budgets, and all of these other real-world things that are a part of a business or public organization?

But the vendor side gives you a much broader view, right? Because you're working with so many more companies. That gives you less individual depth at those organizations, but really helps you get a really broad sense of options and approaches and things that companies are doing. And so having both, I think, is ultimately what I enjoy. I can't do one or the other too long without moving back and forth a bit. Yeah, exactly.

[00:07:18] Guy Podjarny: Missing the other part. No. But it’s great. It's great to have that sort of different perspective. And I'm keen to tap into some of the learnings, I guess, from both of those lenses here. So I guess let's begin indeed. So we're here to really explore the latest and maybe that modernization or sort of maturing, or at least the developer view of it. So when you think about DevSecOps, or if you start from some definitions, what does that mean to you? When someone asks what DevSecOps is to you, how do you explain it?

[00:07:49] Tim Crothers: I think it's fairly straightforward. It's the art of producing functional and secure code.

[00:07:57] Guy Podjarny: That's pretty short.

[00:07:59] Tim Crothers: I think we can't have one without the other, in my view, right? The whole point of developing code is to fulfill some sort of function, right? Security, from my perspective, is just an aspect of building good code. And indeed, I've found at several organizations now where I've helped build a DevSecOps capability that as we produce more secure code, inevitably, we're producing higher quality code as well.

Many of the patterns that we use as developers and engineers in our code that produce insecure code are also producing functional defects, right? And so when we get alignment, chasing that solves both. I find that's often a way of really helping engage with the developers and the engineers, because this has got to be a partnership, right? A fundamental reason, I think, a lot of our security fails is when we try to dictate and we don't truly partner with the groups that we're trying to help be successful. From my view, my role is to help our engineering teams at Mandiant produce good, secure, functional code quickly, right? We need to move fast as a business. And partnering for that outcome together is how we produce things that solve the business needs and security needs. And ultimately drive innovation often as well, I find.

[00:09:38] Guy Podjarny: So I very much relate to that. How do you think of the difference? So you've been around a while – I'm not going to weigh in on the old or not. But you've sort of seen the industry maybe before this term started to be adopted. What do you think is different? You probably could have said you should have secure and functioning code 20 years ago as well.

[00:09:57] Tim Crothers: Sure. Well, I think a lot of the approaches have changed, right? Just like if you look at engineering, most organizations have gone from a waterfall methodology to a more agile approach, right? I would say that security as an industry has very much followed that same path. Originally, a lot of our focus in security was about having the appropriate controls in the appropriate places, which essentially became tollgates of sorts, right? Whereas now, it's not that controls aren't still valuable, but the focus is less about the controls and more about the results. How do we partner to deliver, right?

Again, per my simple definition of product security, I very much like to focus on the outcomes. Are we reducing the number of security defects over time? Tracking the number of defects over time is a simple, yet really outcome-driven way to measure the effectiveness of our DevSecOps program. And then our focus is, “Well, how do we eliminate those?” Instead of controls, are there approaches like, say, a service mesh approach – something that's happened at many of the advanced engineering organizations, right?

Where instead of doing authentication at layer seven in the application, we move that down. Especially in an API-driven architecture organization, we can drive that down to the transport layer, where we're using mutual TLS to authenticate. It's a great innovation because, one, developers no longer have to handle secrets, right? It just takes the whole headache of secrets management off of their plate, which allows them to go faster, yet simultaneously improves the security because there are no longer secrets embedded in code. We've got better visibility and/or observability, if you want to use the SRE term for it, right? Which of course inherently gives us better outcomes on the security front. So there's a great specific innovation.

Oh, and it also satisfies controls from the application down to the network layer, which means we can probably eliminate a bunch of firewall rule requests and other things in there. So we simultaneously improve security and allow our developers to go faster. It’s just wins all the way around for the business, right?
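To make the transport-layer idea concrete, here is a minimal Python sketch of a mutual-TLS client. The certificate paths are illustrative assumptions, and in a real service mesh a sidecar proxy would typically handle this transparently rather than the application code:

```python
# A minimal sketch of transport-layer mutual TLS in Python. The certificate
# paths below (ca.pem, client.pem, client-key.pem) are placeholder
# assumptions; in practice a service-mesh sidecar usually terminates mTLS
# so application code never touches certificates or secrets.
import socket
import ssl

def mtls_connect(host: str, port: int) -> ssl.SSLSocket:
    """Open a connection where both sides authenticate with certificates."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations("ca.pem")                  # trust anchor for the server
    ctx.load_cert_chain("client.pem", "client-key.pem")  # our own identity
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)
```

Because identity comes from certificates at the transport layer, the application code never handles a shared secret.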

[00:12:33] Guy Podjarny: Yeah, yeah. No. It sounds like a great initiative. And it's interesting to think, indeed, about authorizations and secrets, which are sort of the bane of existence in modern environments, and the idea of taking them off developers' plates. And I definitely relate to that. When you think about how some things are defined in code and some are more operations-minded and access the system, there's definitely a good argument to be made that that's more about sort of runtime controls than it is necessarily about how the application behaves.

[00:13:06] Tim Crothers: Absolutely.

[00:13:07] Guy Podjarny: So maybe let's take a bit of a – I actually dislike the subject a bit. Too much of a left-to-right bent, because it's a continuous loop in all of our visuals. But still, kind of a code-to-deployment view of controls, to get a bit of perspective. When you think about – I guess, you've talked about less of controls and more about security visibility or tools. But still, when you think about instrumenting or equipping the development workflow with sort of security insights or helpers, walk us through it a little bit. What do you like seeing there?

[00:13:39] Tim Crothers: So there's where I think automation becomes a great tool for us, right? It's important, if we're going to produce good quality code, that we've got consistency, that we've got good practices, that if I'm checking in and writing the code, someone else is taking a look at that code and validating it before committing it, all the way up and down kind of that chain. And then good processes around our testing, so that just like we've got the QA tests that we're going to want to run on our code, it's a great time to do some of our security quality testing, of course, as well, right?

And so, the key there, from my perspective, is just understanding our engineering teams' preferred practices. Are you git-centric? Are you on Kubernetes, K8s? What are those patterns, so that we can partner to put those – I like to call them guardrails, rather than controls – around them? So we want to support the outcomes that the teams want to achieve. We just need to be able to ensure that the appropriate practices and processes that they've determined are correct – and typically, we'll collaborate on those – are being followed consistently. And certainly, if you step back and look at the adversarial approach, if you really simplify it down, the consistent thing is they're looking for gaps – gaps in our processes, gaps in our other things. And how could they take advantage of those gaps? Obviously, things like supply chain compromise have been demonstrated to be a real threat. And so a lot of that can be satisfied just by good process and following good process, rather than necessarily having some specific tool to do it.

But there, again, is a great place where we as an industry have lots of opportunity for automation, because why should we have a need in a modern development environment to manually hook all of our tooling into a new project that an engineering team has spun up, right? They forked their code. They're going to take a different path. They spun up a new project in git. Our tooling should be able to automatically detect that, automatically hook in and see the processes going on there, without us having to do all the manual care and feeding that was kind of the hallmark of a lot of our legacy software development tooling.
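As a rough illustration of that kind of automation, here is a hedged Python sketch that discovers repositories via GitHub's REST API and onboards any it has not seen before; the onboard() function is a hypothetical stand-in for whatever scanning tool you integrate:

```python
# Hedged sketch: auto-detect new git projects so security tooling hooks in
# without manual care and feeding. Uses GitHub's documented REST endpoint
# GET /orgs/{org}/repos; onboard() is a hypothetical integration point.
import requests

def list_repos(org: str, token: str) -> list[str]:
    """Return the full names of repositories in an organization."""
    resp = requests.get(
        f"https://api.github.com/orgs/{org}/repos",
        headers={"Authorization": f"Bearer {token}"},
        params={"per_page": 100},
        timeout=10,
    )
    resp.raise_for_status()
    return [repo["full_name"] for repo in resp.json()]

def onboard(repo: str) -> None:
    # Hypothetical: register the repo with your SAST/SCA pipeline here.
    print(f"hooking security tooling into {repo}")

def sync(org: str, token: str, known: set[str]) -> None:
    """Hook tooling into any repository we haven't onboarded yet."""
    for repo in list_repos(org, token):
        if repo not in known:
            onboard(repo)
            known.add(repo)
```

Run on a schedule or from a webhook, this removes the manual step of wiring tooling into each new project.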

[00:16:27] Guy Podjarny: Yeah, yeah. No. Those are good tips. Let me dig into sort of two things you said there. One is the controls versus guardrails. I mean, terminology is tricky. But when you think about those words, how are they different?

[00:16:41] Tim Crothers: I would say they're different in that when you think of controls – and often they are really one and the same. Of course, as an industry, for a lot of reasons, we do have compliance and regulatory requirements that we as organizations have to uphold depending on what we do, right? Maybe PCI, if we accept credit cards, and that's a contractual requirement. And so in validating for those regulatory requirements, we will express them in the context of controls. We have a control here in that you're only allowed to log in a certain way. You can see how the access to GitHub is controlled via, maybe, the certificates that we've configured for the access, which then gives us two forms of authentication rather than just one, etc.

So we'll still often describe them that way. But when I work with the engineering teams, I take the approach of calling them “guardrails”, because what we inherently want is flexibility. We want the ability for the engineers to not – maybe to use an analogy, so to speak – shoot themselves in the foot and make a really bad choice that's going to harm the project or the organization. But similarly, if we've got 10 developers on the team, give them four different approaches and ways that all work, depending on their personal styles.

And so maybe that's just implementing four different types of controls that allow that, but collectively calling that a guardrail: “Hey, here are the things that we want to absolutely prohibit. But we want you to develop in the style that's best for yourself and for your outcomes, and have that flexibility.” I think that flexibility is at the heart of what I consider a guardrail versus a control. A control is a specific thing that accomplishes a specific outcome.

[00:18:44] Guy Podjarny: Yeah. I think words do matter in communicating that. And the notion is that guardrails are there to help you, versus controls, which are there to constrain you. How often do you allow developers to unilaterally choose to override a guardrail versus not? When you think about the guardrails you put in place, how often are they hard versus left to development teams to decide?

[00:19:10] Tim Crothers: That's a great one. I would say it depends on the specific guardrails. So for instance, let's take secrets in code, right? Detecting secrets embedded in code can be, of course, very tricky. So we can certainly apply some regex patterns as code is checked in that say, “Ooh, this looks like a secret.” There's a great example where we would allow the dev to override it, right? That's really, truly what I would call a guardrail, where we've got some regex patterns that are looking for things that might be an embedded secret. During the check-in process, it pops up a warning and says, “Hey, this looks like a secret. We want to confirm. Is this a secret or not a secret?” It gives them the opportunity to say, “No, this isn't a secret,” and add a note. So we've logged that, yes, we're still doing our due diligence in looking for that. If it's a false pattern, we can then take that back so we can continue to tune our regexes. So we allow the developer to override that and the code commit to finish. But yet, we've got a good guardrail in place.
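As a sketch of what such a guardrail can look like in practice – the regex patterns are illustrative, and the SECRET_OVERRIDE marker is a hypothetical stand-in for the “not a secret” confirmation – a pre-commit style check might be:

```python
# Hedged sketch of a regex-based secret guardrail as a pre-commit style
# check. The patterns and the SECRET_OVERRIDE marker are illustrative
# assumptions, not any specific product's behavior.
import re
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def scan(path: str) -> list[tuple[int, str]]:
    """Return (line number, line) pairs that look like embedded secrets."""
    hits = []
    with open(path, encoding="utf-8", errors="ignore") as fh:
        for lineno, line in enumerate(fh, start=1):
            if "SECRET_OVERRIDE" in line:  # dev confirmed a false positive;
                continue                   # log these to keep tuning the regexes
            if any(p.search(line) for p in SECRET_PATTERNS):
                hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    blocked = False
    for path in sys.argv[1:]:
        for lineno, line in scan(path):
            print(f"possible secret at {path}:{lineno}: {line}")
            blocked = True
    sys.exit(1 if blocked else 0)  # non-zero exit warns before the commit finishes
```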

Whereas for other sorts of guardrails, like perhaps not allowing single-factor commits to git – there is no override for that one. Or perhaps another one would be that when committing to production, only certain authorized individuals in the engineering team have the ability to perform that activity, versus all of the engineers on a team, for process reasons. So it really comes down to the specifics of which guardrail is in place, how stringent it is, and what the rules associated with the overrides would be.

[00:20:57] Guy Podjarny: Yeah. No. Very cool. And I think it's probably a good thing to define as well. And it sounds like you're using a combination of security requirements, like that access example, and accuracy. I guess, sometimes, all developers – and frankly, security teams as well – have experienced the notion of having a low-accuracy control, which in turn requires security to be able to override it. And that ends up – nobody is pleased with that situation.

[00:21:27] Tim Crothers: Absolutely not. Well, and you're accomplishing nothing, right? The thing is you've always got to be stepping back – I don't think this is unique to security – and going, “Is this guardrail accomplishing what we intended it to accomplish?” If we're not doing that analysis, then we're failing, I think, in our roles.

[00:21:48] Guy Podjarny: So Tim, with a lot of what you described right now – I think we probably have a bit of a selection bias here with the listeners – but still, a lot of security people would default to nodding their heads and saying, “This makes sense.” And yet a lot of security people do not practice that in their day-to-day. From your experience with these different lenses, what gets in the way? Like why doesn't everybody operate this way?

[00:22:14] Tim Crothers: Oh, that's a great question. I think it's a combination of things. Often, it's an expertise thing, across all of these different disciplines, like engineering and development. In my case, I have a background in it, and I've continued to maintain those skills over the years. Having that common communications language with the engineering teams obviously is indispensable. And so, certainly, it is not uncommon for a security team to not have anyone on the team that has that ability, that background, that allows them to understand, kind of, maybe some of those ramifications.

I think, similarly, sometimes it's simply that the need for security has just exploded so fast. In my case, I am incredibly fortunate. I've made a lot of career choices specifically to go work with people that were the best of the best so I could continue to elevate my skills, right? And so it is very common to have IT security people who just haven't come from that technical background or had that opportunity to work and mentor under folks that are just incredibly skilled at the profession of security. And so they have maybe a very audit-focused or kind of control-focused view.

And, as always, that's an opportunity for us as engineering teams to help them understand. Hey, get passionate. What I would challenge folks to do is go the other direction and try and meet in the middle: “Hey, security team, we love to produce secure code, but this is actually slowing us down and hindering us. What if we tried this way instead?” That would always be my challenge for folks like that.

Just like some security folks maybe fall into the trap of thinking, “Oh, well, they don't want to do it securely.” I completely disagree. I don't think any engineer wants to produce insecure code. Certainly, I've not run into one yet. I think we in the IT security industry often just make it too complicated and convoluted, right? Going the other direction – helping simplify and helping our security folks better understand engineering and meet in the middle so we can all produce better results – would be my suggestion.

[00:24:43] Guy Podjarny: Yeah, yeah, absolutely. And I'm a big fan of DevOps analogies. And besides some of the core principles that talk about breaking the barriers between Dev and Ops – walk a mile in their shoes, embed in teams – it really is all about empathy. I love the point you make there about how, with development teams, too often you sort of hear a bit of a victim mentality. Sometimes even surprising, when you talk about Dev teams that are otherwise quite empowered: it's “the security team this” and “the security team that”. So the move of going up and trying to propose an alternative is actually probably under-done. Probably not a lot of teams do that. They feel like security is this mandate, when sometimes maybe people on the security side would be receptive. They just don't know how to apply it.

[00:25:31] Tim Crothers: I think that's definitely the case more and more often, right? Certainly, there are still very much security teams out there that take a very controlled, forceful, mandate approach. But in speaking to peers, that's becoming less and less the case. Specifically because, I think, those of us who are partnering – which is more and more of the industry – are producing much better results, right? We can do so much more together. SRE is a great example. So many of the outcomes that site reliability engineering is looking to achieve just overlap significantly with what we want to achieve in security, or at least with the approaches we take in security. And so there's just such a fantastic opportunity to move both of those needles in positive directions, right? And in the end, produce better outcomes.

[00:26:33] Guy Podjarny: Yeah, yeah, that makes a lot of sense. So maybe taking the – we talked a little bit about the minimum, maybe talk a bit about the maximum? So when you think about the split of responsibility between development teams and security teams, when you think about the optimal setup that we should strive for, what would you say that is, right? Like what's the correct end goal in terms of Dev, security, security guardrails, you know, on the other side of that? What's the panacea there?

[00:27:06] Tim Crothers: That's a great question. Ultimately, the responsibility for producing secure code has to rest on the shoulders of the engineers, because they're producing the code. Security's role in there is to partner with them to really optimize the processes and the tooling to support them in being successful at that, right?

I think it all starts with the architecture, right? Have we defined a good architecture for the environment? We're all in agreement that this is how we're going to do identity and access management, this is how we're going to do whatever else. Start with those core standards: “In our organization, these are our standards.” And once we've got that defined right, it allows the engineering teams to go fast, because part of the standards are just technical standards. Part of the standards, of course, should also reflect the security requirements and be part of that. Then we partner on the good processes, the effective tooling. And then everybody just goes fast, right?

And I think there's also a real big component in there, where a really good security team is going to be very consultative, right? My product security team does a ton of work on analysis. A lot of the tooling – especially the legacy tooling – if you think about specific security flaws, like cross-site scripting, right? It's a very easy thing to creep into our code. Often a tool will say, “Oh, you have 500 cross-site scripting problems in your code.”

But when you do the analysis, it's maybe four or five lines of code that are actually producing all of those individual cross-site scripting results. And so we take that manual step first, because if we throw 500 issues at the engineering team, that’s doing no one any favors. That's not helpful. If instead we take the time to really trace that back in the security team, because we've got the expertise in development, then we can go, “Hey, we've got five lines of code that are the issue. And by the way, this is how you rewrite that line of code to not have that issue.” It's a great opportunity for not just improving the security, but improving our engineering team's ability and, hopefully, again, driving those numbers of defects down over time.

From my perspective, the number we should be counting is five there, not 500, and we're going to the engineering teams and saying, “Hey, there are five issues we'd like you to get resolved in your next sprint.” If we go and say, “Hey, you've got to fix those 500,” and we've not taken that extra step to help them understand what the real root cause is, then we're not effectively partnering. So that's what I think that really mature, healthy interplay between security and engineering teams looks like.
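A hedged sketch of that consolidation step – the finding's shape (source_file, source_line) is an assumption, since real scanners vary – might group raw results by the line of code that produced them:

```python
# Hedged sketch: collapse a scanner's long list of findings down to the
# distinct source lines producing them. The dict shape is an assumption.
from collections import defaultdict

def consolidate(findings: list[dict]) -> dict[tuple[str, int], int]:
    """Group findings by the (file, line) of the code that introduced them."""
    by_source: dict[tuple[str, int], int] = defaultdict(int)
    for finding in findings:
        by_source[(finding["source_file"], finding["source_line"])] += 1
    return dict(by_source)

# 500 reported XSS issues might trace back to a handful of sinks worth fixing:
findings = [{"source_file": "render.py", "source_line": 42}] * 300 \
         + [{"source_file": "forms.py", "source_line": 7}] * 200
for (path, line), count in consolidate(findings).items():
    print(f"{path}:{line} accounts for {count} findings")
```

The number handed to the engineering team is the number of distinct root causes, not the raw finding count.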

[00:30:20] Guy Podjarny: Yeah, that sounds like a good reality. So I guess when you think about hiring for your team, and maybe specifically the product security team, what do you look for? And maybe what are some red flags you try to avoid in your quest to reach that destination?

[00:30:38] Tim Crothers: Yeah, absolutely. Great question, too. So for product security in particular, I go for engineers, right? People with proven track records in a modern engineering environment, multiple languages, just as if I was hiring them for an engineering and development team. Because teaching someone product security and secure code is much easier than teaching engineering to someone who only understands the product security pieces. And I find that's the case across most of modern IT security; you will find a lot of folks with deep engineering backgrounds. Similarly for operations, right? Having people that have got SRE backgrounds, or network backgrounds, or the like, and then teaching them the security, because they're passionate about it and they want to maybe learn and broaden their career. Much better outcomes, I find, consistently going that direction.

[00:31:42] Guy Podjarny: Yeah, for sure. I think, early on, this was also a great hiring methodology to step up. Although, today, I'd say Dev scarcity might be getting up there to kind of match the security talent scarcity. So maybe hiring won't quite be as easy. No, but that's definitely a recurring theme around hiring engineers and how it's easier to teach engineers the relevant pieces of security than taking a security person who doesn't know how to code or doesn't understand programming and teaching them that.

So one last question, maybe, before the crystal ball question I always like to ask at the end. But before we go there, one last question around KPIs. So you kind of threw out a KPI before, talking about sort of the defect backlog. But when you think about your KPIs, what are your top three, right? I know that security is complicated. But if you measure yourself – or actually, maybe even practically, as you report to your boss – what are the three top metrics that you use to –

[00:32:43] Tim Crothers: Yeah, absolutely. That's actually easy from my perspective. So I have three specifically that I take to our board of directors.

First is what we call containment time. So that's a measure from the point at which we have a prevention failure, i.e. a failure of one of our preventative controls. And maybe the example I like to use is, it's Monday morning, and the blood level in my caffeine system is too high, and I've clicked on a phish and had malware successfully detonate on my laptop. That's an example of what I call a prevention failure. And so, of course, the threat actor, the cybercriminal, they've got an intention. Maybe they want to deploy ransomware, or whatever their goal is. That's just their foothold. And so at that point, it's a race. And so containment time is all about how we contain that threat actor before they accomplish their goal.

And our goal is to have that at three hours or less, right? Because that just doesn't give the threat actor enough time to meaningfully harm the organization, hopefully. And ultimately, it's a really effective measure of the effectiveness of our detection and response capabilities, right?

[00:33:58] Guy Podjarny: And this is from – The containment time is from detection to containment. So it doesn’t –

[00:34:02] Tim Crothers: Yeah, containment time is from detection to containment. That's right. We have an overall one as well – the overall time from the point at which the actual prevention failure occurs to containment, which is basically the time window when we’re vulnerable. But we have to compute that separately, whereas containment time starts the moment we basically know we've got a problem.
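A hedged sketch of how those two measures relate – the field names are illustrative, not any team's actual schema:

```python
# Hedged sketch of the two time measures: containment time
# (detection -> containment) and the overall exposure window
# (prevention failure -> containment). Field names are illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Incident:
    prevention_failure: datetime  # e.g., when the phish was clicked
    detected: datetime            # when we knew we had a problem
    contained: datetime           # when the threat actor was shut out

    @property
    def containment_time(self) -> timedelta:
        return self.contained - self.detected

    @property
    def exposure_window(self) -> timedelta:
        return self.contained - self.prevention_failure

incident = Incident(
    prevention_failure=datetime(2021, 11, 1, 8, 0),
    detected=datetime(2021, 11, 1, 9, 15),
    contained=datetime(2021, 11, 1, 11, 30),
)
assert incident.containment_time <= timedelta(hours=3)  # the stated goal
```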

Then the second KPI is prevention failures – simply a count of prevention failures. And we take this, again, very broadly because, ultimately, it's a measure of the effectiveness of our preventative controls in our organization. But this would also include things like – relevant to this, of course – our bug bounty program, right? And so if a researcher submits a bug bounty finding, we count that also as a prevention failure and measure the containment time to when we've mitigated that risk. Now, it's not publicly available, but we still treat it that same way. Because at the end of the day, if a researcher was able to exploit some sort of system in a way that was not intended, then we've had a preventative failure.

And then our third KPI would be the one I mentioned earlier, and that's product security defects. In that case, what we do is we snap a line at the end of every quarter: how many open high or critical product security defects do we still have in our products? And our goal is to be at zero for each. Of course, that number will fluctuate over time, right? We've got some new code development, and that introduced some new defects. Or maybe we ran a red team exercise, and we identified some zero-days or some other things. But the goal is to have the appropriate processes in play, and partnership with those teams, so that we've got proven processes to identify and close those on a continual basis – with the goal being that what we're shipping to our customers has no known product security defects in it.
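A hedged sketch of that quarter-end “snap a line” measure – the defect record fields here are assumptions – could look like:

```python
# Hedged sketch of the quarter-end snapshot: count open high/critical
# product security defects as of the snapshot date (goal: zero).
# The defect record fields are illustrative assumptions.
from datetime import date

def open_high_crit(defects: list[dict], as_of: date) -> int:
    """Open high/critical defects on the snapshot date."""
    return sum(
        1
        for d in defects
        if d["severity"] in {"high", "critical"}
        and d["opened"] <= as_of
        and (d.get("closed") is None or d["closed"] > as_of)
    )

defects = [
    {"severity": "high", "opened": date(2021, 8, 3), "closed": date(2021, 9, 1)},
    {"severity": "critical", "opened": date(2021, 9, 20), "closed": None},
]
print(open_high_crit(defects, as_of=date(2021, 9, 30)))  # -> 1 still open
```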

Given what we do, of course, as an organization, a nightmare scenario for me is that some sort of a cyber criminal was able to leverage our products as that vehicle to compromise one of our customers. And so, of course, product security is one of our primary focuses as a result.

[00:36:34] Guy Podjarny: Yeah. No. Those are great, because they cover – like every gap or every sort of hole that I see in those tends to, for the most part, be addressed a little bit by the previous one. And many of them are quite measurable. The prevention failures one is probably the least measurable, because it almost incentivizes you to not know, right? Like a prevention failure implies you’ve found a problem late, right? So you need to invest in finding them. How do you reconcile that, right? It's almost like a great KPI is whether you do a bug bounty or not, do sort of pen testing, right?

[00:37:15] Tim Crothers: No, that's a great question. The key there is integrity. As an industry, I feel incredibly strongly that we as defenders have to have unassailable integrity. And the thing that I repeat with my team is the number is just a tool to accomplish things, right? It's really more of a dashboard, right? At one point in my career, I spent time at General Electric. And for those who are familiar with GE, they're really the home of Lean Six Sigma, right? And that approach of process optimization is just hugely valuable. And so all of these prevention failures are ultimately useful because, one, we want to understand what the root cause was so that we can constantly improve that prevention control on an ongoing basis. And like everything else – just like if we have an operational failure – it's throwing people under the bus and blaming versus a blameless root cause analysis, right? It's the same thing in security. Understand what went wrong. How can we fix it? Etc.

And again, what I've found is that creating that culture where people want to know what all of the prevention failures are, so we can go after making things better, inevitably leads to innovation. A fun fact: at my last team prior to Mandiant, in my final week there, we were awarded our 14th patent in cybersecurity. And that was a retail organization. So these things, if we build our culture correctly, just incentivize innovation out of our folks in that partnership. And I don't know many engineers that don't love to innovate.

[00:39:08] Guy Podjarny: Yeah. I know, for sure. And that aligns with the engineers. And I feel like we probably – I had that sort of on my list of things to talk about – culture – but we're kind of running out of time to really open that up. We're going to need to get you back here and probably spend a whole episode on the cultural aspects.

[00:39:24] Tim Crothers: Oh, I’d love that. I think that's a hugely important factor in our success, the cultures we build.

[00:39:29] Guy Podjarny: Yeah, absolutely. Tim, tons of great insights here, concrete tips and things you need to do, as well as kind of broad perspective. Thanks for that. Before I let you go here, one last question. So if you took out your crystal ball and you thought about someone in your – Or we looked at someone in roughly your position in five years’ time, what would you say would be most different about that reality?

[00:39:53] Tim Crothers: Oh, that's a great question. Five years from now. I think what's interesting about the last five years is that the speed, and the pace, and the complexity have grown – logarithmically, I think, is probably the appropriate word. And I don't see that stopping anytime soon. I think what we're probably going to see happen is much more specialization occurring, right? A lot of our security groups and leaders have had to, or have been able to, be very generalist. And I think the role of a security leader is going to continue to evolve to be much more about leadership, and focusing on the culture, and having really strong, really deeply skilled experts leading the various aspects of our security programs. And I think that'll be a fun change.

I think the other interesting change that will start to really manifest somewhere around the five-year mark stems from the fact that, inevitably, we as a profession have arguably been failing. And I would argue the fact that every year there are more breaches occurring than in every prior year is hard data to prove that we haven't yet successfully turned the corner. Which, from my perspective, is just awesome, because it just means that there's still so much opportunity for innovation and figuring out better approaches. But that is leading various governments around the world to lean in harder and farther, etc. And so it'll be interesting to see how that kind of evolves.

Historically, that has led to more problems, more harm than good, because they tried to regulate everything. But I'm seeing some really interesting positive signs around things like CMMC in the United States, where it's much more driven by the processes that are in place, rather than kind of control-oriented. PCI version four, which is out in draft form now, for instance, has also taken a similar approach, where they define the outcomes we need to achieve rather than the hows.

And so as that starts to happen, hopefully, we'll take that path. And that will lead to solving one of the biggest fundamental problems, I would say, we still have in the industry for people like me, which is that you're kind of darned if you do, darned if you don't, right? Where if you have a breach, you are both a victim and you're guilty of failing to protect the organization, right?

Whereas in physical security – take banks or things like that – the industries and the governments have agreed upon kind of minimum baseline standards, so that as long as you can demonstrate you were adhering to those, you were just a victim, rather than also guilty of failing your customers or what have you. And until we kind of turn that corner as an industry, roles like mine are kind of fraught with danger.

[00:43:19] Guy Podjarny: A little bit. Some CISO's head has to roll when there's a breach.

[00:43:24] Tim Crothers: That's right. I like to say that a breach is a resume-generating event in my industry.

[00:43:31] Guy Podjarny: Yeah, it's an interesting view. Actually, I jotted that down from LinkedIn, where someone shared kind of a similar perspective about the changing legislation. And I think the example was how a variety of corruption cases brought that about in finance – I think it was Enron and others that really introduced all sorts of CFO regulations to what was looser at the time. And today, we take it for granted that there's SOX and all sorts of other regulations that require you to do finance in a certain way. Security is a slightly more creative field, maybe, than finance. But it'll be interesting, indeed, to see with the recent executive order from Biden and a variety of others, where that leads. So definitely interesting.

I'm curious, just on the expertise comment, the first one: what examples kind of jump to mind? So, like, which expertise aspects?

[00:44:27] Tim Crothers: Well, I think there are lots of kinds of expertise. Certainly, our discussion today is a good example of one, right? If you're leading product security, then you should have a deep understanding in –

[00:44:40] Guy Podjarny: Software engineering, yeah.

[00:44:41] Tim Crothers: Yeah, absolutely. Software engineering, operations. All of those are obviously fields that take years and years to master, right? But there are also very specialized things like reverse engineering and the like. Or another great example that a lot of your more mature security teams are focusing on is what is commonly being called now detection engineering. How do we build detection capabilities throughout our infrastructure and environments in creative ways, rather than just maybe deploying network sensors? There are so many more ways to build effective detection, which has a lot of parallels with observability in the SRE field, for instance.

But another example of what has become a critical skill in the last decade is data analytics and machine learning, even up to the point of data science, right? It's just an incredible capability when embedded in a security team. Again, it's just a force multiplier for our outcomes – things like the tooling that we mentioned earlier that produces tons of false positives and stuff, right? There are so many more modern ways to go about achieving some of those outcomes if we've got the depth of capabilities.

[00:46:05] Guy Podjarny: Yeah, indeed. Tapping into innovation. So, yeah, thanks. Thanks for the examples. Tim, we've run long here, but I think every moment of it was a great conversation. Thanks so much for coming onto the show and sharing some wisdom here.

[00:46:16] Tim Crothers: Oh, my pleasure. Thanks so much for having me.

[00:46:19] Guy Podjarny: And thanks, everybody, for tuning in. And I hope you join us for the next one.

[OUTRO]

[00:46:27] Announcer: Thanks for listening to The Secure Developer. That's all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you'd like to be a guest on the show or get involved in the community, find us on Twitter at @devseccon. Don't forget to leave us a review on iTunes if you enjoyed today's episode. Bye for now.

[END]