Season 9, Episode 147

Redefining Cybersecurity With Sean Catlett

Guests:
Sean Catlett

Episode Summary

In this episode of The Secure Developer, Guy Podjarny and guest Sean Catlett discuss the shift from traditional to engineering-first security practices. They delve into the importance of empathy and understanding business operations for enforcing better security. Catlett emphasizes utilizing AI for generic tasks to focus on crafting customized security strategies.

Show Notes

In this episode of The Secure Developer, host Guy Podjarny chats with experienced CISO Sean Catlett about transforming traditional security cultures into a more modern, engineering-first approach. Together, they delve into the intricacies of this paradigm shift and the resulting impact on organizational dynamics and leadership perspectives.

Starting with exploring how an empathetic understanding of a business's operational model can significantly strengthen security paradigms, the discussion progresses toward the importance of creating specialized security protocols per unique business needs. They stress that using AI and other technologies for generic tasks can free up teams to concentrate on building tailored security solutions, thereby amplifying their efficiency and impact on the company's growth.

In the latter part of the show, Catlett and Podjarny investigate AI's prospective role within modern security teams and lay out some potential challenges. Recognizing the rapid evolutionary pace of such technologies, they believe keeping up with AI advancements is crucial for capitalizing on its benefits and pre-empting potential pain points.

AI-curious listeners will find this episode brimming with valuable insights as Catlett and Podjarny demystify the complexities and highlight the opportunities of the current security landscape. Tune in to learn, grow, and transform your security strategy.


“Guy Podjarny: So what are some examples of ways you've strengthened these, I think you mentioned empathy and others, in past locations?

Sean Catlett: Yeah, I think one big one, just working from the fundamentals when saying you want to shift from one to another, is really understanding the way the business operates. I hate to say it that way because it's actually something that is sometimes missed, because people come in and they have such a narrow scope, they don't actually know, for example, how the business makes money.”

[INTRODUCTION]

[0:00:32] ANNOUNCER: You are listening to The Secure Developer, where we speak to leaders and experts about DevSecOps, Dev and Sec collaboration, cloud security, and much more. The podcast is part of the DevSecCon community, found on devseccon.com, where you can find incredible dev and security resources and discuss them with other smart and kind community members.

This podcast is sponsored by Snyk. Snyk's developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open-source dependencies, containers, and infrastructure as code. To learn more, visit snyk.io/tsd.

[INTERVIEW]

[0:01:19] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Today, we're going to talk about the modern security team, or what we think it is, and what we've learned about it. To help us dig into that, we have Sean Catlett, who is a very experienced CISO, having served as the CISO at companies like Reddit and Slack, spent a lot of time leading security at Betfair, and even spent some time on the dark side, the vendor side, with JupiterOne and others. Sean, thanks for coming onto the show.

[0:01:46] Sean Catlett: Yeah. Thanks for having me. Thank you.

[0:01:48] Guy Podjarny: Sean, just for a little context, I don't know how much I hit the mark with my description there, but tell us a little bit about your security journey before we dive into the substance.

[0:01:58] Sean Catlett: Yeah. I think you hit on some of the highlights, but there's a lot of interesting and, I think, diverse background I've worked across. Like you said, on both sides of the coin. Really, my start in security was actually on the detection and response side, leading incident response for Bank of America. It ultimately took me to London, where I moved over to build out that capability at Barclays Bank. On that side, I really learned the super-large-scale, ultra-large-scale, I'd say, more traditional, but very modern and advanced at the time, capabilities there. Ultimately, I went to Betfair, which was really my first experience in what I would say is a more engineering-focused, or engineering-led, environment, mostly because that was a tech company in the gaming space.

Really, for me, it was adapting some of my background up to that point. Then I spent some time on the vendor side, with startups. I built out a cyber threat intelligence startup that famously sold to FireEye, now Google Cloud Intelligence, I think.

[0:02:54] Guy Podjarny: Always gets long names once it gets into the big companies.

[0:02:56] Sean Catlett: Exactly, exactly. I'm really, really proud of those experiences. Then ultimately, I spent some time with Reddit and Slack most recently. Those were really the opportunities where I got to see a different side and a different risk calculus in how to deliver security programs, but also a real focus on doing that via dedicated and direct engineering as a core part of your team and function. In previous organisations, while security engineering was a part of some of the things within the team, here we were starting to branch out to do things even externalised from the team, and ultimately the dream, which is actually putting things into the core product from the security function, which I think is super exciting.

[0:03:38] Guy Podjarny: Yeah. I guess, also different levels of regulation, right? Because I'd imagine, Barclays is clearly a bank, and Betfair is a gambling company. There's a lot of financial regulation, versus a Reddit and Slack, which are a bit more pure-play SaaS technology solutions.

[0:03:53] Sean Catlett: Yeah. I think that's true. Many times at Reddit, when I made recommendations, I was told, "We're not a bank." Until you really start thinking about it: you do want to protect your customers' information, where that is, and how you approach that problem, and in many ways it starts to look and feel very similar, right? You want to make sure you've identified that correctly, segmented it, and understood all of the data flows. You have to do that regardless.

I think it's the same, ultimately, for folks like Slack, because with B2B businesses, your regulation is everyone else's regulation. Ultimately, you take on the transitive factors of all of your customers and their regulatory environments to be able to be successful. While you do have that B2C pace, which I think is super interesting, where you're business to consumer, delivering a product that people love because they're the consumer of that product, you're still selling, ultimately, B2B, whether it be Reddit with ads environments and large companies there, or Slack, which is obvious, where you're selling to businesses to use it as a communication product.

[0:04:54] Guy Podjarny: Yeah. It's interesting. To an extent, you need to choose more. I think when you're in a regulated industry, you can hide behind the regulation and say, "Hey, I'll just satisfy this," and then, above and beyond that, ask what the security-versus-compliance equation is. When you're in a place that is not as regulated, you're right, you need to be compliant with a bunch of customer demands, but my sense is that you have a lot more leeway to do the things that you actually believe will make you more secure, with fewer checkbox requirements.

[0:05:27] Sean Catlett: Yeah. I think it's how you approach it. From my perspective, and I've said this to a lot of my teams when we talk about compliance: compliance, if delivered as the bar you're trying to reach, is perceived as pretty onerous, and in some cases you're doing things where it's not clear why, just because it was written down and codified years, or decades, ago.

When you think of what a really strong security program is, which is looking at an adversarial environment and trying to defend the assets your customers and your business objectives care about, compliance is really a bar to pass and the exhaust of your program. You're not trying to attain it. It should be the thing that you naturally exude by delivering a really strong program of security outcomes.

Now, you will always have things that catch you up with that approach, because there are some things where, I think all of us would say, how you approach the problem can look different from one auditor to another, or in a different regulatory environment. This is actually one of the big differences between regions: I may have come from a very data-control, data-centric environment, versus one that is less data-centric and more protection-centric. Those can come into conflict as you then have to articulate your approach, look at the people, process and technology controls you put around it, and explaining mitigating controls can be somewhat complex.

Ultimately, I think that compliance isn't this terrible thing that comes and happens to you. It's a thing that you should aggressively embrace. Then, and this is part of today's discussion, it's about bringing an engineering focus to that. Things like policy as code and compliance as code, things that you can do now with cloud and other capabilities which just weren't available to us before, are something that's really exciting for the way we approach those problems.
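Catlett's "policy as code" point is easiest to see with a small sketch. The resource fields and rules below are hypothetical, invented for illustration rather than taken from any particular framework; the idea is simply that policy becomes data plus checks that can run automatically in CI, instead of a document audited by hand.

```python
# Hypothetical policy-as-code sketch: security policy expressed as
# data plus checks, evaluated automatically against resource configs.

POLICY_RULES = [
    # (rule name, check function, human-readable violation message)
    ("encryption_at_rest",
     lambda r: r.get("encrypted", False),
     "storage must be encrypted at rest"),
    ("no_public_access",
     lambda r: not r.get("public", True),
     "resource must not be publicly accessible"),
]

def evaluate(resource):
    """Return the violation messages for a resource description."""
    return [msg for _name, check, msg in POLICY_RULES if not check(resource)]

# A compliant resource produces no violations; a misconfigured one is
# flagged before it ever ships.
print(evaluate({"encrypted": False, "public": True}))
```

In practice, a team would reach for an established policy engine rather than hand-rolled rules; the point is only that the policy lives in version control and is enforced by the pipeline rather than by a manual audit.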

[0:07:16] Guy Podjarny: Yeah. That makes a lot of sense to me. Sometimes you also want to bring up the forcing function; maybe it's a tool in the CISO's arsenal to mobilise the organisation, while in the background, you're actually doing something better.

Actually, let's maybe dig in a little bit on these different approaches, and maybe a lens: you've spent time in London before, and you're in London now, but you spent a lot of that time, notably at Reddit and Slack, on the West Coast, working with these very West Coasty companies. What's your perception of the difference in ecosystems, how CISOs and organisations default to behave in these surroundings, and how do they differ?

[0:07:56] Sean Catlett: Yeah. I mean, here I have to note, I'm not the representative for all of these different domains, just my limited slice of experiences, but I definitely felt a bit of a fish out of water going into the Bay Area for the first time from my previous experiences, right? I come from, as I said, large enterprise, selling to large enterprise. Obviously, I'd done some startup work and some entrepreneurial, different businesses up to that point, so I get some of the trade-offs that you make as a smaller business versus the ultra-large.

Comparing what I can characterise as my previous time here to this time, there's much more diversity of organisations and types of technologies that I've seen coming back. There's a much more vibrant fintech marketplace and set of organisations; obviously, cloud, cloud security, and cloud technologies have been embraced. I wouldn't say they're as embraced as at the cloud-native firms that you see, and some of those that I've worked with, but you definitely see some that are cloud-native and starting up all over. You definitely see that direction.

What I'm really interested to see is that it's actually still very much, I'd say, a compliance-first, compliance-by-design, privacy-by-design culture. That's the type of focus I was trying to implement in the organisations where I was: to really enable your organisation by having privacy-first, privacy-by-design, privacy engineering organisations. I saw that in a way with GDPR and other regulatory regimes bringing a flavour of it all the way to the West Coast in the past. Here, it's definitely still heavily focused around that environment. I think, ultimately, there's going to be a blend and a mix of that engineering-first, or engineering-led, security culture and the traditional regimes that you have around data compliance.

[0:09:51] Guy Podjarny: Right. Maybe let's dig in a little bit into the notion of an engineering-oriented approach to security, or a security organisation, and then we'll come back a little bit to the regions and how they differ. Maybe let's start with just a bit of a definition. You've said a few times here, an engineering approach to security. What does that mean to you? How do you try to define that?

[0:10:10] Sean Catlett: Yeah. I think of it as a couple of things. The first, and these were things that I've learned over my career, is where you shift from being a consumer of products and technologies, mostly in the security space or domain, to where you start to look at building your own and/or integrating with the fabric of your organisation and the way that it operates, versus, in some ways, being siloed and delivering your security program really independently of a lot of the ways that the organisation operates.

What I mean by that is, in many cases, you could build a security team following a lot of major compliance frameworks and deliver that as a program to many different organisations in different industries. It's pretty adaptable, because in many ways you just deliver the same IAM controls and other capabilities.

As you move towards a more engineering-like culture, you're actually having to embed yourself and your teams into the practices of your engineering organisation, which is how the company builds toward its business objectives, not at a policy level, but actually at a technology and engineering level. What that means is, usually, you're embedding people and process earlier in the lifecycle of what's being built, trying to get and source signal and advice from your own organisation as early as possible in the process, with probably less focus on the policy or compliance capabilities that we talked about earlier, unless they're delivered through technical means or engineering controls.

In many cases, you're offloading the complex work that you don't want done across a number of teams to be done in one centralised organisation, whether it be cryptography, or authorisation patterns that you want to build centralised APIs for, versus something that you want each team trying to roll their own and then figuring out the security outcomes of those.
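The centralised-API idea can be sketched roughly as follows. The roles and permissions here are placeholders invented for illustration: the point is that one security-owned helper makes the authorisation decision, so individual product teams don't each roll their own checks.

```python
# Hypothetical sketch of a centralised authorisation helper owned by the
# security team and called by every service, instead of per-team logic.

ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin": {"read", "write", "delete"},
}

def is_authorised(role: str, action: str) -> bool:
    """Single choke point for authorisation decisions across services."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Product code asks the central helper rather than re-implementing rules.
assert is_authorised("editor", "write")
assert not is_authorised("viewer", "delete")
```

Because every service calls the same helper, a policy change or a security fix lands in one place instead of in every team's codebase.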

[0:11:59] Guy Podjarny: You mentioned two things here that I wonder if you see as coupled. On one hand, the ability to build tools. You might not be looking for an outsourced service, or a product that is straight-up the whole package for you; you might piecemeal it from pieces of solutions, and you'll build the rest. That's taking a bunch of tools and adapting them to your business. Then the second thing you talked about is adapting to how people work.

You threw in a bunch of these DevSecOps principles that we think of today, which are about being respectful of people's needs and talking with them. I think it's pretty easy to understand how the engineering skills are required to be able to adapt to their surroundings, because they work in various ways and you need to adapt to them. Is it a necessity, though? If you're looking to embrace an engineering approach to security, does that need to come right away, or is it sensible to think about just doing the former and not the latter?

[0:12:58] Sean Catlett: It's a challenging question, mostly because I think it's about the maturity of the organisation. It's "it depends", which I hate as an answer. Really, you look at the organisation, what it's trying to do, and recognise the security team's place in that. In most cases, we all want, regardless of engineering or not, to make sure that our teams are receiving good signal about what the business is trying to do. In a non-engineering, or engineering-less-emphasised, more traditional CISO structure, that's done really at the strategy and policy level, understanding what the organisation is trying to accomplish.

In most cases, you build your team and place your people where they can be most effective, and you see things like follow-the-sun threat detection, security operations capabilities, etc. When you look at the more DevSecOps, or engineering, side, what I think is most interesting is that it actually is a fundamental change in the security model itself. That's usually where you end up needing to really rethink. I've been challenged by that, as I mentioned, in my previous roles, where I first came to grips with the change in separation of duties and what that really meant for what a true DevOps, or DevSecOps, environment was going to be like. Just-in-time access, versus really defined and rigid separation of duties in the traditional sense. You have to think through what that means, both for business delivery and its outcomes, as well as the change for the security team.

That's usually where you end up needing to build security engineering in, because you start to have to engineer a solution for something that isn't perfectly off the shelf. I will say, as one thing from my mantra or views, though: don't build a security or engineering organisation to build things that already exist. That's one thing that people do get caught up in. Some of it's by necessity and price and some of the security industry challenges that we know exist. But really, you should be trying to build things that help your business to scale and achieve the goals that are unique to that business. Most other things, you're just trying to make work.
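The just-in-time access model Catlett contrasts with rigid separation of duties can be sketched like this. It is a toy illustration with invented names, not a production design: access is granted per task with an expiry, instead of as a standing privilege.

```python
# Toy just-in-time access sketch: grants are time-boxed per (user, resource)
# instead of being permanent standing privileges.
import time

class JITAccess:
    def __init__(self):
        self._grants = {}  # (user, resource) -> expiry timestamp

    def grant(self, user, resource, ttl_seconds):
        """Grant access for a limited window, e.g. one change or incident."""
        self._grants[(user, resource)] = time.time() + ttl_seconds

    def allowed(self, user, resource):
        """Access exists only while an unexpired grant is present."""
        expiry = self._grants.get((user, resource))
        return expiry is not None and time.time() < expiry
```

A real system would add approval workflows and audit logging around the grant, but the shape is the same: the default is no access, and elevation is temporary and recorded.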

[0:15:03] Guy Podjarny: Right. Yeah. I think that's a very good point. It's not about reinventing the wheel if there's something good for you. It does mean you adapt it; you bridge gaps. You might even be seeking out different tools, because you work in that context. I like the cultural comment, or maybe the systems comment: if you embrace this approach, then oftentimes you need to rethink almost everything with your team. Maybe let's talk a little bit about that, starting with the people aspect. What is the difference in the people you would hire, or the skills they need to have, if you were to embrace this more technology-first approach?

[0:15:43] Sean Catlett: Well, it's going to be a little bit off-pattern, but I think it's really important. You have to have the expertise first. One thing I always talk about is humble experts. The people who are humble are the people who have gotten there because they've been there, done that. They've probably already gotten the scars from previous work, but they still have the expertise. Your team needs that to be credible when it interacts and engages with teams that are already potentially struggling with a number of security issues, business challenges, or scarcity of resources. Coming in to say, "Well, here's what you should do," with no context is actually really, really challenging. So I would start off with empathy, actually, which I think you gain.

[0:16:23] Guy Podjarny: What expertise are you referring to? Is it more the security domain expertise? Is it development expertise, DevOps and those types? What type of expertise are you seeking?

[0:16:33] Sean Catlett: Yeah. I think, first and foremost, the security expertise. Then, second to that, the engineering expertise, at that hands-on, capable level. If you think of your staff engineers, those are usually folks who have built entire environments, architected them, and can be really adaptable across a number of different domains as they provide advice and/or hands-on capability, to maybe design a solution and/or understand how a solution needs to be refactored, which is where your team comes in, obviously, with the credibility factor.

Ultimately, having engineering in your team, and that's why I keep returning to empathy, also teaches a bit of empathy to the rest of the team, because you learn the speed and pace; things are not overnight. You may have built a solution that you then have to refactor. You realise how much the team is asking of other teams when they say, "Hey, just patch this. Just fix this. Hey, just remove this dependency," and it's an entire refactor. You get to learn that and have a bit more empathy for the rest of the organisation.

[0:17:35] Guy Podjarny: I fully, fully agree. I guess, though, it sounds like on balance, if you had two candidates, one a bit better on the security side, one on the engineering side, and theoretically identical in other skills and preferences, you do still feel that security expertise needs to take priority as the role of the team, but there needs to be enough engineering competency to achieve the empathy. Am I reading that correctly?

[0:17:58] Sean Catlett: I mean, it's a tough question, because you think of as – 

[0:18:01] Guy Podjarny: Because it's theoretical. Yeah.

[0:18:02] Sean Catlett: Right. Well, you also think of it by domain and where that would be applied, right? For certain things, as I gave some previous examples, if I was just to take, and I don't know that this exists, a generic high-level engineer that doesn't have any security skills. I don't think those really exist. Fundamentally, to be that calibre and that level, you start thinking about security very early, especially people who have built and adapted massive-scale distributed systems. It's just not something you can just leave.

What I'm articulating is people who have built and been a part of that security side, that security thinking, when they need to both mentor others in the team, grow them and build them, because that's a core part of building up the organisation, and also be credible in what they're going to ask other teams to do. You need to make sure they have the security chops to do that, and also, fundamentally, to be able to create. Because ultimately, if it's just things picked out of a book, or a playbook, as you mentioned on compliance, that generally doesn't need a ton of, I'd say, high-level expertise.

It's when it comes into conflict with something the business wants to do, what your team needs to accomplish, a threat vector that you're trying to solve, and usually very quickly. That's where that expertise comes in: avoiding errors and making sure that things are recoverable.

[0:19:19] Guy Podjarny: Yeah. Right. I agree. I do think, oftentimes, the reality is that there are only so many unicorns who are great at this and great at that. Sometimes you need to assess the priority: what do you think you can more easily teach? You make a good point; it does come down to team composition and the gaps that you might have there. Okay, so those are the technical chops, and I think I already cut you off a little bit in between. What about their approach, culturally? You mentioned the humble expert, and I wouldn't say humility is the strongest trait of the security industry on average. Maybe even a bit of a contrast: if you were or weren't to embrace the engineering approach, are there different traits, or competencies, above and beyond the professional subject matter expertise?

[0:20:06] Sean Catlett: Absolutely. I think for me, that's something where I am continually challenged as a leader of teams, where you can over-rotate on a lot of the technical and deep architectural challenges you have, and miss some of the things that you could do and design at the policy level, at the legal and compliance level, or at the company-direction level. I think it's also really important to have the negotiation skills, the ability to be friendly and adaptable and open, which is actually one of the challenges; as you said, humility is a challenge. There are others around starting to build a walled-off silo, because some of it is just by design; it needs to happen, because you can't share everything, especially in a well-thought-out security program. There are just things that are not going to be published and available to everyone.

Finding the balance means finding folks who can build and engineer, and in many cases they're on a dedicated team that rotates people through. We can talk through the patterns that I've seen be successful there, where they're getting exposure to these other organisations and, again, developing that humility, or empathy, for what someone else is trying to accomplish. It's not "they" or "them" trying to do this thing to you, or to the security function; actually, these are requirements that the business is trying to achieve. Maybe they have business goals to approach a certain country, or maybe it's a new business type, and so those requirements come in.

Then they start thinking creatively. They think, okay, that's a challenge that I now take in and embrace, versus this thing that's really difficult for the organisation to deal with. Fundamentally, it's about getting people with those types of, I'd say, soft skills into the mix. Of course, yes, you'd love to have everyone be unicorns at all of these things. For me, it's about making sure that people are paired up around programs. That, to me, is the exciting part of being a leader of a function: you get to look across your team and think about what sort of mix you can apply to a certain problem, one that may be stretching somebody a bit, while also giving somebody else an opportunity to mentor them, or vice versa.

[0:22:10] Guy Podjarny: Yeah, I think it's hard to avoid sometimes the "it depends" and balance commentary on this, because it's hard to build the right organisation. I would love to dig a little bit into those patterns you mentioned, because I do think there's a question of transition, right? One of the questions for anybody looking to change their approach to security is that they have some people on the team who might actually have been very good at the previous ways of working. Sometimes you're just building up a team, and so you can hire people for the new destination. For existing people, sometimes you need to invest in building up the changed approach, or the different skills. So what are some examples of ways you've strengthened these, I think you mentioned empathy and others, in past locations?

[0:22:52] Sean Catlett: Yeah. I think one big one, just working from the fundamentals when saying you want to shift from one to another, is really understanding the way the business operates. I hate to say it that way, because it's actually something that is sometimes missed: people come in and they have such a narrow scope that they don't actually know, for example, how the business makes money, what the core and critical systems are that need to be protected, and how the tech stack actually operates to accomplish those goals.

They don't feel, in many cases, confident that they can step in and make recommendations. I think starting at a base level with a skills assessment for the team, understanding what skills you have; in many cases, you're pleasantly surprised to learn that someone may have more of the engineering or technical skills than you thought to apply to certain things; they just maybe weren't confident, or weren't put into that role. The other is sourcing and seeking people through internship programs and elsewhere, bringing in people who are not security domain experts but are maybe further along in the technical engineering journey, and building that mix up as you start the initial process of: okay, what recommendations should we make? What design documents should we be creating for certain capabilities? How would we move from A to B?

That's step one, stage one, of trying to migrate an organisation: from one that's less confident making recommendations and/or setting requirements for the engineering organisation outside the security team, and is looking more internally, to one you're starting to push a bit further outside.

Then with the leaders, it's really having them engage with their counterparts, whether they be in product teams, engineering functions, or infrastructure. That's something I've always really tried to do: push my leaders and myself to engage outside of our organisation, whether it be participating in their staff meetings, understanding their challenges, or at least participating in their quarterly or half-year planning, so that we can make sure we are there at the earliest and cheapest place to do security, which is on a whiteboard, or in design, where you can say, "Hey, that would really conflict with these things. Let's try something else."

[0:24:59] Guy Podjarny: Yeah, and I wonder if, in your mind, service orientation and the engineering approach are necessary couplings, or likely couplings. You could still be a forceful, not-invented-here security person: you could build a very strong software development capability within your security organisation, but still work in a very old-school, my-way-or-the-highway fashion that doesn't engage with the different businesses.

I think what we're saying here is that we're choosing the interpretation of engineering orientation to also be one that embraces the DevOps mindset, that embraces collaborative work and uses those engineering muscles to also adapt the security program to the business. That in turn requires better empathy to understand what that business is, whether it's the top-level requirement makers, or the real-world frontline execution of those requirements, like software development. Is that correct? Do you think the service orientation is part of the cultural change that is needed to make this succeed?

[0:26:11] Sean Catlett: Yeah. I'm only approaching it from the way that I see the world. What I look at is the impact that I would like to have, or that I want my team to have, on the ultimate business. I think building something that is fully integrated with the business and its outcomes just enhances that impact. You get more opportunities to anticipate, build something that maybe both the business and customers needed, and fix things rapidly. People are more apt to give you the early indications that something is wrong, versus finding it in a scan, and/or finding it after it's built and then going and assessing it, which is the natural challenge you have in a more siloed organisation.

Where others build it, and you either bolt security on or secure it afterwards, versus building it together. Ultimately, for me, it's about where you find the ability to have that impact. I like the concept of a service outcome, or a service-oriented organisation, because that's where things are going to be more effective and cheaper. It doesn't mean you don't also have dedicated, hardcore engineering in your team that can quickly build the outcomes the company needs. But I think you miss a major opportunity if you don't get the organisation, and it doesn't have to be a huge organisation, more integrated with the way the business is trying to operate, resolving risk faster.

[0:27:40] Guy Podjarny: Yeah, I think that makes sense. To me, the risk is less that someone who is already embracing this newer approach will not sufficiently invest in their own engineering capability; I don't think those conflict. It's easier to see it the other way around: taking on the technical change without the cultural change, and just building up a tech team.

We've been selling this a fair bit up until now. Let's talk about the downsides. What are the trade-offs? What does the business need to get comfortable with, sacrifice, or make an effort for to embrace this approach? And how do you convey that to the security stakeholders?

[0:28:12] Sean Catlett: The challenges, or downsides; I think there could ultimately be an over-rotation, where everything becomes a technical solution. This is feedback a lot of folks give, and I think it's unfair to paint with a broad brush of just the Bay Area, or the West Coast. But there's a business calculus on the West Coast, especially with the number of startups and the degree of innovation, around the risk trade-offs that are made, where, if you have an engineering team, every problem now looks like a nail to be hammered by engineering, versus some of it could be policy, and some of it could be a cultural change or adaptation.

That's really something to look out for, and then ultimately the human factors as well. Maybe the answer is a bit of training and enablement for the rest of the organisation, versus trying to pull everything into your own team, or into security, to be engineered; something to push outwards. That's something I don't think a lot of places are prepared for, because it goes in stair-step maturity cycles; it's not a smooth curve. As you adapt, you have some conflict, and you may have to pull back from certain things. Certain things are just not in a position to be changed to the degree that your newfound engineering prowess would let you say, “Let's go change this and tackle it head-on.”

Sometimes the things that need to be done are, as I always joke, in the janitorial section: end-of-life software needing to be refactored. It's not exciting to either you or customers, but it just has to be done. Those can be things that get laid on the team, where you then have to negotiate with your peers and partners to say, “How do we balance this?”, so that's not all the team is doing. It can feel, like I said, very janitorial: coming in and cleaning up environments where maybe things should have been cleaned up before. I think it's –

[0:30:07] Guy Podjarny: That's one, right? Actually, you already mentioned two. It's not always a technology solution, and, even under the new and shiny excitement of it, some of these are chores. A bunch of this is that you still need to eat your vegetables; you can't always invent some fancy way around that.

Though I guess that's less a trade-off and more a challenge, or a mistake people could make. I'm seeking trade-offs: things that were almost better before you took this approach, under the more compliance-y, control-y, data-focused approach.

[0:30:44] Sean Catlett: I mean, one trade-off is around waste: wasted effort, wasted change. In some cases, compliance can just say no; you don't try to go build something to do a thing. You just say, “We're not going to do that,” because the compliance regime, or the regulation, says so, and you don't try to find an engineering solution for it. There's an ROI around choosing wisely; the measure-twice-cut-once of it.

There's ultimately a trade-off in the way the organisation rotates between things that can be built versus policy and policy enablements; in many cases, what you take in from other stakeholders in legal and, as I said, the governance, risk and compliance space. I think more modern organisations are now waking up to actually making that an engineering task. This is something I'm very proud of what we pushed for at Slack, as I mentioned earlier, where you have a number of B2B customers and all of their compliance regimes as things you need to meet.

We were at the time moving from FedRAMP Moderate to FedRAMP High. That's a huge lift. Doing that right, to where we could meet the needs of the organisation, meant a heavily engineering-focused GRC component, versus just reading the list and going through the more checklist types of approaches. I would say that if you don't invest in something, there are always trade-offs; it is ultimately a zero-sum game. There's no perfect budget.

You're going to have some things where you need a cyclical process to re-review, and to really put effort into risk management, risk prioritisation, and GRC, in addition to building solutions and choosing which solutions you build. Ultimately, those have to be aligned to your biggest risks and/or biggest enablers. Getting that wrong can, as I said, lead to a lot of wasted effort, or huge systems that need to be supported post-delivery, because they're now embedded in the flow of your data, or where you interact with your customers.

[0:32:49] Guy Podjarny: Investment, I guess. Constant change is always there; it's always easiest to do it the way you've always done it, and making this change has its pitfalls, especially the over-rotation aspect. Either way, it's going to require investment upfront, which I guess is true for any place in which you're building. What about one typical pushback, on predictability, or risk tolerance, as it relates to compliance?

Maybe over the years it became a little better, but what has your experience been in terms of the assurances you can give auditors and compliance bodies when you take this approach, versus when you don't? Has it been easier or harder to pass regulatory audits, FedRAMP High among them, but even the simpler ones?

[0:33:40] Sean Catlett: For me, it's building that in, as I said earlier, by design. One of the core challenges is knowing that you will be audited. The things you build need the right ability to produce evidence, just like a solution you would expect someone else to provide to you as a product. Fundamentally, that's where a lot of things get tripped up. It's not as easy as a couple of people going and building the solution when it's going to interact.

As I said earlier, with the storage or transmission of customer data, or if it's going to interact in some way with a critical security subsystem, those are things where you need to show the right approaches, and that security is dogfooding what it has told everyone else to do. That's a cultural dynamic where, again, you learn that empathy side: realising that you need the scale to have the secondary reviews and the SDLC processes inside of security, applied by security, instead of something you just recommend, right?

You have to make sure all of those have gone through the same SDLC process you've designed, and probably the same automation steps you push out elsewhere, verified and validated inside the team. Those can honestly trip you up, because you're so used to everyone else following a procedure that you don't look at yourself as just another one of the engineering functions. That's something I was glad to have learned from folks at Reddit and Slack; that's a lot of how they treated the engineering functions we had. It's just another pod in their list.

We have the same requirements, the same lists, the same asset approaches as the rest of the organisation, even though we may have had different security models, because of how we had to segment or do something for our particular purposes.

[0:35:20] Guy Podjarny: Yeah, that's a very good point. I guess it's true, to an extent, of any software engineering that's handled outside the core R&D organisation. You see this with BizOps, and with the software processes, or sometimes lack thereof, that populate the software systems of the world.

We've managed to go through the majority of this episode without saying AI. I think it's time to fix that. To be frank, we've been talking about technology adoption, about a tech-forward approach to security. Let's dig in a little: how does AI play a role here? For starters, just from a skills perspective, how important is it for the security team to level up on their AI skills, to leverage AI? Then maybe we talk about the use of AI elsewhere in the department and its relationships.

[0:36:11] Sean Catlett: Yeah. I mean, I think AI is so broad, I have to start with which components, because AI has been a clear –

[0:36:18] Guy Podjarny: Right. Yeah. I'm talking about the AI revolution, the latest elements of it.

[0:36:21] Sean Catlett: Like the LLM world.

[0:36:24] Guy Podjarny: Yeah, the new and improved, the current flavour du jour of AI, which is the LLMs; notably, the ease of access to a generic, broadly applicable, and powerful AI engine, which is the LLMs, the GPTs of the world.

[0:36:38] Sean Catlett: First of all, I'll just say, I'm a huge proponent, mostly because of seeing both a personal and a professional impact, and the sheer proliferation of things that organisations and individuals can make use of. I'll flip between those two. The first is at the team-development and skills-development level: a number of different opportunities to learn, train, and exercise differently, and also things like coding copilots, with the ability to ingest information specific to your environment to provide better recommendations, human-in-the-loop, versus the deep reviews of the past.

I love some of the advanced tech around threat modelling and having some automation there: given just whiteboard drawings, infer what that application would be and then threat model it, which would just be outstanding. The flip side is organisations adopting this and having to look, for me, at things similar to what we had in the past: the aggregation of massive training data sets, which could be extremely sensitive from a corporate perspective for internal use, and the exposure of using external APIs. That's very similar to a lot of things we've previously dealt with, but the pressure to do them now is something I don't think we've seen since probably the early internet days: go build this connection with this organisation, because we need to transit data that quickly. Which is definitely dating me, but it's something we've all experienced in our long-time past.

This is super similar. Now it's about evaluating the business need and figuring out what the controls can be, whether they're controls you build on your side for things that could transit these APIs, or dedicated review, and then re-review, of their uses. Because that's the piece that is so exciting, but also challenging, for security organisations: you could open up an API, and that API could go from receiving some type of information over some period to, two weeks later, having very advanced, completely new capabilities that haven't been reassessed.
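The re-review trigger Catlett describes, an external AI API silently gaining capabilities that were never assessed, can be approached by fingerprinting whatever the vendor discloses about the integration and flagging a fresh security review whenever that fingerprint changes. A minimal sketch, with made-up attribute names rather than any particular vendor's API:

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Stable hash of the integration's recorded attributes
    (model version, scopes, accepted data types, and so on)."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def needs_rereview(assessed: dict, observed: dict) -> bool:
    """True when any attribute has changed since the last assessment."""
    return fingerprint(assessed) != fingerprint(observed)
```

The hard part in practice is populating `observed`: vendors don't always announce capability changes, so version strings, scopes, and changelog polling are all imperfect proxies.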

Ultimately, with this degree of rate of change, the team needs to be built to ingest it: the rate of change in what's happening, in the business requirements, and in the solutions being provided. Because ultimately, I do think the data leakage aspect can be dealt with in ways similar to a DLP program and a lot of the walls that get thrown up. But if that's all you do, you end up losing, because you're not actually able to go and make use of those solutions.
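The DLP-style control he alludes to, one that permits use rather than walling it off, could start as simply as scrubbing known-sensitive tokens from prompts before they cross the boundary. A toy sketch; real DLP programs use far richer classifiers than these illustrative regexes:

```python
import re

# Illustrative patterns only; substitute your organisation's real detectors.
SENSITIVE_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
    (re.compile(r"(?i)\b(?:api[_-]?key|secret)\s*[:=]\s*\S+"), "[REDACTED-CREDENTIAL]"),
]

def scrub(prompt: str) -> str:
    """Redact sensitive tokens before a prompt is sent to an external API."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt
```

A scrubber like this sits in the request path to the external API, which is exactly why the re-review point above matters: the control has to keep pace with what the API accepts.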

[0:39:14] Guy Podjarny: Yeah, that makes a lot of sense. It actually wasn't what I had in mind, but I think it's a super important point: you have to be on top of the technology. I think what you're saying is that marching order number one for the security team is to invest the time to understand this technology, because, like it or not, it's going to be embraced by the business.

And don't think that just because you took a crash course, you'll be okay. You're going to need to keep up the pace, because it's happening and it's changing. My thought was also more about AI as a tool for the security team, which you touched on at the beginning. You're temporarily not leading a security team, though maybe that changes by the time this episode airs. But if you were leading one, would you say, I need to dedicate a portion of my team to actually start building with AI, because it's that powerful? How do you weigh the pace of change and the security concerns against the power it might give a security team?

[0:40:21] Sean Catlett: I mean, honestly, you hit on one of the reasons that, as I said earlier, I've become such an advocate. Being between jobs at the moment, this is the ideal time for that to happen. The advancements are happening day by day, and if you broaden it beyond the LLM and text side to audio, visual, and voice, they're happening so broadly, literally multiple times per day, that in any team I join and lead, I would love to have this be a dedicated part of our research. I just don't think you'll otherwise be able to keep up with the pace, both for your own uses and for the business's. As I joked before about AI: ML was a similar thing in the teams I built in the past, and I don't know how you do threat detection without really strong machine learning.

For me, it's similar in that way: how do we expand the capabilities of a limited team and a limited budget through this automation? By learning it, adapting, and using it, you get better at it. Then, obviously, the security team is great at a lot of things, but one thing is usually breaking things. Taking that, embracing it, figuring it out, and then being able to offer that advice quickly, and not just the advice that is no. It's so easy to just say no; I've talked to a number of organisations where that has been their approach: “We're just not there yet. We're not comfortable with the technology.”

And you think, what is that actually going to cost them in the long run? Ultimately, there's a definite place for teams to be embracing this, staying knowledgeable, keeping up to date. But then, fundamentally, you're back to the first point I made, which is where my advocacy comes in. People need to step in and start using it, and it doesn't have to be super deep. I've been able to build applications with Copilot plus GPT that I probably would never have been able to build before, at least to the depth needed to accomplish some of the goals I've had personally. That exposure, the ability for people to tap into it, and the thinking it creates, as we talked about earlier, engineering thinking, thinking through design and architecture; that in and of itself becomes both security architecture training and security awareness, the why behind some of the things you're asking other teams to do on the engineering side.

[0:42:44] Guy Podjarny: Yeah, that's well said. And it comes back, indeed, to the empathy. You started off talking about the need to be in the shoes of a software engineer, having had some mileage there. Gen AI hasn't really given us the time to accumulate much mileage yet, but it's a very good point that investing in the muscle of creating with AI, or using it within the team, above and beyond the skills, the knowledge, and the security policies, also builds empathy within the team for the people building on the other side.

Maybe a little more tactical, but probably most immediate within the world of application security: with all due respect to gen AI, within applications at an established business it tends to be, except for a very select few, a small slice. On the flip side, things like Copilot, or coding assistants that generate code, are getting embraced at a much faster pace.

What's your view, or even just your gut-feel response to that? Which of the concerns it raises do you think are real, and which are not? And, related: how would you imagine this evolving? What should one prepare for?

[0:44:02] Sean Catlett: On concerns, the first piece: lowering a barrier to entry always sounds good, until you think of the opposite of it, which is the training and skill set required to identify issues, usually around security, scale, or just basic resiliency, given the complexity involved. Unless you're super modular and super tiny, this type of assistance is not, at scale, going to build whole distributed systems for you at the snap of a finger. And if it did, having that be ultimately secure is another matter, given what you learn about hallucinations and wholesale deletions between things; because it's really just completing with a set of tokens, not actually building up what you're trying to accomplish.

Ultimately, I think the learning that's required is to avoid overreliance. For me, it's about where people can use it to go into a bit of depth. Like I said on threat modelling, it's super interesting how it can help make sure you've expanded through all the different permutations that you might just forget, or that the isolation of the team's skill set might miss. Broadening to say, “Okay, the team focused here, because you have an expert in crypto and an expert in networking; they were very, very good here. But what about application design, or data storage, or some of these other capabilities? How are we going to make sure that happens?” That's where I feel it would be a tremendous assistance.

I don't think we're entering a phase, in the near term at least, of human-out-of-the-loop automation. I'm very intrigued by that space, though. To be honest, what's probably most interesting to me at the moment is the use of multiple LLMs trying to accomplish a task together and producing some workable output: the error correction, passing things to a model that's intended to look for problems, plus validation steps. I think we're pretty far off on that side. Then again, by the time this is published, I'll be completely wrong. It's funny –
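The multi-model, error-correcting pattern Catlett is intrigued by can be sketched generically: one model proposes, a second checks, and the loop retries on rejection, with a human still reviewing whatever survives. The generator and validator here are plain callables, stand-ins for real LLM calls:

```python
from typing import Callable

def generate_with_validation(
    generator: Callable[[str], str],
    validator: Callable[[str], bool],
    task: str,
    max_attempts: int = 3,
) -> str:
    """Propose-and-check loop: retry until the checker accepts the output.

    Anything returned is still subject to human review downstream.
    """
    for _ in range(max_attempts):
        candidate = generator(task)
        if validator(candidate):
            return candidate
    raise RuntimeError(f"no validated output for {task!r} "
                       f"after {max_attempts} attempts")
```

Swapping in actual model calls, plus richer validators (linters, tests, policy checks), is where the hard problems he mentions, error correction and validation, actually live.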

[0:46:07] Guy Podjarny: It only takes a few weeks, though. But I think that's probably still true. Today, it's very much about attended, human-in-the-loop, small bits of creation. I really liked your fundamental point: if it lowers the barrier to entry, that starts off exciting. Then you ask, well, if it lowered the bar, did it actually take on all the responsibilities of that lowered bar? At the moment, it does not.

[0:46:31] Sean Catlett: Yeah, and again, that comes back to system design. You know that now, so could you add in more assistance for the patterns and things your organisation needs people to follow as part of their solutions? It goes back to some of the old solutions, which had a lot of security checking happening in the IDE, specific to the checks your security team had actually implemented. We did that in our past.

Similar to that, but enabled with the augmented-AI type of thing, with Copilot. That starts to become super helpful, just so people avoid the completely avoidable mistake of it generating something that's completely off-pattern from what your organisation needs. But that requires training and fine-tuning that's not quite there yet, at least that I've seen, for delivering at that scale. But like I said, watch the space. I'm sure –
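The IDE-era checking he recalls, reapplied to AI-generated code, amounts to running the organisation's own rules over each suggestion before it lands. A sketch with illustrative, hypothetical rules; a real team would encode its actual approved patterns:

```python
import re

# Hypothetical "off-pattern" rules; substitute your organisation's real ones.
RULES = [
    (re.compile(r"\b(?:md5|sha1)\b", re.IGNORECASE),
     "weak hash; use the approved hashing helper"),
    (re.compile(r"verify\s*=\s*False"),
     "TLS certificate verification disabled"),
    (re.compile(r"(?i)password\s*=\s*['\"]"),
     "hardcoded credential"),
]

def check_snippet(code: str) -> list[str]:
    """Return one finding per violated rule; an empty list means clean."""
    return [message for pattern, message in RULES if pattern.search(code)]
```

Hooked into an editor or pre-commit step, this catches the "completely avoidable" off-pattern suggestions; the fine-tuning Catlett mentions would push the same rules upstream into the assistant itself.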

[0:47:23] Guy Podjarny: We'll see it. Yeah, indeed. Every gen AI-related comment right now has an expiry date of 20 minutes or so.

[0:47:29] Sean Catlett: Exactly. Yeah.

[0:47:31] Guy Podjarny: Sean, this has been excellent. Thanks a lot for all the great insights. Before I let you go, maybe one regular final question here that I've been asking, which is, if you could outsource one part of your job to AI, what would that be?

[0:47:42] Sean Catlett: Oh, wow. Honestly, just the calendar-maintenance Tetris we all experience, including keeping me up to date with what I'm meeting about and why. I've seen people post parts of it. This comes back to some of the things we talked about before, but if I could have that fully outsourced, so I could focus on the higher-order things I want to be focused on, being prepared versus chasing the meetings, then I would be all over that.

[0:48:09] Guy Podjarny: So good. Yeah, and that's actually a common pick among the answers I've gotten, which shows how we just want to do the job; we don't want all the overhead and surround sound around us. Noted. A good bunch of people are working on it, so maybe one of them –

[0:48:21] Sean Catlett: I hope so.

[0:48:21] Guy Podjarny: - eventually works out. Cool. Thanks again, Sean, for the great insights today. We'll be on the lookout to see how you engage with AI security, and maybe we'll have you back once you've built a team that has that muscle.

[0:48:33] Sean Catlett: Cool. I really appreciate it. Thank you so much for having me.

[0:48:36] Guy Podjarny: Thanks, everyone, for tuning in. I hope you join us for the next one.

[END OF INTERVIEW]

[0:48:44] ANNOUNCER: Thank you for listening to The Secure Developer. You will find other episodes and full transcripts on devseccon.com. We hope you enjoyed the episode. Don't forget to leave us a review on Apple iTunes, or Spotify and share the episode with others who may enjoy it and gain value from it. If you would like to recommend a guest, or topic, or share some feedback, you can find us on Twitter @DevSecCon and LinkedIn at The Secure Developer. See you in the next episode.