Season 5, Episode 68

DevSecCon Panel

Guests:
Clint Gibler
Douglas DePerry
Tash Norris
Jesse Endahl
Zane Lackey

Today’s episode of The Secure Developer features some fantastic content from a panel at DevSecCon London. Clint Gibler, Research Director at NCC Group, is joined by Doug DePerry, Director of Defense at Datadog, Tash Norris, Head of Product Security at Moonpig, Jesse Endahl, CSO at Fleetsmith, and Zane Lackey, CSO at Signal Sciences. The discussion begins with a dive into building a good security culture within a company and ways to get other members of an organization interested in security. Some of the strategies explored include cross-departmental relationship building, incentivizing conversations with the security team through swag and food, and embedding security within development teams. We then turn our attention to metrics. There are often competing priorities between developers and security, which can cause tension. The panel shares some of the security metrics that have and have not worked for them, and we also hear different takes on the often-divisive bug count metric. Next up is a dive into working with limited personnel and financial resources, one of the most common constraints security teams face. We hear how the panel approaches prioritization, adds value to the organization as a whole, and makes security capabilities digestible for developers. After this, the panel explores risk quantification and how to communicate it. While it’s difficult to quantify risk precisely, there are some effective strategies, such as risk forecasting. The panel also shares techniques for communicating with executives in ways that resonate and convey the severity of potential threats. Other topics covered include policy-driven versus technically driven security and skilling up less technical teams, how to know when security is ‘done,’ and incentives for upholding security protocols.


[00:01:20] Clint Gibler: We’ve heard about a lot of great things today and some good knowledge has been shared. What we’re going to do now is a raw, honest panel from people who’ve worked at a number of different companies saying, “Hey, this is what worked for us. This is what didn’t,” and we’re going to allow plenty of time for questions.

One thing that I wanted to mention at the beginning is we’re all friends. We get along really well, but we’re purposefully going to try to find some opportunities where we disagree about things, because I think that when everyone agrees, it’s not very interesting. Also, hopefully that will show us some of the edges around different concepts and principles: how the culture at a particular company, or specific technologies, may make something work in one case and not in another. 

First, let’s just go down the line and have people introduce themselves. 

[00:02:02] Doug DePerry: I’m Doug DePerry. I’m currently the Director of Defense at Datadog. Datadog is a cloud-based SaaS monitoring solution: application traces, logs, host metrics, all that sort of thing. We kind of present it to the user in a single pane of glass, providing unmatched visibility across your entire install base. 

I’ve been with Datadog for about three years. I’ve worn many hats there. I was most recently Director of Product Security, and I did a lot of infrastructure security work there as well. Prior to that, I worked for various consulting firms doing all kinds of offensive security type of stuff. These days I’m soundly in the defensive category.

[00:02:35] Jesse Endahl: Hi everyone. My name is Jesse Endahl. I am not the CEO. I’m the Chief Security Officer of a company called Fleetsmith. We’re basically a modern Apple device management SaaS product. If you’re familiar with that space, we’re kind of a newer entry. The product does a bunch of things. Probably one of the coolest things we do is we automate the new hire device setup process, but that’s kind of just the coolest part. The part that I find even cooler is actually the ongoing like patch management stuff that we completely automate, which is very unique. Before Fleetsmith, I was at Dropbox as an infrastructure security engineer. I did a bunch of different projects there, monitoring, detection, log aggregation, alerting, a bunch of stuff. That’s my background. 

[00:03:18] Zane Lackey: Zane Lackey. I started out in security consulting and pen testing. I was at iSEC Partners and then NCC Group for a long time. After that, I was given a unique opportunity to go be the first CISO at Etsy about 10 years ago, when it was Etsy and Netflix pioneering a lot of what we now call DevOps and cloud. I got to be one of the first CISOs to live through that shift to DevOps and cloud and build application security, infrastructure security, and security engineering groups around that. 

And then we took a bunch of those lessons learned when we needed a more modern way to defend our applications and our APIs and everything there. We got very fed up with legacy web app firewalls, and we took our lessons learned and turned that into Signal Sciences, which is where I am today, and which the marketing folks would call a next-gen WAF and [inaudible 00:04:04]. So protecting apps and protecting APIs.

[00:04:08] Tash Norris: I’m Tash, and I am Head of Product Security for Moonpig. If there’s one thing you could take away from today, it’s to have that awful chime in your head for the rest of this session. I’m previously from financial services, so I came from about five and a half years in financial services and then moved to a much smaller, more agile environment, which I love, and I would solidly put myself in that blue team camp, like Doug.

[00:04:32] Clint Gibler: Okay, great. Just lastly, a little about me. I’m a technical director and research director at NCC Group. We do pen testing of all the things: web app, mobile, network, as well as hardware reviews and crypto reviews. Me in particular, I’ve been focusing on helping companies scale their security, whether that’s building AppSec programs or integrating static analysis and dynamic analysis, and just trying to have systematic, scalable security wins. That’s something I’m very excited about. 

One thing I just wanted to quickly mention: there are so many conferences these days with so many great talks, it’s sort of hard to keep up. So, I started a free newsletter. It’s linked at the bottom. It’s called tl;dr sec, and basically, once a week-ish, I’ll say, “Hey, here’s a cool talk you may not have heard of and a detailed summary of it. Here are some cool tools,” things like that. If you want to check it out, feel free. 

Okay, great. We’re going to start off with some specific prompts that we think are useful and interesting and then we’re going to allocate a bunch of time for your questions. Start noodling about them now, otherwise we’ll just awkwardly stare at you and make eye contact with everyone one at a time. 

One thing that we’ve heard from a couple of different talks today is the importance of security culture. How do you build a good security culture, and how do you get developers and management to care about security, as well as avoid the obvious downsides of having a bad culture? One thing I’m curious about is, can anyone provide some specific, actionable things that they did that measurably improved the security culture in their organization?

[00:05:57] Tash Norris: One of the things that we focus on a lot comes back to core education and capturing people’s interest. A thing that we see a lot, which I imagine a lot of people do, is phishing attempts: people pretending to be CFOs or CEOs, “I must update my bank details now, but don’t call me,” those types of things. We do a lot around infographics, short, sweet pieces of content just to remind people of key security terms. 

They’ve been really good to start to build those foundations up, but the biggest piece for us is around building relationships with teams, and not just engineering teams, but marketing teams, HR teams, and just being present. So, making sure that they know who the security team are and that they feel like they can talk to us. Being accessible has been really important, and building those relationships up has then helped us to start to make security just part of who we are and what we do rather than something that’s added on at the end. That’s helped us make sure that we’re embedded within the operating principles for our engineering teams and within their objectives as well, and that’s allowed us to start to expand more and more into how we do the kind of sexier, cooler stuff around culture and innovation to increase that security awareness.

[00:07:06] Clint Gibler: You mentioned building those relationships with various teams. I totally agree. Things I’ve heard from a number of companies include just grabbing lunch with different teams, or one technique someone mentioned, maybe Zane or Doug: just show up to work with a bunch of cupcakes and invite people to come by the security team’s desks. Then as they’re talking, they say, “Oh, hey, by the way, there was this question I had for you,” because they’re nearby, because they came for cupcakes.

[00:07:30] Tash Norris: Yeah, you can’t underestimate the power of food, right? That’s the easiest way to get me along to something. Yeah, food has been a huge part of it. I’d like to say that I’m actually quite persistent in making sure that I’m included, but not in a bad way. One thing I try to be really cognizant of: I’m still quite early on in my journey with Moonpig. I’ve been working at their parent company for about a year. One thing I’ve been really cognizant of not doing is creating loads of brand-new meetings or forums that people have to go to as soon as I land. 

What I’ve been doing is going along to existing forums and guilds and just being present, absorbing what they’re doing and saying, “Hey, there’s another way to do that. Have you thought about this?” rather than “no” and “stop.” So just being present, turning up to things that they already do, and being a part of their team. Sometimes that means doing things that I wouldn’t say are part of my core role, but just being an extra cog and being supportive has helped them to realize, “Oh, we can talk to Tash, and she’s there.” You joke about cupcakes, but on my first day at Moonpig, I did bring a lot of donuts and I was like, “I’m Tash. This is my first day. Come and say hi,” and that worked really well to introduce myself to people.

[00:08:36] Doug DePerry: Zane’s been luring developers to his desk for like 10 years now with candy, I think. You’ve got a whole thing about this. 

[00:08:42] Zane Lackey: Two of the dumbest things that we did at Etsy ended up being wildly successful. We would genuinely set a big bowl of candy out in an area that was on one of the main engineering floors, so that whenever engineers would walk by, folks would just grab some candy, because it was out. Inevitably, what that would lead to is folks grabbing some candy and just saying hello, and then the magic sentence would start, which is, “Oh, hey. While I’m here, actually, we have a question about something.” That bowl of candy paid for itself a thousand times over the minute that you heard that sentence, because it was a conversation with security that was otherwise, in reality, almost never going to happen. Incentivizing interaction with the security team, which is a small subset of the overall culture question. 

But the other one that we did, which we kind of rolled our eyes at when we did it because we thought it was just going to be a dumb thing, and it ended up being super successful, is we made different swag for the security team: T-shirts and stuff like that. We would give them out to anyone who came by and asked a question about security, whether that was reporting a phishing email to us, or a lead architect asking, “Hey, can you have a quick look at this service that we’re actually starting to think about doing,” or anything like that.

And we’d give it to the people that asked those questions. They would, inevitably, within the next day or few weeks or so wear that or have a coffee mug or anything like that, and their teams would then ask them like, “Oh! Where did you get the shirt from?” They’d say, “Oh! Because the security team gave it to me because I asked a question.” They would basically do 90% of the branding and cultural awareness work for us in that way internally, because then we’d get a bunch of folks who are like, “Oh! Well, I want a coffee mug or a T-shirt,” or “Hey, there’s candy there anyway. So, I got nothing to lose.” And go engage with the security team. 

We started to think about the interaction piece of culture in the sense that, if you’re building a security team or scaling a security team, you’re basically running a company inside a company, and you need to be thinking about how you’re doing marketing and outreach to your customers, which is the rest of the business. 

[00:10:45] Doug DePerry: I totally agree with that. I did a very similar thing at Datadog. I branded the team, right? I had a logo made up and worked with the art design team. We did koozies. We did two kinds of stickers. We did T-shirts, and people loved it. It went over great. It started more as a thing from Yahoo; I used to work there, and their security team was branded ‘The Paranoids’. It was really helpful to have that brand, especially at a company that was so large and so widespread, and I kind of just took that with me. Everything that you send out, every presentation you make internally, you have your logo, you have your brand. We have a bunch of Slack emojis for Safety Bits; Bits is the Datadog mascot. We took Bits and put him inside a lifesaving ring, and that kind of shows that we’re the lifeguards. We’re here to have fun, but help you when you’re drowning, sort of thing. 

The other thing that worked really well for us is embedding security engineers with development teams. The roles and the time periods varied. For example, someone I used to work with at iSEC Partners, when he came on board, I just put him directly with our web development team, and he got to know everybody on the team. He sat with them. Even if they didn’t ask him any questions on a given day, every time they walked in to work, they’d walk right by his desk and they would go, “Oh, there’s the security guy. I know where to go if I have any problems.” 

We’ve also done temporary things where we would embed someone for a period of two weeks or a month, and they would either fix a few security bugs for the team or just work on a particularly thorny situation. But just being there, being visible, being aware. Then they create relationships with those groups, and now you’ve got that kind of bond for life, so to speak. Those are some of the things that were successful for us. 

[00:12:15] Jesse Endahl: I have an interesting one. Your comment actually about the donuts reminded me of this oddly enough, because I think what’s so powerful about that is that it was a very foundational, kind of first impression moment. I guess on a similar note, I think security onboarding is an overlooked place where you can really set the tone and foundation for that relationship between every employee that gets hired at the company and the security org. 

I think that’s probably the most important thing that should be in a security onboarding. I mean, yes, of course, cover the phishing attacks and all that stuff. But culturally, it’s so powerful if you reset people’s expectations, because oftentimes people come into a company having had negative experiences with other security orgs that have maybe a more toxic kind of culture. Telling them, “Hey, look. The way that we view our role at this company is to help you achieve your business goals and accelerate your work, not be a blocker,” people are like, “Wait. What?” Then if you follow through on that and you actually do help people with their architecture decisions, with code reviews, whatever it may be, that trust will be built up and it will really snowball in a positive way over time. That’s what we’ve done.  

[00:13:22] Clint Gibler: Actually, along those lines, I’ve heard from a couple of other companies about the dual value of onboarding. It’s, “Yeah, here are some security things,” but also setting the tone, and also just, “Here are our faces. We’re your friends.” When you have security questions, here’s who you can come to, and rather than this abstract, scary security org, it’s, “Oh, I know Josie, or Alex, or whoever. They gave my onboarding session and they were super chill. I should go chat with them.” 

[00:13:47] Jesse Endahl: Also, saying explicitly, “We will never make you feel bad. If you ask us a question about security, we will thank you 100% literally every single time. There is no such thing as a dumb security question.” 

[00:13:58] Clint Gibler: Definitely. Then just, finally, some other things that I’ve heard: running internal CTFs, especially if you can choose a vulnerable application that uses the same tech stack. If you’re a Node.js shop, use something like NodeGoat, or you can even build a mini-CTF that uses some of the same company-specific frameworks you use, and maybe use pen test or bug bounty examples to say, “Look for bugs in this application. By the way, it’s almost exactly like ours, and it’s even based on bugs we’ve had in the past.” Developers tend to love that. It’s a bit more work, but I think the ROI can be super high. 

Another company, this is Segment, has an internal leaderboard for security, where basically they can dole out points: “Hey, you proactively found a bug, or you fixed it, or you really went above and beyond building secure defaults into this microservice,” and basically anyone in the company can see where their ranking is. Participating in a CTF gets you some points. I think the CTO or one of the senior leaders is very active in it, so there’s some competition among senior people, and having senior leadership really buy into it is very powerful as well.

Cool. Something a couple of talks have mentioned today is the competing priorities of security and developers. Security wants things to be safe. Developers obviously need to get new features to market to hit business goals and business requirements. So there can be some differences in incentives there, and I think we often think about metrics and KPIs, or OKRs, objectives and key results, and things like that. In terms of security metrics, I’m curious, what are some good ones and what are some bad ones? Because I think it’s subtle. 

[00:15:33] Tash Norris: I’ve worked across a few different companies and worked quite closely with the community; London has got a great security community, and if you’re not engaged, I’d encourage you to engage with your peers, because there’s a lot of knowledge sharing that happens. So, you get to hear what some people are doing for their metrics, and it’s really interesting. 

My pet peeve is people who want to report on the number of bugs found, as in, “Hey, we found zero vulnerabilities this week. That’s amazing!” Because it encourages the wrong behavior, right? You don’t report, or you wait until the next week, or you over-report, whatever it is. What they don’t report on, which I think is probably more powerful, is [inaudible 00:16:08], like the engagement that we have with engineering teams. 

A thing that I think is really easy for us to forget as security people is that if your company doesn’t make money, then you won’t have a job. Ultimately, the bottom line –

[00:16:21] Doug DePerry: Blasphemy.

[00:16:23] Tash Norris: Yeah. The bottom line is that you do have a common goal. It’s just that your customer is your engineering teams, and their customer is the end customer. That means that you have to work together, and it’s so important. People push this idea of having security in engineers’ objectives, and I get that. 

But what you don’t want to do is start saying, “Okay, we’ll reward you if you report 50 bugs this week,” or whatever, because that just drives me mental. Actually, it should be about how quick we are to fix things. Are we fixing the right things, and are we aware of what we should be fixing, versus is it actually the right thing to accept something or to have a conversation around prioritization?

[00:17:04] Zane Lackey: I’ll jump in on that too. I’m with you; I can talk for hours on this one alone. I think one of the single worst metrics that we came up with as an industry is bug count. I think it will lie to you every time, because you don’t know if you’re getting better at bug discovery. You don’t know if the actual bug count is going down. As new technologies come in, are you missing whole vuln classes? Are you actually detecting new vuln classes? That metric will lie to you every time. 

We did a bunch of experimentation on different metrics. There are always going to be ones that are unique to your business, but the ones that we netted out on as being very useful, not only across a big chunk of Etsy in particular but across all the different business units and acquisitions as well: ‘time to remediation’ is certainly one. 

Another one that was very useful for us was just average engagement, just tracking engagement from particular teams. How often do different product teams or DevOps teams proactively engage with security? Because you’ll notice, once you start tracking that, a whole bunch of those teams are at zero. Those are the ones to go work on to get their engagement higher. 

The other ones that we did, there are two more. One gets a bit fuzzy at a high level, but it’s basically ‘time to detect’ or ‘likelihood of detection’ for exploitation of a given vulnerability. If it was a SQL injection, how likely were we to detect someone actually exploiting that SQL injection in the API it was in? That’s usually a very low number at first.  

[00:18:33] Clint Gibler: Quickly. What do you mean by how likely? How do you measure how likely? 

[00:18:36] Zane Lackey: Yeah, it gets really fuzzy. I realized I tossed that one out there, and that’s like an hour-long talk and I’m –

[00:18:41] Doug DePerry: Yeah [inaudible 00:18:42].

[00:18:43] Zane Lackey: Yeah.  

[00:18:44] Clint Gibler: You can give us the hour version at the party, but do you have like one or two sentence version? 

[00:18:49] Zane Lackey: Yeah. Usually when you start with it, it’s kind of binary: “Would we have been able to detect this in logs or in any other system that we had?” Then, “Was there anything actually proactively alerting off of that?” Usually the answer is no, and very much no, when you first start tracking that, and then you start to actually build up from there. 

The final one, which is the number of times you say no, just to wrap it up. Track how often you try to block something or say no as a security organization, and you want that to trend down a bunch over time, for a variety of reasons.  
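
For illustration only, here is a minimal sketch of how a team might compute a couple of the metrics Zane mentions from exported ticket data; the data structure and field names below are invented for the example, not taken from the panel or any real tracker.

```python
from datetime import datetime
from statistics import median
from collections import Counter

# Hypothetical export from an issue tracker; field names are invented for this sketch.
tickets = [
    {"team": "payments", "opened": "2019-01-10", "fixed": "2019-01-24", "proactive": True,  "blocked": False},
    {"team": "payments", "opened": "2019-02-02", "fixed": "2019-03-01", "proactive": False, "blocked": True},
    {"team": "search",   "opened": "2019-02-14", "fixed": "2019-02-20", "proactive": False, "blocked": False},
]

def days_between(start, end):
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

# Time to remediation: median days from report to fix.
print("median days to remediate:", median(days_between(t["opened"], t["fixed"]) for t in tickets))

# Engagement: how often each team proactively came to security (teams at zero are the ones to work on).
print("proactive engagements per team:", Counter(t["team"] for t in tickets if t["proactive"]))

# "Times we said no": ideally this trends down over time.
print("times security blocked something:", sum(t["blocked"] for t in tickets))
```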

[00:19:20] Tash Norris: I used to joke that I would give myself foot stomps a year and I would save them all for WordPress. 

[00:19:26] Zane Lackey: Yeah. 

[00:19:27] Tash Norris: But the likelihood to detect, that is a really interesting one, because one thing we’ve been starting to think about is, when we do detect it, and I’m sorry to use the phrase ‘shifting left’, but how far right were we when we detected it? Should we have detected it earlier, and then bring that into your likelihood: could we have detected it earlier, and if so, how would we have done that? That’s something that’s been really interesting to explore as well.

[00:19:51] Doug DePerry: Totally. I want to take it back to raw bug count and tell you why you’re both wrong. This is going to be good, because I’m going to weasel out of it. I do agree that raw bug count, you’re both actually exactly right on that. It does kind of incentivize the wrong things. But I do think that you can glean signal from the types of bugs, where those bugs are found, categories, severities, and stuff like that. 

I did red team work at Yahoo and I also ran the bug bounty program at Yahoo, and one of the things that I did was collect all the metrics that we had and kind of like combined the bug bounty metrics with the Jira ticket metrics and smashed them altogether. From there, I could determine how many bugs and what type of bugs were being reported on what properties. How much did Yahoo Mail cost us last quarter?

You can look at all those types of bugs and where they’re occurring and then target those areas. There’s an awful lot of XSS over here. Why is that? Oh, they’re not using a framework, or they could be using it wrong, or there’s a new development team that came in. They might need some more targeted education or something like that. I agree that raw bug count is bad, but I think you can still gain some signal from it to help you, at least in the very beginning.  

[00:20:54] Clint Gibler: Going off bug counting, there’s actually a neat talk from Arkadiy, a security engineer at Airbnb, from one or two years ago at BSidesSF. It’s called “A Data-Driven Bug Bounty,” but I think it applies to any bug regardless of how you find it. The idea is, say you keep track of security bugs in Jira. You give each one a security label, and you also record what type of vulnerability it is. Is it access control, XSS, and so forth? How was it found? Did you find it via SAST, DAST, bug bounty, pen test? How severe is it? Is it a critical issue versus low severity? A couple of different meta attributes, maybe how it was fixed or the root cause, things like that. 

Once you do that systematically over, say, six months or a year or two years, you can start saying, “Hey, every critical issue we found has been through this one tool or through pen testing; maybe we double down on that.” Versus, “We’re spending a hundred grand every year on this tool that’s found one low. Why are we doing that?” Or maybe having a security engineer do a manual code audit of our most critical services has given us a much higher ROI.

Or you’re like, “Hey, we systemically have XSS in these four microservices. Well, what languages do they use? Oh, they’re all using the same one. Maybe let’s build a secure wrapper library to systematically kill this class of bugs.” 

I totally agree that measuring only bug counts can incentivize the wrong behavior, but if you keep track of various meta information about each bug, it lets you make data-driven decisions, which then allows you to prove to yourself, as well as management, “We’re spending our time on things that are impactful and make a difference,” because you’re measuring things. So, you can say, “Oh, we just spent a quarter making this better, and look, we don’t have this issue anymore,” or it’s 30% less, rather than trying to do 100 things and then being like, “Oh, I kind of feel better, maybe, I don’t know.” Good for you, and good for management too.
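
As a rough sketch of the kind of analysis Clint is describing, assuming security tickets have already been tagged with a vulnerability class, a discovery source, and a severity; the data structure below is hypothetical, not Airbnb’s or anyone else’s actual schema.

```python
from collections import Counter

# Hypothetical security tickets tagged with metadata, as described above.
bugs = [
    {"class": "xss",            "found_by": "bug bounty", "severity": "critical", "service": "web"},
    {"class": "xss",            "found_by": "pen test",   "severity": "high",     "service": "web"},
    {"class": "access control", "found_by": "sast",       "severity": "low",      "service": "api"},
    {"class": "sqli",           "found_by": "pen test",   "severity": "critical", "service": "api"},
]

# Which discovery sources actually surface the critical issues? Double down there.
critical_sources = Counter(b["found_by"] for b in bugs if b["severity"] == "critical")
print("critical issues by source:", critical_sources)

# Which vulnerability classes cluster in which services? Repeat offenders are
# candidates for a secure-by-default wrapper library or targeted training.
clusters = Counter((b["service"], b["class"]) for b in bugs)
print("bug classes by service:", clusters)
```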

[00:22:38] Jesse Endahl: I think what could be really interesting, once you track the different categories that you find from bug bounties, pen tests, all these different sources, would be to go back and reevaluate the design-phase decisions that you originally made, because those tend to be made one time and then never revisited. Otherwise, you can end up playing whack-a-mole, fixing all these bugs that would have been solved if you just redesigned that surface, right? And I don’t necessarily mean just a thoughtful refactor; go back to the design phase and do it again. I haven’t really seen that as a model anywhere. 

[00:23:10] Doug DePerry: And learn from it for the future, or actually try to redesign that? Sort of change stuff – 

[00:23:13] Jesse Endahl: Actually try to redesign it. Then, if you want to be really data-driven, maybe set some thresholds where you say, “If we start seeing a number of critical bugs in any category, that should trigger a re-review of the design,” or something like that.  

[00:23:28] Tash Norris: Do you worry that they would specifically hide, or maybe not hide, but omit information that might lead to finding critical bugs so that they don’t have to go through redesign before product release? 

[00:23:40] Jesse Endahl: Maybe, but I think it would depend on how [inaudible 00:23:42]. I think if you frame it like, “Hey, do you like your sprints being interrupted fixing two or three bugs every two or three sprints, or would you rather just fix this once and then never have to deal with it again?” I think that might help with that, maybe.  

[00:23:55] Clint Gibler: Yeah, I like that idea a lot. I guess to continue with our last little bit of time: let’s say you have some systemic trends in terms of a vulnerability class, or maybe all your bugs are in Java applications whereas your Ruby ones are fine, or something like that. We’ve done projects like this with a couple of companies, which is why I’m talking about it, basically looking at a specific vulnerability class or type of issue and thinking, how can we either prevent this categorically, or how can we build tooling to detect it, whether in source code, in a running system, or via monitoring and logs, systematically? How can we either crush this and prevent it from happening, or find these issues with high signal and low noise, ideally? Because triaging obviously requires precious AppSec time. 

Okay. So, we’re going to do one more prompt. If you don’t have questions yet, please think of some. Otherwise, we’ll just all sit in silence and ponder the meaning of life. Speaking of not having enough time, I think one thing all of us probably have in common is that we have not enough people, not enough time, and probably not enough budget. There are near-infinite things you could work on. How do you prioritize what to do given limited time and limited money? 

[00:25:04] Tash Norris: It’s a great question. 

[00:25:05] Doug DePerry: Yeah. That’s the real trick.  

[00:25:09] Zane Lackey: I think, from a strategic view, what you focus on most, and the only way that we scale and survive the shift into cloud, DevOps, digital transformation, DevSecOps, whatever you want to call it, is delivering security capabilities in a way that the development teams and DevOps teams can consume them themselves. Because I think at the very root of that question, the pressure behind it is the fact that velocity is increasing exponentially. 

I mean, for the last 20, 30 years, the entire security industry has been built on this kind of gatekeeper blocker model that tries to review everything before a change gets made or code gets deployed, and that just doesn’t scale. It just falls over completely. 

My favorite story actually on that, which is horrifying to security people and hilarious to development and DevOps folks, was I met about a year ago for coffee with a CISO of a Fortune 100 that we all know very well. 

And the CISO, I asked him, “Hey, how is your shift to DevOps and cloud and all of that? Curious, your take on it. How is that going for you?” He point blank looked at me and said, “Oh! I’m not allowing that to happen here. It’s not secure. I don’t believe in it. It’s not going to happen here.” Okay. That’s a bold career choice would usually be the term.  

I went for coffee with the CIO immediately afterwards and I asked him, “How do you think about cloud and DevOps and everything?” He said, “Oh, that’s a little different. We’ve got 50 apps in the cloud today. We’re going to have 200 by the end of the year, and we’re going to have a few thousand next year.” I looked at him and said, “Oh, have you talked to the CISO?” “Yeah. We just don’t tell them anymore. They just try to say no, and so they’re at the kids’ table and the adults are making decisions over here. They’re welcome back anytime, but we’re going to make our own security decisions and we’re pushing forward.” 

The reason I tell that story is not just because it deeply amuses me. I think it’s the perfect example of where we are as an industry right now, which is that if security tries to hold on to the old model of being the blocker and the gatekeeper, it just gets routed around. Back to the question and the root of it, and I hope I bought everyone some time: how does security stop functioning like that and focus its highest-priority efforts on bringing the capabilities it used to deliver in a highly specialized way to where the development teams and the DevOps teams can actually consume those security capabilities themselves? Because it’s the only way we scale, and that has to be the highest priority.

[00:27:29] Tash Norris: One thing that I’ve learned from security vendors, and I say this with love, is that whilst they can often be consistent in their communication, they do often go out to their customers for their roadmap, and I think that’s something that security teams miss. When you’re thinking about your roadmap and everything you’ve got to do, it’s really easy to think about all the things that you want to do and feel you need to do. It’s all very well providing all these capabilities, but if they don’t provide value to engineers, then the perception is going to be that you didn’t provide any value. 

At the moment, as I said, I’m Head of Product Security, which is a great title, but I am a team of one and I’m one of the first security hires for Moonpig, because there’s some separation happening there. So, there’s a lot to do, and there are a lot of moments where I’m like, “There’s so much I could do at once. What do I go after?” 

The first thing I’ve done is think about what value I can provide to the business immediately, something that has an immediate return on investment and helps to increase our value. Then the other, and this is a spoiler and a shameless plug for my talk tomorrow, is that I’ve started to create my Moonpig security Ponzi scheme, which is basically a pyramid scheme for security teams. The idea is that I don’t scale, and as much as we are hiring and trying to do all these great things, what I need is everyone to do security for me, which makes my job easier. 

It’s about those little incremental pieces of value that I can deliver whilst building my own roadmap of the things that I think are important as an SME, but then also making sure that I’m speaking to product owners and getting ideas of what they think is valuable, because it’s all very well me coming back to them in six months’ time and saying, “I’m ready to engage now.” Then they say, “Well, we’ve already put 200 apps in the cloud. So, you’re too late.” 

You have to have that conversation, and then that allows you to shape what you need to go after. For me, it’s been really important, one, not to solely go after what they want me to go after, but, two, to be ready to compromise on what I think is impactful versus what the business needs, while still being mindful of regulations, not becoming the next Equifax or whoever, and still maintaining my integrity and the integrity of the business. 

[00:29:36] Jesse Endahl: The best thing I’ve heard on this actually, and I swear we didn’t talk about this beforehand, is one of Zane’s talks from years ago called Attack-Driven Defense, and it’s a really simple concept. For some weird reason, most of the security industry doesn’t follow it. You look at what real-world attack activity looks like and you look at the attack chain. What are each of the steps that enabled that attack to occur? Which ones are happening at scale? There’s actually a list. It’s a known thing. There’s a great report that Verizon puts out every year, the DBIR. If you aren’t familiar, check it out. It’s amazing. I think it’s about 40 pages, but it’s very digestible, with a lot of beautiful charts and graphs. That really is a pretty clear roadmap for what should be top of mind, because that’s literally how companies are getting hacked right now. I think that’s probably the number one thing that I would personally follow.

[00:30:24] Doug DePerry: Yeah. We do the same thing, right? You need that visibility to understand not only if and how you’re being attacked, but also to understand the gaps in your defenses that you recognize as a professional and as someone who understands the environment that you’re operating in. 

The prioritization thing is the top thing I complain about. It’s one of the hardest things that I have to do. There are just competing priorities. There’s the business. There’s the CTO who had an idea. There’s a VP who had an idea, or read an article or something, and that doesn’t mean it’s invalid, right? Then there’s the current state of the industry, or what is being attacked and when. I try to think about the priorities every single day. 

Not as far as setting a timer once an hour, but it feels like that sometimes. I think about reshuffling the priorities, but I really try to focus on, honestly, the basics and where you’ll be attacked first. The odds of you being hacked by [inaudible 00:31:16] or a very targeted attack are actually pretty low, and it’s kind of hard to prove that. It’s hard to prove attack likelihood.

I mean, most of the stuff that you read about is open S3 buckets or a system that’s not patched, and patching is hard. At any kind of scale, patching is hard. Understanding the inventory of your assets is difficult. You’ve got 100 servers? All right, that’s fine, easy. You have 10,000 or 20,000? Patching is hard. That’s just one of the things I’d call out. That, and vuln management is difficult as well. Knowing about a vulnerability, not yet doing something about it, and then having that be the cause of an issue is just inexcusable for me, but that’s the state of the industry.  

[00:31:56] Tash Norris: That’s common, right? If you go to any delivery team, I bet, in any company, and you run a threat modeling session, or just an AMA on security or whatever, there’s always going to be someone who says, “Oh yeah, but we all know that there’s X,” and you’ll be like, “What?”

But that’s common, and you’re absolutely right, in that there are so many teams that don’t know how to raise a vulnerability, how to tell you there’s this thing, or how to go about it. And this is really key. I know we’ve all mentioned that you should be able to talk to your security teams, but in many companies, there’s still a stigma around an engineer saying there’s this big vulnerability, and they feel like they’re going to get told off or shamed or whatever, because they’re trying to tell you there’s something wrong with their application. 

I’ve also seen recently, and I don’t know if this is something that other people have seen, but over the last couple of years, I’ve seen teams almost be swayed by product owners to not talk about vulnerabilities because they want to push for features. I see some laughs, so it sounds like I’m not by myself here. 

But it’s just really interesting how this push for features, and ultimately that bottom-line return on investment at a company level, means we just kind of sweep over the fact that we’re maybe, I don’t know, in one AZ in AWS, or we’re all running off one EC2 instance, or whatever it is. I just find it really interesting that, as a product owner, the push on them is features, because ultimately, if you look at a lot of companies’ objectives, it’s increase purchases of X by X or whatever. So, there’s that push on features and not on optimization or resiliency and security. 

[00:33:33] Doug DePerry: It’s almost like there’s an implied quarterly goal that says, “Don’t get hacked,” right? It’s kind of implied. They all want the security, but they don’t necessarily want to invest all the time. I think, again, I’ll air-quote ‘shift left’: finding the vulnerabilities earlier, or preventing them from being created in the first place, is important just because it prevents this tech debt of security bugs from piling up to the point where they have to stop what they’re doing and concentrate on fixing a bunch of bugs rather than shipping features. As you said, if the business isn’t making money, we don’t have a job. It’s hard when it gets to that point. It’s a huge tradeoff at that point. I think that’s why vuln management is really difficult.

[00:34:09] Clint Gibler: We are open for questions. Please be loud. Unfortunately, we’re using all the mics. 

[00:34:15] Audience Member: [Inaudible 00:34:15]

[00:34:59] Clint Gibler: Have we made any attempts to quantify risk? If so, how do you do that effectively and then how do you communicate that to, say, upper management? That’s an excellent question. I love it.  

[00:35:10] Doug DePerry: Yeah. That is a really good question. I’ll start, and it’s not a great answer. I agree, it’s important to me, and I think a lot of the stuff that Ryan McGeehan has blogged about in the past about forecasting is really important. We do OKRs, objectives and key results, these two or three goals for the quarter, and one of our OKRs for this quarter is to really dig into that. We’ve dabbled in the past, but you have to kind of train yourself; you have to learn how to forecast before you can start forecasting things like risk, and do more threat modeling and things of that nature, to try to find out in some scientific manner what is going to bite us first and focus on that. 

[00:35:49] Clint Gibler: How do you forecast then? 

[00:35:51] Doug DePerry: I haven’t read all the blog posts yet, but essentially you take a concrete thing, and then, based on your knowledge of the system in which you’re operating and your knowledge or experience as a security professional or development professional, you need some knowledge in that general area, you attempt to predict the likelihood of that thing happening.  

Ryan McGeehan, “Magoo,” has done a lot of work around this, and I think it’s really worth digging into. I don’t know if it will work for Datadog in 2020, but that’s something we’re going for. 

[00:36:22] Jesse Endahl: I’m also looking at this. I think how it works is something a little bit like a Wisdom of the Crowds thing, but obviously at a smaller scale, the scale of your security team. I guess there’s a lot of research behind it. Again, I read these posts probably two or three months ago; I need to re-read them. 

But I think it was that teams of experts tend to get pretty close to accurately predicting the risk of future hypotheticals, as long as they’re fairly similar to events that have occurred in the past. That’s roughly the concept, which then allows you to make a forecast and risk projection, and then at least quasi-scientifically measure a reduction of that risk after you’ve finished whatever security project, security engineering effort, re-architecture of a service, whatever it may be, and say, “Okay, we have reduced the risk in this area by N% now that this is done.” I believe that’s roughly the concept, and it’s something that I’m also looking at.
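
A toy sketch of that forecasting idea, with made-up numbers: each expert gives a probability for a scenario, the panel’s estimates are averaged, and the forecast is scored with a Brier score once the outcome is known (lower is better). This is only an illustration of the concept, not Ryan McGeehan’s actual method or anyone’s real data.

```python
def team_forecast(probabilities):
    """Average the panel's individual probability estimates into one forecast."""
    return sum(probabilities) / len(probabilities)

def brier_score(forecast, outcome):
    """Score a probabilistic forecast against what happened (1 = it happened, 0 = it did not)."""
    return (forecast - outcome) ** 2

# Hypothetical scenario: "we suffer a credential-stuffing incident this quarter".
panel = [0.6, 0.4, 0.7, 0.5]                   # each expert's probability estimate
before = team_forecast(panel)                   # forecast before the mitigation project

# Re-forecast after shipping, say, MFA and rate limiting; numbers are illustrative only.
after = team_forecast([0.2, 0.1, 0.3, 0.2])

print(f"forecast before: {before:.2f}, after: {after:.2f}, "
      f"claimed risk reduction: {(before - after) / before:.0%}")

# Once the quarter is over, score the forecast to see how calibrated the team is.
print("Brier score (if no incident occurred):", brier_score(after, 0))
```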

[00:37:16] Clint Gibler: I don’t disagree with anything that you both have said, but I want something more actionable and specific.

[00:37:22] Tash Norris: There are two things that really annoy me about risk management. One is that there are so many frameworks and none of them are written for product owners. The whole idea is that you have like ease of exploitation, likelihood to exploit, all of these different levels and yet, as a product owner, I’m like, “What the fuck does this mean?” They don’t understand that, and yet they’re expected to make a decision based on those frameworks. 

 

The second is we never give them data points, right? If your product owner or whoever accepts the risk and you say it’s a medium, how many people, if they see that exploited or they see an attack, actually loop all the way back to that product owner, even if they’re no longer the owner of that product, and say, “Hey, that actually happened”? We don’t complete that feedback loop. I’ve seen it in previous companies where we’ve come back in to a product owner and said, “That’s a critical,” or “That’s a high,” and they’ve said, “Well, we’re coming up to our peak season, or Christmas, and so we’re going to accept it anyway.” 

Yeah, we see, I don’t know, a significant amount of fraud or whatever it might be as an output that comes through the security team as an incident, or through a fraud team as an incident, and that doesn’t make its way to the product owner because they’re not part of the operational response. They have no idea that there’s actually been an incident off the back of that. So, they’ve made that decision thinking, “Well, I accepted a critical and nothing happened,” and they carry on. At the end of the day, the amount of money that’s lost in operational overhead, or response, or overtime, or whatever never goes back to affect their budget or their bottom line. They’ve got no reason to care about accepting that or not. 

This might not work for everyone, but it works for me. I still refer to things like the [inaudible 0:38:53] risk ratings matrix and things like that to help me quantify risk from an audit perspective, right? Because it’s still important to be able to say this was critical, high, medium, or low, so you can follow your risk management framework. But my product owner does not give a shit about any of that. 

What I’ve started to do, mostly to keep myself entertained, is to actually demonstrate the vulnerability or the exploit to the product team, often in real time, because it’s really fun. My favorite is a script that I run that just brings back a lot of public buckets with credential.txt files, credential.csv files, and a lot of access keys. It’s all about that demo. I show them how real it is and how easy it is, so I don’t have to then say, “Okay, ease of exploit is 5,” or whatever. I’m just actually showing them, and then it makes it so much more real. 
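
Tash doesn’t share the script itself, but a minimal sketch of that style of check might look like the following, assuming boto3 is installed and you only ever point it at bucket names you are authorized to assess; the candidate bucket names here are placeholders.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config
from botocore.exceptions import ClientError

# Anonymous client: anything listable here is listable by anyone on the internet.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Hypothetical candidate names; only test buckets you are authorized to assess.
candidates = ["example-corp-backups", "example-corp-logs"]

for bucket in candidates:
    try:
        objects = s3.list_objects_v2(Bucket=bucket).get("Contents", [])
    except ClientError:
        continue  # private, nonexistent, or otherwise not publicly listable
    hits = [o["Key"] for o in objects
            if "credential" in o["Key"].lower() or o["Key"].endswith(".pem")]
    if hits:
        print(f"{bucket} is publicly listable and contains: {hits}")
```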

I’ve shown them news articles about other companies’ breaches before and said, “That’s the same thing,” or I’ve shown them bug bounty reports and the number of bounty submissions we’ve had historically for a similar bug classification. That’s shown them, “Okay, these are found really easily,” and it’s made it a little bit more real. Or sometimes I’ve said, “Actually, it’s really hard to exploit that,” or “It’s really hard to discover it.” It’s made it a bit more real, so then when they do risk-accept, or when they’re looking at quantification of risk, it’s a little bit more translatable to them. But then I still have my framework version from an audit perspective, so I hit my PCI DSS or my ISO 27001 or whatever it is I’m going after. 

But I get so frustrated. I can’t tell you how annoyed I get about the number of times, as security people, we deal with incidents and never go back to the person who risk-accepted, or chose not to fix, or deprioritized something. I feel quite passionately that that’s something we’re really missing in security, that full feedback loop, and I’m not necessarily talking about purple teaming, because this could be six months after the fact or a year after the fact.

But there should be a way of us saying, “That decision cost us X amount of money, and this is the impact in resources and time.” How would that change things, and how do we start holding people accountable in a way where we can then change KPIs and objectives to say, “One of our objectives is not to lose X amount of money in bug bounty payments, or on-call time for incidents,” or, I don’t know, breaches or whatever it might be?  

[00:41:03] Zane Lackey: Yeah. I’ll toss one in just for communicating to the board and executive leadership, that side of the risk question. We tried a lot of different things there, and the stuff that really stuck was we’d say, “Okay, as far as risks to the business, here are the top 10 that we as a security program are going to be working on this year.” 

I’d map each one of those, and this is to something you said, which is spot-on, to a breach that they had heard about, so they could very easily digest it. For those risks, by the way, it was very important to put all of them in business terms. So, it’s not “we’re going to work on cross-site scripting.” Great, that sounds exciting, or terrible, I don’t know, I’m a board member. It’s “we are going to be working on protecting user data for the API, or the mobile app, or the main site,” something in business terms. Map it to breaches that they knew about. Then we’d say the two axes that we’re measuring on are our ability to detect someone attacking us there and our ability to make it much more difficult for them. 

For these 10 top-level strategic risks, there are a lot of zeros on that list right now, and we’d be impacted by a lot of those same styles of breaches. Our goal was to get the ability to detect to, ballpark, let’s say 50% for a bunch of these, and for a bunch we’d get to 80% on having made it much more difficult for somebody. To tie it together, a risk might be, and this is less relevant today than it certainly was 10 years ago: we’re going to reduce the risk of lost or stolen customer data on employee devices. What we’re going to do from there: we’re going to get all of our employee laptops encrypted and have that control in place. 

Okay, now here’s what we’ve done against that, because here are 50 news articles about somebody leaving their laptop at the pub or in a taxi or something like that, and there was a massive breach as a result. Mapping it like that to the board and to the executive leadership, in terms they can understand, shows what you’re doing to get the job done. Then later on, to really build an effective program, you map all of your resource requests back to that. You say, “I’m asking for headcount here and budget here to be able to move this from zero to 50%. You don’t want to give me headcount? Okay, we can leave that massive headline breach at 0%, but that’s in terms that you very much understand now.” 

[00:43:19] Clint Gibler: Yeah, I like that. There’s a book called How to Measure Anything in Cybersecurity Risk. I haven’t read it, but I’ve heard good things. There’s a new-ish model for quantifying risk called FAIR, F-A-I-R. I know a couple of people who are into it. I haven’t looked at it too much in detail. 

Then, some friends of mine on the Netflix security team have been doing some pretty interesting things. Basically, they’re trying to build a real-time, data-driven model of how risky a given service is. Let’s say you have all of your applications going through the same CI/CD pipeline, and you have real-time data on all the systems running in production. 

Basically, what they can do is this: they have built a number of security controls, such as a service that makes it easy to do mutual TLS between services, a nice hardened access control library that everything is supposed to use to talk to each other, and various other security controls. So they can, in a lightweight, automatic way, determine across all the different codebases who is using the security controls they’ve built and who isn’t. 

Also, based on live AWS data, for a given service, how many instances are running? Is this codebase running on one EC2 instance versus a thousand? Do you have other meta information? Is it edge-facing? Is it directly accessible from the internet versus internal only? Obviously, something that anyone can talk to is more risky.

Then, also, how important is the service to the business? Is this our core streaming or payments processing, or is it some internal thing where we kind of don’t want it to go down, but it’s fine if it does? There are many axes of things that make something more or less important or risky to the business. If you can programmatically and automatically determine the security posture of the code and the controls, as well as how it’s running live in production, you can basically build this continuous model. Also, what are the vulnerabilities currently in an application, whether found by pen tests, out-of-date dependencies, or things like that?

Collecting all of these things, you can say, “Oh, hey, we have this one service that has a high-severity vulnerability that hasn’t been fixed. It doesn’t have this one security control. Also, it’s in production on 10,000 EC2 instances. We should probably fix this right now.” Versus, “Yeah, this one has a critical issue, but it’s internal only, it’s running on one server, and it doesn’t have access to any PII. Maybe we’ll get to it eventually.” I thought that was a very clever way to programmatically infer risk and decide where you should prioritize time. We can talk more about that later.  
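
The Netflix internals aren’t public in this discussion, but a toy sketch of that style of scoring, with invented signals and weights, might look like this:

```python
# Toy risk score combining control adoption, runtime exposure, and business criticality.
# All field names and weights are invented for illustration; tune them to your environment.
def risk_score(service):
    score = 0.0
    score += 3 if service["edge_facing"] else 0            # reachable from the internet
    score += min(service["instances"] / 1000, 3)            # blast radius from fleet size
    score += 4 if service["open_vuln_severity"] == "critical" else 1
    score += 2 if service["handles_pii"] else 0
    score -= 2 if service["uses_mtls"] else 0                # credit for adopting paved-road controls
    score -= 1 if service["uses_authz_lib"] else 0
    return score

services = [
    {"name": "payments", "edge_facing": True, "instances": 10000, "open_vuln_severity": "high",
     "handles_pii": True, "uses_mtls": False, "uses_authz_lib": True},
    {"name": "batch-report", "edge_facing": False, "instances": 1, "open_vuln_severity": "critical",
     "handles_pii": False, "uses_mtls": True, "uses_authz_lib": True},
]

# Highest score first: fix the exposed, widely deployed, PII-handling service before the internal one.
for svc in sorted(services, key=risk_score, reverse=True):
    print(svc["name"], round(risk_score(svc), 1))
```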

[00:45:49] Audience Member: [inaudible 0:45:47]

[00:46:09] Clint Gibler: Was it a next gen? Friends like these.  

[00:46:15] Audience Member: [inaudible 0:46:15]

[00:46:09] Clint Gibler: To summarize, historically, security wasn’t very technical. It was very policy-driven. Now, there is more emphasis on understanding the technical details of platforms and the inherent risks there, and being able to either contribute directly in an engineering capacity or, I guess, offer more detailed, actionable guidance. Is that reasonable? 

Cool. Thank you. 

[00:46:57] Doug DePerry: Zane can talk to this a little bit more than I can; he has seen a lot more companies. But I do think that, for the most part, you still have that divide, and it just depends on the type of company and whether they really value that sort of technical work. Just from my own experience, Datadog is a highly technical company. The co-founders are engineers by trade. They’re still very technical and still keep up on that stuff, so they understand the need for a technical team. They won’t have it any other way. 

I think the other thing as well, with modern development methodologies, is the way you interact with everyone who writes code. Everyone we hire on the security team writes code. For some, that’s their whole job. We hire developers who also happen to be pretty good at security, or just developers who are interested in security. Literally, their job is to write tools and application services, either to allow the security team to do things or for the benefit of developers, to make things easier for them and create libraries and things like that. 

I think that is pushing things in that direction. I mean, Datadog is kind of a tech company, right? Or a software company. That has worked for us. Other, larger companies, financial institutions for the most part, are focused more on policy and compliance and may not be as technical. I won’t argue against that, and there is a finite number of folks who have the skills, but I think hiring developers specifically for security has exploded of late, for sure. 

[00:48:11] Jesse Endahl: I totally agree with all that. I think another thing, if you already have a company with an existing team and the existing team doesn’t have that technical background, is that you can do things to try to change the culture of the team. Maybe you can get approval for people to attend one or two conferences a year, buy some number of books that are shared amongst the team, or have a reading group or a study group where you actually say, “Hey, we’re all going to learn about the latest in web app security this month,” and do themes like that. Just having that culture of learning in your team can go a really long way over time. Of course, it will take some time to see those returns, but it’s going to have a huge impact.  

[00:48:49] Tash Norris: I think it’s important to remember that, as security teams, we almost have to be experts in such a wide range of topics. As a software engineer, you might be frontend or backend. Or historically, you would have DBAs, or Linux engineers, or Wintel engineers, or whatever.

I think as a security team, sometimes the perspective of your team, especially for AppSec people, right, is that you need to be able to answer questions on such a wide range of topics. You might be a mile wide and an inch deep, but your engineers expect you to be a mile wide and a mile deep, and it’s incredibly challenging. I think as leaders in your security teams, you should recognize the amount of pressure that might be on your people, and investing in training and having that support is really important. 

But I also think it’s important to make sure that culture is pervasive across the whole of your environment, so that it’s okay for people to say, “I don’t know, but I’ll find out how.” One of my favorites: in my old team, I had a career switcher. She came from Nestle, sold coffee. Great, because she brought in coffee samples all the time. But she switched into security engineering because that’s what she wanted to do. 

She used to say, “Let me take that away for you.” I mean, it’s little things, empowering people with these lines to say. But what it meant was that we then took time to say, “Okay, how do we upskill in the right areas, and how do we know what to go after?” My dream was that we would then be the team that, when tech or a CIO comes along and says, “We want to do X, this crazy thing,” we’d be like, “That’s great. We’ve already done some research and we’ve been playing around.” 

Actually, it’s been about teaching, and I’ve been very lucky to have previous CISOs that were all about engineering and learning, but it’s been about bringing other leaders on the journey to say, “Actually, 10% time or afternoon time to build stuff is hugely powerful.” [inaudible 00:50:35] team with our career switcher, we’ve been working on building a Python web app that ultimately is going to be a one-stop shop for engineers to come to. It’s something that can provide value, but it also helps us bring up her engineering skills. She built her first API last week. 

But when we were in a threat model with another team, she started asking questions around the security of certain features of that API, and those are things she wouldn’t have easily been able to pick up from just listening to a talk. It’s only from building. Empowering people and giving them the confidence to say, first of all, “I don’t know how to do something,” is fine. But also giving them the freedom and time to explore, and knowing that it makes them a better security engineer, is really important. 

It’s hard to do that. At the moment, it’s a team of one, but previously I was in a team of three at a much bigger company. It was hard to justify to my boss why we needed to take that time. We did 45 minutes a day, every day, for three months, which is a lot, and we did it at the end of the day. It was a big deal to say to my boss, a VP, “We need that time.”

But the growth that he saw in his team because of that was really powerful. Now, she’s an exceptional engineer. For me, it helped justify that the career switch was really powerful for her, because we allowed her to invest that time. But I think we also have to be kind to ourselves, right? We’re expected to be masters of many things, and security and technology moves so fast. Just being kind to ourselves and each other is really important. But you have to bite the bullet and invest that time, whether it’s on free stuff or paid stuff. You’ve just got to do it. 

[00:52:02] Zane Lackey: I’d say one quick twist on some of this too, especially on injecting technology understanding into the security team: make internal hires, in addition to all of this training up of folks, career switching, all of that. The best hires I’ve ever made in any of my security groups were internal engineers and DevOps folks, because you can teach them security, and the amount of time you have to invest in that is way lower than doing the reverse. 

They already have the platform and technology understanding of what’s used there. But most importantly, they already have the political relationships with the technology org. Run internal hiring campaigns from your technology teams to bring them to the security team. 

[00:52:48] Tash Norris: If you want to ask anyone about that, ask Dan in the front row. We hired him from development, our software engineering team, when I was in financial services. Honestly, one of the best AppSec hires we’ve had, because he knew really well how teams worked and their CI/CD pipelines, and he had those relationships. Ultimately, we can teach a lot of the security stuff, but that deep technical knowledge is really hard to teach. 

[00:53:09] Jesse Endahl: I want to throw one more thing on this. I think one thing that can also be tricky for a less technical team is to even know what you don’t know. A forcing function to answer that question, which I don’t see done very often but really love as a framework, is a root of trust analysis. It really starts from asking yourself a question: “For this thing, whatever it is, server, feature, whatever, what are all the things that have to be secure in order for it to actually be secure?” What are all the layers of the stack that this thing is relying on? 

That will quickly surface things where you might not really know how they work, but you’ll know that you don’t know how they work. Now you have a map: “Oh, I know how these upper layers work. Oh! I don’t really understand how the OS boots up and that layer of security.” At least you’ll know what you don’t know, and you’ll have kind of a roadmap for yourself that way. 
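As a toy illustration of what such a map might look like, here is a minimal sketch; the layers and the “understood” flags are made up for the example, not a real analysis:

```python
# Hypothetical root-of-trust map for "our web app is secure":
# each layer the app depends on, and whether the team actually
# understands how its security works.
stack = [
    ("application code", True),
    ("web framework / libraries", True),
    ("container runtime", True),
    ("operating system / boot chain", False),
    ("hypervisor / cloud control plane", False),
    ("hardware root of trust (TPM, secure boot)", False),
]

# The unknowns become the team's learning roadmap.
unknowns = [layer for layer, understood in stack if not understood]
print("We don't yet understand:", ", ".join(unknowns))
```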

[00:54:00] Clint Gibler: Yeah. Another potential idea: maybe let one of your people pair program with a developer for a day, or for a sprint or two. Just literally sit next to them and do the same sort of programming they do, and then they can pick things up through osmosis. 

One thing I will note: I feel like a lot of the very successful modern security teams, the ones really pushing things at the forefront, tend to have significant development expertise as well and are very familiar with the technology. So that, as Doug said, you’re building services that developers can use, secure wrapper libraries, infrastructure. No offense to the vendors, externally or internally, but I don’t think you’re going to buy your way to security. There are many things that certainly will help. But fundamentally, to do a great job at eliminating classes of vulnerabilities, you need internal development experience as well. 

Cool. Yeah. I think we had a question. 

[00:54:51] Audience Member: [inaudible 0:54:51]

[00:54:59] Clint Gibler: I love this question. How do you know when you’ve done enough security? I think this is really great, because, right, in any suitably complex system there are always bugs, as long as it’s not a trivial application. You could keep investing more people, more time, and you get to a point of diminishing returns, as with anything, on that graph of security investment versus ROI.

[00:55:22] Zane Lackey: We’re done. We’re all good. Let’s go home.  

[00:55:26] Jesse Endahl: I think a big piece of it for me depends on what it is that your company is protecting and then what the worst-case scenario for the company is. That’s a good starting point. There are going to be long discussions on prioritization that come out of that. But as a foundational thing, I think one of the things I actually really like about the GDPR regulation is the fact that it puts things in the format of – I know. Controversial statement.  

[00:55:49] Doug DePerry: No. Please. Continue. I want to hear this.  

[00:55:50] Jesse Endahl: It puts the emphasis on data and the data that you’re processing, instead of some hypothetical binary thing, black and white, like this is secure, this is not secure. It’s much more granular, right? It’s like, “Okay. This is the most important data to protect, and it should only be living in these places and processed in this way. Then this other data over here, it still needs to be secure, but it’s literally less important.” The worst-case scenario where that data ends up public is not as bad as if this other data ends up public on the internet, right? 

I think that is a great starting point for any company to have that discussion, is to start with a data classification framework and put things into buckets, and it should be very simple and easy for literally every single employee to understand and it should be in your security onboarding.

That way, it’s a natural hook into those, hopefully, productive and frequent conversations between any random employee in the company and the security team. Saying, “Hey! I’m doing this project,” and it could be someone in engineering, sales, design, it doesn’t matter. They say, “Hey, I noticed that this thing we’re working on deals with level 3 data, or level 2 data, or level 1 data. I remember from my onboarding, I’m supposed to talk to you.” And that’s just the hook into that conversation. 

I know that’s not a black and white answer to the question, but that’s how I would start as the first security hire at a company in this room. That’s probably how I would start the whole discussion with people internally. 
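A minimal sketch of that kind of classification framework, assuming Python; the level names, examples, which level counts as most sensitive, and the engagement rule are all hypothetical, not a specific company’s policy:

```python
from enum import IntEnum

class DataLevel(IntEnum):
    # Hypothetical three-bucket classification; smaller number = more sensitive.
    LEVEL_1 = 1  # e.g. customer PII, payment data: tightest controls
    LEVEL_2 = 2  # e.g. internal business data: protected, but less critical
    LEVEL_3 = 3  # e.g. public marketing content: worst case is embarrassment

def must_engage_security(level: DataLevel) -> bool:
    # The onboarding rule is the hook: handling sensitive-enough data
    # means "go talk to the security team".
    return level <= DataLevel.LEVEL_2

print(must_engage_security(DataLevel.LEVEL_1))  # True
print(must_engage_security(DataLevel.LEVEL_3))  # False
```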

[00:57:09] Tash Norris: I’m going to take you through my chain of thought that happened really quickly after your question. First of all, it was “never.” Then I was like, “Well, obviously, then you’d always be focusing on bugs.” Then I was like, “Well, if the developers can sleep...” 

Then I remembered one time when a developer happily told me that GDPR wasn’t going to affect them because they weren’t going to take in any more customer data after May 2018. Of course, it doesn’t only apply to data captured after May 2018. I was like, “Oh no!” Then I remembered that I’ve had a lot of conversations like that. I’ve come back to one of the things that we’re doing now, which is really similar to where Jesse’s at: we came up with six or seven questions. Are you internet-facing? What’s the type of data you’re handling? Are you business critical? Some really simple classification questions. Then what they answer, to be really brutally honest about it, ultimately dictates how much I care. 

Then that helps drive how much time I spend, which ultimately depends on how many resources I have available. That could be almost nothing, which is just saying, “Hey, follow these steps,” our normal SDLC process, all the way through to: we’re going to threat model, we’re going to pen test, I want to sit in and be a part of your sessions, or whatever it is. 
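To make that triage idea concrete, here is a hypothetical sketch; the questions, scoring, and engagement tiers are invented for illustration rather than taken from any team’s real process:

```python
# Hypothetical intake questionnaire: answers determine how deeply the
# security team engages, from "follow the SDLC checklist" to
# "threat model + pen test + sit in your sessions".
def engagement_level(internet_facing: bool, handles_sensitive_data: bool,
                     business_critical: bool) -> str:
    score = sum([internet_facing, handles_sensitive_data, business_critical])
    if score == 0:
        return "self-serve: follow the standard SDLC steps"
    if score == 1:
        return "lightweight review: security signs off on the design"
    return "deep engagement: threat model, pen test, security joins your sessions"

print(engagement_level(internet_facing=True,
                       handles_sensitive_data=True,
                       business_critical=False))
```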

But I think you have to be able to give yourself that release. I know people might disagree, and it’s fine to sometimes say, “I don’t care about that piece of data. Do your thing. Just follow our policies,” or whatever it is, and let them go wild, and make sure you engage on the right things. Otherwise, you’re just going to piss all your engineering teams off if you insist you’re involved in every deployment. 

[00:58:46] Doug DePerry: My official answer, without coming across like a jerk, would be that security is a process. It’s not an end state, right? That’s kind of a cop-out, though. Yeah, I knew you were going to touch on that as well. Internally, in my head, the way I look at it is: we don’t get owned by accident, by something stupid and preventable. An S3 bucket open to the world, a build machine security group I accidentally exposed to the world, something like that. Those are very easy, relatively easy, things to prevent, and they get picked up by automated scanners or malicious actors of very low sophistication. That’s how I internalize it. 

[00:59:22] Clint Gibler: Next question. 

[00:59:23] Audience Member: [inaudible 0:59:32]

[00:59:46] Clint Gibler: Frameworks are getting better, especially modern ones, which do a ton of security things by default, which is great. But as developers use them, just by using the frameworks’ built-in features the way you’re supposed to, they’re getting all these security wins, so they may not even have to think about security anymore. Do you think that’s a good thing, a bad thing, neutral?  

[01:00:04] Doug DePerry: I’ve thought about this a lot as well. I agree. We have done a lot of work to abstract certain security concepts away from developers so that they don’t have to think about them. But are we incentivizing the bad behavior of not thinking about security?

We have had some success with that, and they’re using it. But one thing I have learned as well is that you’re dealing with highly technical individuals, people who are creative, who are interested in how things work. Very much like security folks, except they focus on slightly different objectives. They don’t necessarily want a black box. They want to know how it’s working underneath. Some may not, but it’s been pretty 50-50 with the developers we’ve worked with, where they ask, “Why am I calling this function instead?” They want to know. I think that’s just a conversation you have, but that’s a good question. 

[01:00:04] Zane Lackey: It’s mostly a very good thing, I would say. I think a common pattern that all of us have seen across companies that are doing this well, and everybody calls it something different, it’s kind of the paved road or golden path, or whatever you want to call it, is, “Hey, if you write code or deploy a service following the standard pattern, there are whole classes of vulnerabilities you no longer have to worry about.” That obviously is, I think, really good. 

At the root of your question is, “Is them not knowing about those vulnerability classes kind of bad? Should they still be aware of it?” In a vacuum, I think that’s a totally fair point to debate back and forth. In reality, though, there’s not anyone on the planet who says, “Man! There’s just a shortage of security things to be thinking about and worried about,” right? 

There’s a backlog that everyone in this room could work on for the rest of their careers. The more you can wipe out individual vulnerability classes and patterns and make things safe by default, the more you can then look for what’s opting out of those patterns and treat those as issues to triage very quickly, or focus people on the things you can’t wipe out with the framework. It frees up the dangerously few resources you have for more important things, and therefore I think it’s a really good thing in general. 

[01:02:12] Jesse Endahl: I totally agree. I think in an ideal world, developers just go about their job using the tools and libraries that they’re provided. They don’t think about security. It’s secure. It happens transparently. You don’t get on an airplane and think, “Okay, I need to make sure the wings are working exactly this way, or else we’ll crash. I need to make sure the windows don’t blow open.” It just happens. 

I think, really, what we should all be fighting for, in my opinion, is the world where developers can build features very quickly and efficiently and they don’t even think about security or even have to know about it, because, as Netflix calls it, there’s the paved road, and everything is just secure. To get buy-in and adoption of all these nice things you build, if you can build useful features like logging, monitoring, and telemetry into the security primitives you build for development teams, then they’ll use them not just because they’re more secure, which they may be ambivalent about, but because they give them things for free. Like, “Oh! I was going to have to spend a couple of days building some nice logging and monitoring or performance metrics. Oh! But if I use your secure XML parsing library, I get that for free.” Also, XXE is prevented by default. 
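Here is a minimal sketch of the kind of wrapper being described, assuming Python; the module name, logging fields, and the choice of the defusedxml library are illustrative assumptions, not a specific team’s implementation:

```python
# secure_xml.py - hypothetical "paved road" XML helper: XXE-safe parsing
# plus free logging/telemetry, so teams adopt it for convenience, not just security.
import logging
import time

# defusedxml rejects external entities and entity-expansion tricks (XXE) by default.
from defusedxml import ElementTree as SafeET

logger = logging.getLogger("secure_xml")

def parse_xml(xml_text: str, source: str = "unknown"):
    """Parse untrusted XML safely and emit basic telemetry for free."""
    start = time.monotonic()
    try:
        root = SafeET.fromstring(xml_text)
        logger.info(
            "xml.parse ok source=%s bytes=%d duration_ms=%.1f",
            source, len(xml_text), (time.monotonic() - start) * 1000,
        )
        return root
    except Exception:
        # Includes defusedxml's rejection of DTD/entity abuse.
        logger.warning("xml.parse failed source=%s bytes=%d", source, len(xml_text))
        raise
```

The point of the design is that the team gets the logging and timing they would have built anyway, and the XXE protection comes along silently.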

[01:03:20] Tash Norris: I think it’s important not to allow frameworks to breed apathy, where because everyone says it’s a secure wrapper, no one has to care about it and all of a sudden no one looks at it anymore. As security teams, it’s easy for us to say, “They’re using the secure wrapper, so I don’t have to look at it either.” What happens is you end up with a system that ages. We heard from some of our speakers earlier that, as an attacker, you might go for some of these frameworks or some of these platforms, because then it’s a great point of compromise, right?

The one thing I would say with that is you want engineers to feel like they can develop securely by default and have that guidance, but you don’t want it to breed apathy. At the same time, I’m not going to say they have to learn about all the things that could go wrong, but I do want them to care that things could go wrong, and to still look at whatever they’re using that framework for with just a tiny bit of paranoia, in case something might go wrong.

[01:04:19] Clint Gibler: More questions maybe from the back. I don’t think we’ve had questions from back there yet.  

[01:04:22] Doug DePerry: You can see back there?  

[01:04:24] Clint Gibler: I can’t, but we can pretend. Or from anywhere. 

[01:04:31] Audience Member: [inaudible 01:04:31]

[01:05:05] Tash Norris: I can repeat that if you didn’t get it.

[01:05:05] Clint Gibler: Just to make sure. Do you think incentives are important for upper and middle management so that they don’t ignore things and –

[01:05:15] Audience Member: [inaudible 01:05:15]

[01:05:19] Clint Gibler: Okay. For making impactful organizational change for the better for security. Are incentives important for those middle and upper management? If so, what are some good ones, or how do we make it happen? 

[01:05:30] Tash Norris: They shouldn’t be. That should just be part of their own ways of working, right? But the most powerful one, and this is such a soft-skill thing to say that I don’t really want to be the one to say it, is feedback. Feedback is really important, and that’s a really big incentive. I’ve found that written feedback, given at the right time and place, well written, with examples, has been really powerful, and not just to the individual. I don’t want to be like I make examples of people, but, “You cross me, I swear to God.”

Giving examples in feedback and really taking the time to have that conversation with people has been really helpful as an incentive. Especially when you have big corporate organizations that want to do 360 feedback, or end-of-year and mid-year reviews, it actually becomes really powerful, especially if you can get your senior leadership in security to do it for you and say, “Hey, we are going to commit to giving you feedback in return for being a security champion,” or whatever. I’ve found that to work surprisingly well, and it doesn’t cost me anything from a budget perspective. Yeah, I don’t know what you guys think. 

[01:06:35] Zane Lackey: Are incentives important? Absolutely. 100% yes. It’s just that when you say the word incentives, people’s minds tend to go to financial, right? Are incentives important? Absolutely, as a bucket term. It’s just that the actual incentive is going to be context-dependent on your organization. In some companies, I can think of a few different super leading-edge tech ones, someone will get a shout-out in the company’s Slack channel when they’ve done something very mindful about security, and that’s the incentive. 

In others, certainly ones driven by financial performance and end-of-year bonuses, financial services for the most part, part of your bonus multiplier might get tied to some sort of security metric. I’ve seen every end of the spectrum there, from financial to just social, and I think the reality for most organizations falls somewhere in the middle. Finding the right one for your organization is a super important fit. 

[01:07:31] Clint Gibler: Right. I think we are close to wrapping up, so quickly, in perhaps one or two sentences: if there is one thing that you would like to leave people with, perhaps a most impactful lesson learned or something like that, I leave you that opportunity now. 

[01:07:46] Tash Norris: Vulnerabilities don’t discriminate would be the one thing that I’ve learned. It doesn’t matter what size of business you are, how experienced you think you are or your team is, or how much money you spend or how big your budget is; vulnerabilities don’t discriminate. You could be vulnerable to the same thing that a big financial services firm could be, or you could be much better at protecting against it. So, you have to be really mindful of what your risk landscape is, not what everyone else’s risk landscape is.  

[01:08:11] Zane Lackey: The one lesson I learned very painfully early on at Etsy is about this shift to DevOps, cloud, DevSecOps, all of these. The number one theme, and it’s very cliché when you say it now, but I think it’s the single most important one, is that security has to focus on enabling the rest of the business to move at the speed that it wants to. 

I think that the story from before is kind of the painful example of, “Well, then security just gets routed around and they don’t tell security about the 50 apps in the cloud.” But I’d say the one theme to guide your work whether you’re on the security side, whether you’re on the DevOps side, the development side, is don’t be thinking about security as that old sort of gatekeeping mentality. If it is, it’s not going to be successful. Focus on how it can enable you to move at the speed that you want to move at. And that shakes out a lot of cultural stuff and it shakes out a lot of technical stuff. 

[01:08:57] Jesse Endahl: I think something that actually dovetails really nicely on that is making things secure by design as much as possible across the stack will enable that. It will enable a faster business velocity, a faster engineering velocity. Everyone will be happier and you’ll be more secure. Anytime you have that opportunity, take it.  

[01:09:14] Doug DePerry: Exactly. Don’t be a blocker. The build-versus-buy argument I think is a strong one. Don’t delude yourself there. I’m not going to build a [inaudible 01:09:21]. I’m going to buy one. It might be a next-generation one, if I were to do something. Very sorry, very basic example. But build versus buy is a very important thing, especially, again, to your point about having developers on staff. You have to stop yourself from taking on too many projects sometimes. 

Also, I don’t like using the phrase “people problem,” but it’s about relationships. It’s about working with people and other teams, and we’ve had peaks and valleys with that. We’ve learned our lesson a few different times as far as interacting with other teams, listening to their needs, and communicating with them properly. There’s a big difference, I’ve learned, between communication and understanding. You can communicate quite a bit, but if they’re not picking up what you’re putting down, it can create issues. I think I’ll end with that. 

[01:10:07] Clint Gibler: All right. Well, thank you so much for your time. If you like content like this and want to hear about other modern security research that’s going on, feel free to check out the free newsletter. Thank you so much for attending. We’re excited for the rest of the conference. We’ll be around tonight. Happy to chat more, and see you tomorrow.