
Season 5, Episode 82

Two Angles Of Application And Product Security With Mike Shema

Guests:
Mike Shema

Today’s guest, Mike Shema, is no stranger to podcasts. As the host of the Application Security Weekly show, he has firsthand insights into the trends and movements in the industry. When he is not on air, Mike works with developers at Square to protect applications, their data, and their users. With a broad range of AppSec experience, from manual security testing to building a commercial web scanner and helping teams build secure products, he has seen it all. In this episode, we hear about Mike’s moderator role at Square and how it ties into the organization’s engineering-biased security approach. We learn about their partnership strategy, how they split up cloud and governance security, and the benefits of specialist teams. Mike candidly shares how his empathy for developers has grown over the years and how, as a result, he is careful not to play the gatekeeper role. The conversation then turns to tooling, where Mike sheds light on his ‘why bother?’ addition to the age-old question of whether to build or buy. Moving away from his work at Square, we then take a look at some of the industry developments he has picked up on as a podcast host himself. He talks about how developers have leapfrogged security teams over the past few years and why this is a good thing for the industry. Be sure to tune in to hear this and much more.


ANNOUNCER: Hi. You’re listening to The Secure Developer. It’s part of the DevSecCon community, a platform for developers, operators and security people to share their views and practices on DevSecOps, dev and sec collaboration, cloud security and more. Check out devseccon.com to join the community and find other great resources.

This podcast is sponsored by Snyk. Snyk is a dev-first security company, helping companies fix vulnerabilities in opensource components and containers, without slowing down development. To learn more, visit snyk.io.

On today’s episode, Guy Podjarny, President and Co-Founder of Snyk, talks to Mike Shema. Mike works with developers at Square to protect applications, their data and their users. His experience in AppSec ranges from manual security testing to building a commercial web scanner and helping teams build secure products. He’s one of the hosts of the Application Security Weekly podcast and writes about InfoSec and DevSecOps with an infusion of references to 80s music, apocalyptic sci-fi and spooky horror, to keep topics entertaining.

[INTERVIEW]

[0:01:30.9] Guy Podjarny: Hello everyone. Thanks for tuning back in to The Secure Developer. Today, we’re going to look at application security and product security from two angles here and to help us with that, we have Mike Shema who on one hand cohosts The Application Security Weekly podcast. And on the other hand, works on product security at Square. Mike, thanks a lot for joining us here on the podcast.

[0:01:51.9] Mike Shema: Thanks Guy and with that, my hands are full. I've run out of other hands to talk with!

[0:01:58.9] Guy Podjarny: Indeed. Before we dive into those two interesting angles you have on application security, can you tell us a little bit about what it is that you do and, maybe, a little bit about the journey that got you into security and where you are today?

[0:02:10.8] Mike Shema: Sure, I’ll kind of work backwards in time and start with where I am now. Here at Square, I work with – it’s easy to say product security, but what that really means is working with engineers and going through, on a daily basis, the fundamentals of threat modelling: what are you building, what could go wrong and, honestly, the important part, what are we going to do about it? That’s really interesting, because that “what could go wrong” is a way to have a really fun, entertaining conversation with the engineers about what they’re building; they have the context, they have the insights about that.

In my role, I get to be as much a moderator, a facilitator of engineers being creative thinkers about how to pick apart their own applications, as a consultant, so I’m doing a lot more consulting and collaborating than I am coding on a daily basis.

But, to get to this point, I was doing a lot of coding in C++, which I will say will always have a warm place in my heart, despite quite a steep early learning curve – let me say it that way – to get in and do it properly. But that was at Qualys.

There, I was building a web application scanner, and that was a really fun way to get a really deep dive into what web security problems are and how much we can automate. Because I’ll say – maybe I’ll phrase it this way. Perhaps I’m a little bit lazy at heart, but it’s more in the sense of: if I’ve done something three times, can I turn that into a bash script? And if I have a bash script that’s working, maybe I can turn that into something that, after the compiler errors, is good working C++ code.

And then, before that, though, was the really early days of application security, before we had the OWASP Top 10. This is back in the late 90s, early 2000s, just doing manual testing, consulting, basically seeing – here’s some web application being built, not even on ASP.NET but on ASP, with some backend database behind it, and that was really exciting quite honestly. Because we were all figuring out, “What are the mistakes the developers are making?”

I think one of the things I’ve come around to appreciate in this journey is that early on, I probably had more of the attitude of hacking web apps and thinking, “Wow, look at this horrible code, what are these idiot developers thinking?” Then I got into doing some automation and thinking, a lot of these attacks are so simple – cross-site scripting, SQL injection – what are those lazy pen testers doing?

And then, I’ve gotten to the point where I’m working now with teams building products, and I’ve come, I think, a much healthier 180 and think, right, there’s a lot of engineering difficulty to putting together products, building code, maintaining legacy code. And I have to admit, it’s a much healthier approach, a much more constructive approach, now to be like, “I have empathy for developers who are working under engineering constraints and, by the way, need to write some secure code.”

[0:05:04.9] Guy Podjarny: Yeah, that’s a great learning and well-phrased, and I guess you could frame it as a kind of seeing the light, you know? Learning to appreciate the two sides. But I guess, if I over-embrace this sort of dark and light analogy, you’ve also moved from security vendors – from building or helping as part of a security tooling company – to securing inside a non-security organization, and then back again, I think, a couple of times.

What have you seen as part of that? Does that type of empathy help, does it also enrich your perspective?

[0:05:41.3] Mike Shema: It definitely helps with empathy, because you suddenly see, “Oh, this is how teams are using tools, this is how teams are bringing in vendors, and here are the vendors that succeed over time.” Meaning, they’re not just solving an immediate problem – they do solve a problem that developers actually have – but they’re making that developer’s life better. I’m deliberately using that term, developers, right now because I think that’s one of the other things I’ve discovered, or at least one of the things that have changed in the past five, ten years maybe: a lot of the security tools –

A lot of the security vendors, if they can drop into an IDE, wow, that is 10 times, 100 times, some other huge number – an order of magnitude better for the developers, because you go to where they are working. The security team can still be the enablers; the security team can still help with vetting, to say, “This is a good tool, this passes some baseline findings that we would expect, it has a level of quality, but look, we’re going to help you developers bring it together.” I think, from that kind of vendor-to-consumer aspect, that’s one of the things I’ve learned.

I think, going back from the consumer to the vendor aspect, another thing I’ve learned is figuring out just what solutions are smart to work on. Rather than thinking from just a web application perspective – I want to go figure out how to automate this particular test, CSRF or cross-site scripting, or kill off this bug class – those are neat, but instead of just thinking, “I want to write the best cross-site scripting finding tool in the world,” on the engineering side –

Why don’t I just try to talk all of our frontend engineers into adopting React and, for the most part, not even have to worry about cross-site scripting at all? And then I realize, you know what the developers are doing? They’re pulling in a lot of these open-source tools, third-party dependencies; suddenly they’re pulling in gems, pip packages from Python, and so on.

And some of these have problems, and I will definitely admit, at the point where I am in my career, I don’t want to have to be the one who sits down and manually audits, or even brings in some SAST tool or something, to vet all of that third-party code. That’s a problem I’d love to push off onto some vendor, have the vendor solve for me, and it’s one of those things I wouldn’t have realized at the beginning, because my original thinking was, “We just need to secure the code that we’re writing.”
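
A minimal sketch of what automated vetting of third-party dependencies can look like – purely illustrative, not how any particular vendor or Square does it – might query a public vulnerability database such as OSV.dev for each pinned package; the package names and versions below are hypothetical placeholders:

    # Sketch: check pinned Python dependencies against the public OSV.dev database.
    # Package names and versions are illustrative only.
    import requests

    def known_vulns(name: str, version: str) -> list:
        """Return known vulnerability IDs for a PyPI package at a given version."""
        resp = requests.post(
            "https://api.osv.dev/v1/query",
            json={"version": version, "package": {"name": name, "ecosystem": "PyPI"}},
            timeout=10,
        )
        resp.raise_for_status()
        return [v["id"] for v in resp.json().get("vulns", [])]

    if __name__ == "__main__":
        pinned = {"jinja2": "2.4.1", "requests": "2.19.0"}  # hypothetical lockfile contents
        for pkg, ver in pinned.items():
            ids = known_vulns(pkg, ver)
            print(f"{pkg}=={ver}: {ids or 'no known issues'}")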

If there’s a theme to this particular episode, it’s lightbulbs going off, and the other lightbulb that goes off is, “Right, there’s a lot of other code that we’re using that has vulnerabilities.” And I realize, you know, talking with you, you are probably quite familiar with this problem space, but it’s one of those things that, I’m honestly saying, was a sincere discovery of my own – the problems that developers have run into, where we also don’t want to say, “Developers, you’re responsible for running all this code analysis and doing the inspection of all these third-party libraries.”

[0:08:44.5] Guy Podjarny: Yeah, clearly, I relate to the problem space and that need. Maybe you can also think about this a little bit as staying up to speed with the industry, because it could be that when you started, in your C++ roots, indeed, most of the risks and most of the security concerns were far more in the code you write. But applications today don’t look the way that they did before, definitely with the advent of cloud native and the adoption of far larger quantities of open source, which is awesome but also changes the security landscape a little bit.

Maybe let’s dig a little bit into Square and your work over there. We oftentimes discuss on the podcast what security organizations look like. Can you describe a little bit what product security or cloud security, these related organizations, look like at Square?

[0:09:36.2] Mike Shema: Square is really cool and, going back to that idea of empathy and working with engineers, Square security is actually really heavily engineering-biased. Meaning, there are engineers writing code alongside our cloud deployments, building up the platforms for our cloud usage: what does it mean for us to go cloud native and do that in a secure way, handling secrets, looking at Terraform, looking at configurations, avoiding the all-too-common open S3 bucket or Elasticsearch instance that dumps everybody’s data onto the internet.

It’s heavy engineering on that side, and it’s also engineering that’s focused on the kinds of components we use as fundamentals to build up applications, so it might be key management, it might be handling strong identity and authentication, it might be – we’re doing more now with privacy engineering, which is a very close ally of security; that’s part of our security org that’s engineering, as well as the cloud work I mentioned.

With that said, product security falls a bit closer to, let’s call it, the governance side of things, in the sense of: what is the risk out there? Is this acceptable risk? Is this a risk that we should be worried about, and is this a risk that should be addressed? That’s what our product security team’s role is: looking at what’s going on in product areas and how we can help – meaning, help to reinforce that yes, you developers writing the code, you should have some ownership of security.

What that means, though, is that here is what we might recommend for improving an authentication model. Or making sure that we have a conversation about abuse, because we can talk all we want about doing some code linting, doing analysis of third-party dependencies and whatnot, but it comes around to: what if we lose money to account takeovers? What are the different ways that people could misuse or abuse stolen credit cards against our APIs, things like that? These are things that don’t have a really easy OWASP Top 10 reference that we can point to, but they’re still important conversations. We have those conversations with the product team.

And then some of those conversations we bring back to the security org’s engineering teams, when we see that there’s a common challenge within the product teams. For example, we have a great story about secrets management for services within our data center, but if we’re moving to the cloud – say AWS or GCP – how much do we want to migrate our capabilities from the data center and just hook those up to the cloud? What if we just want to use cloud-native secrets management? There are conversations there to figure out comfort levels and what the gaps between capabilities are, and then also to say, “Here’s the API that a lot of our developers are used to; can the security org’s engineering side go and recreate that API in the cloud, make it just as easy to use in Lambda and so on?”
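
To make that idea concrete, a thin wrapper could preserve a familiar internal call shape while delegating to a cloud-native secrets store. This is only a hypothetical sketch assuming AWS Secrets Manager and boto3 – not Square’s actual API – and the secret names are invented for illustration:

    # Hypothetical sketch: keep a familiar internal get_secret() interface while
    # delegating to AWS Secrets Manager. Names are illustrative, not Square's API.
    import boto3

    _client = boto3.client("secretsmanager")

    def get_secret(name: str) -> str:
        """Fetch a secret by name, matching the call shape services already use."""
        resp = _client.get_secret_value(SecretId=name)
        return resp["SecretString"]

    # Usage inside a Lambda handler (also hypothetical):
    def handler(event, context):
        db_password = get_secret("payments/db-password")
        # ... connect to the database using db_password ...
        return {"statusCode": 200}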

[0:12:38.3] Guy Podjarny: Interesting. So these cloud components are also a part of the product security domain. Product security collaborates with engineering, but it includes both decisions around implementation inside your own code or your own application, as well as these cloud-native, cloud security decisions – like, you know, the use of containers, or the use of a cloud platform for secrets management, for instance. Those all qualify under that product security umbrella?

[0:13:06.7] Mike Shema: Indeed. One way, though, to explore that more is that over time, I’ve realized that there’s no way I can become an expert on all of the security domains. And now, I think in general, we’ve seen many security careers and many security orgs also focus more on specializations.

Our security engineering has a team that we call infrastructure security, which is focused on cloud infrastructure and securing cloud infrastructure, but we also have another – just call it a more traditionally focused engineering team – that is focused on building out the engineering capabilities of going into the cloud.

We already have a team that’s outside of security, as well as a team inside security, specializing in what it means to build apps and secure secrets, secure data, do encryption and do identity easily within the cloud. Someone in my role is a little bit more of a fixer, meaning, I know some basic security principles to apply here, but once we get into the alphabet soup, the crazy long list of all the different services AWS has, let me make sure that I connect you to our security engineering team and the right people who have much better domain-specific knowledge about solving this. The same may be true if we’re talking about mobile or something else; that’s really how we’re aligning those kinds of specializations.

[0:14:29.4] Guy Podjarny: That’s a really interesting structure there, so let me see if I got this right. You have teams that are a bit more specialized, like security engineering, a cloud infrastructure security team or maybe a mobile security team. Those teams have more focused security expertise, and they might, for instance in the context of security engineering, build components to actually embody that expertise – components that are built into the application. And then product security’s role is to be a liaison, to be someone who has both a deeper understanding of the product than most of the security teams and a deeper understanding of security than most of the product teams – sometimes to answer questions right away, sometimes to bring in the relevant expertise from another spot. Does that sound correct?

[0:15:12.5] Mike Shema: Yeah, that’s it. One of the things we’re trying to do, because in addition to just being liaisons, we do want to honestly practice what we preach, in the sense of security actually writing code and basically having that empathy for developers – because I think not everyone needs to write code to be in a security org, but I think a security org does need teams that are writing code.

From the product security perspective, we also spun up a – call it a product security engineering team, just to be a little bit more specific about what we’re calling them – and that team is focused on: what can we do to help product teams solve some of their domain-specific problems, or something that’s not necessarily something one particular product area should own?

An example of that might be the API abuse I was talking about earlier. One team might be getting abuse through carding – stolen credit cards – another might be suffering abuse against their API endpoints because people are attempting to do credential stuffing.

Others might be seeing, you know, attackers attempting to go and enumerate data about merchants that we have – things that lead to different types of takeovers. These are the types of things we pull into product security engineering and start thinking: what can we do here to provide a new capability that product teams can take advantage of to help with abuse?

Or, conversely, we might just flip it and say, what are some of the fundamental security principles or capabilities that any organization should have? One is asset inventory – easy to say, very hard to do, just like identity management: easy to say, very hard to do. We have a security team dedicated to identity, but we don’t really have a team that’s as focused on inventory right now. Part of that is vulnerability management, so we do have a team dedicated to vulnerability management and scanning. But what if we just say, hey, where are all of our APIs?

Well, endpoints. One particular team could answer it one way, another team could answer it their way, but we don’t have a way to ask that – and I think it’s an important question – across the entire organization: what’s our entire attack surface? That’s something that we’re starting to talk about within our product security engineering team, rather than have people like me and my peers go out and just interview and ask, “Hey, how many APIs? Okay, you’ve got 16, you have 12, cool, on October 5th, this was the last number that we had.” By the time October 6th rolls around, that data is outdated, it goes stale, et cetera.

Let’s solve this through some sort of automation. That kind of gives an idea of our mentality, too, and how we’re trying to organize our engineering to address those basic questions of things like inventory and attack surface.
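
As a purely hypothetical sketch of that kind of automation – assuming AWS credentials, boto3 and just one service, API Gateway, as a single slice of the inventory – a scheduled job could snapshot endpoints instead of relying on point-in-time interviews:

    # Hypothetical sketch: enumerate API Gateway REST APIs as one slice of an
    # automated attack-surface inventory, instead of point-in-time interviews.
    from datetime import datetime, timezone

    import boto3

    def list_rest_apis() -> list:
        """Return (id, name) pairs for all API Gateway REST APIs in this account/region."""
        client = boto3.client("apigateway")
        apis = []
        for page in client.get_paginator("get_rest_apis").paginate():
            for api in page.get("items", []):
                apis.append((api["id"], api.get("name", "unnamed")))
        return apis

    if __name__ == "__main__":
        snapshot_time = datetime.now(timezone.utc).isoformat()
        for api_id, name in list_rest_apis():
            print(f"{snapshot_time} {api_id} {name}")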

[0:18:00.7] Guy Podjarny: I think it sounds very adaptable and makes a lot of sense. I guess one last question as I dig in: how does tooling work here? If you’re looking to build – you mentioned some scanning, or maybe there’s a protective piece of software that you want to put in place – how does that play with the ownership split that you described?

[0:18:22.9] Mike Shema: That’s interesting. I think, on the one hand, our security org still struggles, like everybody else, with build versus buy. I like to throw in a third one, you know: build, buy or why bother? Because there is some tooling that maybe we don’t even need to worry about at all. It sounds neat, and maybe that’s what everybody else is doing, but perhaps we’ve moved on from that.

You know, maybe I’ll sort of tease and say that’s my cross-site scripting finder: let’s just move to React and not worry about a build-or-buy for that type of problem. On the other hand, what we have tried to do is orient a lot of the security engineering teams around problem spaces – call it cloud security, call it identity management, call it privacy, call it security infrastructure in terms of how we manage secrets or how we’re protecting data, encrypting data.

So, with that said, then what we do is we start to look at tooling. What is the tooling the security team needs? There are going to be easy things – take our incident response team. They need to have visibility into the logs and to do analysis, so they will work with what the established engineering teams have done with their logging pipelines and tap into it. That’s just tooling that only the security team really needs to consume, because that’s their need.

On the other hand, if we get into things like Terraform or tooling for cloud configurations, security is not the one actually generating the majority of those configurations – teams are managing them through Terraform – but security can look at those and say, “Here is a baseline we like. Here is how we are going to analyze how different teams are using their cloud ecosystem,” and we can point out, “Looks like this is a mistake, this here is another mistake.”

That is a great area for tooling, and that is where this infrastructure team is looking at what the easy steps are they can take on their own to build their own tools. But it’s also asking the question: is this a problem space that we want to become experts in, or is there another vendor out there that actually addresses this type of problem? And I think, coming from a company called Qualys – vulnerability scanning, that is kind of a commodity problem.

That is where I would strongly recommend we should just buy. This is a well-established area; these are tools that have plenty of money behind them and engineers to solve that. But as we start to look into the cloud – AWS, GCP, for example – what kind of security tools do these clouds offer? Are they sufficient? Can we just tweak them a bit, play around with the APIs they expose, or do we need some better tooling from a third party, because we want to make it happier for the engineers who are actually the ultimate consumers of these tools?

So, I realize that was a pretty long-winded answer to your question, but if I were to try to summarize: there are a couple of tools we’re building where we’re the end users, but when the user experience is important, because it’s the engineers who are consuming the tools, that is when we take a much closer look at that build-versus-buy scenario.
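
To ground the “here is a baseline we like” idea from a moment ago, a very small, hypothetical check might scan Terraform plan output (as produced by terraform show -json) for S3 buckets with public ACLs. Real teams would usually reach for established policy tooling; this only sketches the pattern, and the resource fields assume the classic aws_s3_bucket acl attribute:

    # Hypothetical sketch: flag S3 buckets with public ACLs in a
    # `terraform show -json` plan file. A stand-in for real policy tooling.
    import json
    import sys

    def find_public_buckets(plan_path: str) -> list:
        with open(plan_path) as f:
            plan = json.load(f)
        resources = plan.get("planned_values", {}).get("root_module", {}).get("resources", [])
        offenders = []
        for res in resources:
            if res.get("type") == "aws_s3_bucket":
                acl = res.get("values", {}).get("acl")
                if acl in ("public-read", "public-read-write"):
                    offenders.append(res.get("address"))
        return offenders

    if __name__ == "__main__":
        bad = find_public_buckets(sys.argv[1])
        for address in bad:
            print(f"Public bucket ACL: {address}")
        sys.exit(1 if bad else 0)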

[0:21:24.1] Guy Podjarny: Yeah, I think that makes a lot of sense, and it aligns the tooling with the team that eventually has the most responsibility for it. So, I guess I have one more question, and then maybe we can get into some of your insights from the podcast as well. You mentioned the way you’re split up within the org; what are some of the core practices that you think helped make you successful at Square? And, I don’t know, it’s okay to say they’re successful here at Square, but maybe you even have a bit of a perspective on whether you’re doing it the same way as is common in the industry, or whether you think you’ve struck a new nerve – above and beyond the collaboration with teams that you have already described?

[0:22:06.8] Mike Shema: I think – see, I am trying to think on my feet here. Some of the things that we are doing really well start from that perspective that the security org is actually writing code and building services, and I think that already builds trust – the trust that the security org understands the pain points that the engineers are going through, because we are using the same tools they are and building services ourselves.

From the product security team side, some of the things that we are doing successfully, and are trying to do more of, are figuring out what threat modelling means, how we can be a bit more consistent about it, and talking to as many engineers as possible – because threat modelling, I will always love that – as well as keeping it simple, meaning: what are you building, what could go wrong? Starting just an open-ended conversation from there.

But that’s not something that is easily scalable, and, you know, there are a lot of manual conversations that go on there. So, I think we are doing a pretty good job of having those conversations, because the security team very much has the perspective of not being the team that says no, not being the gatekeeper. That is one of the things I think the security org has done really well: for the most part, apart from the exceptions that I think make sense as they relate to data handling policy, privacy, things like that.

Beyond that, it’s pretty much, “Product team, go forth and conquer. If you need help, we are here as the security team and, by the way, we’re also doing monitoring. We are doing some design reviews. We may see some things and come to you and say, ‘Let’s discuss this design; there are some things that we can change.’” But I think that’s help given, and it builds a lot of trust that makes the conversations about threat modelling a lot easier, because the engineers are not predisposed

to be like, “Uh-oh, what’s going to happen here?” or “This is going to interrupt my roadmap,” things like that. So, I think those are positive ways that the security org approaches it, both on engineering as well as in its communication and interaction with others.

[0:24:13.7] Guy Podjarny: Yeah, definitely, and threat modelling – I think this is the second time we’re mentioning it here – is sort of a topic close to heart. So, you’ve described what sounds like a very healthy relationship; do you use a specific methodology around the threat modelling, above and beyond just the trust and the positive interaction?

[0:24:33.4] Mike Shema: Not right now, and honestly, we are still in kind of a discussion about how organized and how standardized the threat modelling should be. Right now, I think the answer is really not very much, meaning we don’t need to get into something like STRIDE or one particular specific framework, and once we start to introduce tooling, some of the tooling becomes too much of a set of binary, close-ended questions: are you doing this? Yes or no.

Are you doing that? Yes or no. Are you doing this other thing? Yes or no. Are you exposed to the internet or not exposed to the internet? All of these questions are fine, but if you step back for a second and say, why am I pushing all of this effort to answer 10 or 20 different questions onto the developer, when they may not actually have any bearing, if I am just trying to tease out one particular interesting thing? Why don’t we go to the developer and just say, “How would you attack your own product?”

What worries you the most? What’s the area of code that keeps getting pushed off, that nobody has had the time to go back and fix up, where you think all of the mental to-dos are? I think those are the better ways of, call it, bringing in a framework around the threat modelling. I will say, however, that there should be a bit of a shared language. So, if we start using terms like risk or vuln or flaw or exploit, just a little bit of shared language.

So, when we say this is what’s happening, we can say, “Ah, here is an SSRF,” to be something more specific, or we just say, “This is an authentication problem,” or this is an authorization problem, or data handling, and just have those rough, broad categories. But I don’t think we need to get into something overly specific.

[0:26:19.3] Guy Podjarny: Yeah, it makes sense. You know, it sounds like an easier way to actually get the discoverability, the information you need, on the spot, even if you need to codify it a little bit more after. So, I feel like I could continue asking you a whole bunch of questions about Square’s methodologies, but maybe let’s take a step up and look at the industry, at a bit more of the trends.

So, you’re also the co-host of the Application Security Weekly podcast, and I am sure you get great visibility across the industry. What are some common practices that you see have maybe recently come to bear with regard to these types of developer collaborations? Maybe things that weren’t as popular before, that you feel are gaining some momentum and are worth sharing here.

[0:27:06.4] Mike Shema: I think, if there is one thing actually, speaking of momentum, I almost want to say that the dev, DevOps, basically the engineering teams, have pretty much leapfrogged security in the last couple of years, mostly because of the cloud. And I think that is because the engineers are seeing: here is how to abstract everything, from functions as a service – so Lambdas – all the way to infrastructure as a service. Everything pretty much gets abstracted into code.

So, the engineers are just doing great things, spinning up large or small applications within a single region, across regions, worldwide, and it boils down for the most part to some YAML or some human-readable code. And the reason I say they’ve leaped over security is that suddenly security is like, “Oh wait, we can’t just –” You are developers and you are dealing with infrastructure. You are dealing with networking decisions. You are making these networking decisions because they work for your app; the same with data storage. And security can’t just say, “We are going to be the gatekeepers. We need to review,” because the pace of release is too quick and, as I was saying a little bit earlier, because of the domain expertise. You can’t have one person that, top to bottom, knows everything, all the nuances of the entire cloud environment, let alone things like containers. And your listeners and you might notice, I have astutely avoided mentioning Kubernetes or containers, because that is absolutely a large blind spot for me in terms of being able to talk to the ins and outs of their security properties specifically, and fortunately, I have colleagues and friends who can pick that slack up for me.

So that is one of the trends we see, and the reason I bring it up as a trend is that, if I put my sales hat on or a vendor hat on, you know, they talk about personas: who is the buyer they are interacting with? And a lot of times the buyers tend not to be so much the security team; they are the engineering team. It’s coming down from your CTO, and they are saying, “Okay, we are building this code, building this infrastructure, and we want to do so securely,” or, “We have been told by security we want to deploy secure code. How do we hook this up and make the engineers happy?” Because the engineers are ultimately – and I think this is a good thing – responsible for a lot of the security. And so now the vendors are looking to that CTO persona or that engineering persona to figure out: where are you working, meaning, what can I do within your CLI, your command line, or within your IDE? Does my tool give you any value there, and can you interact with it from the normal type of software development lifecycle that you’re used to, rather than having to pull in the security team to approve something, or having to go out to yet another web application you’re looking at – trying to avoid the “single pane of glass”, but that’s the euphemism, right?

[0:30:04.5] Guy Podjarny: Because you get to, you know?

[0:30:05.9] Mike Shema: Exactly. Developers have enough tabs open in their browsers as it is; they don’t need yet another one. So those are some of the trends, meaning engineering has really pushed the need for security, and they’ve actually taken on and done a lot of the security capabilities in all of these abstractions of code and infrastructure as a service. They are pulling tools into their own toolchain, the CI/CD pipeline, and security, I think, is the one that needs to catch up.

And I like working at Square, because I think security has caught up quite a bit here, with the heavy investment in engineering teams within the security org, but other security orgs should also be considering what it means to work with developers as well as to build services for developers.

[0:30:52.5] Guy Podjarny: Yeah, absolutely, and I think this is a heavily recurring need that indeed comes up. You know, I have this perspective around this shift of IT, as you describe, to a service, where really there is a whole set of needs that have to move from IT security solutions to application security solutions. It is the same concern: it is around an unpatched server, it is around the network decisions, around the misconfiguration.

You know, we’re probably doing more of them than before, but they no longer require IT security solutions. They require an application security solution, basically bringing them into this world of product security, of application security. I am also curious, to an extent because of how you described this sort of partnering at Square between product security and development: do you hear more about security champions programs, or practices like that, that connect the two entities?

Do you see a rise in that type of partnership? Do you find it is needed at Square on top of the partnership you already have?

[0:31:57.9] Mike Shema: So, it is one of those things that I’d love to see develop and have more of at Square, primarily in the sense of having a venue for communication and collaboration for security-minded engineers who want to be tuned into what the security org is doing, as well as just having those conversations about what is going on – you know, here is a breach in the news, how does that affect us? And sort of having a, call it, pre-mortem.

Let’s have a tabletop exercise and say, “What if? What if this breach happened to us and we did have an S3 bucket open, or what if all of our engineers’ auth tokens were compromised in some other SaaS we are using?” Those would be really fun to see. So, we don’t have that yet here, but I would love to see it. Prior to Square, I also worked on product security at Yahoo, and Yahoo was a company that did actually have a pretty good – well, we went through a couple of versions of it, but it is called a security champions program, and Yahoo was pretty cool: they branded their security engineers The Paranoids. So, they had what was called the local Paranoids, and that was successful, and I thought that especially matters for a large company, because you can’t really scale a role like mine. I could, if I wanted to, have every single day, every single hour, yet another threat modelling conversation for this other particular service, this other particular product, but ultimately that is not really the smartest way, and you can burn people out that way. So, when you are approaching a new problem with manual processes – something like threat modelling, or just discussion and, call it, security awareness – you do need people, but those people should be coming from the engineering org, and that’s absolutely where a security champions program comes from. One of the challenges is that it’s great for self-selecting.

Because people who are interested in security will be like, “Yeah, I would love to talk about that. And, you know, what is going on? Hey, I heard about this really cool presentation out of this year’s Black Hat, this year’s DEF CON,” et cetera, and they want to talk about that. One of the challenges, though, is how you then get to other orgs and other engineers and bring them in, incentivize them, when they perhaps have the desire but don’t necessarily have enough time, or they may just ask quite bluntly, “What’s in it for me? I am being measured on how much code I write.” That’s not exactly the case, I’m being a little bit facetious, but you know, if they’re measured on how much code they write, how many sprints they’re successfully delivering on time, having time to talk about security doesn’t play so easily into that. So that is one of those things that is always a hard problem to solve.

[0:34:34.4] Guy Podjarny: Yeah, I have definitely been hearing more and more about security champions on this podcast and from other security leaders that I talk to here – and hopefully, if you’re following this podcast, you will have heard a bunch of those in our collection of episodes – but there are increasing patterns shaping up around allocating some time. We had Geoff [Kershner] from Medallia talk about how to actually get a Stanford credential to developers that participate in that.

And others too – like Twilio, they have a progression process for kind of certifying you, and one of them, I think it was Medallia, has a rotation element to it. There are some interesting practices shaping up in the industry; hopefully we can all learn from one another. I probably have more questions for you, Mike, but I think we’re really starting to stretch the amount of time here. So, before I let you go, let me get one more bit of advice.

If you have one piece of advice – it can be a pet peeve you want people to stop doing, or just one top tip that you want to give a team looking to level up their security foo – what would that be?

[0:35:36.8] Mike Shema: So, I’ll turn the pet peeve into something more positive, as well as a bit of alliteration there, but that would be the arguments, discussions, conversations around which programming language we should use. Honestly, as I was saying earlier on, I love my C++. There are plenty of, call it, memory safety issues – I have written my share of them, as [inaudible 35:57] has pointed out to me. So, let’s not really worry anymore about making fun of a particular language – except for Perl, sorry.

Instead, focus more on: can it be readable? And that’s why I am actually joking about Perl – whatever code you are writing in, is it readable? Is it maintainable? So I would say do less worrying about the choice of language and more about whether the language gets you to a point where you have simple linting, where you can have git commit hooks that point out common errors, common mistakes, and – honestly, as someone who enjoys writing as well as reading – is the code nice to look at?

Can you hand it to someone else without a ten-page README on it with a bunch of paragraphs, and can they, for the most part, look at it and understand what is going on? Because especially – Square is no stranger to this – re-orgs happen, ownership changes within code, people come and go, and the legacy code builds and builds. I mentioned working at Yahoo and, you know, at a company that’s been around for 20 years, you are going to have legacy code.

But if you have code that is mostly scrutable, someone can read it, and you can lint it pretty easily to find simple mistakes – I think that’s really what I would say, go all in on that – then hopefully, as your security champions become more aware, smarter, more knowledgeable about security topics, you can improve the design, improve the architecture. But just start with simple linting.
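
As a tiny, hypothetical illustration of the “git commit hooks that point out common mistakes” idea – most teams would use an existing pre-commit framework, and the choice of pyflakes here is just an example – a hook could lint staged Python files and block the commit on findings:

    #!/usr/bin/env python3
    # Hypothetical .git/hooks/pre-commit: lint staged Python files before committing.
    import subprocess
    import sys

    def staged_python_files() -> list:
        out = subprocess.run(
            ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [f for f in out.splitlines() if f.endswith(".py")]

    def main() -> int:
        files = staged_python_files()
        if not files:
            return 0
        # Any linter works here; pyflakes is used purely as an example.
        return subprocess.run([sys.executable, "-m", "pyflakes", *files]).returncode

    if __name__ == "__main__":
        sys.exit(main())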

[0:37:26.5] Guy Podjarny: Yeah, perfect. That’s kind of the core, the basics, and very important to build on. Mike, this has been a pleasure. Thanks a lot for coming onto the show.

[0:37:35.9] Mike Shema: Thank you very much. I really enjoyed chatting with you. This was fun.

[0:37:39.2] Guy Podjarny: And thanks everybody for tuning in and I hope you join us for the next one.

[END OF INTERVIEW]

[0:37:45.2] ANNOUNCER: Thanks for listening to The Secure Developer. That is all we have time for today. For additional episodes and full transcriptions, visit thesecuredeveloper.com. If you’d like to be a guest on the show or get involved with the community, you can also find us on Twitter at @mydevsecops. Don’t forget to leave us a review on iTunes if you enjoyed today’s episode. Bye for now.

[END]