
Season 1, Episode 3

Security From The Start With Sabin Thomas

Guests:
Sabin Thomas

In episode 3 of The Secure Developer, Guy is joined by Sabin Thomas, VP of Engineering at Codiscope, where he creates tools that help developers build and deploy secure code faster. The two discuss the difficulties presented by the accelerating release of new tools and frameworks, the problem of too many sticks and not enough carrots, and the benefits of designing with security in mind from the start.



“Sabin Thomas: Because these frameworks are coming out so fast, the ability to spend time to become really good at this is lacking. We want the developer who is at the forefront of this to go down that journey and make it more secure.

Guy Podjarny: A lot of what we struggle with around security in the development community is that it's all sticks and no carrots.

Sabin Thomas: Anything you write should be thought about. You should expect that there are bad people out there that would want to expose things. If you have a security designated role and make that part of that design, before any code gets written, that has a great impact.”

[INTRODUCTION]

[0:00:36] Guy Podjarny: Hi, I'm Guy Podjarny, CEO and Co-Founder of Snyk. You're listening to The Secure Developer, a podcast about security for developers, covering security tools and practices you can and should adopt into your development workflow.

The Secure Developer is brought to you by Heavybit, a program dedicated to helping startups take their developer products to market. For more information, visit heavybit.com. If you're interested in being a guest on this show, or if you would like to suggest a topic for us to discuss, find us on Twitter @thesecuredev.

[INTERVIEW]

[0:01:07] Guy Podjarny: Hi, everybody. Welcome to The Secure Developer. Thanks for listening in. Today, we have Sabin Thomas with us. I'll let him introduce himself in a moment, but he has a lot of experience building and leading engineering teams, both in security companies and in companies that aren't focused on security but still have to do it. There's a lot of knowledge there that I'm eager to tap into and have you listen to as well. Sabin, thanks for coming on the show.

[0:01:30] Sabin Thomas: Absolutely.

[0:01:31] Guy Podjarny: If I can ask you to just introduce yourself a little bit, what's your history, maybe how you got into security?

[0:01:37] Sabin Thomas: Yeah, sure. I think I have more of a varied path into the security space. I would say this is one of the more recent explorations for me. My past has been over 15 years working in software, doing everything from enterprise HR systems, HCM systems, financial institutions, to e-commerce, search advertising, and now developer tools, with a very big security focus. It's been a great journey. I think this is the most interesting of all of them.

[0:02:07] Guy Podjarny: Oh, cool. Yeah, definitely no small feat to do developer tools for security. I guess you work at Codiscope today? Do you want to fill us in a little bit on what Codiscope does?

[0:02:16] Sabin Thomas: Absolutely. My role is VP of Engineering at Codiscope. It's been more than a year and a half since Codiscope's inception. We are based in Boston. Primarily, we are a developer tool company. We do a number of things, and one of them is, specifically, security tools aimed at developers. The people we have in our company have almost six to seven years of experience working on developer security solutions for other stacks, Java, .NET, PHP, so we have a good amount of experience in that type of realm, that type of field. We also have a specific division of our company that's focused on e-learning content and educational materials, specifically security-themed courses, helping developers get better and improve their security. Over the last year, we've been refocusing our efforts specifically on JavaScript and a different model of understanding code and making developers code securely.

[0:03:18] Guy Podjarny: Okay, cool. I actually have a whole bunch of questions that come out of that.

[0:03:21] Sabin Thomas: Yes. Yes, sure.

[0:03:23] Guy Podjarny: I guess, first of all, you mentioned developer tools and the focus on developers. Clearly, we need security knowledge. There are a lot of problems out there. Why developers? Why developer tools versus, I guess, the typical infosec and security tools?

[0:03:35] Sabin Thomas: Yeah. That has more to do with the genesis of Codiscope, I would say. A little bit of background there. Codiscope, as a company, was spun out of a company called Cigital, which has close to 20 years of experience in application security consulting. They've been doing this for a long time and they have a great enterprise footprint. The reason for the spin-off was that as a product, we would have that much more impact if we were able to touch the developer. Traditionally, security, especially in enterprise companies and regulated industries, has been mandated top-down by management, or the security team, or the CISO, and these companies certainly understand the need for it.

There's an obvious risk in shipping insecure code, and they would turn that around and mandate that the development team use the tooling, the processes, and so on. We felt there was a different approach, where we could go directly to the developers, have that experience be very native to their development environment, and really make some impact there, to the point that they feel it's part of their tool set. The aim is still the same. We still want secure code. We want the developer who is at the forefront of this to go down that journey and make it more secure. That's where we feel, I guess, the most stickiness in our approach.

[0:04:52] Guy Podjarny: Yeah, it makes sense. I think as the pace of development grows, the impact of an individual developer, just the throughput, how much a single developer can achieve today in terms of new functionality, becomes staggering, right? You can do a ton of things, which is awesome. It also implies that really, anybody outside of development is just not able to keep up, let alone a security team that's understaffed, or just facing a shortage of talent.

To an extent, we really have no choice but to get developers involved. It becomes not really a question of whether developers should embrace security and start applying some security best practices, but rather, how do we make that happen? Clearly, it's a necessity if we have any chance of winning, of achieving some form of security. A lot of it is just about how you get developers engaged. I really like that, and we do the same at Snyk, right? We're focused on saying, these are developer tools that do security. I feel just by that statement, there's an indication of focus, which hopefully I'm seeing a little bit more of, and hopefully it keeps growing in the space.

[0:06:07] Sabin Thomas: Yeah. I mean, just to match the need as well. The need for developers is at an all-time high. Everything everybody does has a software component to it. That's one piece of it. The other piece is that developers have a five-second attention span, so whatever framework they're working in, the next one is obviously cooler and better. With that churn across frameworks and that repetition, I think developers are just prone to doing things incorrectly.

Not because they're malevolent from the start, or they have intent that way. It's just that the pace and keeping up with these things means you can't be an expert at everything. Compare the Java developer who's been working with it for 20 years and has basically covered every aspect of Java and its toolset, to the developer who's been working with Node.js, Swift, Go, take your pick, for the last two years.

[0:06:58] Guy Podjarny: May not know where the pitfalls are, right? May not have that experience.

[0:07:01] Sabin Thomas: Yes. I think, with the amount of time you have to spend keeping up with all this, you're not able to spend the time to really understand it, to get to know what you're coding and make it secure. Tooling is even more important.

[0:07:14] Guy Podjarny: When you say it's developer tools, practically speaking, you've seen at Cigital and elsewhere people building pretty much the same security tools, especially the education tools, right? It might even be the same tools. But here at Codiscope, you're trying to make developer tools that do security. How do you see that manifest in the product? I mean, what makes a security product more compelling to a developer?

[0:07:38] Sabin Thomas: Yeah. I think a lot of that is tied into a product we launched a few months ago, called Jacks. Jacks.codiscope.com. Definitely try it out. The way we approached this product was very different from how we did our past products, the understanding being that we wanted to approach developers first, and again, not make it something that was mandated onto developers.

In that model, we really looked at GitHub, New Relic, a lot of other tools that have had a good amount of developer traction, where the tools were just simple enough for developers to understand where they fit in their ecosystem. That model was pretty solid. We took a look at that. We understood where we wanted to fit in. We also understood that, as part of our mission to have developers code securely, the educational aspect was very important.

I think developers certainly want to know when they're doing a good job, when they've been able to work with the framework correctly, when they've been able to do the right things in their code. What's missing in the tool sets out there right now is an ability to track that. There's no ability to say, “Hey, you've actually become a pro programmer in Scala and you've incorporated packages correctly.” That type of analysis, that type of tracking, is missing. We felt that combining the need for a secure tooling experience with the educational experience would establish the kind of environment a developer would want.

We approached it that way. That was very much part of our product design from the very start. We took a brand-new, fresh look at it with Jacks. We spent a good amount of time understanding the user experience, what developers would really want, what would accelerate them. How would we accelerate that dev cycle, instead of being a deterrent, or a new body of work?

[0:09:29] Guy Podjarny: Yeah. Not being a gate, right?

[0:09:31] Sabin Thomas: Yeah. Which is, I would say, sometimes the case with most security tools. There are findings, and you now have to spend time to work through them. We want it to be an accelerant. We're still tinkering with it. We measure everything about Jacks. We track and try to understand, is a developer faster now than he or she was before the product.

[0:09:52] Guy Podjarny: I had this really good chat with Camille Fournier, who's a CTO in New York. She had this quote where she said she feels that, in the case of security tools, you pay to get more work. You pay and you get more work out of it, and that's not what you want. If you're paying, yes, you want to achieve better security, and maybe you are, but you want the tools to make things easier, not harder. Not to send you down that path.

[0:10:15] Sabin Thomas: Just looking at that as well, it's a tricky thing to do. I mean, to find out if a developer has really done this correctly takes a good amount of analysis. It takes a good amount of understanding of the behaviour. We're still finessing that. I don't think there's necessarily one right way to do it. There's certain nomenclature we try to stay away from across all our products, which is anything that says you fixed something in response to a vulnerability that was triggered.

Unless we're absolutely sure, we don't want to say that a fix has happened. What we can tell is, these packages look like you've done this correctly. This code you've written looks like you've done this correctly. That's the path we're going down. We don't want to be too forward in saying that you've done something correctly without making sure that's absolutely right. In this early version of the product, these are things we're still figuring out. We'll iterate on that. That is certainly the mindset, the goal of where we want the product to be when it's mature.

[0:11:14] Guy Podjarny: Yeah. I think when you talk about developer tools as well, first of all, there's the five-second attention span aspect, which is pretty much how much time you get, when you build a developer tool, to demonstrate value and show ease of use. Historically, well, not historically, at present, with the vast, vast majority of security tools today, you need loads more time than that just to get them up and running, let alone start operating them on your system. You have to have quick onboarding.

In general, there's just an understanding that the exact same core technology, be it security static analysis, authentication, or finding vulnerabilities in dependencies, any of these components, might be at play, but the surrounding product and how easy it is to use and to get going is quite critical.

I feel like there's an interesting conversation to be had around education and tooling and the virtuous cycle between them, right? How much education do you need to do upfront, versus how much can tooling do for you and almost reduce the need for you to know? How can the two work in tandem to, on one hand, reduce some of the effort for you, just do some of the stuff for you and keep things secure, and at the same time, surface things to you in line with the work that you're doing? Instead of you going off, for instance, to do a three-day course and learn about secure coding, how can some of that education around mistakes be a natural part of coding, where as you code, you identify these things?

You just repeat that enough that you know, ahead of time, not to make that mistake, allowing you to basically take it up to the next level. I think an aspect of that is with automation tools and with static analysis and such. Some of that, I think you do with, I don't know if it's SecureAssist, right? You have the in-line, in-the-IDE type of security education. Do you see that paying dividends? Do you see people over time making fewer of the same mistakes?

[0:13:21] Sabin Thomas: Yeah, certainly. With SecureAssist, which is one of the products in our portfolio, the intent is very much that: be as native to the developer experience as possible. The way it does that is by being an IDE plugin. Right now, it supports IntelliJ, Visual Studio, Eclipse, some of the major IDEs. As the developer is in there coding, those are the right opportunities where we can inject the right type of education to do things correctly, especially in response to something that's just been done.

I think teaching at the point where somebody's been able to see for themselves that they've made a mistake, or that something was incorrect, that is the point at which it becomes sticky. Sometimes, the way developers write code, you can almost attribute the issue to a syntax-type problem. But there are more complicated vulnerabilities, or inaccuracies, that can happen that may not work so well in that type of experience. This is where we're finding that, even across languages, we see that manifest. If I'm talking about a Java code base, I can be sure all of it is typically in just one repo.

In the analysis that SecureAssist does, we can find out everything from the way you've initialised your application routes, or web routes, to the way you've initialised your database. All of that is in a single repo and we can figure it out. In more modern, more dynamic frameworks, I would say, even with JavaScript, that's split across 25 different repos. Making that connection can be a little complicated. To get that full, holistic understanding of the code you're writing, the corrections that we offer have to be in line with that.

I would say there's certainly still a need to educate developers at the point where we've noticed something is amiss, and as you're coding is the right time to do it. But we'd be remiss if we didn't do that holistic understanding. That's where a tool like Jacks is a little different, in that you invoke it primarily on your cloud-based repos.

I think we're still figuring out what the right model is. I think it's a mix of both. It's a little challenging for newer frameworks, because IDEs are almost a non-existent concept, I would think, for a Node.js developer, especially somebody who started with Node.js as their first programming language. I would say that an IDE is probably the last of their concerns. It's usually a text editor, or something like that.

[0:15:52] Guy Podjarny: Yeah, they're going to be in Atom, or in Sublime –

[0:15:55] Sabin Thomas: In Sublime. Right.

[0:15:55] Guy Podjarny: or in some form. Yeah.

[0:15:56] Sabin Thomas: In that scenario, for a developer who's been using something like this, they don't really expect a lot of the IDE. They don't expect a lot of feedback out of the IDE. Where they're expecting feedback, and possibly educational material, is a little later on in the build chain. That's a little more native to them. Whereas, if I talk to a Java developer, they want everything to happen in the IDE, the ability to debug, the ability to run debug builds and do stack traces.

There's a different nature of developer. We want to be the answer to both of them. This is where we'd like to work with the community to see if there are other things beyond the build chain, beyond the IDE, where we can inject that.

[0:16:34] Guy Podjarny: Yeah. I think a developer is not one thing. There are different developers, clearly. For starters, they're just people, and people work differently. But also, the norms, or the best practices, for how to develop software differ. In the world of JavaScript, there will typically be slightly more continuous processes. There'll be a higher percentage of people that have some CI, test automation, continuous deployment processes, compared to, maybe, the average in Java. You're right that at the same time, the development environments themselves, the debugging tools, those are anywhere from less mature to non-existent, and people are still building them.

Also, you mentioned this notion that sometimes you can't identify an issue in a smaller context. Oftentimes, the absence of a security control, like the fact, for instance, that you did not validate input, or that you did not encrypt some piece of data, is not by itself a security flaw right there and then, right? That's not a vulnerability. It's when you don't have that security control anywhere throughout the flow of an action that, suddenly, your system becomes vulnerable, right? There are some learnings there.
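
A minimal, hypothetical sketch of that point, not from the episode: the route, the findUserByName helper, and the users collection below are made up. The idea is that no single function is "the vulnerability"; it's that user input crosses the whole flow without any layer validating it.

```javascript
const express = require('express');

// Layer 1: the route handler assumes the data layer will validate the input.
function buildApp(db) {
  const app = express();

  app.get('/users', async (req, res) => {
    const user = await findUserByName(db, req.query.name);
    res.json(user);
  });

  return app;
}

// Layer 2: the data layer assumes the route already validated, so the raw
// value (which, with extended query parsing, may even be an operator object
// like { "$ne": "" }) goes straight into the query.
async function findUserByName(db, name) {
  return db.collection('users').findOne({ name });
}

// Validating at either layer (or, better, both) removes the flaw, for example:
// if (typeof name !== 'string') throw new Error('invalid name');
```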

[0:17:42] Sabin Thomas: Yeah. Specific to instruction, though, there are many mechanisms that have been valuable to the developer. There's certainly the old-school method where you have a three-day instructional seminar on site that all your enterprise developers are now required to take. There's e-learning courseware that they can take on their own time and measure their progress that way.

I think the challenge with any instructional mechanism is that without the appropriate tracking and the appropriate metrics, it's very hard for developers to know what they've gotten out of it. This is where we're looking to answer that with Jacks: because of the way you've coded and because of the way you've interacted with our courseware, we have an understanding to say that you may have a predisposition to coding these insecurities, or these vulnerabilities, into your new project, and we want to avoid that. That's certainly the goal.

[0:18:37] Guy Podjarny: I like the notion. It's almost like continuous education, right? It happens as you do the work.

[0:18:41] Sabin Thomas: Absolutely.

[0:18:42] Guy Podjarny: It's not just that security is a moving target. In general, the ability to absorb information in one sitting, when you might not have a chance to apply it, and then remember it a few months later when you do come across it, is hard. But if it's constantly there, constantly pointing out decisions and questions for you, you can evolve.

[0:19:03] Sabin Thomas: Just to close that out as well, one of the things we've done in talking to our customers that have been using our products for a long time, the thing we're always asking is, what level of security training is happening currently? Then also, in our developer outreach, when we're talking to devs or new graduates, the question we ask is, what kind of security training have you received in college? The answer to all of them has been none. We've actually gotten responses from customers asking if we could create security training for college-level, academic-type scenarios.

The fact is, we are almost 40 years into software education, and I would say 15 years into mainstream software education. The fact that we still don't have that speaks volumes. What's happening is a lot of people are learning this on the job. They're learning it as a result of an incident that has happened. At that point, it's almost, I would say, fatal to have to learn from that type of scenario, so there should be a better way to do it.

[0:19:59] Guy Podjarny: Yeah, precisely. You should be able to pre-empt them. I guess, maybe switching to security education itself, you have these tools, you educate developers about security solutions, and you see a lot of actual mistakes, or a lot of interest in specific ones. Can you share some insight into some of the common, or the most common, mistakes you see, or conversations you have?

[0:20:22] Sabin Thomas: Yeah. It varies by the nature of the developer. What we find among our clients is that they will engage us for a certain type of curriculum. The curriculum can vary depending on the client. A client might want introductory security training, defensive programming in Java, how do you prevent buffer overflow vulnerabilities in C++? Basic stuff that is also a good reminder. It checks off certain boxes for the security team, to make sure the developers are doing this year after year, to remind them about it.

Our focus is to make that material fresh and relevant, and to make it contextual to what you're coding in. We find, from our experience in e-learning and the type of questions we're getting from our clients, the type of use cases they want, that e-learning still has a good way to go to make it that much more relevant. That's what we're taking to heart with our product development on Jacks. The other question you had was –

[0:21:23] Guy Podjarny: Yeah. That's very useful on the types of security education that people actually, I guess, explicitly ask for. On the flip side of that, if you look at mistakes, through Jacks, or through the IDE assist, which mistakes do you see as most common? What do people not do, or do incorrectly, that exposes them to security problems?

[0:21:44] Sabin Thomas: Yeah. One of the things that we always find, especially with Node.js repos, whether they're using Mongoose or the native Mongo driver, is basic injection. SQL injection in MySQL, or query injection in MongoDB, still happens. It's not so much a developer vulnerability that we find; it's just an inaccurate use of the product, I would say. This is, again, part of what I was saying earlier. Because these frameworks are coming out so fast, the ability to spend time to become really good at this is lacking.

Sometimes it's also a result of documentation just being missing. If you use the find operator, or if you use the where operator in MongoDB, and you don't filter your input, that's a big problem at that point. I think frameworks are getting a little better. If we're looking at Node.js, there's a big difference between the people that use Express and the people that use hapi.js. There are people who are using hapi.js from the very outset, and those repos are pretty good to start with, because validation is a big part of hapi.js.
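
As a concrete illustration of the $where pattern being described, here is a minimal, hypothetical sketch; the collection, field names, and helper functions are made up, and the validation uses Joi, the validation library the hapi ecosystem builds on. `db` is assumed to be a connected MongoDB database handle.

```javascript
const Joi = require('joi');

// Unsafe: user input is concatenated into a $where clause, so crafted input
// can change the query, and $where even executes JavaScript on the server.
async function searchUnsafe(db, userInput) {
  return db.collection('items')
    .find({ $where: `this.name == '${userInput}'` })
    .toArray();
}

// Safer: constrain the input first and use a plain equality query, so
// operator objects and injected strings never reach the database.
const nameSchema = Joi.string().alphanum().max(64).required();

async function searchSafe(db, userInput) {
  const name = Joi.attempt(userInput, nameSchema); // throws on invalid input
  return db.collection('items').find({ name }).toArray();
}
```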

I think that mindset is really good. We encourage developers to go that way. ReDoS is also a big thing. RegEx denial of service, that always happens. People just don't know what to do with it. I'm surprised at the number of developers that come out of really strong programs at a lot of these schools and still say, “RegEx is a little complicated for me.” They need help.

[0:23:12] Guy Podjarny: Just to give a little bit of info for those who don't know it, regular expression denial of service, or ReDoS, vulnerabilities are the case where executing a regular expression takes a very long time. A regular expression, with all the back references and logic it needs to match, can take a very long time to run, even on a small string, definitely a non-linear amount of time compared to the length of the string. If you don't restrict the length of an incoming string, which is something you often omit, it's fairly easy to get to the point where that thread spins quite a bit.

Especially if you're in JavaScript, where it's single-threaded, or not exactly single-threaded, but it scales by events, not by threads, it can fairly easily take down a system, a machine, and introduce denial of service.
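
A tiny illustration of that pattern; the regex and input below are textbook examples of catastrophic backtracking, not a specific finding from the show.

```javascript
// Nested quantifiers make the engine try an exponential number of ways to
// split the 'a's whenever the string almost matches but ultimately fails.
const evil = /^(a+)+$/;

const input = 'a'.repeat(30) + '!'; // 31 characters, can never match
console.time('redos');
evil.test(input);                   // can take seconds; each extra 'a' roughly doubles it
console.timeEnd('redos');

// Because Node.js handles requests on a single event loop, that one call
// blocks every other request while the regex engine backtracks, which is the
// denial of service described above. Bounding input length, or using a
// linear-time matching approach, avoids it.
```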

[0:23:56] Sabin Thomas: Another thing that we also see is incorrect use of crypto libraries. I'm surprised by the amount of code where people use Math.random to seed a key algorithm, or something like that. It's just the wrong thing to do, but they haven't been taught anything else. Unless they get the type of education that pre-empts them from using Math.random, they will continue to use it. That's one of the popular things that we see as well.
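
A minimal sketch of that mistake and the standard Node.js alternative; the key size and variable names are illustrative, not from the episode.

```javascript
const crypto = require('crypto');

// Risky: Math.random() is not a cryptographically secure generator. Its
// output is predictable, so key material derived from it can be reconstructed.
const weakKey = Buffer.from(
  Array.from({ length: 32 }, () => Math.floor(Math.random() * 256))
);

// Preferred: use the platform's CSPRNG for keys, tokens, salts, and IVs.
const strongKey = crypto.randomBytes(32); // 256 bits of unpredictable key material
console.log(strongKey.toString('hex'));
```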

[0:24:18] Guy Podjarny: I think education is an aspect, and tooling, or better defaults, is an aspect as well. In the case of the Mongoose usage, I feel the big platforms, Angular, React, they come in and they put a lot of emphasis on security. While imperfect, they still have very secure defaults. This notion of secure defaults, of taking on responsibility, of not shrugging it off to say, “This is not my responsibility,” just makes it harder for you to make a security mistake with many of these components.

Many of these packages just don't do that. Sometimes they're just small. They might not have the capacity, in terms of the amount of investment that's been put into them. They might not have the security knowledge. Sometimes the authors perceive them differently, because they're not a platform, but rather a component. Somebody says, “Yeah, of course I don't filter for SQL injection in the relevant functions in Mongoose. It's not meant to be used that way.” That piece of information about how it should and should not be used just goes away. Really, I would hope that over time, we evolve into some model that is a little bit more defence-in-depth oriented.

To an extent, I find that with the npm package, or the Maven package, or in general these open-source packages, as you use more and more libraries and you chain them together, because these libraries are developed out of context with each other, they're just developed independently, there's no guarantee that the use of those packages is going to be done correctly from a security perspective. The only way to help assure some of that security is to build it in, to actually have defence in depth, to have every one of these components enforce its own security restrictions. Even if it means that you've gone through seven of the same checks, you eventually have a shot at not letting an attack through.
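
To make that defence-in-depth idea concrete, here is a small, hypothetical utility module that enforces its own restriction instead of trusting its callers; the function name and check are illustrative, not a real package.

```javascript
const path = require('path');

// Resolves a user-supplied file name inside baseDir. Even if every caller is
// expected to validate `name`, the module re-checks, so one missed check
// somewhere upstream in the chain of packages doesn't become a path
// traversal hole.
function resolveInside(baseDir, name) {
  const root = path.resolve(baseDir);
  const resolved = path.resolve(root, name);
  if (resolved !== root && !resolved.startsWith(root + path.sep)) {
    throw new Error('path escapes the allowed directory');
  }
  return resolved;
}

module.exports = { resolveInside };
```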

[0:26:09] Sabin Thomas: I have to think about what's in the mindset of a developer. If I, as a developer, am coding something up, a blog app, or a very simple website, I can understand the mindset of saying, this is not something that's going to be used by a lot of people, why would I need security? I know the way I would have thought about something like this 15 years ago has changed a lot from how I think about it now. Anything you write should be thought about that way.

You should expect that there are bad people out there that would want to expose things, or maybe use your application in a completely incorrect way, not because of nefarious intent, but just because it was there. I think that mindset is changing, and that's a good thing for developers to understand. Anything you write, anything you put out there, anything you deploy, no matter what it is, from a simple non-profit charity website to running payment transactions, the vectors are still pretty much the same. The threat model is still pretty much the same. That is an important thing to keep in mind for fresh, young developers, I would say.

[0:27:15] Guy Podjarny: Yeah. I think the other aspect of security would be transparency. It's about being declarative about which security aspects you are and are not taking on. This ties back to the earlier conversation we had around how there are some security threats you can't address within a single repository, and now we continue it, I don't know if it's a different granularity or not, but not about a repository, rather about a package, each of these components. It makes sense; sometimes your package might be 50 lines of code, and making it enforce security restrictions would reduce its functionality, as well as make it maybe 200 lines of code, and that starts being a little cumbersome. But maybe the right way to do it would be to make these packages declare whether or not they enforce security. Just by that sheer statement, you can now ask a question: in this flow, am I calling a package that does, or does not, apply security controls?

Or, since a security control is probably not a black-and-white flag, just be declarative about which security constraints you are taking on, and which ones you are not.

[0:28:20] Sabin Thomas: Yeah. To that extent, there may be a bit of a struggle. I know organizations like OWASP try to establish, across all applications, these are the common attacks, these are the common vectors of vulnerabilities. I think that's good to understand, because we're still seeing it happen; the OWASP Top 10 from five or eight years back is still relevant today, even though it's a different environment for applications. I wonder, though, if in setting up this common language, this common declarative presence for your package, we'd spend a lot of time standardising something and forget about the different use cases for each. That's something I'd have to think through. I don't necessarily think it's an easy problem to solve, but I think it's relevant. I just don't know the path to get to the final answer.

[0:29:07] Guy Podjarny: Right. I guess, when you talk about declaring something, you want to declare it in a non-custom fashion. If I want to say, “I do not allow invalid inputs through. I have input validation.” I might have bugs, maybe I have vulnerabilities there, but at least I have made the attempt to sanitise input, and I use some form of standard to indicate that I did. Then maybe later on, a tool like Jacks could go and say, well, you have this chain of actions here that you've performed, none of which claims to be doing input sanitisation.

I mean, Jacks has the sophistication, I guess, to try and deduce that itself through its analysis of the code, but it would be nice to layer in some declaration. Over time, it also allows us to celebrate. A lot of what we struggle with around security in the development community is that it's all sticks and no carrots. There's no way to celebrate the successes. If you could somehow stamp your package to say, this one has good security controls, clearly, you'd need to implement them, but assuming people don't abuse those flags, it's nice to say, well, a good quality package has these seven security things that it does, and you would aspire as a developer to have all of those flags. If you did, then you'd have some thumbs up on your repo and on your package, and people could recommend you, or applaud you for it, right? You'd get a little bit of that value.
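
As a purely hypothetical illustration of the kind of declaration being sketched here, a package manifest could carry a field like the one below. No such field exists in npm's package.json today; the package name, field name, and flags are made up.

```json
{
  "name": "example-markdown-parser",
  "version": "1.0.0",
  "security": {
    "input-validation": true,
    "output-encoding": true,
    "query-escaping": false
  }
}
```

A tool like Jacks could then, in principle, walk a flow and flag it when no package in the chain claims "input-validation", and the same flags could serve as the carrot, the thing a developer aspires to earn for their package.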

[0:30:35] Sabin Thomas: Yeah. I think the rating business is good. It'll certainly help, especially when you relate one package to another, because that's where those metrics make the most sense. If I were to choose between this markdown parser and another one, and I look at those ratings, my decision process is a lot clearer. I think that certainly helps. What's going to be challenging is defining the first few packages that get it. I certainly think this is where the community can really help. If that is going to be of value, and the community feels that, we should make it happen. By all means, we should all get together to figure out a common protocol, so the community can understand what they should be using, and then we, in the business that we're in, can make that work. I think that's certainly of value.

[0:31:21] Guy Podjarny: Yeah, definitely. And I think we need to keep shouting that from the rooftops, and keep getting people to engage and care about security, while trying to find these actionable next steps that you could consider doing.

[0:31:33] Sabin Thomas: It's an interesting concept as well, crowdsourcing security, because if that were easy to do, we wouldn't need to exist. The challenge we have is that security has always been an afterthought. For firms like us, in what we do, it's our responsibility to take that on, like you said, shouting it from the rooftops, being that leader. At the point that the community understands it, it becomes a more active engagement. But it's a struggle.

[0:32:01] Guy Podjarny: It is. Always is. Never going to end, but hopefully improves.

[0:32:04] Sabin Thomas: Yes, certainly.

[0:32:06] Guy Podjarny: I think we're about out of time. Before we part, I'd like to ask you maybe one tidbit of a question.

[0:32:12] Sabin Thomas: Sure.

[0:32:12] Guy Podjarny: Say, just from your experience and what you're seeing, if you're talking to a development team and you have one security aspect, or one thing that you think they should apply to make their system more secure, what's your favourite? What's the current thing that you're championing the most?

[0:32:29] Sabin Thomas: There are a lot of answers I can give to that. The official answer would be to use jacks.codiscope.com.

[0:32:34] Guy Podjarny: Of course. Of course.

[0:32:35] Sabin Thomas: It's a great tool that will help you do that. But this is something that I've indoctrinated among the teams at Codiscope: building security in touches different parts of your development process, and the one I've seen have the most impact is the design step. If you have a security-designated role, or somebody who can take that on and champion it, and make it part of the design before any code gets written, then it's already part of the mindset. It's something that's sitting there. When devs are coding, it's always at the back of their mind, but it sits there. That has had a great impact on the way I've seen the developers on my teams code.

I would say, as part of your design, make a security review step an explicit part of it, and you will see gains from that. That's just coming from personal experience. I mean, there are certainly more answers to that, but it's the thing I have seen have more impact than anything else I've seen before.

[0:33:30] Guy Podjarny: Yeah, fully agree. It's a high-impact step to just acknowledge that security is a core component of that process, of your flow and of that design. Thanks a lot, Sabin, for coming.

[0:33:42] Sabin Thomas: Great. Yeah.

[0:33:43] Guy Podjarny: For those who are listening, you definitely should go and check out Jacks. It's jacks.codiscope.com, and it will probably tell you a few things about your JavaScript code that you may not know and definitely should, maybe about how you're using Mongoose, maybe something else. Thanks again, Sabin, for coming onboard, and good luck.

[0:34:00] Sabin Thomas: Absolutely. This is a great service you're doing. I think the more we can benefit the community, things like this really help. Thanks a lot, Guy.

[END OF INTERVIEW]

[0:34:07] Guy Podjarny: That's all we have time for today. If you'd like to come on as a guest on this show, or want us to cover a specific topic, find us on Twitter @thesecuredev. To learn more about Heavybit, browse to heavybit.com. You can find this podcast and many other great ones, as well as over a hundred videos about building developer tooling companies, given by top experts in the field.