
Season 5, Episode 61

The Rise Of HTTPS And Front-End Security Toolbox With Scott Helme

Guests:
Scott Helme

For this episode of The Secure Developer Podcast, we welcome Scott Helme to chat with us about front-end security. Scott is the force behind Security Headers and Report URI, and he is also a Pluralsight author and an award-winning entrepreneur! We get to hear about Scott's professional trajectory since leaving college, the interesting developments and changes he has made along the way, and his current work with his different projects. Scott then explains the service that Security Headers provides, something that he created to effectively scratch his own itch. The educational value it offers is quite remarkable, and our guest does a great job of explaining exactly how it functions and its ease of use. From there he turns to Report URI and explains how this company complements the services of Security Headers. Our conversation progresses onto the topic of HTTPS and the encouraging increases that have been happening for years now in terms of adoption and, ultimately, security. This is something that Scott has been very excited about and happy to see, as it shows a general trend in the industry towards better, safer practices and standards. The last part of our conversation is spent with Scott sharing some thoughts on organizational approaches to security and what he sees in the near future for the space. For all this and then some, tune in today!


[0:01:10.1] Guy Podjarny: Hello everyone, welcome back to The Secure Developer. Thanks for tuning back in. Today, we have Scott Helme with us, it’s kind of unclear to say from what, you know? Works on Report URI, works on Security Headers, works on a variety of different things. Scott, welcome to the show, thanks for coming on.

 

[0:01:25.2] Scott Helme: Hey, no worries, thank you for having me.

 

[0:01:26.0] Guy Podjarny: Scott, we’re going to talk front-end security and security practices today. We’ll sort of dig into things that are a little bit more subject-matter focused than maybe some episodes where we dig into security organizations or structures. But before we dig into those areas, tell us a little bit about yourself, what is it that you do, and how you got into security?

 

[0:01:44.0] Scott Helme: I guess I got into security just through interest. My history, I did software engineering at university. I’ve been into computers forever, so I’ve always been kind of a bit of a tech geek. Did software engineering at university, graduated in 2008 in the middle of a financial crisis, so job hunting didn’t go well when I graduated. I went into tech support originally, so graduated through first, second, third line tech support and then I actually moved into quality assurance, so I went to be a junior QA.

 

Originally I was working on more the hardware side of things, so actual physical hardware device QA, electrical safety, environmental, things like that. Then I moved into software QA, which was a bit more of an interesting field for me, pushed all the way through the ranks, did a few different roles there, and I eventually moved into automated software QA. Then I was really hooking up my degree in software engineering with my interest in QA, which is kind of like the interesting breaking stuff, testing if things work, what happens if we press this button 47 times?

 

A kind of – that inner destructive child in me I think was really happy. Eventually, in the automated testing roles, back then, security was just becoming a bigger and bigger thing, and I just started reading a few blogs online and it was like, hey, if I type these special characters into the field in our application, it just explodes, and it’s like some security issue and I was like wow, this is really cool. You can do really bad stuff with just some small bits of knowledge, and I started integrating security tests into our – what was it at the time, just like a functional and regression testing suite.

 

Then I just kind of opened Pandora’s box. I just found myself engrossed in reading blogs, watching videos, and it’s so fascinating, and I started finding issues in things around me like tools or products. The biggest one that I did, back in 2013, which made really big headlines and got me into the security game officially, was with my home router. It’s got like a web interface and you can log in and change the WiFi name and stuff, and I found some really critical security issues in it, tried to tell the company, it went horribly wrong, it made national headlines here in the UK.

 

And then a pen testing firm approached me and they were like, "Hey. You’re doing this stuff in your spare time, you should come get paid for this." And I was like wow, getting paid for my hobby, that sounds like a great idea, and for anyone listening, the best way to ruin a hobby is to start doing it as a job, and then, yeah, just pen testing.

 

Pen testing was great, it was good fun and it pays well but it was just like, why are we building products and testing the security stuff at the end? This just seems really daft because someone would spend 18 months building a product and then I would come in and just kind of bulldoze it and I felt awful and it’s like, "No, go back to the beginning, this is terrible." That pushed me more into a consultancy role, which was really fulfilling to head off those issues at the pass and then eventually to kind of where I am now which is training, you know?

 

My role at the time was – I was trying to push the security knowledge as far left as we could. Pen testing is at the end, consultancy, you can come in the middle of a project, and training was basically trying to embed the knowledge at the start. So I do a lot of training now, I only do two courses, one on encryption and TLS with Ivan Ristic and then one on application security with Troy Hunt, and anyone who isn’t familiar with those two chaps should totally check them out, and that’s pretty much where I am now, really, that’s from the start to the end of Scott.

 

[0:04:47.4] Guy Podjarny: I love the path. Different guests on the show have, you know, traveled different journeys. I don’t think I’ve had anybody, you know, get in through a well-publicized hack of an existing system. I like the QA analogy, of just understanding that you’re trying to find flaws and building out from there. It definitely relates to: why find these problems at the end, you know, when you can help avoid them in the first place?

 

Scott, we will talk about, you know, broader security practices and what you should do, but what are the areas? You mentioned a couple, application security or encryption, but currently a lot of them revolve around front-end security, securing your website’s interface to the world and the page that is in the browser. How do you see front-end security fitting into the broader picture?

 

Do you feel people care about it less, care about it more? What do you find when you talk to people about their TLS cipher suites or needing to protect the browser? Are they conscious of these risks, and has there been change in the last few years?

 

[0:05:42.3] Scott Helme: I think there’s definitely been change in the last few years, and the two training courses that I do, I guess they just came from looking at the market and seeing where the gaps were; this is how I started Security Headers and Report URI: what is the thing that we need that isn’t there?

 

That’s what got me really heavily involved in the TLS training course because as we all know, HTTPS on websites over the last few years has become, it’s not become important, it was important before. I think the correct thing to say is probably people are realizing how important it now is. We’ve had huge pushes from the browser vendors in changing their UI. Search engines now prioritize encrypted pages. I think finally, 2020, is the year where we can say all websites need to be HTTPS now.

 

We have the tooling in place to actually realistically make that statement, right? All websites needed HTTPS a few years ago but it was too difficult, there were technical barriers, there were cost barriers, it was prohibitive and now I feel I can kind of finally say that and actually have the tools and everything around us to back that up for people that do it.

 

The same again with application security as well, you know? I think the relentless sequence of events playing out in the media, of companies being breached and these things happening, is perhaps not the best motivator for organizations to start taking application security more seriously, but it seems to have been effective. So I think we have this two-pronged approach, where some organizations are maturing and realizing that they need to take application security more seriously, and then others are just kind of looking around and being like wow, really bad things can happen, maybe we should get on board with this. I don’t really like those negative motivators a lot of the time, but I think they are being effective in terms of encouraging people to take application security more seriously.

 

[0:07:22.6] Guy Podjarny: And I think fundamentally, when you deal with risk reduction, some amount of the conversation needs to be about what happens if that risk materializes versus the cost of avoiding it. You talk about application security, but unfortunately that definition is not consistent across different people. What do you capture under the scope of application security?

 

[0:07:40.6] Scott Helme: I guess that’s a fair point actually, because even my experiences more recently are largely focused around the web front end, so we mentioned Report URI and Security Headers, which I guess are very indicative of where my interests lie at the moment, because I’ve followed through with those projects for the last few years.

 

[0:07:57.2] Guy Podjarny: Maybe tell us a little bit about Report URI and Security Headers because I don’t think everybody —

 

[0:07:59.3] Scott Helme: Sure, yeah. Okay, I’ll start with Security Headers because that came first. There’s a heap of different HTTP response headers that have been standardized and allow an application to control security features in the client, in the browser. One of the most common attacks that people may have heard of is cross-site scripting, where you inadvertently end up with JavaScript in the page that shouldn’t be there.

 

Headers like Content Security Policy allow you to declare a whitelist of content that should be on your page and, therefore, obviously, content that shouldn’t be on your page. If the browser loads your page and there’s a script tag there, the browser would normally just execute the script tag, because that’s what it’s supposed to do. But now, with the introduction of Content Security Policy, the browser can say, "Okay, this webpage is only supposed to have JavaScript from these three locations, but there’s JavaScript loading from this fourth location, not on the list," and the browser would block that fourth location and block that script. This is all controlled with an HTTP response header.
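
For readers who want to see what this looks like in practice, here is a minimal sketch, not taken from the episode, of a Node server setting the kind of policy Scott describes; the CDN origin and the port are hypothetical placeholders.

```typescript
import { createServer } from "http";

// Hypothetical policy: only allow scripts from our own origin and one named CDN.
const csp = [
  "default-src 'self'",                         // everything defaults to same-origin only
  "script-src 'self' https://cdn.example.com",  // scripts: same-origin or this one CDN
  "object-src 'none'",                          // no plugin-style embeds at all
].join("; ");

createServer((req, res) => {
  res.setHeader("Content-Security-Policy", csp);
  res.setHeader("Content-Type", "text/html");
  // A <script src="https://anywhere-else.example/x.js"> in this page would be
  // refused by a CSP-aware browser instead of being executed.
  res.end("<html><body><h1>Hello</h1></body></html>");
}).listen(8080);
```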

 

CSP is probably – looking at the data, it’s certainly – the most commonly used security header, so it’s kind of the easiest and the first to talk about, but there are several other response headers that control different things. For example, is the page allowed to request permissions for the camera or the microphone, if we flick this over into a privacy control now? You might say, "Well, we never have permissions for the camera or microphone in our application, we don’t need them," so you can disable them on the page. Now, if you’re loading advertisements or third-party content, that will prevent them from being a little bit naughty and devious and trying to request microphone permissions.
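
The camera and microphone control Scott mentions here is what the Permissions-Policy response header (the successor to Feature-Policy) provides; a small, hypothetical sketch in the same style as the one above:

```typescript
import { createServer } from "http";

createServer((req, res) => {
  // Empty allowlists: neither this page nor any embedded third-party frame
  // may even prompt for camera or microphone access.
  res.setHeader("Permissions-Policy", "camera=(), microphone=()");
  res.setHeader("Content-Type", "text/html");
  res.end("<html><body>No camera or microphone prompts possible here.</body></html>");
}).listen(8081);
```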

 

There’s a heap of these different headers and you can control a heap of different things with them, and they’re all focused on security or privacy. They’re absolutely worth checking out, and Security Headers was just an easy way to check them out: you can go to securityheaders.com, type in the address of a website, hit scan, it takes like two seconds, and it gives you a quick rundown of whether they have any security headers set and whether they have them set with restrictive policies or lax policies.
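
securityheaders.com does far more than this, but as a rough illustration of the idea of "do they have any security headers set", here is a sketch that fetches a site's response headers and lists a few common ones; the URL and the header list are illustrative assumptions, not the tool's own logic.

```typescript
// Requires Node 18+ (built-in fetch) or a browser console.
async function listSecurityHeaders(url: string): Promise<void> {
  const res = await fetch(url, { method: "HEAD" });
  const interesting = [
    "content-security-policy",
    "strict-transport-security",
    "x-content-type-options",
    "permissions-policy",
  ];
  for (const name of interesting) {
    // Missing headers simply come back as null.
    console.log(`${name}: ${res.headers.get(name) ?? "(not set)"}`);
  }
}

listSecurityHeaders("https://example.com").catch(console.error);
```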

 

There is a grading system, and a lot of people look at the tool and think perhaps it’s a security assessment tool. But I think it’s really important to remember that first and foremost it’s an education tool, to let you know these headers exist, whether you have them or not, and to give guidance on how to set them up. It’s not a security assessment tool; if you scan a site and it gets an F grade, it doesn’t mean it’s going to get hacked tomorrow. It is just for spreading this knowledge and information in an easily consumable way.

 

[0:10:22.9] Guy Podjarny: That’s a really good way to highlight those. I know when people get scores, it evokes all sorts of emotions. On one hand they get a bit of a competitive vibe, you know, you want the A, you want the A+, and they don’t like it when they get something different, but it might actually drive you to change on –

 

[0:10:38.0] Scott Helme: Gamification is really important, it's a very powerful motivator, you know? I think for me, it just adds a bit of fun to it, you know? It’s like, "We've got to turn these security things on, do this security stuff," and it’s like, "No, we’ve got to go get the A." That’s a much more positive experience to me.

 

[0:10:55.8] Guy Podjarny: I agree, and it’s also especially relevant in the context of security, where there’s oftentimes a lack of visceral response, you know? You can’t feel the risk that you’ve just reduced, but you can feel the score that you’ve just increased.

 

[0:11:05.8] Scott Helme: Security Headers highlights a whole bunch of security headers, I guess it’s in the name, that you should be using, with CSP at the top, but Report URI takes that to the next level. Setting a content security policy header is great and, as I mentioned, the browser will block that script and stop it from executing. It does a heap more stuff but that’s the thing people usually focus on. But when someone comes to your website and the browser, the client, is taking active measures to protect the visitor, that’s something that you should really want to know about, and that’s where Report URI comes in.

 

One of the things that a lot of these security headers can do is, when the client has to step in and take preventative measures to protect the visitor, the browser can then also call back to a reporting endpoint and say, "Hey, here’s a rundown: I loaded this page, this bad thing happened, I blocked it and the user is safe." The browser can dispatch that report to a nominated endpoint, and this is what Report URI does. You sign up for an account, most people can just run on a free tier, and then you can nominate us as your reporting endpoint. If the client comes to your website and is taking these defensive measures, blocking hostile JavaScript or whatever it may be, it will dispatch a report to us. It’s just a nicely formatted JSON payload which we ingest, and then we produce graphs and dashboards and reports on what’s happening.

I’m saying reporting is really important while I’m here running a reporting service, but this is why I founded the company and why I started it: because to me, deploying a CSP is kind of like 50% of the value. You’re getting that defensive measure, you’re getting the blocking action.
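
To make the mechanism concrete, here is a sketch, with an invented endpoint and invented values, of a policy that reports as well as blocks, plus the approximate shape of the JSON violation report a browser would POST to that endpoint:

```typescript
// The reporting endpoint below is a placeholder, not a real Report URI account.
const cspWithReporting = [
  "default-src 'self'",
  "script-src 'self' https://cdn.example.com",
  "report-uri https://your-subdomain.report-uri.com/r/d/csp/enforce",
].join("; ");

// Approximate shape of the report a browser sends when it blocks something.
// Field names follow the CSP violation-report format; the values are invented.
interface CspReport {
  "csp-report": {
    "document-uri": string;       // page on which the violation happened
    "violated-directive": string; // e.g. "script-src"
    "blocked-uri": string;        // the resource the browser refused to load
    "original-policy": string;    // the policy that was in effect
  };
}

const example: CspReport = {
  "csp-report": {
    "document-uri": "https://shop.example.com/checkout",
    "violated-directive": "script-src",
    "blocked-uri": "https://unexpected-third-party.example/tracker.js",
    "original-policy": cspWithReporting,
  },
};

console.log(cspWithReporting);
console.log(JSON.stringify(example, null, 2));
```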

 

But if you, as the host, don’t know the client is constantly stepping in and taking defensive measures, you can’t go and fix the actual underlying problem. This is where I think there’s kind of a shortfall. Looking at CSP, it’s like, okay, yeah, the JavaScript has been blocked, awesome, we’re good. It’s like, well, no, you kind of need to know that’s happening and then go and figure out how the JavaScript got in the page. That’s step two, that’s the other 50%, and you can’t do that without reporting.

 

What you actually want to do is use the CSP as a stopgap whilst you identify how the JavaScript got on the page and fix the underlying cause. For me, that’s kind of the whole value proposition: the blocking is a temporary measure until you fix the underlying application issue. I did start Report URI, gosh, it’s like six years ago now. I saw these features being standardized in the browser, saw what could be done, and I was like, this is amazing, I want it for my application. So I hit Google, I looked around, I spent like 10 minutes on Google and I was like, I can’t find a website that does this, you know?

 

If you spend 10 minutes looking for something on Google and don’t find it, it doesn’t exist. So I built one there and then, I was like, well, I’m just going to build one for myself, and then eventually I launched it into a company and opened it up. On average, we are processing about 64,000 CSP reports a second at the minute, which puts us at 13-plus billion CSP reports a month. It kind of turned around really quickly, and it’s really good when we hear people say, "Wow, we saw a sudden spike in reports today." Sometimes, you know, they’re not attacks, right? One company recently that we’ve worked with pushed a new style sheet into production and was loading it from a different third-party location. Obviously it got blocked, because that new third-party location for style sheets wasn’t whitelisted. But it just goes to show, that is a piece of content that apparently shouldn’t have been there, and we got notified about that almost immediately. We knew there was a problem and it’s like, okay yeah, this is a simple case of adding a new location to the whitelist. But that could have just as easily been a bad piece of –

 

[0:14:32.8] Guy Podjarny: — location.

 

[0:14:33.2] Scott Helme: Style sheet or something. It is really cool to see these controls that exist where you can know exactly what’s going on with your site, in the browser, and it’s information that I still, to this day, haven’t seen that you can get any other way. This is a piece of telemetry that – you know, just to be clear, I’m not saying we’re the only reporting service that does this now, but I just mean these mechanisms themselves, like Content Security Policy – there’s no other way to get that telemetry from the browser, and it’s really useful.

[0:14:39.0] Guy Podjarny: Typically we don’t discuss tools as much on this show, but I think the reason I find this really interesting is there’s indeed this lack of awareness, you know? There’s a lot of evolution; there have been significant new capabilities added to the browsers, not only added to the browsers but also permeated through different browsers. So now they’re actually usable, because it’s a challenge oftentimes in browser land where, if too small a portion of your clients actually get affected there, then you don’t necessarily reap the benefits.

 

But then, the next iteration is that people need to understand those problems, and the last bit very much fits the DevOps principle, right? The "if it moves, measure it" type of model. Before, you couldn’t really – I’ve personally had the dubious pleasure of trying to hack together telemetry from the browser, and the early performance monitoring solutions had to hack that together until it got standardized in the browser. But these are probably the most substantial security-oriented monitoring tools, almost, you know, that are available in the browser. Is that a reasonable —

 

[0:15:55.2] Scott Helme: Yeah, on the support thing, you’re absolutely right, and there are probably a couple of things that we can clarify for people. Number one, the support now is fantastic; as you say, it’s pretty much across all of the mainstream clients and also going back quite a few years as well. But the cool thing with all of these kinds of telemetry mechanisms is, yeah, support’s currently hovering around like 94% of clients or whatever it is, but even if it was only, let’s say, 10% of clients.

 

If 10% of your clients are calling home and saying, "Hey, there’s a problem on this page," it’s like okay, cool. I know there’s a problem on the page, you know, we don’t necessarily need to chase that 100% perfect support number because if you have a problem and only 10% of your clients are telling you, that’s going to be sufficient for you to know that there’s a problem.

 

The really good thing is that we don’t have to be too worried about that lack of 100% support. I mean, we’ve only talked about one or two reporting mechanisms; there are so many, it’s unbelievable, and they’re all native. You mentioned there hacking together different solutions, you know, there’s no code to deploy for anything that we’ve just talked about. These are all native functionalities built into the browser, you just hit the on switch. You’ve not got a library or some agent to deploy or something like that. It is so easy to get started, it’s literally just like flicking the switch.

 

[0:17:07.6] Guy Podjarny: Yeah, for sure. We kind of jumped a little bit ahead into Security Headers and CSP, but indeed, one of these security wins in this world of the web and the front end has been HTTPS, right? Over the last few years, we’ve seen a dramatic improvement, if you will, in the percentage, and you’re saying, you know, this might be the year in which everyone should be on it.

 

I guess, how do you describe that change from your perspective? What were the primary drivers in making HTTPS - which has been around for a while now - suddenly get this broad adoption?

 

[0:17:40.6] Scott Helme: I’ve kind of got two different views on this. One that I can start with is like the data driven view. I actually run a small crawler project over at crawler.ninja which was the cheap domain on offer at the time. It used to be the Alexa top one million websites. I now use the Tranco list but it’s basically a list that’s produced every day of the top one million ranked websites in the world based on traffic and other metrics.

 

And my crawler fleet crawls them every day and analyzes a bunch of different security metrics and one of them is HTTPS. I have been running this crawler fleet for coming up on like five and a half years and every six months I produce a report to say what has significantly changed in the last six months, where has this gone? And looking at that list and the data that I produce, the amount of change that we have seen in HTTPS in the last probably four years, we have gone from roughly 25% of those top one million websites well up into like the 70% region.

 

So these are hard numbers where we can actually see there has been a significant shift in the number of websites, and that is the websites that are actively redirecting to HTTPS. So if you type in facebook.com and hit enter, are they going to bounce you to HTTPS? The number of sites doing that is just phenomenal now. So, March was my last report and I am just looking at the data here now, and the graph line, you know, going back to the end of 2015, we were actually well under 20% of sites redirecting, and on the latest report now we are pushing up to 62%, in literally just a matter of a few years. We have tripled, more than tripled actually, the number of websites redirecting to HTTPS.
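
For context, the "actively redirecting" behaviour the crawler measures is the kind of thing a site operator can add in a few lines; a minimal sketch follows, in which the hostname fallback and ports are illustrative and the real TLS listener is omitted:

```typescript
import { createServer } from "http";

// Plain-HTTP listener whose only job is to bounce every request to HTTPS.
createServer((req, res) => {
  const host = req.headers.host ?? "www.example.com"; // placeholder fallback
  res.writeHead(301, { Location: `https://${host}${req.url ?? "/"}` });
  res.end();
}).listen(80);
// The actual site (certificates, TLS configuration) would be served separately on 443.
```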

 

So that’s the hard evidence side, and Mozilla publish telemetry on this, and so do Chrome, and the HTTP Archive has a graph of this, and they are all seeing the same thing. We have had encryption on the web for 25 years, ballpark, now, and in those first 20 years we really didn’t see that much adoption. If you look at the web now, and I would say this to people listening: can you think of a website that you use day to day that runs on HTTP? People are really hard pushed to think of something. Maybe there is one forum in the back of my mind that is run by some person in a non-official way, but I really can’t think of a website that I depend upon regularly that is not HTTPS now, and I can very easily cast my mind back just a few years and think of some that weren’t.

So that is a really recent and very rapid change, and I think that is largely down to two factors, which were removing the cost barrier and removing the technical barrier. There is an organization out there called Let’s Encrypt. They are a certificate authority that gives away free certificates.

 

And I think they have been responsible for a really good portion of that because traditionally you would have to go buy a certificate and now Let’s Encrypt will give you one for free. So let’s say you run a forum and you’re really into dogs and it’s like, “Let’s all post pictures of our cute dogs and share them and it will be fantastic.” Are you going to pay $200 a year to buy a certificate to encrypt that, when maybe you don’t even have that kind of money to spare for a side project?

 

And Let’s Encrypt are like, "Well, now you don’t have to pay anything, here is your free certificate," but then that still left people with a technical barrier. You know, now most cloud hosting providers will just automate HTTPS away, take the pain away, and if you are running your own environments, Let’s Encrypt works on a standardized protocol called ACME, which is a fully automated protocol to get and deploy certificates.

 

So I think Let’s Encrypt started in 2016. That was a very new thing back then; now it’s been around for a few years, we have seen widespread adoption of ACME in various different clients and different environments, and again, it is that maturity of the tooling and the platforms available that is why we are seeing this kind of skyrocket in adoption now.

 

[0:21:27.8] Guy Podjarny: I very much relate to the excitement about HTTPS adoption. I think it is almost above and beyond whatever the HTTPS or specific security win is. It is this great tale to try and replicate across other aspects of security. It’s been around, it’s had this lukewarm adoption for such a long time, and suddenly it jumped up. So that introspection is almost like a post-mortem, except it’s not dead, it is actually now alive.

 

But understanding why this happened is so important. So we talked about cost, generally reduced friction, I guess, right? If you remove the effort, in terms of cost and in terms of implementation, I think that is one great property. The other bit was that there was a fair bit of almost like a consortium of relevant players, right?

 

So Let’s Encrypt came about, but then, you know, Akamai was on board and some of the big cloud providers were on board. And so CDNs and some of the big hosting providers (who, it should be noted, were making some pretty significant money charging premium dollars for HTTPS traffic, well above the certificate costs) jumped in and drove this change.

 

[0:22:31.2] Scott Helme: Yes, so it nicely ties back into the two points. A lot of providers out there were charging for SSL traffic – air-quote "SSL", old term there, we use TLS these days – but there was that aspect of it, and again that is reducing the cost side. If we go back a few more years, the reason that they were charging for it was that there was that complexity, there was that cost that they were bearing, and also the performance of encrypted traffic was probably a factor for them if we go back a few more years.

 

So, you know, there was a genuine argument that could be made that this was quite expensive, this was quite technical and there was an extra overhead. But I think one of the other really good things that we have seen, especially in TLS version 1.2 that came out in 2008 and version 1.3 that came out in 2018, is that they were both so heavily focused on improving the performance of the encryption that those actual physical overheads were reduced so much. So again, you were taking away one of the ‘costs’ – performance cost now, not financial – associated with doing encryption. I spend two days on my TLS course deep diving into protocols and ciphers and all of these things and getting really hands-on with the technical side of it, but I always just come away with this thought that one of the biggest reasons for the adoption is that it became cool. I don’t know where it came from, but there was this huge industry-wide effort.

 

You know, I can’t really attribute this to any one organization pushing it, but it just became the cool thing to do, and then the standards bodies came along, and if you want stuff like HTTP/2 or Brotli compression, they were all locked onto secure versions of the protocol, I mean onto HTTPS. So now it’s like, okay, well, not only is HTTPS’ performance impact negligible, but there are all of these sweet improvements that you can get from a secure connection, like H2, which makes stuff blazing fast, or Brotli, which is better at compressing than Gzip. Other features, like the geolocation API, were locked away from HTTP and moved onto HTTPS a few years ago. That one is a really sound argument, right? It is like, I am sending my lat and long to six decimal places over the network; okay, that should be secure.
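
As a browser-side illustration of that last point, here is a small sketch: the Geolocation API is only exposed in secure contexts, so this only ever logs coordinates on an HTTPS page (the logging and fallbacks are illustrative).

```typescript
// Runs in the browser. On a plain-HTTP page, isSecureContext is false and
// modern browsers will not hand out a position at all.
if (window.isSecureContext && "geolocation" in navigator) {
  navigator.geolocation.getCurrentPosition(
    (pos) => {
      // Latitude and longitude to several decimal places: exactly the kind of
      // data you don't want travelling over an unencrypted connection.
      console.log(`lat=${pos.coords.latitude}, long=${pos.coords.longitude}`);
    },
    (err) => console.warn(`Geolocation request failed: ${err.message}`)
  );
} else {
  console.warn("Not a secure context: geolocation is unavailable over plain HTTP.");
}
```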

 

[0:24:38.5] Guy Podjarny: But at the same time, it’s one that is a pure security decision, while you could have technically implemented it in a different environment. Even HTTP/2 – it required TLS, but, you know, there was some debate. It wasn’t obvious that it would be; the protocol itself can run on a non-TLS connection.

 

[0:24:55.1] Scott Helme: Sure, because we have H2 and H2C, which is H2 Clear. So in the earlier versions of the specification, HTTP/2 was going to be locked to TLS secure connections in the spec, but, casting my mind back now, I think the standards body backed down on that, and now we have HTTP/2 and HTTP/2 Clear. But all of the browsers were like, "No, we are only going to implement H2," so there isn’t going to be an H2 Clear in the browser. So essentially, even though that wasn’t standardized, you are absolutely correct.

 

From a technical perspective there is no reason to do that. So many people were so critical, especially of Chrome and Firefox, when they decided to do this, but I kind of looked at that position and thought, well, they had a stick and they could yell at website operators and tell them to do HTTPS, hit them with the stick and say, "Do HTTPS." Or they could take a carrot and dangle it on the end of the stick and say, "Ooh, look at this nice thing that you can get over on the secure side of the web."

 

So there were the online ‘fighty’ scenarios that you get whenever we talk about these things, and people are like, "No, they shouldn’t have done this, this is wrong. They are bullying us, they are forcing us to do HTTPS if we want H2." But, you know, I look at it sometimes and I think that was such a fantastic encouragement, right? Now there is a reward for your effort of deploying HTTPS, and that is a really good encouragement, so I think it was the right decision. There was a lot of disagreement around that, but I am happy with the way that it played out.

 

[0:26:10.9] Guy Podjarny: I am with you in that camp. You know, I remember the disagreement, it was something to do with Akamai at the time, but I think it is also the right move. It is a principled move that is not that dissimilar to PCI or other regulations. I think PCI is maybe a decent analogy, because it is a commercial entity versus a government that is saying you have to manage risk, but it was the browsers picking up and saying we share the responsibility. It is a shared responsibility model here. We want to encourage you to move to this other world, so we are going to make these non-essentials only work on secure connections.

 

[0:26:42.7] Scott Helme: So many of the cool new browser features are locked to HTTPS now. Yeah, we saw things like geolocation be flipped over, because that existed in the HTTP world, but now, if you want to do anything cool in the browser, you know, all of these new powerful APIs, local storage and everything, service workers, you’ve got to start in HTTPS-land. So I just think now we are over that hump and we’ll probably never have that argument over, like, H2 Clear or not again.

 

Because I feel like we’ve pushed over the top of the hill, and any new features that get introduced now are going to automatically be available in the secure context only. So hopefully we won’t have to have this call again in a year and talk about that.

 

[0:27:19.5] Guy Podjarny: So we are celebrating the great security win, right? We talked about HTTPS, which probably has the most adoption, then CSP as maybe the second, adopted to a certain degree, and then the other security headers, not at all the same order of magnitude right at the moment. And we talked about how the industry has helped them win: technology reducing the cost, reducing the effort, giving a carrot for doing it, and making it cool, which goes alongside them, which is a really interesting social motion.

 

One bit that I think is still a little bit murky is ownership. So we talk about these three areas and generally a rise in front-end security awareness, if we call it that. Who do you see in the organization engaging with these security controls? Is it the application security team that is coming along and saying, "You know, you need to have this security header"? Is it the development teams? And also, the implementation is a slightly different stack, you know, suddenly a CDN is potentially involved, or other network parties, versus maybe backend security, which might have a different set of stakeholders. How do you see, when you engage with an organization in this space, who inside the org interacts with this problem?

 

[0:28:31.6] Scott Helme: There are probably two really defining metrics here that I see. If we have the size of the organization on the X axis, I work with organizations down in the tens of people all the way up to the thousands of people, and then on the Y axis we have, I want to say, the legacy of the org: there are large financial institutions running on much more legacy, not just the infrastructure but the process as well.

 

Some organizations, if they’re lucky enough, will have security teams. They will have dedicated ‘security people’, if we air-quote that, whose job it is to do security throughout the organization. I feel like this approach is really good if you are large enough to have dedicated security people and budgets, but this kind of model, where you have two or five or however many people in your security team who are the people responsible for security, I just don’t think that is going to survive going forward.

 

So I think everybody within an organization now has to have at least some small part to play within security. I don’t think we can have security people whose responsibility is to do security and then everybody else doesn’t have to think about security. I think we are in a world now where everybody has to have minimal baseline security knowledge and security consideration just like everyone in an organization has to have like HR knowledge of what is appropriate to do and not do in the workplace.

 

It is the same with security. We’ve just got to have this basic idea of how to conduct ourselves, and the course that I do with Troy, the Hack Yourself First course, is very much aligned on this path, where the purpose of the course is not to turn everybody into penetration testers. It is just to give them that insight into how systems and applications get broken, in order to have that knowledge in the back of their mind, so that when you are building something you might just challenge something or ask a question, like, how can this go wrong?

 

I work with some organizations where I just liaise with the security team, and they have these five people that are responsible for securing the whole organization. I was like, "Wow, I do not envy your job," because it is such a massive undertaking and security is such a broad-spectrum thing now. Then there are other organizations where, for the Hack Yourself First workshop, I literally sit down with development teams.

 

And these are just developers who have no security in their title and no formal security role within the organization. But the organization wants their developers to improve that baseline security knowledge, because the best way of doing things is to start writing secure code from the beginning, and that requires just that little bit of security knowledge. So there is a huge spectrum of organizations, and the much more modern, younger, start-up kind of organizations, I guess, have that security knowledge embedded within development teams. And then I work with a couple of financial institutions where no one has any interest in security apart from the five people in the security team. It is super weird to see that.

 

[0:31:12.1] Guy Podjarny: DevSecOps is probably the buzzword, with all of the complexities of buzzwords, that best embodies the motion of changing it, but I fully relate to this challenge of the security silo versus the developer silo. There is also sometimes a front end versus back end split. Do you find that the people that deal with the content security policy or the right security headers or even TLS are the same people that deal with an SQL injection or a compromised, unpatched container, maybe even on the dev side? What has your experience been in terms of those realms of concern?

 

[0:31:44.2] Scott Helme: I definitely see that separation there. So the majority of my time is probably split between consultancy and training, and they are very, very similar in many respects really. Training is just a more focused, attentive kind of consultancy, in some ways. So the main breakaway would be TLS: that is usually a more focused team, a more dedicated team, and they’re more infrastructure-y than application people. So there is definitely a slight separation there.

 

I don’t really think I have ever come across many back-end people. The Hack Yourself First workshop is usually largely focused around developers and front-end people, and the TLS workshop is where I generally end up with more infrastructure-focused people. They are often not application people or developers or anything like that at all. So, now that I think about it, those silos are still very visibly present.

 

[0:32:27.7] Guy Podjarny: Within the teams, I think it is one of the challenges that security teams face: we like to simplify and say security and dev, but in practice, inside security you’ve got AppSec and CloudSec, and within development there is definitely a wide variety of worlds as well. It is great to hear about both the successes and just the general interest on the front-end side, which historically has probably been the one that cared the least or invested the least in security.

 

They’re taking up more responsibility as, I guess we could also say, they take on more of the functionality, with the single-page apps and the Reacts and Vues and Angulars of the world moving more functionality from the back end to the front end as well.

 

Thanks for taking the time, this has been super interesting to deep dive into these different aspects. If I highly simplify and recap here, we talked about, I guess, three buckets of security controls. HTTPS, which is on the rise and on the win, and people still need to pay attention; we didn’t get to dig into the different TLS versions and so on. We talked about Content Security Policy and how that should be applied, and there are two levels, I guess: you can just use it to block scripts, but you can also use it to monitor your activity over time and understand how scripts got in.

 

And then the security headers are a broader suite than just the CSP and definitely should be used, added to your site and tuned to your surroundings. And we talked about your two great tools, securityheaders.com and Report URI (report-uri.com), that are both mostly free for most users – free tools that also have their commercial versions, especially Report URI – as well as your great training sessions. So, a world of information and assets about that front-end security.

 

[0:34:02.6] Scott Helme: Yeah, maybe I will just chuck a link to crawler.ninja in there. That is free and open, people can take the data, and then my blog has explanations of all of the things that we talked about, if people want to recap on that.

 

[0:34:12.2] Guy Podjarny: With all of the great guidance you’ve already provided, I like to ask every guest before ending the show: if you have one tip to give a team that is looking to level up their security, something they should start doing or stop doing, what would that be?

 

[0:34:24.4] Scott Helme: I mean, for me, with my really heavy bias, it would have to be: look at the easy wins with security headers. Some of them are super easy to get started with. I like the fact that they’re low cost or no cost; some of these are just a case of switching on a security control in the client, and modern browsers are loaded with these things. So I’d check them out.

 

[0:34:42.9] Guy Podjarny: That’s great and very practical advice. Scott, thanks for coming onto the show.

 

[0:34:46.9] Scott Helme: No problem at all. Thank you for having me.

 

[0:34:48.7] Guy Podjarny: Thanks everybody for tuning in and I hope you join us for the next one.

 

[END OF INTERVIEW]