
Season 8, Episode 128

Tackling Software Supply Chain Security As An Organization


Continuing our mini-series on supply chain security, we deep dive into the organisational aspects of this challenge and hear from a number of our experts about solutions and initiatives to better prepare for supply chain risks and visibility issues.

Simon and Guy are joined by Adrian Ludwig, Aeva Black, Jim Zemlin, Emily Fox, and Eric Brewer as we start thinking about securing the supply chain as an organisation. Guypo breaks down the four fundamental steps for doing this and how to tackle the subject of SBOMs, or Software Bills of Materials. Our guests share fascinating perspectives on how these areas relate to a company's overall preparedness, and particularly to the open source space. We also cover some general advice about raising security awareness at a company, so for all this and a whole lot more, make sure to join us. Next week is our mini-series finale, where we will tackle the future of software supply chain security, so make sure you tune in for that!


[00:00:02] ADRIAN LUDWIG: Any product that we're using at scale is one that's going to be validated by lots of other companies. Do I want to dedicate a bunch of my security organisation's time and effort to making sure that everything is absolutely correct inside the SBOM of every one of our vendors? No. I don't think that is actually a sensible allocation of resources within our organisation. Do we have them? Yes. Do we look at them? Sure. Are we investing a ton of resources in optimising our analysis of them? No.

[00:00:34] ANNOUNCER: Hi. You're listening to The Secure Developer. It's part of the DevSecCon community, a platform for developers, operators and security people to share their views and practices on DevSecOps, dev and sec collaboration, cloud security and more. Check out devseccon.com to join the community and find other great resources.

[EPISODE]

[00:00:56] SIMON MAPLE: Hello, and welcome to part three of this software supply chain series. Thanks a lot for joining us again. We do hope that you have enjoyed the series so far. Together with experts in the field, we are starting to unravel software supply chain security for you. So far in the series, we have defined what software supply chain security is, why it is important to pay attention to it in 2023, some key terms that we should know and key initiatives we should pay attention to. Today we'll talk about solutions and initiatives that organisations and companies can take to better prepare themselves for supply chain risks and the potential visibility issues that they obviously have today, not knowing what's under the covers, given all the risks and the issues that we've talked about previously.

What do we think about an organisation that, let's say, is starting at ground zero? It's day one for them. They're looking at what they need to do to improve their supply chain security. Maybe they've got some little bits of AppSec and some security already done in their estate. What are the key things? And I guess, priority-wise, what are the first things that an organisation like that should really look at? And from an actionability point of view, what steps can they take to improve their posture and their supply chain security risk?

[00:02:08] GUY PODJARNY: I think the base point is really knowing what you are consuming. I can't really stress that enough. There are a lot of new, exciting, sexy terms here, like Sigstore and attestations, and all the new attack vectors. All of those are, in my view, secondary to starting with just knowing which components you are even using. Just the fundamental SCA capability, the software composition analysis capability, of saying: which components are you using? Are you properly curating them? Then, look at those and say, what are the known problems in them? This is not theory, this is concrete: you are using these components and they are vulnerable.

Evolving that path really includes starting from your applications that are most important, that touch the most sensitive data. You can start from there, and you can expand further. Similarly, start from just detecting which components you're using in any place, whether it's in code, or in build, or whatever it is. Just know what you're consuming, and over time track where those components are in your system. So maybe you combine this with your asset management system or others and know where that is. Then, last on that journey, actually try to prevent problems.

For starters, you need to know what you're using, know if there is a problem in them, and try to address the ones that are significant. Then introduce some form of, basically, the equivalent of a security questionnaire, or some form of security process for consuming a component, in the form of a pipeline or pull request verification that says, "Hey! You are about to introduce something that is problematic." Depending on the size of the company, there are a lot of built-in assessments of what is problematic and not problematic. The simplest example of it is, does it have a vulnerability and at what severity? Then, the more involved the security practice in the organisation, the more you want to define policies that are maybe a bit more sophisticated and that vary project to project.
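
To make that concrete, here is a minimal sketch of what such a pull request gate might look like. The scan-results.json file, its field names, and the severity threshold are all illustrative assumptions rather than the output of any particular tool; in practice this data would come from whatever SCA scanner runs in your CI.

```python
# Illustrative pull-request gate: fail the build if a newly added dependency
# carries a vulnerability at or above a chosen severity threshold.
# The scan-results.json file and its shape are hypothetical placeholders.
import json
import sys

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
THRESHOLD = "high"  # policy knob: block anything at or above this severity

def gate(scan_results_path: str) -> int:
    with open(scan_results_path) as f:
        # expected shape: [{"package": ..., "version": ..., "severity": ...}, ...]
        findings = json.load(f)

    blocking = [
        finding for finding in findings
        if SEVERITY_RANK.get(finding["severity"], 0) >= SEVERITY_RANK[THRESHOLD]
    ]
    for finding in blocking:
        print(f"BLOCKED: {finding['package']}@{finding['version']} "
              f"has a {finding['severity']} severity vulnerability")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate("scan-results.json"))
```

A policy like this is typically wired in as one CI step on the pull request, so the "security questionnaire" happens automatically at the moment a new component is introduced.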

[00:04:04] SIMON MAPLE: As Guypo mentioned, there are a few steps involved in securing your supply chain. Creating that software bill of materials, the SBOM, does not by itself give you enough information to really be able to say whether there are known vulnerabilities or, in fact, none at all. There's a level of enrichment on top of the SBOM artefact that needs to be added. The SBOM artefact doesn't really feel hugely useful unless you actually add that enrichment. Adrian Ludwig, the Chief Trust Officer at Atlassian and also a member of OpenSSF, shared his interesting lens on SBOMs.

[00:04:36] ADRIAN LUDWIG: SBOM is super interesting to me. I think there are a couple of different ways to think about it. One is that it produces a sense of transparency. In general, I think, a sense of transparency leads to better alignment of incentives by the organisation that's creating a piece of software. I think that's a hugely valuable way to think about SBOMs: as soon as I have to describe what's inside a piece of software that I'm making, I'm going to be very, very careful about what's in that piece of software. That's the optimistic view, that SBOM is good, and it will drive the effects that we want.

The other, more realistic but super negative, way to think about SBOMs is that it shifts the responsibility from the company that's producing a piece of software to the consumer of that software, to know what's in there and to make demands about it.

[00:05:22] GUY PODJARNY: It all builds on one another. SBOM is the smallest sort of building stone. It's just knowing what you are even using: which components, at which versions, with a globally acceptable identifier for those components. Then on top of that, you have an evaluation of that SBOM to say, is it problematic? For that, you need a source of intel. Again, any one of the SCA vendors will have that, but the quality of that intelligence varies. It's important to understand that vulnerability databases are opinionated. Curating a vulnerability, looking at a piece of software and saying, "This bug is a security vulnerability," or even a bug for that matter, is not fact. It's opinion.

Some people would say it's defined behaviour. Some people would say it's more risky. Very much statements like: how severe is this? How likely is it that an attacker would exploit it? So you need to trust the judgement of the provider of your intel to say whether it is a good source of vulnerability data, a good source of opinion on whether a dependency does or does not have a problem.

And of course, whether it's timely or not. Here, we're still only talking about known problems, so we're not talking about trustworthiness. We're going to get to that further downstream. We're just talking about known problems with this component, with vulnerabilities at the top, and then other secondary things like licences, which are also important but just a little bit less complex. You have to evaluate. You have to assess the components that you have.
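
As one illustration of pairing an SBOM entry with a source of intel, the sketch below queries the public OSV.dev database for known vulnerabilities in a single component. OSV is just one possible intel source, and the package name and version in the example are purely illustrative.

```python
# Minimal sketch of enriching one SBOM entry with known-vulnerability intel,
# using the public OSV.dev query API.
import json
import urllib.request

def known_vulns(name: str, version: str, ecosystem: str) -> list[str]:
    query = {"package": {"name": name, "ecosystem": ecosystem}, "version": version}
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # Return the advisory identifiers, if any, for this exact package version.
    return [vuln["id"] for vuln in result.get("vulns", [])]

# Example lookup: a log4j-core version affected by Log4Shell.
print(known_vulns("org.apache.logging.log4j:log4j-core", "2.14.1", "Maven"))
```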

A third level of that is indeed continuously monitoring changes, because it's important to understand that dependencies are fast moving. You are using ten libraries in your application, and they in turn would use fifty others, which in turn use another two hundred, and each of those will have a fairly regular release cadence. So, pretty much every day when you build your system, it's likely that there are some new updates to be taken downstream. It doesn't mean you need to take the update every time.

A lot of ecosystems rely on lock files and things like that to try to contain that churn. But fundamentally, you also don't want your software to grow stale. That's really where policies around consuming updates come in: should I change? Am I going to be more secure or less secure? Am I introducing a problem when I change? It's important because you can't always be in a detect-and-respond type mode of, "Oh, yeah. It turns out that two months ago, some developer picked up this library and introduced it. Now I know that there's a vulnerability, and I need to go back and rework that." You want to try and actually find that out at the time of consumption, when it's cheap to review, when it's cheap to address, or change, or upgrade.

Then the last thing I would say is the response to all of that. We talked about detecting, we talked about evaluating and addressing the vulnerabilities you have, we talked about preventing. Then there's the notion of responding: at any given time, an SBOM whose evaluation was entirely kosher and clean can become problematic when a new vulnerability is disclosed. That's where you really want a response, like an incident response process. So that when Log4Shell comes out, you know who is using Log4j, and you are set up to understand what are the minimum changes you need to make to fix this problem and set up to roll that out quickly. All of this is a chain of addressing your own internal use of open-source components, maybe under the SBOM title, but really, SBOM is just the very beginning.
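
A rough sketch of that "who is using Log4j?" question, assuming you keep a directory of CycloneDX JSON SBOMs, one per application or per vendor. The directory layout and file names are assumptions, and real SBOMs vary a lot in completeness, so treat this as illustrative.

```python
# Sketch of an incident-response query across stored SBOMs: which applications
# or vendors ship a component whose name matches the one we care about?
import json
from pathlib import Path

def find_component(sbom_dir: str, needle: str) -> None:
    for sbom_path in Path(sbom_dir).glob("*.json"):
        sbom = json.loads(sbom_path.read_text())
        for component in sbom.get("components", []):
            if needle in component.get("name", ""):
                print(f"{sbom_path.name}: "
                      f"{component['name']} {component.get('version', '?')}")

# Example: the Log4Shell scenario.
find_component("./sboms", "log4j-core")
```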

[00:08:47] SIMON MAPLE: So, let's roll this up a little bit. Number one, understand what you've got in your environment. Make sure you create a bill of materials for that. Number two, don't just leave that bill of materials by itself. You need to enrich it with information and intelligence that will actually provide you useful insight on top. Now, in my experience, what I've seen is a lot of folks tend to create that bill of materials with an existing tool, like an SCA tool, and that tends to add free enrichment on top, out of the box.

Number three, have a good policy about how you're actually going to fix and upgrade these libraries, whether they're coming in through new functionality that is being added, or whether they're sitting in the backlog. And have that good policy about when you want to upgrade those. Number four, respond to new disclosures as they come in.

This is still all open source and the internal side of open-source consumption, an important first phase that SCA tools like Snyk can get you started on very quickly. The second most important thing, as Jim Zemlin, Executive Director at the Linux Foundation, and Guypo explain, is tracking the software through the pipeline.

[00:09:54] JIM ZEMLIN: If you're new to security, this is the number one thing I tell folks. We work a lot with government. We work a lot with industry leaders. I used to have a lot of conversations with open-source groups. I'm now having a lot of conversations with boards of directors that are concerned about security. I always talk about how you've got to know how software flows. You've got to know how interdependent it is. You've got to know how, at each major point of that stream of code, there are little forks in the river where people can cause problems from a security perspective, and you have to protect all of those points. That's really how you should think about software supply chain security. From developer, to source, to build, to packaging, to dependencies, all the way to the consumer. Be clear on how code flows and set up security along every point in that stream.

[00:10:49] GUY PODJARNY: I think the second most important track is indeed this notion of just tracking software through a pipeline and making sure that your build system is properly hardened, that you do not have security misconfigurations, adding proper hash verification where you can, and using lock files where you can, just to avoid inadvertent issues. This whole area is a lot less structured. The practices in this area are far less mature. There isn't an obvious "you should do A, B or C". Different guests, as you'll hear, have different opinions about the right way to do it, ranging all the way from "you really should have a local copy of all of the open-source components that you're using, so you know they're not being tampered with", to far looser paths of just trying to find things that are a little bit problematic.
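
As a small illustration of the hash verification Guypo mentions, the sketch below compares a downloaded artefact against a digest pinned in your own repository, which is essentially what lock files do for you automatically in most package ecosystems. The file path and pinned digest are placeholders.

```python
# Minimal illustration of artefact hash verification: refuse to use a
# downloaded file whose SHA-256 digest does not match the pinned value.
import hashlib
import sys

# Placeholder digest; a real one would be recorded when the artefact was vetted.
PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def verify(path: str, expected: str) -> None:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected:
        sys.exit(f"Digest mismatch for {path}: refusing to use this artefact")
    print(f"{path} matches its pinned digest")

verify("downloads/some-library-1.2.3.tar.gz", PINNED_SHA256)
```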

In that process, there's an element that is around hardening your pipeline, which I would say is a higher priority. These are things within your control as a developer, as a platform developer probably, to build a secure product; that product being the pipeline, not the product itself. The pipeline itself is a product, and this is about investing in securing that product, the pipeline, and not having weaknesses in it. Then it has another aspect that is a bit more like XDR, sort of detect and respond, which is really about an attacker.

The third strand, I would say, from a security perspective, is to think about trust. Trust is hairy. There are still no properly, truly great sources of information about trust. It conflicts, as I mentioned, with the open-source ethos from time to time, around that sort of transparency. It's almost like a transparency versus safety type question. Yet, it is important, especially if you are a highly regulated, highly targeted organisation like a bank. Elements like knowing the nationality of the components you are consuming, or maybe the security practices behind them, or maybe even just their sheer prevalence, are important. Today, trust mostly manifests in things like taking a look; maybe Snyk Advisor is a good example. Look at components to understand what you know about them and whether you should use them.

A fourth tier is a bit less about risk, but might actually bump up the priority list because of commercial needs. That is indeed the production side, supplying your SBOM to others. In practice, supplying your SBOM to others does not help your security. That is not the goal. It's hard to put it on a security track. But oftentimes, you need to do it right after that first step of figuring out your own dependencies, because your customers demand it, and so you want to be able to export them.

The evolution of that track is, now that you are consuming these SBOMs, what do you do with them? Frankly, nobody knows at the moment. There are some interesting questions around how much security responsibility transitions from the supplier to the customer. If, as a customer, you are consuming some piece of software, now you know that it's using certain libraries, and a new vulnerability is disclosed in one of those libraries. Does that roll some of the responsibility for knowing that the library is there, and that it's vulnerable, down to the customer?

Before, you wouldn't know that as a customer. The supplier knows it, so 100% of the responsibility for finding out that the library was there, for finding out that it's vulnerable, and for doing something about it, sat with the supplier. It's unclear whether SBOMs should change anything about this responsibility. It is still the vendor providing the software. The customer generally cannot do that much to fix the problem, but that is still a live conversation in practice. What is pretty certain is that if you get a bunch of these SBOMs, at the very least, when something like Log4Shell happens, it streamlines the process of needing to send an email to all of these different vendors that you have and say, "Hey, are you using Log4j as well or not?" So, at the very least, having this repository should help you do that.

Then the second practical use of it is change management. When you get a new version of that SBOM from a vendor, you can now contrast it to the previous one and say, "Hey, has anything changed?" Typically, security assessment processes for vendors are quite labour intensive. It's not very practical for companies to do them again and again every time they get a new version; they do them at some frequency. But having the SBOM allows you to do some amount of security scrutiny from version to version, and to see whether you want to actually enact that full assessment. Those are the two currently most concrete use cases for SBOM management. Again, the tools there are still fairly nascent.
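
A minimal sketch of that change-management use case: diff two versions of a vendor's SBOM and list what was added, removed, or changed, so you can decide whether a fresh assessment is warranted. It assumes CycloneDX JSON input, and the file names are hypothetical.

```python
# Compare two CycloneDX JSON SBOMs by component name and version.
import json

def components(path: str) -> dict[str, str]:
    with open(path) as f:
        sbom = json.load(f)
    return {c["name"]: c.get("version", "?") for c in sbom.get("components", [])}

def diff_sboms(old_path: str, new_path: str) -> None:
    old, new = components(old_path), components(new_path)
    for name in sorted(new.keys() - old.keys()):
        print(f"added:   {name} {new[name]}")
    for name in sorted(old.keys() - new.keys()):
        print(f"removed: {name} {old[name]}")
    for name in sorted(old.keys() & new.keys()):
        if old[name] != new[name]:
            print(f"changed: {name} {old[name]} -> {new[name]}")

diff_sboms("vendor-sbom-v1.json", "vendor-sbom-v2.json")
```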

[00:15:45] SIMON MAPLE: Guypo asks an interesting question there. What are customers actually doing with their SBOMs once they receive them from the vendor? Is there a shift in responsibility because they have them? We had Adrian Ludwig weigh in on this.

[00:15:56] ADRIAN LUDWIG: Any product that we're using at scale is one that's going to be validated by lots of other companies. Do I want to dedicate a bunch of my security organisation's time and effort to making sure that everything is absolutely correct inside the SBOM of every one of our vendors? No. I don't think that is actually a sensible allocation of resources within our organisation. Do we have them? Yes. Do we look at them? Sure. Are we investing a ton of resources in optimising our analysis of them? No.

[00:16:24] SIMON MAPLE: At the moment, most companies are not doing much with the SBOMs they receive from their vendors. But to Adrian and Guypo's point, it does give them information to rely on, should a vulnerability like Log4Shell surface again. But then there's that concept of trust too, as Guypo mentioned. When updates to open-source software happen, what's the best course of action? We had Emily Fox, Security Engineer, who also serves as the co-chair of the CNCF Technical Oversight Committee, and Aeva Black, Open-Source Hacker at the Azure Office of the CTO at Microsoft, as well as Adrian and Guypo, share some interesting insights to dissect this.

[00:17:00] AEVA BLACK: Open-source software has, since the beginning, had people who contributed under pseudonymous identities. They didn't want to use their legal name on the internet. Maybe they have a separate email address that they just use publicly. Because if you sign your Git commits, there's a name and an address in an immutable record that is public. For some people, that creates safety concerns: people who have stalkers, people who might be from a country or work in a situation where that's not conducive to their safety. There are a lot of other circumstances where people just don't contribute under their legal name, and they shouldn't have to. We can trust each other through building a working relationship. Over time, people can review my code, or I can review their code and see that it's good code. And we can use a key or some other identity token as a surrogate for a driver's licence, or a passport number, or a Social Security number. PGP has sufficed for that in open-source communities for a long time now, with little key-signing parties.

There are lots of folks that I've met once or twice in person, and I've known them online for 15 years. We exchanged PGP keys a long time ago, and I've reviewed a bunch of their patches, and they've reviewed mine, and we're good people. We see that there's a web of trust between us, or through other people that I know more closely, who also know them closely, and we can see that they've signed keys. I think there's been a call to change this in the past couple of years, to start mandating real names in open source, and I think it's been misguided. I think it comes from well-intentioned people who are used to working inside a corporate environment, which is higher trust and higher regulation. I think it also comes from federal contractors, who are seeing the word open source crop up in the executive order from a year and change ago. They're seeing the word open source crop up in DOD guidelines and in federal procurement guidelines, or drafts of them, for the first time in a decade. This is kind of new.

There are a lot of folks who are federal contractors who aren't used to seeing that word. So they're reaching for the tools that they know to provide security, or assurances, as they're servicing these contracts. But they don't realise that the federal guidelines do not actually require this. Here's a little analogy. If I were a federal contractor selling Windows, buying Windows from Microsoft and sub-licensing it to the government, I wouldn't have to tell them the name of every software developer who wrote a line of code in Windows. The government only needs to know me as the point of contact, or the one who has access to those systems, who's servicing the contract, not who supplied the source code. It's the same for open source. If you are a federal contractor selling a solution to the government that includes open source, you need to know who's doing that work, but not who wrote the code in the first place.

EMILY FOX: I will say, the first and foremost is verification of trust, and that is actually the root of pretty much everything within the supply chain from a security perspective. A lot of our current challenges in this space are: don't trust anything, make sure that it's signed, make sure that you're getting it from the location you think you are, establishing that provenance. But it's the verification piece that is the most important. Because you can have all of this great enriched information coming in about an artefact and the build environment it came from, and all the dependencies that are within it, because you have a bill of materials associated with it. But the key value in all of this architecture, and within the reference architecture itself, is the ability to verify with high assurance all of the information you're receiving, because without that verification, you're just perpetuating an existing problem.

Usually, the second part of this is, once you have the verification in place, you also have a responsibility to define what is considered acceptable for you as an individual, or an organisation, or if you're a maintainer of a project. So what are the field values that you're looking for? CISA and NTIA have a minimum set of fields that need to be produced for a software bill of materials. But you as an organisation, as an adopter or as a maintainer, might want more than that. For any material or artefacts that are being produced, you might be able to verify that everything is from the source it says it is and that the correct steps were followed. But if you're missing some of the material that would make that outcome operational for you, then you've lost the quality of it and it's no good anyway. So you've spent all this time and energy building all of these systems and capabilities, only in the end to throw it out or make a partial decision as a result of it.
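
As a rough illustration of defining and checking the fields you require, the sketch below validates an incoming CycloneDX JSON SBOM against a field list loosely modelled on the NTIA minimum elements (supplier, component name, version, unique identifier, plus SBOM author and timestamp). The exact required fields and the file name are assumptions you would adjust to your own policy.

```python
# Check an incoming SBOM against an organisation-defined list of required fields.
import json

REQUIRED_COMPONENT_FIELDS = ["name", "version", "purl", "supplier"]

def check_sbom(path: str) -> list[str]:
    with open(path) as f:
        sbom = json.load(f)
    problems = []
    metadata = sbom.get("metadata", {})
    if "timestamp" not in metadata:
        problems.append("missing metadata.timestamp")
    if not (metadata.get("authors") or metadata.get("tools")):
        problems.append("missing SBOM author/tool information")
    for component in sbom.get("components", []):
        for field in REQUIRED_COMPONENT_FIELDS:
            if not component.get(field):
                name = component.get("name", "<unnamed>")
                problems.append(f"{name}: missing {field}")
    return problems

for problem in check_sbom("incoming-sbom.json"):
    print(problem)
```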

[00:21:33] ADRIAN LUDWIG: I have a dream. That dream is one where, as soon as a dependency is out of date, we immediately apply a patch that updates that dependency and push it. There's sufficiently robust A/B testing and validation within our dev pipeline that that can happen. If something breaks, it turns out engineers are really, really good at fixing reliability issues, and they are very proactive about it.

We're shifting the entire discussion of dependency management and supply chain management into one where we're willing to create outages, because we have sufficient confidence in our A/B testing and our automated rollout system that it happens infrequently. This way, we can just solve dependency management; all the related toil can go away. That's the dream for me: we get to a point where all the libraries are up to date all the time. All the components are updated all the time.

[00:22:20] ERIC BREWER: Well, a few relatively easy steps. The first thing I tell companies is: have a private copy of whatever open source you're using. Sometimes that's called vendoring, and part of it is just keeping an internal copy of a public repo. It's kind of a hassle to do, because now you have to keep your internal repo in sync with the upstream repo. But I actually want that process to be explicit rather than automatic. The reason for that is, if you have your own copy, you have two big advantages, maybe three. One advantage is, you know exactly where the source code came from, because it came from your private repo, and you can actually prevent external access to other repos. You presumably now have an archive of it, so you won't lose the source code if you later have to investigate a vulnerability, so the archiving is important too. You get that for free if you have the repo in your control.

The second thing you get from that is you can make your own security patches whenever you want, for whatever reason, without having to worry about upstream, which is very important in an actual incident where you need to move quickly. The third reason is, I don't actually want to take upstream changes automatically in general until I've vetted them. Now, that's a complicated process, because most companies don't vet upstream changes; Google does. We have a private copy of all the open source we use, and we vet, as best we can, which is frankly not that great, all incoming changes. But the idea is, if some malware shows up upstream, we hopefully wouldn't copy it into our internal version, and so it wouldn't get deployed, and that has saved us on occasion. But having a private copy is one relatively good step.
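
A rough sketch of that private-copy pattern, driving standard git mirror commands from Python. The upstream and internal repository URLs are placeholders, and the vetting step is deliberately left as a manual review before anything is pushed to the internal mirror.

```python
# Keep an internal mirror of an upstream open-source repo, with syncing as an
# explicit, reviewed step rather than an automatic one.
import subprocess

UPSTREAM = "https://github.com/example/some-library.git"            # hypothetical
INTERNAL = "git@git.internal.example.com:mirrors/some-library.git"  # hypothetical

def run(args, cwd=None):
    subprocess.run(args, cwd=cwd, check=True)

# One-time setup: mirror-clone upstream and push everything to the internal host.
run(["git", "clone", "--mirror", UPSTREAM, "some-library.git"])
run(["git", "remote", "add", "internal", INTERNAL], cwd="some-library.git")
run(["git", "push", "--mirror", "internal"], cwd="some-library.git")

# Periodic, deliberate sync: fetch upstream, review what changed, then publish.
run(["git", "fetch", "--prune", "origin"], cwd="some-library.git")
# ... vetting of incoming changes happens here, before the push below ...
run(["git", "push", "--mirror", "internal"], cwd="some-library.git")
```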

The second thing I recommend, and you can do this on Google Cloud using Cloud Build, but you can do it other ways too, is: don't have a bunch of build scripts that everybody runs locally. For anything that's going to production, we actually have it built by a service. The service might be doing the same thing developers do on their own workstations. But the one we actually build for production is built by, essentially, robot accounts or service accounts. So we have a complete audit trail of exactly what source code was used, what tools were used to build it, and what artefacts were generated. Those artefacts are signed by the service to prove they were built that way. We can also prove other things, like that all the code was actually reviewed by two independent Googlers, so we know that a single Googler can't cause an insider risk attack. That's a more advanced prevention. You don't need to start there.

But the idea is that you have a service building your artefacts, instead of people, for the actual golden version that goes to production. That gives us a lot of peace of mind, because we really know what's in it. We know we have it properly inventoried. Also, it starts to generate useful secondary logs. When we discover a vulnerability, we now know all of the artefacts that contain that vulnerability, because we have a complete build record of everything in production, and we can actually go kill those jobs if needed.
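
To illustrate the kind of record a build service can emit alongside an artefact, here is a toy provenance sketch: source, builder identity, and artefact digest. Real systems, such as SLSA-style provenance, add signing and far more detail; the field names, paths, and identifiers here are illustrative only.

```python
# Toy provenance record: what was built, from which source, by which service.
import hashlib
import json
from datetime import datetime, timezone

def provenance(artifact_path: str, source_repo: str, source_commit: str,
               builder_id: str) -> dict:
    with open(artifact_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "artifact": {"path": artifact_path, "sha256": digest},
        "source": {"repo": source_repo, "commit": source_commit},
        "builder": builder_id,  # a service account, not a person
        "built_at": datetime.now(timezone.utc).isoformat(),
    }

record = provenance("dist/app.tar.gz", "git.internal.example.com/app",
                    "0a1b2c3d", "builder-service@example")
print(json.dumps(record, indent=2))
```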

[00:25:09] GUY PODJARNY: In terms of exact practices and exact tools that you can use, one practice that I'm very much in favour of around trust is to change your practices so that, instead of trying to be at the very, very bleeding edge, where every time a new version comes out you immediately embrace that component and upgrade to it, you try to introduce a lag. We tried to codify this in Snyk by having our upgrade recommendation basically state: if a new version came out, wait for, I think it's 30 days, before we actually open the upgrade for you to consume it. Unless there is a known security vulnerability involved, in which case it probably was just disclosed, a fix has been rolled out, and you want to adopt it right away.

This is an exercise in risk management, and you need to try and align the two lenses, but I think it's a very practical thing you can do to try and reduce the risk of embracing a malicious component. Because if you wait those thirty days, it's very likely that the ecosystem would have flushed it out, identified that the component is malicious because someone installed it locally or such, and then something can be done about it.
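
A small sketch of that "wait before you upgrade" policy, using PyPI's public JSON API to check how long a candidate version has been available before approving the upgrade. The package, version, and 30-day window are illustrative, and other ecosystems would need their own registry lookup.

```python
# Only approve an upgrade once the candidate version has been public for a
# minimum number of days, giving the ecosystem time to flush out malicious
# releases.
import json
import urllib.request
from datetime import datetime, timezone

MIN_AGE_DAYS = 30

def old_enough(package: str, version: str) -> bool:
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    files = data["releases"].get(version, [])
    if not files:
        return False  # unknown version: do not auto-approve
    uploaded = min(
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for f in files
    )
    age = datetime.now(timezone.utc) - uploaded
    return age.days >= MIN_AGE_DAYS

print(old_enough("requests", "2.31.0"))
```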

[BREAK]

[00:26:15] SIMON MAPLE: Now, before we get into the next bit of this episode, we have been speaking a lot about open source and code vulnerabilities in this mini-series. One thing we can do together to tackle software supply chain security is finding and fixing vulnerabilities in open- and closed-source software. That is exactly what we are doing this month at The Big Fix. Now, The Big Fix is a free and fun event for the community where you can invite your teams and build a more secure software ecosystem for us all to benefit from. The event kicks off on 14th of February, and you can preregister at The-Big-Fix, with dashes. Now, let's get back to the episode.

[EPISODE CONTINUES]

[00:26:51] SIMON MAPLE: There are a great number of tips there in terms of what people can focus on, and also what to look to implement themselves, or at least how to get some kind of understanding of the security posture of the components, services and vendors that they're using. But as much as supply chain security is a security issue, you equally need that similar priority or similar concern among your development teams and the development organisation.

Of course, developers will often number in the thousands in larger enterprises. So I guess there are two things here. First of all, what advice would you give for a security individual or security team that's trying to raise awareness within a development organisation, so that as many developers as possible recognise this space as a high-risk area? Secondly, what would you say are the best strategies that lead to actionable, good security hygiene practices for developers to use that mitigate the risk in these areas?

[00:27:45] GUY PODJARNY: I fully agree with this; there's a critical need to get developers bought into this. When you talk about who is making demands of open-source maintainers, when you're talking about who is effectively performing the security review when an open-source component is consumed, it is developers who are making these decisions. It's not practical for the business to have a developer go to a security team to decide whether an open-source component is valid or not before they embrace it. That will just grind all systems to a halt. It's been tried in the past, and there's a reason it's not working. It's just not practical in a DevOps reality. You have to, have to, have to get developers on board.

I think as a security team, you can do a few key things. One, make it visible. Developers, most of the time, are just not aware of the security problem, of this whole security conversation that should be happening when they consume an open-source component. They're well aware of its functionality. They're well aware of its popularity. They're well aware of maybe sometimes even its performance implications. But they don't know that they need to ask questions about security, and they definitely don't know that there is a security problem. So invest in tools that just give visibility, that flag the fact that there is a vulnerability. Second is, build an appreciation for the importance of the problem. So help them understand why this matters. Fine, you have a scripting issue there, but so what? This needs to be done in a gamified fashion.

All of these boring security curriculums that they might be going through, you know, on an annual basis, are probably not making the cut. But do fun things, like "stranger danger" type talks, or having them participate in a capture the flag for those who want to, or even just-in-time learning modules. You don't have to explain every single vulnerability to the nth degree, but you have to generally get them to appreciate the importance of security and how to handle it. That goes hand in hand with helping them by making it actionable. If all you do is tell them about problems, at the end of the day, you're going to become persona non grata. It's not the best strategy. So you want to think about, when you tell them about a problem, really investing in actionability, in remediation. I'm amazed at how many security solutions today say they do remediation, and really, their remediation action is logging a Jira ticket. Logging a Jira ticket is very important, and it is not remediation. It is logging a Jira ticket. It is logging the fact that there's a problem, no more than logging a support ticket is a fix to the support problem. It's not. You should keep logging Jira tickets, but you want to think about how you help developers really figure out how to fix the issue. A lot of tools have support for that.

The last thing I would say, which is probably the most omitted, is celebrate good work. Because security is just so infamous for being a stick, never a carrot. It's always, "Why didn't you fix this? Why didn't you do this?" I think security teams are in a great place to celebrate the developers that have done good work. All of these are, to an extent, not software supply chain security practices. They are software security practices, application security practices, developer security practices. That's really where the worlds blend a bit. There's a fair bit of confusion around, kind of, what's at the top. Is software security or developer security the top term, with software supply chain security a subset of that? Or is it the other way around: is software supply chain security the umbrella term around consuming software securely, and part of that is actually securing the pieces of software? Or are they two very disparate domains?

I'm personally of the opinion that there are two disparate domains. One is about producing quality items, and the second is around consuming them, moving them along the journey in a good fashion, and owning them responsibly. But there's no doubt that we need both, and they mostly coalesce in open-source maintainers meaning to build secure software. Clearly, we need to provide them with software security solutions and tools that are free. Snyk is free for open source. We're very proud of that. There are other tools that are free for open source to use, and I think that's important for everybody to do. We need to help them learn security, apply security, and motivate them. Recognise and celebrate them when they invest in security, as a community. Then separately, we need to consume software securely ourselves.

[00:31:51] SIMON MAPLE: We could perhaps break this into a couple of pieces. One is the pieces that are wholly owned by our own organisation. So perhaps we, as a company, want to secure all the pieces that we are building, the open-source components that we are pulling together into our application. That's something I can do internally. As I pull things into my build, that's something I can add into my pipelines. However, the second piece is really the things that we rely on others within the community to help with. For example, signature verification and attestation of third-party libraries that we pull in from other repositories. As you mentioned, there are other tools, perhaps like Snyk Advisor, that allow us to make good decisions as to whether we should use certain open-source libraries or not.

Tackling software supply chain security as an organisation is complex, whether you are a consumer or distributor of software. There are several community initiatives, like the OpenSSF from the Linux Foundation and initiatives from the CNCF, that are trying to improve the level of trust that we have in the software we are consuming. There are several steps that organisations can own, as we discussed today, to secure their own software supply chain. Now, a lot of work has been done in this space in the last few years, and there's a lot of work that still needs to be done. What does the future of software supply chain security hold? Well, we'll be tackling that in the finale of this mini-series, the Future of Software Supply Chain Security. So be sure to join us for that episode.

[END OF EPISODE]

[00:33:18] ANNOUNCER: Thanks for listening to The Secure Developer. That's all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you'd like to be a guest on the show, or get involved in the community, find us on Twitter at @DevSecCon. Don't forget to leave us a review on iTunes if you enjoyed today's episode. Bye for now.

[END]