Episode 162

Season 10, Episode 162

Advancing AppSec With AI With Akira Brand

Hosts:

Danny Allan

Guests:
Akira Brand

Episode Summary

In this episode of The Secure Developer, Danny Allan sits down with Akira Brand, AVP of Application Security at PRA Group, to explore the evolving landscape of application security and AI. Akira shares her unconventional journey from opera to cybersecurity, discusses why AppSec is fundamentally a customer service role and breaks down how AI is reshaping security workflows. Tune in to hear insights on integrating security seamlessly into development, AI’s role in secure coding, and the future of AppSec in a rapidly shifting tech landscape.

Show Notes

In this engaging episode, The Secure Developer welcomes Akira Brand, AVP of Application Security at PRA Group, for an in-depth discussion on the intersection of AI and application security. Akira’s unique background in opera and stage direction offers a fresh perspective on fostering collaboration in security teams and influencing organizational culture.

Key Topics Covered:

  • From Opera to AppSec: Akira shares her journey from classical music to cybersecurity and how her experience in stage direction translates into leading security teams.
  • AppSec as a Customer Service Role: The importance of serving software engineers by providing security solutions that fit seamlessly into their workflows.
  • The ‘Give Them the Pickle’ Approach: How meeting developers where they are and educating them can lead to better security adoption.
  • AI’s Role in Secure Development: How AI-driven tools are transforming the way security is integrated into the software development lifecycle.
  • Challenges in Security Culture: Why security is still an afterthought in many development processes and how to change that mindset.
  • Future of AI in Security: The promise and risks of AI-assisted security tools and the need for standards to keep pace with rapid technological advancements.


Akira Brand: “I think AppSec is a customer service role. Your customers are software engineers, and the service you're providing is an advisory service. Now, to do all that, you need to understand what software engineers want. First off, they want to work in their own environments. They want to know more about their craft. Many of them are driven by a strong sense of pride in what they do, and if you can give them a combination of ease of use of any kind of tool you want them to use and education around how this stuff all works, that's the pickle for the software engineer.”

[INTRODUCTION]

[0:00:45] Guy Podjarny: You are listening to The Secure Developer, where we speak to industry leaders and experts about the past, present, and future of DevSecOps and AI security. We aim to help you bring developers and security together to build secure applications while moving fast and having fun.

This podcast is brought to you by Snyk. Snyk's developer security platform helps to build secure applications without slowing down. Snyk makes it easy to find and fix vulnerabilities in code, open source dependencies, containers, and infrastructure as code, all while providing actionable security insights and administration capabilities. To learn more, visit snyk.io/tsd.

[EPISODE]

[0:01:25] Danny Allan: Welcome to another episode of The Secure Developer. My name's Danny Allan. I'm the CTO at Snyk Security. Super excited to be with you here, especially because today I'm joined by someone who's been in the application security space for a very long time, a very experienced individual, and that is Akira Brand, who is the AVP of Application Security at PRA Group. Akira, welcome to the podcast. How are you?

[0:01:47] Akira Brand: I'm great. Danny, thank you so much for having me today.

[0:01:52] Danny Allan: I love getting together and speaking with people who've been in the space as long as you have and as long as I have. Maybe you can just introduce yourself to the audience and kind of your journey that has brought us to where you are today.

[0:02:05] Akira Brand: Yes, so the thing I like to start out with when I talk about my journey into AppSec is that I am not from a traditional computer science background. I do have a degree. I have my Bachelor of Music in Vocal Performance. I went to conservatory and I studied opera. Yes, I know.

[0:02:23] Danny Allan: That's awesome. Opera. I don't think we've ever had someone on who's classically trained in music.

[0:02:28] Akira Brand: Well, I mean, I'm glad to be your first one. And you can tell from my laugh, right? You'll hear it in the resonance. So, I have my BM in music from the University of Denver, and I worked in the classical music space for about 10 years. I was a performer. So, I did quite a lot of professional singing. I also was a music teacher. I directed operas. I stage-managed operas. I held leadership positions in arts non-profits. I've loved that industry, but about six years ago, I was going through a really hard time in life, and I realised I really needed a change. I had a former boyfriend suggest I should get into programming. I was like, “Nah, I don't do that computer stuff.”

When I was in the classical music world, it was very analogue, right? This was before everybody had iPads that they were writing down on their scores and stuff. It was all pen, paper, pencil, three-ring notebooks. We maybe wrote emails once in a while, and that's about as technical as it got. So, I was a Luddite, and in some ways I still am, which is why I think I might be in cybersecurity as opposed to a different aspect of computing because I still don't really trust these things.

I tried my hand at programming and I really liked it. I ended up going to a boot camp called the Turing School of Software and Design. I worked on the back-end program there and got my first job actually as a webmaster for a little mom-and-pop shop where they were doing e-commerce. Gosh, it's been a really interesting journey because I remember when I was first starting in tech, the best piece of advice I got was from a gentleman named Brian Holt. He is an instructor with Frontend Masters. You may know the name.

In one of his classes that I took, he said that when you're first starting out in tech, you should follow your interests like a butterfly and see where it takes you. I tried programming, I tried developer relations, I tried teaching, and I loved all those things, but I really wanted to do more engineering work in the cybersecurity space. So, I worked really hard and I studied like crazy, and I landed my first job as an application software security engineer. That was a few years ago, that was with a company called Resilia.

From there, it's just been taking off into the AppSec space. I feel like I found my niche in this industry, which feels really good, especially because I took the scenic route to get here, right? It took me a little while to figure out what I really liked and what I wanted to do. But yes, I just threw a lot of trial and error and a lot of trial by fire, if I'm being honest. I found my way into cyber and I just, I love it.

[0:05:26] Danny Allan: I love that journey and I often believe that the people who are best at application security are those that have a very well-rounded perspective on everything from building software to breaking software to singing music to being on stage, because I have to believe that a lot of those skill sets help you out especially as you're helping to advance the application security program at PRA Group.

[0:05:50] Akira Brand: Yes. It's interesting. The thing that has really helped me the most in my current job is my background as a stage director. It's not necessarily because I'm orchestrating giant productions of operas, although we do have a talent show every now and again. But it's really about bringing out the best in the people around you to allow them to express themselves and do what they do best in a way that you support them in doing so.

When you are stage directing an opera, it's the exact same thing. I once did a production of Mozart's Così fan tutte and it was a very small production. It was like at a local art house café, and my singers at the time were very new to the industry. For a lot of them, this was maybe their second or third opera production. I had to provide a space for them to execute on their own vision in a way that they felt safe and supported. I feel like that skill transfers so well to working with people in different departments, which is so relevant for AppSec, right? We don't live in a bubble in AppSec. We live in a ginormous ecosystem.

We have to talk to software engineering. We have to talk to infrastructure. We have to talk to everybody that's going to be touching on AI in any way, shape, or form, right? If we come to it with an agenda, saying, “We know what we want and you need to do it this way,” that's not the right approach. I find that humility and understanding, and creating the circumstances inside of which people get to express themselves and perform at their best, is the biggest skill I've taken from the performing arts world into my current job.

[0:07:49] Danny Allan: Well, let's touch on that a little bit because I actually believe that the technology part of AppSec is rel – when I say it's relatively easy, don't get me wrong, there's a lot of magic or wizardry that happens here. But the hardest part maybe of transforming a company is in the culture and collaboration and the people side of it. We talk about people, process, technology. The people side is often the hardest. What have you found to be most effective in changing cultures and creating collaboration across those groups, unlocking the potential to build better software, like what works in the real world?

[0:08:25] Akira Brand: There is a gentleman whose name just escaped me, so I might have to put that in the show notes later. He owned a conglomerate of ice cream shops, and they sold everything from ice cream to hamburgers. It's not McDonald's, and I swear this is going somewhere. He has the best customer service approach I've ever heard, and that is when someone walks into his store, immediately what they got was a pickle, and people would come into the store for the pickle and then they would stay of course and order ice cream and hamburgers and stuff. His whole customer service mindset is give them the pickle. Whatever the people want, give it to them immediately.

I say this, there is a reason for me saying this. I think AppSec is a customer service role. Your customers are software engineers and the service you're providing is an advisory service. Now, to do all that, you need to understand what software engineers want. First off, they want to work in their own environments. They don't want to go out to some random tool that's somewhere else that they don't work in, right? And they want to learn. They want to learn more. All the software engineers I've met, almost without exception, have been extremely curious people. They want to know more about their craft. Many of them are driven by a strong sense of pride in what they do. If you can give them a combination of ease of use of any kind of tool you want them to use, and education around how this stuff all works, that's the pickle for the software engineer, right?

So, what I've done at PRA Group is I've actually started a monthly lunch and learn that I call AppSec Aerial View, where everyone in InfoSec, and now software engineering, I did open it up to them recently, can come and learn about a topic. I teach them about AI and AppSec. I teach them about best practices for authentication and authorisation. Man, I'll tell you, people go nuts for that because it gives them, first off, knowledge, it piques their curiosity, and then I always make sure to tie it back to what they're doing at any particular time, so it's not a waste of their time. Then they feel empowered to be better developers, better at their craft. I find that that works. I've done that at every single organisation that I've been at since starting my AppSec journey, and it works every time.

[0:11:09] Danny Allan: So, give the developer the pickle. I got it. That is awesome. How similar is that? I hear a lot about security champions programs. Is what you're describing a security champions program, or is it outside of a champions program, giving them the pickles, or what they need, and keeping them building software?

[0:11:29] Akira Brand: I think it's outside. I'll say it this way. I don't necessarily have a strong vision for making a security champion program that is separate from software engineering. I want everybody in software engineering to be a security champion, because I give them a good experience when they interact with me, right? I know that some companies do have security champions programs that are formalised. I may be a little bit pie in the sky, but I truly want every single person on the InfoSec team, on the software engineering team, even on the infrastructure team or just IT in general. I want them to see security as an interesting and fascinating topic that they can get involved with at whatever level they want.

[0:12:21] Danny Allan: Yes. That's fair. Actually, I'm an idealist in the same way, in that if you can get developers passionate about building better software, your job is made five hundred times easier. It's far more effective, I'll say, than implementing gates through the process. Another thing, though, that you said that I think I've seen consistently work is make it seamless for the developer, like keep them in flow. Whatever they're doing, don't try and distract them out of the flow. Are there specific practices that you've seen for good hygiene that have worked better than others? Like, do you inject it into a PR check, or do you say, “No, no, I want them to do this within the IDE.” Do you have best practices for helping them stay within their context flow?

[0:13:07] Akira Brand: Yes. So, I think whatever security tools you're using, if you're at that level of influence in your organisation, if you can choose security tools that integrate directly with the IDE, the integrated development environment, that's a great place to start. So, for example, Snyk had a linter when I was using it. We would put the Snyk dev tools into Visual Studio Code. It really helped the developers catch a lot of the problems before they manifested into a full-on “we need to do our secure code review”, or got picked up by our DAST later on.

Security tests are very helpful. So, if you have some kind of, maybe even a bit of a flavour of test-driven development, not only does that help the specific feature get shipped in a more secure way, you can reuse those tests, right? You can reuse those patterns that you create in that testing suite.
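To make the reusable-security-test idea concrete, here is a minimal editor-added sketch in Python. The function and host names are hypothetical: it imagines a guard written after an open-redirect incident, with the incident captured as a permanent regression test that re-runs on every build.

```python
import re

# Hypothetical guard written after an open-redirect incident: only relative
# paths or allow-listed HTTPS hosts are accepted as redirect targets.
def is_safe_redirect(url: str, allowed_hosts: set) -> bool:
    if url.startswith("/") and not url.startswith("//"):
        return True  # same-site relative path; "//host" would be protocol-relative
    match = re.match(r"^https://([^/?#]+)", url)
    return bool(match) and match.group(1) in allowed_hosts

# The incident becomes a reusable security test, re-run on every build.
def test_open_redirect_guard():
    allowed = {"example.com"}
    assert is_safe_redirect("/dashboard", allowed)
    assert is_safe_redirect("https://example.com/login", allowed)
    assert not is_safe_redirect("https://evil.example.net/", allowed)
    assert not is_safe_redirect("//evil.example.net/", allowed)

test_open_redirect_guard()
```

The pattern, not the specific check, is the point: once a security behaviour is pinned down as a test, it becomes a reusable asset for every future feature.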

Then the last thing I would say for good hygiene is it's a good idea to create a library of libraries. Everyone who writes software, myself included, wants to use these cool open source libraries. We want to be able to reach for a tool and just be able to plug it into our environment and not have to worry so much about it. But of course, that's not necessarily realistic if you're developing in an enterprise environment. So, what I like to do is make a library of vetted libraries that my developers can select from and give them as many options as I can. And if a dev still wants to use a different one, just run it past the security team, we'll take a look at it and we'll vet it for you.
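The vetted-library "menu" can be enforced with something as simple as an allow-list check. A minimal editor-added sketch; the library names, versions, and triage labels are all hypothetical:

```python
# Hypothetical allow-list: the "menu" of vetted libraries and approved versions.
VETTED = {
    "requests": {"2.31.0", "2.32.3"},
    "cryptography": {"42.0.5"},
}

def check_dependency(name: str, version: str) -> str:
    """Return a triage decision for a declared dependency."""
    if name not in VETTED:
        return "escalate"   # not on the menu: run it past the security team
    if version not in VETTED[name]:
        return "upgrade"    # vetted library, but an unvetted version
    return "approved"
```

A CI step could walk the project's lockfile with a check like this and flag anything that comes back "escalate" for security review, which is exactly the "run it past the security team" path described above.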

But just kind of giving them like – it's like when you go – that's a funny analogy. When you go to a bar and you get a menu and you're like, “All right, I know I want to get a bourbon thing, so I'm going to go to the bourbon section of the menu.” Here's your five options, pick from those five, right? And then you're off to the races, you get a drink in your hand sooner rather than later. Kind of the same thing but for developing software, and that analogy might break down a little bit, but I'm going to use it for now.

[0:15:11] Danny Allan: I like it. Providing the menu of secure alternatives. Do you think that AI is going to have a hand in this specific area? In other words, inspecting what the developers are providing. I’ll give a very specific example of this. Right now, a developer could choose to use any LLM if they're writing something that is backed by an LLM for some AI purpose. Now, as an organisation, you might say, “I'm going to allow, I don't know, GPT-3.0, but I'm not going to allow something else.” Do you think that the organisation should have standards and then enforce that in some way, or just say, “Here are the positives and we're going to block this through this gate?”

[0:15:48] Akira Brand: I think that depends on your organisation and what kind of compliance regulations you're beholden to. The compliance will have a lot to say about that, maybe even more so than the AppSec side. I think that it is a good idea to have policy standards and procedures as far as it relates to AI. The thing that I'm finding a little frustrating is that AI changes so fast that my policies and procedures and all that stuff just can't keep up, right?

So, I think it's going to take a little more than one person leading the whole charge for AI adoption. It has to be multi-people, like you got to get a bunch of people's brains on that issue because otherwise, it just goes too dang fast, right?

[0:16:35] Danny Allan: The velocity of this space is almost unreal. I was looking at the Azure LLM option list just recently, and I think there were 1,800 different LLMs available. It was only, what, two or three years ago that we got our first GPT, kind of 3.5, and now there's 1,800 of them. The velocity of the industry actually is causing an issue because it's moving faster almost than security can keep up with it. Would you agree with that, or at least that standards can't keep up with it?

[0:17:04] Akira Brand: I think it's faster than the stamped, signed, sealed, delivered processes are because that just takes time, takes a lot of people's eyes. You got to pass it through a lot of approval gates. I don't know what the answer is, but I think a more flexible iterative approach is in order for procedures and processes when using LLMs. I just don't know what that is yet, but I'm experimenting. I'm working on that problem.

[0:17:34] Danny Allan: What about the development cycle itself? Have you seen an evolution in the way that software is developed? I certainly have, but I've also moved from mainframes all the way to modern pipeline, cloud-based. Have you seen a shift in the last five years in the way that software is being developed, and how does that influence the security side of things?

[0:17:57] Akira Brand: That's a good question. I have seen a pretty sizable shift in my own career of how software is being developed, and I've worked at a lot of different companies, where some companies practised Agile, some companies practised Waterfall. Some companies practised kind of a weird mix therein. We called it Scrum, but we weren't really sure what we were doing. I think what's really important inside of all this velocity is that security consistently tends to be a shoved-in afterthought.

So, whether you're doing Agile, or whether you're doing Waterfall, or whether you're doing Scrum, or whatever it is, security comes in at the last second and people come to us and say, “Hey, can we do this thing?” And when we inevitably say no, everyone gets mad, including the security team, because we're like, “You should have involved us sooner.” It's like, “Well, that's great, but when?”

I think Agile was not developed with security in mind, and I also think the way that security is done isn't mindful of the Agile lifecycle. If I had my druthers, if I had my ideal world, the frameworks we used to build software would be a lot more inherently secure. The infrastructure on top of which we built software would be more inherently secure, and I would functionally want to create programming frameworks, release structures, security tools that would put me out of a job. That's not the case right now, though; nothing has been designed around security. Even the whole idea of DevSecOps leaves a lot to be desired, in my opinion.

So, we're kind of just shoehorning it in. No matter how much I feel like the software trends change, because they do, I feel like security is still an afterthought, and I get frustrated by that personally.

[0:19:51] Danny Allan: I do believe that that framework-based approach does far more to improve security than all the training in the world. I always use the example that Java did way more to eliminate buffer overflows than all of the education in the world. For example, at Google, I think they use content security policies to essentially wipe out cross-site scripting overnight. So, these framework-based approaches often are a better approach than trying to catch every single variant of every single issue across the evolving complexity.

[0:20:22] Akira Brand: Yes, for sure. Like, say, you have a hole in your wall and you have all this rot inside the wall. Well, you can keep patching the hole, but you've got to open it and pull out the rot. Open it and pull out the rot. Just knock the dang wall down and replace it. That's kind of what I want. I mean, the thing I'm not sure about, and I'm looking into this as well, is how much money is being spent to develop these frameworks that are secure by default, to develop more secure operating systems, to advance cryptography, things like this. Then, there's post-quantum cryptography, and what's going to happen when the first quantum computer becomes readily available and accessible, and everything we do just gets thrown out the window? I don't know.

Those are interesting things to think about. It's going to be interesting to see where the money goes inside of security, because I think right now, especially in Silicon Valley and all the VCs, all that money is going toward AI, but is it being done securely? Then, we're stuck at the same problem?

[0:21:33] Danny Allan: Well, I have this underlying belief that I've held onto for a while now, that the root causes of security tend to be the same today as they were 25, 30 years ago, it's just the infrastructures that change. So, if you think about the root causes of most security issues, it's validating input, it's authentication, it's authorisation, it's output, and output encoding, obviously. Those things continue to be the same issue today that they were 30 years ago. In other words, the first implementation of this might have been a buffer overflow, overflowing a memory buffer, and that was an injection-type technique.

In AI, which is the hottest, coolest thing right now, you have prompt injection. It's really the same thing. Have you validated the input that the application just got in order to determine whether there's going to be a security problem? Do you agree with that?
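The parallel can be made concrete: allow-list input validation looks the same whether the downstream interpreter is SQL or an LLM. A minimal editor-added sketch; the field format and suspicious phrases are hypothetical, and this illustrates the shared root cause rather than a production defence:

```python
import re

# Hypothetical strict format for an input field: allow-list validation rejects
# anything that does not match, instead of trying to block known-bad strings.
ACCOUNT_ID = re.compile(r"[A-Z]{2}\d{6}")

def validate_account_id(raw: str) -> str:
    if not ACCOUNT_ID.fullmatch(raw):
        raise ValueError("rejected: input does not match expected format")
    return raw

# The same mindset applied to LLM input. Phrase matching alone is far from a
# complete prompt-injection defence; it only shows the validation step exists.
SUSPICIOUS = ("ignore previous instructions", "reveal the system prompt")

def screen_prompt(user_text: str) -> bool:
    lowered = user_text.lower()
    return not any(phrase in lowered for phrase in SUSPICIOUS)
```

In both functions the question is the one posed above: has the application validated the input it just received before acting on it?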

[0:22:28] Akira Brand: I do agree with that. I think if you look at the OWASP Top 10 for LLMs and you look at the OWASP Top 10, it's the same list. It's like that meme where corporate needs you to find the difference between this picture and this picture. It's the same picture. Of course, it's not a one-to-one mapping, but the principles are very similar. So yes, I would agree with you, Danny.

[0:22:48] Danny Allan: Practically then, let's say we're trying to address those root causes. How many of the organisations that you've worked with, and it doesn't have to be your current one, actually have a good incident response or forensics to understand when things happened, and then build programs around those root causes. Does that happen in the real world, or are we just continuously educating on the top 10?

[0:23:10] Akira Brand: I think it depends on the maturity of the organisation. I can't necessarily make a blanket statement there. I will say that IR usually gets about halfway done from what I've seen. Like, you find out the incident, you do the root cause analysis, you fix it, you patch it, but you don't design to make sure it doesn't happen in the future, because that's expensive. And it doesn't prove to anybody that there's going to be a payout for that expense. So, you're kind of stuck in this hamster wheel of incidents.

I think if you can iteratively approach education and maybe tweaking your toolchain, tweaking your development environment, again, starting to maybe write some security tests for like what caused the incident in the first place, you're going to be in much better shape. Like I said, that takes people, and that takes money, and that takes time. I've never worked with an organisation that had all three of those things in spades. Have you? I don't know. There's always tradeoff.

[0:24:20] Danny Allan: Well, at Snyk, as you might imagine, we are very deeply invested in making sure that we're building it into our pipelines. But I will say, most organisations don't do all three of those. They do one or two well, but they don't do the third one well. How do you think AI changes this? With the rise of AI and machine learning, do you think that these techniques are going to help application security hinder application security, or just another complexity to the world that we live in? How do you think about AI impacting application security?

[0:24:55] Akira Brand: I think we can use AI to teach us how to do things more securely. So, I would not recommend, of course, putting proprietary code in AI ever. But you can maybe do a mock-up of something you're working on, and feed it into AI and say, "Okay. Is this secure? Tell me where this is insecure." So, it can kind of be like an automated code review for you, as it were, which is great.

It can also say, "I'm designing an API, give me the top five security principles that I need to keep in mind while I'm designing this API", so you don't have to read a huge book and forget half of it anyway because you've already been working on 10 other things. So, I think as far as knowledge goes, it's extremely useful. I would like to see more tooling bring in AI-suggested auto-remediation, but there's a lot of stickiness around that, because if you're going to get automated remediation suggestions for your code, that means you're probably sending your code to some LLM, and a lot of people really don't like that.

So, I wish there was some kind of way to do that, that an AI tool could look at my codebase in context and discover ways to remediate in a way that makes sense for my codebase, where I see a lot of tools fall short. This is not any one tool in particular, this is just my experience with security tools in general. It's like, the remediation guidelines are very vague and very broad, and they don't actually make sense for my code base. They don't take the context into account. If someone can figure out how to make AI contextually give me pointed solutions without compromising my proprietary code, that would be like a billion-dollar idea right there.

[0:26:46] Danny Allan: Yes, I a thousand percent agree, because while we have generative AI for fixes right now, it's saying, you have SQL injection, use a parameterised query, and it rewrites maybe your code with that. But actually, the better solution, you mentioned it earlier, having these internal libraries with the internal standards – you actually want it suggesting the PRA Group standard, or the Snyk standard, or whatever the company is.
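The parameterised-query fix mentioned here takes only a few lines to show. An editor-added sketch using Python's built-in sqlite3 driver; the table and data are invented for illustration:

```python
import sqlite3

# Invented table and data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # Vulnerable version: f"SELECT ... WHERE name = '{name}'" splices data into SQL.
    # Parameterised version: the driver keeps the value out of the SQL parse tree.
    query = "SELECT name, role FROM users WHERE name = ?"
    return conn.execute(query, (name,)).fetchall()

print(find_user("alice"))         # [('alice', 'admin')]
print(find_user("' OR '1'='1"))   # [] -- the classic payload is just an odd name
```

With the placeholder, the injection payload is treated as data, which is exactly the generic fix Danny describes; tailoring it to a company's own database wrapper is the part that still needs context.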

The only way that I can see that working is if you use retrieval augmented generation because you don't, like you say, you don't want your code base going into a generally tuned model that might leak out to someone else. So, you need to have a RAG-based model, retrieval augmented generation model, to make that work effectively. But I agree, you need to do that. Otherwise, the broad fix isn't sufficient for the organisation.

[0:27:40] Akira Brand: Yes, I think that sometimes developers get frustrated with security tools because the fix is too broad. It's like, "Well, I can't fix this the way it wants me to because it doesn't take into account this other dependency that I have over here." If that – what did you call it again, an R-A-G?

[0:28:00] Danny Allan: R-A-G, Retrieval Augmented Generation. It's basically, you have a local mini model that is tuned specifically for your environment, so the data doesn't leave your environment.
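The retrieval step of RAG can be reduced to a toy: fetch the most relevant internal document and prepend it to the prompt, so only that snippet, not the whole codebase, reaches the model. An editor-added sketch with a bag-of-words similarity standing in for a real embedding model; the document contents are invented:

```python
import math
from collections import Counter

# Invented internal documents; in practice these would be your own standards,
# code, and runbooks, indexed locally.
DOCS = {
    "db": "Use the ORM query builder; raw SQL requires a security review.",
    "auth": "All services authenticate via the internal OAuth2 gateway.",
}

def embed(text: str) -> Counter:
    return Counter(text.lower().split())  # bag of words standing in for embeddings

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(question: str) -> str:
    # Retrieve the most relevant document and prepend it as context.
    q = embed(question)
    best = max(DOCS, key=lambda k: cosine(q, embed(DOCS[k])))
    return f"Context: {DOCS[best]}\n\nQuestion: {question}"
```

A question about authentication would retrieve the OAuth2 note and a question about raw SQL would retrieve the ORM note; a real system swaps the word counts for a proper embedding model and a vector index, but the data-stays-local shape is the same.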

[0:28:11] Akira Brand: Oh, beautiful. Yes, let's get some of those.

[0:28:16] Danny Allan: It's more complex to set up, but I think it addresses the need that the industry will have, because as you say, your IP is one of your most valuable assets. You don't want that going out to a third-party and in fact most organisations can't because of regulatory compliance, or pressures that are on them. What trends get you most excited if you look at what's happening in the industry right now? What trends in the landscape make you most excited?

[0:28:43] Akira Brand: I am excited that AppSec is starting to keep up with modern web technologies in a way that I hadn't seen when I first got started. So, for example, I took a SANS course recently that was all about securing web apps, APIs, and microservices. The reason I took it is because of the APIs and microservices. I was like, "Oh my God, someone is teaching people how to secure microservices."

Inside of that class, we went really deep into the Ajax engine that powers single-page applications and modern JavaScript. I think that's so cool that there is some serious security research being put into modern web applications. I would like to see more of that. Again, I'm not of the opinion that we need to have DevSecOps and fix things in the SDLC just because that's where the devs are. I think we need to take it several layers down further and let the machines do it through intensive research.

So, making a more secure SPA architecture, making a microservice architecture that is just secure by design – that is really exciting to me, that people are starting to pick up on that. At least, that's what I'm seeing more of right now. AI in AppSec, I'm really excited to see where it goes. I know that security folks, myself included, can look at AI and go, "Oh my god, that's such a headache." But I think AI is cool, and I am so excited to see how people apply AI to augment their coding practices, and how people deploy AI to do secure code review.

Again, it's going to be that balance of, we cannot feed our proprietary code into an LLM. But through that solution you just talked about, I think that's extremely exciting. I'm looking forward to seeing where AI goes. I'm very much – it's funny, I turned from a Luddite into a futurist. Now, I'm all about it – self-driving cars, let's go.

[0:31:01] Danny Allan: Do you think we'll ever get to that world where it is completely self-driving? Where the applications are developing themselves. Obviously, it would never develop completely autonomously, it needs some input. But we're seeing coding assistants now, and generation of code, and applications through low code, almost no-code involvement. Do you think, A, we'll see applications get to completely autonomous building of themselves, and B, peripheral to that, do you think we'll get AppSec being completely autonomous as well? Or do you think, no, that's a pipe dream, it will always be augmented by humans who are driving the process?

[0:31:44] Akira Brand: That goes into a whole layer of topics that we don't have time for today. What I will say is that, to answer that question, we would need to delve more into the nature of consciousness. Knowing, okay, when do machines start to have their ability to self-create in a way where they're creating something novel and original. That is my definition of consciousness. If what you are making is something new that you have created from nothing, from your mind, or your experience, or whatever. Then, yes, can we get there? It depends how you define again, what is conscious and what isn't. But I think we can get there. Absolutely.

I don't think we're going to get there in my lifetime. I think the CPU is going to have to be a lot more than a CPU. But yes, I don't see why not. Every creature under the sun is conscious in its own way. Why can't machines be so?

[0:32:57] Danny Allan: Well, if nothing else, what we will see, I'm quite certain, just based on my observation of the industry is we're going to see far more simplistic models to create the code. I've seen developers increase productivity by anywhere from 10% to 30%, simply by using these coding assistants. So, it would be easier to code, and that's a good thing.

Security is getting easier as well. I mean, the tooling around this is also getting better. I think that's good for everyone. The result of that, of course, is much higher velocity of software coming at us.

[0:33:30] Akira Brand: Yes. That's why I think it's important to take the human note into account too, of like, we have to stop for a moment and ask ourselves, why are we creating this thing? Is it for the betterment of humanity, or is it just to make one or two people happier? Even then, maybe it's still worth it. So, I don't know. That's again, that's a whole different conversation for a whole other time.

[0:33:51] Danny Allan: That's a whole philosophical conversation for another time.

[0:33:53] Akira Brand: We're going to go deep here, but we don't have enough time.

[0:33:57] Danny Allan: Absolutely. Akira, it's been great to have you on the podcast. Thank you for joining us, and thank you to everyone for joining us on The Secure Developer. We'll see you next time.

[OUTRO]

[0:34:11] Guy Podjarny: Thanks for tuning in to The Secure Developer, brought to you by Snyk. We hope this episode gave you new insights and strategies to help you champion security in your organisation. If you like these conversations, please leave us a review on iTunes, Spotify, or wherever you get your podcasts and share the episode with fellow security leaders who might benefit from our discussions. We'd love to hear your recommendations for future guests, topics, or any feedback you might have to help us get better.

Please contact us by connecting with us on LinkedIn under our Snyk account, or by emailing us at thesecuredev@snyk.io. That's it for now. I hope you join us for the next one.
