Season 9, Episode 144

Generative AI, Security, And Predictions For 2024

In this engaging episode, hosts Simon Maple and Guy Podjarny delve into the transformative role of AI in software development and its implications for security practices. The discussion starts with a retrospective look at 2023, highlighting key trends and developments in the tech world. In particular, they discuss how generative AI is reshaping the landscape, altering the traditional roles of developers and necessitating a shift in security paradigms.

Simon and Guy explore the challenges and opportunities of AI-generated code, emphasising the need for innovative security strategies to keep pace with this rapidly evolving technology. They dissect the various aspects of AI in development, from data security concerns to integrating AI tools into software creation. The conversation is rich with insights on how companies are adapting to these changes, with real-world examples illustrating the growing reliance on AI in the tech industry.

This episode is a must-listen for anyone interested in the future of software development and security. Simon and Guy's expertise provides listeners with a comprehensive understanding of AI's current development state and offers predictions on how these trends will continue to shape the industry in 2024. Their analysis highlights the technical aspects and delves into the broader implications for developers and security professionals navigating this new AI-driven era.

Guy Podjarny: “I think, in general, at least for me, a learning from the beginning to the end of the year was that, I firmly believe today, there will be multiple models in every company. Part of it is because of, indeed, data security concerns: there are certain types of data that you want to apply AI to, but that you are unwilling to share with any hosted platform, and so you want to be able to run locally.”

[INTRODUCTION] 

[0:00:29] ANNOUNCER: You are listening to The Secure Developer, where we speak to leaders and experts about DevSecOps, dev and sec collaboration, cloud security, and much more. The podcast is part of the DevSecCon community, found on devseccon.com, where you can find incredible dev and security resources, and discuss them with other smart and kind community members.

This podcast is sponsored by Snyk. Snyk’s developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open source, containers, and infrastructure as code. To learn more, visit snyk.io/tsd.

[EPISODE]

[0:01:14] Simon Maple: Hello, everyone, and welcome to the first episode in 2024. Hope you all had a great Christmas, for those who celebrated, and an enjoyable New Year. Joining me on this episode, of course, is Guy Podjarny. Guy, how are you?

[0:01:31] Guy Podjarny: I am good. I'm good. I am looking forward to the Christmas that I would have already had by the time this –

[0:01:37] Simon Maple: Absolutely. Yes, absolutely. We’re recording this just before Christmas, but you'll be listening to this just after. So, Guy, what are you doing for Christmas and the New Year? Do you have a good break planned?

[0:01:47] Guy Podjarny: Amazing New Year's. Actually, for us, a bunch of plan changes left us, for the first time in a very long time, back home for Christmas. So, we actually have a Christmas tree for the first time, which is kind of fun and was a little anticlimactic. I was all prepared: which one will I choose, how will I do it, getting all these tips. Then I went out and bought it, and it took like five minutes. It's really not that big of a deal. People do it in multitudes. So, it was nice.

[0:02:19] Simon Maple: You need a cat for the real experience, I think, as well, because they go crazy for the hanging decorations and climbing the tree, right? So, that's the –

[0:02:27] Guy Podjarny: I got a big base so my dog doesn't tip it over.

[0:02:31] Simon Maple: Excellent.

Well, in this episode, we're going to take a look back at 2023, summarising some of the key takeaways from the episodes and from the year more broadly. Then we'll take a look forward at 2024: what are the key themes going to be, what are the trends going to be? And we'll also look at what our predictions were last year, to see how accurate or how wildly off we were. But yes, this is going to be a nice summary episode. Let's take a look at 2023 on the podcast from a stats point of view. Twenty episodes across 2023. How did that feel, Guy? Did it feel like more than usual? Fewer than usual?

[0:03:10] Guy Podjarny: No year is the same, because each one is its own. I think it's probably a decent amount. I think it's in and around the number we aim for, probably a bit burstier, as some listeners might have noticed. But we're getting better. We're getting our cadence a bit more consistent again.

[0:03:24] Simon Maple: Yes. Topic-wise, actually, talking about the bursts: earlier in the year, of course, we had the supply chain series, where we did three or four all in one go, pretty much releasing weekly, I think, all around supply chain security. So, a real hit of supply chain security, important topics, and well worth doing. We'd love to hear from our listeners, actually, if that's a format folks like to consume these topics in. And then towards the end, obviously, I took over, right? And we did a number of topics, again in a kind of big hit, in and around AI. Your favourites, your learnings, Guy, from those? Which did you –

[0:03:57] Guy Podjarny: Yes. I think we're probably going to dig into a lot more detail. But I'd say, yes, we definitely opened the year under the sort of supply chain security mantle, with AI looming; I think it was in the predictions, and we're going to get to that. But supply chain security was a key topic, and I think it remains a key topic for us. We can probably chat about it some more. But I think the series was interesting. It was a bit of an almost documentary-style attempt, which I think we took a bunch of learnings from. And indeed, we still get some feedback on it from listeners, for those who remember it from the beginning of the year, or who want to go back and listen to it. If you have any thoughts on things you want to see more or less of, that would be great.

I do want to point out also that at the beginning of the year we, I think, officially announced you, Simon, as a co-host. You joined as sort of the newcomer here, but I think you've led nearly half the episodes, or around that, for the year. That was very noticeable for me. We did take –

[0:04:50] Simon Maple: Of course, big shoes, big shoes to fill there as well, Guy. There's definitely a bit of imposter syndrome coming in there. But it's been fun to join.

[0:05:00] Guy Podjarny: Yes. I guess, as I look back a little bit, I think AI dominated the charts, no doubt. So, that warrants its own conversation. But what I love about these kinds of summary moments is that you get to look back and remember a bunch of conversations that happened. I mean, a year is a long time. You go back to the first third or so of the year, and we actually started the year with a conversation about community, right? You had a chat with Rishi from Project Discovery and Nuclei.

[0:05:27] Simon Maple: Wonderful tool. Actually, whenever I talk to folks like that, it always reminds me of early Snyk days as well. But yes, incredible work that they're doing, and the fact that so much of it is open source, and there is such a community pull. Obviously, it's one of those tools that improves the bigger the community gets; the idea that these templates can be added by the community directly into the open-source repos massively assists with the scale and breadth of the tool. So yes, a really great conversation in that security-by-community style.

[0:06:00] Guy Podjarny: Yes, I love that, and I think that continues to hold. Sometimes, I think, in the shadow of AI, we lose some of that spotlight; it's so taken over. But I think that was a great conversation. I think supply chain security continues to be a topic. I'd say, and I don't know if that's your sentiment as well, Simon, that supply chain security was maybe newer to the headlines in 2022, when a lot of pennies dropped with a bunch of big breaches. So, it was very messy. There was all this development with the Open Source Security Foundation. When we opened the year, a lot of the attention in the series was on just some clarification, some taxonomy, and what is it that I should do?

I guess my sense is that supply chain security has now settled down a little bit into a pace. There's a lot more practicality. There's the end user group and the OpenSSF stuff that's contributing more concrete plans for how do I consume this. There's regulation that's firming up. So, it's kind of settled into place. I feel like it lost some of the hype, or maybe you could even say some of the urgency. I don't think there are board conversations these days where the board is asking, “Hey, how are you not the next SolarWinds?” I think the conversations now are about, “What are you doing about AI?”

So, there's a bunch of regulations, and we see a lot of activity on the practical side. Everybody knows they need to collect SBOMs. Everybody knows they need to share those with their customers. But most people, while they know they need to collect SBOMs, still don't really know what they're going to do with them, and how to handle it. Most of them are still dealing with the machinery of knowing and collecting and doing it, versus the very few that are at a stage of optimising and levelling it up and truly hardening the pipelines.

So, there's a few that do this. But I feel like it's come down a notch in terms of speed and evolution, maybe in favour of practicality.

[0:07:56] Simon Maple: Yes, I think it was out-hyped. To me, something newer and shinier came along, and I think you're right. There are a lot of existing solutions around that take care of the low-hanging fruit that a lot of companies have in and around security hygiene, or lacking security hygiene, things like that. And with things like CycloneDX and SPDX, all new standards and things like that, there was a lot of hype driving what the vendors, the communities, et cetera, needed to be able to release, which was pushing a lot of that.

However, a lot of this is still kind of business as usual for a lot of people, like the idea of looking after their dependencies. This is what a lot of people have been doing for many, many years. It's more a case of, okay, can we now itemise these and can we actually store them in a secure, interoperable way?

So, it was definitely outgunned; it was out-hyped by the AI that came in. Another thing we kicked off the year with, which I guess has also been out-hyped, is secrets. Secret scanning and things like that. Of course, way back when, I remember it was early January or February, the CircleCI breach occurred. As a result, there was obviously a lot of recycling of tokens and things like that that needed to happen, and it was another one of those urgent things that every organisation pretty much had to do. It really taught us some lessons again, around: where are these tokens living? Oh my gosh, we've got so many tokens that don't have expiry dates. Oh my gosh, we haven't recycled or regenerated tokens in so long, how do we actually do it? How do we even find where we have them? I think secrets and these kinds of things have taken a bit of a backseat from that hype as well. Would you agree with that?

[0:09:41] Guy Podjarny: I agree. Although it's definitely part of the buzz now, fairly regularly. We see the Snyk capabilities around secrets coming into play. I think there's acknowledgement by the industry that the fundamental way in which we handle secrets is broken. If I'm Service A and I want to use Service B, I need B to issue a thing to me that grants permissions, and B then totally loses control over it. You gave me this token that allows certain access, but you no longer have visibility into who's using it. Nothing stops A from misusing it. They can share it with someone else. They can accidentally upload it to a public repository, and it's all done. B is kind of helpless to know whether they can revoke it or not, because they gave it away and they have no control, so they don't know what will get broken if they do. Just that system is fundamentally broken.

I think, to an extent, you're right; maybe it's similar to the supply chain element, which is that it's an acknowledged problem now. It's accepted as something that needs to be addressed. There's a range of new tools in the secrets world that are trying to address this, anywhere from increased capabilities in platforms like Snyk to detect them, to some dedicated players in the space, to enterprise solutions that also try to track and really tackle this.
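As a rough illustration of the detection side mentioned here, below is a minimal, hedged sketch of pattern-based secret scanning. The patterns are simplified assumptions; real scanners layer on many more vendor-specific rules, entropy checks, and validation.

```python
# Toy illustration of secret detection: scan text for patterns that look like
# hardcoded credentials. The regexes below are simplified assumptions; real
# scanners combine many vendor-specific patterns with entropy and context checks.

import re

SUSPECT_PATTERNS = {
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern name, matched string) pairs for anything that looks leaked."""
    hits = []
    for name, pattern in SUSPECT_PATTERNS.items():
        hits.extend((name, match) for match in pattern.findall(text))
    return hits

sample = 'API_KEY = "abcd1234abcd1234abcd1234"  # oops, committed to a public repo'
print(find_secrets(sample))
```

Detection like this catches the token after it has already been handed out, which is exactly why the conversation turns next to rethinking the access model itself.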

There are even some attempts, and in full disclosure, it's an investment of mine, a company called Otterize, at this kind of intent-based access control. So, really trying to rethink the whole model: can you avoid handing a service a long-lived thing at all? I think it's been acknowledged as a problem, and I think the CircleCI breach has probably helped emphasise it. But sadly, there have been quite a few examples over the years of secrets-related breaches.

So yes, I agree. Probably like supply chain, that slot of excitement went to AI. And I guess it's also a healthy thing that supply chain and secrets have maybe moved down a rung, into the hands of people whose job isn't to deal with the hype, but rather to construct and adopt practical solutions.

[0:11:44] Simon Maple: Yes. Before we jump into AI security, which I think will probably dominate a lot of this episode, looking back on 2023, were there any other topics that stood out for you?

[0:11:54] Guy Podjarny: Yes. I was really happy to have two conversations that you could say lead into AI: with Roland Cloutier, who at the time had just finished his role as global CISO at TikTok, and with Guy Rosen, who is the CISO at Meta. Both of them also owned Integrity. I actually went back and glanced over those conversations again, and I think there's all this insight around the interaction between integrity, which is the notion of identifying abuse and hate speech and also some social manipulation, and security, which is how do you keep the data safe. The two are different in various ways, and I think both conversations highlight that substantially. Roland, I think, was a little bit more contained in how much he could share, given that he was no longer at TikTok.

But in both cases, and especially in Guy Rosen's conversation, it also showed the parallels: there's oftentimes an adversary, in both cases, that is trying to manipulate something. At the end of the day, what I find interesting as we get into AI is that it really manifests in things like prompt injection, where you're almost socially manipulating the machine and the system to do what you want. So, this line between, you can call it safety, though I find in AI a bunch of these terms are tricky and it's not always safety, so I think the common term is integrity, between integrity and security, especially in the world of AI, is going to be increasingly blurred. I found those two conversations really interesting: both at the very big TikTok and Meta, and both platforms that use AI substantially.

The other takeaway I had, maybe, from those conversations was just thinking a little bit about scale and speed. That definitely comes back to some of the AI topics, which is: if you're going to use AI to scale your operations, whether it is discovery of videos or creation of software, you also have to think about how you scale your protection, remediation, and prevention mechanisms in parallel. Because otherwise, you're creating a problem that you cannot handle.

For instance, in this context, if you're going to have these vast volumes of video consumption on TikTok, and your AI algorithm will basically help people consume more and more of them in an endless fashion, as we see, then the only way you'd be able to keep up with that success of AI is with another success of AI that is able to identify right from wrong in the process. So, I think that's very relevant to security: we can't just talk about AI in the context of discovery. We also need to think about AI in the context of prevention and remediation.

[0:14:32] Simon Maple: Yes. Well, I was about to say, let's jump into AI, because it feels like we've already bled into it. Almost half of our sessions last year were actually based on AI, which does show how overwhelmingly fast AI has come through our industry, and how much people are adopting it. But if we take a step back and think about all the discussions we had in and around AI, how would you position the landscape, specifically around AI security? How is everything sitting in your mind right now, Guy, in terms of categorising the various pieces of AI we need to think about, and, as a result, the security measures and security positioning we need to put in place?

[0:15:12] Guy Podjarny: Yes. I think that's a good question. It's interesting that it actually comes back to the same thing we had to do in software supply chain security: if you don't have a taxonomy, if you can't talk about the thing in consistent terms, you can't really make any progress. So, if I were to rank the categories in order of urgency for organisations, per the conversations I've had on and off the podcast, I'd say the top concern around AI security is data security. It's the most practical. People talk about: if I write my source code or confidential information into OpenAI, will that get leaked out? Would that be a concern? How do I know that employees don't get access to data that they shouldn't? How do I know that they're not sharing data they shouldn't into the system? I'd say that's the top. When people say AI security, there's a good chunk of them that really mean that: data security with regards to Gen AI. [inaudible 0:16:04] on the recent episode really raised it well, but it came up in a whole bunch of other conversations.

I think the next level of concern is around securing Gen AI use in software development, so AI code generation. That's the next one. The first one is super concrete because it's about employees; everybody's going to ChatGPT, they're doing this. The second one is also very concrete, because it's taking off. The Copilots of the world and such, everybody's using these coding assistants, and you have to ask yourself: is that secure? Is that insecure? How do we deal with that? So, that's the next category.

Actually, Ian Swanson talked a lot about that under the sort of ML SecOps umbrella, the idea of having some form of policy around how to use the LLMs, but also how to track them. Snyk definitely sees a ton of that around needing a guardrail, because it's going to generate code. Is that code vulnerable or not vulnerable, trustworthy or not trustworthy?

I'd say these are the two big categories: data security, and then securing Gen AI-assisted software development. Then the next two are, I think, also interpretations of the words AI security, but a little bit more focused right now; not everybody cares about them. One is the use of Gen AI in security tooling. So, are you AI-powered? At Snyk we are. It was funny at Snyk: when we acquired DeepCode a bunch of years back, they had an amazing AI-powered engine that still powers a lot of what we do at Snyk, but when we would say it's AI-powered, people would roll their eyes, and we took the AI mentions out of the slides. Post-ChatGPT, we put them back onto the slides, because now people believe in the AI, they want to see the AI.

It's always been AI-powered. I think there's a lot of conversation about how the security tooling industry will be modified by AI. Are you using it? Should I use AI? Where is it good? Where is it bad? Where is it deterministic? So, there's a lot of conversation about that. Sam Curry talks about that a fair bit, in his view in general, and specifically in the context of Zscaler. And then I'd say the last one is the notion of securing Gen AI applications. For a lot of people, when you say AI security, what they mean is LLM security: how do you prevent prompt injections? Clearly the Lakera conversation, which I think I had, I was sort of hosting at that moment, is in that domain, because their Gandalf and such is in that space. That was a bit of a ramble. But I think when people say AI security, they might mean data security, coding assistant security if you will, the use of AI within security tooling, or securing AI applications.

[0:18:41] Simon Maple: Yes. So, it's very interesting, because it's almost a little bit overloaded, again a bit similar to supply chain security, right? When you say supply chain security, a lot of people think mostly of the third-party libraries, whereas that's actually only a part of the problem, right? We talked about this in some of the early shows: when you actually look at things like pipeline security, all the way through, even up to the developer and the plugins that are used there, there's a ton of broader meaning to it. And depending on who you talk to, their angle and their specific needs, you're going to be talking about different points. Do you see that largely staying the same for 2024, in terms of the concerns and the areas? Or do you think anything's changing much in that space?

[0:19:22] Guy Podjarny: Well, I guess I'd like to think that the taxonomy will improve. New systems tend to do that. We had Royal Hansen on to talk about the SAIF framework from Google, and before that we had Ian talk about ML SecOps, and I think you talked to Christina there, right?

[0:19:36] Simon Maple: Yes.

[0:19:36] Guy Podjarny: So, I think there are different organisations creating taxonomies for AI security. I think there might be a bit of an over-rotation towards securing the Gen AI apps, when you're offering a capability, versus the usage. But they do try to encompass all of them. So, my prediction is that in 2024, it will remain fairly murky. I think it's changing too quickly to really firm up. My bet would be that in 2024, there are going to be a whole bunch of competing terms that are still very ambiguous across different views, and I think it's only by 2025, or during 2025, that the industry will settle on a few clearer terms.

[0:20:26] Simon Maple: Yes. Absolutely. I think that's a good prediction. I think we're still in that hype curve as well. So, things will always be a little bit up in the air.

[0:20:34] Guy Podjarny: People don't know what it means. You say AI security, and it's like, the importance of AI security, what are you doing around AI security? They don't even really have the definitions. There are so many other problems to worry about. We did still undergo waves of the market correction throughout this year. So, it's not like it was a period during which you had no other worries. But AI –

[0:20:54] Simon Maple: Another question for you, Guy. How long do you think Generative AI will stay at the top of this hype curve, generally?

[0:21:03] Guy Podjarny: That's a really good question. What do you think? Before I answer it.

[0:21:05] Simon Maple: Oh, wow. You can't do that, Guy. Well, one of the things I've noticed this year is the speed at which the vendors, and the investment that vendors are putting into Generative AI models, as well as training and tooling. It's happening so fast that you almost want to talk about different versions of Generative AI, right? If you look at the early-stage version of Generative AI, yes, that would probably be lower on the hype curve by now. But guess what, we've had three newer versions, or three alternatives, that have taken that on, and each of those is higher up the hype curve than the one before.

So, I think it's interesting. I think it's going to drop down the hype curve far slower than normal technologies, just because each vendor is trumping the other on how fast they're advancing. I do think we will drop down the hype curve, but I think it will be a lot slower than with previous technologies.

[0:22:02] Guy Podjarny: I think, actually, in 2024, I don't know if we'll see any slowdown. So, I agree with you it'll be slower. I think the rate of evolution is definitely a part of it. But also, the other aspect is just consumer attention. AI is a dinnertime conversation topic as well, with your aunt and niece, right? It's not a topic that is confined to the workplace. In that sense, I think the best analogy I've seen for it is mobile, and smartphones and what they can do. Maybe that's also true in terms of tech evolution: people are trying to come to grips with it, how does it work, when can they use it, and they're changing their methodologies and ways of working because it's so tempting. But also, the ecosystem evolves, and they build on it, and they learn how to build on it better.

So yes, I think it's going to be around the top of the hype cycle probably throughout 2024. I think it was deemed the topic of the year for 2023 by pretty much all the magazines, on the [inaudible 0:23:04] and the Economist, the word of the year. To an extent, it withstood another war erupting in the Middle East and the continuing war in Ukraine. There was just a lot of messy stuff through the year, and AI still hasn't dropped. It might have moved to number two, versus number one, in some cycles. So, I think it's going to stay that way throughout 2024.

[0:23:28] Simon Maple: It's one of those things that I guess has gained adoption pretty much across the board as well, right? I mean, when we think about how fast people are wanting to jump on the AI bandwagon. In fact, one of the sessions we did on the podcast was a review of a survey that Snyk ran for its AI code security report. One of the very interesting metrics that came out of that was that 80% of folks who responded said that they sometimes, most of the time, or all of the time broke policies to use Generative AI.

So, the adoption is clearly strong, and it's not just a push from the top down. This is a bottom-up, needed kind of technology and tool. I guess one thing that we do see more and more is organisations that don't want to be blindsided by the use of, or rather the speed at which, AI is coming into the organisation, and security teams maybe even slowing that down potentially, by saying, “You know what, right now, for the next three or six months, we don't want Generative AI to be used in our applications or in code generation, until we get a handle on this, or until we have the right mitigations and policies in place.” How much of that do you see still today? Do you feel like security, or rather maybe some of the concerns that you listed before, is slowing down the adoption or take-up of AI?

[0:24:54] Guy Podjarny: It's an important question, and the answer is very much: it depends. So, I think the answer is yes. In many organisations, it is slowing down the adoption, especially on the data security aspect that I talked about before, where people are worried about where the data goes, and there's a bunch of these somewhat improvised solutions still at the moment, like proxies and things like that. Again, Tom, I think, in the episode with him, probably outlined that best, but I don't think anybody has the illusion at this point that they can stop it. So, all they're trying to do is hold off.

They're not saying, yes, this is a fancy new toy, I don't think it's worth the risk, stop. Rather, they're shuffling, they're hustling, they're running as quickly as they can on the security side to figure out how to enable it. Because basically, nobody in the business is willing to accept that there will be no use of AI. Nobody's dismissing the potential for productivity gains. So, in that sense, I think it's different. If you contrast that with bring-your-own-device or something like that in mobile, there it was just the bottom-up pressure that eventually said, “Look, we the employees demand that you allow us to use the device for our work.” While here, it's the business, it's the board that says, “We the board demand that you use AI, and you have to do it securely.” In organisations that are large enough, that have enough to lose, that are security conscious enough, it's happening.

What I think is interesting is that I don't think security is playing a role yet in the choice of which AI tools get adopted. Maybe it's because people don't really know what questions to ask. But I think today, in the sequence of conversations when you encounter some fancy new AI tool that will really overhaul your SDR workflows, or your software development, people ask a lot more questions about the functionality, the trustworthiness, would it actually work, more about the output, and maybe about the data security. But I find it maybe surprising, and definitely somewhat alarming, how little people ask about the security controls that those AI tools have. How do you know they haven't been manipulated? How do you know they're resistant to prompt injection? How do you know the data hasn't been tainted, et cetera?

[0:27:26] Simon Maple: Yes, which I think comes from the process of how they're selected, in terms of the need. I guess a prediction coming out of that, in the sense of what we will likely see in 2024 and beyond, is people choosing smallish models for smaller tasks, and maybe having multiple models for particular tasks. And people are choosing them, like you say, as a result of the output that they need for that specific piece, rather than starting with, “Okay, which are the most secure models?” and then trying to identify from that pool which they can use, because there may not be one that works for their need.

[0:28:00] Guy Podjarny: I think, in general, at least for me, a learning from the beginning to the end of the year was that, I firmly believe today, there will be multiple models in every company. Part of it is indeed because of the data security concerns: there are certain types of data that you want to apply AI to, but you are unwilling to share with any hosted platform, so you want to be able to run locally. In other cases, it will be around cost. In other cases, it'll be around footprint, or whether you need it on a mobile device, where you might need it to be an open-source model that you're running yourself.

But also, just that different models will be better at different tasks. You will have chosen a model and used it for, whatever it is, a year. And then the rate of evolution is so high that a year later, some other tool built on another model will come along, and you will start using that. But you won't go off and rewrite all the previous ones. I think all of this in reality, alongside other considerations like politics, and negotiations, and companies being acquired, and things like that, would result in every organisation needing to be ready for multiple models. I think it is much more evident, maybe, that this will be the reality, versus multi-cloud. It's also helped by the fact that a lot of these AI models actually have very similar interfaces.

With cloud, it's just very, very hard to replicate your environment into multiple clouds. While today, there are already these middleware pieces of software, these proxies, these load balancers, that route requests to different places.
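As a rough illustration of the routing middleware Guy describes, here is a minimal sketch. The model names, task categories, and data-sensitivity policy are all hypothetical assumptions for illustration, not a recommendation for any specific gateway product.

```python
# Minimal sketch of a model-routing layer: one internal interface, multiple
# backing models chosen per task and data-sensitivity policy. Model names and
# the policy below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ModelBackend:
    name: str
    hosted: bool  # False means it runs locally / in our own environment

# Hypothetical registry: hosted general-purpose and code models, plus a local
# open-source model for data we are unwilling to send to any hosted platform.
REGISTRY = {
    "general": ModelBackend("hosted-general-llm", hosted=True),
    "code": ModelBackend("hosted-code-llm", hosted=True),
    "sensitive": ModelBackend("local-oss-llm", hosted=False),
}

def route(task: str, contains_confidential_data: bool) -> ModelBackend:
    """Pick a backend: confidential data always stays on the local model."""
    if contains_confidential_data:
        return REGISTRY["sensitive"]
    return REGISTRY.get(task, REGISTRY["general"])

if __name__ == "__main__":
    print(route("code", contains_confidential_data=False).name)  # hosted-code-llm
    print(route("code", contains_confidential_data=True).name)   # local-oss-llm
```

The point of the sketch is the design choice: because the model interfaces are so similar, a thin policy layer like this is far cheaper than the multi-cloud equivalent of replicating a whole environment.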

[0:29:38] Simon Maple: Do you feel, with these multiple models, that they will be trained in-house? Or do you think that maybe requires too much AI expertise? Do you feel like a lot of it will be outsourced?

[0:29:50] Guy Podjarny: I don't think many enterprises will train their own models, because it's just massively expensive and probably not worth the effort, and you can achieve a lot of that with fine-tuning or RAG, retrieval augmented generation. If you don't know what RAG is, it's kind of the idea of context, right? So, if you run a model, you can fine-tune it, which generally says, “I'll give you a bunch of examples of correct questions and answers, of how to respond”, and this few-shot learning is oftentimes very impactful in how good it is at responding to future prompts. And RAG, retrieval augmented generation, is really context. It's saying: when I'm about to give an answer that relates to, say, stocks, can I go off and retrieve their current state? Or if I'm giving you an answer that should be fed by your docs, and maybe your product keeps changing, then maybe I have RAG that gives me that. I'm oversimplifying here.
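To make the distinction concrete, here is a minimal, hedged sketch of the retrieval side of RAG. The keyword-overlap scoring and the document contents are purely illustrative assumptions; real systems use embeddings, a vector store, and an actual LLM call.

```python
# Minimal sketch of retrieval augmented generation (RAG): retrieve the most
# relevant internal documents and prepend them as context to the prompt.
# Naive keyword-overlap scoring is used here purely for illustration.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many query words they share (toy relevance score)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Ground the model's answer in retrieved context rather than its training data."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

docs = [
    "Product X supports SSO via SAML as of version 2.4.",
    "The refund policy allows returns within 30 days.",
    "Product X pricing changed in November.",
]
print(build_prompt("Does Product X support SSO?", docs))
# The resulting prompt would then be sent to whichever model is selected.
```

This is also where the data security controls Guy mentions next come in: whatever feeds the retrieval step is enterprise data flowing into an LLM.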

But I think a lot of those will have an impact. So, I do think that in 2024, we'll see a lot more use of enterprise data and proprietary data in LLM flows. I don't think it will come through training. I do think it is interesting, if we touch on the security implications of these two statements: one is that if multiple models are going to be adopted, then you have to think about security through the lens of not just “is this model secure, is that model secure”, but rather, if you're operating a security program that spans the models, what do you need? Sometimes you might be willing to accept that each of these models has its own security, but you might need to get some information out of them, or something like that. Oftentimes, you might need your security control to be a layer on top.

Similarly, if you talk about your own data being used to refine models, you're going to need a bunch of data security controls. You're going to need to think about which data is and isn't allowed to be used. So, I think each of these predictions around AI, beyond the curiosity of where it goes, really comes into play when you start to think about which mitigating security controls you need to have.

[0:31:50] Simon Maple: Yes. I think, to wrap up some of the AI topics, one of the things that has moved so fast since AI really took off in the last year, year and a half, is frameworks, but also standards and regulations, not just in the US but in the EU as well. Starting with the frameworks, in terms of what's been happening in 2023, and maybe some predictions around 2024, what would you say are some of the frameworks that are most interesting, that people should be looking at and adhering to?

[0:32:19] Guy Podjarny: So, we touched on a couple of them here, right? We touched on SAIF from Google, and ML SecOps, I don't know if it's a framework per se, from Protect AI, and the [inaudible 0:32:27] one, whose name I'm looking up right now. So, those are the ones you have today.

Interestingly, many of them, those two in particular, talk about security hygiene. There's a lot of fancy stuff in AI, but there's also a lot of same old. You're using libraries to build your ML pipelines, and you might be using MLflow, and certain versions of it have real vulnerabilities. So, you should be on top of known vulnerabilities in your component tree. Your code might have vulnerabilities, so how are you assessing it? And oftentimes, I've got to say, in data science it's quite alarming how lacking the fundamentals, the equivalent of software development best practices, can be. Data is heavy, and you see a lot of data engineers and data scientists that modify stuff and play with evolving their algorithms in production, right next to the data, just because it's so hard to use mimicked environments, and that's very concerning.

I think a lot of the frameworks are just about the basics. I suspect we'll see more of them. We have requirements: you have both the US and the EU now trying to classify models, trying to say whether a model is sufficiently high risk. Some uses are prohibited outright, this is not allowed, and some are just high-risk. The higher risk it is, the more transparency you need to give. So, we'll see a lot of that.

I don't know which framework will come out on top. Per my previous comment, I think in 2024 we're probably going to have an explosion of those, and it's probably only in 2025 that we'll start seeing which ones come out on top. But what I do think will be the case is that today, most CISOs do not have an AI security policy, whether internal or framework-based. I think that's a fairly safe statement. By the end of 2024, I think most CISOs of any organisation of sufficient size will have a policy around AI security, for all four definitions of it, or at least three: the ones that talk about securing your own use of AI in generative development, securing your AI apps, and data security when it comes to AI. So, I think they will all have that. What do you think? What would you add to the frameworks or predictions or views?

[0:34:38] Simon Maple: Well, one thing we didn't touch on in terms of the predictions, which I think we mentioned slightly earlier, is the AI bill of materials. It's a really interesting topic, particularly given how we finished off the supply chain topic earlier. We talked about the idea that these SBOMs exist, but we don't necessarily know what to do with them quite so well. This is now all the rage: identifying and being able to effectively list things like training data. Again, I wonder if we're almost falling into the same trap here of collecting this data, but actually not quite knowing what to do with it. And it also feels a little bit further away than SBOMs, in terms of the idea of having vulnerable libraries.

But yes, the AI bill of materials. I've done a bit of reading on that one, and it's interesting to see where it's going. Again, it's another one where it's unclear how much value we're actually going to get from it. I think it's a good thing to be able to track, but I don't necessarily know quite why just yet, other than maybe for traceability and identifying what's been effectively tainted.

[0:35:46] Guy Podjarny: I think that's a big part. I mean, in the broader conversation about AI, explainability and transparency are big topics. The fact that this is a black box, a pretty massive black box that spits out these very insightful, influential, and important decisions or outputs, is really unsettling, and it's inconsistent. You ask the same question twice, you'll get two different answers, and it's a very uncomfortable system to work with, definitely in the security profession, but also just as humans.

I think some form of tracking what's inside, at least knowing what the ingredients are that came into it, is important. I also think that whether you train your own models entirely, or you're just fine-tuning using your data or whatever, data poisoning will grow in importance, and with it, data lineage: being able to look at a piece of data and say, “Where did you get it from? What stages and phases has it undergone to get to today?” Almost nobody can give you that. It's alarming how bad it is. The organisations that are best at it are the ones that deal with a lot of private information, and so, for PII purposes, they've evolved some muscles to be able to do that.

But in general, I think data lineage, being able to trace data back, is going to be a big deal, and capturing that in the AI BOM makes sense. In that sense, it's actually not terribly dissimilar to the software bill of materials: what is in this thing that you gave me? What type of scrutiny has it undergone through the journey? How confident are you? Can you attest to the fact that that is the case? So, I think the AI BOM is a welcome addition, but I agree with you that what needs to be in there is probably not yet known. I think there's an interesting feasibility-versus-desire exercise going on right now. Because this industry has evolved so quickly, there's a gap between what an informed CISO would want to get from their vendors that touch AI, let's say, and what is reasonable to expect those vendors to have, because the vendors are scrambling to get something out there. It's a super fascinating landscape. To an extent, if you're very informed as a CISO, but you're quite risk averse, and you want certain security controls and such on AI from any vendor that does anything with AI, any SaaS vendor and such that does anything with AI, then it's quite likely that your list of requirements will be such that none of your prospective vendors can meet it.
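To make the idea concrete, here is a hedged sketch of the kind of record an AI BOM might capture: the model, its ingredients, their lineage, and what scrutiny they have undergone. The field names and values are illustrative assumptions, not an existing or emerging standard.

```python
# Hedged sketch of what an AI bill of materials (AI BOM) record might capture.
# All field names and values are illustrative assumptions; the point is being
# able to trace ingredients and their lineage, SBOM-style.

ai_bom = {
    "model": {"name": "internal-support-assistant", "base_model": "example-oss-llm-7b", "version": "2024.01"},
    "fine_tuning_data": [
        {
            "source": "support-tickets-2023",                     # where the data came from
            "lineage": ["raw export", "PII scrubbing", "dedup"],   # stages it has undergone
            "reviewed_for_poisoning": True,
        }
    ],
    "retrieval_sources": ["product-docs", "pricing-db"],           # RAG context feeds
    "dependencies": [{"name": "mlflow", "version": "example-pin"}],  # same-old component tree
    "attestations": ["data-lineage-signed", "pipeline-scanned"],
}

# A downstream consumer could then ask: where did each ingredient come from,
# and what scrutiny has it undergone along the way?
for item in ai_bom["fine_tuning_data"]:
    print(item["source"], "->", " -> ".join(item["lineage"]))
```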

So, you're really back to the place of saying, “do not use AI, because it's not secure enough yet”, which is not a decision that the business is able to accept. I think this exercise in risk management is really interesting here. Today, very few people are in that position, because I think most CISOs do not have a list that they are confident in. To an extent, we're sort of saved by ignorance, or this conflict is avoided by ignorance, and it's hard; it's not easy to figure out that list. But once we have that list, I think there's an interesting gap there, a priority call of saying these are the non-negotiables. Do I need to know the lineage of all the data that came into the model you've trained? It sounds like a simple enough ask, but probably the answer is no for everybody. So, how accepting are you of that?

[0:39:03] Simon Maple: Well, I think we should wrap it up there, with “saved by ignorance” as our soundbite for AI. Absolutely. Let's change gears slightly; you mentioned vendors, actually. Obviously, we're both at Snyk, and how has that been over 2023? Tough market conditions for everyone continued over 2023. How do you see the market these days in and around security tooling?

[0:39:27] Guy Podjarny: Yes. I think it's been yet another interesting year at Snyk. It's never been boring in the eight and a half years since I founded it. I'd say, first of all, just on the macro a little bit: we underwent the market correction, and that was interesting. I'd say that we were a little drunk on the euphoria of 2020 and 2021, and then we were hungover for a while. It was unpleasant. We had to deal with a reduction in company size, and the market just became tough, a lot tougher than before. I really feel like midway through the year, or actually a bit earlier, we managed to climb out the other side. Now, we're sober and robust, so it feels a lot healthier, and a lot of the growth is back.

A lot of it is maybe just familiarity, because the rug was pulled from under everybody's feet before, for everybody involved. So, nobody really knew: “Hey, I'm going to buy this. Do you even have budget? Do you know if you're going to have budget for it?” Now, I feel like we've regained some predictability, and with that, some better ability to operate. So, I definitely feel how that has improved from, I don't know, I'd hand-wave somewhere around that Q2 timeframe.

[0:40:42] Simon Maple: Yes, and much has changed in the approach to market. Obviously, previously we were very much developer-security-focused, and most recently, in fact in December, it was announced that AppRisk was available as well, which is a different style of product to the pure developer focus that Snyk was grounded on. How would you maybe summarise the Snyk angle these days?

[0:41:08] Guy Podjarny: I feel, and Snyk is, I like to think, a driving force in it, but also moving with it, that the market has matured. When we started Snyk, there was an acceptance of saying, “Hey, we're trying to have security match the speed of DevOps”, or come into this world of DevOps in which everything is running at such a pace that security is never going to catch up. We really need developers to embrace security, to actually embed it into the process, because nothing else will keep up. And the biggest obstacle to that is getting developers to embrace security.

So, giving them a tool that they would actually use, that was the whole premise of Snyk, and it remains the core of what we build. I think that dam has been breached. Now, it's not done; this is a journey. There's a lot to do. It's not like all developers do enough on security, not by a long shot. But I think it's accepted now that this is true, and a lot of the industry is doing well. We've moved from these early adopters to an early-majority world in which a lot more people are accepting this, but now they need help doing it. And security has to adopt a new approach as well. They need to move on from a more legacy AppSec that, like it or not, has been designed for the world of waterfall, right? Where people come to security and stop for an audit.

Today, they need a developer security program. So, suddenly they need things like asset discovery. If I'm a security team, before, I didn't need to worry about not knowing what is getting to production. Now I do: every developer can spin up a new repo, a new container, whatever, and I need to keep up. So, I need to know which assets are there, and whether the developers are actually applying the security controls that I'm asking them to. That's one aspect: operating the program is different. And another aspect is the fact that, in reality, there's more software being produced. It's not just the pace, it's not just the change; there's literally more creation, more functionality being generated.

With that, we're just getting a lot of findings. We're actually almost too good at this, at finding all these different bugs and vulnerabilities. But you're never going to fix all the bugs. So, there's the exercise of being able to look at all of this and see the forest for the trees, being able to say, “Okay, you have all of these different slices and problems. You told me about 10,000 problems, but I have the capacity to fix 100. So, which 100 should I deal with?”
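As a toy illustration of that prioritisation exercise, here is a hedged sketch that ranks findings by a simple risk score and keeps only what the team has capacity to fix. The fields and weights are made-up assumptions, not any product's algorithm.

```python
# Toy sketch of the prioritisation problem: far more findings than fix
# capacity, so rank by a simple risk score and take the top N.
# The scoring weights below are illustrative assumptions only.

findings = [
    {"id": "VULN-1", "severity": 9.8, "exploit_available": True,  "internet_facing": True},
    {"id": "VULN-2", "severity": 5.3, "exploit_available": False, "internet_facing": False},
    {"id": "VULN-3", "severity": 7.5, "exploit_available": True,  "internet_facing": False},
]

FIX_CAPACITY = 2  # "you told me about 10,000 problems, but I can fix 100"

def risk_score(finding: dict) -> float:
    """Severity boosted by exploitability and exposure (made-up weights)."""
    score = finding["severity"]
    score += 2.0 if finding["exploit_available"] else 0.0
    score += 1.5 if finding["internet_facing"] else 0.0
    return score

top = sorted(findings, key=risk_score, reverse=True)[:FIX_CAPACITY]
print([f["id"] for f in top])  # the subset worth the team's limited capacity
```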

We've always tried to make the effort of saying, “Well, let's level up your capacity from 100 to 1,000.” We have capabilities like AI fixes and things like that that help there. But a lot of it is around that: you're still going to need to prioritise and figure out which are the top ones. So, I perceive it as an evolution of the market. It's the same core premise, which is to help security run at the speed of the business. But I think the market is in a place now where it's not enough to just give developers the security tools. It's now not just some digital transformation group in the corner of your organisation that is using cloud; it's the core. So, I'm super excited by AppRisk, which is our package of these capabilities, to get to the next evolution. It just makes everybody more successful at adopting this new approach, which I remain as convinced as ever is necessary for us to be able to develop fast and secure, right?

[0:44:31] Simon Maple: Yes, I agree. I think perhaps our listeners are excited and interested in that topic as well. Perhaps that's one where, if you let us know, we can maybe even spin off a separate episode later in January or early February, around that topic, and link the two: what Snyk has seen in the market, how Snyk has evolved, and things like that. Obviously, we've talked a lot about AI here. What have you been hearing from Snyk customers, large Snyk customers in particular, around their adoption of AI and their approach to it? But also, how do we turn that ignorance into trust, which is a core piece, right? How do we get our customers using AI at speed, without being slowed down, but having belief and real trust in the output that it's providing them?

[0:45:14] Guy Podjarny: Yes. It's pretty clear that Snyk's role, when it comes to AI security, is the second definition I gave before, which is the use of AI as part of your software development, to accelerate your development. We also play a role in helping you secure your AI apps, that fourth one I mentioned, and we do a lot there around securing the stacks. But let me focus on the prior one.

I think there's a tactical and a strategic answer to it. On a tactical level, what we're seeing is a lot of customers that are clamouring, whether their developers drove it or the execs, to use coding assistants like Copilot. And it's scary to the organisations in a variety of ways, in large part because these assistants have been demonstrated to very frequently produce insecure code, and when you eyeball something, when you review a paragraph of code that got created, you're nowhere near as deep in it as you are when you write it. It's very easy to miss a security issue. They're not as visible as a functionality issue or a compilation error. So, issues get missed.

They want Snyk to be a guardrail, and we're seeing a ton of that type of adoption. In fact, we're seeing more and more Snyk customers, whether they became Snyk customers because of this or they already were, say: fine, you want to use Copilot or the like, you can, but you have to have Snyk installed as well. So that if you've made a security mistake, you have that security spell checker there to flag it and to secure it. That's happening –

[0:46:35] Simon Maple: And that's what one very large Snyk customer actually did. I think it was Copilot that they were using, and they pretty much opened up usage of Copilot for all of their developers, because they want to encourage it. They see the value in the speed at which things can be delivered to the business. There are a few choices there. They either say we'll take the absolute speed and we'll take the security consequences, which no one ever wants to do. Or they say we want to secure it so heavily using our existing practices that the benefit of building AI into development is reduced because of the slowdown from security. Or they say, “Look, I want to secure as I go. I want a solution that will allow me to create code faster, and secure code faster.” And that's largely what this one huge customer, with tens of thousands of developers in the account, did. They pretty much said you can absolutely use AI, use Copilot, but we mandate that if you use Copilot in development, you have to have Snyk in the pipeline or in your IDE, right?

[0:47:37] Guy Podjarny: Yes. It was pretty amazing. It's always frustrating when you can't name the company, a kind of technology company. We actually have a big bank that's in a similar state right now, maybe a few weeks or a month behind; they started about a month later, but they're achieving a similar kind of rollout. And that really is the motivation. It's a demonstration of the fact that when you have a guardrail, you can run faster, right? If you're running on a bridge and there is no guardrail, then you need to slow down. You can't run as fast, because you might fall off, right? If there is a guardrail, you know you can run fast.
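As a rough sketch of the guardrail pattern being described, the snippet below gates AI-generated code behind an automated check before it is accepted. The scan_for_issues function is a toy stand-in for whatever scanner a team mandates in the IDE or pipeline; it is not a real product API, and the two checks are simplified assumptions.

```python
# Hedged sketch of the guardrail pattern: accept AI-generated code only after
# an automated security check. scan_for_issues is a placeholder for a real
# static analysis step run in the IDE or pipeline.

import re

def scan_for_issues(code: str) -> list[str]:
    """Toy stand-in for a real static analysis scan of the generated snippet."""
    findings = []
    if re.search(r"\beval\s*\(", code):
        findings.append("use of eval() on potentially untrusted input")
    if re.search(r"verify\s*=\s*False", code):
        findings.append("TLS certificate verification disabled")
    return findings

def accept_ai_suggestion(generated_code: str) -> bool:
    """Gate the suggestion: flag issues back to the developer instead of merging."""
    findings = scan_for_issues(generated_code)
    for finding in findings:
        print(f"blocked: {finding}")
    return not findings

suggestion = 'requests.get(url, verify=False)'
print(accept_ai_suggestion(suggestion))  # False: the guardrail catches it
```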

So, that's the premise, and I think it's already manifesting a ton. We've invested a lot in that experience. We also provide better UX and flows for that to work well. We're doing all of that. I do think there's also the strategic aspect, which is, when you stop and consider it, it's likely that in many organisations, in two years' time, 30, 40, maybe more percent of the code being generated will be generated by AI. If you're a CISO or any exec in those companies, you have to ask, “Do I trust that code? Do I think it has magic?” I come back maybe even to the multiple models that might be running around here. Do I trust that code? What do I need to know to trust that code? Because that's going to go from 30%, 40% to 60%, 70%, 95% of your code.

I think, strategically, it is important to understand what you need in order to leverage this AI code generation with confidence. A key initial need there is that you need to know if it's secure. I think Snyk plays a key role there, being that enabler of AI trust, which in turn allows you to strategically embrace this.

It's interesting, because it's actually the same statement we have made about DevOps and about cloud. To a large degree, DevOps, cloud, and AI are compounding changes, changes that all accelerate software development at a compounding pace. They all multiply one another. And they all contribute to this reality that there's just no way, as a security team, that you can keep up. You cannot, in the long run, review all the vulnerabilities and choose which ones you're going to send to your development team. You're just never going to keep up. So, you have to embrace this more platform approach. You have to do it well.

I'll also say, I mentioned AI fix in passing before, but if all you've done, and that is already kind of hard, is to say, “as I generate code faster and faster and change it faster and faster, I level up my vulnerability discovery process to keep up, because it's automated, using Snyk and all that”, but you haven't really levelled up your ability to fix faster, then all that's going to happen is that instead of having 10,000 vulnerabilities of which you have capacity to fix 100, now you're going to have 100,000 vulnerabilities of which you'd fix 100. You can't. That doesn't work. You need to change that equation.

So, there's our investment in AI fix and the ability to automatically fix a vulnerability in your code. We've already been big on automated fixes for open source, and now we're doing it in code. A lot of the hesitancy around adoption, and many, many did adopt it, was just this discomfort with the fact that it's going to change the code automatically. You have to accept that if you're going to have AI generate the code for you, you have to make peace with the fact that AI will also fix the code for you. Otherwise, you're just digging yourself a hole.

So, I'm super, super excited by this opportunity. To me, Snyk's real opportunity, and we're heavily leaning into it, and what we're naturally placed to do, is to really be the enabler of embracing this new wave of technology to accelerate your development by giving you trust. Maybe I'll mention one more thing in this part of the monologue, because I'm damn excited about it, which is this notion of separation of duties. When I think about AI, I like using mental models, right? Sometimes you think about AI and you say it's like a brain. You can train it all you will, but it might still spit out the wrong thing, or it might still be manipulated, so you need things that wrap it.

Another one that I really like is thinking about AI as a kind of engine of sorts, as outsourcing. A lot of people talk about the developer becoming the team leader of a bunch of AI developers, outsourcing bigger and bigger chunks of their development to these AI elements. It's worth noting that these AI elements, or AI agents, or whatever entities they are, you're running them. They're like outsourcing. You gave them a task, and they gave you back an answer. But they offer you no liability. You're on the hook for whatever mistake they make. If the self-driving car runs over someone, you're the one being sued. Now these AI systems come along and say, "Don't worry, we've audited ourselves, and we're good, we're safe." Are you comfortable with that type of statement? I think what we're increasingly seeing, and hearing explicitly, is there's no way I'm going to let the fox guard the henhouse. We need the auditor to be different from the creator. We can't trust the same system.

When you multiply that by the fact that there are going to be multiple models, and that they will change frequently, well, we already know in security that you can't have the same threat addressed by different security tools in different sections of your company. You need something broad. So, I think both of these really drive you towards the need for this AI trust enabler to be separate from the generators of the code.

[0:53:13] Simon Maple: Yes. Really important point. I think that's actually a really strong, interesting prediction, and we'll see.

[0:53:21] Guy Podjarny: Yes. Well, I guess it's not much of a prediction for me to say that we're going to lean into AI trust, given the influence I have on that decision. But what I do predict is that this will be a growing demand on the customer side.

[0:53:31] Simon Maple: Let's lean more into... well, first, why don't we jump back and look at this time last year, when we were making predictions for 2023. I can quickly go through some of these and we'll score ourselves on how accurate we were. The first one: regulations about supply chain becoming laws, from an executive order, et cetera, in the US, for example. What did we see? Well, SBOMs really drove behaviour with the hype that we saw there, but it was quickly overtaken by AI. We're actually seeing more and more AI regulation being driven, both from the US and also the EU. A lot of push there from the US government as well, to NIST, to create a number of more standardised ways of looking at and thinking about AI. So, we were pretty close there with the supply chain work, but didn't expect the AI –

[0:54:20] Guy Podjarny: There are a lot of standards and they're indeed affecting behaviour. It's just become a little bit mechanical, but maybe that's okay, because it's a little bit less hype-ish. A slight tidbit: I actually had an interesting trip to Japan, where there's a heavy, heavy amount of outsourcing. It's a culture that is a little further behind maybe the West, or other countries, in bringing technology in-house. As a result, oftentimes, if you're outsourcing software development and someone else is building it, you might be holding them accountable for security. And a lot of these regulations now say that the SBOM that you provide, you're the one liable for it. Suddenly, it's forcing these companies to get their providers to give them an SBOM, and they need to understand what's inside.

It's interesting. It's actually serving them. It's the first time they need more security capabilities in-house, and they're actually building out some platforms and such, which I hope will also be a driver of simplifying moving that tech in-house. This is just an example of a slightly larger behaviour change. Oftentimes, these companies would basically not be looking to have many security tools at all, because they would just expect their outsourcing entities to be the ones providing them.
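
(As an illustration, not part of the conversation: a minimal sketch of what "understanding what's inside" a vendor SBOM can look like, assuming a CycloneDX JSON file. The file name and the watchlist entries are hypothetical.)

```python
import json

SBOM_PATH = "vendor-sbom.cdx.json"  # hypothetical path to a CycloneDX 1.x JSON SBOM

# Hypothetical watchlist of (name, version) pairs you already know to be risky.
WATCHLIST = {
    ("log4j-core", "2.14.1"),
    ("libwebp", "1.3.1"),
}

def load_components(path: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs declared in a CycloneDX JSON SBOM."""
    with open(path) as f:
        bom = json.load(f)
    return [(c.get("name", "?"), c.get("version", "?")) for c in bom.get("components", [])]

if __name__ == "__main__":
    components = load_components(SBOM_PATH)
    print(f"{len(components)} components declared by the vendor")
    for name, version in components:
        flag = "  <-- on watchlist" if (name, version) in WATCHLIST else ""
        print(f"  {name}@{version}{flag}")
```

In practice you would feed the SBOM into an SCA tool rather than a script, but the point stands: once the SBOM is handed to you, so is the follow-up work when a component listed in it turns out to be vulnerable.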

[0:55:32] Simon Maple: So, not too bad with that prediction. You also mentioned that service registries would get big traction in the market. How close were you with that one, in hindsight?

[0:55:42] Guy Podjarny: I think I might have been a little bit premature on it. I mean, there is a world of IDPs, these internal developer platforms, Backstage and others, and I think they are firming up a little bit. It's probably slower than I thought. Maybe what's interesting is Backstage, which was open-sourced by Spotify and contributed to the CNCF. There were a lot of questions about its dominance, and recently, over the course of this year, Red Hat and Harness both announced Backstage-based products. That's in addition to the growth of startups like Roadie that host Backstage as their business, and you see a lot of evolution. So, I would say Backstage is reaffirming its current leadership in this domain, but there's also a bunch of startups around. The trend is still on. The pace is slower than I anticipated.

[0:56:29] Simon Maple: All right. Not bad then. Not bad. But could do better. Could be better on that one.

[0:56:34] Guy Podjarny: It could do better. Crystal ball has been murky.

[0:56:38] Simon Maple: Okay. This one, I mean, this one's always going to happen, I think: a major open source vulnerability, like Log4j, will occur and people will scramble. I mean, we shouldn't allow this, Guy. This is a cheat prediction.

[0:56:49] Guy Podjarny: I'm pretty sure I copied that one from the predictions from previous years as well.

[0:56:49] Simon Maple: I think, maybe. Yes, so libcurl, WebP. Any others you want to mention?

[0:56:57] Guy Podjarny: I think those are probably the big ones, WebP and libcurl. Maybe because they're a bit more recent. But yes, it's also worth noting that malicious components have become far more regular, right? Just the frequency of them and the spotlight on them.

[0:57:11] Simon Maple: This next one, the final one, I'm going to mark you down on, purely because of the wording here: generative AI will start making a presence. Yes, this is the understatement of the year, I feel, for our predictions, right?

[0:57:23] Guy Podjarny: Maybe. I could claim that, within the world of security, it hasn't dominated in practice. It's dominated the conversation. But how much has your security policy changed from the beginning to the end of the year because of generative AI? I would think most people haven't gotten past the data security line. Maybe. I expected bigger evolution there. But I don't know. Clearly, the world has been dominated by gen AI.

[0:57:52] Simon Maple: So, let’s look forward to 2024 and wrap up this episode. It's been a lengthy episode, this one, Guy, in fact. 

[0:57:57] Guy Podjarny: Yes. Put the two of us in the same conversation and sometimes that happens.

[0:58:04] Simon Maple: What predictions do you see then for 2024?

[0:58:05] Guy Podjarny: I guess the first one, I would say, is indeed this evolution of developer security into this world that we've been thinking about as application security posture management, or ASPM. On the Snyk side, I expect big adoption of AppRisk, which has been the best-received launch we've ever had, and we've had some really good ones. So, I think that will quickly become part of the default required toolkit of every organisation. That's maybe one prediction that we're literally making significant bets on in the business. So, I think that's one. Can I say that there's going to be a big breach due to supply chain security?

[0:58:43] Simon Maple: Let's take that one as a given, but you get no points for it next year.

[0:58:45] Guy Podjarny: It'll still be interesting to see how long it lasts. I think probably forever. It's almost like if we said there would be a big breach because of misconfigurations or secrets. Those are basically cases where some people will leave some doors open, and some other people will walk through them. Maybe a little bit more on the supply chain security side, we need to consider what the regulation will look like. I do think there is a bunch of work happening right now, making its way through legislative paths, that will firm things up a little bit more. Because today, you still find vendors giving you a PDF with their SBOM, and can we explicitly say that that's not correct? There's still a lack of clarity around what it is that you need to do with these SBOMs that you're now accumulating. When a vulnerability comes along, are you held accountable for knowing, because you got the SBOM, that there is a vulnerability in one of your vendors, and you didn't do anything? Or is it the vendor's responsibility? So, there are still some significant open questions that I think the right vendors will grapple with. But otherwise, I think it will continue on the practicality path.

[0:59:47] Simon Maple: And maybe moving on to AI bills of materials, I guess, evolving from an AI point of view. Is that something you expect to see standardised in 2024? Or is it perhaps too early even for that?

[0:59:59] Guy Podjarny: It's really hard. I would say, generally with AI, another easy prediction is that adoption will skyrocket. I think there's no doubt that if you're still in an organisation that's holding back Copilot use and things like that, that's not going to be the reality by the end of the year. I think the definitions around AI security will have to firm up. That was my comment that you'll have a security policy by the end of the year. Then, as a result of that, I think there will be some definition of an AI BOM. I actually don't expect the AI BOM to be required by the end of the year, because I don't think we know enough yet to know what it needs to hold. So, I think the regulations at the moment, and maybe through the year, will remain in the murky areas of you have to report it, you have to show that you made an effort to keep it secure, maybe some assurances around data security and legality, where we're seeing some things. Then, as a result of that, and maybe this is a little bit more out on a limb, I expect there will be a significant security breach that is caused by over-trusting AI. I think we'll see that in 2024.

[1:01:03] Simon Maple: Where do you think the problems will mostly come from? You mentioned a few categories there. Do you think it will be through generated code that's been added in? Or do you think it will be more the LLM having greater privileges or access, or prompt injection, those kinds of things?

[1:01:20] Guy Podjarny: I think the platforms on which the AI engines are running are actually reasonably secure. So, I don't think it's going to be a data security thing, where I leaked my board meetings to the model and it told them to others. I think it will be something around a system that is relying on AI to do something. So yes, code generation, or configuring your infrastructure or something like that with AI, in a way that is not scrutinised. It's a bit hard to say whether it will happen in 2024 or 2025. But what we're seeing at the moment is that a lot of AI is still being used not in production, or only in side capabilities. What is used in production is the code generation, and that's why I think, in 2024, the breaches are most likely to come from there.

[1:02:02] Simon Maple: Yes. In the early days of AI, there was a lot of worry, from the developer's point of view, that they were not going to be in a job for much longer. From their point of view, how does their role change in terms of fixing and securing these types of issues? I think you alluded to that a little bit earlier. And I guess, how do you see the dev role changing as well?

[1:02:25] Guy Podjarny: Yes. I think you may or may not like it as a developer. But I think the reality is that developers will, over the years, not this year, shift their role from writing code to supervising code generation, and maybe later, not just code. The reality is that there's just a lot of acceleration that AI can offer, even while we're just generating these snippets. Even today, when you're using Copilot, if you're generating 30% of your code with Copilot, that implies that for that 30%, you didn't write the code. You're just supervising the creation of it.

So, I think that will grow, and the developer will need to become a little bit more of this conductor, this orchestrator, supervising the process. It's somewhat entertaining to think that they actually inherit a lot of the roles of the security person. Suddenly they have a bunch of these entities that are generating code at a faster pace than they can keep up with, and they're accountable for that code, and they need to know: is it trustworthy? So, they become little, tiny governing entities themselves.

I think this is going to be a slow burn. I don't think a notable percentage of developers will shift to this in 2024. We'll maybe start seeing a title for this type of governing dev. We might see some inklings of a few people that identify as part of some new movement, this AI native developer. So, I think we'll start seeing things around this new role of a developer in the AI era. It will be very interesting. I think, in 10 years' time, writing code will be like building your own table for the backyard. It'll be something you could do, and you might love the craft, but it won't be the thing the economy is built on. Or maybe there will be exquisite cases in which you need that custom carpentry, and you're a master of your craft.

[1:04:14] Simon Maple: As you mentioned earlier, there's the need to automate, and to accept the automation of the fixes for security issues as well. It's part and parcel of writing code itself. It's just that belief, that trust in the code.

[1:04:27] Guy Podjarny: Yes. I think, when you secure these applications, for a long time human and AI code will be intertwined, and vulnerabilities will span both. So, you need to have a platform that secures the joint output of both AI and the developer. And again, I think Snyk is very well positioned to be that. Over time, as it moves faster, you're going to rely more and more on automation, more and more on built-in expertise, not just for finding issues but also for fixing them. It's just the continuation of the journey.

None of this surprises me. I didn't know precisely what the paths would be. But fundamentally, this is the way of technology. When the next tech revolution happens, whether it takes a decade or three years or 20, it will accelerate us once again, and it will further strengthen the need to speed security up and maybe even bake it in. Maybe even then, Snyk will be there to add the next layer of automation to accelerate your security as well.

[1:05:21] Simon Maple: Amazing. Well, some solid predictions there for 2024.

[1:05:23] Guy Podjarny: Indeed.

[1:05:24] Simon Maple: And thank you for that, at times almost real-time, commentary of 2023 as well. We almost need to do the 2024 review straight after this, Guy. Amazing memories and summaries of 2023 podcast episodes, as well as some of the major events. So, thanks very much, Guy, as always.

[1:05:42] Guy Podjarny: Yes. And thank you, Simon, for taking on so much more in 2023, as I'm sure you will in 2024. I hope you will have had an amazing New Year's and Christmas.

[1:05:52] Simon Maple: It was incredible. It was incredible, Guy. It will be, and was, absolutely one of our best. So yes, amazing. Thanks, all, for tuning in, and we look forward to seeing you for the next episode.

[1:06:03] Guy Podjarny: Cheers.

[OUTRO]

[1:06:08] ANNOUNCER: Thanks for listening to The Secure Developer. You will find other episodes and full transcriptions on devseccon.com. We hope you enjoyed the episode, and don’t forget to leave us a review on Apple iTunes or Spotify, and share the episode with others who may enjoy it and gain value from it. 

If you would like to recommend a guest or topic, or share some feedback, you can find us on Twitter, @DevSecCon, and LinkedIn at The Secure Developer. See you in the next episode.