In diesem Abschnitt
What is LLMjacking? How AI Attacks Exploit Stolen Cloud Credentials
As organizations increasingly embrace the power of AI and integrate it into their workflows, new security security considerations are coming into focus. One important area to be aware of is LLMjacking. This involves unauthorized individuals trying to manipulate and exploit your organization's Large Language Models (LLMs), particularly when these models are hosted on cloud services and accessed through online accounts.
While it might not immediately sound like a critical issue, LLMjacking can have real consequences for your business and your customers, including larger data breaches and vulnerability exploits.
To protect your organization and data, it’s important to understand how LLMjacking works, what the potential risks are, and the steps you can take to keep your LLM and enterprise safe.
How do LLMjacking attacks work?
The main goal of LLMjacking attacks is to gain access to and hijack an organization’s LLM. Often, this starts with stolen usernames and passwords. These credentials might have been obtained through various methods, including direct theft or purchase from online criminal marketplaces. Unfortunately, discussions about how to carry out LLMjacking attacks are also becoming more common in online communities.
Once these cybercriminals have valid login details, they can effectively "hijack" your organization's LLM, allowing them to interact with it just like a legitimate user.
AI CODE SECURITY
Buyer's Guide for Generative AI Code Security
Learn how to secure your AI-generated code with Snyk's Buyer's Guide for Generative AI Code Security.
What could happen if an LLMjacking attack is successful?
The potential consequences of an LLMjacking attack are extensive, but the most common are:
Unexpectedly high consumption costs
Cloud-based LLM services typically charge based on usage. If your LLM is hijacked, attackers could use it extensively, leading to surprising and substantial charges that you weren't expecting. This is similar to how some criminals might secretly use your computer to mine cryptocurrency, costing you electricity and resources. It’s been found that LLMjacking could cost organizations anywhere from $46,000 to over $100,000 a day, depending on the LLM they use.
Data breaches and information leaks
Your employees might use LLMs to process various types of data, some of which could be sensitive. If an attacker gains control of the LLM, they could potentially access this information, leading to data security incidents and the leakage of confidential or proprietary data. This can damage your reputation and lead to legal or regulatory issues.
Exploitation of vulnerabilities
Attackers could use a hijacked LLM to try and find and exploit weaknesses in your software systems, both those that are already known (one-day vulnerabilities) and those that are brand new (zero-day vulnerabilities). A study found that OpenAI’s GPT-4 could autonomously exploit 87% of one-day vulnerabilities with CVE descriptions alone, making it entirely possible for a bad actor to use an organization’s LLM to generate malicious code and exploit vulnerabilities.
Prompt injection and data poisoning
Attackers can use a technique called prompt injection to trick your LLM into behaving in unintended ways. By feeding it specific malicious inputs, they can make it reveal sensitive information, generate harmful code, or take other damaging actions. Additionally, they might try to poison the training data that the LLM uses. This could introduce errors, biases, or weaknesses into the model, making it less reliable or even exploitable in the future.
The real impact of LLMjacking on your business
The consequences we've discussed can add up to have a significant impact on your organization:
Financial losses: Beyond the direct costs of increased usage, dealing with an LLMjacking incident can involve expenses for investigating the attack, recovering lost data, legal fees, and repairing any damage to your company's reputation.
Increased security risks: The potential for data breaches, the introduction of vulnerabilities into your systems, and the manipulation of your AI all weaken your overall security and increase the risk of future attacks.
Disruption to Your Operations: If your LLM is being used excessively by attackers, it could slow down or even overload your systems, impacting their ability to function correctly. Furthermore, if the LLM is compromised to spread incorrect information or generate harmful content, it can seriously disrupt your business operations and negatively affect your customers' experience.
How to protect yourself: LLMjacking prevention and detection strategies
To prevent LLMjacking and mitigate risk, organizations need to have proper guardrails. Since stolen credentials are a common way for attackers to gain access, it's crucial to have strong Identity and Access Management (IAM) practices in place. This includes following the principle of giving users only the necessary access they need to perform their jobs (least privilege). It also requires multi-factor authentication (MFA) for all accounts that can access your LLM services. Finally, organizations should provide clear guidelines, documentation, and training to employees on how to use AI responsibly and securely, including LLMs.
LLMs should be continuously trained on high-quality, diverse, and trustworthy data to reduce the chances of vulnerabilities or biases. Consider using techniques like adversarial training to make your models more resilient as well as regularly auditing and updating your LLMs to patch any potential security weaknesses.
Implementing strong input validation and sanitization processes to prevent prompt injection attacks is also good practice. This means checking user inputs to make sure they meet certain standards and removing or modifying any potentially harmful content before it's processed by the LLM.
Detection tactics: How to spot potential LLMjacking Attacks
Security teams should be on the lookout for signs of a potential LLMjacking incident. There are detection tactics that mitigate risk, including:
Monitoring the LLM and user accounts for unusual requests, activity, or query patterns.
Implementing filtering for both inputs and outputs of your LLMs to identify any potentially malicious activity or anomalies.
Using security tools and systems to monitor logs, identify suspicious network traffic, block malicious connections, and scan for compromised credentials or malicious or vulnerable code.
Educate and train your users to recognize and report any suspicious or unusual activity they might encounter while using LLMs.
Tools to prevent LLMjacking (and other resource-jacking) attacks
Beyond the general security measures, there are specific types of tools that can be particularly helpful in preventing LLMjacking and other similar attacks. Since LLMjacking often targets cloud-based systems, having tools that provide good visibility and security across these environments is essential.
Static application security testing (SAST) and software composition analysis (SCA) tools are highly relevant here. Ideally, you want a comprehensive platform like Snyk Apprisk that gives you a clear view of security risks throughout your entire software development process and across all your environments. SCA tools, like Snyk open source, can scan your software code to identify any open-source components and their dependencies. This helps you make sure that any known vulnerabilities or other risks associated with these components are found, prioritized, and fixed before they can be exploited in your LLM environment or anywhere else in your systems.
Similarly, if your developers use LLMs to help them write code, SAST tools are crucial for ensuring the security of that code. While AI-generated code isn't always perfectly secure on its own, an LLMjacking attack could make it even more likely that malicious or vulnerable code is introduced. A SAST tool that scans code in real time and directly in the IDE, like Snyk Code, can ensure that vulnerabilities are found and fixed before they cause significant issues. Using a free code checker from a trusted security provider can also ensure insecure code doesn’t spread.
Other tools include:
Security Information and Event Management (SIEM) systems to analyze logs from various sources and help you detect suspicious activity patterns.
Emerging LLM-specific security tools that are designed to moderate the inputs and outputs of LLMs and protect against unique security threats related to these models.
Secret management tools to securely store and manage sensitive information like API keys, which could be targeted in an LLMjacking attack.
Cybersecurity is constantly changing, with new threats being discovered all the time. It's important for organizations to carefully evaluate their specific needs and implement a combination of tools and practices to achieve strong protection against LLMjacking and other types of attacks.
Future implications of LLMjacking
As AI becomes more and more popular and is used in a wider range of industries, we can expect the threat of LLMjacking to continue to grow. But with this increased adoption, the rise in LLMjacking continues. For example, DeepSeek was the target of LLMjacking only weeks after its public release.
As AI technology advances, new LLMs are developed, and new vulnerabilities are discovered, attackers will undoubtedly adapt their methods to try and gain access and exploit these powerful tools. This could have significant consequences as AI becomes an increasingly integral part of our daily lives and critical infrastructure.
To prevent LLMjacking and large-scale breaches, security measures surrounding AI are a necessity, not an option. One way to do this is by implementing an application security platform like Snyk, giving your organization visibility and security throughout your AppSec program, including AI adoption. By empowering your teams to take security seriously, you can protect your organization from LLMjacking and other resource-based threats.
Secure what matters most to your business
Find out how Snyk enables AppSec teams to build, manage and scale a modern AppSec program with Snyk AppRisk ASPM