
5 ways to prevent code injection in JavaScript and Node.js


April 6, 2021


Writing secure JavaScript code in a way that prevents code injection might seem like an ordinary task, but there are many pitfalls along the way. For example, the fact that you (a developer) follow best security practices doesn’t mean that others are doing the same. You’re likely using open source packages in your application. How do you know if those were developed securely? What if insecure code like eval() exists there? Let's dive into it.

What is code injection?

Code injection is a specific form of the broader class of injection attacks, in which an attacker sends JavaScript or Node.js code that is then interpreted by the browser or the Node.js runtime. The vulnerability manifests when the interpreter is unable to distinguish between the trusted code the developer intended to run and the injected code the attacker provided as input.

How to prevent code injection

As a key secure coding convention, do not allow any dynamic code execution in the application. This means you should avoid language constructs like eval() and code strings passed to setTimeout() or the Function constructor. Secondly, avoid insecure serialization and deserialization, which can be vulnerable to injection attacks that execute code during the deserialization process. Lastly, perform dependency scanning to ensure that your application isn’t susceptible to this attack due to third-party open source components. Furthermore, if you use a static code analysis tool like Snyk Code, you can find these potential code injection security vulnerabilities in your or your colleagues' code.

In this article we will look into 5 ways to prevent code injection:

  1. Avoid eval(), setTimeout() and setInterval()

  2. Avoid new Function()

  3. Avoid code serialization in JavaScript

  4. Use a Node.js security linter

  5. Use a static code analysis (SCA) tool to find and fix code injection issues

1. Avoid eval(), setTimeout(), and setInterval()

I know what you're thinking: here is yet another guide that tells me to avoid eval(). Yes, that’s true, but I also want to give you real-world examples of popular libraries that used eval() (or other forms of dynamic code construction) that came back to bite them and led to severe security vulnerabilities.

But before we dive into the vulnerable third-party package references, let’s first explain eval() and its accompanying functions. JavaScript runtime environments, like the browser and the server-side Node.js platform, allow code to be evaluated and executed at runtime. A practical example of this is the following:

const getElementForm = getElementType == "id" ? "getElementById" : "getElementsByName";
const priceTagValue = eval("document." + getElementForm + "(" + elementId + ").value");

With this, a programmer is trying to create a dynamic way to access data on the DOM. In this example, the assumption is that getElementForm is potentially user controlled, as is the elementId variable. There are better ways of performing this task without eval(), so avoid dynamic code like this at all costs.
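
For completeness, here is a minimal eval()-free sketch of one alternative, assuming the same getElementType and elementId inputs as above. The accessors map and its keys are hypothetical; the point is to look handlers up from a fixed allowlist instead of building a code string:

// A hedged, eval()-free sketch: pick the DOM accessor from a fixed allowlist.
const accessors = {
  id: (value) => document.getElementById(value),
  name: (value) => document.getElementsByName(value)[0],
};

const accessor = accessors[getElementType];
const element = accessor ? accessor(elementId) : null;
const priceTagValue = element ? element.value : undefined;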

On the Node.js side of things, one may want to allow accessing specific data points in the application, based on some dynamic evaluation. Here’s an example:

const db = "./db.json"
const dataPoints = eval("require('"+db+"')");

In this example, the general assumption is that the exact file we want to require is dynamic and potentially user controlled, which again opens the door to code injection vulnerabilities.
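
A hedged sketch of a safer direction is to constrain the dynamic part to a fixed allowlist of known files instead of concatenating input into an eval'd require() call. The file names and function below are hypothetical:

// A hedged sketch: only load data sources from a fixed, hardcoded allowlist.
const allowedDataSources = {
  db: './db.json',
  users: './users.json',
};

function loadDataPoints(name) {
  const file = allowedDataSources[name];
  if (!file) {
    throw new Error('Unknown data source: ' + name);
  }
  return require(file);
}

const dataPoints = loadDataPoints('db');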

Dustjs code injection shows a real-world example of insecure eval usage

LinkedIn’s npm package dustjs—an asynchronous templates project for the browser and server-side Node.js—shows how severe a code injection vulnerability can become.

While this package isn’t well maintained anymore, it still sees about 72,000 monthly downloads, and it has had to deal with a code injection security vulnerability.

The maintainers of dustjs did their best to escape potentially dangerous user input that could flow into insecure code constructs like the eval() function. However, the escapeHtml function itself had a security flaw: it only checked for string types before escaping the input, whereas it should have also handled other types, such as arrays. This pull request fixed the code injection security vulnerability:

[Screenshot: the pull request that fixed the escapeHtml type handling in dustjs]

How does eval() fit into all of this, you ask?

If you use dustjs, you might also introduce the npm package dustjs-helpers to get additional template helpers like math operations and logical operations. One of those additional helpers is an if condition, which you may end up using like this in your own dust template files:

[Screenshot: a dust template using the if condition helper with a user-supplied device query parameter]

Makes sense, right?

The problem is that uncontrolled user input in that device query parameter flows directly into the if condition helper, which uses eval(), as you can see on line 227, to evaluate the condition dynamically:

[Screenshot: the dustjs-helpers if helper source, where eval() evaluates the condition on line 227]

Now everything becomes clear, and we can see how several security issues come together in an unforeseen way:

  1. dustjs-linkedin, an open source package, has a security flaw in which it incorrectly sanitizes input strings in its escapeHtml function.

  2. dustjs-helpers, an open source package, uses an insecure coding convention in the likes of the eval() function to dynamically evaluate code at runtime.

Do you want to see how I exploited this exact vulnerability to hack a real, live application? Check it out:

Also avoid setTimeout() and setInterval()

To wrap up the best practice of avoiding eval(), I also want to call out other functions which, as a JavaScript developer, you have most certainly heard about or used at least once in your application: setTimeout() and setInterval().

A lesser-known fact about these functions is that they also accept code strings as their first argument. For example, setTimeout() can be used as follows:

setTimeout("console.log(1+1)", 1000);

Luckily, passing code strings like this isn’t allowed in a Node.js environment, where the first argument must be a function!
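
In either environment, the safer pattern is simply to pass a function rather than a code string:

// Pass a function reference instead of a code string.
setTimeout(() => {
  console.log(1 + 1);
}, 1000);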

2. Avoid new Function()

Another language construct, similar to eval(), setTimeout(), and setInterval() above, is the Function constructor, which allows you to dynamically define a function from string literals.

Consider the following mundane example:

const addition = new Function('a', 'b', 'return a + b');
addition(1, 1);

If you have followed closely this far, you are already aware of the potential security issues that arise when user input flows into such a construct...
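
To make the risk concrete, here is a hedged sketch of what happens when the function body comes from user input; userInput here is a hypothetical, attacker-controlled string:

// Hypothetical attacker-controlled string, e.g. taken from a request parameter.
const userInput = 'return process.env; // leak every environment variable';

// Equivalent to eval(): the string is compiled and executed as code.
const leak = new Function(userInput);
console.log(leak());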

3. Avoid code serialization in JavaScript

Serialization is quite a thing in the Java ecosystem. My buddy Brian Vermeer wrote a blog post about how security vulnerabilities impact Java applications due to insecure serialization operations. I highly recommend reading it: Serialization and deserialization in Java: explaining the Java deserialize vulnerability.

Back to our JavaScript world, apparently serialization is also a thing.

Chances are, you won’t be writing your own serialization and deserialization logic; with the wonderful world of npm and more than 1,500,000 open source packages at your disposal, why not use one of them?

js-yaml wins in popularity with more than 28,000,000 downloads a week, and according to the Snyk Advisor it has a good overall package health score:

[Screenshot: the js-yaml package health page on Snyk Advisor]

That said, you can see from the above screenshot of the npm package js-yaml that prior versions had security vulnerabilities in them. Which ones, you ask?

Versions of js-yaml were found vulnerable to Code Execution due to Deserialization. The vulnerability manifests through the following use of the new Function() constructor:

function resolveJavascriptFunction(object /*, explicit*/) {
  /*jslint evil:true*/  var func;

  try {
    func = new Function('return ' + object);
    return func();
  } catch (error) {
    return NIL;
  }
}

Let’s see what a proof-of-concept exploit for this vulnerability looks like:

var yaml = require('js-yaml');

// Proof of concept: the !!js/function tag makes js-yaml compile and execute the embedded function on load.
var x = "test: !!js/function > \n \
function f() { \n \
console.log(1); \n \
}();"

yaml.load(x);

So if a malicious actor is able to provide input like the one used to create the x variable in the above proof-of-concept code, a potential vulnerability becomes a real danger.

The above vulnerability dates back to 2013, but a 2019 security vulnerability report found another case of Arbitrary Code Execution in js-yaml. So be careful. Or, to give you more actionable and practical advice: avoid new Function(), and scan your third-party open source packages so you know whether you have these vulnerabilities and, if you do, can fix them automatically with a fix pull request.
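
If you do need to parse YAML that comes from untrusted sources, a hedged sketch of the safer direction looks like this (the exact API depends on your js-yaml version; the input value is hypothetical):

const yaml = require('js-yaml');

// Hypothetical untrusted input, e.g. a YAML document uploaded by a user.
const untrustedInput = 'test: 42';

// On js-yaml 3.x, prefer safeLoad(), which rejects the unsafe !!js/* tags.
// On js-yaml 4.x, load() is safe by default; you can also restrict the schema further:
const data = yaml.load(untrustedInput, { schema: yaml.JSON_SCHEMA });
console.log(data); // { test: 42 }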

4. Use a Node.js security linter

Arriving at the tooling part of this guide, let’s talk about linters. JavaScript developers like their linters. Whether you use standardjs or eslint to enforce a code style, these are pretty common tools in any JavaScript or Node.js project.

Why not enforce good security practices too? This is where eslint-plugin-security joins the party. As the README instructions go, using the plugin is quite straightforward. Simply add the following plugin configuration to your ESLint setup to enable the recommended rules:

"plugins": [
  "security"
],
"extends": [
  "plugin:security/recommended"

How does the linter help?

It has rules to detect insecure coding conventions, such as detect-eval-with-expression, which detects calls to eval() with a dynamic expression rather than a plain string literal. The linter has other rules as well, such as detecting use of the child_process Node.js APIs.
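
For example, a snippet along these lines is what detect-eval-with-expression is meant to flag (userInput is a hypothetical variable):

// Hypothetical dynamic value, e.g. read from a command-line argument or request parameter.
const userInput = process.argv[2];

// Flagged by detect-eval-with-expression: eval() is called with a dynamic expression.
const result = eval(userInput);
console.log(result);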

Note that eslint-plugin-security was last published over 4 years ago, and while it may still work well functionally, you may want to consider successor packages like eslint-plugin-security-node.

5. Use a static code analysis tool to find and fix code injection issues

Static code analysis (SCA) linters in their basic form, as used with ESLint, are a good starting point. They provide enough context to enforce code style, but as we’ve seen with the Node.js security linter, they’re not flexible enough to actually address security issues.

Some of the concerns developers have with a Node.js security linter such as eslint-plugin-security would be:

  1. False positives: The linter rules are quite basic and might end up alerting on many false positives, which only drives more developer frustration and confusion. For example, a new RegExp(matchEmailRegEx) call will cause the Node.js security linter to error because of the non-literal argument passed to the RegExp constructor (see the sketch after this list). Maybe matchEmailRegEx is just a constant in my shared/variables.js file? The linter isn’t advanced enough to know that.

  2. Rules are too rigid: In continuation of the above point, the rules are too rigid. Either you use child_process.exec(someCommand, []), or you don’t. The static code analysis performed by the linter isn’t smart enough to recognize that someCommand is a constant you hardcoded. The mere fact that you’re calling child_process.exec() with a non-literal is enough to trigger a linter error, which ends up frustrating developers, who then turn the rule off.

  3. Rules are too basic: The collection of rules is small, and the findings are too basic. They’re essentially all-or-nothing, with little context about how data actually flows from a given user input into potentially sensitive code such as command execution, SQL queries, or other sinks.
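
To make the false-positive point concrete, here is a hedged sketch of the pattern that trips the non-literal RegExp rule even though nothing here is user controlled (the file and variable names are hypothetical):

// shared/variables.js -- a constant pattern defined in one place.
const matchEmailRegEx = '^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$';
module.exports = { matchEmailRegEx };

// app.js -- the linter errors on the non-literal RegExp argument,
// even though matchEmailRegEx is a hardcoded constant.
const { matchEmailRegEx } = require('./shared/variables');
const emailPattern = new RegExp(matchEmailRegEx);
console.log(emailPattern.test('user@example.com')); // true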

To reiterate the statement from before: a security linter like eslint-plugin-security-node is a good starting point. It’s definitely better than nothing at all.

But there are better ways to find security issues in your own code, while you code.

Let me introduce you to Snyk Code, a static application security testing (SAST) tool that’s built for developers.

Finding a command injection in a Node.js application

Snyk Code is going to launch soon, but I will show you a sneak peek of how it works.

First, connect to Snyk with a GitHub account and then import the GitHub repository. To do so, click on Add project, and then on the GitHub icon:

[Screenshot: adding a project in Snyk via the GitHub integration]

Then, either find a repo in the list of repositories or use the search box to type it in, and then toggle on the repository to start scanning:

[Screenshot: searching for and selecting a repository to import]

Snyk will then import the GitHub repository and scan it quickly.

It will automatically detect other manifest files that relate to potential security issues, like, say, if you’re using open source dependencies with known vulnerabilities, or if your Docker image introduces a bunch of security vulnerabilities too.

Let’s focus on our own code in this Node.js application: click Code analysis and see what we find:

[Screenshot: the Code analysis results for the imported repository]

Snyk Code found several vulnerabilities. One of them is a command injection vulnerability, as we see here:

[Screenshot: the command injection vulnerability reported by Snyk Code]

The security issue reported for this line of code explains the concern:

“Unsanitized input from the HTTP request body flows into child_process.exec, where it is used to build a shell command. This may result in a Command Injection vulnerability.”

But how does data flow from that url parameter into the unsafe exec() function? Click on the Full details button for a more elaborate view of the data flow, to add context:

[Screenshot: the full data flow from the HTTP request body into child_process.exec]

Here, we can clearly see the whole picture as Snyk Code analyzed it.

The url parameter is built from the item array, which itself originates from user-controlled input: the message body that arrives in the variable req.body.content.
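
The application’s own code isn’t reproduced here, but a minimal, hypothetical Express route with the same shape of flaw might look something like this (the route, field names, and command are illustrative assumptions, not the scanned project’s actual code):

const express = require('express');
const { exec } = require('child_process');

const app = express();
app.use(express.json());

app.post('/fetch', (req, res) => {
  // req.body.content is user controlled and flows, unsanitized, into a shell command.
  const item = req.body.content;
  const url = item[0];

  exec('wget ' + url, (err, stdout) => {
    res.send(err ? 'download failed' : stdout);
  });
});

app.listen(3000);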

Fixing the command injection

We can now take further steps to address the security issue, such as:

  • Instead of using the unsafe exec(), we can use the safer execFile() API, which does not spawn a shell and passes its arguments as an array, so they aren’t interpreted as shell syntax (see the sketch after this list).

  • We can, and should, validate, escape, or sanitize the item value coming from user input before it flows into sensitive code such as code that executes system processes.
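
Here is a hedged sketch of the first fix. The url value below is a hypothetical stand-in for the validated user input; execFile() passes the arguments array straight to the binary, so shell metacharacters in url aren’t interpreted as commands:

const { execFile } = require('child_process');

// In the real application this value would come from validated user input.
const url = 'https://example.com/file.txt';

// execFile() does not spawn a shell; arguments are passed to the binary as-is.
execFile('wget', [url], (err, stdout) => {
  if (err) {
    console.error('download failed:', err.message);
    return;
  }
  console.log(stdout);
});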

Summary

Great job if you’ve followed along this far!

Hopefully, you’re now more aware of the problems code injection vulnerabilities can create, whether they originate from your own code or from third-party dependencies that you import into your application.

If you found this post useful, here are some follow-up reading materials from my colleagues at Snyk: