Serverless adoption has grown rapidly over the last few years, and the trend shows no sign of slowing. In 2023, Datadog found that 70% of its customers who use AWS had adopted some serverless services.
Yet while serverless is on the rise, CrowdStrike identified cloud misconfiguration in 2024 as the biggest security threat in modern applications.
This makes sense: as systems grow more complex and distributed, the risk of accidentally exposing resources or granting excessive permissions increases. In such a scenario, a more robust and proactive approach to security becomes critical.
In this article, to help address this challenge, I’d like to present the concept of “Zero Trust” and how to apply its principles to serverless applications from day one.
For the practical examples in this article, I am building a REST API for uploading and retrieving sensitive files, using the following tech stack:
- AWS as cloud provider
- Serverless Framework for orchestration
- Resend for sending emails
- Core components: Lambda, API Gateway, S3, IAM, CloudWatch
- Serverless CI/CD
You can check the code at https://github.com/brognilucas/zero-trust-serverless-sample
Zero Trust is a security strategy based on the principle of “never trust, always verify.”
This security framework assumes that threats can exist anywhere, and thus nothing should be trusted by default, even inside your own perimeter, in this case, inside the VPC of your cloud.
Before looking at Zero Trust specifically in the serverless context and at cloud configuration problems, it is important to understand its key tenets.
Least privilege access:
- A user or component should never have more access than it needs.
Continuous verification:
- Ensure identity is checked before granting any type of access.
Assume breach:
- Assume that no system is perfect and that sooner or later a breach will happen; design so that its impact is minimized.
Strong identity:
- Leverage multi-factor authentication (MFA) and context-aware policies whenever possible.
Applying a Zero Trust architecture to the project means, as outlined above, putting the following items into practice:
Identity & Access Management
- Always require authentication before an action can be performed. In this project, uploading a file, listing a user's files, and downloading a file all require authentication.
For login:
Our implementation authenticates via user & password plus a 6-digit code sent via email.
For protected routes: a JWT token is required.
You can achieve this protection with the Serverless Framework by adding the following authorizer configuration:
provider:
  httpApi:
    authorizers:
      authorizer:
        type: request
        identitySource: "$request.header.Authorization"
        functionName: authorizer
        enableSimpleResponses: true

functions:
  authorizer:
    handler: auth/jwt-authorizer.handler
And the Lambda for authorization:
import jwt from 'jsonwebtoken';

export const handler = async (event) => {
  try {
    // Expect an "Authorization: Bearer <token>" header
    const token = event.headers.authorization?.split(' ')[1];

    if (!token) {
      return {
        isAuthorized: false,
      };
    }

    // Throws if the signature is invalid or the token has expired
    const decoded = jwt.verify(token, process.env.JWT_SECRET);

    return {
      isAuthorized: true,
      context: {
        email: decoded.email,
      },
    };
  } catch (error) {
    return {
      isAuthorized: false,
    };
  }
};
For this example, we didn't implement any kind of roles: every user could upload files and manage only their own files. In a second iteration, however, it would be possible to implement a role-based system where some users could only view files while others could also upload them, and so on.
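As a rough sketch of what such a second iteration could look like, a protected handler could read a role claim forwarded by the authorizer (with HTTP API payload v2, the authorizer context is exposed under requestContext.authorizer.lambda). The role claim and the handler below are assumptions for illustration, not part of the current project:

// files/upload.handler (hypothetical role-aware variant)
// The authorizer would forward a 'role' claim from the JWT in its context;
// this handler rejects callers whose role does not allow uploads.
export const handler = async (event) => {
  const { role } = event.requestContext.authorizer?.lambda ?? {};

  if (role !== 'uploader') {
    return { statusCode: 403, body: JSON.stringify({ message: 'Forbidden' }) };
  }

  // ... proceed with the regular upload logic
  return { statusCode: 200, body: JSON.stringify({ message: 'OK' }) };
};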
Implementing Least Privilege with IAM Roles
For this project, we used S3 buckets and DynamoDB, and each Lambda needs access to specific resources with different levels of privilege. For example, the Lambda that lists files should not be able to get or put objects in the bucket.
Another example is that the Lambda that signs the user in should not be able to insert or update the users table, but it does need to read the user record and create a new MFA code in another table.
Such controls are expressed when building the IAM role, as in the following spec:
functions:
  signin:
    handler: auth/signin.handler
    events:
      - httpApi:
          path: /signin
          method: post
    role: SigninLambdaRole

resources:
  Resources:
    SigninLambdaRole:
      Type: AWS::IAM::Role
      Properties:
        RoleName: zero-trust-confidential-files-api-signin-lambda-role-${self:provider.stage}
        AssumeRolePolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action:
                - sts:AssumeRole
        Policies:
          - PolicyName: SigninLambdaPolicy
            PolicyDocument:
              Version: "2012-10-17"
              Statement:
                - Effect: Allow
                  Action:
                    - dynamodb:GetItem
                  Resource:
                    - arn:aws:dynamodb:${aws:region}:${aws:accountId}:table/Users-${self:provider.stage}
                - Effect: Allow
                  Action:
                    - dynamodb:PutItem
                  Resource:
                    - arn:aws:dynamodb:${aws:region}:${aws:accountId}:table/MfaCodes-${self:provider.stage}
As you can see, no unnecessary permissions are granted on any of the resources.
Microsegmentation & Isolation
Microsegmentation is fundamental to Zero Trust: the idea is to divide the environment into small, isolated blocks, each with its own security scope. With this approach, if a breach does happen, its impact stays contained.
In practical terms, we achieved this in our project by giving each Lambda its own IAM role, as shown in the previous example.
Beyond security, this approach also reinforces good software engineering practices such as the Single Responsibility Principle (SRP), while facilitating log auditing, faster and safer deployments, and better scalability.
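As a minimal sketch, that per-function segmentation looks something like the following in the configuration (the function and role names here are illustrative, not the project's exact ones):

functions:
  listFiles:
    handler: files/list.handler
    role: ListFilesLambdaRole    # may only query file metadata
  uploadFile:
    handler: files/upload.handler
    role: UploadFileLambdaRole   # may only put objects into the upload bucket

Each role then carries only the statements that its function needs, exactly as in the SigninLambdaRole example above.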
Observability & Auditing
Observability is another essential component of a Zero Trust architecture, even more so in serverless environments.
For our project, we use CloudWatch to collect logs and the Serverless Dashboard to understand what is happening at all times. With them, we always have a view of the application's status, can spot patterns, and can check for possible errors or security breaches.
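One inexpensive habit that makes those CloudWatch logs easier to audit is emitting structured JSON instead of free-form strings. Below is a minimal sketch; the helper and its field names are our own convention for illustration, not something CloudWatch requires:

// Emit one JSON line per security-relevant event so CloudWatch Logs Insights
// can filter by action, identity, or outcome.
const audit = (action, email, outcome, details = {}) => {
  console.log(JSON.stringify({
    level: 'AUDIT',
    action,    // e.g. 'file.upload'
    email,     // authenticated identity taken from the authorizer context
    outcome,   // 'allowed' | 'denied' | 'error'
    ...details,
    timestamp: new Date().toISOString(),
  }));
};

// Example usage inside a handler:
audit('file.upload', 'user@example.com', 'allowed', { key: 'reports/q1.pdf' });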
Securing the CI/CD Pipeline
As you can imagine, though, Zero Trust goes beyond securing the production application; it also applies to deployments and to the code itself.
Therefore, properly configuring your CI/CD, using parameters and environment variables, and restricting access to these components are a MUST when implementing a Zero Trust architecture.
For this project, secrets are defined as Serverless Framework parameters and injected into the yml file through ${param:} references:
provider:
  name: aws
  runtime: nodejs20.x
  stage: ${opt:stage, 'dev'}
  environment:
    RESEND_API_KEY: ${param:resendApiKey}
    JWT_SECRET: ${param:jwtSecret}
    EMAIL_FROM: ${param:emailFrom}
    S3_UPLOAD_BUCKET_NAME: ${param:s3UploadBucketName}-${self:provider.stage}
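Those ${param:...} values never live in the repository; with Serverless Framework v3+ they can be stored per stage in the Serverless Dashboard or passed on the command line at deploy time, for example (the values below are placeholders):

serverless deploy --stage prod \
  --param="jwtSecret=<from-your-secret-store>" \
  --param="resendApiKey=<from-your-secret-store>" \
  --param="emailFrom=noreply@example.com" \
  --param="s3UploadBucketName=confidential-files"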
When implementing a Zero Trust architecture in serverless environments, it's fundamental to follow some best practices and avoid some anti-patterns. Doing so will help you achieve a more secure, scalable, and maintainable system.
Below are some of the patterns recommended for building a secure serverless application:
- Validate inputs and payloads (see the sketch after this list)
- Use secret rotation
- Use environment variables and parameters stored in a safe environment
- Frequently monitor your application
- Avoid the usage of wildcard permissions (*)
- Avoid long-running functions or critical external dependencies without fallback mechanisms
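To make the first item above concrete, here is a rough sketch of payload validation at the top of an upload handler; the field names and limits are illustrative assumptions, not this project's exact contract:

// Reject malformed or unexpected payloads before touching any AWS resource.
const validateUploadRequest = (body) => {
  const errors = [];
  if (typeof body.fileName !== 'string' || body.fileName.length === 0 || body.fileName.length > 255) {
    errors.push('fileName must be a non-empty string of at most 255 characters');
  }
  if (typeof body.contentType !== 'string' || !/^[\w.+-]+\/[\w.+-]+$/.test(body.contentType)) {
    errors.push('contentType must be a valid MIME type');
  }
  return errors;
};

export const handler = async (event) => {
  let body;
  try {
    body = JSON.parse(event.body ?? '{}');
  } catch {
    return { statusCode: 400, body: JSON.stringify({ message: 'Invalid JSON body' }) };
  }

  const errors = validateUploadRequest(body);
  if (errors.length > 0) {
    return { statusCode: 400, body: JSON.stringify({ errors }) };
  }

  // ... proceed with the actual upload logic
  return { statusCode: 200, body: JSON.stringify({ message: 'OK' }) };
};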
Zero Trust architecture has become essential for modern applications. With cloud and serverless so widely used, adopting Zero Trust means assuming that no component, user, or service is automatically trusted, which can significantly reduce the risk of accidental exposure, unauthorized access, and even insider attacks.
Given that misconfiguration can be, and often is, the main cause of security breaches, following best practices, reviewing your configuration, and using the right tools is the best way to unlock the benefits that serverless can provide.
Also, it’s always important to remember that security does not end at the moment of deployment: it is an ongoing process that involves constant monitoring, detailed auditing, and frequent adjustments to respond to new threats and vulnerabilities.
For those who want to delve deeper into the subject, some recommended readings are the OWASP Serverless Top 10, which highlights the main risks specific to this model, and the official Serverless Framework documentation focused on security, in addition to other best practices recognized in the market.
Sources and inspired content:
https://nvlpubs.nist.gov/nistpubs/specialpublications/NIST.SP.800-207.pdf
https://www.datadoghq.com/state-of-serverless/
https://www.crowdstrike.com/en-us/cybersecurity-101/cloud-security/cloud-vulnerabilities/