Efficient infrastructure testing with LocalStack & Terraform tests framework
The Terraform tests framework can integrate with LocalStack to test your AWS cloud infrastructure locally. We’ll explore how to use this combination to test a serverless workflow, enabling rapid, cost-effective validation of your Terraform configurations right on your local machine.
Introduction
With the introduction of Terraform 1.6, Terraform tests became generally available. However, using Terraform tests against real cloud infrastructure presents challenges, such as long deployment times and unnecessary costs, which slow down development and testing cycles. LocalStack addresses these issues by allowing integration testing of cloud solutions and configurations locally and in CI/CD environments. With LocalStack’s Terraform integration (`tflocal`), you can use the testing framework to test your IaC configurations locally without creating actual cloud resources.
In this blog, we will walk through setting up an event-driven serverless workflow on your local machine using Terraform and LocalStack, and configuring Terraform tests to run locally. This eliminates the need for real AWS services, avoiding the costs of managing resources in AWS, and sets up a rapid feedback loop for accelerated cloud development and testing with Terraform & LocalStack.
How LocalStack works with Terraform
LocalStack runs as a Docker container on your local machine or in an automated environment. Once running, you can use LocalStack with Terraform to create and manage AWS resources locally. For local deployment and testing of Terraform configurations, LocalStack provides a CLI wrapper called `tflocal`. `tflocal` utilizes the Terraform Override mechanism and creates a temporary file, `localstack_providers_override.tf`, which points the AWS provider endpoints at the LocalStack API (`http://localhost:4566`).
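For illustration, the generated override file looks roughly like this (abridged here; the real file sets mock credentials and lists an endpoint entry for every supported AWS service):

```hcl
provider "aws" {
  access_key = "test"
  secret_key = "test"
  region     = "us-east-1"

  endpoints {
    s3     = "http://localhost:4566"
    lambda = "http://localhost:4566"
    # ...one entry per supported AWS service
  }
}
```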
To set up `tflocal`, install the PyPI package with these commands:
```
$ pip install terraform-local
...
$ tflocal --version
Terraform v1.9.8
```

Since `tflocal` acts as a wrapper over the `terraform` CLI, you can use all the Terraform CLI commands you are used to, including `terraform test`. Instead of deploying and testing resources on the real cloud, resources are deployed locally, and the tests verify their correctness and availability.
Image Resizer with Lambda & S3
In this tutorial, we’ll set up a serverless workflow to resize images uploaded to an S3 bucket. For simplicity, we’ll configure S3 bucket notifications to trigger a Python Lambda that runs an image resizing operation using Pillow and uploads the resized image to another S3 bucket. The infrastructure will be defined using Terraform, and we’ll use LocalStack to deploy & test it locally.

Prerequisites
- `localstack` CLI with a LocalStack Auth Token
- Terraform v1.6.0 or later with the `tflocal` wrapper script
- Docker
- LocalStack Web Application account
Set up the Lambda
To start, create a new file named `lambda_function.py`. This Lambda function automatically resizes images uploaded to an S3 bucket named `original-images`, ensuring they don’t exceed 400x400 pixels while maintaining aspect ratio. The resized images are then saved to a separate `resized-images` bucket. Add the following code to the file:
```python
import os
import boto3
from PIL import Image
import tempfile
import traceback

s3_client = boto3.client('s3')

MAX_DIMENSIONS = (400, 400)

def resize_image(image_path, resized_path, original_format):
    with Image.open(image_path) as image:
        width, height = image.size
        max_width, max_height = MAX_DIMENSIONS
        if width > max_width or height > max_height:
            ratio = max(width / max_width, height / max_height)
            width = int(width / ratio)
            height = int(height / ratio)
        size = (width, height)

        image.thumbnail(size)
        image.save(resized_path, format=original_format)

def lambda_handler(event, context):
    try:
        # Get bucket and object key from the event
        source_bucket = event['Records'][0]['s3']['bucket']['name']
        source_key = event['Records'][0]['s3']['object']['key']
        destination_bucket = 'resized-images'

        print(f"Source bucket: {source_bucket}, Source key: {source_key}")

        with tempfile.TemporaryDirectory() as tmpdir:
            download_path = os.path.join(tmpdir, source_key)
            # Extract the filename and extension
            base_filename, ext = os.path.splitext(source_key)
            resized_filename = f"{base_filename}{ext}"
            upload_path = os.path.join(tmpdir, resized_filename)

            # Download the image from S3
            s3_client.download_file(source_bucket, source_key, download_path)

            # Determine the image format
            with Image.open(download_path) as image:
                original_format = image.format

            # Resize the image
            resize_image(download_path, upload_path, original_format)

            # Upload the resized image to the destination bucket
            s3_client.upload_file(upload_path, destination_bucket, resized_filename)

        return {
            'statusCode': 200,
            'body': f"Image {source_key} resized and uploaded to {destination_bucket}"
        }
    except Exception as e:
        print("Error occurred:", e)
        traceback.print_exc()
        raise e
```

To deploy the Lambda, we’ll use a ZIP archive. Create a text file named `requirements.txt` and add Pillow as a dependency, as shown below.
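The file needs only a single line (you could also pin a specific version):

```
Pillow
```

Now, run the following commands to package your Lambda function: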
```bash
docker run --platform linux/x86_64 --rm -v "$PWD":/var/task "public.ecr.aws/sam/build-python3.11" /bin/sh -c "pip3 install -r requirements.txt -t libs; exit"
cd libs && zip -r ../lambda.zip . && cd ..
zip lambda.zip lambda_function.py
rm -rf libs
```

These commands use Docker to install the required Python packages in a Lambda-compatible environment, then create a ZIP file containing both the dependencies and your function code. The final ZIP file, `lambda.zip`, will be ready to use when creating the Lambda function.
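If you want to double-check the archive before deploying, listing its contents should show both your handler and the bundled Pillow packages:

```bash
unzip -l lambda.zip | grep -E 'lambda_function\.py|PIL' | head
```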
Set up the Terraform configuration
The next step involves creating a Terraform configuration that accomplishes the following:
- Creates two S3 buckets named `original-images` and `resized-images`.
- Creates a Lambda function named `ImageResizerFunction` to resize images.
- Sets up the S3 bucket notification configuration to trigger the Lambda function when images are uploaded to the `original-images` bucket.
Create a new file named `main.tf` and add the following Terraform configuration:
resource "aws_s3_bucket" "original_images" { bucket = "original-images" force_destroy = true tags = { Name = "Original Images Bucket" }}
resource "aws_s3_bucket" "resized_images" { bucket = "resized-images" force_destroy = true tags = { Name = "Resized Images Bucket" }}
resource "aws_lambda_function" "image_resizer" { filename = "lambda.zip" function_name = "ImageResizerFunction" handler = "lambda_function.lambda_handler" runtime = "python3.11" timeout = 60 role = "arn:aws:iam::000000000000:role/lambda-role"}
resource "aws_s3_bucket_notification" "original_images_notification" { bucket = aws_s3_bucket.original_images.id
lambda_function { lambda_function_arn = aws_lambda_function.image_resizer.arn events = ["s3:ObjectCreated:*"] }}It’s important to note that this Terraform configuration only sets up S3 buckets, a Lambda function, and bucket notifications, without any IAM roles or permissions. LocalStack doesn’t enforce IAM roles strictly, as it is a permit-all system. However, you should configure IAM roles and permissions before moving to production.
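For reference, a minimal sketch of what the production IAM wiring could look like: an execution role the function can assume, plus permission for S3 to invoke it. Treat this as a starting point rather than a complete policy set; a real deployment would also need S3 read/write and logging permissions attached to the role:

```hcl
# Execution role that the Lambda service can assume
resource "aws_iam_role" "lambda_role" {
  name = "image-resizer-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

# Allow the original-images bucket to invoke the resizer function
resource "aws_lambda_permission" "allow_s3_invoke" {
  statement_id  = "AllowExecutionFromS3"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.image_resizer.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.original_images.arn
}
```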
You’re now ready to test your infrastructure deployment with `tflocal`!
Deploy the local infrastructure
Before starting a local deployment with `tflocal`, first start your LocalStack container using your `LOCALSTACK_AUTH_TOKEN`:
```
localstack auth set-token <YOUR_LOCALSTACK_AUTH_TOKEN>
localstack start
```

Once the LocalStack container is running, initialize your Terraform configuration with this command:
```
tflocal init
```

Finally, deploy your Terraform configuration using:
```
tflocal apply
```

You will be prompted to confirm the resource actions. After confirmation, your entire infrastructure will be deployed locally. The output will look like this:
```
aws_lambda_function.image_resizer: Creating...
aws_s3_bucket.original_images: Creating...
aws_s3_bucket.resized_images: Creating...
aws_s3_bucket.resized_images: Creation complete after 0s [id=resized-images]
aws_s3_bucket.original_images: Creation complete after 0s [id=original-images]
aws_lambda_function.image_resizer: Creation complete after 6s [id=ImageResizerFunction]
aws_s3_bucket_notification.original_images_notification: Creating...
aws_s3_bucket_notification.original_images_notification: Creation complete after 0s [id=original-images]

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
```

If you have an account on the LocalStack Web Application, you can check the Status Page and Resource Browsers to verify that your resources have been successfully created using Terraform.
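You can also verify the deployment from the command line with `awslocal`, LocalStack’s wrapper around the AWS CLI (installable via the `awscli-local` PyPI package):

```bash
awslocal s3 ls
awslocal lambda get-function --function-name ImageResizerFunction
```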

Asserting the resource provisioning
You can now begin writing Terraform tests using HashiCorp Configuration Language (HCL). Terraform identifies test files by their extensions: `.tftest.hcl` or `.tftest.json`.

A test file generally includes the following components:
- An optional `provider` block to customize the provider configuration.
- A `variables` block that contains the input variables passed into the module.
- `run` blocks that each execute a specific test scenario, run in sequence (see the skeleton below).
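Putting these together, a skeletal test file might look like this (the names are purely illustrative and assume the module under test defines an `aws_s3_bucket.example` resource):

```hcl
provider "aws" {
  region = "us-east-1"
}

variables {
  bucket_name = "my-test-bucket"
}

run "check_bucket_name" {
  command = plan

  assert {
    condition     = aws_s3_bucket.example.bucket == var.bucket_name
    error_message = "Bucket was not created with the expected name"
  }
}
```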
Tests in Terraform serve two main purposes:
- Unit testing focuses on individual components to ensure each part functions correctly.
- Integration testing ensures that the deployed infrastructure operates as expected as a whole.
For unit testing, the framework typically uses `terraform plan` within its run blocks. This approach speeds up testing by avoiding the actual provisioning of infrastructure. Assertions then confirm that the configuration produces the expected values.

In contrast, integration tests use `terraform apply` to deploy the infrastructure and then check its functionality, often using data sources to validate expected responses from the deployed resources. Instead of specifying commands directly, the `command` attribute within the run block indicates whether to execute `plan` or `apply` (the default being `apply`).
To get started, create a new directory named `tests` and, within it, a file called `assert.tftest.hcl`. Here’s how to add a test to check the created S3 buckets:
run "verify_s3_buckets" { command = plan
assert { condition = aws_s3_bucket.original_images.bucket == "original-images" error_message = "Original images bucket not created with correct name" }
assert { condition = aws_s3_bucket.resized_images.bucket == "resized-images" error_message = "Resized images bucket not created with correct name" }}In this run block:
- The label `verify_s3_buckets` names the test.
- The `command` is set to `plan`, which executes the `terraform plan` command.
- The `assert` block contains a `condition` argument whose expression should evaluate to `true` if the test passes and `false` if it fails.
This test ensures that the S3 buckets were created with the correct names. Note that you can include multiple run blocks in your test file, and each run block can contain multiple assert blocks. Terraform executes the run blocks sequentially within the configuration directory.
Let’s run these tests using the Terraform testing framework. Start by restarting your LocalStack container for a fresh state:
```
localstack restart
```

Now, execute the following command to run your tests:
```
tflocal test
```

The output should resemble this:
```
tests/assert.tftest.hcl... in progress
  run "verify_s3_buckets"... pass
tests/assert.tftest.hcl... tearing down
tests/assert.tftest.hcl... pass

Success! 1 passed, 0 failed.
```

Next, let’s verify that the Lambda function was created correctly.
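The check below reads a root-module output, which the `main.tf` shown earlier doesn’t yet declare, so first expose the function’s ARN as an output (the output name must match what the test references):

```hcl
output "lambda_function_arn" {
  value = aws_lambda_function.image_resizer.arn
}
```

With that in place, add a new run block to `assert.tftest.hcl`: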
run "verify_lambda_function" { command = plan
assert { condition = output.lambda_function_arn != null error_message = "Lambda function not created" }}When you run the tests again, you might encounter an error:
run "verify_s3_buckets"... pass run "verify_lambda_function"... fail╷│ Error: Unknown condition value││ on tests/assert.tftest.hcl line 19, in run "verify_lambda_function":│ 19: condition = output.lambda_function_arn != null││ Condition expression could not be evaluated at this time. This means you have executed│ a `run` block with `command = plan` and one of the values your condition depended on is│ not known until after the plan has been applied. Either remove this value from your│ condition, or execute an `apply` command from this `run` block.╵tests/assert.tftest.hcl... tearing downtests/assert.tftest.hcl... fail
Failure! 1 passed, 1 failed.This error indicates that instead of using command = plan, you should use command = apply, because the Lambda function ARN can only be retrieved after the Terraform configuration is applied. The plan command only simulates changes without creating resources, so these runtime values remain undefined. Make this change and re-run the tests to confirm functionality:
```
tests/assert.tftest.hcl... in progress
  run "verify_s3_buckets"... pass
  run "verify_lambda_function"... pass
tests/assert.tftest.hcl... tearing down
tests/assert.tftest.hcl... pass

Success! 2 passed, 0 failed.
```

As you can see, Terraform processes run blocks in the order they appear in the test file, executing them sequentially. Each run block can depend on the state changes made by the previous ones.
You can similarly test other resources, for example verifying that the bucket notification is correctly configured with the Lambda function, or checking the S3 bucket ARNs.
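As an illustration, one possible shape of a notification check (a sketch, not part of the original test file; the attribute paths assume the resources defined in `main.tf` above):

```hcl
run "verify_bucket_notification" {
  command = apply

  assert {
    condition     = aws_s3_bucket_notification.original_images_notification.lambda_function[0].lambda_function_arn == aws_lambda_function.image_resizer.arn
    error_message = "Bucket notification is not wired to the ImageResizerFunction Lambda"
  }
}
```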
Integration Testing with Modules
With Terraform tests, modules can be used to design and test complete workflows. You can use modules to:
- Set up infrastructure with a setup module
- Validate secondary infrastructure with a loading module
For example, a setup module deploys core infrastructure (S3 buckets, Lambda function), and a loading module uploads an image to the original-images S3 bucket and verifies the resized image.
To start, create `execute` and `verify` directories inside `tests`, each with a `main.tf` file.
In `tests/execute/main.tf`, add the following configuration:
provider "aws" { access_key = "test" secret_key = "test" region = "us-east-1"
s3_use_path_style = true skip_requesting_account_id = true skip_credentials_validation = true skip_metadata_api_check = true
endpoints { s3 = "http://localhost:4566" sts = "http://localhost:4566" }}
variable "original_bucket_name" { type = string}
variable "image_path" { type = string}
variable "test_image_key" { type = string}
resource "aws_s3_bucket_object" "test_image" { bucket = var.original_bucket_name key = var.test_image_key source = var.image_path}In this file, the provider block specifies mock AWS credentials, routes requests to LocalStack, and includes flags to bypass account and credentials checks. The S3 bucket object resource uploads an image to the S3 bucket, triggering the Lambda function.
As a note, you can set or override providers in Terraform test files using `provider` and `providers` blocks. Without these, Terraform initializes providers with their default configurations.
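For instance, a test file can define its own provider configuration and have individual run blocks opt into it (a purely illustrative sketch; the alias and run label are made up):

```hcl
provider "aws" {
  alias  = "custom"
  region = "us-east-1"
}

run "uses_custom_provider" {
  command = plan

  providers = {
    aws = aws.custom
  }
}
```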
In `tests/verify/main.tf`, add:
```hcl
terraform {
  required_providers {
    time = {
      source  = "hashicorp/time"
      version = "0.12.1"
    }
  }
}

provider "aws" {
  access_key = "test"
  secret_key = "test"
  region     = "us-east-1"

  s3_use_path_style           = true
  skip_requesting_account_id  = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true

  endpoints {
    s3  = "http://localhost:4566"
    sts = "http://localhost:4566"
  }
}

variable "resized_bucket_name" {
  type = string
}

variable "test_image_key" {
  type = string
}

resource "time_sleep" "wait_10_seconds" {
  create_duration = "10s"
}

data "aws_s3_bucket_object" "resized_image" {
  depends_on = [time_sleep.wait_10_seconds]
  bucket     = var.resized_bucket_name
  key        = var.test_image_key
}
```

This file includes the AWS provider as well as the `time` provider, which adds a 10-second delay before the `aws_s3_bucket_object` data source retrieves the resized image from `resized_bucket_name`. This gives the Lambda function time to process and upload the resized image.
Now, in the `tests` directory, create `integration.tftest.hcl` to specify values for the input variables:
```hcl
variables {
  original_bucket_name = "original-images"
  resized_bucket_name  = "resized-images"
  image_path           = "image.png"
  test_image_key       = "image.png"
}
```

Ensure that a PNG image named `image.png` is present in your root directory, where the tests will be executed; you can alternatively download one from our GitHub repository. These variables will be passed to the modules defined in this section, specifying the original bucket, resized bucket, image path, and image key for retrieval.
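If you don’t have an image handy, any PNG larger than 400x400 pixels will do, so that the resize is observable. For example, you could generate one with Pillow (the dimensions and color here are arbitrary):

```bash
python3 -c "from PIL import Image; Image.new('RGB', (800, 600), 'steelblue').save('image.png')"
```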
Next, add module blocks to your test file, specifying the `source` attribute to point to the desired module path. This source can be a path to a local module or a registry module reference; only these two options are supported.
Here’s how you might structure this:
run "setup" { module { source = "./" }}
run "execute" { module { source = "./tests/execute" }}
run "verify" { module { source = "./tests/verify" }
assert { condition = data.aws_s3_bucket_object.resized_image.id != "" error_message = "Resized image not found in resized-images bucket" }}This configuration:
- Deploys the primary infrastructure using the `main.tf` file in the root directory.
- Uploads `image.png` to the `original-images` S3 bucket.
- Waits 10 seconds, then verifies that the resized image exists in the `resized-images` bucket.
Run the tests using `tflocal test` to see the following output:
```
tests/assert.tftest.hcl... in progress
  run "verify_s3_buckets"... pass
  run "verify_lambda_function"... pass
tests/assert.tftest.hcl... tearing down
tests/assert.tftest.hcl... pass
tests/integration.tftest.hcl... in progress
  run "setup"... pass
  run "execute"... pass
  run "verify"... pass
tests/integration.tftest.hcl... tearing down
tests/integration.tftest.hcl... pass

Success! 5 passed, 0 failed.
```

The module block in each run block specifies which module to execute, while the inputs and assertions in each run block configure the module and verify the expected results.
As you might have noticed, once a test file finishes, Terraform automatically attempts to destroy all resources created during its run blocks, in the reverse order of their creation as recorded in the state file. With LocalStack, this cleanup process is even simpler: you can stop or restart your container to get a fresh state, ensuring there are no lingering cloud resources that could incur additional costs.
Conclusion
That’s the long and the short of how you can use Terraform tests with LocalStack. If you already have test files set up to validate your Terraform deployments, you can get started by swapping the `terraform` command for LocalStack’s `tflocal`. This lets you validate Terraform deployments locally, giving you confidence in your configuration by closely emulating real cloud behavior with LocalStack’s cloud emulator.
Terraform 1.7 also introduced test mocking to simulate providers, resources, and data sources, generating fake data for tests without creating infrastructure or requiring credentials. In contrast, LocalStack provides full replication of real-world behavior, as shown in the example above. With LocalStack’s focus on parity with AWS, you can avoid building mock data to simulate specific behaviors and rely on our high-fidelity, fully local cloud developer experience.
You can find the complete example and a sample GitHub Actions workflow pattern in our repository.
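For reference, a minimal sketch of what such a workflow could look like (step names and version pins are illustrative, and the packaging step for `lambda.zip` is elided; the actual workflow in the repository may differ):

```yaml
name: terraform-tests

on: push

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: hashicorp/setup-terraform@v3

      - name: Start LocalStack
        env:
          LOCALSTACK_AUTH_TOKEN: ${{ secrets.LOCALSTACK_AUTH_TOKEN }}
        run: |
          pip install localstack terraform-local
          localstack start -d
          localstack wait -t 30

      - name: Run Terraform tests
        run: |
          tflocal init
          tflocal test
```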