
Terraform vs AWS CLI


When working with AWS, you have multiple tools at your disposal to create and manage resources and knowing which one to choose can be daunting. Two of the most widely used options you’ll hear about are the AWS CLI and Terraform, but each has its own strengths and trade-offs.

The AWS CLI (Command Line Interface) is a powerful tool for interacting directly with AWS services. It allows you to issue commands to create, manage, and delete resources on AWS in real-time. On the other hand, Terraform is an infrastructure as code (IaC) tool that lets you define your cloud infrastructure in declarative configuration files, which can then be versioned, reviewed, and deployed consistently across environments. Both tools make it easier to work with cloud resources, but they come with distinct trade-offs. While the AWS CLI is often favored for its simplicity and ease of use for quick, ad-hoc tasks, Terraform is designed for automation, repeatability, and managing infrastructure at scale.

In this blog post, we’ll highlight some aspects of how each tool works by using a few practical examples. We’ll explore the differences between these two approaches, when to use what, and dive into which tool might be better suited for your specific use case.

How the AWS CLI works

The AWS CLI is tightly integrated with AWS and operates directly through the AWS API. Every command issued is executed immediately, allowing you to interact with AWS resources without managing an external state file. This makes the CLI straightforward for small tasks or for users who need to manually manage certain resources. However, it also means that there’s no built-in tracking of changes, and you have to manage the lifecycle of resources manually, including deletions.

If you’ve explored the LocalStack documentation, you’ve probably seen that the AWS CLI is the go-to tool for interacting with LocalStack and its resources. You may have also noticed that we use the awslocal command, a wrapper around the AWS CLI that simplifies working with LocalStack. Don’t worry about the differences between awslocal and aws: they’re essentially the same, except that the former points to LocalStack. If for any reason you cannot install awslocal, you can use the AWS CLI with the --endpoint-url=http://localhost.localstack.cloud:4566 flag to point to LocalStack. I’ll refer to the AWS CLI generically in this post, but know that awslocal does the same thing.
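
To make the translation concrete, here’s a tiny sketch of what awslocal effectively does: it injects the endpoint flag and passes everything else through to the regular AWS CLI. The build_cmd helper below is purely illustrative (awslocal itself is a Python wrapper, not this script):

```shell
#!/bin/sh
# Assumption: LocalStack's default edge endpoint.
ENDPOINT="http://localhost.localstack.cloud:4566"

# Build the equivalent plain-CLI invocation so the translation is visible.
build_cmd() {
  echo "aws --endpoint-url=$ENDPOINT $*"
}

build_cmd s3api list-buckets
# prints: aws --endpoint-url=http://localhost.localstack.cloud:4566 s3api list-buckets
```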

How Terraform works

In contrast, Terraform operates with a concept called “Terraform state”—a JSON file that records all the resources Terraform has created. This state file is essential, as it keeps track of which resources exist and their current configurations. When you make changes to your infrastructure, Terraform reads this state to determine what needs to be updated, added, or removed, and adjusts the infrastructure accordingly. LocalStack also provides a tool called tflocal that simplifies the process of working with Terraform and LocalStack, in the same way that the awslocal command does for the AWS CLI.
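
To make the idea of state concrete, here is a heavily abridged sketch of what a terraform.tfstate entry for an S3 bucket looks like. The values are illustrative, and the exact fields vary by Terraform and provider version:

```json
{
  "version": 4,
  "resources": [
    {
      "mode": "managed",
      "type": "aws_s3_bucket",
      "name": "static-website-bucket",
      "provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
      "instances": [
        {
          "attributes": {
            "bucket": "static-website-bucket",
            "arn": "arn:aws:s3:::static-website-bucket"
          }
        }
      ]
    }
  ]
}
```

Every resource Terraform creates gets an entry like this, which is what lets it later compute the difference between your configuration and reality.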

Example: Creating a static website S3 bucket

Let’s get our hands dirty with the smallest functional example we can do: creating a static website hosted in an S3 bucket. This simple task will give us a great foundation for comparing the AWS CLI and Terraform. By setting up this S3 bucket to host a website, we’ll explore the differences in approach, usability, and how each tool handles infrastructure management.

AWS CLI

Using the AWS CLI to create an S3 bucket and host a static website is relatively straightforward. Here are the commands:

aws s3api create-bucket --bucket static-website-bucket --region us-east-1
aws s3 sync ./ s3://static-website-bucket
aws s3 website s3://static-website-bucket/ --index-document index.html
  • The first command creates an S3 bucket with the specified name (static-website-bucket).
  • The second command syncs the local directory (where your website files are located) to the S3 bucket, meaning it will just copy all the files from your local directory to the bucket.
  • The third command configures the bucket to serve as a static website, with index.html as the default file for the root of the site.

There are a lot of bucket properties you might want to adjust when hosting a static site on S3, including permissions, the bucket policy, the CORS configuration, error handling, and a CNAME record. The most important, though, is having your bucket policy set up correctly so that the bucket is publicly accessible. Your policy.json file should look something like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::static-website-bucket/*"
        }
    ]
}

And you’ll need to apply it using the following command:

aws s3api put-bucket-policy --bucket static-website-bucket --policy file://policy.json

You’re not quite done yet: you’ll be hit with an error message complaining that public policies are blocked by the bucket’s BlockPublicPolicy public access setting. No worries, there’s a command for that too:

aws s3api put-public-access-block \
      --bucket static-website-bucket \
      --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false

It’s up to you to make sure that your user has the correct admin permissions (as you’ll see, this also applies to Terraform). Try it out using this simple HTML file:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Your Website</title>
</head>
<body>
<div>
    <div class="message">Welcome to the website.</div>
</div>
</body>
</html>

Overall, this is pretty straightforward. You’ll have your HTML, JavaScript, and CSS files in the bucket, and you’ll be able to access your website at the bucket’s URL: http://static-website-bucket.s3-website-us-east-1.amazonaws.com/. We managed to set everything up with 5 commands. But what if you need to do this again? For multiple buckets? In multiple regions? With different permissions for each? As you probably guessed, you’d have to repeat the process each time.

The AWS CLI good…

The simplicity is the strength of the AWS CLI. The commands are straightforward, and it’s great for beginners. It provides direct access to all AWS services and their APIs, allowing you to perform any action available through AWS’s web console without needing to open a browser. This makes it ideal for quick, ad-hoc tasks or automating workflows.

The AWS CLI can be integrated with shell scripts or other automation tools to streamline repetitive tasks, such as starting and stopping instances, uploading files to S3, or backing up databases. You’ll find it available on multiple platforms, including Windows, macOS, and Linux, ensuring consistency and flexibility. And most importantly, it supports all AWS services, meaning you don’t need multiple tools for different services. Whether you’re working with EC2, S3, Lambda, or any other AWS service, you can use the same CLI interface efficiently.

Something that I really appreciate about the AWS CLI is that it’s very well documented. You can always use the --help flag to get more information about a command, and the command reference is very detailed and easy to follow. For example, if you use the wrong subcommand, it will give you options:

usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

aws help
aws <command> help
aws <command> <subcommand> help

aws: error: argument operation: Invalid choice, valid choices are:

abort-multipart-upload                   | complete-multipart-upload               
copy-object                              | create-bucket                           
create-multipart-upload                  | delete-bucket                           
delete-bucket-analytics-configuration    | delete-bucket-cors                      
delete-bucket-encryption                 | delete-bucket-intelligent-tiering-configuration
delete-bucket-inventory-configuration    | delete-bucket-lifecycle                 
.............................................................................
put-bucket-ownership-controls            | put-bucket-policy                       
put-bucket-replication                   | put-bucket-request-payment              
put-bucket-tagging                       | put-bucket-versioning                   
put-bucket-website                       | put-object                              
put-object-acl                           | put-object-legal-hold                   
put-object-lock-configuration            | put-object-retention                    
put-object-tagging                       | put-public-access-block                 
restore-object                           | select-object-content                   
upload-part                              | upload-part-copy                        
write-get-object-response                | wait                                    
help


Invalid choice: 'put-bucket-public-access-block', maybe you meant:

* put-public-access-block
* get-public-access-block

…and the AWS CLI bad

The AWS CLI lacks built-in state management, meaning it doesn’t track the state of resources. Once you create a bucket, there’s no automatic way of knowing its state or keeping track of changes. You would need to manually delete resources, or script additional commands to handle cleanup.

If you need an intricate network of resources or have complex dependencies, you’ll have to manage these manually or script them out. Here’s a “fun” example that will make you appreciate Terraform more: try creating a VPC and two subnets. Notice how you need to manually capture the output of the VPC creation to use it in the subnet creation. And I can guarantee you it doesn’t stop with the subnets.

export VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 | jq -r '.Vpc.VpcId')

export SUBNET_ID1=$(aws ec2 create-subnet \
  --vpc-id $VPC_ID \
  --cidr-block 10.0.1.0/24 \
  --availability-zone us-east-1a \
  | jq -r '.Subnet.SubnetId')

export SUBNET_ID2=$(aws ec2 create-subnet \
  --vpc-id $VPC_ID \
  --cidr-block 10.0.2.0/24 \
  --availability-zone us-east-1b \
  | jq -r '.Subnet.SubnetId')

And then try to delete them. You’ll have to do it in the correct order, and you’ll have to make sure you’re not deleting something that’s still in use. This is where Terraform shines, but we’ll get to that in a bit.
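
For contrast, here is a sketch of the same VPC and two subnets in Terraform (resource labels are illustrative). The aws_vpc.main.id reference is all Terraform needs to infer the dependency, so it creates the VPC first and, on terraform destroy, removes the subnets before the VPC automatically:

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "subnet_a" {
  vpc_id            = aws_vpc.main.id # implicit dependency on the VPC
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "subnet_b" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"
}
```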

Terraform

Creating an S3 bucket with Terraform involves writing Infrastructure as Code (IaC). Below is an example configuration:

resource "aws_s3_bucket" "static-website-bucket" {
   bucket        = "static-website-bucket"
   force_destroy = true

   lifecycle {
      prevent_destroy = false
   }
}

resource "aws_s3_object" "files" {
   for_each     = fileset(path.module, "website/**/*.{html,css,js,png}")
   bucket       = aws_s3_bucket.static-website-bucket.id
   key          = replace(each.value, "/^website//", "")
   source       = each.value
   content_type = lookup(local.content_types, regex("\\.[^.]+$", each.value), null)
   source_hash  = filemd5(each.value)
}

resource "aws_s3_bucket_website_configuration" "hosting" {
   bucket = aws_s3_bucket.static-website-bucket.id

   index_document {
      suffix = "index.html"
   }
}

resource "aws_s3_bucket_public_access_block" "bucket_access_block" {
   bucket = aws_s3_bucket.static-website-bucket.id

   block_public_acls       = false
   block_public_policy     = false
   ignore_public_acls      = false
   restrict_public_buckets = false
}

resource "aws_s3_bucket_policy" "bucket_policy" {
   depends_on = [aws_s3_bucket_public_access_block.bucket_access_block]
   bucket     = aws_s3_bucket.static-website-bucket.id
   policy = jsonencode(
      {
         "Version" : "2012-10-17",
         "Statement" : [
            {
               "Sid" : "PublicReadGetObject",
               "Effect" : "Allow",
               "Principal" : "*",
               "Action" : "s3:GetObject",
               "Resource" : "arn:aws:s3:::${aws_s3_bucket.static-website-bucket.id}/*"
            }
         ]
      }
   )
}

Additionally, we’ll have our locals.tf file with the content types:

locals {
  content_types = {
    ".html" : "text/html",
    ".css" : "text/css",
    ".js" : "text/javascript",
    ".png" : "image/png"
  }
}

This Terraform configuration does the following:

  • aws_s3_bucket: Creates the S3 bucket with options like force_destroy, which allows the bucket to be deleted during cleanup even if it still contains objects.
  • aws_s3_object: Syncs local files (HTML, CSS, JS) to the bucket. Each file is uploaded with its metadata, such as content type and source hash.
  • aws_s3_bucket_website_configuration: Enables static website hosting, defining index.html as the default entry point.
  • aws_s3_bucket_public_access_block: Configures public access settings for the bucket.
  • aws_s3_bucket_policy: Sets the bucket policy to allow public read access to objects.

All the resources are there, exactly the same as those we created with the AWS CLI above. The difference is that Terraform will manage the state of these resources and keep track of them.

The Terraform good…

Although we’re discussing AWS here, Terraform is designed to work across multiple cloud providers (AWS, Azure, Google Cloud, etc.), which makes it versatile for multi-cloud strategies. You can manage resources from different providers within the same configuration.

Terraform follows a declarative approach, meaning you define what you want, and Terraform figures out how to achieve it. This simplifies infrastructure provisioning, as you don’t have to manually define the exact sequence of steps to create resources.

Another important feature is support for modules, which allows you to define reusable and shareable infrastructure components. This makes it easier to apply best practices and maintain consistency across environments.
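
As a sketch, the static-website setup from earlier could be wrapped in a module and reused per environment. The module path and variable name here are hypothetical:

```hcl
module "static_site" {
  source      = "./modules/static-site" # hypothetical local module
  bucket_name = "static-website-bucket"
}
```

Each environment (dev, staging, production) can then instantiate the same module with a different bucket_name instead of copy-pasting the resource blocks.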

Terraform’s workflow is a two-step process: terraform plan shows you the changes it will make, and terraform apply actually implements those changes. This prevents unexpected modifications and provides visibility into what’s happening. Terraform also automatically understands the relationships between resources and manages dependencies, ensuring that resources are created, updated, or destroyed in the correct order.

And we’ve already mentioned the use of state files to track the current state of your infrastructure. This allows Terraform to compare the current setup with what’s defined in the code, making it easy to detect the changes, deletions, or updates needed to reach the desired state.

The Terraform CLI also comes with some pleasant surprises. The full list of commands is below:

terraform -help
Usage: terraform [global options] <subcommand> [args]

The available commands for execution are listed below.
The primary workflow commands are given first, followed by
less common or more advanced commands.

Main commands:
  init          Prepare your working directory for other commands
  validate      Check whether the configuration is valid
  plan          Show changes required by the current configuration
  apply         Create or update infrastructure
  destroy       Destroy previously-created infrastructure

All other commands:
  console       Try Terraform expressions at an interactive command prompt
  fmt           Reformat your configuration in the standard style
  force-unlock  Release a stuck lock on the current workspace
  get           Install or upgrade remote Terraform modules
  graph         Generate a Graphviz graph of the steps in an operation
  import        Associate existing infrastructure with a Terraform resource
  login         Obtain and save credentials for a remote host
  logout        Remove locally-stored credentials for a remote host
  metadata      Metadata related commands
  output        Show output values from your root module
  providers     Show the providers required for this configuration
  refresh       Update the state to match remote systems
  show          Show the current state or a saved plan
  state         Advanced state management
  taint         Mark a resource instance as not fully functional
  test          Execute integration tests for Terraform modules
  untaint       Remove the 'tainted' state from a resource instance
  version       Show the current Terraform version
  workspace     Workspace management

Global options (use these before the subcommand, if any):
  -chdir=DIR    Switch to a different working directory before executing the
                given subcommand.
  -help         Show this help output, or the help for a specified subcommand.
  -version      An alias for the "version" subcommand.

For example, terraform console allows you to interactively query Terraform state or test interpolations before using them in configurations:

terraform console

> local.content_types
{
  ".css" = "text/css"
  ".html" = "text/html"
  ".js" = "text/javascript"
  ".png" = "image/png"
}

… or the terraform graph command that generates a representation of the dependency graph between different objects in the current configuration and state:

terraform graph
digraph G {
  rankdir = "RL";
  node [shape = rect, fontname = "sans-serif"];
  "aws_dynamodb_table.gadgets" [label="aws_dynamodb_table.gadgets"];
  "aws_dynamodb_table_item.gadget" [label="aws_dynamodb_table_item.gadget"];
  "aws_iam_role.lambda_role" [label="aws_iam_role.lambda_role"];
  "aws_lambda_function.function-lambda" [label="aws_lambda_function.function-lambda"];
  "aws_s3_bucket.static-website-bucket" [label="aws_s3_bucket.static-website-bucket"];
  "aws_s3_bucket_policy.bucket_policy" [label="aws_s3_bucket_policy.bucket_policy"];
  "aws_s3_bucket_public_access_block.bucket_access_block" [label="aws_s3_bucket_public_access_block.bucket_access_block"];
  "aws_s3_bucket_website_configuration.hosting" [label="aws_s3_bucket_website_configuration.hosting"];
  "aws_s3_object.files" [label="aws_s3_object.files"];
  "aws_dynamodb_table_item.gadget" -> "aws_dynamodb_table.gadgets";
  "aws_lambda_function.function-lambda" -> "aws_iam_role.lambda_role";
  "aws_s3_bucket_policy.bucket_policy" -> "aws_s3_bucket_public_access_block.bucket_access_block";
  "aws_s3_bucket_public_access_block.bucket_access_block" -> "aws_s3_bucket.static-website-bucket";
  "aws_s3_bucket_website_configuration.hosting" -> "aws_s3_bucket.static-website-bucket";
  "aws_s3_object.files" -> "aws_s3_bucket.static-website-bucket";
}

…and the Terraform bad

Terraform has a steeper learning curve compared to the AWS CLI, especially for beginners. And while Terraform automatically manages dependencies between resources, complex dependency graphs can sometimes trip it up. In particular, watch out for circular dependencies: if resource A depends on resource B and vice versa, Terraform cannot resolve the order of operations on its own. You might need some good ol’ manual intervention to sort things out, for example by restructuring the configuration or making the ordering explicit with depends_on, as we did for the bucket policy above.

Terraform tracks infrastructure through its state file, but if changes are made directly in the cloud (outside of Terraform), these modifications are not automatically reflected in Terraform’s state. This is known as drift. Imagine you create an S3 bucket with Terraform, but someone manually updates the bucket’s settings in the AWS Console. Terraform won’t know about this change until the next deployment, which can result in unexpected behavior or even resource overwrites.

Terraform’s state file is a blessing and a curse. If the state file becomes out of sync with the actual cloud environment, or if multiple users are modifying the state simultaneously without proper locking mechanisms, it can lead to conflicts, corruption, or inaccurate representations of resources.
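
A common mitigation for team setups is a remote backend with state locking. Here’s a sketch using the S3 backend with a DynamoDB lock table (the bucket and table names are hypothetical):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"             # hypothetical state bucket
    key            = "static-site/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"                # enables state locking
  }
}
```

With this in place, two engineers running terraform apply at the same time won’t corrupt the state: the second run waits for (or fails on) the lock.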

We won’t get into the details about error handling and reporting, but let’s just say we’re not exactly friends in that regard. Like all tools, Terraform is powerful and flexible, but it’s important to be aware of its limitations.

The TLDR: Key Differences

  1. State Management
    • AWS CLI: The CLI doesn’t track the state of resources. Once you create a bucket, there’s no automatic way of knowing its state or keeping track of changes.
    • Terraform: Terraform keeps track of the infrastructure state in a state file. It knows exactly what resources have been created and can manage changes.
  2. Ease of Use
    • AWS CLI: It’s very simple and easy to use, especially for beginners. The commands are straightforward, and the documentation is highly readable.
    • Terraform: It has a steeper learning curve. The concept of managing everything as code and defining every resource in detail can be overwhelming for beginners.
  3. Fine-Grained Control
    • AWS CLI: You have access to the same APIs that AWS services provide, but without abstraction. Each command directly reflects an API action.
    • Terraform: Terraform provides fine-grained control over all resources as part of its declarative configuration.
  4. Resource Management
    • AWS CLI: Every action has to be manually defined, and there is no built-in way to decouple aspects of services, such as separating resource definitions from configuration details.
    • Terraform: Terraform treats everything as a resource, allowing you to decouple resource creation from configuration. This makes it easy to modularize your infrastructure and apply best practices such as DRY (Don’t Repeat Yourself).
  5. Automation
    • AWS CLI: You would have to write scripts and use other tools (like shell scripts or Python) to automate resource provisioning and handle changes.
    • Terraform: Terraform is inherently designed for automation, especially for infrastructure changes. It works well in CI/CD pipelines, automating deployments with tracking, and ensuring that changes are safe and consistent.

How LocalStack Fits In

LocalStack plays a very important role in testing and running automation tests for Infrastructure as Code (IaC) and infrastructure automation scripts that utilize the AWS CLI. It provides a fully functional local cloud environment that emulates AWS services, allowing developers to test their IaC configurations and AWS CLI scripts without needing access to the actual cloud.

By running tests against LocalStack, you can ensure that your infrastructure code behaves as expected, validate complex automation workflows, and catch potential issues early in the development process, all without incurring cloud costs or risks. This local testing environment also supports integration with CI pipelines, making it an ideal tool for continuous testing and validation of AWS-related automation scripts.

Conclusion: Understand the cloud services you’re working with

If you’re still comparing tools, here’s a quick summary to help you decide which one to use:

AWS CLI is ideal for quick, simple tasks or for users who are just starting with AWS. It’s intuitive and easy to use for one-off actions but lacks advanced features like state tracking and resource management.

Terraform, on the other hand, shines in scenarios where infrastructure needs to be managed at scale, tracked, and modified over time. If you’re building out larger cloud infrastructures or working within a CI/CD pipeline, Terraform’s automation and state management capabilities are invaluable.

I like using them both, depending on the task at hand. At the end of the day, it’s the engineer who needs to know which actions, services, and policies to implement. The tools are there to help you, but they won’t do the work for you.


Anca Ghenade
Developer Relations Engineer at LocalStack
With seven years of experience as a developer, Anca transitioned into the developer advocate role. Freshly based in San Francisco, she's always running—both literally and figuratively—to keep pace with and explore the latest in tech.