
Is AWS's New Kiro IDE a Cursor Killer?

AWS's new agentic IDE, Kiro, brings some significant changes to the status quo of AI-assisted coding IDEs by adding a new, more structured way to build applications using LLMs.


There are a lot of AI-assisted code editors out there – GitHub Copilot, Cursor, Windsurf, Qodo, Claude Code, to name just a few. While each has its pros and cons, they all work in essentially the same way. The majority of AI assistance comes either via suggestions as you code or via a chat with the LLM, which will suggest new code changes or often even make them on your behalf.

In my own experience, this approach can be extremely useful when you need to write boilerplate-style code, remember the syntax for something or fix bugs. Even in these tasks, mistakes are common but usually easy to identify – especially because you are already familiar with the codebase and tools involved. However, when I’ve asked these tools to go beyond small, focused tasks and build more complete applications, they tend to break down, with the results being unusable.

Last week AWS released Kiro, a new AI IDE that aims to solve this last problem by adding more structure to the process of generating an application with AI. Is it enough to dethrone the current champ in the AI coding space, Cursor? I’ll take a look by exploring what makes Kiro different and try to build a “real world” app using it.

What is Kiro?

Kiro is an IDE with a ton of built-in AI features – or, as AWS refers to it, an agentic IDE. If you are familiar with using Visual Studio Code with GitHub Copilot enabled or Cursor, you’ll be familiar with how it works. The AI-enabled auto-complete and chat are a persistent part of the IDE experience.

Part of what I believe Kiro gets right (full disclosure: I was part of pre-release user validation) is that it doesn’t aim to reinvent the wheel. The IDE is based on Code - OSS, the open source project maintained by Microsoft that is the basis of VSCode and Cursor. This means that the IDE experience is not only familiar, but you can also keep your existing project settings and plugins if you’re already using one of these tools. As developers, our IDE is a central part of our daily workflow, so significant UI/UX changes can be a major hurdle to adoption.

In addition, if you use Kiro in “vibe coding” mode (gonna admit that I don’t like this term but 🤷), it feels and behaves like Cursor or GitHub Copilot. That’s not a bad thing, as I’ve found both of these tools helpful.

the Kiro interface looks very similar to VSCode with GitHub Copilot

What Makes Kiro Different?

If everything in Kiro were the same, it wouldn’t really be worth discussing, but Kiro introduces two key features that are unique.

  1. Hooks – I mention this first because, while I think they are useful, the use case is, in my opinion, somewhat niche. In this case, hooks (again, a term I don’t love, but this time because it’s overloaded) are AI workflows that can be triggered by a change in the project. An example is a change to a component that triggers an update to the test file. This may be a feature that becomes more useful over time as you add it to your toolbox and encounter more use cases for it, but, for now, it doesn’t seem to me to be something that will significantly change your current workflow.
  2. Specs – This is the change that I think shows some important innovation and a lot of promise. Rather than “vibe coding” in the typical sense of chatting back and forth with the LLM, specs provide a much more structured and documented approach to building with AI. It takes your prompt and moves through multiple phases from planning to requirements to implementation, creating detailed documentation along the way. Each step provides the opportunity to fine-tune the work the LLM will do with the goal of reducing issues and getting you closer to a usable finished product once the process is complete.

choosing vibe coding or spec coding options in Kiro

Do Specs Work as Promised?

Up front, I will say that my attempts at using specs during pre-release testing yielded surprisingly good results, but the scope of what I was doing was still pretty limited. The fact that it produces documentation along the way was helpful and can, in my opinion, help with the long-term success of using AI-generated code, since it offers you and your coworkers insight into what the LLM did. However, it was also time-consuming for small changes – using chat would get to a similar result in far less time, with a little more manual intervention but without documentation. So its value depends heavily on how valuable you find this documentation.

What I wanted to know, now that it’s released in a public preview, is: can Kiro successfully help build a full application using spec mode? As I mentioned, I have had no success using other tools like GitHub Copilot or Cursor for these more comprehensive tasks. Can Kiro’s structured approach to this succeed where other tools have failed?

TL;DR

What follows below this section is essentially me journaling my coding session with Kiro to build a full-stack serverless application deployed to AWS and, for faster development and testing, LocalStack. It’s long but, if you’re up for reading it, it has a lot of details about what worked, what didn’t and specific stumbling blocks I encountered. You can find the end-result project here on GitHub.

To save you some time, here’s my quick summary:

What worked

  • Kiro did what I felt was a very good job of translating a relatively short application description into detailed user stories, requirements and a development plan.
  • The generated artifacts could be incredibly useful for applications being developed by a team.
  • Kiro did a good job of inferring necessary elements that I had not even specified in my prompt. For example, I wanted list pages, but Kiro added requirements to filter and sort the list based on logical criteria.
  • Kiro automatically adds unit tests and integration tests by default, even if they aren’t specifically requested in the prompt.
  • When making updates (after finishing a step), Kiro will frequently create additional artifacts that are useful for testing changes. For example, when I was struggling to get the deployment working, it made code changes and then created a new test to verify the changes it made to the deployment.
  • Kiro automatically includes CI/CD deployment scripts for you, even though they were never specifically requested. In my case, it generated GitHub Actions workflows to automate my deployment.

That’s all a good start and it’s worth noting before I get to what didn’t work that this is a prerelease product, so rough edges are to be expected.

What didn’t work

  • The finished application did not work. As you’ll see from the live coding journal I did below, I kept running into issues deploying the app and issues with the generated code. Ultimately, for purposes of this demo, I gave up after determining that I needed to do quite a bit of manual refactoring.
  • Where Kiro failed was often a bit surprising given that it is an AWS product. For example, the generated CDK code had circular dependencies that Kiro, despite multiple attempts, could not fix (in fairness, Cursor failed at this too but ChatGPT succeeded). Or it wrote Lambdas in TypeScript but had not accounted for building and deploying them, with their dependencies (it tried pushing raw TypeScript).
  • Kiro repeatedly attempted to run commands in the wrong directory, to the point that I had to carefully monitor this as I walked through the implementation steps. It would change directories in the terminal for one step but then run commands for a later step that also included a cd, which, in most cases, failed or, in some cases, created directories in the wrong location.
  • The Spec coding process takes a long time. What you see below took many hours and, in retrospect, should have taken longer, as I think my chances of success would have improved with more time reviewing and editing the generated artifacts before continuing. The good news is that you can walk away and come back and it will not break the flow, but the processing often took time, so you might walk off and do something else while it cooked only to come back and find it waiting on a prompt. It’s still early, and the service was likely under heavy load when I tested this (they’ve since moved to an invite process due to demand), so that could have been a factor in the speed of responses.
  • This is more of a personal preference, but when using chat to update code, Kiro will automatically accept any changes, so it can be a bit more difficult to see what changes were made. By contrast, Cursor is much clearer in showing you diffs and ensuring you accept the changes it intends to make.

In the end, the biggest issue I see with this spec-driven approach to building a full application using AI generated code is one not specific to Kiro: when there are issues – and there will be issues – you, the developer, are unfamiliar with the specifics of the codebase. It’s your project, built from scratch, but also, at the same time, it’s like your first day on the job being thrown in to fix major bugs. The more code the AI generates, the bigger potential for this to be a very painful problem. Kiro’s planning and implementation specs can help you find your way and even give you the opportunity to step in along the way, but that doesn’t make it easy.

My Verdict

I think the concept and implementation show a ton of promise. Kiro can already do most, if not all, of what you may already use Cursor or GitHub Copilot for. It keeps the IDE UX you may already be familiar and comfortable with and tacks on some meaningful improvements, even if it’s sometimes still a bit rough around the edges.

Keep in mind, this is one test and I know enough about working with AI coding tools to know that it’s a learning process figuring out how to get the best results out of them and understanding their limitations. Perhaps on future iterations I’ll pay closer attention to the specs and review them in greater detail to catch potential stumbling blocks. I could provide more specific guidance around how I want it built and deployed. Maybe I start smaller and build the backend and then the frontend separately so that I can test each piece as it is completed rather than find my way through a maze of errors only at the very end. Could these strategies have mitigated some of the issues I faced? Perhaps. The difference between success and failure with AI coding tools can be unpredictable.

I also expect Kiro to get better and I think, conceptually, it’s already taken a step in the right direction. I believe its structured approach stands a better chance of getting us to a point where, for better or worse, real application code starts as fully AI-generated as opposed to AI being a souped-up autocomplete. It’s definitely worth trying and keeping an eye on.

Live Coding Journal

First some background on what I am building. My wife and I are what you might call “foodies.” We love to go out to eat and try new restaurants, so we’re always on the lookout for new places to try. Right now, I manually keep track of places we’d like to try and places we have already tried in an Obsidian note. For this test, I wanted to see if Kiro could build me a better way to track this.

Planning Phase

I started with the prompt below. I tried to be a bit more explicit in my instructions than I normally would via chat on account of the larger scope of the project, but still tried to keep the instructions open-ended enough to see how the LLM would think through the problem.

```
We need to build a serverless application deployed to AWS that allows individuals to save
and track restaurants that they wish to visit or have visited. The goal is to give foodies
a way to keep track of new restaurants they wish to try and keep details about their
experiences at the restaurant once they've tried it.
The restaurant should have a name, location, cuisine (this should allow them to choose from
a list of common cuisine types) and description. All of these fields other than name should
be optional to allow them to add things quickly.
The user should be able to review the restaurant by giving it stars (from 0 to 5 with 5
being the best) and add some text notes to the review. They should be able to add multiple
text notes but only add one star rating. However, the star rating should be adjustable
based on subsequent visits.
The user should be able to create an account with just an email using passwordless auth.
Once logged in they should be able to view any restaurants they've added with details on
location, cuisine and whether they've visited/reviewed this restaurant or not and they
should be able to add new restaurants. Clicking on the restaurant will give full details
including the description and any reviews they added. They can add additional review notes
from this point and add or adjust their star rating.
```
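Before looking at what Kiro generated, it’s worth noting how much data modeling is implied by those few paragraphs. As a rough illustration (my own sketch, not Kiro’s output), the entities described in the prompt might look like this in TypeScript:

```typescript
// Illustrative data model for the prompt above -- my own sketch, not Kiro's code.
type Cuisine = "Italian" | "Mexican" | "Japanese" | "Indian" | "Thai" | "Other"; // abbreviated list

interface Review {
  stars: number;   // single rating, 0-5, adjustable on later visits
  notes: string[]; // multiple text notes allowed
}

interface Restaurant {
  name: string;        // the only required field, per the prompt
  location?: string;
  cuisine?: Cuisine;
  description?: string;
  review?: Review;     // present once the user has visited
}

// Record a visit: append a note but *replace* the single star rating.
function addVisit(r: Restaurant, stars: number, note: string): Restaurant {
  if (stars < 0 || stars > 5) throw new Error("stars must be between 0 and 5");
  const notes = r.review ? [...r.review.notes, note] : [note];
  return { ...r, review: { stars, notes } };
}
```

Note the prompt’s constraints showing up directly in the types: only name is required, notes accumulate, and there is exactly one star rating that gets overwritten rather than appended.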

The generated requirements are very long, so I will only show a snippet of them here, but I have pasted the full requirements on GitHub. It’s worth noting that the generated application was placed into a restaurant-tracker subfolder. The main project folder contained only an additional .kiro folder where all the artifacts were placed.

completing the planning phase of spec coding

```markdown
## Introduction
The Restaurant Tracker is a serverless application deployed to AWS that allows food
enthusiasts to save and track restaurants they wish to visit or have already visited. Users
can maintain a personal collection of restaurants with details such as name, location,
cuisine type, and description. After visiting a restaurant, users can add reviews with star
ratings and text notes to document their experiences. The application provides a simple way
for foodies to organize their dining experiences and discover new places to try.
## Requirements
### Requirement 1: User Authentication
**User Story:** As a food enthusiast, I want to create an account with just my email address
using passwordless authentication, so that I can securely access my restaurant collection
without remembering another password.
#### Acceptance Criteria
1. WHEN a user visits the application for the first time THEN the system SHALL present an
option to sign up or log in.
2. WHEN a user enters their email address for signup THEN the system SHALL send a magic link
to that email.
3. WHEN a user clicks the magic link THEN the system SHALL authenticate the user and create
an account if one doesn't exist.
4. WHEN a user is authenticated THEN the system SHALL maintain their session across visits
until explicitly logged out.
5. WHEN a user attempts to access protected features without authentication THEN the system
SHALL redirect them to the login page.
```

If you look at the full specs, you’ll notice that it added some additional details that I had not requested but that are important and meaningful requirements. For example, in the excerpt above you may notice that it doesn’t just think about login, but handling session timeouts and redirection. Another example is that it included the ability to filter restaurants in the list view, which I had not specified. It also correctly chose the appropriate AWS services needed to deploy this application.

It’s worth emphasizing that I can choose to make changes to the requirements at this stage before moving to the next phase simply by editing the document. For the purposes of this demo (and because they seemed correct), I decided to proceed with the requirements as is.

Design Phase

The next phase involves creating a design document from the requirements. This is not the look and feel of the app, but things like the architecture, components, data model, error handling, testing, security, deployment and so on. Once again, this document is very long but I am including an excerpt below as well as a screenshot of the architecture diagram it included. You can find the full design document on GitHub here.

````markdown
## Overview
The Restaurant Tracker is a serverless application that allows users to track restaurants
they want to visit or have visited, along with reviews and ratings. The application will be
built using AWS serverless technologies to ensure scalability, cost-effectiveness, and
minimal maintenance requirements.
This design document outlines the technical approach, architecture, data models, and
component structure for implementing the Restaurant Tracker application based on the
requirements.
## Architecture
The application will follow a serverless architecture pattern using AWS services:
```mermaid
graph TD
Client[Web Client] --> CloudFront[Amazon CloudFront]
CloudFront --> S3[Amazon S3 - Static Website]
Client --> APIGateway[Amazon API Gateway]
APIGateway --> Lambda[AWS Lambda Functions]
Lambda --> DynamoDB[Amazon DynamoDB]
Lambda --> Cognito[Amazon Cognito]
subgraph "Frontend"
S3
CloudFront
end
subgraph "Backend"
APIGateway
Lambda
DynamoDB
Cognito
end
```
### Key Components:
1. **Frontend**:
- Single Page Application (SPA) built with React
- Hosted on Amazon S3 and distributed via CloudFront
- Responsive design for mobile and desktop use
2. **Authentication**:
- Amazon Cognito for passwordless email authentication
- JWT tokens for maintaining user sessions
3. **API Layer**:
- Amazon API Gateway for RESTful API endpoints
- API Gateway authorization using Cognito User Pools
4. **Business Logic**:
- AWS Lambda functions for handling business logic
- Separate functions for different domains (auth, restaurants, reviews)
5. **Data Storage**:
- Amazon DynamoDB for NoSQL data storage
- Optimized access patterns for user-restaurant relationships
6. **Infrastructure**:
- AWS CDK for infrastructure as code
- CI/CD pipeline for automated deployments
````
the proposed architecture Kiro created

Once again there is a great deal of detail in this document. What I like about the approach Kiro is taking here is that, by moving from one phase to the next, it is able to infer a lot of technical detail from what was essentially a relatively vague, four-paragraph prompt. The flip side of this (and I’m adding this in retrospect), is that the amount of documentation means that you should probably spend a good deal of time reviewing it and poking holes rather than just accepting what the LLM generates and hoping for the best. As with the prior phase, if you want to make changes, simply edit the document as needed.

Implementation Phase

Now that I’ve approved Kiro’s design document, it’s time to move to creating an implementation plan. This phase is a little different as the artifact it creates has interactive elements. First, let’s look at a snippet of the implementation plan Kiro created, which is broken up into 10 steps, each with multiple sub-steps (you can find the full task list here on GitHub).

```markdown
# Implementation Plan
- [x] 1. Set up project infrastructure
- [x] 1.1 Initialize frontend React application
- Create React application with TypeScript
- Set up routing with React Router
- Configure authentication context
- _Requirements: 1.1, 1.4_
- [x] 1.2 Set up AWS CDK project
- Initialize CDK project with TypeScript
- Define basic stack structure
- Configure environment variables
- _Requirements: 6.1, 6.6_
- [x] 1.3 Create CI/CD pipeline configuration
- Set up GitHub Actions workflow
- Configure deployment stages
- Add testing steps to pipeline
- _Requirements: 6.6_
- [x] 2. Implement authentication system
- [x] 2.1 Set up Amazon Cognito user pool
- Configure user pool for email authentication
- Set up app client
- Configure passwordless authentication
- _Requirements: 1.1, 1.2, 1.3, 6.3_
- [x] 2.2 Create authentication Lambda functions
- Implement login request handler
- Implement token verification
- Create user profile on first login
- _Requirements: 1.2, 1.3, 1.4_
- [x] 2.3 Implement frontend authentication components
- Create login/signup form
- Implement authentication context provider
- Add protected route component
- _Requirements: 1.1, 1.4, 1.5_
```

I will note that, before I began the actual implementation, I modified step 9 for deployment to configure it to deploy to LocalStack for testing rather than a dev environment on AWS. As we’ll see later, it nonetheless created plans for both LocalStack for testing and AWS for production.

As I mentioned, the artifact Kiro creates here is interactive. As you can see in the screenshot below, each step and sub-step has a “start task” option above it. You can click on this to start the implementation of the task and all subtasks or you can click on each subtask one-by-one. Additionally, you can just proceed by using the chat.

Finishing the implementation phase in spec coding in Kiro

You’ll need to monitor the progress as Kiro will regularly stop for permission. This permission is entirely focused on console commands. Kiro will not ask permission to make file changes.

Kiro requests permission only to run commands

Some Stumbling Blocks

A note in retrospect: you will probably have a bit more success if you take these sub-steps one by one and verify the results before moving on. Additionally, as I was implementing these steps, I regularly ran into errors because Kiro had changed directory in the terminal for a prior step and was thus trying to run commands in the wrong directory.

Kiro occasionally tried to run commands in the wrong folder

I learned to watch for this issue but it still led to multiple errors if I wasn’t paying close attention before running a command. Taking these steps individually may have helped to resolve that issue as well.

It’s worth pointing out that if you follow this process along for a substantial set of tasks, as I did, eventually you may run into a limit on the context window. This is especially true if you encounter errors and start asking Kiro to fix issues you encounter. In my limited experience, it’s worth making these changes in a separate chat tab (using the vibe coding option) to avoid running into this limit in your main spec-coding session.

In spec coding, you may more easily run into limits on the context window

Part of why I hit this limitation is likely because there were a number of errors during the task implementation, particularly around building the React frontend. Kiro made a number of errors when writing the frontend code but was able to course correct with some guidance. The other good news is that, after pausing to correct issues in another chat session, it understood and tracked that some portions were already done, so my progress was maintained.

Also, to be clear, the implementation phase was a very time-consuming process. Considering all that Kiro was doing, perhaps that’s reasonable (and I reiterate that it is early days and the services that support Kiro’s AI were clearly under heavy load), but I completed the implementation, monitoring for when it was awaiting my input, over the course of hours.

Initial Development Completed

Finally, once it finished, it came up with these deployment instructions for LocalStack:

```shell
# Start LocalStack
localstack start

# Deploy backend to LocalStack
cd restaurant-tracker/backend
./deploy-localstack.sh

# Verify deployment
./verify-deployment.sh

# Deploy frontend
cd ../frontend
USE_LOCALSTACK=true ./deploy.sh

# Run end-to-end tests
cd ..
USE_LOCALSTACK=true ./e2e-test.sh
```

…and for AWS:

```shell
# Deploy backend
cd restaurant-tracker/backend
npm run deploy:dev # or npm run deploy:prod for production

# Deploy frontend (with environment variables set)
cd ../frontend
./deploy.sh

# Run end-to-end tests
cd ..
./e2e-test.sh
```

Let’s Deploy This Thing!

Given Kiro’s success in fleshing out the requirements, design documents and tasks, I was quite hopeful when I was ready to run the deployment scripts. Yes, there were a few bumps along the way of completing the tasks, but nothing dramatic.

Unfortunately, when trying to deploy the CDK backend to LocalStack, I received the following error about CloudFormation circular dependencies:

```
Deploying the backend using the deploy-localstack.sh file encounters the following error when creating the CloudFormation changeset:
2025-07-16T14:00:27.382 ERROR --- [et.reactor-3] l.aws.handlers.logging     : exception during call chain: Circular dependency found.
2025-07-16T14:00:27.382  INFO --- [et.reactor-3] localstack.request.aws     : AWS cloudformation.CreateChangeSet => 500 (InternalError)
```

This isn’t an uncommon error, especially when dealing with IAM and Cognito. At first I decided to give Kiro the opportunity to fix it via a vibe coding chat session (remember, I ran out of context window for my spec coding session). I asked Kiro to resolve it and it did seem to recognize the problem.

Looking at the backend-stack.ts file, I can see a potential circular dependency in how the
Lambda execution role and Lambda functions are set up. The issue is likely related to how
permissions are granted between resources.

In response it made a number of changes – again, without confirming the changes first. This is not something I personally love. I found it hard to tell exactly what it did. Here’s what it came up with:

I've fixed the circular dependency issue in your CloudFormation template. Here's a summary
of the changes:
Root Cause: The circular dependency was caused by how permissions were granted to Lambda
functions. Each Lambda function was individually granted permissions to the DynamoDB table,
creating circular references in the CloudFormation template.
Solution: I simplified the permission model by:
Granting all necessary DynamoDB permissions to the Lambda execution role upfront
Removing the individual permission grants to each Lambda function
This breaks the circular dependency by ensuring that the DynamoDB table doesn't have
resource policies that directly reference the Lambda functions
Additional Improvements:
Fixed a deprecation warning for S3Origin in the CloudFront distribution
Enhanced the e2e-test.sh script with better error handling and validation
Created a test-deploy.sh script to simplify testing the deployment
Added documentation explaining the circular dependency issue and its solution
Testing: You can test the fix by:
Running the test-deploy.sh script to deploy the stack to LocalStack
Verifying that the deployment completes successfully without circular dependency errors
Running the e2e-test.sh script with USE_LOCALSTACK=true to test the deployed application
The changes are minimal and focused on resolving the circular dependency while maintaining
the same functionality. The documentation I've added will help anyone who encounters similar
issues in the future.

This sounded like it understood the issue, but the fixes didn’t work. This is a perfect example of where AI coding tools can get you into errors that are hard to fix, especially when you are not fully aware of the code they wrote. Circular dependencies like this, while not uncommon, are very difficult to spot. A couple more attempts to get Kiro to fix the issue did not yield better results.

Bringing the issue back to Cursor

I was curious if Cursor would have better success where Kiro failed. I gave it a very similar prompt:

This CDK script is giving errors for circular dependencies during the deployment phase that
are preventing it from deploying, can you identify what is causing this issue?

This was the response summary of what it found and did:

```markdown
## The Problem
The circular dependency occurred because:
1. lambdaExecutionRole was used by Lambda functions
2. Later, Cognito permissions were added to the same lambdaExecutionRole that referenced
the User Pool ARN
3. The User Pool's lambda configuration referenced those same Lambda functions
4. This created a cycle: Role → User Pool → Lambda Functions → Role
## The Solution
I broke the circular dependency by:
1. Removed the problematic code that added Cognito permissions to the shared
lambdaExecutionRole using addToPolicy()
2. Added specific permissions directly to the individual Lambda functions that need Cognito
access:
- createAuthChallengeFn gets permissions for user management operations
- initiateAuthFn gets permissions for authentication operations
1. Proper timing - Added permissions after each Lambda function is created and after the
User Pool exists
## Why This Works
- The lambdaExecutionRole no longer has any dependency on the User Pool
- Lambda functions get their Cognito permissions independently via addToRolePolicy()
- The User Pool can safely reference the Lambda functions without creating a cycle
- All functionality is preserved - each Lambda function still has the exact permissions it
needs
Your CDK stack should now deploy successfully without circular dependency errors. The
authentication flow will work exactly the same, but the infrastructure dependencies are
properly structured.
```

In Kiro’s defense, Cursor’s changes didn’t work either. Another iteration with Cursor where I provided the full stack trace didn’t seem to help.

ChatGPT to the rescue

As a last resort, I tried ChatGPT with similar prompts and – guess what!? – it found the problem…

Here's the problematic loop:
- Cognito User Pool references the **Lambda functions' ARNs** (to use as triggers).
- Lambda functions require a role (`lambdaExecutionRole`) with access to **Cognito**
(`userPool.userPoolArn`).
- That role is created **before** the Cognito User Pool, so CDK/CloudFormation must
**resolve the user pool ARN** in order to create the role.
- But the User Pool itself **can’t be fully created** until those Lambda ARNs are known and
permissions are attached — **deadlock**.

…and provided a solution which I fed back into Kiro with the following prompt.

We need to do the following: Instead of including Cognito permissions in the shared
lambdaExecutionRole, define separate roles for Cognito-auth Lambdas, or defer
addToRolePolicy() until after both the Lambdas and User Pool are fully defined.

With the ChatGPT provided fix, Kiro implemented the necessary changes and it seemed to deploy fine (more on that later). At this point, I decided to start skipping their overly complex deployment scripts and just used cdklocal bootstrap and cdklocal deploy. The rest of their shell script steps are really unnecessary.
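The deadlock ChatGPT describes is, at its core, a cycle in the resource dependency graph. A toy sketch of that reasoning – the resource names mirror the explanation above, but the code is purely illustrative and not the actual CDK stack:

```typescript
// Toy model of the CloudFormation dependency cycle: Role -> UserPool -> Lambda -> Role.
// Simple depth-first search with a "currently visiting" set to detect back-edges.
type Graph = Record<string, string[]>;

function hasCycle(graph: Graph): boolean {
  const visiting = new Set<string>(); // nodes on the current DFS path
  const done = new Set<string>();     // nodes fully explored
  const visit = (node: string): boolean => {
    if (visiting.has(node)) return true; // back-edge found: cycle
    if (done.has(node)) return false;
    visiting.add(node);
    for (const dep of graph[node] ?? []) {
      if (visit(dep)) return true;
    }
    visiting.delete(node);
    done.add(node);
    return false;
  };
  return Object.keys(graph).some((node) => visit(node));
}

// Before the fix: the shared role's policy references the user pool's ARN.
const before: Graph = {
  LambdaExecutionRole: ["UserPool"],       // role policy needs userPool.userPoolArn
  UserPool: ["InitiateAuthFn"],            // user pool triggers reference the Lambda
  InitiateAuthFn: ["LambdaExecutionRole"], // Lambda uses the shared role
};

// After the fix: Cognito permissions attach to each function's own role instead,
// so the shared role no longer depends on the user pool.
const after: Graph = {
  LambdaExecutionRole: [],
  UserPool: ["InitiateAuthFn"],
  InitiateAuthFn: ["LambdaExecutionRole"],
};

console.log(hasCycle(before)); // true
console.log(hasCycle(after));  // false
```

The suggested fix works precisely because it removes the Role → UserPool edge: each Lambda gets its Cognito permissions after both the function and the user pool exist, leaving the graph acyclic.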

On to the frontend!

Now that the backend is ready (foreshadowing: it isn’t!), let’s see if we can deploy the frontend.

The first stumbling block was a bizarre syntax error in one of the frontend list pages that initially caused a failure. If this were hand coded I’d say it seemed almost like a copy/paste error, but everything so far has been done by Kiro. It was an easy fix though.

The next error was a TypeScript compilation error:

```
Failed to compile.
TS18046: 'error' is of type 'unknown'.
  130 | return {
  131 | success: false,
> 132 | message: error.message || 'Failed to initiate authentication',
      | ^^^^^
  133 | };
  134 | } finally {
  135 | setIsLoading(false);
```

Once I located the file that was causing this, Kiro was able to correct it via a vibe coding chat. At this point it compiled and deployed. 🥳
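For anyone hitting the same TS18046 error: since TypeScript 4.4, a catch-clause variable is typed `unknown` (under `useUnknownInCatchVariables`), so it must be narrowed before you can touch `.message`. A minimal reconstruction of the fix – my own illustration, not the exact code in the generated app:

```typescript
// TS18046 fix: narrow the `unknown` catch variable before reading `.message`.
function getErrorMessage(err: unknown): string {
  return err instanceof Error ? err.message : String(err);
}

// Hypothetical stand-in for the app's auth helper that triggered the error.
function initiateAuth(shouldFail: boolean): { success: boolean; message?: string } {
  try {
    if (shouldFail) throw new Error("Failed to initiate authentication");
    return { success: true };
  } catch (error) {
    // `error` is `unknown` here; `error.message` alone would not compile
    return { success: false, message: getErrorMessage(error) };
  }
}
```

The `instanceof Error` check is the idiomatic narrowing; `String(err)` covers the (legal in JavaScript) case of throwing a non-Error value.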

Let’s Give the App a Go!

Alright, the deployments worked. I could see the resources created on LocalStack and open the app in the browser. Success!

The restaurant tracker app home page in the browser

But, once I try to log in, it fails 😩:

the sign in failed

Checking the logs, I see the following:

```
2025-07-16T19:55:41.715  WARN --- [et.reactor-9] l.s.l.i.executor_endpoint  : Execution environment startup failed: {"errorType":"Runtime.ImportModuleError","errorMessage":"Error: Cannot find module 'initiate-auth'\nRequire stack:\n- /var/runtime/index.mjs","trace":["Runtime.ImportModuleError: Error: Cannot find module 'initiate-auth'","Require stack:","- /var/runtime/index.mjs","    at _loadUserApp (file:///var/runtime/index.mjs:1109:17)","    at async UserFunction.js.module.exports.load (file:///var/runtime/index.mjs:1148:21)","    at async start (file:///var/runtime/index.mjs:1335:23)","    at async file:///var/runtime/index.mjs:1342:1"]}
2025-07-16T19:55:41.715  INFO --- [et.reactor-9] localstack.request.http    : POST /_localstack_lambda/e0bff364409b57a1152358e4beb094b9/status/e0bff364409b57a1152358e4beb094b9/error => 202
2025-07-16T19:55:41.715  WARN --- [et.reactor-7] l.s.l.i.execution_environm : Failed to start execution environment e0bff364409b57a1152358e4beb094b9: Environment startup failed
2025-07-16T19:55:41.715  WARN --- [et.reactor-7] l.s.l.i.execution_environm : Execution environment e0bff364409b57a1152358e4beb094b9 for function arn:aws:lambda:us-east-1:000000000000:function:RestaurantTrackerStack-dev-InitiateAuth1E6ED934-d84c7ed1:$LATEST failed during startup. Check for errors during the startup of your Lambda function.
2025-07-16T19:55:41.820  INFO --- [et.reactor-5] l.s.a.n.e.h.gateway_except : Error raised during invocation: DEFAULT_5XX
```

The issue is that Kiro wrote the Lambdas in TypeScript but deployed the raw source without building it first. Basically, the Lambda resource existed, but there was no runnable code in it.

I prompted Kiro to fix this, and it created a shell script to build the files. The only problem was that the build reported 33 errors. I fed those errors back into Kiro, and it fixed 6 of them before hitting an unexpected error and stopping (my assumption is that this was a load-related issue on Kiro's backend). A second run fixed all of the remaining build issues.

One Last Try

OK, the Lambdas built using Kiro's build script. This should work now, right? RIGHT!?! Nope, we got a new error:

Runtime.ImportModuleError","errorMessage":"Error: Cannot find module 'aws-sdk'\nRequire stack:\n- /var/task/initiate-auth.js\n- /var/runtime/index.mjs","trace":["Runtime.ImportModuleError: Error: Cannot find module 'aws-sdk'","Require stack:","- /var/task/initiate-auth.js","- /var/runtime/index.mjs","    at _loadUserApp (file:///var/runtime/index.mjs:1109:17)","    at async UserFunction.js.module.exports.load (file:///var/runtime/index.mjs:1148:21)","    at async start (file:///var/runtime/index.mjs:1335:23)","    at async file:///var/runtime/index.mjs:1342:1"]}

This one is fairly obvious, even to me. Kiro's script was compiling the TypeScript to JavaScript but not bundling the dependencies. These errors (missing the build step and not bundling dependencies) seem like fairly obvious ones, and I'm a bit surprised they were overlooked. The thing that's easy to forget when using these tools is that LLMs are probabilistic engines – they don't understand what they are doing.
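The usual fix for this class of error is to bundle each handler together with its dependencies into a single self-contained file, for example with esbuild. A sketch of what a build step like Kiro's could have done instead (the input and output paths here are assumptions; the file name matches the one in the error above):

```shell
# Compile AND bundle the handler: esbuild inlines node_modules
# dependencies (like aws-sdk) into one self-contained .js file,
# so nothing is left to resolve at runtime inside /var/task.
npx esbuild lambda/initiate-auth.ts \
  --bundle \
  --platform=node \
  --target=node18 \
  --outfile=dist/initiate-auth.js
```

With `--bundle`, the output no longer issues a bare `require('aws-sdk')` at runtime, which is exactly the call that was failing in the Lambda environment.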

I’ve described LLMs as a junior developer in their first week at work, except they are always that way and never gain knowledge or experience (outside of what’s in their limited context window). Kiro recognized the need to build the TypeScript files once I explained it, but it failed to infer the additional step that would make those files actually work (i.e., bundling the dependencies).

At this point I gave up on having the AI make the changes and stepped in manually. I started converting the functions to the NodejsFunction construct in the AWS CDK, which compiles and bundles the Lambda code from TypeScript. Unfortunately, I realized there was quite a bit of manual refactoring I’d need to do to get everything actually functioning (and this is just a blog post demo), so I ended it here.
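For reference, this is the direction I was headed: NodejsFunction runs esbuild at synth time, so the CDK itself compiles and bundles each TypeScript handler, sidestepping both of the failure modes above. A sketch, with the construct ID and entry path as assumptions:

```typescript
import { Runtime } from "aws-cdk-lib/aws-lambda";
import { NodejsFunction } from "aws-cdk-lib/aws-lambda-nodejs";
import { Construct } from "constructs";

// Called from within the stack: NodejsFunction invokes esbuild
// under the hood, compiling the TypeScript entry file and bundling
// its dependencies into the deployed asset -- no separate build
// script needed.
export function addAuthFunction(scope: Construct): NodejsFunction {
  return new NodejsFunction(scope, "InitiateAuth", {
    entry: "lambda/initiate-auth.ts", // assumed path to the handler source
    handler: "handler",
    runtime: Runtime.NODEJS_18_X,
  });
}
```

Because bundling happens per handler at synth time, a missing dependency shows up as a build failure on your machine rather than a Runtime.ImportModuleError in the Lambda logs.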

Final Thoughts

Kiro’s spec-driven process, and its success at creating what appeared to be very accurate specs, may have unreasonably raised my expectations that it could implement them without careful attention on my part. In retrospect, expecting Kiro to complete such a large task without careful supervision and manual validation at each step was a mistake.

As I mentioned earlier in the post, the spec-driven process is time consuming, even without this supervision and validation. The step-by-step process and artifacts give you the opportunity to make clarifications and corrections as needed; they allow you to create a comprehensive plan while still moving forward deliberately. The AI will still absolutely make mistakes – many of them. Some of these mistakes will be mind-numbingly stupid (like not compiling the TypeScript), but the key to using any AI coding tool is understanding these limitations and working around them. It’s obviously early days, but Kiro still feels like a step in the right direction – the structured process is useful – and I’m hopeful to see how it matures.


Brian Rinaldi
Developer Relations Lead at LocalStack
Brian Rinaldi leads the Developer Relations team at LocalStack. Brian has over 25 years of experience as a developer – mostly for the web – and over a decade in Developer Relations for companies like Adobe, Progress Software, and LaunchDarkly.