The 7 steps between 'kubectl apply' and a running pod

You've run kubectl apply a thousand times. But what actually happens? This post breaks down the full Kubernetes deployment lifecycle (every component, every handoff) and shows you how to watch it happen live with LocalStack.

You’ve typed it a thousand times: kubectl apply. One command, a beat of silence, then deployment.apps/nginx-demo configured. And your pod just… shows up somewhere.

But what happened in between? If you’ve ever had to debug a broken deploy, you know that “it just works” isn’t a satisfying answer when it doesn’t.

Let’s demystify the whole thing.

First, meet the cast

Kubernetes is split into two planes:

  1. The control plane is the brain that makes all the decisions.
  2. The data plane is where your actual workload lives on the worker nodes.

In the control plane, four components do all the heavy lifting:

  1. The API Server is the bouncer. Every single thing in Kubernetes (your kubectl commands, internal components talking to each other, all of it) goes through the API Server. Nothing talks directly to anything else.
  2. etcd is the database. It’s a distributed key-value store that holds your cluster’s desired state, every object and every config. Without etcd, no component in the cluster knows what should be running.
  3. The Scheduler watches for pods that don’t have a node yet and decides where to place them.
  4. The Controller Manager is a bundle of control loops (deployment controller, ReplicaSet controller, and others) constantly watching for drift between what etcd says should exist and what’s actually running, then “un-drifting” it.

On the data plane, each worker node runs a kubelet (the local agent that takes orders from the control plane), a container runtime like containerd, and kube-proxy for networking.
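If you want to see the cast in the flesh: on clusters that run the control plane as pods (kubeadm-style setups do; EKS hides it from you), a look at the kube-system namespace lists the API Server, etcd, the Scheduler, and the Controller Manager, plus a kube-proxy per node. A quick sketch, assuming your kubeconfig points at such a cluster:

Terminal window
kubectl get pods -n kube-system
kubectl get nodes -o wide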

If you’re on EKS: AWS manages the entire control plane for you. Worker nodes, however, are your responsibility. That’s the deal.

The 7-step lifecycle

Here’s exactly what happens when you run kubectl apply.

Step 1: You hit send.

kubectl reads your YAML, serializes it to JSON, and POSTs it to the API Server. It knows where to find the API Server from your ~/.kube/config file, which has your cluster endpoint and credentials.
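You don't have to take that on faith: kubectl's verbosity flag prints the HTTP traffic it generates. At -v=8 it logs the request URL, the method, and the (truncated) body it sends to the API Server, and it works with any kubectl command:

Terminal window
kubectl apply -f deployment.yaml -v=8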

Step 2: The API Server does its checks.

Authentication first (are you who you say you are?), then authorization (are you allowed to do this?), then schema validation (is this a well-formed Deployment object?). All three have to pass. If they do, the API Server writes the object to etcd and returns a success response.

Here’s the key insight: the moment that write to etcd completes, Kubernetes considers the desired state stored. Everything else from here is just the system converging toward it.
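The authorization half of that gate is easy to probe before you ever apply anything. kubectl auth can-i asks the API Server whether your current credentials would clear Step 2 for a given verb and resource:

Terminal window
kubectl auth can-i create deployments --namespace default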

Step 3: The Controller Manager wakes up.

It watches the API Server via a long-running HTTP connection (the list/watch API) that gets notified of changes, no polling required. When it sees your new deployment object, the deployment controller checks whether a matching ReplicaSet exists. If not, it creates one.
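You can verify that handoff after the fact through the ownership chain: the ReplicaSet carries an ownerReference pointing back at the Deployment that spawned it. A sketch, using the nginx-demo labels from the manifest later in this post:

Terminal window
kubectl get rs -l app=nginx-demo \
  -o jsonpath='{.items[0].metadata.ownerReferences[0].kind}'
# prints: Deployment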

Step 4: The ReplicaSet controller creates Pod objects.

Say your spec wants 3 replicas. The controller sees zero pods and writes 3 Pod objects to the API Server. These pods are in Pending state with no node assigned yet. They’re just intentions at this point.
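You can catch pods in this in-between state. With -o wide, the NODE column reads <none> until the Scheduler does its job in Step 5:

Terminal window
kubectl get pods -o wide --field-selector=status.phase=Pending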

Step 5: The Scheduler picks nodes.

It’s also watching the API Server and sees the unscheduled pods. It first filters out nodes the pod can’t run on (insufficient CPU or memory, taints nothing tolerates), then scores the remaining candidates on things like resource fit and affinity rules, and picks the best one. Then it writes the nodeName back to each pod’s spec in the API Server. That’s it. The Scheduler doesn’t start anything. It just labels the pod with a destination.
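That write is plainly visible on the pod object afterward. For any pod (name hypothetical here), this reads back the Scheduler's single contribution:

Terminal window
kubectl get pod <pod-name> -o jsonpath='{.spec.nodeName}'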

Step 6: The kubelet gets to work.

Each worker node’s kubelet watches the API Server for pods assigned to its node. When it sees one, it calls the container runtime (containerd on most clusters), pulls the image from the registry, mounts volumes, sets up the network namespace, and starts the container.
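The kubelet reports each of these sub-steps as events on the pod, so you can replay the sequence (Pulling, Pulled, Created, Started) for one pod by name:

Terminal window
kubectl get events --field-selector involvedObject.name=<pod-name>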

Step 7: The pod becomes Ready.

Kubernetes doesn’t route traffic to the pod right away. First it waits for readiness probes to pass. Once they do, the EndpointSlice controller adds the pod to the Service’s EndpointSlice. kube-proxy is watching for EndpointSlice changes and reacts by updating the iptables rules on every node so that Service traffic can actually reach it.
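This last handoff is inspectable too. EndpointSlices that belong to a Service carry the kubernetes.io/service-name label, so for the nginx-demo Service you'll define later in this post:

Terminal window
kubectl get endpointslices -l kubernetes.io/service-name=nginx-demo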

That’s what happened between kubectl apply and “configured.”

When things break, this is your map

This mental model is most useful in the middle of the night when something is wrong. Once you know which step failed, you know which component to blame; a quick triage sequence follows the list below.

  • RBAC error means Step 2 failed. The API Server authorization check didn’t pass. Check your ClusterRoleBindings.
  • Pod stuck in Pending is a Step 5 problem. The Scheduler can’t find a valid node, usually due to resource constraints or a taint that nothing tolerates. Run kubectl describe pod and it’ll tell you exactly why.
  • ImagePullBackOff is Step 6. The kubelet tried to pull the image and failed, either because the name is wrong or ECR credentials aren’t set up correctly.
  • CrashLoopBackOff is also Step 6. The image pulled fine, but the process inside keeps exiting. Start with kubectl logs.
  • Readiness probe failing is Step 7. Your app is running but not healthy yet, or the probe is pointed at the wrong port or path.
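A minimal triage sequence that covers most of the list above (pod name hypothetical):

Terminal window
kubectl describe pod <pod-name>      # scheduling and kubelet events (Steps 5-6)
kubectl logs <pod-name> --previous   # output of the last crashed container
kubectl get events -n default --sort-by=.lastTimestamp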

Try it yourself with LocalStack

The best way to make this mental model stick is to watch it happen. LocalStack lets you run AWS services locally, including EKS, so you can do this without touching a real AWS account.

Note: EKS support requires a LocalStack Pro account.

Create a local EKS cluster:

Terminal window
awslocal eks create-cluster \
  --name demo-cluster \
  --role-arn arn:aws:iam::000000000000:role/eks-role \
  --resources-vpc-config '{}'

Wait for the cluster to become active:

Terminal window
awslocal eks wait cluster-active --name demo-cluster

Add a node group so pods can actually be scheduled:

Terminal window
awslocal eks create-nodegroup \
  --cluster-name demo-cluster \
  --nodegroup-name demo-nodes \
  --node-role arn:aws:iam::000000000000:role/eks-node-role \
  --subnets subnet-abc123

Point kubectl at your local cluster:

Terminal window
awslocal eks update-kubeconfig --name demo-cluster

LocalStack runs a single control-plane node under the hood. Remove the taint so pods can schedule on it:

Terminal window
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-

Now take a look at the deployment we’ll be using. Go ahead and create this deployment.yaml file in a directory you can access.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
  namespace: default
spec:
  selector:
    app: nginx-demo
  ports:
    - port: 80
      targetPort: 80

The readiness probe means you’ll see pods go through Running before they hit Ready, which is Step 7 playing out in real time.

Apply it:

Terminal window
kubectl apply -f <your-directory>/deployment.yaml

This is the moment everything kicks off. The API Server validates your request and writes to etcd, and from that point the cluster starts converging toward your desired state on its own.

To watch that happen in real time, run this in your terminal:

Terminal window
kubectl get events --watch -n default

Leave it running and you’ll see each step land as it happens: the controllers creating the ReplicaSet and its pods, the Scheduler assigning them to nodes, then the kubelet pulling the image and starting the containers. The REASON column maps almost exactly to the 7 steps we covered above.
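To isolate Step 7 specifically, add a pod watch in another tab. STATUS flips to Running at Step 6, but the READY column stays 0/1 until the readiness probe passes a few seconds later:

Terminal window
kubectl get pods -n default -w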

Once everything is up, confirm your pods are running:

Terminal window
kubectl get pods -n default

Now for the interesting part. Open a second terminal tab and delete one of your pods while the events watch is still running in the first:

Terminal window
kubectl delete pod <pod-name>

Watch what happens in the first tab. The ReplicaSet controller detects that the actual replica count dropped below 3, immediately creates a replacement pod, and the Scheduler assigns it back to the node. The whole thing resolves in a few seconds with zero intervention from you. That’s the reconciliation loop doing exactly what it’s supposed to do.
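The same recovery is visible one level up. A watch on the ReplicaSet (using the label from the manifest above) shows its DESIRED/CURRENT/READY counts dip below 3 and climb right back:

Terminal window
kubectl get rs -l app=nginx-demo -n default -w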

The mental model to keep

Look back at the flow: you → API Server → etcd → controllers → Scheduler → kubelet → running pod. That’s the spine of Kubernetes.

Three ideas underpin all of it:

  1. Everything is declarative. You never tell Kubernetes what to do. You tell it what you want, and it figures out how to get there.
  2. etcd is the source of truth. Every component in the system talks to etcd through the API Server. Nothing acts on direct commands. Everything reacts to state changes.
  3. Kubernetes is a control loop. Watch, diff, act. Every controller does this, forever. That’s why Kubernetes can self-heal: if state drifts from what’s desired, something will notice and fix it (the sketch after this list shows the diff half from the client side).
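You can run that diff yourself before changing anything: kubectl diff asks the API Server to compare your local manifest against the live state stored in etcd:

Terminal window
kubectl diff -f deployment.yaml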

The weird behaviors, the self-healing, the way things just sort themselves out: it’s all the same loop running over and over again.

The mental model only really lands when you break something on purpose. Spin up LocalStack, get your pods running, then kill a pod and watch the cluster fix itself. If you try it out, share what you built in the LocalStack Community Slack; I’d love to see what you’re running locally.


Kiah Imani
DevRel at LocalStack
Kiah Imani is a Senior Dev Advocate at LocalStack, where she turns cloud chaos into clarity. She’s all about making AWS dev feel local, fun, and way less stressful.