Engineering Log

Kustomize: Not Abstraction, Just Better Duplication


In my daily work I deal with Kubernetes manifests - YAML files that declare the desired state of resources in a cluster. Managing multiple environments (development, staging, and production) across many clusters becomes nearly impossible without declarative manifests. Applying commands directly to clusters would leave untraceable resources and make troubleshooting difficult to reproduce. Declarative YAML files are essential for collaboration and reproducibility.

However, managing manifests presents its own challenges as resources accumulate over years of cluster operation. I eventually realized I needed a way to reduce duplication across environments. The solution was Kustomize. Initially, I underestimated its capabilities - until I discovered overlays and patches. This fundamentally changed how I manage configuration complexity.

I avoid evangelizing tools because technology choices are deeply personal and must align with organizational culture and use cases. Instead, I’ll share specific scenarios where Kustomize improved readability and reduced clutter in my workflows.

Scenario 1: The Environment Proliferation (Overlays)

It started with three environments. Standard stuff: development, staging, production. Each needed the same Deployment, same Service, same ConfigMap - except for the image tags, the replica counts, the resource limits, and the domain names.

I created three directories. dev/, staging/, prod/. I copied the YAML files across. When I updated the liveness probe in development, I opened three editors. When staging drifted from production, I told myself I’d catch it in code review. I never did.

To overcome these challenges, I flattened the structure: one base/ directory with the common resources, and three overlay subdirectories under stages/ - development, staging, production - each containing only a kustomization.yaml and the deltas.

community-dashboard
├── base
│   ├── deployment.yaml
│   ├── infisical-secret.yaml
│   ├── ingress.yaml
│   ├── kustomization.yaml
│   └── svc.yaml
└── stages
    ├── development     # This is the dev overlay
    │   └── kustomization.yaml
    ├── production      # This is the prod overlay
    │   ├── http-route.yaml
    │   └── kustomization.yaml
    └── staging         # This is the staging overlay
        ├── http-route.yaml
        └── kustomization.yaml

Previously, a single change meant updating multiple files. Now I edit the base and patch the variance. kubectl apply -k stages/production deploys exactly what I reviewed in git diff. The duplication didn't disappear - it became explicit.

The base itself collects the common manifests that are reused across environments. The base kustomization.yaml is as follows:

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - infisical-secret.yaml
  - svc.yaml
  - ingress.yaml

Each kustomization in the overlays refers back to the base and may also include additional resources of its own. The production overlay kustomization is as follows:

# stages/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base
  - http-route.yaml
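The development overlay, by contrast, carries no extra resources at all - it only points back at the base. A minimal sketch, matching the tree above where development holds just a kustomization.yaml:

```yaml
# stages/development/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base
```

An empty-looking overlay like this still earns its keep: it gives every environment a uniform kubectl apply -k entry point, and a place to add patches later without restructuring anything.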

Scenario 2: The Image Tag Proliferation (Generators)

Then came the image tag problem. Eight microservices, three environments, each building from the same source but promoting at different cadences. Development tracked latest. Staging pinned to specific SHAs. Production only accepted semver tags. I had twenty-four image references scattered across YAML files, and a deployment process that required manual find-and-replace for every promotion.

I wrote a sed script. Something like s/image: myapp:.*/image: myapp:${TAG}/ run in CI. It felt lightweight - just text replacement. But I saw the fragility: regex matching the wrong line, tags with special characters breaking the substitution, and the inevitable moment when someone asked ‘what version is actually in staging right now?’ with no single source of truth to answer.

Let's fix this workflow with the images transformer. Define the image references in kustomization.yaml, not in the Deployment manifests themselves: the base declares what images exist, and each overlay declares which tag belongs to that environment. It may look as follows:

# stages/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base
  - http-route.yaml

images:
  - name: community-dashboard-image
    newTag: v2.3.1
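For the transformer to do anything, the base Deployment has to reference the image by the same name declared above. A sketch of the assumed container spec - the exact fields in my base/deployment.yaml may differ:

```yaml
# base/deployment.yaml (excerpt, assumed shape)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: community-dashboard
spec:
  template:
    spec:
      containers:
        - name: dashboard
          # Kustomize matches on this image name and rewrites the tag per overlay
          image: community-dashboard-image:latest
```

Rendering the production overlay then emits image: community-dashboard-image:v2.3.1 - no sed, no regex, and the answer to "what version is in staging right now?" is a one-line read of that overlay's kustomization.yaml.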

Scenario 3: The Configuration Drift (Patches)

Then came the configuration problem. Same application, six environments, each needing slightly different resource limits, environment variables, and replica counts. Development ran lean - 1 replica, minimal CPU. Production ran hot - 5 replicas, burst scaling enabled. Staging mirrored production except for the third-party API endpoints it pointed to. The differences were small but critical: a memory limit too low caused OOMKills, an environment variable wrong caused payment failures 😱.

I considered copying the entire Deployment for each environment. Six files, six full manifests, edit the specific fields per environment. It was direct - the whole picture in one file. But I knew the cost: a liveness probe change meant six edits, a new environment variable meant six additions. Drift caused by human error stayed invisible until it caused failures, and it was a terrible way to see what actually differed between environments.

To tackle this I used patches - surgical, targeted modifications to the base resources. The base defined the common structure: container name, port, base liveness probe. Each overlay defined only what differed through patches, either inline or as separate files. In practice it looks as follows:

# stages/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base
  - http-route.yaml

# Patch the target namespace
patches:
  - target:
      kind: InfisicalSecret
      name: dashboard
    patch: |-
      - op: replace
        path: /spec/managedSecretReference/secretNamespace
        value: dashboard-production
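The inline JSON patch above is precise, but verbose for multi-field changes. For the replica counts and resource limits mentioned earlier, a strategic merge patch in a separate file reads more naturally. A sketch, assuming the Deployment and container names from my base:

```yaml
# stages/production/deployment-patch.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: community-dashboard   # must match the base Deployment's name
spec:
  replicas: 5
  template:
    spec:
      containers:
        - name: dashboard     # must match the base container's name
          resources:
            limits:
              cpu: "500m"
              memory: "1Gi"
```

Referenced from the overlay via patches: - path: deployment-patch.yaml, only the fields present in the patch override the base; everything else is inherited untouched.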

Patches solved the representation problem, not the propagation problem. When production needed an emergency replica bump, I edited stages/production/kustomization.yaml, committed, merged, ran kubectl apply. The change was traceable, but the mechanism was still manual.
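For something as routine as a replica bump, Kustomize also offers a dedicated replicas transformer, so the emergency change becomes a one-line diff in the overlay rather than a patch file. A sketch, with the Deployment name assumed from the base:

```yaml
# stages/production/kustomization.yaml (excerpt, sketch)
replicas:
  - name: community-dashboard   # target Deployment's name
    count: 8
```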

Closing Synthesis

Three problems: environment proliferation, image tag sprawl, configuration drift. Three solutions: bases with overlays, image transformers, targeted patches. The pattern wasn't learning Kustomize syntax; it was recognizing where I was solving variation with duplication when I should have been solving it with layering.

This worked because kubectl apply -k is just the client. The logic lives in git: reviewable, diffable, reversible. But client-side rendering has limits. The next step is running this same structure continuously, reacting to git changes without manual kubectl invocations. That's where the GitOps layer comes in, and why I started looking at ArgoCD and Kargo.

