📋 Quick Steps
When your Helm chart breaks, follow this diagnostic sequence before you rage-delete the namespace.
```shell
# 1. Render everything locally with full debug output
helm install my-release ./my-chart --dry-run --debug > debug.yaml

# 2. Check dependency resolution
helm dependency list ./my-chart
helm dependency update ./my-chart

# 3. See what values are actually being used
helm get values my-release --all

# 4. Template a specific subchart
helm template my-release ./my-chart --include-crds --show-only charts/subchart/templates/deployment.yaml
```
Welcome to the Helm Chart Haunted House
Your helm install just failed with an error message that looks like YAML vomit. The deployment is in a state that shouldn't be physically possible. You're staring at a values.yaml file that's 500 lines long, half of which are commented out from previous debugging sessions that also failed.
This isn't a deployment tool—it's an escape room where the puzzles are written in templated YAML and the only clues are cryptic error messages. You're not a developer anymore; you're a ghostbuster, and your Helm chart is the haunted house.
🚀 TL;DR: Your Escape Route
- Helm's dependency resolution isn't magic—it's just confusing. Subcharts override parent values in a specific order that will betray you.
- Your `values.yaml` is probably lying through omission, inheritance, or because you forgot a nested key three months ago.
- Debugging templates doesn't require tears, just systematic inspection of what Helm actually generates versus what you think it generates.
The Helm Chart Autopsy: 7 Places Your Values Are Lying
When Helm behaves unexpectedly, it's usually because values aren't flowing where you think. Here's where to check when your configuration seems to be ignoring you.
1. The Inheritance Graveyard
Subcharts don't inherit values automatically. If your parent chart sets image.tag: "latest" at the top level but the subchart expects image.tag under its own name key, you're deploying empty containers. Helm's value merging follows this precedence, highest first: command-line `--set`/`-f` overrides → parent chart `values.yaml` → the subchart's own `values.yaml` defaults.
Example mistake: Setting database.enabled: true in the parent when the PostgreSQL subchart expects postgresql.enabled: true. They're different namespaces entirely.
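A minimal sketch of the fix in the parent chart's values file, assuming the dependency is the stock `postgresql` chart:

```yaml
# parent chart's values.yaml

# WRONG: no subchart is named "database", so nothing ever reads this key
database:
  enabled: true

# RIGHT: values intended for a subchart live under the subchart's name
postgresql:
  enabled: true
```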
2. The Global Values Mirage
The .Values.global section is supposed to be shared across all charts. But if a subchart doesn't explicitly reference .Values.global.someValue in its templates, your global setting does nothing. Check each subchart's template files to see if they actually use the global values you're setting.
Pro tip: Use helm template --debug and search for "global" in the output to see which references actually resolve.
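As a sketch of why globals can silently do nothing (the key names here are illustrative): a global value only takes effect if some subchart template actually reads it.

```yaml
# parent values.yaml
global:
  imageRegistry: registry.example.com  # visible to every subchart as .Values.global.imageRegistry

# A subchart template (e.g. templates/deployment.yaml) must reference it explicitly:
#   image: "{{ .Values.global.imageRegistry }}/myapp:{{ .Values.image.tag }}"
# If no subchart template contains such a reference, the global setting is inert.
```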
3. The Required Value Trap
Some charts have required functions in templates that fail if values aren't provided. The error message might be generic. Run helm lint on your chart first—it catches some (but not all) of these issues before deployment.
4. The Type Mismatch Ghost
YAML types haunt everyone. enabled: "false" (string) versus enabled: false (boolean) will give different results in {{ if .Values.enabled }} conditions. One deploys, the other doesn't.
5. The Missing Default Phantom
Check the chart's values.yaml for defaults. If a value isn't set anywhere in your override chain, it might default to something surprising. The chart maintainer's "sensible default" might be your production outage.
6. The Conditional Nesting Nightmare
Nested conditionals in templates can swallow values whole. If a parent condition fails ({{ if .Values.feature.enabled }}), everything inside that block—including your carefully configured sub-values—gets skipped during template rendering.
7. The Chart Version Zombie
You updated your requirements.yaml (or Chart.yaml dependencies) but didn't run helm dependency update. You're deploying yesterday's zombie chart version with today's values. The mismatch will create undead bugs.
Template Debugging Without Crying Into Your Coffee
Helm templates are Go templates with extra functions. When they fail, you get Go error messages. Here's how to debug them like a human.
Step 1: Isolate the Offending Template
Use --show-only to render specific templates. When you get "template: : bad character U+002D '-'", it means a template syntax error. Narrow it down:
```shell
helm template my-release ./chart --show-only templates/deployment.yaml
```
This renders just that file, making the error location obvious.
Step 2: See the Actual Values Context
Add --debug to see the values being passed to templates. Better yet, use this trick to dump the entire values context:
Create a debug template: {{ toYaml .Values }} in a temporary file and render it with --show-only. You'll see exactly what each chart receives.
Step 3: Check Function Outputs
Helm functions like include, toYaml, and tpl can fail silently. Test them by creating a simple ConfigMap with their output:
```yaml
{{- define "test" -}}
{{ .Values.someValue | default "missing" }}
{{- end }}
```

Then reference it in a ConfigMap:

```yaml
data:
  test: {{ include "test" . | quote }}
```
Dependency Resolution: It's Not Magic, Just Confusing
Helm processes dependencies (subcharts) in a specific order that feels arbitrary until you understand it.
The Actual Order of Operations
- Parent chart values are loaded
- Subchart values from `Chart.yaml` dependencies are loaded
- Values are merged: command line overrides parent, parent overrides subchart
- Templates are rendered from the bottom of the dependency tree up
- All manifests are combined and sent to Kubernetes
Critical insight: A subchart can override a parent's value for itself, but not for other subcharts. This is why global exists—as a workaround for this limitation.
Visualizing the Flow
Create a dependency tree diagram. For each subchart, note:
- Which values it expects (check its `values.yaml`)
- Which values it actually receives (use the debug template method)
- What it provides to its own sub-subcharts
This manual mapping solves 80% of "why isn't my value working?" problems.
Emergency Escape Hatches: When to Use Which Option
Helm gives you several debugging flags. Using the wrong one is like using a flamethrower to light a candle.
--dry-run: The "Look Before You Leap"
Always use --dry-run first. It shows what would be sent to Kubernetes. Combine with --debug to see the rendered templates with values. This catches template errors and value mismatches before they touch your cluster.
Common mistake: Not realizing --dry-run still contacts the Kubernetes API server for validation. If your cluster is unreachable, render fully offline with helm template instead, or pass --disable-openapi-validation.
--debug: The "What Actually Happened?"
--debug adds the rendered manifests to the output. Use it when:
- Templates render but produce wrong YAML
- You need to see the final manifest before debugging
- Checking if a specific value made it through the chain
It's verbose. Pipe it to a file: helm install ... --debug > debug-output.yaml.
--force: The Nuclear Option (Handle With Care)
--force makes Helm delete and recreate resources. It's not a debugging tool—it's a recovery tool. Use it only when:
- A resource is stuck in a bad state (like a failed StatefulSet)
- You've confirmed the new manifest is correct (with --dry-run)
- You accept potential downtime
Warning: --force on a StatefulSet with persistent volumes can cause data loss. Check volume reclamation policies first.
helm get: The "What's Already There?"
When debugging an existing release, helm get is your friend:
- `helm get values RELEASE` shows user-supplied values
- `helm get values RELEASE --all` shows all values (including defaults)
- `helm get manifest RELEASE` shows what's actually deployed
- `helm get hooks RELEASE` shows the release's hooks (often the culprit)
Pro Tips From the Helm Trenches
1. Create a debug subchart: Add a simple debug chart to your dependencies that creates a ConfigMap with {{ toYaml .Values }}. Instant values inspection.
2. Use named templates for complex logic: Instead of inline conditionals, create named templates. They're easier to test and debug in isolation.
3. Version pin everything: In your Chart.yaml dependencies, specify exact versions (1.2.3) not ranges (^1.2.0). Surprise updates cause midnight debugging sessions.
4. The three-pass review: Before deploying, check: 1) `helm lint`, 2) `helm template` (or `helm install --dry-run --debug`), 3) `kubectl apply --dry-run=client -f` on the rendered output.
5. Keep a cheat sheet: Document your chart's value structure and dependencies. In six months, future-you will thank present-you.
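The version pinning from tip 3 might look like this in practice (chart name, version, and repository URL are illustrative):

```yaml
# Chart.yaml
apiVersion: v2
name: my-chart
version: 0.1.0
dependencies:
  - name: postgresql
    version: "12.5.8"        # exact pin; avoid ranges like "^12.5.0"
    repository: https://charts.bitnami.com/bitnami
```

After editing the pin, remember to run `helm dependency update` so the lock file and the downloaded chart actually match.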
Escaping the Helm Haunted House
Helm charts fail in predictable ways once you know where to look. The haunted house metaphor works because the ghosts (bugs) always appear in the same rooms: value inheritance, template rendering, and dependency resolution. Your flashlight is --dry-run --debug, your map is the dependency tree, and your proton pack is systematic debugging.
Next time helm install fails with cryptic YAML, don't reach for kubectl delete --all immediately. Run the diagnostic sequence from the Quick Steps box. Check the seven lying places in your values. Use the escape hatches appropriately. You'll fix the issue in minutes instead of hours, and maybe, just maybe, you'll keep your sanity intact.
Now go update those dependencies (yes, you forgot to run helm dependency update again).
Quick Summary
- What: Developers waste hours debugging Helm chart issues where dependencies break, values override incorrectly, and templates produce unexpected results, often leading to "just delete and redeploy" solutions.