
Kubernetes Env Vars: The Complete Guide

Master Kubernetes env variables with this complete guide. Learn how to define, manage, and secure environment variables for scalable and secure deployments.

Michael Guarino

Effectively managing environment variables in Kubernetes requires more than just setting key-value pairs. You need a clear understanding of when to use each approach—whether hardcoded values, ConfigMaps, Secrets, or dynamically injected data.

This hands-on guide is built for engineers solving real-world problems in production environments. We’ll walk through everything from using the basic env field in your Pod manifest to advanced techniques like injecting metadata with the Downward API. You'll also learn how to debug environment variable issues with kubectl, implement configuration changes without breaking rolling updates, and avoid pitfalls such as variable dependency ordering. Whether you're deploying a microservice or troubleshooting a failed rollout, this guide gives you the tools to master environment variable management in Kubernetes.

Unified Cloud Orchestration for Kubernetes

Manage Kubernetes at scale through a single, enterprise-ready platform.

GitOps Deployment
Secure Dashboards
Infrastructure-as-Code
Book a demo

Key takeaways:

  • Externalize configuration from code: Use ConfigMaps for non-sensitive data and Secrets for credentials to inject settings at runtime. This core practice makes your application images portable across environments and allows you to update configurations without rebuilding your code.
  • Secure data with Secrets and granular RBAC: Always use Kubernetes Secrets for sensitive data, and for maximum security, mount them as files instead of environment variables. Enforce the principle of least privilege by creating strict RBAC policies that limit which users and services can access these configurations.
  • Automate management to prevent configuration drift: Adopt a GitOps workflow to version-control your configurations and create a single source of truth. A platform like Plural automates this process, continuously reconciling your live clusters with their intended state in Git to ensure consistency across your entire fleet.

What Are Kubernetes Environment Variables?

Kubernetes environment variables are key-value pairs that supply configuration data to containers running inside Pods. They enable the separation of configuration from application logic, which is critical for building portable and maintainable containerized applications. Instead of hardcoding values like database URLs, API tokens, or feature flags into your source code, you inject them at runtime via the Pod spec. This allows you to use the same container image across multiple environments—development, staging, and production—while only changing the environment-specific values.

This approach simplifies deployment workflows, promotes consistency across environments, and aligns with the Twelve-Factor App methodology. As your infrastructure scales, consistently managing environment variables across clusters becomes increasingly complex—highlighting the need for centralized configuration strategies and observability.

Their Purpose in Containerized Applications

The main goal of environment variables in Kubernetes is to externalize configuration so that your container images remain immutable and reusable. This pattern allows you to:

  • Swap database endpoints or service URLs between environments.
  • Toggle features using flags without changing code.
  • Manage credentials securely using Kubernetes Secrets.
  • Avoid image rebuilds when configurations change.

For example, in a development environment, you might set:

- name: DATABASE_URL
  value: postgres://dev-user:dev-pass@dev-db:5432/dev

In production, the same image can use:

- name: DATABASE_URL
  valueFrom:
    secretKeyRef:
      name: prod-database-secret
      key: url

This level of flexibility is essential in dynamic systems where rapid deployments and configuration changes are routine.

How Kubernetes Injects Environment Variables into Pods

Kubernetes injects environment variables into containers before the main application process starts. This is handled by the kubelet on the node where the Pod is scheduled. Applications can then access these variables using standard OS-level environment variable APIs (e.g., os.getenv() in Python or System.getenv() in Java).
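Inside the container, reading injected values is just a standard environment lookup. As a minimal sketch in Python (the variable names here are illustrative, not part of any Kubernetes API):

```python
import os

def get_config():
    """Read settings injected by Kubernetes, with safe defaults."""
    return {
        # Falls back to "info" when the variable was not injected.
        "log_level": os.getenv("LOG_LEVEL", "info"),
        # Returns None when the variable was not injected.
        "database_url": os.environ.get("DATABASE_URL"),
    }
```

Because the kubelet sets these variables before the process starts, no Kubernetes client library is needed; the application stays unaware of how the values were sourced.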

You define environment variables in the Pod spec using two main fields:

env: Explicitly define one or more variables.

env:
  - name: LOG_LEVEL
    value: "debug"

envFrom: Load all key-value pairs from a ConfigMap or Secret.

envFrom:
  - configMapRef:
      name: app-config
  - secretRef:
      name: app-secrets

This separation of code and configuration not only improves clarity but also makes it easier to rotate credentials, update runtime parameters, and comply with best practices for scalable systems.

How to Define and Manage Environment Variables

Environment variables in Kubernetes are a core mechanism for injecting configuration into your applications. By externalizing these values, you decouple your application logic from environment-specific settings, enabling a single container image to be reused across development, staging, and production.

Kubernetes provides multiple methods for defining and managing environment variables, each suited to different levels of complexity, sensitivity, and scalability. Choosing the right approach is crucial for building secure, maintainable, and production-ready systems.

Define Variables Directly with the env Field

The simplest way to define environment variables is by using the env field in your Pod spec. This lets you explicitly set key-value pairs within each container definition:

containers:
  - name: app
    image: my-app
    env:
      - name: LOG_LEVEL
        value: "debug"
      - name: DEFAULT_LANGUAGE
        value: "en"

Each container in a Pod can have its own unique set of variables. This method is ideal for static, non-sensitive values that rarely change.

Drawback: As your application scales, maintaining large numbers of hardcoded variables across multiple manifests becomes tedious and error-prone. Additionally, configuration tightly coupled with deployment manifests is harder to update and track.

Source Variables with the envFrom Field

For cleaner and more modular configuration, Kubernetes supports the envFrom field. This allows you to bulk-import environment variables from a ConfigMap or Secret:

envFrom:
  - configMapRef:
      name: app-config
  - secretRef:
      name: app-secrets

This approach eliminates the need to define each variable manually and makes your Pod specs more maintainable. It's particularly useful when a container needs many related configuration values.
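For reference, the app-config ConfigMap consumed above could look like this (the keys and values are illustrative; each key becomes an environment variable name in the container):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  DEFAULT_LANGUAGE: "en"
  FEATURE_FLAGS: "new-checkout,dark-mode"
```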

Pro Tip: You can combine envFrom with env for overrides—direct values in env will take precedence if a key exists in both places.
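A minimal sketch of that override pattern, assuming app-config defines a LOG_LEVEL key:

```yaml
envFrom:
  - configMapRef:
      name: app-config   # assumed to provide LOG_LEVEL among other keys
env:
  - name: LOG_LEVEL
    value: "debug"       # explicit env entry wins over the envFrom value
```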

Use ConfigMaps and Secrets for Scalable Configuration

To scale your configuration strategy, you’ll want to store environment variables outside your manifests using:

  • ConfigMaps: For plain-text, non-sensitive configuration such as URLs, feature flags, or logging levels.
  • Secrets: For sensitive values like API tokens, credentials, and TLS certificates.

These can be used either with env or envFrom, or mounted as volumes for greater control and security:

volumeMounts:
  - name: secret-volume
    mountPath: "/etc/secrets"
volumes:
  - name: secret-volume
    secret:
      secretName: db-credentials

Mounting Secrets as files is often preferred for security, as it avoids exposing secrets in the environment, which some language runtimes may inadvertently log.
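The db-credentials Secret referenced above might be defined as follows (values are placeholders; stringData lets you write plain text, which Kubernetes base64-encodes on write):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  username: app-user
  password: example-only-not-a-real-password
```

With the volume mount shown above, each key appears as a file, so the application reads /etc/secrets/username and /etc/secrets/password instead of consulting its process environment.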

Managing Configuration at Scale

As the number of clusters and environments increases, maintaining consistency across your configuration becomes a major operational challenge. Solutions like Plural help you manage Kubernetes configs at scale with a GitOps workflow, enabling:

  • Declarative configuration stored in version control.
  • Automatic syncing of ConfigMaps and Secrets to your clusters.
  • Rollback and change tracking for all environment-related updates.

This prevents configuration drift and gives you a centralized view into how your infrastructure is behaving across environments.

Best Practices for Managing Environment Variables

Managing environment variables effectively is not just about organization; it's a core component of operational excellence in a Kubernetes environment. Without a clear strategy, you can face configuration drift, security vulnerabilities, and complex debugging sessions that slow down your team. Adopting a set of best practices ensures your configurations are consistent, secure, and easy to manage, regardless of how many clusters or environments you're running. These practices aren't just suggestions; they are foundational principles that prevent technical debt and streamline the entire application lifecycle, from development to production. By implementing them, you build a more robust, secure, and maintainable system.

Separate Configuration from Your Application Code

A core tenet of modern application development, famously outlined in the Twelve-Factor App methodology, is to strictly separate configuration from code. Hardcoding values like database connection strings or external API endpoints into your container image makes your application brittle and insecure. Every time a value changes, you have to rebuild and redeploy the entire image.

Instead, you should treat configuration as a dependency that is injected into the container at runtime. Kubernetes facilitates this through objects like ConfigMaps for non-sensitive data and Secrets for sensitive credentials. This decoupling allows you to use a single, immutable container image across all your environments—from local development to production—by simply applying the appropriate configuration for each. This practice dramatically simplifies your CI/CD pipeline and improves the portability of your services.

Establish Clear Naming Conventions

When you're managing a handful of variables, naming might seem trivial. But in a microservices architecture with dozens of services, a lack of consistency quickly leads to chaos. Adopting a clear and predictable naming convention is essential for maintainability.

A robust strategy involves prefixing variables with the name of the application or component they belong to—for example, PAYMENT_API_TIMEOUT_MS is far more descriptive than a generic TIMEOUT. This prevents naming collisions when multiple containers are running in the same Pod and makes your configuration self-documenting. Standardizing on a format, such as ALL_CAPS_WITH_UNDERSCORES, further enhances readability. Enforcing these conventions across your fleet ensures that anyone can understand the purpose of a variable at a glance, which is invaluable for debugging and onboarding new engineers.

Secure Your Environment Variables

While environment variables are a convenient way to pass configuration to your applications, they can also be a significant security risk if not handled properly. Sensitive data like API keys, database passwords, and private certificates can be accidentally exposed through application logs, error messages, or even by inspecting a running process.

A robust security posture requires a defense-in-depth strategy that goes beyond simply avoiding hardcoded values. You need to think about the entire lifecycle of your sensitive data. This involves using the right tools to store secrets, implementing strict access controls to limit who and what can read them, and encrypting the data both when it's stored and when it's in transit. The following practices form the pillars of a secure configuration management strategy in Kubernetes.

Protect Sensitive Data with Secrets

For any data that is sensitive, you must use Kubernetes Secrets. Storing credentials or tokens in plain-text ConfigMaps or directly in a Pod definition is a major security anti-pattern. While Secrets are only base64-encoded by default—which is easily reversible and offers no real protection—they are a distinct API object that can be targeted with specific security policies.

For an even higher level of security, it's best practice to mount Secrets as files into a temporary filesystem (tmpfs) within your container. This prevents the secret from being exposed as an environment variable, which could be leaked in logs or accessed by inspecting the process environment. For enterprise-grade security, you can also integrate Kubernetes with external secret managers like HashiCorp Vault or AWS Secrets Manager.

Use RBAC for Granular Access Control

The principle of least privilege is fundamental to Kubernetes security. Role-Based Access Control (RBAC) is the mechanism you use to enforce it. You must create granular RBAC policies that strictly define which users, groups, or ServiceAccounts can access your Secrets and ConfigMaps. A Pod’s ServiceAccount should only have get permissions for the specific Secrets it absolutely needs to run.
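A minimal sketch of such a policy, assuming a ServiceAccount named app-sa in a prod namespace (all names are hypothetical); note resourceNames, which scopes access to a single Secret rather than all Secrets in the namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-app-secret
  namespace: prod
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["prod-database-secret"]  # only this Secret
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-read-secret
  namespace: prod
subjects:
  - kind: ServiceAccount
    name: app-sa
    namespace: prod
roleRef:
  kind: Role
  name: read-app-secret
  apiGroup: rbac.authorization.k8s.io
```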

Manually managing these policies across a large fleet of clusters is error-prone and doesn't scale. Plural solves this by letting you define your RBAC policies once in a Git repository and then automatically sync them across all your clusters using a Global Service. This GitOps–based approach ensures consistent, least-privilege access control is enforced everywhere, reducing the risk of misconfiguration.

Encrypt Data at Rest and in Transit

A complete security strategy must protect data everywhere it lives. This means encrypting it both at rest and in transit. Encryption at rest protects your data when it is stored on disk. In Kubernetes, all cluster state—including Secrets—is stored in a key-value database called etcd. If an attacker gains access to the underlying storage, etcd encryption ensures they cannot read your sensitive data. Most managed Kubernetes providers enable this by default, but it's critical to verify.

Encryption in transit protects data as it moves across the network between Kubernetes components, like from the API server to a kubelet. This is typically handled by TLS. Implementing both layers of encryption provides a critical safeguard, ensuring that even if one layer is compromised, your sensitive configuration data remains protected.

Common Use Cases and Advanced Techniques

Once you’ve mastered the basics of environment variables in Kubernetes, you can start using them to unlock more powerful configuration patterns. Far from being just simple key-value pairs, environment variables allow your containers to dynamically adapt to their runtime context—switching behavior between environments, toggling features on demand, and integrating with infrastructure metadata. They’re the backbone of adaptive, scalable systems in Kubernetes.

These advanced techniques help you build resilient, modular applications. For example, you can point an app to different databases for staging vs. production without rebuilding your container. Or use a feature flag to roll out functionality to just 10% of users without touching the application code. However, as your footprint grows across multiple clusters and environments, managing these variables consistently becomes complex. Ensuring that every environment has the right mix of ConfigMaps, Secrets, and flags can quickly spiral into manual toil.

This is where centralized configuration platforms like Plural prove essential. With GitOps-native workflows, Plural provides a unified control plane to synchronize configurations across clusters, enforce security policies, and eliminate drift. Below, we explore the most impactful ways to leverage environment variables for robust day-to-day operations.

Configure Applications and Database Connections

One of the most common—and critical—uses for environment variables is managing external service configurations like database connections. Hardcoding credentials or endpoints into application code or images introduces fragility and security risk. Instead, use environment variables to inject these values at runtime.

For example, store database credentials (DB_USER, DB_PASSWORD) in a Secret and non-sensitive values like DB_HOST or DB_PORT in a ConfigMap. You can then inject them into your container using the envFrom or env fields in the Pod spec. This approach enables you to reuse the same container image across development, staging, and production by simply varying the injected config, improving portability and simplifying deployments.
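Putting that together, the container spec might look like this (the ConfigMap and Secret names are illustrative):

```yaml
containers:
  - name: app
    image: my-app
    envFrom:
      - configMapRef:
          name: db-config        # assumed to provide DB_HOST, DB_PORT
      - secretRef:
          name: db-credentials   # assumed to provide DB_USER, DB_PASSWORD
```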

Implement Feature Flags and Environment-Specific Settings

Environment variables are a simple but effective way to toggle application behavior without redeploying code. A feature flag like ENABLE_NEW_CHECKOUT_FLOW=true lets you selectively enable new functionality for specific environments, customers, or traffic percentages—ideal for A/B testing and canary deployments.

You can also use variables such as LOG_LEVEL, REGION, or ENVIRONMENT_NAME to tune behavior between dev, staging, and prod environments. This reduces branching in your code and allows a single build artifact to support multiple deployment contexts, keeping your CI/CD pipeline clean and fast.

Inject Dynamic Runtime Metadata with the Downward API

For scenarios where a container needs metadata about its own Pod or runtime environment, Kubernetes offers the Downward API. This allows you to expose information such as the Pod’s name, namespace, node, or resource limits directly into the container—via environment variables or mounted files.

For example, injecting POD_NAME or POD_NAMESPACE helps tie application logs to a specific pod instance, simplifying observability and debugging. Similarly, exposing CPU or memory limits can let the application tune itself based on available resources. This is particularly valuable for service discovery, registration, and autoscaling logic.

Using the Downward API, you can make applications more self-aware and context-sensitive, which is especially useful in distributed, cloud-native environments.
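As a sketch, the fields described above can be exposed with fieldRef and resourceFieldRef entries in the container's env list:

```yaml
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: MEM_LIMIT_BYTES
    valueFrom:
      resourceFieldRef:
        containerName: app       # assumes the container is named "app"
        resource: limits.memory
```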

How to Troubleshoot Environment Variables

Misconfigured environment variables are a common root cause of pod failures, connection errors, and unexpected behavior in Kubernetes. Thankfully, with the right practices and tools, they’re also one of the easiest things to debug—if you know where to look.

Avoid Common Pitfalls

The most effective troubleshooting begins before a problem even occurs. Most environment variable issues stem from avoidable mistakes—such as typos in variable names, incorrect references to ConfigMap or Secret keys, or assuming that changes to these resources automatically propagate to running pods.

One key principle to remember: environment variables in Kubernetes are immutable once a Pod is running. If you update a ConfigMap or Secret, the change won’t affect existing Pods unless you trigger a redeployment (e.g., by restarting the deployment or updating its spec).

To prevent these issues:

  • Use consistent naming conventions (DB_PASSWORD, API_KEY, etc.) across environments.
  • Version your ConfigMaps and Secrets to make changes explicit and trackable.
  • Validate your configurations using tools like kubeval or kubeconform before applying them.
  • Leverage kubectl diff to preview changes before deploying.

Essential kubectl Commands for Debugging

When something goes wrong, your first task is to inspect the actual environment inside the container.

To list the environment variables for a running container, use:

kubectl exec <pod-name> -- printenv

This shows exactly what the container sees at runtime, helping you confirm whether values were injected as expected.

If a Pod fails to start, use:

kubectl describe pod <pod-name>

This reveals event logs that may point to issues like CreateContainerConfigError or CrashLoopBackOff. These typically indicate that a ConfigMap or Secret was misnamed, missing, or malformed.

You can also inspect the Pod spec directly:

kubectl get pod <pod-name> -o yaml

This allows you to confirm whether the environment variables defined in your YAML file were correctly rendered into the actual Pod configuration.

Monitor and Validate Your Configurations

While kubectl is great for spot-checking individual pods, you also need a monitoring strategy that detects configuration errors across the entire fleet.

Environment-related issues often manifest as subtle symptoms:

  • Application crash loops due to bad secrets or missing database hosts.
  • Increased latency caused by feature flags toggling experimental code paths.
  • Partial outages in staging but not production due to diverging configurations.

Monitoring health and error metrics from your applications can surface these misconfigurations early. Combine this with deployment observability to pinpoint the root cause.

Platforms like Plural offer a unified dashboard that correlates configuration changes (e.g., new ConfigMap versions or feature flag toggles) with performance metrics, crash logs, and pod restarts. This eliminates the guesswork from debugging and ensures that misconfigured environment variables don’t turn into production incidents.

Managing Environment Variables in Deployments

Managing environment variables in Kubernetes isn’t just about initial setup—it’s about overseeing their lifecycle as your application evolves. From rolling updates to large-scale configuration consistency, your approach to environment variables can make or break deployment stability. In this section, we’ll walk through best practices that ensure your environment variables remain reliable, traceable, and secure across all deployment stages.

Handle Rolling Updates with Configuration Changes

When an environment variable defined in a Deployment’s Pod template changes, Kubernetes performs a rolling update. This means:

  • Old Pods are gradually terminated.
  • New Pods are created using the updated configuration.
  • The application stays online without downtime.

Note that editing a referenced ConfigMap or Secret does not modify the Pod template, so it will not trigger a rollout on its own—you need to force one, as described below.

This works well for single applications, but in larger environments, visibility becomes challenging. You may not know which services have successfully adopted the new configuration.

This is where platforms like Plural provide value. Their dashboard offers a centralized view of deployment activity, helping you verify that configuration changes are rolled out completely and correctly across all services and clusters.

Tip: To trigger a rolling update when only a referenced ConfigMap or Secret has changed, the simplest option is kubectl rollout restart deployment/<name>, which stamps a restart annotation onto the Pod template for you. In Helm charts, the equivalent pattern is a checksum annotation that hashes the ConfigMap's rendered content:

spec:
  template:
    metadata:
      annotations:
        checksum/config: '{{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}'

Either approach forces a new Pod template hash, ensuring Pods are redeployed even though only a referenced ConfigMap changed.

Version Your Environment Configurations

Your configuration should be treated with the same discipline as your application code. That means:

  • Store your ConfigMaps and Secrets as YAML files in Git.
  • Use clear version control and change tracking.
  • Review and approve changes via pull requests.

This GitOps model turns configuration management into a transparent, repeatable, and auditable process.

Plural is built around GitOps best practices. Any updates you make to environment variable definitions in Git are automatically synced to your Kubernetes clusters, ensuring a consistent and reliable rollout. And if something goes wrong? Simply revert the Git commit to restore the last known good state.


Prevent Configuration Drift Across Your Fleet

Configuration drift happens when a cluster's live state diverges from its source-of-truth in Git. This often results from:

  • Manual kubectl edits that bypass Git
  • Inconsistent update processes
  • Ad hoc fixes that never make it into version control

Drift creates unpredictable behavior, debugging nightmares, and potential security gaps.

To prevent this, use continuous reconciliation: a process where your desired Git state is constantly compared to the cluster’s actual state—and corrected if they differ.

Plural’s deployment agent enables this by running inside each managed cluster. It checks for drift and automatically re-aligns live state with your Git repository, ensuring configuration consistency without manual oversight.

Why it matters:

  • You avoid hidden misconfigurations across environments.
  • You eliminate the “it works on staging, but not on prod” problem.
  • You increase operational confidence with every deploy.

Enforce Security and Compliance

Managing environment variables securely goes beyond just using Kubernetes Secrets. To operate safely at scale, you need a systematic approach that enforces rules, detects threats, and provides a clear audit trail for every configuration change. This involves implementing automated policies and maintaining visibility across your entire fleet to ensure your configurations remain compliant and secure from development to production.

Enforce Policies with OPA Gatekeeper

A powerful way to enforce configuration standards is with Open Policy Agent (OPA) Gatekeeper, a specialized admission controller for Kubernetes. Gatekeeper intercepts requests to the Kubernetes API server and validates them against a set of policies you define. This allows you to prevent non-compliant configurations from ever entering your cluster. For example, you can create a policy that blocks deployments from mounting Secrets directly as environment variables, or a rule that requires every container to have CPU and memory limits defined. Plural simplifies this process by letting you manage and roll out OPA Gatekeeper policies across your entire fleet from a central interface, ensuring consistent enforcement everywhere.
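A sketch of the resource-limits rule mentioned above, written as a Gatekeeper ConstraintTemplate plus a Constraint (the template name and Rego package are hypothetical; treat this as a starting point, not a production policy):

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredresources
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredResources
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredresources

        # Flag any container that does not declare resource limits.
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not container.resources.limits
          msg := sprintf("container %v has no resource limits", [container.name])
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredResources
metadata:
  name: require-limits
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```

With this in place, the API server rejects any Pod admitted without CPU and memory limits, so the standard is enforced at the cluster boundary rather than by convention.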

Detect and Manage CVEs in Your Configurations

Environment variables often specify container image tags, directly linking your deployment to a potential set of vulnerabilities. A critical part of security is continuously scanning these images for Common Vulnerabilities and Exposures (CVEs). By integrating vulnerability scanning into your CI/CD pipeline, you can catch threats before they reach production. Plural enhances this process by aggregating CVE scan results from across your clusters into a single dashboard. This gives you a clear, prioritized view of which services are affected by which vulnerabilities, allowing you to quickly identify at-risk configurations and remediate them based on severity.

Set Up Audit Logging for Configuration Changes

When a configuration changes, you need to know who changed it, what they changed, and when. Kubernetes offers built-in audit logging capabilities that record every API request, providing a detailed log for compliance and incident response. This is especially important for tracking modifications to ConfigMaps and Secrets. Plural complements this by enforcing a GitOps workflow for all configuration changes. This creates a human-readable, version-controlled audit trail in Git for every update to your environment variables. By combining Kubernetes audit logs with a clear Git history, you get a comprehensive view of your configuration's lifecycle, making it easier to meet compliance requirements and investigate any issues.

How Plural Streamlines Environment Variable Management

Managing environment variables, ConfigMaps, and Secrets across a fleet of Kubernetes clusters introduces significant operational overhead. Ensuring consistency, enforcing security policies, and preventing configuration drift requires a robust, automated solution. Plural provides a unified platform that directly addresses these challenges, simplifying the entire lifecycle of your application configurations. By integrating configuration management into a GitOps-based workflow, Plural helps platform teams maintain control while enabling developers to work efficiently.

Simplify Configuration with a Single Pane of Glass

Plural provides a unified interface for handling the entire lifecycle of your applications, which includes managing environment variables and other configuration data. This single pane of glass allows you to externalize environment-specific settings from your application code, making it easier to manage deployments without rebuilding container images. Instead of juggling multiple tools and contexts, your team gets a centralized view of all configurations across every cluster. This approach not only improves deployment efficiency but also reduces the complexity of managing distinct settings for development, staging, and production environments. With Plural’s embedded dashboard, you gain full visibility into your workloads, simplifying troubleshooting and day-to-day management tasks.

Integrate Security and RBAC Automatically

Securing environment variables, especially sensitive data stored in Secrets, is critical. Plural integrates security and Role-Based Access Control (RBAC) directly into its management workflows. The platform connects to your identity provider via OIDC and uses Kubernetes impersonation, meaning all access control resolves to your console user's email and groups. This creates an effective SSO experience for Kubernetes. You can define granular permissions for who can view or modify ConfigMaps and Secrets, ensuring only authorized personnel can access critical data. Plural also allows you to use a Global Service to replicate RBAC policies across your entire fleet, ensuring consistent security posture and eliminating manual configuration on a per-cluster basis.

Automate Policy Enforcement and Compliance Checks

Plural’s GitOps-based workflows are designed to automate policy enforcement and compliance. When you manage environment variables and configurations as code in a Git repository, every change goes through a pull request. This process provides a clear audit trail and allows for automated checks before anything is merged and deployed. You can integrate policy-as-code tools like OPA Gatekeeper to enforce organizational standards, such as requiring resource limits or restricting access to specific registries. This automation ensures that all configurations adhere to predefined policies, which drastically reduces the risk of human error and misconfigurations. By automating these checks, you can maintain a compliant and reliable state across all your Kubernetes deployments.


Frequently Asked Questions

What's the real difference between using a ConfigMap and a Secret for my variables? Think of it this way: ConfigMaps are for non-sensitive, plain-text data like feature flags or API endpoints. Secrets are specifically for sensitive data like passwords or tokens. While Secrets are only base64 encoded by default, they are a distinct object type in Kubernetes, which allows you to apply stricter RBAC policies and audit rules to them. For maximum security, you should avoid injecting Secrets as environment variables and instead mount them as files in a volume.

I updated my ConfigMap, but my application is still using the old values. What did I do wrong? This is a very common situation, and you didn't necessarily do anything wrong. Environment variables in Kubernetes are immutable, meaning they can't be changed for a running Pod. When you update a ConfigMap, existing Pods won't see the new values. To apply the changes, you must trigger a redeployment, which terminates the old Pods and creates new ones that will pull in the updated configuration. A GitOps workflow, like the one Plural automates, handles this process seamlessly by orchestrating a rolling update whenever you commit a configuration change.

How can I stop configurations from becoming inconsistent across my development, staging, and production clusters? This problem, known as configuration drift, is best solved with automation and a single source of truth. The most effective strategy is to store all your configurations, like ConfigMaps and Secrets, in a Git repository. This gives you versioning and a clear audit trail. Then, use a platform like Plural to automatically sync these configurations to all your target clusters. Plural’s Global Services feature is designed for this exact purpose, ensuring that a change made in your central repository is consistently replicated everywhere, eliminating drift.

Is it ever safe to put sensitive data directly into an environment variable? It's a practice you should generally avoid. While using Kubernetes Secrets is the correct first step, injecting them as environment variables can still expose them in application logs or through process inspection. The more secure method is to mount the Secret as a file into a temporary in-memory filesystem within the container. Your application can then read the secret from this file. This approach significantly reduces the attack surface by keeping sensitive data out of the process environment.

My Pod is stuck in a CreateContainerConfigError state. How do I debug this? This error almost always means Kubernetes can't find a ConfigMap or Secret that your Pod is trying to reference. The first step is to run kubectl describe pod <pod-name> and look at the events section at the bottom. It will usually tell you exactly which ConfigMap or Secret is missing. From there, check for typos in your Pod manifest, make sure the referenced configuration object actually exists in the correct namespace, and verify that the Pod's ServiceAccount has the necessary RBAC permissions to read it.
