
kubectl get deployments: List, Filter & Inspect

Learn how to use `kubectl get deployments` to list, filter, and inspect Kubernetes deployments for efficient application management and troubleshooting.

Michael Guarino

Most engineers are familiar with running a basic kubectl get deployments to see a list of their running applications. But its true power lies beyond the default table view. When you need to automate health checks, back up configurations, or perform targeted queries in a complex environment, you need to leverage its advanced features. This guide moves past the basics and shows you how to use label selectors, field selectors, and custom output formats like JSON and YAML. We'll explore how to combine this command with other tools like jq to build powerful scripts that can streamline your operational workflows and automate your deployment validation processes.

Unified Cloud Orchestration for Kubernetes

Manage Kubernetes at scale through a single, enterprise-ready platform.

GitOps Deployment
Secure Dashboards
Infrastructure-as-Code
Book a demo

Key takeaways:

  • Verify deployment health at a glance: Use kubectl get deployments as your first step to confirm an application's status. The READY, UP-TO-DATE, and AVAILABLE columns provide an immediate snapshot of whether the current state matches your desired configuration.
  • Follow a systematic troubleshooting path: When deployments fail, move from the general to the specific. Start with kubectl describe deployment to review events, then inspect individual pods and their logs with kubectl get pods and kubectl logs to uncover the root cause.
  • Use the CLI for precision and a platform for scale: While kubectl is essential for targeted tasks, managing a fleet of deployments creates operational drag. Plural's unified dashboard provides the single-pane-of-glass visibility needed to efficiently monitor and manage deployments across all your clusters without constant context-switching.

What Is kubectl get deployments?

kubectl get deployments queries the Kubernetes API server and lists Deployment resources in a namespace. By default, it operates in the current context’s namespace and returns a tabular summary of workload state. For most engineers, it’s the fastest way to inspect whether a rollout converged and whether the control plane reflects the desired state.

At a glance, the default output includes:

  • NAME – Deployment identifier
  • READY – ready replicas vs. desired replicas
  • UP-TO-DATE – replicas matching the current Pod template
  • AVAILABLE – replicas passing availability checks
  • AGE – time since creation

These fields expose the reconciliation status between .spec (desired state) and .status (observed state). If READY or AVAILABLE diverge from the desired replica count, the Deployment controller has not converged, and further inspection (describe, events, or ReplicaSets/Pods) is required.

What Information Does kubectl get deployments Show?

kubectl get deployments returns a tabular summary of Deployment resources from the Kubernetes API server. It exposes the reconciliation state between the desired configuration (.spec) and the observed state (.status). For operators, this is the fastest way to verify rollouts, scaling actions, and controller convergence before inspecting ReplicaSets or Pods.

The default output answers core operational questions:

  • How many replicas are requested?
  • How many are ready?
  • How many match the current Pod template?
  • Are replicas available to serve traffic?
  • How old is the Deployment?

Because the data is pulled directly from the API server, the output reflects controller state at query time.

Decoding the Output Columns

A standard invocation:

kubectl get deployments

Produces columns similar to:

  • NAME – Deployment resource name (.metadata.name)
  • READY – Ready replicas / desired replicas (.status.readyReplicas / .spec.replicas)
  • UP-TO-DATE – Replicas matching the current Pod template (.status.updatedReplicas)
  • AVAILABLE – Replicas considered available (.status.availableReplicas)
  • AGE – Time since creation (.metadata.creationTimestamp)

READY depends on Pod readiness probes. If probes are misconfigured or absent, this signal becomes unreliable.
UP-TO-DATE is critical during rolling updates—it indicates rollout progress toward the new template revision.
AVAILABLE reflects replicas that have met minimum availability requirements defined by the Deployment strategy.

Interpreting Deployment Status

The key diagnostic pattern is comparing READY, UP-TO-DATE, and AVAILABLE against the desired replica count (.spec.replicas).

Common failure modes:

  • AVAILABLE < desired: Pods may be failing readiness probes, crashing, or unschedulable due to resource constraints.
  • UP-TO-DATE < desired during rollout: The update may be blocked by failing new Pods or by rollout strategy limits (maxUnavailable, maxSurge).
  • READY fluctuating: Indicates unstable Pods or intermittent probe failures.

This command does not surface root cause—it identifies whether the Deployment controller has converged. From there, the next step is typically kubectl describe deployment, inspecting ReplicaSets, or analyzing Pod-level events.

For teams standardizing cluster operations with Plural, kubectl get deployments functions as a lightweight validation gate: confirm controller state first, then escalate to deeper diagnostics only if invariants fail.

Getting Started: Basic Syntax and Usage

kubectl get deployments is the canonical read operation for inspecting Deployment resources. By default, it queries the Kubernetes API using the current kubeconfig context and namespace, returning a tabular summary. Effective use requires understanding namespace scoping and output customization.

Basic invocation:

kubectl get deployments

This lists Deployments in the active namespace of your current context.

Working with the Default Namespace

kubectl is namespace-scoped unless told otherwise. If no namespace is specified, it queries the default namespace configured in the current context.

This commonly causes confusion: a Deployment created in staging or production will not appear unless you explicitly target that namespace or change context. The command is functioning correctly—it’s enforcing namespace isolation.

To inspect your current context:

kubectl config get-contexts
kubectl config current-context

Understanding this scoping model is foundational to avoiding false assumptions about missing workloads.
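If you work primarily in one namespace, you can change the context's default instead of passing a namespace flag on every command. A quick sketch (the staging namespace name is a placeholder):

```shell
# Point the current context at the "staging" namespace (hypothetical name)
kubectl config set-context --current --namespace=staging

# Confirm which namespace the current context now targets
kubectl config view --minify --output 'jsonpath={..namespace}'
```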

How to Specify a Custom Namespace

Use -n or --namespace to override the default scope:

kubectl get deployments -n api-services

For a cluster-wide view:

kubectl get deployments --all-namespaces

In multi-tenant or multi-environment clusters, explicit namespace targeting should be standard practice. Relying on implicit context increases the risk of querying or modifying the wrong environment.

While CLI flags are precise, teams operating many namespaces often prefer centralized visibility. Platforms like Plural provide a unified control plane view across namespaces and clusters, reducing context-switch overhead while preserving Kubernetes-native workflows.

Essential Flags

Several flags significantly increase the utility of this command:

Structured output

kubectl get deployments -o yaml
kubectl get deployments -o json

Use for debugging, drift inspection, and automation.

Extended tabular output

kubectl get deployments -o wide

Adds fields such as container images and selectors.

Label selection

kubectl get deployments --selector=app=backend

Filters by .metadata.labels. This is the preferred way to scope workloads logically.

Specific resource

kubectl get deployment api-server

Queries a single named Deployment.

Mastering namespace scoping and output flags turns kubectl get deployments from a basic listing command into a precise query interface over Deployment controller state.

How to Filter and Customize Your Output

In large clusters, the default tabular output of kubectl get deployments is insufficient. Efficient operations require two capabilities:

  1. Precise filtering (label and field selectors)
  2. Structured or extended output (JSON, YAML, wide, custom columns)

These primitives allow you to move from manual inspection to deterministic queries suitable for automation and policy validation.

Filtering by Label Selectors

Labels (.metadata.labels) are user-defined key–value pairs used to group resources logically. Label selectors are the primary mechanism for workload scoping.

Use -l or --selector:

kubectl get deployments -l app=nginx

You can combine expressions:

kubectl get deployments -l 'environment=prod,tier=backend'
kubectl get deployments -l 'environment in (staging,prod)'

Label selectors are deterministic only if labeling standards are enforced. Inconsistent labeling undermines automation and cluster introspection.

Using Field Selectors

Field selectors filter on Kubernetes-defined object fields, not user-defined metadata. They are more constrained than label selectors and support only specific indexed fields.

Example:

kubectl get deployments --field-selector metadata.name=api-server

Unlike label selectors, field selectors cannot query arbitrary JSON paths. They are limited to supported fields exposed by the API server. For complex state-based filtering, export JSON and post-process.
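Since field selectors cannot express status-based queries, a common pattern is to export JSON and filter with jq. A minimal sketch, assuming jq is installed:

```shell
# List Deployments whose available replica count lags the desired count
kubectl get deployments -o json |
  jq -r '.items[]
         | select((.status.availableReplicas // 0) < .spec.replicas)
         | .metadata.name'
```

The `// 0` fallback matters: a Deployment with no available replicas omits the field entirely, and without the fallback the comparison would silently skip it.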

Changing the Output Format (JSON, YAML)

The -o flag switches output formats. For automation and reproducibility, use structured output:

kubectl get deployments -o yaml
kubectl get deployments -o json

Use cases:

  • YAML: configuration backup, drift inspection
  • JSON: scripting with tools like jq
  • name: list only resource identifiers
  • custom-columns: lightweight structured tabular output

Example:

kubectl get deployments -o custom-columns=NAME:.metadata.name,IMAGE:.spec.template.spec.containers[*].image

This avoids parsing full JSON while preserving machine-readable structure.

Getting More Details with the Wide Format

For expanded tabular context without full object output, use:

kubectl get deployments -o wide

The wide format typically adds:

  • CONTAINERS
  • IMAGES
  • SELECTOR

This is useful for rapid validation of container image tags or confirming the label selector used to match Pods.

Operational Pattern

A common workflow:

  1. Filter with label selectors.
  2. Export JSON.
  3. Post-process with tooling (jq).
  4. Assert invariants (replicas, image tags, rollout status).

Example:

kubectl get deployments -l app=backend -o json \
| jq '.items[] | {name: .metadata.name, replicas: .spec.replicas}'

This pattern scales cleanly across environments and clusters. Teams standardizing workflows with Plural typically treat kubectl as a structured query interface, not just a CLI inspection tool.
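The same pattern extends to other invariants from step 4. For example, a sketch that flags Deployments running an unpinned :latest image (the label value is an assumption):

```shell
# Flag any backend Deployment whose Pod template references a :latest image
kubectl get deployments -l app=backend -o json |
  jq -r '.items[]
         | select(any(.spec.template.spec.containers[].image; endswith(":latest")))
         | "\(.metadata.name) uses :latest"'
```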

How to Check Deployment Health and Status

Deployment health is determined by whether the controller has converged .status to match .spec. At small scale, you can inspect this via kubectl. At fleet scale, constantly switching kubeconfigs and contexts becomes operationally expensive. Platforms like Plural centralize visibility across clusters, but engineers still need to understand the underlying signals exposed by the Kubernetes API.

The following sections focus on interpreting those signals directly from the CLI.

Interpreting Replica Counts

Start with:

kubectl get deployments

Key columns:

  • READY – ready replicas / desired replicas
  • UP-TO-DATE – replicas matching the current Pod template revision
  • AVAILABLE – replicas meeting availability criteria

Interpretation patterns:

  • READY < desired: Pods are not passing readiness probes, still starting, crashing, or unschedulable.
  • UP-TO-DATE < desired during rollout: The new ReplicaSet has not fully replaced the old one.
  • AVAILABLE < desired: Deployment has not met minimum availability guarantees.

If any invariant fails, escalate:

kubectl describe deployment <deployment-name>

This surfaces events, ReplicaSet transitions, and condition details.

Understanding Deployment Conditions

kubectl describe deployment exposes .status.conditions, which are authoritative lifecycle indicators.

Primary conditions:

  • Available – True when the minimum number of replicas are available (respects minReadySeconds).
  • Progressing – True while a rollout is in progress or a new ReplicaSet is being created.

If Progressing remains True without convergence and eventually reports a timeout, the rollout has stalled. Conditions are machine-readable and suitable for CI/CD validation or automated rollback triggers.
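A sketch of reading those conditions programmatically, assuming jq is installed (my-app is a placeholder name):

```shell
# Extract the status and reason of the Progressing condition
kubectl get deployment my-app -o json |
  jq -r '.status.conditions[]
         | select(.type == "Progressing")
         | "\(.status) \(.reason)"'
# A stalled rollout eventually reports: False ProgressDeadlineExceeded
```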

Identifying a Failed Deployment

A Deployment effectively fails when it cannot complete rollout within progressDeadlineSeconds.

Common root causes:

  • Image pull errors
  • CrashLoopBackOff
  • Failing readiness probes
  • Resource constraints (unschedulable Pods)
  • Invalid configuration in the Pod template

Diagnosis sequence:

  1. Inspect Deployment conditions.
  2. Inspect ReplicaSets.
  3. Inspect Pods and events.

Rollback if necessary:

kubectl rollout undo deployment/<deployment-name>

Kubernetes maintains revision history, enabling rapid reversion to the previous stable template.
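To inspect that revision history before reverting, or to target a specific revision rather than the immediately previous one, the related rollout subcommands help (my-app is a placeholder name):

```shell
# List recorded revisions
kubectl rollout history deployment/my-app

# Show the Pod template for a specific revision
kubectl rollout history deployment/my-app --revision=2

# Revert to that specific revision
kubectl rollout undo deployment/my-app --to-revision=2
```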

Monitoring in Real Time

For active rollouts, stream updates:

kubectl get deployments --watch

This continuously updates READY, UP-TO-DATE, and AVAILABLE as the controller reconciles state.

For more precise rollout tracking:

kubectl rollout status deployment/<deployment-name>

This blocks until rollout completion or failure, making it suitable for CI pipelines.

In multi-cluster environments, teams often combine these primitives with centralized tooling like Plural to avoid context switching while still relying on Kubernetes-native status signals for correctness.

What to Do When a Deployment Fails

When a deployment doesn't proceed as expected, it's essential to have a systematic approach to diagnose the problem. A failed deployment isn't just an inconvenience; it can directly impact application availability and user experience, leading to downtime. The key is to move methodically from high-level symptoms down to the specific root cause, using kubectl to gather evidence at each step.

A Kubernetes Deployment moves through distinct stages: Progressing while it's rolling out new pods, Complete when all new pods are ready and available, and Failed if it encounters an unrecoverable issue. A common symptom of failure is a deployment that remains stuck in the Progressing state indefinitely. You might observe this when the READY column in kubectl get deployments shows fewer ready replicas than the desired count (for example, 1/3) for an extended period. This indicates that the new pods are unable to start or become healthy. Other signs include pods repeatedly restarting or never leaving the Pending state. The following sections walk through a standard workflow for identifying the source of the failure, starting with the most common culprits and moving to more detailed diagnostics.

Common Issues and Their Symptoms

A deployment can fail for several reasons, often related to the pods it's trying to create. One frequent issue is resource constraints. If the cluster doesn't have enough CPU or memory to schedule a new pod, or if you've hit a resource quota, the pod will remain in a Pending state. Another common problem is image pull errors. If the image name or tag in your manifest is incorrect, or if the cluster lacks the credentials to access a private registry, you'll see an ImagePullBackOff error. Finally, application-level problems, like a misconfiguration or a bug that causes the container to exit immediately after starting, will result in a CrashLoopBackOff status as Kubernetes repeatedly tries and fails to restart the pod.
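A quick sketch for surfacing these symptoms across the current namespace (the jq variant assumes jq is installed):

```shell
# Pods that are not Running and not Succeeded (Pending, Failed, etc.)
kubectl get pods --field-selector=status.phase!=Running,status.phase!=Succeeded

# Or pull each pod's waiting reason (ImagePullBackOff, CrashLoopBackOff, ...)
kubectl get pods -o json |
  jq -r '.items[]
         | . as $p
         | .status.containerStatuses[]?
         | select(.state.waiting != null)
         | "\($p.metadata.name): \(.state.waiting.reason)"'
```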

Getting Detailed Diagnostics with kubectl describe

Your first step in diagnosing a failed deployment should be to get a detailed overview of its state. The kubectl describe deployment <deployment-name> command is the best tool for this. It provides a comprehensive summary, including the deployment strategy, replica set status, and, most importantly, a chronological list of events. The Events section at the bottom of the output is often the most revealing, as it logs actions taken by the deployment controller and any errors it encountered. For example, it might explicitly state that it was unable to scale up a new replica set due to insufficient resources, pointing you directly to the problem.

Checking Pod Status and Events for Clues

Since deployments manage pods, the root cause of a failure often lies at the pod level. Use kubectl get pods to inspect the status of the pods associated with your deployment. Look for any pods that are not in a Running state. If you see statuses like Pending, ImagePullBackOff, or CrashLoopBackOff, you've found a strong clue. To dig deeper, use kubectl describe pod <pod-name> on a failing pod to see its specific events. For application errors, kubectl logs <pod-name> will show you the container's standard output, which can reveal configuration errors or runtime exceptions. In complex environments, Plural's unified dashboard simplifies this by letting you view deployment status, drill down to pod events, and access logs without switching between multiple terminal commands.

How to View Deployments Across Namespaces

In a multi-tenant or complex Kubernetes environment, applications are often segregated into different namespaces for organization and security. As a result, viewing deployments isn't just about checking a single namespace; it's about gaining a holistic view of the entire cluster. Limiting your checks to the default namespace can lead to blind spots, where failing deployments in other areas go unnoticed. To effectively manage a cluster, you need commands that can cut across these logical boundaries and give you a complete picture of all running workloads.

Using the --all-namespaces Flag

By default, kubectl commands are scoped to the current namespace. To see deployments across every namespace in your cluster, you can use the --all-namespaces flag. This is one of the most common and useful flags for cluster administrators and developers working in shared environments. The command provides a comprehensive list of all deployments, making it easy to scan their status cluster-wide.

You can run the command like this: kubectl get deployments --all-namespaces

For a quicker version, you can use the shorthand -A: kubectl get deploy -A

Both commands produce the same output, which includes a NAMESPACE column, allowing you to see exactly where each deployment lives. This is an essential first step for any cluster-wide health check.

Tips for Cluster-Wide Monitoring

For a more comprehensive snapshot of your cluster's state, you can expand your query to include all resource types using kubectl get all --all-namespaces. This command lists pods, services, deployments, and other key resources, giving you a broader view of how components are interacting. While this is useful for ad-hoc checks, relying solely on the command line for continuous monitoring can be inefficient, especially at scale. It requires you to constantly run commands and manually piece together information from different outputs.

A more streamlined approach is to use a platform that provides a persistent, graphical interface. Plural’s embedded Kubernetes dashboard offers a single pane of glass to view and manage deployments across your entire fleet. This eliminates the need to juggle kubeconfigs or run repetitive commands, giving you an immediate, intuitive understanding of your cluster's health.

Advanced kubectl get deployments Techniques

While the basic kubectl get deployments command is useful for quick checks, its real power emerges when you integrate it into more complex workflows. For automation, scripting, and deep-dive troubleshooting, you need to move beyond the default table output. Advanced techniques involve leveraging different output formats, combining commands, and building simple scripts to automate health checks. These methods allow you to programmatically interact with deployment data, turning the command from a simple inspection tool into a core component of your operational toolkit. By mastering these approaches, you can build more robust and efficient processes for managing your Kubernetes applications.

Using JSON and YAML Output for Automation

The default table format of kubectl get deployments is designed for human readability, but it’s not ideal for scripts. For automation, you need structured, machine-readable data. By using the -o json or -o yaml flags, you can get the complete object definition for a deployment. For instance, you can export the running configuration of a deployment with kubectl get deployment my-app -o yaml.

This structured output can be piped directly into other command-line tools for processing. A common partner for JSON output is jq, a lightweight JSON processor. You can use it to extract specific fields, such as checking the number of available replicas: kubectl get deployment my-app -o json | jq '.status.availableReplicas'. This approach is fundamental for building scripts that validate deployment states or back up configurations.

Combining get deployments with Other Commands

The kubectl get deployments command is often the starting point for a deeper investigation. Once you identify a deployment of interest, you can combine it with other commands to get more context. For example, if you see a deployment is in the process of updating, you can monitor its progress in real-time using kubectl rollout status deployment/my-app-deployment. This command provides live updates on the number of replicas that have been updated, telling you exactly when the rollout is complete.

You can also refine the initial output by sorting or chaining commands. To organize a long list of deployments, you can sort the results by metadata fields like name or creation timestamp: kubectl get deployments --sort-by=.metadata.name. This is especially useful in namespaces with dozens of deployments, allowing you to quickly find what you're looking for without manual scanning.

Scripting Your Deployment Health Checks

You can combine these techniques to create powerful shell scripts for automated health checks. A simple script can iterate through all deployments in a namespace, check their status, and flag any that are unhealthy. For example, a script could use kubectl get deployments -o json to fetch all deployment objects, then use jq to compare the .spec.replicas field with the .status.readyReplicas field for each one.

If a discrepancy is found, the script can automatically run follow-up commands to gather more diagnostic information. It could execute kubectl describe deployment <deployment-name> to get a detailed event log and status conditions. To understand recent changes that might have caused an issue, the script could also pull the deployment’s modification history with kubectl rollout history deployment/<deployment-name>. This automates the initial, often repetitive, steps of troubleshooting.
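Putting these pieces together, a minimal health-check sketch (assumes jq is installed; no namespace flag, so it checks the current namespace):

```shell
#!/bin/sh
# Flag Deployments whose ready replica count doesn't match the spec,
# then dump the Events section of each one's describe output
kubectl get deployments -o json |
  jq -r '.items[]
         | select((.status.readyReplicas // 0) != .spec.replicas)
         | .metadata.name' |
while read -r name; do
  echo "unhealthy: $name" >&2
  kubectl describe deployment "$name" | sed -n '/^Events:/,$p'
done
```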

Beyond the CLI: Managing Deployments at Scale with Plural

While kubectl get deployments is an indispensable command for day-to-day interactions with Kubernetes, its effectiveness diminishes as your environment grows. Managing deployments across dozens or hundreds of clusters, namespaces, and teams using only the command line becomes a significant operational burden. Switching contexts, running repetitive commands, and manually correlating outputs is inefficient and prone to error. This is where the command line reaches its limits for fleet-wide visibility and management.

At scale, you need a higher-level abstraction that provides a consolidated view and streamlines complex workflows. A platform approach offers a single source of truth, making it easier to monitor health, enforce standards, and troubleshoot issues across your entire infrastructure. Plural provides this single pane of glass for Kubernetes, moving beyond individual CLI commands to offer a holistic management experience. It integrates GitOps principles with a powerful UI to give platform teams and developers the tools they need to manage deployments efficiently, securely, and at any scale.

A Unified Dashboard for All Deployments

Instead of juggling kubeconfigs and running kubectl commands against individual clusters, Plural provides a unified dashboard to visualize all your deployments in one place. This centralized view allows you to instantly assess the health and status of applications across different environments, from development to production. You can see which versions are running, monitor resource utilization, and track rollout progress without ever leaving the interface. Plural integrates with your existing identity provider, using Kubernetes impersonation to map your console identity to RBAC policies. This delivers a seamless SSO experience and eliminates the security and operational overhead of distributing and managing individual kubeconfig files for every user and cluster.

Streamlining Your Troubleshooting Workflow

When a deployment fails, the CLI requires you to run a series of commands—describe deployment, get rs, get pods, logs—to piece together the root cause. Plural streamlines this entire process by consolidating all relevant information into a single, intuitive view. From the dashboard, you can drill down from a failing deployment to its underlying ReplicaSets and pods, view associated events, and inspect container logs directly. Because Plural is built on a GitOps foundation, every deployment is tied to a specific commit. This creates a clear audit trail, allowing you to quickly identify the exact code change that may have introduced an issue, dramatically reducing the time it takes to diagnose and resolve problems.


Frequently Asked Questions

What’s the real difference between the READY and AVAILABLE columns? This is a common point of confusion. The READY column shows you a ratio of how many pods have passed their readiness probes against the desired number of replicas. The AVAILABLE column tells you how many of those ready pods have also been running for a minimum amount of time, which you can define with the minReadySeconds field in your deployment spec. For most cases, they will be the same, but AVAILABLE is a slightly stricter health check, ensuring a pod is stable before it's officially counted.

I just created a deployment, but kubectl get deployments shows nothing. Why? The most common reason for this is that you're looking in the wrong namespace. By default, kubectl operates on the default namespace. If you created your deployment in a different one, like production or dev, you won't see it. You can either switch your context to the correct namespace or, more directly, use the -n <namespace-name> flag to specify where to look. To see everything at once, kubectl get deployments -A will list deployments across all namespaces.

How can I quickly see which container image a deployment is using without looking at the full YAML? The fastest way to do this is by using the wide output format. Run kubectl get deployments -o wide to add several useful columns to the standard output, including one named IMAGES. This column will show you the exact container images being used by the pods in that deployment, which is incredibly helpful for quickly verifying which version of your application is running.

My deployment seems stuck during an update. What's the first command I should run to figure out why? When a deployment is stuck, your first move should be to get more details with kubectl describe deployment <deployment-name>. Pay close attention to the Events section at the bottom of the output, as it often contains the specific error, such as a failure to pull an image or insufficient cluster resources. If the events don't reveal the problem, the next step is to inspect the pods themselves with kubectl get pods to look for statuses like ImagePullBackOff or CrashLoopBackOff.

Is there a better way to monitor deployments across many different clusters? While using the --all-namespaces flag is great for a single cluster, managing deployments across an entire fleet with just the command line becomes difficult. Constantly switching kubeconfig contexts is slow and error-prone. This is where a platform like Plural becomes essential. It provides a single, unified dashboard to view and manage deployments across all your clusters, giving you a complete picture of your applications' health without needing to run repetitive commands.
