`kubectl get statefulsets`: A Practical Guide
Learn how to use `kubectl get statefulsets` to monitor, troubleshoot, and manage stateful applications in Kubernetes.
As a platform engineer operating multiple Kubernetes clusters, you routinely switch contexts to validate the health of stateful infrastructure. For workloads like PostgreSQL clusters or Kafka brokers, continuous verification is mandatory.
kubectl get statefulsets is the fastest way to inspect StatefulSet status, but executing it manually across clusters and namespaces does not scale. This guide focuses on practical techniques: efficient filtering, structured outputs for automation, and troubleshooting patterns. It also examines how to extend this visibility beyond the CLI to maintain a unified operational view of stateful workloads across environments.
Key takeaways:
- Interpret the `READY` column for quick health checks: The `kubectl get statefulsets` command is your starting point for any investigation. A `READY` status like `2/3` is an immediate signal that a pod is unhealthy and requires further inspection.
- Follow a `describe`-then-`logs` workflow for troubleshooting: When a StatefulSet is unhealthy, use `kubectl describe` to check for cluster-level events like storage or image pull errors. From there, use `kubectl logs` on the failing pod to find application-specific issues.
- Use a centralized dashboard to manage StatefulSets at scale: Relying solely on `kubectl` across many clusters leads to context-switching and errors. Plural’s dashboard provides a single view of all your StatefulSets, simplifying monitoring and helping you diagnose issues faster.
What Is a Kubernetes StatefulSet?
A StatefulSet is a Kubernetes workload API designed for stateful systems that require stable identity and persistent storage. Unlike stateless services, stateful applications depend on predictable pod identity, durable volumes, and ordered lifecycle guarantees.
Consider distributed systems such as PostgreSQL clusters or Kafka brokers. Each replica has a defined role, stores a distinct data subset, and participates in peer coordination. If a pod restarts, it must retain its network identity and reattach to the same storage to preserve cluster consistency.
A StatefulSet enforces these guarantees:
- Stable pod identity via deterministic naming (`<name>-0`, `<name>-1`, etc.).
- Stable network identity through a headless Service.
- Dedicated PersistentVolumeClaims per replica.
- Ordered create, update, and delete semantics.
When a pod is rescheduled, Kubernetes rebinds it to its original volume and ordinal index. This deterministic behavior is critical for consensus systems, replication protocols, and quorum-based clusters. At fleet scale, maintaining visibility into these resources across clusters requires more than ad hoc CLI checks, which is where centralized observability becomes important for platform teams.
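To make these guarantees concrete, here is a minimal manifest sketch pairing a headless Service with a StatefulSet. All names, the postgres:16 image, and the 10Gi request are illustrative assumptions, not values prescribed by this guide:

apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None            # headless: gives each pod a stable DNS record
  selector:
    app: db
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db-headless   # ties pod network identity to the headless Service
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # one PVC per replica: data-db-0, data-db-1, ...
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi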
StatefulSets vs. Deployments
Although both manage pods, StatefulSets and Deployments target different workload classes.
Deployments are optimized for stateless replicas. Pods are interchangeable, identities are ephemeral, and scaling events do not require ordering guarantees.
StatefulSets are designed for non-interchangeable replicas with strict identity and storage coupling.
Key distinctions:
Pod identity: Deployment pods receive random suffixes and can be replaced freely. StatefulSet pods use stable ordinal names that persist across restarts.
Storage: Deployments typically share or dynamically attach volumes without identity coupling. StatefulSets generate one PersistentVolumeClaim per pod, preserving data affinity.
Lifecycle semantics: Deployments update replicas in parallel by default. StatefulSets enforce ordered rollout and scale operations (e.g., 0 → 1 → 2).
For clustered systems, unordered replacement can break quorum, replication topology, or leader election. StatefulSets prevent this class of failure.
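You can observe this ordering directly. A quick sketch, assuming a StatefulSet named kafka with pods labeled app=kafka:

# Scale up; the controller creates kafka-3, then kafka-4, strictly in order
kubectl scale statefulset kafka --replicas=5
# Stream pod transitions to watch the ordered startup
kubectl get pods -l app=kafka -w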
When to Use a StatefulSet
Use a StatefulSet when:
- Pods must retain stable hostnames.
- Each replica requires dedicated persistent storage.
- Replica ordering matters during startup, scaling, or shutdown.
- Loss of a specific pod’s identity would cause data inconsistency.
Common use cases include:
- Clustered databases (MySQL, PostgreSQL, Cassandra, MongoDB)
- Distributed messaging systems (Kafka, RabbitMQ)
- Coordination systems (ZooKeeper, etcd)
- Any workload requiring stable peer discovery
If your architecture treats replicas as unique, state-bearing entities rather than disposable workers, a StatefulSet is the correct abstraction.
What Does kubectl get statefulsets Do?
kubectl get statefulsets queries the Kubernetes API server for StatefulSet resources and returns their current status. It provides a concise operational snapshot of stateful workloads, including replica counts and readiness.
StatefulSets manage applications that require stable identity and persistent storage. When you run this command, you retrieve metadata and status fields that reflect controller reconciliation state—primarily desired replicas versus ready replicas. This is the first diagnostic checkpoint when validating rollout success, investigating degraded clusters, or verifying scaling events.
For platform engineers managing databases, message brokers, or coordination systems, this command is part of the core observability loop.
Core Functionality
At execution, kubectl issues a GET request to the Kubernetes API for StatefulSet objects within a namespace (or cluster-wide with --all-namespaces).
Default output columns typically include:
- NAME – StatefulSet identifier
- READY – Ready replicas / desired replicas
- AGE – Resource lifetime
Example:
kubectl get statefulsets -n data-platform
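Illustrative output (workload names are hypothetical):

NAME        READY   AGE
zookeeper   3/3     21d
cassandra   2/3     9d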
The READY column is operationally significant. A mismatch (e.g., 2/3) indicates at least one replica has not passed readiness checks, which may signal scheduling issues, failed probes, PVC binding problems, or crash loops.
Why Monitor StatefulSets?
StatefulSets manage systems where replica identity and storage affinity are coupled. Failure modes are materially different from stateless workloads:
- A missing replica may break quorum.
- A failed PVC binding may block pod startup.
- An unordered restart may impact leader election or replication.
Regularly inspecting StatefulSet status allows you to:
- Validate scaling operations.
- Detect readiness degradation early.
- Confirm ordered rollouts completed successfully.
- Identify drift between desired and actual state.
For fleet-scale environments, manual CLI checks do not scale. Aggregating this signal into centralized dashboards or automation pipelines provides a consistent control-plane view across clusters.
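As a sketch of that automation, a single JSONPath projection can emit a per-StatefulSet ready ratio across all namespaces (note that status.readyReplicas is omitted from the API object when no replicas are ready, leaving that field blank in the output):

kubectl get statefulsets --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.status.readyReplicas}{"/"}{.spec.replicas}{"\n"}{end}'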
How to Use kubectl get statefulsets
kubectl get statefulsets is the primary read-only command for checking the status of stateful workloads. It provides a compact, API-derived snapshot of StatefulSet objects in a namespace and is typically the first command you run during health checks or incident triage.
While kubectl describe exposes event-level detail, get statefulsets answers the immediate question: does the controller’s desired state match the cluster’s actual state? For databases, message brokers, and coordination systems, that distinction is operationally critical.
Understanding the Default Output
Running the command without flags:
kubectl get statefulsets
returns a tabular summary scoped to the current namespace. The output is intentionally minimal and optimized for fast inspection.
Example:
NAME       READY   AGE
postgres   3/3     12d
kafka      2/3     5d
This view allows you to quickly detect replica drift, incomplete rollouts, or readiness failures before diving into deeper diagnostics.
For multi-namespace inspection:
kubectl get statefulsets --all-namespaces
This is especially useful in platform environments where stateful workloads are distributed across teams.
Decoding the Columns: NAME, READY, AGE
The default columns provide high-signal operational data:
NAME
The unique identifier of the StatefulSet within its namespace. This value is required for subsequent commands such as describe, scale, or rollout status.
READY
Represents readyReplicas / spec.replicas.
- `3/3` → all replicas are scheduled, running, and passing readiness probes.
- `2/3` → at least one replica is not ready. This can indicate probe failures, PVC binding issues, image pull errors, or scheduling constraints.
This is the most important column for immediate health assessment.
AGE
Time elapsed since the StatefulSet object was created. This provides rollout context. For example, if AGE is 2m and READY is 1/3, the rollout may still be progressing. If AGE is 30d and READY drops from 3/3 to 2/3, that suggests regression or runtime failure.
Mastering this output enables rapid triage. It reduces the feedback loop between detection and root cause analysis, which is critical when managing stateful infrastructure across multiple clusters.
How to Read StatefulSet Status
kubectl get statefulsets gives you a controller-level health signal. Interpreting it correctly allows you to distinguish between rollout progress, steady-state operation, and failure.
At fleet scale, repeatedly switching kubeconfig contexts to inspect this signal becomes operationally expensive. A centralized control plane like Plural aggregates StatefulSet status across clusters into a single dashboard, eliminating context switching and making drift or replica degradation immediately visible.
Interpret the READY Column
The READY column represents:
readyReplicas / spec.replicas
- First value (`status.readyReplicas`) → pods currently running and passing readiness probes.
- Second value (`spec.replicas`) → desired replica count.
Examples:
- `3/3` → all replicas are ready.
- `2/3` → one replica is not passing readiness checks.
- `0/3` → complete unavailability.
A mismatch does not automatically imply controller failure. Possible causes include:
- Pod stuck in `Pending` due to scheduling constraints.
- PVC not bound.
- CrashLoopBackOff.
- Failing readiness probe.
- Rolling update still in progress.
The READY column is your first anomaly detector. It indicates whether reconciliation has achieved steady state.
Check AGE and Contextual Signals
AGE reflects how long the StatefulSet resource has existed.
Operationally, AGE provides temporal context:
- High AGE + sudden READY degradation → likely runtime regression (image update, config change, node failure).
- Low AGE + partial readiness → rollout still converging or misconfiguration in initial spec.
- Low AGE + persistent 0/N → probable configuration or storage issue.
For deeper inspection, correlate with pod-level status:
kubectl get pods -l app=<label>
kubectl describe statefulset <name>
kubectl describe pod <pod-name>
kubectl logs <pod-name>
Stateful workloads often fail at the storage or network identity layer, so PVC binding status and DNS resolution are common inspection points.
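Two quick checks at those layers, assuming a StatefulSet named db behind a headless Service db-headless, and a container image that ships nslookup (all placeholders):

# PVCs created from volumeClaimTemplates follow the pattern <claim>-<pod>, e.g. data-db-0
kubectl get pvc
# Verify the stable per-pod DNS record resolves inside the cluster
kubectl exec db-0 -- nslookup db-0.db-headless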
Identify Healthy vs. Unhealthy StatefulSets
A StatefulSet is considered healthy when:
readyReplicas == spec.replicas
Any deviation indicates partial or full unavailability.
Healthy:
3/3
Unhealthy:
2/3
0/3
When a mismatch occurs, follow a deterministic debugging sequence (a worked example follows this list):
- Inspect StatefulSet events: `kubectl describe statefulset <name>`
- Identify the failing pod (`<name>-<ordinal>`).
- Inspect pod events and conditions.
- Check container logs.
- Verify PVC status and node scheduling constraints.
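Concretely, for a hypothetical StatefulSet named kafka whose ordinal 1 is failing, the sequence might look like:

# 1. Controller-level events (storage, scheduling, rollout)
kubectl describe statefulset kafka
# 2. Find the not-ready pod
kubectl get pods -l app=kafka
# 3. Pod-level events and conditions
kubectl describe pod kafka-1
# 4. Application logs, including the previous container instance after a crash
kubectl logs kafka-1 --previous
# 5. Storage layer: is every claim Bound?
kubectl get pvc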
For platform teams managing multiple clusters, repeating this workflow manually does not scale. Plural centralizes StatefulSet health signals, allowing you to detect replica drift, readiness failures, and rollout stalls across environments without manual CLI aggregation.
How to Filter StatefulSets by Namespace
Kubernetes uses namespaces for logical isolation across teams, environments, and applications. StatefulSets are namespace-scoped resources, so visibility and operations must be explicitly targeted.
At small scale, namespace filtering via kubectl is straightforward. At fleet scale, repeatedly switching namespaces and contexts increases operational friction and the risk of acting on the wrong environment. Plural mitigates this by providing a centralized Kubernetes dashboard that aggregates StatefulSets across clusters while enforcing existing RBAC boundaries.
Filter by a Specific Namespace with -n
To retrieve StatefulSets in a single namespace:
kubectl get statefulsets -n production-db
or equivalently:
kubectl get statefulsets --namespace=production-db
This scopes the API request to production-db and prevents accidental inspection of staging or development workloads.
This pattern is essential when:
- Validating a rollout in a specific environment
- Investigating namespace-scoped incidents
- Automating health checks per team or service boundary
Namespace scoping should be treated as a default safety mechanism in multi-tenant clusters.
View StatefulSets in All Namespaces with --all-namespaces
For cluster-wide visibility:
kubectl get statefulsets --all-namespaces
This returns a table that includes a NAMESPACE column:
NAMESPACE       NAME       READY   AGE
production-db   postgres   3/3     30d
messaging       kafka      2/3     12d
Use this when:
- Auditing cluster-wide stateful workloads
- Detecting replica drift across teams
- Performing incident correlation
- Inventorying persistent workloads
While effective, this approach becomes inefficient across multiple clusters. Plural consolidates this view across environments, eliminating repeated context switches and enabling consistent cross-cluster StatefulSet visibility without manual CLI aggregation.
How to Customize kubectl get statefulsets Output
The default output of kubectl get statefulsets provides a solid overview, but it often doesn't tell the whole story. For effective troubleshooting and automation, you need to tailor the output to show the exact information you need. Customizing the command's output allows you to extract specific fields, format the data for scripting, or simply get a more detailed view of your resources without having to run multiple commands.
Whether you're debugging a pod scheduling issue or building an automation script, knowing how to manipulate kubectl output is a critical skill. These techniques allow you to work more efficiently from the command line. While a unified dashboard like Plural's console provides this information in a user-friendly interface, mastering CLI customization is essential for scripting and deeper operational tasks. Let's explore three common ways to customize the output: -o wide, JSON/YAML formatting, and custom columns.
Get More Details with -o wide
The simplest way to get more information is by adding the -o wide flag to your command. This option expands the standard output with additional columns that are useful for at-a-glance troubleshooting. With kubectl get statefulsets, it appends the container names and images to the table; with the pods managed by the StatefulSet (kubectl get pods -o wide), it reveals crucial details like the IP address of each pod and the node it's scheduled on. This is incredibly helpful for diagnosing network connectivity issues or identifying which node a problematic pod is running on. It’s a quick, low-effort way to gain more context directly in your terminal.
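For example (the app label value is illustrative):

# Adds container and image columns to the StatefulSet table
kubectl get statefulsets -o wide
# Adds pod IPs and node placement for the managed pods
kubectl get pods -l app=postgres -o wide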
Use JSON and YAML for Automation
When you need to programmatically interact with Kubernetes resources, the standard table format isn't practical. The -o json and -o yaml flags instruct kubectl to output the full resource definition in either JSON or YAML format. This machine-readable output is perfect for scripting, as you can pipe it to tools like jq to parse JSON or use it in automation workflows that need to read or modify resource configurations. For example, you could write a script that fetches all StatefulSets in JSON format, extracts their container image tags, and verifies they are using the correct version. This is a foundational technique for building reliable GitOps-based deployment pipelines and custom automation.
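A sketch of that image-verification idea, assuming jq is installed and that postgres:16 is the expected tag (both assumptions, not requirements):

# List every container image used by StatefulSets in the namespace
kubectl get statefulsets -n data-platform -o json \
  | jq -r '.items[].spec.template.spec.containers[].image'
# Flag anything that is not the expected version
kubectl get statefulsets -n data-platform -o json \
  | jq -r '.items[].spec.template.spec.containers[].image' | grep -v '^postgres:16$'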
Create Custom Columns with JSONPath
For the ultimate control over your output, kubectl supports the -o custom-columns flag. (Full Go templates are available separately via -o go-template.) This feature lets you define exactly which columns to display and what data to pull from the resource's fields. You can specify a custom header for each column and use JSONPath expressions to access any field within the StatefulSet object, including nested ones. For instance, you could create a custom view that shows the StatefulSet's name, the number of desired replicas, and the container image being used. This is particularly powerful for creating tailored reports or focused views that show only the information relevant to a specific task, cutting through the noise of the full YAML or JSON output.
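For instance, the focused view described above could be expressed as follows (the column headers are arbitrary labels):

kubectl get statefulsets -o custom-columns=NAME:.metadata.name,DESIRED:.spec.replicas,READY:.status.readyReplicas,IMAGE:.spec.template.spec.containers[0].image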
What to Do When StatefulSets Don't Appear
Running kubectl get statefulsets and receiving an empty response can be frustrating, especially when you're certain a StatefulSet should be there. This issue typically stems from one of three common areas: an incorrect kubectl configuration, connectivity or permission problems, or a failed deployment. Before diving into complex troubleshooting, it's best to work through these possibilities systematically.
This process of checking context, verifying permissions, and confirming resource status is a routine part of managing Kubernetes. While essential, it can become tedious across a large fleet of clusters. Platforms like Plural streamline this by providing a unified dashboard that offers a single pane of glass for all your clusters. This reduces the chances of context-switching errors and gives you immediate visibility into resource status without needing to manually verify your kubectl configuration for each cluster. The following steps will help you diagnose why your StatefulSets might not be appearing.
Verify Your Namespace and Kubectl Configuration
The most common reason for not seeing a resource is operating in the wrong namespace. Kubernetes resources are isolated by namespaces, and kubectl commands are scoped to your current context unless specified otherwise. First, ensure your kubectl command-line tool is configured to communicate with the correct cluster. If you manage multiple clusters, your kubeconfig file (~/.kube/config) might be pointing to a different environment.
Once you've confirmed you're targeting the right cluster, check the namespace. If your StatefulSet was deployed to a namespace like production, but your current context is default, the command will return nothing. You can explicitly query the correct namespace using the -n flag, like kubectl get statefulsets -n production. For a comprehensive overview, you can also check all namespaces with kubectl get statefulsets --all-namespaces.
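The relevant checks, in order (the production namespace is a placeholder):

# Which cluster is kubectl currently pointing at?
kubectl config current-context
# List all contexts and their default namespaces
kubectl config get-contexts
# Query an explicit namespace instead of relying on the context default
kubectl get statefulsets -n production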
Check Cluster Connectivity and Permissions
If your namespace and configuration are correct, the next step is to verify your connection to the cluster and your permissions. A simple way to check connectivity is by running kubectl cluster-info, which displays the address of the Kubernetes control plane and its services. If this command fails, it indicates a problem with your network connection to the API server.
Permission issues are also a frequent cause, particularly in enterprise environments with strict Role-Based Access Control (RBAC) policies. Your user account or service account may not have the necessary permissions to list or get StatefulSets in the target namespace. You can use the kubectl auth can-i list statefulsets -n <namespace> command to check your access. Plural simplifies this by integrating with your identity provider, allowing you to manage RBAC policies based on user emails and groups directly from the console.
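Both checks as commands:

# Is the API server reachable?
kubectl cluster-info
# Am I allowed to list StatefulSets in this namespace? Prints yes or no.
kubectl auth can-i list statefulsets -n production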
Confirm the StatefulSet Deployed Correctly
If you have ruled out configuration and permission issues, it's possible the StatefulSet was never created successfully. A syntax error or invalid value in your YAML manifest will usually cause kubectl apply to fail outright, but some problems only surface afterward, when an admission controller or the StatefulSet controller rejects or cannot act on the configuration.
To investigate this, check the Kubernetes events for the relevant namespace using kubectl get events -n <namespace>. Look for any warnings or errors related to your StatefulSet's name. These events provide detailed logs about what the cluster's controllers are doing and can reveal why the resource creation failed. Ensuring your manifests follow StatefulSet best practices can help prevent these deployment failures from happening in the first place.
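A field selector can narrow the event stream to a single object (the name postgres is a placeholder):

# Events involving one resource, oldest first
kubectl get events -n production \
  --sort-by=.lastTimestamp \
  --field-selector involvedObject.name=postgres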
Monitor StatefulSets with Plural's Dashboard
While kubectl is an essential tool for interacting with Kubernetes, managing StatefulSets across an entire fleet of clusters calls for a more centralized and intuitive solution. Command-line interfaces can become cumbersome when you need a high-level overview or have to quickly diagnose an issue in a complex environment. Plural’s integrated dashboard provides a single pane of glass to monitor, manage, and troubleshoot your stateful applications efficiently.
Get a Unified View Across All Clusters
Stateful applications like databases and message queues rely on StatefulSets for stable network identities and persistent storage. As these critical services are deployed across various environments—from development to production—maintaining a clear picture of their health becomes a significant challenge. Constantly switching kubectl contexts to check on different clusters is inefficient and prone to error. Plural’s embedded Kubernetes dashboard solves this by consolidating all your clusters into a single, unified interface. This allows your team to see the status of every StatefulSet across your entire fleet at a glance, ensuring you can proactively manage the applications that matter most.
Monitor StatefulSets Without kubectl
Running pods with StatefulSets gives each one a unique and persistent identity, which is ideal for tracking the status of individual application instances. While you can find this information with kubectl, a visual interface often makes it easier to spot trends and anomalies. Plural provides an intuitive UI where you can monitor your StatefulSets without needing to run complex commands. You can easily view pod statuses, resource consumption, and recent events directly in the dashboard. This approach democratizes observability, allowing developers, SREs, and platform engineers to get the insights they need quickly, without requiring deep kubectl expertise or juggling multiple terminal windows.
Troubleshoot Faster with an Integrated Dashboard
Kubernetes automatically manages the lifecycle of pods within a StatefulSet, replacing failed ones while preserving their unique identities. When an issue occurs, time is critical. Instead of chaining together kubectl get, describe, and logs commands to piece together what happened, you can use Plural's dashboard to streamline the process. With just a few clicks, you can navigate from a high-level StatefulSet view down to the specific events and logs of a failing pod. This integrated experience provides the context needed to diagnose root causes faster. By centralizing all relevant information, Plural helps your team reduce mean time to resolution (MTTR) and maintain the reliability of your stateful services.
Related Articles
- StatefulSets vs Deployments: Kubernetes Showdown
- Kubernetes StatefulSet Pod Persistence: A Deep Dive
- Kubernetes StatefulSet: The Ultimate Guide (2024)
Frequently Asked Questions
Why can't I just use a Deployment with a PersistentVolume for my database? While you can attach a PersistentVolume to a Deployment, it lacks the critical guarantees that stateful applications like databases require. A Deployment treats its pods as interchangeable, meaning if a pod is replaced, the new pod is a completely new entity. A StatefulSet, however, provides a stable, unique identity (like db-0, db-1) and ensures that a rescheduled pod always reconnects to its specific storage volume. This predictable identity and ordered scaling are essential for clustered systems where each node has a distinct role and data set.
My StatefulSet's READY status is 2/3. What's my immediate next step? A 2/3 status means one of your three desired pods is not in a ready state. Your first step should be to get more details on the StatefulSet itself by running kubectl describe statefulset <your-statefulset-name>. Pay close attention to the "Events" section at the bottom of the output, as it often contains error messages like image pull failures or issues with mounting storage. If the events don't reveal the problem, identify the not-ready pod with kubectl get pods and then check its logs using kubectl logs <pod-name> to find application-specific errors.
I ran kubectl get statefulsets and got an empty response. What are the most common reasons for this? This usually happens for one of three reasons. The most frequent cause is that you are in the wrong namespace or your kubectl context is pointing to the wrong cluster. Double-check this by running the command with the --all-namespaces flag. Second, you might lack the necessary RBAC permissions to view resources in that namespace. You can verify this with the kubectl auth can-i list statefulsets command. Finally, it's possible the StatefulSet manifest had an error and the resource was never successfully created in the first place; check the cluster events for any deployment failures.
How can I watch a rolling update on a StatefulSet as it happens? Instead of repeatedly running the get command, you can use the watch flag (-w) to stream real-time updates directly to your terminal. By running kubectl get statefulsets <your-statefulset-name> -w, you can monitor the ordered process as each pod is terminated and replaced one by one. This is especially useful for ensuring that the graceful, sequential update process that StatefulSets guarantee is proceeding as expected without any interruptions.
When does it make sense to use a dashboard instead of kubectl for managing StatefulSets? While kubectl is perfect for specific, targeted tasks, its efficiency diminishes as you manage more clusters. A dashboard like Plural's becomes essential when you need a consolidated view of all your StatefulSets across an entire fleet without constantly switching contexts. It simplifies monitoring by providing a high-level overview of application health and allows you to drill down into logs and events through a user interface, which is often faster for troubleshooting and more accessible to team members who aren't kubectl experts.