How to Use `kubectl delete pvc` & Fix a Stuck PVC
When kubectl delete pvc hangs in Terminating, it’s not a kubectl bug—it’s Kubernetes protecting your data. A dependent resource, typically a Pod or controller, still holds a reference to the volume, and a finalizer is preventing deletion until that dependency is resolved. This guide focuses on a systematic approach: identify which workloads are still using the PVC, understand how finalizers enforce safe cleanup, and remove the blockers cleanly. The goal is to delete the claim safely without jumping straight to forceful, high-risk commands that can leave orphaned resources or corrupt data.
Key takeaways:
- Follow a pre-deletion checklist: Before running kubectl delete pvc, always verify active Pod dependencies, check the PV's reclaimPolicy to prevent accidental data loss, and ensure critical data is backed up. This simple process avoids most common deletion failures.
- Resolve stuck PVCs by removing blockers: A PVC stuck in the Terminating state is usually blocked by a finalizer waiting for a dependent Pod to be removed. The correct fix is to scale down the associated workload to release the volume, not to immediately force-delete the claim.
- Manage storage declaratively with GitOps: Instead of manually managing PVCs with kubectl, define your storage resources—including StorageClasses and reclaim policies—as code. This automated, policy-driven approach prevents configuration drift and simplifies cleanup across a large fleet of clusters.
What Is a Persistent Volume Claim?
A Persistent Volume Claim (PVC) is Kubernetes’ abstraction for requesting storage. It lets developers declare what storage an application needs—capacity, access modes, storage class—without caring about how that storage is provisioned. This decouples application configuration from the underlying storage backend, improving portability across environments and clusters.
A PVC effectively forms a contract between an application and the cluster. You define the requirements in YAML, and Kubernetes ensures that suitable storage is made available and mounted into Pods. Crucially, the data lifecycle is independent of the Pod lifecycle, which is why PVCs are foundational for stateful workloads.
How PVCs Work
When a PVC is created, the control plane tries to satisfy it by binding it to a Persistent Volume (PV) that matches its constraints. The binding is one-to-one and exclusive. If no matching PV exists, the claim stays in Pending.
Provisioning can be:
- Static: an administrator pre-creates PVs.
- Dynamic: a StorageClass provisions a PV on demand via a CSI driver.
From the application’s perspective, both models are identical—it just consumes the claim.
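For reference, a minimal claim manifest looks something like the following; the name and storageClassName here are placeholders, and the class must exist in your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim                # placeholder name
spec:
  accessModes:
    - ReadWriteOnce               # single-node read/write mount
  storageClassName: standard      # must match a StorageClass in your cluster
  resources:
    requests:
      storage: 10Gi               # requested capacity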
PVC Lifecycle: Pending, Bound, Lost
PVCs move through a small, well-defined state machine:
- Pending: no suitable PV is available yet.
- Bound: the PVC is successfully bound to a PV and ready for use.
- Lost: the backing PV was deleted or became inaccessible, indicating a serious storage failure that requires manual intervention.
These states are signals for operators, not just status noise—they often explain why Pods can’t start or why cleanup is blocked.
PVCs vs. PVs
PVs and PVCs deliberately separate supply from demand:
- PVs represent the cluster’s storage inventory, managed by platform or infrastructure teams.
- PVCs represent application-level requests for that storage, owned by developers.
This separation of concerns is what allows teams to standardize storage at the platform level while letting application teams remain storage-agnostic.
When to Delete a PVC
Deleting a PVC is a normal part of managing stateful workloads, but it should always be intentional. A PVC represents real storage with real cost and data implications. In practice, PVC deletion typically falls into four categories: environment cleanup, cost optimization, compliance, and error recovery. Each case requires awareness of workload dependencies and the PV reclaim policy to avoid surprises.
Cleaning Up Environments
Ephemeral environments (feature branches, preview deployments, short-lived staging clusters) tend to leak storage if teardown isn’t disciplined. Orphaned PVCs accumulate, consume capacity, and complicate operations. The most common reason a PVC gets stuck in Terminating during cleanup is that a Pod, Deployment, or StatefulSet still references it. Correct teardown order matters: delete workloads first, then claims.
Declarative workflows help here. With GitOps-style environment definitions, entire stacks can be created and destroyed coherently. Plural helps orchestrate this by managing applications and their storage as a single unit, reducing the risk of dangling PVCs.
Optimizing Resources and Costs
A Bound PVC usually maps directly to a billable cloud volume. On platforms like Amazon Web Services (EBS) or Google Cloud (Persistent Disk), unused claims still incur charges. Periodically identifying and deleting PVCs that are no longer attached to running workloads is one of the simplest ways to control storage spend.
This is destructive by design. If the underlying PV uses a Delete reclaim policy, removing the PVC will permanently delete the backing volume and its data. Fleet-level visibility tools, such as the Plural dashboard, make it easier to spot idle or detached PVCs across multiple clusters before they turn into silent cost sinks.
Complying With Data Retention Policies
Regulated environments often require explicit data lifecycle enforcement. Standards like GDPR or HIPAA mandate that data be purged after a defined retention period. In Kubernetes, deleting the PVC is the canonical way to trigger this process.
What happens next depends on the PV reclaim policy. Delete ensures the storage backend wipes the volume. Retain leaves the PV and data intact, shifting responsibility to operators for manual cleanup. Understanding this distinction is critical for compliance audits.
Recovering From Errors
PVC deletion is sometimes part of recovery, not cleanup. Common scenarios include corrupted volumes, failed application migrations that leave data inconsistent, or misconfigured claims that never successfully bind. PVCs stuck in Terminating during these incidents almost always indicate that Pods are still mounted to the volume.
A safe recovery flow is predictable: back up any required data, scale the application down to release the mount, delete the PVC, and redeploy to provision a fresh volume. This resets application state and avoids force-deleting resources in ways that can leave orphaned volumes or dangling finalizers.
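As a rough sketch of that flow, assuming the workload is a Deployment and your PVC manifest lives in a file such as pvc.yaml (both names are placeholders):

# Scale down so the volume is unmounted
kubectl scale deployment <app-name> -n <namespace> --replicas=0
# Delete the old claim
kubectl delete pvc <pvc-name> -n <namespace>
# Recreate the claim from your manifest, then redeploy against the fresh volume
kubectl apply -f pvc.yaml
kubectl scale deployment <app-name> -n <namespace> --replicas=1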
Before You Delete: A Checklist
Deleting a Persistent Volume Claim is a routine task, but it carries risks if not handled carefully. A mistake can lead to data loss or leave your cluster in an inconsistent state with resources stuck in a Terminating status. Before you run kubectl delete pvc, walk through this pre-flight checklist to ensure a clean and safe deletion process. These steps help you verify dependencies, understand the consequences for your data, and create a fallback in case something goes wrong.
Check for active Pod dependencies
The main reason a PVC gets stuck in the Terminating state is that other parts of your Kubernetes setup, like Pods, are still using it. Kubernetes intentionally blocks the deletion to prevent disrupting a running application. Before deleting a PVC, you must identify and terminate any Pods that depend on it. You can find which Pod is using a specific PVC by inspecting the Pod specifications in the relevant namespace. Once identified, scale the managing controller (like a Deployment or StatefulSet) to zero replicas. This safely unmounts the volume and allows the PVC deletion to proceed.
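One way to do this, assuming jq is available, is to list the Pods in the namespace whose volumes reference the claim, then scale the owning controller down:

# List Pods that mount the claim
kubectl get pods -n <namespace> -o json | jq -r '.items[] | select(.spec.volumes[]?.persistentVolumeClaim.claimName == "<pvc-name>") | .metadata.name'
# Scale the owning controller to zero so the volume is unmounted
kubectl scale deployment <deployment-name> -n <namespace> --replicas=0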
Understand the Reclaim Policy
Always check the persistentVolumeReclaimPolicy of your PersistentVolume before deleting its PVC. This policy tells Kubernetes what to do with the underlying storage after the claim is gone. You can inspect the PV’s configuration with kubectl get pv <your-pv-name> -o yaml.
- Delete: The PV and the physical storage are automatically deleted. Your data is gone permanently.
- Retain: The PV is moved to a Released state, but the underlying storage and its data remain intact. This is the safest option for critical data, as it allows for manual recovery.
Understanding this reclaim policy is critical to avoid accidental data loss.
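If the PV currently uses Delete and you want to preserve the data, you can switch it to Retain before removing the claim:

kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'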
Back up critical data
Deleting a PVC can be a permanent and destructive action, especially if the associated PV’s reclaim policy is Delete. Once the underlying storage is gone, you cannot get your data back. Before proceeding, always back up any data you cannot afford to lose. For cloud-based volumes, you can create a snapshot directly through your cloud provider’s console. For a more robust and Kubernetes-native approach, tools like Velero can create backups of your cluster’s persistent volumes. A recent backup is your best safety net, ensuring you can restore the data if the deletion goes wrong.
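As an illustrative example, assuming Velero is already installed with a volume snapshot or file-system backup plugin, a namespace-scoped backup might look like this:

# Back up everything in the namespace, including volume data
velero backup create pre-pvc-cleanup --include-namespaces <namespace>
# Confirm the backup completed before deleting anything
velero backup describe pre-pvc-cleanup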
How to Delete a PVC with kubectl
The primary tool for managing Kubernetes resources is the command-line interface, kubectl. Deleting a Persistent Volume Claim (PVC) is a fundamental operation performed with the kubectl delete pvc command. While the basic command is simple, understanding its variations is essential for managing storage resources effectively across different environments and namespaces. Correctly targeting PVCs for deletion prevents accidental data loss and ensures that you are cleaning up the intended resources without impacting other applications. The following sections break down the specific commands for various deletion scenarios, from removing a single claim to clearing out an entire namespace.
The basic delete command
To delete a single Persistent Volume Claim, you use the kubectl delete pvc command followed by the name of the PVC. This is the most direct way to remove a specific claim that is no longer needed. Executing this command signals to the Kubernetes control plane to begin the termination process for the specified PVC. The associated Persistent Volume (PV) will then be handled according to its reclaim policy.
kubectl delete pvc <pvc-name>
Use this command when you have identified a specific, individual PVC for removal, such as after decommissioning a single application pod that used it.
Delete multiple PVCs at once
For broader cleanup operations, especially in non-production environments, you can delete multiple PVCs simultaneously. Using the --all flag with the kubectl delete pvc command will remove every PVC within the current active namespace. This is an efficient way to reset a development or testing environment by clearing all storage claims. However, this command is destructive and should be used with extreme caution, particularly in production clusters, as it will permanently remove all PVCs in the namespace without individual confirmation.
kubectl delete pvc --all
Delete PVCs in a specific namespace
In multi-tenant clusters, it's critical to operate within the correct namespace to avoid impacting other teams or applications. To delete a specific PVC located in a namespace other than your current context, you must specify the target namespace using the -n or --namespace flag. This ensures the deletion request is routed correctly and only affects the intended resources. This command is essential for precise, targeted resource management in complex environments where multiple applications and services coexist.
kubectl delete persistentvolumeclaim <pvc-name> --namespace <namespace>
Verify the deletion
After you issue a delete command, you should always verify that the resource has been successfully removed. You can check the status of PVCs in a namespace by running kubectl get pvc. If the deletion was successful, the PVC you targeted will no longer appear in the list. If it still appears with a status of Terminating, it may be stuck due to a finalizer or an active pod dependency. This verification step is a crucial part of the workflow, confirming the operation's success and helping you identify issues early if the PVC fails to terminate properly.
kubectl get pvc
What Happens When You Delete a PVC?
Deleting a Persistent Volume Claim (PVC) is not just an API cleanup—it directly affects the bound Persistent Volume (PV) and the underlying storage asset. The exact outcome depends on the PV’s reclaimPolicy. Understanding this flow is essential to avoid accidental data loss and to manage storage lifecycle correctly in Kubernetes.
The Deletion Flow
When you run kubectl delete pvc <name>, the API server marks the PVC for deletion and its status moves to Terminating. Kubernetes then verifies whether any Pods still have the volume mounted. If a Pod is actively using the PVC, deletion is blocked and the claim remains in Terminating. This guardrail prevents filesystem corruption.
Once all references are released, Kubernetes removes the PVC object from etcd. At this point, the claim is gone, but the storage lifecycle is not finished.
What Happens to the Bound PV
After the PVC is deleted, the associated PV immediately transitions from Bound to Released. A Released PV is no longer associated with any claim, but the underlying storage still exists. It is effectively orphaned, awaiting a reclaim action. What happens next is entirely controlled by the PV configuration.
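You can observe this transition directly; the STATUS column (or the phase field) changes from Bound to Released once the claim is gone:

kubectl get pv
kubectl get pv <pv-name> -o jsonpath='{.status.phase}'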
How the Reclaim Policy Determines Data Fate
The reclaimPolicy on the PV defines the final outcome:
- Retain: The PV stays in Released, and the storage backend is left intact. Data persists, and manual cleanup is required. This is commonly used for production data where accidental deletion must be avoided.
- Delete: Kubernetes deletes the PV object and instructs the storage provider to destroy the backing volume.
On cloud platforms like Amazon Web Services (EBS) or Google Cloud (Persistent Disk), Delete is irreversible. Always check the reclaim policy before deleting a PVC.
In larger fleets, platforms like Plural make this safer by giving operators visibility into PVCs, PVs, and reclaim policies across clusters, reducing the risk of deleting storage with unintended consequences.
Troubleshooting a PVC Stuck in the 'Terminating' State
When you run kubectl delete pvc, you expect the resource to disappear. But sometimes, it gets stuck in a Terminating state, refusing to go away. This is a common issue that usually indicates the PVC is still in use by another resource within the cluster. Kubernetes intentionally prevents the deletion to protect against data loss, but this safety feature can become a roadblock if you don't know how to resolve the underlying dependency.
Troubleshooting a stuck PVC involves a methodical process of identifying and removing the resources that are blocking its deletion. In most cases, a Pod, StatefulSet, or other workload is still mounted to the volume. In rarer cases, the issue lies with a finalizer that wasn't properly removed. Understanding these dependencies is the key to resolving the issue and successfully deleting the PVC.
Common causes for a stuck PVC
The most frequent reason a PVC gets stuck in the Terminating state is that an active resource, typically a Pod, is still using it. Kubernetes is designed to protect data integrity, so it will not detach a volume that is actively mounted. The system uses a control loop that waits for all dependencies to be cleared before finalizing the deletion.
This protection is enforced through a mechanism called finalizers. These are special metadata keys that tell Kubernetes to wait for specific conditions to be met before fully deleting a resource. For PVCs, the kubernetes.io/pvc-protection finalizer prevents deletion as long as the PVC is attached to a Pod. Until the Pod is deleted and the volume is unmounted, the finalizer will remain, and the PVC will stay in the Terminating state.
Identify blocking resources
To resolve a stuck PVC, your first step is to find out which Pods are still using it. You can inspect the PVC's details to find clues. Run the describe command to see events and other metadata associated with the claim:
kubectl describe pvc <pvc-name> -n <namespace>
Look for the "Used By" field in the output, which will list any Pods currently mounting the PVC. If a Pod is listed, you must delete it before Kubernetes can proceed with deleting the PVC. For workloads managed by a controller like a Deployment or StatefulSet, you may need to scale the replica count down to zero or delete the entire workload. Once the dependent Pods are terminated, the storage controller will detach the volume, remove the finalizer, and allow the PVC deletion to complete.
Check for Pod mount conflicts and finalizers
If you've deleted all associated Pods and the PVC is still stuck, the issue might be a lingering finalizer. While this protection is usually helpful, the cleanup process can sometimes fail, leaving the finalizer in place and blocking deletion indefinitely. This can happen if the node where the Pod was running goes offline or if there are issues with the storage provisioner.
You can check for finalizers by inspecting the PVC's YAML definition:
kubectl get pvc <pvc-name> -n <namespace> -o yaml
Look for a finalizers block in the metadata section. If the kubernetes.io/pvc-protection finalizer is present even after you've removed all dependent Pods, it's the likely cause of the problem. In these cases, you may need to manually intervene to remove the finalizer and force the deletion.
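A quicker way to inspect just the finalizers field is with a JSONPath query:

kubectl get pvc <pvc-name> -n <namespace> -o jsonpath='{.metadata.finalizers}'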
How to Force-Delete a Stuck PVC
When a Persistent Volume Claim (PVC) gets stuck in the Terminating state, it can bring your workflow to a halt. You’ve run kubectl delete pvc <pvc-name>, but nothing happens. This issue is almost always caused by a Kubernetes feature called a finalizer, which is designed to protect data but can sometimes prevent deletion even when you’re sure the resource is no longer needed.
Force-deleting a PVC should be approached with caution, as it bypasses the standard safety checks that protect your data and system stability. Before you proceed, you must be certain that no pods or other resources depend on the PVC and that any critical data has been backed up. Once you've confirmed it's safe to proceed, you can manually intervene to remove the blocking finalizers and complete the deletion process. The following steps will guide you through identifying the cause and applying the necessary commands to resolve the issue.
What are finalizers?
Finalizers are keys on a Kubernetes object that signal to controllers that specific cleanup actions must be performed before the resource can be fully deleted. They are essentially a list of conditions that must be met for deletion to proceed. For PVCs, a common finalizer is kubernetes.io/pvc-protection. This finalizer prevents the deletion of a PVC as long as it is actively mounted by a pod.
This is a crucial safety mechanism that prevents data loss by ensuring a volume isn't detached while in use. However, if the pod using the PVC was terminated improperly or if the storage controller is unresponsive, the finalizer might never be removed automatically. This leaves the PVC in a Terminating state indefinitely, waiting for a confirmation that will never arrive. Understanding Kubernetes finalizers is key to troubleshooting these kinds of issues.
A step-by-step guide to force deletion
When a PVC is stuck, follow a methodical approach to force its deletion safely. Don't jump straight to the most aggressive command. First, double-check that no pods are using the PVC by running kubectl get pods --all-namespaces | grep <pvc-name>. If any pods are listed, delete them properly and see if the PVC terminates on its own.
If the PVC is still stuck, the next step is to manually remove the finalizer that is blocking its deletion. This is the most common fix. As a final resort, if removing the finalizer doesn't work, you can use a command that forces the deletion immediately by setting the grace period to zero. This last step should only be used when you are certain it won't disrupt your storage backend.
Manually remove the finalizer
To remove the finalizer from a stuck PVC, you will use the kubectl patch command. This command directly modifies the PVC's metadata, allowing you to set the finalizers field to null. This action tells the Kubernetes control plane that all necessary cleanup operations are complete, which unblocks the deletion process.
Execute the following command, replacing <pvc-name> with the name of your stuck PVC and <namespace> with its namespace:
kubectl patch pvc <pvc-name> -n <namespace> -p '{"metadata":{"finalizers":null}}'
After running this command, Kubernetes should proceed with deleting the PVC. You can verify this by running kubectl get pvc <pvc-name> -n <namespace>. If the command returns "Not Found," the deletion was successful.
Use grace periods and force flags
If manually removing the finalizer doesn't resolve the issue, you can use a more forceful approach. The kubectl delete command includes flags to bypass the standard termination process. The --grace-period=0 flag tells Kubernetes to delete the resource immediately, without waiting for the default graceful shutdown period. The --force flag is used to compel the deletion of resources that are otherwise unresponsive.
This method is a last resort, as it can cause issues with the underlying storage provisioner if not handled carefully. To force-delete the PVC, run:
kubectl delete pvc <pvc-name> -n <namespace> --grace-period=0 --force
This command sends a direct instruction to the API server to remove the PVC object from the cluster's state. While effective, it's a powerful tool that bypasses safeguards, so use it only when you fully understand the implications for your Kubernetes PVC setup.
Advanced Troubleshooting for Deletion Issues
When a PVC remains stuck after you’ve removed finalizers and checked for basic Pod dependencies, the root cause often lies deeper within the cluster’s event logs or in the underlying storage infrastructure. These advanced scenarios require a more systematic approach to diagnose issues related to StatefulSets, storage provisioners, and backend connectivity. Instead of resorting to force-deletion, which can mask underlying problems, inspecting these components can reveal the true source of the issue and prevent it from recurring.
This diagnostic work becomes exponentially harder when managing a large fleet of clusters. Tracking down dependencies across different environments often means switching between multiple contexts and tools, which slows down resolution time. With a platform like Plural, you can gain a unified view of your entire Kubernetes fleet. The embedded Kubernetes dashboard provides a single pane of glass to inspect events, logs, and resource states across all your clusters. This simplifies the process of tracking down complex dependencies without needing to juggle multiple kubeconfig files or terminal windows, allowing your team to focus on the fix rather than the friction of access.
Inspect events and logs for clues
Your first step in any advanced troubleshooting scenario should be to check the events associated with the stuck PVC. Kubernetes events provide a high-level overview of what the controller manager is trying to do and why it might be failing. Use the command kubectl describe pvc <pvc-name> to see a detailed history. Often, the message will point directly to the problem, such as a Pod that is still actively using the claim. As one engineer noted, "If a PVC is stuck: It's probably because its Pods or StatefulSets are still running. Make sure you delete them first." This highlights how critical it is to check resource events for clues about lingering dependencies that are preventing deletion.
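You can also filter cluster events down to the claim itself, which is useful when the describe output is noisy:

kubectl get events -n <namespace> --field-selector involvedObject.kind=PersistentVolumeClaim,involvedObject.name=<pvc-name>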
Resolve StatefulSet dependencies
StatefulSets are a common source of PVC deletion issues because they provide strong guarantees around network and storage identity. Unlike Deployments, a StatefulSet maintains a sticky identity for its Pods and their associated storage. To properly release a PVC used by a StatefulSet, you must first ensure the set no longer requires it: before attempting to delete the PVC, scale the StatefulSet down to zero replicas using kubectl scale statefulset <statefulset-name> --replicas=0. This terminates the Pods gracefully and ensures they release their claim on the persistent volumes, allowing the deletion to proceed without conflicts.
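A sketch of that sequence, assuming the StatefulSet's Pods carry an app=<app-label> label (adjust the selector and names for your workload):

kubectl scale statefulset <statefulset-name> -n <namespace> --replicas=0
# Wait until the Pods are actually gone before touching the claim
kubectl wait --for=delete pod -l app=<app-label> -n <namespace> --timeout=120s
kubectl delete pvc <pvc-name> -n <namespace>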
Fix storage backend connectivity
Sometimes, the issue isn't with a Kubernetes resource but with the connection to the physical storage backend. Every Persistent Volume has a reclaim policy that tells the cluster what to do with the underlying storage after the PVC is deleted. According to the Kubernetes documentation, "When an application no longer needs storage, the PVC is deleted. The PV then becomes 'released.' The reclaim policy of the PV tells Kubernetes what to do next." If the storage provisioner (e.g., the AWS EBS CSI driver) cannot communicate with the cloud provider's API to delete the underlying disk, the PV will remain, and the PVC deletion will hang. Check the logs of your storage provisioner pods for authentication errors or network timeouts.
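For example, with the AWS EBS CSI driver the controller typically runs as a Deployment named ebs-csi-controller in kube-system, with the provisioner sidecar in a container named csi-provisioner; names vary by install method, so treat these as placeholders:

kubectl logs -n kube-system deployment/ebs-csi-controller -c csi-provisioner --tail=100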
Best Practices for Managing PVCs
Deleting PVCs is a routine task, but managing them effectively requires a proactive strategy, especially in large-scale environments. Adopting best practices helps prevent data loss, control costs, and reduce the operational burden on your team. While kubectl is effective for direct actions, a systematic approach to storage management ensures your clusters remain healthy and efficient. This is particularly critical when managing a fleet of clusters, where manual oversight is not scalable. Platforms like Plural help enforce these practices consistently through GitOps and centralized management, turning manual checklists into automated, repeatable workflows.
Use labels to stay organized
As your cluster grows, keeping track of dozens or hundreds of PVCs becomes difficult. Using labels is a simple yet powerful way to organize and manage storage resources. Labels are key-value pairs attached to Kubernetes objects that allow you to filter and select them based on specific criteria. You can apply labels to distinguish between production and development storage, making it easier to perform bulk operations or audit resource usage safely.
A well-defined labeling strategy simplifies day-to-day tasks. Instead of tracking individual PVC names, you can use label selectors with kubectl to list, delete, or monitor all PVCs associated with a specific service. For instance, kubectl get pvc -l app=postgres,env=staging instantly shows you all PVCs for the staging PostgreSQL database. This reduces the risk of human error, like accidentally deleting a production PVC when cleaning up a development environment.
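Labels can be applied after the fact and then used for targeted bulk operations; for example:

# Tag an existing claim
kubectl label pvc <pvc-name> -n <namespace> app=postgres env=staging
# Bulk-delete by label; always double-check the selector first
kubectl delete pvc -l app=postgres,env=staging -n <namespace>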
Monitor storage usage
Running out of disk space is a common cause of application failure. Proactive monitoring of your storage usage is essential to prevent unexpected downtime and performance degradation. When a PVC reaches its capacity, the application relying on it can crash or become unresponsive. Regular monitoring helps you identify volumes that are nearing their limit, allowing you to resize them or clean up unnecessary data before it becomes a critical issue. This is especially important for stateful applications like databases, where data integrity is paramount.
Tools like Prometheus and Grafana are commonly used to scrape metrics and visualize storage capacity over time. However, managing monitoring configurations across multiple clusters adds complexity. Plural simplifies this by providing a unified dashboard for your entire Kubernetes fleet. From a single interface, you can track PVC capacity, I/O performance, and other critical metrics across all your environments, ensuring you have complete visibility into your storage health without juggling multiple tools.
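If your Prometheus setup scrapes kubelet volume stats (an assumption that depends on your monitoring configuration), a simple expression for flagging claims that are more than 80% full might look like:

100 * kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes > 80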
Automate cleanup with lifecycle policies
Kubernetes provides a persistentVolumeReclaimPolicy field on Persistent Volumes to define what happens to the underlying storage when a bound PVC is deleted. The three options are Retain, Delete, and Recycle (deprecated). Setting the appropriate reclaim policy is key to automating cleanup and preventing both data loss and resource leakage. For transient data, like caches or temporary test environments, using the Delete policy ensures the underlying storage is automatically removed when the PVC is deleted. This prevents orphaned volumes from accumulating and driving up cloud costs.
For critical production data, the Retain policy is the safest choice. It ensures that the underlying volume is not deleted, giving you an opportunity to recover the data manually. Managing these policies consistently at scale can be challenging. By defining your storage resources using a GitOps workflow, you can enforce these policies programmatically. Plural’s GitOps automation allows you to declare PVCs and their corresponding storage classes in code, ensuring every environment is configured correctly and consistently according to your organization's standards.
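As a sketch of what this looks like in code, here is a StorageClass with an explicit Delete reclaim policy for transient volumes; the provisioner and parameters are shown for AWS EBS as an example and should be adjusted for your platform:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ephemeral-gp3               # placeholder name
provisioner: ebs.csi.aws.com        # AWS EBS CSI driver; use your cluster's provisioner
reclaimPolicy: Delete               # backing volume is destroyed when the PVC is deleted
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3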
Simplify PVC Management at Scale with Plural
Manually running kubectl commands and troubleshooting finalizers across a handful of clusters is manageable, but this approach does not scale. For platform teams responsible for dozens or hundreds of Kubernetes clusters, managing the lifecycle of Persistent Volume Claims becomes a significant operational burden. Tracking dependencies, ensuring proper cleanup, and enforcing storage policies manually is inefficient and prone to error. Without a centralized system, engineers must context-switch between different clusters, each with its own set of tools and configurations, increasing the likelihood of mistakes and making it difficult to maintain a consistent security and compliance posture. This operational friction slows down development cycles and diverts valuable engineering time to routine maintenance.
Plural provides a unified platform to automate and streamline PVC management across your entire Kubernetes fleet. By centralizing control and leveraging GitOps principles, Plural transforms storage management from a reactive, command-line-driven task into a proactive, policy-driven workflow. Instead of relying on ad-hoc scripts and manual checks, you can define your storage policies as code and let Plural enforce them automatically. This allows teams to enforce consistency, improve resource utilization, and reduce the risk of data loss or misconfiguration at scale, freeing up your team to focus on higher-value work.
Automate the storage lifecycle
Plural automates the deployment and configuration of storage-related tools across all your clusters. Instead of manually setting up backup solutions like Velero on each cluster, you can define the configuration once and use Plural’s GitOps engine to roll it out everywhere. This ensures that critical data within your PVCs is consistently backed up according to your policies without manual intervention. Plural’s PR Automation API can also be used to generate the manifests needed for new storage configurations on a self-service basis, standardizing the process for developers and reducing the burden on the platform team.
Monitor and clean up PVCs across clusters
Identifying unused or orphaned PVCs in a large environment is a common challenge that leads to wasted resources and unnecessary costs. Plural’s embedded Kubernetes dashboard offers a single pane of glass to view and manage resources across your entire fleet. From one interface, you can monitor the status of all PVCs, identify which ones are unbound, and trace their dependencies without having to kubectl into individual clusters. This centralized visibility simplifies the process of cleaning up stale resources, ensuring optimal performance and cost-efficiency.
Manage storage with GitOps
With Plural, all aspects of your Kubernetes configuration, including storage, are managed as code. You can define StorageClasses, PVCs, and backup policies in a Git repository, providing a version-controlled source of truth for your storage infrastructure. Plural’s continuous deployment capabilities automatically sync these configurations to the target clusters, ensuring consistency and preventing configuration drift. For the underlying cloud storage resources, Plural Stacks allows you to manage Terraform code with the same API-driven, GitOps-based approach, creating a seamless workflow from the infrastructure layer all the way to the application.
Frequently Asked Questions
What's the most common reason a PVC gets stuck in the 'Terminating' state? The primary cause is an active Pod dependency. Kubernetes uses a finalizer called kubernetes.io/pvc-protection to prevent a PVC from being deleted as long as it's mounted by a Pod. Even if you've deleted the managing controller like a Deployment, a Pod might still be running or in a terminating state, holding onto the volume. You must ensure the Pod is fully terminated before the PVC can be released.
Can I recover data after deleting a PVC? It depends entirely on the persistentVolumeReclaimPolicy of the associated Persistent Volume (PV). If the policy is Retain, the underlying storage volume and its data will persist after the PVC is deleted, allowing you to manually re-attach it or recover the data. If the policy is Delete, Kubernetes will automatically instruct the cloud provider to destroy the storage volume, making data recovery impossible. Always verify the reclaim policy before deletion.
Is it safe to manually remove the pvc-protection finalizer? Manually removing the finalizer should be a last resort. It bypasses the safety mechanism that prevents data corruption by ensuring a volume is unmounted before deletion. Before you patch the PVC to remove the finalizer, you must be absolutely certain that no Pod is using the volume and that the underlying storage controller has successfully detached the disk from the node. Forcing the removal without this confirmation can lead to an inconsistent state in your storage backend.
How does deleting a PVC used by a StatefulSet differ from one used by a Deployment? StatefulSets provide stable, unique network identifiers and persistent storage for each Pod. Because of this tight coupling, you must scale the StatefulSet down to zero replicas before deleting its PVCs. This ensures a graceful shutdown of the Pods and proper release of their volumes. With a Deployment, Pods are generally stateless and interchangeable, so the process is often simpler, but the core principle of terminating the Pod before deleting the PVC remains the same.
How can I find all the PVCs that aren't being used by any pods? Identifying unused PVCs typically requires scripting to cross-reference all PVCs in a cluster with all running Pods to see which claims are not mounted. This can be complex in large environments. A more direct approach is to use a centralized dashboard, like the one provided by Plural, which gives you a fleet-wide view of all your resources. This allows you to quickly filter for PVCs in an Available or Released state, indicating they are no longer bound to a claim or in active use.
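A minimal sketch of that scripting approach, assuming bash and jq are available, compares every namespaced PVC against the claims mounted by running Pods and prints the ones no Pod references:

comm -13 \
  <(kubectl get pods -A -o json | jq -r '.items[] | .metadata.namespace as $ns | .spec.volumes[]?.persistentVolumeClaim.claimName // empty | "\($ns)/\(.)"' | sort -u) \
  <(kubectl get pvc -A -o json | jq -r '.items[] | "\(.metadata.namespace)/\(.metadata.name)"' | sort -u)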