
How to Check for Deprecated Kubernetes APIs

Learn how to check for deprecated Kubernetes APIs, avoid upgrade failures, and keep your clusters stable with practical detection and migration strategies.

Michael Guarino

Upgrading a Kubernetes cluster should be a predictable, low-risk maintenance task—but for many platform teams, it often feels risky and uncertain. The usual cause is a deprecated API that slips through unnoticed before the upgrade. When Kubernetes removes that API version, any manifest, controller, or operator still referencing it fails, breaking deployments and sometimes causing downtime.

This issue isn’t an anomaly; it’s an expected outcome of Kubernetes’ continuous API evolution. To prevent it, you need a disciplined lifecycle management workflow. This guide outlines that workflow, beginning with the most essential part: detecting deprecated APIs using command-line tools, static analysis, and automation built into your platform.

Unified Cloud Orchestration for Kubernetes

Manage Kubernetes at scale through a single, enterprise-ready platform.

GitOps Deployment
Secure Dashboards
Infrastructure-as-Code
Book a demo

Key takeaways:

  • Adopt a proactive framework for API management: Instead of reacting during upgrades, continuously track Kubernetes release notes and maintain an inventory of API versions. This turns potential emergencies into routine operational tasks.
  • Automate deprecation checks within your GitOps Workflow: Manual audits are unreliable and don't scale. Integrate automated scanning into your CI/CD pipeline to block pull requests with deprecated APIs, catching issues before they ever reach a cluster.
  • Plan migrations holistically: A successful migration addresses the entire ecosystem, including your manifests, third-party CRDs, and application dependencies. Always validate changes in a staging environment and have a tested rollback plan to minimize risk.

What Is a Deprecated Kubernetes API?

A deprecated Kubernetes API is an older API version that’s being phased out to make room for a newer, more stable release. Deprecation is part of Kubernetes’ core design philosophy—it allows the project to evolve, improve performance, strengthen security, and simplify maintenance without being weighed down by legacy interfaces.

For platform and DevOps teams, managing API deprecations is a key responsibility. When you upgrade a cluster to a version where a deprecated API has been removed, any workloads, controllers, or operators still using it will break. This can trigger deployment failures and potential downtime. The first step toward reliable upgrades is understanding how Kubernetes manages API lifecycles and how to stay ahead of removals.

How Kubernetes API Versioning Works

Kubernetes uses a structured versioning system to balance innovation with stability. Each API progresses through three main stages:

  • Alpha (v1alpha1): Experimental and unstable. These versions are for early testing and may change without notice.
  • Beta (v1beta1, v1beta2): More stable, but still subject to changes as feedback is incorporated.
  • Stable (v1): Production-ready and guaranteed to be maintained long-term.

When a stable API is introduced, its older beta (or alpha) version is marked as deprecated. Deprecated APIs continue to work for several releases before being fully removed. This grace period gives teams time to update manifests, controllers, and Helm charts to use the latest API versions.

How Deprecated APIs Affect Cluster Stability

Continuing to use deprecated APIs puts your cluster at risk during upgrades. Once the Kubernetes API server stops serving a removed version, any resource definitions, CRDs, or automation tools referencing it will fail.

For example, if your CI/CD pipeline applies a manifest using a removed API version, the kubectl apply command will be rejected, blocking deployments. This can cascade into downtime or partial rollouts. Proactive detection and migration of deprecated APIs is essential to maintaining cluster reliability and upgrade safety.

Common Deprecation Scenarios

Deprecation usually occurs when a resource moves to a new API group or reaches a more stable version. For instance:

  • Ingress moved from extensions/v1beta1 → networking.k8s.io/v1
  • CronJob moved from batch/v1beta1 → batch/v1

Before every upgrade, review the Kubernetes Deprecation Guide for your target version. It lists all APIs scheduled for removal. For example, upgrading to v1.25 requires migrating all CronJob resources off batch/v1beta1, as that API version is no longer served.
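In straightforward cases the migration really is a one-line edit. A minimal sketch of the CronJob move (name, schedule, and image are hypothetical; the batch/v1 schema is otherwise the same as batch/v1beta1):

```yaml
# Before: batch/v1beta1, served until v1.24, removed in v1.25
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-report          # hypothetical
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: report
              image: registry.example.com/report:latest
          restartPolicy: OnFailure
---
# After: only the apiVersion changes; the rest of the spec carries over
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: report
              image: registry.example.com/report:latest
          restartPolicy: OnFailure
```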

Staying ahead of these changes prevents deployment failures and ensures a seamless upgrade path.

Common Misconceptions About API Deprecation

API deprecation in Kubernetes often sounds more alarming than it is. Many teams associate it with sudden outages, broken manifests, and forced emergency migrations. In reality, Kubernetes deprecation is a structured, predictable process with clear communication and timelines. Once you understand how the lifecycle works, upgrades become far less stressful.

Deprecation isn’t a failure of the system—it’s a sign of healthy evolution. The goal isn’t to avoid deprecation entirely, but to manage it systematically with awareness and preparation. By clarifying common misconceptions, you can turn what feels like a risk into a routine part of cluster maintenance.

How Deprecation Timelines Work

A deprecated API does not mean immediate breakage. In Kubernetes, deprecation serves as a formal notice that a particular API version will be removed in a future release. The Kubernetes project follows a well-defined deprecation policy that typically allows several minor releases of overlap between deprecation and removal.

This gives teams ample time to plan and migrate without impacting production. Treat deprecation notices as scheduled maintenance windows rather than critical alerts—they’re opportunities to stay ahead of change, not signals of imminent failure.

What Happens to Existing Resources

One of the most common fears is that upgrading to a new Kubernetes version will instantly break workloads using removed APIs. That’s not how it works. When an API version is removed, existing objects stored in etcd remain intact and continue running.

For example, a Deployment created using a deprecated API version will keep managing its pods even after the upgrade, because objects are stored independently of the API version used to create them. What breaks is access through the removed version: any manifest, pipeline, or client that still requests the old endpoint is rejected, so you can’t modify or update those resources that way until their definitions reference a supported API version. The workloads remain functional, but administrative operations through the outdated definitions are blocked until migration is complete.

How Complex Is the Migration Process?

The effort required to migrate off deprecated APIs depends on your environment’s size and complexity. In simple cases, it may involve nothing more than updating the apiVersion field in a few manifests. In larger environments, discovery becomes the main challenge.

You’ll need to locate deprecated APIs across raw YAML manifests, Helm charts, Kustomize overlays, and custom operators. This is especially difficult when resources span multiple namespaces or CI/CD pipelines. A structured audit—combining command-line tools, static analysis, and cluster inspection—is the first and most important step. Once you have a complete inventory, the actual migrations are usually straightforward.

How to Find Deprecated APIs in Your Cluster

Detecting deprecated APIs is one of the most important pre-upgrade tasks in Kubernetes. Once an API version is marked for removal, continuing to use it risks failed deployments or broken workloads after the next upgrade. While the deprecation policy is well-documented, actually finding all instances of deprecated APIs across manifests, Helm charts, and live resources can be challenging—especially in multi-team, multi-cluster environments.

The good news is that Kubernetes offers multiple ways to identify these APIs, from simple command-line checks to deeper log-based analysis. Using a combination of methods gives you complete visibility into your cluster’s API usage and ensures you’re ready for any upgrade.

Use kubectl to Find Deprecated APIs

kubectl is the most direct way to detect deprecated APIs in use. Since Kubernetes v1.19, when you apply or create a resource with a deprecated-but-still-served API version, the API server returns a warning that kubectl prints in your terminal. For example, applying a CronJob with batch/v1beta1 on a v1.21–v1.24 cluster triggers a message recommending batch/v1 instead.

To scan manifests proactively, you can use:

kubectl apply --dry-run=server -f your-manifest.yaml

This validates the manifest against the live API server without persisting anything, so the server’s deprecation warnings surface exactly as they would on a real apply. (A client-side dry run never contacts the server and therefore misses these warnings.) Running this command across your repository gives an immediate snapshot of which manifests rely on deprecated APIs.

Inspect API Versions in Your Manifests

For earlier detection—before manifests ever reach the cluster—review the apiVersion fields in your YAML files. Cross-reference these with the official Kubernetes API documentation for your target upgrade version.

In smaller environments, a quick grep can help:

grep -R "v1beta1" ./manifests

However, in large environments with hundreds of manifests or multiple Git repos, manual searches don’t scale. It’s easy to miss deprecated APIs embedded in Helm templates, Kustomize overlays, or custom resources. Automating this step using CI-based scanning or policy-as-code tools provides much better coverage and repeatability.
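One way to make the grep above repeatable is a short script that inventories every apiVersion in a manifest tree, so you can diff the result against the deprecation guide. The sketch below builds a throwaway sample directory to stay self-contained; in practice you would point the grep at your real manifests repo:

```shell
# Build a tiny sample tree so the sketch is self-contained;
# replace /tmp/apiver-demo with your actual manifests directory.
demo=/tmp/apiver-demo
mkdir -p "$demo"
cat > "$demo/cronjob.yaml" <<'EOF'
apiVersion: batch/v1beta1
kind: CronJob
EOF
cat > "$demo/deploy.yaml" <<'EOF'
apiVersion: apps/v1
kind: Deployment
EOF

# Count the distinct apiVersion values in use across all files.
grep -rh '^apiVersion:' "$demo" | sort | uniq -c | sort -rn
```

Any v1beta1 entries in the output are candidates to check against the removal list for your target version.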

Interpret API Server Warnings

The Kubernetes API server itself records every use of a deprecated API. Each time a client—such as kubectl, a controller, or an operator—calls a deprecated endpoint, the API server returns a warning and annotates the corresponding audit event with k8s.io/deprecated: "true" (plus k8s.io/removed-release, indicating the release in which the version disappears).

By inspecting audit logs and filtering for that annotation, you can identify which APIs are still being used and which components are calling them. This method is particularly effective for spotting deprecated API usage by automated systems, where warnings may not appear in any developer-facing CLI output.
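If your audit policy writes JSON-lines events to a file, even a plain grep isolates calls flagged with the k8s.io/deprecated annotation. The sample log below is inline so the sketch is self-contained; in a real cluster the file lives wherever --audit-log-path points:

```shell
# Two inline sample audit events; real logs come from --audit-log-path.
log=/tmp/audit-sample.jsonl
cat > "$log" <<'EOF'
{"kind":"Event","requestURI":"/apis/batch/v1beta1/namespaces/default/cronjobs","annotations":{"k8s.io/deprecated":"true","k8s.io/removed-release":"1.25"}}
{"kind":"Event","requestURI":"/apis/apps/v1/namespaces/default/deployments","annotations":{}}
EOF

# Each matching line is one request to a deprecated endpoint;
# the requestURI and user fields identify who is still calling it.
grep '"k8s.io/deprecated":"true"' "$log"
```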

Manually Detect Deprecated APIs

A manual audit combines several of the above methods—CLI validation, manifest inspection, and API server log analysis—guided by Kubernetes release notes and the official deprecation guide. Engineers typically build a checklist of deprecated APIs for the next target version and then search for their presence across cluster resources and source repositories.

While this offers complete control, it’s time-intensive and error-prone. Manual detection doesn’t scale well for enterprise or multi-cluster operations and can easily miss hidden dependencies, leading to failed upgrades or downtime. This is why most mature teams eventually transition from manual auditing to automated detection and policy enforcement—reducing human error and improving confidence in every cluster upgrade.

Tools for Detecting Deprecated APIs

Manually checking for deprecated APIs is inefficient and prone to error, especially across multiple clusters. Several tools can automate this process, providing clear reports on which resources need attention before your next cluster upgrade. Integrating these tools into your workflow helps maintain cluster health and prevents upgrade failures.

Kubent (kube-no-trouble)

Kubent, or kube-no-trouble, is a command-line utility that scans live Kubernetes clusters to find resources using deprecated API versions. It connects to your cluster using your local kubeconfig file and inspects the resources directly. The output provides a clear list of problematic resources, including their kind, namespace, name, and the specific deprecated API version they use. This makes it straightforward to identify exactly what needs to be updated. Because it inspects the live state of your cluster, Kubent gives you an accurate snapshot of your current deployments, which is essential for planning migrations ahead of a Kubernetes version upgrade. It's a simple yet effective tool for ad-hoc checks and pre-upgrade audits.

Pluto and Other Open-Source Tools

While Kubent inspects live clusters, tools like Pluto take a different approach by performing static analysis on your Infrastructure as Code (IaC) files. Pluto can scan local directories containing Kubernetes manifest files and Helm charts to detect deprecated API versions before they are ever deployed. This allows you to catch issues early in the development lifecycle, typically within a CI/CD pipeline. By checking the code itself rather than the running cluster, Pluto helps enforce best practices and prevents new resources with deprecated APIs from being introduced into your environment. This proactive approach complements the reactive scanning of a tool like Kubent, giving you comprehensive coverage from development to production.

Use Plural's Built-in Detection

For teams managing Kubernetes fleets at scale, relying on standalone tools can create operational overhead. Plural addresses this by integrating API deprecation detection directly into its platform. The Plural CD system provides continuous visibility into the health of your applications and clusters. As part of its deployment and monitoring capabilities, Plural flags resources that use deprecated APIs, presenting this information within its unified dashboard. This means you don't have to run separate checks or stitch together different tools. Instead, deprecation warnings become a standard part of your fleet management workflow, allowing you to identify and remediate issues across all your clusters from a single control plane without compromising security or adding complexity.

Set Up Automated Scanning

To effectively manage API deprecations, detection should be an automated, continuous process, not a manual, periodic task. Integrating tools like Kubent or Pluto into your CI/CD pipeline is a critical step. For example, you can run Kubent with the --exit-error flag, which causes the command to return a non-zero exit code if it finds any deprecated APIs. This will automatically fail a pipeline build, preventing problematic code from being deployed and forcing developers to address the issue immediately. By automating these checks, you create a reliable gatekeeper that maintains the integrity of your Kubernetes configurations. This approach reduces the risk of human error and ensures that your clusters remain stable and ready for future upgrades.

A Framework for Managing API Deprecation

A reactive approach to API deprecation—waiting for things to break during an upgrade—is a recipe for downtime. A better strategy is to build a systematic framework for managing the API lifecycle. This involves proactive tracking, rigorous testing, clear documentation, and careful planning. By treating API migration as a continuous process rather than a one-off emergency, you can ensure smooth cluster upgrades and maintain application stability across your fleet.

Track API Versions

Kubernetes follows a strict API versioning policy, and successive releases have deprecated many v1beta1 and v2beta1 APIs. The first step in any management framework is awareness. Your team needs a reliable way to track which APIs are used in your clusters and which are slated for removal in upcoming versions. Maintain an inventory of the API versions used in your manifests, Helm charts, and custom controllers. Regularly consult the official Kubernetes release notes and deprecation guides to stay ahead of changes. This proactive monitoring allows you to identify dependencies on deprecated APIs long before they become a critical issue during an upgrade.
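The inventory doesn’t need to be elaborate. A version-controlled file along the lines of this hypothetical sketch is enough to record what is in use, where it is headed, and when it disappears:

```yaml
# api-inventory.yaml — hypothetical tracking file kept in Git
- kind: CronJob
  current: batch/v1beta1
  migrate_to: batch/v1
  removed_in: "1.25"        # per the Kubernetes deprecation guide
  owner: platform-team
- kind: Ingress
  current: networking.k8s.io/v1
  migrate_to: null          # already on the stable version
  removed_in: null
  owner: web-team
```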

Test and Validate Changes

Once you've identified a deprecated API that needs to be migrated, the next step is to validate the change. Never push updated manifests directly to production. Instead, use a staging or development environment that mirrors your production setup to test the new API versions. Regularly test your applications to confirm they function as expected with the updated configurations. This includes running automated integration tests and performing manual validation to catch any subtle behavioral changes. The goal is to verify that the migration to a stable API version doesn't introduce regressions or disrupt application performance, ensuring a safe rollout.

Document Your Migration Plan

A successful migration relies on clear communication and a well-defined plan. Documenting your strategy ensures every team member understands the scope, timeline, and responsibilities. Your migration plan should detail which deprecated APIs are being replaced, the new stable versions to use, and the specific manifests or applications affected, along with the Kubernetes release in which each deprecated version is set to be removed. This document serves as a single source of truth for the engineering team, reducing confusion and aligning stakeholders on the path forward.

Plan Your Cluster Updates

API migration should be a core part of your cluster upgrade strategy, not a separate task. Before upgrading to a new Kubernetes version, always check the official Kubernetes Deprecation Guide for APIs removed in your target version. For example, the batch/v1beta1 CronJob API was removed in v1.25, and any attempt to upgrade with manifests using that version would fail. By resolving all API deprecations before initiating an upgrade, you prevent failed deployments and unexpected downtime. Using a platform that provides a unified view of your clusters, like Plural's multi-cluster dashboard, can help you manage these dependencies and execute upgrades systematically.

How to Automate API Detection

Manual methods for finding deprecated Kubernetes APIs don’t scale. As your infrastructure expands, relying on engineers to periodically run kubectl or inspect manifests becomes inconsistent and risky. One overlooked deprecation can derail an entire upgrade. The only sustainable approach is automation—embedding API detection into your development, deployment, and monitoring workflows.

Automating this process enforces consistency, reduces manual overhead, and ensures that your clusters remain compliant across all environments. A well-designed automation strategy applies checks at every stage of the lifecycle—version control, CI/CD, and live cluster monitoring—so deprecated APIs are caught early and addressed proactively.

Integrate Checks into Your CI/CD Pipeline

CI/CD pipelines are the ideal first layer for automation. By integrating tools like kubent, you can automatically scan manifests before deployment. Running:

kubent --exit-error

causes the command to return a non-zero exit code if any deprecated APIs are found, failing the pipeline and preventing those resources from being deployed. This “shift-left” approach delivers immediate feedback to developers, ensuring that API compliance is enforced long before changes reach production.

Integrating this check into every pipeline standardizes enforcement across teams, eliminating manual drift and reducing upgrade surprises.
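As a concrete sketch, a GitHub Actions job along these lines would gate every pull request. The install step, target version, and manifest paths are illustrative assumptions—adapt them to your repository and runner:

```yaml
# Hypothetical CI job; install step and paths are illustrative.
name: api-deprecation-check
on: [pull_request]
jobs:
  kubent:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install kubent
        run: sh -c "$(curl -sSL https://git.io/install-kubent)"
      - name: Fail on deprecated APIs
        # -e returns a non-zero exit code on any finding;
        # -t pins the check to the upgrade you are planning;
        # -f may be repeated, one flag per manifest file.
        run: kubent -e -t 1.25 -f manifests/cronjob.yaml -f manifests/ingress.yaml
```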

Set Up Continuous Monitoring

While CI/CD automation prevents new issues, continuous monitoring ensures that existing workloads remain compliant. Over time, previously deployed resources or legacy services may still depend on deprecated APIs.

Platforms like Plural make it easier to track this by continuously scanning active clusters and surfacing deprecated API usage through a unified dashboard. Continuous scanning provides a real-time inventory of affected resources, helping teams plan migrations systematically instead of reacting during upgrades.

This visibility is especially critical for environments where some workloads predate your automation efforts or were deployed manually.

Configure Alerts for Deprecated APIs

Detection is valuable only if teams act on it. Kubernetes 1.19+ includes built-in warnings when applying deprecated APIs, but relying solely on CLI feedback isn’t enough. You should extend alerting to your organization’s monitoring and incident response tools.

Configure systems like Prometheus, Grafana, or Plural’s monitoring integrations to trigger alerts when deprecated APIs are detected. Send notifications to channels such as Slack, PagerDuty, or email to ensure rapid response. This closes the loop between detection and remediation, preventing issues from going unnoticed in dashboards.
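Since v1.19 the API server also exports the apiserver_requested_deprecated_apis gauge, labeled by group, version, and resource, which makes alerting straightforward. A sketch of a Prometheus rule—the threshold, duration, and severity label are examples, not recommendations:

```yaml
# Sketch of a Prometheus alerting rule for deprecated-API usage.
groups:
  - name: api-deprecations
    rules:
      - alert: DeprecatedAPIInUse
        # The gauge is set for each deprecated group/version/resource
        # that clients have actually requested.
        expr: apiserver_requested_deprecated_apis > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: >-
            Deprecated API {{ $labels.group }}/{{ $labels.version }}
            {{ $labels.resource }} is still being requested.
```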

Integrate with Version Control

For even earlier detection, integrate API scanning directly into your version control system. Tools such as Plural CD can automatically analyze manifests in your Git repositories and block pull requests that introduce deprecated APIs.

Running these checks at the Git level aligns perfectly with GitOps workflows—ensuring that your declarative desired state is always compliant before it ever reaches production. This approach enforces consistency across teams and simplifies upgrade planning by identifying deprecated resources well before they cause runtime failures.

By embedding automation throughout your pipeline—from Git to runtime—you turn API lifecycle management from a reactive process into a continuous, predictable part of Kubernetes operations.

Strategies for Migration and Updates

Once you have identified the deprecated APIs in your cluster, the next step is to create a methodical plan for migration. A successful Kubernetes upgrade depends on a proactive strategy that addresses not only your own manifests but also the entire ecosystem of tools and custom resources running in your environment. This involves updating configurations, managing third-party dependencies, and preparing a solid rollback plan in case of unexpected issues.

Update Your Manifests

The most direct part of the migration process is updating your own Kubernetes manifests. While Kubernetes provides official documentation to track deprecated APIs, identifying every resource in your cluster that uses them can be challenging, especially in large environments. The process involves replacing the old apiVersion with the new one and adjusting any fields that may have changed between versions. For example, an Ingress resource might need to be updated from extensions/v1beta1 to networking.k8s.io/v1.
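The Ingress migration is a good example of a change that goes beyond the apiVersion field: in networking.k8s.io/v1, pathType is required and the backend is expressed as a nested service object. A before/after sketch with hypothetical names:

```yaml
# Before: extensions/v1beta1 (removed in v1.22)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web                 # hypothetical
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 80
---
# After: networking.k8s.io/v1 — pathType is now mandatory and the
# backend is a structured service reference.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```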

Managing these changes manually across hundreds of files is prone to error. Adopting a GitOps workflow with a tool like Plural CD centralizes your manifests in a version-controlled repository. This allows you to perform find-and-replace operations across your entire configuration base, review changes through pull requests, and apply them consistently to all target clusters.

Handle Custom Resource Definitions

Custom Resource Definitions (CRDs) introduce another layer of complexity because their lifecycle is managed by third-party applications, not the Kubernetes core. Before upgrading, you must check the Kubernetes Deprecation Guide for APIs removed in your target version. For instance, the batch/v1beta1 CronJob API was removed in v1.25, which could break operators or controllers that still create CronJob resources through that version. This validation is crucial for ensuring your custom resources, and the controllers behind them, remain compatible.

For each CRD, consult the documentation of the application that installed it (e.g., Prometheus Operator, Istio, or cert-manager) to confirm its compatibility with your target Kubernetes version. You may need to upgrade the application or its operator before upgrading the cluster itself. Plural’s open-source application marketplace simplifies this by providing curated, up-to-date versions of popular tools, helping you manage their lifecycle alongside your cluster upgrades.

Manage Application Dependencies

Your cluster’s dependencies extend beyond CRDs to include Helm charts, client libraries, and any custom controllers that interact with the Kubernetes API. An outdated monitoring agent or CI/CD tool could fail after an upgrade if it relies on a removed API. Using tools like kubent can help automate scans for deprecated API versions across your clusters, making it easier to identify problematic dependencies.

Before any upgrade, create an inventory of all applications and components running in your cluster. Review their release notes and compatibility matrices to ensure they support the target Kubernetes version. A unified platform like Plural provides a single-pane-of-glass dashboard that gives you deep visibility into all deployed resources, simplifying the process of auditing application dependencies across your entire fleet.

Create a Rollback Plan

Even with careful planning, upgrades can fail. One of the most common challenges during a Kubernetes upgrade is an unforeseen issue with a deprecated or removed API. Having a tested rollback plan is essential to mitigate risk and minimize downtime. Your plan should include taking a snapshot of your etcd datastore, versioning all manifests in Git, and documenting the exact procedure for reverting the control plane and worker nodes to their previous state.

The most important part of any rollback plan is testing it in a non-production environment. This ensures your process works as expected and gives your team the confidence to execute it under pressure. With Plural’s API-driven Infrastructure-as-Code management, your entire cluster configuration is defined in code. This makes rollbacks more reliable, as you can revert a commit in Git and allow the GitOps agent to automatically sync the cluster back to its last known good state.


Frequently Asked Questions

What’s the real risk of ignoring a deprecated API warning? Ignoring a deprecation warning is like ignoring a check engine light. Your cluster will continue to function for a while, but you're setting yourself up for a major failure during your next upgrade. When you update to a Kubernetes version that removes the deprecated API, any deployment pipelines, controllers, or manual kubectl commands trying to use that old API will be rejected. This means you won't be able to update or manage those resources, effectively halting deployments and creating operational bottlenecks until you fix the underlying manifests.

If I upgrade my cluster and discover resources using a now-removed API, are they completely broken? Not exactly. The existing objects, like pods managed by a Deployment defined with the old API, will continue to run in the cluster. What breaks is access through the removed version: any manifest, pipeline, or tool that still requests the old endpoint will be rejected, so you won't be able to update, scale, or delete those resources that way until you move their definitions to a supported API version. The workload is still there, but your existing tooling can't manage it until you correct its definition.

Manual scanning seems overwhelming for a large fleet of clusters. How can I manage this at scale? You're right, manual scanning doesn't scale. For multiple clusters, especially with different teams and applications, you need an automated and centralized approach. This is where a platform like Plural becomes essential. Instead of running ad-hoc checks, Plural integrates API deprecation scanning directly into its GitOps workflow. It provides a single dashboard to give you continuous visibility across your entire fleet, flagging resources that use deprecated APIs so you can address them systematically from one control plane.

How should I handle third-party tools and CRDs that use deprecated APIs? This is a critical step that's often overlooked. Before a cluster upgrade, you must verify that every tool, operator, and CRD in your cluster is compatible with the target Kubernetes version. This usually involves checking the documentation for tools like Prometheus Operator, Istio, or cert-manager. You will likely need to upgrade these applications before you upgrade Kubernetes itself. This ensures their custom resources are migrated to supported API versions and won't break when the cluster is updated.

What's the most effective first step to get ahead of API deprecations? The best place to start is by integrating automated checks into your CI/CD pipeline. This "shifts left" the responsibility for API health. By adding a tool that scans your Kubernetes manifests during the build process, you can configure the pipeline to fail if it detects a deprecated API. This prevents new, non-compliant configurations from ever reaching your clusters and provides immediate feedback to developers, making API management a routine part of your workflow rather than a pre-upgrade fire drill.
