Diagram of the Kubernetes admission controller API request lifecycle.

Kubernetes Admission Controllers: A Practical Guide

Learn how a Kubernetes admission controller enforces security, automates policy, and streamlines governance for your clusters with practical examples.

Michael Guarino

A “paved road” platform uses automated guardrails to maximize delivery speed without compromising stability or security. Kubernetes admission controllers are a foundational primitive for this model. Rather than acting purely as enforcement gates, they can mutate and validate workloads at deploy time—injecting required sidecars, applying default resource requests and limits, and enforcing labeling standards for ownership and cost allocation.

This approach shifts best practices from documentation into code. Developers no longer need to remember cluster-specific rules, and entire classes of misconfigurations are eliminated before they reach production. The result is lower cognitive overhead, more predictable deployments, and a consistent runtime environment that teams can trust. This guide explains how to use admission controllers to build a resilient, self-service platform that feels fast without being fragile.


Key takeaways:

  • Enforce policy at the API server: Admission controllers act as the final checkpoint for API requests, allowing you to enforce security and governance rules before any resource is written to etcd. This prevents non-compliant configurations from ever entering your cluster.
  • Understand the two-phase process: Admission control operates in a strict sequence: mutating webhooks first modify objects to inject configurations or defaults, and then validating webhooks inspect the final object to approve or deny the request. This order is critical for ensuring policies are evaluated correctly.
  • Automate fleet-wide management: To avoid configuration drift and operational overhead, manage admission controllers declaratively using a GitOps workflow. A platform like Plural centralizes policy definitions and automates controller deployments across your entire fleet, providing the unified observability needed to troubleshoot failures quickly.

What are Kubernetes admission controllers?

Kubernetes admission controllers are API server plugins that enforce how a cluster is used. They intercept requests after authentication and authorization but before objects are persisted to etcd. At this point in the request lifecycle, they can validate, mutate, or reject requests based on cluster-defined policies.

This makes admission controllers a core mechanism for enforcing security posture, resource governance, and configuration standards. Without them, any authenticated user could create workloads with excessive resource requests, privileged containers, or inconsistent metadata—issues that directly impact cluster stability, cost, and security at scale.

Why Kubernetes admission controllers matter

At a technical level, the built-in admission controllers are pieces of code compiled into the kube-apiserver binary and enabled by cluster administrators. They apply rules to object creation, updates, and deletions. Common use cases include enforcing non-root containers, requiring standardized labels, setting default resource requests and limits, and applying quota or policy constraints per namespace.

By moving these rules into the control plane, enforcement becomes automatic and non-optional. Teams no longer rely on documentation or reviews to catch mistakes, and platform operators gain a consistent, auditable way to encode operational best practices directly into the cluster.

Their role in the API request lifecycle

Admission control is a distinct phase in the Kubernetes API request flow. Once a request is authenticated and authorized, it enters admission control, which runs in two ordered steps.

First, mutating admission controllers execute. These can modify incoming objects—for example, injecting sidecars, adding default fields, or normalizing labels. After mutation and schema validation, validating admission controllers run. These are read-only and can only accept or reject the request based on policy.

If the request passes both phases, the final object is persisted to etcd. This ordering is what enables admission controllers to both standardize workloads and enforce invariants, making them essential for building secure, scalable, and developer-friendly Kubernetes platforms.

How do admission controllers work?

Admission controllers are an enforcement layer inside the Kubernetes API server. They intercept API requests after authentication and authorization but before state is written to etcd. Every create, update, or delete operation flows through a chain of admission controllers, each of which can allow the request, reject it, or modify the object.

This positioning makes admission controllers ideal for centralized policy enforcement. They let platform teams enforce security constraints, resource governance, and operational standards without requiring developers to manually encode those rules into every manifest.

Tracing an API request’s journey

When a request like kubectl apply -f pod.yaml hits the Kubernetes API server, it passes through a fixed sequence:

  1. Authentication verifies the caller’s identity.
  2. Authorization checks whether that identity is allowed to perform the requested action.
  3. Admission control inspects the content of the request and enforces cluster-wide policy.

If authentication or authorization fails, the request is rejected immediately. If admission control fails, the request is also rejected—even though the caller was otherwise permitted to act. Only requests that pass all admission controllers are persisted and become part of the cluster’s desired state.

The two phases of admission control

Admission control runs in two ordered phases, each with a distinct responsibility.

Mutating admission controllers run first. They are allowed to modify incoming objects to enforce defaults or apply standard behavior. Typical examples include injecting sidecars, setting default resource requests and limits, normalizing labels, or applying security-related fields.

Once mutation completes and the object is schema-valid, validating admission controllers run. These controllers are read-only. They inspect the final object and either accept or reject it based on policy—for example, blocking privileged containers, rejecting images from untrusted registries, or enforcing required labels.

This ordering ensures that validation always happens against the final, canonical version of the object.

How admission control differs from authentication and authorization

Admission control complements—but does not replace—authentication and authorization. Authentication answers who the caller is. Authorization answers whether they are allowed to perform an action. Admission control answers a different question: is this object acceptable?

A developer might be authorized via RBAC to create Pods in a namespace, but an admission controller can still reject the request if the Pod violates security or operational policy. This separation of concerns enables fine-grained governance: RBAC controls access to the API, while admission controllers control the shape and safety of what enters the cluster.

Mutating vs. validating controllers: what’s the difference?

Kubernetes admission controllers fall into two categories—mutating and validating—with distinct responsibilities and a strict execution order. Both intercept API requests before they are persisted, but one reshapes objects while the other enforces invariants. Understanding this split is essential when designing reliable policy enforcement in a cluster.

Mutating controllers: the editors

Mutating admission controllers can modify objects as they pass through the API server. Think of them as automated editors that apply cluster-wide defaults and conventions without requiring developers to change their manifests.

Common use cases include injecting sidecars (for service meshes, logging, or security), applying default resource requests and limits, normalizing labels and annotations, or setting security-related fields. By mutating objects in-flight, these controllers reduce boilerplate, eliminate footguns, and ensure workloads conform to platform standards by default.

Validating controllers: the gatekeepers

Validating admission controllers enforce policy without changing objects. They inspect the final object definition and either accept or reject the request. This is where hard rules live.

Typical validations include blocking privileged containers, rejecting images from untrusted registries, requiring specific labels for ownership or cost allocation, enforcing HTTPS on Ingress resources, or capping resource usage. Because they are read-only, validating controllers provide a clean, auditable enforcement layer for security and governance.

Execution order: mutate, then validate

Admission control always runs mutating controllers before validating controllers. This ordering is critical. Validation is performed against the final version of the object, after all defaults and injections have been applied.

For example, if a policy requires resource limits, a mutating controller can add defaults when they are missing. A validating controller then confirms that limits exist and approves the request. Reversing this order would cause valid workloads to be rejected unnecessarily. The mutate-then-validate flow ensures policies are evaluated against the object as it will actually exist in the cluster.
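The mutate-then-validate flow can be sketched in a few lines of Python. This is an illustrative model, not API server code; the pod structure and the default limit values are assumptions:

```python
# Sketch of the mutate-then-validate ordering (illustrative, not API server code).
def mutate(pod: dict) -> dict:
    """Mutating phase: inject default resource limits where missing."""
    for container in pod["spec"]["containers"]:
        container.setdefault("resources", {}).setdefault(
            "limits", {"cpu": "500m", "memory": "512Mi"}  # assumed cluster defaults
        )
    return pod

def validate(pod: dict) -> bool:
    """Validating phase: read-only check against the final object."""
    return all(
        "limits" in c.get("resources", {}) for c in pod["spec"]["containers"]
    )

pod = {"spec": {"containers": [{"name": "app", "image": "nginx"}]}}
assert validate(mutate(pod))  # defaults injected first, so validation passes
```

Run validation first on the raw pod and it would reject a workload that mutation was about to fix, which is exactly why the order is fixed.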

Why are admission controllers critical for security?

Admission controllers are a fundamental component of a robust Kubernetes security posture. They function as gatekeepers for your cluster's API server, intercepting and processing every request to modify the cluster's state before it is persisted. This provides a powerful mechanism to enforce security and governance policies automatically and at scale. Without them, security relies on manual reviews, CI/CD pipeline checks, and runtime scanning—all of which can be bypassed or fail.

By shifting policy enforcement to the point of entry, admission controllers ensure that non-compliant or insecure configurations never make it into the cluster in the first place. This proactive approach is far more effective than reactive measures that detect problems after they are already running. For platform engineering teams managing large fleets of clusters, this automated enforcement is not just a best practice; it's a necessity. It allows you to establish consistent security guardrails across all environments, ensuring that every workload, regardless of which team deployed it, adheres to your organization's security standards. This prevents configuration drift and hardens the cluster against both accidental misconfigurations and malicious attacks.

Enforce security policies at the source

Admission controllers enforce policies by intercepting requests to the Kubernetes API server before an object is persisted in etcd. This means they act at the source of every change, serving as the final checkpoint for resource creation and modification. By validating requests at this critical juncture, you can prevent insecure resources from ever being created. This is the most effective point to apply security rules, as it stops problems before they can manifest.

For example, you can use an admission controller to enforce a policy that blocks any container from running with root privileges or using the hostPath volume type, which could expose the underlying node. You can also mandate that all container images must be pulled from a specific, trusted registry. These rules are defined once and then applied consistently to every relevant API request, creating a powerful, automated security enforcement layer for your cluster.
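A trusted-registry rule of this kind might be expressed with a policy engine such as Kyverno, which the article discusses later. The policy name and registry below are illustrative assumptions:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries   # illustrative name
spec:
  validationFailureAction: Enforce  # reject, rather than just audit
  rules:
    - name: require-trusted-registry
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Images must come from registry.example.com."
        pattern:
          spec:
            containers:
              - image: "registry.example.com/*"
```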

Prevent misconfigurations and vulnerabilities

A significant portion of security incidents stems from simple misconfigurations. Admission controllers help eliminate this risk by validating resource configurations against a predefined set of rules. Custom policies, implemented via admission webhooks, allow you to codify your organization's specific security requirements and operational best practices. This provides a flexible way to address unique security contexts and prevent common vulnerabilities.

For instance, a validating webhook can reject any Deployment that does not define resource requests and limits, preventing potential resource exhaustion attacks. Another could ensure that all Ingress resources are configured with proper TLS settings and do not expose internal services to the public internet. By automating these checks, you reduce the reliance on manual code reviews and empower developers to self-correct their configurations before deployment, catching errors early in the development lifecycle.

Achieve compliance and governance goals

For organizations subject to regulatory standards like PCI DSS, HIPAA, or SOC 2, admission controllers are an essential tool for achieving and demonstrating compliance. They provide a clear, automated, and auditable mechanism for enforcing the strict configuration requirements mandated by these frameworks. Instead of relying on periodic manual audits, you can codify compliance rules directly into the cluster's control plane.

This approach strengthens your overall governance model. Platform teams can establish a "paved road" for developers, ensuring that all applications deployed on the platform automatically adhere to security and operational standards. For example, a policy can enforce that all resources are tagged with appropriate labels for cost allocation and ownership. This not only improves security but also enhances operational efficiency and ensures that your clusters remain manageable, compliant, and secure as they scale.

Key built-in admission controllers to know

Kubernetes includes a set of default admission controllers that are compiled into the kube-apiserver binary. While you can enable or disable them, most standard cluster setups have them active from the start. Understanding these controllers is fundamental to managing security, resource allocation, and policy enforcement within your clusters. They provide a baseline of control that you can build upon with custom webhooks.

ResourceQuota and LimitRanger

ResourceQuota and LimitRanger are two controllers that work together to manage resource consumption. ResourceQuota sets aggregate limits on resources—like total CPU and memory—that can be used within a single namespace. This prevents one project from consuming all available cluster resources. LimitRanger, on the other hand, operates at the container level. It enforces default resource requests and limits for pods in a namespace, ensuring that every container has a defined resource profile. Using them in tandem provides robust control over resource allocation, which is critical for maintaining cluster stability and ensuring fair usage across multiple teams or applications.
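The LimitRanger plugin enforces LimitRange objects, and the ResourceQuota plugin enforces ResourceQuota objects. A minimal pairing for a namespace might look like the following; the namespace name and values are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a          # illustrative namespace
spec:
  hard:
    requests.cpu: "10"       # aggregate caps for the whole namespace
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: team-a
spec:
  limits:
    - type: Container
      defaultRequest:        # applied when a container omits requests
        cpu: 250m
        memory: 256Mi
      default:               # applied when a container omits limits
        cpu: 500m
        memory: 512Mi
```

Note that ResourceQuota requires every pod in the namespace to specify requests and limits for the quota'd resources, which is why pairing it with LimitRanger defaults keeps deployments from being rejected outright.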

PodSecurityPolicy and Pod Security Standards

PodSecurityPolicy (PSP) was a powerful but complex controller for defining a pod's security context, controlling aspects like privileged containers and host network access. However, PSP was deprecated in Kubernetes v1.21 and removed entirely in v1.25. The replacement is Pod Security Standards (PSS), which are enforced by the built-in PodSecurity admission plugin. PSS simplifies security enforcement by defining three standard policies—privileged, baseline, and restricted. This tiered approach makes it easier to apply consistent, cluster-wide security profiles without the operational overhead that came with managing individual PSPs, offering a more streamlined path to securing workloads.
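The PodSecurity plugin is configured with namespace labels rather than separate policy objects. A sketch of opting a namespace into the restricted profile (the namespace name is an assumption):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments   # illustrative namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted  # reject violating pods
    pod-security.kubernetes.io/warn: restricted     # warn clients on violations
    pod-security.kubernetes.io/audit: restricted    # record violations in audit logs
```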

NetworkPolicy and other security controllers

NetworkPolicy is essential for implementing network segmentation within your cluster. By default, all pods in a Kubernetes cluster can communicate with each other. NetworkPolicy resources define rules that restrict this traffic, acting as a distributed firewall at the pod level. Strictly speaking, NetworkPolicies are enforced by the cluster's network (CNI) plugin rather than by an admission plugin, but they complement admission control as part of a defense-in-depth posture. You can specify which pods can connect to each other and to other network endpoints, effectively minimizing the attack surface of your applications. Implementing NetworkPolicies is a foundational step for securing multi-tenant environments or isolating sensitive workloads. On the admission side, security-focused controllers like NodeRestriction limit the permissions of kubelets to enhance overall cluster security.
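A common starting point is a default-deny policy plus an explicit allow rule. The namespace and labels below are illustrative assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments          # illustrative namespace
spec:
  podSelector: {}              # selects every pod in the namespace
  policyTypes: ["Ingress"]     # all inbound traffic denied unless allowed below
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: api                 # the pods being protected
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # the only pods allowed to connect
```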

How to create a custom admission controller

While Kubernetes provides a robust set of built-in admission controllers, your organization may have unique policy or security requirements that demand custom logic. Instead of forking and maintaining your own version of the Kubernetes API server, you can implement custom controllers using admission webhooks. These webhooks are external services that the API server calls during the admission process, allowing you to inject your own validation or mutation logic into the API request lifecycle. This approach provides a flexible and decoupled way to extend Kubernetes' capabilities to fit your specific needs.

An overview of admission webhooks

Admission webhooks are HTTP callbacks that intercept requests to the Kubernetes API server before an object is persisted in etcd. They function as gatekeepers, allowing you to programmatically enforce custom policies. When a resource is created, updated, or deleted, the API server sends an AdmissionReview object to any registered webhooks that match the request's criteria. The webhook inspects the object, performs its logic, and sends a response back to the API server indicating whether the request should be allowed, denied, or modified. This mechanism is central to dynamic admission control, enabling you to implement sophisticated governance without altering core Kubernetes components.
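The response contract is small: echo the request's uid and set allowed. A minimal sketch of building the response body, with field names following admission.k8s.io/v1:

```python
def admission_response(review: dict, allowed: bool, message: str = "") -> dict:
    """Build an AdmissionReview response that echoes the request's uid."""
    response = {"uid": review["request"]["uid"], "allowed": allowed}
    if not allowed:
        # status.message is surfaced to the client, e.g. in kubectl output
        response["status"] = {"code": 403, "message": message}
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }

review = {"request": {"uid": "705ab4f5-6393-11e8-b7cc-42010a800002"}}
denied = admission_response(review, False, "privileged containers are not allowed")
```

Forgetting to echo the uid is a classic webhook bug: the API server will discard the response and treat the call as a failure.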

Build a mutating admission webhook

Mutating admission webhooks are the first to act on an API request. Their primary function is to modify incoming objects to enforce standards or inject required configurations. For example, you could build a mutating webhook that automatically adds a sidecar container for logging or monitoring to every new pod. Another common use case is to set default resource limits, security contexts, or labels on deployments that don't have them explicitly defined. By modifying objects on the fly, these webhooks ensure that resources conform to your cluster's operational standards before they are even created, reducing manual configuration and preventing errors.
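The core of a mutating webhook response is a base64-encoded JSONPatch. A sketch of the patch-building logic for label injection; the label key and helper name are assumptions:

```python
import base64
import json

def label_patch(pod: dict, team: str) -> dict:
    """Return the patch fields of an AdmissionReview response that
    ensure every pod carries a team label."""
    labels = pod.get("metadata", {}).get("labels")
    if labels is None:
        # No labels map at all: add the whole map in one operation.
        ops = [{"op": "add", "path": "/metadata/labels", "value": {"team": team}}]
    else:
        # Add (or overwrite) just the one key on the existing map.
        ops = [{"op": "add", "path": "/metadata/labels/team", "value": team}]
    return {
        "patchType": "JSONPatch",
        "patch": base64.b64encode(json.dumps(ops).encode()).decode(),
    }
```

The distinction between the two branches matters: a JSONPatch `add` on `/metadata/labels/team` fails if `/metadata/labels` does not exist yet.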

Build a validating admission webhook

Validating admission webhooks run after all object modifications are complete, including those made by mutating webhooks. Their sole purpose is to validate an object against a set of policies and either approve or reject the request. Unlike mutating webhooks, they cannot change the object. A typical use case is to enforce security policies, such as ensuring all container images are pulled from a trusted private registry or preventing the creation of pods with root privileges. By acting as a final gatekeeper, a validating webhook ensures that no non-compliant resource ever makes it into your cluster, strengthening your security and governance posture.
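The trusted-registry check reduces to a simple predicate over the final pod spec. A sketch, where the registry prefix and function name are assumptions:

```python
def validate_images(pod: dict, trusted_prefix: str = "registry.example.com/"):
    """Reject any pod whose containers pull from outside the trusted registry.

    Returns (allowed, message); the message feeds the AdmissionReview status.
    """
    for container in pod["spec"].get("containers", []):
        image = container["image"]
        if not image.startswith(trusted_prefix):
            return False, f"image {image!r} is not from {trusted_prefix}"
    return True, ""
```

Because validation runs after mutation, this check also covers images added by sidecar-injecting webhooks earlier in the chain.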

Configure and deploy your webhook

To deploy a custom admission controller, you need two main components: the webhook server and a configuration object. The server is an HTTP service you build that listens for AdmissionReview requests from the API server. This service must be secured with TLS to ensure encrypted communication. Once your server is running, you register it with the Kubernetes API server by creating either a MutatingWebhookConfiguration or ValidatingWebhookConfiguration object. This configuration tells the API server how to contact your webhook—including its service URL and the CA bundle for verifying its TLS certificate—and specifies rules for which resource types and operations it should intercept.
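A registration for a validating webhook might look like the following; the names, namespace, and path are illustrative, and the caBundle placeholder must be replaced with your actual base64-encoded CA certificate:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-policy                # illustrative name
webhooks:
  - name: images.policy.example.com
    clientConfig:
      service:
        name: policy-webhook        # the Service fronting your webhook pods
        namespace: webhook-system
        path: /validate
      caBundle: <base64-encoded-CA-cert>  # placeholder; used to verify the server's TLS cert
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None               # safe to call during dry-run requests
    failurePolicy: Fail             # block requests if the webhook is unreachable
    timeoutSeconds: 5
```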

Common challenges with admission controllers

While admission controllers are powerful, they introduce operational complexities that teams must manage carefully. From performance bottlenecks to configuration drift, understanding these challenges is the first step toward building a resilient and secure control plane.

Managing complex configurations

As you deploy more admission controllers, their configurations can become difficult to manage. Each controller acts as a gatekeeper, and their order of execution matters. One controller might modify a request in a way that causes a subsequent validating controller to reject it unexpectedly. Tracking these interactions across a large fleet of clusters is a significant challenge. To maintain control, you should manage controller configurations declaratively using a GitOps workflow. This approach provides versioning, peer review, and a clear audit trail for every policy change, preventing configuration drift and making complex environments easier to reason about.

Handling performance and latency

Admission webhooks are synchronous and sit directly in the critical path of API server requests. A slow or unresponsive webhook can introduce significant latency, slowing down all kubectl operations and automated deployments. A complete webhook failure can even bring your cluster to a halt. To mitigate this, you must design your webhooks to be highly performant and resilient. Set aggressive timeouts for webhook calls and monitor their latency closely. Running multiple replicas of your webhook server behind a load balancer is essential for high availability, ensuring that a single pod failure doesn't impact cluster operations.

Testing, debugging, and failure planning

Like any application code, admission controllers require rigorous testing. A buggy webhook can deny valid requests or, worse, allow non-compliant resources into your cluster. Implement unit and integration tests to validate your controller's logic before deployment. For debugging, Kubernetes audit logs are invaluable, as they record every request rejected by a controller. It's also crucial to have a failure plan. The failurePolicy field in a webhook configuration lets you define whether an unavailable webhook should cause the API call to Fail (blocking the request) or be Ignored. Choosing the right policy depends on the webhook's criticality.

Avoiding security risks and side effects

While controllers enhance security, they can also become a target. A compromised webhook could be used to bypass policies or inject malicious configurations. Always secure communication between the API server and your webhooks using TLS. You also need to consider unintended side effects. If a webhook interacts with external systems, you must properly configure its sideEffects field. Setting it to None tells the API server that the webhook is safe to call during dry-run requests, like kubectl apply --dry-run, preventing accidental changes to outside infrastructure during validation checks.

Best practices for implementation

Implementing admission controllers requires careful planning to ensure they enhance, rather than disrupt, your Kubernetes environment. Adhering to best practices for policy definition, security, availability, and monitoring is critical for building a robust and scalable policy enforcement layer. These practices help prevent common pitfalls like performance degradation, deadlocks, and operational blind spots, ensuring your controllers are reliable assets for security and governance.

Define clear policies and handle errors

Policies enforced by admission controllers must be explicit and well-documented. When a controller denies a request, it should return a clear, actionable error message explaining why the request was rejected and how to fix it. Ambiguous denials frustrate developers and slow down workflows. The controller’s logic must also be resilient. It should be designed to fail open or closed based on a deliberate policy decision, preventing it from crashing the API server or causing cascading failures. A well-defined failure mode ensures that a bug in the controller doesn't bring down your entire cluster.

Secure communication and optimize performance

Admission webhooks are a critical part of the API request path and must be secured. Always use TLS to encrypt communication between the API server and your webhook endpoint to protect sensitive request data in transit. Performance is equally critical; a slow controller introduces latency into every matching API request, creating a bottleneck that can degrade cluster performance. Design your controller to execute its logic quickly and efficiently. Monitor webhook latency closely and set aggressive timeouts to prevent a slow controller from impacting the API server's availability, especially in large clusters where API throughput is high.

Prevent deadlocks and ensure high availability

A common pitfall is creating a controller that intercepts requests for resources it needs to run, leading to a deadlock. For example, if your webhook pod is in a namespace that the webhook itself validates, it might block its own creation or updates. To prevent this, use a namespaceSelector in your webhook configuration to explicitly exclude the controller's own namespace. For production environments, ensure your controller is highly available by running multiple replicas. This prevents the controller from becoming a single point of failure and ensures that policy enforcement continues even if one instance becomes unavailable.
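A sketch of such an exclusion, using the automatic kubernetes.io/metadata.name label that Kubernetes applies to every namespace (the webhook name and namespace are assumptions):

```yaml
webhooks:
  - name: images.policy.example.com
    namespaceSelector:
      matchExpressions:
        - key: kubernetes.io/metadata.name
          operator: NotIn
          values: ["webhook-system", "kube-system"]  # skip the controller's own namespace (and system components)
```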

Implement comprehensive testing and monitoring

Thoroughly test your admission controllers before deploying them. This includes unit tests for the core logic and integration tests to verify their behavior within a live cluster. Once deployed, continuous monitoring is essential. Track key metrics like request latency, error rates, and the number of admitted versus rejected requests. Use Kubernetes audit logs to review the decisions made by your controllers, which is invaluable for debugging and security analysis. Centralized observability, like that provided by Plural's Kubernetes dashboarding, simplifies tracking these metrics across an entire fleet of clusters, providing a unified view of policy enforcement and performance.

How to troubleshoot admission controller issues

When an admission controller fails or is misconfigured, it can disrupt cluster operations by blocking legitimate API requests. Because they act as gatekeepers for the API server, a non-responsive or faulty webhook can become a single point of failure, preventing deployments, updates, or even the creation of essential resources. Troubleshooting these issues requires a systematic approach that combines inspecting webhook configurations, analyzing logs, and monitoring performance metrics.

Effective troubleshooting starts with understanding the potential failure points. Is the webhook server down? Is there a network connectivity issue between the API server and the webhook? Is the controller logic itself flawed, causing it to reject valid requests or time out? Answering these questions quickly is critical to restoring cluster stability. For teams managing multiple clusters, these challenges are magnified, highlighting the need for centralized observability and management tools. A platform that provides a unified view of logs, metrics, and configurations across a fleet can significantly reduce the time it takes to diagnose and resolve admission controller problems.

Identifying common webhook failures

Admission webhooks are a frequent source of failure because they introduce external dependencies into the critical path of the Kubernetes API server. If a webhook service is unavailable or returns an error, the API server may be unable to process requests for certain resources.

Common causes include:

  • Network Connectivity: The API server cannot reach the webhook service due to firewall rules, network policies, or DNS resolution failures.
  • Service Unavailability: The pods running the webhook server are down, in a crash loop, or not exposed correctly via a Kubernetes service.
  • TLS Certificate Issues: The TLS certificates used to secure communication between the API server and the webhook are expired, invalid, or misconfigured.
  • Incorrect Configuration: The ValidatingWebhookConfiguration or MutatingWebhookConfiguration objects contain incorrect service names, paths, or failure policies.

Troubleshooting steps involve:

  • Use kubectl describe on the webhook configuration to verify the service details and failurePolicy.
  • Check the status and logs of the pods backing the webhook service to look for crashes or errors.
  • Verify network connectivity from the API server to the webhook endpoint.
  • Inspect the caBundle and TLS certificates to ensure they are valid and correctly configured.
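Concretely, these checks map to a handful of commands; the names image-policy, policy-webhook, and webhook-system are placeholders for your own resources:

```shell
# Inspect the registration: service reference, rules, failurePolicy, timeout
kubectl describe validatingwebhookconfiguration image-policy

# Check the pods backing the webhook service for crashes or errors
kubectl -n webhook-system get pods
kubectl -n webhook-system logs deploy/policy-webhook --tail=100

# Confirm the Service actually has ready endpoints behind it
kubectl -n webhook-system get endpoints policy-webhook

# Check the serving certificate's expiry (assumes a standard TLS secret)
kubectl -n webhook-system get secret policy-webhook-tls -o jsonpath='{.data.tls\.crt}' \
  | base64 -d | openssl x509 -noout -enddate
```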

Using audit logs for debugging

Kubernetes audit logs provide a chronological record of requests made to the API server, making them an invaluable tool for debugging admission controller issues. When a request is denied, the audit log entry contains information about which controller rejected it and why.

To use audit logs for debugging:

  • Enable Auditing: Ensure that audit logging is enabled on the Kubernetes API server with a policy that captures relevant events, particularly for the resources your webhook manages.
  • Search for Rejections: Filter the audit logs for requests that resulted in a "Forbidden" status. The annotations field in the log entry often specifies the name of the admission controller that denied the request.
  • Analyze the Reason: Review the responseObject in the audit event to understand the reason for the rejection. This often includes a detailed message from the webhook explaining why the request was denied.
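As a sketch, denials can be pulled out of an audit log file (JSON lines) with a few lines of Python. The field names follow the audit.k8s.io event schema; the sample event is fabricated for illustration:

```python
import json

def find_denials(audit_lines):
    """Yield (user, resource, reason) for requests rejected with HTTP 403."""
    for line in audit_lines:
        event = json.loads(line)
        status = event.get("responseStatus", {})
        if status.get("code") == 403:
            yield (
                event.get("user", {}).get("username", "unknown"),
                event.get("objectRef", {}).get("resource", "unknown"),
                status.get("message", ""),
            )

# Illustrative audit event, not real cluster output
sample = json.dumps({
    "responseStatus": {"code": 403, "message": "admission webhook denied the request"},
    "user": {"username": "dev@example.com"},
    "objectRef": {"resource": "pods"},
})
denials = list(find_denials([sample]))
```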

In a multi-cluster environment, aggregating and searching these logs can be challenging. Plural’s centralized dashboard simplifies this by providing a single interface to view and analyze logs from all managed clusters, helping you quickly pinpoint which controller is causing issues.

Monitoring controller performance

A slow or inefficient admission controller can introduce significant latency into API requests, degrading the performance of the entire cluster. Monitoring the performance of your webhooks is essential to ensure they don't become a bottleneck.

Key metrics to monitor include:

  • Request Latency: Track the time it takes for your webhook to process and respond to admission requests. The apiserver_admission_webhook_admission_duration_seconds metric is critical for this.
  • Error Rate: Monitor the rate of failed requests to the webhook. A spike in errors can indicate a problem with the webhook server itself.
  • Resource Usage: Keep an eye on the CPU and memory consumption of the pods running your webhook. High resource usage could indicate an inefficient implementation that needs optimization.
  • Timeouts: A high number of timeouts suggests the webhook is taking too long to respond, which can be caused by performance issues or an overly aggressive timeoutSeconds setting in the webhook configuration.
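For example, per-webhook p99 latency can be derived from that histogram; a sketch in PromQL:

```promql
histogram_quantile(
  0.99,
  sum by (name, le) (
    rate(apiserver_admission_webhook_admission_duration_seconds_bucket[5m])
  )
)
```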

By integrating these metrics into a unified observability platform like Plural, you can set up alerts to proactively detect performance degradation and troubleshoot issues before they impact cluster users.

Manage admission controllers at scale with Plural

While admission controllers are powerful tools for enforcing security and governance, managing them across a large fleet of Kubernetes clusters introduces significant operational complexity. Each cluster requires consistent policy application, controller deployment, and ongoing monitoring. Manually handling these tasks is inefficient and prone to error, leading to configuration drift and security gaps.

Plural provides a unified platform to manage the entire lifecycle of admission controllers at scale. By leveraging a GitOps-based workflow and a centralized control plane, Plural ensures that your policies are deployed consistently, your controllers are always up-to-date, and your teams have the visibility needed to troubleshoot issues quickly. This approach transforms admission controller management from a cluster-by-cluster chore into a streamlined, automated process that scales with your infrastructure.

Centralize policy management across your fleet

Admission controllers act as gatekeepers by intercepting requests to the Kubernetes API server, allowing you to enforce policies before objects are persisted. To be effective, these policies must be applied uniformly across all relevant clusters. Plural centralizes policy management by treating your Git repository as the single source of truth. You can define all your admission control policies—such as OPA Gatekeeper constraints or Kyverno policies—in a central repository.

Plural’s GitOps engine then automatically syncs these configurations to every cluster in your fleet. Using a GlobalService manifest, you can designate a specific folder in your repository to be applied globally, ensuring that any change to a policy is rolled out consistently everywhere. This eliminates configuration drift and guarantees that all clusters adhere to the same security and compliance standards without manual intervention.
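As a sketch of that workflow, a GlobalService can point every cluster at a policies folder in your repository. The manifest below is illustrative: the repository reference, namespace, and folder are placeholders, and the exact field names should be checked against the version of Plural's CRDs you have installed.

```yaml
apiVersion: deployments.plural.sh/v1alpha1
kind: GlobalService
metadata:
  name: admission-policies
spec:
  template:
    name: admission-policies
    namespace: policy-system        # hypothetical target namespace
    repositoryRef:
      name: infra-repo              # hypothetical GitRepository reference
      namespace: infra
    git:
      ref: main
      folder: policies              # folder synced to every cluster
```

Any commit to the policies folder then rolls out to the whole fleet through the same reconciliation loop, with no per-cluster action required.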

Deploy controllers fleet-wide

Before you can enforce policies, the admission controllers themselves (like Kyverno, OPA Gatekeeper, or custom webhooks) must be deployed to each cluster. Plural simplifies this process through its application deployment capabilities. You can package an admission controller as a Plural application and use the platform to manage its deployment and lifecycle across your entire fleet. This ensures that every cluster runs the correct version of the controller with the right configuration.

Plural’s agent-based pull architecture allows it to manage deployments in any environment—public cloud, on-premises, or at the edge—without requiring inbound network access to your clusters. The agent installed on each workload cluster securely polls the central control plane for updates, making it easy to scale your deployments while maintaining a strong security posture.

Streamline monitoring and troubleshooting

Effective management requires visibility. When an admission controller blocks a request or a webhook fails, engineers need a straightforward way to diagnose the problem. Plural provides a single-pane-of-glass dashboard that offers deep visibility into all your managed clusters. From the Plural UI, you can inspect the health of admission controller pods, view logs, and analyze events without juggling multiple kubeconfigs or command-line tools.

This centralized observability simplifies troubleshooting by providing immediate context. If a deployment fails due to a policy violation, developers can quickly identify the responsible controller and the specific policy that was triggered. By integrating monitoring and management into one platform, Plural reduces the time it takes to identify, diagnose, and resolve issues related to admission control.



Frequently Asked Questions

When should I use a built-in controller versus a custom one like OPA Gatekeeper? Built-in controllers like PodSecurity and LimitRanger are best for enforcing fundamental, widely accepted Kubernetes best practices with minimal configuration. Use them to establish a baseline for security and resource management. You should turn to custom controllers, typically implemented with policy engines like OPA Gatekeeper or Kyverno, when you need to enforce business-specific logic. This includes rules like requiring specific cost-center labels, validating image signatures from your internal CI system, or ensuring all Ingress objects use a specific annotation.
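As an example of the built-in baseline, the PodSecurity admission controller is configured entirely through namespace labels, with no extra components to deploy. The namespace name below is hypothetical; the labels are the standard Pod Security Admission labels.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                                   # hypothetical namespace
  labels:
    # Reject pods that violate the "restricted" Pod Security Standard.
    pod-security.kubernetes.io/enforce: restricted
    # Additionally warn on anything that would fail the "baseline" level.
    pod-security.kubernetes.io/warn: baseline
```

Anything beyond these standardized profiles, such as the cost-center or image-signature rules mentioned above, is where a custom policy engine comes in.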

Can a misconfigured admission webhook break my entire cluster? Yes, a faulty admission webhook can halt cluster operations. Since webhooks are a synchronous part of the API request lifecycle, a slow or unavailable webhook can block the creation and modification of resources. A common failure scenario is a webhook that blocks its own pods from starting, creating a deadlock. To prevent this, you must configure a failurePolicy of Ignore for non-critical webhooks, use a namespaceSelector to exclude the controller's own namespace from its scope, and run multiple replicas for high availability.
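The deadlock-avoidance settings described above live in the webhook registration itself. The following ValidatingWebhookConfiguration is a sketch: the names, namespace, and rule scope are placeholders, but failurePolicy, timeoutSeconds, and namespaceSelector are the standard fields for these mitigations.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: policy-webhook                  # hypothetical name
webhooks:
  - name: validate.policy.example.com  # hypothetical webhook name
    failurePolicy: Ignore              # fail open if the webhook is unavailable
    timeoutSeconds: 5                  # bound how long the API server will wait
    namespaceSelector:
      matchExpressions:
        # Exclude the controller's own namespace to avoid a startup deadlock.
        - key: kubernetes.io/metadata.name
          operator: NotIn
          values: ["policy-system"]
    clientConfig:
      service:
        name: policy-webhook
        namespace: policy-system
        path: /validate
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
```

For webhooks that genuinely must fail closed (for example, image-signature verification), keep failurePolicy: Fail but pair it with multiple replicas and a PodDisruptionBudget.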

Why use a policy engine like Kyverno instead of writing my own webhook from scratch? Building a webhook from scratch requires you to manage the entire HTTP server, its TLS configuration, and the logic for handling AdmissionReview objects. Policy engines like Kyverno or OPA Gatekeeper abstract this complexity away. They provide a pre-built, hardened admission controller and allow you to define policies in a high-level, declarative language. This approach lets your team focus on writing the policy logic itself rather than the underlying plumbing, which reduces development time and the potential for errors.
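To illustrate the declarative style, here is a small Kyverno policy requiring a cost-center label on Deployments, one of the business-specific rules mentioned earlier. The policy and rule names are illustrative; the structure follows Kyverno's standard ClusterPolicy schema.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-cost-center            # hypothetical policy name
spec:
  validationFailureAction: Enforce     # reject non-compliant requests
  rules:
    - name: check-cost-center-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        message: "The label 'cost-center' is required on all Deployments."
        pattern:
          metadata:
            labels:
              cost-center: "?*"        # any non-empty value
```

Everything else, including the webhook server, TLS rotation, and AdmissionReview handling, is managed by the Kyverno controller.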

How can I apply different admission policies to my development and production clusters? Managing distinct policies for different environments is best handled with a GitOps workflow. You can structure your policy repository with a base set of rules that apply everywhere, along with environment-specific overlays for dev, staging, and prod. For example, production policies might strictly enforce that all images come from a trusted registry, while development policies could be more permissive. A platform like Plural can then automate the application of the correct policy set to each target cluster, ensuring consistency and control.
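One common way to express the base-plus-overlay structure is with Kustomize. The layout below is a sketch, with hypothetical file and directory names, showing a production overlay that inherits the shared policies and layers on a prod-only patch.

```yaml
# overlays/prod/kustomization.yaml  (hypothetical path)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                          # policies applied in every environment
patches:
  # Prod-only tightening, e.g. raising validationFailureAction to Enforce
  # or restricting images to a trusted registry.
  - path: enforce-trusted-registry.yaml # hypothetical patch file
```

A GitOps tool then points each cluster at the overlay matching its environment, so the policy differences live in Git rather than in per-cluster manual configuration.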

How does Plural specifically help with the day-to-day management of these controllers? Plural addresses the operational challenges of managing admission controllers across a fleet. It provides a centralized GitOps workflow to deploy both the controllers themselves and the policies they enforce, which eliminates configuration drift. When a controller blocks a deployment, Plural's unified dashboard offers a single place to monitor the health of controller pods, view their logs, and analyze audit logs to see exactly why a request was denied. This simplifies troubleshooting by removing the need to manually connect to each cluster to diagnose issues.
