Guide to Kubernetes Ingress Controllers: Simplified
Running applications in Kubernetes is powerful, but you'll quickly run into a common hurdle: how do you get outside traffic to your services safely and efficiently? Just creating services won't cut it when you need to handle HTTPS traffic, route requests to different backends, or manage SSL certificates.
That's where Kubernetes Ingress controllers come in. Think of them as smart gatekeepers for your cluster—they take incoming requests and route them exactly where they need to go based on rules you set up. If you're doing DevOps or platform engineering work, understanding Ingress controllers is key to building apps that actually work well in production.
This guide breaks down how Ingress controllers work, covers the popular options, and shows you how to deploy and manage them effectively.
Unified Cloud Orchestration for Kubernetes
Manage Kubernetes at scale through a single, enterprise-ready platform.
Key takeaways:
- Simplify external access: Employ Ingress controllers to centralize how external HTTP/S traffic reaches your Kubernetes services. This offers intelligent load balancing, SSL/TLS termination, and flexible routing from one point, all observable via Plural's dashboard.
- Customize controller behavior: Define clear Ingress rules and actively manage settings like resource allocation and scaling. Use annotations or Custom Resource Definitions (CRDs) for advanced needs such as WebSockets or traffic shaping, consistently deployed with Plural CD.
- Choose intelligently and operate proactively: Select an Ingress controller that aligns with your environment and establish robust practices for monitoring, troubleshooting, and security. This ensures application reliability, supported by Plural's comprehensive fleet management.
What Exactly Is a Kubernetes Ingress Controller?
When you're running applications in Kubernetes, one of the first challenges you'll encounter is how to make your services accessible to the outside world. You need a way to manage external HTTP/S traffic and route it to the correct services within your cluster. This is precisely where a Kubernetes Ingress Controller comes into play. It acts as the traffic manager for your cluster, directing incoming requests to their intended destinations. Understanding its role is fundamental to effectively managing and scaling your applications.
What It Does and Why It's Essential
At its core, an Ingress controller is responsible for managing external access to the services running inside your Kubernetes cluster. Think of it as a highly specialized and intelligent load balancer designed specifically for Kubernetes environments. It accepts traffic originating from outside the Kubernetes platform—whether from users, other applications, or external systems—and then intelligently distributes that traffic to the appropriate pods.
Without an Ingress controller, exposing multiple services would require provisioning and configuring a separate external load balancer for each service. This approach is not only costly but also operationally complex to manage at scale. An Ingress controller simplifies this by providing a unified entry point, enabling sophisticated routing rules, and handling tasks like SSL/TLS termination, all of which are crucial for modern application delivery. It ensures that your applications are reachable, secure, and can handle varying loads efficiently.
Key Components and How They Fit Together
To understand how an Ingress controller functions, it's important to distinguish between two key Kubernetes concepts: the Ingress resource and the Ingress controller itself.
The Ingress resource is a Kubernetes API object where you define the rules for routing external HTTP/S traffic to your internal services. These rules can specify things like which hostnames or paths should direct traffic to which services and ports.
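As a concrete, deliberately minimal sketch, an Ingress resource routing one hostname to a backend service might look like the following. The hostname, service name, and `ingressClassName` are illustrative placeholders, not prescribed values:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress          # illustrative name
spec:
  ingressClassName: nginx    # must match an installed controller's IngressClass
  rules:
    - host: app.example.com  # hostname to match on incoming requests
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service  # ClusterIP Service fronting your pods
                port:
                  number: 80
```

Applying this object does nothing on its own; a running controller that claims the `nginx` IngressClass must pick it up and translate it into proxy configuration.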
However, defining these rules in an Ingress resource is only half the story. The Ingress controller is the actual engine that reads these Ingress resources and implements the defined routing logic. It actively watches for Ingress resources and configures an underlying proxy server (like NGINX, Traefik, or HAProxy) to enforce those rules. Essentially, the Ingress resource tells Kubernetes what you want to achieve with your traffic routing, and the Ingress controller is the component that figures out how to make it happen. Without an active Ingress controller, your Ingress resources will simply sit there, and no external traffic will be routed as you've specified.
Under the hood, an Ingress controller is a specialized piece of software, often a reverse proxy implementation like NGINX or HAProxy, that operates within your Kubernetes cluster. Its primary function is to continuously monitor the Kubernetes API server for Ingress resources. When you create or update an Ingress resource, you are essentially defining a set of routing instructions. For example, you might specify a rule stating, "direct all traffic for `app.yourdomain.com/api` to service A, and route traffic destined for `blog.yourdomain.com` to service B."
The Ingress controller takes these declarative rules and translates them into concrete configurations for the underlying proxy server it manages. In other words, the Ingress resource describes what routing behavior you want, while the controller is the active component that implements and enforces it, dynamically updating the proxy's configuration so that incoming traffic is always routed according to the most current rules you've established. Without an active, properly configured Ingress controller running in your cluster, any Ingress rules you define will remain dormant, and no external traffic will be routed to your services as intended.
Managing these Ingress resources consistently, especially across a fleet of clusters, is greatly simplified with tools like Plural CD, which leverages a GitOps methodology to deploy and synchronize your Kubernetes configurations, including all your Ingress definitions.
Why Use an Ingress Controller? Key Features and Benefits
An Ingress controller isn't just a nice-to-have in your Kubernetes environment; it's a fundamental component for managing external access to your applications effectively and securely.
Achieving Smart Load Balancing and Traffic Distribution
One of the primary functions of an Ingress controller is to provide smart load balancing for external traffic entering the cluster. Acting as a load balancer, it distributes traffic across multiple application instances to prevent any single one from being overloaded. This is critical for maintaining application availability and performance, especially under heavy traffic. By intelligently distributing incoming requests across your backend pods, an Ingress controller ensures that no single instance becomes a bottleneck, leading to a smoother experience for your users.
This distribution isn't just round-robin; many Ingress controllers support more sophisticated algorithms, allowing for weighted balancing or session affinity (sticky sessions), ensuring users are consistently routed to the same pod if needed. This capability is vital for stateful applications. Effective load balancing also contributes to efficient resource utilization, as traffic is spread optimally, preventing underutilization or over-provisioning of resources. For teams using Plural's Kubernetes fleet management, this means your carefully orchestrated deployments can handle fluctuating loads gracefully.
Securing Connections with SSL/TLS Termination
Security is paramount, and Ingress controllers play a significant role here by handling SSL/TLS termination for secure (HTTPS) connections. The Ingress controller decrypts incoming HTTPS traffic and forwards unencrypted traffic to your internal services. Offloading SSL/TLS termination to the Ingress controller simplifies certificate management, as you only need to configure SSL certificates at the Ingress level rather than for each individual service.
This approach also reduces the computational load on your application pods, as they no longer need to handle the encryption and decryption processes. Centralizing SSL/TLS termination at the Ingress layer makes it easier to enforce security policies consistently across all your services. This is particularly important in complex environments where numerous microservices are running. Plural’s architecture, with its focus on secure, egress-only communication from managed clusters, complements this by ensuring that the management plane itself maintains a strong security posture while you secure your application traffic via Ingress.
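A hedged sketch of what TLS termination looks like in practice: the `tls` block below references a Kubernetes Secret (the name `app-tls-cert` is an assumption) containing the certificate and key, and the controller serves HTTPS for the listed host while speaking plain HTTP to the backend:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-app
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls-cert   # kubernetes.io/tls Secret holding tls.crt and tls.key
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80     # backend receives already-decrypted HTTP
```

The Secret can be created manually with `kubectl create secret tls`, or its lifecycle can be automated with a tool such as cert-manager.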
Implementing Flexible Path-Based and Host-Based Routing
Ingress controllers offer powerful and flexible routing capabilities, primarily through host-based and path-based routing, directing web traffic to different applications based on the requested URL. With host-based routing, you can direct traffic for `serviceA.yourdomain.com` to one set of pods and `serviceB.yourdomain.com` to another, all using the same external IP address. This is invaluable for hosting multiple distinct applications or tenants within the same cluster.
Path-based routing takes this further by allowing you to route traffic based on the URL path. For example, `yourdomain.com/api` could go to your API service, while `yourdomain.com/app` goes to your frontend application. This fine-grained control enables complex application architectures, such as microservices, where different components are exposed under a unified domain. This routing logic is defined in Ingress resources, declarative configurations that the Ingress controller interprets and implements, simplifying the management of traffic flow.
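Both patterns can be combined in a single Ingress resource. This sketch (all names illustrative) fans `/api` and `/app` on one host out to different services, and routes a second host to a third application:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout
spec:
  ingressClassName: nginx
  rules:
    - host: yourdomain.com
      http:
        paths:
          - path: /api          # path-based: API traffic
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080
          - path: /app          # path-based: frontend traffic
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
    - host: blog.yourdomain.com # host-based: a separate application entirely
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog-service
                port:
                  number: 80
```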
Creating a Single, Simplified Entry Point for Multiple Services
Perhaps the most significant benefit is that an Ingress controller provides a single, simplified entry point for all external traffic into your Kubernetes cluster. An Ingress controller is absolutely necessary for any Ingress resource to work within a Kubernetes cluster. It acts as the gatekeeper, managing how external users and services access your applications. This consolidation simplifies your network topology, reduces the number of external load balancers you need (often translating to cost savings), and makes your overall setup easier to understand and manage.
By having one well-defined entry point, you can centralize concerns like SSL/TLS termination, authentication, and rate limiting. This makes it easier to apply consistent policies across all your services. For DevOps teams, this means less complexity when deploying new services or updating existing ones. Within Plural's ecosystem, where managing Kubernetes at scale is key, a well-configured Ingress controller contributes to the clarity and control provided by Plural's built-in Kubernetes dashboard, allowing teams to visualize and manage how traffic enters their managed clusters.
Exploring Your Options: Types of Kubernetes Ingress Controllers
When managing external access to your services in Kubernetes, you'll encounter a variety of Ingress controllers. The choice of controller directly influences how your cluster handles essential functions like load balancing across your application pods, SSL/TLS termination for secure connections, and complex path-based or host-based routing.
Different controllers bring distinct strengths. Some are community-driven, offering extensive customization and broad compatibility, while others are tightly integrated with specific cloud provider ecosystems, promising smoother integration with other managed services. Understanding these distinctions is key to designing a network ingress strategy that aligns with your application architecture, security posture, and scalability needs.
Whether you opt for a widely-used open-source solution or a cloud provider's native offering, an informed decision establishes a strong foundation for your Kubernetes networking. As your Kubernetes deployments grow and diversify, platforms like Plural can help you streamline the management of your entire cluster fleet, providing a consistent operational experience even if you're using different Ingress technologies across various clusters. This unified approach simplifies oversight and control, which is particularly valuable when dealing with the complexities of large-scale Kubernetes environments.
Popular Choices: NGINX, Traefik, HAProxy, and Others
Several battle-tested Ingress controllers are popular within the Kubernetes community. The NGINX Ingress Controller is arguably the most widely adopted, largely because it's maintained by the Kubernetes project itself and builds upon the well-known NGINX reverse proxy. It's a versatile and powerful option suitable for many use cases. Another strong contender is Traefik, which has gained a loyal following for its ease of use, dynamic configuration capabilities, and automatic service discovery features, making it a favorite for teams prioritizing simplicity and automation.
HAProxy Ingress is also frequently chosen, particularly for its high-performance characteristics and robust load-balancing features. These controllers, along with others like Istio Gateway or Kong Gateway, essentially combine the roles of reverse proxy, load balancer, and API gateway. They use the routing rules you define to efficiently manage traffic flow both into your Kubernetes cluster and between the services running within it, ensuring requests reach their intended destinations reliably.
Understanding Cloud Provider-Specific Controllers
Beyond the general-purpose controllers, major cloud providers like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure offer their own native Ingress solutions. For instance, AWS provides the AWS Load Balancer Controller, which provisions and manages AWS Application Load Balancers (ALBs) or Network Load Balancers (NLBs) to route external traffic to your Kubernetes services. Similarly, GCP offers its GKE Ingress controller, and Azure has the Application Gateway Ingress Controller (AGIC).
These cloud-specific controllers often boast tight integration with their respective cloud ecosystems, offering benefits such as seamless Identity and Access Management (IAM) integration and leveraging the provider's managed load balancing infrastructure. This can simplify operations and billing if your workloads reside entirely within a single cloud environment. However, if your organization operates clusters across multiple clouds or in hybrid environments, these cloud provider-specific controllers may introduce complexities because of their tight reliance on a specific provider. Plural's platform can provide a unified dashboard and consistent management experience, helping you oversee diverse setups that might utilize different Ingress solutions without added complexity.
Getting Started: Set Up and Configure Your Ingress Controller
After selecting an Ingress controller, the next phase involves deploying it to your Kubernetes cluster and fine-tuning its configuration for optimal operation. This process includes the initial installation, adhering to configuration best practices, and ensuring it integrates effectively with your existing services and overall cluster architecture.
Walking Through the Installation Process
It's important to understand that for an Ingress resource to function, the cluster must have an ingress controller running. Unlike some core Kubernetes components, Ingress controllers are not started automatically with a cluster. You'll need to deploy the controller software yourself.
Common methods for installation include using Helm charts, which bundle all necessary Kubernetes manifests, or by directly applying YAML files. Many resources, such as this comprehensive tutorial on setting up an Nginx Ingress Controller, provide detailed steps. With Plural, our Plural CD (Continuous Deployment) can streamline the deployment and lifecycle management of your chosen Ingress controller, ensuring consistent setups across your entire fleet of clusters.
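For example, installing the community NGINX Ingress Controller with Helm typically looks like the following. The repository URL, chart, and namespace names are the upstream defaults at the time of writing; check the project's documentation for current values before running this:

```shell
# Add the upstream chart repository and refresh the local index
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# Install the controller into its own namespace
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# Verify the controller pod and its Service came up
kubectl get pods,svc -n ingress-nginx
```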
Applying Configuration Best Practices for Optimal Performance
Installing your Ingress controller is the first step; configuring it correctly is crucial for achieving the performance and reliability your applications demand. This isn't a set-it-and-forget-it task. You should plan to regularly audit and optimize your Ingress configurations. Pay close attention to resource allocation—ensuring your controller pods have sufficient CPU and memory is fundamental.
Fine-tuning controller-specific settings, like connection timeouts or worker process numbers, can also yield significant improvements. Effective monitoring is essential; think of Ingress controllers as "traffic signals" and monitoring tools as "real-time traffic cameras" that help you understand Kubernetes performance. Plural's built-in multi-cluster dashboard offers deep visibility into your Ingress controller's behavior and resource usage, enabling data-driven tuning decisions.
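As a starting point, resource allocation and controller tuning can be expressed declaratively through your deployment tooling. This hypothetical Helm values fragment follows the layout of the community ingress-nginx chart, but verify every key against the chart version you actually deploy:

```yaml
controller:
  replicaCount: 2            # run at least two replicas for availability
  resources:
    requests:
      cpu: 200m              # baseline CPU so the scheduler reserves capacity
      memory: 256Mi
    limits:
      memory: 512Mi          # cap memory usage per controller pod
  config:                    # rendered into the controller's NGINX ConfigMap
    proxy-read-timeout: "60" # example connection-timeout tuning
    worker-processes: "auto" # let NGINX size worker processes to the node
```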
Integrating Seamlessly with Your Kubernetes Clusters
An Ingress controller serves as the primary gateway for external HTTP/HTTPS traffic reaching your services. To fulfill this role, it must be exposed externally. This is typically done by creating a Kubernetes Service of type `LoadBalancer` or `NodePort` that directs traffic to your Ingress controller pods, a common practice detailed in guides like those from NGINX for exposing their controller. This service receives external requests, which the Ingress controller then routes according to your defined Ingress rules.
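When created by hand rather than by a Helm chart, such an exposure Service might look like this sketch. The namespace and selector labels are assumptions; match them to the labels your controller's pods actually carry:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer          # the cloud provider provisions an external LB
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller pod label
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```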
This setup makes the Ingress controller an integral part of your cluster's networking. Plural simplifies this integration, especially across multiple clusters, including those in private networks. Our agent-based architecture and auth proxy facilitate secure, consistent access for managing and observing Ingress configurations without requiring complex network modifications, all through a unified console.
Going Further: Leveraging Advanced Ingress Controller Capabilities
Once you've mastered the basics of Ingress controllers, you're ready to explore their more advanced capabilities. These allow you to handle truly complex routing scenarios, significantly enhance application security, and meticulously fine-tune performance characteristics.
Advanced features often involve extending the standard Kubernetes Ingress resource through CRDs or by using a rich set of annotations specific to your chosen Ingress controller. For instance, you might need to implement sophisticated traffic shaping for canary releases, configure intricate A/B testing rollouts based on user geography or headers, or integrate seamlessly with external authentication systems like OAuth2 or OIDC.
Managing these advanced configurations consistently and reliably across a large and potentially diverse fleet of Kubernetes clusters can quickly become a significant operational challenge. Ensuring that every cluster has the correct version of CRDs, the right set of annotations, and that security policies are uniformly applied requires robust automation. This is precisely where a platform like Plural can transform your operational workflow.
By leveraging Plural's GitOps-based continuous deployment engine, you ensure that all your sophisticated routing rules, custom resource definitions, and specific annotations are version-controlled within your Git repositories and automatically synchronized to all target clusters. This approach not only drastically reduces manual effort and the inherent risk of configuration drift but also provides a clear, immutable audit trail for all changes. This makes it substantially easier to maintain stability, enforce security postures, and demonstrate compliance as your applications and underlying infrastructure scale and evolve.
Extending Functionality with Custom Resource Definitions (CRDs)
Standard Kubernetes Ingress resources are excellent for defining basic HTTP/HTTPS routing, but sometimes your applications demand more specialized features not covered by the default specification. This is where CRDs become invaluable. Many Ingress controller providers, such as NGINX, Traefik, or Istio, extend Kubernetes by defining their own CRDs. These custom resources empower them to offer richer functionalities, including advanced traffic splitting for blue/green deployments, fine-grained canary release management, or more granular control over middleware and request transformation. Plural’s own Stacks feature for infrastructure-as-code management is built upon CRDs, demonstrating their power for creating API-driven, manageable complexity within Kubernetes environments.
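As one illustration, the F5 NGINX Ingress Controller defines a `VirtualServer` CRD whose `splits` field implements weighted traffic splitting. In this sketch (service names are placeholders, and the exact schema depends on your controller version) roughly 10% of requests go to a canary release:

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: app
spec:
  host: app.example.com
  upstreams:
    - name: stable
      service: app-stable     # current production Deployment's Service
      port: 80
    - name: canary
      service: app-canary     # new version under test
      port: 80
  routes:
    - path: /
      splits:
        - weight: 90          # weights must sum to 100
          action:
            pass: stable
        - weight: 10
          action:
            pass: canary
```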
Customizing Behavior and Fine-Tuning with Annotations
Annotations offer a flexible and often simpler way to customize your Ingress controller's behavior without always needing to define entirely new CRDs for every adjustment. Think of them as key-value pairs that you add to your Ingress resource's metadata section, providing specific instructions directly to the controller. Common uses include enabling SSL/TLS passthrough, setting up URL rewrite rules, configuring Cross-Origin Resource Sharing (CORS) headers, defining client request rate limits, or specifying custom timeouts for backend services. Each Ingress controller type supports its own unique set of annotations, so it's crucial to consult its specific documentation.
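For instance, with the community NGINX Ingress Controller, annotations in the Ingress metadata tune proxy behavior directly. The `nginx.ingress.kubernetes.io` keys below exist in that controller, though you should confirm them against the documentation for your installed version; hostnames and service names are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2       # strip the /api prefix before proxying
    nginx.ingress.kubernetes.io/enable-cors: "true"       # emit CORS headers on responses
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120" # tolerate slower backends
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api(/|$)(.*)    # capture group consumed by rewrite-target
            pathType: ImplementationSpecific
            backend:
              service:
                name: api-service
                port:
                  number: 8080
```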
With Plural CD, these annotations, defined within your Kubernetes manifests, are consistently applied across all your clusters, ensuring predictable behavior and simplifying fleet-wide configuration management.
Enabling Support for WebSockets and Sticky Sessions
Modern interactive applications frequently require more than simple HTTP request-response patterns. WebSockets, for example, enable real-time, bidirectional communication channels between a client and a server, which are vital for features like chat applications, live notifications, or interactive online gaming. Your Ingress controller needs proper configuration, often through specific annotations or dedicated settings, to correctly handle and proxy these persistent WebSocket connections.
Another common requirement is "sticky sessions," also known as session affinity. This ensures that all requests from a particular client are consistently routed to the same backend pod. This is critical for stateful applications that maintain user session data locally on the pod and don't replicate it across all instances.
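With the community NGINX Ingress Controller, both behaviors are typically enabled through annotations. The keys below exist in that controller (verify them for your version); the hostname, cookie name, and timeout values are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: realtime-app
  annotations:
    # Sticky sessions: pin each client to one pod via a controller-set cookie
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    # WebSockets: raise proxy timeouts so long-lived connections aren't cut off
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  ingressClassName: nginx
  rules:
    - host: chat.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: chat-service
                port:
                  number: 80
```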
When implementing these advanced networking features, it's wise to regularly audit and optimize your Ingress controllers and load balancers, as Kubernetes performance best practices advise. Plural's unified dashboard provides crucial visibility into services that rely on WebSockets or sticky sessions, helping you monitor their health and performance effectively across your entire Kubernetes fleet.
Optimize Performance and Scale Your Ingress Controller Effectively
Once your Ingress controller is up and running, the work doesn't stop there. The next step is to ensure it performs optimally and can gracefully handle your application's traffic demands. Effective scaling strategies and traffic management techniques are fundamental to maintaining a responsive and resilient system. By strategically managing resources and implementing smart traffic control mechanisms, you can prevent bottlenecks and ensure smooth operation, even when traffic surges.
Managing these configurations consistently across a large fleet of Kubernetes clusters can become complex. This is where a platform like Plural can significantly simplify operations. With Plural CD, you can define and deploy your Ingress controller settings uniformly. Additionally, the visibility provided by Plural's built-in Kubernetes dashboard helps you monitor resource usage and performance metrics, enabling you to make data-driven decisions for scaling and tuning your Ingress controllers effectively.
Strategies for Smart Horizontal and Vertical Scaling
Scaling your Ingress controller ensures it can adapt to fluctuating traffic loads. You have two primary approaches: horizontal and vertical scaling. Horizontal scaling involves increasing the number of Ingress controller pods. Imagine this as adding more lanes to a highway during peak hours; it distributes the incoming traffic across more instances, preventing any single pod from becoming overwhelmed. Vertical scaling, conversely, means increasing the resources—such as CPU and memory—allocated to your existing Ingress controller pods. This is like making each existing highway lane wider and more capable of handling a higher volume of traffic.
An efficient use of Kubernetes Ingress controllers is vital for overall cluster health. If your current worker nodes lack sufficient capacity to support scaling your Ingress pods, implementing cluster autoscaling becomes beneficial. This allows your Kubernetes cluster to automatically provision new nodes when needed, ensuring your applications, including the Ingress controller itself, remain available and responsive. Monitoring resource metrics through a unified dashboard, such as the one provided by Plural, allows you to identify precisely when and how to scale—whether by adjusting replica counts for horizontal scaling or modifying resource requests and limits for vertical scaling.
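Horizontal scaling of the controller itself can be automated with a standard HorizontalPodAutoscaler. This sketch assumes the controller runs as a Deployment named `ingress-nginx-controller`; adjust the target reference and thresholds to your setup:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ingress-nginx-controller   # assumed Deployment name
  minReplicas: 2                     # keep redundancy even at low load
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70     # add pods when average CPU passes 70%
```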
Implementing Caching, Rate Limiting, and Traffic Shaping
Beyond simply scaling the number or size of your Ingress controller pods, several sophisticated traffic management techniques can significantly enhance performance and resilience. Caching is a particularly powerful tool. Many Ingress controllers can be configured to cache frequently requested static content at the edge. This means subsequent requests for the same content are served directly from the Ingress controller's cache, reducing latency for users and significantly decreasing the load on your backend services.
Rate limiting is another essential feature for protecting your applications. By implementing rate limits at the Ingress level, you can control the amount of traffic that any single client or IP address can send to your services within a given time period. This is crucial for preventing abuse or overload and ensuring fair usage of resources among all users. Traffic shaping offers even more granular control, allowing you to manage the flow of traffic by prioritizing requests for critical applications. This ensures that essential services receive the necessary resources, especially during peak times, while less critical services might be throttled if necessary. These advanced configurations are often managed via Kubernetes annotations or CRDs, and fleet management tools can help apply and monitor them consistently across all your clusters.
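As a concrete sketch of rate limiting at the Ingress layer, the community NGINX Ingress Controller exposes per-client limits via annotations. The annotation keys are real for that controller; the numeric values here are arbitrary starting points, not recommendations:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-api
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"          # requests per second per client IP
    nginx.ingress.kubernetes.io/limit-connections: "20"  # concurrent connections per client IP
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080
```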
Troubleshooting Common Issues and Bolstering Security
Even with a solid understanding of Ingress controllers, you'll inevitably encounter operational hiccups. Effective troubleshooting and robust security practices are crucial for maintaining a healthy Kubernetes environment. When problems arise, a systematic approach can save you significant time and effort. Similarly, proactively implementing security measures will protect your applications and data. This involves not just fixing what's broken, but also continuously monitoring your setup and ensuring access controls are correctly configured.
For teams managing multiple Kubernetes clusters, tools that offer centralized visibility and control become indispensable. Plural, for instance, provides a unified dashboard that simplifies observing and managing resources across your entire Kubernetes fleet, making it easier to spot and address issues quickly. This single pane of glass is particularly valuable when you need to correlate events or apply consistent configurations across many environments.
When your Ingress controller isn't routing traffic as expected or is throwing errors, a few common culprits are usually at play. Start by meticulously verifying your Ingress resource definitions for any typos or misconfigurations in paths, service names, or ports. The logs from the Ingress controller pods themselves often provide direct clues about what's going wrong, so make those your next stop. Ensure your backend services are running correctly and are reachable within the cluster; an Ingress can't route traffic to a non-functional service. Sometimes, the issue might lie with DNS resolution not pointing to your Ingress controller's external IP, or network policies inadvertently blocking traffic. Standard tools like `kubectl describe ingress <ingress-name>` and `kubectl logs <ingress-controller-pod-name>` are your first line of defense for gathering this information. For a more visual approach, Plural's Kubernetes dashboard can help you quickly inspect the status of your Ingress objects, services, and pods, streamlining the diagnostic process considerably.
Implementing Robust Access Control and Authentication Methods
Securing access to your Kubernetes services starts with strong access control and authentication, not just for your applications but also for your Ingress resources and the Kubernetes API itself. Role-Based Access Control (RBAC) is the standard mechanism in Kubernetes for managing permissions effectively. You should define clear roles and responsibilities, granting users and service accounts only the minimum necessary permissions to interact with Ingress objects and other cluster resources. When exposing services externally via Ingress, always use HTTPS to encrypt traffic in transit. Consider adding an authentication layer at the Ingress level using tools like OAuth2 Proxy or integrating with an OIDC provider. Plural simplifies secure access to your clusters by integrating its embedded Kubernetes dashboard with your existing OIDC provider. This setup leverages Kubernetes Impersonation, ensuring that users interact with clusters based on their pre-defined RBAC policies, providing a seamless and secure single sign-on experience.
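For example, the community NGINX Ingress Controller can delegate authentication to an external service such as oauth2-proxy through two annotations. The annotation keys are real for that controller; the URLs below are placeholders for wherever your authentication proxy is actually deployed:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: protected-app
  annotations:
    # Each request is first sent to this endpoint; a 2xx response lets it through
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
    # Unauthenticated users are redirected here to log in
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=$escaped_request_uri"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80
```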
Frequently Asked Questions
Is an Ingress controller always necessary, or can I get by without one for simpler setups? While you could technically expose a single service using a Kubernetes Service of type `LoadBalancer`, an Ingress controller becomes incredibly valuable very quickly. Even with just a couple of services, an Ingress controller gives you sophisticated routing capabilities, like directing traffic based on the URL path, and simplifies SSL/TLS certificate management. As your application grows, these features move from "nice-to-have" to essential for managing traffic efficiently and cost-effectively.
I'm still a bit fuzzy on the difference between an "Ingress resource" and an "Ingress controller." Can you clarify? Think of it this way: the Ingress resource is like your set of instructions or a blueprint. You create this YAML file to tell Kubernetes what you want to happen, for example, "send traffic for `myapp.com/api` to the `api-service`." The Ingress controller, on the other hand, is the actual worker that reads your blueprint and makes it happen. It's a piece of software running in your cluster that watches for these Ingress resources and configures the underlying network routing to match your instructions. Without an active controller, your Ingress resource is just a wish list.
Managing Ingress across many Kubernetes clusters sounds complicated. How can Plural make this easier? You're right, keeping Ingress configurations consistent and up-to-date across a fleet of clusters can be a real challenge. Plural helps significantly here. With Plural CD, our continuous deployment engine, you can manage your Ingress configurations using a GitOps workflow. This means your routing rules are version-controlled and automatically applied to all your clusters, ensuring consistency. Plus, Plural's unified dashboard gives you a single place to see how traffic is flowing and check the status of your Ingress setups across your entire environment, which simplifies monitoring and troubleshooting.
If my application isn't accessible through my Ingress, what are some quick troubleshooting steps I can take? When traffic isn't reaching your application as expected, a few key areas are good to check first. Double-check your Ingress resource definition for any typos in hostnames, paths, or service names. The logs from your Ingress controller pods are often very revealing, so take a look there for error messages. Also, confirm that the backend services your Ingress is trying to route to are actually running and healthy within the cluster. Sometimes, the issue can be as simple as a DNS misconfiguration not pointing to your Ingress controller's external IP. Using Plural's Kubernetes dashboard can also help you visually inspect the status of your Ingress objects, services, and pods to spot issues more quickly.
Should I stick with a generic Ingress controller like NGINX, or use the one my cloud provider offers? This is a common decision point, and there's no single right answer; it depends on your team's needs and your infrastructure. Generic controllers like NGINX or Traefik are very popular, highly configurable, and work across different environments. Cloud provider-specific controllers, like AWS Load Balancer Controller or GKE Ingress, often offer tighter integration with their cloud's other services, which can sometimes simplify things if you're all-in on one provider. The good news is that a platform like Plural can help you manage diverse setups, so if you end up using different Ingress controllers across various clusters, you can still maintain a consistent operational view and deployment strategy.