
Your Guide to Kubernetes on Linode

Learn how to effectively manage Kubernetes on Linode, from setup to scaling, with insights on deployment strategies, security, and cost optimization.

Elzet Blaauw

Adopting a GitOps workflow is essential for achieving repeatable, auditable, and automated deployments in modern infrastructure. While Linode Kubernetes Engine (LKE) offers a robust managed Kubernetes foundation, it doesn’t prescribe how you manage your application configurations or infrastructure code.

This guide outlines how to implement an enterprise-grade GitOps workflow on LKE using declarative configurations to manage everything from application deployments and network policies to persistent volumes. By treating your entire infrastructure as code, you can build a scalable, consistent, and low-touch platform that significantly reduces manual intervention and operational risk.

Unified Cloud Orchestration for Kubernetes

Manage Kubernetes at scale through a single, enterprise-ready platform.

GitOps Deployment
Secure Dashboards
Infrastructure-as-Code
Book a demo

Key takeaways:

  • LKE provides a solid foundation for individual clusters: Linode Kubernetes Engine simplifies running containerized applications by managing the control plane, offering a cost-effective and direct way to get started with Kubernetes for single projects or teams.
  • Manual management becomes unsustainable at scale: As you expand from one cluster to a fleet, relying on manual configuration leads to inconsistencies, security gaps, and operational bottlenecks. A declarative, automated approach is necessary to manage multiple environments effectively.
  • A unified platform is essential for fleet management: Implement a platform like Plural to apply consistent GitOps workflows, infrastructure-as-code practices, and centralized security policies across all your LKE and multi-cloud clusters, solving the challenges that arise with scale.

What is Kubernetes and Why Run it on Linode?

Kubernetes is the industry-standard platform for orchestrating containerized applications. Managed services like LKE simplify its adoption by abstracting away control plane complexities, enabling teams to deploy faster and focus on development rather than infrastructure. LKE offers a stable, cloud-native environment ideal for both beginners and growing teams.

As organizations scale from a single cluster to multiple environments, new challenges emerge, such as maintaining deployment consistency, enforcing security policies, and achieving unified observability. Solving these requires not just familiarity with Kubernetes, but also a robust management layer. While LKE provides the foundation, a platform like Plural equips you to manage your entire fleet with consistent GitOps workflows, automated upgrades, and centralized visibility.

Understanding Container Orchestration

At its core, container orchestration automates the deployment, scaling, networking, and management of containers. As applications grow in complexity, manual oversight becomes error-prone and unsustainable. Orchestration platforms like Kubernetes provide a declarative API, powerful scheduling, service discovery, and automated rollouts, ensuring workloads are resilient and infrastructure is abstracted away from the developer experience.

Kubernetes has become the de facto orchestration standard due to its flexibility, reliability, and thriving ecosystem. It manages the full container lifecycle—scheduling, scaling, load balancing, and self-healing—allowing teams to ship features without worrying about low-level infrastructure.

Why Kubernetes Is a Smart Choice for Your Applications

Running applications on Kubernetes brings significant benefits:

  • Auto-scaling in response to demand spikes
  • Self-healing for failed containers and nodes
  • Zero-downtime deployments via rolling updates

Services like LKE make getting started simple, provisioning production-ready clusters in minutes. But while Kubernetes offers power and flexibility, it also introduces challenges around configuration complexity, resource tuning, and multi-cluster management.

That’s where tools like Plural come in. It provides a centralized platform to manage infrastructure as code, enforce policy and security best practices, and maintain full observability across your clusters—turning your LKE environments into a scalable, reliable platform that’s ready for long-term growth.

What Is LKE?

LKE is a fully managed Kubernetes service from Akamai’s Linode, designed to simplify the deployment, scaling, and management of containerized applications. Its key strengths lie in combining ease of use, reliable performance, and predictable pricing, making it a strong choice for teams new to Kubernetes as well as seasoned users seeking a cost-effective alternative to hyperscalers.

While LKE streamlines single-cluster operations, platform teams often operate across multi-cloud or hybrid environments. Managing clusters on LKE, AWS (EKS), and GCP (GKE) introduces complexity and fragmentation. This is where centralized management becomes essential. Tools like Plural offer a unified control plane for orchestrating deployments, managing infrastructure as code, and enforcing security policies consistently across all environments—allowing you to leverage LKE's affordability without sacrificing control or visibility.

Core Features of LKE

LKE provides all the essentials needed to run production-grade Kubernetes workloads:

  • Managed control plane: Linode handles availability, patching, and version upgrades of the Kubernetes master components, abstracting away significant operational overhead.
  • High availability by default: Control planes are designed for resilience, ensuring application uptime.
  • Integration with Linode ecosystem: Seamlessly connect with NodeBalancers for load balancing and Block Storage for persistent volumes.
  • Cluster autoscaling: Worker node pools automatically scale based on workload demand.

You can explore the full list of capabilities in the LKE documentation. These features create a solid foundation for modern app deployment—even without deep Kubernetes expertise.

How LKE Simplifies Cluster Management

LKE’s key simplification lies in abstracting the Kubernetes control plane. You define your cluster size by selecting Linode Compute Instances as worker nodes, while Linode manages the master components behind the scenes, including upgrades and availability.

Clusters can be deployed quickly using the Linode Cloud Manager, API, or Terraform provider. This reduces administrative overhead and accelerates developer workflows.

For organizations managing multiple clusters, Plural’s GitOps-based platform extends this simplicity by enabling centralized, declarative configuration across all clusters—whether on LKE, EKS, or GKE.

LKE’s Competitive Advantages

LKE stands out for its:

  • Predictable pricing: Flat hourly rates for control plane usage and transparent per-node billing—unlike the often opaque pricing models of hyperscalers.
  • Ease of use: A clean UI, simple provisioning, and tight integrations make it beginner-friendly.
  • Cost efficiency: Ideal for startups and SMBs looking to manage Kubernetes infrastructure without overspending.

For larger enterprises, LKE supports advanced use cases when paired with tools like Plural. You can uniformly manage RBAC policies, deploy applications declaratively, and maintain consistent security and compliance standards across your entire multi-cloud Kubernetes fleet.

How to Set Up Your First LKE Cluster

Setting up a Linode Kubernetes Engine (LKE) cluster is a streamlined way to begin deploying containerized applications. LKE, part of Akamai Connected Cloud, offers a managed Kubernetes control plane that abstracts away the complexity of running master nodes—freeing you to focus on your workloads.

This guide walks you through creating your first LKE cluster using the Linode Cloud Manager and connecting to it with kubectl. While this manual process is suitable for individual clusters or development environments, we’ll also highlight how to scale operations using a platform like Plural to automate and manage Kubernetes across your fleet.

Step 1: Fulfill Prerequisites and Set Up Your Account

Before provisioning an LKE cluster, you’ll need:

  • A Linode account
  • kubectl installed on your local machine

Once your account is active, log in to the Linode Cloud Manager—the web UI for provisioning and managing cloud resources. kubectl is the standard CLI for interacting with Kubernetes clusters, used to deploy apps, inspect resources, and monitor health.

Step 2: Create a Cluster in the Linode Cloud Manager

To create a cluster:

  1. Navigate to the Kubernetes section in Cloud Manager and click Create Cluster.
  2. Assign a label, choose a region, and select a Kubernetes version—preferably the latest stable release.
  3. Define node pools, which determine the hardware resources for your worker nodes. Select a Linode plan and specify the number of nodes (start with 1–2 for testing).

Linode will provision both the control plane and worker nodes automatically. This UI-driven setup is great for learning, but in production environments, declarative tools like Terraform and platforms like Plural Stacks provide better scalability and repeatability.

Step 3: Configure kubectl to Access Your Cluster

After your cluster is provisioned:

  1. Download the kubeconfig file from your cluster’s detail page in Cloud Manager.
  2. Either merge it with your existing configuration at ~/.kube/config or set the KUBECONFIG environment variable:
export KUBECONFIG=/path/to/your/kubeconfig.yaml
  3. Verify the connection:
kubectl get nodes

You should see the list of your cluster’s worker nodes.

For teams managing multiple clusters, distributing individual kubeconfig files becomes cumbersome. Plural solves this by offering a centralized, SSO-enabled Kubernetes dashboard, streamlining access and eliminating manual config management.

Scaling Beyond the First Cluster

While LKE’s ease of use makes it ideal for quick starts, managing dozens of clusters through the UI doesn't scale. Tasks like patching, RBAC enforcement, and infrastructure provisioning quickly become operational bottlenecks.

This is where Plural adds significant value. With its GitOps-based automation, you can define infrastructure and applications as code and apply them uniformly across LKE, EKS, GKE, and other environments. Changes are tracked in Git, environments are consistently provisioned, and your deployments become repeatable and auditable.

Deploy and Manage Applications on LKE

Once your LKE cluster is up and running, the next step is deploying and managing your applications. Kubernetes offers robust primitives for orchestrating containerized workloads, but turning those primitives into reliable, scalable deployments requires a well-defined workflow. Successful application management involves standardized configuration, deployment automation, and controlled scaling strategies—especially when operating across multiple clusters.

A GitOps-driven, automated deployment strategy becomes essential for ensuring consistency, minimizing manual effort, and enabling repeatability across environments.

Apply Kubernetes Manifests

Kubernetes manifests—YAML or JSON files defining resources like Deployments or Services—are applied using:

kubectl apply -f <filename>.yaml
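
For illustration, such a file might contain a minimal Deployment like the sketch below (the name, labels, and image are placeholders, not from the original guide):

```yaml
# web-deployment.yaml — a minimal, hypothetical Deployment manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # placeholder name
spec:
  replicas: 2               # two pods for basic redundancy
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27   # example image
          ports:
            - containerPort: 80
```

Applying the file with kubectl apply is idempotent: running it again after editing the manifest updates the Deployment in place.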

This method works for small projects, but managing numerous manifests manually across multiple clusters is error-prone and unscalable. It also lacks version control and auditing.

GitOps addresses this by using a Git repository as the single source of truth. Changes are proposed through pull requests and automatically synchronized to clusters. Plural CD supports this model by detecting Git changes and applying them to your LKE clusters, enabling consistent, auditable, and automated deployments.

Use Helm Charts to Streamline Deployments

As application complexity grows, managing raw manifests becomes cumbersome. Helm, Kubernetes’ package manager, solves this by packaging resources into charts that support templating and versioning. This makes it easier to:

  • Deploy complex applications
  • Reuse configurations across environments
  • Simplify upgrades and rollbacks
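
As an example, environment-specific settings typically live in a small values file that overlays a chart's defaults. The keys below are hypothetical and depend entirely on the chart in question:

```yaml
# values-prod.yaml — hypothetical production overrides for a chart's defaults
replicaCount: 3
image:
  tag: "1.4.2"              # pin a specific release for production
resources:
  requests:
    cpu: 250m
    memory: 256Mi
ingress:
  enabled: true
  host: app.example.com     # placeholder hostname
```

You would then pass this file at install or upgrade time, for example with helm upgrade --install myapp ./chart -f values-prod.yaml, keeping one values file per environment under version control.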

Plural integrates natively with Helm, allowing you to manage charts declaratively within your GitOps pipeline. You can define values per environment, automate Helm releases across your LKE fleet, and ensure applications are deployed consistently.

Scale and Update Applications

Kubernetes supports scaling and rolling updates out of the box. You can:

  • Manually scale a deployment with kubectl scale
  • Set up a Horizontal Pod Autoscaler (HPA) to auto-adjust replicas based on CPU/memory usage
  • Use rolling updates to deploy changes with zero downtime
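
A minimal HPA sketch, assuming a Deployment named web and a metrics server running in the cluster (both names are illustrative):

```yaml
# hpa.yaml — scales the hypothetical "web" Deployment on CPU utilization
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web               # must match an existing Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above ~70% average CPU
```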

While these features work at the cluster level, managing them across a fleet requires a higher level of automation. With Plural, you can define HPAs and update strategies as code. Pull request automation then propagates changes across environments, ensuring uniform scaling and update policies with minimal manual effort.

Manage Networking, Storage, and Resources in LKE

While LKE abstracts away much of the complexity of cluster setup, effective management of networking, storage, and resources is essential for running stable, secure, and performant applications. Misconfigurations in these areas are a common Kubernetes deployment challenge and can lead to service downtime, data loss, or security vulnerabilities.

As your environment grows beyond a single cluster, maintaining consistency becomes a significant operational burden. Manually configuring each cluster is not only time-consuming but also prone to human error, leading to configuration drift that can be difficult to track and resolve.

Using a GitOps–based approach to declaratively manage these configurations ensures that every LKE cluster in your fleet adheres to the same standards for networking, storage, and security. Platforms like Plural provide the necessary tooling to implement this at scale. By managing these critical components as code in a central repository, you can automate deployments and enforce policies consistently across your entire fleet from a single pane of glass. This allows your team to focus on application logic rather than manually configuring each cluster's underlying infrastructure.

The following sections cover the core components you'll need to manage within your LKE environment to build a robust and scalable platform.

Configure Load Balancers and Ingress Controllers

To expose your applications to the internet, you need to manage incoming traffic. LKE simplifies this by integrating directly with Linode NodeBalancers. When you create a Kubernetes Service of type: LoadBalancer, LKE automatically provisions and configures a NodeBalancer to distribute traffic to your pods. This is ideal for exposing single services over TCP/UDP.
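
A minimal sketch of such a Service, assuming backend pods labeled app: web (the names and ports are placeholders):

```yaml
# service-lb.yaml — LKE provisions a Linode NodeBalancer for this Service
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer        # triggers NodeBalancer provisioning on LKE
  selector:
    app: web                # assumes pods carry this label
  ports:
    - port: 80              # port exposed by the NodeBalancer
      targetPort: 8080      # port the container listens on
```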

For more complex HTTP/S routing, you'll need an Ingress Controller. An Ingress controller acts as a reverse proxy, managing access to multiple services based on hostnames or URL paths. You can deploy popular controllers like NGINX Ingress Controller or Traefik onto your LKE cluster using Helm charts. This gives you fine-grained control over traffic, enabling features like SSL termination and path-based routing, which are essential for hosting multiple applications on a single LKE cluster.
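
A host-based routing rule might look like the following sketch, assuming the NGINX Ingress Controller is installed and a backing Service named web-svc exists (both are hypothetical here):

```yaml
# ingress.yaml — routes HTTP traffic for one hostname to a backing Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx   # assumes the NGINX Ingress Controller
  rules:
    - host: app.example.com # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc   # hypothetical Service
                port:
                  number: 80
```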

Manage Persistent Volumes

Stateless applications are straightforward, but most real-world applications require persistent storage to maintain state. Kubernetes handles this with PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs). LKE streamlines this process by integrating with Linode Block Storage. When you create a PVC, LKE’s built-in storage provisioner dynamically creates a Block Storage Volume and attaches it to the appropriate node for your pod.

To ensure stability, it's critical to define resource requests and limits for pods that use persistent storage. This guarantees that your stateful applications have the necessary CPU and memory to function correctly, preventing resource contention and performance degradation. For complex applications that rely on external databases or object storage, Plural’s Infrastructure-as-Code management allows you to provision and manage these resources alongside your Kubernetes deployments, ensuring your entire application stack is managed declaratively.
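
The two ideas above can be combined in one sketch: a PVC that LKE's provisioner backs with a Block Storage Volume, mounted by a pod that also declares resource requests and limits. The StorageClass name assumes LKE's default CSI class, and the pod and image names are placeholders:

```yaml
# pvc-and-pod.yaml — dynamic Block Storage provisioning plus resource limits
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: linode-block-storage   # assumed default LKE StorageClass
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: db                  # illustrative stateful pod
spec:
  containers:
    - name: db
      image: postgres:16    # example image
      resources:
        requests:          # guaranteed baseline for scheduling
          cpu: 500m
          memory: 512Mi
        limits:            # hard ceiling to prevent contention
          cpu: "1"
          memory: 1Gi
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc
```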

Implement Network Security Policies

By default, all pods in a Kubernetes cluster can communicate with each other. To secure your applications, you must implement Network Policies that restrict traffic flow. These act as a firewall for your pods, allowing you to define explicit rules for ingress and egress traffic. For example, you can create a policy that only allows frontend pods to communicate with backend API pods on a specific port.
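
The frontend-to-backend example above can be sketched as a NetworkPolicy like this, with illustrative labels and port:

```yaml
# netpol.yaml — only pods labeled app: frontend may reach backend pods on 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend          # the policy protects backend pods
  policyTypes:
    - Ingress               # all other ingress to these pods is denied
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```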

LKE uses the Cilium CNI by default, which enforces NetworkPolicies out of the box. This enables you to create a zero-trust network environment where communication is denied by default and only explicitly allowed where necessary.

Managing these policies, along with user permissions via Role-Based Access Control (RBAC), is critical for security. Plural simplifies this by allowing you to define RBAC and NetworkPolicies in a Git repository and apply them consistently across your entire fleet of LKE clusters.

How to Monitor Your LKE Clusters

While LKE simplifies cluster setup by managing the control plane, you are still responsible for the health of your applications and worker nodes. Effective monitoring is essential for maintaining reliability, diagnosing issues, and optimizing resource usage. A solid monitoring strategy provides visibility into every layer of your cluster, from node-level metrics to application-specific logs, ensuring you can catch problems before they impact your users.

Use LKE's built-in monitoring tools

LKE provides foundational monitoring capabilities directly within the Linode Cloud Manager. You can view essential metrics for your worker nodes, including CPU usage, memory, disk I/O, and network traffic. These built-in graphs offer a quick, high-level overview of your cluster's health and are useful for spotting immediate resource constraints. While Linode manages the control plane, these tools give you necessary visibility into your compute instances. However, they are limited to the node level and don't provide deep insights into the applications running inside your cluster. For comprehensive observability, you'll need to look beyond these default metrics.

Integrate third-party observability platforms

To overcome the limitations of basic monitoring, teams often integrate specialized observability platforms. While LKE simplifies cluster creation, the inherent complexity of Kubernetes requires a more robust solution for true visibility. This is where a platform like Plural becomes critical. Plural provides a unified dashboard to monitor your entire Kubernetes fleet, including any clusters running on LKE. By deploying the Plural agent, you can stream metrics, logs, and events into a single pane of glass. This allows you to correlate data across all your clusters and applications, simplifying troubleshooting. Instead of juggling multiple tools, your team gets a consistent, secure way to observe your infrastructure.

Manage and analyze logs effectively

Logs are your primary tool for debugging application failures. In a dynamic Kubernetes environment where pods are constantly created and destroyed, collecting and analyzing logs presents a significant challenge. A centralized logging solution is necessary to capture all relevant data before a pod disappears. Misconfigurations can lead to failed deployments, and these issues are often first visible in logs. Plural simplifies this by enabling you to deploy and manage popular logging stacks from its open-source marketplace. Furthermore, Plural’s embedded Kubernetes dashboard provides direct access to logs without needing to manage complex kubectl commands. This streamlines the process of diagnosing issues, allowing your engineers to find the root cause of a problem efficiently.

Secure Your LKE Environment

Running applications on Linode Kubernetes Engine provides a robust and scalable foundation, but securing your workloads is a shared responsibility. A secure LKE environment requires a multi-layered approach that addresses access control, image integrity, and network traffic. By implementing security best practices from the start, you can protect your applications and data from potential threats and ensure your deployments are both reliable and compliant. These practices are not one-time fixes but part of a continuous process of vigilance and improvement for your infrastructure.

Implement RBAC and security contexts

Role-Based Access Control (RBAC) is your first line of defense for managing permissions within your LKE cluster. It dictates who can access which Kubernetes API resources and what actions they can perform. Without RBAC, you risk granting overly broad permissions that could lead to accidental misconfigurations or malicious activity. For example, you can create a ClusterRoleBinding to grant specific permissions to a user or group. Platforms like Plural streamline this process by integrating with your identity provider (IdP) via OIDC. This allows you to manage Kubernetes RBAC using familiar user emails and groups, ensuring consistent and auditable access policies across your entire fleet of LKE clusters.
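
A minimal ClusterRoleBinding along those lines might look like the sketch below, granting the built-in read-only view ClusterRole to a hypothetical group from your identity provider:

```yaml
# rbac.yaml — read-only cluster access for a hypothetical "dev-team" group
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-only-devs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                # Kubernetes' built-in read-only role
subjects:
  - kind: Group
    name: dev-team          # placeholder group name from your IdP
    apiGroup: rbac.authorization.k8s.io
```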

Scan container images for vulnerabilities

Container images often include base layers and third-party libraries that can contain known vulnerabilities. A compromised image deployed to your LKE cluster can expose your entire environment to attack. To mitigate this, you must integrate vulnerability scanning into your CI/CD pipeline. Tools like Trivy or Clair can scan your images for security issues before they are pushed to a registry or deployed. Using Plural, you can standardize this process by deploying a Trivy operator to all your LKE clusters from a central management plane. This ensures every workload is automatically scanned, providing a unified view of your security posture and enforcing compliance across your organization.

Enforce network security and encryption

By default, all pods in a Kubernetes cluster can communicate with each other. This unrestricted "east-west" traffic can allow a breach to spread quickly. Kubernetes NetworkPolicies let you segment your network and define explicit rules for how pods can communicate. For example, you can create a policy that only allows your frontend pods to talk to your backend database pods. Beyond segmentation, Plural’s agent-based architecture enhances network security by using an egress-only connection from your LKE cluster to the management plane. This means you don't need to expose your Kubernetes API to the internet, significantly reducing the attack surface while maintaining full control and visibility.

Optimize LKE for Performance and Cost

Running Kubernetes on LKE isn't just about deploying applications—it’s about operating them efficiently throughout their lifecycle. As usage scales, even minor inefficiencies can lead to significant costs and performance degradation. To avoid this, you need to proactively manage resource allocation, plan upgrades, and understand LKE’s consumption-based pricing.

These optimizations aren't one-time tasks. They require continuous monitoring and adjustment. In multi-cluster environments, maintaining consistency becomes especially challenging—configuration drift in a single cluster can result in unexpected expenses or degraded performance. A declarative, centralized configuration approach helps ensure autoscaling, upgrades, and resource tuning are applied uniformly across your infrastructure. Tools like Plural make this possible by codifying operational standards, allowing you to enforce performance and cost-efficiency best practices from the start.

Manage Cluster Autoscaling and Node Pools

Effective resource management is critical for balancing cost and performance. LKE supports cluster autoscaling, which automatically adjusts the number of nodes in a pool based on workload demand. This prevents over-provisioning during quiet periods while ensuring sufficient capacity during spikes.

You can further optimize usage by creating multiple node pools with different Linode instance types, each tailored to specific workloads. For example, general-purpose nodes for stateless apps and compute-optimized nodes for batch processing. Plural enables you to define these autoscaling and node pool strategies as Infrastructure-as-Code, ensuring they’re applied consistently across your LKE clusters.

Plan Kubernetes Version Upgrades

Staying current with Kubernetes versions is essential for security patches, new features, and performance improvements. LKE lets you upgrade clusters via the Cloud Manager or API, but upgrades should be approached with caution. Always test against a staging environment to catch issues related to deprecated APIs or behavioral changes before touching production workloads.

Scheduling upgrades during low-traffic windows minimizes user impact. In multi-cluster environments, a GitOps workflow can treat version upgrades as code, enabling consistent, repeatable rollouts across environments, reducing manual effort and risk.

Optimize Costs with the LKE Pricing Model

LKE’s pricing model is transparent—you pay only for what you use: compute nodes, load balancers, and storage. This makes it easy to monitor spend but also places responsibility on you to avoid overprovisioning.

Start by choosing the right instance types for your needs. Use smaller plans for dev/test environments and reserve higher-tier plans for production. Regularly audit usage to identify inefficiencies such as idle nodes or unused Block Storage volumes. While LKE gives you the primitives, Plural offers centralized visibility across your entire cluster fleet, helping you enforce cost-conscious practices and spot waste before it compounds.

How LKE Fits into the Broader Kubernetes Ecosystem

LKE provides a solid, managed Kubernetes control plane, abstracting away much of the operational overhead required to run a cluster. This allows teams to focus on deploying and managing their applications rather than maintaining the underlying Kubernetes components. However, LKE is one piece of a larger puzzle. As organizations scale, they often adopt a multi-cluster or even multi-cloud strategy, running workloads on LKE alongside other providers like EKS or GKE. This introduces heterogeneity that can complicate management, deployment, and observability.

This is where a unified orchestration layer becomes essential. While LKE manages the cluster itself, a platform like Plural provides a consistent workflow for managing the entire fleet. Plural’s agent-based architecture allows you to connect to any Kubernetes cluster, including those on LKE, to enforce consistent GitOps-based deployments, manage infrastructure as code, and gain visibility through a single dashboard. By pairing LKE's simplified cluster management with Plural's fleet management capabilities, you can build a scalable, secure, and efficient platform that isn't locked into a single vendor's ecosystem. This combination allows you to leverage LKE's cost-effectiveness and simplicity while maintaining a standardized operational model across all your Kubernetes environments.

Compare LKE with other managed K8s solutions

The primary value of any managed Kubernetes service—be it LKE, GKE, EKS, or AKS—is the reduction of operational complexity. Manually setting up and maintaining a production-grade Kubernetes control plane is a significant undertaking that requires specialized expertise. LKE positions itself as a strong competitor by emphasizing ease of use and cost-effectiveness, making it an attractive option for teams that need a straightforward, no-frills Kubernetes environment. While hyperscalers may offer a wider array of integrated services, LKE delivers a clean, conformant Kubernetes experience without the intricate billing and configuration of larger platforms. However, regardless of the provider, you are still responsible for application lifecycle management. Plural provides a consistent GitOps deployment engine that works identically across LKE, EKS, and any other cluster, ensuring your deployment pipeline remains standardized even in a hybrid environment.

Integrate LKE with other Linode services

LKE is designed to work seamlessly within the broader Linode ecosystem. You can easily integrate your clusters with other Linode products like NodeBalancers for load balancing, Block Storage for persistent volumes, and VLANs for private networking. Linode provides extensive documentation and guides for configuring these integrations, allowing you to build a complete application stack using their suite of tools. For teams that manage infrastructure as code, these resources can be provisioned using Terraform. Plural extends this capability with Plural Stacks, which provides an API-driven framework for managing Terraform. This allows you to automate the provisioning of your LKE cluster and all its dependent Linode resources from a single, version-controlled workflow, ensuring infrastructure changes are repeatable, auditable, and scalable.

Understand the LKE product roadmap

Linode has structured LKE to accommodate growth, offering both a standard version for general use and an enterprise-grade version, LKE-Enterprise, for larger, more demanding production workloads. This tiered approach demonstrates a commitment to supporting users from small-scale projects to large, mission-critical applications. The key difference lies in the dedicated resources, higher throughput, and stronger SLAs offered by the enterprise version. As your organization’s use of Kubernetes matures, you might find yourself managing a mix of standard and enterprise LKE clusters, or even expanding to other cloud providers. This is a natural scaling point where a centralized management platform becomes critical. Plural’s single-pane-of-glass console provides unified visibility and control over your entire fleet, allowing you to manage deployments, monitor health, and enforce RBAC policies consistently, no matter how diverse your underlying infrastructure becomes.


Frequently Asked Questions

What's the difference between what LKE manages and what Plural manages? Linode Kubernetes Engine (LKE) manages the foundational Kubernetes control plane for an individual cluster, abstracting away the complexity of its setup and maintenance. Plural operates at the layer above, providing a unified platform to manage the entire lifecycle of applications and infrastructure configurations across your entire fleet of clusters. Think of LKE as providing the engine, while Plural provides the comprehensive dashboard and automated systems to operate a whole fleet of vehicles consistently and securely.

Why can't I just use LKE's tools to manage multiple clusters? While LKE's tools are effective for a single cluster, they don't scale to a fleet. Managing multiple clusters individually through a UI or CLI leads to operational bottlenecks and inevitable configuration drift, where each cluster slowly becomes different. Plural solves this by enforcing a GitOps workflow from a central control plane, ensuring that every application deployment, RBAC policy, and infrastructure setting is applied consistently across all your LKE clusters from a single source of truth.

Can I use Plural to manage my LKE clusters alongside clusters on other clouds like AWS or GCP? Yes, this is a primary use case for Plural. Our platform is cloud-agnostic and uses a lightweight agent to connect to any conformant Kubernetes cluster, regardless of the provider. This allows you to standardize your deployment and management workflows across a hybrid environment, giving you a single pane of glass to control your LKE clusters right next to your EKS and GKE clusters.

How does Plural connect to my LKE clusters without exposing their APIs to the internet? Plural installs a secure agent inside each LKE cluster that initiates an egress-only connection to the Plural management plane. All commands and API requests from the Plural console are tunneled through this secure, outbound-only channel. This architecture allows for full remote management and observability without ever requiring you to expose your cluster's Kubernetes API to inbound traffic from the internet, significantly reducing the attack surface.

If I'm already using Terraform to manage my LKE clusters, what additional benefit does Plural provide? Plural enhances your existing Terraform workflows with a scalable, API-driven automation layer called Plural Stacks. Instead of relying on manual runs or complex CI pipelines, you can manage Terraform declaratively through a GitOps process. Plural will automatically run terraform plan on pull requests and terraform apply on merges, providing a complete audit trail and integrating your infrastructure changes seamlessly with your application deployment pipeline across the entire fleet.
