Install Kubernetes on Ubuntu: Step-by-Step Tutorial
Kubernetes is the industry standard for container orchestration, and Ubuntu is one of the most popular OS choices for deploying it in production. For any infrastructure engineer, installing Kubernetes on Ubuntu from scratch is a must-have skill. This guide walks you through the process using `kubeadm`, the official tool for bootstrapping Kubernetes clusters. You’ll set up the nodes, initialize the control plane, join worker nodes, and verify the installation. While this manual setup builds a strong foundation for understanding how Kubernetes works under the hood, it's also the first step toward automating and scaling your cluster for real-world production use.
Key takeaways:
- Understand core components through manual installation: Building a cluster on Ubuntu with `kubeadm` is a practical way to learn how Kubernetes works. The process requires preparing each node by disabling swap, installing a container runtime, and configuring kernel settings before initializing the cluster.
- Go beyond installation to harden your cluster: A default installation is not production-ready. You must implement foundational security with RBAC, ensure application stability by setting resource requests and limits, and deploy observability tools for monitoring and logging.
- Adopt automation for fleet management: While manual setup is educational, it doesn't scale. A platform like Plural is essential for managing multiple clusters, using a secure, GitOps-driven workflow to automate deployments, upgrades, and security policies, which prevents configuration drift and reduces operational overhead.
What Is Kubernetes and Why Run It on Ubuntu?
Kubernetes: The Backbone of Modern Infrastructure
Kubernetes has become the go-to platform for managing containerized applications at scale. It automates deployment, scaling, and operations across clusters of machines, whether on-premise or across multiple clouds. Its scheduler, self-healing, and auto-scaling features make it ideal for running everything from web apps to complex machine learning pipelines.
However, with flexibility comes complexity. Running Kubernetes in production means dealing with intricate configuration, networking, resource limits, RBAC policies, and monitoring. Mastering these areas is key to operating resilient systems at scale.
Why Ubuntu Is a Solid Base for Kubernetes
Ubuntu is one of the most widely used operating systems for Kubernetes nodes, favored for its stability, simplicity, and strong community support. Canonical maintains a "pure upstream" Kubernetes distribution, ensuring close compatibility with the official project and timely updates.
With a well-documented installation process via `apt`, robust LTS releases, and first-class support from major cloud providers, Ubuntu strikes the right balance between reliability and ease of use. It's a proven foundation that powers production clusters across startups and enterprises alike.
Prepare Your Environment for Installation
Before you can install Kubernetes, you need to lay the proper groundwork. A well-prepared environment is the key to a smooth installation and a stable cluster. Taking the time to configure your nodes correctly now will save you from troubleshooting headaches later. This section covers the essential prerequisites and configuration steps for each of your Ubuntu servers.
Review System Requirements and Dependencies
To get started with a minimal Kubernetes cluster, you’ll need at least two Ubuntu 22.04 machines—one control plane node and one worker node. Each should have a non-root user with `sudo` privileges.
As a baseline, allocate at least 2 vCPUs and 2 GB of RAM per node. These specs are sufficient for test clusters and small workloads. For anything more demanding, especially in production, you'll need to scale resources accordingly.
Before installing anything, update the system packages:
sudo apt update && sudo apt upgrade -y
Configure Your Ubuntu Nodes
Before bootstrapping your cluster, you need to prep each node with some essential configurations.
Disable Swap
Kubernetes requires swap to be disabled to ensure predictable memory management:
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
Load Required Kernel Modules
For container networking to work correctly, enable the `overlay` and `br_netfilter` kernel modules:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
Set System Networking Parameters
Ensure bridged IPv4/IPv6 traffic is visible to `iptables`:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system
Install the Container Runtime (containerd)
Kubernetes relies on a container runtime. We’ll use `containerd`, a widely supported and performant option:
sudo apt install -y containerd
# Generate default config
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
# Enable and start the service
sudo systemctl restart containerd
sudo systemctl enable containerd
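One caveat worth knowing: the default config generated above sets `SystemdCgroup = false`, but on systemd-based distros like Ubuntu the kubelet expects the systemd cgroup driver. A quick in-place edit fixes the mismatch:

```shell
# The generated default sets SystemdCgroup = false; switch runc to the
# systemd cgroup driver so containerd matches the kubelet
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Restart to pick up the change
sudo systemctl restart containerd
```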
These steps prepare the system to run Kubernetes reliably and are required on every node in your cluster.
Install Kubernetes with kubeadm
With your nodes configured and `containerd` running, it’s time to install the Kubernetes components and bootstrap your cluster.
Install kubeadm, kubelet, and kubectl
Run these commands on all nodes (control plane and workers):
# Add Kubernetes APT repository
sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl
# The legacy apt.kubernetes.io / packages.cloud.google.com repositories are
# deprecated and frozen; use the community-owned pkgs.k8s.io repo instead.
# The minor version in the URL (v1.30 here) pins which release line you track.
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
# Install the components
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
# Prevent automatic updates
sudo apt-mark hold kubelet kubeadm kubectl
Initialize the Control Plane (Control-Plane Node Only)
Now on your control-plane node only, run:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
The `--pod-network-cidr` flag reserves an address range for the pods that the CNI plugin you install later will manage. `192.168.0.0/16` is Calico's default; Flannel expects `10.244.0.0/16` unless you reconfigure its manifest to match.
Once the initialization completes, you'll see a `kubeadm join` command; copy it for the worker nodes.
To start using your cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install a Pod Network (CNI Plugin)
Kubernetes doesn't include networking by default. You’ll need to install a CNI plugin. Here's how to install Calico:
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
Wait for the pods in `kube-system` to be ready:
kubectl get pods -A
Join Worker Nodes
On each worker node, run the `kubeadm join` command that `kubeadm init` printed earlier. It looks like this:
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
This securely connects the worker node to the control plane.
To verify, run the following from the control-plane node:
kubectl get nodes
You should see both the control plane and worker nodes in a `Ready` state.
Verify Your Kubernetes Installation
After completing the installation process, the next critical step is to verify that your Kubernetes cluster is fully operational. This ensures that all components are communicating correctly and that the cluster is ready to schedule workloads. Skipping this step can lead to troubleshooting headaches down the line when you start deploying applications. A proper verification involves checking the health of the cluster's core components and running a simple test application to confirm end-to-end functionality. These checks confirm that your foundation is solid before you build more complex systems on top of it.
Check Cluster Status and Component Health
First, you need to confirm that all your nodes have successfully joined the cluster and are in a healthy state. You can get a status report on all nodes by running a single command from your control-plane node. This is one of the most fundamental checks for a new cluster.
kubectl get nodes
The output should list all your nodes—both control-plane and worker—with a STATUS of `Ready`. If a node shows a different status, you'll need to investigate its logs to diagnose the issue. To get more details about the control plane, use `kubectl cluster-info`. This command displays the addresses of the API server and core cluster services, confirming that `kubectl` can communicate with the API server. In a multi-cluster environment, Plural’s built-in dashboard provides this visibility from a single console, showing live updates of cluster state and resource conditions across your entire fleet without requiring manual CLI commands.
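The checks described above can be run in one pass from the control-plane node:

```shell
# Confirm kubectl can reach the API server and list core service endpoints
kubectl cluster-info

# List nodes with roles, versions, and IPs; all should report Ready
kubectl get nodes -o wide

# Control-plane components (etcd, scheduler, controller-manager) run as pods here
kubectl get pods -n kube-system
```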
Run a Test Deployment to Confirm Functionality
With the cluster components confirmed as healthy, the final verification is to deploy a sample application. This tests whether the scheduler can assign a pod to a worker node and that the pod network is functioning correctly. A simple way to do this is to deploy a basic Nginx application.
kubectl create deployment nginx --image=nginx
This command tells Kubernetes to create a deployment that pulls the Nginx image and runs it in a pod. You can check the status of the deployment with `kubectl get deployments` and the pod with `kubectl get pods`. If the pod enters the `Running` state, your cluster is working correctly. This simple test validates that your pod network (like Flannel or Calico) is enabling communication. While this manual check is useful for initial setup, production deployments are best managed with GitOps. Plural’s Continuous Deployment engine automates this entire lifecycle, providing a structured and repeatable process for deploying applications at scale.
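You can watch the rollout complete and confirm where the pod was scheduled (the `app=nginx` label is applied automatically by `kubectl create deployment`):

```shell
# Block until the deployment's pods are available
kubectl rollout status deployment/nginx

# Inspect the deployment and its pod; -o wide shows which node it landed on
kubectl get deployments
kubectl get pods -l app=nginx -o wide
```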
Troubleshooting Common Installation Issues
Even with a clean setup, Kubernetes installations can fail due to subtle misconfigurations or version mismatches. Most issues during a `kubeadm` install fall into a few predictable categories: swap not disabled, missing kernel modules, networking misconfigurations, or version drift between components.
Systematically checking these areas can save hours of debugging. Below are the most common errors seen during Kubernetes setup on Ubuntu, along with actionable fixes.
Resolve Network and Configuration Errors
Network misconfigurations and missing privileges are common causes of failed installations. Here’s how to address them:
1. Use Root or Sudo Access
Most setup commands require root privileges. Make sure you’re using `sudo` consistently, especially when modifying system files or installing packages.
2. Check Firewall Rules
Kubernetes nodes must be able to communicate freely on specific ports. On Ubuntu systems using UFW (Uncomplicated Firewall), check the status:
sudo ufw status
Ensure the following ports are open:
- Control plane node:
  - TCP 6443 – Kubernetes API server (must be reachable from worker nodes)
- All nodes:
  - TCP 10250 – kubelet API
  - TCP 30000–32767 – NodePort Services (optional, for testing)
If UFW is active, you can allow required ports like so:
sudo ufw allow 6443/tcp
sudo ufw allow 10250/tcp
Or disable UFW for testing:
sudo ufw disable
3. Node Fails to Join the Cluster
If a worker node fails to join, the most common issue is an expired bootstrap token. By default, the token created during `kubeadm init` is valid for 24 hours.
To generate a new one on the control-plane node:
sudo kubeadm token create --print-join-command
Run the new command on the worker node to successfully join the cluster.
Fix Compatibility and Version Conflicts
Version mismatches between Kubernetes, container runtimes, and the OS are a frequent source of cluster instability.
1. Verify Container Runtime Version
Kubernetes v1.26+ requires containerd v1.6.0+. Some Ubuntu releases may include older versions in the default repos. Check your installed version:
containerd --version
If it’s outdated, install the latest containerd manually or from the official upstream package source.
2. Align the cgroup Driver
Kubernetes and containerd must use the same cgroup driver—preferably `systemd`. Mismatches here can cause pod failures, resource limits not being enforced, and kubelet errors.
To check which driver the kubelet is using:
ps -ef | grep kubelet | grep cgroup
To set `systemd` in containerd, ensure your config at `/etc/containerd/config.toml` contains:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
Then restart containerd:
sudo systemctl restart containerd
And ensure kubelet is also configured accordingly via systemd drop-in or flags.
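On kubeadm-installed nodes, the kubelet reads its driver from `/var/lib/kubelet/config.yaml`. Recent kubeadm releases default this to `systemd`, but it's worth confirming the value matches containerd:

```yaml
# /var/lib/kubelet/config.yaml (excerpt)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Must match containerd's cgroup driver; restart the kubelet after changing
cgroupDriver: systemd
```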
Manually tracking these dependencies can be error-prone. Tools like Plural’s Continuous Deployment engine automate version checks and enforce runtime compatibility across clusters—helping teams avoid drift and keep infrastructure stable.
Secure and Optimize Your Kubernetes Cluster
Installing Kubernetes is just the beginning. To support production workloads, you need to harden your cluster, manage resource usage, and bake security into your deployment workflows. These steps turn a basic install into a resilient, production-grade platform.
Implement Core Security Best Practices
1. Harden Network Access
The control plane is the heart of your cluster—keep it locked down.
- Restrict access to critical ports:
  - TCP 6443 (Kubernetes API server)
  - TCP 10250 (kubelet)
- Use firewalls (e.g. UFW, cloud security groups) to allow traffic only from known management IPs or bastion hosts.
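As a sketch with UFW, assuming your management hosts and cluster nodes sit in `10.0.0.0/24` (a placeholder CIDR—substitute your own network):

```shell
# Allow the Kubernetes API only from a trusted subnet
# (10.0.0.0/24 is illustrative; use your management/node CIDR)
sudo ufw allow from 10.0.0.0/24 to any port 6443 proto tcp

# Allow kubelet traffic from cluster nodes only
sudo ufw allow from 10.0.0.0/24 to any port 10250 proto tcp

# Deny everything else inbound by default, then enable non-interactively
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw --force enable
```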
2. Enforce Role-Based Access Control (RBAC)
RBAC is Kubernetes' native mechanism for managing permissions. Define least-privilege policies for users, service accounts, and automation tools:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: dev
  name: read-only
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
Apply roles using a `RoleBinding` or `ClusterRoleBinding` depending on the scope.
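For example, a RoleBinding in the same namespace grants the read-only Role to a specific user (the subject name here is a placeholder—use a real identity from your provider):

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: dev
  name: read-only-binding
subjects:
  # Placeholder subject; replace with a real user or group
  - kind: User
    name: jane@example.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: read-only
  apiGroup: rbac.authorization.k8s.io
```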
Plural makes this easier by integrating with your identity provider. You can assign RBAC policies using user emails or SSO groups—streamlining access control across your team.
3. Egress-Only by Default
Expose only what’s necessary. Plural’s egress-only architecture ensures your control plane and internal services remain non-public while still being fully manageable via the web UI or CLI.
Fine-Tune Resource Allocation for Stability and Performance
Unbounded resource usage is a common cause of pod crashes, latency spikes, and node-level instability. To avoid this, define CPU and memory requests and limits for every container.
- Requests: minimum guaranteed resources; used by the scheduler to place pods.
- Limits: maximum resources a container can consume; enforced at runtime.
Example:
resources:
  requests:
    cpu: "250m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "256Mi"
This ensures fair scheduling and prevents "noisy neighbor" problems—where a misbehaving app impacts others on the same node.
Kubernetes also uses these settings to assign QoS (Quality of Service) classes, which affect pod eviction priority under pressure. For production-grade reliability, setting requests and limits should be non-negotiable.
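Kubernetes derives the QoS class directly from these values: requests equal to limits for every container yields Guaranteed, requests below limits (as in the example above) yields Burstable, and no requests or limits at all yields BestEffort. A minimal Guaranteed pod, with an illustrative name, looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-demo   # illustrative name
spec:
  containers:
    - name: app
      image: nginx
      resources:
        # requests == limits for both CPU and memory => Guaranteed QoS,
        # giving this pod the lowest eviction priority under node pressure
        requests:
          cpu: "500m"
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
```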
For deeper insights into how QoS works, refer to the official Kubernetes docs on QoS classes.
Follow Key Security Best Practices
Securing a Kubernetes cluster isn’t a one-time task—it requires continuous attention. Here are a few foundational practices to keep your environment safe:
1. Respect Version Skew
Always follow Kubernetes’ version skew policy: `kubectl` must be no more than one minor version ahead or behind the API server. Using mismatched versions can lead to subtle, hard-to-debug errors—especially during upgrades or automation.
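You can check the skew from any machine with cluster access:

```shell
# Prints both the kubectl (client) and API server versions;
# they should differ by at most one minor version
kubectl version
```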
2. Shift Left on Image Security
Integrate vulnerability scanning early in your CI/CD pipeline. Tools like Trivy can scan container images for known CVEs before they ever hit your cluster:
trivy image your-app:latest
Catching issues at build time is faster, cheaper, and safer than patching in production.
3. Automate Scanning Across Clusters
With Plural, you can deploy the Trivy Operator as a Global Service—enabling real-time image scanning and policy enforcement across your entire fleet. This standardizes security coverage and reduces manual overhead, especially as you scale into multi-cluster environments.
Add Advanced Features to Your Cluster
Once your Kubernetes cluster is up and running on Ubuntu, the next step is to layer on the tooling that enables real-world operations: UI management, monitoring, and centralized logging. A bare cluster may be functional, but to run production workloads effectively, you need observability and control.
While deploying these tools on a single cluster is straightforward, maintaining consistency across multiple clusters is complex—exactly the kind of operational burden platforms like Plural are designed to reduce.
Install the Kubernetes Dashboard (UI Management)
The Kubernetes Dashboard is a web-based UI for managing workloads, viewing resources, and debugging issues.
Deploy it with:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml
It’s helpful for:
- Visualizing workloads and cluster health
- Managing resources without needing `kubectl`
- Debugging with pod logs and metrics
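The Dashboard requires a bearer token to log in. One common approach (the account name here is illustrative) is a dedicated admin ServiceAccount:

```shell
# Create an admin service account for dashboard login (illustrative name)
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:dashboard-admin

# Print a short-lived bearer token to paste into the login screen
kubectl -n kubernetes-dashboard create token dashboard-admin
```

Note that binding `cluster-admin` is convenient for a lab but far too broad for production; scope the ClusterRole down for real environments.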
However, managing dashboard access securely across multiple clusters becomes a pain—especially with isolated networks and rotating credentials.
Plural solves this by providing a built-in multi-cluster dashboard:
- Unified UI for all clusters (no more juggling kubeconfigs)
- SSO integration using your identity provider
- Egress-only access—no VPN or exposed API servers required
Set Up Monitoring: Prometheus + Grafana
- Prometheus scrapes metrics from nodes, pods, and apps.
- Grafana visualizes those metrics through custom dashboards.
Install both using Helm or an operator, and use prebuilt Grafana dashboards for Kubernetes insights like:
- Node and pod resource usage
- API server latency
- Control plane health
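One common route is the community kube-prometheus-stack Helm chart, which bundles Prometheus, Grafana, and prebuilt dashboards (the release name `monitoring` below is an arbitrary choice):

```shell
# Add the community chart repository and refresh the index
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install Prometheus + Grafana together into their own namespace
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```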
Set Up Logging: EFK Stack (Elasticsearch, Fluentd, Kibana)
- Fluentd collects logs from across the cluster
- Elasticsearch indexes and stores logs
- Kibana provides powerful log search and visualization
While powerful, deploying and maintaining observability tooling on every cluster requires time and expertise.
Plural simplifies this by treating monitoring and logging stacks as managed, reusable components:
- Deploy pre-configured Prometheus, Grafana, and EFK stacks
- Apply them consistently across clusters with Global Services
- Scale with your fleet while minimizing configuration drift
Simplify Kubernetes Management at Scale with Plural
Spinning up a Kubernetes cluster on Ubuntu is a solid milestone, but scaling to dozens or hundreds of clusters is where operational complexity explodes. Manual processes don’t scale, and stitching together multiple open-source tools introduces drift, inefficiencies, and security risks.
Plural is built to solve this. It’s a unified platform for managing Kubernetes at scale, handling deployments, upgrades, security, and observability across fleets of clusters from a single control plane.
Plural’s Architecture: Scalable, Secure, and Private by Default
Plural runs a centralized control plane on a management cluster and installs a lightweight agent on each workload cluster. Key design principles:
- Egress-only communication: Agents initiate all traffic, so workload clusters stay in private networks with no exposed ports.
- No centralized kubeconfig storage: Eliminates the risk of leaked credentials.
- Secure-by-default architecture: All access is governed via your identity provider (SSO), and RBAC policies are enforced natively in Kubernetes.
With this model, platform teams can manage infrastructure, apps, and security without compromising network boundaries or cobbling together VPNs and access proxies.
Automate Deployments and Upgrades with GitOps
Plural’s Continuous Deployment engine uses a GitOps-driven model to manage application and infrastructure lifecycles.
- Automated cluster upgrades: Stay on supported Kubernetes versions with minimal disruption.
- Unified app deployment: Roll out apps and add-ons across your fleet with consistency and traceability.
- Eliminate drift: Every cluster is version-controlled and aligned to your Git repo.
This ensures predictable, reproducible infrastructure—no more config drift or out-of-sync environments.
Centralize Security and Compliance
Managing security policies across environments is painful at scale. Plural brings it all into a single, centralized console:
- SSO + RBAC enforcement: Integrates with your identity provider and uses Kubernetes impersonation to apply fine-grained access controls.
- Audit logging across clusters: Full visibility into who did what, where, and when.
- Consistent policy application: Whether deploying Trivy, OPA, or custom admission controllers, Plural ensures security tooling is applied uniformly across all clusters.
Combined with its secure networking model and built-in dashboarding, Plural helps teams meet compliance standards without slowing down developers.
In short: Plural replaces the fragmented toolchain with a single platform for fleet-scale Kubernetes operations—purpose-built for security, scalability, and developer productivity.
What's Next on Your Kubernetes Journey?
With your Kubernetes cluster running on Ubuntu, you have a solid foundation for deploying containerized applications. The next phase of your journey moves beyond installation and into operation. This involves deepening your practical skills through hands-on experience and starting to explore the advanced topics that distinguish a basic setup from a resilient, production-grade system. As you scale, the focus shifts from getting a single cluster running to managing an entire fleet efficiently and securely.
Find Recommended Resources and Tools
Practice is essential for building confidence and competence with the Kubernetes API and its core concepts. Running Kubernetes locally is an excellent way to experiment without risk. Tools like Minikube, K3s, and KIND allow you to spin up lightweight clusters on your own machine, providing a sandbox to test deployments, network policies, and configurations. Many cloud providers also offer free tiers, giving you another low-cost path to practice Kubernetes in a more realistic environment. As you move toward production, you'll need a robust set of Kubernetes management tools to streamline operations and maintain visibility.
Explore Advanced Kubernetes Topics
Once you master the basics, you will encounter the real-world complexities of production Kubernetes. These often include managing intricate configurations, optimizing resource allocation, securing network traffic, and fine-tuning performance. Understanding how to solve these common Kubernetes deployment challenges is critical for building reliable and scalable applications. Studying real-world use cases can also provide valuable insight into how other engineering teams approach these problems. While mastering these topics is important, Plural’s platform is designed to handle much of this complexity for you. Our Continuous Deployment engine and security features automate lifecycle management and policy enforcement, letting your team focus on delivering applications instead of managing infrastructure.
Related Articles
- Best Orchestration Tools for Managing Kubernetes
- What Is Docker and Kubernetes? A DevOps Guide
- What Is Kubernetes Used For? A DevOps Guide
Unified Cloud Orchestration for Kubernetes
Manage Kubernetes at scale through a single, enterprise-ready platform.
Frequently Asked Questions
Why should I learn to install Kubernetes manually if platforms like Plural can automate it? Understanding the manual installation process with a tool like `kubeadm` is valuable because it reveals how the core components of Kubernetes—like the API server, etcd, and scheduler—fit together. This foundational knowledge is incredibly helpful for troubleshooting. However, this manual approach is best suited for learning or a single test cluster. For production environments, especially at scale, manual processes introduce risk and operational overhead. Plural automates the entire lifecycle, from provisioning to upgrades and security patching, ensuring your clusters are consistent, secure, and managed efficiently so your team can focus on building applications.
The guide mentions disabling swap memory. Why is this a requirement for Kubernetes? Kubernetes needs precise control over system resources to function correctly. The scheduler decides where to place pods based on the CPU and memory each node reports as available, and the kubelet enforces resource limits on that node. Swap makes memory accounting less predictable because it allows the OS to move memory pages to disk. This can interfere with the kubelet's ability to enforce resource limits and guarantee Quality of Service (QoS) for your applications, leading to performance degradation and instability. Disabling swap ensures that the scheduler is working with a clear and accurate picture of available memory.
This guide covers one cluster. What happens when I need to manage ten or a hundred? Managing a fleet of clusters introduces significant challenges that don't exist with a single instance. The primary issues are maintaining configuration consistency, executing upgrades securely across all clusters, enforcing uniform security policies, and gaining visibility without juggling dozens of credentials and network configurations. This is precisely the problem Plural solves. Our Continuous Deployment engine uses GitOps to prevent configuration drift, and our built-in dashboard provides a secure, single-pane-of-glass view into your entire fleet, even for clusters in private networks.
What is the most critical security step to take after a fresh installation? Beyond basic firewall rules, the most important first step is to implement strong Role-Based Access Control (RBAC). By default, a new cluster can have overly permissive access settings. You should immediately define `Roles` and `ClusterRoles` that grant permissions based on the principle of least privilege, ensuring users and services can only perform the actions they absolutely need. Plural simplifies this by integrating with your SSO provider, allowing you to tie Kubernetes RBAC policies directly to your organization's user and group identities for consistent and auditable access control.
How does Plural handle infrastructure changes differently than the manual approach? Manually, you might apply changes using `kubectl` or run Terraform from a local machine, which isn't scalable, repeatable, or auditable. Plural formalizes this process using a GitOps-driven workflow. Application deployments are managed by our Continuous Deployment engine, which syncs state from your Git repository. For infrastructure-as-code tools like Terraform, Plural Stacks provides a Kubernetes-native way to manage runs, ensuring that all infrastructure changes are version-controlled, automatically planned in pull requests, and executed securely within your target environment.