`kubectl get nodes`: A Practical Introduction

The kubectl get nodes command is fundamental for inspecting the state of a single Kubernetes cluster. It returns a real-time snapshot of node readiness, roles, Kubernetes version, and basic health signals. For ad hoc diagnostics or cluster-level validation, it’s efficient and low-friction.

The limitation emerges at fleet scale. If you operate 10, 50, or 100 clusters, running kubectl get nodes against each context becomes operationally inefficient. Context switching, credential management, and manual aggregation of results introduce friction and increase the likelihood of blind spots.

This guide covers the full capabilities of kubectl get nodes, but it also clarifies where it breaks down in multi-cluster environments. At that point, centralized control planes become necessary. Platforms like Plural provide an embedded, unified dashboard that aggregates node state across clusters into a single-pane-of-glass view, eliminating repetitive CLI workflows while preserving operational visibility.


Key takeaways:

  • Use kubectl get nodes as your primary diagnostic tool: Its default output provides an immediate health check of your cluster's infrastructure. Pay close attention to the STATUS column to instantly spot unhealthy or cordoned nodes before they impact your applications.
  • Customize output with flags for targeted queries: Move beyond a simple list by using flags like -o wide for more detail, -l to filter by labels, and -o custom-columns to create specific reports. This transforms the command into a powerful tool for precise troubleshooting and automation.
  • Scale your monitoring with a unified dashboard: While kubectl is essential for individual clusters, managing a fleet requires a centralized solution. Plural's dashboard aggregates node health and resource data from all your clusters into a single view, eliminating repetitive commands and context switching.

What Is kubectl get nodes?

kubectl get nodes is a core Kubernetes CLI command used to inspect the node layer of a cluster. Nodes are the worker machines—VMs or bare metal hosts—that run your pods. The command queries the Kubernetes API server for Node objects and renders their current state in a tabular format.

In practice, it’s one of the first commands engineers run during cluster validation, incident response, or capacity analysis. It provides immediate visibility into node readiness, role assignment, age in the cluster, and kubelet version—signals that directly impact workload scheduling and reliability.

What It Does

At execution, kubectl get nodes retrieves all registered Node resources from the cluster and displays key metadata and status fields.

The default columns include:

  • NAME – Node identifier
  • STATUS – Readiness condition (Ready, NotReady, etc.)
  • ROLES – Assigned roles (e.g., control-plane, worker, or <none>)
  • AGE – Time since the node registered with the cluster
  • VERSION – Kubelet version running on the node

The STATUS field is derived from node conditions reported by the kubelet. A NotReady state typically indicates issues such as kubelet failure, network partitioning, or resource pressure. Because of this, the command functions as a primary health check for the cluster’s compute layer.
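For reference, output from a small cluster typically looks like this (node names, ages, and versions are illustrative):

NAME       STATUS     ROLES           AGE   VERSION
cp-1       Ready      control-plane   92d   v1.29.4
worker-1   Ready      <none>          92d   v1.29.4
worker-2   NotReady   <none>          14d   v1.29.4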

Basic Syntax and Usage

The minimal invocation is:

kubectl get nodes

This requires:

  • kubectl installed locally
  • A valid kubeconfig context
  • RBAC permissions to list Node resources
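If the command fails, two quick checks can confirm these prerequisites are in place (the second returns a plain yes or no):

kubectl config current-context
kubectl auth can-i list nodes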

For extended operational detail, use:

kubectl get nodes -o wide

The -o wide flag augments the output with:

  • Internal and external IP addresses
  • OS image
  • Kernel version
  • Container runtime

This extended view is particularly useful for diagnosing networking inconsistencies, OS drift, kernel mismatches, or runtime-specific issues (e.g., containerd vs. CRI-O).

In short, kubectl get nodes provides direct, low-latency insight into the health and configuration of the cluster’s compute substrate.

How to Read the kubectl get nodes Output

Running kubectl get nodes provides a high-level summary of your cluster's health and composition. At first glance, the output is a simple table, but each column contains critical information for diagnosing issues, planning capacity, and understanding your cluster's architecture. Misinterpreting this output can lead to incorrect assumptions about resource availability or node health. Learning to read it correctly is a fundamental skill for anyone managing Kubernetes environments. It’s the quickest way to get a pulse check on your infrastructure before diving deeper into logs or events.

Breaking Down the Default Columns

When you run kubectl get nodes, the command returns a table with five default columns that offer a snapshot of each node in your cluster. Understanding these columns is the first step to effective cluster monitoring.

  • NAME: This is the unique identifier for the node, typically its hostname.
  • STATUS: This column shows the current condition of the node. A Ready status means the node is healthy and can accept new pods.
  • ROLES: This indicates the node's function. Common roles include control-plane for nodes running the cluster's main components and <none> or worker for nodes that run your applications.
  • AGE: This shows how long the node has been part of the cluster.
  • VERSION: This displays the Kubernetes version the kubelet is running on the node.

This basic output is your starting point for most troubleshooting and monitoring tasks.

Decoding Node Statuses

The STATUS column is arguably the most important for assessing cluster health at a glance. It tells you whether a node is operational and available to run workloads. The most common statuses you'll encounter are Ready, indicating the node has passed all health checks and is fully functional, and NotReady, which signals a problem. A NotReady node cannot accept new pods and may be experiencing issues with its network, disk, or memory. Another marker, SchedulingDisabled, appears alongside the readiness condition (e.g., Ready,SchedulingDisabled) and means the node has been intentionally cordoned, preventing the scheduler from placing new pods on it; this is often done before performing node maintenance.

Identifying Node Roles

The ROLES column helps you understand the architecture of your cluster by defining each node's purpose. Nodes with the control-plane role are critical; they run the core components that manage the cluster's state, like the API server and scheduler. You typically want to avoid scheduling regular application workloads on these nodes to ensure stability. Most other nodes will have the role <none> or worker, signifying they are available to run your application pods. In larger or more complex clusters, you might also see custom roles defined to designate nodes for specific tasks, such as GPU-intensive computing or storage operations.
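Under the hood, the ROLES column is populated from node labels with the node-role.kubernetes.io/ prefix, so you can assign a role yourself; worker here is just an example role name:

kubectl label node <node-name> node-role.kubernetes.io/worker=""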

How to Monitor Your Cluster with kubectl get nodes

kubectl get nodes is a low-latency way to inspect cluster health at the infrastructure layer. It surfaces the readiness and metadata of every Node object, making it the first command many engineers run during incident triage, rollout validation, or routine health checks.

However, it is a point-in-time CLI query scoped to a single cluster context. At fleet scale, manually executing the command across environments does not provide durable observability. While kubectl is effective for ad hoc diagnostics, centralized systems such as Plural provide persistent, aggregated visibility across clusters—shifting operations from reactive inspection to proactive monitoring.

Spot Unhealthy Nodes

The primary signal is the STATUS column.

  • Ready indicates the kubelet is reporting healthy node conditions.
  • NotReady indicates the control plane has marked the node unavailable.

A NotReady state typically results from:

  • Kubelet failure or crash loops
  • Network partition between node and API server
  • Resource pressure conditions (memory, disk, PID)
  • Runtime failures

When pods fail scheduling or become unresponsive, checking node readiness is a standard first diagnostic step.
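To surface only problem nodes in one pass, you can filter the tabular output; a simple awk sketch (this also flags cordoned nodes, whose status reads Ready,SchedulingDisabled):

kubectl get nodes --no-headers | awk '$2 != "Ready"'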

Across many clusters, this manual inspection becomes operationally expensive. Plural aggregates node conditions across environments, allowing operators to identify unhealthy nodes without iterating through kubeconfig contexts.

Track Node Availability

The default output columns—NAME, STATUS, ROLES, AGE, VERSION—provide a compact readiness snapshot.

Tracking node availability over time helps identify:

  • Flapping nodes (frequent Ready/NotReady transitions)
  • Version drift across the fleet
  • Aging infrastructure nearing rotation

Because kubectl get nodes returns a single snapshot, it does not provide historical context. Trend analysis requires external tooling. Plural delivers continuous updates and centralized visibility, enabling operators to detect availability degradation patterns before they escalate into outages.
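For short-lived observation during an incident, the --watch flag streams status changes as they happen, though it still provides no persisted history:

kubectl get nodes --watch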

Monitor Cluster Capacity

By default, the command does not expose resource capacity. You can extend it using custom columns:

kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory

This queries each node’s declared capacity:

  • .status.capacity.cpu
  • .status.capacity.memory

This data reflects each node's declared total capacity, not real-time usage (and note that capacity is distinct from the smaller allocatable figure, which subtracts system reservations). It is useful for:

  • Evaluating cluster sizing
  • Validating scaling decisions
  • Planning workload placement

For utilization metrics, you would typically use kubectl top nodes or a metrics backend.
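With metrics-server installed, live usage is a single command away (the figures below are illustrative):

kubectl top nodes

NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
worker-1   250m         12%    2150Mi          55%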

In multi-cluster environments, correlating declared capacity with real-time usage across fleets requires aggregation. Plural centralizes both capacity and utilization signals, reducing the operational overhead of piecing together cluster-level data manually.

In summary, kubectl get nodes remains essential for direct inspection and debugging, but sustained monitoring and capacity management at scale require centralized aggregation beyond the CLI.

Essential kubectl get nodes Flags and Options

The default kubectl get nodes output is intentionally minimal. Its flexibility comes from output modifiers and selectors that let you extract structured data, filter subsets of nodes, and integrate results into automation workflows. These flags are critical for production-grade cluster operations, audits, and scripting.

Get More Details with -o wide

The -o wide flag expands the default table with additional infrastructure metadata:

kubectl get nodes -o wide

Additional columns include:

  • INTERNAL-IP
  • EXTERNAL-IP
  • OS-IMAGE
  • KERNEL-VERSION
  • CONTAINER-RUNTIME

This is useful for:

  • Verifying node network addresses during connectivity debugging
  • Detecting OS or kernel drift across nodes
  • Confirming container runtime consistency (e.g., containerd vs. CRI-O)

It provides more operational context without requiring a full object inspection.

Format Output as JSON or YAML

For automation, structured output is essential. Use:

kubectl get nodes -o json
kubectl get nodes -o yaml

These flags return the complete Node resource definition from the Kubernetes API, including:

  • Labels and annotations
  • Taints
  • Capacity and allocatable resources
  • Detailed condition states
  • Node addresses and system info

This format is intended for programmatic consumption—piping into jq, CI pipelines, compliance checks, or custom tooling. It exposes far more detail than the tabular CLI view.
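For instance, a one-liner that prints each node's name alongside its kubelet version (assumes jq is installed):

kubectl get nodes -o json | jq -r '.items[] | "\(.metadata.name) \(.status.nodeInfo.kubeletVersion)"'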

For field-level understanding, refer to the official Kubernetes API reference for the Node resource.

Filter Nodes with Label Selectors

Labels enable targeted queries. Nodes can be labeled by environment, region, hardware profile, or workload specialization.

To filter:

kubectl get nodes -l env=prod

Common use cases:

  • Isolating GPU nodes (instance-type=gpu)
  • Targeting a specific region or zone
  • Scoping maintenance operations

To inspect available labels:

kubectl get nodes --show-labels

Label selectors are fundamental for controlled rollouts, draining subsets of infrastructure, and enforcing topology-aware operations.
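Labels are applied and removed with kubectl label; the env=prod key mirrors the filter example above:

kubectl label node <node-name> env=prod   # set the label
kubectl label node <node-name> env-       # a trailing dash removes it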

Create Custom Output Columns

For precise reporting, use -o custom-columns with JSONPath expressions:

kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory

This produces a tailored table containing only the requested fields.

Typical use cases:

  • Quick capacity summaries
  • Compliance checks
  • Lightweight reporting without post-processing
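custom-columns also accepts filtered JSONPath expressions, which lets you pull a specific condition, such as Ready, into its own column:

kubectl get nodes -o custom-columns='NAME:.metadata.name,READY:.status.conditions[?(@.type=="Ready")].status'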

Because this relies on the Node object schema, it requires familiarity with Kubernetes API fields. In multi-cluster environments, manually crafting and running these queries across contexts becomes inefficient. Plural’s Kubernetes dashboard centralizes node metadata and resource metrics across clusters, reducing the need for repetitive CLI extraction while preserving operational visibility.

In short, mastering these flags turns kubectl get nodes from a basic listing command into a flexible infrastructure inspection tool.

Common Beginner Challenges

kubectl get nodes is straightforward syntactically, but interpreting its output and acting on it correctly requires understanding Kubernetes node semantics. Most beginner friction falls into three areas: status interpretation, resource visibility, and troubleshooting NotReady states.

Interpreting Status Conditions

The default output shows:

  • NAME
  • STATUS
  • ROLES
  • AGE
  • VERSION

The STATUS column reflects aggregated node conditions reported by the kubelet.

Common values:

  • Ready – The node is healthy and eligible for scheduling.
  • NotReady – The control plane cannot confirm node health. New pods will not be scheduled.
  • SchedulingDisabled – The node has been cordoned (kubectl cordon), preventing new pod placement. This is an administrative state, not a health failure.

A common mistake is conflating NotReady with intentional cordoning. One indicates failure; the other indicates maintenance control.

For deeper condition details (e.g., MemoryPressure, DiskPressure, PIDPressure, NetworkUnavailable), inspect the full node object rather than relying on the summary column.
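One way to list every condition and its state without reading the full YAML (the node name is a placeholder):

kubectl get node <node-name> -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'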

Understanding Resource Allocation

By default, kubectl get nodes does not show CPU or memory capacity. Beginners often assume resource health from readiness alone, which is incomplete.

To view declared capacity:

kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory

Important distinctions:

  • capacity – Total resources available on the node.
  • allocatable – Resources available for pods after system reservations.
  • Neither represents real-time usage.
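To see capacity and allocatable side by side, extend the custom-columns query from above:

kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU-CAP:.status.capacity.cpu,CPU-ALLOC:.status.allocatable.cpu,MEM-CAP:.status.capacity.memory,MEM-ALLOC:.status.allocatable.memory'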

For live utilization, you would typically use metrics-based commands such as kubectl top nodes (requires metrics-server).

At fleet scale, manually aggregating capacity and utilization across clusters becomes inefficient. Plural centralizes node-level resource metrics, enabling cross-cluster capacity analysis without repeated CLI queries.

Troubleshooting Connection Issues

When a node transitions to NotReady, the next step is detailed inspection:

kubectl describe node <node-name>

This surfaces:

  • Node condition transitions
  • Recent events
  • Resource pressure indicators
  • Kubelet heartbeat status

Typical root causes include:

  • Kubelet crash or restart loop
  • Network partition between node and API server
  • Disk or memory exhaustion
  • Container runtime failure

Beginners often stop at the summary view. Effective troubleshooting requires examining condition timestamps and event logs to determine whether the failure is transient or systemic.

In multi-cluster environments, repeating this diagnostic process node-by-node is operationally expensive. Plural consolidates node health signals and diagnostics across clusters, reducing time-to-detection and accelerating root cause analysis.

Mastering these patterns moves you from running the command to using it as a structured diagnostic entry point into Kubernetes node behavior.

How to Troubleshoot Node Statuses

When you run kubectl get nodes, the STATUS column gives you a quick health check. But when a node isn't Ready, you need to know what the other statuses mean and how to react. Understanding these states is the first step toward resolving issues and maintaining a healthy cluster.

Diagnosing Ready vs. NotReady

The Ready status is the healthy state you want to see. It confirms the node is functioning properly, its kubelet is running, and it’s prepared to accept new pods. If a node’s status is NotReady, it signals a problem. The node controller has detected an issue, and the scheduler will no longer place new pods on it. Common culprits for a NotReady status include resource exhaustion (like memory or disk pressure), network connectivity problems, or a malfunctioning kubelet process on the node itself. This status is your cue to start investigating the underlying health of the machine.

What SchedulingDisabled Means

Seeing a SchedulingDisabled status doesn't necessarily mean the node is unhealthy. Instead, it indicates an administrator has intentionally cordoned the node using the kubectl cordon <node-name> command. This action tells the Kubernetes scheduler not to place any new pods on the node. This is a standard procedure during planned maintenance, such as performing a kernel upgrade or decommissioning a machine. It allows existing pods to continue running while preventing new workloads from being assigned, enabling a graceful shutdown or update process. It’s a controlled state, not an error condition.
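Cordoning and its reversal are each a single command, and the change shows up immediately in kubectl get nodes:

kubectl cordon <node-name>     # status becomes Ready,SchedulingDisabled
kubectl uncordon <node-name>   # restores normal scheduling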

Handling Unknown and Other Statuses

The Unknown status is typically the most critical. It means the control plane has lost communication with the node’s kubelet entirely. This usually points to a significant problem, such as the node being powered off, a complete network partition between the node and the control plane, or a crashed kubelet. After a timeout (five minutes by default), Kubernetes will begin evicting the pods from the Unknown node so the workloads can be rescheduled on healthy nodes. Quick action is required to diagnose the connectivity issue or bring the machine back online to prevent application downtime.

Key Troubleshooting Steps

When a node reports an unhealthy status, follow a systematic approach to find the cause.

  1. Get more details: Start by running kubectl describe node <node-name>. This command provides a wealth of information, including recent events and a Conditions section that can tell you if the node is experiencing MemoryPressure, DiskPressure, or other issues.
  2. Check the kubelet: If the describe command doesn't reveal the problem, inspect the kubelet service directly on the node. You may need to SSH into the machine and check the kubelet logs using a command like journalctl -u kubelet.
  3. Verify resources: Ensure the node isn't running out of CPU, memory, or disk space.
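In practice, the first two steps often look like this; the grep filter is just a convenience for trimming long describe output:

kubectl describe node <node-name> | grep -A 8 'Conditions:'
# then, on the node itself:
sudo journalctl -u kubelet --since "1 hour ago"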

For teams managing multiple clusters, Plural’s embedded Kubernetes dashboard simplifies this process by providing a centralized view of all nodes, logs, and events without needing to juggle multiple contexts or SSH credentials.

Advanced kubectl get nodes Techniques

Once you're comfortable with the basics, you can use kubectl get nodes for more than just listing nodes. By combining it with flags, selectors, and other commands, you can create powerful queries and automation scripts to manage your cluster more effectively. These techniques help you move from simply observing your nodes to actively querying and managing them at scale. While these command-line methods are powerful, managing a large fleet often requires a more centralized approach. Plural's embedded Kubernetes dashboard provides a unified view, allowing you to filter and inspect nodes across all your clusters from a single interface without juggling kubeconfigs.

Sort and Filter Results

In a large cluster, you need ways to narrow your focus. Kubernetes labels are key-value pairs that help you organize resources. For example, you can label nodes by environment, instance type, or geographic region. To view nodes with a specific label, use the -l or --selector flag. This command finds all nodes designated for production workloads:

kubectl get nodes -l environment=production

You can also sort results to quickly identify outliers. The --sort-by flag accepts a JSONPath expression, allowing you to order nodes by name, creation time, or resource capacity. For instance, this lists nodes from lowest to highest CPU capacity:

kubectl get nodes --sort-by=.status.capacity.cpu

Combine with Other kubectl Commands

The kubectl get nodes command is often the first step in a troubleshooting or management workflow. You can combine it with other commands to perform more complex actions. For example, to inspect all nodes in a specific availability zone, use a label selector with kubectl describe (the topology.kubernetes.io/zone label supersedes the deprecated failure-domain.beta.kubernetes.io/zone):

kubectl describe nodes -l topology.kubernetes.io/zone=us-central1-a

This provides detailed information, including resource allocation, conditions, and recent events, for every node in that zone, and is much more efficient than describing each node individually. You can also use field selectors to find all pods running on a particular node:

kubectl get pods --all-namespaces --field-selector spec.nodeName=<node-name>

Use in Scripts and Automation

For automation, you need predictable, machine-readable output. The -o or --output flag is essential for scripting. To create a custom report for human review, use the custom-columns option. This command generates a clean table showing only the node name, CPU, and memory capacity:

kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory

For true automation, format the output as JSON (-o json) and pipe it to a command-line processor like jq. This allows you to extract precise information, such as pulling all internal IP addresses from your nodes to update a firewall rule or monitoring configuration. This approach makes kubectl a powerful component in your infrastructure automation toolkit.
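As a concrete example of that last pattern, this pipeline extracts every node's InternalIP (assumes jq is available):

kubectl get nodes -o json | jq -r '.items[].status.addresses[] | select(.type=="InternalIP") | .address'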

Common kubectl get nodes Use Cases

Beyond simple status checks, kubectl get nodes is a foundational command for several critical operational workflows. Integrating it into your daily routine helps ensure cluster stability, optimize resource allocation, and streamline maintenance. For engineers managing a single cluster, it provides immediate, actionable insights directly from the command line. However, as environments scale, running these checks across an entire fleet becomes cumbersome and error-prone. This is where a centralized platform like Plural becomes essential, offering a single-pane-of-glass view that aggregates node status and health data from all your clusters, saving you from repetitive manual commands and context switching.

Whether you're working with one cluster or many, the underlying principles remain the same. Mastering this command is key for three primary activities: validating cluster health before deployments, planning for future capacity needs, and efficiently scheduling node maintenance. Each use case leverages the command's output to inform critical decisions that impact the performance and reliability of your applications. By making these checks a standard part of your process, you can proactively manage your Kubernetes environment instead of just reacting to problems. This command serves as the entry point for deeper investigation, helping you quickly identify which nodes are healthy, which are under pressure, and which require immediate attention.

Run Pre-Deployment Checks

Before deploying a new application or pushing an update, a quick health check is essential to prevent simple issues from escalating into major incidents. Running kubectl get nodes allows you to verify that all nodes are in a Ready state. Deploying to a cluster with one or more NotReady nodes can lead to unschedulable pods, failed deployments, and potential service disruptions.

This simple, proactive step ensures your cluster has the healthy, available nodes required to accommodate the new workload. If a node isn't ready, the Kubernetes scheduler won't place new pods on it. By confirming cluster readiness beforehand, you can proceed with confidence or pause to troubleshoot any underlying issues first.
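A minimal pre-deployment gate might look like the following sketch, which fails a pipeline if any node's Ready condition is not True:

#!/usr/bin/env bash
# List each node with the status of its Ready condition, then flag any that are not True.
not_ready=$(kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}' | awk '$2 != "True"')
if [ -n "$not_ready" ]; then
  echo "Unready nodes detected:"
  echo "$not_ready"
  exit 1
fi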

Plan Cluster Capacity

Effective capacity planning is crucial for maintaining performance and controlling costs. The kubectl get nodes command provides a high-level overview of your cluster's current size and composition. By viewing the number of nodes and their respective roles, you can quickly assess if your infrastructure is equipped to handle upcoming workloads or traffic spikes.

For more detailed insights, you can combine it with other commands to inspect resource allocation, such as CPU and memory usage. This data helps you make informed decisions about scaling your cluster, whether that means adding more nodes or optimizing existing resource requests and limits. Understanding your available resources is the first step toward building a scalable and cost-efficient cluster management strategy, ensuring you have the capacity you need without over-provisioning.

Schedule Node Maintenance

Sooner or later, you will need to perform maintenance on a node, whether it's for a kernel upgrade, hardware replacement, or troubleshooting. The kubectl get nodes command is your starting point for this process. A node reporting a NotReady status is an immediate signal that it requires investigation.

Similarly, a node in the SchedulingDisabled state indicates it has been intentionally cordoned off to prevent new pods from being scheduled on it, usually in preparation for maintenance. According to vCluster, these statuses mean a node is "either unhealthy or has been intentionally stopped from running new applications." By identifying these nodes, you can safely use commands like kubectl drain to evict existing workloads before shutting the node down for maintenance, ensuring a smooth process with minimal disruption to your services.
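A typical maintenance sequence looks like this; the drain flags shown are commonly required in real clusters, so adjust them to your workloads:

kubectl cordon <node-name>
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# ...perform the upgrade or repair, then:
kubectl uncordon <node-name>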

Simplify Node Management with Plural

While kubectl is a powerful tool for interacting with a single cluster, its efficiency diminishes as you scale. Managing a fleet of Kubernetes clusters requires a centralized approach to avoid the operational drag of switching contexts and running repetitive commands. Plural provides a unified platform to streamline node management, offering deep visibility and intelligent troubleshooting across your entire infrastructure. Instead of relying on manual CLI commands for each cluster, you can use a single interface to monitor health, diagnose issues, and maintain performance fleet-wide.

Get Real-Time Visibility Across All Clusters

Running kubectl get nodes -o wide is effective for checking the status, IP addresses, and OS versions of nodes in one cluster. However, performing this check across dozens or hundreds of clusters is tedious and inefficient. Plural’s embedded Kubernetes dashboard consolidates this information into a single pane of glass. It provides a live, comprehensive view of every node in your fleet, eliminating the need to juggle kubeconfigs or VPN credentials.

This is made possible by Plural’s secure, agent-based architecture, which uses an egress-only communication model. You gain full visibility into clusters across different VPCs or even on-prem environments without exposing internal cluster endpoints. The dashboard respects your existing security policies by using Kubernetes impersonation, ensuring that engineers only see the resources their RBAC roles permit.

Troubleshoot Faster with AI-Powered Insights

When a node reports a NotReady status, the typical next step is to run kubectl describe node <node-name> to parse through events and conditions. This output can be dense and requires expertise to interpret correctly. Plural accelerates this process with AI-powered diagnostics that translate raw error data into clear, actionable insights. Instead of just showing you that a node is unhealthy, Plural explains why.

Our AI-powered chat interface allows engineers to ask direct questions like, "Why is this node in a NotReady state?" and receive context-specific answers. The system analyzes node conditions, recent events, and resource metrics to identify root causes, such as memory pressure or disk exhaustion. By integrating AI into the troubleshooting workflow, Plural reduces the time to resolution and empowers engineers of all experience levels to solve complex node issues confidently.


Frequently Asked Questions

What’s the difference between kubectl get nodes and kubectl describe node? Think of kubectl get nodes as a high-level summary. It gives you a quick, one-line status report for every node in your cluster, making it ideal for assessing the overall health of your fleet at a glance. In contrast, kubectl describe node <node-name> provides a detailed diagnostic report for a single, specific node. It shows you everything from labels and taints to current conditions, resource allocation, and a log of recent events, which is essential for deep-dive troubleshooting.

Why would a node be in a Ready state if my pods can't be scheduled on it? A Ready status simply confirms that the node is healthy and communicating with the control plane. It doesn't guarantee that it can accept your specific workload. Pods may fail to schedule on a Ready node for several reasons, such as insufficient CPU or memory resources. Another common cause is the presence of taints on the node, which are designed to repel pods that don't have a matching toleration.

How can I see the current resource usage of my nodes, not just their total capacity? The kubectl get nodes command shows a node's total resource capacity, but not its real-time consumption. To see current CPU and memory usage, you need to use the kubectl top node command. This requires the Kubernetes Metrics Server to be installed in your cluster. For a more persistent and visual approach, Plural's observability dashboard provides real-time utilization metrics across your entire fleet without requiring extra setup.

Is it safe to just terminate a node instance directly from my cloud provider's console? You should avoid terminating a node instance directly through your cloud provider. Doing so bypasses the Kubernetes control plane, which can leave orphaned resources and disrupt workloads that were running on that node. The correct procedure is to first use kubectl drain <node-name> to safely evict all pods, then kubectl delete node <node-name> to remove it from the cluster, and only then terminate the underlying cloud instance.

If I'm already proficient with kubectl, what's the benefit of using a platform like Plural? While kubectl is an indispensable tool for interacting with a single cluster, its efficiency breaks down when managing a large fleet. Plural provides a single pane of glass to monitor and manage all your nodes across every cluster, eliminating the need to constantly switch contexts. Instead of running commands manually, you get a unified dashboard with aggregated health statuses, real-time resource metrics, and AI-powered diagnostics that help you identify and resolve issues faster at scale.