
Mastering `kubectl get endpoints`: A Practical Guide

Learn how to use `kubectl get endpoints` to troubleshoot Kubernetes service connectivity, interpret output, and monitor pod health with practical examples.

Michael Guarino

Your application may appear down even when the deployment reports healthy Pods and the Service configuration looks correct. In many cases, the issue lies in the connection between the Service and the Pods. Running kubectl get endpoints exposes the current set of Pod IPs that Kubernetes considers valid backends for that Service.

If the endpoints list is empty, Kubernetes has not associated any Pods with the Service. This immediately narrows the scope of troubleshooting. The problem is typically not network routing or the Service object itself, but service discovery—most commonly caused by failing readiness probes or label selector mismatches between the Service and the Pods.

Unified Cloud Orchestration for Kubernetes

Manage Kubernetes at scale through a single, enterprise-ready platform.

GitOps Deployment
Secure Dashboards
Infrastructure-as-Code
Book a demo

Key takeaways:

  • Endpoints connect Services to healthy Pods: The Endpoint object is a dynamic list of IP addresses for ready Pods that match a Service's label selector. It's the crucial component that enables service discovery, and inspecting it is the first step in diagnosing network issues.
  • An empty Endpoint list signals a configuration error: When kubectl get endpoints returns <none>, it means the Service cannot find any ready Pods. This immediately narrows your investigation to common problems like mismatched labels, failing readiness probes, or Pod resource issues.
  • Manage fleet-wide endpoints from a single dashboard: Using kubectl across many clusters is inefficient for troubleshooting. Plural provides a centralized view of all your Services and Endpoints, allowing you to monitor health and diagnose connectivity issues at scale without constant context switching.

What Are Kubernetes Endpoints?

In Kubernetes, an Endpoint is an API object that maps a Service to the Pods backing it. It effectively stores the current list of Pod IP addresses and ports that should receive traffic for that Service.

Pods are ephemeral and their IPs can change as workloads are rescheduled. The Endpoint object tracks the set of healthy Pods associated with the Service so traffic is always routed to valid backends. Kubernetes automatically creates and updates Endpoints for most Services, making them a key component of cluster networking and an important resource for troubleshooting connectivity issues.

Services do not directly connect to Pods. Instead, Kubernetes uses an Endpoint object as the intermediary. When a Service is created with a label selector, the control plane generates a corresponding Endpoint resource.

The Endpoint controller continuously watches for Pods that match the Service selector and are in a Ready state. It then populates the Endpoint object with the Pod IPs and ports of those healthy instances.

Components like kube-proxy watch for changes to Endpoint objects. When the list changes—for example, when a Pod becomes ready or is terminated—kube-proxy updates the node’s networking rules so traffic sent to the Service’s virtual IP is forwarded to the correct backend Pods.
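The controller's selection rule is simple enough to sketch. The following is an illustrative approximation only (the real controller watches the API server and handles many more cases); the pod data and port are hypothetical:

```python
# Sketch of the endpoints controller's selection logic (illustrative only):
# a Pod backs a Service when its labels match the selector AND it is Ready.
def build_endpoints(selector, pods, port):
    """Return the ip:port pairs for pods that match the selector and are Ready."""
    addresses = []
    for pod in pods:
        matches = all(pod["labels"].get(k) == v for k, v in selector.items())
        if matches and pod["ready"]:
            addresses.append(f'{pod["ip"]}:{port}')
    return addresses

pods = [
    {"ip": "10.1.1.5", "labels": {"app": "my-api"}, "ready": True},
    {"ip": "10.1.1.6", "labels": {"app": "my-api"}, "ready": False},  # failing probe
    {"ip": "10.1.1.7", "labels": {"app": "other"}, "ready": True},    # wrong labels
]

print(build_endpoints({"app": "my-api"}, pods, 8080))  # ['10.1.1.5:8080']
```

Note that the unready Pod and the mislabeled Pod both drop out of the list, which is exactly the behavior you observe when an Endpoint object is emptier than expected.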

Why Endpoints Matter for Service Discovery

Endpoints are central to service discovery in Kubernetes. They decouple the Service’s stable virtual IP from the dynamic set of backend Pods.

Applications interact with a Service through its DNS name or cluster IP without needing to track Pod addresses. Kubernetes resolves the Service to one of the Pod IPs listed in the Endpoint object, ensuring traffic reaches a healthy backend.

For debugging connectivity problems, the Endpoint object is often the first place to check. If the Endpoint list is empty, the Service has no ready Pods to route traffic to—typically indicating label selector mismatches or failing readiness probes.

What Does kubectl get endpoints Do?

The kubectl get endpoints command exposes how a Kubernetes Service maps to the Pods actually receiving traffic. A Service provides a stable virtual IP and DNS name, but the Endpoints object tracks the concrete Pod IPs and ports backing that Service.

Running this command lets you inspect that mapping directly. It shows the set of Pod addresses that match the Service’s selector and are currently considered Ready. In practice, this bridges the abstraction of the Service with the actual runtime state of the Pods. When troubleshooting connectivity, the command answers a critical question: which Pods is Kubernetes currently routing traffic to for this Service?

Because of this, kubectl get endpoints is often one of the first commands used when a Service appears unresponsive. It quickly reveals whether traffic routing is working as expected or whether the issue lies with Pod readiness or service selection.

What kubectl get endpoints Returns

The default output lists the Endpoint object name—typically the same as the Service name—and the associated network endpoints.

Each endpoint appears as an IP_ADDRESS:PORT pair representing a Pod backing the Service. These entries correspond to Pods that both match the Service selector and have passed readiness checks.

If many Pods back the Service, the default table may truncate the list with a message such as +1 more.... To inspect the full object and all addresses, you can use more detailed output formats like -o yaml or -o wide.

This output effectively shows the Service’s active backend targets at that moment.

When to Use the Command

kubectl get endpoints is primarily used for diagnosing Service connectivity issues. If a Service is reachable but returns errors—or appears completely unavailable—the endpoint list helps determine whether Kubernetes has discovered any valid backend Pods.

An empty endpoint list indicates the Service is not selecting any Ready Pods. This typically points to label selector mismatches or failing readiness probes.

The command is also useful after deployments or rollouts to verify that new Pods are registered as Service backends once they become Ready. For broader operational visibility, platforms like Plural expose endpoint and Service health across clusters through a centralized dashboard, reducing the need to manually inspect each cluster with kubectl.

How to Use kubectl get endpoints

The kubectl get endpoints command lets you inspect which Pod IPs a Service is currently routing traffic to. It exposes the runtime state of Kubernetes service discovery by showing the addresses registered in the Endpoint object.

This command is commonly used during debugging, deployments, and operational monitoring. Understanding its syntax and output options allows you to quickly inspect service backends or integrate endpoint checks into automation scripts.

Review Basic Syntax and Options

The simplest usage lists all Endpoint objects in the current namespace:

kubectl get endpoints

Each row shows an Endpoint resource, typically named after its Service, together with the Pod IP and port combinations currently serving traffic.

For more detailed information about a specific Endpoint object, use:

kubectl describe endpoints <endpoint-name>

describe provides additional context such as port definitions and related metadata, which is useful when diagnosing service routing issues.

Customize Output Formats

The default table output is useful for quick inspection, but structured formats are better suited for scripting and automation. Use the -o flag to return JSON or YAML representations of the resource:

kubectl get endpoints my-service -o yaml
kubectl get endpoints my-service -o json

For targeted data extraction, you can use JSONPath expressions. For example, to extract endpoint IP addresses:

kubectl get endpoints my-service -o jsonpath='{.subsets[*].addresses[*].ip}'

This approach is useful when building scripts or CI checks that validate service backends.
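The same validation can live in a small script. This sketch parses the JSON form of the object, as produced by kubectl get endpoints my-service -o json; the service name and sample payload are hypothetical, and a real check would substitute live command output:

```python
import json

def backend_ips(endpoints_obj: dict) -> list[str]:
    """Collect every ready backend IP from an Endpoints object's subsets."""
    ips = []
    for subset in endpoints_obj.get("subsets") or []:
        for addr in subset.get("addresses") or []:
            ips.append(addr["ip"])
    return ips

# In CI you would feed this the output of:
#   kubectl get endpoints my-service -o json
# Here we parse a trimmed sample payload instead:
sample = json.loads("""
{"subsets": [{"addresses": [{"ip": "10.1.1.5"}, {"ip": "10.1.1.6"}],
              "ports": [{"port": 8080}]}]}
""")

ips = backend_ips(sample)
assert ips, "service has no ready backends"
print(ips)  # ['10.1.1.5', '10.1.1.6']
```

Failing the build when the list is empty catches selector and readiness regressions before they reach users.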

Filter Endpoints by Namespace

By default, kubectl operates in the current namespace. If a Service resides in another namespace, specify it explicitly:

kubectl get endpoints -n production

To inspect endpoints across the entire cluster, use:

kubectl get endpoints -A

This provides a cluster-wide overview of service backends without switching contexts.

Use Labels and Selectors

Endpoint objects can also be filtered using labels. The --selector (or -l) flag limits results to resources matching a specific label:

kubectl get endpoints -l app=payment-service

This helps isolate endpoints associated with a particular component or application.

While these CLI queries are useful for direct inspection, platforms like Plural provide centralized views of Services and their endpoints across multiple clusters, making it easier to monitor service connectivity without running individual kubectl commands.

How to Interpret the kubectl get endpoints Output

Running kubectl get endpoints returns a compact view of the backend Pods associated with a Service. Although the output is simple, each column represents important information about how Kubernetes routes traffic.

The command effectively exposes the list of Pod IP addresses and ports currently registered as backends. Reading this output correctly helps determine whether service discovery is functioning or whether the Service is failing to connect to any Pods.

A typical output includes three columns: NAME, ENDPOINTS, and AGE. Together they describe which Service the endpoints belong to, which Pods are currently serving traffic, and how long the Endpoint object has existed.

When managing many services across clusters, manually checking this output becomes inefficient. Platforms like Plural provide a centralized dashboard that aggregates Service and endpoint data across clusters, making it easier to inspect service health without repeatedly querying each cluster.

The NAME Column

The NAME column shows the name of the Endpoint resource. In most cases, Kubernetes creates an Endpoint object with the same name as its corresponding Service.

This column links the endpoint data to the Service responsible for routing traffic. If the name matches the expected Service, you can confirm that you are inspecting the correct backend mapping.

Endpoint objects store the network addresses (IP and port) of the Pods backing a Service, allowing Kubernetes networking components to route traffic correctly.

The ENDPOINTS Column

The ENDPOINTS column lists the active backend addresses for the Service.

Each entry appears as an IP:PORT pair representing a Pod that matches the Service selector and has passed its readiness checks. For example:

10.1.1.5:8080,10.1.1.6:8080

If entries appear here, the Service has discovered valid backend Pods and should be able to route traffic successfully.

This column is the most useful indicator when troubleshooting Service connectivity.

The AGE Column

The AGE column indicates how long the Endpoint object has existed.

A short age often means the Service or its associated Pods were recently created or restarted. This is common during deployments or scaling events.

However, if Endpoint objects are repeatedly recreated, it may indicate unstable Pods—for example, containers restarting frequently or failing readiness probes.

Stable services typically show Endpoint objects with longer lifetimes.

Healthy vs. Empty Endpoints

The most important signal in the output is whether the ENDPOINTS column contains entries.

A healthy Service has at least one IP:PORT entry, indicating that Kubernetes has discovered ready Pods and registered them as traffic targets.

If the column shows <none>, the Service has no active backends. This usually means one of the following:

  • The Pods are not running.
  • Pods exist but are failing readiness probes.
  • The Service selector does not match any Pod labels.

When this occurs, the Service cannot route traffic anywhere and requests to it will fail. Investigating Pod status, labels, and readiness checks is the next step.
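That triage order can be expressed as a short decision helper. The data shapes here are hypothetical simplifications; in practice you would read the same fields from kubectl get service and kubectl get pods -o json:

```python
def diagnose_empty_endpoints(selector, pods):
    """Given a Service selector and pod summaries, guess why ENDPOINTS is <none>."""
    if not pods:
        return "no Pods are running"
    matching = [p for p in pods
                if all(p["labels"].get(k) == v for k, v in selector.items())]
    if not matching:
        return "selector matches no Pod labels"
    if not any(p["ready"] for p in matching):
        return "matching Pods exist but are failing readiness probes"
    return "Pods look healthy; check networking"

pods = [{"labels": {"app": "my-app"}, "ready": False}]
print(diagnose_empty_endpoints({"app": "my-api"}, pods))
# -> selector matches no Pod labels
```

The order of the checks mirrors the bullet list above: existence first, then label matching, then readiness.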

Common Use Cases for kubectl get endpoints

The kubectl get endpoints command is a core operational tool for working with Kubernetes networking. It exposes the real-time relationship between Services and their backend Pods, making it especially useful for troubleshooting connectivity and validating deployments.

Because the command shows the Pod IPs currently registered as Service backends, it helps answer several key operational questions: whether a Service is connected to any Pods, whether those Pods are considered ready, and whether traffic is being distributed across the expected instances.

While kubectl works well for inspecting a single cluster, diagnosing service connectivity across multiple clusters becomes cumbersome when switching contexts manually. Platforms like Plural provide a centralized Kubernetes dashboard that aggregates endpoint and Service health across clusters, enabling operators to monitor connectivity without repeatedly running CLI commands.

Debug Service Connectivity

When a Service appears unreachable, kubectl get endpoints is usually the first diagnostic step.

If the ENDPOINTS column shows <none>, the Service has no registered backend Pods. This immediately rules out network routing and kube-proxy problems and shifts the investigation toward Pod readiness and the Service's label selector.

The next step is typically to inspect Pod status:

kubectl get pods

From there you can determine whether Pods are pending, crashing, or failing readiness probes.

Services discover Pods through label selectors. kubectl get endpoints confirms whether this selector successfully matches running Pods.

For example, a Service selector such as:

selector:
  app: my-api

requires Pods with the same label:

labels:
  app: my-api

If labels do not match exactly—such as app: my-app instead of app: my-api—the Service will not associate with those Pods, and the Endpoint list will remain empty. Inspecting endpoints quickly reveals these configuration mismatches.

Monitor Service Health

Endpoint membership also reflects application readiness. Kubernetes only adds a Pod to an Endpoint object after it passes its readiness probe.

Watching the resource in real time can reveal unstable Pods:

kubectl get endpoints -w

If Pod IPs repeatedly appear and disappear, it indicates readiness probe failures or unstable workloads. In these cases, running:

kubectl describe endpoints <service-name>

can provide additional context through events and metadata.

Check Load Balancing

The IP addresses listed in an Endpoint object form the backend pool used by the Service’s load balancing mechanism.

Each IP corresponds to a Pod eligible to receive traffic. If a Deployment is scaled to five replicas, you should typically see five Pod IPs in the endpoint list once all Pods become ready.

If fewer endpoints appear, some Pods are either not ready or not selected by the Service. This reduces the available backend pool and may concentrate traffic on fewer Pods than expected.
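That comparison is simple to automate. This sketch flags a Service whose backend pool is smaller than its Deployment's replica count (the counts and addresses are hypothetical):

```python
def missing_backends(desired_replicas: int, endpoint_addresses: list[str]) -> int:
    """Return how many expected backends are absent from the endpoint list."""
    return max(0, desired_replicas - len(endpoint_addresses))

# A 5-replica Deployment with only three registered endpoints:
gap = missing_backends(5, ["10.1.1.5:8080", "10.1.1.6:8080", "10.1.1.7:8080"])
print(gap)  # 2 Pods are either not ready or not selected
```

A nonzero gap after a rollout has settled is a signal to inspect the missing Pods' readiness and labels.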

What Causes Empty or Missing Endpoints?

If kubectl get endpoints returns an empty list, the Service exists but Kubernetes has not registered any Pods as valid backends. This means the Service has nothing to route traffic to, which typically results in failed requests or unavailable applications.

In most cases, the problem is not with the Service itself but with how Kubernetes discovers or validates Pods. The most common causes include failing readiness probes, label selector mismatches, Pods that never reach the running state, or networking rules that prevent health checks or traffic.

These issues are usually isolated to configuration or workload state rather than cluster-wide failures. By checking each of these areas systematically, you can quickly restore the connection between the Service and its Pods. Platforms like Plural help streamline this process by exposing Pod status, logs, and resource configuration in a centralized Kubernetes dashboard, reducing the need to manually inspect multiple resources through CLI commands.

Pod Readiness Probe Failures

Kubernetes only registers Pods as Service endpoints after they pass their readiness probes. If a readiness probe fails, the Pod's Ready condition stays false and its IP address is excluded from the Endpoint list.

This prevents traffic from being sent to containers that cannot yet handle requests.

Common causes include:

  • Applications taking longer to start than the probe timeout allows
  • Incorrect health check paths or ports
  • Dependencies such as databases or external services being unavailable

When this occurs, inspecting Pod events and logs typically reveals why the readiness probe is failing.

Service Selector Mismatches

A Service discovers Pods using label selectors. If the selector does not exactly match the labels assigned to Pods, Kubernetes cannot associate those Pods with the Service.

For example, a Service selector like:

selector:
  app: my-service

will only match Pods labeled:

labels:
  app: my-service

Even small inconsistencies—such as myservice vs. my-service—prevent matching. When labels do not align, the Endpoint object remains empty because no Pods satisfy the Service selector.

Pod Scheduling and Resource Issues

Pods must be successfully scheduled and running before they can become endpoints. If Pods remain in the Pending state, they cannot be registered as Service backends.

Common scheduling problems include:

  • Insufficient CPU or memory on nodes
  • Node taints that the Pod does not tolerate
  • PersistentVolume mount failures
  • Node selector or affinity constraints

Checking Pod status is a typical next step when endpoints are missing. Running:

kubectl describe pod <pod-name>

usually reveals scheduling errors reported by the Kubernetes scheduler.

Network Policy Restrictions

NetworkPolicies control which traffic is allowed to reach Pods. Restrictive policies can unintentionally block the communication required for readiness probes or Service traffic.

If a policy denies traffic from the components performing health checks or from the Service proxy path, the Pod may never become a valid endpoint even if the container is running.

When troubleshooting endpoint issues, review the NetworkPolicies in the namespace to confirm that required ports and sources are allowed. Blocking these connections can prevent Pods from being added to the Service’s endpoint list even when they appear otherwise healthy.

How to Troubleshoot Unexpected Endpoint Results

If a Service behaves incorrectly—returning errors or failing to route traffic—the associated Endpoint object often reveals the problem. Missing or incorrect endpoints indicate that Kubernetes cannot properly link the Service with its backend Pods.

Troubleshooting usually involves verifying Pod health, confirming configuration consistency, and checking cluster networking components. Working through these areas methodically helps isolate where the Service-to-Pod connection is breaking.

Check Pod Status and Readiness

Endpoints only include Pods that are both running and ready. Pods that fail to start or fail readiness probes will not appear in the endpoint list.

Start by inspecting the Pods selected by the Service:

kubectl get pods -l app=my-app

Check the STATUS and READY columns.

Common problematic states include:

  • CrashLoopBackOff
  • Pending
  • Error

If a Pod is running but not ready, the readiness probe may be failing. Since Kubernetes only registers ready Pods as endpoints, probe failures will prevent the Pod IP from appearing in the Endpoint object.

Verify Service Configurations and Selectors

A frequent cause of empty endpoints is a mismatch between the Service selector and Pod labels.

The Service selector must match the labels defined in the Pod template. For example:

Service definition:

spec:
  selector:
    app: my-app

Pod template:

metadata:
  labels:
    app: my-app

Any mismatch prevents Kubernetes from associating Pods with the Service.

Although Kubernetes normally manages Endpoint objects automatically, manually defined Endpoints may appear when connecting Services to external resources. In those cases, verify that the configured IP addresses and ports are correct.

Inspect Network Policies and DNS

If Pods are healthy and selectors match, the issue may involve cluster networking.

NetworkPolicies can restrict Pod-to-Pod communication. A misconfigured policy may block traffic from the Service proxy or interfere with readiness probes.

Check active policies in the namespace:

kubectl get networkpolicy

DNS issues can also disrupt service discovery. If the cluster DNS service (commonly CoreDNS) is unhealthy, Services may fail to resolve to their endpoints.

You can test DNS resolution from inside the cluster by running a temporary debugging Pod and using tools like nslookup or dig.
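One common pattern is a throwaway Pod that runs the lookup and is deleted as soon as it exits. The image tag and Service DNS name below are placeholders; substitute your own namespace and Service:

```shell
# Launch a temporary Pod, resolve the Service name, then clean up automatically.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 \
  -- nslookup my-service.default.svc.cluster.local
```

If the name fails to resolve while the Service and Endpoints look correct, the problem usually lies with CoreDNS rather than with the endpoint mapping.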

Use kubectl describe for Detailed Analysis

When kubectl get output is insufficient, kubectl describe provides a more detailed view of the Endpoint object:

kubectl describe endpoints <endpoint-name>

This command shows additional metadata and recent events associated with the resource. These events may indicate readiness probe failures or other issues preventing Pods from being registered as endpoints.

In environments with multiple clusters, repeatedly running CLI commands can become inefficient. Plural’s embedded Kubernetes dashboard consolidates Service configurations, Pod status, and related events across clusters, enabling faster diagnosis of endpoint and connectivity issues.

Advanced kubectl get endpoints Techniques

After mastering basic endpoint inspection, you can use more advanced techniques to turn kubectl get endpoints into a practical tool for diagnostics, monitoring, and automation. These approaches help extract precise data, observe endpoint changes in real time, and integrate endpoint checks into operational workflows.

Combine With Other kubectl Commands

Endpoint inspection is most useful when combined with other resource queries. For example, inspecting a Service alongside its endpoints helps verify that selectors and backend Pods align:

kubectl describe service <service-name>

This reveals the Service selector, which determines which Pods populate the Endpoint object.

If a Service has many backing Pods, the default output may truncate the endpoint list with a message like +1 more.... To expand the view, you can use wider output or custom columns:

kubectl get endpoints <service-name> -o wide

For more precise control, custom column output can expose specific fields from the resource.

Use Watch Mode for Real-Time Monitoring

The --watch flag streams updates whenever the Endpoint object changes:

kubectl get endpoints <endpoint-name> --watch

This is useful for observing changes during rolling deployments or scaling events. As Pods become ready or terminate, their IP addresses appear or disappear from the endpoint list in real time.

Watch mode is also useful when debugging intermittent failures or monitoring how autoscaling events affect the backend pool.

Extract Specific Data With JSONPath

For scripting and automation, you often need to extract specific fields rather than view the full object. kubectl supports JSONPath expressions for this purpose.

For example, to extract all backend Pod IP addresses:

kubectl get endpoints <service-name> \
  -o jsonpath='{.subsets[*].addresses[*].ip}'

This produces a clean list of Pod IPs that can be piped into other tools for validation or monitoring tasks.

Using JSONPath effectively turns kubectl into a programmable interface for Kubernetes state.

Automate Endpoint Checks With Scripts

JSONPath output can be incorporated into scripts that validate service health. For example, a script could:

  • Iterate through Services
  • Extract endpoint IPs
  • Run connectivity tests such as curl against each Pod

You can also automate checks for Services that have zero endpoints across namespaces, which often indicates configuration or readiness issues.
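A minimal version of that zero-endpoint check might look like the sketch below. The payload is a trimmed sample shaped like the output of kubectl get endpoints -A -o json; real use would substitute live command output, and the namespace and Service names are hypothetical:

```python
def services_without_backends(endpoints_list: dict) -> list[str]:
    """Return namespace/name for every Endpoints object with no ready addresses."""
    empty = []
    for item in endpoints_list.get("items", []):
        has_addr = any(subset.get("addresses")
                       for subset in item.get("subsets") or [])
        if not has_addr:
            meta = item["metadata"]
            empty.append(f'{meta["namespace"]}/{meta["name"]}')
    return empty

sample = {
    "items": [
        {"metadata": {"namespace": "prod", "name": "api"},
         "subsets": [{"addresses": [{"ip": "10.1.1.5"}]}]},
        {"metadata": {"namespace": "prod", "name": "worker"}, "subsets": []},
    ]
}
print(services_without_backends(sample))  # ['prod/worker']
```

Run on a schedule, this surfaces Services that have silently lost all their backends.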

These scripts work well for small environments, but maintaining them across many clusters can become difficult. Platforms like Plural provide a centralized Kubernetes dashboard that surfaces endpoint health across clusters, reducing the need for custom monitoring scripts.

Manage Endpoints Across Your Fleet with Plural

While kubectl is an indispensable tool for inspecting resources within a single cluster, its utility diminishes when you manage a fleet of dozens or hundreds of clusters. Running commands manually across each environment is not a scalable solution for troubleshooting or monitoring. This is where a centralized management platform becomes critical. Plural provides a unified control plane to manage Kubernetes applications and infrastructure consistently, no matter the scale. Instead of treating each cluster as an isolated island, you can manage your entire fleet from a single interface, which fundamentally changes how you approach endpoint management.

Gain centralized visibility across all clusters

Managing endpoints at scale starts with visibility. Without a central point of control, engineers are forced to context-switch between different clusters, manually running kubectl get endpoints to diagnose issues. This process is slow, error-prone, and inefficient. Plural offers a unified control plane that provides a comprehensive view of your entire Kubernetes fleet. By using a secure, agent-based architecture, Plural can connect to and manage clusters in any environment, whether in the cloud or on-premises. This gives your team a single source of truth for endpoint status across all clusters, eliminating the need to juggle multiple kubeconfigs and terminal sessions to find the information you need.

Automate endpoint health monitoring

Relying on manual checks to monitor endpoint health is not a viable strategy for production systems. A service can fail at any time, and the delay between an endpoint becoming empty and an engineer noticing it can lead to significant downtime. Plural helps you move from a reactive to a proactive monitoring posture. By integrating continuous deployment and infrastructure-as-code management into a single platform, you can automate health checks for your endpoints. You can configure alerts to trigger automatically when a service’s endpoints disappear or when pods in a service become unhealthy. This ensures that your team is notified of potential issues immediately, allowing them to resolve problems before they impact users.

Simplify troubleshooting with a unified dashboard

When an issue arises, speed is essential. Plural’s embedded Kubernetes dashboard provides a secure, SSO-integrated interface for ad-hoc troubleshooting. Instead of relying solely on the command line, engineers can use the dashboard to visualize the relationships between Services, Endpoints, and Pods. This makes it much easier to spot the root cause of a problem, such as a selector mismatch or a failing readiness probe. Because the dashboard uses Kubernetes impersonation, you can define RBAC policies as code in Git and apply them consistently across your fleet. This gives engineers the access they need to troubleshoot effectively without compromising the security of your clusters.


Frequently Asked Questions

What's the real difference between a Kubernetes Service and an Endpoint? Think of a Service as the permanent address for your application and an Endpoint as the dynamic list of who is currently home to receive mail. The Service provides a stable IP address and DNS name that other applications can use to connect. The Endpoint object is the behind-the-scenes list that Kubernetes maintains, tracking the actual IP addresses of the healthy, ready Pods that the Service should send traffic to. The Service is the "what," and the Endpoint is the "where."

Why would my Endpoints list be empty even if my Pods are running? This is a classic Kubernetes puzzle. If your Pods show a Running status but the Endpoints list is empty, the most likely cause is a failed readiness probe. A Pod can be running but not yet "ready" to accept traffic. Kubernetes will only add a Pod's IP to the Endpoints list after its readiness probe passes. Another common reason is a simple typo or mismatch between the labels on your Pods and the selector defined in your Service manifest.

Can I create or modify an Endpoint object manually? Yes, you can, but it's generally not recommended for standard workloads. Kubernetes excels at managing Endpoints automatically based on Service selectors. However, a manual Endpoint is necessary when you want a Kubernetes Service to route traffic to an external resource, like a database running outside your cluster. In this case, you create a Service without a selector and then create a corresponding Endpoint object by hand, populating it with the external IP addresses and ports.

How do readiness probes directly impact the Endpoints list? Readiness probes are the gatekeepers for your Endpoints. The Kubernetes control plane constantly checks the readiness probe of each Pod. As soon as a Pod's probe starts passing, its IP address is added to the Endpoints list, and it begins receiving traffic from the Service. If the probe starts failing for any reason, Kubernetes immediately removes the Pod's IP from the list to prevent traffic from being sent to an unhealthy instance. This makes readiness probes a critical tool for ensuring traffic only reaches Pods that can handle it.

How does Plural help when kubectl get endpoints isn't enough for my whole fleet? While kubectl is great for inspecting a single cluster, it doesn't scale when you're managing dozens or hundreds. Plural provides a centralized Kubernetes dashboard that gives you a single-pane-of-glass view of service health and endpoint status across all your clusters. Instead of switching contexts and running commands repeatedly during an incident, you can use Plural's UI to quickly visualize service connections, diagnose selector mismatches, and check Pod readiness across your entire fleet from one place.
