`kubectl get services`: The Definitive Guide
For engineers operating a single Kubernetes cluster, kubectl get services is a core operational command—fast, reliable, and sufficient for day-to-day diagnostics. Its limitations surface in enterprise environments where teams manage dozens or hundreds of clusters. Repeating the same command across contexts is slow, error-prone, and impossible to scale, making it difficult to answer even basic questions about service exposure and availability across the fleet. This article starts by breaking down kubectl get services in depth for single-cluster workflows, including output formats and advanced filtering. It then shifts to fleet-level service visibility, explaining why the CLI breaks down and how platforms like Plural provide a centralized control plane for consistent, at-scale service discovery and management.
Unified Cloud Orchestration for Kubernetes
Manage Kubernetes at scale through a single, enterprise-ready platform.
Key takeaways:
- Master the basics for network inspection: `kubectl get services` is your go-to command for viewing a cluster's network layer. Understanding the output—including service types like ClusterIP and LoadBalancer, and how to query specific namespaces with the `-n` flag—is the first step to diagnosing any connectivity problem.
- Use flags and selectors to troubleshoot faster: Move beyond the default view to efficiently debug issues. Use label selectors (`-l`) to isolate specific services, custom output formats (`-o yaml`) to inspect full configurations, and `kubectl describe` to check events and find the root cause of pending IPs or missing endpoints.
- Centralize management to overcome CLI limitations at scale: While essential, the command line is inefficient for managing services across a fleet of clusters. A platform like Plural provides the necessary single-pane-of-glass visibility and GitOps automation to manage configurations, RBAC, and service health consistently across your entire environment.
What Is kubectl get services?
kubectl get services is a core Kubernetes CLI command used to list Service resources in a cluster. It queries the Kubernetes API and returns the current network-facing configuration of your workloads, including service type, cluster IPs, external IPs, and exposed ports. For day-to-day operations, this command is the fastest way to understand how applications are reachable inside and outside the cluster, making it essential for deployments, debugging connectivity, and validating configuration changes. The output represents a real-time snapshot of the cluster’s service layer.
The Role of Services in Kubernetes
Pods in Kubernetes are inherently ephemeral. They can be rescheduled, recreated, or replaced at any time, and each lifecycle event may assign a new IP address. Relying on Pod IPs directly is therefore not viable for stable communication. A Service abstracts this volatility by defining a logical group of Pods selected via labels and exposing them through a stable virtual IP and DNS name. Traffic sent to the Service is load-balanced across healthy Pods, ensuring reliable connectivity regardless of Pod churn. This abstraction is foundational to how microservices communicate within a cluster and how external traffic is routed to applications.
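As a minimal sketch (names and ports are illustrative), a Service that groups Pods by label looks like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend          # stable DNS name: backend.<namespace>.svc.cluster.local
spec:
  selector:
    app: backend         # traffic is load-balanced across Ready Pods carrying this label
  ports:
    - port: 80           # port the Service listens on at its cluster IP
      targetPort: 8080   # port the selected Pods actually serve on
      protocol: TCP
```

Clients dial the stable name or cluster IP; as Pods come and go, the Service's endpoint list is updated automatically.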
Basic Command Syntax and Structure
The command follows the standard kubectl resource pattern:
kubectl get <resource-type> [resource-name]
For Services, the resource type can be specified as services or its shorthand svc.
To list all Services in the current namespace:
kubectl get services
Or equivalently:
kubectl get svc
From there, the command can be extended with flags to target specific namespaces, change output formats, or filter results—capabilities that become increasingly important as cluster complexity grows and will be covered in later sections.
A Breakdown of Kubernetes Service Types
In Kubernetes, a Service defines a logical set of Pods and the policy used to access them. When you run kubectl get services, the TYPE column is critical because it determines how traffic reaches your workload. Kubernetes exposes four primary Service types, each mapping to a different networking and exposure model. Choosing the correct type is fundamental to building secure, predictable application connectivity.
ClusterIP
ClusterIP is the default Service type. It exposes the Service on a virtual IP that is only routable inside the cluster. This is the standard choice for internal service-to-service communication, such as a frontend calling a backend API. It provides a stable endpoint while keeping the workload isolated from external traffic, making it the baseline for most microservice architectures.
NodePort
NodePort exposes a Service on a fixed port across every node in the cluster. Traffic sent to <NodeIP>:<NodePort> is forwarded to the underlying ClusterIP Service. This is useful for development, testing, or simple environments where you need external access without provisioning a cloud load balancer. In production, it is less common due to limited control over routing, security, and lifecycle management.
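A NodePort spec is a small extension of the base Service (values here are illustrative; if `nodePort` is omitted, Kubernetes picks one from the default 30000–32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-nodeport
spec:
  type: NodePort
  selector:
    app: demo
  ports:
    - port: 80           # cluster-internal Service port
      targetPort: 8080   # container port on the Pods
      nodePort: 30080    # reachable at <NodeIP>:30080 on every node
```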
LoadBalancer
On managed platforms like AWS, Google Cloud, or Microsoft Azure, LoadBalancer is the standard way to expose services publicly. Kubernetes automatically provisions a cloud provider load balancer and assigns a stable external IP. Incoming traffic is routed through the cloud load balancer to NodePorts and then to Pods. This is the preferred option for internet-facing production services due to its scalability, availability, and managed infrastructure.
ExternalName
ExternalName does not select Pods or create proxy rules. Instead, it creates a DNS alias inside the cluster by returning a CNAME record. This allows in-cluster workloads to reference an external service using a Kubernetes-style DNS name. A common use case is pointing to managed services like Amazon RDS or third-party APIs, improving configuration consistency and portability across environments.
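A sketch of an ExternalName Service, aliasing a hypothetical managed database hostname—note there is no selector and no ports section required:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: prod-db          # in-cluster alias workloads connect to
spec:
  type: ExternalName
  externalName: mydb.example.rds.amazonaws.com   # hypothetical external endpoint
```

In-cluster DNS then resolves `prod-db.<namespace>.svc.cluster.local` to a CNAME for the external host, so swapping the backing database only requires changing this one resource.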
Understanding how these Service types appear in kubectl get services is essential for debugging exposure issues in a single cluster—and becomes even more important when platforms like Plural aggregate and standardize service visibility across many clusters.
How to View Services Across Namespaces
Namespaces in Kubernetes partition a single cluster into logical environments for teams, applications, or stages like dev and prod. Because Services are namespace-scoped, visibility depends entirely on how you query them. Knowing how to control scope with kubectl is essential for both application debugging and cluster administration.
Default Namespace Behavior
Running kubectl get services without flags queries only the namespace set in your current context—typically default. This frequently trips up engineers when Services are deployed into custom namespaces and appear to be “missing.” The behavior is intentional: it reduces noise and limits accidental interaction with system namespaces such as kube-system.
Target a Specific Namespace with -n
Use the -n or --namespace flag to query Services in a specific namespace:
kubectl get services -n kube-system
This scoped approach is the norm for day-to-day operations, allowing you to focus on a single application or environment without scanning unrelated resources.
View All Namespaces with --all-namespaces
For a cluster-wide view, use:
kubectl get services --all-namespaces
or the shorthand:
kubectl get services -A
This adds a NAMESPACE column to the output and returns every Service in the cluster. It’s useful for audits, troubleshooting unknown deployments, or understanding overall service sprawl. At scale, however—especially across many clusters—this quickly becomes unwieldy, which is why platforms like Plural centralize service visibility across namespaces and clusters into a single control plane.
Key Flags and Output Options for kubectl get services
The default table output of kubectl get services is fine for quick checks, but real efficiency comes from its flags. These options let you shape, filter, and stream data directly from the API server, making the command suitable for automation, debugging, and operational workflows. For platform teams, mastering these flags is the difference between ad-hoc inspection and repeatable, scriptable service discovery.
Format Output with -o (wide, JSON, YAML)
The -o (--output) flag controls how results are rendered.
- `-o wide` extends the table with additional columns such as selectors, useful for fast, human-readable diagnostics.
- `-o json` and `-o yaml` return the full Service object in machine-readable formats, ideal for scripting, audits, and piping into other tools.
These formats are commonly used in CI/CD pipelines and automation where structured output is required.
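Because `-o json` returns the full object graph, it is easy to post-process in a script. Below is a minimal Python sketch run against a trimmed, illustrative sample of the JSON shape that `kubectl get services -o json` returns (a real response carries many more fields):

```python
import json

# Trimmed, illustrative sample of `kubectl get services -o json` output.
sample = """
{
  "items": [
    {
      "metadata": {"name": "backend", "namespace": "prod"},
      "spec": {
        "type": "ClusterIP",
        "ports": [{"port": 80, "protocol": "TCP"}]
      }
    },
    {
      "metadata": {"name": "frontend", "namespace": "prod"},
      "spec": {
        "type": "LoadBalancer",
        "ports": [{"port": 443, "protocol": "TCP"}]
      }
    }
  ]
}
"""

def summarize(doc: str) -> list[str]:
    """Return one '<name>.<namespace> <TYPE> <ports>' line per Service."""
    lines = []
    for svc in json.loads(doc)["items"]:
        meta, spec = svc["metadata"], svc["spec"]
        ports = ",".join(f'{p["port"]}/{p["protocol"]}' for p in spec["ports"])
        lines.append(f'{meta["name"]}.{meta["namespace"]} {spec["type"]} {ports}')
    return lines

for line in summarize(sample):
    print(line)
```

In a real pipeline you would feed it live output, e.g. `kubectl get services -A -o json | python summarize.py`.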
Create Custom Output with JSONPath
When you only need specific fields, JSONPath avoids parsing entire objects. With -o jsonpath, you can extract exactly what you need directly from the API response.
Example: list fully qualified service DNS names across all namespaces:
FILTER='{range .items[*]}{.metadata.name}.{.metadata.namespace}.svc.cluster.local{"\n"}{end}'
kubectl get services -A -o=jsonpath="${FILTER}"
This approach produces clean, minimal output without relying on external tools like jq or grep, which is especially useful in scripts and reports.
Filter with Label and Field Selectors
Large clusters require targeted queries.
Label selectors (-l or --selector) filter by labels you control:
kubectl get services -l app=backend
Field selectors (--field-selector) filter by resource fields such as name or type:
kubectl get services --field-selector spec.type=LoadBalancer
Selectors are fundamental for managing Services at scale and should align with your labeling strategy.
Sort and Watch Services in Real Time
To organize output, use --sort-by with a JSONPath expression:
kubectl get services --sort-by=.metadata.name
For live updates, --watch (-w) streams changes as they happen:
kubectl get services -w
This is particularly useful during deployments or when debugging service creation and deletion events.
These flags turn kubectl get services from a simple listing command into a powerful inspection and automation tool. However, once you need consistent visibility across namespaces and clusters, CLI-based workflows hit their limits—this is where platforms like Plural provide centralized, queryable service views without relying on per-cluster terminal access.
What Does the kubectl get services Output Mean?
Running kubectl get services returns a compact but information-dense view of how network traffic is routed inside and outside your cluster. Each row represents a Service object, and each column maps directly to how that Service is exposed, addressed, and forwarded. Being able to read this output quickly is essential for diagnosing unreachable workloads, validating deployments, and understanding your cluster’s networking model.
Breaking Down the Default Columns
The default table includes the following columns:
- NAME: The Service’s unique name within its namespace.
- TYPE: How the Service is exposed. Common values include ClusterIP (internal-only), NodePort (exposed on each node at a fixed port), and LoadBalancer (provisioned through a cloud provider).
- CLUSTER-IP: The stable virtual IP assigned to the Service. This address is only routable inside the cluster and is the primary mechanism for service-to-service communication.
- EXTERNAL-IP: For LoadBalancer Services, this is the externally reachable IP. For other types, this typically shows <none>.
- PORT(S): The ports exposed by the Service and their protocols, for example 80/TCP, or 80:30080/TCP when a node port is allocated (NodePort and LoadBalancer types).
- AGE: How long the Service has existed since creation.
Together, these fields give you a fast, high-level picture of service exposure and lifecycle.
Interpreting Endpoints and Port Mappings
A Service provides a stable endpoint in front of a dynamic set of Pods selected by labels. Instead of connecting to ephemeral Pod IPs, clients connect to the Service’s CLUSTER-IP, which forwards traffic to healthy backend Pods.
The PORT(S) column reflects the port mapping. For a ClusterIP Service you’ll see the Service port alone, such as 80/TCP. For NodePort and LoadBalancer Services, an entry like 80:30080/TCP means the Service listens on port 80 at its cluster IP and is also exposed on node port 30080. The Pod-side targetPort isn’t shown in the default table; retrieve it with kubectl get service <name> -o yaml or kubectl describe. This indirection lets you standardize client-facing ports while keeping application-level ports flexible.
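To ground the mapping, here is a sketch of how the three port fields in a NodePort Service spec relate (values are illustrative):

```yaml
# spec.ports entry for a hypothetical NodePort Service
ports:
  - port: 80          # Service port: what in-cluster clients dial (CLUSTER-IP:80)
    targetPort: 3000  # Pod port: where the container actually listens
    nodePort: 30080   # Node port: external access via <NodeIP>:30080
# kubectl get services renders this entry as 80:30080/TCP
```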
Reading External IPs and Load Balancer Status
For LoadBalancer Services, Kubernetes integrates with your cloud provider to create an external load balancer. While provisioning is in progress, the EXTERNAL-IP field may show <pending>. Once an IP appears, the Service is ready to accept external traffic.
If <pending> persists, it usually points to a cloud integration issue—missing permissions, unsupported environments, or quota limits. In those cases, inspecting the Service events with kubectl describe service <name> typically reveals the underlying cause.
Understanding this output is manageable in a single cluster. At scale, however, correlating service exposure and external endpoints across namespaces and clusters quickly becomes impractical without a centralized view—an area where platforms like Plural significantly reduce operational overhead.
How to Troubleshoot Common kubectl get services Issues
Even when you understand the command, kubectl get services can return results that don’t match expectations. Services may be missing, stuck in transitional states, or unreachable. Effective troubleshooting is about narrowing the problem quickly—namespace scope, selectors, permissions, or cluster context—before digging deeper.
Why a Service Might Not Appear
The most common reason a Service is “missing” is namespace scoping. By default, kubectl queries only the namespace in your current context, typically default. If the Service was created elsewhere, it won’t appear.
Use -n <namespace> to target a specific namespace, or -A (--all-namespaces) if you’re unsure where it lives. Also verify the Service name for simple typos. In practice, most “missing service” issues are resolved by correcting namespace scope.
Diagnosing Service States and Configuration Errors
If a Service appears but doesn’t work, inspect it directly:
- `kubectl describe service <name>` to review configuration, events, and selectors
- `kubectl get endpoints <name>` to confirm backend Pods are registered
If endpoints are empty, the Service selector isn’t matching any Ready Pods. At that point, inspect the Pods themselves: labels, readiness probes, and startup logs. This service → endpoints → pods loop is fundamental but repetitive. Centralized dashboards like Plural reduce friction by showing Services, Pods, and logs in a single view.
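Empty endpoints almost always come down to a selector/label mismatch. As an illustrative pair (names assumed), this Service will report no endpoints because the Pod's label doesn't match:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend        # Service selects Pods labeled app=backend
  ports:
    - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: backend-pod
  labels:
    app: back-end       # typo: doesn't match the selector, so endpoints stay empty
spec:
  containers:
    - name: app
      image: nginx
```

Fixing the label (or the selector) is enough; the control plane repopulates the endpoint list automatically.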
Solving RBAC and Permission Errors
A Forbidden error indicates an RBAC issue. Kubernetes enforces access at the API level, and your user or service account may not be authorized to list Services in a namespace or cluster-wide.
Resolution requires updating a Role or ClusterRole and binding it with a RoleBinding or ClusterRoleBinding. At fleet scale, manually managing RBAC becomes brittle. Plural addresses this with GitOps-driven RBAC definitions applied consistently across clusters, reducing drift and surprise permission failures.
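As a sketch of the fix, a namespaced Role granting read access to Services, bound to a hypothetical user (names and namespace are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-reader
  namespace: dev
rules:
  - apiGroups: [""]            # core API group, where Services live
    resources: ["services"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: service-reader-binding
  namespace: dev
subjects:
  - kind: User
    name: jane                 # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: service-reader
  apiGroup: rbac.authorization.k8s.io
```

For cluster-wide listing (`-A`), the same rule would live in a ClusterRole with a ClusterRoleBinding instead.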
Checking Context and Cluster Connectivity
Sometimes the issue isn’t the Service—it’s your kubectl context. Verify where you’re connected:
kubectl config current-context
kubectl config get-contexts
Switch if necessary with kubectl config use-context <name>. If commands fail entirely, validate API connectivity using kubectl cluster-info. Client/server version skew can also cause subtle issues, so checking kubectl version is a good final step.
Systematic checks—namespace, selectors, RBAC, context—resolve the majority of kubectl get services problems. The challenge is that repeating this workflow across many clusters doesn’t scale, which is why platforms like Plural focus on consolidating visibility and diagnostics at the fleet level.
Advanced Techniques for kubectl get services
While the basic kubectl get services command is a great starting point, managing services in a production environment requires more sophisticated techniques. As your cluster grows, you'll need to move beyond simple lists and learn how to extract detailed information, filter large outputs, and monitor changes effectively. These advanced methods help you troubleshoot faster and maintain a clearer understanding of your service architecture.
Combine with describe and Other kubectl Commands
To get a complete picture of a service, you need more than the one-line summary from kubectl get. This is where the describe command comes in. Running kubectl describe service <service-name> provides a detailed report, including its configuration, endpoints, and a list of recent events. This event log is invaluable for debugging issues like failing endpoint connections. You can also create resources on the fly for testing. For example, kubectl create deployment nginx --image=nginx quickly spins up a new deployment that you can expose and inspect. This workflow is a fundamental part of day-to-day Kubernetes operations. For a full list of commands, the official kubectl quick reference is an excellent resource.
Filter and Parse Large Service Lists
In an environment with hundreds of services, a raw kubectl get services dump is overwhelming. To manage this, you can filter and sort the output. Use the --sort-by flag to organize the list, such as kubectl get services --sort-by=.metadata.name to list services alphabetically. More powerfully, you can use label selectors to find specific services. If you've labeled your services by application, you can run kubectl get services --selector=app=cassandra to see only related services. To get a complete overview of every service running in your cluster, a common need for operators, append the --all-namespaces flag. This command is essential for a holistic view of your entire Kubernetes environment.
Monitor Service Changes Over Time
To maintain stability, you need to track how your services change. The best way to do this is by managing service configurations declaratively using YAML or JSON files. Instead of making imperative changes with kubectl edit, you define the desired state in a file and apply it with kubectl apply -f <filename>. This approach creates an auditable trail of every change. When a service isn't behaving as expected, a systematic debugging process is key. Start by examining the service's configuration with kubectl get service <service-name> -o yaml. Then, check its endpoints and review the logs and events of the associated pods to find the root cause. This structured approach helps you quickly pinpoint and resolve issues.
Managing Services at Scale in Enterprise Environments
While kubectl get services is an indispensable command for inspecting a single cluster, its utility diminishes as your infrastructure grows. In enterprise settings, teams often manage dozens or even hundreds of Kubernetes clusters spread across multiple clouds and on-premise environments, and the real operational pain surfaces when you run Kubernetes in production at that scale. Running commands manually against each cluster is inefficient, error-prone, and doesn't scale. The broad adoption of cloud-native technologies brings challenges ranging from cluster instability to inconsistent service configurations. This is where a unified management plane becomes essential for maintaining visibility, control, and consistency across your entire fleet of services.
The Challenge of Managing Services Across Multiple Clusters
Managing a fleet of Kubernetes clusters introduces significant complexity. Each cluster has its own context, and constantly switching between them to run kubectl commands is a tedious process that slows down troubleshooting. This fragmented approach makes it difficult to get a holistic view of your services, identify inconsistencies, or enforce standards across your entire environment. Challenges like managing multiple Kubernetes distributions, dealing with different cloud provider interfaces, and a general lack of visibility can lead to configuration drift and operational blind spots, making it nearly impossible to manage services effectively and securely at scale.
Gain Comprehensive Service Visibility with Plural's Dashboard
Plural provides a single pane of glass to cut through the complexity of multi-cluster management. Instead of juggling kubeconfigs and running repetitive commands, you can use Plural’s embedded Kubernetes dashboard to view and interact with services across your entire fleet from a single, unified interface. This centralized view gives you immediate insight into the health and status of all your services, regardless of where they are running. By consolidating service information, Plural helps you quickly diagnose issues, compare configurations between clusters, and ensure your applications are running as expected, turning a chaotic, multi-cluster environment into a manageable one.
Scale Service Management with Automation
Viewing services is only half the battle; managing them consistently at scale requires automation. Plural’s GitOps-based workflow allows you to define and deploy services declaratively. Using features like Global Services, you can create a service definition once and have Plural automatically replicate it across any number of target clusters. For example, you can enforce consistent RBAC policies or deploy a standard monitoring agent everywhere with a single configuration. This automation-driven approach eliminates manual toil, prevents configuration drift, and ensures that your service architecture remains consistent and compliant as you scale.
Best Practices for Managing Kubernetes Services
Using kubectl get services is fundamental, but managing services effectively, especially across a fleet of clusters, requires a disciplined approach. Relying solely on CLI commands for visibility and troubleshooting doesn't scale and can lead to operational blind spots. Establishing solid operational practices is key to maintaining a healthy, resilient, and manageable Kubernetes environment. These practices ensure that your services are not only discoverable but also observable, well-organized, and understood by your entire team. By implementing consistent monitoring, clear labeling, and collaborative documentation, you can move from reactive firefighting to proactive management, regardless of the complexity of your infrastructure.
Implement Regular Service Health Monitoring
Proactive health monitoring is essential for identifying issues before they impact users. A standard debugging workflow involves checking a service's configuration, verifying its endpoints are correctly pointing to healthy pods, and inspecting pod logs for errors. While you can do this manually with kubectl describe and kubectl logs, this approach is time-consuming and inefficient at scale. A centralized platform that automates this process is a far better solution for fleet management. Plural’s embedded Kubernetes dashboard provides a single pane of glass to monitor service health across all your clusters, aggregating logs and events so you can quickly diagnose connectivity issues without juggling multiple terminals and contexts. This unified view helps you maintain service reliability across your entire infrastructure.
Adopt Clear Naming and Labeling Strategies
A consistent naming and labeling strategy is the foundation of an organized Kubernetes environment. Clear labels make it significantly easier to find, manage, and debug resources. For example, instead of a generic label like app: my-app, adopt a structured approach using recommended labels like app.kubernetes.io/name: api-gateway and app.kubernetes.io/component: authentication. This allows you to precisely filter services with commands like kubectl get services -l app.kubernetes.io/component=authentication. This level of organization is critical for automation, policy enforcement, and GitOps workflows. Within the Plural platform, these labels are used to create filtered, intuitive views of your services, helping you quickly isolate resources by application, team, or environment without complex queries.
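A metadata sketch using the Kubernetes recommended label keys (the values and grouping are illustrative):

```yaml
metadata:
  name: api-gateway
  labels:
    app.kubernetes.io/name: api-gateway
    app.kubernetes.io/component: authentication
    app.kubernetes.io/part-of: payments-platform   # assumed application grouping
    app.kubernetes.io/managed-by: plural
```

With labels like these in place, selector queries and platform-side filtered views become trivial and predictable.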
Document and Collaborate with Your Team
Managing Kubernetes is a team effort, and clear documentation ensures everyone is on the same page. Keep detailed records of service architecture, dependencies, and debugging procedures. This knowledge sharing reduces reliance on a few key individuals and empowers the entire team to troubleshoot effectively. A great practice is to store this documentation alongside your Kubernetes manifests in a Git repository. This docs-as-code approach creates a single source of truth. Plural’s GitOps-based workflow naturally supports this by using pull requests as a forum for discussion and review. This ensures that changes are documented and understood before they are merged, creating a transparent and collaborative operational model for your team.
Frequently Asked Questions
What's the difference between a Kubernetes Service and an Ingress? A Service operates at Layer 4 (TCP/UDP) and provides a stable IP address to route traffic to a set of pods within the cluster. Think of it as an internal load balancer. An Ingress, on the other hand, operates at Layer 7 (HTTP/HTTPS) and manages external access to services, typically handling things like URL-based routing, SSL termination, and name-based virtual hosting. You'll almost always use a Service to expose your pods internally, and then use an Ingress to expose that Service to the outside world.
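To make the division of labor concrete, here is a minimal Ingress sketch routing HTTP traffic to a ClusterIP Service (host and names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend   # the ClusterIP Service exposing the Pods
                port:
                  number: 80
```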
My LoadBalancer service's external IP is stuck on <pending>. What should I check first? A pending external IP usually means the cloud provider's load balancer is still being provisioned or has failed to create. First, use kubectl describe service <your-service-name> and look at the Events section at the bottom for error messages from the cloud provider's controller. Common causes include insufficient permissions for the cluster to create load balancers, hitting a quota limit on your cloud account, or a misconfiguration in your cloud controller manager.
How does a Service know which Pods to send traffic to? A Service identifies its target Pods using a label selector. When you define a Service, you specify a selector that matches a set of labels you've applied to your Pods. The Kubernetes control plane continuously scans for Pods with matching labels and updates the Service's list of endpoints accordingly. This is why a consistent labeling strategy is so important; it's the core mechanism that connects your stable network endpoint (the Service) to your dynamic application workloads (the Pods).
Can I use a Service to connect to an application outside my Kubernetes cluster? Yes, that's exactly what the ExternalName service type is for. Instead of routing traffic to pods inside the cluster, it creates a DNS alias to an external service, like a managed database or a third-party API. Your in-cluster applications can connect to a stable, internal service name, and Kubernetes DNS will resolve it to the external endpoint. This makes your application configurations more portable, as you can change the external endpoint without having to update every application that connects to it.
Why manage services with a platform like Plural if kubectl works fine for me? While kubectl is an essential tool for interacting with a single cluster, its effectiveness breaks down when you're managing a fleet of them. Constantly switching contexts to inspect services across different environments is inefficient and prone to error. Plural provides a single dashboard to view and manage services across all your clusters, giving you a unified view of your entire infrastructure. More importantly, it enables a GitOps workflow, allowing you to define services declaratively and automate their deployment consistently, which is critical for maintaining control and preventing configuration drift at scale.