`kubectl get namespaces`: A Complete User Guide
Running kubectl get namespaces on a single cluster is simple. But what happens when your responsibilities expand to ten clusters? Or a hundred? The command line, once a source of efficiency, becomes a bottleneck. Constantly switching contexts, running repetitive commands, and trying to maintain a mental model of a sprawling fleet is inefficient and prone to error. This is where the real challenge of namespace management begins. It’s not about knowing the commands; it’s about applying them consistently and securely across your entire infrastructure. This guide covers the essential kubectl techniques while also addressing the critical strategies for automating namespace governance in a large-scale, multi-cluster environment.
Unified Cloud Orchestration for Kubernetes
Manage Kubernetes at scale through a single, enterprise-ready platform.
Key takeaways:
- Enforce security with namespace-level controls: Treat namespaces as your primary isolation boundary by consistently applying Role-Based Access Control (RBAC), ResourceQuotas, and NetworkPolicies to prevent resource conflicts and unauthorized access between teams.
- Use `kubectl` for inspection, not fleet operations: While essential for debugging and ad-hoc tasks, relying on manual `kubectl` commands for fleet-wide namespace management is not scalable and introduces significant operational risk and configuration drift.
- Automate the namespace lifecycle with GitOps: Implement a declarative, Git-based workflow to manage namespace creation, configuration, and deletion. This creates an auditable and repeatable process that platforms like Plural can extend with developer self-service capabilities.
What Is a Kubernetes Namespace?
A Kubernetes namespace partitions a single cluster into multiple logical scopes. It provides name isolation, allowing identical resource names (for example, Pods or Services) to coexist as long as they live in different namespaces. For platform and DevOps teams, namespaces are foundational: they enable multi-tenancy, structure resources by team or workload, and serve as the primary boundary for security and governance as clusters scale.
Why Namespaces Matter for Cluster Organization
As multiple teams share a cluster, unmanaged resource sprawl and naming conflicts quickly become operational friction. Namespaces introduce clear logical boundaries so teams can deploy independently without collisions. More importantly, they enable enforcement. ResourceQuotas cap CPU, memory, and storage per namespace to prevent noisy-neighbor failures, while RBAC rules scope permissions to specific namespaces so teams only access what they own. In practice, namespaces are the control plane for fair usage, security isolation, and predictable operations.
Default Namespaces You Should Understand
Every cluster starts with system-defined namespaces that serve specific purposes:
- default: The catch-all namespace for resources without an explicit namespace. Treat it as a placeholder, not a deployment target; production workloads should live in dedicated namespaces.
- kube-system: Reserved for control plane and system components. Modifying resources here risks cluster stability.
- kube-public: Globally readable, including by unauthenticated users. Typically used for non-sensitive cluster metadata.
- kube-node-lease: Stores Lease objects for node heartbeats, enabling efficient node liveness tracking.
Understanding and respecting these namespaces is essential before layering automation, policy enforcement, and governance at scale.
How to Use kubectl get namespaces
kubectl get namespaces is the entry point for understanding how a cluster is logically partitioned. It gives a quick inventory of namespaces, which is essential when debugging workloads, validating isolation boundaries, or performing routine administration. On a single cluster, this workflow is straightforward. At fleet scale, however, repeatedly switching contexts and re-running the same commands becomes slow and error-prone.
This is where a centralized control plane matters. Plural’s embedded Kubernetes dashboard aggregates namespaces across all connected clusters, providing a unified view of their state and configuration without juggling kubeconfigs or issuing repetitive CLI commands. This shifts namespace visibility from an imperative, cluster-by-cluster task to a fleet-level operation.
Basic Command Syntax and Filtering
To list all namespaces in the current context: `kubectl get namespaces`
To fetch specific namespaces by name: `kubectl get namespaces <namespace-1> <namespace-2>`
Label-based filtering is supported and is critical in well-organized clusters: `kubectl get namespaces -l team=frontend`
This pattern is consistent across kubectl and forms the basis for querying most Kubernetes resources.
Interpreting the Output
By default, the command returns a table with NAME, STATUS, and AGE. Most namespaces are Active; Terminating indicates a deletion in progress. AGE reflects how long the namespace has existed.
For automation or deeper inspection, use structured output: `kubectl get namespaces -o yaml` or `kubectl get namespaces -o json`
This exposes labels, annotations, and other metadata commonly used by policy engines and scripts. For verbose, human-oriented diagnostics—such as quotas and recent events—use: `kubectl describe namespace <namespace-name>`
Together, these commands cover day-to-day inspection, while platforms like Plural handle fleet-wide visibility and governance at scale.
What Information Does kubectl get namespaces Provide?
kubectl get namespaces offers a fast, high-level view of a cluster’s logical layout. While the default output is intentionally minimal, it surfaces the core signals engineers rely on for routine checks, initial debugging, and cluster hygiene. In practice, this command is often the first step before deeper inspection, policy validation, or automation workflows.
Understanding NAME, STATUS, and AGE
The default table includes three columns:
- NAME: The unique identifier of each namespace.
- STATUS: The namespace lifecycle phase. `Active` indicates normal operation, while `Terminating` signals that deletion is in progress and resources are being garbage-collected.
- AGE: How long the namespace has existed.
This data is useful for quick assessments, such as spotting namespaces stuck in termination or identifying long-lived development namespaces that may need cleanup.
Inspecting Metadata and Labels
Namespaces typically carry richer metadata than the default view shows. To surface labels inline, use: `kubectl get namespaces --show-labels`
Labels are central to organization and automation, commonly encoding ownership (team=infra) or environment (env=prod). They enable targeted queries, policy enforcement, and scripted workflows.
For programmatic access, structured output is more effective. For example: `kubectl get namespaces -o=jsonpath="{.items[*].metadata.labels}"`
This extracts label data directly for use in scripts or higher-level tooling, forming the basis for label-driven governance and fleet-wide automation.
How to Filter and Format Namespace Output
The default table from kubectl get namespaces is sufficient for quick inspection, but real-world operations require more control. Filtering and formatting allow you to target specific namespaces, feed structured data into scripts, and produce deterministic output for automation. While a centralized UI can accelerate ad-hoc exploration—Plural’s embedded Kubernetes dashboard supports fleet-wide filtering and sorting without CLI friction—these kubectl techniques remain essential for repeatable, command-line–driven workflows.
Format Output with JSON, YAML, and Wide Views
Use the -o (or --output) flag to change how results are rendered. JSON and YAML are machine-readable and foundational for automation and scripting. The wide format expands the table view to include additional columns, such as labels, for quick terminal-based inspection.
kubectl get namespaces -o yaml
kubectl get namespaces -o json
kubectl get namespaces -o wide
Filter with Labels and Field Selectors
Large clusters demand selective queries. Label selectors (-l) filter namespaces by metadata you control, such as ownership or environment: `kubectl get namespaces -l team=backend`
Field selectors filter on resource fields exposed by the API. For example, to list only active namespaces: `kubectl get namespaces --field-selector=status.phase=Active`
Combining label and field selectors (for example, `kubectl get namespaces -l env=prod --field-selector=status.phase=Active`) yields precise, API-efficient queries that scale well in automation.
Sort and Customize Results
To improve readability, sort output using --sort-by with a JSONPath expression: `kubectl get namespaces --sort-by=.metadata.name`
For tailored views, define explicit columns with -o custom-columns: `kubectl get namespaces -o custom-columns='NAME:.metadata.name,STATUS:.status.phase'`
Custom columns can also be stored in a template file and reused with -o custom-columns-file, which is useful for standardizing output across scripts and teams.
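As a sketch of that file format (the filename ns-columns.txt is illustrative): the first line holds the column headers and the second line the corresponding JSONPath expressions, invoked with `kubectl get namespaces -o custom-columns-file=ns-columns.txt`.

```text
NAME            STATUS
metadata.name   status.phase
```

Checking a file like this into a shared repository keeps terminal output identical across scripts and team members.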
How to Manage Namespaces with kubectl
kubectl is the primary interface for interacting with Kubernetes, and namespace management is a core competency for platform engineers. Namespaces define logical boundaries for teams, applications, and environments, and kubectl provides the primitives to create, inspect, and remove them.
While direct kubectl usage is ideal for learning, debugging, and one-off operations, it does not scale operationally. Running imperative commands across many clusters increases error rates and creates drift. In production environments, namespaces are typically managed declaratively through GitOps workflows, where changes are reviewed, versioned, and applied consistently. Plural builds on this model by acting as a unified control plane, coordinating configuration and deployments across an entire fleet. The commands below are best viewed as low-level building blocks that underpin those higher-level workflows.
Create and Delete Namespaces
Creating a namespace provisions a new logical partition in the cluster and is usually the first step when onboarding a team or application: `kubectl create namespace my-new-space`
Deleting a namespace is a destructive operation. It removes the namespace and all contained resources, including workloads and persistent volume claims: `kubectl delete namespace my-new-space`
This action is irreversible and should be tightly controlled in production, ideally gated behind review and automation rather than executed manually.
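For the declarative alternative referenced throughout this guide, the same namespace can be expressed as a manifest and applied with `kubectl apply -f` (the name and labels here are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-new-space
  labels:
    team: backend   # illustrative ownership label
    env: dev        # illustrative environment label
```

Checked into Git, this file becomes the reviewable source of truth for the namespace's existence and metadata, which is the foundation the GitOps workflows below build on.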
Set a Default Namespace Context
When working repeatedly in the same namespace, constantly passing -n or --namespace adds friction and increases the risk of mistakes. Each kubectl context includes a default namespace that can be updated: `kubectl config set-context --current --namespace=my-new-space`
Once set, all subsequent kubectl commands in that context default to this namespace, improving efficiency and reducing accidental cross-namespace operations.
View Namespaces and Inspect Their Resources
To list all namespaces in the cluster: `kubectl get namespaces`
This provides a high-level inventory with name, status, and age. To inspect resources inside a specific namespace, scope your queries explicitly: `kubectl get pods -n my-new-space`
For a deeper view of a namespace’s configuration—including labels, annotations, quotas, and recent events—use: `kubectl describe namespace my-new-space`
These commands form the basis of namespace-level visibility. At scale, platforms like Plural surface the same information through a centralized dashboard, removing the need for constant context switching while preserving kubectl as the underlying execution model.
Common Namespace Management Challenges to Avoid
Effective namespace management underpins cluster stability, security, and operational efficiency. As environments scale, issues around resource contention, access control, and destructive operations become systemic risks rather than edge cases. The consistent pattern behind these failures is reliance on manual, ad-hoc intervention instead of standardized, automated controls. Platform teams should treat namespaces as governed infrastructure objects, not disposable CLI artifacts.
Resource Quota Conflicts and Noisy Neighbors
Without enforced limits, a single workload can monopolize CPU or memory and degrade unrelated services. Kubernetes addresses this with ResourceQuota, which caps aggregate resource consumption per namespace and enforces fair sharing.
The challenge is consistency. Hand-crafted quotas across many namespaces and clusters quickly drift. Defining quotas declaratively and applying them through GitOps keeps limits versioned, auditable, and uniform. This reduces operational overhead and makes policy changes predictable as demand evolves.
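A minimal ResourceQuota manifest, with illustrative names and limits, might look like this:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: my-new-space   # the namespace this quota governs
spec:
  hard:
    requests.cpu: "4"          # total CPU requested across all pods
    requests.memory: 8Gi       # total memory requested across all pods
    limits.cpu: "8"            # total CPU limits across all pods
    limits.memory: 16Gi        # total memory limits across all pods
    persistentvolumeclaims: "10"  # cap on PVC count
```

Stored in Git and applied by a GitOps controller, one template like this can be stamped onto every namespace, so limits stay versioned and uniform.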
RBAC Sprawl in Multi-team Clusters
Namespaces are the primary boundary for Role-Based Access Control. Properly scoped Role and RoleBinding objects ensure teams can manage their own workloads without impacting others.
At scale, manually maintaining RBAC per namespace is brittle. Inconsistent bindings create either privilege gaps or developer friction. Centralized, policy-driven RBAC—defined once and synced everywhere—eliminates this class of error. Platforms such as Plural CD support fleet-wide RBAC synchronization, ensuring every cluster adheres to the same security model.
Unsafe Namespace Deletion
Deleting a namespace triggers a cascading, irreversible removal of all contained resources, including persistent volume claims. An accidental deletion in production can cause immediate downtime and permanent data loss.
This operation should never depend on a direct kubectl delete namespace invocation. Namespace lifecycle events—especially deletions—should flow through an auditable GitOps pipeline with mandatory review. Requiring pull requests and approvals introduces a critical safety check, preventing human error from becoming an outage.
By standardizing quotas, centralizing RBAC, and gating destructive actions behind automation, teams can avoid the most common namespace failures and maintain control as their Kubernetes footprint grows.
How to Troubleshoot Common Namespace Issues
Namespaces simplify cluster organization but introduce their own failure modes at scale. The most common operational blockers are namespaces stuck in a Terminating state and access failures caused by misconfigured RBAC. Both issues can stall deployments, block CI/CD, and leave clusters cluttered with partially deleted state. These problems are well understood and solvable with the right inspection and remediation steps. At fleet scale, centralized visibility—such as the Kubernetes dashboard in Plural—can significantly reduce time to resolution by surfacing resource state and policy drift across clusters.
Fix a Namespace Stuck in Terminating
A namespace that never finishes deleting is almost always blocked by a finalizer. Finalizers instruct Kubernetes to wait for a controller to clean up dependent resources before removal. If that controller is gone or stuck, deletion halts indefinitely.
Start by inspecting the namespace to confirm the presence of finalizers, for example with `kubectl get namespace <namespace-name> -o jsonpath='{.metadata.finalizers}'`. To unblock deletion, remove them explicitly: `kubectl patch namespace <namespace-name> -p '{"metadata":{"finalizers":null}}' --type=merge`
This forces the namespace to delete. Use it cautiously: bypassing finalizers can leave orphaned resources if cleanup never completed. In production, this should be a last resort and ideally audited through an automated workflow.
Resolve RBAC Permission Errors
RBAC issues typically surface as Error from server (Forbidden) and indicate that a user or service account lacks the required permissions in a namespace. Begin by validating access explicitly: `kubectl auth can-i <verb> <resource> -n <namespace>`
If the action is denied, inspect the namespace’s RoleBindings to see which roles are applied and to whom. The fix usually involves creating or updating a RoleBinding to grant the missing permissions.
In multi-cluster environments, manually managing RBAC per namespace leads to drift and inconsistency. Defining RBAC declaratively and syncing it fleet-wide avoids both over-permissioning and accidental lockouts, ensuring predictable access control as teams and clusters grow.
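As an illustration of such a fix, here is a namespace-scoped Role granting read access to pods, plus a RoleBinding attaching it to a group (the role, binding, group, and namespace names are all hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: my-new-space
rules:
- apiGroups: [""]              # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: my-new-space
subjects:
- kind: Group
  name: backend-team           # group as reported by your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

After applying it, `kubectl auth can-i list pods -n my-new-space` should return yes for members of that group.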
Essential Security Controls for Namespaces
Namespaces provide logical separation, not security isolation. In multi-tenant clusters, they must be reinforced with explicit controls to prevent privilege escalation, resource starvation, and unintended network access. RBAC, resource quotas, and network policies collectively turn namespaces into enforceable security boundaries rather than organizational labels.
Enforce Isolation with RBAC
Role-Based Access Control defines who can perform which actions within a namespace. Properly scoped Role and RoleBinding objects implement least privilege, ensuring users and service accounts only access what they need. A common pattern is read-only access to production namespaces and full control in development namespaces.
At scale, RBAC must be centralized and declarative to avoid drift. Plural integrates with SSO providers and supports fleet-wide RBAC synchronization via Global Services, allowing a single policy definition to be applied consistently across clusters.
Apply Resource Quotas and Network Policies
ResourceQuota objects prevent noisy-neighbor failures by capping aggregate CPU and memory usage per namespace. Without quotas, a single misbehaving workload can degrade cluster-wide performance.
NetworkPolicy objects provide workload-level isolation. By default, pod-to-pod communication is unrestricted; network policies let you explicitly control ingress and egress. For example, you can allow frontend namespaces to reach backend services on specific ports while blocking all other traffic. This significantly reduces the blast radius of compromised or misconfigured workloads.
Secure Multi-tenant Namespaces by Default
True multi-tenancy emerges from combining namespaces with RBAC, quotas, and network policies. RBAC limits access, quotas enforce fair usage, and network policies restrict communication paths. Enforcing these controls manually does not scale. Platform teams should standardize secure namespace templates and apply them automatically. Plural’s self-service catalog supports this model by provisioning namespaces with predefined security controls, ensuring every new environment adheres to organizational standards from day one.
How to Automate Namespace Management at Scale
Manual namespace management with kubectl breaks down as cluster count grows. Imperative workflows create drift, slow onboarding, and introduce security gaps. At fleet scale, namespaces must be provisioned and governed declaratively, with consistent policies applied by default. The objective is a repeatable, auditable system that balances developer velocity with centralized control.
Automate Provisioning with GitOps
GitOps establishes Git as the single source of truth for cluster state. Namespace creation becomes a pull request that adds manifests to a repository; a controller reconciles that state into the cluster. This guarantees versioning, peer review, and auditability.
Critically, namespace manifests should be bundled with their guardrails—RBAC roles, ResourceQuota, and NetworkPolicy—so every namespace is fully configured on creation. This eliminates post-provisioning fixes and ensures consistent security and resource boundaries from day one.
Enable Developer Self-service with Plural
With GitOps in place, the next bottleneck is human mediation. Ticket-based provisioning does not scale. Plural’s Self-Service Catalog enables a controlled self-service model: platform teams define standardized namespace templates, and developers request them through a UI.
Plural automates PR creation with the correct manifests, preserving GitOps guarantees while removing manual toil. Platform teams retain review and policy enforcement, while developers get fast, predictable provisioning without privileged access to clusters.
Enforce Fleet-wide Consistency
Multi-cluster environments require uniform baselines. RBAC, security policies, and other critical configurations must be identical everywhere to avoid compliance and security gaps. This is best enforced declaratively and synchronized automatically.
Plural’s Global Services address this by allowing teams to define fleet-wide services—such as standard RBAC policies—once and replicate them across all clusters. New clusters inherit these baselines automatically, reducing operational overhead and ensuring consistent governance as the fleet grows.
Frequently Asked Questions
Why shouldn't I just use the default namespace for my projects? Using the default namespace for your applications is a common misstep when getting started. While it works for simple tests, it quickly leads to a cluttered and unmanageable cluster. Without the logical separation that namespaces provide, you lose the ability to apply specific access controls, resource limits, or network policies to different applications. This means you can't stop one team's application from consuming all the cluster's resources or prevent it from accessing another team's services. Creating dedicated namespaces for each project or team is a foundational best practice for maintaining a clean, secure, and well-organized cluster.
What's the real difference between using namespaces and just running separate clusters? Namespaces offer logical isolation within a single cluster, while separate clusters provide complete physical isolation. The choice depends on your needs for security and resource management. Namespaces are great for separating teams or environments that can safely share a control plane and node infrastructure, which is more cost-effective and reduces management overhead. Separate clusters are better when you need strict security boundaries, such as for PCI compliance, or when different workloads have vastly different performance requirements. Managing a fleet of separate clusters introduces its own complexity, which is why a unified control plane is essential for maintaining visibility and control.
Is it a good idea to let developers create their own namespaces? Empowering developers with self-service is key to moving faster, but unrestricted access to create namespaces can lead to chaos. The ideal approach is a controlled self-service model. Instead of letting developers run kubectl create commands directly, you can provide them with a standardized way to request a new namespace. Plural’s Self-Service Catalog enables this by allowing platform teams to define templates that come pre-configured with the correct RBAC policies, resource quotas, and network rules. Developers can then provision a new, compliant namespace through a simple UI, which triggers an automated PR workflow for the platform team to approve. This gives you the best of both worlds: developer autonomy and centralized governance.
How do I enforce consistent security policies across all my namespaces without manual work? Manually applying security configurations like RBAC rules or network policies to every namespace across a fleet of clusters is not scalable and is bound to create inconsistencies. The most effective strategy is to manage these policies as code in a central Git repository. By adopting a GitOps workflow, you establish a single source of truth for your configurations. Plural’s Global Services feature is built for this exact purpose. You can define a baseline set of RBAC policies once and configure a Global Service to automatically synchronize them across every cluster in your fleet, ensuring that all namespaces, new and old, adhere to your security standards.
My namespace is stuck deleting. Is it safe to just force-remove the finalizer? When a namespace is stuck in the Terminating state, it's because a resource within it has a finalizer that is preventing its deletion. While you can manually patch the namespace to remove the finalizer, you should treat this as a last resort. Doing so can leave orphaned resources behind, such as cloud load balancers or storage volumes, which can cause conflicts and incur costs. Before forcing the deletion, you should first investigate which resource is holding up the process and try to resolve the underlying issue with its controller. If you must proceed, do so with caution and be prepared to perform manual cleanup afterward.