OpenShift Kubernetes Tutorial: A Step-by-Step Guide

A common question in cloud-native systems is whether OpenShift is just another name for Kubernetes. The accurate answer is: OpenShift is Kubernetes, but not just Kubernetes.

OpenShift packages upstream Kubernetes, so workloads that run on Kubernetes are generally compatible. OpenShift is a curated, enterprise distribution that bundles opinionated defaults and integrated components across security, CI/CD, networking, and cluster operations.

A useful mental model is the difference between the Linux kernel and a production-ready distribution like Red Hat Enterprise Linux. Kubernetes provides the orchestration primitives; OpenShift delivers a cohesive platform with pre-integrated tooling and guardrails.

This OpenShift Kubernetes tutorial focuses on clarifying that boundary and giving you a practical understanding of how the platform behaves in real-world environments.

Key takeaways:

  • OpenShift extends Kubernetes with enterprise features: It packages core Kubernetes with a suite of pre-integrated tools for CI/CD, monitoring, and security, offering a complete Platform-as-a-Service that simplifies the setup of production-ready environments.
  • Developer tools and security are built-in, not bolted on: The platform includes a developer-friendly console, automated build processes with Source-to-Image, and strict default security policies. These features help teams accelerate application delivery while maintaining strong compliance and security standards from the start.
  • Managing OpenShift at scale requires a unified approach: While OpenShift is powerful for a single cluster, maintaining consistency across a fleet is challenging. A centralized platform like Plural is crucial for applying uniform GitOps workflows, RBAC policies, and observability across all your clusters from a single interface.

What Is OpenShift? A Kubernetes Distribution Explained

OpenShift is an enterprise container platform developed by Red Hat. It embeds Kubernetes for orchestration and layers on integrated services to deliver a cohesive Platform-as-a-Service. Rather than replacing Kubernetes, it packages upstream components with opinionated defaults for security, developer workflows, and operations.

Vanilla Kubernetes provides the core primitives, but teams must assemble networking, observability, an image registry, and CI/CD on their own. OpenShift pre-integrates these components to reduce setup overhead and standardize production environments. This consistency becomes valuable at scale, especially across multiple clusters. Managing that fleet still requires coordination; platforms like Plural add a unified control plane to enforce GitOps workflows and consistent infrastructure across clusters.

How OpenShift extends Kubernetes

OpenShift tracks upstream Kubernetes closely, so workload portability is preserved. On top, it introduces curated tooling and stricter defaults focused on enterprise security and developer productivity. The result is a controlled environment that reduces platform drift and operational overhead compared to assembling a bespoke Kubernetes stack.

Key architectural differences

A notable distinction is Red Hat Enterprise Linux CoreOS (RHCOS), an immutable, container-optimized OS used for cluster nodes. This design simplifies patching and hardens the runtime. OpenShift also includes a built-in container registry, reducing the need to integrate external registries such as Docker Hub or Amazon Elastic Container Registry. This tight integration streamlines image build, storage, and deployment within the platform boundary.

Enterprise-grade features

OpenShift ships with developer tooling and a web console for deploying and operating applications. CI/CD is integrated via Tekton, enabling in-cluster pipelines. Security defaults are stricter than upstream Kubernetes, including non-root container execution and robust RBAC. These guardrails support compliance and reduce the need for custom hardening, making the platform production-ready out of the box.

Key Features and Benefits of OpenShift

Enhanced security and compliance

OpenShift applies a security-first model with stricter defaults than upstream Kubernetes. It enforces non-root execution and constrains pod privileges using Security Context Constraints (SCCs), giving fine-grained control over capabilities, volumes, and user IDs. Combined with a comprehensive RBAC model, this reduces the need for custom hardening and supports compliance requirements out of the box.
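
As an illustrative sketch, a pod spec written to satisfy the default restricted SCC might look like this (the name and image are placeholders):

```yaml
# Illustrative pod spec that passes OpenShift's restricted SCC:
# no root user, no privilege escalation, all capabilities dropped.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app                          # placeholder name
spec:
  containers:
    - name: web
      image: quay.io/example/web:latest   # placeholder image
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        seccompProfile:
          type: RuntimeDefault
        capabilities:
          drop: ["ALL"]
```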

At scale, enforcing these policies consistently across clusters becomes non-trivial. Plural provides centralized RBAC and policy management, ensuring uniform security posture across a fleet without duplicating configuration.

Developer-centric tools and console

OpenShift prioritizes developer ergonomics with an integrated web console that abstracts many low-level Kubernetes workflows. It exposes application topology, build pipelines, and service relationships in a visual interface, reducing reliance on CLI-heavy operations. This lowers the barrier to entry for developers while preserving access to underlying Kubernetes primitives when needed.

For multi-cluster environments, Plural extends this model with a unified dashboard, enabling teams to observe and operate workloads across clusters from a single control plane.

Integrated CI/CD pipelines

OpenShift includes native CI/CD via OpenShift Pipelines, built on Tekton. Pipelines are defined as Kubernetes resources, allowing build, test, and deploy stages to execute directly within the cluster. This removes the need for external CI/CD infrastructure and aligns delivery workflows with Kubernetes-native patterns.

These pipelines integrate naturally with GitOps practices. Plural can orchestrate and scale these workflows across clusters, ensuring consistent deployment behavior and version control across environments.
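
A minimal sketch of a Tekton pipeline defined as a Kubernetes resource; the parameter and task names are illustrative, and it assumes a git-clone Task is installed in the cluster:

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
    - name: git-url
      type: string
  workspaces:
    - name: source
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone            # assumes this Task exists in the cluster
      workspaces:
        - name: output
          workspace: source
      params:
        - name: url
          value: $(params.git-url)
```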

Built-in monitoring and logging

OpenShift ships with a pre-configured observability stack. Metrics are handled by Prometheus, alerting via Alertmanager, and visualization through Grafana. Logging is typically implemented with the EFK stack—Elasticsearch, Fluentd, and Kibana—providing centralized log aggregation and querying.

Because these components are integrated and pre-wired, teams avoid assembling and maintaining a bespoke observability pipeline, while still retaining flexibility to extend or replace components as needed.

How to Set Up an OpenShift Development Environment

A usable development environment is the baseline for working with OpenShift. The setup path depends on whether you need a lightweight sandbox or a production-like shared cluster. Common options include local single-node clusters, managed cloud services, or self-managed installations. Each involves trade-offs across resource usage, cost, and operational complexity.

Local environments optimize for fast iteration, while cloud or self-managed clusters provide consistency for team workflows and integration testing. As environments scale across clusters, enforcing consistent deployment and infrastructure patterns becomes harder—this is where Plural helps standardize GitOps workflows across environments.

Set up a local environment with CodeReady Containers

For local development, Red Hat CodeReady Containers (CRC), since rebranded as Red Hat OpenShift Local, provides a single-node OpenShift cluster running on your machine. It’s the fastest way to experiment with platform features without provisioning external infrastructure.

Setup is straightforward: download the CRC binary, run initialization, and start the VM. The cluster exposes both the API and web console locally. This environment is intentionally constrained—single-node, limited resources—and should be treated strictly as a dev/test sandbox.
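
The basic flow looks like this (exact flags can vary between CRC releases):

```shell
crc setup                   # prepares the host: virtualization, networking
crc start                   # provisions the single-node cluster in a VM
crc console --credentials   # prints the kubeadmin/developer login details
eval $(crc oc-env)          # puts the bundled oc client on your PATH
```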

Explore cloud deployment options

For team environments, OpenShift can be deployed via managed services or self-managed installations:

  • Managed services: Offerings like Red Hat OpenShift Service on AWS and Azure Red Hat OpenShift offload control plane and infrastructure management to the provider.
  • Self-managed: Install OpenShift on your own infrastructure (on-prem or cloud). This gives full control over networking, upgrades, and security, but requires Kubernetes and platform ops expertise.

These models let you align cluster ownership with your team’s operational maturity and compliance needs.

Configure and verify your setup

After provisioning, validate cluster access via the OpenShift web console. The console exposes both administrator and developer views, covering resource management, application topology, and cluster health.

At this stage, verify:

  • API server accessibility
  • Node readiness and cluster operators health
  • Ability to create projects and deploy a sample workload

This ensures the control plane and core services are functioning before layering additional tooling.
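
A quick smoke test along those lines, sketched with common oc commands (the sample image is an assumption; any public image works):

```shell
oc whoami                  # confirms you are authenticated
oc get nodes               # all nodes should report Ready
oc get clusteroperators    # AVAILABLE should be True across the board
oc new-project smoke-test
oc new-app registry.access.redhat.com/ubi9/httpd-24   # sample workload
oc get pods -n smoke-test
```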

Install essential CLI tools

The CLI is required for automation and day-to-day operations. OpenShift uses the oc client, which extends kubectl with OpenShift-specific resources such as builds and routes.

Install oc from the cluster’s console to match the server version, then authenticate via:

  • token-based login
  • kubeconfig context

From there, you can script deployments, inspect resources, and integrate with CI/CD pipelines.
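
For example, a token-based login (the server URL and token are placeholders):

```shell
oc login --token=<token> --server=https://api.cluster.example.com:6443
oc whoami --show-server    # verify which cluster the current context targets
```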

What Are OpenShift's Core Components?

OpenShift extends Kubernetes with higher-level resources that streamline developer workflows and platform operations. These abstractions don’t replace Kubernetes primitives—they compose on top of them to reduce glue code across build, deploy, and exposure pipelines. Understanding how these map to native Kubernetes objects is key to operating the platform effectively.

Projects and namespaces

In OpenShift, a Project wraps a Kubernetes Namespace with additional defaults and policy bindings. Creating a Project implicitly provisions the Namespace along with preconfigured RBAC, service accounts, and quota scaffolding.

This model simplifies multi-tenancy: teams operate within isolated Projects while administrators enforce boundaries centrally. Compared to raw Namespaces, Projects reduce the need for manual bootstrap of access controls and resource governance.
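
In practice, project creation is a single command; the backing Namespace and default bindings are provisioned implicitly (the project name is illustrative):

```shell
oc new-project team-a --description="Team A workloads"
oc get namespace team-a            # Namespace created automatically
oc get rolebindings -n team-a      # default RBAC bindings for the project
```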

Routes vs. Ingress

Kubernetes exposes services externally via Ingress resources that depend on an external controller. OpenShift introduces Routes as a first-class API backed by a built-in router.

Routes provide integrated capabilities such as TLS termination (edge, passthrough, re-encrypt), traffic splitting, and automatic host management. While Ingress is still supported, Routes remove the need to install and operate a separate ingress controller for common use cases, making service exposure more turnkey.
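
A minimal Route manifest as a sketch; the hostname and service name are illustrative, and omitting spec.host lets the router generate one:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: web
spec:
  host: web.apps.example.com
  to:
    kind: Service
    name: web
  tls:
    termination: edge    # TLS ends at the router; plain HTTP to the pod
```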

ImageStreams and BuildConfigs

OpenShift formalizes build pipelines through ImageStreams and BuildConfigs.

  • ImageStreams act as logical references to container images, tracking tags across internal or external registries.
  • BuildConfigs define build pipelines, including source repositories, build strategies (e.g., Source-to-Image), and output targets.

Together, they enable event-driven workflows: a Git commit triggers a build, updates an ImageStream, and can automatically roll out a new deployment. This reduces reliance on external CI glue for basic build-and-deploy loops.
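
A hedged sketch of the two resources wired together; the repository, builder tag, and names are assumptions to adapt per cluster:

```yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: web
---
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: web
spec:
  source:
    git:
      uri: https://github.com/example/web.git   # illustrative repository
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        namespace: openshift
        name: nodejs:18-ubi8      # builder image; tag depends on your cluster
  output:
    to:
      kind: ImageStreamTag
      name: web:latest
  triggers:
    - type: ConfigChange
    - type: ImageChange           # redeploy when the image stream updates
```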

DeploymentConfigs vs. Deployments

Application rollout is handled either by Kubernetes Deployments or OpenShift DeploymentConfigs.

Deployments are the standard Kubernetes resource for declarative updates and are now the recommended default. DeploymentConfigs, an earlier OpenShift abstraction, add features like image-change triggers and lifecycle hooks tied to ImageStreams.

DeploymentConfigs are deprecated in recent OpenShift releases, so aligning with Kubernetes Deployments improves portability and reduces platform-specific coupling.

How to Deploy Your First Application on OpenShift

OpenShift supports multiple deployment paths, from UI-driven workflows to fully automated pipelines. Common entry points include the web console, the oc CLI, and Source-to-Image (S2I). Each abstracts underlying Kubernetes resources, automatically provisioning Deployments, Pods, Services, and external access.

This abstraction simplifies single-cluster workflows, but multi-cluster environments require consistent deployment orchestration. Plural standardizes GitOps-driven delivery across clusters, ensuring repeatability and control as infrastructure scales.

Deploy using the web console

The OpenShift web console provides Developer and Administrator perspectives, with the Developer view optimized for application lifecycle tasks. From here, you can deploy workloads from a container image, Git repository, or Dockerfile.

OpenShift scaffolds required resources automatically:

  • Deployment for pod lifecycle
  • Service for internal networking
  • Route for external exposure

This path is efficient for onboarding and ad hoc deployments, avoiding direct YAML authoring while still producing standard Kubernetes objects.

Deploy with the oc CLI

The oc CLI extends kubectl with OpenShift-specific capabilities. It’s the primary interface for scripting, automation, and CI/CD integration.

A key command is oc new-app, which builds and deploys directly from source:

  • Detects application runtime (e.g., Node.js, Python)
  • Triggers image build using appropriate builder
  • Creates and deploys Kubernetes resources

This condenses build and deployment into a single operation, making it suitable for repeatable workflows and pipeline integration.
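
One possible invocation, using a public sample repository:

```shell
oc new-app https://github.com/sclorg/nodejs-ex.git --name=nodejs-demo
oc logs -f bc/nodejs-demo    # follow the triggered build
oc status                    # summarize the created resources
```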

Build with Source-to-Image (S2I)

Source-to-Image (S2I) is OpenShift’s native build mechanism for generating container images without a Dockerfile. It operates via BuildConfigs that define source, strategy, and output.

S2I workflow:

  • Combine application source with a builder image (runtime + tooling)
  • Execute build scripts inside the builder
  • Produce a runnable image and push to the internal registry

This approach standardizes builds and reduces developer overhead around containerization, especially for common language stacks.
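
A sketch of the equivalent CLI flow; the builder name and repository are illustrative:

```shell
oc new-build nodejs~https://github.com/sclorg/nodejs-ex.git --name=web
oc start-build web --follow    # rebuild on demand; pushes to the internal registry
```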

Expose services with routes

External access is handled through Routes, an OpenShift-native abstraction over service exposure. A Route maps a public hostname to a Service, routing traffic to backing pods.

Routes provide built-in capabilities:

  • TLS termination (edge, passthrough, re-encrypt)
  • Load balancing
  • Traffic splitting for progressive delivery

Unlike Kubernetes Ingress, Routes are fully integrated and do not require a separate controller, simplifying production-grade exposure and security.
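
Exposing a Service this way is typically a single command (service names are illustrative):

```shell
oc expose service web                         # Route with a generated hostname
oc get route web                              # shows the assigned host
oc create route edge web-tls --service=web    # edge-terminated TLS variant
```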

Best Practices for OpenShift Deployments

Deploying workloads on OpenShift is only the baseline. Maintaining reliability, security, and efficiency requires consistent operational patterns. These practices align with core Kubernetes principles while leveraging OpenShift’s stricter defaults and integrated tooling.

As environments scale across teams and clusters, standardization becomes critical. Plural helps enforce these practices through centralized policy, GitOps workflows, and consistent configuration across fleets.

Implement security and RBAC

OpenShift enforces non-root execution and hardened defaults. Build on this by defining strict RBAC policies using the principle of least privilege. Limit access for users and service accounts to only required resources, and scope permissions at the project level wherever possible.

In multi-cluster setups, Plural can synchronize RBAC policies globally, reducing drift and simplifying audits.

Manage resources and quotas

Unbounded resource usage leads to contention and instability. Define CPU and memory requests and limits for every workload:

  • Requests guarantee scheduling and baseline performance
  • Limits cap maximum consumption

Use ResourceQuotas and LimitRanges at the project level to enforce constraints across teams. This ensures fair allocation and prevents noisy-neighbor issues in shared clusters.
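
As a sketch, project-level constraints might look like this (the numbers are placeholders to tune per team):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
spec:
  limits:
    - type: Container
      defaultRequest:        # applied when a workload omits requests
        cpu: 100m
        memory: 128Mi
      default:               # applied when a workload omits limits
        cpu: 500m
        memory: 512Mi
```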

Handle configuration with ConfigMaps and Secrets

Separate configuration from container images. Use:

  • ConfigMaps for non-sensitive data (environment variables, config files)
  • Secrets for sensitive data (credentials, tokens, certificates)

This improves portability and security. Managing these resources through GitOps—such as with Plural—ensures version control, reviewability, and consistent rollout across environments.
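
For example (keys and values are illustrative; in a GitOps repository the Secret would normally be encrypted or sourced from a secret manager rather than committed in plain text):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info
  FEATURE_FLAGS: "beta-ui=false"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
type: Opaque
stringData:
  DB_PASSWORD: change-me    # placeholder only
```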

Set up health checks and readiness probes

Resilience depends on properly configured probes:

  • Readiness probes gate traffic until the application is ready
  • Liveness probes detect failure and trigger restarts

These signals enable safe rollouts, automated recovery, and zero-downtime updates. Without them, deployments risk serving traffic to unhealthy instances or failing silently under load.
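
A container fragment with both probes wired up; the paths, port, and timings are assumptions to adapt per service:

```yaml
containers:
  - name: web
    image: quay.io/example/web:latest   # placeholder image
    readinessProbe:                     # gate traffic until ready
      httpGet:
        path: /healthz/ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:                      # restart on sustained failure
      httpGet:
        path: /healthz/live
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
```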

How to Troubleshoot Common Deployment Issues

Even on a hardened platform like OpenShift, deployments can fail due to issues in image access, configuration, permissions, or networking. Effective debugging requires a systematic approach that correlates pod state, events, and logs.

While CLI workflows are essential, centralized visibility reduces mean time to resolution. Plural aggregates logs, events, and resource state across clusters, making it easier to trace failures from commit to runtime.

Solve image pull and registry auth failures

Errors like ImagePullBackOff or ErrImagePull indicate that Kubernetes cannot fetch the container image.

Typical root causes:

  • Incorrect image name or tag
  • Missing or invalid registry credentials
  • Network restrictions to the registry

Debug flow:

  • Verify image reference in manifests
  • Confirm registry accessibility
  • Check ImagePullSecrets and ensure they are attached to the service account or pod spec

These failures occur before container startup, so focus on pod events (kubectl describe pod) rather than logs.
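
A typical investigation, sketched; the pod, registry, and credential names are placeholders:

```shell
oc describe pod <pod>                   # the Events section shows the pull error
oc create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password>
oc secrets link default regcred --for=pull   # attach to the pulling service account
```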

Validate configurations and YAML

Kubernetes manifests are strict; minor syntax or schema errors can block deployments.

Common issues:

  • Invalid indentation or structure
  • Incorrect API versions or resource kinds
  • Broken references (e.g., missing ConfigMaps or Secrets)

Use client-side validation before applying:

  • kubectl apply --dry-run=client -f <file>.yaml
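
Server-side validation goes a step further, exercising schema and admission checks without persisting anything (the filename is a placeholder):

```shell
oc apply --dry-run=server -f deploy.yaml   # validates against the live API schema
oc explain deployment.spec.strategy        # field-level documentation for a resource
```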

In GitOps workflows, Plural adds validation gates through pull requests, enabling automated checks and peer review before changes reach the cluster.

Debug permission and RBAC issues

If pods start but fail to interact with cluster resources, RBAC is a likely cause. Each pod runs under a service account, and insufficient permissions will block operations like reading Secrets or querying the API.

Debug flow:

  • Identify the service account used by the pod
  • Inspect associated Roles/ClusterRoles and bindings
  • Validate required verbs (get, list, watch, etc.) on target resources

Fleet-wide RBAC drift is a common issue. Plural enables centralized RBAC definitions, ensuring consistent permissions across clusters.
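
The same flow sketched with oc; the project, pod, and service account names are placeholders:

```shell
oc get pod <pod> -o jsonpath='{.spec.serviceAccountName}'
oc auth can-i get secrets \
  --as=system:serviceaccount:<project>:<service-account>
oc describe rolebinding -n <project>
```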

Fix network connectivity problems

Network failures can affect image pulls, service-to-service communication, or external API access.

Common causes:

  • Restrictive NetworkPolicies
  • DNS resolution failures
  • Firewall or egress restrictions

Debug approach:

  • Inspect NetworkPolicies in the namespace
  • Exec into a pod (kubectl exec) and test connectivity using curl or DNS lookups
  • Verify service endpoints and cluster DNS behavior
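
A few representative checks; the service and pod names are placeholders, and the in-pod commands assume the image ships curl and DNS tools:

```shell
oc get networkpolicy                      # policies in the current project
oc exec <pod> -- nslookup web.<project>.svc.cluster.local
oc exec <pod> -- curl -sS -m 5 http://web:8080/
oc get endpoints web                      # confirms pods actually back the service
```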

Plural simplifies cross-cluster access with an agent-based model using secure egress connections, reducing reliance on complex ingress configurations and improving operability in restricted environments.

Explore Advanced OpenShift Features

As workloads scale, OpenShift extends beyond baseline Kubernetes with platform-level capabilities for lifecycle automation, multi-cluster control, and service-level networking. These features reduce operational overhead while improving consistency, resilience, and observability across environments.

For organizations operating heterogeneous fleets, Plural complements these capabilities with a unified control plane, enabling consistent workflows across OpenShift and other Kubernetes distributions.

Use Operators and Custom Resource Definitions

Operators encapsulate operational logic for complex, stateful systems. Built on Kubernetes primitives, they extend the API via Custom Resource Definitions (CRDs), allowing domain-specific resources (e.g., databases, message queues) to be managed declaratively.

Instead of scripting lifecycle tasks manually, Operators encode:

  • Provisioning and configuration
  • Upgrades and patching
  • Backup and recovery
  • Failover handling

OpenShift includes OperatorHub, a curated catalog of certified Operators. This enables teams to deploy production-grade services with built-in operational intelligence, reducing the need for custom runbooks.
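
Once an Operator and its CRD are installed, an instance is declared like any other object. The custom resource below is purely hypothetical, to show the shape of the pattern:

```yaml
apiVersion: example.com/v1   # hypothetical API group
kind: PostgresCluster        # hypothetical kind provided by a database Operator
metadata:
  name: orders-db
spec:
  replicas: 3
  version: "15"
  backup:
    schedule: "0 2 * * *"    # the Operator reconciles backups on this cron
```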

Manage multiple clusters

Enterprise deployments typically span multiple clusters across regions, environments, or teams. OpenShift provides fleet management tooling (e.g., via Platform Plus), but consistency becomes harder when environments include mixed Kubernetes distributions.

Plural addresses this by acting as a unified orchestrator:

  • Centralized application deployment via GitOps
  • Consistent policy enforcement across clusters
  • Secure, agent-based connectivity without complex ingress setup

This approach standardizes operations across OpenShift, EKS, GKE, and on-prem clusters, reducing fragmentation.

Integrate a service mesh

For microservices architectures, OpenShift integrates a service mesh based on Istio. The mesh introduces a dedicated control plane for managing service-to-service communication.

Core capabilities:

  • Traffic management (canary releases, A/B testing, retries)
  • Mutual TLS (mTLS) for service-to-service encryption
  • Fine-grained policy enforcement
  • Observability (metrics, logs, distributed tracing)

Because these features are implemented at the infrastructure layer (via sidecar proxies), they require minimal or no application code changes. This makes it possible to enforce security and gain deep visibility across distributed systems without modifying services directly.
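
As an illustration of traffic management, an Istio VirtualService splitting traffic between two versions of a service; the host and subset names are assumptions:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web
spec:
  hosts:
    - web
  http:
    - route:
        - destination:
            host: web
            subset: v1
          weight: 90
        - destination:
            host: web
            subset: v2      # canary receives 10% of traffic
          weight: 10
```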

Frequently Asked Questions

What's the main difference between OpenShift and standard Kubernetes? Think of standard Kubernetes as a powerful engine. To build a complete car, you need to add the chassis, wheels, and steering yourself. OpenShift is like a fully assembled car from a specific manufacturer; it bundles the Kubernetes engine with a pre-integrated set of tools for networking, monitoring, logging, and CI/CD. This creates a more opinionated, out-of-the-box Platform-as-a-Service (PaaS) that simplifies setup for development teams.

Can I use my existing Kubernetes manifests and tools with OpenShift? Yes, absolutely. Since OpenShift is a certified Kubernetes distribution, any application that runs on Kubernetes will run on OpenShift. Your standard Kubernetes manifests for resources like Deployments, Services, and ConfigMaps will work without changes. While OpenShift offers its own extended command-line tool (oc), the standard kubectl CLI is fully compatible.

What are Operators and why are they a key feature of OpenShift? Operators are a powerful way to automate the management of complex, stateful applications like databases or message queues. An Operator is essentially a piece of software that runs in your cluster and acts like an automated site reliability engineer for a specific application. It handles complex tasks like updates, backups, and failover automatically, which reduces manual effort and the risk of human error.

How does OpenShift's security model differ from standard Kubernetes? OpenShift is designed to be more secure by default. A key difference is its use of Security Context Constraints (SCCs), which, for example, prevent containers from running as the root user out of the box. In a standard Kubernetes environment, you would typically need to configure these security policies yourself. OpenShift's approach provides a more hardened platform from the start, which is a significant benefit for enterprise environments.

My organization uses OpenShift and other Kubernetes distributions. How can I manage them all consistently? Managing a mixed fleet of OpenShift, EKS, and GKE clusters often leads to inconsistent security policies, deployment workflows, and monitoring. A unified management platform like Plural solves this by providing a single pane of glass for your entire Kubernetes environment. Plural allows you to apply consistent GitOps workflows, RBAC policies, and observability standards across all clusters, regardless of the underlying distribution, ensuring uniform operations at scale.