Red Hat OpenShift

Red Hat® OpenShift® is a unified platform to build, modernize, and deploy applications at scale. Work smarter and faster with a complete set of services for bringing apps to market on your choice of infrastructure.


Platform overview

Developers and DevOps teams can quickly build, deploy, run, and manage applications anywhere, securely and at scale, with Red Hat OpenShift. Built on the Red Hat Enterprise Linux operating system and Kubernetes, Red Hat OpenShift is an enterprise-ready application platform with deployment and infrastructure options that support every application and environment.


Powered by open source technologies

Kubernetes is an open-source container orchestration engine for automating the deployment, scaling, and management of containerized applications. Red Hat OpenShift provides enterprise-ready enhancements to Kubernetes, including integrated Red Hat technologies that have been tested and certified. It also promotes an open-source development model where open collaboration fosters innovation and rapid improvements.


Built for developer workflow

Red Hat OpenShift ships with everything you need to manage the development lifecycle, including standardized workflows, support for multiple environments, continuous integration, release management, and more. For continuous integration/continuous delivery (CI/CD), the platform needs to enable automated processes and drive software through a path of building, testing, and deploying. With options like OpenShift Pipelines, integration with your existing tools and workflows, or a combination of both, it is easy to achieve the desired level of automation and to deploy to multiple infrastructures.


Continuous security

Control, defend, and extend the security of Kubernetes clusters and applications running on them, with continuous checks throughout the application life cycle and automated updates at every level of the stack. Red Hat OpenShift monitors security throughout the software supply chain to make applications more stable without reducing developer productivity.


Red Hat OpenShift in the public cloud

Run Red Hat OpenShift as a managed cloud service on leading public clouds. Options include Red Hat OpenShift Service on AWS, Microsoft Azure Red Hat OpenShift, and Red Hat OpenShift Dedicated on Google Cloud, so teams can consume a fully supported OpenShift experience without operating the underlying infrastructure.


Red Hat OpenShift Container Platform capabilities

OpenShift Kubernetes Engine

Red Hat OpenShift Kubernetes Engine delivers the foundational, security-focused capabilities of enterprise Kubernetes on Red Hat Enterprise Linux CoreOS to run containers in hybrid cloud environments. OpenShift Kubernetes Engine and OpenShift Container Platform are built on the same enterprise Kubernetes core platform and contain key Linux, container runtime, networking, management, and security capabilities. OpenShift Kubernetes Engine is ideal for those who prefer to use their existing infrastructure and developer tool investments.

Enterprise Kubernetes

Red Hat OpenShift delivers a modern, scalable approach to securing the entire application platform stack, from the operating system to containers to applications running in containers.

kubectl and OpenShift command-line interface (CLI)

Use kubectl, the native Kubernetes command-line interface (CLI), or the OpenShift CLI (oc) to build, deploy, and manage applications, or even the OpenShift cluster itself.
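As a minimal sketch of this workflow, the Deployment manifest below can be applied with either CLI; the application name, labels, and image reference are illustrative placeholders:

```yaml
# A minimal Deployment manifest; all names and the image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app              # hypothetical application name
  labels:
    app: hello-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: quay.io/example/hello-app:latest   # placeholder image
        ports:
        - containerPort: 8080
```

Apply it with `oc apply -f deployment.yaml` or, equivalently, `kubectl apply -f deployment.yaml`; both clients talk to the same Kubernetes API.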

Administrator web console

Use the browser-based web console to administer, visualize, browse, and manage OpenShift resources.

Red Hat OpenShift Virtualization

Red Hat OpenShift Virtualization lets you run and manage virtual machine workloads alongside container workloads. OpenShift Virtualization combines two technologies into a single management platform. This way, organizations can take advantage of the simplicity and speed of containers and Kubernetes, while still benefiting from the applications and services that have been architected for virtual machines.


Operator Lifecycle Manager (OLM)

Operator Lifecycle Manager (OLM) is a part of Operator Framework, an open source toolkit designed to manage Operators in an effective, automated, and scalable way. OLM helps developers install, update, and manage the lifecycle of Kubernetes native applications (Operators) and associated services running across their OpenShift Container Platform clusters.

Kubernetes cluster services

A key component of OpenShift Kubernetes Engine is the set of core cluster services that automate installation, upgrades, and life-cycle management of the container application environment without downtime.

Installing and Updating Clusters

The OpenShift installer is a simple, flexible, and mature command-line tool that can deploy an OpenShift Container Platform cluster on any infrastructure of your choice. OpenShift Container Platform also provides over-the-air and disconnected cluster updates, including updates to machine configurations and operating systems such as Red Hat Enterprise Linux CoreOS. With OpenShift Container Platform, administrators can perform cluster updates with a single operation, either via the web console or the OpenShift CLI, and are notified when an update is available or completed.

Monitoring

OpenShift Container Platform includes a preinstalled, preconfigured, and self-updating monitoring stack that provides monitoring for core platform components. Several pre-built monitoring dashboards and sets of alerts notify cluster administrators about cluster health and help troubleshoot issues quickly. From the OpenShift web console, admins can view and manage metrics and alerts for the cluster, and can enable monitoring for user-defined projects.

Networking

Red Hat OpenShift Networking is an ecosystem of features, plugins, and advanced networking capabilities that extend Kubernetes networking with the features your cluster needs to manage its network traffic for one or multiple hybrid clusters. As the default Container Network Interface (CNI) plugin that provides Kubernetes networking for OpenShift clusters, OVN-Kubernetes (Open Virtual Network) grows with your application deployments, enables enterprise-grade zero-trust security features, and can have its functionality extended through combinations of additional certified OpenShift CNI plugins.

Storage

OpenShift Container Platform supports multiple types of storage, for both on-premises and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use persistent volume claims (PVCs) to request PV resources without having specific knowledge of the underlying storage infrastructure.
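For example, a developer can request storage with a PersistentVolumeClaim like the sketch below; the claim name and storage class are placeholders, and the actual class names available depend on the cluster:

```yaml
# A PVC requesting storage without referencing any specific backend;
# the storage class name is a placeholder chosen by the cluster administrator.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce            # mounted read-write by a single node
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard   # placeholder; omit to use the cluster default
```

Once bound, the claim can be mounted into a pod by name, with no knowledge of the underlying storage backend.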

Container Registry

As a part of cluster services, OpenShift provides a built-in container image registry, an out-of-the-box solution for developers to store and manage container images that run their workloads. This internal registry can be scaled up or down like any other cluster workload without infrastructure provisioning. OpenShift registry is also integrated into the cluster's authentication and authorization system, enabling developers to have fine-grained control over container images.


Authentication and Authorization

OpenShift Container Platform includes built-in central authentication and authorization services. The authentication layer identifies the user associated with requests to the API, while the authorization layer then uses information about the requesting user to determine whether the request is allowed. Administrators can define permissions and assign them to users using RBAC objects, roles, and bindings. Administrators can also control access through namespaces, projects, and security context constraints (SCCs).
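A minimal sketch of RBAC in practice, assuming a hypothetical project `my-project` and user `jane`: a namespaced Role grants read-only access to pods, and a RoleBinding assigns it to the user:

```yaml
# Role and RoleBinding sketch; the namespace and user are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-project        # placeholder project/namespace
  name: pod-reader
rules:
- apiGroups: [""]              # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: my-project
subjects:
- kind: User
  name: jane                   # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```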

App Management using Helm

Operators are among the most important components of OpenShift Container Platform and are the preferred method of packaging, deploying, and managing services on OpenShift. As a developer, you can install the Operator SDK CLI to create Go-, Ansible-, or Helm-based Operators. You can use the OpenShift web console to select and install a chart from the Helm charts listed in the Developer Catalog, as well as add custom Helm chart repositories. The Helm CLI is integrated with the OpenShift web terminal, making it easy to visualize, browse, and manage projects.
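As an illustration, a custom chart repository can be added to the Developer Catalog with a HelmChartRepository resource; the repository name and URL in this sketch are placeholders:

```yaml
# Cluster-scoped HelmChartRepository sketch; name and URL are placeholders.
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: my-charts                         # hypothetical repository name
spec:
  connectionConfig:
    url: https://example.com/charts       # placeholder chart repository URL
```

After the resource is created, charts from the repository appear in the web console's Developer Catalog alongside the defaults.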


Manage workloads

Start with a complete set of services to build applications, including Red Hat OpenShift Serverless, Red Hat OpenShift Service Mesh, and Red Hat OpenShift Pipelines. These developer-friendly workflows enable developers to go straight from application code to container.

Platform services

Builds

Builds for Red Hat OpenShift is an extensible build framework that enables developers to build container images from source code and Dockerfiles by using image build tools such as Source-to-Image (S2I) and Buildah. It is based on the open source Shipwright project, enabling you to create and apply build resources, view logs of build runs, and manage builds in your OpenShift Container Platform namespaces.

CI/CD pipelines

Red Hat OpenShift Pipelines enables developers to create cloud-native, continuous integration and continuous delivery (CI/CD) solutions on OpenShift. It builds on the open source Tekton project, automating application deployments across multiple platforms. OpenShift Pipelines supports integration with Git repositories such as GitHub, GitLab, and Bitbucket. Optionally, you can maintain your CI/CD definitions as part of the source repository.
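A minimal Tekton Pipeline sketch is shown below; it assumes the commonly used `git-clone` and `buildah` Tasks are installed in the cluster, and omits parameters a real build would need, such as the output image reference:

```yaml
# Two-stage pipeline sketch: fetch source, then build an image.
# Task names are assumptions; a real pipeline needs additional params.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
  - name: git-url
    type: string
  workspaces:
  - name: shared-workspace     # shared between the tasks below
  tasks:
  - name: fetch-source
    taskRef:
      name: git-clone          # assumes the git-clone Task is installed
    workspaces:
    - name: output
      workspace: shared-workspace
    params:
    - name: url
      value: $(params.git-url)
  - name: build-image
    taskRef:
      name: buildah            # assumes the buildah Task is installed
    runAfter:
    - fetch-source
    workspaces:
    - name: source
      workspace: shared-workspace
```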

GitOps

Red Hat OpenShift GitOps is based on the open source Argo CD project. It enables developers to implement declarative configuration and continuous delivery of cloud-native applications, including code, components, and infrastructure, deployed across single-cluster or multicluster OpenShift and Kubernetes environments.
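As a hedged example, an Argo CD Application resource declares the desired state to sync from Git; the repository URL, path, and namespaces below are placeholders:

```yaml
# Argo CD Application sketch; repo URL, path, and namespaces are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook                 # hypothetical application name
  namespace: openshift-gitops     # assumes the default GitOps namespace
spec:
  project: default
  source:
    repoURL: https://github.com/example/guestbook.git  # placeholder repo
    targetRevision: main
    path: manifests               # placeholder path to manifests in the repo
  destination:
    server: https://kubernetes.default.svc             # the local cluster
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true                 # delete resources removed from Git
      selfHeal: true              # revert out-of-band cluster changes
```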

Serverless

Red Hat OpenShift Serverless delivers Kubernetes-native building blocks that enable developers to create, manage, and deploy event-driven, cloud-native applications and functions on OpenShift. With the power of open source Knative, you can use OpenShift Serverless to build, deploy, and run event-driven applications with out-of-the-box traffic routing and security, and to scale resources up and down, even back to zero, based on demand.
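The sketch below shows a Knative Service with autoscaling annotations that permit scale-to-zero; the service name and image are placeholders:

```yaml
# Knative Service sketch; the name and image are placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display            # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"   # allow scale to zero
        autoscaling.knative.dev/maxScale: "5"   # cap concurrent replicas
    spec:
      containers:
      - image: quay.io/example/event-display:latest  # placeholder image
```

When no requests arrive, Knative scales the revision down to zero pods; the first incoming request scales it back up.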


Service mesh

Red Hat OpenShift Service Mesh provides a uniform way to secure, manage, and observe microservices that make up applications running on OpenShift. Based on the open source Istio and Kiali projects, OpenShift Service Mesh simplifies security, traffic control, and observability to applications without requiring any changes to the service code.

Cost visibility

Red Hat OpenShift Cost Management helps business leaders, IT managers, and developers visualize costs aggregated across hybrid infrastructure so your business can stay on budget. Cost management tracks spending habits and distributes costs across projects, organizations, and regions. It also models costs to align operations, development, and business, and to hold the responsible parties accountable.

Build cloud-native apps

Red Hat’s product development cycle has always been rooted in open source and the communities that help steer Red Hat’s products’ direction. Just as Fedora is the upstream project for Red Hat Enterprise Linux, the projects listed here are the upstream versions of the products that make up Red Hat OpenShift.

Application services and runtimes

Languages and runtimes

Red Hat OpenShift Runtimes is a collection of runtimes and frameworks, designed and optimized to run on OpenShift and accelerate the development and delivery of business solutions. The runtime collection includes support for Quarkus, Spring Boot, Vert.x, Thorntail, and Node.js, and provides prescriptive architectures, design patterns, developer tools, best practices, and ready-made example applications.


API management

Red Hat OpenShift API Management provides a streamlined developer experience for building, deploying, and scaling cloud-native, integrated applications. It enables API-first, microservices-based application development, allowing organizations to easily reuse existing assets and create modern cloud-native applications.


Integration

Red Hat Integration gives developers and DevOps teams the cloud-native tools needed for integrating applications and systems, including application programming interface (API) connectivity, API management and security, data transformation, service composition and orchestration, real-time messaging, data streaming, change data capture, and cross-data center consistency.


Messaging

Based on Apache Kafka and Apache ActiveMQ, Red Hat AMQ equips developers with everything needed to build messaging applications that are fast, reliable, and easy to administer. AMQ Broker supports multiple protocols and fast message persistence. AMQ Interconnect leverages the AMQP protocol to distribute and scale your messaging resources across the network. AMQ Clients provides a suite of messaging APIs for multiple languages and platforms.


Data-driven insights

Data services simplify data management in hybrid cloud and multi-cloud environments by streamlining access to software-defined storage and data services. With OpenShift, you can run data analytics in a consistent way across clouds to accelerate the delivery of cloud-native applications.

Data services

Data Analytics

Red Hat OpenShift AI provides a platform for data scientists and developers to build intelligent applications. Data scientists can build artificial intelligence/machine learning (AI/ML) models with Jupyter notebooks, TensorFlow, and PyTorch support. Developers can port these AI/ML models to other platforms and deploy them in production, on containers, and in hybrid cloud and edge environments.



Developer tools and services

Find the tools that you need to build in the cloud. Red Hat’s developer tools for Kubernetes simplify your workflow while giving you the capabilities of this powerful platform. Developers and DevOps teams who have chosen Java for application development can enhance their cloud development pipeline with Red Hat’s Java tools, all at no cost.

Red Hat OpenShift Dev Spaces

Red Hat OpenShift Dev Spaces uses Kubernetes technology and containers to provide any member of a development or IT team with a consistent, secure, zero-configuration, and instant cloud development environment (CDE). OpenShift Dev Spaces CDEs are defined as code by using CNCF Devfiles and run IDEs such as Visual Studio Code and JetBrains IntelliJ. OpenShift Dev Spaces CDEs are designed to work on enterprise disconnected networks. As a developer, you only need a laptop with a connection to the OpenShift cluster for the purposes of coding, building, testing, and deploying code on OpenShift. By using OpenShift Dev Spaces, you do not need to apply complex Kubernetes cluster configurations to test your application on OpenShift. See the Red Hat OpenShift Dev Spaces overview on the Red Hat Developer Customer Portal, where you can access Red Hat Customer Portal documentation for OpenShift Dev Spaces.
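A minimal devfile sketch follows; the workspace name, container image, and command are illustrative assumptions rather than Dev Spaces defaults:

```yaml
# Minimal devfile sketch; name, image, and command are placeholders.
schemaVersion: 2.2.0
metadata:
  name: my-node-app            # hypothetical workspace name
components:
- name: runtime
  container:
    image: registry.access.redhat.com/ubi8/nodejs-18  # placeholder image
    memoryLimit: 512Mi
commands:
- id: install
  exec:
    component: runtime
    commandLine: npm install
    workingDir: ${PROJECT_SOURCE}  # devfile variable for the project root
```

Dev Spaces reads a devfile like this from the project repository to provision an identical cloud development environment for every team member.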

Developer CLI

The odo command-line interface (CLI) is a tool that you can use to write, build, and deploy applications on Kubernetes. The tool supports fast, simplified, and iterative development capabilities. Existing tools like oc and kubectl are more operations-focused, while odo is developer-focused; you can build and deploy applications without needing to know the underlying Kubernetes and OpenShift concepts. See Important update on odo on the Red Hat Customer Portal to read the support status of odo. To learn more about odo, see the upstream documentation.

Web Terminal

You can use the Web Terminal Operator, which is listed in the OpenShift Container Platform OperatorHub, to install the web terminal. The web terminal integrates command-line tools with the web console and the Linux environment that runs pods on your cluster. By using the web terminal, you can avoid steps such as installing software, configuring network connections, and authenticating to services that would otherwise be required on your local system terminal. The web terminal also supports devices, such as tablets and mobile phones, that lack a native terminal. See the web terminal documentation on the Red Hat Customer Portal.

OpenShift plug-ins and extensions

As a developer or DevOps engineer, you can use a supported IDE, such as Microsoft VS Code or JetBrains IntelliJ, to interact with the OpenShift Container Platform by installing a plug-in. By installing the OpenShift Toolkit plug-in through the VS Code marketplace or IntelliJ marketplace, you can access a suite of extensions in your IDE for the purposes of creating, deploying, and debugging container applications that run on OpenShift. A plug-in also exists for the Red Hat build of Quarkus with a Quarkus Tools extension (see the VS Code Marketplace and JetBrains Marketplace).

Service Binding

You can use the Service Binding Operator to connect your applications to backing services such as REST endpoints, databases, and event buses to reduce the complexity of balancing multiple backing service requirements. The Service Binding Operator reduces the need to manually configure, manage, and bind workloads to backing services, so that you can focus on connecting applications to different backing services without the tedious, inefficient, and error-prone manual operations. See the Understanding Service Binding Operator documentation on the Red Hat Customer Portal.

Development environment

Developer Sandbox for Red Hat OpenShift

The Developer Sandbox for Red Hat OpenShift is a private, multitenant cluster that runs in an environment separate from your production environment. The Developer Sandbox includes pre-configured core developer tools and pre-built sample applications. The Developer Sandbox environment is an ideal space to safely test an application before moving it to a production environment, and it includes tutorials that help you quickly get familiar with the OpenShift Container Platform. See Start exploring in the Developer Sandbox for free. Consider also viewing Configuring access to a Developer Sandbox in the Podman Desktop community documentation.


Red Hat OpenShift Local

Use Red Hat OpenShift Local to access a minimal OpenShift Container Platform cluster and a Podman container runtime on your local computer. By using Red Hat OpenShift Local, you can quickly develop and test applications in a simplified environment before you move the applications to a production environment. Red Hat OpenShift Local establishes this simplified environment by using a single node, disabling the Cluster Monitoring Operator, and including a crc command-line interface tool for interacting with a cluster instance. See Creating an OpenShift Local instance to use the Red Hat OpenShift Local extension on Podman Desktop to create this cluster instance. See the Red Hat OpenShift Local documentation on the Red Hat Customer Portal.

Podman Desktop

You can manage pods and container images by using Podman, which is a daemon-less container engine for developing, managing, and running containers that comply with the Open Container Initiative (OCI). On Red Hat Enterprise Linux (RHEL) 8 and later versions, you can install a container-tools module that includes the Podman CLI. If you want to use a GUI version of Podman, install Podman Desktop. You can integrate this GUI with OpenShift Local by installing an OpenShift Local extension. With this extension, you can build, run, and manage containers on OpenShift while accessing key container management functionality on an intuitive GUI. For example, you can run the Podman Machine API to view allocated memory, CPU, and storage resources. See the Podman Desktop documentation and Getting container tools in the RHEL 8 (Red Hat Customer Portal) documentation.

Multicluster Management

With multicluster management, get visibility and control to manage the cluster and application life cycle, security, and compliance of the entire Kubernetes domain across multiple data centers, and private and public clouds.

Advanced application life-cycle management

With multicluster management, developers can use open standards and deploy application policies that integrate with existing CI/CD pipelines, define and deploy applications across clusters based on defined policies, and use cluster labels and application rules to move workloads across clusters and between multiple cloud providers.


Multicluster observability for health and optimization

With out-of-the-box multicluster dashboards, you can store long-term historical data, visualize multicluster health and optimization, and get an aggregated view of individual-cluster or multicluster metrics for quick troubleshooting.


Red Hat Quay

Red Hat Quay is a private container registry that stores, builds, and deploys container images, focusing on cloud-native and DevSecOps development models and environments. Developers and DevOps teams can analyze images for security vulnerabilities and identify issues to help reduce security risks.

Image management

With Red Hat Quay, tag history is available: you can view image tags by history and revert them to a previous state. The tag expiration window is configurable, from zero up to four weeks; after the defined period, image tags expire.


Security scanning

Red Hat Quay uses Clair to scan containers for vulnerabilities. Clair provides container vulnerability reporting to users, identifying vulnerabilities that could be used to exploit images. By providing users with a vulnerability report, Clair can guide users to develop remediation strategies for vulnerable images.

Geo-replication mirroring

Developers and DevOps have all of the content needed for their Kubernetes environments with multicluster and multi-region content management. Red Hat Quay’s continuous geographic distribution provides improved performance, ensuring your content is always available close to where it is needed most.

Image builds

Red Hat Quay features a built-in image build service that can be triggered by events such as Git pushes. Streamline the CI/CD pipeline with build triggers, Git hooks, or robot accounts. Developers can easily build, or rebuild, container images that are then automatically stored in their Red Hat Quay registry based on filters and custom tagging rules.


Cluster Data Management

Cluster data management is persistent software-defined storage based on Ceph, NooBaa, and Rook technologies for both hybrid and multi-cloud environments. It is easy to install and manage as a part of the container-based application lifecycle.

Multi-cloud portability

A multi-cloud gateway abstracts storage infrastructure, so data can be stored in many different places but act as one persistent repository. It allows users to start small and scale as needed on-premises, in multiple clusters, and with cloud-native storage.


Simplified application development

OpenShift Data Foundation provides accessible data and support for all Red Hat OpenShift apps. It also simplifies data management across the hybrid cloud. Developers can provision storage directly from Red Hat OpenShift without switching to a separate user interface.
