By Ben Parees

There have been a lot of announcements lately around Red Hat’s OpenShift v3 plans, specifically around Docker and Kubernetes. OpenShift v3 is being built around the central idea of user applications running in Docker containers with scheduling/management support provided by the Kubernetes project, and augmented deployment, orchestration, and routing functionality built on top.


This means if you can run your application in a container, you can run it in OpenShift v3. Let’s dig in and see just how you can do that with code that’s available today. I’m going to walk through setting up OpenShift and deploying a simple application. Along the way, I’ll explain some details of the underlying components that make it all work.

Background

If you’re not familiar with the basic concepts of Docker images/containers, and Kubernetes Replication Controllers, Services, and Pods, it will be helpful to read up on those items first.

However, if you want to skip that, the primary concepts you need to know are:

Docker image: Defines a filesystem for running an isolated Linux process (typically an application).

Docker container: Running instance of a Docker image with its own isolated filesystem, network, and process spaces.

Pod: A Kubernetes object that groups related Docker containers that need to share network, filesystem, or memory, so they are placed together on a node. Multiple instances of a Pod can run to provide scaling and redundancy (see the example manifest below).
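
To make the Pod concept concrete, here is a minimal sketch of a Pod manifest. It uses the Kubernetes v1 API shape; the name, label, image, and port shown are illustrative placeholders, not part of the walkthrough in the full article.

```yaml
# Minimal Pod: one container running an application image.
# Kubernetes schedules the whole Pod onto a node; all containers in the
# Pod share the same network namespace and can share volumes.
apiVersion: v1
kind: Pod
metadata:
  name: hello-openshift            # placeholder name
  labels:
    app: hello-openshift           # labels let services and replication controllers select this Pod
spec:
  containers:
  - name: hello-openshift
    image: openshift/hello-openshift   # any Docker image that runs your application
    ports:
    - containerPort: 8080
```

In practice, a replication controller keeps the desired number of copies of a Pod like this running, and a service routes traffic to those copies by label.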

Read the whole article at OpenShift V3 Deep Dive | The Next Generation of PaaS.
