
Container Images for OpenShift – Part 4: Cloud readiness

 

October 17, 2017
Frédéric Giloux
Related topics:
Containers, Kubernetes
Related products:
Red Hat OpenShift Container Platform


    This is a transcript of a session I gave at EMEA Red Hat Tech Exchange 2017, a gathering of Red Hat solution architects and consultants from across EMEA. It covers considerations and good practices for creating images that will run on OpenShift. This fourth and last part focuses on the specific aspects of cloud-ready applications and their consequences for the design of container images.

    Requirements

    The reference in terms of requirements for cloud readiness is The Twelve-Factor App. OpenShift makes it easier to create such applications. In the previous parts of this blog, we have seen how you can create images that, for instance, make it possible to store configuration in the environment, to separate build and run stages, and to expose ports for binding services. We will now cover other points: processes, concurrency, disposability, and logs. We will then go further with aspects like self-healing, security, and the externalization of technical functions from the application code to the infrastructure.

    Processes

    In the previous parts of this blog, we have seen that it is recommended to have a single process running in a container. OpenShift can then monitor this process, and signal handling is easier (signals are simply directed to the process). It also allows the process to be scaled horizontally and independently by starting new container instances. When a second process needs to communicate with the first one through shared memory, the local network, or the file system, the best approach is to run each process in its own container within the same pod. Monitoring and signal handling work unchanged, but the processes can no longer be scaled independently. For legacy applications, it may still be required to run several processes in a single container. In that case, the recommended approach is to use systemd containers: systemd receives the signals and manages child processes accordingly.
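
    A minimal sketch of the second case, two co-located containers in one pod (the names, images, and port are hypothetical placeholders):

        # Two containers scheduled together in the same pod; they share the
        # pod's network namespace and can reach each other on localhost.
        apiVersion: v1
        kind: Pod
        metadata:
          name: app-with-helper                       # placeholder name
        spec:
          containers:
          - name: application
            image: registry.example.com/myapp:1.0     # placeholder image
            ports:
            - containerPort: 8080
          - name: helper
            image: registry.example.com/helper:1.0    # placeholder image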

    Cloud-native applications are best stateless, meaning that state is delegated to a NoSQL, RDBMS, or data grid backend. This makes scaling up and down, automated failure recovery, and zero-downtime upgrades easier, as each instance can be considered independently. That said, stateful legacy applications might still benefit from running as containers on a PaaS like OpenShift. The challenge here is that containers are best treated as cattle in the pets vs. cattle analogy. Toolkits used for synchronization in a cluster, like JGroups, need to adapt and may have to interrogate the OpenShift API or use a discovery protocol to retrieve the members of a cluster. The cluster also cannot rely on fixed identities, as containers may be stopped and restarted with different IP addresses and on different hosts. StatefulSet, which is still beta in Kubernetes 1.8 and a technology preview in OpenShift 3.6, is useful for supporting "pets" in OpenShift and for running workloads like Elasticsearch or applications using ZooKeeper.
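
    A minimal StatefulSet sketch for such a workload (the names, image, and sizes are hypothetical placeholders; the API group was still beta at the time):

        # Each replica gets a stable name (es-0, es-1, ...) and its own
        # persistent volume, both of which survive rescheduling.
        apiVersion: apps/v1beta1          # beta API group in Kubernetes 1.8
        kind: StatefulSet
        metadata:
          name: es
        spec:
          serviceName: es                 # headless service providing stable DNS names
          replicas: 3
          template:
            metadata:
              labels:
                app: es
            spec:
              containers:
              - name: elasticsearch
                image: registry.example.com/elasticsearch:5.6    # placeholder image
                volumeMounts:
                - name: data
                  mountPath: /usr/share/elasticsearch/data
          volumeClaimTemplates:
          - metadata:
              name: data
            spec:
              accessModes: [ "ReadWriteOnce" ]
              resources:
                requests:
                  storage: 10Gi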

    This was quite a digression from the primary focus of this blog, container images for OpenShift, but these considerations are critical when you design your applications and the container images that package them.

    Disposability

    Container disposability plays a critical role in self-healing (we will look at it later in this blog), automated upgrades, and downscaling. Besides the signal handling aspects covered in the previous part, the application inside the container needs to support graceful shutdown. When OpenShift or a user decides to terminate a container:

    • The container is taken out of the service and route rotation for request handling.
    • A SIGTERM is then sent to the application, which needs to finish processing in-flight requests, release resources, and shut down. If the container hasn't terminated within the allocated time, it receives a SIGKILL.

    When you create your application and container, you need to support this so that your application behaves nicely in a cloud environment.
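
    The allocated time mentioned above is configurable per pod. A minimal sketch, with illustrative values and a hypothetical image name:

        # Fragment of a pod definition: OpenShift sends SIGTERM, waits up to
        # terminationGracePeriodSeconds, then sends SIGKILL.
        apiVersion: v1
        kind: Pod
        metadata:
          name: graceful-app
        spec:
          terminationGracePeriodSeconds: 30          # grace period before SIGKILL
          containers:
          - name: application
            image: registry.example.com/myapp:1.0    # placeholder image
            lifecycle:
              preStop:
                exec:
                  # optional hook executed before SIGTERM, e.g. to drain requests
                  command: ["/bin/sh", "-c", "sleep 5"]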

    Logs

    When running applications inside containers, the recommendation is to write logs to the standard output. They are then gathered by journald running on the host. With OpenShift, they are made available in the web console and through the CLI. OpenShift also supports a Fluentd daemon per host that collects application and infrastructure logs from journald and sends them to an Elasticsearch cluster running on OpenShift, or to the customer infrastructure, for log aggregation. You can then build cross-container and cross-application queries and dashboards in Kibana. Another aspect, seen in the second part of this blog, is that it is best to run containers read-only, which is also made easier by writing logs to the standard output. Sending logs directly from the containerized application to a remote aggregator (Rsyslog, Fluentd, Logstash server) may be acceptable, but logs are then neither visible in the OpenShift web console nor accessible through the CLI.

    With legacy applications, it may not be possible to have the process write directly to the standard output. In that case, the sidecar container pattern is the best approach. A sidecar container is a container collocated with your application container in the same pod. Both containers can share volumes, including an emptyDir. You can create a named pipe (mkfifo) on the mount point of the emptyDir and have your application write its logs into it. The process inside the sidecar container can then read the logs from the fifo and forward them to the standard output.
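
    A sketch of this pattern, assuming a hypothetical legacy image that writes its logs to /logs/app.fifo (all names and images are placeholders):

        # The sidecar creates the named pipe on the shared emptyDir and streams
        # whatever the application writes into it to its own standard output.
        apiVersion: v1
        kind: Pod
        metadata:
          name: legacy-app-with-log-sidecar
        spec:
          volumes:
          - name: logs
            emptyDir: {}                                 # shared scratch volume
          containers:
          - name: application
            image: registry.example.com/legacy-app:1.0   # writes to /logs/app.fifo
            volumeMounts:
            - name: logs
              mountPath: /logs
          - name: log-forwarder
            image: registry.access.redhat.com/rhel7      # any image with a shell
            command: ["/bin/sh", "-c"]
            args:
            - mkfifo /logs/app.fifo 2>/dev/null; while true; do cat /logs/app.fifo; done
            volumeMounts:
            - name: logs
              mountPath: /logs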

    Self Healing

    OpenShift provides self-healing mechanisms that improve application resilience. As mentioned in the second part of this blog, they rely on readiness and liveness probes, which allow the orchestration platform to know about the health of your application. Probes provide a binary result: OK/NOK.

    • Liveness probe: states whether the application is working properly. When a liveness probe returns a NOK result, OpenShift restarts the container, which brings it back to its initial state.
    • Readiness probe: states whether the application is able to process requests. When a readiness probe returns a NOK result, OpenShift does not forward any requests (from services or routes) to the container. This may happen because your application has not finished starting, or because it is waiting for a dependency such as a backend service or data store to become available. Restarting the container would not help in that case. When the probe eventually returns OK, the container is put back into the rotation for processing requests.

    Liveness and readiness probes can be implemented as shell commands executed inside the container, as HTTP requests, or as TCP socket checks. When a shell command returns 0, the result is OK; anything else is NOK. For HTTP, a response code in the 2xx or 3xx range is OK; anything else is NOK. If a TCP connection can be established, the result is OK; otherwise it is NOK. When no probe has been defined, OpenShift monitors the process by default: as long as it runs, the result is OK.
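
    A sketch showing these mechanisms (paths, ports, script names, and timings are illustrative):

        # Fragment of a container definition with a liveness and a readiness probe.
        containers:
        - name: application
          image: registry.example.com/myapp:1.0     # placeholder image
          livenessProbe:
            httpGet:                                # HTTP: 2xx/3xx is OK
              path: /healthz
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 10
          readinessProbe:
            exec:                                   # shell command: exit code 0 is OK
              command: ["/bin/sh", "-c", "/opt/app/check-ready.sh"]
            periodSeconds: 5
          # The third option is a plain TCP check:
          # readinessProbe:
          #   tcpSocket:
          #     port: 8080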

    It is the responsibility of the image creator to define probes that accurately represent the state of the application.

    Externalization of Technical Functions

    Externalization of technical functions: what does it mean? Many non-business functions, like access control, circuit breaking, client-side load balancing, routing, tracing, request rate throttling, and policy enforcement, have historically been built into the application code. Later implementations may have used libraries like Hystrix, Zipkin, and others. These functions, orthogonal to the business logic, can be externalized from the application code by using API gateways or the sidecar container pattern mentioned above. Projects like Istio are working on this. The aim is to have simpler, cleaner application code focused on business logic.

    Security

    Security is a broad topic. I will just list a few points here that need to be considered when you create container images.

    Security is often about limiting. With containers, you will need to limit:

    • What is installed inside the container.
    • The capabilities required for running the container.
      • By avoiding root privileges, mounting the host file system, or binding host ports when they are not required.
      • By running privileged operations, like setting access rights on the file system, at build time.
      • By accessing protected repositories or running special operations at startup through init containers, so that credentials are not part of the container running the application.
    • Access to the application, by using API gateways or sidecar-embedded reverse proxies, besides what can be done at the network level.
    • Resources (CPU, RAM, storage, network bandwidth) that can be consumed by the container (a minimal sketch follows these lists).

    And to avoid:

    • Having an SSH daemon running inside the container: docker exec or oc exec can be used instead.
    • Using sudo, as it has unpredictable TTY and signal-forwarding behavior.
    • Setting default passwords in the image.

    But you should:

    • Support arbitrary user IDs. NSS wrapper can be used for user mapping when it is required by legacy applications.
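
    A sketch combining several of the points above: a non-root user, dropped capabilities, a read-only root file system, and resource limits (all names and values are illustrative):

        # Fragment of a container definition applying the restrictions discussed above.
        containers:
        - name: application
          image: registry.example.com/myapp:1.0     # placeholder image
          securityContext:
            runAsNonRoot: true                      # refuse to run as root
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]                         # drop capabilities the app does not need
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi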

    Another aspect of security is guaranteeing that your image is what it claims to be. Container image signing addresses that.

    Finally, you should keep your container images up to date and apply security patches whenever they are made available. OpenSCAP can help you identify security risks in that respect.

    You have now reached the end of my blog series on container images for OpenShift. I hope this was of value to you. Happy coding!

    Last updated: April 3, 2023
