
Modern Fortune Teller: Using GitOps to automate application deployment on Red Hat OpenShift

June 21, 2021
Ken Lee, Preska Sharma, Keyvan Pishevar

Related topics: Artificial intelligence, GitOps, Kubernetes
Related products: Streams for Apache Kafka, Red Hat OpenShift
Our team recently created an application called Beer Horoscope, which we used to illustrate the extensive possibilities for modern software development and deployment with Red Hat OpenShift and community-built free software tools. The application's front end collects user preferences and makes beer recommendations. The back end performs machine learning on users and products (beers) to make appropriate recommendations. Figure 1 shows how we combined an event-driven architecture with machine learning models that are applicable to numerous real-world scenarios.

Figure 1: The application flow of data collection, analysis, and service redeployment.

This article summarizes our talk at the 2021 Red Hat Summit break-out session titled Modern Fortune Teller: Your Beer Horoscope with AI/ML. We'll discuss how we used GitOps on OpenShift, along with ArgoCD, to continuously deploy our application as we were developing it. We'll also explain how we used Open Data Hub as a one-stop machine learning environment to create and test our algorithms on OpenShift. See our GitHub repository for application source code and additional documentation.

What is GitOps?

GitOps is a way of implementing continuous deployment (CD) for cloud-native applications. GitOps extends CD from development to cloud deployment using tools developers are already familiar with, including Git.

The core idea of GitOps is to create a Git repository that contains declarative descriptions of the infrastructure. These are updated so they always indicate the images currently desired in the production environment, as shown in Figure 2.
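To make this concrete, a GitOps repository is typically wired to the cluster through a manifest like the following Argo CD Application, which tells Argo CD where the declarative descriptions live and where to apply them. This is a minimal sketch: the repository URL, paths, and resource names are illustrative assumptions, not the project's actual configuration.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: beer-horoscope            # assumed application name
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/beer-horoscope-gitops  # assumed repo
    targetRevision: main
    path: manifests               # directory of declarative Kubernetes manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: beer-horoscope
  syncPolicy:
    automated:
      prune: true                 # delete resources removed from Git
      selfHeal: true              # revert manual changes to match Git
```

With `automated` sync enabled, any commit to the repository is reconciled into the cluster, which is what keeps production matching the images currently desired.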

GitOps continuously evaluates the desired state of production systems and automatically updates those systems in the cloud.
Figure 2: Flow of events in GitOps that keeps production systems up to date in the cloud.

In practice, the advantages of GitOps include:

  • Faster and more frequent application deployments
  • Easier and faster error recovery
  • Simplified credential management
  • Self-documenting deployments
  • Shared knowledge throughout the team

Red Hat's GitOps Operator

Red Hat OpenShift is a Kubernetes platform built on the declarative principles that allow administrators to configure and manage deployments using GitOps. Because both the infrastructure and the applications are Kubernetes-based, you can apply consistent practices across clusters and development life cycles.

Red Hat collaborates with open source projects such as Argo CD and Tekton Pipelines to implement a framework for GitOps.

For this application, we leveraged Red Hat's GitOps Operator for application deployment. This operator allows for continuous updates and delivery via ArgoCD and Git, thus implementing GitOps.

ArgoCD pulls the deployment instructions from our Git repository and installs Red Hat AMQ Streams, which is based on Apache Kafka. The AMQ Streams Operator lets developers use Kafka and components such as Kafka Connectors to support complex event processing. For this project, we used AMQ Streams in high-availability mode.
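With the AMQ Streams Operator installed, a Kafka cluster is declared as a custom resource that the operator reconciles. The sketch below shows what a high-availability declaration might look like, using the Strimzi-based `Kafka` resource that AMQ Streams provides; the cluster name and storage choices are illustrative assumptions.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: beer-horoscope-cluster    # assumed cluster name
spec:
  kafka:
    replicas: 3                   # three brokers for high availability
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral             # a real deployment would use persistent storage
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}             # manage topics declaratively via KafkaTopic CRs
    userOperator: {}
```

Because this manifest lives in the same Git repository that ArgoCD watches, the Kafka cluster itself is managed through GitOps like every other component.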

The Open Data Hub Operator

We used Open Data Hub as a one-stop environment for machine learning and artificial intelligence (AI/ML) services and tools on OpenShift. Open Data Hub provides tools at every stage of the AI/ML workflow, and for multiple user personas including data scientists, DevOps engineers, and software engineers. The Open Data Hub Operator (Figure 3) lets developers use a best-of-breed machine learning toolset and focus on building the application.

Figure 3: Tools provided by the Open Data Hub Operator to the Beer Horoscope project.

Figure 4 shows the platform components, such as Jupyter Notebook, Apache Spark, and others, that Open Data Hub makes available for data scientists via Kubeflow.

Figure 4: Data processing and analysis tools provided by Kubeflow.

Developing the recommendation system

To develop the algorithms for the Beer Horoscope project, we needed a recommendation system. To choose the correct algorithms for the system, we first had to determine the relationships involved when users get a beer recommendation. Three main types of relationships occur in this scenario:

  • User-product: What kind of beer does this user like to drink?
  • Product-product: What beers are similar to each other?
  • User-user: Which users have similar taste in beer?

Two of these relationships can be established using algorithms that are popular in recommendation systems:

  • Collaborative filtering: Used to establish user-user relationships. If one user rates Beer A very highly, and another user also rates Beer A very highly, we can assume these users have similar taste in beer. We can then start recommending to each user the beers that are highly rated by the other user.
  • Content-based filtering: Used to establish product-product relationships. If a user likes Beer A, it's safe to recommend beers similar to Beer A to the user.
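The collaborative-filtering idea above can be sketched in a few lines: score user-user similarity over commonly rated beers, then recommend what the most similar user liked. This is a toy illustration with made-up users, beers, and ratings, not the project's actual model code.

```python
from math import sqrt

# Toy ratings matrix: user -> {beer: rating}. All names are illustrative.
ratings = {
    "alice": {"ipa": 5, "stout": 1, "lager": 4},
    "bob":   {"ipa": 5, "stout": 2, "lager": 5, "pilsner": 4},
    "carol": {"ipa": 1, "stout": 5, "porter": 4},
}

def cosine(u, v):
    """Cosine similarity over the beers both users have rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[b] * v[b] for b in common)
    norm_u = sqrt(sum(u[b] ** 2 for b in common))
    norm_v = sqrt(sum(v[b] ** 2 for b in common))
    return dot / (norm_u * norm_v)

def recommend(user, ratings):
    """Recommend beers rated by the most similar other user."""
    others = [(cosine(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    _, nearest = max(others)
    # Suggest beers the similar user rated that the target user has not tried.
    return [b for b in ratings[nearest] if b not in ratings[user]]

print(recommend("alice", ratings))  # alice is most similar to bob
```

A production system would compute these similarities over sparse matrices for many thousands of users, but the relationship being exploited is exactly the one described above.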

We developed models using these two algorithms on JupyterHub through Open Data Hub. There, we had access to all of the components that made up the development environment, including environment variables, databases, configuration settings, and so on.

Once we created our models, our application deployment life cycle became very similar to the software development life cycle.

Next, we'll take a look at how cloud-native development utilizes containers, DevOps, continuous delivery, and microservices to automate these formerly time-consuming steps.

How we built the Beer Horoscope

Up to this point, we've covered how we automated the infrastructure that our application runs on, from the perspective of a DevOps engineer, by leveraging GitOps. We then discussed how we trained and created the data models from the perspective of a data engineer and data scientist.

Here, we turn to the viewpoint of an application developer. We'll go over how an application interacts with trained data models and how to leverage the OpenShift infrastructure and platform to make these interfaces possible.

The AMQ Streams Operator, GitOps Operator, and Open Data Hub Operator, discussed earlier, were all available from the OperatorHub, shown in Figure 5.

Figure 5: Community-based tools that contribute to OperatorHub.

Application architecture

Figure 6 lists the systems involved in this architecture and roughly indicates how they relate to each other. The application components are:

  • Users: The people whose beer preferences are collected and who receive recommendations from the project.
  • Applications and services tier: This collection typically houses artifacts such as web applications, REST services, and internal services. Many of the APIs and much of the underlying business logic were adapted from Jupyter notebooks developed by data engineers. The front-end component was written in Vue.js. The API services are built on Python and Flask.
  • Data tier: This tier stores both raw and structured data, as well as trained data models. This data is used by the consuming tiers. The Beer Horoscope application uses both the MySQL relational database and file storage to store data.
  • Event-processing tier: In this tier, we orchestrate how we process any new data introduced into our ecosystem. We create data streams to handle complex event processing scenarios and business rules. For this tier, we used Kafka Connectors and Streams for complex event processing.
  • Logging, monitoring, and analytics: This tier provides functions for logging and monitoring, so that we can analyze what's going on in real time and keep historical records. This tier used the Grafana and Prometheus Operators.
  • Container registry: To facilitate the versioning, storage, and retrieval of container image artifacts in our OpenShift cluster, we stored all application image artifacts within a container registry. The Beer Horoscope uses Quay.io to host these artifacts.
Figure 6: Components of the Beer Horoscope application.
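To illustrate how the applications and services tier might expose the models, here is a minimal Flask sketch of a recommendation endpoint. The route, payload shape, and the placeholder recommender are assumptions for illustration; the project's actual API surface lives in its GitHub repository.

```python
from flask import Flask, jsonify

app = Flask(__name__)

def recommend_for(user_id):
    # Placeholder standing in for a call to the trained model service.
    return ["ipa", "stout"]

@app.route("/api/recommendations/<user_id>")
def recommendations(user_id):
    # Return the user's recommended beers as JSON for the Vue.js front end.
    return jsonify({"user": user_id, "beers": recommend_for(user_id)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

In a layout like this, the Flask service sits between the front end and the data tier, so the model can be retrained and redeployed without changing the API contract.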

Conclusion

In this article, we explored the tools and methods we used to operationalize our machine learning models, and how we brought algorithms written in Jupyter notebooks into production through a user-friendly web application.

Before starting a full-stack project, you need an environment that supports continuous delivery. We used Red Hat's GitOps Operator on OpenShift, along with ArgoCD, to continuously deploy our application as we were developing it.

Then, we used OpenShift's Open Data Hub Operator as a one-stop machine learning environment to create and test our algorithms. The MySQL databases and file system store our large datasets and trained data models, respectively.

Finally, we created an application that interacts with our models. Our full-stack application includes the front-end user interface, API services written in Flask that talk to our machine-learning model training services written in Python, the data tier, and the event-processing tier that uses Kafka Streams through AMQ Streams.

By closely examining and optimizing the software development cycle, we were able to collaborate and deploy into production an intelligent application—one that's telling you to go grab a cold beer right now!

Last updated: September 19, 2023
