In this three-part demo, Red Hat Principal Software Engineer Chris Chase uses Red Hat OpenShift Data Science to show how to build and deploy an object detection model within an intelligent application. Follow along with the tutorial using the workshop website: https://red.ht/rhods-od-workshop
OpenShift Data Science is a comprehensive environment based on the Red Hat OpenShift cloud service. It integrates Jupyter notebooks with essential model development frameworks such as TensorFlow and PyTorch, as well as key open source partner technologies.
In this workshop, you'll learn an easy way to incorporate data science and artificial intelligence/machine learning (AI/ML) into your Red Hat OpenShift development workflow and how to use an object detection model in several different ways. In particular, you will:
- Use Jupyter notebooks and TensorFlow to explore a pre-trained object detection model.
- Serve the model through a REST API as a Flask application.
- Use Source-to-Image (S2I) to build and deploy the Flask application.
- Explore Apache Kafka streams from Jupyter notebooks.
- Deploy a Kafka consumer with the same object detection model.
What's more, you don't have to install anything on your own computer, thanks to Red Hat OpenShift Data Science and Red Hat OpenShift Streams for Apache Kafka.
Create a Jupyter notebook using TensorFlow
In Part 1, Chris shows how to create a Jupyter notebook in the cloud service using a TensorFlow image, clone a Git repository, run the object detection model, and then deploy the model as an API using Flask.
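The notebook code itself lives in the Git repository cloned during the demo. As a rough sketch of the serving pattern rather than the workshop's exact code, the following Flask app loads a pre-trained TensorFlow SavedModel and exposes a /predictions endpoint; the model directory, route name, and JSON contract are illustrative assumptions.

```python
# Minimal sketch: serve a TensorFlow object detection model with Flask.
# The model directory, route name, and JSON contract are illustrative
# assumptions, not the workshop's exact code.
import base64
import io

import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)

# Assumes a SavedModel exported to ./model (e.g., from the TF Model Garden).
model = tf.saved_model.load("model")

@app.route("/predictions", methods=["POST"])
def predictions():
    # Expect {"image": "<base64-encoded JPEG/PNG>"} in the request body.
    payload = request.get_json()
    image_bytes = base64.b64decode(payload["image"])
    image = Image.open(io.BytesIO(image_bytes)).convert("RGB")

    # Object detection SavedModels typically take a uint8 batch tensor.
    input_tensor = tf.convert_to_tensor(np.array(image))[tf.newaxis, ...]
    detections = model(input_tensor)

    return jsonify({
        "boxes": detections["detection_boxes"].numpy().tolist(),
        "scores": detections["detection_scores"].numpy().tolist(),
        "classes": detections["detection_classes"].numpy().tolist(),
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```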
Build and deploy a container image
Then, in Part 2, see how to build a container image and deploy it into Red Hat OpenShift: using the console to point a build at the Git repository, letting Source-to-Image detect the Python application, running it in an OpenShift pod, and exposing the application through a secure route.
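Part 2 is driven through the OpenShift console rather than code, but once the S2I build finishes and the secure route is in place, a quick request confirms the service is up. The route hostname and endpoint below are hypothetical placeholders; substitute the route OpenShift generated for your deployment.

```python
# Minimal smoke test for the deployed service. The route hostname and the
# /predictions endpoint are hypothetical placeholders.
import base64

import requests

ROUTE = "https://object-detection-myproject.apps.example.com"  # placeholder

with open("sample.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(f"{ROUTE}/predictions", json={"image": encoded})
response.raise_for_status()
print(response.json()["scores"][0][:5])  # top detection confidences
```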
Integrate your intelligent application with Apache Kafka
In Part 3, Chris shows how to integrate the intelligent application with Kafka using the Red Hat OpenShift Streams for Apache Kafka service. Chris demonstrates how to create the Kafka instance, create topics for the Jupyter notebook and the application, connect to the instance, produce and consume detection messages, and integrate the Kafka streaming service into the application.
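As a sketch of what the Kafka-facing code might look like (the workshop's exact code is in its Git repository), here is a kafka-python producer and consumer pointed at an OpenShift Streams instance. The bootstrap server, topic name, and service account credentials are placeholders; OpenShift Streams authenticates clients over TLS with SASL, shown here with the PLAIN mechanism and a service account client ID and secret.

```python
# Sketch of producing and consuming detection messages with kafka-python.
# The bootstrap server, topic, and service account credentials below are
# placeholders for the values your OpenShift Streams instance provides.
import json

from kafka import KafkaConsumer, KafkaProducer

BOOTSTRAP = "my-kafka-instance.bootstrap.example.com:443"  # placeholder
AUTH = dict(
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="service-account-client-id",      # placeholder
    sasl_plain_password="service-account-client-secret",  # placeholder
)

# Send one detection result to the topic the application listens on.
producer = KafkaProducer(
    bootstrap_servers=BOOTSTRAP,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    **AUTH,
)
producer.send("object-detection-results", {"class": "dog", "score": 0.97})
producer.flush()

# Read messages back, e.g., from the notebook for exploration.
consumer = KafkaConsumer(
    "object-detection-results",
    bootstrap_servers=BOOTSTRAP,
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    consumer_timeout_ms=10000,  # stop iterating after 10s with no messages
    **AUTH,
)
for message in consumer:
    print(message.value)
```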
Continue your data science journey
To keep learning, revisit the workshop materials at https://red.ht/rhods-od-workshop.