Managing trained models on MLflow server with OpenShift AI

In this learning exercise, we'll focus on registering your trained model with an MLflow server. This will ensure proper model tracking and version control. To simplify environment management, we'll leverage OpenShift AI's capabilities. By the end of this exercise, you'll be able to manage and deploy your models effectively across different environments within OpenShift AI.


Overview: Managing trained models on MLflow server with OpenShift AI

In the world of machine learning (ML), managing trained models effectively is crucial. MLflow Pipelines offer a powerful tool to automate and streamline the ML lifecycle. This learning exercise delves into creating a custom component for MLflow Pipelines that simplifies model creation and management within your pipeline.

Let's walk through the MLflow implementation within the image prediction code; we previously conducted a related learning exercise on data collection and processing. Our primary aim in this learning exercise is to log all training activity to the MLflow server. To achieve this, we use the autolog functionality provided by MLflow. The advantage of autolog lies in its seamless integration and its ability to capture relevant information automatically, without explicit logging calls for each parameter or metric.