
From notebooks to pipelines: Using Open Data Hub and Kubeflow on OpenShift

Data scientists often use notebooks to explore data and to build and experiment with models. At the end of this exploratory phase comes the product-delivery phase: getting the final model into production. Serving a model in production is not a one-step, final process, however. It is a continuous cycle of training, development, and data monitoring that is best captured and automated using pipelines. This brings us to a dilemma: How do you move code from notebooks into containers orchestrated in a pipeline, and schedule that pipeline to run on specific triggers such as time of day, new batch data, or monitoring metrics?
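
As a rough illustration of that move, the sketch below uses the Kubeflow Pipelines SDK (kfp, v1-style API) to wrap notebook code in a containerized component, assemble it into a pipeline, and schedule it as a recurring run on a time-based trigger. The function name, base image, bucket path, and cron expression are illustrative assumptions, not code from the article.

```python
import kfp
from kfp import dsl
from kfp.components import create_component_from_func


def train_model(data_path: str) -> str:
    """Placeholder for training logic originally developed in a notebook."""
    print(f"Training on {data_path}")
    return "model.joblib"


# Wrap the notebook function as a containerized pipeline component.
# The base image is an assumption; use an image with your dependencies.
train_op = create_component_from_func(train_model, base_image="python:3.9")


@dsl.pipeline(
    name="notebook-to-pipeline",
    description="Retrain the model on new batch data.",
)
def training_pipeline(data_path: str = "s3://example-bucket/batch/latest"):
    train_op(data_path)


if __name__ == "__main__":
    # Compile the pipeline into a package that Kubeflow Pipelines can run.
    kfp.compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")

    # Schedule a recurring (e.g., nightly) run so retraining happens on a
    # time-based trigger; the experiment name and cron expression are assumptions.
    client = kfp.Client()
    experiment = client.create_experiment("model-retraining")
    client.create_recurring_run(
        experiment_id=experiment.id,
        job_name="nightly-retrain",
        cron_expression="0 0 2 * * *",
        pipeline_package_path="training_pipeline.yaml",
    )
```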
