A feature of OpenShift is jobs, and today I will explain how you can use jobs to run your Spark machine learning and data science applications against Spark running on OpenShift. Jobs can run as a one-off batch or on a schedule, which provides cron-like functionality. If a job fails, by default OpenShift will retry it. At the end of this article, I have a video demonstration of running Spark jobs from OpenShift templates against Spark running on OpenShift v3.
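As a rough illustration of the batch and retry behavior described above, here is a minimal sketch of a Kubernetes/OpenShift Job spec that submits a Spark application. The image name, master URL, and application path are hypothetical placeholders, not values from this article; `backoffLimit` controls how many times the job is retried on failure.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: spark-submit-job        # hypothetical job name
spec:
  backoffLimit: 6               # number of retries before the job is marked failed
  template:
    spec:
      restartPolicy: Never      # let the Job controller handle retries
      containers:
      - name: spark-submit
        image: example/spark-app:latest          # hypothetical image
        command:
        - /opt/spark/bin/spark-submit
        - --master
        - spark://spark-master:7077              # hypothetical Spark master service
        - /app/pi.py                             # hypothetical application
```

For the cron-like scheduled variant, the same pod template can be wrapped in a `CronJob` (called a scheduled job in early OpenShift v3 releases) with a `schedule` field such as `"0 * * * *"`.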
Continue reading “Running Spark Jobs On OpenShift”