Summit 2020

odo evolution - A Kubernetes CLI for developers

Introductory | An open source extensible command line interface (CLI)
Elson Yuen


More organizations are switching to a cloud-native approach but developing cloud-native applications is still difficult. In this session, we'll talk about how the development process can be simplified with the next generation of OpenShift Do (odo). The existing odo provides users with a simple way to write, build, and deploy applications on OpenShift. The next generation of odo is an open-source extensible CLI used for development of cloud-native applications for Kubernetes with major contributions by IBM and Red Hat. The new odo provides the same experience but adds extensible language support, which enables fast inner loop development, debugging, and fine-grained configuration for any language/framework. It also does not require anything to be installed on your cluster and uses standard Kubernetes resources so your application can be deployed to any Kubernetes cluster.

Talk Text

Elson Yuen (00:04):

Hello. Welcome to the odo evolution talk. Today we're going to talk about a Kubernetes CLI for developers. My name is Elson Yuen, and I've got a co-speaker with me, Tomas Kral. We'll be talking about these new changes to the CLI. Okay. Today's agenda is: we'll start with what odo is, to give some background on what the CLI actually does. Then we'll talk about the tech preview of the odo evolution, basically the new changes that we are making to odo. Then we'll talk about how to customize the build and run of your components, and I'll discuss a little bit more on what a component is. Then we'll do a demo to spice it up a little bit, and finally we'll talk about the future plans for other changes that we're going to make.


Elson Yuen (01:07):

Okay. So what is odo? Odo is actually a developer CLI that helps you to build and run your applications on OpenShift. Today, odo is just for OpenShift, and the changes that we're making will make it a little bit more generic. Odo has a notion called a component. A component basically defines how your application is built and run: it defines, for example, the images used for the runtime, and it defines the commands that you use for the build as well. By storing all the build and run information within the component, odo makes it so that, as an application developer, you don't have to worry about: How am I going to build the application? How am I going to run the application?


Elson Yuen (02:04):

All you need to do is provide the application and focus on your source code, and your application gets deployed in place. You don't have to understand how to actually deploy it to Kubernetes; odo is going to handle all of that for you. Odo also has a notion called an application, in which you can define multiple components, so that you can have multiple services linked together to become a much bigger application. Odo itself is an open source project, so you can find the source in the repo that's listed here.


Elson Yuen (02:46):

Okay. So, let's talk about what the tech preview of odo does. As I mentioned before, odo worked on OpenShift only at the beginning, so we are making it quite generic. We actually want the odo CLI to become the Kubernetes CLI for developers. What we're doing is removing the dependencies on OpenShift resources (for example, image streams, the oc client, and routes) to make it run on any Kubernetes cluster that you have. The other thing that we are doing is improving the inner-loop experience. Basically, we're trying to minimize the time between you modifying the code and actually seeing your changes take place. To do that, we are injecting different commands into the devfile itself to help you do development.


Elson Yuen (03:50):

One of the other things that we have done is move the definition of the components into a more generic format. It's the devfile format, which is used by the Eclipse Che project as well. We want it to be reusable across different components and different systems, so that people can develop a devfile once and then share it. That basically means you can bring the build and run information across to different types of environments, and you can do it using odo, or using some other system like Eclipse Che. So, let's go through the commands that we are supporting for the tech preview. For the devfile usage in odo, if you happen to be familiar with the existing odo, the usage flow is pretty much the same as before.


Elson Yuen (04:52):

So, we try to minimize the changes so that you have the same experience across the existing odo and the devfile type of experience as well. The first command that we are supporting is catalog. Catalog provides a listing of the components, basically the frameworks that are available. Some examples of components can be a Node.js application or a Spring application. So, you can tell odo what type of application you have so that we can do the proper build for you. And then there is create component. This is where the actual linkage is: catalog provides you the list of components that are available; create component allows you to link your particular project, your particular application, to that particular component, to do the build.


Elson Yuen (05:46):

Then there's the url command, which allows you to expose a URL for your application so that you can access and test it. We also have push; that's where the actual build and run of the application occurs, where the main actions happen. And then we also have the watch command, which basically listens to your source tree for changes. Whenever there's a change, it will automatically kick off an odo push and do a build and run so that your changes take place.
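Put together, the commands described above form a simple inner-loop flow. The component type and names below are illustrative, and exact flags may differ between tech-preview builds:

```shell
# List component types available in the devfile registry
odo catalog list components

# Create a component of a given type from the current source directory
# ("nodejs" and the name "myapp" are illustrative)
odo create nodejs myapp

# Expose the application on a URL
odo url create --port 8080

# Build and run the component on the cluster
odo push

# Watch the source tree and re-run the push on every change
odo watch
```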


Elson Yuen (06:21):

Then last but not least, we also provide a delete function for doing the cleanup. Tomas will be doing a demo on the entire flow of that, and he will go through these in more detail. But one thing to note is that for the tech preview, all the devfile functions are currently hiding behind an experimental flag. So in order to use the devfile functions, you need to set the experimental flag to true in the odo preferences. Then you'll be able to use all these functions.
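Enabling the devfile functionality is a one-time preference change in the tech-preview odo:

```shell
# Turn on the experimental devfile support (tech preview)
odo preference set Experimental true

# Verify the current preferences
odo preference view
```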


Elson Yuen (06:58):

Cool. So, I'll talk about customizing the build and run of components. Once you're working with a particular application, maybe a Node application or maybe a Spring application, there may be something special about it that requires it to be built and run slightly differently. Maybe you need different dependencies in your runtime container. Customizing build and run gives you an easy way to override the default build and run behavior from the original devfile that comes with the component.


Elson Yuen (07:33):

So, the devfile itself is actually a standardized format; you can see the link there, and you can see a schema for the file format as well. During component creation, what we do is go to our devfile registry repository, download the specific devfile that is applicable for the component you have selected, and put it into the root of the project. Basically, that means the build and run information is actually stored within your project. In case you do want to customize your build, you can go and open up that devfile YAML and modify it to change the way that the build and run goes.


Elson Yuen (08:25):

So here is an example of a devfile, in case you're interested. You can see that in the devfile you will typically have sections for the Docker images. For example, these are the two samples that we provide ourselves for the tech preview in our default devfile repository. The first one is a Spring one. The first one and the second one use different models: the first one uses separate build and run containers, so your runtime container can be very close to what you run in production.


Elson Yuen (09:05):

The second one is an Open Liberty sample, which you can see is actually using the same Docker image for both build and run, so you can have a combined container. If you take a look at the devfile, basically what it allows you to do is define the containers for your build and run, and then the associated commands. For example, if I'm doing a build, this is the command that runs within the container to execute the build. So, knowing that, you can open this file up and modify it, and change how your application is built and behaves at runtime.
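A minimal devfile along these lines might look like the sketch below. It follows the devfile 1.0 schema used by the tech preview, with a single container shared by separate devBuild and devRun commands; the image name, port, memory limit, and commands are all illustrative:

```yaml
apiVersion: 1.0.0
metadata:
  name: java-maven
components:
  - type: dockerimage
    alias: tools
    image: maven:3.6-jdk-11       # illustrative combined build/run image
    memoryLimit: 512Mi
    mountSources: true            # project source is mounted into the container
    endpoints:
      - name: http-8080
        port: 8080
commands:
  - name: devBuild
    actions:
      - type: exec
        component: tools
        command: mvn package
        workdir: /projects
  - name: devRun
    actions:
      - type: exec
        component: tools
        command: java -jar target/*.jar
        workdir: /projects
```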


Elson Yuen (09:48):

Okay. So, without further ado, I will pass it on to Tomas to do a demo so that you can see it live in action.


Tomas Kral (10:00):

So, hi, everyone. I will show a quick demo of how quickly you can develop an application directly on the cluster. I'm showing this using a web application. It's a two-tier application, here in this window. I have a simple Java application using the Spring Boot framework, and here in the second tab is my front end, which is a Node.js application using the Nuxt.js framework. As I said, I will show how quickly you can develop and work with the application. And the thing that I would like you to notice here is how quickly you can see the changes in the cluster once the application is deployed.


Tomas Kral (10:52):

As I mentioned, all the functionality that I will be demoing is currently hidden under the experimental flag. So, in order to use this functionality in odo, you first need to do odo preference set experimental true. I've already done that here. So, now let's start actually doing something interesting. I will first create a project, so I will do odo project create, and I'll call it demo. Once that's done, we can start. Odo uses components. Components are basically microservices, and multiple components together make up an application. Each component has a type, which is basically the language or the framework. So, in odo you can use the catalog list components command, and this will give you a list of all the possible component types that you can create using odo.
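The opening steps of the demo correspond to commands like these (the project name is Tomas's choice):

```shell
# Enable devfile support (done once beforehand)
odo preference set Experimental true

# Create a project (namespace) for the demo
odo project create demo

# List the available component types, including devfile components
odo catalog list components
```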


Tomas Kral (12:13):

So, this is the list of the components in the old-style odo. That's what we have there right now; it's basically OpenShift specific, and it uses S2I builder images that are available in the OpenShift cluster. What we are interested in right now, and what I want to show you, are the devfile components. You can see them in the table here, and those are what we are going to use right now. I am going to use just the generic Maven one with my application. So, now that I know that I want to use Maven and it's available, I can do odo component create. I want to create a Maven component, and I will call it backend. What this is going to do is fetch the Maven devfile from the default registry and place it in the current directory.
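Creating the backend component is a single command, run from the directory holding the Java sources; it downloads the devfile from the default registry into the project root:

```shell
# Create a Maven-based component named "backend" from the local sources
odo create maven backend

# The devfile is now in the project root
ls devfile.yaml
```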


Tomas Kral (13:21):

So, if I check here and list the directory, the devfile is already there. It's similar to what Elson showed you earlier. This one uses just one image, one container, and what's most important here are the devBuild and devRun commands. Those define how our application is going to be built and run. And I didn't need to provide this; it's already provided, so I don't have to worry about that. Now, with my component created in the previous step, I can do just odo push.


Tomas Kral (14:07):

What odo push does is take the devfile description that we have here and translate it into Kubernetes resources. In this case, it will create a deployment and a service, because there is an endpoint defined. Once those are created, it will upload the source code from the current directory, and after that is finished, it executes the devBuild command as defined in the devfile. In this case, it's executing Maven package. This might take a few seconds, because it's also downloading all the Maven dependencies. Then odo will execute the devRun command; in this case, that means it will execute the jar in the cluster.
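Because odo push only creates standard Kubernetes resources, you can inspect what it generated with plain kubectl (assuming your kubectl context points at the same cluster, and "demo" is the project/namespace from this demo):

```shell
# Build and run the component on the cluster
odo push

# Inspect the standard Kubernetes resources odo generated
kubectl get deployments,services -n demo
```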


Tomas Kral (15:12):

And we will have our application running. One thing to note is that we are not building a container image here; we are just uploading the source code into a container that's already running, and then building and starting the application inside that already-running container. This is great for quick development, but it's not ideal for production runs. Right now, though, we are concerned with quick development. So, we can see that it's done, and now we have our Spring Boot application running in the cluster. I'm not going to access it directly; instead, I will use my front end to verify it's working properly. So, here I have my front-end application, a Node.js application. I already have the devfile here, same as before.


Tomas Kral (16:13):

What's interesting to notice here is that there are environment variables defined. Those describe where my Node.js application can find the backend one. I don't need to do anything else here; I will just run odo push, and it will do the same steps as before. It's already uploading the source code. Now it's running npm install and downloading all the dependencies, and then it will execute the devfile commands. Once that's done, the application will be running in the container, in the cluster, but we won't have any easy way to access it. And we know that it's actually listening on port 3000.


Tomas Kral (17:14):

So, normally you would create something like an Ingress on Kubernetes or a Route on OpenShift to expose this port on some URL where you or your customers can access it. Odo has a really nice command for that: odo url create. It automatically creates the Ingress for you. So I run odo url create dashboard; I know that the port is 3000, and I provide the base hostname to use for my cluster. It's indicating that it hasn't actually created anything on the cluster yet; it has just recorded, in a local file, the information that the URL should be created. Now I can run odo push, and it will create the URL for me.


Tomas Kral (18:34):

There is also a second option for how I could do this: I could use odo url create and provide the --now flag, and that would do the same thing in one step. So now we can check this URL and see my application running. It's a very simple application: you just put a name here, and it will greet you, saying, "Ahoy, Tomas." So, now let's do something more interesting. We can try to change something in our backend so that, instead of "Ahoy, Tomas," it says, "Hello, Tomas," for example.
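The two ways of exposing the front end that Tomas mentions correspond to commands like these; the URL name and port come from the demo, and the host value is illustrative and would differ on your cluster:

```shell
# Record the URL locally, then create it on the next push
odo url create dashboard --port 3000 --host demo.example.com
odo push

# Or create the URL immediately, in one step
odo url create dashboard --port 3000 --host demo.example.com --now
```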


Tomas Kral (19:25):

So, let's check the source code. What I'm going to do first is show you the odo watch command. I execute that, and you can see that what odo watch is doing is checking the current directory; it's waiting for some file in the directory to change. When it detects a change, it will automatically push the source code. So, I go to my source code, change this to hello, and hit save. Now you can see that it noticed the file changed, and it's automatically pushing the changes. I can use this for really quick testing of my changes; I can just switch between my editor and the browser and see the change happening without worrying about anything.
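The edit-and-see loop with odo watch is simply:

```shell
# Leave this running in one terminal; it re-pushes on every file change
odo watch

# In another terminal (or your editor), edit a source file and save;
# odo watch detects the change and runs the equivalent of "odo push"
```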


Tomas Kral (20:42):

So, I can type odo, and it should say "Hello, odo." You can see that the change is there. I still have watch running, so I could just switch to the editor, do another change, wait a few seconds, and then see the changes happening in my cluster almost instantly. Now that I'm done with that, we can use odo delete to clean everything up; it will automatically delete everything that was created before. And I do the same here. And that's all for the demo. Now Elson will give you more insight into our future plans for odo.
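Cleanup is one command per component, run from each component's directory; the -f flag, which skips the confirmation prompt, is optional:

```shell
# From the backend directory: delete everything odo created on the cluster
odo delete -f

# Do the same from the front-end directory
```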


Elson Yuen (21:36):

Okay. So, what Tomas showed is just what we have currently implemented so far, but we're definitely doing more than just that. So, let's talk about what we are going to do later. First of all, the current odo is becoming a generic Kubernetes CLI for developers, and we want it to be able to run anywhere as well. So, we're trying to expand the scope of the odo CLI to be able to run in non-Kubernetes environments too, for example a local Docker environment. We want to support that.


Elson Yuen (22:21):

Next, the current list of devfiles, or the registry itself, is actually predefined in odo. What we want to do is allow users to configure their own private devfile registries, so that they can provide their own list of components that they support themselves, or provide some modified components by default. They will be able to add and remove repositories as part of their listing. So, once you do odo catalog list, you will be able to see all the private ones that you have created as well.


Elson Yuen (23:02):

Then, adding a notification mechanism is more for IDE integration, because we see that odo can be the base framework for other IDEs to consume, so that they can use it as the base engine for doing the build and run. There are other projects, like Eclipse Codewind, that are actually interested in doing IDE integrations there as well. So we want the odo CLI to be usable for generic Kubernetes scenarios and for local Docker, but also to be the base engine for IDEs to do the build and run as well.


Elson Yuen (23:45):

Okay, so those are all the things that we wanted to talk about. In case you're interested in finding out more information, there are a couple of references that we put here. The main odo repo itself is definitely the first place that you want to visit. Then we've got the existing devfile registry, where you can see some samples, and you can try creating some of your own samples as well. And then the devfile site has some of the newer specifications that we're moving toward as well. Okay, thank you. Thank you for listening to this session. I hope you got some useful information out of this session on odo, and hopefully you'll find that odo is a good generic Kubernetes CLI for doing application development on your Kubernetes systems. Thank you.

Session speakers

Elson Yuen



Senior Software Engineer at IBM.