Why not couple an Operator’s logic to a specific Kubernetes platform?

You might find yourself in situations where a piece of logic should run only when your Operator is deployed on a specific Kubernetes platform, so you probably want to know how to detect the cluster vendor from the Operator. In this article, we discuss why relying on the vendor is not a good idea, and show a better way to handle this kind of scenario.

Why not develop solutions based on the vendor?

Let’s consider a scenario where an Operator should take further action only if the Operator Lifecycle Manager (OLM) is installed in the cluster. Since OpenShift provides the OLM by default, could we not simply check whether the cluster is an OpenShift platform?

No, we cannot. Although the OLM is installed and available by default in OpenShift 4.x, that assumption might not always hold. More importantly, the OLM can be installed on other vendors’ platforms as well.

This example illustrates the problems that relying on such assumptions can cause.

What’s the best approach, then?

The best approach is to look for the exact API resource that the solution requires, instead of checking for a resource that merely implies the Operator is (or is not) running on a specific Kubernetes platform.

In the case above, we should check whether the OLM’s Custom Resource Definitions (the API resources essential to the solution) are present.

Similarly, if you want to create a Route resource when the project runs on OpenShift and an Ingress otherwise (for example, on Minikube), then you should check whether the v1.Route API is installed, not which vendor you are running on.
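To illustrate the decision, here is a minimal sketch. The function name `chooseExposureKind` and the hard-coded group string are illustrative, not part of any Operator SDK API; in a real Operator the group names would come from the discovery client, as shown in the next section:

```go
package main

import "fmt"

// chooseExposureKind is an illustrative helper: given the API group
// names discovered on the cluster, it returns "Route" when the
// OpenShift route group is available and "Ingress" otherwise.
func chooseExposureKind(apiGroups []string) string {
	for _, g := range apiGroups {
		if g == "route.openshift.io" {
			return "Route"
		}
	}
	return "Ingress"
}

func main() {
	openshift := []string{"apps", "route.openshift.io", "apiextensions.k8s.io"}
	minikube := []string{"apps", "networking.k8s.io", "apiextensions.k8s.io"}
	fmt.Println(chooseExposureKind(openshift)) // Route
	fmt.Println(chooseExposureKind(minikube))  // Ingress
}
```

Either way, the branch is driven by what the cluster actually serves, not by the vendor’s name.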

How to implement this approach

Let’s start by creating the discovery client:

    // Get a config to talk to the apiserver
    cfg, err := config.GetConfig()
    if err != nil {
        log.Error(err, "")
        os.Exit(1)
    }
    
    // Create the discoveryClient
    discoveryClient, err := discovery.NewDiscoveryClientForConfig(cfg)
    if err != nil {
        log.Error(err, "Unable to create discovery client")
        os.Exit(1)
    }

Note: The cfg *rest.Config is created in main.go by default in all projects built with the Operator SDK.

Now, we can search for all API resources as follows:

    // Get a list of all API groups and resources on the cluster
    apiGroup, apiResourceList, err := discoveryClient.ServerGroupsAndResources()
    if err != nil {
        log.Error(err, "Unable to get Group and Resources")
        os.Exit(1)
    }

Note that you can use the kubectl api-resources command to check which resources are available in the cluster:

$ kubectl api-resources
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
events                            ev                                          true         Event
limitranges                       limits                                      true         LimitRange
namespaces                        ns                                          false        Namespace
nodes                             no                                          false        Node
persistentvolumeclaims            pvc                                         true         PersistentVolumeClaim
persistentvolumes                 pv                                          false        PersistentVolume
pods                              po                                          true         Pod
podtemplates                                                                  true         PodTemplate
replicationcontrollers            rc                                          true         ReplicationController
resourcequotas                    quota                                       true         ResourceQuota
secrets                                                                       true         Secret
serviceaccounts                   sa                                          true         ServiceAccount
services                          svc                                         true         Service
mutatingwebhookconfigurations                  admissionregistration.k8s.io   false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io   false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io           false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io         false        APIService
...

Now, we can verify whether the resource is present on the cluster:

    // Looking for the group name "apiextensions.k8s.io"
    for _, group := range apiGroup {
        if group.Name == "apiextensions.k8s.io" {
            // found the API group
        }
    }

Note: It is possible to match on other attributes, such as Versions and PreferredVersion. For further information, check the APIGroup GoDoc.

Similarly, it is possible to search the apiResourceList result for a specific Kind:

    // Looking for the Kind "PersistentVolume"
    for _, list := range apiResourceList {
        for _, resource := range list.APIResources {
            if resource.Kind == "PersistentVolume" {
                // found the Kind
            }
        }
    }

Note: Check the APIResourceList GoDoc to see other options.
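Putting the two checks together, a small helper can answer the question “is this Kind served under this GroupVersion?”. The following is a sketch that models the discovery results as a plain map so it stays self-contained; in a real Operator you would build this map from the []*metav1.APIResourceList returned by ServerGroupsAndResources, and the helper name resourceExists is illustrative:

```go
package main

import "fmt"

// resourceExists is an illustrative helper: it models the discovery
// results as a map from GroupVersion (for example "v1" or
// "apiextensions.k8s.io/v1") to the Kinds served under it, and
// reports whether a given Kind is available.
func resourceExists(served map[string][]string, groupVersion, kind string) bool {
	for _, k := range served[groupVersion] {
		if k == kind {
			return true
		}
	}
	return false
}

func main() {
	served := map[string][]string{
		"v1":                      {"Pod", "Service", "PersistentVolume"},
		"apiextensions.k8s.io/v1": {"CustomResourceDefinition"},
	}
	fmt.Println(resourceExists(served, "v1", "PersistentVolume"))          // true
	fmt.Println(resourceExists(served, "route.openshift.io/v1", "Route")) // false
}
```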

How to get the cluster version information

The DiscoveryClient can also be used to get the cluster version. Here is an example implementation:

// getClusterVersion will create and use a DiscoveryClient
// to return the cluster version.
// More info: https://godoc.org/k8s.io/client-go/discovery#DiscoveryClient
func getClusterVersion(cfg *rest.Config) string {
	discoveryClient, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		log.Error(err, "Unable to create discovery client")
		os.Exit(1)
	}
	sv, err := discoveryClient.ServerVersion()
	if err != nil {
		log.Error(err, "Unable to get server version")
		os.Exit(1)
	}
	return sv.String()
}
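The returned string looks like "v1.20.2" for standard Kubernetes builds (vendor builds may append a suffix such as a commit hash). If, for example, a behavior should only be enabled from a minimum minor version on, the string can be parsed as in the following sketch; the helper name minorAtLeast and the version-format assumption are illustrative:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorAtLeast is an illustrative helper: it parses a server version
// string such as "v1.20.2" and reports whether the minor version is
// at least the given minimum.
func minorAtLeast(version string, minMinor int) bool {
	parts := strings.Split(strings.TrimPrefix(version, "v"), ".")
	if len(parts) < 2 {
		return false
	}
	// Strip any non-digit vendor suffix from the minor component.
	minorStr := strings.TrimFunc(parts[1], func(r rune) bool {
		return r < '0' || r > '9'
	})
	minor, err := strconv.Atoi(minorStr)
	if err != nil {
		return false
	}
	return minor >= minMinor
}

func main() {
	fmt.Println(minorAtLeast("v1.20.2", 16)) // true
	fmt.Println(minorAtLeast("v1.14.0", 16)) // false
}
```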

Keep in mind that these ideas allow you to manage your solutions dynamically and programmatically. They can also be useful for handling issues in specific scenarios, beyond the installation and configuration process.

Also, before finishing, I’d like to thank Joe Lanford and Jeff McCormick, who collaborated and provided feedback and input for this article.
