Why not couple an Operator's logic to a specific Kubernetes platform?

 

January 22, 2020
Camila Macedo
Related topics:
DevOps, Kubernetes, Operators
Related products:
Red Hat OpenShift


    You might find yourself in situations where you believe that certain logic should run only when your Operator is on a specific Kubernetes platform, so you probably want to know how to get the cluster vendor from the Operator. In this article, we will discuss why relying on the vendor is not a good idea, and we will show how to handle this kind of scenario instead.

    Why not develop solutions based on the vendor?

    Let's think about a scenario where an Operator should take further action only if the Operator Lifecycle Manager (OLM) is installed in the cluster. Could we simply check whether the cluster is an OpenShift platform, since OpenShift provides the OLM by default?

    No, we cannot. Although the OLM is installed and available by default on OpenShift 4.x, that assumption will not always hold, and it is essential to highlight that the OLM can also be installed on other vendors' clusters.

    Thus, this example gives a pretty good idea of the problems that can be caused by relying on such assumptions.

    What's the best approach, then?

    The best approach to handle this scenario is to look for the exact API resource that the solution requires, instead of checking for a resource that merely suggests whether the Operator is running on a specific Kubernetes platform.

    In the case above, we really want to know whether the OLM's Custom Resource Definitions (API resources), which are essential to the solution, are present in the cluster.

    Similarly, if you want to create a Route resource when the project is running on OpenShift and an Ingress otherwise (for example, when it is running on Minikube), then you should check whether the v1.Route API is available, not which vendor provided the cluster.

    How to implement this approach

    Let's start by creating the discovery client:

        // Assumes imports such as "os", "k8s.io/client-go/discovery",
        // "sigs.k8s.io/controller-runtime/pkg/client/config",
        // and a logr-style logger named log.

        // Get a config to talk to the apiserver
        cfg, err := config.GetConfig()
        if err != nil {
            log.Error(err, "Unable to get config")
            os.Exit(1)
        }

        // Create the discoveryClient
        discoveryClient, err := discovery.NewDiscoveryClientForConfig(cfg)
        if err != nil {
            log.Error(err, "Unable to create discovery client")
            os.Exit(1)
        }

    Note: The cfg *rest.Config is created in main.go by default in all projects scaffolded with the Operator SDK.

    Now, we can search for all API resources as follows:

        // Get a list of all APIs on the cluster
        apiGroup, apiResourceList, err := discoveryClient.ServerGroupsAndResources()
        if err != nil {
            log.Error(err, "Unable to get Groups and Resources")
            os.Exit(1)
        }
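
    One detail worth noting (the error handling below is a sketch and not part of the original snippet): ServerGroupsAndResources can return partially filled lists together with an error when discovery fails for only some API groups, for example when an aggregated API server is temporarily unreachable. A variant of the snippet above could tolerate that case instead of exiting:

        // Tolerate partial discovery failures instead of exiting.
        apiGroup, apiResourceList, err := discoveryClient.ServerGroupsAndResources()
        if err != nil {
            if discovery.IsGroupDiscoveryFailedError(err) {
                // Some groups could not be discovered, but the returned lists are still usable.
                log.Info("Partial API discovery failure", "error", err.Error())
            } else {
                log.Error(err, "Unable to get Groups and Resources")
                os.Exit(1)
            }
        }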

    Note that by using the kubectl api-resources command, it is possible to check the resources available in the cluster as follows:

    $ kubectl api-resources
    NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
    bindings                                                                      true         Binding
    componentstatuses                 cs                                          false        ComponentStatus
    configmaps                        cm                                          true         ConfigMap
    endpoints                         ep                                          true         Endpoints
    events                            ev                                          true         Event
    limitranges                       limits                                      true         LimitRange
    namespaces                        ns                                          false        Namespace
    nodes                             no                                          false        Node
    persistentvolumeclaims            pvc                                         true         PersistentVolumeClaim
    persistentvolumes                 pv                                          false        PersistentVolume
    pods                              po                                          true         Pod
    podtemplates                                                                  true         PodTemplate
    replicationcontrollers            rc                                          true         ReplicationController
    resourcequotas                    quota                                       true         ResourceQuota
    secrets                                                                       true         Secret
    serviceaccounts                   sa                                          true         ServiceAccount
    services                          svc                                         true         Service
    mutatingwebhookconfigurations                  admissionregistration.k8s.io   false        MutatingWebhookConfiguration
    validatingwebhookconfigurations                admissionregistration.k8s.io   false        ValidatingWebhookConfiguration
    customresourcedefinitions         crd,crds     apiextensions.k8s.io           false        CustomResourceDefinition
    apiservices                                    apiregistration.k8s.io         false        APIService
    ...

    Now, we can check whether the API group we need is present in the cluster:

        // Looking for the API group "apiextensions.k8s.io"
        name := "apiextensions.k8s.io"
        for _, group := range apiGroup {
            if group.Name == name {
                // found the API group
            }
        }

    Note: It is possible to use other attributes, such as the Group, Version, and Kind. For further information, check the APIGroup GoDoc.
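
    For instance, a minimal sketch (the group and version values below are only illustrative) that also inspects the Versions attribute of each APIGroup could look like this:

        // Check whether the group is served and, if so, at which versions.
        for _, group := range apiGroup {
            if group.Name == "apiextensions.k8s.io" {
                for _, v := range group.Versions {
                    if v.Version == "v1beta1" {
                        // the group is served at the version we need
                    }
                }
            }
        }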

    Similarly, it is possible to use the apiResourceList result to check for a specific Kind:

        // Looking for Kind = "PersistentVolume"
        kind := "PersistentVolume"
        for _, list := range apiResourceList {
            for _, resource := range list.APIResources {
                if resource.Kind == kind {
                    // found the Kind
                }
            }
        }

    Note: Check the APIResourceList GoDoc to see other options.
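
    Putting this together with the Route example from earlier, a small helper can drive the decision between creating a Route or an Ingress. This is a minimal sketch: the helper name hasRouteAPI and its error handling are illustrative and not from the original article:

        // hasRouteAPI reports whether the OpenShift Route API is served by the cluster.
        func hasRouteAPI(dc discovery.DiscoveryInterface) bool {
            resources, err := dc.ServerResourcesForGroupVersion("route.openshift.io/v1")
            if err != nil {
                // The group/version is not served (or discovery failed); fall back to Ingress.
                return false
            }
            for _, r := range resources.APIResources {
                if r.Kind == "Route" {
                    return true
                }
            }
            return false
        }

    With such a helper, the Operator creates a Route when it returns true and an Ingress otherwise, without ever asking which vendor provided the cluster.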

    How to get the cluster version information

    The DiscoveryClient can also be used to get the cluster version, as in the following example:

    // getClusterVersion will create and use a DiscoveryClient
    // to return the cluster version.
    // More info: https://godoc.org/k8s.io/client-go/discovery#DiscoveryClient
    func getClusterVersion(cfg *rest.Config) string {
    	discoveryClient, err := discovery.NewDiscoveryClientForConfig(cfg)
    	if err != nil {
    		log.Error(err, "Unable to create discovery client")
    		os.Exit(1)
    	}
    	sv, err := discoveryClient.ServerVersion()
    	if err != nil {
    		log.Error(err, "Unable to get server version")
    		os.Exit(1)
    	}
    	return sv.String()
    }
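
    As a usage sketch (the call site below is illustrative and not from the original article), the function can be called right after the config is loaded in main.go:

        // Hypothetical call site: log the cluster version during Operator start-up.
        log.Info("Cluster version", "version", getClusterVersion(cfg))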

    Keep in mind that these ideas can be beneficial and allow you to manage your solutions dynamically and programmatically. They can also be useful for handling issues in specific scenarios, beyond the installation and configuration process.

    Also, before finishing, I'd like to thank @Joe Lanford and @Jeff McCormick, who collaborated and provided feedback and input for this article.

    Last updated: February 5, 2024
