
Using the Kubernetes Client for Go

November 25, 2016
Mike Dame
Related topics: Containers, Developer Tools, DevOps, Go, Kubernetes


    The Kubernetes client package for Go gives developers a vast range of functions for accessing data and resources in a cluster. Taking advantage of those capabilities lets you build powerful controllers that monitor and manage your cluster, beyond what stock OpenShift or Kubernetes setups offer.

    For example, the PodInterface allows you to list, update, delete, or get specific pods, either in a single namespace or across all namespaces. Similar interfaces exist for many other cluster resource types, such as ReplicationControllers and ResourceQuotas.

    // These are the imports used throughout this article
    import (
       "log"
       "time"
    
       "github.com/openshift/origin/pkg/client/cache"
       "github.com/openshift/origin/pkg/cmd/util/clientcmd"
    
       "github.com/spf13/pflag"
       kapi "k8s.io/kubernetes/pkg/api"
       kcache "k8s.io/kubernetes/pkg/client/cache"
       kclient "k8s.io/kubernetes/pkg/client/unversioned"
       "k8s.io/kubernetes/pkg/runtime"
       "k8s.io/kubernetes/pkg/watch"
    )
    
    func main() {
       var kubeClient kclient.Interface
       config, err := clientcmd.DefaultClientConfig(pflag.NewFlagSet("empty", pflag.ContinueOnError)).ClientConfig()
       if err != nil {
          log.Fatalf("Error creating cluster config: %s", err)
       }
    
       kubeClient, err = kclient.New(config)
       if err != nil {
          log.Fatalf("Error creating client: %s", err)
       }
    
       // Scope the PodInterface to a single namespace (the default namespace here)
       podInterface := kubeClient.Pods(kapi.NamespaceDefault)
       podList, err := podInterface.List(kapi.ListOptions{})
       if err != nil {
          log.Fatalf("Error listing pods: %s", err)
       }
       // podList is consumed by the deletion loop in the next snippet
    }

    And that PodInterface can be used to directly operate on resources in the cluster, such as deleting pods:

    for _, pod := range podList.Items {
       err = podInterface.Delete(pod.Name, &kapi.DeleteOptions{})
    
       if err != nil {
          log.Printf("Error: %s", err)
       }
    }
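
    The same interface can also fetch and update a single pod. Here is a minimal sketch, assuming the unversioned client’s Get(name) and Update(pod) signatures from this era; the pod name and the label key are hypothetical:

    // A sketch of reading and updating one pod; "some-pod" and the
    // "audited" label key are hypothetical examples.
    pod, err := podInterface.Get("some-pod")
    if err != nil {
       log.Printf("Error getting pod: %s", err)
    } else {
       if pod.Labels == nil {
          pod.Labels = map[string]string{}
       }
       pod.Labels["audited"] = "true"
       // Update sends the modified object back to the API server
       if _, err := podInterface.Update(pod); err != nil {
          log.Printf("Error updating pod: %s", err)
       }
    }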

    When the client is combined with a ListWatch from Kubernetes’ cache package, you can easily monitor and handle incoming events in the cluster for that type of object. To store and process those events at your leisure, Kubernetes provides a DeltaFIFO struct, and OpenShift’s cache package provides an EventQueue struct, which expands on DeltaFIFO’s use cases for processing object change events.

    podQueue := cache.NewEventQueue(kcache.MetaNamespaceKeyFunc)
    
    podLW := &kcache.ListWatch{
       ListFunc: func(options kapi.ListOptions) (runtime.Object, error) {
          return kubeClient.Pods(kapi.NamespaceAll).List(options)
       },
       WatchFunc: func(options kapi.ListOptions) (watch.Interface, error) {
          return kubeClient.Pods(kapi.NamespaceAll).Watch(options)
       },
    }
    kcache.NewReflector(podLW, &kapi.Pod{}, podQueue, 0).Run()
    
    go func() {
       for {
          event, pod, err := podQueue.Pop()
          if err != nil {
             log.Fatalf("Error popping pod event: %s", err)
          }
          if err := handlePod(event, pod.(*kapi.Pod)); err != nil {
             log.Fatalf("Error handling pod event: %s", err)
          }
       }
    }()
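
    Because Pop hands back a plain interface{}, the pod.(*kapi.Pod) assertion above will panic if something unexpected lands in the queue. A minimal defensive sketch, assuming the same EventQueue API (the processOne name is hypothetical):

    // processOne is a hypothetical helper that pops one event and guards
    // the type assertion, so an unexpected object logs an error instead
    // of panicking the goroutine.
    func processOne(podQueue *cache.EventQueue) {
       event, obj, err := podQueue.Pop()
       if err != nil {
          log.Printf("Error popping pod event: %s", err)
          return
       }
       pod, ok := obj.(*kapi.Pod)
       if !ok {
          log.Printf("Unexpected object type in queue: %T", obj)
          return
       }
       if err := handlePod(event, pod); err != nil {
          log.Printf("Error handling pod event: %s", err)
       }
    }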

    The event types also let you handle creations, modifications, and deletions differently:

    func handlePod(eventType watch.EventType, pod *kapi.Pod) error {
       switch eventType {
       case watch.Added:
          log.Printf("Pod %s added!", pod.Name)
       case watch.Modified:
          log.Printf("Pod %s modified!", pod.Name)
       case watch.Deleted:
          log.Printf("Pod %s deleted!", pod.Name)
       }
       return nil
    }

    Putting it all together in an example, say you wanted to restrict a certain namespace from creating new pods during specific hours. The full code could look like this:

    import (
       "log"
       "time"
    
       "github.com/openshift/origin/pkg/client/cache"
       "github.com/openshift/origin/pkg/cmd/util/clientcmd"
    
       "github.com/spf13/pflag"
       kapi "k8s.io/kubernetes/pkg/api"
       kcache "k8s.io/kubernetes/pkg/client/cache"
       kclient "k8s.io/kubernetes/pkg/client/unversioned"
       "k8s.io/kubernetes/pkg/runtime"
       "k8s.io/kubernetes/pkg/watch"
    )
    
    func main() {
       var kubeClient kclient.Interface
       config, err := clientcmd.DefaultClientConfig(pflag.NewFlagSet("empty", pflag.ContinueOnError)).ClientConfig()
       if err != nil {
          log.Fatalf("Error creating cluster config: %s", err)
       }
    
       kubeClient, err = kclient.New(config)
       if err != nil {
          log.Fatalf("Error creating client: %s", err)
       }
       podQueue := cache.NewEventQueue(kcache.MetaNamespaceKeyFunc)
    
       podLW := &kcache.ListWatch{
          ListFunc: func(options kapi.ListOptions) (runtime.Object, error) {
             return kubeClient.Pods(kapi.NamespaceAll).List(options)
          },
          WatchFunc: func(options kapi.ListOptions) (watch.Interface, error) {
             return kubeClient.Pods(kapi.NamespaceAll).Watch(options)
          },
       }
       kcache.NewReflector(podLW, &kapi.Pod{}, podQueue, 0).Run()
    
       go func() {
          for {
             event, pod, err := podQueue.Pop()
             if err != nil {
                log.Fatalf("Error popping pod event: %s", err)
             }
             if err := handlePod(event, pod.(*kapi.Pod), kubeClient); err != nil {
                log.Fatalf("Error handling pod event: %s", err)
             }
          }
       }()
    
       // Block forever so the event-handling goroutine keeps running
       select {}
    }
    
    func handlePod(eventType watch.EventType, pod *kapi.Pod, kubeClient kclient.Interface) error {
       switch eventType {
       case watch.Added:
          log.Printf("Pod %s added!", pod.Name)
          if pod.Namespace == "namespaceWeWantToRestrict" {
             hour := time.Now().Hour()
             if hour >= 5 && hour <= 10 {
                err := kubeClient.Pods(pod.Namespace).Delete(pod.Name, &kapi.DeleteOptions{})
                if err != nil {
                   log.Printf("Error deleting pod %s: %s", pod.Name, err)
                }
             }
          }
       case watch.Modified:
          log.Printf("Pod %s modified!", pod.Name)
       case watch.Deleted:
          log.Printf("Pod %s deleted!", pod.Name)
       }
       return nil
    }

    Of course, if this project uses a ReplicationController, the pods we delete will just be recreated for as long as the restricted window lasts, which is undesirable. In that case, you might also want to use a ReplicationControllerInterface to scale down any RCs in the namespace:

    if pod.Namespace == "namespaceWeWantToRestrict" {
       hour := time.Now().Hour()
       if hour >= 5 && hour <= 10 {
          rcList, err := kubeClient.ReplicationControllers(pod.Namespace).List(kapi.ListOptions{})
          if err != nil {
             log.Printf("Error getting RCs in namespace %s: %s", pod.Namespace, err)
          }
    
          for _, rc := range rcList.Items {
             rc.Spec.Replicas = 0
             _, err := kubeClient.ReplicationControllers(pod.Namespace).Update(&rc)
             if err != nil {
                log.Printf("Error scaling RC %s: %s", rc.Name, err)
             }
          }
    
          err = kubeClient.Pods(pod.Namespace).Delete(pod.Name, &kapi.DeleteOptions{})
          if err != nil {
             log.Printf("Error deleting pod %s: %s", pod.Name, err)
          }
       }
    }
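
    One subtlety worth noting: time.Now().Hour() uses the node’s local time zone, so the restricted window shifts depending on where the controller runs. A small helper makes the window explicit; this is a sketch, and both the function name and the choice of UTC are assumptions rather than part of the original:

    // inRestrictedWindow is a hypothetical helper. Evaluating the hour in
    // UTC (an assumption, not from the original) keeps the window
    // independent of the node's configured time zone.
    func inRestrictedWindow(t time.Time) bool {
       hour := t.UTC().Hour()
       return hour >= 5 && hour <= 10
    }

    The two checks above could then both read if inRestrictedWindow(time.Now()) { ... }.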

    You may find more practical uses for the Kubernetes client than this, but it’s a good showcase of just how easy it is to interact with your cluster from a small (or large) controller of your own. Using a ListWatch lets you react dynamically to incoming events, rather than trying to predict every situation you need to control.

    Last updated: September 3, 2019
