Kubernetes Operators 101, Part 2: How operators work

June 22, 2021
Bobby Woolf
Related topics:
Containers, Kubernetes, Operators
Related products:
Red Hat OpenShift

    Kubernetes Operators might seem complex at first, but if you understand Kubernetes, you can easily extend that knowledge to Kubernetes Operators. Part 1 of this series provided an overview of Kubernetes Operators and what they do. This article explains aspects of how a Kubernetes cluster works, including the structure of a cluster, how workloads are managed, and the reconciliation process.

    The structure of Kubernetes Operators

    A Kubernetes cluster treats an operator like an application that's deployed as a workload. This specialized application manages another resource, such as another application hosted in Kubernetes.

    An operator manages an operand using a set of managed resources:

    • Operand: The managed resource that the operator provides as a service.
    • Managed resources: The Kubernetes objects that an operator uses to create the operand.

    Out of the box, Kubernetes is good at managing stateless workloads. They are similar enough that Kubernetes uses the same logic to manage all of them. Stateful workloads are more complex, and each one is different, requiring custom management. This is where operators come in.

    A basic operator consists of the components depicted in Figure 1.

    Figure 1: The operator structure.

    The following components form the three main parts of an operator:

    • API: The data that describes the operand’s configuration. The API includes:
      • Custom resource definition (CRD), which defines a schema of settings available for configuring the operand.
      • Programmatic API, which defines the same data schema as the CRD and is implemented using the operator’s programming language, such as Go.
      • Custom resource (CR), which specifies values for the settings defined by the CRD; these values describe the configuration of an operand.
    • Controller: The brains of the operator. The controller creates managed resources based on the description in the custom resource; controllers are implemented using the operator’s programming language, such as Go.
    • Role and service accounts: Kubernetes role-based access control (RBAC) resources with permissions that allow the controller to create the managed resources.

    A particular operator can be much more complex, but it will still contain this basic structure.
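
    As a concrete illustration, here is a minimal, hedged sketch of the programmatic API for a hypothetical Memcached operand (the same example kind mentioned later in this article), assuming a Go-based operator scaffolded in the kubebuilder/Operator SDK style. The field names and API group are invented for illustration only.

        package v1alpha1

        import (
                metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        )

        // MemcachedSpec mirrors the schema of settings that the CRD exposes.
        // A custom resource (CR) fills in these fields to describe the desired
        // configuration of the operand.
        type MemcachedSpec struct {
                // Size is the desired number of operand replicas.
                Size int32 `json:"size"`
        }

        // MemcachedStatus is where the controller records the currently
        // observed state of the managed resources.
        type MemcachedStatus struct {
                ReadyReplicas int32 `json:"readyReplicas,omitempty"`
        }

        // Memcached is the programmatic form of the custom resource: the same
        // data schema the CRD defines, expressed in the operator's programming
        // language (deepcopy methods are generated by the SDK and omitted here).
        type Memcached struct {
                metav1.TypeMeta   `json:",inline"`
                metav1.ObjectMeta `json:"metadata,omitempty"`

                Spec   MemcachedSpec   `json:"spec,omitempty"`
                Status MemcachedStatus `json:"status,omitempty"`
        }

    The CRD registered with the cluster describes the same schema in OpenAPI form, and each CR supplies concrete values for it.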

    The Kubernetes architecture

    A Kubernetes cluster consists of a control plane, worker nodes, and a cloud provider API, as Figure 2 shows.

    Figure 2: The Kubernetes architecture.

    The following components form the two main parts of a cluster:

    • Worker nodes: The computers that run the workloads.
    • Control plane: The components that manage the cluster, its nodes, and workloads.
      • API server: An API for the control plane that clients use to manage the cluster.
      • Controller manager: Runs the controller processes; each controller has a specific responsibility as part of managing the cluster.

    Operators use only the worker nodes and control plane components.

    Operators run in the worker nodes, yet they implement controllers, a role that otherwise belongs to the control plane. Because operators run their controllers in the worker nodes, they effectively extend the control plane into the worker nodes.

    A cluster always has two states: desired and current. The desired state represents objects that should exist in the cluster. The current state represents the objects that actually exist. Controllers manage a cluster’s state, reconciling the current state to match the desired state. Kubernetes controllers run in the control plane.

    Almost every Kubernetes object includes two nested fields that store the object's desired and current state: the specification (represented in YAML by the spec section) and the status (represented in YAML by the status section). These two fields are what an operator's controller uses to reconcile its operands. When you want to update the desired state, you update the settings in the specification field of the custom resource. After the cluster has updated the operand, the controller saves the currently observed state of the managed resources in the status field, which becomes the custom resource's record of the current state.
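
    As a small sketch of that division of labor, assuming the hypothetical Memcached types above and the standard k8s.io/api types, a controller reads spec and writes status, never the reverse:

        package controller

        import (
                appsv1 "k8s.io/api/apps/v1"

                cachev1alpha1 "example.com/memcached-operator/api/v1alpha1" // hypothetical module path
        )

        // syncStatus records the currently observed state of the managed
        // Deployment in the custom resource's status field. The desired state
        // lives in spec, which the controller reads but never writes.
        func syncStatus(cr *cachev1alpha1.Memcached, dep *appsv1.Deployment) {
                // cr.Spec.Size is the desired state, set by whoever created the CR.
                // dep.Status.ReadyReplicas is the observed state of the managed resource.
                cr.Status.ReadyReplicas = dep.Status.ReadyReplicas
        }

    In a controller-runtime-based operator, the controller would then persist the change with a status update call such as r.Status().Update(ctx, cr).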

    Deploying workloads in Kubernetes

    The way an operator deploys and manages a workload is similar to how an administrator deploys and manages a workload. A basic workload deployed into a Kubernetes cluster has the structure shown in Figure 3.

    Figure 3: Kubernetes workload structure.

    The workload consists of a Deployment that runs a set of Pod replicas, each of which runs a duplicate Container. The Deployment is exposed as a Service, which provides a single fixed endpoint for clients to invoke behavior in the set of replicas.
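
    Expressed in Go, the language an operator's controller would typically use, that structure might be built roughly as follows. This is a hedged sketch: the names, container image, and port are assumptions for a hypothetical memcached workload, not something specified in this article.

        package controller

        import (
                appsv1 "k8s.io/api/apps/v1"
                corev1 "k8s.io/api/core/v1"
                metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
                "k8s.io/apimachinery/pkg/util/intstr"
        )

        // newWorkload builds the basic workload shown in Figure 3: a Deployment
        // that runs a set of identical Pod replicas, exposed by a Service.
        func newWorkload(name, namespace, image string, replicas int32) (*appsv1.Deployment, *corev1.Service) {
                labels := map[string]string{"app": name}

                dep := &appsv1.Deployment{
                        ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
                        Spec: appsv1.DeploymentSpec{
                                Replicas: &replicas, // number of Pod replicas
                                Selector: &metav1.LabelSelector{MatchLabels: labels},
                                Template: corev1.PodTemplateSpec{
                                        ObjectMeta: metav1.ObjectMeta{Labels: labels},
                                        Spec: corev1.PodSpec{
                                                // each replica runs a duplicate container
                                                Containers: []corev1.Container{{Name: name, Image: image}},
                                        },
                                },
                        },
                }

                svc := &corev1.Service{
                        ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
                        Spec: corev1.ServiceSpec{
                                Selector: labels, // one fixed endpoint in front of the replicas
                                Ports:    []corev1.ServicePort{{Port: 11211, TargetPort: intstr.FromInt(11211)}},
                        },
                }
                return dep, svc
        }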

    How operators deploy a workload

    An operator deploys a workload in very much the same way that a human administrator (or a build pipeline) does. Operators are, after all, meant to automate tasks done by IT administrators.

    As Figure 4 illustrates, the Kubernetes API does not know whether the client is an admin or an operator, so the cluster deploys the workload the same way. Everything that happens in the control plane is the same.

    Figure 4: How workloads are deployed.

    Kubernetes’ workload deployments

    An administrator uses client tools such as the kubectl CLI, together with YAML files, to tell the cluster what resources to create, for example the resources that deploy a workload. When an admin runs a command like kubectl apply -f my-manifest.yaml, what actually happens?

    • The client tool talks to the Kubernetes API server, which is the interface to the control plane.
    • The API performs its commands by changing the cluster’s desired state, such as adding a new resource described by my-manifest.yaml.
    • The controllers in the control plane make changes to the cluster’s current state to make it match the desired state.

    Voilà! A workload is deployed.

    Operators’ workload deployment

    When an operator deploys a workload, it does pretty much the same thing:

    • The CR acts like the administrator's YAML file, providing an abstract description of the resource that should be deployed.
    • The controller uses its API to read the CR and uses the Kubernetes API to create the resource described by the CR, much like an admin running kubectl commands.

    The Kubernetes API doesn't know whether its client is an admin using client tools or an operator running a controller. Either way, it performs the client's commands by updating the desired state, which Kubernetes' controllers use to update the current state. In this way, the operator does what the admin would do, but in an automated way that's encapsulated in its controller's implementation.
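
    A minimal sketch of that flow, assuming a controller-runtime-based operator and reusing the hypothetical Memcached type and newWorkload helper sketched earlier (error handling and updates of existing resources are trimmed for brevity):

        package controller

        import (
                "context"

                "k8s.io/apimachinery/pkg/api/errors"
                ctrl "sigs.k8s.io/controller-runtime"
                "sigs.k8s.io/controller-runtime/pkg/client"

                cachev1alpha1 "example.com/memcached-operator/api/v1alpha1" // hypothetical module path
        )

        // MemcachedReconciler is the operator's controller.
        type MemcachedReconciler struct {
                client.Client
        }

        // Reconcile reads the custom resource (the operator's equivalent of the
        // admin's YAML file) and asks the Kubernetes API to create the managed
        // resources it describes, much like an admin running kubectl commands.
        func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
                // Read the CR that describes the desired operand configuration.
                var cr cachev1alpha1.Memcached
                if err := r.Get(ctx, req.NamespacedName, &cr); err != nil {
                        return ctrl.Result{}, client.IgnoreNotFound(err)
                }

                // Build the managed resources from the CR's settings.
                dep, svc := newWorkload(cr.Name, cr.Namespace, "memcached:1.6", cr.Spec.Size)

                // Create them through the Kubernetes API; the control plane takes it from here.
                for _, obj := range []client.Object{dep, svc} {
                        if err := r.Create(ctx, obj); err != nil && !errors.IsAlreadyExists(err) {
                                return ctrl.Result{}, err
                        }
                }
                return ctrl.Result{}, nil
        }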

    How operators reconcile Kubernetes cluster states

    Remember that a Kubernetes cluster always has two states. The desired state represents objects that should exist in the cluster, and the current state represents the objects that actually exist.

    In Kubernetes Operators, controllers manage a cluster’s state, reconciling the current state to match the desired state. Kubernetes controllers run in the control plane. The following section examines how reconciliation happens in regular Kubernetes workloads, then how Kubernetes Operators extend that reconciliation into the worker nodes.

    The reconciliation loop in the control plane

    In a typical Kubernetes cluster, a controller manager runs controllers in a reconciliation loop in the control plane. Each controller is responsible for managing a specific part of the cluster's behavior. The controller manager runs a control loop that allows each controller to run by invoking its Reconcile() method.

    When a controller reconciles, its task is to adjust the current state to make it match the desired state. Therefore, the control loop in the controller manager is a reconciliation loop, as the diagram in Figure 5 shows.

    Figure 5: The Kubernetes reconciliation loop.
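
    The following is a conceptual sketch only, not actual Kubernetes code: real controllers are event-driven (they watch the API server for changes rather than polling on a timer), but the net effect is the loop that Figure 5 depicts.

        package manager

        import "time"

        // Controller is anything that can reconcile the part of the cluster it
        // manages: adjust current state until it matches desired state.
        type Controller interface {
                Reconcile()
        }

        // runReconciliationLoop mimics what the controller manager does:
        // repeatedly give every controller a chance to reconcile itself.
        func runReconciliationLoop(controllers []Controller) {
                for {
                        for _, c := range controllers {
                                c.Reconcile()
                        }
                        time.Sleep(time.Second) // illustrative pacing only
                }
        }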

    The reconciliation loop in the worker nodes

    While Kubernetes controllers run in the control plane, the operators' controllers run in the worker nodes. This is because an operator is deployed into a Kubernetes cluster as a workload. And just like any other workload, the cluster hosts an operator's workload in the worker nodes.

    Each operator extends the reconciliation loop by adding its custom controller to the controller manager's list of controllers (see Figure 6).

    Figure 6: The reconciliation loop with operators.

    When the controller manager runs the reconciliation loop, it does two things:

    1. Tells each controller in the control plane to reconcile itself.
    2. Tells each operator’s custom controller to reconcile itself.

    And as with a standard controller, Reconcile() allows the custom controller to react to any changes since the last time it reconciled itself.

    Reconcile states

    So far, we've covered the relationship between a cluster's desired state and its current state and how a controller reconciles between those two states for the resources it manages.

    The way Kubernetes' controllers and an operator's custom controllers reconcile is analogous, as Figure 7 illustrates.

    Figure 7: Reconcile states.

    Operator controllers work one level of abstraction higher than the Kubernetes controllers. The Kubernetes controllers reconcile built-in kinds like Deployment and Job into lower-level kinds like Pod. Custom controllers reconcile custom kinds defined by CRDs, such as Memcached and Etcd, into workload kinds like Deployment and Service. So a custom controller's current state becomes a Kubernetes controller's desired state.

    Both kinds of controllers reconcile between the desired and current state, but it takes two rounds of transformation to deploy a workload for an operator's custom resource:

    1. The operator’s controller transforms the custom resource into a set of managed resources (that is, the workload) that are the operator’s current state but are also the control plane’s desired state.
    2. The Kubernetes controllers transform the managed resources into running pods (that is, the operand), which form the control plane’s current state.

    Summary

    This article has demonstrated how you can extend your understanding of Kubernetes to Kubernetes Operators. Operators work like Kubernetes in several respects:

    • An operator's brain is a controller whose responsibilities are like those of a typical Kubernetes controller in the control plane.
    • The way an operator deploys a workload is similar to how an administrator deploys a workload. The control plane doesn't know the difference.
    • The control plane implements a reconciliation loop that allows each controller to reconcile itself, and operators add their controllers to that loop.
    • Kubernetes controllers and custom controllers both reconcile a desired state into a current state. An operator's desired state is a custom resource, which its controller reconciles into a current state consisting of managed resources; the Kubernetes controllers then use those managed resources as their desired state.

    With this knowledge, you'll be better prepared to write your own operators and understand how they work as a part of Kubernetes.

    Last updated: August 24, 2022

    Related Posts

    • Kubernetes Operators 101, Part 1: Overview and key features

    • Operator SDK: Build Kubernetes Operators and deploy them on OpenShift

    • Deploying Kubernetes Operators with Operator Lifecycle Manager bundles
