Overview
“Kubernetes has emerged as the world’s most powerful container orchestration platform, but its true power is hidden behind an extensible API and automation framework that will redefine how future platforms are built and operated; this book is the missing manual.”
—Kelsey Hightower, Technologist, Google Cloud
Kubernetes Operators are software extensions to Kubernetes that automate the management of an application's entire life cycle. Operators serve as a packaging mechanism for distributing applications on Kubernetes, and they monitor, maintain, recover, and upgrade the software they deploy.
Download Kubernetes Operators, a hands-on guide to automating container orchestration. This comprehensive book covers:
- How to deploy and use existing Operators
- How to create and distribute Operators for your applications
- Best practices for designing, building, and distributing Operators
- The relationship between Operators and site reliability engineering (SRE) principles
- How to deploy an Operator and observe its behavior during failures, scaling, and upgrades
- How to use the Operator SDK to build an Operator
- SRE concepts related to Operators, including reducing operational effort and cost, increasing service reliability, and fostering innovation by reducing repetitive maintenance tasks
Excerpt
An Operator is a way to package, run, and maintain a Kubernetes application. A Kubernetes application is not only deployed on Kubernetes, it is designed to use and to operate in concert with Kubernetes facilities and tools.
An Operator builds on Kubernetes abstractions to automate the entire lifecycle of the software it manages. Because they extend Kubernetes, Operators provide application-specific automation in terms familiar to a large and growing community. For application programmers, Operators make it easier to deploy and run the foundation services on which their apps depend. For infrastructure engineers and vendors, Operators provide a consistent way to distribute software on Kubernetes clusters and reduce support burdens by identifying and correcting application problems before the pager beeps.
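To make "terms familiar to the community" concrete, the following is a minimal sketch of what an Operator's application-specific API object can look like as a Go type. The names here (StaticSite, its fields, and the values in main) are illustrative assumptions, not taken from the book; the point is only that an Operator describes an application in the same declarative shape Kubernetes uses for its built-in objects.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// StaticSiteSpec holds the desired state a user declares for a
// hypothetical application managed by an Operator.
type StaticSiteSpec struct {
	Replicas int32  `json:"replicas"`
	Version  string `json:"version"`
}

// StaticSite is a hypothetical custom resource. It embeds the same
// metadata types as built-in Kubernetes objects, so it can be created,
// listed, and watched like any other resource.
type StaticSite struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec StaticSiteSpec `json:"spec,omitempty"`
}

func main() {
	site := StaticSite{
		ObjectMeta: metav1.ObjectMeta{Name: "docs"},
		Spec:       StaticSiteSpec{Replicas: 3, Version: "1.2.0"},
	}
	fmt.Printf("%s wants %d replicas at version %s\n",
		site.Name, site.Spec.Replicas, site.Spec.Version)
}
```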
Before we begin to describe how Operators do these jobs, let’s define a few Kubernetes terms to provide context and a shared language to describe Operator concepts and components.
How Kubernetes Works
Kubernetes automates the lifecycle of a stateless application, such as a static web server. Without state, any instances of an application are interchangeable. This simple web server retrieves files and sends them on to a visitor’s browser. Because the server is not tracking state or storing input or data of any kind, when one server instance fails, Kubernetes can replace it with another. Kubernetes refers to these instances, each a copy of an application running on the cluster, as replicas.
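As a rough sketch of how that replication is declared, the Go snippet below builds a Deployment object asking Kubernetes to keep three interchangeable replicas of a stateless web server running. The Deployment resource, the "static-web" name, and the nginx image are assumptions for illustration and are not part of the excerpt; a real program would also submit the object to the API server, which is omitted here.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Declare the desired state: three replicas of a stateless web server.
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "static-web"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(3),
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "static-web"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"app": "static-web"},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{Name: "web", Image: "nginx:1.25"},
					},
				},
			},
		},
	}
	fmt.Printf("want %d replicas of %s\n",
		*deployment.Spec.Replicas, deployment.Name)
}
```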
A Kubernetes cluster is a collection of computers, called nodes. All cluster work runs on one, some, or all of a cluster’s nodes. The basic unit of work, and of replication, is the pod. A pod is a group of one or more Linux containers with common resources like networking, storage, and access to shared memory.
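The sketch below shows what those shared resources look like in a pod definition, again built as a Go object rather than applied to a cluster: two containers in one pod mount the same emptyDir volume and share the pod's network namespace. The container names, images, and volume name are illustrative assumptions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One pod, two containers: both mount the "cache" volume and can
	// reach each other over localhost because they share networking.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "shared-pod"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "cache", VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				}},
			},
			Containers: []corev1.Container{
				{
					Name:         "web",
					Image:        "nginx:1.25",
					VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/var/cache"}},
				},
				{
					Name:         "sidecar",
					Image:        "busybox:1.36",
					Command:      []string{"sh", "-c", "sleep infinity"},
					VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/var/cache"}},
				},
			},
		},
	}
	fmt.Printf("pod %s runs %d containers\n", pod.Name, len(pod.Spec.Containers))
}
```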
The Kubernetes pod documentation is a good starting point for more information about the pod abstraction.
At a high level, a Kubernetes cluster can be divided into two planes. The control plane is, in simple terms, Kubernetes itself. A collection of pods comprises the control plane and implements the Kubernetes application programming interface (API) and cluster orchestration logic.
The application plane, or data plane, is everything else. It is the group of nodes where application pods run. One or more nodes are usually dedicated to running applications, while one or more nodes are often sequestered to run only control plane pods. As with application pods, multiple replicas of control plane components can run on multiple controller nodes to provide redundancy.
The controllers of the control plane implement control loops that repeatedly compare the desired state of the cluster to its actual state. When the two diverge, a controller takes action to make them match. Operators extend this behavior. The schematic in Figure 1-1 shows the major control plane components, with worker nodes running application workloads.
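The control-loop pattern itself is simple enough to sketch in a few lines of plain Go. This is not the code of any real controller; the state struct and reconcile function are stand-ins, assumed for illustration, for whatever a controller watches, such as a declared replica count versus the pods actually running.

```go
package main

import (
	"fmt"
	"time"
)

// state is a toy stand-in for the condition a controller cares about.
type state struct {
	replicas int
}

// reconcile observes the actual state, compares it to the desired state,
// and takes one step toward making them match.
func reconcile(desired, actual state) state {
	switch {
	case actual.replicas < desired.replicas:
		fmt.Println("too few replicas: creating one")
		actual.replicas++
	case actual.replicas > desired.replicas:
		fmt.Println("too many replicas: deleting one")
		actual.replicas--
	default:
		fmt.Println("desired and actual states match: nothing to do")
	}
	return actual
}

func main() {
	desired := state{replicas: 3}
	actual := state{replicas: 1}

	// A real controller runs forever, driven by watch events from the API
	// server; here we simply loop until the two states converge.
	for actual != desired {
		actual = reconcile(desired, actual)
		time.Sleep(100 * time.Millisecond)
	}
	fmt.Println("converged")
}
```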
While a strict division between the control and application planes is a convenient mental model and a common way to deploy a Kubernetes cluster to segregate workloads, the control plane components are a collection of pods running on nodes, like any other application. In small clusters, control plane components often share a node or two with application workloads.
The conceptual model of a cordoned control plane isn't quite so tidy, either. The kubelet agent running on every node is part of the control plane, for example. Likewise, an Operator is a type of controller, usually thought of as a control plane component. Operators can blur the border between the two planes, however. Treating the control and application planes as isolated domains is a helpful simplifying abstraction, not an absolute truth.