Kubernetes Operators are built from several parts and components. This cross-referenced guide lists the components you need to know to get started developing operators with the Operator Framework. You'll find a handy list of the links used at the end.
Note
Excerpts in this guide are in Go. Tools used are part of the Operator Framework.
What is an Operator?
An Operator—aka a Kubernetes-native application—is software running and configured in a Kubernetes-based cluster, adhering to the Operator Pattern. We can write operators in any language Kubernetes has a client for. The Operator Framework offers two SDKs: Operator SDK for Go and Java Operator SDK for Java.
An operator typically describes an API and configures a manager to run controllers. Operators are deployed like any other application, using resources. As such, operators can be deployed manually or using Helm. With the Operator Framework, they are often installed using the framework's Operator Lifecycle Manager, OLM.
What is a manager?
A manager is used for creating and running our operator. We create a manager using the NewManager utility function. We use the manager's Options for configuring the various aspects of our manager instance, e.g., Scheme, Cache, Metrics, LeaderElection, HealthProbe, and WebhookServer.
kubeConfig := config.GetConfigOrDie()
mgr, err := ctrl.NewManager(kubeConfig, ctrl.Options{
    // manager options go here
})
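For illustration, here is a hedged sketch of a manager configured with a few of those options; the field names follow recent controller-runtime versions, and the scheme, address, and election ID used here are assumptions for this example.
scheme := runtime.NewScheme() // see the scheme section below
mgr, err := ctrl.NewManager(config.GetConfigOrDie(), ctrl.Options{
    Scheme:                 scheme,
    HealthProbeBindAddress: ":8081",                          // assumed probe endpoint address
    LeaderElection:         true,
    LeaderElectionID:       "our-operator.group.example.com", // assumed unique election ID
})
if err != nil {
    // handle the error, e.g., log and exit
}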
Once created, we use our manager instance to create one or more controllers, a health probe, and a webhook server before using our manager's Start receiver function to run everything, which marks the start of our operator run.
// we need to include the controllers and all configurations before we start
err := mgr.Start(ctx)
What is an API?
We extend the Kubernetes API using CustomResourceDefinition (CRD) resources. An instance of a CRD type is called a CustomResource (CR). Resources, in general, are identified using a GroupVersionKind (GVK). The version part is reflected in our code layout as part of our API package, and we'll typically have a subpackage per version. From a code perspective, a CRD must implement the Object interface and have a metadata field. We use ObjectMeta as the metadata field, and TypeMeta for providing various functions related to GVK and marshaling, including the required GetObjectKind function (more on the Object interface next).
Note
Note the marshaling markers used for the serialization of our types.
type OurCustomAPI struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`
}
We also need an API List type, used for encapsulating a list of our type instances when fetching lists. The List type implements the Object interface as well. We again use TypeMeta for the implementation of GetObjectKind. The metadata field is a ListMeta. A List type also requires an items field for encapsulating an array of instances.
type OurCustomAPIList struct {
    metav1.TypeMeta `json:",inline"`
    metav1.ListMeta `json:"metadata,omitempty"`
    Items []OurCustomAPI `json:"items"`
}
API types will typically have a spec field for declaratively describing requirements and a status field for controllers to report back their operation status.
type OurCustomAPI struct {
    // meta objects removed for brevity
    Spec OurCustomAPISpec `json:"spec,omitempty"`
    Status OurCustomAPIStatus `json:"status,omitempty"`
}

type OurCustomAPISpec struct {
    // spec fields go here
}

type OurCustomAPIStatus struct {
    // status fields go here
}
Next, we must implement the DeepCopyObject receiver function to fully implement the Object interface. This is considered boilerplate and can be generated, alongside other useful copy-related functions, by including object generation markers in our code and running the controller-gen tool, which generates the boilerplate Go code in a file named zz_generated.deepcopy.go right next to our types code.
// +kubebuilder:object:root=true
type OurCustomAPI struct {
    // fields removed for brevity
}

// +kubebuilder:object:root=true
type OurCustomAPIList struct {
    // fields removed for brevity
}
We also use CRD generation markers and CRD validation markers in our code, running the controller-gen tool to generate our CRD manifests in the project's config/crd folder; these manifests will later be used for deploying our operator.
// +kubebuilder:object:root=true
// +kubebuilder:resource:scope=Namespaced,shortName=ocapi
type OurCustomAPI struct {
    // fields removed for brevity
    Spec OurCustomAPISpec `json:"spec,omitempty"`
}

type OurCustomAPISpec struct {
    // +kubebuilder:default=1
    // +kubebuilder:validation:Minimum=1
    // +kubebuilder:validation:Maximum=3
    Replicas *int32 `json:"replicas,omitempty"`
}
We must regenerate whenever our types are modified.
As described, a CR holds the requirements in its spec and triggers one or more controllers that may reflect their operation status asynchronously in its status field. Further endpoints, called subresources, can be coupled with our custom resource.
What are subresources?
Subresources are specific simple resources that can be coupled with other resources. They are used as endpoints on top of enabled resources for performing particular operations without modifying the parent resource. In the context of CRDs, we need to be concerned with two: /scale and /status.
The /scale endpoint is used for reading and writing scaling-related data. To enable scaling, we must add the following directive to our CRD, configuring the data paths. Note that scaling requires us to implement a health probe to determine the health status of our operator.
Note
We should enable the scale subresource if our operator requires scaling.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  ...
spec:
  ...
  versions:
    - name: v1alpha1
      ...
      subresources:
        scale:
          labelSelectorPath: .status.selector
          specReplicasPath: .spec.replicas
          statusReplicasPath: .status.replicas
The corresponding code will be:
type OurCustomAPISpec struct {
    Replicas *int32 `json:"replicas,omitempty"`
}

type OurCustomAPIStatus struct {
    Replicas int32 `json:"replicas,omitempty"`
    Selector string `json:"selector,omitempty"`
}
Next, the /status subresource is used for decoupling the status reported by controllers from the spec declarations made by consumers. We add an empty object to enable it. This will effectively make our OurCustomAPIStatus a subresource.
Note
We should enable the status subresource if we need to decouple our status from our spec. This is also considered best practice.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  ...
spec:
  ...
  versions:
    - name: v1alpha1
      ...
      subresources:
        status: {}
Once we enable the status subresource, all API requests for creating/modifying/patching instances of the related CRD will ignore the status of any object pushed through. This means that our status field must be optional, so we need to include the omitempty marshaling property for it. To update the status, we use the /status endpoint, which will update only the status part. Programmatically, we can get a Client from our manager. Clients provide a Status API for creating/modifying/patching the status when it's set to be a subresource.
// when the status subresource is enabled,
// this will update only the spec and ignore the status of instanceofOurCustomAPI.
// when the status subresource is disabled,
// this will update both the spec and the status.
mgr.GetClient().Update(ctx, instanceofOurCustomAPI)
// when the status subresource is enabled,
// this will update only the status of instanceofOurCustomAPI.
mgr.GetClient().Status().Update(ctx, instanceofOurCustomAPI)
Both subresources can be automatically enabled in our generated CRD manifests using the scale and status markers.
// +kubebuilder:subresource:status
// +kubebuilder:subresource:scale:specpath=.spec.replicas,statuspath=.status.replicas,selectorpath=.status.selector
type OurCustomAPI struct {
    // fields removed for brevity
}
What is a controller?
A controller, the center of the entire operator, watches the cluster for changes to the desired state and attempts to reconcile the current state accordingly. We use the NewControllerManagedBy utility function to create a Builder using our manager instance, configure the API types triggering our controller, configure the Reconciler implementation invoked for every triggering event, and build our controller. We can use Predicates to filter the triggering events.
err := ctrl.NewControllerManagedBy(mgr).For(&OurCustomAPI{}).Complete(&OurCustomReconciler{})
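As a hedged example of narrowing events with a predicate, the builder accepts predicates per watched type; GenerationChangedPredicate (from controller-runtime's predicate package) skips updates that don't change the object's generation, such as status-only updates:
err := ctrl.NewControllerManagedBy(mgr).
    For(&OurCustomAPI{}, builder.WithPredicates(predicate.GenerationChangedPredicate{})).
    Complete(&OurCustomReconciler{})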
A Reconciler implementation exposes a Reconcile function, also known as a reconciliation loop. This function will be invoked for events based on the controller's configuration. The invocation will include the name and namespace of the triggering resource in a Request object. For every invocation, we typically fetch the latest version of the resource in question, analyze the declarative Spec, reconcile the underlying application, and update the resource's Status. The Reconcile function is expected to return a Result indicating whether a requeue of the request is required and when it should be scheduled.
Note
When turning off requeuing, our reconciler will be invoked again only for the next triggering event.
type OurCustomReconciler struct{}

func (r *OurCustomReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // reconciliation logic goes here

    // requeue this request after five minutes (a non-zero RequeueAfter implies a requeue)
    return ctrl.Result{RequeueAfter: 5 * time.Minute}, nil
}
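As a fuller illustration, here is a hedged sketch of a typical reconciliation flow. It assumes the reconciler holds a client.Client (a variation on the empty struct above) and reuses the Spec/Status fields from the earlier examples:
type OurCustomReconciler struct {
    Client client.Client
}

func (r *OurCustomReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // fetch the latest version of the triggering resource
    instance := &OurCustomAPI{}
    if err := r.Client.Get(ctx, req.NamespacedName, instance); err != nil {
        // the resource may have been deleted after the event was queued
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // reconcile the underlying application based on instance.Spec
    // ...

    // report back through the status subresource
    // (assuming Replicas was defaulted by the CRD default marker shown earlier)
    instance.Status.Replicas = *instance.Spec.Replicas
    if err := r.Client.Status().Update(ctx, instance); err != nil {
        return ctrl.Result{}, err
    }
    return ctrl.Result{}, nil
}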
When running inside a cluster, our operator will need permissions for various resources, such as our custom ones. This is achieved using RBAC. We include RBAC generation markers in our code and run the controller-gen tool to generate our Role/ClusterRole manifests in the project's config/rbac folder; these manifests will later be used for deploying our operator. We must regenerate as our code evolves and requires RBAC modifications.
Note
Bindings and ServiceAccounts are not generated.
// +kubebuilder:rbac:groups=group.example.com,resources=ourcustomapis,verbs=get;list;watch;create;update;patch
func (r *OurCustomReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // removed for brevity
}
When reconciling for every event, we also need to consider Delete events, as our operator might require cleanup. For cleanup logic, we use finalizers.
What are finalizers?
Finalizers are a list of keys used as part of a mechanism for cleanup logic. When an object that has a finalizer set on it is deleted, Kubernetes will mark it for deletion by setting its DeletionTimestamp and block the actual deletion until all finalizers are removed. We, the operator's developers, must add our finalizer if we need to implement any cleanup logic. We use the AddFinalizer utility function:
controllerutil.AddFinalizer(instanceofOurCustomAPI, ourOwnFinalizerString)
err := mgr.GetClient().Update(ctx, instanceofOurCustomAPI)
It's also our responsibility to remove our finalizer once the cleanup is done so the object can be deleted. We use the RemoveFinalizer utility function:
controllerutil.RemoveFinalizer(instanceofOurCustomAPI, ourOwnFinalizerString)
err := mgr.GetClient().Update(ctx, instanceofOurCustomAPI)
We use the DeletionTimestamp value and the ContainsFinalizer utility function to decide if a cleanup is in order.
if !instanceofOurCustomAPI.DeletionTimestamp.IsZero() {
    if controllerutil.ContainsFinalizer(instanceofOurCustomAPI, ourOwnFinalizerString) {
        // cleanup code goes here
    }
}
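Putting the pieces together, a hedged sketch of how this typically looks inside the reconciliation loop; ourOwnFinalizerString is an assumed constant such as "group.example.com/finalizer", and the client is obtained as shown above:
if instanceofOurCustomAPI.DeletionTimestamp.IsZero() {
    // not being deleted: make sure our finalizer is present
    if !controllerutil.ContainsFinalizer(instanceofOurCustomAPI, ourOwnFinalizerString) {
        controllerutil.AddFinalizer(instanceofOurCustomAPI, ourOwnFinalizerString)
        if err := mgr.GetClient().Update(ctx, instanceofOurCustomAPI); err != nil {
            return ctrl.Result{}, err
        }
    }
} else if controllerutil.ContainsFinalizer(instanceofOurCustomAPI, ourOwnFinalizerString) {
    // being deleted and our finalizer is still present: clean up, then release the object
    // cleanup code goes here
    controllerutil.RemoveFinalizer(instanceofOurCustomAPI, ourOwnFinalizerString)
    if err := mgr.GetClient().Update(ctx, instanceofOurCustomAPI); err != nil {
        return ctrl.Result{}, err
    }
}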
What is a scheme?
We use a Scheme reference when creating our manager to register the various types we own or work with. A Scheme acts like an object mapper, introducing types to Kubernetes. Preparing a scheme is a two-step process. First, if our code introduces new types, we must create a Builder for our GroupVersion and register our types with it at runtime. A standard convention is encapsulating the Builder alongside the API and exposing its AddToScheme receiver function. This decouples the types from whoever requires them.
var (
    groupVersion = schema.GroupVersion{Group: "group.example.com", Version: "v1"}
    schemeBuilder = &scheme.Builder{GroupVersion: groupVersion}
    AddToScheme = schemeBuilder.AddToScheme
)

type OurCustomAPI struct {
    // fields removed for brevity
}

type OurCustomAPIList struct {
    // fields removed for brevity
}

func init() {
    schemeBuilder.Register(&OurCustomAPI{}, &OurCustomAPIList{})
}
For the second step, in our operator code, just before we create our manager, we need to initiate our scheme and load our types onto it using the AddToScheme function mentioned above. We create our initial scheme with the NewScheme utility function.
scheme := runtime.NewScheme()
err := ourapipkg.AddToScheme(scheme)
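If our operator also works with built-in types (Deployments, Pods, and so on), those need to be on the scheme as well. A hedged sketch, assuming the conventional clientgoscheme alias for k8s.io/client-go/kubernetes/scheme:
scheme := runtime.NewScheme()
// register the built-in Kubernetes types
if err := clientgoscheme.AddToScheme(scheme); err != nil {
    // handle the error
}
// register our own API types
if err := ourapipkg.AddToScheme(scheme); err != nil {
    // handle the error
}
// hand the populated scheme to the manager
mgr, err := ctrl.NewManager(config.GetConfigOrDie(), ctrl.Options{Scheme: scheme})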
What are metrics?
Kubernetes metrics are Prometheus Metrics used for analyzing various aspects of running applications. Every manager exposes a metrics server by default. This can be disabled or fine-grained. We use a preconfigured Prometheus Registry for registering metrics created using the prometheus package. We then use these metrics to report our data, which, in turn, will be reflected in the metrics server.
var OurCustomCounterMetric = *prometheus.NewCounterVec(prometheus.CounterOpts{
    Name: "our_custom_counter",
    Help: "Count the thingy",
}, []string{"thingy_added_label"})

func init() {
    metrics.Registry.MustRegister(OurCustomCounterMetric)
}
Next, we can increment the metric from our code:
ourmetricspkg.OurCustomCounterMetric.WithLabelValues("labelvalue").Inc()
It's worth noting that the different components of Kubernetes report metrics that can help analyze our application.
What is a metrics server?
A metrics server is an HTTP server exposing textual data for scraping by Prometheus. We can fine-grain our server configuration when configuring our manager using the metric server's Options.
options := server.Options{BindAddress: "127.0.0.1:8080"}
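In recent controller-runtime versions, these options are passed to the manager through its Metrics field; a minimal sketch wiring the options above into the manager:
mgr, err := ctrl.NewManager(config.GetConfigOrDie(), ctrl.Options{
    // setting BindAddress to "0" disables the metrics server entirely
    Metrics: options,
})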
By default, this server has no authentication layer. This can be mitigated with the commonly used kube-rbac-proxy as a proxy layer enforcing authentication using TLS certificates or Tokens.
What is a health probe?
Every manager can enable a HealthProbe, which means serving endpoints for checking the health of the underlying application, our operator. HealthProbe exposes two endpoints: liveness and readiness. These are configured using our manager's receiver functions, AddHealthzCheck and AddReadyzCheck. Both take a Checker, a function that takes an HTTP Request and returns an error. We can use the pre-built Ping checker if a custom health check is not required. Failure tolerations can be fine-grained per endpoint when designing the deployment.
Note
Note that HealthProbe is mandatory for scaling.
err := mgr.AddHealthzCheck("healthz", healthz.Ping)
err = mgr.AddReadyzCheck("readyz", healthz.Ping)
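If a custom check is needed, any function matching the Checker signature will do. A hedged sketch with a hypothetical dependency check:
err := mgr.AddReadyzCheck("dependency", func(req *http.Request) error {
    if !ourDependencyIsReachable() { // hypothetical helper
        return errors.New("dependency is not reachable")
    }
    return nil
})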
What is a webhook server?
Our manager can expose a Webhook Server: an HTTP server serving endpoints for Dynamic Admission Controllers, a.k.a. Admission Webhooks. These controllers are used for verifying and mutating resources before applying them to the system. There are two types of Admission Webhooks: Validating Admission Webhook and Mutating Admission Webhook. Kubernetes will invoke these webhooks for any admission of a CR. First, all the Mutating Webhooks configured for the CRD will be invoked. They are expected to mutate the CR, i.e., add labels, annotations, etc. Next, all the Validating Webhooks configured for the CRD will be invoked. They are expected to verify the validity of the admitted CR. Only after all webhooks have been invoked successfully will the CR be applied to the system.
There are two approaches to implementing Admission Webhooks. The first one is by adding receiver functions on top of our API type to make it implement either or both the Validator and the Defaulter interfaces, essentially making our API type handle the logic for validation and mutation, respectively.
type OurCustomAPI struct {
    // fields removed for brevity
}

func (a *OurCustomAPI) Default() {
    // mutating logic goes here
}

func (a *OurCustomAPI) ValidateCreate() (warnings admission.Warnings, err error) {
    // create validation logic goes here
    return nil, nil
}

func (a *OurCustomAPI) ValidateUpdate(old runtime.Object) (warnings admission.Warnings, err error) {
    // update validation logic goes here
    return nil, nil
}

func (a *OurCustomAPI) ValidateDelete() (warnings admission.Warnings, err error) {
    // delete validation logic goes here
    return nil, nil
}
We then use the NewWebhookManagedBy utility function to create a WebhookBuilder with our configured manager, introduce our API, and build our webhook.
err := ctrl.NewWebhookManagedBy(mgr).For(&OurCustomAPI{}).Complete()
As for the second implementation approach, if we prefer decoupling our webhooks from our types, we can use the CustomValidator and CustomDefaulter interfaces, supported by the WithValidator and WithDefaulter builder steps, respectively.
type OurCustomWebhook struct{}

func (w *OurCustomWebhook) Default(ctx context.Context, obj runtime.Object) error {
    // mutating logic goes here
    return nil
}

func (w *OurCustomWebhook) ValidateCreate(ctx context.Context, obj runtime.Object) (warnings admission.Warnings, err error) {
    // create validation logic goes here
    return nil, nil
}

func (w *OurCustomWebhook) ValidateUpdate(ctx context.Context, oldObj, newObj runtime.Object) (warnings admission.Warnings, err error) {
    // update validation logic goes here
    return nil, nil
}

func (w *OurCustomWebhook) ValidateDelete(ctx context.Context, obj runtime.Object) (warnings admission.Warnings, err error) {
    // delete validation logic goes here
    return nil, nil
}
And create our Webhook:
ourWebhook := &OurCustomWebhook{}
err := ctrl.NewWebhookManagedBy(mgr).For(&OurCustomAPI{}).WithValidator(ourWebhook).WithDefaulter(ourWebhook).Complete()
Next, we need to tell Kubernetes about our webhooks. This is achieved using the ValidatingWebhookConfiguration and MutatingWebhookConfiguration APIs. We can include webhook generation markers in our code and run the controller-gen tool to generate these manifests in the project's config/webhook folder; they will later be used for deploying our operator. We must regenerate as our code evolves and API versions get bumped or modified.
// +kubebuilder:webhook:verbs=create;update;delete,path=/validate-group-example-com-v1beta1-ourcustomapi,mutating=false,failurePolicy=fail,groups=group.example.com,resources=ourcustomapis,versions=v1beta1,name=ourcustomapi.group.example.com,sideEffects=None,admissionReviewVersions=v1
// +kubebuilder:webhook:verbs=create;update;delete,path=/mutate-group-example-com-v1beta1-ourcustomapi,mutating=true,failurePolicy=fail,groups=group.example.com,resources=ourcustomapis,versions=v1beta1,name=ourcustomapi.group.example.com,sideEffects=None,admissionReviewVersions=v1
type OurCustomWebhook struct {}
What about versioning?
We differentiate our operator versioning from CRD versioning. For instance, our operator version 1.2.3 can support API v1beta1 and v1 while deprecating v1alpha2 and not supporting v1alpha1. Each CRD version has three properties relating to versioning. These can be fine-grained in our generated CRD manifests using CRD generation markers (a marker sketch follows the manifest below).
- served indicates whether this version is enabled.
- storage indicates the version to be used for persisting data with etcd. Exactly one version must be marked for storage.
- deprecated indicates whether this version is being deprecated.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  ...
spec:
  ...
  versions:
    - name: v1alpha1
      ...
      served: false
      storage: false
    - name: v1alpha2
      ...
      served: true
      storage: false
      deprecated: true
    - name: v1beta1
      ...
      served: true
      storage: true
    - name: v1
      schema:
        ...
      served: true
      storage: false
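As a hedged illustration of the corresponding controller-gen markers, placed on the type in each API version package; the storageversion and deprecatedversion markers are standard controller-gen markers, and the warning text here is an assumption:
// in the v1beta1 package: served and used as the storage version
// +kubebuilder:object:root=true
// +kubebuilder:storageversion
type OurCustomAPI struct {
    // fields removed for brevity
}

// in the v1alpha2 package: still served, but marked deprecated
// +kubebuilder:object:root=true
// +kubebuilder:deprecatedversion:warning="group.example.com/v1alpha2 OurCustomAPI is deprecated"
type OurCustomAPI struct {
    // fields removed for brevity
}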
What is Helm?
Helm is used for packaging and managing Kubernetes applications. The packages are called charts. The most notable files in a chart are the Chart.yaml file, containing metadata about the chart itself; the values.yaml file, containing default values for our application; and the templates folder, containing template YAML files and helpers for constructing the application manifests used for deployment.
Another part of the Helm project is the Helm CLI (command-line interface), which is used for installing, upgrading, uninstalling, and managing charts on clusters. A chart installed on a cluster is called a Release. We use repositories for sharing charts. We can install a chart from a repository using the Helm CLI. We can also override the various values used for configuring the underlying application from the command line.
Helm doesn't require any operator or application installed on the target cluster. The connection between the release's components is annotation and discovery-based.
What is OLM?
The Operator Lifecycle Manager is used for managing our operator's lifecycle. OLM encapsulates operators as Managed Services, providing over-the-air updates using catalogs, a dependency model, discoverability, scalability, stability, and much more.
We first need to discuss the ClusterServiceVersion (CSV) CRs to understand OLM. The OLM operator handles these. We use a CSV to describe our operator's deployments, requirements, metadata, supported install strategies, etc. We can use the operator-sdk CLI tool and API markers to generate our CSV as a Kustomize base. By convention, we generate it in the project's config/manifests folder. We must regenerate the CSV as our code evolves and requires such modifications.
//+operator-sdk:csv:customresourcedefinitions:displayName="Our Operator API"
type OurCustomAPI struct {
    // fields removed for brevity
    Spec OurCustomAPISpec `json:"spec,omitempty"`
}

type OurCustomAPISpec struct {
    //+operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Number of replicas",xDescriptors={"urn:alm:descriptor:com.tectonic.ui:podCount"}
    Replicas int `json:"replicas"`
}
The next OLM component we'll cover is the Bundle. After generating our CSV, we'll include additional supported resources in the config folder based on the common structure, use kustomize, and pipe the result into the operator-sdk CLI tool to generate our Bundle manifests. We use a bundle per operator version. Bundles are container images built from scratch, adhering to a specific folder layout:
- The /manifests folder contains our operator's CSV mentioned above and other supported resources required for our operator to run, i.e., CRDs, RBAC, etc.
- The /metadata folder contains a file called annotations.yaml with all the annotations used for configuring our Bundle, i.e., package and channel data. We can optionally include a dependencies.yaml file to leverage OLM's dependency resolution mechanism.
- The (optional) /tests/scorecard folder contains a config.yaml file, used for statically validating and testing our operator.
After building and pushing our Bundle image to an accessible registry, we can potentially install or upgrade the Bundle directly from the image using the operator-sdk CLI tool. But the next logical step would be to include our Bundle in a Catalog. The Catalog is a container image built from OPM's image, serving the Catalogs configured in its /configs folder. The configs are YAML files describing multiple schemas. The olm.package schema describes a package. The olm.bundle schema describes bundles for a package; each Bundle represents a specific version of our operator and targets a specific Bundle image. Next, the olm.channel schema is used for associating package bundles with a channel, i.e., selecting bundles for the stable and alpha channels. See example. We use the opm CLI tool to generate the various components.
After building and pushing our Catalog image to an accessible registry, consumers can create a CatalogSource targeting our image and install packages from our Catalog. Consumers create OperatorGroups, setting the allowed permissions for installed operators, and Subscriptions for requesting a package to be installed by OLM. This will create an InstallPlan describing the installation of the package. Once approved, manually or automatically, the underlying operator's Bundle selected for the target Channel of the desired Package will be installed based on its CSV.
What is Kustomize?
Kustomize helps create and manage resource files template-free by merging overlays on top of bases and introducing various built-in and customizable transformers and generators. We use a kustomization.yaml file to declaratively orchestrate the construction of our manifests and the kustomize CLI tool to build them. As noticed throughout this guide, generated files usually target config/X folders in our project's root, so we typically design our kustomization on top of the config folder.
Quick links
- Project Common Layout
- Kubernetes Operator Pattern
- Operator Framework / Operator SDK (Go) / Java Operator SDK
- API Conventions
- Admission Controllers
- Prometheus Metric Types
- Kubernetes Components Metrics List
- Operator Lifecycle Manager (OLM)
- Additional Resources in Bundle Manifests
- OLM Dependency Resolution
- Operator Hub
Development tools
- controller-gen generates CRD, RBAC, and Webhook manifests, as well as DeepCopy boilerplate functions.
- operator-sdk generates CSV and Bundles for OLM.
- opm composes Catalogs for OLM.
- kustomize constructs Kubernetes manifests using overlays, transformers, and generators.
- helm deploys and manages Helm Chart Releases.
Development toolbox
- GetConfigOrDie is used for fetching the configuration for accessing the running server.
- NewScheme is used for creating a new scheme for introducing types to Kubernetes.
- NewManager is used for creating a new manager for creating controllers and running the operator.
- NewControllerManagedBy is used for configuring a new controller with a manager.
- NewWebhookManagedBy is used for configuring a new admission webhook with a manager.
- Ping Health Checker is a useful health probe checker.
- Prometheus Registry is the pre-configured Prometheus registry used by the manager's metrics server.
- Unstructured Type provides the Unstructured type.
Useful packages
- Controller Utils Package provides various utils for working with controllers.
- Meta Package contains valuable functions for working with object metadata.
- Prometheus Package for creating metrics.
- Errors Package provides useful utility functions for working with errors.
- CLI Package provides utility functions for creating Cobra commands (common approach for creating operators).
- Zap Package provides utility functions for creating loggers.
- Wait Package provides tools for listening for Conditions.
- JSON Package provides utility functions for marshaling and unmarshaling JSON.
- Rand Package provides tools related to randomization.
- Version Package provides utility functions for inspecting versions.
- Yaml Package provides utility functions for marshaling and unmarshaling YAML.
Common interfaces
- Runtime Object
- Controller Reconciler
- Admission Defaulter / Custom Defaulter
- Admission Validator / Custom Validator
- Predicates