This is the first of two articles that show how to simplify the management of services offered for Kubernetes by Amazon Web Services (AWS), through the use of Amazon's AWS Controllers for Kubernetes (ACK). You'll also learn how to use an Operator to simplify installation further on Red Hat OpenShift clusters. Together, these tools provide standardized and familiar interfaces to AWS services from a Kubernetes environment.
This first article lays out the reasons for using controllers and Operators, and sets up your environment for their use. A subsequent article will show how to install AWS services and how to use them within OpenShift. The ideas behind these articles, and a demo showing their steps, appear in my video Using AWS Controllers for Kubernetes (ACK) with Red Hat OpenShift.
So many services, so little time
If you deploy any kind of code on AWS, you know that the platform offers plenty of services for your applications to consume. Perhaps you use AWS services to support your application's infrastructure—to create registries with Amazon's Elastic Container Registry (ECR) or compute instances with EC2, for instance. Or maybe you're integrating AWS services directly into your development pipeline, deploying an RDS database on the back end, or building, training, and deploying machine learning (ML) models with Amazon SageMaker.
Whatever you are doing, AWS probably has a service to make your work easier, your application better, and your use of time more efficient.
But with so many services, how do you integrate them easily into your code? Do you write complex CloudFormation templates to call from your CI/CD workflows? Are you more old school and like to write wrappers in a familiar language? Or maybe it's all about APIs for you? One thing is for sure: There are a lot of options (and a lot of hoops to jump through).
And what if you're running Kubernetes in AWS—maybe on EKS, maybe on OpenShift, or maybe you're just rolling your own? How do you stay within the efficient, familiar, and friendly framework of coding for Kubernetes and still access AWS services without coming up with complex, confusing, and hard-to-maintain workarounds that break your workflow?
It's easy: You ACK it
With the growth of Kubernetes for mission-critical production workloads, AWS is a match made in heaven for Kubernetes developers. With so many resilient services, Kubernetes in AWS is a smorgasbord of functionality to improve, scale, and prepare your app for anything. Bring on Black Friday sales and Click Frenzy shopping days—Kubernetes in AWS has cloud-native efficiencies and AWS resiliency built-in.
And now, with AWS Controllers for Kubernetes (ACK), you can easily define and use AWS resources directly from Kubernetes. ACK allows Kubernetes users to define AWS resources using the Kubernetes API. This means you can declaratively define and create an AWS RDS database, S3 bucket, or many other resources, using the same workflow as the rest of your code. There's no need to break out and learn AWS-specific languages or processes. Instead, when an ACK controller is installed, you can use the Kubernetes API to instruct the controller to interact with the AWS service. As a bonus, Kubernetes continues to manage the service for you.
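For a concrete taste of what this looks like (the second article walks through it in full), here is a minimal sketch of declaring an S3 bucket through the Kubernetes API once the S3 controller is installed; the bucket name is a placeholder you'd replace with your own globally unique name:

$ cat <<EOF | oc apply -f -
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-ack-bucket
spec:
  name: my-ack-bucket
EOF

The controller notices the new Bucket resource, creates the matching bucket in your AWS account, and keeps the two in sync from then on.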
To enable these rich possibilities, each ACK controller ships as its own container image, available in the Amazon ECR Public Gallery. An image is paired with a custom resource definition (CRD), allowing you to easily create custom resources (CRs) that define the service and use it within your project. Additionally, integration with AWS Identity and Access Management (IAM) ensures that all steps and interactions are secure and that role-based access control (RBAC) is managed transparently.
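Once a controller is installed, you can see the CRDs it registers. Every ACK API group follows the <service>.services.k8s.aws naming pattern, so a quick filter shows what's available on your cluster:

$ oc get crds | grep services.k8s.aws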
To understand how the controllers are built, including the generation of the artifacts, and to learn about the history of the ACK project, see the AWS blog post Introducing the AWS Controllers for Kubernetes (ACK).
Operator, Operator, could you be so kind
Although the default distribution method for ACK employs Helm charts, a popular deployment tool on Kubernetes, we'll look at a simpler way to manage controllers through an Operator. Operators are a sleek aid to deployment and life cycle management for Kubernetes services. Publicly available Operators can be downloaded from OperatorHub, a project started by Red Hat but used by many communities.
Operators, along with the Operator Framework, make it super easy to install, manage, and maintain Kubernetes resources and applications across OpenShift clusters. The Operator Framework is an open source toolkit designed to manage Operators in an effective, automated, and scalable way.
So it's a no-brainer that we at Red Hat have worked hard to ensure that installing ACK service controllers with Operators is easy. We work closely with AWS engineers to ensure nothing is lost in the process. See Attention developers: You can now easily integrate AWS services with your applications on OpenShift, a post on the Red Hat Cloud blog, for more details about that collaboration.
Setup in AWS and Kubernetes
Before installing a controller via OperatorHub, a cluster administrator needs only to carry out a few simple pre-installation steps in AWS to provide the controller credentials and authentication context for interacting with the AWS API.
Let's take a look at that process now. I'm using an OpenShift installation in AWS provided by Red Hat OpenShift Service on AWS, but you can use the Operators on any OpenShift cluster running on AWS with the Operator Lifecycle Manager (OLM) installed.
Let's keep our installation tidy and create a namespace for our controllers. This is the namespace the Operators expect to find when you install them from OperatorHub, so don't change the name:
$ oc new-project ack-system
Now using project "ack-system" on server "https://api.rosatest.c63c.p1.openshiftapps.com:6443".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app rails-postgresql-example
to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:
kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname
Next, we need a user in IAM that can own the controllers and act as our service account. This user is the security principal to which we'll attach our policies. To maintain clear lines between services, you could create a user for each of your controller Operators: one user for your S3 interactions, another for your RDS interactions, and so on. But to keep things simple for this example, we'll use the same user for all controllers:
$ aws iam create-user --user-name ack-user
{
"User": {
"Path": "/",
"UserName": "ack-user",
"UserId": "AIDARDQA3BHOTGU5KGN24",
"Arn": "arn:aws:iam::1234567890:user/ack-user",
"CreateDate": "2022-03-24T01:24:03+00:00"
}
}
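If you ever need to look the user up again later, aws iam get-user returns the same details:

$ aws iam get-user --user-name ack-user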
Next, we need an access key ID and secret access key for this user. Record the AccessKeyId and SecretAccessKey strings generated by the following command, because you are going to store them in Kubernetes. Storing the keys allows the controllers to interact programmatically with the AWS resources. We will use these keys in a moment, but make sure to write them down and protect them (don't worry about my security, because my keys have been deleted):
$ aws iam create-access-key --user-name ack-user
{
"AccessKey": {
"UserName": "ack-user",
"AccessKeyId": "AKIARDQA3BHOVF4ZSHNW",
"Status": "Active",
"SecretAccessKey": "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",
"CreateDate": "2022-03-24T01:24:24+00:00"
}
}
Now you need to attach the correct AWS policy for each service to the principal (user) you just created. This procedure grants the user control over only the specific AWS resources each controller manages.
Grant access by assigning a policy's Amazon Resource Name (ARN) to the user directly. As mentioned, you could attach policies in a one-to-one relationship with unique users (e.g., an EC2 ARN to a dedicated ack-ec2-user), or attach all policies to a single user. For our example, you are going to attach all our policies to the same user (ack-user).
Recommended policy ARNs are provided in each service controller's GitHub repository in the config/iam/ directory. For instance, the EC2 controller's policy is in the following file on GitHub:
https://github.com/aws-controllers-k8s/ec2-controller/blob/main/config/iam/recommended-policy-arn
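At the time of writing, that file contains a single ARN. If you want to script the lookup, you can fetch it straight from the raw file:

$ curl -s https://raw.githubusercontent.com/aws-controllers-k8s/ec2-controller/main/config/iam/recommended-policy-arn
arn:aws:iam::aws:policy/AmazonEC2FullAccess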
OK, let's connect some controllers' policies to our ack-user service account:
$ aws iam attach-user-policy \
--user-name ack-user \
--policy-arn 'arn:aws:iam::aws:policy/AmazonS3FullAccess'
$ aws iam attach-user-policy \
--user-name ack-user \
--policy-arn 'arn:aws:iam::aws:policy/AmazonEC2FullAccess'
These commands grant our ack-user IAM user access to AWS S3 and EC2. If you want to install additional ACK Operators for other AWS services, you need to attach those services' policies, too.
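You can double-check what's attached at any point; the output should list AmazonS3FullAccess and AmazonEC2FullAccess:

$ aws iam list-attached-user-policies --user-name ack-user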
Next, you need to present the ack-user credentials in a way that can be safely passed to each controller, so it is allowed to make changes to the AWS service for which it is responsible. These credentials take the form of a Kubernetes Secret and a ConfigMap with specific contents. The naming of these assets is intentional and cannot be varied, or the Operator will not be able to find the credentials.
Place the AccessKeyId and SecretAccessKey values you recorded earlier in a file called secrets.txt:
AWS_ACCESS_KEY_ID=AKIARDQA3BHOVF4ZSHNW
AWS_SECRET_ACCESS_KEY=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
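If you'd rather not copy keys around by hand, you can also generate this file directly when creating the key. Here's a sketch using jq (keep in mind that an IAM user can hold at most two access keys at a time):

$ aws iam create-access-key --user-name ack-user \
    | jq -r '.AccessKey | "AWS_ACCESS_KEY_ID=\(.AccessKeyId)\nAWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)"' \
    > secrets.txt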
Create a secret with these keys in the ack-system namespace:
$ oc create secret generic \
--namespace ack-system \
--from-env-file=secrets.txt ack-user-secrets
secret/ack-user-secrets created
You also need to set some environment variables for the controllers. Do this by creating a ConfigMap in the ack-system namespace. Add the following to a file called config.txt:
ACK_ENABLE_DEVELOPMENT_LOGGING=true
ACK_LOG_LEVEL=debug
ACK_WATCH_NAMESPACE=
AWS_REGION=ap-southeast-2
ACK_RESOURCE_TAGS=acktagged
AWS_ENDPOINT_URL=
You'll need to adjust the values to suit your own environment. Most of the values should be self-explanatory, but it's important to leave ACK_WATCH_NAMESPACE blank so that the controller watches all namespaces. Additionally, you should not rename these variables, because the Operators are preconfigured to consume them exactly as named here.
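Behind the scenes, each controller's Deployment pulls these assets in as environment variables. Conceptually, the pod spec references them by name, which is why the names must match exactly; the Operator wires this up for you, so the following fragment is illustrative rather than something you apply:

envFrom:
- configMapRef:
    name: ack-user-config
- secretRef:
    name: ack-user-secrets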
Create the ConfigMap:
$ oc create configmap \
--namespace ack-system \
--from-env-file=config.txt ack-user-config
configmap/ack-user-config created
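Before moving on, confirm that both assets landed in the right namespace:

$ oc get secret/ack-user-secrets configmap/ack-user-config -n ack-system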
Where to now?
This article has set you up to use as many AWS services as you want through ACK and OperatorHub. The second article in the series will install the EC2 and S3 controllers as examples and perform an S3 operation from Kubernetes.
Share your experiences
If you'd like to help, learn more, or just connect in general, head on over to the Kubernetes Slack workspace and join us in the #provider-aws channel to say hello to the AWS and Red Hat engineers creating the code, various ACK users, and even the occasional blog post author.
We're looking for more good examples of complex deployments created in AWS via ACK. If you've got a deployment you think would be made easier with ACK, or one you've already made better, let us know on the Slack channel or in the comments section below. We might showcase your work in some upcoming articles or videos.