Red Hat Connectivity Link is a new offering from Red Hat that brings a Kubernetes/Red Hat OpenShift native solution for complex application connectivity and API management in hybrid and multi-cloud environments. Connectivity Link is based on the upstream Kuadrant project and was released as a Developer Preview at Red Hat Summit 2024, with the goal of reaching general availability (GA) in the fall of 2024.
In this article, I will demonstrate how to get started with Connectivity Link on Red Hat OpenShift. Connectivity Link is not limited to OpenShift, and can be installed and used on other Kubernetes distributions like EKS. However, the installation is much simpler on OpenShift as the required operators can be installed and managed through the Operator Lifecycle Manager (OLM).
Prerequisites
The first prerequisite is an OpenShift cluster running on AWS. An excellent article on how to install a single-node OpenShift on AWS was recently published on Red Hat Developer. Note that a GPU is not required for Connectivity Link, so a general-purpose EC2 instance type with 32 GB or 64 GB of RAM is sufficient for the getting started experience. If you prefer to install OpenShift on bare metal or a virtualized environment, you can refer to this article.
The second prerequisite is access to a DNS domain (called a hosted zone in AWS Route53), different from the domain used for the OpenShift cluster.
If you followed the instructions to install a single-node OpenShift on AWS, the OpenShift cluster will use subdomains of your AWS top domain: typically `api.<AWS top domain>` for the cluster API server, and `*.apps.<AWS top domain>` for the applications running on the OpenShift cluster.
In AWS Route53, you can add another hosted zone as a subdomain (for example, `managed.<AWS top domain>`) for the applications that you want to manage and secure with Connectivity Link. Refer to the AWS documentation on Route53 for instructions on how to set up the hosted zone. You will have to create NS records in the top domain's hosted zone in order to be able to route traffic to the subdomain, as explained in this article.
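As a sketch, the hosted zone and the NS delegation can be created with the AWS CLI. The domain names, zone IDs, and name servers below are placeholders; substitute your own values:

```shell
# Create the hosted zone for the Connectivity Link subdomain
# (zone name is a placeholder)
aws route53 create-hosted-zone \
  --name managed.example.com \
  --caller-reference "$(date +%s)"

# Note the four name servers assigned to the new zone, then delegate
# to it by creating a matching NS record in the top domain's zone
# (zone ID and name server values are placeholders):
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0EXAMPLETOP \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "managed.example.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          {"Value": "ns-1.awsdns-01.org"},
          {"Value": "ns-2.awsdns-02.co.uk"}
        ]
      }
    }]
  }'
```

After the delegation is in place, DNS queries for `managed.<AWS top domain>` are routed to the new hosted zone.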
If you are using a bare metal or virtualized OpenShift instance, you will need access to a hosted zone on AWS Route53 to use the DNS routing functionalities provided by Connectivity Link.
Install Connectivity Link
For the installation of Connectivity Link on Red Hat OpenShift, you can follow the steps as outlined in the upstream Kuadrant documentation, with some minor differences.
The rate limiting functionality provided by Connectivity Link can use a shared Redis instance to provide global rate limiting counters in a multi-cluster environment. However, if you are getting started with only one cluster, the Redis instance is optional (counters will be stored in memory). This simplifies the setup. In that case, you don't need to create a secret for the Redis credentials, and you can omit the Redis configuration in the `Kuadrant` Custom Resource:
apiVersion: kuadrant.io/v1beta1
kind: Kuadrant
metadata:
  name: kuadrant
  namespace: kuadrant-system
spec: {}
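Conversely, if you do want shared counters across clusters, the `Kuadrant` resource can reference the secret holding the Redis connection details. A sketch of what this could look like follows; the secret name is a placeholder, and the exact field names may differ between versions, so check the Kuadrant API reference for your release:

```yaml
apiVersion: kuadrant.io/v1beta1
kind: Kuadrant
metadata:
  name: kuadrant
  namespace: kuadrant-system
spec:
  limitador:
    storage:
      redis:
        configSecretRef:
          # Secret containing the Redis connection string;
          # the name is illustrative
          name: redis-config
```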
Secure, protect, and connect APIs with Connectivity Link
Once you have installed the Connectivity Link operators, you can proceed with the instructions in the Secure, protect, and connect APIs with Kuadrant on OpenShift section of the upstream documentation.
We can make a distinction between the platform engineering workflow and the application developer/owner workflow. In this scenario, the platform engineer sets up the cluster-wide resources like the ingress Gateway, global auth policy and rate limiting, and observability stack, while the application developer/owner would create application specific resources such as an HTTPRoute for the application and application specific auth policy and rate limiting.
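To illustrate this split, a gateway-wide rate limit set by the platform engineer might look like the following sketch. The Gateway and policy names are placeholders, and the schema shown is the upstream `v1beta2` API, so verify the fields against the documentation for your version:

```yaml
apiVersion: kuadrant.io/v1beta2
kind: RateLimitPolicy
metadata:
  name: gateway-rlp
  namespace: ingress-gateway
spec:
  # Attach the policy to the shared ingress Gateway so the
  # limit applies to all routes behind it
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: external
  limits:
    "global":
      rates:
        # At most 100 requests per 10 seconds, gateway-wide
        - limit: 100
          duration: 10
          unit: second
```

An application developer can later attach a more specific `RateLimitPolicy` or `AuthPolicy` to their own `HTTPRoute` without touching the gateway-level policy.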
When following the instructions in the upstream documentation, take note of the following points:
- The managed zone corresponds to the hosted zone that you created for Connectivity Link (e.g., `managed.<AWS top domain>`). The ID of the hosted zone can be obtained from the AWS Route53 console. The `rootdomain` environment variable should be set to the domain name of the hosted zone used for Connectivity Link.
- If your OpenShift cluster is installed on AWS, you can use Let's Encrypt as the TLS certificate provider. The example given in the documentation uses the staging environment of Let's Encrypt. If your cluster is installed on bare metal or a virtual machine, you can use self-signed certificates provided by the cert-manager operator installed as part of the Connectivity Link installation. In this case, the TLS issuer should look like:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: kuadrant-ca
spec:
  selfSigned: {}
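For comparison, a Let's Encrypt staging issuer using a Route53 DNS-01 solver could look like the sketch below. The email address, zone ID, region, and credentials secret are placeholders:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: lets-encrypt-staging
spec:
  acme:
    # Staging endpoint; switch to the production URL once validated
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: le-account-key
    solvers:
      - dns01:
          route53:
            region: us-east-1
            hostedZoneID: Z0EXAMPLE
            # Secret with AWS credentials allowed to edit the zone
            accessKeyIDSecretRef:
              name: aws-credentials
              key: AWS_ACCESS_KEY_ID
            secretAccessKeySecretRef:
              name: aws-credentials
              key: AWS_SECRET_ACCESS_KEY
```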
- The upstream documentation for Connectivity Link on OpenShift shows how `HTTPRoute` and `AuthPolicy` Custom Resources can be created by using Kuadrant-specific extensions in an OpenAPI spec document. As an alternative, you can also create the Custom Resources directly, without using the `kuadrantctl` CLI. You will find examples of these Custom Resources here and here.
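For instance, a minimal `HTTPRoute` with an API-key `AuthPolicy` attached to it might look like the following sketch. The hostnames, namespaces, and labels are placeholders, and the `AuthPolicy` schema shown is the upstream `v1beta2` API:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: toystore
  namespace: toystore
spec:
  # Attach the route to the shared ingress Gateway
  parentRefs:
    - name: external
      namespace: ingress-gateway
  hostnames:
    - toystore.managed.example.com
  rules:
    - backendRefs:
        - name: toystore
          port: 80
---
apiVersion: kuadrant.io/v1beta2
kind: AuthPolicy
metadata:
  name: toystore-auth
  namespace: toystore
spec:
  # Target the HTTPRoute so the policy applies only to this app
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: toystore
  rules:
    authentication:
      "api-key-users":
        apiKey:
          # Select the Secrets holding valid API keys by label
          selector:
            matchLabels:
              app: toystore
        credentials:
          authorizationHeader:
            prefix: APIKEY
```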
Where to go from here
At this point, you have installed Red Hat Connectivity Link on a single OpenShift cluster, and onboarded an application to be managed, secured and protected with Connectivity Link. As a suggested next step, you could, for instance, repeat the setup on a second cluster and experiment with geo-based routing.
If you want to dig deeper into concepts and architecture of Connectivity Link, the upstream documentation goes into great detail about these topics.