How to deploy Azure Red Hat OpenShift using Terraform

September 4, 2025
Mario Dietner
Related topics:
Developer productivityDevOpsDevSecOpsGitOps
Related products:
Microsoft Azure Red Hat OpenShift

In this article, we will explore how to use Terraform to deploy Microsoft Azure Red Hat OpenShift, apply Azure Policy for Azure infrastructure governance, and enrich our cluster with the Compliance Operator for cluster-level resource governance.

Policies are governance rules that ensure resources stay compliant within a defined scope. They can enforce compliance either by monitoring running resources or by prohibiting the creation of non-compliant resources. Policy-as-code means expressing governance rules as programming code. This lets us declare governance rules in a centralized repository, sometimes called "the single source of truth." The advantages of this approach are reusability, documentation, and support for automation.

Prerequisites

Before we begin, we need to have the following tools and information available:

  • Terraform CLI
  • Azure CLI
  • OpenShift CLI (oc)
  • An Azure subscription and a user with Contributor rights
  • A user with the Azure tenant role Cloud Application Administrator
  • azurerm Terraform provider (version 4.37.0)
  • azuread Terraform provider (version 3.4.0)
  • Azure DNS configured and accessible from the same subscription (out of scope for this article)
  • Your Red Hat pull secret from the Red Hat Customer Portal, referred to in the code as YOUR_RED_HAT_ARO_PULL_SECRET

We need to make sure to have the following resource providers registered in our subscription: 

az provider register --namespace Microsoft.RedHatOpenShift --wait
az provider register --namespace Microsoft.Compute --wait
az provider register --namespace Microsoft.Storage --wait
az provider register --namespace Microsoft.Authorization --wait

Registering Microsoft.RedHatOpenShift enables us to reference the Azure Red Hat OpenShift resource provider's service principal ID in our Terraform code. Azure Red Hat OpenShift uses this service principal to deploy and manage all cluster resources.
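The provider versions listed in the prerequisites can be pinned in a terraform block. A minimal sketch (the subscription ID placeholder is an assumption, matching the placeholders used elsewhere in this article):

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "4.37.0"
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = "3.4.0"
    }
  }
}

# azurerm 4.x expects the target subscription to be set explicitly
# (or via the ARM_SUBSCRIPTION_ID environment variable).
provider "azurerm" {
  features {}
  subscription_id = "<AZURE_SUBSCRIPTION_ID>"
}
```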

Set up networking and security

Let's start by preparing the network components: a dedicated virtual network with two subnets (one for master nodes, one for worker nodes).

In addition to the network, let's define network security group rules that allow the API and ingress ports from outside.

NOTE: This is for demo purposes only. Enterprises are encouraged to use private endpoint integration and block open network access. 

resource "azurerm_virtual_network" "aro_vnet" {
  name                = "aro-vnet"
  address_space       = ["10.0.0.0/22"]
  location            = "northeurope"
  resource_group_name = "aro1"
  tags = {
    environment = "prod"
    project = "aro"
  }
}

resource "azurerm_subnet" "control_plane_subnet" {
  name                 = "control-plane-subnet"
  resource_group_name  = "aro1"
  virtual_network_name = azurerm_virtual_network.aro_vnet.name
  address_prefixes     = ["10.0.0.0/23"]
  service_endpoints    = ["Microsoft.Storage", "Microsoft.ContainerRegistry"]
}

resource "azurerm_subnet" "worker_subnet" {
  name                 = "worker-subnet"
  resource_group_name  = "aro1"
  virtual_network_name = azurerm_virtual_network.aro_vnet.name
  address_prefixes     = ["10.0.2.0/23"]
  service_endpoints    = ["Microsoft.Storage", "Microsoft.ContainerRegistry"]
}

resource "azurerm_network_security_group" "aronsg" {
  name                = "aro_nsg"
  location            = "northeurope"
  resource_group_name = "aro1"
  tags = {
    environment = "prod"
    project     = "aro"
  }
}

resource "azurerm_network_security_rule" "aro_inbound_api" {
  name                        = "aro-inbound-api"
  network_security_group_name = azurerm_network_security_group.aronsg.name
  resource_group_name         = "aro1"
  priority                    = 200
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "6443"
  source_address_prefix       = "0.0.0.0/0"
  destination_address_prefix  = "*"
}

resource "azurerm_network_security_rule" "aro_inbound_https" {
  name                        = "aro-inbound-https"
  network_security_group_name = azurerm_network_security_group.aronsg.name
  resource_group_name         = "aro1"
  priority                    = 300
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "443"
  source_address_prefix       = "0.0.0.0/0"
  destination_address_prefix  = "*"
}

Creating service principals and RBAC

Two identities come into play here: the service principal we create and register for our new Azure Red Hat OpenShift cluster, and the resource provider's registered identity, which has a predefined unique ID.

These principals need the following RBAC role assignments on Azure resources:

  • Network Contributor on the virtual network, for both principals.
  • Reader at the subscription level for the Red Hat OpenShift principal (useful for Azure health monitoring).
  • Contributor for the Red Hat OpenShift principal on the resource group under which the Azure Red Hat OpenShift cluster will be deployed.

data "azuread_service_principal" "redhatopenshift" {
  // The Azure Red Hat OpenShift resource provider's service principal;
  // at the time of writing, this client_id is the same across tenants.
  client_id = "f1dd0a37-89c6-4e07-bcd1-ffd3d43d8875"
}
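If you prefer not to hard-code the client ID, the same service principal can be looked up by display name. This is a sketch under the assumption that the first-party application appears as "Azure Red Hat OpenShift RP" in your tenant; verify the name first, for example with az ad sp list:

```hcl
# Assumption: the resource provider's first-party application is named
# "Azure Red Hat OpenShift RP" in your tenant. Confirm with:
#   az ad sp list --display-name "Azure Red Hat OpenShift RP"
data "azuread_service_principal" "redhatopenshift_by_name" {
  display_name = "Azure Red Hat OpenShift RP"
}
```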

data "azuread_client_config" "current" {}

resource "azurerm_resource_group" "aro_resource_group" {
  name     = "aro1"
  location = "northeurope"
  tags = {
    environment = "prod"
    project = "aro"
  } 
}

resource "azuread_application" "my_aro_app" {
  display_name = "my-aro-app"
  owners = [data.azuread_client_config.current.object_id]
}

resource "azuread_service_principal" "sp_aro_app" {
  client_id = azuread_application.my_aro_app.client_id
  owners = [data.azuread_client_config.current.object_id]
}

resource "azuread_service_principal_password" "sp_aro_app_pwd" {
  service_principal_id = azuread_service_principal.sp_aro_app.id
}

resource "azurerm_role_assignment" "role_network_sp_aro_app" {
  scope                = azurerm_virtual_network.aro_vnet.id
  role_definition_name = "Network Contributor"
  principal_id         = azuread_service_principal.sp_aro_app.object_id
}

resource "azurerm_role_assignment" "role_network_redhatopenshift" {
  scope                = azurerm_virtual_network.aro_vnet.id
  role_definition_name = "Network Contributor"
  principal_id         = data.azuread_service_principal.redhatopenshift.object_id
}

resource "azurerm_role_assignment" "role_subscription_redhatopenshift" {
  scope                = "/subscriptions/<AZURE_SUBSCRIPTION_ID>"
  role_definition_name = "Reader"
  principal_id         = data.azuread_service_principal.redhatopenshift.object_id
}

resource "azurerm_role_assignment" "role_resourcegroup_redhatopenshift" {
  scope                = azurerm_resource_group.aro_resource_group.id
  role_definition_name = "Contributor"
  principal_id         = data.azuread_service_principal.redhatopenshift.object_id
}

Deploying the cluster

With these resources configured, we can define the Azure Red Hat OpenShift deployment itself. As you can see in the following, we need to set a default domain inside the cluster_profile block, because the Azure resource provider requires it. This is also why a DNS zone needs to be in place before we start the deployment.

resource "azurerm_redhat_openshift_cluster" "aro_cluster" {
  name                = "cluster-1"
  location            = "northeurope"
  resource_group_name = "aro1"
  
  cluster_profile {
    domain  = "mydomain.com"
    version = "4.17.27"
    pull_secret = "<YOUR_RED_HAT_ARO_PULL_SECRET>"
    managed_resource_group_name = "aro1-vms"
  }
  
  network_profile {
    pod_cidr     = "10.128.0.0/14"
    service_cidr = "172.30.0.0/16"
  }
  
  main_profile {
    vm_size   = "Standard_D8s_v3"
    subnet_id = azurerm_subnet.control_plane_subnet.id
  }
  
  api_server_profile {
    visibility = "Public"
  }
  
  ingress_profile {
    visibility = "Public"
  }
  
  worker_profile {
    vm_size      = "Standard_D4s_v3"
    disk_size_gb = 128
    node_count   = 3
    subnet_id    = azurerm_subnet.worker_subnet.id
  }
  
  service_principal {
    client_id     = azuread_application.my_aro_app.client_id
    client_secret = azuread_service_principal_password.sp_aro_app_pwd.value
  }
  
  tags = {
    environment = "prod"
    project     = "aro"
  }
}
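To surface the addresses the next section relies on, output blocks can export them after terraform apply. A short sketch; the attribute names follow the azurerm provider's documented schema for this resource:

```hcl
output "api_url" {
  value = azurerm_redhat_openshift_cluster.aro_cluster.api_server_profile[0].url
}

output "api_ip" {
  value = azurerm_redhat_openshift_cluster.aro_cluster.api_server_profile[0].ip_address
}

output "ingress_ip" {
  value = azurerm_redhat_openshift_cluster.aro_cluster.ingress_profile[0].ip_address
}

output "console_url" {
  value = azurerm_redhat_openshift_cluster.aro_cluster.console_url
}
```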

DNS configuration

With the configuration complete, we can access the newly created IP addresses for the API and ingress, as well as the API URL.

NOTE: At this stage, the API endpoint will exist, but won't be reachable until we configure DNS zone records. 

First, we reference the existing DNS zone: 

data "azurerm_resource_group" "dns_zone_rg" {
  name = "<NAME_OF_DNS_RESOURCE_GROUP>"
}
data "azurerm_dns_zone" "aro_dns_zone" {
  name = "mydomain.com"
  resource_group_name = data.azurerm_resource_group.dns_zone_rg.name
}

Then, we create DNS records:

resource "azurerm_dns_a_record" "apiserver" {
  name                = "api"
  zone_name           = data.azurerm_dns_zone.aro_dns_zone.name
  resource_group_name = data.azurerm_resource_group.dns_zone_rg.name
  ttl                 = 300
  records             = [azurerm_redhat_openshift_cluster.aro_cluster.api_server_profile[0].ip_address]
}
resource "azurerm_dns_a_record" "ingress_apps" {
  name                = "*.apps"
  zone_name           = data.azurerm_dns_zone.aro_dns_zone.name
  resource_group_name = data.azurerm_resource_group.dns_zone_rg.name
  ttl                 = 300
  records             = [azurerm_redhat_openshift_cluster.aro_cluster.ingress_profile[0].ip_address]
}

Creating infrastructure governance in Azure

The next step after declaring the Azure Red Hat OpenShift deployment is to provide Azure Policy for infrastructure governance. To keep things simple, we will enforce tagging of resources and resource groups: Azure will block the deployment of any resource that does not carry a predefined set of tag names. For the demo, let's enforce the tags project and environment.

data "azurerm_policy_definition_built_in" "tag_resources_policy" {
  display_name = "Require a tag on resources"
}
data "azurerm_policy_definition_built_in" "tag_resourcegroup_policy" {
  display_name = "Require a tag on resource groups"
}
resource "azurerm_subscription_policy_assignment" "assign_environment_tag_resources_policy" {
  name                 = "require-environment-tag-on-resources"
  policy_definition_id = data.azurerm_policy_definition_built_in.tag_resources_policy.id
  subscription_id      = "/subscriptions/<AZURE_SUBSCRIPTION_ID>"
  parameters = jsonencode({
    "tagName" = {
      "value" = "environment"
    }
  })
}
resource "azurerm_subscription_policy_assignment" "assign_project_tag_resources_policy" {
  name                 = "require-project-tag-on-resources"
  policy_definition_id = data.azurerm_policy_definition_built_in.tag_resources_policy.id
  subscription_id      = "/subscriptions/<AZURE_SUBSCRIPTION_ID>"
  parameters = jsonencode({
    "tagName" = {
      "value" = "project"
    }
  })
}
resource "azurerm_subscription_policy_assignment" "assign_tag_resourcegroup_policy" {
  name                 = "require-a-tag-on-resource-groups"
  policy_definition_id = data.azurerm_policy_definition_built_in.tag_resourcegroup_policy.id
  subscription_id      = "/subscriptions/<AZURE_SUBSCRIPTION_ID>"
  parameters = jsonencode({
    "tagName" = {
      "value" = "environment"
    }
  })
}
resource "azurerm_subscription_policy_assignment" "assign_project_tag_resourcegroup_policy" {
  name                 = "require-project-tag-on-resource-groups"
  policy_definition_id = data.azurerm_policy_definition_built_in.tag_resourcegroup_policy.id
  subscription_id      = "/subscriptions/<AZURE_SUBSCRIPTION_ID>"
  parameters = jsonencode({
    "tagName" = {
      "value" = "project"
    }
  })
}
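One caveat worth considering: resources in the managed resource group (aro1-vms in this demo) are created by the Azure Red Hat OpenShift resource provider and may not carry the required tags. A hedged sketch of a policy exemption for that group, using the azurerm_resource_group_policy_exemption resource (the exemption name and category here are assumptions, not from the original article):

```hcl
# Hypothetical exemption so the deny-style tag policy does not interfere with
# resources the ARO resource provider creates in its managed resource group.
# The managed resource group only exists once the cluster has been created.
resource "azurerm_resource_group_policy_exemption" "aro_managed_rg_exemption" {
  name                 = "exempt-aro-managed-rg"
  resource_group_id    = "/subscriptions/<AZURE_SUBSCRIPTION_ID>/resourceGroups/aro1-vms"
  policy_assignment_id = azurerm_subscription_policy_assignment.assign_environment_tag_resources_policy.id
  exemption_category   = "Waiver"

  depends_on = [azurerm_redhat_openshift_cluster.aro_cluster]
}
```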

Deploying the compliance operator

With the Azure policy in place, the last step is to configure in-cluster policy governance with the Compliance Operator. For this, we will use Terraform's support for executing external programs and provision the Compliance Operator bootstrap process with Bash scripts and cloud-native declaration manifests.

NOTE: When deploying an Azure Red Hat OpenShift cluster, some time always passes before the domain configuration takes effect, so we need to make sure the API server is reachable and ready. As a best practice, check the /readyz endpoint that the OpenShift cluster provisions upon successful deployment.

data "external" "readyz" {
  program = ["bash", "${path.module}/readyz.sh"]
  query = {
    URL = "https://api.mydomain.com:6443/readyz"
  }
}
resource "terraform_data" "bootstrap" {
  input = {
    resource_group  = "aro1"
    cluster_name    = "cluster-1"
    cluster_api_url = azurerm_redhat_openshift_cluster.aro_cluster.api_server_profile[0].url
  }
  triggers_replace = [data.external.readyz]

  provisioner "local-exec" {
    on_failure = fail
    command    = <<EOT
bash bootstrap.sh
EOT
    working_dir = "${path.module}"
    environment = {
      RESOURCE_GROUP = self.input.resource_group
      CLUSTER_NAME   = self.input.cluster_name
      API_URL        = self.input.cluster_api_url
    }
  }
}

With the Azure infrastructure in place, we can define the Kubernetes manifests for the Compliance Operator. For this demo, let's choose the compliance profile ocp4-cis.

NOTE: We recommend you study and select the profiles that best fit your requirements.

The namespace.yaml creates a Kubernetes namespace resource.

apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: openshift-compliance

The operator_group.yaml creates a Kubernetes operator group, which will be used for the compliance operator.

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  targetNamespaces:
    - openshift-compliance

The subscription.yaml creates the subscription for the compliance operator and its version. This basically installs the compliance operator into the cluster.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: compliance-operator-sub
  namespace: openshift-compliance
spec:
  channel: "stable"
  installPlanApproval: Automatic
  name: compliance-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace

The scanSettingBinding.yaml creates the ScanSettingBinding Kubernetes resource that will deploy the ocp4-cis compliance profile.

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis-compliance
  namespace: openshift-compliance
profiles:
# CIS Red Hat OpenShift Container Platform Benchmark v1.7.0
# https://docs.redhat.com/en/documentation/openshift_container_platform/4.19/html/security_and_compliance/compliance-operator#cis-profiles_compliance-operator-supported-profiles
  - name: ocp4-cis
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
# Follow the compliance scan with: oc get compliancescan -w -n openshift-compliance
# Scan status commands: https://docs.redhat.com/en/documentation/openshift_container_platform/4.19/html/security_and_compliance/compliance-operator#compliance-operator-remediation

Let's have a look at the supporting Bash scripts, readyz.sh and bootstrap.sh.

The readyz.sh script waits for the /readyz endpoint to report ready, signaling that the API endpoint is available for the bootstrap script that comes later.

#!/bin/bash
# Terraform's external data source passes the query as JSON on stdin; extract the URL.
eval "$(jq -r '@sh "URL=\(.URL)"')"

# Poll the endpoint, e.g., https://<master node IP address>:6443/readyz
# (bounded loop so the failure branch below is actually reachable).
for i in {1..900}; do
  readyz=$(curl -ks "$URL")
  if [[ $readyz == "ok" ]]; then
    jq -n '{"ready":"true"}'
    exit 0
  fi
  sleep 2
done

jq -n '{"ready":"false"}'
exit 1

The bootstrap.sh script waits for the Operator Hub to be enabled and become ready to install operators.

#!/bin/bash
# When invoked from Terraform, run this script through the local-exec
# provisioner so it can reuse the shared Azure CLI session.
echo $CLUSTER_NAME
echo $RESOURCE_GROUP
echo $API_URL

CREDS_JSON=$(az aro list-credentials --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP)
USR=$(echo $CREDS_JSON | jq -r '.kubeadminUsername')
# Use a dedicated variable name; PWD is a special shell variable.
KUBEADMIN_PWD=$(echo $CREDS_JSON | jq -r '.kubeadminPassword')
echo $USR

# Do not use --insecure-skip-tls-verify if you can provide signed certificates at deployment time.
oc login $API_URL -u $USR -p $KUBEADMIN_PWD --insecure-skip-tls-verify
oc patch operatorhub cluster --insecure-skip-tls-verify --type=merge \
  -p '{ "spec": { "disableAllDefaultSources": false, "sources": [ {"name": "certified-operators", "disabled": false}, {"name": "community-operators", "disabled": false}, {"name": "redhat-marketplace", "disabled": false}, {"name": "redhat-operators", "disabled": false} ] } }'

oc apply -f ../manifests/namespace.yaml --insecure-skip-tls-verify
oc apply -f ../manifests/operator_group.yaml --insecure-skip-tls-verify
oc apply -f ../manifests/subscription.yaml --insecure-skip-tls-verify

# Some wait time is needed until the Operator Hub is enabled and the compliance operator is installed.
for i in {1..200}; do
  echo "checking compliance-operator operator installation-$i"
  INSTALL_PLAN=$(oc get subscription compliance-operator-sub --insecure-skip-tls-verify -n openshift-compliance -o jsonpath='{.status.installplan.name}')
  echo $INSTALL_PLAN
  OPERATOR_STATUS=$(oc get installplan "$INSTALL_PLAN" -n openshift-compliance -o jsonpath='{.status.phase}' 2>/dev/null)
  echo $OPERATOR_STATUS
  if [ "$OPERATOR_STATUS" = "Complete" ]; then
    echo "compliance-operator installed successfully"
    break
  fi
  sleep 3
done

oc apply -f ../manifests/scanSettingBinding.yaml --insecure-skip-tls-verify

The trick in the bootstrap.sh script is that we need to apply the manifests progressively: each step with a dependency waits for the previous step to complete. After enabling the Operator Hub, it takes some time for the compliance operator to install, so we catch that by checking the operator status from OpenShift. Once the operator is successfully installed, we can safely apply the ScanSettingBinding, which deploys the compliance profile.

Wrap up

Everything-as-code may be gaining momentum, but implementation can still be challenging, so enterprises must rely on compliant automation to support their teams and ensure governance across all environments.

This demo showed how an entire Azure Red Hat OpenShift cluster can be deployed with integrated governance by combining Azure Policy with the Compliance Operator's in-cluster policies.

Use the following learning resources to implement the automated Azure Red Hat OpenShift cluster deployment that meets your governance requirements:

  1. Compliance Operator
  2. Compliance operator profiles
  3. Compliance operator scanning for compliance
  4. How to use Red Hat OpenShift Operator Hub
  5. Learn more about Red Hat Advanced Cluster Security

All Terraform code and manifests from this article are available in this GitHub repository.
