For security teams responsible for Red Hat OpenShift, the compliance operator is a powerful ally. Running built-in profiles for CIS or PCI-DSS provides a clear, automated baseline and produces necessary evidence for auditors.
This article is your technical guide to turning auditor requests into automated rules that generate evidence. We will walk through practical examples of how to write, apply, and integrate these CustomRules into your existing scanning workflows, expanding automated compliance scanning coverage for OpenShift.
Problem solved
What happens when the auditor hands you a list of findings that aren't part of those standard benchmarks? Perhaps they're asking you to label all in-scope resources to generate and maintain an inventory, to prove that only specific users within your organization have access to applications that process cardholder data, or to ensure that all applications use a specific database. These are the specific, custom controls unique to your organization's security posture.
Until now, proving this meant falling back on manual, time-consuming checks. You were forced to write fragile ad-hoc scripts, manually grep through oc command outputs, or worst of all, mark a control as "manually verified" and hope the evidence is sufficient. These methods are not automated, not repeatable, and leave a significant blind spot in your compliance posture.
The new CustomRule custom resource definition (CRD) solves this precise problem. This capability enables you to transform your organization's specific security rules—including those based on auditor feedback or enterprise IT policies—into code. You can then execute these custom checks alongside Red Hat's standard checks, maintained and shipped within the compliance operator's standard profiles.
Note on Feature Status: The CustomRule feature is currently a Tech Preview and is available for testing and evaluation purposes only. It is not yet covered by Red Hat Technical Support, and because the feature evolves based on user feedback, it may undergo changes, including changes that are not backward-compatible. Avoid using it in production environments until the feature reaches General Availability (GA).
Understanding the CustomRule CRD
The CustomRule CRD is designed to provide organizations with the flexibility to define and automate custom compliance checks. It functions similarly to the existing Rule CRD, but it uses the Common Expression Language (CEL) as its underlying scanning technology.
Note: CEL is a small, lightweight language well-suited for policy evaluation. If you’re familiar with the CNCF ecosystem and Kubernetes, you may have seen CEL expressions as part of how Kubernetes validates admission policies.
A CustomRule manifest is defined by several key fields. The expression field contains the CEL logic that evaluates the compliance condition. The inputs field specifies the Kubernetes resources that the expression will evaluate. To use this feature, set the scannerType to CEL. The manifest also includes standard metadata fields (title, description, and severity) and a failureReason message that provides clear context when a check does not pass.
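If the operator is already installed on a test cluster, you can browse these fields straight from the published API schema. The following is one quick way to do that, assuming the Tech Preview CRD is present on your cluster:

# Confirm the CustomRule CRD is available and inspect its spec fields
$ oc api-resources --api-group=compliance.openshift.io
$ oc explain customrule.spec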
Example 1: Auditing cluster-admin access
In this example, we will audit cluster-admin access against an allow-list. This is a more advanced use case: auditing privileged API access to the cluster, per CIS OpenShift Benchmark 5.1.1, which recommends restricting access to the cluster-admin role.
This type of control is an ideal candidate for a CustomRule because the authorized list of administrators varies significantly between organizations and clusters. While Red Hat cannot provide a universal default for this, organizations can implement a precise control that reflects their access policies.
The following example defines an "allow-list" of approved principals. The rule will pass only if subjects (Users, Groups, and ServiceAccounts) bound to the cluster-admin role are explicitly included in the lists defined in the expression.
apiVersion: compliance.openshift.io/v1alpha1
kind: CustomRule
metadata:
  name: cluster-admin-allow-list
  namespace: openshift-compliance
spec:
  description: |-
    Audits subjects bound to the 'cluster-admin' role against a
    pre-defined allow-list. This aligns with CIS OpenShift Benchmark
    5.1.1, which recommends restricting 'cluster-admin' access.
    This list must be customized by the organization.
    Find users bound to the 'cluster-admin' role by inspecting the
    binding subjects:
    $ oc get clusterrolebinding cluster-admin -o json | jq .subjects
  failureReason: |-
    Found subject(s) bound to 'cluster-admin' that are NOT on the
    organizational allow-list.
  expression: |-
    crbs.items.filter(crb, crb.metadata.name == 'cluster-admin')[0]
      .subjects.all(subject,
        (subject.kind == 'User' && subject.name in [
          'kubeadmin', 'system:admin', 'alice@my-company.com'
        ]) ||
        (subject.kind == 'Group' && subject.name in [
          'system:masters', 'ocp-sre-team'
        ]) ||
        (subject.kind == 'ServiceAccount' && has(subject.namespace) &&
          (subject.namespace + '/' + subject.name) in [
            'openshift-monitoring/prometheus-k8s'
          ]
        )
      )
  id: cluster_admin_allow_list
  checkType: Platform
  inputs:
    - kubernetesInputSpec:
        apiVersion: rbac.authorization.k8s.io/v1
        resource: clusterrolebindings
      name: crbs
  scannerType: CEL
  severity: high
  title: Audit cluster-admin access against an allow-list

This rule evaluates to true (pass) only if every subject found in a cluster-admin binding is present in one of the predefined allowed lists. If it discovers any principal that is not on the list, the rule will fail, signaling the need for a manual review.
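Before wiring this rule into a profile, it's worth confirming that the operator accepted the CEL expression. A quick check might look like this (the file name is just an example):

# Apply the rule and verify that it parsed cleanly
$ oc apply -f cluster-admin-allow-list.yaml
$ oc get customrule cluster-admin-allow-list -n openshift-compliance -o yaml
# Check the status: an invalid expression puts the rule in an Error state,
# and it must be Ready before a TailoredProfile can reference it.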
Example 2: Discovering unmanaged databases
Another common auditor request, especially for environments handling sensitive data (e.g., PCI), is to ensure that application teams are not running their own unmanaged, shadow database instances. The policy may dictate that all applications must use a central, DBA-managed database that is properly hardened, monitored, and backed up.
The following CustomRule audits that policy by scanning all pods. It defines a list of common database image names and an allow-list of namespaces where those images are permitted to run (e.g., the DBA's namespace).
If it finds a pod running a postgres, mysql, or mongo image in any namespace outside that allow-list, the rule will fail. Note that, as the rule author, you can supply specific guidance that walks users through the investigation and remediation process using the failureReason field.
apiVersion: compliance.openshift.io/v1alpha1
kind: CustomRule
metadata:
  name: disallow-shadow-databases
  namespace: openshift-compliance
spec:
  description: |-
    Ensures application teams do not deploy their own 'shadow'
    database instances (e.g., postgres, mysql, mongo, redis).
    All applications requiring a database must use the centrally
    managed instance in the 'central-dba-prod' namespace.
    This is critical for maintaining compliance and ensuring
    all sensitive data is properly secured, backed up, and audited.
  failureReason: |-
    One or more pods were found running a disallowed database image
    (e.g., postgres, mysql, mongo) in a namespace not approved for
    database workloads.
    Policy dictates that only the 'central-dba-prod' and
    'openshift-monitoring' namespaces are permitted to run these images.
    All other applications must use the central, managed database.
    To find the non-compliant pods, run this command:
    $ oc get pods -A -o json | jq -r '.items[] | "Namespace: \(.metadata.namespace) Pod: \(.metadata.name) Images: \([.spec.containers[].image] | join(","))"' | grep -E "postgres|mysql|mariadb|mongo|redis" | grep -vE "central-dba-prod|openshift-monitoring"
    To remediate this finding, you must:
    1. Review the pods identified by the command above.
    2. Delete the non-compliant pods or reconfigure the application
       to use the approved central database.
    3. Consider using an ACS policy or a Kubernetes validating
       admission policy to prevent future regressions.
  expression: |
    pods.items.all(pod,
      pod.metadata.namespace in [
        'central-dba-prod',
        'openshift-monitoring'
      ] ||
      pod.spec.containers.all(container,
        !['postgres', 'mysql', 'mariadb', 'mongo', 'redis'].exists(db, container.image.contains(db))
      )
    )
  id: disallow_shadow_databases
  checkType: Platform
  inputs:
    - kubernetesInputSpec:
        apiVersion: v1
        resource: pods
      name: pods
  scannerType: CEL
  severity: high
  title: Disallow Unapproved Database Pods in App Namespaces

You can couple this with a rule that ensures all images are pulled from a private registry you control:
apiVersion: compliance.openshift.io/v1alpha1
kind: CustomRule
metadata:
  name: allowed-registries-configured
  namespace: openshift-compliance
spec:
  title: "Allowed registries are configured"
  description: |-
    Allowed registries should be configured to restrict the registries
    that the OpenShift container runtime can access, and all other
    registries should be blocked.
  severity: medium
  failureReason: |-
    The required trusted registry
    'my-trusted-registry.internal.example.com' was not found
    in the 'spec.registrySources.allowedRegistries' list for
    the 'image.config.openshift.io/cluster' resource.
    To remediate this, you must patch the cluster image
    configuration to add this specific registry to the list.
  id: allowed-registries-configured
  checkType: Platform
  scannerType: CEL
  inputs:
    - kubernetesInputSpec:
        apiVersion: config.openshift.io/v1
        resource: images
      name: imageConfigs
  expression: |
    imageConfigs.items.exists(
      img, img.metadata.name == 'cluster' &&
      has(img.spec.registrySources) &&
      img.spec.registrySources != null &&
      has(img.spec.registrySources.allowedRegistries) &&
      img.spec.registrySources.allowedRegistries != null &&
      type(img.spec.registrySources.allowedRegistries) == list &&
      'my-trusted-registry.internal.example.com' in img.spec.registrySources.allowedRegistries
    )

Using both rules, you can verify that OpenShift pulls images only from a trusted registry you control and that database images run only in the approved namespaces.
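To preview exactly what the registry rule will evaluate on your cluster, you can inspect the image configuration resource yourself; for example:

# Show the registries currently allowed by the cluster image configuration
$ oc get image.config.openshift.io/cluster -o jsonpath='{.spec.registrySources.allowedRegistries}{"\n"}'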
Integrating CustomRule resources into a scan
Incorporating a new CustomRule into the scanning process involves three main steps.
- Create the CustomRule: Apply the CustomRule manifest (as in the previous examples) to the cluster within the openshift-compliance namespace.
- Add the rule to a TailoredProfile: CustomRule resources are not scanned directly. You must add them to a TailoredProfile, which bundles one or more rules into a scannable profile. Important: A single TailoredProfile cannot contain both standard Rule resources and CustomRule resources. You must manage these types in separate TailoredProfile definitions.

apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: custom-security-checks
  namespace: openshift-compliance
spec:
  description: Custom security compliance profile using CEL-based CustomRules
  enableRules:
    - kind: CustomRule # Ensure the kind is specified as CustomRule
      name: cluster-admin-allow-list
      rationale: CIS 5.1.1 requires auditing cluster-admin bindings against an allow-list
    - kind: CustomRule
      name: disallow-shadow-databases
      rationale: Ensure the usage of a central, managed database
    - kind: CustomRule
      name: allowed-registries-configured
      rationale: Ensure the cluster only pulls images from trusted registries
  title: Custom Security Profile

- Execute the scan with a ScanSettingBinding: Create a ScanSettingBinding resource. This object links the TailoredProfile to a ScanSetting (e.g., the default setting), which instructs the compliance operator to execute the scan.

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: custom-security-scan
  namespace: openshift-compliance
profiles:
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: TailoredProfile
    name: custom-security-checks
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default

Reviewing scan results
Once you've applied the ScanSettingBinding, the compliance operator initiates the scan. You can review the results after a few moments.
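If you saved the TailoredProfile and ScanSettingBinding manifests to files, the apply-and-wait sequence might look like the following (the file names here are placeholders):

# Apply the profile and the binding, then follow the resulting scan
$ oc apply -f custom-security-checks-tailoredprofile.yaml
$ oc apply -f custom-security-scan-binding.yaml
$ oc get compliancesuite -n openshift-compliance -w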
$ oc get compliancecheckresults
NAME                                                  STATUS   SEVERITY
custom-security-scan-cluster-admin-allow-list         FAIL     high
custom-security-scan-disallow-shadow-databases        FAIL     high
custom-security-scan-allowed-registries-configured    FAIL     high

In this context, a FAIL status indicates that the custom rules executed successfully and correctly identified resources not compliant with the defined policies.
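To dig into a specific failure, you can inspect the individual ComplianceCheckResult object, which records the details the scan captured for that check:

# Show the full details of one failed check
$ oc get compliancecheckresult custom-security-scan-cluster-admin-allow-list \
    -n openshift-compliance -o yaml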
Best practices for authoring rules
When developing CustomRule definitions, consider the following guidelines:
- Passing logic: Design CEL expressions to evaluate to true when the resource is compliant (i.e., the rule passes).
- Naming conventions: Use specific and descriptive metadata.name values (e.g., disallow-hostpath-volumes) rather than generic identifiers (my-rule-1).
- Rule descriptions: Use spec.description to explain the purpose of the rule and the risk it mitigates, not just its function. It's also a great idea to tell users how to find resources that violate the rule and supplement the description with a command that helps them find that information.
- Actionable failure messages: Write spec.failureReason messages that clearly state what an administrator must do to remediate the non-compliant finding.
- Resource efficiency: In spec.inputs, request only the resources that are strictly necessary for the CEL expression. Requesting superfluous resources can increase scan duration and memory consumption.
- Use consistent names and IDs: Technically, the CustomRule ID and name can be different, but keeping them the same is simpler and creates less ambiguity.
Basic troubleshooting
If a CustomRule does not function as expected, verify the following common issues:
- CustomRule status: Check the rule's status (oc get customrule <your-rule> -o yaml). An invalid CEL expression will cause the rule to enter an Error state. It must be Ready to be included in a TailoredProfile.
- TailoredProfile status: The TailoredProfile must also be in a Ready state before a ScanSettingBinding can reference it.
- Rule type mixing: Confirm that the TailoredProfile does not attempt to mix standard Rule and CustomRule types.
- Rescanning: After modifying a CustomRule, you can rescan the environment by annotating the ComplianceScan resources to generate updated ComplianceCheckResult objects, as shown in the example below.
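As a reference, a status check and rescan sequence could look like the following, reusing the resource names from the examples in this article; the compliance.openshift.io/rescan= annotation is the operator's standard way to re-run existing scans:

# Confirm the modified rule's profile is Ready again
$ oc get tailoredprofile custom-security-checks -n openshift-compliance -o yaml

# Trigger a fresh scan so updated ComplianceCheckResult objects are generated
$ oc -n openshift-compliance annotate compliancescans --all compliance.openshift.io/rescan=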
Support and feedback
Since this feature is in Tech Preview, feedback is essential for its development and stabilization. To provide feedback, report issues, or ask questions, contact Red Hat Support through the Red Hat Customer Portal. Our support teams are available to assist with challenges encountered during the evaluation of this feature.
It's important to note that the CustomRule feature is currently designed for platform-level compliance checks, allowing it to evaluate Kubernetes and OpenShift API resources (pods, roles, and cluster configurations). This capability does not extend to host-level scanning, meaning you cannot use it to perform checks on individual cluster nodes, such as verifying file permissions, auditing file contents, or inspecting running processes on the host filesystem. However, we may include host-level scanning capabilities in the future, depending on user feedback. Continue using the existing node Rule resources shipped with the compliance operator for host-level scanning.
Next steps
The CustomRule CRD provides a significant enhancement to the automated compliance scanning capabilities of Red Hat OpenShift.
The Red Hat OpenShift Compliance Operator is available with all Red Hat OpenShift installations and you can install it from the OperatorHub via the Operator Lifecycle Manager (OLM). The complete installation guide is available in the documentation.
To begin experimenting with this feature, review the CustomRule examples on GitHub.