This guide outlines a GitOps workflow for achieving per-tenant object storage quota enforcement within Red Hat OpenShift Data Foundation. While user management typically happens manually on the RGW backend, you can leverage the rook-ceph operator to automate the creation and modification of user accounts and their associated quotas. This approach combines Red Hat OpenShift GitOps (ArgoCD) for declaration with the External Secrets Operator (ESO) to securely propagate credentials into developer namespaces.
This article is a proof of concept for automated storage administration, providing an example of secure cross-namespace credential delivery and a study of Ceph RGW quota behavior. It is not a production-ready reference architecture, a guarantee of hard quota enforcement, or a best-practices manual for RBAC.
Prerequisites
Before you begin, ensure you have the following prerequisites installed and running:
- OpenShift Data Foundation installed (rook-ceph-operator running in openshift-storage); tested with the operators available for Red Hat OpenShift v4.20.8
- OpenShift GitOps
- External secrets operator (ESO)
- External Ceph cluster with RGW
- OpenShift Data Foundation configured to provide persistent volume storage via the external Ceph cluster
- Git repository accessible from OpenShift
Before proceeding, it is imperative to ensure the control plane is stable. The OpenShift Data Foundation and rook-ceph operators must be active in the openshift-storage namespace. The rook-ceph operator acts as the intelligent translator between Kubernetes custom resources and the ceph backend. Without it running, our GitOps declarations will never be reconciled.
The rook custom resource definitions (CRDs) must be present, and the CephCluster resource must report a HEALTH_OK status. This ensures the OpenShift environment recognizes ceph-specific objects and has established a valid, authenticated connection to the external cluster.
Verify the operator installation as follows:
$ oc get pods -n openshift-storage | grep operator
lvms-operator-587769c565-jgdck 1/1 Running 0 7m45s
noobaa-operator-7c4594bc9-c5mtd ...
rook-ceph-operator-f4df968c6-dcz4d 1/1 Running 0 2m30s
Verify the rook-ceph operator is running.
$ oc get deployment rook-ceph-operator -n openshift-storage
NAME READY UP-TO-DATE AVAILABLE AGE
rook-ceph-operator 1/1 1 1 4m53s
$ oc logs -n openshift-storage deployment/rook-ceph-operator --tail=20
2026-02-11 21:14:12.377348 I | op-k8sutil: removing daemonset csi-cephfsplugin if it exists
...
2026-02-11 21:14:55.024418 I | ceph-csi: successfully updated ceph-csi for default clientProfile CR "openshift-storage"
Verify the rook CRDs are available.
$ oc get crd | grep ceph.rook.io
cephblockpoolradosnamespaces.ceph.rook.io 2026-02-11T21:12:18Z
...
cephrbdmirrors.ceph.rook.io 2026-02-11T21:12:19Z
Verify the CephCluster is connected.
$ oc get cephcluster -n openshift-storage
NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH EXTERNAL FSID
ocs-external-storagecluster-cephcluster 6m6s Connected Cluster connected successfully HEALTH_OK true c32a3c0c-ef51-11f0-aab1-00163e251a70
Verify the installation of the external secrets operator.
$ oc get csv -n external-secrets-operator | grep external-secrets-operator
external-secrets-operator.v1.0.0 External Secrets Operator for Red Hat OpenShift 1.0.0 Succeeded
Connect rook-ceph to Ceph
To establish a working connection to the ceph RGW backend, the fsid, monitor endpoints, and the client.admin key from the external ceph node are required to recreate the rook-ceph-mon secret and the monitor ConfigMap. The default secrets created during initial OpenShift Data Foundation installation often lack the administrative depth required for user management. By manually populating these, we grant the rook-ceph operator the authority to execute administrative tasks on the external RGW.
On your ceph admin node:
# ceph fsid
c32a3c0c-ef51-11f0-aab1-00163e251a70
# ceph mon dump
epoch 4
fsid c32a3c0c-ef51-11f0-aab1-00163e251a70
last_changed 2026-01-12T01:15:34.621593+0000
created 2026-01-12T00:59:34.142597+0000
min_mon_release 17 (quincy)
election_strategy: 1
0: [v2:192.168.1.210:3300/0,v1:192.168.1.210:6789/0] mon.cephadmin
1: [v2:192.168.1.211:3300/0,v1:192.168.1.211:6789/0] mon.ceph-01
2: [v2:192.168.1.213:3300/0,v1:192.168.1.213:6789/0] mon.ceph-03
3: [v2:192.168.1.212:3300/0,v1:192.168.1.212:6789/0] mon.ceph-02
dumped monmap epoch 4
# ceph auth get client.admin
[client.admin]
key = AQB1R2RpbknXCxAAnXw3qm7AfcgUaRxGRdBMcA==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
Create the Ceph connection secrets. The rook-ceph-mon secret already exists, but it doesn't contain the correct admin-secret. We first delete the existing secret, and then create a new one.
$ oc delete secret -n openshift-storage rook-ceph-mon
secret "rook-ceph-mon" deleted
$ oc create secret generic rook-ceph-mon \
--from-literal=fsid='c32a3c0c-ef51-11f0-aab1-00163e251a70' \
--from-literal=mon-secret='' \
--from-literal=admin-secret='AQB1R2RpbknXCxAAnXw3qm7AfcgUaRxGRdBMcA==' \
--from-literal=cluster-name=external \
-n openshift-storage
$ oc apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-mon-endpoints
  namespace: openshift-storage
data:
  data: mon1=192.168.1.211:6789,mon2=192.168.1.212:6789,mon3=192.168.1.213:6789
  mapping: '{"node": {}}'
  maxMonId: "0"
EOF
Warning: resource configmaps/rook-ceph-mon-endpoints is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically.
configmap/rook-ceph-mon-endpoints configured
Next, create an administrative user with broad capabilities (i.e., users, buckets, metadata) directly on the Ceph cluster and save its credentials as a secret in OpenShift. This "admin-ops" user serves as the service account for the rook operator, allowing it to programmatically manage S3 users on our behalf.
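The `data` value in the ConfigMap above maps arbitrary labels (mon1, mon2, ...) to the v1 (port 6789) monitor addresses from `ceph mon dump`. A small illustrative Python sketch of that transformation (the helper name is hypothetical, and the labels are your choice):

```python
import re

def mon_endpoints(mon_dump: str) -> str:
    """Build the rook-ceph-mon-endpoints 'data' string from `ceph mon dump`
    output, picking the v1 (port 6789) address of each monitor."""
    endpoints = []
    for line in mon_dump.splitlines():
        # Lines look like: 0: [v2:192.168.1.210:3300/0,v1:192.168.1.210:6789/0] mon.cephadmin
        m = re.search(r"v1:([\d.]+:\d+)/\d+\] mon\.", line)
        if m:
            endpoints.append(m.group(1))
    return ",".join(f"mon{i}={ep}" for i, ep in enumerate(endpoints, start=1))

dump = """0: [v2:192.168.1.211:3300/0,v1:192.168.1.211:6789/0] mon.ceph-01
1: [v2:192.168.1.212:3300/0,v1:192.168.1.212:6789/0] mon.ceph-02"""
print(mon_endpoints(dump))  # mon1=192.168.1.211:6789,mon2=192.168.1.212:6789
```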
Create RGW admin user in ceph on the ceph admin node:
# radosgw-admin user create \
--uid=rgw-admin-ops-user \
--display-name="RGW Admin Ops User" \
--caps="users=*;buckets=*;metadata=*;usage=*;zone=*"
{
"user_id": "rgw-admin-ops-user",
"display_name": "RGW Admin Ops User",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"subusers": [],
"keys": [
{
"user": "rgw-admin-ops-user",
"access_key": "33HYT5DA5YO9GQ99K87U",
"secret_key": "b2VvdwdhtiTWS3rDDlKV2ma6lXWO0PGAN5thPpaA"
}
],
...
Create RGW admin secret in OpenShift.
$ oc create secret generic rgw-admin-ops-user \
--from-literal=accessKey='33HYT5DA5YO9GQ99K87U' \
--from-literal=secretKey='b2VvdwdhtiTWS3rDDlKV2ma6lXWO0PGAN5thPpaA' \
-n openshift-storage
secret/rgw-admin-ops-user created
A CephObjectStore pointing to the external RGW endpoint will be required. This resource tells rook exactly where the external RGW gateways reside, effectively onboarding the external storage service into the Kubernetes management plane.
Create the CephObjectStore as follows:
$ oc apply -f - <<EOF
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: external-store
  namespace: openshift-storage
spec:
  gateway:
    type: s3
    port: 80
    externalRgwEndpoints:
      - ip: 192.168.1.210
  healthCheck:
    bucket:
      disabled: true
EOF
cephobjectstore.ceph.rook.io/external-store created
To grant ArgoCD just enough permissions, create a ClusterRole and ClusterRoleBinding that give the ArgoCD application controller full permissions over cephobjectstoreusers. By default, ArgoCD may not have the rights to create or modify these specific storage-related CRDs. This step is the GitOps enabler, allowing our automated pipelines to manipulate Ceph user accounts.
Grant ArgoCD permissions as follows:
$ oc apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ceph-objectstoreuser-manager
rules:
  - apiGroups: ["ceph.rook.io"]
    resources: ["cephobjectstoreusers"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argocd-ceph-objectstoreuser-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ceph-objectstoreuser-manager
subjects:
  - kind: ServiceAccount
    name: openshift-gitops-argocd-application-controller
    namespace: openshift-gitops
EOF
clusterrole.rbac.authorization.k8s.io/ceph-objectstoreuser-manager created
clusterrolebinding.rbac.authorization.k8s.io/argocd-ceph-objectstoreuser-manager created
Structure of the Git repository
At this stage, the following structure demonstrates how the Git repository manages users and configures the OpenShift GitOps application.
Create the repository structure as follows:
```
repo/
├── users/
│   ├── kustomization.yaml
│   ├── devuser.yaml
│   ├── testuser.yaml
│   └── produser.yaml
└── argocd/
    └── application.yaml
```
$ cat users/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: openshift-storage
resources:
  - testuser.yaml
  - produser.yaml
  - devuser.yaml
commonLabels:
  app: ceph-rgw-users
  managed-by: gitops
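Each additional tenant is just another CephObjectStoreUser file plus an entry in this kustomization. As a sketch, a hypothetical Python helper (not part of the repository; the `qauser` tenant is made up) that renders such a manifest:

```python
def objectstore_user(name: str, max_size: str, max_buckets: int) -> dict:
    """Render a CephObjectStoreUser manifest dict for a new tenant."""
    return {
        "apiVersion": "ceph.rook.io/v1",
        "kind": "CephObjectStoreUser",
        "metadata": {
            "name": name,
            "namespace": "openshift-storage",
            "labels": {"managed-by": "gitops"},
        },
        "spec": {
            "store": "external-store",
            "displayName": f"{name} with {max_size} quota",
            # maxObjects: -1 leaves the object count unlimited
            "quotas": {"maxSize": max_size, "maxObjects": -1, "maxBuckets": max_buckets},
        },
    }

manifest = objectstore_user("qauser", "2Gi", 10)
print(manifest["spec"]["quotas"]["maxSize"])  # 2Gi
```

Serialized to YAML and committed, this is exactly the shape of the files below.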
$ cat users/testuser.yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: testuser
  namespace: openshift-storage
  labels:
    managed-by: gitops
spec:
  store: external-store
  displayName: "Test User with 2GB Quota"
  quotas:
    maxSize: 2Gi
    maxObjects: -1
    maxBuckets: 10
$ cat users/produser.yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: produser
  namespace: openshift-storage
  labels:
    managed-by: gitops
spec:
  store: external-store
  displayName: "Production User with 5GB Quota"
  quotas:
    maxSize: 5Gi
    maxObjects: -1
    maxBuckets: 50
$ cat users/devuser.yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: devuser
  namespace: openshift-storage
  labels:
    managed-by: gitops
spec:
  store: external-store
  displayName: "Dev User with 1GB Quota"
  quotas:
    maxSize: 1Gi
    maxObjects: -1
    maxBuckets: 10
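The maxSize values use binary Kubernetes quantities (Mi, Gi); RGW stores and reports them as bytes (max_size) and kibibytes (max_size_kb), which is what radosgw-admin will show later. A quick illustrative conversion:

```python
UNITS = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}

def quantity_to_bytes(q: str) -> int:
    """Convert a binary-suffixed Kubernetes quantity (e.g. '1Gi', '500Mi')
    to bytes, the unit radosgw-admin reports as max_size."""
    for suffix, factor in UNITS.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # plain integers are already bytes

print(quantity_to_bytes("1Gi"))           # 1073741824 (RGW max_size)
print(quantity_to_bytes("1Gi") // 1024)   # 1048576 (RGW max_size_kb)
print(quantity_to_bytes("500Mi"))         # 524288000
```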
$ cat argocd/application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ceph-rgw-users
  namespace: openshift-gitops
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://github.com/momoah/ceph-rgw-gitops.git
    targetRevision: main
    path: users
  destination:
    server: https://kubernetes.default.svc
    namespace: openshift-storage
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
      allowEmpty: false
    syncOptions:
      - CreateNamespace=false
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
  # Ignore TLS verification for local GitLab
  ignoreDifferences: []
---
apiVersion: v1
kind: Secret
metadata:
  name: github-repo
  namespace: openshift-gitops
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/momoah/ceph-rgw-gitops.git
  insecure: "true"
Configure the ArgoCD repository
After defining the Git repository structure, configure the repository secret and deploy the ArgoCD application. This establishes the source of truth. Once deployed, ArgoCD will begin monitoring our YAML files in Git and reconciling any differences it finds in the cluster.
$ oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: github-repo
  namespace: openshift-gitops
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/momoah/ceph-rgw-gitops.git
  # insecure: "true"
EOF
secret/github-repo created
Deploy the ArgoCD application.
$ oc apply -f argocd/application.yaml
Warning: metadata.finalizers: "resources-finalizer.argocd.argoproj.io": prefer a domain-qualified finalizer name including a path (/) to avoid accidental conflicts with other finalizer writers
application.argoproj.io/ceph-rgw-users created
secret/gitlab-repo created
Verify the GitOps workflow.
$ oc get application -n openshift-gitops
NAME SYNC STATUS HEALTH STATUS
ceph-rgw-users Synced Healthy
$ oc get cephobjectstoreuser -n openshift-storage
NAME PHASE AGE
devuser Ready 18s
noobaa-ceph-objectstore-user Ready 8m16s
produser Ready 18s
testuser Ready 18s
Activate the external secrets operator
Initialize the ESO controllers and create an rgw-secret-reader ClusterRole that allows reading secrets in the openshift-storage namespace. Standard Kubernetes security boundaries prevent one namespace from reading secrets in another. This ClusterRole provides a controlled way to bypass that restriction, specifically for the S3 credentials rook generates.
The Red Hat version of ESO requires an ExternalSecretsConfig CR to activate the controllers.
$ cat <<EOF | oc apply -f -
apiVersion: operator.openshift.io/v1alpha1
kind: ExternalSecretsConfig
metadata:
  name: cluster
spec:
  managementState: Managed
EOF
externalsecretsconfig.operator.openshift.io/cluster created
Verify the deployment of the ESO controllers.
$ oc get deployment -n external-secrets-operator
NAME READY UP-TO-DATE AVAILABLE AGE
external-secrets-operator-controller-manager 1/1 1 1 8m22s
Create a reusable ClusterRole
Create a ClusterRole that can be reused across all namespaces. This allows service accounts in the relevant namespaces to access the newly created (or updated) secrets in the openshift-storage namespace.
$ cat <<EOF | oc apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rgw-secret-reader
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
EOF
clusterrole.rbac.authorization.k8s.io/rgw-secret-reader created
Verify OpenShift GitOps is synchronized and healthy.
$ oc get application ceph-rgw-users -n openshift-gitops
NAME SYNC STATUS HEALTH STATUS
ceph-rgw-users Synced Healthy
Test the user update/creation process.
Once the GitOps process is complete, you should see the following list:
# OpenShift client:
$ oc get cephobjectstoreuser -n openshift-storage
NAME PHASE AGE
devuser Ready 33m
noobaa-ceph-objectstore-user Ready 41m
produser Ready 33m
testuser Ready 33m
$ oc get secrets -n openshift-storage | grep user
rgw-admin-ops-user Opaque 2 45m
rook-ceph-object-user-external-store-devuser kubernetes.io/rook 3 14m
rook-ceph-object-user-external-store-noobaa-ceph-objectstore-user kubernetes.io/rook 3 44m
rook-ceph-object-user-external-store-produser kubernetes.io/rook 3 36m
rook-ceph-object-user-external-store-testuser kubernetes.io/rook 3 36m
# On the Ceph RGW node (check devuser only for now):
$ radosgw-admin user info --uid=devuser | jq .user_quota
"user_quota": {
"enabled": true,
"check_on_raw": false,
"max_size": 1073741824,
"max_size_kb": 1048576,
"max_objects": -1
}
Now, test it further by modifying the quotas for devuser and testuser:
- Modify devuser to have a 500MB quota.
- Modify testuser to have a 500MB quota.
$ git diff
diff --git a/users/devuser.yaml b/users/devuser.yaml
index 1d368c7..9145a67 100644
--- a/users/devuser.yaml
+++ b/users/devuser.yaml
@@ -7,9 +7,9 @@ metadata:
     managed-by: gitops
 spec:
   store: external-store
-  displayName: "Dev User with 1GB Quota"
+  displayName: "Dev User with 500MB Quota"
   quotas:
-    maxSize: 1Gi
+    maxSize: 500Mi
     maxObjects: -1
     maxBuckets: 10
diff --git a/users/testuser.yaml b/users/testuser.yaml
index cfdab60..2a79d39 100644
--- a/users/testuser.yaml
+++ b/users/testuser.yaml
@@ -7,8 +7,8 @@ metadata:
     managed-by: gitops
 spec:
   store: external-store
-  displayName: "Test User with 2GB Quota"
+  displayName: "Test User with 500MB Quota"
   quotas:
-    maxSize: 2Gi
+    maxSize: 500Mi
     maxObjects: -1
     maxBuckets: 10
$ git add .; git commit -m 'update quotas for devuser and testuser'; git push;
[main e7629ab] update quotas for devuser and testuser
2 files changed, 4 insertions(+), 4 deletions(-)
Enumerating objects: 9, done.
Counting objects: 100% (9/9), done.
Delta compression using up to 12 threads
Compressing objects: 100% (5/5), done.
Writing objects: 100% (5/5), 437 bytes | 437.00 KiB/s, done.
Total 5 (delta 4), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (4/4), completed with 4 local objects.
To github.com:momoah/ceph-rgw-gitops.git
8448ede..e7629ab main -> main
Check the OpenShift GitOps ceph-rgw-users application status (wait a minute).
$ oc get application ceph-rgw-users -n openshift-gitops -w
NAME SYNC STATUS HEALTH STATUS
ceph-rgw-users Synced Healthy
# If the application does not sync for some reason, trigger a sync manually:
$ oc patch application ceph-rgw-users -n openshift-gitops \
--type merge -p '{"operation":{"initiatedBy":{"username":"admin"},"sync":{}}}'
application.argoproj.io/ceph-rgw-users patched
Check the user quotas. Both devuser and testuser should have the 500MB quota, while produser remains unchanged.
$ radosgw-admin user info --uid=devuser | jq .user_quota
"user_quota": {
"enabled": true,
"check_on_raw": false,
"max_size": 524288000,
"max_size_kb": 512000,
"max_objects": -1
}
$ radosgw-admin user info --uid=testuser | jq .user_quota
"user_quota": {
"enabled": true,
"check_on_raw": false,
"max_size": 524288000,
"max_size_kb": 512000,
"max_objects": -1
}
$ radosgw-admin user info --uid=produser | jq .user_quota
"user_quota": {
"enabled": true,
"check_on_raw": false,
"max_size": 5368709120,
"max_size_kb": 5242880,
"max_objects": -1
}
Set up ESO for the developer namespace
Now that the admin has configured the backend, we shift to the developer's perspective. For access, a ServiceAccount, RoleBinding, and SecretStore within the developer namespace are required. The SecretStore acts as a bridge, using the local ServiceAccount to reach back into the openshift-storage namespace via the permissions we granted earlier.
The ExternalSecret resource is defined to map the rook-generated secret to a local secret in the developer's namespace. This is the final step in the credential delivery pipeline. It ensures that whenever rook updates a user's keys, the developer's local secret is updated automatically.
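Once synced, the secret looks like any other Kubernetes secret to the consuming application. As an illustrative sketch (the helper and sample values here are hypothetical, not part of the repository), decoding its base64 `.data` fields into S3 client settings:

```python
import base64

def s3_settings(secret_data: dict) -> dict:
    """Decode the base64 .data fields of the ESO-synced secret into the
    settings an S3 client needs. The key names (AccessKey, SecretKey,
    Endpoint) are the ones rook writes into the source secret."""
    decoded = {k: base64.b64decode(v).decode() for k, v in secret_data.items()}
    return {
        "endpoint_url": decoded["Endpoint"],
        "aws_access_key_id": decoded["AccessKey"],
        "aws_secret_access_key": decoded["SecretKey"],
    }

# Sample values, base64-encoded as `oc get secret ... -o json` would show them
data = {
    "AccessKey": base64.b64encode(b"EXAMPLEACCESSKEY").decode(),
    "SecretKey": base64.b64encode(b"examplesecretkey").decode(),
    "Endpoint": base64.b64encode(b"http://192.168.1.210:80").decode(),
}
print(s3_settings(data)["endpoint_url"])  # http://192.168.1.210:80
```

The bucket-filler Deployment later in this article consumes the same three keys via secretKeyRef environment variables instead.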
The following kustomize structure sets up ESO for the developer namespace:
$ tree namespaces/
namespaces/
└── developer
    ├── externalsecret-devuser.yaml
    ├── kustomization.yaml
    ├── rolebinding.yaml
    ├── secretstore.yaml
    └── serviceaccount.yaml
$ cat namespaces/developer/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - serviceaccount.yaml
  - rolebinding.yaml
  - secretstore.yaml
  - externalsecret-devuser.yaml
$ cat namespaces/developer/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eso-reader
  namespace: developer
$ cat namespaces/developer/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: eso-reader-developer
  namespace: openshift-storage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rgw-secret-reader
subjects:
  - kind: ServiceAccount
    name: eso-reader
    namespace: developer
$ cat namespaces/developer/secretstore.yaml
apiVersion: external-secrets.io/v1
kind: SecretStore
metadata:
  name: openshift-storage-store
  namespace: developer
spec:
  provider:
    kubernetes:
      remoteNamespace: openshift-storage
      server:
        caProvider:
          type: ConfigMap
          name: kube-root-ca.crt
          key: ca.crt
      auth:
        serviceAccount:
          name: eso-reader
$ cat namespaces/developer/externalsecret-devuser.yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: devuser-credentials
  namespace: developer
spec:
  refreshInterval: 1m
  secretStoreRef:
    name: openshift-storage-store
    kind: SecretStore
  target:
    name: rook-ceph-object-user-external-store-devuser
    creationPolicy: Owner
  data:
    - secretKey: AccessKey
      remoteRef:
        key: rook-ceph-object-user-external-store-devuser
        property: AccessKey
    - secretKey: SecretKey
      remoteRef:
        key: rook-ceph-object-user-external-store-devuser
        property: SecretKey
    - secretKey: Endpoint
      remoteRef:
        key: rook-ceph-object-user-external-store-devuser
        property: Endpoint
GitOps needs permission to manage resources in the developer namespace. Grant the OpenShift GitOps permissions as follows:
$ cat <<EOF | oc apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-developer-manager
  namespace: developer
rules:
  - apiGroups: [""]
    resources: ["serviceaccounts", "secrets"]
    verbs: ["*"]
  - apiGroups: ["external-secrets.io"]
    resources: ["externalsecrets", "secretstores"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-developer-manager-binding
  namespace: developer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argocd-developer-manager
subjects:
  - kind: ServiceAccount
    name: openshift-gitops-argocd-application-controller
    namespace: openshift-gitops
EOF
role.rbac.authorization.k8s.io/argocd-developer-manager created
rolebinding.rbac.authorization.k8s.io/argocd-developer-manager-binding created
Create the OpenShift GitOps application for the developer namespace. The argocd/application-developer.yaml file is already part of the repository.
$ cat argocd/application-developer.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ceph-rgw-developer
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/momoah/ceph-rgw-gitops.git
    targetRevision: main
    path: namespaces/developer
  destination:
    server: https://kubernetes.default.svc
    namespace: developer
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=false
$ oc create -f argocd/application-developer.yaml
application.argoproj.io/ceph-rgw-developer created
$ oc get application -n openshift-gitops
NAME SYNC STATUS HEALTH STATUS
ceph-rgw-developer Synced Healthy
ceph-rgw-users Synced Healthy
Verify the ESO secret sync.
# Check all ESO resources in developer namespace
$ oc get serviceaccount,secretstore,externalsecret -n developer
NAME SECRETS AGE
serviceaccount/builder 0 39m
serviceaccount/default 0 39m
serviceaccount/deployer 0 39m
serviceaccount/eso-reader 0 61s
NAME AGE STATUS CAPABILITIES READY
secretstore.external-secrets.io/openshift-storage-store 60s Valid ReadWrite True
NAME STORETYPE STORE REFRESH INTERVAL STATUS READY
externalsecret.external-secrets.io/devuser-credentials SecretStore openshift-storage-store 1m SecretSynced True
# Verify the secret was created by ESO
$ oc get externalsecret devuser-credentials -n developer
NAME STORETYPE STORE REFRESH INTERVAL STATUS READY
devuser-credentials SecretStore openshift-storage-store 1m SecretSynced True
$ oc get secretstore openshift-storage-store -n developer
NAME AGE STATUS CAPABILITIES READY
openshift-storage-store 100s Valid ReadWrite True
$ oc get secret rook-ceph-object-user-external-store-devuser -n developer
NAME TYPE DATA AGE
rook-ceph-object-user-external-store-devuser Opaque 3 110s
# Verify credentials
$ oc get secret rook-ceph-object-user-external-store-devuser -n developer -o jsonpath='{.data.AccessKey}' | base64 -d && echo
QG6910E4KTJJT45C6Y1G
$ oc get secret rook-ceph-object-user-external-store-devuser -n developer -o jsonpath='{.data.SecretKey}' | base64 -d && echo
pldTuR4H69x3CVYs6JwYF80pkmdDbWQzuitH2o2M
$ oc get secret rook-ceph-object-user-external-store-devuser -n developer -o jsonpath='{.data.Endpoint}' | base64 -d && echo
http://192.168.1.210:80
# Verify ESO owns the secret (check ownerReferences)
$ oc get secret rook-ceph-object-user-external-store-devuser -n developer -o jsonpath='{.metadata.ownerReferences}' | jq
[
{
"apiVersion": "external-secrets.io/v1",
"blockOwnerDeletion": true,
"controller": true,
"kind": "ExternalSecret",
"name": "devuser-credentials",
"uid": "9f4fb219-eb86-4b28-ae68-19ee5c4e6ac0"
}
]Grant developers access to the namespace:
$ oc adm policy add-role-to-user admin developer -n developer
clusterrole.rbac.authorization.k8s.io/admin added: "developer"
# Verification
$ oc get rolebinding -n developer
NAME ROLE AGE
admin ClusterRole/admin 46m
admin-0 ClusterRole/admin 46m
argocd-developer-manager-binding Role/argocd-developer-manager 11m
system:deployers ClusterRole/system:deployer 46m
system:image-builders ClusterRole/system:image-builder 46m
Test bucket creation and quota limits
Deploy a Python-based "bucket filler" application to stress-test the RGW quotas we defined in Git and to provide empirical proof of this configuration. By attempting to exceed the 500MB limit, we can observe the RGW "Quota Exceeded" response in real-time and verify that our GitOps-defined limits are “softly” enforced at the storage layer.
Deploy the bucket filler
Now we will create the application that tests quota enforcement.
AWS_ACCESS_KEY_ID="QG6910E4KTJJT45C6Y1G"
AWS_SECRET_ACCESS_KEY="pldTuR4H69x3CVYs6JwYF80pkmdDbWQzuitH2o2M"
S3_ENDPOINT="http://192.168.1.210:80"
$ cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: bucket-filler-script
  namespace: developer
data:
  fill-buckets.py: |
    #!/usr/bin/env python3
    import boto3
    import os
    import time
    from botocore.exceptions import ClientError

    # Get credentials from environment
    access_key = os.environ['AWS_ACCESS_KEY_ID']
    secret_key = os.environ['AWS_SECRET_ACCESS_KEY']
    endpoint = os.environ['S3_ENDPOINT']

    # Create S3 client
    s3 = boto3.client(
        's3',
        endpoint_url=endpoint,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key
    )

    buckets = ['app-bucket-1', 'app-bucket-2']

    # Create buckets
    print("Creating buckets...")
    for bucket in buckets:
        try:
            s3.create_bucket(Bucket=bucket)
            print(f"Created bucket: {bucket}")
        except ClientError as e:
            if e.response['Error']['Code'] == 'BucketAlreadyOwnedByYou':
                print(f"Bucket {bucket} already exists")
            else:
                print(f"Error creating bucket {bucket}: {e}")

    # Upload files until quota exceeded
    file_size = 10 * 1024 * 1024  # 10MB per file
    file_data = b'0' * file_size
    counter = 0

    print(f"\nStarting to upload {file_size / (1024*1024)}MB files...")
    print("Press Ctrl+C to stop\n")

    while True:
        bucket = buckets[counter % 2]  # Alternate between buckets
        filename = f"file-{counter:04d}.dat"
        try:
            s3.put_object(Bucket=bucket, Key=filename, Body=file_data)
            counter += 1
            print(f"[{counter}] Uploaded {filename} to {bucket}")
            time.sleep(2)  # Wait 2 seconds between uploads
        except ClientError as e:
            error_code = e.response['Error']['Code']
            if error_code == 'QuotaExceeded':
                print(f"\nQUOTA EXCEEDED! Cannot upload {filename} to {bucket}")
                print(f"Total files uploaded: {counter}")
                print(f"Approximate data uploaded: {counter * file_size / (1024*1024*1024):.2f} GB")
                break
            else:
                print(f"Error uploading {filename}: {e}")
                time.sleep(5)

    # List final bucket contents
    print("\nFinal bucket status:")
    for bucket in buckets:
        try:
            response = s3.list_objects_v2(Bucket=bucket)
            count = response.get('KeyCount', 0)
            total_size = sum(obj['Size'] for obj in response.get('Contents', []))
            print(f"  {bucket}: {count} objects, {total_size / (1024*1024):.2f} MB")
        except ClientError as e:
            print(f"  {bucket}: Error - {e}")

    print("\nDone! Quota enforcement working.")
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bucket-filler
  namespace: developer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bucket-filler
  template:
    metadata:
      labels:
        app: bucket-filler
    spec:
      containers:
        - name: filler
          image: quay.local.momolab.io/mirror/ubi9/python-311:latest
          command: ["/bin/bash", "-c"]
          args:
            - |
              pip install boto3 --quiet
              python3 /scripts/fill-buckets.py
          env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: rook-ceph-object-user-external-store-devuser
                  key: AccessKey
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: rook-ceph-object-user-external-store-devuser
                  key: SecretKey
            - name: S3_ENDPOINT
              valueFrom:
                secretKeyRef:
                  name: rook-ceph-object-user-external-store-devuser
                  key: Endpoint
          volumeMounts:
            - name: script
              mountPath: /scripts
      volumes:
        - name: script
          configMap:
            name: bucket-filler-script
            defaultMode: 0755
EOF
configmap/bucket-filler-script created
deployment.apps/bucket-filler created
Monitor quota enforcement
Quota enforcement in Ceph RGW is not a hard limit. While you can tune it to be more precise, it is generally a soft enforcement, and the degree of overshoot depends on the size of the files copied to the object store.
Do not be surprised if the following logs show usage beyond the configured limit before uploads are rejected.
$ oc get all
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
NAME READY STATUS RESTARTS AGE
pod/bucket-filler-64955d54d5-8qhtd 1/1 Running 0 51s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/bucket-filler 1/1 1 1 52s
NAME DESIRED CURRENT READY AGE
replicaset.apps/bucket-filler-64955d54d5 1 1 1 52s
$ oc logs -f deployment/bucket-filler -n developer
[notice] A new release of pip is available: 24.2 -> 26.0.1
[notice] To update, run: pip install --upgrade pip
Creating buckets...
Created bucket: app-bucket-1
Created bucket: app-bucket-2
Starting to upload 10.0MB files...
Press Ctrl+C to stop
[1] Uploaded file-0000.dat to app-bucket-1
[2] Uploaded file-0001.dat to app-bucket-2
[3] Uploaded file-0002.dat to app-bucket-1
[4] Uploaded file-0003.dat to app-bucket-2
...
[81] Uploaded file-0080.dat to app-bucket-1
[82] Uploaded file-0081.dat to app-bucket-2
[83] Uploaded file-0082.dat to app-bucket-1
QUOTA EXCEEDED! Cannot upload file-0083.dat to app-bucket-2
Total files uploaded: 83
Approximate data uploaded: 0.81 GB
Final bucket status:
app-bucket-1: 42 objects, 420.00 MB
app-bucket-2: 41 objects, 410.00 MB
Done! Quota enforcement working.
Verify in Ceph on your Ceph admin node.
$ radosgw-admin user info --uid=devuser | jq .user_quota
{
"enabled": true,
"check_on_raw": false,
"max_size": 524288000,
"max_size_kb": 512000,
"max_objects": -1
}
$ radosgw-admin user stats --uid=devuser --sync-stats
{
"stats": {
"size": 870318080,
"size_actual": 870318080,
"size_kb": 849920,
"size_kb_actual": 849920,
"num_objects": 83
},
"last_stats_sync": "2026-02-12T00:21:48.232019Z",
"last_stats_update": "2026-02-12T00:21:48.223448Z"
}
$ radosgw-admin bucket stats --bucket=app-bucket-1 | jq .usage
{
"rgw.main": {
"size": 440401920,
"size_actual": 440401920,
"size_utilized": 440401920,
"size_kb": 430080,
"size_kb_actual": 430080,
"size_kb_utilized": 430080,
"num_objects": 42
}
}
$ radosgw-admin bucket stats --bucket=app-bucket-2 | jq .usage
{
"rgw.main": {
"size": 429916160,
"size_actual": 429916160,
"size_utilized": 429916160,
"size_kb": 419840,
"size_kb_actual": 419840,
"size_kb_utilized": 419840,
"num_objects": 41
}
}
$ radosgw-admin bucket list --uid=devuser
[
"app-bucket-1",
"app-bucket-2"
]
Wrap up
This guide demonstrated how to stop managing storage users by hand and start using a GitOps approach to automate it. By using OpenShift GitOps and the ESO, you can define storage users as code in a Git repository, which the rook-ceph operator then translates into actual accounts and quotas on your Ceph cluster. This creates a more secure, one-way pipeline in which the Git repository is the source of truth: any manual changes made on the cluster are automatically overwritten to match the code.
The "bucket filler" test highlights a critical operational nuance: Ceph RGW quota enforcement acts as a soft limit rather than a hard stop. As seen in the results, a user capped at 500MB might successfully upload closer to 0.81GB depending on file size and sync intervals. This setup provides a reliable, repeatable framework for managing object storage at scale, provided administrators account for these enforcement tolerances in their resource planning.
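The numbers from the test run bear this out; a quick check of the arithmetic (83 accepted 10 MiB uploads against a 500 Mi quota):

```python
MiB = 1024 * 1024

quota = 500 * MiB         # max_size the GitOps-defined quota enforces
uploaded = 83 * 10 * MiB  # 83 accepted 10 MiB uploads before rejection

print(uploaded)                                 # 870318080, matches user stats "size"
print(round(uploaded / 1024**3, 2))             # 0.81 (GiB), the script's estimate
print(round((uploaded - quota) / quota * 100))  # ~66% past the soft limit
```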
By implementing this workflow, you can bridge the gap between traditional storage administration and modern GitOps practices. The desired outcome is to declare ceph users and their tenant level quotas as code, automate their creation via rook-ceph, and use ESO to ensure that S3 credentials land exactly where they are needed without manual intervention.