
OpenShift GitOps recommended practices

March 5, 2025
Gerald Nunn
Related topics:
GitOps
Related products:
Red Hat OpenShift GitOps

    I’m often asked for best practices when it comes to OpenShift GitOps, but I’ve never been a fan of the term “best practices,” as it implies a “one true way” to do things. When it comes to GitOps, so much depends on a variety of factors, such as organizational structure (DevOps versus traditional silos), YAML management tools (Helm versus Kustomize versus others), and more.

    As a result, I like to categorize these practices into a set of buckets as follows:

    • Recommended: Practices that are recommended for all organizations and situations.
    • Suggested: Applicable for most organizations and use cases, but might vary in specific situations.
    • Situational: Practices that are highly dependent on the organization, use case, and other factors.

    In the subsequent sections, we will dive into each of these categories and the practices that align to them.

    Recommended practices

    Keep source code and manifests in different repositories

    It is not uncommon to see folks starting with GitOps mix source code (e.g., Java, Go, Python) and manifests (e.g., Deployment, Service, Ingress/Route YAML) in the same repo. This is not recommended because the two typically have different life cycles, often with different teams maintaining each. It also leads to Argo CD potentially doing more reconciliation work, as source code changes drive repository updates even though the manifests themselves haven’t changed.

    Use a YAML management tool

    Do not manage raw YAML directly in the Git repository, as this will lead to a lot of YAML duplication. Instead, use a YAML management tool like Helm or Kustomize, which will enable you to deploy largely the same YAML across multiple environments and clusters with only the specific changes required for the target cluster and/or environment.

    As a corollary, while it is OK to state that a specific tool is the preferred tool, do not get locked into only that tool. For example, between Helm and Kustomize, I have a strong personal preference for Kustomize; however, there are use cases where Helm is the better fit (e.g., when templating is needed), and I use Helm in those cases rather than try to contort Kustomize to make it work. A good carpenter uses the right tool for the job.
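    As an illustrative sketch (repository layout and file names are hypothetical), a Kustomize base/overlay structure lets environments share the same YAML while patching only what differs:

    ```yaml
    # base/kustomization.yaml -- shared manifests used by every environment
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - deployment.yaml
      - service.yaml
    ---
    # overlays/prod/kustomization.yaml -- production-specific changes only
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - ../../base
    patches:
      - path: replica-count.yaml   # e.g., bump replicas for production
    ```

    Each new environment or cluster then becomes another small overlay rather than another full copy of the YAML.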

    Version manifests

    Regardless of the YAML management tool, you should always deploy versioned manifests rather than deploying from the head of the Git repo or the latest tag of an OCI repository. Versioning ensures that changes can be rolled out across environments and clusters in a controlled manner with adequate testing and safeguards. The Argo CD documentation does a good job of covering this (Tracking and Deployment Strategies), but it is often overlooked.

    With Kustomize I like using tag tracking or commit pinning. With Helm, I like keeping my versioned charts in a Helm or OCI repository (i.e., not accessing the chart directly from Git) and using tag tracking or commit pinning with the value files.
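    To illustrate tag tracking (repository URLs and names are hypothetical), an Argo CD Application can pin targetRevision to a Git tag instead of a branch head:

    ```yaml
    # Hypothetical Application pinned to a Git tag rather than HEAD
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: my-app
      namespace: openshift-gitops
    spec:
      project: my-project
      source:
        repoURL: https://git.example.com/org/config.git
        targetRevision: v1.4.2        # a tag, never a branch like "main"
        path: overlays/prod
      destination:
        server: https://kubernetes.default.svc
        namespace: my-app
    ```

    The same idea applies to Helm: reference a chart from a Helm or OCI repository with an explicit chart version in targetRevision, rather than "latest".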

    Validate manifests via linting

    It's not uncommon to introduce an error when creating and modifying Kubernetes YAML manifests. Validating manifests in the Git repository, or even better validating Pull Requests before they are merged, helps to ensure the correctness of the repository and catches these errors early. 

    My colleague Trevor Royer wrote a great blog on how to validate manifests, including ones generated by Kustomize and Helm, that is well worth checking out. 
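    As a minimal sketch of pull request validation (GitHub Actions syntax is shown purely for illustration; kubeconform is one of several schema validators you could use), a CI job can render the Kustomize output and check it against the Kubernetes schemas:

    ```yaml
    # Hypothetical CI workflow that validates rendered manifests on every PR
    name: validate-manifests
    on: [pull_request]
    jobs:
      lint:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Render and validate manifests
            run: |
              # kubeconform reads the rendered YAML from stdin
              kustomize build overlays/prod | kubeconform -strict -summary
    ```

    Catching a bad apiVersion or a misspelled field here is far cheaper than discovering it as a sync failure in Argo CD.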

    Use annotation tracking

    By default, Argo CD uses label tracking, where it adds a label to the resources it manages. The issue is that labels in Kubernetes are limited to 63 characters, which means Argo CD can only include a limited amount of tracking information. Also, when operators or controllers create resources, they often copy the tracking label, leading Argo CD to think these operator-created resources are managed by Argo CD; since they are not in Git, they show up with an Out of Sync status.

    Annotation tracking enables Argo CD to include much more information and automatically weed out these false positives without having to rely on cumbersome workarounds like the IgnoreExtraneous compare option. Note that annotation tracking will become the default in Argo CD 3.0.
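    Switching to annotation tracking is a one-line setting in the argocd-cm ConfigMap (the namespace shown assumes a default Argo CD install; with the OpenShift GitOps operator, the equivalent resourceTrackingMethod field is set in the ArgoCD custom resource instead):

    ```yaml
    # Enable annotation-based resource tracking
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-cm
      namespace: argocd
      labels:
        app.kubernetes.io/part-of: argocd
    data:
      application.resourceTrackingMethod: annotation
    ```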

    Do not use the Default AppProject

    Argo CD Application Projects, or AppProjects, provide a way to logically group Applications. An AppProject can be used to restrict who has access to its Applications as well as what resources those Applications are permitted to deploy.

    Organizations starting with Argo CD often just roll with the default AppProject that is available out of the box. The issue is that as more and more Applications rely on the default AppProject, you lose the ability to segment and manage these Applications.

    Always define your own AppProject(s) and never use the default.

    Define Tenant RBAC in AppProject

    When defining role-based access controls (RBAC), Argo CD allows you to define them globally in the argocd-rbac-cm ConfigMap (or the Argo CD CR when using the OpenShift GitOps operator) or in individual AppProjects.

    When defining RBAC for tenants in a multi-tenant Argo CD, you should always create a separate AppProject for each tenant (see the recommendation above) and define the RBAC for that tenant in the same AppProject. This prevents the global RBAC from becoming overly complicated, messy, and difficult to maintain.
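    A per-tenant AppProject with its RBAC defined inline might look like the following sketch (tenant name, repository URL, and group name are hypothetical):

    ```yaml
    # Hypothetical tenant AppProject with scoped repos, destinations, and roles
    apiVersion: argoproj.io/v1alpha1
    kind: AppProject
    metadata:
      name: team-a
      namespace: openshift-gitops
    spec:
      description: Applications owned by team A
      sourceRepos:
        - https://git.example.com/team-a/*    # only team A's repos
      destinations:
        - namespace: team-a-*                 # only team A's namespaces
          server: https://kubernetes.default.svc
      clusterResourceWhitelist: []            # no cluster-scoped resources
      roles:
        - name: developers
          description: Read and sync access for team A developers
          policies:
            - p, proj:team-a:developers, applications, get, team-a/*, allow
            - p, proj:team-a:developers, applications, sync, team-a/*, allow
          groups:
            - team-a-developers               # SSO/OIDC group mapping
    ```

    Keeping the policies inside the tenant's AppProject means the global argocd-rbac-cm only needs to carry truly cluster-wide rules.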

    Suggested practices

    Use Global AppProject for common settings

    A lesser-known feature of Argo CD is the ability of an AppProject to inherit settings from a global AppProject. This can be very useful to centralize common settings, such as resource inclusions and exclusions, across a multitude of tenant AppProjects. Additionally, this enables centralizing sync windows, allowing platform teams to define common maintenance windows without having to define them individually in each AppProject.
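    Global AppProjects are designated in the argocd-cm ConfigMap via a label selector; any AppProject matching the selector inherits the named project's settings. A sketch (label and project names are hypothetical):

    ```yaml
    # Hypothetical argocd-cm snippet: AppProjects labeled tenant=true inherit
    # the shared settings (sync windows, blacklists, etc.) of "global-tenant"
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-cm
      namespace: argocd
    data:
      globalProjects: |
        - labelSelector:
            matchLabels:
              tenant: "true"
          projectName: global-tenant
    ```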

    Define custom health checks for custom resources

    In addition to health checks for standard Kubernetes resources, Argo CD provides out-of-the-box health checks for a number of custom resources, which are listed in the Argo CD documentation. These health checks enable Argo CD to determine the health of the associated resources and aggregate them up to the Application with the appropriate health status (Degraded, Progressing, Healthy, etc.).

    Monitoring and alerting on the health of the Application thus provides free, low-effort health monitoring for all of the resources managed by the Application that have defined health checks.

    However, Argo CD does not include health checks for all types of resources, and in an operator-rich environment like OpenShift this can be an issue. Writing custom health checks for critical resources that are not covered by the out-of-the-box ones is a great way to enhance operational readiness.
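    Custom health checks are Lua scripts added to argocd-cm under a resource.customizations.health.&lt;group&gt;_&lt;kind&gt; key. The CRD below and its status fields are purely illustrative assumptions; adapt the Lua logic to whatever your real custom resource reports:

    ```yaml
    # Hypothetical health check for a fictional example.com/Widget resource
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-cm
      namespace: argocd
    data:
      resource.customizations.health.example.com_Widget: |
        hs = {}
        if obj.status ~= nil and obj.status.phase == "Ready" then
          hs.status = "Healthy"
          hs.message = "Widget is ready"
          return hs
        end
        hs.status = "Progressing"
        hs.message = "Waiting for Widget to become ready"
        return hs
    ```

    Once defined, the Widget's health rolls up into the owning Application's health status like any built-in resource.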

    Separate Argo CD instances for cluster configuration versus application deployments

    In Argo CD, a single service account is responsible for deploying all resources on the cluster. This service account requires Kubernetes permissions sufficient to deploy the resources for all users of that Argo CD instance; in the case of cluster configuration, this typically means cluster-admin level permissions.

    While it is possible to limit what resources application teams can deploy via the Argo CD AppProject RBAC and resource inclusions/exclusions, it is not uncommon to make mistakes and accidentally leave holes, resulting in the possibility of privilege escalation.

    As a result, separating cluster configuration and application deployment use cases into separate Argo CD instances for maximum isolation is recommended. Typically with OpenShift GitOps, using the instance in the openshift-gitops namespace for cluster configuration and spinning up a new instance in a different namespace for application teams is the recommended approach.

    Note: There is a new alpha feature in Argo CD (Developer Preview in OpenShift GitOps) that supports impersonation to mitigate privilege escalation. This feature will be very useful in multi-tenant Argo CD for additional isolation; however, my personal preference at this time is to still maintain separate instances for the two use cases, given the level of privilege the cluster configuration use case requires and the different personas that typically interact with each instance.
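    With the OpenShift GitOps operator, spinning up the second instance is just a matter of creating another ArgoCD custom resource in a new namespace (the instance name, namespace, and apiVersion below are a sketch; older operator versions use argoproj.io/v1alpha1):

    ```yaml
    # Hypothetical second Argo CD instance dedicated to application teams;
    # the default instance in openshift-gitops is left for cluster configuration
    apiVersion: argoproj.io/v1beta1
    kind: ArgoCD
    metadata:
      name: app-teams
      namespace: app-gitops
    spec:
      server:
        route:
          enabled: true   # expose the UI/API via an OpenShift Route
    ```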

    Minimize the Application-Controller privileges

    As per the previous section, the application-controller service account requires sufficient Kubernetes permissions to deploy all of the Argo CD managed resources. You should ensure that this service account has only the minimum level of privileges needed to support the use case. For example, for the Application Deployment use case it should not have permissions to deploy cluster level resources like ClusterRole, ClusterRoleBinding, OLM’s Subscription, etc.

    In a previous blog discussing the CONTROLLER_CLUSTER_ROLE setting, I mentioned that for the application deployment use case I like to leverage Kubernetes cluster role aggregation to tie the application-controller service account to the default Kubernetes admin role. You can see this in that blog's gitops-controller-admin ClusterRole, which uses a clusterRoleSelectors entry to match the label rbac.authorization.k8s.io/aggregate-to-admin.

    This grants the service account the permissions that the Kubernetes project and Red Hat have deemed necessary for a namespace administrator to deploy application-level resources, without you having to manage this manually. Additionally, if new operators are added, OLM will automatically add permissions for the new operators' custom resources (CRs) to the standard Kubernetes cluster roles (admin, edit, and view).

    Of course, if you operate in a highly regulated or secure environment, you can choose to define the Kubernetes privileges independently and not use the default admin role.
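    The aggregation pattern described above can be sketched as follows (the ClusterRole name comes from the earlier blog; the service account name and namespace are hypothetical and depend on your instance name):

    ```yaml
    # ClusterRole that aggregates everything labeled aggregate-to-admin
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: gitops-controller-admin
    aggregationRule:
      clusterRoleSelectors:
        - matchLabels:
            rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rules: []   # populated automatically by the aggregation controller
    ---
    # Bind it to the application-controller service account
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: gitops-controller-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: gitops-controller-admin
    subjects:
      - kind: ServiceAccount
        name: app-teams-argocd-application-controller   # hypothetical SA name
        namespace: app-gitops
    ```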

    Use Apps-In-Any-Namespace for multi-tenant Argo CD

    When using multi-tenant Argo CD, a common challenge is how to allow tenants to define Applications declaratively while preventing them from circumventing security by modifying the referenced AppProject. Fortunately, the Applications in Any Namespace feature neatly solves this issue by enabling tenants to define Applications in their own namespaces while enforcing security by binding them to a specific AppProject controlled by the platform team.

    Note that this feature requires a cluster-scoped instance of Argo CD, so some additional consideration is required when configuring cluster roles for the Argo CD application-controller, as per the previous suggested practice. One side benefit of using a cluster-scoped instance is better scalability, since namespace-scoped instances tend to scale poorly past a significant number of namespaces (typically in the 50-100 range).
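    Enabling Apps-in-Any-Namespace takes two pieces of configuration (namespace and tenant names below are hypothetical): the allowed namespaces in argocd-cmd-params-cm, and the matching sourceNamespaces in the platform-controlled AppProject:

    ```yaml
    # 1. Allow Applications to live in tenant namespaces
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-cmd-params-cm
      namespace: argocd
    data:
      application.namespaces: team-a,team-b   # or a glob such as team-*
    ---
    # 2. Permit those namespaces as Application sources in the AppProject
    apiVersion: argoproj.io/v1alpha1
    kind: AppProject
    metadata:
      name: team-a
      namespace: argocd
    spec:
      sourceNamespaces:
        - team-a
    ```

    Tenants can then create Application resources in their own namespace, but those Applications can only reference the AppProject the platform team has bound to it.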

    Situational practices

    Use resource inclusion/exclusion to minimize managed resources

    OpenShift is very operator-heavy and, as a result, includes a large number of Custom Resource Definitions (CRDs) out of the box. This can sometimes be a challenge for Argo CD since, by default, the application controller will monitor all resource types in the cluster, potentially leading to higher resource utilization in Argo CD, the Kubernetes API server, and etcd.

    The number of resources that Argo CD is monitoring can be minimized by using the resource inclusion/exclusion feature in Argo CD. When this is set, Argo CD will completely ignore any resources not covered by the setting. One downside is that it can be onerous to manage on top of the Kubernetes RBAC permissions. A recent feature in Argo CD, Auto Respect RBAC, enables Argo CD to automatically exclude resources for which it has no privileges, minimizing this configuration.
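    Both settings live in argocd-cm. The excluded groups and kinds below are illustrative assumptions; choose exclusions based on what actually generates noise in your clusters:

    ```yaml
    # Hypothetical argocd-cm snippet: ignore high-churn resource types and
    # automatically skip anything the controller has no RBAC access to
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-cm
      namespace: argocd
    data:
      resource.exclusions: |
        - apiGroups:
            - tekton.dev
          kinds:
            - TaskRun
            - PipelineRun
          clusters:
            - "*"
      resource.respectRBAC: normal   # "strict" adds SelfSubjectAccessReview checks
    ```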

    Persist health status in Redis

    Argo CD will by default persist resource health status in the Argo CD Application object. There is a setting in Argo CD 2.x that will cause Argo CD to persist this information in Redis instead, which can reduce the number of writes on the Application, thereby improving performance and reducing load on the API server and etcd. The benefits are typically modest, but it can be useful if you are having performance issues and need to squeeze a little more out of things.

    Note that if you have tools that rely on reading this information from the Application object, changing this setting could impact those tools.

    This setting is planned to be the default in Argo CD 3.0.
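    Assuming a default Argo CD install, the setting is a single key in argocd-cmd-params-cm (verify the key against the documentation for your Argo CD version before relying on it):

    ```yaml
    # Hypothetical snippet: stop persisting resource health in the
    # Application object; health is then served from the Redis cache instead
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-cmd-params-cm
      namespace: argocd
    data:
      controller.resource.health.persist: "false"
    ```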

    Monorepo scaling considerations

    A monorepo is a single Git repository in which unrelated projects are stored. This is a pattern that some larger organizations, such as Google, adopt, and it is popular when organizations need to maintain a consistent commit history across multiple projects.

    However, monorepos introduce the need for additional consideration with Argo CD in terms of scaling. By default, Argo CD maintains caches and detects changes at the repository level; when changes happen in the repository, this can lead to Argo CD invalidating the cache for all Applications, as well as doing unnecessary reconciliation work for Applications that are not impacted by the change.

    The Argo CD documentation does a great job of laying out these challenges in Monorepo Scaling Considerations and how to mitigate them. In particular, users need to be aware of the manifest-generate-paths annotation, which enables you to specify the path(s) in the monorepo that the Application is tied to.
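    A sketch of the annotation in use (repository URL, paths, and names are hypothetical); with this in place, webhook-driven refreshes skip this Application when a commit touches only other parts of the monorepo:

    ```yaml
    # Hypothetical Application scoped to one path within a monorepo
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: team-a-app
      namespace: argocd
      annotations:
        argocd.argoproj.io/manifest-generate-paths: .   # relative to spec.source.path
    spec:
      project: team-a
      source:
        repoURL: https://git.example.com/org/monorepo.git
        targetRevision: main
        path: apps/team-a
      destination:
        server: https://kubernetes.default.svc
        namespace: team-a
    ```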

    Conclusion

    In this blog, we went through a set of OpenShift GitOps practices and when to consider them.

    Visit the GitOps topic page for tutorials, downloads, and more.

    Last updated: May 9, 2025
    Disclaimer: Please note the content in this blog post has not been thoroughly reviewed by the Red Hat Developer editorial team. Any opinions expressed in this post are the author's own and do not necessarily reflect the policies or positions of Red Hat.
