
Automating configuration of an existing Ansible instance

July 30, 2025
Konstantin Kuminsky
Related topics:
Automation and management
Related products:
Red Hat Ansible Automation Platform


    Note

    This article uses an Ansible collection (configify.aapconfig) that is developed, published, and maintained by the Ansible community. For a Red Hat-supported alternative, refer to the validated infra.aap_configuration Ansible collection.

    In the first installment of this series, we discussed how to configure Red Hat Ansible Automation Platform for the configify.aapconfig collection, how to export configurations for certain objects (organizations, users, and credential types) from an existing instance, and how to run automation to apply configurations and manage configuration drift for these objects.

    In this article, we will discuss how to automate an existing Ansible Automation Platform instance from start to finish, including:

    • Performing a cleanup of instance configuration.
    • Identifying problematic objects.
    • Exporting all configurations.
    • Formatting the output to get configuration in JSON format.
    • Verifying configurations.
    • Limiting administrative access to Ansible Automation Platform.
    • Creating a proper source control workflow.

    Pre-export steps

    The assumptions, requirements, and preparatory steps from our first article in this series have not changed. Please refer to the following sections for details:

    • Collections requirements

    • Accounts and tokens requirements

    • Ansible configuration requirements

    Before exporting the existing configuration and converting it to configuration as code (CaC), an optional but logical step is a pre-export cleanup, so that configurations that are unused or no longer needed are not dragged into the CaC. While this step is not mandatory, the migration is a good opportunity to clean up.

    In addition, we want to identify problematic objects that may cause CaC to fail. These include objects with the same names and objects not assigned to an organization.

    To do that, let’s create the following playbook and run it:

    ---
    - name: Run playbook to identify unused and problematic objects
      import_playbook: configify.aapconfig.aap_audit_problematic_objects.yml

    At the end of the job log, you will find multiple reports that look similar to this:

    TASK [List projects without an organization] **********************************************************************************************************
    ok: [localhost] => {
        "projects_without_org": [
            "Project A"
        ]
    }
    TASK [List templates without inventory or project] *********************************************************************************************************
    skipping: [localhost]
    TASK [List credentials not used in credentials, templates, workflows, orgs and projects] *******************************************************************
    ok: [localhost] => {
        "unused_credentials": [
            "Credential GitHub A",
            "Credential Z (vault)"
        ]
    }

    The reports include:

    • Duplicate objects: Credentials, inventories, projects, templates, workflows, and notification profiles that share the same name. In most cases they belong to different organizations, so duplicate names are allowed. Whether to rename duplicates depends on how we plan to apply and change configurations (everything at once or per organization), which is discussed further in the "Verify configuration" section. To avoid issues going forward, it's best to rename duplicates.
    • Projects without an organization: While the organization field is mandatory for projects, such objects may exist if the organization they belonged to was deleted. Make sure all projects belong to an organization; otherwise they cannot be recreated when migrating to another cluster or rebuilding it.
    • Job templates with no inventory or project: Similar to projects without an organization, this can happen when an inventory or project assigned to a template is deleted. Such job templates need to be either deleted or fixed with new values in the mandatory fields.
    • Unused objects:
      • Credentials not used in other credentials, templates, workflows, organizations, and projects.
      • Custom credential types not used by any credentials.
      • Projects not used in templates, workflows, or dynamic inventories.
      • Notification profiles not used in templates, workflows, or projects.
      • Inventories not used in templates, workflows, workflow nodes, or constructed inventories.

    A human must examine these reports. Just because certain objects were reported as unused does not automatically mean they are unnecessary and safe to remove. For example, certain settings may exist only to be selected when job templates or workflows prompt for them. Therefore, take the report about unused objects with a grain of salt.

    At this point, let’s go ahead and fix problematic objects and proceed with the export.

    Personal credentials

    While projects without an organization are a problem, credentials without an organization are acceptable. These are personal credentials, owned by the user who created them, and they may not be visible to members of that user's organization unless permissions are granted explicitly.

    While their usefulness seems questionable, personal credentials are currently allowed. Still, it is recommended to avoid duplicate names between personal and organizational credentials, as duplicates may cause issues when objects need to be recreated.

    Exporting configuration data

    Exporting all configurations from an Ansible Automation Platform instance is easy. Create the following playbook and run it:

    ---
    - name: Run playbook to export AAP configurations
      import_playbook: configify.aapconfig.aap_audit_all.yml

    By default, it will export configurations from Controller and Hub. You can use tags to limit the scope. For example, to export only Controller objects, skip the following tags: export_collections, export_repositories. You can find a full list of available tags in the collection documentation.
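As a sketch, assuming the playbook above is saved under a hypothetical name such as export_aap.yml, a Controller-only export could use the standard ansible-playbook --skip-tags option:

```shell
# Hypothetical invocation: export only Controller objects
# by skipping the Hub-related tags named in the collection docs.
ansible-playbook export_aap.yml --skip-tags export_collections,export_repositories
```

The playbook filename is an assumption; check the collection documentation for the full tag list before limiting the scope.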

    After completing the export, let’s download the full job log and move to the next section.

    Formatting the output

    Depending on the size of the Ansible Automation Platform cluster and the number of objects, the job output for a full export may be quite large. Manually copying parts of the output and formatting them is time-consuming and tedious.

    We will use the Batch Replacer extension in VS Code and a script created by the maintainers of the configify.aapconfig collection to accelerate and simplify this task.

    In the job output, let’s find the part where the export playbook starts printing the objects it gathered. Typically this is somewhere in the middle of the log. We are looking for the first "Show" task that hasn’t been skipped.

    Assuming no tags were skipped, it should be the task that shows collections:

    PLAY [Output all objects] *********************************************************************************************************
    TASK [COLLECTIONS - Show published collections (formatted)] ************************************************************************************************
    skipping: [localhost]
    TASK [REPOS - Show configured remote repositories (formatted)] *********************************************************************************************
    ok: [localhost] => {
        "hub_objects_remotes": [
            "{'name': 'rh-certified', 'repo_url': 'https://console.redhat.com/api/automation-hub/content/published/', 'repo_auth_url': 'https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token', 'repo_auth_token': '$encrypted$', 'requirements': {'collections': ['ansible.platform', 'ansible.controller']}}",
            "{'name': 'community', 'repo_url': 'https://galaxy.ansible.com/api/', 'repo_auth_url': '', 'repo_auth_token': '', 'requirements': ''}"
        ]
    }
    TASK [SETTINGS - Show custom LDAP settings (formatted)] ****************************************************************************************************
    ok: [localhost] => {
        "controller_settings_ldap": {
            "AUTH_LDAP_1_BIND_DN": "CN=user,CN=users,DC=examplec,DC=com",
            "AUTH_LDAP_1_DENY_GROUP": "CN=user,OU=Users,DC=examplec,DC=com",
    <...>

    Copy the output starting from the first "Show" task all the way to the end. Then, in VS Code:

    1. Install the Batch Replacer extension.
    2. Open an empty workspace (new VSCode window).
    3. Paste the output copied earlier into a new file and save it.
    4. Open a new file and paste the Batch Replacer script from the aapconfig_testing repository.
    5. Without saving the script, press Ctrl + Shift + P and select Batch Replace from the list. The extension uses the replacement rules from the active tab and applies them to all files in the current workspace.
    6. Watch for a message in the bottom right corner saying "Batch replace completed."

    At this point, the file that previously had export output should have formatted objects without additional Ansible output:

    hub_objects_remotes: [
      {'name': 'rh-certified', 'repo_url': 'https://console.redhat.com/api/aut…..',
       'repo_auth_url': 'https://sso.redhat.com/auth/realms/redhat…..',
       'repo_auth_token': '$encrypted$',
       'requirements': {'collections': ['ansible.platform', 'ansible.controller']}},
      {'name': 'community', 'repo_url': 'https://galaxy.ansible.com/api/',
       'repo_auth_url': '',
       'repo_auth_token': '',
       'requirements': ''}
    ]
    controller_settings_ldap: {
            'AUTH_LDAP_1_BIND_DN': 'CN=user,CN=users,DC=examplec,DC=com',
            'AUTH_LDAP_1_DENY_GROUP': 'CN=user,OU=Users,DC=examplec,DC=com',
            'AUTH_LDAP_1_GROUP_SEARCH': [
                'DC=examplec,DC=com',
                'SCOPE_SUBTREE',
                '(objectClass=group)'
    <...>

    While we don’t expect issues with the formatted output, it is still a good idea to look through the file and confirm there are no obvious formatting problems, no remaining Ansible job output, and that the format of each object is correct.

    In addition, let’s verify there are no issues with special characters, as described in the Handling objects with special characters section of Part 1.
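Because the exported objects are written as Python-style dict literals (single quotes), one way to spot-check them programmatically is to confirm each one parses with `ast.literal_eval`. A minimal sketch, using hypothetical sample object strings in the shape shown above:

```python
import ast

# Hypothetical object strings as they appear in the export output
object_strings = [
    "{'name': 'rh-certified', 'repo_url': 'https://console.redhat.com/api/'}",
    "{'name': 'community', 'repo_url': 'https://galaxy.ansible.com/api/', 'requirements': ''}",
]

def parse_objects(strings):
    """Parse each exported object string, failing loudly on malformed entries."""
    parsed = []
    for s in strings:
        try:
            parsed.append(ast.literal_eval(s))
        except (ValueError, SyntaxError) as exc:
            raise ValueError(f"Malformed object definition: {s!r}") from exc
    return parsed

objects = parse_objects(object_strings)
print([o["name"] for o in objects])  # → ['rh-certified', 'community']
```

This only validates syntax; it does not check that field values make sense for your instance.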

    Furthermore, if certain variables are very long, they can be moved to a separate file. In that case, they can be included in the playbook this way:

    tasks:
    - name: Include objects
      ansible.builtin.include_vars:
        dir: objects
        extensions:
          - ''
      tags: always

    Otherwise, it can be just a single file as described in the previous part of the series:

      tasks:
        - name: Include variables
          ansible.builtin.include_vars: all_aap_objects
          tags: always

    Verify configuration

    With the CaC objects formatted and ready, we can now run automation to verify that the configurations we created are correct. To be on the safe side, we should first apply them in check mode and see whether any objects are reported as changed.

    At this point it’s important to decide how we want to apply configurations. The first option is to use a global admin account and apply full configuration at once. In this case, it’s important to remove objects with duplicate names as discussed earlier.

    Another option is to use an account that is an administrator of a subset of organizations. A use case for that approach is a large organization with multiple teams or departments, each managing their subset of configurations separately. In this case, objects with the same names can exist as long as duplicates belong to a different subset of organizations.

    To implement this approach, it is not necessary to split configurations into separate files or repositories. It will be enough for each team to use the limit_organizations variable containing a list of organizations they manage, such as:

    limit_organizations: ['Org D','Org E']
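The variable can live in a team's vars file, or be supplied at run time with the standard ansible-playbook -e option. A hedged sketch, with a hypothetical playbook name:

```shell
# Hypothetical invocation: apply only the organizations this team manages
ansible-playbook apply_aap.yml -e '{"limit_organizations": ["Org D", "Org E"]}'
```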

    When implementing this approach, remember that some configurations don’t belong to a specific organization, such as execution environments, authentication settings, instance groups, users, teams, and credential types, as well as Hub configurations. These settings still need to be applied by a global admin and, for that purpose, may need to live in a separate file.

    To verify the configurations we created during the export, let’s create a playbook that includes the files with object descriptions and triggers the configify.aapconfig.aap_configure.yml play. For verification purposes, we will run it in check mode first:

    ---
    - name: Include variables
      hosts: localhost
      tasks:
        - name: Include variables
          ansible.builtin.include_vars: all_aap_objects
          tags: always

    - name: Run playbook to apply AAP configurations
      import_playbook: configify.aapconfig.aap_configure.yml
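Assuming the playbook above is saved under a hypothetical name such as verify_aap.yml, a check-mode run uses the standard ansible-playbook flags; --diff additionally shows what would change per object:

```shell
# Hypothetical invocation: dry run; nothing is applied in --check mode
ansible-playbook verify_aap.yml --check --diff
```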

    In the output, we are looking for any objects that report a change. Investigate such items to determine why they are marked as different, so that no unwanted changes are made when the playbook later runs in real mode.

    There are a few things to keep in mind:

    • At the time of writing, the authenticator module (a configuration specific to Ansible Automation Platform 2.5) does not honor check mode, so authentication settings are applied during every run. This is an issue with a dependency collection, and a ticket has been filed with the maintainers.
    • Some objects always report changes in check mode, such as inventories, hosts, workflows, and notification profiles with credentials. This is also an issue with one of the dependency collections, and tickets have been created accordingly.
    • Execution environments with the default pull policy also report a change. This is because the corresponding module in the dependency collection has no default option and does not allow empty values for the pull policy field.

    Until these issues with Red Hat certified collections are resolved, the easiest solution is to:

    • Double-check that values are correct.

    • Create a backup of the Ansible Automation Platform instances.

    • Once all other reported changes have been investigated and fixed, turn off check mode and confirm that the settings that were reporting a change in check mode are now green.

    If this is still not the case for an occasional line, you may need to verify what changed via the GUI.

    Follow-up steps

    This brings us to a good state: all Ansible Automation Platform settings are described as code, and we have verified the configurations and can apply them or make changes as required. At this stage, it may be a good idea to limit the number of administrators who can make changes manually through the GUI. This includes global admins and administrators managing specific organizations or objects. Some organizations choose to disable all administrator accounts except the built-in admin.

    Obviously, we want to make that change via CaC. To do that, let’s remove:

    • Lines describing users with the superuser field set to True.
    • Lines describing roles with admin in the role name (organization admin, JobTemplate admin, and so on).

    Then run automation specifying delete_objects: true.
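A hedged sketch of that run, again assuming a hypothetical playbook name and passing the variable via the standard -e option:

```shell
# Hypothetical invocation: objects removed from the CaC files are deleted
# from the instance when delete_objects is true.
ansible-playbook apply_aap.yml -e delete_objects=true
```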

    Another follow-up point to consider is building proper source control workflows. Since we are now managing the Ansible Automation Platform clusters as code, it’s important to ensure proper change management: peer reviews, a branching strategy, and permissions within the Git repository. The steps include, but are not limited to:

    • Disallowing direct commits into the branch used by automation.
    • Requiring one or more approvals for merge requests before they can be merged.
    • Limiting the number of users who are allowed to approve and merge changes.
    • Requiring specific patterns for branch names and commit messages.

    The configuration steps differ depending on the source control system and are outside of the scope of this article.

    Final thoughts

    With suitable tools selected for the task, configuration as code becomes less of a coding challenge and more of an organizational and process-related one. Stay tuned for the next installment in this series, where we will discuss various migrations.

    Follow this series:

    • Part 1: The first steps on the path of managing an existing Ansible Automation Platform instance as CaC, setting up Ansible Automation Platform accounts, collections, credentials, projects, and job templates required to run the automation, exporting configuration of some objects, handling secrets and special strings in the CaC, and managing configuration drift.
    • Part 2 (this article): Completing the transition: exporting all objects, formatting configurations for readability, verification, access restrictions, and Git management.
    • Part 3: Migrating configurations from AWX 24 to Ansible Automation Platform 2.5.
    • Part 4: Migrating smart inventories (deprecated in future Ansible Automation Platform releases) to constructed inventories.
    • Part 5: Migrating configurations from Ansible Automation Platform 2.4 to Ansible Automation Platform 2.5.
    Last updated: August 24, 2025
