
Automate AI workflows with Red Hat Ansible Certified Content Collection amazon.ai for generative AI

Automation unleashed

December 10, 2025
Alina Buzachis
Related topics:
Artificial intelligence, Automation and management, Developer Productivity, DevOps
Related products:
Red Hat Ansible Automation Platform

    In part 1 of this blog, we introduced the Red Hat Ansible Certified Content Collection amazon.ai for generative AI and showed how it brings declarative automation to Amazon Bedrock and DevOps Guru.

    Now it's time to move from theory to practice. In this post, we'll explore hands-on use cases that demonstrate how to automate AI workflows, from deploying Bedrock Agents to orchestrating DevOps Guru monitoring.

    If you've ever felt the pain of manually managing AI agents, configuring multiple endpoints, or pulling operational insights for audits, this post is for you. By the end, you’ll see how you can treat AI infrastructure as code, which provides repeatable, auditable, and reliable automation.

    Why automation matters in practice

    Manual AI management isn't just slow; it's error-prone:

    • Inconsistent deployments: Recreating an agent or model in a new environment might produce subtle differences, which leads to unexpected failures.
    • Error-prone configuration: Complex action groups, API schemas, and IAM roles are easy to misconfigure manually.
    • Operational blind spots: Without automated monitoring, anomalies can go undetected, and audits become difficult or incomplete.
    • Limited scalability: Repeating manual tasks across multiple agents or services quickly becomes unsustainable.

    The Red Hat Ansible Certified Content Collection amazon.ai for generative AI helps solve these challenges by providing declarative modules for Bedrock and DevOps Guru. These modules allow teams to:

    • Deploy and validate AI agents automatically.
    • Invoke foundation models programmatically.
    • Configure and audit operational monitoring at scale.
    • Generate compliance-ready reports.

    In other words, you can now treat AI and its operational ecosystem as first-class code artifacts.

    Use cases: amazon.ai in action

    The playbooks below provide robust, comprehensive automation examples built around the new Red Hat Ansible Certified Content Collection amazon.ai.
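    Before running them, the collection needs to be installed alongside amazon.aws, which the S3 upload tasks use. One way to do that, assuming both collections are available from your configured Galaxy or Automation Hub server, is a requirements file:

```yaml
# requirements.yml -- collections these playbooks rely on
collections:
  - name: amazon.ai   # Bedrock and DevOps Guru modules
  - name: amazon.aws  # amazon.aws.s3_object for the audit uploads
```

    Install everything in one step with `ansible-galaxy collection install -r requirements.yml`.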

    Use case 1: End-to-end agent deployment, validation, and auditing

    Suppose you want to deploy an AI-powered IT support assistant built on Amazon Bedrock. This agent helps employees resolve help desk issues, such as password resets and service status checks, by calling backend AWS Lambda functions.

    Using the Red Hat Ansible Certified Content Collection amazon.ai, you can automate the full agent lifecycle. The playbook below:

    • Deploys or updates the Bedrock agent with the proper foundation model and IAM role.
    • Configures the action group linked to operational Lambda functions.
    • Creates an alias for chat integration.
    • Validates the agent's functionality through a test query.
    • Collects and logs all configuration details for auditing and compliance.

    Pro tip: Optionally upload a JSON audit report to Amazon S3 for long-term traceability and governance.

    Outcome: After execution, you will have a fully deployed Bedrock IT Support agent with:

    • An active alias endpoint for user queries.
    • A Lambda-powered action group performing automated tasks.
    • Validation logs showing the agent's responses.
    • A structured audit report, optionally uploaded to S3 for compliance.
    ---
    - name: Full Agent Lifecycle - Deploy, Validate, and Audit
      hosts: localhost
      gather_facts: false
      vars:
        agent_name: "ITSupportAssistant"
        alias_name: "support-alias"
        action_group_name: "SupportTasks"
        foundation_model: "anthropic.claude-v2"
        iam_role_arn: "arn:aws:iam::123456789012:role/BedrockAgentRole"
        lambda_arn: "arn:aws:lambda:us-east-1:123456789012:function:ITSupportLambda"
        upload_audit: true  # Set to false to skip the S3 upload
        audit_bucket: "it-support-audit-logs"
        current_date: "{{ lookup('pipe', 'date +%Y-%m-%d') }}"
      tasks:
        - name: Create a Bedrock Agent
          amazon.ai.bedrock_agent:
            state: present
            agent_name: "{{ agent_name }}"
            foundation_model: "{{ foundation_model }}"
            instruction: "You are an internal IT Support Assistant that helps employees with technical issues like password resets or server status checks."
            agent_resource_role_arn: "{{ iam_role_arn }}"
          register: agent_deploy
        
        - name: Configure an Action Group
          amazon.ai.bedrock_agent_action_group:
            state: present
            agent_name: "{{ agent_name }}"
            action_group_name: "{{ action_group_name }}"
            description: "Handles IT support automation tasks (password reset, system checks)."
            lambda_arn: "{{ lambda_arn }}"
            api_schema: "{{ lookup('file', 'files/api_schema.yml') }}"
          register: action_group_deploy
        - name: Create an Alias
          amazon.ai.bedrock_agent_alias:
            state: present
            agent_name: "{{ agent_name }}"
            alias_name: "{{ alias_name }}"
            description: "Endpoint for the IT Support Assistant."
          register: alias_deploy
        
        - name: Validate agent with a test query
          amazon.ai.bedrock_invoke_agent:
            agent_id: "{{ agent_deploy.agent.agent_id }}"
            agent_alias_id: "{{ alias_deploy.agent_alias.agent_alias_id }}"
            input_text: "Can you reset my password for the dev portal?"
            enable_trace: true
          register: validation_test
        
        - name: Retrieve agent configuration details
          amazon.ai.bedrock_agent_info:
            agent_name: "{{ agent_name }}"
          register: agent_info
        - name: Retrieve Action Group configuration
          amazon.ai.bedrock_agent_action_group_info:
            agent_name: "{{ agent_name }}"
            action_group_name: "{{ action_group_name }}"
          register: action_group_info
        - name: List All Aliases for the Agent
          amazon.ai.bedrock_agent_alias_info:
            agent_name: "{{ agent_name }}"
          register: aliases_list
        - name: Render audit report from template
          ansible.builtin.template:
            src: "templates/audit_report.json.j2"
            dest: "/tmp/audit_report.json"
        - name: Optionally upload report to S3 for audit trail
          amazon.aws.s3_object:
            bucket: "{{ audit_bucket }}"
            object: "reports/audit_{{ current_date }}.json"
            mode: put
            src: "/tmp/audit_report.json"
          when: upload_audit | default(false)
       
        # Final console summary
        - name: Final audit and validation summary
          ansible.builtin.debug:
            msg: |
              === IT Support Assistant Deployment Summary ===
              Agent: {{ agent_name }} ({{ agent_deploy.agent.agent_id | default('N/A') }})
              Alias: {{ alias_name }} ({{ alias_deploy.agent_alias.agent_alias_id | default('N/A') }})
              Model: {{ agent_info.agents.0.foundation_model | default('unknown') }}
              Validation: {{ validation_test.response_text | truncate(100) }}
              Total Aliases: {{ aliases_list.agent_aliases | length }}
              Audit Uploaded: {{ upload_audit }}
              =================================================
    

    This workflow ensures the agent is ready for production, fully validated, and auditable. Operations teams can confidently deploy agents at scale with consistent results.
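    The playbook renders `templates/audit_report.json.j2`, which the post doesn't show. A minimal sketch of such a template, with field names that are illustrative and based on the registered results above, might look like:

```jinja
{# templates/audit_report.json.j2 -- minimal sketch; the structure of the
   registered variables below is assumed, not taken from module docs #}
{
  "report_date": "{{ current_date }}",
  "agent_name": "{{ agent_name }}",
  "agent_id": "{{ agent_deploy.agent.agent_id | default('N/A') }}",
  "alias_id": "{{ alias_deploy.agent_alias.agent_alias_id | default('N/A') }}",
  "foundation_model": "{{ agent_info.agents.0.foundation_model | default('unknown') }}",
  "validation_response": {{ validation_test.response_text | default('') | to_json }},
  "alias_count": {{ aliases_list.agent_aliases | length }}
}
```

    Using `to_json` on the free-text validation response keeps the rendered file valid JSON even when the model's answer contains quotes or newlines.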

    Use case 2: Personalized content generation

    Suppose you want to dynamically generate personalized email content for customers based on recent behavior, preferences, or purchase history.

    Pro tip: The generated content can optionally be stored in Amazon S3 for auditing or reuse, or sent to a ServiceNow ticket for review.

    Outcome: After execution:

    • A Bedrock model is automatically selected and invoked.
    • A fully generated personalized message is returned and stored in the generated_message variable.
    • This message can be logged, reviewed, sent, or processed downstream for marketing or operational workflows.
    ---
    - name: Personalized Content Generation
      hosts: localhost
      connection: local
      gather_facts: false
      vars:
        prompt_text: "Generate a personalized marketing email for a customer who purchased a smartwatch last week."
      tasks:
        - name: List available text generation models
          amazon.ai.bedrock_foundation_models_info:
            by_output_modality: 'TEXT'
          register: text_models
        - name: Select an on-demand compatible text model
          ansible.builtin.set_fact:
            chosen_text_model: >-
              {{ (text_models.foundation_models
                | selectattr('inference_types_supported', 'defined')
                | selectattr('inference_types_supported', 'contains', 'ON_DEMAND')
                | map(attribute='model_id')
                | first) }}
        - name: Inspect the selected model
          amazon.ai.bedrock_foundation_models_info:
            model_id: "{{ chosen_text_model }}"
          register: model_details
        
        - name: Build payload for content generation
          ansible.builtin.set_fact:
            text_payload:
              messages:
                - role: "user"
                  content: "{{ prompt_text }}"
              max_tokens: 500
              temperature: 0.7
              top_p: 0.9
        
        - name: Generate personalized content
          amazon.ai.bedrock_invoke_model:
            model_id: "{{ chosen_text_model }}"
            body: "{{ text_payload }}"
            content_type: "application/json"
            accept: "application/json"
          register: model_response
        
        - name: Extract generated message
          ansible.builtin.set_fact:
            generated_message: >-
              {{ model_response.response.body.output_text
                 | default(model_response.response.body.completion)
                 | default(model_response.response.body.message)
                 | default('No output returned') }}

    This workflow combines AWS Bedrock AI capabilities with Ansible automation, ensuring agility, compliance, and operational visibility in a single process.
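    The model-selection step leans on Jinja2 filters (`selectattr`, `map`, `first`). The same logic in plain Python makes the behavior easy to see; the model list here is a hypothetical stand-in for what the foundation-models lookup returns, not real API output:

```python
# Select the first model that declares ON_DEMAND inference support --
# mirrors the selectattr/map/first filter chain in the playbook.
# The entries below are illustrative, not real Bedrock model IDs.
foundation_models = [
    {"model_id": "provider.model-a"},  # no inference_types_supported key
    {"model_id": "provider.model-b", "inference_types_supported": ["PROVISIONED"]},
    {"model_id": "provider.model-c", "inference_types_supported": ["ON_DEMAND"]},
]

def pick_on_demand_model(models):
    """Return the first model_id whose inference_types_supported
    is present and contains 'ON_DEMAND', or None if there is none."""
    for model in models:
        if "ON_DEMAND" in model.get("inference_types_supported", []):
            return model["model_id"]
    return None

chosen_text_model = pick_on_demand_model(foundation_models)
print(chosen_text_model)  # provider.model-c
```

    Filtering on `inference_types_supported` matters because invoking a model that only supports provisioned throughput with an on-demand call fails at runtime.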

    Use case 3: Comprehensive DevOps Guru monitoring, diagnostics, and audit reporting

    For example, a compliance mandate requires that all resources associated with the core WebBackend tag be monitored by DevOps Guru. The operations team automates the following steps:

    1. Configure a resource collection for the WebBackend service to ensure all relevant resources are monitored.
    2. Notify a Simple Notification Service (SNS) topic of high-severity alerts for operational visibility.
    3. Retrieve a full diagnostic package (anomalies and recommendations) for recently closed insights for post-mortem reporting.

    Pro tip: Optionally generate a structured audit report that can be:

    1. Uploaded to an S3 bucket for compliance and traceability.
    2. Attached to a ServiceNow or Jira ticket for operational follow-up or review.

    Outcome: After execution:

    • All WebBackend resources are actively monitored.
    • Alerts are routed automatically to the designated SNS topic.
    • A detailed diagnostic and audit report is generated for compliance and post-mortem analysis.
    ---
    - name: DevOps Guru Monitoring, Configuration, and Diagnostics
      hosts: localhost
      connection: local
      gather_facts: false
      vars:
        ops_sns_arn: "arn:aws:sns:us-east-1:123456789012:OpsAlertsTopic"
      tasks:
        - name: Configure Resource Collection to Monitor WebBackend Service
          amazon.ai.devopsguru_resource_collection:
            state: present
            tags:
              - app_boundary_key: "Devops-guru-Service"
                tag_values: ["WebBackend"]
            notification_channel_config:
              sns:
                topic_arn: "{{ ops_sns_arn }}"
              filters:
                severities: ["HIGH"]
                message_types: ["NEW_INSIGHT", "SEVERITY_UPGRADED"]
          register: config_result
        - name: Audit - Check the Configured Resource Collection Details          
          amazon.ai.devopsguru_resource_collection_info:
            resource_collection_type: "AWS_TAGS"
          register: collection_audit
        - name: Diagnostics - List Detailed Info for Insights
          amazon.ai.devopsguru_insight_info:
            status_filter:
              closed:
                type: 'REACTIVE'
                end_time_range:
                  from_time: "2025-10-20"
                  to_time: "2025-10-22"
            include_recommendations:
              locale: EN_US
            include_anomalies:
              filters:
                service_collection:
                  service_names:
                    - EC2
          register: insight_details
        
        - name: Build Audit Report
          ansible.builtin.set_fact:
            audit_report:
              timestamp: "{{ lookup('pipe', 'date +%Y-%m-%dT%H:%M:%S') }}"
              resource_collection_status: "{{ config_result.msg }}"
              monitored_tags: "{{ collection_audit.resource_collection.tags | default('None') }}"
              insight_count: "{{ insight_details.reactive_insights | length }}"
              insights: "{{ insight_details.reactive_insights | default([]) }}"
        - name: Render DevOps Guru audit report from template
          ansible.builtin.template:
            src: "templates/devopsguru_audit_report.json.j2"
            dest: "/tmp/devopsguru_report.json"

    This workflow demonstrates a full AI-to-ops compliance and monitoring lifecycle, combining AWS DevOps Guru, Ansible automation, and optional audit/reporting integration.
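    The Build Audit Report step reduces the registered module results to a flat summary. The same shaping in plain Python, using stand-in inputs since the real values come from the DevOps Guru modules, looks like this:

```python
from datetime import datetime, timezone

def build_audit_report(config_msg, monitored_tags, reactive_insights):
    """Flatten module results into the summary structure the playbook's
    set_fact task builds. All inputs are stand-ins for registered output."""
    return {
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S"),
        "resource_collection_status": config_msg,
        "monitored_tags": monitored_tags or "None",  # fall back like default('None')
        "insight_count": len(reactive_insights),
        "insights": reactive_insights,
    }

report = build_audit_report(
    "Resource collection configured",
    [{"app_boundary_key": "Devops-guru-Service", "tag_values": ["WebBackend"]}],
    [{"id": "ins-1", "severity": "HIGH"}],
)
print(report["insight_count"])  # 1
```

    Keeping the full `insights` payload alongside the scalar summary fields means one report serves both quick dashboards and detailed post-mortems.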

    Final thoughts

    The launch of the Red Hat Ansible Certified Content Collection amazon.ai for generative AI is more than just the addition of new modules; it's an important step in bridging the gap between AI innovation and enterprise operations. Whether you're scaling foundation models, orchestrating intelligent agents, or monitoring complex systems with DevOps Guru, this collection lets you treat AI as code. That means deployments are repeatable, configuration drift is minimized, and auditability is built in from day one.

    Explore the full use case playbooks in the GitHub repository. Migrating your configurations to these automated workflows is the first step toward building a fully automated AI ecosystem.

    Looking to get started with Ansible for Amazon Web Services?

    • Check out the Amazon Web Services Guide
    • Try out the hands-on interactive labs
    • Read the e-book: Using automation to get the most from your public cloud

    Where to go next

    • Visit us at the Red Hat booth at AWS re:Invent 2025
    • Check out Red Hat Summit 2025!
    • For further reading and information, visit other blogs related to Ansible Automation Platform.
    • Check out the YouTube playlist for everything about Ansible Collections.
    • Are you new to Ansible automation and want to learn? Check out our getting started guide on developers.redhat.com.
