Creating automation is one thing, but establishing automation code that remains reliable and resilient across evolving infrastructures is a far loftier goal. Tools like the Ansible VS Code extension and linters are invaluable for catching syntax issues and enhancing code quality, but they fall short when it comes to assessing the safety and effectiveness of the code. For this purpose, integration testing is essential. Red Hat Ansible Automation Platform offers several testing tools, with Ansible Molecule as the preferred method for integration testing. Red Hat OpenShift Dev Spaces is a container-based, cloud-native, in-browser IDE that enables rapid development.
Starting with version 3.7, OpenShift Dev Spaces has provided a workspace for Ansible development (including Molecule) out of the box. Code testing is instrumental in ensuring repeatability and automation scalability. Although it is often absent from developer workflows, code testing is the cornerstone for fostering trust and wider acceptance of automation initiatives.
In a previous article, Boost Ansible developer experience with OpenShift Dev Spaces, we delved into the key benefits of leveraging OpenShift Dev Spaces for Ansible development. In this article, we will explore how OpenShift Dev Spaces and Molecule could reshape the landscape of Ansible development and, specifically, testing. These tools not only bring added flexibility to the table, but also streamline the testing process, positioning them as effective components of a modern development workflow. While this workflow is currently just a proof of concept, its potential is clear. It's time to embark on a journey towards heightened reliability and hassle-free testing. Let's dive in!
A sample Ansible collection
To demonstrate how to use Ansible testing tools for integration testing, we will use a sample role included in the sample Ansible workspace. Red Hat OpenShift users can follow along by installing the OpenShift Dev Spaces operator (version 3.11.0 or later). For those without a cluster of their own, you can try it for yourself in the no-cost Developer Sandbox for Red Hat OpenShift.
The example role is located at collections/ansible_collections/sample_namespace/roles/backup_file. This role is responsible for creating a backup of a file and storing it in a different folder. One could imagine it as the initial step in an upgrade process, ensuring that we have a usable backup in case we need to revert to a previous state. Included is the following task file:
---
# tasks file for backup_file
- name: Ping the host
  ansible.builtin.ping:

- name: Backup File | Create backup directory
  ansible.builtin.file:
    path: "{{ backup_file_dest_folder }}"
    state: directory
    owner: "{{ backup_file_dest_dir_owner }}"
    group: "{{ backup_file_dest_dir_group }}"
    mode: "{{ backup_file_dest_dir_mode }}"

- name: Backup File | Copy source file to backup destination
  ansible.builtin.copy:
    src: "{{ backup_file_source }}"
    dest: "{{ backup_file_dest_folder }}/{{ backup_file_source | basename }}{{ backup_file_dest_suffix }}"
    owner: "{{ backup_file_dest_owner }}"
    group: "{{ backup_file_dest_group }}"
    mode: "{{ backup_file_dest_mode }}"
    remote_src: "{{ backup_file_remote_source }}"
Ansible developers will recognize that this task file uses a built-in Ansible module, ansible.builtin.copy, to perform the file backup operation. We can be confident in the module's reliability because it is thoroughly tested as part of the ansible.builtin collection. However, using a dependable module doesn't guarantee that our specific usage is foolproof. To make the role versatile, we parameterize the module's inputs with variables, and this flexibility introduces the possibility that users might provide unexpected input values.
Despite its apparent simplicity, there are scenarios where this automated process could fail during execution or complete without achieving its intended purpose. Therefore, testing is essential to prevent such issues.
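To make the parameterization concrete, here is a rough sketch of how a playbook might call the role. The role reference and variable values are assumptions for illustration, and the remaining backup_file_* variables are presumed to have defaults in the role:

---
# Hypothetical caller of the backup_file role; role name and values
# are illustrative, not taken from the sample workspace.
- name: Back up the message of the day
  hosts: all
  tasks:
    - name: Include the backup_file role
      ansible.builtin.include_role:
        name: backup_file            # assumed name; use the collection's FQCN in practice
      vars:
        backup_file_source: /etc/motd
        backup_file_dest_folder: /var/backups
        backup_file_dest_suffix: ".bak"
        backup_file_remote_source: true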
Developing on OpenShift offers a significant advantage over many local development environments due to its ability to quickly create test systems as containers (or virtual machines when OpenShift Virtualization is enabled). For the purposes of this example, we'll focus on the container use case deployed as pods in OpenShift.
Molecule is configured within its directory inside the backup role. Molecule can be configured with multiple testing scenarios, but in this case, it is set up with only one scenario, named default. We won't delve into the detailed configuration and usage of Molecule in this article; the Molecule documentation covers both in depth.
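For orientation, a scenario is described by a molecule.yml file. Below is a trimmed, illustrative sketch assuming a delegated (default) driver and hypothetical platform names; the file shipped in the sample workspace may differ:

---
# Illustrative molecule.yml sketch; names and image are assumptions.
dependency:
  name: galaxy
driver:
  name: default            # delegated driver; create.yml/destroy.yml manage pods
platforms:
  - name: test-pod         # hypothetical pod name
    image: registry.access.redhat.com/ubi9/ubi   # hypothetical test image
provisioner:
  name: ansible
verifier:
  name: ansible            # runs verify.yml as an Ansible playbook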
The most critical aspect of the testing process is the verification phase, in which Molecule assesses whether the automation executed during testing meets all the specified requirements. You can find an example of Molecule validation in this file: collections/.../roles/backup_file/molecule/default/verify.yml. This example checks all of the following conditions and fails if any of them is not met:
- Check if the backup destination directory exists.
- Check if the backup destination directory is writable.
- Check if the backup file was created.
- Check that the backup file matches the source file.
These are basic checks that can serve as a starting point for testing this type of task. These tests are aligned with the requirements of the automation and can be written initially as part of a test-driven development (TDD) approach. As new requirements or issues arise, you can add additional tests to ensure the long-term reliability of the role.
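The workspace's verify.yml is not reproduced here, but as a rough sketch, assuming hypothetical paths and variable names, the four checks above could be expressed like this:

---
# Illustrative only; paths and variable values are assumptions.
- name: Verify
  hosts: all
  gather_facts: false
  vars:
    backup_dir: /var/backups                  # hypothetical destination folder
    backup_path: "{{ backup_dir }}/motd.bak"  # hypothetical backup file
    source_path: /etc/motd                    # hypothetical source file
  tasks:
    - name: Check that the backup directory exists and is writable
      ansible.builtin.stat:
        path: "{{ backup_dir }}"
      register: dir_stat
      failed_when: not dir_stat.stat.exists or not dir_stat.stat.writeable

    - name: Check that the backup file was created
      ansible.builtin.stat:
        path: "{{ backup_path }}"
        checksum_algorithm: sha256
      register: backup_stat
      failed_when: not backup_stat.stat.exists

    - name: Check that the backup matches the source file
      ansible.builtin.stat:
        path: "{{ source_path }}"
        checksum_algorithm: sha256
      register: source_stat
      failed_when: backup_stat.stat.checksum != source_stat.stat.checksum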
Advantages of OpenShift Dev Spaces for testing
OpenShift Dev Spaces offers several advantages for testing in an OpenShift environment. This section provides an overview of these benefits.
Flexible resource creation
OpenShift Dev Spaces users are granted the ability to create pods by default, and platform operators can tailor these permissions to align with organizational requirements. This access allows for the swift creation and removal of test infrastructure using a tool like Molecule. For example, you can find an illustration of this inside the backup_file role, in the molecule/default/create.yml file. During the Molecule create stage, a pod definition template is populated to create pods based on the image specified in molecule/default/molecule.yml. The details of each pod then serve as the inventory for testing the role.
The same process runs during the Molecule destroy stage, but in reverse: with the state parameter set to absent, it removes the previously created pods. This ensures that test resources are cleaned up, reducing overhead and facilitating future testing.
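The actual create.yml uses a fuller pod template, but the underlying pattern can be sketched with the kubernetes.core.k8s module; the namespace lookup and pod spec below are assumptions:

---
# Illustrative create-stage playbook; destroy.yml runs the same task
# with state: absent. Pod spec and namespace lookup are assumptions.
- name: Create
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create one test pod per Molecule platform
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Pod
          metadata:
            name: "{{ item.name }}"
            namespace: "{{ lookup('env', 'DEVWORKSPACE_NAMESPACE') }}"  # assumed env var
          spec:
            containers:
              - name: instance
                image: "{{ item.image }}"
                command: ["sleep", "infinity"]   # keep the pod running for tests
      loop: "{{ molecule_yml.platforms }}"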
In the Developer Sandbox, you cannot create privileged pods. Privileged operations, such as installing packages using tools like yum or dnf, will not work in this environment. Red Hat OpenShift disables privileged pods by default as part of its security strategy. For privileged automation, safer alternatives like OpenShift sandboxed containers or OpenShift Virtualization should be used instead of modifying security context constraints to enable privileged pods.
No SSH connection
Ansible Automation Platform typically uses SSH as the default connection method, but in a container environment, this can add complexity. OpenShift provides a way to execute commands in pods through the OpenShift CLI tool (oc), and Ansible exposes this capability as the oc connection plugin. The plugin is included in the Red Hat OpenShift Collection for Ansible and eliminates the need to enable SSH on test pods. Ansible automation communicates directly with the pods using the user's OpenShift Dev Spaces credentials.
This simplifies the process of using standard or custom images as test systems. You can refer to the configuration example in the backup_file role under molecule/default/converge.yml. However, keep in mind that all communication is proxied through the OpenShift API server, which may introduce additional traffic overhead.
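To illustrate the wiring (the host name, namespace, and variable values below are assumptions; consult the plugin documentation for the full option list), an inventory entry for a test pod might look like this:

---
# Hypothetical inventory entry for a Molecule-created pod.
all:
  hosts:
    test-pod:
      ansible_connection: community.okd.oc
      ansible_oc_pod: test-pod            # pod to exec into
      ansible_oc_namespace: my-namespace  # assumed namespace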
Kubernetes downward API
OpenShift leverages the Kubernetes downward API to pass relevant information about the operating environment directly into the OpenShift Dev Spaces pod. This information is exposed through environment variables and includes details like the current namespace and the resources available to the container. This simplifies the creation of testing templates because the developer's workspace automatically provides much of the required information.
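For context, this is what downward API injection looks like in a plain pod spec; Dev Spaces performs the equivalent wiring for the workspace pod automatically (the image and variable name below are illustrative):

---
# Plain Kubernetes example of the downward API; not a Dev Spaces file.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi   # hypothetical image
      env:
        - name: CURRENT_NAMESPACE                  # illustrative variable name
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace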
How to develop a role in OpenShift Dev Spaces
To begin developing a role, we can follow the same flow as developing locally.
1. Open a terminal in the editor by expanding the menu and selecting Terminal, then New Terminal.
2. Using the ansible-galaxy CLI, create a new collection. The following commands create the folder structure and boilerplate files for a collection named new_collection.my_collection:
> ansible-galaxy collection init new_collection.my_collection --init-path collections
> cd collections/new_collection/my_collection
3. Next, initialize a role inside new_collection.my_collection.
> ansible-galaxy role init roles/new_role
> cd roles/new_role
4. Update the role's meta/main.yml file to provide accurate details about the role. At a minimum, set the role_name and namespace properties appropriately:
---
galaxy_info:
  role_name: new_role
  namespace: devspaces_test
5. Next, add the business logic, a task that writes a text file, to the tasks/main.yml file:
---
- name: Create an example config file
  ansible.builtin.copy:
    content: "What a great role!"
    dest: /tmp/example-config-file
    mode: '0666'
6. Next, initialize a Molecule scenario. For simplicity, copy the Molecule testing folder from roles/backup_file because it includes a custom plugin for testing in OpenShift Dev Spaces. This plugin dynamically spins up pods in OpenShift to test against.
Using the GUI, copy the Molecule folder from collections/ansible_collections/sample_namespace/roles/backup_file/molecule to collections/new_collection/my_collection/roles/new_role.
Then copy the requirements file from collections/ansible_collections/sample_namespace/roles/backup_file/requirements.yml to collections/new_collection/my_collection/roles/new_role/.
7. Write a test in the molecule/default/verify.yml file:
---
- name: Verify
  hosts: all
  gather_facts: false
  connection: community.okd.oc
  vars:
    path: /tmp/example-config-file
    content: "What a great role!"
  tasks:
    - name: Verify that the file exists with the expected mode
      ansible.builtin.stat:
        path: "{{ path }}"
      register: file_stat
      failed_when: not file_stat.stat.exists or file_stat.stat.mode != '0666'

    - name: Check file content
      ansible.builtin.slurp:
        src: "{{ path }}"
      register: file_content
      failed_when: file_content['content'] | b64decode != content
8. Update the new_role/molecule/default/converge.yml file to point to the new role:
---
- name: Converge
  hosts: all
  gather_facts: true
  connection: community.okd.oc
  tasks:
    - name: "Include new_role"
      ansible.builtin.include_role:
        name: "new_role"
9. Then, from the command line, we can run the full test suite:
> molecule test
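While iterating on a role, you don't have to run the full sequence every time; Molecule also exposes the individual stages as subcommands:

> molecule create     # provision the test pod(s)
> molecule converge   # apply the role to them
> molecule verify     # run the verify.yml checks
> molecule destroy    # tear the test pod(s) down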
10. Finally, use the Git integration to initialize your repository from a local folder, commit, and push up to your Git provider of choice.
Streamlined automation in OpenShift Dev Spaces
Creating durable automation code that thrives amidst changing infrastructure is an aspirational summit for many enterprises. Established Ansible creators often adopt essential software development practices, like rigorous code testing, to ensure repeatability and scalability. While tools like the Ansible VS Code extension and linters enhance code quality, they fall short in evaluating code safety and effectiveness. This is where integration testing comes in, with Ansible Molecule emerging as a vital player and the Ansible workspace in OpenShift Dev Spaces making it easier than ever to access.
Testing in OpenShift Dev Spaces has distinct advantages, with flexible resource creation at the forefront. Users can efficiently create and manage pods, with access configurable to meet organizational needs. Eliminating the need for SSH connections via the OpenShift CLI tool simplifies communication within container environments, while the Kubernetes downward API automates the provisioning of vital environment variables.
While this solution is still evolving, we have shown a possible path forward to a more accessible and streamlined automation creator environment.
Last updated: August 12, 2024