Container Testing in OpenShift with Meta Test Family

May 22, 2018
Petr Hracek
Related topics:
CI/CD, Containers, Kubernetes
Related products:
Red Hat OpenShift Container Platform

    Without proper testing, we should not ship any container. We should guarantee that a given service in a container works properly. Meta Test Family (MTF) was designed for this very purpose.

    Containers can be tested as “standalone” containers and as “orchestrated” containers. Let’s look at how to test containers in the Red Hat OpenShift environment and what steps are needed.

    MTF is a minimalistic library built on the existing Avocado and behave testing frameworks that helps developers quickly enable test automation. MTF adds basic support and abstraction for testing various module artifact types: RPMs, Docker images, and more. For detailed information about the framework and how to use it, check out the MTF documentation.

    Installing MTF

    Before you can start testing, install MTF from the official EPEL repository using sudo:

    sudo yum install -y meta-test-family

    A COPR repository contains a development version of MTF that should not be used in a production environment. If you want to try the development version anyway, install it with these commands:

    dnf copr enable phracek/meta-test-family
    dnf install -y meta-test-family

    To install MTF directly from GitHub, run these commands:

    git clone git@github.com:fedora-modularity/meta-test-family.git
    cd meta-test-family
    sudo python setup.py install

    Now, you can start testing containers in the OpenShift environment.

    Prepare a Test for OpenShift

    Running your containers locally is simple: just use the docker run command. But that’s not how you run your application in production; that’s OpenShift’s job. To make sure your containers are orchestrated well, you should test them in the same environment.

    Bear in mind that standalone and orchestrated environments are different. Standalone containers can be executed easily with a single command, but managing them isn’t as easy: you need to figure out persistent storage, backups, updates, routing, and scaling on your own, all things an orchestrator gives you for free.

    The OpenShift environment has its own characteristics: security restrictions, different persistent storage logic, an expectation of stateless pods, support for updates, a multi-node environment, native source-to-image support, and much more. Setting up such an orchestrated environment by hand is not an easy task, which is why MTF supports OpenShift: so you can easily test your containerized application in an orchestrated environment.

    Before preparing and running the OpenShift environment, you have to create a test and an MTF configuration file in YAML format. These two files have to be in the same directory, and tests are executed from that directory.

    Structure of MTF Tests

    Create a directory that contains the following files:

    • config.yaml: The configuration file for MTF
    • memcached_sanity.py: The container test that is run by MTF

    Configuration File for MTF

    The configuration file looks like this:

    document: modularity-testing
    version: 1
    name: memcached
    service:
        port: 11211
    module:
        openshift:
            start: VARIABLE_MEMCACHED=60
            container: docker.io/modularitycontainers/memcached

    Here’s an explanation of each field in the YAML config file for MTF:

    • service.port: The port where the service is available
    • module.openshift: Configuration relevant only to the OpenShift environment
    • module.openshift.start: Parameters used when starting the container for testing in OpenShift
    • module.openshift.container: Reference to the container image that will be used for testing in OpenShift
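
    For illustration, here is how these values are exposed to a test once MTF has parsed config.yaml: MTF's getConfig() returns a nested dictionary of roughly this shape. The dict literal below is a hand-written stand-in mirroring the file above, not MTF code.

```python
# Stand-in for the parsed config.yaml; in a real test, self.getConfig()
# returns a nested dict like this one.
config = {
    "document": "modularity-testing",
    "version": 1,
    "name": "memcached",
    "service": {"port": 11211},
    "module": {
        "openshift": {
            "start": "VARIABLE_MEMCACHED=60",
            "container": "docker.io/modularitycontainers/memcached",
        }
    },
}

# A test reads the service port the same way the memcached test below does.
port = config["service"]["port"]
image = config["module"]["openshift"]["container"]
print(port, image)  # 11211 docker.io/modularitycontainers/memcached
```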

    Test for memcached Container

    Here’s an example of a memcached test for a container:

    $ cat memcached_sanity.py
    import pexpect
    from avocado import main
    from avocado.core import exceptions
    from moduleframework import module_framework
    from moduleframework import common


    class MemcachedSanityCheck(module_framework.AvocadoTest):
        """
        :avocado: enable
        """

        def test_smoke(self):
            self.start()
            session = pexpect.spawn("telnet %s %s" % (self.ip_address, self.getConfig()['service']['port']))
            session.sendline('set Test 0 100 10\r\n\n')
            session.sendline('JournalDev\r\n\n')
            common.print_info("Expecting STORED")
            session.expect('STORED')
            common.print_info("STORED was caught")
            session.close()


    if __name__ == '__main__':
        main()
    

    This test connects to memcached via telnet on the given IP address and port. The port comes from the MTF configuration file; the IP address is discussed in the following sections.
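
    To make the telnet exchange concrete, here is a small sketch of the raw memcached text protocol the test drives. The build_set and is_stored helpers are illustrative only, not part of MTF or the test; a "set" command is a header line (key, flags, expiry, byte count) followed by the data block, and memcached answers STORED on success.

```python
def build_set(key, value, flags=0, exptime=100):
    """Compose a memcached 'set' command: header line, then the data block.

    The byte count in the header is computed from the value, so the data
    block length always matches what memcached expects.
    """
    data = value.encode()
    header = "set %s %d %d %d\r\n" % (key, flags, exptime, len(data))
    return header.encode() + data + b"\r\n"


def is_stored(reply):
    """memcached replies 'STORED\\r\\n' when the item was accepted."""
    return reply.strip() == b"STORED"


# The same key/value pair the test sends over telnet:
payload = build_set("Test", "JournalDev")
print(payload)  # b'set Test 0 100 10\r\nJournalDev\r\n'
```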

    Prepare OpenShift for Container Testing

    MTF can install the OpenShift environment on your local system with the mtf-env-set command.

    $ sudo MODULE=openshift OPENSHIFT_LOCAL=yes mtf-env-set
    Setting environment for module: openshift
    Preparing environment ...
    Loaded config for name: memcached
    Starting OpenShift
    Starting OpenShift using openshift/origin:v3.6.0 ...
    OpenShift server started.
    
    The server is accessible via web console at:
    https://127.0.0.1:8443
    
    You are logged in as:
    User: developer
    Password: <any value>
    
    To login as administrator:
    oc login -u system:admin
    

    The mtf-env-set command checks for a shell variable called OPENSHIFT_LOCAL. If it is set, the command checks whether the origin and origin-clients packages are installed and installs them if they are not.

    In this case, the local machine performs the container testing. If you test containers on a remote OpenShift instance, you can skip this step. If the OPENSHIFT_LOCAL variable is not set, tests are executed on the remote OpenShift instance specified by the OPENSHIFT_IP parameter (see below).
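
    The dispatch described above can be sketched as a small helper. The target_ip function below is a hypothetical illustration of the described behavior, not MTF's actual implementation:

```python
def target_ip(environ):
    """Pick the OpenShift instance to test against, mirroring the behavior
    described above: OPENSHIFT_LOCAL selects the local instance started by
    mtf-env-set; otherwise OPENSHIFT_IP names a remote instance."""
    if environ.get("OPENSHIFT_LOCAL"):
        return "127.0.0.1"  # local instance started by mtf-env-set
    ip = environ.get("OPENSHIFT_IP")
    if not ip:
        raise RuntimeError("set OPENSHIFT_LOCAL=yes or OPENSHIFT_IP=<address>")
    return ip


print(target_ip({"OPENSHIFT_LOCAL": "yes"}))    # 127.0.0.1
print(target_ip({"OPENSHIFT_IP": "10.0.0.5"}))  # 10.0.0.5
```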

    Container Testing

    Now you can test your container on either a local or a remote OpenShift instance by using the mtf command. The only difference between the two cases is the parameters passed to the command.

    In the following local testing case, memcached_sanity.py uses 127.0.0.1 as the value for self.ip_address:

    $ sudo MODULE=openshift OPENSHIFT_USER=developer OPENSHIFT_PASSWORD=developer mtf memcached_sanity.py

    In the following remote testing case, memcached_sanity.py uses OPENSHIFT_IP as the value for self.ip_address:

    $ sudo OPENSHIFT_IP=<ip_address> OPENSHIFT_USER=<username> OPENSHIFT_PASSWD=<passwd> mtf memcached_sanity.py

    Tests are then executed from the directory where the configuration file and tests are stored, against the given OpenShift instance. The output looks like this:

    JOB ID : c2b0877ca52a14c6c740582c76f60d4f19eb2d4d
    JOB LOG : /root/avocado/job-results/job-2017-12-18T12.32-c2b0877/job.log
    (1/1) memcached_sanity.py:SanityCheck1.test_smoke: PASS (13.19 s)
    RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
    JOB TIME : 13.74 s
    JOB HTML : /root/avocado/job-results/job-2017-12-18T12.32-c2b0877/results.html
    $
    

    If you open the /root/avocado/job-results/job-2017-12-18T12.32-c2b0877/job.log file, you’ll see contents similar to the example below.

    [...snip...]
    ['/var/log/messages', '/var/log/syslog', '/var/log/system.log'])
    2017-12-18 14:29:36,208 job L0321 INFO | Command line: /bin/avocado run --json /tmp/tmppfZpNe sanity1.py
    2017-12-18 14:29:36,208 job L0322 INFO |
    2017-12-18 14:29:36,208 job L0326 INFO | Avocado version: 55.0
    2017-12-18 14:29:36,208 job L0342 INFO |
    2017-12-18 14:29:36,208 job L0346 INFO | Config files read (in order):
    2017-12-18 14:29:36,208 job L0348 INFO | /etc/avocado/avocado.conf
    2017-12-18 14:29:36,208 job L0348 INFO | /etc/avocado/conf.d/gdb.conf
    2017-12-18 14:29:36,208 job L0348 INFO | /root/.config/avocado/avocado.conf
    2017-12-18 14:29:36,208 job L0353 INFO |
    2017-12-18 14:29:36,208 job L0355 INFO | Avocado config:
    2017-12-18 14:29:36,209 job L0364 INFO | Section.Key 
    [...snip...]
    
    :::::::::::::::::::::::: SETUP ::::::::::::::::::::::::
    
    2017-12-18 14:29:36,629 avocado_test L0069 DEBUG|
    
    :::::::::::::::::::::::: START MODULE ::::::::::::::::::::::::
    
    

    MTF first verifies whether the application already exists in the OpenShift environment:

    2017-12-18 14:29:36,629 process L0389 INFO | Running 'oc get dc memcached -o json'
    2017-12-18 14:29:36,842 process L0479 DEBUG| [stderr] Error from server (NotFound): deploymentconfigs.apps.openshift.io "memcached" not found
    2017-12-18 14:29:36,846 process L0499 INFO | Command 'oc get dc memcached -o json' finished with 1 after 0.213222980499s
    

    In the next step, MTF verifies whether the pod exists in OpenShift:

    2017-12-18 14:29:36,847 process L0389 INFO | Running 'oc get pods -o json'
    2017-12-18 14:29:37,058 process L0479 DEBUG| [stdout] {
    2017-12-18 14:29:37,059 process L0479 DEBUG| [stdout] "apiVersion": "v1",
    2017-12-18 14:29:37,059 process L0479 DEBUG| [stdout] "items": [],
    2017-12-18 14:29:37,059 process L0479 DEBUG| [stdout] "kind": "List",
    2017-12-18 14:29:37,059 process L0479 DEBUG| [stdout] "metadata": {},
    2017-12-18 14:29:37,059 process L0479 DEBUG| [stdout] "resourceVersion": "",
    2017-12-18 14:29:37,059 process L0479 DEBUG| [stdout] "selfLink": ""
    2017-12-18 14:29:37,060 process L0479 DEBUG| [stdout] }
    2017-12-18 14:29:37,064 process L0499 INFO | Command 'oc get pods -o json' finished with 0 after 0.211796045303s
    

    The next step creates an application with the label mtf_testing=true, using the container image specified in the container field of config.yaml.

    2017-12-18 14:29:37,064 process L0389 INFO | Running 'oc new-app -l mtf_testing=true docker.io/modularitycontainers/memcached --name=memcached'
    2017-12-18 14:29:39,022 process L0479 DEBUG| [stdout] --> Found Docker image bbc8bba (5 weeks old) from docker.io for "docker.io/modularitycontainers/memcached"
    2017-12-18 14:29:39,022 process L0479 DEBUG| [stdout]
    2017-12-18 14:29:39,022 process L0479 DEBUG| [stdout] memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.
    2017-12-18 14:29:39,022 process L0479 DEBUG| [stdout]
    2017-12-18 14:29:39,022 process L0479 DEBUG| [stdout] Tags: memcached
    2017-12-18 14:29:39,023 process L0479 DEBUG| [stdout]
    2017-12-18 14:29:39,023 process L0479 DEBUG| [stdout] * An image stream will be created as "memcached:latest" that will track this image
    2017-12-18 14:29:39,023 process L0479 DEBUG| [stdout] * This image will be deployed in deployment config "memcached"
    2017-12-18 14:29:39,023 process L0479 DEBUG| [stdout] * Port 11211/tcp will be load balanced by service "memcached"
    2017-12-18 14:29:39,023 process L0479 DEBUG| [stdout] * Other containers can access this service through the hostname "memcached"
    2017-12-18 14:29:39,023 process L0479 DEBUG| [stdout]
    2017-12-18 14:29:39,023 process L0479 DEBUG| [stdout] --> Creating resources with label mtf_testing=true ...
    2017-12-18 14:29:39,032 process L0479 DEBUG| [stdout] imagestream "memcached" created
    2017-12-18 14:29:39,043 process L0479 DEBUG| [stdout] deploymentconfig "memcached" created
    2017-12-18 14:29:39,063 process L0479 DEBUG| [stdout] service "memcached" created
    2017-12-18 14:29:39,064 process L0479 DEBUG| [stdout] --> Success
    2017-12-18 14:29:39,064 process L0479 DEBUG| [stdout] Run 'oc status' to view your app.
    2017-12-18 14:29:39,069 process L0499 INFO | Command 'oc new-app -l mtf_testing=true docker.io/modularitycontainers/memcached --name=memcached' finished with 0 after 2.00025391579s
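
    The command logged above can be derived directly from the config.yaml values. The new_app_cmd helper below is a hypothetical sketch of that mapping, not MTF code; mtf_testing=true is the label MTF applies to everything it creates.

```python
def new_app_cmd(config):
    """Compose the 'oc new-app' argument vector from the parsed config:
    the image comes from module.openshift.container, the app name from name."""
    image = config["module"]["openshift"]["container"]
    name = config["name"]
    return ["oc", "new-app", "-l", "mtf_testing=true", image, "--name=%s" % name]


cmd = new_app_cmd({
    "name": "memcached",
    "module": {"openshift": {"container": "docker.io/modularitycontainers/memcached"}},
})
print(" ".join(cmd))
# oc new-app -l mtf_testing=true docker.io/modularitycontainers/memcached --name=memcached
```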
    

    The next step verifies that the application is really running and determines the IP address where it’s reachable:

    2017-12-18 14:29:46,201 process L0389 INFO | Running 'oc get service -o json'
    2017-12-18 14:29:46,416 process L0479 DEBUG| [stdout] {
    2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] "apiVersion": "v1",
    2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] "items": [
    2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] {
    2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] "apiVersion": "v1",
    2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] "kind": "Service",
    2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] "metadata": {
    2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] "annotations": {
    2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] "openshift.io/generated-by": "OpenShiftNewApp"
    2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] },
    2017-12-18 14:29:46,418 process L0479 DEBUG| [stdout] "creationTimestamp": "2017-12-18T13:29:39Z",
    2017-12-18 14:29:46,418 process L0479 DEBUG| [stdout] "labels": {
    2017-12-18 14:29:46,418 process L0479 DEBUG| [stdout] "app": "memcached",
    2017-12-18 14:29:46,418 process L0479 DEBUG| [stdout] "mtf_testing": "true"
    2017-12-18 14:29:46,418 process L0479 DEBUG| [stdout] },
    2017-12-18 14:29:46,418 process L0479 DEBUG| [stdout] "name": "memcached",
    2017-12-18 14:29:46,418 process L0479 DEBUG| [stdout] "namespace": "myproject",
    2017-12-18 14:29:46,418 process L0479 DEBUG| [stdout] "resourceVersion": "2121",
    2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] "selfLink": "/api/v1/namespaces/myproject/services/memcached",
    2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] "uid": "7f50823d-e3f7-11e7-be28-507b9d4150cb"
    2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] },
    2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] "spec": {
    2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] "clusterIP": "172.30.255.42",
    2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] "ports": [
    2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] {
    2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] "name": "11211-tcp",
    2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] "port": 11211,
    2017-12-18 14:29:46,420 process L0479 DEBUG| [stdout] "protocol": "TCP",
    2017-12-18 14:29:46,420 process L0479 DEBUG| [stdout] "targetPort": 11211
    2017-12-18 14:29:46,420 process L0499 INFO | Command 'oc get service -o json' finished with 0 after 0.213701963425s
    2017-12-18 14:29:46,420 process L0479 DEBUG| [stdout] }
    2017-12-18 14:29:46,420 process L0479 DEBUG| [stdout] ],
    2017-12-18 14:29:46,420 process L0479 DEBUG| [stdout] "selector": {
    2017-12-18 14:29:46,420 process L0479 DEBUG| [stdout] "app": "memcached",
    2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] "deploymentconfig": "memcached",
    2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] "mtf_testing": "true"
    2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] },
    2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] "sessionAffinity": "None",
    2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] "type": "ClusterIP"
    2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] },
    2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] "status": {
    2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] "loadBalancer": {}
    2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] }
    2017-12-18 14:29:46,422 process L0479 DEBUG| [stdout] }
    2017-12-18 14:29:46,422 process L0479 DEBUG| [stdout] ],
    2017-12-18 14:29:46,422 process L0479 DEBUG| [stdout] "kind": "List",
    2017-12-18 14:29:46,422 process L0479 DEBUG| [stdout] "metadata": {},
    2017-12-18 14:29:46,422 process L0479 DEBUG| [stdout] "resourceVersion": "",
    2017-12-18 14:29:46,422 process L0479 DEBUG| [stdout] "selfLink": ""
    2017-12-18 14:29:46,422 process L0479 DEBUG| [stdout] }
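
    Extracting the cluster IP and port from this JSON can be sketched as follows. The service_endpoint helper is illustrative (not MTF's actual code), and the sample document mirrors the structure of the log above:

```python
import json


def service_endpoint(oc_json, name):
    """Find a service by name in 'oc get service -o json' output and return
    its (clusterIP, port) pair."""
    for item in json.loads(oc_json)["items"]:
        if item["metadata"]["name"] == name:
            return item["spec"]["clusterIP"], item["spec"]["ports"][0]["port"]
    raise LookupError("service %r not found" % name)


# A minimal sample with the same shape and values as the log output above.
sample = json.dumps({"items": [{
    "metadata": {"name": "memcached"},
    "spec": {"clusterIP": "172.30.255.42",
             "ports": [{"port": 11211, "protocol": "TCP"}]},
}]})
print(service_endpoint(sample, "memcached"))  # ('172.30.255.42', 11211)
```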
    

    In the last phase, tests are executed.

    2017-12-18 14:29:46,530 output L0655 DEBUG| Expecting STORED
    2017-12-18 14:29:46,531 output L0655 DEBUG| STORED was caught
    2017-12-18 14:29:46,632 avocado_test L0069 DEBUG|
    
    :::::::::::::::::::::::: TEARDOWN ::::::::::::::::::::::::
    
    2017-12-18 14:29:46,632 process L0389 INFO | Running 'oc get dc memcached -o json'
    2017-12-18 14:29:46,841 process L0479 DEBUG| [stdout] {
    2017-12-18 14:29:46,841 process L0479 DEBUG| [stdout] "apiVersion": "v1",
    2017-12-18 14:29:46,841 process L0479 DEBUG| [stdout] "kind": "DeploymentConfig",
    2017-12-18 14:29:46,841 process L0479 DEBUG| [stdout] "metadata": {
    

    After the tests finish, you can check whether the service is still running in the OpenShift environment with the oc status command:

    $ sudo oc status
    In project My Project (myproject) on server https://127.0.0.1:8443
    
    You have no services, deployment configs, or build configs.
    Run 'oc new-app' to create an application.
    

    This output shows that after testing an arbitrary container, the OpenShift environment is cleaned up again.

    Summary

    As you have seen in this article, writing tests for containers is really easy. Testing helps you guarantee that a container works properly, just as you would for an RPM package. In the near future, there are plans to extend MTF with S2I testing and with testing containers deployed from OpenShift templates. You can read more in the MTF documentation.

    Last updated: February 23, 2024
