
Build trust in continuous integration for your Rust library

September 6, 2022
Gris Ge
Related topics:
CI/CD, Linux, Open source, Python, Rust
Related products:
Red Hat OpenShift


    This article concludes the 4-part series about how to take advantage of the recent Rust support added to Linux. I hope you have read the previous articles in the series:

    • Part 1: 3 essentials for writing a Linux system library in Rust

    • Part 2: How to create C binding for a Rust library

    • Part 3: How to create Python binding for a Rust library

    This article will demonstrate how to build trust in a continuous integration (CI) system for your Rust library.

    You can download the demo code from its GitHub repository. The package contains:

    • An echo server listening on the Unix socket /tmp/librabc
    • A Rust crate that connects to the socket and sends a ping packet every 2 seconds
    • A C/Python binding
    • A command-line interface (CLI) for the client

    The CI system of this GitHub repository is built on GitHub Actions with:

    • Rust lint check using cargo fmt and cargo clippy.
    • Python lint check using pylint.
    • Rust unit test.
    • C memory leak test.
    • Integration test on CentOS Stream 8 and CentOS Stream 9.

    We find that the pytest framework provides more control over how a test case runs than Rust's native cargo test. Hence, we use pytest as the integration test framework.

    How to build trust in continuous integration in 4 steps:

    Open source projects receive contributions from around the world, from contributors with different skill sets and habits. Therefore, we strongly recommend maintaining trust in the CI system of every critical systems project. The goal of a CI system is to build trust by ensuring that:

    • The pending patch will not introduce any regression.
    • A test case attached to the pending patch proves what it fixed.
    • New features are tested with commonly used test cases.

    Step 1. Isolate the test environment and test setup

    Effective isolation is the first thing to consider when you design a CI system. When the CI system relies on a large bash script to set up a complex environment for the test cases, porting the CI to a new platform or debugging a single test case becomes difficult. This leads to the following issues:

    • Mixing CI platform-related code with test setup code makes it hard for developers to debug specific test cases in their local environments.
    • New contributors would have to work through a lengthy document before their first contribution with a test case attached, making the project less friendly to the open source community.
    • A large bash script tends to become bloated and prone to race conditions.

    Our demo project comprises three isolated layers for CI setup:

    • Layer 1: The .github/workflows/main.yaml and .github/runtest.sh contain the CI platform (GitHub Actions) specific setup code, which collects test artifacts, runs test cases on a matrix of toolset combinations, invokes the tests in different containers, and installs the package of the current project.

    The first layer is CI platform specific, so you should refer to your platform's documentation for details.

    • Layer 2: The tests/runtest.sh contains the test environment setup code, including a helper for running the test in developer mode and the specific arguments passed to pytest. It contains zero lines of code for the environment setup of any specific test case.

    The second layer is test framework specific; the developer should choose a suitable test framework.

    • Layer 3: The rabc_daemon() pytest fixture in tests/integration/rabc_test.py contains the environment setup for a certain test case.

    Let's elaborate on the third layer's pytest fixture, which is designed to set up the environment and clean up after the test case finishes (pass or fail).

    To use a pytest fixture to set up the test environment, we have the following lines in tests/integration/rabc_test.py:

    import os
    import signal
    import subprocess

    import pytest


    @pytest.fixture(scope="session", autouse=True)
    def rabc_daemon():
        # Start the daemon in its own session/process group so the
        # whole group can be terminated during cleanup.
        daemon = subprocess.Popen(
            "rabcd",
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            preexec_fn=os.setsid,
        )
        yield
        # Teardown: stop the daemon and any children it spawned.
        os.killpg(daemon.pid, signal.SIGTERM)

    With the @pytest.fixture(scope="session", autouse=True) decorator, the whole test session has the rabcd daemon started beforehand and stopped afterward. pytest handles failures during daemon start/stop properly.

    pytest also provides module, class, and function scopes for test environment setup/cleanup.

    By chaining pytest fixtures, we can split test fixtures into small parts and reuse them between test cases. For example:

    @pytest.fixture
    def setup_a():
        start_a()
        yield
        stop_a()
    
    @pytest.fixture
    def setup_b():
        start_b()
        yield
        stop_b()
    
    @pytest.fixture
    def setup_ab(setup_a, setup_b):
        yield
    
    @pytest.fixture
    def setup_ba(setup_b, setup_a):
        yield
    

    In the example code above, the fixtures setup_ab and setup_ba produce different orders of setup and cleanup:

    • setup_ab runs setup_a() and then setup_b() before the test starts.
    • setup_ab runs stop_b() and then stop_a() after the test ends.
    • setup_ba runs setup_b() and then setup_a() before the test starts.
    • setup_ba runs stop_a() and then stop_b() after the test ends.
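    The setup/teardown ordering above follows directly from how yield-based generators nest. Here is a plain-Python sketch (no pytest needed; the calls list and run_with driver are invented for illustration) that drives such fixtures by hand and reproduces pytest's last-in, first-out teardown:

```python
calls = []

def setup_a():
    calls.append("start_a")
    yield
    calls.append("stop_a")

def setup_b():
    calls.append("start_b")
    yield
    calls.append("stop_b")

def run_with(fixtures, test):
    """Drive yield-style fixtures: setup in declaration order,
    teardown in reverse (LIFO) order, as pytest does."""
    gens = [f() for f in fixtures]
    for g in gens:
        next(g)  # run the code before `yield`
    test()
    for g in reversed(gens):
        try:
            next(g)  # run the code after `yield`
        except StopIteration:
            pass

run_with([setup_a, setup_b], lambda: calls.append("test"))
# calls == ["start_a", "start_b", "test", "stop_b", "stop_a"]
```

    This matches the setup_ab ordering described above: the last fixture set up is the first one cleaned up.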

    Step 2. Minimize the use of the internet during the test

    A single glitch in internet access on the CI platform can fail a test and waste our effort debugging that failure. So we take the following actions to minimize the use of the internet during tests:

    • Use a prebuilt container with all required packages.
    • Use the CI platform-specific way for test host setup.
    • Use rpm/dpkg instead of dnf/apt-get to avoid repository problems.

    In the demo project, we use tests/container/Dockerfile.c9s-rabc-ci to define all the packages required to build the project and run the tests. Quay.io automatically rebuilds the container on every merged commit. Some CI platforms can even cache the container to speed up the test run.

    Using containers also eliminates failures caused by upgrades of the test framework (e.g., pytest, tox).

    Instead of using your own script to test the project against multiple Rust or Python versions, you can trust the CI platform-specific way, which is fast, well tested, and well maintained. For example, we can install Rust in GitHub Actions within one second via:

    - name: Install Rust stable
      uses: actions-rs/toolchain@v1
      with:
          toolchain: stable
          override: true
          components: rustfmt, clippy
    

    Step 3. Group test cases into tiers

    Normally, I group test cases into these tiers:

    • tier1: Test cases covering real use cases learned from project consumers. This tier gates downstream package builds and downstream test runs.

    • tier2: Test cases that do not reflect real use cases but improve code coverage.

    • slow: Test cases that require massive CPU and memory resources. This tier runs on dedicated test hosts.

    You can put the @pytest.mark.tier1 decorator on your pytest test cases and invoke them via pytest -m tier1.
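    As a sketch, the tiers could look like this in a test file. The test names here are hypothetical, and the marker names should be registered in pytest.ini (or pyproject.toml) so pytest does not warn about unknown marks:

```python
import pytest

# Register the markers in pytest.ini, for example:
# [pytest]
# markers =
#     tier1: real use cases, gates downstream builds
#     tier2: coverage-only cases
#     slow: resource-heavy cases for dedicated hosts

@pytest.mark.tier1
def test_ping_roundtrip():
    pass

@pytest.mark.tier2
@pytest.mark.slow
def test_flood_many_clients():
    pass
```

    Then pytest -m tier1 selects only the gating tier, while pytest -m "not slow" skips the resource-heavy cases.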

    Step 4. Enforce the merge rules

    Once the CI system is up and running, developers with commit rights should enforce the following rules to maintain trust in the CI system:

    • A patch can only be merged when CI passes or the failure is explained.
    • Each bug fix should contain a test case proving what it fixed. The patch reviewer should run the test case without the fix to reproduce the original problem. A unit test case is required when possible.
    • Each new feature should contain an integration test case demonstrating the common use case of the feature.
    • Fix random failures as soon as possible.

    Without enforcement, a CI system can lose this trust through random failures and code coverage deficits.

    In the demo project, we have a CI system built following these guidelines.

    You may check the test results in the Checks tab of this pull request.

    Tips for the CI system of a Rust project

    Here are some tips for testing a system library in Rust:

    • Run Rust unit tests for the non-public API.
    • Run a memory leak check on the C binding written in Rust.
    • Collect pytest logs.

    Rust Unit Test for non-public API

    Unit test cases are supposed to test isolated function inputs and outputs without running the whole project in a real environment.

    The official Rust documentation demonstrates how to run automated tests with cargo test. But that covers integration tests, where your test code can only access pub functions and structures.

    Since we are building unit test cases, testing pub(crate) functions and structures is also required. We can place our test code as a mod of the current crate and mark test functions with #[test].

    You may refer to the entire test code at src/lib/unit_tests in the GitHub repo or follow these steps:

    In lib.rs, we include the unit test case folder as a normal crate-internal module:

    mod unit_tests;
    

    In unit_tests/mod.rs, we mark unit_tests/timer.rs as the test module:

    #[cfg(test)]
    mod timer;
    

    Finally, in unit_tests/timer.rs, you can access the pub(crate) structure with use crate::timer::RabcTimer.

    Memory leak check for C binding written in Rust

    I recommend a memory leak check because we use the unsafe keyword with Rust raw pointers in the C binding.

    You may refer to the complete test code at src/clib of the GitHub repo or follow this example.

    At the top of the Makefile, we define make clib_check as:

    .PHONY: clib_check
    clib_check: $(CLIB_SO_DEV_DEBUG) $(CLIB_HEADER) $(DAEMON_DEBUG)
            $(eval TMPDIR := $(shell mktemp -d))
            cp $(CLIB_SO_DEV_DEBUG) $(TMPDIR)/$(CLIB_SO_FULL)
            ln -sfv $(CLIB_SO_FULL) $(TMPDIR)/$(CLIB_SO_MAN)
            ln -sfv $(CLIB_SO_FULL) $(TMPDIR)/$(CLIB_SO_DEV)
            cp $(CLIB_HEADER) $(TMPDIR)/$(shell basename $(CLIB_HEADER))
            cc -g -Wall -Wextra -L$(TMPDIR) -I$(TMPDIR) \
                    -o $(TMPDIR)/rabc_test src/clib/tests/rabc_test.c -lrabc
            $(DAEMON_DEBUG) &
            LD_LIBRARY_PATH=$(TMPDIR) \
                    valgrind --trace-children=yes --leak-check=full \
                    --error-exitcode=1 \
                    $(TMPDIR)/rabc_test 1>/dev/null
            rm -rf $(TMPDIR)
            pkill $(DAEMON_EXEC)
    

    In short, it links src/clib/tests/rabc_test.c against the Rust C binding stored in a temporary folder and runs valgrind for the memory check.

    Pytest log collection

    Many CI platforms support uploading test artifacts. Instead of outputting everything to the console, store debug logs in files and print only the necessary lines to the console. That way, we can identify what went wrong at first glance of the test console output and still investigate with the debug log in the test artifacts.

    In pytest, we use these options:

    pytest -vvv --log-file-level=DEBUG \
        --log-file-date-format='%Y-%m-%d %H:%M:%S' \
        --log-file-format='%(asctime)s %(filename)s:%(lineno)d %(levelname)s %(message)s' \
        --log-file=${TEST_ARTIFACTS_FOLDER}/rabc_test.log
    

    This stores the DEBUG-and-above logs in the ${TEST_ARTIFACTS_FOLDER}/rabc_test.log file instead of dumping them to the console. The log file is then uploaded by .github/runtest.sh.
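    The console-versus-file split that the --log-file options give you can be sketched with plain logging handlers (the logger name and artifact path below are invented for illustration):

```python
import logging
import os
import tempfile

# Stand-in for ${TEST_ARTIFACTS_FOLDER}/rabc_test.log.
log_path = os.path.join(tempfile.mkdtemp(), "rabc_test.log")

logger = logging.getLogger("rabc_test")
logger.setLevel(logging.DEBUG)

# Everything (DEBUG and above) goes into the artifact file...
file_handler = logging.FileHandler(log_path)
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(logging.Formatter(
    "%(asctime)s %(filename)s:%(lineno)d %(levelname)s %(message)s"))
logger.addHandler(file_handler)

# ...while the console shows only the important lines.
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.WARNING)
logger.addHandler(console_handler)

logger.debug("raw ping packet sent")  # file only
logger.warning("daemon restarted")    # file and console
file_handler.close()
```

    Only the warning reaches the console, but both lines land in the artifact file for later investigation.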

    Summary

    Rust is an elegant language for a Linux system library. During my work on the nmstate and nispor projects, it spared us worries about thread and memory safety. A trustworthy CI system enables us to embrace open source contributions from around the world with confidence.

    Last updated: September 19, 2023
