Build trust in a continuous integration system for your Rust library

This article concludes the 4-part series about how to take advantage of the recent Rust support added to Linux. I hope you have read the previous articles in the series.

This article will demonstrate how to build trust in a continuous integration (CI) system for your Rust library.

You can download the demo code from its GitHub repository. The package contains:

  • An echo server listening on the Unix socket /tmp/librabc
  • A Rust crate that connects to the socket and sends a ping packet every 2 seconds
  • A C/Python binding
  • A command-line interface (CLI) for the client

The CI system of this GitHub repository is built on GitHub Actions with:

  • Rust lint check using cargo fmt and cargo clippy.
  • Python lint check using pylint.
  • Rust unit test.
  • C memory leak test.
  • Integration test on CentOS Stream 8 and CentOS Stream 9.

We find that the pytest framework provides more control over how test cases run than Rust's native cargo test. Hence, we use pytest as the integration test framework.

How to build trust in continuous integration in 4 steps:

Open source projects receive contributions from around the world, from contributors with different skill sets and habits. Therefore, we strongly recommend maintaining trust in the CI system of every critical systems project. The goal of a CI system is to build trust by ensuring that:

  • The pending patch will not introduce any regression.
  • A test case attached to the pending patch proves what it fixed.
  • New features are tested with commonly used test cases.

Step 1. Isolate the test environment and test setup

Effective isolation is the first thing to consider when you design a CI system. When the CI system relies on a large Bash script to set up a complex environment for test cases to run, porting the CI to a new platform or debugging a particular test case becomes difficult. This leads to the following issues:

  • Mixing CI platform-related code with test setup code complicates the efforts of developers debugging specific test cases in their local environment.
  • New contributors would have to complete a lengthy document for their first contribution with a test case attached, making the project less friendly to the open source community.
  • A large Bash script could become bloated and prone to race conditions.

Our demo project comprises three isolated layers for CI setup:

  • Layer 1: The .github/workflows/main.yaml and .github/runtest.sh contain the CI platform (GitHub Actions) specific setup code that stores test artifacts, runs test cases on a matrix of toolset combinations, invokes the tests in different containers, and installs the package of the current project.

The first layer is CI platform specific, so you should refer to the platform's documentation for details.

  • Layer 2: The tests/runtest.sh contains the test environment setup code, including a helper for running tests in developer mode and the specific arguments passed to pytest. It contains zero lines of code for setting up the environment of any specific test case.

The second layer is test framework specific; the developer should choose a suitable test framework.

Let's elaborate on the third layer: the pytest fixture, which is designed to set up the environment and clean up after the test case finishes (pass or fail).

To use a pytest fixture to set up the test environment, we have the following lines in tests/integration/rabc_test.py:

import os
import signal
import subprocess

import pytest

@pytest.fixture(scope="session", autouse=True)
def rabc_daemon():
    # Start the daemon in its own process group so we can later
    # terminate it and any children it spawned in one call.
    daemon = subprocess.Popen(
        "rabcd",
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        preexec_fn=os.setsid,
    )
    yield
    os.killpg(daemon.pid, signal.SIGTERM)

With the @pytest.fixture(scope="session", autouse=True) decorator, the rabcd daemon is started before the whole test session and stopped afterward. pytest handles failures during daemon start/stop properly.

pytest also provides module, class, and function scopes for test environment setup/cleanup.

By chaining pytest fixtures, we can split test fixtures into small parts and reuse them between test cases. For example:

@pytest.fixture
def setup_a():
    start_a()
    yield
    stop_a()

@pytest.fixture
def setup_b():
    start_b()
    yield
    stop_b()

@pytest.fixture
def setup_ab(setup_a, setup_b):
    yield

@pytest.fixture
def setup_ba(setup_b, setup_a):
    yield

In the above example code, the fixtures setup_ab and setup_ba are holding different orders of setup and cleanup:

  • setup_ab will run setup_a() and then setup_b() before test starts.
  • setup_ab will run stop_b() and then stop_a() after test ends.
  • setup_ba will run setup_b() and then setup_a() before test starts.
  • setup_ba will run stop_a() and then stop_b() after test ends.
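This ordering follows a last-in-first-out rule: setups run in declaration order, teardowns in reverse. The following plain-Python sketch (no pytest involved; the fixture names are made up) illustrates the same ordering with generators, which is also how pytest fixtures are written:

```python
# Plain-Python illustration of pytest's fixture ordering.
# Each "fixture" is a generator: code before yield is setup,
# code after yield is teardown.
order = []

def make_fixture(name):
    def fixture():
        order.append(f"start_{name}")
        yield
        order.append(f"stop_{name}")
    return fixture

setup_a = make_fixture("a")
setup_b = make_fixture("b")

def run_test(fixtures, test):
    active = [f() for f in fixtures]
    for gen in active:            # setups run in declaration order
        next(gen)
    test()
    for gen in reversed(active):  # teardowns run in reverse (LIFO)
        next(gen, None)

# Equivalent of setup_ab: a first, then b.
run_test([setup_a, setup_b], lambda: order.append("test"))
print(order)
# ['start_a', 'start_b', 'test', 'stop_b', 'stop_a']
```

Swapping the list to [setup_b, setup_a] reproduces the setup_ba ordering.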

Step 2. Minimize the use of the internet during the test

A single glitch in internet access on the CI platform can fail a test and waste our effort debugging that failure. So we take the following actions to minimize the use of the internet during tests:

  • Use a prebuild container with all required packages.
  • Use CI platform-specific way for test host setup.
  • Use rpm/dpkg instead of dnf/apt-get to avoid repository problems.

In the demo project, tests/container/Dockerfile.c9s-rabc-ci defines all the packages required to build the project and run the tests. Quay.io automatically rebuilds the container on every merged commit. Some CI platforms can even cache the container to speed up the test run.

Using containers also eliminates failures caused by upgrades of the test framework (e.g., pytest, tox).

Instead of using your own script to test the project against multiple Rust or Python versions, you can trust the CI platform-specific way, which is fast, well tested, and well maintained. For example, we can install Rust in GitHub Actions within one second via:

- name: Install Rust stable
  uses: actions-rs/toolchain@v1
  with:
    toolchain: stable
    override: true
    components: rustfmt, clippy

Step 3. Group test cases into tiers

Normally, I group test cases into these tiers:

  • tier1: Test cases for real use cases learned from project consumers. This tier is used for gating on building a downstream package or running downstream tests.

  • tier2: Test cases that do not reflect real use cases but increase code coverage.

  • slow: Slow test cases that require massive CPU and memory resources. This tier is used for running special test cases on dedicated test hosts.

You can put the @pytest.mark.tier1 decorator on your pytest test cases and invoke them via pytest -m tier1.
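As a minimal sketch of this tiering (the test names here are hypothetical, not from the demo project):

```python
# Hypothetical test file showing tier markers.
import pytest

@pytest.mark.tier1
def test_ping_pong():
    # A real use case learned from a project consumer.
    assert True

@pytest.mark.tier2
def test_internal_timer_rollover():
    # A coverage-oriented case, not a real-world scenario.
    assert True
```

Registering the markers in your pytest configuration (under `markers =`) avoids unknown-marker warnings; running `pytest -m tier1` then selects only the first test.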

Step 4. Enforce the merging rules

Once the CI system is up and running, developers with commit rights should enforce the following rules to maintain trust in the CI system:

  • A patch can only be merged with CI passing or an explained CI failure.
  • Each bug fix should contain a test case proving what it fixed. The patch reviewer should run the test case without the fix to reproduce the original problem. A unit test case is required when possible.
  • Each new feature should contain an integration test case explaining the common use case of this feature.
  • Fix the random failures as soon as possible.

Without enforcement, a CI system can lose this trust due to random failures and code coverage deficits.

In the demo git project, we have a CI system built up with these guidelines.

You may check the test results in the Checks tab of this pull request.

Tips for the CI system of a Rust project

Here are some tips for testing a system library written in Rust:

  • Run Rust unit tests for non-public APIs.
  • Check for memory leaks in C bindings written in Rust.
  • Collect pytest logs into test artifacts.

Rust Unit Test for non-public API

Unit test cases are supposed to test isolated function input and output without running the whole project in a real environment.

The official Rust documentation demonstrates how to run automated tests with cargo test. But that covers integration tests, so your test code can only access pub functions and structures.

Since we are building unit test cases, we also need to test pub(crate) functions and structures. We can place our test code as a mod of the current crate and mark test functions with #[test].

You may refer to the entire test code at src/lib/unit_tests in the GitHub repo or follow these steps:

In lib.rs, we have the unit test case folder included as a normal crate internal module:

mod unit_tests;

In unit_tests/mod.rs, we mark unit_tests/timer.rs as the test module:

#[cfg(test)]
mod timer;

Finally, in unit_tests/timer.rs, you can access pub(crate) structure using the code use crate::timer::RabcTimer.

Memory leak check for C binding written in Rust

I recommend a memory leak check since we use the unsafe keyword for Rust raw pointers in the C binding.

You may refer to the complete test code at src/clib of the github repo or follow this example.

At the top of the Makefile, we define make clib_check as:

.PHONY: clib_check
clib_check: $(CLIB_SO_DEV_DEBUG) $(CLIB_HEADER) $(DAEMON_DEBUG)
        $(eval TMPDIR := $(shell mktemp -d))
        cp $(CLIB_SO_DEV_DEBUG) $(TMPDIR)/$(CLIB_SO_FULL)
        ln -sfv $(CLIB_SO_FULL) $(TMPDIR)/$(CLIB_SO_MAN)
        ln -sfv $(CLIB_SO_FULL) $(TMPDIR)/$(CLIB_SO_DEV)
        cp $(CLIB_HEADER) $(TMPDIR)/$(shell basename $(CLIB_HEADER))
        cc -g -Wall -Wextra -L$(TMPDIR) -I$(TMPDIR) \
                -o $(TMPDIR)/rabc_test src/clib/tests/rabc_test.c -lrabc
        $(DAEMON_DEBUG) &
        LD_LIBRARY_PATH=$(TMPDIR) \
                valgrind --trace-children=yes --leak-check=full \
                --error-exitcode=1 \
                $(TMPDIR)/rabc_test 1>/dev/null
        rm -rf $(TMPDIR)
        pkill $(DAEMON_EXEC)

Generally, it links src/clib/tests/rabc_test.c against the Rust C binding stored in a temporary folder and runs valgrind for the memory check.

Pytest log collection

Many CI platforms support uploading test artifacts. Instead of outputting everything to the console, store debug logs in files and only print the necessary lines to the console. That way, we can identify what went wrong at first glance of the test console output and still investigate with the debug log in the test artifacts.

In pytest, we use these options:

pytest -vvv --log-file-level=DEBUG \
    --log-file-date-format='%Y-%m-%d %H:%M:%S' \
    --log-file-format='%(asctime)s %(filename)s:%(lineno)d %(levelname)s %(message)s' \
    --log-file=${TEST_ARTIFACTS_FOLDER}/rabc_test.log

This stores DEBUG-level and above logs in the ${TEST_ARTIFACTS_FOLDER}/rabc_test.log file instead of dumping them to the console. The file is then uploaded by .github/runtest.sh.

Summary

The Rust language is an elegant choice for a Linux system library. During my work on the nmstate and nispor projects, it saved us from worrying about thread and memory safety. A trustworthy CI system lets us embrace open source contributions from around the world with confidence.

Last updated: September 19, 2023