Fading Coder

One Final Commit for the Last Sprint


Testing in Rust: A Comprehensive Guide

Tech · May 13

Testing is rarely straightforward, and anyone who has spent time in software development knows this well.

As the author notes, there's a famous quote from Edsger W. Dijkstra's 1972 essay The Humble Programmer: "Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence."

Note: Edsger W. Dijkstra received the Turing Award in 1972.

Most programming books don't emphasize testing this early in the learning process. It's refreshing to see this topic covered so prominently. Given that Dijkstra—an algorithmic pioneer—placed such importance on testing, we should take it seriously too. Rust, with its strict compiler and ownership model, demands even more attention to testing practices.

Any developer who considers testing trivial or unnecessary is being unprofessional. This misconception is common among newcomers and those with gaps in their knowledge.

This chapter covers substantial ground. Walking through the examples will give you a solid foundation. Rust's testing is powered by the cargo test command, which offers numerous options—only a subset will be covered here. Getting comfortable with these options takes time and practice.

Everything presented here is introductory material. However, mastering these basics equips you to handle most testing scenarios. Advanced testing strategies are a separate discipline, independent of any particular programming language.

Writing Unit Tests

Let's follow the book's approach by embedding unit tests directly in the library code.

The standard convention involves creating a tests module and placing your test code within it.

Key Concepts:

  • #[cfg(test)] — A conditional compilation attribute. The enclosed code is compiled only when testing and excluded from normal (non-test) builds.
  • #[test] — Marks a function as a test case that the test runner should execute.
  • #[should_panic] — Indicates the test expects the function to panic. If no panic occurs, the test fails.
  • Macros: assert_eq!, assert_ne!, assert!
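Before the full examples later in this article, here is a minimal sketch of how these attributes and macros fit together (the `double` function is hypothetical, added only for illustration):

```rust
// A hypothetical function to test.
pub fn double(x: i32) -> i32 {
    x * 2
}

#[cfg(test)] // compiled only under `cargo test`
mod tests {
    use super::double;

    #[test]
    fn doubles_a_number() {
        assert_eq!(double(3), 6);
        assert_ne!(double(3), 7);
        assert!(double(0) == 0, "double(0) should be 0, got {}", double(0));
    }

    #[test]
    #[should_panic] // this test passes only if the body panics
    fn panicking_test_passes() {
        panic!("intentional panic");
    }
}
```

Running `cargo test` on a crate containing this code compiles the `tests` module and executes both functions.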

Controlling Test Execution

Key considerations:

  1. How to control parallelism
  2. How to target specific tests
  3. How to ignore or selectively run ignored tests

Controlling Parallelism

By default, tests run in parallel. You can restrict this with:

cargo test -- --test-threads=n

Targeting Tests

cargo test xxx   # Filter tests by name (substring matching)
cargo test --test xxx  # Run the integration test target xxx (e.g. tests/xxx.rs; exact name)

Ignoring Tests

#[ignore]  # Mark a test function to be skipped during normal runs
cargo test -- --ignored  # Run only tests marked with #[ignore]

Organizing Tests

Unit Test Structure

Follow the established pattern: use #[cfg(test)] to mark test code and #[test] to mark test functions. The module is conventionally named tests.

Integration Test Structure

Create a tests directory at the same level as src. Place any number of .rs files inside—each file becomes a separate test crate. These crates can access public modules from your library.

Shared Modules in Integration Tests

Integration tests use the traditional module definition pattern: directory plus mod.rs. For example, if you have tests/haha/mod.rs, other test files can import it as mod haha;.

Complete Examples

The following example uses a library project with three files:

  • src/lib.rs — Library source with unit tests
  • tests/itest1.rs — Integration test file
  • tests/haha/mod.rs — Integration test module

src/lib.rs

pub fn add(left: u64, right: u64) -> u64 {
    left + right
}

#[cfg(test)]
mod tests {
    use super::add;

    #[test]
    fn test_addition_basic() {
        let result = add(2, 2);
        assert_eq!(result, 4);
    }

    #[test]
    fn test_addition_with_message() {
        let result = add(1, 3);
        assert!(result == 4, "Expected 1+3=4, but got {}", result);
    }

    #[test]
    fn test_addition_inequality() {
        let result = add(1, 4);
        println!("This prints when --show-output is enabled: 1+4={}", result);
        assert_ne!(result, 6);
    }

    #[test]
    #[should_panic]
    fn test_expected_panic() {
        let result = add(1, 4);
        assert_eq!(result, 6);
    }

    #[test]
    #[should_panic]
    fn test_index_out_of_bounds() {
        let mut data = vec![1, 2];
        data[2] = 3;
    }

    #[test]
    fn test_result_type() -> Result<(), String> {
        // Returning Ok(()) passes the test; returning Err(..) fails it.
        if 1 + 1 == 2 {
            Ok(())
        } else {
            Err("condition failed".to_string())
        }
    }

    #[test]
    #[ignore]
    fn test_skipped_by_default() {
        assert_eq!(1, 2, "1 != {}", 2);
    }
}

tests/itest1.rs

use lzfmath::*;
mod haha;

#[test]
fn integration_add() {
    haha::haha();
    assert_eq!(add(1, 2), 3);
}

tests/haha/mod.rs

pub fn haha() {
    println!("haha!");
}

Running Unit Tests

Standard unit test execution produces output showing which tests passed or failed.

Limiting Threads and Viewing Output

Run with cargo test -- --test-threads=1 --show-output to execute sequentially and display printed output.

Filtering by Test Name

Use cargo test test_addition to run only tests whose names contain "test_addition" (substring matching).

Running Only Ignored Tests

Use cargo test -- --ignored to execute only tests marked with #[ignore].

Running Integration Tests

Run cargo test to execute integration tests (and their shared modules) alongside unit tests, or cargo test --test itest1 to run a single integration test file.

Summary

Testing in Rust encompasses three core areas:

1. Writing Tests

Compilation Attributes:

  • #[cfg(test)] — Conditional compilation directive ensuring code is included only during testing
  • #[test] — Marks functions as executable test cases
  • #[should_panic] — Expects panic; test fails if no panic occurs

Assertion Macros:

  • assert!(expr) — Asserts expression evaluates to true
  • assert_eq!(a, b) — Asserts two values are equal
  • assert_ne!(a, b) — Asserts two values are not equal
  • assert_matches!(expression, pattern) — Asserts the expression matches the pattern; useful for enums and structs (currently unstable, nightly-only; on stable Rust, assert!(matches!(...)) achieves the same)

Assertion macros accept optional formatting arguments for custom failure messages.
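A short sketch of both points, using only stable Rust (the values here are arbitrary illustrations):

```rust
fn main() {
    // Extra arguments after the expected values become a
    // `format!`-style failure message.
    let result = 2 + 2;
    assert_eq!(result, 4, "expected 2 + 2 == 4, got {}", result);

    // Stable alternative to the unstable `assert_matches!`:
    // combine `assert!` with the `matches!` macro.
    let value: Option<i32> = Some(5);
    assert!(
        matches!(value, Some(n) if n > 0),
        "expected a positive Some, got {:?}",
        value
    );
}
```

The custom message is only evaluated and printed when the assertion fails, so formatting cost is not paid on the passing path.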

Result Type:

  • Result<T, E> — Enum representing success or failure. Returns Ok(()) for passing tests, Err(message) for failures.

2. Controlling Tests

Default Behavior:

  • cargo test — Compiles in test mode and runs the test binary
  • Tests run concurrently by default, with output suppressed for readability

Control Options:

  • Parallelism: --test-threads=1 limits to single-threaded execution
  • Output: --show-output displays stdout/stderr from tests
  • Filtering: cargo test <name> runs tests matching the name; cargo test --test <file> targets specific integration test files

Ignoring Tests:

  • #[ignore] attribute marks tests to skip during normal execution
  • cargo test -- --ignored runs only ignored tests

Examples:

cargo test result -- --test-threads=2 --show-output
cargo test -- --ignored

3. Organizing Tests

Unit Tests:

  • Use #[cfg(test)] and #[test] attributes
  • Conventionally place in a tests module within the source file

Integration Tests:

  • Create a tests directory at project root (same level as src)
  • Each .rs file in tests/ becomes a separate test crate
  • Run with cargo test (includes unit tests) or cargo test --test <filename> for specific files

Shared Modules in Integration Tests:

  • Create a subdirectory under tests/, e.g., tests/haha/
  • Add a mod.rs file containing shared code
  • Other test files can import with mod haha;
  • Note: Integration tests use the older module convention (directory + mod.rs)

Why Many Projects Have Both src/main.rs and src/lib.rs:

This structure simplifies testing. main.rs is a binary target, making its modules difficult to import in integration tests. lib.rs is a library target, so its public API is readily accessible. By maintaining both, you get a runnable binary while keeping code testable.
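A minimal sketch of that split, shown here as one listing for brevity (in a real project these are two files, and main.rs would import the library by crate name, e.g. `use lzfmath::add;` as in the examples above):

```rust
// --- src/lib.rs: the library target; its public API is what
//     unit tests and integration tests exercise. ---
pub fn add(left: u64, right: u64) -> u64 {
    left + right
}

// --- src/main.rs: a thin binary wrapper over the library. ---
fn main() {
    println!("1 + 2 = {}", add(1, 2));
}
```

Keeping main.rs thin means almost all logic lives in the library, where it is reachable from tests.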
