
Tests

Tests are ways to assert that the logic contained in our functions produces expected results. Testing involves setting up data, running the code we want to test, and asserting that the result is what we expect. At their core, tests in Rust are simply functions with the #[test] attribute. These functions are executed by Cargo with the cargo test command. Running cargo test runs multiple test phases: unit tests, integration tests, and doc tests. As of early 2024, Cargo stops at the first failing phase and exits with a nonzero status. This means that if any unit tests fail, Cargo will not run the integration or doc tests.
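Doc tests deserve a quick illustration since they never appear again below. As a minimal sketch, assuming the library crate is named test_project and exposes add at its root (matching the integration test example later in this piece), a documented function with a runnable doc test might look like this:

/// Adds two numbers.
///
/// The fenced example below is compiled and run by `cargo test` as a doc test.
///
/// ```
/// let sum = test_project::add(2, 2);
/// assert_eq!(sum, 4);
/// ```
pub fn add(left: usize, right: usize) -> usize {
    left + right
}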

It is possible to write test functions all over our code, but that's a good way to lose track of our tests and cramp our code. Separating tests into dedicated test modules keeps them organized without cluttering our main() binaries or littering lib.rs. This also supports test-driven development: we write code, test it easily in an organized way, and refine it against testable expectations.

Unit testing

Unit tests are small and often target granular functions or modules. Unit tests help ensure that each element (unit) is working by itself. Appropriately scoped unit tests can help pinpoint issues when problems arise as our codebase expands. Unit tests are also generally meant to be internal (for “public” codebases) and may test private functions or interfaces.

Unit tests typically exist as (sub)modules in whatever .rs file we're testing. These files live in the src directory of whatever bin/lib crate we're building. By convention, unit tests are organized into a test module in the same file as the code under test, and that module is annotated with #[cfg(test)]. This cfg (configuration) attribute tells the compiler to skip the test module when we compile with other commands such as cargo run, cargo build, cargo build --release, etc., which keeps test code out of the final artifact. Let's look at a toy example for some code that handles arithmetic. Let's pretend this is in a library crate as src/math.rs.

// Library functions we need to test
pub fn add(left: usize, right: usize) -> usize {
    left + right
}

// The tests themselves, organized into a tests module
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_works() {
        let result = add(2, 2);
        assert_eq!(result, 4);
    }
}

Integration testing

Integration tests are meant to test how external code integrates with our code, so they should only exercise our public API. Because each integration test is compiled as a separate crate, it cannot call private functions at all; it sees only what the library exposes publicly. Compared to unit tests, integration tests tend to cover more comprehensive features that may touch more than one module at a time. Because binary crates aren't meant to expose reusable functionality as an API, a project that wants integration tests typically moves its testable functionality into a src/lib.rs library crate and keeps a thin binary. For the most part, integration testing is done for library crates.

Integration tests only run if all of our unit tests pass. Integration tests live in a tests/ directory at the root of the project. We can create as many test files there as we like, with any names, and each top-level file in tests/ is compiled as its own separate crate. Files in subdirectories of tests/ are not compiled as separate test crates, which lets us group shared helper code using the same module structure as elsewhere in our source code (see the sketch after the integration test example below). Each integration test file is run sequentially and reported in the console as a separate test run. If one file fails, the test suite exits without running the remainder of the tests unless we pass --no-fail-fast.

Let's revisit the structure from the unit test example. In that example, we had a tests module in our src/math.rs file. To extend it, let's add a tests directory as a sibling of our src directory. Within it we can place any number of integration test files; this example shows only one.

.
├── Cargo.lock
├── Cargo.toml
├── src
│   ├── lib.rs
│   └── math.rs
└── tests
    └── integration_test.rs

Because our integration_test.rs file lives outside the src directory, we need to bring the library crate into scope with a use statement that names it. We also don't need a #[cfg(test)] attribute or a test module in an integration test: since the file lives outside the project's source code, Cargo only compiles it when we run cargo test.

use test_project;

#[test]
fn it_works() {
    let result = test_project::add(2, 2);
    assert_eq!(result, 4);
}

Test logic

Tests essentially boil down to boolean checks. It is common to use the assert_eq! (assert equal) or assert_ne! (assert not equal) macros to check a condition. If the condition holds, nothing happens and the test passes; if it does not, the macro calls panic! and the test fails. These macros also print why the test failed, showing the "left" and "right" values of the assertion. The values being compared must implement the PartialEq and Debug traits. The built-in types implement these traits, and adding them to our custom types can be as simple as adding #[derive(PartialEq, Debug)], as sketched after the examples below.

#[test]
fn equal_to() {
    let result = add(2, 2);
    assert_eq!(result, 4);
}

#[test]
fn not_equal_to() {
    let result = add(2, 2);
    assert_ne!(result, 5);
}
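To illustrate the derive point above, here is a minimal sketch comparing values of a custom type (the Point struct is hypothetical and not part of the earlier example):

// Deriving PartialEq and Debug lets assert_eq!/assert_ne! compare
// values of our custom type and print them on failure
#[derive(PartialEq, Debug)]
struct Point {
    x: i32,
    y: i32,
}

#[test]
fn points_are_equal() {
    let a = Point { x: 1, y: 2 };
    let b = Point { x: 1, y: 2 };
    assert_eq!(a, b);
}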

Notice that we list result before the expected value. Unlike in some languages, the order of the assertion arguments doesn't matter, which is why they are called "left" and "right" instead of "expected" and "actual".

---- tests::not_equal_to stdout ----
thread 'tests::not_equal_to' panicked at src/lib.rs:27:9:
assertion `left != right` failed
left: 4
right: 4
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

It is also possible to use the assert! macro with a comparison operator, such as == (equal to) or != (not equal to), or any expression that evaluates to a boolean, to accomplish a similar result. Note that the assert! macro does not print the assertion or its operands on its own. It does accept format!-style arguments, though, so we can pass a parameterized string to print a custom message; this works with all of the assert macros.

#[test]
fn using_assert() {
    let a = 2;
    let b = 2;
    let result = add(a, b);
    assert!(
        result == 5,
        "Expected {}, actual {}\nParams: {}, {}",
        5, result, a, b
    );
}
---- tests::using_assert stdout ----
thread 'tests::using_assert' panicked at src/lib.rs:35:9:
Expected 5, actual 4
Params: 2, 2

Error Handling

Most tests assert that code returns the values we expect. We can also assert that our code fails when it's supposed to by using the #[should_panic] attribute. On its own this is imprecise, since the code could panic for a reason we're not expecting; adding an expected argument to the attribute makes the test more precise by requiring the panic message to contain that string.

#[test]
#[should_panic(expected = "some value")]
fn this_code_should_fail() {
    ...
}
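As a concrete sketch (the divide function below is hypothetical, not part of the example library), a panicking code path can be tested like this:

// Hypothetical function that panics on a zero divisor
pub fn divide(numerator: usize, denominator: usize) -> usize {
    if denominator == 0 {
        panic!("denominator must not be zero");
    }
    numerator / denominator
}

#[test]
#[should_panic(expected = "must not be zero")]
fn dividing_by_zero_panics() {
    divide(10, 0);
}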

Rust test functions can also return Result<T, E> instead of panicking: returning Ok(()) passes the test and returning an Err fails it.

#[test]
fn error_handling() -> Result<(), String> {
    let a = 2;
    let b = 3;
    let result = add(a, b);
    if result == 4 {
        Ok(())
    } else {
        // format! already returns a String, so no extra conversion is needed
        Err(format!("{} plus {} does not equal {}", a, b, result))
    }
}
failures:
---- tests::error_handling stdout ----
Error: "2 plus 3 does not equal 5"

It's not possible to use the #[should_panic] attribute on tests that return Result<T, E>. To assert that an operation returns an Err variant, don't use the question mark operator on the Result<T, E> value; instead, use assert!(value.is_err()).
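A minimal sketch of that pattern, using the standard library's parse method (which returns a Result) purely for illustration:

#[test]
fn returns_an_error() {
    // parse::<usize>() returns a Result; a non-numeric string yields Err
    let value = "not a number".parse::<usize>();
    assert!(value.is_err());
}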

Test Setup

When we run $ cargo test we can pass arguments both to Cargo itself and to the test binary it produces. Arguments meant for the test binary go after a -- separator.

Test threading

By default tests run in parallel to make efficient use of resources. To tell the binary to execute tests on a single thread (in sequence rather than in parallel) we can specify the number of threads as $ cargo test -- --test-threads=1.

Optional test output

If we want to see the stdout output (think println!) of successful tests, pass --show-output to the test binary: $ cargo test -- --show-output.

Test filtering

If we want to filter which tests run, we can name them in the command, e.g. $ cargo test error_handling. This runs only tests whose names match the given string. The match is a substring match, so all tests whose names contain the filter run at once; for example, if we have a bunch of tests that start with "add_" we can simply run $ cargo test add to run them all. We can also run a single integration test file using the --test option with the name of the file/module, e.g. $ cargo test --test int_test_one.

Alternatively, we can add the #[ignore] attribute to skip a particular test. It's like commenting out the test! Instead of removing the attribute, we can run ignored tests by passing the flag to the binary: $ cargo test -- --ignored. Running ALL tests, whether they're annotated as ignored or not, can be done with $ cargo test -- --include-ignored.
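A short sketch of the attribute in use (the expensive_test name is just illustrative):

#[test]
#[ignore] // skipped by default; run with `cargo test -- --ignored`
fn expensive_test() {
    // a long-running setup or computation would go here
    assert_eq!(add(2, 2), 4);
}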

Tests in Workspaces

This section relies on information from the Project Structures section. Running cargo test from the root of a workspace runs the tests for every crate in that workspace. To run tests for only one package, add the -p flag with the name of the package you wish to test.

$ cargo test -p my_module
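For context, a minimal sketch of the workspace manifest this command assumes (the member names are hypothetical):

# Workspace-level Cargo.toml; each member crate has its own tests
[workspace]
members = [
    "my_module",
    "another_module",
]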