Managing Test Metadata and Execution with Pytest Markers
Overview of Pytest Markers
The pytest.mark infrastructure provides a robust mechanism for applying metadata to your test functions. By attaching these markers, developers can control test execution flow, group tests logically, and handle specific conditional behaviors. While the API reference contains a comprehensive list of built-in markers, you can quickly view all available markers—including those defined by third-party plugins—by executing the command pytest --markers in your terminal.
Commonly used built-in markers include:
- usefixtures: Automatically applies specified fixtures to a test function or class.
- filterwarnings: Filters out specific warning messages during the execution of a test.
- skip: Unconditionally prevents a test from running.
- skipif: Skips a test only if a specific condition evaluates to true.
- xfail: Indicates that a test is expected to fail; if it does fail, it is reported as an "expected failure" rather than an error.
- parametrize: Allows a single test logic to be executed multiple times with distinct input parameters.
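As a sketch, the built-in markers above might be applied like this (the test names and reasons are illustrative):

```python
import sys

import pytest


@pytest.mark.skip(reason="demonstrates unconditional skipping")
def test_always_skipped():
    raise AssertionError("never executed")


@pytest.mark.skipif(sys.platform == "win32", reason="POSIX-only behavior")
def test_posix_only():
    assert not sys.platform.startswith("win")


@pytest.mark.xfail(reason="known floating-point quirk; reported as XFAIL")
def test_known_bug():
    assert 0.1 + 0.2 == 0.3  # fails due to binary float representation


@pytest.mark.parametrize("value, expected", [(2, 4), (3, 9), (4, 16)])
def test_square(value, expected):
    # Runs three times, once per (value, expected) pair.
    assert value ** 2 == expected
```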
Markers can be applied to individual functions, entire classes, or modules. They are frequently utilized by plugins and are essential for selecting subsets of tests to run using the -m command-line option (e.g., pytest -m "not slow"). Note that markers influence test collection and execution but do not directly affect the behavior of fixtures themselves.
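For instance, a marker can be applied at module or class level, and the marked subset can then be selected or excluded on the command line (the slow marker here is hypothetical and would need registering):

```python
import pytest

# Module-level: every test in this file carries the 'integration' marker.
pytestmark = pytest.mark.integration


@pytest.mark.slow
class TestReportGeneration:
    # Class-level marker: both tests below also carry the 'slow' marker.
    def test_summary(self):
        assert sum([1, 2, 3]) == 6

    def test_details(self):
        assert "report".upper() == "REPORT"
```

Running pytest -m "not slow" would then exclude both tests in TestReportGeneration, while pytest -m integration selects everything in this module.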
Registering Custom Markers
To avoid warnings and ensure clarity, custom markers should be explicitly registered in your project configuration. This can be done in a pytest.ini file or within pyproject.toml.
Using pytest.ini:
[pytest]
markers =
    integration: marks tests as integration tests (run with '-m "integration"')
    external_service: marks tests that rely on external APIs
Using pyproject.toml:
[tool.pytest.ini_options]
markers = [
    "integration: marks tests as integration tests",
    "external_service: marks tests that rely on external APIs",
]
Any text following the colon in the configuration serves as a description for the marker. Registering markers ensures they appear in the help output and prevents pytest from flagging them as unknown (e.g., as the result of a typo).
Alternatively, you can register markers programmatically using the pytest_configure hook:
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "environment(target_env): mark test to run only on specific environment"
    )
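A marker registered this way can then be read back at runtime. The sketch below (assuming a CURRENT_ENV environment variable, which is not part of pytest) pairs the environment marker with a pytest_runtest_setup hook in conftest.py:

```python
# conftest.py (sketch)
import os

import pytest


def pytest_runtest_setup(item):
    # Skip tests whose 'environment' marker doesn't match the active environment.
    marker = item.get_closest_marker("environment")
    if marker is not None:
        target_env = marker.args[0]
        if os.environ.get("CURRENT_ENV", "dev") != target_env:
            pytest.skip(f"requires environment {target_env!r}")


# test_deploy.py (sketch)
@pytest.mark.environment("staging")
def test_staging_only_feature():
    assert True
```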
Strict Marker Enforcement
By default, applying an unregistered marker (e.g., due to a typo) will trigger a warning. To enforce stricter code quality, you can configure Pytest to treat unknown markers as errors rather than warnings.
Activating the --strict-markers flag ensures that any marker not found in the registered list will cause the test run to fail immediately. You can enforce this behavior permanently by adding the flag to the addopts section of your configuration file.
Configuration example:
[pytest]
addopts = --strict-markers
markers =
    integration: marks tests as integration tests
    external_service: marks tests that rely on external APIs