
Testing (pytest)

Additional info

https://docs.pytest.org/en/stable/explanation/anatomy.html

Initial setup

In your pyproject.toml file, configure where pytest finds your sources and tests:

[tool.pytest.ini_options]
# Your project files will be in `src/`
pythonpath = "src"
# Your test files can be found in `test/`
testpaths = ["test"]
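
The config above assumes a project layout like the following (package and file names are illustrative):

```
my-project/
├── pyproject.toml
├── src/
│   └── my_package/
│       └── __init__.py
└── test/
    └── test_my_package.py
```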

The pytest-env plugin allows you to set environment variables for the test session via pyproject.toml:

[tool.pytest_env]
STAGE = "test"

You can use pytest.fail() to force-fail a test during a run.

Run with the -s switch to see print output (pytest captures output by default and usually only shows it for failing tests).
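
A minimal sketch combining both points (function name and values are illustrative): the print only shows up with -s, and pytest.fail() ends the test with a failure message.

```python
import pytest

def test_debug_output():
    value = 2 + 2
    # Visible only with `pytest -s` (or when the test fails)
    print(f"value = {value}")
    if value != 4:
        # Force-fail the test with an explicit message
        pytest.fail(f"unexpected value: {value}")
```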

BaseClass with setup and teardown

from unittest import TestCase

import hypothesis.strategies as st
from hypothesis import given, settings
from loguru import logger

class TestHypothesis(TestCase):
    """
    Setup and teardown happen as described in this file.
    You can verify this by running
    uv run pytest -s test/test_hypothesis.py
    """

    class_number = 0
    class_number_pytest = 0
    method_number_pytest = 0
    method_number = 0
    example_number = 0

    @classmethod
    def setUpClass(cls):
        logger.info("1) Setting up class1")
        cls.class_number += 1

    @classmethod
    def tearDownClass(cls):
        logger.info("Teardown class1")
        cls.class_number -= 1

    @classmethod
    def setup_class(cls):
        logger.info("2) Setting up class2")
        cls.class_number_pytest += 1

    @classmethod
    def teardown_class(cls):
        logger.info("Teardown class2")
        cls.class_number_pytest -= 1

    def setup_method(self, _method):
        logger.info("3) Setting up method1")
        self.method_number_pytest += 1

    def teardown_method(self, _method):
        logger.info("Teardown method1")
        self.method_number_pytest -= 1

    @classmethod
    def setUp(cls):
        logger.info("4) Setting up method2")
        cls.method_number += 1

    @classmethod
    def tearDown(cls):
        logger.info("Teardown method2")
        cls.method_number -= 1

    @classmethod
    def setup_example(cls):
        logger.info("5) Setting up example")
        cls.example_number += 1

    @classmethod
    def teardown_example(cls, _token=None):
        logger.info("Teardown example")
        cls.example_number -= 1

    @settings(max_examples=2)
    @given(_number=st.integers())
    def test_hypothesis(self, _number: int):
        assert self.class_number == 1, self.class_number
        assert self.class_number_pytest == 1, self.class_number_pytest
        assert self.method_number_pytest == 1, self.method_number_pytest
        assert self.method_number == 1, self.method_number
        assert self.example_number == 1, self.example_number

Fixtures

I would recommend inheriting from base classes that have setup functions over global fixtures. Sure, you can mix in fixtures, but they are not extendable: with test classes you can extend the setup function in a subclass, while a fixture is fixed and has to be adjusted in every test function that uses it.

Additionally, when importing fixtures, linters and formatters tend to auto-remove the import because it looks unused. On top of that, you have to remember the exact fixture name every time.
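
A sketch of the extension argument (class and attribute names are illustrative): a subclass can extend the inherited setup with super(), which a plain fixture cannot do.

```python
class BaseTest:
    def setup_method(self, _method=None):
        # Base setup shared by all test classes
        self.state = {"db": "connected"}

class TestUsers(BaseTest):
    def setup_method(self, _method=None):
        # Extend, don't replace, the inherited setup
        super().setup_method(_method)
        self.state["users"] = ["alice"]

    def test_user_exists(self):
        assert "alice" in self.state["users"]
        assert self.state["db"] == "connected"
```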

Example fixture

from collections.abc import Iterator

import pytest
from litestar import Litestar
from litestar.testing import TestClient

@pytest.fixture(scope="function")
def test_client_db_reset() -> Iterator[TestClient[Litestar]]:
    # Run setup
    try:
        # `app` is your Litestar application instance
        with TestClient(app=app, raise_server_exceptions=True) as client:
            yield client
    finally:
        # Clean up after the test; this also runs when the test errored
        ...
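
The yield pattern above works like a context manager: everything before the yield is setup, the finally block is teardown and runs even if the test raised. A Litestar-free sketch of the mechanics:

```python
from contextlib import contextmanager

events = []

@contextmanager  # stands in for pytest's handling of a yield-fixture
def client_fixture():
    events.append("setup")
    try:
        yield "client"  # the test body runs while we are suspended here
    finally:
        events.append("teardown")  # runs even if the test raised

# Usage: the "test" runs inside the with-block
with client_fixture() as client:
    events.append(f"test ran with {client}")
```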

Asyncio and trio

Asyncio vs trio in pytest: what's the difference between the marks? @pytest.mark.asyncio is provided by the pytest-asyncio plugin, @pytest.mark.trio by pytest-trio. Use the mark that matches the async library your code runs on; it tells pytest how to drive the async test function.

@pytest.mark.asyncio

@pytest.mark.trio

class TestClass(BaseTestClass):
    @pytest.mark.trio
    # Note: the name needs the test_ prefix, otherwise pytest will not collect it
    async def test_add_link(self, page: Page) -> None:
        await fill_database_with_2_items_and_link()

Mock object

https://docs.pytest.org/en/stable/how-to/monkeypatch.html

You can use pytest's monkeypatching or unittest's object patch.

The following 3 examples do pretty much the same thing.

from unittest.mock import AsyncMock, patch

import pytest

def test_mytest():
    # Variant 1: patch in place; note this is never undone automatically
    pytest.MonkeyPatch().setattr(
        DiscordQuote,
        "raw",
        AsyncMock(return_value=[DiscordQuote(**data)]),
    )
    # Execute code to be tested

    # Variant 2: the context manager undoes the patch on exit
    with pytest.MonkeyPatch().context() as mp:
        mp.setattr(
            DiscordQuote,
            "raw",
            AsyncMock(return_value=[DiscordQuote(**data)]),
        )
        # Execute code to be tested
        ...

    # Variant 3: unittest's patch.object, also undone on exit
    with patch.object(
        DiscordQuote,
        "raw",
        AsyncMock(return_value=[DiscordQuote(**data)]),
    ):
        # Execute code to be tested
        ...
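
A self-contained sketch of the difference between a bare MonkeyPatch().setattr and the context-manager form (class and return values are illustrative): patches made inside the context are undone on exit.

```python
import pytest

class Quote:
    @staticmethod
    def raw():
        return "real"

def test_patch_is_undone():
    with pytest.MonkeyPatch().context() as mp:
        mp.setattr(Quote, "raw", staticmethod(lambda: "fake"))
        assert Quote.raw() == "fake"  # patched inside the context
    assert Quote.raw() == "real"      # automatically restored on exit

test_patch_is_undone()
```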

pytest.mark.parametrize

import io
from pathlib import Path

import pytest

# `extract_chapters` is the project function under test

@pytest.mark.parametrize(
    "book_relative_path, chapters_amount",
    [
        ("actual_books/frankenstein.epub", 31),
        ("actual_books/romeo-and-juliet.epub", 28),
        ("actual_books/the-war-of-the-worlds.epub", 29),
    ],
)
def test_parsing_real_epubs(book_relative_path: str, chapters_amount: int) -> None:
    book_path = Path(__file__).parent / book_relative_path
    book_bytes_io = io.BytesIO(book_path.read_bytes())
    chapters_extracted = extract_chapters(book_bytes_io)
    assert len(chapters_extracted) == chapters_amount

pytest.mark.xfail

Expect the test to fail. It will be marked as xpassed on success and xfailed on failure; the test suite still succeeds either way (unless strict=True is set, in which case an unexpected pass fails the suite).

import pytest

@pytest.mark.xfail(reason="This test may fail because it is flaky")
def test_mytest():
  pytest.fail()

pytest.mark.skip

If you want to skip a test, use this decorator.

import pytest

@pytest.mark.skip(reason="TODO Fix this test")
def test_mytest():
  pytest.fail()

Mocking httpx requests

This uses the pytest-httpx plugin, which provides the httpx_mock fixture.

import pytest
from pytest_httpx import HTTPXMock

@pytest.mark.asyncio
async def test_get_github_user_success(httpx_mock: HTTPXMock):
    httpx_mock.add_response(
        url="https://api.github.com/user",
        json={"id": 123, "login": "Abc"},
    )
    result = await provide_github_user("test_access_token")
    assert result is not None
    assert result.id == 123
    assert result.login == "Abc"

pytest.raises

Expect a function call to raise an error on specific input values. The match argument is a regular expression that is searched against the string representation of the exception.

import pytest

def test_mytest():
  with pytest.raises(ValueError, match='must be 0 or None'):
      raise ValueError("value must be 0 or None")

Hypothesis

To test a range of input values, you can use hypothesis, which generates inputs automatically and covers more cases than a hand-written parametrize list.

The debugger sometimes works with hypothesis and sometimes doesn't.

import pytest
from hypothesis import example, given, settings
from hypothesis import strategies as st

@settings(
  # Max amount of generated examples
  max_examples=100,
  # Max deadline in milliseconds per test example
  deadline=200,
)
@example(_day=1_000_000, _hour=0, _minute=0, _second=0, _message="a")
@example(_day=0, _hour=1_000_000, _minute=0, _second=0, _message="a")
@example(_day=0, _hour=0, _minute=1_000_000, _second=0, _message="a")
@example(_day=0, _hour=0, _minute=0, _second=1_000_000, _message="a")
@given(
    # Day
    st.integers(min_value=0, max_value=1_000_000),
    # Hour
    st.integers(min_value=0, max_value=1_000_000),
    # Minute
    st.integers(min_value=0, max_value=1_000_000),
    # Second
    st.integers(min_value=0, max_value=1_000_000),
    # Message
    st.text(min_size=1),
)
@pytest.mark.asyncio
async def test_parsing_date_and_time_from_message_success(_day, _hour, _minute, _second, _message: str):