
Testing (pytest)

Additional info

https://docs.pytest.org/en/stable/explanation/anatomy.html
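
The linked page splits a test into arrange, act and assert phases (plus optional cleanup). A minimal sketch of that structure, with a made-up Account class so the snippet runs on its own:

class Account:
    # Tiny stand-in class, only here so the example is self-contained
    def __init__(self, balance: int = 0) -> None:
        self.balance = balance

    def deposit(self, amount: int) -> None:
        self.balance += amount

def test_deposit_increases_balance():
    # Arrange: build the object under test
    account = Account(balance=0)
    # Act: perform the single behaviour being tested
    account.deposit(10)
    # Assert: check the observable outcome
    assert account.balance == 10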

Initial setup

In your pyproject.toml file, make sure to point pytest at your source and test directories:

[tool.pytest.ini_options]
# Your project files will be in `src/`
pythonpath = "src"
# Your test files can be found in `test/`
testpaths = ["test"]

pytest-env lets you set environment variables for the test session, also in pyproject.toml:

[tool.pytest_env]
STAGE = "test"

You can use pytest.fail() to force-fail a test during a run.
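
For example, to force-fail instead of silently running against the wrong environment (the stage check is made up, reusing the STAGE variable from above):

import os
import pytest

def test_refuses_to_run_against_unknown_stage():
    # Force-fail with a clear message instead of letting later tests misbehave
    stage = os.environ.get("STAGE", "test")
    if stage not in ("test", "dev"):
        pytest.fail(f"Tests must not run against stage {stage!r}")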

BaseClass with setup and teardown

from unittest import TestCase
import hypothesis.strategies as st
from hypothesis import given, settings

class TestBaseclass(TestCase):
    """Demonstrates unittest, pytest and hypothesis setup/teardown hooks side by side."""

    class_number = 0
    class_number_pytest = 0
    method_number = 0
    method_number_pytest = 0
    example_number = 0

    # unittest class-level hooks
    @classmethod
    def setUpClass(cls):
        cls.class_number += 1

    @classmethod
    def tearDownClass(cls):
        cls.class_number -= 1

    # pytest class-level hooks
    @classmethod
    def setup_class(cls):
        cls.class_number_pytest += 2

    @classmethod
    def teardown_class(cls):
        cls.class_number_pytest -= 2

    # unittest per-test hooks
    @classmethod
    def setUp(cls):
        cls.method_number += 3

    @classmethod
    def tearDown(cls):
        cls.method_number -= 3

    # pytest per-test hooks
    def setup_method(self, _method):
        self.method_number_pytest += 4

    def teardown_method(self, _method):
        self.method_number_pytest -= 4

    # hypothesis per-example hooks
    @classmethod
    def setup_example(cls):
        cls.example_number += 5

    @classmethod
    def teardown_example(cls, _token=None):
        cls.example_number -= 5

    @settings(max_examples=100)
    @given(_number=st.integers())
    def test_hypothesis(self, _number: int):
        assert self.class_number == 1, "a"
        assert self.class_number_pytest == 2, "b"
        assert self.method_number == 3, "c"
        assert self.method_number_pytest == 4, "d"
        assert self.example_number == 5, "e"

Every setup hook above has a matching teardown counterpart that undoes its change.

Fixtures

I would recommend inheriting from base classes with setup functions over global fixtures. You can mix fixtures in, but they are not extendable: with test classes you can extend the setup function in a subclass, while a fixture is fixed and has to be adjusted in every test function that uses it. Additionally, when you import a fixture, linters flag the import as unused and may auto-remove it, and you have to remember the exact fixture name every time.
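
For example, a subclass can extend an inherited setup_method (all names here are made up):

class BaseTest:
    def setup_method(self, _method):
        # Shared setup for every test class that inherits from BaseTest
        self.items = [1, 2, 3]

class TestWithExtraSetup(BaseTest):
    def setup_method(self, _method):
        # Extend the inherited setup instead of redefining it
        super().setup_method(_method)
        self.extra = "only needed in this class"

    def test_setup_was_extended(self):
        assert self.items == [1, 2, 3]
        assert self.extra == "only needed in this class"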

Example fixture

@pytest.fixture(scope="function")
def test_client() -> Iterator[TestClient[Litestar]]:
    # "app" is your Litestar application instance
    with TestClient(app=app) as client:
        yield client
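
A test can then request the fixture through its parameter name (the "/" route is made up):

def test_index_returns_ok(test_client) -> None:
    # pytest injects the fixture above via the matching parameter name
    response = test_client.get("/")
    assert response.status_code == 200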

Asyncio and trio

Asyncio vs. trio in pytest: what's the difference? It is just the mark you put on the async test:

@pytest.mark.asyncio

@pytest.mark.trio

class TestClass(BaseTestClass):
    @pytest.mark.trio
    async def test_add_link(self, page: Page) -> None:
        # The method must be named "test_*" so pytest collects it
        await fill_database_with_2_items_and_link()
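
The asyncio counterpart looks the same, just with the other mark (a minimal sketch, assuming pytest-asyncio is installed):

import asyncio
import pytest

@pytest.mark.asyncio
async def test_sleep_a_little():
    # Runs on an event loop provided by pytest-asyncio
    await asyncio.sleep(0.01)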

Mock object

pytest monkeypatch: https://docs.pytest.org/en/stable/how-to/monkeypatch.html

unittest.mock: https://docs.python.org/3/library/unittest.mock.html

You can use pytest's monkeypatch or unittest.mock's patch.object.

import functools
import pytest

def test_mytest():
    # Temporarily replace functools.partial with a dummy value; the original
    # is restored when the context manager exits
    with pytest.MonkeyPatch().context() as mp:
        mp.setattr(functools, "partial", 3)
        assert functools.partial == 3

from unittest.mock import AsyncMock, patch

def test_mytest():
    # Replace DiscordQuote.raw (your own attribute) with an AsyncMock
    # for the duration of the block
    with patch.object(DiscordQuote, "raw", AsyncMock()) as mocked_raw:
        assert isinstance(mocked_raw, AsyncMock)

pytest.mark.parametrize

import io
from pathlib import Path

import pytest

@pytest.mark.parametrize(
    "book_relative_path, chapters_amount",
    [
        ("actual_books/frankenstein.epub", 31),
        ("actual_books/romeo-and-juliet.epub", 28),
        ("actual_books/the-war-of-the-worlds.epub", 29),
    ],
)
def test_parsing_real_epubs(book_relative_path: str, chapters_amount: int) -> None:  # noqa: F811
    book_path = Path(__file__).parent / book_relative_path
    book_bytes_io = io.BytesIO(book_path.read_bytes())
    chapters_extracted = extract_chapters(book_bytes_io)
    assert len(chapters_extracted) == chapters_amount

pytest.mark.xfail

Expect the test to fail. It is reported as xfailed if it does fail and as xpassed if it unexpectedly passes; the test suite still succeeds either way.

import pytest

@pytest.mark.xfail(reason="This test may fail because it is flaky")
def test_mytest():
  pytest.fail()

pytest.mark.skip

If you want to skip a test, use this decorator.

import pytest

@pytest.mark.skip(reason="TODO Fix this test")
def test_mytest():
  pytest.fail()

Mocking httpx requests

import pytest
from pytest_httpx import HTTPXMock

@pytest.mark.asyncio
async def test_get_github_user_success(httpx_mock: HTTPXMock):
    httpx_mock.add_response(
        url="https://api.github.com/user",
        json={"id": 123, "login": "Abc"},
    )
    result = await provide_github_user("test_access_token")
    assert result is not None
    assert result.id == 123
    assert result.login == "Abc"

pytest.raises

Expect the code under test to raise a specific error; the match argument additionally checks the exception message against a regular expression.

import pytest

def test_mytest():
  with pytest.raises(ValueError, match='must be 0 or None'):
      raise ValueError("value must be 0 or None")

Hypothesis

To test a whole range of input values, you can use hypothesis, which generates far more cases than a hand-written parametrize list.

The debugger only seems to work intermittently with hypothesis-driven tests.

import pytest
import hypothesis.strategies as st
from hypothesis import example, given, settings

@settings(
  # Max amount of generated examples
  max_examples=100,
  # Max deadline in milliseconds per test example
  deadline=200,
)
@example(_day=1_000_000, _hour=0, _minute=0, _second=0, _message="a")
@example(_day=0, _hour=1_000_000, _minute=0, _second=0, _message="a")
@example(_day=0, _hour=0, _minute=1_000_000, _second=0, _message="a")
@example(_day=0, _hour=0, _minute=0, _second=1_000_000, _message="a")
@given(
    # Day
    st.integers(min_value=0, max_value=1_000_000),
    # Hour
    st.integers(min_value=0, max_value=1_000_000),
    # Minute
    st.integers(min_value=0, max_value=1_000_000),
    # Second
    st.integers(min_value=0, max_value=1_000_000),
    # Message
    st.text(min_size=1),
)
@pytest.mark.asyncio
async def test_parsing_date_and_time_from_message_success(_day, _hour, _minute, _second, _message: str):