Testing (pytest)
Additional info
https://docs.pytest.org/en/stable/explanation/anatomy.html
Initial setup
In your pyproject.toml file, make sure to set the source and test paths:
[tool.pytest.ini_options]
# Your project files will be in `src/`
pythonpath = "src"
# Your test files can be found in `test/`
testpaths = ["test"]
The pytest-env plugin allows you to set environment variables for the test session in pyproject.toml:
[tool.pytest_env]
STAGE = "test"
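With this in place, tests can read the variable like any other environment variable. A minimal sketch:

import os

def test_stage_is_set():
    # STAGE is set by pytest-env for the test session
    assert os.environ["STAGE"] == "test"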
You can use pytest.fail() to force-fail a test during a run.
Run with the -s switch to see print output, which is usually suppressed for passing tests.
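A minimal sketch combining both tips:

import pytest

def test_force_fail():
    print("visible with: uv run pytest -s")
    pytest.fail("Force-failing this test")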
BaseClass with setup and teardown
from unittest import TestCase
import hypothesis.strategies as st
from hypothesis import given, settings
from loguru import logger
class TestHypothesis(TestCase):
"""
Setup and teardown happen as described in this file.
You can verify this by running
uv run pytest -s test/test_hypothesis.py
"""
class_number = 0
class_number_pytest = 0
method_number_pytest = 0
method_number = 0
example_number = 0
@classmethod
def setUpClass(cls):
logger.info("1) Setting up class1")
cls.class_number += 1
@classmethod
def tearDownClass(cls):
logger.info("Teardown class1")
cls.class_number -= 1
@classmethod
def setup_class(cls):
logger.info("2) Setting up class2")
cls.class_number_pytest += 1
@classmethod
def teardown_class(cls):
logger.info("Teardown class2")
cls.class_number_pytest -= 1
def setup_method(self, _method):
logger.info("3) Setting up method1")
self.method_number_pytest += 1
def teardown_method(self, _method):
logger.info("Teardown method1")
self.method_number_pytest -= 1
@classmethod
def setUp(cls):
logger.info("4) Setting up method2")
cls.method_number += 1
@classmethod
def tearDown(cls):
logger.info("Teardown method2")
cls.method_number -= 1
@classmethod
def setup_example(cls):
logger.info("5) Setting up example")
cls.example_number += 1
@classmethod
def teardown_example(cls, _token=None):
logger.info("Teardown example")
cls.example_number -= 1
@settings(max_examples=2)
@given(_number=st.integers())
def test_hypothesis(self, _number: int):
assert self.class_number == 1, self.class_number
assert self.class_number_pytest == 1, self.class_number_pytest
assert self.method_number_pytest == 1, self.method_number_pytest
assert self.method_number == 1, self.method_number
assert self.example_number == 1, self.example_number
Fixtures
I would recommend inheriting from base classes that have setup functions over global fixtures. You can mix in fixtures, but they are not extendable: with test classes you can extend the setup function in a subclass, while a fixture is fixed and needs to be adjusted in each test function that uses it.
Additionally, when importing fixtures, the import tends to get auto-removed because it is flagged as unused, and you have to remember the exact fixture name every time.
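A minimal sketch of extending a base class's setup (class names and the fake database dict are made up for illustration):

class BaseDbTest:
    def setup_method(self, _method):
        # Shared setup, e.g. prepare a test database
        self.db = {"users": []}

class TestUsers(BaseDbTest):
    def setup_method(self, _method):
        # Extend the inherited setup instead of redefining a fixture
        super().setup_method(_method)
        self.db["users"].append("alice")

    def test_user_present(self):
        assert "alice" in self.db["users"]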
Example fixture
from collections.abc import Iterator

import pytest
from litestar import Litestar
from litestar.testing import TestClient

# `app` is your Litestar application instance

@pytest.fixture(scope="function")
def test_client_db_reset() -> Iterator[TestClient[Litestar]]:
    # Run setup
    try:
        with TestClient(app=app, raise_server_exceptions=True) as client:
            yield client
        # Clean up after test
    finally:
        # Clean up on error or after test
        ...
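A test then requests the fixture by its function name (assuming the app serves a `/` route):

def test_index(test_client_db_reset: TestClient[Litestar]) -> None:
    response = test_client_db_reset.get("/")
    assert response.status_code == 200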
Asyncio and trio
If you have async functions that need to be tested, you can use @pytest.mark.asyncio (from the pytest-asyncio plugin) or @pytest.mark.trio (from the pytest-trio plugin).
import pytest

@pytest.mark.asyncio
async def test_mytest():
    ...

@pytest.mark.trio
async def test_mytest():
    ...
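A self-contained sketch that actually awaits something (requires the pytest-asyncio plugin):

import asyncio

import pytest

@pytest.mark.asyncio
async def test_sleep_result():
    # asyncio.sleep can pass a result through, which keeps the example tiny
    result = await asyncio.sleep(0, result=42)
    assert result == 42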
Mock object
https://docs.pytest.org/en/stable/how-to/monkeypatch.html
You can use pytest's monkeypatching or unittest's patch.object. The following three examples do pretty much the same thing.
from unittest.mock import AsyncMock, patch

import pytest

def test_mytest():
    # 1) Patch directly; this patch is not undone automatically
    pytest.MonkeyPatch().setattr(
        DiscordQuote,
        "raw",
        AsyncMock(return_value=[DiscordQuote(**data)]),
    )
    # Execute code to be tested

    # 2) Patch inside a context manager; undone automatically on exit
    with pytest.MonkeyPatch().context() as mp:
        mp.setattr(
            DiscordQuote,
            "raw",
            AsyncMock(return_value=[DiscordQuote(**data)]),
        )
        # Execute code to be tested

    # 3) unittest's patch.object as a context manager
    with patch.object(
        DiscordQuote,
        "raw",
        AsyncMock(return_value=[DiscordQuote(**data)]),
    ):
        ...  # Execute code to be tested
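Inside test functions you usually don't need to instantiate MonkeyPatch yourself: pytest injects a monkeypatch fixture that undoes all patches after the test. A self-contained sketch using an environment variable instead of the project-specific class above:

import os

import pytest

def test_with_monkeypatch_fixture(monkeypatch: pytest.MonkeyPatch):
    monkeypatch.setenv("STAGE", "test")
    assert os.environ["STAGE"] == "test"
    # The patch is reverted automatically when the test finishes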
pytest.mark.xfail
Expect the test to fail. It will be marked as xpassed on success and xfailed if it failed; the test suite still succeeds either way.
import pytest
@pytest.mark.xfail(reason="This test may fail because it is flaky")
def test_mytest():
pytest.fail()
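If an unexpected pass should fail the suite instead of being reported as xpassed, xfail supports strict mode:

import pytest

@pytest.mark.xfail(reason="Known bug", strict=True)
def test_known_bug():
    # With strict=True, this test passing would fail the suite
    pytest.fail()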
pytest.mark.skip
If you want to skip a test, use this decorator.
import pytest
@pytest.mark.skip(reason="TODO Fix this test")
def test_mytest():
pytest.fail()
Mocking httpx requests
When making HTTP requests with httpx, the HTTPXMock fixture from the pytest-httpx plugin can return fake data for specific URLs.
import pytest
from pytest_httpx import HTTPXMock
@pytest.mark.asyncio
async def test_get_github_user_success(httpx_mock: HTTPXMock):
httpx_mock.add_response(
url="https://api.github.com/user",
json={"id": 123, "login": "Abc"},
)
result = await provide_github_user("test_access_token")
assert result is not None
assert result.id == 123
assert result.login == "Abc"
pytest.raises
Expect a function call to raise an error on specific input values.
import pytest
def test_mytest():
with pytest.raises(ValueError, match='must be 0 or None'):
raise ValueError("value must be 0 or None")
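If you also need to inspect the raised exception, the context manager exposes it:

import pytest

def test_inspect_exception():
    with pytest.raises(ValueError) as excinfo:
        raise ValueError("value must be 0 or None")
    assert "0 or None" in str(excinfo.value)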
pytest.mark.parametrize
parametrize can be used to test multiple input parameters while reusing the same test function.
It seems to work well with VS Code, where you can pick the specific parameter set to debug.
The test data can even live in a separate file to keep test and data apart, so the data can be reused in multiple tests, for example for setting up databases with the same values (see the sketch after the examples below).
from datetime import datetime, timedelta

import pytest

testdata = [
(datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1)),
(datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1)),
]
@pytest.mark.parametrize("a, b, expected", testdata)
def test_mytest(a, b, expected):
    # Test code
    assert a - b == expected
import pytest
@pytest.mark.parametrize(
"book_relative_path, chapters_amount",
[
("actual_books/frankenstein.epub", 31),
("actual_books/romeo-and-juliet.epub", 28),
("actual_books/the-war-of-the-worlds.epub", 29),
],
)
def test_parsing_real_epubs(book_relative_path: str, chapters_amount: int) -> None:
...
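A sketch of keeping the test data in a separate module (file and variable names are hypothetical; with the default import mode and no __init__.py, pytest puts the test directory on sys.path so the import resolves):

# test/dates_testdata.py: hypothetical module holding shared test data
from datetime import datetime, timedelta

DATE_DIFF_CASES = [
    (datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1)),
    (datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1)),
]

# test/test_dates.py: the test imports the shared cases
import pytest

from dates_testdata import DATE_DIFF_CASES

@pytest.mark.parametrize("a, b, expected", DATE_DIFF_CASES)
def test_date_difference(a, b, expected):
    assert a - b == expected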
Hypothesis
To test a range of input values, you can use hypothesis, which covers more cases than parametrize, for example strings with various unicode characters or iterables of varying lengths.
On failure, it tries to shrink to a minimal example that breaks the test.
The debugger seems to work only sometimes.
import hypothesis.strategies as st
import pytest
from hypothesis import example, given, settings

@settings(
# Max amount of generated examples
max_examples=100,
# Max deadline in milliseconds per test example
deadline=200,
)
@example(_day=1_000_000, _hour=0, _minute=0, _second=0, _message="a")
@example(_day=0, _hour=1_000_000, _minute=0, _second=0, _message="a")
@example(_day=0, _hour=0, _minute=1_000_000, _second=0, _message="a")
@example(_day=0, _hour=0, _minute=0, _second=1_000_000, _message="a")
@given(
# Day
st.integers(min_value=0, max_value=1_000_000),
# Hour
st.integers(min_value=0, max_value=1_000_000),
# Minute
st.integers(min_value=0, max_value=1_000_000),
# Second
st.integers(min_value=0, max_value=1_000_000),
# Message
st.text(min_size=1),
)
@pytest.mark.asyncio
async def test_parsing_date_and_time_from_message_success(_day, _hour, _minute, _second, _message: str):