pytest makes testing Python code straightforward. Here's how to use it effectively.

Getting Started

pip install pytest

Write a test:

# test_math.py
def add(a, b):
    return a + b
 
def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

Run it:

pytest                    # All tests
pytest test_math.py       # Specific file
pytest -v                 # Verbose output
pytest -x                 # Stop on first failure

Test Discovery

pytest finds tests automatically:

  • Files named test_*.py or *_test.py
  • Functions starting with test_
  • Classes starting with Test (no __init__)

A typical layout:

project/
├── src/
│   └── calculator.py
└── tests/
    ├── test_calculator.py
    └── test_utils.py
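
Class-based discovery in action — a minimal sketch (TestRectangle and its helper are invented for illustration):

```python
# test_shapes.py — methods prefixed test_ inside a Test* class are collected;
# pytest creates a fresh instance per test, so the class must not define __init__
class TestRectangle:
    def area(self, w, h):
        # plain helper: no test_ prefix, so pytest won't collect it as a test
        return w * h

    def test_area(self):
        assert self.area(3, 4) == 12

    def test_degenerate_rectangle(self):
        assert self.area(0, 5) == 0
```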

Fixtures

Fixtures provide reusable test setup:

import pytest
 
@pytest.fixture
def sample_user():
    return {"name": "Alice", "email": "alice@example.com"}
 
def test_user_name(sample_user):
    assert sample_user["name"] == "Alice"
 
def test_user_email(sample_user):
    assert "example.com" in sample_user["email"]

Fixture Scopes

Control when fixtures are created/destroyed:

@pytest.fixture(scope="function")  # Default: per test
def fresh_db():
    return create_database()
 
@pytest.fixture(scope="module")    # Once per file
def shared_client():
    return APIClient()
 
@pytest.fixture(scope="session")   # Once per test run
def expensive_resource():
    return load_large_dataset()

Fixture Cleanup

Use yield for teardown:

import os
 
@pytest.fixture
def temp_file():
    path = create_temp_file()
    yield path
    os.remove(path)  # Runs after test completes
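
For temporary files specifically, pytest's built-in tmp_path fixture already handles setup and cleanup — it injects a fresh per-test directory as a pathlib.Path:

```python
def test_write_and_read(tmp_path):
    # tmp_path is a per-test temporary directory (a pathlib.Path);
    # pytest prunes old ones between runs, so no teardown code is needed
    target = tmp_path / "notes.txt"
    target.write_text("hello")
    assert target.read_text() == "hello"
```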

Parametrize

Test multiple inputs without repetition:

import pytest
 
@pytest.mark.parametrize("text,expected", [
    ("hello", 5),
    ("", 0),
    ("world", 5),
])
def test_string_length(text, expected):
    assert len(text) == expected
 
@pytest.mark.parametrize("a,b,result", [
    (2, 3, 5),
    (-1, 1, 0),
    (0, 0, 0),
])
def test_add(a, b, result):
    assert add(a, b) == result
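
Stacking parametrize decorators generates every combination — here 2 × 3 = 6 test cases (the function under test is invented for the example):

```python
import pytest

@pytest.mark.parametrize("base", [2, 10])
@pytest.mark.parametrize("exp", [0, 1, 3])
def test_power(base, exp):
    # one generated test per (base, exp) pair
    assert base ** exp == pow(base, exp)
```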

Testing Exceptions

import pytest
 
def divide(a, b):
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b
 
def test_divide_by_zero():
    with pytest.raises(ValueError) as exc_info:
        divide(10, 0)
    assert "zero" in str(exc_info.value)
 
def test_divide_by_zero_simple():
    with pytest.raises(ValueError, match="zero"):
        divide(10, 0)

Mocking

Use unittest.mock or pytest-mock:

from unittest.mock import Mock, patch
 
def fetch_data(client):
    return client.get("/api/data")
 
def test_fetch_data():
    mock_client = Mock()
    mock_client.get.return_value = {"status": "ok"}
    
    result = fetch_data(mock_client)
    
    assert result == {"status": "ok"}
    mock_client.get.assert_called_once_with("/api/data")
 
# Patching modules
@patch("myapp.requests.get")
def test_api_call(mock_get):
    mock_get.return_value.json.return_value = {"data": []}
    result = call_external_api()
    assert result == {"data": []}
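
myapp and call_external_api above are placeholders. A self-contained variant using side_effect to script a failure followed by a success (fetch_with_retry is invented for this example):

```python
from unittest.mock import Mock

def fetch_with_retry(client, attempts=2):
    # invented helper: retry on ConnectionError, re-raise on the final attempt
    for i in range(attempts):
        try:
            return client.get("/api/data")
        except ConnectionError:
            if i == attempts - 1:
                raise

def test_retry_recovers_from_transient_error():
    client = Mock()
    # side_effect as a list: first call raises, second returns the value
    client.get.side_effect = [ConnectionError("down"), {"status": "ok"}]
    assert fetch_with_retry(client) == {"status": "ok"}
    assert client.get.call_count == 2
```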

Markers

Tag and filter tests:

import sys
import pytest
 
@pytest.mark.slow
def test_large_dataset():
    ...
 
@pytest.mark.integration
def test_database_connection():
    ...
 
@pytest.mark.skip(reason="Not implemented yet")
def test_future_feature():
    ...
 
@pytest.mark.skipif(sys.platform == "win32", reason="Unix only")
def test_unix_specific():
    ...
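
There is also xfail for tests that are expected to fail: they still run, but a failure reports as "xfail" instead of breaking the suite (and an unexpected pass reports as "xpass"):

```python
import pytest

@pytest.mark.xfail(reason="known float-rounding surprise")
def test_round_half_up():
    # fails: 2.675 is stored as 2.67499999..., so round() gives 2.67
    assert round(2.675, 2) == 2.68
```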

Run by marker:

pytest -m slow           # Only slow tests
pytest -m "not slow"     # Skip slow tests
pytest -m integration    # Only integration tests

Register custom markers in pytest.ini:

[pytest]
markers =
    slow: marks tests as slow
    integration: marks integration tests

conftest.py

Share fixtures across files:

# tests/conftest.py
import pytest
 
@pytest.fixture
def db_connection():
    conn = create_connection()
    yield conn
    conn.close()
 
@pytest.fixture
def auth_client(db_connection):
    return AuthenticatedClient(db_connection)

All tests in tests/ can use these fixtures automatically.

Test Organization

tests/
├── conftest.py           # Shared fixtures
├── unit/
│   ├── test_models.py
│   └── test_utils.py
├── integration/
│   ├── test_api.py
│   └── test_database.py
└── e2e/
    └── test_workflows.py

Useful Options

pytest -v                 # Verbose
pytest -x                 # Stop on first failure
pytest --lf               # Run last failed tests
pytest --ff               # Run failed first, then rest
pytest -k "user"          # Tests matching "user"
pytest --cov=src          # Coverage report (needs pytest-cov)
pytest -n auto            # Parallel execution (needs pytest-xdist)

Configuration

pyproject.toml:

[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = "test_*.py"
python_functions = "test_*"
addopts = "-v --strict-markers"
markers = [
    "slow: marks tests as slow",
    "integration: integration tests",
]

Quick Tips

  1. Keep tests fast — Slow tests don't get run
  2. One assertion per concept — Makes failures clear
  3. Test behavior, not implementation — Tests shouldn't break on refactors
  4. Use descriptive names — test_user_creation_fails_without_email
  5. Isolate tests — No shared state between tests
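
Tip 3 in practice — assert on observable behavior so the test survives refactors (the Cart class is invented for illustration):

```python
class Cart:
    def __init__(self):
        self._items = []          # internal detail; tests shouldn't reach in

    def add(self, price):
        self._items.append(price)

    def total(self):
        return sum(self._items)

def test_total_reflects_added_items():
    # checks only the public result — still passes if _items becomes
    # a dict, a database row, or anything else
    cart = Cart()
    cart.add(3)
    cart.add(4)
    assert cart.total() == 7
```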

pytest's simplicity hides a lot of power. Start with basic assertions, add fixtures as patterns emerge, and use parametrize to reduce repetition.
