Unit Testing in Python: Complete Guide with Examples
Python testing with pytest, TDD, mocking, and coverage
Unit testing ensures your Python code works correctly and continues to work as your project evolves. This comprehensive guide covers everything you need to know about unit testing in Python, from basic concepts to advanced techniques.

Why Unit Testing Matters
Unit testing provides numerous benefits for Python developers:
- Early Bug Detection: Catch bugs before they reach production
- Code Quality: Forces you to write modular, testable code
- Refactoring Confidence: Make changes safely knowing tests will catch regressions
- Documentation: Tests serve as executable documentation of how code should work
- Faster Development: Automated tests are faster than manual testing
- Better Design: Writing testable code leads to better architecture
Understanding Unit Testing Fundamentals
What is a Unit Test?
A unit test verifies the smallest testable part of an application (usually a function or method) in isolation. It should be:
- Fast: Run in milliseconds
- Isolated: Independent of other tests and external systems
- Repeatable: Produce same results every time
- Self-Validating: Pass or fail clearly without manual inspection
- Timely: Written before or alongside the code
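To make those properties concrete, here is a minimal sketch; the `is_even` function is a made-up example, not part of the code built later in this guide:

```python
# One tiny unit and one test exhibiting the properties above:
# fast (no I/O), isolated, repeatable, and self-validating.
def is_even(n: int) -> bool:
    """Return True when n is divisible by 2."""
    return n % 2 == 0

def test_is_even():
    assert is_even(4)
    assert not is_even(7)
```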
The Testing Pyramid
A healthy test suite follows the testing pyramid:
```
      /\
     /  \        E2E Tests (Few)
    /____\
   /      \      Integration Tests (Some)
  /________\
 /          \    Unit Tests (Many)
/____________\
```
Unit tests form the foundation - they’re numerous, fast, and provide quick feedback.
Python Testing Frameworks Compared
unittest: The Built-in Framework
Python’s standard library includes unittest, inspired by JUnit:
```python
import unittest

class TestCalculator(unittest.TestCase):
    def setUp(self):
        """Run before each test"""
        self.calc = Calculator()

    def tearDown(self):
        """Run after each test"""
        self.calc = None

    def test_addition(self):
        result = self.calc.add(2, 3)
        self.assertEqual(result, 5)

    def test_division_by_zero(self):
        with self.assertRaises(ZeroDivisionError):
            self.calc.divide(10, 0)

if __name__ == '__main__':
    unittest.main()
```
Pros:
- Built-in, no installation needed
- Good for developers familiar with xUnit frameworks
- Enterprise-friendly, well-established
Cons:
- Verbose syntax with boilerplate
- Requires classes for test organization
- Less flexible fixture management
pytest: The Modern Choice
pytest is the most popular third-party testing framework:
```python
import pytest

def test_addition():
    calc = Calculator()
    assert calc.add(2, 3) == 5

def test_division_by_zero():
    calc = Calculator()
    with pytest.raises(ZeroDivisionError):
        calc.divide(10, 0)

@pytest.mark.parametrize("a,b,expected", [
    (2, 3, 5),
    (-1, 1, 0),
    (0, 0, 0),
])
def test_addition_parametrized(a, b, expected):
    calc = Calculator()
    assert calc.add(a, b) == expected
```
Pros:
- Simple, Pythonic syntax
- Powerful fixture system
- Excellent plugin ecosystem
- Better error reporting
- Parametrized testing built-in
Cons:
- Requires installation
- Less familiar to developers from other languages
Installation:
```shell
pip install pytest pytest-cov pytest-mock
```
Writing Your First Unit Tests
Let’s build a simple example from scratch using Test-Driven Development (TDD). If you’re new to Python or need a quick reference for syntax and language features, check out our Python Cheatsheet for a comprehensive overview of Python fundamentals.
Example: String Utility Functions
Step 1: Write the Test First (Red)
Create test_string_utils.py:
```python
from string_utils import reverse_string, is_palindrome, count_vowels

def test_reverse_string():
    assert reverse_string("hello") == "olleh"
    assert reverse_string("") == ""
    assert reverse_string("a") == "a"

def test_is_palindrome():
    assert is_palindrome("racecar")
    assert not is_palindrome("hello")
    assert is_palindrome("")
    assert is_palindrome("A man a plan a canal Panama")

def test_count_vowels():
    assert count_vowels("hello") == 2
    assert count_vowels("HELLO") == 2
    assert count_vowels("xyz") == 0
    assert count_vowels("") == 0
```
Step 2: Write Minimal Code to Pass (Green)
Create string_utils.py:
```python
def reverse_string(s: str) -> str:
    """Reverse a string"""
    return s[::-1]

def is_palindrome(s: str) -> bool:
    """Check if string is palindrome (ignoring case and spaces)"""
    cleaned = ''.join(s.lower().split())
    return cleaned == cleaned[::-1]

def count_vowels(s: str) -> int:
    """Count vowels in string"""
    return sum(1 for char in s.lower() if char in 'aeiou')
```
Step 3: Run Tests
```shell
pytest test_string_utils.py -v
```
Output:
```
test_string_utils.py::test_reverse_string PASSED
test_string_utils.py::test_is_palindrome PASSED
test_string_utils.py::test_count_vowels PASSED
```
Advanced Testing Techniques
Using Fixtures for Test Setup
Fixtures provide reusable test setup and teardown:
```python
import pytest
from database import Database

@pytest.fixture
def db():
    """Create a test database"""
    database = Database(":memory:")
    database.create_tables()
    yield database    # Provide to test
    database.close()  # Cleanup after test

@pytest.fixture
def sample_users(db):
    """Add sample users to database"""
    db.add_user("Alice", "alice@example.com")
    db.add_user("Bob", "bob@example.com")
    return db

def test_get_user(sample_users):
    user = sample_users.get_user_by_email("alice@example.com")
    assert user.name == "Alice"

def test_user_count(sample_users):
    assert sample_users.count_users() == 2
```
Fixture Scopes
Control fixture lifetime with scopes:
```python
@pytest.fixture(scope="function")  # Default: run for each test
def func_fixture():
    return create_resource()

@pytest.fixture(scope="class")     # Once per test class
def class_fixture():
    return create_resource()

@pytest.fixture(scope="module")    # Once per module
def module_fixture():
    return create_resource()

@pytest.fixture(scope="session")   # Once per test session
def session_fixture():
    return create_expensive_resource()
```
Mocking External Dependencies
Use mocking to isolate code from external dependencies:
```python
from unittest.mock import Mock, patch
import requests

class WeatherService:
    def get_temperature(self, city):
        response = requests.get(f"https://api.weather.com/{city}")
        return response.json()["temp"]

# Test with mock
def test_get_temperature():
    service = WeatherService()
    # Mock the requests.get function
    with patch('requests.get') as mock_get:
        # Configure mock response
        mock_response = Mock()
        mock_response.json.return_value = {"temp": 72}
        mock_get.return_value = mock_response

        # Test
        temp = service.get_temperature("Boston")
        assert temp == 72

        # Verify the call
        mock_get.assert_called_once_with("https://api.weather.com/Boston")
```
Using pytest-mock Plugin
pytest-mock provides a cleaner syntax:
```python
def test_get_temperature(mocker):
    service = WeatherService()

    # Mock using pytest-mock's mocker fixture
    mock_response = mocker.Mock()
    mock_response.json.return_value = {"temp": 72}
    mocker.patch('requests.get', return_value=mock_response)

    temp = service.get_temperature("Boston")
    assert temp == 72
```
Parametrized Testing
Test multiple scenarios efficiently:
```python
import pytest

@pytest.mark.parametrize("text,expected", [
    ("", True),
    ("a", True),
    ("ab", False),
    ("aba", True),
    ("racecar", True),
    ("hello", False),
])
def test_is_palindrome_parametrized(text, expected):
    assert is_palindrome(text) == expected

@pytest.mark.parametrize("number,is_even", [
    (0, True),
    (1, False),
    (2, True),
    (-1, False),
    (-2, True),
])
def test_is_even(number, is_even):
    assert (number % 2 == 0) == is_even
```
Testing Exceptions and Error Handling
```python
import pytest

def divide(a, b):
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

def test_divide_by_zero():
    with pytest.raises(ValueError, match="Cannot divide by zero"):
        divide(10, 0)

def test_divide_by_zero_with_message():
    with pytest.raises(ValueError) as exc_info:
        divide(10, 0)
    assert "zero" in str(exc_info.value).lower()

# Testing that no exception is raised
def test_divide_success():
    result = divide(10, 2)
    assert result == 5.0
```
Testing Async Code
Testing asynchronous code is essential for modern Python applications, especially when working with APIs, databases, or AI services. Here's how to test async functions using the pytest-asyncio plugin (install it with `pip install pytest-asyncio`):
```python
import pytest
import asyncio

async def fetch_data(url):
    """Async function to fetch data"""
    await asyncio.sleep(0.1)  # Simulate API call
    return {"status": "success"}

@pytest.mark.asyncio
async def test_fetch_data():
    result = await fetch_data("https://api.example.com")
    assert result["status"] == "success"

@pytest.mark.asyncio
async def test_fetch_data_with_mock(mocker):
    # Mock the async function: patch it in the module where it is
    # looked up (here, this same test module)
    mock_fetch = mocker.AsyncMock(return_value={"status": "mocked"})
    mocker.patch(__name__ + '.fetch_data', mock_fetch)

    result = await fetch_data("https://api.example.com")
    assert result["status"] == "mocked"
```
For practical examples of testing async code with AI services, see our guide on integrating Ollama with Python, which includes testing strategies for LLM interactions.
Code Coverage
Measure how much of your code is tested:
Using pytest-cov
```shell
# Run tests with coverage
pytest --cov=myproject tests/

# Generate HTML report
pytest --cov=myproject --cov-report=html tests/

# Show missing lines
pytest --cov=myproject --cov-report=term-missing tests/
```
Coverage Configuration
Create .coveragerc:
```ini
[run]
source = myproject
omit =
    */tests/*
    */venv/*
    */__pycache__/*

[report]
exclude_lines =
    pragma: no cover
    def __repr__
    raise AssertionError
    raise NotImplementedError
    if __name__ == .__main__.:
    if TYPE_CHECKING:
    @abstractmethod
```
Coverage Best Practices
- Aim for 80%+ coverage for critical code paths
- Don’t obsess over 100% - focus on meaningful tests
- Test edge cases not just happy paths
- Exclude boilerplate from coverage reports
- Use coverage as a guide not a goal
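As a concrete example of excluding boilerplate, a `# pragma: no cover` comment (matching the `exclude_lines` pattern in the configuration above) tells coverage.py to skip a defensive branch. The `load_config` function below is a hypothetical illustration:

```python
def load_config(path):
    """Read a config file, falling back to an empty config if absent."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:  # pragma: no cover - defensive fallback
        return ""
```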
Test Organization and Project Structure
Recommended Structure
```
myproject/
├── myproject/
│   ├── __init__.py
│   ├── module1.py
│   ├── module2.py
│   └── utils.py
├── tests/
│   ├── __init__.py
│   ├── conftest.py          # Shared fixtures
│   ├── test_module1.py
│   ├── test_module2.py
│   ├── test_utils.py
│   └── integration/
│       ├── __init__.py
│       └── test_integration.py
├── pytest.ini
├── requirements.txt
└── requirements-dev.txt
```
conftest.py for Shared Fixtures
```python
# tests/conftest.py
import pytest
from myproject.database import Database

@pytest.fixture(scope="session")
def test_db():
    """Session-wide test database"""
    db = Database(":memory:")
    db.create_schema()
    yield db
    db.close()

@pytest.fixture
def clean_db(test_db):
    """Clean database for each test"""
    test_db.clear_all_tables()
    return test_db
```
pytest.ini Configuration
```ini
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts =
    -v
    --strict-markers
    --cov=myproject
    --cov-report=term-missing
    --cov-report=html
markers =
    slow: marks tests as slow
    integration: marks tests as integration tests
    unit: marks tests as unit tests
```
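Markers declared in pytest.ini are applied with `@pytest.mark.<name>` and selected at run time with `-m` (for example, `pytest -m unit` or `pytest -m "not slow"`). A minimal sketch, assuming the marker names from the configuration above:

```python
import pytest

@pytest.mark.slow
def test_bulk_report():
    # Deselect this test with: pytest -m "not slow"
    assert sum(range(1000)) == 499500

@pytest.mark.unit
def test_fast_path():
    # Select only unit-marked tests with: pytest -m unit
    assert 1 + 1 == 2
```

With `--strict-markers` in `addopts`, any marker not declared in the `markers` section raises an error, which catches typos like `@pytest.mark.unti`.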
Best Practices for Unit Testing
1. Follow the AAA Pattern
Arrange-Act-Assert makes tests clear:
```python
def test_user_creation():
    # Arrange
    username = "john_doe"
    email = "john@example.com"

    # Act
    user = User(username, email)

    # Assert
    assert user.username == username
    assert user.email == email
    assert user.is_active
```
2. One Assertion Per Test (Guideline, Not Rule)
```python
# Good: Focused test
def test_user_username():
    user = User("john_doe", "john@example.com")
    assert user.username == "john_doe"

def test_user_email():
    user = User("john_doe", "john@example.com")
    assert user.email == "john@example.com"

# Also acceptable: Related assertions
def test_user_creation():
    user = User("john_doe", "john@example.com")
    assert user.username == "john_doe"
    assert user.email == "john@example.com"
    assert isinstance(user.created_at, datetime)
```
3. Use Descriptive Test Names
```python
# Bad
def test_user():
    pass

# Good
def test_user_creation_with_valid_data():
    pass

def test_user_creation_fails_with_invalid_email():
    pass

def test_user_password_is_hashed_after_setting():
    pass
```
4. Test Edge Cases and Boundaries
```python
def test_age_validation():
    # Valid cases
    assert validate_age(0)
    assert validate_age(18)
    assert validate_age(120)

    # Boundary cases
    assert not validate_age(-1)
    assert not validate_age(121)

    # Edge cases
    with pytest.raises(TypeError):
        validate_age("18")
    with pytest.raises(TypeError):
        validate_age(None)
```
5. Keep Tests Independent
```python
# Bad: Tests depend on order
counter = 0

def test_increment():
    global counter
    counter += 1
    assert counter == 1

def test_increment_again():  # Fails if run alone
    global counter
    counter += 1
    assert counter == 2

# Good: Tests are independent
def test_increment():
    counter = Counter()
    counter.increment()
    assert counter.value == 1

def test_increment_multiple_times():
    counter = Counter()
    counter.increment()
    counter.increment()
    assert counter.value == 2
```
6. Don’t Test Implementation Details
```python
# Bad: Testing implementation
def test_sort_uses_quicksort():
    sorter = Sorter()
    assert sorter.algorithm == "quicksort"

# Good: Testing behavior
def test_sort_returns_sorted_list():
    sorter = Sorter()
    result = sorter.sort([3, 1, 2])
    assert result == [1, 2, 3]
```
7. Test Real-World Use Cases
When testing libraries that process or transform data, focus on real-world scenarios. For example, if you’re working with web scraping or content conversion, check out our guide on converting HTML to Markdown with Python, which includes testing strategies and benchmark comparisons for different conversion libraries.
Continuous Integration
GitHub Actions Example
```yaml
# .github/workflows/tests.yml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Quote the versions: unquoted 3.10 is parsed by YAML as 3.1
        python-version: ["3.9", "3.10", "3.11", "3.12"]

    steps:
      - uses: actions/checkout@v3

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install -r requirements-dev.txt

      - name: Run tests
        run: |
          pytest --cov=myproject --cov-report=xml

      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml
```
Testing Serverless Functions
When testing AWS Lambda functions or serverless applications, consider integration testing strategies alongside unit tests. Our guide on building a dual-mode AWS Lambda with Python and Terraform covers testing approaches for serverless Python applications, including how to test Lambda handlers, SQS consumers, and API Gateway integrations.
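At the unit level, Lambda handlers are plain Python functions, so no AWS infrastructure is needed: invoke the handler directly with a fabricated API Gateway-style event. The echo `handler` below is a hypothetical example, not code from the linked guide:

```python
import json

# Hypothetical Lambda handler - yours will differ
def handler(event, context):
    """Echo the request body back with a 200 status."""
    body = json.loads(event.get("body") or "{}")
    return {"statusCode": 200, "body": json.dumps({"echo": body})}

def test_handler_echoes_body():
    # Fabricate the event; context can be None for handlers that ignore it
    event = {"body": json.dumps({"msg": "hi"})}
    result = handler(event, None)
    assert result["statusCode"] == 200
    assert json.loads(result["body"]) == {"echo": {"msg": "hi"}}
```

The same pattern applies to SQS consumers: build a `Records` list by hand and assert on the handler's return value and side effects.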
Testing Best Practices Checklist
- Write tests before or alongside code (TDD)
- Keep tests fast (< 1 second per test)
- Make tests independent and isolated
- Use descriptive test names
- Follow AAA pattern (Arrange-Act-Assert)
- Test edge cases and error conditions
- Mock external dependencies
- Aim for 80%+ code coverage
- Run tests in CI/CD pipeline
- Review and refactor tests regularly
- Document complex test scenarios
- Use fixtures for common setup
- Parametrize similar tests
- Keep tests simple and readable
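Day to day, a few standard pytest flags help put this checklist into practice by running subsets, failing fast, and rerunning only what broke:

```shell
pytest -m unit                       # run only tests marked 'unit'
pytest -k "palindrome"               # select tests by name expression
pytest -x                            # stop at the first failure
pytest --lf                          # rerun only tests that failed last time
```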
Common Testing Patterns
Testing Classes
```python
class TestUser:
    @pytest.fixture
    def user(self):
        return User("john_doe", "john@example.com")

    def test_username(self, user):
        assert user.username == "john_doe"

    def test_email(self, user):
        assert user.email == "john@example.com"

    def test_full_name(self, user):
        user.first_name = "John"
        user.last_name = "Doe"
        assert user.full_name() == "John Doe"
```
Testing with Temporary Files
```python
import pytest

@pytest.fixture
def temp_file(tmp_path):
    """Create a temporary file using pytest's built-in tmp_path fixture"""
    file_path = tmp_path / "test_file.txt"
    file_path.write_text("test content")
    return file_path

def test_read_file(temp_file):
    content = temp_file.read_text()
    assert content == "test content"
```
Testing File Generation
When testing code that generates files (like PDFs, images, or documents), use temporary directories and verify file properties:
```python
@pytest.fixture
def temp_output_dir(tmp_path):
    """Provide a temporary output directory"""
    output_dir = tmp_path / "output"
    output_dir.mkdir()
    return output_dir

def test_pdf_generation(temp_output_dir):
    pdf_path = temp_output_dir / "output.pdf"
    generate_pdf(pdf_path, content="Test")

    assert pdf_path.exists()
    assert pdf_path.stat().st_size > 0
```
For comprehensive examples of testing PDF generation, see our guide on generating PDFs in Python, which covers testing strategies for various PDF libraries.
Testing with Monkeypatch
```python
import os
import module  # placeholder for the module under test

def test_environment_variable(monkeypatch):
    monkeypatch.setenv("API_KEY", "test_key_123")
    assert os.getenv("API_KEY") == "test_key_123"

def test_module_attribute(monkeypatch):
    monkeypatch.setattr("module.CONSTANT", 42)
    assert module.CONSTANT == 42
```
Useful Links and Resources
- pytest Documentation
- unittest Documentation
- pytest-cov Plugin
- pytest-mock Plugin
- unittest.mock Documentation
- Test Driven Development by Example - Kent Beck
- Python Testing with pytest - Brian Okken
Related Resources
Python Fundamentals and Best Practices
- Python Cheatsheet - Comprehensive reference for Python syntax, data structures, and common patterns
Testing Specific Python Use Cases
- Integrating Ollama with Python - Testing strategies for LLM integrations and async AI service calls
- Converting HTML to Markdown with Python - Testing approaches for web scraping and content conversion libraries
- Generating PDF in Python - Testing strategies for PDF generation and file output
Serverless and Cloud Testing
- Building a Dual-Mode AWS Lambda with Python and Terraform - Testing Lambda functions, SQS consumers, and API Gateway integrations
Unit testing is an essential skill for Python developers. Whether you choose unittest or pytest, the key is to write tests consistently, keep them maintainable, and integrate them into your development workflow. Start with simple tests, gradually adopt advanced techniques like mocking and fixtures, and use coverage tools to identify untested code.