Six months ago, I thought type hints were just extra noise. Now I can't imagine writing Python without them. Here's what I learned.

Why Bother?

I resisted typing for a while. Python's dynamic nature felt like freedom. Then I spent four hours debugging a function that expected a dict but received a list.

Type hints would have caught that instantly.

# Before: What does this return? Who knows!
def get_users(ids):
    ...
 
# After: Crystal clear
def get_users(ids: list[int]) -> list[User]:
    ...

Basic Type Hints

Start with the fundamentals.

# Variables
name: str = "Owen"
age: int = 28
is_active: bool = True
balance: float = 99.99
 
# Function signatures
def greet(name: str) -> str:
    return f"Hello, {name}"
 
def add(a: int, b: int) -> int:
    return a + b
 
def process() -> None:
    print("No return value")

Collection Types

Python 3.9+ lets you use built-in collection types directly:

# Lists
names: list[str] = ["Alice", "Bob"]
numbers: list[int] = [1, 2, 3]
 
# Dicts
user: dict[str, int] = {"age": 28, "score": 100}
cache: dict[str, list[str]] = {}
 
# Sets and tuples
unique_ids: set[int] = {1, 2, 3}
point: tuple[float, float] = (1.0, 2.5)
record: tuple[str, int, bool] = ("Owen", 28, True)
 
# Variable-length tuple
values: tuple[int, ...] = (1, 2, 3, 4, 5)

For Python 3.8 and earlier, import from typing:

from typing import List, Dict, Set, Tuple
 
names: List[str] = ["Alice", "Bob"]

Optional and Union

The two annotations you'll reach for most once you're past the basics.

Optional: Might Be None

from typing import Optional
 
def find_user(user_id: int) -> Optional[User]:
    """Returns User or None if not found."""
    user = db.get(user_id)
    return user  # Could be None
 
# Using the result
user = find_user(123)
if user is not None:
    print(user.name)  # Type checker knows user is User here

Modern Python (3.10+) prefers the | syntax:

def find_user(user_id: int) -> User | None:
    ...

Union: One of Several Types

from typing import Union
 
# Old style
def process(value: Union[str, int]) -> str:
    return str(value)
 
# Python 3.10+ style
def process(value: str | int) -> str:
    return str(value)
 
# Common pattern: accepting multiple input types
def normalize_id(id_value: str | int) -> int:
    if isinstance(id_value, str):
        return int(id_value)
    return id_value

Don't Overuse Union

If a signature reads Union[str, int, float, list, dict, None], the problem is the design, not the annotation. Model the real shapes of your data instead.
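A grab-bag union like that can usually be replaced by a small, explicit record. A minimal sketch, using a hypothetical Event dataclass (the names here are illustrative, not from any real API):

```python
from dataclasses import dataclass

# Instead of: def handle(payload: str | int | float | list | dict | None) -> str
# ...model the one shape you actually expect.
@dataclass
class Event:
    name: str
    value: float
    tags: list[str]

def handle(event: Event) -> str:
    # One well-defined type instead of six loosely related ones
    return f"{event.name}={event.value} {','.join(event.tags)}"

print(handle(Event("latency", 12.5, ["api", "p99"])))  # latency=12.5 api,p99
```

The type checker now verifies every field access, which a six-way union never could.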

Generics

Generics let you write reusable, type-safe code.

Using Generic Types

from typing import TypeVar
 
T = TypeVar("T")
 
def first(items: list[T]) -> T | None:
    """Return first item or None."""
    return items[0] if items else None
 
# Type checker infers the return type
user = first([user1, user2])  # type: User | None
name = first(["Alice", "Bob"])  # type: str | None

Constrained TypeVars

from typing import TypeVar
 
# Only allow specific types
Number = TypeVar("Number", int, float)
 
def add(a: Number, b: Number) -> Number:
    return a + b
 
add(1, 2)      # OK: int
add(1.5, 2.5)  # OK: float
add("a", "b")  # Error: str not allowed

Generic Classes

from typing import Generic, TypeVar
 
T = TypeVar("T")
 
class Stack(Generic[T]):
    def __init__(self) -> None:
        self._items: list[T] = []
    
    def push(self, item: T) -> None:
        self._items.append(item)
    
    def pop(self) -> T:
        return self._items.pop()
    
    def peek(self) -> T | None:
        return self._items[-1] if self._items else None
 
# Usage
int_stack: Stack[int] = Stack()
int_stack.push(1)
int_stack.push(2)
value = int_stack.pop()  # type: int
 
str_stack: Stack[str] = Stack()
str_stack.push("hello")

TypedDict

Perfect for dictionary structures with known keys.

from typing import TypedDict
 
class UserDict(TypedDict):
    name: str
    email: str
    age: int
 
def create_user(data: UserDict) -> User:
    return User(
        name=data["name"],
        email=data["email"],
        age=data["age"]
    )
 
# Type checker validates keys
user_data: UserDict = {
    "name": "Owen",
    "email": "owen@example.com",
    "age": 28
}
 
# Error: missing required keys
bad_data: UserDict = {"name": "Owen"}  # Missing email, age

Optional Keys

from typing import TypedDict, NotRequired  # NotRequired: Python 3.11+ (or typing_extensions)
 
class UserDict(TypedDict):
    name: str
    email: str
    age: NotRequired[int]  # Optional
 
# Or use total=False for all optional
class PartialUser(TypedDict, total=False):
    name: str
    email: str
    age: int

Nested TypedDicts

class Address(TypedDict):
    street: str
    city: str
    country: str
 
class UserProfile(TypedDict):
    name: str
    email: str
    address: Address
 
profile: UserProfile = {
    "name": "Owen",
    "email": "owen@example.com",
    "address": {
        "street": "123 Main St",
        "city": "NYC",
        "country": "USA"
    }
}

When to Use TypedDict vs dataclass/Pydantic

  • TypedDict: JSON from APIs, config dicts, when you need actual dicts
  • dataclass: Internal data structures, simple containers
  • Pydantic: Validation required, parsing external data
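The first two bullets side by side, as a sketch: the same record as a TypedDict (stays a plain dict, e.g. straight from json.loads) versus a dataclass (a real object for internal use). The names are made up for illustration:

```python
from dataclasses import dataclass
from typing import TypedDict

class UserRow(TypedDict):  # a plain dict at runtime, just with checked keys
    name: str
    age: int

@dataclass
class UserObj:  # a real class: identity, methods, defaults
    name: str
    age: int

row: UserRow = {"name": "Owen", "age": 28}  # e.g. parsed JSON
obj = UserObj(**row)                        # promote to an object at the boundary

print(type(row).__name__, obj.name)  # dict Owen
```

A common pattern is exactly this boundary: TypedDict at the edges where data is dict-shaped, dataclass (or Pydantic, when you need validation) once it's inside your code.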

Protocol

Structural typing. Define behavior, not inheritance.

from typing import Protocol
 
class Readable(Protocol):
    def read(self) -> str:
        ...
 
class Closeable(Protocol):
    def close(self) -> None:
        ...
 
class ReadableCloseable(Readable, Closeable, Protocol):
    pass
 
def process(source: Readable) -> str:
    return source.read()
 
# Any class with a read() method works
class StringReader:
    def __init__(self, data: str):
        self.data = data
    
    def read(self) -> str:
        return self.data
 
# Works! StringReader has read() method
reader = StringReader("hello")
result = process(reader)  # Type checks correctly

Real-World Protocol Example

from typing import Protocol, runtime_checkable

import requests  # assumed available; connect() below stands in for your DB driver
 
@runtime_checkable
class DataSource(Protocol):
    def fetch(self, query: str) -> list[dict]:
        ...
    
    def count(self) -> int:
        ...
 
class DatabaseSource:
    def __init__(self, conn_string: str):
        self.conn = connect(conn_string)
    
    def fetch(self, query: str) -> list[dict]:
        return self.conn.execute(query).fetchall()
    
    def count(self) -> int:
        return self.conn.execute("SELECT COUNT(*) FROM data").scalar()
 
class APISource:
    def __init__(self, base_url: str):
        self.base_url = base_url
    
    def fetch(self, query: str) -> list[dict]:
        return requests.get(f"{self.base_url}/search?q={query}").json()
    
    def count(self) -> int:
        return requests.get(f"{self.base_url}/count").json()["count"]
 
def aggregate(sources: list[DataSource]) -> list[dict]:
    results = []
    for source in sources:
        results.extend(source.fetch("*"))
    return results
 
# Both work despite no shared base class
db = DatabaseSource("postgresql://...")
api = APISource("https://api.example.com")
aggregate([db, api])  # Type checks!

Protocol vs ABC

Use Protocol when:

  • You don't control the classes
  • You want structural (duck) typing
  • You're interfacing with third-party code

Use ABC when:

  • You want to enforce inheritance
  • You need shared implementation
  • You control the hierarchy
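For contrast with the Protocol examples above, a minimal ABC sketch (hypothetical names): subclasses must inherit and must implement fetch(), but they get count_label() as shared implementation for free.

```python
from abc import ABC, abstractmethod

class Source(ABC):
    @abstractmethod
    def fetch(self) -> list[dict]: ...

    def count_label(self) -> str:  # shared implementation, inherited by all subclasses
        return f"{len(self.fetch())} rows"

class MemorySource(Source):
    def __init__(self, rows: list[dict]) -> None:
        self.rows = rows

    def fetch(self) -> list[dict]:
        return self.rows

src = MemorySource([{"id": 1}, {"id": 2}])
print(src.count_label())  # 2 rows
```

Unlike a Protocol, Source() itself cannot be instantiated (TypeError), and a class that merely happens to have fetch() won't satisfy the type unless it inherits from Source.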

Callable Types

For functions as arguments.

from typing import Callable
 
# Function that takes int and returns str
Formatter = Callable[[int], str]
 
def apply_format(value: int, formatter: Formatter) -> str:
    return formatter(value)
 
# Lambda works
result = apply_format(42, lambda x: f"Value: {x}")
 
# Named function works
def hex_format(n: int) -> str:
    return hex(n)
 
result = apply_format(42, hex_format)

Complex Callable Signatures

from typing import Callable, ParamSpec, TypeVar  # ParamSpec: Python 3.10+
 
P = ParamSpec("P")
R = TypeVar("R")
 
def retry(func: Callable[P, R]) -> Callable[P, R]:
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        for _ in range(3):
            try:
                return func(*args, **kwargs)
            except Exception:
                continue
        raise RuntimeError("All retries failed")
    return wrapper
 
@retry
def fetch_data(url: str, timeout: int = 30) -> dict:
    ...

mypy Configuration

Start strict, get the benefits.

pyproject.toml

[tool.mypy]
python_version = "3.11"
strict = true  # already implies most of the flags below; listed here for explicitness
warn_return_any = true
warn_unused_ignores = true
disallow_untyped_defs = true
disallow_incomplete_defs = true
check_untyped_defs = true
disallow_untyped_decorators = true
no_implicit_optional = true
warn_redundant_casts = true
warn_unused_configs = true
 
# Per-module overrides
[[tool.mypy.overrides]]
module = "tests.*"
disallow_untyped_defs = false
 
[[tool.mypy.overrides]]
module = "migrations.*"
ignore_errors = true

Common mypy Options Explained

# Start with these
strict = true                    # Enable all strict checks
warn_return_any = true          # Warn on returning Any
disallow_untyped_defs = true    # Require types on all functions
 
# Helpful warnings
warn_unused_ignores = true       # Catch obsolete # type: ignore
warn_redundant_casts = true      # Find unnecessary casts
 
# For legacy code
disallow_untyped_defs = false    # Allow untyped functions
check_untyped_defs = true        # Still check inside them

Running mypy

# Check specific file
mypy src/main.py
 
# Check directory
mypy src/
 
# With config
mypy --config-file pyproject.toml src/
 
# Full re-check from scratch (ignores the incremental cache — slower, but clean)
mypy --no-incremental src/

Handling Type Errors

# Ignore specific line (with reason!)
result = sketchy_function()  # type: ignore[no-untyped-call]
 
# Cast when you know better than mypy
from typing import cast
data = cast(dict[str, int], json.loads(response))
 
# Assert narrowing
assert isinstance(value, str)
# Now mypy knows value is str
 
# Reveal type for debugging
reveal_type(some_variable)  # mypy prints the inferred type; remove before running (or use typing.reveal_type on 3.11+)

Gradual Typing Strategy

Don't type everything at once. Here's my approach:

Phase 1: New Code Only

Add types to all new functions. Don't touch old code yet.

# Old code: leave alone
def process_data(data):
    ...
 
# New code: fully typed
def validate_input(data: dict[str, Any]) -> ValidationResult:
    ...

Phase 2: Public Interfaces

Type your module's public API. Others depend on these.

# Type public functions in __init__.py
from .core import process_user, UserConfig
 
__all__ = ["process_user", "UserConfig"]
 
# Add types to exported functions
def process_user(user_id: int, config: UserConfig) -> ProcessResult:
    ...

Phase 3: Critical Paths

Type code that handles money, security, data integrity.

def transfer_funds(
    from_account: AccountId,
    to_account: AccountId,
    amount: Decimal,
    currency: Currency
) -> TransferResult:
    """Money movement - types are critical here."""
    ...

Phase 4: Everything Else

Gradually type remaining code during normal refactoring.

py.typed Marker

When your package is fully typed:

my_package/
├── __init__.py
├── core.py
└── py.typed  # Empty file, signals "we're typed"
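One packaging gotcha: build backends won't ship py.typed automatically just because the file exists. With setuptools, for example, you can declare it as package data (a sketch — swap in your own package name):

```toml
[tool.setuptools.package-data]
my_package = ["py.typed"]
```

Without this, installed copies of your package lose the marker and downstream mypy treats it as untyped.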

Common Mistakes

Mistakes I made so you don't have to.

Mistake 1: Using dict Instead of Mapping

# Bad: Forces exact dict type
def process(data: dict[str, int]) -> int:
    return sum(data.values())
 
# Good: Accepts any mapping
from collections.abc import Mapping  # typing.Mapping is deprecated since 3.9
 
def process(data: Mapping[str, int]) -> int:
    return sum(data.values())
 
# Now works with dict, OrderedDict, defaultdict, etc.

Mistake 2: Mutable Default Arguments

# Bad: Type-checks but buggy
def add_item(item: str, items: list[str] = []) -> list[str]:
    items.append(item)
    return items
 
# Good: Use None default
def add_item(item: str, items: list[str] | None = None) -> list[str]:
    if items is None:
        items = []
    items.append(item)
    return items

Mistake 3: Optional Everywhere

# Bad: Everything optional
def process(
    name: str | None = None,
    age: int | None = None,
    email: str | None = None
) -> User | None:
    ...
 
# Good: Required params required
def process(name: str, email: str, age: int | None = None) -> User:
    ...

Mistake 4: Ignoring Type Narrowing

# Bad: Repeated isinstance checks
def format_value(value: str | int) -> str:
    if isinstance(value, str):
        return value.upper()
    if isinstance(value, int):  # Redundant!
        return str(value)
    return ""  # Dead code
 
# Good: Trust narrowing
def format_value(value: str | int) -> str:
    if isinstance(value, str):
        return value.upper()
    return str(value)  # mypy knows it's int here

Mistake 5: Overly Complex Types

# Bad: Unreadable
def process(
    data: dict[str, list[tuple[int, dict[str, list[str]]]]]
) -> list[dict[str, int | str | list[float]]]:
    ...
 
# Good: Use type aliases
DataRow = tuple[int, dict[str, list[str]]]
InputData = dict[str, list[DataRow]]
OutputRecord = dict[str, int | str | list[float]]
 
def process(data: InputData) -> list[OutputRecord]:
    ...

Mistake 6: Not Using Literal

# Bad: Any string allowed
def set_log_level(level: str) -> None:
    ...
 
set_log_level("DEBUG")
set_log_level("YOLO")  # No error!
 
# Good: Constrain values
from typing import Literal
 
LogLevel = Literal["DEBUG", "INFO", "WARNING", "ERROR"]
 
def set_log_level(level: LogLevel) -> None:
    ...
 
set_log_level("DEBUG")  # OK
set_log_level("YOLO")   # Error!

Type Aliases

Make complex types readable.

from typing import TypeAlias  # TypeAlias: Python 3.10+
 
# Simple alias
UserId: TypeAlias = int
Email: TypeAlias = str
 
# Complex aliases
JsonValue: TypeAlias = str | int | float | bool | None | list["JsonValue"] | dict[str, "JsonValue"]
Headers: TypeAlias = dict[str, str]
Callback: TypeAlias = Callable[[int, str], bool]
 
def fetch(url: str, headers: Headers) -> JsonValue:
    ...

Final Thoughts

Type hints have saved me countless debugging hours. My rules:

  1. Start with strict mypy — it's easier to loosen than tighten
  2. Type function signatures first — that's where the value is
  3. Use type aliases — complex types should have names
  4. Don't ignore errors blindly — understand why mypy complains
  5. Gradual is fine — typed code > untyped code, always

The goal isn't perfect typing. It's catching bugs before runtime.

Start with your next function. Add types. Run mypy. Fix errors. Repeat.

You'll wonder how you lived without it.
