functools is one of those standard library modules that quietly upgrades your codebase. It has the building blocks for decorators, caching, adaptation, and functional-style utilities.
## functools.wraps (the decorator you should almost always use)
When you write a decorator without wraps, you lose function metadata (name, docstring, signature in some tools), and debugging gets worse.
```python
import functools

def log_calls(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"Calling {fn.__name__}")
        return fn(*args, **kwargs)
    return wrapper

@log_calls
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

print(add.__name__)  # add
print(add.__doc__)   # Add two numbers.
```

Why it matters:
- Tracebacks show the real function name
- Doc generators and help() stay accurate
- Tools like `inspect` behave better
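A quick way to see the difference is to compare a decorator with and without `wraps` side by side (the decorator names below are made up for illustration):

```python
import functools

def no_wraps(fn):
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

def with_wraps(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@no_wraps
def first():
    """Docstring survives?"""

@with_wraps
def second():
    """Docstring survives?"""

print(first.__name__)   # wrapper  (metadata lost)
print(second.__name__)  # second   (metadata preserved)
print(second.__doc__)   # Docstring survives?
```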
## functools.partial (pre-fill arguments)
partial gives you partial application (often loosely called “currying”) in practical Python.
```python
import functools

def greet(greeting, name):
    return f"{greeting}, {name}!"

say_hi = functools.partial(greet, "Hi")
print(say_hi("Joe"))  # Hi, Joe!

# Works with keyword arguments too. Built-in pow() historically rejected
# keyword arguments, so a custom function makes a clearer example:
def clamp(x, *, lo, hi):
    return max(lo, min(hi, x))

clamp_0_1 = functools.partial(clamp, lo=0.0, hi=1.0)
print(clamp_0_1(1.7))  # 1.0
```

Partial vs lambda:
- `partial` is picklable more often
- It's clearer when you're just binding params
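To illustrate the second point: a `partial` object is introspectable through its `.func`, `.args`, and `.keywords` attributes, while a lambda hides the binding behind opaque code. A small sketch reusing the `greet` example:

```python
import functools

def greet(greeting, name):
    return f"{greeting}, {name}!"

say_hi = functools.partial(greet, "Hi")

# You can see exactly what a partial wraps and what it binds.
print(say_hi.func.__name__)  # greet
print(say_hi.args)           # ('Hi',)
print(say_hi.keywords)       # {}

# The lambda equivalent behaves the same but exposes none of this.
say_hi_lambda = lambda name: greet("Hi", name)
print(say_hi_lambda("Joe") == say_hi("Joe"))  # True
```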
## functools.lru_cache (memoization)
Cache expensive pure-ish functions.
```python
import functools

@functools.lru_cache(maxsize=256)
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(35))
print(fib.cache_info())

# Clear cache when needed
fib.cache_clear()
```

Practical advice:
- Only cache functions whose output depends only on their inputs
- Beware caching huge inputs or unbounded growth (`maxsize=None`)
- For methods: consider `@functools.cache` (3.9+) or manage keying carefully (a cached method also keeps `self` alive in the cache)
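The cache instrumentation is easy to inspect. A minimal sketch (the `square` function is made up for illustration) showing hits, misses, and LRU eviction with a tiny `maxsize`:

```python
import functools

@functools.lru_cache(maxsize=2)
def square(n):
    return n * n

square(1); square(2); square(1)  # the third call is a cache hit
info = square.cache_info()
print(info.hits, info.misses)    # 1 2

square(3)  # cache is full: evicts the least recently used entry (2)
square(2)  # a miss again, because 2 was evicted
print(square.cache_info().misses)  # 4
```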
## functools.cache (Python 3.9+)
Like `lru_cache(maxsize=None)`.
```python
import functools

@functools.cache
def parse_schema(path: str) -> dict:
    # expensive IO + parsing
    return load_schema(path)
```

Use it when you want memoization and you're confident the input space is small.
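A small sketch showing the memoization at work (the `load` function and `calls` counter are made up for illustration):

```python
import functools

calls = 0

@functools.cache
def load(key):
    global calls
    calls += 1
    return key.upper()

print(load("a"), load("a"), load("b"))  # A A B
print(calls)  # 2 -- each distinct argument is computed exactly once
```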
## functools.singledispatch (type-based function overloading)
singledispatch lets you define one public function with multiple implementations.
```python
import functools

@functools.singledispatch
def normalize(value):
    raise TypeError(f"Unsupported type: {type(value)}")

@normalize.register
def _(value: str):
    return value.strip().lower()

@normalize.register
def _(value: int):
    return value

@normalize.register
def _(value: list):
    return [normalize(v) for v in value]

print(normalize(" Hello "))     # hello
print(normalize([" A ", "B"]))  # ['a', 'b']
```

Notes:
- Dispatch is on the first argument only
- Use for adapter layers and input normalization
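One detail worth knowing: `register` also accepts the type as an explicit argument, which is handy when you can't use annotations (e.g. for lambdas or `type(None)`). A small sketch with a hypothetical `describe` function:

```python
import functools

@functools.singledispatch
def describe(value):
    return "unknown"

# register() can take the type explicitly, without annotations:
describe.register(int, lambda v: f"int: {v}")
describe.register(type(None), lambda v: "nothing")

print(describe(3))    # int: 3
print(describe(None)) # nothing
print(describe(2.5))  # unknown  (falls back to the base implementation)
```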
## functools.singledispatchmethod (Python 3.8+)
Same idea, but for methods.
```python
import functools

class Parser:
    @functools.singledispatchmethod
    def parse(self, value):
        raise TypeError("unsupported")

    @parse.register
    def _(self, value: str):
        return value.split(",")

    @parse.register
    def _(self, value: bytes):
        return value.decode().split(",")
```

## functools.reduce (fold)
Python prefers loops and comprehensions, but reduce can be nice when you mean a fold.
```python
import functools
import operator

nums = [1, 2, 3, 4]
product = functools.reduce(operator.mul, nums, 1)
print(product)  # 24
```

Rule of thumb: if a loop is clearer, use the loop.
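Another place a fold reads naturally is merging a sequence of dicts, with the initializer as the empty accumulator (the `configs` list is made up for illustration):

```python
import functools

configs = [{"debug": False}, {"port": 8080}, {"debug": True}]

# A fold with an initial value: later dicts override earlier keys.
merged = functools.reduce(lambda acc, d: {**acc, **d}, configs, {})
print(merged)  # {'debug': True, 'port': 8080}
```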
## functools.total_ordering
If you define `__eq__` and one ordering method, `total_ordering` fills in the rest.
```python
import functools

@functools.total_ordering
class Version:
    def __init__(self, major, minor):
        self.major = major
        self.minor = minor

    def __eq__(self, other):
        if not isinstance(other, Version):
            return NotImplemented
        return (self.major, self.minor) == (other.major, other.minor)

    def __lt__(self, other):
        if not isinstance(other, Version):
            return NotImplemented
        return (self.major, self.minor) < (other.major, other.minor)

print(Version(1, 2) <= Version(1, 3))  # True
```

## functools.cmp_to_key
Modern Python wants key functions, not comparison functions. cmp_to_key adapts old-style comparators.
```python
import functools

items = ["10", "2", "1"]

def cmp_numeric(a, b):
    return int(a) - int(b)

items_sorted = sorted(items, key=functools.cmp_to_key(cmp_numeric))
print(items_sorted)  # ['1', '2', '10']
```

## Writing decorators cleanly (a few patterns)
### Decorator with arguments
```python
import functools

def retry(times: int):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(times):
                try:
                    return fn(*args, **kwargs)
                except Exception as e:
                    last_exc = e
            raise last_exc
        return wrapper
    return decorator

@retry(times=3)
def flaky():
    ...
```

### Preserve typing with ParamSpec (Python 3.10+)
```python
import functools
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

def logged(fn: Callable[P, R]) -> Callable[P, R]:
    @functools.wraps(fn)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        print(fn.__name__)
        return fn(*args, **kwargs)
    return wrapper
```

## One useful mental model
- `wraps` keeps your decorators from sabotaging tooling
- `partial` is for binding arguments
- `lru_cache`/`cache` are for memoization
- `singledispatch` is for clean type-based adapters
If you're writing a lot of decorators or adapter code, functools should be in your muscle memory.