There are three main options for making HTTP requests in Python. Here's when to use each.
## Quick Comparison
| Library | Sync | Async | HTTP/2 | Best For |
|---|---|---|---|---|
| requests | ✓ | ✗ | ✗ | Simple scripts |
| httpx | ✓ | ✓ | ✓ | Modern apps |
| aiohttp | ✗ | ✓ | ✗ | High concurrency |
## requests
The classic. Simple and reliable.
```python
import requests

# GET
response = requests.get("https://api.example.com/users")
data = response.json()

# POST with JSON
response = requests.post(
    "https://api.example.com/users",
    json={"name": "Owen", "email": "owen@example.com"}
)

# Headers and auth
response = requests.get(
    "https://api.example.com/me",
    headers={"Authorization": "Bearer token123"}
)

# Timeout (always set one!)
response = requests.get(url, timeout=10)
```

**Use when:** simple scripts, quick prototypes, no async needed.
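One thing the snippets above skip is query parameters: pass a dict and requests URL-encodes it for you. A small sketch (the endpoint and parameter names are made up); preparing the request shows the encoded URL without sending anything:

```python
import requests

# Pass query parameters as a dict; requests URL-encodes them.
# Preparing the request reveals the final URL without sending it.
req = requests.Request(
    "GET",
    "https://api.example.com/users",
    params={"page": 2, "per_page": 50},
).prepare()

print(req.url)  # https://api.example.com/users?page=2&per_page=50
```

In practice you'd just call `requests.get(url, params={...}, timeout=10)` directly.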
## httpx
Modern, supports both sync and async, HTTP/2.
```python
import httpx

# Sync (same API as requests)
response = httpx.get("https://api.example.com/users")

# Async
async with httpx.AsyncClient() as client:
    response = await client.get("https://api.example.com/users")

# HTTP/2 (requires: pip install httpx[http2])
client = httpx.Client(http2=True)

# Connection pooling with a client
with httpx.Client() as client:
    for url in urls:
        response = client.get(url)  # reuses connections
```

**Use when:** new projects, need async, want HTTP/2, want one library for both sync/async.
## aiohttp
Pure async, highest performance for concurrent requests.
```python
import aiohttp
import asyncio

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.json()

async def fetch_all(urls):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, url) for url in urls]
        return await asyncio.gather(*tasks)

# Fetch 100 URLs concurrently
results = asyncio.run(fetch_all(urls))
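"Fetch 100 URLs concurrently" usually wants a cap on how many requests are in flight at once. A minimal sketch using `asyncio.Semaphore`; the simulated `fetch` stands in for the `session.get` call above so the snippet runs without a network:

```python
import asyncio

# A stand-in for the aiohttp fetch() above, so this runs without a network.
async def fetch(url):
    await asyncio.sleep(0.01)  # simulate network I/O
    return url

async def fetch_all(urls, limit=20):
    # At most `limit` fetches run at once; the rest wait at the semaphore.
    sem = asyncio.Semaphore(limit)

    async def bounded(url):
        async with sem:
            return await fetch(url)

    return await asyncio.gather(*(bounded(u) for u in urls))

results = asyncio.run(fetch_all([f"https://example.com/{i}" for i in range(100)]))
```

`gather` preserves input order, so `results[i]` corresponds to `urls[i]` even though completion order varies.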
## Common Patterns
### Session/Client reuse
```python
# Bad: new connection per request
for url in urls:
    requests.get(url)

# Good: reuse connections
with requests.Session() as session:
    for url in urls:
        session.get(url)
```

### Retries
```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(
    total=3,
    backoff_factor=0.5,
    status_forcelist=[500, 502, 503, 504]
)
session.mount("https://", HTTPAdapter(max_retries=retries))
```

### Timeouts
```python
# Always set timeouts!

# requests: (connect_timeout, read_timeout)
response = requests.get(url, timeout=(3, 10))

# httpx
response = httpx.get(url, timeout=httpx.Timeout(10, connect=3))
```

### Error handling
```python
try:
    response = requests.get(url, timeout=10)
    response.raise_for_status()  # raises on 4xx/5xx
except requests.ConnectionError:
    print("Network error")
except requests.Timeout:
    print("Request timed out")
except requests.HTTPError as e:
    print(f"HTTP error: {e.response.status_code}")
```

### Streaming large responses
```python
# Don't load the entire response into memory
with requests.get(url, stream=True) as r:
    for chunk in r.iter_content(chunk_size=8192):
        process(chunk)
```

## My Recommendations
New projects: Start with httpx. It's modern, supports both sync and async, and has better defaults.
Existing sync code: requests is fine. No need to migrate.
High-concurrency async: aiohttp if you need maximum performance. httpx async for simpler cases.
Always:
- Set timeouts
- Use sessions/clients for multiple requests
- Handle errors properly
- Consider retries for flaky APIs
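Those four rules combined into one requests-based helper (a sketch; `make_session` and `get_json` are names I've made up for illustration):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session(retries=3):
    # Session for connection reuse, with retries on transient 5xx errors.
    session = requests.Session()
    retry = Retry(
        total=retries,
        backoff_factor=0.5,
        status_forcelist=[500, 502, 503, 504],
    )
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session

def get_json(session, url):
    # Timeout on every call; all request errors handled in one place.
    try:
        response = session.get(url, timeout=(3, 10))
        response.raise_for_status()
        return response.json()
    except requests.RequestException as e:
        print(f"Request failed: {e}")
        return None
```

`requests.RequestException` is the base class of `ConnectionError`, `Timeout`, and `HTTPError`, so one handler covers them all when you don't need to distinguish.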
## Migration Path
```python
# requests → httpx is mostly drop-in
import httpx  # instead of: import requests

response = httpx.get(url)              # same API
response = httpx.post(url, json=data)  # same API
```

Most code works unchanged. Check the httpx docs for edge cases.