When I first needed a database for a small project, I almost reached for PostgreSQL. Then a senior engineer stopped me: "It's a CLI tool with a hundred users. Just use SQLite."
That advice saved me hours of setup and taught me something important—SQLite isn't a "toy database." It powers Firefox, Chrome, iOS, and millions of production apps. Python ships with sqlite3 built in, so there's zero installation.
Here's everything I've learned using it.
## Connection and Cursor Basics
Every SQLite operation starts with a connection. Think of it as opening a file:
```python
import sqlite3

# Creates the file if it doesn't exist
conn = sqlite3.connect("myapp.db")
```

But you can't run queries directly on a connection. You need a cursor, a pointer that tracks your position in result sets:
```python
cursor = conn.cursor()
cursor.execute("SELECT 1 + 1")
result = cursor.fetchone()
print(result)  # (2,)
```

Why separate objects? The connection manages the database file and transactions. The cursor manages individual queries. You can have multiple cursors on one connection:
```python
cursor1 = conn.cursor()
cursor2 = conn.cursor()
cursor1.execute("SELECT * FROM users")
cursor2.execute("SELECT * FROM orders")
# Both work independently
```

For quick experiments, use an in-memory database:
```python
conn = sqlite3.connect(":memory:")  # Gone when the connection closes
```

Always close your connections when done:

```python
conn.close()
```

Or better yet, use context managers (we'll get there).
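Until then, the standard library's `contextlib.closing` guarantees the close even when a query raises. A minimal sketch, using an in-memory database so it's safe to run:

```python
import sqlite3
from contextlib import closing

# closing() calls conn.close() for us, even if a query raises
with closing(sqlite3.connect(":memory:")) as conn:
    cursor = conn.cursor()
    cursor.execute("SELECT 1 + 1")
    result = cursor.fetchone()
    print(result)  # (2,)
# conn is closed here; further queries would raise ProgrammingError
```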
## Creating Tables and Inserting Data
SQLite is dynamically typed, but you should still declare types for documentation and tooling:
```python
cursor.execute("""
    CREATE TABLE IF NOT EXISTS users (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        username TEXT NOT NULL UNIQUE,
        email TEXT NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
```

`IF NOT EXISTS` prevents errors when running your code twice. `AUTOINCREMENT` handles IDs automatically.
Inserting data:
```python
cursor.execute(
    "INSERT INTO users (username, email) VALUES (?, ?)",
    ("alice", "alice@example.com")
)
conn.commit()  # Don't forget this!
```

That `conn.commit()` is crucial. Without it, your data vanishes when the connection closes. SQLite uses transactions by default, so you must explicitly save changes.
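A quick way to convince yourself, using a throwaway `demo.db` in a temp directory (filename is mine, purely for illustration):

```python
import os
import sqlite3
import tempfile

# A throwaway on-disk database (in-memory DBs vanish anyway)
path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE notes (body TEXT)")
conn.commit()
conn.execute("INSERT INTO notes VALUES ('draft')")
conn.close()  # closed WITHOUT commit

conn = sqlite3.connect(path)
count = conn.execute("SELECT COUNT(*) FROM notes").fetchone()[0]
print(count)  # 0 - the insert was rolled back
conn.close()
```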
Get the ID of what you just inserted:
```python
cursor.execute(
    "INSERT INTO users (username, email) VALUES (?, ?)",
    ("bob", "bob@example.com")
)
print(cursor.lastrowid)  # 2
conn.commit()
```

## Parameterized Queries (The SQL Injection Talk)
Never do this:
```python
# DANGEROUS - SQL injection vulnerability!
username = "alice"
cursor.execute(f"SELECT * FROM users WHERE username = '{username}'")
```

Why? If `username` comes from user input and someone types `alice'; DROP TABLE users; --`, your table is gone.
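(Strictly speaking, sqlite3's `execute()` refuses to run two statements at once, so that exact payload fails loudly; a subtler payload does real damage. Here's a safe-to-run sketch of the same idea, with a made-up table and payload:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

# A payload that rewrites the WHERE clause instead of dropping tables
payload = "nobody' OR '1'='1"
rows = conn.execute(
    f"SELECT * FROM users WHERE username = '{payload}'"
).fetchall()
print(len(rows))  # 2 - the condition is now always true; every user leaks
```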
Always use parameterized queries:
```python
# Safe - sqlite3 escapes the value
cursor.execute(
    "SELECT * FROM users WHERE username = ?",
    (username,)  # Note: tuple, even for one value
)
```

The `?` is a placeholder. sqlite3 handles escaping, quoting, and type conversion. This also works with named parameters:
```python
cursor.execute(
    "SELECT * FROM users WHERE username = :name AND email = :email",
    {"name": "alice", "email": "alice@example.com"}
)
```

Named parameters are cleaner when you have many values:
```python
params = {
    "min_date": "2026-01-01",
    "max_date": "2026-12-31",
    "status": "active"
}
cursor.execute("""
    SELECT * FROM orders
    WHERE created_at BETWEEN :min_date AND :max_date
      AND status = :status
""", params)
```

## Row Factory: Dict-Like Access
By default, fetchone() returns tuples:
```python
cursor.execute("SELECT id, username, email FROM users WHERE id = 1")
row = cursor.fetchone()
print(row)     # (1, 'alice', 'alice@example.com')
print(row[1])  # 'alice'
```

Index-based access is fragile. If you add a column or reorder your SELECT, everything breaks.
Enter sqlite3.Row:
```python
conn.row_factory = sqlite3.Row
cursor = conn.cursor()
cursor.execute("SELECT id, username, email FROM users WHERE id = 1")
row = cursor.fetchone()
print(row["username"])  # 'alice'
print(row["email"])     # 'alice@example.com'
print(row[0])           # 1 (index still works)
```

Set `row_factory` right after connecting; I do this on every project:
```python
conn = sqlite3.connect("myapp.db")
conn.row_factory = sqlite3.Row
```

For actual dictionaries (useful for JSON serialization):
```python
def dict_factory(cursor, row):
    return {
        col[0]: row[idx]
        for idx, col in enumerate(cursor.description)
    }

conn.row_factory = dict_factory
cursor = conn.cursor()
cursor.execute("SELECT * FROM users")
user = cursor.fetchone()
print(user)  # {'id': 1, 'username': 'alice', 'email': 'alice@example.com', ...}
```

## Context Managers for Transactions
The cleanest pattern for database operations:
```python
import sqlite3

with sqlite3.connect("myapp.db") as conn:
    conn.row_factory = sqlite3.Row
    cursor = conn.cursor()
    cursor.execute(
        "INSERT INTO users (username, email) VALUES (?, ?)",
        ("charlie", "charlie@example.com")
    )
# Auto-commits when exiting the block successfully
```

What makes this great:
- Auto-commit on success: If the block completes normally, changes are committed
- Auto-rollback on exception: If anything raises, all changes are rolled back
- Connection stays open: The connection isn't closed (only the transaction)
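The auto-rollback is easy to verify. This sketch (in-memory table, deliberate failure, nothing beyond the standard library) raises mid-transaction and checks that the insert disappeared:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.commit()

try:
    with conn:  # the connection object itself manages the transaction
        conn.execute("INSERT INTO users VALUES ('dave')")
        raise RuntimeError("something went wrong mid-transaction")
except RuntimeError:
    pass

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 0 - the insert was rolled back
```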
For multiple operations that should succeed or fail together:
```python
# sender_id and receiver_id come from your application
with sqlite3.connect("myapp.db") as conn:
    cursor = conn.cursor()
    try:
        cursor.execute(
            "UPDATE accounts SET balance = balance - ? WHERE id = ?",
            (100, sender_id)
        )
        cursor.execute(
            "UPDATE accounts SET balance = balance + ? WHERE id = ?",
            (100, receiver_id)
        )
        cursor.execute(
            "INSERT INTO transfers (from_id, to_id, amount) VALUES (?, ?, ?)",
            (sender_id, receiver_id, 100)
        )
        conn.commit()
    except Exception:
        conn.rollback()
        raise
```

This is a bank transfer. Either all three operations succeed, or none do. No partial states.
## Common Patterns

### Upsert (Insert or Update)
SQLite 3.24+ supports ON CONFLICT:
```python
cursor.execute("""
    INSERT INTO users (username, email)
    VALUES (?, ?)
    ON CONFLICT(username) DO UPDATE SET
        email = excluded.email
""", ("alice", "alice-new@example.com"))
```

If alice exists, update her email. If not, insert her. The `excluded` pseudo-table refers to the values you tried to insert.
For older SQLite versions:
```python
cursor.execute("""
    INSERT OR REPLACE INTO users (username, email)
    VALUES (?, ?)
""", ("alice", "alice-new@example.com"))
```

Caveat: `INSERT OR REPLACE` deletes and reinserts the row, which resets its rowid and fires delete triggers and cascades. Use `ON CONFLICT` when possible.
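You can watch the rowid reset happen. A sketch comparing the two approaches on the same table (assumes SQLite 3.24+ for the `ON CONFLICT` half):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        username TEXT UNIQUE,
        email TEXT
    )
""")
conn.execute("INSERT INTO users (username, email) VALUES ('alice', 'old@example.com')")
old_id = conn.execute("SELECT id FROM users WHERE username = 'alice'").fetchone()[0]

# INSERT OR REPLACE deletes the old row, so alice gets a fresh id
conn.execute("INSERT OR REPLACE INTO users (username, email) VALUES ('alice', 'new@example.com')")
replace_id = conn.execute("SELECT id FROM users WHERE username = 'alice'").fetchone()[0]
print(old_id, replace_id)  # the ids differ

# ON CONFLICT updates in place, so the id is stable
conn.execute("""
    INSERT INTO users (username, email) VALUES ('alice', 'newer@example.com')
    ON CONFLICT(username) DO UPDATE SET email = excluded.email
""")
upsert_id = conn.execute("SELECT id FROM users WHERE username = 'alice'").fetchone()[0]
print(replace_id, upsert_id)  # the ids match
```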
### Batch Inserts
For many rows, executemany is much faster than looping:
```python
users = [
    ("user1", "user1@example.com"),
    ("user2", "user2@example.com"),
    ("user3", "user3@example.com"),
    # ... hundreds more
]
cursor.executemany(
    "INSERT INTO users (username, email) VALUES (?, ?)",
    users
)
conn.commit()
```

For even better performance with massive imports, wrap everything in an explicit transaction and disable synchronous writes temporarily:
```python
def chunks(seq, size):
    """Split a list into fixed-size batches."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

cursor.execute("PRAGMA synchronous = OFF")
cursor.execute("BEGIN TRANSACTION")
for batch in chunks(users, 1000):  # Process 1000 at a time
    cursor.executemany(
        "INSERT INTO users (username, email) VALUES (?, ?)",
        batch
    )
cursor.execute("COMMIT")
cursor.execute("PRAGMA synchronous = FULL")
```

### Simple Database Wrapper
A pattern I use in every project:
```python
import sqlite3
from contextlib import contextmanager

class Database:
    def __init__(self, path: str):
        self.path = path
        self._init_db()

    @contextmanager
    def _connection(self):
        conn = sqlite3.connect(self.path)
        conn.row_factory = sqlite3.Row
        try:
            yield conn
        finally:
            conn.close()

    def _init_db(self):
        with self._connection() as conn:
            conn.execute("""
                CREATE TABLE IF NOT EXISTS users (
                    id INTEGER PRIMARY KEY,
                    username TEXT NOT NULL UNIQUE,
                    email TEXT NOT NULL
                )
            """)
            conn.commit()

    def add_user(self, username: str, email: str) -> int:
        with self._connection() as conn:
            cursor = conn.execute(
                "INSERT INTO users (username, email) VALUES (?, ?)",
                (username, email)
            )
            conn.commit()
            return cursor.lastrowid

    def get_user(self, user_id: int) -> dict | None:
        with self._connection() as conn:
            cursor = conn.execute(
                "SELECT * FROM users WHERE id = ?", (user_id,)
            )
            row = cursor.fetchone()
            return dict(row) if row else None

    def list_users(self) -> list[dict]:
        with self._connection() as conn:
            cursor = conn.execute("SELECT * FROM users")
            return [dict(row) for row in cursor.fetchall()]

# Usage
db = Database("myapp.db")
user_id = db.add_user("alice", "alice@example.com")
user = db.get_user(user_id)
print(user["username"])  # 'alice'
```

This wraps all the boilerplate (row factory, connection management, table creation) so the rest of your code stays clean.
## When to Use SQLite vs Other Databases
Use SQLite when:
- Single-user application (CLI tools, desktop apps, mobile)
- Embedded database (no separate server process)
- Prototypes and MVPs (zero setup, easy to iterate)
- Testing (in-memory databases are fast and isolated)
- Caching layer or local storage
- Read-heavy workloads with occasional writes
- Data under 10GB (realistically, under 1TB works fine)
Use PostgreSQL/MySQL when:
- Multiple concurrent writers (SQLite locks the whole database on write)
- Web apps with many simultaneous users
- You need advanced features (SQLite ships JSON functions and basic full-text search, but geospatial types, stored procedures, and per-user permissions need a server)
- Distributed systems or replication
- Heavy write volume
- Your data lives on a different machine than your code
The key question: Do you need a server? If your app and data live on the same machine and you don't have heavy concurrent writes, SQLite is almost always the right choice.
I've seen SQLite handle millions of rows and hundreds of reads per second without breaking a sweat. The "toy database" reputation is undeserved.
## Parting Tips
A few things I wish I'd known earlier:
- WAL mode improves concurrency: `PRAGMA journal_mode=WAL` allows concurrent reads during writes
- Use indexes: `CREATE INDEX idx_users_email ON users(email)` can make queries 100x faster
- Check your foreign keys: they're disabled by default. Enable with `PRAGMA foreign_keys = ON`
- Backup is trivial: `conn.backup(sqlite3.connect("backup.db"))` copies everything
- SQLite has great docs: the official documentation is some of the best I've read
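That foreign-key default is worth seeing once. A minimal sketch with made-up tables (note the pragma is a no-op inside an open transaction, hence the `commit()` before it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY);
    CREATE TABLE books (
        id INTEGER PRIMARY KEY,
        author_id INTEGER REFERENCES authors(id)
    );
""")

# Foreign keys are OFF by default: this bogus reference sneaks in
conn.execute("INSERT INTO books (author_id) VALUES (999)")
conn.commit()  # pragma below only takes effect outside a transaction

conn.execute("PRAGMA foreign_keys = ON")
try:
    conn.execute("INSERT INTO books (author_id) VALUES (1000)")
    enforced = False
except sqlite3.IntegrityError:
    enforced = True
print(enforced)  # True - the constraint is now checked
```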
SQLite has been in my toolkit for every small project since that first CLI tool. No setup, no servers, no dependencies—just import and go.