Why This Matters
pg_stat_statements is a built-in Postgres extension that tracks execution statistics for all SQL statements. It's the foundation for identifying performance problems across your entire database, not just individual queries.
Without it, you can only analyze queries one at a time with EXPLAIN ANALYZE. With pg_stat_statements, you can:
- Find slowest queries: Identify queries consuming the most total time
- Find frequent queries: Spot queries executed thousands of times
- Track query trends: Monitor performance over time
- Identify N+1 problems: Find queries executed excessively
Note: pg_stat_statements is enabled by default on Supabase. For self-hosted Postgres, you need to load it via shared_preload_libraries in postgresql.conf and restart the server.
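If you are self-hosting, a minimal postgresql.conf sketch looks like this; the tuning values shown are illustrative, not required, and a server restart is needed for shared_preload_libraries to take effect:
# postgresql.conf - load the module at server start (restart required)
shared_preload_libraries = 'pg_stat_statements'

# Optional tuning - illustrative values, the defaults are usually fine
pg_stat_statements.max = 10000    # number of distinct statements tracked
pg_stat_statements.track = top    # track only top-level statements
After the restart, run CREATE EXTENSION pg_stat_statements; in each database where you want to query the view.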
The Problem
Without pg_stat_statements, you can't see which queries are causing problems:
-- Database is slow, but which queries?
-- You can only check pg_stat_activity for current queries
SELECT pid, query, state
FROM pg_stat_activity
WHERE state = 'active';
-- This only shows what's running NOW. It can't tell you:
-- - which queries run most often
-- - which queries consume the most total time
-- - how performance has trended over time
The Solution
Enable pg_stat_statements and query it to find problem queries:
-- Enable the extension (if not already enabled)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Find slowest queries by total execution time
SELECT
  query,
  calls,
  total_exec_time,
  mean_exec_time,
  max_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
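Raw totals can be hard to interpret on their own. A common follow-up, sketched here with a window function (this query is not part of the extension, and the column names assume Postgres 13+, where total_exec_time replaced total_time), is to express each query's share of all execution time:
-- Share of total execution time per query
SELECT
  query,
  calls,
  round(total_exec_time::numeric, 2) AS total_ms,
  round((100 * total_exec_time / sum(total_exec_time) OVER ())::numeric, 2) AS pct_of_total
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;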
Useful Queries
Find queries executed most frequently (potential N+1 problems):
-- Most frequently executed queries
SELECT
  query,
  calls,
  total_exec_time,
  mean_exec_time
FROM pg_stat_statements
ORDER BY calls DESC
LIMIT 10;
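A high call count alone doesn't prove an N+1 pattern, but pairing calls with the rows column (the total rows returned across all calls) gives a rough signature: thousands of executions that each return one or two rows. A sketch, with an arbitrary calls threshold you should tune to your workload:
-- Possible N+1 signature: many calls, few rows returned per call
SELECT
  query,
  calls,
  rows,
  round(rows::numeric / calls, 1) AS rows_per_call,
  round(mean_exec_time::numeric, 2) AS mean_ms
FROM pg_stat_statements
WHERE calls > 1000  -- arbitrary threshold
ORDER BY calls DESC
LIMIT 10;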
Find queries with highest average execution time:
-- Slowest queries by average time
SELECT
  query,
  calls,
  mean_exec_time,
  total_exec_time
FROM pg_stat_statements
WHERE calls > 100  -- filter out one-off queries
ORDER BY mean_exec_time DESC
LIMIT 10;
Result: You can now spot a query that runs 10,000 times per hour at 50ms per call - that's 500 seconds of database time every hour that could be optimized away.
Key Columns
- query: Normalized SQL text (literal values replaced with placeholders such as $1, $2)
- calls: Number of times the statement was executed
- total_exec_time: Total time spent executing, in milliseconds
- mean_exec_time: Average execution time per call, in milliseconds
- max_exec_time: Slowest single execution, in milliseconds
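One caveat: these column names apply to Postgres 13 and later; older versions expose the same data as total_time, mean_time, and max_time. A quick way to check which columns your server provides, sketched against information_schema:
-- List the timing columns exposed by your pg_stat_statements version
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'pg_stat_statements'
  AND column_name LIKE '%time%'
ORDER BY column_name;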
Common Mistakes
- Not resetting statistics: Old data skews results - call pg_stat_statements_reset() periodically or after deployments (see the sketch after this list).
- Ignoring call count: A query taking 1 second once matters less than one taking 10ms but called 10,000 times.
- Misreading normalized queries: The query column replaces literal values with placeholders, so two statements that differ only in their parameters are grouped together.
- Forgetting to enable it: On self-hosted Postgres, add shared_preload_libraries = 'pg_stat_statements' to postgresql.conf and restart the server.
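A minimal reset sketch; calling this function is typically restricted to superusers unless EXECUTE on it has been granted:
-- Clear accumulated statistics so new measurements start from a clean slate
SELECT pg_stat_statements_reset();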