Why This Matters
OFFSET pagination gets slower the deeper you paginate into large datasets. To fetch page 1000 at 100 rows per page, Postgres must scan and discard the first 99,900 rows, even though you only need 100 rows.
This causes serious performance issues:
- Slow deep pagination: Page 1000 takes 5+ seconds while page 1 takes 50ms
- Inconsistent performance: Response times vary wildly based on page number
- Wasted resources: Database must process rows it will never return
- Poor user experience: Users hit timeouts when browsing deep pages
Warning: On tables with 1M+ rows, OFFSET pagination degrades sharply as the page number grows and can become unusable for deep pages. Cursor-based pagination maintains consistent performance regardless of position.
The Problem
Using OFFSET requires Postgres to scan and skip rows before returning results:
-- Page 1: Fast (50ms)
SELECT * FROM posts
ORDER BY created_at DESC
LIMIT 100 OFFSET 0;
-- Page 100: Slow (500ms) - must skip 9,900 rows
SELECT * FROM posts
ORDER BY created_at DESC
LIMIT 100 OFFSET 9900;
-- Page 1000: Very slow (5+ seconds) - must skip 99,900 rows
SELECT * FROM posts
ORDER BY created_at DESC
LIMIT 100 OFFSET 99900;
Postgres must scan through all skipped rows, even though they're never returned. This overhead grows linearly with the offset value.
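If you want to reproduce this effect against your own data, a small timing harness makes the linear growth visible. The sketch below is illustrative: it assumes a posts table like the one above, a DATABASE_URL environment variable, and the node-postgres (pg) client; adjust the table and column names to your schema.
// measure-offset.ts: time the same LIMIT 100 query at increasing offsets
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function timeOffset(offset: number): Promise<number> {
  const start = Date.now();
  await pool.query(
    "SELECT * FROM posts ORDER BY created_at DESC LIMIT 100 OFFSET $1",
    [offset]
  );
  return Date.now() - start;
}

async function main() {
  // Offsets for pages 1, 100, and 1000 at 100 rows per page
  for (const offset of [0, 9900, 99900]) {
    console.log(`OFFSET ${offset}: ${await timeOffset(offset)}ms`);
  }
  await pool.end();
}

main().catch(console.error);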
The Solution
Use cursor-based (keyset) pagination with WHERE conditions instead of OFFSET:
-- First page: Get first 100 posts
SELECT * FROM posts
ORDER BY created_at DESC, id DESC
LIMIT 100;
-- Returns posts with created_at values and IDs
-- Store the last row's created_at and id as cursor
-- Next page: Use cursor to fetch rows after the cursor
SELECT * FROM posts
WHERE (created_at, id) < ('2024-01-15 10:30:00', 12345)
ORDER BY created_at DESC, id DESC
LIMIT 100;
-- Result: Consistent ~50ms performance for any page
-- Uses index efficiently, no row skipping
The cursor is the combination of values from the last row of the previous page. Postgres uses an index to jump directly to the cursor position, then scans forward.
Important: Always include a unique column (like id) in the ORDER BY clause to ensure deterministic ordering when the cursor column has duplicate values.
Result: Cursor-based pagination maintains ~50ms response time for any page, while OFFSET pagination degrades to 5+ seconds on deep pages - a 100x improvement.
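In application code, the cursor is simply the (created_at, id) pair taken from the last row of the previous page. Below is a minimal sketch of a keyset fetch with node-postgres, assuming the same posts table; the Cursor type and fetchPage name are illustrative, not part of any library.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

interface Cursor {
  createdAt: string; // ISO timestamp from the last row of the previous page
  id: number;        // id from the last row of the previous page
}

async function fetchPage(cursor?: Cursor, limit = 100) {
  // First page: no cursor. Later pages: the row-value comparison lets Postgres
  // seek straight to the cursor position via an index on (created_at, id).
  const { rows } = cursor
    ? await pool.query(
        `SELECT * FROM posts
         WHERE (created_at, id) < ($1, $2)
         ORDER BY created_at DESC, id DESC
         LIMIT $3`,
        [cursor.createdAt, cursor.id, limit]
      )
    : await pool.query(
        `SELECT * FROM posts ORDER BY created_at DESC, id DESC LIMIT $1`,
        [limit]
      );

  const last = rows[rows.length - 1];
  return {
    data: rows,
    nextCursor: last
      ? { createdAt: last.created_at.toISOString(), id: last.id }
      : null,
  };
}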
Common Mistakes
- Not including a unique column: When ordering by a non-unique column (like created_at), always add a unique column (like id) to ensure deterministic ordering.
- Missing index on cursor columns: Cursor-based pagination requires an index on the columns used in ORDER BY. Without it, performance degrades.
- Using cursor for random access: Cursor pagination doesn't support jumping to arbitrary pages. Users must navigate sequentially, which is usually fine for feeds and lists.
- Not handling edge cases: Handle cases where the cursor doesn't exist (deleted rows) or when there are no more pages; see the sketch after this list.
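One simple way to handle the "no more pages" case is to fetch one extra row: if limit + 1 rows come back, another page exists; otherwise you have reached the end. The sketch below builds on the hypothetical fetchPage from the earlier example; the hasMore field is illustrative.
// Fetch limit + 1 rows to learn whether another page exists,
// then return only `limit` rows to the client.
async function fetchPageWithHasMore(cursor: Cursor | undefined, limit = 100) {
  const page = await fetchPage(cursor, limit + 1);
  const hasMore = page.data.length > limit;
  const data = hasMore ? page.data.slice(0, limit) : page.data;
  const last = data[data.length - 1];
  return {
    data,
    hasMore,
    nextCursor: hasMore && last
      ? { createdAt: last.created_at.toISOString(), id: last.id }
      : null,
  };
}
A deleted cursor row is less of a problem than it sounds: the WHERE clause compares values rather than looking up the row itself, so pagination simply continues from that value even if the underlying row is gone.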
Implementation Tips
When implementing cursor-based pagination:
-- Create index on cursor columns
CREATE INDEX posts_created_at_id_idx
ON posts (created_at DESC, id DESC);
-- API endpoint example
-- GET /posts?cursor=2024-01-15T10:30:00Z_12345&limit=100
-- Parse the cursor in application code (created_at_id format), e.g.:
--   const [createdAt, id] = cursor.split('_');
-- Query with cursor
SELECT * FROM posts
WHERE (created_at, id) < ($1, $2)
ORDER BY created_at DESC, id DESC
LIMIT $3;
-- Return next cursor in response
-- { data: [...], nextCursor: "2024-01-14T09:20:00Z_12340" }
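Serializing and parsing the cursor string is application code; a small helper pair keeps the created_at_id format in one place. This is a sketch under the same assumptions as above (an ISO timestamp joined to a numeric id with an underscore); the function names are illustrative.
// Serialize and parse the `<ISO timestamp>_<id>` cursor format shown above.
// An underscore is a safe separator because ISO 8601 timestamps never contain one.
function encodeCursor(createdAt: Date, id: number): string {
  return `${createdAt.toISOString()}_${id}`;
}

function decodeCursor(cursor: string): { createdAt: string; id: number } | null {
  const sep = cursor.lastIndexOf("_");
  if (sep === -1) return null; // malformed cursor
  const createdAt = cursor.slice(0, sep);
  const id = Number(cursor.slice(sep + 1));
  if (Number.isNaN(Date.parse(createdAt)) || !Number.isInteger(id)) return null;
  return { createdAt, id };
}

// Usage: GET /posts?cursor=2024-01-15T10:30:00Z_12345&limit=100
// const parsed = decodeCursor(req.query.cursor); // reject the request if null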
For bidirectional pagination (next/previous), use different comparison operators:
-- Next page: rows after cursor
WHERE (created_at, id) < (cursor_created_at, cursor_id)
-- Previous page: rows before cursor
WHERE (created_at, id) > (cursor_created_at, cursor_id)
ORDER BY created_at ASC, id ASC
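Because the previous-page query flips the ORDER BY to ascending, the rows come back oldest-first; reverse them in application code before returning them so the client still sees newest-first. A minimal sketch, assuming the same pool and Cursor from the earlier examples:
// Previous page: rows before the cursor, fetched ascending and then reversed
async function fetchPreviousPage(cursor: Cursor, limit = 100) {
  const { rows } = await pool.query(
    `SELECT * FROM posts
     WHERE (created_at, id) > ($1, $2)
     ORDER BY created_at ASC, id ASC
     LIMIT $3`,
    [cursor.createdAt, cursor.id, limit]
  );
  return rows.reverse(); // restore newest-first ordering for display
}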