Medium-High Impact

Prevent Deadlocks with Consistent Lock Ordering

Acquire locks in the same order to avoid circular waits

Why This Matters

A deadlock occurs when two or more transactions wait indefinitely for each other to release locks. Postgres automatically detects deadlocks and aborts one transaction, but this causes errors and wasted work.
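When the deadlock detector fires, the aborted transaction fails with a message along these lines (process IDs and lock details vary, and the exact wording depends on your Postgres version):

```
ERROR:  deadlock detected
DETAIL:  Process 12345 waits for ShareLock on transaction 67890; blocked by process 12346.
         Process 12346 waits for ShareLock on transaction 67891; blocked by process 12345.
HINT:  See server log for query details.
```

The surviving transaction proceeds normally; only the aborted one must be retried.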

Deadlocks happen when transactions acquire locks in different orders.

Warning: Deadlocks cause transaction rollbacks, application errors, and poor user experience. They're especially common in high-concurrency scenarios.

The Problem

Inconsistent lock ordering creates deadlock scenarios:

Incorrect - Inconsistent lock order causes deadlock
-- Transaction 1: Lock users, then orders
BEGIN;
UPDATE users SET last_login = NOW() WHERE id = 1;
-- Holds lock on users.id=1
UPDATE orders SET status = 'processed' WHERE user_id = 1;
-- Blocks: the matching orders row is locked by Transaction 2...

-- Transaction 2: Lock orders, then users (OPPOSITE ORDER!)
BEGIN;
UPDATE orders SET status = 'shipped' WHERE id = 100;
-- Holds lock on orders.id=100
UPDATE users SET points = points + 10 WHERE id = 1;
-- Blocks: users.id=1 is locked by Transaction 1

-- DEADLOCK! Each transaction is waiting for the other's lock.
-- Postgres detects the cycle and aborts one with "ERROR: deadlock detected"

When transactions acquire locks in different orders, a circular wait can form, and under concurrency a deadlock is only a matter of timing.

The Solution

Always acquire locks in a consistent order across all transactions. Sort resource IDs or use a canonical ordering:

Correct - Consistent lock order prevents deadlocks
-- Always lock users BEFORE orders (pick one canonical order and apply it everywhere)
BEGIN;
UPDATE users SET last_login = NOW() WHERE id = 1;
UPDATE orders SET status = 'processed' WHERE user_id = 1;
COMMIT;

-- Different transaction, same lock order
BEGIN;
UPDATE users SET points = points + 10 WHERE id = 1;
UPDATE orders SET status = 'shipped' WHERE id = 100;
COMMIT;
-- No deadlock - both lock users first, then orders

For row-level locks, sort by ID to ensure consistent ordering:

-- Sort IDs to ensure consistent lock order
CREATE OR REPLACE FUNCTION transfer_funds(from_user BIGINT, to_user BIGINT, amount NUMERIC)
RETURNS VOID AS $$
DECLARE
  first_id BIGINT;
  second_id BIGINT;
BEGIN
  -- Always lock lower ID first
  IF from_user < to_user THEN
    first_id := from_user;
    second_id := to_user;
  ELSE
    first_id := to_user;
    second_id := from_user;
  END IF;
  
  -- Lock rows in consistent order. PERFORM discards the result set,
  -- which a bare SELECT cannot do inside PL/pgSQL.
  PERFORM 1 FROM accounts WHERE user_id = first_id FOR UPDATE;
  PERFORM 1 FROM accounts WHERE user_id = second_id FOR UPDATE;
  
  -- Now safe to transfer
  UPDATE accounts SET balance = balance - amount WHERE user_id = from_user;
  UPDATE accounts SET balance = balance + amount WHERE user_id = to_user;
END;
$$ LANGUAGE plpgsql;
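With this function in place (assuming an accounts table keyed by user_id), two concurrent transfers between the same pair of users lock the rows in the same order no matter which direction the money flows:

```sql
-- Both calls lock the lower user_id first, so they cannot deadlock each other
SELECT transfer_funds(42, 7, 100.00);
SELECT transfer_funds(7, 42, 25.00);
```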

Result: Consistent lock ordering eliminates circular waits. Even if transactions run concurrently, they won't deadlock because they acquire locks in the same sequence.
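When all the rows live in one table, a single statement with an explicit ORDER BY is a simpler sketch of the same idea: for a straightforward plan like this, Postgres acquires the row locks in the sorted order (table and column names follow the example above):

```sql
-- Lock both account rows in ascending user_id order in one statement
SELECT user_id, balance
FROM accounts
WHERE user_id IN (1, 2)
ORDER BY user_id
FOR UPDATE;
```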

Handling Deadlocks

Even with consistent ordering, deadlocks can still occur with complex queries. Always handle deadlock errors with retry logic, whether in application code or in the database:

-- Retry logic for deadlock recovery
DO $$
DECLARE
  retry_count INTEGER := 0;
  max_retries INTEGER := 3;
BEGIN
  LOOP
    BEGIN
      -- Your transaction logic here
      UPDATE users SET last_login = NOW() WHERE id = 1;
      UPDATE orders SET status = 'processed' WHERE user_id = 1;
      EXIT; -- Success, exit loop
    EXCEPTION
      WHEN deadlock_detected THEN
        -- The BEGIN ... EXCEPTION block runs in a subtransaction, so the
        -- failed statements are rolled back automatically. Explicit
        -- COMMIT/ROLLBACK is not allowed inside an exception block.
        retry_count := retry_count + 1;
        IF retry_count > max_retries THEN
          RAISE EXCEPTION 'Deadlock retry limit exceeded';
        END IF;
        PERFORM pg_sleep(0.1 * power(2, retry_count)); -- Exponential backoff
    END;
  END LOOP;
END $$;

Tip: Use exponential backoff when retrying after deadlocks. This gives other transactions time to complete and reduces contention.
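Two server settings are also worth knowing here; a session-level sketch (the timeout value is illustrative):

```sql
-- Fail fast instead of queueing indefinitely behind a long-held lock
SET lock_timeout = '2s';

-- How long a transaction waits on a lock before Postgres runs the
-- (relatively expensive) deadlock check; the default is 1s
SHOW deadlock_timeout;
```

A lock_timeout turns a long wait into an immediate, retryable error, which is often preferable to stalling user-facing requests.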


Check Your Project Automatically

Connect your Supabase project to detect potential deadlock scenarios and locking issues.

Start Supabase Audit
