Why Database Performance Matters
Your database is often the bottleneck. We've seen applications go from 2-second page loads to 200ms just by optimizing PostgreSQL. Here are the techniques that make the biggest difference.
1. Use EXPLAIN ANALYZE
Before optimizing, understand what's happening:
EXPLAIN ANALYZE SELECT * FROM orders
WHERE customer_id = 123 AND status = 'pending';
Look for sequential scans on large tables, large gaps between estimated and actual row counts, and nested-loop joins over large datasets.
2. Index Strategically
Don't index everything. Focus on:
- Columns in WHERE clauses
- Columns in JOIN conditions
- Columns in ORDER BY
- Foreign keys
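Note that PostgreSQL indexes primary keys automatically but does not index foreign key columns, so those usually need an explicit index. A sketch using the orders table from the examples below:

```sql
-- Without this, deletes on the parent table trigger sequential scans here
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
```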
Composite Indexes
Column order matters: a composite index can serve any query that filters on a leftmost prefix of its columns, so put the columns you filter with equality conditions first:
CREATE INDEX idx_orders_customer_status
ON orders(customer_id, status);
3. Use Partial Indexes
Index only the rows you query:
CREATE INDEX idx_orders_pending
ON orders(created_at)
WHERE status = 'pending';
4. Optimize Connection Pooling
PostgreSQL connections are expensive. Use PgBouncer or your application's connection pool:
- Set pool size to 2-3x CPU cores
- Use transaction-level pooling for most apps (note it breaks session-level features such as prepared statements and advisory locks)
- Monitor connection usage
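A minimal pgbouncer.ini sketch showing transaction-level pooling; the host, database name, pool sizes, and file paths are placeholder assumptions to adapt for your environment:

```ini
[databases]
; placeholder connection string; adjust host and database names
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction
default_pool_size = 20
max_client_conn = 500
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
```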
5. Tune Memory Settings
shared_buffers = 25% of RAM
effective_cache_size = 75% of RAM
work_mem = 256MB (for complex queries; note this is allocated per sort or hash operation, per connection, so keep it much lower on high-connection workloads)
maintenance_work_mem = 512MB
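These can be set in postgresql.conf or via ALTER SYSTEM. As a sketch, the absolute values below assume a dedicated 16 GB server; most settings apply after a config reload, but shared_buffers requires a full restart:

```sql
ALTER SYSTEM SET shared_buffers = '4GB';
ALTER SYSTEM SET effective_cache_size = '12GB';
ALTER SYSTEM SET work_mem = '256MB';
ALTER SYSTEM SET maintenance_work_mem = '512MB';
SELECT pg_reload_conf();  -- shared_buffers still needs a server restart
```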
6. Vacuum Regularly
PostgreSQL's MVCC creates dead tuples. Ensure autovacuum is working:
SELECT relname, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
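If a table shows many dead tuples and a stale last_autovacuum, a manual pass reclaims space and refreshes planner statistics:

```sql
-- VERBOSE reports what was cleaned; ANALYZE updates statistics afterwards
VACUUM (VERBOSE, ANALYZE) orders;
```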
7. Avoid SELECT *
Only fetch columns you need:
-- Bad
SELECT * FROM users WHERE id = 1;
-- Good
SELECT id, name, email FROM users WHERE id = 1;
8. Use Proper Data Types
- Use UUID instead of VARCHAR for IDs
- Use TIMESTAMP WITH TIME ZONE
- Use NUMERIC for money, not FLOAT
- Use JSONB, not JSON
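A table sketch putting these recommendations together (the schema itself is illustrative):

```sql
CREATE TABLE payments (
    id         UUID PRIMARY KEY DEFAULT gen_random_uuid(),  -- built in since PostgreSQL 13
    amount     NUMERIC(12, 2) NOT NULL,  -- exact decimal, no float rounding for money
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT now(),
    metadata   JSONB  -- binary JSON: indexable and faster to query than JSON
);
```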
9. Batch Operations
Insert/update in batches, not one by one:
INSERT INTO orders (customer_id, amount)
VALUES (1, 100), (2, 200), (3, 300);
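The same pattern works for batched updates by joining against a VALUES list (column names here are hypothetical):

```sql
UPDATE orders AS o
SET amount = v.amount
FROM (VALUES (1, 110), (2, 210), (3, 310)) AS v(id, amount)
WHERE o.id = v.id;
```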
10. Monitor and Alert
Set up monitoring for:
- Slow queries (log queries > 100ms)
- Connection count
- Cache hit ratio (should be > 99%)
- Replication lag
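As a starting point, the cache hit ratio can be read from pg_stat_database, and slow-query logging enabled with a single setting:

```sql
-- Buffer cache hit ratio across all databases; aim for > 99%
SELECT round(sum(blks_hit) * 100.0
             / NULLIF(sum(blks_hit) + sum(blks_read), 0), 2) AS cache_hit_pct
FROM pg_stat_database;

-- Log any statement slower than 100 ms (takes effect after a config reload)
ALTER SYSTEM SET log_min_duration_statement = 100;
```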
Conclusion
Database optimization is iterative. Start with EXPLAIN ANALYZE, fix the biggest issues, measure, repeat.
Need help optimizing your database? Book a free consultation.