Presented by:

Greg Dostatni

Command Prompt

Greg is an IT professional with over 20 years of experience in higher education. Starting as a developer, he worked on software to manage Search and Rescue incidents and on urban crowd-movement simulations. He has a background as a DBA and system administrator working with Oracle and PostgreSQL on Solaris and Linux. Greg later transitioned to building and managing applications and services for a large university in Canada, including designing, developing, and managing large shared database environments. Greg is committed to using technology to solve problems, both at home and in professional settings.

No video of the event yet, sorry!

PostgreSQL’s long-standing recommendation to allocate 25% of system memory to shared_buffers dates back to version 8.3 (released in 2008), when typical system memory was measured in a handful of gigabytes. Modern servers can provision hundreds of gigabytes, and it is not unheard of to come across database servers with 0.5 TB of RAM or more. The relationship between memory size, workload characteristics, and buffer pool performance is no longer obvious.
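For concreteness, the classic guideline amounts to a single line in postgresql.conf. The sketch below applies the 25% rule to a 128 GB server like the test platform described next; the 32GB figure is purely illustrative, not a recommendation:

    # postgresql.conf -- the classic 25% rule applied to a 128 GB server
    # (illustrative only; this talk questions exactly this guideline)
    shared_buffers = 32GB

    -- verify the running value from psql
    SHOW shared_buffers;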

We present an empirical investigation conducted on a dedicated 128 GB RAM test platform running PostgreSQL 18. A custom synthetic benchmark suite was built to emulate realistic multi-table join workloads, with configurable query mixes spanning OLTP-style point lookups, complex analytical joins, and mixed read-write patterns.
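The abstract does not include the benchmark code itself. As a rough sketch of what a configurable query mix can look like, here are two hypothetical pgbench custom scripts (the schema and table names are invented for illustration):

    -- point_lookup.sql: OLTP-style point lookup (hypothetical schema)
    \set id random(1, 10000000)
    SELECT * FROM orders WHERE order_id = :id;

    -- join_agg.sql: multi-table analytical join (hypothetical schema)
    \set cust random(1, 100000)
    SELECT c.name, sum(oi.amount)
    FROM customers c
    JOIN orders o       ON o.customer_id = c.customer_id
    JOIN order_items oi ON oi.order_id   = o.order_id
    WHERE c.customer_id = :cust
    GROUP BY c.name;

Weights on the pgbench command line then control the mix, e.g. pgbench -f point_lookup.sql@90 -f join_agg.sql@10 for a 90/10 split.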

The experiment matrix investigated the effects of huge_pages, workload type, and working-set size, i.e., the common case where only a subset of the data is regularly accessed. A configuration sketch for the huge_pages dimension follows.
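As background: enabling huge pages takes one PostgreSQL setting plus an operating-system reservation. A minimal sketch for Linux, assuming PostgreSQL 15 or later for the sizing query:

    # postgresql.conf
    huge_pages = on    # fail at startup if huge pages are unavailable;
                       # 'try' (the default) silently falls back to normal pages

    # Shell (Linux): ask the server binary how many huge pages it needs,
    # then reserve them before starting PostgreSQL
    postgres -D $PGDATA -C shared_memory_size_in_huge_pages
    sysctl -w vm.nr_hugepages=<value reported above>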

The results suggest that the simple 25% guideline does not hold for large-memory systems: actual behavior depends strongly on configuration options and workload characteristics.
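One standard way to relate these findings to your own workload (a generic diagnostic, not part of the talk's benchmark suite) is to check how often block requests are served from shared_buffers:

    -- Buffer cache hit ratio for the current database.
    -- Note: a "miss" here may still be served from the OS page cache,
    -- which is part of why sizing shared_buffers is subtle.
    SELECT datname,
           blks_hit,
           blks_read,
           round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4)
             AS buffer_hit_ratio
    FROM pg_stat_database
    WHERE datname = current_database();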

Duration: 50 min
Conference: Postgres Conference: 2026
Track: Ops
Difficulty: Medium