Dealing with Gigantic Tables
Five years ago, Linas found himself in front of a massive PostgreSQL database, and he has grown to love PostgreSQL's maturity and stability ever since.
One of the databases I work on belongs to an academic project, and academia is notorious for its dislike of deleting data - in its eyes, every single byte has "future research potential," so nothing is to be purged at any cost. Research datasets therefore tend to grow to colossal sizes, and normal database management practices no longer apply - one has to assemble one's own DBA strategy from scraps of information on mailing lists, RhodiumToad's IRC logs, and creative hacks of varying nastiness thought up in the shower.
In this talk, I present my own stash of tricks for dealing with large (1+ TB, 1+ billion rows) tables:
- Real and imaginary reasons to partition large tables
- Gradually partitioning large tables without any downtime
- Partitions and the query planner
- Continuously backing up large tables using volume snapshots
- Adding new columns with DEFAULT values to large tables
- Large indexes, index bloat, and dirty tricks on how to make indexes smaller
- Dumping large tables
- Replicating huge datasets
- 2018 October 15 14:00 PDT
- 20 min
- Winchester 2
- Silicon Valley