1 Billion Row Challenge: Comparing Postgres, DuckDB, and Extensions
Presented by:
Ryan Booz
Ryan is an Advocate at Redgate focusing on PostgreSQL. He has worked as a PostgreSQL advocate, developer, DBA, and product manager for more than 22 years, primarily with time-series data on PostgreSQL and the Microsoft Data Platform.
Ryan is a long-time DBA, having started with MySQL and Postgres in the late 90s. He spent more than 15 years working with SQL Server before returning to PostgreSQL full-time in 2018. He's at the top of his game when he's learning something new about the data platform or teaching others about the technology he loves.
No video of the event yet, sorry!
In early 2024, the 1 Billion Row Challenge (1BRC) swept through the Java community as developers competed to find the most efficient way to process a file with 1 billion rows of data. Unsurprisingly, many database communities quickly took on the same challenge with varying results. Postgres, in many cases, performed the worst without close attention to settings and efficient resource utilization. But, with a little more effort, could it compete head-to-head?
In this session, we'll look at the original challenge and how to approach it with vanilla Postgres beyond the basics. Next, we'll explore how the increasingly popular in-memory analytics database, DuckDB, handles the same challenge. Finally, we'll look at recent opportunities to integrate the two databases, pairing Postgres with a powerful analytical engine for the best of both worlds.
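For context, the challenge's task is simple to state: given a file of `station;temperature` lines, report the min, mean, and max temperature per station. A minimal plain-Python sketch of that computation (with made-up sample rows, and no attempt at the performance tricks the session discusses) looks like this:

```python
# Illustrative sketch of the 1BRC aggregation: per-station min/mean/max
# over "station;temperature" records. Sample data is invented; a real
# run reads a 1-billion-line file instead of this small list.
from collections import defaultdict

lines = [
    "Hamburg;12.0",
    "Bulawayo;8.9",
    "Hamburg;34.2",
    "Palembang;38.8",
]

# Per station: [min, sum, count, max]
stats = defaultdict(lambda: [float("inf"), 0.0, 0, float("-inf")])
for line in lines:
    station, raw = line.split(";")
    t = float(raw)
    s = stats[station]
    s[0] = min(s[0], t)
    s[1] += t
    s[2] += 1
    s[3] = max(s[3], t)

# Challenge output format: station=min/mean/max, sorted by station name
for station in sorted(stats):
    mn, total, n, mx = stats[station]
    print(f"{station}={mn:.1f}/{total / n:.1f}/{mx:.1f}")
```

In a database, the same aggregation is a single `GROUP BY` query; the interesting part, and the subject of the talk, is how fast each engine can scan, parse, and aggregate a billion such rows.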
- Date:
- Duration: 50 min
- Room:
- Conference: Postgres Conference 2025
- Language:
- Track: Dev
- Difficulty: Medium