Presented by:

Mike Fowler

Claranet

Mike has been using PostgreSQL since 7.4, contributed some XML features in 9.1 and made a handful of bugfixes to the JDBC driver. He's spoken at a number of conferences and meetups about using & migrating to PostgreSQL from Oracle & MySQL. Currently he's the Principal Data Engineer in the Public Cloud Practice at Claranet where he helps organisations adopt Big Data and Machine Learning, often as part of cloud migration projects.

You've made the decision to migrate to PostgreSQL, converted your schemas and fixed the syntax differences, but how do you move your data? For a long time your only option was a careful dump and restore, but many businesses can no longer afford such a lengthy outage. Over the last few years many database platforms have exposed change data capture (CDC) mechanisms that allow you to replicate logically wherever you wish. That still leaves one puzzle piece: moving the accumulated historic data. This is the piece Debezium fills.

In this talk we'll look at Debezium, an open source Kafka Connect plugin that performs an initial read of a source data system and then uses CDC to provide a continual stream of changes. After discussing what becomes possible by using Debezium, we'll demonstrate migrating the Sakila MySQL sample database to PostgreSQL.
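To give a flavour of the "initial read, then stream" behaviour described above, here is a sketch of a Debezium MySQL source connector configuration, using property names from the Debezium 1.x MySQL connector; the hostnames, credentials, and connector name are hypothetical placeholders, and exact property names vary between Debezium versions:

```json
{
  "name": "sakila-source",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql.example.com",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "secret",
    "database.server.id": "184054",
    "database.server.name": "sakila",
    "database.include.list": "sakila",
    "snapshot.mode": "initial",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.sakila"
  }
}
```

With `snapshot.mode` set to `initial`, the connector first snapshots the existing Sakila tables (the accumulated historic data) and then switches to streaming row-level changes from the MySQL binlog into Kafka topics, from where a sink connector can apply them to PostgreSQL.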

Duration: 50 min
Conference: Postgres Conference 2020
Track: Migration
Difficulty: Easy