Joshua D. Drake Blog Posts

On occasion, professional developers will drop into the PostgreSQL.org mailing lists, meetups, and conferences to ask the question, “Why isn’t PostgreSQL development on GitHub?” In an effort to see whether the demand was real and not just anecdotal, we ran a poll/survey over several social media platforms that asked a simple question:

 

Should PostgreSQL development move to GitHub?

    • Yes
    • No
    • No, but a move to something like GitLab would be good

 

We received well over 300 responses, and the majority (75%+) chose a move to GitHub or to something like GitHub. This was an unscientific poll, but it does point to a few interesting topics for consideration:

 

  1. We need to recognize that the current contribution model does work for existing contributors. We also need to have an honest discussion about what that means for the project as contributors age, change employment, mature in their skill sets, etc.
  2. Of the people who argued in comments against the move to a service, only one is a current contributor to PostgreSQL.org core code. The rest were former code contributors or people who contribute in other ways (advocacy, system administration, etc.).
  3. Would a move to GitHub or a similar option produce a higher rate of contribution?

 

This poll does not answer point #3; it only provides a data point that people may desire a modern collaboration platform. The key takeaway from the conversation about migrating to GitHub or a similar service is that the coming generation of developers uses technology such as Slack and Microsoft Teams. They expect a bug/issue tracker. They demand simplicity in collaboration, and most importantly they will run a cost-benefit analysis to determine whether the effort to contribute is a net positive.

 

It should also be considered that this is not just about individual potential contributors. Many corporations, big and small, rely on the success of PostgreSQL. Those corporations will not contribute as much directly to PostgreSQL if the cost-benefit analysis comes out a net negative. They will instead contribute through other, more productive means that produce a net positive. A good example of this analysis is the proliferation of external projects such as pg_auto_failover and Patroni, and the lack of direct contribution from innovative extension-based companies.

Do we need a culture shift within PostgreSQL?

There are those within the PostgreSQL.org community who would suggest that we do not need a culture shift, but that view does not take into account the very clear market dynamics driving the growth of PostgreSQL, Postgres, and the global ecosystem. It is true that 20 years of hard work by PostgreSQL.org started the growth; it is also true that the majority of growth in the ecosystem and community now comes from products such as Greenplum, Aurora, Azure, and Timescale. That growth comes from the professional community, and that ecosystem will always perform a cost-benefit analysis before contributing.

 

It is not that we should create radical rifts or disrupt our culture. It is to say that we must evolve and shift our community thinking. We need to be able to consider the big picture. A discussion should never start as opposition to change. The idea of change should be an open discussion about possibility and vision. It should always include whether the change is a good idea, and it should always avoid visceral reactions of “works for me,” “no,” or “we tried that 15 years ago.” Those reactions are immature and lack the very things the community needs to continue to grow: positivity, inclusion, vision, and inspiration.

Joshua D. Drake     May 13, 2019

 
No year has been better for PostgreSQL or the Postgres Ecosystem than 2017. The continued adoption and growth of open source communities over the last 40 years shows a mature and strong ecosystem. It is true what they say, "Middle age is the best time of your life." Here are just a few of the great results of 2017:
  • Amazing work from PostgreSQL.org with the release of v10, which brought much-sought-after technologies such as native table partitioning, integrated logical replication, and mature support for federated tables.
  • Pivotal announced multi-cloud support for Greenplum, their open source, big data, MPP Postgres variant.
  • Increased support and features from cloud industry heavyweights AWS, Compose.IO, and Microsoft. Microsoft released Azure Database for PostgreSQL, Compose increased their high availability options, and AWS announced the availability of Amazon Aurora with PostgreSQL compatibility.
  • Enterprise consulting and support continued to grow, backed by PostgreSQL.org Major Sponsors 2ndQuadrant and OpenSCG.
2017 was also the year we saw the launch of the International Postgres Conference, PostgresConf. The PostgresConf project is a globally aware, ecosystem-centric conference focused on People, Postgres, Data. The project organized more events this year than any other Postgres advocacy and education project. In the United States there were PGConf US (now PostgresConf US), an Austin Mini, Philadelphia, two NYC Minis, Seattle, and finally a full Austin event. The project also hosted PostgresConf South Africa and has several international events planned for 2018.
 
The PostgresConf International efforts wouldn't be possible without the fundamental support of the community and our ecosystem partners:

We have nothing but confidence in the continued growth of PostgreSQL and the Postgres-related ecosystem through 2018. Thank you to the PostgreSQL.org community, our ecosystem partners, and the global Postgres ecosystem community; without you, our efforts would not continue to succeed as a volunteer-organized, non-profit Postgres conference. We are looking forward to a fantastic 2018, centered on People, Postgres, Data.


Joshua D. Drake     January 08, 2018

With more than 200 submissions and approximately 80 slots to fill, this has been the most difficult schedule to arrange in the history of PostgresConf. We wanted to include the vast majority of the content we received. It is that level of community support that we work so hard to achieve, and we are thankful to the community for supporting PostgresConf. There is no doubt that the number one hurdle the community must overcome is effective access to education on Postgres. The US 2018 event addresses this with two full days of training and three full days of breakout sessions, including the Regulated Industry Summit and Greenplum Summit.


For your enjoyment and education, here is our almost-granite schedule!

See something you like? Then it is time to buy those tickets!

This event would not be possible without the continued support from the community and our ecosystem partners:

Joshua D. Drake     February 22, 2018

We are having yet another PGConf Mini in NYC. The event is scheduled for December 14th, 2017, and Work-Bench is hosting:

 
 
The event is part of the PGConf Mini series and is free to attend. The PGConf Mini series works directly with user groups and external communities to organize events for the local community. The events are held as a larger meetup style event with networking opportunities and up to 4 presentations. The current agenda for the latest PGConf Mini: NYC is:
 
Agenda: 
 
• 6:30 - 7:00: Jonathan Katz, (TBD), PostgreSQL Contributor and PGConf Chair Emeritus

Efficiently and Safely Propagate Data Changes Without Triggers!

 

Prior to PostgreSQL 9.4, the primary way to distribute data-driven changes across multiple tables was to use triggers. While triggers guarantee that these changes will be propagated, they can have a significant impact on application performance, both technically and in development time (see: "debugging"). PostgreSQL 9.4 introduced logical decoding, which provides a way to stream all changes in a database to a consumer. Using a logical decoder, you can read all changes that are made in a table into your programming language of choice to perform many tasks: cache invalidation, data propagation, submitting changes to remote services, and more. Many PostgreSQL drivers, such as psycopg2 and JDBC, support the logical replication protocol, which lets you easily stream your database changes to be manipulated using your favorite programming language. This talk will demonstrate how you can set up logical decoding for your application, look at architecture strategies for working with a logical decoder, and look at a case study that shows how using logical decoding led to a big performance boost over a similar trigger-based system.
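As a taste of the consumer side described above, here is a minimal Python sketch that parses the textual output of PostgreSQL's built-in `test_decoding` plugin (as returned by `pg_logical_slot_get_changes()`) into structured change events. The table and values below are hypothetical, and a real consumer would of course read from a live slot rather than a list:

```python
import re

# Matches test_decoding row lines such as:
#   table public.users: INSERT: id[integer]:1 name[text]:'alice'
ROW_RE = re.compile(r"^table (\S+): (INSERT|UPDATE|DELETE): (.*)$")
COL_RE = re.compile(r"(\w+)\[[^\]]+\]:('(?:[^']|'')*'|\S+)")

def parse_change(line):
    """Turn one test_decoding output line into a (table, op, columns) tuple.

    Returns None for transaction markers such as 'BEGIN 565' / 'COMMIT 565'.
    """
    m = ROW_RE.match(line)
    if m is None:
        return None  # BEGIN/COMMIT marker, not a row change
    table, op, payload = m.groups()
    cols = {}
    for name, value in COL_RE.findall(payload):
        cols[name] = value.strip("'")
    return (table, op, cols)

# A hypothetical feed of lines, as they might arrive from the slot
feed = [
    "BEGIN 565",
    "table public.users: INSERT: id[integer]:1 name[text]:'alice'",
    "COMMIT 565",
]
changes = [c for c in (parse_change(l) for l in feed) if c is not None]
```

From here, each `(table, op, columns)` event can drive cache invalidation or be forwarded to a remote service, which is the pattern the talk explores.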
 
• 7:00 - 7:30:  Kevin Jernigan, Senior Product Manager, Amazon
Technical Architecture of Postgres Aurora 
 
Amazon Aurora is a cloud-optimized relational database that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. The recently announced PostgreSQL compatibility, together with the original MySQL compatibility, is perfect for new application development and for migrations from overpriced, restrictive commercial databases. In this session, we’ll do a deep dive into the new architectural model and distributed systems techniques behind Amazon Aurora, discuss best practices and configurations, look at migration options, and share customer experience from the field.
 
• 7:30 - 8:20: Joshua (JD) Drake, Postgres Expert, Lead Consultant at Command Prompt, Inc. and Co-Chair of PGConf (POSTPONED due to flight cancellation)
The Power of Postgres Replication
 
With PostgreSQL v10, a new replication engine has come to town. Let's explore Postgres logical replication: how to use it, how to optimize it, and how to make it best fit your organization. We will also discuss its interactions with external tools as well as binary replication and features such as Hot Standby.
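At its core, the v10 feature this talk covers boils down to a publication on the primary and a subscription on the replica. As a minimal sketch, here are two small Python helpers that build those statements (the table, connection string, and names are hypothetical; in practice you would run the resulting SQL through psql or your driver of choice):

```python
def create_publication_sql(name, tables):
    """Build the CREATE PUBLICATION statement run on the publishing server."""
    return f"CREATE PUBLICATION {name} FOR TABLE {', '.join(tables)};"

def create_subscription_sql(name, conninfo, publication):
    """Build the CREATE SUBSCRIPTION statement run on the subscribing server."""
    return (f"CREATE SUBSCRIPTION {name} "
            f"CONNECTION '{conninfo}' "
            f"PUBLICATION {publication};")

# Hypothetical example: replicate public.orders to a reporting replica
pub = create_publication_sql("orders_pub", ["public.orders"])
sub = create_subscription_sql(
    "orders_sub", "host=primary.example.com dbname=shop", "orders_pub")
```

Unlike binary replication, this replicates row changes for the listed tables only, which is what makes it useful for selective replication and cross-version setups.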
 
Joshua D. Drake     November 20, 2017

Pivotal Sponsor Highlight Blog for PostgresConf 2019

 

Written by:

Jacque Istok, Head of Data, Pivotal

 

1. Greenplum has its own community; what do you hope to achieve by joining the Postgres community and PostgresConf?

Both interest in and adoption of Postgres have skyrocketed over the last two years, and we feel fortunate to be a part of the extended community. We have worked very hard to bring the base version of Postgres within Greenplum up to more current levels and to be active in the Postgres community. We see Greenplum as a parallel (and analytics-focused) implementation of Postgres, and we encourage the community to continue to embrace both the technology and the goal of the Greenplum project, which is Postgres at scale.

 

2. Are you planning to provide any new tech (PG features, etc.)?

This year we plan to announce several new things for both Greenplum and Postgres. We’re introducing new innovations in our cloud offerings in the marketplaces of AWS, Azure, and GCP. We also have major news about both our natural language at-scale analytics solution based on Apache Solr, and our multi-purpose machine learning and graph analytics library Apache MADlib. The next major release of Greenplum is a major focus as well, differentiating Greenplum from each of its competitors and bringing us ever closer to the latest versions of Postgres.

 

3. Are there any rising stars in the community you’d like to give props to?

While it seems a little self-serving, I would like to take the opportunity to give props to the Pivotal Data Team. This team is a 300+ person worldwide organization that helps our customers, our prospects, and the community solve real-world and really hard data problems—solved in part through Postgres technology. They all attack these use cases with passion and truly make a difference in the lives of the people that their solutions touch. I couldn’t wish to work with a finer group.

 

4. What is the number one benefit you see within Postgres that everyone should be aware of?

The number one benefit of Postgres is really its flexibility. This database chameleon can be used for SQL, NoSQL, Big Data, microservices, time series data, and much more. In fact, our latest analytic solution, MADlib Flow, leverages Postgres as an operational engine. For example, machine learning models created in Greenplum can be pushed into a RESTful API as part of an agile continuous integration/continuous delivery pipeline easily and efficiently—making Postgres the power behind what I still like to think of as #DataOps.

 

5. What is the best thing about working with the Postgres community?

 

I deeply admire the passion and consistency of the community behind Postgres, constantly and incrementally improving this product over decades. And because Greenplum is based on Postgres, we get to interact with this vast community of talent. We are also able to more seamlessly interact with ecosystem products that already work with Postgres, making the adoption of Greenplum that much easier.

 

6. Tell us why you believe people should attend PostgresConf 2019 in March.

 

PostgresConf is going to be awesome, and I can’t wait for it to start! With Pivotal, Amazon, and EnterpriseDB headlining as Diamond sponsors, Greenplum Summit (along with multiple other summits), and high-quality speakers and content across the board, this year’s PostgresConf promises to be bigger and better than ever and surely won’t disappoint.

 

We’re thrilled to be back to present the second annual Greenplum Summit on March 19th at PostgresConf. Our theme this year is “Scale Matters”, and what we’ve seen with our customers is that every year it matters more and more. Our users are part of organizations that are generating tons of data and their need to easily and quickly ingest and interrogate all of it is paramount. This is true even more now than ever before as the insights that can be found not only help differentiate them from their competitors, but are also used to build better products and increase customer loyalty.

 

The day will be filled with real-world case studies from Greenplum users including Morgan Stanley, the European Space Astronomy Centre, Insurance Australia Group, Purdue University, Baker Hughes (a GE company), Conversant, and others, plus presentations and deep-dive tech sessions for novices and experts alike.

Joshua D. Drake     February 14, 2019

PgConf US 2017 has now concluded. We had a record number of attendees, a record number of sponsors, and a record number of talks. The conference rocked. It was only made possible by a team of highly talented and dedicated volunteers. Thank you to those volunteers.



As of this writing, we are no longer the largest PostgreSQL Conference in North America. We are the largest PostgreSQL Conference. mic drop

Members of the South African Community

We attribute our growth directly to our community. We believe that there is no better community than the PostgreSQL community. A welcoming, inclusive community that shares knowledge and a common goal: Make PostgreSQL the database you use. It is because of this common goal that not only does our conference succeed, but the majority of PostgreSQL events across the globe succeed as well. It is why over 60% of our attendees have been using PostgreSQL for less than 3 years. It is why sponsors such as Amazon Web Services, EnterpriseDB, OpenSCG, and 2ndQuadrant consistently support the conference. It is why a brand new community member flew last minute from Texas the night before the conference (more on this new community member later). It is why the South African community shows up, every year.

Thank you to our speakers
There are quite a few knobs that get turned to run a conference, and although it is an amazing experience to be a part of, it takes an enormous amount of resources (financial and physical) to execute it in a manner that is beneficial to all parties.

We think we did a pretty good job this year. This is not a pat on the back; we have more work to do. We want speakers to have everything they need, including scheduled mentor times for first-time speakers. We want speaking at PgConf US to be a pleasant, fun experience and a growth opportunity.

Thank you to our sponsors

We want sponsors to get better visibility. This was our first time at the current location, and the layout wasn't perfect. We also want to offer "sponsor training." The PostgreSQL community is different from many others, and sponsors (especially those relatively new to the community) should be able to leverage the expertise of the organizers to learn how best to work within the community. This would allow them to generate the business that makes it worthwhile for them to continue to sponsor.

We want coffee in the morning. Yes, the Chairs felt that coffee in the morning wasn't a requirement. Yes, the Chairs failed in a glorious fashion. We listen, we learn. There will be coffee in the morning at the next PgConf National.

There is more but that will wait for another day.

tl;dr: It is with the sincerest of hearts that the Chairs, Organizers, and Volunteers thank the community for supporting our efforts to bring the best PostgreSQL Conference experience possible.

Joshua D. Drake     April 04, 2017

Oh my goodness, Data Days!


When we rescheduled PGConf US Local: Seattle from August to November, we did so due to attendee feedback. It was amazing: people didn't want to go to a conference on a Saturday in August (I wonder why). I know, we should have known, but it was a new model and we tried. We are extremely pleased with the results of the shift in schedule. The conference now takes place during "professional hours" on "professional days."


Because of the shift and sponsor support, we have added three new tracks, reopened the CFP, and created Data Days. The new tracks are: Big Data, AWS/Cloud, and Data Science. As these three content areas are not Postgres-specific, we are also inviting all communities within this realm to submit presentations. Let's turn PGConf US Local: Seattle into not only the best West Coast Postgres conference but also the most highly integrated, heterogeneous data event in the Pacific Northwest.

CFP Dates:

  • Open until: 10/15/2017
  • Notification:  10/18/2017
  • CFP Link
Joshua D. Drake     September 19, 2017

The presentation includes an introduction to and setup of consul as the means of providing highly available PostgreSQL in local and geographically disparate data centers or cloud providers. The presentation covers:

*) Introduction to consul and its architecture
*) Setup of a single consul cluster
*) Setup for a few sample database instances (OLAP and OLTP)
*) Firewall requirements
*) Integration with bind, djbdns, and dnsmasq
*) Setup geographic failover to two different data centers and cloud providers
*) Various Best Practices tips and suggestions
*) Q&A
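As a hedged illustration of the consul side of the setup above, a minimal service definition for advertising a PostgreSQL instance might look like the following (the service name, address, and check command are assumptions for this sketch):

```json
{
  "service": {
    "name": "pg-primary",
    "port": 5432,
    "check": {
      "args": ["pg_isready", "-h", "127.0.0.1", "-p", "5432"],
      "interval": "10s",
      "timeout": "2s"
    }
  }
}
```

Clients can then resolve `pg-primary.service.consul` through consul's DNS interface, which is where the bind, djbdns, and dnsmasq integration in the outline comes into play.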

Joshua D. Drake     April 25, 2017

PostgresConf US 2018 is in 9 days. Here is the obligatory "buy your tickets" reminder! If you look around (a Google search of Gold sponsor Google Cloud is a good place to start), you will find a lot of discount codes.

In 2017 we launched a community-wide effort to better recognize contributors, not only for the conference but for the wider Postgres community. We continued this effort in 2018 and are pleased to have many speaker profiles available, with more being published every day.

As one of the Chairs of PostgresConf, I am honored by the resounding support from sponsors, speakers, and volunteers to help create a fantastic event for all attendees. It has been a pleasure working toward the common goal of creating a global, non-profit, Postgres Conference series.

It is late August, 2019. This is the time when we are usually prepping for the very busy fall season and not much else. However, this is the Year of Postgres, and everyone is driving 200 MPH (322 km/h) down the ecosystem highway. We are going to kick off this newsletter with some exciting information about the community.

Events

PostgresConf has launched Digital Events! The goal of Digital Events is to open our education platform year-round to all members of the community. Our first series of events will be held with our ecosystem partner YugabyteDB and their “Distributed SQL Webinar Series,” a series of free-to-attend webinars exploring Distributed SQL with leaders in the field.

 

PostgresConf Silicon Valley tickets are going at a brisk pace, and half-day trainings are almost sold out. Register today to reserve your seat before prices go up on September 1st!

 

Right after Silicon Valley, PostgresConf South Africa is kicking off. This conference has grown by leaps and bounds over the last two years. We highly recommend attending for anyone who can!

 

PGConf.IN (India) has announced that their conference will be held in February 2020!

Meetups

We have seen the launch of three new meetups this month:

  • Los Angeles Postgres The first meetup is planned for late October or early November as we continue to build the Silicon Beach community.
  • Toronto Postgres Similar to Los Angeles, the first meetup is planned for late October or early November.
  • Charm City Postgres This meetup was formed by long-time community member Robert Treat.

 

Several other meetups are growing quickly as well.

 

Interested in speaking or hosting a meetup? Contact us and we’ll connect you with the right people! 

Learn

Here is a short, great introductory tutorial on running PostgreSQL in Docker by Igal Sapir, Los Angeles Postgres organizer. Everybody has 13 minutes.

 

Shawn Wang from our friends at Highgo has provided an insightful write-up on AES performance.

Ecosystem

TimescaleDB is running a “State of Postgres” survey. Please take five minutes and help them out! They have also announced a new Distributed Timeseries product.

 

VMware has just acquired Pivotal, the company behind Greenplum and a longtime PostgreSQL supporter.

Postgresql.org

PostgreSQL versions 11.5, 10.10, 9.6.15, 9.5.19, 9.4.24, and 12 Beta 3 are now out in the wild, addressing several important security issues and bugs.

 

---

 

Have news you’d like included in future newsletters? Contact us.

Joshua D. Drake     August 23, 2019