The Chairs (myself, Jim Mlodgenski, and Amanda Nystrom) have recently decided to bring some visibility to charities that are close to our hearts. They are listed below:

  • Joshua Drake: Navajo Water Project. The Navajo Nation is approximately the size of West Virginia and has a population of over 150,000 people (300k in the tribe). Anywhere from 15% to 40% of residents do not have access to running water. The Navajo Water Project aims to bring clean water to every person and family through the support of those who donate. 
  • Jim Mlodgenski: St. Jude Children’s Research Hospital. The hospital is one of the premier research hospitals for cancer and other life-threatening illnesses affecting some of our most vulnerable people. Approximately one in 285 children in the U.S. will be diagnosed with cancer before their 20th birthday. Through donations, St. Jude provides treatment to children with cancer and actively dedicates resources to researching a cure. 
  • Amanda Nystrom: ASPCA. The American Society for the Prevention of Cruelty to Animals (ASPCA) was the first humane society established in North America, with the goal of securing kind and respectful treatment of animals under the law. Unlike crimes against people, cases of animal abuse are not systematically compiled, but studies have shown a correlation between domestic violence and animal abuse. The ASPCA prevents animal homelessness and actively rescues animals from dangerous and/or cruel situations.

Upcoming Webinars

With the Coronavirus causing the conference market to dry up for 2020, we at Postgres Conference have pivoted to ensure that we continue to provide quality Postgres content to the world of People, Postgres, Data. We have been performing multiple webinars per month. Here is the current schedule and you can register (free) here:


  • May 21, 11am PT: A Deep Dive into PostgreSQL Indexing
  • June 2, 10am PT: How to Move Data from Oracle to Postgres in Near-Real Time
  • June 9, 11am PT: Community vs. Enterprise Open Source – Which is Right for Your Business?
  • June 10, 11am PT: Bring Compression to Postgres at Zero Cost of Performance
  • June 16, 11am PT: Mostly mistaken and ignored PostgreSQL parameters while optimizing a PostgreSQL database
  • June 17, 11am PT: Postgres vs. MongoDB for real-time machine learning on wind turbine data
  • June 30, 11am PT: Deeper Understanding of PostgreSQL Execution Plan: At plan time and run time
  • July 15, 10am PT: Working with JSON Data in PostgreSQL vs. MongoDB

Articles from the community

Coronavirus Resources:

Joshua D. Drake     May 19, 2020

People, Postgres, Data,

Due to the health risks and travel restrictions created by the Coronavirus, we unfortunately have to cancel Postgres Conference 2020, which was to be held at the Marriott Marquis from March 23rd through March 27th, 2020.

We want to thank all of our attendees, partners, and volunteers for the hard work we all put in to try to pull this event off. Sadly, the stars were not aligned this year. We now focus our efforts on Postgres Conference Silicon Valley 2020 and our Digital Events.

Thank you all for your patience and support,

Postgres Conference Chairs

Joshua D. Drake     March 12, 2020

What is the future of Postgres?

When you observe the ecosystem you can’t help but ask yourself where the community and software are going next. It is without question that the future of data will reside in something Postgres. It may be PostgreSQL, Cockroach, Yugabyte, Aurora, Azure, or workload-specific Postgres such as Greenplum. Based on the sheer number of successful software ventures that are based on Postgres, there is no doubt in our minds that it is the future. 

This is why the inclusivity of People, Postgres, Data is vital to the continued success of the community. It is also why we invite all of Postgres to Come As You Are from March 23rd - 27th, 2020 at the Marriott Marquis in Manhattan! 

Isn’t Postgres PostgreSQL?

Yes, and no. It is true that the term Postgres is sometimes used as a short version of PostgreSQL, which allows easier pronunciation of the project and software name. It is also true that PostgreSQL contains a great deal of Postgres code, but it is not technically Postgres. In fact, Postgres predates PostgreSQL by quite a few years and had an interim fork called Postgres95 before the PostgreSQL project was founded. That is why we use Postgres as an inclusive term for all Postgres software, including many projects that some would consider forks. Fun fact: Did you know that Informix incorporated technology from Illustra, a commercial Postgres fork acquired by Informix in the 1990s?

Call for Papers

We are actively seeking people to deliver exceptional educational opportunities at Postgres Conference 2020. Postgres Conference is the perfect opportunity for students, hobbyists, and professionals to exhibit their knowledge in solving problems that are People, Postgres, or Data related. Submit your proposal today.

Instructor-led Digital Training

We have the following training opportunities in November and December:

  • November 12th: PostgreSQL Performance and Maintenance
  • November 14th: Finding and Fixing Slow Queries in PostgreSQL
  • November 21st: PostgreSQL and Kubernetes
  • December 10th: PostgreSQL Replication deployment and best practices
  • December 12th: PGPool-II: Performance and best practices

Register here.


We have the following webinars in November:

  • Nov 13: Designing a Change Data Capture and Two Data Center Architecture for a Distributed SQL Database
  • Nov 14: Yugabyte DB 2.0 Jepsen Test Results and Distributed Transactions Algorithms in Google Spanner, YugabyteDB and CockroachDB
  • Nov 20: Zero Down-Time Oracle to Cloud-Native PostgreSQL Migrations

Find more information here.

Interesting projects

  • PostgREST is a standalone web server that turns your PostgreSQL database directly into a RESTful API. The structural constraints and permissions in the database determine the API endpoints and operations.
  • HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for very high traffic web sites and powers quite a number of the world's most visited ones. Over the years it has become the de-facto standard opensource load balancer, is now shipped with most mainstream Linux distributions, and is often deployed by default in cloud platforms. Related content.
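
As a sketch of how PostgREST derives an API from the schema (the table, columns, and role below are hypothetical, used only to illustrate the idea):

```sql
-- A table exposed through PostgREST; names are illustrative only.
CREATE TABLE todos (
    id    serial PRIMARY KEY,
    task  text NOT NULL,
    done  boolean NOT NULL DEFAULT false
);

-- PostgREST serves endpoints derived directly from the schema, e.g.:
--   GET  /todos                 -> rows visible to the requesting role
--   GET  /todos?done=is.false   -> filtered by a column predicate
--   POST /todos                 -> insert, subject to table permissions

-- Database permissions shape the API: what a role cannot do in SQL,
-- it cannot do over HTTP either.
GRANT SELECT, INSERT ON todos TO web_anon;
```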


Joshua D. Drake     November 07, 2019

Where is your path leading you?


At Postgres Conference Silicon Valley I promised during the launch that after the conference was completed PostgresWarrior and I would be taking a freedom tour to various National Parks. 


For us, our path is serving the community through education and professional and personal development. This happens in many forms including these newsletters.


Recent projects include coordinating a successful webinar series from Yugabyte, creating online live Postgres instructor-led training, and launching a new educational series on PostGIS. This is all happening while the Call for Papers for Postgres Conference 2020 is now open! The ongoing goal is to allow any person to receive the education they need to be successful with People, Postgres, Data year-round.


The current training options from Postgres Conference can be found here:

We have two performance trainings coming up in October:

  • PostgreSQL Performance & Maintenance on October 29th
  • Finding and Fixing Slow Queries on October 30th


Both of these training opportunities sell out at the physical conferences. The content is solid, and at a reasonable price (149.00 USD) it is hard to say no to a few hours of education in your day!


Are you frustrated with the limitations and fragility of Logical Replication in PostgreSQL core? There is a new piece of software on the block called pgcat, and it has an impressive list of features to make your Logical Replication experience exceptional.


Looking for a simple script to help find tuning opportunities for PostgreSQL? The Perl script postgresqltuner may be just what you are looking for. Yes, there really is still an active developer community for the Perl language.


A HyperLogLog data type for PostgreSQL from our friends at Citus. This Postgres module introduces a new data type, hll, which is a HyperLogLog data structure: a fixed-size, set-like structure used for distinct value counting with tunable precision. For example, in 1280 bytes hll can estimate the count of tens of billions of distinct values with only a few percent error.
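
A minimal sketch of how the hll extension is used (table and column names are made up; the extension must be installed first):

```sql
-- Requires the postgresql-hll extension:
CREATE EXTENSION hll;

-- One sketch per day instead of storing every visitor id:
CREATE TABLE daily_uniques (day date, visitors hll);

-- Hash the raw values and aggregate them into a sketch:
INSERT INTO daily_uniques
SELECT day, hll_add_agg(hll_hash_integer(user_id))
FROM visits
GROUP BY day;

-- Estimated distinct visitors per day, from a fixed-size structure:
SELECT day, hll_cardinality(visitors) FROM daily_uniques;
```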


Our partner Heimdall Data has been creating a new type of connection pool that removes a significant limitation within other software such as PgBouncer and PgPool. If you are looking for Enterprise Authentication (Active Directory/LDAP) as well as intelligent pooling for many users (and connections), it may be worth a look. There is a webinar next week on how it all works!

Does your path allow people to “Come as you are?”

In consideration of all of the great news from our community we can’t help but reflect on the blessings we have in the world of Open Source. Remember that Open Source is about exceptionalism, creativity, and most importantly freedom. When communities start restricting these three tenets of Open Source, they are no longer Open Source communities, even if their software is.


The theme for Postgres Conference 2020 in NYC is “come as you are” and we are asserting this mantra throughout our entire community. Over the past few years there has been an influx of toxicity throughout all circles and it is time for civility and grace to return. It is time to remember that we are all human. We all have angels and demons to our personalities. We are all flawed and we are all exceptional in our own way.


"But just because I don't agree with someone on everything doesn't mean that I'm not going to be friends with them. When I say, 'be kind to one another,' I don't only mean the people that think the same way that you do. I mean be kind to everyone."


-- Ellen DeGeneres


(Yes, this happened. No, it wasn’t planned.) 

Just outside of Moab, Utah.


Find YOUR path.

Joshua D. Drake     October 17, 2019

Welcome to "Cultivating DEI", a series in which Postgres community members share their insight and experience about creating a more diverse and inclusive Postgres environment where all are welcome.

Recently I've been thinking a lot about relationships between the PostgreSQL community and the Database research community. To put it bluntly – these two communities do not talk to each other!

There are many reasons why I am concerned about this situation. First, I consider myself as belonging to both of these communities. Even if right now I am 90% in industry, I can't write off my academic past. Writing a scientific paper with the hope of being accepted to a real database conference is something that appeals to me.

Secondly, we want to have quality candidates for database positions. Anyone who has tried recently to fill these positions knows that this is not an easy task. If you are looking at recent college grads, there is almost no chance that you can find somebody who has PostgreSQL experience. Here is where we face the other side of the problem.

The problem is not simply that scientists do not speak at PostgreSQL conferences, and that PostgreSQL developers do not speak at academic conferences. The larger issue is that for many Computer Science (CS) students, their academic research and practical experience do not intersect. They learn about some incredible algorithms, and as part of their coursework they may suggest enhancements to existing algorithms. They then practice their SQL skills with MySQL, which from my observations lacks so many basic features that it can hardly be taken seriously as a data platform.

If students practiced using PostgreSQL, they would have a full-scale, enterprise-ready object-relational database -- not a "light" version, but a robust platform which supports a multitude of index and data types, constraints, procedural languages, and much more.
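
A small illustration of the kind of features the paragraph above refers to (the table and columns are invented for the example):

```sql
-- A few things students rarely meet in a "light" database:
CREATE TABLE events (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL,       -- documents inside a relational table
    during  tstzrange NOT NULL,   -- a range type
    -- a constraint backed by a GiST index: no two overlapping bookings
    EXCLUDE USING gist (during WITH &&)
);

-- A GIN index that searches inside the JSON documents themselves:
CREATE INDEX ON events USING gin (payload);
```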

I've heard from several professors that MySQL is okay for "learning SQL." I want to ask: what does "learning SQL" mean? Is it just learning how to write syntactically correct SQL? One contributing factor to the problem is that MySQL comes preinstalled on many laptops, integrated with basic tools that allow building websites. It is integrated with WordPress. There is no reason for PostgreSQL not to have similar support, but it is not in place.

This is particularly frustrating when you recognize how much database research was completed using Postgres, for Postgres, or with the help of Postgres; R-tree and GiST indexes, for example. Also, the SIGMOD Test of Time Award in 2018 went to the paper "Serializable isolation for snapshot databases," which was implemented in PostgreSQL.

I know the answer to the question "why do they not talk?" Researchers do not want to talk at PostgreSQL conferences, because those are not scientific conferences, and participation in them will not result in a publication. Postgres developers do not present at CS conferences, because they do not want to write long papers. Even if they do submit something, their papers are often rejected as "not having any scientific value." I have experienced this on multiple occasions.

I came across another example of "why" when I attended the ACM SIGMOD conference in Amsterdam. I attended a compelling presentation on the problem of cardinality estimation over multi-join queries that introduced new optimization techniques. The presenter mentioned that he had used Postgres to build the prototype. I was too far back in the room to ask my question, so I reached out via the conference website.

I asked the presenter why he didn't submit a patch. He replied that their approach was hacky and needed more work before it could be added to Postgres. I asked whether he would be interested in working on it with some PostgreSQL community members. His reply? "Not in the next two years; I've just received a post-doc position at Microsoft, so I can't do it for the next two years."

So yes -- I know the answer as to why these two communities historically do not communicate. However, I do not like or accept it. Perhaps we can talk about and resolve this problem together?!

Contributor Bio:

Henrietta Dombrovskaya is a database researcher and developer with over 30 years of academic and industrial experience. She holds a Ph.D. in Computer Science from the University of Saint Petersburg, Russia. She taught database and transaction theory at the University of Saint Petersburg, as well as multiple database tuning classes for both beginners and advanced professionals.

Her professional experience includes consulting for a number of government projects in Chicago and New York, and providing data services in the financial sector, manufacturing, and distribution. She is a co-author, with B. Novikov, of the book “System Tuning” (BHV, St. Petersburg, Russia). Her research on overcoming object-relational impedance mismatch was published in the Proceedings of EDBT 2014 (Athens) and ICDE 2016 (Helsinki). At Braviant Holdings she is happy to have the opportunity to implement the results of her research in practice.

Henrietta Dombrovskaya is a co-organizer of the Chicago PostgreSQL User Group and a member of the Diversity, Equity, and Inclusion Work Group for the Postgres Conference Series. She was recently awarded the 2019 "Technologist of the Year" award by the Illinois Technology Association. This award is "presented to the individual whose talent has championed true innovation, either through new applications of existing technology or the development of technology to achieve a truly unique product or service."

Come as you are

The theme of Postgres Conference for 2020 is: Come as you are. We want everyone to feel welcome. You are welcome for your love of People, Postgres, Data and a desire to ascend beyond the box. It isn’t about your race, gender, sexuality, political affiliation, or love of blueberry flavored popcorn. It is 100% about you, the creativity and excellence that only you can bring to our community.

Postgres Conference 2020 CFP now open

We are actively seeking content for the largest Postgres Conference in the world, set to be held from March 23rd to 27th, 2020 at the Marriott Marquis, Times Square! Please submit your content here.

Postgres Conference Silicon Valley 2019

Postgres Conference Silicon Valley doubled its attendance, making it the largest Postgres Conference ever held on the West Coast! We are very pleased to be able to bring so many community members together for education, fun, and ecosystem support!



  • Postgres Conference is now offering live digital training throughout the year. Continuing our mission of creating exceptional Postgres people, you can now not only register for pre-defined training but also request specific training, and we will connect you with a class! The current training options are:
    • PostgreSQL Performance and Maintenance
    • Finding and Fixing slow queries
    • Postgres + Kubernetes: yes it really is a match made in heaven
    • PostgreSQL Replication deployment and best practices
    • PgPool-II Performance and best practices


Register here.


Digital Events

We have free digital events coming up, covering topics such as Kubernetes, Distributed SQL, backups, PostGIS, and many more.

Joshua D. Drake     October 02, 2019

As part of the countdown to PostgresConf Silicon Valley, learn more about featured Partner Sponsor Pivotal, including their commitment to partnering with and contributing to the Postgres community.

Tell us about your commitment to the PostgreSQL Community.

For those who haven’t heard of us, Greenplum is PostgreSQL massively scaled specifically for analytics. We’ve optimized PostgreSQL for complex analytic queries on text, graph, geospatial, and structured business data. Via the free, open-source Apache MADlib, Python, and R libraries, Greenplum is also capable of in-database machine learning and deep learning, and in our latest version we support distributed execution of Keras/TensorFlow along with GPU acceleration. Our blog highlights everything that’s new in our latest version (6.0).

Moreover, we now also offer a commercially-supported version of open-source PostgreSQL based on v11. 

At Pivotal, every product we build is based on open-source, and we’re committed to the viability of the PostgreSQL community.  There are over 100 engineers at Pivotal that contribute to the community, and we’ve made a number of contributions to v11.  We’re currently working on contributions for things like a column store, parallel grouping sets, and improvements to PL/R performance.

You can join our very active community online.

What is the best thing about working with the Postgres community?

We’re excited to be part of one of the most successful, longest-running open-source projects.   We enjoy the camaraderie and solidarity of the community, and the willingness of participants to engage, share and collaborate.

We have a large community of PostgreSQL experts at Pivotal – we’ve been working with it for over 15 years – and we’re always interested in adding to it.  Please visit our careers page for more information!

Why is Postgres an ideal foundation to build on?

It’s open source, is proven by a large number of commercial deployments, and has an active community. The ecosystem and the many tools and extensions were incredibly valuable to help us accelerate innovation faster than we could on our own.  We plan to be part of this community for the long haul.

Why should you attend PostgresConf Silicon Valley 2019 in San Jose?

The PostgresConf events in general are great for learning and collaboration.  You can learn from people using Postgres to solve meaningful problems in production, and you make relationships that will last throughout your entire career.    If you haven’t been before, we heartily recommend that you join us here!

 See you at PostgresConf Silicon Valley 2019!


Bob Glithero, Head of Product Marketing for Pivotal Greenplum

Bob Glithero     September 10, 2019

Summer is officially over (although the calendar says otherwise), the kids are back in school, the last three-day camping weekend of the season has passed, and we are staring right at PostgresConf Silicon Valley starting September 18th! Registrations for this fantastic event have already exceeded 2018 numbers and our training day is showing great success. 


Digital Events

  • YugabyteDB Distributed SQL Webinars
    • A series of free webinars discussing technical opportunities with Distributed SQL. YugabyteDB is an Open Source, Postgres compatible Distributed SQL database.




Partner Conferences

Register Today for API World 2019 and Save $200!

The API World team has offered us 25 free OPEN Passes and discounted PRO Passes to API World 2019 so our members can attend the event.


API World (October 8–10, San Jose Convention Center) is the world’s largest API & Microservices conference and expo, with 3,500+ attendees, 60+ exhibitors, and 10+ tracks covering API Lifecycle Management, API Innovations, Microservices, Containers, Kubernetes, and more. 140+ speakers include leaders from Intuit, US Bank, IBM, Okta, Capital One, Box, Kong, GitHub, Comcast, Microsoft, Postman, Twilio, SendGrid, Oracle, Ford, UPS, Uber, Google, eBay, and 100+ more. 

GitLab Commit, our premier community event, brings together the GitLab community to connect, learn, and inspire. We want to make sure the NY tech community is well-represented at Commit so we are offering a HUGE discount to members of local tech community groups. You can use code 'COMMITCOMMUNITY102' to save 50%. 


Joshua D. Drake     September 04, 2019

What is Distributed SQL?

A distributed SQL database is similar to a NoSQL database in that it can globally distribute data and elastically scale. At the same time, it can also deliver strong consistency, ACID transactions, and support for SQL syntax as you would expect from a monolithic SQL system.

Join us at Distributed SQL Summit at PostgresConf

YugaByte DB is excited to announce the speaker schedule for this year’s Distributed SQL Summit! Expert speakers will dive deep into the use cases, best practices and next steps for successfully implementing distributed SQL as a key part of an enterprise cloud and Oracle migration strategy.

Presented in partnership with PostgresConf, the inaugural Distributed SQL Summit is taking place on September 20, 2019 in San Jose, California. The Summit is the first ever event to focus exclusively on sharing best practices and technical knowledge on the cloud-native approach to deploying, operating and scaling distributed SQL databases. The Summit is co-located with Postgres Conference Silicon Valley, as a “conference within a conference.”

This one-day event features speakers and panelists from some of the biggest names in cloud and database infrastructure including Amazon Aurora, Google Spanner, Facebook,  Kroger and YugaByte DB. Here’s a small preview of some of the scheduled talks:



James Watters, SVP Products – Pivotal


Panel: Facebook’s Distributed Database Evolution

Jeff Rothschild, Dhruba Borthakur, Vishal Kathuria, Karthik Ranganathan


An Introduction to Amazon Aurora

Kamal Gupta, Head of Engineering, Aurora – Amazon


Google Spanner’s SQL Evolution

Campbell Fraser, Software Development Lead – Google Spanner


Transforming the Omni-Channel Experience at Kroger

Mahesh Tyagarajan, VP Engineering – Kroger


Panel: How Cloud-Native and Distributed SQL are Transforming ECommerce & Retail

Moderator: Ram Ravichandran, CTO – Narvar


Distributed MySQL Architectures, Past, Present and Future

Peter Zaitsev, Founder & CEO – Percona


Building Microservices in a Cloud-Native World with Distributed SQL

Ryan Scheidter, Lead Software Engineer – Cerner


Make sure to check out the complete speaker schedule and secure your tickets for PostgresConf and Distributed SQL Summit!

Learn about YugaByte DB

YugaByte DB is an open-source, cloud-native, high-performance distributed SQL database for global, internet-scale apps. And best of all, it is fully compatible with the PostgreSQL wire protocol and the SQL syntax. Built using a unique combination of high-performance document store, auto sharding, per-shard distributed consensus replication and multi-shard ACID transactions (inspired by Google Spanner), YugaByte DB serves both scale-out RDBMS and internet-scale OLTP workloads with low query latency, extreme resilience against failures and global data distribution. As a cloud native database, it can be deployed across public and private clouds as well as in Kubernetes environments with ease.

Attend Talk on Thursday

Karthik Ranganathan, YugaByte’s Co-Founder & CTO, will be highlighting the challenges faced in building a PostgreSQL-compatible distributed SQL database in a talk titled “6 Technical Challenges Developing a Distributed SQL Database” (Thursday, September 19, 12:30–12:50pm). This talk will serve as an excellent introduction to distributed SQL databases and prepare you to make the most of the Distributed SQL Summit the next day.

Visit Sponsor Table

Visit the YugaByte DB sponsor table at PostgresConf to learn how to build business-critical multi-cloud applications with maximum agility. You will see multiple demos with real-world use cases in action and have the opportunity to win some cool prizes.

See you at PostgresConf 2019 Silicon Valley

PostgresConf has always been an excellent resource for attendees to learn from their peers as well as Postgres experts. The 2019 Silicon Valley edition promises to be the best ever. We look forward to connecting with you at the conference!

Jimmy Guerrero     September 03, 2019

A database such as PostgreSQL is not just there to store data – it is also a tool to protect data. Your data must not be lost, and it must not be seen by people who are unauthorized or hostile. The main goal is therefore to protect data at any cost and to ensure that nothing is ever lost, leaked, or compromised. As we have seen in the past, more often than not a leak can easily ruin the reputation of a company or even lead to its destruction. This is true in all sectors, including but not limited to finance, medical services, and IT.


Protecting data at various levels

If you are using PostgreSQL you can protect data at various levels. The goal is to develop a comprehensive security concept which protects against all kinds of attacks. The following aspects have to be taken into account:

  • Network security

  • Transport encryption (SSL, etc.)

  • Database level permissions

  • Data masking and obfuscation

  • Data-At-Rest Encryption (PostgreSQL TDE)

The following overview shows how to implement a sound policy at every level.


Ensuring network security

The first line of defense is always the network. The golden rule is: Only listen on network connections you really need and which offer a small attack surface. Fortunately, PostgreSQL has all the means to ensure security at this level.

The first thing to do is to configure the “listen_addresses” parameter in postgresql.conf. It tells PostgreSQL which bind addresses you want to use. The rule is: if you don’t have to listen on certain IPs, don’t. To ensure security, only use bind addresses which are really in use.
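
For instance, a minimal sketch (the address below is a placeholder for whatever interface your clients actually use):

```
# postgresql.conf -- bind only to the interfaces that are really needed
listen_addresses = 'localhost, 10.0.0.5'
```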

The second line of defense is pg_hba.conf. This config file tells PostgreSQL which authentication method to use for which network segment. pg_hba.conf will be familiar to most readers, so I will skip the details in this post.
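
Still, a short sketch may help (database names, roles, and addresses are placeholders; entries are matched top to bottom, first match wins):

```
# pg_hba.conf -- example entries, adapt to your own network
# TYPE  DATABASE  USER  ADDRESS        METHOD
local   all       all                  peer
host    app       app   10.0.0.0/24    scram-sha-256
host    all       all   0.0.0.0/0      reject
```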

However, let us assume that an attacker somehow manages to reach your database and launches a brute force attack to figure out your passwords. One way to defend against such an attack is to use the “auth_delay” extension, which is part of the PostgreSQL contrib package. What is the general idea behind this extension? If an attacker launches a brute force attack, auth_delay will wait some time before returning an error. This simple method alone greatly reduces your risk. Here is how it works:

# postgresql.conf
shared_preload_libraries = 'auth_delay'
auth_delay.milliseconds = '500'

Just add those lines to postgresql.conf and the module will take care of the rest.


Implementing transport encryption (SSL, etc.)

Once we have secured bind addresses and reduced the risk of a brute force attack, it is important to protect your lines of communication. The way to do that is to use SSL. PostgreSQL supports various levels of SSL protection, controlled by the client's sslmode setting. The following levels are supported:

  • disable: I don't care about security, and I don't want to pay the overhead of encryption

  • allow: I don't care about security, but I will pay the overhead of encryption if the server insists on it.

  • prefer: I don't care about encryption, but I wish to pay the overhead of encryption if the server supports it.

  • require: I want my data to be encrypted, and I accept the overhead. I trust that the network will make sure I always connect to the server I want.

  • verify-ca: I want my data encrypted, and I accept the overhead. I want to be sure that I connect to a server that I trust.

  • verify-full: I want my data encrypted, and I accept the overhead. I want to be sure that I connect to a server I trust, and that it's the one I specify.

Depending on your performance and security requirements you can decide which level is best for you. Performance-wise SSL encryption does not come for free but if your security requirements are high it is worth paying the price.
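
To make this concrete, here is one way a client can request the strictest level (the host name and certificate file are placeholders):

```
# verify-full checks both the certificate chain (against sslrootcert)
# and that the certificate actually matches the host name
psql "host=db.example.com dbname=app user=app sslmode=verify-full sslrootcert=root.crt"
```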


Configuring database level permissions

Once you have taken care of network security, pg_hba.conf, authentication, as well as transport encryption it is time to take a look at what you can actually do inside the database.

The following layers of security are important:

  • schema: Make sure that only trusted people can access a schema

  • table: Ensure that only relevant people have access to a specific table

  • column: Restrict access to columns for specific users (to protect credit card data and the like)

  • row-level-security (RLS): Remove rows from the scope of a user

RLS (row-level security) is especially important and promising because it allows people to access only specific rows in a table, which greatly increases your ability to protect data in a very fine-grained way. The important thing to keep in mind is: RLS is powerful, but it requires proper testing. Some of it is quite tricky and requires a fair amount of expertise, as shown in my blog post about the topic. If you need assistance with RLS, feel free to get in touch with us. We are pleased to help.
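
A minimal RLS sketch (the table and policy names are invented for illustration):

```sql
-- Each user may only see their own orders:
CREATE TABLE orders (id bigint, username text, total numeric);

ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

-- With RLS enabled, non-owner roles see no rows until a policy allows it:
CREATE POLICY orders_per_user ON orders
    USING (username = current_user);
```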


Data masking and obfuscation

Studies have shown that many attacks come from within. Consider the following scenario: You are running a large online shop. Your production database is secure and nothing happens. However, your development team has to test new features and needs data to ensure high quality standards. What are you going to do? Do you really want to give all your developers a full copy of all your data? The trouble is: if there is no proper test data your applications will be buggy – but if you hand over all your data to developers you might face issues on the legal side, or you run the risk of giving data to people you cannot fully trust under all circumstances.

The solution to the problem is Cybertec Data Masking. Our product allows you to define an obfuscation model and give developers access to an obfuscated dump which can be used safely. The advantage is that the data given to developers has the same properties as the live data but does not contain personal or business-critical data which should not be seen by ordinary developers.

Cybertec Data Masking provides some addons and extensions to PostgreSQL and helps you to obfuscate data in the most simple and elegant way possible. Get in touch with our sales team to find out more.


Enabling Data-At-Rest Encryption (PostgreSQL TDE)

Once you have secured your database using the steps outlined above you might still be faced with additional risks. What if your disks are compromised? PostgreSQL TDE (“Transparent Data Encryption”) will be the solution for you.

PostgreSQL TDE is a PostgreSQL distribution by Cybertec which automatically encrypts data on disk. All data files are safely encrypted, and unless you know the key there is no way to launch your server. PostgreSQL TDE is especially useful if you are dealing with medical records or customer and financial data, which require even more protection and security. PostgreSQL TDE is the ultimate solution for most high-end security demands.

How does it work?

PostgreSQL will encrypt every block as it is written to disk and decrypt data as it is read from your storage devices. Using cutting-edge hardware acceleration, TDE ensures superior performance and total transparency. PostgreSQL TDE can integrate with professional key stores, and there is no need to store the key on the same server as the data.


If you are looking for PostgreSQL TDE for PostgreSQL 11, or maybe even PostgreSQL 12, get in touch with our team here at Cybertec to find out more and to learn about this wonderful product.

(Author: Hans-Juergen Schoenig)


Hans-Jürgen Schönig     August 28, 2019
