PgEdge and CloudNativePG Partnership: Simplifying Distributed Postgres on K8s


With pgEdge now fully open source, we’re continuing our mission to make distributed Postgres accessible to developers, operators, and the broader open-source community. A key part of that story is making it easier to run pgEdge with tools that already have broad adoption in the community.

Today, we’re excited to introduce two key releases that make it even easier to deploy and operate pgEdge Distributed Postgres on Kubernetes:

  • New pgEdge Postgres Container images built for compatibility with CloudNativePG

  • An updated pgEdge Helm chart that simplifies deploying pgEdge on Kubernetes by leveraging CloudNativePG

CloudNativePG is an open-source Kubernetes operator that automates the lifecycle of PostgreSQL clusters using native Kubernetes resources. Its adoption has skyrocketed in recent years, and its recent acceptance as a CNCF Sandbox project has cemented it as the community standard for running Postgres natively on Kubernetes.

New pgEdge Postgres Container Images

To make it easier to operate pgEdge with CloudNativePG, and to support other integrations, we’re releasing a new container image built from our pgEdge Enterprise Postgres packages, with support for Postgres 16 through 18.

These images are published on the GitHub Container Registry as pgedge/pgedge-postgres:

https://github.com/pgEdge/postgres-images/pkgs/container/pgedge-postgres

Initially, we’re releasing two image flavors:

  • The minimal image bundles the pgEdge extensions required for distributed deployments: spock, snowflake, and lolor.

  • The standard image additionally includes the popular extensions pgvector, PostGIS, and pgAudit.

This approach means you can use a single set of images across your Postgres deployments, whether they run in a single region or are distributed with spock’s multi-master replication.

Over time, we’ll add image flavors to support additional extensions and improvements. You can also extend these images to add other extensions to your deployment, as sketched below.
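For example, a minimal Dockerfile along these lines layers an extra extension on top of a published image. This is a sketch only: the 17-standard tag, the Debian-style package manager, and the postgresql-17-pg-cron package name are illustrative assumptions, not something the official images document.

# Sketch: extend the pgEdge image with an additional extension.
# The tag and the package name below are illustrative assumptions,
# as is the availability of apt in the base image.
FROM ghcr.io/pgedge/pgedge-postgres:17-standard

USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends postgresql-17-pg-cron && \
    rm -rf /var/lib/apt/lists/*
USER postgres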

This image is designed to be compatible with CloudNativePG, but it also supports the official Postgres entrypoint as well as a Patroni entrypoint, which opens up more integration opportunities with popular open-source tools.
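Because the official entrypoint is supported, you should be able to run the image standalone with Docker. A quick sketch, assuming the official entrypoint is the image’s default and using an illustrative tag:

# Run the image standalone; tag and default-entrypoint behavior are assumptions
docker run --rm -e POSTGRES_PASSWORD=secret -p 5432:5432 \
  ghcr.io/pgedge/pgedge-postgres:17-standard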

You can learn more about the new images here: https://github.com/pgEdge/postgres-images
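If you plan to use CloudNativePG directly rather than through the Helm chart described below, a Cluster manifest can reference these images via imageName. A minimal sketch (the image tag is an illustrative assumption):

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pgedge-demo
spec:
  instances: 3
  # Illustrative tag; see the postgres-images repository for published tags
  imageName: ghcr.io/pgedge/pgedge-postgres:17-standard
  storage:
    size: 1Gi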

pgEdge Distributed Postgres in CloudNativePG with pgedge-helm

We also want to make it easier to operate distributed architectures in Kubernetes so that more users can leverage spock’s powerful multi-master capabilities.

To do this, we’ve released an updated version of our pgEdge Helm chart, which supports deploying both pgEdge Enterprise Postgres and pgEdge Distributed Postgres in Kubernetes.

This new version leverages CloudNativePG to manage Postgres, providing flexible options for single-region and multi-region deployments.

The new chart supports the following features:

  • Postgres 16, 17, and 18 via pgEdge Enterprise Postgres Images

  • Flexible options for both single-region and multi-region deployments

    • Deploy pgEdge Enterprise Postgres in a single region with optional standby replicas.

    • Deploy pgEdge Distributed Postgres across multiple regions with Spock active-active replication.

  • Configuring Spock replication across all nodes during helm install and upgrade processes.

  • Best practice configuration defaults for deploying pgEdge Distributed Postgres in Kubernetes.

  • Extending / overriding configuration for CloudNativePG across all nodes, or on specific nodes.

  • Configuring standby instances with automatic failover, leveraging Spock's delayed feedback and failover slots worker to maintain active-active replication across failovers and promotions.

  • Adding pgEdge nodes using Spock or CloudNativePG's bootstrap capabilities to synchronize data from existing nodes or backups.

  • Performing Postgres major and minor version upgrades.

  • Client certificate authentication for managed users, including the pgedge replication user.

  • Configuration options to support deployments across multiple Kubernetes clusters.

The chart includes a simple example that demonstrates a three-node pgEdge Distributed Postgres deployment:

pgEdge:
  appName: pgedge
  nodes:
    - name: n1
      hostname: pgedge-n1-rw
      clusterSpec:
        instances: 3
        postgresql:
          synchronous:
            method: any
            number: 1
            dataDurability: required
    - name: n2
      hostname: pgedge-n2-rw
    - name: n3
      hostname: pgedge-n3-rw
  clusterSpec:
    storage:
      size: 1Gi

You can install this example by first downloading the latest release package and setting up the required dependencies:

1. Download the latest pgedge-helm release package from pgEdge Helm Releases.

After downloading and extracting the package on your machine, navigate into the pgedge-helm directory.

2. Install prerequisites (CloudNativePG and cert-manager)

# Install CloudNativePG
kubectl apply --server-side -f \
  https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.27/releases/cnpg-1.27.1.yaml

# Install cert-manager
kubectl apply -f \
  https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml

# Wait for cert-manager deployment to finish
kubectl wait --for=condition=Available deployment \
  -n cert-manager cert-manager cert-manager-cainjector cert-manager-webhook \
  --timeout=120s

3. Install the chart

helm install \
  --values examples/configs/single/values.yaml \
  --wait \
  pgedge ./

The chart includes a Kubernetes job which ensures spock’s configuration is kept up to date across chart upgrades.
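After the install completes, a quick way to confirm that each node’s Postgres cluster is healthy is the CloudNativePG kubectl plugin’s status command, for example:

kubectl cnpg status pgedge-n1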

Once the chart is deployed, you can use the CloudNativePG kubectl plugin to connect to the app database on the primary for each pgEdge node:

kubectl cnpg psql pgedge-n1 -- -U app app

Automatic DDL replication is enabled by default, so creating a new table and inserting data on one node is replicated to all other nodes:

4. Create a table and insert data on n1

kubectl cnpg psql pgedge-n1 -- -U app app -c \
  "CREATE TABLE example (id int primary key, data text); INSERT INTO example VALUES (1, 'foo');"

INFO:  DDL statement replicated.
CREATE TABLE
INSERT 0 1

5. Query the data on n2

kubectl cnpg psql pgedge-n2 -- -U app app -c "SELECT * FROM example;"

 id | data
----+------
  1 | foo
(1 row)
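You can also inspect the replication configuration itself: Spock keeps its node and subscription state in catalog tables in the spock schema. A sketch, assuming the app user has sufficient privileges to read them:

kubectl cnpg psql pgedge-n1 -- -U app app -c "SELECT * FROM spock.node;"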

For more details on using chart features, see the pgEdge documentation.

Deploying across multiple Kubernetes clusters

A single Kubernetes cluster is most commonly deployed in one region, with support for running workloads across multiple availability zones. Most customers taking advantage of pgEdge Distributed Postgres operate nodes in different regions for performance or availability reasons, sometimes across multiple cloud providers.

Deploying across multiple Kubernetes clusters with pgEdge Distributed requires addressing two aspects:

  • Network Connectivity

    • We must ensure that pgEdge nodes can connect across Kubernetes clusters, with cross-cluster DNS provided by tools like Cilium or Submariner.

  • Certificate Management

    • We must ensure that managed users have consistent client certificates across all pgEdge nodes by copying certificate secrets across clusters with a tool of your choice.

These are well-known concerns in the Kubernetes community when operating multi-cluster workloads, and customers often already have solutions in place to manage them, so building a single approach into pgedge-helm doesn’t make sense.

Instead, the new chart includes a few configuration mechanisms to support multi-cluster deployments:

  • pgEdge.initSpock - controls whether spock configuration should be created and updated when deploying the chart. Defaults to true.

  • pgEdge.provisionCerts - controls whether cert-manager certificates should be deployed when deploying the chart. Defaults to true.

  • pgEdge.externalNodes - allows configuring nodes that are part of the pgEdge Distributed Postgres deployment, but managed externally to this Helm chart. These nodes will be configured in the spock-init job when it runs.

To apply these in a multi-cluster scenario, you combine these configuration options across the deployments in each cluster.

For example, let’s assume you want to deploy 2 pgEdge nodes across 2 Kubernetes clusters, with a single helm install run against each cluster. These values files highlight how to leverage these options, ensuring that:

  • Certificates are only issued during deployment to the first Kubernetes cluster

    • The client-ca-key-pair and streaming-replica-client-cert secrets must be copied to the second Kubernetes cluster by an external process (see the sketch after this example)

  • Spock configuration is applied across nodes in both clusters by the initialization job run in the second Kubernetes cluster

Cluster A: cluster-a.yaml

pgEdge:
  appName: pgedge
  initSpock: false
  provisionCerts: true
  nodes:
    - name: n1
      hostname: pgedge-n1-rw
      clusterSpec:
        instances: 3
        postgresql:
          synchronous:
            method: any
            number: 1
            dataDurability: required
  externalNodes:
    - name: n2
      hostname: pgedge-n2-rw
  clusterSpec:
    storage:
      size: 1Gi

Cluster B: cluster-b.yaml

pgEdge:
  appName: pgedge
  initSpock: true
  provisionCerts: false
  nodes:
    - name: n2
      hostname: pgedge-n2-rw
      clusterSpec:
        instances: 3
        postgresql:
          synchronous:
            method: any
            number: 1
            dataDurability: required
  externalNodes:
    - name: n1
      hostname: pgedge-n1-rw
  clusterSpec:
    storage:
      size: 1Gi

This example assumes you have a cross-cluster DNS solution in place. If you want to simulate this type of deployment in a single Kubernetes cluster, deploying into two separate namespaces should provide a similar experience without needing to handle this aspect.
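As noted above, the certificate secrets must be copied to the second cluster by an external process. One minimal approach is to pipe them between clusters with kubectl and jq; the context names (cluster-a, cluster-b) and the use of the default namespace here are assumptions for illustration:

# Copy the managed-user certificate secrets from cluster A to cluster B.
# Strips server-side metadata so the objects can be re-created cleanly.
# Context names and namespace are assumptions; adjust for your environment.
for secret in client-ca-key-pair streaming-replica-client-cert; do
  kubectl --context cluster-a get secret "$secret" -o json | \
    jq 'del(.metadata.uid, .metadata.resourceVersion,
            .metadata.creationTimestamp, .metadata.managedFields,
            .metadata.ownerReferences)' | \
    kubectl --context cluster-b apply -f -
done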

We’ll be publishing more blog content covering multi-cluster approaches with different Kubernetes networking and certificate management solutions as we move ahead.

Conclusion

These updates mark an important step toward making pgEdge simpler, more flexible, and easier to integrate into Kubernetes environments.

You can explore the new images and Helm chart today on GitHub.

Whether you’re running in a single region or operating a multi-cluster deployment across clouds, pgEdge now provides the open-source foundation and tools to achieve your requirements in Kubernetes.

Our team is here to help with your journey, including 24×7×365 global support from seasoned Postgres experts with decades of experience and direct contributions to the PostgreSQL community. Optional Forward Deployed Engineer services are also available for dedicated assistance.

Learn more and try pgEdge Enterprise Postgres for free - www.pgedge.com/get-started
