How-to: Real-time Change Data Capture for Neon PostgreSQL with Estuary Flow

Learn how to stream real-time changes from Neon PostgreSQL with Estuary Flow and Change Data Capture (CDC)


This guide will walk you through setting up Change Data Capture (CDC) for your Neon PostgreSQL database using Estuary Flow. This setup allows you to replicate data from your Neon database in real time to external destinations such as Snowflake, Databricks, Apache Iceberg, or even other PostgreSQL databases.

Why CDC is Great for Data Replication

Here are a few points that illustrate why log-based CDC is the best way to replicate data in real time from PostgreSQL databases such as Neon.

  1. Real-time Data Synchronization: CDC enables you to keep multiple systems in sync with minimal delay, ensuring data consistency across your entire data ecosystem.
  2. Reduced Load on Source Systems: CDC minimizes the impact on your source database's performance by only capturing and transferring changed data.
  3. Efficient Data Integration: CDC facilitates efficient ETL (Extract, Transform, Load) processes by focusing only on changed data, reducing processing time and resource usage.
  4. Historical Change Tracking: CDC can provide a detailed audit trail of data changes, which is crucial for compliance and data governance.

Why Estuary Flow is The Best Option for Neon CDC

  1. Native Integration with Neon: Estuary Flow offers native support for Neon PostgreSQL, making it easy to set up and manage CDC pipelines from your Neon database to various destinations.
  2. Real-time Data Pipelines: With Estuary Flow, you can create real-time data pipelines that capture and process data changes as they occur, enabling up-to-the-minute analytics and decision-making.
  3. Automated Schema Management: Estuary Flow automatically detects schema changes in your source database and propagates these changes to your target systems, reducing manual intervention and potential errors.
  4. Wide Range of No-Code Connectors: Estuary Flow supports a variety of source and destination connectors, allowing you to easily integrate your Neon PostgreSQL data with popular data warehouses, analytics platforms, and other databases.

Prerequisites

Before you begin, make sure you have:

  1. An Estuary Flow account (you can start for free; no credit card required!)
  2. A Neon account

Step 1: Enable Logical Replication in Neon

  1. Log in to the Neon Console and select your project.
  2. Go to the Neon Dashboard and select "Project settings".
  3. Click on the "Beta" tab.
  4. Click "Enable" to turn on logical replication.
[Image: Configure logical replication in Neon]

Note: This action will modify the wal_level configuration parameter to logical for all databases in your Neon project. This change cannot be reversed and will restart all computes in your project. To verify that logical replication is enabled, run this query in the Neon SQL Editor:

sql
SHOW wal_level;

The result should be logical.

Step 2: Create a Postgres Role for Replication

[Image: Create Role in Neon]
  1. Navigate to the Neon Console.
  2. Select a project.
  3. Select Branches.
  4. Select the branch where you want to create the role.
  5. On the Roles & Databases tab, click Add role.
  6. Specify a role name (e.g., cdc_role) and click "Create".
  7. The role is created and you are provided with the password for the role.

Alternatively, you can use the Neon CLI:

bash
neon roles create --name cdc_role
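
To sanity-check the new role, you can query the system catalogs from the Neon SQL Editor. This optional check assumes you named the role cdc_role, as in the example above:

sql
-- Confirm the role exists
SELECT rolname FROM pg_roles WHERE rolname = 'cdc_role';

-- List its role memberships (roles created via the Neon Console or CLI
-- are typically members of neon_superuser)
SELECT r.rolname AS member, g.rolname AS member_of
FROM pg_auth_members m
JOIN pg_roles r ON r.oid = m.member
JOIN pg_roles g ON g.oid = m.roleid
WHERE r.rolname = 'cdc_role';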

Step 3: Grant Schema Access to Your Postgres Role

In the SQL Editor of your project, run these SQL commands for each schema you want to replicate:

sql
GRANT USAGE ON SCHEMA public TO cdc_role;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO cdc_role;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO cdc_role;
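
If you want to double-check the grants before moving on, Postgres's built-in privilege functions make this easy. The table public.orders below is a hypothetical placeholder; substitute one of your own tables:

sql
-- Both queries should return true for cdc_role
SELECT has_schema_privilege('cdc_role', 'public', 'USAGE');
SELECT has_table_privilege('cdc_role', 'public.orders', 'SELECT');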

Step 4: Create a Publication

Create a publication named flow_publication that includes all the tables you want to ingest into Estuary Flow:

sql
CREATE PUBLICATION flow_publication FOR TABLE <tbl1, tbl2, tbl3>;
ALTER PUBLICATION flow_publication SET (publish_via_partition_root = true);
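
For a concrete sense of what this looks like, here is a hypothetical example with two made-up tables, public.orders and public.customers. Setting publish_via_partition_root = true publishes changes to partitions under the root partitioned table's identity, which is what the connector expects for partitioned tables. You can then verify the publication's contents with pg_publication_tables:

sql
-- Hypothetical example: replace with your own tables
CREATE PUBLICATION flow_publication FOR TABLE public.orders, public.customers;
ALTER PUBLICATION flow_publication SET (publish_via_partition_root = true);

-- Verify which tables the publication covers
SELECT * FROM pg_publication_tables WHERE pubname = 'flow_publication';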

Step 5: (Optional) Allow Inbound Traffic

[Image: Allow traffic in Neon based on IP addresses]

If you're using Neon's IP Allow feature, you'll need to allow inbound traffic from Estuary Flow's IP addresses. Consult the Estuary Flow documentation for the list of IPs to allowlist.

Step 6: Create a Neon Source Connector in Estuary Flow

  1. In the Estuary Flow web UI, go to "Sources" and click "New Capture".
  2. Search for "Neon PostgreSQL" in the connector catalog and click "Capture".
[Image: Create Neon Capture in Estuary Flow]
  3. Enter your Neon database connection details (see the connection string example after these steps):
    • Server Address: (from your Neon connection string)
    • User: cdc_role
    • Password: (the password for cdc_role)
    • Database: (your database name)
[Image: Configure Neon Capture in Estuary Flow]
  1. Click "Next" to scan your database for available tables for replication.
  2. Select the tables you want to replicate by enabling bindings for them.
[Image: Configure Collections in Estuary Flow]
  1. Click "Save and Publish" to start the connector and begin the backfill process.

And that’s it! Estuary Flow will start backfilling all existing data within a few seconds, then shift into continuous replication mode to capture all future changes. From this point, you can either set up transformations on your data in true ETL fashion, or materialize the incoming records straight into a data warehouse such as Snowflake, Databricks, or BigQuery.
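
Once the capture is running, you can also watch it from the Neon side. As a rough sketch, assuming the connector's default replication slot name of flow_slot (this is configurable in the capture settings, so adjust if you changed it), the following queries show the slot's status and the active replication connection:

sql
-- Check the connector's replication slot and its confirmed position
SELECT slot_name, active, confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_name = 'flow_slot';

-- See active replication connections and their progress
SELECT application_name, state, sent_lsn, replay_lsn
FROM pg_stat_replication;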

Conclusion

In this quick guide, you’ve learned how to set up a change data capture pipeline from Neon PostgreSQL with Estuary Flow in just a few minutes. This integration enables you to activate your data for analytics, machine learning, AI, and a heap of other use cases.

Make sure to check out some of our other articles on our blog.

If you have any questions, join us in our community Slack channel!

About the author

Dani Pálma

Dani is a data professional with a rich background in data engineering and real-time data platforms. At Estuary, Dani focuses on promoting cutting-edge streaming solutions, helping to bridge the gap between technical innovation and developer adoption. With deep expertise in cloud-native and streaming technologies, Dani has successfully supported startups and enterprises in building robust data solutions.
