How to Set Up a Highly Available PostgreSQL Cluster Using Patroni on Ubuntu 16.04

Prerequisites

This tutorial uses three Ubuntu 16.04 servers: two PostgreSQL nodes (192.168.0.105 and 192.168.0.104) and one server running etcd and HAProxy (192.168.0.103). Adjust the IP addresses in the commands and configuration files below to match your own environment.

Launch Alibaba Cloud ECS Instance

Once your instances are running, update the package index on each of them:

apt-get update -y

Setup Environment

Install PostgreSQL

apt-get install postgresql -y
systemctl status postgresql
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
Active: active (exited) since Fri 2018-09-21 20:03:04 IST; 1min 7s ago
Main PID: 3994 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/postgresql.service
Sep 21 20:03:04 Node1 systemd[1]: Starting PostgreSQL RDBMS...
Sep 21 20:03:04 Node1 systemd[1]: Started PostgreSQL RDBMS.
Sep 21 20:03:24 Node1 systemd[1]: Started PostgreSQL RDBMS.
Stop the PostgreSQL service, since Patroni will manage it from now on, and link the PostgreSQL 9.5 binaries into the path so that Patroni can find utilities such as initdb and pg_ctl:

systemctl stop postgresql
ln -s /usr/lib/postgresql/9.5/bin/* /usr/sbin/

Install Patroni

apt-get install python3-pip python3-dev libpq-dev -y
sudo -H pip3 install --upgrade pip
pip install patroni
pip install python-etcd
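Before moving on, it is worth confirming that pip placed the Patroni entry points on the PATH (the exact paths may differ depending on your pip configuration):

```shell
# Verify that the patroni and patronictl commands are available
which patroni patronictl
patroni --version
```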

Install Etcd and HAProxy

apt-get install etcd -y
apt-get install haproxy -y

Configure Etcd

nano /etc/default/etcd
ETCD_LISTEN_PEER_URLS="http://192.168.0.103:2380,http://127.0.0.1:7001"
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,http://192.168.0.103:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.0.103:2380"
ETCD_INITIAL_CLUSTER="etcd0=http://192.168.0.103:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.103:2379"
ETCD_INITIAL_CLUSTER_TOKEN="cluster1"
ETCD_INITIAL_CLUSTER_STATE="new"
systemctl restart etcd
systemctl status etcd
● etcd.service - etcd - highly-available key value store
Loaded: loaded (/lib/systemd/system/etcd.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2018-09-21 22:27:47 IST; 5s ago
Docs: https://github.com/coreos/etcd
man:etcd
Main PID: 4504 (etcd)
CGroup: /system.slice/etcd.service
└─4504 /usr/bin/etcd
Sep 21 22:27:47 Node2 etcd[4504]: starting server... [version: 2.2.5, cluster version: to_be_decided]
Sep 21 22:27:47 Node2 systemd[1]: Started etcd - highly-available key value store.
Sep 21 22:27:47 Node2 etcd[4504]: added local member ce2a822cea30bfca [http://localhost:2380 http://localhost:7001] to cluster 7e27652122e8b2ae
Sep 21 22:27:47 Node2 etcd[4504]: set the initial cluster version to 2.2
Sep 21 22:27:48 Node2 etcd[4504]: ce2a822cea30bfca is starting a new election at term 5
Sep 21 22:27:48 Node2 etcd[4504]: ce2a822cea30bfca became candidate at term 6
Sep 21 22:27:48 Node2 etcd[4504]: ce2a822cea30bfca received vote from ce2a822cea30bfca at term 6
Sep 21 22:27:48 Node2 etcd[4504]: ce2a822cea30bfca became leader at term 6
Sep 21 22:27:48 Node2 etcd[4504]: raft.node: ce2a822cea30bfca elected leader ce2a822cea30bfca at term 6
Sep 21 22:27:48 Node2 etcd[4504]: published {Name:hostname ClientURLs:[http://192.168.0.103:2379]} to cluster 7e27652122e8b2ae
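Before pointing Patroni at etcd, you can confirm that the store is answering on its client URL. A quick check, assuming the etcd v2 tooling that ships with Ubuntu 16.04:

```shell
# Check cluster health and list members via the etcd v2 API
etcdctl --endpoints http://192.168.0.103:2379 cluster-health
curl http://192.168.0.103:2379/v2/members
```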

Configure Patroni

nano /etc/patroni.yml
scope: postgres
namespace: /db/
name: postgresql0

restapi:
    listen: 192.168.0.105:8008
    connect_address: 192.168.0.105:8008

etcd:
    host: 192.168.0.103:2379

bootstrap:
    dcs:
        ttl: 30
        loop_wait: 10
        retry_timeout: 10
        maximum_lag_on_failover: 1048576
        postgresql:
            use_pg_rewind: true

    initdb:
    - encoding: UTF8
    - data-checksums

    pg_hba:
    - host replication replicator 127.0.0.1/32 md5
    - host replication replicator 192.168.0.105/32 md5
    - host replication replicator 192.168.0.104/32 md5
    - host all all 0.0.0.0/0 md5

    users:
        admin:
            password: admin
            options:
                - createrole
                - createdb

postgresql:
    listen: 192.168.0.105:5432
    connect_address: 192.168.0.105:5432
    data_dir: /data/patroni
    pgpass: /tmp/pgpass
    authentication:
        replication:
            username: replicator
            password: password
        superuser:
            username: postgres
            password: password
    parameters:
        unix_socket_directories: '.'

tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false
mkdir -p /data/patroni
chown postgres:postgres /data/patroni
chmod 700 /data/patroni
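The same steps are repeated on the second PostgreSQL node (192.168.0.104). Its /etc/patroni.yml is identical except for the member name and the addresses; a sketch of the lines that change:

```
name: postgresql1

restapi:
    listen: 192.168.0.104:8008
    connect_address: 192.168.0.104:8008

postgresql:
    listen: 192.168.0.104:5432
    connect_address: 192.168.0.104:5432
```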
nano /etc/systemd/system/patroni.service
[Unit]
Description=Runners to orchestrate a high-availability PostgreSQL
After=syslog.target network.target
[Service]
Type=simple
User=postgres
Group=postgres
ExecStart=/usr/local/bin/patroni /etc/patroni.yml
KillMode=process
TimeoutSec=30
Restart=no

[Install]
WantedBy=multi-user.target
Patroni bootstraps and manages the PostgreSQL instance itself, so leave the stock postgresql service stopped and start only Patroni:

systemctl start patroni
systemctl status patroni
● patroni.service - Runners to orchestrate a high-availability PostgreSQL
Loaded: loaded (/etc/systemd/system/patroni.service; disabled; vendor preset: enabled)
Active: active (running) since Fri 2018-09-21 22:22:22 IST; 3min 17s ago
Main PID: 3286 (patroni)
CGroup: /system.slice/patroni.service
├─3286 /usr/bin/python3 /usr/local/bin/patroni /etc/patroni.yml
├─3305 postgres -D /data/patroni --config-file=/data/patroni/postgresql.conf --max_worker_processes=8 --max_locks_per_transaction=64
├─3308 postgres: postgres: checkpointer process
├─3309 postgres: postgres: writer process
├─3310 postgres: postgres: stats collector process
├─3315 postgres: postgres: postgres postgres 192.168.0.105(54472) idle
├─3320 postgres: postgres: wal writer process
└─3321 postgres: postgres: autovacuum launcher process
Sep 21 22:24:52 Node1 patroni[3286]: 2018-09-21 22:24:52,329 INFO: Lock owner: postgresql0; I am postgresql0
Sep 21 22:24:52 Node1 patroni[3286]: 2018-09-21 22:24:52,391 INFO: no action. i am the leader with the lock
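Once Patroni is running on the nodes, you can ask Patroni itself which member holds the leader lock; patronictl reads the cluster state from etcd:

```shell
# Show cluster members, their roles, and replication lag
patronictl -c /etc/patroni.yml list
```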

Configure HAProxy

nano /etc/haproxy/haproxy.cfg
global
    maxconn 100

defaults
    log global
    mode tcp
    retries 2
    timeout client 30m
    timeout connect 4s
    timeout server 30m
    timeout check 5s

listen stats
    mode http
    bind *:7000
    stats enable
    stats uri /

listen postgres
    bind *:5000
    option httpchk
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server postgresql_192.168.0.105_5432 192.168.0.105:5432 maxconn 100 check port 8008
    server postgresql_192.168.0.104_5432 192.168.0.104:5432 maxconn 100 check port 8008
systemctl restart haproxy

Test PostgreSQL Cluster
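The cluster can now be exercised through HAProxy, which routes port 5000 to whichever node currently passes the Patroni health check on port 8008. One way to test, assuming the postgres superuser password set in patroni.yml:

```shell
# Connect through HAProxy; the connection is routed to the current leader
psql -h 192.168.0.103 -p 5000 -U postgres

# To test failover, stop Patroni on the current leader...
systemctl stop patroni
# ...then reconnect through HAProxy: the standby should have been promoted
psql -h 192.168.0.103 -p 5000 -U postgres
```

You can also watch the backends change state on the HAProxy stats page at http://192.168.0.103:7000.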

