An In-Depth Look at Patroni in Docker Containers

The previous article walked through setting up Repmgr with automatic failover. This article shows how to build a Patroni cluster in containers. As an out-of-the-box PostgreSQL high-availability tool, Patroni is increasingly used by vendors in cloud environments.

The basic Patroni architecture is shown below:

[Figure: Patroni architecture diagram]

etcd serves as the distributed registry and handles leader election; vip-manager binds a floating IP to the primary node; Patroni bootstraps, runs, and manages the cluster, and can be operated from the terminal with patronictl.

The overall flow:
1. Start the etcd cluster; in this example there are three etcd members.
2. Once the etcd cluster reports healthy, start Patroni; the nodes compete for leadership, and the follower nodes begin syncing data from the leader.
3. Start vip-manager. It reads the value of the /${SERVICE_NAME}/${CLUSTER_NAME}/leader key in etcd to decide whether the current node is the primary; if so, it binds the VIP to that node, which then serves reads and writes.
Note: in a production environment, etcd should be deployed in separate containers and exposed as an independent service.
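Step 3 can be illustrated with a small shell sketch. The key path and the comparison below mirror the decision vip-manager makes; the SERVICE_NAME/CLUSTER_NAME values and the simulated leader value are hypothetical stand-ins, since in the real setup they come from the container environment and a live `etcdctl get`:

```shell
#!/bin/bash
# Hypothetical values; the real ones are passed via --env to the container.
SERVICE_NAME=service
CLUSTER_NAME=patronicluster
leader_key="/${SERVICE_NAME}/${CLUSTER_NAME}/leader"

leader_value="patroni1"   # simulated content of the leader key in etcd
node_name="patroni1"      # this container's hostname

# vip-manager binds the VIP only when the leader key matches the local node.
if [ "${leader_value}" = "${node_name}" ]; then
    decision="bind VIP"
else
    decision="release VIP"
fi
echo "${leader_key}: ${decision}"
```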

Creating the Image

File Structure

Dockerfile is the main image definition; the Docker daemon uses it to build the image into the local repository. entrypoint.sh is the container entry point and handles the business logic. function is the entry file for the working routines: it starts etcd, monitors the etcd cluster state, and launches Patroni and vip-manager. generatefile produces the configuration files for the container: etcd, Patroni, and vip-manager.

The directory layout looks roughly like this:

[Figure: directory layout]

Note: build the database and Patroni installation packages yourself.

Dockerfile

FROM centos:7

LABEL maintainer="wangzhibin"

ENV USER="postgresql" \
    PASSWORD=123456 \
    GROUP=postgresql 
	
RUN useradd ${USER} \
       && chown -R ${USER}:${GROUP} /home/${USER} \
       && yum -y update && yum install -y iptables sudo net-tools iproute openssh-server openssh-clients which vim crontabs
# Install etcd
COPY etcd/etcd /usr/sbin
COPY etcd/etcdctl /usr/sbin

# Install the database
COPY lib/ /home/${USER}/lib
COPY include/ /home/${USER}/include
COPY share/ /home/${USER}/share
COPY bin/ /home/${USER}/bin/
COPY patroni/ /home/${USER}/patroni

# Install vip-manager
COPY vip-manager/vip-manager /usr/sbin
# Install the entry scripts
COPY runtime/ /home/${USER}/runtime
COPY entrypoint.sh /sbin/entrypoint.sh

# Set environment variables
ENV LD_LIBRARY_PATH /home/${USER}/lib
ENV PATH /home/${USER}/bin:$PATH
ENV ETCDCTL_API=3

# Install Patroni
RUN yum -y install epel-release python-devel  && yum -y install python-pip \
    && pip install /home/${USER}/patroni/1/pip-20.3.3.tar.gz \
    && pip install /home/${USER}/patroni/1/psycopg2-2.8.6-cp27-cp27mu-linux_x86_64.whl \
    && pip install --no-index --find-links=/home/${USER}/patroni/2/ -r /home/${USER}/patroni/2/requirements.txt \
    && pip install /home/${USER}/patroni/3/patroni-2.0.1-py2-none-any.whl

# Fix permissions
RUN chmod 755 /sbin/entrypoint.sh \ 
&& mkdir /home/${USER}/etcddata \
&& chown -R ${USER}:${GROUP} /home/${USER} \
&& echo 'root:root123456' | chpasswd \
&& chmod 755 /sbin/etcd \
&& chmod 755 /sbin/etcdctl \
&& chmod 755 /sbin/vip-manager

# Configure sudo
RUN chmod 777 /etc/sudoers \
       && sed -i '/## Allow root to run any commands anywhere/a '${USER}' ALL=(ALL) NOPASSWD:ALL' /etc/sudoers \
       && chmod 440 /etc/sudoers

# Switch user
USER ${USER}

# Set the working directory
WORKDIR /home/${USER}

# Start the entry point
CMD ["/bin/bash", "/sbin/entrypoint.sh"]

entrypoint.sh

#!/bin/bash
set -e

# shellcheck source=runtime/function
source "/home/${USER}/runtime/function"

configure_patroni

function

#!/bin/bash

set -e
source /home/${USER}/runtime/env-defaults
source /home/${USER}/runtime/generatefile

PG_DATADIR=/home/${USER}/pgdata
PG_BINDIR=/home/${USER}/bin

configure_patroni()
{
    # Generate the configuration files
    generate_etcd_conf
    generate_patroni_conf
    generate_vip_conf
    # Start etcd
    etcdcount=${ETCD_COUNT}
    count=0
    ip_temp=""
    array=(${HOSTLIST//,/ })
    for host in ${array[@]}
    do
        ip_temp+="http://${host}:2380,"
    done
    etcd --config-file=/home/${USER}/etcd.yml >/home/${USER}/etcddata/etcd.log 2>&1 &
    while [ "$count" -lt "$etcdcount" ]
    do
      # `endpoint health -w json` prints one object per member; count how many
      # report "health":true. `|| true` keeps `set -e` from aborting the script
      # while the cluster is still coming up.
      line=$(etcdctl --endpoints=${ip_temp%?} endpoint health -w json 2>/dev/null || true)
      count=$(echo "$line" | awk -F'"health":true' '{print NF-1}')
      echo "waiting for etcd cluster"
      sleep 5
    done
    # Start patroni
    patroni /home/${USER}/postgresql.yml > /home/${USER}/patroni/patroni.log 2>&1 &
    # Start vip-manager
    sudo vip-manager --config /home/${USER}/vip.yml
}
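The waiting loop relies on counting `"health":true` occurrences in etcdctl's JSON output. A minimal reproduction with a canned response (the real loop feeds live `etcdctl endpoint health -w json` output through the same awk; the endpoints here are hypothetical):

```shell
#!/bin/bash
# Canned etcdctl-style JSON standing in for live `endpoint health -w json` output.
line='[{"endpoint":"http://patroni1:2380","health":true},{"endpoint":"http://patroni2:2380","health":true},{"endpoint":"http://patroni3:2380","health":true}]'
# Split on the literal string "health":true; the number of fields minus one
# equals the number of healthy members.
count=$(echo "$line" | awk -F'"health":true' '{print NF-1}')
echo "$count"   # 3
```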

generatefile

#!/bin/bash
set -e

HOSTNAME="`hostname`"
# Resolve the container's own IP from its hostname: the sed keeps only the
# address inside the parentheses of ping's first output line.
hostip=`ping ${HOSTNAME} -c 1 -w 1 | sed '1{s/[^(]*(//;s/).*//;q}'`

#generate etcd
generate_etcd_conf()
{
    ip_temp="initial-cluster: "
    array=(${HOSTLIST//,/ })
    for host in ${array[@]}
    do
    	ip_temp+="${host}=http://${host}:2380,"
    done
    cat >> /home/${USER}/etcd.yml <<EOF
name: ${HOSTNAME}
data-dir: /home/${USER}/etcddata
listen-client-urls: http://0.0.0.0:2379
advertise-client-urls: http://${hostip}:2379
listen-peer-urls: http://0.0.0.0:2380
initial-advertise-peer-urls: http://${hostip}:2380
${ip_temp%?}
initial-cluster-token: etcd-cluster-token
initial-cluster-state: new
EOF
}
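The initial-cluster line is assembled from HOSTLIST. Run in isolation (with HOSTLIST set to the same value the docker run examples below pass via --env), the loop produces:

```shell
#!/bin/bash
HOSTLIST="patroni1,patroni2,patroni3"   # same value the containers receive via --env
ip_temp="initial-cluster: "
# Split the comma-separated host list into an array, then append one
# name=peer-url pair per host.
array=(${HOSTLIST//,/ })
for host in "${array[@]}"
do
    ip_temp+="${host}=http://${host}:2380,"
done
# ${ip_temp%?} drops the trailing comma
echo "${ip_temp%?}"
# initial-cluster: patroni1=http://patroni1:2380,patroni2=http://patroni2:2380,patroni3=http://patroni3:2380
```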

#generate patroni
generate_patroni_conf()
{
  cat >> /home/${USER}/postgresql.yml <<EOF
scope: ${CLUSTER_NAME}
namespace: /${SERVICE_NAME}/
name: ${HOSTNAME}
restapi:
  listen: ${hostip}:8008
  connect_address: ${hostip}:8008
etcd:
  host: ${hostip}:2379
  username: ${ETCD_USER}
  password: ${ETCD_PASSWD}
bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
  initdb:
  - encoding: UTF8
  - data-checksums
  pg_hba:
  - host replication ${USER} 0.0.0.0/0 md5
  - host all all 0.0.0.0/0 md5
postgresql:
  listen: 0.0.0.0:5432
  connect_address: ${hostip}:5432
  data_dir: ${PG_DATADIR}
  bin_dir: ${PG_BINDIR}
  pgpass: /tmp/pgpass
  authentication:
    replication:
      username: ${USER}
      password: ${PASSWD}
    superuser:
      username: ${USER}
      password: ${PASSWD}
    rewind:
      username: ${USER}
      password: ${PASSWD}
  parameters:
    unix_socket_directories: '.'
    wal_level: hot_standby
    max_wal_senders: 10
    max_replication_slots: 10
tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false
EOF
}
#........ remainder omitted

Building the Image

docker build -t patroni .

Running the Image

Run container node 1:
docker run --privileged --name patroni1 -itd --hostname patroni1 --net my_net3 --restart always --env 'CLUSTER_NAME=patronicluster' --env 'SERVICE_NAME=service' --env 'ETCD_USER=etcduser' --env 'ETCD_PASSWD=etcdpasswd' --env 'PASSWD=zalando' --env 'HOSTLIST=patroni1,patroni2,patroni3' --env 'VIP=172.22.1.88' --env 'NET_DEVICE=eth0' --env 'ETCD_COUNT=3' patroni
Run container node 2:
docker run --privileged --name patroni2 -itd --hostname patroni2 --net my_net3 --restart always --env 'CLUSTER_NAME=patronicluster' --env 'SERVICE_NAME=service' --env 'ETCD_USER=etcduser' --env 'ETCD_PASSWD=etcdpasswd' --env 'PASSWD=zalando' --env 'HOSTLIST=patroni1,patroni2,patroni3' --env 'VIP=172.22.1.88' --env 'NET_DEVICE=eth0' --env 'ETCD_COUNT=3' patroni
Run container node 3:
docker run --privileged --name patroni3 -itd --hostname patroni3 --net my_net3 --restart always --env 'CLUSTER_NAME=patronicluster' --env 'SERVICE_NAME=service' --env 'ETCD_USER=etcduser' --env 'ETCD_PASSWD=etcdpasswd' --env 'PASSWD=zalando' --env 'HOSTLIST=patroni1,patroni2,patroni3' --env 'VIP=172.22.1.88' --env 'NET_DEVICE=eth0' --env 'ETCD_COUNT=3' patroni

Summary

This walkthrough is intended for a test environment only, to demonstrate a containerized etcd + Patroni + vip-manager stack end to end. In a real deployment, etcd should run in separate containers as an independent distributed cluster, PostgreSQL storage should be mapped to local or network disks, and the containers should be managed with an orchestration tool such as docker-compose, Docker Swarm, or Kubernetes.
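As one possible orchestration route, the three docker run commands above could be expressed as a compose file. This is a hypothetical sketch, not a tested deployment; the network name and env values are taken from the examples, and patroni2/patroni3 would repeat the same pattern:

```yaml
# Hypothetical docker-compose.yml equivalent of the `docker run` commands above.
version: "3"
services:
  patroni1:
    image: patroni
    hostname: patroni1
    privileged: true
    restart: always
    networks: [my_net3]
    environment:
      CLUSTER_NAME: patronicluster
      SERVICE_NAME: service
      ETCD_USER: etcduser
      ETCD_PASSWD: etcdpasswd
      PASSWD: zalando
      HOSTLIST: patroni1,patroni2,patroni3
      VIP: 172.22.1.88
      NET_DEVICE: eth0
      ETCD_COUNT: "3"
  # patroni2 and patroni3 follow the same pattern with their own hostnames
networks:
  my_net3:
    external: true
```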

Figures

etcd cluster status:

[Figure: etcd cluster status]

Patroni cluster status:

[Figure: Patroni cluster status]

vip-manager status:

[Figure: vip-manager status]

[Figure: vip-manager status, continued]

This concludes the article on Patroni in Docker containers. For more on running Patroni in Docker, search WalkonNet's earlier articles or browse the related articles below. Thank you for supporting WalkonNet!
