Setting Up a MySQL Cluster with Master-Slave Replication on k8s: Step by Step

Environment

| Name | Version | OS | IP | Notes |
|------|---------|----|----|-------|
| K8s cluster | 1.20.15 | CentOS 7.9 | 192.168.11.21, 192.168.11.22, 192.168.11.23 | .21 is k8s-master, .22 is k8s-node01, .23 is k8s-node02 |
| MySQL | 5.7 | CentOS 7.9 | | one master, two slaves |
| NFS server | | CentOS 7.9 | 192.168.11.24 | shared directory is /nfs |

I. Deploy the NFS Server

On 192.168.11.24:

1. Create the NFS shared directory

mkdir -p /nfs

2. Install the NFS packages

yum -y install nfs-utils rpcbind

3. Configure the NFS export

echo "/nfs  *(rw,async,no_root_squash)" >>/etc/exports

4. Start and enable the services

systemctl enable --now nfs-server
systemctl enable --now rpcbind

5. Verify

showmount -e  ## check that "/nfs *" appears in the export list; if the command is missing, install nfs-utils (it is what provides showmount)
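Assuming the export above, the output should look roughly like this (the hostname will vary):

showmount -e
## Export list for nfs-server:
## /nfs *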

On 192.168.11.21/22/23 (all K8s nodes):

1. Install the NFS client

yum -y install nfs-utils

2. Check that the NFS share is visible from the nodes

showmount -e 192.168.11.24  ## you should see "/nfs *" in the export list
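Optionally, you can also do a quick manual mount test from one node to confirm read/write access (a sanity check beyond the original steps; unmount afterwards):

mount -t nfs 192.168.11.24:/nfs /mnt     ## temporarily mount the share
touch /mnt/testfile && rm /mnt/testfile  ## confirm write access
umount /mnt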

II. Create the PV (Dynamic NFS Provisioner)

On 192.168.11.21:

1. Create a directory for the MySQL YAML manifests

mkdir  -p /webapp
cd /webapp

2. Create the NFS provisioner Deployment

vim nfs-client.yaml 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/xngczl/nfs-subdir-external-provisione:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs # note: this value can be customized, but it must match the StorageClass provisioner
            - name: NFS_SERVER
              value: 192.168.11.24  ## change this if your NFS server IP differs
            - name: NFS_PATH
              value: /nfs   ## NFS shared directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.11.24  ## change this if your NFS server IP differs
            path: /nfs  ## NFS shared directory

3. Create the RBAC resources

vim nfs-client-rbac.yaml 

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

4. Create the StorageClass

vim nfs-client-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs  ## required; must match PROVISIONER_NAME in nfs-client.yaml

5. Apply the manifests (RBAC first, since the Deployment references the ServiceAccount):

kubectl apply -f nfs-client-rbac.yaml
kubectl apply -f nfs-client.yaml
kubectl apply -f nfs-client-class.yaml
kubectl get po,sc
NAME                                          READY   STATUS    RESTARTS   AGE
pod/nfs-client-provisioner-8579c9d69b-m6vp4   1/1     Running   0          13m

NAME                                             PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/course-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  13m
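To confirm that dynamic provisioning works end to end, you can create a throwaway PVC and check that a PV is bound automatically (the PVC name test-claim is hypothetical, not part of the original setup):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: course-nfs-storage
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Mi
EOF
kubectl get pvc test-claim    ## STATUS should become Bound
kubectl delete pvc test-claim ## clean up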

III. Write the MySQL YAML Manifests

On 192.168.11.21:

mkdir -p /webapp/mysql
cd /webapp/mysql

1. Create the ConfigMap

vim mysql-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only

This manifest defines two MySQL configuration files:
1. master.cnf enables log-bin; binary logging must be enabled before master-slave replication can work.
2. slave.cnf enables super-read-only, so slave nodes only serve reads and reject other operations.
Both files are mounted into the MySQL containers as configuration files.
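Once the cluster is running (section IV), you can confirm these settings took effect; a quick check, assuming the default namespace:

kubectl exec mysql-0 -c mysql -- mysql -h 127.0.0.1 -e "SHOW VARIABLES LIKE 'log_bin'"          ## ON on the master
kubectl exec mysql-1 -c mysql -- mysql -h 127.0.0.1 -e "SHOW VARIABLES LIKE 'super_read_only'"  ## ON on the slaves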

2. Create the MySQL Services

vim mysql-services.yaml

apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the master: mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
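The first Service is headless (clusterIP: None), which gives each StatefulSet pod a stable DNS name such as mysql-0.mysql; writes must go to the master at mysql-0.mysql, while mysql-read load-balances reads across all pods. A quick way to try both endpoints once the cluster is up, from a temporary client pod (the pod name mysql-client is just an example):

kubectl run mysql-client --image=mysql:5.7 -it --rm --restart=Never -- \
  mysql -h mysql-read -e "SELECT @@server_id"     ## read endpoint; server_id varies per hit
kubectl run mysql-client --image=mysql:5.7 -it --rm --restart=Never -- \
  mysql -h mysql-0.mysql -e "SELECT @@server_id"  ## write endpoint; always the master (100)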

3. Create the MySQL StatefulSet

vim mysql-statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi          
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: fxkjnj/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql          
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: fxkjnj/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql
 
          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave. (Need to remove the tailing semicolon!)
            cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_slave_info xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm -f xtrabackup_binlog_info xtrabackup_slave_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi
 
          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
 
            echo "Initializing replication from clone position"
            mysql -h 127.0.0.1 \
                  -e "$(<change_master_to.sql.in), \
                          MASTER_HOST='mysql-0.mysql', \
                          MASTER_USER='root', \
                          MASTER_PASSWORD='', \
                          MASTER_CONNECT_RETRY=10; \
                        START SLAVE;" || exit 1
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
          fi
 
          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"          
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      storageClassName: "course-nfs-storage"
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 0.5Gi

Key points of this manifest:
• The xtrabackup tool performs the initial data backup.
• ncat (standard on Linux) streams the initial data copy between containers.
• MySQL's binlog drives the master-slave replication.
• mysqladmin ping serves as the liveness check.
• The ordinal in the pod's hostname determines whether the current node is the master or a slave, and the matching config file is then copied into the target directory (see the sketch below).
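To make that last point concrete, here is a minimal standalone sketch of the hostname-ordinal logic the init-mysql container runs (pure bash, runnable anywhere):

hostname=mysql-2                          ## hypothetical hostname, as a StatefulSet would assign it
[[ $hostname =~ -([0-9]+)$ ]] || exit 1   ## extract the trailing ordinal
ordinal=${BASH_REMATCH[1]}
echo "server-id=$((100 + ordinal))"       ## prints server-id=102 (the +100 offset avoids reserved id 0)
if [[ $ordinal -eq 0 ]]; then
  echo "master: would copy master.cnf"
else
  echo "slave: would copy slave.cnf"
fi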

IV. Start MySQL

On 192.168.11.21:

kubectl apply -f mysql-configmap.yaml
kubectl apply -f mysql-services.yaml
kubectl apply -f mysql-statefulset.yaml
kubectl get po -o wide
NAME      READY   STATUS    RESTARTS   AGE      IP            NODE            NOMINATED NODE    READINESS GATES
mysql-0   2/2     Running   0          3h12m    10.244.0.5    k8s-master1     <none>            <none>
mysql-1   2/2     Running   0          3h11m    10.244.1.6    k8s-node02      <none>            <none>
mysql-2   2/2     Running   0          3h10m    10.244.1.5    k8s-node01      <none>            <none>
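The pods start one at a time, in ordinal order: mysql-1 is only created once mysql-0 is Running and Ready, and so on. To block until the whole StatefulSet is up, you can wait on the rollout:

kubectl rollout status statefulset mysql   ## returns once all 3 replicas are ready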

V. Verify MySQL Master-Slave Replication

On 192.168.11.21:

kubectl exec -it mysql-0 -- bash  ## enter the mysql-0 pod
  mysql -h mysql-0.mysql  ## connect to the database
    CREATE DATABASE test;  ## create a test database and table
    CREATE TABLE test.messages (message VARCHAR(250));
    INSERT INTO test.messages VALUES ('hello');
    \q
exit
kubectl exec -it mysql-1 -- bash  ## enter the mysql-1 pod
  mysql -h mysql-1.mysql  ## connect to the database
    SELECT * FROM test.messages;  ## check whether the test data replicated

You should see the following output:

+---------+
| message |
+---------+
| hello   |
+---------+
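You can also verify replication health directly on a slave; both replication threads should report Yes (a supplementary check, assuming the default namespace):

kubectl exec mysql-1 -c mysql -- \
  mysql -h 127.0.0.1 -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running"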

This concludes this article on setting up a MySQL cluster with master-slave replication on k8s. For more on MySQL replication on k8s, search WalkonNet's earlier articles or browse the related articles below, and please keep supporting WalkonNet!
