k8s StatefulSet example: deploying ZooKeeper as a stateful service

Setting the ZooKeeper configuration file aside for now, here are the manifests:

apiVersion: apps/v1
kind: StatefulSet   # A pitfall with stateful services and fixed hostnames: a pod's DNS name only resolves once the pod is Running. When I used sed to substitute hostnames, a pod that was not yet Running could resolve only its own hostname, not the other pods', a chicken-and-egg problem. That is why I switched the ZooKeeper config to pod IPs.
metadata:
  name: zookeeper
spec:
  serviceName: zookeeper  # So the three generated pods are named zookeeper-0, zookeeper-1 and zookeeper-2
  replicas: 3
  revisionHistoryLimit: 10
  selector:  # Required for a StatefulSet
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      volumes:
      - name: volume-logs
        hostPath: 
          path: /var/log/zookeeper
      containers:
      - name: zookeeper
        image: harbor.test.com/middleware/zookeeper:3.4.10
        imagePullPolicy: IfNotPresent
        livenessProbe:
          tcpSocket:
            port: 2181
          initialDelaySeconds: 30
          timeoutSeconds: 3
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 2
        ports:
        - containerPort: 2181
          protocol: TCP
        - containerPort: 2888
          protocol: TCP
        - containerPort: 3888
          protocol: TCP
        env:
        - name: SERVICE_NAME
          value: "zookeeper"
        - name: MY_POD_NAME  # Expose a built-in k8s field as an env var; once the pod is created, echo ${MY_POD_NAME} prints the hostname
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: volume-logs
          mountPath: /var/log/zookeeper
      nodeSelector:
        zookeeper: enable
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper # My cluster (service) name. From any generated pod you can ping zookeeper, which acts as the cluster name for the three pods; successive pings do not necessarily hit the same address. nslookup zookeeper returns the pod IPs of all three pods, three records in total.
spec:
  ports:
  - port: 2181
  selector:
    app: zookeeper 
  clusterIP: None  # Required: this makes the service headless
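
The nodeSelector in the StatefulSet above (zookeeper: enable) only matches labeled nodes, so the target nodes have to be labeled beforehand; host3 and host4 are the node names in this cluster:

kubectl label node host3 zookeeper=enable
kubectl label node host4 zookeeper=enable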

After applying the manifests, the pods look like this:

[root@host5 src]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
default       zookeeper-0                                1/1     Running   0          12m     192.168.55.69    host3   <none>           <none>
default       zookeeper-1                                1/1     Running   0          12m     192.168.31.93    host4   <none>           <none>
default       zookeeper-2                                1/1     Running   0          12m     192.168.55.70    host3   <none>           <none>
bash-4.3# nslookup zookeeper
nslookup: can't resolve '(null)': Name does not resolve

Name:      zookeeper
Address 1: 192.168.55.70 zookeeper-2.zookeeper.default.svc.cluster.local
Address 2: 192.168.55.69 zookeeper-0.zookeeper.default.svc.cluster.local
Address 3: 192.168.31.93 zookeeper-1.zookeeper.default.svc.cluster.local
bash-4.3# ping zookeeper-0.zookeeper
PING zookeeper-0.zookeeper (192.168.55.69): 56 data bytes
64 bytes from 192.168.55.69: seq=0 ttl=63 time=0.109 ms
64 bytes from 192.168.55.69: seq=1 ttl=63 time=0.212 ms
^C
--- zookeeper-0.zookeeper ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.109/0.160/0.212 ms
bash-4.3# ping zookeeper-1.zookeeper
PING zookeeper-1.zookeeper (192.168.31.93): 56 data bytes
64 bytes from 192.168.31.93: seq=0 ttl=62 time=0.535 ms
64 bytes from 192.168.31.93: seq=1 ttl=62 time=0.507 ms
64 bytes from 192.168.31.93: seq=2 ttl=62 time=0.587 ms
^C
--- zookeeper-1.zookeeper ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.507/0.543/0.587 ms
bash-4.3# ping zookeeper-2.zookeeper
PING zookeeper-2.zookeeper (192.168.55.70): 56 data bytes
64 bytes from 192.168.55.70: seq=0 ttl=64 time=0.058 ms
64 bytes from 192.168.55.70: seq=1 ttl=64 time=0.081 ms
^C
--- zookeeper-2.zookeeper ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.058/0.069/0.081 ms

The commonly used k8s Downward API variables are as follows:

        env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
spec.nodeName: the name of the node the pod is scheduled on (the host machine)

status.podIP: the pod's IP address
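
Assuming the env entries above are added to a pod spec, a quick way to spot-check them from outside the pod (a verification step, not part of the deployment itself) is:

kubectl exec zookeeper-0 -- env | grep ^MY_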

Now let's look at the ZooKeeper configuration file:

[root@docker06 conf]# cat zoo.cfg |grep -v ^#|grep -v ^$
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
clientPort=2181
clientPortAddress= docker06
server.1=docker05:2888:3888
server.2=docker06:2888:3888
server.3=docker04:2888:3888
snapCount=10000
leaderServes=yes
autopurge.snapRetainCount=3
autopurge.purgeInterval=2
maxClientCnxns=1000

We need to modify it into the following form:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
clientPort=2181
clientPortAddress= docker06  # The three server.N lines below are fixed; the problem is this line, which must be set to each pod's own IP. We can mount the config file via a ConfigMap and then use sed inside the pod to rewrite this one line.
server.1=zookeeper-0.zookeeper:2888:3888
server.2=zookeeper-1.zookeeper:2888:3888
server.3=zookeeper-2.zookeeper:2888:3888
snapCount=10000
leaderServes=yes
autopurge.snapRetainCount=3
autopurge.purgeInterval=2
maxClientCnxns=1000
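
One way to get this modified file into the pod is a ConfigMap built from the file itself; the ConfigMap name zookeeper-conf below is only illustrative:

kubectl create configmap zookeeper-conf --from-file=zoo.cfg
kubectl get configmap zookeeper-conf -o yaml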

The following method, borrowed from a Redis cluster deployment, can serve as a reference:

First, mount the configuration file and a fix-up script (fix-ip.sh) into the pod through a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
data:
  fix-ip.sh: |
    #!/bin/sh
    CLUSTER_CONFIG="/var/lib/redis/nodes.conf"
    if [ -f ${CLUSTER_CONFIG} ]; then
      if [ -z "${POD_IP}" ]; then
        echo "Unable to determine Pod IP address!"
        exit 1
      fi
      echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
      sed -i.bak -e "/myself/s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG}
    fi
    exec "$@"
  redis.conf: |+
    cluster-enabled yes
    cluster-require-full-coverage no
    cluster-node-timeout 15000
    cluster-config-file /var/lib/redis/nodes.conf
    cluster-migration-barrier 1
    appendonly yes
    protected-mode no
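
To see what the sed line in fix-ip.sh does, here it is applied to a fabricated nodes.conf line (the node ID and both IPs are made up for the demo):

echo "abc123 10.244.1.5:6379@16379 myself,master - 0 0 1 connected" > /tmp/nodes.conf
POD_IP=10.244.2.9
sed -e "/myself/s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" /tmp/nodes.conf
# prints: abc123 10.244.2.9:6379@16379 myself,master - 0 0 1 connected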

Then execute this script when the pod starts:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: 10.11.100.85/library/redis
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/etc/redis/fix-ip.sh", "redis-server", "/etc/redis/redis.conf"]  #Here, the script is executed first, and then redis is started
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 20
          periodSeconds: 3
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /etc/redis
          readOnly: false
        - name: data
          mountPath: /var/lib/redis
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster
          defaultMode: 0755
#          items:
#          - key: redis.conf
#            path: redis.conf
#          - key: fix-ip.sh
#            path: fix-ip.sh
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        name: redis-cluster
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 150Mi
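
Assuming the ConfigMap and StatefulSet above are saved as redis-cluster.yaml, deploying and watching the rollout would look like:

kubectl apply -f redis-cluster.yaml
kubectl get pods -l app=redis-cluster -w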

Note: files mounted from a ConfigMap are read-only, so sed cannot modify them in place. You can mount the ConfigMap into a temporary directory, copy the file to its real location, and run sed on the copy. This has its own drawback: if you later update the ConfigMap dynamically, only the file in the temporary mount changes, not the copy.
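
A minimal sketch of that copy-then-sed workaround, with illustrative paths (the ConfigMap mounted read-only at /tmp/conf-template, the copy placed in a writable directory):

cp /tmp/conf-template/zoo.cfg /conf/zoo.cfg    # copy out of the read-only ConfigMap mount
sed -i "s/PODIP/${MY_POD_IP}/g" /conf/zoo.cfg  # edit the writable copy in place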

Configuration for the actual production environment:

1. Rebuild the image

[root@host4 zookeeper]# ll
total 4
drwxr-xr-x 2 root root  45 May 24 15:48 conf
-rw-r--r-- 1 root root 143 May 23 06:19 Dockerfile
drwxr-xr-x 2 root root  20 May 24 15:48 scripts

[root@host4 zookeeper]# cd conf
[root@host4 conf]# ll
total 8
-rw-r--r-- 1 root root 1503 May 23 04:15 log4j.properties
-rw-r--r-- 1 root root  324 May 24 15:48 zoo.cfg

[root@host4 conf]# cat zoo.cfg 
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
clientPort=2181
clientPortAddress=PODIP # PODIP is a placeholder replaced with the pod's own IP at startup; the server lines below use hostnames. See above for the reason.
server.1=zookeeper-0.zookeeper:2888:3888
server.2=zookeeper-1.zookeeper:2888:3888
server.3=zookeeper-2.zookeeper:2888:3888
snapCount=10000
leaderServes=yes
autopurge.snapRetainCount=3
autopurge.purgeInterval=2
maxClientCnxns=1000

[root@host4 conf]# cd ../scripts/
[root@host4 scripts]# ll
total 4
-rwxr-xr-x 1 root root 177 May 24 15:48 sed.sh

[root@host4 scripts]# cat sed.sh 
#!/bin/bash
# Derive the ZooKeeper id from the pod ordinal (zookeeper-0 -> 1, zookeeper-1 -> 2, ...)
MY_ID=`echo ${MY_POD_NAME} |awk -F'-' '{print $NF}'`
MY_ID=`expr ${MY_ID} + 1`
echo ${MY_ID} > /data/myid
# Replace the PODIP placeholder in zoo.cfg with this pod's own IP
sed -i 's/PODIP/'${MY_POD_IP}'/g' /conf/zoo.cfg
exec "$@"

[root@host4 scripts]# cd ..
[root@host4 zookeeper]# ls
conf  Dockerfile  scripts
[root@host4 zookeeper]# cat Dockerfile 
FROM harbor.test.com/middleware/zookeeper:3.4.10
MAINTAINER rongruixue@163.com

ARG zookeeper_version=3.4.10

COPY conf /conf/
COPY scripts /

Running docker build then produces the image harbor.test.com/middleware/zookeeper:v3.4.10.
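
The exact commands would be along these lines (assuming push access to the harbor registry):

docker build -t harbor.test.com/middleware/zookeeper:v3.4.10 .
docker push harbor.test.com/middleware/zookeeper:v3.4.10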

Then we start the pods through YAML:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
 # podManagementPolicy: Parallel  # Uncomment to bring up all 3 pods in parallel instead of in order 0, 1, 2
  serviceName: zookeeper
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      volumes:
      - name: volume-logs
        hostPath: 
          path: /var/log/zookeeper
      - name: volume-data
        hostPath:
          path: /opt/zookeeper/data
      terminationGracePeriodSeconds: 10
      containers:
      - name: zookeeper
        image: harbor.test.com/middleware/zookeeper:v3.4.10
        imagePullPolicy: Always
        ports:
        - containerPort: 2181
          protocol: TCP
        - containerPort: 2888
          protocol: TCP
        - containerPort: 3888
          protocol: TCP
        env:
        - name: SERVICE_NAME
          value: "zookeeper"
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: volume-logs
          mountPath: /var/log/zookeeper
        #- name: volume-data  # Do not mount /data from a hostPath: if two pods land on the same node they would share the directory and overwrite each other's myid
        #  mountPath: /data
        command:
          - /bin/bash
          - -c
          - -x
          - |
            /sed.sh  # this script writes the pod IP into zoo.cfg and the ordinal-based id into /data/myid
            sleep 10
            zkServer.sh start-foreground
      nodeSelector:
        zookeeper: enable
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
  - port: 2181
  selector:
    app: zookeeper 
  clusterIP: None
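
Assuming both manifests above are saved as zookeeper.yaml, a quick sanity check after deployment might be (zkServer.sh is on the PATH in the stock zookeeper image):

kubectl apply -f zookeeper.yaml
kubectl get pods -l app=zookeeper
kubectl exec zookeeper-0 -- zkServer.sh status   # expect Mode: leader or Mode: follower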
