k8s continuous integration

Build a DevOps continuous integration and deployment environment with gitlab + Jenkins + harbor + kubernetes.

Architecture diagram of the whole environment.

1. Preparatory work

gitlab and harbor are installed on a host outside the kubernetes cluster.

1.1. Set image source

docker-ce.repo

[root@support harbor]# cat /etc/yum.repos.d/docker-ce.repo 
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

1.2. Install dependency packages

[root@support yum.repos.d]# yum install -y docker-ce-18.09.7
[root@support yum.repos.d]# yum install -y docker-compose
[root@support yum.repos.d]# yum install -y git
[root@support yum.repos.d]# cat /etc/docker/daemon.json
{"registry-mirrors": ["http://f1361db2.m.daocloud.io"]}
[root@support yum.repos.d]# systemctl start docker

2. harbor deployment

2.1. Installation package

[root@support yum.repos.d]# wget -b https://storage.googleapis.com/harbor-releases/release-1.9.0/harbor-offline-installer-v1.9.0.tgz
Continuing in background, pid 9771.
Output will be written to 'wget-log'.
[root@support ~]# tar zxf harbor-offline-installer-v1.9.0.tgz
[root@support ~]# cd harbor
[root@support harbor]# vi harbor.yml
hostname: 139.9.134.177
http:
  port: 8080

2.2. Deployment

[root@support harbor]# ./prepare 

[root@support harbor]# ./install.sh 

[root@support harbor]# docker-compose ps
      Name                     Command              State             Ports          
-------------------------------------------------------------------------------------
harbor-core         /harbor/harbor_core             Up                               
harbor-db           /docker-entrypoint.sh           Up      5432/tcp                 
harbor-jobservice   /harbor/harbor_jobservice       Up                               
                    ...                                                              
harbor-log          /bin/sh -c /usr/local/bin/      Up      127.0.0.1:1514->10514/tcp
                    ...                                                              
harbor-portal       nginx -g daemon off;            Up      8080/tcp                 
nginx               nginx -g daemon off;            Up      0.0.0.0:8080->8080/tcp   
redis               redis-server /etc/redis.conf    Up      6379/tcp                 
registry            /entrypoint.sh /etc/regist      Up      5000/tcp                 
                    ...                                                              
registryctl         /harbor/start.sh                Up 
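
To verify that harbor is up, a quick check against the host and port from harbor.yml (a sketch; any HTTP client will do):

[root@support harbor]# curl -I http://139.9.134.177:8080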

3. gitlab deployment

3.1. Pull the image

[root@support yum.repos.d]# docker pull gitlab/gitlab-ce
Using default tag: latest
latest: Pulling from gitlab/gitlab-ce
16c48d79e9cc: Pull complete 
3c654ad3ed7d: Pull complete 
6276f4f9c29d: Pull complete 
a4bd43ad48ce: Pull complete 
075ff90164f7: Pull complete 
8ed147de678c: Pull complete 
c6b08aab9197: Pull complete 
6c15d9b5013c: Pull complete 
de3573fbdb09: Pull complete 
4b6e8211dc80: Pull complete 
Digest: sha256:eee5fc2589f9aa3cd4c1c1783d5b89667f74c4fc71c52df54660c12cc493011b
Status: Downloaded newer image for gitlab/gitlab-ce:latest
docker.io/gitlab/gitlab-ce:latest
[root@support yum.repos.d]#

3.2. Start the container

[root@bogon /]# docker run --detach \
--hostname 139.9.134.177 \
--publish 10443:443 --publish 10080:80 --publish 10022:22 \
--name gitlab \
--restart always \
--volume /opt/gitlab/config:/etc/gitlab \
--volume /opt/gitlab/logs:/var/log/gitlab \
--volume /opt/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest
git repository initialization:

git init --bare 
git clone 

For a jenkins installed directly with yum (as done later on the NFS server):

yum install jenkins -y
java -version
tail -f /var/log/jenkins/jenkins.log

The jenkins web initialization password is printed in the log.

4. jenkins deployment

The manifest for deploying jenkins in a kubernetes cluster is on github:

https://github.com/jenkinsci/kubernetes-plugin/blob/master/src/main/kubernetes/jenkins.yml

4.1. NFS-PV dynamic provisioning

NFS service preparation

# Install NFS utils using yum or up2date
[root@support ~]# yum install -y nfs-utils
[root@support ~]# mkdir /ifs/kubernetes
[root@support ~]# cat /etc/exports
# Share the directory with hosts on the 10.0.0.0/24 segment
/ifs/kubernetes 10.0.0.0/24(rw,no_root_squash)
[root@support ~]# systemctl start nfs
[root@support ~]# exportfs -arv
exporting 10.0.0.0/24:/ifs/kubernetes
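
Before moving on, it is worth verifying the export from a node inside the cluster; a sketch, assuming the NFS server's intranet address 10.0.0.123 used in nfs.yaml below (showmount ships with nfs-utils):

[root@master ~]# showmount -e 10.0.0.123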

nfs.yaml

[root@master jenkins]# cat nfs.yaml 
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
    
---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
  
---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
    
---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

---

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "true"

---

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
  
---

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector: 
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: lizhenliang/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.0.123
            - name: NFS_PATH
              value: /ifs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.123
            path: /ifs/kubernetes
[root@master jenkins]#
# Create the PV dynamic provisioner
[root@master jenkins]# kubectl apply -f nfs.yaml
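
A quick sanity check that the provisioner pod is running and the StorageClass exists:

[root@master jenkins]# kubectl get pods -l app=nfs-client-provisioner
[root@master jenkins]# kubectl get storageclass managed-nfs-storage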

4.2. Deploy jenkins on kubernetes

The Jenkins master is scheduled onto the master node of the K8s cluster.
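
The StatefulSet below selects nodes carrying the label labelName=master, so that label must exist on the master node first; a sketch, assuming the node is named master (check with kubectl get nodes):

[root@master jenkins]# kubectl label node master labelName=master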

jenkins.yaml

[root@master jenkins]# cat jenkins.yaml 
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  selector:
    name: jenkins
  type: NodePort
  ports:
    -
      name: http
      port: 80
      targetPort: 8080
      protocol: TCP
      nodePort: 30006
    -
      name: agent
      port: 50000
      protocol: TCP
      
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins

---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
  labels:
    name: jenkins
spec:
  serviceName: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  selector: 
    matchLabels:
      name: jenkins
  template:
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      # Schedule to master node
      nodeSelector:
        labelName: master
      # Tolerate master node stains
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts-alpine
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
            - containerPort: 50000
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
      securityContext:
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: jenkins-home
    spec:
      storageClassName: "managed-nfs-storage"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
# Create the jenkins Pod
[root@master jenkins]# kubectl apply -f jenkins.yaml

# Open browser to access jenkins address
http://139.9.139.49:30006/
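
The first visit asks for the initial admin password, which can be read straight out of the pod; a sketch, using the StatefulSet pod name jenkins-0 and the standard jenkins secrets path:

[root@master jenkins]# kubectl exec jenkins-0 -- cat /var/jenkins_home/secrets/initialAdminPassword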

# If jenkins is stuck at the startup screen for a long time, check the update center config in the jenkins home on the NFS share
[root@support default-jenkins-home-jenkins-0-pvc-ea84462f-241e-4d38-a408-e07a59d4bf0e]# cat hudson.model.UpdateCenter.xml 
<?xml version='1.1' encoding='UTF-8'?>
<sites>
  <site>
    <id>default</id>
    <url>http://mirror.xmission.com/jenkins/updates/update-center.json</url>
  </site>
</sites>
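
Besides the UI route described in 4.3.2 below, the URL can be switched to the Tsinghua mirror directly in this file on the NFS share; a sketch (deleting jenkins-0 afterwards makes the StatefulSet recreate the pod with the new setting):

[root@support default-jenkins-home-jenkins-0-pvc-ea84462f-241e-4d38-a408-e07a59d4bf0e]# sed -i 's#http://mirror.xmission.com/jenkins#https://mirrors.tuna.tsinghua.edu.cn/jenkins#' hudson.model.UpdateCenter.xml
[root@master jenkins]# kubectl delete pod jenkins-0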

4.3. Plugin installation

Install plugins in jenkins: Manage Jenkins -> Manage Plugins.

4.3.1. Plugins to install

Git plugin        git support
GitLab Plugin     gitlab integration
Kubernetes plugin dynamically creates agents
Pipeline          pipelines
Email Extension   mail extension

Installing plugins is painfully slow, a few KB per second ╮ ( ̄▽  ̄) ╭

But there is a way to solve this ( ̄▽  ̄)*

4.3.2. Tell jenkins which plugins need to be updated

Use the Tsinghua University mirror address: https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json

1. Go to Manage Jenkins
2. Go to Manage Plugins

--> Advanced --> Update Site

4.3.3. How it works

https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json contains the download addresses of all plugins. The Tsinghua mirror serves this file, but does not rewrite the plugin download addresses inside it to point at Tsinghua, so plugins are still downloaded from hosts abroad. The result is fast update metadata but slow actual downloads.

curl -vvvv http://updates.jenkins-ci.org/download/plugins/ApicaLoadtest/1.10/ApicaLoadtest.hpi
returns a 302 redirect to
http://mirrors.jenkins-ci.org/plugins/ApicaLoadtest/1.10/ApicaLoadtest.hpi
which in turn redirects the download to one of several mirror hosts.

Tsinghua's address for the same plugin is:
https://mirrors.tuna.tsinghua.edu.cn/jenkins/plugins/ApicaLoadtest/1.10/ApicaLoadtest.hpi
So all we have to do is proxy mirrors.jenkins-ci.org to mirrors.tuna.tsinghua.edu.cn/jenkins.

4.3.4. Trick jenkins into downloading plugins from the Tsinghua mirror

Bind the domain mirrors.jenkins-ci.org to this machine in /etc/hosts:

[root@support nginx]# cat /etc/hosts
127.0.0.1 mirrors.jenkins-ci.org

Set up an nginx reverse proxy to Tsinghua's jenkins plugin download address:

[root@support ~]# cat /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {

    access_log  /var/log/nginx/access.log;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server
    {
        listen 80;
        server_name mirrors.jenkins-ci.org;
        root    /usr/share/nginx/html;

        location / {
            proxy_redirect off;
            proxy_pass https://mirrors.tuna.tsinghua.edu.cn/jenkins/;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Accept-Encoding "";
            proxy_set_header Accept-Language "zh-CN";
        }
        index index.html index.htm index.php;

        location ~ /\.
        {
            deny all;
        }

    }

}

Finally, a look at the nginx access log: every plugin download request sent by the local jenkins is forwarded to the Tsinghua mirror.

127.0.0.1 - - [14/Oct/2019:23:40:32 +0800] "GET /plugins/kubernetes-credentials/0.4.1/kubernetes-credentials.hpi HTTP/1.1" 200 17893 "-" "Java/1.8.0_222"
127.0.0.1 - - [14/Oct/2019:23:40:37 +0800] "GET /plugins/variant/1.3/variant.hpi HTTP/1.1" 200 10252 "-" "Java/1.8.0_222"
127.0.0.1 - - [14/Oct/2019:23:40:40 +0800] "GET /plugins/kubernetes-client-api/4.6.0-2/kubernetes-client-api.hpi HTTP/1.1" 200 11281634 "-" "Java/1.8.0_222"
127.0.0.1 - - [14/Oct/2019:23:40:42 +0800] "GET /plugins/kubernetes/1.20.0/kubernetes.hpi HTTP/1.1" 200 320645 "-" "Java/1.8.0_222"
127.0.0.1 - - [14/Oct/2019:23:40:45 +0800] "GET /plugins/git/3.12.1/git.hpi HTTP/1.1" 200 2320552 "-" "Java/1.8.0_222"
127.0.0.1 - - [14/Oct/2019:23:40:47 +0800] "GET /plugins/gitlab-plugin/1.5.13/gitlab-plugin.hpi HTTP/1.1" 200 8456411 "-" "Java/1.8.0_222"

Following this setup, downloads turn out to be very fast, most plugins finishing in under 2 seconds. Most online tutorials only do the first step; after that, acceleration sometimes works and sometimes doesn't. This is the real, final solution.

Of course, getting this far cost me a whole night of pitfalls. First, a jenkins deployed as a pod in K8s cannot use this proxy method directly. After trying hard without results, I crudely installed a jenkins of the same version on the NFS server. I found that copying the files from the pod's persistent directory (the path backing /var/jenkins_home) into /var/lib/jenkins brought this new jenkins to the same running state as the one in the pod. So, after the new jenkins finishes downloading the plugins, the plugin directory /var/lib/jenkins/plugins can be copied straight back into the pod's persistent volume.
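
A minimal sketch of that final copy, assuming the NFS export and PVC directory names from the earlier sections:

[root@support ~]# cp -r /var/lib/jenkins/plugins /ifs/kubernetes/default-jenkins-home-jenkins-0-pvc-ea84462f-241e-4d38-a408-e07a59d4bf0e/
# recreate the pod so jenkins loads the copied plugins
[root@master jenkins]# kubectl delete pod jenkins-0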

4.4. gitlab triggers jenkins

4.4.1. gitlab generates a token

Copy this token; it is displayed only once: vze6nS8tLAQ1dVpdaHYU

4.4.2. jenkins configures the connection to gitlab

Click Manage Jenkins -> Configure System and find the gitlab section.

Select GitLab API token as the type, and fill in the token generated by gitlab.

4.4.3. Create jenkins task

This address is used to set the webhook of gitlab: http://139.9.139.49:30006/project/gitlab-citest-pipeline

Click generate token: 2daf58bf638f04ce9e201ef0df9bec0f

This token is also used to set the webhook of gitlab

4.4.4. Set webhooks in gitlab

4.4.5. Push code to gitlab to trigger the jenkins job

First, clone the repository from gitlab to the local machine:

[root@support ~]# git clone http://139.9.134.177:10080/miao/citest.git
Cloning into 'citest'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.

After modifying the code, commit and push it to gitlab:

[root@support citest]# git commit -m "Testing gitlab and jenkins Connection #1"
[master 03264a7] Testing gitlab and jenkins Connection 1
 1 file changed, 3 insertions(+), 1 deletion(-)
[root@support citest]# git push origin master
Username for 'http://139.9.134.177:10080': miao
Password for 'http://miao@139.9.134.177:10080': 
Counting objects: 5, done.
Writing objects: 100% (3/3), 294 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To http://139.9.134.177:10080/miao/citest.git
   25f05bb..03264a7  master -> master

The jenkins job has started.

The job shows it was triggered by gitlab, and the first stage succeeds.

4.5. jenkins creates dynamic agents in kubernetes

Here we use Docker-in-Docker: since jenkins is deployed inside k8s, the jenkins master dynamically creates a slave pod and uses it to run operations such as cloning the code, building the project, and building the image. The slave pod is deleted once the build finishes. This takes load off the jenkins master and greatly improves resource utilization.

4.5.1. Configure the connection to kubernetes

We have already installed the Kubernetes plugin, so we can configure it directly in jenkins.

Manage Jenkins -> Configure System -> scroll to the bottom, where there is a Cloud section.

Add a new cloud -> Kubernetes.

Because jenkins runs directly inside k8s, it can reach the kubernetes service name through the cluster DNS. Click Test Connection; the k8s connection succeeds.

Then click Save.
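
For reference, a sketch of values that fit the manifests above, assuming jenkins runs in the default namespace and the cluster DNS resolves service names:

Kubernetes URL:  https://kubernetes.default
Jenkins URL:     http://jenkins.default
Jenkins tunnel:  jenkins.default:50000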

4.5.2. Build the jenkins slave image

The official slave image build documentation is on github:

https://github.com/jenkinsci/docker-jnlp-slave

To build a Jenkins slave image, we need to prepare four files:

1. Fetch slave.jar from the jenkins instance at the following address:

http://119.3.226.210:30006/jnlpJars/slave.jar
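
For example, fetched from the command line (same address):

[root@support jenkins-slave]# wget http://119.3.226.210:30006/jnlpJars/slave.jar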

2. jenkins-slave, the startup script for slave.jar:

[root@support jenkins-slave]# cat jenkins-slave 
#!/usr/bin/env sh

if [ $# -eq 1 ]; then

	# if `docker run` has only one argument, we assume the user is running an alternate command like `bash` to inspect the image
	exec "$@"

else

	# if -tunnel is not provided try env vars
	case "$@" in
		*"-tunnel "*) ;;
		*)
		if [ ! -z "$JENKINS_TUNNEL" ]; then
			TUNNEL="-tunnel $JENKINS_TUNNEL"
		fi ;;
	esac

	# if -workDir is not provided try env vars
	if [ ! -z "$JENKINS_AGENT_WORKDIR" ]; then
		case "$@" in
			*"-workDir"*) echo "Warning: Work directory is defined twice in command-line arguments and the environment variable" ;;
			*)
			WORKDIR="-workDir $JENKINS_AGENT_WORKDIR" ;;
		esac
	fi

	if [ -n "$JENKINS_URL" ]; then
		URL="-url $JENKINS_URL"
	fi

	if [ -n "$JENKINS_NAME" ]; then
		JENKINS_AGENT_NAME="$JENKINS_NAME"
	fi  

	if [ -z "$JNLP_PROTOCOL_OPTS" ]; then
		echo "Warning: JnlpProtocol3 is disabled by default, use JNLP_PROTOCOL_OPTS to alter the behavior"
		JNLP_PROTOCOL_OPTS="-Dorg.jenkinsci.remoting.engine.JnlpProtocol3.disabled=true"
	fi

	# If both required options are defined, do not pass the parameters
	OPT_JENKINS_SECRET=""
	if [ -n "$JENKINS_SECRET" ]; then
		case "$@" in
			*"${JENKINS_SECRET}"*) echo "Warning: SECRET is defined twice in command-line arguments and the environment variable" ;;
			*)
			OPT_JENKINS_SECRET="${JENKINS_SECRET}" ;;
		esac
	fi
	
	OPT_JENKINS_AGENT_NAME=""
	if [ -n "$JENKINS_AGENT_NAME" ]; then
		case "$@" in
			*"${JENKINS_AGENT_NAME}"*) echo "Warning: AGENT_NAME is defined twice in command-line arguments and the environment variable" ;;
			*)
			OPT_JENKINS_AGENT_NAME="${JENKINS_AGENT_NAME}" ;;
		esac
	fi

	#TODO: Handle the case when the command-line and Environment variable contain different values.
	#It is fine it blows up for now since it should lead to an error anyway.

	exec java $JAVA_OPTS $JNLP_PROTOCOL_OPTS -cp /usr/share/jenkins/slave.jar hudson.remoting.jnlp.Main -headless $TUNNEL $URL $WORKDIR $OPT_JENKINS_SECRET $OPT_JENKINS_AGENT_NAME "$@"
fi

3. maven's configuration file settings.xml:

[root@support jenkins-slave]# cat settings.xml 
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <pluginGroups>
  </pluginGroups>
  <proxies>
  </proxies>
  <servers>
  </servers>
  <mirrors>
    <mirror>     
      <id>central</id>     
      <mirrorOf>central</mirrorOf>     
      <name>aliyun maven</name>
      <url>https://maven.aliyun.com/repository/public</url>
    </mirror>
  </mirrors>
  <profiles>
  </profiles>
</settings>

4. Dockerfile

FROM centos:7
LABEL maintainer lizhenliang

# Give the image the tools to pull git repositories and compile Java code
RUN yum install -y java-1.8.0-openjdk maven curl git libtool-ltdl-devel && \
    yum clean all && \
    rm -rf /var/cache/yum/* && \
    mkdir -p /usr/share/jenkins

# Put the obtained slave.jar into the image
COPY slave.jar /usr/share/jenkins/slave.jar
# Jenkins slave execution script
COPY jenkins-slave /usr/bin/jenkins-slave
# settings.xml configures the aliyun maven mirror
COPY settings.xml /etc/maven/settings.xml
RUN chmod +x /usr/bin/jenkins-slave

ENTRYPOINT ["jenkins-slave"]

Put these four files in the same directory, then build the slave image.

Build and tag the image:

[root@support jenkins-slave]# docker build . -t 139.9.134.177:8080/jenkinsci/jenkins-slave-jdk:1.8
[root@support jenkins-slave]# docker image ls
REPOSITORY                                       TAG                        IMAGE ID            CREATED             SIZE
139.9.134.177:8080/jenkinsci/jenkins-slave-jdk   1.8                        940e56848837        3 minutes ago       535MB

Now push the image.

Login over HTTP is denied: docker uses HTTPS by default, so daemon.json needs to be modified.

[root@support jenkins-slave]# docker login 139.9.134.177:8080
Username: admin
Password: 
Error response from daemon: Get https://139.9.134.177:8080/v2/: http: server gave HTTP response to HTTPS client
# Add the registry as a trusted HTTP (insecure) registry
[root@support ~]# cat /etc/docker/daemon.json
{
    "registry-mirrors": ["http://f1361db2.m.daocloud.io"],
    "insecure-registries": ["http://139.9.134.177:8080"]
}
# Successfully logged in
[root@support ~]# docker login 139.9.134.177:8080
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

All k8s hosts also need this configuration to reach harbor; restart the docker service after changing it.

We set the trusted registry to the intranet address to ensure good speed.
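
A sketch of the daemon.json on each k8s node, assuming harbor's intranet address 10.0.0.123:8080 as used in the pipeline scripts below:

[root@master ~]# cat /etc/docker/daemon.json
{
    "insecure-registries": ["10.0.0.123:8080"]
}
[root@master ~]# systemctl restart docker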

4.5.3. Jenkins tasks run in k8s pods

Create a pod dynamically using the following pipeline script

// Image registry address
def registry = "10.0.0.123:8080"

podTemplate(label: 'jenkins-agent', cloud: 'kubernetes', 
    containers: [
    containerTemplate(
        name: 'jnlp', 
        image: "${registry}/jenkinsci/jenkins-slave-jdk:1.8"
    )],
    volumes: [
        hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
        hostPathVolume(mountPath: '/usr/bin/docker', hostPath: '/usr/bin/docker')
    ]) 
{
  node("jenkins-agent"){
        stage('Pull code') { // for display purposes
            git 'http://139.9.134.177:10080/miao/citest.git'
            sh 'ls'
        }
        stage('Code compilation') {
            echo 'ok'
        }
        stage('deploy') {
            echo 'ok'
        }
    }
}

4.6. Continuous integration using a pipeline script

Each time code is pushed to gitlab, the pipeline script pulls it, compiles it into a docker image, and pushes the image to harbor.

Two credentials need to be configured first, because both our gitlab code repository and the harbor registry are private; jenkins can only access them with credentials.

Enter the gitlab account and password; once the credential is generated, copy its id and reference it in the pipeline.

Enter the harbor account and password; once the credential is generated, copy its id and reference it in the pipeline.

// Image registry address
def registry = "10.0.0.123:8080"
// Registry project
def project = "jenkinsci"
// Image name
def app_name = "citest"
// Full image name
def image_name = "${registry}/${project}/${app_name}:${BUILD_NUMBER}"
// git repository address
def git_address = "http://139.9.134.177:10080/miao/citest.git"

// credential ids
def harbor_auth = "db4b7f06-7df6-4da7-b5b1-31e91b7a70e3"
def gitlab_auth = "53d88c8f-3063-4048-9205-19fc6222b887"

podTemplate(
    label: 'jenkins-agent', 
    cloud: 'kubernetes', 
    containers: [
        containerTemplate(
            name: 'jnlp', 
            image: "${registry}/jenkinsci/jenkins-slave-jdk:1.8"
        )
    ],
    volumes: [
        hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
        hostPathVolume(mountPath: '/usr/bin/docker', hostPath: '/usr/bin/docker')
    ]
) 
{
  node("jenkins-agent"){
        stage('Pull code') { // for display purposes
            checkout([$class: 'GitSCM', branches: [[name: '${Branch}']], userRemoteConfigs: [[credentialsId: "${gitlab_auth}", url: "${git_address}"]]])
            sh "ls"
        }
        stage('Code compilation') {
            sh "mvn clean package -Dmaven.test.skip=true"
            sh "ls"
        }
        stage('Build mirror') {
            withCredentials([usernamePassword(credentialsId: "${harbor_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
				sh """
					echo '
						FROM tomcat
						LABEL maintainer miaocunfa
						RUN rm -rf /usr/local/tomcat/webapps/*
						ADD target/*.war /usr/local/tomcat/webapps/ROOT.war 
					' > Dockerfile

					docker build -t ${image_name} .
					docker login -u ${username} -p '${password}' ${registry}
					docker push ${image_name}
				"""
			}
		}
	}
}

Write a script that pushes a commit to gitlab:

[root@support ~]# cat gitpush.sh 
#!/bin/bash
# Append a timestamp to a file and push, to trigger the gitlab webhook
testdate=$(date)
cd /root/citest
echo "$testdate" >> pod-slave.log
git add -A
git commit -m "$testdate"
git push origin master

The code push has triggered job #33, which starts building.

Logs during the jenkins build.

After the jenkins build succeeds, an image tagged 33 is already in harbor.

4.7. Jenkins continuous deployment to Kubernetes

After the image has been built successfully by jenkins, the next step is to deploy it to the k8s platform. For this we need the Kubernetes Continuous Deploy plugin.

4.7.1. k8s credentials

Copy the contents of .kube/config into jenkins to generate a credential.

Copy the credential's id; it is referenced in the pipeline script.

4.7.2. Add the harbor registry secret in k8s

[root@master ~]# kubectl create secret docker-registry harbor-pull-secret --docker-server='http://10.0.0.123:8080' --docker-username='admin' --docker-password='Harbor12345'
secret/harbor-pull-secret created
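
Verify the secret; deploy.yml below references it by name through imagePullSecrets:

[root@master ~]# kubectl get secret harbor-pull-secret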

4.7.3. Pipeline script

// Image registry address
def registry = "10.0.0.123:8080"
// Registry project
def project = "jenkinsci"
// Image name
def app_name = "citest"
// Full image name
def image_name = "${registry}/${project}/${app_name}:${BUILD_NUMBER}"
// git repository address
def git_address = "http://139.9.134.177:10080/miao/citest.git"

// credential ids
def harbor_auth = "db4b7f06-7df6-4da7-b5b1-31e91b7a70e3"
def gitlab_auth = "53d88c8f-3063-4048-9205-19fc6222b887"

// k8s kubeconfig credential id
def k8s_auth = "586308fb-3f92-432d-a7f7-c6d6036350dd"
// harbor registry secret name
def harbor_registry_secret = "harbor-pull-secret"
// NodePort exposed after deployment to k8s
def nodePort = "30666"

podTemplate(
    label: 'jenkins-agent', 
    cloud: 'kubernetes', 
    containers: [
        containerTemplate(
            name: 'jnlp', 
            image: "${registry}/jenkinsci/jenkins-slave-jdk:1.8"
        )
    ],
    volumes: [
        hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
        hostPathVolume(mountPath: '/usr/bin/docker', hostPath: '/usr/bin/docker')
    ]
) 
{
  node("jenkins-agent"){
        stage('Pull code') { // for display purposes
            checkout([$class: 'GitSCM', branches: [[name: '${Branch}']], userRemoteConfigs: [[credentialsId: "${gitlab_auth}", url: "${git_address}"]]])
            sh "ls"
        }
        stage('Code compilation') {
            sh "mvn clean package -Dmaven.test.skip=true"
            sh "ls"
        }
        stage('Build mirror') {
            withCredentials([usernamePassword(credentialsId: "${harbor_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
				sh """
					echo '
						FROM tomcat
						LABEL maintainer miaocunfa
						RUN rm -rf /usr/local/tomcat/webapps/*
						ADD target/*.war /usr/local/tomcat/webapps/ROOT.war 
					' > Dockerfile

					docker build -t ${image_name} .
					docker login -u ${username} -p '${password}' ${registry}
					docker push ${image_name}
				"""
			}
		}
		stage('Deploy to K8s'){
            sh """
                sed -i 's#\$IMAGE_NAME#${image_name}#' deploy.yml
                sed -i 's#\$SECRET_NAME#${harbor_registry_secret}#' deploy.yml
                sed -i 's#\$NODE_PORT#${nodePort}#' deploy.yml
            """
            kubernetesDeploy configs: 'deploy.yml', kubeconfigId: "${k8s_auth}"
		}
	}
}
deploy.yml

It deploys the image as pods managed by a Deployment controller, and is committed to the code repository together with the code.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: web
spec:
  replicas: 3 
  selector:
    matchLabels:
      app: java-demo
  template:
    metadata:
      labels:
        app: java-demo
    spec:
      imagePullSecrets:
      - name: $SECRET_NAME
      containers:
      - name: tomcat 
        image: $IMAGE_NAME
        ports:
        - containerPort: 8080
          name: web
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 20
          timeoutSeconds: 5
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 20
          timeoutSeconds: 5
          failureThreshold: 3

---

kind: Service
apiVersion: v1
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: java-demo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: $NODE_PORT

4.7.4. Push

The following is the complete CI/CD process:

1. git push the code to the gitlab repository

2. gitlab triggers the jenkins job via its webhook

The webhook shown in the lower left corner has fired, and jenkins job #53 has started.

The jenkins job's stage flow.

3. harbor image registry

The image tagged 53 has also been pushed to harbor.

4. Monitor pod changes using kubectl
In the job flow, jenkins first creates the slave pod; after the image is deployed to kubernetes, the slave pod is destroyed and the web pods are left running.
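
A sketch of watching the rollout and then probing the deployed app through the NodePort from deploy.yml, assuming any node IP works:

[root@master ~]# kubectl get pods -w
[root@master ~]# curl -I http://139.9.139.49:30666/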

5. Email notification
After the whole jenkins job succeeds, an email notification is sent.

The mail configuration is covered in section 4.8, optimization.

4.8. Optimization

4.8.1. Manage the pipeline script together with the code

The advantage of keeping the Jenkinsfile in the code repository is that the Jenkinsfile is version-controlled too, staying consistent with the project's life cycle.

First, save the pipeline script to the local git repository with the file name Jenkinsfile

jenkins is configured as follows

4.8.2. Add an email notification after a successful build

1. Email notification requires the Email Extension plugin (installed earlier)

2. Configure Email Extension

3. Mail template content, an HTML template

4. The system's default mail service configuration; once configured, you can send a test mail

5. Test message content

Mail template
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>${ENV, var="JOB_NAME"} - build #${BUILD_NUMBER} log</title>
</head>

<body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4"
    offset="0">
    <table width="95%" cellpadding="0" cellspacing="0" style="font-size: 11pt; font-family: Tahoma, Arial, Helvetica, sans-serif">
        <tr>
            This email is sent automatically by the system; please do not reply!<br/>
            Hello colleagues, below is the build information for the ${PROJECT_NAME} project</br>
            <td><font color="#CC0000">Build result - ${BUILD_STATUS}</font></td>
        </tr>
        <tr>
            <td><br />
            <b><font color="#0B610B">Build information</font></b>
            <hr size="2" width="100%" align="center" /></td>
        </tr>
        <tr>
            <td>
                <ul>
                    <li>Project name: ${PROJECT_NAME}</li>
                    <li>Build number: #${BUILD_NUMBER}</li>
                    <li>Trigger cause: ${CAUSE}</li>
                    <li>Build status: ${BUILD_STATUS}</li>
                    <li>Build URL: <a href="${BUILD_URL}">${BUILD_URL}</a></li>
                    <li>Build log: <a href="${BUILD_URL}console">${BUILD_URL}console</a></li>
                    <li>Build history: <a href="${PROJECT_URL}">${PROJECT_URL}</a></li>
                    <!--<li>Deployment address: <a href="${PROJECT_URL}">${PROJECT_URL}</a></li>-->
                </ul>

                <h4><font color="#0B610B">Failed tests</font></h4>
                <hr size="2" width="100%" />
                $FAILED_TESTS<br/>

                <h4><font color="#0B610B">Recent commits (#$SVN_REVISION)</font></h4>
                <hr size="2" width="100%" />
                <ul>
                ${CHANGES_SINCE_LAST_SUCCESS, reverse=true, format="%c", changesFormat="<li>%d [%a] %m</li>"}
                </ul>
                <font color="#0B610B">Commit details: </font><a href="${PROJECT_URL}changes">${PROJECT_URL}changes</a><br/>

            </td>
        </tr>
    </table>
</body>
</html>

When it comes to continuous integration I am still a beginner; I look forward to your guidance.
