[K8S] A 10,000-word guide to a continuous integration and delivery environment based on Docker + K8S + GitLab/SVN + Jenkins + Harbor

Overview of environmental construction
Server Planning
Installation Environment Version
Install Docker Environment
Install docker-compose
Install K8S Cluster Environment
Problems caused by restarting K8S cluster
Install ingress-nginx on K8S
Install the GitLab Code Repository on K8S
Install the Harbor Private Registry
Install Jenkins (General Approach)
Install SVN on a Physical Machine
Install SVN with Docker
Install Jenkins on a Physical Machine
Configure the Jenkins runtime environment
Jenkins publishes Docker projects to the K8S cluster

Overview of environmental construction

1.What is K8S?

K8S is the common abbreviation of Kubernetes, a leading distributed architecture platform based on container technology. It aims to automate resource management and maximize resource utilization across multiple data centers.

If our system design follows the Kubernetes design philosophy, the low-level code and functional modules in a traditional architecture that have little to do with the business itself can be handed over to K8S. We no longer need to worry about choosing and deploying a load balancer, about introducing or developing a complex service governance framework ourselves, or about building our own modules for service monitoring and troubleshooting. In short, using the solution provided by Kubernetes significantly reduces development costs and lets the team focus on the business itself, and because Kubernetes provides a powerful automation mechanism, the difficulty and cost of operating the system in later stages are also greatly reduced.

2. Why use K8S?

Docker, an emerging container technology, has already been adopted by many companies. The move from single-machine Docker to clusters is inevitable, and the rapid growth of cloud computing is accelerating this process. Kubernetes is currently the only Docker-based distributed solution that is widely recognized and seen as promising. It is expected that in the next few years a large number of new systems will choose it, whether they run on enterprise on-premises servers or are hosted on public clouds.

3. What are the benefits of using K8S?

Using Kubernetes means fully embracing the microservice architecture. The core of the microservice architecture is to decompose a large monolithic application into many small, interconnected microservices. A single microservice may have multiple instance replicas behind it, and the number of replicas can be adjusted as the system load changes; the load balancer embedded in the K8S platform plays an important role here. The microservice architecture allows each service to be developed by a dedicated team, which gives developers the freedom to choose their own technology stack, something that is valuable for large teams. In addition, each microservice is developed, upgraded, and scaled independently, which keeps the system highly stable and allows it to iterate and evolve quickly.

4. Environmental Composition

The environment includes: the Docker engine, docker-compose, a K8S cluster, a GitLab code repository, an SVN repository, a Jenkins automated deployment environment, and a Harbor private registry.

In this document, building the entire environment covers the following steps:

  • Install Docker Environment
  • Install docker-compose
  • Install K8S Cluster Environment
  • Problems caused by restarting K8S cluster
  • Install ingress-nginx on K8S
  • Install the GitLab Code Repository on K8S
  • Install the Harbor Private Registry
  • Install Jenkins
  • Install SVN on a Physical Machine (Recommended)
  • Install Jenkins on a Physical Machine (Recommended)
  • Configure the Jenkins runtime environment
  • Jenkins Publishes Docker Projects to the K8S Cluster

Server Planning

IP            Host Name   Node         Operating System
192.168.0.10  test10      K8S Master   CentOS 8.0.1905
192.168.0.11  test11      K8S Worker   CentOS 8.0.1905
192.168.0.12  test12      K8S Worker   CentOS 8.0.1905

Installation Environment Version

Software Name    Version    Description
Docker           19.03.8    Provides the container runtime environment
docker-compose   1.25.5     Defines and runs applications consisting of multiple containers
K8S              1.18.2     Kubernetes is an open source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and efficient, and it provides mechanisms for deploying, scheduling, updating, and maintaining applications.
GitLab           12.1.6     Code repository
Harbor           1.10.2     Private image registry
Jenkins          2.222.3    Continuous integration and delivery

Install Docker Environment

Docker is an open source application container engine written in the Go language and released under the Apache 2.0 license.

Docker allows developers to package their applications and dependencies into a lightweight, portable container that can then be published to any popular Linux machine; virtualization can also be achieved.

This document builds a Docker environment based on Docker version 19.03.8.

Create the install_docker.sh script on all servers. The contents of the script are shown below.

#Use the Ali Cloud mirror center
export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
#Install the yum tools
dnf install yum*
#Install the docker environment dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
#Configure Docker's yum source
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
#Install the containerd.io plug-in
dnf install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.1.el7.x86_64.rpm
#Install docker version 19.03.8
yum install -y docker-ce-19.03.8 docker-ce-cli-19.03.8
#Enable Docker to start at boot
systemctl enable docker.service
#Start Docker
systemctl start docker.service
#View the Docker version
docker version

Grant the install_docker.sh script executable permissions on each server and run it, as shown below.

# Grant the install_docker.sh script executable permissions
chmod a+x ./install_docker.sh
# Run the install_docker.sh script
./install_docker.sh
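
Before moving on, it is worth confirming that the Docker daemon is healthy on every server. The following quick check is only a suggestion; the hello-world image is Docker's standard test image and needs Internet access to pull.

# Confirm the daemon is active and enabled at boot
systemctl is-active docker.service
systemctl is-enabled docker.service
# Optionally run the standard test image end to end
docker run --rm hello-world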

Install docker-compose

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YML file to configure all the services your application needs. Then, with a single command, you can create and start all of the services from that configuration.

Note: Install docker-compose on each server

1. Download the docker-compose file

#Download and install docker-compose
curl -L https://github.com/docker/compose/releases/download/1.25.5/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

2. Give executable permissions to docker-compose files

#Give docker-compose executable permissions
chmod a+x /usr/local/bin/docker-compose

3. View the docker-compose version

#View the docker-compose version
[root@binghe ~]# docker-compose version
docker-compose version 1.25.5, build 8a1c60f6
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019
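
As a quick illustration of how Compose is used, the following minimal sketch defines a single service and starts it with one command. The nginx image, the service name web, and the file name docker-compose.yml are only illustrative and are not part of this environment.

# Minimal example: define a single nginx service in a YML file
cat <<EOF > docker-compose.yml
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
EOF
# Create and start all services defined in the YML file
docker-compose up -d
# Check the running containers and then tear the example down
docker-compose ps
docker-compose down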

Install K8S Cluster Environment

Kubernetes is an open source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and efficient, and it provides mechanisms for deploying, scheduling, updating, and maintaining applications.

This document builds the K8S cluster based on K8S version 1.18.2.

Install K8S Foundation Environment

Create the install_k8s.sh script file on all servers. The contents of the script are shown below.

#################Configure the Ali Cloud mirror accelerator Start########################
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
######################Configure the Ali Cloud mirror accelerator End#########################
#Install nfs-utils
yum install -y nfs-utils
#Install the wget download command
yum install -y wget
#Start nfs-server
systemctl start nfs-server
#Enable nfs-server at boot
systemctl enable nfs-server
#Stop the firewall
systemctl stop firewalld
#Disable the firewall at boot
systemctl disable firewalld
#Turn off SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# Close swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
############################modify /etc/sysctl.conf start###########################
# Modify the settings if they already exist
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf
# Otherwise, append the settings
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
############################modify /etc/sysctl.conf End###########################
# Execute this command to make the modified /etc/sysctl.conf file take effect
sysctl -p
################# Configure the K8S yum source Start#############################
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
################# Configure the K8S yum source End#############################
# Uninstall old versions of K8S
yum remove -y kubelet kubeadm kubectl
# Install kubelet, kubeadm and kubectl; version 1.18.2 is installed here, you can also install version 1.17.2
yum install -y kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2
# Modify the docker Cgroup Driver to systemd
# In /usr/lib/systemd/system/docker.service, change
#   ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# to
#   ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
# If this is not modified, you may encounter the following error when adding worker nodes:
# [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
# Please follow the guide at https://kubernetes.io/docs/setup/cri/
sed -i "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service
# Set a docker registry mirror to improve the download speed and stability of docker images
# If access to https://hub.docker.io is fast and stable, you can skip this step; usually no configuration is needed
# curl -sSL https://kuboard.cn/install-script/set_mirror.sh | sh -s ${REGISTRY_MIRROR}
# Reload the configuration files
systemctl daemon-reload
#Restart docker
systemctl restart docker
# Set kubelet to start at boot and start kubelet
systemctl enable kubelet && systemctl start kubelet
# View the docker version
docker version

Grant the install_k8s.sh script executable permissions on each server and run it.

# Grant the install_k8s.sh script executable permissions
chmod a+x ./install_k8s.sh
# Run the install_k8s.sh script
./install_k8s.sh

Initialize Master Node

Actions performed only on the test10 server.

1. Initialize the Master node's network environment

Note: The following command needs to be executed manually from the command line.

# Execute only on the master node
# The export commands are only valid in the current shell session. If you open a new shell window and want to continue the installation, re-execute the export commands here
export MASTER_IP=192.168.0.10
# Replace k8s.master with the dnsName you want
export APISERVER_NAME=k8s.master
# The network segment used by the kubernetes container groups; it is created by kubernetes after installation and does not exist in the physical network beforehand
export POD_SUBNET=172.18.0.1/16
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts

2. Initialize Master Node

Create the init_master.sh script file on the test10 server, with the contents shown below.

#!/bin/bash
# Terminate execution when a command in the script fails
set -e

if [ ${#POD_SUBNET} -eq 0 ] || [ ${#APISERVER_NAME} -eq 0 ]; then
  echo -e "\033[31;1m Make sure you have set the environment variables POD_SUBNET and APISERVER_NAME \033[0m"
  echo current POD_SUBNET=$POD_SUBNET
  echo current APISERVER_NAME=$APISERVER_NAME
  exit 1
fi

# View the full configuration options at https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
rm -f ./kubeadm-config.yaml
cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "${APISERVER_NAME}:6443"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "${POD_SUBNET}"
  dnsDomain: "cluster.local"
EOF

# kubeadm init
# Initialize kubeadm
kubeadm init --config=kubeadm-config.yaml --upload-certs

# Configure kubectl
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config

# Install the calico network plug-in
# Reference documentation: https://docs.projectcalico.org/v3.13/getting-started/kubernetes/self-managed-onprem/onpremises
echo "install calico-3.13.1"
rm -f calico-3.13.1.yaml
wget https://kuboard.cn/install-script/calico/calico-3.13.1.yaml
kubectl apply -f calico-3.13.1.yaml

Grant the init_master.sh script file executable permissions and run the script.

# Grant the init_master.sh file executable permissions
chmod a+x ./init_master.sh
# Run the init_master.sh script
./init_master.sh

3. View Master node initialization results

(1) Ensure that all container groups are in the Running state

# Execute the following command and wait 3-10 minutes until all container groups are in the Running state
watch kubectl get pod -n kube-system -o wide

Execution is shown below.

[root@test10 ~]# watch kubectl get pod -n kube-system -o wide Every 2.0s: kubectl get pod -n kube-system -o wide test10: Sun May 10 11:01:32 2020 NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES calico-kube-controllers-5b8b769fcd-5dtlp 1/1 Running 0 118s 172.18.203.66 test10 <none> <none> calico-node-fnv8g 1/1 Running 0 118s 192.168.0.10 test10 <none> <none> coredns-546565776c-27t7h 1/1 Running 0 2m1s 172.18.203.67 test10 <none> <none> coredns-546565776c-hjb8z 1/1 Running 0 2m1s 172.18.203.65 test10 <none> <none> etcd-test10 1/1 Running 0 2m7s 192.168.0.10 test10 <none> <none> kube-apiserver-test10 1/1 Running 0 2m7s 192.168.0.10 test10 <none> <none> kube-controller-manager-test10 1/1 Running 0 2m7s 192.168.0.10 test10 <none> <none> kube-proxy-dvgsr 1/1 Running 0 2m1s 192.168.0.10 test10 <none> <none> kube-scheduler-test10 1/1 Running 0 2m7s 192.168.0.10 test10 <none> <none>

(2) View Master node initialization results

# View the Master node initialization results
kubectl get nodes -o wide

Execution is shown below.

[root@test10 ~]# kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME test10 Ready master 3m28s v1.18.2 192.168.0.10 <none> CentOS Linux 8 (Core) 4.18.0-80.el8.x86_64 docker://19.3.8

Initialize Worker Node

1. Get join command parameters

Execute the following command on the Master node (test10 server) to get the join command parameters.

kubeadm token create --print-join-command

Execution is shown below.

[root@test10 ~]# kubeadm token create --print-join-command W0510 11:04:34.828126 56132 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2 --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d

The output includes the following line.

kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d

This is the join command that the Worker nodes will use.

Note: The token in the join command is valid for two hours; within that window, you can initialize any number of Worker nodes.
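
If the two-hour window has passed, you can simply generate a new token on the Master node; both commands below are standard kubeadm sub-commands.

# List the tokens that currently exist and their expiration times
kubeadm token list
# Generate a new token together with a complete join command
kubeadm token create --print-join-command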

2. Initialize the Worker node

Execute on all Worker nodes; in this case, on the test11 and test12 servers.

Manually execute the following commands on each Worker node.

# Execute only on worker nodes
# 192.168.0.10 is the master node's intranet IP
export MASTER_IP=192.168.0.10
# Replace k8s.master with the APISERVER_NAME used when initializing the master node
export APISERVER_NAME=k8s.master
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts
# Replace with the join command output by the kubeadm token create command on the master node
kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d

Execution is shown below.

[root@test11 ~]# export MASTER_IP=192.168.0.10 [root@test11 ~]# export APISERVER_NAME=k8s.master [root@test11 ~]# echo "$ $" >> /etc/hosts [root@test11 ~]# kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2 --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d W0510 11:08:27.709263 42795 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set. [preflight] Running pre-flight checks [WARNING FileExisting-tc]: tc not found in system path [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... This node has joined the cluster: * Certificate signing request was sent to apiserver and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

From the output, you can see that the Worker node has joined the K8S cluster.

Note: the kubeadm join ... command is the join command output by the kubeadm token create command on the Master node.

3. View initialization results

Execute the following command on the Master node (test10 server) to view the initialization results.

kubectl get nodes -o wide

Execution is shown below.

[root@test10 ~]# kubectl get nodes NAME STATUS ROLES AGE VERSION test10 Ready master 20m v1.18.2 test11 Ready <none> 2m46s v1.18.2 test12 Ready <none> 2m46s v1.18.2

Note: The kubectl get nodes command with the -o wide parameter can output more information.

Problems caused by restarting K8S cluster

1. Worker nodes fail to start

The IP address of the Master node has changed, causing the Worker nodes to fail to start. In that case the K8S cluster needs to be reinstalled, and every node should be given a fixed intranet IP address.

2. Pods crash or cannot be accessed properly

After restarting the server, use the following command to check how the Pods are running.

#View all running pods
kubectl get pods --all-namespaces

You may find that many Pods are not in the Running state; in that case, use the following command to delete the Pods that are not running properly.

kubectl delete pod <pod-name> -n <pod-namespece>

Note: If a Pod was created by a controller such as a Deployment or StatefulSet, K8S will create a new Pod to replace the deleted one, and the recreated Pod will usually work correctly.

Here, pod-name is the name of the Pod running in K8S and pod-namespace is its namespace. For example, to delete a Pod named pod-test in the namespace pod-test-namespace, use the following command.

kubectl delete pod pod-test -n pod-test-namespace
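
If many Pods are affected, deleting them one by one is tedious. The following one-liner is only a sketch: the field selector catches every Pod whose phase is not Running (Pending, Failed, and also Succeeded Pods such as completed Jobs), so review the list before relying on it.

kubectl get pods --all-namespaces --field-selector=status.phase!=Running \
  -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name --no-headers \
  | while read ns name; do kubectl delete pod "$name" -n "$ns"; done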

Install ingress-nginx on K8S

ingress-nginx acts as a reverse proxy that routes external traffic into the cluster: it exposes Services inside Kubernetes to the outside, matches a Service through the domain name defined in an Ingress object, and thus lets you access a Service inside the cluster directly by domain name. nginx-ingress also performs better than traefik.

Note: On the Master node (executed on the test10 server)

1. Create ingress-nginx namespace

Create the ingress-nginx-namespace.yaml file, whose main purpose is to create the ingress-nginx namespace, as shown below.

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    name: ingress-nginx

Execute the following command to create the ingress-nginx namespace.

kubectl apply -f ingress-nginx-namespace.yaml

2. Install ingress controller

Create the ingress-nginx-mandatory.yaml file, whose main purpose is to install ingress-nginx. The contents of the file are shown below.

apiVersion: v1 kind: Namespace metadata: name: ingress-nginx --- apiVersion: apps/v1 kind: Deployment metadata: name: default-http-backend labels: app.kubernetes.io/name: default-http-backend app.kubernetes.io/part-of: ingress-nginx namespace: ingress-nginx spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: default-http-backend app.kubernetes.io/part-of: ingress-nginx template: metadata: labels: app.kubernetes.io/name: default-http-backend app.kubernetes.io/part-of: ingress-nginx spec: terminationGracePeriodSeconds: 60 containers: - name: default-http-backend # Any image is permissible as long as: # 1. It serves a 404 page at / # 2. It serves 200 on a /healthz endpoint image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5 livenessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 30 timeoutSeconds: 5 ports: - containerPort: 8080 resources: limits: cpu: 10m memory: 20Mi requests: cpu: 10m memory: 20Mi --- apiVersion: v1 kind: Service metadata: name: default-http-backend namespace: ingress-nginx labels: app.kubernetes.io/name: default-http-backend app.kubernetes.io/part-of: ingress-nginx spec: ports: - port: 80 targetPort: 8080 selector: app.kubernetes.io/name: default-http-backend app.kubernetes.io/part-of: ingress-nginx --- kind: ConfigMap apiVersion: v1 metadata: name: nginx-configuration namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx --- kind: ConfigMap apiVersion: v1 metadata: name: tcp-services namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx --- kind: ConfigMap apiVersion: v1 metadata: name: udp-services namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx --- apiVersion: v1 kind: ServiceAccount metadata: name: nginx-ingress-serviceaccount namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRole metadata: name: nginx-ingress-clusterrole labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx rules: - apiGroups: - "" resources: - configmaps - endpoints - nodes - pods - secrets verbs: - list - watch - apiGroups: - "" resources: - nodes verbs: - get - apiGroups: - "" resources: - services verbs: - get - list - watch - apiGroups: - "extensions" resources: - ingresses verbs: - get - list - watch - apiGroups: - "" resources: - events verbs: - create - patch - apiGroups: - "extensions" resources: - ingresses/status verbs: - update --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: Role metadata: name: nginx-ingress-role namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx rules: - apiGroups: - "" resources: - configmaps - pods - secrets - namespaces verbs: - get - apiGroups: - "" resources: - configmaps resourceNames: # Defaults to "<election-id>-<ingress-class>" # Here: "<ingress-controller-leader>-<nginx>" # This has to be adapted if you change either parameter # when launching the nginx-ingress-controller. 
- "ingress-controller-leader-nginx" verbs: - get - update - apiGroups: - "" resources: - configmaps verbs: - create - apiGroups: - "" resources: - endpoints verbs: - get --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: RoleBinding metadata: name: nginx-ingress-role-nisa-binding namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: nginx-ingress-role subjects: - kind: ServiceAccount name: nginx-ingress-serviceaccount namespace: ingress-nginx --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: nginx-ingress-clusterrole-nisa-binding labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: nginx-ingress-clusterrole subjects: - kind: ServiceAccount name: nginx-ingress-serviceaccount namespace: ingress-nginx --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-ingress-controller namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx template: metadata: labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx annotations: prometheus.io/port: "10254" prometheus.io/scrape: "true" spec: serviceAccountName: nginx-ingress-serviceaccount containers: - name: nginx-ingress-controller image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0 args: - /nginx-ingress-controller - --default-backend-service=$(POD_NAMESPACE)/default-http-backend - --configmap=$(POD_NAMESPACE)/nginx-configuration - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services - --udp-services-configmap=$(POD_NAMESPACE)/udp-services - --publish-service=$(POD_NAMESPACE)/ingress-nginx - --annotations-prefix=nginx.ingress.kubernetes.io securityContext: capabilities: drop: - ALL add: - NET_BIND_SERVICE # www-data -> 33 runAsUser: 33 env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace ports: - name: http containerPort: 80 - name: https containerPort: 443 livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 readinessProbe: failureThreshold: 3 httpGet: path: /healthz port: 10254 scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 ---

Install ingress controller by executing the following command.

kubectl apply -f ingress-nginx-mandatory.yaml

3. Install K8S SVC:ingress-nginx

It is mainly used to expose the pod nginx-ingress-controller.

Create the service-nodeport.yaml file, with the contents shown below.

apiVersion: v1 kind: Service metadata: name: ingress-nginx namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx spec: type: NodePort ports: - name: http port: 80 targetPort: 80 protocol: TCP nodePort: 30080 - name: https port: 443 targetPort: 443 protocol: TCP nodePort: 30443 selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx

Perform the following command to install.

kubectl apply -f service-nodeport.yaml

4. Access K8S SVC:ingress-nginx

View the deployment of the ingress-nginx namespace as shown below.

[root@test10 k8s]# kubectl get pod -n ingress-nginx NAME READY STATUS RESTARTS AGE default-http-backend-796ddcd9b-vfmgn 1/1 Running 1 10h nginx-ingress-controller-58985cc996-87754 1/1 Running 2 10h

On the server's command line, type the following command to see the port mapping of ingress-nginx.

kubectl get svc -n ingress-nginx

This is shown below.

[root@test10 k8s]# kubectl get svc -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default-http-backend ClusterIP 10.96.247.2 <none> 80/TCP 7m3s ingress-nginx NodePort 10.96.40.6 <none> 80:30080/TCP,443:30443/TCP 4m35s

Therefore, ingress-nginx can be accessed through the IP address of the Master node (test10 server) and port number 30080, as shown below.

[root@test10 k8s]# curl 192.168.0.10:30080 default backend - 404

You can also open http://192.168.0.10:30080 in a browser to access ingress-nginx, as shown below.
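
To see the controller actually routing traffic by host name, the following throw-away example can be used. It is only a sketch: the demo.binghe.com host, the nginx-demo names, and the nginx image are all made up for illustration and are not part of this environment.

# Deploy a demo nginx service and expose it through an Ingress rule
kubectl create deployment nginx-demo --image=nginx:alpine
kubectl expose deployment nginx-demo --port=80
cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-demo
spec:
  rules:
  - host: demo.binghe.com
    http:
      paths:
      - backend:
          serviceName: nginx-demo
          servicePort: 80
EOF
# Send a request through the NodePort with the matching Host header
curl -H "Host: demo.binghe.com" http://192.168.0.10:30080
# Clean up the demo resources afterwards
kubectl delete ingress nginx-demo && kubectl delete svc,deployment nginx-demo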

Install the GitLab Code Repository on K8S

GitLab is a web-based Git repository management tool with wiki and issue-tracking features, developed by GitLab Inc. and released under the MIT license. It uses Git as its code management tool and builds web services on top of it.

Note: On the Master node (executed on the test10 server)

1. Create k8s-ops namespace

Create the k8s-ops-namespace.yaml file, whose main purpose is to create the k8s-ops namespace. The contents of the file are shown below.

apiVersion: v1
kind: Namespace
metadata:
  name: k8s-ops
  labels:
    name: k8s-ops

Execute the following command to create a namespace.

kubectl apply -f k8s-ops-namespace.yaml

2. Install gitlab-redis

Create the gitlab-redis.yaml file, with the contents shown below.

apiVersion: apps/v1 kind: Deployment metadata: name: redis namespace: k8s-ops labels: name: redis spec: selector: matchLabels: name: redis template: metadata: name: redis labels: name: redis spec: containers: - name: redis image: sameersbn/redis imagePullPolicy: IfNotPresent ports: - name: redis containerPort: 6379 volumeMounts: - mountPath: /var/lib/redis name: data livenessProbe: exec: command: - redis-cli - ping initialDelaySeconds: 30 timeoutSeconds: 5 readinessProbe: exec: command: - redis-cli - ping initialDelaySeconds: 10 timeoutSeconds: 5 volumes: - name: data hostPath: path: /data1/docker/xinsrv/redis --- apiVersion: v1 kind: Service metadata: name: redis namespace: k8s-ops labels: name: redis spec: ports: - name: redis port: 6379 targetPort: redis selector: name: redis

First, from the command line, execute the following command to create the /data1/docker/xinsrv/redis directory.

mkdir -p /data1/docker/xinsrv/redis

Perform the following command to install gitlab-redis.

kubectl apply -f gitlab-redis.yaml

3. Install gitlab-postgresql

Create the gitlab-postgresql.yaml file, with the contents shown below.

apiVersion: apps/v1 kind: Deployment metadata: name: postgresql namespace: k8s-ops labels: name: postgresql spec: selector: matchLabels: name: postgresql template: metadata: name: postgresql labels: name: postgresql spec: containers: - name: postgresql image: sameersbn/postgresql imagePullPolicy: IfNotPresent env: - name: DB_USER value: gitlab - name: DB_PASS value: passw0rd - name: DB_NAME value: gitlab_production - name: DB_EXTENSION value: pg_trgm ports: - name: postgres containerPort: 5432 volumeMounts: - mountPath: /var/lib/postgresql name: data livenessProbe: exec: command: - pg_isready - -h - localhost - -U - postgres initialDelaySeconds: 30 timeoutSeconds: 5 readinessProbe: exec: command: - pg_isready - -h - localhost - -U - postgres initialDelaySeconds: 5 timeoutSeconds: 1 volumes: - name: data hostPath: path: /data1/docker/xinsrv/postgresql --- apiVersion: v1 kind: Service metadata: name: postgresql namespace: k8s-ops labels: name: postgresql spec: ports: - name: postgres port: 5432 targetPort: postgres selector: name: postgresql

First, execute the following command to create the /data1/docker/xinsrv/postgresql directory.

mkdir -p /data1/docker/xinsrv/postgresql

Next, install gitlab-postgresql, as shown below.

kubectl apply -f gitlab-postgresql.yaml

4. Install gitlab

(1) Configure user name and password

First, transcode the user name and password using base64 encoding on the command line. In this example, the user name used is admin and the password is admin.1231

Transcoding is shown below.

[root@test10 k8s]# echo -n 'admin' | base64
YWRtaW4=
[root@test10 k8s]# echo -n 'admin.1231' | base64
YWRtaW4uMTIzMQ==

The transcoded user name is YWRtaW4= and the transcoded password is YWRtaW4uMTIzMQ==.

You can also decode a base64 encoded string, for example, a password string, as shown below.

[root@test10 k8s]# echo 'YWRtaW4uMTIzMQ==' | base64 --decode
admin.1231
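
Alternatively, kubectl can build the same kind of Secret directly from literal values and do the base64 encoding itself. This is only an optional sketch that produces a Secret equivalent to the secret-gitlab.yaml created in the next step.

# Equivalent Secret created from literal values; kubectl performs the base64 encoding
kubectl create secret generic git-user-pass \
  --from-literal=username=admin \
  --from-literal=password=admin.1231 \
  -n k8s-ops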

Next, create the secret-gitlab.yaml file, which configures the user name and password for GitLab, as shown below.

apiVersion: v1
kind: Secret
metadata:
  namespace: k8s-ops
  name: git-user-pass
type: Opaque
data:
  username: YWRtaW4=
  password: YWRtaW4uMTIzMQ==

Apply the configuration file by executing the following command.

kubectl create -f ./secret-gitlab.yaml

(2) Install GitLab

Create the gitlab.yaml file, with the contents shown below.

apiVersion: apps/v1 kind: Deployment metadata: name: gitlab namespace: k8s-ops labels: name: gitlab spec: selector: matchLabels: name: gitlab template: metadata: name: gitlab labels: name: gitlab spec: containers: - name: gitlab image: sameersbn/gitlab:12.1.6 imagePullPolicy: IfNotPresent env: - name: TZ value: Asia/Shanghai - name: GITLAB_TIMEZONE value: Beijing - name: GITLAB_SECRETS_DB_KEY_BASE value: long-and-random-alpha-numeric-string - name: GITLAB_SECRETS_SECRET_KEY_BASE value: long-and-random-alpha-numeric-string - name: GITLAB_SECRETS_OTP_KEY_BASE value: long-and-random-alpha-numeric-string - name: GITLAB_ROOT_PASSWORD valueFrom: secretKeyRef: name: git-user-pass key: password - name: GITLAB_ROOT_EMAIL value: [email protected] - name: GITLAB_HOST value: gitlab.binghe.com - name: GITLAB_PORT value: "80" - name: GITLAB_SSH_PORT value: "30022" - name: GITLAB_NOTIFY_ON_BROKEN_BUILDS value: "true" - name: GITLAB_NOTIFY_PUSHER value: "false" - name: GITLAB_BACKUP_SCHEDULE value: daily - name: GITLAB_BACKUP_TIME value: 01:00 - name: DB_TYPE value: postgres - name: DB_HOST value: postgresql - name: DB_PORT value: "5432" - name: DB_USER value: gitlab - name: DB_PASS value: passw0rd - name: DB_NAME value: gitlab_production - name: REDIS_HOST value: redis - name: REDIS_PORT value: "6379" ports: - name: http containerPort: 80 - name: ssh containerPort: 22 volumeMounts: - mountPath: /home/git/data name: data livenessProbe: httpGet: path: / port: 80 initialDelaySeconds: 180 timeoutSeconds: 5 readinessProbe: httpGet: path: / port: 80 initialDelaySeconds: 5 timeoutSeconds: 1 volumes: - name: data hostPath: path: /data1/docker/xinsrv/gitlab --- apiVersion: v1 kind: Service metadata: name: gitlab namespace: k8s-ops labels: name: gitlab spec: ports: - name: http port: 80 nodePort: 30088 - name: ssh port: 22 targetPort: ssh nodePort: 30022 type: NodePort selector: name: gitlab --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: gitlab namespace: k8s-ops annotations: kubernetes.io/ingress.class: traefik spec: rules: - host: gitlab.binghe.com http: paths: - backend: serviceName: gitlab servicePort: http

Note: When configuring GitLab, the host it listens on cannot be an IP address; it must be a hostname or domain name. In the configuration above, I used the gitlab.binghe.com hostname.

From the command line, execute the following command to create the /data1/docker/xinsrv/gitlab directory.

mkdir -p /data1/docker/xinsrv/gitlab

Install GitLab, as shown below.

kubectl apply -f gitlab.yaml

5. Installation complete

View the k8s-ops namespace deployment as shown below.

[root@test10 k8s]# kubectl get pod -n k8s-ops NAME READY STATUS RESTARTS AGE gitlab-7b459db47c-5vk6t 0/1 Running 0 11s postgresql-79567459d7-x52vx 1/1 Running 0 30m redis-67f4cdc96c-h5ckz 1/1 Running 1 10h

You can also view it using the following commands.

[root@test10 k8s]# kubectl get pod --namespace=k8s-ops NAME READY STATUS RESTARTS AGE gitlab-7b459db47c-5vk6t 0/1 Running 0 36s postgresql-79567459d7-x52vx 1/1 Running 0 30m redis-67f4cdc96c-h5ckz 1/1 Running 1 10h

The effect is the same.

Next, you'll look at the port mappings for GitLab, as shown below.

[root@test10 k8s]# kubectl get svc -n k8s-ops NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE gitlab NodePort 10.96.153.100 <none> 80:30088/TCP,22:30022/TCP 2m42s postgresql ClusterIP 10.96.203.119 <none> 5432/TCP 32m redis ClusterIP 10.96.107.150 <none> 6379/TCP 10h

At this point, GitLab can be accessed on the Master node (test10) via the hostname gitlab.binghe.com and port 30088. Since I am using virtual machines to set up the environment here, when accessing gitlab.binghe.com (which maps to the virtual machine) from the local machine, you need to configure the local hosts file by adding the following entry to it.

192.168.0.10 gitlab.binghe.com

Note: In the Windows operating system, the hosts file is located in the following directory.

C:\Windows\System32\drivers\etc

Next, you can visit GitLab in your browser at the link http://gitlab.binghe.com:30088, as shown below.

At this point, you can log in to GitLab with the user name root and password admin.1231.

Note: The user name here is root, not admin, because root is the default superuser for GitLab.

At this point, the installation of GitLab on K8S is complete.

Install the Harbor Private Registry

Harbor is an open source container image registry from VMware. In essence, Harbor is an enterprise-oriented extension of Docker Registry that has gained wide adoption. The added enterprise features include a management UI, role-based access control, AD/LDAP integration, and audit logs, which meet basic enterprise needs.

Note: The Harbor private registry is installed on the Master node (test10 server) here; in a real production environment, it is recommended to install it on a separate server.

1. Download the offline installation version of Harbor

wget https://github.com/goharbor/harbor/releases/download/v1.10.2/harbor-offline-installer-v1.10.2.tgz

2. Unzip Harbor's installation package

tar -zxvf harbor-offline-installer-v1.10.2.tgz

After successful decompression, a harbor directory is generated in the server's current directory.

3. Configure Harbor

Note: Here, I changed Harbor's port to 1180. If you do not change it, the default port is 80.

(1) Modify the harbor.yml file

cd harbor
vim harbor.yml

The modified configuration items are shown below.

hostname: 192.168.0.10

http:
  port: 1180

harbor_admin_password: binghe123

### Comment out the https section, otherwise the following error occurs during installation: ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
# https:
#   port: 443
#   certificate: /your/certificate/path
#   private_key: /your/private/key/path

(2) Modify the daemon.json file

Modify the /etc/docker/daemon.json file (create it if it does not exist) and add the following content to it.

[root@binghe~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"],
  "insecure-registries":["192.168.0.10:1180"]
}

You can also use the ip addr command on the server to view all of the machine's IP address segments and configure them in the /etc/docker/daemon.json file. Here, the contents of my configured file are shown below.

{ "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"], "insecure-registries":["192.168.175.0/16","172.17.0.0/16", "172.18.0.0/16", "172.16.29.0/16", "192.168.0.10:1180"] }

4. Install and start harbor

Once the configuration is complete, type the following command to install and start Harbor

[root@binghe harbor]# ./install.sh

5. Log in to Harbor and add an account

After the installation succeeds, open http://192.168.0.10:1180 in the browser's address bar, enter the user name admin and the password binghe123, and log in to the system.

Next, we choose user management, add an administrator account, and prepare for subsequent packaging and uploading of Docker images.

The account name is binghe and the password is Binghe123. Click OK; at this point, the binghe account is not yet an administrator. Select the binghe account and click "Set As Administrator".

The binghe account is now set as an administrator, and the installation of Harbor is complete.
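
With the account in place, pushing an image into Harbor works as sketched below. This is only an illustration: the library project is Harbor's default public project, and the nginx image stands in for your own application image.

# Log in to the private registry with the account created above
docker login 192.168.0.10:1180 -u binghe -p Binghe123
# Tag a local image with the registry address and project name, then push it
docker pull nginx:alpine
docker tag nginx:alpine 192.168.0.10:1180/library/nginx:alpine
docker push 192.168.0.10:1180/library/nginx:alpine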

6. Modify the Harbor port

If you need to change Harbor's port after installing Harbor, you can follow these steps. Here, I take port 1180 as an example.

(1) Modify the harbor.yml file

cd harbor
vim harbor.yml

The modified configuration items are shown below.

hostname: 192.168.0.10

http:
  port: 1180

harbor_admin_password: binghe123

### Comment out the https section, otherwise the following error occurs during installation: ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
# https:
#   port: 443
#   certificate: /your/certificate/path
#   private_key: /your/private/key/path

(2) Modify the docker-compose.yml file

vim docker-compose.yml

The modified configuration items are shown below.

ports:
  - 1180:80

(3) Modify the config.yml file

cd common/config/registry
vim config.yml

The modified configuration items are shown below.

realm: http://192.168.0.10:1180/service/token

(4) Restart Docker

systemctl daemon-reload
systemctl restart docker.service

(5) Restart Harbor

[root@binghe harbor]# docker-compose down Stopping harbor-log ... done Removing nginx ... done Removing harbor-portal ... done Removing harbor-jobservice ... done Removing harbor-core ... done Removing redis ... done Removing registry ... done Removing registryctl ... done Removing harbor-db ... done Removing harbor-log ... done Removing network harbor_harbor [root@binghe harbor]# ./prepare prepare base dir is set to /mnt/harbor Clearing the configuration file: /config/log/logrotate.conf Clearing the configuration file: /config/nginx/nginx.conf Clearing the configuration file: /config/core/env Clearing the configuration file: /config/core/app.conf Clearing the configuration file: /config/registry/root.crt Clearing the configuration file: /config/registry/config.yml Clearing the configuration file: /config/registryctl/env Clearing the configuration file: /config/registryctl/config.yml Clearing the configuration file: /config/db/env Clearing the configuration file: /config/jobservice/env Clearing the configuration file: /config/jobservice/config.yml Generated configuration file: /config/log/logrotate.conf Generated configuration file: /config/nginx/nginx.conf Generated configuration file: /config/core/env Generated configuration file: /config/core/app.conf Generated configuration file: /config/registry/config.yml Generated configuration file: /config/registryctl/env Generated configuration file: /config/db/env Generated configuration file: /config/jobservice/env Generated configuration file: /config/jobservice/config.yml loaded secret from file: /secret/keys/secretkey Generated configuration file: /compose_location/docker-compose.yml Clean up the input dir [root@binghe harbor]# docker-compose up -d Creating network "harbor_harbor" with the default driver Creating harbor-log ... done Creating harbor-db ... done Creating redis ... done Creating registry ... done Creating registryctl ... done Creating harbor-core ... done Creating harbor-jobservice ... done Creating harbor-portal ... done Creating nginx ... done [root@binghe harbor]# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS

Install Jenkins (General Approach)

Jenkins is an open source continuous integration (CI) tool with a user-friendly interface. It originated from Hudson (which is commercial) and is mainly used to continuously and automatically build and test software projects and to monitor the execution of external tasks. Jenkins is written in Java and can run in popular servlet containers such as Tomcat, or standalone. It is usually used together with a version control tool (SCM) and build tools. Common version control tools are SVN and Git, and common build tools are Maven, Ant, and Gradle.

1. Install nfs (you can omit this step if you have installed nfs before)

The biggest problem with using nfs is write permissions. You can either use Kubernetes' securityContext/runAsUser to specify the uid of the user that runs Jenkins inside the jenkins container, so that the container can write to the NFS share, or leave the share unrestricted so that all users can write. For simplicity, here we let all users write.

This step can be omitted if NFS has been installed before. Find a host and install NFS on it; here, I take installing NFS on the Master node (test10 server) as an example.

On the command line, type the following command to install and start nfs.

yum install nfs-utils -y
systemctl start nfs-server
systemctl enable nfs-server

2. Create nfs shared directory

Create the /opt/nfs/jenkins-data directory on the Master node (test10 server) as the NFS shared directory, as shown below.

mkdir -p /opt/nfs/jenkins-data

Next, edit the /etc/exports file, as shown below.

vim /etc/exports

Add the following line of configuration to the /etc/exports file.

/opt/nfs/jenkins-data 192.168.175.0/24(rw,all_squash)

Here, the IP uses the IP range of the Kubernetes nodes. The all_squash option that follows maps all accessing users to the nfsnobody user, so no matter who accesses the share, the access is ultimately squashed to nfsnobody. Therefore, as long as the owner of /opt/nfs/jenkins-data is changed accordingly, any accessing user has write permission.

Because user uids are often inconsistent across machines, this option is a simple and effective way to make the shared directory writable.

Next, change the ownership of the /opt/nfs/jenkins-data directory and reload nfs, as shown below.

#Change the owner of the /opt/nfs/jenkins-data/ directory
chown -R 1000 /opt/nfs/jenkins-data/
#Reload nfs-server
systemctl reload nfs-server

Use the following command to validate on any node in the K8S cluster:

#View the directories exported by the nfs server
showmount -e NFS_IP

If you can see /opt/nfs/jenkins-data in the output, everything is OK.

This is shown below.

[root@test10 ~]# showmount -e 192.168.0.10
Export list for 192.168.0.10:
/opt/nfs/jenkins-data 192.168.175.0/24
[root@test11 ~]# showmount -e 192.168.0.10
Export list for 192.168.0.10:
/opt/nfs/jenkins-data 192.168.175.0/24
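
To double-check write access from a Worker node, you can temporarily mount the export and try to create a file. This is only a hedged test (/mnt is just a convenient mount point); remember that with all_squash the write is performed as the squashed NFS user, so a failure here points to an ownership or permission mismatch on the shared directory.

# Mount the NFS export on a worker node and verify that a write succeeds
mount -t nfs 192.168.0.10:/opt/nfs/jenkins-data /mnt
touch /mnt/write-test && ls -l /mnt/write-test
# Clean up the test file and unmount
rm -f /mnt/write-test
umount /mnt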

3. Create a PV

Jenkins can actually read its previous data simply by mounting the corresponding directory, but because a Deployment cannot define volumeClaimTemplates, we can only use a StatefulSet.

First, create a PV. The PV is used by the StatefulSet: every time the StatefulSet starts, it creates a PVC through the volumeClaimTemplates template, so there must be a PV for that PVC to bind to.

Create the jenkins-pv.yaml file, with the contents shown below.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
spec:
  nfs:
    path: /opt/nfs/jenkins-data
    server: 192.168.0.10
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 1Ti

I have allocated 1Ti of storage here; you can adjust it according to your actual configuration.

Execute the following command to create a pv.

kubectl apply -f jenkins-pv.yaml
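
You can confirm that the PV was registered; it will stay Available until the StatefulSet's PVC claims it later on.

# The STATUS column should show Available until the jenkins PVC binds to it
kubectl get pv jenkins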

4. Create a serviceAccount

A service account is created here because Jenkins needs to be able to create slaves dynamically later on, so it must have the corresponding permissions.

Create the jenkins-service-account.yaml file, with the contents shown below.

apiVersion: v1 kind: ServiceAccount metadata: name: jenkins --- kind: Role apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: jenkins rules: - apiGroups: [""] resources: ["pods"] verbs: ["create", "delete", "get", "list", "patch", "update", "watch"] - apiGroups: [""] resources: ["pods/exec"] verbs: ["create", "delete", "get", "list", "patch", "update", "watch"] - apiGroups: [""] resources: ["pods/log"] verbs: ["get", "list", "watch"] - apiGroups: [""] resources: ["secrets"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: RoleBinding metadata: name: jenkins roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: jenkins subjects: - kind: ServiceAccount name: jenkins

In the configuration above, a ServiceAccount, a Role, and a RoleBinding are created, and the RoleBinding binds the Role's permissions to the ServiceAccount. Therefore, the jenkins container must run with this ServiceAccount, otherwise it will not have the permissions granted by the RoleBinding.

The RoleBinding permissions are easy to understand: Jenkins needs to create and delete slaves, so those verbs are required. As for the secrets permission, it is for the https certificate.

Execute the following command to create a serviceAccount.

kubectl apply -f jenkins-service-account.yaml
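
To verify the binding, kubectl auth can-i can impersonate the ServiceAccount. The manifest above does not set a namespace, so the default namespace is assumed here.

# Should print "yes" if the Role and RoleBinding are effective
kubectl auth can-i create pods --as=system:serviceaccount:default:jenkins -n default
kubectl auth can-i get secrets --as=system:serviceaccount:default:jenkins -n default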

5. Install Jenkins

Create the jenkins-statefulset.yaml file, with the contents shown below.

apiVersion: apps/v1 kind: StatefulSet metadata: name: jenkins labels: name: jenkins spec: selector: matchLabels: name: jenkins serviceName: jenkins replicas: 1 updateStrategy: type: RollingUpdate template: metadata: name: jenkins labels: name: jenkins spec: terminationGracePeriodSeconds: 10 serviceAccountName: jenkins containers: - name: jenkins image: docker.io/jenkins/jenkins:lts imagePullPolicy: IfNotPresent ports: - containerPort: 8080 - containerPort: 32100 resources: limits: cpu: 4 memory: 4Gi requests: cpu: 4 memory: 4Gi env: - name: LIMITS_MEMORY valueFrom: resourceFieldRef: resource: limits.memory divisor: 1Mi - name: JAVA_OPTS # value: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 volumeMounts: - name: jenkins-home mountPath: /var/jenkins_home livenessProbe: httpGet: path: /login port: 8080 initialDelaySeconds: 60 timeoutSeconds: 5 failureThreshold: 12 # ~2 minutes readinessProbe: httpGet: path: /login port: 8080 initialDelaySeconds: 60 timeoutSeconds: 5 failureThreshold: 12 # ~2 minutes # pvc template, corresponding to previous pv volumeClaimTemplates: - metadata: name: jenkins-home spec: accessModes: ["ReadWriteOnce"] resources: requests: storage: 1Ti

When deploying Jenkins, pay attention to the number of replicas: you need as many PVs as replicas, and the storage consumption grows accordingly. I only use one replica here, so only one PV was created earlier.

Use the following command to install Jenkins.

kubectl apply -f jenkins-statefulset.yaml
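
The first start pulls the jenkins/jenkins:lts image and can take a few minutes. The following checks show the rollout progress and confirm that the PVC generated by volumeClaimTemplates is bound to the PV created earlier.

# Watch the StatefulSet and its Pod come up
kubectl get statefulset jenkins
kubectl get pod -l name=jenkins -o wide
# The PVC generated from volumeClaimTemplates should now be Bound
kubectl get pvc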

6. Create Service

Create the jenkins-service.yaml file, which is mainly used to expose Jenkins for access; its contents are shown below.

apiVersion: v1 kind: Service metadata: name: jenkins spec: # type: LoadBalancer selector: name: jenkins # ensure the client ip is propagated to avoid the invalid crumb issue when using LoadBalancer (k8s >=1.7) #externalTrafficPolicy: Local ports: - name: http port: 80 nodePort: 31888 targetPort: 8080 protocol: TCP - name: jenkins-agent port: 32100 nodePort: 32100 targetPort: 32100 protocol: TCP type: NodePort

Use the following command to install the Service.

kubectl apply -f jenkins-service.yaml

7. Install ingress

Jenkins' web interface needs to be accessed from outside the cluster, so the choice here is to use an Ingress. Create the jenkins-ingress.yaml file, with the contents shown below.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: jenkins
          servicePort: 31888
    host: jekins.binghe.com

Here, it is important to note that the host must be configured as a domain name or host name, otherwise an error will occur, as shown below.

The Ingress "jenkins" is invalid: spec.rules[0].host: Invalid value: "192.168.0.10": must be a DNS name, not an IP address

Use the following command to install ingress.

kubectl apply -f jenkins-ingress.yaml

Finally, since I am using virtual machines to set up the environment here, when accessing the mapped jekins.binghe.com from the local machine, you need to configure the local hosts file by adding the following entry to it.

192.168.0.10 jekins.binghe.com

Note: In the Windows operating system, the hosts file is located in the following directory.

C:\Windows\System32\drivers\etc

Next, you can visit Jenkins in your browser at the link http://jekins.binghe.com:31888.
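
On first access, Jenkins asks for the initial administrator password, which is written inside the container. Assuming the single-replica StatefulSet above created the pod jenkins-0 in the default namespace, it can be read with kubectl exec.

# Print the password requested by the Jenkins unlock screen
kubectl exec jenkins-0 -- cat /var/jenkins_home/secrets/initialAdminPassword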

Install SVN on a Physical Machine

Apache Subversion, commonly abbreviated as SVN, is an open source version control system. It was developed by CollabNet Inc. in 2000 and is now part of the Apache Software Foundation, with a rich community of developers and users.

Compared with RCS and CVS, SVN adopts a branch management model and was designed to replace CVS. Many free version control services on the Internet are based on Subversion.

Here, take installing SVN on the Master node (binghe101 server) as an example.

1. Install SVN using yum

Install SVN from the command line by executing the following command.

yum -y install subversion

2. Create the SVN Repository

Execute the following commands in turn.

#Create the /data/svn directory
mkdir -p /data/svn
#Initialize svn
svnserve -d -r /data/svn
#Create the code repository
svnadmin create /data/svn/test

3. Configure SVN

mkdir /data/svn/conf
cp /data/svn/test/conf/* /data/svn/conf/
cd /data/svn/conf/
[root@binghe101 conf]# ll
total 20
-rw-r--r-- 1 root root 1080 May 12 02:17 authz
-rw-r--r-- 1 root root  885 May 12 02:17 hooks-env.tmpl
-rw-r--r-- 1 root root  309 May 12 02:17 passwd
-rw-r--r-- 1 root root 4375 May 12 02:17 svnserve.conf
  • Configure the authz file,
vim authz

The configuration is shown below.

[aliases]
# joe = /C=XZ/ST=Dessert/L=Snake City/O=Snake Oil, Ltd./OU=Research Institute/CN=Joe Average

[groups]
# harry_and_sally = harry,sally
# harry_sally_and_joe = harry,sally,&joe
SuperAdmin = admin
binghe = admin,binghe

# [/foo/bar]
# harry = rw
# &joe = r
# * =

# [repository:/baz/fuz]
# @harry_and_sally = rw
# * = r

[test:/]
@SuperAdmin=rw
@binghe=rw
  • Configure passwd file
vim passwd

The configuration is shown below.

[users]
# harry = harryssecret
# sally = sallyssecret
admin = admin123
binghe = binghe123
  • Configure the svnserve.conf file
vim svnserve.conf

The configured file is shown below.

### This file controls the configuration of the svnserve daemon, if you
### use it to allow access to this repository.  (If you only allow
### access through http: and/or file: URLs, then this file is
### irrelevant.)
### Visit http://subversion.apache.org/ for more information.

[general]
### The anon-access and auth-access options control access to the
### repository for unauthenticated (a.k.a. anonymous) users and
### authenticated users, respectively.
### Valid values are "write", "read", and "none".
### Setting the value to "none" prohibits both reading and writing;
### "read" allows read-only access, and "write" allows complete
### read/write access to the repository.
### The sample settings below are the defaults and specify that anonymous
### users have read-only access to the repository, while authenticated
### users have read and write access to the repository.
anon-access = none
auth-access = write
### The password-db option controls the location of the password
### database file.  Unless you specify a path starting with a /,
### the file's location is relative to the directory containing
### this configuration file.
### If SASL is enabled (see below), this file will NOT be used.
### Uncomment the line below to use the default password file.
password-db = /data/svn/conf/passwd
### The authz-db option controls the location of the authorization
### rules for path-based access control.  Unless you specify a path
### starting with a /, the file's location is relative to the
### directory containing this file.  The specified path may be a
### repository relative URL (^/) or an absolute file:// URL to a text
### file in a Subversion repository.  If you don't specify an authz-db,
### no path-based access control is done.
### Uncomment the line below to use the default authorization file.
authz-db = /data/svn/conf/authz
### The groups-db option controls the location of the file with the
### group definitions and allows maintaining groups separately from the
### authorization rules.  The groups-db file is of the same format as the
### authz-db file and should contain a single [groups] section with the
### group definitions.  If the option is enabled, the authz-db file cannot
### contain a [groups] section.  Unless you specify a path starting with
### a /, the file's location is relative to the directory containing this
### file.  The specified path may be a repository relative URL (^/) or an
### absolute file:// URL to a text file in a Subversion repository.
### This option is not being used by default.
# groups-db = groups
### This option specifies the authentication realm of the repository.
### If two repositories have the same authentication realm, they should
### have the same password database, and vice versa.  The default realm
### is repository's uuid.
realm = svn
### The force-username-case option causes svnserve to case-normalize
### usernames before comparing them against the authorization rules in the
### authz-db file configured above.  Valid values are "upper" (to upper-
### case the usernames), "lower" (to lowercase the usernames), and
### "none" (to compare usernames as-is without case conversion, which
### is the default behavior).
# force-username-case = none
### The hooks-env options specifies a path to the hook script environment
### configuration file. This option overrides the per-repository default
### and can be used to configure the hook script environment for multiple
### repositories in a single file, if an absolute path is specified.
### Unless you specify an absolute path, the file's location is relative
### to the directory containing this file.
# hooks-env = hooks-env

[sasl]
### This option specifies whether you want to use the Cyrus SASL
### library for authentication. Default is false.
### Enabling this option requires svnserve to have been built with Cyrus
### SASL support; to check, run 'svnserve --version' and look for a line
### reading 'Cyrus SASL authentication is available.'
# use-sasl = true
### These options specify the desired strength of the security layer
### that you want SASL to provide. 0 means no encryption, 1 means
### integrity-checking only, values larger than 1 are correlated
### to the effective key length for encryption (e.g. 128 means 128-bit
### encryption). The values below are the defaults.
# min-encryption = 0
# max-encryption = 256
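
For reference, the password-db and authz-db files referenced above could look like the following. This is only a minimal sketch: it assumes a single user binghe with the password binghe123 (the credentials used later to connect via TortoiseSVN), placed in an admin group with full read/write access.

# /data/svn/conf/passwd
[users]
binghe = binghe123

# /data/svn/conf/authz
[groups]
admin = binghe

[/]
@admin = rw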

Next, copy the svnserve.conf file to the /data/svn/test/conf/ directory, as shown below.

[root@binghe101 conf]# cp /data/svn/conf/svnserve.conf /data/svn/test/conf/
cp: overwrite '/data/svn/test/conf/svnserve.conf'? y

4. Start SVN Service

(1) Create the svnserve.service service

Create the svnserve.service file.

vim /usr/lib/systemd/system/svnserve.service

The contents of the file are shown below.

[Unit]
Description=Subversion protocol daemon
After=syslog.target network.target
Documentation=man:svnserve(8)

[Service]
Type=forking
EnvironmentFile=/etc/sysconfig/svnserve
#ExecStart=/usr/bin/svnserve --daemon --pid-file=/run/svnserve/svnserve.pid $OPTIONS
ExecStart=/usr/bin/svnserve --daemon $OPTIONS
PrivateTmp=yes

[Install]
WantedBy=multi-user.target

Next, execute the following command to make the configuration take effect.

systemctl daemon-reload

After the command executes successfully, modify the /etc/sysconfig/svnserve file.

vim /etc/sysconfig/svnserve

The modified file contents are shown below.

# OPTIONS is used to pass command-line arguments to svnserve.
#
# Specify the repository location in -r parameter:
OPTIONS="-r /data/svn"

(2) Start SVN

First look at the SVN status, as shown below.

[root@test10 conf]# systemctl status svnserve.service
● svnserve.service - Subversion protocol daemon
   Loaded: loaded (/usr/lib/systemd/system/svnserve.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:svnserve(8)

As you can see, the SVN is not started at this time. Next, you need to start the SVN.

systemctl start svnserve.service

Set the SVN service to start automatically.

systemctl enable svnserve.service
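
If you want to confirm that svnserve is actually listening on its default port 3690, a quick check along the following lines should work (assuming the ss utility is available on the server):

# svnserve listens on TCP port 3690 by default
ss -lntp | grep 3690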

Next, you can download and install TortoiseSVN, enter the URL svn://192.168.0.10/test, and use the username binghe and password binghe123 to connect to SVN.
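
If you prefer the Subversion command-line client over TortoiseSVN, a checkout along these lines should also work (a sketch; the local target directory name test is arbitrary):

# Check out the test repository using the account created above
svn checkout svn://192.168.0.10/test test --username binghe --password binghe123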

Docker Install SVN

Pull the SVN image

docker pull docker.io/elleflorio/svn-server

Start the SVN container

docker run -v /usr/local/svn:/home/svn \
  -v /usr/local/svn/passwd:/etc/subversion/passwd \
  -v /usr/local/apache2:/run/apache2 \
  --name svn_server \
  -p 3380:80 -p 3690:3690 \
  -e SVN_REPONAME=repos \
  -d docker.io/elleflorio/svn-server
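
After the container starts, you can quickly verify that it is running and inspect its startup output, for example:

# Confirm the svn_server container is up and check its logs
docker ps | grep svn_server
docker logs svn_server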

Enter the SVN container

docker exec -it svn_server bash

After entering the container, you can configure the SVN repository by following the same steps as the physical machine installation.

Physical Machine Installation Jenkins

Note: Before installing Jenkins, you need to install JDK and Maven. Here I will also install Jenkins on the Master node (binghe101 server).

1. Enable the Jenkins Repository

Run the following command to download the repo file and import the GPG key:

wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key

2. Install Jenkins

Execute the following command to install Jenkins.

yum install jenkins

Next, modify the Jenkins default port, as shown below.

vim /etc/sysconfig/jenkins

The two modified configurations are shown below.

JENKINS_JAVA_CMD="/usr/local/jdk1.8.0_212/bin/java"
JENKINS_PORT="18080"

At this point, the Jenkins port has been changed from 8080 to 18080.

3. Start Jenkins

Start Jenkins at the command line by typing the following command.

systemctl start jenkins

Configure Jenkins to start automatically at boot.

systemctl enable jenkins

View the status of Jenkins.

[root@test10 ~]# systemctl status jenkins
● jenkins.service - LSB: Jenkins Automation Server
   Loaded: loaded (/etc/rc.d/init.d/jenkins; generated)
   Active: active (running) since Tue 2020-05-12 04:33:40 EDT; 28s ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 71 (limit: 26213)
   Memory: 550.8M

This shows that Jenkins started successfully.
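
You can also confirm from the command line that Jenkins is answering on the new port, for example (an HTTP response, even a 403 before the setup wizard is finished, shows the service is up):

curl -I http://192.168.0.10:18080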

Configure the Jenkins runtime environment

1. Sign in to Jenkins

After the first installation, you need to configure Jenkins' runtime environment. First, open http://192.168.0.10:18080 in the browser address bar to access the Jenkins interface.

As prompted, use the following command on the server to find the initial password, as shown below.

[root@binghe101 ~]# cat /var/lib/jenkins/secrets/initialAdminPassword
71af861c2ab948a1b6efc9f7dde90776

Copy the password 71af861c2ab948a1b6efc9f7dde90776 into the text box and click Continue. Jenkins then jumps to the Customize Jenkins page, as shown below.

Here, you can directly select "Install recommended plug-ins". It then jumps to a page showing the plug-in installation progress, as shown below.

Some plug-in downloads may fail at this step; this can be ignored for now.

2. Install plug-ins

Plug-ins to be installed

  • Kubernetes CLI Plugin: allows the kubectl command line to be used directly from Jenkins.

  • Kubernetes plugin: required to use Kubernetes from Jenkins.

  • Kubernetes Continuous Deploy Plugin: a Kubernetes deployment plug-in that can be used as needed.

There are more plug-ins to choose from. You can click System Management -> Plugin Management to manage and add plug-ins, and install the corresponding Docker, SSH and Maven plug-ins. Other plug-ins can be installed as needed, as shown in the following figure.
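
If you prefer installing plug-ins from the command line instead of the UI, the Jenkins CLI can also do it. The sketch below assumes the short plug-in IDs are kubernetes, kubernetes-cli and kubernetes-cd, and that admin:your-api-token is replaced with a real user and API token:

# Download the Jenkins CLI jar from the running Jenkins instance
wget http://192.168.0.10:18080/jnlpJars/jenkins-cli.jar
# Install the Kubernetes-related plug-ins and restart Jenkins when finished
java -jar jenkins-cli.jar -s http://192.168.0.10:18080/ -auth admin:your-api-token install-plugin kubernetes kubernetes-cli kubernetes-cd -restart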

3. Configure Jenkins

(1) Configuring JDK and Maven

Configure JDK and Maven in Global Tool Configuration. Open the Global Tool Configuration interface, as shown below.

Now you're ready to configure JDK and Maven.

Since I installed Maven in the /usr/local/maven-3.6.3 directory on my server, I need to configure it in Maven Configuration, as shown in the following figure.

Next, configure the JDK, as shown below.

Note: Do not check "Install automatically".

Next, configure Maven, as shown below.

Note: Do not check "Install automatically".

(2) Configure SSH

Configure SSH by entering the Configure System interface of Jenkins, as shown below.

Find SSH remote hosts to configure.

When the configuration is complete, click the Check connection button and "Successful connection" will be displayed, as shown below.

The basic configuration of Jenkins is now complete.

Jenkins publishes Docker projects to the K8s cluster

1. Adjust the configuration of the SpringBoot project

In the pom.xml of the module where the SpringBoot startup class resides, the configuration for packaging the module as a Docker image needs to be introduced, as shown below.

<properties>
    <docker.repostory>192.168.0.10:1180</docker.repostory>
    <docker.registry.name>test</docker.registry.name>
    <docker.image.tag>1.0.0</docker.image.tag>
    <docker.maven.plugin.version>1.4.10</docker.maven.plugin.version>
</properties>

<build>
    <finalName>test-starter</finalName>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
        <!-- Docker Maven plugin, official: https://github.com/spotify/docker-maven-plugin -->
        <!-- Dockerfile Maven plugin -->
        <plugin>
            <groupId>com.spotify</groupId>
            <artifactId>dockerfile-maven-plugin</artifactId>
            <version>${docker.maven.plugin.version}</version>
            <executions>
                <execution>
                    <id>default</id>
                    <goals>
                        <!-- Comment out this goal if you do not want "mvn package" to build the Docker image -->
                        <goal>build</goal>
                        <goal>push</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <contextDirectory>${project.basedir}</contextDirectory>
                <!-- Harbor repository username and password are taken from Maven settings.xml -->
                <useMavenSettingsForAuth>true</useMavenSettingsForAuth>
                <repository>${docker.repostory}/${docker.registry.name}/${project.build.finalName}</repository>
                <tag>${docker.image.tag}</tag>
                <buildArgs>
                    <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
                </buildArgs>
            </configuration>
        </plugin>
    </plugins>
    <resources>
        <!-- Treat all files and folders under src/main/resources as resource files -->
        <resource>
            <directory>src/main/resources</directory>
            <targetPath>${project.build.directory}/classes</targetPath>
            <includes>
                <include>**/*</include>
            </includes>
            <filtering>true</filtering>
        </resource>
    </resources>
</build>
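
Because useMavenSettingsForAuth is set to true, the dockerfile-maven-plugin reads the Harbor credentials from Maven's settings.xml rather than from the pom. A minimal sketch of the corresponding <server> entry (assuming the binghe/Binghe123 Harbor account used elsewhere in this document) might look like this:

<!-- In ~/.m2/settings.xml or /usr/local/maven-3.6.3/conf/settings.xml -->
<servers>
    <server>
        <!-- The id must match the registry address used in <repository> above -->
        <id>192.168.0.10:1180</id>
        <username>binghe</username>
        <password>Binghe123</password>
    </server>
</servers>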

Next, create a Dockerfile in the root directory of the module where the SpringBoot startup class resides, as shown in the following example.

# Use the Java 8 image pulled from the official registry and uploaded to your own Harbor private repository as the base image
FROM 192.168.0.10:1180/library/java:8
# Specify the image author
MAINTAINER binghe
# Runtime directory
VOLUME /tmp
# Copy the locally built jar file into the container
ADD target/*.jar app.jar
# Command automatically executed after the container starts
ENTRYPOINT ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app.jar"]

Modify it according to the actual situation.

Note: the precondition for using FROM 192.168.0.10:1180/library/java:8 is that the following commands have been executed.

docker pull java:8
docker tag java:8 192.168.0.10:1180/library/java:8
docker login 192.168.0.10:1180
docker push 192.168.0.10:1180/library/java:8
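
Once the base image is available in Harbor, you can sanity-check the Dockerfile by hand before handing the build over to Maven and Jenkins. This step is optional and assumes the jar has already been built under target/:

# Build the image manually from the module root directory
docker build -t 192.168.0.10:1180/test/test-starter:1.0.0 .
# Run it locally and check that the application answers on port 8088
docker run --rm -p 8088:8088 192.168.0.10:1180/test/test-starter:1.0.0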

Create a YAML file named test.yaml in the root directory of the module where the SpringBoot startup class resides, as shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-starter
  labels:
    app: test-starter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-starter
  template:
    metadata:
      labels:
        app: test-starter
    spec:
      containers:
      - name: test-starter
        image: 192.168.0.10:1180/test/test-starter:1.0.0
        ports:
        - containerPort: 8088
      nodeSelector:
        clustertype: node12
---
apiVersion: v1
kind: Service
metadata:
  name: test-starter
  labels:
    app: test-starter
spec:
  ports:
  - name: http
    port: 8088
    nodePort: 30001
  type: NodePort
  selector:
    app: test-starter
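
Before wiring this into Jenkins, the manifest can be verified manually. For example (assuming kubectl is configured against the cluster and the image has already been pushed to Harbor):

kubectl apply -f test.yaml
kubectl get pods -l app=test-starter
kubectl get svc test-starter
# The Service is of type NodePort, so the application should answer on port 30001 of a cluster node
curl http://192.168.0.10:30001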

2. Configure the Publishing Project in Jenkins

Upload the project to the SVN repository, for example at svn://192.168.0.10/test.

Next, configure automatic publishing in Jenkins.The steps are shown below.

Click New Item.

Enter the description information in the Description text box, as shown below.

Next, configure the SVN information.

Note: The steps for configuring GitLab are the same as those for SVN and will not be repeated.

Locate Jenkins' Build module and use Execute Shell to build and publish the project to the K8S cluster.

The commands executed are shown below in turn.

# Remove the local image; this does not affect the image in the Harbor repository
docker rmi 192.168.0.10:1180/test/test-starter:1.0.0
# Use Maven to compile and build the Docker image; when execution completes, the image is rebuilt in the local Docker engine
/usr/local/maven-3.6.3/bin/mvn -f ./pom.xml clean install -Dmaven.test.skip=true
# Log in to the Harbor repository
docker login 192.168.0.10:1180 -u binghe -p Binghe123
# Push the image to the Harbor repository
docker push 192.168.0.10:1180/test/test-starter:1.0.0
# Stop and delete the resources currently running in the K8S cluster
/usr/bin/kubectl delete -f test.yaml
# Republish the Docker image to the K8S cluster
/usr/bin/kubectl apply -f test.yaml
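
One caveat with the script above: on the very first run, docker rmi and kubectl delete will fail because there is no local image and nothing is deployed yet, which can mark the build as failed since Jenkins' Execute Shell aborts on the first failing command by default. A slightly more tolerant variant of those two lines might look like this:

# Ignore a missing local image on the first run
docker rmi 192.168.0.10:1180/test/test-starter:1.0.0 || true
# --ignore-not-found prevents the delete step from failing when nothing is deployed yet
/usr/bin/kubectl delete -f test.yaml --ignore-not-found=true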
