Jenkins continuous integration

Continuous integration practice

[GitLab repository server 192.168.87.100]

  1. Install related dependencies

yum -y install policycoreutils openssh-server openssh-clients postfix

  2. Start the ssh service and enable it at boot
    systemctl enable sshd && sudo systemctl start sshd

  3. Enable and start postfix. Postfix supports GitLab's mail sending function
    yum install postfix
    systemctl enable postfix && systemctl start postfix

  4. Open the ssh and http services, then reload the firewall rules
    firewall-cmd --add-service=ssh --permanent
    firewall-cmd --add-service=http --permanent
    firewall-cmd --reload

If you turn off the firewall, you do not need to do the above configuration

  5. Download the gitlab installation package:
    wget https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/yum/el6/gitlab-ce-12.4.2-ce.0.el6.x86_64.rpm

  6. Install:
    rpm -i gitlab-ce-12.4.2-ce.0.el6.x86_64.rpm

  7. Modify the gitlab configuration

vi /etc/gitlab/gitlab.rb
###Modify the gitlab access address and port. The default port is 80; we change it to 82
#Modify external_url 'http://192.168.87.100'
#Add nginx['listen_port'] = 82
  8. Reload the configuration and start gitlab
gitlab-ctl reconfigure
gitlab-ctl restart
  Add the port to the firewall:
firewall-cmd --zone=public --add-port=82/tcp --permanent 
firewall-cmd --reload
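
An optional check that GitLab came up correctly (gitlab-ctl ships with the package):

gitlab-ctl status   #Each component should be listed as "run:"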

After successful startup, open the page to set the administrator (root) password; after changing the password, log in
9. Uninstall gitlab
[tutorial] https://www.cnblogs.com/peteremperor/p/10837551.html

10. Use:
First create a group, then create a user, create a project in the group, and add members to the project; members then log in and carry out daily development.
After creating a group, add members to the group

[Jenkins installation 192.168.87.101]

There are two methods: add the Jenkins repository to the yum sources and install the latest version, or download an rpm package for a specific version.

1) Install JDK

2) Install jenkins

wget -O: Download and save with a different file name

Jenkins is not in the default yum repositories, so add the Jenkins repository to the yum repos and execute the following commands:

#prepare
yum  -y install epel-release
yum -y install daemonize
#Formal installation
wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
yum install -y jenkins #Install the latest by default
systemctl enable jenkins  #Startup and self start
#vim /etc/sysconfig/jenkins   #Modified content: JENKINS_USER="root" JENKINS_PORT="8888"
sed -i  "s/JENKINS_PORT=\"8080\"/JENKINS_PORT=\"8888\"/" /etc/sysconfig/jenkins
service jenkins start  #start-up

#Open port
firewall-cmd --zone=public --add-port=8888/tcp --permanent
firewall-cmd --reload #Reload firewall

Open browser access http://192.168.87.101:8888
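
When the page asks you to unlock Jenkins, the initial administrator password can be read from the Jenkins home directory (for an rpm install this is /var/lib/jenkins by default):

cat /var/lib/jenkins/secrets/initialAdminPassword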

1. Configure Jenkins

Create an administrator user -> instance configuration -> install the recommended plug-ins, or choose plug-ins yourself.

3) Configure a domestic (China mirror) plug-in download address:

cd /var/lib/jenkins/updates
sed -i 's/http:\/\/updates.jenkins-ci.org\/download/https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins/g' default.json
sed -i 's/http:\/\/www.google.com/https:\/\/www.baidu.com/g' default.json

In Manage Plugins, click Advanced and change the Update Site to the domestic mirror address:

https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json

After Submit, enter http://192.168.87.101:8888/restart in the browser to restart Jenkins.

4) Install Chinese plugin

Installing the recommended plug-ins usually installs the Chinese language plug-in for us. If it did not, search for "Chinese" in the plug-in manager and install it.

2. Create an association with GitLab

The steps are as follows: pull the code from gitlab using ordinary credentials -> pull the code from gitlab using ssh credentials

Pull the code from GitLab in Jenkins

Install the Credentials Binding plug-in and the Git plug-in in Jenkins, and install git on the Jenkins server (yum install git -y && git --version)

1. Create an ordinary (username/password) credential

Use ssh credentials to pull

2. Use ssh-keygen to generate an SSH key pair, then cat the public key under /root/.ssh/ and copy it into the SSH Keys section of the GitLab settings (see the sketch after this list):

3. Configure ssh credentials
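
A minimal sketch of step 2, run as root on the Jenkins server (the comment passed with -C is just a label):

ssh-keygen -t rsa -C "jenkins@192.168.87.101"   #Generate the key pair; press Enter to accept the default path
cat /root/.ssh/id_rsa.pub                        #Copy this public key into GitLab -> Settings -> SSH Keys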

3. Build pipeline environment

Pipeline build project: the build project changes from the previous step-by-step process to a pipeline. After we create a pipeline project, we only need to write or generate a pipeline script to pull, build, and deploy the project.

The following steps: install the plug-in (Pipeline, included in the recommended plug-ins) -> write the project build script -> view the build

1. Install plug-ins

4. Extended Choice Parameter build

Example:

5. Precautions

1) When pulling the code, the git credential in the Jenkins script is ssh

6. Code review

install

Note that Sonar 6.7 requires a MySQL 5.7 database (a newer MySQL such as 8.x is not supported) and JDK 1.8 or above. Create a database named sonar, then start installing Sonar. After startup, open the port and log in to get the token:

The following steps: ensure that there is a database named sonar -> install Sonar -> the remaining work

Ensure that a database named sonar exists
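
A minimal sketch of creating that database, assuming MySQL 5.7 is already running locally and the root password matches the sonar.properties shown later:

mysql -uroot -p3333 -e "CREATE DATABASE sonar DEFAULT CHARACTER SET utf8;"   #The utf8 character set is an assumption
mysql -uroot -p3333 -e "SHOW DATABASES LIKE 'sonar';"                        #Verify the database exists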

Install Sonar

Download sonar compressed package: https://www.sonarqube.org/downloads/

wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-6.7.4.zip
yum install unzip
unzip sonarqube-6.7.4.zip #decompression
mv sonarqube-6.7.4 /usr/local/ #Move the directory to /usr/local (the later paths use /usr/local/sonarqube-6.7.4)

useradd sonar #Create a sonar user; Sonar must be started as this user, or an error will be reported
chown -R sonar. /usr/local/sonarqube-6.7.4 #Change ownership of the sonar directory and files, then modify the sonar configuration file

vim /usr/local/sonarqube-6.7.4/conf/sonar.properties
#Paste the following directly into the configuration without modifying the port (default 9000)

sonar.jdbc.username=root
sonar.jdbc.password=3333  
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance&useSSL=false

cd /usr/local/sonarqube-6.7.4

#Sonar cannot be started as root!!! When logged in as root, use su to run the commands as the sonar user
su sonar ./bin/linux-x86-64/sonar.sh start #start-up
su sonar ./bin/linux-x86-64/sonar.sh status #View status
su sonar ./bin/linux-x86-64/sonar.sh stop #stop it

tail -f logs/sonar.log #View the log; then access sonar at http://192.168.66.101:9000
#Open port
firewall-cmd  --zone=public --add-port=9000/tcp --permanent
firewall-cmd --reload

Remaining work

2) Log in: default admin/admin

Get the token:

Enter a name (for example, admin) and generate it; extract and save the token shown below:

admin: d6065dbc1e1cfcd3a9714614b16d6bd816246d29
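
If you prefer the command line, a token can also be generated through the SonarQube web API (a sketch, assuming the default admin/admin login and a token name of jenkins):

curl -u admin:admin -X POST "http://192.168.66.101:9000/api/user_tokens/generate?name=jenkins"
#The JSON response contains a "token" field; save that value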

Sonar and Jenkins integration

Install plug-ins

Install the SonarQube Scanner tool in Jenkins

Manage Jenkins -> Global Tool Configuration: add the SonarQube Scanner; then add a credential holding the Sonar token

In the Jenkins system configuration page, make the following configuration

In the project's Jenkins pipeline script, add the code review stage (shown below)

7. Compile package

Environmental preparation

Make sure that the Jenkins host has JDK and Maven installed (and configure the Aliyun mirror for Maven; see below)
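
A quick sanity check on the Jenkins host (the settings.xml path below is an assumption; adjust to your Maven install):

java -version
mvn -v
#The Aliyun mirror entry goes into Maven's settings.xml, for example:
#vi /usr/local/maven/conf/settings.xml   (add the aliyun <mirror> under <mirrors>)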
2. Configuration on Jenkins

Configuration 1:

Configuration 2:

For common modules

We need to install the public (common) dependencies, so edit the "Jenkinsfile" script under the project root directory and add a "compile and install common subprojects" stage:

//git credential ID
def git_auth = "3b3eca2f-e2d7-44e5-8e11-e98ee29953dc"
//url address of git
def git_url = "git@192.168.87.133:zhuangjie/tensquare_back.git"

node {
   stage('Pull code') {
      checkout([$class: 'GitSCM', branches: [[name: "*/${branch}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_url}"]]])
   }
   stage('Code review') {
        def scannerHome=tool 'SonarQube-Scanner' //Change to the SonarQube Scanner tool name configured in Jenkins
        withSonarQubeEnv('SonarQube6.7'){ //Use the SonarQube server environment configured in Jenkins
                sh """
		cd ${project_name}
		${scannerHome}/bin/sonar-scanner
	"""
        }
   }
   stage('Compile and install common subprojects') {
   	sh "mvn -f tensquare_common clean install"
   }
    
}

Then, ensure that each module's "pom.xml" contains the packaging plug-in (the one in the parent project can be deleted):

	<build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

Upload the parent module to the server

This ensures that the dependent installation can be completed. Let's package the runnable microservices:

Because zuul depends on the parent module, we need to ensure that the server's local Maven repository contains the parent project:

[root@localhost tensquare]# pwd
/root/.m2/repository/com/tensquare
[root@localhost tensquare]# mv /home/zhuangjie/desktop/tensquare_parent/ ./
[root@localhost tensquare]# ll
drwxr-xr-x. 3 root      root      58 Jul 29 tensquare_common
drwxrwxrwx. 3 zhuangjie zhuangjie 58 Jul 29 tensquare_parent

The directory we moved:

Then run the build. The module's jar file will be packaged in the pulled code directory (and the jar can be run to test it).
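
A quick way to verify the packaged jar on the Jenkins server (the workspace path and job name are assumptions):

ls /var/lib/jenkins/workspace/tensquare_back/tensquare_eureka_server/target/*.jar
java -jar /var/lib/jenkins/workspace/tensquare_back/tensquare_eureka_server/target/tensquare_eureka_server-1.0-SNAPSHOT.jar   #Ctrl+C to stop after it starts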

8. Docker environment

To sum up in a simple sentence: Docker technology enables us to deploy and use any application on Linux servers more efficiently and easily

[Docker installation]

1) Uninstall old version

yum list installed | grep docker #List all currently installed docker packages
yum -y remove docker  #Uninstall docker by package name
rm -rf /var/lib/docker #Delete all docker images and containers

2) Install the necessary software packages

sudo yum install -y yum-utils  device-mapper-persistent-data  lvm2

3) Set the downloaded image warehouse

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
#Or use an accelerated mirror: yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

4) Lists the versions that need to be installed

yum list docker-ce --showduplicates | sort -r  

5) Install the specified version (version 18.06.1 is used here)

sudo yum install docker-ce-18.06.1.ce

6) View version & set startup

docker -v
sudo systemctl enable docker #Set startup  
systemctl is-enabled docker  #Check whether it is set to start automatically

7) Start Docker

sudo systemctl start docker #start-up

8) Add the Alibaba Cloud image mirror address (this file also holds the trust list for pulls and pushes)

vi /etc/docker/daemon.json

{ 
  "registry-mirrors": ["https://zydiol88.mirror.aliyuncs.com"]
}

9) Restart Docker

sudo systemctl restart docker  
systemctl status docker  #View docker status

For offline installation method, please refer to: https://www.cnblogs.com/kingsonfu/p/11576797.html

[docker command]

1) Mirror command:

Image: it is equivalent to the installation package of the application. Any application deployed in Docker needs to be built into an image first

docker search Image name #Search for an image
docker pull Image name #Pull an image, e.g. docker pull nginx
docker images #View all local images
docker rmi -f Image name #Force-delete an image
docker rmi -f $(docker images -qa) #Delete all images

2) Container command

Containers: containers are created from images. A container is the carrier in which Docker runs an application; each application runs in its own Docker container.

docker run -di -p Host port:Container port Image name:tag #Run a container; -i interactive, -d detached (background), -p publishes the port
docker ps #View running containers
docker ps -a #Query all containers

docker exec -it container ID/Container name /bin/bash #Enter the inside of the container
docker start/stop/restart Container name/ID #Start / stop / restart container
docker rm -f Container name/ID #Delete container
docker rm `docker ps -a -q` #Delete all containers
docker logs -f -t --tail Row count container name  #View container run log

[image making]

Next, make the "tensquare_eureka_server-1.0-SNAPSHOT.jar" Eureka microservice into a Docker image

1) Upload Eureka's microservice jar package to linux
2) Write Dockerfile

FROM openjdk:8-jdk-alpine
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
EXPOSE 10086
ENTRYPOINT ["java","-jar","/app.jar"]

3) Build mirror

docker build --build-arg JAR_FILE=tensquare_eureka_server-1.0-SNAPSHOT.jar -t eureka:v1 .

4) Check whether the image is created successfully

docker images

5) Create container

docker run -i --name=eureka -p 10086:10086 eureka:v1

6) Access container
http://192.168.66.101:10086
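
A quick check that the container is up (standard docker commands, using the IP above):

docker ps | grep eureka                 #The container should show as Up
docker logs -f -t --tail 100 eureka     #Watch the Eureka startup log
curl -I http://192.168.66.101:10086     #Should return HTTP headers from the Eureka console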

9. Image making

There is no need to install extra plug-ins in Jenkins for this process. For the Jenkins machine to push images to the Harbor machine, Jenkins only needs Docker configured, Harbor added to the trust list, and a successful docker login to the Harbor server.
Inside each project, you need to add a "Dockerfile" file

When "mvn -f ${currentProjectName} clean package dockerfile:build" is executed in the project root directory, the module is compiled, packaged, and built into an image.
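
A manual test of the same command on the Jenkins server, assuming the dockerfile-maven plug-in is already configured in the module's pom.xml (the workspace path and image name are assumptions):

cd /var/lib/jenkins/workspace/tensquare_back
mvn -f tensquare_eureka_server clean package dockerfile:build
docker images | grep eureka   #The newly built image should appear here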

[Harbor environment 192.168.87.102]

Introduction:

Harbor is an enterprise Registry server used to store and distribute Docker images.
Besides the private image repository Harbor, there is also the Registry officially provided by Docker. Compared with Registry, Harbor has several advantages:

  1. Provides a layered transmission mechanism to optimize network transfer. Docker images are layered; transferring the full set of files every time (as plain FTP would) is obviously uneconomical, so layers are identified by their UUIDs and only the needed layers are transferred.
  2. Provides a WEB interface to optimize the user experience. Uploading and downloading by image name alone is inconvenient; a user interface is needed that supports login and search, and distinguishes public from private images.
  3. Supports horizontal cluster expansion, so the access pressure of image uploads and downloads can be spread across servers.
  4. Provides a good security mechanism: enterprise development teams have many different roles, and assigning different permissions to different roles gives better security.

1) First install Docker and start Docker (completed)

Refer to the previous installation procedure
2) Install docker compose first

sudo curl -L https://github.com/docker/compose/releases/download/1.27.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose # installation
#Or use an accelerated mirror: curl -L "https://get.daocloud.io/docker/compose/releases/download/1.27.3/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose  
docker-compose --version  #View version

3) Download the Harbor archive (this tutorial uses v1.9.2)
https://github.com/goharbor/harbor/releases

4) Installation starts

tar -xzf harbor-offline-installer-v1.9.2.tgz -C /usr/local/
cd /usr/local/harbor

vim harbor.yml
#Modification 1: hostname: 192.168.87.102
#Modification 2: port: 85
#You can also change the login password; it is not changed here, and the default password is used to log in below

./install.sh
docker-compose up -d #start-up
#docker-compose stop #stop it
#docker-compose restart #Restart

5) Add as startup
vim /lib/systemd/system/harbor.service

[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=http://github.com/vmware/harbor

[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/local/bin/docker-compose -f  /usr/local/harbor/docker-compose.yml up
ExecStop=/usr/local/bin/docker-compose -f /usr/local/harbor/docker-compose.yml down

[Install]
WantedBy=multi-user.target

systemctl enable harbor
systemctl is-enabled harbor

8) Visit Harbor
http://192.168.66.102:85
Default account password: admin/Harbor12345

Problems you may encounter (reference: https://www.cnblogs.com/hallejuayahaha/p/13926575.html): the author hit a case where docker-compose was missing or its version was too low, causing an error at startup:

[root@localhost zhuangjie]# docker-compose up -d #start-up
ERROR: 
        Can't find a suitable configuration file in this directory or any
        parent. Are you in the right directory?

        Supported filenames: docker-compose.yml, docker-compose.yaml
        
#solve
sudo curl -L https://github.com/docker/compose/releases/download/1.27.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version

./install.sh #Note: cd into the Harbor root directory before executing this as well
[project and user]

After login:

1) Create private projects (projects are divided into public projects and private projects)

2) Create user

3) Add members to the project:

Role             Permission description
Visitor (Guest)  Read-only permission for the specified project
Developer        Read and write permissions for the specified project
Maintainer       Read and write permissions for the specified project, and can create Webhooks
Project Admin    Read and write permissions plus management permissions such as user management and image scanning

1. Push image to Harbor

1) Label the image

docker tag eureka:v1 192.168.208.157:85/uplog/eureka:v1

2) Push image (two problems will be encountered, please solve them)

  1. Problem 1: the Harbor address needs to be added to the Docker trust list. If it is not solved, the push will prompt:

    The push refers to repository [192.168.66.102:85/tensquare/eureka]
    Get https://192.168.66.102:85/v2/: http: server gave HTTP response to HTTPS
    client
    

    vi /etc/docker/daemon.json

    { 
     "registry-mirrors": ["https://zydiol88.mirror.aliyuncs.com"],
     "insecure-registries": ["192.168.208.157:85"]
    }
    

    Docker needs to be restarted

  2. Problem 2: insufficient permission. If it is not solved, the push will prompt: "denied: requested access to the resource is denied"

    #docker login -u <username> -p <password> 192.168.66.102:85
    
    [root@localhost desktop]# docker login -u zhuangjie  -p gkmzjaznH21  192.168.87.102:85
    Login Succeeded
    
  3. Push

    docker push 192.168.208.157:85/uplog/eureka
    

3) Log in to Harbor to view

2. Pull image from Harbor

With Docker installed, make the following preparations to ensure that images can be pulled from Harbor:

  1. The Harbor address needs to be added to the Docker trust list. If it is not resolved, the push will prompt:

    The push refers to repository [192.168.66.102:85/tensquare/eureka]
    Get https://192.168.66.102:85/v2/: http: server gave HTTP response to HTTPS
    client
    

    vi /etc/docker/daemon.json

    { 
     "registry-mirrors": ["https://zydiol88.mirror.aliyuncs.com"],
     "insecure-registries": ["192.168.208.157:85"]  
    }
    

    Docker needs to be restarted

  2. Ready to pull

    docker login -u zhuangjie -p zhuangJIE3333 192.168.208.157:85  #Sign in
    docker pull 192.168.208.157:85/uplog/eureka:v1  #Pull. The pull command is copied directly from Harbor
    docker images  #View local mirror
    

[Web server 192.168.87.103]

1. Application deployment

For distributed application deployment, we need to add another web server:
Environment initialization:

systemctl enable sshd && sudo systemctl start sshd

firewall-cmd --add-service=ssh --permanent
firewall-cmd --add-service=http --permanent
firewall-cmd --reload

Environment: JDK and Docker must be installed; then add Harbor to the trust list:

vi /etc/docker/daemon.json

{
 "registry-mirrors": ["https://zydiol88.mirror.aliyuncs.com"],
 "insecure-registries": ["192.168.87.102:85"]
}

sudo systemctl restart docker # restart
systemctl status docker # view docker status

Install the "Publish over SSH" plug-in in Jenkins. The question now is how to control a web server through ssh.

1) On the Jenkins server, run ssh-copy-id 192.168.87.104 to copy the key to the new web server, then configure this web server in Jenkins for deployment (see the sketch after this list)

2) Configure "Publish over SSH" in the Jenkins global configuration as follows:

[Passphrase/Password] leave it empty if no password/passphrase is set; if one is set, it must be filled in.
[Path to key] the path of the key file (private key): /root/.ssh/id_rsa
[Key] can be left empty and the test can still succeed. Note that if the "/root/.ssh/id_rsa" file cannot be found, paste the private key from .ssh into this text box instead. Whether it is root or another user depends on which user executed the copy command.

[SSH Server Name] ID name, whatever you choose
[Hostname] the host name or IP address that needs to be connected to ssh. Fill in the application server IP here (recommended IP)
[Username] user name
[Remote Directory] Remote Directory (fill in the file as needed and transfer it to this directory)
[Test Configuration] after configuring, click Test; it should display Success!
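
A minimal sketch of step 1), assuming Jenkins runs as root on 192.168.87.101:

ssh-keygen -t rsa                    #Skip if /root/.ssh/id_rsa already exists
ssh-copy-id root@192.168.87.104      #Copy the public key to the web server
ssh root@192.168.87.104 "hostname"   #Verify that passwordless login works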

Full "Jenkinsfile":

//git credential ID
def git_auth = "da1d1333-c35f-415a-903c-b8d58c870c56"
//url address of git
def git_url = "git@192.168.87.100:zhuangjie/tensquare_back.git"

//Version number of the image
def tag = "latest"
//url address of Harbor
def harbor_url = "192.168.87.102:85"
//Image library project name
def harbor_project = "uplog"
//Login credential ID of Harbor
def harbor_auth = "ef2651c8-92a7-4edb-b4ca-4e182fadf0ca"

node {
    //Gets the name of the currently selected item
    def selectedProjectNames = "${project_name}".split(",")
    //Convert the selected server information into an array
    def selectedServers = "${publish_server}".split(',')

   stage('Pull code') {
      checkout([$class: 'GitSCM', branches: [[name: "*/${branch}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_url}"]]])
   }
   stage('Code review') {
        for (int i=0; i<selectedProjectNames.length; i++) {
            //tensquare_eureka_server@10086
            def projectInfo = selectedProjectNames[i];
            //Name of the project currently traversed
            def currentProjectName = "${projectInfo}".split("@")[0]
            //Project port currently traversed
            def currentProjectPort = "${projectInfo}".split("@")[1]
            def scannerHome=tool 'SonarQube-Scanner' //Change to the SonarQube Scanner tool name configured in Jenkins
            withSonarQubeEnv('SonarQube6.7'){ //Use the SonarQube server environment configured in Jenkins
                sh """
            		cd ${currentProjectName}
            		${scannerHome}/bin/sonar-scanner
            	"""
            }

        }


   }
   stage('Compile and install common subprojects') {
   	sh "mvn -f tensquare_common clean install"
   }
   stage('Compile, package microservice project, upload image') {
        for (int i=0; i<selectedProjectNames.length; i++) {
            //tensquare_eureka_server@10086
            def projectInfo = selectedProjectNames[i];
            //Name of the project currently traversed
            def currentProjectName = "${projectInfo}".split("@")[0]
            //Project port currently traversed
            def currentProjectPort = "${projectInfo}".split("@")[1]
            sh "echo 'Start mirroring'"
            sh "mvn -f ${currentProjectName}  clean package dockerfile:build"
            sh "echo 'The image is ready'"
            //Define mirror name
            def imageName = "${currentProjectName}:${tag}"
            //Label the image
            sh "docker tag ${imageName} ${harbor_url}/${harbor_project}/${imageName}"
            //Push the image to Harbor
            withCredentials([usernamePassword(credentialsId: "${harbor_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
                    //Log in to Harbor
                    sh "docker login -u ${username} -p ${password} ${harbor_url}"
                    //Image upload
                    sh "docker push ${harbor_url}/${harbor_project}/${imageName}"
                    sh "echo Image upload succeeded"
            }
           //=====The following is a remote call for project deployment========
           for(int j=0; j<selectedServers.size(); j++){
               //Each service name
               def currentServer = selectedServers[j]
               //Add microservice runtime parameters: spring.profiles.active
               def activeProfile = "--spring.profiles.active="
               if(currentServer=="master_server"){
                    activeProfile = activeProfile+"eureka-server1"
               }else if(currentServer=="slave_server1"){
                    activeProfile = activeProfile+"eureka-server2"
               }
                sh "echo 'Application deployment start'"
                sshPublisher(publishers: [
                       sshPublisherDesc(configName: "${currentServer}",
                       transfers: [
                           sshTransfer(cleanRemote: false,
                           excludes: '',
                           // Command executed on the remote server: deployCluster.sh <harbor_url> <project> <name> <tag> <port> <profile>
                           execCommand: "/opt/jenkins_shell/deployCluster.sh $harbor_url $harbor_project $currentProjectName $tag $currentProjectPort $activeProfile",
                           execTimeout: 120000,
                           flatten: false,
                           makeEmptyDirs: false,
                           noDefaultExcludes: false,
                           patternSeparator: '[ , ]+',
                           remoteDirectory: '',
                           remoteDirectorySDF: false,
                           removePrefix: '',
                           sourceFiles: '')
                       ],
                       usePromotionTimestamp: false,
                       useWorkspaceInPromotion: false,
                       verbose: false)
                ])
                sh "echo 'End of application deployment'"
            }
        }
   }

}

3) Add the "deployCluster.sh" script in the "/ opt/jenkins_shell" directory of the two production servers and grant the execution permission;

deployCluster.sh

#!/bin/sh
#Receive external parameters
harbor_url=$1
harbor_project_name=$2
project_name=$3
tag=$4
port=$5
profile=$6

imageName=$harbor_url/$harbor_project_name/$project_name:$tag

echo "$imageName"

#Query whether the container exists, and delete it if it exists
containerId=`docker ps -a | grep -w ${project_name}:${tag}  | awk '{print $1}'`

if [ "$containerId" !=  "" ] ; then
    #Stop the container
    docker stop $containerId

    #Delete container
    docker rm $containerId
	
	echo "Successfully deleted container"
fi

#Query whether the image exists, and delete it if it exists
imageId=`docker images | grep -w $project_name  | awk '{print $3}'`

if [ "$imageId" !=  "" ] ; then
      
    #Delete the image
    docker rmi -f $imageId

    echo "Image deleted successfully"
fi


# Log in to Harbor
docker login -u zhuangjie -p gkmzjaznH21 $harbor_url

# Download Image
docker pull $imageName

# Start container
docker run -di -p $port:$port $imageName $profile

echo "Container started successfully"

Note: Harbor login information needs to be modified.
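
A manual test of the script on one of the production servers (the argument values are examples matching this tutorial's setup):

chmod +x /opt/jenkins_shell/deployCluster.sh
/opt/jenkins_shell/deployCluster.sh 192.168.87.102:85 uplog tensquare_eureka_server latest 10086 --spring.profiles.active=eureka-server1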

2. Highly available nginx

1) Install Nginx (completed)

2) Configure nginx

upstream zuulServer {
        server 192.168.87.103:10020 weight=1;
        server 192.168.87.104:10020 weight=1;
}
server {
        listen       9090;
        listen       [::]:80;
        server_name  _;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
           proxy_pass http://zuulServer/;
		}
...
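
After editing the configuration (the path /etc/nginx/nginx.conf is the usual default), validate it, reload nginx, and open the port:

nginx -t                                                      #Check the configuration syntax
systemctl reload nginx                                        #Reload nginx
firewall-cmd --zone=public --add-port=9090/tcp --permanent    #Open the listen port
firewall-cmd --reload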
