CentOS 8: Kubernetes Cluster Deployment and Harbor Installation and Configuration


Recently we have been following Kubernetes (k8s). Over the May 1st holiday, staying in town, we set up a k8s cluster and a Harbor registry in a test environment on VMware Workstation. We hope this walkthrough is helpful and improves everyone's understanding of K8S.


1 Preparatory phase

Harbor: the image registry

k8s-master, k8s-node01, k8s-node02: the cluster nodes, deployed with kubeadm


2 Cluster Installation IP Assignment

192.168.253.167 k8s-master
192.168.253.168 k8s-node01
192.168.253.169 k8s-node02


3 Set the system hostnames and /etc/hosts entries so the nodes can resolve each other

hostnamectl set-hostname --static k8s-master
hostnamectl set-hostname --static k8s-node01
hostnamectl set-hostname --static k8s-node02
vim /etc/hosts
192.168.253.167 k8s-master
192.168.253.168 k8s-node01
192.168.253.169 k8s-node02

Install dependencies

yum install -y vim wget net-tools git
yum install lrzsz  --nogpgcheck

Switch the firewall to iptables and flush the rules

systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save  # (RHEL/CentOS 7)

Disable the swap partition and SELinux - executed successfully

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i  's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Adjust the kernel parameters for k8s

cat > kubernetes.conf << EOF
net.bridge.bridge-nf-call-iptables=1 #Not executed
net.bridge.bridge-nf-call-ip6tables=1 #Not executed
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0 #Not executed
vm.swappiness=0 #Disable swapping; swap is used only when the system hits OOM
vm.overcommit_memory=1 #Do not check whether physical memory is sufficient
vm.panic_on_oom=0 #Do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

cp  kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p  /etc/sysctl.d/kubernetes.conf
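If sysctl reports "No such file or directory" for the two bridge keys (the lines marked "Not executed" above), the br_netfilter module is not loaded yet. A minimal fix, assuming nothing else is wrong:

```
# Load the bridge netfilter module, then re-apply the sysctl file
modprobe br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf
```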

Adjust System Time Zone - Successful Execution

#Set system time zone to China/Shanghai
```
timedatectl set-timezone Asia/Shanghai
```

#Keep the hardware clock in UTC
```
timedatectl set-local-rtc 0
```

#Restart services dependent on system time
```
systemctl restart rsyslog
systemctl restart crond
```

Shut down services not needed by the system

systemctl stop postfix && systemctl disable postfix #unexecuted

Set up rsyslogd and systemd-journald - execution failed

mkdir /var/log/journal #directory for persistent log storage
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
#Persist to disk
Storage=persistent
#Compress the history log
Compress=yes
SyncIntervalSec=5m
RateLimitBurst=1000
#Maximum occupied space 10G
SystemMaxUse=10G
#Maximum size of a single log file: 200M
SystemMaxFileSize=200M
#Keep logs for 2 weeks
MaxRetentionSec=2week
#Do not forward logs to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald

Upgrade Kernel Version: - Not Executed

rpm -Uvh https://mirrors.tuna.tsinghua.edu.cn/elrepo/kernel/el8/x86_64/RPMS/elrepo-release-8.1-1.el8.elrepo.noarch.rpm
#After installation, check whether the menuentry for the new kernel in /boot/grub2/grub.cfg includes an initrd16 line; if not, reinstall.
yum --enablerepo=elrepo-kernel install -y kernel-lt
#Set the new kernel as the default boot entry
grub2-set-default "CentOS Linux (4.4.182-1.el7.elrepo.x86_64) 7 (Core)"
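To find the exact menuentry title to pass to grub2-set-default, you can first list the entries GRUB2 knows about; a small sketch, assuming a BIOS system whose config lives at /boot/grub2/grub.cfg:

```
# Print the kernel menu entry titles known to GRUB2
awk -F\' '/^menuentry / {print $2}' /boot/grub2/grub.cfg
# After grub2-set-default, verify which entry is saved as the default
grub2-editenv list
```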


Prerequisites for kube-proxy to use IPVS - executed successfully

Install the ipset package on all nodes; also install ipvsadm (optional) to make viewing IPVS rules easier:
yum install ipset -y
yum install ipvsadm -y

modprobe br_netfilter
cat  >/etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod |grep -e ip_vs -e nf_conntrack_ipv4
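Note: on kernels 4.19 and later the nf_conntrack_ipv4 module was merged into nf_conntrack, so the last modprobe can fail there. If it does, substitute the module name:

```
# For kernels >= 4.19, replace "modprobe -- nf_conntrack_ipv4" in ipvs.modules with:
modprobe -- nf_conntrack
```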

Install Docker - executed successfully

# step 1: Install some necessary system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: Add Software Source Information
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: Install containerd.io first (the el7 RPM is used because the version in the default CentOS 8 repos conflicts), then Docker CE
yum install -y containerd.io
dnf install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
sudo yum -y install docker-ce
# Step 4: Turn on the Docker service
sudo service docker start && systemctl enable docker

#Note:
#The official repo enables only the latest stable packages by default; other channels can be enabled by editing the repo file. For example, the test (pre-release) channel is disabled by default and can be enabled as follows; other channels work the same way.
# vim /etc/yum.repos.d/docker-ce.repo
# Change enabled=0 to enabled=1 under [docker-ce-test]
#
#Install the specified version of Docker-CE:
# Step 1: Find the version of Docker-CE:
# yum list docker-ce.x86_64 --showduplicates | sort -r
#   Loading mirror speeds from cached hostfile
#   Loaded plugins: branch, fastestmirror, langpacks
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            docker-ce-stable
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            @docker-ce-stable
#   docker-ce.x86_64            17.03.0.ce-1.el7.centos            docker-ce-stable
#   Available Packages
# Step 2: Install the specified version of Docker-CE (VERSION is, for example, 17.03.0.ce-1.el7.centos from the list above)
# sudo yum -y install docker-ce-[VERSION]
docker version

Create the /etc/docker directory

mkdir /etc/docker

cat  >/etc/docker/daemon.json << EOF
{
 "exec-opts": ["native.cgroupdriver=systemd"],
 "log-driver": "json-file",
 "log-opts": {
  "max-size": "100m"
 }
}
EOF
mkdir -p /etc/systemd/system/docker.service.d

Restart Docker Service

systemctl daemon-reload && systemctl restart docker && systemctl enable docker

Install kubeadm (on the master and all nodes)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
Note: the mirror may lag behind the official repo, so the GPG check can fail; if it does, install with: yum install -y --nogpgcheck kubelet kubeadm kubectl
systemctl enable kubelet.service
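If you want the installed packages to match the cluster version used below (v1.18.2), they can be pinned instead of taking the latest; a hedged example, assuming those package versions exist in the mirror:

```
# Install a specific kubelet/kubeadm/kubectl version instead of the latest
yum install -y --nogpgcheck kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2
systemctl enable kubelet && systemctl start kubelet
```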

Initialize the primary node:

View the list of required image versions:
kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-proxy:v1.18.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

Pull the images:

With a VPN/proxy that can reach k8s.gcr.io, pull the images directly (an alternative using a domestic mirror registry is sketched after this list):
docker  pull k8s.gcr.io/kube-apiserver:v1.18.2
docker  pull k8s.gcr.io/kube-controller-manager:v1.18.2
docker  pull k8s.gcr.io/kube-scheduler:v1.18.2
docker  pull  k8s.gcr.io/kube-proxy:v1.18.2
docker pull  k8s.gcr.io/pause:3.2
docker pull  k8s.gcr.io/etcd:3.4.3-0
docker pull  k8s.gcr.io/coredns:1.6.7
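Without a VPN, a common workaround is to pull the same images from a publicly reachable mirror registry and retag them to the k8s.gcr.io names that kubeadm expects. A sketch, assuming the registry.aliyuncs.com/google_containers mirror carries these tags:

```
#!/bin/bash
# Pull kubeadm's required images from a mirror registry and retag them as k8s.gcr.io/*
MIRROR=registry.aliyuncs.com/google_containers
for img in kube-apiserver:v1.18.2 kube-controller-manager:v1.18.2 kube-scheduler:v1.18.2 \
           kube-proxy:v1.18.2 pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
  docker pull ${MIRROR}/${img}
  docker tag  ${MIRROR}/${img} k8s.gcr.io/${img}
  docker rmi  ${MIRROR}/${img}
done
```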

Save the images:

docker save k8s.gcr.io/kube-proxy -o kube-proxy.tar
docker save k8s.gcr.io/kube-apiserver -o kube-apiserver.tar
docker save k8s.gcr.io/kube-scheduler -o kube-scheduler.tar
docker save k8s.gcr.io/kube-controller-manager -o kube-kube-controller-manager.tar
docker save k8s.gcr.io/pause -o pause.tar
docker save k8s.gcr.io/etcd -o etcd.tar
docker save k8s.gcr.io/coredns -o coredns.tar

docker save quay.io/coreos/flannel  -o flannel.tar
vim load-images.sh
#!/bin/bash
ls /root/kubeadm-basic.images > /tmp/images_list.txt
cd /root/kubeadm-basic.images
for i in $( cat /tmp/images_list.txt )
do
  docker load -i $i
done
rm -rf /tmp/images_list.txt
chmod a+x load-images.sh
bash load-images.sh

Initialize the primary node:

Create the init configuration file:
kubeadm config print init-defaults > kubeadm-config.yaml

Edit the configuration file:
vim kubeadm-config.yaml
localAPIEndpoint:
  advertiseAddress: 192.168.253.167  # change advertiseAddress to the master's address

kubernetesVersion: v1.18.2
podSubnet: "10.244.0.0/16"

#Append a KubeProxyConfiguration block to switch kube-proxy to ipvs mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
   SupportIPVSProxyMode: true
mode: ipvs
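Putting these edits together, the relevant parts of kubeadm-config.yaml end up roughly as below (a sketch; every field not edited above keeps the kubeadm default):

```
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.253.167   # master address
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
networking:
  podSubnet: "10.244.0.0/16"          # must match the flannel network
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
```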

Initialize the cluster (Docker must be running):
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
Error message 1:
A warning that Docker is not enabled at boot; fix it as prompted:
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
systemctl enable docker.service

Error message 2: the --experimental-upload-certs flag has been renamed to --upload-certs in newer kubeadm versions.
# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
Error: unknown flag: --experimental-upload-certs

Error message 3: the Docker cgroup driver is cgroupfs; the recommended driver is systemd

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ 

Add the following configuration
#vim /etc/docker/daemon.json
{"registry-mirrors": ["https://wv1h618x.mirror.aliyuncs.com"],"exec-opts": ["native.cgroupdriver=systemd"],"log-driver": "json-file","log-opts": {"max-size": "100m"}}
Restart Docker:
#systemctl daemon-reload && systemctl restart docker && systemctl enable docker 


# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
W0502 20:39:04.447334   29833 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.253.167]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.253.167 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.253.167 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0502 20:39:18.493122   29833 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0502 20:39:18.502456   29833 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.032749 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
e04dd9c52332d23bab221c18d59016fb3789d7f39c8529681b7cc476707c1380
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.253.167:6443 --token abcdef.0123456789abcdef \
   --discovery-token-ca-cert-hash sha256:289495fa530177884fc6606727728625102df14b8dd042586cb8de61051da0e8
   
Initialization succeeded. Note that the output above also tells us the follow-up commands to run, including the kubeadm join command for adding worker nodes:
   
Configure kubectl on the master node (the worker nodes are joined later with kubeadm join):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If the generated configuration gets into a bad state, you can reset and start over:
kubeadm reset

View the generated certificates:
[root@k8s-master ~]# cd /etc/kubernetes/pki/
[root@k8s-master pki]# ls
apiserver.crt              apiserver-etcd-client.key  apiserver-kubelet-client.crt  ca.crt  etcd                front-proxy-ca.key      front-proxy-client.key  sa.pub
apiserver-etcd-client.crt  apiserver.key              apiserver-kubelet-client.key  ca.key  front-proxy-ca.crt  front-proxy-client.crt  sa.key

View node:
[root@k8s-master pki]# kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   6m33s   v1.18.2
Note: the master shows NotReady because the pod network is not yet installed (flannel is missing). Next, deploy flannel.

Network flannel deployment:

Download the latest kube-flannel.yml manifest for the flannel network plugin:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f kube-flannel.yml

[root@k8s-master flannel]# kubectl create -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

[root@k8s-master flannel]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   36m   v1.18.2
[root@k8s-master flannel]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-4pzqn             1/1     Running   0          35m
coredns-66bff467f8-bw2b4             1/1     Running   0          35m
etcd-k8s-master                      1/1     Running   0          36m
kube-apiserver-k8s-master            1/1     Running   0          36m
kube-controller-manager-k8s-master   1/1     Running   0          36m
kube-flannel-ds-amd64-kf7j7          1/1     Running   0          18m
kube-proxy-g2hlg                     1/1     Running   0          35m
kube-scheduler-k8s-master            1/1     Running   0          36m

View POD configuration:
kubectl describe pod kube-flannel-ds-amd64-k92bk -n kube-system

Join other nodes:

The preparation steps are the same as for the master. Install Docker and kubeadm/kubelet on each node, then add it to the cluster with the join command generated during init:

kubeadm join 192.168.253.167:6443 --token abcdef.0123456789abcdef \
   --discovery-token-ca-cert-hash sha256:289495fa530177884fc6606727728625102df14b8dd042586cb8de61051da0e8

# kubectl get pod -n kube-system -o wide
# kubectl get node
# kubectl get pod -n kube-system
# kubectl get pod -n kube-system -w
Initial error (the flannel image has not been pulled yet):
kube-flannel-ds-amd64-k92bk          0/1     Init:0/1                0          4m3s
kube-flannel-ds-amd64-k92bk          0/1     Init:ImagePullBackOff   0          5m15s
kube-flannel-ds-amd64-k92bk          0/1     Init:ErrImagePull       0          6m39s
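If a node stays in Init:ImagePullBackOff, the flannel image can be loaded onto that node manually; a sketch (the tag must match the one referenced in kube-flannel.yml; v0.12.0-amd64 here is an assumption):

```
# Option 1: load the image saved earlier on the master
docker load -i flannel.tar
# Option 2: pull it directly if the node can reach quay.io
docker pull quay.io/coreos/flannel:v0.12.0-amd64
```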

2. Harbor Installation

Harbor installation and registry configuration. Harbor is an open-source image registry; website: https://goharbor.io/

Initial settings:
hostnamectl set-hostname --static k8s-habor
yum install -y vim wget net-tools git
yum install lrzsz  --nogpgcheck
systemctl stop firewalld && systemctl disable firewalld
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i  's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Edit docker configuration:
vim /etc/docker/daemon.json

master:
cat  >/etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://y7guluho.mirror.aliyuncs.com"],
"insecure-registries": ["https://hub.51geeks.com"]
}
EOF  

nodes:
cat  >/etc/docker/daemon.json << EOF
{
 "exec-opts": ["native.cgroupdriver=systemd"],
 "log-driver": "json-file",
 "log-opts": {
  "max-size": "100m"
 },
 "insecure-registries": ["https://hub.51geeks.com"]
}
EOF
The insecure-registries entry must be added on all three cluster servers, and Docker restarted afterwards (see below).
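After editing daemon.json on each machine, restart Docker so the insecure-registries entry takes effect:

```
systemctl daemon-reload && systemctl restart docker
```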

Preparations:

Environment and software versions (with download addresses):

System          CentOS 8.1
Docker          19.03.8
docker-compose  1.25.5   https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)
Harbor          1.10.2   https://github.com/goharbor/harbor/releases/download/v1.10.2/harbor-offline-installer-v1.10.2.tgz

Install docker

$ yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine            
$ yum install -y yum-utils device-mapper-persistent-data lvm2
$ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ yum-config-manager --enable docker-ce-edge
$ yum install -y docker-ce
$ systemctl start docker
$ systemctl enable docker

Install docker-compose

Installation Help: https://docs.docker.com/compose/install/

https://github.com/docker/compose/

sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
docker-compose --version

Install harbor

wget -c  https://github.com/goharbor/harbor/releases/download/v1.10.2/harbor-offline-installer-v1.10.2.tgz
tar zxvf harbor-offline-installer-v1.10.2.tgz
cd harbor

Configure harbor.yml


$ vim harbor.yml
hostname: hub.51geeks.com
http:
  port: 80
https:
  port: 443
  certificate: /data/cert/server.crt
  private_key: /data/cert/server.key
harbor_admin_password: Harbor12345   # admin password for the web UI
database:
  password: root123
data_volume: /data

Create a certificate:

Create a private key:
openssl genrsa -des3 -out server.key 2048
Create a certificate request:
openssl req -new -key server.key -out server.csr
Back up the private key:
cp server.key server.key.org
Remove the passphrase from the private key:
openssl rsa -in server.key.org -out server.key
Certificate signature:
openssl x509 -req -days 365 -in server.csr -signkey  server.key -out server.crt
chmod a+x *
mkdir /data/cert
chmod -R 777 /data/cert
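Since the certificate is self-signed, an alternative to listing the registry under insecure-registries is to make each Docker client trust the certificate explicitly. A sketch (paths are assumptions; run on every node that pulls from the registry):

```
# Trust the Harbor certificate on a Docker client (alternative to insecure-registries)
mkdir -p /etc/docker/certs.d/hub.51geeks.com
scp root@192.168.253.170:/data/cert/server.crt /etc/docker/certs.d/hub.51geeks.com/ca.crt
systemctl restart docker
```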

Add host entries so all machines (1 master, 2 nodes, 1 Harbor host) can resolve each other; /etc/hosts should contain:
echo "192.168.253.170 hub.51geeks.com" >> /etc/hosts
192.168.253.167 k8s-master
192.168.253.168 k8s-node01
192.168.253.169 k8s-node02
192.168.253.170 hub.51geeks.com

Run the Harbor installer

$ ./install.sh

[root@k8s-habor harbor]# ./install.sh

[Step 0]: checking if docker is installed ...

Note: docker version: 19.03.8

[Step 1]: checking docker-compose is installed ...

Note: docker-compose version: 1.25.5

[Step 2]: loading Harbor images ...
ad1dca7cdecb: Loading layer [==================================================>]   34.5MB/34.5MB
fe0efe3b32dc: Loading layer [==================================================>]  63.56MB/63.56MB
5504ea8a1c89: Loading layer [==================================================>]  58.39MB/58.39MB
e5fe51919fa7: Loading layer [==================================================>]  5.632kB/5.632kB
5591c247d2e6: Loading layer [==================================================>]  2.048kB/2.048kB
db6a70d4a66e: Loading layer [==================================================>]   2.56kB/2.56kB
a898589079d4: Loading layer [==================================================>]   2.56kB/2.56kB
a45af9651ff3: Loading layer [==================================================>]   2.56kB/2.56kB
be9c1b049bcc: Loading layer [==================================================>]  10.24kB/10.24kB
Loaded image: goharbor/harbor-db:v1.10.2
346fb2bd57a4: Loading layer [==================================================>]  8.435MB/8.435MB
2e3e5d2fc1dd: Loading layer [==================================================>]  6.239MB/6.239MB
ef4f6d3760d4: Loading layer [==================================================>]  16.04MB/16.04MB
c72e6e471644: Loading layer [==================================================>]  28.25MB/28.25MB
8ef2ab5918ad: Loading layer [==================================================>]  22.02kB/22.02kB
8c6f27a03a6c: Loading layer [==================================================>]  50.52MB/50.52MB
Loaded image: goharbor/notary-server-photon:v1.10.2
6d0fd267be6a: Loading layer [==================================================>]  115.2MB/115.2MB
cc6a0cb3722a: Loading layer [==================================================>]  12.14MB/12.14MB
2df571d6ea95: Loading layer [==================================================>]  3.072kB/3.072kB
9971e5655191: Loading layer [==================================================>]  49.15kB/49.15kB
10c405f9f0e2: Loading layer [==================================================>]  3.584kB/3.584kB
6861c00be6c7: Loading layer [==================================================>]  13.02MB/13.02MB
Loaded image: goharbor/clair-photon:v1.10.2
1826656409e9: Loading layer [==================================================>]  10.28MB/10.28MB
8cdf4e864764: Loading layer [==================================================>]  7.697MB/7.697MB
15824ca72188: Loading layer [==================================================>]  223.2kB/223.2kB
16130654d1d1: Loading layer [==================================================>]  195.1kB/195.1kB
f3ed25db3f03: Loading layer [==================================================>]  15.36kB/15.36kB
3580b56fee01: Loading layer [==================================================>]  3.584kB/3.584kB
Loaded image: goharbor/harbor-portal:v1.10.2
a6d6e26561c2: Loading layer [==================================================>]  12.21MB/12.21MB
86ec36cec073: Loading layer [==================================================>]   42.5MB/42.5MB
a834e5c5df07: Loading layer [==================================================>]  5.632kB/5.632kB
d74d9eba8546: Loading layer [==================================================>]  40.45kB/40.45kB
6d5eed6f3419: Loading layer [==================================================>]   42.5MB/42.5MB
484994b6bc3f: Loading layer [==================================================>]   2.56kB/2.56kB
Loaded image: goharbor/harbor-core:v1.10.2
8b67d91d471e: Loading layer [==================================================>]  12.21MB/12.21MB
2584449c95d0: Loading layer [==================================================>]  49.37MB/49.37MB
Loaded image: goharbor/harbor-jobservice:v1.10.2
b23fa00ea843: Loading layer [==================================================>]  8.441MB/8.441MB
b2c0f9d70915: Loading layer [==================================================>]  3.584kB/3.584kB
b503c86a04d4: Loading layer [==================================================>]  21.76MB/21.76MB
b360fa5431c1: Loading layer [==================================================>]  3.072kB/3.072kB
eb575ebe03ac: Loading layer [==================================================>]  8.662MB/8.662MB
80fb2b0f0315: Loading layer [==================================================>]  31.24MB/31.24MB
Loaded image: goharbor/harbor-registryctl:v1.10.2
1358663a68ec: Loading layer [==================================================>]  82.23MB/82.23MB
711a7d4ecee3: Loading layer [==================================================>]  3.072kB/3.072kB
5bb647da1c5e: Loading layer [==================================================>]   59.9kB/59.9kB
57ea330779ba: Loading layer [==================================================>]  61.95kB/61.95kB
Loaded image: goharbor/redis-photon:v1.10.2
dd582a00d0e4: Loading layer [==================================================>]  10.28MB/10.28MB
Loaded image: goharbor/nginx-photon:v1.10.2
f4ce9d4c5979: Loading layer [==================================================>]   8.44MB/8.44MB
4df17639d73c: Loading layer [==================================================>]   42.3MB/42.3MB
06a92309fcf7: Loading layer [==================================================>]  3.072kB/3.072kB
6961179c06b3: Loading layer [==================================================>]  3.584kB/3.584kB
24058aa4795e: Loading layer [==================================================>]  43.12MB/43.12MB
Loaded image: goharbor/chartmuseum-photon:v1.10.2
28bdd74b7611: Loading layer [==================================================>]  49.82MB/49.82MB
312844c67ef0: Loading layer [==================================================>]  3.584kB/3.584kB
97ff7939d09c: Loading layer [==================================================>]  3.072kB/3.072kB
fe1ca6ca62b1: Loading layer [==================================================>]   2.56kB/2.56kB
807185e8884e: Loading layer [==================================================>]  3.072kB/3.072kB
7014ac08f821: Loading layer [==================================================>]  3.584kB/3.584kB
b9a09e8231aa: Loading layer [==================================================>]  12.29kB/12.29kB
Loaded image: goharbor/harbor-log:v1.10.2
5fc142634b19: Loading layer [==================================================>]  8.441MB/8.441MB
6d25b55ca036: Loading layer [==================================================>]  3.584kB/3.584kB
470e0bc7c886: Loading layer [==================================================>]  3.072kB/3.072kB
6deec48d670d: Loading layer [==================================================>]  21.76MB/21.76MB
4b0f50c1f9a2: Loading layer [==================================================>]  22.59MB/22.59MB
Loaded image: goharbor/registry-photon:v1.10.2
7c0c9681bb5c: Loading layer [==================================================>]  14.61MB/14.61MB
f8f5185485f0: Loading layer [==================================================>]  28.25MB/28.25MB
7aa4e440ddd4: Loading layer [==================================================>]  22.02kB/22.02kB
1bf5d3e32ab4: Loading layer [==================================================>]  49.09MB/49.09MB
Loaded image: goharbor/notary-signer-photon:v1.10.2
e5f331e45d1c: Loading layer [==================================================>]  337.3MB/337.3MB
e0d97714dc5d: Loading layer [==================================================>]  135.2kB/135.2kB
Loaded image: goharbor/harbor-migrator:v1.10.2
6b5627387d23: Loading layer [==================================================>]  77.91MB/77.91MB
6d898f9318cc: Loading layer [==================================================>]  48.28MB/48.28MB
3e9ed699ea3e: Loading layer [==================================================>]   2.56kB/2.56kB
3bc549d11dcc: Loading layer [==================================================>]  1.536kB/1.536kB
74fd1d3f8fa2: Loading layer [==================================================>]  157.2kB/157.2kB
547fd9c0c9c5: Loading layer [==================================================>]   2.81MB/2.81MB
Loaded image: goharbor/prepare:v1.10.2
9d7087c5277a: Loading layer [==================================================>]  8.441MB/8.441MB
c0f8862cab3f: Loading layer [==================================================>]   9.71MB/9.71MB
a9e3fbb9bcfc: Loading layer [==================================================>]   9.71MB/9.71MB
Loaded image: goharbor/clair-adapter-photon:v1.10.2
[Step 3]: preparing environment ...
[Step 4]: preparing harbor configs ...
prepare base dir is set to /usr/local/src/harbor
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /secret/keys/secretkey
Generated certificate, key file: /secret/core/private_key.pem, cert file: /secret/registry/root.crt
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir

[Step 5]: starting Harbor ...
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-portal ... done
Creating registry      ... done
Creating harbor-db     ... done
Creating registryctl   ... done
Creating redis         ... done
Creating harbor-core   ... done
Creating nginx             ... done
Creating harbor-jobservice ... done
✔ ----Harbor has been installed and started successfully.----

Startup automatically creates the container services (nginx, db, and so on):

$ docker-compose ps         
[root@k8s-habor harbor]# docker-compose ps    
     Name                     Command                  State                          Ports                  
---------------------------------------------------------------------------------------------------------------
harbor-core         /harbor/harbor_core              Up (healthy)                                              
harbor-db           /docker-entrypoint.sh            Up (healthy)   5432/tcp                                  
harbor-jobservice   /harbor/harbor_jobservice  ...   Up (healthy)                                              
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up (healthy)   127.0.0.1:1514->10514/tcp                  
harbor-portal       nginx -g daemon off;             Up (healthy)   8080/tcp                                  
nginx               nginx -g daemon off;             Up (healthy)   0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp
redis               redis-server /etc/redis.conf     Up (healthy)   6379/tcp                                  
registry            /home/harbor/entrypoint.sh       Up (healthy)   5000/tcp                                  
registryctl         /home/harbor/start.sh            Up (healthy)            

Log in to the web interface at https://hub.51geeks.com

Username: admin   Password: Harbor12345

Log in to Harbor from k8s-node01:

[root@k8s-node01 ~]# docker login https://hub.51geeks.com 
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

[root@k8s-master ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
nginx                                latest              602e111c06b6        9 days ago          127MB
httpd                                latest              b2c2ab6dcf2e        10 days ago         166MB

Remove an image:
docker rmi -f hub.51geeks.com/library/nginx:latest

Push an image with Docker:
docker tag nginx hub.51geeks.com/library/nginx
docker push hub.51geeks.com/library/nginx

Tag an image for a project:
docker tag SOURCE_IMAGE[:TAG] hub.51geeks.com/library/IMAGE[:TAG]

Push an image to the current project:
docker push hub.51geeks.com/library/IMAGE[:TAG]

Using the K8S cluster with Harbor:
kubectl run --help
kubectl apply -f nginx-deployment.yaml   # a sample nginx-deployment.yaml sketch follows below
View pods, deployments, and replica sets:
kubectl get pods
kubectl get deployment
kubectl get rs
kubectl get nodes   # check cluster node information
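The nginx-deployment.yaml referenced above is not shown in the original; a minimal sketch that uses the nginx image pushed to Harbor earlier could look like this (the replica count is an assumption):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: hub.51geeks.com/library/nginx:latest   # the image pushed to Harbor above
        ports:
        - containerPort: 80
```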

Port mapping exposes services externally. In Kubernetes a Pod has its own life cycle; if a Node fails, the ReplicationController or ReplicaSet migrates its Pods to other nodes to keep the desired state.
#kubectl expose deployment nginx-deployment --port=80 --type=LoadBalancer
View the service status (to see which port the service is mapped to): kubectl get services
[root@k8s-master ~]# kubectl get service
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP      10.96.0.1       <none>        443/TCP        19h
nginx-deployment   LoadBalancer   10.102.49.213   <pending>     80:32682/TCP   89m

After you create the deployment, the containers are running, but by default they can only reach each other inside the cluster. There are several ways to expose a service externally (a NodePort example is sketched after this list):

ClusterIP: the default; the service is exposed on a cluster IP and can only be reached from inside the cluster.
NodePort: uses NAT to expose the service on a fixed port of every Node; external clients access it via <NodeIP>:<NodePort>.
LoadBalancer: exposes the service through an external load-balancing facility.
ExternalName: maps the service to a DNS name; provided by kube-dns since version 1.7.
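For example, a NodePort Service for the nginx deployment above could look like this (a sketch; the nodePort value is an assumption):

```
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx              # matches the pods created by nginx-deployment
  ports:
  - port: 80                # cluster-internal service port
    targetPort: 80          # container port
    nodePort: 32080         # exposed on every node; must be in the 30000-32767 range
```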

[root@k8s-master ~]# kubectl get  pod  -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
nginx-deployment-7789b77975-fcbmg   1/1     Running   0          2m31s   10.244.2.6   k8s-node02   <none>           <none>
nginx-deployment-7789b77975-m85sx   1/1     Running   0          2m31s   10.244.2.7   k8s-node02   <none>           <none>

[root@k8s-node02 ~]#  docker ps -a |grep nginx

delete pod:
kubectl delete pod nginx-deployment-7789b77975-fcbmg
[root@k8s-master ~]# kubectl get  svc
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP      10.96.0.1       <none>        443/TCP        17h
nginx-deployment   LoadBalancer   10.102.49.213   <pending>     80:32682/TCP   10m

ipvsadm  -Ln


Use of K8S:

Creating pods from YAML files (K8S v1.18)

vim tomcat.yaml
apiVersion: v1
kind: Pod
metadata:                      # metadata information
  name: tomcat-c               # the name shown by kubectl get pods and inside the container
  labels:                      # labels, usable as query conditions: kubectl get pods -l
    app: tomcat
    node: devops-103
spec:                          # specification
  containers:                  # container list
  - name: tomcat               # container name
    image: docker.io/tomcat    # image used
    ports:
    - containerPort: 8080
    env:                       # environment variables; log in to the container and GREETING will be "hello from the environment"
    - name: GREETING
      value: "hello from the environment"
Create pod:
kubectl create -f tomcat.yaml
kubectl get pods
kubectl get nodes
kubectl scale deployments/tomcat --replicas=3
kubectl get deployments
kubectl get pods
kubectl describe pod tomcat-858b8c476d-cfrtt
kubectl scale deployments/tomcat --replicas=2
kubectl describe deployment
kubectl get pods -l app=tomcat
kubectl get services -l app=tomcat
kubectl label --overwrite  pod tomcat-858b8c476d-vnm98 node=devops-102
# --overwrite is needed here because the label was set incorrectly before
kubectl describe pods tomcat-858b8c476d-vnm98
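Note that kubectl scale deployments/tomcat above only works if tomcat exists as a Deployment; the bare Pod defined in tomcat.yaml cannot be scaled that way. A minimal Deployment sketch (an assumption, reusing the same image) would be:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: docker.io/tomcat
        ports:
        - containerPort: 8080
```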


[root@k8s-master ~]# kubectl describe pods nginx
Name:         nginx-deployment-7789b77975-m85sx
Namespace:    default
Priority:     0
Node:         k8s-node02/192.168.253.169
Start Time:   Sun, 03 May 2020 14:22:25 +0800
Labels:       app=nginx            #When creating a deployment, kubectl automatically tags us with app=nginx.
             pod-template-hash=7789b77975
Annotations:  <none>
Status:       Running
IP:           10.244.2.7
IPs:
 IP:           10.244.2.7
Controlled By:  ReplicaSet/nginx-deployment-7789b77975
Containers:
 nginx:
   Container ID:   docker://be642684912e5662a8bdd3b10e5e1be28936c045ea1df07421dac106b07cbef1
   Image:          hub.51geeks.com/library/nginx:latest
   Image ID:       docker-pullable://hub.51geeks.com/library/nginx@sha256:cccef6d6bdea671c394956e24b0d0c44cd82dbe83f543a47fdc790fadea48422
   Port:           80/TCP
   Host Port:      0/TCP
   State:          Running
     Started:      Sun, 03 May 2020 14:22:40 +0800
   Ready:          True
   Restart Count:  0
   Environment:    <none>
   Mounts:
     /var/run/secrets/kubernetes.io/serviceaccount from default-token-q9jpf (ro)
Conditions:
 Type              Status
 Initialized       True
 Ready             True
 ContainersReady   True
 PodScheduled      True
Volumes:
 default-token-q9jpf:
   Type:        Secret (a volume populated by a Secret)
   SecretName:  default-token-q9jpf
   Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

Name:         nginx-deployment-7789b77975-n5zpc
Namespace:    default
Priority:     0
Node:         k8s-node01/192.168.253.168
Start Time:   Sun, 03 May 2020 14:33:19 +0800
Labels:       app=nginx
             pod-template-hash=7789b77975
Annotations:  <none>
Status:       Running
IP:           10.244.1.2
IPs:
 IP:           10.244.1.2
Controlled By:  ReplicaSet/nginx-deployment-7789b77975
Containers:
 nginx:
   Container ID:   docker://ec776f43a83d474be5ebfb994532731509727344de57e814464f69b78dbeb30a
   Image:          hub.51geeks.com/library/nginx:latest
   Image ID:       docker-pullable://hub.51geeks.com/library/nginx@sha256:cccef6d6bdea671c394956e24b0d0c44cd82dbe83f543a47fdc790fadea48422
   Port:           80/TCP
   Host Port:      0/TCP
   State:          Running
     Started:      Sun, 03 May 2020 14:34:36 +0800
   Ready:          True
   Restart Count:  0
   Environment:    <none>
   Mounts:
     /var/run/secrets/kubernetes.io/serviceaccount from default-token-q9jpf (ro)
Conditions:
 Type              Status
 Initialized       True
 Ready             True
 ContainersReady   True
 PodScheduled      True
Volumes:
 default-token-q9jpf:
   Type:        Secret (a volume populated by a Secret)
   SecretName:  default-token-q9jpf
   Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>



Posted on Tue, 05 May 2020 21:33:41 -0400 by evildj