Getting started with cloud native: Kubernetes Service

A Kubernetes Pod is mortal. It is created, and it dies; it is not resurrected. Controllers such as Deployments dynamically create and destroy Pods (for example when scaling out or in, or performing rolling updates). Each Pod gets its own IP, but those IPs cannot be relied upon to stay stable over time. This raises a question: if some Pods (call them backends) provide functionality to other Pods (call them frontends), how do the frontends keep track of the backends in a Kubernetes cluster?

The answer is: Service

As shown in the figure, a Deployment named frontend creates three Pods whose labels are app=webapp, role=frontend, version=1.0.0.
A Service then records all Pods carrying the labels app=webapp and role=frontend.
Requests to the Service are load-balanced across those Pods; if Pods are replaced, the Service discovers the new ones by itself.
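The figure can be sketched in YAML as follows (the names and image are illustrative): the Deployment's Pod template carries all three labels, while the Service selects on only two of them, so bumping the version label does not break the selection:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment       # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
      role: frontend
  template:
    metadata:
      labels:
        app: webapp
        role: frontend
        version: "1.0.0"
    spec:
      containers:
        - name: webapp
          image: nginx            # placeholder image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc              # hypothetical name
spec:
  selector:                       # matches app and role, but not version
    app: webapp
    role: frontend
  ports:
    - port: 80
      targetPort: 80
```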

Types of Service

There are four types of Service in Kubernetes:

  1. ClusterIP: the default type. A virtual IP is assigned automatically and is reachable only from inside the cluster.
  2. NodePort: on top of ClusterIP, a port is bound on every node for the Service, so it can be reached at nodeIP:nodePort.
  3. LoadBalancer: on top of NodePort, an external load balancer is created via the cloud provider, which forwards requests to nodeIP:nodePort.
  4. ExternalName: maps a name outside the cluster into the cluster, so it can be addressed directly from inside.
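For example, an ExternalName Service simply maps a cluster-internal DNS name to an external hostname via a CNAME record. A minimal sketch (the service name and external host are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db              # hypothetical name
spec:
  type: ExternalName
  externalName: db.example.com   # hypothetical external host
```

Pods resolving external-db.<namespace>.svc.cluster.local receive a CNAME pointing at db.example.com; no proxying or port mapping is involved.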

kube-proxy implements the Service proxy

Every node in a Kubernetes cluster runs a kube-proxy process. kube-proxy is responsible for implementing a form of VIP (virtual IP) for Services. The iptables proxy has been the default since Kubernetes v1.2; the ipvs proxy was added later and has been generally available since v1.11.

Proxy modes

iptables

iptables is the packet filtering and forwarding component of the Linux netfilter firewall. kube-proxy configures DNAT rules in it to forward Service traffic.
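Conceptually, the rules kube-proxy writes have this shape (an illustrative fragment, not literal kube-proxy output; all addresses are made up, and the real rules go through intermediate KUBE-SERVICES/KUBE-SVC-* chains):

```
# DNAT traffic for a Service VIP to one backend Pod (illustrative only)
iptables -t nat -A PREROUTING -d 10.96.0.10/32 -p tcp --dport 8080 \
  -j DNAT --to-destination 10.244.1.5:8080
```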

ipvs

ipvs (IP Virtual Server) implements transport-layer load balancing inside the Linux kernel, on top of netfilter. Compared with iptables it scales better with large numbers of Services and supports multiple scheduling algorithms (round robin, least connections, and so on).
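kube-proxy can be switched into ipvs mode through its configuration. Assuming kube-proxy runs from a ConfigMap (as kubeadm sets it up), the relevant fragment of the KubeProxyConfiguration looks like this sketch:

```yaml
# Fragment of a KubeProxyConfiguration (kubeproxy.config.k8s.io/v1alpha1)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # "" or "iptables" selects the iptables proxy
ipvs:
  scheduler: "rr"   # round robin; other schedulers such as lc (least connection) exist
```

After changing the mode, the kube-proxy Pods must be restarted for the new rules to take effect.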

svc workflow

  1. A user sends a create-Service request to the apiserver via the kubectl command; after receiving the request, the apiserver stores the object in etcd
  2. kube-proxy watches for the etcd change (through the apiserver) and writes the corresponding rules into iptables or ipvs
  3. iptables or ipvs uses NAT and related techniques to forward VIP traffic to the real backends

A ClusterIP Service definition (service.yaml):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  selector:
      app: nginx
  ports:
    - name: http
      port: 8080
      targetPort: 8080

Create the Service

kubectl apply -f service.yaml

You can then see the newly created svc with kubectl get svc.

Use the ipvsadm -Ln command to view the VIP-to-real-server mapping.

A request to the clusterIP is thus load-balanced across the backend Pod IPs.

If access fails

==Kubernetes: handling cluster-IP access failures for a Service in IPVS proxy mode==

This is a known issue where flannel's TX checksum offload interferes with IPVS. Turn the offload off:

ethtool -K flannel.1 tx-checksum-ip-generic off

If the Service becomes reachable after running the command above, make the fix persistent with a systemd unit:

[root@k8s-node02 ~]# vi /etc/systemd/system/k8s-flannel-tx-checksum-off.service
[Unit]
Description=Turn off checksum offload on flannel.1
After=sys-devices-virtual-net-flannel.1.device

[Install]
WantedBy=sys-devices-virtual-net-flannel.1.device

[Service]
Type=oneshot
ExecStart=/sbin/ethtool -K flannel.1 tx-checksum-ip-generic off

Enable it at boot:

systemctl enable k8s-flannel-tx-checksum-off
systemctl start  k8s-flannel-tx-checksum-off

NodePort mode

Just change the type to NodePort:

apiVersion: v1
kind: Service
metadata:
  name: my-service-nodeport
spec:
  type: NodePort
  selector:
      app: nginx
  ports:
    - name: http
      port: 8080
      targetPort: 8080


The Service can now be reached from outside via externalIP:32180, which maps to port 8080 inside the cluster (32180 is the NodePort allocated here).
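The NodePort (32180 in this case) is allocated at random from the node port range, 30000-32767 by default. To pin a fixed port instead, set nodePort explicitly in the Service's ports entry (a sketch):

```yaml
  ports:
    - name: http
      port: 8080        # Service port inside the cluster
      targetPort: 8080  # container port
      nodePort: 32180   # must fall within the node port range (default 30000-32767)
```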

ingress

A Service only provides layer-4 (TCP/UDP) proxying. To route by domain name at layer 7 (HTTP), you need an ingress.

ingress-nginx

Install ingress-nginx:

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10

---

Create a service-nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

The ingress controller now has a NodePort Service.

To expose domain-name access through an ingress, first create one:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-service
              servicePort: 8080

Create an ingress using kubectl apply -f ingress.yaml.

Now, if you visit foo.bar.com:32404/test (32404 being the NodePort of the ingress-nginx Service here), the request is routed to port 8080 of my-service, and my-service then forwards it to one of the Pods via ipvs.

Implementing virtual hosts using ingress

  1. Create two Deployments
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment-01
spec:
  replicas: 2
  selector:
    matchLabels:
      app: v1
  template:
    metadata:
      labels:
        app: v1
    spec:
      containers:
      - name: zc-deploy-01
        image: zhucheng1992/myboot:1.0
        ports:
        - containerPort: 8080

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment-02
spec:
  replicas: 2
  selector:
    matchLabels:
      app: v2
  template:
    metadata:
      labels:
        app: v2
    spec:
      containers:
      - name: zc-deploy-02
        image: zhucheng1992/myboot:2.0
        ports:
        - containerPort: 8080

  2. Create two Services

apiVersion: v1
kind: Service
metadata:
  name: my-service-01
spec:
  type: ClusterIP
  selector:
      app: v1
  ports:
    - name: http
      port: 8080
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-service-02
spec:
  type: ClusterIP
  selector:
      app: v2
  ports:
    - name: http
      port: 8080
      targetPort: 8080

  3. Create two ingresses

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-01
spec:
  rules:
    - host: www.zc1.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-service-01
              servicePort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-02
spec:
  rules:
    - host: www.zc2.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-service-02
              servicePort: 8080
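To test the two virtual hosts without real DNS, point both domains at a node's IP in /etc/hosts on the client machine (the IP below is a placeholder):

```
192.168.1.10  www.zc1.com
192.168.1.10  www.zc2.com
```

Requests to www.zc1.com and www.zc2.com through the ingress controller's NodePort are then routed to my-service-01 and my-service-02 respectively.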

Tags: Kubernetes Container Cloud Native

Posted on Sun, 05 Dec 2021 14:33:46 -0500 by cuvaibhav