Kubernetes in Action: Cluster Services (1)

The Kubernetes Service is a resource that provides a single, constant point of access to a group of pods providing the same functionality. While a service exists, its IP address and port do not change. Clients open connections to that IP address and port, and the connections are routed to one of the pods backing the service. This way, clients don't need to know the address of each individual pod, and the pods can be created or removed in the cluster at any time.

Explaining services with an example

  • External clients connect to the front-end pods without caring how many of them there are.
  • Front-end pods need to connect to a back-end database. Because the database runs in a pod, it may be moved around the cluster, causing its IP address to change. When the back-end database is moved, the front-end pods should not need to be reconfigured.

By creating a service for the front-end pods and configuring it to be accessible from outside the cluster, you expose a single, constant IP address through which external clients can connect to the pods. Likewise, you can create a service for the back-end database pod and give it a stable address. Although the pod's IP address may change, the service's IP address stays fixed. Additionally, front-end pods can find the back-end service through environment variables or DNS, using the service's name.

Create services through kubectl expose

Pod

kubectl run kubia --image=luksa/kubia --port=8080
pod/kubia created

Expose

kubectl create deploy kubia --image=luksa/kubia --replicas=3
deployment.apps/kubia created

kubectl get pod
kubia-6c68d68756-44kvt      1/1     Running             0          5m31s
kubia-6c68d68756-pp6pl      0/1     ContainerCreating   0          4s
kubia-6c68d68756-r8kqr      1/1     Running             0          4s

kubectl get deploy
kubia   3/3     3            3           10m

kubectl expose deploy kubia --type=NodePort --name kubia-http
service/kubia-http exposed

kubectl get svc kubia-http
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)        AGE
kubia-http     NodePort       10.97.10.52    <none>            80:32767/TCP   5m38s

Service

kubia-svc.yaml
```
apiVersion: v1
kind: Service
metadata:
  name: kubia-svc
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia
```
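Assuming the manifest is posted with kubectl apply, the service gets a cluster IP assigned by the cluster; output along these lines would be expected (10.96.232.104 is the address used in the curl test below):

kubectl apply -f kubia-svc.yaml
service/kubia-svc created

kubectl get svc kubia-svc
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubia-svc   ClusterIP   10.96.232.104   <none>        80/TCP    6s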

Result

Execute commands remotely in a running container
kubectl exec kubia-6c68d68756-44kvt -- curl -s http://10.96.232.104
You've hit kubia-6c68d68756-44kvt


Configure session affinity on services

If you want all requests from a particular client to be routed to the same pod every time, set the service's sessionAffinity property to ClientIP (instead of None, which is the default):

apiVersion: v1
kind: Service 
spec: 
  sessionAffinity: ClientIP
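The fragment above only shows the relevant field; a complete manifest might look like the following sketch (the name and selector are assumptions for illustration):

apiVersion: v1
kind: Service
metadata:
  name: kubia-affinity   # hypothetical name
spec:
  sessionAffinity: ClientIP   # all requests from one client IP go to the same pod
  selector:
    app: kubia
  ports:
  - port: 80
    targetPort: 8080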

Kubernetes supports only two forms of service session affinity: None and ClientIP. You may be surprised that a cookie-based session affinity option is not supported, but keep in mind that Kubernetes services do not operate at the HTTP level. Services handle TCP and UDP packets and do not care about their payload. Because cookies are part of the HTTP protocol, services know nothing about them, which explains why session affinity cannot be based on cookies.

Exposing multiple ports to the same service

cat many-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: many-port
  labels:
    app: kubia
spec:
  containers:
  - name: kubia
    image: luksa/kubia
    ports:
    - name: http
      containerPort: 8080
    - name: https
      containerPort: 8443

cat many-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
  selector:
    app: kubia
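A useful variation: targetPort can reference the name of a pod port instead of its number, so the service keeps working even if the pod's port numbers change later. A sketch of the same service using the port names from many-pod.yaml:

apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  ports:
  - name: http
    port: 80
    targetPort: http    # resolves to the container port named "http" (8080)
  - name: https
    port: 443
    targetPort: https   # resolves to the container port named "https" (8443)
  selector:
    app: kubia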

Service Discovery

By creating services, you can now access pods through a single, stable IP address. This address remains unchanged for the whole lifetime of the service.

Discover services through environment variables

kubectl exec many-port -- env

MYAPP_SERVICE_HOST=10.97.156.36
MYAPP_SERVICE_PORT=80
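Inside a pod, an application or a quick shell test can use these variables instead of a hard-coded address (the variable names are derived from the service name: uppercased, with dashes turned into underscores). For example, assuming the MYAPP service from the output above:

kubectl exec many-port -- sh -c 'curl -s http://$MYAPP_SERVICE_HOST:$MYAPP_SERVICE_PORT'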

Discover services through DNS

Kubernetes runs its own DNS server, and the other pods in the cluster are configured to use it (Kubernetes achieves this by modifying each container's /etc/resolv.conf).

kubectl exec -it many-port -- /bin/sh

cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
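Thanks to the search domains listed in resolv.conf, a service can be reached through its fully qualified domain name or progressively shorter forms. From inside the pod's shell, for example (assuming the kubia service in the default namespace):

curl -s http://kubia.default.svc.cluster.local
curl -s http://kubia.default
curl -s http://kubia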

Why you can't ping the service IP

curl-ing the service works, but pinging it doesn't. That's because the service's cluster IP is a virtual IP, which only has meaning in combination with a service port.

Connecting to services living outside the cluster

Before going into how to do this, a word about how services work. Services are not connected to pods directly. Instead, a resource sits in between: the Endpoints resource.

kubectl describe svc kubia-svc

TargetPort: 8080/TCP
Endpoints: 10.244.0.108:8080,10.244.0.109:8080,10.244.0.110:8080 + 1 more...
Session Affinity: None

Although the pod selector is defined in the service spec, it is not used directly when redirecting incoming connections. Instead, the selector is used to build a list of IPs and ports, which is then stored in the Endpoints resource. When a client connects to a service, the service proxy selects one of those IP/port pairs and redirects the incoming connection to the server listening at that location.
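The Endpoints resource can also be listed directly; for the kubia-svc service above, output along these lines would be expected (the IPs match the describe output):

kubectl get endpoints kubia-svc
NAME        ENDPOINTS                                                            AGE
kubia-svc   10.244.0.108:8080,10.244.0.109:8080,10.244.0.110:8080 + 1 more...   1h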

Manually configure the endpoint of the service

Having the service's endpoints decoupled from the service itself also makes it possible to configure them manually. If you create a service without a pod selector, Kubernetes will not create the Endpoints resource (after all, without a selector it cannot know which pods belong to the service). In that case, you create the Endpoints resource yourself to specify the endpoint list for the service (the matching selector-less service manifest, kubia-ex.yaml, is shown further below).

kubia-endp.yaml

apiVersion: v1
kind: Endpoints
metadata:
  name: external-service
subsets:
  - addresses:
    - ip: 11.11.11.11
    - ip: 22.22.22.22
    ports:
    - port: 80 #Target port of endpoint

The Endpoints object must bear the same name as the service and contain the list of target IP addresses and ports for the service. Once both the service and the Endpoints resource are posted to the API server, the service works as if it had a pod selector: containers created after the service will include the service's environment variables, and all connections to its IP:port pair will be load-balanced across the service's endpoints.

Create aliases for external services

cat externalname.yaml

apiVersion: v1
kind: Service
metadata:
  name: external-name-service
spec:
  type: ExternalName
  externalName: someapi.app.com
  ports:
  - port: 80

kubia-ex.yaml

apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  ports:
  - port: 80

curl someapi.app.com

<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.2.7</center>
</body>
</html>

An ExternalName service is implemented solely at the DNS level: a simple CNAME record is created for the service.
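Pods can then reach the external API through the service's DNS name rather than the actual hostname, which makes it easy to repoint the alias later. For example, from inside a pod:

curl http://external-name-service.default.svc.cluster.local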

Exposing services to external clients


There are several ways to access services externally:

  • Set the service type to NodePort. For a NodePort service, each cluster node opens a port on the node itself (hence the name) and redirects traffic received on that port to the underlying service. The service is then accessible not only through the internal cluster IP and port, but also through a dedicated port on every node.
  • Set the service type to LoadBalancer, an extension of the NodePort type. This makes the service accessible through a dedicated load balancer, provisioned by the cloud infrastructure Kubernetes is running on. The load balancer redirects traffic to the node port across all the nodes, and clients connect to the service through the load balancer's IP.
  • Create an Ingress resource, a radically different mechanism that exposes multiple services through a single IP address. It operates at the HTTP level (layer 7 of the network stack).

Service using NodePort type

cat nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30123
  selector:
    app: kubia
    
[root@kmaster wangyang]# kgp
kubia-6c68d68756-44kvt      1/1     Running       0          27h
kubia-6c68d68756-pp6pl      1/1     Running       0          27h
kubia-6c68d68756-r8kqr      1/1     Running       0          27h

[root@kmaster wangyang]# curl 10.107.34.4:80
You've hit many-port
[root@kmaster wangyang]# curl 10.107.34.4:80
You've hit kubia-6c68d68756-r8kqr
[root@kmaster wangyang]# curl 10.107.34.4:80
You've hit kubia-6c68d68756-44kvt
[root@kmaster wangyang]# curl kmaster:30123
You've hit kubia-6c68d68756-r8kqr

Get the IP of all nodes using JSONPath

kubectl get nodes -o jsonpath='{.items[*].status.addresses[*].address}'
192.168.145.128 kmaster 192.168.145.129 kubernode1 192.168.145.130 kubernode2
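Because addresses[*] returns hostnames as well as IPs, the expression can be narrowed with a filter if only the internal IPs are wanted:

kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'
192.168.145.128 192.168.145.129 192.168.145.130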

Service using LoadBalancer type

Create

cat loadbalancer.yaml 
apiVersion: v1
kind: Service
metadata:
  name: kubia-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia

View

[root@kmaster wangyang]# kg svc -w
NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)          AGE
external-name-service   ExternalName   <none>          someapi.app.com   80/TCP           4h57m
external-service        ClusterIP      10.105.57.192   <none>            80/TCP           5h9m
kubernetes              ClusterIP      10.96.0.1       <none>            443/TCP          32h
kubia                   ClusterIP      10.109.52.215   <none>            80/TCP,443/TCP   6h10m
kubia-http              NodePort       10.97.10.52     <none>            80:32767/TCP     27h
kubia-loadbalancer      LoadBalancer   10.104.78.31    <pending>         80:32462/TCP     99s

Test

(The EXTERNAL-IP above stays <pending> because this cluster has no cloud provider to provision a load balancer; the service is still reachable through its cluster IP and node port.)

[root@kmaster wangyang]# curl 10.104.78.31:80
You've hit kubia-6c68d68756-r8kqr
[root@kmaster wangyang]# curl 10.104.78.31:80

Understanding the characteristics of external connections

Understanding and preventing unnecessary network hops

  • When an external client connects to the service through the node port (including the case where the connection first passes through a load balancer), the randomly chosen pod does not necessarily run on the node that received the connection. Reaching the pod may therefore require an additional network hop, which isn't always desirable.

  • This extra hop can be prevented by configuring the service to redirect external traffic only to pods running on the node that received the connection, by setting spec.externalTrafficPolicy to Local (see the sketch after this list).

  • If the service definition contains this setting and an external connection is opened through the service's node port, the service proxy chooses a locally running pod. If no local pod exists, the connection hangs (it is not forwarded to a random pod elsewhere in the cluster, as it would be without the setting). You therefore need to ensure the load balancer forwards connections only to nodes that have at least one such pod.

  • Suppose node A runs one pod and node B runs two other pods. If the load balancer spreads connections evenly across the two nodes, the pod on node A receives 50% of all connections, while the two pods on node B receive only 25% each.
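A minimal sketch of the Local traffic policy mentioned above, applied to the kubia-nodeport service from earlier:

apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport
spec:
  type: NodePort
  externalTrafficPolicy: Local   # only forward to pods on the node that received the traffic
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30123
  selector:
    app: kubia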

    Remember that the client IP is not preserved

  • Typically, when clients inside the cluster connect to a service, the pods backing the service can see the client's IP address. But when the connection is received through a node port, the packets' source IP is changed, because Source Network Address Translation (SNAT) is performed on them.

  • Backend pods therefore cannot see the actual client IP, which can be a problem for applications that need to know it. For example, if the application is a web server, its access logs cannot show the IP of the machine the request came from.

  • The Local external traffic policy described in the previous section also affects client IP preservation: because there is no additional hop between the node receiving the connection and the node hosting the target pod, no SNAT is performed, and the client IP is preserved.

Exposing services through Ingress

Why do I need Ingress?

  • One important reason is that each LoadBalancer service requires its own load balancer with its own unique public IP address, whereas an Ingress needs only one public IP to provide access to many services. When a client sends an HTTP request to the Ingress, the host name and path in the request determine which service the request is forwarded to.
  • Ingress operates at the application layer of the network stack (HTTP) and can provide features that services cannot, such as cookie-based session affinity.

cat ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia-example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
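To test the Ingress, kubia-example.com has to resolve to the Ingress controller's IP. One common approach is to look up the address reported for the Ingress and map the host name in /etc/hosts (the address below is a placeholder; use whatever your controller reports):

kubectl get ingress kubia
NAME    HOSTS               ADDRESS           PORTS   AGE
kubia   kubia-example.com   192.168.145.128   80      1m

echo '192.168.145.128 kubia-example.com' >> /etc/hosts
curl http://kubia-example.com/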

Learn how Ingress works

The client first performs a DNS lookup of kubia-example.com, and the DNS server (or the local operating system) returns the IP of the Ingress controller. The client then sends an HTTP request to the Ingress controller, specifying kubia-example.com in the Host header. From that header, the controller determines which service the client is trying to access, looks up the pod IPs through the Endpoints object associated with the service, and forwards the client's request to one of the pods.

Different services map to different paths of the same host

ingress-diffpath.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia-example.com
    http:
      paths:
      - path: /kubia
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
      - path: /foo
        backend:
          serviceName: bar
          servicePort: 80

Paths for different services mapped to different hosts

ingress-diffhost.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia-example.com
    http:
      paths:
      - path: /kubia
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
  - host: bar.example.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: bar
          servicePort: 80

Create a TLS certificate for the Ingress

When a client opens a TLS connection to the Ingress controller, the controller terminates the TLS connection. Communication between the client and the controller is encrypted, while communication between the controller and the backend pod is not, so an application running in the pod does not need to support TLS.

openssl genrsa -out tls.key 2048
openssl req -new -x509 -key tls.key -out tls.cert -days 360 -subj /CN=kubia.example.com
[root@kmaster wangyang]# ls
externalname.yaml      ingress-diffpath.yaml  kubia-ex.yaml   loadbalancer.yaml  nodeport.yaml
ex.yaml                ingres.yaml            kubia-svc.yaml  many-pod.yaml      tls.cert
ingress-diffhost.yaml  kubia-endp.yaml
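The key and certificate then need to be stored in a Secret, which the Ingress below references through secretName:

kubectl create secret tls tls-secret --cert=tls.cert --key=tls.key
secret/tls-secret created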

cat ingres.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  tls:
  - hosts:
    - kubia.example.com
    secretName: tls-secret
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /kubia
        backend:
          serviceName: kubia-nodeport
          servicePort: 80

[root@kmaster wangyang]# kubectl apply -f ingres.yaml 
ingress.extensions/kubia configured
[root@kmaster wangyang]# curl -k -v https://kubia.example.com/kubia
