# Affinity round-robin in distributed load balancing

1.1 Service and Endpoints
1.2 The round-robin algorithm
1.3 Affinity
2.1 How affinity is implemented
2.2 Per-service load-balancing state (balancerState)
2.3 The round-robin load balancer (LoadBalancerRR)
2.4 The load-balancing algorithm (NextEndpoint)

Whether in early hardware load balancers or in today's client-side load balancing for microservices, there is always a basic round-robin algorithm that spreads requests evenly across multiple machines. In this article I will walk through the core data structures kube-proxy uses to implement affinity round-robin, and look at how the affinity policy, failure retry and related mechanisms are implemented.

1. Groundwork

1.1 Service and Endpoints


Service and Endpoints are Kubernetes concepts. A Service represents a service and usually fronts a group of Pods. Because Pod IP addresses are not fixed, the Service provides a stable, unified entry point for the backend Pods, while the Endpoints object is the collection of IP addresses and ports of the backends that provide that service.

1.2 The round-robin algorithm


Round-robin is probably the simplest load-balancing algorithm. In Go, most implementations keep the currently reachable backend addresses in a slice, and use an index to record which backend the next request should go to.
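As a minimal sketch of that slice-plus-index idea (the roundRobin type and next method below are mine for illustration, not kube-proxy code):

```go
package main

import "fmt"

// roundRobin keeps the backend addresses and the position of the next pick.
type roundRobin struct {
    endpoints []string // backend "ip:port" addresses
    index     int      // index of the next endpoint to return
}

// next returns the current endpoint and advances the index, wrapping around.
func (rr *roundRobin) next() (string, bool) {
    if len(rr.endpoints) == 0 {
        return "", false
    }
    ep := rr.endpoints[rr.index]
    rr.index = (rr.index + 1) % len(rr.endpoints)
    return ep, true
}

func main() {
    rr := &roundRobin{endpoints: []string{"10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"}}
    for i := 0; i < 5; i++ {
        ep, _ := rr.next()
        fmt.Println(ep) // cycles: .1, .2, .3, .1, .2
    }
}
```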

1.3 Affinity


Affinity is relatively simple to implement. Affinity (session affinity, or "sticky sessions") means that when the same client IP repeatedly calls a backend service, its requests keep being forwarded to the same machine they were forwarded to before.
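Conceptually (again with illustrative names, not kube-proxy's), affinity is just a map from client IP to the endpoint handed out last time, consulted before falling back to round robin:

```go
// stickyBalancer sketches client-IP affinity on top of round robin.
// It assumes endpoints is non-empty; expiry and locking are ignored here.
type stickyBalancer struct {
    endpoints []string          // backend "ip:port" addresses
    index     int               // round-robin cursor
    sticky    map[string]string // client IP -> endpoint chosen last time
}

func (s *stickyBalancer) pick(clientIP string) string {
    if ep, ok := s.sticky[clientIP]; ok {
        return ep // the same client keeps hitting the same backend
    }
    ep := s.endpoints[s.index]
    s.index = (s.index + 1) % len(s.endpoints)
    s.sticky[clientIP] = ep // remember the choice for next time
    return ep
}
```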

2. Implementation of the core data structures

2.1 How affinity is implemented

2.1.1 The affinity policy (affinityPolicy)

The affinity policy design has three parts:
affinityType: the affinity type, i.e. which piece of client information affinity is keyed on; currently it is the client IP
affinityMap: a hash map keyed by the client information chosen in the policy (the client IP), storing the affinity state for each client
ttlSeconds: how long a stored affinity stays valid; once it expires, a backend is chosen again by the round-robin algorithm

```go
type affinityPolicy struct {
    affinityType v1.ServiceAffinity        // The Type field is just a string, no need to delve into it
    affinityMap  map[string]*affinityState // map client IP -> affinity info
    ttlSeconds   int
}
```
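For reference, v1.ServiceAffinity is the string type from k8s.io/api/core/v1 whose values are v1.ServiceAffinityClientIP ("ClientIP") and v1.ServiceAffinityNone ("None"). A small constructor in the spirit of the struct above (a sketch, not necessarily the exact source) would look like:

```go
// newAffinityPolicy builds an affinityPolicy with an empty affinity map.
func newAffinityPolicy(affinityType v1.ServiceAffinity, ttlSeconds int) *affinityPolicy {
    return &affinityPolicy{
        affinityType: affinityType,
        affinityMap:  make(map[string]*affinityState),
        ttlSeconds:   ttlSeconds,
    }
}

// Example: ClientIP affinity with a 3-hour TTL (10800s is the Kubernetes
// default timeout for ClientIP session affinity).
// policy := newAffinityPolicy(v1.ServiceAffinityClientIP, 10800)
```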

2.1.2 The affinity state (affinityState)

As mentioned above, the affinity state is stored in the affinityMap. The key information in the affinity state is really just two fields: endpoint (the backend endpoint to access) and lastUsed (the last time this affinity was used).

```go
type affinityState struct {
    clientIP string
    //clientProtocol api.Protocol //not yet used
    //sessionCookie  string       //not yet used
    endpoint string
    lastUsed time.Time
}
```

2.2 Per-service load-balancing state (balancerState)


balancerState holds the load-balancing state of a single Service: endpoints stores the ip:port list of the backend Pods, index is the cursor used by the round-robin algorithm, and affinity stores the corresponding affinity policy data.

```go
type balancerState struct {
    endpoints []string // a list of "ip:port" style strings
    index     int      // current index into endpoints
    affinity  affinityPolicy
}
```

2.3 The round-robin load balancer (LoadBalancerRR)


The top-level data structure keeps the per-service load-balancing state in the services map and protects that map with a read-write lock.

```go
type LoadBalancerRR struct {
    lock     sync.RWMutex
    services map[proxy.ServicePortName]*balancerState
}
```
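Putting these together, constructing the balancer is just a matter of initializing the guarded map; the kube-proxy source has a constructor essentially like this sketch:

```go
// NewLoadBalancerRR returns an empty round-robin load balancer; per-service
// entries are added to services as Services and Endpoints are observed.
func NewLoadBalancerRR() *LoadBalancerRR {
    return &LoadBalancerRR{
        services: map[proxy.ServicePortName]*balancerState{},
    }
}
```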

2.4 The load-balancing algorithm (NextEndpoint)

We only focus on how the load balancer does round-robin selection and affinity assignment; the logic for updating and deleting Services and Endpoints is omitted. The rest of this section walks through the NextEndpoint implementation.

2.4.1 Locking and validity checks

The validity checks mainly verify that the service exists and that it has at least one endpoint.

```go
// Take the lock for the whole selection
lb.lock.Lock()
defer lb.lock.Unlock()

// Check whether the service exists
state, exists := lb.services[svcPort]
if !exists || state == nil {
    return "", ErrMissingServiceEntry
}
// Check whether the service has any endpoints
if len(state.endpoints) == 0 {
    return "", ErrMissingEndpoints
}
klog.V(4).Infof("NextEndpoint for service %q, srcAddr=%v: endpoints: %+v", svcPort, srcAddr, state.endpoints)
```

2.4.2 Checking whether session affinity is enabled

Whether session affinity is enabled is determined from the affinity type, i.e. by checking whether the corresponding field is set to something other than None.

```go
sessionAffinityEnabled := isSessionAffinity(&state.affinity)

func isSessionAffinity(affinity *affinityPolicy) bool {
    // Should never be empty string, but checking for it to be safe.
    if affinity.affinityType == "" || affinity.affinityType == v1.ServiceAffinityNone {
        return false
    }
    return true
}
```

2.4.3 Affinity lookup and last-used update

When affinity matches, the previously used endpoint is returned preferentially. However, if that endpoint could not be reached, a new node has to be selected and the affinity reset.

```go
var ipaddr string
if sessionAffinityEnabled {
    // Caution: don't shadow ipaddr
    var err error
    // Obtain the source IP; affinity is currently matched on the client IP
    ipaddr, _, err = net.SplitHostPort(srcAddr.String())
    if err != nil {
        return "", fmt.Errorf("malformed source address %q: %v", srcAddr.String(), err)
    }
    // sessionAffinityReset is false by default, but it is set when access to the
    // current endpoint failed: after a connection error a new machine must be
    // chosen, and the existing affinity can no longer be used
    if !sessionAffinityReset {
        // If a non-expired affinity entry is found, return its endpoint
        sessionAffinity, exists := state.affinity.affinityMap[ipaddr]
        if exists && int(time.Since(sessionAffinity.lastUsed).Seconds()) < state.affinity.ttlSeconds {
            // Affinity wins.
            endpoint := sessionAffinity.endpoint
            sessionAffinity.lastUsed = time.Now()
            klog.V(4).Infof("NextEndpoint for service %q from IP %s with sessionAffinity %#v: %s", svcPort, ipaddr, sessionAffinity, endpoint)
            return endpoint, nil
        }
    }
}
```

2.4.4 Building affinity state keyed by client IP

If no valid affinity entry was found (or affinity was reset), the next endpoint is taken from the round-robin sequence and, when session affinity is enabled, the choice is recorded in the affinityMap keyed by the client IP:

```go
// Take an endpoint from the round-robin sequence and advance the index
endpoint := state.endpoints[state.index]
state.index = (state.index + 1) % len(state.endpoints)

if sessionAffinityEnabled {
    // Record the affinity state for this client IP
    var affinity *affinityState
    affinity = state.affinity.affinityMap[ipaddr]
    if affinity == nil {
        affinity = new(affinityState) //&affinityState{}
        state.affinity.affinityMap[ipaddr] = affinity
    }
    affinity.lastUsed = time.Now()
    affinity.endpoint = endpoint
    affinity.clientIP = ipaddr
    klog.V(4).Infof("Updated affinity key %s: %#v", ipaddr, state.affinity.affinityMap[ipaddr])
}

return endpoint, nil
```
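To see how the pieces of 2.4.1 through 2.4.4 fit together from a caller's point of view, here is a rough usage sketch. It assumes the NextEndpoint(svcPort, srcAddr, sessionAffinityReset) signature implied by the snippets above; pickAndDial and the timeout value are hypothetical, not kube-proxy code:

```go
// pickAndDial asks the balancer for an endpoint and dials it; on a connection
// error it retries once with sessionAffinityReset=true so the stale affinity
// entry is replaced by a freshly chosen backend.
func pickAndDial(lb *LoadBalancerRR, svcPort proxy.ServicePortName, srcAddr net.Addr) (net.Conn, error) {
    endpoint, err := lb.NextEndpoint(svcPort, srcAddr, false)
    if err != nil {
        return nil, err
    }
    conn, err := net.DialTimeout("tcp", endpoint, 250*time.Millisecond)
    if err == nil {
        return conn, nil
    }
    // The sticky endpoint is unreachable: pick again with the reset flag set.
    endpoint, err = lb.NextEndpoint(svcPort, srcAddr, true)
    if err != nil {
        return nil, err
    }
    return net.DialTimeout("tcp", endpoint, 250*time.Millisecond)
}
```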

That's it for today's analysis. I hope it helps you understand how the affinity round-robin algorithm is implemented, the design of its core data structures, and some of the ways failures are handled in production. Thanks for reading.

k8s source reading e-book address: https://www.yuque.com/baxiaoshi/tyado3

