1 Overview
The "one-stop solution for distributed microservice architecture" is a collection of the technologies commonly used to put a microservice architecture into practice. Prerequisites:
- java8
- maven
- git
- Nginx
- rabbitMQ
- SpringBoot2.0
Components:
- Service registration and discovery: Eureka, Zookeeper, Nacos
- Service call: Netflix OSS Ribbon, LoadBalancer
- Declarative service call and load balancing: Feign, OpenFeign
- Service degradation and circuit breaking: Hystrix, Sentinel
- Service message queue: MQ
- Configuration center: Config, Nacos
- Service gateway: Zuul, Gateway
- Service monitoring: Hystrix Dashboard
- Automated build and deployment
- Scheduled task scheduling
- Service bus: Bus, Nacos
- Service development: Spring Boot
https://cloud.spring.io/spring-cloud-static/Hoxton.SR1/reference/htmlsingle/
https://docs.spring.io/spring-boot/docs/2.2.2.RELEASE/reference/htmlsingle/
- Spring Boot 2.2.2
- Spring Cloud Hoxton (the "H" release train)
In Maven:
- dependencyManagement provides a way to manage dependency version numbers in one place. It usually appears in the top-level parent POM of an organization or project.
- When a child project adds one of those dependencies, it can omit the version number and use the one declared in the parent POM.
- dependencyManagement only declares versions; it does not actually introduce the dependencies, so the child project must still explicitly declare the dependencies it needs.
- If a dependency is not declared in the child project, it is not inherited from the parent. It is inherited only when the child declares the dependency without a version, in which case both version and scope are taken from the parent POM.
- If the child project specifies its own version, the child's version is used.
- dependencies: actually introduces the declared dependencies (see the sketch after this list).
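A minimal pom sketch of the pattern just described; the coordinates below (mysql-connector-java 8.0.18) are placeholders for illustration, not this project's actual dependencies.

<!-- Parent POM: only declares the version -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>8.0.18</version>
        </dependency>
    </dependencies>
</dependencyManagement>

<!-- Child POM: still has to declare the dependency explicitly, but may omit the version -->
<dependencies>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <!-- no <version> here: version and scope are inherited from the parent's dependencyManagement -->
    </dependency>
</dependencies>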
Start MySQL with Docker:
$ docker run -d -p 3306:3306 -v /usr/local/mysql/data:/var/lib/mysql -v /usr/local/mysql/conf/mysql.cnf:/etc/mysql/mysql.cnf -e MYSQL_ROOT_PASSWORD=123456 --name mysql-service docker.io/mysql
2. Microservice architecture coding construction
Build project, specify package and version – > build module
- Build database
- Entity class
- dao
- Mapping file xml
- service
- controller
Turn on hot deployment
- Add dependency
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
    <scope>runtime</scope>
    <optional>true</optional>
</dependency>
- Add maven plug-in to parent project
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <fork>true</fork>
                <addResources>true</addResources>
            </configuration>
        </plugin>
    </plugins>
</build>
- In IDEA, check the four automatic-build options under Settings --> Build, Execution, Deployment --> Compiler
- Registry:
- Ctrl+Shift+Alt+/ opens the Registry
- Check:
- compiler.automake.allow.when.app.running
- actionSystem.assertFocusAccessFromEdt
- Restart IDEA
RestTemplate is used for calls between microservices.
RestTemplate provides a variety of convenient methods for accessing remote HTTP services.
It is a simple, convenient template class for accessing RESTful services: the client-side tool set Spring provides for calling REST services.
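A minimal sketch of calling another microservice with RestTemplate before any registry or load balancer is involved; the URL, paths and the CommonResult/Payment types are assumptions borrowed from later sections, purely for illustration.

@RestController
public class OrderController {

    // hard-coded provider address; replaced by the service name once Eureka and Ribbon are introduced
    public static final String PAYMENT_URL = "http://localhost:8001";

    @Resource
    private RestTemplate restTemplate;

    // GET: fetch a remote resource and map the JSON body onto a local type
    @GetMapping("/consumer/payment/get/{id}")
    public CommonResult getPayment(@PathVariable("id") Long id) {
        return restTemplate.getForObject(PAYMENT_URL + "/payment/get/" + id, CommonResult.class);
    }

    // POST: send a body and read the mapped response
    @PostMapping("/consumer/payment/create")
    public CommonResult create(@RequestBody Payment payment) {
        return restTemplate.postForObject(PAYMENT_URL + "/payment/create", payment, CommonResult.class);
    }
}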
Module: cloud01 — calls between microservices
Project refactoring:
- Extract the common parts into a shared common module
- Run mvn clean install on the common module to install it into the local Maven repository
- Other modules then introduce the common module's jar as a dependency
Module: cloud02 — stand-alone version
3 service registration
3.1 Eureka
3.1.1 Service governance
In a traditional RPC framework, managing the dependencies between services directly is complex, so service governance is needed to manage those dependencies and provide service invocation, load balancing, fault tolerance, and service registration and discovery.
Eureka adopts a C/S (client/server) architecture.
Eureka Server: after each service node starts, it registers itself with the Eureka Server according to its configuration, so the registry inside the Eureka Server stores the information of all available service nodes, and that information can be viewed intuitively in the web console.
Eureka Client: a Java client that interacts with the Eureka Server; it has a built-in load balancer that uses a round-robin algorithm. After the application starts, it sends heartbeats to the Eureka Server. If the Eureka Server does not receive a node's heartbeat within the configured period, it removes that service node from the registry.
3.1.2 Eureka server
- Dependency
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
- Configuration
server:
  port: 7001
eureka:
  instance:
    hostname: localhost  # instance name of the Eureka server
  client:
    register-with-eureka: false  # do not register itself with the registry
    fetch-registry: false        # it is the registry; it maintains service instances and does not need to retrieve services
    service-url:                 # the address used to interact with Eureka; both querying and registering services depend on it
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
- Main startup class
@SpringBootApplication
@EnableEurekaServer // marks this application as a Eureka server
public class EurekaMain {
    public static void main(String[] args) {
        SpringApplication.run(EurekaMain.class, args);
    }
}

3.1.3 Eureka client
- Dependency
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
- Configuration
eureka:
  client:
    register-with-eureka: true  # register itself with the Eureka server
    fetch-registry: true        # fetch registry information from the Eureka server; must be true in a cluster to use Ribbon load balancing
    service-url:
      defaultZone: http://localhost:7001/eureka
- Main startup class
@SpringBootApplication
@EnableEurekaClient
public class PaymentMain {
    public static void main(String[] args) {
        SpringApplication.run(PaymentMain.class, args);
    }
}

3.1.4 Eureka cluster
- Service registration: register the service information into the registry
- Service discovery: obtain service information from the registry
- Essence: a key-value store where the key is the service name and the value is the call address (see the toy illustration below)
Q: What is the core requirement of microservice RPC remote calls?
High availability.
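A toy illustration of the "key = service name, value = call addresses" idea above; this is only a conceptual model, not Eureka's real data structure.

import java.net.URI;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RegistryToy {
    public static void main(String[] args) {
        // the registry behaves like a map from service name to the addresses of its instances
        Map<String, List<URI>> registry = new HashMap<>();
        registry.put("PAYMENT-SERVICE",
                Arrays.asList(URI.create("http://localhost:8001"), URI.create("http://localhost:8002")));

        // "discovery" is a lookup by service name; the caller then load-balances over the returned list
        System.out.println(registry.get("PAYMENT-SERVICE"));
    }
}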
Module: cloud03 — cluster version
Build a Eureka cluster to achieve load balancing and fault tolerance --> the Eureka servers register with and watch each other.
- Create a new Eureka server module. Refer to the previous
- Modify pom
- Modify the host mapping in /etc/hosts
# /etc/hosts
127.0.0.1 eureka7001.com
127.0.0.1 eureka7002.com
- Modify yaml
server:
  port: 7001
eureka:
  instance:
    hostname: eureka7001.com  # Eureka server instance name
  client:
    register-with-eureka: false  # do not register itself with the registry
    fetch-registry: false        # it is the registry; it maintains service instances and does not need to retrieve services
    service-url:                 # addresses used to interact with the other Eureka servers
      defaultZone: http://eureka7002.com:7002/eureka/,http://eureka7003.com:7003/eureka/
server:
  port: 7002
eureka:
  instance:
    hostname: eureka7002.com  # Eureka server instance name
  client:
    register-with-eureka: false  # do not register itself with the registry
    fetch-registry: false        # it is the registry; it maintains service instances and does not need to retrieve services
    service-url:                 # addresses used to interact with the other Eureka servers
      defaultZone: http://eureka7001.com:7001/eureka/,http://eureka7003.com:7003/eureka/
- Main startup classes: each Eureka server is started by its own main class; start the Eureka servers and the services
- Register other microservices into the Eureka server cluster
eureka:
  client:
    register-with-eureka: true  # register itself with the Eureka server
    fetch-registry: true        # fetch registry information; must be true in a cluster to use Ribbon load balancing
    service-url:
      defaultZone: http://eureka7001.com:7001/eureka/,http://eureka7002.com:7002/eureka/
Note: when the cluster is built on different ports of the same machine, replicas cannot form if the IP or host name is identical, which is why one instance was moved to another server.
The payment microservice is also turned into a cluster:
- Refer to the existing payment module to create a new module, payment2
- Modify pom
- Modify yaml
- Main startup class
- Business class
- Modify the controller of 8001 / 8002 (load balancing)
Q: There are now two payment microservice instances. How do we achieve load balancing?
(Figure: Eureka console — images/eureka_info.png)
You can see the two Eureka servers and two instances of the same PAYMENT-SERVICE. At this point the order service accesses the payment service through a hard-coded IP and port, which does not allow load balancing.
- Modify to access payment service through service name
public static final String PAYMENT_URL = "http://PAYMENT-SERVICE";
- Use @LoadBalanced to give RestTemplate load-balancing capability
@Configuration
public class ApplicationContextConfig {
    @Bean
    @LoadBalanced
    public RestTemplate getRestTemplate() {
        return new RestTemplate();
    }
}
This is Ribbon's load-balancing function, covered below.
(Figure: Eureka architecture — images/eureka.png)
Actuator: improving the instance information shown for the cluster
# client config
eureka:
  instance:
    instance-id: payment8002   # configure the instance name
    prefer-ip-address: true    # show the IP address in the access path
Discovery of the microservices registered with Eureka
// PaymentController
@RestController
public class PaymentController {
    @Autowired
    private PaymentService paymentService;

    @Value("${server.port}")
    private String serverPort;

    @Resource
    private DiscoveryClient discoveryClient; // used to expose microservice information

    // ...

    @GetMapping("/payment/discovery")
    public Object discovery() {
        List<String> services = discoveryClient.getServices();
        for (String service : services) {
            System.out.println(service);
        }
        List<ServiceInstance> instances = discoveryClient.getInstances("PAYMENT-SERVICE");
        for (ServiceInstance instance : instances) {
            System.out.println(instance.getInstanceId() + "\t" + instance.getHost()
                    + "\t" + instance.getPort() + "\t" + instance.getUri());
        }
        return discoveryClient;
    }
}
@SpringBootApplication
@EnableEurekaClient
@EnableDiscoveryClient // add the service discovery annotation to the main startup class
public class PaymentMain8001 { ... }
Eureka's self-protection mechanism
Notice: if the console shows
EMERGENCY! ...
it means Eureka has entered self-protection mode.
What the self-protection mechanism is: when a microservice becomes unavailable at some moment, Eureka does not evict it immediately; it keeps the registration information. Normally, if the Eureka server receives no heartbeat from an instance within a certain time (90 seconds by default), it deregisters the instance. But when a network partition occurs (congestion, delays, and so on), there is no normal traffic between the microservice and the Eureka server even though the service itself is healthy, and evicting it then would be dangerous. Self-protection mode solves this: Eureka would rather keep possibly stale registration information than deregister instances that may still be healthy. Better to keep them alive than to kill them all.
How do I turn off self-protection?
- Modify eureka server configuration
eureka:
  instance:
    hostname: eureka7001.com   # Eureka server instance name
  client:
    register-with-eureka: false  # do not register itself with the registry
    fetch-registry: false        # it is the registry; it maintains service instances and does not need to retrieve services
    service-url:                 # the address used to interact with Eureka
      defaultZone: http://eureka7002.com:7002/eureka/
  server:
    enable-self-preservation: false  # turn off self-protection
- Modify eureka client configuration
eureka:
  client:
    register-with-eureka: true   # register itself with the Eureka server
    fetch-registry: true         # fetch registry information; must be true in a cluster to use Ribbon load balancing
    service-url:
      defaultZone: http://eureka7001.com:7001/eureka, http://eureka7002.com:7002/eureka
  instance:
    instance-id: payment8001                  # configure the instance name
    prefer-ip-address: true                   # show the IP address in the access path
    lease-renewal-interval-in-seconds: 1      # interval at which the client sends heartbeats to the server, in seconds; default 30
    lease-expiration-duration-in-seconds: 2   # how long the Eureka server waits after the last heartbeat before evicting the instance, in seconds; default 90
3.2 Zookeeper
zookeeper is a distributed coordination tool that implements registry functions.
(Figure: Zookeeper as a registry — images/zookeeper.png)
Turn off the zookeeper firewall and start the zookeeper server
$ systemctl stop firewalld
- pom
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zookeeper-discovery</artifactId>
</dependency>
- yaml
server:
  port: 8004
spring:
  application:
    name: payment-service   # name registered in the Zookeeper registry
  cloud:
    zookeeper:
      connect-string: 192.168.80.130:2181
      # connect-string: 192.168.80.130:2181,192.168.80.131:2181  # cluster
- Main startup + controller+service, etc. (omitted)
@SpringBootApplication
@EnableDiscoveryClient // this annotation registers the service
public class PaymentMain8004 { ... }
@RestController
public class PaymentController {
    @Value("${server.port}")
    private String serverPort;

    @GetMapping("/payment/zk")
    public String paymentzk() {
        return "springcloud with zookeeper" + serverPort + UUID.randomUUID().toString();
    }
}
- zookeeper package version conflict handling
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zookeeper-discovery</artifactId>
    <!-- first exclude the zookeeper client it brings in (3.5.3) -->
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<!-- add a zookeeper client version consistent with the zookeeper server -->
<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.9</version>
    <!-- zookeeper brings in log4j conflicts; exclude them -->
    <exclusions>
        <exclusion>
            <artifactId>slf4j-log4j12</artifactId>
            <groupId>org.slf4j</groupId>
        </exclusion>
    </exclusions>
</dependency>
- View in zookeeper service
$ zkCli.sh
ls /services                    # see which microservices are registered
ls /services/payment-service    # list the nodes of a service
get /services/payment-service/7aa84ad6-5ec5-4309-a3d2-63f10e4af278   # get node information
// node information
{
  "name": "payment-service",
  "id": "7aa84ad6-5ec5-4309-a3d2-63f10e4af278",
  "address": "192.168.190.1",
  "port": 8004,
  "sslPort": null,
  "payload": {
    "@class": "org.springframework.cloud.zookeeper.discovery.ZookeeperInstance",
    "id": "application-1",
    "name": "payment-service",
    "metadata": {}
  },
  "registrationTimeUTC": 1627912537184,
  "serviceType": "DYNAMIC",
  "uriSpec": {
    "parts": [
      { "value": "scheme", "variable": true },
      { "value": "://", "variable": false },
      { "value": "address", "variable": true },
      { "value": ":", "variable": false },
      { "value": "port", "variable": true }
    ]
  }
}
- visit
$ curl http://localhost:8004/payment/zk
Q: Is the service node registered in Zookeeper ephemeral (temporary) or persistent?
Modules: payment8004 + orderzk80
3.3 Consul
Omitted.
Modules: cloud-provider-consul-payment8006 + cloud-consumer-consul-order80
3.4 Nacos
3.5 summary
| Component | Language | CAP | Health check | Exposed interface | Spring Cloud integration |
| --- | --- | --- | --- | --- | --- |
| Eureka | Java | AP | Configurable | HTTP | Integrated |
| Consul | Go | CP | Supported | HTTP/DNS | Integrated |
| Zookeeper | Java | CP | Supported | Client | Integrated |
| Nacos | Java | AP/CP | Supported | HTTP | Integrated |

A commonplace topic in distributed environments - CAP
- C: Consistency (strong consistency)
- A: Availability
- P: Partition tolerance
The core of the CAP theorem is that a distributed system cannot satisfy consistency, availability and partition tolerance at the same time.
- CA: a single-node cluster that satisfies consistency and availability; its scalability is usually weak
- CP: systems that satisfy consistency and partition tolerance; their performance is usually not particularly high
- AP: systems that satisfy availability and partition tolerance, with lower requirements on data consistency
Therefore, considering the expansion of clusters, the distributed system can only choose CP or AP.
(Figure: CAP — images/cap.jpg)
Zookeeper guaranteed CP
However, zk can run into the following situation: when the master node loses contact with the other nodes because of a network failure, the remaining nodes re-elect a leader. The problem is that the election takes 30-120s, and the whole zk cluster is unavailable during that time, which paralyzes registration while the election is in progress. In a cloud deployment it is quite likely that the master node will be lost because of network problems. The cluster does eventually recover, but a long election that makes registration unavailable for a long time is hard to tolerate.
Eureka guaranteed AP
Data inconsistency can be tolerated
4 load balancing service call
4.1 Ribbon
4.1.1 Load balancing
Spring Cloud Ribbon is a client-side load-balancing tool set; it mainly provides client-side software load balancing and service invocation.
List all the machines behind the load balancer in the configuration file, and Ribbon automatically connects to them according to certain rules (simple round robin, random, and so on).
In one sentence: load balancing + RestTemplate.
Difference from Nginx
- Nginx is server load balancing. All client requests will be handed over to nginx, and then nginx will forward the requests. That is, load balancing is realized by the server.
- Ribbon local load balancing. When calling the micro service interface, it will obtain the registration information service list in the registry and cache it to the JVM local, so as to realize RPC remote service call locally.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
spring-cloud-starter-netflix-eureka-client already brings in spring-cloud-starter-netflix-ribbon, so no extra dependency is needed.
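Besides the @LoadBalanced RestTemplate shown earlier, Spring Cloud also exposes the instance-selection step directly through LoadBalancerClient; a minimal sketch, with the PAYMENT-SERVICE name and /payment/lb path taken from this project as assumptions:

@RestController
public class LbDemoController {

    @Resource
    private LoadBalancerClient loadBalancerClient; // org.springframework.cloud.client.loadbalancer

    @Resource
    private RestTemplate restTemplate; // plain RestTemplate; no @LoadBalanced needed for this style

    @GetMapping("/consumer/payment/choose")
    public String choose() {
        // one instance is picked from the locally cached service list
        ServiceInstance instance = loadBalancerClient.choose("PAYMENT-SERVICE");
        return restTemplate.getForObject(instance.getUri() + "/payment/lb", String.class);
    }
}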
4.1.2 Ribbon's built-in load-balancing rules:
- RoundRobinRule: polling (the default)
- RandomRule: random
- RetryRule: poll first; if the poll fails, retry within a time limit
- WeightedResponseTimeRule: an extension of polling; the faster the response, the greater the weight
- BestAvailableRule: skip services in a tripped (circuit-open) state and pick the one with the least concurrency
- AvailabilityFilteringRule: filter out failed instances, then pick an instance with lower concurrency
- ZoneAvoidanceRule: judge the performance and availability of the zone the server is in, then pick a server
How do we replace the default rule?
- Create a new package: com.chmingx.myrule
- Configuration:
@Configuration
public class MyselfRule {
    @Bean
    public IRule myRule() {
        return new RandomRule(); // random load-balancing rule
    }
}
- Add @RibbonClient to the main startup class
@SpringBootApplication
@EnableEurekaClient
@RibbonClient(name = "PAYMENT-SERVICE", configuration = MyselfRule.class)
public class OrderMain80 {
    public static void main(String[] args) {
        SpringApplication.run(OrderMain80.class, args);
    }
}
Module: orderribbon80
Note: there are other ways to replace the rule; see the documentation.
4.1.3 Polling algorithm
A detailed look at Ribbon's round-robin (polling) algorithm.
Principle: the request count for the REST interface starts from 1 again after each service restart, and

    number of requests to the REST interface % total number of server instances = index of the server actually called
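For example, with two instances (index 0 and index 1): request 1 --> 1 % 2 = 1, request 2 --> 2 % 2 = 0, request 3 --> 3 % 2 = 1, request 4 --> 4 % 2 = 0, so traffic alternates between the two instances.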
Writing the polling algorithm by hand
- Stop using the polling algorithm provided by Ribbon and remove the @LoadBalanced annotation:
@Configuration
public class ApplicationContextConfig {
    @Bean
    // @LoadBalanced // removed: the RestTemplate no longer uses Ribbon's load balancing
    public RestTemplate getRestTemplate() {
        return new RestTemplate();
    }
}
- Define an interface and an implementation class. Use a CAS spin loop to count requests safely under concurrency, compute request count % total number of server instances to get the index, look up the instance at that index, and call it.
public interface LoadBalancer {
    ServiceInstance instance(List<ServiceInstance> serviceInstances);
}
@Component
public class MyLB implements LoadBalancer {
    private AtomicInteger atomicInteger = new AtomicInteger(0);

    public final int getAndIncrement() {
        int current;
        int next;
        do {
            current = this.atomicInteger.get();
            // Integer.MAX_VALUE is 2147483647, the largest int; wrap around to 0
            next = current >= 2147483647 ? 0 : current + 1;
        } while (!this.atomicInteger.compareAndSet(current, next)); // CAS spin loop: safe under concurrency
        System.out.println("***** number of accesses, next: " + next);
        return next;
    }

    // request count % total number of server instances = index of the server actually called
    @Override
    public ServiceInstance instance(List<ServiceInstance> serviceInstances) {
        int index = getAndIncrement() % serviceInstances.size();
        return serviceInstances.get(index);
    }
}
- Test (can be skipped)
// Add a test endpoint on the provider side
@GetMapping("/payment/lb") // used by the custom polling load balancer test; returns this node's port
public String getPaymentLB() {
    return serverPort;
}
- Add the handwritten load balancing algorithm to the controller and send a request for testing
@RestController
public class OrderController {
    private static final String PAYMENT_URL = "http://PAYMENT-SERVICE";

    @Autowired
    private RestTemplate restTemplate;

    @Autowired
    private LoadBalancer loadBalancer;        // inject the custom load balancer

    @Autowired
    private DiscoveryClient discoveryClient;

    // use the custom load-balancing algorithm
    @GetMapping("/consumer/payment/lb")
    public String getPaymentLB() {
        List<ServiceInstance> instanceList = discoveryClient.getInstances("PAYMENT-SERVICE");
        ServiceInstance serviceInstance = loadBalancer.instance(instanceList); // pick an instance with the custom balancer
        URI uri = serviceInstance.getUri();
        return restTemplate.getForObject(uri + "/payment/lb", String.class);
    }
}
Module: orderribbon80
4.2 OpenFeign
4.2.1 Concept
Feign is a declarative web service client that makes writing web service clients very easy: just create an interface and add annotations to it.
Ribbon + RestTemplate already encapsulates HTTP requests into a templated calling style. In real development, however, a service dependency is usually called from more than one place, and the same interface tends to be called in many spots, so in practice each microservice ends up with client classes that wrap the calls to its dependent services. Feign goes one step further and helps us define and implement those dependent service interfaces: with Feign we only need an interface plus annotations to bind the interface to the service provider, which simplifies the client wrapping we would otherwise build by hand around the Spring Cloud Ribbon client.
Feign also integrates Ribbon and performs client-side load balancing through polling.
(Figure: Ribbon call flow — images/ribbon.png)
(Figure: OpenFeign call flow — images/openfeign.png)
Feign vs OpenFeign
- Feign is a lightweight RESTful HTTP client in the Spring Cloud Netflix stack. It has Ribbon built in for client-side load balancing when calling services from the registry. With Feign you define the interface using Feign's annotations and call the registered services through that interface.
- OpenFeign is Spring Cloud's Feign with support for Spring MVC annotations. Its @FeignClient can parse an interface annotated with Spring MVC's @RequestMapping, generate an implementation class through a dynamic proxy, and perform load balancing inside that class when calling other services.
Q: How do we use OpenFeign?
- New module: orderfeign80
- pom
<dependencies>
    <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-openfeign</artifactId> </dependency>
    <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId> </dependency>
    <dependency> <groupId>com.chmingx</groupId> <artifactId>common</artifactId> <version>1.0-SNAPSHOT</version> </dependency>
    <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency>
    <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency>
    <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> </dependency>
    <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency>
    <dependency> <groupId>org.projectlombok</groupId> <artifactId>lombok</artifactId> <optional>true</optional> </dependency>
</dependencies>
- yaml
server:
  port: 80
eureka:
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url:
      defaultZone: http://eureka7001.com:7001/eureka, http://eureka7002.com:7002/eureka
spring:
  application:
    name: order-service
# set the Feign client timeouts (OpenFeign uses Ribbon underneath by default)
ribbon:
  ReadTimeout: 5000     # time allowed for reading available resources from the server after the connection is established
  ConnectTimeout: 5000  # time allowed for establishing the connection under normal network conditions
logging:
  level:
    # Feign logging: at what level to monitor which interface
    com.chmingx.springcloud.service.PaymentFeignService: debug
- Main startup class activates Feign
@SpringBootApplication
@EnableFeignClients // activate Feign
public class OrderFeignMain80 { ... }
- The consumer defines the service interface and calls the interface in the service provider controller
/**
 * On the surface this is only an interface, but with @FeignClient a dynamic proxy generates a client for it,
 * so when a controller calls this interface it is essentially calling the provider's controller through that proxy.
 */
@Component
@FeignClient(value = "PAYMENT-SERVICE")
public interface PaymentFeignService {
    @GetMapping("/payment/{id}")
    CommonResult<Payment> getPaymentById(@PathVariable("id") Long id);
}
- Calling FeignClient interface in controller
@RestController
public class OrderFeignController {
    @Autowired
    private PaymentFeignService paymentFeignService;

    @GetMapping("/consumer/payment/{id}")
    public CommonResult<Payment> getPaymentById(@PathVariable("id") Long id) {
        return paymentFeignService.getPaymentById(id);
    }
}

4.2.3 OpenFeign timeout control
OpenFeign waits for 1 second by default; if the provider takes longer, the consumer reports a timeout error.
Timeout test:
// Timeout test endpoint on the provider side
@GetMapping("/payment/feign/timeout")
public String paymentFeignTimeout() {
    try {
        Thread.sleep(3000); // simulate a slow provider (the exact duration is not shown in the original notes)
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return serverPort;
}
@Component
@FeignClient(value = "PAYMENT-SERVICE")
public interface PaymentFeignService {
    @GetMapping("/payment/{id}")
    CommonResult<Payment> getPaymentById(@PathVariable("id") Long id);

    @GetMapping("/payment/feign/timeout")
    String paymentFeignTimeout();
}
@RestController
public class OrderFeignController {
    @Autowired
    private PaymentFeignService paymentFeignService;

    @GetMapping("/consumer/payment/{id}")
    public CommonResult<Payment> getPaymentById(@PathVariable("id") Long id) {
        return paymentFeignService.getPaymentById(id);
    }

    // Timeout test: OpenFeign waits 1 second by default
    @GetMapping("/payment/feign/timeout")
    public String paymentFeignTimeout() {
        return paymentFeignService.paymentFeignTimeout();
    }
}
Modify the configuration:
# set the Feign client timeouts (OpenFeign uses Ribbon underneath by default)
ribbon:
  ReadTimeout: 5000     # time allowed for reading available resources from the server after the connection is established
  ConnectTimeout: 5000  # time allowed for establishing the connection under normal network conditions
4.2.4 OpenFeign logging
OpenFeign log levels:
- NONE
- BASIC
- HEADERS
- FULL
@Configuration
public class FeignConfig {
    // configure the OpenFeign log level (feign.Logger)
    @Bean
    Logger.Level feignLoggerLevel() {
        return Logger.Level.FULL; // FULL here as an example; the return value was omitted in the original notes
    }
}
logging:
  level:
    # Feign logging: at what level to monitor which interface
    com.chmingx.springcloud.service.PaymentFeignService: debug
5 Service degradation / circuit breaking (fusing) / rate limiting
Applications in complex distributed architecture have several dependencies, and each dependency inevitably fails at some time.
Service avalanche:
When multiple microservices call each other, suppose microservice A calls microservices B and C, and C in turn calls other microservices — this is the so-called "fan-out". If the response of some microservice on the fan-out link is too slow or it becomes unavailable, the calls to microservice A occupy more and more system resources, eventually crashing the system: the so-called avalanche effect.
5.1 Hystrix
5.1.1 Concept
Hystrix is an open-source library for handling latency and fault tolerance in distributed systems. In a distributed system, many dependencies inevitably fail (timeouts, exceptions, and so on). Hystrix ensures that the failure of one dependency does not bring down the whole service; it avoids cascading failures and improves the resilience of the distributed system.
The circuit breaker itself is a kind of switch. When a service unit fails, the circuit breaker's fault monitoring (similar to a blown fuse) returns an expected, processable alternative response (fallback) to the caller instead of a long wait or an exception the caller cannot handle. This keeps the caller's threads from being tied up unnecessarily and prevents faults from spreading through the distributed system or even causing an avalanche.
Service degradation: when the service is unavailable, provide a fallback — an alternative response. Triggers include:
- Program exceptions
- Timeouts
- An open circuit breaker (fusing) also triggers degradation
- A full thread pool / exhausted semaphore also triggers degradation
Service circuit breaking (fusing): like a fuse, when the maximum service load is reached, access is denied directly, the "power" is cut, and the degradation method is called to return a friendly prompt.
Service rate limiting: for flash sales and other high-concurrency scenarios, don't let everyone rush in at once; queue up, N requests per second, processed in order (a generic sketch of the queueing idea follows below).
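A generic sketch of the queueing idea mentioned in the rate-limiting bullet above, using a plain Java semaphore to cap concurrent access; this only illustrates the concept and is not how Hystrix or Sentinel implement limiting.

import java.util.concurrent.Semaphore;

public class SimpleLimiterDemo {
    // at most 5 requests are served at the same time; the rest wait in line
    private static final Semaphore PERMITS = new Semaphore(5);

    static String handleRequest(String user) throws InterruptedException {
        PERMITS.acquire();            // queue up until a permit is free
        try {
            Thread.sleep(100);        // simulated business work
            return "served " + user;
        } finally {
            PERMITS.release();        // hand the permit to the next waiter
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 20; i++) {
            final int n = i;
            new Thread(() -> {
                try {
                    System.out.println(handleRequest("user-" + n));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}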
5.1.2 Building the Hystrix modules
Modules: cloud-provider-hystrix-payment8001 + cloud-consumer-feign-hystrix-order80
- pom
- yaml
- Main startup class
- Business class
Use JMeter for load testing.
Under load, the other endpoints at the same level in the 8001 service are dragged down because the worker threads in the Tomcat thread pool have all been taken; client calls become slow and even time out. Techniques such as degradation, fault tolerance and rate limiting exist precisely because of these failures and performance problems.
Solution requirements:
- Timeout - no longer waiting, there must be service degradation
- Errors, downtime or program errors -- errors must be explained, and there must be service degradation
- The consumer itself may fail, or may be willing to wait less time than the provider needs; in that case the consumer handles the degradation itself
@HystrixCommand
5.1.4.1 Handling it on the provider (service) side
Set a timeout peak for the service's own calls; within the peak it runs normally, and beyond it there must be a fallback method to handle the degradation.
- Business class enable
@Service
public class PaymentService {

    // normal access
    public String paymentInfo_OK(Integer id) { ... }

    /**
     * If this method times out, it is handed over to paymentInfo_TimeoutHandler.
     */
    @HystrixCommand(fallbackMethod = "paymentInfo_TimeoutHandler", commandProperties = {
            // the timeout for this method is 1 second; it can also be configured in yaml
            @HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "1000")
    })
    public String paymentInfo_Timeout(Integer id) {
        int timeNumber = 3; // simulated processing time in seconds (the exact value is not shown in the original notes)
        try {
            TimeUnit.SECONDS.sleep(timeNumber);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return "thread pool: " + Thread.currentThread().getName() + "\t"
                + "paymentInfo_Timeout, id: " + id + "\t" + "OK!!!" + "\t"
                + "time consumed: " + timeNumber;
    }

    // fallback for paymentInfo_Timeout; its signature must match the original method
    public String paymentInfo_TimeoutHandler(Integer id) { ... }
}
# change the default timeout
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 3000
- Activate in main startup class
@SpringBootApplication
@EnableEurekaClient
@EnableCircuitBreaker // activate service degradation / the circuit breaker
public class PaymentHystrixMain8001 { ... }

5.1.4.2 Handling it on the consumer (client) side — degradation is usually placed on the client
IDEA's hot deployment notices changes to Java code but not changes to the properties inside @HystrixCommand; restart the microservice in that case.
- Enable client support service degradation
feign: hystrix: enabled: true
- Main start
@SpringBootApplication
@EnableFeignClients
@EnableHystrix // enable Hystrix
public class OrderHystrixMain80 { ... }
- controller configuration fallback
@GetMapping("/consumer/payment/hystrix/timeout/") @HystrixCommand(fallbackMethod = "paymentTimeoutFallback", commandProperties = { @HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "500") }) public String paymentInfo_Timeout(@PathVariable("id") Integer id) { String result = paymentHystrixService.paymentInfo_Timeout(id); log.info(result); return result; } public String paymentTimeoutFallback(@PathVariable("id") Integer id) { return "I'm consumer 80. The other party's payment system is busy. Please try again later, or there is an error in your operation. Please check yourself"; }5.1.4.3 global configuration Fallback
- Method 1: use @DefaultProperties(defaultFallback = "...") on the class; a method with its own configured fallback uses that fallback, and a method without one uses the global fallback
@RestController
@Slf4j
@DefaultProperties(defaultFallback = "paymentGlobalFallback") // configure the global fallback
public class OrderHystrixController {
    @Autowired
    private PaymentHystrixService paymentHystrixService;

    @GetMapping("/consumer/payment/hystrix/global/{id}")
    @HystrixCommand // when the call fails, the degraded method is used
    public String paymentInfo_Global(@PathVariable("id") Integer id) { ... }

    public String paymentGlobalFallback() { ... }
}
- Method 2: optimize further with @FeignClient: provide a fallback implementation class for the interface defined by the Feign client, which decouples the fallback from the business code; this is often used to handle provider downtime
@Component
@FeignClient(value = "CLOUD-PROVIDER-HYSTRIX-PAYMENT", fallback = PaymentFallbackService.class)
public interface PaymentHystrixService {
    @GetMapping("/payment/hystrix/ok/{id}")
    String paymentInfo_OK(@PathVariable("id") Integer id);

    @GetMapping("/payment/hystrix/timeout/{id}")
    String paymentInfo_Timeout(@PathVariable("id") Integer id);
}
// implement the Feign interface as the fallback
@Component
public class PaymentFallbackService implements PaymentHystrixService {
    @Override
    public String paymentInfo_OK(Integer id) { ... }

    @Override
    public String paymentInfo_Timeout(Integer id) { ... }
}

5.1.5 Service circuit breaking (fusing)
Fusing mechanism: the fusing mechanism is a microservice link protection mechanism to deal with the avalanche effect. When a microservice in the fan out link is unavailable or the response time is too long, the microservice will be degraded, and then the microservice call of the node will be fused to quickly return the wrong response information. When it is detected that the microservice call response of the node is normal, the calling link will be restored.
In Spring Cloud, the circuit-breaker mechanism is implemented through Hystrix, which monitors the status of calls between microservices. When failed calls reach a certain threshold (by default, at least 20 requests within the 10-second rolling window with the error percentage over the threshold — see the three elements below), the circuit breaker opens. The annotation is still @HystrixCommand.
@Service
public class PaymentService {
    // ---------- service circuit breaking ----------
    @HystrixCommand(fallbackMethod = "paymentCircuitBreaker_fallback", commandProperties = {
            @HystrixProperty(name = "circuitBreaker.enabled", value = "true"),                    // enable the circuit breaker
            @HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "10"),       // minimum number of requests in the window
            @HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "10000"), // sleep window: how long before a recovery attempt
            @HystrixProperty(name = "circuitBreaker.errorThresholdPercentage", value = "60")      // error percentage at which the breaker trips
    })
    public String paymentCircuitBreaker(@PathVariable("id") Integer id) {
        // ... (error branch omitted in the original)
        String serialNumber = IdUtil.simpleUUID(); // UUID.randomUUID().toString();
        return Thread.currentThread().getName() + "\t" + "call succeeded, serial number: " + serialNumber;
    }

    public String paymentCircuitBreaker_fallback(@PathVariable("id") Integer id) { ... }
}
Three elements:
- Snapshot (rolling) time window: the request and error statistics used to decide whether to open the breaker are collected within a rolling window, 10 seconds by default
- Request volume threshold: within the snapshot window, the total number of requests must reach this threshold before the breaker is even eligible to open. The default is 20: if fewer than 20 calls are made within 10 seconds, the breaker will not open even if every one of them times out or fails
- Error percentage threshold: once the request volume threshold is exceeded within the window — say 30 calls, 15 of which failed — the error percentage (here over 50%) is compared with the threshold (50% by default); if it is exceeded, the breaker opens and the service is fused
Once the service is fused, the main logic is no longer called; the fallback is called directly. The breaker detects errors automatically and switches from the main logic to the degraded logic, reducing response latency.
Q: How does the service recover?
When the breaker opens and the main logic is fused, Hystrix starts a sleep window. During this window the degraded logic temporarily becomes the main logic. When the window expires, the breaker enters the half-open state and lets one request through to the original main logic. If that request succeeds, the breaker closes and the main logic is restored; if it still fails, the breaker goes back to the open state and the sleep window timer restarts.
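A toy sketch of the closed --> open --> half-open cycle described above, only to make the state machine concrete; the thresholds are illustrative and this is not Hystrix's actual implementation.

public class ToyCircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int failures = 0;
    private long openedAt = 0;

    private static final int FAILURE_THRESHOLD = 5;      // trip after 5 consecutive failures (illustrative)
    private static final long SLEEP_WINDOW_MS = 10_000;  // stay open for 10s before probing

    public synchronized boolean allowRequest() {
        if (state == State.OPEN && System.currentTimeMillis() - openedAt >= SLEEP_WINDOW_MS) {
            state = State.HALF_OPEN;   // sleep window elapsed: let one probe request through
        }
        return state != State.OPEN;
    }

    public synchronized void recordSuccess() {
        failures = 0;
        state = State.CLOSED;          // probe succeeded: close the breaker and restore the main logic
    }

    public synchronized void recordFailure() {
        failures++;
        if (state == State.HALF_OPEN || failures >= FAILURE_THRESHOLD) {
            state = State.OPEN;        // trip (or re-trip after a failed probe) and restart the timer
            openedAt = System.currentTimeMillis();
        }
    }
}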
5.1.6 Service rate limiting
Omitted.
5.1.7 Hystrix workflow
5.1.8 Hystrix Dashboard
Hystrix Dashboard provides near-real-time call monitoring.
Module: cloud-consumer-hystrix-dashboard9001
- pom
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-hystrix-dashboard</artifactId>
</dependency>
- yaml
server: port: 9001
- main
@SpringBootApplication
@EnableHystrixDashboard
public class HystrixDashboardMain9001 {
    public static void main(String[] args) {
        SpringApplication.run(HystrixDashboardMain9001.class, args);
    }
}
- The monitored microservices need to include spring-boot-starter-actuator
- visit http://localhost:9001/hystrix
- Configure microservices to monitor
@SpringBootApplication
@EnableEurekaClient
@EnableCircuitBreaker
public class PaymentHystrixMain8001 {
    public static void main(String[] args) {
        SpringApplication.run(PaymentHystrixMain8001.class, args);
    }

    /**
     * This configuration is only for service monitoring; it has nothing to do with service fault tolerance itself.
     * It is a pitfall introduced by the Spring Cloud upgrade: Spring Boot's default path is no longer /hystrix.stream,
     * so register the servlet below in your own project.
     */
    @Bean
    public ServletRegistrationBean getServlet() {
        HystrixMetricsStreamServlet streamServlet = new HystrixMetricsStreamServlet();
        ServletRegistrationBean registrationBean = new ServletRegistrationBean(streamServlet);
        registrationBean.setLoadOnStartup(1);
        registrationBean.addUrlMappings("/hystrix.stream");
        registrationBean.setName("HystrixMetricsStreamServlet");
        return registrationBean;
    }
}
- Enter the address of the microservice to monitor (its /hystrix.stream endpoint) into the Hystrix Dashboard to view the monitoring data
6 service gateway
6.1 GateWay overview
GateWay: it is an API GateWay service built on the Spring ecosystem, based on Spring 5, Spring Boot2, Project Reactor, etc. It aims to provide a simple and effective way to route APIs and provide some powerful filter functions, such as fusing, current limiting and retry. GateWay is developed based on asynchronous non blocking model, so don't worry about performance.
GateWay properties:
- It is built based on spring 5, Spring Boot2 and Project Reactor
- Dynamic routing: it can match any request attribute
- You can specify Predicate and Filter for routes
- Integrated circuit breaker function of Hystrix
- Integrating spring cloud service discovery
- Easy to write Predicate and Filter
- Request current limiting function
- Support path rewriting
Q: What is the servlet lifecycle in a Java web application?
A servlet is managed by the servlet container. The container creates the servlet instance and calls init() to initialize it. At runtime the container accepts requests and assigns a thread to each request (usually taken from a thread pool), which calls service(). When the container shuts down, it calls destroy() to destroy the servlet.
This is a blocked network I/O
Non blocking asynchronous I/O since servlet 3.1
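A minimal HttpServlet sketch of the init/service/destroy lifecycle described above (javax.servlet API), purely illustrative:

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

public class LifecycleServlet extends HttpServlet {

    @Override
    public void init() {
        // called once by the container after the servlet instance is created
        System.out.println("init: servlet ready");
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // service()/doGet() runs on a container-managed thread per request (blocking I/O)
        resp.getWriter().write("hello");
    }

    @Override
    public void destroy() {
        // called once when the container shuts down or unloads the servlet
        System.out.println("destroy: servlet released");
    }
}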
Concept:
- Route: the basic building block of the gateway. It consists of an ID, a target URI, a series of assertions (predicates) and filters; if the assertions evaluate to true, the route is matched
- Assertion (Predicate): modeled on java.util.function.Predicate from Java 8. Developers can match anything in the HTTP request (headers, request parameters, and so on); if the request matches the assertions, it is routed
- Filter: an instance of GatewayFilter in the Spring framework; filters can modify the request before or after it is routed
A web request is matched to a real service node by these conditions, with fine-grained control before and after forwarding. The predicate is the matching condition; the filter can be thought of as an all-purpose interceptor. With these two elements plus the target URI, a concrete route can be implemented.
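Since the assertion concept is modeled on java.util.function.Predicate, here is a plain-Java toy showing "test the request, then route it"; this is not Gateway's real API, just the idea behind a Path predicate:

import java.util.function.Predicate;

public class PredicateToy {
    public static void main(String[] args) {
        // a predicate over the request path, analogous to the Path=/payment/** assertion used later
        Predicate<String> pathMatches = path -> path.startsWith("/payment/");

        String requestPath = "/payment/get/1";
        if (pathMatches.test(requestPath)) {
            System.out.println("route matched, forward to the payment service");
        } else {
            System.out.println("no route matched, return 404");
        }
    }
}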
Gateway process:
The client sends a request to the SpringCloud GateWay, then finds the route matching the request in the Gateway Handler Mapping and sends it to the Gateway Web Handler. The Handler sends the request to our actual service through the specified filter chain, executes the business logic, and then returns. The filter can execute logic before or after sending the proxy request. For example, parameter verification, permission verification, traffic monitoring, log output, protocol conversion, or modifying response content, response header, etc.
6.2 GateWay project construction
π¦ cloud-gateway-gateway9527
- pom
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
<!-- Do not introduce spring-boot-starter-web -->
- yaml
server:
  port: 9527
spring:
  application:
    name: cloud-gateway
eureka:
  instance:
    hostname: cloud-gateway-service  # name under which the gateway registers in the Eureka service list
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url:
      defaultZone: http://eureka7001.com:7001/eureka
- Main startup class
@SpringBootApplication
@EnableEurekaClient
public class GateWayMain9527 {
    public static void main(String[] args) {
        SpringApplication.run(GateWayMain9527.class, args);
    }
}
- How to configure the gateway? For example, if you don't want to expose the 8001 port of payment8001, you want to set a layer of 9527 outside the 8001
spring:
  cloud:
    gateway:
      routes:
        - id: payment_routh                # route id: no fixed rule, but it must be unique; matching the service name is recommended
          uri: http://localhost:8001       # address of the providing service once the route matches
          predicates:
            - Path=/payment/get/**         # assertion: route if the path matches
        - id: payment_routh2
          uri: http://localhost:8001
          predicates:
            - Path=/payment/lb/**
- test
# before configuring the gateway
$ curl http://localhost:8001/payment/get/1
# after configuring the gateway, the real address can be hidden behind it
$ curl http://localhost:9527/payment/get/1
6.3 two ways to configure routing
- yaml configuration, see the previous section
- Inject a RouteLocator bean in code:
@Configuration
public class GateWayConfig {
    @Bean
    public RouteLocator customRouteLocator(RouteLocatorBuilder routeLocatorBuilder) {
        RouteLocatorBuilder.Builder routes = routeLocatorBuilder.routes();
        routes.route("customer_route_locator",
                r -> r.path("/guonei").uri("http://news.baidu.com/guonei")).build();
        return routes.build();
    }
}
6.4 dynamically configure routing by microservice name
(Figure: Gateway dynamic routing — images/gateway.png)
By default, the Gateway will create a dynamic route for forwarding according to the service list registered in the registry and the micro service name on the registry, so as to realize the function of dynamic routing
# dynamic routing by microservice name
spring:
  application:
    name: cloud-gateway
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true                       # enable creating routes dynamically from the registry, using the microservice name for routing
      routes:
        - id: payment_routh                   # route id, must be unique
          uri: lb://cloud-provider-payment    # route to the service by name; the lb protocol enables load balancing
          predicates:
            - Path=/payment/get/**            # assertion: route if the path matches
        - id: payment_routh2
          uri: lb://cloud-provider-payment
          predicates:
            - Path=/payment/lb/**
6.5 Common predicates
- after
spring:
  cloud:
    gateway:
      routes:
        - id: after_route
          uri: https://example.org
          predicates:   # the route only takes effect after this time
            - After=2017-01-20T17:42:47.789-07:00[America/Denver]
- before
- between
- cookie
spring:
  cloud:
    gateway:
      routes:
        - id: cookie_route
          uri: https://example.org
          predicates:
            - Cookie=chocolate, ch.p   # cookie name, then a regular expression for its value
# equivalent to issuing a GET request without a cookie
$ curl http://localhost:9527/payment/lb
# curl carrying a cookie
$ curl http://localhost:9527/payment/lb --cookie "chocolate=chip"
- header
spring:
  cloud:
    gateway:
      routes:
        - id: header_route
          uri: https://example.org
          predicates:
            - Header=X-Request-Id, \d+
# curl with the request header specified
$ curl http://localhost:9527/payment/lb -H "X-Request-Id:123"
6.6 Gateway Filter
The routing filter can be used to modify the incoming HTTP request and the returned HTTP response. The routing filter can only be used by specifying a route. Spring Cloud Gateway has built-in multiple routing filters, which are produced by the factory class of GatewayFilter
Life cycle:
- pre
- post
Type:
- GatewayFilter
- GlobalFilter
For usage, see the official documentation. For example:
- id: payment_routh2
  uri: lb://cloud-provider-payment
  predicates:
    - Path=/payment/lb/**
    - Method=GET,POST
  filters:
    - AddRequestParameter=X-Request-Id,1024   # this filter factory adds a request parameter X-Request-Id=1024 to matching requests
6.6.1 Custom filters
/**
 * Custom global filter
 */
@Component
@Slf4j
public class MyLogGatewayFilter implements GlobalFilter, Ordered {
    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        log.info("**** come in MyLogGatewayFilter: " + new Date());
        String uname = exchange.getRequest().getQueryParams().getFirst("uname");
        if (uname == null) {
            log.info("****** user name is null: illegal user, access denied ****");
            exchange.getResponse().setStatusCode(HttpStatus.NOT_ACCEPTABLE);
            return exchange.getResponse().setComplete();
        }
        return chain.filter(exchange);
    }

    @Override
    public int getOrder() {
        return 0; // the filter's priority; the smaller the value, the higher the priority
    }
}
test
$ curl http://localhost:9527/payment/lb?uname=zs
7 service configuration
7.1 general
Spring Cloud Config provides centralized external configuration support for the microservices in a microservice architecture; the config server offers a central external configuration hub for every environment of the different microservice applications.
- Server side: the distributed configuration center is an independent microservice application. It connects to the configuration repository and provides access interfaces for clients to obtain configuration information and to encrypt/decrypt it.
- Client side: a service manages its application and business configuration through the specified configuration center, and fetches and loads that configuration from the config center at startup. The config server stores configuration in Git by default, which makes it easy to version environment configuration and to manage and access it with Git client tools.
(Figure: Spring Cloud Config — images/springcloudconfig.png)
7.2 Config server setup
π¦ cloud-config-center-3344
- Create a git repository: [email protected]:chmingx/springcloud-config.git
- pom
<dependencies>
    <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-config-server</artifactId> </dependency>
    <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId> </dependency>
    <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency>
    <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency>
    <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency>
    <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> </dependency>
    <dependency> <groupId>org.projectlombok</groupId> <artifactId>lombok</artifactId> <optional>true</optional> </dependency>
</dependencies>
- yaml
server:
  port: 3344
spring:
  application:
    name: cloud-config-center   # microservice name registered with the Eureka server
  cloud:
    config:
      server:
        git:
          # uri: [email protected]:chmingx/springcloud-config.git   # ssh form; fails when the OpenSSH key format is too new
          uri: https://gitee.com/chmingx/springcloud-config.git   # git repository on gitee
          search-paths:
            - springcloud-config   # search directory
          force-pull: true
          username: chmingx
          password: 1024.chm
      label: master   # branch to read
eureka:
  client:
    service-url:
      defaultZone: http://eureka7001.com:7001/eureka   # register with Eureka
- Main startup class
@SpringBootApplication
@EnableConfigServer
public class ConfigCenterMain3344 {
    public static void main(String[] args) {
        SpringApplication.run(ConfigCenterMain3344.class, args);
    }
}
- test
$ curl http://config-3344.com:3344/master/config-dev.yml
$ curl http://config-3344.com:3344/config-dev.yml
$ curl http://config-3344.com:3344/config-dev.yml/master
Be careful:
If the ssh uri fails because the OpenSSH version is too new, regenerate the key in the old format: ssh-keygen -m PEM -t rsa. The -m parameter specifies the key format; PEM (the RSA format) is the old format used previously.
7.3 configuring client setup
- application.yaml is a user level resource configuration item
- bootstrap.yaml is system level with higher priority β
Spring Cloud will create a Bootstrap Context as the parent context of the Application Context of spring application. During initialization, the Bootstrap Context is responsible for loading configuration properties from external sources and parsing the configuration. The two contexts share an Environment obtained from the outside
Bootstrap properties have higher priority. By default, they are not overwritten by local configuration. Bootstrap Context and ApplicationContext have different conventions.
bootstrap.yaml has higher priority than application.yaml. Load it first
Module: cloud-config-client-3355
- pom
<dependencies>
    <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-config</artifactId> </dependency>
    <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId> </dependency>
    <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency>
    <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency>
    <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> </dependency>
    <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency>
    <dependency> <groupId>org.projectlombok</groupId> <artifactId>lombok</artifactId> <optional>true</optional> </dependency>
</dependencies>
- bootstrap.yaml
server:
  port: 3355
spring:
  application:
    name: config-client
  cloud:
    # Config client configuration
    config:
      label: master   # branch name
      name: config    # configuration file name
      profile: dev    # suffix; the three together mean: read config-dev.yml on the master branch,
                      # i.e. http://config-3344.com:3344/master/config-dev.yml
      uri: http://config-3344.com:3344
eureka:
  client:
    service-url:
      defaultZone: http://eureka7001.com:7001/eureka
- Main startup class
@SpringBootApplication@EnableEurekaClientpublic class ConfigClientMain3355 { public static void main(String[] args) { SpringApplication.run(ConfigClientMain3355.class, args); }}
- Add a controller to access the configuration file in a RESTful style
@RestControllerpublic class ConfigClientController { @Value("${config.info}") private String configInfo; @GetMapping("/configInfo") public String getConfigInfo() { return configInfo; }}
- test
$ curl http://localhost:3355/configInfo
Question: if someone modifies the configuration file, the config center (3344) picks up the change immediately, but the config client (3355) only sees it after a restart. How can the client refresh dynamically?
7.4 dynamic refresh configuration
- Add actuator to the pom
<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId></dependency>
- Modify yaml to expose the monitoring endpoint
# bootstrap.yaml
# expose monitoring endpoints
management:
  endpoints:
    web:
      exposure:
        include: "*"   # expose all monitoring information; can also be limited to specific endpoints such as info, health
- Add @RefreshScope to the controller
@RestController
@RefreshScope // beans that need hot refresh add @RefreshScope
public class ConfigClientController {

    @Value("${config.info}")
    private String configInfo;

    @GetMapping("/configInfo")
    public String getConfigInfo() {
        return configInfo;
    }
}
- Test: modify the configuration file on gitee --> cloud-config-center-3344 shows the change --> cloud-config-client-3355 does not
- After the configuration file in the gitee repository is modified, operations staff need to send a POST request to cloud-config-client-3355
$ curl -X POST "http://localhost:3355/actuator/refresh"
- Test: modify the configuration file on gitee --> cloud-config-center-3344 shows the change --> after the POST refresh, cloud-config-client-3355 shows it as well
This manual refresh is still cumbersome: with many microservices, each one needs its own POST request, and there is no good way to refresh only some of them. The message bus is introduced to handle this.
8 Service Bus
8.1 general
Message bus: in a microservice architecture, a lightweight message broker is usually used to build a shared message topic and connect all microservice instances of the system. Because messages published to this topic are monitored and consumed by all instances, it is called a message bus.
Spring Cloud Bus is a framework that connects the nodes of a distributed system with a lightweight message system. It integrates Java's event-handling mechanism with message middleware and can manage and propagate messages between distributed systems, like a distributed actuator: it can be used to broadcast state changes, push events, and so on.
Spring Cloud Bus + Spring Cloud Config can dynamically and automatically refresh the configuration
Support: RabbitMQ + Kafka
Principle: each Config Client instance listens to the same topic in MQ (the default topic name is springCloudBus). When one microservice refreshes its data, it publishes a message to this topic, so every other service listening to the topic is notified and updates its own configuration.
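As an illustration of this principle, here is a minimal sketch (not from the course code) of a listener that observes bus events arriving on any client. It assumes spring-cloud-starter-bus-amqp is on the classpath; the event classes come from spring-cloud-bus.

import org.springframework.cloud.bus.event.RemoteApplicationEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class BusEventLogger {

    // Events that arrive on the springCloudBus topic are re-published into the local
    // ApplicationContext, so they can be observed with a plain Spring event listener.
    @EventListener
    public void onBusEvent(RemoteApplicationEvent event) {
        System.out.println("bus event from " + event.getOriginService()
                + " to " + event.getDestinationService());
    }
}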
8.2 Spring Cloud Bus dynamic refresh: global broadcast
# run rabbitmq with docker
$ docker run -d -p 5672:5672 -p 15672:15672 --name rabbitmq rabbitmq:management
Design: use the message bus to trigger /actuator/bus-refresh on one config server, which refreshes the configuration of all clients.
8.2.1 Add message bus support to the config center server: cloud-config-center-3344
- pom adds message bus RabbitMQ support
<!--Add message bus RabbitMQ support--><dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-bus-amqp</artifactId></dependency>
- Modify yaml
spring:
  # rabbitmq related configuration
  rabbitmq:
    host: 192.168.80.130
    port: 5672
    username: guest
    password: guest

# expose the bus refresh endpoint
management:
  endpoints:
    web:
      exposure:
        include: 'bus-refresh'   # use single quotation marks
bus-refresh is essentially the actuator refresh operation, broadcast over the bus.
8.2.2 Add message bus support to the config clients
- cloud-config-client-3355
- cloud-config-client-3366
- pom
<dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-bus-amqp</artifactId></dependency>
- yaml
spring:
  # rabbitmq configuration
  rabbitmq:
    host: 192.168.80.130
    port: 5672
    username: guest
    password: guest

# expose monitoring endpoints
management:
  endpoints:
    web:
      exposure:
        include: "*"

8.2.3 Testing
- Modify the configuration file on gitee
- Send POST request
curl -X POST "http://localhost:3344/actuator/bus-refresh"
- Result: change on gitee --> config-center refreshed --> all config-clients refreshed
8.3 Spring Cloud Bus dynamic refresh: targeted notification
A specific instance can be notified instead of all of them.
Format: http://localhost:3344/actuator/bus-refresh/{destination}, where destination is {spring.application.name}:{server.port}
# only notify 3355, not 3366
$ curl -X POST "http://localhost:3344/actuator/bus-refresh/config-client:3355"
9 message driven
9.1 Spring Cloud Stream overview
As the middle layer, spring cloud stream shields the differences between the underlying message middleware, reduces the switching cost, and unifies the programming model of message
https://spring.io/projects/spring-cloud-stream#overview
The application interacts with the binder object in spring cloud stream through input or output, and the binder is responsible for interacting with the message middleware. Therefore, it is only necessary to figure out how to interact with spring cloud stream to facilitate the use of message driven methods
Currently supported: RabbitMQ / Kafka
(figure: images/stream.png)
The message communication mode in spring cloud stream follows the publish subscribe mode. Topics are broadcast. RabbitMQ is Exchange and Kafka is Topic
- binder: connect middleware to shield differences
- Channel: it is an abstraction of queue. In the message communication system, it is the medium for storage and forwarding. The queue is configured through channel
- source/sink: publishing a message to the stream is output (Source); receiving a message is input (Sink). A sketch of a custom binding interface follows this list.
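To make the channel abstraction concrete, here is a minimal sketch (not from the course code) of a custom binding interface in the annotation-based model used in this chapter. The channel names orderOutput/orderInput are illustrative and would map to spring.cloud.stream.bindings.<name> entries in the yaml.

import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;

public interface OrderStreams {

    @Output("orderOutput")      // source side: publish messages to this channel
    MessageChannel orderOutput();

    @Input("orderInput")        // sink side: receive messages from this channel
    SubscribableChannel orderInput();
}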
9.2 message driven producers
π¦ cloud-stream-rabbit-provider8801
- pom
<!--spring cloud stream rabbit--><dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-stream-rabbit</artifactId></dependency>
- yaml
server:
  port: 8801
spring:
  application:
    name: cloud-stream-provider
  rabbitmq:
    host: 192.168.80.130
    port: 5672
    username: guest
    password: guest
  cloud:
    stream:
      binders:                # configure the rabbitmq service information to bind
        defaultRabbit:        # name of the binder, used by the bindings below
          type: rabbit        # message middleware type
          # environment:      # the rabbitmq settings can also be nested here instead of under spring.rabbitmq
      bindings:               # channel integration
        output:               # channel name
          destination: studyExchange        # name of the Exchange to use
          content-type: application/json    # message type; for plain text use "text/plain"
          binder: defaultRabbit             # binder (message service) to use
eureka:
  client:
    service-url:
      defaultZone: http://eureka7001.com:7001/eureka
  instance:
    lease-renewal-interval-in-seconds: 2      # heartbeat interval of 2 seconds
    lease-expiration-duration-in-seconds: 5   # expire if no heartbeat for more than 5 seconds
    instance-id: send-8801.com                # host name shown in the info list
    prefer-ip-address: true                   # access path becomes an IP address
- Main startup class
@SpringBootApplication@EnableEurekaClientpublic class StreamMQMain8801 { public static void main(String[] args) { SpringApplication.run(StreamMQMain8801.class, args); }}
- Interface to access RabbitMQ
public interface IMessageProvider { public String send();}
- Implementation interface
/**
 * The service interacts with rabbitmq directly, so no @Service annotation is needed
 */
@EnableBinding(Source.class) // define the message push pipeline
public class MessageProviderImpl implements IMessageProvider {

    @Autowired
    private MessageChannel output; // message send pipeline, corresponds to "output" in the configuration file

    @Override
    public String send() {
        // send a UUID as the message payload
        String serial = UUID.randomUUID().toString();
        output.send(MessageBuilder.withPayload(serial).build());
        return serial;
    }
}
- controller
@RestControllerpublic class SendMessageController { @Autowired private IMessageProvider messageProvider; @GetMapping(value = "/sendMessage") public String sendMessage() { return messageProvider.send(); }}
- test
curl http://localhost:8801/sendMessage
9.3 message driven consumers
π¦ cloud-stream-rabbitmq-consumer8802
π¦ cloud-stream-rabbitmq-consumer8803
- pom
<dependencies> <!--spring cloud stream rabbit--> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-stream-rabbit</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.projectlombok</groupId> <artifactId>lombok</artifactId> <optional>true</optional> </dependency> </dependencies>
- yaml
server:
  port: 8802
spring:
  application:
    name: cloud-stream-consumer
  rabbitmq:
    host: 192.168.80.130
    port: 5672
    username: guest
    password: guest
  cloud:
    stream:
      binders:                # configure the rabbitmq service information to bind
        defaultRabbit:        # name of the binder, used by the bindings below
          type: rabbit        # message middleware type
      bindings:               # channel integration
        input:                # channel name
          destination: studyExchange        # name of the Exchange to use
          content-type: application/json    # message type; for plain text use "text/plain"
          binder: defaultRabbit             # binder (message service) to use
eureka:
  client:
    service-url:
      defaultZone: http://eureka7001.com:7001/eureka
  instance:
    lease-renewal-interval-in-seconds: 2      # heartbeat interval of 2 seconds
    lease-expiration-duration-in-seconds: 5   # expire if no heartbeat for more than 5 seconds
    instance-id: receive-8802.com             # host name shown in the info list
    prefer-ip-address: true                   # access path becomes an IP address
- Main startup class
@SpringBootApplication@EnableEurekaClientpublic class StreamMQMain8802 { public static void main(String[] args) { SpringApplication.run(StreamMQMain8802.class, args); }}
- Business class
@Component@EnableBinding(Sink.class)public class ReceiveMessageListenerController { @Value("${server.port}") private String serverPort; @StreamListener(Sink.INPUT) public void input(Message<String> message) { System.out.println("consumer 1, ----> receive: " + message.getPayload() + "\t" + "Port: " + serverPort); }}
9.4 grouping
Problem: repeated consumption
For example, if an order is received by two services at the same time, it will cause data errors. We have to avoid this situation. It can be solved by message grouping of Stream.
Multiple consumers in the same group in the Stream are competitive, which can ensure that messages will only be consumed once by one application, and different groups can consume repeatedly
Modify yaml to group consumers
bindings:              # channel integration
  input:               # channel name
    destination: studyExchange        # name of the Exchange to use
    content-type: application/json    # message type; for plain text use "text/plain"
    binder: defaultRabbit             # binder (message service) to use
    group: chmingxA                   # consumer group
Test: remove the group from 8802 and keep the group on 8803, then stop both 8802 and 8803 and let 8801 send messages. After 8802 restarts, it does not receive the messages sent while it was down; after 8803 restarts, it does.
This is because the exchange delivers messages into queues. 8802 restarts without a group, so it creates a new anonymous queue and listens to that, while 8803 still listens to its original group queue, where the messages were retained.
Therefore grouping also provides message persistence and prevents message loss.
10 Sleuth distributed request link tracking
Sleuth collects the call-link data; Zipkin presents it
Monitoring the invocation of microservices
- Trace: a set of Spans organized like a tree; it represents one call link, identified by a unique trace ID
- Span: one unit of work (essentially one request/call) inside the link; spans reference each other to form the link (see the sketch below)
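As a small illustration (not part of the course modules), the current trace and span IDs can be read inside a traced request; this assumes the Brave Tracer bean auto-configured by the zipkin/sleuth starter, and the endpoint name is illustrative.

import brave.Span;
import brave.Tracer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TraceDemoController {

    @Autowired
    private Tracer tracer;

    @GetMapping("/traceDemo")
    public String traceDemo() {
        // every request handled under Sleuth carries a current span; its context holds
        // the trace id shared by all spans of the same call link
        Span span = tracer.currentSpan();
        return "traceId = " + (span == null ? "none" : span.context().traceIdString());
    }
}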
π¦ cloud-consumer-order80
π¦ cloud-provider-payment8001
Add the following to both modules:
- pom
<!--spring cloud zipkin Contains sleuth--><dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-zipkin</artifactId></dependency>
- yaml
spring:
  application:
    name: cloud-provider-payment
  # microservice link monitoring configuration
  zipkin:
    base-url: http://localhost:9411
  sleuth:
    sampler:
      probability: 1   # sampling rate, between 0 and 1; 1 means collect everything
11 Nacos
11.1 general
A platform for dynamic service discovery, configuration management and service management that makes it easier to build cloud native applications.
Nacos: dynamic naming and configuration service
Nacos = registration center + configuration center
Nacos=Eureka+Config+Bus
$ docker ps -a | grep Exited | awk '{print $1}'               # view all containers that are not running
$ docker rm `docker ps -a | grep Exited | awk '{print $1}'`   # delete all stopped containers
$ docker rm $(sudo docker ps -a -q)
$ docker pull nacos/nacos-server
$ docker run --env MODE=standalone --name nacos -d -p 8848:8848 nacos/nacos-server
$ firewall-cmd --zone=public --query-port=8848/tcp            # query whether the port is open
$ firewall-cmd --zone=public --add-port=8848/tcp --permanent  # permanently open port 8848
π₯ Remember to turn off the firewall in the virtual machine
11.2 Registration Center
11.2.1 Register the service providers (project cloud04)
π¦ cloudalibaba-provider-payment9001
π¦ cloudalibaba-provider-payment9002
- pom
<dependencies> <!--spring cloud alibaba nacos--> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.projectlombok</groupId> <artifactId>lombok</artifactId> <optional>true</optional> </dependency></dependencies>
- yaml
server:
  port: 9001
spring:
  application:
    name: nacos-payment-provider
  cloud:
    nacos:
      discovery:
        server-addr: localhost:8848   # nacos address

# expose monitoring endpoints
management:
  endpoints:
    web:
      exposure:
        include: '*'
- Main startup class
@SpringBootApplication@EnableDiscoveryClientpublic class PaymentMain9001 { public static void main(String[] args) { SpringApplication.run(PaymentMain9001.class, args); }}
- Business class
@RestControllerpublic class PaymentController { @Value("${server.port}") private String serverPort; @GetMapping("/payment/{id}") public String getPaymentById(@PathVariable("id") Integer id) { return "nacos registry, serverPort: " + serverPort + "\t id" + id; }}
- see http://localhost:8848/nacos
π¦ cloudalibaba-consumer-order83
11.2.3 Summary: comparison of registries
C (consistency) means all nodes see the same data at the same time; A (availability) means every request receives a response.
If you do not need to store service-level information, and service instances register through the Nacos client and keep reporting heartbeats, choose AP mode. The current mainstream stacks such as Spring Cloud and Dubbo fit AP mode, which trades consistency for availability; in AP mode only temporary (ephemeral) instances can be registered.
If service-level editing or storage of configuration information is required, CP is needed; K8S services and DNS services use CP mode. CP mode supports registering persistent instances and runs the cluster on the Raft protocol; in this mode the service must be created before instances are registered, otherwise an error is returned.
Nacos supports CP and AP
# switch command
$ curl -X PUT '$NACOS_SERVER:8848/nacos/v1/ns/operator/switches?entry=serverMode&value=CP'
11.3 configuration center
11.3.1 Configuration center project construction
- pom
<dependencies> <!--nacos config--> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId> </dependency> <!--spring cloud alibaba nacos--> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.projectlombok</groupId> <artifactId>lombok</artifactId> <optional>true</optional> </dependency></dependencies>
- bootstrap.yaml
# nacos configuration
server:
  port: 3377
spring:
  application:
    name: nacos-config-client
  cloud:
    nacos:
      discovery:
        server-addr: localhost:8848   # Nacos service registry address
      config:
        server-addr: localhost:8848   # Nacos as configuration center
        file-extension: yaml          # read configuration in yaml format
# dataId = ${spring.application.name}-${spring.profiles.active}.${file-extension}
# e.g. nacos-config-client-dev.yaml / nacos-config-client-test.yaml  ----> config.info
- application.yaml
spring:
  profiles:
    active: dev      # development environment
    # active: test   # test environment
    # active: info
- Main startup class
@SpringBootApplication@EnableDiscoveryClientpublic class NacosConfigClientMain3377 { public static void main(String[] args) { SpringApplication.run(NacosConfigClientMain3377.class, args); }}
- Business class
@RestController
@RefreshScope // supports Nacos dynamic refresh
public class ConfigClientController {

    @Value("${config.info}")
    private String configInfo;

    @GetMapping("/config/info")
    public String getConfigInfo() {
        return configInfo;
    }
}
- Add configuration on nacos
- test
In Nacos Spring Cloud, the complete format of the dataId is:
${prefix}-${spring.profiles.active}.${file-extension}    # e.g. nacos-config-client-dev.yaml
- Prefix defaults to the value of spring.application.name, which can also be configured through the configuration item spring.cloud.nacos.config.prefix
- spring.profiles.active is the profile of the current environment; see the Spring Boot documentation for details. Note: when spring.profiles.active is empty, the connecting '-' disappears and the dataId format becomes ${prefix}.${file-extension}
- file-extension is the data format of the configuration content and can be set with spring.cloud.nacos.config.file-extension. Currently only properties and yaml are supported
- Configuration is refreshed automatically through the Spring Cloud native annotation @RefreshScope
(figure: images/nacos-config.png)
namespace + group + dataid
- Namespace distinguishes deployment environments. The default namespace is public. For example, there are three environments: development, testing and production. We can create three namespaces. Different namespaces are isolated
- Group defaults to DEFAULT_GROUP, which can divide different microservices into one group
- A Service is a microservice; one Service can contain multiple Clusters. Nacos's default Cluster is DEFAULT; a Cluster is a virtual partition of a given microservice.
- Finally, an Instance is one instance of the microservice. A sketch of how these concepts map onto the Nacos naming API follows this list.
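A minimal sketch (not from the course code) of how service, group, cluster and instance map onto the Nacos naming API; it assumes the com.alibaba.nacos:nacos-client dependency, and the names and addresses are illustrative.

import com.alibaba.nacos.api.NamingFactory;
import com.alibaba.nacos.api.exception.NacosException;
import com.alibaba.nacos.api.naming.NamingService;

public class NacosRegisterDemo {
    public static void main(String[] args) throws NacosException {
        // namespace defaults to public; a different one can be supplied via Properties
        NamingService naming = NamingFactory.createNamingService("127.0.0.1:8848");

        // service "nacos-payment-provider" in group "DEFAULT_GROUP", cluster "DEFAULT",
        // with one instance at 127.0.0.1:9001
        naming.registerInstance("nacos-payment-provider", "DEFAULT_GROUP",
                "127.0.0.1", 9001, "DEFAULT");

        System.out.println(naming.getAllInstances("nacos-payment-provider"));
    }
}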
11.4 persistence
- Use nacos/conf/nacos-mysql.sql to create the tables in the nacos_config database
- Run mirror
$ docker run --env MODE=standalone --name nacos -d -p 8848:8848 nacos/nacos-server
$ docker exec -it <container id> bash
$ vim conf/application.properties   # modify the corresponding mysql parameters
- Modify the corresponding mysql parameters in application.properties
spring.datasource.platform=mysql
db.num=1
db.url.0=jdbc:mysql://localhost:3306/nacos_config?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useSSL=true&serverTimezone=Asia/Shanghai
db.user=root
db.password=123456
- Restart nacos
11.5 Nacos cluster
(figure: images/nacos-cluster.png)
By default, Nacos uses embedded database to store data. Therefore, if multiple Nacos nodes in the default configuration are started, there is a consistency problem in the data storage. In order to solve this problem, Nacos adopts centralized storage to support cluster deployment. At present, it only supports MySQL storage.
Nacos supports three deployment modes
- Stand alone mode - used for testing and stand-alone trial.
- Cluster mode - used in production environment to ensure high availability.
- Multi cluster mode - used in multi data center scenarios.
Nginx + 3 Nacos + mysql
- Create the tables in MySQL with conf/nacos-mysql.sql
- Modify nacos/conf/application.properties
- Modify nacos/conf/cluster.conf
192.168.80.130:8847
192.168.80.130:8848
192.168.80.130:8849
- Modify nacos/bin/start.sh so it accepts a -p <port> parameter and starts the nacos service on that port
- Start three nacos
$ ./start.sh -p 8847
$ ./start.sh -p 8848
$ ./start.sh -p 8849
- nginx configuration modification
- start nginx
- Test: configuration files can be viewed in the console and are persisted to storage
Register the service into the nacos cluster
# application.yaml
server:
  port: 9002
spring:
  application:
    name: nacos-payment-provider
  cloud:
    nacos:
      discovery:
        # configure the Nacos address
        # server-addr: localhost:8848
        # point at the nginx port 1111 in front of the cluster instead
        server-addr: 192.168.111.144:1111
management:
  endpoints:
    web:
      exposure:
        include: '*'
12 Sentinel
Sentinel is a traffic guard for distributed systems
With the popularity of microservices, the stability between services and services becomes more and more important. Sentinel takes traffic as the starting point to protect the stability of services from multiple dimensions such as traffic control, fuse degradation and system load protection.
Sentinel has the following characteristics:
- Rich application scenarios: Sentinel has handled the core scenarios of Alibaba's Double 11 traffic promotions for nearly 10 years, such as seckill (controlling sudden traffic so it stays within system capacity), message peak shaving and valley filling, cluster flow control, and real-time circuit breaking of unavailable downstream applications.
- Complete real-time monitoring: Sentinel provides real-time monitoring; in the console you can see second-level data for a single connected machine, or an aggregated view of clusters of fewer than 500 machines.
- Extensive open source ecosystem: Sentinel provides out-of-the-box integration modules for other open source frameworks/libraries, such as Spring Cloud, Dubbo and gRPC. You can access Sentinel quickly by adding the corresponding dependency and a little configuration.
- Complete SPI extension points: Sentinel provides simple, easy-to-use and complete SPI extension interfaces. You can quickly customize logic by implementing them, for example custom rule management or adapting dynamic data sources.

Problems it addresses:
- Service avalanche
- Service degradation
- Service circuit breaking (fuse)
- Service rate limiting (current limiting)
Run sentinel
$ java -Dserver.port=6666 -jar sentinel-dashboard-1.7.2.jar
12.1 sentinel monitoring items
π¦ cloudalibaba-sentinel-service8401
- pom
<dependencies> <!--springcloud alibaba nacos--> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId> </dependency> <!--springcloud alibaba sentinel-datasource For subsequent persistence--> <dependency> <groupId>com.alibaba.csp</groupId> <artifactId>sentinel-datasource-nacos</artifactId> </dependency> <!--springcloud alibaba sentinel--> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-sentinel</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-openfeign</artifactId> </dependency> <!--springboot web--> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> <scope>runtime</scope> <optional>true</optional> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>cn.hutool</groupId> <artifactId>hutool-all</artifactId> <version>5.7.5</version> </dependency> <dependency> <groupId>org.projectlombok</groupId> <artifactId>lombok</artifactId> <optional>true</optional> </dependency></dependencies>
- yaml
server:
  port: 8401
spring:
  application:
    name: cloudalibaba-sentinel-service
  cloud:
    nacos:
      discovery:
        server-addr: localhost:8848   # Nacos service registry address
    sentinel:
      transport:
        dashboard: localhost:8080     # Sentinel dashboard address
        port: 8719                    # port the sentinel client uses to talk to the dashboard;
                                      # if 8719 is occupied, it adds 1 until a free port is found

management:
  endpoints:
    web:
      exposure:
        include: '*'

feign:
  sentinel:
    enabled: true   # activate Sentinel support for Feign
- Main start
@SpringBootApplication@EnableDiscoveryClientpublic class MainApp8401 { public static void main(String[] args) { SpringApplication.run(MainApp8401.class, args); }}
- Business class
@RestControllerpublic class FlowLimitController { @GetMapping("/testA") public String testA() { return "------ testA"; } @GetMapping("/testB") public String testB() { return "------ testB"; }}
- Test: start nacos --> start the sentinel dashboard --> start cloudalibaba-sentinel-service8401
Note: sentinel loads lazily; a service appears in the dashboard only after one of its endpoints has been called
12.2 flow control
Concept:
- Resource Name: unique name, default request path.
- For source: Sentinel can limit the flow for the caller, fill in the micro service name, and default (regardless of source).
- Threshold type / single machine threshold:
- QPS (number of requests per second): limit the current when the QPS calling the API reaches the threshold.
- Number of threads: limit the flow when the number of threads calling the API reaches the threshold.
- Cluster: no cluster is required.
- Flow control mode:
- Direct: when the API reaches the current limiting condition, the current is limited directly.
- Association: when the associated resource reaches the threshold, it limits itself.
- Link: only record the traffic on the specified link (the traffic of the specified resource coming in from the entrance resource. If it reaches the threshold, it will be limited) [API level source].
- Flow control effect:
- Quick failure: direct failure, throw exception.
- Warm up: starting from threshold / coldFactor (cold load factor, default 3), the allowed rate rises to the configured QPS threshold over the warm-up period; for example, with a threshold of 100 and coldFactor 3, traffic is first limited to about 33 QPS and reaches 100 after the warm-up period.
- Queuing: queue at a constant speed to allow requests to pass at a constant speed. The threshold type must be set to QPS, otherwise it is invalid
Warm Up: i.e. warm-up / cold start mode. When the system is at low water level for a long time, when the flow suddenly increases, directly pull the system to high water level, which may crush the system in an instant. Through "cold start", let the passing flow increase slowly and gradually increase to the upper limit of the threshold within a certain time, give the cold system a warm-up time to avoid crushing the cold system
Application scenarios such as: when the seckill system is turned on, there will be a lot of traffic, which is likely to kill the system. The preheating method is to slowly put the traffic in to protect the system and slowly increase the threshold to the set threshold
The uniform queuing method strictly controls the interval between requests, letting them pass at a uniform speed; it corresponds to the leaky bucket algorithm.
This method is mainly used to handle intermittent burst traffic, such as message queues. Imagine a scenario where a large number of requests arrive in one second and are idle in the next few seconds. We hope that the system can gradually process these requests during the next idle period, rather than directly reject redundant requests in the first second.
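For intuition only, here is a minimal sketch of the uniform-queueing (leaky bucket) idea described above. This is not Sentinel's implementation, just the pacing logic in plain Java: requests are spaced at a fixed interval and rejected if they would have to wait longer than a maximum queueing time.

public class UniformRateLimiter {

    private final long intervalMillis;     // fixed gap between two requests, e.g. 1000 / QPS
    private final long maxQueueingMillis;  // longest time a request may wait in the "queue"
    private long nextFreeTime = 0;         // earliest time the next request may pass

    public UniformRateLimiter(int qps, long maxQueueingMillis) {
        this.intervalMillis = 1000L / qps;
        this.maxQueueingMillis = maxQueueingMillis;
    }

    public boolean tryPass() throws InterruptedException {
        long waitMillis;
        synchronized (this) {
            long now = System.currentTimeMillis();
            long scheduled = Math.max(now, nextFreeTime);
            if (scheduled - now > maxQueueingMillis) {
                return false;               // would wait too long: reject instead of queueing
            }
            nextFreeTime = scheduled + intervalMillis;
            waitMillis = scheduled - now;
        }
        if (waitMillis > 0) {
            Thread.sleep(waitMillis);       // wait for our slot so traffic passes evenly
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        UniformRateLimiter limiter = new UniformRateLimiter(5, 2000); // 5 QPS, 2s max wait
        for (int i = 0; i < 10; i++) {
            System.out.println("request " + i + (limiter.tryPass() ? " passed" : " rejected"));
        }
    }
}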
12.3 fuse degradation
Overview of fuse degradation
In addition to flow control, circuit breaking and degrading unstable resources in the call link is another important measure for high availability. A service often calls other modules: another remote service, a database, or a third-party API. For example, when making a payment it may be necessary to call the API provided by UnionPay remotely, and querying the price of a commodity may require a database query. The stability of these dependencies cannot be guaranteed. If a dependency becomes unstable and its response time grows, the response time of the calling method grows too, threads pile up, and eventually the service's own thread pool may be exhausted, making the service itself unavailable.
Modern microservice architectures are distributed and consist of many services. Services call each other and form complex call links, which amplifies the problem above: if one link in a complex chain is unstable, the instability can cascade layer by layer until the whole chain is unavailable. Therefore, calls to unstable, weakly-depended-on services should be cut off temporarily through circuit breaking and degradation, to avoid a system-wide avalanche caused by a local problem. As a self-protection measure, circuit breaking and degradation are usually configured on the client (the calling side).
Flow control protects a service from being overwhelmed and hanging; circuit breaking (fuse degradation) kicks in when a dependency already has problems, to prevent an avalanche.
- RT (average response time, second-level statistics)
  - If the average response time exceeds the threshold and at least 5 requests pass within the time window, degradation is triggered once both conditions are met.
  - The circuit breaker closes again after the window period.
  - RT is capped at 4900 ms; larger values only take effect with -Dcsp.sentinel.statistic.max.rt=xxx.
- Exception ratio (second-level statistics)
  - When QPS >= 5 and the exception ratio exceeds the threshold, degradation is triggered; it ends when the time window expires.
- Exception count (minute-level statistics)
  - When the exception count exceeds the threshold, degradation is triggered; it ends when the time window expires.
  - Exception counts are aggregated per minute, so the time window must be at least 60 seconds.
12.4 hot key
What are hotspots? Hotspots are frequently accessed data. Many times, we want to count the most frequently accessed Top K data in a hotspot data and restrict its access. For example:
- Commodity ID is a parameter that counts and limits the most frequently purchased commodity ID in a period of time
- User ID is a parameter, which limits the user ID frequently accessed over a period of time
Hotspot parameter current limiting will count the hotspot parameters in the incoming parameters, and limit the flow of resource calls containing hotspot parameters according to the configured current limiting threshold and mode. Hotspot parameter current limiting can be regarded as a special flow control, which is only effective for resource calls containing hotspot parameters.
Prerequisite: a hotspot parameter must be a primitive type or String.
@SentinelResource handles violations of the rules configured in the sentinel console, through the method named in blockHandler.
A RuntimeException such as int age = 10/0 is reported by the Java runtime itself; the blockHandler of @SentinelResource does not handle it. A sketch of a hotspot-parameter resource follows.
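A minimal sketch of a hotspot-parameter resource, assuming a hotspot rule is added in the dashboard for resource testHotKey with parameter index 0; the endpoint and method names are illustrative.

import com.alibaba.csp.sentinel.annotation.SentinelResource;
import com.alibaba.csp.sentinel.slots.block.BlockException;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HotKeyController {

    @GetMapping("/testHotKey")
    @SentinelResource(value = "testHotKey", blockHandler = "dealTestHotKey")
    public String testHotKey(@RequestParam(value = "p1", required = false) String p1,
                             @RequestParam(value = "p2", required = false) String p2) {
        return "------ testHotKey";
    }

    // blockHandler: same parameter list plus BlockException; called only when the hotspot rule is violated
    public String dealTestHotKey(String p1, String p2, BlockException e) {
        return "------ dealTestHotKey, blocked by hotspot rule";
    }
}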
12.5 system rules
Sentinel's adaptive system protection limits the application's inbound traffic from an overall dimension. It combines the monitoring indicators of system load, CPU usage, overall average RT, inbound QPS and the number of concurrent threads, and uses an adaptive flow-control strategy to balance inbound traffic against system load, so that the system runs as close to its maximum throughput as possible while overall stability is guaranteed.
- Load adaptation (only valid for Linux / Unix like machines): system load1 is used as the heuristic index for adaptive system protection. System protection is triggered only when system load1 exceeds the set heuristic value and the current number of concurrent threads exceeds the estimated system capacity (BBR stage) . the system capacity is estimated by maxQps * minRt of the system. The setting reference value is generally CPU cores * 2.5.
- CPU usage (version 1.5.0 +): when the system CPU utilization exceeds the threshold, the system protection is triggered (value range 0.0-1.0), which is sensitive.
- Average RT: when the average RT of all inlet flows on a single machine reaches the threshold, the system protection is triggered, and the unit is milliseconds.
- Number of concurrent threads: system protection is triggered when the number of concurrent threads of all inlet traffic on a single machine reaches the threshold.
- Inlet QPS: when the QPS of all inlet flows on a single machine reaches the threshold, the system protection is triggered.
12.6 @SentinelResource
@RestControllerpublic class RateLimitController { @GetMapping("/byResource") @SentinelResource(value = "byResource", blockHandler = "handleException") public String byResource() { return "200, Rate limit test by resource name, OK"; } public String handleException(BlockException blockException) { return "444, Rate limit test by resource name, fail"; } @GetMapping("/rateLimit/byUrl") @SentinelResource("byUrl") public String byUrl() { return "200, Rate limit test by URL, OK"; }}
Upgrading the code:
Create a custom flow limiting processing logic class
/** * Customize the current limiting processing class and provide methods for processing current limiting */public class CustomerBlockHandler { public static String handlerException(BlockException blockException) { return "444, custom blockhandler Treatment current limiting ------- 1"; } public static String handlerException2(BlockException blockException) { return "444, custom blockhandler Treatment current limiting ------- 2"; }}
Configure and use custom current limiting processing logic
@RestController
public class RateLimitController {

    // use the custom rate-limit handler class
    @GetMapping("/rateLimit/customerBlockHandler")
    @SentinelResource(value = "customerBlockHandler",
            blockHandlerClass = CustomerBlockHandler.class,
            blockHandler = "handlerException2")
    public String customerBlockHandler() {
        return "200, customerBlockHandler test, OK";
    }
}
@SentinelResource is used to define resources and provides optional configuration for exception handling and fallback. The @SentinelResource annotation contains the following attributes:
- value: resource name, required (cannot be empty)
- entryType: entry type, optional (EntryType.OUT by default)
- blockHandler / blockHandlerClass: blockHandler corresponds to the name of the function handling BlockException. Optional. The access scope of the blockHandler function needs to be public, the return type needs to match the original method, the parameter type needs to match the original method, and finally add an additional parameter with the type of BlockException. The blockHandler function needs to be in the same Class as the original method by default. If you want to use functions of other classes, you can specify blockHandlerClass as the Class object of the corresponding Class. Note that the corresponding function must be a static function, otherwise it cannot be parsed.
- fallback /fallbackClass: the name of the fallback function. Optional. It is used to provide fallback processing logic when throwing exceptions. The fallback function can handle all types of exceptions (except those excluded in exceptionsToIgnore). Fallback function signature and location requirements:
- The return value type must be consistent with the return value type of the original function;
- The method parameter list needs to be consistent with the original function, or an additional Throwable parameter can be used to receive the corresponding exception.
- The fallback function needs to be in the same Class as the original method by default. If you want to use functions of other classes, you can specify fallbackClass as the Class object of the corresponding Class. Note that the corresponding function must be a static function, otherwise it cannot be parsed.
- defaultFallback (since 1.6.0): the default fallback function name, which is optional. It is usually used for general fallback logic (that is, it can be used for many services or methods). The default fallback function can handle all types of exceptions (except those excluded in exceptionsToIgnore). If both fallback and defaultFallback are configured, only fallback will take effect. defaultFallback function signature requirements:
- The return value type must be consistent with the return value type of the original function;
- The method parameter list needs to be empty, or an additional Throwable parameter can be used to receive the corresponding exception.
- The defaultFallback function needs to be in the same Class as the original method by default. If you want to use functions of other classes, you can specify fallbackClass as the Class object of the corresponding Class. Note that the corresponding function must be a static function, otherwise it cannot be parsed.
- exceptionsToIgnore (since 1.6.0): specifies which exceptions are excluded; they are not counted in the exception statistics and do not enter the fallback logic, but are thrown as-is.
12.7 integration of ribbon and openfeign
π¦ cloudalibaba-provider-payment9003
π¦ cloudalibaba-provider-payment9004
π¦ cloudalibaba-consumer-order84
- pom
<dependencies> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-openfeign</artifactId> </dependency> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId> </dependency> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-sentinel</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> <scope>runtime</scope> <optional>true</optional> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.projectlombok</groupId> <artifactId>lombok</artifactId> <optional>true</optional> </dependency></dependencies>
- yaml
server:
  port: 84
spring:
  application:
    name: nacos-order-consumer
  cloud:
    nacos:
      discovery:
        server-addr: localhost:8848
    sentinel:
      transport:
        # Sentinel dashboard address
        dashboard: localhost:8080
        # default 8719; if occupied, scan upward from 8719 until a free port is found
        port: 8719

# the microservice the consumer will call (a provider already registered in nacos)
service-url:
  nacos-user-service: http://nacos-payment-provider

# activate Sentinel support for Feign
feign:
  sentinel:
    enabled: true
- Main start
@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients // enable feign support
public class OrderNacosMain84 {
    public static void main(String[] args) {
        SpringApplication.run(OrderNacosMain84.class, args);
    }
}
- Business class
@Component@FeignClient(value = "nacos-payment-provider", fallback = PaymentFallbackService.class)public interface PaymentService { @GetMapping("/payment/{id}") public String getPaymentById(@PathVariable("id") Integer id);}
@Componentpublic class PaymentFallbackService implements PaymentService{ @Override public String getPaymentById(Integer id) { return "openfeign, Service degradation, 444"; }}
- controller
@RestController
public class CircleBreakerController {

    @Value("${service-url.nacos-user-service}")
    private String serviceUrl;

    @Autowired
    private RestTemplate restTemplate;

    @GetMapping("/consumer/payment/{id}")
    @SentinelResource(value = "consumer",
            blockHandler = "blockHandler",   // handles violations of the Sentinel console rules
            fallback = "fallbackHandler",    // handles Java exceptions thrown inside the method
            exceptionsToIgnore = IllegalArgumentException.class) // exceptions to ignore
    public String getPaymentById(@PathVariable("id") Integer id) {
        if (id == 4) {
            throw new IllegalArgumentException("IllegalArgumentException, illegal parameter");
        } else if (id == 1) {
            throw new NullPointerException("NullPointerException, no such record");
        }
        return restTemplate.getForObject(serviceUrl + "/payment/" + id, String.class, id);
    }

    public String blockHandler(Integer id, BlockException blockException) {
        return "blockHandler: blocked by Sentinel rules";
    }

    public String fallbackHandler(Integer id, Throwable throwable) {
        return "fallbackHandler: " + throwable.getMessage();
    }

    // ------ openfeign ------
    @Autowired
    private PaymentService paymentService;

    @GetMapping("/consumer/openfeign/payment/{id}")
    public String getPaymentThroughFeign(@PathVariable("id") Integer id) {
        return paymentService.getPaymentById(id);
    }
}
12.8 persistence
Write flow control rules into nacos, π¦ cloudalibaba-sentinel-service8401
- pom
<!--SpringCloud ailibaba sentinel-datasource-nacos For subsequent persistence--><dependency> <groupId>com.alibaba.csp</groupId> <artifactId>sentinel-datasource-nacos</artifactId></dependency>
- yaml
server:
  port: 8401
spring:
  application:
    name: cloudalibaba-sentinel-service
  cloud:
    nacos:
      discovery:
        server-addr: localhost:8848   # Nacos service registry address
    sentinel:
      transport:
        dashboard: localhost:8080     # Sentinel dashboard address
        port: 8719                    # port used to talk to the dashboard; +1 automatically if occupied
      # persist the sentinel rules into nacos
      datasource:
        ds1:
          nacos:
            server-addr: localhost:8848
            dataId: cloudalibaba-sentinel-service   # same as spring.application.name
            groupId: DEFAULT_GROUP
            data-type: json
            rule-type: flow

management:
  endpoints:
    web:
      exposure:
        include: '*'

feign:
  sentinel:
    enabled: true   # activate Sentinel support for Feign
- Adding configuration content to nacos
[{ "resource": "/rateLimit/byUrl", // Resource name "IimitApp": "default", / / source application "grade": 1, / / threshold type, 0 indicates the number of threads, 1 indicates QPS "count": 1, / / stand-alone threshold "strategy": 0, / / flow control mode, 0 indicates direct, 1 indicates association, 2 indicates link "controlBehavior": 0, / / flow control effect, 0 indicates rapid failure, 1 indicates warm up, and 2 indicates queuing for "clusterMode": false / / is it a cluster}]
12.9 summary
13 Seata handles distributed transactions
13.1 Seata overview
β The data consistency within each service is guaranteed by local transactions, but the global data consistency cannot be guaranteed. Distributed transactions will occur when a business operation needs to be called remotely across multiple data sources or across multiple systems.
Seata is an open source distributed transaction solution, which is committed to providing high-performance and easy-to-use distributed transaction services under the microservice architecture.
One ID + three component model of distributed transaction processing process:
- Transaction ID XID globally unique transaction ID
- Three component concept
- TC (Transaction Coordinator) - Transaction Coordinator: maintains the status of global and branch transactions and drives global transaction commit or rollback.
- TM (Transaction Manager) - transaction manager: defines the scope of global transactions: start global transactions, commit or roll back global transactions.
- RM (Resource Manager) - Resource Manager: manages the resources of branch transaction processing, talks with TC to register branch transactions and report the status of branch transactions, and drives branch transaction submission or rollback.
Process:
- TM applies to TC to start a global transaction. The global transaction is successfully created and a globally unique XID is generated;
- XID propagates in the context of the microservice invocation link;
- RM registers branch transactions with TC and brings them under the jurisdiction of global transactions corresponding to XID;
- TM initiates a global commit or rollback resolution for XID to TC;
- TC schedules all branch transactions under XID to complete the commit or rollback request.
13.2 installation and configuration of Seata
Omitted.
13.3 distributed project construction
π¦ seata-order-service2001
π¦ seata-storage-service2002
π¦ seata-account-service2003
- Business table and rollback log table construction
CREATE DATABASE seata_order;
CREATE DATABASE seata_storage;
CREATE DATABASE seata_account;

CREATE TABLE seata_order.t_order (
  `id` BIGINT(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `user_id` BIGINT(11) DEFAULT NULL COMMENT 'user id',
  `product_id` BIGINT(11) DEFAULT NULL COMMENT 'product id',
  `count` INT(11) DEFAULT NULL COMMENT 'quantity',
  `money` DECIMAL(11,0) DEFAULT NULL COMMENT 'amount of money',
  `status` INT(1) DEFAULT NULL COMMENT 'Order status: 0: creating; 1: closed'
) ENGINE=INNODB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
SELECT * FROM t_order;

CREATE TABLE seata_storage.t_storage (
  `id` BIGINT(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `product_id` BIGINT(11) DEFAULT NULL COMMENT 'product id',
  `total` INT(11) DEFAULT NULL COMMENT 'total inventory',
  `used` INT(11) DEFAULT NULL COMMENT 'used inventory',
  `residue` INT(11) DEFAULT NULL COMMENT 'remaining inventory'
) ENGINE=INNODB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
INSERT INTO seata_storage.t_storage(`id`, `product_id`, `total`, `used`, `residue`)
VALUES ('1', '1', '100', '0', '100');
SELECT * FROM t_storage;

CREATE TABLE seata_account.t_account (
  `id` BIGINT(11) NOT NULL AUTO_INCREMENT PRIMARY KEY COMMENT 'id',
  `user_id` BIGINT(11) DEFAULT NULL COMMENT 'user id',
  `total` DECIMAL(10,0) DEFAULT NULL COMMENT 'total amount',
  `used` DECIMAL(10,0) DEFAULT NULL COMMENT 'used balance',
  `residue` DECIMAL(10,0) DEFAULT '0' COMMENT 'remaining available limit'
) ENGINE=INNODB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
INSERT INTO seata_account.t_account(`id`, `user_id`, `total`, `used`, `residue`)
VALUES ('1', '1', '1000', '0', '1000');
SELECT * FROM t_account;
seata/conf/db_undo_log.sql: the undo_log table must be created in every business database
-- the table to store seata xid data
-- 0.7.0+ adds the context column
-- this script must be initialized in your business database for AT-mode XID records; the seata server itself does not need it
-- note: since 0.3.0+ the unique index ux_undo_log is added
DROP TABLE `undo_log`;
CREATE TABLE `undo_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(100) NOT NULL,
  `context` varchar(128) NOT NULL,
  `rollback_info` longblob NOT NULL,
  `log_status` int(11) NOT NULL,
  `log_created` datetime NOT NULL,
  `log_modified` datetime NOT NULL,
  `ext` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
@GlobalTransactional(name = "fsp-create-order", rollbackFor = Exception.class) // rollbackFor = Exception.class means any exception rolls back the whole global transaction; a sketch of where this annotation sits follows
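A minimal sketch (not the full course code) of where @GlobalTransactional sits: the order-service method that spans the local insert and the remote storage/account calls. The collaborators referenced in the comments (dao, feign clients) are assumed to exist in the module.

import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.stereotype.Service;

@Service
public class OrderServiceImpl {

    // @GlobalTransactional starts the global transaction (TM role); every datasource/service
    // touched below registers a branch transaction (RM role) under the same XID with the TC.
    @GlobalTransactional(name = "fsp-create-order", rollbackFor = Exception.class)
    public void create(Long userId, Long productId, Integer count) {
        // 1. create the local order record              (branch in seata_order)
        // 2. call the storage service to deduct stock    (branch in seata_storage)
        // 3. call the account service to deduct balance  (branch in seata_account)
        // 4. mark the order as finished
        // if any step throws, the TC rolls back every branch using the undo_log tables
    }
}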