Introduction
This post verifies the communication between a gRPC service consumer, mesha, and a gRPC service provider, meshb, both deployed in an Istio mesh. With this example in hand, connecting an external registry to the mesh is no longer difficult.
1, Istio configuration points
Istio is already installed in the Kubernetes cluster; verify that the following parameters are set correctly.
istio-config.yaml content
```yaml
---
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  values:
    global:
      logging:
        level: default:debug
  meshConfig:
    accessLogFile: /dev/stdout
    defaultConfig:
      holdApplicationUntilProxyStarts: true
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
  components:
    pilot:
      hub: istio
      tag: 1.10.4
```
Description of key parameters
| parameter | explanation |
|---|---|
| holdApplicationUntilProxyStarts | Defaults to false. When true, the application container is started only after the sidecar proxy container is up. |
| ISTIO_META_DNS_CAPTURE | Defaults to false. When true, the DNS proxy is enabled and DNS requests are forwarded to the sidecar. |
| ISTIO_META_DNS_AUTO_ALLOCATE | Defaults to false. When true, the DNS proxy automatically allocates IPs for ServiceEntries that do not specify one. |
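The proxyMetadata settings above apply mesh-wide. As a hedged sketch of an alternative, Istio also documents a per-workload override through the `proxy.istio.io/config` pod annotation, so the DNS proxy can be enabled for a single Deployment instead of the whole mesh; the annotation name and structure below follow Istio's documentation, but verify them against your Istio version:

```yaml
# Sketch: per-workload DNS proxy override via pod annotation
# (assumes Istio's proxy.istio.io/config annotation; verify for your version).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mesha
spec:
  selector:
    matchLabels:
      app: mesha
  template:
    metadata:
      labels:
        app: mesha
      annotations:
        proxy.istio.io/config: |
          proxyMetadata:
            ISTIO_META_DNS_CAPTURE: "true"
            ISTIO_META_DNS_AUTO_ALLOCATE: "true"
    spec:
      containers:
        - name: mesha
          image: arbor.helxike.cn/base/mesha:latest
```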
Execute the following command to apply the configuration:
```shell
istioctl install -y -f istio-config.yaml
```
Verify the effective configuration with the following command:
```shell
kubectl describe IstioOperator installed-state -n istio-system >> istioOperator.conf
```
2, Example description
Define Proto
A single simple method, SayHello, handles the communication between client and server.
```protobuf
syntax = "proto3";

option java_multiple_files = true;
option java_package = "com.melon.test.client.grpc";
option java_outer_classname = "HelloMesh";

package meshgrpc;

service Mesher {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```
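For completeness: the `MesherGrpc`, `HelloRequest`, and `HelloReply` classes used below are generated from this proto at build time. A minimal Maven sketch, assuming the commonly used protobuf-maven-plugin (the plugin versions are illustrative; the grpc-java 1.28.0 version is inferred from the `grpc-java-netty/1.28.0` user agent in the Envoy logs later in this post):

```xml
<!-- Sketch: generate gRPC Java stubs from src/main/proto.
     Requires the os-maven-plugin build extension to supply
     ${os.detected.classifier}. Versions are illustrative. -->
<plugin>
  <groupId>org.xolstice.maven.plugins</groupId>
  <artifactId>protobuf-maven-plugin</artifactId>
  <version>0.6.1</version>
  <configuration>
    <protocArtifact>com.google.protobuf:protoc:3.12.0:exe:${os.detected.classifier}</protocArtifact>
    <pluginId>grpc-java</pluginId>
    <pluginArtifact>io.grpc:protoc-gen-grpc-java:1.28.0:exe:${os.detected.classifier}</pluginArtifact>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>compile</goal>         <!-- message classes -->
        <goal>compile-custom</goal>  <!-- MesherGrpc service stubs -->
      </goals>
    </execution>
  </executions>
</plugin>
```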
Service consumer mesha
```java
@RestController
public class MeshSender {

    @GetMapping("demo")
    public String meshWorker() throws InterruptedException {
        // Resolved by the sidecar's DNS proxy via the ServiceEntry host
        String target = "dns:///AppMeshClient.mesh:50000";
        // String target = "127.0.0.1:50000"; // for local testing
        ManagedChannel channel = ManagedChannelBuilder.forTarget(target)
                .usePlaintext()
                .build();
        try {
            MesherGrpc.MesherBlockingStub blockingStub = MesherGrpc.newBlockingStub(channel);
            HelloRequest request = HelloRequest.newBuilder().setName("mesh demo!").build();
            HelloReply reply = blockingStub.sayHello(request);
            System.out.println(reply.getMessage());
            return reply.getMessage();
        } finally {
            // Release the channel; a per-request channel is fine for a demo,
            // but a production client should reuse a single channel.
            channel.shutdown().awaitTermination(5, TimeUnit.SECONDS);
        }
    }
}
```
Service provider meshb
```java
@Component
public class MeshReceiver {

    @PostConstruct
    public void receiverWorker() throws IOException {
        int port = 50000;
        Server server = ServerBuilder.forPort(port)
                .addService(new MeshBService())
                .build()
                .start();
        System.out.println("Server started.");
    }

    class MeshBService extends MesherGrpc.MesherImplBase {
        @Override
        public void sayHello(HelloRequest request,
                             io.grpc.stub.StreamObserver<HelloReply> responseObserver) {
            System.out.println("receiver client message: " + request.getName());
            HelloReply reply = HelloReply.newBuilder()
                    .setMessage("I'm from server " + request.getName())
                    .build();
            responseObserver.onNext(reply);
            responseObserver.onCompleted();
        }
    }
}
```
Image push
```xml
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.1.4</version>
  <configuration>
    <from>
      <image>harbor.x.x/x/java:8u212-full</image>
      <auth>
        <username>${harbor.username}</username>
        <password>${harbor.password}</password>
      </auth>
    </from>
    <to>
      <image>x.x.x/x/${project.name}</image>
      <auth>
        <username>${harbor.username}</username>
        <password>${harbor.password}</password>
      </auth>
      <tags>
        <tag>${project.version}</tag>
        <tag>latest</tag>
      </tags>
    </to>
    <container>
      <ports>
        <port>x</port>
      </ports>
      <creationTime>USE_CURRENT_TIMESTAMP</creationTime>
    </container>
  </configuration>
</plugin>
```
Note: with the Maven plugin jib, running `mvn compile jib:build` builds the image and pushes it to the Harbor registry.
3, gRPC service provider deployment
Set the Deployment of the meshb service
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: meshb
  labels:
    app: meshb
spec:
  selector:
    matchLabels:
      app: meshb
  replicas: 1
  template:
    metadata:
      labels:
        app: meshb
    spec:
      imagePullSecrets:
        - name: xxxx  # registry secret name
      containers:
        - name: meshb
          image: x.x.x.x/x/meshb:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 50000
```
Apply it with the following command:
```shell
# kubectl apply -f meshb.yaml
deployment.apps/meshb created
```
Check that the Pod is running normally:
```shell
# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
meshb-565945d794-wcb8z   2/2     Running   0          9m19s
```
View the startup log; the server started successfully:
```shell
# kubectl logs meshb-565945d794-wcb8z -n default
Server started.
2021-11-05 09:21:19.828  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8181 (http) with context path ''
2021-11-05 09:21:19.847  INFO 1 --- [           main] com.melon.test.MeshbApplication          : Started MeshbApplication in 3.859 seconds (JVM running for 4.344)
```
Note: the gRPC service provider is now deployed.
4, gRPC service consumer deployment
Set the Deployment of the mesha service
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mesha
  labels:
    app: mesha
spec:
  selector:
    matchLabels:
      app: mesha
  replicas: 1
  template:
    metadata:
      labels:
        app: mesha
    spec:
      imagePullSecrets:
        - name: middleware
      containers:
        - name: mesha
          image: arbor.helxike.cn/base/mesha:latest
          ports:
            - containerPort: 7171
          imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: mesha
spec:
  selector:
    app: mesha
  type: LoadBalancer
  ports:
    - name: web
```
Deploy with the following command:
```shell
# kubectl apply -f mesha.yaml
deployment.apps/mesha created
service/mesha created
```
View the Service status:
```shell
# kubectl get service
NAME    TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
mesha   LoadBalancer   10.x.61.x    <pending>     7171:30514/TCP   20s
```
View the Pod running status:
```shell
# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
mesha-b559fc4f4-m9752   2/2     Running   0          87s
```
Note: the service consumer is now deployed.
5, Service call validation
Access the service through the node IP and NodePort
http://x.x.x.x:30514/demo
The page returns an error.

The logs show that the domain name could not be resolved:
```shell
kubectl logs -f mesha-b559fc4f4-m9752 -n default
2021-11-05 09:01:37.372  WARN 1 --- [ault-executor-0] io.grpc.internal.ManagedChannelImpl      : [Channel<1>: (dns:///AppMeshClient.mesh:50000)] Failed to resolve name. status=Status{code=UNAVAILABLE, description=Unable to resolve host AppMeshClient.mesh, cause=java.lang.RuntimeException: java.net.UnknownHostException: AppMeshClient.mesh: Name or service not known
	at io.grpc.internal.DnsNameResolver.resolveAll(DnsNameResolver.java:436)
	at io.grpc.internal.DnsNameResolver$Resolve.resolveInternal(DnsNameResolver.java:272)
	at io.grpc.internal.DnsNameResolver$Resolve.run(DnsNameResolver.java:228)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.UnknownHostException: AppMeshClient.mesh: Name or service not known
	at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
	at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
	at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
	at java.net.InetAddress.getAllByName0(InetAddress.java:1277)
	at java.net.InetAddress.getAllByName(InetAddress.java:1193)
	at java.net.InetAddress.getAllByName(InetAddress.java:1127)
	at io.grpc.internal.DnsNameResolver$JdkAddressResolver.resolveAddress(DnsNameResolver.java:646)
	at io.grpc.internal.DnsNameResolver.resolveAll(DnsNameResolver.java:404)
	... 5 more
}
```
Map the service provider IP through a ServiceEntry
Find the IP address of the service provider meshb (x.x.0.17):
```shell
# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP          NODE                NOMINATED NODE   READINESS GATES
mesha-b559fc4f4-m9752    2/2     Running   0          140m   x.x.1.117   k8s-servicemesh-3   <none>           <none>
meshb-565945d794-wcb8z   2/2     Running   0          118m   x.x.0.17    k8s-servicemesh-1   <none>           <none>
```
meshb-service-entry.yaml content
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: meshb
spec:
  endpoints:
    - address: x.x.0.17
  hosts:
    - AppMeshClient.mesh
  location: MESH_INTERNAL
  ports:
    - name: grpc
      number: 50000
      protocol: grpc
  resolution: STATIC
```
Execute the following command to apply the ServiceEntry:
```shell
kubectl apply -f meshb-service-entry.yaml
serviceentry.networking.istio.io/meshb created
```
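Note that this ServiceEntry pins a hard-coded Pod IP, which breaks as soon as the meshb Pod is rescheduled. As a sketch of a more robust alternative (assuming your Istio version supports `workloadSelector` on ServiceEntry; it replaces `endpoints`, since the two fields are mutually exclusive), the endpoint can be selected by label instead:

```yaml
# Sketch: select mesh workloads by label instead of pinning a Pod IP
# (assumes ServiceEntry workloadSelector support; verify for your Istio version).
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: meshb
spec:
  hosts:
    - AppMeshClient.mesh
  location: MESH_INTERNAL
  resolution: STATIC
  ports:
    - name: grpc
      number: 50000
      protocol: grpc
  workloadSelector:
    labels:
      app: meshb
```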
Visit the page again; this time the call succeeds.

Note: at this point, the service consumer successfully calls the service provider inside the mesh.
6, Log verification and tracing
View the service consumer mesha log:
```shell
kubectl logs -f mesha-b559fc4f4-m9752 -n default
I'm from server mesh demo!
```
Note: this is the response the service consumer mesha received from the service provider meshb.
View the service provider meshb log:
```shell
# kubectl logs -f meshb-565945d794-wcb8z -n default
2021-11-05 09:21:19.828  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8181 (http) with context path ''
2021-11-05 09:21:19.847  INFO 1 --- [           main] com.melon.test.MeshbApplication          : Started MeshbApplication in 3.859 seconds (JVM running for 4.344)
receiver client message: mesh demo!
```
Note: the service provider meshb has received the request from the service consumer mesha.
View the Envoy log of the service consumer mesha:
```shell
kubectl logs -f -l app=mesha -c istio-proxy -n default
2021-11-05T10:25:17.772317Z	info	xdsproxy	connected to upstream XDS server: istiod.istio-system.svc:15012
2021-11-05T10:52:35.856445Z	warning	envoy config	StreamAggregatedResources gRPC config stream closed: 14, transport is closing
2021-11-05T10:52:36.093048Z	info	xdsproxy	connected to upstream XDS server: istiod.istio-system.svc:15012
[2021-11-05T11:06:23.510Z] "GET /demo HTTP/1.1" 500 - via_upstream - "-" 0 x 37 36 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36" "49cd74d8-86eb-4ef5-bd71-8236d188c2f9" "10.69.x1.156:30514" "10.166.1.11x:7171" inbound|7171|| 127.0.0.6:41007 10.166.1.117:7171 10.166.0.0:20826 - default
2021-11-05T11:22:58.633956Z	warning	envoy config	StreamAggregatedResources gRPC config stream closed: 14, transport is closing
2021-11-05T11:22:59.100387Z	info	xdsproxy	connected to upstream XDS server: istiod.istio-system.svc:15012
[2021-11-05T11:27:23.842Z] "POST /meshgrpc.Mesher/SayHello HTTP/2" 200 - via_upstream - "-" 17 33 371 319 "-" "grpc-java-netty/1.28.0" "79f2edbd-9c31-4265-87fb-38a594d9383b" "AppMeshClient.mesh:50000" "10.166.0.17:50000" outbound|50000||AppMeshClient.mesh 10.166.x.117:51914 240.240.0.63:50000 10.166.x.117:40544 - default
[2021-11-05T11:27:23.527Z] "GET /demo HTTP/1.1" 200 - via_upstream - "-" 0 26 x 728 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36" "859344b6-012d-4457-a391-2b4d13aa3e35" "10.69.x1.156:30514" "10.166.1.11x:7171" inbound|7171|| 127.0.0.6:47065 10.166.1.117:7171 10.166.0.0:2410 - default
```
View the Envoy log of the service provider meshb:
```shell
# kubectl logs -f -l app=meshb -c istio-proxy -n default
2021-11-05T09:57:12.490428Z	warning	envoy config	StreamAggregatedResources gRPC config stream closed: 14, transport is closing
2021-11-05T09:57:12.765887Z	info	xdsproxy	connected to upstream XDS server: istiod.istio-system.svc:15012
2021-11-05T10:24:48.767511Z	warning	envoy config	StreamAggregatedResources gRPC config stream closed: 14, transport is closing
2021-11-05T10:24:48.914475Z	info	xdsproxy	connected to upstream XDS server: istiod.istio-system.svc:15012
2021-11-05T10:57:52.604281Z	warning	envoy config	StreamAggregatedResources gRPC config stream closed: 14, transport is closing
2021-11-05T10:57:52.757736Z	info	xdsproxy	connected to upstream XDS server: istiod.istio-system.svc:15012
2021-11-05T11:26:56.551824Z	warning	envoy config	StreamAggregatedResources gRPC config stream closed: 14, transport is closing
2021-11-05T11:26:56.987345Z	info	xdsproxy	connected to upstream XDS server: istiod.istio-system.svc:15012
[2021-11-05T11:27:23.844Z] "POST /meshgrpc.Mesher/SayHello HTTP/2" 200 - via_upstream - "-" 17 33x 316 "-" "grpc-java-netty/1.28.0" "79f2edbd-9c31-4265-87fb-38a594d9383b" "AppMeshClient.mesh:50000" "10.166.0.17:50000" inbound|50000|| 127.0.0.6:44221 10.166.0.17:50000 10.166.1.117:51914 - default
```
Log in to the mesha Pod for verification:
```shell
kubectl exec -it mesha-b559fc4f4-m9752 -n default -- bash
```
```shell
[12:04:44root@mesha-b559fc4f4-m9752 /]
# curl -v AppMeshClient.mesh
* About to connect() to AppMeshClient.mesh port 80 (#0)
*   Trying 240.240.0.63...
* Connected to AppMeshClient.mesh (240.240.0.63) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: AppMeshClient.mesh
> Accept: */*
>
< HTTP/1.1 503 Service Unavailable
< content-length: 91
< content-type: text/plain
< date: Fri, 05 Nov 2021 12:04:59 GMT
< server: envoy
<
* Connection #0 to host AppMeshClient.mesh left intact
upstream connect error or disconnect/reset before headers. reset reason: connection failure
```
Note: inside the mesha Pod, resolving the domain AppMeshClient.mesh returns the automatically allocated IP 240.240.0.63 and the response comes from the sidecar (Envoy), which shows that the DNS proxy is redirecting the service container's traffic to the sidecar. (The 503 itself is expected here: the ServiceEntry only declares port 50000, not port 80.)
Note: the service consumer logs, the service provider logs, and their data-plane Envoy logs together show that the calls go through the Istio mesh.