Linkerd is a service mesh for Kubernetes. It makes running services easier and safer by giving you runtime debugging, observability, reliability, and security -- all without requiring any changes to your code.
For a brief introduction to the service mesh model, we recommend reading "The Service Mesh: What Every Software Engineer Needs to Know About the World's Most Over-Hyped Technology". Linkerd is fully open source, licensed under Apache v2, and is a graduated project of the Cloud Native Computing Foundation.
Linkerd has three basic components: a UI, a data plane, and a control plane. You can get Linkerd up and running as follows:
- Install the CLI on the local system;
- Install the control plane into your cluster;
- Add your service to Linkerd's data plane.
Once your services are running with Linkerd, you can use Linkerd's UI to inspect and manage them. Linkerd's installation and deployment model (driven by a CLI similar in spirit to istioctl) is covered in detail in the official documentation: https://linkerd.io/2.11/getting-started/
Step 1: install the CLI
If this is your first time running Linkerd, you will need to download the linkerd CLI onto your local machine. The CLI will allow you to interact with your Linkerd deployment.
To manually install the CLI:
curl -fsL https://run.linkerd.io/install | sh
Be sure to follow the instructions to add it to your PATH.
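For example, assuming the install script placed the binary in its default location of ~/.linkerd2/bin, you would add something like this to your shell profile:
export PATH=$PATH:$HOME/.linkerd2/bin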
Alternatively, if you use Homebrew, you can install the CLI with brew install linkerd. You can also download the CLI directly from the Linkerd releases page.
After installation, verify that the CLI is working correctly with:
linkerd version
You should see the CLI version, along with Server version: unavailable. This is because you have not yet installed the control plane on the cluster. Don't worry, we will fix that shortly.
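As a rough sketch, the output at this point looks something like the following (the exact client version depends on the release you installed):
Client version: stable-2.11.1
Server version: unavailable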
Step 2: verify your Kubernetes cluster
Kubernetes clusters can be configured in many different ways. Before installing the Linkerd control plane, we need to check and verify that everything is configured correctly. To check whether your cluster is ready to install Linkerd, run:
linkerd check --pre
If any of the checks fail, be sure to follow the links provided and resolve these issues before continuing.
Step 3: install the control plane on the cluster
Now that you have the CLI running locally and a cluster that is ready to go, it is time to install the control plane. To do this, run:
linkerd install | kubectl apply -f -
This command generates a Kubernetes manifest with all the core control plane resources (feel free to inspect this output if you're curious). Piping this manifest into kubectl apply instructs Kubernetes to add those resources to your cluster.
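If you would rather review the manifest before applying it, the same install can be expressed in two steps instead of a pipe:
linkerd install > linkerd.yaml
kubectl apply -f linkerd.yaml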
Depending on your cluster's internet connection speed, it may take a minute or two for the control plane installation to complete. Wait until the control plane is ready (and verify your installation) by running:
linkerd check
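While the check runs, you can also watch the control plane pods come up with plain kubectl (the control plane is installed into the linkerd namespace by default):
kubectl -n linkerd get pods -w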
Step 4: install the demo application
Let's install a demo application called Emojivoto. Emojivoto is a simple standalone Kubernetes application that uses a mix of gRPC and HTTP calls to let users vote for their favorite emoji.
Install Emojivoto into the emojivoto namespace by running:
curl -fsL https://run.linkerd.io/emojivoto.yml | kubectl apply -f -
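To see the demo application itself, you can forward its web frontend to your local machine. The service name and port below are the ones used by the Emojivoto manifest (web-svc listening on port 80); adjust them if your deployment differs:
kubectl -n emojivoto port-forward svc/web-svc 8080:80
Then open http://localhost:8080 in your browser to vote for some emoji.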
After deployment, inject the Linkerd sidecar proxy into the application's pods to add them to the data plane:
kubectl get -n emojivoto deploy -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
Linkerd has now been added to the application! Just as with the control plane, you can verify that everything is working on the data plane side. Use:
linkerd -n emojivoto check --proxy
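You can also confirm the injection with plain kubectl: each Emojivoto pod should now report an extra linkerd-proxy container (for example, READY 2/2 instead of 1/1):
kubectl -n emojivoto get pods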
Problem encountered:
When deploying Linkerd 2.11 on TKE following the official documentation, there is a problem: the policy container of linkerd-destination never becomes ready, and checking the container log shows an error. Because the policy controller is not working, the linkerd-proxy-injector pod will not start.
# k logs linkerd-destination-549f568447-mlbfb policy
2021-11-04T05:51:38.191602Z INFO linkerd_policy_controller: Admission controller server listening addr=0.0.0.0:9443
2021-11-04T05:51:38.192177Z INFO serve{addr=0.0.0.0:9990}: linkerd_policy_controller::admin: HTTP admin server listening addr=0.0.0.0:9990
2021-11-04T05:51:38.192831Z INFO grpc{addr=0.0.0.0:8090 cluster_networks=[10.0.0.0/8, 100.64.0.0/10, 172.16.0.0/12, 192.168.0.0/16]}: linkerd_policy_controller: gRPC server listening addr=0.0.0.0:8090
2021-11-04T05:51:38.201214Z WARN servers: rustls::session: Sending fatal alert DecodeError
2021-11-04T05:51:38.201272Z ERROR servers: kube::client: failed with error error trying to connect: invalid certificate: BadDER
2021-11-04T05:51:38.201284Z INFO servers: linkerd_policy_controller_k8s_api::watch: Failed error=failed to perform initial object list: HyperError: error trying to connect: invalid certificate: BadDER
2021-11-04T05:51:38.201387Z WARN serverauthorizations: rustls::session: Sending fatal alert DecodeError
2021-11-04T05:51:38.201436Z ERROR serverauthorizations: kube::client: failed with error error trying to connect: invalid certificate: BadDER
2021-11-04T05:51:38.201452Z INFO serverauthorizations: linkerd_policy_controller_k8s_api::watch: Failed error=failed to perform initial object list: HyperError: error trying to connect: invalid certificate: BadDER
2021-11-04T05:51:38.202124Z WARN pods: rustls::session: Sending fatal alert DecodeError
2021-11-04T05:51:38.202184Z ERROR pods: kube::client: failed with error error trying to connect: invalid certificate: BadDER
2021-11-04T05:51:38.202194Z INFO pods: linkerd_policy_controller_k8s_api::watch: Failed error=failed to perform initial object list: HyperError: error trying to connect: invalid certificate: BadDER
2021-11-04T05:51:39.202667Z INFO serverauthorizations: linkerd_policy_controller_k8s_api::watch: Restarting
2021-11-04T05:51:39.202714Z INFO servers: linkerd_policy_controller_k8s_api::watch: Restarting
2021-11-04T05:51:39.203709Z INFO pods: linkerd_policy_controller_k8s_api::watch: Restarting
We later found this problem reported on GitHub; it turned out to be a bug in Linkerd. For the specific issue, see:
https://github.com/linkerd/linkerd2/issues/7217
A temporary workaround is to replace the policy controller image with the one below. The maintainers have also said the issue will be fully fixed in stable-2.11.2.
ghcr.io/olix0r/policy-controller:nativetls.40ab223a
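One way to apply that workaround, assuming the policy controller runs as the container named policy inside the linkerd-destination deployment (as in the log command above) and that the control plane is in the default linkerd namespace, is to patch the image in place:
kubectl -n linkerd set image deploy/linkerd-destination policy=ghcr.io/olix0r/policy-controller:nativetls.40ab223a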