On Knative 1.0 -- installing Knative Serving and Istio, and deploying a function


For an introduction to Knative, see: Getting to know Knative - Technical tutorial (knative-sample.com)

On November 4, Knative finally released its first stable version: Knative 1.0. It has been more than three years since Knative was first released in July 2018. The main features of this release are:

  • Support for multiple HTTP routing layers (including Istio, Contour, Kourier and Ambassador); Istio remains the most common choice;
  • Support for multiple event storage layers to handle event subscriptions, such as Kafka, MQ, etc.;
  • A "duck type" abstraction that allows Kubernetes resources with common fields to be processed uniformly;
  • Knative Build became independent and evolved into a separate CI/CD project: Tekton;
  • Brokers and Triggers simplify event publishing and subscription and decouple producers from consumers;
  • Events can be delivered to non-Knative components, including components outside the cluster or specific URLs on a host;
  • Automatic provisioning of TLS certificates via DNS-01 or HTTP-01 challenges;
  • New Parallel and Sequence components for composing event workflows;
  • Horizontal Pod autoscaling based on concurrency or RPS (requests per second);
  • DomainMapping simplifies the management and publishing of service domains.
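As an illustration of the concurrency/RPS-based autoscaling mentioned above, the metric and target are set through annotations on a Service's revision template. A minimal sketch, assuming a placeholder service name and image (the annotation keys are Knative's standard autoscaling annotations):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-demo            # placeholder name
spec:
  template:
    metadata:
      annotations:
        # Scale on requests per second instead of the default (concurrency)
        autoscaling.knative.dev/metric: "rps"
        # Target 150 requests per second per pod
        autoscaling.knative.dev/target: "150"
    spec:
      containers:
        - image: docker.io/abreaking/helloworld-java
```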

See the official announcement for more information: Knative 1.0 is out! - Knative


tips: the installation instructions on the official website are not precise enough, and the helloworld example has pits in it. It took me about a week to get everything working. I was a little speechless.

Preparation before installation

Configuration

It is recommended to install in a Linux environment; I use Ubuntu locally.

  • CPU: 8 cores or more
  • Memory: 8 GB or more
  • Disk: 40 GB or more
  • Network: able to access the external network


  1. docker

    The container engine; needless to say.

    Installation is straightforward; just follow the official documentation: https://docs.docker.com/get-started/

    tips: you'd better create your own Docker Hub account, which can be used to store personal images.

  2. k8s

    For local learning it is recommended to install minikube; installing k8s directly is too difficult.

    Minikube installation is relatively simple. Refer to: Minikube - Kubernetes local experimental environment - Alibaba cloud developer community (aliyun.com)

  3. golang

    Download the latest Go release from the official website, unzip it and add it to your environment variables.

    Official manual: https://golang.org/doc/install
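Returning to step 2 (k8s): given the CPU/memory/disk requirements listed earlier, it helps to start minikube with matching resource flags. A minimal sketch that only assembles and prints the suggested command (the flags are standard minikube options, but `--driver=docker` is an assumption; pick whatever driver your host supports):

```shell
#!/bin/sh
# Sketch: a minikube start invocation sized per the requirements above.
# --driver=docker is an assumption; substitute the driver your host supports.
CMD="minikube start --cpus=8 --memory=8192 --disk-size=40g --driver=docker"
echo "$CMD"
```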


Some of the required k8s images are hosted on registries (such as gcr.io) that are inaccessible from mainland China; the workaround is to pull them through a mirror registry and then replace the image addresses in the yaml files, as described below.

Installing knative serving

Official Manual: https://knative.dev/docs/install/serving/install-serving-with-yaml/

There are some pits in the official documents; I'll point them out as we go.

The installation steps are as follows:

Installing components of serving

tips: the official website's installation method applies the yaml directly from the URL:

kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.0.0/serving-core.yaml

I suggest you download the yaml file locally first, and then run kubectl apply -f on it. This has two advantages: 1. it is easy to roll back if the installation fails; 2. you can manually replace any image references that cannot be pulled.

  1. serving-crds.yaml

    Install the CRD resources. This file contains only basic definitions; there are no images to pull.

    $ wget https://github.com/knative/serving/releases/download/knative-v1.0.0/serving-crds.yaml
    $ kubectl apply -f serving-crds.yaml
  2. serving-core.yaml

    This file contains the core components of serving.

    $ wget https://github.com/knative/serving/releases/download/knative-v1.0.0/serving-core.yaml
    $ kubectl apply -f serving-core.yaml

    This file defines six components, which means six images to pull. You can check them with kubectl get pod -n knative-serving:

    $ kubectl get pod -n knative-serving
    NAME                                     READY   STATUS    RESTARTS       AGE
    activator-68b7698d74-gkgnd               1/1     Running   3 (148m ago)   22h
    autoscaler-6c8884d6ff-b5rkt              1/1     Running   3 (148m ago)   22h
    controller-76cf997d95-c28ft              1/1     Running   3 (148m ago)   22h
    domain-mapping-57fdbf97b-gj96k           1/1     Running   3 (148m ago)   22h
    domainmapping-webhook-579dcb874d-x4zc2   1/1     Running   4 (148m ago)   21h
    webhook-7df8fd847b-c85tx                 1/1     Running   4 (148m ago)   22h

    If the STATUS is not Running, find the cause with the kubectl describe ... command. Generally, a status of ErrImagePull or CrashLoopBackOff means the image cannot be pulled. Solution: for example, pull the images referenced in the yaml through the Alibaba Cloud image service, push them to your own registry, and then replace the image address of the corresponding component in the yaml file. (The same applies below.)
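As a concrete sketch of that workaround, the image references in the downloaded yaml can be rewritten with sed before applying it. The mirror path `registry.cn-hangzhou.aliyuncs.com/mymirror` and the sample image line below are hypothetical placeholders; substitute the repository you actually pushed the mirrored images to:

```shell
#!/bin/sh
# Sketch: rewrite unreachable gcr.io image references in a downloaded Knative
# yaml so they point at a mirror registry. MIRROR is a hypothetical placeholder.
MIRROR="registry.cn-hangzhou.aliyuncs.com/mymirror"

# A sample line as it might appear in serving-core.yaml:
cat > /tmp/serving-core-sample.yaml <<'EOF'
        image: gcr.io/knative-releases/knative.dev/serving/cmd/activator@sha256:abc123
EOF

# Swap the gcr.io prefix for the mirror, keeping the component path and digest.
sed "s#gcr.io/knative-releases#${MIRROR}#g" /tmp/serving-core-sample.yaml \
    > /tmp/serving-core-patched.yaml

cat /tmp/serving-core-patched.yaml
```

After patching, `kubectl apply -f` the patched file instead of the original.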

Install network layer

Generally, istio is chosen as Knative's service mesh.

Note: this step on the official website does not install istio itself; istio must be installed separately, as shown later. This step only selects a network layer for knative serving.

  1. Configure istio. (Note: this does not install istio.)

    $ wget https://github.com/knative/net-istio/releases/download/knative-v1.0.0/istio.yaml
    $ kubectl apply -l knative.dev/crd-install=true -f istio.yaml
    $ kubectl apply -f istio.yaml
  2. Install the net-istio controller.

    $ wget https://github.com/knative/net-istio/releases/download/knative-v1.0.0/net-istio.yaml
    $ kubectl apply -f net-istio.yaml

Then you can check whether the istio-ingressgateway service is normal with the following command:

$ kubectl --namespace istio-system get service istio-ingressgateway
NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                                       AGE
istio-ingressgateway   LoadBalancer   ...          ...           15021:31993/TCP,80:31426/TCP,443:30168/TCP,31400:30840/TCP,15443:31673/TCP   21h

On minikube, if the status of istio-ingressgateway is pending, or its EXTERNAL-IP shows none, you also need to enable minikube's load-balancer support:

In the background, or in a new terminal window, execute:

minikube tunnel
# Or force-clean stale tunnels first:
# minikube tunnel --cleanup

As mentioned above, this step does not install istio, so you will see many pods in the istio-system namespace stuck in CrashLoopBackOff or Pending states.

$ kubectl get pod -n istio-system
NAME                                   READY   STATUS             RESTARTS      AGE
istio-ingressgateway-b899b7b79-87sn6   0/1     CrashLoopBackOff   3 (41s ago)   93s
istio-ingressgateway-b899b7b79-dzrrr   0/1     CrashLoopBackOff   3 (41s ago)   93s
istio-ingressgateway-b899b7b79-l85lq   0/1     CrashLoopBackOff   3 (41s ago)   93s
istiod-d845fbcfd-8zk9w                 1/1     Running            0             93s
istiod-d845fbcfd-fsrb4                 1/1     Running            0             78s
istiod-d845fbcfd-qszng                 0/1     Pending            0             78s

I used to think this meant knative serving had been installed successfully here, but it had not; this is the pit.

Therefore, istio needs to be installed separately.

Configure DNS (optional)

You can configure DNS for Knative. Later we will deploy a helloworld service; with a domain configured, we can access the helloworld service directly with curl.

Of course, DNS need not be configured, but then you must add a Host request header when curling the helloworld service later. (More on this below.)
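Without DNS, that later curl call needs a Host header matching the service's hostname, aimed at the ingress gateway's IP. A minimal sketch that only assembles and prints the command: `127.0.0.1` is a stand-in for your gateway's EXTERNAL-IP, and the `example.com` suffix is Knative's default domain when none is configured:

```shell
#!/bin/sh
# Sketch: call a Knative service without configured DNS by sending the Host
# header directly to istio-ingressgateway. GATEWAY_IP is a placeholder; on a
# real cluster you would obtain it with:
#   kubectl -n istio-system get svc istio-ingressgateway \
#     -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
GATEWAY_IP="127.0.0.1"
HOST_HEADER="helloworld-java-spring.default.example.com"

echo "curl -H 'Host: ${HOST_HEADER}' http://${GATEWAY_IP}"
```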

If configuration is required, execute the following command:

$ wget https://github.com/knative/serving/releases/download/knative-v1.0.0/serving-default-domain.yaml
$ kubectl apply -f serving-default-domain.yaml

You can also install other plug-ins; see the official manual. For example, I also installed the HPA extension. None of this is necessary, though; you can skip this step.

At this point the basic installation of knative serving is complete. The pods in the knative-serving namespace look like this:

$ kubectl get pod -n knative-serving
NAME                                     READY   STATUS    RESTARTS        AGE
activator-68b7698d74-gkgnd               1/1     Running   3 (4h20m ago)   24h
autoscaler-6c8884d6ff-b5rkt              1/1     Running   3 (4h20m ago)   24h
autoscaler-hpa-85b46c9646-9k6h9          2/2     Running   2 (4h20m ago)   20h
controller-76cf997d95-c28ft              1/1     Running   3 (4h20m ago)   24h
domain-mapping-57fdbf97b-gj96k           1/1     Running   3 (4h20m ago)   24h
domainmapping-webhook-579dcb874d-x4zc2   1/1     Running   4 (4h20m ago)   23h
net-istio-controller-544874485d-ztb2d    1/1     Running   2 (4h20m ago)   22h
net-istio-webhook-695d588d65-s79d2       2/2     Running   8 (4h18m ago)   22h
webhook-7df8fd847b-c85tx                 1/1     Running   4 (4h20m ago)   24h

Install istio

For the download and installation of istio, you can directly refer to the official documents: https://istio.io/latest/docs/setup/getting-started/#download

For the istio version, Knative officially recommends 1.9.5.

We will install the officially recommended version.


Go to the istio download page: https://github.com/istio/istio/releases/tag/1.9.5

Select the build appropriate for your host and download it locally.

Environment variables

Then unzip the archive and configure the environment variables as usual.

$ export ISTIOPATH=/root/istio-1.9.5
$ export PATH=$ISTIOPATH/bin:$PATH
$ istioctl version
client version: 1.9.5
pilot version: 1.9.5
pilot version: 1.9.5
pilot version: 1.9.5
pilot version: 1.10.5
pilot version: 1.10.5
pilot version: 1.10.5
data plane version: 1.10.5 (2 proxies), 1.9.5 (4 proxies)


This is a local learning environment, so we can simply install with the demo profile.

$ istioctl install --set profile=demo -y
# There may be some inexplicable errors, but it doesn't matter; just confirm
# that the following components are installed:
✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Installation complete

Enable sidecar

Enable sidecar container injection on the knative-serving namespace:

$ kubectl label namespace knative-serving istio-injection=enabled namespace/knative-serving labeled

At this point istio is installed. You can check the pod status in the istio-system namespace:

$ kubectl get pod -n istio-system
NAME                                    READY   STATUS    RESTARTS        AGE
istio-egressgateway-64b4ccccbf-r9d2j    1/1     Running   0               4h27m
istio-ingressgateway-6dc7b4b675-4k2pq   1/1     Running   0               4h27m
istio-ingressgateway-6dc7b4b675-lrtk7   1/1     Running   0               4h27m
istio-ingressgateway-6dc7b4b675-nl6xz   1/1     Running   0               4h27m
istiod-65fbd8c54c-mc9dr                 0/1     Running   0               4h27m
istiod-65fbd8c54c-pd5dq                 0/1     Running   0               4h27m
istiod-65fbd8c54c-wq4dd                 0/1     Running   0               4h27m
istiod-d845fbcfd-dcvs6                  1/1     Running   2 (4h35m ago)   22h
istiod-d845fbcfd-q7t8k                  0/1     Running   0               4h27m
istiod-d845fbcfd-rt49j                  0/1     Running   0               4h27m

Then confirm the service status in the istio-system namespace:

$ kubectl get svc -n istio-system
NAME                    TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                                       AGE
istio-egressgateway     ClusterIP      ...          <none>        80/TCP,443/TCP,15443/TCP                                                      4h27m
istio-ingressgateway    LoadBalancer   ...          ...           15021:31993/TCP,80:31426/TCP,443:30168/TCP,31400:30840/TCP,15443:31673/TCP   22h
istiod                  ClusterIP      ...          <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                         22h
knative-local-gateway   ClusterIP      ...          <none>        80/TCP                                                                        22h


At this point the Knative serving installation is complete, and we can deploy a helloworld service on Knative.

ATTENTION!!! There is another pit here. If you directly use the helloworld-go example from the official document, it will probably not run. In any case, I deployed it on two hosts without success; the service status always ended up as RevisionMissing.

Moreover, the image of the official demo is more than 200 MB. Apart from the golang runtime itself, a simple helloworld program should not be that large, so I seriously doubt the demo given on the official website.

Next, let's walk through writing the helloworld example.

Use an off-the-shelf image

If you are too lazy to write one, you can use my helloworld image: abreaking/helloworld-java.

Then you can jump straight to the Deploy application section below.

Write it yourself

If you are using my existing helloworld image, you can skip this step.

Let's write a Java Spring Boot project as the demo.

Reference: Hello World - Spring Boot Java - Knative


  1. A simple Spring Boot project; my project name: knative-demo
  2. A maven environment
  3. A Docker Hub account, used to upload our images.


  1. First create a simple Spring Boot project in your IDE (e.g. IDEA), then write a simple controller in it.

    @RestController
    public class KnativeServingController {

        @Value("${TARGET:World}")
        String target;

        @RequestMapping("/")
        String hello() {
            return "Hello " + target + "!";
        }
    }

    After writing it, remember to test it yourself. For example, after startup, access localhost:8080 directly and it should output "Hello World!":

    $ curl http://localhost:8080
    Hello World!

    This means that the code we write is OK.

  2. Create a Dockerfile, usually in the root directory of the project (i.e. in the knative-demo folder).

    # Use the official maven/Java 8 image to create a build artifact: https://hub.docker.com/_/maven
    FROM maven:3.5-jdk-8-alpine as builder

    # Copy local code to the container image.
    WORKDIR /root/knative/helloworld-java/knative-demo
    COPY pom.xml .
    COPY src ./src

    # Build a release artifact.
    RUN mvn package -DskipTests

    # Use the official OpenJDK image for a lean production stage of our multi-stage build.
    # https://hub.docker.com/_/openjdk
    # https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
    FROM openjdk:8-jre-alpine

    # Copy the jar to the production image from the builder stage.
    COPY --from=builder /root/knative/helloworld-java/knative-demo/target/knative-demo-*.jar /helloworld.jar

    # Run the web service on container startup.
    CMD ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/helloworld.jar"]

    (Remember to replace the project directory path above with your own.)

  3. Build an image and push it to your docker hub

    Under the path of the Dockerfile file above, execute the following command to build:

    # abreaking is my own Docker Hub user name, and helloworld-java is the custom image name
    docker build -t abreaking/helloworld-java .

    The build takes a while. After completion, push the image to your own Docker Hub repository:

    docker push abreaking/helloworld-java

    After the push succeeds, you can see your image on Docker Hub.

Deploy application

From Knative's point of view, the demo we wrote is a function, because Knative is a FaaS (Function as a Service) platform.

Now we can deploy. Knative usually does three things when deploying an application:

  • Creates a new, immutable revision of the application (this is very important; the grayscale release strategies covered later depend on it);
  • At the network level, automatically creates the route, traffic ingress, service and load balancer for the application;
  • At the k8s level, automatically scales the application's pods up and down; if your service receives no calls, it scales down to zero.
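The immutable revisions from the first point are what enable grayscale release: the traffic block of a Knative Service can split requests between named revisions. A hedged sketch, assuming my helloworld image; the revision names follow the -00001/-00002 pattern Knative generates, but yours may differ:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-java-spring
spec:
  template:
    spec:
      containers:
        - image: docker.io/abreaking/helloworld-java
  traffic:
    # Send 90% of traffic to the old revision, 10% to the new one
    - revisionName: helloworld-java-spring-00001
      percent: 90
    - revisionName: helloworld-java-spring-00002
      percent: 10
```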

Write yaml

Create a new file named service-java.yaml with the following content:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-java-spring
  namespace: default
spec:
  template:
    spec:
      containers:
        # Here is the address of the image
        - image: docker.io/abreaking/helloworld-java
          env:
            - name: TARGET
              value: "World"


Deploy the application with the following command:

$ kubectl apply -f service-java.yaml
service.serving.knative.dev/helloworld-java-spring created


Verify the hello world we deployed with the following command:

$ kubectl get ksvc helloworld-java-spring
NAME                     URL                                        LATESTCREATED                  LATESTREADY                    READY   REASON
helloworld-java-spring   http://helloworld-java-spring.default.     helloworld-java-spring-00001   helloworld-java-spring-00001   True

If READY is True, the deployment succeeded.

If the status stays RevisionMissing, the cause may be the image itself; for example, the image cannot be downloaded at all. You can verify this by manually running docker pull on the image name.

If you are sure the image itself is OK, you may just need to be patient for a while (two or three minutes at most).


We can see that the helloworld service has a URL; we can curl it directly:

$ curl http://helloworld-java-spring.default.
Hello World!

If it outputs Hello World!, the whole installation and deployment succeeded.

The first request to the URL may respond a little slowly. The reason is simple: as mentioned earlier, when there are no calls, the application automatically scales to zero. When a call arrives, it first has to pull the image, deploy it, and start a pod, so the first start is a little slow.

Some problems and Solutions

istio installation fails; some components are Pending or CrashLoopBackOff

First, make sure you are using the recommended istio version; if not, just upgrade directly. The official recommendation is version 1.9.5.

Secondly, note that istio has strict requirements for memory and CPU. It is recommended to have 8 GB of memory or more and 8 CPU cores or more.

As long as the version is right and the memory and CPU meet the requirements, istio installation is basically problem-free.

The helloworld service stays in RevisionMissing after startup

This problem is a bit tricky. Personally, I think the cause lies in the demo: the official demo has problems, and its image is hundreds of MB to pull.

My eventual solution was to build my own demo and replace the official demo image. The RevisionMissing state still appeared at first; I waited a little longer, and then it was OK.

The istio-ingressgateway service stays pending, and EXTERNAL-IP is empty

The problem lies with minikube. First check whether the type of your istio-ingressgateway service is LoadBalancer. If so, minikube needs one extra operation: open a new window and run minikube tunnel. That solves it.
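A quick way to check this is to pull the TYPE and EXTERNAL-IP columns out of the kubectl get svc output. The sketch below runs against a captured sample; the heredoc stands in for piping `kubectl -n istio-system get svc istio-ingressgateway` on a real cluster, and the CLUSTER-IP value in it is made up:

```shell
#!/bin/sh
# Sketch: detect a LoadBalancer service stuck without an external IP.
# The heredoc below is sample output standing in for a real `kubectl get svc`.
cat > /tmp/svc.txt <<'EOF'
NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
istio-ingressgateway   LoadBalancer   10.96.12.34   <pending>     80:31426/TCP   21h
EOF

# Column 2 is TYPE, column 4 is EXTERNAL-IP.
awk 'NR > 1 && $2 == "LoadBalancer" && ($4 == "<pending>" || $4 == "<none>") {
    print $1 " has no external IP yet; run: minikube tunnel"
}' /tmp/svc.txt
```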


Posted on Wed, 10 Nov 2021 00:02:19 -0500 by sneskid