Deploy containers to Kubernetes on Azure

Contents

Building a Node.js application

Docker container

Use Azure container registry

Deploy to Kubernetes

Next steps

In the last article, we looked at how to extend our cloud-native application to Azure Kubernetes Service (AKS) and have Azure DevOps automatically build and deploy the application and infrastructure together.

In this article, we go further by building a containerized Node.js application component. We first build a simple Node.js front end for our Functions, then use Docker to containerize the application and push it to our own private container registry. Finally, we deploy the container to a pod in our Kubernetes cluster.

By the end of this article, we will have both our serverless Azure Functions and a more traditional Node.js application component bundled into one code repository and deployed using Azure DevOps.

Building a Node.js application

Before we start building containers with Docker, let's build a simple Node.js front-end application using the Express.js framework. In this project, we just build a simple page that retrieves the task list from our function, but you can expand it further at any time.

First, we create a new folder called NodeServer in the code base. In this folder, we run the following commands to set up our application:

  • npm init
  • npm install express
  • npm install axios

These three commands initialize our application and install the two packages we need: Express to host the web page and Axios to retrieve data from our API.
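The resulting package.json should end up looking something like the example below; the project name, description, and version numbers shown here are illustrative rather than taken from the original project:

{
  "name": "nodeserver",
  "version": "1.0.0",
  "description": "Node.js front end for the task manager Functions",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "axios": "^0.21.4",
    "express": "^4.17.1"
  }
}

When finished, create a new file called app.js and enter the following code: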

const express = require('express')
const axios = require('axios').default;
const app = express()
const port = 3001

app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
})

app.get('/', function (req, res) {

    // Call the ListTasks function and render the results as an HTML table.
    axios({
        method: 'get',
        url: 'https://tsfunctionexample.azurewebsites.net/api/ListTasks',
        data: { username: 'testuser' }
    }).then(function (response) {
        var returnPage = "<html><body><h1>Task List</h1>";
        // Axios reports the HTTP status as a number, so compare with ===.
        if (response.status === 200) {
            returnPage += '<table style="width:100%"><tr><th>Username</th><th>Task ID</th><th>Task Name</th><th>Due Date</th></tr>';
            response.data.results.forEach(element => {
                returnPage += '<tr><td>' + element.username + '</td><td>' + element.taskID + '</td><td>' + element.name + '</td><td>' + element.dueDate + '</td></tr>';
            });
            returnPage += '</table>';
        } else {
            returnPage += '<p>Error - No Tasks Found</p>';
        }
        returnPage += '</body></html>';
        res.send(returnPage);
    }).catch(function (error) {
        // A try/catch block would not catch a rejected promise, so handle errors here.
        console.error(error);
        res.status(500).send('Error retrieving tasks');
    });
});

Stepping through this code: first we load Express and start it listening on port 3001. We then use the .get() function to listen for GET requests to the root (home) URL. On receiving a request, we use Axios to call our ListTasks API (hosted at https://tsfunctionexample.azurewebsites.net/api/ListTasks), passing the username "testuser" from the previous articles as the request payload.

When the response returns, we iterate through all the results in the list, format them into a simple HTML table, and return it. If we save this file and run our code with the command node app.js, we should see the task list returned from our function.
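For reference, the rendering loop above expects the function to return a JSON body containing a results array with these fields; the values below are only illustrative, not actual output from the API:

{
  "results": [
    {
      "username": "testuser",
      "taskID": 1,
      "name": "Write the next article",
      "dueDate": "2021-11-30"
    }
  ]
}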

It's worth noting that if you haven't called your Functions for a while, this page may take some time to return. This is because a function needs time to warm up after it has been idle for a period of time.

From here, we can add additional functionality for creating, deleting, and getting specific tasks; a sketch of what one such route might look like is shown below. After that, let's see how to containerize this application.
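As a rough sketch of what such an extension might look like, the snippet below adds a route that forwards a new task to an AddTask endpoint. The endpoint name and payload fields are assumptions made for illustration; they are not part of the Functions project built earlier in the series:

// Add to app.js below the existing route. Enable JSON body parsing first.
app.use(express.json());

// Hypothetical example: forward a new task to an assumed AddTask endpoint.
app.post('/tasks', function (req, res) {
    axios({
        method: 'post',
        url: 'https://tsfunctionexample.azurewebsites.net/api/AddTask', // assumed endpoint
        data: { username: 'testuser', name: req.body.name, dueDate: req.body.dueDate }
    }).then(function (response) {
        res.status(response.status).send(response.data);
    }).catch(function (error) {
        console.error(error);
        res.status(500).send('Error creating task');
    });
});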

Docker container

Docker enables you to package an application and all of its dependencies into a single image. This includes a base operating system (usually Linux), any runtime dependencies (such as Node and npm), and your application together with the libraries it requires.

A container is a running instance of that image, and the blueprint is the configuration for building it. Let's build a blueprint for our current application and run the container locally for testing. Make sure you have Docker installed and ready.

First, create a new file called Dockerfile to make our container blueprint. Open this file and use the following code:

FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD [ "node", "app.js"]

Most container blueprints should be very simple and follow a similar pattern. The first line defines the base image we are building from; in this case, the Node.js version 14 image.

Next, we specify where our application code can be found in this image. Because this source image contains both Node and npm, we can install all our dependencies by copying our existing package files and running npm install.

Next, we copy our application code by copying the current local directory to the current working directory.

Finally, we expose port 3001 from the container and run Node using CMD, specifying our app.js as a parameter.

Before we build and run our Docker image, we should also create an ignore file so that our local debugging files and packages are not copied into the image. To do this, create a file named .dockerignore with the following contents:

node_modules
npm-debug.log

Once that's done, we build the Docker image with the following command:

docker build -t <username>/node-server .

Docker works through each step in the Dockerfile, builds the image, and saves it to the local image store. You can view all local images using the command docker images.


Once we have our Docker image available locally, we can start a container from it with the following command:

docker run -p 80:3001 <username>/node-server

This command runs the <username>/node-server image (add -d if you want it to run in the background) and, with the -p parameter, maps any traffic arriving on port 80 to port 3001 in the container. If we now run the command docker ps, we should see our container running with the forwarded port.

When we open a web browser and go to http://localhost, our task manager application should be displayed.
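If you prefer to check from the terminal instead of a browser, a quick request against the mapped port should return the generated HTML (assuming the container from the previous step is still running):

curl http://localhost/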

Use Azure container registry

So we have used Docker to containerize our application: the container image sits in our local image store, and Docker can build and run containers locally. Now let's build a container image and store it in the Azure Container Registry so that AKS can host the container. First, we need to create the container registry by opening the Azure portal, starting the Cloud Shell, and typing the following command:

az acr create --resource-group TSFunctionTutorial --name tsfunctionRegistry --sku Basic

Once our container registry exists, we can now add another step to our infrastructure pipeline to build our container image.

To give the pipeline access to the Docker registry, we create an additional service connection (backed by a service principal) by opening the project in Azure DevOps, selecting Project Settings, and opening Service Connections.

Click New service connection, select Docker Registry from the connection options, and then pick the Azure Container Registry from your subscription.

After setting up the service connection, we open Pipelines and edit the task manager pipeline we created in the previous article.

After the Deploy Infrastructure stage (which contains the Deploy AKS Cluster job), add a new stage using the following code:

- stage: BuildContainer
  displayName: Build Container
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: tsfunctionnodeserver
        dockerfile: '**/Dockerfile'
        containerRegistry: TSFunctionRegistry

This stage uses the Docker@2 task to build and push our image in a single step. The key inputs here are the name of the Docker image (repository), the location of the Dockerfile, and the name of the container registry service connection we created. Save and commit the changes, then run the pipeline to build and deploy the application components.

After it completes, when we go to the container registry in the Azure portal and view its repositories, we should see the Docker image waiting to be deployed.
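If you would rather verify this from the command line, the Azure CLI can list the repositories in the registry. This assumes you are signed in with az login and uses the registry created earlier:

az acr repository list --name tsfunctionRegistry --output table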

Deploy to Kubernetes

Now that our Docker image has been built and pushed to the registry, we can set up deploying our container to the cluster.

First, we need to create another service connection for our cluster.

Go to Project Settings, then Service Connections, and create a new Kubernetes service connection. Select the Azure subscription option, the AKS cluster, and the default namespace. We call the service connection TSFunctionCluster and will use that name later.

Now that we have created the service connection, we need to create manifest files to deploy our container.

Create a new directory named manifests and, inside it, a new file named deployment.yaml with the following code:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tsfunctionnodeserver 
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tsfunctionnodeserver
  template:
    metadata:
      labels:
        app: tsfunctionnodeserver 
    spec:
      containers:
        - name: tsfunctionnodeserver 
          image: tsfunctionregistry.azurecr.io/tsfunctionnodeserver
          ports:
          - containerPort: 3001

This will deploy our container with no real limit on the resources it can consume (we'll look at adding limits after the service definition below), exposing port 3001 where our Node.js app listens. Next, we create a service.yaml file with the following code:

apiVersion: v1
kind: Service
metadata:
    name: tsfunctionnodeserver
spec:
    type: LoadBalancer
    ports:
    - port: 80
      targetPort: 3001
    selector:
        app: tsfunctionnodeserver

This component adds networking to our container so that we can access it from outside the cluster: the load balancer listens on port 80 and forwards traffic to port 3001 in the pod.
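Note that the deployment above sets no resource requests or limits, so the pod can consume whatever the node will give it. If you want to constrain it, you could extend the container entry in deployment.yaml with a resources block; the figures below are illustrative starting points rather than tuned values:

      containers:
        - name: tsfunctionnodeserver
          image: tsfunctionregistry.azurecr.io/tsfunctionnodeserver
          ports:
          - containerPort: 3001
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi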

We also need to add one more stage to our pipeline, containing two tasks: one to configure the Kubernetes secret used to pull the image from ACR, and one to deploy our manifests. After the build container stage, add the following code to the azure-pipelines.yaml file:

- stage: DeployContainer
  displayName: Deploy Container
  jobs:
  - job: Deploy
    displayName: Deploy Container
    pool:
      vmImage:  $(vmImageName)
    steps:
    - task: KubernetesManifest@0
      inputs:
        action: 'createSecret'
        kubernetesServiceConnection: 'TSFunctionCluster'
        namespace: 'default'
        secretType: 'dockerRegistry'
        secretName: 'tsfunction-access'
        dockerRegistryEndpoint: 'TSFunctionRegistry'
    - task: KubernetesManifest@0
      inputs:
        action: 'deploy'
        kubernetesServiceConnection: 'TSFunctionCluster'
        namespace: 'default'
        manifests: |
                $(Build.SourcesDirectory)/manifests/deployment.yaml
                $(Build.SourcesDirectory)/manifests/service.yaml
        containers: 'tsfunctionnodeserver'
        imagePullSecrets: 'tsfunction-access'

The first task in this stage creates the secret needed to pull the container image from the registry. It uses the Docker registry service connection and the Kubernetes service connection to link the two.

The second task uses the two manifest files to deploy the image as a container on the cluster.

Save these files and check in your changes to run the pipeline. After the pipeline has deployed, open the cluster in the Azure portal, click Workloads, and then Pods. You should see a pod whose name starts with tsfunctionnodeserver. The LoadBalancer service we created exposes it externally; copy the service's external IP address (shown under Services and ingresses) and you should be able to use it to access your task management front-end application.
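Alternatively, assuming kubectl is configured against the cluster (for example via az aks get-credentials), you can check the rollout and find the service's external IP from the command line:

kubectl get pods
kubectl get service tsfunctionnodeserver

The EXTERNAL-IP column in the service output is the address to browse to.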

Next steps

Throughout this series, we used a combination of serverless Azure Functions and a containerized Node.js application component to build a simple task management application. All of our code, infrastructure configuration, and pipeline configuration is stored in a Git repository as part of Azure DevOps.

You can develop this application further by building the front end as a single-page application (SPA), adding security, and fully automating deployment. Having worked through this series and seen how Azure Functions combine with other cloud tools, you can now build and deploy your own serverless applications.

To learn more about using Kubernetes on Azure, browse Kubernetes Bundle | Microsoft Azure and Kubernetes on Azure | Microsoft Azure.


Tags: ASP.NET Kubernetes Container Azure

Posted on Sun, 07 Nov 2021 14:46:15 -0500 by mbrown