[k8s] Deploy an EKS cluster with one click using Terraform

This article assumes the following:

  • An AWS global (non-China) account is used
  • You have a basic understanding of AWS, Terraform, and Kubernetes
  • A dedicated VPC is created for the cluster

Introduction to Terraform

Terraform is open-source infrastructure-as-code software.
AWS offers a comparable service, CloudFormation. We chose Terraform because it is more general: it can manage common cloud providers such as AWS, Azure, and Alibaba Cloud with the same tooling.
See the provider registry for usage: https://registry.terraform.io/namespaces/hashicorp

Prerequisites

  • An AWS global account with administrator privileges
  • The server that runs the scripts has an IAM role with administrator permission attached (this broad permission is not actually required; it is used here only for demonstration, and a least-privilege policy is not configured)
  • Terraform is installed on the server (see the Terraform website for installation instructions)
  • awscli and kubectl are installed on the server (a quick check is shown below)
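
A quick way to confirm the tools are available on the server (any reasonably recent versions should work):

# Confirm the required tools are on PATH
terraform version
aws --version
kubectl version --client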

Notes

For ease of reading, the .tf scripts are split by AWS resource type (in practice, all of the configuration could be combined into a single .tf file and applied together). The resulting file layout is shown below.
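
The files covered in this article (the directory name eks_demo matches the one used later):

eks_demo/
├── main.tf           # provider and local parameters
├── eks.tf            # EKS cluster and node group
├── ec2.tf            # launch template for the nodes
├── iam.tf            # IAM roles and policies
├── securitygroup.tf  # security groups
└── vpc.tf            # VPC, subnets, internet gateway, routing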

Script content

The scripts are commented inline, so each block is not explained again here.

main.tf

Modify the parameters in this file to suit your needs; no other files need to be changed.

# author zhenglisai
# Use the IAM role attached to the server that runs this script
provider "aws" {
  region = "us-west-2"
}
# Look up the availability zones in the current region
data "aws_availability_zones" "available" {
  state = "available"
}
# Local parameters; adjust the values below as needed
locals {
  # EKS cluster name
  cluster_name = "tf-cluster-zhenglisai"
  # EKS cluster role name
  cluster_role_name = "tf-cluster-zhenglisai"
  # EKS node group name
  node_name = "tf-node-zhenglisai"
  # EKS compute node role
  node_role_name = "tf-node-zhenglisai"
  # Launch template name used by the EKS nodes
  launch_template_name = "tf-launch_template-zhenglisai"
  # image_id differs per region; this AMI ID is only valid in us-west-2. For other regions, see the AWS documentation (or the SSM lookup shown after this file)
  launch_template_image_id = "ami-0cb182e3037115aa0"
  # The instance type used by the EKS compute node
  launch_template_instance_type = "t3.small"
  # SSH key pair used to log in to the nodes; it must already exist in EC2 key pairs
  launch_template_key_name = "Your key name"
  # VPC CIDR block used by the EKS cluster
  vpc_cidr_block = "10.2.0.0/16"
  # First subnet CIDR block used by the EKS cluster
  subnet_1_cidr_block = "10.2.0.0/20"
  # Second subnet CIDR block used by the EKS cluster
  subnet_2_cidr_block = "10.2.16.0/20"
}
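
If you prefer not to hard-code the AMI ID for each region, one option (not part of the original scripts) is to read the EKS-optimized Amazon Linux 2 AMI from the public SSM parameter. This is a sketch that assumes Kubernetes version 1.21; adjust the version segment of the path to match your cluster:

# Optional: look up the EKS-optimized AMI for the current region via SSM
# (assumes Kubernetes 1.21; change the version in the path as needed)
data "aws_ssm_parameter" "eks_ami" {
  name = "/aws/service/eks/optimized-ami/1.21/amazon-linux-2/recommended/image_id"
}
# Then reference data.aws_ssm_parameter.eks_ami.value instead of local.launch_template_image_id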

eks.tf

# author zhenglisai
# EKS cluster
resource "aws_eks_cluster" "eks-cluster" {
  name     = local.cluster_name
  role_arn = aws_iam_role.eks-cluster.arn
  vpc_config {
    subnet_ids = [aws_subnet.subnet_1.id, aws_subnet.subnet_2.id]
    security_group_ids = [aws_security_group.eks-cluster.id]
  }
}
# Managed node group (compute nodes)
resource "aws_eks_node_group" "eks-node" {
  cluster_name  = aws_eks_cluster.eks-cluster.name
  node_group_name = local.node_name
  node_role_arn = aws_iam_role.eks-node.arn
  subnet_ids    = [aws_subnet.subnet_1.id, aws_subnet.subnet_2.id]
  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }
  launch_template {
    version = aws_launch_template.eks-template.latest_version
    id = aws_launch_template.eks-template.id
  }
}

ec2.tf

# author zhenglisai
resource "aws_launch_template" "eks-template" {
  name = local.launch_template_name
  image_id = local.launch_template_image_id
  instance_type = local.launch_template_instance_type
  key_name = local.launch_template_key_name
  vpc_security_group_ids = [aws_security_group.eks-node.id]
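  # user_data runs the EKS bootstrap script so each instance joins the cluster on boot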
  user_data = base64encode("#!/bin/bash\n/etc/eks/bootstrap.sh ${aws_eks_cluster.eks-cluster.name}")
}

iam.tf

# author zhenglisai
data "aws_iam_policy" "AmazonEKSClusterPolicy" {
  name = "AmazonEKSClusterPolicy"
}
data "aws_iam_policy" "AmazonEKSWorkerNodePolicy" {
  name = "AmazonEKSWorkerNodePolicy"
}
data "aws_iam_policy" "AmazonEC2ContainerRegistryReadOnly" {
  name = "AmazonEC2ContainerRegistryReadOnly"
}
data "aws_iam_policy" "AmazonEKS_CNI_Policy" {
  name = "AmazonEKS_CNI_Policy"
}
data "aws_iam_policy_document" "ec2-instance" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}
data "aws_iam_policy_document" "eks-instance" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["eks.amazonaws.com"]
    }
  }
}
resource "aws_iam_role" "eks-cluster" {
  name = local.cluster_role_name
  assume_role_policy = data.aws_iam_policy_document.eks-instance.json
  managed_policy_arns = [data.aws_iam_policy.AmazonEKSClusterPolicy.arn]
}
resource "aws_iam_role" "eks-node" {
  name = local.node_role_name
  assume_role_policy = data.aws_iam_policy_document.ec2-instance.json
  managed_policy_arns = [data.aws_iam_policy.AmazonEC2ContainerRegistryReadOnly.arn, data.aws_iam_policy.AmazonEKS_CNI_Policy.arn, data.aws_iam_policy.AmazonEKSWorkerNodePolicy.arn]
}
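
Note: in newer versions of the AWS provider, managed_policy_arns on aws_iam_role is deprecated in favor of separate attachment resources. If you run into that, an equivalent attachment for a single policy would look roughly like this (a sketch; repeat one resource per policy, names are my own choice):

# Attach the cluster policy to the cluster role
resource "aws_iam_role_policy_attachment" "eks-cluster-policy" {
  role       = aws_iam_role.eks-cluster.name
  policy_arn = data.aws_iam_policy.AmazonEKSClusterPolicy.arn
}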

securitygroup.tf

# author zhenglisai
resource "aws_security_group" "eks-cluster" {
  name        = "eks-cluster"
  description = "Allow local vpc"
  vpc_id      = aws_vpc.eks.id
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = [local.vpc_cidr_block]
  }
  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "eks-cluster"
  }
}
resource "aws_security_group" "eks-node" {
  name        = "eks-node"
  description = "Allow local vpc"
  vpc_id      = aws_vpc.eks.id
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = [local.vpc_cidr_block]
  }
  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "eks-node"
  }
}

vpc.tf

# author zhenglisai
resource "aws_vpc" "eks" {
  cidr_block = local.vpc_cidr_block
  enable_dns_hostnames = "true"
  tags = {
    Name = "eks"
  }
}

# Define Subnet
resource "aws_subnet" "subnet_1" {
  vpc_id = aws_vpc.eks.id
  map_public_ip_on_launch = true
  cidr_block = local.subnet_1_cidr_block
  availability_zone = data.aws_availability_zones.available.names[0]
  tags = {
    Name = "subnet_1"
    "kubernetes.io/role/elb" = "1"
  }
}
resource "aws_subnet" "subnet_2" {
  vpc_id = aws_vpc.eks.id
  map_public_ip_on_launch = true
  cidr_block = local.subnet_2_cidr_block
  availability_zone = data.aws_availability_zones.available.names[1]
  tags = {
    Name = "subnet_2"
    "kubernetes.io/role/elb" = "1"
  }
}
# Internet gateway for public network access
resource "aws_internet_gateway" "igw-eks" {
  vpc_id = aws_vpc.eks.id
  tags = {
    Name = "igw-eks"
  }
}
# Look up the main route table of the VPC
data "aws_route_table" "route_table_eks" {
  vpc_id = aws_vpc.eks.id
  filter {
    name = "association.main"
    values = [true]
  }
}
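# Add a default route (0.0.0.0/0) through the internet gateway to the main route table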
resource "aws_route" "route_table_eks" {
  route_table_id = data.aws_route_table.route_table_eks.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id = aws_internet_gateway.igw-eks.id
}
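
Optionally (not part of the original file set), an outputs.tf can print the cluster details after deployment; a minimal sketch, with output names of my own choosing:

# outputs.tf (optional)
output "cluster_name" {
  value = aws_eks_cluster.eks-cluster.name
}
output "cluster_endpoint" {
  value = aws_eks_cluster.eks-cluster.endpoint
}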

Save the above files in one directory, for example an eks_demo directory.

Start execution

Enter the directory and initialize the Terraform working directory:

cd eks_demo && terraform init

After initialization, start the Terraform deployment:

terraform apply

Terraform first checks and shows a plan of the resources it will create; after reviewing the plan, type yes to start the deployment.
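
If you want to review the plan on its own, or skip the interactive confirmation, the standard Terraform commands can be used:

# Show the execution plan without applying it
terraform plan
# Apply without the interactive yes prompt
terraform apply -auto-approve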

The whole deployment process takes about 15 minutes.
After deployment, configure kubectl access to the cluster and you can start interacting with EKS; one common way is shown below.
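
A minimal way to set up kubectl with the AWS CLI (using the region and cluster name from main.tf; run it with the same IAM identity that created the cluster, since that identity gets cluster admin access by default):

# Write a kubeconfig entry for the new cluster
aws eks update-kubeconfig --region us-west-2 --name tf-cluster-zhenglisai
# Verify that the worker nodes have joined the cluster
kubectl get nodes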

Delete resources

After the experiment, if you no longer need the resources, run the following in the directory containing the .tf scripts:

terraform destroy

All resources created by Terraform will be deleted.

Tags: AWS, Kubernetes, Terraform
