A step-by-step record of deploying OpenStack on servers with kolla-ansible


Preface

I am a beginner. A course required me to build a cloud platform environment, so I chose the container-based OpenStack deployment method, kolla-ansible. The build took nearly two weeks, and today it finally deployed successfully. Below I walk through the kolla-ansible deployment process and the related network configuration.

Experimental environment

  1. Two servers with CentOS 7 installed are prepared. One is the control node (node20) with IP address 172.22.59.4; the other is the compute node (node19) with IP address 172.22.59.3.
  2. Three network interfaces are prepared on the control node (mine are enp49s0f0, enp33s0f0, and br-ex). The control node's enp49s0f0 interface is used first to connect to the external network (some dependency packages must be downloaded before deployment) and carries the 172.22.59.4 address above. One interface is enough on the compute node (mine is enp49s0f0), carrying the 172.22.59.3 address above.
  3. If you deploy on virtual machines, just follow the references at the end for the configuration.

Preparation

In the gray boxes below, each line beginning with the # sign is a complete command; you can copy and run them one line at a time on the corresponding host. (For CentOS 7.)

  1. Install the bash-completion and vim packages on both servers:
	# yum -y install bash-completion.noarch vim
  2. Turn off firewalld and SELinux on both servers:
	# vim /etc/selinux/config
	


As shown in the figure above, set the SELINUX field in the config file to disabled.
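Since the screenshot is not reproduced here, the relevant line in /etc/selinux/config should read as follows after editing:

	SELINUX=disabled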

	# setenforce 0
	# getenforce
	# systemctl disable firewalld && systemctl stop firewalld

To view the firewall's status:

	# systemctl status firewalld


  3. Configure the hosts file on both servers, as follows:

	# vim /etc/hosts


In the hosts file on both node19 and node20, simply add the IP addresses and hostnames of the two servers (the entries in the red box in the figure).
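Since the screenshot is not shown here, the entries simply follow the addresses listed in the experimental environment above:

	172.22.59.4 node20
	172.22.59.3 node19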

  4. Modify the enp33s0f0 network interface configuration on node20
	[root@node20 ~]# vim /etc/sysconfig/network-scripts/ifcfg-enp33s0f0

Put the following contents in the enp33s0f0 configuration file (this makes the interface an Open vSwitch port attached to the br-ex external bridge):

NAME=enp33s0f0
DEVICE=enp33s0f0
TYPE=OVSPort
NM_CONTROLLED=no
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes

Then restart the network service:

 [root@node20 ~]# systemctl restart network
  5. Install and update the pip tool on the node20 server

Install the EPEL repository and pip:

	[root@node20 ~]# yum -y install epel-release
	[root@node20 ~]# yum -y install python-pip
	[root@node20 ~]# cd ~
	[root@node20 ~]# mkdir .pip

Install some dependency packages in advance to avoid later errors:

[root@node20 ~]# yum install openldap-devel

Configure the pip package source (the following is a single command spanning multiple lines):

[root@node20 ~]# tee /root/.pip/pip.conf << 'EOF'
[global]
index-url = http://mirrors.aliyun.com/pypi/simple/
[install]
trusted-host=mirrors.aliyun.com
EOF

Update pip:

[root@node20 ~]# pip install -U pip
  6. Configure the pip package source on node19:

Note: this step prepares for the installation later.
Install some dependency packages in advance to avoid errors later:

[root@node19 ~]# yum install openldap-devel
[root@node19 ~]# mkdir .pip
[root@node19 ~]# tee /root/.pip/pip.conf << 'EOF'
[global]
index-url = http://mirrors.aliyun.com/pypi/simple/
[install]
trusted-host=mirrors.aliyun.com
EOF

Start deployment

  1. Install ansible on node20
[root@node20 ~]# yum -y install python-devel libffi-devel gcc openssl-devel libselinux-python
[root@node20 ~]# pip install ansible
  2. Configure ansible parameters on node20

Note: ansible installed via pip comes without a configuration file, so you need to copy the default configuration file from GitHub:
https://github.com/ansible/ansible/blob/devel/examples/ansible.cfg

[root@node20 ~]# mkdir /etc/ansible
//Write the content of ansible.cfg from GitHub into the /etc/ansible/ansible.cfg file on the server
[root@node20 ~]# vim /etc/ansible/ansible.cfg
[root@node20 ~]# ansible --version

The query results are as follows:

Then start optimizing:

[root@OpenStack-con ~]# vim /etc/ansible/ansible.cfg

The revised contents are as follows:

forks = 10  //Line 19, sets the number of parallel processes. If you manage many hosts, try increasing this value
host_key_checking = False  //Line 67, skips host key verification on the first ssh connection
pipelining = True  //Line 403, enables pipelining. Ansible normally opens several ssh connections to the target host while executing a module; this mode reduces the number of ssh connections and ansible's execution time.
//When deploying to large numbers of servers, enabling pipelining brings a significant performance improvement to ansible
  3. Extend the partition on node20 (virtual machine users can skip this part and refer directly to the first reference)

Here I hit a snag: the original server had all of its disk space allocated when the system was installed, so the partition could not be extended. In the end I had no choice but to reinstall the system with a custom partition layout that did not allocate all the disk space: of the 5T disk, 2T was allocated and 3T was left unallocated. This step extends the partition; below is my own process (virtual machine users should just follow the reference).

To view disk usage:

	[root@node20 ~]# fdisk -l


As you can see, my /dev/sda has 5T of disk space, of which I allocated 2T when I reinstalled the system, shown in the second red box in the figure as /dev/sda3, whose type is Linux LVM. But that 2T cannot serve as the storage space for the cinder service, because the partition holds the system's other files. So I will extend the /dev/sda partition table and allocate the remaining 3T to a new empty partition, /dev/sda4. The process is as follows:

//Start expanding partition
[root@node20 ~]# fdisk /dev/sda


Enter n to create a new partition.

Then you need to specify the partition number. In the previous figure you can see that partition numbers 1, 2 and 3 are already in use. The partition number here starts from 5 because I had already allocated the 3T of space to partition 4, so number 4 is taken as well. Let me demonstrate how to create a partition.

This creates a new partition 5. Then enter w, and restart the server for the change to take effect. The result:

In what follows, the /dev/sda5 partition is for demonstration purposes only; /dev/sda4 is the partition I actually use as storage space for the cinder service. You can see that its type is Linux filesystem, but when deploying the cinder service the storage space must be of type Linux LVM. So I also need to change the type of partition 4 to Linux LVM, as follows:

First enter t, then specify partition 4, and enter L to list the partition types:

The type ID of Linux LVM turns out to be 31, so enter 31.
Then enter w to save, and restart the server.
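The screenshots are not reproduced here, so below is a sketch of the interactive fdisk session; the exact prompts and defaults vary with the fdisk version and disk layout, so treat it as illustrative only:

	[root@node20 ~]# fdisk /dev/sda
	Command (m for help): n          //create the new partition (partition 5 in the demonstration)
	Command (m for help): t          //change a partition's type
	Partition number (1-5, default 5): 4
	Partition type (type L to list all types): 31   //Linux LVM
	Command (m for help): w          //write the table to disk and exit
	[root@node20 ~]# reboot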

  4. Configure cinder storage on node20
[root@node20 ~]# yum -y install yum-utils device-mapper-persistent-data lvm2
//The empty partition on my server is /dev/sda4; on a virtual machine the added virtual disk would be /dev/sdb. Note the difference
[root@node20 ~]# pvcreate /dev/sda4
[root@node20 ~]# vgcreate cinder /dev/sda4
// Check that the lvm2-lvmetad service is running
[root@node20 ~]# systemctl status lvm2-lvmetad.service
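To confirm that the physical volume and the cinder volume group were created, you can check with pvs and vgs:

	[root@node20 ~]# pvs
	[root@node20 ~]# vgs cinder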

  5. Install kolla-ansible on node20
//For the stein version, use the following command
[root@node20 ~]# pip install kolla-ansible==8.0.1 --ignore-installed PyYAML
//For the train version, use the following command; train is the version I am using
[root@node20 ~]# pip install kolla-ansible==9.0.1 --ignore-installed PyYAML

Copy kolla-ansible's configuration files:

[root@node20 ~]# cp -r /usr/share/kolla-ansible/etc_examples/kolla /etc/
[root@node20 ~]# cp /usr/share/kolla-ansible/ansible/inventory/* /etc/kolla/
[root@node20 ~]# ls /etc/kolla/
all-in-one  globals.yml  multinode  passwords.yml

File descriptions: all-in-one is the ansible inventory for installing single-node OpenStack; multinode is the ansible inventory for installing multi-node OpenStack; globals.yml is the custom configuration file for the OpenStack deployment; passwords.yml is the password file for the OpenStack services.

Generate an ssh key and authorize it on both servers:

//Just press Enter at every prompt
[root@node20 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:MlEHvdjHadF+ydFC80Gg0u/sKcP+hvC8gDpvHTOGuL4 root@OpenStack-con
The key's randomart image is:
+---[RSA 2048]----+
|        oo.  +=o.|
|       . .o o o+o|
|      .  + = +..+|
|       .. + * .o.|
|      + S  o . . |
|     . +.*  o    |
|      ..o.O .o   |
|     o.. ..B...  |
|    .E=.  .o*+   |
+----[SHA256]-----+
[root@node20 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node20
[root@node20 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node19

Configure the multinode inventory hosts:

[root@node20 ~]# vim /etc/kolla/multinode
[control]
node20
[network]
node20
[compute]
node19
[monitoring]
node20
[storage]
node20
[deployment]
node20

Check whether all hosts communicate normally:

[root@node20 ~]# ansible -i /etc/kolla/multinode all -m ping

Success messages for node20 and node19 should appear.
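The output should look roughly like this:

	node19 | SUCCESS => {
	    "changed": false,
	    "ping": "pong"
	}
	node20 | SUCCESS => {
	    "changed": false,
	    "ping": "pong"
	}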
Automatically generate the password file for the OpenStack services:

[root@node20 ~]# kolla-genpwd
[root@node20 ~]# vim /etc/kolla/passwords.yml
keystone_admin_password: 123456    //Line 165, change the web login password

Edit /etc/kolla/globals.yml to customize the OpenStack deployment:

[root@node20 ~]# vim /etc/kolla/globals.yml
//Lines 14 and 15, select the base distribution image to download; choose one of the valid options
# Valid options are ['centos', 'debian', 'rhel', 'ubuntu']
kolla_base_distro: "centos"

//Lines 17 and 18, select the installation method, one of two: binary installation or source installation
# Valid options are [ binary, source ]
kolla_install_type: "source"

//Lines 20 and 21, select the OpenStack release tag. For details, see: https://releases.openstack.org/
# Valid option is Docker repository tag
openstack_release: "train"  //The release name must be lowercase; the OpenStack docker images pulled later carry this tag. (Stein did not work for me, so I went with train.)
  
//Lines 23 and 24, the location for configuration overrides
# Location of configuration overrides
#node_custom_config: "/etc/kolla/config"  //default location
  
//Line 31, OpenStack internal management network address, used to access the OpenStack web page for management. If high availability is enabled, this must be set to a VIP (floating IP)
kolla_internal_vip_address: "172.22.59.4"
  
//Line 87, the network interface carrying the OpenStack internal management address
network_interface: "enp49s0f0"
 
//Lines 92-94 and 97-98, remove the comments so that internal communication goes through the network_interface set above
api_interface: "{{ network_interface }}"
storage_interface: "{{ network_interface }}"
cluster_interface: "{{ network_interface }}"
tunnel_interface: "{{ network_interface }}"
dns_interface: "{{ network_interface }}"
  
//Line 105, the OpenStack external (public) network interface; it can run in vlan mode or flat mode.
//This interface should be up but have no IP address. If it has one, virtual machine instances in the OpenStack cloud will not be able to reach the external network. (The br-ex bridge cannot be created while an IP is present.)
neutron_external_interface: "enp33s0f0"

//Line 127
neutron_plugin_agent: "openvswitch"

  
//Line 190, turn off high availability
enable_haproxy: "no"
 
//Line 213, enable cinder (block storage)
enable_cinder: "yes"
 
//Line 218, enable the lvm backend for cinder
enable_cinder_backend_lvm: "yes"
 
//Line 421, the volume group name for cinder (block storage); it must match the volume group created on the storage node (the cinder volume group created earlier with vgcreate)
cinder_volume_group: "cinder"
 
//Lines 443 and 444 specify the virtualization technology used by the nova-compute daemon.
//nova-compute is a very important daemon: it creates and terminates virtual machine instances, i.e. manages their life cycle.
//On virtual machines set this field to qemu; my physical server uses kvm
# Valid options are [ qemu, kvm, vmware, xenapi ]
nova_compute_virt_type: "kvm"
// Line 559
ironic_dnsmasq_dhcp_range:
// Line 601
tempest_image_id:
tempest_flavor_ref_id:
tempest_public_network_id:
tempest_floating_network_name:

Start deployment
The deployment is about to begin, but first a bit of preparation to avoid known pitfalls, as follows:

[root@node20 ~]# dig registry-1.docker.io


Write the contents of the red box (the resolved addresses) into the /etc/hosts file, as follows:
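The screenshot is not reproduced here; the idea is to pin the resolved registry addresses in /etc/hosts so that image pulls do not stumble on DNS. The entries take the following form (use the A records returned by your own dig query in place of the placeholder):

	<address from dig>  registry-1.docker.io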

The deployment statement is as follows:

[root@node20 ~]# kolla-ansible -i /etc/kolla/multinode bootstrap-servers
[root@node20 ~]# kolla-ansible -i /etc/kolla/multinode prechecks

Edit the docker volume mount mode and specify the docker accelerator

//There is no docker volume mount configuration file by default, so it must be created manually
# mkdir -p /etc/systemd/system/docker.service.d/
# vim /etc/systemd/system/docker.service.d/kolla.conf
[Service]
MountFlags=shared
//Specify the registry mirror (accelerator); Aliyun's accelerator is used here
# tee /etc/docker/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://8mkqrctt.mirror.aliyuncs.com"]
}
EOF
# systemctl daemon-reload
# systemctl restart docker && systemctl enable docker

Pull image

[root@node20 ~]# kolla-ansible -i /etc/kolla/multinode pull

If an error occurs while pulling, try pulling again and check whether there is a problem in the configuration file. If the configuration looks fine, try a different release.

Deploy openstack

[root@node20 ~]# kolla-ansible -i /etc/kolla/multinode deploy
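One note: in the kolla-ansible releases used here, the admin credentials file /etc/kolla/admin-openrc.sh (sourced later in this article) is generated by the post-deploy step, so run it after the deploy completes:

	[root@node20 ~]# kolla-ansible -i /etc/kolla/multinode post-deploy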

Validate deployment
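The validation screenshots are not reproduced here. A minimal check, assuming the post-deploy step above has generated the credentials file, is to install the OpenStack client and list the registered services:

	[root@node20 ~]# pip install python-openstackclient
	[root@node20 ~]# source /etc/kolla/admin-openrc.sh
	[root@node20 ~]# openstack service list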



After the deployment, I started configuring the network, to find a way for virtual machine instances to reach the Internet.

Network configuration

This mainly concerns node20 (the control node). The enp33s0f0 interface of node20 was already configured before deployment; next we mainly reconfigure enp49s0f0 and br-ex. My original enp49s0f0 configuration was as follows:

Now configure the enp49s0f0 interface as follows:
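The screenshot is not reproduced; a plausible sketch, following the same OVS-port pattern used for enp33s0f0 earlier, is:

	NAME=enp49s0f0
	DEVICE=enp49s0f0
	TYPE=OVSPort
	NM_CONTROLLED=no
	DEVICETYPE=ovs
	OVS_BRIDGE=br-ex
	ONBOOT=yes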

Next, configure the br-ex interface:
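Again only a sketch: br-ex, as an OVS bridge, takes over the host's management address, so ifcfg-br-ex would look roughly like this (NETMASK, GATEWAY and DNS1 here are placeholders; see the note below):

	NAME=br-ex
	DEVICE=br-ex
	TYPE=OVSBridge
	DEVICETYPE=ovs
	NM_CONTROLLED=no
	BOOTPROTO=static
	IPADDR=172.22.59.4
	NETMASK=255.255.255.0
	GATEWAY=172.22.59.1
	DNS1=114.114.114.114
	ONBOOT=yes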

Note: change IPADDR, GATEWAY and DNS1 to the corresponding addresses for your own host.
After configuring the interfaces, do not rush to restart the network; continue configuring neutron first.

Delete the virtual bridges created by kolla by default

//Use the following command to view the virtual bridges created by kolla by default
[root@node20 ~]# docker exec -u root -it neutron_openvswitch_agent ovs-vsctl show
//Use the following commands to delete the virtual bridges created by kolla by default
[root@node20 ~]# docker exec -u root -it neutron_openvswitch_agent ovs-vsctl del-br br-ex
[root@node20 ~]# docker exec -u root -it neutron_openvswitch_agent ovs-vsctl del-br br-int
[root@node20 ~]# docker exec -u root -it neutron_openvswitch_agent ovs-vsctl del-br br-tun

Modify neutron's original network settings.
Edit /etc/kolla/neutron-openvswitch-agent/ml2_conf.ini:

[root@node20 ~]# vim /etc/kolla/neutron-openvswitch-agent/ml2_conf.ini


Modify the settings shown in the red box.

[root@node20 ~]# vim /etc/kolla/neutron-server/ml2_conf.ini


Modify the settings shown in the red box here as well.
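The red-box contents are not reproduced; as an assumption based on a standard neutron flat external network setup (not necessarily the exact values used here), the relevant ml2_conf.ini entries in both files typically map the physical network to the br-ex bridge:

	[ml2_type_flat]
	flat_networks = physnet1

	[ovs]
	bridge_mappings = physnet1:br-ex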

//Use the following command to add a new virtual bridge
[root@node20 ~]# docker exec -u root -it neutron_openvswitch_agent ovs-vsctl add-br br-ex
// Change enp49s0f0 to your own network card
[root@node20 ~]# docker exec -u root -it neutron_openvswitch_agent ovs-vsctl add-port br-ex enp49s0f0
// Change enp33s0f0 to your own network card
[root@node20 ~]# docker exec -u root -it neutron_openvswitch_agent ovs-vsctl add-port br-ex enp33s0f0

Restart network and services

[root@node20 ~]# systemctl restart network
[root@node20 ~]# docker restart neutron_openvswitch_agent
[root@node20 ~]# docker restart neutron_server

With that, the network configuration is complete. You can find a tutorial on creating an instance and a network yourself; the instance should be able to ping the Internet.

A simple way to create an instance that can reach the Internet

Use init-runonce, kolla's initialization script:

[root@node20 ~]# vim  /usr/share/kolla-ansible/init-runonce


Change the values in the red box in the script to your host's external network addresses, so that the script creates an instance that can reach the Internet. Alternatively, create everything yourself.
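The red box most likely refers to the external network variables near the top of the script. In the train-era init-runonce they look like the following (a sketch of that script version; change them to match your own external network):

	EXT_NET_CIDR='10.0.2.0/24'
	EXT_NET_RANGE='start=10.0.2.150,end=10.0.2.199'
	EXT_NET_GATEWAY='10.0.2.1'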

[root@node20 ~]# source /etc/kolla/admin-openrc.sh
[root@node20 ~]# bash /usr/share/kolla-ansible/init-runonce

The script will automatically upload the image and create a simple instance.

Then open the instance's console and ping the Internet.

References

  1. The multi-node deployment tutorial I followed for the overall deployment process.
  2. The single-node deployment process.
    These two tutorials are by the same author; anyone doing a multi-node deployment on virtual machines should refer to the preparation steps in the single-node tutorial.
  3. The OpenStack network principles I consulted.
  4. The OpenStack network interface configuration I consulted (written for Ubuntu, but the idea is the same).
