CentOS 7 installation and configuration of OpenStack Zun components (Stein version)

1, Basic environment parameters

  • Environment: CentOS 7.6
  • OpenStack Zun version: Stein
  • Python 2.7.5 / Python 3.6 (the system's built-in Python environments)
  • The default zun database and zun service password is 123456, which can be changed as needed

2, controller node zun installation

2.1 creating database

Log in to the database: mysql -uroot -p123456

# Create zun database
MariaDB [(none)]> CREATE DATABASE zun;

# Grant the zun user access privileges on the zun database
MariaDB [(none)]> GRANT ALL PRIVILEGES ON zun.* TO 'zun'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON zun.* TO 'zun'@'%' IDENTIFIED BY '123456';
# Reload the privilege tables
MariaDB [(none)]> flush privileges;
# After configuration, exit the database
MariaDB [(none)]> exit
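As a small sketch (not part of the original guide), the statements above can be generated from a variable so the 123456 password only has to be changed in one place:

```shell
# Generate the zun database setup SQL with a configurable password.
ZUN_DBPASS=${ZUN_DBPASS:-123456}
cat > /tmp/zun_db.sql <<EOF
CREATE DATABASE IF NOT EXISTS zun;
GRANT ALL PRIVILEGES ON zun.* TO 'zun'@'localhost' IDENTIFIED BY '${ZUN_DBPASS}';
GRANT ALL PRIVILEGES ON zun.* TO 'zun'@'%' IDENTIFIED BY '${ZUN_DBPASS}';
FLUSH PRIVILEGES;
EOF
# Apply it with: mysql -uroot -p123456 < /tmp/zun_db.sql
grep -c "GRANT ALL" /tmp/zun_db.sql
```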

2.2 create openstack users, services and endpoints

Load the admin credentials so that openstack commands run with administrator privileges: . admin-openrc or source /root/admin-openrc
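For reference, a typical admin-openrc looks like the sketch below (the values are examples; the password and controller host must match your deployment):

```shell
# Write a demo credentials file and source it (demo path; on a real node this
# would be /root/admin-openrc).
cat > /tmp/admin-openrc.demo <<'EOF'
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
. /tmp/admin-openrc.demo && echo "$OS_AUTH_URL"
```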

# The default zun database and zun service password is 123456, which can be changed as needed
[root@controller ~]$ openstack user create --domain default --password 123456 zun
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | e4442931c8e445188d2f5e3220649e05 |
| name                | zun                              |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# Add admin role to zun user
[root@controller ~]$ openstack role add --project service --user zun admin

# Create zun service instance
[root@controller ~]$ openstack service create --name zun --description "Container Service" container
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Container Service                |
| enabled     | True                             |
| id          | e00970de38c74228a98857fdfce3d3f8 |
| name        | zun                              |
| type        | container                        |
+-------------+----------------------------------+


# Create the container service API endpoints
[root@controller ~]$ openstack endpoint create --region RegionOne container public http://controller:9517/v1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | fbb6eafd604e4dcab77b57498715c024 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e00970de38c74228a98857fdfce3d3f8 |
| service_name | zun                              |
| service_type | container                        |
| url          | http://controller:9517/v1        |
+--------------+----------------------------------+

[root@controller ~]$ openstack endpoint create --region RegionOne container internal http://controller:9517/v1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 28a45bda04fb4bbcb71e7fcf5a5c1c90 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e00970de38c74228a98857fdfce3d3f8 |
| service_name | zun                              |
| service_type | container                        |
| url          | http://controller:9517/v1        |
+--------------+----------------------------------+

[root@controller ~]$ openstack endpoint create --region RegionOne container admin http://controller:9517/v1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | c0afd38fd59343cab43739ff3c7cfdd8 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e00970de38c74228a98857fdfce3d3f8 |
| service_name | zun                              |
| service_type | container                        |
| url          | http://controller:9517/v1        |
+--------------+----------------------------------+
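The three endpoint-create commands above differ only in the interface name, so they can be generated in a loop (a sketch; echo prints the commands for review — drop it to actually execute them):

```shell
# Print the public/internal/admin endpoint-create commands for the zun
# container service (URL from this guide: http://controller:9517/v1).
for iface in public internal admin; do
  echo openstack endpoint create --region RegionOne container "$iface" http://controller:9517/v1
done
```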

2.3 installing and starting zun services

2.3.1 creating users and groups
[root@controller ~]$ groupadd --system zun
[root@controller ~]$ useradd --home-dir "/var/lib/zun" --create-home --system --shell /bin/false -g zun zun
2.3.2 create directory
[root@controller ~]$ mkdir -p /etc/zun
[root@controller ~]$ chown zun:zun /etc/zun
2.3.3 install zun
# Install dependencies
[root@controller ~]$ yum install epel-release python-pip git python-devel libffi-devel gcc openssl-devel -y 
# Upgrade pip
python -m pip install --upgrade pip

# Download and install zun using git 
[root@controller ~]$ cd /var/lib/zun
[root@controller zun]$ git clone -b stable/stein https://git.openstack.org/openstack/zun.git
# git clone over https can be slow in China; you can switch the repository protocol from https to http (git clone -b stable/stein http://git.openstack.org/openstack/zun.git)
[root@controller zun]$ chown -R zun:zun zun
[root@controller zun]$ cd zun
[root@controller zun]$ pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
[root@controller zun]$ python setup.py install
2.3.4 generate and edit the configuration file
[root@controller zun]$ su -s /bin/sh -c "oslo-config-generator --config-file etc/zun/zun-config-generator.conf" zun
[root@controller zun]$ su -s /bin/sh -c "cp etc/zun/zun.conf.sample /etc/zun/zun.conf" zun

Copy the api-paste.ini configuration file

[root@controller zun]$ su -s /bin/sh -c "cp etc/zun/api-paste.ini /etc/zun" zun

Edit the configuration file vim /etc/zun/zun.conf and add the following in the appropriate location

[DEFAULT]
transport_url = rabbit://openstack:123456@controller

[api]
host_ip = 192.168.204.194
port = 9517

[database]
connection = mysql+pymysql://zun:123456@controller/zun

[keystone_auth]
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = zun
password = 123456
auth_protocol = http
auth_version = v3
service_token_roles_required = True
endpoint_type = internalURL

[keystone_authtoken]
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = zun
password = 123456
auth_protocol = http
auth_version = v3
service_token_roles_required = True
endpoint_type = internalURL

[oslo_concurrency]
lock_path = /var/lib/zun/tmp

[oslo_messaging_notifications]
driver = messaging

[websocket_proxy]
wsproxy_host = 192.168.204.194
wsproxy_port = 6784
base_url = ws://controller:6784
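A quick sanity check (a sketch using a demo copy of the config; on a real node point CONF at /etc/zun/zun.conf) that the key options above made it into the file:

```shell
# Verify that the essential zun.conf options are present.
CONF=/tmp/zun.conf.demo
cat > "$CONF" <<'EOF'
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
[database]
connection = mysql+pymysql://zun:123456@controller/zun
EOF
for opt in transport_url connection; do
  grep -q "^${opt} *=" "$CONF" && echo "ok: $opt" || echo "MISSING: $opt"
done
```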
2.3.5 populate the database
[root@controller zun]$ su -s /bin/sh -c "zun-db-manage upgrade" zun
2.3.6 create the systemd unit files

vim /etc/systemd/system/zun-api.service

[Unit]
Description = OpenStack Container Service API

[Service]
ExecStart = /usr/bin/zun-api
User = zun

[Install]
WantedBy = multi-user.target

vim /etc/systemd/system/zun-wsproxy.service

[Unit]
Description = OpenStack Container Service Websocket Proxy

[Service]
ExecStart = /usr/bin/zun-wsproxy
User = zun

[Install]
WantedBy = multi-user.target
2.3.7 start service
[root@controller ~]$ systemctl daemon-reload
[root@controller ~]$ systemctl enable zun-api  zun-wsproxy
[root@controller ~]$ systemctl start zun-api  zun-wsproxy
[root@controller ~]$ systemctl status zun-api  zun-wsproxy

If starting the services reports an error about websocket or selectors, install:

pip install docker==4.4.4
pip install websocket-client==0.32.0
pip install websocket
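A sketch for keeping these workaround pins in one requirements file (the versions are the ones reported to work in this guide; adjust them if your errors differ):

```shell
# Collect the pinned workaround packages in a single file.
cat > /tmp/zun-extra-requirements.txt <<'EOF'
docker==4.4.4
websocket-client==0.32.0
websocket
EOF
# Install them together with: pip install -r /tmp/zun-extra-requirements.txt
wc -l < /tmp/zun-extra-requirements.txt
```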

2.4 Etcd installation and configuration

2.4.1 Etcd installation
[root@controller ~]$ yum install -y etcd
2.4.2 configuring Etcd

vim /etc/etcd/etcd.conf

#[Member] 
ETCD_DATA_DIR="/var/lib/etcd/default.etcd" 
ETCD_LISTEN_PEER_URLS="http://192.168.204.194:2380" 
ETCD_LISTEN_CLIENT_URLS="http://192.168.204.194:2379" 
ETCD_NAME="controller" 
#[Clustering] 
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.204.194:2380" 
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.204.194:2379" 
ETCD_INITIAL_CLUSTER="controller=http://192.168.204.194:2380" 
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01" 
ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_STATE="existing"
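As a sketch, the member and clustering values above can be rendered from a single NODE_IP variable so the address (192.168.204.194 in this guide) appears only once:

```shell
# Template the key etcd.conf values from variables (demo output path).
NODE_IP=192.168.204.194
NODE_NAME=controller
cat > /tmp/etcd.conf.demo <<EOF
ETCD_LISTEN_PEER_URLS="http://${NODE_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="http://${NODE_IP}:2379"
ETCD_NAME="${NODE_NAME}"
ETCD_INITIAL_CLUSTER="${NODE_NAME}=http://${NODE_IP}:2380"
EOF
grep -c "$NODE_IP" /tmp/etcd.conf.demo
```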
2.4.3 start Etcd
[root@controller ~]$ systemctl enable etcd
[root@controller ~]$ systemctl start etcd
[root@controller ~]$ systemctl status etcd

3, Compute node zun installation

Before installing the Zun compute service on the compute node, you need to install docker and kuryr-libnetwork on the compute node, in that order

Etcd has already been installed on the controller node (section 2.4)

Install some necessary dependencies

yum -y upgrade # Only updates packages, not the kernel and system
yum install -y epel-release yum-utils device-mapper-persistent-data lvm2 python-pip git python-devel libffi-devel gcc openssl-devel wget vim net-tools

3.1 time synchronization

# 1. Install package
[root@zun ~]# yum install -y chrony

# 2. Set the controller node as the time synchronization server
[root@zun ~]# sed -i '/^server/d' /etc/chrony.conf 
[root@zun ~]# sed -i '2aserver controller iburst' /etc/chrony.conf

# 3. Start the NTP service and configure it to start with the system
[root@zun ~]# systemctl enable chronyd.service
[root@zun ~]# systemctl start chronyd.service

# 4. Set time zone
[root@zun ~]# timedatectl set-timezone Asia/Shanghai

# 5. View time synchronization sources
[root@zun ~]# chronyc sources

# 6. Check whether the time is correct
[root@zun ~]# timedatectl status

3.2 docker installation

Reference blog

https://blog.csdn.net/SpiritedAway1106/article/details/117106616

3.3 installing kuryr-libnetwork

3.3.1 controller node
3.3.1.1 create kuryr user

. admin-openrc

# 123456
[root@controller ~]$ openstack user create --domain default --password 123456 kuryr
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | b6974ecd4b7a44f8be9fc9f6728085c5 |
| name                | kuryr                            |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# Add role
[root@controller ~]$ openstack role add --project service --user kuryr admin
3.3.2 compute node
3.3.2.1 create user
[root@zun ~]# groupadd --system kuryr
[root@zun ~]# useradd --home-dir "/var/lib/kuryr" --create-home --system --shell /bin/false -g kuryr kuryr
3.3.2.2 create directory
[root@zun ~]# mkdir -p /etc/kuryr
[root@zun ~]# chown kuryr:kuryr /etc/kuryr
3.3.2.3 installing kuryr-libnetwork
[root@zun ~]# yum install epel-release python-pip git python-devel libffi-devel gcc openssl-devel -y

[root@zun ~]# cd /var/lib/kuryr
# The http protocol is again used to clone the git repository, which is faster
[root@zun kuryr]# git clone -b stable/stein http://git.openstack.org/openstack/kuryr-libnetwork.git
[root@zun kuryr]# chown -R kuryr:kuryr kuryr-libnetwork
[root@zun kuryr]# cd kuryr-libnetwork

# Upgrade pip
[root@zun kuryr-libnetwork]# pip install --upgrade pip
# If the pip upgrade fails, you can try the following:
# wget https://bootstrap.pypa.io/pip/2.7/get-pip.py
# sudo python get-pip.py
# pip install setuptools --upgrade

# Install python package
[root@zun kuryr-libnetwork]# pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
# If "Could not find suitable distribution for Requirement.parse('pbr>=2.0.0')" is reported, install pbr manually: pip install pbr
# If "Cannot uninstall 'ipaddress'. It is a distutils installed project and thus we cannot accurately determine which files belong to it, which would lead to only a partial uninstall" is reported, install with: pip install ipaddress --ignore-installed (--ignore-installed skips the existing installation and installs the new version directly)
# Or use the source code to install
# wget https://cbs.centos.org/kojifiles/packages/python-ipaddress/1.0.18/5.el7/noarch/python2-ipaddress-1.0.18-5.el7.noarch.rpm
# yum install -y python2-ipaddress-1.0.18-5.el7.noarch.rpm

# Install kuryr-libnetwork
[root@zun kuryr-libnetwork]# python setup.py install
3.3.2.4 generate configuration file
[root@zun kuryr-libnetwork]# su -s /bin/sh -c "./tools/generate_config_file_samples.sh" kuryr
[root@zun kuryr-libnetwork]# su -s /bin/sh -c "cp etc/kuryr.conf.sample /etc/kuryr/kuryr.conf" kuryr
3.3.2.5 edit configuration file

sed -i.default -e "/^#/d" -e "/^$/d" /etc/kuryr/kuryr.conf

vim /etc/kuryr/kuryr.conf

[DEFAULT]
bindir = /usr/libexec/kuryr

[neutron]
www_authenticate_uri = http://controller:5000/v3
auth_url = http://controller:5000/v3
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = kuryr
password = 123456
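What the sed cleanup at the start of this step does, demonstrated on a sample file: comment and blank lines are deleted, and the -i.default suffix keeps a backup of the original:

```shell
# Create a sample config, then strip comments and blank lines in place,
# keeping the untouched original as /tmp/kuryr.sample.default.
printf '# a comment\n\nbindir = /usr/libexec/kuryr\n' > /tmp/kuryr.sample
sed -i.default -e "/^#/d" -e "/^$/d" /tmp/kuryr.sample
cat /tmp/kuryr.sample
```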

3.3.2.6 create the systemd unit file

vim /etc/systemd/system/kuryr-libnetwork.service

[Unit]
Description = Kuryr-libnetwork - Docker network plugin for Neutron

[Service]
ExecStart = /usr/bin/kuryr-server --config-file /etc/kuryr/kuryr.conf
CapabilityBoundingSet = CAP_NET_ADMIN

[Install]
WantedBy = multi-user.target

3.3.2.7 start service
[root@zun kuryr-libnetwork]# systemctl enable kuryr-libnetwork
[root@zun kuryr-libnetwork]# systemctl start kuryr-libnetwork
[root@zun kuryr-libnetwork]# systemctl restart docker
[root@zun kuryr-libnetwork]# systemctl status docker kuryr-libnetwork
# Error when starting kuryr-libnetwork
● kuryr-libnetwork.service - Kuryr-libnetwork - Docker network plugin for Neutron
   Loaded: loaded (/etc/systemd/system/kuryr-libnetwork.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sat 2021-09-18 14:05:08 CST; 3s ago
  Process: 69860 ExecStart=/usr/bin/kuryr-server --config-file /etc/kuryr/kuryr.conf (code=exited, status=1/FAILURE)
 Main PID: 69860 (code=exited, status=1/FAILURE)

Sep 18 14:05:07 zun kuryr-server[69860]: from kuryr.lib.binding.drivers import utils
Sep 18 14:05:07 zun kuryr-server[69860]: File "/usr/lib/python2.7/site-packages/kuryr/lib/binding/drivers/utils.py", line 14, in <module>
Sep 18 14:05:07 zun kuryr-server[69860]: import pyroute2
Sep 18 14:05:07 zun kuryr-server[69860]: File "/usr/lib/python2.7/site-packages/pyroute2/__init__.py", line 84
Sep 18 14:05:07 zun kuryr-server[69860]: origin=None, loader_state=None, is_package=None):
Sep 18 14:05:07 zun kuryr-server[69860]: ^
Sep 18 14:05:07 zun kuryr-server[69860]: SyntaxError: invalid syntax
Sep 18 14:05:08 zun systemd[1]: kuryr-libnetwork.service: main process exited, code=exited, status=1/FAILURE
Sep 18 14:05:08 zun systemd[1]: Unit kuryr-libnetwork.service entered failed state.
Sep 18 14:05:08 zun systemd[1]: kuryr-libnetwork.service failed.
Hint: Some lines were ellipsized, use -l to show in full.

# Solution
# Downgrade pyroute2. The default installation is 0.6.4; several 0.6.x versions all fail under Python 2.7, while the 0.5 series works
# For example: pip install pyroute2==0.5.19

# https://pypi.org/project/pyroute2/#history
# Another error when starting kuryr-libnetwork
● kuryr-libnetwork.service - Kuryr-libnetwork - Docker network plugin for Neutron
   Loaded: loaded (/etc/systemd/system/kuryr-libnetwork.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sat 2021-09-18 14:31:46 CST; 2s ago
  Process: 70254 ExecStart=/usr/bin/kuryr-server --config-file /etc/kuryr/kuryr.conf (code=exited, status=1/FAILURE)
 Main PID: 70254 (code=exited, status=1/FAILURE)

Sep 18 14:31:46 zun kuryr-server[70254]: 2021-09-18 14:31:46.791 70254 ERROR kuryr     return self.request(url, 'POST', **kwargs)
Sep 18 14:31:46 zun kuryr-server[70254]: 2021-09-18 14:31:46.791 70254 ERROR kuryr   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 913, in request
Sep 18 14:31:46 zun kuryr-server[70254]: 2021-09-18 14:31:46.791 70254 ERROR kuryr     resp = send(**kwargs)
Sep 18 14:31:46 zun kuryr-server[70254]: 2021-09-18 14:31:46.791 70254 ERROR kuryr   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 1020, in _send_request
Sep 18 14:31:46 zun kuryr-server[70254]: 2021-09-18 14:31:46.791 70254 ERROR kuryr     raise exceptions.ConnectFailure(msg)
Sep 18 14:31:46 zun kuryr-server[70254]: 2021-09-18 14:31:46.791 70254 ERROR kuryr ConnectFailure: Unable to establish connection to http://controller:5000/v3/auth/tokens: HTTPConnectionPool(host='controller',...
Sep 18 14:31:46 zun kuryr-server[70254]: 2021-09-18 14:31:46.791 70254 ERROR kuryr 

# The error occurs because the local hosts file is not configured, so the controller's IP cannot be resolved
# vim /etc/hosts and add: 192.168.204.203 zun-01
3.3.2.8 verification
#  Create network
[root@zun kuryr]# docker network create --driver kuryr --ipam-driver kuryr --subnet 10.10.0.0/16 --gateway=10.10.0.1 test_net2
7ea1d9cc1dfcae89195602579cb6e04d996c283b9ecdf07a7afa7ad1be62ec50

# View network
[root@zun kuryr]# docker network ls
NETWORK ID     NAME                                   DRIVER    SCOPE
...
7ea1d9cc1dfc   test_net2                              kuryr     global

# Use network
[root@zun kuryr]# docker run --net test_net2 cirros ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:A5:E1:00:7A  
          inet addr:10.10.3.123  Bcast:10.10.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:602 (602.0 B)  TX bytes:426 (426.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Since neutron uses Linux bridge, the kuryr source code needs to be modified:

vim /usr/lib/python2.7/site-packages/kuryr/lib/binding/drivers/veth.py

# At line 84, hardcode KIND = 'bridge'

Otherwise, running docker run --net test_net2 cirros ifconfig will report an error:

docker: Error response from daemon: failed to create endpoint elastic_galois on network test_net2: NetworkDriver.CreateEndpoint: vif_type(binding_failed) is not supported. A binding script for this type can't be found.
ERRO[0004] error waiting for container: context canceled

After the modification, run systemctl restart kuryr-libnetwork to restart the service

3.4 Zun compute service installation and configuration

3.4.1 create user
[root@zun ~]# groupadd --system zun
[root@zun ~]# useradd --home-dir "/var/lib/zun" --create-home --system --shell /bin/false -g zun zun
3.4.2 create directory
[root@zun ~]# mkdir -p /etc/zun
[root@zun ~]# chown zun:zun /etc/zun
3.4.3 install zun package dependencies
yum install epel-release python-pip git python-devel libffi-devel gcc openssl-devel -y
3.4.4 install zun
[root@zun ~]# cd /var/lib/zun
[root@zun zun]# git clone -b stable/stein http://git.openstack.org/openstack/zun.git
[root@zun zun]# chown -R zun:zun zun
[root@zun zun]# cd zun

# python2 installation
[root@zun zun]# pip install -r requirements.txt
[root@zun zun]# python setup.py install
3.4.5 generate sample configuration file
[root@zun zun]# su -s /bin/sh -c "oslo-config-generator --config-file etc/zun/zun-config-generator.conf" zun
[root@zun zun]# su -s /bin/sh -c "cp etc/zun/zun.conf.sample /etc/zun/zun.conf" zun
[root@zun zun]# su -s /bin/sh -c "cp etc/zun/rootwrap.conf /etc/zun/rootwrap.conf" zun
[root@zun zun]# su -s /bin/sh -c "mkdir -p /etc/zun/rootwrap.d" zun
[root@zun zun]# su -s /bin/sh -c "cp etc/zun/rootwrap.d/* /etc/zun/rootwrap.d/" zun

# su -s /bin/sh -c "cp etc/cni/net.d/* /etc/cni/net.d/" zun

3.4.6 configuring zun users
[root@zun zun]# echo "zun ALL=(root) NOPASSWD: /usr/bin/zun-rootwrap /etc/zun/rootwrap.conf *" | sudo tee /etc/sudoers.d/zun-rootwrap
3.4.7 editing configuration files

vim /etc/zun/zun.conf

[DEFAULT]
transport_url = rabbit://openstack:123456@controller
state_path = /var/lib/zun

[database]
connection = mysql+pymysql://zun:123456@controller/zun

[keystone_auth]
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
project_domain_name = default
project_name = service
user_domain_name = default
password = 123456
username = zun
auth_url = http://controller:5000
auth_type = password
auth_version = v3
auth_protocol = http
service_token_roles_required = True
endpoint_type = internalURL

[keystone_authtoken]
memcached_servers = controller:11211
www_authenticate_uri= http://controller:5000
project_domain_name = default
project_name = service
user_domain_name = default
password = 123456
username = zun
auth_url = http://controller:5000
auth_type = password

[websocket_proxy]
base_url = ws://controller:6784/
[oslo_concurrency]
lock_path = /var/lib/zun/tmp


[compute]
# To run both containers and nova instances on this compute node, set host_shared_with_nova in the [compute] section:
host_shared_with_nova = true
3.4.8 configuring docker and kuryr
3.4.8.1 create docker configuration folder
[root@zun zun]# mkdir -p /etc/systemd/system/docker.service.d
3.4.8.2 create docker configuration file

Here, replace compute and controller with the corresponding service host name or ip address

vim /etc/systemd/system/docker.service.d/docker.conf

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --group zun -H tcp://compute:2375 -H unix:///var/run/docker.sock --cluster-store etcd://controller:2379
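As a sketch, the drop-in above can be generated from variables (compute and controller are placeholders for your actual host names or IP addresses, as noted earlier):

```shell
# Render the docker systemd drop-in from host variables (demo output path;
# on a real node this is /etc/systemd/system/docker.service.d/docker.conf).
COMPUTE=compute
CONTROLLER=controller
cat > /tmp/docker.conf.demo <<EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --group zun -H tcp://${COMPUTE}:2375 -H unix:///var/run/docker.sock --cluster-store etcd://${CONTROLLER}:2379
EOF
grep -q "etcd://${CONTROLLER}:2379" /tmp/docker.conf.demo && echo ok
```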
3.4.8.3 restart docker
[root@zun ~]# systemctl daemon-reload
[root@zun ~]# systemctl restart docker
3.4.8.4 edit kuryr configuration file

vim /etc/kuryr/kuryr.conf

[DEFAULT]
capability_scope = global
process_external_connectivity = False
3.4.8.5 restart kuryr
[root@zun ~]# systemctl restart kuryr-libnetwork
3.4.9 configuring containerd
[root@zun ~]# containerd config default > /etc/containerd/config.toml
[root@zun ~]# chown zun:zun /etc/containerd/config.toml

# Get the zun group ID as follows
[root@zun ~]# getent group zun | cut -d: -f3
992

# Edit the configuration file
## /etc/containerd/config.toml
[grpc]
  ...
  gid = 992
  
# Restart containerd
[root@zun ~]# systemctl restart containerd
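The gid edit above can also be done non-interactively; a sketch on a demo file (992 is the example group id from this guide; on a real host derive it with getent and point the path at /etc/containerd/config.toml):

```shell
# Patch the [grpc] gid value in a containerd config with sed.
ZUN_GID=992   # real host: ZUN_GID=$(getent group zun | cut -d: -f3)
printf '[grpc]\n  gid = 0\n' > /tmp/config.toml.demo
sed -i "s/^\( *gid = \).*/\1${ZUN_GID}/" /tmp/config.toml.demo
grep "gid" /tmp/config.toml.demo
```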
3.4.10 create the systemd unit file

vim /etc/systemd/system/zun-compute.service

[Unit]
Description = OpenStack Container Service Compute Agent

[Service]
ExecStart = /usr/bin/zun-compute
User = zun

[Install]
WantedBy = multi-user.target
3.4.11 start Zun compute
[root@zun ~]# systemctl enable zun-compute
[root@zun ~]# systemctl start zun-compute
[root@zun ~]# systemctl status zun-compute

# An error is reported
[root@zun ~]# systemctl status zun-compute
● zun-compute.service - OpenStack Container Service Compute Agent
   Loaded: loaded (/etc/systemd/system/zun-compute.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-09-22 11:32:34 CST; 7min ago
 Main PID: 81247 (zun-compute)
    Tasks: 1
   Memory: 84.7M
   CGroup: /system.slice/zun-compute.service
           └─81247 /usr/bin/python /usr/bin/zun-compute

Sep 22 11:39:43 zun zun-compute[81247]: 2021-09-22 11:39:43.398 81247 ERROR oslo_service.periodic_task     engine = sqlalchemy.create_engine(url, **engine_args)
Sep 22 11:39:43 zun zun-compute[81247]: 2021-09-22 11:39:43.398 81247 ERROR oslo_service.periodic_task   File "<string>", line 2, in create_engine
Sep 22 11:39:43 zun zun-compute[81247]: 2021-09-22 11:39:43.398 81247 ERROR oslo_service.periodic_task   File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/deprecations.py", line 298, in warned
Sep 22 11:39:43 zun zun-compute[81247]: 2021-09-22 11:39:43.398 81247 ERROR oslo_service.periodic_task     return fn(*args, **kwargs)
Sep 22 11:39:43 zun zun-compute[81247]: 2021-09-22 11:39:43.398 81247 ERROR oslo_service.periodic_task   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/create.py", line 548, in create_engine
Sep 22 11:39:43 zun zun-compute[81247]: 2021-09-22 11:39:43.398 81247 ERROR oslo_service.periodic_task     dbapi = dialect_cls.dbapi(**dbapi_args)
Sep 22 11:39:43 zun zun-compute[81247]: 2021-09-22 11:39:43.398 81247 ERROR oslo_service.periodic_task   File "/usr/lib64/python2.7/site-packages/sqlalchemy/dialects/mysql/pymysql.py", line 68, in dbapi
Sep 22 11:39:43 zun zun-compute[81247]: 2021-09-22 11:39:43.398 81247 ERROR oslo_service.periodic_task     return __import__("pymysql")
Sep 22 11:39:43 zun zun-compute[81247]: 2021-09-22 11:39:43.398 81247 ERROR oslo_service.periodic_task ImportError: No module named pymysql
Sep 22 11:39:43 zun zun-compute[81247]: 2021-09-22 11:39:43.398 81247 ERROR oslo_service.periodic_task 

# pip install pymysql

# Restarting the zun-compute service reports another error
● zun-compute.service - OpenStack Container Service Compute Agent
   Loaded: loaded (/etc/systemd/system/zun-compute.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Wed 2021-09-22 14:12:28 CST; 4s ago
  Process: 82704 ExecStart=/usr/bin/zun-compute (code=exited, status=0/SUCCESS)
 Main PID: 82704 (code=exited, status=0/SUCCESS)

Sep 22 14:12:28 zun zun-compute[82704]: 2021-09-22 14:12:28.835 82704 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/amqp/method_framing.py", line 55, in on_frame
Sep 22 14:12:28 zun zun-compute[82704]: 2021-09-22 14:12:28.835 82704 ERROR oslo_service.service     callback(channel, method_sig, buf, None)
Sep 22 14:12:28 zun zun-compute[82704]: 2021-09-22 14:12:28.835 82704 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 521, in on_inbound_method
Sep 22 14:12:28 zun zun-compute[82704]: 2021-09-22 14:12:28.835 82704 ERROR oslo_service.service     method_sig, payload, content,
Sep 22 14:12:28 zun zun-compute[82704]: 2021-09-22 14:12:28.835 82704 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/amqp/abstract_channel.py", line 145, in dispatch_method
Sep 22 14:12:28 zun zun-compute[82704]: 2021-09-22 14:12:28.835 82704 ERROR oslo_service.service     listener(*args)
Sep 22 14:12:28 zun zun-compute[82704]: 2021-09-22 14:12:28.835 82704 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 651, in _on_close
Sep 22 14:12:28 zun zun-compute[82704]: 2021-09-22 14:12:28.835 82704 ERROR oslo_service.service     (class_id, method_id), ConnectionError)
Sep 22 14:12:28 zun zun-compute[82704]: 2021-09-22 14:12:28.835 82704 ERROR oslo_service.service AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.
Sep 22 14:12:28 zun zun-compute[82704]: 2021-09-22 14:12:28.835 82704 ERROR oslo_service.service

# The rabbitmq password is configured incorrectly; fix it:
# vim /etc/zun/zun.conf
# [DEFAULT]
# transport_url = rabbit://openstack:123456@controller  (set the correct password)
3.4.12 verification

Control node execution

[root@controller ~]# pip install python-zunclient
[root@controller ~]# source /root/admin-openrc
[root@controller ~]$ openstack appcontainer service list
+----+------+-------------+-------+----------+-----------------+---------------------+-------------------+
| Id | Host | Binary      | State | Disabled | Disabled Reason | Updated At          | Availability Zone |
+----+------+-------------+-------+----------+-----------------+---------------------+-------------------+
|  1 | zun  | zun-compute | up    | False    | None            | 2021-09-22 09:51:21 | nova              |
+----+------+-------------+-------+----------+-----------------+---------------------+-------------------+

4, Zun UI installation

Official documents: https://docs.openstack.org/zun-ui/latest/install/index.html

4.1 clone the code and install Zun UI

[root@controller zun_ui]$ git clone https://github.com/openstack/zun-ui
[root@controller zun_ui]$ cd zun-ui/
[root@controller zun-ui]$ git checkout stable/stein
[root@controller zun-ui]$ pip install .

4.2 enabling Zun UI in Horizon

[root@controller zun-ui]$ cp zun_ui/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled
[root@controller zun-ui]$ python /usr/share/openstack-dashboard/manage.py collectstatic
[root@controller zun-ui]$ python /usr/share/openstack-dashboard/manage.py compress

4.3 restart Horizon

[root@controller zun-ui]$ systemctl restart httpd.service memcached.service

# My machine serves Horizon via nginx, so uwsgi needs to be restarted instead
# systemctl restart uwsgi

4.4 verification

Visit the Horizon page to see the container list


Posted on Fri, 24 Sep 2021 06:22:40 -0400 by savetime