Day77 ~ 79 openstack manual distributed deployment

openstack manual distributed deployment

1, Environment preparation

Reference: https://docs.openstack.org/zh_CN/install-guide/

1. Static IP (the NetworkManager service can be turned off)

2. Hostname and /etc/hosts binding

192.168.122.11 controller
192.168.122.12 compute
192.168.122.13 cinder

3. Turn off the firewall and selinux

4. Time synchronization
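A minimal sketch of steps 1-4 on CentOS 7 (assuming chrony for time synchronization; adjust to your environment):

# cat >> /etc/hosts <<EOF
192.168.122.11 controller
192.168.122.12 compute
192.168.122.13 cinder
EOF
# systemctl stop firewalld && systemctl disable firewalld
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# yum install chrony -y && systemctl start chronyd && systemctl enable chronyd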

Prepare the yum source on all nodes

# yum install yum-plugin-priorities -y
# yum install https://mirrors.aliyun.com/centos-vault/altarch/7.5.1804/extras/aarch64/Packages/centos-release-openstack-pike-1-1.el7.x86_64.rpm -y

# vim /etc/yum.repos.d/CentOS-OpenStack-pike.repo
Replace
baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-pike/
with
baseurl=https://mirror.tuna.tsinghua.edu.cn/cc/7/cloud/x86_64/openstack-pike/

# yum repolist

repo id                       repo name                   status
base/7/x86_64                 CentOS-7 - Base             10,070
centos-ceph-jewel/7/x86_64    CentOS-7 - Ceph Jewel       101
centos-openstack-pike         CentOS-7 - OpenStack pike   3,426+2
centos-qemu-ev/7/x86_64       CentOS-7 - QEMU EV          63
extras/7/x86_64               CentOS-7 - Extras           412
updates/7/x86_64              CentOS-7 - Updates          884
repolist: 14,956

Install openstack basic tools on all nodes

# yum install python-openstackclient openstack-selinux openstack-utils -y
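A quick sanity check that the client installed correctly (this just prints the client version; not part of the original steps):

# openstack --version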

Install basic software packages on the compute node

[root@compute ~]# yum install qemu-kvm libvirt bridge-utils -y

[root@compute ~]# ln -sv /usr/libexec/qemu-kvm /usr/bin/
'/usr/bin/qemu-kvm' -> '/usr/libexec/qemu-kvm'

2, Install supporting services

Database deployment

Install MariaDB on the control node (it could also be installed on a separate node, or even as a database cluster)

Reference: https://docs.openstack.org/zh_CN/install-guide/environment-sql-database-rdo.html

[root@controller ~]# yum install mariadb mariadb-server python2-PyMySQL -y

Add a sub-configuration file

[root@controller ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.122.11 		# this IP is the control node's management network IP

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Start service

[root@controller ~]# systemctl restart mariadb
[root@controller ~]# systemctl enable mariadb

Initialize the installation
Remember the password you set, or unify all of the passwords (daniel.com is used throughout this guide)

[root@controller ~]# mysql_secure_installation
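The script is interactive; for MariaDB on CentOS 7 the prompts are typically answered along these lines (illustrative):

Enter current password for root (enter for none): 	<Enter>
Set root password? [Y/n] Y 		# e.g. daniel.com, to keep passwords unified
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] Y
Remove test database and access to it? [Y/n] Y
Reload privilege tables? [Y/n] Y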

rabbitmq deployment

Purposes of the rabbitmq message queue:

  • a tool for communication between components
  • asynchronous message synchronization

1. Install rabbitmq on the control node

[root@controller ~]# yum install erlang socat rabbitmq-server -y

2. Start the service and verify the port

[root@controller ~]# systemctl restart rabbitmq-server
[root@controller ~]# systemctl enable rabbitmq-server
[root@controller ~]# netstat -ntlup |grep 5672
tcp   0   0 0.0.0.0:25672   0.0.0.0:*   LISTEN   26806/beam.smp
tcp6  0   0 :::5672         :::*        LISTEN   26806/beam.smp

3. Add openstack user and grant permission

List users
[root@controller ~]# rabbitmqctl list_users
Listing users ...
guest 	[administrator]

Add the openstack user; the password here is again unified as daniel.com
[root@controller ~]# rabbitmqctl add_user openstack daniel.com
Creating user "openstack" ...

Mark it as an administrator
[root@controller ~]# rabbitmqctl set_user_tags openstack administrator
Setting tags for user "openstack" to [administrator] ...

Grant the openstack user configure, write, and read permissions on all resources
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...

Verify
[root@controller ~]# rabbitmqctl list_users
Listing users ...

openstack [administrator]
guest [administrator]
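Optionally, confirm the permissions that were just granted; rabbitmqctl list_permissions shows the configure/write/read patterns per user in the current vhost (output along these lines):

[root@controller ~]# rabbitmqctl list_permissions
Listing permissions in vhost "/" ...
guest       .*   .*   .*
openstack   .*   .*   .*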

4. Enable rabbitmq's web management/monitoring plug-in

rabbitmq has many plug-ins. Use the following command to check

[root@controller ~]# rabbitmq-plugins list
Configured: E = explicitly enabled; e = implicitly enabled
| Status: * = running on rabbit@controller
|/
[ ] amqp_client 3.6.5
[ ] cowboy 1.0.3
[ ] cowlib 1.0.1
[ ] mochiweb 2.13.1
[ ] rabbitmq_amqp1_0 3.6.5
[ ] rabbitmq_auth_backend_ldap 3.6.5
[ ] rabbitmq_auth_mechanism_ssl 3.6.5
[ ] rabbitmq_consistent_hash_exchange 3.6.5
[ ] rabbitmq_event_exchange 3.6.5
[ ] rabbitmq_federation 3.6.5
[ ] rabbitmq_federation_management 3.6.5
[ ] rabbitmq_jms_topic_exchange 3.6.5
[ ] rabbitmq_management 3.6.5
[ ] rabbitmq_management_agent 3.6.5
[ ] rabbitmq_management_visualiser 3.6.5
[ ] rabbitmq_mqtt 3.6.5
[ ] rabbitmq_recent_history_exchange 1.2.1
[ ] rabbitmq_sharding 0.1.0
[ ] rabbitmq_shovel 3.6.5
[ ] rabbitmq_shovel_management 3.6.5
[ ] rabbitmq_stomp 3.6.5
[ ] rabbitmq_top 3.6.5
[ ] rabbitmq_tracing 3.6.5
[ ] rabbitmq_trust_store 3.6.5
[ ] rabbitmq_web_dispatch 3.6.5
[ ] rabbitmq_web_stomp 3.6.5
[ ] rabbitmq_web_stomp_examples 3.6.5
[ ] sockjs 0.3.4
[ ] webmachine 1.10.3

Legend:
E  explicitly enabled plug-in
e  implicitly enabled (dependency) plug-in
*  running plug-in

5. Enable the rabbitmq_management plug-in

[root@controller ~]# rabbitmq-plugins enable rabbitmq_management
The following plugins have been enabled:
	mochiweb
	webmachine
	rabbitmq_web_dispatch
	amqp_client
	rabbitmq_management_agent
	rabbitmq_management

Applying plugin configuration to rabbit@controller...started 6 plugins.

[root@controller ~]# rabbitmq-plugins list
Configured: E = explicitly enabled; e = implicitly enabled
| Status: * = running on rabbit@controller
|/
[e*] amqp_client 3.6.5
[ ] cowboy 1.0.3
[ ] cowlib 1.0.1
[e*] mochiweb 2.13.1
[ ] rabbitmq_amqp1_0 3.6.5
[ ] rabbitmq_auth_backend_ldap 3.6.5
[ ] rabbitmq_auth_mechanism_ssl 3.6.5
[ ] rabbitmq_consistent_hash_exchange 3.6.5
[ ] rabbitmq_event_exchange 3.6.5
[ ] rabbitmq_federation 3.6.5
[ ] rabbitmq_federation_management 3.6.5
[ ] rabbitmq_jms_topic_exchange 3.6.5
[E*] rabbitmq_management 3.6.5
[e*] rabbitmq_management_agent 3.6.5
[ ] rabbitmq_management_visualiser 3.6.5
[ ] rabbitmq_mqtt 3.6.5
[ ] rabbitmq_recent_history_exchange 1.2.1
[ ] rabbitmq_sharding 0.1.0
[ ] rabbitmq_shovel 3.6.5
[ ] rabbitmq_shovel_management 3.6.5
[ ] rabbitmq_stomp 3.6.5
[ ] rabbitmq_top 3.6.5
[ ] rabbitmq_tracing 3.6.5
[ ] rabbitmq_trust_store 3.6.5
[e*] rabbitmq_web_dispatch 3.6.5
[ ] rabbitmq_web_stomp 3.6.5
[ ] rabbitmq_web_stomp_examples 3.6.5
[ ] sockjs 0.3.4
[e*] webmachine 1.10.3

15672 is the port of rabbitmq's web management interface
[root@controller ~]# netstat -ntlup |grep 15672
tcp   0   0 0.0.0.0:15672   0.0.0.0:*   LISTEN   26806/beam.smp

6. Access it from the host machine with the following command (the IP is the control node's management network IP)

[root@daniel ~]# firefox 192.168.122.11:15672



memcache deployment

Role of memcache: memcached caches the verified tokens of the various openstack services.

1. Install relevant software packages at the control node

[root@controller ~]# yum install memcached python-memcached -y

2. Configure memcached listening

[root@controller ~]# vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.122.11,::1"

Change 127.0.0.1 to the control node's management network IP so that components on other nodes can also access memcache

3. Start the service and verify the ports

[root@controller ~]# systemctl restart memcached
[root@controller ~]# systemctl enable memcached
[root@controller ~]# netstat -ntlup |grep 11211
tcp   0   0 192.168.122.11:11211   0.0.0.0:*   LISTEN   30586/memcached
tcp6  0   0 ::1:11211              :::*        LISTEN   30586/memcached
udp   0   0 192.168.122.11:11211   0.0.0.0:*            30586/memcached
udp6  0   0 ::1:11211              :::*                 30586/memcached
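To confirm that other nodes can actually reach memcached, a quick check from the compute node (assuming nc from the nmap-ncat package is installed; 'quit' makes memcached close the connection):

[root@compute ~]# printf 'stats\nquit\n' | nc controller 11211 | head -3
STAT pid 30586
STAT uptime ...
STAT time ...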

3, Identity service keystone

Reference: https://docs.openstack.org/keystone/pike/install/

Introduction to the authentication function:

keystone has two main functions:

  • user management
  • Service Catalog

User management includes:

  • Authentication token, account, password, certificate, key
  • authorization (granting permissions)

Service catalog: a record of all of openstack's available services and their API endpoints (that is, URL access addresses)

keystone supports 3A:

  • account
  • authentication
  • authorization

Endpoint

  • public: externally facing services
  • internal: internal services
  • admin: management-related services

Terms and concepts:

  • user
  • project
  • role

A User is assigned a Role that grants access to resources in a specified Project.

Example: user Zhang San has the lecturer role in the operations-and-maintenance project (see the CLI sketch below).
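Expressed with the openstack CLI, that assignment pattern would look like this (ops, lecturer and zhangsan are hypothetical names used only for illustration):

# openstack project create ops
# openstack role create lecturer
# openstack role add --project ops --user zhangsan lecturer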

Installation and configuration

Reference: https://docs.openstack.org/keystone/pike/install/keystone-install-rdo.html

1. Create keystone database and authorize

[root@controller ~]# mysql -pdaniel.com

MariaDB [(none)]> create database keystone;

MariaDB [(none)]> grant all on keystone.* to 'keystone'@'localhost' identified by 'daniel.com';

MariaDB [(none)]> grant all on keystone.* to 'keystone'@'%' identified by 'daniel.com';

MariaDB [(none)]> flush privileges;

Verify authorization OK

[root@controller ~]# mysql -h controller -u keystone -pdaniel.com -e 'show databases'
+--------------------+
| Database |
+--------------------+
| information_schema |
| keystone |
+--------------------+

2. Install keystone related software at the control node

[root@controller ~]# yum install openstack-keystone httpd mod_wsgi -y

keystone is started via httpd
httpd needs the mod_wsgi module to run programs written in python

3. Configure keystone

[root@controller ~]# cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak

[root@controller ~]# vim /etc/keystone/keystone.conf
Configure the rabbitmq connection
405 transport_url = rabbit://openstack:daniel.com@controller:

Configure the keystone database connection
661 connection = mysql+pymysql://keystone:daniel.com@controller/keystone

Uncomment the line below; fernet is the token provider (that is, a token format; fernet tokens are compact and encrypted)
2774 provider = fernet

[root@controller ~]# grep -n '^[a-Z]' /etc/keystone/keystone.conf
405 :transport_url = rabbit://openstack:daniel.com@controller:
661 :connection = mysql+pymysql://keystone:daniel.com@controller/keystone
2774 :provider = fernet

4. Initialize the data in the database

[root@controller ~]# mysql -h controller -u keystone -pdaniel.com -e 'use keystone;show tables;'
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone

su -s specifies the shell to run, because keystone's default login shell is not /bin/bash
su -c "..." keystone means execute the command as the keystone user
[root@controller ~]# mysql -h controller -u keystone -pdaniel.com -e 'use keystone;show tables;' |wc -l
39
More than 30 tables were imported during initialization, which indicates success

5. Initialize keystone authentication information

[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

The following two directories are generated under /etc/keystone/, indicating that initialization succeeded (see the check below):
credential-keys
fernet-keys
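A quick way to confirm is simply listing the generated key directories (after fernet_setup the fernet-keys directory typically holds keys 0 and 1):

[root@controller ~]# ls -d /etc/keystone/credential-keys /etc/keystone/fernet-keys
[root@controller ~]# ls /etc/keystone/fernet-keys
0  1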

6. Initialize the api information of openstack administrator account

[root@controller ~]# keystone-manage bootstrap --bootstrap-password daniel.com \
--bootstrap-admin-url http://controller:35357/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne

daniel.com is the openstack administrator password set here

7. Configure httpd and start the service

[root@controller ~]# vim /etc/httpd/conf/httpd.conf
95 ServerName controller:80 		# modify this line

[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

[root@controller ~]# systemctl restart httpd
[root@controller ~]# systemctl enable httpd

[root@controller ~]# netstat -ntlup |grep http
tcp6  0  0 :::5000    :::*   LISTEN   387/httpd
tcp6  0  0 :::80      :::*   LISTEN   387/httpd
tcp6  0  0 :::35357   :::*   LISTEN   387/httpd

Create domain, project, user and role

Reference: https://docs.openstack.org/keystone/pike/install/keystone-users-rdo.html

Configure user variable information

1. Create variable script for admin user

[root@controller ~]# vim admin-openstack.sh
export OS_USERNAME=admin
export OS_PASSWORD=daniel.com
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

2. Create a project

The variable script above must be sourced (equivalent to logging in as the admin user) before operating

[root@controller ~]# source admin-openstack.sh

[root@controller ~]# openstack project list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 4fa10f2089d149eca374af9497730535 | admin |
+----------------------------------+-------+

3. Create a service project

[root@controller ~]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | cdc645fc266e4f35bfc23f36ecc223f3 |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | default                          |
+-------------+----------------------------------+

4. Create demo project

[root@controller ~]# openstack project create --domain default --description "Demo Project" demo
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Demo Project                     |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 5abe51bdb68c453c935a2179b5ed06a1 |
| is_domain   | False                            |
| name        | demo                             |
| parent_id   | default                          |
+-------------+----------------------------------+
[root@controller ~]# openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 4fa10f2089d149eca374af9497730535 | admin   |
| 5abe51bdb68c453c935a2179b5ed06a1 | demo    |
| cdc645fc266e4f35bfc23f36ecc223f3 | service |
+----------------------------------+---------+

5. Create demo user

[root@controller ~]# openstack user list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 528911ce70634cc296d69ef463d9e3fb | admin |
+----------------------------------+-------+

[root@controller ~]# openstack user create --domain default --password daniel.com demo
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | a1fa2787411c432096d4961ddb4e1a03 |
| name                | demo                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

[root@controller ~]# openstack user list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 528911ce70634cc296d69ef463d9e3fb | admin |
| a1fa2787411c432096d4961ddb4e1a03 | demo  |
+----------------------------------+-------+

6. Create a role

[root@controller ~]# openstack role list
+----------------------------------+----------+
| ID                               | Name     |
+----------------------------------+----------+
| 92065899c45e469abeed725db3e232a3 | admin    |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
+----------------------------------+----------+
_member_ is a built-in role; you can ignore it
[root@controller ~]# openstack role create user
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | 9bc0e93e91714972937a699e0e4dd06e |
| name      | user                             |
+-----------+----------------------------------+
[root@controller ~]# openstack role list
+----------------------------------+----------+
| ID                               | Name     |
+----------------------------------+----------+
| 92065899c45e469abeed725db3e232a3 | admin    |
| 9bc0e93e91714972937a699e0e4dd06e | user     |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
+----------------------------------+----------+

7. Add the demo user to the user role

[root@controller ~]# openstack role add --project demo --user demo user

verification

Reference: https://docs.openstack.org/keystone/pike/install/keystone-verify-rdo.html

1. Unset the admin environment variables sourced earlier

[root@controller ~]# unset OS_AUTH_URL OS_PASSWORD

[root@controller ~]# openstack user list
Missing value auth-url required for auth plugin password

2. Use admin user authentication

[root@controller ~]# openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue
Password: 			enter the admin password

3. User authentication using demo

[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name demo --os-username demo token issue
Password: 			enter the demo password

4. Access them from the host machine with the following commands (the IP is the control node's management network IP)

[root@daniel ~]# firefox 192.168.122.11:35357

[root@daniel ~]# firefox 192.168.122.11:5000

The following access information is returned; it is intended for programmatic access
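The same information can also be fetched on the command line; python -m json.tool just pretty-prints the returned JSON (the exact version id varies by release):

[root@controller ~]# curl -s http://controller:5000/v3 | python -m json.tool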

User environment variable script

Reference: https://docs.openstack.org/keystone/pike/install/keystone-openrc-rdo.html

The admin user's environment variable script was created earlier; here we write the demo user's environment variables, which makes it easy to switch user identities with these scripts later

[root@controller ~]# vim demo-openstack.sh
export OS_USERNAME=demo
export OS_PASSWORD=daniel.com
export OS_PROJECT_NAME=demo
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

source the different users' environment variable scripts and view the token information to verify the environment variables

[root@controller ~]# source admin-openstack.sh
[root@controller ~]# openstack token issue

[root@controller ~]# source demo-openstack.sh
[root@controller ~]# openstack token issue

Script OK

4, Image service glance

Reference: https://docs.openstack.org/glance/pike/install/

Database configuration

1. Create the database and grant privileges

[root@controller ~]# mysql -pdaniel.com

MariaDB [(none)]> create database glance;

MariaDB [(none)]> grant all on glance.* to 'glance'@'localhost' identified by 'daniel.com';

MariaDB [(none)]> grant all on glance.* to 'glance'@'%' identified by 'daniel.com';

MariaDB [(none)]> flush privileges;

MariaDB [(none)]> quit

2. Connection verification

[root@controller ~]# mysql -h controller -u glance -pdaniel.com -e 'show databases'
+--------------------+
| Database |
+--------------------+
| glance |
| information_schema |
+--------------------+

Permission configuration

1. Create user

[root@controller ~]# source admin-openstack.sh

[root@controller ~]# openstack user create --domain default --password daniel.com glance

[root@controller ~]# openstack user list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 528911ce70634cc296d69ef463d9e3fb | admin  |
| 693998862e8b4261828cc0a356df1234 | glance |
| a1fa2787411c432096d4961ddb4e1a03 | demo   |
+----------------------------------+--------+

2. Add the glance user to the admin role group of the Service project

[root@controller ~]# openstack role add --project service --user glance admin

3. Create the glance service

[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image

[root@controller ~]# openstack service list
+----------------------------------+----------+----------+
| ID                               | Name     | Type     |
+----------------------------------+----------+----------+
| 2da4060802bf4e4bbf9328fb68b819b6 | keystone | identity |
| 59c3f3f50fc4466f8f3bbb72ca9a9e70 | glance   | image    |
+----------------------------------+----------+----------+

4. Create the endpoints (URL access addresses) of the glance service API

[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292

[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292

[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292

verification

[root@controller ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                         |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
| 4bbe9d5c517a4262bb9ce799215aabdc | RegionOne | glance       | image        | True    | internal  | http://controller:9292      |
| 8c31c5a8060c4412b67b9acfad7f3071 | RegionOne | keystone     | identity     | True    | admin     | http://controller:35357/v3/ |
| 92244b7d5091491a997eecfa1cbff2fb | RegionOne | keystone     | identity     | True    | internal  | http://controller:5000/v3/  |
| 961a300c801246f2890e3168b55b2076 | RegionOne | glance       | image        | True    | public    | http://controller:9292      |
| c05adadbc74541a2a5cf014466d82473 | RegionOne | glance       | image        | True    | admin     | http://controller:9292      |
| c2481e7a89a34c0d8b85e50b9162bc01 | RegionOne | keystone     | identity     | True    | public    | http://controller:5000/v3/  |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+

Glance installation and configuration

1. Install at the control node

[root@controller ~]# yum install openstack-glance -y

2. Backup configuration file

[root@controller ~]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
[root@controller ~]# cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak

3. Modify the glance-api.conf configuration file

[root@controller ~]# vim /etc/glance/glance-api.conf
[database]
1823 connection = mysql+pymysql://glance:daniel.com@controller/glance

[glance_store]
1943 stores = file,http

1975 default_store = file

2294 filesystem_store_datadir = /var/lib/glance/images

3283 [keystone_authtoken] 	Note: this line itself does not need changing;
lines 3284-3292 below are added under this section
3284 auth_uri = http://controller:5000
3285 auth_url = http://controller:35357
3286 memcached_servers = controller:11211
3287 auth_type = password
3288 project_domain_name = default
3289 user_domain_name = default
3290 project_name = service
3291 username = glance
3292 password = daniel.com

[paste_deploy]
4235 flavor = keystone

The final configuration effect is as follows

[root@controller ~]# grep -Ev '#|^$' /etc/glance/glance-api.conf
[DEFAULT]
[cors]
[database]
connection = mysql+pymysql://glance:daniel.com@controller/glance
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images
[image_format]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = daniel.com
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]

4. Configure the glance-registry.conf configuration file

[root@controller ~]# vim /etc/glance/glance-registry.conf
1141 connection = mysql+pymysql://glance:daniel.com@controller/glance

1234 [keystone_authtoken] 	Note: this line itself does not need changing; lines 1235-1243 below are added under this section
1235 auth_uri = http://controller:5000
1236 auth_url = http://controller:35357
1237 memcached_servers = controller:11211
1238 auth_type = password
1239 project_domain_name = default
1240 user_domain_name = default
1241 project_name = service
1242 username = glance
1243 password = daniel.com

2158 flavor = keystone
[root@controller ~]# grep -Ev '#|^$' /etc/glance/glance-registry.conf
[DEFAULT]
[database]

connection = mysql+pymysql://glance:daniel.com@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = daniel.com
[matchmaker_redis]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]

Import data into the glance database

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1330: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade
  expire_on_commit=expire_on_commit, _conf=conf)
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> liberty, liberty initial
INFO  [alembic.runtime.migration] Running upgrade liberty -> mitaka01, add index on created_at and updated_at columns of 'images' table
INFO  [alembic.runtime.migration] Running upgrade mitaka01 -> mitaka02, update metadef os_nova_server
INFO  [alembic.runtime.migration] Running upgrade mitaka02 -> ocata01, add visibility to and remove is_public from images
INFO  [alembic.runtime.migration] Running upgrade ocata01 -> pike01, drop glare artifacts tables
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: pike01, current revision(s): pike01

Verify that the data is imported

[root@controller ~]# mysql -h controller -u glance -pdaniel.com -e 'use glance; show tables'

+----------------------------------+
| Tables_in_glance |
+----------------------------------+
| alembic_version |
| image_locations |
| image_members |
| image_properties |
| image_tags |
| images |
| metadef_namespace_resource_types |
| metadef_namespaces |
| metadef_objects |
| metadef_properties |
| metadef_resource_types |
| metadef_tags |
| migrate_version |
| task_info |
| tasks |
+----------------------------------+

Start service

[root@controller ~]# systemctl restart openstack-glance-api
[root@controller ~]# systemctl enable openstack-glance-api

[root@controller ~]# systemctl restart openstack-glance-registry
[root@controller ~]# systemctl enable openstack-glance-registry

[root@controller ~]# netstat -ntlup |grep -E '9191|9292'
tcp   0   0 0.0.0.0:9191   0.0.0.0:*   LISTEN   7417/python2
tcp   0   0 0.0.0.0:9292   0.0.0.0:*   LISTEN   7332/python2

9191 is the glance-registry port
9292 is the glance-api port
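As a quick check, the glance API root returns the supported API versions as JSON (a small verification, not part of the original steps):

[root@controller ~]# curl -s http://controller:9292 | python -m json.tool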

verification

1. Download the test image

[root@controller ~]# wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

2. Upload image

[root@controller ~]# source admin-openstack.sh

[root@controller ~]# openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public

--public means the image is available to all projects

3. Verify that the image upload is OK

[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 3aa31299-6102-4eab-ae91-84d204255fe2 | cirros | active |
+--------------------------------------+--------+--------+

[root@controller ~]# ls /var/lib/glance/images/3aa31299-6102-4eab-ae91-84d204255fe2
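openstack image show displays the full metadata of the uploaded image (checksum, size, disk format, visibility and so on):

[root@controller ~]# openstack image show cirros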

5, Compute component nova

Reference: https://docs.openstack.org/nova/pike/install/get-started-compute.html

nova control node deployment

Database configuration

[root@controller ~]# mysql -pdaniel.com

MariaDB [(none)]> create database nova_api;
MariaDB [(none)]> create database nova;
MariaDB [(none)]> create database nova_cell0;

MariaDB [(none)]> grant all on nova_api.* to 'nova'@'localhost' identified by 'daniel.com';
MariaDB [(none)]> grant all on nova_api.* to 'nova'@'%' identified by 'daniel.com';

MariaDB [(none)]> grant all on nova.* to 'nova'@'localhost' identified by 'daniel.com';
MariaDB [(none)]> grant all on nova.* to 'nova'@'%' identified by 'daniel.com';

MariaDB [(none)]> grant all on nova_cell0.* to 'nova'@'localhost' identified by 'daniel.com';
MariaDB [(none)]> grant all on nova_cell0.* to 'nova'@'%' identified by 'daniel.com';

MariaDB [(none)]> flush privileges;

MariaDB [(none)]> quit
[root@controller ~]# mysql -h controller -u nova -pdaniel.com -e 'show databases'
+--------------------+
| Database |
+--------------------+
| information_schema |
| nova |
| nova_api |
| nova_cell0 |
+--------------------+

Permission configuration

1. Create the nova user

[root@controller ~]# source admin-openstack.sh

[root@controller ~]# openstack user create --domain default --password daniel.com nova

[root@controller ~]# openstack user list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 528911ce70634cc296d69ef463d9e3fb | admin  |
| 648ef5d3f85e4894bbbacc8d45f8ebdb | nova   |
| 693998862e8b4261828cc0a356df1234 | glance |
| a1fa2787411c432096d4961ddb4e1a03 | demo   |
+----------------------------------+--------+

2. Add nova user to the admin role group of the Service project

[root@controller ~]# openstack role add --project service --user nova admin

3. Create nova service

[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute

[root@controller ~]# openstack service list
+----------------------------------+----------+----------+
| ID                               | Name     | Type     |
+----------------------------------+----------+----------+
| 2da4060802bf4e4bbf9328fb68b819b6 | keystone | identity |
| 59c3f3f50fc4466f8f3bbb72ca9a9e70 | glance   | image    |
| 8bfb289223284a939b54f043f786b17f | nova     | compute  |
+----------------------------------+----------+----------+

4. Configure the api address record of nova service

[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1

[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1

[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
[root@controller ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                         |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
| 12af4c0bd34b4588bb17bd0702066ed5 | RegionOne | nova         | compute      | True    | internal  | http://controller:8774/v2.1 |
| 4bbe9d5c517a4262bb9ce799215aabdc | RegionOne | glance       | image        | True    | internal  | http://controller:9292      |
| 513e7612169c4be9aae6af659ea536db | RegionOne | nova         | compute      | True    | admin     | http://controller:8774/v2.1 |
| 77f2d6b77d224b598cd4334d3980b82f | RegionOne | nova         | compute      | True    | public    | http://controller:8774/v2.1 |
| 8c31c5a8060c4412b67b9acfad7f3071 | RegionOne | keystone     | identity     | True    | admin     | http://controller:35357/v3/ |
| 92244b7d5091491a997eecfa1cbff2fb | RegionOne | keystone     | identity     | True    | internal  | http://controller:5000/v3/  |
| 961a300c801246f2890e3168b55b2076 | RegionOne | glance       | image        | True    | public    | http://controller:9292      |
| c05adadbc74541a2a5cf014466d82473 | RegionOne | glance       | image        | True    | admin     | http://controller:9292      |
| c2481e7a89a34c0d8b85e50b9162bc01 | RegionOne | keystone     | identity     | True    | public    | http://controller:5000/v3/  |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+


5. Create the placement user, used to track resource usage
[root@controller ~]# openstack user create --domain default --password daniel.com placement

[root@controller ~]# openstack user list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| 528911ce70634cc296d69ef463d9e3fb | admin     |
| 648ef5d3f85e4894bbbacc8d45f8ebdb | nova      |
| 693998862e8b4261828cc0a356df1234 | glance    |
| 6e68e53c047949ce8f72c54c0dd58c34 | placement |
| a1fa2787411c432096d4961ddb4e1a03 | demo      |
+----------------------------------+-----------+

6. Add the placement user to the admin role group of the Service project

[root@controller ~]# openstack role add --project service --user placement admin

7. Create placement service

[root@controller ~]# openstack service create --name placement --description "Placement API" placement

[root@controller ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 2da4060802bf4e4bbf9328fb68b819b6 | keystone  | identity  |
| 59c3f3f50fc4466f8f3bbb72ca9a9e70 | glance    | image     |
| 8bfb289223284a939b54f043f786b17f | nova      | compute   |
| ebe864d64de14f04b05b67df4dd7b449 | placement | placement |
+----------------------------------+-----------+-----------+

8. Create the api address record of the placement service

[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778

[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778

[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
[root@controller ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                         |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
| 12af4c0bd34b4588bb17bd0702066ed5 | RegionOne | nova         | compute      | True    | internal  | http://controller:8774/v2.1 |
| 4bbe9d5c517a4262bb9ce799215aabdc | RegionOne | glance       | image        | True    | internal  | http://controller:9292      |
| 513e7612169c4be9aae6af659ea536db | RegionOne | nova         | compute      | True    | admin     | http://controller:8774/v2.1 |
| 77f2d6b77d224b598cd4334d3980b82f | RegionOne | nova         | compute      | True    | public    | http://controller:8774/v2.1 |
| 862441d899cb4b8aad4c7463783e3da7 | RegionOne | placement    | placement    | True    | admin     | http://controller:8778      |
| 8c31c5a8060c4412b67b9acfad7f3071 | RegionOne | keystone     | identity     | True    | admin     | http://controller:35357/v3/ |
| 92244b7d5091491a997eecfa1cbff2fb | RegionOne | keystone     | identity     | True    | internal  | http://controller:5000/v3/  |
| 961a300c801246f2890e3168b55b2076 | RegionOne | glance       | image        | True    | public    | http://controller:9292      |
| bf8defa2f0b34d8e8a5de3b87ca255e6 | RegionOne | placement    | placement    | True    | public    | http://controller:8778      |
| c05adadbc74541a2a5cf014466d82473 | RegionOne | glance       | image        | True    | admin     | http://controller:9292      |
| c2481e7a89a34c0d8b85e50b9162bc01 | RegionOne | keystone     | identity     | True    | public    | http://controller:5000/v3/  |
| d1f0416db52a4b9fae5187b29ab138fb | RegionOne | placement    | placement    | True    | internal  | http://controller:8778      |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+

Software installation and configuration

1. Install nova related software at the control node

[root@controller ~]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y

2. Backup configuration file

[root@controller ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak

[root@controller ~]# cp /etc/httpd/conf.d/00-nova-placement-api.conf /etc/httpd/conf.d/00-nova-placement-api.conf.bak

3. Modify the nova.conf configuration file

[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
2753 enabled_apis=osapi_compute,metadata

[api_database]
3479 connection=mysql+pymysql://nova:daniel.com@controller/nova_api

[database]
4453 connection=mysql+pymysql://nova:daniel.com@controller/nova

[DEFAULT]
3130 transport_url=rabbit://openstack:daniel.com@controller

[api]
3193 auth_strategy=keystone

5771 [keystone_authtoken] 	Note: this line itself does not need changing; lines 5772-5780 are all added under [keystone_authtoken]
5772 auth_uri = http://controller:5000
5773 auth_url = http://controller:35357
5774 memcached_servers = controller:11211
5775 auth_type = password
5776 project_domain_name = default
5777 user_domain_name = default
5778 project_name = service
5779 username = nova
5780 password = daniel.com

[DEFAULT]
1817 use_neutron=true
2479 firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

[vnc]
9897 enabled=true
9919 vncserver_listen=192.168.122.11
9930 vncserver_proxyclient_address=192.168.122.11

[glance]
5067 api_servers=http://controller:9292

[oslo_concurrency]
7489 lock_path=/var/lib/nova/tmp

8304 [placement] 	Note: this line itself does not need changing; lines 8305-8312 are all added under [placement]
8305 os_region_name = RegionOne
8306 project_domain_name = Default
8307 project_name = service
8308 auth_type = password
8309 user_domain_name = Default
8310 auth_url = http://controller:35357/v3
8311 username = placement
8312 password = daniel.com

There are many changes, so you can simply copy the following configuration

[root@controller ~]# grep -Ev '^#|^$' /etc/nova/nova.conf
[DEFAULT]
use_neutron=true
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url=rabbit://openstack:daniel.com@controller
[api]
auth_strategy=keystone
[api_database]
connection=mysql+pymysql://nova:daniel.com@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
connection=mysql+pymysql://nova:daniel.com@controller/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers=http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = daniel.com
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = daniel.com
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled=true
vncserver_listen=192.168.122.11
vncserver_proxyclient_address=192.168.122.11
[workarounds]
[wsgi]
[xenserver]
[xvp]

4. Configure the 00-nova-placement-api.conf configuration file

[root@controller ~]# vim /etc/httpd/conf.d/00-nova-placement-api.conf

3 <VirtualHost *:8778>
......
Add the following block above </VirtualHost>:
......
  <Directory /usr/bin>
	<IfVersion >= 2.4>
	  Require all granted
	</IfVersion>
	<IfVersion < 2.4>
		Order allow,deny
		Allow from all
	</IfVersion>
  </Directory>
25 </VirtualHost>

5. Restart the httpd service

[root@controller ~]# systemctl restart httpd

Import data to nova related database

Import data to nova_api library

[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova

Register cell0 database

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create cell1

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
ce887b87-b321-4bc2-a6c5-96642c6bdc4c

Synchronize the information into the nova database again (afterwards both the nova and nova_cell0 databases contain the related tables)

[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
Ignore the warning messages. This step takes a long time (a few minutes in this environment); wait patiently

verification

[root@controller ~]# nova-manage cell_v2 list_cells
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| Name  | UUID                                 | Transport URL                      | Database Connection                             |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/                             | mysql+pymysql://nova:****@controller/nova_cell0 |
| cell1 | ce887b87-b321-4bc2-a6c5-96642c6bdc4c | rabbit://openstack:****@controller | mysql+pymysql://nova:****@controller/nova       |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+

[root@controller ~]# mysql -h controller -u nova -pdaniel.com -e 'use nova;show tables;' |wc -l
111

[root@controller ~]# mysql -h controller -u nova -pdaniel.com -e 'use nova_api;show tables;' |wc -l
33

[root@controller ~]# mysql -h controller -u nova -pdaniel.com -e 'use nova_cell0;show tables;' |wc -l
111

Start service

[root@controller ~]# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

[root@controller ~]# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Verify access address record

[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| keystone  | identity  | RegionOne                               |
|           |           |   admin: http://controller:35357/v3/    |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:5000/v3/  |
|           |           | RegionOne                               |
|           |           |   public: http://controller:5000/v3/    |
|           |           |                                         |
| glance    | image     | RegionOne                               |
|           |           |   internal: http://controller:9292      |
|           |           | RegionOne                               |
|           |           |   public: http://controller:9292        |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:9292         |
|           |           |                                         |
| nova      | compute   | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1    |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1   |
|           |           |                                         |
| placement | placement | RegionOne                               |
|           |           |   admin: http://controller:8778         |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8778        |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8778      |
+-----------+-----------+-----------------------------------------+

Validation log file

[root@controller ~]# ls /var/log/nova/
nova-api.log 	nova-consoleauth.log nova-novncproxy.log nova-scheduler.log
nova-conductor.log nova-manage.log nova-placement-api.log

nova compute node deployment

Reference: https://docs.openstack.org/nova/pike/install/compute-install.html

The following operations are performed on the compute node

Installation and configuration

1. Install software

[root@compute ~]# yum install openstack-nova-compute sysfsutils -y

2. Backup configuration file

[root@compute ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak

3. Modify the configuration file (you can copy the control node's nova configuration file and modify it)

[root@compute ~]# cat /etc/nova/nova.conf
[DEFAULT]
use_neutron=true
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url=rabbit://openstack:daniel.com@controller
[api]
auth_strategy=keystone
[api_database]
connection=mysql+pymysql://nova:daniel.com@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
connection=mysql+pymysql://nova:daniel.com@controller/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers=http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = daniel.com
[libvirt]
virt_type=qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = daniel.com
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.122.12
novncproxy_base_url = http://192.168.122.11:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]

Note the differences from the control node's nova.conf:
1. The parameters under [vnc] differ:
vncserver_proxyclient_address is the compute node's management network IP

2. Under the [libvirt] section, virt_type=qemu is added. kvm cannot be used here, because this platform was built inside kvm itself (cat /proc/cpuinfo |egrep 'vmx|svm' finds nothing; see the check below).
However, if the production environment is built on physical servers, it should be virt_type=kvm.
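A quick way to decide between qemu and kvm is to count the hardware virtualization CPU flags:

[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
0 		# 0 means no hardware virtualization support, so use virt_type=qemu; 1 or more means virt_type=kvm is fine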

Start service

[root@compute ~]# systemctl start libvirtd.service openstack-nova-compute.service
[root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service

Add the compute node on the control node

1. View service

[root@controller ~]# openstack compute service list

2. Add the new compute node's records to the nova database

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': ce887b87-b321-4bc2-a6c5-96642c6bdc4c
Checking host mapping for compute host 'compute': ee3f5d57-22be-489b-af2c-35e369c5aff9
Creating host mapping for compute host 'compute': ee3f5d57-22be-489b-af2c-35e369c5aff9
Found 1 unmapped computes in cell: ce887b87-b321-4bc2-a6c5-96642c6bdc4c

3. Verify that all APIs are normal

[root@controller ~]# nova-status upgrade check
+---------------------------+
| Upgrade Check Results |
+---------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+---------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+---------------------------+
| Check: Resource Providers |
| Result: Success |
| Details: None |
+---------------------------+

6, Network component neutron

Reference: https://docs.openstack.org/neutron/pike/install/

Deployment of neutron control node

Database configuration

[root@controller ~]# mysql -pdaniel.com

MariaDB [(none)]> create database neutron;

MariaDB [(none)]> grant all on neutron.* to 'neutron'@'localhost' identified by 'daniel.com';

MariaDB [(none)]> grant all on neutron.* to 'neutron'@'%' identified by 'daniel.com';

MariaDB [(none)]> flush privileges;

MariaDB [(none)]> quit
[root@controller ~]# mysql -h controller -u neutron -pdaniel.com -e 'show databases'
+--------------------+
| Database |
+--------------------+
| information_schema |
| neutron |
+--------------------+

Permission configuration

1. Create a neutron user

[root@controller ~]# source admin-openstack.sh

[root@controller ~]# openstack user create --domain default --password daniel.com neutron

[root@controller ~]# openstack user list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| 528911ce70634cc296d69ef463d9e3fb | admin     |
| 648ef5d3f85e4894bbbacc8d45f8ebdb | nova      |
| 693998862e8b4261828cc0a356df1234 | glance    |
| 6e68e53c047949ce8f72c54c0dd58c34 | placement |
| 9f35128a10b84b4fa988aa93b67bf712 | neutron   |
| a1fa2787411c432096d4961ddb4e1a03 | demo      |
+----------------------------------+-----------+

2. Add the neutron user to the admin role group of the Service project

[root@controller ~]# openstack role add --project service --user neutron admin

3. Create a neutron service

[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network

[root@controller ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 2da4060802bf4e4bbf9328fb68b819b6 | keystone  | identity  |
| 59c3f3f50fc4466f8f3bbb72ca9a9e70 | glance    | image     |
| 8bfb289223284a939b54f043f786b17f | nova      | compute   |
| b4cbb4cce6a5446983969e5b6fde51fa | neutron   | network   |
| ebe864d64de14f04b05b67df4dd7b449 | placement | placement |
+----------------------------------+-----------+-----------+

4. Configure the api address record of the neutron service

[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696

[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696

[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696

Software installation and configuration

Here we choose the second networking option (self-service networks):

https://docs.openstack.org/neutron/pike/install/controller-install-option2-rdo.html

1. Install neutron related software at the control node

[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

2. Backup configuration file

[root@controller ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak

[root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak

[root@controller ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak

3. Configure the neutron.conf file

[root@controller ~]# vim /etc/neutron/neutron.conf

[DEFAULT]

27 auth_strategy = keystone

30 core_plugin = ml2
33 service_plugins = router
85 allow_overlapping_ips = true

98 notify_nova_on_port_status_changes = true
102 notify_nova_on_port_data_changes = true

553 transport_url = rabbit://openstack:daniel.com@controller
560 rpc_backend = rabbit

[database]
710 connection = mysql+pymysql://neutron:daniel.com@controller/neutron

794 [keystone_authtoken] 	This line stays unchanged; lines 795-803 are all configured under [keystone_authtoken]
795 auth_uri = http://controller:5000
796 auth_url = http://controller:35357
797 memcached_servers = controller:11211
798 auth_type = password
799 project_domain_name = default
800 user_domain_name = default
801 project_name = service
802 username = neutron
803 password = daniel.com

1022 [nova] 	This line stays unchanged; lines 1023-1030 are all configured under [nova]
1023 auth_url = http://controller:35357
1024 auth_type = password
1025 project_domain_name = default
1026 user_domain_name = default
1027 region_name = RegionOne
1028 project_name = service

1029 username = nova
1030 password = daniel.com

[oslo_concurrency]
1141 lock_path = /var/lib/neutron/tmp

Configuration results

[root@controller ~]# grep -Ev '#|^$' /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
transport_url = rabbit://openstack:daniel.com@controller
rpc_backend = rabbit
[agent]
[cors]
[database]
connection = mysql+pymysql://neutron:daniel.com@controller/neutron
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = daniel.com
[matchmaker_redis]
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = daniel.com
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]

4. Configure the Modular Layer 2 (ML2) plug-in file ml2_conf.ini

[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
132 type_drivers = flat,vlan,vxlan
137 tenant_network_types = vxlan
141 mechanism_drivers = linuxbridge,l2population
146 extension_drivers = port_security

[ml2_type_flat]
182 flat_networks = provider

[ml2_type_vxlan]
235 vni_ranges = 1:1000
Supports 1000 tunnel networks (note: line 193 has an identically named parameter; do not change that one by mistake, or you will not be able to create self-service private networks)

[securitygroup]
259 enable_ipset = true 		# improves the efficiency of security group rules
[root@controller ~]# grep -Ev '#|^$' /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true

5. Configure the linuxbridge_agent.ini file

[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
142 physical_interface_mappings = provider:eth1
Note that the NIC here is eth1, i.e. the name of the NIC facing the external network

[vxlan]
175 enable_vxlan = true
196 local_ip = 192.168.122.11
This IP is the management NIC's IP
220 l2_population = true

[securitygroup]
155 firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
160 enable_security_group = true
[root@controller ~]# grep -Ev '#|^$'  /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:eth1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
local_ip = 192.168.122.11
l2_population = true

6. Configure the l3_agent.ini file

[root@controller ~]# vim /etc/neutron/l3_agent.ini 
16 interface_driver = linuxbridge
[root@controller ~]# grep -Ev '#|^$' /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
[agent]
[ovs]

7. Configure dhcp_agent.ini file

[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
16 interface_driver = linuxbridge
37 dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
46 enable_isolated_metadata = true
[root@controller ~]# grep -Ev '#|^$' /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[agent]
[ovs]

8. Configure metadata_agent.ini file

Reference: https://docs.openstack.org/neutron/pike/install/controller-install-rdo.html

[root@controller ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
23 nova_metadata_host = controller
35 metadata_proxy_shared_secret = metadata_daniel

Note: metadata_daniel here is just a string; it must match metadata_proxy_shared_secret in the nova configuration file (see the note below on generating a random secret)
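For production you would normally use a random string rather than a guessable one; one way to generate such a secret (then put the identical value in both files):

[root@controller ~]# openssl rand -hex 10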

9. Add the following paragraph to the nova.conf configuration file

[root@controller ~]# vim /etc/nova/nova.conf

[neutron] 		Add the following under the [neutron] section
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = daniel.com
service_metadata_proxy = true
metadata_proxy_shared_secret = metadata_daniel
[root@controller ~]# grep -Ev '#|^$' /etc/nova/nova.conf
[DEFAULT]
use_neutron=true
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url=rabbit://openstack:daniel.com@controller
[api]
auth_strategy=keystone
[api_database]
connection=mysql+pymysql://nova:daniel.com@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
connection=mysql+pymysql://nova:daniel.com@controller/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers=http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = daniel.com
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = daniel.com
service_metadata_proxy = true
metadata_proxy_shared_secret = metadata_daniel
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = daniel.com
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled=true
vncserver_listen=192.168.122.11
vncserver_proxyclient_address=192.168.122.11
[workarounds]
[wsgi]
[xenserver]
[xvp]

10. The network service initialization scripts expect /etc/neutron/plugin.ini to point to the ml2_conf.ini configuration file, so create a symbolic link

[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
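
Confirm that the link resolves to the right target (a quick check, not in the original steps):

[root@controller ~]# ls -l /etc/neutron/plugin.ini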

11. Synchronize the database (this takes a while)

[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
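
As a rough sanity check after the sync, you can count the neutron tables, mirroring the check used for cinder later in this document (the exact count varies by release; this assumes the neutron database user created earlier):

[root@controller ~]# mysql -h controller -u neutron -pdaniel.com -e 'use neutron;show tables' | wc -l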

Start service

Restart nova service

[root@controller ~]# systemctl restart openstack-nova-api.service

Start the neutron service

[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

[root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
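
A quick check, in the same style as the earlier rabbitmq port check, that neutron-server is listening on its API port 9696:

[root@controller ~]# netstat -ntlup | grep 9696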

neutron compute node deployment

reference resources: https://docs.openstack.org/neutron/pike/install/compute-install-rdo.html

Note: the following operations are performed on the compute node

Installation and configuration

1. Install relevant software

[root@compute ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y

2. Backup configuration file

[root@compute ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak

[root@compute ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak

3. Configure the neutron.conf file

[root@compute ~]# vim /etc/neutron/neutron.conf

[DEFAULT]
27 auth_strategy = keystone
553 transport_url = rabbit://openstack:daniel.com@controller

794 [keystone_authtoken] 	Add the following under the [keystone_authtoken] section
795 auth_uri = http://controller:5000
796 auth_url = http://controller:35357
797 memcached_servers = controller:11211
798 auth_type = password
799 project_domain_name = default
800 user_domain_name = default
801 project_name = service
802 username = neutron
803 password = daniel.com

[oslo_concurrency]
1135 lock_path = /var/lib/neutron/tmp
[root@compute ~]# grep -Ev '#|^$' /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:daniel.com@controller

[agent]
[cors]
[database]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = daniel.com
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]

4. Configure networking option 2 (self-service networks), the same as on the controller

reference resources: https://docs.openstack.org/neutron/pike/install/compute-install-option2-rdo.html

Configuring linuxbridge_agent.ini file

[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
142 physical_interface_mappings = provider:eth1
 Name of the external network card

[vxlan]
175 enable_vxlan = true
196 local_ip = 192.168.122.12
 Management network IP of this node (pay close attention)
220 l2_population = true

[securitygroup]
155 firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
160 enable_security_group = true

[root@compute ~]# grep -Ev '#|^$' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:eth1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
local_ip = 192.168.122.12
l2_population = true

5. Configure the nova.conf configuration file

[root@compute ~]# vim /etc/nova/nova.conf

[neutron] 	Add the following under the [neutron] section
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = daniel.com
[root@compute ~]# grep -Ev '#|^$' /etc/nova/nova.conf
[DEFAULT]
use_neutron=true
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url=rabbit://openstack:daniel.com@controller
[api]
auth_strategy=keystone
[api_database]
connection=mysql+pymysql://nova:daniel.com@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
connection=mysql+pymysql://nova:daniel.com@controller/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers=http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = daniel.com
[libvirt]
virt_type=qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = daniel.com
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = daniel.com
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.122.12
novncproxy_base_url = http://controller:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]

Start service

1. Restart the openstack-nova-compute service on the compute node

[root@compute ~]# systemctl restart openstack-nova-compute.service

2. Start the neutron-linuxbridge-agent service on the compute node

[root@compute ~]# systemctl start neutron-linuxbridge-agent.service
[root@compute ~]# systemctl enable neutron-linuxbridge-agent.service

3. Verify on the control node

[root@controller ~]# source admin-openstack.sh
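
The official Pike guide verifies networking with the agent list; the linuxbridge agent should now appear for both the controller and compute hosts, alongside the DHCP, L3 and metadata agents (output omitted; IDs vary per deployment):

[root@controller ~]# openstack network agent list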

7, dashboard component horizon

reference resources: https://docs.openstack.org/horizon/pike/install/

Installation and configuration

1. Install software at the control node

[root@controller ~]# yum install openstack-dashboard -y

2. Backup configuration file

[root@controller ~]# cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.bak

3. Configure local_settings file

[root@controller ~]# vim /etc/openstack-dashboard/local_settings

38 ALLOWED_HOSTS = ['*',] 	Allow all hosts for convenient testing; in production, allow only specific IPs

64 OPENSTACK_API_VERSIONS = {
66     "identity": 3,
67     "image": 2,
68     "volume": 2,
69     "compute": 2,
70 }

75 OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
 Enable multi-domain support
97 OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
 Default domain name
153 SESSION_ENGINE = 'django.contrib.sessions.backends.cache' 	Add this line
154 CACHES = {
155     'default': {
156         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
157         'LOCATION': 'controller:11211', 	Hand session storage to memcached on the controller
158     },
159 }

 With the block above in place, comment out this original block:
161 #CACHES = {
162 #    'default': {
163 #        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
164 #    },
165 #}

183 OPENSTACK_HOST = "controller"
184 OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST 	Change the version to v3 instead of v2.0
185 OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
 Default role

313 OPENSTACK_NEUTRON_NETWORK = {
314     'enable_router': True,
315     'enable_quotas': True,
316     'enable_ipv6': True,
317     'enable_distributed_router': True,
318     'enable_ha_router': True,
319     'enable_fip_topology_check': True,
 Enable all of these; we use networking option 2 (self-service)


453 TIME_ZONE = "Asia/Shanghai"
 Set the time zone to Asia/Shanghai

Note: the boolean values in the configuration above are Python, so they must be written as True (capitalized), not true

4. Configure the httpd sub configuration file of the dashboard

[root@controller ~]# vim /etc/httpd/conf.d/openstack-dashboard.conf

4 WSGIApplicationGroup %{GLOBAL}
 Add this line at line 4. It is not in the official CentOS document (only in the Ubuntu one), but it must be added here, otherwise the dashboard cannot be accessed later

Start service

[root@controller ~]# systemctl restart httpd memcached

Verify login
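
Per the official guide, browse to http://controller/dashboard and log in with the admin or demo account (domain: default). A quick reachability check from the command line (assuming the default /dashboard path used by the RDO package):

[root@controller ~]# curl -I http://controller/dashboard/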

8, Block storage component cinder

reference resources: https://docs.openstack.org/cinder/pike/install/

cinder control node deployment

Database configuration

reference resources: https://docs.openstack.org/cinder/pike/install/cinder-controller-install-rdo.html

[root@controller ~]# mysql -pdaniel.com

MariaDB [(none)]> create database cinder;

MariaDB [(none)]> grant all on cinder.* to 'cinder'@'localhost' identified by 'daniel.com';

MariaDB [(none)]> grant all on cinder.* to 'cinder'@'%' identified by 'daniel.com';

MariaDB [(none)]> flush privileges;

MariaDB [(none)]> quit
[root@controller ~]# mysql -h controller -u cinder -pdaniel.com -e 'show databases';
+--------------------+
| Database           |
+--------------------+
| cinder             |
| information_schema |
+--------------------+

Permission configuration

1. Create a cinder user

[root@controller ~]# source admin-openstack.sh

[root@controller ~]# openstack user create --domain default --password daniel.com cinder

[root@controller ~]# openstack user list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| 0f92b4526f91451b81b2dc41f187fbf1 | cinder    |
| 528911ce70634cc296d69ef463d9e3fb | admin     |
| 648ef5d3f85e4894bbbacc8d45f8ebdb | nova      |
| 693998862e8b4261828cc0a356df1234 | glance    |
| 6e68e53c047949ce8f72c54c0dd58c34 | placement |
| 9f35128a10b84b4fa988aa93b67bf712 | neutron   |
| a1fa2787411c432096d4961ddb4e1a03 | demo      |
+----------------------------------+-----------+

2. Add the cinder user to the service project and assign the admin role

[root@controller ~]# openstack role add --project service --user cinder admin

3. Create cinderv2 and cinderv3 services

[root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

[root@controller ~]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

[root@controller ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 2bdd5cdb64d1480c96d70ea945c1c529 | cinderv3  | volumev3  |
| 2da4060802bf4e4bbf9328fb68b819b6 | keystone  | identity  |
| 59c3f3f50fc4466f8f3bbb72ca9a9e70 | glance    | image     |
| 8bfb289223284a939b54f043f786b17f | nova      | compute   |
| b4cbb4cce6a5446983969e5b6fde51fa | neutron   | network   |
| d7704f00f8fd4b9aa41881852481da06 | cinderv2  | volumev2  |
| ebe864d64de14f04b05b67df4dd7b449 | placement | placement |
+----------------------------------+-----------+-----------+

4. Create the cinder-related endpoint records

[root@controller ~]# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

Verify with the endpoint list (the output is too long to include here):
[root@controller ~]# openstack endpoint list
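
If the full listing is unwieldy, it can be filtered per service (assuming the installed client supports the --service option):

[root@controller ~]# openstack endpoint list --service volumev2
[root@controller ~]# openstack endpoint list --service volumev3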

Software installation and configuration

1. Install the openstack cinder package on the control node

[root@controller ~]# yum install openstack-cinder -y

2. Backup configuration file

[root@controller ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak

3. Configure the cinder.conf configuration file

[root@controller ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
283 my_ip = 192.168.122.11
288 glance_api_servers = http://controller:9292 	This line is not in the official document; it is needed for the connection to glance and should be added
400 auth_strategy = keystone
1212 transport_url = rabbit://openstack:daniel.com@controller
1219 rpc_backend = rabbit 	Not in the official document; it will be removed in future releases, but add it for now
[database]
3782 connection = mysql+pymysql://cinder:daniel.com@controller/cinder
4009 [keystone_authtoken] 	Add the following under the [keystone_authtoken] section
4010 auth_uri = http://controller:5000
4011 auth_url = http://controller:35357
4012 memcached_servers = controller:11211
4013 auth_type = password
4014 project_domain_name = default
4015 user_domain_name = default
4016 project_name = service
4017 username = cinder
4018 password = daniel.com

[oslo_concurrency]
4298 lock_path = /var/lib/cinder/tmp

verification

[root@controller ~]# grep -Ev '#|^$' /etc/cinder/cinder.conf
[DEFAULT]
my_ip = 192.168.122.11
glance_api_servers = http://controller:9292
auth_strategy = keystone
transport_url = rabbit://openstack:daniel.com@controller
rpc_backend = rabbit
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection =
mysql+pymysql://cinder:daniel.com@controller/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = daniel.com
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[ssl]

4. Configure the nova.conf configuration file

[root@controller ~]# vim /etc/nova/nova.conf

[cinder] 	Find the [cinder] section and add this line under it
os_region_name = RegionOne

5. Restart the openstack-nova-api service

[root@controller ~]# systemctl restart openstack-nova-api.service

6. Synchronize database

[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder

Validate database table information
[root@controller ~]# mysql -h controller -u cinder -pdaniel.com -e 'use cinder;show tables' |wc -l
36

Start service

Start the service at the control node

[root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

verification

[root@controller ~]# netstat -ntlup |grep :8776
tcp  0  0 0.0.0.0:8776  0.0.0.0:*  LISTEN  13719/python2

[root@controller ~]# openstack volume service list
+------------------+------------+------+---------+-------+----------------------------+
| Binary           | Host       | Zone | Status  | State | Updated At                 |
+------------------+------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up    | 2019-07-01T15:41:32.000000 |
+------------------+------------+------+---------+-------+----------------------------+

cinder storage node deployment

reference resources: https://docs.openstack.org/cinder/pike/install/cinder-storage-install-rdo.html

Note: the following operations are performed on the third node (the cinder storage node)

Adding a hard disk to a storage node

Add a hard disk to the cinder storage node to simulate storage (if you already added one earlier, skip this step)

[root@cinder ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1 1024M  0 rom
vda    253:0    0   50G  0 disk
├─vda1 253:1    0  300M  0 part /boot
├─vda2 253:2    0    2G  0 part [SWAP]
└─vda3 253:3    0 47.7G  0 part /
vdb    253:16   0   50G  0 disk

Confirm that the new disk vdb is present

Installation and configuration

1. Install lvm related software on the storage node

[root@cinder ~]# yum install lvm2 device-mapper-persistent-data -y

2. Start service

[root@cinder ~]# systemctl start lvm2-lvmetad.service
[root@cinder ~]# systemctl enable lvm2-lvmetad.service

3. Create LVM

[root@cinder ~]# pvcreate /dev/vdb
Physical volume "/dev/vdb" successfully created.

[root@cinder ~]# vgcreate cinder_lvm /dev/vdb
Volume group "cinder_lvm" successfully created

Check the pv and vg (note: if the storage node's operating system was installed on LVM, multiple entries will appear here; distinguish them carefully)

[root@cinder ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/vdb cinder_lvm lvm2 a-- <50.00g <50.00g
[root@cinder ~]# vgs
VG #PV #LV #SN Attr VSize VFree
cinder_lvm 1 0 0 wz--n- <50.00g <50.00g

4. Configure LVM filtering

[root@cinder ~]# vim /etc/lvm/lvm.conf

142 filter = [ "a/vdb/", "r/.*/" ]

Add this line; "a" means accept (allow access) and "r" means reject
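
Note: if the storage node's operating system itself sits on LVM (it does not in this lab), the OS disk must be accepted as well, or the root volume group becomes invisible to the LVM tools. A sketch assuming the OS disk is vda:

filter = [ "a/vda/", "a/vdb/", "r/.*/" ]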

5. Install cinder related software

[root@cinder ~]# yum install openstack-cinder targetcli python-keystone -y

6. Configure the cinder.conf configuration file

[root@cinder ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak

[root@cinder ~]# vim /etc/cinder/cinder.conf

[DEFAULT]
283 my_ip = 192.168.122.13 	Management network IP of the storage node
288 glance_api_servers = http://controller:9292

400 auth_strategy = keystone
404 enabled_backends = lvm
1212 transport_url = rabbit://openstack:daniel.com@controller
1219 rpc_backend = rabbit

[database]
3782 connection = mysql+pymysql://cinder:daniel.com@controller/cinder

4009 [keystone_authtoken] 	Add the following under the [keystone_authtoken] section
4010 auth_uri = http://controller:5000
4011 auth_url = http://controller:35357
4012 memcached_servers = controller:11211
4013 auth_type = password
4014 project_domain_name = default
4015 user_domain_name = default
4016 project_name = service
4017 username = cinder
4018 password = daniel.com

[oslo_concurrency]
4298 lock_path = /var/lib/cinder/tmp

5174 [lvm] 		This section does not exist; manually add these 5 lines at the end of the configuration file
5175 volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
5176 volume_group = cinder_lvm 	Must match the name of the vg created earlier
5177 iscsi_protocol = iscsi
5178 iscsi_helper = lioadm

Verify configuration

[root@cinder ~]# grep -Ev '#|^$' /etc/cinder/cinder.conf
[DEFAULT]
my_ip = 192.168.122.13
glance_api_servers = http://controller:9292
auth_strategy = keystone
enabled_backends = lvm
transport_url = rabbit://openstack:daniel.com@controller
rpc_backend = rabbit
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:daniel.com@controller/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = daniel.com
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[ssl]
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder_lvm
iscsi_protocol = iscsi
iscsi_helper = lioadm

Start service

1. Start the service on the cinder storage node

[root@cinder ~]# systemctl start openstack-cinder-volume.service target.service
[root@cinder ~]# systemctl enable openstack-cinder-volume.service target.service

2. Verify at the control node controller

[root@controller ~]# openstack volume service list

+------------------+------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host       | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled | up    | 2019-07-02T15:28:24.000000 | -               |
| cinder-volume    | cinder@lvm | nova | enabled | up    | 2019-07-02T15:22:20.000000 | -               |
+------------------+------------+------+---------+-------+----------------------------+-----------------+

3. Verify on dashboard

Before cinder was deployed there was no "Volumes" option in the dashboard; after logging out and logging in again, the "Volumes" option appears.

9, Simple use of cloud platform

reference resources: https://docs.openstack.org/zh_CN/install-guide/launch-instance.html

Create network

[root@controller ~]# openstack network list
[root@controller ~]# openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider

verification

[root@controller ~]# openstack network list
+--------------------------------------+----------+---------+
| ID                                   | Name     | Subnets |
+--------------------------------------+----------+---------+
| 78723928-3bde-4b83-8fb0-4b04096c8f3e | provider |         |
+--------------------------------------+----------+---------+

Add a subnet to the network

The subnet created here corresponds to the network that the eth1 NIC is attached to

[root@controller ~]# openstack subnet create --network provider --allocation-pool start=192.168.100.100,end=192.168.100.250 --dns-nameserver 114.114.114.114 --gateway 192.168.100.1 --subnet-range 192.168.100.0/24 provider

verification

[root@controller ~]# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 78723928-3bde-4b83-8fb0-4b04096c8f3e | provider | 36a0388b-b692-4546-9b50-b6184d8fced7 |
+--------------------------------------+----------+--------------------------------------+
[root@controller ~]# openstack subnet list
+--------------------------------------+----------+--------------------------------------+------------------+
| ID                                   | Name     | Network                              | Subnet           |
+--------------------------------------+----------+--------------------------------------+------------------+
| 36a0388b-b692-4546-9b50-b6184d8fced7 | provider | 78723928-3bde-4b83-8fb0-4b04096c8f3e | 192.168.100.0/24 |
+--------------------------------------+----------+--------------------------------------+------------------+

Create virtual machine specification (flavor)

[root@controller ~]# openstack flavor list
[root@controller ~]# openstack flavor create --id 0 --vcpus 1 --ram 512 --disk 1 m1.nano
[root@controller ~]# openstack flavor list
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name    | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0  | m1.nano | 512 |    1 |         0 |     1 | True      |
+----+---------+-----+------+-----------+-------+-----------+

Create virtual machine instance

Creating a virtual machine as the dashboard admin user

The admin user should not be used for day-to-day management of virtual machines; here we simply create one once as a test.

Create a VM instance from the command line

1. View the image, specification, network and other information

[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 3aa31299-6102-4eab-ae91-84d204255fe2 | cirros | active |
+--------------------------------------+--------+--------+
[root@controller ~]# openstack flavor list
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name    | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0  | m1.nano | 512 |    1 |         0 |     1 | True      |
+----+---------+-----+------+-----------+-------+-----------+
[root@controller ~]# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 78723928-3bde-4b83-8fb0-4b04096c8f3e | provider | 36a0388b-b692-4546-9b50-b6184d8fced7 |
+--------------------------------------+----------+--------------------------------------+

2. Create an instance

[root@controller ~]# openstack server create --flavor m1.nano --image cirros --nic net-id=78723928-3bde-4b83-8fb0-4b04096c8f3e admin_instance1
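
Watch the build status until it becomes ACTIVE before requesting a console URL (a small extra step, not in the original text):

[root@controller ~]# openstack server list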

3. View the console URL of the instance (the token changes with every query)

[root@controller ~]# openstack console url show admin_instance1
+-------+--------------------------------------------------------------------------------------+
| Field | Value                                                                                |
+-------+--------------------------------------------------------------------------------------+
| type  | novnc                                                                                |
| url   | http://192.168.122.11:6080/vnc_auto.html?token=417bf4d2-fd9e-490e-bff2-3ae708105fcf |
+-------+--------------------------------------------------------------------------------------+

4. Open the console URL with Firefox on the host

[root@daniel ~]# firefox http://192.168.122.11:6080/vnc_auto.html?token=417bf4d2-fd9e-490e-bff2-3ae708105fcf

5. Delete VM instance after testing

[root@controller ~]# openstack server delete admin_instance1
[root@controller ~]# openstack server list

Create VM instance from the dashboard GUI

demo user creates VM instance

demo user login

Create key pair

Create security group

Create a self-service private network

Create instance
verification

Console verification

ssh connection from the control node (you cannot connect directly to the self-service network yet; a special method via network namespaces is used, as shown below:)

[root@controller ~]# openstack network list
+--------------------------------------+-----------+--------------------------------------+
| ID                                   | Name      | Subnets                              |
+--------------------------------------+-----------+--------------------------------------+
| 2d0bc22c-e94b-4efa-ad94-4e5c8efbd606 | demo_net1 | 644b2496-f7cc-4a72-a64f-666783ddd96d |
| 78723928-3bde-4b83-8fb0-4b04096c8f3e | provider  | 36a0388b-b692-4546-9b50-b6184d8fced7 |
+--------------------------------------+-----------+--------------------------------------+

[root@controller ~]# source demo-openstack.sh

[root@controller ~]# openstack server list
+--------------------------------------+----------+--------+---------------------------+-------+---------+
| ID                                   | Name     | Status | Networks                  | Image | Flavor  |
+--------------------------------------+----------+--------+---------------------------+-------+---------+
| 87aa0528-6f5d-498a-b074-c665a3d2032c | demo_vm1 | ACTIVE | demo_net1=192.168.198.103 |       | m1.nano |
+--------------------------------------+----------+--------+---------------------------+-------+---------+

Connect using: ip netns exec qdhcp-<network ID> ssh <username>@<IP>

(The qdhcp-<network ID> namespace can also be found with ip netns list; a netns is a network namespace, used for resource isolation.)
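
Listing the namespaces first makes the qdhcp name easy to copy (each DHCP-enabled network gets a qdhcp-<ID> namespace, and routers get qrouter-<ID>):

[root@controller ~]# ip netns list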

[root@controller ~]# ip netns exec qdhcp-2d0bc22c-e94b-4efa-ad94-4e5c8efbd606 ssh root@192.168.198.103

practice:

1. Create a VM instance on the provider network with the default security group

Result: it cannot be pinged or reached over ssh (the default security group denies inbound traffic by default)

2. Create a VM instance on the provider network with a custom security group that allows icmp and ssh

Result: the controller node can ping it, connect over ssh, and log in without a password:
ssh -i key1 cirros@IP

Question: how can a VM instance on the self-service network be accessed?

Homework after class

1. Allow external users to access VM instances on the self-service network

reference resources: https://docs.openstack.org/zh_CN/install-guide/launch-instance-selfservice.html
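
A sketch of the key steps from the referenced guide (the router name here is illustrative and the subnet name is a placeholder; verify the details against the document above):

[root@controller ~]# openstack router create router1
[root@controller ~]# openstack router add subnet router1 <self-service-subnet>
[root@controller ~]# openstack router set router1 --external-gateway provider
[root@controller ~]# openstack floating ip create provider
[root@controller ~]# openstack server add floating ip demo_vm1 192.168.100.106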

Pinging the floating IP succeeds:
[root@controller ~]# ping -c 2 192.168.100.106
PING 192.168.100.106 (192.168.100.106) 56(84) bytes of data.
64 bytes from 192.168.100.106: icmp_seq=1 ttl=63 time=3.01 ms
64 bytes from 192.168.100.106: icmp_seq=2 ttl=63 time=0.967 ms

--- 192.168.100.106 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.967/1.992/3.017/1.025 ms

Connecting with ssh -i and the key pair's private key also allows passwordless login:
[root@controller ~]# ssh -i keypair1 cirros@192.168.100.106
The authenticity of host '192.168.100.106 (192.168.100.106)' can't be established.
RSA key fingerprint is SHA256:OkdjpTnT5AkhA9m3JN27lV5FQZ02Ql62e9hFUOdSJ3U.
RSA key fingerprint is MD5:94:61:d3:3f:41:30:bb:4c:39:8c:fd:67:00:a2:71:83.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.100.106' (RSA) to the list of known hosts.
$ id
uid=1000(cirros) gid=1000(cirros) groups=1000(cirros)

2. Import a custom image

(Research the relevant documentation on your own)
