OpenStack multi-node one-click deployment (super-detailed)

Preface

OpenStack is an open-source cloud computing platform project: a distributed system that manages the three kinds of resources, compute, network, and storage. Building such a cloud platform provides cloud services in the IaaS (Infrastructure as a Service) model. The focus of this article is not theory; for a general introduction to cloud computing and OpenStack concepts, see the following three articles:

Cloud computing

Overview of OpenStack concepts and core components

OpenStack deployment node type and architecture

The purpose of this article is to walk through, in detail, the experimental process of a multi-node, one-click deployment of OpenStack; the deployment is the Rocky (R) release installed locally from a yum source. The description, practice, and summary below cover five aspects: the experimental environment and required resources, system resources, deployment node planning, the specific deployment process, and a deployment summary.

1. Experimental Environment and Required Resources

1.1 System Environment

A Windows 10 host, with the operating system (CentOS 7.5) installed on VMware Workstation 15 (download it yourself; any recent version is fine for the experiment);

1.2 Resource Pack

CentOS 7.5 image file and the R (Rocky) release OpenStack source; the resource links are as follows:

Links: https://pan.baidu.com/s/1hFENGyrRTz3lOLUAordTGg
Extraction code: mu5x

2. System Resources

This section mainly describes the author's host hardware. OpenStack is quite resource-intensive, so it is worth checking your hardware in advance to avoid unexpected failures during the experimental deployment. Of course, the figures here are just the author's laptop; the exact resources you need may take a few attempts to pin down.

The hardware resources used in the experiment are as follows:

CPU: 9th-generation i7 (an i7 is enough; what matters most is the number of cores and threads); memory: 32 GB (a comfortable baseline; it can be lower, though preferably no less than 24 GB); hard disk: 1 TB SSD (preferably more than 200 GB of available disk space; the author later allocated 300 GB for the deployment). These three are the main hardware resources.

The following describes the author's node planning for the experimental deployment. The node types are described in the linked article and are not repeated here.

3. Deployment Node Planning

Given the hardware of the experimental environment, it is impossible to deploy as many nodes as in a production environment, so the plan is three nodes: one control node and two compute nodes. Take another look at this architecture diagram:

Because resources are limited, the experimental deployment can only run the network on the control node; a production environment is absolutely never deployed this way! The experiment serves two purposes: on the one hand it deepens the theoretical understanding, and on the other it familiarizes you with the deployment process, the command-line operations, and some troubleshooting ideas.

Since production deployments have come up, here is a rough example:

Suppose you are deploying an OpenStack platform across 300 servers; you could roughly plan it like this:

30 control nodes; 30 network nodes; 100 compute nodes; the rest can be storage nodes;

(On the subject of storage: we know OpenStack has Cinder block storage and Swift object storage. Another large project, Ceph distributed storage, is commonly used in production environments, and OpenStack storage nodes are generally deployed in combination with it. In production, Ceph runs as a highly available cluster to ensure the reliability and availability of stored data. Interested readers can look into Ceph on their own.)

The following are specific resource allocations:

Control node: 2*2 processor cores in total; 8 GB of memory; disks of 300 GB and 1024 GB (the extra disk is for Ceph storage experiments); dual network cards, one in host-only mode (eth0, planned IP 192.168.100.20) and one in NAT mode (eth1, planned IP 20.0.0.20);

Compute nodes: both compute nodes have the same resource allocation; 2*2 processor cores in total; 8 GB of memory; disks of 300 GB and 1024 GB; a single network card in host-only mode (eth0, planned IP addresses 192.168.100.21 and 192.168.100.22);

The figure above also shows the components to be installed on each node, but to keep the experiment manageable the author has simplified things and selected only some of them. The detailed deployment process below will let you appreciate the appeal of OpenStack.

4. Specific Deployment Process

The author divides the one-click deployment of the R release into the processes below. In general, the chance of failures or other surprises during deployment is quite high, so some troubleshooting ideas are given at the end of the article for reference:

1. Install Operating System
2. System Environment Configuration
3. One-Click Deployment of OpenStack

The following breaks these down step by step and also demonstrates, among other things, how the networks and segment IP addresses are configured during deployment.

4.1 Install Operating System

As mentioned above, the experimental environment consists of one control node and two compute nodes, so you need to install three virtual machines. Below is the specific installation process.

1. Modify the local VMnet8 network card

Here is the sequence of operations

Here are the results of the changes:

2. Create a new virtual machine (do not power the virtual machine on yet)

The process for installing the CentOS 7 Linux operating system is detailed in the author's previous article; the figures below only illustrate the places that differ. Reference link: CentOS 7 Operating System Installation

The virtual machine settings for the control node are as follows:

The virtual machine settings for the compute node are as follows (both nodes are the same):

3. After the settings above are complete, power on and install the virtual machines (preferably one at a time; the setup process is the same for all three nodes, so any one of them serves as the illustration)

When turned on, the operation is illustrated as follows:

4. Simply choose the minimal installation, and then plan your disks as shown below

After selecting the disk, click into the partitioning dialog to allocate the disks


After clicking Done, the following dialog box appears to continue the configuration

The screenshots above that have no corresponding steps follow the same steps as the installation guide linked earlier, and from this point on the remaining actions are the same as a normal installation. In the end, once the installation completes normally you can log in and then shut the machine down (to avoid resource usage causing the installation of the other node virtual machines to fail, depending on your hardware configuration).

This is the whole of the first step. It may look like a lot, but once you are very familiar with installing a Linux operating system on VMware you will find it very simple; the most important thing is to remember the two commands used before installation.

When the installations have completed without problems, we can power on the three virtual machines one by one (preferably one at a time) and start the second step.

4.2 System Environment Configuration

Here is a list of the main steps to configure the system environment

1. Configure the host name and network cards of each node and restart the network
2. Turn off the firewall, core protection (SELinux), and NetworkManager, and disable them from starting at boot
3. Upload the openstack-rocky compressed package (the local source) and unpack it
4. Configure the local yum source files
5. Set up passwordless (no-interaction) SSH between the three nodes and verify it
6. Configure time synchronization

Start configuring below

1. Configure the host name and network cards of each node and restart the network (first set up a remote connection tool such as Xshell on the local machine, partly to simulate a production environment as closely as possible and partly to make the command demonstrations easier). Next, look at the network card settings.

Control node configuration:

[root@localhost ~]# hostnamectl set-hostname ct
[root@localhost ~]# su
[root@ct ~]# cd /etc/sysconfig/network-scripts/
#Configure local network card eth0 and nat network card eth1
[root@ct network-scripts]# cat ifcfg-eth0
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=6dc229bf-8b5b-4170-ac0d-6577b4084fc0
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.100.20
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
[root@ct network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth1
UUID=37e4a752-3820-4d15-89ab-6f3ad7037e84
DEVICE=eth1
ONBOOT=yes
IPADDR=20.0.0.20
NETMASK=255.255.255.0
GATEWAY=20.0.0.2
#Configure resolv.conf file for accessing external networks
[root@ct network-scripts]# cat /etc/resolv.conf
nameserver 8.8.8.8
#Restart the network for testing
[root@ct ~]# ping www.baidu.com
PING www.wshifen.com (104.193.88.123) 56(84) bytes of data.
64 bytes from 104.193.88.123 (104.193.88.123): icmp_seq=1 ttl=128 time=182 ms
64 bytes from 104.193.88.123 (104.193.88.123): icmp_seq=2 ttl=128 time=182 ms
^C
--- www.wshifen.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 182.853/182.863/182.874/0.427 ms
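The network restart itself is not captured in the session above. On CentOS 7 with the traditional network-scripts used here, the restart run before the ping test would typically be:

[root@ct network-scripts]# systemctl restart network    #reload the new ifcfg-eth0 / ifcfg-eth1 settings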

Compute node network card configuration (everything is the same except the IP address):

[root@localhost ~]# hostnamectl set-hostname c1
[root@localhost ~]# su
[root@c1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=d8f1837b-ce71-4465-8d6f-97668c343c6a
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.100.21
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
#The IP address configured on compute node 2 is 192.168.100.22

Configure the /etc/hosts file on all three nodes:

cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.20 ct
192.168.100.21 c1
192.168.100.22 c2
#Test if you can ping each other
[root@ct ~]# ping c1
PING c1 (192.168.100.21) 56(84) bytes of data.
64 bytes from c1 (192.168.100.21): icmp_seq=1 ttl=64 time=0.800 ms
64 bytes from c1 (192.168.100.21): icmp_seq=2 ttl=64 time=0.353 ms
^C
--- c1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.353/0.576/0.800/0.224 ms
[root@ct ~]# ping c2
PING c2 (192.168.100.22) 56(84) bytes of data.
64 bytes from c2 (192.168.100.22): icmp_seq=1 ttl=64 time=0.766 ms
64 bytes from c2 (192.168.100.22): icmp_seq=2 ttl=64 time=0.316 ms
^C
--- c2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.316/0.541/0.766/0.225 ms
[root@c1 ~]# ping c2
PING c2 (192.168.100.22) 56(84) bytes of data.
64 bytes from c2 (192.168.100.22): icmp_seq=1 ttl=64 time=1.25 ms
64 bytes from c2 (192.168.100.22): icmp_seq=2 ttl=64 time=1.05 ms
64 bytes from c2 (192.168.100.22): icmp_seq=3 ttl=64 time=0.231 ms
^C
--- c2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 0.231/0.846/1.255/0.442 ms
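If you would rather not edit the file by hand on every node, the same three entries can be appended with a small heredoc. This is only a convenience sketch, equivalent to the file contents shown above, and must be run once on each of the three nodes:

cat >> /etc/hosts <<EOF
192.168.100.20 ct
192.168.100.21 c1
192.168.100.22 c2
EOF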

2. Turn off the firewall, core protection (SELinux), and NetworkManager, and disable them from starting at boot (all three nodes need the following commands; in an experimental environment it is best to shut these services down before working with OpenStack)

systemctl stop firewalld                #stop the firewall
systemctl disable firewalld             #and keep it from starting at boot
setenforce 0                            #turn off core protection (SELinux) for the current session
vi /etc/sysconfig/selinux               #edit the file and set the following line so SELinux stays off after reboot
SELINUX=disabled
systemctl stop NetworkManager           #stop NetworkManager
systemctl disable NetworkManager        #and keep it from starting at boot

3. Upload the openstack-rocky compressed package (the local source) and unpack it

The author uses the Xftp tool to upload the package to all three nodes, then unpacks it into the /opt directory.

As shown below

[root@ct ~]# ls
anaconda-ks.cfg  openstack_rocky.tar.gz
[root@ct ~]# tar -zxf openstack_rocky.tar.gz -C /opt/
[root@ct ~]# cd /opt/
[root@ct opt]# ls
openstack_rocky
[root@ct opt]# du -h
2.4M    ./openstack_rocky/repodata
306M    ./openstack_rocky
306M    .

4. Configure the local yum source files (make sure the virtual machine's ISO image is connected: check the virtual machine settings, or check whether the CD-drive icon in the lower-right corner shows a green dot; it is usually connected by default). The steps are demonstrated on the control node; do the same on the remaining nodes.

4.1 Mount the system image

[root@ct opt]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Mar  6 05:02:52 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=0d4b2a40-756a-4c83-a520-83289e8d50ca /      xfs     defaults 0 0
UUID=bd59f052-d9bc-47e8-a0fb-55b701b5dd28 /boot  xfs     defaults 0 0
UUID=8ad9f9e7-92db-4aa2-a93d-1fe93b63bd89 swap   swap    defaults 0 0
/dev/sr0 /mnt iso9660 defaults 0 0
[root@ct opt]# mount -a
mount: /dev/sr0 is write-protected, mounting read-only
[root@ct opt]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda3      xfs       291G  1.6G  290G   1% /
devtmpfs       devtmpfs  3.9G     0  3.9G   0% /dev
tmpfs          tmpfs     3.9G     0  3.9G   0% /dev/shm
tmpfs          tmpfs     3.9G   12M  3.8G   1% /run
tmpfs          tmpfs     3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1      xfs      1014M  134M  881M  14% /boot
tmpfs          tmpfs     781M     0  781M   0% /run/user/0
/dev/sr0       iso9660   4.2G  4.2G     0 100% /mnt

4.2 Back up the existing yum sources and create a new source file

[root@ct opt]# cd /etc/yum.repos.d/
[root@ct yum.repos.d]# ls
CentOS-Base.repo       CentOS-Debuginfo.repo  CentOS-Media.repo    CentOS-Vault.repo
CentOS-CR.repo         CentOS-fasttrack.repo  CentOS-Sources.repo
[root@ct yum.repos.d]# mkdir backup
[root@ct yum.repos.d]# mv C* backup/
[root@ct yum.repos.d]# vi local.repo
[root@ct yum.repos.d]# cat local.repo
[openstack]
name=openstack
baseurl=file:///opt/openstack_rocky    #This path is the path to the source of the unpacked package
gpgcheck=0
enabled=1
[centos]
name=centos
baseurl=file:///mnt
gpgcheck=0
enabled=1

4.3 Modify the yum.conf file, setting keepcache to 1 so that downloaded packages are kept in the cache

[root@ct yum.repos.d]# head -10 /etc/yum.conf
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=1                  #You only need to modify this parameter
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5
[root@ct yum.repos.d]# yum clean all        #Clear all cached packages
Loaded plugins: fastestmirror
Cleaning repos: centos openstack
Cleaning up everything
Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[root@ct yum.repos.d]# yum makecache        #Build the local package cache
Loaded plugins: fastestmirror
Determining fastest mirrors
centos                          | 3.6 kB  00:00:00
openstack                       | 2.9 kB  00:00:00
(1/7): centos/group_gz          | 166 kB  00:00:00
(2/7): centos/filelists_db      | 3.1 MB  00:00:01
(3/7): centos/primary_db        | 3.1 MB  00:00:01
(4/7): centos/other_db          | 1.3 MB  00:00:00
(5/7): openstack/primary_db     | 505 kB  00:00:00
(6/7): openstack/filelists_db   | 634 kB  00:00:00
(7/7): openstack/other_db       | 270 kB  00:00:00
Metadata Cache Created
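As an optional sanity check that is not part of the original capture, you can confirm that both local repositories are visible before moving on:

[root@ct yum.repos.d]# yum repolist    #the local "centos" and "openstack" repos should both appear with non-zero package counts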

5. Set up passwordless (no-interaction) SSH between the three nodes and verify it

ssh-keygen -t rsa      #press Enter at every prompt; for each ssh-copy-id below, type yes and then the root password of the target virtual machine
ssh-copy-id ct
ssh-copy-id c1
ssh-copy-id c2

Now, to protect the settings made so far, take a snapshot of each virtual machine and then reboot to validate the configuration (the following checks are needed on every node; the control node is used as the example).

[root@ct ~]# ls
anaconda-ks.cfg  openstack_rocky.tar.gz
[root@ct ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@ct ~]# systemctl status NetworkManager
● NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:NetworkManager(8)
[root@ct ~]# setenforce ?
setenforce: SELinux is disabled
#Verify again that the passwordless (no-interaction) SSH login works
[root@ct ~]# ssh c1
Last login: Sun Mar  8 13:11:32 2020 from c2
[root@c1 ~]# exit
logout
Connection to c1 closed.
[root@ct ~]# ssh c2
Last login: Sun Mar  8 13:14:18 2020 from gateway
[root@c2 ~]#

6. Configure time synchronization

This step is critical, especially in a production environment: if the time on the servers is not synchronized, many services and business operations cannot run correctly, and serious incidents can result.

This experiment uses the Alibaba Cloud time server as an example: the control node synchronizes with the Alibaba Cloud server, while the two compute nodes synchronize their time with the control node through its ntpd service.

Control node configuration:

[root@ct ~]# yum -y install ntpdate
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package ntpdate.x86_64 0:4.2.6p5-28.el7.centos will be installed
--> Finished Dependency Resolution
//...//Omit some content
Installed:
  ntpdate.x86_64 0:4.2.6p5-28.el7.centos
Complete!
#Synchronize with the Alibaba Cloud time server
[root@ct ~]# ntpdate ntp.aliyun.com
 8 Mar 05:20:32 ntpdate[9596]: adjust time server 203.107.6.88 offset 0.017557 sec
[root@ct ~]# date
Sun Mar  8 05:20:40 EDT 2020
[root@ct ~]# yum -y install ntp
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package ntp.x86_64 0:4.2.6p5-28.el7.centos will be installed
--> Processing Dependency: libopts.so.25()(64bit) for package: ntp-4.2.6p5-28.el7.centos.x86_64
--> Running transaction check
---> Package autogen-libopts.x86_64 0:5.18-5.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==========================================================================================================================
 Package                    Arch            Version                        Repository        Size
==========================================================================================================================
Installing:
 ntp                        x86_64          4.2.6p5-28.el7.centos          centos           549 k
Installing for dependencies:
 autogen-libopts            x86_64          5.18-5.el7                     centos            66 k

Transaction Summary
==========================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 615 k
Installed size: 1.5 M
Downloading packages:
--------------------------------------------------------------------------------------------------------------------------
Total                                                        121 MB/s | 615 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : autogen-libopts-5.18-5.el7.x86_64          1/2
  Installing : ntp-4.2.6p5-28.el7.centos.x86_64           2/2
  Verifying  : autogen-libopts-5.18-5.el7.x86_64          1/2
  Verifying  : ntp-4.2.6p5-28.el7.centos.x86_64           2/2

Installed:
  ntp.x86_64 0:4.2.6p5-28.el7.centos

Dependency Installed:
  autogen-libopts.x86_64 0:5.18-5.el7

Complete!

Modify the ntp main configuration file (/etc/ntp.conf)
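The ntp.conf changes are shown in the original only as a screenshot, so here is a minimal sketch of the edits typically made on the control node, assuming the 192.168.100.0/24 host-only segment used in this article; adjust it to your own environment:

#/etc/ntp.conf on the control node (sketch)
#comment out the default "server 0.centos.pool.ntp.org iburst" lines, then add:
server ntp.aliyun.com iburst                                  #upstream clock: the Alibaba Cloud time server
restrict 192.168.100.0 mask 255.255.255.0 nomodify notrap     #allow the compute nodes on the host-only segment to query this node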

After saving the file, disable the chronyd.service and restart ntpd

[root@ct ~]# systemctl disable chronyd.service
Removed symlink /etc/systemd/system/multi-user.target.wants/chronyd.service.
[root@ct ~]# systemctl restart ntpd
[root@ct ~]# systemctl enable ntpd
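As an extra check that is not in the original screenshots, you can confirm that ntpd has picked up the upstream server:

[root@ct ~]# ntpq -p    #ntp.aliyun.com should appear in the peer list once ntpd has been running for a short while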

Configuration on the two compute nodes

[root@c1 ~]# yum -y install ntpdate
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package ntpdate.x86_64 0:4.2.6p5-28.el7.centos will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==========================================================================================================================
 Package                Arch              Version                        Repository        Size
==========================================================================================================================
Installing:
 ntpdate                x86_64            4.2.6p5-28.el7.centos          centos            86 k

Transaction Summary
==========================================================================================================================
Install  1 Package

Total download size: 86 k
Installed size: 121 k
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : ntpdate-4.2.6p5-28.el7.centos.x86_64          1/1
  Verifying  : ntpdate-4.2.6p5-28.el7.centos.x86_64          1/1

Installed:
  ntpdate.x86_64 0:4.2.6p5-28.el7.centos

Complete!
[root@c1 ~]# ntpdate ct
 8 Mar 05:36:26 ntpdate[9562]: step time server 192.168.100.20 offset -28798.160949 sec
[root@c1 ~]# crontab -e
#Write a periodic scheduled task, then save and exit, for example:
#*/30 * * * * /usr/sbin/ntpdate ct >> /var/log/ntpdate.log
no crontab for root - using an empty one
crontab: installing new crontab

4.3 One-Click Deployment of OpenStack

Operate on the control node

#Install the openstack-packstack tool to generate an openstack answer file (txt text format)
[root@ct ~]# yum install -y openstack-packstack
[root@ct ~]# packstack --gen-answer-file=openstack.txt
[root@ct ~]# ls
anaconda-ks.cfg  openstack_rocky.tar.gz  openstack.txt
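Before editing the generated answer file, it can be worth keeping a copy of it; this is an extra precaution that is not part of the original steps, but it lets you diff against the pristine file if an edit goes wrong:

[root@ct ~]# cp openstack.txt openstack.txt.bak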

The key question is what to modify. The parameters are not explained in detail here; just follow this article, and the next article will explain the configuration parameters of the answer file in detail.

The following lists the lines of the answer file that need to be changed; modify them carefully (a sketch after the sed commands below shows how to locate the same parameters by name):

Line 41: y -> n
Line 50: y -> n
Line 97: 192.168.100.21,192.168.100.22 (the IP addresses of the two compute nodes)
Line 557: 20G
Line 817: physnet1
Line 862: physnet1:br-ex
Line 873: br-ex:eth1
Line 1185: y -> n
#The passwords and the 20.0.0.x network segment also need to be changed; modify them globally with sed regular expressions
[root@ct ~]# sed -i -r 's/(.+_PW)=.+/\1=sf144069/' openstack.txt
[root@ct ~]# sed -i -r 's/20.0.0.20/192.168.100.20/g' openstack.txt
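Because the line numbers shift between packstack versions, it can help to locate the parameters by name instead. The CONFIG_* keys below are the usual candidates for the changes listed above; this is only a hedged sketch, so verify each name against your own openstack.txt before editing:

#inspect the likely parameters by name rather than by line number
[root@ct ~]# grep -nE 'CONFIG_COMPUTE_HOSTS|CONFIG_CINDER_VOLUMES_SIZE|CONFIG_NEUTRON_ML2_FLAT_NETWORKS|CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS|CONFIG_NEUTRON_OVS_BRIDGE_IFACES|CONFIG_PROVISION_DEMO' openstack.txt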

Run the one-click deployment installation command

[root@ct ~]# packstack --answer-file=openstack.txt
Welcome to the Packstack setup utility

The installation log file is available at: /var/tmp/packstack/20200308-055746-HD3Zl3/openstack-setup.log

Installing:
Clean Up                                             [ DONE ]
Discovering ip protocol version                      [ DONE ]
Setting up ssh keys                                  [ DONE ]
Preparing servers                                    [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Preparing pre-install entries                        [ DONE ]
Setting up CACERT                                    [ DONE ]
Preparing AMQP entries                               [ DONE ]
Preparing MariaDB entries                            [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty [ DONE ]
Preparing Keystone entries                           [ DONE ]
...//Omit some content

In a separate terminal on each node (for example, another Xshell session connected to the control node), use the following command to watch the log output:

tail -f /var/log/messages

When the output looks like the picture below, there is no problem so far; the next step is simply to wait patiently.

The following illustration shows that the deployment was successful

We can log in to the dashboard in a browser (Google Chrome) to verify the deployment; for that, refer to the end of the following article:

Getting Started with OpenStack - Theories (2): OpenStack's Node Type and Architecture (Dashboard interface example with login)
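Besides the dashboard, the result can also be checked from the command line on the control node. Packstack normally writes an admin credentials file named keystonerc_admin into /root; assuming that file is present, a quick verification sketch looks like this:

[root@ct ~]# source keystonerc_admin               #load the admin credentials generated by packstack
[root@ct ~]# openstack service list                #Keystone, Glance, Nova, Neutron and friends should all be registered
[root@ct ~]# openstack compute service list        #nova-compute should show as enabled/up on both c1 and c2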

5. Deployment Summary and Troubleshooting Ideas

The author also ran into some unexpected errors and problems during deployment, but basically all of them were solved; most were minor issues such as mistyped commands or mistakes when changing configuration content. Still, here are some suggestions and troubleshooting ideas:

First, for larger experimental projects like this, clarify the overall idea and the deployment order before starting;

Second, because this is an experiment, get into the habit of backing up as you go (we can take snapshots of the virtual machines, which is itself a form of storage). If a later problem cannot be resolved, this lets us roll back to the point where the deployment was last known to be good, which saves a lot of time;

Then there is troubleshooting itself: first understand what the ERROR means; if you can see where the problem is, fix it directly. If there is no obvious fix, check the environment: whether the services that should be running are running, whether the services that should be stopped are stopped, and so on. If the environment is fine, check the configuration files for mistakes, for example whether the relevant parameters were modified correctly. If the problem still cannot be solved, look it up yourself: search Baidu, read the official documentation, and see whether other engineers have run into a similar problem and how they solved it.
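As a concrete starting point, here is a generic sketch of commands that usually narrow a failure down quickly; it is not tied to any specific error and should be adapted to whichever component misbehaves:

systemctl list-units --type=service --state=failed    #any OpenStack-related service that failed to start
grep -ri error /var/log/nova/ /var/log/neutron/       #search the component logs for ERROR lines
less /var/tmp/packstack/*/openstack-setup.log         #re-read the packstack log whose path is printed during installation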

Specific troubleshooting still takes time and experience. The main purpose of this article is to demonstrate the whole process of deploying the multi-node R release of the OpenStack platform locally, to make it easy for beginners to experiment. The author's theory series is still being updated, and I hope readers will keep following along. Thank you!

8 March 2020, 12:44
