corosync v1 + pacemaker high availability cluster deployment: resource configuration (VIP + httpd + NFS)

Experimental purpose: to deploy a highly available httpd service (plus NFS) using corosync v1 + pacemaker.

This experiment uses CentOS 6.8: one file system (NFS) server, node 1 (NA1), node 2 (NA2), and VIP 192.168.94.222.


Contents (the layout is a bit messy, sorry):

1. corosync v1 + pacemaker basic installation
2. Installation of pacemaker management tool crmsh
3. Resource management configuration
4. Basic introduction to creating resources
5. Create a VIP resource
6. Create an httpd resource
7. Resource constraints
8. Simulated failure and cluster failover
9. httpd service high availability test
10. Create nfs file resource
11. High availability cluster test

1. corosync v1 + pacemaker basic installation

Reference:

corosync v1 + pacemaker high availability cluster deployment (I) basic installation

2. Installation of pacemaker management tool crmsh

Since pacemaker 1.1.8, crmsh has been developed as an independent project and is no longer bundled with pacemaker,
which means that after installing pacemaker there is no crm shell available by default.

There are several other management tools; in this experiment we use the crmsh tool.

crmsh-3.0.0-6.1.noarch.rpm is used here.

crmsh installation (on both NA1 and NA2):

On the first installation attempt, rpm reports the missing dependencies below; find and install the corresponding rpm packages first.

[root@na1 ~]# rpm -ivh crmsh-3.0.0-6.1.noarch.rpm
warning: crmsh-3.0.0-6.1.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 17280ddf: NOKEY
error: Failed dependencies:
    crmsh-scripts >= 3.0.0-6.1 is needed by crmsh-3.0.0-6.1.noarch
    python-dateutil is needed by crmsh-3.0.0-6.1.noarch
    python-parallax is needed by crmsh-3.0.0-6.1.noarch
    redhat-rpm-config is needed by crmsh-3.0.0-6.1.noarch
[root@na1 ~]#

These two are installed from local rpm packages:

rpm -ivh crmsh-scripts-3.0.0-6.1.noarch.rpm
rpm -ivh python-parallax-1.0.1-28.1.noarch.rpm

These two can be installed with yum:

yum install python-dateutil* -y
yum install redhat-rpm-config* -y

After the dependencies are installed, install crmsh itself:

rpm -ivh crmsh-3.0.0-6.1.noarch.rpm

3. Resource management configuration

crm automatically synchronizes the resource configuration across the cluster, so resources only need to be configured on one node; no manual copying or synchronization is required.
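As an optional sanity check (not part of the original write-up), after committing a change on na1 you can view the configuration on na2 and confirm that it has been synchronized; this sketch assumes the node names used in this experiment:

[root@na2 ~]# crm configure show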

NA1

Configuration mode description

crm has two configuration modes: batch mode and interactive mode.

Batch mode (enter commands directly at the shell):

[root@na1 ~]# crm ls
cibstatus        help             site             cd               cluster          quit
end              script           verify           exit             ra               maintenance
bye              ?                ls               node             configure        back
report           cib              resource         up               status           corosync
options          history

Interactive mode (enter the crm(live)# shell and run commands inside it; basic operations such as ls and cd work there):

[root@na1 ~]# crm
crm(live)# ls
cibstatus        help             site             cd               cluster          quit
end              script           verify           exit             ra               maintenance
bye              ?                ls               node             configure        back
report           cib              resource         up               status           corosync
options          history
crm(live)#

Initial configuration check

After configuring resources in crm interactive mode, check the configuration for errors before committing it.

Let's run verify before configuring anything:

[root@na1 ~]# crm
crm(live)# configure
crm(live)configure# verify
ERROR: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
crm(live)configure#

This error appears because we do not yet have a STONITH device, so we temporarily disable STONITH:

property stonith-enabled=false

Commit the configuration:

crm(live)configure# commit
crm(live)configure#

View current configuration

crm(live)configure# show
node na1.server.com
node na2.server.com
property cib-bootstrap-options: \
    have-watchdog=false \
    dc-version=1.1.18-3.el6-bfe4e80420 \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes=2 \
    stonith-enabled=false
crm(live)configure#

View resource status

crm(live)# status
Stack: classic openais (with plugin)
Current DC: na1.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 21:38:15 2020
Last change: Sun May 24 21:37:10 2020 by root via cibadmin on na1.server.com

2 nodes configured (2 expected votes)
0 resources configured

Online: [ na1.server.com na2.server.com ]

No resources

crm(live)#

4. Basic introduction to creating resources

The basic resource type is the primitive; you can view its help with the help command.

Syntax:

primitive <rsc> {[<class>:[<provider>:]]<type>|@<template>}
  [description=<description>]
  [[params] attr_list]
  [meta attr_list]
  [utilization attr_list]
  [operations id_spec]
  [op op_type [<attribute>=<value>...] ...]

attr_list :: [$id=<id>] [<score>:] [rule...] <attr>=<val> [<attr>=<val>...]] | $id-ref=<id>
id_spec :: $id=<id> | $id-ref=<id>
op_type :: start | stop | monitor

Brief introduction

primitive <resource name> <resource agent class>:<resource agent provider>:<resource agent name>
Resource agent classes: lsb, ocf, stonith, service
Resource agent providers: for example heartbeat, pacemaker
Resource agent names: for example IPaddr2, httpd, mysql
params -- instance attributes, the actual parameters passed to the resource agent
meta -- meta attributes, options added to a resource that tell the CRM how to handle it
The remaining options are rarely needed for simple use.
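Purely as an illustration of this syntax (not one of the resources configured in this experiment), a primitive for a hypothetical virtual IP could be written like this; the resource name webip, the address, and the monitor interval are made up:

crm(live)configure# primitive webip ocf:heartbeat:IPaddr2 params ip=192.168.1.100 op monitor interval=30s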

To create a resource, first check which class it belongs to:

# View the currently supported classes
crm(live)ra# classes
lsb
ocf / .isolation heartbeat pacemaker
service
stonith

# See which providers supply a given agent
crm(live)ra# providers IPaddr
heartbeat

# List all resource agents under a class
crm(live)ra# list service

# Show the information (help) for the IPaddr agent, i.e. how to create this resource
crm(live)ra# info ocf:heartbeat:IPaddr

5. Create a VIP resource

Check the help for IPaddr. Parameters marked with * are required; the rest can be set according to specific requirements.

info ocf:heartbeat:IPaddr

Parameters (*: required, []: default):

ip* (string): IPv4 or IPv6 address
    The IPv4 (dotted quad notation) or IPv6 address (colon hexadecimal notation)
    example IPv4 "192.168.1.1".
    example IPv6 "2001:db8:DC28:0:0:FC57:D4C8:1FFF".

nic (string): Network interface
    The base network interface on which the IP address will be brought online.
    If left empty, the script will try and determine this from the routing table.

Create the VIP resource under configure:

crm(live)configure# primitive VIP ocf:heartbeat:IPaddr params ip=192.168.94.222
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure#

View resource status (VIP started on na1)

crm(live)# status
Stack: classic openais (with plugin)
Current DC: na1.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 22:12:05 2020
Last change: Sun May 24 22:11:15 2020 by root via cibadmin on na1.server.com

2 nodes configured (2 expected votes)
1 resource configured

Online: [ na1.server.com na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):    Started na1.server.com

crm(live)#

6. Create an httpd resource

Configure the httpd service on both nodes (do not start it, and disable it at boot):

NA1
[root@na1 ~]# chkconfig httpd off
[root@na1 ~]# service httpd stop    // Stop httpd: [OK]
[root@na1 ~]# echo "na1.server.com" >> /var/www/html/index.html
[root@na1 ~]#

NA2
[root@na2 ~]# chkconfig httpd off
[root@na2 ~]# service httpd stop    // Stop httpd: [OK]
[root@na2 ~]# echo "na2.server.com" >> /var/www/html/index.html
[root@na2 ~]#

Create httpd resource

crm(live)configure# primitive httpd service:httpd httpd
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure#

View resource status

crm(live)# status
Stack: classic openais (with plugin)
Current DC: na1.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 22:22:47 2020
Last change: Sun May 24 22:17:35 2020 by root via cibadmin on na1.server.com

2 nodes configured (2 expected votes)
2 resources configured

Online: [ na1.server.com na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):    Started na1.server.com
 httpd  (service:httpd):    Started na2.server.com

crm(live)#

We find that VIP is on na1 while httpd is on na2, which is not what we want: the cluster spreads resources across nodes by default, so we need resource constraints.

7. Resource constraints

Resource constraints specify which cluster nodes resources run on, the order in which resources are loaded, and which other resources a particular resource depends on.
pacemaker provides three kinds of resource constraints:
1) Location: defines which nodes a resource can, cannot, or prefers to run on;
2) Colocation: defines whether two resources must (or must not) run on the same node;
3) Order: defines the order in which cluster resources are started on a node.

In short: 1) which node a resource prefers to run on; 2) which resources must run together or must be kept apart; 3) which resource starts first and which starts second. An illustrative example of all three follows; the real constraints for this cluster are added in the rest of this section.
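To make the three types concrete, here is a purely illustrative set of constraint commands for two hypothetical resources A and B on a hypothetical node1 (the names and scores are made up):

# Location: prefer running A on node1 with a score of 100
crm(live)configure# location A_prefer_node1 A 100: node1
# Colocation: A and B must run on the same node
crm(live)configure# colocation A_with_B inf: A B
# Order: start A first, then B
crm(live)configure# order A_before_B Mandatory: A B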

Colocation constraint

Here we use a colocation constraint to bind the VIP and httpd resources together so that they must run on the same node.

# For help, refer to the examples
crm(live)configure# help collocation
Example:
        colocation never_put_apache_with_dummy -inf: apache dummy
        colocation c1 inf: A ( B C )

-inf: means the resources must never run together; inf: means they must run together. never_put_apache_with_dummy is just the constraint's name; pick something easy to remember.

Add the constraint:

crm(live)configure# collocation vip_with_httpd inf: VIP httpd
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure#

Check the resource status (everything is now running on na2; next we add a constraint so that it prefers to stay on na1):

crm(live)# status
Stack: classic openais (with plugin)
Current DC: na1.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 22:38:55 2020
Last change: Sun May 24 22:37:44 2020 by root via cibadmin on na1.server.com

2 nodes configured (2 expected votes)
2 resources configured

Online: [ na1.server.com na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):    Started na2.server.com
 httpd  (service:httpd):    Started na2.server.com

crm(live)#

Location constraint

Constrain the resources to prefer na1. Location (resource location):

Check the help (note the number: it is a score, and the larger it is, the higher the priority; the cluster places the resource on the node with the highest score. The default score is 0.)

Examples:
        location conn_1 internal_www 100: node1

        location conn_1 internal_www \
          rule 50: #uname eq node1 \
          rule pingd: defined pingd

        location conn_2 dummy_float \
          rule -inf: not_defined pingd or pingd number:lte 0

        # never probe for rsc1 on node1
        location no-probe rsc1 resource-discovery=never -inf: node1

Add the constraint (it is enough to constrain VIP, since VIP and httpd were already colocated above, so wherever VIP goes, httpd follows):

crm(live)configure# location vip_httpd_prefer_na1 VIP 100: na1.server.com
crm(live)configure# verify
crm(live)configure# commit

Check the resource status again: all resources are now running on the na1 node.

crm(live)# status
Stack: classic openais (with plugin)
Current DC: na1.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 22:48:22 2020
Last change: Sun May 24 22:48:10 2020 by root via cibadmin on na1.server.com

2 nodes configured (2 expected votes)
2 resources configured

Online: [ na1.server.com na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):    Started na1.server.com
 httpd  (service:httpd):    Started na1.server.com

crm(live)#

8. Simulated failure and cluster failover

Active/standby switchover

crm node contains node-level operations: standby puts the node into standby (turns it into a backup), and online brings it back online.

crm(live)node# --help
bye              exit             maintenance      show             utilization
-h               cd               fence            online           standby
?                clearstate       help             quit             status
attribute        delete           list             ready            status-attr
back             end              ls               server           up
crm(live)node#

Take na1 offline (standby) and bring it back online:

# na1 offline
crm(live)# node standby
crm(live)# status
Stack: classic openais (with plugin)
Current DC: na1.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 22:56:18 2020
Last change: Sun May 24 22:56:15 2020 by root via crm_attribute on na1.server.com

2 nodes configured (2 expected votes)
2 resources configured

Node na1.server.com: standby
Online: [ na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):    Started na2.server.com
 httpd  (service:httpd):    Started na2.server.com

# na1 online
crm(live)# node online
crm(live)# status
Stack: classic openais (with plugin)
Current DC: na1.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 22:56:29 2020
Last change: Sun May 24 22:56:27 2020 by root via crm_attribute on na1.server.com

2 nodes configured (2 expected votes)
2 resources configured

Online: [ na1.server.com na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):    Started na1.server.com
 httpd  (service:httpd):    Started na1.server.com

crm(live)#

The resources automatically switch back to na1 because of the location constraint score of 100 defined earlier.

Physical failure

We stop the corosync service on na1 and check the status on na2.
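The exact stop command is not shown in the original capture, but it is presumably the counterpart of the service corosync start used later:

[root@na1 ~]# service corosync stop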

[root@na2 ~]# crm status
Stack: classic openais (with plugin)
Current DC: na2.server.com (version 1.1.18-3.el6-bfe4e80420) - partition WITHOUT quorum
Last updated: Sun May 24 23:20:29 2020
Last change: Sun May 24 22:56:27 2020 by root via crm_attribute on na1.server.com

2 nodes configured (2 expected votes)
2 resources configured

Online: [ na2.server.com ]
OFFLINE: [ na1.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):    Stopped
 httpd  (service:httpd):    Stopped

[root@na2 ~]#

We find that na2 is online, but all the resources have been stopped.

Introduction to Quorum

The cluster can only work normally when the number of votes from surviving nodes is greater than or equal to the quorum.

When the total number of votes is odd, Quorum = (votes + 1) / 2.

When the total number of votes is even, Quorum = (votes / 2) + 1.
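A quick check of the two formulas: with 3 nodes (odd), Quorum = (3 + 1) / 2 = 2, so one node may fail and the cluster keeps working; with 4 nodes (even), Quorum = (4 / 2) + 1 = 3, so again only one node may fail.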

We have two nodes, so the quorum is 2 and at least two nodes must survive. With one node shut down, the cluster cannot work normally.

This is not very friendly for a two-node cluster. We can run:

crm configure property no-quorum-policy=ignore

to make the cluster ignore the loss of quorum and keep running resources.

View the resource status (the resources now start normally):

[root@na2 ~]# crm configure property no-quorum-policy=ignore
[root@na2 ~]# crm status
Stack: classic openais (with plugin)
Current DC: na2.server.com (version 1.1.18-3.el6-bfe4e80420) - partition WITHOUT quorum
Last updated: Sun May 24 23:27:02 2020
Last change: Sun May 24 23:26:58 2020 by root via cibadmin on na2.server.com

2 nodes configured (2 expected votes)
2 resources configured

Online: [ na2.server.com ]
OFFLINE: [ na1.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):    Started na2.server.com
 httpd  (service:httpd):    Started na2.server.com

[root@na2 ~]#

9. httpd service high availability test

Now na2 is online and the resources are running on it, while na1 is down. We visit the VIP.

Now we bring the na1 service back up:

[root@na1 ~]# service corosync start
Starting Corosync Cluster Engine (corosync):               [ OK ]
[root@na1 ~]# crm status
Stack: classic openais (with plugin)
Current DC: na2.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 23:32:36 2020
Last change: Sun May 24 23:26:58 2020 by root via cibadmin on na2.server.com

2 nodes configured (2 expected votes)
2 resources configured

Online: [ na1.server.com na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):    Started na1.server.com
 httpd  (service:httpd):    Started na1.server.com

[root@na1 ~]#

Visit the VIP:
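The original post shows this step as a browser screenshot; a command-line check works just as well. Since the resources are back on na1 at this point, the page should come from na1's local docroot (a sketch, run from any host that can reach the VIP):

$ curl http://192.168.94.222/
na1.server.com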

The httpd service is highly available.

10. Create nfs file resource

Service used for the NFS test

The NFS server configuration itself is omitted.

Disable SELinux on nodes NA1 and NA2.
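How SELinux was disabled is not shown; on CentOS 6 one common way (an assumption about how it was done here) is:

# Switch to permissive mode immediately
setenforce 0
# Make it persistent across reboots
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config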

NFS server: 192.168.94.131

[root@filesystem ~]# exportfs
/file/web           192.168.0.0/255.255.0.0
[root@filesystem ~]#
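Since the NFS server configuration is omitted, the exportfs output above only implies an /etc/exports entry on the file server roughly like the following; the export options are an assumption:

# /etc/exports on the file server (options assumed)
/file/web 192.168.0.0/255.255.0.0(rw,sync)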

Test mount

[root@na1 ~]# mkdir /mnt/web
[root@na1 ~]# mount -t nfs 192.168.94.131:/file/web /mnt/web
[root@na1 ~]# cat /mnt/web/index.html
<h1>this is nfs server</h1>
[root@na1 ~]# umount /mnt/web/
[root@na1 ~]#

Create nfs file resource

View the help (the Filesystem agent has three required parameters):

crm(live)ra# info ocf:heartbeat:Filesystem
Parameters (*: required, []: default):

device* (string): block device
    The name of block device for the filesystem, or -U, -L options for mount, or NFS mount specification.

directory* (string): mount point
    The mount point for the filesystem.

fstype* (string): filesystem type
    The type of filesystem to be mounted.

Create nfs resource

crm(live)configure# primitive nfs ocf:heartbeat:Filesystem params device=192.168.94.131:/file/web directory=/var/www/html fstype=nfs
crm(live)configure# verify
crm(live)configure# commit

Check the status (nfs started on na2, but it should run with httpd, so we add constraints):

crm(live)# status
Stack: classic openais (with plugin)
Current DC: na2.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Sun May 24 23:48:09 2020
Last change: Sun May 24 23:44:24 2020 by root via cibadmin on na1.server.com

2 nodes configured (2 expected votes)
3 resources configured

Online: [ na1.server.com na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):    Started na1.server.com
 httpd  (service:httpd):    Started na1.server.com
 nfs    (ocf::heartbeat:Filesystem):    Started na2.server.com

crm(live)#

Colocation and order constraints

Of the three constraint types described above, one is the order constraint.

For httpd and nfs, nfs should be started first and httpd afterwards.

We add colocation and order constraints for httpd and nfs:

crm(live)configure# colocation httpd_with_nfs inf: httpd nfs
# Start nfs first, and then start httpd
crm(live)configure# order nfs_first Mandatory: nfs httpd
# Start httpd first, and then start VIP
crm(live)configure# order httpd_first Mandatory: httpd VIP
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure#

11. High availability cluster test

View status

crm(live)# status
Stack: classic openais (with plugin)
Current DC: na2.server.com (version 1.1.18-3.el6-bfe4e80420) - partition with quorum
Last updated: Mon May 25 00:00:23 2020
Last change: Sun May 24 23:58:47 2020 by root via cibadmin on na1.server.com

2 nodes configured (2 expected votes)
3 resources configured

Online: [ na1.server.com na2.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):    Started na1.server.com
 httpd  (service:httpd):    Started na1.server.com
 nfs    (ocf::heartbeat:Filesystem):    Started na1.server.com

crm(live)#

Visit the VIP:
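As before, this can be checked from the command line instead of a browser. With the nfs resource mounted on /var/www/html, the page served through the VIP should now be the one from the NFS share (a sketch):

$ curl http://192.168.94.222/
<h1>this is nfs server</h1>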

Shut down the na1 service and view the status on na2:

[root@na2 ~]# crm status
Stack: classic openais (with plugin)
Current DC: na2.server.com (version 1.1.18-3.el6-bfe4e80420) - partition WITHOUT quorum
Last updated: Mon May 25 00:01:42 2020
Last change: Sun May 24 23:58:47 2020 by root via cibadmin on na1.server.com

2 nodes configured (2 expected votes)
3 resources configured

Online: [ na2.server.com ]
OFFLINE: [ na1.server.com ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr):    Started na2.server.com
 httpd  (service:httpd):    Started na2.server.com
 nfs    (ocf::heartbeat:Filesystem):    Started na2.server.com

[root@na2 ~]#

Visit the VIP again; the page is still served from na2.

View the configuration

[root@na2 ~]# crm configure show
node na1.server.com \
    attributes standby=off
node na2.server.com
primitive VIP IPaddr \
    params ip=192.168.94.222
primitive httpd service:httpd \
    params httpd
primitive nfs Filesystem \
    params device="192.168.94.131:/file/web" directory="/var/www/html" fstype=nfs
order httpd_first Mandatory: httpd VIP
colocation httpd_with_nfs inf: httpd nfs
order nfs_first Mandatory: nfs httpd
location vip_httpd_prefer_na1 VIP 100: na1.server.com
colocation vip_with_httpd inf: VIP httpd
property cib-bootstrap-options: \
    have-watchdog=false \
    dc-version=1.1.18-3.el6-bfe4e80420 \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes=2 \
    stonith-enabled=false \
    no-quorum-policy=ignore
[root@na2 ~]#

View xml configuration

crm configure show xml

Finally, if the configuration goes wrong partway through, you can edit it directly:

[root@na2 ~]# crm
crm(live)# configure
# You can edit the configuration file directly
crm(live)configure# edit

node na1.server.com \
    attributes standby=off
node na2.server.com
primitive VIP IPaddr \
    params ip=192.168.94.222
primitive httpd service:httpd \


Between studying and keeping fit, one of them is always underway.
