Operations Note 28 (deploying IP, HTTP, storage, and other resources on the cluster)

Summary:

We have already set up the cluster and deployed fence power management in an earlier note, so now we only need to bring a service online on top of it. We will take the http service as an example. The resources this service needs are an IP address, storage, and the service software itself. After adding these to the cluster, we will test whether the service is highly available. Today's deployment is done mainly through the web interface (Conga/luci).

1. Adding a failover domain for the service

Click Add to create a failover domain.

First, give the domain a name. The next three options mean: run the service according to the nodes' priorities (Prioritized), only fail the service over among the selected nodes (Restricted), and do not automatically switch the service back to the original node after a failover (No Failback). At the bottom, tick the nodes that should belong to the domain.

A message like this indicates that the domain was added successfully.
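For reference, the three options end up in /etc/cluster/cluster.conf roughly as sketched below; the domain name webfail and the priorities are assumptions made here for illustration, while the node names are the ones used throughout this note:

<failoverdomains>
    <failoverdomain name="webfail" ordered="1" restricted="1" nofailback="1">
        <failoverdomainnode name="server1.mo.com" priority="1"/>
        <failoverdomainnode name="server2.mo.com" priority="2"/>
    </failoverdomain>
</failoverdomains>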

2. Adding resources

Our http service needs three kinds of resources: an IP address, storage, and the service itself. For now we will skip the storage resource and keep the content directly on the nodes; we will add shared storage later.

After clicking Add Resource, choose IP Address as the resource type.

The IP resource settings are shown above.


To add the Apache resource, we configure Apache as a Script resource.

With that, both resources have been added.
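In cluster.conf the two resources roughly correspond to entries like the sketch below; the virtual IP 172.25.9.101 is the address that shows up in the ip output at the end of this note, while the httpd init-script path is an assumption (simply the standard location on RHEL 6):

<resources>
    <ip address="172.25.9.101" monitor_link="on"/>
    <script name="httpd" file="/etc/init.d/httpd"/>
</resources>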


3. Combining resources into a service group

The next step is to combine the two resources into a service group. Before doing so, install the httpd service on both server1 and server2.

Give the service group a name, then check the options to start the service automatically and to run it exclusively. Select the failover domain we created earlier and set the recovery policy (relocate).

Add the IP and Apache resources we just configured.

Click the Start button at the top to start the service group.
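Put together, the service group roughly corresponds to a cluster.conf section like this sketch (the failover domain name webfail is the assumed name from above; the service name apache matches the clustat output that follows):

<service name="apache" domain="webfail" autostart="1" exclusive="1" recovery="relocate">
    <ip ref="172.25.9.101"/>
    <script ref="httpd"/>
</service>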

Now go back to the command line and check the status with clustat.

[root@server1 html]# clustat
Cluster Status for newmo @ Wed Feb 15 03:37:57 2017
Member Status: Quorate

 Member Name                        ID   Status
 ------ ----                        ---- ------
 server1.mo.com                         1 Online, Local, rgmanager
 server2.mo.com                         2 Online, rgmanager

 Service Name                 Owner (Last)                 State
 ------- ----                 ----- ------                 -----
 service:apache               server2.mo.com               started
The output shows that both nodes are online and that the service is currently running on server2.

[root@server1 html]# clusvcadm -e apache
This enables the apache service and starts it.
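As an aside, clusvcadm can also move a running service to a specific node, which is convenient when testing failover by hand; the target node below is just an example:

[root@server1 html]# clusvcadm -r apache -m server1.mo.com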

[root@server1 html]# clusvcadm -d apache
This disables the apache service and stops it. Now let's test high availability: we manually stop the httpd service on server2.

[root@server2 html]# /etc/init.d/httpd stop
Look at the cluster at this point.

[root@server1 html]# clustat
Cluster Status for newmo @ Wed Feb 15 03:41:06 2017
Member Status: Quorate

 Member Name                        ID   Status
 ------ ----                        ---- ------
 server1.mo.com                         1 Online, Local, rgmanager
 server2.mo.com                         2 Online, rgmanager

 Service Name                 Owner (Last)                 State
 ------- ----                 ----- ------                 -----
 service:apache               server1.mo.com               started
The service has been switched to server 1.

4. Adding storage resources

In a real environment, a service's storage does not live on the cluster nodes themselves, but on dedicated database or storage servers. We will reproduce that setup here: server3 acts as the storage server and exports an iSCSI network-shared disk.

Add an 8 GB hard disk to the server3 virtual machine. The new disk shows up as /dev/vda:

Disk /dev/vda: 8589 MB, 8589934592 bytes
16 heads, 63 sectors/track, 16644 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Server side (iSCSI target), on server3:

Install the target software with yum install scsi-target-utils (here scsi-target-utils-1.0.24-10.el6.x86_64).
Modify the configuration file /etc/tgt/targets.conf:

<target iqn.2008-09.com.example:server.target1>
    backing-store /dev/vda
    initiator-address 172.25.9.20
    initiator-address 172.25.9.21
</target>
Then start the service and share the storage.
[root@server3 ~]# /etc/init.d/tgtd start
Starting SCSI target daemon:                               [  OK  ]
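If the target should also come back automatically after a reboot, it can be enabled at boot as well (optional, not part of the original run):

[root@server3 ~]# chkconfig tgtd on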
Check whether the share was exported successfully:

[root@server3 ~]# tgt-admin --show
Target 1: iqn.2008-09.com.example:server.target1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 8590 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/vda
            Backing store flags:
    Account information:
    ACL information:
        172.25.9.20
        172.25.9.21
The output above shows that the share was exported successfully.

Client side (iSCSI initiator):

On server1:

Install iscsi* (the iscsi-initiator-utils package, here version 6.2.0.873-10.el6).

Discover the shared device:

[root@server1 html]# iscsiadm -m discovery -t st -p 172.25.9.22
Starting iscsid:                                           [  OK  ]
                                                           [  OK  ]
172.25.9.22:3260,1 iqn.2008-09.com.example:server.target1
Log in to the target to add the device:

[root@server1 html]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2008-09.com.example:server.target1, portal: 172.25.9.22,3260] (multiple)
Login to [iface: default, target: iqn.2008-09.com.example:server.target1, portal: 172.25.9.22,3260] successful.
Now check the block devices on the client: the shared device has appeared (it is used as /dev/sdb below).

After partitioning, the disk can be used. We create a partition on it and then build the logical volume on top.
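The partitioning step itself is not shown in the original output; on /dev/sdb it would look roughly like this (the interactive fdisk answers and the partprobe call are assumptions):

[root@server1 html]# fdisk /dev/sdb
# inside fdisk: n (new partition), p (primary), 1, accept the default size,
#               then t, 8e (Linux LVM partition type), and w to write and quit
[root@server1 html]# partprobe /dev/sdb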

[root@server1 html]# pvcreate /dev/sdb1
  dev_is_mpath: failed to get device for 8:17
  Physical volume "/dev/sdb1" successfully created
[root@server1 html]# pvs
  PV         VG       Fmt  Attr PSize  PFree
  /dev/sda2  VolGroup lvm2 a--  19.51g    0
  /dev/sdb1           lvm2 a--   8.00g 8.00g
Then switch to server2, repeat the discovery and login steps, and run pvs to check for the physical volume. If both nodes see the 8G physical volume on sdb1, the shared storage is working.

Continue on server1 and create the VG and LV:

[root@server1 html]# vgcreate clustervg /dev/sdb1
  Clustered volume group "clustervg" successfully created
[root@server1 html]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  VolGroup    1   2   0 wz--n- 19.51g    0
  clustervg   1   0   0 wz--nc  8.00g 8.00g

[root@server1 html]# lvcreate -n lvclu -L 4g clustervg
[root@server1 html]# lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup  -wi-ao----  18.54g
  lv_swap VolGroup  -wi-ao---- 992.00m
  lvclu   clustervg -wi-a-----   4.00g
The storage has now been created. There are two ways to use it: one is to manage it through Conga as a cluster resource, the other is to use the GFS file system, in which case the management software is not needed.

The first is:


Fill in the information as shown above (a Filesystem resource), then add this resource to the service group as well. Pay attention to the order of the resources: the storage must come before the service, because the storage has to be mounted before the service can start.
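Before the Filesystem resource can be mounted, the logical volume needs a filesystem, and the resulting cluster.conf entry looks roughly like the sketch below; the ext4 filesystem, the mount point /var/www/html and the resource name webdata are assumptions for illustration:

[root@server1 html]# mkfs.ext4 /dev/clustervg/lvclu

<fs name="webdata" device="/dev/clustervg/lvclu" fstype="ext4"
    mountpoint="/var/www/html" force_unmount="1"/>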

Start the service group to test it; it is recommended to use clusvcadm -e apache to start the service.

[root@server1 html]# clustat
Cluster Status for newmo @ Wed Feb 15 04:36:34 2017
Member Status: Quorate

 Member Name                        ID   Status
 ------ ----                        ---- ------
 server1.mo.com                         1 Online, Local, rgmanager
 server2.mo.com                         2 Online, rgmanager

 Service Name                 Owner (Last)                 State
 ------- ----                 ----- ------                 -----
 service:apache               server1.mo.com               started
The service has started, and with the ip command you can see that the virtual IP has been added on that machine.

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:31:79:cb brd ff:ff:ff:ff:ff:ff
    inet 172.25.9.20/24 brd 172.25.9.255 scope global eth0
    inet 172.25.9.101/24 scope global secondary eth0
    inet6 fe80::5054:ff:fe31:79cb/64 scope link
       valid_lft forever preferred_lft forever
An extra IP (172.25.9.101) has been added to this machine; this is the virtual IP we configured for the cluster.
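As a final sanity check (not part of the original output), the site can be requested through the virtual IP; what comes back naturally depends on what has been placed under the Apache document root:

[root@server1 html]# curl http://172.25.9.101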

