Building soft RAID5 on CentOS 7.6 for production


 

Note: this record comes from a colleague's write-up, shared directly.

1, System Version Description

 

1. System Version Description

The operating system used for the soft RAID is CentOS 7.6, 64-bit.

 

2. Check whether mdadm is installed on the server

[root@host2 ~]# rpm -qa |grep mdadm

 

3. Install mdadm
If the command above produces no output, install mdadm:

[root@host2 ~]# yum install -y mdadm

 

4. View disk status
You can see that the system has four unpartitioned disks: /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde.
Note: when creating a soft RAID with the mdadm command, the disks may be partitioned or used whole. However, if the disks are partitioned with fdisk, the partition type can be changed to Linux raid autodetect, so that the RAID starts automatically after a reboot even if /etc/mdadm.conf is not configured.

 

[root@host2 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   60G  0 disk 
├─sda1   8:1    0    1G  0 part /boot
├─sda2   8:2    0   16G  0 part [SWAP]
└─sda3   8:3    0   43G  0 part /
sdb      8:16   0    5G  0 disk 
sdc      8:32   0    5G  0 disk 
sdd      8:48   0    5G  0 disk 
sde      8:64   0    5G  0 disk 
sr0     11:0    1  4.3G  0 rom  

2, Disk partition and raid creation


1. Disk partition


Choose one of the two commands below (1.1 or 1.2) according to the disk size; in general, use the parted command.

Note: fdisk creates MBR-format partitions, and MBR cannot address disks larger than 2TB. Disks larger than 2TB require a GPT partition table, which is created here with the parted command.
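
If you are not sure which case applies, the disk size can be checked first; a small sketch, using the example disk /dev/sdb from this article:

[root@host2 ~]# lsblk -b -d -o NAME,SIZE /dev/sdb     # Print the disk size in bytes; anything above 2TB needs GPT/parted
[root@host2 ~]# blockdev --getsize64 /dev/sdb         # Alternative way to print the size in bytes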


1.1 fdisk partition

The partition example below uses the disk /dev/sdb; the procedure for /dev/sdc, /dev/sdd and /dev/sde is the same.

[root@host2 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x8f2aa0c0.

Command (m for help): n                        # Start creating partition
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p                          # p creates a primary partition (e creates an extended partition)
Partition number (1-4, default 1):             # Press Enter (partition number defaults to 1)
First sector (2048-10485759, default 2048):    # Press Enter (partition starts at sector 2048 by default)
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): # Press Enter (use the whole disk by default)
Using default value 10485759
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): t                        # Start modifying partition type
Selected partition 1
Hex code (type L to list all codes): fd        # Modify the partition type to Linux raid autodetect
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): p                        # Query partition results

Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x8f2aa0c0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    10485759     5241856   fd  Linux raid autodetect

Command (m for help): w                        # Save the partition and exit
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@host2 ~]# partprobe                      # Update disk partition
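
Rather than repeating the interactive session on /dev/sdc, /dev/sdd and /dev/sde, the partition table of /dev/sdb can be copied to them with sfdisk; a minimal sketch, assuming the disks are the same size as in this example:

[root@host2 ~]# sfdisk -d /dev/sdb | sfdisk /dev/sdc   # Dump the MBR partition table of /dev/sdb and apply it to /dev/sdc
[root@host2 ~]# sfdisk -d /dev/sdb | sfdisk /dev/sdd
[root@host2 ~]# sfdisk -d /dev/sdb | sfdisk /dev/sde
[root@host2 ~]# partprobe                              # Re-read the partition tables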

 

The disk query result after partitioning is as follows:

[root@host2 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   60G  0 disk 
├─sda1   8:1    0    1G  0 part /boot
├─sda2   8:2    0   16G  0 part [SWAP]
└─sda3   8:3    0   43G  0 part /
sdb      8:16   0    5G  0 disk 
└─sdb1   8:17   0    5G  0 part 
sdc      8:32   0    5G  0 disk 
└─sdc1   8:33   0    5G  0 part 
sdd      8:48   0    5G  0 disk 
└─sdd1   8:49   0    5G  0 part 
sde      8:64   0    5G  0 disk 
└─sde1   8:65   0    5G  0 part 
sr0     11:0    1  4.3G  0 rom  

 

1.2 parted partition

The partition example below uses the disk /dev/sdg:

[root@host137 ~]# parted /dev/sdg
GNU Parted 3.1
Using /dev/sdg
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt                    # Create a gpt type partition table                                  
Warning: The existing disk label on /dev/sdg will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes                                                               
(parted) mkpart primary 0% 100%         # "primary" is just a partition name and can be anything; 0% and 100% specify the start and end of the partition
(parted) p                              # View partition                                  
Model: ATA HUS726060ALE610 (scsi)
Disk /dev/sdg: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  6001GB  6001GB               primary

(parted) q                             # Quit parted (the changes take effect immediately)
Information: You may need to update /etc/fstab.
[root@host137 ~]# partprobe            # Update disk partition
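
The same partitioning can also be done non-interactively, which is convenient when several disks need the same layout; a sketch assuming the example disk /dev/sdg (the raid flag is optional):

[root@host137 ~]# parted -s /dev/sdg mklabel gpt mkpart primary 0% 100%   # Script mode: GPT label plus one partition spanning the disk
[root@host137 ~]# parted -s /dev/sdg set 1 raid on                        # Optionally flag partition 1 as a RAID member
[root@host137 ~]# partprobe                                               # Update disk partition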

 

2. Create raid and mount

-C specifies the RAID device to create; -a yes automatically creates the device file; -l specifies the RAID level; -n is the number of member devices; -c is the chunk (data block) size, and if it is omitted mdadm uses its default. Generally -c is not needed.
The cat /proc/mdstat command shows the synchronization/rebuild progress of the disks in the RAID.
RAID5 is used as the example here (4 disks are used for the demonstration; in general, 3 disks are enough for RAID5):

[root@host2 ~]# mdadm -C /dev/md0 -a yes -l 5 -n 4 /dev/sd{b,c,d,e}1 -c 256       
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@host2 ~]# mdadm -Ds                  # View raid information
ARRAY /dev/md0 metadata=1.2 spares=1 name=host2:0 UUID=e2ea5580:d5f14d94:9365e4fe:cdfe1b92
[root@host2 ~]# echo 'DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1' > /etc/mdadm.conf 
  # DEVICE lists the actual RAID member partitions
  
[root@host2 ~]# mdadm -Ds >> /etc/mdadm.conf 
# Save the RAID information to the configuration file /etc/mdadm.conf. If the configuration file is incomplete, it is
# recommended to complete it by referring to a standard configuration file and the actual situation; otherwise, without
# the configuration file the array device name will default to md127 after the machine is restarted.

[root@host2 ~]# mkdir /data0               # Create mount point directory
[root@host2 ~]# mkfs.xfs /dev/md0          # Create raid disk file system as xfs
meta-data=/dev/md0               isize=512    agcount=16, agsize=245504 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=3927552, imaxpct=25
         =                       sunit=64     swidth=192 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@host2 ~]# blkid /dev/md0            # View raid disk file system and UUID
/dev/md0: UUID="43487c20-3b1f-477c-9a32-714d5982603a" TYPE="xfs"
[root@host2 ~]# 
sed -i '$a\UUID="43487c20-3b1f-477c-9a32-714d5982603a" /data0  xfs defaults,_netdev 0 0' /etc/fstab    
# Add a permanent mount using the UUID of /dev/md0. The _netdev option marks /dev/md0 as a network-style device:
# when the server restarts, the system mounts /dev/md0 if the device is found, and skips the mount and continues
# booting if it is not. Without _netdev, the system will not finish booting if /dev/md0 cannot be found.
[root@host2 ~]# mount -a                 
# Verify that the entry just written to /etc/fstab is correct; you can then check the /dev/md0 mount with df -h

[root@host2 ~]# tail -n 1 /etc/fstab      # View permanent mount results
UUID="43487c20-3b1f-477c-9a32-714d5982603a" /data0                   xfs defaults,_netdev 0 0
[root@host2 ~]# mdadm -D /dev/md0         # -D shows the RAID details; you can see that all four disks are in the active sync state

/dev/md0:
           Version : 1.2
     Creation Time : Wed Dec 16 18:02:25 2020
        Raid Level : raid5
        Array Size : 15710208 (14.98 GiB 16.09 GB)
     Used Dev Size : 5236736 (4.99 GiB 5.36 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Wed Dec 16 18:20:10 2020
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 256K

Consistency Policy : resync

              Name : host2:0  (local to host host2)
              UUID : e2ea5580:d5f14d94:9365e4fe:cdfe1b92
            Events : 22

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       4       8       65        3      active sync   /dev/sde1
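
While the new array is still synchronizing, the rebuild progress mentioned above can be followed continuously; a small sketch:

[root@host2 ~]# watch -n 5 cat /proc/mdstat        # Refresh /proc/mdstat every 5 seconds; Ctrl+C to exit once the resync reaches 100%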

 

If you are not confident about the operation, you can reboot to verify that the RAID status and the mount come back normally.

 

3. Validate raid

Validation commands:

lsblk
blkid
df -h
cat /etc/fstab
mdadm -D /dev/md0
mdadm -D /dev/md1
cat /proc/mdstat
cat /etc/mdadm.conf
mdadm -Ds
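
If these checks are run after every reboot, they can be collected into a small script; a minimal sketch, assuming the /dev/md0 array and /data0 mount point used in this article:

#!/bin/bash
# Quick post-reboot RAID check (device and mount names are the ones from this article)
mdadm -D /dev/md0 | grep -E 'State :|Active Devices|Failed Devices'   # Array health summary
grep -A 2 md0 /proc/mdstat                                            # Kernel view of the array and any resync progress
df -h /data0                                                          # Confirm the filesystem is mounted
grep data0 /etc/fstab                                                 # Confirm the permanent mount entry is present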

 

4. Remove bad disks and add new disks

 

1. Simulate a disk failure
[root@host2 ~]# mdadm -f /dev/md0 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0
2. View the RAID5 information
[root@host2 ~]# mdadm -D /dev/md0 
       0       8       17        -      faulty   /dev/sdb1

3. Remove the damaged disk
[root@host2 ~]# mdadm -r /dev/md0 /dev/sdb1
4. Add the new disk (a combined form of these steps is sketched below)
[root@host2 ~]# mdadm -a /dev/md0 /dev/sdg1
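
The same replacement can also be done with mdadm's long options, chaining fail and remove into one command; a sketch using the device names from this example (the replacement partition /dev/sdg1 must already exist):

[root@host2 ~]# mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # Mark the member faulty and remove it in one step
[root@host2 ~]# mdadm /dev/md0 --add /dev/sdg1                       # Add the replacement; the array rebuilds onto it
[root@host2 ~]# cat /proc/mdstat                                     # Check the rebuild progress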

 

5. Delete and rebuild raid

1. Delete the soft RAID (a script combining these steps is sketched after this list):
(1) umount /data0               # Unmount the mount point before deleting
(2) mdadm --stop /dev/md0       # Stop the RAID
(3) mdadm --misc --zero-superblock /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    # Clear the RAID information from the member partitions
(4) vim /etc/fstab              # Delete or comment out the mount entry in /etc/fstab
(5) vim /etc/mdadm.conf         # Delete or comment out the RAID information in /etc/mdadm.conf
If, after all the operations above, the md0 device file still exists under /dev/, just delete it with rm -f /dev/md0.

2. Commands to reassemble (start) the soft RAID:
If the /etc/mdadm.conf configuration file is already in place: mdadm -As /dev/md0
If /etc/mdadm.conf is not configured: mdadm -A /dev/md0 /dev/sd{b,c,d,e}1
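
The deletion steps in item 1 above can also be combined into one script; a minimal sketch, assuming the array, member partitions and mount point from this article (edit /etc/fstab and /etc/mdadm.conf by hand if you prefer):

#!/bin/bash
# Teardown sketch for the example array in this article
umount /data0                                  # Unmount the mount point
mdadm --stop /dev/md0                          # Stop the array
mdadm --zero-superblock /dev/sd{b,c,d,e}1      # Clear the RAID superblock on every member partition
sed -i '/data0/d' /etc/fstab                   # Remove the mount entry (assumes only the RAID entry mentions data0)
sed -i '/^DEVICE\|^ARRAY/d' /etc/mdadm.conf    # Remove the DEVICE and ARRAY lines from mdadm.conf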

 

3, Precautions

1. If the configuration file /etc/mdadm.conf is incomplete, it is recommended to complete it by referring to a standard configuration file and the actual situation. Otherwise, without the configuration file the array device name will default to md127 after the machine is restarted.
The RAID information can be saved to the configuration file /etc/mdadm.conf with the following commands, where DEVICE lists the actual RAID member partitions:

[root@host2 ~]# echo 'DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1' > /etc/mdadm.conf
[root@host2 ~]# mdadm -Ds >> /etc/mdadm.conf
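
After running those two commands, /etc/mdadm.conf should contain something like the following (the ARRAY line comes from the mdadm -Ds output of the example array above):

DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
ARRAY /dev/md0 metadata=1.2 spares=1 name=host2:0 UUID=e2ea5580:d5f14d94:9365e4fe:cdfe1b92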

2. When adding the mount in /etc/fstab, remember the _netdev parameter, which marks /dev/md0 as a network-style device. When the server restarts, the system mounts /dev/md0 if it finds the device, and skips the mount and continues booting if it does not. Without the _netdev parameter, the system will not boot if it cannot find the /dev/md0 device.

[root@host2 ~]# tail -n 1 /etc/fstab      # View permanent mount results
UUID="43487c20-3b1f-477c-9a32-714d5982603a" /data0                   xfs defaults,_netdev 0 0

 
