Building a RAID disk array on a Linux server (basic operations)

Advantages of RAID:
Strong fault tolerance to keep data safe;
Higher I/O throughput, better matching the speed of the CPU and memory;
Large storage capacity for massive data sets;
Lower cost for a given level of performance and capacity (good price-performance ratio).

RAID 0
Stripe mode
Data is split into chunks and written across the disks in turn
Usable capacity = number of disks × smallest disk size

RAID 1
Mirror mode
Every write is duplicated onto the mirror disk
Usable capacity = number of disks × smallest disk size / 2

RAID 5
Three or more disks form the array; parity information is distributed across all member disks rather than stored on one dedicated disk
Usable capacity = (number of disks − 1) × smallest disk size

RAID 10 (RAID 1 + RAID 0)
A combination of RAID 0 and RAID 1: mirrored pairs are striped together
Usable capacity = number of disks × smallest disk size / 2
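
As a worked example with the four 20 GB disks used in the experiment below: RAID 0 yields 4 × 20 GB = 80 GB, RAID 5 yields (4 − 1) × 20 GB = 60 GB, and RAID 10 yields 4 × 20 GB / 2 = 40 GB; a two-disk RAID 1 mirror of the same disks would yield 20 GB.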

1. Basic environment configuration
Prepare a virtual machine running CentOS 7.2-1511 and attach four 20 GB hard disks.

(1) Configure the yum repository (the mdadm package and its dependencies are available in the CentOS image)

① Move the original online repository files out of the way

mv /etc/yum.repos.d/* /media

② Write the local Yum source file local.repo

vi /etc/yum.repos.d/local.repo

[centos]
name=centos
baseurl=file:///opt/centos
enabled=1
gpgcheck=0

③ Mount the image

mkdir /opt/centos
mount CentOS-7-x86_64-DVD-1511.iso /opt/centos/        //One-time mount; it does not persist across reboots
mount: /dev/loop0 is write-protected, mounting read-only
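
Since the mount above lasts only until the next reboot, an optional follow-up is to record it in /etc/fstab and then confirm that yum can see the local repository. This is a sketch, not part of the original steps; the /root path for the ISO is an assumption, so adjust it to wherever the ISO actually lives:

# Keep the ISO mounted across reboots (the /root path is an assumption)
echo '/root/CentOS-7-x86_64-DVD-1511.iso /opt/centos iso9660 loop,ro 0 0' >> /etc/fstab

# Confirm the local repository is visible to yum
yum clean all
yum repolist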

④ Turn off the firewall and SELinux

systemctl stop firewalld        //Temporary: this and setenforce 0 below only last until the next reboot
setenforce 0
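
If these settings should survive a reboot, the usual optional follow-up (not part of the original steps) is to disable both permanently:

# Keep firewalld from starting at boot
systemctl disable firewalld

# Disable SELinux permanently (takes effect after the next reboot)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config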

⑤ Install the mdadm tool using yum

yum install -y mdadm
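
A quick optional check that the package landed:

# Confirm mdadm is installed and report its version
rpm -q mdadm
mdadm --version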

View current disk information

[root@localhost ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   50G  0 disk
├─sda1            8:1    0  500M  0 part /boot
└─sda2            8:2    0 49.5G  0 part
  ├─centos-root 253:0    0 47.5G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0   20G  0 disk
sdc               8:32   0   20G  0 disk
sdd               8:48   0   20G  0 disk
sde               8:64   0   20G  0 disk
sr0              11:0    1 1024M  0 rom
loop0             7:0    0    4G  0 loop /opt/centos

Create a RAID 0 disk array (this experiment uses the sdb and sdc disks)

[root@localhost ~]# mdadm -Cv /dev/md0 -l 0 -n 2 /dev/sd[b-c]
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
/dev/md0: the device name of the array just created; its RAID level is RAID 0.
-Cv: create a new array and print verbose messages.
-l 0: set the RAID level to RAID 0.
-n 2: build the array from 2 member devices.
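
The creation output above notes that the chunk size defaults to 512K. If a workload calls for a different stripe unit, mdadm's --chunk option can override it; this is a variant command not used in this walkthrough:

# Same array, but with an explicit 256K chunk size
mdadm -Cv /dev/md0 -l 0 -n 2 --chunk=256 /dev/sd[b-c]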

View RAID details

[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Nov 24 09:09:35 2021
     Raid Level : raid0
     Array Size : 41910272 (39.97 GiB 42.92 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Wed Nov 24 09:09:35 2021
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : xnode1:0  (local to host xnode1)
           UUID : a6c10beb:31a3423d:ad6c618a:adcd42dc
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
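
Besides mdadm -D, the kernel exposes a compact summary of all md arrays through /proc/mdstat, which is handy for quick checks:

# Kernel-side summary of every active md array
cat /proc/mdstat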

Create a file system on the new RAID device and mount it

[root@localhost ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=256    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# mkdir /raid0/
[root@localhost ~]# mount /dev/md0 /raid0/
[root@localhost ~]# df -Th /raid0/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       xfs    40G   33M   40G   1% /raid0
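
Note that nothing so far survives a reboot: the mount will be gone and the array name may change on re-assembly. A common optional way to pin both down, not covered in the original steps, is:

# Record the array definition so it is assembled consistently at boot
mdadm --detail --scan >> /etc/mdadm.conf

# Mount the filesystem automatically at boot
echo '/dev/md0 /raid0 xfs defaults 0 0' >> /etc/fstab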

Delete the RAID array

[root@localhost ~]# umount  /raid0/
[root@localhost ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
[root@localhost ~]# mdadm --zero-superblock /dev/sd[b-c]
[root@localhost ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   50G  0 disk
├─sda1            8:1    0  500M  0 part /boot
└─sda2            8:2    0 49.5G  0 part
  ├─centos-root 253:0    0 47.5G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0   20G  0 disk
sdc               8:32   0   20G  0 disk
sdd               8:48   0   20G  0 disk
sde               8:64   0   20G  0 disk
sr0              11:0    1 1024M  0 rom
loop0             7:0    0    4G  0 loop /opt/centos
[root@localhost ~]# rm -r /raid0/
rm: remove directory '/raid0/'? y
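
To confirm the member disks really are clean, mdadm --examine can be run on them; after --zero-superblock it should report that no md superblock is present (a quick optional check):

# -E (--examine) looks for an md superblock on each disk
mdadm -E /dev/sd[b-c]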

Simple RAID 0 creation and deletion are now complete, as described above.

RAID 5 operation and maintenance operations are as follows.
Simulate a RAID 5 disk array with four disks, one of which serves as a hot spare

[root@localhost ~]# mdadm -Cv /dev/md5 -l 5 -n 3 /dev/sd[b-d] --spare-devices=1 /dev/sde
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20955136K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.

View RAID details

[root@localhost ~]# mdadm  -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Wed Nov 24 09:35:24 2021
     Raid Level : raid5
     Array Size : 41910272 (39.97 GiB 42.92 GB)
  Used Dev Size : 20955136 (19.98 GiB 21.46 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Nov 24 09:35:24 2021
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 5% complete      //Watch this line: the RAID 5 array is still being built. If a disk failure is simulated before the build finishes, the whole array can be lost. Re-run mdadm -D /dev/md5 until it reports 100% (or follow it live, as sketched after this output) before continuing.

           Name : localhost:5  (local to host localhost)
           UUID : e15ef8c6:1eac185f:ce147026:ed3a88bd
         Events : 1

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      spare rebuilding   /dev/sdd

       3       8       64        -      spare   /dev/sde
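
Rather than re-running mdadm -D by hand, the rebuild progress can be followed live (an optional convenience):

# Refresh the kernel's md status every 2 seconds; Ctrl-C to exit
watch -n 2 cat /proc/mdstat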

Simulate a hard disk failure

[root@localhost ~]# mdadm -f /dev/md5 /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md5

Watch the hot spare take part in the RAID rebuild

[root@localhost ~]# mdadm  -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Wed Nov 24 09:35:24 2021
     Raid Level : raid5
     Array Size : 41910272 (39.97 GiB 42.92 GB)
  Used Dev Size : 20955136 (19.98 GiB 21.46 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Nov 24 09:39:13 2021
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 44% complete    //As above: wait until the rebuild reaches 100% before continuing

           Name : localhost:5  (local to host localhost)
           UUID : e15ef8c6:1eac185f:ce147026:ed3a88bd
         Events : 27

    Number   Major   Minor   RaidDevice State
       3       8       64        0      spare rebuilding   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       0       8       16        -      faulty   /dev/sdb

Remove the failed disk

[root@localhost ~]# mdadm -r /dev/md5 /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md5
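
In a real repair, the failed drive would now be physically replaced and the new disk added back, restoring the hot spare. This step is a hypothetical follow-up; the walkthrough below continues without it:

# After physically replacing the disk, add it back so /dev/md5
# has a hot spare again (hypothetical follow-up)
mdadm -a /dev/md5 /dev/sdb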

View RAID details (failed disk removed)

[root@localhost ~]# mdadm  -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Wed Nov 24 09:35:24 2021
     Raid Level : raid5
     Array Size : 41910272 (39.97 GiB 42.92 GB)
  Used Dev Size : 20955136 (19.98 GiB 21.46 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Wed Nov 24 09:41:00 2021
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost:5  (local to host localhost)
           UUID : e15ef8c6:1eac185f:ce147026:ed3a88bd
         Events : 38

    Number   Major   Minor   RaidDevice State
       3       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

Create a file system on the new RAID device and mount it

[root@localhost ~]# mkfs.xfs  /dev/md5
meta-data=/dev/md5               isize=256    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# mount /dev/md5 /mnt/
[root@localhost ~]# df -Th /mnt/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md5       xfs    40G   33M   40G   1% /mnt

Delete RAID 5 and restore a clean state

[root@localhost ~]# umount  /mnt/
[root@localhost ~]# mdadm -S /dev/md5
mdadm: stopped /dev/md5
[root@localhost ~]# mdadm --zero-superblock /dev/sd[c-e]
[root@localhost ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   50G  0 disk
├─sda1            8:1    0  500M  0 part /boot
└─sda2            8:2    0 49.5G  0 part
  ├─centos-root 253:0    0 47.5G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0   20G  0 disk
sdc               8:32   0   20G  0 disk
sdd               8:48   0   20G  0 disk
sde               8:64   0   20G  0 disk
sr0              11:0    1 1024M  0 rom
loop0             7:0    0    4G  0 loop /opt/centos
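
Note that /dev/sdb was hot-removed from the array earlier, so it may still carry a stale md superblock; zeroing it as well returns all four disks to a truly clean state (an extra step the original omits):

# /dev/sdb kept its old superblock when it was removed as faulty
mdadm --zero-superblock /dev/sdb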
