Configuring RAID on Linux

Outline

What is RAID
Software vs. hardware RAID
Setting up a software disk array
Simulating a RAID disk failure and recovering
Starting and mounting the RAID automatically at boot

1. What is RAID:

1. RAID stands for "Redundant Array of Inexpensive Disks": a fault-tolerant array built from inexpensive disks.

2. RAID uses a single technology (software or hardware) to combine several smaller disks into one larger disk device; this larger device provides not only storage but also data protection.

3. The RAID behaves differently depending on the level selected.

  • RAID-0 (stripe): Best performance

This mode works best with disks of the same model and capacity.
RAID in this mode divides each disk into equal-sized blocks (called chunks, generally configurable between 4 KB and 1 MB). When a file is written to the RAID, it is cut up according to the chunk size and the pieces are placed on each disk in turn.
Since the data is interleaved across the disks in this way, every write is spread over all member disks in roughly equal amounts.
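The striping just described can be simulated in userspace to see the mechanics. This is only an illustration, not real RAID: two directories stand in for two member disks, and the 512K chunk size matches the mdadm default seen later in this post.

```shell
# Toy illustration of RAID-0 striping: a 4 MiB file is cut into
# 512 KiB chunks, which are dealt out round-robin to two "disks".
mkdir -p disk0 disk1
dd if=/dev/zero of=bigfile bs=1K count=4096 status=none   # 4 MiB test file
split -b 512K -d bigfile chunk.                           # cut into 8 chunks
i=0
for c in chunk.*; do
    mv "$c" "disk$((i % 2))"      # even-numbered chunks to disk0, odd to disk1
    i=$((i + 1))
done
echo "disk0: $(ls disk0 | wc -l) chunks, disk1: $(ls disk1 | wc -l) chunks"
```

Each "disk" ends up holding half of the chunks, which is why both reads and writes can proceed on all members in parallel.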

  • RAID-1 (mirror mode): full backup

This mode also works best with disks of the same capacity, ideally identical disks.
If a RAID-1 is built from disks of different capacities, the total capacity is limited by the smallest disk. The point of this mode is to keep the same data stored in full on two disks.
For example, if I have a 100 MB file and two disks making up a RAID-1, 100 MB will be written to each disk's storage space at the same time. As a result, the usable capacity of the RAID is only about half of the raw total. Because the contents of the two disks are identical, as if one mirrored the other, this is also called mirror mode.

Since the data on the two disks is identical, your data remains intact when either disk fails!
Its greatest benefit is backup.
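The mirroring idea can be sketched with plain files standing in for the two disks; again this is only an illustration, not real RAID:

```shell
# Toy illustration of RAID-1 mirroring: every write lands on both "disks",
# so losing either copy loses no data.
printf 'important data\n' > copy_a   # member disk 1
cp copy_a copy_b                     # member disk 2 receives identical bytes
cmp -s copy_a copy_b && echo "mirrors identical"
rm copy_a                            # simulate losing one disk
cat copy_b                           # the data is still intact on the survivor
```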

  • RAID 1+0, RAID 0+1 (combined)

RAID-0 gives good performance but no data safety, while RAID-1 gives data safety but poor performance. Can the two be combined?
RAID 1+0:
(1) First let pairs of disks form RAID 1 arrays, and build two such pairs;
(2) Then combine the two RAID 1 arrays into one RAID 0. This is RAID 1+0.
RAID 0+1:
(1) First let pairs of disks form RAID 0 arrays, and build two such pairs;
(2) Then combine the two RAID 0 arrays into one RAID 1. This is RAID 0+1.
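The two RAID 1+0 steps above map directly onto mdadm commands. This is a hedged sketch only: it requires root and four spare partitions, and the device names (/dev/md1, /dev/md2, /dev/md10, /dev/sd[b-e]1) are hypothetical.

```shell
# Hypothetical RAID 1+0 build, step by step (requires root and real devices):
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1   # mirror pair 1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1   # mirror pair 2
mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md2    # stripe the pairs
# Modern mdadm can also build this layout in a single step with --level=10.
```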

  • RAID-5: a balance of performance and data backup

RAID-5 requires at least three disks to make up this type of disk array.
Writes to this array look a bit like RAID-0, but during each striping cycle a parity chunk is added on one of the disks; it encodes the data of the other disks so that they can be rescued if a disk fails.

Each write cycle records a parity chunk, and the parity is placed on a different disk each cycle, so if any one disk fails, its contents can be rebuilt from the data and parity chunks on the remaining disks. Note, however, that because of the parity, the usable capacity of RAID 5 is one disk less than the total.
With 3 disks, only (3-1) = 2 disks' worth of capacity remains.
When two or more disks fail at the same time, the entire RAID 5 data set is lost, because RAID 5 by default tolerates only a single failed disk.
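The rescue-by-parity idea is just XOR arithmetic, which the shell can demonstrate directly (the byte values below are arbitrary):

```shell
# RAID-5 parity is the XOR of the data chunks in each stripe, so any one
# lost chunk equals the XOR of everything that survives.
d1=170; d2=204; d3=90        # three data "chunks" (arbitrary byte values)
p=$(( d1 ^ d2 ^ d3 ))        # parity chunk stored on the remaining disk
rebuilt=$(( d1 ^ d3 ^ p ))   # "disk 2" failed: rebuild it from the survivors
echo "lost $d2, rebuilt $rebuilt"
```

This is also why losing two disks at once is fatal: with two unknowns, one XOR equation per stripe is no longer enough to solve for the missing data.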

Read performance: excellent
 Write performance: average
 RAID5: tolerates 1 failed disk
 RAID6: tolerates 2 failed disks
  • Spare Disk: the function of the standby disk:

For the system to rebuild in real time as soon as a disk fails, a spare disk is needed. A spare disk is one or more disks that are not part of the original array level and are not normally used by the array. When any disk in the array fails, the spare is automatically pulled into the array, the damaged disk is moved out, and the data is rebuilt immediately.


After a disk in the array has failed, you still have to unplug the damaged disk and replace it with a new one.

  • Advantages of disk arrays:
  1. Data security and reliability: this refers not to network security, but to whether the data can be rescued or remain usable when the hardware (the disk) fails;
  2. Read/write performance: RAID 0, for example, stripes I/O across disks and can improve your system's throughput;
  3. Capacity: multiple disks can be combined, so a single file system can have considerable capacity.
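The capacity rules for the levels above reduce to simple arithmetic. With N member disks of SIZE GB each (the example values here are arbitrary):

```shell
# Usable capacity for N member disks of SIZE GB each (example values).
N=4; SIZE=20
echo "RAID-0: $(( N * SIZE )) GB"          # all space usable, no redundancy
echo "RAID-1: $(( SIZE )) GB"              # only one copy's worth of space
echo "RAID-5: $(( (N - 1) * SIZE )) GB"    # one disk's worth goes to parity
```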

Software vs. hardware RAID:

Hardware RAID uses a dedicated controller card, while software RAID emulates the array in the operating system and therefore consumes system resources such as CPU cycles and I/O bus bandwidth. But today's personal computers are fast enough that this overhead is no longer the limitation it once was.
The software RAID tool provided by CentOS is mdadm, which manages arrays at the granularity of a partition or a whole disk. That means you don't need two or more disks; two or more partitions are enough to build your array.
In addition, mdadm supports RAID0/RAID1/RAID5/spare disk and the other levels just discussed. It also provides a hot-spare-like management mechanism that allows member partitions to be swapped while the array stays online (with the file system in normal use).

Setting up a software disk array:

Configure RAID
(Add a hard drive to the server and create five partitions on it; you can also add five hard drives and use one partition per drive.)

[root@localhost ~]# gdisk /dev/sdb   //Partition the new disk

Creating new GPT entries.

Command (? for help): n   //Create partitions
Partition number (1-128, default 1): 1   //Default 1
First sector (34-41943006, default = 2048) or {+-}size{KMGTP}: 
Last sector (2048-41943006, default = 41943006) or {+-}size{KMGTP}: +1G   //Size 1G
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 
Changed type of partition to 'Linux filesystem'
//Configure 4 more partitions as above
Command (? for help): p         //View partition table information
Disk /dev/sdb: 41943040 sectors, 20.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 6A978C77-4505-4345-ABEC-AE3C31214C6D
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 41943006
Partitions will be aligned on 2048-sector boundaries
Total free space is 31457213 sectors (15.0 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         2099199   1024.0 MiB  8300  Linux filesystem
   2         2099200         4196351   1024.0 MiB  8300  Linux filesystem
   3         4196352         6293503   1024.0 MiB  8300  Linux filesystem
   4         6293504         8390655   1024.0 MiB  8300  Linux filesystem
   5         8390656        10487807   1024.0 MiB  8300  Linux filesystem

Command (? for help): w   //Write the partition table and exit
[root@localhost ~]# mdadm --create --auto=yes /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[b-e]1
//Command parameters:
--create    #Create a raid
--auto=yes /dev/md0   #Create the device node automatically; the new software RAID device is md0 (the md number can be 0-9)
--level=5   #Level of the disk array; raid5 is created here
--raid-devices     #Number of active devices in the array
--spare-devices   #Number of devices used as spare disks
/dev/sd[b-e]1   #Devices used by the array; this can also be written out as /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
[root@localhost ~]# cat /proc/mdstat             #View RAID status
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
      41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md0             #View detailed RAID information
/dev/md0:
        Version : 1.2
  Creation Time : Sun Jun 30 10:43:20 2019
     Raid Level : raid5                                   #Array type is raid5

                                             ............              #Omit some content

 Active Devices : 3                 #Number of active disks
Working Devices : 4               #Number of all disks
 Failed Devices : 0                  #Number of failed disks
  Spare Devices : 1                 #Number of disks for hot backup
Number   Major   Minor   RaidDevice State
   0       8       17        0      active sync   /dev/sdb1
   1       8       33        1      active sync   /dev/sdc1
   4       8       49        2      active sync   /dev/sdd1

   3       8       65        -      spare   /dev/sde1       #A disk for hot backup
[root@localhost ~]# mkfs.xfs /dev/md0
#Format Disk
[root@localhost ~]# mkdir /a
[root@localhost ~]# mount /dev/md0 /a
#Mount Disk
[root@localhost ~]# df -hT              #Check the file system size

                            ............                #Omit some content

/dev/md0            xfs        40G   33M   40G    1% /a
[root@localhost ~]# vim /etc/fstab                  #Add an entry so the array mounts automatically at boot

                                ............                #Omit some content

/dev/md0                /a                      xfs     defaults        0 0
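The fstab entry only covers mounting; the array itself is assembled at boot from the member superblocks, and recording it in /etc/mdadm.conf keeps the /dev/md0 name stable across reboots. A hedged sketch (requires root; the output line shown is only illustrative):

```shell
# Persist the array definition so it assembles under the same name at boot.
mdadm --detail --scan >> /etc/mdadm.conf
cat /etc/mdadm.conf
# Typically prints a line of the form:
# ARRAY /dev/md0 metadata=1.2 name=localhost:0 UUID=...
```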
[root@localhost ~]# cd /a
[root@localhost a]# touch 123.txt  456.txt                #Create Test File
[root@localhost a]# mdadm /dev/md0 -f /dev/sdb1      #Simulate sdb1 corruption
mdadm: set /dev/sdb1 faulty in /dev/md0
[root@localhost a]# mdadm -D /dev/md0                  #View/dev/md0 details

                   ............               #Omit some content

    Number   Major   Minor   RaidDevice State
       3       8       65        0      spare rebuilding   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       0       8       17        -      faulty   /dev/sdb1
[root@localhost a]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 sdd1[4] sde1[3] sdc1[1] sdb1[0](F)
      41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
[root@localhost a]# ll                     #Check that the test files survived
total 0
-rw-r--r--. 1 root root 0 Jun 30 11:06 123.txt
-rw-r--r--. 1 root root 0 Jun 30 11:06 456.txt
[root@localhost a]# mdadm /dev/md0 -r /dev/sdb1                        #Remove damaged disks
mdadm: hot removed /dev/sdb1 from /dev/md0
[root@localhost a]# mdadm -D /dev/md0                               #View/dev/md0 details

                                                                                 ............               #Omit some content

   Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1
[root@localhost a]# mdadm /dev/md0 -a /dev/sdb1                      #Add a hard disk
mdadm: added /dev/sdb1
[root@localhost a]# mdadm -D /dev/md0                           #View/dev/md0 details

Add another disk to the server and, after a reboot, add its partition to the array:

[root@localhost a]# mdadm /dev/md0 -a /dev/sdf1
mdadm: added /dev/sdf1
[root@localhost a]# mdadm -D /dev/md0

                                                                                 ............               #Omit some content

    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       5       8       17        -      spare   /dev/sdb1
       6       8       81        -      spare   /dev/sdf1
[root@localhost a]# mdadm /dev/md0 -G -n4
#-G grows (reshapes) the array, and -n specifies the number of active disks in the raid. Make sure enough spare disks have been added first.
[root@localhost a]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Jun 30 10:43:20 2019
     Raid Level : raid5
     Array Size : 41908224 (39.97 GiB 42.91 GB)                        #The array capacity changes once the reshape completes
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Sun Jun 30 11:22:00 2019
          State : clean         #Build complete

                                                ............         #Omit some content

    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1
       6       8       81        3      active sync   /dev/sdf1

       5       8       17        -      spare   /dev/sdb1
    #The raid now spans four active disks, plus one spare
[root@localhost a]# df -hT                                       #Capacity unchanged: the file system has not been grown yet
                         ............         #Omit some content
/dev/md0            xfs        40G   33M   40G    1% /a
[root@localhost a]# resize2fs /dev/md0
#(resize2fs works for ext2/ext3/ext4 file systems, but not for xfs)
#The resize2fs command resizes an ext file system after its underlying device grows
resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/md0
Couldn't find valid filesystem superblock.
[root@localhost a]# xfs_growfs /dev/md0                            #Expand the file system
meta-data=/dev/md0               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 10475520 to 15715584
[root@localhost a]# df -hT
Filesystem          Type      Size  Used Avail Use% Mounted on

                      ............                   #Omit some content

/dev/md0            xfs        60G   33M   60G    1% /a
#Checking again shows the capacity has changed
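The before/after sizes above line up with the RAID-5 capacity rule, (active disks - 1) x member size, with each member contributing roughly 20G:

```shell
# Sanity-check the capacity change: 3 active raid5 members before the grow,
# 4 after, each about 20G, with one member's worth of space used for parity.
echo "before xfs_growfs: $(( (3 - 1) * 20 ))G"   # matches the 40G df reported
echo "after  xfs_growfs: $(( (4 - 1) * 20 ))G"   # matches the 60G df reports now
```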


Posted on Thu, 07 Nov 2019 11:32:28 -0500 by yarnold