1, RAID disk array introduction
- RAID: Redundant Array of Independent Disks
- Multiple independent physical hard disks are combined in different ways into a hard disk group (a logical hard disk), providing higher storage performance than a single hard disk as well as data backup (redundancy)
- The different ways of organizing disks into an array are known as RAID levels
- Common RAID levels
- RAID 0
RAID 0 continuously divides data into strips at the bit or byte level and reads/writes them to multiple disks in parallel, so it has a high data transfer rate, but it provides no data redundancy
RAID 0 simply improves performance and does not guarantee data reliability; the failure of any one disk affects all of the data
RAID 0 cannot be used where data security requirements are high
- RAID 1
Data redundancy is achieved through disk mirroring: mutually backed-up copies of the data are written to paired independent disks
When the original data is busy, data can be read directly from the mirrored copy, so RAID 1 can improve read performance
- RAID 5
N (N >= 3) disks form an array. Each piece of data is split into N-1 data strips plus one parity strip, and these N strips are stored cyclically and evenly across the N disks (see the parity sketch after this list)
All N disks read and write at the same time, so read performance is very high; however, because of the parity mechanism, write performance is relatively low
Disk utilization is (N-1)/N and reliability is high: one disk may fail without any data being affected
- RAID 6
N (N >= 4) disks form an array, with (N-2)/N disk utilization
Compared with RAID 5, RAID 6 adds a second independent parity block. The two independent parity systems use different algorithms, so data remains usable even if two disks fail at the same time
Compared with RAID 5, it suffers a greater "write penalty", so write performance is poorer
- RAID 1+0
After N (an even number, N >= 4) disks are mirrored in pairs, the mirrored pairs are combined into a RAID 0
N/2 disk utilization
N/2 disks are written at the same time and N disks are read at the same time
High performance and reliability
- Comparison of RAID levels
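To make the RAID 5 parity mechanism concrete, below is a minimal shell sketch of the underlying XOR idea. The block values are made-up examples and this is not an mdadm command; it only shows the arithmetic that lets a lost strip be rebuilt from the remaining strip and the parity.

# Sketch of the RAID 5 parity principle (XOR) with example values
D1=0xA5                                 # data block stored on disk 1
D2=0x3C                                 # data block stored on disk 2
P=$(( D1 ^ D2 ))                        # parity block stored on disk 3: P = D1 XOR D2
printf 'parity      : 0x%02X\n' "$P"
# If the disk holding D1 fails, D1 can be rebuilt from D2 and the parity:
printf 'rebuilt D1  : 0x%02X\n' "$(( D2 ^ P ))"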
2, Array card introduction
1. Introduction
- An array card is a board used to implement RAID functions. It usually consists of components such as an I/O processor, a hard disk controller, hard disk connectors, and a cache
- Different RAID cards support different RAID levels, for example RAID 0, RAID 1, RAID 5, RAID 10, etc.
2. Interface types of array cards
- IDE interface (parallel interface, low price and strong compatibility)
- SCSI interface (Small Computer System Interface, a parallel interface widely used for high-speed data transmission on small computer systems; supports hot plugging and has low CPU occupancy, but is expensive)
- SATA interface (serial interface)
- SAS interface (serial attached SCSI, the next-generation SCSI interface, backward compatible with SATA)
3, Array card cache
- The cache is where the RAID card exchanges data with the external bus: the RAID card first transfers data into the cache, and the cache then exchanges the data with the external data bus.
- The size and speed of the cache are important factors that directly affect the actual transfer speed of the RAID card; a larger cache improves the hit rate
- Different RAID cards are equipped with different memory capacities at the factory, generally ranging from a few megabytes to hundreds of megabytes.
4, Configure RAID
1. Configure RAID5
- Check whether the mdadm package is installed and install it if it is not. Then use fdisk to partition each disk and mark the partition type ID as "fd".
rpm -q mdadm              #Check whether the mdadm package is installed
yum install -y mdadm      #Install mdadm
fdisk /dev/sdb            #Partition the disks
fdisk /dev/sdc
fdisk /dev/sdd
fdisk /dev/sde
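As an optional convenience, the interactive fdisk dialogue can be scripted. This is only a sketch, assuming each disk gets a single primary partition covering the whole device with the partition type set to "fd"; running fdisk interactively as above works just as well.

# Sketch: script the fdisk keystrokes for each member disk (assumes empty disks,
# one primary partition each, type "fd"); blank lines accept the default sectors
for disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
fdisk "$disk" <<EOF
n
p
1


t
fd
w
EOF
done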
- Create the RAID device
[root@localhost ~]# mdadm -C -v /dev/md5 -l5 -n3 /dev/sd[bcd]1 -x1 /dev/sde1
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20954112K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
Notes:
-C: create a new array
-v: display details of the creation process
/dev/md5: name of the RAID 5 device being created
-a yes (--auto=yes): automatically create the device file if it does not exist; may be omitted
-l: specify the RAID level; l5 means create a RAID 5
-n: specify how many disks are used to create the RAID; n3 means the RAID is built from 3 disks
/dev/sd[bcd]1: use these three disk partitions to create the RAID
-x: specify how many hot-spare disks the RAID has; x1 means one spare disk is reserved
/dev/sde1: the partition used as the hot spare
- Check whether the creation is successful
Method 1: cat /proc/mdstat               #Also shows the progress of the RAID creation
Method 2: mdadm -D /dev/md5
Method 3: watch -n 10 'cat /proc/mdstat' #Use the watch command to refresh the output of /proc/mdstat at regular intervals
- Create and mount the file system
[root@localhost ~]# mkfs -t xfs /dev/md5     #Format the array
meta-data=/dev/md5    isize=512     agcount=16, agsize=654720 blks
         =            sectsz=512    attr=2, projid32bit=1
         =            crc=1         finobt=0, sparse=0
data     =            bsize=4096    blocks=10475520, imaxpct=25
         =            sunit=128     swidth=256 blks
naming   =version 2   bsize=4096    ascii-ci=0 ftype=1
log      =internal log bsize=4096   blocks=5120, version=2
         =            sectsz=512    sunit=8 blks, lazy-count=1
realtime =none        extsz=4096    blocks=0, rtextents=0
[root@localhost ~]# mount /dev/md5 /2        #Mount it and check the mounts
[root@localhost ~]# df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda3      xfs        33G  3.7G   30G  12% /
devtmpfs       devtmpfs  898M     0  898M   0% /dev
tmpfs          tmpfs     912M     0  912M   0% /dev/shm
tmpfs          tmpfs     912M  9.1M  903M   1% /run
tmpfs          tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda1      xfs       2.0G  174M  1.9G   9% /boot
tmpfs          tmpfs     183M   48K  183M   1% /run/user/1000
tmpfs          tmpfs     183M     0  183M   0% /run/user/0
/dev/sr0       iso9660   4.3G  4.3G     0 100% /run/media/ly/CentOS 7 x86_64
/dev/md5       xfs        40G   33M   40G   1% /2
[root@localhost ~]# mdadm -D /dev/md5        #View the RAID disk details
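Optionally, the mount can be made persistent across reboots with an /etc/fstab entry. This is only a sketch using the /2 mount point and device path from above; a UUID-based entry (obtained with blkid /dev/md5) is more robust, since md device names can change between boots.

echo '/dev/md5  /2  xfs  defaults  0 0' >> /etc/fstab   # persistent mount entry (sketch)
mount -a                                                # verify the fstab entry mounts without errors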
- Simulate failure and recover
mdadm /dev/md5 -f /dev/sdb1    #Simulate a failure of /dev/sdb1
mdadm -D /dev/md5              #Check: the spare sde1 has replaced sdb1
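After the hot spare has taken over, the failed member can be taken out of the array and, once repaired or replaced, added back as the new hot spare. A small sketch using the same device names as above:

mdadm /dev/md5 -r /dev/sdb1    # remove the failed /dev/sdb1 from the array
mdadm /dev/md5 -a /dev/sdb1    # add it back; it becomes the new hot spare
cat /proc/mdstat               # check the array and any resync progress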
2. Create RAID 10
- Check whether the mdadm package is installed and install it if it is not. Then use fdisk to partition each disk and mark the partition type ID as "fd".
rpm -q mdadm              #Check whether the mdadm package is installed
yum install -y mdadm      #Install mdadm
fdisk /dev/sdb            #Partition the disks
fdisk /dev/sdc
fdisk /dev/sdd
fdisk /dev/sde
- Create the RAID devices
[root@localhost ~]# mdadm -Cv /dev/md0 -l1 -n2 /dev/sd[bc]1    #Create the first RAID 1
mdadm: Note: this array has metadata at the start and may not be suitable as a boot device.
    If you plan to store '/boot' on this device please ensure that your boot-loader
    understands md/v1.x metadata, or use --metadata=0.90
mdadm: size set to 20954112K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@localhost ~]# mdadm -Cv /dev/md1 -l1 -n2 /dev/sd[de]1    #Create the second RAID 1
[root@localhost ~]# mdadm -Cv /dev/md10 -l0 -n2 /dev/md[01]    #Combine the two RAID 1 arrays into a RAID 0
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.
#RAID 10 means building two RAID 1 mirrors first and then striping them with RAID 0.
#Building the RAID 0 first (RAID 0+1) behaves differently, and data may be lost.
- Create and mount the file system
[root@localhost ~]# mkfs -t xfs /dev/md10    #Only md10 needs to be formatted here
[root@localhost ~]# mkdir /1
[root@localhost ~]# mount /dev/md10 /1       #Mount it and check the mounts
[root@localhost ~]# df -Th
[root@localhost ~]# mdadm -D /dev/md10       #View the disk information
- Simulate a failure
[root@localhost ~]# mdadm /dev/md0 -f /dev/sdb1    #A failure can only be simulated inside one of the two RAID 1 groups (md0 or md1)
mdadm: set /dev/sdb1 faulty in /dev/md0
[root@localhost ~]# mdadm /dev/md0 -r /dev/sdb1    #Remove the failed disk
[root@localhost ~]# mdadm /dev/md0 -a /dev/sdb1    #Add it back and let the mirror rebuild
[root@localhost ~]# mdadm -D /dev/md10             #The capacity of md10 is twice that of a single mirror (md0 or md1)
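To follow the mirror resynchronisation after /dev/sdb1 is added back, the rebuild can be watched; a small sketch with the device names used above:

watch -n 5 'cat /proc/mdstat'    # refresh the resync progress of all md devices
mdadm -D /dev/md0                # shows the array State and, while rebuilding, a Rebuild Status line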
5, RAID array management and device recovery
1. Steps
- Scan or view disk array information
- Start / stop the RAID array
- Device recovery operation: simulate an array device failure, replace the failed device, and recover the data
2. mdadm command
- Common options:
-r: Remove device
-a: Add device
-S: Stop RAID
-A: Start RAID
-f: Simulate a fault
mdadm /dev/md0 -f /dev/sdc1    ## Mark the specified disk as failed
mdadm /dev/md0 -r /dev/sdc1    ## Remove the specified disk
mdadm /dev/md0 -a /dev/sdc1    ## Add the specified disk
- Create the /etc/mdadm.conf configuration file to make managing the soft RAID configuration easier
[root@localhost ~]# echo 'DEVICE /dev/sdc1 /dev/sdb1 /dev/sdd1 /dev/sde1' > /etc/mdadm.conf    #Write the member devices to the configuration file
[root@localhost ~]# mdadm --detail --scan >> /etc/mdadm.conf    #Scan and append the array definition
[root@localhost ~]# umount /dev/md5     #Unmount
[root@localhost ~]# mdadm -S /dev/md5   #Stop the RAID
mdadm: stopped /dev/md5
[root@localhost ~]# mdadm -As /dev/md5  #Start the RAID again
mdadm: /dev/md5 has been started with 3 drives and 1 spare.