1, RAID0 build
RAID0 (striping): roughly N times the speed and N times the capacity of a single disk, but no redundancy, so data safety is poor.
Experimental preparation: use the /dev/sdb2 and /dev/sdb3 partitions to simulate two disks and build the RAID0 array (the two partitions should be the same size if possible).
[root@local ~]# lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb      8:16   0  10G  0 disk
├─sdb1   8:17   0   2G  0 part
├─sdb2   8:18   0   1G  0 part
├─sdb3   8:19   0   1G  0 part
1. Install mdadm
yum -y install mdadm
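Optionally verify that the tool is in place before continuing:
mdadm --version    # print the installed mdadm version
rpm -q mdadm       # confirm the package is installed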
2. Start creating raid0
[root@local ~]# mdadm --create /dev/md0 -l 0 -n 2 /dev/sdb2 /dev/sdb3
mdadm: Fail to create md0 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
//--create creates a software RAID array
//-l specifies the RAID level
//-n specifies the number of member devices
Check whether the array was created successfully:
[root@local ~]# lsblk
NAME     MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sdb        8:16   0  10G  0 disk
├─sdb1     8:17   0   2G  0 part
├─sdb2     8:18   0   1G  0 part
│ └─md0    9:0    0   2G  0 raid0
├─sdb3     8:19   0   1G  0 part
│ └─md0    9:0    0   2G  0 raid0
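The kernel also reports array status in /proc/mdstat, which is a quick way to confirm the array is running:
cat /proc/mdstat    # lists active arrays, their member devices and their sync state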
View details of md0
[root@local ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Oct 26 20:14:31 2021
        Raid Level : raid0
        Array Size : 2093056 (2044.00 MiB 2143.29 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
       Update Time : Tue Oct 26 20:14:31 2021
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
        Chunk Size : 512K
Consistency Policy : unknown
              Name : local:0  (local to host local)
              UUID : b85fbe78:3647933d:30fa312c:d5acb942
            Events : 0
    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       19        1      active sync   /dev/sdb3
Partition, format, and mount the RAID0 array md0:
fdisk /dev/md0                // partition /dev/md0 (a scripted sketch of this interactive step follows the output below)
mkfs.xfs /dev/md0p1           // format the new partition
mount /dev/md0p1 /root/u01    // mount /dev/md0p1 for use
[root@local ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0p1     1018M   33M  986M   4% /root/u01
[root@local ~]# lsblk
NAME        MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sdb           8:16   0  10G  0 disk
├─sdb1        8:17   0   2G  0 part
├─sdb2        8:18   0   1G  0 part
│ └─md0       9:0    0   2G  0 raid0
│   └─md0p1 259:0    0   1G  0 md    /root/u01
├─sdb3        8:19   0   1G  0 part
│ └─md0       9:0    0   2G  0 raid0
│   └─md0p1 259:0    0   1G  0 md    /root/u01
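The fdisk step above is interactive. As a rough sketch (assuming a single primary partition spanning the whole array; prompts differ slightly between fdisk versions), the same answers can be fed on stdin:
# answers: n = new partition, p = primary, 1 = partition number,
# two empty lines accept the default first/last sector, w = write and quit
fdisk /dev/md0 <<EOF
n
p
1


w
EOF
partprobe /dev/md0    # push the new partition table to the kernel, as in section 6 below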
2, RAID1 build
RAID1 (mirroring): the speed is roughly that of a single disk (half of the two disks combined), the usable capacity is only half, and data safety is the best.
Experimental preparation: use the /dev/sdb2 and /dev/sdb3 partitions to simulate two disks and build the RAID1 array (the two partitions should be the same size if possible). In addition, one of the partitions has its id set to fd (see section 6, Mark a disk partition as fd).
[root@local ~]# mdadm -E /dev/sdb
/dev/sdb:
   MBR Magic : aa55
Partition[0] :  4194304 sectors at    2048 (type 83)
Partition[1] :  2097152 sectors at 4196352 (type fd)
Partition[2] :  2097152 sectors at 6293504 (type 83)
1. Start creating raid1
[root@local ~]# mdadm --create /dev/md1 -l 1 -n 2 /dev/sdb2 /dev/sdb3
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Fail to create md1 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
2. View md1 information
[root@local ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Oct 26 21:02:58 2021
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
       Update Time : Tue Oct 26 21:03:03 2021
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
Consistency Policy : unknown
              Name : local:1  (local to host local)
              UUID : ccd8eb1b:64d210e4:2d1e081d:c969553a
            Events : 17
    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       19        1      active sync   /dev/sdb3
3. Format md1: mkfs.xfs /dev/md1
4. Mount md1: mount /dev/md1 /root/u01
5. Write the configuration file: mdadm -Ds >> /etc/mdadm.conf
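The line appended by mdadm -Ds should look roughly like the following (the UUID is the one reported by mdadm -D above):
ARRAY /dev/md1 metadata=1.2 name=local:1 UUID=ccd8eb1b:64d210e4:2d1e081d:c969553a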
6. Test whether raid1 works
1). Create a new file: echo "hi" >> /root/u01/file1
2). Mark a disk as failed:
[root@local ~]# mdadm /dev/md1 -f /dev/sdb2
mdadm: set /dev/sdb2 faulty in /dev/md1
3). Check whether the file is damaged:
[root@local ~]# cat /root/u01/file1
hi
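To bring the mirror back to a healthy state after the test, the failed member can be removed and re-added (the same -r and -a operations used in the RAID5 section below):
mdadm /dev/md1 -r /dev/sdb2    # hot-remove the faulty member
mdadm /dev/md1 -a /dev/sdb2    # add it back; the mirror rebuilds automatically
cat /proc/mdstat               # watch the resync progress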
3, RAID5 build
RAID5: roughly (N-1) times the speed and (N-1) times the capacity of a single disk. Reads are somewhat faster, writes are relatively slow because parity must be calculated, but overall it gives the best balance of safety and disk utilization. For example, three 1G partitions yield about 2G of usable space.
Experimental preparation: use the /dev/sdb2, /dev/sdb3 and /dev/sdb6 partitions to simulate three disks and build the RAID5 array (the three partitions should be the same size if possible).
[root@local ~]# lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb      8:16   0  10G  0 disk
├─sdb1   8:17   0   2G  0 part
├─sdb2   8:18   0   1G  0 part
├─sdb3   8:19   0   1G  0 part
├─sdb4   8:20   0   1K  0 part
├─sdb5   8:21   0   2G  0 part
├─sdb6   8:22   0   1G  0 part
1. Start creating raid5
[root@local ~]# mdadm --create /dev/md5 -l 5 -n 3 /dev/sdb{2,3,6}
mdadm: Fail to create md5 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
2. Format
mkfs.xfs /dev/md5
If formatting fails, use mkfs.xfs -f /dev/md5 (-f forces the format). If the underlying partitions still hold important data, do not force-format them.
3. Mount
mount /dev/md5 /root/u01
[root@local ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md5        2.0G   33M  2.0G   2% /root/u01
[root@local ~]# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sdb       8:16   0  10G  0 disk
├─sdb1    8:17   0   2G  0 part
├─sdb2    8:18   0   1G  0 part
│ └─md5   9:5    0   2G  0 raid5 /root/u01
├─sdb3    8:19   0   1G  0 part
│ └─md5   9:5    0   2G  0 raid5 /root/u01
├─sdb4    8:20   0   1K  0 part
├─sdb5    8:21   0   2G  0 part
├─sdb6    8:22   0   1G  0 part
│ └─md5   9:5    0   2G  0 raid5 /root/u01
4. Write configuration file
mdadm -Ds >> /etc/mdadm.conf
5. Test
1). Create a test file: echo "hi" >> /root/u01/file1
2). Mark a disk as failed:
[root@local ~]# mdadm /dev/md5 -f /dev/sdb2
mdadm: set /dev/sdb2 faulty in /dev/md5
[root@local ~]# mdadm -D /dev/md5    // view md5 information
/dev/md5:
           Version : 1.2
     Creation Time : Tue Oct 26 22:23:42 2021
        Raid Level : raid5
        Array Size : 2093056 (2044.00 MiB 2143.29 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent
       Update Time : Tue Oct 26 22:25:53 2021
             State : clean, degraded
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 1        // one disk has failed
     Spare Devices : 0
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : unknown
              Name : local:5  (local to host local)
              UUID : 8fc9a70d:3e562690:44ff8b0e:a64ee89d
            Events : 26
    Number   Major   Minor   RaidDevice State
       -       0       0        0      removed
       1       8       19       1      active sync   /dev/sdb3
       3       8       22       2      active sync   /dev/sdb6
       0       8       18       -      faulty   /dev/sdb2
3). Check whether the file is damaged:
[root@local ~]# cat /root/u01/file1
hi
4). Remove the failed disk:
[root@local ~]# mdadm /dev/md5 -r /dev/sdb2
mdadm: hot removed /dev/sdb2 from /dev/md5
[root@local ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Tue Oct 26 22:23:42 2021
        Raid Level : raid5
        Array Size : 2093056 (2044.00 MiB 2143.29 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 3
     Total Devices : 2
       Persistence : Superblock is persistent
       Update Time : Tue Oct 26 22:29:03 2021
             State : clean, degraded
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0        // the failed disk has been removed
     Spare Devices : 0
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : unknown
              Name : local:5  (local to host local)
              UUID : 8fc9a70d:3e562690:44ff8b0e:a64ee89d
            Events : 31
    Number   Major   Minor   RaidDevice State
       -       0       0        0      removed
       1       8       19       1      active sync   /dev/sdb3
       3       8       22       2      active sync   /dev/sdb6
5). Add a new disk (it joins as a spare and rebuilds):
[root@local ~]# mdadm /dev/md5 -a /dev/sdb1
mdadm: added /dev/sdb1
[root@local ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Tue Oct 26 22:23:42 2021
        Raid Level : raid5
        Array Size : 2093056 (2044.00 MiB 2143.29 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent
       Update Time : Tue Oct 26 22:30:27 2021
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1        // one spare disk
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : unknown
    Rebuild Status : 83% complete
              Name : local:5  (local to host local)
              UUID : 8fc9a70d:3e562690:44ff8b0e:a64ee89d
            Events : 46
    Number   Major   Minor   RaidDevice State
       4       8       17       0      spare rebuilding   /dev/sdb1
       1       8       19       1      active sync   /dev/sdb3
       3       8       22       2      active sync   /dev/sdb6
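The rebuild above is at 83% complete; its progress can also be followed live from /proc/mdstat, for example:
watch -n 1 cat /proc/mdstat    # refresh the RAID status every second until the recovery line disappears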
4, RAID10 build
RAID10: RAID1 + RAID0 combined. It is both safe and fast, but the usable capacity is only half of the total, and the speed is roughly N/2 times that of a single disk.
Experimental preparation: /dev/sdb2 and /dev/sdb3 form one RAID1; /dev/sdb6 and /dev/sdb7 form another RAID1; the two RAID1 arrays are then combined into a RAID0. /dev/sdb2 and /dev/sdb6 are marked with partition id fd here.
1. Create raid10
[root@local ~]# mdadm --create /dev/md1 -l 1 -n 2 /dev/sdb{2,3}    // sdb2 and sdb3 form the first raid1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Fail to create md1 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@local ~]# mdadm --create /dev/md2 -l 1 -n 2 /dev/sdb{6,7}    // sdb6 and sdb7 form the second raid1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Fail to create md2 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
[root@local ~]# mdadm --create /dev/md10 -l 0 -n 2 /dev/md{1,2}    // the two raid1 arrays form a raid0
mdadm: Fail to create md10 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.
2. View the information of raid10
[root@local ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Tue Oct 26 22:46:19 2021
        Raid Level : raid0
        Array Size : 2088960 (2040.00 MiB 2139.10 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
       Update Time : Tue Oct 26 22:46:19 2021
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
        Chunk Size : 512K
Consistency Policy : unknown
              Name : local:10  (local to host local)
              UUID : cd9f35d9:7edf8d10:67f72109:c89f630a
            Events : 0
    Number   Major   Minor   RaidDevice State
       0       9       1        0      active sync   /dev/md1
       1       9       2        1      active sync   /dev/md2
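mdadm can also build the same layout in one step with its native raid10 level instead of nesting two RAID1 arrays inside a RAID0; a minimal sketch using the same four partitions:
mdadm --create /dev/md10 -l 10 -n 4 /dev/sdb{2,3,6,7}    # -l 10 = native RAID10, -n 4 = four member devices
The nested approach used in this article produces the same result and makes the RAID1 + RAID0 structure explicit.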
3. Format
mkfs.xfs /dev/md10
4. Mount and use
[root@local ~]# mount /dev/md10 /root/u01
[root@local ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md10       2.0G   33M  2.0G   2% /root/u01
5. Prepare configuration file
mdadm -Ds >> /etc/mdadm.conf
6. Auto mount after startup
vim /etc/fstab
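For example, a line like the following mounts md10 at boot (mount point and options follow this article's setup; adjust as needed):
/dev/md10    /root/u01    xfs    defaults    0 0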
5, Delete a soft RAID in Linux
1. Unmount the array: umount /dev/md0
2. Stop the device: mdadm -S /dev/md0
3. Erase the RAID superblock on the member partitions: mdadm --misc --zero-superblock /dev/sdb2 /dev/sdb3
4. Delete the configuration file: rm -f /etc/mdadm.conf
5. If the mount was written into the auto-mount configuration file /etc/fstab earlier, delete that line as well.
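Put together for the RAID0 array from section 1 (a sketch; substitute your own array name and member partitions):
umount /dev/md0                                      # 1. unmount
mdadm -S /dev/md0                                    # 2. stop the array
mdadm --misc --zero-superblock /dev/sdb2 /dev/sdb3   # 3. wipe the RAID metadata on the members
rm -f /etc/mdadm.conf                                # 4. remove the saved configuration
cat /proc/mdstat                                     # confirm that no arrays remain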
6, Mark a disk partition as fd
[root@local ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them to disk.
Be careful before using the write command.

Command (m for help): p

Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x212ba3bc

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     4196351     2097152   83  Linux
/dev/sdb2         4196352     6293503     1048576   83  Linux
/dev/sdb3         6293504     8390655     1048576   83  Linux

Command (m for help): t
Partition number (1-3, default 3): 2
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): p

Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x212ba3bc

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     4196351     2097152   83  Linux
/dev/sdb2         4196352     6293503     1048576   fd  Linux raid autodetect
/dev/sdb3         6293504     8390655     1048576   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Synchronizing disks.

[root@local ~]# partprobe /dev/sdb    // push the new partition table to the kernel
[root@local ~]# mdadm -E /dev/sdb
/dev/sdb:
   MBR Magic : aa55
Partition[0] :  4194304 sectors at    2048 (type 83)
Partition[1] :  2097152 sectors at 4196352 (type fd)
Partition[2] :  2097152 sectors at 6293504 (type 83)
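If you prefer a non-interactive way to change the partition id, sfdisk can do it; the option name differs between util-linux versions, so treat this as a sketch:
sfdisk --change-id /dev/sdb 2 fd    # older util-linux (e.g. 2.23): set partition 2 of /dev/sdb to type fd
# on newer util-linux the equivalent is: sfdisk --part-type /dev/sdb 2 fd
partprobe /dev/sdb                  # push the new partition table to the kernel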