Linux disk management -- XFS file system, symbolic links & hard links

9, XFS file system

XFS file system configuration

XFS is a journaling file system. It became the default because it was designed from the start for large-capacity disks and high performance, which suits today's environments well, and it offers almost all of the features of the EXT4 file system.

In terms of on-disk layout, an XFS file system is divided into three parts: the data section, the log (journal) section, and the realtime section. The details are as follows:

1 Data section

This area holds essentially the same structures as the EXT family described earlier: inodes, blocks, the superblock, and so on. The data section is divided into multiple allocation groups, each comparable to an ext block group. Every allocation group contains a copy of the file system superblock, free-space management structures, and inode allocation and tracking. Moreover, inodes and blocks are allocated dynamically, only when the system actually needs them, so formatting is much faster than with the EXT family.

In short, you can think of an allocation group in the data section as the ext block group, except that inodes and blocks are generated dynamically rather than all laid out at format time.
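After formatting, this geometry (allocation groups, block size, log and realtime sections) can be reviewed at any time with xfs_info; the mount point here is illustrative:

xfs_info /mnt    # print isize, agcount, agsize, bsize, log and realtime geometry for a mounted XFS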

2 Log section (journal)

This area records changes (metadata updates) in the file system before they are committed; the principle is the same as journaling described earlier.

Because every change the system makes is recorded here, disk activity in this area is quite frequent. A nice subtlety in the XFS design is that the log and the data can be separated: you can specify an external device as the log device of an XFS file system. For example, an SSD can serve as the log device for a file system on slower disks, so that metadata-heavy activity completes faster (a concrete mkfs/mount example appears in the "Log separation" section below).

3 Realtime section

When a file is created, XFS finds one or more extents in this section, places the file data there, and only after allocation records it in the inode and block structures of the data section. The extent size must be chosen at format time: the minimum is 4 KB and the maximum is 1 GB. For ordinary (non-RAID) disks the default is 64 KB, while for striped devices such as disk arrays it is recommended to set the extent size equal to the stripe width. It is best not to change this extent size casually, because it can affect physical disk performance.
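For reference, mkfs.xfs prints output of the following shape (a representative example; the device name /dev/vda4 and the exact values are illustrative, chosen to match the explanation that follows):

meta-data=/dev/vda4              isize=256    agcount=4, agsize=32000 blks
         =                       sectsz=512   attr=2, projid32bit=1
data     =                       bsize=4096   blocks=128000, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=853, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0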

The fields in this output are explained below:

isize: the size of each inode, 256 bytes here.

agcount: the number of allocation groups; there are 4 here.

agsize: the number of blocks in each allocation group, 32000 here.

sectsz: the size of a logical sector, 512 bytes here.

bsize: the size of each block, 4 KB here.

blocks: the total number of blocks in this file system, 128000 here.

sunit, swidth: closely related to the stripe of a disk array; not covered here.

internal: the log section lives inside the file system rather than on an external device, occupying 4 KB x 853 blocks of space.

realtime: the realtime section. The extent size (extsz) is 4 KB; "none" means it is not currently in use.

XFS uses write barriers. When the device has a volatile write cache, keep barriers enabled to guarantee data safety; if there is no such cache (or it is battery-backed), the barrier can be turned off at mount time. (On newer kernels the barrier/nobarrier mount options are deprecated and barriers are always used.)

mount -o nobarrier /dev/device /mount/point

barrier

The idea behind barriers is simple: before new data blocks are written to disk, the metadata is written to the log. Writing metadata to the log first guarantees that if an error occurs before or after the real data is written, the log can be used to roll back to the state before the change. This keeps the file system consistent across crashes.

Note that when device mapper sits in the storage stack (logical volumes, software RAID, or multipath devices), barriers may not be honored, because older device mapper layers do not pass barrier requests through.

xfs_quota

XFS supports three kinds of quotas, enabled via mount options:

usrquota: per-user quotas

grpquota: per-group quotas

prjquota: per-project (directory) quotas
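The quota options only take effect at mount time. To make them persistent across reboots they can go into /etc/fstab (a sketch matching the mount example below):

/dev/sda3  /mnt  xfs  defaults,usrquota,grpquota  0 0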

[root@localhost ~]# mount -o usrquota,grpquota /dev/sda3 /mnt/

[root@localhost ~]# xfs_quota -x -c 'report' /mnt/

User quota on /mnt (/dev/sda3)
                               Blocks
User ID          Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
root                4          0          0     00 [--------]

Group quota on /mnt (/dev/sda3)
                               Blocks
Group ID         Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
root                4          0          0     00 [--------]



[root@localhost ~]# xfs_quota -x -c 'limit bsoft=50K bhard=100K robin' /mnt    # set soft/hard block limits for user robin

[root@localhost ~]# xfs_quota -x -c 'report' /mnt/

User quota on /mnt (/dev/sda3)
                               Blocks
User ID          Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
root                4          0          0     00 [--------]
robin               0         52        100     00 [--------]

Group quota on /mnt (/dev/sda3)
                               Blocks
Group ID         Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
root                4          0          0     00 [--------]

(Limits are reported in 1 KB units and rounded up to the 4 KB block size, which is why bsoft=50K appears as 52.)





xfs_quota -x -c 'limit -g bsoft=1000m bhard=1200m accounting' /target/path    # group quota: limit the group "accounting"
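To confirm that the group limit took effect, the report can be restricted to group quotas (same illustrative path as above):

xfs_quota -x -c 'report -g' /target/path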

project quota (directory quota)

mkdir /quota    # create the directory to be limited

mount -o prjquota /dev/sda8 /quota    # mount with project quota enabled; prjquota cannot be combined with grpquota

mkdir /quota/test    # create a test directory

mount    # confirm that the prjquota option is active

/dev/sda8 on /quota type xfs (rw,relatime,attr2,inode64,prjquota)

[root@localhost ~]# echo 50:/quota/test >> /etc/projects    # map project ID 50 to the directory

[root@localhost ~]# echo test:50 >> /etc/projid    # map project name to project ID

[root@localhost ~]# cat /etc/projects

50:/quota/test

[root@localhost ~]# cat /etc/projid

test:50

xfs_quota -x -c 'project -s -p /quota/test 50'    # initialize the project: tag the directory tree with project ID 50

xfs_quota -x -c 'limit -p bhard=100M 50' /quota    # set a hard limit on the project (directory)
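To verify that the directory quota is enforced, try writing past the limit (a quick sanity check; the file name bigfile is made up):

dd if=/dev/zero of=/quota/test/bigfile bs=1M count=200    # should stop with a "No space left on device" style error once the 100M hard limit is reached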

Query

xfs_quota -x -c 'report' /quota    # report project usage and limits

XFS limitations

1. XFS is a single-node file system; if multiple nodes need concurrent access, consider GFS2 instead.

2. XFS supports file systems up to 16 EB, although Red Hat supports only up to 100 TB.

3. XFS is less suitable for single-threaded, metadata-intensive workloads; other file systems (e.g. ext4) perform better when a single thread creates and deletes large numbers of small files.

4. XFS can use roughly twice the CPU resources of ext4 for metadata operations; when CPU is the bottleneck, evaluate other file systems.

5. XFS shines on large file systems with fast storage; ext4 performs better on small file systems or when storage bandwidth is limited.

[root@node6 ~]# yum install xfsprogs -y

[root@node6 ~]# mkfs.xfs /dev/vdb1

meta-data=/dev/vdb1              isize=256    agcount=4, agsize=6016 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=24064, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=1200, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
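After formatting, the file system can be mounted and verified; /xfs is the mount point used in the sections that follow (created here if it does not exist):

mkdir -p /xfs
mount /dev/vdb1 /xfs
df -hT /xfs    # confirm the type is xfs and the size matches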

Log separation (separate disks for data and logs)

[root@node6 ~]# mkfs.xfs -f -l logdev=/dev/vdb2 /dev/vdb1

meta-data=/dev/vdb1              isize=256    agcount=4, agsize=6016 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=24064, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =/dev/vdb2              bsize=4096   blocks=24576, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Mounting (the external log device must be specified at mount time):

[root@node6 ~]# mount -o logdev=/dev/vdb2 /dev/vdb1 /mnt/

Repair file system

The file system must be unmounted before repair; xfs_repair plays the role that fsck/e2fsck play for the ext family (run it with -n first for a read-only check, as demonstrated below):

[root@node6 ~]# umount /xfs

[root@node6 ~]# xfs_repair /dev/vgxfs/lvxfs

Disk defragmentation

[root@node6 ~]# mkfs.xfs -l logdev=/dev/vdb2 /dev/vdb1

[root@node6 ~]# mount -o logdev=/dev/vdb2 /dev/vdb1 /xfs

[root@node6 ~]# for FILE in file{0..3} ; do dd if=/dev/zero of=/xfs/${FILE} bs=4M count=100 & done    # write four 400 MB files in parallel to induce fragmentation

[root@node6 ~]# filefrag /xfs/file*    # show how many extents each file occupies

[root@node6 ~]# xfs_db -c frag -r /dev/vdb1    # report the file system's current fragmentation factor

[root@node6 ~]# xfs_fsr -v
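Run without arguments, xfs_fsr reorganizes all mounted, writable XFS file systems recorded in the mount table; to defragment just one file system, pass its device or mount point:

xfs_fsr -v /dev/vdb1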





[root@node6 ~]# umount /xfs



[root@node6 ~]# xfs_repair -n -l /dev/vdb2 /dev/vdb1    # -n: no-modify (dry-run) check; -l names the external log device
Phase 1 - find and verify superblock...
Phase 2 - using external log on /dev/vdb2
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
[root@node6 ~]#

[root@node6 ~]# xfs_repair -l /dev/vdb2 /dev/vdb1    # the actual repair (without -n)
Phase 1 - find and verify superblock...
Phase 2 - using external log on /dev/vdb2
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done

A differential backup always captures everything changed since the full backup; an incremental backup captures only what changed since the previous backup. In xfsdump terms, a level-N dump saves what changed since the most recent dump at a lower level: an incremental chain uses increasing levels (0, 1, 2, 3, ...), while a differential scheme re-uses level 1 after the level-0 full. Both schemes are demonstrated below.

Backups

[root@node6 ~]# mount -o logdev=/dev/vdb2 /dev/vdb1 /xfs

[root@node6 ~]# yum install xfsdump



[root@node6 ~]# xfsdump -L full -M dumpfile -l 0 - /xfs | xz > /tmp/xfs.$(date +%Y%m%d).0.xz

# full backup: -L session label, -M media label, -l dump level (0 = full); '-' writes the dump to stdout

xfsdump: using file dump (drive_simple) strategy

xfsdump: version 3.0.4 (dump format 3.0) - Running single-threaded

xfsdump: level 0 dump of node6.uplooking.com:/xfs

xfsdump: dump date: Sat Sep 14 17:39:47 2013

xfsdump: session id: 75f91e6b-c0bc-4ad1-978b-e2ee5deb01d4

xfsdump: session label: "full"

xfsdump: ino map phase 1: constructing initial dump list

xfsdump: ino map phase 2: skipping (no pruning necessary)

xfsdump: ino map phase 3: skipping (only one dump stream)

xfsdump: ino map construction complete

xfsdump: estimated dump size: 1677743680 bytes

xfsdump: /var/lib/xfsdump/inventory created

xfsdump: creating dump session media file 0 (media 0, file 0)

xfsdump: dumping ino map

xfsdump: dumping directories

xfsdump: dumping non-directory files

xfsdump: ending media file

xfsdump: media file size 1678152296 bytes

xfsdump: dump size (non-dir files) : 1678101072 bytes

xfsdump: dump complete: 152 seconds elapsed

xfsdump: Dump Status: SUCCESS

[root@node6 ~]#

[root@node6 ~]# xfsdump -I
file system 0:
        fs id:          467c218c-22b5-45bc-9b0e-cd5782be6e2e
        session 0:
                mount point:    node6.uplooking.com:/xfs
                device:         node6.uplooking.com:/dev/vdb1
                time:           Sat Sep 14 17:39:47 2013
                session label:  "full"
                session id:     75f91e6b-c0bc-4ad1-978b-e2ee5deb01d4
                level:          0
                resumed:        NO
                subtree:        NO
                streams:        1
                stream 0:
                        pathname:       stdio
                        start:          ino 131 offset 0
                        end:            ino 135 offset 0
                        interrupted:    NO
                        media files:    1
                        media file 0:
                                mfile index:    0
                                mfile type:     data
                                mfile size:     1678152296
                                mfile start:    ino 131 offset 0
                                mfile end:      ino 135 offset 0
                                media label:    "dumpfile"
                                media id:       de67b2b5-db72-4555-9804-a050829b2179
xfsdump: Dump Status: SUCCESS



[root@node6 ~]# rm -rf /xfs/*    # simulate data loss

[root@node6 ~]# xzcat /tmp/xfs.20130914.0.xz | xfsrestore - /xfs    # full restore; '-' reads the dump from stdin

xfsrestore: using file dump (drive_simple) strategy

xfsrestore: version 3.0.4 (dump format 3.0) - Running single-threaded

xfsrestore: searching media for dump

xfsrestore: examining media file 0

xfsrestore: dump description:

xfsrestore: hostname: node6.uplooking.com

xfsrestore: mount point: /xfs

xfsrestore: volume: /dev/vdb1

xfsrestore: session time: Sat Sep 14 17:39:47 2013

xfsrestore: level: 0

xfsrestore: session label: "full"

xfsrestore: media label: "dumpfile"

xfsrestore: file system id: 467c218c-22b5-45bc-9b0e-cd5782be6e2e

xfsrestore: session id: 75f91e6b-c0bc-4ad1-978b-e2ee5deb01d4

xfsrestore: media id: de67b2b5-db72-4555-9804-a050829b2179

xfsrestore: searching media for directory dump

xfsrestore: reading directories

xfsrestore: 1 directories and 4 entries processed

xfsrestore: directory post-processing

xfsrestore: restoring non-directory files

xfsrestore: restore complete: 33 seconds elapsed

xfsrestore: Restore Status: SUCCESS

[root@node6 ~]# ls /xfs

file0 file1 file2 file3

Full backup

echo aaaaa >> /mnt/a.txt

xfsdump -L all -M dumpfile -l 0 - /mnt | xz > /home/xfs.$(date +%Y%m%d).all0.xz

xzcat /home/xfs.20170609.all0.xz | xfsrestore -t -    # list the dump's table of contents

Incremental backups

echo bbbbbbb > /mnt/b.txt

xfsdump -L add -M dumpfile -l 1 - /mnt | xz > /home/xfs.$(date +%Y%m%d).add1.xz    # level 1: changes since the level-0 full

xzcat /home/xfs.20170609.add1.xz | xfsrestore -t -

echo ccccc > /mnt/c.txt

xfsdump -L add -M dumpfile -l 2 - /mnt | xz > /home/xfs.$(date +%Y%m%d).add2.xz    # level 2: changes since level 1

xzcat /home/xfs.20170609.add2.xz | xfsrestore -t -

echo dddddd > /mnt/d.txt

xfsdump -L add -M dumpfile -l 3 - /mnt | xz > /home/xfs.$(date +%Y%m%d).add3.xz    # level 3: changes since level 2

xzcat /home/xfs.20170609.add3.xz | xfsrestore -t -

Differential backup

xfsdump -L cha -M dumpfile -l 1 - /mnt | xz > /home/xfs.$(date +%Y%m%d).cha1.xz    # level 1 again: everything changed since the level-0 full

xzcat /home/xfs.20170609.cha1.xz | xfsrestore -t -

Full recovery + incremental recovery (restore the full dump first, then each incremental in order)

xzcat /home/xfs.20170609.all0.xz | xfsrestore - /mnt/

ls /mnt

xzcat /home/xfs.20170609.add1.xz | xfsrestore - /mnt/

ls /mnt

xzcat /home/xfs.20170609.add2.xz | xfsrestore - /mnt/

ls /mnt

xzcat /home/xfs.20170609.add3.xz | xfsrestore - /mnt/

ls /mnt

Full recovery + differential recovery (only the full dump and the latest differential are needed)

xzcat /home/xfs.20170609.all0.xz | xfsrestore - /mnt/

ls /mnt

xzcat /home/xfs.20170609.cha1.xz | xfsrestore - /mnt/

ls /mnt

Clear backup records

rm -rf /var/lib/xfsdump/inventory/*    # xfsdump tracks every session here; clearing it resets the dump-level history

10, Symbolic links and hard links

Symbolic link (soft link)

[root@localhost ~]# ln -s /root/aa.txt /tmp/aa_soft.txt

[root@localhost ~]# ll /tmp/aa_soft.txt

lrwxrwxrwx 1 root root 12 Aug 9 13:41 /tmp/aa_soft.txt -> /root/aa.txt

Hard link

[root@localhost ~]# ln /root/aa.txt /tmp/aa_hard.txt

[root@localhost ~]# ll /tmp/aa_hard.txt

-rw-r--r-- 2 root root 4 Aug 9 13:40 /tmp/aa_hard.txt

Note the link count (the 2 after the permission bits): both names now refer to the same inode.

Differences between symbolic links and hard links

1. They are created with different options:

ln -s /root/aa.txt /tmp/aa_soft.txt    # symbolic link

ln /root/aa.txt /tmp/aa_hard.txt    # hard link

2. If the source file is renamed or deleted, a symbolic link breaks (it dangles); a hard link still gives access to the data.

3. A symbolic link's permission bits always show as 777 (lrwxrwxrwx); a hard link's permissions match the source file's:

[root@localhost ~]# ll /root/aa.txt
-rw-r--r-- 2 root root 4 Aug 9 13:40 /root/aa.txt
[root@localhost ~]# ll /tmp/aa_soft.txt
lrwxrwxrwx 1 root root 12 Aug 9 13:41 /tmp/aa_soft.txt -> /root/aa.txt
[root@localhost ~]# ll /tmp/aa_hard.txt
-rw-r--r-- 2 root root 4 Aug 9 13:40 /tmp/aa_hard.txt

4. A symbolic link's inode number differs from the source file's; a hard link's is identical:

[root@localhost ~]# ls -i /root/aa.txt
35261258 /root/aa.txt
[root@localhost ~]# ls -i /tmp/aa_soft.txt
17462480 /tmp/aa_soft.txt
[root@localhost ~]# ls -i /tmp/aa_hard.txt
35261258 /tmp/aa_hard.txt

5. A symbolic link can point to a directory; a hard link cannot:

[root@localhost ~]# ln -s /root/ /tmp/new_root
[root@localhost ~]# ln /root/ /tmp/new_root2
ln: '/root/': hard link not allowed for directory

6. A symbolic link can cross file systems; a hard link cannot:

[root@localhost ~]# ln -s /root/aa.txt /boot/
[root@localhost ~]# ll /boot/aa.txt
lrwxrwxrwx 1 root root 12 Aug 9 13:51 /boot/aa.txt -> /root/aa.txt
[root@localhost ~]# ln /root/aa.txt /boot/aa_hard.txt
ln: failed to create hard link '/boot/aa_hard.txt' => '/root/aa.txt': Invalid cross-device link

7. When creating a symbolic link, use an absolute path for the target (unless the source file and the link live in the same directory), because the target string is stored literally and resolved relative to the link's own directory; hard links work with either relative or absolute paths. The sketch below illustrates the pitfall.
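For illustration: a relative target is stored as a literal string and resolved from the directory containing the link, so the link dangles if that path does not exist there (aa_rel.txt is a made-up name):

[root@localhost ~]# ln -s aa.txt /tmp/aa_rel.txt    # stores the literal string 'aa.txt'
[root@localhost ~]# ll /tmp/aa_rel.txt    # dangling unless /tmp/aa.txt exists
lrwxrwxrwx 1 root root 6 Aug 9 13:55 /tmp/aa_rel.txt -> aa.txt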

Remove links

[root@localhost ~]# unlink /tmp/aa_soft.txt

[root@localhost ~]# unlink /tmp/aa_hard.txt
