MFS -- detailed explanation and deployment of MFS

Contents:

1, MFS details
  Distributed principle
  MFS principle
  Composition of MFS file system
  MFS read data processing
  MFS write data processing
  The process of deleting files in MFS
  MFS process of modifying file content
  MFS renaming process
  The process of MFS traversing files
2, Deployment of MFS
  Host preparation
  yum source
  master configuration
  chunk server configuration
  Client configuration
3, Preliminary use of mfs


1, MFS details

Distributed principle:

A distributed file system is one in which the physical storage resources managed by the file system are not necessarily attached to the local node, but are reached over a computer network. In short, scattered shared folders (spread across the computers in a LAN) are gathered into a single virtual shared folder. To access them, a user only has to open the virtual shared folder and can see every shared folder linked to it, without noticing that the files are actually scattered across different machines. The advantages of a distributed file system are centralized access, simplified operation, data disaster recovery, and improved file access performance.

MFS principle:

MFS (MooseFS) is a fault-tolerant, networked, distributed file system: it stores data on multiple physical servers but presents it to users as a single unified resource.

Composition of MFS file system

  • Metadata server (Master): manages the file system and maintains all of the metadata in the system.

  • MetaLogger: backs up the change-log files of the Master server (files of the form changelog_ml.*.mfs). If the Master's data is lost or damaged, the files kept on the log server can be used to repair it.

  • Chunk Server: the server that actually stores the data. Files are stored as chunks and replicated between the data servers; the more data servers there are, the more usable capacity, the higher the reliability, and the better the performance.

  • Client: mounts the MFS file system the same way an NFS share is mounted, and is used in the same way. The role-to-package mapping is sketched right after this list.
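
Each role maps to its own package in the MooseFS yum repository, which is how the cluster is installed later in this article (the metalogger package name comes from the same repository and is only listed for completeness; it is not deployed here):

yum install moosefs-master moosefs-cgi moosefs-cgiserv moosefs-cli -y   # metadata server (master) plus CGI monitor and CLI
yum install moosefs-metalogger -y                                       # metalogger (optional changelog backup of the master)
yum install moosefs-chunkserver -y                                      # chunk server
yum install moosefs-client -y                                           # client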

MFS read data processing:

  • The client sends a read request to the metadata server.
  • The metadata server tells the client where the required data is stored (the IP address of the Chunk Server and the chunk number); these locations can also be inspected with mfsfileinfo, as shown after this list.
  • The client requests the data from that Chunk Server.
  • The Chunk Server sends the data to the client.
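
The chunk locations that the master hands back in step 2 are exactly what mfsfileinfo prints on the client; for example, with the test file created later in this article:

[root@foundation1 mfs]# mfsfileinfo data2/fstab    # lists, per chunk, which chunk server (IP:9422) holds each copy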

MFS write data processing:

  • The client sends a write request to the metadata server.
  • The metadata server interacts with the Chunk Servers only when new chunks have to be created: it creates the new chunks on some of the chunk servers. Once the chunks have been created successfully, the metadata server tells the client which chunks on which Chunk Servers can be used to write the data.
  • The client writes the data to the specified Chunk Server.
  • That Chunk Server synchronizes the data with the other chunk servers (how many copies is governed by the goal, shown after this list); when the synchronization succeeds, it tells the client that the data was written successfully.
  • The client tells the metadata server that the write is complete.
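
How many chunk servers take part in the synchronization step is governed by the goal (number of copies) of the file or directory, which the client can query and change; both commands reappear in the test section below:

[root@foundation1 mfs]# mfsgetgoal data2/          # show the requested number of copies
[root@foundation1 mfs]# mfssetgoal -r 2 data2/     # keep two copies, applied recursively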

The process of deleting files in MFS

  • When the client performs a delete operation, it first sends the delete request to the Master;
  • The Master locates the corresponding metadata, deletes it, and puts the deletion of the blocks on the chunk servers into a queue for asynchronous cleanup;
  • The Master then replies to the client that the deletion succeeded.

MFS process of modifying file content

  • When the client modifies the content of a file, it first sends the operation to the Master;
  • The Master allocates a new block for a .swp file;
  • When the client closes the file, it sends the close information to the Master;
  • The Master checks whether the content was updated. If it was, it allocates a new block to store the changed file and deletes the original block and the .swp file block;
  • If not, it deletes the .swp file block directly.

MFS renaming process

  • When the client renames a file, it sends the operation to the Master;
  • The Master modifies the file name directly in the metadata and returns the rename-complete message.

The process of MFS traversing files

  • Traversing (listing) files does not require access to the chunk servers. When a client issues a traversal request, the operation is sent to the Master;
  • The Master returns the corresponding metadata;
  • The client displays it after receiving the information.

2, Deployment of MFS

For deployment, please refer to the official website: https://moosefs.com/download/#current

Host preparation

host          ip              role
server1       172.25.1.1      master
server2       172.25.1.2      chunk server
server3       172.25.1.3      chunk server
foundation1   172.25.1.250    client

Host OS version: rhel7.6
SELinux and the firewall are disabled on all hosts; the commands are shown below.
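
If they are still enabled, they can be turned off with the usual RHEL 7 commands:

setenforce 0                                                      # switch SELinux to permissive immediately
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config      # keep it off after the next reboot
systemctl disable --now firewalld                                 # stop and disable the firewall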

yum source

The same yum source is used on all nodes.

You can get it by the following command:

curl "http://ppa.moosefs.com/MooseFS-3-el7.repo" > /etc/yum.repos.d/MooseFS.repo

For el6 or el8, simply replace el7 in the URL with the matching version.
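
For example, on an el8 host (assuming the repository keeps the same file-name pattern):

curl "http://ppa.moosefs.com/MooseFS-3-el8.repo" > /etc/yum.repos.d/MooseFS.repo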

Disable gpgcheck after downloading the repo file:

[root@server1 ~]# cd /etc/yum.repos.d/
[root@server1 yum.repos.d]# ls
MooseFS.repo  redhat.repo  rhel.repo
[root@server1 yum.repos.d]# vim MooseFS.repo
[root@server1 yum.repos.d]# cat MooseFS.repo
[MooseFS]
name=MooseFS $releasever - $basearch
baseurl=http://ppa.moosefs.com/moosefs-3/yum/el7
gpgcheck=0        # changed to 0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS
enabled=1

At the same time, add the hostname resolution for the master on all nodes:

# vim /etc/hosts
# cat /etc/hosts
172.25.1.1    server1 mfsmaster
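
You can verify the resolution on each node before continuing:

getent hosts mfsmaster    # should print 172.25.1.1  server1 mfsmaster
ping -c1 mfsmaster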

master configuration

Installation:

[root@server1 ~]# yum install moosefs-master moosefs-cgi moosefs-cgiserv moosefs-cli -y

Its main configuration file is /etc/mfs/mfsmaster.cfg.

Start the service and CGI server:

[root@server1 ~]# systemctl enable --now moosefs-master
[root@server1 ~]# systemctl enable --now moosefs-cgiserv.service

To view ports after startup:

[root@server1 ~]# netstat -antlp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9419            0.0.0.0:*               LISTEN      3821/mfsmaster
tcp        0      0 0.0.0.0:9420            0.0.0.0:*               LISTEN      3821/mfsmaster
tcp        0      0 0.0.0.0:9421            0.0.0.0:*               LISTEN      3821/mfsmaster
tcp        0      0 0.0.0.0:9425            0.0.0.0:*               LISTEN      3852/python2
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      3097/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      3300/master
tcp        0      0 172.25.1.1:22           172.25.1.250:53068      ESTABLISHED 3315/sshd: root@pts
tcp6       0      0 :::22                   :::*                    LISTEN      3097/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      3300/master

After the services are started, ports 9419, 9420, 9421 and 9425 are open: 9421 listens for client connections, 9419 listens for metalogger (cold standby), master and supervisor connections, 9420 listens for chunkserver connections, and 9425 is used by the CGI server. You can open 172.25.1.1:9425 in a browser:


You can see that there is only one master at present.

chunk server configuration

Take server2 as an example; the operation on server3 is similar. Install the package first:

[root@server2 ~]# yum install moosefs-chunkserver -y

On the chunk server, /etc/mfs/mfshdd.cfg is used to specify its storage path.

Create storage path:

[root@server2 mfs]# mkdir /mnt/chunk1
[root@server2 mfs]# chown mfs.mfs /mnt/chunk1/    # change the owner and group to mfs

Specify the storage path:

[root@server2 mfs]# vim mfshdd.cfg
[root@server2 mfs]# tail -1 mfshdd.cfg
/mnt/chunk1
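
In production it is common to give the chunk server a dedicated disk or partition rather than a directory on the root filesystem; a sketch, assuming a spare device /dev/vdb1 (hypothetical device name):

mkfs.xfs /dev/vdb1                                           # format the spare partition
mkdir /mnt/chunk1
echo '/dev/vdb1 /mnt/chunk1 xfs defaults 0 0' >> /etc/fstab
mount -a                                                     # mount it persistently
chown mfs.mfs /mnt/chunk1                                    # mfs must own the storage directory
echo '/mnt/chunk1' >> /etc/mfs/mfshdd.cfg                    # register it with the chunk server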

Start service after configuration:

[root@server2 mfs]# systemctl enable --now moosefs-chunkserver

View the open ports after service startup:

[root@server2 mfs]# netstat -antlp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9422            0.0.0.0:*               LISTEN      3874/mfschunkserver
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      3087/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      3309/master
tcp        0      0 172.25.1.2:45240        172.25.1.1:9420         ESTABLISHED 3874/mfschunkserver
tcp        0      0 172.25.1.2:22           172.25.1.250:49848      ESTABLISHED 3632/sshd: root@pts
tcp6       0      0 :::22                   :::*                    LISTEN      3087/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      3309/master

You can see that port 9422 is open, and a random local port (45240 here) is connected to port 9420 on the master. View the web page after startup:

You can see that server2 has joined the cluster.

On server3, change chunk1 to chunk2 and repeat the same steps. View the web page after startup:

You can see that two chunk servers have joined the cluster.
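
For reference, the steps on server3 mirror those on server2, with chunk2 as the storage directory:

[root@server3 ~]# yum install moosefs-chunkserver -y
[root@server3 ~]# mkdir /mnt/chunk2
[root@server3 ~]# chown mfs.mfs /mnt/chunk2/
[root@server3 ~]# vim /etc/mfs/mfshdd.cfg                 # add /mnt/chunk2 as the last line
[root@server3 ~]# systemctl enable --now moosefs-chunkserver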

Client configuration

Install client:

[root@foundation1 ~]# yum install moosefs-client -y
3, Preliminary use of mfs

Create a new mfs directory:

[root@foundation1 ~]# cd /mnt/
[root@foundation1 mnt]# mkdir mfs

Mount the directory:

[root@foundation1 mfs]# mfsmount /mnt/mfs/
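
With no extra options, mfsmount finds the master through the mfsmaster entry added to /etc/hosts earlier; the master can also be given explicitly, which is equivalent here:

[root@foundation1 mfs]# mfsmount -H mfsmaster /mnt/mfs/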

To view the mount status:

[root@foundation1 mfs]# mount
mfsmaster:9421 on /mnt/mfs type fuse.mfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)

You can see that the mount goes to port 9421 of mfsmaster (server1).
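
Optionally, the master host and a default mount point can be recorded in /etc/mfs/mfsmount.cfg so that a plain mfsmount needs no arguments; a minimal sketch following the commented examples shipped in that file (treat the exact option lines as an assumption):

# /etc/mfs/mfsmount.cfg
mfsmaster=mfsmaster
/mnt/mfs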

Create test file directory:

[root@foundation1 mfs]# mkdir data1
[root@foundation1 mfs]# mkdir data2
[root@foundation1 mfs]# mfsgetgoal data1/     # if an error is reported, leave this directory, enter it again and recreate the directory
data1/: 2
[root@foundation1 mfs]# mfsgetgoal data2/
data2/: 2

The output shows that both directories are set to keep two copies of their data (goal = 2).

You can use the following command to change the number of backups to one:

[root@foundation1 mfs]# mfssetgoal -r 1 data1/
data1/:
 inodes with goal changed:              1
 inodes with goal not changed:          0
 inodes with permission denied:         0
[root@foundation1 mfs]# mfsgetgoal data1/
data1/: 1
[root@foundation1 mfs]# mfsgetgoal data2/
data2/: 2

You can see that data1 is now set to keep one copy. Copy in some test files:

[root@foundation1 mfs]# cp /etc/passwd data1/
[root@foundation1 mfs]# cp /etc/fstab data2/

To view information about a copied file:

[root@foundation1 mfs]# mfsfileinfo data1/passwd
data1/passwd:
    chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
        copy 1: 172.25.1.3:9422 (status:VALID)
[root@foundation1 mfs]# mfsfileinfo data2/fstab
data2/fstab:
    chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
        copy 1: 172.25.1.2:9422 (status:VALID)
        copy 2: 172.25.1.3:9422 (status:VALID)

You can see that the file in data1 has only one copy, stored on server3, while the file in data2 has two copies, stored on server2 and server3.

Now shut down the chunk server service on server3:

[root@server3 ~]# systemctl stop moosefs-chunkserver

The file in data2 can still be read, because one valid copy remains on server2:

[root@foundation1 mfs]# cat data2/fstab
#
# /etc/fstab
# Created by anaconda on Thu May  2 18:01:16 2019
#
......

The file in data1, however, no longer has a usable copy:

[root@foundation1 mfs]# mfsfileinfo data1/passwd
data1/passwd:
    chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
        no valid copies !!!

You can see that no copy is available, and any access to the file will hang.
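
Besides the CGI page, the moosefs-cli package installed on the master can report chunk-server state from the command line; a sketch (the -SCS section flag is taken from mfscli's help and should be treated as an assumption):

[root@server1 ~]# mfscli -SCS    # list connected chunk servers; server3 should now show as disconnected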

Now restore server3:

[root@server3 ~]# systemctl start moosefs-chunkserver

After recovery, you can view the files in data1 normally:

[root@foundation1 mfs]# cat data1/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
......

Next, test chunk splitting. The default chunk size is 64MB; when a file is larger than 64MB, it is split into multiple chunks:

Create a new 100M file in data1:

[root@foundation1 data1]# dd if=/dev/zero of=bigfile bs=1M count=100

To view information about this file:

[root@foundation1 data1]# mfsfileinfo bigfile
bigfile:
    chunk 0: 0000000000000003_00000001 / (id:3 ver:1)
        copy 1: 172.25.1.3:9422 (status:VALID)
    chunk 1: 0000000000000004_00000001 / (id:4 ver:1)
        copy 1: 172.25.1.2:9422 (status:VALID)

You can see that the 100MB file is split into two chunks (64MB in chunk 0 and the remaining 36MB in chunk 1), stored on different chunk servers. Splitting and distributing the data like this is what gives a distributed file system its read and write speed, and is the point of using one.
