Chapter 1 Introduction to FastDFS
FastDFS overall architecture
Two roles: the client (usually our program) and the server side (tracker [scheduling] + storage nodes [file storage and management])
Chapter 2 Establishment of FastDFS environment
Start tracker, storage and upload files
fdfs_trackerd /etc/fdfs/tracker.conf
fdfs_storaged /etc/fdfs/storage.conf
ps -ef | grep fdfs
fdfs_test /etc/fdfs/client.conf upload aa.txt
Chapter 3 Http access of FastDFS
Super tedious to set up, but it only has to be configured once and is convenient afterwards
Execution process of extension module:
Chapter 4 development examples of FastDFS in Java projects
package fun.rtzhao.fastdfs;

import org.csource.common.MyException;
import org.csource.fastdfs.*;

import java.io.IOException;

/**
 * @Author The evil king's true eye is the strongest
 * @Date 2021/9/9 17:44
 * @Version 1.0
 */
public class FastDFSUtil {

    public static void main(String[] args) {
        delete();
    }

    /* Upload a file */
    public static void upload() {
        TrackerServer trackerServer = null;
        StorageServer storageServer = null;
        try {
            // Read the FastDFS configuration file and load the addresses of all trackers into memory
            ClientGlobal.init("fastdfs.conf");
            TrackerClient trackerClient = new TrackerClient();
            trackerServer = trackerClient.getConnection();
            storageServer = trackerClient.getStoreStorage(trackerServer);
            // The Storage client object: used to upload, download and delete concrete files
            StorageClient storageClient = new StorageClient(trackerServer, storageServer);
            // result[0] is the group name, result[1] is the remote filename
            String[] result = storageClient.upload_file("d:/test.jpg", "jpg", null);
            for (String str : result) {
                System.out.println(str);
            }
        } catch (IOException | MyException e) {
            e.printStackTrace();
        } finally {
            close(storageServer, trackerServer);
        }
    }

    /* Download a file */
    public static void download() {
        TrackerServer trackerServer = null;
        StorageServer storageServer = null;
        try {
            // Read the FastDFS configuration file and load the addresses of all trackers into memory
            ClientGlobal.init("fastdfs.conf");
            TrackerClient trackerClient = new TrackerClient();
            trackerServer = trackerClient.getConnection();
            storageServer = trackerClient.getStoreStorage(trackerServer);
            // The Storage client object: used to upload, download and delete concrete files
            StorageClient storageClient = new StorageClient(trackerServer, storageServer);
            int result = storageClient.download_file("group1",
                    "M00/00/00/wKg0gGE529iAb4XMAAe9bZMw0PM358.jpg", "d:/aa.jpg");
            System.out.println(result); // 0 means success
        } catch (IOException | MyException e) {
            e.printStackTrace();
        } finally {
            close(storageServer, trackerServer);
        }
    }

    /* Delete a file */
    public static void delete() {
        TrackerServer trackerServer = null;
        StorageServer storageServer = null;
        try {
            // Read the FastDFS configuration file and load the addresses of all trackers into memory
            ClientGlobal.init("fastdfs.conf");
            TrackerClient trackerClient = new TrackerClient();
            trackerServer = trackerClient.getConnection();
            storageServer = trackerClient.getStoreStorage(trackerServer);
            // The Storage client object: used to upload, download and delete concrete files
            StorageClient storageClient = new StorageClient(trackerServer, storageServer);
            int result = storageClient.delete_file("group1",
                    "M00/00/00/wKg0gGE529iAb4XMAAe9bZMw0PM358.jpg");
            System.out.println(result); // 0 means success
        } catch (IOException | MyException e) {
            e.printStackTrace();
        } finally {
            close(storageServer, trackerServer);
        }
    }

    /* Close the storage and tracker connections, swallowing close errors */
    private static void close(StorageServer storageServer, TrackerServer trackerServer) {
        if (storageServer != null) {
            try {
                storageServer.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        if (trackerServer != null) {
            try {
                trackerServer.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
Chapter 5 Application of FastDFS in Web projects
The file is uploaded from the user's machine to the web server (Tomcat), where the file stream is received; the web server then uploads the stream to FastDFS and stores the returned file path in the database
Error: when the Spring Boot application is first built, startup fails with: Failed to configure a DataSource: 'url' attribute is not specified and no embedded datasource could be configured
Solution: the properties configuration file is not being recognized correctly. Adjust the configuration keys to match the added dependency, as shown below:
spring.datasource.driverClassName=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://127.0.0.1:3306/fastdfs?useUnicode=true&characterEncoding=UTF-8&useSSL=false
spring.datasource.username=root
spring.datasource.password=333
Once IDEA recognizes all the fields, the application starts normally
Cause of the error: Spring Boot loads the org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration class by default, and that class uses the @Configuration annotation to register a dataSource bean with Spring. Because the project (the oss module) contains no dataSource-related configuration, Spring fails with the error above when it tries to create the dataSource bean. Alternative solution: add an exclude to the @SpringBootApplication annotation to skip the automatic loading of DataSourceAutoConfiguration
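A minimal configuration sketch of that exclusion (the application class name OssApplication is a hypothetical placeholder; use your module's actual main class):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;

// Exclude DataSourceAutoConfiguration so Spring Boot does not try to build a dataSource bean
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
public class OssApplication {
    public static void main(String[] args) {
        SpringApplication.run(OssApplication.class, args);
    }
}
```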
Chapter 6 FastDFS distributed file system cluster
Cluster example: the Sunway TaihuLight supercomputer is itself a large cluster
Load balancing operation
# store_lookup: 0 = round robin, 1 = specified group, 2 = load balance (pick the group with the most free space)
store_lookup=0
store_group=group2  # only takes effect when store_lookup=1
File access execution process:
Cluster configuration process
FastDFS distributed file system cluster environment setup - operation steps manual
Build a FastDFS distributed file system cluster, and it is recommended to deploy at least 6 server nodes;
Build FastDFS cluster
Step 1: install 6 minimal (mini) Linux systems. The minimal version has no graphical interface and uses little disk space and few resources; the Linux used in enterprises runs without a graphical interface;
Step 2: because Mini Linux lacks some common tool libraries, it is inconvenient to operate. It is recommended to install the following tool libraries:
1. Install lrzsz: yum install lrzsz -y
2. Install wget: yum install wget -y
3. Install vim: yum install vim -y
4. Install unzip: yum install unzip -y
5. Install net-tools (provides ifconfig): yum install net-tools -y
Or install them all in one command:
yum install lrzsz wget vim unzip net-tools -y
6. Install the library dependencies of nginx and fastdfs:
yum install gcc perl openssl openssl-devel pcre pcre-devel zlib zlib-devel libevent libevent-devel -y
Step 3: install FastDFS
1. Upload the installation packages of fastdfs and libfastcommon
2. Unzip libfastcommon and install libfastcommon
3. Unzip fastdfs and install fastdfs
4. Copy http.conf and mime.types from the fastdfs directory to the /etc/fdfs directory
Note: all 6 machines perform these operations
Step 4: deploy the two tracker servers. What needs to be done:
Modify the tracker.conf configuration file on both tracker servers; only one change is needed:
base_path=/opt/fastdfs/tracker  # data file and log directory of the tracker (create it in advance)
Start the tracker servers:
fdfs_trackerd /etc/fdfs/tracker.conf
Step 5: modify the storage.conf file of the four storage servers in the two groups
The first storage server of group1 (modify its storage.conf configuration file):
group_name=group1                        # group name; group1 or group2 depending on the server
base_path=/opt/fastdfs/storage           # log directory of storage (create it in advance)
store_path0=/opt/fastdfs/storage/files   # file storage path
tracker_server=192.168.171.135:22122     # IP address and port of the first tracker server
tracker_server=192.168.171.136:22122     # IP address and port of the second tracker server
The first storage server of group2 (modify its storage.conf configuration file):
group_name=group2                        # group name; group1 or group2 depending on the server
base_path=/opt/fastdfs/storage           # log directory of storage (create it in advance)
store_path0=/opt/fastdfs/storage/files   # file storage path
tracker_server=192.168.171.135:22122     # IP address and port of the first tracker server
tracker_server=192.168.171.136:22122     # IP address and port of the second tracker server
Start the storage servers, then use the Java client code from earlier to test whether the FastDFS cluster can upload files.
Note: FastDFS applies a load balancing policy by default. To change it, modify the tracker.conf file on both tracker machines:
store_lookup=1
# 0 = round robin, 1 = specified group, 2 = load balance (pick the group with the most free disk space; the default)
Restart the service after the modification:
fdfs_trackerd /etc/fdfs/tracker.conf restart
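The three store_lookup policies can be sketched in plain Java. This is a conceptual illustration only, not the FastDFS implementation; the group names and free-space numbers are made up:

```java
import java.util.List;

public class StoreLookupDemo {
    static int roundRobinIndex = 0;

    // store_lookup=0: hand out groups in round-robin order
    static String roundRobin(List<String> groups) {
        String g = groups.get(roundRobinIndex % groups.size());
        roundRobinIndex++;
        return g;
    }

    // store_lookup=1: always return the group named by store_group
    static String specifiedGroup(String storeGroup) {
        return storeGroup;
    }

    // store_lookup=2: return the group with the most free space
    static String maxFreeSpace(List<String> groups, long[] freeBytes) {
        int best = 0;
        for (int i = 1; i < freeBytes.length; i++) {
            if (freeBytes[i] > freeBytes[best]) best = i;
        }
        return groups.get(best);
    }

    public static void main(String[] args) {
        List<String> groups = List.of("group1", "group2");
        System.out.println(roundRobin(groups));                           // group1
        System.out.println(roundRobin(groups));                           // group2
        System.out.println(specifiedGroup("group2"));                     // group2
        System.out.println(maxFreeSpace(groups, new long[]{100L, 500L})); // group2
    }
}
```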
Load balancing using Nginx
Step 6 install nginx and use nginx to load balance fastdfs
Upload nginx-1.12.2.tar.gz and nginx's fastdfs extension installation package fastdfs-nginx-module-master.zip
Add installation dependencies for nginx
yum install gcc openssl openssl-devel pcre pcre-devel zlib zlib-devel -y
Unzip nginx
tar -zxvf nginx-1.12.2.tar.gz
Unzip the fastdfs extension module
unzip fastdfs-nginx-module-master.zip
Configure the installation information of nginx
Configuration information of two tracker servers (fastdfs module is not required)
./configure --prefix=/usr/local/nginx_fdfs
Configuration information of 4 storage servers (fastdfs module is required)
./configure --prefix=/usr/local/nginx_fdfs --add-module=/root/fastdfs-nginx-module-master/src
Compile and install nginx
make
make install
The 4 storage servers each need a copy of the mod_fastdfs.conf file: copy /root/fastdfs-nginx-module-master/src/mod_fastdfs.conf to the /etc/fdfs/ directory so that Nginx can start normally;
Step 7: configure nginx on the two tracker machines
Enter the installation directory
cd /usr/local/nginx_fdfs
Add a location to intercept requests: configure a regular rule to intercept fastdfs file paths and forward the requests to the other 4 storage servers (modify the nginx.conf file in the conf directory).

#nginx intercept request path:
location ~ /group[1-9]/M0[0-9] {
    proxy_pass http://fastdfs_group_server;
}

Add an upstream pointing to the addresses of the other 4 storage servers:

#Deploy and configure nginx load balancing:
upstream fastdfs_group_server {
    server 192.168.171.137:80;
    server 192.168.171.138:80;
    server 192.168.171.139:80;
    server 192.168.171.140:80;
}
Step 8: configure nginx on the other 4 storage servers to intercept http access request paths
Enter the installation directory
cd /usr/local/nginx_fdfs
Add a location to intercept the request, configure a regular rule to intercept the file path of fastdfs, and use the nginx module of fastdfs to forward the request (modify the nginx.conf file in the conf directory)
#nginx intercept request path:
location ~ /group[1-9]/M0[0-9] {
ngx_fastdfs_module;
}
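As a quick sanity check, the interception pattern /group[1-9]/M0[0-9] used above can be exercised with Java's regex engine. nginx's `location ~` performs an unanchored, case-sensitive regex match, which Matcher.find() approximates; the sample paths are illustrative:

```java
import java.util.regex.Pattern;

public class NginxLocationPatternCheck {
    // Same regex as the nginx location rule above
    static final Pattern FDFS_PATH = Pattern.compile("/group[1-9]/M0[0-9]");

    // true if the "location ~" rule would intercept this request path
    static boolean intercepted(String path) {
        return FDFS_PATH.matcher(path).find();
    }

    public static void main(String[] args) {
        System.out.println(intercepted("/group1/M00/00/00/wKg0gGE529iAb4XMAAe9bZMw0PM358.jpg")); // true
        System.out.println(intercepted("/static/css/style.css"));                                // false
    }
}
```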
Step 9: modify the mod_fastdfs.conf file on each of the four storage servers (/etc/fdfs/mod_fastdfs.conf)
#Modify the basic path and create the corresponding folder in the specified path
base_path=/opt/fastdfs/nginx_mod # save log directory
#Specify the ip and port of the two tracker servers
tracker_server=192.168.171.135:22122 # IP address and port of the tracker server
tracker_server=192.168.171.136:22122
#Specify the port number of the storage server
storage_server_port=23000 # generally, it does not need to be modified
#Specify the group name to which the current storage server belongs (current cases 03 and 04 are group1, 05 and 06 are group2)
group_name=group1 # the group name of the current server
#Specifies whether the url path contains the group name (the current case url contains the group name)
url_have_group_name=true # whether there is a group name in the file URL
store_path_count=1 # number of storage paths; must match the number of store_path entries (generally not changed)
store_path0=/opt/fastdfs/storage/files # storage path
#Specify the number of groups, which is determined according to the actual configuration. (the current case has two groups, group1 and group2)
group_count = 2 # sets the number of groups
Add the 2 groups at the end:

[group1]
group_name=group1
storage_server_port=23000
store_path_count=1
store_path0=/opt/fastdfs/storage/files

[group2]
group_name=group2
storage_server_port=23000
store_path_count=1
store_path0=/opt/fastdfs/storage/files

The second storage server of the first group follows the same steps; the two storage servers of the other group follow the same steps.

#Test whether the configuration file of nginx is correct (test all 6 servers)
/usr/local/nginx_fdfs/sbin/nginx -c /usr/local/nginx_fdfs/conf/nginx.conf -t
#Start the nginx server (all 6 servers)
/usr/local/nginx_fdfs/sbin/nginx -c /usr/local/nginx_fdfs/conf/nginx.conf
Test: use the browser to access fastdfs files in 6 servers respectively
Step 10: deploy the front-end entry server that users access, i.e. the Nginx on 192.168.230.128; this Nginx load balances to the two tracker servers at the back end;
Configure nginx.conf file
location ~ /group[1-9]/M0[0-9] {
proxy_pass http://fastdfs_group_server;
}
Add an upstream pointing to the addresses of the 2 tracker servers:

#Deploy and configure nginx load balancing:
upstream fastdfs_group_server {
    server 192.168.171.135:80;
    server 192.168.171.136:80;
}
Test: use the browser to access the fastdfs file in the 128 (nginx server with the only entry) server
Note: the entry nginx on 128 may already contain static-resource interception rules, which can cause file access to fail. In that case, comment out or delete those interception rules
Supplementary information
Finally, so that the services can connect to the tracker normally, turn off the firewall on all machines:
systemctl status firewalld    # view the firewall status
systemctl disable firewalld   # disable the firewall at boot
systemctl stop firewalld      # stop the firewall
systemctl restart network
systemctl start network
systemctl stop network
The installed (non-graphical) Linux may not have the network card service enabled. Modify the network card configuration file under /etc/sysconfig/network-scripts and set ONBOOT=yes, which means the network card starts on boot, then start the network service
Keepalived: software that automatically fails over to a standby nginx server when the primary nginx goes down; it is usually managed by operations staff