Task background
Although distributed GlusterFS storage is already in use, it can no longer cope with the explosive growth of data. Storage also needs to keep pace with maturing technologies such as big data and cloud computing, so this time we choose object storage.
Task requirements
1. Build a Ceph cluster
2. Put object storage into use
Task breakdown
1. Get to know Ceph
2. Build a Ceph cluster
3. Understand RADOS native data access
4. Implement Ceph file storage
5. Implement Ceph block storage
6. Implement Ceph object storage
Learning objectives
- Be able to deploy a Ceph cluster successfully
- Be able to use Ceph to provide file storage, block storage and object storage
- Be able to describe the characteristics of object storage
1. Meet Ceph
Ceph is a distributed storage system that provides file storage, block storage and object storage, built on top of an (in principle) infinitely scalable Ceph storage cluster.
2. Ceph architecture
Reference: the official documentation at https://docs.ceph.com/docs/master/
RADOS: this layer provides Ceph's high reliability, scalability, performance and self-management, and it is where user data is ultimately stored.
RADOS can be regarded as Ceph's underlying native data engine, but in practice it is rarely used directly; instead it is accessed in the following four ways:
- LIBRADOS is a library that lets applications interact with RADOS directly; it supports several programming languages such as Python, C and C++. In short, it is an interface for developers.
- CephFS provides a file system through the Linux kernel client and FUSE. (file storage)
- RBD provides a distributed block device through the Linux kernel client and the QEMU/KVM driver. (block storage)
- RADOSGW is a gateway based on the popular RESTful protocol, compatible with the S3 and Swift APIs. (object storage)
Extended terminology
RESTful: REST is an architectural style that provides a set of design principles and constraints; HTTP is a typical application of this style. The main features of REST are resources, a uniform interface, URIs and statelessness.
- Resources: a specific piece of information on the network; a file, a picture or a video is a resource.
- Uniform interface: the basic data operations, CRUD (create, read, update, delete), map directly to HTTP methods (see the curl sketch after this list):
- GET (SELECT): fetch resources (one or more items) from the server.
- POST (CREATE): CREATE a new resource on the server.
- PUT (UPDATE): UPDATE resources on the server (the client provides complete resource data).
- PATCH (UPDATE): UPDATE the resource on the server (the client provides the resource data to be modified).
- DELETE (DELETE): deletes resources from the server.
- URI (Uniform Resource Identifier): each URI identifies one specific resource; to get the resource you simply access its URI. The most common form of URI is the URL.
- Statelessness: each request stands on its own; locating a resource does not depend on other resources or on server-side session state.
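To make the verb-to-operation mapping concrete, here is a hedged illustration using curl against a purely hypothetical REST API (api.example.com and the "files" resource are invented for this example and do not exist):
curl -X GET    http://api.example.com/files                      # read: list all "files" resources
curl -X GET    http://api.example.com/files/1                    # read: fetch the resource with id 1
curl -X POST   -d 'name=a.txt' http://api.example.com/files      # create a new resource
curl -X PUT    -d 'name=b.txt' http://api.example.com/files/1    # update resource 1 (client sends complete data)
curl -X DELETE http://api.example.com/files/1                    # delete resource 1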
S3 (Simple Storage Service): S3 can be thought of as one enormous hard disk that stores data resources (files, pictures, videos and so on), collectively called objects. Objects are kept in storage segments, which S3 calls buckets.
Compared with a hard disk, a bucket is roughly a directory and an object is roughly a file.
The hard disk path is similar to / root/file1.txt
The URI of S3 is similar to s3://bucket_name/object_name
Swift: originally developed by Rackspace as a highly available distributed object storage service; it was contributed to the OpenStack open source community in 2010 as one of its initial core sub-projects.
3. Ceph cluster
Cluster components
A Ceph cluster includes Ceph OSD and Ceph Monitor daemons.
Ceph OSD (Object Storage Device): stores the data and handles data replication, recovery, backfilling and rebalancing; it also provides monitoring information to the Ceph Monitors by checking the heartbeats of other OSD daemons.
Ceph Monitor: monitors the state of the Ceph cluster and maintains the various maps and relationships within it.
A Ceph storage cluster requires at least one Ceph Monitor and at least two OSD daemons.
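Once the cluster built in the following sections is running, these daemons can be inspected with the standard status commands (shown here only as a preview; the output will depend on your cluster):
# ceph mon stat        # number of monitors and current quorum
# ceph osd stat        # how many OSDs exist and how many are up/in
# ceph osd tree        # OSD layout per host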
Cluster environment preparation
Preparation:
Prepare four servers that can reach the Internet and have static IP addresses. Every server except the client needs at least one additional, unpartitioned disk of at least 1 GB.
1. Configure host names and host name bindings (the bindings must be present on all nodes)
(Note: use short host names throughout for the later experiments. If you insist on a host name such as vm1.cluster.com, or add an alias, ceph will later truncate vm1.cluster.com to vm1, and the resulting inconsistency causes errors.)
# hostnamectl set-hostname --static node1
# vim /etc/hosts
10.1.1.11 node1
10.1.1.12 node2
10.1.1.13 node3
10.1.1.14 client
2. Close the firewall and SELinux (use iptables -F to clear the rules)
# systemctl stop firewalld
# systemctl disable firewalld
# iptables -F
# setenforce 0
3. Time synchronization (start ntpd service and confirm that all nodes have the same time)
# systemctl restart ntpd
# systemctl enable ntpd
4. Configure yum source (all nodes, including client)
There are two ways to provide a yum source for Ceph:
- Public Ceph source (the default CentOS 7 repos + EPEL + the Aliyun Ceph mirror)
# yum install epel-release -y
# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64/
enabled=1
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=0
priority=1
- Local Ceph source (the default CentOS 7 repos + a local Ceph repository)
- The public mirrors can be slow to download from and upstream updates can cause problems, so you can use pre-downloaded packages as a local Ceph source instead.
Copy the shared ceph_soft directory to all nodes (for example under /root/):
# vim /etc/yum.repos.d/ceph.repo
[local_ceph]
name=local_ceph
baseurl=file:///root/ceph_soft
gpgcheck=0
enabled=1
Cluster deployment process
Step 1: configure passwordless SSH
Use node1 as the deployment node and configure SSH key trust on it (node1 must be able to ssh to node1, node2, node3 and client without a password).
Note: this step is optional. Its purpose is:
- If you install the cluster with ceph-deploy, the keys make the installation easier
- Even without ceph-deploy, it makes later operations such as synchronizing configuration files more convenient
[root@node1 ~]# ssh-keygen
[root@node1 ~]# ssh-copy-id -i node1
[root@node1 ~]# ssh-copy-id -i node2
[root@node1 ~]# ssh-copy-id -i node3
[root@node1 ~]# ssh-copy-id -i client
Step 2: install the deployment tool on node1
(the other nodes do not need it)
[root@node1 ~]# yum install ceph-deploy -y
Step 3: create a cluster on node1
Create a cluster configuration directory
Note: most of the following operations will be in this directory
[root@node1 ~]# mkdir /etc/ceph
[root@node1 ~]# cd /etc/ceph
Create a ceph cluster
[root@node1 ceph]# ceph-deploy new node1
[root@node1 ceph]# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
Explanation:
ceph.conf              cluster configuration file
ceph-deploy-ceph.log   log of the ceph-deploy run
ceph.mon.keyring       mon authentication key file
Step 4: install Ceph on the cluster nodes
The yum source was prepared earlier. Install the following packages on all cluster nodes (excluding the client):
# yum install ceph ceph-radosgw -y
# ceph -v
ceph version 13.2.6 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)
Supplementary notes:
- If the public network is reachable and fast enough, you can install with ceph-deploy install node1 node2 node3, but on a slow connection this takes a long time
- So here we use the prepared local Ceph source and install with yum install ceph ceph-radosgw -y
Step 5: install ceph-common on the client
[root@client ~]# yum install ceph-common -y
Step 6: create mon (monitor)
Add the public (monitoring) network
Add the following line to the [global] section of the configuration (it can simply go at the end):
[root@node1 ceph]# vim /etc/ceph/ceph.conf
public network = 10.1.1.0/24                # monitoring network
Initialize the monitor node and synchronize the configuration to all cluster nodes (node1, node2, node3, not the client)
[root@node1 ceph]# ceph-deploy mon create-initial
[root@node1 ceph]# ceph health
HEALTH_OK                                   # health status OK
Synchronize the configuration to all nodes:
[root@node1 ceph]# ceph-deploy admin node1 node2 node3
[root@node1 ceph]# ceph -s
  cluster:
    id:     c05c1f28-ea78-41b7-b674-a069d90553ac
    health: HEALTH_OK                              # health status OK
  services:
    mon: 1 daemons, quorum node1                   # 1 monitor
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
To avoid a single point of failure for mon, you can add more mon nodes (an odd number is recommended because quorum voting is used).
Review: what is quorum? A quorum means more than half of the monitors must agree on the cluster state; with three mons the cluster keeps working when any single one fails.
[root@node1 ceph]# ceph-deploy mon add node2
[root@node1 ceph]# ceph-deploy mon add node3
[root@node1 ceph]# ceph -s
  cluster:
    id:     c05c1f28-ea78-41b7-b674-a069d90553ac
    health: HEALTH_OK                              # health status OK
  services:
    mon: 3 daemons, quorum node1,node2,node3       # 3 monitors
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
Fixing clock-skew warnings reported by the monitors
Ceph clusters are very sensitive to time synchronization; even with the ntpd service running you may still see "clock skew detected" warnings.
Please try the following:
1. On all cluster nodes (node1, node2, node3), stop using the ntpd service and synchronize directly from crontab
# systemctl stop ntpd
# systemctl disable ntpd
# crontab -e
*/10 * * * * ntpdate ntp1.aliyun.com        # sync against a public NTP server every 10 (or 5) minutes
2. Increase the time warning threshold
[root@node1 ceph]# vim ceph.conf
[global]                                    # add the following two lines to the [global] section
......
mon clock drift allowed = 2                 # allowed clock drift between monitors, in seconds (0.5 by default)
mon clock drift warn backoff = 30           # backoff for clock drift warnings (5 by default)
3. Synchronize to all nodes
[root@node1 ceph]# ceph-deploy --overwrite-conf admin node1 node2 node3
The first synchronization earlier did not need --overwrite-conf; now that ceph.conf has been modified, re-synchronizing requires the --overwrite-conf parameter so the existing copies are overwritten.
4. Restart the ceph-mon.target service on all cluster nodes
# systemctl restart ceph-mon.target
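After the restart you can check that the warning has cleared; ceph -s should show HEALTH_OK again, and on recent releases the monitors' own view of the skew can be queried directly (hedged, availability may vary by version):
[root@node1 ceph]# ceph -s | grep health
    health: HEALTH_OK
[root@node1 ceph]# ceph time-sync-status        # per-mon clock skew as seen by the monitors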
Step 7: create mgr (manager)
Ceph Luminous added a new component: the Ceph Manager Daemon, ceph-mgr for short.
Its main job is to take over and extend some of the monitor's functionality, reducing the monitor's load and making the Ceph storage system easier to manage.
Create an mgr
[root@node1 ceph]# ceph-deploy mgr create node1
[root@node1 ceph]# ceph -s
  cluster:
    id:     c05c1f28-ea78-41b7-b674-a069d90553ac
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1(active)                             # node1 is the mgr
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
Add more mgr daemons to achieve HA
[root@node1 ceph]# ceph-deploy mgr create node2
[root@node1 ceph]# ceph-deploy mgr create node3
[root@node1 ceph]# ceph -s
  cluster:
    id:     c05c1f28-ea78-41b7-b674-a069d90553ac
    health: HEALTH_OK                              # health status OK
  services:
    mon: 3 daemons, quorum node1,node2,node3       # 3 monitors
    mgr: node1(active), standbys: node2, node3     # node1 is active; node2 and node3 are standbys
    osd: 0 osds: 0 up, 0 in                        # still 0 disks
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
Step 8: create OSD (storage disk)
[root@node1 ceph]# ceph-deploy disk --help
[root@node1 ceph]# ceph-deploy osd --help
Listing the disks of each node shows sda and sdb; sdb is the disk we will add to the distributed storage.
List the disks on each node:
[root@node1 ceph]# ceph-deploy disk list node1
[root@node1 ceph]# ceph-deploy disk list node2
[root@node1 ceph]# ceph-deploy disk list node3
zap wipes the data on the disk (roughly equivalent to formatting it):
[root@node1 ceph]# ceph-deploy disk zap node1 /dev/sdb
[root@node1 ceph]# ceph-deploy disk zap node2 /dev/sdb
[root@node1 ceph]# ceph-deploy disk zap node3 /dev/sdb
Create each disk as an OSD:
[root@node1 ceph]# ceph-deploy osd create --data /dev/sdb node1
[root@node1 ceph]# ceph-deploy osd create --data /dev/sdb node2
[root@node1 ceph]# ceph-deploy osd create --data /dev/sdb node3
[root@node1 ceph]# ceph -s
  cluster:
    id:     c05c1f28-ea78-41b7-b674-a069d90553ac
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1(active), standbys: node2, node3
    osd: 3 osds: 3 up, 3 in                        # the three OSDs show up here
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   41 MiB used, 2.9 GiB / 3.0 GiB avail  # the capacity is the sum of the 3 disks
    pgs:
OSDs are created, so how to access data?
How to expand the cluster with a new node
Suppose you add a new cluster node node4
1. Configure the host name and host name bindings (on all nodes)
2. On node4, install the packages: yum install ceph ceph-radosgw -y
3. On the deployment node node1, synchronize the configuration file to node4: ceph-deploy admin node4
4. Add mon, mgr or osd daemons on node4 as required (see the command sketch below)
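A hedged command sketch for the node4 example, reusing the same commands used for node1-node3 (adjust the disk name to what node4 actually has):
# on node4: do the same host name, firewall, time and yum-source preparation as the other nodes, then
[root@node4 ~]# yum install ceph ceph-radosgw -y
# on the deployment node node1:
[root@node1 ceph]# ceph-deploy admin node4
[root@node1 ceph]# ceph-deploy disk zap node4 /dev/sdb           # only if node4 will carry an OSD
[root@node1 ceph]# ceph-deploy osd create --data /dev/sdb node4
[root@node1 ceph]# ceph-deploy mon add node4                     # optional: only if you also want a mon here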
4. RADOS native data access demo
As mentioned above, data can also be accessed directly through RADOS. We normally do not do this in practice, but starting with RADOS is a good way to understand how Ceph accesses data.
Access principle
To access data you first need to create a pool, and creating a pool requires allocating PGs (placement groups) to it.
If the client writes a file to a pool, how is the file distributed to the disks of multiple nodes?
The answer is through the CRUSH algorithm.
CRUSH algorithm
- CRUSH (Controlled Replication Under Scalable Hashing) is Ceph's algorithm for the controlled, scalable, decentralized placement of replicated data.
- The mapping from PG to OSD is computed by the CRUSH algorithm (with three replicas, an object must be stored on three OSDs).
- CRUSH is pseudo-random: it appears to choose an OSD set at random from all OSDs, but for a given PG the result never changes, i.e. the mapped OSD set is fixed.
Summary:
- The client operates directly on a pool (although for file, block and object storage we do not do this directly)
- PGs must be allocated within the pool
- A PG can hold many objects
- An object is the basic unit of data written by the client
- The CRUSH algorithm maps the data written by the client onto OSDs and finally onto physical disks (the exact process is abstract, and operations engineers rarely need to dig into it; to us, distributed storage is just one big disk). The mapping can be inspected with the command shown below.
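If you want to see the CRUSH result for a concrete object, Ceph can print the object-to-PG-to-OSD mapping. A hedged example using the test_pool and newfstab object created in the next subsection (the PG id and OSD list shown are illustrative and will differ on your cluster):
[root@node1 ceph]# ceph osd map test_pool newfstab
osdmap e37 pool 'test_pool' (1) object 'newfstab' -> pg 1.71c5a095 (1.15) -> up ([2,0,1], p2) acting ([2,0,1], p2)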
Create pool
Create test_pool with 128 PGs
[root@node1 ceph]# ceph osd pool create test_pool 128
pool 'test_pool' created
View the PG count (it can be adjusted later with a command such as ceph osd pool set test_pool pg_num 64):
[root@node1 ceph]# ceph osd pool get test_pool pg_num
pg_num: 128
Note: the PG count is related to the number of OSDs
- The PG count should be a power of 2; with fewer than 5 OSDs, 128 PGs or fewer is fine (if the count is too high Ceph reports an error, and it can be lowered accordingly). See the rule of thumb below.
- You can experiment with adjusting it using a command such as ceph osd pool set test_pool pg_num 64
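A commonly quoted rule of thumb from the Ceph documentation (a guideline, not a hard requirement):
# total PGs ≈ (number of OSDs × 100) / replica count, rounded up to the nearest power of 2
# with 3 OSDs and the default 3 replicas: 3 × 100 / 3 = 100  →  round up  →  128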
Storage test
1. Upload the local /etc/fstab file to test_pool and name it newfstab
[root@node1 ceph]# rados put newfstab /etc/fstab --pool=test_pool
2. View
[root@node1 ceph]# rados -p test_pool ls
newfstab
3. Delete
[root@node1 ceph]# rados rm newfstab --pool=test_pool
Delete pool
1. Add a parameter on the deployment node node1 to allow ceph to delete the pool
[root@node1 ceph]# vim /etc/ceph/ceph.conf
mon_allow_pool_delete = true
2. Synchronize the modified configuration to the other cluster nodes
[root@node1 ceph]# ceph-deploy --overwrite-conf admin node1 node2 node3
3. Restart the monitoring service
[root@node1 ceph]# systemctl restart ceph-mon.target
4. To delete the pool, give the pool name twice, followed by the --yes-i-really-really-mean-it parameter
[root@node1 ceph]# ceph osd pool delete test_pool test_pool --yes-i-really-really-mean-it
5. Create Ceph file storage
To run the Ceph file system, the Ceph storage cluster must contain at least one MDS
(MDS is not used by Ceph block devices or Ceph object storage).
Ceph MDS (Metadata Server): used by Ceph file storage to store and manage the file system metadata.
Create and use a file store
Step 1: on the deployment node node1, synchronize the configuration file and create the MDS services (several MDS daemons can be created for HA)
[root@node1 ceph]# ceph-deploy mds create node1 node2 node3        # create three MDS daemons here
Step 2: a Ceph file system requires at least two RADOS storage pools, one for data and one for metadata. So we create them.
[root@node1 ceph]# ceph osd pool create cephfs_pool 128
pool 'cephfs_pool' created
[root@node1 ceph]# ceph osd pool create cephfs_metadata 64
pool 'cephfs_metadata' created
[root@node1 ceph]# ceph osd pool ls |grep cephfs
cephfs_pool
cephfs_metadata
Step 3: create Ceph file system and confirm the node accessed by the client
[root@node1 ceph]# ceph fs new cephfs cephfs_metadata cephfs_pool
[root@node1 ceph]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_pool ]
[root@node1 ceph]# ceph mds stat
cephfs-1/1/1 up {0=ceph_node3=up:active}, 2 up:standby        # node3 is the active MDS here
Step 4: prepare the authentication key file on the client
- Note: Ceph enables cephx authentication by default, so the client must authenticate when mounting
View the key string on any cluster node (node1,node2,node3)
[root@node1 ~]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQDEKlJdiLlKAxAARx/PXR3glQqtvFFMhlhPmw==          # this is the string the client needs for authentication
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
Create a file on the client containing the key string
[root@client ~]# vim admin.key            # create a key file and paste in the string obtained above
AQDEKlJdiLlKAxAARx/PXR3glQqtvFFMhlhPmw==
Step 5: mount on the client (mount from one of the cluster's mon nodes; the mon service listens on port 6789)
[root@client ~]# mount -t ceph node1:6789:/ /mnt -o name=admin,secretfile=/root/admin.key
Step 6: verify
[root@client ~]# df -h |tail -1
node1:6789:/   3.8G     0  3.8G   0% /mnt      # don't worry about the exact size; PG and replica settings affect it
Verify reading and writing yourself if you wish.
Two clients can mount this file store at the same time and read and write simultaneously. A persistent mount via /etc/fstab is sketched below.
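If the mount should survive a reboot, an /etc/fstab entry along the following lines can be added on the client (a hedged sketch reusing the admin user and key file created above):
# /etc/fstab
node1:6789:/   /mnt   ceph   name=admin,secretfile=/root/admin.key,noatime,_netdev   0  0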
How to delete the file store
If you need to delete a file store, follow the procedure below
Step 1: delete data on the client and umount all mounts
[root@client ~]# rm /mnt/* -rf
[root@client ~]# umount /mnt/
Step 2: stop the MDS on all nodes (the file store can only be deleted while the MDS daemons are stopped)
[root@node1 ~]# systemctl stop ceph-mds.target
[root@node2 ~]# systemctl stop ceph-mds.target
[root@node3 ~]# systemctl stop ceph-mds.target
Step 3: go back to any cluster node (node1, node2 or node3) and delete the file system and its pools
(To run the deletion from the client instead, as shown below, first synchronize the configuration with ceph-deploy admin client on node1.)
[root@client ~]# ceph fs rm cephfs --yes-i-really-mean-it
[root@client ~]# ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
pool 'cephfs_metadata' removed
[root@client ~]# ceph osd pool delete cephfs_pool cephfs_pool --yes-i-really-really-mean-it
pool 'cephfs_pool' removed
Step 4: start the mds service again
[root@node1 ~]# systemctl start ceph-mds.target
[root@node2 ~]# systemctl start ceph-mds.target
[root@node3 ~]# systemctl start ceph-mds.target
6. Create Ceph block storage
Create and use block storage
Step 1: synchronize the configuration file to the client on node1
[root@node1 ceph]# ceph-deploy admin client
Step 2: create a storage pool and initialize it
Note: operate on the client side
[root@client ~]# ceph osd pool create rbd_pool 128
pool 'rbd_pool' created
[root@client ~]# rbd pool init rbd_pool
Step 3: create a storage volume (here the volume is named volume1 and is 5000 MB)
Note: the proper term for volume1 is an image; it is called a storage volume here for easier understanding
[root@client ~]# rbd create volume1 --pool rbd_pool --size 5000
[root@client ~]# rbd ls rbd_pool
volume1
[root@client ~]# rbd info volume1 -p rbd_pool
rbd image 'volume1':                                    # volume1 is an rbd image
        size 4.9 GiB in 1250 objects
        order 22 (4 MiB objects)
        id: 149256b8b4567
        block_name_prefix: rbd_data.149256b8b4567
        format: 2                                       # formats 1 and 2 exist; this image uses format 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten    # enabled features
        op_features:
        flags:
        create_timestamp: Sat Aug 17 19:47:51 2019
Step 4: map the created volume to a block device
- Because the OS kernel does not support some rbd image features, mapping reports an error
[root@client ~]# rbd map rbd_pool/volume1
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable rbd_pool/volume1 object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
- Solution: disable the related features
[root@client ~]# rbd feature disable rbd_pool/volume1 exclusive-lock object-map fast-diff deep-flatten
- Remap
[root@client ~]# rbd map rbd_pool/volume1
/dev/rbd0
Step 5: view the mapping (if you want to unmap, you can use rbd unmap /dev/rbd0)
[root@client ~]# rbd showmapped
id pool     image   snap device
0  rbd_pool volume1 -    /dev/rbd0
Step 6: format and mount
[root@client ~]# mkfs.xfs /dev/rbd0
[root@client ~]# mount /dev/rbd0 /mnt/
[root@client ~]# df -h |tail -1
/dev/rbd0      4.9G   33M  4.9G   1% /mnt
Verify reading and writing yourself.
Note: a block device cannot safely be read and written from two places at once; do not mount it on two clients and write to it simultaneously. (An optional persistent-mapping sketch follows.)
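If the mapping should come back automatically after a reboot, the rbdmap service shipped with ceph-common can be used; a hedged sketch (file path and unit name as commonly documented, verify on your version):
[root@client ~]# vim /etc/ceph/rbdmap
rbd_pool/volume1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
[root@client ~]# systemctl enable rbdmap.service
# after boot the image is mapped again (also visible as /dev/rbd/rbd_pool/volume1)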
Expansion and reduction of block storage
Online capacity expansion
Testing shows that once the device is partitioned, /dev/rbd0p1 cannot be expanded online; use /dev/rbd0 directly instead.
Expand to 8000 MB:
[root@client ~]# rbd resize --size 8000 rbd_pool/volume1
[root@client ~]# rbd info rbd_pool/volume1 |grep size
        size 7.8 GiB in 2000 objects
Check the size; the mounted filesystem has not changed yet:
[root@client ~]# df -h |tail -1
/dev/rbd0      4.9G   33M  4.9G   1% /mnt
[root@client ~]# xfs_growfs -d /mnt/
Check again; the online expansion has succeeded:
[root@client ~]# df -h |tail -1
/dev/rbd0      7.9G   33M  7.9G   1% /mnt
Block storage reduction
Shrinking cannot be done online; after shrinking, the device has to be reformatted and remounted, so back up the data first.
Shrink back to 5000 MB:
[root@client ~]# rbd resize --size 5000 rbd_pool/volume1 --allow-shrink
Reformat and remount:
[root@client ~]# umount /mnt/
[root@client ~]# mkfs.xfs -f /dev/rbd0
[root@client ~]# mount /dev/rbd0 /mnt/
Check again to confirm the shrink succeeded:
[root@client ~]# df -h |tail -1
/dev/rbd0      4.9G   33M  4.9G   1% /mnt
How to delete the block store
[root@client ~]# umount /mnt/
[root@client ~]# rbd unmap /dev/rbd0
[root@client ~]# ceph osd pool delete rbd_pool rbd_pool --yes-i-really-really-mean-it
pool 'rbd_pool' removed
7. Ceph object storage
Test connecting to the Ceph object gateway
Step 1: create rgw on node1
[root@node1 ceph]# ceph-deploy rgw create node1
[root@node1 ceph]# lsof -i:7480
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
radosgw 6748 ceph   40u  IPv4  49601      0t0  TCP *:7480 (LISTEN)
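As a quick sanity check (relying on RGW's default anonymous behaviour, not required by the lab), you can point curl at the gateway port; an unauthenticated request normally returns a small XML bucket listing for the anonymous user (output abbreviated and illustrative):
[root@client ~]# curl http://10.1.1.11:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult ...> ... </ListAllMyBucketsResult>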
Step 2: test connecting to the object gateway from the client
Create a test user. (To run radosgw-admin on the client, the configuration must first be synchronized to it with ceph-deploy admin client on the deployment node.)
[root@client ~]# radosgw-admin user create --uid="testuser" --display-name="First User"
{
    "user_id": "testuser",
    "display_name": "First User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "testuser",
            "access_key": "36ROCI84S5NSP4BPYL01",
            "secret_key": "jBOKH0v6J79bn8jaAF2oaWU7JvqTxqb4gjerWOFW"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
The important fields above are access_key and secret_key, which are used to connect to the object storage gateway. They can be retrieved again at any time, for example:
[root@client ~]# radosgw-admin user info --uid=testuser |grep -E 'access_key|secret_key'
            "access_key": "36ROCI84S5NSP4BPYL01",
            "secret_key": "jBOKH0v6J79bn8jaAF2oaWU7JvqTxqb4gjerWOFW"
Connecting to the Ceph object gateway with an S3 tool
Amazon S3 is an Internet-facing object storage service; we can use an S3 client tool to operate on Ceph's object storage.
Step 1: install the s3cmd tool on the client and write the Ceph connection configuration file
[root@client ~]# yum install s3cmd
Create the following file; the keys are those of the test user created earlier:
[root@client ~]# vim /root/.s3cfg
[default]
access_key = 36ROCI84S5NSP4BPYL01
secret_key = jBOKH0v6J79bn8jaAF2oaWU7JvqTxqb4gjerWOFW
host_base = 10.1.1.11:7480
host_bucket = 10.1.1.11:7480/%(bucket)
cloudfront_host = 10.1.1.11:7480
use_https = False
Step 2: test the commands
List the buckets (my-new-bucket here was created by an earlier test):
[root@client ~]# s3cmd ls
2019-01-05 23:01 s3://my-new-bucket
Create another bucket:
[root@client ~]# s3cmd mb s3://test_bucket
Upload a file to the bucket:
[root@client ~]# s3cmd put /etc/fstab s3://test_bucket
upload: '/etc/fstab' -> 's3://test_bucket/fstab' [1 of 1]
 501 of 501   100% in 1s 303.34 B/s done
Download it to the current directory:
[root@client ~]# s3cmd get s3://test_bucket/fstab
See the command help for more commands:
[root@client ~]# s3cmd --help
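s3cmd can also make an object publicly readable, which lets you fetch it straight over HTTP from the gateway. A hedged extra example (the -P/--acl-public flag is standard s3cmd; the URL follows the path-style addressing configured in .s3cfg above):
[root@client ~]# s3cmd put -P /etc/hosts s3://test_bucket/hosts         # upload with a public-read ACL
[root@client ~]# curl http://10.1.1.11:7480/test_bucket/hosts           # anonymous download over plain HTTP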
Ceph dashboard (extension)
The Ceph dashboard provides visual monitoring of the Ceph storage system.
Step 1: check the cluster status and confirm the active mgr node
[root@node1 ~]# ceph -s
  cluster:
    id:     6788206c-c4ea-4465-b5d7-ef7ca3f74552
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1(active), standbys: node3, node2     # the active mgr node is node1
    osd: 4 osds: 4 up, 4 in
    rgw: 1 daemon active
  data:
    pools:   6 pools, 48 pgs
    objects: 197 objects, 2.9 KiB
    usage:   596 MiB used, 3.4 GiB / 4.0 GiB avail
    pgs:     48 active+clean
Step 2: open the dashboard module
[root@node1 ~]# ceph mgr module enable dashboard
Step 3: create a self signed certificate
[root@node1 ~]# ceph dashboard create-self-signed-cert
Self-signed certificate created
Step 4: generate a key pair and configure it to ceph mgr
[root@node1 ~]# mkdir /etc/mgr-dashboard
[root@node1 ~]# cd /etc/mgr-dashboard/
[root@node1 mgr-dashboard]# openssl req -new -nodes -x509 -subj "/O=IT-ceph/CN=cn" -days 365 -keyout dashboard.key -out dashboard.crt -extensions v3_ca
Generating a 2048 bit RSA private key
.+++
.....+++
writing new private key to 'dashboard.key'
-----
[root@node1 mgr-dashboard]# ls
dashboard.crt  dashboard.key
Step 5: configure the dashboard service on the cluster's active mgr node (node1 here)
This mainly sets the IP address and port the dashboard listens on.
[root@node1 mgr-dashboard]# ceph config set mgr mgr/dashboard/server_addr 10.1.1.11
[root@node1 mgr-dashboard]# ceph config set mgr mgr/dashboard/server_port 8080
Step 6: restart the dashboard module and check the access address
[root@node1 mgr-dashboard]# ceph mgr module disable dashboard
[root@node1 mgr-dashboard]# ceph mgr module enable dashboard
[root@node1 mgr-dashboard]# ceph mgr services
{
    "dashboard": "https://10.1.1.11:8080/"
}
Step 7: set the user name and password for accessing the web page
[root@node1 mgr-dashboard]# ceph dashboard set-login-credentials daniel daniel123
Username and password updated
Step 8: access the dashboard from this machine or any other host by opening https://10.1.1.11:8080/ in a browser
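Before switching to a browser you can verify from the command line that the dashboard answers; a hedged check (the -k flag tells curl to accept the self-signed certificate):
[root@node1 mgr-dashboard]# curl -k -I https://10.1.1.11:8080/        # an HTTP response here means the dashboard web server is up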
Combining Ceph object storage with ownCloud to build a cloud disk (extension)
1. Prepare the bucket and the related connection keys on the Ceph client
[root@client ~]# s3cmd mb s3://owncloud
Bucket 's3://owncloud/' created
[root@client ~]# cat /root/.s3cfg
[default]
access_key = 36ROCI84S5NSP4BPYL01
secret_key = jBOKH0v6J79bn8jaAF2oaWU7JvqTxqb4gjerWOFW
host_base = 10.1.1.11:7480
host_bucket = 10.1.1.11:7480/%(bucket)
cloudfront_host = 10.1.1.11:7480
use_https = False
2. Install the web environment required by the ownCloud cloud disk on the client
ownCloud needs a web server and PHP. The latest ownCloud releases require PHP 7.x; to save time, we use the RPM packages here.
[root@client ~]# yum install httpd mod_ssl php-mysql php php-gd php-xml php-mbstring -y
[root@client ~]# systemctl restart httpd
3. Upload the ownCloud package and unpack it into the httpd document root
[root@client ~]# tar xf owncloud-9.0.1.tar.bz2 -C /var/www/html/
[root@client ~]# chown apache.apache -R /var/www/html/
The owner and group must be changed to the user the web server runs as; otherwise there will be permission problems when writing later.
4. Access http://10.1.1.14/owncloud in a browser and complete the configuration
5. File upload and download test
[root@client ~]# s3cmd put /etc/fstab s3://owncloud
upload: '/etc/fstab' -> 's3://owncloud/fstab' [1 of 1]
 501 of 501   100% in 0s 6.64 kB/s done
By default ownCloud limits uploads to 2 MB, so the limit needs to be raised.
[root@client ~]# vim /var/www/html/owncloud/.htaccess
<IfModule mod_php5.c>
  php_value upload_max_filesize 2000M        # raise this value
  php_value post_max_size 2000M              # raise this value
[root@client ~]# vim /etc/php.ini
post_max_size = 2000M                        # raise this value
upload_max_filesize = 2000M                  # raise this value
[root@client ~]# systemctl restart httpd