Ceph is one of the most popular distributed storage systems in use today. This article records the detailed steps for installing and configuring Ceph.
Preliminary configuration work
Start with the first cluster node, then gradually add the other nodes. In Ceph, the first node to join should be a Monitor, which we set to ...
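As a sketch of that first step, bootstrapping the initial Monitor with ceph-deploy (the hostname `node1` is a placeholder, not from the original text):

```shell
# Run from the deployment/admin working directory.
# Generate the initial cluster config with node1 (placeholder) as the first Monitor.
ceph-deploy new node1

# Install the Ceph packages on that node.
ceph-deploy install node1

# Create the initial Monitor and gather its keys.
ceph-deploy mon create-initial
```

Further nodes (more Monitors, then OSDs) are added against this config afterwards.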
Posted on Thu, 06 Feb 2020 03:42:24 -0500 by bluetonic
Step 1: Define a callback function
Step 2: An ordinary function calls the callback function
Implementation of the ordinary function
Step 3: Examine one of the group's sub-modules
Step 4: Reuse the id to control the development of team members
The first step in the learning of ...
Posted on Sun, 26 Jan 2020 21:00:25 -0500 by faza
Official website address: https://rook.io/
Project address: https://github.com/rook/rook
Prepare osd storage media
Hard disk device identifiers
>Before installation, use the commands lvm lvs, lvm vgs, and lvm pvs to check w ...
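A sketch of that pre-check, plus clearing leftover signatures from a candidate OSD disk before handing it to Rook (the device `/dev/sdb` is a placeholder):

```shell
# List existing physical volumes, volume groups, and logical volumes.
# A disk intended for a new OSD should not appear in any of them.
lvm pvs
lvm vgs
lvm lvs

# If /dev/sdb (placeholder) still carries old LVM or filesystem signatures,
# wipe them first. Destructive: double-check the device name!
wipefs --all /dev/sdb
sgdisk --zap-all /dev/sdb
```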
Posted on Wed, 08 Jan 2020 05:09:33 -0500 by ch3m1st
Ceph's MDS is the metadata service for the CephFS file storage service.
When a CephFS file system is created, ceph-mds services manage it. By default, Ceph assigns one MDS service to manage a CephFS file system, even if multiple MDS services have been created, as follows:
[root@ceph-admin my-cluster]# ceph-deploy mds create ceph-node01 ceph-node02
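After creating the two MDS daemons above, one way to confirm that only one is active while the other stands by (a sketch; the exact output shape varies by Ceph release):

```shell
# Show which MDS is active for the file system and which are standbys.
ceph mds stat

# More detailed view: ranks, daemon states, and standby daemons.
ceph fs status
```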
Posted on Mon, 06 Jan 2020 03:39:07 -0500 by acheoacheo
Attempt 1: Directly reactivate all OSDs
1. View osd tree
[root@ceph01 ~]# ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.29279 root default
-2 0.14639     host ceph01
 0 0.14639         osd.0        up  1.00000          1.00000 ...
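A hedged sketch of directly reactivating an OSD via its systemd unit (the OSD id 0 matches the tree above; repeat per OSD on each host):

```shell
# On the OSD host, restart the daemon for the given OSD id.
systemctl restart ceph-osd@0

# If the OSD was marked "out" during the outage, mark it back "in".
ceph osd in 0

# Watch the cluster converge back towards HEALTH_OK.
ceph -s
```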
Posted on Wed, 01 Jan 2020 07:58:52 -0500 by bsbotto
1 Introduction to Core Components
The lowest-level storage unit in Ceph is the Object; each object contains metadata and raw data.
OSD is short for Object Storage Device: the process responsible for returning specific data in response to client requests. A Ceph cluster generally has many OSDs.
The full name of PG, Placement ...
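The Object → PG → OSD chain described above can be inspected directly on a live cluster; a sketch (the pool name `rbd` and object name `myobject` are placeholders):

```shell
# Show which PG the object hashes to and which OSDs host that PG.
# The output lists the pg id and the up/acting OSD set.
ceph osd map rbd myobject
```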
Posted on Tue, 26 Nov 2019 15:47:23 -0500 by Cynthia Blue
Execute on ceph monitor
CINDER_PASSWD='cinder1234!' controllerHost='controller' RABBIT_PASSWD='0penstackRMQ'
1. Create a pool
Create a pool for the cinder volume service (since I have only one OSD node, I set the number of replicas to 1):
ceph osd pool create cinder-volumes 32
ceph osd pool set cinder-volumes size 1
ceph osd pool applicat ...
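A sketch of the same sequence written out fully, assuming the truncated last command is the usual pool-application step (that assumption is mine, not stated in the text):

```shell
# Create the pool with 32 placement groups.
ceph osd pool create cinder-volumes 32

# Single OSD node here, so one replica (do not do this in production).
ceph osd pool set cinder-volumes size 1

# Tag the pool for RBD use (required since Luminous); assumed to be
# the command truncated above.
ceph osd pool application enable cinder-volumes rbd
```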
Posted on Sun, 20 Oct 2019 16:33:32 -0400 by BrianPeiris
StorageClass is a Kubernetes resource type, created on demand by administrators to make managing PVs easier.
The advantage of a StorageClass is that it supports dynamic creation of PVs: creating PVs that match a PVC's requirements on demand brings great flexibility to storage management.
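As an illustration of dynamic provisioning, a minimal StorageClass for Ceph RBD via the Rook CSI driver (the class name, cluster id, and pool name are assumptions, not from the text):

```shell
# Apply a minimal StorageClass; PVCs referencing it get PVs created on demand.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block               # assumed name
provisioner: rook-ceph.rbd.csi.ceph.com   # Rook CSI RBD provisioner
parameters:
  clusterID: rook-ceph                # assumed Rook namespace/cluster id
  pool: replicapool                   # assumed RBD pool
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF
```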
Posted on Tue, 08 Oct 2019 09:46:02 -0400 by bobcooper
(1) Checking the cluster status: two OSDs are down
[root@node140 /]# ceph -s
noout flag(s) set
2 osds down
1 scrub errors
Possible data damage: 1 pg inconsistent
Degraded data redundancy: 1633/ ...
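For the "pg inconsistent" error above, a common triage sequence looks like this (a sketch; the PG id `1.2f` is a placeholder):

```shell
# Identify which PG the scrub flagged as inconsistent.
ceph health detail

# List the objects the scrub found inconsistent in that PG (placeholder id).
rados list-inconsistent-obj 1.2f --format=json-pretty

# Ask the primary OSD to repair the PG.
ceph pg repair 1.2f
```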
Posted on Sat, 05 Oct 2019 04:43:35 -0400 by villager203
Steps to replace an OSD
1. Locate the faulty disk
2. Remove the faulty disk
3. Rebuild the RAID 0
4. Rebuild the OSD
Controlling data recovery and backfill speed
First of all, it should be noted that for Ceph OSDs it is not recommended ...
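The recovery and backfill speed mentioned above can be throttled at runtime so rebuilding the OSD does not starve client I/O; a sketch (the values are illustrative):

```shell
# Lower backfill/recovery concurrency on all OSDs to reduce client impact.
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

# Optionally also drop the priority of recovery ops (illustrative value).
ceph tell osd.* injectargs '--osd-recovery-op-priority 1'
```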
Posted on Mon, 24 Jun 2019 13:00:54 -0400 by chrishawkins