Step-by-step installation and configuration of a Ceph distributed storage cluster

Ceph is one of the most popular distributed storage systems today. This article records the detailed steps for installing and configuring Ceph. After the preparatory configuration work, start with the first cluster node and then gradually add the other nodes. For Ceph, the first node we add should be a Monitor, which we set to ...
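As a hedged sketch of what those first steps usually look like with ceph-deploy (the node name ceph-node01 is a placeholder, not taken from the article):

```
ceph-deploy new ceph-node01          # generate the initial ceph.conf and monitor keyring
ceph-deploy install ceph-node01      # install the Ceph packages on the node
ceph-deploy mon create-initial       # bring up the first monitor and gather the keys
ceph-deploy admin ceph-node01        # push the admin keyring so "ceph -s" works on the node
```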

Posted on Thu, 06 Feb 2020 03:42:24 -0500 by bluetonic

Function encapsulation technique: function pointer + void* variable

Article directory: Preface; Step 1: define a callback function; Step 2: common functions call callback functions (common function, function implementation); Step 3: examine one of the group sub-modules; Step 4: reuse an id to control the development of team members. Preface: The first step in the learning of ...

Posted on Sun, 26 Jan 2020 21:00:25 -0500 by faza

Rook quick start: Ceph integrated storage

Quick start. Official website: https://rook.io/ Project address: https://github.com/rook/rook
Install the cluster. Prepare the OSD storage media:

Disk    Size    Purpose
sdb     50GB    OSD data
sdc     50GB    OSD data
sdd     50GB    OSD data
sde     50GB    OSD metadata

Before installation, use the commands lvm lvs, lvm vgs and lvm pvs to check w ...
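A minimal sketch of that pre-install check, run on the node that holds the disks (lsblk is an extra sanity check, not mentioned in the excerpt):

```
lvm pvs     # list physical volumes; the intended OSD disks should not appear here
lvm vgs     # list volume groups
lvm lvs     # list logical volumes
lsblk       # confirm sdb, sdc, sdd and sde are present and carry no partitions
```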

Posted on Wed, 08 Jan 2020 05:09:33 -0500 by ch3m1st

High-speed hot standby state of Ceph MDS

The Ceph MDS is the metadata service for the CephFS file storage service. When a CephFS filesystem is created, Ceph MDS daemons exist to manage it. By default, Ceph assigns a single MDS daemon to manage CephFS, even if multiple MDS daemons have been created, as follows:
[root@ceph-admin my-cluster]# ceph-deploy mds create ceph-node01 ceph-node02 . ...
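A hedged sketch of how such a hot standby is typically enabled, assuming a filesystem named cephfs and a Nautilus-or-later release (both assumptions, not stated in the excerpt):

```
ceph-deploy mds create ceph-node01 ceph-node02   # run an MDS daemon on two nodes
ceph fs set cephfs max_mds 1                     # keep a single active MDS for the filesystem
ceph fs set cephfs allow_standby_replay true     # hold the spare MDS in standby-replay ("hot standby")
ceph fs status cephfs                            # verify one active and one standby-replay daemon
```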

Posted on Mon, 06 Jan 2020 03:39:07 -0500 by acheoacheo

Fixing a Ceph OSD that is marked down

Try 1. Directly reactivate all OSDs
1. View the osd tree
[root@ceph01 ~]# ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.29279 root default
-2 0.14639     host ceph01
 0 0.14639         osd.0   up      1.00000  1.00000
...
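A hedged follow-up for bringing such an OSD back; <id> is a placeholder, since the excerpt's tree is truncated before the down entries:

```
systemctl restart ceph-osd@<id>   # on the OSD's host, restart the daemon
ceph osd in osd.<id>              # mark it "in" again if it was also marked out
ceph osd tree                     # confirm it now reports "up"
ceph -s                           # watch recovery until the cluster is healthy
```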

Posted on Wed, 01 Jan 2020 07:58:52 -0500 by bsbotto

[mimic] Setting up a Ceph cluster manually

1 Introduction to core components. Object: the lowest storage unit in Ceph is the Object, each containing metadata and raw data. OSD: OSD stands for Object Storage Device, the process responsible for returning specific data in response to client requests; a Ceph cluster generally has many OSDs. PG: the full name of PG is Placement ...
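To see those concepts tied together on a live cluster, one hedged illustration (the pool name rbd and object name obj1 are placeholders):

```
ceph osd map rbd obj1    # prints the PG the object hashes to and the set of OSDs that hold it
```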

Posted on Tue, 26 Nov 2019 15:47:23 -0500 by Cynthia Blue

Configuring the cinder volume service to use ceph as back-end storage

Execute on the ceph monitor:
CINDER_PASSWD='cinder1234!'
controllerHost='controller'
RABBIT_PASSWD='0penstackRMQ'
1. Create the pool
Create a pool for the cinder volume service (since I have only one OSD node, set the number of replicas to 1):
ceph osd pool create cinder-volumes 32
ceph osd pool set cinder-volumes size 1
ceph osd pool applicat ...
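A hedged sketch of the step that usually follows pool creation: creating the Ceph credentials the cinder-volume service will use (the capability string is the commonly documented form, not taken from the excerpt):

```
ceph auth get-or-create client.cinder \
    mon 'allow r' \
    osd 'allow rwx pool=cinder-volumes'   # keyring for the cinder-volume service
ceph auth get client.cinder               # print the key to copy to the controller node
```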

Posted on Sun, 20 Oct 2019 16:33:32 -0400 by BrianPeiris

SUSE CaaS Platform 4 - Use Ceph RBD as persistent storage (dynamic)

A StorageClass is a Kubernetes resource type created on demand by administrators to make managing PVs more convenient. The advantage of a storage class is its support for dynamic creation of PVs: creating a PV adapted to the requirements of a PVC on demand brings great flexibility to storage management. Th ...
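A hedged sketch of such a StorageClass using the in-tree kubernetes.io/rbd provisioner; the monitor address, pool and secret names are placeholders, and SUSE CaaS Platform 4 may instead ship a CSI driver:

```
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.1.10:6789          # placeholder Ceph monitor address
  pool: rbd                            # placeholder RBD pool
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-user-secret
EOF
```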

Posted on Tue, 08 Oct 2019 09:46:02 -0400 by bobcooper

Troubleshooting OSDs marked down in a Ceph cluster

(1) Looking at the cluster status, two osd states are down
[root@node140 /]# ceph -s
  cluster:
    id:     58a12719-a5ed-4f95-b312-6efd6e34e558
    health: HEALTH_ERR
            noout flag(s) set
            2 osds down
            1 scrub errors
            Possible data damage: 1 pg inconsistent
            Degraded data redundancy: 1633/ ...
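A hedged outline of the usual next steps for that state; <id> and <pgid> are placeholders for the values the cluster actually reports:

```
ceph osd tree | grep down           # find which OSDs are down
systemctl restart ceph-osd@<id>     # on the owning host, restart each down daemon
ceph osd unset noout                # clear the noout flag once the OSDs are back up
ceph pg repair <pgid>               # repair the inconsistent PG behind "1 scrub errors"
```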

Posted on Sat, 05 Oct 2019 04:43:35 -0400 by villager203

Replacing an OSD disk in Ceph

Contents: brief introduction; OSD replacement steps (1. locate the faulty disk, 2. remove the faulty disk, 3. rebuild RAID 0, 4. rebuild the OSD); controlling data recovery and backfill speed. Brief introduction: first of all, it should be noted that the OSD of Ceph is not recommended ...
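A hedged sketch of the removal sequence and of the recovery/backfill throttling the article refers to; <id> is a placeholder for the failed OSD's id and the values are examples only:

```
ceph osd out osd.<id>                # stop new data from being mapped to the failed OSD
systemctl stop ceph-osd@<id>         # on its host, stop the daemon
ceph osd crush remove osd.<id>       # remove it from the CRUSH map
ceph auth del osd.<id>               # delete its authentication key
ceph osd rm osd.<id>                 # remove the OSD entry itself
# slow down recovery/backfill so client I/O is not starved
ceph tell 'osd.*' injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
```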

Posted on Mon, 24 Jun 2019 13:00:54 -0400 by chrishawkins