Ceph distributed storage dual active service configuration

Ceph natively supports the "two sites, three data centers" model. "Dual active" here means an RGW multi-site deployment with two zones. The two data centers can belong to a single Ceph cluster or to two separate clusters. The architecture diagram (borrowed from another source) is as follows:

1. Environment information

2. Create the master zone

In a multi-site configuration, every ceph-radosgw instance pulls its configuration from the master zone in the master zone group. Therefore a master zone group and a master zone must be configured first.

2.1 Create a realm

A realm contains the zone groups and zones of the multi-site configuration and provides a globally unique namespace for them. Run the following command on any node of the primary cluster to create it:

[root@ceph01 ~]# radosgw-admin --cluster ceph1 realm create --rgw-realm=xzxj --default
{
    "id": "0f13bb55-68f6-4489-99fb-d79ba8ca959a",
    "name": "xzxj",
    "current_period": "02a14536-a455-4063-a990-24acaf504099",
    "epoch": 1
}

If this is the only realm the cluster will use, add the --default option so that radosgw-admin uses this realm by default.

2.2 Create the master zonegroup

A realm must have at least one master zone group.

[root@ceph01 ~]# radosgw-admin --cluster ceph1 zonegroup create --rgw-zonegroup=all --endpoints=http://192.168.120.53:8080,http://192.168.120.54:8080,http://192.168.120.55:8080,http://192.168.120.56:8080 --rgw-realm=xzxj --master --default

When the realm has only one zone group, specify the --default option so that new zones are added to this zone group by default.

2.3 Create the master zone

Add a new master zone named z1 to the multi-site configuration:

[root@ceph01 ~]# radosgw-admin --cluster ceph1 zone create --rgw-zonegroup=all --rgw-zone=z1 --endpoints=http://192.168.120.53:8080,http://192.168.120.54:8080,http://192.168.120.55:8080,http://192.168.120.56:8080 --default

The --access-key and --secret options are not specified here; they will be added to the zone automatically in a later step, after the system user is created.

2.4 Create a system account

The ceph-radosgw daemons must authenticate themselves before they can pull realm and period information. Create a system user in the master zone that the daemons can use to authenticate with each other:

[root@ceph01 ~]# radosgw-admin --cluster ceph1 user create --uid="sync-user" --display-name="sync user" --system { "user_id": "sync-user", "display_name": "sync user", "email": "", "suspended": 0, "max_buckets": 1000, "subusers": [], "keys": [ { "user": "sync-user", "access_key": "ZA4TXA65C5TGCPX4B8V6", "secret_key": "BEYnz6QdAvTbt36L7FhwGF2F5rHWeH66cb0eSO24" } ], "swift_keys": [], "caps": [], "op_mask": "read, write, delete", "system": "true", "default_placement": "", "default_storage_class": "", "placement_tags": [], "bucket_quota": { "enabled": false, "check_on_raw": false, "max_size": -1, "max_size_kb": 0, "max_objects": -1 }, "user_quota": { "enabled": false, "check_on_raw": false, "max_size": -1, "max_size_kb": 0, "max_objects": -1 }, "temp_url_keys": [], "type": "rgw", "mfa_ids": [] }

The secondary zones need the system account's access_key and secret_key to authenticate against the master zone. Finally, add the system user's keys to the master zone and update the period:

[root@ceph01 ~]# radosgw-admin --cluster ceph1 zone modify --rgw-zone=z1 --access-key=ZA4TXA65C5TGCPX4B8V6 --secret=BEYnz6QdAvTbt36L7FhwGF2F5rHWeH66cb0eSO24
[root@ceph01 ~]# radosgw-admin --cluster ceph1 period update --commit

2.5 Update the Ceph configuration file

Edit the primary cluster's configuration file (here /etc/ceph/ceph1.conf) and add the rgw_zone option, whose value is the name of the master zone (rgw_zone=z1), to the section of every RGW instance:

[root@ceph01 ~]# vi /etc/ceph/ceph1.conf
[client.rgw.ceph01.rgw0]
host = ceph01
keyring = /var/lib/ceph/radosgw/ceph1-rgw.ceph01.rgw0/keyring
log file = /var/log/ceph/ceph1-rgw-ceph01.rgw0.log
rgw frontends = beast endpoint=192.168.120.53:8080
rgw thread pool size = 512
rgw_zone=z1

[client.rgw.ceph02.rgw0]
host = ceph02
keyring = /var/lib/ceph/radosgw/ceph1-rgw.ceph02.rgw0/keyring
log file = /var/log/ceph/ceph1-rgw-ceph02.rgw0.log
rgw frontends = beast endpoint=192.168.120.54:8080
rgw thread pool size = 512
rgw_zone=z1

[client.rgw.ceph03.rgw0]
host = ceph03
keyring = /var/lib/ceph/radosgw/ceph1-rgw.ceph03.rgw0/keyring
log file = /var/log/ceph/ceph1-rgw-ceph03.rgw0.log
rgw frontends = beast endpoint=192.168.120.55:8080
rgw thread pool size = 512
rgw_zone=z1

[client.rgw.ceph04.rgw0]
host = ceph04
keyring = /var/lib/ceph/radosgw/ceph1-rgw.ceph04.rgw0/keyring
log file = /var/log/ceph/ceph1-rgw-ceph04.rgw0.log
rgw frontends = beast endpoint=192.168.120.56:8080
rgw thread pool size = 512
rgw_zone=z1

After editing, synchronize the configuration file to the other nodes of the cluster and restart the RGW service on every RGW node:

[root@ceph01 ~]# systemctl restart ceph-radosgw@rgw.`hostname -s`.rgw0

3. Create the secondary zone

3.1 Pull the realm from the master zone

Pull the realm to a host of the secondary cluster, using the URL, access key, and secret key of the master zone in the master zone group. To pull a non-default realm, add the --rgw-realm or --realm-id option:

[root@ceph05 ~]# radosgw-admin --cluster ceph2 realm pull --url=http://192.168.120.53:8080 --access-key=ZA4TXA65C5TGCPX4B8V6 --secret=BEYnz6QdAvTbt36L7FhwGF2F5rHWeH66cb0eSO24
{
    "id": "0f13bb55-68f6-4489-99fb-d79ba8ca959a",
    "name": "xzxj",
    "current_period": "913e666c-57fb-4992-8839-53fe447d8427",
    "epoch": 2
}

Note: the access key and secret here are the access key and secret of the system account on the master zone.

3.2 Pull the period from the master zone

Pull the period to a host of the secondary cluster, again using the URL, access key, and secret key of the master zone in the master zone group. To pull the period from a non-default realm, add the --rgw-realm or --realm-id option:

[root@ceph05 ~]# radosgw-admin --cluster ceph2 period pull --url=http://192.168.120.53:8080 --access-key=ZA4TXA65C5TGCPX4B8V6 --secret=BEYnz6QdAvTbt36L7FhwGF2F5rHWeH66cb0eSO24

3.3 Create the secondary zone

By default, all zones run in an active-active configuration: an RGW client can write to any zone, and that zone replicates the data to the other zones in the same zone group. If the secondary zone should not accept writes, create it with the --read-only option to get an active-passive configuration. In either case, supply the access key and secret key of the system account from the master zone:

[root@ceph05 ~]# radosgw-admin --cluster ceph2 zone create --rgw-zonegroup=all --rgw-zone=z2 --endpoints=http://192.168.120.57:8080,http://192.168.120.58:8080,http://192.168.120.59:8080,http://192.168.120.60:8080 --access-key=ZA4TXA65C5TGCPX4B8V6 --secret=BEYnz6QdAvTbt36L7FhwGF2F5rHWeH66cb0eSO24

3.4 Update the period

[root@ceph05 ~]# radosgw-admin --cluster ceph2 period update --commit
{
    "id": "913e666c-57fb-4992-8839-53fe447d8427",
    "epoch": 4,
    "predecessor_uuid": "02a14536-a455-4063-a990-24acaf504099",
    "sync_status": [],
    "period_map": {
        "id": "913e666c-57fb-4992-8839-53fe447d8427",
        "zonegroups": [
            {
                "id": "8259119d-4ed7-4cfc-af28-9a8e6678c5f7",
                "name": "all",
                "api_name": "all",
                "is_master": "true",
                "endpoints": [
                    "http://192.168.120.53:8080",
                    "http://192.168.120.54:8080",
                    "http://192.168.120.55:8080",
                    "http://192.168.120.56:8080"
                ],
                "hostnames": [],
                "hostnames_s3website": [],
                "master_zone": "91d15c30-f785-4bd1-8e80-d63ab939b259",
                "zones": [
                    {
                        "id": "04231ccf-bb2b-4eff-aba7-a7cb9a3505cf",
                        "name": "z2",
                        "endpoints": [
                            "http://192.168.120.57:8080",
                            "http://192.168.120.58:8080",
                            "http://192.168.120.59:8080",
                            "http://192.168.120.60:8080"
                        ],
                        "log_meta": "false",
                        "log_data": "true",
                        "bucket_index_max_shards": 0,
                        "read_only": "false",
                        "tier_type": "",
                        "sync_from_all": "true",
                        "sync_from": [],
                        "redirect_zone": ""
                    },
                    {
                        "id": "91d15c30-f785-4bd1-8e80-d63ab939b259",
                        "name": "z1",
                        "endpoints": [
                            "http://192.168.120.53:8080",
                            "http://192.168.120.54:8080",
                            "http://192.168.120.55:8080",
                            "http://192.168.120.56:8080"
                        ],
                        "log_meta": "false",
                        "log_data": "true",
                        "bucket_index_max_shards": 0,
                        "read_only": "false",
                        "tier_type": "",
                        "sync_from_all": "true",
                        "sync_from": [],
                        "redirect_zone": ""
                    }
                ],
                "placement_targets": [
                    {
                        "name": "default-placement",
                        "tags": [],
                        "storage_classes": [
                            "STANDARD"
                        ]
                    }
                ],
                "default_placement": "default-placement",
                "realm_id": "0f13bb55-68f6-4489-99fb-d79ba8ca959a"
            }
        ],
        "short_zone_ids": [
            {
                "key": "04231ccf-bb2b-4eff-aba7-a7cb9a3505cf",
                "val": 1058646688
            },
            {
                "key": "91d15c30-f785-4bd1-8e80-d63ab939b259",
                "val": 895340584
            }
        ]
    },
    "master_zonegroup": "8259119d-4ed7-4cfc-af28-9a8e6678c5f7",
    "master_zone": "91d15c30-f785-4bd1-8e80-d63ab939b259",
    "period_config": {
        "bucket_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        }
    },
    "realm_id": "0f13bb55-68f6-4489-99fb-d79ba8ca959a",
    "realm_name": "xzxj",
    "realm_epoch": 2
}

3.5 Update the Ceph configuration file and restart the RGW service

Edit the secondary cluster's configuration file (here /etc/ceph/ceph2.conf) and add rgw_zone=z2 to the section of every RGW instance:

[root@ceph05 ~]# vi /etc/ceph/ceph2.conf
[client.rgw.ceph05.rgw0]
host = ceph05
keyring = /var/lib/ceph/radosgw/ceph2-rgw.ceph05.rgw0/keyring
log file = /var/log/ceph/ceph2-rgw-ceph05.rgw0.log
rgw frontends = beast endpoint=192.168.120.57:8080
rgw thread pool size = 512
rgw_zone=z2

[client.rgw.ceph06.rgw0]
host = ceph06
keyring = /var/lib/ceph/radosgw/ceph2-rgw.ceph06.rgw0/keyring
log file = /var/log/ceph/ceph2-rgw-ceph06.rgw0.log
rgw frontends = beast endpoint=192.168.120.58:8080
rgw thread pool size = 512
rgw_zone=z2

[client.rgw.ceph07.rgw0]
host = ceph07
keyring = /var/lib/ceph/radosgw/ceph2-rgw.ceph07.rgw0/keyring
log file = /var/log/ceph/ceph2-rgw-ceph07.rgw0.log
rgw frontends = beast endpoint=192.168.120.59:8080
rgw thread pool size = 512
rgw_zone=z2

[client.rgw.ceph08.rgw0]
host = ceph08
keyring = /var/lib/ceph/radosgw/ceph2-rgw.ceph08.rgw0/keyring
log file = /var/log/ceph/ceph2-rgw-ceph08.rgw0.log
rgw frontends = beast endpoint=192.168.120.60:8080
rgw thread pool size = 512
rgw_zone=z2

After editing, synchronize the configuration file to the other nodes of the cluster and restart the RGW service on every RGW node:

[root@ceph05 ~]# systemctl restart ceph-radosgw@rgw.`hostname -s`.rgw0

4. Synchronization status check

Once the secondary zone is up and running, you can check the synchronization status. Synchronization copies the users and buckets created in the master zone to the secondary zone. Create a user named candon in the master zone, then list the users in the secondary zone:

[root@ceph01 ~]# radosgw-admin --cluster ceph1 user create --uid="candon" --display-name="First User" { "user_id": "candon", "display_name": "First User", "email": "", "suspended": 0, "max_buckets": 1000, "subusers": [], "keys": [ { "user": "candon", "access_key": "Y9WJW2H2N4CLDOOE8FN7", "secret_key": "CsWtWc40R2kJSi0BEesIjcJ2BroY8sVv821c95ZD" } ], "swift_keys": [], "caps": [], "op_mask": "read, write, delete", "default_placement": "", "default_storage_class": "", "placement_tags": [], "bucket_quota": { "enabled": false, "check_on_raw": false, "max_size": -1, "max_size_kb": 0, "max_objects": -1 }, "user_quota": { "enabled": false, "check_on_raw": false, "max_size": -1, "max_size_kb": 0, "max_objects": -1 }, "temp_url_keys": [], "type": "rgw", "mfa_ids": [] } [root@ceph01 ~]# radosgw-admin --cluster ceph1 user list [ "sync-user", "candon" ] [root@ceph05 ~]# radosgw-admin --cluster ceph2 user list [ "sync-user", "candon" ]

View the synchronization status:

[root@ceph01 ~]# radosgw-admin --cluster ceph1 sync status
          realm 0f13bb55-68f6-4489-99fb-d79ba8ca959a (xzxj)
      zonegroup 8259119d-4ed7-4cfc-af28-9a8e6678c5f7 (all)
           zone 91d15c30-f785-4bd1-8e80-d63ab939b259 (z1)
  metadata sync no sync (zone is master)
      data sync source: 04231ccf-bb2b-4eff-aba7-a7cb9a3505cf (z2)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source

[root@ceph05 ~]# radosgw-admin --cluster ceph2 sync status
          realm 0f13bb55-68f6-4489-99fb-d79ba8ca959a (xzxj)
      zonegroup 8259119d-4ed7-4cfc-af28-9a8e6678c5f7 (all)
           zone 04231ccf-bb2b-4eff-aba7-a7cb9a3505cf (z2)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 91d15c30-f785-4bd1-8e80-d63ab939b259 (z1)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source

Note: although the secondary zone accepts bucket operations, it handles them by forwarding them to the master zone and then syncing the result back. If the master zone is down, bucket operations issued against the secondary zone will fail, but object operations can still succeed.
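
To spot-check this active-active behaviour from a client, the following minimal Python sketch (not part of the original walkthrough) writes an object through a z1 endpoint and polls a z2 endpoint until replication catches up. It assumes boto3 is installed (pip install boto3) and reuses the candon S3 keys created above; the bucket name dual-active-test is arbitrary. Because the zones are active-active, writing through z2 and reading from z1 would work the same way.

import time
import boto3
from botocore.exceptions import ClientError

KEYS = dict(aws_access_key_id='Y9WJW2H2N4CLDOOE8FN7',
            aws_secret_access_key='CsWtWc40R2kJSi0BEesIjcJ2BroY8sVv821c95ZD')

# One endpoint per zone (any RGW of that zone will do).
z1 = boto3.client('s3', endpoint_url='http://192.168.120.53:8080', **KEYS)
z2 = boto3.client('s3', endpoint_url='http://192.168.120.57:8080', **KEYS)

# Create a test bucket and write an object through zone z1.
z1.create_bucket(Bucket='dual-active-test')
z1.put_object(Bucket='dual-active-test', Key='hello.txt', Body=b'written via z1')

# Replication is asynchronous, so poll zone z2 until the object arrives.
for _ in range(30):
    try:
        body = z2.get_object(Bucket='dual-active-test', Key='hello.txt')['Body'].read()
        print('read from z2:', body.decode())
        break
    except ClientError:
        time.sleep(2)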

5. Client test

Here we test with an S3 client and a Swift client.

5.1 S3 client test

[root@client1 ~]# yum -y install python-boto
[root@client1 ~]# vi s3test.py
import boto
import boto.s3.connection

access_key = 'Y9WJW2H2N4CLDOOE8FN7'
secret_key = 'CsWtWc40R2kJSi0BEesIjcJ2BroY8sVv821c95ZD'

boto.config.add_section('s3')
conn = boto.connect_s3(
    aws_access_key_id = access_key,
    aws_secret_access_key = secret_key,
    host = 'ceph01', port = 8080,
    is_secure = False,
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name}\t{created}".format(
        name = bucket.name,
        created = bucket.creation_date,
    )

[root@client1 ~]# python s3test.py
my-new-bucket	2020-04-30T07:27:23.270Z
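
For newer clients, a roughly equivalent Python 3 sketch using boto3 (an assumption; the original uses the legacy boto library, and boto3 would need to be installed with pip install boto3) might look like this, with the same endpoint and keys as above:

import boto3

conn = boto3.client(
    's3',
    endpoint_url='http://192.168.120.53:8080',   # any RGW endpoint of zone z1
    aws_access_key_id='Y9WJW2H2N4CLDOOE8FN7',
    aws_secret_access_key='CsWtWc40R2kJSi0BEesIjcJ2BroY8sVv821c95ZD',
)

# List the buckets visible to the candon user (should include my-new-bucket).
for bucket in conn.list_buckets()['Buckets']:
    print(bucket['Name'], bucket['CreationDate'])
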
5.2 Swift client test

Create a Swift subuser for candon:

[root@ceph01 ~]# radosgw-admin --cluster ceph1 subuser create --uid=candon --subuser=candon:swift --access=full
{
    "user_id": "candon",
    "display_name": "First User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [
        {
            "id": "candon:swift",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "candon",
            "access_key": "Y9WJW2H2N4CLDOOE8FN7",
            "secret_key": "CsWtWc40R2kJSi0BEesIjcJ2BroY8sVv821c95ZD"
        }
    ],
    "swift_keys": [
        {
            "user": "candon:swift",
            "secret_key": "VZaiUF8DzLJYtT67Jg5tWZStDWHsmAi6K6KDuGQc"
        }
    ],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

Install the Swift command-line client and list the user's buckets:

[root@client1 ~]# yum -y install python-setuptools
[root@client1 ~]# easy_install pip
[root@client1 ~]# pip install --upgrade setuptools
[root@client1 ~]# swift -A http://192.168.120.53:8080/auth/1.0 -U candon:swift -K 'VZaiUF8DzLJYtT67Jg5tWZStDWHsmAi6K6KDuGQc' list
my-new-bucket
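
The same check can also be done from Python with python-swiftclient (assumed to be installed via pip install python-swiftclient); a minimal sketch:

import swiftclient

conn = swiftclient.Connection(
    authurl='http://192.168.120.53:8080/auth/1.0',   # RGW Swift auth endpoint
    user='candon:swift',
    key='VZaiUF8DzLJYtT67Jg5tWZStDWHsmAi6K6KDuGQc',
)

# get_account() returns (headers, list of container dicts); print the container names.
headers, containers = conn.get_account()
for container in containers:
    print(container['name'])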

6. Failover verification

Set z2 to master:

[root@ceph05 ~]# radosgw-admin --cluster ceph2 zone modify --rgw-zone=z2 --master --default
{
    "id": "04231ccf-bb2b-4eff-aba7-a7cb9a3505cf",
    "name": "z2",
    "domain_root": "z2.rgw.meta:root",
    "control_pool": "z2.rgw.control",
    "gc_pool": "z2.rgw.log:gc",
    "lc_pool": "z2.rgw.log:lc",
    "log_pool": "z2.rgw.log",
    "intent_log_pool": "z2.rgw.log:intent",
    "usage_log_pool": "z2.rgw.log:usage",
    "reshard_pool": "z2.rgw.log:reshard",
    "user_keys_pool": "z2.rgw.meta:users.keys",
    "user_email_pool": "z2.rgw.meta:users.email",
    "user_swift_pool": "z2.rgw.meta:users.swift",
    "user_uid_pool": "z2.rgw.meta:users.uid",
    "otp_pool": "z2.rgw.otp",
    "system_key": {
        "access_key": "ZA4TXA65C5TGCPX4B8V6",
        "secret_key": "BEYnz6QdAvTbt36L7FhwGF2F5rHWeH66cb0eSO24"
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "z2.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "z2.rgw.buckets.data"
                    }
                },
                "data_extra_pool": "z2.rgw.buckets.non-ec",
                "index_type": 0
            }
        }
    ],
    "metadata_heap": "",
    "realm_id": "0f13bb55-68f6-4489-99fb-d79ba8ca959a"
}

Update period:

[root@ceph05 ~]# radosgw-admin --cluster ceph2 period update --commit

Finally, restart the gateway service on every RGW node of the secondary cluster:

[root@ceph05 ~]# systemctl restart ceph-radosgw@rgw.`hostname -s`.rgw0

7. Disaster recovery

After the old master zone has recovered, switch it back to being the master with the following commands:

[root@ceph01 ~]# radosgw-admin --cluster ceph1 realm pull --url=http://192.168.120.57:8080 --access-key=ZA4TXA65C5TGCPX4B8V6 --secret=BEYnz6QdAvTbt36L7FhwGF2F5rHWeH66cb0eSO24
{
    "id": "0f13bb55-68f6-4489-99fb-d79ba8ca959a",
    "name": "xzxj",
    "current_period": "21a6550a-3236-4b99-9bc0-25268bf1a5c6",
    "epoch": 3
}

[root@ceph01 ~]# radosgw-admin --cluster ceph1 zone modify --rgw-zone=z1 --master --default
[root@ceph01 ~]# radosgw-admin --cluster ceph1 period update --commit

Then restart the gateway service on every RGW node of the recovered master cluster:

[root@ceph01 ~]# systemctl restart ceph-radosgw@rgw.`hostname -s`.rgw0

To make the standby zone read-only, run the following commands on a node of the standby cluster:

[root@ceph05 ~]# radosgw-admin --cluster ceph2 zone modify --rgw-zone=z2 --read-only
[root@ceph05 ~]# radosgw-admin --cluster ceph2 period update --commit

Finally, restart the gateway service on the standby nodes:

[root@ceph05 ~]# systemctl restart ceph-radosgw@rgw.`hostname -s`.rgw0

8. Fixing the Object Gateway error in the web management interface

If the dashboard is enabled and multi-site has been configured, clicking Object Gateway in the web management interface returns an error, because the default ceph-dashboard RGW user no longer exists:

[root@ceph01 ~]# radosgw-admin user info --uid=ceph-dashboard
could not fetch user info: no user info saved

Recreate this user on the master node:

[root@ceph01 ~]# radosgw-admin user create --uid=ceph-dashboard --display-name=ceph-dashboard --system

Record the new user's access_key and secret_key, then update the dashboard's RGW API access key and secret key:

[root@ceph01 ~]# ceph dashboard set-rgw-api-access-key FX1L1DAY3JXI5J88VZLP
Option RGW_API_ACCESS_KEY updated
[root@ceph01 ~]# ceph dashboard set-rgw-api-secret-key UHArzi8B82sAMwMxUnkWH4dKy2O1iOCK25nV0rI1
Option RGW_API_SECRET_KEY updated

At this point the master cluster's dashboard can access the object gateway normally, but the secondary cluster's dashboard still needs the same credentials. Update the RGW API access key and secret key on any node of the secondary cluster:

[root@ceph06 ~]# ceph dashboard set-rgw-api-access-key FX1L1DAY3JXI5J88VZLP
Option RGW_API_ACCESS_KEY updated
[root@ceph06 ~]# ceph dashboard set-rgw-api-secret-key UHArzi8B82sAMwMxUnkWH4dKy2O1iOCK25nV0rI1
Option RGW_API_SECRET_KEY updated

9. Delete the default zonegroup and zone

If you do not need the default zonegroup and zone, delete them on both the primary and secondary clusters:

[root@ceph01 ~]# radosgw-admin --cluster ceph1 zonegroup delete --rgw-zonegroup=default
[root@ceph01 ~]# radosgw-admin --cluster ceph1 zone delete --rgw-zone=default
[root@ceph01 ~]# radosgw-admin --cluster ceph1 period update --commit

Then edit /etc/ceph/ceph.conf and add the following:

[mon]
mon allow pool delete = true

After synchronizing the configuration file to the other nodes, restart all mon services and then delete the default pools:

[root@ceph01 ~]# systemctl restart ceph-mon.target
[root@ceph01 ~]# ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
[root@ceph01 ~]# ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it
[root@ceph01 ~]# ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
