
RADOS Gateway multi-site replication

Overview

Charmed Ceph supports multi-site replication with the Ceph RADOS Gateway. This adds resilience to your object storage by replicating it across geographically separated Ceph clusters. It is particularly important for active backup and disaster recovery scenarios, where a compromised site would otherwise mean losing access to data.

Terminology

The following terms are key to understanding multi-site replication.

RADOS Gateway

The RADOS Gateway (RGW) is an object storage interface built on top of librados. It provides a RESTful gateway between applications and Ceph storage clusters, with object storage functionality compatible with both the AWS S3 and OpenStack Swift APIs.
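Because the interface is S3-compatible, any standard S3 client can target an RGW endpoint. Below is a minimal sketch using the boto3 Python library; the endpoint URL, bucket name, and credentials are hypothetical placeholders.

```python
import boto3

# RGW exposes an HTTP(S) endpoint that speaks the S3 protocol, so a
# generic S3 client library can be pointed straight at it.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:80",  # hypothetical RGW endpoint
    aws_access_key_id="ACCESS_KEY",            # hypothetical credentials
    aws_secret_access_key="SECRET_KEY",
)

# Standard S3 operations map directly onto RGW buckets and objects.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello from RGW")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```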

Zone

A zone is effectively where RGW stores objects. Ceph automatically creates multiple storage pools backing every zone, using the {zone-name}.pool-name naming convention, for metadata, logs and bucket data. Zones contain buckets, and buckets contain objects.
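For illustration, the sketch below lists the pools backing a zone using the Python rados bindings; the configuration file path and zone name are assumptions for the example.

```python
import rados

# Connect to the cluster using a standard Ceph configuration file
# (assumed path; adjust for your deployment).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # Pools whose names start with the zone name back that zone's
    # metadata, logs and bucket data, per the convention above.
    zone = "default"  # hypothetical zone name
    for pool in cluster.list_pools():
        if pool.startswith(zone + "."):
            print(pool)
finally:
    cluster.shutdown()
```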

Zonegroup

A zonegroup is a collection of zones. By default, data and metadata are actively synchronised across all zones in a zonegroup, though this can be configured to meet custom synchronisation requirements. Each zonegroup should have a master zone.

Realm

A realm is a container for its underlying zonegroups: a globally unique namespace consisting of one or more zonegroups, which holds the multi-site configuration for those zonegroups. Each realm should have a master zonegroup.
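To make the containment hierarchy concrete (realm contains zonegroups, zonegroups contain zones), here is an illustrative sketch in plain Python dataclasses. The structure and all names are invented for the example; this is not a Ceph API.

```python
from dataclasses import dataclass, field

@dataclass
class Zone:
    name: str
    read_only: bool = False  # e.g. a secondary zone configured read-only

@dataclass
class Zonegroup:
    name: str
    master_zone: str  # each zonegroup should have a master zone
    zones: list[Zone] = field(default_factory=list)

@dataclass
class Realm:
    name: str              # globally unique namespace
    master_zonegroup: str  # each realm should have a master zonegroup
    zonegroups: list[Zonegroup] = field(default_factory=list)

# Two geographically separated zones replicating within one zonegroup.
realm = Realm(
    name="global",
    master_zonegroup="eu",
    zonegroups=[
        Zonegroup(
            name="eu",
            master_zone="eu-east",
            zones=[Zone("eu-east"), Zone("eu-west", read_only=True)],
        )
    ],
)
```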

How multi-site replication works

At an abstract level, multi-site replication works as depicted in the diagram below.

The “Apps” or clients perform IO at the RGW endpoints. In this particular example, the secondary zone is configured to be “read-only”, so its clients can only perform read operations. The feedback arrow from the master zone to the secondary zone represents data synchronisation between the respective RGW endpoints. At the bottom, we see the zonegroup (which spans both zones) and the data distributed across the Ceph clusters. A code sketch of this flow follows the diagram.

[Diagram: multi-site replication between a master zone and a read-only secondary zone]

Reference: Upstream Ceph documentation
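The sketch below illustrates the flow in the diagram using boto3, assuming two hypothetical RGW endpoints (one per zone) that share the same S3 credentials: an object is written at the master zone and, once RGW has synchronised it, read back from the read-only secondary.

```python
import boto3

CREDENTIALS = dict(
    aws_access_key_id="ACCESS_KEY",      # hypothetical credentials
    aws_secret_access_key="SECRET_KEY",
)

# One S3 client per zone's RGW endpoint (hypothetical hostnames).
master = boto3.client("s3", endpoint_url="http://rgw.site-a.example:80", **CREDENTIALS)
secondary = boto3.client("s3", endpoint_url="http://rgw.site-b.example:80", **CREDENTIALS)

# Write at the master zone's RGW endpoint ...
master.put_object(Bucket="demo-bucket", Key="report.csv", Body=b"a,b,c\n1,2,3\n")

# ... and, after synchronisation completes, read the same key back
# from the secondary site. Writes sent to the read-only secondary
# would be rejected by RGW.
print(secondary.get_object(Bucket="demo-bucket", Key="report.csv")["Body"].read())
```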

Further resources

The following how-to pages are related to multi-site replication.
