This post describes how to manually integrate the Red Hat OpenStack Platform 9 (RHOSP9) Cinder service with multiple pre-existing external Red Hat Ceph Storage 2 (RHCS2) clusters. The final goal is a Cinder configuration with multiple storage backends and support for creating volumes in either backend.
This post will not cover the initial deployment of OpenStack Cinder or the Ceph clusters.
Configuration Rationale
There are multiple scenarios where RHOSP9 Cinder will need to be configured to use multiple RHCS2 storage clusters. These include, but are not limited to:
- Different performance profiles between the clusters
- Expanding capacity where RHCS2 expansion is not possible
- Other operational requirements where various Cinder block storage devices must be kept separate
Component Description
Cinder
The OpenStack Cinder service provides compute instances with persistent block storage. Block storage is appropriate for performance-sensitive scenarios such as databases, expandable file systems, or providing a server with access to raw block-level storage. Persistent block storage can survive instance termination and can also be moved across instances like any external storage device.
Red Hat Ceph Storage
The Ceph Storage Cluster is the foundation for all Ceph deployments. Based upon RADOS (Reliable Autonomic Distributed Object Store), a Ceph Storage Cluster consists of two types of daemons: Ceph Object Storage Daemons (OSDs), which store data as objects on storage nodes, and Ceph Monitors, which maintain a master copy of the cluster map. Ceph stores data objects within two logical groups: pools and placement groups (PGs).
- Pools: Pools are logical groups for storing objects. Pools manage the number of placement groups, the number of object replicas, and the CRUSH ruleset for the pool. Ceph can snapshot pools. Each pool has a number of placement groups. CRUSH (Controlled Replication Under Scalable Hashing) maps PGs to OSDs dynamically. When a Ceph Client stores objects, CRUSH maps each object to a placement group.
- Placement Groups: A Placement Group (PG) aggregates a series of objects into a group and maps the group to a series of OSDs. Tracking object placement and object metadata on a per-object basis is computationally expensive; a system with millions of objects cannot realistically track placement per object. Placement groups address this barrier to performance and scalability. Additionally, placement groups reduce the number of processes and the amount of per-object metadata Ceph must track when storing and retrieving data.
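To see this mapping in action, the ceph CLI can report which PG and which set of OSDs an object would map to. The commands below are a quick illustration run against the default rbd pool; the object name is arbitrary and purely hypothetical.
[root@ceph1 ~]# ceph osd pool get rbd pg_num
[root@ceph1 ~]# ceph osd map rbd some-object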
Red Hat Ceph Storage (RHCS)
Red Hat Ceph Storage packages open-source Ceph, which is designed to present object, block, and file storage from a single distributed computer cluster to connected clients. Ceph's main goals are to be completely distributed without a single point of failure, scalable to the exabyte level, and freely available. RHCS has gone through Red Hat's extensive QA and testing for increased reliability and interoperability. By adding Ceph Storage to its product portfolio, Red Hat offers a single source of support to deploy and manage a complex OpenStack environment integrated with Ceph storage.
For details on RHCS, please refer to https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2
Initial Conditions
- OpenStack Mitaka (RHOSP9) deployed all-in-one on clone.example.com
- Two Ceph clusters deployed as ceph1.example.com and ceph2.example.com
First Backend Configuration Process
The example configuration is for an OpenStack cluster installed in an all-in-one configuration with one external Ceph cluster. For larger OpenStack installations, the Cinder reconfiguration operations must be repeated on each controller. For multiple Ceph clusters, the Ceph steps are repeated once per cluster, and a unique cinder.conf configuration stanza is created for each.
Create Ceph pool
A Ceph pool needs to be created for Cinder usage. The placement group count (32 below) should be adjusted to satisfy operational requirements.
[root@ceph1 ~]# ceph osd pool create cinder1 32
[root@ceph1 ~]# rados lspools
rbd
cinder1
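If the initial value of 32 proves too small as the pool grows, the PG count can be raised later. A sketch, assuming a new target of 64; in RHCS2 both pg_num and pgp_num must be increased before data rebalances:
[root@ceph1 ~]# ceph osd pool set cinder1 pg_num 64
[root@ceph1 ~]# ceph osd pool set cinder1 pgp_num 64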
Create Ceph client keyring
The client authentication keyring is created to permit cephx-authenticated client connections. The "images1" and "vms1" pools are for other OpenStack usage. The client name (client.ceph1) needs to be unique for this service across all Ceph clusters.
[root@ceph1 ~]# ceph auth get-or-create client.ceph1 mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=cinder1, allow rwx pool=vms1, allow rx pool=images1' | tee /etc/ceph/ceph.client.ceph1.keyring
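To confirm the key and its capabilities were stored as intended, the entry can be read back from the cluster:
[root@ceph1 ~]# ceph auth get client.ceph1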
Copy Ceph config
The Ceph configuration file needs to be copied to the OpenStack controllers.
[root@ceph1 ~]# scp /etc/ceph/ceph.conf root@clone:/etc/ceph/ceph-ceph1.conf
Copy Ceph client keyring
[root@ceph1 ~]# scp /etc/ceph/ceph.client.ceph1.keyring root@clone:/etc/ceph/ceph.client.ceph1.keyring
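The cinder-volume process runs as the cinder user, not root, so the copied keyring must be readable by it. The default 0644 mode left by scp (visible in the /etc/ceph listing later in this post) satisfies this; sites wanting a tighter mode can restrict the file to the cinder group instead:
[root@clone ~]# chgrp cinder /etc/ceph/ceph.client.ceph1.keyring
[root@clone ~]# chmod 0640 /etc/ceph/ceph.client.ceph1.keyring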
Install ceph-common packages
The ceph-common package needs to be installed on the Cinder servers.
[root@clone ~]# yum -q -y install ceph-common
Package 1:ceph-common-10.2.2-0.el7.x86_64 already installed and latest version
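Before editing cinder.conf, a quick sanity check confirms the controller can reach the first cluster with the copied configuration and key. When invoked with --id ceph1, the CLI should locate /etc/ceph/ceph.client.ceph1.keyring automatically:
[root@clone ~]# ceph --conf /etc/ceph/ceph-ceph1.conf --id ceph1 -s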
Configure cinder.conf
Add a stanza for the new storage backend to cinder.conf.
[BACKEND_ceph1]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
#rbd_secret_uuid is used by libvirt
rbd_secret_uuid=d0439829-7970-421b-a25e-37b1c3a97d7f
#rbd_ceph_conf points to the configuration file copied above
rbd_ceph_conf=/etc/ceph/ceph-ceph1.conf
#rbd_pool is the OSD pool created for this service
rbd_pool=cinder1
backend_host=rbd:cinder1
#rbd_user is the client key created in the Ceph cluster (client.ceph1 without the client. prefix)
rbd_user=ceph1
volume_backend_name=BACKEND_ceph1
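The rbd_secret_uuid above refers to a libvirt secret that must exist on every compute node (here, the all-in-one host) so libvirt can authenticate to the cluster when attaching volumes. A minimal sketch of defining it, assuming the UUID from the stanza and extracting the key with ceph-authtool from ceph-common; the secret-ceph1.xml file name is arbitrary:
[root@clone ~]# cat > secret-ceph1.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>d0439829-7970-421b-a25e-37b1c3a97d7f</uuid>
  <usage type='ceph'>
    <name>client.ceph1 secret</name>
  </usage>
</secret>
EOF
[root@clone ~]# virsh secret-define --file secret-ceph1.xml
[root@clone ~]# virsh secret-set-value --secret d0439829-7970-421b-a25e-37b1c3a97d7f \
    --base64 $(ceph-authtool /etc/ceph/ceph.client.ceph1.keyring -n client.ceph1 --print-key)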
Update the list of enabled backends in the [DEFAULT] section
enabled_backends = BACKEND_1,BACKEND_ceph1
Restart Cinder
The Cinder volume service needs to be restarted on each configured controller.
[root@clone ~]# systemctl restart openstack-cinder-volume
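If the backend fails to initialize, the RBD driver logs the reason at startup; the default RHOSP log location is assumed here:
[root@clone ~]# grep -i rbd /var/log/cinder/volume.log | tail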
Configure OpenStack to use new backend
Create a new type of volume
[root@clone ~]# openstack volume type create BACKEND_ceph1
Associate the new type with the configured backend.
[root@clone ~]# openstack volume type set --property volume_backend_name=BACKEND_ceph1 BACKEND_ceph1
List available volume types and display the configuration information of the new type.
[root@clone ~]# openstack volume type list
[root@clone ~]# openstack volume type show BACKEND_ceph1
Test new backend
Create a new volume with the new type.
[root@clone ~]# openstack volume create --size 1 --type BACKEND_ceph1 test
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-09-14T03:15:58.362561 |
| description | None |
| encrypted | False |
| id | a7950b6b-fcfd-4c90-8897-a0561befdaad |
| migration_status | None |
| multiattach | False |
| name | test |
| properties | |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | BACKEND_ceph1 |
| updated_at | None |
| user_id | d91cad8a93bb462cab84f51a6925e752 |
+---------------------+--------------------------------------+
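The same volume is visible from the Ceph side; Cinder names the RBD image after the volume ID with a volume- prefix:
[root@clone ~]# rbd --conf /etc/ceph/ceph-ceph1.conf --id ceph1 --pool cinder1 ls
volume-a7950b6b-fcfd-4c90-8897-a0561befdaad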
Second Backend Configuration Process
The same process is used to configure the second backend, with the following parameters changed (a sketch of the resulting configuration follows this list):
- Ceph client name - ceph2
- Ceph client keyring name - ceph.client.ceph2.keyring
- Ceph pool name - cinder2
- Cinder configuration stanza and backend type - BACKEND_ceph2
- OpenStack volume type name - BACKEND_ceph2
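Applying those substitutions yields a cinder.conf stanza like the following sketch; the rbd_secret_uuid value is a placeholder for the UUID of a second libvirt secret holding the client.ceph2 key:
[BACKEND_ceph2]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid=<uuid-of-the-ceph2-libvirt-secret>
rbd_ceph_conf=/etc/ceph/ceph-ceph2.conf
rbd_pool=cinder2
backend_host=rbd:cinder2
rbd_user=ceph2
volume_backend_name=BACKEND_ceph2

The enabled_backends list then grows to include it:
enabled_backends = BACKEND_1,BACKEND_ceph1,BACKEND_ceph2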
Final Configurations
Below is the output of various RHOSP9 and RHCS2 configuration queries.
Contents of /etc/ceph
[root@clone ~]# ls -laF /etc/ceph
total 36
drwxr-xr-x. 2 root root 4096 Sep 13 23:20 ./
drwxr-xr-x. 87 root root 8192 Sep 13 23:21 ../
-rw-r--r--. 1 root root 400 Sep 13 22:21 ceph-ceph1.conf
-rw-r--r--. 1 root root 400 Sep 13 23:20 ceph-ceph2.conf
-rw-r--r--. 1 root root 63 Sep 13 22:22 ceph.client.ceph1.keyring
-rw-r--r--. 1 root root 63 Sep 13 23:19 ceph.client.ceph2.keyring
-rwxr-xr-x. 1 root root 92 Jul 4 06:00 rbdmap*
OpenStack Volume Services
[root@clone ~]# openstack volume service list
+------------------+-----------------------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+-----------------------------+------+---------+-------+----------------------------+
| cinder-volume | clone.example.com@BACKEND_1 | nova | enabled | up | 2016-09-14T03:26:57.000000 |
| cinder-scheduler | clone.example.com | nova | enabled | up | 2016-09-14T03:26:58.000000 |
| cinder-volume | rbd:cinder1@BACKEND_ceph1 | nova | enabled | up | 2016-09-14T03:27:07.000000 |
| cinder-volume | rbd:cinder2@BACKEND_ceph2 | nova | enabled | up | 2016-09-14T03:27:07.000000 |
+------------------+-----------------------------+------+---------+-------+----------------------------+
OpenStack Volume Types
[root@clone ~]# openstack volume type list
+--------------------------------------+---------------+
| ID | Name |
+--------------------------------------+---------------+
| 00bc818f-4d04-4678-9b33-739c9457d14f | BACKEND_ceph2 |
| fbd2bdba-c909-4369-beef-00427df10934 | BACKEND_ceph1 |
| 66c24362-d848-4b21-8124-171cb246f34f | BACKEND_1 |
+--------------------------------------+---------------+
OpenStack Volume Type BACKEND_ceph1
[root@clone ~]# openstack volume type show BACKEND_ceph1
+---------------------------------+--------------------------------------+
| Field | Value |
+---------------------------------+--------------------------------------+
| access_project_ids | None |
| description | None |
| id | fbd2bdba-c909-4369-beef-00427df10934 |
| is_public | True |
| name | BACKEND_ceph1 |
| os-volume-type-access:is_public | True |
| properties | volume_backend_name='BACKEND_ceph1' |
| qos_specs_id | None |
+---------------------------------+--------------------------------------+
OpenStack Volume Type BACKEND_ceph2
[root@clone ~]# openstack volume type show BACKEND_ceph2
+---------------------------------+--------------------------------------+
| Field | Value |
+---------------------------------+--------------------------------------+
| access_project_ids | None |
| description | None |
| id | 00bc818f-4d04-4678-9b33-739c9457d14f |
| is_public | True |
| name | BACKEND_ceph2 |
| os-volume-type-access:is_public | True |
| properties | volume_backend_name='BACKEND_ceph2' |
| qos_specs_id | None |
+---------------------------------+--------------------------------------+
OpenStack Volumes
[root@clone ~]# openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| eada663d-523a-4d00-9874-fd7749745f1d | test2 | available | 1 | |
| a7950b6b-fcfd-4c90-8897-a0561befdaad | test | available | 1 | |
+--------------------------------------+--------------+-----------+------+-------------+
OpenStack Volume Information
[root@clone ~]# openstack volume show test
+--------------------------------+-----------------------------------------+
| Field | Value |
+--------------------------------+-----------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-09-14T03:15:58.000000 |
| description | None |
| encrypted | False |
| id | a7950b6b-fcfd-4c90-8897-a0561befdaad |
| migration_status | None |
| multiattach | False |
| name | test |
| os-vol-host-attr:host | rbd:cinder1@BACKEND_ceph1#BACKEND_ceph1 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 63919c9c4e4c4d149e560ad0815c41d3 |
| properties | |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| type | BACKEND_ceph1 |
| updated_at | 2016-09-14T03:16:00.000000 |
| user_id | d91cad8a93bb462cab84f51a6925e752 |
+--------------------------------+-----------------------------------------+
[root@clone ~]# openstack volume show test2
+--------------------------------+-----------------------------------------+
| Field | Value |
+--------------------------------+-----------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-09-14T03:24:21.000000 |
| description | None |
| encrypted | False |
| id | eada663d-523a-4d00-9874-fd7749745f1d |
| migration_status | None |
| multiattach | False |
| name | test2 |
| os-vol-host-attr:host | rbd:cinder2@BACKEND_ceph2#BACKEND_ceph2 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 63919c9c4e4c4d149e560ad0815c41d3 |
| properties | |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| type | BACKEND_ceph2 |
| updated_at | 2016-09-14T03:24:23.000000 |
| user_id | d91cad8a93bb462cab84f51a6925e752 |
+--------------------------------+-----------------------------------------+
Ceph Pool Contents
[root@ceph1 ~]# rados ls -p cinder1
rbd_header.5e4334da1f50
rbd_id.volume-a7950b6b-fcfd-4c90-8897-a0561befdaad
rbd_object_map.5e4334da1f50
rbd_directory
[root@ceph2 ~]# rados ls -p cinder2
rbd_id.volume-eada663d-523a-4d00-9874-fd7749745f1d
rbd_header.853994579fc
rbd_object_map.853994579fc
rbd_directory
Configuration Notes
- This manual configuration is not supported by OpenStack Director.
- Nova compute does not support talking to multiple backends.