Environment Overview
Connecting OpenStack Pike to a single Ceph RBD cluster is straightforward to configure; refer to the official OpenStack or Ceph documentation:
1. OpenStack official reference configuration:
https://docs.openstack.org/cinder/train/configuration/block-storage/drivers/ceph-rbd-volume-driver.html
2. Ceph official reference configuration:
https://docs.ceph.com/docs/master/install/install-ceph-deploy/
Due to changes in the physical environment and in business requirements, the cloud environment now needs a single OpenStack deployment to use two backend Ceph RBD storage clusters of different versions.
The configuration below is based on the following environments, all of which are already running normally:
1) OpenStack Pike
2) Ceph Luminous 12.2.5
3) Ceph Nautilus 14.2.7
The integration between OpenStack and Ceph Luminous is already configured and running. On top of this existing OpenStack + Ceph environment, a Ceph Nautilus storage cluster is added so that OpenStack can use both sets of storage resources at the same time.
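Before starting, it is worth a quick sanity check that each cluster is healthy and that its version matches the list above. These are plain status commands, run on a monitor node of each cluster; this is not a required step:
ceph --version
ceph -s
ceph health detail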
Configuration Steps
1. Copy the configuration files
# Copy the new cluster's configuration file and the cinder account key to the OpenStack cinder node
/etc/ceph/ceph3.conf
/etc/ceph/ceph.client.cinder2.keyring
# The cinder account is used here, so only the key of the cinder2 account needs to be copied
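A minimal sketch of the copy step, assuming the new cluster's monitor node is reachable as mon-ceph3 (a hypothetical hostname) and the keyring has already been exported there under the name used in this article; adjust paths and ownership to your environment:
scp mon-ceph3:/etc/ceph/ceph.conf /etc/ceph/ceph3.conf
scp mon-ceph3:/etc/ceph/ceph.client.cinder2.keyring /etc/ceph/ceph.client.cinder2.keyring
chgrp cinder /etc/ceph/ceph.client.cinder2.keyring
chmod 0640 /etc/ceph/ceph.client.cinder2.keyring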
2. Create the storage pools
# After the OSDs have been added, create the storage pools, specify their pg/pgp numbers, and enable the corresponding application mode
ceph osd pool create volumes 512 512
ceph osd pool create backups 128 128
ceph osd pool create vms 512 512
ceph osd pool create images 128 128
ceph osd pool application enable volumes rbd
ceph osd pool application enable backups rbd
ceph osd pool application enable vms rbd
ceph osd pool application enable images rbd
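To confirm that the pools exist and have the rbd application enabled, the following status commands can be run on the new cluster:
ceph osd pool ls detail
ceph osd pool application get volumes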
3. Create the cluster access accounts
ceph auth get-or-create client.cinder2 mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.cinder2-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
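The accounts created above still need to be exported as keyring files so they can be copied to the OpenStack nodes (see step 1). One way to do this, assuming the commands are run on a node with admin access to the new cluster (file paths are illustrative):
ceph auth get client.cinder2 -o /etc/ceph/ceph.client.cinder2.keyring
ceph auth get client.cinder2-backup -o /etc/ceph/ceph.client.cinder2-backup.keyring
ceph auth get client.glance -o /etc/ceph/ceph.client.glance.keyring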
4. Check the service status
# Check the current cinder service processes in OpenStack
source /root/keystonerc.admin
cinder service-list
5. Modify the configuration file
# Modify the cinder configuration file (/etc/cinder/cinder.conf)
[DEFAULT]
enabled_backends = ceph2,ceph3
[ceph2]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph2
rbd_pool = volumes1
rbd_ceph_conf = /etc/ceph2/ceph2.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder1
rbd_secret_uuid = **
[ceph3]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph3
rbd_pool = volumes2
rbd_ceph_conf = /etc/ceph/ceph3/ceph3.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder2
rbd_secret_uuid = **
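Each rbd_secret_uuid value (masked as ** above) must match a libvirt secret defined on the compute nodes, as described in the libvirt section below. For the new cluster, a fresh UUID can be generated once and then reused both here and in secret2.xml, for example:
uuidgen
# use the printed value for rbd_secret_uuid in the [ceph3] section and for the <uuid> element in secret2.xml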
6. Restart the services
# Restart the cinder-volume and cinder-scheduler services
service openstack-cinder-volume restart
Redirecting to /bin/systemctl restart openstack-cinder-volume.service
service openstack-cinder-scheduler restart
Redirecting to /bin/systemctl restart openstack-cinder-scheduler.service
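If the new ceph3 backend does not come up in the next step, the cinder-volume log (typically /var/log/cinder/volume.log on an RDO/Packstack installation; the location may differ in your deployment) usually shows whether the driver could reach the new cluster:
tail -n 100 /var/log/cinder/volume.log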
7. Check the services again
cinder service-list
8. Create volume types for testing
# Bind each volume type to its backend
cinder type-create ceph2
cinder type-key ceph2 set volume_backend_name=ceph2
cinder type-create ceph3
cinder type-key ceph3 set volume_backend_name=ceph3
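To confirm that the extra spec was stored on each volume type, the cinder client can list the types and their extra specs:
cinder type-list
cinder extra-specs-list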
9. Verify the binding by creating volumes
cinder create --volume-type ceph2 --display_name {volume-name} {volume-size}
cinder create --volume-type ceph3 --display_name {volume-name} {volume-size}
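A concrete example of the test, using a hypothetical volume name and a 1 GB size; the rbd command at the end checks that the volume really landed in the new cluster's pool (Cinder names the RBD images volume-<id>). Adjust the pool name, conf and keyring paths to match your cinder.conf:
cinder create --volume-type ceph3 --display_name test-ceph3 1
cinder list
rbd ls volumes2 --id cinder2 -c /etc/ceph/ceph3.conf --keyring /etc/ceph/ceph.client.cinder2.keyring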
Configuring libvirt
1. Add the second Ceph cluster's key to libvirt on the nova-compute nodes
# For VMs to access volumes on the second Ceph RBD cluster, the key of that cluster's cinder user must be added to libvirt on every nova-compute node
ceph -c /etc/ceph3/ceph3.conf -k /etc/ceph3/ceph.client.cinder2.keyring auth get-key client.cinder2 | tee client.cinder2.key
# Use the same uuid that was configured as rbd_secret_uuid for the second Ceph cluster in cinder.conf above
cat > secret2.xml <<EOF
<secret ephemeral='no' private='no'>
<uuid>***</uuid>
<usage type='ceph'>
<name>client.cinder2 secret</name>
</usage>
</secret>
EOF
# Copy and run the whole block above as one command, replacing the uuid value
sudo virsh secret-define --file secret2.xml
sudo virsh secret-set-value --secret ***** --base64 $(cat client.cinder2.key) && rm client.cinder2.key secret2.xml
# Enter Y when prompted to confirm the deletion
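To confirm that the secret was registered and holds the key, run on each nova-compute node:
sudo virsh secret-list
sudo virsh secret-get-value --secret *****
# replace ***** with the same uuid used above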
2. Verify that the configuration works
# Attach one volume of each of the two types created earlier to an OpenStack VM to verify the configuration
nova volume-attach {instance-id} {volume1-id}
nova volume-attach {instance-id} {volume2-id}
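After attaching, the volume status should change to in-use, and a new block device (for example /dev/vdb; the exact name depends on the guest) should appear inside the VM:
cinder list
# then, inside the guest:
lsblk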
References:
《Ceph設計原理與實現》 (Ceph: Design Principles and Implementation), by 謝型果 (Xie Xingguo)
Red Hat documentation:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/ceph_block_device_to_openstack_guide/installing_and_configuring_ceph_clients
Ceph documentation:
https://docs.ceph.com/docs/master/install/install-ceph-deploy/