This article shares how Ceph can create a pool on specified OSDs by using device classes. The editor finds it quite practical, so it is shared here for reference; let's take a look together.
Previously, when creating a pool on specified OSDs, our approach was to pick out a subset of OSDs (say, OSDs with a particular property), build a separate logical OSD tree from them, create a crush_rule against that new tree, and then set the pool's crush_rule to it. It turns out there is a more convenient way that needs no separate logical tree: simply add a new crush_rule that takes a different device class (the disks carry class attributes, so a single host bucket already appears under multiple classes).
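The new rule can be added by editing the decompiled CRUSH map. A minimal sketch of that workflow, assuming the illustrative file name crushmap.bin (only decrushmap appears in the output below):

[root@ceph-node1 opt]# ceph osd getcrushmap -o crushmap.bin    # export the binary CRUSH map
[root@ceph-node1 opt]# crushtool -d crushmap.bin -o decrushmap    # decompile it into editable text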
[root@ceph-node1 opt]# cat decrushmap
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class hdd
device 1 osd.1 class ssd
device 2 osd.2 class hdd
device 3 osd.3 class ssd
device 4 osd.4 class hdd
device 5 osd.5 class ssd
device 6 osd.6 class hdd
device 7 osd.7 class hdd
device 8 osd.8 class hdd
device 9 osd.9 class hdd
device 10 osd.10 class hdd
device 11 osd.11 class hdd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host ceph-node1 {
    id -3        # do not change unnecessarily
    id -4 class hdd        # do not change unnecessarily
    id -15 class ssd        # do not change unnecessarily
    # weight 0.058
    alg straw2
    hash 0    # rjenkins1
    item osd.0 weight 0.029
    item osd.1 weight 0.029
}
host ceph-node2 {
    id -5        # do not change unnecessarily
    id -6 class hdd        # do not change unnecessarily
    id -16 class ssd        # do not change unnecessarily
    # weight 0.058
    alg straw2
    hash 0    # rjenkins1
    item osd.2 weight 0.029
    item osd.3 weight 0.029
}
host ceph-node3 {
    id -7        # do not change unnecessarily
    id -8 class hdd        # do not change unnecessarily
    id -17 class ssd        # do not change unnecessarily
    # weight 0.058
    alg straw2
    hash 0    # rjenkins1
    item osd.4 weight 0.029
    item osd.5 weight 0.029
}
host ceph-node4 {
    id -9        # do not change unnecessarily
    id -10 class hdd        # do not change unnecessarily
    id -18 class ssd        # do not change unnecessarily
    # weight 0.058
    alg straw2
    hash 0    # rjenkins1
    item osd.6 weight 0.029
    item osd.7 weight 0.029
}
host ceph-node5 {
    id -11        # do not change unnecessarily
    id -12 class hdd        # do not change unnecessarily
    id -19 class ssd        # do not change unnecessarily
    # weight 0.058
    alg straw2
    hash 0    # rjenkins1
    item osd.8 weight 0.029
    item osd.9 weight 0.029
}
host ceph-node6 {
    id -13        # do not change unnecessarily
    id -14 class hdd        # do not change unnecessarily
    id -20 class ssd        # do not change unnecessarily
    # weight 0.058
    alg straw2
    hash 0    # rjenkins1
    item osd.10 weight 0.029
    item osd.11 weight 0.029
}
root default {
    id -1        # do not change unnecessarily
    id -2 class hdd        # do not change unnecessarily
    id -21 class ssd        # do not change unnecessarily
    # weight 0.354
    alg straw2
    hash 0    # rjenkins1
    item ceph-node1 weight 0.059
    item ceph-node2 weight 0.059
    item ceph-node3 weight 0.059
    item ceph-node4 weight 0.059
    item ceph-node5 weight 0.059
    item ceph-node6 weight 0.059
}

# rules
rule replicated_rule {
    id 0
    type replicated
    min_size 1
    max_size 10
    step take default class hdd
    step chooseleaf firstn 0 type host
    step emit
}
rule replicated_ssd {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default class ssd
    step chooseleaf firstn 0 type host
    step emit
}
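After appending the replicated_ssd rule shown above, the map is recompiled and injected back into the cluster; the file name crushmap.new is illustrative:

[root@ceph-node1 opt]# crushtool -c decrushmap -o crushmap.new    # recompile the edited text map
[root@ceph-node1 opt]# ceph osd setcrushmap -i crushmap.new    # inject it into the cluster

On Luminous and later, the same class-based rule can also be created in a single step, without touching the map by hand:

[root@ceph-node1 opt]# ceph osd crush rule create-replicated replicated_ssd default host ssd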
Afterwards, when creating the pool, simply select the new crush_rule replicated_ssd, and the problem is solved.
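For example (the pool name ssdpool, the PG counts, and the test object name are illustrative):

[root@ceph-node1 opt]# ceph osd pool create ssdpool 64 64 replicated replicated_ssd    # new pool bound to the ssd rule
[root@ceph-node1 opt]# ceph osd pool set <existing-pool> crush_rule replicated_ssd    # or rebind an existing pool
[root@ceph-node1 opt]# ceph osd map ssdpool testobject    # verify: the acting set should contain only osd.1, osd.3 and osd.5 (the ssd OSDs in the map above)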
Thank you for reading! This article on how Ceph creates a pool on specified OSDs using device classes ends here. We hope the content above is helpful and that you have learned something new. If you found the article useful, please share it so more people can see it!