This article explains how to resolve the HEALTH_WARN "too few PGs per OSD" problem in Ceph. The walkthrough is fairly detailed and should be a useful reference, so interested readers are encouraged to read it to the end!
Running ceph -s shows that the cluster is not in an OK state; the details are as follows:
$ sudo ceph -s
    cluster 257faba1-f259-4164-a0f9-1726bd70b05a
     health HEALTH_WARN
            too few PGs per OSD (16 < min 30)
     monmap e1: 1 mons at {bdc217=192.168.13.217:6789/0}
            election epoch 2, quorum 0 bdc217
     osdmap e50: 8 osds: 8 up, 8 in
            flags sortbitwise
      pgmap v119: 64 pgs, 1 pools, 0 bytes data, 0 objects
            715 MB used, 27550 GB / 29025 GB avail
                  64 active+clean
Since this is a newly configured cluster, there is only one pool:
$ sudo ceph osd lspools
0 rbd,
Check the pg_num of the rbd pool:
$ sudo ceph osd pool get rbd pg_num
pg_num: 64
pg_num is 64. With a 2-replica configuration and 8 OSDs, each OSD ends up with 64 / 8 × 2 = 16 PGs on average, which is below the minimum of 30 and therefore triggers the warning shown above.
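As a quick sanity check (not part of the original article), the same arithmetic can be reproduced in the shell; the numbers simply restate the 2-replica / 8-OSD setup described above:

$ echo $(( 64 * 2 / 8 ))     # PGs per OSD at pg_num 64: below the minimum of 30
16
$ echo $(( 128 * 2 / 8 ))    # PGs per OSD at pg_num 128: clears the minimum
32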
Solution: increase the pg_num of the default pool rbd.
$ sudo ceph osd pool set rbd pg_num 128
set pool 0 pg_num to 128
$ sudo ceph -s
    cluster 257faba1-f259-4164-a0f9-1726bd70b05a
     health HEALTH_WARN
            64 pgs stuck inactive
            64 pgs stuck unclean
            pool rbd pg_num 128 > pgp_num 64
     monmap e1: 1 mons at {bdc217=192.168.13.217:6789/0}
            election epoch 2, quorum 0 bdc217
     osdmap e52: 8 osds: 8 up, 8 in
            flags sortbitwise
      pgmap v121: 128 pgs, 1 pools, 0 bytes data, 0 objects
            715 MB used, 27550 GB / 29025 GB avail
                  64 active+clean
                  64 creating
The output shows that pgp_num needs to be updated as well: by default pg_num and pgp_num are the same size (both 64 here), so set pgp_num to 128 too.
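If you want to confirm the current value before changing it, the same get subcommand used earlier also works for pgp_num (a quick check, not part of the original session; the expected output of 64 matches the default noted above):

$ sudo ceph osd pool get rbd pgp_num
pgp_num: 64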
$ sudo ceph osd pool set rbd pgp_num 128
set pool 0 pgp_num to 128
Finally, check the cluster status again; it now reports OK and the warning is resolved:
$ sudo ceph -s
    cluster 257faba1-f259-4164-a0f9-1726bd70b05a
     health HEALTH_OK
     monmap e1: 1 mons at {bdc217=192.168.13.217:6789/0}
            election epoch 2, quorum 0 bdc217
     osdmap e54: 8 osds: 8 up, 8 in
            flags sortbitwise
      pgmap v125: 128 pgs, 1 pools, 0 bytes data, 0 objects
            718 MB used, 27550 GB / 29025 GB avail
                 128 active+clean
That is all the content of "how to resolve the HEALTH_WARN too few PGs per OSD problem in Ceph". Thanks for reading! I hope what was shared here is helpful; for more related knowledge, follow the Yisu Cloud industry news channel!