

How to Use crushtool

Published: 2021-11-10 17:34:27  Source: Yisu Cloud  Views: 239  Author: 柒染  Category: Cloud Computing

Many people are unfamiliar with crushtool, Ceph's offline CRUSH map utility. This article walks through using it to build a test map, edit placement rules, verify them offline, and import the result into a cluster.

# Use crushtool to build a crush map named crushmap128 (binary, i.e. not yet decompiled): 128 OSDs, 8 OSDs per host, 4 hosts per rack, and all racks under a single root named default

crushtool --outfn crushmap128 --build --num_osds 128 host straw 8 rack straw 4 default straw 0

Or, using the straw2 algorithm:

crushtool --outfn crushmap128 --build --num_osds 128 host straw2 8 rack straw2 4 default straw2 0
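straw2, introduced in Ceph Hammer, improves on straw: when one item's weight changes, data moves only to or from that item rather than reshuffling between unrelated items. Conceptually, each item gets a deterministic pseudo-random draw scaled by its weight, and the item with the largest draw wins. A toy Python sketch of that selection idea (not Ceph's actual implementation; the md5-based hash here is only a stand-in):

```python
import hashlib
import math

def straw2_choose(x, items):
    """Pick one item for input x using a straw2-style weighted draw."""
    best, best_draw = None, None
    for name, weight in items.items():
        # Deterministic pseudo-random u in (0, 1] derived from (x, item).
        h = hashlib.md5(f"{x}:{name}".encode()).digest()
        u = (int.from_bytes(h[:8], "big") + 1) / 2**64
        draw = math.log(u) / weight  # heavier items get larger (less negative) draws
        if best_draw is None or draw > best_draw:
            best, best_draw = name, draw
    return best

# With weights 1:3, item "b" should win about 75% of the time.
items = {"a": 1.0, "b": 3.0}
wins = sum(straw2_choose(x, items) == "b" for x in range(10000))
print(wins / 10000)
```

The log(u)/weight trick makes each item's win probability proportional to its weight, which is exactly the property CRUSH needs for weighted placement.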

Decompile the binary map into editable text:

crushtool -d crushmap128 -o map128.txt
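The decompiled map128.txt is plain text with devices, types, buckets, and rules sections. A heavily abridged sketch of what it looks like (the exact tunables, type IDs, and generated default rule vary by Ceph version, so treat this as illustrative only):

```
# devices
device 0 osd.0
device 1 osd.1
# ... one line per device, up to device 127 osd.127

# buckets
host host0 {
        id -1           # bucket IDs are negative
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 1.000
        # ... items osd.1 through osd.7
}

# rules
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
```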

Display the hierarchy (similar to `ceph osd tree`):

[root@ceph01 test]# crushtool -i  crushmap128 --tree
ID      WEIGHT  TYPE NAME
-21     128.00000       default default
-17     32.00000                rack rack0
-1      8.00000                 host host0
0       1.00000                         osd.0
1       1.00000                         osd.1
2       1.00000                         osd.2
3       1.00000                         osd.3
4       1.00000                         osd.4
5       1.00000                         osd.5
6       1.00000                         osd.6
7       1.00000                         osd.7
-2      8.00000                 host host1
8       1.00000                         osd.8
9       1.00000                         osd.9
10      1.00000                         osd.10
11      1.00000                         osd.11
12      1.00000                         osd.12
13      1.00000                         osd.13
14      1.00000                         osd.14
15      1.00000                         osd.15
-3      8.00000                 host host2
16      1.00000                         osd.16
17      1.00000                         osd.17
18      1.00000                         osd.18
19      1.00000                         osd.19
20      1.00000                         osd.20
21      1.00000                         osd.21
22      1.00000                         osd.22
23      1.00000                         osd.23
-4      8.00000                 host host3
24      1.00000                         osd.24
25      1.00000                         osd.25
26      1.00000                         osd.26
27      1.00000                         osd.27
28      1.00000                         osd.28
29      1.00000                         osd.29
30      1.00000                         osd.30
31      1.00000                         osd.31
-18     32.00000                rack rack1
-5      8.00000                 host host4
32      1.00000                         osd.32
33      1.00000                         osd.33
34      1.00000                         osd.34
35      1.00000                         osd.35
36      1.00000                         osd.36
37      1.00000                         osd.37
38      1.00000                         osd.38
39      1.00000                         osd.39
-6      8.00000                 host host5
40      1.00000                         osd.40
41      1.00000                         osd.41
42      1.00000                         osd.42
43      1.00000                         osd.43
44      1.00000                         osd.44
45      1.00000                         osd.45
46      1.00000                         osd.46
47      1.00000                         osd.47
-7      8.00000                 host host6
48      1.00000                         osd.48
49      1.00000                         osd.49
50      1.00000                         osd.50
51      1.00000                         osd.51
52      1.00000                         osd.52
53      1.00000                         osd.53
54      1.00000                         osd.54
55      1.00000                         osd.55
-8      8.00000                 host host7
56      1.00000                         osd.56
57      1.00000                         osd.57
58      1.00000                         osd.58
59      1.00000                         osd.59
60      1.00000                         osd.60
61      1.00000                         osd.61
62      1.00000                         osd.62
63      1.00000                         osd.63
-19     32.00000                rack rack2
-9      8.00000                 host host8
64      1.00000                         osd.64
65      1.00000                         osd.65
66      1.00000                         osd.66
67      1.00000                         osd.67
68      1.00000                         osd.68
69      1.00000                         osd.69
70      1.00000                         osd.70
71      1.00000                         osd.71
-10     8.00000                 host host9
72      1.00000                         osd.72
73      1.00000                         osd.73
74      1.00000                         osd.74
75      1.00000                         osd.75
76      1.00000                         osd.76
77      1.00000                         osd.77
78      1.00000                         osd.78
79      1.00000                         osd.79
-11     8.00000                 host host10
80      1.00000                         osd.80
81      1.00000                         osd.81
82      1.00000                         osd.82
83      1.00000                         osd.83
84      1.00000                         osd.84
85      1.00000                         osd.85
86      1.00000                         osd.86
87      1.00000                         osd.87
-12     8.00000                 host host11
88      1.00000                         osd.88
89      1.00000                         osd.89
90      1.00000                         osd.90
91      1.00000                         osd.91
92      1.00000                         osd.92
93      1.00000                         osd.93
94      1.00000                         osd.94
95      1.00000                         osd.95
-20     32.00000                rack rack3
-13     8.00000                 host host12
96      1.00000                         osd.96
97      1.00000                         osd.97
98      1.00000                         osd.98
99      1.00000                         osd.99
100     1.00000                         osd.100
101     1.00000                         osd.101
102     1.00000                         osd.102
103     1.00000                         osd.103
-14     8.00000                 host host13
104     1.00000                         osd.104
105     1.00000                         osd.105
106     1.00000                         osd.106
107     1.00000                         osd.107
108     1.00000                         osd.108
109     1.00000                         osd.109
110     1.00000                         osd.110
111     1.00000                         osd.111
-15     8.00000                 host host14
112     1.00000                         osd.112
113     1.00000                         osd.113
114     1.00000                         osd.114
115     1.00000                         osd.115
116     1.00000                         osd.116
117     1.00000                         osd.117
118     1.00000                         osd.118
119     1.00000                         osd.119
-16     8.00000                 host host15
120     1.00000                         osd.120
121     1.00000                         osd.121
122     1.00000                         osd.122
123     1.00000                         osd.123
124     1.00000                         osd.124
125     1.00000                         osd.125
126     1.00000                         osd.126
127     1.00000                         osd.127

Edit the rules

vim map128.txt  # edit the rule section

### All three replicas in one rack and on the same host

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 1 type host
        step chooseleaf firstn 3 type osd
        step emit
}

Test result (testing rule 0, objects x = 1..5, 3 replicas):
rule 0 (replicated_ruleset), x = 1..5, numrep = 3..3
CRUSH rule 0 x 1 [80,84,87]        the 3 replicas of object 1 are on osd.80, osd.84, osd.87
CRUSH rule 0 x 2 [63,58,61]        the 3 replicas of object 2 are on osd.63, osd.58, osd.61
CRUSH rule 0 x 3 [121,127,124]
CRUSH rule 0 x 4 [67,71,65]
CRUSH rule 0 x 5 [45,47,46]
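Since the map was built with 8 OSDs per host and 4 hosts (32 OSDs) per rack, osd.N lives on host N // 8. A quick Python check (layout derived from the --build command above) confirms that each mapping keeps all three replicas on a single host:

```python
def host(osd):
    return osd // 8  # 8 OSDs per host, per the --build layout

# Replica sets from the test output above.
mappings = [[80, 84, 87], [63, 58, 61], [121, 127, 124], [67, 71, 65], [45, 47, 46]]
for m in mappings:
    hosts = {host(o) for o in m}
    print(m, "-> host", hosts)
    assert len(hosts) == 1  # all replicas share one host
```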

### All three replicas in one rack, possibly on the same host

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 1 type rack
        step chooseleaf firstn 3 type osd
        step emit
}

Test result:
rule 0 (replicated_ruleset), x = 1..5, numrep = 3..3
CRUSH rule 0 x 1 [80,84,67]
CRUSH rule 0 x 2 [63,50,48]
CRUSH rule 0 x 3 [121,127,111]
CRUSH rule 0 x 4 [67,86,79]
CRUSH rule 0 x 5 [45,38,46]

### All three replicas in one rack, on different hosts

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 1 type rack
        step chooseleaf firstn 3 type host
        step emit
}

Test result:
rule 0 (replicated_ruleset), x = 1..5, numrep = 3..3
CRUSH rule 0 x 1 [80,70,79]
CRUSH rule 0 x 2 [63,48,42]
CRUSH rule 0 x 3 [121,109,113]
CRUSH rule 0 x 4 [67,82,76]
CRUSH rule 0 x 5 [45,36,57]
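Because the map was built with 8 OSDs per host and 32 OSDs per rack, osd.N sits on host N // 8 in rack N // 32. A quick check confirms these mappings stay inside one rack while landing on three different hosts:

```python
def host(osd):
    return osd // 8   # 8 OSDs per host

def rack(osd):
    return osd // 32  # 4 hosts (32 OSDs) per rack

# Replica sets from the test output above.
mappings = [[80, 70, 79], [63, 48, 42], [121, 109, 113], [67, 82, 76], [45, 36, 57]]
for m in mappings:
    assert len({rack(o) for o in m}) == 1  # all in one rack
    assert len({host(o) for o in m}) == 3  # three distinct hosts
```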

### Three replicas spread across three racks

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 3 type rack
        step chooseleaf firstn 1 type host
        step emit
}


Test result:
rule 0 (replicated_ruleset), x = 1..5, numrep = 3..3
CRUSH rule 0 x 1 [80,115,43]
CRUSH rule 0 x 2 [63,7,126]
CRUSH rule 0 x 3 [121,30,73]
CRUSH rule 0 x 4 [67,8,61]
CRUSH rule 0 x 5 [45,79,28]
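Using rack = N // 32 from the build layout (32 OSDs per rack), each replica set here indeed spans three different racks:

```python
def rack(osd):
    return osd // 32  # 32 OSDs (4 hosts of 8) per rack

# Replica sets from the test output above.
mappings = [[80, 115, 43], [63, 7, 126], [121, 30, 73], [67, 8, 61], [45, 79, 28]]
for m in mappings:
    assert len({rack(o) for o in m}) == 3  # three distinct racks
```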

Compile the edited text back into a binary map:

crushtool  -c map128.txt  -o maptmp.bin

Test the compiled map:

# --show-statistics    print statistics
# --rule 0             test rule 0
# --min-x 1 --max-x 5  map objects 1 through 5 (at least 1, at most 5)
# --num-rep 3          3 replicas
# --show-mappings      print the object-to-OSD mappings
# --tree               print the hierarchy (like ceph osd tree)
crushtool -i maptmp.bin  --test --show-statistics --rule 0 --min-x 1 --max-x 5 --num-rep 3  --show-mappings --tree
# Add the rack buckets
ceph osd crush add-bucket rack01 rack
ceph osd crush add-bucket rack02 rack
ceph osd crush add-bucket rack03 rack

# Move the hosts into the racks
ceph osd crush move ceph23 rack=rack01
ceph osd crush move ceph24 rack=rack02
ceph osd crush move ceph25 rack=rack03

# Move the racks under the default root
ceph osd crush move rack01 root=default
ceph osd crush move rack02 root=default
ceph osd crush move rack03 root=default

This yields the new crush map.


# Once the buckets are in place, import the tested rule together with the new crush map into the cluster. Note that the OSD IDs in the test crush map will not necessarily match the real ones, so the offline test only needs to confirm that the rule itself is correct.

When testing is complete, import the map into the cluster.

Export the crushmap:
ceph osd getcrushmap -o ma-crush-map
Decompile the crushmap:
crushtool -d ma-crush-map -o ma-crush-map.txt
vim ma-crush-map.txt  # replace the rule section with the rule tested above

Compile the crushmap:
crushtool -c ma-crush-map.txt -o ma-nouvelle-crush-map
Import the crushmap:
ceph osd setcrushmap -i ma-nouvelle-crush-map

Having read the above, do you have a better understanding of how to use crushtool? For more on this topic, follow the Yisu Cloud industry news channel. Thanks for reading.



