Today let's talk about how to use Ceph RBD in Kubernetes. Many people may not be familiar with it, so I've put together the following walkthrough; I hope you get something out of it.
Note: a larger number of placement groups (for example, 100 per OSD) gives better data balancing, but don't go overboard; roughly 100 per OSD is the recommended target.

Total PGs = (OSDs * 100) / pool size

For example, if the cluster has 9 OSDs and the default pool size is 3, the PG total works out to:

Total PGs = (9 * 100) / 3 = 300
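As a quick sanity check, the arithmetic can be evaluated directly in the shell. This is a minimal sketch; the OSDS and POOL_SIZE variables are just this example's numbers, not anything Ceph reads:

OSDS=9        # number of OSDs in the cluster
POOL_SIZE=3   # replication size of the pool
echo $(( OSDS * 100 / POOL_SIZE ))   # prints 300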
Create a new pool named kube; here it is given 150 placement groups:
ceph osd pool create kube 150
Syntax:
ceph osd pool create {pool-name} {pg-num}

{pool-name} – the name of the pool; it must be unique.
{pg-num}    – the total number of placement groups for the pool.
Check the result:
ceph osd lspools
1 device_health_metrics
2 kube
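The placement-group count of the new pool can also be confirmed; assuming a standard Ceph CLI, something like:

# Query a single pool attribute; output is typically "pg_num: 150"
ceph osd pool get kube pg_num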
Associate the pool with the rbd application:
ceph osd pool application enable kube rbd
For reference:
--- Ceph Filesystem ---
$ sudo ceph osd pool application enable <pool-name> cephfs

--- Ceph Block Device ---
$ sudo ceph osd pool application enable <pool-name> rbd

--- Ceph Object Gateway ---
$ sudo ceph osd pool application enable <pool-name> rgw
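To check which applications are currently enabled on a pool, recent Ceph releases also provide a get subcommand; a quick sketch:

# Lists the applications enabled on the pool, e.g. {"rbd": {}}
ceph osd pool application get kube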
Initialize the pool for use by RBD:

rbd pool init kube
For reference:
# To disable an application:
ceph osd pool application disable <poolname> <app> {--yes-i-really-mean-it}

# To get I/O statistics for a specific pool, or for all pools:
ceph osd pool stats [{pool-name}]

# To delete a pool:
ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]
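Note that pool deletion is refused by default; on recent Ceph releases the monitors must first be told to allow it, along the lines of:

# Allow pool deletion cluster-wide (consider disabling it again afterwards)
ceph config set mon mon_allow_pool_delete true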
Create the manifest for the rbd-provisioner (ServiceAccount, RBAC, and Deployment):

cat >external-storage-rbd-provisioner.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: rbd-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
        - name: rbd-provisioner
          image: "quay.io/external_storage/rbd-provisioner:v2.0.0-k8s1.11"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
      serviceAccount: rbd-provisioner
EOF
kubectl apply -f external-storage-rbd-provisioner.yaml
serviceaccount/rbd-provisioner created
clusterrole.rbac.authorization.k8s.io/rbd-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/rbd-provisioner created
role.rbac.authorization.k8s.io/rbd-provisioner created
rolebinding.rbac.authorization.k8s.io/rbd-provisioner created
deployment.apps/rbd-provisioner created
Note: how long this takes depends on how quickly the image is pulled.
kubectl get pods -l app=rbd-provisioner -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
rbd-provisioner-8ddb7f6c7-zssl5   1/1     Running   0          18s
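If the pod never reaches Running, the provisioner's logs are the first place to look; for example:

# Tail the provisioner logs via its app label
kubectl logs -n kube-system -l app=rbd-provisioner --tail=20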
Create a Ceph user kube whose access is restricted to the kube pool, and write its keyring:

ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
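You can verify the new user and its capabilities afterwards:

# Prints the key plus the mon/osd caps granted above
ceph auth get client.kube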
Get the key of the admin user (this is Ceph's admin user, not a Kubernetes user):
ceph auth get-key client.admin
AQAXcD9f2B24GhAA/RJvMLvnpO0zAb+XYQ2YuQ==
Get the key of the kube user:
ceph auth get-key client.kube
AQC8fz9fNLGyIBAAyOu9bGSx7zA2S3b4Ve4vNQ==
Create a secret holding the admin key in the kube-system namespace, and one holding the kube user's key in the default namespace:

kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
  --from-literal=key=AQAXcD9f2B24GhAA/RJvMLvnpO0zAb+XYQ2YuQ== \
  --namespace=kube-system

kubectl create secret generic ceph-user-secret --type="kubernetes.io/rbd" \
  --from-literal=key=AQC8fz9fNLGyIBAAyOu9bGSx7zA2S3b4Ve4vNQ== \
  --namespace=default
kubectl get secrets ceph-secret -n kube-system
NAME          TYPE                DATA   AGE
ceph-secret   kubernetes.io/rbd   1      22h
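To double-check that a secret holds the key you expect, you can decode it; a quick sketch, assuming a Linux base64:

# The key is stored base64-encoded under .data.key
kubectl get secret ceph-user-secret -o jsonpath='{.data.key}' | base64 -d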
Create a StorageClass that provisions RBD volumes from the kube pool:

cat >storageclass-ceph-rdb.yaml<<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: kube-ceph-rdb
provisioner: ceph.com/rbd
parameters:
  monitors: 172.27.9.211:6789,172.27.9.212:6789,172.27.9.215:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-user-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
EOF
kubectl apply -f storageclass-ceph-rdb.yaml
Notes:

kube-ceph-rdb is the name of the StorageClass to be created.
The Ceph monitor addresses can be listed with ceph -s.
kubectl get sc
NAME            PROVISIONER    RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
kube-ceph-rdb   ceph.com/rbd   Delete          Immediate           false                  5m8s
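Optionally, the class can be marked as the cluster default so PVCs without an explicit storageClassName use it; this relies on the standard is-default-class annotation:

# Mark kube-ceph-rdb as the default StorageClass
kubectl patch storageclass kube-ceph-rdb \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'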
Create a test PVC against the new StorageClass:

cat >ceph-rdb-pvc-test.yaml<<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-rdb-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: kube-ceph-rdb
  resources:
    requests:
      storage: 2Gi
EOF

kubectl apply -f ceph-rdb-pvc-test.yaml
kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
ceph-rdb-claim   Bound    pvc-9eee5a95-7842-4356-af3d-562255a0d7ee   2Gi        RWO            kube-ceph-rdb   33s

kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS    REASON   AGE
pvc-9eee5a95-7842-4356-af3d-562255a0d7ee   2Gi        RWO            Delete           Bound    default/ceph-rdb-claim   kube-ceph-rdb            38s
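On the Ceph side, the claim should now be backed by an RBD image in the kube pool. A quick way to confirm; <image-name> below is a placeholder for the dynamically generated name:

# List the images the provisioner created in the pool
rbd ls -p kube

# Inspect one of them (substitute the actual image name)
rbd info kube/<image-name>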
Create a test pod that mounts the claim:

cat >nginx-pod.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod1
  labels:
    name: nginx-pod1
spec:
  containers:
    - name: nginx-pod1
      image: nginx:alpine
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: ceph-rdb
          mountPath: /usr/share/nginx/html
  volumes:
    - name: ceph-rdb
      persistentVolumeClaim:
        claimName: ceph-rdb-claim
EOF

kubectl apply -f nginx-pod.yaml
kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
nginx-pod1   1/1     Running   0          39s   10.20.235.135   k8s03   <none>           <none>
Write a test page into the mounted volume:

kubectl exec -ti nginx-pod1 -- /bin/sh -c 'echo This is from Ceph RBD!!! > /usr/share/nginx/html/index.html'
curl http://10.20.235.135
This is from Ceph RBD!!!
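As a final sanity check, the data should survive pod recreation, since it lives on the RBD volume rather than inside the container. A rough sketch (the pod IP may change on recreation, so <pod-ip> is a placeholder):

kubectl delete pod nginx-pod1
kubectl apply -f nginx-pod.yaml
kubectl get pod nginx-pod1 -o wide   # note the (possibly new) pod IP
curl http://<pod-ip>                 # should still return "This is from Ceph RBD!!!"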
Clean up the test resources:

kubectl delete -f nginx-pod.yaml
kubectl delete -f ceph-rdb-pvc-test.yaml
Having read the above, do you have a better understanding of how to use Ceph RBD in Kubernetes? If you'd like to learn more, follow the Yisu Cloud industry news channel. Thanks for your support.