
How to implement dynamic provisioning of persistent storage with StorageClass in Kubernetes


This article explains how to implement dynamic provisioning of persistent storage in Kubernetes with a StorageClass. The explanation is kept simple and clear, walking step by step through an NFS-backed example.


  • One of the benefits of a StorageClass is its support for dynamic provisioning of PVs; it can even be treated directly as a template for creating PVs. Users who need persistent storage create a PVC that binds to a matching PV. When such requests come in volume, or when the PVs an administrator created by hand cannot satisfy every PVC, having the system dynamically create a PV that fits the PVC's requirements brings great flexibility to storage management. Note that only PVCs and PVs that belong to a StorageClass bind to each other; a PVC that does not specify a StorageClass can only bind to a PV that likewise has none.

  • The name of a StorageClass object matters a great deal: it is the identifier users reference. Besides the name, a StorageClass definition needs three key fields: provisioner, parameters, and reclaimPolicy (a minimal skeleton is shown after this list).

  • Kubernetes therefore provides a mechanism for dynamic allocation that can create PVs automatically. It relies on the StorageClass API: for example, if 1 TiB on a storage node is handed over to Kubernetes and a user requests a 5 Gi PVC, a 5 Gi PV is automatically carved out of that 1 TiB and bound to the claim.

  • Enabling dynamic PV provisioning requires creating a StorageClass ahead of time. The creation steps differ from one provisioner to another, and not every volume plugin has built-in support for dynamic provisioning in Kubernetes.
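
A minimal StorageClass skeleton showing those three key fields, for orientation only (the class name, provisioner string and parameter key here are placeholders, not part of the NFS setup that follows):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc                            # name that PVCs reference via spec.storageClassName
provisioner: example.com/some-provisioner     # plugin or external provisioner that creates the PVs
parameters:                                   # passed to the provisioner as-is; valid keys depend on it
  exampleKey: exampleValue
reclaimPolicy: Delete                         # what happens to a provisioned PV once its PVC is deleted (Delete or Retain)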

Dynamic provisioning backed by NFS

Kubernetes does not ship an in-tree NFS provisioner, so an external one is required: nfs-subdir-external-provisioner is an automatic provisioner that uses an NFS server to back dynamic provisioning.

An nfs-subdir-external-provisioner instance watches for PersistentVolumeClaims that request its StorageClass and automatically creates NFS-backed PersistentVolumes for them.
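
As an aside, the provisioner can also be installed from its Helm chart instead of the raw manifests used below (a hedged sketch based on the upstream project's documentation; the repository URL and value names may differ between chart versions):

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner \
    nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=10.0.0.15 \
    --set nfs.path=/data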

Prepare the shared directory on the NFS server

This step decides which directory is handed over to Kubernetes: export (share) that directory from the NFS server.

[root@kn-server-node02-15 ~]# ll /data/
total 0
[root@kn-server-node02-15 ~]# showmount -e 10.0.0.15
Export list for 10.0.0.15:
/data        10.0.0.0/24
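
For reference, a minimal sketch of the server-side export that would produce the listing above (assuming a stock nfs-utils installation; the export options are illustrative, not taken from the original setup):

[root@kn-server-node02-15 ~]# cat /etc/exports
/data 10.0.0.0/24(rw,sync,no_root_squash)
[root@kn-server-node02-15 ~]# exportfs -r                        # re-export after editing /etc/exports
[root@kn-server-node02-15 ~]# systemctl enable --now nfs-server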

Install the NFS provisioner

First, create the RBAC permissions.

[root@kn-server-master01-13 nfs-provisioner]# cat nfs-rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f nfs-rbac.yaml 
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

部署NFS-Provisioner

[root@kn-server-master01-13 nfs-provisioner]# cat nfs-provisioner-deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          # The image on k8s.gcr.io is not reachable from mainland China; a mirrored copy is
          # published on my Docker Hub: replace it with lihuahaitang/nfs-subdir-external-provisioner:v4.0.2.
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            # Provisioner name; the StorageClass's provisioner field must match this value later.
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            # Address of the NFS server.
            - name: NFS_SERVER
              value: 10.0.0.15
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.15
            path: /data
[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f nfs-provisioner-deploy.yaml 
deployment.apps/nfs-client-provisioner created
The Pod is running normally:
[root@kn-server-master01-13 nfs-provisioner]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-57d6d9d5f6-dcxgq   1/1     Running   0          2m25s
Use describe to view the Pod's details:
[root@kn-server-master01-13 nfs-provisioner]# kubectl describe pods nfs-client-provisioner-57d6d9d5f6-dcxgq 
Name:         nfs-client-provisioner-57d6d9d5f6-dcxgq
Namespace:    default
Priority:     0
Node:         kn-server-node02-15/10.0.0.15
Start Time:   Mon, 28 Nov 2022 11:19:33 +0800
Labels:       app=nfs-client-provisioner
              pod-template-hash=57d6d9d5f6
Annotations:  <none>
Status:       Running
IP:           192.168.2.82
IPs:
  IP:           192.168.2.82
Controlled By:  ReplicaSet/nfs-client-provisioner-57d6d9d5f6
Containers:
  nfs-client-provisioner:
    Container ID:   docker://b5ea240a8693185be681714747f8e0a9f347492a24920dd68e629effb3a7400f
    Image:          k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
    Image ID:       docker-pullable://k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner@sha256:63d5e04551ec8b5aae83b6f35938ca5ddc50a88d85492d9731810c31591fa4c9
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 28 Nov 2022 11:20:12 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  k8s-sigs.io/nfs-subdir-external-provisioner
      NFS_SERVER:        10.0.0.15
      NFS_PATH:          /data
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q2z8w (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  nfs-client-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.0.0.15
    Path:      /data
    ReadOnly:  false
  kube-api-access-q2z8w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m11s  default-scheduler  Successfully assigned default/nfs-client-provisioner-57d6d9d5f6-dcxgq to kn-server-node02-15
  Normal  Pulling    3m11s  kubelet            Pulling image "k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2"
  Normal  Pulled     2m32s  kubelet            Successfully pulled image "k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2" in 38.965869132s
  Normal  Created    2m32s  kubelet            Created container nfs-client-provisioner
  Normal  Started    2m32s  kubelet            Started container nfs-client-provisioner

Create the StorageClass

Create the StorageClass that uses the NFS dynamic provisioner.

[root@kn-server-master01-13 nfs-provisioner]# cat storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  # StorageClass name that PVCs must reference explicitly in spec.storageClassName.
  name: nfs-provisioner-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
# Provisioner name; it must match the PROVISIONER_NAME set in the deployment above.
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  # "false" deletes the directory contents when the PVC is deleted; "true" keeps (archives) the data.
  archiveOnDelete: "false"
  # Template for the per-PVC directory path; by default a generated name is used.
  pathPattern: "${.PVC.namespace}/${.PVC.name}"
[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f storageclass.yaml 
storageclass.storage.k8s.io/nfs-provisioner-storage created
storageclass can be abbreviated as sc:
[root@kn-server-master01-13 nfs-provisioner]# kubectl get sc
NAME                      PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-provisioner-storage   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  3s
Use describe to view the configuration details:
[root@kn-server-master01-13 nfs-provisioner]# kubectl describe sc 
Name:            nfs-provisioner-storage
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"nfs-provisioner-storage"},"parameters":{"archiveOnDelete":"false","pathPattern":"${.PVC.namespace}/${.PVC.name}"},"provisioner":"k8s-sigs.io/nfs-subdir-external-provisioner"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           k8s-sigs.io/nfs-subdir-external-provisioner
Parameters:            archiveOnDelete=false,pathPattern=${.PVC.namespace}/${.PVC.name}
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
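
Because the class carries the storageclass.kubernetes.io/is-default-class: "true" annotation, a PVC that omits storageClassName entirely would also be served by it. A hedged sketch for illustration only (this is not part of the walkthrough that follows):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-default-class-demo        # hypothetical name, just for illustration
spec:
  # No storageClassName here, so the default class (nfs-provisioner-storage) is used.
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi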

Create a PVC and let a PV be bound automatically

[root@kn-server-master01-13 nfs-provisioner]# cat nfs-pvc-test.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-test
spec:
  storageClassName: "nfs-provisioner-storage"
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 0.5Gi
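Applying the claim is all it takes to trigger provisioning (a minimal sketch, assuming the manifest is saved as nfs-pvc-test.yaml as shown above; the bound PV name will differ on every run):
[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f nfs-pvc-test.yaml
[root@kn-server-master01-13 nfs-provisioner]# kubectl get pvc nfs-pvc-test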
The PV name is generated automatically, and the data directory on the NFS server is laid out according to the pathPattern defined above.
[root@kn-server-node02-15 data]# ls
default
[root@kn-server-node02-15 data]# ll default/
total 0
drwxrwxrwx 2 root root 6 Nov 28 13:56 nfs-pvc-test
[root@kn-server-master01-13 pv]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                  STORAGECLASS              REASON   AGE
pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f   512Mi      RWX            Delete           Bound         default/nfs-pvc-test   nfs-provisioner-storage            5m19s
[root@kn-server-master01-13 nfs-provisioner]# kubectl describe pv pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
Name:            pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: k8s-sigs.io/nfs-subdir-external-provisioner
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    nfs-provisioner-storage
Status:          Bound
Claim:           default/nfs-pvc-test
Reclaim Policy:  Delete
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        512Mi
Node Affinity:   <none>
Message:         
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.0.0.15
    Path:      /data/default/nfs-pvc-test
    ReadOnly:  false
Events:        <none>
describe shows more detailed information:
[root@kn-server-master01-13 nfs-provisioner]# kubectl describe pvc 
Name:          nfs-pvc-test
Namespace:     default
StorageClass:  nfs-provisioner-storage
Status:        Bound
Volume:        pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      512Mi    (the storage size that was requested)
Access Modes:  RWX    (the volume's access modes)
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                 Age   From                                                                                                                      Message
  ----    ------                 ----  ----                                                                                                                      -------
  Normal  ExternalProvisioning   13m   persistentvolume-controller                                                                                               waiting for a volume to be created, either by external provisioner "k8s-sigs.io/nfs-subdir-external-provisioner" or manually created by system administrator
  Normal  Provisioning           13m   k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-57d6d9d5f6-dcxgq_259532a3-4dba-4183-be6d-8e8b320fc778  External provisioner is provisioning volume for claim "default/nfs-pvc-test"
  Normal  ProvisioningSucceeded  13m   k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-57d6d9d5f6-dcxgq_259532a3-4dba-4183-be6d-8e8b320fc778  Successfully provisioned volume pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f

Create a Pod and test whether the data persists

[root@kn-server-master01-13 nfs-provisioner]# cat nginx-pvc-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-sc
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: nginx-page
      mountPath: /usr/share/nginx/html
  volumes:
  - name: nginx-page
    persistentVolumeClaim:
      claimName: nfs-pvc-test
[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f nginx-pvc-test.yaml 
pod/nginx-sc created
[root@kn-server-master01-13 nfs-provisioner]# kubectl describe pvc
Name:          nfs-pvc-test
Namespace:     default
StorageClass:  nfs-provisioner-storage
Status:        Bound
Volume:        pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      512Mi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       nginx-sc    (the nginx-sc Pod is now using this PVC, matching the Pod name above)
[root@kn-server-master01-13 nfs-provisioner]# kubectl get pods nginx-sc
NAME       READY   STATUS    RESTARTS   AGE
nginx-sc   1/1     Running   0          2m43s
Try writing data on the NFS server:
[root@kn-server-node02-15 data]# echo "haitang" > /data/default/nfs-pvc-test/index.html
Test access against the Pod's IP:
[root@kn-server-master01-13 nfs-provisioner]# curl 192.168.2.83
haitang
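To confirm the data really outlives the Pod, a quick check is to delete and recreate it, then fetch the page again (a hedged sketch; the new Pod will get a different IP, so substitute it in the curl command):
[root@kn-server-master01-13 nfs-provisioner]# kubectl delete pod nginx-sc
[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f nginx-pvc-test.yaml
[root@kn-server-master01-13 nfs-provisioner]# kubectl get pod nginx-sc -o wide   # note the new Pod IP
[root@kn-server-master01-13 nfs-provisioner]# curl <new-pod-ip>                  # should still return "haitang"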

Thanks for reading. That covers how dynamic provisioning of persistent storage with StorageClass works in Kubernetes; trying the steps above on a real cluster is the best way to confirm them for your own environment.
