This article introduces how to configure HA for a Kubernetes cluster, something that trips up plenty of people in practice. Let's walk through how to handle it.
All of the components can be started and managed as kubelet static pods, with the kubelet's static pod mechanism keeping each component on the host highly available. Note that the kubelet must be started with --allow-privileged=true.
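As a minimal sketch of the kubelet flags involved (the manifest directory below is an assumption, and on older kubelet versions the manifest-path flag was spelled --config):

# hypothetical kubelet flags: watch a static pod manifest directory and
# allow privileged containers, as required above
kubelet --pod-manifest-path=/etc/kubernetes/manifests \
        --allow-privileged=true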
The kubelet that manages the static pods is in turn kept highly available by systemd.
Of course, you can also run these components directly as processes and let systemd manage them. (This is the approach we chose, to reduce complexity.)
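To illustrate the process-based approach, a systemd unit for the apiserver might look like the sketch below; the paths and flags are placeholders, not an exact production unit:

[Unit]
Description=Kubernetes API Server
After=network.target

[Service]
# Restart=always is what provides the per-process HA on each node
ExecStart=/usr/local/bin/kube-apiserver \
    --etcd-servers=http://127.0.0.1:4001 \
    --insecure-bind-address=127.0.0.1 \
    --secure-port=443
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target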
In this deployment, etcd and the Masters are co-located: each of the three Master nodes runs one etcd instance, and the three etcd instances form a cluster. (If conditions allow, deploying the etcd cluster on nodes separate from the Masters is recommended.)
On each Master, the apiserver, controller-manager, and scheduler all use hostNetwork; the controller-manager and scheduler connect to the apiserver on their own node via localhost, never to the apiservers on the other two Masters.
External rest clients, kubectl, the kubelets, kube-proxy, and so on all connect with TLS certificates. TLS termination happens on the LB node; past the LB, plain HTTP requests are dispatched by the LB policy (round robin) to the chosen apiserver instance.
Access from the apiserver to the kubelet server and kube-proxy server works the same way: HTTPS is terminated at the LB, and the resulting HTTP request goes to the kubelet/kube-proxy server on the target node.
HA for the apiserver itself comes from the classic haproxy + keepalived pairing, with the cluster exposing a VIP externally.
HA for the controller-manager and scheduler uses their built-in leader election (--leader-elect=true): of the three controller-managers and the three schedulers, exactly one of each is leader and actively working at any time; when the leader fails, a new leader is elected to take over.
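To see which instance currently holds the lock: in the Kubernetes versions this setup targets, the lease is recorded as an annotation on an Endpoints object in kube-system (the annotation name has varied across versions, so treat this as a sketch):

# show the controller-manager leader record
kubectl -n kube-system get endpoints kube-controller-manager \
  -o 'jsonpath={.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'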
To sum up: in this HA scheme, haproxy + keepalived provide LB and HA for the apiserver; the controller-manager and scheduler reach HA through their own leader election; and etcd's raft protocol keeps the etcd cluster's data consistent, making it HA as well.
A keepalived configuration for reference:
vrrp_script check_script {
    script "/etc/keepalived/check_haproxy.py http://caicloud:caicloud@127.0.0.1/haproxy?stats"
    interval 5        # check every 5 seconds
    weight 5
    fall 2            # require 2 failures for KO
    rise 1            # require 1 success for OK
}

vrrp_instance VI_01 {
    state MASTER                  # BACKUP on the standby nodes
    interface eth2
    track_interface {
        eth2
    }
    vrrp_garp_master_repeat 5
    vrrp_garp_master_refresh 10
    virtual_router_id 51
    priority 100                  # e.g. 97 on the standby nodes
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass username
    }
    virtual_ipaddress {
        192.168.205.254 dev eth2 label eth2:vip
    }
    track_script {
        check_script
    }
    notify "/etc/keepalived/notify_state.sh"
}
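The vrrp_script above calls a health-check script that is not shown. A minimal hypothetical sketch of /etc/keepalived/check_haproxy.py, assuming all it must do is exit 0 when the haproxy stats page answers with 200 and non-zero otherwise:

#!/usr/bin/env python
# hypothetical health check: keepalived passes the stats URL (with basic
# auth embedded) as the first argument; exit 0 = healthy, non-zero = failed
import socket
import sys
import urllib  # Python 2 urllib honors http://user:pass@host URLs

socket.setdefaulttimeout(3)  # never let a hung haproxy stall the VRRP check
try:
    sys.exit(0 if urllib.urlopen(sys.argv[1]).getcode() == 200 else 1)
except Exception:
    sys.exit(1)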
An haproxy configuration for reference:
global
    log 127.0.0.1 local0
    maxconn 32768
    pidfile /run/haproxy.pid
    # turn on stats unix socket
    stats socket /run/haproxy.stats
    tune.ssl.default-dh-param 2048

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout check 50000ms
    timeout queue 50000ms

frontend frontend-apiserver-http
    bind *:8080
    option forwardfor
    acl local_net src 192.168.205.0/24
    http-request allow if local_net
    http-request deny
    default_backend backend-apiserver-http

frontend frontend-apiserver-https
    # haproxy enable ssl
    bind *:443 ssl crt /etc/kubernetes/master-lb.pem
    option forwardfor
    default_backend backend-apiserver-http

backend backend-apiserver-http
    balance roundrobin
    option forwardfor
    server master-1 192.168.205.11:8080 check
    server master-2 192.168.205.12:8080 check
    server master-3 192.168.205.13:8080 check

listen admin_stats
    bind 0.0.0.0:80
    log global
    mode http
    maxconn 10
    stats enable
    # Hide HAProxy version, a necessity for any public-facing site
    stats hide-version
    stats refresh 30s
    stats show-node
    stats realm Haproxy\ Statistics
    stats auth caicloud:caicloud
    stats uri /haproxy?stats
On the node hosting the LB, make sure the ip_vs kernel modules are loaded and that ip_forward and ip_nonlocal_bind are enabled:
# make sure the ip_vs kernel modules are loaded
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr

# enable ip_forward and ip_nonlocal_bind
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
sysctl -p    # apply the settings without a reboot
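modprobe alone does not survive a reboot. One way to make the module loading persistent, assuming a systemd-based distro with modules-load.d:

# load the ip_vs modules automatically at boot
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
EOF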
If you deploy the K8S components as pods, you can refer to the official YAML:
apiserver
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/kube-apiserver:9680e782e08a1a1c94c656190011bd02
    # note: this upstream example passes --allow-privileged=False; set it to
    # true if you rely on privileged containers as noted earlier
    command:
    - /bin/sh
    - -c
    - /usr/local/bin/kube-apiserver --address=127.0.0.1 --etcd-servers=http://127.0.0.1:4001
      --cloud-provider=gce --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
      --service-cluster-ip-range=10.0.0.0/16 --client-ca-file=/srv/kubernetes/ca.crt
      --basic-auth-file=/srv/kubernetes/basic_auth.csv --cluster-name=e2e-test-bburns
      --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key
      --secure-port=443 --token-auth-file=/srv/kubernetes/known_tokens.csv --v=2
      --allow-privileged=False 1>>/var/log/kube-apiserver.log 2>&1
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 7080
      hostPort: 7080
      name: http
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /srv/kubernetes
      name: srvkube
      readOnly: true
    - mountPath: /var/log/kube-apiserver.log
      name: logfile
    - mountPath: /etc/ssl
      name: etcssl
      readOnly: true
    - mountPath: /usr/share/ssl
      name: usrsharessl
      readOnly: true
    - mountPath: /var/ssl
      name: varssl
      readOnly: true
    - mountPath: /usr/ssl
      name: usrssl
      readOnly: true
    - mountPath: /usr/lib/ssl
      name: usrlibssl
      readOnly: true
    - mountPath: /usr/local/openssl
      name: usrlocalopenssl
      readOnly: true
    - mountPath: /etc/openssl
      name: etcopenssl
      readOnly: true
    - mountPath: /etc/pki/tls
      name: etcpkitls
      readOnly: true
  volumes:
  - hostPath:
      path: /srv/kubernetes
    name: srvkube
  - hostPath:
      path: /var/log/kube-apiserver.log
    name: logfile
  - hostPath:
      path: /etc/ssl
    name: etcssl
  - hostPath:
      path: /usr/share/ssl
    name: usrsharessl
  - hostPath:
      path: /var/ssl
    name: varssl
  - hostPath:
      path: /usr/ssl
    name: usrssl
  - hostPath:
      path: /usr/lib/ssl
    name: usrlibssl
  - hostPath:
      path: /usr/local/openssl
    name: usrlocalopenssl
  - hostPath:
      path: /etc/openssl
    name: etcopenssl
  - hostPath:
      path: /etc/pki/tls
    name: etcpkitls
controller-manager

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - /usr/local/bin/kube-controller-manager --master=127.0.0.1:8080 --cluster-name=e2e-test-bburns
      --cluster-cidr=10.245.0.0/16 --allocate-node-cidrs=true --cloud-provider=gce
      --service-account-private-key-file=/srv/kubernetes/server.key --v=2 --leader-elect=true
      1>>/var/log/kube-controller-manager.log 2>&1
    image: gcr.io/google_containers/kube-controller-manager:fda24638d51a48baa13c35337fcd4793
    livenessProbe:
      httpGet:
        path: /healthz
        port: 10252
      initialDelaySeconds: 15
      timeoutSeconds: 1
    name: kube-controller-manager
    volumeMounts:
    - mountPath: /srv/kubernetes
      name: srvkube
      readOnly: true
    - mountPath: /var/log/kube-controller-manager.log
      name: logfile
    - mountPath: /etc/ssl
      name: etcssl
      readOnly: true
    - mountPath: /usr/share/ssl
      name: usrsharessl
      readOnly: true
    - mountPath: /var/ssl
      name: varssl
      readOnly: true
    - mountPath: /usr/ssl
      name: usrssl
      readOnly: true
    - mountPath: /usr/lib/ssl
      name: usrlibssl
      readOnly: true
    - mountPath: /usr/local/openssl
      name: usrlocalopenssl
      readOnly: true
    - mountPath: /etc/openssl
      name: etcopenssl
      readOnly: true
    - mountPath: /etc/pki/tls
      name: etcpkitls
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /srv/kubernetes
    name: srvkube
  - hostPath:
      path: /var/log/kube-controller-manager.log
    name: logfile
  - hostPath:
      path: /etc/ssl
    name: etcssl
  - hostPath:
      path: /usr/share/ssl
    name: usrsharessl
  - hostPath:
      path: /var/ssl
    name: varssl
  - hostPath:
      path: /usr/ssl
    name: usrssl
  - hostPath:
      path: /usr/lib/ssl
    name: usrlibssl
  - hostPath:
      path: /usr/local/openssl
    name: usrlocalopenssl
  - hostPath:
      path: /etc/openssl
    name: etcopenssl
  - hostPath:
      path: /etc/pki/tls
    name: etcpkitls
scheduler

apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: gcr.io/google_containers/kube-scheduler:34d0b8f8b31e27937327961528739bc9
    command:
    - /bin/sh
    - -c
    - /usr/local/bin/kube-scheduler --master=127.0.0.1:8080 --v=2 --leader-elect=true
      1>>/var/log/kube-scheduler.log 2>&1
    livenessProbe:
      httpGet:
        path: /healthz
        port: 10251
      initialDelaySeconds: 15
      timeoutSeconds: 1
    volumeMounts:
    - mountPath: /var/log/kube-scheduler.log
      name: logfile
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-s8ejd
      readOnly: true
  volumes:
  - hostPath:
      path: /var/log/kube-scheduler.log
    name: logfile
etcd

apiVersion: v1
kind: Pod
metadata:
  name: etcd-server
spec:
  hostNetwork: true
  containers:
  - image: gcr.io/google_containers/etcd:2.0.9
    name: etcd-container
    command:
    - /usr/local/bin/etcd
    - --name
    - ${NODE_NAME}
    - --initial-advertise-peer-urls
    - http://${NODE_IP}:2380
    - --listen-peer-urls
    - http://${NODE_IP}:2380
    - --advertise-client-urls
    - http://${NODE_IP}:4001
    - --listen-client-urls
    - http://127.0.0.1:4001
    - --data-dir
    - /var/etcd/data
    - --discovery
    - ${DISCOVERY_TOKEN}
    ports:
    - containerPort: 2380
      hostPort: 2380
      name: serverport
    - containerPort: 4001
      hostPort: 4001
      name: clientport
    volumeMounts:
    - mountPath: /var/etcd
      name: varetcd
    - mountPath: /etc/ssl
      name: etcssl
      readOnly: true
    - mountPath: /usr/share/ssl
      name: usrsharessl
      readOnly: true
    - mountPath: /var/ssl
      name: varssl
      readOnly: true
    - mountPath: /usr/ssl
      name: usrssl
      readOnly: true
    - mountPath: /usr/lib/ssl
      name: usrlibssl
      readOnly: true
    - mountPath: /usr/local/openssl
      name: usrlocalopenssl
      readOnly: true
    - mountPath: /etc/openssl
      name: etcopenssl
      readOnly: true
    - mountPath: /etc/pki/tls
      name: etcpkitls
      readOnly: true
  volumes:
  - hostPath:
      path: /var/etcd/data
    name: varetcd
  - hostPath:
      path: /etc/ssl
    name: etcssl
  - hostPath:
      path: /usr/share/ssl
    name: usrsharessl
  - hostPath:
      path: /var/ssl
    name: varssl
  - hostPath:
      path: /usr/ssl
    name: usrssl
  - hostPath:
      path: /usr/lib/ssl
    name: usrlibssl
  - hostPath:
      path: /usr/local/openssl
    name: usrlocalopenssl
  - hostPath:
      path: /etc/openssl
    name: etcopenssl
  - hostPath:
      path: /etc/pki/tls
    name: etcpkitls
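The ${NODE_NAME}, ${NODE_IP}, and ${DISCOVERY_TOKEN} placeholders must be filled in per node. If you bootstrap the cluster with discovery, a token URL can be obtained, for example, from the public etcd discovery service:

# request a discovery URL for a three-member cluster
curl https://discovery.etcd.io/new?size=3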
That wraps up how to configure Kubernetes Cluster HA. Thanks for reading!