
Steps to Deploy a Kubernetes Cluster with Kubeadm

Published: 2020-05-28 11:32:44  Source: Yisu Cloud  Author: Leah  Category: Cloud Computing

This article walks through the steps for deploying a Kubernetes cluster with kubeadm. Many people don't yet know how to do this, so the whole process is summarized below; without further ado, let's get started.

1. Environment Preparation

OS           IP Address         Hostname          Components
CentOS 7.5   192.168.200.111    docker-server1    kubeadm, kubelet, kubectl, docker-ce
CentOS 7.5   192.168.200.112    docker-server2    kubeadm, kubelet, kubectl, docker-ce
CentOS 7.5   192.168.200.113    docker-server3    kubeadm, kubelet, kubectl, docker-ce

Note: every host should have at least 2 CPU cores and 2 GB of memory.

(figure: cluster topology)


1.1 Host Initialization

Disable the firewall and SELinux, and set the hostname, on every host:

[root@localhost ~]# iptables -F

[root@localhost ~]# setenforce 0

[root@localhost ~]# systemctl stop firewalld
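
Note that setenforce 0 and systemctl stop firewalld only last until the next reboot. A minimal sketch to make both changes persistent, assuming the stock CentOS 7 /etc/selinux/config layout:

[root@localhost ~]# systemctl disable firewalld    # keep firewalld from starting at boot
[root@localhost ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # disable SELinux across reboots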

 

Each host gets its own name (docker-server2 and docker-server3 on the other two hosts):

[root@localhost ~]# hostname docker-server1

[root@localhost ~]# bash

 

[root@docker-server1 ~]# vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.200.111     docker-server1

192.168.200.112    docker-server2

192.168.200.113    docker-server3

 

[root@docker-server1 ~]# scp /etc/hosts 192.168.200.112:/etc/

[root@docker-server1 ~]# scp /etc/hosts 192.168.200.113:/etc/

 

Disable swap:

[root@docker-server1 ~]# vim /etc/fstab

#/dev/mapper/centos-swap swap swap    defaults        0 0    # comment out this line to disable automatic swap mounting

 

[root@docker-server1 ~]# swapoff /dev/mapper/centos-swap

[root@docker-server1 ~]# free -h

              total        used        free      shared  buff/cache   available

Mem:           1.9G        749M        101M         10M        1.1G        906M

Swap:            0B          0B          0B
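
Editing /etc/fstab by hand works, but the same result can be scripted. A small sketch, assuming the stock centos-swap entry in /etc/fstab:

[root@docker-server1 ~]# swapoff -a    # turn off every active swap device
[root@docker-server1 ~]# sed -i '/centos-swap/ s/^/#/' /etc/fstab    # comment the entry out so swap stays disabled after reboot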

1.2 Deploying the Docker Environment

Install docker-ce (on all hosts):

[root@docker-server1 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

[root@docker-server1 ~]# yum -y install yum-utils device-mapper-persistent-data lvm2

[root@docker-server1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

[root@docker-server1 ~]# ls /etc/yum.repos.d/

backup  CentOS-Base.repo  CentOS-Media.repo  docker-ce.repo

 

[root@docker-server1 ~]# yum -y install docker-ce

[root@docker-server1 ~]# systemctl start docker && systemctl enable docker

 

Configure the Alibaba Cloud registry mirror (on all hosts):

[root@docker-server1 ~]# cat << END > /etc/docker/daemon.json

{

        "registry-mirrors":[ "https://nyakyfun.mirror.aliyuncs.com" ]

}

END

[root@docker-server1 ~]# systemctl daemon-reload

[root@docker-server1 ~]# systemctl restart docker

 

[root@docker-server1 ~]# docker version

Client: Docker Engine - Community

 Version:           19.03.5

 API version:       1.40

 Go version:        go1.12.12

 Git commit:        633a0ea

 Built:             Wed Nov 13 07:25:41 2019

 OS/Arch:           linux/amd64

 Experimental:      false

 

Server: Docker Engine - Community

 Engine:

  Version:          19.03.5

  API version:      1.40 (minimum version 1.12)

  Go version:       go1.12.12

  Git commit:       633a0ea

  Built:            Wed Nov 13 07:24:18 2019

  OS/Arch:          linux/amd64

  Experimental:     false

 containerd:

  Version:          1.2.10

  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339

 runc:

  Version:          1.0.0-rc8+dev

  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657

 docker-init:

  Version:          0.18.0

  GitCommit:        fec3683

 

1.3 Component Versions

Component              Version       Role
kubernetes             1.17.3        core system
docker                 19.03.5       container runtime
flannel                0.11.0        network plugin
etcd                   3.3.15        datastore
coredns                1.6.2         DNS component
kubernetes-dashboard   2.0.0-beta5   web UI

 

2. Deploying the Kubernetes Cluster

2.1 Component Overview

All three nodes need the following three components:

- kubeadm: the installation tool; all components run as containers
- kubectl: the client tool for talking to the Kubernetes API
- kubelet: runs on every node and is responsible for starting containers

2.2 Configuring the Alibaba Cloud Yum Repository

Configure the yum repository on all hosts.

The Alibaba Cloud mirror is recommended:

[root@docker-server1 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

 

[root@docker-server1 ~]# ls /etc/yum.repos.d/

backup  CentOS-Base.repo  CentOS-Media.repo  docker-ce.repo  kubernetes.repo

 

The latest K8S version at the time of writing is 1.17.3:

[root@docker-server1 ~]# yum -y info kubeadm

Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
kubernetes/signature                                                                                |  454 B  00:00:00
Retrieving key from https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Importing GPG key 0xA7317B0F:
 Userid     : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
 Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Retrieving key from https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
kubernetes/signature                                                                                | 1.4 kB  00:00:01 !!!
kubernetes/primary                                                                                  |  64 kB  00:00:00
kubernetes                                                                                                         469/469
Available Packages
Name        : kubeadm
Arch        : x86_64
Version     : 1.17.3
Release     : 0
Size        : 8.7 M
Repo        : kubernetes
Summary     : Command-line utility for administering a Kubernetes cluster.
URL         : https://kubernetes.io
License     : ASL 2.0
Description : Command-line utility for administering a Kubernetes cluster.

2.3 Installing kubelet, kubeadm, and kubectl

On all hosts:

[root@docker-server1 ~]# yum install -y kubelet kubeadm kubectl

[root@docker-server1 ~]# rpm -qa | grep kube*

kubeadm-1.17.3-0.x86_64

kubelet-1.17.3-0.x86_64

kubernetes-cni-0.7.5-0.x86_64

kubectl-1.17.3-0.x86_64

[root@docker-server1 ~]# systemctl enable kubelet && systemctl start kubelet
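
yum install kubelet kubeadm kubectl pulls whatever the repository currently marks as the latest version. For a reproducible install, the versions can be pinned instead; a sketch, assuming the 1.17.3-0 packages are still available in the repository:

[root@docker-server1 ~]# yum install -y kubelet-1.17.3-0 kubeadm-1.17.3-0 kubectl-1.17.3-0
[root@docker-server1 ~]# yum versionlock kubelet kubeadm kubectl    # optional; requires the yum-plugin-versionlock package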

2.4 Loading Kernel Modules

On all hosts:

[root@docker-server1 ~]# cat > /etc/sysctl.d/k8s.conf << EOF

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_nonlocal_bind = 1

net.ipv4.ip_forward = 1

vm.swappiness=0

EOF

[root@docker-server1 ~]# sysctl --system

 

[root@docker-server1 ~]# vim /etc/sysctl.conf

net.ipv4.ip_forward = 1

[root@docker-server1 ~]# sysctl -p

 

[root@docker-server1 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF

#!/bin/bash

modprobe -- br_netfilter

modprobe -- ip_vs

modprobe -- ip_vs_rr

modprobe -- ip_vs_wrr

modprobe -- ip_vs_sh

modprobe -- nf_conntrack_ipv4

EOF

 

[root@docker-server1 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules

[root@docker-server1 ~]# bash /etc/sysconfig/modules/ipvs.modules

[root@docker-server1 ~]# lsmod | grep -E "ip_vs|nf_conntrack_ipv4"

ip_vs_sh               12688  0

ip_vs_wrr              12697  0

ip_vs_rr               12600  0

ip_vs                 141432  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr

nf_conntrack_ipv4      15053  2

nf_defrag_ipv4         12729  1 nf_conntrack_ipv4

nf_conntrack          133053  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
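
On CentOS 7 the /etc/sysconfig/modules script is not necessarily executed at boot, so the modules may be gone after a reboot. A systemd-native alternative is to list them in /etc/modules-load.d, which systemd-modules-load.service reads at boot; a minimal sketch:

[root@docker-server1 ~]# cat > /etc/modules-load.d/ipvs.conf << EOF
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF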

 

2.5 Configuring kubeadm-config.yaml

kubeadm-config.yaml consists of the following sections:

- InitConfiguration: initialization settings, such as the bootstrap token and the apiserver address
- ClusterConfiguration: settings for the master components: apiserver, etcd, network, scheduler, and controller-manager
- KubeletConfiguration: settings for the kubelet component
- KubeProxyConfiguration: settings for the kube-proxy component

 

Install on the master node; the master is 192.168.200.111. Create a default kubeadm-config.yaml file with the following command:

[root@docker-server1 ~]# kubeadm config print init-defaults  > kubeadm-config.yaml

W0212 21:18:11.685591    2403 validation.go:28] Cannot validate kube-proxy config - no validator is available

W0212 21:18:11.685648    2403 validation.go:28] Cannot validate kubelet config - no validator is available

 

Edit kubeadm-config.yaml:

[root@docker-server1 ~]# vim kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2

bootstrapTokens:

- groups:

  - system:bootstrappers:kubeadm:default-node-token

  token: abcdef.0123456789abcdef

  ttl: 24h0m0s

  usages:

  - signing

  - authentication

kind: InitConfiguration

localAPIEndpoint:

  advertiseAddress: 192.168.200.111  # the master node's IP

  bindPort: 6443

nodeRegistration:

  criSocket: /var/run/dockershim.sock

  name: 192.168.200.111                         # set this to the IP address; if a domain name is used, it must resolve correctly

  taints:

  - effect: NoSchedule

    key: node-role.kubernetes.io/master

---

apiServer:

  timeoutForControlPlane: 4m0s

apiVersion: kubeadm.k8s.io/v1beta2

certificatesDir: /etc/kubernetes/pki

clusterName: kubernetes

controllerManager: {}

dns:

  type: CoreDNS

etcd:

  local:

    dataDir: /var/lib/etcd                       # the etcd container's data directory is mounted from the host's /var/lib/etcd to prevent data loss

imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers          # image registry address; gcr.azk8s.cn/google_containers can also be used

kind: ClusterConfiguration

kubernetesVersion: v1.17.3                     # the Kubernetes version

networking:

  dnsDomain: cluster.local

  serviceSubnet: 10.96.0.0/12

  podSubnet: 10.244.0.0/16        # add this line (flannel's default pod network)

scheduler: {}
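
Before pulling anything, it can be worth double-checking which images the configuration will actually require. kubeadm can print the list, which is a quick sanity check that imageRepository and kubernetesVersion are what you expect:

[root@docker-server1 ~]# kubeadm config images list --config kubeadm-config.yaml    # prints one image reference per line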

 

2.6 Installing the Master Node

The images can be pulled in advance:

[root@docker-server1 ~]# kubeadm config images pull --config kubeadm-config.yaml

 

Initialize the master node:

[root@docker-server1 ~]# kubeadm init --config kubeadm-config.yaml

W0214 15:07:53.469593   65073 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0214 15:07:53.469677   65073 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [192.168.200.111 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.200.111]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [192.168.200.111 localhost] and IPs [192.168.200.111 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [192.168.200.111 localhost] and IPs [192.168.200.111 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0214 15:12:35.410900   65073 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0214 15:12:35.413190   65073 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 34.504759 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node 192.168.200.111 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node 192.168.200.111 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

 

Your Kubernetes control-plane has initialized successfully!

 

To start using your cluster, you need to run the following as a regular user:

 

# installation complete; run these on the master

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

 

Then you can join any number of worker nodes by running the following on each as root:

 

# used to add node(s) to the cluster

kubeadm join 192.168.200.111:6443 --token abcdef.0123456789abcdef \

    --discovery-token-ca-cert-hash sha256:c2d6067d5c3b12118275958dee222226d09a89fc5fb559687dc989d2508d5a50

 

kubeadm init performs the following steps:

- [init]: pick the version and start initialization
- [preflight]: run pre-flight checks and pull the required Docker images
- [kubelet-start]: generate the kubelet config file "/var/lib/kubelet/config.yaml"; the kubelet cannot start without it, which is why the kubelet actually failed to start before initialization
- [certificates]: generate the certificates Kubernetes uses, stored under /etc/kubernetes/pki
- [kubeconfig]: generate the kubeconfig files under /etc/kubernetes; components use them to communicate with each other
- [control-plane]: install the master components from the YAML files under /etc/kubernetes/manifests
- [etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml
- [wait-control-plane]: wait for the master components deployed by control-plane to start
- [apiclient]: check the health of the master component services
- [uploadconfig]: upload the configuration
- [kubelet]: configure the kubelet via a ConfigMap
- [patchnode]: record CNI information on the Node as annotations
- [mark-control-plane]: label the current node with the master role and the NoSchedule taint, so by default Pods are not scheduled on the master
- [bootstrap-token]: generate the token; note it down, it is used later by kubeadm join to add nodes to the cluster
- [addons]: install the CoreDNS and kube-proxy add-ons

2.7 Checking the Containers

[root@docker-server1 ~]# docker ps -a
CONTAINER ID   IMAGE                                                           COMMAND                  CREATED         STATUS         PORTS   NAMES
88167513e26b   7d54289267dc                                                    "/usr/local/bin/kube…"   6 minutes ago   Up 6 minutes           k8s_kube-proxy_kube-proxy-trrsg_kube-system_1d8ad663-c8d8-4429-9bfa-62c0644d048b_0
04ef064f9de7   registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1   "/pause"                 6 minutes ago   Up 6 minutes           k8s_POD_kube-proxy-trrsg_kube-system_1d8ad663-c8d8-4429-9bfa-62c0644d048b_0
116a097c0f34   5eb3b7486872                                                    "kube-controller-man…"   6 minutes ago   Up 6 minutes           k8s_kube-controller-manager_kube-controller-manager-192.168.200.111_kube-system_655c81bbe85f53920741e98506a879a4_0
aa9676158688   303ce5db0e90                                                    "etcd --advertise-cl…"   6 minutes ago   Up 6 minutes           k8s_etcd_etcd-192.168.200.111_kube-system_263a5d6fc4cb43d1291a4e7fc493a149_0
2f5d3ee4c848   78c190f736b1                                                    "kube-scheduler --au…"   6 minutes ago   Up 6 minutes           k8s_kube-scheduler_kube-scheduler-192.168.200.111_kube-system_75516e998e1ab97384d969d8ccd139db_0
f5d54e1fe069   0cae8d5cc64c                                                    "kube-apiserver --ad…"   6 minutes ago   Up 6 minutes           k8s_kube-apiserver_kube-apiserver-192.168.200.111_kube-system_c4b84d01dcb983c440c0474273fb535c_0
4c4a714c82fe   registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1   "/pause"                 6 minutes ago   Up 6 minutes           k8s_POD_kube-controller-manager-192.168.200.111_kube-system_655c81bbe85f53920741e98506a879a4_0
6d5de46ad990   registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1   "/pause"                 6 minutes ago   Up 6 minutes           k8s_POD_kube-apiserver-192.168.200.111_kube-system_c4b84d01dcb983c440c0474273fb535c_0
a1436b78e49e   registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1   "/pause"                 6 minutes ago   Up 6 minutes           k8s_POD_etcd-192.168.200.111_kube-system_263a5d6fc4cb43d1291a4e7fc493a149_0
3a6901465499   registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1   "/pause"                 6 minutes ago   Up 6 minutes           k8s_POD_kube-scheduler-192.168.200.111_kube-system_75516e998e1ab97384d969d8ccd139db_0

 

Follow the instructions in the init output.

kubectl looks for its config file in the .kube directory under the current user's home directory. Here, the admin.conf generated during the [kubeconfig] step is copied to .kube/config:

[root@docker-server1 ~]# mkdir -p $HOME/.kube

[root@docker-server1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@docker-server1 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

This file records the API server's address, so later kubectl commands can connect to the API server directly.
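
As an alternative for the root user, kubectl can be pointed at the same kubeconfig through an environment variable instead of copying the file (the export only lasts for the current shell unless added to a profile):

[root@docker-server1 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@docker-server1 ~]# kubectl get nodes    # now talks to the API server via admin.conf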

 

Check the nodes and component status:

[root@docker-server1 ~]# kubectl get cs

NAME                 STATUS    MESSAGE             ERROR

controller-manager   Healthy   ok                 

scheduler            Healthy   ok                 

etcd-0               Healthy   {"health":"true"}

 

[root@docker-server1 ~]# kubectl get nodes

NAME              STATUS     ROLES    AGE     VERSION

192.168.200.111   NotReady   master   7m52s   v1.17.3

 

[root@docker-server1 ~]# kubectl get pods -n kube-system

NAME                                      READY   STATUS    RESTARTS   AGE

coredns-7f9c544f75-wx6q9                  0/1     Pending   0          52m

coredns-7f9c544f75-x5nff                  0/1     Pending   0          52m

etcd-192.168.200.111                      1/1     Running   0          52m

kube-apiserver-192.168.200.111            1/1     Running   0          52m

kube-controller-manager-192.168.200.111   1/1     Running   0          52m

kube-proxy-pfz6z                          1/1     Running   0          52m

kube-scheduler-192.168.200.111            1/1     Running   0          52m

 

Two issues stand out:

1) The coredns pods are Pending: no pod network is available yet, so they cannot be scheduled.

2) The node status is NotReady: the network plugin has not been installed yet.

2.8 Installing Flannel

The master node is NotReady because no network plugin is installed yet, so the connection between the nodes and the master is not fully functional. The most popular Kubernetes network plugins are Flannel, Calico, Canal, and Weave; Flannel is used here.

 

master 節點上執行,執行完成后需要等flannel pods 運行起來,這需要點時間:

[root@docker-server1 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

[root@docker-server1 ~]# sed -i 's@quay.io@quay.azk8s.cn@g' kube-flannel.yml

 

[root@docker-server1 ~]# kubectl apply -f kube-flannel.yml

podsecuritypolicy.policy/psp.flannel.unprivileged created

clusterrole.rbac.authorization.k8s.io/flannel created

clusterrolebinding.rbac.authorization.k8s.io/flannel created

serviceaccount/flannel created

configmap/kube-flannel-cfg created

daemonset.apps/kube-flannel-ds-amd64 created

daemonset.apps/kube-flannel-ds-arm64 created

daemonset.apps/kube-flannel-ds-arm created

daemonset.apps/kube-flannel-ds-ppc64le created

daemonset.apps/kube-flannel-ds-s390x created

 

[root@docker-server1 ~]# kubectl get nodes

NAME              STATUS   ROLES    AGE     VERSION

192.168.200.111   Ready    master   9m24s   v1.17.3

The node is now in the Ready state.

2.9 Adding the Worker Nodes

Join method 1:

Use the join command printed at the end of the master installation:

kubeadm join 192.168.200.111:6443 --token abcdef.0123456789abcdef \

    --discovery-token-ca-cert-hash sha256:c2d6067d5c3b12118275958dee222226d09a89fc5fb559687dc989d2508d5a50
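
The bootstrap token is only valid for 24 hours (ttl: 24h0m0s in kubeadm-config.yaml). If it has expired, a fresh token and a ready-made join command can be generated on the master (placeholder values shown):

[root@docker-server1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.200.111:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<hash>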

 

Join method 2:

Look up the token information on the master:

[root@docker-server1 ~]# cat kubeadm-config.yaml |grep token

  - system:bootstrappers:kubeadm:default-node-token

  token: abcdef.0123456789abcdef

 

On the docker-server2 host:

[root@docker-server2 ~]# kubeadm config print join-defaults > kubeadm-config.yaml

[root@docker-server2 ~]# vim kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2

caCertPath: /etc/kubernetes/pki/ca.crt

discovery:

  bootstrapToken:

    apiServerEndpoint: 192.168.200.111:6443

    token: abcdef.0123456789abcdef

    unsafeSkipCAVerification: true

  timeout: 5m0s

  tlsBootstrapToken: abcdef.0123456789abcdef

kind: JoinConfiguration

nodeRegistration:

  criSocket: /var/run/dockershim.sock

  name: 192.168.200.112

  taints: null

 

[root@docker-server2 ~]# kubeadm join --config kubeadm-config.yaml

W0212 22:13:36.627811    3819 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

 

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.

 

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

 

On the docker-server3 host:

[root@docker-server3 ~]# kubeadm config print join-defaults > kubeadm-config.yaml

[root@docker-server3 ~]# vim kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2

caCertPath: /etc/kubernetes/pki/ca.crt

discovery:

  bootstrapToken:

    apiServerEndpoint: 192.168.200.111:6443

    token: abcdef.0123456789abcdef

    unsafeSkipCAVerification: true

  timeout: 5m0s

  tlsBootstrapToken: abcdef.0123456789abcdef

kind: JoinConfiguration

nodeRegistration:

  criSocket: /var/run/dockershim.sock

  name: 192.168.200.113

  taints: null

 

[root@docker-server3 ~]# kubeadm join --config kubeadm-config.yaml

W0212 22:13:38.565506    3838 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

 

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.

 

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

 

Check the node information from the master:

[root@docker-server1 ~]# kubectl get nodes

NAME              STATUS   ROLES    AGE    VERSION

192.168.200.111   Ready    master   17m    v1.17.3

192.168.200.112   Ready    <none>   111s   v1.17.3

192.168.200.113   Ready    <none>   109s   v1.17.3

 

Check the Pod information from the master:

[root@docker-server1 ~]# kubectl get pods -n kube-system

NAME                                      READY   STATUS    RESTARTS   AGE

coredns-7f9c544f75-6b8gq                  1/1     Running   0          17m

coredns-7f9c544f75-tjg2l                  1/1     Running   0          17m

etcd-192.168.200.111                      1/1     Running   0          17m

kube-apiserver-192.168.200.111            1/1     Running   0          17m

kube-controller-manager-192.168.200.111   1/1     Running   0          17m

kube-flannel-ds-amd64-bl49r               1/1     Running   3          2m24s

kube-flannel-ds-amd64-dfkgr               1/1     Running   0          9m14s

kube-flannel-ds-amd64-j74w7               1/1     Running   0          2m26s

kube-proxy-442vz                          1/1     Running   0          2m26s

kube-proxy-trrsg                          1/1     Running   0          17m

kube-proxy-xnn74                          1/1     Running   0          2m24s

kube-scheduler-192.168.200.111            1/1     Running   0          17m

2.10 Node Management Commands

The following commands do not need to be executed; they are listed for reference only.

Reset the master configuration:

[root@docker-server1 ~]# kubeadm reset

Remove a node's configuration:

[root@docker-server2 ~]# docker ps -aq|xargs  docker rm -f

[root@docker-server2 ~]# systemctl stop kubelet

[root@docker-server2 ~]# rm -rf /etc/kubernetes/*

[root@docker-server2 ~]# rm -rf /var/lib/kubelet/*
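
The commands above only clean up the node itself; the master still lists the node object. To remove it from the cluster as well, drain and delete it on the master. A sketch, assuming the node registered under its IP as in this setup:

[root@docker-server1 ~]# kubectl drain 192.168.200.112 --ignore-daemonsets --delete-local-data    # evict the pods first
[root@docker-server1 ~]# kubectl delete node 192.168.200.112    # remove the node object from the cluster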

3. Installing the Dashboard UI

3.1 Deploying the Dashboard

dashboardgithub倉庫地址:https://github.com/kubernetes/dashboard

代碼倉庫當中,有給出安裝示例的相關部署文件,我們可以直接獲取之后,直接部署即可

[root@docker-server1 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml

By default this manifest creates a dedicated namespace named kubernetes-dashboard and deploys the dashboard into it. The dashboard image comes from the official Docker Hub, so the image address does not need to be changed.

3.2 Exposing the Port

By default the dashboard is not reachable from outside the cluster. To keep things simple, expose its port with a NodePort by modifying the Service definition:

[root@docker-server1 ~]# vim recommended.yaml

kind: Service

apiVersion: v1

metadata:

  labels:

    k8s-app: kubernetes-dashboard

  name: kubernetes-dashboard

  namespace: kubernetes-dashboard

spec:

  type: NodePort            # add this line

  ports:

    - port: 443

      targetPort: 8443

      nodePort: 32443          # add this line

  selector:

    k8s-app: kubernetes-dashboard

3.3 Configuring Permissions

The default permissions are quite limited, so change the binding to the cluster-admin role instead:

[root@docker-server1 ~]# vim recommended.yaml

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRoleBinding

metadata:

  name: kubernetes-dashboard

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: cluster-admin

subjects:

  - kind: ServiceAccount

    name: kubernetes-dashboard

    namespace: kubernetes-dashboard

 

[root@docker-server1 ~]# kubectl apply -f recommended.yaml

namespace/kubernetes-dashboard created

serviceaccount/kubernetes-dashboard created

service/kubernetes-dashboard created

secret/kubernetes-dashboard-certs created

secret/kubernetes-dashboard-csrf created

secret/kubernetes-dashboard-key-holder created

configmap/kubernetes-dashboard-settings created

role.rbac.authorization.k8s.io/kubernetes-dashboard created

clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created

rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

deployment.apps/kubernetes-dashboard created

service/dashboard-metrics-scraper created

deployment.apps/dashboard-metrics-scraper created

 

Get the token:

[root@docker-server1 ~]# kubectl describe secret -n kubernetes-dashboard $(kubectl get secret -n kubernetes-dashboard |grep kubernetes-dashboard-token | awk '{print $1}') |grep token | awk '{print $2}'

kubernetes-dashboard-token-fk762

kubernetes.io/service-account-token

eyJhbGciOiJSUzI1NiIsImtpZCI6Ik5aYmhQMDA4aktaeUVyQVpBd3Y5VUNsTXFQV1VBeTRhSml4ZWlmNUV2NzAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1mazc2MiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjZmYTVmZDM2LWIyOTItNDc3NS1hMWU0LThiOGE5MTY1NmI3ZiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.J_XYUsmSB1wWApYQkSebgd3BvEHoZe5pBgayw8N0xG6TYBsPhMEBVyhE6pR-P-R2eZKPAK9xkajMIwxtwxnIi2NTPv--FiecLINj2_XV7pegkEmd7AREXEPQmjGqM3Fulc7VkVFaG1YIdRmgi069GImpqFuTF0t19wOaloetUHY6LMRJsyHyesjvc2V82a_qgrFNcVtw9l0b8HhxebRIH6crhCMXKRpsjeF8zUg-Aq4ZfJxxEcc6wM2bOzAh00vJECHKBc7sTH2va8xic7GL_hMyE5SZzSOVeaulODWCc5hQdSc2BxeY4TVFz6GJXDC6ZgVj8gnNgUXxw3NVSiDmyg

 

Use the token to log in.

(screenshots: Dashboard token login page and cluster overview)

At this point the K8S cluster installation is complete.

4. Application Deployment Test

Now deploy a simple Nginx web service. The container listens on port 80, and visiting the /info path shows the container's hostname. The service consists of 3 container replicas and is exposed to users via NodePort.

[root@docker-server1 ~]# kubectl run nginxweb --image=nginx  --port=80 --replicas=3

kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginxweb created
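
As the warning notes, creating Deployments through kubectl run is deprecated. An equivalent, non-deprecated sketch using kubectl create and kubectl scale:

[root@docker-server1 ~]# kubectl create deployment nginxweb --image=nginx    # creates a Deployment with 1 replica
[root@docker-server1 ~]# kubectl scale deployment nginxweb --replicas=3     # scale it out to 3 pods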

 

Check the created objects; the 3 pods are coming up:

[root@docker-server1 ~]# kubectl get deployment

NAME       READY   UP-TO-DATE   AVAILABLE   AGE

nginxweb   0/3     3            0           14s

[root@docker-server1 ~]# kubectl get po

NAME                        READY   STATUS              RESTARTS   AGE

nginxweb-6d7457b898-5qcbs   0/1     ContainerCreating   0          31s

nginxweb-6d7457b898-m5tvh   0/1     ContainerCreating   0          31s

nginxweb-6d7457b898-v58bj   0/1     ContainerCreating   0          31s

 

Create a Service to expose the deployment via NodePort:

[root@docker-server1 ~]# kubectl expose deployment nginxweb --name=nginxwebsvc --port=80  --target-port=80  --type=NodePort

service/nginxwebsvc exposed

 

Check the Service; the NodePort randomly assigned is 30715:

[root@docker-server1 ~]# kubectl get svc

NAME          TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE

kubernetes    ClusterIP   10.96.0.1     <none>        443/TCP        147m

nginxwebsvc   NodePort    10.96.63.33   <none>        80:30715/TCP   52s

A client machine can now reach the service through the master's IP at http://192.168.200.111:30715/; nginxwebsvc load-balances requests on port 80 across the actual nginxweb pods.
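
The same check works from any shell that can reach the master. A quick curl sketch (each request may land on a different pod behind the Service):

[root@docker-server1 ~]# curl -s http://192.168.200.111:30715/ | grep title
<title>Welcome to nginx!</title>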

(screenshots: the default Nginx welcome page served through the NodePort)


Those are the steps for deploying a Kubernetes cluster with kubeadm; to really absorb the details you need to try it yourself. If you'd like to learn more, follow the Yisu Cloud industry news channel!
