
Complete K8s Multi-Node Deployment (Production Hands-On! Troubleshooting Included!)

Published: 2020-03-02 12:22:22  Source: Web  Views: 1168  Author: JarryZ  Category: Cloud Computing

K8s multi-node deployment ----> load balancing with the Nginx service ----> dashboard UI


Important: a single-master K8s cluster must already be deployed before starting this lab.
See my previous post: https://blog.csdn.net/JarryZho/article/details/104193913

Environment:
Related packages and documentation:

Link: https://pan.baidu.com/s/1l4vVCkZ03la-VpIFXSz1dA
Extraction code: rg99

Nginx load balancers:

lb1: 192.168.18.147/24 (mini-2)

lb2: 192.168.18.133/24 (mini-3)

Master nodes:

master1: 192.168.18.128/24 (CentOS 7-3)

master2: 192.168.18.132/24 (mini-1)

Node nodes:

node1: 192.168.18.148/24 (CentOS 7-4)

node2: 192.168.18.145/24 (CentOS 7-5)

VRRP floating address (VIP): 192.168.18.100


Multi-master cluster architecture diagram:

[Figure: multi-master cluster architecture]


------ Deploying master2 ------

Step 1: first stop the firewall service on master2
[root@master2 ~]# systemctl stop firewalld.service
[root@master2 ~]# setenforce 0
Step 2: on master1, copy the kubernetes directory over to master2
[root@master1 k8s]# scp -r /opt/kubernetes/ root@192.168.18.132:/opt
The authenticity of host '192.168.18.132 (192.168.18.132)' can't be established.
ECDSA key fingerprint is SHA256:mTT+FEtzAu4X3D5srZlz93S3gye8MzbqVZFDzfJd4Gk.
ECDSA key fingerprint is MD5:fa:5a:88:23:49:60:9b:b8:7e:4b:14:4b:3f:cd:96:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.18.132' (ECDSA) to the list of known hosts.
root@192.168.18.132's password:
token.csv                                                 100%   84    90.2KB/s   00:00
kube-apiserver                                            100%  934   960.7KB/s   00:00
kube-scheduler                                            100%   94   109.4KB/s   00:00
kube-controller-manager                                   100%  483   648.6KB/s   00:00
kube-apiserver                                            100%  184MB  82.9MB/s   00:02
kubectl                                                   100%   55MB  81.5MB/s   00:00
kube-controller-manager                                   100%  155MB  70.6MB/s   00:02
kube-scheduler                                            100%   55MB  77.4MB/s   00:00
ca-key.pem                                                100% 1675     1.2MB/s   00:00
ca.pem                                                    100% 1359     1.5MB/s   00:00
server-key.pem                                            100% 1675     1.2MB/s   00:00
server.pem                                                100% 1643     1.7MB/s   00:00
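Before moving on, it is worth confirming on master2 that the copy landed where the services expect it (a quick sketch; the directory layout follows the single-master deployment from the previous post):
[root@master2 ~]# ls /opt/kubernetes/bin /opt/kubernetes/cfg /opt/kubernetes/ssl
#bin should hold kube-apiserver, kube-controller-manager, kube-scheduler and kubectl;
#cfg the three config files plus token.csv; ssl the ca/server certificate pairs listed above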
Step 3: copy the three component unit files kube-apiserver.service, kube-controller-manager.service and kube-scheduler.service from master1 to master2
[root@master1 k8s]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.18.132:/usr/lib/systemd/system/
root@192.168.18.132's password:
kube-apiserver.service                                    100%  282   286.6KB/s   00:00
kube-controller-manager.service                           100%  317   223.9KB/s   00:00
kube-scheduler.service                                    100%  281   362.4KB/s   00:00
Step 4: on master2, change the IP addresses in the kube-apiserver config file
[root@master2 ~]# cd /opt/kubernetes/cfg/
[root@master2 cfg]# ls
kube-apiserver  kube-controller-manager  kube-scheduler  token.csv
[root@master2 cfg]# vim kube-apiserver
5 --bind-address=192.168.18.132 \
7 --advertise-address=192.168.18.132 \
#lines 5 and 7: the IP address must be changed to master2's address
#After editing, press Esc to leave insert mode, then type :wq to save and quit
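The same two flags can also be rewritten non-interactively (a sketch; only the bind and advertise addresses are touched, so any other addresses in the file, such as the etcd endpoints, are left alone):
[root@master2 cfg]# sed -ri 's#--bind-address=[0-9.]+#--bind-address=192.168.18.132#; s#--advertise-address=[0-9.]+#--advertise-address=192.168.18.132#' kube-apiserver
[root@master2 cfg]# grep -E 'bind-address|advertise-address' kube-apiserver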
Step 5: copy the existing etcd certificates from master1 for master2 to use

Important: master2 must have the etcd certificates, otherwise the apiserver service will not start

[root@master1 k8s]# scp -r /opt/etcd/ root@192.168.18.132:/opt/
root@192.168.18.132's password:
etcd                                                      100%  516   535.5KB/s   00:00
etcd                                                      100%   18MB  90.6MB/s   00:00
etcdctl                                                   100%   15MB  80.5MB/s   00:00
ca-key.pem                                                100% 1675     1.4MB/s   00:00
ca.pem                                                    100% 1265   411.6KB/s   00:00
server-key.pem                                            100% 1679     2.0MB/s   00:00
server.pem                                                100% 1338   429.6KB/s   00:00
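A quick sanity check before starting kube-apiserver (a sketch, assuming the certificates live under /opt/etcd/ssl/ as in the single-master deployment):
[root@master2 ~]# ls /opt/etcd/ssl/
#expect ca.pem, ca-key.pem, server.pem and server-key.pem, i.e. the files copied above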
Step 6: start the three component services on master2
[root@master2 cfg]# systemctl start kube-apiserver.service
[root@master2 cfg]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master2 cfg]# systemctl status kube-apiserver.service
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-02-07 09:16:57 CST; 56min ago

[root@master2 cfg]# systemctl start kube-controller-manager.service
[root@master2 cfg]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master2 cfg]# systemctl status kube-controller-manager.service
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-02-07 09:17:02 CST; 57min ago

[root@master2 cfg]# systemctl start kube-scheduler.service
[root@master2 cfg]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master2 cfg]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-02-07 09:17:07 CST; 58min ago
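The three control-plane services can also be handled in one loop (an equivalent sketch of the commands above):
[root@master2 cfg]# for svc in kube-apiserver kube-controller-manager kube-scheduler; do systemctl start ${svc}.service; systemctl enable ${svc}.service; systemctl is-active ${svc}.service; done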
Step 7: add the environment variable and make it take effect
[root@master2 cfg]# vim /etc/profile
#append at the end of the file
export PATH=$PATH:/opt/kubernetes/bin/
[root@master2 cfg]# source /etc/profile
[root@master2 cfg]# kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
192.168.18.145   Ready    <none>   21h   v1.12.3
192.168.18.148   Ready    <none>   22h   v1.12.3
#node1 and node2 are both listed, confirming they have joined the cluster

master2 is now fully deployed


------ Deploying the Nginx Load Balancers ------

Note: Nginx is used here to provide the load balancing. Since version 1.9, Nginx supports layer-4 forwarding (load balancing) through the added stream module.

How the multi-node setup works:

Unlike the single-node setup, the key point of a multi-node cluster is that everything points at one central address. Back when we built the single-node cluster we already defined the VIP (192.168.18.100) in the k8s-cert.sh script. The VIP fronts the apiserver, and both masters open their ports to accept apiserver requests from the nodes. When a new node joins, it does not contact a master directly; it sends its apiserver request to the VIP, which schedules it and forwards it to one of the masters to handle. That master then issues the certificate to the node.
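Once the load balancers and the VIP below are running, this principle is easy to sanity-check from any node: the VIP has to answer on the apiserver port no matter which master ends up serving the request (a sketch; -k skips certificate verification, and even a 401/403 reply proves the VIP is forwarding to an apiserver):
[root@node1 ~]# curl -k https://192.168.18.100:6443/version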

Step 1: upload the keepalived.conf and nginx.sh files into the root home directory on both lb1 and lb2
`lb1`
[root@lb1 ~]# ls
anaconda-ks.cfg       keepalived.conf  公共  視頻  文檔  音樂
initial-setup-ks.cfg  nginx.sh         模板  圖片  下載  桌面

`lb2`
[root@lb2 ~]# ls
anaconda-ks.cfg       keepalived.conf  公共  視頻  文檔  音樂
initial-setup-ks.cfg  nginx.sh         模板  圖片  下載  桌面
Step 2: operations on lb1 (192.168.18.147)
[root@lb1 ~]# systemctl stop firewalld.service
[root@lb1 ~]# setenforce 0

[root@lb1 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
#After editing, press Esc to leave insert mode, then type :wq to save and quit
`Reload the yum repository list`
[root@lb1 ~]# yum list
`Install nginx`
[root@lb1 ~]# yum install nginx -y

[root@lb1 ~]# vim /etc/nginx/nginx.conf
#insert the following below line 12
stream {

   log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 192.168.18.128:6443;     #master1's IP address
        server 192.168.18.132:6443;     #master2's IP address
    }
    server {
                listen 6443;
                proxy_pass k8s-apiserver;
    }
    }
#After editing, press Esc to leave insert mode, then type :wq to save and quit
`Check the syntax`
[root@lb1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

[root@lb1 ~]# cd /usr/share/nginx/html/
[root@lb1 html]# ls
50x.html  index.html
[root@lb1 html]# vim index.html
14 <h2>Welcome to master nginx!</h2>     #line 14: add "master" to tell the two pages apart
#After editing, press Esc to leave insert mode, then type :wq to save and quit
`Start the service`
[root@lb1 ~]# systemctl start nginx
Verify in a browser: visiting 192.168.18.147 shows the master nginx page

[Screenshot: master nginx welcome page at 192.168.18.147]

Deploy the keepalived service
[root@lb1 html]# yum install keepalived -y
`Edit the configuration file`
[root@lb1 html]# cd ~
[root@lb1 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite "/etc/keepalived/keepalived.conf"? yes
#overwrite the package's default config with the keepalived.conf we uploaded earlier

[root@lb1 ~]# vim /etc/keepalived/keepalived.conf
18     script "/etc/nginx/check_nginx.sh"       #line 18: change the path to /etc/nginx/; the script itself is written below
23     interface ens33      #change eth0 to ens33; check your NIC name with ifconfig
24     virtual_router_id 51     #VRRP router ID for this instance; must be unique per instance
25     priority 100             #priority; set this to 90 on the backup server
31     virtual_ipaddress {
32         192.168.18.100/24    #change the VIP to the 192.168.18.100 we planned earlier
#delete everything from line 38 down
#After editing, press Esc to leave insert mode, then type :wq to save and quit

`Write the health-check script`
[root@lb1 ~]# vim /etc/nginx/check_nginx.sh
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")    #count the running nginx processes

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
#if the count is 0, nginx is down, so stop the keepalived service
#After writing, press Esc to leave insert mode, then type :wq to save and quit
[root@lb1 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@lb1 ~]# ls /etc/nginx/check_nginx.sh
/etc/nginx/check_nginx.sh       #the script is now executable (ls shows it in green)
[root@lb1 ~]# systemctl start keepalived

[root@lb1 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:24:63:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.147/24 brd 192.168.18.255 scope global dynamic ens33
       valid_lft 1370sec preferred_lft 1370sec
    inet 192.168.18.100/24 scope global secondary ens33       #the floating address is currently on lb1
       valid_lft forever preferred_lft forever
    inet6 fe80::1cb1:b734:7f72:576f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
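Before moving on, the health-check logic can be exercised by hand (a sketch): with nginx running the process count is non-zero, so the script should leave keepalived alone.
[root@lb1 ~]# bash /etc/nginx/check_nginx.sh
[root@lb1 ~]# systemctl is-active keepalived.service    #expected: active, because nginx processes are present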
Step 3: operations on lb2 (192.168.18.133)
[root@lb2 ~]# systemctl stop firewalld.service
[root@lb2 ~]# setenforce 0

[root@lb2 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
#After editing, press Esc to leave insert mode, then type :wq to save and quit
`Reload the yum repository list`
[root@lb2 ~]# yum list
`Install nginx`
[root@lb2 ~]# yum install nginx -y

[root@lb2 ~]# vim /etc/nginx/nginx.conf
#insert the following below line 12
stream {

   log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 192.168.18.128:6443;     #master1's IP address
        server 192.168.18.132:6443;     #master2's IP address
    }
    server {
                listen 6443;
                proxy_pass k8s-apiserver;
    }
    }
#After editing, press Esc to leave insert mode, then type :wq to save and quit
`Check the syntax`
[root@lb2 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

[root@lb2 ~]# vim /usr/share/nginx/html/index.html
14 <h2>Welcome to backup nginx!</h2>    #line 14: add "backup" to tell the two pages apart
#After editing, press Esc to leave insert mode, then type :wq to save and quit
`Start the service`
[root@lb2 ~]# systemctl start nginx
Verify in a browser: visiting 192.168.18.133 shows the backup nginx page

[Screenshot: backup nginx welcome page at 192.168.18.133]

Deploy the keepalived service
[root@lb2 ~]# yum install keepalived -y
`Edit the configuration file`
[root@lb2 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite "/etc/keepalived/keepalived.conf"? yes
#overwrite the package's default config with the keepalived.conf we uploaded earlier

[root@lb2 ~]# vim /etc/keepalived/keepalived.conf
18     script "/etc/nginx/check_nginx.sh"       #line 18: change the path to /etc/nginx/; the script itself is written below
22     state BACKUP     #line 22: change the role from MASTER to BACKUP
23     interface ens33  #change eth0 to ens33
24     virtual_router_id 51     #VRRP router ID for this instance; must be unique per instance
25     priority 90      #priority; 90 for the backup server
31     virtual_ipaddress {
32         192.168.18.100/24    #change the VIP to the 192.168.18.100 we planned earlier
#delete everything from line 38 down
#After editing, press Esc to leave insert mode, then type :wq to save and quit

`Write the health-check script`
[root@lb2 ~]# vim /etc/nginx/check_nginx.sh
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")    #count the running nginx processes

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
#if the count is 0, nginx is down, so stop the keepalived service
#After writing, press Esc to leave insert mode, then type :wq to save and quit
[root@lb2 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@lb2 ~]# ls /etc/nginx/check_nginx.sh
/etc/nginx/check_nginx.sh       #the script is now executable (ls shows it in green)

[root@lb2 ~]# systemctl start keepalived
[root@lb2 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9d:b7:83 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.133/24 brd 192.168.18.255 scope global dynamic ens33
       valid_lft 958sec preferred_lft 958sec
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
#192.168.18.100 is not present here, because the address currently sits on lb1 (the MASTER)
Step 4: verify that the floating address fails over
`Stop the nginx service on lb1`
[root@lb1 ~]# pkill nginx
[root@lb1 ~]# systemctl status nginx
● nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Fri 2020-02-07 12:16:39 CST; 1min 40s ago
#nginx is now stopped

`Check whether keepalived was stopped along with it`
[root@lb1 ~]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
#keepalived has been stopped as well, which shows the check_nginx.sh script did its job

[root@lb1 ~]# ps -ef |grep nginx |egrep -cv "grep|$$"
0
#the count is 0, so by the script's condition keepalived should be stopped

`Check whether the floating address is still on lb1`
[root@lb1 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:24:63:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.147/24 brd 192.168.18.255 scope global dynamic ens33
       valid_lft 1771sec preferred_lft 1771sec
    inet6 fe80::1cb1:b734:7f72:576f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
#the floating address 192.168.18.100 has disappeared; if active-standby failover works, it should have moved to lb2

`Now check lb2 for the floating address`
[root@lb2 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9d:b7:83 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.133/24 brd 192.168.18.255 scope global dynamic ens33
       valid_lft 1656sec preferred_lft 1656sec
    inet 192.168.18.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
#the floating address 192.168.18.100 is now on lb2, so the active-standby failover works
Step 5: recovery
`Start nginx and keepalived on lb1 again`
[root@lb1 ~]# systemctl start nginx
[root@lb1 ~]# systemctl start keepalived

`The floating address moves back to lb1`
[root@lb1 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:24:63:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.147/24 brd 192.168.18.255 scope global dynamic ens33
       valid_lft 1051sec preferred_lft 1051sec
    inet 192.168.18.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::1cb1:b734:7f72:576f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
#and correspondingly the floating address disappears from lb2 again
Step 6: from the host machine, use cmd to test whether the floating address is reachable
C:\Users\zhn>ping 192.168.18.100

Pinging 192.168.18.100 with 32 bytes of data:
Reply from 192.168.18.100: bytes=32 time<1ms TTL=64
Reply from 192.168.18.100: bytes=32 time<1ms TTL=64
Reply from 192.168.18.100: bytes=32 time=1ms TTL=64
Reply from 192.168.18.100: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.18.100:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 1ms, Average = 0ms
#the virtual IP responds, so it is reachable from outside the cluster
Step 7: browsing to 192.168.18.100 from the host machine should show the master nginx page we set up earlier, i.e. lb1's page

[Screenshot: master nginx page served via the VIP 192.168.18.100]


Step 8: update the node config files so they all point at the VIP (bootstrap.kubeconfig, kubelet.kubeconfig, kube-proxy.kubeconfig)
node1:
[root@node1 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
5     server: https://192.168.18.100:6443       #line 5: change to the VIP address
#After editing, press Esc to leave insert mode, then type :wq to save and quit

[root@node1 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
5     server: https://192.168.18.100:6443       #line 5: change to the VIP address
#After editing, press Esc to leave insert mode, then type :wq to save and quit

[root@node1 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
5     server: https://192.168.18.100:6443       #line 5: change to the VIP address
#After editing, press Esc to leave insert mode, then type :wq to save and quit

`Self-check right after the replacement`
[root@node1 ~]# cd /opt/kubernetes/cfg/
[root@node1 cfg]# grep 100 *
bootstrap.kubeconfig:    server: https://192.168.18.100:6443
kubelet.kubeconfig:    server: https://192.168.18.100:6443
kube-proxy.kubeconfig:    server: https://192.168.18.100:6443

[root@node1 cfg]# systemctl restart kubelet.service
[root@node1 cfg]# systemctl restart kube-proxy.service
node2:
[root@node2 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
5     server: https://192.168.18.100:6443       #line 5: change to the VIP address
#After editing, press Esc to leave insert mode, then type :wq to save and quit

[root@node2 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
5     server: https://192.168.18.100:6443       #line 5: change to the VIP address
#After editing, press Esc to leave insert mode, then type :wq to save and quit

[root@node2 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
5     server: https://192.168.18.100:6443       #line 5: change to the VIP address
#After editing, press Esc to leave insert mode, then type :wq to save and quit

`Self-check right after the replacement`
[root@node2 ~]# cd /opt/kubernetes/cfg/
[root@node2 cfg]# grep 100 *
bootstrap.kubeconfig:    server: https://192.168.18.100:6443
kubelet.kubeconfig:    server: https://192.168.18.100:6443
kube-proxy.kubeconfig:    server: https://192.168.18.100:6443

[root@node2 cfg]# systemctl restart kubelet.service
[root@node2 cfg]# systemctl restart kube-proxy.service
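For reference, the same three edits can also be made non-interactively on either node (a sketch, assuming the files previously pointed at master1's address 192.168.18.128):
[root@node2 cfg]# sed -i 's#server: https://192.168.18.128:6443#server: https://192.168.18.100:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
[root@node2 cfg]# grep "server:" *.kubeconfig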
Step 9: check nginx's k8s access log on lb1
[root@lb1 ~]# tail /var/log/nginx/k8s-access.log
192.168.18.145 192.168.18.128:6443 - [07/Feb/2020:14:18:54 +0800] 200 1119
192.168.18.145 192.168.18.132:6443 - [07/Feb/2020:14:18:54 +0800] 200 1119
192.168.18.148 192.168.18.128:6443 - [07/Feb/2020:14:18:57 +0800] 200 1120
192.168.18.148 192.168.18.132:6443 - [07/Feb/2020:14:18:57 +0800] 200 1120
Step 10: operations on master1
`Test creating a pod`
[root@master1 ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created

`Check the status`
[root@master1 ~]# kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-7hdfj   0/1     ContainerCreating   0          32s
#the status is ContainerCreating: the container is still being created

[root@master1 ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-7hdfj   1/1     Running   0          73s
#the status is now Running: creation finished and the pod is up

`Note: a log access problem`
[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-7hdfj)
#the logs cannot be viewed yet; the permission has to be granted first

`Bind the cluster's anonymous user to the administrator role`
[root@master1 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj        #no error is reported this time

`Check the pod network`
[root@master1 ~]# kubectl get pods -o wide
NAME                  READY     STATUS    RESTARTS   AGE      IP            NODE         NOMINATED NODE
nginx-dbddb74b8-7hdfj   1/1     Running   0          20m   172.17.32.2   192.168.18.148  <none>

From node1, which sits on the matching pod network segment, the pod can be reached directly:
[root@node1 ~]# curl 172.17.32.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h2>Welcome to nginx!</h2>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
#this is the default page served by the nginx container
The access produces a log entry, so we can go back to master1 and look at the pod's log
[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj
172.17.32.1 - - [07/Feb/2020:06:52:53 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
#here is the access record from node1, which came in through the gateway address (172.17.32.1)

------ Creating the Dashboard UI ------

Create the dashboard working directory on master1
[root@master1 ~]# cd k8s/
[root@master1 k8s]# mkdir dashboard
[root@master1 k8s]# cd dashboard/
#upload the dashboard page files into this directory

[Screenshot: dashboard YAML files uploaded to /root/k8s/dashboard]

`The dashboard yaml files are now in place`
[root@master1 dashboard]# ls
dashboard-configmap.yaml   dashboard-rbac.yaml    dashboard-service.yaml
dashboard-controller.yaml  dashboard-secret.yaml  k8s-admin.yaml

`Create the dashboard resources; the order matters`
[root@master1 dashboard]# kubectl create -f dashboard-rbac.yaml     #authorize access to the API
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
[root@master1 dashboard]# kubectl create -f dashboard-secret.yaml   #the encryption secrets
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-key-holder created
[root@master1 dashboard]# kubectl create -f dashboard-configmap.yaml    #application configuration
configmap/kubernetes-dashboard-settings created
[root@master1 dashboard]# kubectl create -f dashboard-controller.yaml   #the controller
serviceaccount/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
[root@master1 dashboard]# kubectl create -f dashboard-service.yaml      #publish the service so it can be accessed
service/kubernetes-dashboard created
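The five resources can also be created with a single loop that preserves the required order (an equivalent sketch):
[root@master1 dashboard]# for f in dashboard-rbac.yaml dashboard-secret.yaml dashboard-configmap.yaml dashboard-controller.yaml dashboard-service.yaml; do kubectl create -f $f; done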

`When finished, check the pod created in the designated kube-system namespace`
[root@master1 dashboard]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-65f974f565-9qs8j   1/1     Running   0          3m27s

`Check how to access it`
[root@master1 dashboard]# kubectl get pods,svc -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/kubernetes-dashboard-65f974f565-9qs8j   1/1     Running   0          4m21s

NAME                           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kubernetes-dashboard   NodePort   10.0.0.169   <none>        443:30001/TCP   4m15s
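The NodePort can be probed from the command line as well (a sketch; the dashboard serves HTTPS with a self-signed certificate, hence -k):
[root@master1 dashboard]# curl -k https://192.168.18.148:30001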
Verification: entering https://<node IP>:30001 in a browser reaches the dashboard:

[Screenshot: dashboard access attempt in the browser]

Fix: Google Chrome cannot access the dashboard
`On master1:`
[root@master1 dashboard]# vim dashboard-cert.sh
cat > dashboard-csr.json <<EOF
{
   "CN": "Dashboard",
   "hosts": [],
   "key": {
       "algo": "rsa",
       "size": 2048
   },
   "names": [
       {
           "C": "CN",
           "L": "NanJing",
           "ST": "NanJing"
       }
   ]
}
EOF

K8S_CA=$1
cfssl gencert -ca=$K8S_CA/ca.pem -ca-key=$K8S_CA/ca-key.pem -config=$K8S_CA/ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare dashboard
kubectl delete secret kubernetes-dashboard-certs -n kube-system
kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system
#After writing, press Esc to leave insert mode, then type :wq to save and quit

[root@master1 dashboard]# bash dashboard-cert.sh /root/k8s/k8s-cert/
2020/02/07 16:47:49 [INFO] generate received request
2020/02/07 16:47:49 [INFO] received CSR
2020/02/07 16:47:49 [INFO] generating key: rsa-2048
2020/02/07 16:47:49 [INFO] encoded CSR
2020/02/07 16:47:49 [INFO] signed certificate with serial number 612466244367800695250627555980294380133655299692
2020/02/07 16:47:49 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
secret "kubernetes-dashboard-certs" deleted
secret/kubernetes-dashboard-certs created

[root@master1 dashboard]# vim dashboard-controller.yaml
 45         args:
 46           # PLATFORM-SPECIFIC ARGS HERE
 47           - --auto-generate-certificates
 #insert the following below line 47
 48           - --tls-key-file=dashboard-key.pem
 49           - --tls-cert-file=dashboard.pem
#After editing, press Esc to leave insert mode, then type :wq to save and quit

`Redeploy`
[root@master1 dashboard]# kubectl apply -f dashboard-controller.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
serviceaccount/kubernetes-dashboard configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/kubernetes-dashboard configured
#the browser will now offer: Proceed to 192.168.18.148 (unsafe)

[Screenshot: browser warning with the option to proceed to 192.168.18.148 (unsafe)]

[Screenshot: dashboard token login page]

`Generate the login token`
[root@master1 dashboard]# kubectl create -f k8s-admin.yaml
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created

`Save the secret name`
[root@master1 dashboard]# kubectl get secret -n kube-system
NAME                               TYPE                                  DATA   AGE
dashboard-admin-token-l9z5f        kubernetes.io/service-account-token   3      30s
#dashboard-admin-token-l9z5f is the name we will use below to view the token
default-token-8hwtl                kubernetes.io/service-account-token   3      2d3h
kubernetes-dashboard-certs         Opaque                                11     11m
kubernetes-dashboard-key-holder    Opaque                                2      26m
kubernetes-dashboard-token-crqvs   kubernetes.io/service-account-token   3      25m

`View the token`
[root@master1 dashboard]# kubectl describe secret dashboard-admin-token-l9z5f -n kube-system
Name:         dashboard-admin-token-l9z5f
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 115a70a5-4988-11ea-b617-000c2986f9b2

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbDl6NWYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMTE1YTcwYTUtNDk4OC0xMWVhLWI2MTctMDAwYzI5ODZmOWIyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.DdqS8xHxQYUw68NpqR1XIqQRgOFS3nsrfhjPe1pdqbt6PepAf1pOaDYTJ2cGtbA89J4v0go-6ZWc1BiwidMcthVv_LgXD9cD_5RXN_GoYqsEFFFgkzdyG0y4_BSowMCheS9tGCzuo-O-w_U5gPz3LGTwMRPyRbfEVDaS3Dign_b8SASD_56WkHkSGecI42t1Zct5h2Mnsam_qPhpfgMCzwxQ8l8_8XK6t5NK6orSwL9ozAmX5XGR9j4EL06OKy6al5hAHoB1k0srqT_mcj8Lngt7iq6VPuLVVAF7azAuItlL471VR5EMfvSCRrUG2nPiv44vjQPghnRYXMWS71_B5w
ca.crt:     1359 bytes
namespace:  11 bytes
#the whole token field is the token we need to copy
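Instead of reading the whole describe output, the token can also be extracted with a one-liner (a sketch):
[root@master1 dashboard]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') | awk '/^token:/{print $2}'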
Paste the token into the login page and the dashboard UI appears:

[Screenshot: Kubernetes dashboard UI after logging in]

That completes the full multi-node K8s deployment, all the way to the dashboard UI!
