

MySQL High Availability Architecture: MHA

Published 2020-05-17 05:05:55 · Author: zengwj1949 · Category: Servers

    MySQL currently has several fairly mature high-availability options: heartbeat+drbd, MHA, and plain MySQL replication. The heartbeat+drbd approach scales poorly and puts both reads and writes on the primary server, with the standby serving no reads, so it suits environments with modest data growth and very strict consistency requirements, such as banking and finance. This article focuses on the MHA high-availability architecture.

    MHA (Master High Availability) is failover and master-promotion software for MySQL high-availability environments. When the master fails, MHA can complete the database failover automatically within roughly 0-30 seconds and, during the switch, preserve data consistency as far as possible, giving high availability in a real sense. MHA is built on top of MySQL master-slave replication, so first a quick look at the two most common replication modes:

  • Asynchronous replication: the master commits a transaction, writes the event to its binary log, and immediately returns to the client, so the slaves always lag the master by some amount. The risk is that if the master commits a transaction and writes it to the binary log but crashes before any slave has received that log, the master and slaves end up inconsistent.

  • Semi-synchronous replication: when the master commits a transaction it does not acknowledge the client right away; it waits until at least one slave has received the binary log event and written it to its relay log, and only then reports success to the client.
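
Semi-synchronous replication is not used in the rest of this article, but for reference, a minimal sketch of enabling it looks like the following (assuming MySQL 5.5 or later, which ships the semisync plugins):

mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';    -- on the master
mysql> SET GLOBAL rpl_semi_sync_master_enabled = 1;
mysql> SET GLOBAL rpl_semi_sync_master_timeout = 1000;                     -- fall back to async after 1 second

mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';      -- on each slave
mysql> SET GLOBAL rpl_semi_sync_slave_enabled = 1;
mysql> STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;                        -- reconnect so the setting takes effect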

    MHA components:

  • MHA Manager: the management node. It can run on a dedicated server and manage several master-slave clusters, or it can be deployed on one of the slaves.

  • MHA Node: the data node, which runs on every MySQL server.

The MHA Manager probes the master at regular intervals. When the master fails, the Manager automatically promotes the slave that holds the most recent data to be the new master and then re-points all remaining slaves at it. The whole failover is transparent to the application. (A manual equivalent of the failover is sketched after the steps below.)

    How MHA performs a failover:

1) Save the binary log events from the crashed master

2) Identify the slave with the most recent updates

3) Apply the differential relay logs to the other slaves

4) Apply the binary log events saved from the master

5) Promote one slave to be the new master

6) Point the other slaves at the new master and resume replication
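
The same sequence can also be triggered by hand with masterha_master_switch when the Manager daemon is not running. A hedged sketch against the example configuration built later in this article (the hosts are the ones used below):

[root@node1 ~]# masterha_master_switch --master_state=dead --conf=/etc/masterha/appl.cnf \
                --dead_master_host=192.168.154.156 --new_master_host=192.168.154.130 --interactive=0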


Example: MHA high-availability setup (on a trusted internal network the firewall can be turned off; otherwise open the required ports on every node)

Manager:node1:192.168.154.128

Master:node2:192.168.154.156

Slave:node3:192.168.154.130

Slave:node4:192.168.154.154


Step 1: Configure master-slave replication

1) On the master node:

[root@node2 ~]# vim /etc/my.cnf 

innodb_file_per_table=1        # one tablespace file per InnoDB table

skip_name_resolve              # skip DNS resolution for client host names

log-bin=master-bin

relay-log=relay-bin

server-id=1

[root@node2 ~]# service mysqld restart

Check the binary log coordinates (the File and Position values are used for CHANGE MASTER TO on the slaves):

mysql> show master status;

+-------------------+----------+--------------+------------------+

| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |

+-------------------+----------+--------------+------------------+

| master-bin.000001 |      106 |              |                  |

+-------------------+----------+--------------+------------------+

1 row in set (0.00 sec)

Create the replication account:

mysql> grant replication slave,replication client on *.* to 'slave'@'192.168.154.%' identified by 'slave';

Query OK, 0 rows affected (0.06 sec)


mysql> flush privileges;

Query OK, 0 rows affected (0.00 sec)
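
As a quick sanity check (a hedged example, run from one of the slave hosts and assuming the mysql client is installed there), confirm that the new account can actually reach the master:

[root@node3 ~]# mysql -uslave -pslave -h192.168.154.156 -e 'select 1'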


2) On the slave nodes:

[root@node3 ~]# vim /etc/my.cnf 

innodb_file_per_table=1

skip_name_resolve

log-bin=slave-bin

relay-log=relay-bin

server_id=2

read_only=1

relay_log_purge=0

[root@node3 ~]# service mysqld restart


[root@node4 ~]# vim /etc/my.cnf

innodb_file_per_table=1

skip_name_resolve

log-bin=slave-bin

relay-log=relay-bin

server_id=3

read_only=1                    # make the slave read-only for ordinary (non-SUPER) users

relay_log_purge=0              # do not purge relay logs automatically; MHA needs them to fill gaps on other slaves

[root@node4 ~]# service mysqld restart
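
Both slave-side settings are dynamic variables, so they can also be applied at runtime without a restart (a minimal sketch; keeping them in my.cnf as above still matters so they survive the next restart):

mysql> set global read_only=1;
mysql> set global relay_log_purge=0;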


Point both slaves at the master and start replication (run on node3 and node4):

mysql> change master to master_host='192.168.154.156',master_user='slave',master_password='slave',master_log_file='master-bin.000001',master_log_pos=106;

Query OK, 0 rows affected (0.03 sec)


mysql> start slave;

Query OK, 0 rows affected (0.01 sec)


mysql> show slave status\G

*************************** 1. row ***************************

               Slave_IO_State: Waiting for master to send event

                  Master_Host: 192.168.154.156

                  Master_User: slave

                  Master_Port: 3306

                Connect_Retry: 60

              Master_Log_File: master-bin.000001

          Read_Master_Log_Pos: 354

               Relay_Log_File: relay-bin.000002

                Relay_Log_Pos: 500

        Relay_Master_Log_File: master-bin.000001

             Slave_IO_Running: Yes

            Slave_SQL_Running: Yes

              Replicate_Do_DB: 

          Replicate_Ignore_DB: 

           Replicate_Do_Table: 

       Replicate_Ignore_Table: 

      Replicate_Wild_Do_Table: 

  Replicate_Wild_Ignore_Table: 

                   Last_Errno: 0

                   Last_Error: 

                 Skip_Counter: 0

          Exec_Master_Log_Pos: 354

              Relay_Log_Space: 649

              Until_Condition: None

               Until_Log_File: 

                Until_Log_Pos: 0

           Master_SSL_Allowed: No

           Master_SSL_CA_File: 

           Master_SSL_CA_Path: 

              Master_SSL_Cert: 

            Master_SSL_Cipher: 

               Master_SSL_Key: 

        Seconds_Behind_Master: 0

Master_SSL_Verify_Server_Cert: No

                Last_IO_Errno: 0

                Last_IO_Error: 

               Last_SQL_Errno: 0

               Last_SQL_Error: 

1 row in set (0.00 sec)
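
It is worth a quick end-to-end check here (a hedged example with a throwaway database name): create something on the master and confirm it shows up on both slaves.

mysql> create database mha_test;               -- on the master (node2)
mysql> show databases like 'mha_test';         -- on node3 and node4: the database should be listed
mysql> drop database mha_test;                 -- clean up on the master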


3) On the master, create an account with management privileges. The MHA Manager uses it to connect to every MySQL node; since replication is already running, the grant is replicated to the slaves as well.

mysql> grant all on *.* to 'zwj'@'192.168.154.%' identified by 'zwj';

Query OK, 0 rows affected (0.00 sec)


Step 2: Configure passwordless SSH login between all cluster nodes

On node1 (generate a key pair with ssh-keygen -t rsa first if one does not already exist):

[root@node1 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.156

[root@node1 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.130

[root@node1 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.154

[root@node1 ~]# ssh 192.168.154.154 'ifconfig'            # verify passwordless login works

eth0      Link encap:Ethernet  HWaddr 00:0C:29:67:65:ED  

          inet addr:192.168.154.154  Bcast:192.168.154.255  Mask:255.255.255.0

          inet6 addr: fe80::20c:29ff:fe67:65ed/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:26253 errors:0 dropped:0 overruns:0 frame:0

          TX packets:42416 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000 

          RX bytes:23453164 (22.3 MiB)  TX bytes:2514457 (2.3 MiB)

          Interrupt:19 Base address:0x2024 


On node2:

[root@node2 ~]# ssh-keygen -t rsa

[root@node2 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.128

[root@node2 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.130

[root@node2 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.154


On node3:

[root@node3 log]# ssh-keygen -t rsa

[root@node3 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.128

[root@node3 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.156

[root@node3 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.154


On node4:

[root@node4 ~]# ssh-keygen -t rsa

[root@node4 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.128

[root@node4 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.156

[root@node4 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.130
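
A quick way to confirm that the Manager can reach every MySQL node without a password (a hedged one-liner run on node1):

[root@node1 ~]# for h in 192.168.154.156 192.168.154.130 192.168.154.154; do ssh root@$h hostname; done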


Step 3: Install MHA Manager on node1 (the Node package is installed first because the Manager depends on it)

[root@node1 ~]# yum install perl-DBD-MySQL -y

[root@node1 ~]# tar -zxf mha4mysql-node-0.56.tar.gz 

[root@node1 ~]# cd mha4mysql-node-0.56

[root@node1 mha4mysql-node-0.56]# perl Makefile.PL 

[root@node1 mha4mysql-node-0.56]# make

[root@node1 mha4mysql-node-0.56]# make install


[root@node1 ~]# yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes -y        # Perl modules required by MHA Manager

[root@node1 ~]# tar -zxf mha4mysql-manager-0.56.tar.gz 

[root@node1 ~]# cd mha4mysql-manager-0.56

[root@node1 mha4mysql-manager-0.56]# perl Makefile.PL

[root@node1 mha4mysql-manager-0.56]# make

[root@node1 mha4mysql-manager-0.56]# make install


Step 4: Install MHA Node on every MySQL server (node2 shown; repeat the same steps on node3 and node4)

[root@node2 ~]# yum install perl-DBD-MySQL -y

[root@node2 ~]# tar -zxf mha4mysql-node-0.56.tar.gz

[root@node2 ~]# cd mha4mysql-node-0.56/

[root@node2 mha4mysql-node-0.56]# perl Makefile.PL 

[root@node2 mha4mysql-node-0.56]# make

[root@node2 mha4mysql-node-0.56]# make install 


Step 5: Create the working directory and configure MHA

[root@node1 ~]# mkdir -pv /etc/masterha/appl            # also creates the appl working/log directory used below

[root@node1 ~]# vim /etc/masterha/appl.cnf

[server default]

user=zwj

password=zwj

manager_workdir=/etc/masterha/appl

manager_log=/etc/masterha/appl/manager.log

remote_workdir=/etc/masterha/appl

ssh_user=root

repl_user=slave

repl_password=slave

ping_interval=1


[server1]

hostname=192.168.154.156


[server2]

hostname=192.168.154.130

candidate_master=1                        # prefer this node as the new master during failover


[server3]

hostname=192.168.154.154


Step 6: Check SSH connectivity

[root@node1 ~]# masterha_check_ssh --conf=/etc/masterha/appl.cnf 

Wed May 10 00:12:58 2017 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.

Wed May 10 00:12:58 2017 - [info] Reading application default configuration from /etc/masterha/appl.cnf..

Wed May 10 00:12:58 2017 - [info] Reading server configuration from /etc/masterha/appl.cnf..

Wed May 10 00:12:58 2017 - [info] Starting SSH connection tests..

Wed May 10 00:13:15 2017 - [debug] 

Wed May 10 00:12:59 2017 - [debug]  Connecting via SSH from root@192.168.154.154(192.168.154.154:22) to root@192.168.154.156(192.168.154.156:22)..

Wed May 10 00:13:05 2017 - [debug]   ok.

Wed May 10 00:13:05 2017 - [debug]  Connecting via SSH from root@192.168.154.154(192.168.154.154:22) to root@192.168.154.130(192.168.154.130:22)..

Wed May 10 00:13:15 2017 - [debug]   ok.

Wed May 10 00:13:20 2017 - [debug] 

Wed May 10 00:12:58 2017 - [debug]  Connecting via SSH from root@192.168.154.130(192.168.154.130:22) to root@192.168.154.156(192.168.154.156:22)..

Wed May 10 00:13:11 2017 - [debug]   ok.

Wed May 10 00:13:11 2017 - [debug]  Connecting via SSH from root@192.168.154.130(192.168.154.130:22) to root@192.168.154.154(192.168.154.154:22)..

Wed May 10 00:13:20 2017 - [debug]   ok.

Wed May 10 00:13:35 2017 - [debug] 

Wed May 10 00:12:58 2017 - [debug]  Connecting via SSH from root@192.168.154.156(192.168.154.156:22) to root@192.168.154.130(192.168.154.130:22)..

Wed May 10 00:13:15 2017 - [debug]   ok.

Wed May 10 00:13:15 2017 - [debug]  Connecting via SSH from root@192.168.154.156(192.168.154.156:22) to root@192.168.154.154(192.168.154.154:22)..

Wed May 10 00:13:35 2017 - [debug]   ok.

Wed May 10 00:13:35 2017 - [info] All SSH connection tests passed successfully.


Step 7: Check the whole replication environment

[root@node1 ~]# masterha_check_repl --conf=/etc/masterha/appl.cnf 

...

192.168.154.156(192.168.154.156:3306) (current master)

 +--192.168.154.130(192.168.154.130:3306)

 +--192.168.154.154(192.168.154.154:3306)


Wed May 10 00:33:36 2017 - [info] Checking replication health on 192.168.154.130..

Wed May 10 00:33:36 2017 - [info]  ok.

Wed May 10 00:33:36 2017 - [info] Checking replication health on 192.168.154.154..

Wed May 10 00:33:36 2017 - [info]  ok.

Wed May 10 00:33:36 2017 - [warning] master_ip_failover_script is not defined.

Wed May 10 00:33:36 2017 - [warning] shutdown_script is not defined.

Wed May 10 00:33:36 2017 - [info] Got exit code 0 (Not master dead).


MySQL Replication Health is OK.
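
The two warnings above are harmless for this demonstration: master_ip_failover_script and shutdown_script are optional hooks. In a production setup that serves applications through a virtual IP, the [server default] section would normally point the first one at a custom script that moves the VIP during failover, along the lines of the (hypothetical) entry below:

master_ip_failover_script=/usr/local/bin/master_ip_failover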


Step 8: Start MHA Manager monitoring

[root@node1 ~]# nohup masterha_manager --conf=/etc/masterha/appl.cnf > /etc/masterha/appl/manager.log 2>&1 &

[1] 8300

Check the MHA Manager status:

[root@node1 ~]# masterha_check_status --conf=/etc/masterha/appl.cnf 

appl (pid:8300) is running(0:PING_OK), master:192.168.154.156
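
The manager log configured earlier is the place to watch while monitoring runs, for example during the failover simulated in Step 9:

[root@node1 ~]# tail -f /etc/masterha/appl/manager.log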

Stop MHA Manager monitoring:

[root@node1 ~]# masterha_stop --conf=/etc/masterha/appl.cnf 

Stopped appl successfully.

[1]+  Exit 1                  nohup masterha_manager --conf=/etc/masterha/appl.cnf > /etc/masterha/appl/manager.log 2>&1


Step 9: Simulate a master crash (make sure masterha_manager is running again, as in Step 8, before this test)

[root@node2 ~]# service mysqld stop

Stopping mysqld:                                           [  OK  ]

Check the remaining slave (node4): its master has changed to 192.168.154.130, the candidate master.

...

mysql> show slave status\G

*************************** 1. row ***************************

               Slave_IO_State: Waiting for master to send event

                  Master_Host: 192.168.154.130

                  Master_User: slave

                  Master_Port: 3306

                Connect_Retry: 60

              Master_Log_File: slave-bin.000003

          Read_Master_Log_Pos: 106

               Relay_Log_File: relay-bin.000002

                Relay_Log_Pos: 251

        Relay_Master_Log_File: slave-bin.000003

             Slave_IO_Running: Yes

            Slave_SQL_Running: Yes


