This article mainly analyzes why the sql thread shows the System lock state. I have not yet studied the master-slave replication code systematically (that is one of my goals for 2018, and sadly my 2018 is already fully booked), so if there are mistakes please point them out; this also serves as a note for my later study. I also list the causes of replication delay that I currently know of and the way the delay is calculated.
In fact, every time the show slave status command is issued, the server calls the function show_slave_status_send_data to compute the delay on the spot; the delay is not stored anywhere. The stack frames are as follows:
#0  show_slave_status_send_data (thd=0x7fffd8000cd0, mi=0x38ce2e0, io_gtid_set_buffer=0x7fffd800eda0 "e859a28b-b66d-11e7-8371-000c291f347d:42-100173", sql_gtid_set_buffer=0x7fffd8011ac0 "e859a28b-b66d-11e7-8371-000c291f347d:1-100173") at /mysql/mysql-5.7.17/sql/rpl_slave.cc:3602
#1  0x0000000001867749 in show_slave_status (thd=0x7fffd8000cd0) at /mysql/mysql-5.7.17/sql/rpl_slave.cc:3982
#2  0x0000000001867bfa in show_slave_status_cmd (thd=0x7fffd8000cd0) at /mysql/mysql-5.7.17/sql/rpl_slave.cc:4102
The calculation is basically this piece of code:
time_diff= ((long)(time(0) - mi->rli->last_master_timestamp) - mi->clock_diff_with_master);
A brief explanation: time(0) is the current time on the slave, last_master_timestamp comes from the timestamp in the common header of the event the sql thread is currently processing, and clock_diff_with_master is the clock difference between the slave and the master measured when the IO thread connects.
So the timestamp in the event's common header and the slave's local time are the deciding factors. Because time(0) increases every time the command is issued, the delay keeps growing even if the timestamp in the event's common header stays the same.
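To make the arithmetic concrete, here is a small sketch with made-up numbers; none of the values below come from a real server, they only illustrate the formula above:

-- Hypothetical values, only to illustrate the formula:
--   time(0)                = 1513135200  (current time on the slave)
--   last_master_timestamp  = 1513135186  (timestamp in the common header of the event being applied)
--   clock_diff_with_master = 2           (slave clock minus master clock, measured when the IO thread connects)
SELECT (1513135200 - 1513135186) - 2 AS Seconds_Behind_Master;   -- returns 12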
There is also a well-known piece of pseudocode:
/*
  The pseudo code to compute Seconds_Behind_Master:
    if (SQL thread is running)
    {
      if (SQL thread processed all the available relay log)
      {
        if (IO thread is running)
          print 0;
        else
          print NULL;
      }
      else
        compute Seconds_Behind_Master;
    }
    else
      print NULL;
*/
It also comes from the function show_slave_status_send_data; if you are interested, take a look yourself, I will not explain it further here.
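As a small illustration of the pseudocode, this is roughly how the three cases can be observed on a test slave (just a sketch of the commands; only the value of Seconds_Behind_Master is of interest):

-- SQL thread and IO thread both running and the relay log fully applied: Seconds_Behind_Master = 0
SHOW SLAVE STATUS\G

-- Stop only the IO thread; once the SQL thread has applied all available relay log,
-- Seconds_Behind_Master becomes NULL even though the SQL thread is still running.
STOP SLAVE IO_THREAD;
SHOW SLAVE STATUS\G

-- Stop the SQL thread: Seconds_Behind_Master is NULL as well.
STOP SLAVE SQL_THREAD;
SHOW SLAVE STATUS\G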
For this part you can also refer to the Taobao kernel monthly report (淘寶內核月報).
I noticed that some friends have questions about this part, so here is a simple explanation.
Of course, when the binlog is written to the binlog file, or when it is transferred to the slave, has nothing to do with when the events themselves are generated.
Below is the typical event lifecycle of a small transaction:
>Gtid Event:Pos:234(0Xea) N_pos:299(0X12b) Time:1513135186 Event_size:65(bytes) Gtid:31704d8a-da74-11e7-b6bf-52540a7d243:100009 last_committed=0 sequence_number=1
-->Query Event:Pos:299(0X12b) N_Pos:371(0X173) Time:1513135186 Event_size:72(bytes) Exe_time:0 Use_db:test Statment(35b-trun):BEGIN /*!Trx begin!*/ Gno:100009
---->Map Event:Pos371(0X173) N_pos:415(0X19f) Time:1513135186 Event_size:44(bytes) TABLE_ID:108 DB_NAME:test TABLE_NAME:a Gno:100009
------>Insert Event:Pos:415(0X19f) N_pos:455(0X1c7) Time:1513135186 Event_size:40(bytes) Dml on table: test.a table_id:108 Gno:100009
>Xid Event:Pos:455(0X1c7) N_Pos:486(0X1e6) Time:1513135186 Event_size:31(bytes) COMMIT; /*!Trx end*/ Gno:100009
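For reference, an event sequence like the one above is roughly what a single-row insert produces under row-based replication with GTID enabled; the statement below is only a sketch (the inserted value is made up, only the table test.a comes from the dump above):

USE test;
BEGIN;                      -- corresponds to the Query event (BEGIN) in the binlog
INSERT INTO a VALUES (1);   -- produces the Map (Table_map) event and the Insert (Write_rows) event
COMMIT;                     -- produces the Xid event; the Gtid event for the whole transaction is also written at commit time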
This part summarizes the causes of delay that I currently know of:
All of these are causes I have run into myself. Next I want to analyze how the System lock state arises on the slave.
The problem mainly showed up on the slaves of our production databases. On some databases with a large data volume, we often found the sql thread stuck in the System lock state, roughly as follows:
mysql> show processlist;
+----+-------------+-----------+------+---------+------+----------------------------------+------------------+
| Id | User        | Host      | db   | Command | Time | State                            | Info             |
+----+-------------+-----------+------+---------+------+----------------------------------+------------------+
|  3 | root        | localhost | test | Sleep   |  426 |                                  | NULL             |
|  4 | system user |           | NULL | Connect | 5492 | Waiting for master to send event | NULL             |
|  5 | system user |           | NULL | Connect |  104 | System lock                      | NULL             |
|  6 | root        | localhost | test | Query   |    0 | starting                         | show processlist |
+----+-------------+-----------+------+---------+------+----------------------------------+------------------+
The official documentation describes this state as follows:
The thread has called mysql_lock_tables() and the thread state has not been updated since. This is a very general state that can occur for many reasons. For example, the thread is going to request or is waiting for an internal or external system lock for the table. This can occur when InnoDB waits for a table-level lock during execution of LOCK TABLES. If this state is being caused by requests for external locks and you are not using multiple mysqld servers that are accessing the same MyISAM tables, you can disable external system locks with the --skip-external-locking option. However, external locking is disabled by default, so it is likely that this option will have no effect. For SHOW PROFILE, this state means the thread is requesting the lock (not waiting for it).
Obviously that does not answer my question, and for a while I was stuck. Then today, while testing on a slave where I had manually taken a row lock that conflicted with the sql thread, I ran into this state again, so I did the following analysis with gdb. I hope it is useful to you; it also serves as a note for my later study.
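For anyone who wants to repeat that test, this is roughly what I mean by manually taking a conflicting row lock; it is only a sketch, and the table test.a, the column names id/c1 and the value 1 are all made up:

-- Session 1 on the slave: hold a row lock that the sql thread will later need.
BEGIN;
SELECT * FROM test.a WHERE id = 1 FOR UPDATE;

-- On the master: modify the same row so that a row event is sent to the slave.
UPDATE test.a SET c1 = c1 + 1 WHERE id = 1;

-- Session 2 on the slave: per the analysis below, the sql thread blocks on the InnoDB row lock
-- while its state still shows System lock.
SHOW PROCESSLIST;

-- Session 1: release the lock so replication can continue.
ROLLBACK;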
Here I give the cause directly, for quick reference:
Necessary conditions:
A large number of small transactions, such as UPDATE/DELETE ... WHERE statements that touch a single row of data, i.e. transactions containing only one row DML event, where the table involved is a large table.
If a lot of these tables have no primary key or unique key, you can consider changing the parameter slave_rows_search_algorithms and see whether it helps. But in InnoDB, not using a primary key, or choosing the primary key badly, is pretty much suicide.
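A minimal sketch of both options; the parameter values are the ones documented for MySQL 5.7, and big_table / id are hypothetical names:

-- On the slave: check and change the row search algorithm used by the sql thread.
SHOW GLOBAL VARIABLES LIKE 'slave_rows_search_algorithms';        -- default in 5.7 is 'TABLE_SCAN,INDEX_SCAN'
SET GLOBAL slave_rows_search_algorithms = 'INDEX_SCAN,HASH_SCAN';

-- The better long-term fix is still to give the big table a proper primary key:
ALTER TABLE big_table ADD PRIMARY KEY (id);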
The analysis below is the conclusion I reached by stepping through the code with gdb, so it may contain errors.
We know that all these states are states of the MySQL upper layer; for the state to change, THD::enter_stage must be called. System lock is the state entered by calling mysql_lock_tables, and the slave SQL_THREAD has another important state, reading event from the relay log.
Here is a very small part of the handle_slave_sql function in rpl_slave.cc, mainly to support my analysis.
/* Read queries from the IO/THREAD until this thread is killed */
while (!sql_slave_killed(thd,rli))    // the big loop
{
  THD_STAGE_INFO(thd, stage_reading_event_from_the_relay_log);
  // the "reading event from the relay log" state is entered here
  if (exec_relay_log_event(thd,rli))
  // exec_relay_log_event first calls next_event to read one event and then calls lock_tables,
  // but if this is not the first call to lock_tables it does not need to call mysql_lock_tables again.
  // When lock_tables does call mysql_lock_tables, the state is set to System lock, and execution
  // then goes down into the InnoDB layer to locate and modify the data.
}
I also specifically asked 印風 at Alibaba to confirm this: mysql_lock_tables is the function through which MyISAM implements its table lock; for InnoDB it only sets a shared lock.
Here let us set aside the query event / map event and so on, and consider only the DML events:
-> enter the reading event from the relay log state
-> read one event (see the next_event function)
-> enter the System lock state
-> the InnoDB layer searches for and modifies the data
-> enter the reading event from the relay log state
-> read one event (see the next_event function)
-> the InnoDB layer searches for and modifies the data
-> enter the reading event from the relay log state
-> read one event (see the next_event function)
-> the InnoDB layer searches for and modifies the data
.... and so on until all events of this transaction have been executed
So we can see that for a small transaction the sql thread searches for and modifies the data while it is in the System lock state, which is how I reached my conclusion; likewise, if the sql thread is blocked by a lock at the InnoDB layer, it is blocked while still showing the System lock state. For a big transaction it is different: the same underlying problem occurs, but the state shown is reading event from the relay log. So if you see System lock, you should generally consider the causes given earlier; however, those causes do not necessarily show up as System lock, it depends on whether the transaction is big.
The following part lists the breakpoints and stack frames I used when running gdb; it is mainly for my own reference.
mysql_lock_tables: this function changes the state to System lock
gdb print: p tables[0]->s->table_name
THD::enter_stage: this function changes the state
gdb print: p new_stage->m_name
ha_innobase::index_read: the InnoDB interface for locating data
gdb print: p index->table_name
ha_innobase::delete_row: the InnoDB interface for deleting data
exec_relay_log_event: fetches an event and applies it
gdb print: p ev->get_type_code()
#0  THD::enter_stage (this=0x7fffec000970, new_stage=0x2ccd180, old_stage=0x0, calling_func=0x2216fd0 "mysql_lock_tables", calling_file=0x22167d8 "/mysql/mysql-5.7.17/sql/lock.cc", calling_line=323) at /mysql/mysql-5.7.17/sql/sql_class.cc:731
#1  0x00000000017451a6 in mysql_lock_tables (thd=0x7fffec000970, tables=0x7fffec005e38, count=1, flags=0) at /mysql/mysql-5.7.17/sql/lock.cc:323
#2  0x00000000014fe8da in lock_tables (thd=0x7fffec000970, tables=0x7fffec012b70, count=1, flags=0) at /mysql/mysql-5.7.17/sql/sql_base.cc:6630
#3  0x00000000014fe321 in open_and_lock_tables (thd=0x7fffec000970, tables=0x7fffec012b70, flags=0, prelocking_strategy=0x7ffff14e2360) at /mysql/mysql-5.7.17/sql/sql_base.cc:6448
#4  0x0000000000eee1d2 in open_and_lock_tables (thd=0x7fffec000970, tables=0x7fffec012b70, flags=0) at /mysql/mysql-5.7.17/sql/sql_base.h:477
#5  0x000000000180e7c5 in Rows_log_event::do_apply_event (this=0x7fffec024790, rli=0x393b9c0) at /mysql/mysql-5.7.17/sql/log_event.cc:10626
#6  0x00000000017f7b7b in Log_event::apply_event (this=0x7fffec024790, rli=0x393b9c0) at /mysql/mysql-5.7.17/sql/log_event.cc:3324
#7  0x00000000018690ff in apply_event_and_update_pos (ptr_ev=0x7ffff14e2818, thd=0x7fffec000970, rli=0x393b9c0) at /mysql/mysql-5.7.17/sql/rpl_slave.cc:4709
#8  0x000000000186a7f2 in exec_relay_log_event (thd=0x7fffec000970, rli=0x393b9c0) at /mysql/mysql-5.7.17/sql/rpl_slave.cc:5224   // here you can see how different events are handled differently
#9  0x0000000001870db6 in handle_slave_sql (arg=0x357fc50) at /mysql/mysql-5.7.17/sql/rpl_slave.cc:7332   // this is the main logic of the sql thread
#10 0x0000000001d5442c in pfs_spawn_thread (arg=0x7fffd88fb870) at /mysql/mysql-5.7.17/storage/perfschema/pfs.cc:2188
#11 0x00007ffff7bc7851 in start_thread () from /lib64/libpthread.so.0
#12 0x00007ffff672890d in clone () from /lib64/libc.so.6
#0  ha_innobase::index_read (this=0x7fffec0294c0, buf=0x7fffec0297b0 "\375\311y", key_ptr=0x0, key_len=0, find_flag=HA_READ_AFTER_KEY) at /mysql/mysql-5.7.17/storage/innobase/handler/ha_innodb.cc:8540
#1  0x000000000192126c in ha_innobase::index_first (this=0x7fffec0294c0, buf=0x7fffec0297b0 "\375\311y") at /mysql/mysql-5.7.17/storage/innobase/handler/ha_innodb.cc:9051
#2  0x00000000019214ba in ha_innobase::rnd_next (this=0x7fffec0294c0, buf=0x7fffec0297b0 "\375\311y") at /mysql/mysql-5.7.17/storage/innobase/handler/ha_innodb.cc:9149
#3  0x0000000000f4972c in handler::ha_rnd_next (this=0x7fffec0294c0, buf=0x7fffec0297b0 "\375\311y") at /mysql/mysql-5.7.17/sql/handler.cc:2947
#4  0x000000000180e1a9 in Rows_log_event::do_table_scan_and_update (this=0x7fffec035c20, rli=0x393b9c0) at /mysql/mysql-5.7.17/sql/log_event.cc:10475
#5  0x000000000180f453 in Rows_log_event::do_apply_event (this=0x7fffec035c20, rli=0x393b9c0) at /mysql/mysql-5.7.17/sql/log_event.cc:10941
#6  0x00000000017f7b7b in Log_event::apply_event (this=0x7fffec035c20, rli=0x393b9c0) at /mysql/mysql-5.7.17/sql/log_event.cc:3324
#7  0x00000000018690ff in apply_event_and_update_pos (ptr_ev=0x7ffff14e2818, thd=0x7fffec000970, rli=0x393b9c0) at /mysql/mysql-5.7.17/sql/rpl_slave.cc:4709
#8  0x000000000186a7f2 in exec_relay_log_event (thd=0x7fffec000970, rli=0x393b9c0) at /mysql/mysql-5.7.17/sql/rpl_slave.cc:5224
#9  0x0000000001870db6 in handle_slave_sql (arg=0x357fc50) at /mysql/mysql-5.7.17/sql/rpl_slave.cc:7332
#10 0x0000000001d5442c in pfs_spawn_thread (arg=0x7fffd88fb870) at /mysql/mysql-5.7.17/storage/perfschema/pfs.cc:2188
#11 0x00007ffff7bc7851 in start_thread () from /lib64/libpthread.so.0
#12 0x00007ffff672890d in clone () from /lib64/libc.so.6