
MySQL Cluster Setup (3): MMM High-Availability Architecture

_ivan / 2199 reads

Abstract: In the previous article in this series, on the master-master-slave setup, we saw how to build that topology; this article moves on to a proper high-availability architecture. MMM is written in Perl and is used to manage and monitor dual-master replication; although the architecture has two masters, only one node is allowed to accept writes at any given time. MMM defines two roles, writer and reader, corresponding to read-write and read-only nodes.

In the previous article, MySQL Cluster Setup (2): Master-Master-Slave Mode, we saw how to build a MySQL master-master-slave topology. This article moves on to MySQL high availability proper.

1 About MMM

1.1 Overview

MMM (Master-Master replication Manager for MySQL) is a third-party tool that supports dual-master failover and day-to-day management of a dual-master setup. MMM is written in Perl and manages and monitors dual-master replication; although the architecture has two masters, only one node is allowed to accept writes at any given time.

MMM defines two roles, writer and reader, corresponding to read-write and read-only nodes respectively.

With MMM managing the dual masters, when the writer node goes down (say Master1), MMM automatically removes the read and write VIPs from that node, moves the writer role to Master2, sets read_only = 0 on Master2, and repoints all Slave nodes at Master2.

Besides the dual masters, MMM also manages the Slave nodes: when a slave goes down, falls behind in replication, or hits a replication error, MMM removes that node's VIP until the node recovers.

1.2 Components

MMM consists of two kinds of programs:

monitor: watches the state of the databases in the cluster and issues switchover commands when something goes wrong; it is normally deployed separately from the databases

agent: an agent process running on every MySQL server; it carries out the monitor's commands and does the hands-on probing and service configuration, such as setting VIPs and repointing replication at a new source

The architecture looks like this:

1.3 Failover Process

Using the architecture above, here is the failover flow, assuming Master1 goes down:

The Monitor detects that the connection to Master1 has failed

The Monitor sends a set_offline command to Master1's Agent

If Master1's Agent is still alive, it takes down the write VIP and tries to set read_only=1 on Master1

The Monitor sends a set_online command to Master2

Master2's Agent receives the command and runs select master_pos_wait() to wait until replication has caught up

Master2's Agent brings up the write VIP and sets read_only=0 on Master2

The Monitor sends commands to every Slave node's Agent to change their replication source

Each Slave node starts replicating from the new Master

The flow shows that when the primary fails, MMM switches over automatically with no human intervention. It also exposes a weakness: after a database dies, MMM only switches; it never back-fills data that was lost, so MMM carries a risk of data inconsistency.
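The decision logic behind those steps can be sketched as a toy shell function. This is a simplified illustration only, not MMM's actual code (the real logic lives in the Perl daemon mmm_mond); the check names mirror MMM's ping, mysql, rep_threads and rep_backlog checks:

```shell
# Toy sketch of how a node state is derived from MMM's four checks.
# Illustration only; it follows the decision order MMM documents.
node_state() {
    local ping_ok=$1 mysql_ok=$2 rep_threads_ok=$3 rep_backlog_ok=$4
    if [ "$ping_ok" != "yes" ] || [ "$mysql_ok" != "yes" ]; then
        echo "HARD_OFFLINE"        # ping and/or mysql check failed
    elif [ "$rep_threads_ok" != "yes" ]; then
        echo "REPLICATION_FAIL"    # replication threads are not running
    elif [ "$rep_backlog_ok" != "yes" ]; then
        echo "REPLICATION_DELAY"   # backlog exceeded max_backlog
    else
        echo "ONLINE"
    fi
}

node_state yes no yes yes    # a dead mysqld -> HARD_OFFLINE, roles get moved away
```

Only ONLINE hosts may hold roles, which is why a HARD_OFFLINE verdict immediately triggers the VIP moves described above.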

2 Installing MMM

2.1 Installing with yum

If the servers have Internet access or a suitable yum repository, install with the following commands:

# add the yum repository (skip this step if your default repositories already provide the packages)
yum install epel-release.noarch 
# run on the agent nodes
yum install -y mysql-mmm-agent
# run on the monitor node
yum install -y mysql-mmm-monitor

Running the install pulls in the following packages and dependencies:

mysql-mmm-agent.noarch 0:2.2.1-1.el5
libart_lgpl.x86_64 0:2.3.17-4                                                 
mysql-mmm.noarch 0:2.2.1-1.el5                                                
perl-Algorithm-Diff.noarch 0:1.1902-2.el5                                     
perl-DBD-mysql.x86_64 0:4.008-1.rf                                            
perl-DateManip.noarch 0:5.44-1.2.1                                            
perl-IPC-Shareable.noarch 0:0.60-3.el5                                        
perl-Log-Dispatch.noarch 0:2.20-1.el5                                         
perl-Log-Dispatch-FileRotate.noarch 0:1.16-1.el5                              
perl-Log-Log4perl.noarch 0:1.13-2.el5                                         
perl-MIME-Lite.noarch 0:3.01-5.el5                                            
perl-Mail-Sender.noarch 0:0.8.13-2.el5.1                                      
perl-Mail-Sendmail.noarch 0:0.79-9.el5.1                                      
perl-MailTools.noarch 0:1.77-1.el5                                            
perl-Net-ARP.x86_64 0:1.0.6-2.1.el5                                           
perl-Params-Validate.x86_64 0:0.88-3.el5                                      
perl-Proc-Daemon.noarch 0:0.03-1.el5                                          
perl-TimeDate.noarch 1:1.16-5.el5                                             
perl-XML-DOM.noarch 0:1.44-2.el5                                              
perl-XML-Parser.x86_64 0:2.34-6.1.2.2.1                                       
perl-XML-RegExp.noarch 0:0.03-2.el5                                           
rrdtool.x86_64 0:1.2.27-3.el5                                                 
rrdtool-perl.x86_64 0:1.2.27-3.el5 

For other systems, see the official site for installation instructions.

2.2 Manual Installation

1). Download the package

Go to the MMM download page, Downloads MMM for MySQL, and click the download link, as shown in the figure.

When the download finishes, upload the tarball to the server.

2). Install dependencies
yum install -y wget perl openssl gcc gcc-c++
wget http://xrl.us/cpanm --no-check-certificate
mv cpanm /usr/bin
chmod 755 /usr/bin/cpanm
cat > /root/list << EOF
Algorithm::Diff
Class::Singleton
DBI
DBD::mysql
File::Basename
File::stat
File::Temp
Log::Dispatch
Log::Log4perl
Mail::Send
Net::ARP
Net::Ping
Proc::Daemon
Thread::Queue
Time::HiRes
EOF
 
# install each module with cpanm, one per line
while read -r package
do
    cpanm "$package"
done < /root/list
3). Install
tar -xvf mysql-mmm-2.2.1.tar.gz
cd mysql-mmm-2.2.1
make install
ps: most of the time, database machines are not allowed to reach the Internet; in that case the only option is to download each of the dependency packages above and copy them onto the server
3 Preparing the Database Environment

A master-master-slave cluster was set up before these steps; see the earlier articles in this series for how to build it. The details are as follows.

Node information

IP          OS          Port   MySQL version   Node      Read/write                      Notes
10.0.0.247  CentOS 6.5  3306   5.7.9           Master    read-write                      primary node
10.0.0.248  CentOS 6.5  3306   5.7.9           Standby   read-only, can become writer    standby primary
10.0.0.249  CentOS 6.5  3306   5.7.9           Slave     read-only                       replica node
10.0.0.24   CentOS 6.5  -      -               monitor   -                               MMM Monitor

VIP information

Name     VIP         Type
RW-VIP   10.0.0.237  read-write VIP
RO-VIP1  10.0.0.238  read VIP
RO-VIP2  10.0.0.239  read VIP

Architecture diagram

Reference configuration

Master1

[client]
port = 3306
default-character-set=utf8mb4
socket = /data/mysql_db/test_mmm/mysql.sock

[mysqld]
datadir = /data/mysql_db/test_mmm
basedir = /usr/local/mysql57
tmpdir = /tmp
socket = /data/mysql_db/test_mmm/mysql.sock
pid-file = /data/mysql_db/test_mmm/mysql.pid
skip-external-locking = 1
skip-name-resolve = 1
port = 3306
server_id = 2473306

default-storage-engine = InnoDB
character-set-server = utf8mb4
default_password_lifetime=0

auto_increment_offset = 1
auto_increment_increment = 2

#### log ####
log_timestamps=system
log_bin = /data/mysql_log/test_mmm/mysql-bin
log_bin_index = /data/mysql_log/test_mmm/mysql-bin.index
binlog_format = row
relay_log_recovery=ON
relay_log=/data/mysql_log/test_mmm/mysql-relay-bin
relay_log_index=/data/mysql_log/test_mmm/mysql-relay-bin.index
log_error = /data/mysql_log/test_mmm/mysql-error.log

#### replication ####
log_slave_updates = 1
replicate_wild_ignore_table = information_schema.%,performance_schema.%,sys.%

#### semi sync replication settings #####
plugin_dir=/usr/local/mysql57/lib/plugin
plugin_load = "rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
loose_rpl_semi_sync_master_enabled = 1
loose_rpl_semi_sync_slave_enabled = 1

Master2

[client]
port = 3306
default-character-set=utf8mb4
socket = /data/mysql_db/test_mmm/mysql.sock

[mysqld]
datadir = /data/mysql_db/test_mmm
basedir = /usr/local/mysql57
tmpdir = /tmp
socket = /data/mysql_db/test_mmm/mysql.sock
pid-file = /data/mysql_db/test_mmm/mysql.pid
skip-external-locking = 1
skip-name-resolve = 1
port = 3306
server_id = 2483306

default-storage-engine = InnoDB
character-set-server = utf8mb4
default_password_lifetime=0

auto_increment_offset = 2
auto_increment_increment = 2

#### log ####
log_timestamps=system
log_bin = /data/mysql_log/test_mmm/mysql-bin
log_bin_index = /data/mysql_log/test_mmm/mysql-bin.index
binlog_format = row
relay_log_recovery=ON
relay_log=/data/mysql_log/test_mmm/mysql-relay-bin
relay_log_index=/data/mysql_log/test_mmm/mysql-relay-bin.index
log_error = /data/mysql_log/test_mmm/mysql-error.log

#### replication ####
log_slave_updates = 1
replicate_wild_ignore_table = information_schema.%,performance_schema.%,sys.%

#### semi sync replication settings #####
plugin_dir=/usr/local/mysql57/lib/plugin
plugin_load = "rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
loose_rpl_semi_sync_master_enabled = 1
loose_rpl_semi_sync_slave_enabled = 1
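Note the auto_increment_offset / auto_increment_increment pair: 1/2 on Master1 and 2/2 on Master2, so the two masters can never hand out the same AUTO_INCREMENT id (one allocates odd ids, the other even). A quick arithmetic sketch of the sequences each master would generate:

```shell
# Emit the first `count` AUTO_INCREMENT values for a given offset/increment pair.
gen_ids() {
    offset=$1; increment=$2; count=$3
    ids=""; n=0
    while [ "$n" -lt "$count" ]; do
        ids="$ids$((offset + n * increment)) "
        n=$((n + 1))
    done
    echo "${ids% }"
}

gen_ids 1 2 5   # Master1's sequence: 1 3 5 7 9
gen_ids 2 2 5   # Master2's sequence: 2 4 6 8 10
```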

Slave

[client]
port = 3306
default-character-set=utf8mb4
socket = /data/mysql_db/test_mmm/mysql.sock

[mysqld]
datadir = /data/mysql_db/test_mmm
basedir = /usr/local/mysql57
tmpdir = /tmp
socket = /data/mysql_db/test_mmm/mysql.sock
pid-file = /data/mysql_db/test_mmm/mysql.pid
skip-external-locking = 1
skip-name-resolve = 1
port = 3306
server_id = 2493306

default-storage-engine = InnoDB
character-set-server = utf8mb4
default_password_lifetime=0

read_only=1

#### log ####
log_timestamps=system
log_bin = /data/mysql_log/test_mmm/mysql-bin
log_bin_index = /data/mysql_log/test_mmm/mysql-bin.index
binlog_format = row
relay_log_recovery=ON
relay_log=/data/mysql_log/test_mmm/mysql-relay-bin
relay_log_index=/data/mysql_log/test_mmm/mysql-relay-bin.index
log_error = /data/mysql_log/test_mmm/mysql-error.log

#### replication ####
log_slave_updates = 1
replicate_wild_ignore_table = information_schema.%,performance_schema.%,sys.%

#### semi sync replication settings #####
plugin_dir=/usr/local/mysql57/lib/plugin
plugin_load = "rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
loose_rpl_semi_sync_master_enabled = 1
loose_rpl_semi_sync_slave_enabled = 1
Creating the users

Run the following statements on the primary node to create the MMM accounts; since this is a test environment, the passwords are set the same as the user names.

CREATE USER 'mmm_monitor'@'%'        IDENTIFIED BY 'mmm_monitor';
CREATE USER 'mmm_agent'@'%'          IDENTIFIED BY 'mmm_agent';
GRANT REPLICATION CLIENT                   ON *.* TO 'mmm_monitor'@'%';
GRANT SUPER, REPLICATION CLIENT, PROCESS   ON *.* TO 'mmm_agent'@'%';
FLUSH PRIVILEGES;
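Since these statements run on the primary, replication carries the accounts to the other nodes. You can verify on each node with standard SHOW GRANTS statements:

```sql
-- run on each node to confirm the MMM accounts and their privileges
SHOW GRANTS FOR 'mmm_monitor'@'%';
SHOW GRANTS FOR 'mmm_agent'@'%';
```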
4 Configuring MMM

4.1 Configuration Files

MMM has three configuration files, mmm_agent.conf, mmm_common.conf and mmm_mon.conf, located in /etc/mysql-mmm. To distinguish clusters, i.e. when one server runs several MMM instances, the files can be named mmm_agent_cluster.conf, mmm_common_cluster.conf and mmm_mon_cluster.conf, where cluster is the cluster name.

mmm_common.conf: the common configuration, needed on all MMM nodes

mmm_agent.conf: agent configuration, needed on the MMM Agent nodes

mmm_mon.conf: monitor configuration, needed on the MMM Monitor node

For this setup we name the cluster test_mmm; the concrete configuration follows.

mmm_common

On every node, create /etc/mysql-mmm/mmm_common_test_mmm.conf and fill it in according to your environment:

active_master_role  writer

<host default>
    cluster_interface       eth0                                        # network interface for the VIPs

    agent_port              9989                                        # agent listening port; change the default if several agents run on one host
    mysql_port              3306                                        # MySQL port, 3306 by default

    pid_path                /var/run/mysql-mmm/mmm_agentd_test_mmm.pid  # pid path; must match the init script
    bin_path                /usr/libexec/mysql-mmm                      # bin directory

    replication_user        repl                                        # replication user
    replication_password    repl                                        # replication user's password

    agent_user              mmm_agent                                   # agent user, used to set `read_only` etc.
    agent_password          mmm_agent                                   # agent user's password
</host>

<host cluster01>                            # master1's host name
    ip              10.0.0.247              # master1's IP
    mode            master                  # role: master marks a primary node
    peer            cluster02               # host name of master1's peer, the other half of the dual-master pair
</host>

<host cluster02>                            # master2's host name
    ip              10.0.0.248              # master2's IP
    mode            master                  # role: master marks a primary node
    peer            cluster01               # host name of master2's peer, the other half of the dual-master pair
</host>

<host cluster03>                            # slave's host name
    ip              10.0.0.249              # slave's IP
    mode            slave                   # role: slave marks a replica node
</host>

<role writer>                               # writer role configuration
    hosts           cluster01, cluster02    # host names allowed to take write traffic
    ips             10.0.0.237              # writer VIP
    mode            exclusive               # exclusive: only one node may hold the writer role, i.e. a single write VIP
</role>

<role reader>                                          # reader role configuration
    hosts           cluster01, cluster02, cluster03    # host names allowed to take read traffic
    ips             10.0.0.238, 10.0.0.239             # reader VIPs
    mode            balanced                           # balanced: several hosts can hold this role at once
</role>
mmm_agent

On every agent node, create the file /etc/mysql-mmm/mmm_agent_test_mmm.conf with the following content.

Cluster1

include mmm_common_test_mmm.conf  # the common file written above
this cluster01  # this node's name, matching a host name in the common file

Cluster2

include mmm_common_test_mmm.conf
this cluster02

Cluster3

include mmm_common_test_mmm.conf
this cluster03
mmm_mon

On the monitor node, create /etc/mysql-mmm/mmm_mon_test_mmm.conf and write the monitor configuration:

include mmm_common_test_mmm.conf                                 # the common file written above

<monitor>
    ip               127.0.0.1                                   # listen IP
    port             9992                                        # listen port
    pid_path         /var/run/mysql-mmm/mmm_mond_test_mmm.pid    # PID file location; must match the init script
    bin_path         /usr/libexec/mysql-mmm                      # bin directory
    status_path      /var/lib/mysql-mmm/mmm_mond_test_mmm.status # status file location
    ping_ips         10.0.0.247, 10.0.0.248, 10.0.0.249          # host IPs to monitor, i.e. the MySQL node IPs
    auto_set_online  30                                          # seconds before a recovered host is set online automatically
</monitor>

<host default>
    monitor_user      mmm_monitor             # MySQL account used for monitoring
    monitor_password  mmm_monitor             # password of the monitoring account
</host>

<check default>
    check_period      2       # check interval, in seconds
    trap_period       4       # a check must keep failing for trap_period seconds before the host counts as unreachable
    max_backlog       900     # replication lag above this value sets the host offline
</check>

debug 0                       # whether to enable debug mode

PS1: strip the comments from the configuration above before using it
PS2: if there is only one cluster, you can simply edit the default configuration files

4.2 Init Scripts

After installation, init scripts are generated under /etc/init.d/:

[root@chengqm ~]# ls /etc/init.d/mysql*
/etc/init.d/mysqld  /etc/init.d/mysql-mmm-agent  /etc/init.d/mysql-mmm-monitor
mysql-mmm-agent

Run on every agent node:

cp /etc/init.d/mysql-mmm-agent /etc/init.d/mysql-mmm-agent-test-mmm

Open /etc/init.d/mysql-mmm-agent-test-mmm. If the top of the file looks like this:

CLUSTER=""


#-----------------------------------------------------------------------
# Paths
if [ "$CLUSTER" != "" ]; then
    MMM_AGENTD_BIN="/usr/sbin/mmm_agentd @$CLUSTER"
    MMM_AGENTD_PIDFILE="/var/run/mmm_agentd-$CLUSTER.pid"
else
    MMM_AGENTD_BIN="/usr/sbin/mmm_agentd"
    MMM_AGENTD_PIDFILE="/var/run/mmm_agentd.pid"
fi

echo "Daemon bin: '$MMM_AGENTD_BIN'"
echo "Daemon pid: '$MMM_AGENTD_PIDFILE'"

change it to:

CLUSTER="test_mmm"


#-----------------------------------------------------------------------
# Paths
if [ "$CLUSTER" != "" ]; then
    MMM_AGENTD_BIN="/usr/sbin/mmm_agentd @$CLUSTER"
    MMM_AGENTD_PIDFILE="/var/run/mysql-mmm/mmm_agentd_$CLUSTER.pid"
else
    MMM_AGENTD_BIN="/usr/sbin/mmm_agentd"
    MMM_AGENTD_PIDFILE="/var/run/mysql-mmm/mmm_agentd.pid"
fi

echo "Daemon bin: '$MMM_AGENTD_BIN'"
echo "Daemon pid: '$MMM_AGENTD_PIDFILE'"

If instead the file looks like this:

MMMD_AGENT_BIN="/usr/sbin/mmm_agentd"
MMMD_AGENT_PIDFILE="/var/run/mysql-mmm/mmm_agentd.pid"
LOCKFILE="/var/lock/subsys/mysql-mmm-agent"
prog="MMM Agent Daemon"

change it to:

...
CLUSTER="test_mmm"
MMMD_AGENT_BIN="/usr/sbin/mmm_agentd @$CLUSTER"
MMMD_AGENT_PIDFILE="/var/run/mysql-mmm/mmm_agentd_$CLUSTER.pid"
LOCKFILE="/var/lock/subsys/mysql-mmm-agent_$CLUSTER"
prog="MMM Agent Daemon"
mysql-mmm-monitor

Run on the monitor node:

cp /etc/init.d/mysql-mmm-monitor /etc/init.d/mysql-mmm-monitor-test-mmm

Open /etc/init.d/mysql-mmm-monitor-test-mmm and change the beginning of the file to:

# Cluster name (it can be empty for default cases)
CLUSTER="test_mmm"
LOCKFILE="/var/lock/subsys/mysql-mmm-monitor-${CLUSTER}"
prog="MMM Monitor Daemon"

if [ "$CLUSTER" != "" ]; then
        MMMD_MON_BIN="/usr/sbin/mmm_mond @$CLUSTER"
        MMMD_MON_PIDFILE="/var/run/mysql-mmm/mmm_mond_$CLUSTER.pid"
else 
        MMMD_MON_BIN="/usr/sbin/mmm_mond"
        MMMD_MON_PIDFILE="/var/run/mysql-mmm/mmm_mond.pid"
fi

start() {
...

If your init script differs from the one shown here, adapt to your actual file; just make sure the monitor start command is /usr/sbin/mmm_mond @$CLUSTER and the pid file matches the configuration file.

PS: with a single cluster, the default init script can be used directly
Note: the PID file location in the configuration file must match the one in the init script; if they differ, change one to match the other
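A quick sanity check is to extract the pid path from both files and compare them. The pid_paths_match helper below is a hypothetical sketch, not part of MMM; the demo runs against throwaway sample files so it is self-contained, but in practice you would pass /etc/mysql-mmm/mmm_agent_test_mmm.conf and /etc/init.d/mysql-mmm-agent-test-mmm:

```shell
# Compare pid_path in an MMM config file with MMM_AGENTD_PIDFILE in an init script.
pid_paths_match() {
    conf_pid=$(awk '$1 == "pid_path" {print $2}' "$1")
    init_pid=$(sed -n 's/^MMM_AGENTD_PIDFILE="\(.*\)"/\1/p' "$2")
    [ -n "$conf_pid" ] && [ "$conf_pid" = "$init_pid" ]
}

# Self-contained demo with sample files standing in for the real ones.
conf=$(mktemp); init=$(mktemp)
echo 'pid_path /var/run/mysql-mmm/mmm_agentd_test_mmm.pid' > "$conf"
echo 'MMM_AGENTD_PIDFILE="/var/run/mysql-mmm/mmm_agentd_test_mmm.pid"' > "$init"
if pid_paths_match "$conf" "$init"; then echo "pid paths match"; else echo "pid paths differ"; fi
rm -f "$conf" "$init"
```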

5 Starting MMM

MMM is started in this order:

start the MMM Monitor

start the MMM Agents

To stop MMM, reverse the order.

5.1 Starting the Monitor

Run the start command on the monitor node, for example:

[root@chengqm ~]# /etc/init.d/mysql-mmm-monitor-test-mmm start
Starting MMM Monitor Daemon:                               [  OK  ]

If startup fails, check the MMM logs, located under /var/log/mysql-mmm/.

5.2 Starting the Agents

Run the start command on every agent node, for example:

[root@cluster01 ~]# /etc/init.d/mysql-mmm-agent-test-mmm start
Daemon bin: '/usr/sbin/mmm_agentd @test_mmm'
Daemon pid: '/var/run/mmm_agentd-test_mmm.pid'
Starting MMM Agent daemon... Ok
5.3 Checking MMM Status

On the monitor node, run mmm_control @cluster show to see the state of each node:

[root@chengqm ~]# mmm_control @test_mmm show
  cluster01(10.0.0.247) master/ONLINE. Roles: reader(10.0.0.238), writer(10.0.0.237)
  cluster02(10.0.0.248) master/ONLINE. Roles: reader(10.0.0.239)
  cluster03(10.0.0.249) slave/ONLINE. Roles: 

On the monitor node, run mmm_control @cluster checks all to run the checks against every node:

[root@chengqm ~]# mmm_control @test_mmm checks all
cluster01  ping         [last change: 2018/12/05 20:06:35]  OK
cluster01  mysql        [last change: 2018/12/05 20:23:59]  OK
cluster01  rep_threads  [last change: 2018/12/05 20:24:14]  OK
cluster01  rep_backlog  [last change: 2018/12/05 20:24:14]  OK: Backlog is null
cluster02  ping         [last change: 2018/12/05 20:06:35]  OK
cluster02  mysql        [last change: 2018/12/05 20:23:59]  OK
cluster02  rep_threads  [last change: 2018/12/05 20:24:14]  OK
cluster02  rep_backlog  [last change: 2018/12/05 20:24:14]  OK
cluster03  ping         [last change: 2018/12/05 20:06:35]  OK
cluster03  mysql        [last change: 2018/12/05 20:23:59]  OK
cluster03  rep_threads  [last change: 2018/12/05 20:24:14]  OK
cluster03  rep_backlog  [last change: 2018/12/05 20:24:14]  OK: Backlog is null

Check the VIPs on the Cluster1 host:

[root@cluster01 ~]# ip addr
1: lo:  mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:de:80:33 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.247/16 brd 10.0.255.255 scope global eth0
    inet 10.0.0.238/32 scope global eth0
    inet 10.0.0.237/32 scope global eth0
    inet6 fe80::f816:3eff:fede:8033/64 scope link 
       valid_lft forever preferred_lft forever

The VIPs match what MMM reports.

6 MMM Switchover

MMM supports two kinds of switchover: manual and automatic.

6.1 Moving a Role Directly

The relevant command is mmm_control [@cluster] move_role [writer|reader] host, which assigns a role to a node.

Let's test it.

Current node states:

[root@chengqm ~]# mmm_control @test_mmm show
  cluster01(10.0.0.247) master/ONLINE. Roles: reader(10.0.0.238), writer(10.0.0.237)
  cluster02(10.0.0.248) master/ONLINE. Roles: reader(10.0.0.239)
  cluster03(10.0.0.249) slave/ONLINE. Roles: 

Cluster1 VIP

[mysql@cluster01 ~]$ ip addr
1: lo:  mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:de:80:33 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.247/16 brd 10.0.255.255 scope global eth0
    inet 10.0.0.238/32 scope global eth0
    inet 10.0.0.237/32 scope global eth0
    inet6 fe80::f816:3eff:fede:8033/64 scope link 
       valid_lft forever preferred_lft forever

Master1 read_only status:

[mysql@cluster01 ~]$  /usr/local/mysql57/bin/mysql -S /data/mysql_db/test_mmm/mysql.sock -e "show variables like 'read_only'";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| read_only     | OFF   |
+---------------+-------+

Cluster2 VIP

[mysql@cluster02 ~]$ ip addr
1: lo:  mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:66:7e:e8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.248/16 brd 10.0.255.255 scope global eth0
    inet 10.0.0.239/32 scope global eth0
    inet6 fe80::f816:3eff:fe66:7ee8/64 scope link 
       valid_lft forever preferred_lft forever

Master2 read_only status:

[mysql@cluster02 ~]$ /usr/local/mysql57/bin/mysql -S /data/mysql_db/test_mmm/mysql.sock -e "show variables like 'read_only'";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| read_only     | ON    |
+---------------+-------+

Slave replication source:

[mysql@cluster03 ~]$ /usr/local/mysql57/bin/mysql -S /data/mysql_db/test_mmm/mysql.sock -e "show slave status\G";
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.0.0.247
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
...
....
Performing the switch

Run mmm_control @test_mmm move_role writer cluster02 to switch:

[root@chengqm ~]# mmm_control @test_mmm move_role writer cluster02
OK: Role "writer" has been moved from "cluster01" to "cluster02". Now you can wait some time and check new roles info!
[root@chengqm ~]# mmm_control @test_mmm show
  cluster01(10.0.0.247) master/ONLINE. Roles: reader(10.0.0.238)
  cluster02(10.0.0.248) master/ONLINE. Roles: reader(10.0.0.239), writer(10.0.0.237)
  cluster03(10.0.0.249) slave/ONLINE. Roles: 

Cluster2 VIPs after the switch:

[mysql@cluster02 ~]$ ip addr
1: lo:  mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:66:7e:e8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.248/16 brd 10.0.255.255 scope global eth0
    inet 10.0.0.239/32 scope global eth0
    inet 10.0.0.237/32 scope global eth0
    inet6 fe80::f816:3eff:fe66:7ee8/64 scope link 
       valid_lft forever preferred_lft forever

Master2 read_only status after the switch:

[mysql@cluster02 ~]$ /usr/local/mysql57/bin/mysql -S /data/mysql_db/test_mmm/mysql.sock -e "show variables like 'read_only'";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| read_only     | OFF   |
+---------------+-------+

Slave replication source after the switch:

[mysql@cluster03 ~]$ /usr/local/mysql57/bin/mysql -S /data/mysql_db/test_mmm/mysql.sock -e "show slave status\G";
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.0.0.248
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60

The switchover succeeded.

6.2 Switching with set_online / set_offline

Switchover can also be done with the following two commands:

mmm_control [@cluster] set_offline host takes a node offline

mmm_control [@cluster] set_online host brings a node online

Now, to move the writer role from Master2 back to Master1, we can run:

mmm_control @test_mmm set_offline cluster02
mmm_control @test_mmm set_online cluster02

The end result is the same as before, so it is not shown again here.

6.3 Automatic Failover on Crash

Now let's see the automatic switchover after the Master2 database dies.

kill master2

The MMM monitor log shows the switchover:

[root@chengqm ~]# tail -8 /var/log/mysql-mmm/mmm_mond_test_mmm.log 
2018/12/06 18:09:27  WARN Check "rep_backlog" on "cluster02" is in unknown state! Message: UNKNOWN: Connect error (host = 10.0.0.248:3306, user = mmm_monitor)! Lost connection to MySQL server at "reading initial communication packet", system error: 111
2018/12/06 18:09:30 ERROR Check "mysql" on "cluster02" has failed for 4 seconds! Message: ERROR: Connect error (host = 10.0.0.248:3306, user = mmm_monitor)! Lost connection to MySQL server at "reading initial communication packet", system error: 111
2018/12/06 18:09:31 FATAL State of host "cluster02" changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
2018/12/06 18:09:31  INFO Removing all roles from host "cluster02":
2018/12/06 18:09:31  INFO     Removed role "reader(10.0.0.238)" from host "cluster02"
2018/12/06 18:09:31  INFO     Removed role "writer(10.0.0.237)" from host "cluster02"
2018/12/06 18:09:31  INFO Orphaned role "writer(10.0.0.237)" has been assigned to "cluster01"
2018/12/06 18:09:31  INFO Orphaned role "reader(10.0.0.238)" has been assigned to "cluster01"

Node states:

[root@chengqm ~]# mmm_control @test_mmm show
  cluster01(10.0.0.247) master/ONLINE. Roles: reader(10.0.0.238), reader(10.0.0.239), writer(10.0.0.237)
  cluster02(10.0.0.248) master/HARD_OFFLINE. Roles: 
  cluster03(10.0.0.249) slave/ONLINE. Roles: 

VIPs on Cluster1:

[mysql@cluster01 ~]$ ip addr
1: lo:  mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:de:80:33 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.247/16 brd 10.0.255.255 scope global eth0
    inet 10.0.0.238/32 scope global eth0
    inet 10.0.0.237/32 scope global eth0
    inet6 fe80::f816:3eff:fede:8033/64 scope link 
       valid_lft forever preferred_lft forever

Slave replication source after the failover:

[mysql@cluster03 ~]$ /usr/local/mysql57/bin/mysql -S /data/mysql_db/test_mmm/mysql.sock -e "show slave status\G";
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.0.0.247
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60

So when a database crashes, MMM switches over automatically, giving us high availability.

7 Summary

7.1 Strengths of MMM

MMM manages the primary and standby nodes and makes every node highly available

When a node fails, traffic is switched away automatically, and the node is brought back online automatically once it recovers

7.2 Drawbacks of MMM

Data is easily lost during a master switchover.

The MMM Monitor service is a single point of failure; that is, MMM itself is not highly available, so the monitor must be deployed separately from the databases to avoid losing both at once.

In the author's practical experience:

a master switchover occasionally breaks replication on the slaves (duplicate keys, missing records)

after a crash-triggered failover, the recovered node may have lost data

7.3 Where MMM Fits

Data with relaxed consistency requirements, where losing a small amount is acceptable, such as comments or news-feed data

Read-heavy workloads that want reads load-balanced across all nodes (a later article covers how to do the load balancing)

With that, the MMM high-availability setup is complete.

8 Appendix

8.1 Problems and Solutions

1). Configuration file permissions

Symptom

FATAL Configuration file /etc/mysql-mmm/mmm_agent*.conf is world writable!
FATAL Configuration file /etc/mysql-mmm/mmm_agent*.conf is world readable!

Fix

chmod 640 /etc/mysql-mmm/*
2). Duplicate listeners

Symptom

This error tends to appear when one machine runs several MMM monitor instances:

FATAL Listener: Can't create socket!

Fix

Check the configuration files for port conflicts

Check whether the port is already in use on the machine

3). Wrong network interface

Symptom

FATAL Couldn't configure IP '192.168.1.202' on interface 'em1': undef

Fix

Check the interfaces with ifconfig and correct the interface name in the configuration file
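On newer systems ifconfig may be absent; the interface names (what cluster_interface must match) can also be listed straight from sysfs or with ip link:

```shell
# List network interface names so cluster_interface points at a real device
# (same information as `ifconfig -a` or `ip -o link show`)
ls /sys/class/net
```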

8.2 The Six MMM States and Why They Change

States

online

admin_offline

hard_offline

awaiting_recovery

replication_delay

replication_fail

What they mean:

ONLINE: Host is running without any problems.

ADMIN_OFFLINE: host was set to offline manually.

HARD_OFFLINE: Host is offline (Check ping and/or mysql failed)

AWAITING_RECOVERY: Host is awaiting recovery

REPLICATION_DELAY: replication backlog is too big (Check rep_backlog failed)

REPLICATION_FAIL: replication threads are not running (Check rep_threads failed)

Additional notes

Only hosts with state ONLINE may have roles. When a host switches from ONLINE to any other state, all roles will be removed from it.

A host that was in state REPLICATION_DELAY or REPLICATION_FAIL will be switched back to ONLINE if everything is OK again, unless it is flapping (see Flapping).

A host that was in state HARD_OFFLINE will be switched to AWAITING_RECOVERY if everything is OK again. If its downtime was shorter than 60 seconds and it wasn't rebooted or auto_set_online is > 0 it will be switched back to ONLINE automatically, unless it is flapping (see Flapping again).

Replication backlog or failure on the active master isn't considered to be a problem, so the active master will never be in state REPLICATION_DELAY or REPLICATION_FAIL.

Replication backlog or failure will be ignored on hosts whose peers got ONLINE less than 60 seconds ago (that's the default value of master-connect-retry).

If both checks rep_backlog and rep_threads fail, the state will change to REPLICATION_FAIL.

If auto_set_online is > 0, flapping hosts will automatically be set to ONLINE after flap_duration seconds.

Reference: the official MMM documentation

The copyright of this article belongs to its author; please do not reproduce it without permission. If this article violates any rules, you may contact an administrator to have it removed.

When reproducing, please cite the original URL: http://www.ezyhdfw.cn/yun/17833.html
