
Implementing Failover and Read/Write Splitting for MySQL Group Replication with ProxySQL (Part 2)

Published 2020-08-02

In the previous article, "Implementing Failover and Read/Write Splitting for MySQL Group Replication with ProxySQL (Part 1)", we finished building the MGR + ProxySQL cluster and tested ProxySQL's application-level failover. Next we continue with testing read/write splitting.


Wang Guowei famously described three realms of life:

The first realm: "Last night the west wind withered the green trees; alone I climbed the high tower and gazed to the end of the road at the horizon."
The second realm: "My clothes hang ever looser, yet I have no regrets; for her I would gladly waste away."
The third realm: "A thousand times I searched for her in the crowd; then, suddenly turning my head, I found her there where the lantern light was dim."

As a one-track-minded, mediocre programmer, I have not yet fully grasped these. But I do have some understanding of the three realms of database read/write splitting, so let us take a look at them for MySQL.

The first realm: manual read/write splitting. Reads and writes are separated by IP and port: the application identifies read and write statements itself and sends them to different hosts;
The second realm: regex-based read/write splitting. A routing middleware inspects SQL statements, matches them against regular expressions, and dispatches them to different hosts based on the match;
The third realm: identifying TOP SQL and dispatching the high-load SQL to different hosts.


(I) First realm: manual read/write splitting

Reads and writes are separated by IP and port: the application identifies read and write statements itself, uses different connection settings for each kind, and sends them to different hosts. In ProxySQL, we implement this with listener ports. The steps are as follows:

STEP 1: Configure ProxySQL to listen on two ports, then restart ProxySQL

mysql -uadmin -padmin -h127.0.0.1 -P6032
mysql> SET mysql-interfaces='0.0.0.0:6401;0.0.0.0:6402';
-- save it on disk and restart proxysql
mysql> SAVE MYSQL VARIABLES TO DISK;
mysql> PROXYSQL RESTART;

STEP 2: Configure routing rules that dispatch requests to different hostgroups by port

mysql> INSERT INTO mysql_query_rules (rule_id,active,proxy_port,destination_hostgroup,apply) VALUES (1,1,6401,1,1), (2,1,6402,3,1);
mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
mysql> SAVE MYSQL QUERY RULES TO DISK;

Now requests arriving on port 6401 are forwarded to hostgroup 1 (the writer group), and requests arriving on port 6402 are forwarded to hostgroup 3 (the reader group), which gives us read/write splitting. Whether to connect on port 6401 or 6402 is up to the developers, who must identify each statement as a read or a write themselves.
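In application code, this first realm amounts to maintaining two connection configurations and choosing one by hand at every call site. A minimal sketch (the `pick_dsn` helper and DSN dictionaries are hypothetical; a real application would hand them to a MySQL client library):

```python
# First realm: the developer decides, per statement, which port to use.
# ProxySQL then routes purely by the listener port the client connected to.
WRITE_DSN = {"host": "192.168.10.10", "port": 6401}  # -> hostgroup 1 (writer)
READ_DSN  = {"host": "192.168.10.10", "port": 6402}  # -> hostgroup 3 (readers)

def pick_dsn(is_write: bool) -> dict:
    """Manual routing: the caller must already know whether the SQL writes."""
    return WRITE_DSN if is_write else READ_DSN

# Usage: each call site is tagged explicitly by the developer.
assert pick_dsn(is_write=True)["port"] == 6401
assert pick_dsn(is_write=False)["port"] == 6402
```

The weakness is visible in the sketch: every statement's read/write nature is hard-coded by a human, which is exactly what the later realms automate.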


(II) Second realm: read/write splitting with regular expressions

A routing middleware inspects SQL statements, matches them against regular expressions, and dispatches them to different hosts based on the match. The procedure is as follows:

STEP 1: To avoid interference with the test, delete the previously defined rules

DELETE FROM mysql_query_rules;

STEP 2: Define the new read/write splitting rules

INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply) VALUES(1,1,'^SELECT.*FOR UPDATE$',1,1);
INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply) VALUES(2,1,'^SELECT',3,1);

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;

ProxySQL's routing rules are now:

  • SELECT ... FOR UPDATE statements are routed to hostgroup 1 (the writer group);
  • all other SELECT statements are routed to hostgroup 3 (the reader group);
  • everything else is routed to the default hostgroup, i.e. hostgroup 1.
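ProxySQL evaluates mysql_query_rules in ascending rule_id order, and re_modifiers defaults to CASELESS, so matching is case-insensitive. The routing above behaves like the following simplified model (a sketch for illustration, not ProxySQL's actual matcher; the `route` function is hypothetical):

```python
import re

# Simplified model of the two match_digest rules, checked in rule_id order.
# re.IGNORECASE mirrors ProxySQL's default re_modifiers='CASELESS'.
RULES = [
    (re.compile(r"^SELECT.*FOR UPDATE$", re.IGNORECASE), 1),  # writer hostgroup
    (re.compile(r"^SELECT", re.IGNORECASE), 3),               # reader hostgroup
]
DEFAULT_HOSTGROUP = 1  # unmatched statements go to the default (writer) group

def route(sql: str) -> int:
    """Return the destination hostgroup for a statement (first match wins)."""
    for pattern, hostgroup in RULES:
        if pattern.search(sql):
            return hostgroup
    return DEFAULT_HOSTGROUP
```

Rule order matters: if the plain `^SELECT` rule came first with apply=1, a lower-cased `select ... for update` would never reach the writer group.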



Let us now test the regex-based approach. The whole process is as follows:

(1) Adjust the hostgroup membership before testing

-- By the hostgroup rules, there is at most 1 writer node; the remaining writer candidates go into the backup writer group. Currently 192.168.10.13 is the writer, the other 2 nodes are backup writers, and there are no reader nodes
mysql> select * from mysql_group_replication_hostgroups;
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| writer_hostgroup | backup_writer_hostgroup | reader_hostgroup | offline_hostgroup | active | max_writers | writer_is_also_reader | max_transactions_behind | comment |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| 1                | 2                       | 3                | 4                 | 1      | 1           | 0                     | 100                     | NULL    |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
1 row in set (0.00 sec)

mysql> select * from runtime_mysql_servers;
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 192.168.10.13 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 192.168.10.12 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 192.168.10.11 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
3 rows in set (0.01 sec)


-- Read/write splitting needs reader nodes, so we change the writer_is_also_reader parameter so that the nodes in backup_writer_hostgroup serve as both backup writers and readers
mysql> update  mysql_group_replication_hostgroups set writer_is_also_reader = 2 ;
Query OK, 1 row affected (0.00 sec)

mysql> select * from mysql_group_replication_hostgroups;
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| writer_hostgroup | backup_writer_hostgroup | reader_hostgroup | offline_hostgroup | active | max_writers | writer_is_also_reader | max_transactions_behind | comment |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| 1                | 2                       | 3                | 4                 | 1      | 1           | 2                     | 100                     | NULL    |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
1 row in set (0.00 sec)

mysql> 
mysql> select * from runtime_mysql_group_replication_hostgroups;
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| writer_hostgroup | backup_writer_hostgroup | reader_hostgroup | offline_hostgroup | active | max_writers | writer_is_also_reader | max_transactions_behind | comment |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| 1                | 2                       | 3                | 4                 | 1      | 1           | 0                     | 100                     | NULL    |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
1 row in set (0.00 sec)


-- Load the mysql servers configuration to runtime and persist it to disk
mysql> load mysql servers to runtime;
Query OK, 0 rows affected (0.01 sec)

mysql> save mysql servers to disk;
Query OK, 0 rows affected (0.03 sec)


mysql> select * from runtime_mysql_group_replication_hostgroups;
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| writer_hostgroup | backup_writer_hostgroup | reader_hostgroup | offline_hostgroup | active | max_writers | writer_is_also_reader | max_transactions_behind | comment |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| 1                | 2                       | 3                | 4                 | 1      | 1           | 2                     | 100                     | NULL    |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
1 row in set (0.01 sec)


-- The final hostgroup membership is as follows
mysql> select * from runtime_mysql_servers;
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 192.168.10.13 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 3            | 192.168.10.12 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 3            | 192.168.10.11 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 192.168.10.11 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 192.168.10.12 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
5 rows in set (0.00 sec)
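The hostgroup movement above can be sketched as a tiny placement function (a simplified, hypothetical model covering only the writer_is_also_reader values 0 and 2 demonstrated in this article):

```python
# Simplified model of how the ProxySQL monitor places MGR members into
# hostgroups for writer_is_also_reader = 0 or 2, with max_writers=1.
# Hostgroups as configured above: 1=writer, 2=backup writer, 3=reader.
def place(members, primary, writer_is_also_reader):
    groups = {
        1: [primary],                             # the single allowed writer
        2: [m for m in members if m != primary],  # surplus writer candidates
        3: [],                                    # readers
    }
    if writer_is_also_reader == 2:
        # Backup writers double as readers; the writer itself does not read.
        groups[3] = [m for m in members if m != primary]
    return groups

nodes = ["192.168.10.13", "192.168.10.12", "192.168.10.11"]
groups = place(nodes, "192.168.10.13", writer_is_also_reader=2)
# groups now mirrors the runtime_mysql_servers output above: hostgroup 1
# holds the writer, hostgroups 2 and 3 both hold the other two nodes.
```

With writer_is_also_reader=0 (the initial state), hostgroup 3 stays empty, which is exactly why no SELECT could be offloaded before the change.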


(2) Load the routing rules

-- To avoid interference with the test, delete the previous rules first
DELETE FROM mysql_query_rules;

-- Load the rules
INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply) VALUES(1,1,'^SELECT.*FOR UPDATE$',1,1);
INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply) VALUES(2,1,'^SELECT',3,1);

-- Activate and persist the rules
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;


(3) Verify that the rules take effect

Test SQL statements:

mysql -uusera -p123456 -h192.168.10.10 -P6033 

-- Write test
insert into testdb.test01 values(3,'c');

-- Read test
SELECT * from testdb.test01;

-- Regex case-sensitivity test
select * from testdb.test01;

-- SELECT ... FOR UPDATE test
SELECT * from testdb.test01 FOR UPDATE;
select * from testdb.test01 FOR UPDATE;

exit;

To see which host ProxySQL dispatched each statement to, query the statistics tables stats_mysql_query_digest and stats_mysql_query_digest_reset. The two tables have identical structure and content, but querying stats_mysql_query_digest_reset automatically resets the internal counters to zero, so after such a query the data in both tables is cleared. Here we query stats_mysql_query_digest_reset directly to inspect the test above:

mysql> select hostgroup,schemaname,username,digest_text,count_star from   stats_mysql_query_digest_reset;
+-----------+--------------------+----------+----------------------------------------+------------+
| hostgroup | schemaname         | username | digest_text                            | count_star |
+-----------+--------------------+----------+----------------------------------------+------------+
| 1         | information_schema | usera    | SELECT * from testdb.test01 FOR UPDATE | 1          |
| 3         | information_schema | usera    | select * from testdb.test01            | 1          |
| 3         | information_schema | usera    | SELECT * from testdb.test01            | 1          |
| 1         | information_schema | usera    | select * from testdb.test01 FOR UPDATE | 1          |
| 1         | information_schema | usera    | insert into testdb.test01 values(?,?)  | 1          |
| 1         | information_schema | usera    | select @@version_comment limit ?       | 1          |
+-----------+--------------------+----------+----------------------------------------+------------+
6 rows in set (0.00 sec)

As we can see, the regex rules are case-insensitive, and each statement was dispatched to the host its matching rule specified.

Personally I find regex-based routing quite smart already, yet ProxySQL officially advises against it: we cannot accurately know the cost of each type of SQL statement, so traffic may end up unevenly distributed.

Next, let us look at the approach ProxySQL recommends: read/write splitting based on regular expressions and digests.


(III) Third realm: read/write splitting with regular expressions and digests

ProxySQL's recommended procedure for setting up effective read/write splitting is:
(1) configure ProxySQL to send all traffic, writes and reads alike, to a single MySQL master node;
(2) check stats_mysql_query_digest for the most expensive SELECT statements;
(3) decide which of those expensive statements should move to the reader nodes;
(4) configure mysql_query_rules so that only those expensive SELECT statements are sent to the readers.

In short, the idea is simple: send to the reader nodes only the SQL you actually want there, rather than every SELECT statement.

Let us walk through the whole process:

STEP 1: Remove the rules so that all SQL runs on the default hostgroup

mysql> delete from mysql_query_rules;
Query OK, 2 rows affected (0.00 sec)

mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

mysql> SAVE MYSQL QUERY RULES TO DISK;
Query OK, 0 rows affected (0.01 sec)


STEP 2: Find the most expensive SQL

Assuming all reads and writes have been running on the same machine for a while, with a representative read/write mix, we can query stats_mysql_query_digest for the most expensive SQL from several angles.

(1) The 5 SELECT statements with the highest total execution time

mysql> SELECT digest,SUBSTR(digest_text,0,25),count_star,sum_time FROM stats_mysql_query_digest WHERE digest_text LIKE 'SELECT%' ORDER BY sum_time DESC LIMIT 5;
+--------------------+--------------------------+------------+----------+
| digest             | SUBSTR(digest_text,0,25) | count_star | sum_time |
+--------------------+--------------------------+------------+----------+
| 0xBF001A0C13781C1D | SELECT c FROM sbtest1 WH | 9594       | 9837782  |
| 0xC4771449056AB3AC | SELECT c FROM sbtest14 W | 9984       | 9756595  |
| 0xD84E4E04982951C1 | SELECT c FROM sbtest9 WH | 9504       | 9596185  |
| 0x9B090963F41AD781 | SELECT c FROM sbtest10 W | 9664       | 9530433  |
| 0x9AF59B998A3688ED | SELECT c FROM sbtest2 WH | 9744       | 9513180  |
+--------------------+--------------------------+------------+----------+
5 rows in set (0.00 sec)


(2) The 5 most frequently executed SELECT statements

mysql> SELECT digest,SUBSTR(digest_text,0,25),count_star,sum_time FROM stats_mysql_query_digest WHERE digest_text LIKE 'SELECT%' ORDER BY count_star DESC LIMIT 5;
+--------------------+--------------------------+------------+----------+
| digest             | SUBSTR(digest_text,0,25) | count_star | sum_time |
+--------------------+--------------------------+------------+----------+
| 0xC4771449056AB3AC | SELECT c FROM sbtest14 W | 9984       | 9756595  |
| 0x9AF59B998A3688ED | SELECT c FROM sbtest2 WH | 9744       | 9513180  |
| 0x9B090963F41AD781 | SELECT c FROM sbtest10 W | 9664       | 9530433  |
| 0x03744DC190BC72C7 | SELECT c FROM sbtest5 WH | 9604       | 9343514  |
| 0x1E7B7AC5611F30C2 | SELECT c FROM sbtest6 WH | 9594       | 9245838  |
+--------------------+--------------------------+------------+----------+


(3) The 5 SELECT statements with the highest average execution time

mysql> SELECT digest,SUBSTR(digest_text,0,25),count_star,sum_time, sum_time/count_star as avg_time FROM stats_mysql_query_digest WHERE digest_text LIKE 'SELECT%' ORDER BY avg_time DESC LIMIT 5;
+--------------------+--------------------------+------------+----------+----------+
| digest             | SUBSTR(digest_text,0,25) | count_star | sum_time | avg_time |
+--------------------+--------------------------+------------+----------+----------+
| 0x0DCAF47B4A363A7A | SELECT * from testdb.tes | 1          | 11400    | 11400    |
| 0x2050E81DB9C7038E | select * from testdb.tes | 1          | 10817    | 10817    |
| 0xF340A73F6EDA5B20 | SELECT c FROM sbtest11 W | 964        | 1726994  | 1791     |
| 0xC867A28C90150A81 | SELECT DISTINCT c FROM s | 929        | 1282699  | 1380     |
| 0x283AA9863F85EFC8 | SELECT DISTINCT c FROM s | 963        | 1318362  | 1369     |
+--------------------+--------------------------+------------+----------+----------+


(4) The 5 SELECT statements with the highest average execution time, restricted to those averaging more than 1 s, together with each statement's share of total execution time

SELECT  digest,SUBSTR(digest_text,0,25),count_star,sum_time,sum_time/count_star as avg_time,round(sum_time/1000000*100/(SELECT sum(sum_time/1000000) FROM stats_mysql_query_digest ),3) as pct 
FROM    stats_mysql_query_digest 
WHERE   digest_text LIKE 'SELECT%' 
AND     sum_time/count_star > 1000000
ORDER BY avg_time DESC LIMIT 5;

Note: this query was tested against data generated by sysbench. One statement had such a large sum_time that sum(sum_time) overflowed and returned NULL, so as a workaround the query converts sum_time to seconds (sum_time/1000000) before summing.
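To make the arithmetic concrete, the sketch below reproduces the avg_time and pct columns in Python, using three rows taken from the tables above (the pct values here are relative to just these three rows, not the full workload), and applies the same scale-before-summing trick described in the note:

```python
# Sample rows from the stats above: (digest, count_star, sum_time in microseconds).
rows = [
    ("0xF340A73F6EDA5B20", 964, 1_726_994),
    ("0xC867A28C90150A81", 929, 1_282_699),
    ("0x0DCAF47B4A363A7A", 1, 11_400),
]

# Scale each sum_time to seconds BEFORE summing -- the workaround from the
# note above, which keeps the accumulated total small enough not to overflow.
total_seconds = sum(t / 1_000_000 for _, _, t in rows)

report = [
    (digest,
     sum_time / count_star,                                  # avg_time, in us
     round(sum_time / 1_000_000 * 100 / total_seconds, 3))   # pct of total
    for digest, count_star, sum_time in rows
]
# The pct column sums to (about) 100 across all digests considered.
```

In Python the scaling is only about keeping numbers tidy, but in the admin interface's SQLite-backed aggregation it is what kept sum(sum_time) from blowing up.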


STEP 3: Route with digests combined with regular expressions

First, observe the traffic distribution before any routing rules are in place: all traffic goes to hostgroup 1

mysql>  select hostgroup,schemaname,username,digest_text,count_star from   stats_mysql_query_digest_reset;
+-----------+--------------------+----------+---------------------------------------------------------------------+------------+
| hostgroup | schemaname         | username | digest_text                                                         | count_star |
+-----------+--------------------+----------+---------------------------------------------------------------------+------------+
| 1         | information_schema | usera    | SET PROFILING = ?                                                   | 1          |
| 1         | information_schema | usera    | SHOW DATABASES                                                      | 3          |
| 1         | information_schema | usera    | SHOW VARIABLES LIKE ?;                                              | 2          |
| 1         | information_schema | usera    | SET NAMES utf8mb4                                                   | 3          |
| 1         | tssysbench         | usera    | INSERT INTO sbtest15 (id, k, c, pad) VALUES (?, ?, ?, ?)            | 1285       |
| 1         | tssysbench         | usera    | INSERT INTO sbtest14 (id, k, c, pad) VALUES (?, ?, ?, ?)            | 1309       |
| 1         | tssysbench         | usera    | INSERT INTO sbtest13 (id, k, c, pad) VALUES (?, ?, ?, ?)            | 1303       |
| 1         | tssysbench         | usera    | INSERT INTO sbtest12 (id, k, c, pad) VALUES (?, ?, ?, ?)            | 1240       |
| 1         | tssysbench         | usera    | UPDATE sbtest3 SET k=k+? WHERE id=?                                 | 1280       |
| 1         | tssysbench         | usera    | UPDATE sbtest2 SET k=k+? WHERE id=?                                 | 1280       |
| 1         | tssysbench         | usera    | UPDATE sbtest1 SET k=k+? WHERE id=?                                 | 1219       |
| 1         | tssysbench         | usera    | SELECT DISTINCT c FROM sbtest15 WHERE id BETWEEN ? AND ? ORDER BY c | 1207       |
| 1         | tssysbench         | usera    | SELECT DISTINCT c FROM sbtest14 WHERE id BETWEEN ? AND ? ORDER BY c | 1262       |
| 1         | tssysbench         | usera    | SELECT DISTINCT c FROM sbtest11 WHERE id BETWEEN ? AND ? ORDER BY c | 1227       |


Insert the routing rules:

-- Insert rules keyed by digest to match specific SQL statements
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(1,1,'0x0DCAF47B4A363A7A',3,1);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(2,1,'0x63F9BD89D906209B',3,1);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(3,1,'0x10D8D9CC551E199B',3,1);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(4,1,'0xC867A28C90150A81',3,1);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(5,1,'0x283AA9863F85EFC8',3,1);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(6,1,'0x16BD798E66615299',3,1);

-- Insert a regex rule to match all SELECT COUNT(*) statements
INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply) VALUES(7,1,'^SELECT COUNT\(\*\)',3,1);


-- Activate and persist the rules
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
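Conceptually these rules extend the earlier regex model with exact digest matching: a rule with a `digest` value matches only statements whose digest hash equals it, while `match_digest` still applies a regex. A simplified, hypothetical sketch modeling two of the rules above:

```python
import re

# Each rule: (rule_id, exact_digest, match_digest_regex, destination_hostgroup).
# Modeled on rules 1 and 7 inserted above; evaluated in rule_id order.
RULES = [
    (1, "0x0DCAF47B4A363A7A", None, 3),
    (7, None, re.compile(r"^SELECT COUNT\(\*\)", re.IGNORECASE), 3),
]
DEFAULT_HOSTGROUP = 1

def route(digest: str, digest_text: str) -> int:
    """First matching rule wins (all these rules have apply=1)."""
    for _, want_digest, pattern, hostgroup in RULES:
        if want_digest is not None and digest == want_digest:
            return hostgroup
        if pattern is not None and pattern.search(digest_text):
            return hostgroup
    return DEFAULT_HOSTGROUP
```

The digest rules pin down exactly the expensive statements found in STEP 2, while everything else, including all other SELECTs, stays on the writer by default.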


STEP 4: Run queries with sysbench, then check the traffic distribution again. The statements matching the routing rules now execute on hostgroup 3.

mysql> select hostgroup,schemaname,username,digest_text,count_star from   stats_mysql_query_digest_reset;
+-----------+------------+----------+---------------------------------------------------------------------+------------+
| hostgroup | schemaname | username | digest_text                                                         | count_star |
+-----------+------------+----------+---------------------------------------------------------------------+------------+
| 1         | tssysbench | usera    | UPDATE sbtest3 SET k=k+? WHERE id=?                                 | 863        |
| 1         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest14 WHERE id BETWEEN ? AND ? ORDER BY c | 841        |
| 3         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest13 WHERE id BETWEEN ? AND ? ORDER BY c | 765        |
| 1         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest12 WHERE id BETWEEN ? AND ? ORDER BY c | 837        |
| 3         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest11 WHERE id BETWEEN ? AND ? ORDER BY c | 813        |
| 1         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest10 WHERE id BETWEEN ? AND ? ORDER BY c | 861        |
| 1         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest9 WHERE id BETWEEN ? AND ? ORDER BY c  | 835        |
| 3         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest8 WHERE id BETWEEN ? AND ? ORDER BY c  | 823        |
| 1         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest6 WHERE id BETWEEN ? AND ? ORDER BY c  | 834        |
| 1         | tssysbench | usera    | UPDATE sbtest5 SET c=? WHERE id=?                                   | 870        |
| 3         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest4 WHERE id BETWEEN ? AND ? ORDER BY c  | 802        |
| 1         | tssysbench | usera    | UPDATE sbtest1 SET c=? WHERE id=?                                   | 835        |
| 1         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest3 WHERE id BETWEEN ? AND ? ORDER BY c  | 838        |
| 1         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest2 WHERE id BETWEEN ? AND ? ORDER BY c  | 885        |

At this point we have achieved traffic distribution based on load.





================================================================================================================

Appendix 1: the read/write splitting routing rules table

The routing rules are stored in the mysql_query_rules table, whose schema is:

CREATE TABLE mysql_query_rules (
    rule_id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 0,
    username VARCHAR,
    schemaname VARCHAR,
    flagIN INT CHECK (flagIN >= 0) NOT NULL DEFAULT 0,
    client_addr VARCHAR,
    proxy_addr VARCHAR,
    proxy_port INT,
    digest VARCHAR,
    match_digest VARCHAR,
    match_pattern VARCHAR,
    negate_match_pattern INT CHECK (negate_match_pattern IN (0,1)) NOT NULL DEFAULT 0,
    re_modifiers VARCHAR DEFAULT 'CASELESS',
    flagOUT INT CHECK (flagOUT >= 0),
    replace_pattern VARCHAR CHECK(CASE WHEN replace_pattern IS NULL THEN 1 WHEN replace_pattern IS NOT NULL AND match_pattern IS NOT NULL THEN 1 ELSE 0 END),
    destination_hostgroup INT DEFAULT NULL,
    cache_ttl INT CHECK(cache_ttl > 0),
    cache_empty_result INT CHECK (cache_empty_result IN (0,1)) DEFAULT NULL,
    reconnect INT CHECK (reconnect IN (0,1)) DEFAULT NULL,
    timeout INT UNSIGNED,
    retries INT CHECK (retries>=0 AND retries <=1000),
    delay INT UNSIGNED,
    next_query_flagIN INT UNSIGNED,
    mirror_flagOUT INT UNSIGNED,
    mirror_hostgroup INT UNSIGNED,
    error_msg VARCHAR,
    OK_msg VARCHAR,
    sticky_conn INT CHECK (sticky_conn IN (0,1)),
    multiplex INT CHECK (multiplex IN (0,1,2)),
    gtid_from_hostgroup INT UNSIGNED,
    log INT CHECK (log IN (0,1)),
    apply INT CHECK(apply IN (0,1)) NOT NULL DEFAULT 0,
    comment VARCHAR)

The important columns are:

  • rule_id: the rule's id and primary key, unique and non-null; rules are matched in ascending rule_id order;
  • active: whether the rule is enabled; 1 means enabled;
  • username: match traffic from a specific user;
  • client_addr: match traffic from a specific client address;
  • proxy_addr: match incoming traffic on a specific local IP;
  • proxy_port: match incoming traffic on a specific local port, as in the port-based splitting scheme above;
  • digest: match queries with a specific digest; each distinct SQL text generates a unique digest hash (similar to Oracle's sql_id), and matching is done against that hash;
  • match_digest: match the query digest text against a regular expression;
  • match_pattern: match the query text against a regular expression;
  • destination_hostgroup: route matched queries to this hostgroup. This happens unless there is an active transaction and the logged-in user has transaction_persistent set to 1 (see the mysql_users table);
  • cache_ttl: how long the query result stays in the cache (in milliseconds);
  • timeout: the maximum execution time in milliseconds for a matched or rewritten query; a query running longer than this threshold is automatically killed. If unspecified, the global variable mysql-default_query_timeout applies;
  • retries: the number of times to re-execute a query after detecting a failure during execution;
  • apply: if set to 1, no further query rules are evaluated after this one matches.
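The interplay between rule_id ordering and the apply flag can be sketched as follows (a simplified, hypothetical model; real ProxySQL also supports flagIN/flagOUT rule chaining, which is omitted here):

```python
# Each rule: (rule_id, predicate, destination_hostgroup, apply).
# Rules are scanned in rule_id order; a match records the destination,
# and apply=1 stops further evaluation while apply=0 keeps scanning.
def evaluate(rules, sql, default_hostgroup=1):
    destination = default_hostgroup
    for _, predicate, hostgroup, apply in sorted(rules):
        if predicate(sql):
            if hostgroup is not None:
                destination = hostgroup
            if apply == 1:
                break
    return destination

rules = [
    (1, lambda s: s.upper().startswith("SELECT"), 3, 0),   # apply=0: keep going
    (2, lambda s: "FOR UPDATE" in s.upper(), 1, 1),        # apply=1: final
]
```

With apply=0 on the first rule, a plain SELECT lands on hostgroup 3, but a SELECT ... FOR UPDATE is re-routed to hostgroup 1 by the later rule; this is why all the rules in this article set apply=1 to make each match final.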


Appendix 2: sysbench commands used in this test

-- Prepare phase
sysbench /usr/share/sysbench/oltp_read_write.lua  --mysql-host=192.168.10.10  --mysql-port=6033 --mysql-user=usera  --mysql-password='123456' --mysql-db=tssysbench --db-driver=mysql  --tables=15  --table-size=50000  --report-interval=10 --threads=4  --time=120 prepare

-- Run phase
sysbench /usr/share/sysbench/oltp_read_write.lua  --mysql-host=192.168.10.10  --mysql-port=6033 --mysql-user=usera  --mysql-password='123456' --mysql-db=tssysbench --db-driver=mysql  --tables=15  --table-size=500000  --report-interval=10 --threads=4 --time=120 run

-- Cleanup phase
sysbench /usr/share/sysbench/oltp_read_write.lua  --mysql-host=192.168.10.10  --mysql-port=6033 --mysql-user=usera  --mysql-password='123456' --mysql-db=tssysbench --db-driver=mysql  --tables=15  --table-size=500000  --report-interval=10 --threads=4 --time=120 cleanup


[End]
