
MySQL execution plan analysis, three ways to look: explain, profiling, optimizer_trace

http://blog.csdn.net/xj626852095/article/details/52767963

step 1

Use explain to view the execution plan. Since MySQL 5.6 you can add a parameter, explain format=json xxx, to output the information in JSON format.
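For example (a minimal sketch; the table and WHERE clause are simply borrowed from the case discussed later in this post):

explain select count(*) from t_audit_operate_log where Fcreate_time>=1407081600 and Fcreate_time<=1407427199;
# the JSON form exposes more detail about how each table is accessed
explain format=json select count(*) from t_audit_operate_log where Fcreate_time>=1407081600 and Fcreate_time<=1407427199;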

 

step 2 

Use profiling to list in detail the time spent in each execution stage. The prerequisite is to run the statement once first.

 

# Enable profiling
SET profiling = 1;
SHOW VARIABLES LIKE '%profiling%';
# View the queue of profiled statements
SHOW PROFILES;
# View the statistics for a given statement
SHOW PROFILE BLOCK IO, CPU FOR QUERY 3;
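Putting it together, a minimal profiling session might look like this (a sketch only; the query is the one from the case below, and query id 3 is illustrative, so check SHOW PROFILES for the real id):

SET profiling = 1;
# run the statement to be analyzed once
select count(*) from t_audit_operate_log where Fcreate_time>=1407081600 and Fcreate_time<=1407427199;
# list recent statements with their Query_ID and duration
SHOW PROFILES;
# per-stage CPU and block I/O for that Query_ID
SHOW PROFILE BLOCK IO, CPU FOR QUERY 3;
# turn profiling off again when done
SET profiling = 0;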


step 3  

 

Optimizer trace is a new feature added in MySQL 5.6 that exposes a large amount of information produced while the internal query plan is being built. First enable the setting, then execute the SQL once, and finally look at the contents of `information_schema`.`OPTIMIZER_TRACE`.

 

# Enable the trace
SET optimizer_trace='enabled=on';
# The maximum memory depends on the actual workload and can be left at its default
SET OPTIMIZER_TRACE_MAX_MEM_SIZE=1000000;
SET END_MARKERS_IN_JSON=ON;
SET optimizer_trace_limit = 1;
SHOW VARIABLES LIKE '%optimizer_trace%';
# After executing the SQL in question, query this table to see the detailed optimization process
SELECT * FROM `information_schema`.`OPTIMIZER_TRACE`;
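If only the trace body is of interest, it can help to select just the TRACE column and check for truncation (a sketch; the column names are those of information_schema.OPTIMIZER_TRACE):

SELECT TRACE, MISSING_BYTES_BEYOND_MAX_MEM_SIZE
  FROM `information_schema`.`OPTIMIZER_TRACE`;
# a non-zero MISSING_BYTES_BEYOND_MAX_MEM_SIZE means optimizer_trace_max_mem_size was too small
# when finished, switch the trace off to avoid the overhead
SET optimizer_trace='enabled=off';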

 

 

 

MySQL choosing the wrong index, and a detailed analysis of the OPTIMIZER_TRACE format

http://blog.csdn.net/melody_mr/article/details/48950601

1. The table structure is as follows:

CREATE TABLE t_audit_operate_log (
  Fid bigint(16) AUTO_INCREMENT,
  Fcreate_time int(10) unsigned NOT NULL DEFAULT '0',
  Fuser varchar(50) DEFAULT '',
  Fip bigint(16) DEFAULT NULL,
  Foperate_object_id bigint(20) DEFAULT '0',
  PRIMARY KEY (Fid),
  KEY indx_ctime (Fcreate_time),
  KEY indx_user (Fuser),
  KEY indx_objid (Foperate_object_id),
  KEY indx_ip (Fip)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Run the query:

mysql> explain select count(*) from t_audit_operate_log where Fuser='XX@XX.com' and Fcreate_time>=1407081600 and Fcreate_time<=1407427199\G

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: t_audit_operate_log
         type: ref
possible_keys: indx_ctime,indx_user
          key: indx_user
      key_len: 153
          ref: const
         rows: 2007326
        Extra: Using where

 

We can see that an unsuitable index was chosen, which is far from ideal, so we force a specific index instead:

mysql> explain select count(*) from t_audit_operate_log use index(indx_ctime) where Fuser='XX@XX.com' and Fcreate_time>=1407081600 and Fcreate_time<=1407427199\G

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: t_audit_operate_log
         type: range
possible_keys: indx_ctime
          key: indx_ctime
      key_len: 5
          ref: NULL
         rows: 670092
        Extra: Using where

In actual execution time, the latter is close to 10 times faster than the former.

Question: strangely, why does the optimizer not choose the indx_ctime index, and instead pick indx_user, which obviously scans far more rows?

Comparing the amount of data behind the two indexes, i.e. the selectivity of the two conditions:

select count(*) from t_audit_operate_log where Fuser='XX@XX.com';
+----------+
| count(*) |
+----------+
|  1238382 |
+----------+

select count(*) from t_audit_operate_log where Fcreate_time>=1407254400 and Fcreate_time<=1407427199;
+----------+
| count(*) |
+----------+
|   198920 |
+----------+

Clearly, using index indx_ctime would be better than indx_user, yet MySQL chose indx_user. Why?
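Before looking at the trace, it is worth comparing the real counts above with the statistics the optimizer actually sees; a quick sketch (Cardinality is only an estimate maintained by InnoDB sampling, and ANALYZE TABLE refreshes it):

SHOW INDEX FROM t_audit_operate_log;   -- the Cardinality column is the optimizer's estimate of distinct values per index
ANALYZE TABLE t_audit_operate_log;     -- refresh the statistics if they look badly out of date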

So we turn to OPTIMIZER_TRACE to dig further.

 

2. Walking through the OPTIMIZER_TRACE output

Using this case as the example, here is a brief walkthrough of the OPTIMIZER_TRACE process.

How to view OPTIMIZER_TRACE:

1. set optimizer_trace='enabled=on';    --- enable the trace

2. set optimizer_trace_max_mem_size=1000000;    --- set the trace size

3. set end_markers_in_json=on;    --- add comment markers to the trace

4. select * from information_schema.optimizer_trace\G    --- view the trace (a complete session following these steps is sketched below)
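Putting the four steps together with the query from this case (a sketch; run everything in one connection, since the trace is kept per session), the trace shown below is what the final SELECT returns:

SET optimizer_trace='enabled=on';
SET optimizer_trace_max_mem_size=1000000;
SET end_markers_in_json=on;
select count(*) from t_audit_operate_log where Fuser='XX@XX.com' and Fcreate_time>=1407081600 and Fcreate_time<=1407427199;
select * from information_schema.optimizer_trace\G
SET optimizer_trace='enabled=off';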

 

{
  "steps": [
    {
      "join_preparation": {                  --- optimization preparation work
        "select#": 1,
        "steps": [
          {
            "expanded_query": "/* select#1 */ select count(0) AS `count(*)` from `t_audit_operate_log` where ((`t_audit_operate_log`.`Fuser` = 'XX@XX.com') and (`t_audit_operate_log`.`Fcreate_time` >= 1407081600) and (`t_audit_operate_log`.`Fcreate_time` <= 1407427199))"
          }
        ] /* steps */
      } /* join_preparation */
    },
    {
      "join_optimization": {                 --- the main optimization phase, consisting of a logical and a physical optimization stage
        "select#": 1,
        "steps": [                           --- main optimization work, logical optimization stage
          {
            "condition_processing": {        --- logical optimization: condition simplification
              "condition": "WHERE",
              "original_condition": "((`t_audit_operate_log`.`Fuser` = 'XX@XX.com') and (`t_audit_operate_log`.`Fcreate_time` >= 1407081600) and (`t_audit_operate_log`.`Fcreate_time` <= 1407427199))",
              "steps": [
                {
                  "transformation": "equality_propagation",        --- logical optimization: condition simplification, equality propagation
                  "resulting_condition": "((`t_audit_operate_log`.`Fuser` = 'XX@XX.com') and (`t_audit_operate_log`.`Fcreate_time` >= 1407081600) and (`t_audit_operate_log`.`Fcreate_time` <= 1407427199))"
                },
                {
                  "transformation": "constant_propagation",        --- logical optimization: condition simplification, constant propagation
                  "resulting_condition": "((`t_audit_operate_log`.`Fuser` = 'XX@XX.com') and (`t_audit_operate_log`.`Fcreate_time` >= 1407081600) and (`t_audit_operate_log`.`Fcreate_time` <= 1407427199))"
                },
                {
                  "transformation": "trivial_condition_removal",   --- logical optimization: condition simplification, trivial condition removal
                  "resulting_condition": "((`t_audit_operate_log`.`Fuser` = 'XX@XX.com') and (`t_audit_operate_log`.`Fcreate_time` >= 1407081600) and (`t_audit_operate_log`.`Fcreate_time` <= 1407427199))"
                }
              ] /* steps */
            } /* condition_processing */
          },                                 --- logical optimization: condition simplification ends
          {
            "table_dependencies": [          --- logical optimization: work out the dependencies between tables; not a directly usable optimization
              {
                "table": "`t_audit_operate_log`",
                "row_may_be_null": false,
                "map_bit": 0,
                "depends_on_map_bits": [
                ] /* depends_on_map_bits */
              }
            ] /* table_dependencies */
          },
          {
            "ref_optimizer_key_uses": [      --- logical optimization: find the candidate indexes
              {
                "table": "`t_audit_operate_log`",
                "field": "Fuser",
                "equals": "'XX@XX.com'",
                "null_rejecting": false
              }
            ] /* ref_optimizer_key_uses */
          },
          {
            "rows_estimation": [             --- logical optimization: estimate the number of rows per table; the cost of a full table scan and of an index scan on each index is estimated
              {
                "table": "`t_audit_operate_log`",
                "range_analysis": {
                  "table_scan": {            --- logical optimization: estimated rows and cost of a full table scan on this table
                    "rows": 8150516,
                    "cost": 1.73e6
                  } /* table_scan */,
                  "potential_range_indices": [          --- logical optimization: list the candidate indexes; later versions rename this to potential_range_indexes
                    {
                      "index": "PRIMARY",               --- logical optimization: the primary key index is not usable here
                      "usable": false,
                      "cause": "not_applicable"
                    },
                    {
                      "index": "indx_ctime",            --- logical optimization: index indx_ctime
                      "usable": true,
                      "key_parts": [
                        "Fcreate_time",
                        "Fid"
                      ] /* key_parts */
                    },
                    {
                      "index": "indx_user",             --- logical optimization: index indx_user
                      "usable": true,
                      "key_parts": [
                        "Fuser",
                        "Fid"
                      ] /* key_parts */
                    },
                    {
                      "index": "indx_objid",            --- logical optimization: index
                      "usable": false,
                      "cause": "not_applicable"
                    },
                    {
                      "index": "indx_ip",               --- logical optimization: index
                      "usable": false,
                      "cause": "not_applicable"
                    }
                  ] /* potential_range_indices */,
                  "setup_range_conditions": [           --- logical optimization: if there are pushable conditions, consider the range query together with them
                  ] /* setup_range_conditions */,
                  "group_index_range": {                --- logical optimization: with GROUP BY or DISTINCT, check whether an index can optimize them, including the MIN/MAX case
                    "chosen": false,
                    "cause": "not_group_by_or_distinct"
                  } /* group_index_range */,
                  "analyzing_range_alternatives": {     --- logical optimization: start computing the cost of a range scan on each index (an equality comparison is a special case of a range scan)
                    "range_scan_alternatives": [
                      {
                        "index": "indx_ctime",          --- [A]
                        "ranges": [
                          "1407081600 <= Fcreate_time <= 1407427199"
                        ] /* ranges */,
                        "index_dives_for_eq_ranges": true,
                        "rowid_ordered": false,
                        "using_mrr": true,
                        "index_only": false,
                        "rows": 688362,
                        "cost": 564553,                 --- logical optimization: this index has the lowest cost
                        "chosen": true                  --- logical optimization: this index has the lowest cost and is chosen (cheaper than the table_scan above and the other indexes)
                      },
                      {
                        "index": "indx_user",
                        "ranges": [
                          "XX@XX.com <= Fuser <= XX@XX.com"
                        ] /* ranges */,
                        "index_dives_for_eq_ranges": true,
                        "rowid_ordered": true,
                        "using_mrr": true,
                        "index_only": false,
                        "rows": 1945894,
                        "cost": 1.18e6,
                        "chosen": false,
                        "cause": "cost"
                      }
                    ] /* range_scan_alternatives */,
                    "analyzing_roworder_intersect": {
                      "usable": false,
                      "cause": "too_few_roworder_scans"
                    } /* analyzing_roworder_intersect */
                  } /* analyzing_range_alternatives */, --- logical optimization: the per-index range-scan costing ends here
                  "chosen_range_access_summary": {      --- logical optimization: summarize the best range access plan found in this stage
                    "range_access_plan": {
                      "type": "range_scan",
                      "index": "indx_ctime",
                      "rows": 688362,
                      "ranges": [
                        "1407081600 <= Fcreate_time <= 1407427199"
                      ] /* ranges */
                    } /* range_access_plan */,
                    "rows_for_plan": 688362,
                    "cost_for_plan": 564553,
                    "chosen": true                      -- both cost and rows here are much smaller than for indx_user --- same as at [A], this is just a summary
                  } /* chosen_range_access_summary */
                } /* range_analysis */
              }
            ] /* rows_estimation */                     --- logical optimization: per-table row estimation ends
          },
          {
            "considered_execution_plans": [             --- physical optimization: start of the physical optimization for the (multi-table) join
              {
                "plan_prefix": [
                ] /* plan_prefix */,
                "table": "`t_audit_operate_log`",
                "best_access_path": {
                  "considered_access_paths": [
                    {
                      "access_type": "ref",             --- physical optimization: compute the cost of a ref lookup on index indx_user
                      "index": "indx_user",
                      "rows": 1.95e6,
                      "cost": 683515,
                      "chosen": true
                    },                                  --- physical optimization: all usable indexes should be compared here, i.e. several entries of the same shape but with different index names, yet they are missing; presumably a bug, not every index is iterated
                    {
                      "access_type": "range",           --- physical optimization: presumably this corresponds to indx_ctime (no live instance to debug against; inferred by comparing with a 5.7 trace)
                      "rows": 516272,
                      "cost": 702225,                   --- physical optimization: the cost is higher than the ref path's 683515, so it is not chosen
                      "chosen": false                   -- the cost is much higher than what we saw above, while rows barely changed --- physical optimization: this index is not chosen
                    }
                  ] /* considered_access_paths */
                } /* best_access_path */,
                "cost_for_plan": 683515,                --- physical optimization: summary of the result obtained in the best_access_path stage
                "rows_for_plan": 1.95e6,
                "chosen": true                          -- surprisingly the cost is much smaller than what we saw above, although rows barely changed --- physical optimization: summary of the best_access_path result
              }
            ] /* considered_execution_plans */
          },
          {
            "attaching_conditions_to_tables": {         --- logical optimization: attach conditions to their corresponding tables where possible
            } /* attaching_conditions_to_tables */
          },
          {
            "refine_plan": [
              {
                "table": "`t_audit_operate_log`"        --- logical optimization: push down index conditions ("pushed_index_condition"); attach the remaining conditions to the table as filter conditions ("table_condition_attached")
              }
            ] /* refine_plan */
          }
        ] /* steps */
      } /* join_optimization */                         --- logical and physical optimization end here
    },
    {
      "join_explain": {} /* join_explain */
    }
  ] /* steps */
}

 

 

3. Another similar problem
An example of a single-table scan fetching data from an index via ref vs. range:
http://blog.163.com/li_hx/blog/static/183991413201461853637715/

 


4. How to resolve the problem

When a single table has several candidate indexes, in versions before MySQL 5.6.20 you may need to force the index manually to get the best plan.
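In other words, rewrite the statement with USE INDEX or FORCE INDEX, as was done above (a sketch; FORCE INDEX is stronger than USE INDEX in that it tells the optimizer to avoid a table scan unless the named index cannot be used at all):

select count(*) from t_audit_operate_log force index(indx_ctime) where Fuser='XX@XX.com' and Fcreate_time>=1407081600 and Fcreate_time<=1407427199;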
