MySQL Performance Optimization -- SQL Statement Optimization, Index Optimization, Database Structure Optimization, System Configuration Optimization, Server Hardware Optimization
I. SQL Statement Optimization
1-1. The MySQL Slow Query Log
1). Enabling the slow log and its storage format
How do you find problematic SQL? Use the MySQL slow query log to monitor statements with performance issues.
Preparation
mysql> show variables like '%log_queri%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| log_queries_not_using_indexes | OFF   |
+-------------------------------+-------+

# Log queries that do not use indexes
mysql> set global log_queries_not_using_indexes=on;
Query OK, 0 rows affected (0.03 sec)
mysql> show variables like 'slow_query_log';
+----------------+-------+
| Variable_name  | Value |
+----------------+-------+
| slow_query_log | OFF   |
+----------------+-------+

# Enable the slow query log
mysql> set global slow_query_log=on;
Query OK, 0 rows affected (0.32 sec)
# Queries that run longer than long_query_time are written to the slow log.
# Note: the unit is seconds, so the default 10.000000 means 10 seconds (not 10 milliseconds).
mysql> show variables like 'long_query_time';
+-----------------+-----------+
| Variable_name   | Value     |
+-----------------+-----------+
| long_query_time | 10.000000 |
+-----------------+-----------+
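Keep in mind that SET GLOBAL changes are lost when mysqld restarts. To make the slow-log settings permanent they would normally go into the MySQL configuration file. The snippet below is only a minimal sketch: the file path assumes a Debian/Ubuntu layout, the log file path reuses the one shown later in this section, and the 1-second threshold is just a common starting point rather than anything from the original article.

# /etc/mysql/my.cnf (path is an assumption; adjust for your installation)
[mysqld]
slow_query_log                = 1
slow_query_log_file           = /var/lib/mysql/ubuntu-slow.log
long_query_time               = 1    # seconds
log_queries_not_using_indexes = 1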
mysql> use sakila;
Database changed

mysql> show tables;
23 rows in set (0.00 sec)

# Check where the slow log file is stored
mysql> show variables like 'slow_query_log_file%';
+---------------------+--------------------------------+
| Variable_name       | Value                          |
+---------------------+--------------------------------+
| slow_query_log_file | /var/lib/mysql/ubuntu-slow.log |
+---------------------+--------------------------------+

mysql> select * from store limit 10;
+----------+------------------+------------+---------------------+
| store_id | manager_staff_id | address_id | last_update         |
+----------+------------------+------------+---------------------+
|        1 |                1 |          1 | 2006-02-15 04:57:12 |
|        2 |                2 |          2 | 2006-02-15 04:57:12 |
+----------+------------------+------------+---------------------+

changwen@ubuntu:~$ sudo tail -50 /var/lib/mysql/ubuntu-slow.log
/usr/sbin/mysqld, Version: 5.6.30-0ubuntu0.15.10.1 ((Ubuntu)). started with:
Tcp port: 3306  Unix socket: /var/run/mysqld/mysqld.sock
Time                 Id Command    Argument
# Time: 160908  0:47:01
# User@Host: root[root] @ localhost []  Id:     2
# Query_time: 0.019114  Lock_time: 0.000065 Rows_sent: 2  Rows_examined: 2
use sakila;
SET timestamp=1473320821;
select * from store limit 10;

This is the storage format of the slow query log: each entry records when the query ran (# Time), who ran it and from where (# User@Host), how long it took, how long it waited for locks, how many rows it sent and examined (# Query_time ...), and finally the SQL statement itself.
2). Slow log analysis tool: mysqldumpslow
changwen@ubuntu:~$ mysqldumpslow --help
changwen@ubuntu:~$ sudo mysqldumpslow -t 3 /var/lib/mysql/ubuntu-slow.log | more
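mysqldumpslow ships with the MySQL server, so nothing extra needs to be installed; the -t 3 option above limits the report to the top 3 queries. A slightly more useful invocation, sketched below, also sorts the report; -s t (sort by total query time) and -s c (sort by count) are standard mysqldumpslow options, and the log path simply reuses the one from the earlier output.

# Top 3 statements sorted by total query time; use -s c to sort by number of executions instead
changwen@ubuntu:~$ sudo mysqldumpslow -s t -t 3 /var/lib/mysql/ubuntu-slow.log | more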
3). Slow log analysis tool: pt-query-digest
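The original section stops at the heading, so here is a minimal sketch of how pt-query-digest is typically used. It belongs to the Percona Toolkit rather than to MySQL itself, so it has to be installed separately; the package name and the log path below are assumptions based on the Ubuntu environment used throughout this article.

# Install Percona Toolkit (the package name may differ between distributions)
changwen@ubuntu:~$ sudo apt-get install percona-toolkit

# Analyze the slow log; the report groups queries by fingerprint and ranks them by total time
changwen@ubuntu:~$ sudo pt-query-digest /var/lib/mysql/ubuntu-slow.log | more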
4). Using EXPLAIN to inspect and analyze the execution plan of a SQL statement
mysql> explain select customer_id, first_name, last_name from customer;
+----+-------------+----------+------+---------------+------+---------+------+------+-------+
| id | select_type | table    | type | possible_keys | key  | key_len | ref  | rows | Extra |
+----+-------------+----------+------+---------------+------+---------+------+------+-------+
|  1 | SIMPLE      | customer | ALL  | NULL          | NULL | NULL    | NULL |  599 | NULL  |
+----+-------------+----------+------+---------------+------+---------+------+------+-------+
1 row in set (0.94 sec)

Here type: ALL means a full table scan, key: NULL means no index is used, and rows is the optimizer's estimate of how many rows it has to examine (599, i.e. the whole customer table).
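For contrast, it helps to EXPLAIN a query that can use an index. The statement below is not from the original article; it assumes the standard sakila schema, where customer_id is the primary key of customer, so the plan should report key: PRIMARY with type: const instead of a full scan.

mysql> explain select customer_id, first_name, last_name from customer where customer_id = 10;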
5). Optimizing Count() and Max()
Querying the most recent payment time -- optimizing the max() function
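The EXPLAIN output the next sentence refers to did not survive the original formatting, so the command below is a reconstruction (an assumption, not the author's verbatim session): it simply runs EXPLAIN on the max() query against sakila.payment before any extra index exists, which on a standard sakila load shows a full scan of roughly sixteen thousand rows.

mysql> explain select max(payment_date) from payment \G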
You can see that more than ten thousand rows have to be examined just to return a single value, which is not good. Optimize it as follows:
mysql> create index idx_paydate on payment(payment_date);

mysql> explain select max(payment_date) from payment \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: NULL
         type: NULL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: NULL
        Extra: Select tables optimized away
1 row in set (0.03 sec)

With an index on payment_date, the Extra column reports "Select tables optimized away": MySQL reads the maximum value directly from the end of the index and never touches the table data.
Optimizing count(): count(*) counts every row, while count(expr) counts only the rows where expr is not NULL. A small test table makes the difference clear:

mysql> create table t(id int);
Query OK, 0 rows affected (0.62 sec)

mysql> insert into t values(1),(2),(null);
Query OK, 3 rows affected (0.20 sec)
Records: 3  Duplicates: 0  Warnings: 0

mysql> select count(*), count(id), count(id='2'), count(id='2' or null) from t;
+----------+-----------+---------------+-----------------------+
| count(*) | count(id) | count(id='2') | count(id='2' or null) |
+----------+-----------+---------------+-----------------------+
|        3 |         2 |             2 |                     1 |
+----------+-----------+---------------+-----------------------+
1 row in set (0.00 sec)

count(id='2') is 2 because the comparison evaluates to 0 or 1 for both non-NULL rows and both results are counted; count(id='2' or null) is 1 because "0 or null" evaluates to NULL, so only the row that actually matches is counted. This trick makes it possible to count the films of 2006 and 2007 in a single statement:

mysql> select count(release_year='2006' or null) as '2006',
    ->        count(release_year='2007' or null) as '2007'
    -> from film;
+------+------+
| 2006 | 2007 |
+------+------+
|  540 |  160 |
+------+------+
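An equivalent way to get per-condition counts in one pass, not shown in the original article, is to SUM the boolean expressions: in MySQL a comparison evaluates to 1, 0, or NULL, so summing it counts the matching rows. This is offered only as an illustrative alternative to the count(... or null) pattern above.

mysql> select sum(release_year='2006') as '2006',
    ->        sum(release_year='2007') as '2007'
    -> from film;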
6). Optimizing subqueries

A subquery is usually rewritten as a join, but when the joined table contains duplicate key values the join can return duplicate rows that the subquery would not:

mysql> create table t1(tid int);
Query OK, 0 rows affected (0.21 sec)

mysql> insert into t1 values(1),(1);
Query OK, 2 rows affected (0.11 sec)

-- The join returns one row per matching tid, so duplicates appear; adding DISTINCT fixes this
mysql> select t.id from t join t1 on t.id = t1.tid;
+------+
| id   |
+------+
|    1 |
|    1 |
+------+
2 rows in set (0.00 sec)

mysql> select * from t where t.id in (select t1.tid from t1);
+------+
| id   |
+------+
|    1 |
+------+
1 row in set (0.25 sec)
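To make the join return the same result as the IN subquery, the comment above says to add DISTINCT; the original post does not show the resulting statement, so the line below is a minimal sketch of it:

mysql> select distinct t.id from t join t1 on t.id = t1.tid;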
7). Optimizing GROUP BY

mysql> explain select actor.first_name, actor.last_name, count(*) from sakila.film_actor inner join sakila.actor using(actor_id) group by film_actor.actor_id \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: actor
         type: ALL
possible_keys: PRIMARY
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 200
        Extra: Using temporary; Using filesort
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: film_actor
         type: ref
possible_keys: PRIMARY,idx_fk_film_id
          key: PRIMARY
      key_len: 2
          ref: sakila.actor.actor_id
         rows: 13
        Extra: Using index
2 rows in set (0.01 sec)

As the plan shows, this query needs a temporary table (Using temporary) and a file sort (Using filesort), which is inefficient. Optimize it as follows:
mysql> explain select actor.first_name, actor.last_name, c.cnt from sakila.actor inner join (select actor_id, count(*) as cnt from sakila.film_actor group by actor_id) as c using(actor_id) \G
*************************** 1. row ***************************
           id: 1
  select_type: PRIMARY
        table: actor
         type: ALL
possible_keys: PRIMARY
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 200
        Extra: NULL
*************************** 2. row ***************************
           id: 1
  select_type: PRIMARY
        table: <derived2>
         type: ref
possible_keys: <auto_key0>
          key: <auto_key0>
      key_len: 2
          ref: sakila.actor.actor_id
         rows: 27
        Extra: NULL
*************************** 3. row ***************************
           id: 2
  select_type: DERIVED
        table: film_actor
         type: index
possible_keys: PRIMARY,idx_fk_film_id
          key: PRIMARY
      key_len: 4
          ref: NULL
         rows: 5462
        Extra: Using index
3 rows in set (0.00 sec)

By aggregating film_actor inside a derived table first and only then joining the result to actor, the plan no longer shows Using temporary or Using filesort.
8). Optimizing LIMIT queries

LIMIT is usually combined with ORDER BY for pagination, which can easily trigger a full table scan plus a file sort:

mysql> explain select film_id, description from sakila.film order by title limit 50,5;
+----+-------------+-------+------+---------------+------+---------+------+------+----------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra          |
+----+-------------+-------+------+---------------+------+---------+------+------+----------------+
|  1 | SIMPLE      | film  | ALL  | NULL          | NULL | NULL    | NULL | 1000 | Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+------+----------------+

Optimization step 1: ORDER BY an indexed column or the primary key.
mysql> explain select film_id, description from sakila.film order by film_id limit 50,5;
+----+-------------+-------+-------+---------------+---------+---------+------+------+-------+
| id | select_type | table | type  | possible_keys | key     | key_len | ref  | rows | Extra |
+----+-------------+-------+-------+---------------+---------+---------+------+------+-------+
|  1 | SIMPLE      | film  | index | NULL          | PRIMARY | 2       | NULL |   55 | NULL  |
+----+-------------+-------+-------+---------------+---------+---------+------+------+-------+
1 row in set (0.00 sec)

If the query were LIMIT 500,5 instead, rows would be 505: MySQL still has to walk past every skipped row, so deep offsets remain expensive. Optimize further as follows.
Optimization step 2: remember the last primary key returned by the previous page and filter on it in the next query.
mysql> explain select film_id, description from sakila.film where film_id > 55 and film_id <= 60 order by film_id limit 1,5;
+----+-------------+-------+-------+---------------+---------+---------+------+------+-------------+
| id | select_type | table | type  | possible_keys | key     | key_len | ref  | rows | Extra       |
+----+-------------+-------+-------+---------------+---------+---------+------+------+-------------+
|  1 | SIMPLE      | film  | range | PRIMARY       | PRIMARY | 2       | NULL |    5 | Using where |
+----+-------------+-------+-------+---------------+---------+---------+------+------+-------------+

Now only 5 rows are examined no matter how deep the page is. Note that this technique assumes the primary key values are sequential and sorted in order.
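When the primary key does have gaps (for example after deletions), a variant of the same idea still works: instead of computing a fixed id range, keep the largest film_id returned by the previous page and seek past it. The statement below is not from the original article; it is a sketch of that seek-style pagination against the same sakila.film table, where 60 is just a placeholder for whatever the last returned film_id was.

-- 60 stands for the last film_id seen on the previous page (placeholder value)
mysql> select film_id, description
    -> from sakila.film
    -> where film_id > 60
    -> order by film_id
    -> limit 5;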