XiaoFeng MyBatis (3): MyBatis Pagination and Caching
I. MyBatis pagination: logical vs. physical pagination
Logical pagination:
MyBatis's built-in pagination is logical pagination. Suppose the table holds 100 rows and each page shows 10: MyBatis first fetches all 100 rows into memory, then takes 10 of them from memory. Only 10 rows are returned, but performance is poor once the data volume grows; a few thousand or tens of thousands of rows are fine, but beyond that it becomes a problem. It is acceptable for small projects, but real projects with large data sets should not use it.
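Conceptually, logical pagination boils down to the following in-memory slice. This is a plain-Java sketch of the idea (not MyBatis source code; the class and method names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class LogicalPageDemo {
    // Sketch: all rows have already been fetched from the database;
    // take the slice [offset, offset + limit) from memory.
    public static <T> List<T> page(List<T> allRows, int offset, int limit) {
        int from = Math.min(offset, allRows.size());
        int to = Math.min(offset + limit, allRows.size());
        return new ArrayList<T>(allRows.subList(from, to));
    }
}
```

The cost is exactly what the paragraph above describes: `allRows` must contain the entire result set before any slicing happens.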
Physical pagination:
What is used in actual development: the paging is spliced into the SQL itself (e.g. with a LIMIT clause), so the database returns only the requested page.
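The mapper shown later passes a start offset and a page size into the LIMIT clause. A hypothetical helper (the `PageUtil` name and method are not from the original post) that converts a 1-based page number into that start offset:

```java
public class PageUtil {
    // Hypothetical helper: convert a 1-based page number into the
    // start offset used in "limit #{start}, #{size}".
    public static int startOffset(int pageNo, int pageSize) {
        if (pageNo < 1 || pageSize < 1) {
            throw new IllegalArgumentException("pageNo and pageSize must be >= 1");
        }
        return (pageNo - 1) * pageSize;
    }
}
```

For example, page 2 with a page size of 3 yields a start offset of 3, i.e. `limit 3, 3`.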
Existing database records:
1. Logical pagination
1) Test code, StudentTest2.java:
```java
package com.cy.service;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.ibatis.session.RowBounds;
import org.apache.ibatis.session.SqlSession;
import org.apache.log4j.Logger;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import com.cy.mapper.StudentMapper;
import com.cy.model.Student;
import com.cy.util.SqlSessionFactoryUtil;

public class StudentTest2 {
    private static Logger logger = Logger.getLogger(StudentTest2.class);
    private SqlSession sqlSession = null;
    private StudentMapper studentMapper = null;

    @Before
    public void setUp() throws Exception {
        sqlSession = SqlSessionFactoryUtil.openSession();
        studentMapper = sqlSession.getMapper(StudentMapper.class);
    }

    @After
    public void tearDown() throws Exception {
        sqlSession.close();
    }

    /**
     * Logical pagination: all rows are fetched first, then 3 rows
     * starting at offset 0 are taken from memory.
     */
    @Test
    public void findStudent() {
        logger.info("query students with logical pagination");
        int offset = 0; // start position
        int limit = 3;  // page size
        RowBounds rowBound = new RowBounds(offset, limit); // RowBounds carries the paging info
        List<Student> studentList = studentMapper.findStudent(rowBound);
        for (Student student : studentList) {
            System.out.println(student);
        }
    }

    @Test
    public void findStudent2() {
        logger.info("query students with physical pagination");
        Map<String, Object> map = new HashMap<String, Object>();
        map.put("start", 0);
        map.put("size", 3);
        List<Student> studentList = studentMapper.findStudent2(map);
        for (Student student : studentList) {
            System.out.println(student);
        }
    }
}
```
2) The StudentMapper.java interface:
```java
// Logical pagination: RowBounds carries the paging info
public List<Student> findStudent(RowBounds rowBound);

// Physical pagination
public List<Student> findStudent2(Map<String, Object> map);
```
3) The StudentMapper.xml mapping file:
```xml
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
    "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.cy.mapper.StudentMapper">
    <resultMap type="com.cy.model.Student" id="StudentResult">
        <id property="id" column="id"/>
        <result property="name" column="name"/>
        <result property="age" column="age"/>
        <result property="remark" column="remark"/>
    </resultMap>

    <!-- Logical pagination -->
    <select id="findStudent" resultMap="StudentResult">
        select * from t_student
    </select>

    <!-- Physical pagination -->
    <select id="findStudent2" parameterType="Map" resultMap="StudentResult">
        select * from t_student
        <if test="start != null and size != null">
            limit #{start}, #{size}
        </if>
    </select>
</mapper>
```
Console output:
II. MyBatis caching
When to use caching:
When concurrency is high and the workload is mostly reads, caching works very well (the server should have more memory): performance improves, and the load on the database drops.
Configuring the second-level cache:
1)StudentMapper.xml:
```xml
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
    "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.cy.mapper.StudentMapper">
    <!--
        1. size: the maximum number of elements the cache can hold; default is 1024.
        2. flushInterval: the cache flush interval, in milliseconds.
        3. eviction: the cache eviction policy; default is LRU (least recently used);
           FIFO (first in, first out) is also available.
        4. readOnly: defaults to false; if true, the cache is read-only.
    -->
    <cache size="1024" flushInterval="60000" eviction="LRU" readOnly="false"/>

    <resultMap type="com.cy.model.Student" id="StudentResult">
        <id property="id" column="id"/>
        <result property="name" column="name"/>
        <result property="age" column="age"/>
        <result property="remark" column="remark"/>
    </resultMap>

    <select id="findStudents" resultMap="StudentResult" flushCache="false" useCache="true">
        select * from t_student
    </select>

    <insert id="insertStudent" parameterType="Student" flushCache="true">
        insert into t_student values(null, #{name}, #{age}, #{pic}, #{remark});
    </insert>
</mapper>
```
select:
useCache: defaults to true, i.e. the cache is used.
flushCache: whether to flush the cache; false means the cache is not flushed.
insert:
flushCache: defaults to true, i.e. the cache is flushed; update and delete also default flushCache to true.
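The default eviction policy mentioned above is LRU. To make that idea concrete, here is a minimal plain-Java LRU sketch built on LinkedHashMap's access-order mode; it is illustrative only and is not how MyBatis's own LruCache is implemented:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch: the least recently accessed entry is
// evicted once the cache exceeds maxSize.
public class LruCacheDemo<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public LruCacheDemo(int maxSize) {
        super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // evict the eldest entry when over capacity
    }
}
```

For example, with capacity 2, putting "a" and "b", reading "a", then putting "c" evicts "b", since "b" is the least recently used entry.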
See the earlier post on the MyBatis second-level cache; the model class apparently has to implement Serializable:
```java
public class Student implements Serializable {
    ...
}
```
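A fuller sketch of what that model class might look like. The field names are inferred from the mapper's resultMap and insert statement; the constructor, getters, and the type of `pic` are assumptions, since the original post elides the class body:

```java
import java.io.Serializable;

// Sketch of the model (field names inferred from the mapper above).
// Serializable is required so cached instances can be serialized
// by the second-level cache.
public class Student implements Serializable {
    private static final long serialVersionUID = 1L;

    private Integer id;
    private String name;
    private Integer age;
    private byte[] pic;    // type assumed
    private String remark;

    public Student() {}

    public Student(Integer id, String name, Integer age, String remark) {
        this.id = id;
        this.name = name;
        this.age = age;
        this.remark = remark;
    }

    public Integer getId() { return id; }
    public String getName() { return name; }
    public Integer getAge() { return age; }
    public String getRemark() { return remark; }
}
```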