Enterprise Private Cloud Shared Storage: Installing and Using Ceph on CentOS 7

Storage is a key component of any cloud infrastructure, so this post mainly walks through how I use Ceph here.

The cloud platform is OpenStack (Mitaka release), the operating system is CentOS 7.1, and the Ceph version is 10.2.2 (Jewel).

I chose Ceph because it is free, open source, and widely supported, and because most cloud storage deployments on the market use it.

This post also draws on http://www.vpsee.com/2015/07/install-ceph-on-centos-7/

Contents

Part 1: Installing Ceph

Part 2: Using the Ceph cluster with OpenStack

Part 3: Using Ceph with Glance

Part 4: Removing an OSD node

Part 5: Mixing disk types in Ceph

Now let's start the installation.

You can also follow the official quick-start guide: http://docs.ceph.com/docs/master/start/quick-ceph-deploy/

Part 1: Installing Ceph

Host environment

One admin node, three monitors, and three OSD nodes, with 2 replicas of the data.

Here is the hosts configuration (present on every host):

10.10.128.18 ck-ceph-adm
10.10.128.19 ck-ceph-mon1
10.10.128.20 ck-ceph-mon2
10.10.128.21 ck-ceph-mon3
10.10.128.22 ck-ceph-osd1
10.10.128.23 ck-ceph-osd2
10.10.128.24 ck-ceph-osd3

The mon and OSD nodes also need some tuning:

# Pin disk device names via udev
ll /sys/block/sd*|awk '{print $NF}'|sed 's/..//'|awk -F '/' '{print "DEVPATH==\""$0"\", NAME=\""$NF"\", MODE=\"0660\""}'>/etc/udev/rules.d/90-ceph-disk.rules
# Disable CPU power saving (force the performance governor)
for CPUFREQ in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do [ -f $CPUFREQ ] || continue; echo -n performance > $CPUFREQ; done
# Increase the maximum number of PIDs
echo "kernel.pid_max = 4194303"|tee -a /etc/sysctl.conf
# Increase the maximum number of open files
echo "fs.file-max = 26234859"|tee -a /etc/sysctl.conf
# Increase read-ahead for sequential reads
for READ_KB in /sys/block/sd*/queue/read_ahead_kb; do [ -f $READ_KB ] || continue; echo 8192 > $READ_KB; done
# Enlarge the I/O request queue
for REQUEST in /sys/block/sd*/queue/nr_requests; do [ -f $REQUEST ] || continue; echo 20480 > $REQUEST; done
# Use the deadline I/O scheduler
for SCHEDULER in /sys/block/sd*/queue/scheduler; do [ -f $SCHEDULER ] || continue; echo deadline > $SCHEDULER; done
# Discourage swapping
echo "vm.swappiness = 0" | tee -a /etc/sysctl.conf

It is also best to change each host's hostname to match its entry in hosts.

1. Create a user

useradd -m ceph-admin
su - ceph-admin
mkdir -p ~/.ssh
chmod 700 ~/.ssh
cat << EOF > ~/.ssh/config
Host *
    Port 50020
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
EOF
chmod 600 ~/.ssh/config

Set up SSH trust between the nodes:

ssh-keygen -t rsa -b 2048

Just press Enter through all the prompts.

Copy id_rsa.pub into /home/ceph-admin/.ssh/authorized_keys on each of the other nodes and fix the file's permissions:

chmod 600 .ssh/authorized_keys
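
A sketch for pushing the key from the admin node, assuming password logins are still allowed at this stage (the Port 50020 from ~/.ssh/config above is picked up automatically):

for host in ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3 ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub ceph-admin@${host}
done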

Then give ceph-admin sudo rights.

Edit /etc/sudoers and add:

ceph-admin ALL=(root)       NOPASSWD:ALL

In the same file, disable requiretty by putting a # in front of this line:

Defaults    requiretty

Format the disks on the OSD servers.

If this is just a test, you can use plain directories instead; for production use, format the raw devices directly:

cat auto_parted.sh
#!/bin/bash
name="b c d e f g h i"
for i in ${name}; do
    echo "Creating partitions on /dev/sd${i} ..."
    parted -a optimal --script /dev/sd${i} -- mktable gpt
    parted -a optimal --script /dev/sd${i} -- mkpart primary xfs 0% 100%
    sleep 1
    mkfs.xfs -f /dev/sd${i}1 &
done

Then run it:
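
Run it as root on each OSD host; note the script destroys any data on sdb through sdi. The lsblk call is just a sanity check:

bash auto_parted.sh
lsblk -f    # verify each new sdX1 partition now carries an XFS filesystem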

2. Install EPEL (all nodes)

yum -y install epel-release

3. Install the Ceph repository and packages (all nodes; use this if you are not installing via ceph-deploy, otherwise ceph-deploy installs them for you)

yum -y install yum-plugin-priorities
rpm --import https://download.ceph.com/keys/release.asc
# install the release package from the official site, or from the USTC mirror if the official site is slow
rpm -Uvh --replacepkgs https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
rpm -Uvh --replacepkgs http://mirrors.ustc.edu.cn/ceph/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
cd /etc/yum.repos.d/
# point the repo file at the USTC mirror
sed -i 's@download.ceph.com@mirrors.ustc.edu.cn/ceph@g' ceph.repo
yum -y install ceph ceph-radosgw

4. Configure the admin node

Install the deployment tool:

yum install ceph-deploy -y

Initialize the cluster:

su - ceph-admin
mkdir ck-ceph-cluster
cd ck-ceph-cluster
ceph-deploy new ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3

List however many mon nodes you have.

Add some extra configuration:

echo "osd pool default size = 2">>ceph.conf
echo "osd pool default min size = 2">>ceph.conf
echo "public network = 10.10.0.0/16">>ceph.conf
echo "cluster network = 172.16.0.0/16">>ceph.conf

Note: if your hosts have multiple NICs, it is best to separate the public and cluster networks. The cluster network carries inter-OSD replication and recovery traffic, while the public network serves the monitors and clients. Also be aware that with size = 2, setting min size = 2 means a pool stops accepting I/O as soon as one replica is unavailable.
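
After the echo commands above, the generated ceph.conf should look roughly like this (the fsid and mon entries are written by ceph-deploy and are specific to this cluster):

[global]
fsid = 2aafe304-2dd1-48be-a0fa-cb9c911c7c3b
mon_initial_members = ck-ceph-mon1, ck-ceph-mon2, ck-ceph-mon3
mon_host = 10.10.128.19,10.10.128.20,10.10.128.21
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
osd pool default min size = 2
public network = 10.10.0.0/16
cluster network = 172.16.0.0/16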

Install Ceph on all nodes (do this if you want ceph-deploy to handle the installation; skip it if you already did step 3):

ceph-deploy install ck-ceph-adm ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3 ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3

Initialize the monitors:

ceph-deploy mon create-initial

Initialize the data disks on the OSD nodes:

ceph-deploy disk zap ck-ceph-osd1:sdb ck-ceph-osd1:sdc ck-ceph-osd1:sdd ck-ceph-osd1:sde ck-ceph-osd1:sdf ck-ceph-osd1:sdg ck-ceph-osd1:sdh ck-ceph-osd1:sdi
ceph-deploy osd create ck-ceph-osd1:sdb ck-ceph-osd1:sdc ck-ceph-osd1:sdd ck-ceph-osd1:sde ck-ceph-osd1:sdf ck-ceph-osd1:sdg ck-ceph-osd1:sdh ck-ceph-osd1:sdi

ceph-deploy disk zap ck-ceph-osd2:sdb ck-ceph-osd2:sdc ck-ceph-osd2:sdd ck-ceph-osd2:sde ck-ceph-osd2:sdf ck-ceph-osd2:sdg ck-ceph-osd2:sdh ck-ceph-osd2:sdi
ceph-deploy osd create ck-ceph-osd2:sdb ck-ceph-osd2:sdc ck-ceph-osd2:sdd ck-ceph-osd2:sde ck-ceph-osd2:sdf ck-ceph-osd2:sdg ck-ceph-osd2:sdh ck-ceph-osd2:sdi

ceph-deploy disk zap ck-ceph-osd3:sdb ck-ceph-osd3:sdc ck-ceph-osd3:sdd ck-ceph-osd3:sde ck-ceph-osd3:sdf ck-ceph-osd3:sdg ck-ceph-osd3:sdh ck-ceph-osd3:sdi
ceph-deploy osd create ck-ceph-osd3:sdb ck-ceph-osd3:sdc ck-ceph-osd3:sdd ck-ceph-osd3:sde ck-ceph-osd3:sdf ck-ceph-osd3:sdg ck-ceph-osd3:sdh ck-ceph-osd3:sdi

Push the configuration and admin keyring to every node:

ceph-deploy --overwrite-conf admin ck-ceph-adm ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3 ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3
sudo chmod +r /etc/ceph/ceph.client.admin.keyring

Fix the ownership of /etc/ceph on all nodes:

sudo chown -R ceph:ceph /etc/ceph

Check the cluster status:

[ceph-admin@ck-ceph-adm ~]$ ceph -s
    cluster 2aafe304-2dd1-48be-a0fa-cb9c911c7c3b
     health HEALTH_OK
     monmap e1: 3 mons at {ck-ceph-mon1=10.10.128.19:6789/0,ck-ceph-mon2=10.10.128.20:6789/0,ck-ceph-mon3=10.10.128.21:6789/0}
            election epoch 6, quorum 0,1,2 ck-ceph-mon1,ck-ceph-mon2,ck-ceph-mon3
     osdmap e279: 40 osds: 40 up, 40 in
            flags sortbitwise
      pgmap v96866: 2112 pgs, 3 pools, 58017 MB data, 13673 objects
            115 GB used, 21427 GB / 21543 GB avail
                2112 active+clean

Part 2: Using the Ceph cluster with OpenStack

See the official guide: http://docs.ceph.com/docs/master/rbd/rbd-openstack/

1. Create a pool

ceph osd pool create volumes 1024 1024

For choosing the pg_num and pgp_num value of 1024, see http://docs.ceph.com/docs/master/rados/operations/placement-groups/
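
A common rule of thumb from that page: total PGs ≈ (number of OSDs × 100) / replica count, rounded to a power of two. With 40 OSDs and 2 replicas that gives 40 × 100 / 2 = 2000, or 2048 rounded up, spread across all pools in the cluster; since this cluster hosts several pools, 1024 for volumes is a reasonable share.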

2. Install the Ceph client tools

Install them on every Cinder node and compute node:

rpm -Uvh --replacepkgs http://mirrors.ustc.edu.cn/ceph/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
yum install ceph-common

3. Sync the configuration

Copy /etc/ceph/ceph.conf from the admin node to the Cinder and compute nodes.

4. Set up authentication (on the Ceph admin node)

Grant the cinder user access to Ceph:

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes'

5. Add the keys to the nodes (on the admin node)

ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh {your-nova-compute-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
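
The {your-...} placeholders above come straight from the official guide; here is a sketch of the same steps looped over concrete machines (the host names are hypothetical, substitute your own):

# hypothetical host lists -- replace with your actual cinder and nova-compute nodes
CINDER_NODES="cinder1"
COMPUTE_NODES="compute1 compute2"
for host in ${CINDER_NODES} ${COMPUTE_NODES}; do
    ceph auth get-or-create client.cinder | ssh ${host} sudo tee /etc/ceph/ceph.client.cinder.keyring
done
for host in ${CINDER_NODES}; do
    ssh ${host} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
done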

6. Manage the secret key (on the admin node)

ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key

Add the key to libvirt:

Generate a UUID:

uuidgen
457eb676-33da-42ec-9a8c-9293d545c337

Log in to each compute node, substituting the UUID generated above:

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml

Restart the service:

systemctl restart openstack-nova-compute.service

Use virsh secret-list to check that the secret is present.

If you skip this on any compute node, attaching a volume to an instance there will fail with errors like the following in /var/log/nova/nova-compute.log:

2016-06-02 11:58:11.193 3004 ERROR oslo_messaging.rpc.dispatcher     rv = meth(*args, **kwargs)
2016-06-02 11:58:11.193 3004 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 554, in attachDeviceFlags
2016-06-02 11:58:11.193 3004 ERROR oslo_messaging.rpc.dispatcher     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2016-06-02 11:58:11.193 3004 ERROR oslo_messaging.rpc.dispatcher libvirtError: Secret not found: rbd no secret matches uuid '9c0e4528-bd0f-4fe8-a3cd-7b1b9bb21d63'

7. Configure Cinder (on the Cinder nodes)

Edit /etc/cinder/cinder.conf:

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

and also set the following in the [DEFAULT] section:

enabled_backends = ceph

Restart the services:

systemctl restart openstack-cinder-volume.service target.service
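
A quick sanity check that new volumes really land in the Ceph pool (this assumes the OpenStack CLI is configured; the volume name is arbitrary):

openstack volume create --size 1 ceph-test
rbd -p volumes ls --id cinder    # the new volume should appear as volume-<uuid>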

Part 3: Using Ceph with Glance

1. Create a pool (on the Ceph admin node)

ceph osd pool create images 128

2. Set permissions (on the Ceph admin node)

ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

3. Install the Ceph client on the Glance host

yum install ceph-common

4. Copy the Ceph configuration to the Glance node

Sync /etc/ceph/ceph.conf from the admin node.

5. Configure authentication

ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring

6. Configure Glance

Edit /etc/glance/glance-api.conf:

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
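
The official rbd-openstack guide additionally recommends letting Glance expose image locations, so that RBD-backed Cinder and Nova can make copy-on-write clones of images instead of full copies. If you want that behaviour, also set the following in the [DEFAULT] section of glance-api.conf (note it reveals backend location URLs through the API, so weigh the security implications):

show_image_direct_url = True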

7. Restart the services

systemctl restart openstack-glance-api.service openstack-glance-registry.service

8. Upload an image and test

glance image-create --name centos64-test1 --disk-format qcow2 --container-format bare --visibility public --file /tmp/CentOS-6.4-x86_64.qcow2 --progress
[root@ceph-mon ceph]# rados -p images ls
rbd_header.7eca70122ade
rbd_data.7eca70122ade.0000000000000000
rbd_directory
rbd_data.7eca70122ade.0000000000000001
rbd_data.7ee831dac577.0000000000000000
rbd_header.7ee831dac577
rbd_id.c7a81292-773f-457a-859c-2784d780544c
rbd_data.7ee831dac577.0000000000000001
rbd_data.7ee831dac577.0000000000000002
rbd_id.a5ae8722-698a-4a84-aa29-500144616001

Part 4: Removing an OSD node

1. Mark it out of the cluster (run on the admin node)

ceph osd out 7    (in ceph osd tree, its REWEIGHT value drops to 0)

2. Stop the service (run on the target node)

systemctl stop ceph-osd@7    (in ceph osd tree, its status changes to DOWN)

3. Remove it from the CRUSH map

ceph osd crush remove osd.7

4. Delete its authentication key

ceph auth del osd.7

5. Remove the OSD

ceph osd rm 7

6. Check whether the OSD's host still has other OSDs; if it does, skip to step 7, otherwise remove the host from CRUSH as well:

ceph osd crush remove `hostname`

7. Update ceph.conf and re-sync it to all nodes

vi /etc/ceph/ceph.conf

8. Delete the data directory

rm -rf /var/lib/ceph/osd/ceph-7
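
Since steps 1 through 5 are easy to mistype under pressure, here is a minimal sketch that wraps them into one script on the admin node (stopping the daemon in step 2 still has to happen on the OSD's own host, so it is only a reminder comment here):

#!/bin/bash
# remove-osd.sh <osd-id> -- sketch of the removal sequence above
set -e
ID=$1
[ -n "$ID" ] || { echo "usage: $0 <osd-id>"; exit 1; }
ceph osd out ${ID}                # 1. mark it out; wait for rebalancing to finish
# 2. on the OSD's host: systemctl stop ceph-osd@${ID}
ceph osd crush remove osd.${ID}   # 3. remove it from the CRUSH map
ceph auth del osd.${ID}           # 4. delete its cephx key
ceph osd rm ${ID}                 # 5. remove the OSD record itself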

Part 5: Mixing disk types in Ceph

Below we build mixed storage out of 15K RPM 600 GB SAS disks and 7.2K RPM 4 TB SAS disks.

Here is the state before the changes:

[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    44436G     44434G        1844M             0
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
    rbd      0         0         0        22216G           0
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ rados df
pool name                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
rbd                        0            0            0            0            0            0            0            0            0
  total used         1888268            0
  total avail    46592893204
  total space    46594781472

1. Fetch the current CRUSH map and decompile it

[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd getcrushmap -o default-crushmapdump
got crush map from osdmap epoch 238
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ crushtool -d default-crushmapdump -o default-crushmapdump-decompiled
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ cat default-crushmapdump-decompiled
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9
device 10 osd.10
device 11 osd.11
device 12 osd.12
device 13 osd.13
device 14 osd.14
device 15 osd.15
device 16 osd.16
device 17 osd.17
device 18 device18
device 19 osd.19
device 20 osd.20
device 21 osd.21
device 22 osd.22
device 23 osd.23
device 24 osd.24
device 25 osd.25
device 26 osd.26
device 27 osd.27
device 28 osd.28
device 29 osd.29
device 30 osd.30
device 31 osd.31
device 32 osd.32
device 33 osd.33
device 34 osd.34
device 35 osd.35
device 36 osd.36
device 37 osd.37
device 38 osd.38
device 39 osd.39
device 40 osd.40
device 41 osd.41
device 42 osd.42
device 43 osd.43
device 44 osd.44
device 45 osd.45
device 46 osd.46

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host ck-ceph-osd1 {
  id -2   # do not change unnecessarily
  # weight 6.481
  alg straw
  hash 0  # rjenkins1
  item osd.0 weight 0.540
  item osd.1 weight 0.540
  item osd.2 weight 0.540
  item osd.3 weight 0.540
  item osd.4 weight 0.540
  item osd.5 weight 0.540
  item osd.6 weight 0.540
  item osd.7 weight 0.540
  item osd.8 weight 0.540
  item osd.9 weight 0.540
  item osd.10 weight 0.540
  item osd.11 weight 0.540
}
host ck-ceph-osd2 {
  id -3   # do not change unnecessarily
  # weight 8.641
  alg straw
  hash 0  # rjenkins1
  item osd.12 weight 0.540
  item osd.13 weight 0.540
  item osd.14 weight 0.540
  item osd.15 weight 0.540
  item osd.16 weight 0.540
  item osd.17 weight 0.540
  item osd.19 weight 0.540
  item osd.20 weight 0.540
  item osd.21 weight 0.540
  item osd.22 weight 0.540
  item osd.23 weight 0.540
  item osd.24 weight 0.540
  item osd.25 weight 0.540
  item osd.26 weight 0.540
  item osd.27 weight 0.540
  item osd.28 weight 0.540
}
host ck-ceph-osd3 {
  id -4   # do not change unnecessarily
  # weight 6.481
  alg straw
  hash 0  # rjenkins1
  item osd.29 weight 0.540
  item osd.30 weight 0.540
  item osd.31 weight 0.540
  item osd.32 weight 0.540
  item osd.33 weight 0.540
  item osd.34 weight 0.540
  item osd.35 weight 0.540
  item osd.36 weight 0.540
  item osd.37 weight 0.540
  item osd.38 weight 0.540
  item osd.39 weight 0.540
  item osd.40 weight 0.540
}
host ck-ceph-osd4 {
  id -5   # do not change unnecessarily
  # weight 21.789
  alg straw
  hash 0  # rjenkins1
  item osd.41 weight 3.631
  item osd.42 weight 3.631
  item osd.43 weight 3.631
  item osd.44 weight 3.631
  item osd.45 weight 3.631
  item osd.46 weight 3.631
}
root default {
  id -1   # do not change unnecessarily
  # weight 43.392
  alg straw
  hash 0  # rjenkins1
  item ck-ceph-osd1 weight 6.481
  item ck-ceph-osd2 weight 8.641
  item ck-ceph-osd3 weight 6.481
  item ck-ceph-osd4 weight 21.789
}

# rules
rule replicated_ruleset {
  ruleset 0
  type replicated
  min_size 1
  max_size 10
  step take default
  step chooseleaf firstn 0 type host
  step emit
}

# end crush map

2. Edit the CRUSH map file: after root default, create two new root buckets for the OSDs, sas-15 (for the 15K SAS disks) and sas-7 (for the 7.2K SAS disks).

root sas-15 {
        id -6
  alg straw
  hash 0
        item osd.0 weight 0.540
        item osd.1 weight 0.540
        item osd.2 weight 0.540
        item osd.3 weight 0.540
        item osd.4 weight 0.540
        item osd.5 weight 0.540
        item osd.6 weight 0.540
        item osd.7 weight 0.540
        item osd.8 weight 0.540
        item osd.9 weight 0.540
        item osd.10 weight 0.540
        item osd.11 weight 0.540
        item osd.12 weight 0.540
        item osd.13 weight 0.540
        item osd.14 weight 0.540
        item osd.15 weight 0.540
        item osd.16 weight 0.540
        item osd.17 weight 0.540
        item osd.19 weight 0.540
        item osd.20 weight 0.540
        item osd.21 weight 0.540
        item osd.22 weight 0.540
        item osd.23 weight 0.540
        item osd.24 weight 0.540
        item osd.25 weight 0.540
        item osd.26 weight 0.540
        item osd.27 weight 0.540
        item osd.28 weight 0.540
        item osd.29 weight 0.540
        item osd.30 weight 0.540
        item osd.31 weight 0.540
        item osd.32 weight 0.540
        item osd.33 weight 0.540
        item osd.34 weight 0.540
        item osd.35 weight 0.540
        item osd.36 weight 0.540
        item osd.37 weight 0.540
        item osd.38 weight 0.540
        item osd.39 weight 0.540
        item osd.40 weight 0.540
}

The id values simply continue the sequence of bucket ids above; alg and hash do not need changing. All the OSDs backed by 15K SAS disks go into sas-15.

Next, all the 7.2K SAS OSDs go into sas-7:

root sas-7 {
        id -7
        alg straw
        hash 0
        item osd.41 weight 3.631
        item osd.42 weight 3.631
        item osd.43 weight 3.631
        item osd.44 weight 3.631
        item osd.45 weight 3.631
        item osd.46 weight 3.631
}

3. Add new CRUSH rules. A rule decides which OSDs a pool's data is placed on; we will later bind each pool to one of these rules. Append them after rule replicated_ruleset. Note that step chooseleaf firstn 0 type osd spreads replicas across individual OSDs rather than across hosts; for sas-7 this is unavoidable because all of its disks sit in a single host (ck-ceph-osd4), but it does mean both replicas of a PG can land on the same host.

rule sas-15-pool {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take sas-15
        step chooseleaf firstn 0 type osd
        step emit
}

rule sas-7-pool {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take sas-7
        step chooseleaf firstn 0 type osd
        step emit
}

4. Inject the rules into the cluster

Here is the complete CRUSH map:

[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ cat default-crushmapdump-decompiled
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9
device 10 osd.10
device 11 osd.11
device 12 osd.12
device 13 osd.13
device 14 osd.14
device 15 osd.15
device 16 osd.16
device 17 osd.17
device 18 device18
device 19 osd.19
device 20 osd.20
device 21 osd.21
device 22 osd.22
device 23 osd.23
device 24 osd.24
device 25 osd.25
device 26 osd.26
device 27 osd.27
device 28 osd.28
device 29 osd.29
device 30 osd.30
device 31 osd.31
device 32 osd.32
device 33 osd.33
device 34 osd.34
device 35 osd.35
device 36 osd.36
device 37 osd.37
device 38 osd.38
device 39 osd.39
device 40 osd.40
device 41 osd.41
device 42 osd.42
device 43 osd.43
device 44 osd.44
device 45 osd.45
device 46 osd.46

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host ck-ceph-osd1 {
  id -2   # do not change unnecessarily
  # weight 6.481
  alg straw
  hash 0  # rjenkins1
  item osd.0 weight 0.540
  item osd.1 weight 0.540
  item osd.2 weight 0.540
  item osd.3 weight 0.540
  item osd.4 weight 0.540
  item osd.5 weight 0.540
  item osd.6 weight 0.540
  item osd.7 weight 0.540
  item osd.8 weight 0.540
  item osd.9 weight 0.540
  item osd.10 weight 0.540
  item osd.11 weight 0.540
}
host ck-ceph-osd2 {
  id -3   # do not change unnecessarily
  # weight 8.641
  alg straw
  hash 0  # rjenkins1
  item osd.12 weight 0.540
  item osd.13 weight 0.540
  item osd.14 weight 0.540
  item osd.15 weight 0.540
  item osd.16 weight 0.540
  item osd.17 weight 0.540
  item osd.19 weight 0.540
  item osd.20 weight 0.540
  item osd.21 weight 0.540
  item osd.22 weight 0.540
  item osd.23 weight 0.540
  item osd.24 weight 0.540
  item osd.25 weight 0.540
  item osd.26 weight 0.540
  item osd.27 weight 0.540
  item osd.28 weight 0.540
}
host ck-ceph-osd3 {
  id -4   # do not change unnecessarily
  # weight 6.481
  alg straw
  hash 0  # rjenkins1
  item osd.29 weight 0.540
  item osd.30 weight 0.540
  item osd.31 weight 0.540
  item osd.32 weight 0.540
  item osd.33 weight 0.540
  item osd.34 weight 0.540
  item osd.35 weight 0.540
  item osd.36 weight 0.540
  item osd.37 weight 0.540
  item osd.38 weight 0.540
  item osd.39 weight 0.540
  item osd.40 weight 0.540
}
host ck-ceph-osd4 {
  id -5   # do not change unnecessarily
  # weight 21.789
  alg straw
  hash 0  # rjenkins1
  item osd.41 weight 3.631
  item osd.42 weight 3.631
  item osd.43 weight 3.631
  item osd.44 weight 3.631
  item osd.45 weight 3.631
  item osd.46 weight 3.631
}
root default {
  id -1   # do not change unnecessarily
  # weight 43.392
  alg straw
  hash 0  # rjenkins1
  item ck-ceph-osd1 weight 6.481
  item ck-ceph-osd2 weight 8.641
  item ck-ceph-osd3 weight 6.481
  item ck-ceph-osd4 weight 21.789
}
root sas-15 {
        id -6
        alg straw
        hash 0
        item osd.0 weight 0.540
        item osd.1 weight 0.540
        item osd.2 weight 0.540
        item osd.3 weight 0.540
        item osd.4 weight 0.540
        item osd.5 weight 0.540
        item osd.6 weight 0.540
        item osd.7 weight 0.540
        item osd.8 weight 0.540
        item osd.9 weight 0.540
        item osd.10 weight 0.540
        item osd.11 weight 0.540
        item osd.12 weight 0.540
        item osd.13 weight 0.540
        item osd.14 weight 0.540
        item osd.15 weight 0.540
        item osd.16 weight 0.540
        item osd.17 weight 0.540
        item osd.19 weight 0.540
        item osd.20 weight 0.540
        item osd.21 weight 0.540
        item osd.22 weight 0.540
        item osd.23 weight 0.540
        item osd.24 weight 0.540
        item osd.25 weight 0.540
        item osd.26 weight 0.540
        item osd.27 weight 0.540
        item osd.28 weight 0.540
        item osd.29 weight 0.540
        item osd.30 weight 0.540
        item osd.31 weight 0.540
        item osd.32 weight 0.540
        item osd.33 weight 0.540
        item osd.34 weight 0.540
        item osd.35 weight 0.540
        item osd.36 weight 0.540
        item osd.37 weight 0.540
        item osd.38 weight 0.540
        item osd.39 weight 0.540
        item osd.40 weight 0.540
}
root sas-7 {
        id -7
        alg straw
        hash 0
        item osd.41 weight 3.631
        item osd.42 weight 3.631
        item osd.43 weight 3.631
        item osd.44 weight 3.631
        item osd.45 weight 3.631
        item osd.46 weight 3.631
}
# rules
rule replicated_ruleset {
  ruleset 0
  type replicated
  min_size 1
  max_size 10
  step take default
  step chooseleaf firstn 0 type host
  step emit
}
rule sas-15-pool {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take sas-15
        step chooseleaf firstn 0 type osd
        step emit
}

rule sas-7-pool {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take sas-7
        step chooseleaf firstn 0 type osd
        step emit
}
# end crush map

Compile the map and inject it into the cluster:

[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ crushtool -c default-crushmapdump-decompiled -o default-crushmapdump-compiled
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd setcrushmap -i default-crushmapdump-compiled
set crush map
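
Before binding pools to the new rules, you can confirm they were registered:

ceph osd crush rule list    # should now show replicated_ruleset, sas-15-pool and sas-7-pool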

After applying it, look at the OSD tree:

[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd tree
ID WEIGHT   TYPE NAME             UP/DOWN REWEIGHT PRIMARY-AFFINITY
-7 21.78598 root sas-7
41  3.63100     osd.41                 up  1.00000          1.00000
42  3.63100     osd.42                 up  1.00000          1.00000
43  3.63100     osd.43                 up  1.00000          1.00000
44  3.63100     osd.44                 up  1.00000          1.00000
45  3.63100     osd.45                 up  1.00000          1.00000
46  3.63100     osd.46                 up  1.00000          1.00000
-6 21.59973 root sas-15
 0  0.53999     osd.0                  up  1.00000          1.00000
 1  0.53999     osd.1                  up  1.00000          1.00000
 2  0.53999     osd.2                  up  1.00000          1.00000
 3  0.53999     osd.3                  up  1.00000          1.00000
 4  0.53999     osd.4                  up  1.00000          1.00000
 5  0.53999     osd.5                  up  1.00000          1.00000
 6  0.53999     osd.6                  up  1.00000          1.00000
 7  0.53999     osd.7                  up  1.00000          1.00000
 8  0.53999     osd.8                  up  1.00000          1.00000
 9  0.53999     osd.9                  up  1.00000          1.00000
10  0.53999     osd.10                 up  1.00000          1.00000
11  0.53999     osd.11                 up  1.00000          1.00000
12  0.53999     osd.12                 up  1.00000          1.00000
13  0.53999     osd.13                 up  1.00000          1.00000
14  0.53999     osd.14                 up  1.00000          1.00000
15  0.53999     osd.15                 up  1.00000          1.00000
16  0.53999     osd.16                 up  1.00000          1.00000
17  0.53999     osd.17                 up  1.00000          1.00000
19  0.53999     osd.19                 up  1.00000          1.00000
20  0.53999     osd.20                 up  1.00000          1.00000
21  0.53999     osd.21                 up  1.00000          1.00000
22  0.53999     osd.22                 up  1.00000          1.00000
23  0.53999     osd.23                 up  1.00000          1.00000
24  0.53999     osd.24                 up  1.00000          1.00000
25  0.53999     osd.25                 up  1.00000          1.00000
26  0.53999     osd.26                 up  1.00000          1.00000
27  0.53999     osd.27                 up  1.00000          1.00000
28  0.53999     osd.28                 up  1.00000          1.00000
29  0.53999     osd.29                 up  1.00000          1.00000
30  0.53999     osd.30                 up  1.00000          1.00000
31  0.53999     osd.31                 up  1.00000          1.00000
32  0.53999     osd.32                 up  1.00000          1.00000
33  0.53999     osd.33                 up  1.00000          1.00000
34  0.53999     osd.34                 up  1.00000          1.00000
35  0.53999     osd.35                 up  1.00000          1.00000
36  0.53999     osd.36                 up  1.00000          1.00000
37  0.53999     osd.37                 up  1.00000          1.00000
38  0.53999     osd.38                 up  1.00000          1.00000
39  0.53999     osd.39                 up  1.00000          1.00000
40  0.53999     osd.40                 up  1.00000          1.00000
-1 43.39195 root default
-2  6.48099     host ck-ceph-osd1
 0  0.53999         osd.0              up  1.00000          1.00000
 1  0.53999         osd.1              up  1.00000          1.00000
 2  0.53999         osd.2              up  1.00000          1.00000
 3  0.53999         osd.3              up  1.00000          1.00000
 4  0.53999         osd.4              up  1.00000          1.00000
 5  0.53999         osd.5              up  1.00000          1.00000
 6  0.53999         osd.6              up  1.00000          1.00000
 7  0.53999         osd.7              up  1.00000          1.00000
 8  0.53999         osd.8              up  1.00000          1.00000
 9  0.53999         osd.9              up  1.00000          1.00000
10  0.53999         osd.10             up  1.00000          1.00000
11  0.53999         osd.11             up  1.00000          1.00000
-3  8.64099     host ck-ceph-osd2
12  0.53999         osd.12             up  1.00000          1.00000
13  0.53999         osd.13             up  1.00000          1.00000
14  0.53999         osd.14             up  1.00000          1.00000
15  0.53999         osd.15             up  1.00000          1.00000
16  0.53999         osd.16             up  1.00000          1.00000
17  0.53999         osd.17             up  1.00000          1.00000
19  0.53999         osd.19             up  1.00000          1.00000
20  0.53999         osd.20             up  1.00000          1.00000
21  0.53999         osd.21             up  1.00000          1.00000
22  0.53999         osd.22             up  1.00000          1.00000
23  0.53999         osd.23             up  1.00000          1.00000
24  0.53999         osd.24             up  1.00000          1.00000
25  0.53999         osd.25             up  1.00000          1.00000
26  0.53999         osd.26             up  1.00000          1.00000
27  0.53999         osd.27             up  1.00000          1.00000
28  0.53999         osd.28             up  1.00000          1.00000
-4  6.48099     host ck-ceph-osd3
29  0.53999         osd.29             up  1.00000          1.00000
30  0.53999         osd.30             up  1.00000          1.00000
31  0.53999         osd.31             up  1.00000          1.00000
32  0.53999         osd.32             up  1.00000          1.00000
33  0.53999         osd.33             up  1.00000          1.00000
34  0.53999         osd.34             up  1.00000          1.00000
35  0.53999         osd.35             up  1.00000          1.00000
36  0.53999         osd.36             up  1.00000          1.00000
37  0.53999         osd.37             up  1.00000          1.00000
38  0.53999         osd.38             up  1.00000          1.00000
39  0.53999         osd.39             up  1.00000          1.00000
40  0.53999         osd.40             up  1.00000          1.00000
-5 21.78899     host ck-ceph-osd4
41  3.63100         osd.41             up  1.00000          1.00000
42  3.63100         osd.42             up  1.00000          1.00000
43  3.63100         osd.43             up  1.00000          1.00000
44  3.63100         osd.44             up  1.00000          1.00000
45  3.63100         osd.45             up  1.00000          1.00000
46  3.63100         osd.46             up  1.00000          1.00000

5. Create the pools

[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd pool create sas-15-pool 1024 1024
pool 'sas-15-pool' created
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd dump|grep sas
pool 1 'sas-15-pool' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 240 flags hashpspool stripe_width 0

I named the pool after the matching rule from the CRUSH map purely for readability; the actual binding happens by setting the pool's crush_ruleset to the rule's ruleset number, which places sas-15-pool on the sas-15 OSDs:

[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd pool set sas-15-pool crush_ruleset 1
set pool 1 crush_ruleset to 1
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd dump|grep sas
pool 1 'sas-15-pool' replicated size 2 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 242 flags hashpspool stripe_width 0

The 15K SAS pool is ready; now configure the 7.2K SAS pool the same way:

[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd pool create sas-7-pool 256 256
pool 'sas-7-pool' created
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd dump|grep sas-7
pool 2 'sas-7-pool' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 244 flags hashpspool stripe_width 0
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd pool set sas-7-pool crush_ruleset 2
set pool 2 crush_ruleset to 2
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd dump|grep sas-7
pool 2 'sas-7-pool' replicated size 2 min_size 2 crush_ruleset 2 object_hash rjenkins pg_num 256 pgp_num 256 last_change 246 flags hashpspool stripe_width 0

Check the cluster's storage capacity.

The cluster was built from 40 × 600 GB 15K SAS disks and 6 × 4 TB 7.2K SAS disks, with 2 replicas,

so usable space is roughly 12 TB on the 15K tier (40 × 0.6 TB / 2) and likewise about 12 TB on the 7.2K tier (6 × 4 TB / 2):

[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    44436G     44434G        1904M             0
POOLS:
    NAME            ID     USED     %USED     MAX AVAIL     OBJECTS
    rbd             0         0         0        22216G           0
    sas-15-pool     1         0         0        11061G           0
    sas-7-pool      2         0         0        11155G           0
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ rados df
pool name                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
rbd                        0            0            0            0            0            0            0            0            0
sas-15-pool                0            0            0            0            0            0            0            0            0
sas-7-pool                 0            0            0            0            0            0            0            0            0
  total used         1950412            0
  total avail    46592831060
  total space    46594781472
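
To actually consume the two tiers from OpenStack, the new pools can be wired up as separate Cinder backends just like in part 2. A sketch of the relevant cinder.conf sections (the section names are illustrative; the secret UUID is the one defined earlier; also note the client.cinder caps from part 2 only cover pool=volumes, so they would need to be extended with rwx on sas-15-pool and sas-7-pool):

[DEFAULT]
enabled_backends = sas15,sas7

[sas15]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = sas15
rbd_pool = sas-15-pool
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

[sas7]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = sas7
rbd_pool = sas-7-pool
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

Each backend is then selectable through a volume type whose volume_backend_name extra spec matches.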

If you have questions, leave a comment on the blog and I will reply as soon as I see it.

This article comes from the "吟—技术交流" blog; please keep this attribution: http://dl528888.blog.51cto.com/2382721/1863309
