
OpenStack Cinder Integration with Various Backend Storage Technologies: Notes and Practice

The Cinder project exists to manage block devices, and its most important job is adapting cleanly to all kinds of storage backends so their capabilities can be put to good use. This article summarizes my scenarios with several Cinder backends (LVM, FC+SAN, iSCSI+SAN, NFS, VMware, GlusterFS), written down so I don't forget them later. Comments and exchanges are welcome.


1. LVM

The entry-level storage for starting the OpenStack Cinder journey: with nothing configured in cinder.conf, LVM is used by default. How LVM works:

First turn partitions into physical volumes with pvcreate, then combine one or more physical volumes into a volume group; when a volume is created, an LVM logical volume is allocated from that group with lvcreate.
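For instance, when Cinder later creates a 1 GB volume on this backend it ends up running something roughly equivalent to the line below; the volume name is a made-up placeholder (Cinder actually names volumes volume-<uuid>):

lvcreate -L 1G -n volume-example cinder-volumes    # carve a 1G logical volume out of the cinder-volumes VG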

For deployment, use dd to create a file of a chosen size (10G in this example, /vol/cinder-volumes), then map it to a loop device (a virtual block device) with losetup. That block device is initialized as a physical volume, and the volume group is built on top of it; a VG can contain multiple PVs at creation time, but this example uses only one.


dd if=/dev/zero of=/vol/cinder-volumes bs=1 count=0 seek=10G 
# Mount the file. 
loopdev=`losetup -f` 
losetup $loopdev /vol/cinder-volumes 
# Initialize as a physical volume. 
pvcreate $loopdev 
# Create the volume group. 
vgcreate cinder-volumes $loopdev 
# Verify the volume has been created correctly. 
pvscan

Once the volume group is in place, the stock cinder.conf configuration is enough.
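For reference, here is a minimal sketch of the relevant cinder.conf entries for this default LVM backend; the option names are the standard ones from Cinder releases of that era, and the values are assumptions matching the volume group created above:

# default LVM backend (sketch; adjust values to your environment)
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group = cinder-volumes     # the VG created with vgcreate above
iscsi_helper = tgtadm             # the default SCSI target admin tool (see Question 1 below)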

Restart the cinder-volume service, and normal volume create, attach, detach and other operations will then work.

Question 1: How does LVM attach work?

Creation is simple: just lvcreate. Attach is a bit more involved. The volume is first exported as a SCSI target device (with a LUN ID), and then the Linux SCSI initiator software connects to that target. Two pieces of software are involved: a SCSI target management tool (there are several, such as Tgt, Lio, Iet and ISERTgt, with Tgt as the default; all of them provide block-level SCSI storage to operating systems that have a SCSI initiator) and the Linux SCSI initiator, so the two steps correspond to the tgtadm and iscsiadm commands respectively.
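To make the two steps concrete, here is a hedged manual sketch of roughly what happens under the hood; the target IQN, LV path and portal IP are made-up values for illustration only:

# On the storage node: export the logical volume as an iSCSI target (tgtadm)
tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2010-10.org.openstack:volume-example
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/cinder-volumes/volume-example
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

# On the compute node: discover and log in with the Linux SCSI initiator (iscsiadm)
iscsiadm -m discovery -t sendtargets -p 192.168.0.10:3260
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-example -p 192.168.0.10:3260 --login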

2. FC (Fibre Channel) + SAN Devices

Requirements: a) The machine hosting the compute node must have an HBA (Fibre Channel adapter).
How to check whether the host has one:

Method 1:

$ lspci
20:00.0 Fibre Channel: Emulex Corporation Zephyr-X LightPulse Fibre Channel Host Adapter (rev 02)
20:00.1 Fibre Channel: Emulex Corporation Zephyr-X LightPulse Fibre Channel Host Adapter (rev 02)
Method 2:

Look under /sys/class/fc_host/.

With two Fibre Channel adapters there will be two directories, host1 and host2.
$ cat /sys/class/fc_host/host1/port_name
0x10000090fa1b825a     # the WWPN, which plays a role similar to a MAC address
b) The adapter must be connected to the backend storage over fibre. Taking IBM SVC as an example, the link has to be up: log in to the SVC GUI and check that the host shows as active, or ssh into the SVC and run:
ww_2145:SVC:superuser>svcinfo lsfabric -delim ! -wwpn "10000090fa1b825a"

10000090FA1B825A!0A0C00!3!node_165008!500507680130DBEA!2!0A0500!active!x3560m4-06MFZF1!!Host
Only then will volume attach and detach work reliably.
The practice below uses a Storwize device as an example:
volume_driver = cinder.volume.drivers.storwize_svc.StorwizeSVCDriver
san_ip = 10.2.2.123
san_login = superuser
#san_password = passw0rd
san_private_key = /svc_rsa
storwize_svc_volpool_name = DS3524_DiskArray1
storwize_svc_connection_protocol = FC

san_password and san_private_key are alternatives; san_private_key is recommended. The private key file is generated with ssh-keygen: keep the private key locally and put the public key on the SAN device. When other hosts later want to connect to the same storage device, they can reuse this private key without generating a new one.
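A short sketch of generating that key pair; the /svc_rsa path matches the configuration above, while the exact way to install the public key on the SVC/Storwize side depends on the device:

# On the cinder-volume host: generate an RSA key pair, keep the private key locally
ssh-keygen -t rsa -N "" -f /svc_rsa        # produces /svc_rsa (private) and /svc_rsa.pub (public)
# Then import the contents of /svc_rsa.pub for the superuser account on the SAN device
# (through its GUI or CLI) so that password-less SSH from Cinder works.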

Testing: create a volume.

[root@localhost ~]#  cinder create --display-name test55 1
[root@localhost ~]#  nova volume-list
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| 24f7e457-f71a-43ce-9ca6-4454fbcfa31f | available | test55       | 1    | None        |             |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
Attach the volume to the following existing instance, which saves booting a new one:
[root@localhost ~]# nova list
+--------------------------------------+-------+--------+------------+-------------+--------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks           |
+--------------------------------------+-------+--------+------------+-------------+--------------------+
| 77d7293f-7a20-4f36-ac86-95f4c24b29ae | test2 | ACTIVE | -          | Running     | net_local=10.0.1.5 |
+--------------------------------------+-------+--------+------------+-------------+--------------------+
[root@localhost ~]# nova volume-attach 77d7293f-7a20-4f36-ac86-95f4c24b29ae 24f7e457-f71a-43ce-9ca6-4454fbcfa31f
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 24f7e457-f71a-43ce-9ca6-4454fbcfa31f |
| serverId | 77d7293f-7a20-4f36-ac86-95f4c24b29ae |
| volumeId | 24f7e457-f71a-43ce-9ca6-4454fbcfa31f |
+----------+--------------------------------------+
[root@localhost ~]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| 24f7e457-f71a-43ce-9ca6-4454fbcfa31f |   in-use  |    test55    |  1   |     None    |  false   | 77d7293f-7a20-4f36-ac86-95f4c24b29ae |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+

3. iSCSI + SAN Devices

This connects to the storage device over TCP/IP. All that is required is that the storage service node can ping the SAN IP and that the compute node can ping the iSCSI node IP on the storage device.
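A quick reachability check before configuring the backend; 10.2.2.123 is the SAN management IP from section 2, and 10.2.2.130 is only a placeholder for your device's iSCSI node IP:

# on the storage service (cinder-volume) node
ping -c 3 10.2.2.123
# on the compute node
ping -c 3 10.2.2.130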

   Taking IBM SVC or V7000 as an example,
   the only difference from the FC configuration is:
storwize_svc_connection_protocol = FC ==》 storwize_svc_connection_protocol = iSCSI
   Testing is the same as above; everything works.


4. VMware

This backend uses vCenter to manage block storage. Cinder here is really just a wrapper layer, a relay: in the end everything calls into vCenter's own storage management. Change the following options in cinder.conf:

volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip = $VCENTER_HOST_IP
vmware_host_username = $VCENTER_HOST_USERNAME
vmware_host_password = $VCENTER_HOST_PASSWORD
vmware_wsdl_location = $WSDL_LOCATION
# VIM Service WSDL Location
# example: 'file:///home/SDK5.5/SDK/vsphere-ws/wsdl/vim25/vimService.wsdl'

Testing is the same as in section 2; everything works.

5. NFS

A very common network file system; its principles are easy to look up, so let's go straight to the practice with Cinder.

Step 1: Plan the NFS server side: which nodes and which directories. Here two nodes act as NFS servers, 10.11.0.16:/var/volume_share and 10.11.1.178:/var/volume_share. Create the directory /var/volume_share on both machines, export it over NFS, and start the NFS service on both nodes.
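A hedged sketch of the NFS server side on each of the two nodes; the export options are assumptions, so adjust them to your own security requirements (commands use RHEL/CentOS 6 style init scripts):

# run on 10.11.0.16 and on 10.11.1.178
mkdir -p /var/volume_share
echo "/var/volume_share *(rw,sync,no_root_squash)" >> /etc/exports
service rpcbind start
service nfs start
exportfs -a            # (re)export everything listed in /etc/exports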

      
  Step 2: Create /etc/cinder/shares.txt with the following content, listing the shares that may be mounted:
10.11.0.16:/var/volume_share
10.11.1.178:/var/volume_share
Adjust the permissions and group ownership:
$ chmod 0640 /etc/cinder/shares.txt
$ chown root:cinder /etc/cinder/shares.txt
  Step 3: Edit /etc/cinder/cinder.conf:
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/shares.txt
nfs_mount_point_base=$state_path/mnt
Restart the cinder-volume service and that's it; testing is the same as in section 2.
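After the restart you can verify, roughly, that cinder-volume mounted both shares under nfs_mount_point_base; the hash-named mount points below are placeholders, since Cinder names them after a hash of each share:

$ mount | grep volume_share
10.11.0.16:/var/volume_share on /var/lib/cinder/mnt/<hash1> type nfs (rw,...)
10.11.1.178:/var/volume_share on /var/lib/cinder/mnt/<hash2> type nfs (rw,...)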

Once, after an environment change, volume-attach reported an error:
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     connector)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 239, in attach
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     device_type=self['device_type'], encryption=encryption)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1263, in attach_volume
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     disk_dev)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1250, in attach_volume
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     virt_dom.attachDeviceFlags(conf.to_xml(), flags)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 179, in doit
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 139, in proxy_call
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     rv = execute(f,*args,**kwargs)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 77, in tworker
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     rv = meth(*args,**kwargs)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 419, in attachDeviceFlags
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher libvirtError: internal error unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk1' could not be initialized

This error comes from libvirt; the following setting fixes it. First check whether virt_use_nfs is off or on:
$ /usr/sbin/getsebool virt_use_nfs
If it is off, set it:
$ /usr/sbin/setsebool -P virt_use_nfs on

6. GlusterFS

Having written all of the above, I think this one is the nicest; no wonder Red Hat acquired it. It is a distributed file system that scales out to clusters of several PB, aggregating bricks of different storage types into one large parallel network file system over InfiniBand RDMA or TCP/IP.

       Briefly, the two characteristics I have experienced myself:
           1. Strong horizontal scalability: brick servers on different nodes can be combined into one large parallel network file system.
           2. Software-RAID-like behaviour: striping [stripe] and mirroring [replica] improve concurrent read/write performance and fault tolerance.

Below is a complete walkthrough of Cinder + GlusterFS, with notes on GlusterFS's useful features and their use along the way.

 Step 1: Install and set up the GlusterFS server environment.

        This example uses 10.11.0.16 and 10.11.1.178 as the two nodes; first install the packages on both of them.

        Two ways: from a yum repository or from RPM packages.

            1: yum -y install glusterfs glusterfs-fuse glusterfs-server
            2: Download the packages from, for example, http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.0/RHEL/epel-6.5/x86_64/
                        glusterfs-3.5.0-2.el6.x86_64.rpm         glusterfs-fuse-3.5.0-2.el6.x86_64.rpm    glusterfs-server-3.5.0-2.el6.x86_64.rpm
                        glusterfs-cli-3.5.0-2.el6.x86_64.rpm     glusterfs-libs-3.5.0-2.el6.x86_64.rpm

                  I downloaded version 3.5 and installed the RPMs.

        After installation, plan the brick servers across the nodes. In this example the directories /var/data_cinder and /var/data_cinder2 are created on both 10.11.1.178 and 10.11.0.16, and the GlusterFS volume cfs is created from 10.11.1.178.

       1. Start the glusterd service on 10.11.1.178 and 10.11.0.16:

[root@chen ~]# /etc/init.d/glusterd start

       2. On 10.11.1.178, build the trusted storage pool (peer probe):
[root@kvm-10-11-1-178 ~]# gluster peer probe 10.11.0.16
[root@kvm-10-11-1-178 ~]# gluster peer probe 10.11.1.178 # probing the local node can be skipped
     3. Create the GlusterFS volume
      Usage: $ gluster volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>... [force]
      stripe: striping, similar to RAID0, improves read/write performance.
      replica: as the name suggests, mirroring, similar to RAID1; data is written in mirrored copies.

      stripe + replica together give something like RAID10; in that case stripe COUNT * replica COUNT = brick-server COUNT. But I digress.

[root@kvm-10-11-0-16 var]# mkdir data_cinder
[root@kvm-10-11-0-16 var]# mkdir data_cinder2
[root@kvm-10-11-1-178 var]# mkdir data_cinder
[root@kvm-10-11-1-178 var]# mkdir data_cinder2
[root@kvm-10-11-1-178 var]# gluster volume create cfs stripe 2 replica 2 10.11.0.16:/var/data_cinder2 10.11.1.178:/var/data_cinder 10.11.0.16:/var/data_cinder 10.11.1.178:/var/data_cinder2 force
volume create: cfs: success: please start the volume to access data

Note: do NOT run gluster volume create cfs stripe 2 replica 2 10.11.0.16:/var/data_cinder2 10.11.0.16:/var/data_cinder 10.11.1.178:/var/data_cinder 10.11.1.178:/var/data_cinder2 force, because then the first two bricks, which form a mirror (RAID1) pair, would sit on the same node and provide no fault tolerance.

      4. Start the volume
Usage: $ gluster volume start <NEW-VOLNAME>
[root@kvm-10-11-1-178 var]# gluster volume start cfs
volume start: cfs: success
[root@kvm-10-11-1-178 ~]# gluster volume info all
 Volume Name: cfs
Type: Striped-Replicate
Volume ID: ac614af9-11b8-4ff3-98e6-fe8c3a2568b6
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.11.0.16:/var/data_cinder2
Brick2: 10.11.1.178:/var/data_cinder
Brick3: 10.11.0.16:/var/data_cinder
Brick4: 10.11.1.178:/var/data_cinder2

 Step 2: On the client side, i.e. the node where the cinder-volume service runs, install everything except the glusterfs-server package. From here on it works just like the NFS case: the service mounts the shares when it starts.
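A hedged sketch of the client-side package installation plus a quick manual mount test; the /mnt/cfs_test mount point is a throwaway example, since Cinder will do its own mounts under glusterfs_mount_point_base:

# on the cinder-volume node: everything except glusterfs-server
yum -y install glusterfs glusterfs-fuse glusterfs-cli glusterfs-libs
# optional: check that the volume is reachable before handing it to Cinder
mkdir -p /mnt/cfs_test
mount -t glusterfs 10.11.1.178:/cfs /mnt/cfs_test
umount /mnt/cfs_test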

Create /etc/cinder/shares.conf with the following content, telling Cinder which GlusterFS volume can be mounted:

10.11.1.178:/cfs
Adjust the permissions and group ownership:
$ chmod 0640 /etc/cinder/shares.conf
$ chown root:cinder /etc/cinder/shares.conf
cinder.conf configuration:

glusterfs_shares_config = /etc/cinder/shares.conf
glusterfs_mount_point_base = /var/lib/cinder/volumes
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver

[root@chen ~]# for i in api scheduler volume; do sudo service openstack-cinder-${i} restart; done

[root@chen ~]# cinder create --display-name  chenxiao-glusterfs 1
[root@chen ~]# cinder list 
+--------------------------------------+----------------+--------------------+------+-------------+----------+--------------------------------------+
|                  ID                  |     Status     |    Display Name    | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+----------------+--------------------+------+-------------+----------+--------------------------------------+ 
| 866f7084-c624-4c11-a592-8c00fcabfb23 |   available    | chenxiao-glusterfs |  1   |     None    |  false   |                                      |
+--------------------------------------+----------------+--------------------+------+-------------+----------+--------------------------------------+
Every brick server holds a piece of the data, each 512M in size. Each piece is only half of the 1G volume because of the striping (RAID0-like), and the data shows up in four places for a total of 2G because of the mirroring (RAID1-like). Taking one brick as an example:
[root@kvm-10-11-1-178 data_cinder]# ls -al
total 20
drwxrwxr-x    3 root cinder      4096 Jun 18 20:43 .
drwxr-xr-x.  27 root root        4096 Jun 18 10:24 ..
drw-------  240 root root        4096 Jun 18 20:39 .glusterfs
-rw-rw-rw-    2 root root   536870912 Jun 18 20:39 volume-866f7084-c624-4c11-a592-8c00fcabfb23
Boot an instance and attach the volume to it.
[root@chen data_cinder]# nova volume-attach f5b7527e-2ab8-424c-9842-653bd73e8f26 866f7084-c624-4c11-a592-8c00fcabfb23
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdd                             |
| id       | 866f7084-c624-4c11-a592-8c00fcabfb23 |
| serverId | f5b7527e-2ab8-424c-9842-653bd73e8f26 |
| volumeId | 866f7084-c624-4c11-a592-8c00fcabfb23 |
+----------+--------------------------------------+

[root@chen data_cinder]# cinder list
+--------------------------------------+----------------+--------------------+------+-------------+----------+--------------------------------------+
|                  ID                  |     Status     |    Display Name    | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+----------------+--------------------+------+-------------+----------+--------------------------------------+ 
| 866f7084-c624-4c11-a592-8c00fcabfb23 |     in-use     | chenxiao-glusterfs |  1   |     None    |  false   | f5b7527e-2ab8-424c-9842-653bd73e8f26 |
+--------------------------------------+----------------+--------------------+------+-------------+----------+--------------------------------------+