
Distributed File System: GlusterFS Installation and Configuration

1. Environment Planning

GlusterFS servers: 10.100.0.41 / 10.100.0.44
GlusterFS client: 10.100.0.43


2. Required Packages

glusterfs-server-3.4.2-1.el6.x86_64
glusterfs-3.4.2-1.el6.x86_64


3. Installation

     3.1 Server installation

#wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.2/CentOS/glusterfs-epel.repo

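The repo file only makes the packages available; the original does not show the install step itself. With the gluster.org repo above, the server packages would typically be installed with:

# yum -y install glusterfs-server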

#service glusterd start
#chkconfig glusterd on

     3.2 Configure the storage directory

          By default, GlusterFS should not use the root partition as its storage directory (brick), so this example uses a second disk.

# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x423f8af8.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-78325, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-78325, default 78325):
Using default value 78325

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

# mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
39321600 inodes, 157286382 blocks
7864319 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
4800 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000

Writing inode tables: done                           
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
# mkdir /gfs
# mount /dev/sdb1 /gfs
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              30G  1.8G   27G   7% /
tmpfs                 495M     0  495M   0% /dev/shm
/dev/sda1             194M   28M  157M  15% /boot
/dev/sda4              65G  180M   62G   1% /home
/dev/sdb1             591G  198M  561G   1% /gfs

     3.3 Mount automatically at boot

#echo "/dev/sdb1                                 /gfs                    ext4    defaults        1 2" >> /etc/fstab
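As an optional check (not in the original), mount -a processes /etc/fstab and reports errors in any entry that cannot be parsed or mounted, so it is a quick way to validate the new line before the next reboot:

# mount -a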

     3.4 On the first GlusterFS server, add the other storage node (10.100.0.44)

# gluster peer probe 10.100.0.44       # add the new node
.......

     3.5 View peer information

# gluster peer status
Number of Peers: 1

Hostname: 10.100.0.44
Uuid: c674ccbb-072b-4f26-97a2-facd15add645
State: Peer in Cluster (Connected)

     3.6 Create the storage volume (the cluster disk)
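The brick paths shown in the volume info below are a data subdirectory of the mounted filesystem rather than the /gfs mount point itself. Assuming that layout, create the subdirectory on both servers first (these commands are not in the original):

# mkdir /gfs/data      # run on both 10.100.0.41 and 10.100.0.44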

#gluster volume create gfsv replica 2 10.100.0.41:/gfs/data 10.100.0.44:/gfs/data
Creation of volume gfsv has been successful. Please start the volume to access data.

     3.7 Start the volume

# gluster volume start gfsv      # start the volume

     3.8 View volume information

# gluster volume info

Volume Name: gfsv
Type: Replicate
Volume ID: be627a08-1d36-40f5-add0-1277be47c328
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.100.0.41:/gfs/data
Brick2: 10.100.0.44:/gfs/data

     3.9 Configure access control (auth.allow)

# gluster volume set gfsv auth.allow 10.100.0.*     # allow access from this address range
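To confirm that the option took effect (an optional check, not in the original), options that have been changed are listed under "Options Reconfigured" in the volume info output:

# gluster volume info gfsv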


4. Client Installation and Configuration

     4.1 Install the client

          The client requires FUSE support.

# yum -y install glusterfs glusterfs-fuse glusterfs-api glusterfs-devel glusterfs-libs

     4.2 Mount the GlusterFS volume

#mount -t glusterfs -o rw 10.100.0.41:gfsv /home/gfs/data/
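Note that the mount point has to exist before running the mount command above. A minimal sketch (neither command is in the original): create the /home/gfs/data directory used above, and optionally add an fstab entry so the volume is remounted at boot:

# mkdir -p /home/gfs/data
# echo "10.100.0.41:/gfsv    /home/gfs/data    glusterfs    defaults,_netdev    0 0" >> /etc/fstab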



5. Volume Types

     (1) Distribute volume: distributed volume. Files are spread across the brick servers by a hash algorithm; this is the basic volume type and the defining feature of GlusterFS.
     (2) Stripe volume: striped volume, similar to RAID 0. The stripe count equals the number of brick servers; files are split into chunks and distributed across the brick servers in round-robin fashion. Concurrency is at chunk granularity, so large files perform well.
     (3) Replica volume: replicated (mirrored) volume, similar to RAID 1. The replica count equals the number of brick servers, so all brick servers hold the same file data, forming an n-way mirror with high availability.
     (4) Distribute stripe volume: distributed striped volume. The number of brick servers is a multiple of the stripe count, combining the characteristics of the distribute and stripe volumes.
     (5) Distribute replica volume: distributed replicated volume. The number of brick servers is a multiple of the replica count, combining the characteristics of the distribute and replica volumes. Example create commands for these types follow this list.
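The commands below are only a sketch of the create syntax for each type; server1..server4 and the /brickN paths are placeholders, and each brick must not already belong to another volume:

# gluster volume create dist-vol server1:/brick1 server2:/brick1          # distribute (the default when no type is given)
# gluster volume create stripe-vol stripe 2 server1:/brick2 server2:/brick2          # stripe across 2 bricks
# gluster volume create dr-vol replica 2 server1:/brick3 server2:/brick3 server3:/brick3 server4:/brick3          # distributed replica: 4 bricks, replica 2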


6. Rebalancing

Rebalancing the layout is necessary because the layout is static: when new bricks are added to an existing volume, newly created files are still placed on the old bricks. The layout therefore has to be rebalanced so that the new bricks take effect. A layout rebalance only makes the new layout effective; it does not move existing data onto the new layout. If you want to redistribute the data in the volume after the new layout takes effect, you also need to rebalance the data itself.

gluster volume rebalance VOLNAME fix-layout start

gluster volume rebalance VOLNAME migrate-data start

The two steps can also be combined into one:

gluster volume rebalance VOLNAME start

You can also stop or delete a volume (a volume must be stopped before it can be deleted):

gluster volume stop VOLNAME
gluster volume delete VOLNAME


You can check the rebalance progress while it is running:
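The original omits the command here; the rebalance status subcommand reports per-node progress:

gluster volume rebalance VOLNAME status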


You can also stop a rebalance; when it is started again, it resumes from where it left off:


gluster volume rebalance VOLNAME stop






This article is from the "亮公子" blog. Please retain this attribution: http://iyull.blog.51cto.com/4664834/1946587
