GlusterFS Learning Path (3): Client Mounting and Managing GlusterFS Volumes
The Gluster Native Client can be used on GNU/Linux clients for high concurrency, good performance, and transparent failover.
Gluster volumes can also be accessed over NFS v3.
The NFS implementations in GNU/Linux clients and in other operating systems, such as FreeBSD, Mac OS X, Windows 7 (Professional and up), and Windows Server 2003, have been tested extensively; other NFS client implementations can also work with the Gluster NFS server.
With Microsoft Windows or Samba clients, volumes can be accessed via CIFS.
For this access method, the Samba packages must be installed on the client.
In summary, GlusterFS supports three client types:
Gluster Native Client, NFS, and CIFS.
The Gluster Native Client is a FUSE-based client that runs in user space. It is the officially recommended client and exposes the full feature set of GlusterFS.
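For the NFS path, a minimal sketch (the /mnt/nfs mount point is an assumption; note that the volumes shown later in this article have nfs.disable set to on, so the built-in Gluster NFS server must be enabled first):
# gluster volume set gv1 nfs.disable off #re-enable the built-in gNFS server for this volume
# mount -t nfs -o vers=3,mountproto=tcp 192.168.56.11:/gv1 /mnt/nfs #Gluster NFS serves NFSv3 over TCP only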
1. Mounting with the Gluster Native Client
The Gluster Native Client is FUSE-based, so the client must have FUSE available.
It is the officially recommended client and supports high concurrency and efficient writes.
Before installing the Gluster Native Client, verify that the FUSE module is loaded on the client and that the required modules are accessible, as shown below:
[root@localhost ~]# modprobe fuse #load the FUSE loadable kernel module (LKM) into the Linux kernel
[root@localhost ~]# dmesg | grep -i fuse #verify that the FUSE module is loaded
[ 569.630373] fuse init (API version 7.22)
Install the Gluster Native Client:
[root@localhost ~]# yum -y install glusterfs-client #install the glusterfs-client package
[root@localhost ~]# mkdir /mnt/glusterfs #create the mount point
[root@localhost ~]# mount.glusterfs 192.168.56.11:/gv1 /mnt/glusterfs/ #mount the gv1 volume
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 20G 1.4G 19G 7% /
devtmpfs 231M 0 231M 0% /dev
tmpfs 241M 0 241M 0% /dev/shm
tmpfs 241M 4.6M 236M 2% /run
tmpfs 241M 0 241M 0% /sys/fs/cgroup
/dev/sda1 197M 97M 100M 50% /boot
tmpfs 49M 0 49M 0% /run/user/0
192.168.56.11:/gv1 4.0G 312M 3.7G 8% /mnt/glusterfs
[root@localhost ~]# ll /mnt/glusterfs/ #list the contents of the mount point
total 100000
-rw-r--r-- 1 root root 102400000 Aug 7 04:30 100M.file
[root@localhost ~]# mount #check the mount details
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
......
192.168.56.11:/gv1 on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
Manual mount options:
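mount.glusterfs accepts comma-separated -o options; a minimal sketch (the backup server addresses are assumptions):
# mount -t glusterfs -o backup-volfile-servers=192.168.56.12:192.168.56.13,log-level=WARNING 192.168.56.11:/gv1 /mnt/glusterfs #fall back to other servers for the volume file; raise the client log threshold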
Automatic volume mounting:
Besides mounting manually with mount, a volume can be mounted automatically through /etc/fstab.
Syntax: HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR glusterfs defaults,_netdev 0 0
For example:
192.168.56.11:/gv1 /mnt/glusterfs glusterfs defaults,_netdev 0 0
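To verify the entry without rebooting, let mount process fstab directly:
# mount -a -t glusterfs #mount every glusterfs entry in /etc/fstab
# df -h /mnt/glusterfs #confirm the volume is mounted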
2. Managing GlusterFS Volumes
(1) Stopping a volume
[root@gluster-node1 ~]# gluster volume stop gv1
(2) Deleting a volume
[root@gluster-node1 ~]# gluster volume delete gv1
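Both commands ask for y/n confirmation, and a volume must be stopped before it can be deleted. In scripts the prompt can be skipped; a sketch (assuming the --mode=script CLI flag is available in your Gluster version):
# gluster --mode=script volume stop gv1 #stop without the interactive prompt
# gluster --mode=script volume delete gv1 #only a stopped volume can be deleted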
(3) Expanding a volume
GlusterFS supports expanding volumes online.
If the node being added is not yet in the trusted pool, add it first with the following command.
Syntax: # gluster peer probe <SERVERNAME>
Expansion syntax: # gluster volume add-brick <VOLNAME> <NEW-BRICK>
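Adding a brick does not move existing data by itself; a rebalance spreads it onto the new brick. A minimal sketch (the gluster-node4 host is an assumption):
# gluster peer probe gluster-node4 #add the new node to the trusted pool
# gluster volume add-brick test-volume gluster-node4:/storage/brick1 #grow the volume by one brick
# gluster volume rebalance test-volume start #redistribute existing files across all bricks
# gluster volume rebalance test-volume status #wait until the status shows completed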
(4) Shrinking a volume
Shrinking a volume is similar to expanding one; it is done brick by brick.
Syntax: # gluster volume remove-brick <VOLNAME> <BRICKNAME> start
[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node3:/storage/brick1 start #start removing the brick
volume remove-brick start: success
ID: dd0004f0-b3e6-45d6-80ed-90506dc16159
[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node3:/storage/brick1 status #check the status of the remove-brick operation
Node Rebalanced-files size scanned failures skipped status run time in h:m:s
--------- ----------- ----------- ----------- ----------- ----------- ------------ --------------
gluster-node3 35 0Bytes 35 0 0 completed 0:00:00
[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node3:/storage/brick1 commit #once the status shows completed, commit the remove-brick operation
volume remove-brick commit: success
[root@gluster-node1 ~]# gluster volume info
Volume Name: test-volume
Type: Distribute
Volume ID: 26a625bb-301c-4730-a382-0a838ee63935
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-node1:/storage/brick1
Brick2: gluster-node2:/storage/brick1
Options Reconfigured:
performance.client-io-threads: on
transport.address-family: inet
nfs.disable: on
(5) Migrating a volume
To replace a brick on a distributed volume, add a new brick and then remove the brick being replaced.
Removing the brick triggers a rebalance, which migrates the data on the removed brick to the newly added one.
Note: the "replace-brick" command can only be used on replicated or distributed-replicated volumes.
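For a replicated volume the swap is a single step; a sketch (rep-volume and the brick paths are assumptions, and recent Gluster releases accept only the commit force form):
# gluster volume replace-brick rep-volume gluster-node2:/storage/brick2 gluster-node3:/storage/brick2 commit force #replace the brick in one step
# gluster volume heal rep-volume full #self-heal repopulates the new brick from its replica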
(1) Initial configuration of test-volume
[root@gluster-node1 gv1]# gluster volume info
Volume Name: test-volume
Type: Distribute
Volume ID: 26a625bb-301c-4730-a382-0a838ee63935
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-node1:/storage/brick1
Brick2: gluster-node2:/storage/brick1
Options Reconfigured:
performance.client-io-threads: on
transport.address-family: inet
nfs.disable: on
(2) Files in the test-volume mount directory and their actual locations on the bricks
[root@gluster-node1 gv1]# ll
total 0
-rw-r--r-- 1 root root 0 Aug 13 22:22 file1
-rw-r--r-- 1 root root 0 Aug 13 22:22 file2
-rw-r--r-- 1 root root 0 Aug 13 22:22 file3
-rw-r--r-- 1 root root 0 Aug 13 22:22 file4
-rw-r--r-- 1 root root 0 Aug 13 22:22 file5
[root@gluster-node1 gv1]# ll /storage/brick1/
total 0
-rw-r--r-- 2 root root 0 Aug 13 22:22 file1
-rw-r--r-- 2 root root 0 Aug 13 22:22 file2
-rw-r--r-- 2 root root 0 Aug 13 22:22 file5
[root@gluster-node2 ~]# ll /storage/brick1/
total 0
-rw-r--r-- 2 root root 0 Aug 13 2018 file3
-rw-r--r-- 2 root root 0 Aug 13 2018 file4
(3) Add a new brick, gluster-node3:/storage/brick1
[root@gluster-node1 ~]# gluster volume add-brick test-volume gluster-node3:/storage/brick1/ force
volume add-brick: success
(4) Start the remove-brick operation
[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node2:/storage/brick1 start
volume remove-brick start: success
ID: 2acdaebb-25a9-477c-807e-980a6086796e
(5) Check that the remove-brick status shows completed
[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node2:/storage/brick1 status
Node Rebalanced-files size scanned failures skipped status run time in h:m:s
--------- ----------- ----------- ----------- ----------- ----------- ------------ --------------
gluster-node2 2 0Bytes 2 0 0 completed 0:00:00
(6) Commit the removal of the old brick
[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node2:/storage/brick1 commit
volume remove-brick commit: success
(7) The final configuration of test-volume
[root@gluster-node1 ~]# gluster volume info
Volume Name: test-volume
Type: Distribute
Volume ID: 26a625bb-301c-4730-a382-0a838ee63935
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-node1:/storage/brick1
Brick2: gluster-node3:/storage/brick1
Options Reconfigured:
performance.client-io-threads: on
transport.address-family: inet
nfs.disable: on
(8) Check the files stored on the new brick; the files previously stored on gluster-node2 have moved to gluster-node3
[root@gluster-node3 ~]# ll /storage/brick1/
total 0
-rw-r--r-- 2 root root 0 Aug 13 2018 file3
-rw-r--r-- 2 root root 0 Aug 13 2018 file4
(6) Quotas
[root@gluster-node1 ~]# gluster volume quota test-volume enable #enable quotas
volume quota : success
[root@gluster-node1 ~]# gluster volume quota test-volume disable #disable quotas (shown for reference; quotas must remain enabled for the steps below)
volume quota : success
[root@gluster-node1 ~]# mount -t glusterfs 127.0.0.1:/test-volume /gv1 #mount the test-volume volume
[root@gluster-node1 ~]# mkdir /gv1/quota #create the directory to be limited
[root@gluster-node1 ~]# gluster volume quota test-volume limit-usage /quota 10MB #limit the /gv1/quota directory to 10MB (the path is relative to the volume root)
[root@gluster-node1 ~]# gluster volume quota test-volume list #list directory quota settings
Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/quota 10.0MB 80%(8.0MB) 0Bytes 10.0MB No No
[root@gluster-node1 ~]# gluster volume set test-volume features.quota-timeout 5 #set the cache timeout (in seconds) for quota information
[root@gluster-node1 quota]# cp /gv1/20M.file . #copy a 20MB file into /gv1/quota; this already exceeds the limit yet succeeds, because with such a small limit enforcement lags slightly
[root@gluster-node1 quota]# cp /gv1/20M.file ./20Mb.file #copying a second 20MB file is rejected as over quota
cp: cannot create regular file ‘./20Mb.file’: Disk quota exceeded
[root@gluster-node1 gv1]# gluster volume quota test-volume remove /quota #remove the quota setting for a directory
volume quota : success
Note:
The quota feature limits the space of a specific directory under the mount point, e.g. /mnt/glusterfs/data; it does not limit the space of the bricks that make up the volume.
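limit-usage also accepts an optional soft-limit percentage after the hard limit; a sketch (the 70% figure is an assumption):
# gluster volume quota test-volume limit-usage /quota 10MB 70% #warn at 7MB (soft limit), enforce at 10MB (hard limit)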
(7) Viewing I/O information
The profile command provides an interface for viewing per-brick I/O statistics of a volume.
[root@gluster-node1 ~]# gluster volume profile test-volume start #start profiling; I/O statistics can then be queried
Starting volume profile on test-volume has been successful
[root@gluster-node1 ~]# gluster volume profile test-volume info #show the I/O statistics of every brick
Brick: gluster-node1:/storage/brick1
------------------------------------
Cumulative Stats:
Block Size: 32768b+ 131072b+
No. of Reads: 0 0
No. of Writes: 2 312
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 122 FORGET
0.00 0.00 us 0.00 us 0.00 us 160 RELEASE
0.00 0.00 us 0.00 us 0.00 us 68 RELEASEDIR
Duration: 250518 seconds
Data Read: 0 bytes
Data Written: 40960000 bytes
Interval 1 Stats:
Duration: 27 seconds
Data Read: 0 bytes
Data Written: 0 bytes
Brick: gluster-node3:/storage/brick1
------------------------------------
Cumulative Stats:
Block Size: 1024b+ 2048b+ 4096b+
No. of Reads: 0 0 0
No. of Writes: 3 1 10
Block Size: 8192b+ 16384b+ 32768b+
No. of Reads: 0 0 1
No. of Writes: 291 516 68
Block Size: 65536b+ 131072b+
No. of Reads: 0 156
No. of Writes: 6 20
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 3 RELEASE
0.00 0.00 us 0.00 us 0.00 us 31 RELEASEDIR
Duration: 76999 seconds
Data Read: 20480000 bytes
Data Written: 20480000 bytes
Interval 1 Stats:
Duration: 26 seconds
Data Read: 0 bytes
Data Written: 0 bytes
[root@gluster-node1 ~]# gluster volume profile test-volume stop #stop profiling when you are done
Stopping volume profile on test-volume has been successful
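Between measurements the cumulative counters can be reset; a sketch (assuming the info clear subcommand exists in your Gluster version):
# gluster volume profile test-volume info clear #reset the cumulative statistics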
(8) Top monitoring
The top command lets you view brick performance metrics such as read, write, file open calls, file read calls, file write calls, directory open calls, and directory read calls. Every view accepts a top-entry count (list-cnt), which defaults to 100.
# gluster volume top VOLNAME open [brick BRICK-NAME] [list-cnt] //view the currently open fds
[root@gluster-node1 ~]# gluster volume top test-volume open brick gluster-node1:/storage/brick1 list-cnt 3
Brick: gluster-node1:/storage/brick1
Current open fds: 0, Max open fds: 4, Max openfd time: 2018-08-13 11:53:24.099217
Count filename
=======================
1 /98.txt
1 /95.txt
1 /87.txt
# gluster volume top VOLNAME read [brick BRICK-NAME] [list-cnt] //view the most frequent read calls
[root@gluster-node1 ~]# gluster volume top test-volume read brick gluster-node3:/storage/brick1
Brick: gluster-node3:/storage/brick1
Count filename
=======================
157 /20M.file
# gluster volume top VOLNAME write [brick BRICK-NAME] [list-cnt] //view the most frequent write calls
[root@gluster-node1 ~]# gluster volume top test-volume write brick gluster-node3:/storage/brick1
Brick: gluster-node3:/storage/brick1
Count filename
=======================
915 /20M.file
# gluster volume top VOLNAME opendir [brick BRICK-NAME] [list-cnt] //view the most frequent directory open calls
# gluster volume top VOLNAME readdir [brick BRICK-NAME] [list-cnt] //view the most frequent directory read calls
[root@gluster-node1 ~]# gluster volume top test-volume opendir brick gluster-node3:/storage/brick1
Brick: gluster-node3:/storage/brick1
Count filename
=======================
7 /quota
[root@gluster-node1 ~]# gluster volume top test-volume readdir brick gluster-node3:/storage/brick1
Brick: gluster-node3:/storage/brick1
Count filename
=======================
7 /quota
# gluster volume top VOLNAME read-perf [bs blk-size count count] [brick BRICK-NAME] [list-cnt] //view the read performance of each brick
[root@gluster-node1 ~]# gluster volume top test-volume read-perf bs 256 count 1 brick gluster-node3:/storage/brick1
Brick: gluster-node3:/storage/brick1
Throughput 42.67 MBps time 0.0000 secs
MBps Filename Time
==== ======== ====
0 /20M.file 2018-08-14 03:32:24.7443
# gluster volume top VOLNAME write-perf [bs blk-size count count] [brick BRICK-NAME] [list-cnt] //view the write performance of each brick
[root@gluster-node1 ~]# gluster volume top test-volume write-perf bs 256 count 1 brick gluster-node1:/storage/brick1
Brick: gluster-node1:/storage/brick1
Throughput 16.00 MBps time 0.0000 secs
MBps Filename Time
==== ======== ====
0 /quota/20Mb.file 2018-08-14 11:34:21.957635
0 /quota/20M.file 2018-08-14 11:31:02.767068