Ceph Distributed Filesystem Performance Testing

This post compares fio throughput for the same mixed read/write workload against three back ends: a CephFS mount, a mapped RBD block device, and a local disk.
1. Ceph Filesystem (CephFS)
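The fio job below issues mixed sequential reads and writes (--rw=rw) in 1 MB blocks (--bs=1m), 10 MB per job (--size=10M), with 20 concurrent synchronous jobs reported as a single group (--numjobs=20 --group_reporting), so each run moves about 200 MB in total. The test directory /mycephfs is a CephFS mount; the mount command is not shown in the original, so the following is a minimal sketch, assuming the kernel client and the monitor address 192.168.1.150:6789 seen in the ceph status output further down (the secretfile path is hypothetical):

sudo mkdir -p /mycephfs
# kernel CephFS client; the admin key exported to a plain secret file
sudo mount -t ceph 192.168.1.150:6789:/ /mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# or, with the FUSE client:
# sudo ceph-fuse -m 192.168.1.150:6789 /mycephfs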
bill@ubuntu:/mycephfs$ sudo fio --rw=rw --bs=1m --size=10M --numjobs=20 --group_reporting --name=test-rw
test-rw: (g=0): rw=rw, bs=1M-1M/1M-1M/1M-1M, ioengine=sync, iodepth=1
...
test-rw: (g=0): rw=rw, bs=1M-1M/1M-1M/1M-1M, ioengine=sync, iodepth=1
fio-2.1.3
Starting 20 processes
test-rw: Laying out IO file(s) (1 file(s) / 10MB)
... (the same line repeats once per job, 20 in total)
Jobs: 20 (f=20): [MMMMMMMMMMMMMMMMMMMM] [300.0% done] [12275KB/0KB/0KB /s] [11/0/0 iops]
Jobs: 20 (f=20): [MMMMMMMMMMMMMMMMMMMM] [400.0% done] [10229KB/0KB/0KB /s] [9/0/0 iops]
Jobs: 20 (f=20): [MMMMMMMMMMMMMMMMMMMM] [23.8% done] [11264KB/1024KB/0KB /s] [11/1/0 iops]
Jobs: 20 (f=20): [MMMMMMMMMMMMMMMMMMMM] [23.1% done] [3072KB/7168KB/0KB /s] [3/7/0 iops]
Jobs: 19 (f=19): [MMMMMM_MMMMMMMMMMMMM] [22.6% done] [4091KB/4091KB/0KB /s] [3/3/0 iops]
Jobs: 18 (f=18): [MMMMMM_MMMMM_MMMMMMM] [22.2% done] [2048KB/6144KB/0KB /s] [2/6/0 iops]
Jobs: 16 (f=16): [_MM_MM_MMMMM_MMMMMMM] [22.0% done] [5120KB/5120KB/0KB /s] [5/5/0 iops]
Jobs: 15 (f=15): [_MM__M_MMMMM_MMMMMMM] [21.7% done] [4091KB/8183KB/0KB /s] [3/7/0 iops]
Jobs: 13 (f=13): [_MM____MM_MM_MMMMMMM] [21.6% done] [4091KB/6137KB/0KB /s] [3/5/0 iops]
Jobs: 12 (f=12): [_MM____MM_M__MMMMMMM] [21.4% done] [3072KB/5120KB/0KB /s] [3/5/0 iops]
Jobs: 2 (f=2): [________________EMM_] [20.9% done] [5120KB/9216KB/0KB /s] [5/9/0 iops] [eta 01m:12s]
test-rw: (groupid=0, jobs=20): err= 0: pid=1650: Wed May 7 09:44:48 2014
read : io=102400KB, bw=5481.6KB/s, iops=5, runt= 18681msec
clat (usec): min=174, max=14680K, avg=2272423.84, stdev=3878923.36
lat (usec): min=174, max=14680K, avg=2272426.51, stdev=3878922.12
clat percentiles (usec):
| 1.00th=[ 175], 5.00th=[ 181], 10.00th=[ 193], 20.00th=[ 237],
| 30.00th=[ 660], 40.00th=[ 1144], 50.00th=[ 2416], 60.00th=[577536],
| 70.00th=[2113536], 80.00th=[3391488], 90.00th=[9109504], 95.00th=[11993088],
| 99.00th=[13697024], 99.50th=[14745600], 99.90th=[14745600], 99.95th=[14745600],
| 99.99th=[14745600]
bw (KB /s): min= 139, max= 2856, per=14.30%, avg=783.94, stdev=673.22
write: io=102400KB, bw=5481.6KB/s, iops=5, runt= 18681msec
clat (usec): min=257, max=3273.7K, avg=212125.31, stdev=543752.09
lat (usec): min=274, max=3273.7K, avg=212154.82, stdev=543751.58
clat percentiles (usec):
| 1.00th=[ 258], 5.00th=[ 266], 10.00th=[ 274], 20.00th=[ 278],
| 30.00th=[ 290], 40.00th=[ 302], 50.00th=[ 394], 60.00th=[ 668],
| 70.00th=[259072], 80.00th=[337920], 90.00th=[419840], 95.00th=[552960],
| 99.00th=[2736128], 99.50th=[3260416], 99.90th=[3260416], 99.95th=[3260416],
| 99.99th=[3260416]
bw (KB /s): min= 414, max= 5731, per=55.96%, avg=3067.19, stdev=1768.00
lat (usec) : 250=10.00%, 500=25.50%, 750=13.00%, 1000=3.50%
lat (msec) : 2=6.50%, 4=0.50%, 250=1.00%, 500=16.50%, 750=2.00%
lat (msec) : 1000=1.00%, 2000=3.00%, >=2000=17.50%
cpu : usr=0.00%, sys=0.04%, ctx=146, majf=0, minf=499
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=100/w=100/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
READ: io=102400KB, aggrb=5481KB/s, minb=5481KB/s, maxb=5481KB/s, mint=18681msec, maxt=18681msec
WRITE: io=102400KB, aggrb=5481KB/s, minb=5481KB/s, maxb=5481KB/s, mint=18681msec, maxt=18681msec
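A quick sanity check on these numbers: fio issued 100 reads and 100 writes of 1 MB each (issued: total=r=100/w=100), i.e. 102400 KB in each direction, and 102400 KB / 18.681 s ≈ 5481 KB/s, which matches the reported aggregate bandwidth. The read completion latencies are the striking part: clat ranges from 174 µs (presumably cache hits) up to roughly 14.7 s, so with 20 synchronous jobs the CephFS mount spends most of its time queueing.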
-------------------------------------------------------------
For reference, the state of the Ceph cluster at test time:
[root@linux150 ~]# ceph status
cluster 69360bca-313e-42d6-8b57-34169d3cf6c2
health HEALTH_WARN
monmap e1: 1 mons at {a=192.168.1.150:6789/0}, election epoch 1, quorum 0 a
mdsmap e14: 1/1/1 up {0=a=up:active}
osdmap e21: 2 osds: 2 up, 2 in
pgmap v593: 576 pgs, 3 pools, 200 MB data, 81 objects
2478 MB used, 38461 MB / 40940 MB avail
576 active+clean
[root@linux150 ~]# ceph osd tree
# id weight type name up/down reweight
-1 2 root default
-3 2 rack unknownrack
-2 1 host linux104
0 1 osd.0 up 1
-4 1 host linux105
1 1 osd.1 up 1
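The cluster behind these numbers is minimal: one monitor, one active MDS, and two OSDs on separate hosts (linux104 and linux105). Since all 576 PGs are active+clean on just two OSDs, the pools are replicated twice, so every write must be committed on both hosts across the network before it is acknowledged. The replica count can be confirmed per pool with a command like the following (pool name assumed to be the default rbd pool):

ceph osd pool get rbd size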
------------------------------------------------------------------------
2. Ceph Block Device (RBD)
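The same fio job now runs against a directory backed by a mapped RBD image (the disk stats at the end of the run show the device rbd1). The image setup is not part of the original notes; a minimal sketch, assuming an image named test in the default rbd pool, an ext4 filesystem, and a hypothetical mount point /root/myceph (commands run as root, matching the prompt):

rbd create test --size 1024      # 1 GB image in the rbd pool
rbd map test                     # shows up as /dev/rbd1 here
mkfs.ext4 /dev/rbd1
mkdir -p /root/myceph
mount /dev/rbd1 /root/myceph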
[root@linux105 myceph]# fio --rw=rw --bs=1m --size=10M --numjobs=20 --group_reporting --name=test-rw
test-rw: (g=0): rw=rw, bs=1M-1M/1M-1M/1M-1M, ioengine=sync, iodepth=1
...
fio-2.1.7
Starting 20 processes
test-rw: Laying out IO file(s) (1 file(s) / 10MB)
... (the same line repeats once per job, 20 in total)
Jobs: 2 (f=2): [________MM__________] [72.7% done] [0KB/2045KB/0KB /s] [0/1/0 iops] [eta 00m:03s]
test-rw: (groupid=0, jobs=20): err= 0: pid=2538: Tue May 13 15:34:24 2014
read : io=102400KB, bw=12436KB/s, iops=12, runt= 8234msec
clat (msec): min=11, max=3424, avg=792.54, stdev=526.67
lat (msec): min=11, max=3424, avg=792.54, stdev=526.67
clat percentiles (msec):
| 1.00th=[ 12], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 127],
| 30.00th=[ 494], 40.00th=[ 898], 50.00th=[ 1012], 60.00th=[ 1057],
| 70.00th=[ 1074], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1205],
| 99.00th=[ 1844], 99.50th=[ 3425], 99.90th=[ 3425], 99.95th=[ 3425],
| 99.99th=[ 3425]
bw (KB /s): min= 296, max= 4007, per=9.82%, avg=1221.00, stdev=578.75
write: io=102400KB, bw=12436KB/s, iops=12, runt= 8234msec
clat (msec): min=1, max=2938, avg=181.79, stdev=473.59
lat (msec): min=1, max=2938, avg=182.10, stdev=473.60
clat percentiles (usec):
| 1.00th=[ 1720], 5.00th=[ 1784], 10.00th=[ 1800], 20.00th=[ 1816],
| 30.00th=[ 1896], 40.00th=[ 4448], 50.00th=[ 4640], 60.00th=[ 5984],
| 70.00th=[ 7840], 80.00th=[26240], 90.00th=[937984], 95.00th=[1089536],
| 99.00th=[2146304], 99.50th=[2932736], 99.90th=[2932736], 99.95th=[2932736],
| 99.99th=[2932736]
bw (KB /s): min= 348, max= 3512, per=12.93%, avg=1607.73, stdev=684.17
lat (msec) : 2=15.50%, 4=2.50%, 10=17.00%, 20=9.50%, 50=7.00%
lat (msec) : 250=2.00%, 500=4.50%, 750=3.00%, 1000=9.00%, 2000=28.50%
lat (msec) : >=2000=1.50%
cpu : usr=0.02%, sys=0.37%, ctx=485, majf=0, minf=592
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=100/w=100/d=0, short=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: io=102400KB, aggrb=12436KB/s, minb=12436KB/s, maxb=12436KB/s, mint=8234msec, maxt=8234msec
WRITE: io=102400KB, aggrb=12436KB/s, minb=12436KB/s, maxb=12436KB/s, mint=8234msec, maxt=8234msec
Disk stats (read/write):
rbd1: ios=354/53, merge=53/8, ticks=91797/114639, in_queue=580044, util=98.14%
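The same arithmetic holds here: 102400 KB / 8.234 s ≈ 12436 KB/s in each direction, roughly 2.3× the CephFS throughput for the identical workload. Note the device-level stats: rbd1 reports util=98.14%, so the mapped block device is essentially saturated for the whole run.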
-------------------------------------------------------------
3. Local disk, for comparison:
bill@ubuntu:/test$ sudo fio --rw=rw --bs=1m --size=10M --numjobs=20 --group_reporting --name=test-rw
test-rw: (g=0): rw=rw, bs=1M-1M/1M-1M/1M-1M, ioengine=sync, iodepth=1
...
test-rw: (g=0): rw=rw, bs=1M-1M/1M-1M/1M-1M, ioengine=sync, iodepth=1
fio-2.1.3
Starting 20 processes
test-rw: Laying out IO file(s) (1 file(s) / 10MB)
... (the same line repeats once per job, 20 in total)
Jobs: 15 (f=15): [MMMMMM__MMMMM_MEMM_M] [30.0% done] [30476KB/52825KB/0KB /s] [29/51/0 iops] [eta 00m:07s]
test-rw: (groupid=0, jobs=20): err= 0: pid=1705: Wed May 7 10:02:02 2014
read : io=102400KB, bw=27579KB/s, iops=26, runt= 3713msec
clat (msec): min=69, max=666, avg=418.67, stdev=220.78
lat (msec): min=69, max=666, avg=418.67, stdev=220.78
clat percentiles (msec):
| 1.00th=[ 70], 5.00th=[ 88], 10.00th=[ 112], 20.00th=[ 184],
| 30.00th=[ 217], 40.00th=[ 265], 50.00th=[ 453], 60.00th=[ 611],
| 70.00th=[ 635], 80.00th=[ 644], 90.00th=[ 652], 95.00th=[ 660],
| 99.00th=[ 668], 99.50th=[ 668], 99.90th=[ 668], 99.95th=[ 668],
| 99.99th=[ 668]
bw (KB /s): min= 1026, max= 2989, per=6.30%, avg=1738.25, stdev=357.66
write: io=102400KB, bw=27579KB/s, iops=26, runt= 3713msec
clat (usec): min=486, max=1658.8K, avg=199159.91, stdev=382021.57
lat (usec): min=506, max=1658.9K, avg=199212.05, stdev=382025.77
clat percentiles (usec):
| 1.00th=[ 486], 5.00th=[ 1064], 10.00th=[ 1624], 20.00th=[ 2224],
| 30.00th=[ 4832], 40.00th=[ 7520], 50.00th=[21376], 60.00th=[67072],
| 70.00th=[144384], 80.00th=[259072], 90.00th=[528384], 95.00th=[1302528],
| 99.00th=[1613824], 99.50th=[1662976], 99.90th=[1662976], 99.95th=[1662976],
| 99.99th=[1662976]
bw (KB /s): min= 617, max= 5988, per=11.05%, avg=3046.22, stdev=2069.78
lat (usec) : 500=0.50%, 750=0.50%, 1000=0.50%
lat (msec) : 2=6.50%, 4=5.00%, 10=7.00%, 20=4.00%, 50=4.50%
lat (msec) : 100=8.00%, 250=21.50%, 500=13.00%, 750=25.00%, 1000=1.00%
lat (msec) : 2000=3.00%
cpu : usr=0.03%, sys=1.19%, ctx=506, majf=0, minf=559
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=100/w=100/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
READ: io=102400KB, aggrb=27578KB/s, minb=27578KB/s, maxb=27578KB/s, mint=3713msec, maxt=3713msec
WRITE: io=102400KB, aggrb=27578KB/s, minb=27578KB/s, maxb=27578KB/s, mint=3713msec, maxt=3713msec
Disk stats (read/write):
dm-0: ios=389/19, merge=0/0, ticks=96056/16, in_queue=96140, util=94.88%, aggrios=370/14, aggrmerge=30/5, aggrticks=87000/16, aggrin_queue=87016, aggrutil=94.48%
sda: ios=370/14, merge=30/5, ticks=87000/16, in_queue=87016, util=94.48%
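Pulling the three runs together (identical 20-job, 1 MB, mixed read/write fio workload; the read and write aggregates are equal in each run because --rw=rw splits the data evenly):

Backend             Runtime     Aggregate read / write bandwidth
CephFS mount        18681 ms     5481 KB/s
RBD block device     8234 ms    12436 KB/s
Local disk (sda)     3713 ms    27578 KB/s

The ordering is what one would expect on a two-OSD cluster: both Ceph paths pay for network round trips and two-way replication on writes, and CephFS additionally routes metadata through the MDS, leaving the local disk roughly 2.2× faster than RBD and 5× faster than CephFS on this workload.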