EMC CLARiiON CX480 Installation and Configuration Guide

2009.12.21
Table of Contents
I. Initialize the EMC CLARiiON CX480 Disk Array
II. EMC CLARiiON CX480 Storage Configuration
1. Create RAID Groups
2. Create LUNs
3. Create MetaLUNs
4. Create Hot Spare Disks
III. Zoning the Fibre Channel Switches
IV. Install the Navisphere and PowerPath Software
1. Install the Navisphere Agent and CLI Software
2. Install the PowerPath Multipathing Software
V. Create Storage Groups on the EMC CLARiiON CX480 and Map Them to Hosts
VI. Host Recognition of EMC CLARiiON CX480 Storage
VII. EMC CLARiiON SnapView BCV Configuration
1. Map the Clone LUNs to Host BJWMPA01
2. Create a Security Trust Account
3. Create Clone Private LUNs
4. Create Clone Groups
5. Add LUNs to the Clone Groups and Initialize the Clone LUNs
6. Write a Script to Perform BCV Synchronization
7. Mount the Clone LUNs on the Host
I. Initialize the EMC CLARiiON CX480 Disk Array
First, install the EMC CLARiiON disk array initialization tool, the Navisphere Array Initialization Tool (NSI), on a laptop or desktop computer.

Connect the network ports of the CX480's SPA and SPB to the laptop through a hub, as shown below.
Using the Windows platform as an example, complete the initialization as follows. First launch NSI from the Start menu, as shown below.
Then click "Next" to scan for the currently available disk arrays.
The discovered disk array is displayed with the device name FCNCX094700030; click "Next" to continue.
Set the SPA IP address to "109.10.1.184" with the name "CX480-SPA", and the SPB IP address to "109.10.1.185" with the name "CX480-SPB". Also set the subnet mask to "255.255.255.0" and the gateway to "109.10.1.1", then click "Next" to continue.
Finally, you are prompted to set the Navisphere Manager login user name to "admin" and the password to "password". When the settings are applied, the disk array reboots; once the reboot completes, array initialization is finished.
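Once the array is back online, the management interface can be verified from any host with the Navisphere CLI installed; a minimal check, using the credentials set above:
#naviseccli -h 109.10.1.184 -user admin -password password -scope 0 getagent
If initialization succeeded, the command returns the array's name, serial number, and FLARE revision.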

II. EMC CLARiiON CX480 Storage Configuration
1. Create RAID Groups
Per the CCPF plan, create the RAID Groups as follows.
On a Windows host with a Java JRE installed, launch the Internet Explorer browser and enter the IP address of the CX480 SPA (or SPB), 109.10.1.184, to open the Navisphere Manager storage management interface. Log in with the user name and password "admin/password", as shown below.
Right-click "RAID Group" and select "Create RAID Group", as shown below.
In the dialog that appears, set the RAID Group ID to "0", select "RAID 5" as the RAID Type, and choose the required disks using the "Manual" method, as shown below.
Clear all disks from the right-hand pane, then select the required disks in the left-hand pane and add them to the right-hand pane. When the selection is complete, click "OK" to confirm.
Back on the previous page, verify that the disk selection is correct, then click "Apply" to confirm.
A prompt indicates that RAID Group 0 was created successfully, as shown below.
Repeat the steps above to create the remaining RAID Groups.
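The same RAID Groups can also be created with the Navisphere CLI. A sketch for RAID Group 0, assuming hypothetical disk positions 0_0_4 through 0_0_8 in Bus_Enclosure_Disk notation (substitute the disks from the CCPF plan):
#naviseccli -h 109.10.1.184 createrg 0 0_0_4 0_0_5 0_0_6 0_0_7 0_0_8
Note that on CLARiiON arrays the RAID type of a group is fixed when the first LUN is bound to it, as shown in the bind example in the next subsection.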

2. Create LUNs
Per the CCPF plan, create traditional FLARE LUNs as shown below. Each LUN is 55 GB in size, and the LUNs are distributed across the two SPs, as shown below.
Right-click RAID Group 1 and select "Create LUN", as shown below.
Set the LUN capacity to 55 GB and the LUN ID to 1, as shown below.
Then click the "Advanced" tab and set the "Default Owner" to "SPA".
Finally, click "Apply" to complete the LUN creation.

Create the remaining LUNs according to the CCPF plan.
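The equivalent Navisphere CLI operation, a sketch assuming RAID Group 1 already exists, binds LUN 1 as a 55 GB RAID 5 LUN with SPA as its default owner:
#naviseccli -h 109.10.1.184 bind r5 1 -rg 1 -cap 55 -sq gb -sp a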

3. Create MetaLUNs
Per the CCPF, the 55 GB component LUNs created above are combined into MetaLUNs to obtain higher I/O performance. The mapping between LUNs and MetaLUNs is shown in the figure below.

Using MetaLUN1 as an example, the following describes how LUN1 through LUN4 are combined into MetaLUN1.

First expand RAID Group 1, right-click LUN1, and select "Expand", as shown below.
The storage expansion wizard opens; click "Next" to continue, as shown below.
Select "Striping" as the expansion mode, then click "Next" to continue, as shown below.
Then select LUN2, LUN3, and LUN4 and add them to LUN1, then click "Next" to continue.
Set the size of the new LUN by selecting the second option, "Maximum Capacity", as shown below.
Set the name of the new LUN to "MetaLUN1" and its Default Owner to "SPA", then click "Next" to continue.
A configuration summary is displayed; after verifying it, click "Finish" to complete the expansion, as shown below.
A prompt indicates that MetaLUN1 was created successfully; click "Finish" to close the wizard, as shown below.
Then repeat the steps above to create MetaLUN2 through MetaLUN7.
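For scripted deployments, the same striped expansion can be done with the metalun command; a sketch for MetaLUN1, assuming component LUN IDs 1 through 4 (verify the switch names against the naviseccli reference for your FLARE release):
#naviseccli -h 109.10.1.184 metalun -expand -base 1 -lus 2 3 4 -type S -name MetaLUN1 -defaultowner A -o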

4. Create Hot Spare Disks
Per the CCPF, create the hot spare disks.

First create a RAID Group as before, selecting the required disk, for example Bus 0 Enclosure 0 Disk 13, as shown below.
Then select "Hot Spare" as the RAID Type, as shown below.
The system then automatically creates hot spare LUN 8191, completing the hot spare disk, as shown below.
Repeat the steps above to create the remaining hot spare disks.

When creation is complete, the result is displayed as shown below.
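A CLI sketch of the same two steps, assuming a hypothetical RAID Group ID 100 for the disk at Bus 0 Enclosure 0 Disk 13 and hot spare LUN ID 8191:
#naviseccli -h 109.10.1.184 createrg 100 0_0_13
#naviseccli -h 109.10.1.184 bind hs 8191 -rg 100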
III. Zoning the Fibre Channel Switches
On the Fibre Channel switches, create zones between the CX480 SPs and the host HBAs so that the hosts can correctly discover the storage device, as illustrated in the sketch below.
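The exact commands depend on the switch vendor. An illustrative sketch for a Brocade switch CLI, assuming hypothetical WWPNs for one host HBA and one SPA port, and an active zone configuration named prod_cfg (the zone name, configuration name, and both WWPNs must be replaced with real values):
zonecreate "bjwmpa01_hba0_spa0", "10:00:00:00:c9:xx:xx:xx; 50:06:01:60:xx:xx:xx:xx"
cfgadd "prod_cfg", "bjwmpa01_hba0_spa0"
cfgsave
cfgenable "prod_cfg"
Single-initiator zoning is the usual practice: one zone per HBA-to-SP-port pair, repeated for every path.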

IV. Install the Navisphere and PowerPath Software
1. Install the Navisphere Agent and CLI Software
Upload the installation media NaviHostAgent-HPUX-32-NA-en_US-6.29.0.6.1-1.dep and NaviCLI-HPUX-32-NA-en_US-6.29.0.6.1-1.dep to the /tmp/EMC/Agent directory on the HP-UX host, then run the following commands to install the Host Agent and the Navisphere CLI packages:
#/usr/sbin/swinstall -s /tmp/EMC/Agent/NaviHostAgent-HPUX-32-NA-en_US-6.29.0.6.1-1.dep -x mount_all_filesystems=false NAVIAGENT
#/usr/sbin/swinstall -s /tmp/EMC/Agent/NaviCLI-HPUX-32-NA-en_US-6.29.0.6.1-1.dep -x mount_all_filesystems=false NAVICLI
Starting and stopping the Agent process:
Start the Agent process:
#/sbin/init.d/agent start
Stop the Agent process:
#/sbin/init.d/agent stop
2. Install the PowerPath Multipathing Software
For HP-UX 11i v3, install PowerPath version 5.1; the operating system must first be updated with the September 2008 (200809) HP-UX patch bundle.

First upload the PowerPath package EMCPower.HPUX.PI.5.1.0.GA.b160.tar.gz and the 5.1 SP1 patch package EMCPower.HPUX.PI.5.1.SP1.GA.b019.tar.gz to the /tmp/EMC/PP directory, then decompress them with the gunzip command:
#gunzip EMCPower.HPUX.PI.5.1.0.GA.b160.tar.gz
#gunzip EMCPower.HPUX.PI.5.1.SP1.GA.b019.tar.gz
After decompression, create a PPdepot directory under the PP directory, then use swcopy to load the tar files into the PPdepot depot:
#swcopy -x mount_all_filesystems=false -s /tmp/EMC/PP/EMCPower.HPUX.PI.5.1.0.GA.b160.tar EMCpower @ /tmp/EMC/PP/PPdepot
#swcopy -x mount_all_filesystems=false -s /tmp/EMC/PP/EMCPower.HPUX.PI.5.1.SP1.GA.b019.tar EMCpower_patch511 @ /tmp/EMC/PP/PPdepot
Finally, run the following command to install PowerPath and the patch package. Because the kernel must be rebuilt, the operating system reboots automatically when the installation completes:
#swinstall -x autoreboot=true -x mount_all_filesystems=false -s /tmp/EMC/PP/PPdepot/ EMCpower EMCpower_patch511
After the installation, enter the PowerPath software license with the following command:
#emcpreg -add BTPP-DB4E-UFQE-Q33U-M99C-TDNY
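To confirm that the key was accepted, list the registered keys and check the overall registration state:
#emcpreg -list
#powermt check_registration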
V. Create Storage Groups on the EMC CLARiiON CX480 and Map Them to Hosts
Right-click "Storage Groups" and select "Create Storage Group", as shown below.
Enter "BJWMPA01" as the Storage Group name and click "OK" to confirm, as shown below.
Once the Storage Group has been created, right-click the group "BJWMPA01" and select "Select LUNs", as shown below.
Select the corresponding LUNs from SPA and SPB, as shown below.
Then click the "Hosts" tab and select the host to map: add the host bjwmpa01 from the left-hand pane to the right-hand pane, as shown below.
Finally, click OK to confirm and complete the LUN-to-host mapping.

Then repeat the steps above to create the group "BJWMPD01" and assign the corresponding LUNs to host BJWMPD01.
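The same operations are available through the naviseccli storagegroup command; a sketch that creates the group, presents array LUN 1 (MetaLUN1) to the host as host LUN 0, and connects the host, assuming bjwmpa01 has already registered with the array through the Host Agent:
#naviseccli -h 109.10.1.184 storagegroup -create -gname BJWMPA01
#naviseccli -h 109.10.1.184 storagegroup -addhlu -gname BJWMPA01 -hlu 0 -alu 1
#naviseccli -h 109.10.1.184 storagegroup -connecthost -host bjwmpa01 -gname BJWMPA01 -o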

VI. Host Recognition of EMC CLARiiON CX480 Storage
Scan for and identify the disk array devices with the following commands:
#ioscan -fnC disk
#insf -e
#ioscan -kfnC disk
#ioscan -funC disk | grep DGC    # list the EMC devices
disk 446 0/6/0/0/0/0/2/0/0/1.1.42.0.0.0.1 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 447 0/6/0/0/0/0/2/0/0/1.1.42.0.0.0.2 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 448 0/6/0/0/0/0/2/0/0/1.1.42.0.0.0.3 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 449 0/6/0/0/0/0/2/0/0/1.1.42.0.0.0.4 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 450 0/6/0/0/0/0/2/0/0/1.1.42.0.0.0.5 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 451 0/6/0/0/0/0/2/0/0/1.1.42.0.0.0.6 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 452 0/6/0/0/0/0/2/0/0/1.1.42.0.0.0.7 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 453 0/6/0/0/0/0/2/0/0/1.1.42.0.0.1.0 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 17 0/6/0/0/0/0/2/0/0/1.1.42.0.0.6.3 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 18 0/6/0/0/0/0/2/0/0/1.1.42.0.0.6.4 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 19 0/6/0/0/0/0/2/0/0/1.1.42.0.0.6.5 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 20 0/6/0/0/0/0/2/0/0/1.1.42.0.0.6.6 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 21 0/6/0/0/0/0/2/0/0/1.1.42.0.0.6.7 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 22 0/6/0/0/0/0/2/0/0/1.1.42.0.0.7.0 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 23 0/6/0/0/0/0/2/0/0/1.1.42.0.0.7.1 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 438 0/6/0/0/0/0/2/0/0/1.1.43.0.0.0.1 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 439 0/6/0/0/0/0/2/0/0/1.1.43.0.0.0.2 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 440 0/6/0/0/0/0/2/0/0/1.1.43.0.0.0.3 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 441 0/6/0/0/0/0/2/0/0/1.1.43.0.0.0.4 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 442 0/6/0/0/0/0/2/0/0/1.1.43.0.0.0.5 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 443 0/6/0/0/0/0/2/0/0/1.1.43.0.0.0.6 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 444 0/6/0/0/0/0/2/0/0/1.1.43.0.0.0.7 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 445 0/6/0/0/0/0/2/0/0/1.1.43.0.0.1.0 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 24 0/6/0/0/0/0/2/0/0/1.1.43.0.0.6.3 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 25 0/6/0/0/0/0/2/0/0/1.1.43.0.0.6.4 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 26 0/6/0/0/0/0/2/0/0/1.1.43.0.0.6.5 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 27 0/6/0/0/0/0/2/0/0/1.1.43.0.0.6.6 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 28 0/6/0/0/0/0/2/0/0/1.1.43.0.0.6.7 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 29 0/6/0/0/0/0/2/0/0/1.1.43.0.0.7.0 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 30 0/6/0/0/0/0/2/0/0/1.1.43.0.0.7.1 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 430 0/6/0/0/0/0/4/0/0/1.1.42.0.0.0.1 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 431 0/6/0/0/0/0/4/0/0/1.1.42.0.0.0.2 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 432 0/6/0/0/0/0/4/0/0/1.1.42.0.0.0.3 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 433 0/6/0/0/0/0/4/0/0/1.1.42.0.0.0.4 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 434 0/6/0/0/0/0/4/0/0/1.1.42.0.0.0.5 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 435 0/6/0/0/0/0/4/0/0/1.1.42.0.0.0.6 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 436 0/6/0/0/0/0/4/0/0/1.1.42.0.0.0.7 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 437 0/6/0/0/0/0/4/0/0/1.1.42.0.0.1.0 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 4 0/6/0/0/0/0/4/0/0/1.1.42.0.0.6.3 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 5 0/6/0/0/0/0/4/0/0/1.1.42.0.0.6.4 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 6 0/6/0/0/0/0/4/0/0/1.1.42.0.0.6.5 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 7 0/6/0/0/0/0/4/0/0/1.1.42.0.0.6.6 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 8 0/6/0/0/0/0/4/0/0/1.1.42.0.0.6.7 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 9 0/6/0/0/0/0/4/0/0/1.1.42.0.0.7.0 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 10 0/6/0/0/0/0/4/0/0/1.1.42.0.0.7.1 sdisk CLAIMED DEVICE DGC CX4-480WDR5
disk 406 0/6/0/0/0/0/4/0/0/1.1.43.0.0.0.1 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 407 0/6/0/0/0/0/4/0/0/1.1.43.0.0.0.2 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 418 0/6/0/0/0/0/4/0/0/1.1.43.0.0.0.3 sdisk CLAIMED DEVICE DGC CX4-480WDR10
CX4-480WDR10
disk 424 0/6/0/0/0/0/4/0/0/1.1.43.0.0.0.5 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 427 0/6/0/0/0/0/4/0/0/1.1.43.0.0.0.6 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 428 0/6/0/0/0/0/4/0/0/1.1.43.0.0.0.7 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 429 0/6/0/0/0/0/4/0/0/1.1.43.0.0.1.0 sdisk CLAIMED DEVICE DGC CX4-480WDR10
disk 31 0/6/0/0/0/0/4/0/0/1.1.43.0.0.6.3 sdisk CLAIMED DEVICE DGC CX4-480WDR5
Then use the powermt command to generate and save the multipath disk configuration:
#powermt config
View the current EMC disks and their multipath information:
#powermt display dev=all class=clariion
CLARiiON ID=FCNCX094700030 [BJWMPA01]
Logical device ID=60060160EBD02600142425D33BE0DE11 [MetaLUN 1]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
14 0/6/0/0/0/0/2/0/0/1.1.42.0.0.0.1 c14t0d1 SP A0 active alive 0 0
15 0/6/0/0/0/0/2/0/0/1.1.43.0.0.0.1 c15t0d1 SP B0 active alive 0 0
16 0/6/0/0/0/0/4/0/0/1.1.42.0.0.0.1 c16t0d1 SP A2 active alive 0 0
17 0/6/0/0/0/0/4/0/0/1.1.43.0.0.0.1 c17t0d1 SP B2 active alive 0 0
CLARiiON ID=FCNCX094700030 [BJWMPA01]
Logical device ID=60060160EBD026002012627C95E2DE11 [LUN 1003]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
14 0/6/0/0/0/0/2/0/0/1.1.42.0.0.6.3 c14t6d3 SP A0 active alive 0 0
15 0/6/0/0/0/0/2/0/0/1.1.43.0.0.6.3 c15t6d3 SP B0 active alive 0 0
16 0/6/0/0/0/0/4/0/0/1.1.42.0.0.6.3 c16t6d3 SP A2 active alive 0 0
17 0/6/0/0/0/0/4/0/0/1.1.43.0.0.6.3 c17t6d3 SP B2 active alive 0 0
CLARiiON ID=FCNCX094700030 [BJWMPA01]
Logical device ID=60060160EBD0260020BF64143CE0DE11 [MetaLUN 2]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B Array failover mode: 1
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
14 0/6/0/0/0/0/2/0/0/1.1.42.0.0.0.2 c14t0d2 SP A0 active alive 0 0
15 0/6/0/0/0/0/2/0/0/1.1.43.0.0.0.2 c15t0d2 SP B0 active alive 0 0
16 0/6/0/0/0/0/4/0/0/1.1.42.0.0.0.2 c16t0d2 SP A2 active alive 0 0
17 0/6/0/0/0/0/4/0/0/1.1.43.0.0.0.2 c17t0d2 SP B2 active alive 0 0
CLARiiON ID=FCNCX094700030 [BJWMPA01]
Logical device ID=60060160EBD02600428A3D583CE0DE11 [MetaLUN 4]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B Array failover mode: 1
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
14 0/6/0/0/0/0/2/0/0/1.1.42.0.0.0.4 c14t0d4 SP A0 active alive 0 0
15 0/6/0/0/0/0/2/0/0/1.1.43.0.0.0.4 c15t0d4 SP B0 active alive 0 0
16 0/6/0/0/0/0/4/0/0/1.1.42.0.0.0.4 c16t0d4 SP A2 active alive 0 0
17 0/6/0/0/0/0/4/0/0/1.1.43.0.0.0.4 c17t0d4 SP B2 active alive 0 0
CLARiiON ID=FCNCX094700030 [BJWMPA01]
Logical device ID=60060160EBD0260044DA277B3CE0DE11 [MetaLUN 5]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
14 0/6/0/0/0/0/2/0/0/1.1.42.0.0.0.5 c14t0d5 SP A0 active alive 0 0
15 0/6/0/0/0/0/2/0/0/1.1.43.0.0.0.5 c15t0d5 SP B0 active alive 0 0
16 0/6/0/0/0/0/4/0/0/1.1.42.0.0.0.5 c16t0d5 SP A2 active alive 0 0
17 0/6/0/0/0/0/4/0/0/1.1.43.0.0.0.5 c17t0d5 SP B2 active alive 0 0
CLARiiON ID=FCNCX094700030 [BJWMPA01]
Logical device ID=60060160EBD026004EDAC6313CE0DE11 [MetaLUN 3]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
14 0/6/0/0/0/0/2/0/0/1.1.42.0.0.0.3 c14t0d3 SP A0 active alive 0 0
15 0/6/0/0/0/0/2/0/0/1.1.43.0.0.0.3 c15t0d3 SP B0 active alive 0 0
16 0/6/0/0/0/0/4/0/0/1.1.42.0.0.0.3 c16t0d3 SP A2 active alive 0 0
17 0/6/0/0/0/0/4/0/0/1.1.43.0.0.0.3 c17t0d3 SP B2 active alive 0 0
CLARiiON ID=FCNCX094700030 [BJWMPA01]
Logical device ID=60060160EBD0260072D3C3B23CE0DE11 [MetaLUN 7]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
14 0/6/0/0/0/0/2/0/0/1.1.42.0.0.0.7 c14t0d7 SP A0 active alive 0 0
15 0/6/0/0/0/0/2/0/0/1.1.43.0.0.0.7 c15t0d7 SP B0 active alive 0 0
16 0/6/0/0/0/0/4/0/0/1.1.42.0.0.0.7 c16t0d7 SP A2 active alive 0 0
17 0/6/0/0/0/0/4/0/0/1.1.43.0.0.0.7 c17t0d7 SP B2 active alive 0 0
CLARiiON ID=FCNCX094700030 [BJWMPA01]
Logical device ID=60060160EBD02600D400433CA9E0DE11 [LUN 29]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
14 0/6/0/0/0/0/2/0/0/1.1.42.0.0.1.0 c14t1d0 SP A0 active alive 0 0
15 0/6/0/0/0/0/2/0/0/1.1.43.0.0.1.0 c15t1d0 SP B0 active alive 0 0
16 0/6/0/0/0/0/4/0/0/1.1.42.0.0.1.0 c16t1d0 SP A2 active alive 0 0
17 0/6/0/0/0/0/4/0/0/1.1.43.0.0.1.0 c17t1d0 SP B2 active alive 0 0
Finally, save the configuration:
#powermt save
VII. EMC CLARiiON SnapView BCV Configuration
1. Map the Clone LUNs to Host BJWMPA01
Right-click "Storage Group" and map LUN1003 and LUN1005 through LUN1010 to host bjwmpa01, as shown below.
When the mapping is complete, the result is displayed as follows.
2. Create a Security Trust Account
On host bjwmpa01, use naviseccli to add a security trust account:
#naviseccli -address 109.10.1.184 -AddUserSecurity -password password -scope 0 -user admin
3. Create Clone Private LUNs
Right-click the array FCNCX094700030 and select "SnapView" > "Clone Feature Properties", as shown below.

To perform the operation from the command line instead, run:
#naviseccli -address 109.10.1.184 clone -allocatecpl -spA 1001 -spB 1002
4. Create Clone Groups
The seven Clone Groups are named CG1, CG2, CG3, CG4, CG5, CG6, and CG7.

Right-click the array FCNCX094700030 and select "SnapView" > "Create Clone Group", as shown below.
Enter "CG1" as the Clone Group name, select "MetaLUN1" as the source LUN, and click OK to confirm, as shown below.
Then repeat the steps above to create Clone Groups CG2 through CG7 for MetaLUN2 through MetaLUN7, as shown below.
To perform the operations from the command line instead, run:
#naviseccli -address 109.10.1.184 clone -createclonegroup -name CG1 -luns 1 -o
#naviseccli -address 109.10.1.184 clone -createclonegroup -name CG2 -luns 5 -o
#naviseccli -address 109.10.1.184 clone -createclonegroup -name CG3 -luns 9 -o
#naviseccli -address 109.10.1.184 clone -createclonegroup -name CG4 -luns 13 -o
#naviseccli -address 109.10.1.184 clone -createclonegroup -name CG5 -luns 17 -o
#naviseccli -address 109.10.1.184 clone -createclonegroup -name CG6 -luns 21 -o
#naviseccli -address 109.10.1.184 clone -createclonegroup -name CG7 -luns 25 -o
5. Add LUNs to the Clone Groups and Initialize the Clone LUNs
Map the MetaLUNs to the Clone LUNs according to the plan in the table below.
Expand the SnapView menu, right-click "CG1", and select "Add Clone", as shown below.
Select LUN1003 and check "Initial Sync Required"; the synchronization rate defaults to "Medium" and can be adjusted as needed. Then click "OK", as shown below.
Once the Clone LUN has been added, synchronization starts and its progress is displayed, as shown below.
Repeat the steps above to add Clone LUNs to CG2 through CG7 and synchronize them.
To perform the operations from the command line instead, run:
#naviseccli -address 109.10.1.184 clone -addclone -name CG1 -luns 1003 -IsSyncRequired 1 -o
#naviseccli -address 109.10.1.184 clone -addclone -name CG2 -luns 1005 -IsSyncRequired 1 -o
#naviseccli -address 109.10.1.184 clone -addclone -name CG3 -luns 1006 -IsSyncRequired 1 -o
#naviseccli -address 109.10.1.184 clone -addclone -name CG4 -luns 1007 -IsSyncRequired 1 -o
#naviseccli -address 109.10.1.184 clone -addclone -name CG5 -luns 1008 -IsSyncRequired 1 -o
#naviseccli -address 109.10.1.184 clone -addclone -name CG6 -luns 1009 -IsSyncRequired 1 -o
#naviseccli -address 109.10.1.184 clone -addclone -name CG7 -luns 1010 -IsSyncRequired 1 -o
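While the initial copy runs, the synchronization progress of any group can be polled from the CLI; a sketch (verify the -PercentSynced property name against your naviseccli release):
#naviseccli -address 109.10.1.184 clone -listclone -name CG1 -CloneState -PercentSynced
The script in the next section uses the same -listclone query to drive the BCV cycle automatically.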
6. Write a Script to Perform BCV Synchronization
Create the script bcv_clone_cx480.sh with the following contents:
#!/usr/bin/ksh
NAVISECCLI=/opt/Navisphere/bin/naviseccli   # path to the Navisphere Secure CLI
ID=0100000000000000                         # clone ID of the single clone in each Clone Group
CX480SPA=109.10.1.184                       # SPA management IP address
for i in 1 2 3 4 5 6 7
do
# Query the clone condition of the clone in CG$i (e.g. "Administratively Fractured").
condition=`$NAVISECCLI -h $CX480SPA clone -listclone -name CG$i -cloneid $ID -CloneCondition|grep CloneCondition|awk -F: '{print $2}'`
if echo $condition|grep -i Fractured >/dev/null 2>&1; then
echo "Synchronizing CG$i started."
$NAVISECCLI -h $CX480SPA clone -syncclone -name CG$i -cloneid $ID -o
else
echo "Clone Group CG$i must be fractured before it can be synchronized."
$NAVISECCLI -h $CX480SPA clone -fractureclone -name CG$i -cloneid $ID -o
sleep 5
$NAVISECCLI -h $CX480SPA clone -syncclone -name CG$i -cloneid $ID -o
fi
done
sleep 300
i=1
while [ $i -le 7 ]
do
# Wait until the clone in CG$i reaches the Consistent or Synchronized state.
state=`$NAVISECCLI -h $CX480SPA clone -listclone -name CG$i -cloneid $ID -CloneState|grep CloneState|awk '{print $2}'`
if echo $state | egrep 'Consistent|Synchronized' >/dev/null 2>&1; then
i=$(($i+1))
else
echo "Waiting for CG$i to synchronize..."
sleep 10
fi
done
sleep 10
# Fracture all seven clones in a single consistent operation.
$NAVISECCLI -h $CX480SPA clone -consistentfractureclones -CloneGroupNameCloneId CG1 $ID CG2 $ID CG3 $ID CG4 $ID CG5 $ID CG6 $ID CG7 $ID -o
echo "Consistent fracture of all clone groups completed successfully.\n"
7. Mount the Clone LUNs on the Host
After the script performs the BCV synchronization and the clones are split through the consistent fracture, host BJWMPA01 scans its disks and identifies the Clone LUN devices with the powermt command.

Finally, the host creates a new VG: change the VGID of the Clone LUNs, add them to the new VG, activate it, and mount the LVs at the appropriate mount points, as sketched below.
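A minimal sketch of those host-side steps, assuming a hypothetical clone device /dev/dsk/c14t6d3, a new volume group named vgclone with an unused minor number 0x0a0000, a logical volume lvol1, and a mount point /clone (all of these must be adapted to the real environment):
#vgchgid /dev/rdsk/c14t6d3
#mkdir /dev/vgclone
#mknod /dev/vgclone/group c 64 0x0a0000
#vgimport /dev/vgclone /dev/dsk/c14t6d3
#vgchange -a y /dev/vgclone
#mount /dev/vgclone/lvol1 /clone
vgchgid assigns the copy a new VGID so that the Clone LUNs can coexist on the same host with the source volume group.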
