Terabyte IDE RAID-5 Disk Arrays


Assembling a RAID-5 Disk Array Server

A hands-on guide to assembling a RAID-5 disk array server (hardware RAID-5 build): I recently assembled a 1U rack-mount storage server for a friend, built around a dual-core Pentium 4 820D processor and eight hard disks. During the build I set up both a hardware RAID-5 and a software RAID-5 array. The process was instructive, so here is the detailed setup, in the hope that it gives readers a few more ideas.

First assemble the server. Attach SATA data cables to the hard disks and plug them into the four SATA ports on the motherboard, and connect my LG burner as the optical drive with a parallel IDE cable; this motherboard provides only one parallel IDE port, which is exactly enough for the optical drive. Connect a monitor, keyboard, and mouse, power on for a test, and once it boots cleanly press DEL to enter the BIOS.

In the BIOS you can see that the motherboard has recognized the four 250 GB Western Digital drives and the LG burner.

Next, enable the motherboard's hardware RAID-5 mode and combine the four disks into a RAID-5 array. This Foxconn motherboard is decent, but its one shortcoming is that the manual is in English only, so first-time users who do not read English well can easily get lost. I puzzled over it for quite a while and kept calling Foxconn's 800 technical support line until the operator was tired of my questions before I roughly figured it out. Follow along below.

First move the cursor to Integrated Peripherals and press Enter.

Select OnChip IDE Device and press Enter again.

Select SATA Mode. The motherboard's default for this option is IDE, that is, no RAID; press Enter to open the setting.

Move the cursor to RAID and press Enter.

The boot screen now shows that the four physical disks have been recognized by the motherboard's RAID function and prompts you to press CTRL-I to enter the detailed RAID setup.

Enter the RAID setup screen and, in the MAIN MENU, choose the first item, Create RAID Volume, to create a new RAID volume.

You are now in the CREATE VOLUME MENU. In the first field, Name, give the new volume a name; I used Volume0, but you could use something like tanghua. Move the cursor to the second item, RAID Level, and choose the RAID mode; the choices are RAID0, RAID1, RAID10, and RAID5, and we naturally pick the long-awaited RAID5.

Replacing the Disk Region of a RAID-5 Volume

If the disk containing part of a RAID-5 volume cannot be reactivated and the volume does not return to the Healthy status, you should replace the failed disk region of the RAID-5 volume. This can be done through the Windows interface or from the command line.

Using the Windows interface:

1. Open Computer Management (Local).

2. In the console tree, click Computer Management (Local), click Storage, and then click Disk Management.

3. Right-click the part of the RAID-5 volume on the failed disk, click Repair Volume, and then follow the instructions on the screen.

Notes:

- To perform this procedure on the local computer, you must be a member of the Backup Operators group or the Administrators group on the local computer, or you must have been delegated the appropriate authority. To perform this procedure remotely, you must be a member of the Backup Operators group or the Administrators group on the remote computer. If the computer is joined to a domain, members of the Domain Admins group may be able to perform this procedure. As a security best practice, consider using Run as to perform this procedure. For more information, see Default local groups, Default groups, and Using Run as.

- To open Computer Management, click Start, click Control Panel, double-click Administrative Tools, and then double-click Computer Management.

- To replace the disk region of a RAID-5 volume, you must have a dynamic disk whose unallocated space is at least as large as the region to repair. If no dynamic disk has enough unallocated space, the Repair Volume command cannot be used. (To verify that there is enough space, right-click the disk, click Properties, and check the size of the unallocated space. This size may be slightly smaller than shown in the graphical or list view.)

- When a member of a RAID-5 volume fails severely (for example, because of a power loss or a complete hard disk failure), a computer running a Windows Server 2003 operating system can regenerate the data from the remaining members of the RAID-5 volume.

- If the RAID-5 volume failed because the power or cabling of a device failed, the data of the failed member can be regenerated once the hardware fault is repaired.

- The RAID-5 volume will not show the Healthy status in Disk Management until data regeneration is complete.

Configuring RAID-5 on an IBM Server

1. Creating a RAID (container) on an Adaptec disk array controller. The steps for creating a container on this type of array card are as follows (note: back up the data on your server beforehand; configuring the disk array will erase all data on the server's hard disks!).

Step 1: When the prompt shown in Figure 1 appears during the power-on self-test, press the Ctrl+A key combination. This opens the array card's configuration utility, shown in Figure 2.

Step 2: Select Container configuration utility to reach the configuration screen shown in Figure 3.

Step 3: Choose the Initialize Drives option to initialize disks that are new or whose containers need to be re-created (note: initializing a disk erases all data on it), then press Enter to reach the screen shown in Figure 4. This screen lists the RAID card's channels and the disks attached to them; use the Insert key to select the disks to be initialized (the exact keys are given in the hints at the bottom of the screen, likewise below).

Step 4: When all the disks to be added to the array have been selected, press the spacebar; the warning box shown in Figure 5 appears. It states that initialization will erase all data on the selected disks and disconnect any users currently using them.

Step 5: Press Y to confirm, which brings up the main configuration menu (Main Menu) shown in Figure 6. Once the disks are initialized you can create containers at whatever RAID level you need (RAID 1, RAID 0, and so on); here we use RAID 5 as the example. In the main menu select the Create container option.

Step 6: Press Enter to reach the screen shown in Figure 7 and use the Insert key to move the disks that will make up the container into the list on the right, then press Enter. In the screen that follows (Figure 8), use Enter to choose the RAID level, and type the container's label and size; leave everything else at the defaults. Then confirm by selecting the Done button.

Step 7: The system now displays the message shown in Figure 9, which warns that the newly created container has no redundancy until its Scrub has completed successfully.

A Complete Walkthrough of RAID-5 Installation

Disk arrays (RAID, Redundant Array of Independent Disks) have become one of the technologies that network administrators, especially at small and medium-sized businesses, are all but required to master: RAID is used very widely and is one of the main approaches to data backup today.

However, many administrators have only seen the theory presented in various media and have never watched an actual disk array being configured, so their understanding stays superficial and they do not know where to start when they have to configure one themselves.

This article uses a concrete disk array configuration as the example, introduces the basic configuration steps, and shows the key screens, so that readers get a working feel for configuring a disk array.

To give a more complete picture, let us first briefly review the theory of disk arrays, which provides the basis for the actual configuration.

1. How disk arrays are implemented. A disk array can be implemented in two ways: as a software array or as a hardware array.

A software array uses the disk management functions of the network operating system itself to configure several ordinary SCSI-attached hard disks into a logical volume that forms the array.

For example, Microsoft's Windows NT/2000 Server/Server 2003 and Novell's NetWare both provide software array functions: Windows NT/2000 Server/Server 2003 offers RAID 0, RAID 1, and RAID 5, while NetWare implements RAID 1.

A software array can provide data redundancy, but the performance of the disk subsystem drops, in some cases considerably, by around 30%.

A hardware array is implemented with a dedicated disk array controller card, and that is the subject of this article.

Almost every current server above the entry level provides a RAID controller, whether integrated on the motherboard or on a separate card, and can easily provide array functions.

Hardware arrays offer online capacity expansion, dynamic RAID level migration, automatic data recovery, drive roaming, and high-speed caching.

They provide a complete solution for performance, data protection, reliability, availability, and manageability.

A disk array card has its own dedicated processor, such as Intel's i960 chip, the HPT370A/372, or the Silicon Image SIL3112A, as well as dedicated memory used as a data cache.

Adding Disks to a RAID-5 Configuration

Perform the following steps, which assume a PERC 3/Di (dual-channel integrated) controller. The example settings are for Windows 2000.

1. Confirm that all data on the server has been backed up.

2. Install the latest PERC driver for the operating system, then restart the computer.

3. Update to the latest PERC firmware, then restart the computer.

4. If the latest version of Array Manager is not installed, install it, together with the corresponding Array Manager SNMP patch. Make sure the system BIOS and ESM firmware are up to date, then restart the computer.

5. Shut down the computer, install the 3 new hard drives, and initialize them through the PERC BIOS configuration utility (CTRL+M).

6. Start the server and boot into the operating system.

7. Run Array Manager.

8. In Array Manager, expand the Arrays folder.

9. Expand the PERC 2 Subsystem folder.

10. Expand the Logical Array entry.

11. Expand the Array Group 0 entry.

12. Expand Virtual Disk 0.

13. Right-click Virtual Disk 0 and select Reconfigure.

14. The list on the left under Array Disks shows all 8 disk drives, with the original 5 drives already checked.

15. Check the 3 new disk drives so that all 8 drives are selected.

16. In the pane on the right select RAID-5 and type the new capacity (the dialog shows the maximum and minimum values that can be set; type the maximum).

17. Choose the desired stripe size (or keep the default), then click OK.

In my server, the RAID-5 array originally held 3 hard drives of 36 GB each (containing only 10 GB of data). I added 3 more 36 GB drives and reconfigured the array from 3 disks to 6, which took 6.5 hours. That time is unavoidable, because the data and parity must be read from the existing 3-disk configuration and then recalculated and rewritten across all 6 disks.
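A rough feel for why that takes so long: essentially the whole old array has to be read and the data plus parity rewritten in the new 6-disk geometry. The short C sketch below redoes that arithmetic with the numbers quoted above; treating the full old and new arrays as the amount moved is only an upper-bound illustration, not a measurement of the PERC controller.

    /* Back-of-the-envelope estimate of the effective throughput of the
     * RAID-5 reconfiguration described above.  Sizes are the ones quoted
     * in the text (3 and then 6 disks of 36 GB, 6.5 hours). */
    #include <stdio.h>

    int main(void) {
        const double disk_gb = 36.0;
        const double old_gb  = 3 * disk_gb;   /* data + parity to read  */
        const double new_gb  = 6 * disk_gb;   /* data + parity to write */
        const double hours   = 6.5;

        double moved_gb = old_gb + new_gb;
        double mb_per_s = moved_gb * 1024.0 / (hours * 3600.0);

        printf("~%.0f GB read+written in %.1f h -> about %.0f MB/s effective\n",
               moved_gb, hours, mb_per_s);
        return 0;
    }

The effective rate works out to only a dozen or so MB/s, far below the raw speed of any single disk, which is why an online restriping of a live array takes hours.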

RAID 5

RAID 5 is a storage solution. This overview covers its working principle, parity, reads and writes, storage layout, creation, failure analysis, and data recovery.

RAID 5 balances storage performance, data safety, and storage cost, and can be seen as a compromise between RAID 0 and RAID 1. It protects data, though less strongly than mirroring, while using disk space more efficiently than mirroring. Its read speed is close to that of RAID 0; writing is slightly slower than writing to a single disk because the parity information must also be written. Because several data blocks share one parity block, RAID 5 uses disk space more efficiently than RAID 1 and has a relatively low storage cost, which is why it is such a widely used solution.
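The parity mentioned above is just a bytewise XOR across the data blocks of each stripe, which is what lets any single lost block be rebuilt from the survivors. A minimal, self-contained C sketch of the idea (the disk count and block size are arbitrary illustrative values, not taken from any particular array):

    /* Minimal sketch of RAID-5 parity: the parity block of each stripe is
     * the bytewise XOR of the data blocks, so any single missing block can
     * be rebuilt by XOR-ing the surviving blocks. */
    #include <stdio.h>
    #include <string.h>

    #define NDISKS 4   /* 3 data blocks + 1 parity block per stripe */
    #define BLOCK  8   /* bytes per block, kept tiny for the example */

    int main(void) {
        unsigned char stripe[NDISKS][BLOCK] = {
            "data-A", "data-B", "data-C", ""   /* last block will hold parity */
        };

        /* Generate the parity block: XOR of all data blocks. */
        memset(stripe[NDISKS - 1], 0, BLOCK);
        for (int d = 0; d < NDISKS - 1; d++)
            for (int i = 0; i < BLOCK; i++)
                stripe[NDISKS - 1][i] ^= stripe[d][i];

        /* Simulate losing disk 1, then rebuild its block from the survivors. */
        unsigned char rebuilt[BLOCK] = {0};
        for (int d = 0; d < NDISKS; d++) {
            if (d == 1) continue;              /* skip the failed disk */
            for (int i = 0; i < BLOCK; i++)
                rebuilt[i] ^= stripe[d][i];
        }
        printf("rebuilt block from disk 1: %s\n", (char *)rebuilt); /* "data-B" */
        return 0;
    }

Because XOR is its own inverse, rebuilding a lost block is the same operation as generating the parity in the first place, and a write only has to update the changed block and its stripe's parity.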
Step 4: In the Assign Drive Letter and Path dialog, select the Assign the following drive letter option, click the drop-down arrow on the right to pick a drive letter for the RAID-5 volume so that it can be accessed and managed, and click Next.

Step 5: In the Format Volume dialog, keep Format this volume with the following settings selected and leave File system and Allocation unit size at their defaults. Type a volume label in the Volume label box to distinguish this volume from others, check Perform a quick format, and click Next.

Because every physical disk in a RAID-5 set carries parity information, analyzing a RAID-5 requires one more factor than analyzing a RAID-0: the position and rotation direction of the parity blocks. The direction in which the data blocks are laid out can also differ (so-called asynchronous versus synchronous layouts). In other words, four parameters matter for a RAID-5: first, the stripe (chunk) size, that is, how many sectors each block such as "A" or "B" occupies; second, the order of the disks in the array; third, the rotation direction of the parity blocks; and fourth, the direction in which the data blocks run.
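As a concrete illustration of those four parameters, the C sketch below maps logical block numbers onto member disks for one common rotating-parity layout (left-symmetric, the default used by the Linux md driver), treating the chunk size as a single block for simplicity. Real arrays differ in exactly the factors listed above, so this is an illustration of the idea rather than a recovery tool.

    /* Sketch of the left-symmetric RAID-5 layout: parity rotates backwards
     * one disk per stripe row, and data blocks continue just after the
     * parity position and wrap around. */
    #include <stdio.h>

    #define NDISKS 4

    void locate(long logical_block, long *row, int *data_disk, int *parity_disk) {
        *row = logical_block / (NDISKS - 1);           /* stripe row            */
        int offset = logical_block % (NDISKS - 1);     /* position within row   */
        *parity_disk = (NDISKS - 1) - (*row % NDISKS); /* parity rotates back   */
        *data_disk = (*parity_disk + 1 + offset) % NDISKS;
    }

    int main(void) {
        for (long b = 0; b < 12; b++) {
            long row; int d, p;
            locate(b, &row, &d, &p);
            printf("logical %2ld -> row %ld, data on disk %d, parity on disk %d\n",
                   b, row, d, p);
        }
        return 0;
    }

Running it prints the familiar staircase pattern in which the parity block moves from the last disk toward the first, one row at a time; a recovery tool has to discover the real chunk size, disk order, rotation, and data direction before such a mapping can be applied.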

RAID-5 Tutorial: A Must-Read for PC Enthusiasts

A disk array (RAID, Redundant Array of Independent Disks) combines several individual hard disks into one large logical disk, which both raises read/write throughput and provides better data safety.

RAID 5 is currently the most widely used RAID level; it needs at least 3 disks.

Each member disk is divided into stripes, and a parity value is computed across the corresponding stripes of the disks (an XOR operation); the parity data is spread evenly across all the disks.

A RAID 5 array built from n disks provides the capacity of n-1 disks, so its space utilization is very high.

If the data on any one disk is lost, it can be recomputed from the parity. When a disk fails, its contents can be rebuilt after the disk is replaced, so the loss of a single disk does not cause data loss.

RAID 5 combines data safety, fast reads and writes, and high space utilization, which is why it is used so widely.
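The n-1 capacity rule above is easy to make concrete with a quick calculation. The C snippet below compares the usable space of RAID 0, RAID 1, and RAID 5 for a hypothetical set of four 250 GB disks (the disk size is an arbitrary example, and RAID 1 is modeled as mirrored pairs).

    /* Worked example of usable capacity for n identical disks:
     * RAID 0 keeps all n (no redundancy), RAID 1 keeps n/2 (mirrored
     * pairs), RAID 5 keeps n-1. */
    #include <stdio.h>

    int main(void) {
        const int n = 4;               /* number of member disks      */
        const double size_gb = 250.0;  /* capacity of each disk (ex.) */
        const double raw = n * size_gb;

        double raid0 = n * size_gb;
        double raid1 = (n / 2) * size_gb;
        double raid5 = (n - 1) * size_gb;

        printf("RAID 0: %6.0f GB usable (%.0f%%), no redundancy\n",
               raid0, 100.0 * raid0 / raw);
        printf("RAID 1: %6.0f GB usable (%.0f%%)\n",
               raid1, 100.0 * raid1 / raw);
        printf("RAID 5: %6.0f GB usable (%.0f%%), survives one disk failure\n",
               raid5, 100.0 * raid5 / raw);
        return 0;
    }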

Disk arrays come in three forms: external disk array enclosures, internal disk array controller cards, and software emulation.

This tutorial uses an internal disk array controller card as the example and walks through building a RAID 5 in detail.

1. Install the disk array controller card in the computer and install its driver.
2. At boot, press Ctrl+H to enter the RAID configuration interface, select Configuration Wizard, and press Enter.
3. Select Add Configuration, then Next.
4. Select Manual Configuration, then Next.
5. Select slots 0-3, then click Add to Array.
6. Click Accept DG, then Next.
7. Click Next.
8. Click Add to Span.
9. Click Next.
10. Set the parameters as shown in the accompanying screenshot, then click Update Size.
11. Click Accept, then Next.
12. Click Yes.
13. Click Next.
14. Click Accept.
15. Click Yes.
16. Click Cancel.
17. Click Yes.
18. Click Home to return to the main page.
19. Select Exit.
20. Click Yes, then restart the computer.

Seagate 5005/4005/3005 Series Disk Array Hardware Installation and Maintenance Guide

Seagate 5005/4005/3005 Series Hardware Installation and Maintenance Guide

Abstract: This document describes initial hardware setup for Seagate 5005/4005/3005 Series enclosures. It also describes removal and installation of customer-replaceable units for these enclosures. The document is intended for use by storage system administrators familiar with servers and computer networks, network administration, storage system administration and configurations, storage area network management, and relevant protocols.

Firmware Version: G280. P/N 83-00007188-12-01, Revision A, October 2019. © 2017-2019 Seagate Technology LLC. All rights reserved.

Contents (chapter level): About this guide; 1 Safety guidelines; 2 System overview; 3 Installation; 4 Operation; 5 Troubleshooting and problem solving.

How to Configure RAID-5 on an Inspur Server

How to configure RAID-5 on an Inspur server, step by step:

Step 1: Enter the motherboard BIOS. Press F2 at power-on; once in the BIOS, go to the Advanced tab and enable the RAID function, then go to the Boot tab and set the optical drive as the first boot device. Save and exit.

Step 2: While the computer starts, press ESC to reach the text screen; a prompt such as Ctrl+E or Ctrl+I will appear. Press the indicated key combination to enter the RAID BIOS setup, as in Figure 1.

Step 3: To create a new RAID configuration, select New Configuration; if a usable logical disk already exists, select View/Add Configuration and press Enter. Here we take creating a new array as the example: select the New Configuration option and press Enter, which brings up the screen in Figure 2.

Step 4: The configuration screen shown in Figure 3 appears. Press the spacebar to select the disks that will make up the RAID.

Step 5: Once the disks have been selected, press F10; the screen in Figure 4 appears.

Step 6: Press the spacebar to select the span information; Span-1 appears, as in Figure 5. Press F10 to save.

Step 7: In the screen shown in Figure 6, press Enter on RAID=5 to choose the array level you want, then select Accept and press Enter to confirm.

Step 8: When the screen in Figure 7 appears, it shows the number of disks in the array; simply press Enter.

Step 9: When asked whether to save the configuration, select Yes and press Enter, as in Figure 8.

Step 10: A newly created logical disk must be initialized before it can be used. Press ESC to return to the main menu shown in Figure 1, select the Initialize option, and press Enter to reach the logical disk initialization screen shown in Figure 9.

Step 11: Press the spacebar to select the logical disk to initialize and press F10; a confirmation dialog appears, as in Figure 10. Select YES and press Enter, and the initialization progress is shown (note: initialization destroys the existing data on the disks, so back it up beforehand).

Step 12: After initialization completes, press any key to continue and restart the system; the RAID configuration is now finished.

Building a Software RAID-5 in Windows Server 2003

Steps to build a RAID-5 under Windows Server 2003:

1. Install the Windows Server 2003 system and attach at least three hard disks of the same size to the computer, then open Disk Management.
2. Right-click Disk 1 and choose New Volume.
3. In the New Volume wizard select RAID-5 and click Next.
4. Double-click Disk 2 and Disk 3 in the left-hand list to add them to the RAID-5 set.
5. When the wizard finishes, wait for formatting; the RAID format runs on all member disks at the same time.
6. After formatting, the array performs its synchronization pass. In a software RAID this calculation is done by the CPU, so the machine's configuration determines the speed (see the sketch after these steps).
7. When synchronization finishes, the RAID-5 is complete and appears in Disk Management.
8. In My Computer the RAID-5 volume appears as a normal drive; in this example, drive E is the RAID-5 volume.

Note: the three disks used to build the RAID-5 must all be dynamic disks; for how to convert a disk to dynamic, see the first step of the RAID-0 guide.
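Step 6 notes that the synchronization speed of a software RAID-5 depends on the CPU, because the parity is computed in software rather than on a controller card. The rough C sketch below shows the kind of work involved: XOR-ing chunk-sized buffers and reporting the throughput. The buffer size and repetition count are arbitrary, and a real implementation processes many buffers per stripe, so treat the number it prints only as a feel for the CPU cost.

    /* Rough sketch of the parity work a software RAID-5 pushes onto the
     * CPU: XOR one chunk-sized buffer into another repeatedly and report
     * the throughput. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define CHUNK (64 * 1024)   /* 64 KB chunk (illustrative)      */
    #define REPS  20000         /* number of chunks to process     */

    int main(void) {
        unsigned char *data   = malloc(CHUNK);
        unsigned char *parity = malloc(CHUNK);
        if (!data || !parity) return 1;
        memset(data, 0xA5, CHUNK);
        memset(parity, 0x00, CHUNK);

        clock_t t0 = clock();
        for (int r = 0; r < REPS; r++)
            for (size_t i = 0; i < CHUNK; i++)
                parity[i] ^= data[i];
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

        double mb = (double)CHUNK * REPS / (1024.0 * 1024.0);
        printf("XORed %.0f MB in %.2f s (%.0f MB/s of parity work)\n",
               mb, secs, mb / secs);
        printf("(checksum byte: 0x%02x)\n", parity[0]);  /* keep the loop live */
        free(data);
        free(parity);
        return 0;
    }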


Assigning a Drive Letter for RAIDrive

RAIDrive is virtual disk drive software that can combine several hard disks or RAID arrays into a single virtual drive.

Under Windows, RAIDrive can use different drive letters to identify the different partitions of the virtual drive.

To assign a drive letter for RAIDrive, proceed as follows:

1. Start Windows and make sure the RAIDrive software is installed.
2. Right-click Computer or This PC and choose Manage.
3. In the Computer Management window, expand the Storage section and click Disk Management.
4. In Disk Management, find the partition of the RAIDrive virtual drive (it usually appears as unallocated disk space), right-click it, and choose New Simple Volume.
5. Follow the wizard to create the partition; choose the partition size and file system as needed.
6. Once the partition has been created, the new drive letter appears in Disk Management, identifying the RAIDrive virtual drive's partition.
7. You can now use that drive letter to access and store data.

Note that the exact steps may vary with the Windows version and the RAIDrive release. If you run into problems, consult RAIDrive's official documentation or contact its technical support.

RAID-5 Setup Procedure

RAID-5 setup procedure:

1. Power on the server and, when prompted, press Ctrl+H to enter the RAID setup interface.
2. On the Adapter Selection screen, select Start.
3. On the MegaRAID BIOS Config Utility Virtual Configuration screen, select Configuration Wizard.
4. On the MegaRAID BIOS Config Utility Configuration Wizard screen, select New Configuration.
5. On the Select Configuration Method screen that follows, choose the first option, Manual Configuration, then select NEXT.
6. On the Configuration Preview screen, add all of the drives listed under Drives to the Virtual Drives list on the right: click a Slot under Drives and click Add To Array; once all 6 have been added, click Accept DG and then NEXT.
7. On the Span Definition screen, click the Drive Group under Array With Free Space on the left and choose Add To Span, then click NEXT.
8. In the drop-down to the right of RAID Level select RAID 5, click Update Size to size the volume automatically, choose an appropriate Select Size, click Accept, answer YES in the dialog, and click NEXT.
9. The wizard returns to the Configuration Preview screen; click VD0: RAID5... under Virtual Drives on the right, select Accept, and answer YES in the dialog.
10. On the Virtual Drive screen select Cancel, and answer YES in the dialog.
11. After the quick setup completes, return to the home screen, select EXIT, and restart.


Terabyte IDE RAID-5 Disk Arrays

D. A. Sanders, L. M. Cremaldi, V. Eschenburg, R. Godang, C. N. Lawrence, C. Riley, D. J. Summers
University of Mississippi, Department of Physics and Astronomy, University, MS 38677, USA
D. L. Petravick
FNAL, CD-Integrated Systems Development, MS 120, P.O. Box 500, Batavia, IL 60510, USA

High energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. We examine some techniques that exploit recent developments in commodity hardware. We report on tests of redundant arrays of integrated drive electronics (IDE) disk drives for use in offline high energy physics data analysis. IDE redundant array of inexpensive disks (RAID) prices now are less than the cost per terabyte of million-dollar tape robots! The arrays can be scaled to sizes affordable to institutions without robots and used when fast random access at low cost is important.

1. Introduction

We report tests, using the Linux operating system, of redundant arrays of integrated drive electronics (IDE) disk drives for use in particle physics Monte Carlo simulations and data analysis [1]. Parts costs of total systems using commodity IDE disks are now at the $2000 per terabyte level. A revolution is in the making. Disk storage prices have now decreased to the point where they are lower than the cost per terabyte of 300 terabyte Storage Technology tape silos. The disks also offer far better granularity; even small institutions can afford to deploy systems. The faster random access of disk versus tape is another major advantage. Our tests include reports on software redundant arrays of inexpensive disks, Level 5 (RAID-5), systems running under Linux 2.4 using Promise Ultra 133 disk controllers that allow disks larger than 137 GB. The 137 GB limit comes from 28-bit logical block addressing, which allows 2^28 512-byte blocks on IDE disks. Recently 48-bit logical block addressing has been implemented. RAID-5 protects data in case of a catastrophic single disk failure by providing parity bits. Journaling file systems are used to allow rapid recovery from system crashes.
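The 137 GB figure quoted above is simple sector arithmetic: 28-bit logical block addressing gives 2^28 addressable 512-byte sectors. The small C check below is purely illustrative; its only inputs are the sector size and the address widths mentioned in the text.

    /* Where the 137 GB limit comes from: 2^28 sectors of 512 bytes each.
     * 48-bit LBA removes the limit for all practical purposes. */
    #include <stdio.h>

    int main(void) {
        const double sector = 512.0;                  /* bytes per sector */
        double lba28 = (double)(1ULL << 28) * sector;
        double lba48 = (double)(1ULL << 48) * sector;

        printf("28-bit LBA: %.1f GB (%.1f GiB)\n",
               lba28 / 1e9, lba28 / (1024.0 * 1024.0 * 1024.0));
        printf("48-bit LBA: %.0f PB\n", lba48 / 1e15);
        return 0;
    }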
Our data analysis strategy is to encapsulate data and CPU processing power together. Data is stored on many PCs. Analysis of a particular part of a data set takes place locally on, or close to, the PC where the data resides. The network backbone is only used to put results together. If the I/O overhead is moderate and analysis tasks need more than one local CPU to plow through data, then each of these disk arrays could be used as a local file server to a few computers sharing a local ethernet switch. These commodity 8-port gigabit ethernet switches would be combined with a single high end, fast backplane switch allowing the connection of a thousand PCs. We have also successfully tested using Network File System (NFS) software to connect our disk arrays to computers that cannot run Linux 2.4.

RAID [2] stands for Redundant Array of Inexpensive Disks. Many industry offerings meet all of the qualifications except the inexpensive part, severely limiting the size of an array for a given budget. This is now changing. The different RAID levels can be defined as follows:

- RAID-0: "Striped." Disks are combined into one physical device where reads and writes of data are done in parallel. Access speed is fast but there is no redundancy.
- RAID-1: "Mirrored." Fully redundant, but the size is limited to the smallest disk.
- RAID-4: "Parity." For N disks, 1 disk is used as a parity bit and the remaining N-1 disks are combined. Protects against a single disk failure but access speed is slow since you have to update the parity disk for each write. Some, but not all, files may be recoverable if two disks fail.
- RAID-5: "Striped-Parity." As with RAID-4, the effective size is that of N-1 disks. However, since the parity information is also distributed evenly among the N drives, the bottleneck of having to update the parity disk for each write is avoided. Protects against a single disk failure and the access speed is fast.

RAID-5, using enhanced integrated drive electronics (EIDE) disks under Linux software, is now available [3]. Redundant disk arrays do provide protection in the most likely single disk failure case, that in which a single disk simply stops working. This removes a major obstacle to building large arrays of EIDE disks. However, RAID-5 does not totally protect against other types of disk failures. RAID-5 will offer limited protection in the case where a single disk stops working but causes the whole EIDE bus to fail (or the whole EIDE controller card to fail), but only temporarily stops them from functioning. This would temporarily disable the whole RAID-5 array. If replacing the bad disk solves the problem, i.e. the failure did not permanently damage data on other disks, then the RAID-5 array would recover normally. Similarly, if only the controller card was damaged, then replacing it would allow the RAID-5 array to recover normally. However, if more than one disk was damaged, especially if the file or directory structure information was damaged, the entire RAID-5 array would be damaged. The remaining failure mode would be for a disk to be delivering corrupted data. There is no protection for this inherent to RAID-5; however, a longitudinal parity check on the data, such as a checksum record count (CRC), could be built into event headers to flag the problem. Redundant copies of data that are very hard to recreate are still needed. RAID-5 does allow one to ignore backing up data that is only moderately hard to recreate.

Table I  Comparison of Large EIDE Disks for a RAID-5 Array

Disk Model                    Size (GB)  RPM   Cost/GB  GB/platter  Cache Buffer  Warranty
Maxtor D540X [4]              160        5400  $1.03    40          2 MB          3 year
Maxtor DiamondMax 16 [5]      250        5400  $1.09    83          2 MB          1 year
Maxtor MaXLine Plus II [6]    250        7200  $1.52    83          8 MB          3 year
Western Digital WD2500JB [7]  250        7200  $1.31    83          8 MB          3 year
IBM-Hitachi 180GXP [8]        180        7200  $1.00    60          8 MB          3 year

2. Large Disks

In today's marketplace, the cost per terabyte of disks with EIDE interfaces is about half that of disks with SCSI (Small Computer System Interface). The EIDE interface is limited to 2 drives on each bus and SCSI is limited to 7 (14 with wide SCSI). The only major drawback of EIDE disks is the limit in the length of cable connecting the drives to the drive controller. This limit is nominally 18 inches; however, we have successfully used 24 inch long cables [9]. Therefore, one is limited to about 10 disks per box for an array (or perhaps 20 with a "double tower"). To get a large RAID array one needs to use large capacity disk drives. There have been some problems with using large disks, primarily the maximum addressable size. We have addressed these problems in earlier papers [10, 11]. Because of these concerns, and because we wanted to put more drives into an array than could be supported by the motherboard, we opted to use PCI disk controller cards. In the past we have tested both Promise Technologies ULTRA66 and ULTRA100 disk controller cards in RAID-5 disk arrays consisting of either 80 or 100 GB disks [11]. Each of the PCI disk controller cards supports four drives. We now report on our tests of the Promise Technologies ULTRA133 TX2 [12] that supports disk drives with capacity greater than 137 GB.

Using arrays of disk drives, as shown in Table I, the cost per terabyte is similar to the cost of Storage Technology tape silos. However, RAID-5 arrays offer a lot better granularity since they are scalable down to a terabyte. For example, if you wanted to store 10 TB of data you would still have to pay about $1,000,000 for the tape silo but only $20,000 for a RAID-5 array. Thus, even small institutions can afford to deploy systems. And the terabyte disk arrays can be used as caches to take full advantage of Grid Computing [13].

3. RAID Arrays

There exist disk controllers that implement RAID-5 protocols right in the controller, for example 3ware's Escalade 7500 series [14], which will handle up to 12 EIDE drives. These controllers cost $600 and, at the time that we built the system shown in Table III, did not support disk drives larger than 137 Gigabytes [15]. Therefore, we focused our attention on software RAID-5 implementations [3, 16], which we tested extensively.

There are also various commercial RAID systems that rely on a hardware RAID controller. Examples of these are shown in Table II. They are typically 3U or larger rack mounted systems. However, commercial systems have not been off-the-shelf commodity items. This is changing, and the only drawback is that, even allowing for cost of assembly, they are anywhere from twice to over twenty-five times as expensive.

Table II  Some Commodity Hardware RAID Arrays

System             Capacity  Size      Price/GB (a)
Apple Xserve RAID  2.52 TB   3U        $4.36
Dell EMC CX200     2.2 TB    3U        $13.63
HP 7100            2.2 TB    2 x 3U    $50.21
IBM DF4000R        2.2 TB    2 x 3U    $20.08
Sun StorEdge T3    2.64 TB   3 x 3.5U  $54.66

(a) Based on suggested retail prices on February 7, 2003 [17]

3.1. Hardware

We now report on the use of disks with capacity greater than 137 GB. The drives we consider for use with a RAID-5 array are compared in Table I. The disk we tested was the Maxtor D540X 160 GB disk [4]. In general, the internal I/O speed of a disk is proportional to its rotational speed and increases as a function of platter capacity. One should note that the "spin-up" of these drives takes 1.8-2.5 Amps at 12 Volts (typically 22 W total for both 12 V and 5 V).
We needed the2.4.x kernel to allow full support for “Journaling”file systems.Journalingfile systems pro-vide rapid recovery from crashes.A computer canfin-ish its boot-up at a normal speed,rather than wait-ing to perform afile system check(FSCK)on the en-tire RAID array.This is then conducted in the back-ground allowing the user to continue to use the RAID array.There are now4different Linux Journaling file systems:XFS,a port from SGI[26];JFS,a port from IBM[27];ext3[28],a Journalized version of the standard ext2file system;and ReiserFS from namesys [29].Comparisons of these Journalingfile systems have been done elsewhere[30].When we tested our RAID-5arrays only ext3and the ReiserFS were eas-ily available for the2.4.x kernel;therefore,we tested 2different Journalingfile systems;ReiserFS and ext3.We opted on using ext3for two reasons:1)At the time there were stability problems with ReiserFS and NFS (this has since been resolved with kernel2.4.7)and 2)it was an extension of the standard ext2fs(it was originally developed for the2.2kernel)and,if synced properly could be mounted as ext2.Ext3is the only one that will allow direct upgrading from ext2,this is why it is now the default for RedHat since7.2.NFS is a veryflexible system that allows one to managefiles on several computers inside a network as if they were on the local hard disk.So,there’s no need to know what actualfile system they are stored under nor where thefiles are physically located in order to access them.Therefore,we use NFS to connect these disks arrays to computers that cannot run Linux2.4. We have successfully used NFS to mount disk arrays on the following types of computers:a DECstation 5000/150running Ultrix4.3A,a Sun UltraSparc10 running Solaris7,a Macintosh G3running MacOS X, and various Linux boxes with both the2.2and2.4 kernels.As an example,in Spring2002we built a pair of one Terabyte Linux RAID-5arrays,as described in section 3.1,to store CMS Monte Carlo data at CERN.They were mounted using NFS,via gigabit ethernet.They remotely served the random background data to the CMS Monte Carlo Computers,as if it was local.While this is not as efficient as serving the data directly,it is clearly a viable technique[31].We also are cur-rently using two,NFS mounted,RAID-5boxes,one at SLAC and one at the University of Mississippi,to run analysis software with the BaBar KANGA and CMS CMSIM/ORCA code.We have performed a few simple speed tests.The first was“hdparm-tT/dev/xxx”.This test simply reads a64MB chunk of data and measures the speed. On a single drive we saw read/write speeds of about 30MB/s.The whole array saw an increase to95 MB/s.When we tried writing a textfile using a simple FORTRAN program(we wrote“All work and no play make Jack a dull boy”108times),the speed was about 95MB/s While mounted via NFS over100Mb/s eth-ernet the speed was2.12MB/s,limited by both the ethernet speed and the NFS communication overhead. 
In the past[1],we have been able to get much higher fractions of the rated ethernet bandwidth by using the lower level TCP/IP socket protocol[32]in place of the higher level NFS protocol.TCP/IP sockets are more cumbersome to program,but are much faster.We also tested what actually happens when a disk fails by turning the power offto one disk in our RAID-5array.One could continue to read and writefiles, but in a“degraded”mode,that is without the parity safety net.When a blank disk was added to replace the failed disk,again one could continue to read and writefiles in a mode where the disk access speed is reduced while the system rebuilt the missing disk as a background job.This speed reduction in disk access was due to the fact that the parity regeneration is a major disk access in its own right.For more details, see reference[16].The performance of Linux IDE software drivers is improving.The latest standards[33]include support for command overlap,READ/WRITE direct mem-ory access QUEUED commands,scatter/gather data transfers without intervention of the CPU,and eleva-tor mand overlap is a protocol that allows devices that require extended command time to per-form a bus release so that commands may be executed by the other device on the mand queuing allows the host to issue concurrent commands to the same device.Elevator seeks minimize disk head move-ment by optimizing the order of I/O commands.The Hitachi/IBM180GXP disk[8]supports elevator seeks under the new ATA6standard[33].We did encounter a few problems.We had to mod-ify“MAKEDEV”to allow for more than eight IDE devices,that is to allow for disks beyond“/dev/hdg”. For version2.x one would have to actually modify the script;however,for version3.x we just had to modify thefile“/etc/makedev.d/ide”.This should no longer be a problem with newer releases of Linux.Another problem was the2GBfile size limit.Older operating system and compiler libraries used a32 bit“long-integer”for addressingfiles;therefore,they could not normally addressfiles larger than2GB (231).There are patches to the Linux2.4kernel and glibc but there are still some problems with NFS and not all applications use these patches.We have found that the current underlyingfile sys-tems(ext2,ext3,reiserfs)do not have a2GBfile size limit.The limit for ext2/ext3is in the petabytes. The2.4kernel series supports largefiles(64-bit off-sets).Current versions of GNU libc support large files.However,by default the32-bit offset interface is used.To use64-bit offsets,C/C++code must be recompiled with the following as thefirst line:#define_FILE_OFFSET_BITS64or the code must use the*64functions(i.e.open be-comes open64,etc.)if they exist.This functionality is not included in GNU FORTRAN(g77);however,it should be possible to write a simple wrapper C pro-gram to replace the OPEN statement(perhaps called open64).We have succeeded in writingfiles larger than2GB using a simple C program with“#define FILE OFFSET BITS64”as thefirst line.This works over NFS version3but not version2.While RAID-5is recoverable for a hardware fail-ure,there is no protection against accidental deletion offiles.To address this problem we suggest a sim-ple script to replace the“rm”command.Rather than deletingfiles it would move them to a“/raid/Trash”or better yet a“/raid/.Trash”directory on the RAID-5disk array(similar to the“Trash can”in the Macin-tosh OS).The system administrator could later purgethem as space is needed using an algorithm based on criteria such asfile size,file age,and user quota. 
4. High Energy Physics Strategy

We encapsulate data and CPU processing power. A block of real or Monte Carlo simulated data for an analysis is broken up into groups of events and distributed once to a set of RAID disk boxes, each of which may also serve a few additional processors via a local 8-port gigabit ethernet switch (see Figure 1).

Figure 1: An example of a RAID-5 disk array mounted on several local CPUs via an 8-port gigabit switch.

Examples of commodity gigabit ethernet switches and PCI adapters are given in Table IV. Dual-processor boxes would also add more local CPU power. Events are stored on disks close to the CPUs that will process them, to minimize I/O; events are only moved once. Event-parallel processing has a long history of success in high energy physics [1, 40, 41]. The data from each analysis are distributed among all the RAID arrays so that all the computing power can be brought to bear on each analysis.

Table IV: Examples of commodity gigabit ethernet switches and adapters.

  Company        Model       Type             Cost
  Linksys [34]   EG008W      8-port switch    $162
  D-Link [35]    DGS-1008T   8-port switch    $312
  Netgear [36]   GS508T      8-port switch    $502
  Netgear [37]   GS524T      24-port switch   $1499
  D-Link [38]    DGE500T     PCI adapter      $46
  Intel [39]     82540EM     PCI adapter      $41

For example, in the case of an important analysis (such as a Higgs analysis), one could put 50 GB of data onto each of 100 RAID arrays and then bring the full computing power of 700 CPUs into play. Instances of an analysis job are run on each local cluster in parallel. Several analysis jobs may be running in memory or queued on each local cluster to level loads. The data volume of the results (e.g. histograms) is small and is gathered together over the network backbone. Results are examined and the analysis is rerun. The system is inherently fault tolerant: if three of a hundred clusters are down, one still gets 97% of the data and analysis is not impeded.

RAID-5 arrays should be treated as fairly secure, large, high-speed "scratch disks". RAID-5 just means that disk data will be lost less frequently. Data which is very hard to re-create still needs to reside on tape. The inefficiency of an offline tape vault can be an advantage: it is harder to erase your entire raw data set with a single keystroke if thousands of tapes have to be physically mounted. Someone may ask why all the write-protect switches are being reset before all is lost. It is the same reason the Air Force has real people with keys in ICBM silos.

The granularity offered by RAID-5 arrays allows a university or a small experiment at a laboratory to set up a few-terabyte computer farm, while allowing a large analysis site or laboratory to set up a few-hundred-terabyte or even petabyte computer system. A large site would not necessarily have to purchase the full system at once, but could buy and install it in smaller parts. This has two advantages: primarily, the cost is spread over a few years; secondly, given the rapid increase in both CPU power and disk size, one gets the best "bang for the buck".

What would be required to build a 1/4 petabyte system (similar in size to a tape silo)? Start with eight 250 GB Maxtor disks in a box. The Promise Ultra133 card allows one to exceed the 137 GB limit. Each box provides 7 x 250 GB = 1750 GB of usable RAID-5 disk space, in addition to a CPU for computations.
With 161 such boxes, 280 terabytes is reached. Use 23 commodity 8-port gigabit ethernet switches ($170 each) to connect the 161 boxes to a 24-port commodity gigabit ethernet switch, as shown in Figure 2. This could easily fit in a room that was formerly occupied by a few old mainframes, say an area of about a hundred square meters. The power consumption would be 25 kilowatts, or 45 kilowatts if all the boxes start up at once. One would need to build up operational experience for smooth running. As newer disks arrive that hold yet more data, even a petabyte system would become feasible. If one still needed more processing power per RAID array, one could substitute for each RAID-5 CPU shown in Figure 2 six CPUs plus one RAID-5 CPU, connected by an 8-port gigabit ethernet switch as described in Figure 1. Multiple CPUs per motherboard provide another alternative for adjusting the ratio of disk space to processing power.

Figure 2: A schematic of a 1/4 petabyte (or larger) system.

Grid computing [13] will entail the movement of large amounts of data between various sites. RAID-5 arrays will be needed as disk caches, both during the transfer and when the data reaches its final destination. Another example that can apply to grid computing is the Fermilab Mass Storage System, Enstore [42], where RAID arrays are used as a disk cache for a tape silo. Enstore uses RAID arrays to stage tapes to disk, allowing faster analysis of large data sets.

5. Conclusion

We have tested redundant arrays of IDE disk drives for use in offline high energy physics data analysis and Monte Carlo simulations. Parts costs of total systems using commodity IDE disks are now at the $2000 per terabyte level, a lower cost per terabyte than Storage Technology tape silos. The disks, however, offer much better granularity; even small institutions can afford them. The faster access of disk versus tape is a major added bonus. We have tested software RAID-5 systems running under Linux 2.4 using Promise Ultra133 disk controllers. RAID-5 provides parity bits to protect data in case of a single catastrophic disk failure. Tape backup is not required for data that can be recreated with modest effort. Journaling file systems permit rapid recovery from crashes. Our data analysis strategy is to encapsulate data and CPU processing power. Data is stored on many PCs. Analysis of a particular part of a data set takes place locally on the PC where the data resides. The network is only used to put results together. Commodity 8-port gigabit ethernet switches combined with a single high-end, fast-backplane switch [43] would allow one to connect over a thousand PCs, each with a terabyte of disk space.

Some tasks may need more than one CPU to go through the data, even on one RAID array. For such tasks, dual CPUs and/or several boxes on one local 8-port ethernet switch should be adequate, and this avoids overwhelming the backbone switching fabric connecting the entire installation. Again, the backbone is only used to put results together.

Current high energy physics experiments, such as BaBar at SLAC, feature relatively low data acquisition rates, only 3 MB/s, less than a third of the rates taken at Fermilab fixed-target experiments a decade ago [1]. The Large Hadron Collider experiments CMS and Atlas, with data acquisition rates starting at 100 MB/s, will be more challenging and will require physical architectures that minimize helter-skelter data movement if they are to fulfill their promise. In many cases, architectures designed to solve particular processing problems are far more cost effective than general solutions [1, 40]. As Steve Wolbers reminded us in his talk at CHEP03 [44], all data processing groups cannot
depend on Moore’s Law to save them.Data acqui-sition groups want to write out additional interest-ing events.Programmers like to adopt new languages that are further abstracted from the CPUs running them.Small objects and pointers seem tofind their way into code.Machines hate to interrupt pipelines and love direct addressing.Universities want networks to transfers billions of events quickly.Even Gordon Moore may not be able to do all of this simultane-ously.Efficiency may still be useful.Designing time critical code[45],regardless of the language chosen, tofit into larger blocks without pointers can increase speed by a factor of10to100.Code to methodically bit-pack events into the minimum possible size may be worth writing[46].If events are smaller,more can be stored on a given disk and more can be transferred over a given network in a day.All of this requires planning at an early stage.No software package will generate it automatically.Techniques explored in this paper,physically en-capsulating data and CPUs together,may be useful. Terabyte disk arrays at small institutions are now puting has progressed since the days when science was done by punching a few kilobytes into pa-per tape[47].AcknowledgmentsMany thanks to S.Bracker,J.Izen,L.Lueking,R. Mount,M.Purohit,W.Toki,and T.Wildish for their help and suggestions.This work was supported in part by the U.S.Department of Energy under Grant Nos. DE-FG05-91ER40622and DE-AC02-76CH03000.References[1]For example,a decade ago the Fermilab E791col-laboration recorded and reconstructed50TB of raw data in order to generate charm physics re-sults.For details of the saga,in which more data was written to tape than in all previous HEP ex-periments combined,see:S.Amato,J.R.de Mello Neto,J.de Miranda,C.James,D.J.Summers and S.B.Bracker,Nucl.Instrum.Meth.A324,535(1993)[arXiv:hep-ex/0001003];S.Bracker and S.Hansen,[arXiv:hep-ex/0210034];S.Hansen, D.Graupman,S.Bracker and S.Wickert,IEEE Trans.Nucl.Sci.34,1003(1987);S.Bracker,K.Gounder,K.Hendrix and D.Sum-mers,IEEE Trans.Nucl.Sci.43,2457(1996) [arXiv:hep-ex/9511009];E.M.Aitala et al.[E791Collaboration],Phys.Rev.Lett.77,2384(1996)[arXiv:hep-ex/9606016];E.M.Aitala et al.[E791Collaboration],Phys.Rev.Lett.80,1393(1998)[arXiv:hep-ph/9710216];E.M.Aitala et al.[E791Collaboration],Phys.Rev.Lett.83,32(1999)[arXiv:hep-ex/9903012];E.M.Aitala et al.[E791Collaboration],Phys.Lett.B403,377(1997)[arXiv:hep-ex/9612005];E.M.Aitala et al.[E791Collaboration],Phys.Lett.B462,401(1999)[arXiv:hep-ex/9906045];E.M.Aitala et al.[E791Collaboration],Phys.Rev.D57,13(1998)[arXiv:hep-ex/9608018]. [2]D.A.Patterson,G.Gibson and R.H.Katz,Sig-mod Record17,109(1988).[3]M.de Icaza,I.Molnar,and G.Oxman,“Thelinux raid-1,4,5code,”in3rd Annu.Linux Expo’97,(April1997).[4]Maxtor.(2001)DiamondMax D540X./en/documentation/data sheets/d540x datasheet.pdf[2003].[5]Maxtor.(2003)DiamondMax16./en/documentation/data sheets/diamondmax16data sheet.pdf[2003].[6]Maxtor.(2003)Maxline ATA./en/documentation/data sheets/maxline data sheet.pdf[2003]. [7]Western Digital Corp.(2003)Specifications forthe WD Caviar WD2500JB./en/products/current/drives.asp?Model=WD2500JB[2003].[8]Hitachi Global Storage Technologies.(2003)Deskstar180GXP./hdd/desk/ds180gxp.htm;/ tech/techlib.nsf/techdocs/CF02BAB6EA8E3B7F87256C16006B1CFA/$file/D180GXP ds.pdf[2003].[9].(2002)24In.Dual Drive UltraATA/66/100Cable./ststore/itemdetail.cfm?product id=IDE6624[2003].The ATA/100standard uses an80wire cable to transmit up to133Megabytes per second. 
[10] D. Sanders, C. Riley, L. Cremaldi, D. Summers and D. Petravick, in Computing in High-Energy Physics (CHEP 98), Chicago, IL (Aug. 31 - Sep. 4, 1998) [arXiv:hep-ex/9912067].

[11] D. A. Sanders, L. M. Cremaldi, V. Eschenburg, Lawrence, C. Riley, D. J. Summers and D. L. Petravick, IEEE Trans. Nucl. Sci. 49, 1834 (2002) [arXiv:hep-ex/0112003].

[12] Promise Technologies, Inc. (2001). Ultra133 TX2 - Ultra ATA/133 Controller for 66 MHz PCI Motherboards. /marketing/datasheet/file/U133TX2DS.pdf [2003] and /marketing/datasheet/file/Ultra133tx2DS v2.pdf [2003]. Each ATA/PCI Promise card controls four disks.

[13] P. Avery, Phil. Trans. Roy. Soc. Lond. 360, 1191 (2002); L. Lueking et al., Lect. Notes Comput. Sci. 2242, 177 (2001).

[14] 3ware (2003). Escalade 7500 Series ATA RAID Controller. /products/pdf/Escalade7500SeriesDS1-7.qk.pdf [2003]; 3ware (2003). Escalade 7500-12 ATA RAID Controller. /products/pdf/12-PortDS1-7.qk.pdf [2003].

[15] K. Abendroth, "personal communication," email: kent.abendroth@.

[16] J. Østergaard (2000). The Software-RAID HOWTO. /HOWTO/Software-RAID-HOWTO.html.

[17] Apple Computers (2003). Xserve RAID. /server/pdfs/L26325A XserveRAID TO.pdf [2003].

[18] IBM (2002). IBM Deskstar 60GXP hard disk drive. /tech/techlib.nsf/techdocs/85256AB8006A31E587256A7600736475/$file/D60GXP ds.pdf [2003].

[19] AMD (2002). AMD Athlon Processor Product Brief. /us-en/Corporate/VirtualPressRoom/0,,51104543~24415,00.html [2003]. We bought our AMD CPU boxed with a fan.

[20] ASUS (2002). ASUS A7M266. /mb/socketa/a7m266/overview.htm [2003].

[21] In-Win Development, Inc. (2002). IW-Q500 ATX Full Tower Case. /home/detail.php?show=features&event=ATX&class=Full Tower&type=Q-Series&model=IW-Q500 [2003]. Note: the Q500P case comes with a 300 Watt power supply.
