EMC Disk Array Configuration Guide


EMC CX300 Disk Array Configuration Manual


Procedure created on: 10/30/06 12:52

SITE AND CONFIGURATION INFORMATION:

Installation/Upgrade Information:
Change Control Ref Number:
Installing or Upgrading CE:
Installation/Upgrade Date/Time: (EST)

Customer and Site Information:
Customer Name:
Site ID/Region or Country:
Lab Contact:
Contact Phone Number:
Site Address:
Clarify Case Number:

Array Information:
Serial Number of Array(s):

Software in configuration / Revision:
FLARE/Access Logix
Navisphere Agent
Navisphere Manager
PowerPath
ATF/CDE
VxVM DMP
SnapView
MirrorView/A
MirrorView/S
SAN Copy/SAN Copy/E
CLARalert/OnAlert
Admsnap
Admhost
EMC Replication Manager (ERM)

REPORTING PROBLEMS:
If you find any errors in this procedure or have comments regarding this application, please send email to CLARiiONProcedureGenerator@. Be sure to reference any modules by the correct filename (located to the right of the module title).

CONFIGURATION DESCRIPTION:
∙ Product: CX-Series (CX200-CX700) *
∙ Activity: Install New Array and New Host
∙ Model: CX300
∙ Installation Type: Field Installed in EMC Cabinet
∙ Power Source Type: Standard AC Installation
∙ Software Release: Release 19 (02.19.xxx)
∙ Management Software: Navisphere GUI
∙ Manage a Legacy Array with Navisphere 6.x on the Same LAN: No
∙ Environment: Access Logix
∙ Connection Type: Switch (SAN)
∙ Hot Spare(s): No
∙ CLARalert: already installed
∙ Host 1: Solaris PCI
∙ Boot Device: Server is Boot Device
∙ Number of HBAs/NICs: Single HBA/NIC
∙ Type of HBA/NIC: Emulex
∙ Cluster Environment: Clustered host
∙ Failover Type: PowerPath SE
∙ EIP/EIP-2 instructions: No
∙ Employee of: EMC Authorized Service Provider (ASP)

* Unless otherwise noted, this procedure uses the term "CX-Series" to refer to CX3-80, CX3-40, CX3-40c, CX3-20, CX3-20c, CX700, CX600, CX500-Series, CX400, CX300-Series, and CX200-Series.

WARNINGS:
If this is a procedure for a cloned host, delete the HostIDFile.txt file before installing any CLARiiON software.
Refer to EMC Knowledgebase article emc66921 for more information.

TABLE OF CONTENTS:
To link immediately to a specific page within this procedure, position the cursor over the page number on the right and click.
Denotes a checkpoint
Verify you have reviewed the latest CLARiiON CCA rules cncca010_R002 (3)
Review E-Lab Interoperability Navigator for additional config steps plstr020_R002 (3)
Verify the customer has completed the following for PowerPath inpwp010_R003 (3)
Verify the host is connected to the LAN cnlan010_R001 (4)
Install PowerPath SE on a Solaris host inpwp100_R007 (4)
Install Navisphere Agent and CLI software on a Solaris server incli030_R001 (7)
Service providers begin here to continue the installation procedure inpwp110_R001 (9)
Install the SPS tray and unit(s) in the CX300/CX300i/CX200 cabinet incx3090_R003 (9)
Install the mounting rails and enclosure(s) in the cabinet indae040_R003 (12)
Set up a CX300/CX300i DAE2 disk enclosure (if applicable) incx3100_R001 (19)
Set up a CX-Series DAE2P disk enclosure (if applicable) incx7070_R002 (21)
Cable and power-up a CX300/CX300i storage system incx3120_R001 (25)
Install the switch cnswt090_R003 (29)
Cable HBAs to switch ports and check connections inhba330_R003 (30)
Cable CX300 SPs to switch ports and check connections cnspr090_R001 (31)
Configure the switch cnswt110_R003 (31)
Cable the CX300 to the LAN cnlan110_R001 (33)
Install array software cnman810_R007 (34)
Create switch zones and add them to the switch configuration cnswt100_R003 (36)
Add persistent bindings to the Solaris Emulex kernel files cnsol020_R001 (36)
Add LUNs to the Solaris sd.conf configuration file (Emulex) cnlun080_R003 (37)
Verify connection between SP Agent and host system cncli010_R001 (38)
Verify each HBA sees its targets on a Solaris server cnhba090_R001 (39)
Set array properties in Navisphere Manager cnnav140_R004 (39)
Create RAID groups in Navisphere Manager cnlun550_R007 (40)
Create LUNs on RAID groups with Navisphere Manager cnlun560_R008 (41)
Create storage groups and connect hosts cnlun570_R003 (42)
Set failover settings for PowerPath (Solaris) cnfsw110_R003 (43)
Allow Solaris server to see target LUNs cnlan030_R001 (45)
Make LUNs available to Solaris cnlun320_R002 (46)
Configure PowerPath for missing devices (Solaris) inpwp060_R001 (47)
Install the Navisphere Service Taskbar / Disk Replacement Utility innst020_R003 (47)
For your own records: Collect environment configuration information instr090_R002 (50)
Verify the presence of the Power Down Sequence Label cnpwr020_R004 (51)
CLARalert Proactive Remote Support cnalr030_R001 (52)
Enter the CX300 TLA part and serial numbers into Clarify database instr210_R006 (54)

Verify you have reviewed the latest CLARiiON CCA rules cncca010_R002
1. Verify you have reviewed the latest version of the CLARiiON CCA Rules document. The document can be obtained by clicking the link provided below or by navigating to the CLARiiON Procedure Generator default installation folder (C:\Program Files\CLARiiON Procedure Generator\CCA_Rules.pdf).
Open CLARiiON CCA Rules Document
Done
Table of Contents

Review E-Lab Interoperability Navigator for additional config steps plstr020_R002
2. Verify that you have read all footnotes that are associated with the tables displayed in the E-Lab Interoperability Navigator Host Connectivity report. In some cases (e.g., when running a particular revision of an operating system or operating system patch), the footnotes include additional steps that may need to be performed and are not included in this procedure.
Done
Table of Contents

Verify the customer has completed the following for PowerPath inpwp010_R003
3. IMPORTANT: Prior to the arrival of the service provider, the customer should complete the following tasks (in the given order) as applicable to the configuration. However, these instructions are provided in the following modules in case the customer has not successfully completed the tasks.
If these tasks are complete, then skip to the module titled "Service providers begin here to continue the installation procedure".
1. Connect the server to the LAN
2. Install the correct HBAs/NICs and driver as stated in the E-Lab Interoperability Navigator (ESM)
3. Set parameters to optimize the driver in a CLARiiON environment
4. If applicable, remove ATF or CDE if migrating to PowerPath
5. Install PowerPath
6. Install the Host Agent
Done
Table of Contents

Verify the host is connected to the LAN cnlan010_R001
4. Verify that the host is connected to the LAN. If it is not connected, then attach the LAN cable to the Ethernet port on the back of the server.
Done
Table of Contents

Install PowerPath SE on a Solaris host inpwp100_R007
5. CAUTION: This procedure is specific to the installation of PowerPath version 5.0, which is the current shipping version for Solaris hosts. For instructions on installing older versions of PowerPath, refer to the applicable PowerPath installation manual available on PowerLink.
PowerPath SE is the default policy on CLARiiON systems without a valid PowerPath license. Load balancing is not in effect, and I/O routing on failure is limited to one host bus and one port on each storage processor. This policy is required for non-disruptive upgrades. It protects against storage processor (SP) and backend failures, but not against HBA or host loop failures. PowerPath SE does not ship with a license key, because one is not required for installation.
Verify the following prior to the installation of PowerPath SE:
∙ Verify that ATF or CDE is not installed on the Solaris host. If any versions of ATF or CDE are installed on the host, uninstall them and reconfigure applications and system services that use ATF pseudo device names to use standard Solaris native named devices (cXtXdXsX) instead before continuing.
∙ Review the patch ReadMe files to determine which patches (if any) you want to install after PowerPath, and whether those patches have any added prerequisites that must be met before you install PowerPath.
∙ Determine if the PowerPath software you are installing requires the removal or presence of a previous version of PowerPath. Some full versions require the previous version to be removed while others do not. Also, some patches require the full version to be present while others require it to be removed. Refer to the PowerPath Release Notes and/or PowerPath patch readme files for your specific version to determine what needs to be present/removed, and if and when a reboot is necessary in order to install your specific PowerPath software version and/or patch. These documents are available on .
Done

6. Mount the PowerPath CD-ROM:
a. Verify that you are logged in as root.
b. Insert the CD-ROM in the CD-ROM drive. The CD should mount automatically. If it does not, then you must mount it manually. For example, to mount the CD on /cdrom/cdrom0, enter:
mount -F hsfs -r /dev/dsk/cxtydzs0 /cdrom/cdrom0
where x, y, and z are values specific to the host's CD-ROM drive.
For example: mount -F hsfs -r /dev/dsk/c0t2d0s0 /cdrom/cdrom0

7. Install PowerPath software:
a. If you do not have a graphics terminal, run the script filename command to record pkgadd output in the specified file. (After pkgadd completes, use CTRL-D to stop recording the output.)
b. Change to the /mount_point/UNIX/SOLARIS directory.
∙ On SPARC hosts, enter:
cd /cdrom/cdrom0/UNIX/SOLARIS
∙ On Opteron hosts, enter:
cd /cdrom/cdrom0/UNIX/SOLARIS_i386
c. Start the installation by entering:
/usr/sbin/pkgadd -d .
NOTE: Note the required space and period after the -d parameter.
d. At the packages available prompt, enter 1 and press Enter.
e. At the prompt for an installation directory for the program files, press Enter to accept the default base directory (/opt) or type the path to an alternate base directory and press Enter.
NOTE: PowerPath installs its files in /basedir/EMCPower; the installation process creates the EMCPower subdirectory. Make a note of the name and location of the PowerPath base directory for future reference.
f. At the prompt to continue the installation, enter y and press Enter.
g. The screen will display information about the installation. At the prompt, enter q and press Enter.
h. If VERITAS VxVM is installed or will be installed, perform the applicable substep depending on the version. Otherwise, skip this step.
∙ VERITAS VxVM version 4.1:
If DGC devices were previously set as jbod, unset them as follows:
vxddladm rmjbod vid=DGC
∙ VERITAS VxVM version 4.0 and earlier:
Run the following command after PowerPath has been installed:
vxddladm addjbod vid=DGC pagecode=0x83 offset=8 length=16
This command needs to be run only once and will take effect on the next reboot.

8. After PowerPath is installed on the host, perform the following:
a. Remove the CD-ROM as follows:
∙ If the CD-ROM volume management daemon vold is running, unmount and eject the CD-ROM. Enter the following command and remove the CD-ROM:
eject
∙ If vold is not running, unmount the CD-ROM by entering the following command and then ejecting the CD-ROM:
umount /cdrom/cdrom0
b. If required by your specific PowerPath version or patch, reboot the host by entering:
reboot -- -r
NOTE: If the sd or ssd driver does not exist on the host, you will see a forceload failed warning during boot. You can safely ignore this warning.

9. If necessary, install any PowerPath patches from the following URL:
NOTE: A readme file that explains how to install the patch accompanies every patch release. This file will also state whether you need to reboot the host after the installation of the patch.
Install the patch package by entering the following command:
patchadd .

10. Verify that PowerPath is installed properly on the host:
a. Enter the command:
pkginfo -l EMCpower
You should see output similar to the following:
NOTE: When you install PowerPath on an AMD Opteron host, i386 appears in the ARCH row.
PKGINST: EMCpower
NAME: EMC PowerPath
CATEGORY: system
ARCH: sparc
VERSION: 5.0.0_bxxx
BASEDIR: /opt
VENDOR: EMC Corporation
PSTAMP: cambridge951018123443
INSTDATE: Feb 15 2006 08:24
STATUS: completely installed
FILES: 286 installed pathnames
5 shared pathnames
38 directories
121 executables
107843 blocks used (approx)
b. Verify that the PowerPath and PowerPath Volume Manager kernel extensions are loaded on the host by entering:
modinfo | grep emc
You should see output similar to the following:
31 7bbfa000 2c28 163 1 emcp (PP Driver 5.0.0)
32 7bb3a000 30ce8 154 1 emcpmpx (PP MPX Ext 5.0.0)
33 1336620 21e60 - 1 emcpsf (PP SF 5.0.0)
34 7ae00000 dada8 - 1 emcpsapi (PP SAPI Ext 5.0.0)
35 12be610 12440 - 1 emcpcg (PP CG Ext 5.0.0)
36 7bb7e6d0 3180 - 1 emcpgpx (PP GPX Ext 5.0.0)
37 7b68fe88 78e8 - 1 emcpdm (PP DM Manager 5.0.0)
38 7b7a7e10 2c8 - 1 emcpioc (PP PIOC 5.0.0)
NOTE: The emcpmp, emcpmpc, emcpmpap, and emcpmpaa extensions present in previous releases have been replaced by the emcpmpx extension in PowerPath 5.0 for Solaris. Furthermore, the HighRoad (emcphr) extension has been removed from PowerPath 5.0.
Table of Contents

Install Navisphere Agent and CLI software on a Solaris server incli030_R001
11. Verify the following requirements have been met before continuing:
∙ The Solaris host is running a supported distribution of Solaris.
∙ The supported HBA hardware and driver are installed.
∙ Agent and/or CLI are not already installed. If either is installed, then remove each one before continuing with the next step.
∙ There is a configured TCP/IP network connection to allow the server to send LUN mapping information to the storage system and allow Manager or CLI to communicate with the storage system over the network.
Done

12. Install the Host Agent and CLI software:
a. If you are installing the Host Agent, ensure that each storage system is connected to the Solaris server where you are installing the Host Agent.
b. Verify that you are logged in as superuser (root) at the Solaris server.
c. If any previous revision of Navisphere Agent or CLI is installed on the server, remove it before continuing.
d. Insert the CD-ROM into the host's CD-ROM drive.
e. In a shell, enter the following commands:
cd /cdrom/cdrom0
pkgadd -d /solaris/naviagnt.pkg
f. Select the packages to install by performing one of the following:
∙ Enter 1 (to install the Host Agent)
∙ Enter 2 (to install Navisphere CLI)
∙ Enter All (to install both Host Agent and CLI)
g. Enter y at the next two prompts.
The installation program looks for any Agent configuration files you may already have on your system. If the program does not find any agent configuration files, then you have finished installing the Agent; skip to step k below.
If the program does find any existing configuration files, it displays a message like the following:
At least 1 saved config file exists for Navisphere Agent.
Please select 1 of the following:
[1] Restore /etc/Navisphere/.Naviagent-config.000120:1059
[2] Restore /etc/Navisphere/.Naviagent-config.000121:1408
[3] Quit
Select number 1-3.
h. Select the existing file you want to serve as the Agent configuration file. The software will retain that file and rename it with the required Agent filename, agent.config.
Generally, you will want to use the most recent file, as shown by the numeric date suffix. To use the default configuration file, specify the number for the Quit option.
i. When the installation of the Host Agent is complete, the following message appears:
Installation of <NAVIGENT> was successful.
If you are also installing the CLI, the program prompts:
Processing package instance <NAVICLI> from .....
j. Answer y to the CLI installation prompts as you did for the Agent.
k. When installation is complete, exit the /cdrom directory (for example, execute cd /). Run the eject command and then remove the CD from the host's CD-ROM drive.
IMPORTANT: Any user who can access the management station can change or delete the Navisphere files you just installed. You may want to change permissions on these files to restrict access to them.

13. Modify the user login scripts:
Use a text editor (for example, vi) to modify login scripts as described below:
∙ If you are running Common Desktop Environment, remove the comment from the last line in $HOME/.dtprofile. The line should read: DTSOURCEPROFILE=true
∙ Make the following additions to the specified paths in $HOME/.profile or $HOME/.cshrc, and export each path:
Add the text /opt/Navisphere/bin to PATH
Add the text /opt/Navisphere/man to MANPATH
Add the text /opt/Navisphere/lib to LD_LIBRARY_PATH
IMPORTANT: You must export each path after you have modified it. For example, in $HOME/.profile (Bourne shell):
PATH=$PATH:/opt/Navisphere/bin; export PATH
MANPATH=$MANPATH:/opt/Navisphere/man; export MANPATH
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/Navisphere/lib; export LD_LIBRARY_PATH

14. Enter the following command to start the Host Agent:
/etc/init.d/agent start
Table of Contents

Service providers begin here to continue the installation procedure inpwp110_R001
15. If the customer has completed all of the customer tasks stated earlier, then begin at this point to continue the installation procedure.
Done
Table of Contents

Install the SPS tray and unit(s) in the CX300/CX300i/CX200 cabinet incx3090_R003
16. Install the rails in the cabinet:
a. After you decide where in the cabinet you want to install the SPS tray, find the unoccupied 1U section on the rear channels. It is suggested that you install the SPS tray immediately beneath the DPE or SPE, if possible.
For more information on EMC recommended device placement configurations, refer to the EMC Rails and Enclosures Field Installation Guide (P/N 300-001-799).
NOTE: On the EMC 40U and 40U-C cabinets, the 1U increments are marked by a horizontal line or small hole in the channel.
b. From the front of the cabinet, insert the alignment pin on one mounting rail assembly into the middle hole of the selected 1U space on a rear channel. The following figure shows the correct channel holes for the alignment pins.
Done
c. Extend the rail to the front cabinet channel. Align the holes of the front rail flange to the inside of the channel, ensure that the rail is level, and then install two M5 x 16-mm Phillips securing screws in the top and bottom holes, as shown in the figure below.
NOTE: Leave the screws slightly loose to allow for adjustment after you install the tray. Do not insert a screw in the middle hole yet.
d. At the back of the cabinet, insert and tighten two M5 x 16-mm securing screws in the holes above and below the alignment pin, as shown above.
e. Repeat steps b through d for the other rail assembly.

17. Install a filler panel for SPS B (if applicable):
If you are installing a single SPS unit, install the SPS filler unit over the SPS B space in the tray as shown in the figure below. If you are installing a typical configuration with two SPS units, skip this step.
a. Align the filler unit to the inside left rear of the tray.
b. Use 4 M4 x 8-mm panhead screws to secure the unit to the tray.

18. Install the SPS tray in the cabinet:
a. From the front of the cabinet, align the SPS tray with the channels on the mounting rails. Position the tray as shown in the figure below.
b. Slide the tray onto the mounting rails in the cabinet, until the flanges of the tray are flush with the cabinet channels.
c. Do not insert any new screws, but tighten the four securing screws (two on each side) that hold the mounting rails to the channels.

19. Install the SPS units in an SPS tray:
a. Remove the SPS units from their packaging.
b. Working from the front of the cabinet, slide the SPS units onto the tray as shown in the figure below.
NOTE: To eliminate potential bow caused by the weight of the SPS(s), press up on the middle of the tray with one hand while you secure the units to the tray. (The tray will not bow if placed on the bottom of the cabinet.)
c. Attach the front fastening bracket, and insert and tighten the 6 M4 x 10-mm flathead securing screws as shown in the figure above.
d. Insert and tighten the 8 M4 x 8-mm panhead securing screws that secure the SPS units to the back of the tray, as shown in the figure above.

20. Install latch brackets and bezel:
a. Use one Phillips M5 x 16-mm screw to secure a latch bracket to each front channel. (The brackets include small alignment bumps to correctly orient them to the channel.)
b. Press the bezel onto the latch brackets until it snaps into place.
Table of Contents

Install the mounting rails and enclosure(s) in the cabinet indae040_R003
21. NOTE: Since EMC uses easily adjustable universal rails to mount these items in a NEMA-standard cabinet, these instructions apply to non-EMC cabinets as well.
NOTE: It is recommended to locate a storage processor enclosure directly above the SPS tray. In configurations where a 3U disk enclosure shares an SPS with a processor enclosure, locate the disk enclosure directly above the processor enclosure. The 3U disk enclosures are typically stacked above their respective storage processor enclosure to facilitate the routing of Fibre Channel cables. If necessary, refer to the EMC Rails and Enclosures Field Installation Guide (P/N 300-001-799) for more information on specific device placement requirements within your cabinet.
Done

22.
Install the adjustable rails:
Physical placement of DAE and DAE2P chassis within a cabinet
Goal: When installing chassis into a cabinet in the field, it is important that the physical placement of the DAE chassis types within the cabinet allows for standard cabling practices and allows DAE2P (Stiletto) chassis to be segregated on their own back-end loop whenever possible.
Background: With the introduction of the DAE2P chassis, it now matters where a DAE2P is mounted within a cabinet in relation to other DAE2 chassis (FC or ATA).
Mixing DAE types: The new design of the DAE2P chassis takes advantage of the point-to-point technology within the chassis. Because of this technology, it offers better backend fault isolation if a problem arises. This benefit can only be realized if the DAE2P chassis on a given backend bus are not mixed with the original DAE2. Since ATA-DAE chassis are also point-to-point within the enclosure, mixing them with DAE2P enclosures on the same backend loop will not compromise the new backend isolation capabilities.
For planned future cable tests, however, loop segregation is suggested if possible.
Because of the way we cable chassis together (Enclosure ID 0 of each bus, ascending up the cabinet with enclosure 1 of each bus, etc.), it is important that the physical position within the cabinet allows for the standard cabling we have been using in the field, while attempting to make back-end busses homogeneous.
Guidelines, in order of priority:
∙ Balance back-end busses (quantity of chassis per bus)
∙ Segregate chassis types by bus
∙ Maintain current cabling strategy
NOTE: It is better to have an FC DAE on the same back-end bus as a Klondike ATA chassis than to have it on the same bus as a DAE2P.
Examples:
Example CX500: 1 DPE, 1 ATA, and 3 DAE2P chassis
Since the DPE is a Katana chassis, keep the ATA chassis on loop 0 and cable the 3 DAE2P chassis to bus 1.
1_2 DAE2P
1_1 DAE2P
0_1 ATA
1_0 DAE2P
0_0 DPE/Katana
Example CX500: 1 DPE, 2 ATA, and 2 DAE2P chassis
Since bus 0 has the Katana boot chassis, the 2 DAE2P chassis are placed on bus 1.
0_2 ATA
1_1 DAE2P
0_1 ATA
1_0 DAE2P
0_0 DPE/Katana
Example CX700: 2 DAE2P and 4 ATA chassis; the boot chassis is a DAE2P.
Install both DAE2P chassis on bus 0.
1_1 ATA
0_1 DAE2P
3_0 ATA
2_0 ATA
1_0 ATA
0_0 DAE2P (boot chassis)
CX700 SPE chassis
Example CX300
With only 1 bus available there is no option to segregate the DAE2P chassis.
0_3 DAE2P
0_2 ATA
0_1 ATA
0_0 DPE/Katana
a. Locate and unpack the rail kit that accompanied your enclosure. Verify that the kit includes the following:
∙ 2 adjustable rails – note that the front edge of each rail is stamped L or R for left and right sides, when they face the cabinet front.
∙ 12 Phillips M5 12.7-mm screws
∙ 2 clip nuts for M5 screws
∙ 3U or 4U front rack panels, with keys
b. In most cases, the space into which you will install your enclosure is covered by a filler panel, which is attached to latch brackets. Remove any filler panel and use a flat-blade screwdriver or similar tool to pry off the latch brackets as shown below.
NOTE: The pre-drilled holes in the cabinet channels are based on the NEMA-standard U measurement. The holes are pre-drilled at distances of 1/2 inch, 5/8 inch, 5/8 inch (totaling one U), then 1/2 inch, 5/8 inch, 5/8 inch (another U), and so on. On EMC 40U and 40U-C cabinets, horizontal lines or small holes in the channel mark the 1U increments.
c. From the front of the cabinet, insert the right rail alignment pins above and below the bottom U mark on the rear NEMA channel.
d. Pull the adjustable rail forward to the inner side of the front channel (the surface facing the rear of the cabinet), and align the four holes on the rail with those in the channel.
e. Secure the rail to the front channel using two of the provided screws at the middle two holes of the rail. Tighten the screws.
f. Repeat steps c through e on the left side, with the left rail.
g. If you are installing a 4U enclosure, secure a clip nut over the center hole in the top (fourth) U of the enclosure space on each front channel.
h. From the rear of the cabinet, use two of the provided screws to secure each rail to the rear channel. Leave the screws slightly loose to allow for adjustment after you install the enclosure.

23.
Install the enclosure(s) (SPE, DPE2, DAE2, or DAE2P) in the cabinet:
Follow the same chassis-placement goal, background, and examples given in step 22 (segregate chassis types by bus, maintain the current cabling strategy, and balance the back-end busses). Note that the back-end fault-isolation benefit of the DAE2P is fully realized only when all the chassis on a given back-end bus are DAE2P chassis.

EMC CX300 Disk Array Installation Notes


EMC CX300 Disk Array Installation Notes

I. Hardware connections
Server 1 and Server 2 form a cluster. Connect both servers over fiber to the two LC ports on SP A of the CX300; SP B is not connected.

The Fibre Channel links are the data path.

Connect a network cable to the RJ45 port on SP A; this is the dedicated management path.

Power runs to the two SPS units and from them to SP A and SP B respectively. Be sure to connect each SPS's sense (signal) cable to the corresponding SP; that is, the power cable and the sense cable from one SPS must go to the same SP.

II. LED indicators
1. SP (green)
Blinks once every four seconds: self-test
Once per second: POST
Four times per second: loading the OS
Not blinking: booted normally, OS running
2. SPS (top to bottom)
Online (green) — blinking: charging; steady: online and ready
On Battery (amber) — steady: external power lost, running on battery
Replace Battery (amber) — steady: battery discharged
Internal Check (amber) — steady: fault detected

III. Initialization
All EMC equipment must be initialized before first use; the three steps are as follows.

1. Connect the server to SP A of the CX300 with a null-modem serial cable and create a PPP connection on the server: COM1, baud rate 115200, hardware flow control enabled, user name "Clariion", password "Clariion!".

Once connected, browse to 192.168.1.1/setup to log in to SP A and change the IP address (218.16.3.221), host name (CX300-Spa), subnet mask (255.255.255.0), gateway (218.16.3.250), and the IP address of SP B. After you confirm, SP A restarts.

(Note: the recommended convention for SP IP addresses is 192.168.0.X.)
2. Repeat the same operation for SP B.

(218.16.3.222, CX300-Spb)
3. Connect to SP A over the serial PPP link once more to verify that the parameters above are correct.

IV. Software installation
1. Install Navisphere Management Server and Navisphere UI on the server to manage the CX300 over the LAN.

2. Also install Navisphere Agent on the server and add the users system@218.16.3.221 and system@218.16.3.202. You can also change or add users by editing the agent.config file in the Navisphere Agent installation directory.
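A sketch of the agent.config lines this step describes, assuming the Host Agent's usual `user name@host` entry format (the two addresses are the ones listed in this step, not values verified against this particular installation):

```
# agent.config privileged users, one entry per line
user system@218.16.3.221
user system@218.16.3.202
```

After editing agent.config, restart the Navisphere Agent service so the new entries take effect.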

EMC VNX Disk Array Power-On/Power-Off Guide


VNX5500 Disk Array Power-On/Power-Off Instructions for the Shanghai Koito Automotive Lamp Co., Ltd. storage project
Shanghai Ruansheng Information Technology Co., Ltd., September 2012

1. Powering on the equipment
1.1 Preparation before power-on
1) Check that the array power cables and all interconnect cables are properly connected.

2) Check that the power feed comes from UPS output and that its voltage and current are stable.
3) Before powering on the array, to ensure proper enclosure cooling and operation, confirm that every slot of every disk enclosure is populated with a disk or a filler (air-dam) module.

4) Before power-on, make sure there is at least one working SP and that each DAE has at least one working LCC.

1.2 Basic power-on order
1) Power on the Fibre Channel switches.
2) Power on the disk array.
3) Power on the servers.

1.3 Power-on order within the array
1) Power on the expansion enclosures (DAEs).
2) Power on the main enclosure (DPE) and the SPS units.
3) Power on the Data Movers (commonly called NAS heads).
Note: In this project, because EMC cabinets and power distribution are used, the DPE, DAE, DM, and other modules are powered on simply by switching on the cabinet PDUs.

4) Power on the Control Station (see section 3.3 for the location of the Control Station power switch).

1.4 Detailed procedure
In this project, the VNX5500 for file and VNX5500 for unified can be equipped with two blades (Data Movers) and one Control Station.

Steps:
1. Verify that the main switch/circuit breaker on each cabinet power strip is on.

If you are powering on a VNX5500 in a cabinet that contains other components, do not switch off the cabinet circuit breakers.

Make sure the SPS switches are in the off position.

2. Make sure the SP A power cable is plugged into the SPS and the power-cable retention clip is snapped into place.

3. Make sure the SP B power cable is plugged into the nearest power distribution unit (PDU) on a circuit other than the one feeding SPS A, and that its retention clip is snapped into place.

On systems with two SPSs, plug SP B into SPS B.

4. Verify that each SPS power cable is connected to the appropriate cabinet power strip and that its retention clip is snapped into place.

5. Verify that all DAE power cables are plugged into the cabinet power strips.

6. Turn on the SPS power switches.

Normally, the storage array takes 10 to 12 minutes to complete the power-on process.
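Once the array has finished powering up, its state can be spot-checked from a management host. A minimal sketch, assuming Navisphere/Unisphere CLI (naviseccli) is installed there and <SP_IP> stands for one SP's management address (neither is specified in this document):

```
# Confirm the SP agent responds, then list any faults reported after power-up
naviseccli -h <SP_IP> getagent
naviseccli -h <SP_IP> faults -list
```

If both commands return without errors and the fault list is empty, the array completed its power-on sequence normally.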

EMC Storage System Installation and Configuration Manual


EMC Storage System Installation and Configuration Manual

Contents
Chapter 1  Overview
  1.1 Purpose
  1.2 Reference documents
Chapter 2  Storage hardware installation
  2.1 EMC CX3 midrange disk array installation and configuration
Chapter 3  Storage software installation and configuration
  3.1 General notes
  3.2 Software installation on IBM AIX
    3.2.1 IBM pSeries installation requirements
    3.2.2 ODM installation
    3.2.3 PowerPath installation
      3.2.3.1 PowerPath 5.0.0 installation requirements
      3.2.3.2 PowerPath 5.0.0 installation procedure
    3.2.4 Navisphere Agent/CLI installation
      3.2.4.1 Installation requirements
      3.2.4.2 Installation procedure
    3.2.5 HACMP-related settings
      3.2.5.1 Notes
      3.2.5.2 Set emcpowerreset in HACMP
      3.2.5.3 Add "cfgscsi_id" to HACMP (for CX3)
    3.2.6 Common commands
    3.2.7 Recognizing PowerPath-managed devices on IBM AIX
  3.3 Software installation on Red Hat Linux
    3.3.1 PowerPath installation
      3.3.1.1 Installation requirements
      3.3.1.2 Installation procedure
    3.3.2 Navisphere Agent/CLI installation
      3.3.2.1 Installation requirements
      3.3.2.2 Installation procedure
    3.3.3 Recognizing PowerPath-managed devices on Linux
  3.4 Software installation on Windows 2003
    3.4.1 PowerPath installation
      3.4.1.1 Installation requirements
      3.4.1.2 Installation procedure
    3.4.2 Navisphere Agent/CLI installation
      3.4.2.1 Installation requirements
      3.4.2.2 Installation procedure
    3.4.3 Recognizing PowerPath-managed devices on Windows
  3.5 Software installation on HP-UX
    3.5.1 PowerPath installation
      3.5.1.1 PowerPath 5.0.1 installation requirements
      3.5.1.2 PowerPath 5.0.0 installation procedure
    3.5.2 Navisphere Agent/CLI installation
      3.5.2.1 Installation requirements
      3.5.2.2 Installation procedure
    3.5.3 Basic HP-UX connection and disk-discovery operations
    3.5.4 Recognizing PowerPath-managed devices on HP-UX
  3.6 Using discovered disks in HP-UX and MC/SG
    3.6.1 Using discovered disks on a standalone host
    3.6.2 Using discovered disks in MC/SG and importing the VG on other hosts
  3.7 Notes on creating LVM volumes on CLARiiON disks with PowerPath on HP-UX
    3.7.1 Set the PV timeout value to 180
    3.7.2 Set the LV BBR parameter to NONE
    3.7.3 LV striping
  3.8 CX3 Storage Group configuration

Chapter 1  Overview
1.1 Purpose
This document describes the hardware installation and configuration of the EMC CX3 midrange disk array and the Brocade 48000 Fibre Channel switch, as well as the installation and configuration of host-side software when IBM AIX, HP-UX, Linux, and Windows hosts connect to EMC disk arrays.


EMCC Configuration Management Manual

CX500 Configuration Management Manual

1. Logging in
Because all array management software comes preinstalled on the array and EMC's management software is browser-based, opening Navisphere Manager is straightforward: just install the Java runtime (j2sdk-1_4_1_01-windows-i586.exe) on the client. When the login dialog appears, enter the username and password (initially admin/password; users may change them). After logging in you reach the main management interface.

3. Monitoring and browsing hardware status
From the management interface, you can see the array's hardware configuration and status in detail, including disks, fans, power supply modules, batteries, storage processors, and so on: expand the Physical tree node.

4. Adding, deleting, and modifying users and passwords
EMC provides several roles for user management, mainly Administrator and Monitor: an Administrator may configure the array and modify its properties, while a Monitor may only browse and monitor it.

Users can create different accounts as needed.

Open the user management interface to add users and to change passwords; after changing a password, log out and log back in.

5. Browsing and modifying array properties
Several cache parameters on the array bear on performance and are closely tied to the customer's application, so it is worth adjusting the read/write cache sizes on the properties page. Open the CX500 properties page. Before changing the cache sizes, set the SPA and SPB read/write caches to disabled; then redistribute the read and write cache sizes according to the application's read/write profile; after the change, set the SPA and SPB read/write caches back to enabled.

6. Browsing array-to-host connection information
Before attaching hosts to the array, check whether each host's HBAs are registered on the array side. Normally, once the HBA driver and Navisphere Agent are installed on a host, the HBA information registers with the array automatically. Open the connectivity status page and inspect the connections: the Logged In property means the physical link is up, and the Registered property means registration succeeded.
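For reference, the disable, resize, re-enable sequence above can also be driven from the classic Navisphere CLI. The sketch below is illustrative only: the SP address and sizes are placeholders, and the option spellings should be verified against your navicli release.

```shell
# Hedged sketch only; replace <SP-IP>, and verify option names on your navicli version
navicli -h <SP-IP> setcache -wc 0 -rca 0 -rcb 0            # disable write cache and both SPs' read cache
navicli -h <SP-IP> setcache -wsz 2048 -rsza 512 -rszb 512  # example new sizes in MB
navicli -h <SP-IP> setcache -wc 1 -rca 1 -rcb 1            # re-enable the caches
```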

EMC Planning Manual (for arrays)

1. On Performance
    1.1  Defining performance
    1.2  Application design
        A. Optimizing for sequential or random I/O
        B. I/O size
        C. Temporal patterns and peak activities
    1.3  Host file system effects
        A. File system buffering and coalescing
        B. Minimum I/O size: the file system request size
        C. Maximum I/O size
        D. File system fragmentation
        F. Correcting alignment problems
        G. Linux I/O fragmenting
    1.4  Volume managers
        A. What a plaid should do
        B. What a plaid should not do
        C. Plaids for high-bandwidth configurations
        D. Plaids and OLTP
    1.5  Host HBA effects
        A. HBA limits
        B. PowerPath
    1.6  MetaLUNs
        A. MetaLUNs versus volume managers
        B. MetaLUN usage notes and recommendations
        C. MetaLUN expansion strategies
    1.7  Storage controller effects
        A. The CLARiiON storage controller
        B. Disk levels and performance
    1.8  RAID engine cache
        A. Cache size and speed
        B. Cache settings
    1.9  The back end (the disk subsystem)
        B. LUN distribution
        C. Effects of the system and boot disks
        D. LUN and RAID group numbering
        E. Minimizing disk contention
        F. Stripe and stripe element size
        G. CLARiiON RAID 5 stripe optimization
        H. Number of disks per RAID group
        I. How many disks to use in one storage system
        J. Disk type and size
2. Planning for Availability and Redundancy
    2.1  High-availability configurations
    2.2  RAID-level considerations
        A. RAID 5
        B. RAID 1/0
        C. RAID 3
        D. Hot spares
    2.3  Binding RAID groups across buses and DAEs
        A. Binding disks across DAEs
        B. Binding disks across back-end buses
        C. Binding with DPE disks
        D. Hot spare strategy
    2.4  Data replication continuity

1. On Performance
How important is performance tuning? Using 5-9 disks in a RAID 5 group and the default settings, a CLARiiON fibre storage system delivers excellent performance; this is what EMC found when testing its own CLARiiON systems in its performance labs.

EMC Storage Operations Manual
The EMC Storage Operations Manual is a guide that describes the operation of EMC storage systems in detail.

Such a manual typically covers the following:

1. Basic overview of the EMC storage system: its components, architecture, and working principles.

2. Hardware installation and configuration: how to correctly install and configure hardware components such as disk arrays and controllers.

3. Storage pool and volume management: how to create, configure, and manage storage pools and logical volumes, including creating RAID groups and configuring LUNs.

4. Host connectivity and configuration: how to connect hosts and configure host-side storage access paths such as FC and iSCSI.

5. Data backup and recovery: how to configure and manage the array's backup and recovery features, such as snapshots and mirrors.

6. Performance optimization and troubleshooting: methods and techniques such as adjusting storage policies and handling storage faults.

7. Tools and management software: common EMC storage management tools such as Navisphere and Unisphere, and how to use them for storage management.

8. Security and permission management: setting access permissions, user authentication, and other security measures to protect the data in the storage system.

9. Best practices and recommendations: suggestions to help users use and manage the storage system better.

The EMC Storage Operations Manual is an important reference for users of EMC storage systems; it helps them understand and master operation and management techniques and improve the system's performance and reliability.

EMC CLARiiON Series Disk Array Basic Configuration

Software packages to install on the host:
- hostagent
- NaviCLI (optional)
- EMCPP.W2003_64.3.0.6.GA, the PowerPath for Windows 2003 package
Restart the host after installing the software.

CX300 management and configuration:
1. Enter the SP IP address in the IE address bar (the viewing machine needs jre1.4.2). Login window: username admin, password password.
2. After login, the main interface appears.
3. View the CX properties tab.
4. Set the memory sizes; the usual standard is write cache = 2 x read cache.
5. Enable the SP cache.
6. Enable access control.
7. View the features the CX has installed and activated.
8. Create a RAID group (Create RAID Group), create LUNs (Bind LUN), and create a group (Create Storage Group).

RAID group layout: disks 0-5 form RAID GROUP 1. Create LUNs on the RAID group, then build the storage group.

The storage group is where host and LUN resources are assigned. Right-click the newly created storage group and choose Select LUNs; lun0, created earlier via Bind LUN, is moved from the left pane to the right to join the storage group. Then click the Hosts tab next to LUNs and add the host as well: a front-end server with Navisphere hostagent installed registers automatically and appears here, so move it from the left pane to the right. The storage group now contains one host and one LUN. After clicking OK, host hzgk1 will discover a new disk in Disk Management; that disk is the LUN placed in the storage group.

System startup and shutdown.
Startup: 1. power on the switches; 2. power on all DAEs connected to the CX (if any); 3. power on the CX; 4. power on the hosts and start the applications.
Shutdown: 1. stop all host I/O to the array; 2. if the application runs on a UNIX OS, unmount the file systems; 3. power off the SPS; 4. after the SPS is off, set all DAE power switches to off; 5. power off the switches.
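For reference, step 8's GUI actions have classic navicli equivalents. This is a hedged sketch: the SP address, disk IDs, LUN numbers, and group name are examples, and exact syntax should be verified against your navicli release.

```shell
# Hedged sketch only; replace <SP-IP> and adjust IDs to your layout
navicli -h <SP-IP> createrg 1 0_0_0 0_0_1 0_0_2 0_0_3 0_0_4 0_0_5   # RAID group 1 from disks 0-5
navicli -h <SP-IP> bind r5 0 -rg 1                                   # bind LUN 0 on RAID group 1
navicli -h <SP-IP> storagegroup -create -gname sg_hzgk1              # create the storage group
navicli -h <SP-IP> storagegroup -addhlu -gname sg_hzgk1 -hlu 0 -alu 0
navicli -h <SP-IP> storagegroup -connecthost -host hzgk1 -gname sg_hzgk1
```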

EMC CX Series Disk Array Installation and Maintenance Manual
2. On the laptop, choose Start → Settings → Network and Dial-up Connections, click the connection you created, and enter username clariion1992 and password clariion1992.
3. Once connected, open a web page, enter 192.168.1.1/setup in the address bar, and press Enter.
4. A screen similar to the figure below appears.

Steps for creating the dial-up connection:
… users and click Next.
f. In the Completing Network Connection Wizard dialog, give the connection a name (for example, "CX series init") and choose Finish.
g. In the Connect dialog, click Properties.
h. On the General tab of Properties, click Configure.
l. Confirm that Enable LCP extensions and Enable software compression are checked, then click OK.
m. Select Internet Protocol (TCP/IP) and set its properties: obtain an IP address and DNS server address automatically. The line characteristic settings are given in the table below.

七. Creating a STORAGE GROUP with Manager (Rev. 6.x)
Under Navisphere 6.x, create a storage group with the following steps:
a. Right-click the storage-system icon and choose Create Storage Groups.
b. Give the storage group a new name.

Important: all disks in one RAID group must be of the same capacity and the same specification, and a hot spare disk must be at least as large as every disk in the array.
3. The selected disks are then displayed for confirmation.
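The hot-spare sizing rule above is mechanical enough to check in code; a minimal illustrative sketch (the drive sizes are made-up example values):

```python
def hot_spare_ok(spare_gb, data_disks_gb):
    """A hot spare must be at least as large as every disk it may replace."""
    return spare_gb >= max(data_disks_gb)

# A 146 GB spare covers a shelf mixing 73 GB and 146 GB drives; a 73 GB spare does not.
print(hot_spare_ok(146, [73, 73, 146, 146]))  # True
print(hot_spare_ok(73,  [73, 73, 146, 146]))  # False
```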

EMC Equipment Configuration Instructions

EMC System Operations Manual: Fibre Channel switch and disk array operations
Equipment: DS_300B & EMC CX-4 240
Contractor: 宁夏希望信息产业有限公司; supervising firm: 深圳市东方天成管理顾问有限公司; owner: 宁夏医科大学附属医院. Written January 2010.

Contents: 1.1 EMC DS_300B_A; 1.2 defining zones; 2.1 EMC DS_300B_B; 2.2 defining zones; 3.1 EMC B300_B configuration; 4.1 EMC CX-4 240 configuration; 4.2 initializing the array; 4.3 array settings; 4.4 host software installation and configuration; 5.1 port mapping table.

1.1 EMC DS_300B_A
In the browser address bar, enter the DS_300B's default IP address 10.77.77.77 (username admin, password password). Set the local management address to 10.0.11.63 and the management hostname to DS_300B_A. Using the Manufacturer serial number ALJ0634E06B and the Supplier serial number BRCALJ0634E06B, register online to obtain a License ID and activate ports 8-15.

As the figure shows, the first 16 ports are in the dark-gray available state.

1.2 Defining zones
In the Zone Admin interface, define Zone1: per the DS_300B_A port mapping table, add switch ports 0, 1, and 2 to zone1, then save and enable zone1. Add ports 0, 1, and 3 to zone2, then save and enable zone2; add ports 0, 1, and 4 to zone3, then save and enable zone3.
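The same zoning can also be done from the switch's Fabric OS command line. This is a hedged sketch using port-based ("domain,port") zone members; the domain ID 1 and the configuration name are assumptions, not values from this document.

```shell
# Hedged sketch only; assumes switch domain ID 1 and an example config name
zonecreate "zone1", "1,0; 1,1; 1,2"
zonecreate "zone2", "1,0; 1,1; 1,3"
zonecreate "zone3", "1,0; 1,1; 1,4"
cfgcreate "cfg_DS300B_A", "zone1; zone2; zone3"
cfgsave
cfgenable "cfg_DS300B_A"
```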

In the switch name field, enter DS_300B_A. Change the switch's default management address to 10.0.11.63 with gateway 10.0.11.1 for easier management on the network.

The switch license information, shown in the figure below, indicates a total of 16 activated ports.

Dell EMC PowerVault MD3860f Series Storage Arrays Deployment Guide

Notes, cautions, and warnings:
A NOTE indicates important information that helps you make better use of the product.
A CAUTION indicates potential damage to hardware or loss of data, and tells you how to avoid the problem.
A WARNING indicates a potential for property damage, personal injury, or death.

© 2012-2018 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners.

Chapter 1: Introduction. System requirements; storage array overview; related documentation.
Chapter 2: Hardware installation. Planning the storage configuration; connecting the storage array; configuring Fibre Channel for Dell EMC MD series arrays; configuring Fibre Channel on SAN-attached arrays; other information you may need; installing supported Fibre Channel HBAs in host servers; zoning with Fibre Channel switches; World Wide Name zoning; switch zoning guidelines; setting up zoning on Fibre Channel switch hardware; storage array cabling; redundant and non-redundant cabling; SAN-attached cabling with examples; remote replication cabling example; mixed environments; PowerVault MD3060e expansion enclosure cabling; expanding with a new PowerVault MD3060e expansion enclosure.
Chapter 3: Installing MD Storage Manager. Installing host bus adapters and drivers; graphical installation (recommended); console installation; silent installation on Windows and Linux; enabling premium features (optional); upgrading PowerVault MD Storage Manager.
Chapter 4: Post-installation tasks. Verifying storage array discovery; initial setup tasks.
Chapter 5: Uninstalling MD Storage Manager. From Windows Server GUI and Server Core editions, and from Linux.
Chapter 6: Load balancing. Load-balancing policies: round-robin with subset, least queue depth, least path weight; setting the load-balancing policy in Linux and in VMware.
Chapter 7: Appendix: using SFP modules and fiber-optic cables. SFP module usage guidelines; installing and removing SFP modules; fiber-optic cable usage guidelines; installing and removing Fibre Channel cables.
Chapter 8: Appendix: hardware cabling best practices. Handling static-sensitive components; host cabling for remote replication; cabling for performance; labeling cables.
Chapter 9: Getting help. Contacting Dell; locating the Dell EMC system service tag.

1 Introduction
This guide provides information about deploying Dell EMC PowerVault MD3860f series storage arrays.

EMC AX4 Operations Manual

Sharing an EMC AX4 implementation case.

1. Overall structure
Overall topology: the project procured one EMC DS-220B Fibre Channel switch, one AX4 storage array, and two HP 380 G5 servers.

2. Server design
The two local disks in each HP server are already configured as RAID 1, with Windows 2003 and SQL Server 2005 Standard installed in two-node cluster mode (the system was provided by the customer).

Each server has dual gigabit NICs: one connects to the LAN, and the other carries the heartbeat network, with IPs 10.0.0.1 and 10.0.0.2.

3. Storage design
The EMC AX4's management control ports are 192.168.0.8 and 192.168.0.9; the default username is admin and the default password is password. The AX4's four 750 GB disks form a single RAID 5 group, giving roughly 2.2 TB of usable space.
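The quoted usable space follows from RAID 5's one-disk parity overhead; a quick illustrative check:

```python
def raid5_usable_gb(disk_count, disk_gb):
    """RAID 5 keeps (N - 1) disks' worth of data; one disk's worth goes to parity."""
    if disk_count < 3:
        raise ValueError("RAID 5 needs at least 3 disks")
    return (disk_count - 1) * disk_gb

print(raid5_usable_gb(4, 750))  # 2250 GB, i.e. roughly the 2.2 TB quoted above
```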

The switch has 16 ports, 8 of them licensed; the first four are in use: ports 1 and 2 connect the storage, and ports 3 and 4 connect the servers.

I will not cover SQL Server 2005 cluster installation here; interested readers can find plenty of material with a quick search.

2. Installing the disk array
1. Array configuration: first create a RAID GROUP (define the disks that make up one RAID set), then BIND LUNs (logically divide the RAID into the basic units the host recognizes), and finally create a STORAGE GROUP (put the defined LUNs and the hosts into one group).

1.1.1 Creating a RAID group: right-click the array serial number and choose Create Raid Group from the menu, then click Apply.

Click Yes; the group is created successfully.

1.1.2 Binding LUNs: right-click the array serial number and choose Bind Lun from the menu; enter the required size in LUN Size and create the LUNs one by one.

1.1.3 Creating storage groups: right-click Storage Groups in the interface, assign the created LUNs to storage groups according to the plan, and add the corresponding hosts to each storage group.

Choose Create Storage Group and enter the desired group name in the pop-up.

EMC CX Series Disk Array Configuration

Contents
1. Installing the Fibre Channel HBA and its driver
2. Installing the host agent software
    2.1 Linux: upload the Agent package; install it; edit the Agent configuration file; start the Agent service
    2.2 Windows: packages and installation
3. Installing the PowerPath multipathing software
    3.1 Linux: upload the PowerPath package; install it; register it; reboot and check status
    3.2 Windows: packages and installation
4. Fibre Channel switch management: basic configuration; command-line management; connection topology; graphical configuration (Web UI)
5. Disk array configuration: basic management; creating LUNs; creating storage groups; registering hosts; associating hosts with LUNs (mounting disks on the host)

1. Installing the Fibre Channel HBA and its driver
Install the Fibre Channel HBA in the server; installing a recent driver version is recommended.

Check the HBA card's chipset, then download the driver for your operating system from the relevant website.

2. Installing the host agent software

2.1 Linux
2.1.1 Upload the Agent package to the Linux host: the file is naviagentcli-6.26.30.0.99-1.noarch.rpm.
2.1.2 Install the Agent software:
[root@DB01 ~]# rpm -ivh naviagentcli-6.26.30.0.99-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:NaviHostAgent-Linux-   ########################################### [100%]
2.1.3 Edit the Agent configuration file:
[root@DB01 ~]# vi /etc/Navisphere/agent.config
Add the following privileged-user lines (the addresses are the management addresses of the array's two storage processors):
user system@172.16.0.251
user system@172.16.0.252
2.1.4 Start the Agent service:
/etc/init.d/naviagent start
Note: alternatively, reboot the system once after the multipathing software is also installed.
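The edit in 2.1.3 can also be scripted; a minimal sketch that works on a scratch copy (the real file is /etc/Navisphere/agent.config, and the SP addresses are this document's examples):

```shell
cfg=./agent.config.test                      # scratch copy for illustration only
touch "$cfg"
printf 'user system@172.16.0.251\nuser system@172.16.0.252\n' >> "$cfg"
grep -c '^user system@' "$cfg"               # prints 2
```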

Proposed EMC Storage Equipment Configuration List

Device name: EMC VNX 5300 disk array. Quantity: 1 unit.
Specifications and configuration requirements:
1. The unit must be an original factory-manufactured product.
2. Dual redundant disk array controllers; each controller's CPU has at least 4 cores with a clock speed of at least 1.6 GHz.
3. 16 GB of first-level array cache, with cache protection: on power failure, data is written to disk.
4. Eight 8 Gb FC front-end host ports and four 6 Gb SAS back-end disk ports (2 BE SAS buses).
5. … SAS drives, quantity: 15.
6. Unisphere storage resource management software.
7. PowerPath path-failover and load-balancing software license for VMware (1 CPU).
8. PowerPath path-failover and load-balancing software license for Windows (2 CPUs).
9. Two matching EMC DS300 SAN switches: (1) 8 Gbps SAN switch with 16 ports activated; (2) rack-mount rails; (3) twenty LC-LC fiber patch cords (short-wave OM2): twelve 5 m and eight 30 m.
10. Service: initial installation plus 3 years of 7x24 warranty service with 4-hour response, provided by the original manufacturer.
11. Other items as standard.

EMC Storage Configuration Manual

1. CLARiiON initial installation
On first installation, initialize the array with the Navisphere Storage System Initialization Wizard.

Open the tool.

Accept the License Agreement and click Next; click Next again and the wizard scans for existing storage systems. Select the storage system to configure, enter the IP addresses for SP A and SP B, and create a username and password (default username admin, password password) to complete the CX3-40 initialization.

From then on, simply enter an SP's IP address in a browser to log in directly.

Log in to the Web management interface: enter the array's IP address in the address bar (the default here is 10.10.10.198), then enter the username and password (admin/password).

1.1 Basic configuration
1. Creating a RAID group: right-click the array and choose Create RAID Group. Pick an RG ID, set Disk Selection to Manual, and click Select to specify the disks by hand; move the required disks from Available Disks to Selected Disks, click OK, then click Yes at the prompt.
2. Creating data LUNs: right-click the array and choose Bind LUN. Select the RAID group to bind from (here raid group 3), assign the new LUN an ID (here LUN 21), choose which SP owns the LUN (here SP B), enter the size in LUN Size (here 10 GB), then click Apply, Yes at the prompt, and OK to finish. Checking LUN 21's properties shows a size of 10 GB and Percent Bound of 100, meaning LUN 21 is fully bound and can be presented to hosts.
3. Creating a hot spare: right-click the RAID group prepared for the hot spare and choose Bind LUN; in Raid Type, select the Hot Spare option, then assign a LUN ID (pick a high ID for hot spares so they are easy to distinguish from data LUNs), and finally click Apply to confirm.
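For reference, the hot-spare bind also has a classic CLI form. This is a hedged sketch: the SP address, RAID group, and LUN ID are examples, and the syntax should be verified against your navicli release.

```shell
# Hedged sketch only; 'hs' is the hot-spare RAID type in classic navicli
navicli -h 10.10.10.198 bind hs 200 -rg 10   # bind LUN 200 on RAID group 10 as a hot spare
navicli -h 10.10.10.198 getlun 200           # inspect the new LUN's state
```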

EMC VNX Configuration Manual

Contents

Chapter 1  VNX5400 hardware
1.1  Hardware
1.2  Basic device information
    1.2.1  Power-on precautions
    1.2.2  Power-on sequence
    1.2.3  Pre-shutdown precautions
1.3  Configuring with Unisphere
    1.3.1  Logging in to the management interface
    1.3.2  Creating a RAID group
    1.3.3  Creating LUNs
    1.3.4  Creating a storage group
    1.3.5  Registering hosts
    1.3.6  Assigning LUNs
    1.3.7  Assigning hosts
    1.3.8  Creating mirrors

EMC CX3-80 Disk Array Configuration Guide

Contents
1  EMC CX3-80 product description
2  Navisphere management suite
    2.1  Basic configuration: Cache; RAID GROUP; LUN; STORAGE GROUP
    2.2  Configuration examples: entering the Navisphere management interface from IE; the management interface; cache settings; RAID GROUP; BIND LUN; Connectivity status; STORAGE GROUP
    2.3  Security settings
    2.4  Performance analysis settings

1  EMC CX3-80 product description
The EMC CLARiiON CX3 Model 80 is the largest, most capable midrange networked storage array in the world.

Based on the CLARiiON CX3 UltraScale architecture, the CX3 Model 80 delivers high-performance, high-capacity networked storage that lets you handle the most data-intensive workloads and large consolidation projects.

With the CX3 Model 80 you can meet today's needs and easily expand capacity to 353 TB to accommodate future growth.

It delivers the "five nines" (99.999%) availability your business requires.

Wizard-driven software features make day-to-day management easier.

Advanced features protect your data.

The EMC CLARiiON CX3 Model 80 provides the following features and benefits:

Feature: UltraScale architecture. Benefit: best-in-class technology and end-to-end 4 Gb/s bandwidth give optimized performance, scalability, and stronger self-healing.

Feature: tiered storage. Benefit: mix high-performance and low-cost/high-capacity drives to match your needs and budget.

Feature: Fibre Channel connectivity. Benefit: eight 4 Gb/s Fibre Channel ports and up to 256 highly available, dual-connected hosts.

Feature: MetaLUN technology. Benefit: online LUN expansion improves performance and capacity utilization.

Feature: virtual LUN technology. Benefit: non-disruptive data migration within the array eases management of tiered storage deployments.

1 Product Introduction

1.1 Hardware configuration
The main hardware configuration of the EMC CX 3-20 disk array is:
- two 2.8 GHz P4 Xeon CPUs
- 4 GB of cache memory
- UltraPoint-technology DAEs
- four front-end 4 Gb/s Fibre Channel host ports
- eight back-end 4 Gb/s Fibre Channel disk ports

1.2 Components
The EMC CX 3-20 disk array consists of three major modules: power supplies, controllers, and disk enclosures.

Power supplies
The front view of the power module is shown in Figure 1-1; it has four parts: A0, A1, B0, and B1.

Controllers
The EMC CX 3-20 disk array has two controllers. Their rear view is shown in Figure 1-2, and the ports of a single controller are shown in Figure 1-3.

2 Configuring the Disk Array

This chapter covers: 2.1 preparation; 2.2 configuring and registering HBAs on the host; 2.3 logging in to the disk array; 2.4 basic array configuration (2.4.1 creating a RAID group; 2.4.2 creating LUNs in the RAID group; 2.4.3 registering host HBA cards; 2.4.4 creating a storage group; 2.4.5 checking the disk array on the host).

2.1 Preparation
Before configuring the disk array, complete the following preparation.

1. Create a script file:
# vi wwnscript.sh
Add the following content to the file:

#!/bin/sh
for i in `cfgadm | grep fc-fabric | awk '{print $1}'`; do
        dev="`cfgadm -lv $i | grep devices | awk '{print $NF}'`"
        wwn="`luxadm -e dump_map $dev | grep 'Host Bus' | awk '{print $4}'`"
        echo "$i: $wwn"
done

2.2 Configuring and registering HBAs on the host
In the screen output, c1, c3, and c4 in the first column denote the HBA controllers.

Step 4: Configure the HBAs:
# cfgadm -c configure c1
# cfgadm -c configure c3
# cfgadm -c configure c4

Step 5: Register the HBAs.

List of figures: Figure 1-1 power module front view; Figure 1-2 controller rear view; Figure 1-3 controller ports; Figure 2-1 array management software interface; Figure 2-2 creating a RAID group; Figure 2-3 Connectivity Status; Figure 2-4 Create Storage Group; Figure 2-5 Storage Group; Figure 2-6 Connect Hosts; Figure 2-7 the LUNs tab.

Document version 03 (2006-03-01), 华为技术有限公司.
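The text-processing half of wwnscript.sh can be exercised without Solaris hardware; the sketch below feeds a canned line in the style of `luxadm -e dump_map` output (the sample line itself is an assumption) through the same grep/awk chain the script uses:

```shell
# Field 4 of the 'Host Bus' line is the port WWN the script extracts
sample='0  600000  0  10000000c9123456  210000e08b000000  0x1f (Unknown Type,Host Bus Adapter)'
echo "$sample" | grep 'Host Bus' | awk '{print $4}'   # prints 10000000c9123456
```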