Installing Veritas Cluster Server 4.0


Veritas Cluster Installation Notes


Veritas HA/Cluster Software Installation Notes

Contents
I. Introduction
  1. Veritas Cluster Server (or QuickStart)
II. Usage Notes
  1. Preparation and notes before installing the software
  2. Installing the software
  3. Initial VCS configuration

I. Introduction

1. Veritas Cluster Server

Veritas Cluster Server (VCS) is high-availability management software that runs across a group of servers to keep user applications available.

Specifically, when an application fails on one server, VCS quickly fails it over to another server, providing high availability for the application and its data.

Veritas Cluster Server has the following features:
● Application failover across clusters of up to 32 nodes.

● Fast failover (under 10 seconds).
● A full GUI that is simple to use.
● Full multi-application, multi-level failover (application-level switching), suited to systems running several applications side by side.

Failing over one application need not affect the others.
● Remote PC monitoring and management interfaces, suited to unattended operation.
● Combined with Veritas File System, provides fast failover.
● Combined with Veritas NetBackup, provides hot standby for the backup system.
● Ships with agents for many applications (web server, IP, databases, multiple NICs, NFS, file systems, disk, processes, SNMP, NetBackup, HSM, files), covering a wide range of uses.

● Users can write their own agents to support special applications.

● Fault alerting and global network management.

● After an application fault, VCS can first retry the application locally, ruling out transient causes.

II. Usage Notes

1. Preparation and notes before installing the software

● The operating system platform for Cluster Server is Solaris 8, with at least 128 MB of RAM.

An /opt directory must exist under the root directory.

● Both hosts must have a .rhosts file in root's home directory; its content is simply a single + character.
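The .rhosts step can be sketched as below. This is a hedged sketch: TARGET stands in for root's home directory so it can be tried safely anywhere; on the real hosts the file goes in / and the script runs on both machines.

```shell
# Sketch of the .rhosts preparation above. TARGET stands in for root's
# home directory; on the real cluster hosts it would be /.
TARGET="$(mktemp -d)"
printf '+\n' > "$TARGET/.rhosts"   # a lone + trusts all hosts and users
chmod 600 "$TARGET/.rhosts"
cat "$TARGET/.rhosts"
```

Note that a lone "+" trusts every remote host and user, which is why it should only be left in place for the duration of the installation.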

● The heartbeat links between the two hosts must be cabled and up.

The public network ports of the two hosts must be interconnected in the same way.

● The /etc/hosts file on each host must resolve the other host's name.

● Configure the IP addresses of the four NICs in /etc/hosts on both hosts, similar to:

#s1
127.0.0.1    localhost
192.168.2.31 s1 loghost
192.168.3.31 s11
172.16.2.31  heartbeats1-0
172.16.3.31  heartbeats1-1
#r1
192.168.2.32 r1
192.168.3.32 r11
172.16.2.32  heartbeatr1-0
172.16.3.32  heartbeatr1-1

● Add the masks of the four subnets to /etc/netmasks, similar to:

192.168.2.0 255.255.255.0
192.168.3.0 255.255.255.0
172.16.2.0  255.255.255.0
172.16.3.0  255.255.255.0

● Create the following four files under /etc:

hostname.eri0 (content: s1)
hostname.hme2
hostname.hme0
hostname.hme1

● The usual installation order is: operating system first, then Volume Manager, then Cluster Server, then Sybase and the other applications.
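The network files above can be staged with a short script. This is a sketch under the assumptions of the example (interface names eri0/hme0-2, hostname s1); it writes into a temporary directory instead of /etc so it can be inspected before copying onto the real hosts.

```shell
# Sketch of the Solaris network prep above, staged into a temp directory
# instead of /etc. Interface names (eri0, hme0-2) come from the example
# and may differ on real hardware.
ETC="$(mktemp -d)"

# hostname.<interface> files; eri0 carries the primary hostname s1.
printf 's1\n' > "$ETC/hostname.eri0"
: > "$ETC/hostname.hme0"
: > "$ETC/hostname.hme1"
: > "$ETC/hostname.hme2"

# Masks for the two public and two heartbeat subnets.
cat > "$ETC/netmasks" <<'EOF'
192.168.2.0 255.255.255.0
192.168.3.0 255.255.255.0
172.16.2.0 255.255.255.0
172.16.3.0 255.255.255.0
EOF

ls "$ETC"
```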

Detailed Veritas Configuration for a Two-Node Cluster


Veritas HA Installation and Configuration Manual

Contents
1 Installing Veritas Storage Foundation
  1.1 Downloading the software
  1.2 Preparation
  1.3 Installing SF for RAC
  1.4 Reconfiguring SF for Oracle RAC
2 Configuring the Disk Group with VEA
  2.1 Installing VEA
  2.2 Creating a DG
  2.3 Creating Volumes and File Systems
  2.4 Configuring the DG on the other node
3 Installing the Oracle software and creating the database
  3.1 Installing the Oracle software
  3.2 Replacing Oracle's libodm with Veritas ODM
  3.3 Creating the database manually
4 Configuring Veritas HA with VCM
  4.1 Installing the VCM GUI to manage VCS
  4.2 Adding a service group
  4.3 Adding a resource group
  4.4 Setting link relationships between resources
  4.5 The HA configuration file
5 HA monitoring configuration
  5.1 Monitoring the Oracle instance
  5.2 Monitoring the listener
6 HA failover
  6.1 HA failover logs
  6.2 HA failover testing
7 Common commands
  7.1 Veritas DMP
  7.2 Veritas VxVM
  7.3 Veritas VCS
8 SMTP alert mail configuration
9 References

1 Installing Veritas Storage Foundation

1.1 Downloading the software

/vcsmc
SF for RAC Release 5.1 on LINUX — package VRTS_SF_HA_Solutions_5.1_PR1_RHEL5_SLES10_x64.tar.gz
Veritas Enterprise Administrator 3.4.15.0 (Linux) — package VRTSobgui-3.4.15.0-0.i686.rpm
VCS Cluster Manager Java_Console 5.1 (Linux) — package VCS_Cluster_Manager_Java_Console_5.1_for_Linux.rpm

1.2 Preparation

# Software requirements

Software          Version
Database          Oracle RAC 11g Release 2
Operating system  Red Hat Enterprise Linux 5 (RHEL 5) Update 3 or later
                  Oracle Enterprise Linux 5.3 (OEL 5.3) or later
                  SUSE Linux Enterprise Server 10 with SP2 (SLES 10 SP2)
                  SUSE Linux Enterprise Server 10 with SP3 (SLES 10 SP3)

# Configure /etc/hosts
Add each node's IP to /etc/hosts on both hosts:
192.168.0.49 host1
192.168.0.50 host2

# Configure ssh trust for the root user on both nodes
# Configure NTP
# Prepare shared disks: the LUNs on the storage array must be visible to all nodes.
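The /etc/hosts step in the preparation above can be sketched as follows; the sketch appends to a staged copy so it can be tried safely, whereas on the real nodes the target is /etc/hosts on each host.

```shell
# Sketch of the /etc/hosts preparation above, against a staged copy.
# host1/host2 and the addresses are the values from the manual.
HOSTS="$(mktemp)"
cat >> "$HOSTS" <<'EOF'
192.168.0.49 host1
192.168.0.50 host2
EOF
grep -c ' host[12]$' "$HOSTS"   # both entries present
```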

Installation and Configuration Manual for Veritas Cluster Server on Linux


Installation and Configuration Manual for Veritas Cluster Server on Linux
March 2, 2015

Contents
I. Pre-installation preparation
  1. Hardware requirements
  2. Software requirements
  3. Hardware and software checks
  4. Required system RPM packages
  5. IP planning
II. Installation steps
  1. Pre-installation check
  2. Editing /etc/hosts
  3. Establishing trust between the two nodes (run on both nodes)
  4. Installation and configuration
    1) Pre-installation check
    2) VCS installation and configuration
    3) PATH additions for root's .bash_profile after installation
    4) Verifying the VCS processes after installation
III. Creating the VG and logical volumes
  1. Listing the LUNs assigned from storage with fdisk -l
  2. Creating PVs and checking PV status (on shared storage)
  3. Creating the VG and checking VG status
  4. Creating LVs and checking LV status
  5. Creating the mount point
IV. Configuring the Mount service group with the Java Console client
V. Failover testing
  1. Switching the service from H0030 to H0031
  2. Switching the service from H0031 back to H0030
  3. Rebooting H0030 and verifying automatic failover to H0031
VI. Uninstalling VCS
  1. Deleting the resource groups
  2. Stopping the VCS service
  3. Uninstalling VCS
  4. Rebooting the hosts
VII. Common VCS commands

I. Pre-installation preparation

1. Hardware requirements
Two servers, one storage array, one switch, and at least two NICs per server.

2. Software requirements
Operating system: Red Hat 6.4
VCS version: RHEL6 x86_64 (a license is required)

3. Hardware and software checks
A) Make sure the hosts are correctly cabled to the disk array and the NICs are physically connected.
B) Make sure both hosts can access the same shared disk devices.

4. Required system RPM packages
ksh-200110202-14.el5.x86_64.rpm

5. IP planning

II. Installation steps

1. Pre-installation check

2. Edit /etc/hosts
#vi /etc/hosts
Add the following:
10.217.5.78  H0030
10.217.5.79  H0031
192.168.1.78 H0030-priv
192.168.1.79 H0031-priv

3. Establish trust between the two nodes (run on both nodes)
#ssh-keygen -t rsa
#chmod 755 ~/.ssh
#scp ~/.ssh/id_rsa.pub H0031:/root/.ssh/authorized_keys
#exec /usr/bin/ssh-agent $SHELL
#ssh-add

4. Installation and configuration
1) Pre-installation check
# ./installer -precheck
2) VCS installation and configuration
#./installer
3) After installation, add the following PATH to root's .bash_profile:
PATH=$PATH:$HOME/bin:/opt/VRTS/bin:/opt/VRTSvcs/bin:/opt/VRTSvlic/bin
export PATH
Note: verify that /etc/VRTSvcs/conf/types.cf and /etc/VRTSvcs/conf/config/types.cf are the same size.
4) After installation, verify that the VCS processes are running:
[root@H0030 rhel6_x86_64]# ps -ef|grep vcs
root 10282     1  0 18:26 ?     00:00:00 /opt/VRTSvcs/bin/had
root 10288     1  0 18:26 ?     00:00:00 /opt/VRTSvcs/bin/CmdServer
root 10292     1  0 18:26 ?     00:00:00 /opt/VRTSvcs/bin/hashadow
root 10313     1  0 18:26 ?     00:00:00 /opt/VRTSvcs/bin/HostMonitor -type HostMonitor -agdir /
root 10430  6395  0 18:28 pts/8 00:00:00 grep vcs

III. Creating the VG and logical volumes

1. List the LUNs assigned from storage with fdisk -l
# fdisk -l |grep Disk

2. Create PVs and check PV status (on shared storage)
In this example the shared storage presents two devices, /dev/sddlmaa and /dev/sddlmab:
#pvcreate -f /dev/sddlmaa
#pvcreate -f /dev/sddlmab

3. Create the VG and check VG status
#vgcreate -s 128 datavg /dev/sddlmaa
#vgextend datavg /dev/sddlmab
#vgdisplay -v /dev/datavg
Activate the VG on the other system:
#vgscan
#vgchange -ay /dev/datavg
#vgdisplay -v /dev/datavg

4. Create the LV and check LV status
#lvcreate -L 200G -n datalv datavg
#lvdisplay
Activate the LV on the other system:
#lvscan
#lvchange -ay /dev/datavg/datalv
#lvdisplay

5. Create the mount point
#mkdir /data

IV. Configuring the Mount service group with the Java Console client

1. Right-click Add Service Group, add the systems, and set the priority and service group type.
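Before onlining the Mount resource from the Java Console, it can be worth confirming that the mount point is actually active. A small helper sketch (is_mounted is a hypothetical name, not a VCS command) that scans /proc/mounts:

```shell
# Hypothetical helper: check whether a directory is an active mount
# point. On Linux, field 2 of /proc/mounts is the mount point.
is_mounted() {
  awk -v d="$1" '$2 == d { found = 1 } END { exit !found }' /proc/mounts
}

# The root filesystem is always mounted; /data would be checked the
# same way after "mount /dev/datavg/datalv /data".
if is_mounted /;     then echo "/ is mounted";     fi
if is_mounted /data; then echo "/data is mounted"; else echo "/data not mounted"; fi
```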

Veritas Installation and Configuration Manual - CFS


CFS HA Installation and Configuration Report for Shanxi Telecom
Oct, 2008
Symantec Consulting Services

Document information
Subject: Symantec Greater China Region Consulting Services
Version:
Author: Hu Huabo
Comments:

About the author
The author of this document can be reached at:
Hu Huabo
Consultant
Symantec Consulting Service, Greater China Region
*********************

Revision history
Date    Author    Version    Changes

Contents
1 Installing CFS HA 4.1 for Linux
  1.1 Preparing the environment
  1.2 Modifying the installation script
  1.3 Installing the software
  1.4 Installing the MP4 patch
  1.5 Installing the RP2 patch
  1.6 Configuring the software
2 Configuration details
  2.1 /etc/llttab
  2.2 /etc/VRTSvcs/conf/config/main.cf
  2.3 Configuring DGs, Volumes, and File Systems from the GUI
  2.4 Configuring DGs, Volumes, and File Systems from the command line

1 Installing CFS HA 4.1 for Linux

The software installed is VCS for Red Hat Linux.

1.1 Preparing the environment

Configure ssh. Generate an ssh public key on each host:

[root@SX-MMS-SA-1 rhel4_i686]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
1e:ce:2e:19:73:10:f6:2c:d7:09:4a:b8:02:3f:45:b0 root@SX-MMS-SA-

So that the installer can scp to all hosts without prompting for a password, copy the ssh public key from SX-MMS-SA-1 to the other host:

[********************]# scp id_rsa.pub SX-MMS-SA-2:/root/.ssh/authorized_keys
root@sx-mms-sa-2's password:
id_rsa.pub 100% 226 0.2KB/s 00:00

1.2 Modifying the installation script

Change to the installation path:

[root@SX-MMS-SA-1 install]# pwd
/sf/rhel4_i686/storage_foundation_cluster_file_system/scripts/install
[root@SX-MMS-SA-1 install]# ls
CPI41LINUX.pm messages rex vxgettext
[root@SX-MMS-SA-1 install]# vi CPI41LINUX.pm

Change "4AS" in the script to "4ES". Before the change:

$CMD{STOP}{vxsvc}="/opt/VRTS/bin/vxsvcctrl stop";
$COMM{LINUX}{SUPPARCHES} = [ qw( i586 i686 ia64 x86_64 ) ];
$COMM{LINUX}{SUPPRHRELS} = [ qw( 4AS ) ];
$COMM{LINUX}{SUPPSUSERELS} = [ qw( 9 ) ];

After the change:

$CMD{STOP}{vxsvc}="/opt/VRTS/bin/vxsvcctrl stop";
$COMM{LINUX}{SUPPARCHES} = [ qw( i586 i686 ia64 x86_64 ) ];
$COMM{LINUX}{SUPPRHRELS} = [ qw( 4ES ) ];
$COMM{LINUX}{SUPPSUSERELS} = [ qw( 9 ) ];

1.3 Installing the software

[root@SX-MMS-SA-1 storage_foundation_cluster_file_system]# ./installsfcfs -installonly

VERITAS STORAGE FOUNDATION CLUSTER FILE SYSTEM 4.1 INSTALLATION PROGRAM

Copyright (c) 2005 VERITAS Software
Corporation. All rights reserved.VERITAS, the VERITAS Logo and all other VERITAS product names and slogans are trademarks orregistered trademarks of VERITAS Software Corporation. VERITAS and the VERITAS Logo Reg.U.S. Pat. & Tm. Off. Other product names and/or slogans mentioned herein may be trademarks or registered trademarks of their respective companies.Enter the system names separated by spaces on which to install SFCFS: SX-MMS-SA-2 SX-MMS-SA-1Checking system communication:installsfcfs requires that ssh commands used between systems execute without prompting forpasswords or confirmations. If installsfcfs asks for a login password, stop installsfcfsand run it again with the ssh configured for passwordless logins, or configure rsh and usethe -usersh option.Verifying communication with SX-MMS-SA-2 .............................. ping successfulAttempting ssh -x with SX-MMS-SA-2 .................................. ssh -x successfulAttempting scp with SX-MMS-SA-2 ........................................ scp successfulChecking OS version on SX-MMS-SA-2 ............................... Linux 2.6.9-78.ELsmpChecking VRTScavf rpm ................................................... not installedChecking Machine Type on SX-MMS-SA-2 (i686)Checking Linux Distribution on SX-MMS-SA-2 ................................. Redhat 4ESCreating log directory on SX-MMS-SA-2 ............................................ DoneChecking OS version on SX-MMS-SA-1 ............................... Linux 2.6.9-78.ELsmpChecking VRTScavf rpm ................................................... not installedChecking Machine Type on SX-MMS-SA-1 (i686)Checking Linux Distribution on SX-MMS-SA-1 ................................. 
Redhat 4ESLogs for installsfcfs are being created in /var/tmp/installsfcfs1022150043.Using /usr/bin/ssh -x and /usr/bin/scp to communicate with remote systems.Initial system check completed successfully.Press [Enter] to continue:VERITAS STORAGE FOUNDATION CLUSTER FILE SYSTEM 4.1 INSTALLATION PROGRAMVERITAS Infrastructure rpm installation:Installing VERITAS Infrastructure rpms on SX-MMS-SA-2:Checking VRTSvlic rpm ................................................... not installedChecking VRTScpi rpm .................................................... not installedChecking file system space ................................ required space is availableCopying VRTSvlic rpm to SX-MMS-SA-2 .............................................. DoneInstalling VRTSvlic 3.02 on SX-MMS-SA-2 .......................................... DoneCopying VRTScpi rpm to SX-MMS-SA-2 ............................................... DoneInstalling VRTScpi 4.1.0.151 on SX-MMS-SA-2 ...................................... DoneInstalling VERITAS Infrastructure rpms on SX-MMS-SA-1:Checking VRTSvlic rpm ................................................... not installedChecking VRTScpi rpm .................................................... not installedChecking file system space ................................ required space is availableInstalling VRTSvlic 3.02 on SX-MMS-SA-1 .......................................... DoneInstalling VRTScpi 4.1.0.151 on SX-MMS-SA-1 ...................................... Done VERITAS Infrastructure rpms installed successfully.Press [Enter] to continue:VERITAS STORAGE FOUNDATION CLUSTER FILE SYSTEM 4.1 INSTALLATION PROGRAMEach system requires a SFCFS product license before installation. License keys foradditional product features should also be added at this time.Some license keys are node locked and are unique per system. 
Other license keys, such as demo keys and site license keys, are registered on all systems and must be entered on thefirst system.SFCFS Licensing Verification:Checking SFCFS license key on SX-MMS-SA-2 ................................ not licensedEnter a SFCFS license key for SX-MMS-SA-2: [?] IZPG-I3R3-I6ON-37MG-KX7L-G943-P Registering VERITAS Storage Foundation for Cluster File System DEMO key on SX-MMS-SA-2Do you want to enter another license key for SX-MMS-SA-2? [y,n,q,?] (n)Registering IZPG-I3R3-I6ON-37MG-KX7L-G943-P on SX-MMS-SA-1Checking SFCFS license key on SX-MMS-SA-1 ... Storage Foundation Cluster File System HA DemoDo you want to enter another license key for SX-MMS-SA-1? [y,n,q,?] (n)SFCFS licensing completed successfully.Press [Enter] to continue:VERITAS STORAGE FOUNDATION CLUSTER FILE SYSTEM 4.1 INSTALLATION PROGRAMinstallsfcfs can install the following optional SFCFS rpms:VRTSobgui VERITAS Enterprise AdministratorVRTSvcsmn VERITAS Cluster Server Man PagesVRTSvcsApache VERITAS Cluster Server Apache AgentVRTSvcsdc VERITAS Cluster Server DocumentationVRTScscm VERITAS Cluster Server Cluster ManagerVRTScssim VERITAS Cluster Server SimulatorVRTSvmdoc VERITAS Volume Manager DocumentationVRTSvmman VERITAS Volume Manager Manual PagesVRTSlvmconv VERITAS Linux LVM to VxVM ConverterVRTSap VERITAS Action ProviderVRTStep VERITAS Task ProviderVRTSfsdoc VERITAS File System DocumentationVRTSfsmnd VERITAS File System Software Developer Kit Manual Pages1) Install all of the optional rpms2) Install none of the optional rpms3) View rpm descriptions and select optional rpmsSelect the optional rpms to be installed on all systems? [1-3,q,?] 
(1)VERITAS STORAGE FOUNDATION CLUSTER FILE SYSTEM 4.1 INSTALLATION PROGRAMinstallsfcfs will install the following SFCFS rpms:VRTSperl VERITAS Perl 5.8.0 RedistributionVRTSob VERITAS Enterprise Administrator ServiceVRTSobgui VERITAS Enterprise AdministratorVRTSllt VERITAS Low Latency TransportVRTSgab VERITAS Group Membership and Atomic BroadcastVRTSvxfen VERITAS I/O FencingVRTSvcs VERITAS Cluster ServerVRTSvcsmg VERITAS Cluster Server Message CatalogsVRTSvcsag VERITAS Cluster Server Bundled AgentsVRTSvcsdr VERITAS Cluster Server Disk Reservation Modules and Utilities VRTSvcsmn VERITAS Cluster Server Man PagesVRTSvcsApache VERITAS Cluster Server Apache AgentVRTSvcsdc VERITAS Cluster Server DocumentationVRTSjre VERITAS Java Runtime Environment RedistributionVRTScscm VERITAS Cluster Server Cluster ManagerVRTScssim VERITAS Cluster Server SimulatorVRTScscw VERITAS Cluster Server Configuration WizardsVRTSweb VERITAS Java Web ServerVRTSvcsw VERITAS Cluster Manager (Web Console)Press [Enter] to continue:...continued:VRTScutil VERITAS Cluster UtilitiesVRTSvxvmcommon VERITAS Volume Manager Common Package.VRTSvxvmplatform VERITAS Volume Manager Platform Specific Package.VRTSvmdoc VERITAS Volume Manager DocumentationVRTSvmman VERITAS Volume Manager Manual PagesVRTSvmpro VERITAS Volume Manager Management Services ProviderVRTSfspro VERITAS File System Management Services ProviderVRTSalloc VERITAS Volume Manager Intelligent Storage ProvisioningVRTSddlpr VERITAS Device Discovery Layer Services ProviderVRTSlvmconv VERITAS Linux LVM to VxVM ConverterVRTSvxfscommon VERITAS File System Common Package.VRTSvxfsplatform VERITAS File System Platform Specific Package.VRTSap VERITAS Action ProviderVRTStep VERITAS Task ProviderVRTSfsman VERITAS File System Manual PagesVRTSfsdoc VERITAS File System DocumentationVRTSfssdk VERITAS File System Software Developer KitVRTSfsmnd VERITAS File System Software Developer Kit Manual PagesVRTScavf VERITAS Cluster Server Agents for Cluster File 
SystemVRTSglm VERITAS Group Lock ManagerPress [Enter] to continue:VERITAS STORAGE FOUNDATION CLUSTER FILE SYSTEM 4.1 INSTALLATION PROGRAMChecking system installation requirements:Checking SFCFS installation requirements on SX-MMS-SA-2:Checking VRTSperl rpm ................................................... not installedChecking VRTSob rpm ..................................................... not installedChecking VRTSobgui rpm .................................................. not installedChecking VRTSllt rpm .................................................... not installedChecking VRTSgab rpm .................................................... not installedChecking VRTSvxfen rpm .................................................. not installedChecking VRTSvcs rpm .................................................... not installedChecking VRTSvcsmg rpm .................................................. not installedChecking VRTSvcsag rpm .................................................. not installedChecking VRTSvcsdr rpm .................................................. not installedChecking VRTSvcsmn rpm .................................................. not installedChecking VRTSvcsApache rpm .............................................. not installedChecking VRTSvcsdc rpm .................................................. not installedChecking VRTSjre rpm .................................................... not installedChecking VRTScscm rpm ................................................... not installedChecking VRTScssim rpm .................................................. not installedChecking VRTScscw rpm ................................................... not installedChecking VRTSweb rpm .................................................... not installedChecking VRTSvcsw rpm ................................................... not installedChecking VRTScutil rpm .................................................. 
not installedChecking VRTSvxvmcommon rpm ............................................. not installed Checking VRTSvxvmplatform rpm ........................................... not installed Checking VRTSvmdoc rpm .................................................. not installedChecking VRTSvmman rpm .................................................. not installedChecking VRTSvmpro rpm .................................................. not installedChecking VRTSfspro rpm .................................................. not installedChecking VRTSalloc rpm .................................................. not installedChecking VRTSddlpr rpm .................................................. not installedChecking VRTSlvmconv rpm ................................................ not installedChecking VRTSvxfscommon rpm ............................................. not installed Checking VRTSvxfsplatform rpm ........................................... not installedChecking VRTSap rpm ..................................................... not installedChecking VRTStep rpm .................................................... not installedChecking VRTSfsman rpm .................................................. not installedChecking VRTSfsdoc rpm .................................................. not installedChecking VRTSfssdk rpm .................................................. not installedChecking VRTSfsmnd rpm .................................................. not installedChecking VRTScavf rpm ................................................... not installedChecking VRTSglm rpm .................................................... not installedChecking file system space ................................ required space is available Checking for patch(1) rpm ..................................... version 2.5.4 installedChecking vxsvc process .................................................... 
not runningChecking had process ...................................................... not runningChecking hashadow process ................................................. not runningChecking CmdServer process ................................................ not runningChecking notifier process ................................................. not runningChecking vxfen driver ..................................................... not runningChecking gab driver ....................................................... not runningChecking llt driver ....................................................... not running Checking SFCFS installation requirements on SX-MMS-SA-1:Checking VRTSperl rpm ................................................... not installedChecking VRTSob rpm ..................................................... not installedChecking VRTSobgui rpm .................................................. not installedChecking VRTSllt rpm .................................................... not installedChecking VRTSgab rpm .................................................... not installedChecking VRTSvxfen rpm .................................................. not installedChecking VRTSvcs rpm .................................................... not installedChecking VRTSvcsmg rpm .................................................. not installedChecking VRTSvcsag rpm .................................................. not installedChecking VRTSvcsdr rpm .................................................. not installedChecking VRTSvcsmn rpm .................................................. not installedChecking VRTSvcsApache rpm .............................................. not installed Checking VRTSvcsdc rpm .................................................. not installedChecking VRTSjre rpm .................................................... not installedChecking VRTScscm rpm ................................................... 
not installedChecking VRTScssim rpm .................................................. not installedChecking VRTScscw rpm ................................................... not installedChecking VRTSweb rpm .................................................... not installedChecking VRTSvcsw rpm ................................................... not installedChecking VRTScutil rpm .................................................. not installedChecking VRTSvxvmcommon rpm ............................................. not installed Checking VRTSvxvmplatform rpm ........................................... not installed Checking VRTSvmdoc rpm .................................................. not installedChecking VRTSvmman rpm .................................................. not installedChecking VRTSvmpro rpm .................................................. not installedChecking VRTSfspro rpm .................................................. not installedChecking VRTSalloc rpm .................................................. not installedChecking VRTSddlpr rpm .................................................. not installedChecking VRTSlvmconv rpm ................................................ not installedChecking VRTSvxfscommon rpm ............................................. not installed Checking VRTSvxfsplatform rpm ........................................... not installedChecking VRTSap rpm ..................................................... not installedChecking VRTStep rpm .................................................... not installedChecking VRTSfsman rpm .................................................. not installedChecking VRTSfsdoc rpm .................................................. not installedChecking VRTSfssdk rpm .................................................. not installedChecking VRTSfsmnd rpm .................................................. 
not installedChecking VRTScavf rpm ................................................... not installedChecking VRTSglm rpm .................................................... not installedChecking file system space ................................ required space is availableChecking for patch(1) rpm ..................................... version 2.5.4 installedChecking vxsvc process .................................................... not runningChecking had process ...................................................... not runningChecking hashadow process ................................................. not runningChecking CmdServer process ................................................ not runningChecking notifier process ................................................. not runningChecking vxfen driver ..................................................... not runningChecking gab driver ....................................................... not runningChecking llt driver ....................................................... not runningInstallation requirement checks completed successfully.Press [Enter] to continue:VERITAS STORAGE FOUNDATION CLUSTER FILE SYSTEM 4.1 INSTALLATION PROGRAMI/O FencingI/O Fencing requires manual configuration after SFCFS Installation.If not properly configured, I/O Fencing will become disabled.It needs to be determined at this time if you plan to configureI/O Fencing in order to properly set the expected Fencing mode.See the Storage Foundation Cluster File System Installationand Administration Guide for more information on I/O Fencing.Will you be configuring I/O Fencing? [y,n,q,?] (y) nVERITAS STORAGE FOUNDATION CLUSTER FILE SYSTEM 4.1 INSTALLATION PROGRAMSFCFS can be installed on systems consecutively or simultaneously. Installing on systems consecutively takes more time but allows for better error handling.Would you like to install Storage Foundation Cluster File System HA on all systems simultaneously? [y,n,q,?] 
(y)Installing Storage Foundation Cluster File System HA 4.1 on all systems simultaneously:Copying VRTSperl rpm to SX-MMS-SA-2 ............................... Done 1 of 117 stepsInstalling VRTSperl 4.1.0.0 on SX-MMS-SA-1 ........................ Done 2 of 117 stepsInstalling VRTSperl 4.1.0.0 on SX-MMS-SA-2 ........................ Done 3 of 117 stepsCopying VRTSob rpm to SX-MMS-SA-2 ................................. Done 4 of 117 stepsInstalling VRTSob 3.2.540 on SX-MMS-SA-1 .......................... Done 5 of 117 stepsInstalling VRTSob 3.2.540 on SX-MMS-SA-2 .......................... Done 6 of 117 stepsInstalling VRTSobgui 3.2.540 on SX-MMS-SA-1 ....................... Done 7 of 117 stepsInstalling VRTSllt 4.1.00.10 on SX-MMS-SA-1 ....................... Done 8 of 117 stepsCopying VRTSobgui rpm to SX-MMS-SA-2 .............................. Done 9 of 117 stepsInstalling VRTSgab 4.1.00.10 on SX-MMS-SA-1 ...................... Done 10 of 117 stepsInstalling VRTSvxfen 4.1.00.10 on SX-MMS-SA-1 .................... Done 11 of 117 stepsInstalling VRTSobgui 3.2.540 on SX-MMS-SA-2 ...................... Done 12 of 117 stepsCopying VRTSllt rpm to SX-MMS-SA-2 ............................... Done 13 of 117 stepsInstalling VRTSllt 4.1.00.10 on SX-MMS-SA-2 ...................... Done 14 of 117 stepsCopying VRTSgab rpm to SX-MMS-SA-2 ............................... Done 15 of 117 stepsInstalling VRTSvcs 4.1.00.10 on SX-MMS-SA-1 ...................... Done 16 of 117 stepsInstalling VRTSvcsmg 4.1.00.10 on SX-MMS-SA-1 .................... Done 17 of 117 steps Installing VRTSvcsag 4.1.00.10 on SX-MMS-SA-1 .................... Done 18 of 117 steps Installing VRTSgab 4.1.00.10 on SX-MMS-SA-2 ...................... Done 19 of 117 stepsCopying VRTSvxfen rpm to SX-MMS-SA-2 ............................. Done 20 of 117 stepsInstalling VRTSvcsdr 4.1.00.10 on SX-MMS-SA-1 .................... Done 21 of 117 stepsInstalling VRTSvcsmn 4.1.00.10 on SX-MMS-SA-1 .................... 
Done 22 of 117 steps Installing VRTSvxfen 4.1.00.10 on SX-MMS-SA-2 .................... Done 23 of 117 stepsInstalling VRTSvcsApache 4.1.00.10 on SX-MMS-SA-1 ................ Done 24 of 117 steps Installing VRTSvcsdc 4.1.00.10 on SX-MMS-SA-1 .................... Done 25 of 117 stepsCopying VRTSvcs rpm to SX-MMS-SA-2 ............................... Done 26 of 117 stepsInstalling VRTSjre 1.4 on SX-MMS-SA-1 ............................ Done 27 of 117 stepsInstalling VRTScscm 4.4.00.10 on SX-MMS-SA-1 ..................... Done 28 of 117 steps Installing VRTSvcs 4.1.00.10 on SX-MMS-SA-2 ...................... Done 29 of 117 stepsCopying VRTSvcsmg rpm to SX-MMS-SA-2 ............................. Done 30 of 117 stepsInstalling VRTScssim 4.1.00.10 on SX-MMS-SA-1 .................... Done 31 of 117 stepsInstalling VRTSvcsmg 4.1.00.10 on SX-MMS-SA-2 .................... Done 32 of 117 steps Installing VRTScscw 4.1.00.10 on SX-MMS-SA-1 ..................... Done 33 of 117 stepsCopying VRTSvcsag rpm to SX-MMS-SA-2 ............................. Done 34 of 117 stepsInstalling VRTSweb 4.2 on SX-MMS-SA-1 ............................ Done 35 of 117 stepsInstalling VRTSvcsag 4.1.00.10 on SX-MMS-SA-2 .................... Done 36 of 117 steps Copying VRTSvcsdr rpm to SX-MMS-SA-2 ............................. Done 37 of 117 stepsInstalling VRTSvcsw 4.4.00.10 on SX-MMS-SA-1 ..................... Done 38 of 117 stepsInstalling VRTSvcsdr 4.1.00.10 on SX-MMS-SA-2 .................... Done 39 of 117 stepsCopying VRTSvcsmn rpm to SX-MMS-SA-2 ............................. Done 40 of 117 stepsInstalling VRTScutil 4.1.00.10 on SX-MMS-SA-1 .................... Done 41 of 117 stepsInstalling VRTSvcsmn 4.1.00.10 on SX-MMS-SA-2 .................... Done 42 of 117 stepsCopying VRTSvcsApache rpm to SX-MMS-SA-2 ......................... Done 43 of 117 stepsInstalling VRTSvcsApache 4.1.00.10 on SX-MMS-SA-2 ................ 
Done 44 of 117 steps Installing VRTSvxvmcommon 4.1.00.10 on SX-MMS-SA-1 ............... Done 45 of 117 steps Copying VRTSvcsdc rpm to SX-MMS-SA-2 ............................. Done 46 of 117 stepsInstalling VRTSvcsdc 4.1.00.10 on SX-MMS-SA-2 .................... Done 47 of 117 stepsCopying VRTSjre rpm to SX-MMS-SA-2 ............................... Done 48 of 117 stepsInstalling VRTSjre 1.4 on SX-MMS-SA-2 ............................ Done 49 of 117 stepsInstalling VRTSvxvmplatform 4.1.00.10 on SX-MMS-SA-1 ............. Done 50 of 117 steps Copying VRTScscm rpm to SX-MMS-SA-2 .............................. Done 51 of 117 stepsInstalling VRTSvmdoc 4.1.00.10 on SX-MMS-SA-1 .................... Done 52 of 117 stepsInstalling VRTScscm 4.4.00.10 on SX-MMS-SA-2 ..................... Done 53 of 117 stepsInstalling VRTSvmman 4.1.00.10 on SX-MMS-SA-1 .................... Done 54 of 117 stepsCopying VRTScssim rpm to SX-MMS-SA-2 ............................. Done 55 of 117 stepsInstalling VRTSvmpro 4.1.00.10 on SX-MMS-SA-1 .................... Done 56 of 117 stepsInstalling VRTSfspro 4.1.00.10 on SX-MMS-SA-1 .................... Done 57 of 117 stepsInstalling VRTScssim 4.1.00.10 on SX-MMS-SA-2 .................... Done 58 of 117 stepsCopying VRTScscw rpm to SX-MMS-SA-2 .............................. Done 59 of 117 stepsInstalling VRTSalloc 4.1.00.10 on SX-MMS-SA-1 .................... Done 60 of 117 stepsInstalling VRTScscw 4.1.00.10 on SX-MMS-SA-2 ..................... Done 61 of 117 stepsInstalling VRTSddlpr 4.1.00.10 on SX-MMS-SA-1 .................... Done 62 of 117 stepsCopying VRTSweb rpm to SX-MMS-SA-2 ............................... Done 63 of 117 stepsInstalling VRTSlvmconv 4.1.00.10 on SX-MMS-SA-1 .................. Done 64 of 117 stepsInstalling VRTSvxfscommon 4.1.00.10 on SX-MMS-SA-1 ............... Done 65 of 117 steps Installing VRTSweb 4.2 on SX-MMS-SA-2 ............................ 
Done 66 of 117 steps
Copying VRTSvcsw rpm to SX-MMS-SA-2 ... Done 67 of 117 steps
Installing VRTSvcsw 4.4.00.10 on SX-MMS-SA-2 ... Done 68 of 117 steps
Copying VRTScutil rpm to SX-MMS-SA-2 ... Done 69 of 117 steps
Installing VRTScutil 4.1.00.10 on SX-MMS-SA-2 ... Done 70 of 117 steps
Installing VRTSvxfsplatform 4.1.00.10 on SX-MMS-SA-1 ... Done 71 of 117 steps
Copying VRTSvxvmcommon rpm to SX-MMS-SA-2 ... Done 72 of 117 steps
Installing VRTSap 2.00 on SX-MMS-SA-1 ... Done 73 of 117 steps
Installing VRTStep 1.20 on SX-MMS-SA-1 ... Done 74 of 117 steps
Installing VRTSfsman 4.1.00.10 on SX-MMS-SA-1 ... Done 75 of 117 steps
Installing VRTSfsdoc 4.1.00.10 on SX-MMS-SA-1 ... Done 76 of 117 steps
Installing VRTSvxvmcommon 4.1.00.10 on SX-MMS-SA-2 ... Done 77 of 117 steps
Installing VRTSfssdk 4.1.00.10 on SX-MMS-SA-1 ... Done 78 of 117 steps
Installing VRTSfsmnd 4.1.00.10 on SX-MMS-SA-1 ... Done 79 of 117 steps
Copying VRTSvxvmplatform rpm to SX-MMS-SA-2 ... Done 80 of 117 steps
Installing VRTScavf 4.1.00.10 on SX-MMS-SA-1 ... Done 81 of 117 steps
Installing VRTSglm 4.1.00.10 on SX-MMS-SA-1 ... Done 82 of 117 steps
Installing VRTSvxvmplatform 4.1.00.10 on SX-MMS-SA-2 ... Done 83 of 117 steps
Copying VRTSvmdoc rpm to SX-MMS-SA-2 ... Done 84 of 117 steps
Installing VRTSvmdoc 4.1.00.10 on SX-MMS-SA-2 ... Done 85 of 117 steps
Copying VRTSvmman rpm to SX-MMS-SA-2 ... Done 86 of 117 steps
Installing VRTSvmman 4.1.00.10 on SX-MMS-SA-2 ... Done 87 of 117 steps
Copying VRTSvmpro rpm to SX-MMS-SA-2 ... Done 88 of 117 steps
Installing VRTSvmpro 4.1.00.10 on SX-MMS-SA-2 ... Done 89 of 117 steps
Copying VRTSfspro rpm to SX-MMS-SA-2 ... Done 90 of 117 steps
Installing VRTSfspro 4.1.00.10 on SX-MMS-SA-2 ... Done 91 of 117 steps
Copying VRTSalloc rpm to SX-MMS-SA-2 ... Done 92 of 117 steps
Installing VRTSalloc 4.1.00.10 on SX-MMS-SA-2 ... Done 93 of 117 steps
Copying VRTSddlpr rpm to SX-MMS-SA-2 ... Done 94 of 117 steps
Installing VRTSddlpr 4.1.00.10 on SX-MMS-SA-2 ... Done 95 of 117 steps
Copying VRTSlvmconv rpm to SX-MMS-SA-2 ... Done 96 of 117 steps
Installing VRTSlvmconv 4.1.00.10 on SX-MMS-SA-2 ... Done 97 of 117 steps
Copying VRTSvxfscommon rpm to SX-MMS-SA-2 ... Done 98 of 117 steps
Installing VRTSvxfscommon 4.1.00.10 on SX-MMS-SA-2 ... Done 99 of 117 steps
Copying VRTSvxfsplatform rpm to SX-MMS-SA-2 ... Done 100 of 117 steps
Installing VRTSvxfsplatform 4.1.00.10 on SX-MMS-SA-2 ... Done 101 of 117 steps
Copying VRTSap rpm to SX-MMS-SA-2 ... Done 102 of 117 steps
Installing VRTSap 2.00 on SX-MMS-SA-2 ... Done 103 of 117 steps
Copying VRTStep rpm to SX-MMS-SA-2 ... Done 104 of 117 steps
Installing VRTStep 1.20 on SX-MMS-SA-2 ... Done 105 of 117 steps
Copying VRTSfsman rpm to SX-MMS-SA-2 ... Done 106 of 117 steps
Installing VRTSfsman 4.1.00.10 on SX-MMS-SA-2 ... Done 107 of 117 steps
Copying VRTSfsdoc rpm to SX-MMS-SA-2 ... Done 108 of 117 steps
Installing VRTSfsdoc 4.1.00.10 on SX-MMS-SA-2 ... Done 109 of 117 steps
Copying VRTSfssdk rpm to SX-MMS-SA-2 ... Done 110 of 117 steps
Installing VRTSfssdk 4.1.00.10 on SX-MMS-SA-2 ... Done 111 of 117 steps
Copying VRTSfsmnd rpm to SX-MMS-SA-2 ... Done 112 of 117 steps
Installing VRTSfsmnd 4.1.00.10 on SX-MMS-SA-2 ... Done 113 of 117 steps
Copying VRTScavf rpm to SX-MMS-SA-2 ... Done 114 of 117 steps
Installing VRTScavf 4.1.00.10 on SX-MMS-SA-2 ... Done 115 of 117 steps
Copying VRTSglm rpm to SX-MMS-SA-2 ... Done 116 of 117 steps

Veritas Cluster Server for Oracle双机热备的配置


概述

将Oracle的双机放在DB2双机之后讲有两个原因：一是DB2的配置相对于Oracle来说比较简单，数据库的模式也比较容易理解，从简单的开始了解有利于用户的学习，其中相似的地方用户可以参照DB2的配置；二是DB2双机的配置基本上是Oracle双机配置的一个子集，用户在学习了DB2的双机之后，Oracle双机配置中很多相似的地方只需简单说明即可，不会让用户感觉重复，同时用户也可以比较这两种模式的异同，从而选择更适合自己的双机配置模式。

将sybase放在最后并不是因为它更复杂,而是它在这三个数据库之中,用户相对比较少,需要的人不多。

DB2与Oracle数据库的对比

DB2和Oracle有很多不同，要想全部了解清楚并非一朝一夕之功。

幸运的是,因为我们现在只是需要做双机配置,所以我们只是在可能会影响配置的概念上,做一个简单的比较。

1.配置结构的不同

DB2数据库的双机热备只支持一种模式：DB2的程序在两台机器上各有一份，只有数据文件存放在共享存储中，如下图所示：

图1，DB2双机配置结构图

这种配置模式的优点是有利于数据库的升级：当其中systemA需要升级的时候，就把服务切换到systemB上运行，升级A的DB2程序，之后还可以把服务切换回到A，再升级B的DB2程序。

这个升级过程不会影响用户的DB2使用,因为总有一台机器可以使用DB2程序来响应用户的服务请求。

对于Oracle来说，不但可以支持这种程序存放在各台机器上的做法，而且支持把Oracle的程序文件也同时放在共享盘上，其结构图如下所示：

图2，Oracle双机结构图-程序在各个服务器上

图3，Oracle双机结构图-程序和数据都在共享盘上

将数据与程序同时放在共享盘上的优点有两个：一是节省磁盘空间，用户只需要保留一份数据库程序拷贝；二是有利于程序的一致性，不会因为数据库版本的不同产生差异，可以避免一些莫名的问题。
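上述两种Oracle双机模式最终都要落实为VCS的服务组定义。下面是main.cf中一个Oracle服务组的简化示意（其中sysA/sysB、oradg、/oradata、IP地址等名称均为假设值，仅用来说明DiskGroup、Mount、IP、Oracle资源及其依赖关系的组织方式，并非完整可用的配置）：

```
group oragrp (
    SystemList = { sysA = 0, sysB = 1 }
    AutoStartList = { sysA }
    )

    DiskGroup ora_dg (
        DiskGroup = oradg
        )

    Mount ora_mount (
        MountPoint = "/oradata"
        BlockDevice = "/dev/vx/dsk/oradg/oravol"
        FSType = vxfs
        )

    IP ora_ip (
        Device = hme0
        Address = "192.168.2.100"
        )

    Oracle ora_db (
        Sid = ORA1
        Owner = oracle
        )

    ora_mount requires ora_dg
    ora_db requires ora_mount
    ora_db requires ora_ip
```

若采用"程序也放共享盘"的模式，把Oracle程序目录所在的文件系统同样作为一个Mount资源纳入该服务组即可。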

VERITAS产品手册


一、前言

企业只有在运行正常时才能够创造经济效益，一旦关键任务系统出现故障，每一分每一秒都意味着重大损失。

尽管这个道理显而易见,但许多公司都做不到对故障服务器的迅速恢复,原因是没有人愿意做这项极其复杂的工作,除非迫在眉睫非做不可。

系统崩溃时,大部分的公司IT部门都把大量的时间花在了恢复系统上,想尽办法能让系统达到可以从磁带恢复备份数据的状态。

他们需要重新安装操作系统和配置硬件,寻找技术熟练人员来完成这些复杂工作,从而浪费了大量的宝贵时间。

由于服务器恢复工作的压力非常大,因而会不可避免地发生错误,进而危及恢复的完整性。

IT每天都面临着严峻挑战,既要提供高度的应用、数据可用性与更高的服务级别要求,同时又要限制成本费用。

随着数据中心日益扩展，其中包括异构操作系统、多种硬件配置和众多的节点/站点等，上述挑战变得更加错综复杂。

VERITAS高可用解决方案旨在以集中方式控制服务器和存储系统,以减少这种复杂性。

VERITAS提供专业化的服务，帮助客户评估应用系统的风险和需求，并通过一系列解决方案和各种技术，保障数据安全、系统不中断运行和高性能，并最大化投资回报。

二、产品介绍

VERITAS致力于为客户提供效用计算的IT框架，为客户提供一系列配套的解决方案。这些方案由经过全面测试和集成的产品组成，能够提供每个级别的可用性：从本地磁带备份，到高水准的数据和应用可用性，乃至广域网环境下的灾难恢复。

1、存储基础软件（Storage Foundation）
– Storage Foundation
– Storage Foundation for Oracle
– Storage Foundation for DB2
– Storage Foundation for Oracle RAC
– Storage Foundation for Cluster File System

（1）Storage Foundation技术特点

存储在线管理，减少因磁盘系统维护造成的停机时间：
-- 应用使用的逻辑卷的在线扩充或缩小
-- 逻辑卷结构的在线调整
-- 逻辑卷数据的在线转移

高性能、可在线管理的文件系统：
-- 在线碎片整理
-- 在线扩充或缩小
-- 在线数据转移

提供真正的数据共享：
-- 同种平台（Sun或HP）的多台主机可以同时访问一个文件系统

适用性强的在线管理功能，极大地减少因系统维护造成的停机，进一步提高了应用系统的高可用性。
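以"逻辑卷的在线扩充"为例，下面用一个干跑（dry-run）的shell小脚本示意通常的操作入口（磁盘组datadg、卷datavol为假设名称；脚本只打印将要执行的命令，不真正调用Veritas工具）：

```shell
# 干跑示意：打印在线扩容将要执行的命令，而不真正执行
run() { echo "+ $*"; }

online_grow() {   # $1=磁盘组  $2=卷名  $3=增加的容量
    # vxresize 可同时在线扩充逻辑卷和其上的VxFS文件系统，应用不必停机
    run vxresize -g "$1" "$2" +"$3"
}

online_grow datadg datavol 2g
```

实际执行时把run改为直接调用命令即可。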

VCP认证 第02部分 VMware vSphere 4.0 ESX4.0安装

VMware vSphere 4.0 03 ESX4.0 安装
Agenda
Module 1 – vSphere 更新
Module 2 – ESX 4.0 安装
Module 3 – vSphere 4.0 Licensing
Module 4 – vCenter Server 4.0
Module 5 – Remote CLI
性能增强的建议
RAM – The ESX host might require more RAM for the service console if you are running third-party management applications or backup agents.
Broadcom NetXtreme 570x gigabit controllers Intel PRO/1000 adapters
ESX 硬件要求
For best performance and security, use separate Ethernet controllers for the service console and the virtual machines. A SCSI adapter, Fibre Channel adapter, or internal RAID controller: Basic SCSI controllers are Adaptec Ultra-160 and Ultra-320, LSI Logic Fusion-MPT, and most NCR/Symbios SCSI controllers. Fibre Channel. See the Storage / SAN Compatibility Guide. RAID adapters supported are HP Smart Array, Dell PercRAID (Adaptec RAID and LSI MegaRAID), and IBM (Adaptec) ServeRAID controllers.

Veritas Cluster for AIX安装配置指南


VCS for AIX安装配置指南

1. 建议安装最新的AIX补丁
oslevel -r    # 5100-05 表示已经安装了ML05

2. 硬件环境准备
✓ 公网（如en0、en1）已经连接到网络上
✓ 私网（如en2、en3）通过两根交叉线互连
✓ 共享磁盘阵列连接正常，并能被两台服务器识别
✓ 足够的磁盘空间（/opt 78M、/usr 13M、/var 2M、/ 3M）

3. 设置PATH和MANPATH环境变量
PATH=/sbin:/usr/sbin:/opt/VRTSvcs/bin:$PATH; export PATH
MANPATH=$MANPATH:/opt/VRTS/man; export MANPATH

4. 允许两台服务器之间可以rlogin
在每台服务器的根目录下编辑/.rhosts文件，在其中写上一个+号。

为了安全起见,在安装完成后建议将/.rhosts文件删除。
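上面第2步列出的磁盘空间需求可以用一个小脚本做安装前检查。下面是一个示意（这里用内置的样本数值演示判断逻辑；实际使用时可把df -m等命令的输出代入）：

```shell
# 按 文件系统/所需MB/可用MB 判断空间是否满足安装需求
check_space() {
    fs=$1; need=$2; avail=$3
    if [ "$avail" -ge "$need" ]; then
        echo "$fs OK"
    else
        echo "$fs 不足: 需要 ${need}MB, 仅有 ${avail}MB"
    fi
}

# 文中给出的需求: /opt 78M、/usr 13M（可用值为样本数据）
check_space /opt 78 120
check_space /usr 13 10
```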

5. 修改/etc/pse.conf使得LLT Driver工作
✓ 去除'ethernet driver'所在行前的#注释符，如下所示：
d+ dlpi en /dev/dlpi/en # streams dlpi ethernet driver
✓ 在#PSE Modules部分加上如下一行：
d llt # LLT driver
✓ 重启服务器

6. 设置SCSI Identifier
对于光纤磁盘阵列，跳过这一步，直接进入第7步。

共享SCSI总线上的每台服务器必须使用唯一的SCSI Identifier，因此需要检查两台AIX服务器是否使用了相同的值。AIX的SCSI Identifier缺省被设置为7，两台服务器都取缺省值时就会冲突。

通过下面的命令可以显示id值：

dy_db2:/opt># lsdev -Cc adapter | grep scsi
scsi0 Available 1S-08 Wide/Ultra-3 SCSI I/O Controller
scsi1 Available 1S-09 Wide/Ultra-3 SCSI I/O Controller
scsi2 Available 1c-08 Wide/Fast-20 SCSI I/O Controller
dy_db2:/opt># lsattr -El scsi0 -a id
id 7 Adapter card SCSI ID True
dy_db1:/># lsdev -Cc adapter | grep scsi
scsi0 Available 1S-08 Wide/Ultra-3 SCSI I/O Controller
scsi1 Available 1S-09 Wide/Ultra-3 SCSI I/O Controller
scsi2 Available 1c-08 Wide/Fast-20 SCSI I/O Controller
dy_db1:/># lsattr -El scsi0 -a id
id 7 Adapter card SCSI ID True

通过smitty scsia或下面的命令可以更改id值：

dy_db1:/># chdev -P -l scsi0 -a id=5

更改完成后需要重新启动服务器，然后通过lspv命令检查是否每台服务器都能识别共享磁盘阵列上的硬盘设备。
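上面两台服务器的lsattr输出都显示id为7，正是需要避免的冲突。下面的shell小函数示意如何从lsattr -El scsiX -a id的输出行中提取id并判断是否冲突（输入为样本文本，便于脱机演示）：

```shell
# 从两台主机的 `lsattr -El scsi0 -a id` 输出行中取第2列(id值)并比较
check_scsi_ids() {
    a=$(echo "$1" | awk '{print $2}')
    b=$(echo "$2" | awk '{print $2}')
    if [ "$a" = "$b" ]; then
        echo "CONFLICT: 两块适配器都使用 SCSI id $a"
    else
        echo "OK: $a / $b"
    fi
}

check_scsi_ids "id 7 Adapter card SCSI ID True" \
               "id 7 Adapter card SCSI ID True"
```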

VCS安装配置步骤


Veritas Cluster Server安装配置指导（针对SYBASE应用）

说明：以下所有截取图形中的名称、IP地址等均为截图时的状态，请参照实际情况修改。

一. 安装前准备工作

1. 心跳网卡配置
安装VCS需要服务器有两块网卡直接连接作心跳侦测用。

两块网卡不使用任何Windows系统自带的协议，所以要将网卡属性中所有已勾选的选项去掉。为便于安装和维护，最好将两个网络连接名称更换为易于识别的名称（如priv1和priv2）。

2. 域配置
安装VCS需要将两台服务器配置成域控制器（一主一备），或一台为主域控制器，另一台直接加入到该域。

同时安装DNS(域名解析系统)。

同时最好配置一下WINS地址。打开"运行"，填入dcpromo，开始安装域；安装完成后重新启动。

3. DNS配置
添加上网方法。

4. 磁盘阵列配置
首先在磁盘阵列柜上创建RAID5，同时设置一个热备盘。

完成后将磁盘分两个区,同时做两个主机通道(HOST LUN)。

在每台服务器上进行如下配置:进入SCSI卡配置窗口,选择通道,进入配置视图的Advanced设置,将SCAN SCSI设备选项DISABLE或SCAN …ONLY。

目的是服务器启动时不进行SCSI设备检测，以免出错。

二. 软件安装

1. HA安装
首先插入安装介质，出现安装向导。我们选择"Storage Foundation HA 4.1 for windows"；在此，我们选择"Complete/Custom"选项；接着，点击"Next"；认真阅读软件许可协议后选择"I accept the terms of the license agreement"继续；输入license key，点"ADD"，然后"Next"；选中"VERITAS Storage Foundation HA 4.1 for Windows"，点击"Next"；此处选择需要安装的Agent，选择后点"Next"；选择需要安装VCS的服务器，点击"ADD"，然后选择安装目录（默认为c:\program files\veritas\），点"Next"继续；屏幕显示安装选项报告。

Veritas Cluster Server管理员手册中文译本

VERITAS 4.0版本产品更新之前必须安装Maintenance Pack 1补丁。如果你还没有安装VERITAS 4.0版本的产品，请查看VERITAS Storage Solutions Getting Started Guide或者VERITAS Cluster File Solutions Getting Started Guide中的版本注释和安装指令。
Veritas Cluster Server 管理员手册翻译
这是小弟学习VCS的笔记和翻译，如有不足或者不对之处请大家指正。本月小弟学习VCS，希望能同大家一起研习。

Maintenance Pack 1 (Solaris)

Storage Solutions 产品包含以下部件：
VERITAS File System
VERITAS Volume Manager
VERITAS Storage Foundation / Storage Foundation HA
VERITAS Storage Foundation for Oracle
VERITAS Storage Foundation for DB2
VERITAS Storage Foundation for Sybase
VERITAS SANPoint Control QuickStart

Cluster File Solutions 产品包含以下部件：
VERITAS Storage Foundation Cluster File System / Storage Foundation Cluster File System HA
VERITAS Storage Foundation for Oracle RAC
一个父资源（parent）依赖于一个子资源（child）。依存关系是同类的（homogeneous），并且不允许出现循环依赖。

资源属性
属性定义个别资源的特定特征。VCS使用属性值在资源上运行适当的命令或系统操作。每个资源都有一组必要属性（required attributes），必须定义这些属性，VCS才能管理该资源。

资源类型
资源通过资源类型来分类，例如：disk group、network interface card、IP address、mount point、database。VCS提供一套预先定义的资源类型：一些是绑定的（some bundled），一些是附加的（some add-ons）；除此以外也可以创建新的资源类型。

Agent：VCS如何控制资源
Agent是控制资源的进程。每种资源类型由一个相对应的agent来管理。在每个cluster系统上，对于每种活动的资源类型仅运行一个agent进程，不论该资源类型下有多少个资源在使用。agent通过一组预定义的操作（也叫entry points）来控制资源。每个agent有4个entry point：
启动资源（online）
关闭资源（offline）
检查资源状态（monitor）
当资源发生故障、无法正常离线时，杀死或清除资源（clean）

cluster通讯
VCS要求cluster内的系统之间有一个cluster通讯通道，作为cluster interconnect。这个通道有时也被称作private network，因为它通常用专用的Ethernet网络来实现。Veritas推荐最少使用2条专用的、基础连接相互独立的通讯通道。cluster interconnect有两个主要的用途：
1. 确定cluster全体成员：cluster中的成员资格通过系统在通道上发送和接收心跳来确定

Veritas Cluster Server入门手册


一、VCS 入门基本知识
VCS全称VERITAS Cluster Server，顾名思义，就是起到集群管理的功能。Symantec的VCS集成在SFHA产品中，当然也可以单独购买VCS，主要是一个HA的角色。
VCS是一个商用的企业级软件解决方案，它可提供全面的可用性管理，把计划的和非计划的停机时间降到最低。该产品能满足不断发展但日益严格的电子商务模式所要求的正常工作时间。电子商务需要增加不停机时间，以保证为顾客提供各种服务。不管哪种企业、多大规模，VERITAS Cluster Server (VCS)都能为他们的"无间断商务"发挥重要作用。

2、VCS 基本概念
要搞懂VCS，需要把下列这些基本概念搞清楚：

(1) Cluster：就是集群。一个集群就是一群机器共享同一组硬件存储设备，VCS监控这些机器上运行的程序，出现任何问题，就将它切换到另一台机器上运行。一个集群通过同一个cluster-ID来识别，这一组机器通过各种心跳线来保持通讯。

(2) Resources and resource types：资源包括硬件和软件资源，例如硬盘、网卡、数据库、IP地址、程序等等，这些都可以被VCS控制，状态基本就是两种：ONLINE和OFFLINE。VCS的作用就是监控这些资源。资源的概念是逻辑的，例如，可以将IP地址和网卡设成一个资源。

(3) Agents：针对各种资源，可以开发各种Agent，VCS就是通过Agent来控制各种资源，例如导入数据库、启动等等各种操作。有个朋友说过一句"Agent的成熟度决定了一个产品的成熟度"，呵呵，很有道理啊。

(4) Resource Dependencies：任何东西都有依赖性，何况资源。例如启动一个web服务资源，应该先把网卡和IP启动；如果网卡资源有问题，这台机器上所有的资源差不多都应该FAILOVER了，这就是依赖性。

(5) Heartbeat：心跳，主流的保持集群同步的方式，就看大家谁做得好了。VERITAS整个通讯基本都是自己写的，主要包括LLT (Low Latency Transport)和GAB (Group Membership and Atomic Broadcast)。LLT依赖于MAC地址实现稳定的底层协议；GAB基于LLT，实现VCS资源的同步。关于LLT和GAB有很多内容，这里就不叙述了。

(6) Split-brain：如果一个集群由于网络原因被分成了2个或多个部分，资源该在哪些机器上启动呢？这个问题涉及内容很多，以后再讨论。
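上面提到的LLT和GAB在系统里就对应几个小配置文件。下面的shell片段示意一个两节点集群（节点名sysA/sysB、网卡hme0/qfe0、cluster id取7，均为假设值）的/etc/llthosts、/etc/llttab和/etc/gabtab大致长什么样；为安全起见写到临时目录而不是/etc：

```shell
dir=$(mktemp -d)

# llthosts: 节点号与节点名的对应表
cat > "$dir/llthosts" <<'EOF'
0 sysA
1 sysB
EOF

# llttab: 本节点名、cluster id 以及两条私网心跳链路
cat > "$dir/llttab" <<'EOF'
set-node sysA
set-cluster 7
link hme0 /dev/hme:0 - ether - -
link qfe0 /dev/qfe:0 - ether - -
start
EOF

# gabtab: -n2 表示两个节点的GAB都起来后才seed整个集群
echo "/sbin/gabconfig -c -n2" > "$dir/gabtab"

grep -c '^link' "$dir/llttab"    # 心跳链路条数
```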

Veritas_Cluster用户使用手册


VCS用户使用手册

日常管理
命令列表
命令一览
VCS缺省值
GABCONFIG设置选项
VCS 安装包 SOLARIS
VCS 安装
VCS系统上的进程
VCS启动配置文件
VCS操作命令
LLT GAB命令操作
离线维护过程
LOGS 日志信息
手工升级维护过程
WEB 界面地址
VCS QUICKSTART 命令
GAB LLT 端口
VERITAS TCP端口
图形界面使用指南
在使用图形管理界面之前
设置图形显示界面
关于集群管理器用户
通过命令行添加一个用户
启动集群管理器
添加和设置一个集群面板
集群管理器窗口
集群监视器
集群浏览器
命令中心
集群窗口框
模板浏览窗口
管理集群
启动/关闭一个集群
从集群管理器中管理用户

设置用户
获取集群及其对象的状态信息
打开、保存、关闭集群设置
管理集群对象
添加和删除服务组
添加/删除资源
添加/删除系统
管理资源和服务组关系
为一个服务组管理系统
将服务组启动
将服务组解除在线使用
将资源启动在线
使资源组脱机
使用向导创建一个新的服务组
浏览日志信息
导入额外资源类型信息
设置编辑器

日常管理

命令一览

启动 VCS：
hastart (-force) (-stale)

启动图形界面：
hagui

停止 VCS：
# hastop -local [-force | -evacuate]
-local 在你运行命令的系统上停止had
# hastop -sys system_name [-force | -evacuate]
-sys 在你指定的系统上停止had
# hastop -all [-force]
-all 停止cluster中所有系统的had

在线更改VCS配置信息：
haconf -makerw
...make changes...
haconf -dump -makero

得到当前的Cluster状态：
# hastatus -summary

代理的操作（手工启动和停止代理）：
# haagent -start agent_name -sys system_name
# haagent -stop agent_name -sys system_name

添加和删除用户、服务组操作

VCS缺省值

私网中的心跳频率 = 0.5 sec，可以在 /etc/llttab 中修改（单位 1/100 sec）
Low-Pri 网络心跳频率 = 1.0 sec，可以在 /etc/llttab 中修改（单位 1/100 sec）
重起后切换的间隔 = 60 sec，可以在 hasys 的 ShutdownTimeout 属性中修改
资源监控间隔（资源类型）= 60 sec
监控离线的资源（资源类型）= 300 sec
LLT宣告系统死亡时间 = 21 sec（16 sec peer inactive + 5 sec GAB stable timeout value）
Peer inactive 可以用 "set-timer" 在 /etc/llttab 中修改（单位 1/100 sec）
Stable timeout value可以用"gabconfig -t"修改.GAB-Had心跳 = 15 sec (通过VCS_GAB_TIMEOUT环境变量设置,单位milliseconds, 需要重起had)GAB允许HAD在panic前被杀掉的时间 (IOFENCE) = 15 sec (通过gabconfig –f设置)最多网络心跳 = 8最多磁盘心跳 = 4VCS had engine端口 14141VCS Web Server端口 = 8181LLT SAP 值 = 0xcafeGABCONFIG设置选项在系统上运行 "gabconfig -l" 会显示当前的GAB设置.可以通过gabconfig修改.例子:draconis # gabconfig -lGAB Driver ConfigurationDriver state : ConfiguredPartition arbitration: Disabled Control port seed : EnabledHalt on process death: DisabledMissed heartbeat halt: DisabledHalt on rejoin : DisabledKeep on killing : DisabledRestart : EnabledNode count : 2Disk HB interval (ms): 1000Disk HB miss count : 4IOFENCE timeout (ms) : 15000Stable timeout (ms) : 5000gabconfig的选项如何对应修改的值:Driver state -cPartition arbitration -sControl port seed -n2 or -xHalt on process death -pMissed heartbeat halt -bHalt on rejoin -jKeep on killing -kIOFENCE timeout (ms) -fStable timeout (ms) -tVCS 安装包 SOLARISVRTScsc m, VCS Cluster Manager (Java Console)♉ VRTSga b, Group Membership and Atomic Broadcast♉ VRTSll t, Low Latency Transport♉ VRTSper l, Perl for VRTSvcs♉ VRTSvc s, VERITAS Cluster Server♉ VRTSvcsa g, VCS Bundled Agents♉ VRTSvcsm g, VCS Message Catalogs♉ VRTSvcsm n, VCS Manual Pages♉ VRTSvcso r, VCS Oracle Enterprise Extension♉ VRTSvcs w, Cluster Manager (Web Console)♉ VRTSvli c, VERITAS License Utilities♉ VRTSwe b, VERITAS Web GUI Engine♉ VRTSvcsd c, VCS Documentation.VCS 安装安装必须按照如下顺序:VRTSlltVRTSgabVRTSvcsVRTSperlVRTScscmVRTSvcsorVCS系统上的进程在VCS系统上可以发现一些如下的进程:root 577 1 0 Sep 14 ? 16:53 /opt/VRTSvcs/bin/hadroot 582 1 0 Sep 14 ? 0:00/opt/VRTSvcs/bin/hashadowroot 601 1 0 Sep 14 ? 2:33/opt/VRTSvcs/bin/DiskGroup/DiskGroupAgent -type DiskGrouproot 603 1 0 Sep 14 ? 0:56/opt/VRTSvcs/bin/IP/IPAgent -type IProot 605 1 0 Sep 14 ? 10:17/opt/VRTSvcs/bin/Mount/MountAgent -type Mountroot 607 1 0 Sep 14 ? 11:23/opt/VRTSvcs/bin/NIC/NICAgent -type NICroot 609 1 0 Sep 14 ? 31:14/opt/VRTSvcs/bin/Oracle/OracleAgent -type Oracleroot 611 1 0 Sep 14 ? 
3:34/opt/VRTSvcs/bin/SPlex/SPlexAgent -type SPlexroot 613 1 0 Sep 14 ? 8:06/opt/VRTSvcs/bin/Sqlnet/SqlnetAgent -type Sqlnetroot 20608 20580 0 12:04:03 pts/1 0:20/opt/VRTSvcs/bin/../gui/jre1.1.6/bin/../bin/sparc/green_threads/jre -mx128m VCSVCS启动配置文件VCS启动和停止脚本包括:/etc/rc2.d/S70llt/etc/rc2.d/S92gab/etc/rc3.d/S99vcs/etc/rc0.d/K10vcs核心VCS配置文件包括:/etc/VRTSvcs/conf/config/main.cf/etc/VRTSvcs/conf/config/types.cf/etc/llttab/etc/gabtab/etc/llthostsVCS操作命令停止VCS HASTOP在所有系统上停止VCS,同时不停止所有的服务组:/opt/VRTSvcs/bin/hastop -all -force注: 如果cluster是置于可读写模式,这是唯一的停止cluster的方法. 如果Cluster是可读写的, 你会得到一个.stale .再执行hastart不会启动已经离线的服务组.在本地停止VCS和服务组:/opt/VRTSvcs/bin/hastop -local在本地停止VCS, 并且不停止本地的服务组:/opt/VRTSvcs/bin/hastop -local -force在本地停止VCS并将服务组切换到另一台系统上:/opt/VRTSvcs/bin/hastop -local -evacuateLLT GAB命令操作/sbin/gabconfig -a 校验LLT和GAB在运行./sbin/lltstat -n 显示心跳状态/sbin/lltstat -nvv 显示最多32个节电的心跳和MAC地址./sbin/lltstat -p 显示端口状态/sbin/lltconfig -a list 显示LLT连接的MAC地址./sbin/lltconfig -T query 显示心跳的频率.在两个系统间测试LLT通信:/opt/VRTSllt/llttest -p 1>transmit -n <name of other node> -c 5000/opt/VRTSllt/llttest -p 1 (on other node)>receive -c 5000/opt/VRTSllt/lltdump -f <network link device>显示LLT通信./opt/VRTSllt/lltshow -n <node name> 显示LLT内核结构./opt/VRTSllt/dlpiping -vs <network link device>打开dlpiping服务器 server./opt/VRTSllt/dlpiping -c <network link device> <MAC address of other node>发送LLT包给另一节点并看反应.GABCONFIG LLTCONFIG 广播机制GAB和LLT在TCP/IP OSI栈的第二层工作. 
LLT是Data Link Provider Interface (DLPI)协议.GAB 处理: (1) Cluster成员管理(2) 管理心跳(3) 在Cluster广播信息LLT 处理: (1) Cluster的系统ID(2) 为多个cluster设置cluster ID.(3) 调试网络心跳频率.心跳频率在私有网是0.5秒, 在low-pri的网络上是1秒.用 "/sbin/lltconfig -T query"发现当前的频率.用gabconfig控制广播和启动.例子:如果Cluster有四个系统,/etc/gabtab应包括:/sbin/gabconfig -c -n 4VCS只有在四个系统都启动后才会启动.为了在少于四个的系统启动VCS, 执行gabconfig时加少于四的数值node count.如果没有其他系统可用,手工广播Cluster:/sbin/gabconfig -c -x确认LLT和GAB是否已启动:/sbin/gabconfig -aGAB Port Memberships=============================================================== Port a gen 4b2f0011 membership 01 Port h gen a6690001 membership 01"a" 表示GAB在通信, "h" 表示VCS在通信. "01" 表示系统0和系统1.gen表示随机生成的数值.GAB Port Memberships===================================Port a gen a36e0003 membership 01Port a gen a36e0003 jeopardy 1Port h gen fd570002 membership 01Port h gen fd570002 jeopardy 1该输出表明一根心跳已断,VCS在jeopardy模式.GAB Port Memberships=============================================================== Port a gen 3a24001f membership 01 Port h gen a10b0021 membership 0 Port h gen a10b0021 visible ;1该输出表明系统1的GAB与它的VCS daemons失去联系.GAB Port Memberships=============================================================== Port a gen 3a240021 membership 01 该输出表明VCS daemons在当前系统上已停止,但GAB和LLT还在运行.设置LLT配置信息:/sbin/lltconfig -a list关闭GAB:/sbin/gabconfig -U卸载GAB (或 LLT)模块:modinfo | grep <gab | llt> (发现模块号)modunload -i <module number>关闭LLT:lltconfig -U监控LLT状态的命令:/sbin/lltstat -n 显示心跳状态/sbin/lltstat -nvv 显示心跳和MAC地址/sbin/lltstat -p 显示端口状态/etc下重要的VCS配置文件:/etc/rc2.d/S70llt/etc/rc2.d/S92gab/etc/rc3.d/S99vcs/etc/llttab/etc/gabtab/etc/llthosts例子:Low Latency Transport配置文件 /etc/llttab:set-node cp01set-cluster 3link hme1 /dev/hme:1 - ether - -link qfe0 /dev/qfe:0 - ether - -link-lowpri qfe4 /dev/qfe:4 - ether - -startGroup Membership Atomic Broadcast配置文件/etc/gabtab: /sbin/gabconfig -c -n3Low Latency Hosts 表 /etc/llthosts:1 cp012 cp023 cp03这些文件启动LLT和GAB:/etc/rc2.d/S70llt/etc/rc2.d/S92gab/dev的link必须存在:ln -s ../devices/pseudo/clone@0:llt llt在 /devices/pseudo :crw-rw-rw- 1 root 
sys 11,109 Sep 21 10:38 clone@0:lltcrw-rw-rw- 1 root sys 143, 0 Sep 21 10:39 gab@0:gab_0crw-rw-rw- 1 root sys 143, 1 Feb 1 16:59 gab@0:gab_1crw-rw-rw- 1 root sys 143, 2 Sep 21 10:39 gab@0:gab_2crw-rw-rw- 1 root sys 143, 3 Sep 21 10:39 gab@0:gab_3crw-rw-rw- 1 root sys 143, 4 Sep 21 10:39 gab@0:gab_4crw-rw-rw- 1 root sys 143, 5 Sep 21 10:39 gab@0:gab_5crw-rw-rw- 1 root sys 143, 6 Sep 21 10:39 gab@0:gab_6crw-rw-rw- 1 root sys 143, 7 Sep 21 10:39 gab@0:gab_7crw-rw-rw- 1 root sys 143, 8 Sep 21 10:39 gab@0:gab_8crw-rw-rw- 1 root sys 143, 9 Sep 21 10:39 gab@0:gab_9crw-rw-rw- 1 root sys 143, 10 Sep 21 10:39 gab@0:gab_acrw-rw-rw- 1 root sys 143, 11 Sep 21 10:39 gab@0:gab_bcrw-rw-rw- 1 root sys 143, 12 Sep 21 10:39 gab@0:gab_ccrw-rw-rw- 1 root sys 143, 13 Sep 21 10:39 gab@0:gab_dcrw-rw-rw- 1 root sys 143, 14 Sep 21 10:39 gab@0:gab_ecrw-rw-rw- 1 root sys 143, 15 Sep 21 10:39 gab@0:gab_f/etc/name_to_major:llt 109gab 143------------------------------------------------------启动VCSVCS只在一台系统上本地启动. 如果main.cf在各个系统上不一致,你必须手工启动或重起其它的系统.启动有需要广播的main.cf文件的系统.启动VCS:/opt/VRTSvcs/bin/hastart如果其它系统已经启动和广播, 当前系统的VCS会载入其它系统的main.cf.启动VCS并设置配置文件为stale,即使它是有效的:/opt/VRTSvcs/bin/hastart -stale这会在cluster环境中生成.stale文件如果VCS无法正常启动, 配置信息会变成stale. 如果.stale文件存在并且你需要立刻启动cluster,使用"force"选项:/opt/VRTSvcs/bin/hastart -force在启动所有系统的VCS后,必须让VCS将cluster配置信息写到磁盘的main.cf文件.这会移掉.stale文件. 
.stale文件在强制启动后自动被删除./opt/VRTSvcs/bin/haconf -dump -makero当一个系统加入cluster中和cluster在线更改配置时,main.cf, types.cf和include 文件被自动写入.HASTATUS状态显示校验Cluster是否正常运行:/opt/VRTSvcs/bin/hastatus (会显示实时的VCS信息)/opt/VRTSvcs/bin/hastatus -sum/opt/VRTSvcs/bin/hasys -display启动和停止服务组在系统上可以手工启动和停止服务组.hagrp -online <service group> -sys <host name>hagrp -offline <service group> -sys <host name>服务组的切换和暂停切换服务组到其它系统:hagrp -switch <Group name> -to <Hostname of other Node>暂停服务组:hagrp -freeze <Service Group> -presistent安装程序的位置man帮助信息在如下目录:/opt/VRTSllt/man/opt/VRTSgab/man/opt/VRTSvcs/man大部分的程序在:/opt/VRTSvcs/bin常用监控命令hastatus -summary 显示VCS Cluster环境的当前状态hasys -list 列出Cluster中的系统hasys -display 得到每个系统的详细信息hagrp -list 列出所有的服务组hagrp -resources <Service Group> 列出服务组的所有资源hagrp -dep <Service Group> 列出服务组的依赖关系hagrp -display <Service Group> 列出服务组的详细信息haagent -list 列出所有的代理haagent -display <Agent> 列出一个代理的详细信息hatype -list 列出所有的资源类型hatype -display <Resource Type> 列出一个资源类型的详细信息hatype -resources <Resource Type> 列出一个资源类型的所有资源hares -list 列出所有的资源hares -dep <Resource> 列出一个资源的依赖性hares -display <Resource> 列出一个资源的详细信息haclus -display 列出Cluster的属性------------------------------------------------------VCS命令设置步骤大部分命令存放在 /opt/VRTSvcs/bin.hagrp 切换系统查看服务组,服务组资源,依赖关系,属性启动, 停止, 切换, 暂停, 解冻, 禁止, 允许,刷新, 禁止和允许服务组中的资源hasys 检查系统参数列出cluster中的系统, 属性, 资源类型,资源和资源属性暂停,解冻系统haconf 导出HA配置信息hauser 管理VCS用户信息hastatus 检查Cluster状态haclus 检查Cluster属性hares 查看资源启动和停止资源, 查明状态, 清楚错误信息haagent 列出代理, 代理状态, 启动和停止代理hastop 停止VCShastart 启动VCShagui 改变Cluster配置信息hacf 生成main.cf文件. 确认本地配置信息haremajor 修改共享磁盘的Major number gabconfig 查看GAB的状态gabdiskhb 控制GAB心跳磁盘lltstat 查看llt状态其它进程:had VCS engine. 是高优先级的实时进程. 
hashadow 监控和重起VCS engine.halink 监控Cluster间的连接.HACF配置确认本地的配置信息有效:cd /etc/VRTSvcs/conf/confighacf -verify .生成main.cf文件:hacf -generate从main.cf生成main.cmd:hacf -cftocmd .从main.cmd生成main.cf:hacf -cmdtocf .HACONF配置文件MAIN.CF将VCS配置文件(main.cf)改为可读写:haconf -makerw将VCS配置文件改为只读.haconf -dump -makero例子:添加一个VCS用户:haconf -makerwhauser -add <username>haconf -dump -makero将一个新系统"sysa"加入服务组的系统列表中并设置优先级2:haconf -makerwhagrp -modify group1 SystemList -add sysa 2haconf -dump -makeroHASYS停止重起和切换如果系统在60秒内关机重起会导致切换为修改这个时间, 在每个系统执行:haconf -makerwhasys -modify <system name> ShutdownTimeout <seconds> haconf -dump -makero如果你不希望在重起时发生切换,将时间设成0VCS代理代理存放在/opt/VRTSvcs/bin.典型的代理包括:CLARiiON (commercial)DiskDiskGroupElifNoneFileNoneFileOnOffFileOnOnlyIPIPMultiNICMountMultiNICANFS (used by NFS server)NICOracle (Part of Oracle Agent - commercial)PhantomProcessProxyServiceGroupHBShare (used by NFS server)Sqlnet (Part of Oracle Agent - commercial)Volume这些代理会出现在进程表中:/opt/VRTSvcs/bin/Volume/VolumeAgent -type Volume/opt/VRTSvcs/bin/MultiNICA/MultiNICAAgent -type MultiNICA/opt/VRTSvcs/bin/Sqlnet/SqlnetAgent -type Sqlnet/opt/VRTSvcs/bin/Oracle/OracleAgent -type Oracle/opt/VRTSvcs/bin/IPMultiNIC/IPMultiNICAgent -type IPMultiNIC /opt/VRTSvcs/bin/DiskGroup/DiskGroupAgent -type DiskGroup/opt/VRTSvcs/bin/Mount/MountAgent -type Mount/opt/VRTSvcs/bin/Wig/WigAgent -type Wig删除VCS软件为删除VCS软件包,执行如下命令:pkgrm <VCS packages>rm -rf /etc/VRTSvcs /var/VRTSvcsinit 6MAIN.CF语法main.cf的结构如下:* include语句* cluster定义* system定义* snmp定义* service group定义* resource type定义* resource定义* resource dependency语句* service group dependency语句如下是main.cf的模板:####include "types.cf"include "<Another types file>.cf"...cluster <Cluster name> (UserNames = { root = <Encrypted password> }CounterInterval = 5Factor = { runque = 5, memory = 1, disk = 10, cpu = 25,network = 5 }MaxFactor = { runque = 100, memory = 10, disk = 100, cpu = 100, network = 100 })system <Hostname of the primary node>system <Hostname of the failover node>group <Service Group Name> 
(SystemList = { <Hostname of primary node>, <Hostname offailover node> }AutoStartList = { <Hostname of primary node> })<Resource Type> <Resource> (<Attribute of Resource> = <Attribute value><Attribute of Resource> = <Attribute value><Attribute of Resource> = <Attribute value>...)...<Resource Type> requires <Resource Type>...// resource dependency tree//// group <Service Group name>// {// <Resource Type> <Resource>// {// <Resource Type> <Resource>// .// .// .// {// <Resource Type> <Resource>// }// }// <Resource Type> <Resource>// }TYPES.CF语法如下是types.cf的例子:######type <Resource Type> (static str ArgList[] = { <attribute>, <attribute>, ... } NameRule = resource.<attribute>static str Operations = <value>static int NumThreads = <value>static int OnlineRetryLimit = <value>str <attribute>str <attribute> = <value>int <attribute> = <value>int <attribute>)...GROUP类型Failover服务组只能在一个系统上在线.Parallel服务组可以在多个系统上同时在线.服务组在三个条件下会在线:1. 用户发出命令2. 重起机器3. 发生切换操作通信方式VCS系统通过如下几种方式通信.1. 网络通道 (最多 8).2. 心跳盘或服务组心跳盘,GAB控制基于磁盘的通信.注: 心跳盘并不存放cluster的状态信息离线维护过程如下是如何在保证服务组在线的情况下进行文件系统维护的例子. 它包括在不影响其它资源和停止服务组的情况下停止某个资源1. haconf -makerw2. hagrp -freeze <service group> -persistent3. haconf -dump -makero现在进行维护工作,如卸载一个文件系统.如果你不希望在维护过程中监控资源,可以在维护前执行:hagrp -disableresources <service group>在维护后,重新装载文件系统4. haconf -makerw5. hagrp -unfreeze <service group> -persistent如果你禁止了一个资源,hagrp -enableresources <service group>6. haconf -dump -makero查找哪个资源还没启动.7. hastatus -sum8. hares -clear <mount resource>9. hares -online <mount resource> -sys <host name>确认服务组已经完全启动.10. 
hastatus -sumLOGS 日志信息VCS日志存放在:/var/VRTSvcs/log这些日志显示VCS engine和资源类型的错误.例子:-rw-rw-rw- 1 root other 22122 Aug 29 08:03 Application_A.log -rw-rw-rw- 1 root root 9559 Aug 15 13:02 DiskGroup_A.log-rw-rw-rw- 1 root other 296 Jul 17 17:55DiskGroup_ipm_A.log-rw-rw-rw- 1 root root 746 Aug 17 16:27 FileOnOff_A.log-rw-rw-rw- 1 root root 609 Jun 19 18:55 IP_A.log-rw-rw-rw- 1 root root 1130 Jul 21 14:33 Mount_A.log-rw-rw-rw- 1 root other 5218 May 14 13:16 NFS_A.log-rw-rw-rw- 1 root root 7320 Aug 15 12:59 NIC_A.log-rw-rw-rw- 1 root other 1042266 Aug 23 10:46 Oracle_A.log-rw-rw-rw- 1 root root 149 Mar 20 13:10 Oracle_ipm_A.log -rw-rw-rw- 1 root other 238 Jun 1 13:07 Process_A.log-rw-rw-rw- 1 root other 2812 Mar 21 11:45ServiceGroupHB_A.log-rw-rw-rw- 1 root root 6438 Jun 19 18:55 Sqlnet_A.log-rw-rw-rw- 1 root root 145 Mar 20 13:10 Sqlnet_ipm_A.log -rw-r--r-- 1 root other 16362650 Aug 31 08:58 engine_A.log-rw-r--r-- 1 root other 313 Mar 20 13:11 hacf-err_A.log-rw-rw-rw- 1 root root 1615 Jun 29 16:30 hashadow-err_A.log-rw-r--r-- 1 root other 2743342 Aug 1 17:12 hashadow_A.log drwxrwxr-x 2 root sys 3072 Aug 27 12:41 tmp这些标志出现在日志中.TAG_A: VCS内部信息. 需要联系客户服务.TAG_B: 指出错误和异常.TAG_C: 指出警告.TAG_D: 指出正常操作.TAG_E: 指出代理的状态.你可以通过修改资源类型的属性来提高日志的等级(TAG F-Z messages). 缺省是"error". 你可以选择 "none", "all", "debug", or "info".hatype -modify <Resource Type> LogLevel <option>手工升级维护过程1. 打开配置文件, 暂停所有的服务组, 关闭配置文件.haconf -makerwhagrp -freeze <Service Group> -persistenthaconf -dump makero2. 关闭VCS但保持服务组启动.hastop -all -force3. 确认所有系统上VCS已经停止.gabconfig -a4. 确认在磁盘上没有运行GAB.gabdiskhb -l如果有,在每个系统上删除.gabdiskhb -d5. 关闭GAB并确认它已经停止.gabconfig -Ugabconfig -a6. 找到GAB内核模块号并卸载.modinfo | grep gabmodunload -i <GAB module number>7. 关闭. 在每个系统上运行:lltconfig -U8. 找到LLT内核模块号并卸载.modinfo | grep lltmodunload -i <LLT module number>9. 在每个系统上重新命名VCS启动和停止脚本.cd /etc/rc2.dmv S70llt s70lltmv S92gab s92gabcd /etc/rc3.dmv S99vcs s99vcscd /etc/rc0.dmv K10vcs k10vcs10. 备份/etc/VRTSvcs/conf/config/main.cf.备份/etc/VRTSvcs/conf/config/types.cf.11. 
删除旧VCS包.pkgrm VRTScscm VRTSvcs VRTSgab VRTSllt VRTSperl 安装新的VCS包.恢复main.cf和types.cf.12. 启动LLT, GAB和VCS.cd /etc/rc2.dmv s70llt S70lltmv s92gab S92gabcd /etc/rc3.dmv s99vcs S99vcscd /etc/rc0.dmv k10vcs K10vcs/etc/rc2.d/S70llt start/etc/rc2.d/S92gab/etc/rc3.d/S99vcs start13. 查看VCS状态.hastatushastatus -sum14. 解冻所有的服务组.haconf -makerwhagrp -unfreeze <Service Group> -persistenthaconf -dump -makeroVCS 系统名如果你经常改动系统的主机名, 最好让VCS使用唯一个名字.为了VCS不使用机器的主机名而使用自己的名字,需要在/etc/llttab 定义/etc/VRTSvcs/conf/sysname. 在main.cf中使用VCS的名字.例子:/etc/llthosts:0 sysA1 sysB/etc/VRTSvcs/conf/sysname:sysA/etc/VRTSvcs/conf/sysname:sysB/etc/llttab:set-node /etc/VRTSvcs/conf/sysnameHAD VCS版本查询had版本信息had -versionWEB 界面地址VCS http://<cluster IP>:8181/vcsVCSQS http://<cluster IP>:8181/vcsqsVCS QUICKSTART 命令VCS QuickStart只有一些简单命令:vcsqs -startvcsqs -stop [-shutdown] [-all] [-evacuate] vcsqs -grp [<group>]vcsqs -res [<resource>]vcsqs -config [<resource>]vcsqs -sysvcsqs -online <group> -sys <system>vcsqs -offline <group> [-sys <system>]vcsqs -switch <group> [-sys <system>]vcsqs -freeze <group>vcsqs -unfreeze <group>vcsqs -clear <group> [-sys <system>]vcsqs -flush <group> [-sys <system>]vcsqs -autoenable <group> -sys <system>vcsqs -usersvcsqs -addadmin <username>vcsqs -addguest <username>vcsqs -deleteuser <username>vcsqs -updateuser <username>vcsqs -intervals [<type>]vcsqs -modifyinterval <type> <value>vcsqs -versionvcsqs -helpGAB LLT 端口a GAB internal useb I/O Fencingd ODM (Oracle Disk Manager)f CFS (VxFS cluster feature)h VCS engine (HAD)j vxclk monitor portk vxclk synch portl vxtd (SCRP) portm vxtd replication porto vcsmm (Oracle RAC/OPS membership module)q qlog (VxFS QuickLog)s Storage Appliancet Storage Applianceu CVM (Volume Manager cluster feature) v CVMw CVMx GAB test user clientz GAB test kernel clientVERITAS TCP端口8181 VCS and GCM web server14141 VCS engine14142 VCS engine test14143 gabsim14144 notifier14145 GCM port14147 GCM slave port14149 tdd Traffic Director port14150 cmd server14151 GCM DNS14152 GCM 
messenger14153 VCS Simulator15151 VCS GAB TCP port图形界面使用指南在使用图形管理界面之前在你使用集群管理器之前,你必须:✓设置图形管理界面;✓正式设置中包含用户账号,如用户账号不存在,你必须创建一个;✓开始使用集群管理✓创建集群面板设置图形显示界面在UNIX工作站中设置界面1.键入下面命令,同意系统许可显示在桌面上:xhost +2.在创建集群管理器的地方设置外观参数DISPLAY,例如,键入下面命令显示在系统“myws”上export DISPLAY=myws:0关于集群管理器用户集群管理器有三种用户,以分配给他们的权限为基础:访问者,操作员和管理员◆如果用户账号是访问者权限,用户可以查看和使用集群,他们不可以修改设置或执行管理性任务。
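回到前文"LLT GAB命令操作"一节对gabconfig -a输出的解读：Port a代表GAB在通讯，Port h代表HAD在通讯，出现jeopardy则说明有一条心跳已断。这种判读可以用下面的awk小脚本自动化（输入是样本输出文本，便于脱机演示）：

```shell
gab_state() {
    echo "$1" | awk '
        /^Port a .* membership/ { a = 1 }   # GAB 在通讯
        /^Port h .* membership/ { h = 1 }   # HAD 在通讯
        / jeopardy /            { j = 1 }   # 有一条心跳已断
        END {
            if (a && h)      s = "GAB+HAD running"
            else if (a)      s = "GAB only"
            else             s = "down"
            if (j) s = s " (jeopardy)"
            print s
        }'
}

sample='GAB Port Memberships
===================================
Port a gen a36e0003 membership 01
Port a gen a36e0003 jeopardy 1
Port h gen fd570002 membership 01
Port h gen fd570002 jeopardy 1'

gab_state "$sample"    # 对该样本会输出 GAB+HAD running (jeopardy)
```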

Veritas Cluster Server VCS How To


Veritas Cluster Server (VCS) HOWTO:
===================================

$Id: VCS-HOWTO,v 1.25 2002/09/30 20:05:38 pzi Exp $

Copyright (c) Peter Ziobrzynski, pzi@

Contents:
---------
- Copyright
- Thanks
- Overview
- VCS installation
- Summary of cluster queries
- Summary of basic cluster operations
- Changing cluster configuration
- Configuration of a test group and test resource type
- Installation of a test agent for a test resource
- Home directories service group configuration
- NIS service groups configuration
- Time synchronization services
- ClearCase configuration

Copyright:
----------

This HOWTO document may be reproduced and distributed in whole or in
part, in any medium physical or electronic, as long as this copyright
notice is retained on all copies. Commercial redistribution is allowed
and encouraged; however, the author would like to be notified of any
such distributions.

All translations, derivative works, or aggregate works incorporating
this HOWTO document must be covered under this copyright notice. That
is, you may not produce a derivative work from a HOWTO and impose
additional restrictions on its distribution. Exceptions to these rules
may be granted under certain conditions.

In short, I wish to promote dissemination of this information through
as many channels as possible. However, I do wish to retain copyright
on this HOWTO document, and would like to be notified of any plans to
redistribute the HOWTO.

If you have questions, please contact me: Peter Ziobrzynski <pzi@>

Thanks:
-------
- Veritas Software provided numerous consultations that lead to the
  cluster configuration described in this document.
- Parts of this document are based on the work I have done for
  Kestrel Solutions, Inc.
- Basis Inc.
for assisting in selecting hardware components and helpin resolving installation problems.- comp.sys.sun.admin Usenet community.Overview:---------This document describes the configuration of a two or more node Solaris Cluster using Veritas Cluster Server VCS 1.1.2 on Solaris 2.6. Numberof standard UNIX services are configured as Cluster Service Groups:user home directories, NIS naming services, time synchronization (NTP). In addition a popular Software Configuration Management system from Rational - ClearCase is configured as a set of cluster service groups. Configuration of various software components in the formof a cluster Service Group allows for high availability of the applicationas well as load balancing (fail-over or switch-over). Beside thatclusterconfiguration allows to free a node in the network for upgrades,testingor reconfiguration and then bring it back to service very quickly with little or no additional work.- Cluster topology.The cluster topology used here is called clustered pairs. Two nodes share disk on a single shared SCSI bus. Both computers and the diskare connected in a chain on a SCSI bus. Both differential or fast-wide SCSI buses can be used. Each SCSI host adapter in each node is assigned different SCSI id (called initiator id) so both computers can coexist on the same bus.+ Two Node Cluster with single disk:Node Node| /| /| /| /|/DiskA single shared disk can be replaced by two disks each on its private SCSI bus connecting both cluster nodes. This allows for disk mirroring across disks and SCSI buses.Note: the disk here can be understood as disk array or a disk pack.+ Two Node Cluster with disk pair:Node Node|\ /|| \ / || \ || / \ ||/ \|Disk DiskSingle pair can be extended by chaining additional node and connectingit to the pair by additional disks and SCSI buses. One or more nodescan be added creating N node configuration. 
The perimeter nodes havetwo SCSI host adapters while the middle nodes have four.+ Three Node Cluster:Node Node Node|\ /| |\ /|| \ / | | \ / || \ | | \ || / \ | | / \ ||/ \| |/ \|Disk Disk Disk Disk+ N Node Cluster:Node Node Node Node|\ /| |\ /|\ /|| \ / | | \ / | \ / || \ | | \ | ...\ || / \ | | / \ | / \ ||/ \| |/ \|/ \|Disk Disk Disk Disk Disk- Disk configuration.Management of the shared storage of the cluster is performed with the Veritas Volume Manager (VM). The VM controls which disks on the shared SCSI bus are assigned (owned) to which system. In Volume Manager disksare grouped into disk groups and as a group can be assigned for access from one of the systems. The assignment can be changed quicklyallowingfor cluster fail/switch-over. Disks that compose disk group can be scattered across multiple disk enclosures (packs, arrays) and SCSIbuses. We used this feature to create disk groups that contains VM volumes mirrored across devices. Below is a schematics of 3 cluster nodes connected by SCSI busses to 4 disk packs (we use Sun Multipacks). The Node 0 is connected to Disk Pack 0 and Node 1 on one SCSI bus andto Disk Pack 1 and Node 1 on second SCSI bus. Disks 0 in Pack 0 and 1are put into Disk group 0, disks 1 in Pack 0 and 1 are put into Diskgroup 1 and so on for all the disks in the Packs. We have 4 9 GB disksin each Pack so we have 4 Disk groups between Node 0 and 1 that can be switched from one node to the other.Node 1 is interfacing the the Node 2 in the same way as with the Node 0. Two disk packs Pack 2 and Pack 3 are configured with disk groups 4, 5,6 and7 as a shared storage between the nodes. We have a total of 8diskgroups in the cluster. Groups 0-3 can be visible from Node 0 or 1 and groups 4-7 from Node 1 and 2. Node 1 is in a privileged situation andcan access all disk groups.Node 0 Node 1 Node 2 ... 
Node N------- ------------------- ------|\ /| |\ /|| \ / | | \ / || \ / | | \ / || \ / | | \ / || \ / | | \ / || \ / | | \ / || \ / | | \ / || \ | | \ || / \ | | / \ || / \ | | / \ || / \ | | / \ || / \ | | / \ || / \ | | / \ || / \ | | / \ ||/ \| |/ \|Disk Pack 0: Disk Pack 1: Disk Pack 2: Disk Pack 3:Disk group 0: Disk group 4: +----------------------+ +------------------------+| Disk0 Disk0 | | Disk0 Disk0 |+----------------------+ +------------------------+Disk group 1: Disk group 5: +----------------------+ +------------------------+| Disk1 Disk1 | | Disk1 Disk1 |+----------------------+ +------------------------+Disk group 2: Disk group 6: +----------------------+ +------------------------+| Disk2 Disk2 | | Disk2 Disk2 |+----------------------+ +------------------------+Disk group 3: Disk group 7: +----------------------+ +------------------------+| Disk3 Disk3 | | Disk3 Disk3 |+----------------------+ +------------------------+- Hardware details:Below is a detailed listing of the hardware configuration of two nodes. 
Sun part numbers are included so you can order it directly from the Sun
store and put it on your Visa:

- E250:
  + Base: A26-AA
  + 2x CPU: X1194A
  + 2x 256MB RAM: X7004A
  + 4x UltraSCSI 9.1GB hard drive: X5234A
  + 100BaseT Fast/Wide UltraSCSI PCI adapter: X1032A
  + Quad FastEthernet controller PCI adapter: X1034A
- MultiPack:
  + 4x 9.1GB 10000RPM disk
  + StorEdge MultiPack: SG-XDSK040C-36G
- Connections:
  + SCSI:
    E250:                                       E250:
    X1032A-------SCSI----->Multipack<----SCSI---X1032A
    X1032A-------SCSI----->Multipack<----SCSI---X1032A
  + VCS private LAN 0:
    hme0----------Ethernet--->HUB<---Ethernet---hme0
  + VCS private LAN 1:
    X1034A(qfe0)--Ethernet--->HUB<---Ethernet---X1034A(qfe0)
  + Cluster private LAN:
    X1034A(qfe1)--Ethernet--->HUB<---Ethernet---X1034A(qfe1)
  + Public LAN:
    X1034A(qfe2)--Ethernet--->HUB<---Ethernet---X1034A(qfe2)

Installation of VCS-1.1.2
-------------------------
Two systems are put into the cluster: foo_c and bar_c.

- Set the scsi-initiator-id boot PROM environment variable to 5 on one
of the systems (say bar_c):
ok setenv scsi-initiator-id 5
ok boot -r

- Install Veritas Foundation Suite 3.0.1. Follow the Veritas manuals.

- Add entries to your c-shell environment:
set veritas = /opt/VRTSvmsa
setenv VMSAHOME $veritas
setenv MANPATH ${MANPATH}:$veritas/man
set path = ( $path $veritas/bin )

- Configure the ethernet connections to use hme0 and qfe0 as VCS
private interconnects. Do not create /etc/hostname.{hme0,qfe0}.
Configure qfe2 as the public LAN network and qfe1 as the cluster's
main private network.
The configuration files on foo_c:

/etc/hosts:
127.0.0.1      localhost
# public network (192.168.0.0/16):
192.168.1.40   bar
192.168.1.51   foo
# Cluster private network (network address 10.2.0.0/16):
10.2.0.1       bar_c
10.2.0.3       foo_c loghost

/etc/hostname.qfe1:
foo_c

/etc/hostname.qfe2:
foo

The configuration files on bar_c:

/etc/hosts:
127.0.0.1      localhost
# Public network (192.168.0.0/16):
192.168.1.40   bar
192.168.1.51   foo
# Cluster private network (network address 10.2.0.0/16):
10.2.0.1       bar_c loghost
10.2.0.3       foo_c

/etc/hostname.qfe1:
bar_c

/etc/hostname.qfe2:
bar

- Configure at least two VM diskgroups on the shared storage
(Multipacks), working from one of the systems (e.g. foo_c):

+ Create cluster volume groups spanning both multipacks using
vxdiskadm '1. Add or initialize one or more disks':
cluster1: c1t1d0 c2t1d0
cluster2: c1t2d0 c2t2d0
...
Name the vmdisks like this:
cluster1: cluster101 cluster102
cluster2: cluster201 cluster202
...
You can do it for 4 disk groups with this script:
#!/bin/sh
for group in 1 2 3 4; do
    vxdisksetup -i c1t${group}d0
    vxdisksetup -i c2t${group}d0
    vxdg init cluster${group} cluster${group}01=c1t${group}d0
    vxdg -g cluster${group} adddisk cluster${group}02=c2t${group}d0
done

+ Create volumes in each group mirrored across both multipacks.
You can do it for 4 disk groups with this script:
#!/bin/sh
for group in 1 2 3 4; do
    vxassist -b -g cluster${group} make vol01 8g \
        layout=mirror cluster${group}01 cluster${group}02
done

+ Or do all diskgroups and volumes in one script:
#!/bin/sh
for group in 1 2 3 4; do
    vxdisksetup -i c1t${group}d0
    vxdisksetup -i c2t${group}d0
    vxdg init cluster${group} cluster${group}01=c1t${group}d0
    vxdg -g cluster${group} adddisk cluster${group}02=c2t${group}d0
    vxassist -b -g cluster${group} make vol01 8g \
        layout=mirror cluster${group}01 cluster${group}02
done

+ Create veritas file systems on the volumes:
#!/bin/sh
for group in 1 2 3 4; do
    mkfs -F vxfs /dev/vx/rdsk/cluster$group/vol01
done

+ Deport a group from one system: stop the volume, deport the group:
# vxvol -g cluster2 stop vol01
# vxdg deport cluster2

+ Import the group and start its volume on the other system to see if
this works:
# vxdg import cluster2
# vxrecover -g cluster2 -sb

- With the shared storage configured, it is important to know how to
manually move the volumes from one node of the cluster to the other.
I use a cmount command to do that. It is like an rc script with an
additional argument for the disk group.
To stop (deport) group 1 on a node do:
# cmount 1 stop
To start (import) group 1 on the other node do:
# cmount 1 start
The cmount script is as follows:
#!/bin/sh
set -x
group=$1
case $2 in
start)
    vxdg import cluster$group
    vxrecover -g cluster$group -sb
    mount -F vxfs /dev/vx/dsk/cluster$group/vol01 /cluster$group
    ;;
stop)
    umount /cluster$group
    vxvol -g cluster$group stop vol01
    vxdg deport cluster$group
    ;;
esac

- To remove all shared storage volumes and groups do:
#!/bin/sh
for group in 1 2 3 4; do
    vxvol -g cluster$group stop vol01
    vxdg destroy cluster$group
done

- Install the VCS software:
(from the install server on athena)
# cd /net/athena/export/arch/VCS-1.1.2/vcs_1_1_2a_solaris
# pkgadd -d . VRTScsga VRTSgab VRTSllt VRTSperl VRTSvcs VRTSvcswz clsp

+ Correct the /etc/rc?.d scripts to be links:
If they are not symbolic links it is hard to disable VCS startup at
boot; in that case just rename /etc/init.d/vcs to stop VCS starting
and stopping at boot.
cd /etc
rm rc0.d/K10vcs rc3.d/S99vcs
cd rc0.d
ln -s ../init.d/vcs K10vcs
cd ../rc3.d
ln -s ../init.d/vcs S99vcs

+ Add the -evacuate option to /etc/init.d/vcs:
This is optional, but I find it important to switch over all service
groups from a node that is being shut down. When I take a cluster node
down I expect the rest of the cluster to pick up the responsibility to
run all services. The default VCS does not do that.
The only way to move a group from one node to another is to crash it
or do a manual switch-over using the hagrp command.
'stop')
    $HASTOP -local -evacuate > /dev/null 2>&1
    ;;

- Add entries to your c-shell environment:
set vcs = /opt/VRTSvcs
setenv MANPATH ${MANPATH}:$vcs/man
set path = ( $vcs/bin $path )

- To remove the VCS software:
NOTE: required if the demo installation fails.
# sh /opt/VRTSvcs/wizards/config/quick_start -b
# rsh bar_c 'sh /opt/VRTSvcs/wizards/config/quick_start -b'
# pkgrm VRTScsga VRTSgab VRTSllt VRTSperl VRTSvcs VRTSvcswz clsp
# rm -rf /etc/VRTSvcs /var/VRTSvcs
# init 6

- Configure /.rhosts on both nodes to allow each node transparent rsh
root access to the other:
/.rhosts:
foo_c
bar_c

- Run the quick start script from one of the nodes:
NOTE: must run from /usr/openwin/bin/xterm - other xterms cause
terminal emulation problems
# /usr/openwin/bin/xterm &
# sh /opt/VRTSvcs/wizards/config/quick_start
Select the hme0 and qfe0 network links for the GAB and LLT connections.
The script will ask twice for the link interface names. Link 1 is hme0
and link 2 is qfe0 for both the foo_c and bar_c nodes.
You should see the heartbeat pings on the interconnection hubs.
The wizard creates the LLT and GAB configuration files /etc/llttab,
/etc/gabtab and /etc/llthosts on each system:

On foo_c:
/etc/llttab:
set-node foo_c
link hme0 /dev/hme:0
link qfe1 /dev/qfe:1
start

On bar_c:
/etc/llttab:
set-node bar_c
link hme0 /dev/hme:0
link qfe1 /dev/qfe:1
start

/etc/gabtab:
/sbin/gabconfig -c -n2

/etc/llthosts:
0 foo_c
1 bar_c

The LLT and GAB communication is started by the rc scripts S70llt and
S92gab installed in /etc/rc2.d.
- Check basic installation:+ status of the gab:# gabconfig -aGAB Port Memberships===============================================================Port a gen 1e4c0001 membership 01Port h gen dd080001 membership 01+ status of the link:# lltstat -nLLT node information:Node State Links* 0 foo_c OPEN 21 bar_c OPEN 2+ node parameters:# hasys -display- Set/update VCS super user password:+ add root user:# haconf -makerw# hauser -add rootpassword:...# haconf -dump -makero+ change root password:# haconf -makerw# hauser -update rootpassword:...# haconf -dump -makero- Configure demo NFS service groups:NOTE: You have to fix the VCS wizards first: The wizard perl scripts have a bug that makes the core dump in the middle of filling out configuration forms. The solution is to provide shell wrapper for one binary and avoid running it with specific set of parameters. Do the following in VCS-1.1.2 :# cd /opt/VRTSvcs/bin# mkdir tmp# mv iou tmp# cat << EOF > iou#!/bin/shecho "[$@]" >> /tmp/,.iou.logcase "$@" in'-c 20 9 -g 2 2 3 -l 0 3') echo "skip bug" >>/tmp/,.iou.log ;;*) /opt/VRTSvcs/bin/tmp/iou "$@" ;;esacEOF# chmod 755 iou+ Create NFS mount point directories on both systems:# mkdir /export1 /export2+ Run the wizard on foo_c node:NOTE: must run from /usr/openwin/bin/xterm - other xterms causeterminal emulation problems# /usr/openwin/bin/xterm &# sh /opt/VRTSvcs/wizards/services/quick_nfsSelect for groupx:- public network device: qfe2- group name: groupx- IP: 192.168.1.53- VM disk group: cluster1- volume: vol01- mount point: /export1- options: rw- file system: vxfsSelect for groupy:- public network device: qfe2- group name: groupy- IP: 192.168.1.54- VM disk group: cluster2- volume: vol01- mount point: /export2- options: rw- file system: vxfsYou should see: Congratulations!...The /etc/VRTSvcs/conf/config directory should have main.cf andtypes.cf files configured.+ Reboot both systems:# init 6Summary of cluster queries:----------------------------- Cluster queries:+ list cluster status 
summary:# hastatus -summary-- SYSTEM STATE-- System State FrozenA foo_c RUNNING 0A bar_c RUNNING 0-- GROUP STATE-- Group System Probed AutoDisabledStateB groupx foo_c Y N ONLINE B groupx bar_c Y NOFFLINEB groupy foo_c Y NOFFLINEB groupy bar_c Y N ONLINE+ list cluster attributes:# haclus -display#Attribute ValueClusterName my_vcsCompareRSM 0CounterInterval 5DumpingMembership 0Factor runque 5 memory 1 disk 10 cpu 25network 5GlobalCounter 16862GroupLimit 200LinkMonitoring 0LoadSampling 0LogSize 33554432MajorVersion 1MaxFactor runque 100 memory 10 disk 100 cpu 100network 100MinorVersion 10PrintMsg 0ReadOnly 1ResourceLimit 5000SourceFile ./main.cfTypeLimit 100UserNames root cDgqS68RlRP4k- Resource queries:+ list resources:# hares -listcluster1 foo_ccluster1 bar_cIP_192_168_1_53 foo_cIP_192_168_1_53 bar_c...+ list resource dependencies:# hares -dep#Group Parent Childgroupx IP_192_168_1_53 groupx_qfe1groupx IP_192_168_1_53 nfs_export1groupx export1 cluster1_vol01groupx nfs_export1 NFS_groupx_16groupx nfs_export1 export1groupx cluster1_vol01 cluster1groupy IP_192_168_1_54 groupy_qfe1groupy IP_192_168_1_54 nfs_export2groupy export2 cluster2_vol01groupy nfs_export2 NFS_groupy_16groupy nfs_export2 export2groupy cluster2_v cluster2+ list attributes of a resource:# hares -display export1#Resource Attribute System Valueexport1 ConfidenceLevel foo_c 100export1 ConfidenceLevel bar_c 0export1 Probed foo_c 1export1 Probed bar_c 1export1 State foo_c ONLINEexport1 State bar_c OFFLINEexport1 ArgListValues foo_c /export1 /dev/vx/dsk/cluster1/vol01 vxfs rw ""...- Groups queries:+ list groups:# hagrp -listgroupx foo_cgroupx bar_cgroupy foo_cgroupy bar_c+ list group resources:# hagrp -resources groupxcluster1IP_192_168_1_53export1NFS_groupx_16groupx_qfe1nfs_export1cluster1_vol01+ list group dependencies:# hagrp -dep groupx+ list of group attributes:# hagrp -display groupx#Group Attribute System Valuegroupx AutoFailOver global 1groupx AutoStart global 1groupx AutoStartList global 
foo_cgroupx FailOverPolicy global Prioritygroupx Frozen global 0groupx IntentOnline global 1groupx ManualOps global 1groupx OnlineRetryInterval global 0groupx OnlineRetryLimit global 0groupx Parallel global 0groupx PreOnline global 0groupx PrintTree global 1groupx SourceFile global ./main.cfgroupx SystemList global foo_c 0 bar_c 1groupx SystemZones globalgroupx TFrozen global 0groupx TriggerEvent global 1groupx UserIntGlobal global 0groupx UserStrGlobal globalgroupx AutoDisabled foo_c 0groupx AutoDisabled bar_c 0groupx Enabled foo_c 1groupx Enabled bar_c 1groupx ProbesPending foo_c 0groupx ProbesPending bar_c 0groupx State foo_c |ONLINE|groupx State bar_c |OFFLINE|groupx UserIntLocal foo_c 0groupx UserIntLocal bar_c 0groupx UserStrLocal foo_cgroupx UserStrLocal bar_c- Node queries:+ list nodes in the cluster:# hasys -listfoo_cbar_c+ list node attributes:# hasys -display bar_c#System Attribute Valuebar_c AgentsStopped 1bar_c ConfigBlockCount 54bar_c ConfigCheckSum 48400bar_c ConfigDiskState CURRENTbar_c ConfigFile /etc/VRTSvcs/conf/configbar_c ConfigInfoCnt 0bar_c ConfigModDate Wed Mar 29 13:46:19 2000bar_c DiskHbDownbar_c Frozen 0bar_c GUIIPAddrbar_c LinkHbDownbar_c Load 0bar_c LoadRaw runque 0 memory 0 disk 0 cpu 0 network 0bar_c MajorVersion 1bar_c MinorVersion 10bar_c NodeId 1bar_c OnGrpCnt 1bar_c SourceFile ./main.cfbar_c SysName bar_cbar_c SysState RUNNINGbar_c TFrozen 0bar_c UserInt 0bar_c UserStr- Resource types queries:+ list resource types:# hatype -listCLARiiONDiskDiskGroupElifNoneFileNoneFileOnOffFileOnOnlyIPIPMultiNICMountMultiNICANFSNICPhantomProcessProxyServiceGroupHBShareVolume+ list all resources of a given type:# hatype -resources DiskGroupcluster1cluster2+ list attributes of the given type:# hatype -display IP#Type Attribute ValueIP AgentFailedOnIP AgentReplyTimeout 130IP AgentStartTimeout 60IP ArgList Device Address NetMask Options ArpDelay IfconfigTwiceIP AttrChangedTimeout 60IP CleanTimeout 60IP CloseTimeout 60IP ConfInterval 600IP LogLevel 
errorIP MonitorIfOffline 1IP MonitorInterval 60IP MonitorTimeout 60IP NameRule IP_ + resource.AddressIP NumThreads 10IP OfflineTimeout 300IP OnlineRetryLimit 0IP OnlineTimeout 300IP OnlineWaitLimit 2IP OpenTimeout 60IP Operations OnOffIP RestartLimit 0IP SourceFile ./types.cfIP ToleranceLimit 0- Agents queries:+ list agents:# haagent -listCLARiiONDiskDiskGroupElifNoneFileNoneFileOnOffFileOnOnlyIPIPMultiNICMountMultiNICANFSNICPhantomProcessProxyServiceGroupHBShareVolume+ list status of an agent:# haagent -display IP#Agent Attribute ValueIP AgentFileIP Faults 0IP Running YesIP Started YesSummary of basic cluster operations:------------------------------------- Cluster Start/Stop:+ stop VCS on all systems:# hastop -all+ stop VCS on bar_c and move all groups out:# hastop -sys bar_c -evacuate+ start VCS on local system:# hastart- Users:+ add gui root user:# haconf -makerw# hauser -add root# haconf -dump -makero- Group:+ group start, stop:# hagrp -offline groupx -sys foo_c# hagrp -online groupx -sys foo_c + switch a group to other system:# hagrp -switch groupx -to bar_c + freeze a group:# hagrp -freeze groupx+ unfreeze a group:# hagrp -unfreeze groupx+ enable a group:# hagrp -enable groupx+ disable a group:# hagrp -disable groupx+ enable resources a group:# hagrp -enableresources groupx + disable resources a group:# hagrp -disableresources groupx + flush a group:# hagrp -flush groupx -sys bar_c- Node:+ feeze node:# hasys -freeze bar_c+ thaw node:# hasys -unfreeze bar_c- Resources:+ online a resouce:# hares -online IP_192_168_1_54 -sys bar_c+ offline a resouce:# hares -offline IP_192_168_1_54 -sys bar_c+ offline a resouce and propagte to children:# hares -offprop IP_192_168_1_54 -sys bar_c+ probe a resouce:# hares -probe IP_192_168_1_54 -sys bar_c+ clear faulted resource:# hares -clear IP_192_168_1_54 -sys bar_c- Agents:+ start agent:# haagent -start IP -sys bar_c+ stop agent:# haagent -stop IP -sys bar_c- Reboot a node with evacuation of all service groups:(groupy is 
running on bar_c)# hastop -sys bar_c -evacuate# init 6# hagrp -switch groupy -to bar_cChanging cluster configuration:--------------------------------You cannot edit configuration files directly while thecluster is running. This can be done only if cluster is down.The configuration files are in: /etc/VRTSvcs/conf/configTo change the configuartion you can:+ use hagui+ stop the cluster (hastop), edit main.cf and types.cf directly,regenerate main.cmd (hacf -generate .) and start the cluster (hastart)+ use the following command line based procedure on running clusterTo change the cluster while it is running do this:- Dump current cluster configuration to files and generate main.cmd file:# haconf -dump# hacf -generate .# hacf -verify .- Create new configuration directory:# mkdir -p ../new- Copy existing *.cf files in there:# cp main.cf types.cf ../new- Add new stuff to it:# vi main.cf types.cf- Regenerate the main.cmd file with low level commands:# cd ../new# hacf -generate .# hacf -verify .- Catch the diffs:# diff ../config/main.cmd main.cmd > ,.cmd- Prepend this to the top of the file to make config rw:# haconf -makerw- Append the command to make configuration ro:# haconf -dump -makero- Apply the diffs you need:# sh -x ,.cmdCluster logging:-----------------------------------------------------VCS logs all activities into /var/VRTSvcs/log directory.The most important log is the engine log engine.log_A.Each agent also has its own log file.The logging parameters can be displayed with halog command: # halog -infoLog on hades_c:path = /var/VRTSvcs/log/engine.log_Amaxsize = 33554432 bytestags = ABCDEConfiguration of a test group and test resource type:=======================================================To get comfortable with the cluster configuration it is useful tocreate your own group that uses your own resource. Example below demonstrates configuration of a "do nothing" group with one resourceof our own type.- Add group test with one resource test. 
Add this to/etc/VRTSvcs/conf/config/new/types.cf:type Test (str TesterNameRule = int IntAttrstr StringAttrstr VectorAttr[]str AssocAttr{}static str ArgList[] = { IntAttr, StringAttr, VectorAttr, AssocAttr })- Add this to /etc/VRTSvcs/conf/config/new/main.cf:group test (SystemList = { foo_c, bar_c }AutoStartList = { foo_c })Test test (IntAttr = 100StringAttr = "Testing 1 2 3"VectorAttr = { one, two, three }AssocAttr = { one = 1, two = 2 })- Run the hacf -generate and diff as above. Edit it to get ,.cmd file: haconf -makerwhatype -add Testhatype -modify Test SourceFile "./types.cf"haattr -add Test Tester -stringhatype -modify Test NameRule ""haattr -add Test IntAttr -integerhaattr -add Test StringAttr -stringhaattr -add Test VectorAttr -string -vectorhaattr -add Test AssocAttr -string -assochatype -modify Test ArgList IntAttr StringAttr VectorAttr AssocAttrhatype -modify Test LogLevel errorhatype -modify Test MonitorIfOffline 1hatype -modify Test AttrChangedTimeout 60hatype -modify Test CloseTimeout 60hatype -modify Test CleanTimeout 60hatype -modify Test ConfInterval 600hatype -modify Test MonitorInterval 60hatype -modify Test MonitorTimeout 60hatype -modify Test NumThreads 10hatype -modify Test OfflineTimeout 300hatype -modify Test OnlineRetryLimit 0hatype -modify Test OnlineTimeout 300hatype -modify Test OnlineWaitLimit 2hatype -modify Test OpenTimeout 60hatype -modify Test RestartLimit 0hatype -modify Test ToleranceLimit 0hatype -modify Test AgentStartTimeout 60hatype -modify Test AgentReplyTimeout 130hatype -modify Test Operations OnOffhaattr -default Test AutoStart 1haattr -default Test Critical 1haattr -default Test Enabled 1haattr -default Test TriggerEvent 0hagrp -add testhagrp -modify test SystemList foo_c 0 bar_c 1hagrp -modify test AutoStartList foo_chagrp -modify test SourceFile "./main.cf"hares -add test Test testhares -modify test Enabled 1hares -modify test IntAttr 100hares -modify test StringAttr "Testing 1 2 3"hares -modify test VectorAttr one two 
three
hares -modify test AssocAttr one 1 two 2
haconf -dump -makero

- Feed it to sh:
# sh -x ,.cmd
- Both the group test and the resource test should now be added to the
cluster.

Installation of a test agent for a test resource:
-------------------------------------------------
This agent does not start or monitor any specific resource. It just
maintains its persistent state in a ,.on file. It can be used as a
template for other agents that perform some real work.

- In /opt/VRTSvcs/bin create a Test directory:
# cd /opt/VRTSvcs/bin
# mkdir Test

- Link in the precompiled agent binary for script-implemented methods:
# cd Test
# ln -s ../ScriptAgent TestAgent
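A minimal sketch of what the script-implemented methods behind TestAgent might look like. This is an illustration, not the author's code: the ,.on state file follows the text above, the STATEDIR variable and the single-dispatcher layout are invented for compactness (VCS normally calls separate online/offline/monitor scripts), and the 110/100 exit codes are the usual VCS script-agent convention for online/offline.

```shell
#!/bin/sh
# Hypothetical "do nothing" agent methods keeping state in a ,.on file.
# STATEDIR is an invented knob so the sketch does not write into the
# agent directory; a real agent would use its resource's ArgList.
STATEDIR=${STATEDIR:-/var/tmp}

online()  { touch "$STATEDIR/,.on"; }       # pretend to start the resource
offline() { rm -f "$STATEDIR/,.on"; }       # pretend to stop it
monitor() {                                 # report the current state
    if [ -f "$STATEDIR/,.on" ]; then
        exit 110                            # VCS convention: online
    else
        exit 100                            # VCS convention: offline
    fi
}

case "$1" in
    online)  online ;;
    offline) offline ;;
    monitor) monitor ;;
esac
```

Dropping three one-line wrappers around these functions into /opt/VRTSvcs/bin/Test as online, offline and monitor would give the Test resource real (if trivial) behavior.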

Veritas Cluster Server 4.0 Installation Guide

UU's lab has a pair of unused servers plus a disk array sitting idle, so this is an attempt to install a cluster from scratch.

Hardware summary:
1. Node1: Netra 20 (2 X UltraSPARC-III+, 2048M RAM, 72G*2 HardDisk)
2. Node2: Netra 20 (2 X UltraSPARC-III+, 2048M RAM, 72G*2 HardDisk)
3. Shared Storage: D1000 (36G*3 HardDisk)

I. Install the operating system

Install the 2/04 (February 2004) release of Solaris 8. During the installation, choose English as the primary language.

II. Install the EIS-CD

Install the 2/04 release of the EIS-CD; it is used to set up the environment variables for running a cluster.

III. Install patches

To avoid a problem with spuriously high CPU usage, install patch 117000-05. The patch can be downloaded from Sun's official web site. Unpacking the download creates a 117000-05 directory. Install the patch with:

patchadd 117000-05

IV. Attach the shared disks

In this environment we use a Sun D1000 array as the shared storage. It is a SCSI array; a Fibre Channel array would need a different setup.

1. Power on the disk array
2. Connect Node1 and the array with a SCSI cable
3. Power on Node1
4. Bring Node1 to the ok prompt (watch Node1 boot in the console window; as soon as boot messages appear, press Ctrl+Break to drop to the ok prompt)
5. {0} ok probe-scsi-all
6. {0} ok boot -r
7. After the reboot, log into the operating system and use the format command to confirm the array is seen by this system
8. Power off Node1, power on Node2, repeat steps 4-7, and confirm the array can also be attached to that machine
9. To let both machines access the shared storage at the same time, the SCSI ID on one of them must be changed.

Power on Node1 and go to the ok prompt (at this point Node1, Node2 and the storage are all powered on; on Node2 the format command already shows the array attached normally, and Node1 is at the ok prompt).

Hardware summary:
1. Node1: Netra 20 (2 X UltraSPARC-III+, 2048M RAM, 72G*2 HardDisk)
2. Node2: Netra 20 (2 X UltraSPARC-III+, 2048M RAM, 72G*2 HardDisk)
3. Shared Storage: D1000 (36G*3 HardDisk)

I. Install the operating system
{0} ok nvedit
0: probe-all
1: cd /pci@8,700000/scsi@2,1
2: 5 " scsi-initiator-id" integer-property
3: device-end
4: install-console
5: banner
13. Save the configuration entered in step 12
CounterInterval = 5
root@uulab-s22 # cd /opt/sf_ha.4.0.sol/storage_foundation
root@uulab-s22 # ./installsf
The installation is all menu choices and prompted input; it is quite simple overall, so it is not described further here.

VI. Create the disk group

After the VCS installation finishes, it asks you to reboot all nodes with the following command:

16. Run the format command on both Node1 and Node2 to confirm that both have successfully attached the shared storage

V. Install VCS

1. Set up the .rhosts file. This must be done on both machines; Node1 is shown as the example.

Add a .rhosts file in the / directory so that remote login works; this makes it easy to install VCS on both machines at the same time.

root@uulab-s22 # echo "+" > /.rhosts
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c50569190,0
1. c1t1d0
/pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c5056c1a7,0
root@uulab-s22 # mkfs -F vxfs -o bsize=8192,largefiles /dev/vx/rdsk/hlrdg/oraarch_vol
root@uulab-s22 # mkfs -F vxfs -o bsize=8192,largefiles /dev/vx/rdsk/hlrdg/hlr_vol
Edit /etc/VRTSvcs/conf/config/main.cf, the VCS configuration file. It can be modified either with the command-line tools (hagrp, hares, and so on) or directly with any text editor (such as vi); here we use vi. The lines shown in bold are the ones to add.

After the edits the file looks like this:
root@uulab-s22 # vxdisksetup -i c4t0d0
If this command fails with the following error, see workaround 1 in the appendix.
VxVM vxdisksetup ERROR V-5-2-3535 c4t0d0s2: Invalid dmpnodename for disk device c4t0d0.
root@uulab-s22 # vxdisksetup -i c4t9d0
root@uulab-s22 # vxdg init hlrdg hlrdg-01=c4t0d0
root@uulab-s22 # vxdg -g hlrdg adddisk hlrdg-02=c4t8d0
root@uulab-s22 # vxdg -g hlrdg adddisk hlrdg-03=c4t9d0
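The per-disk setup above follows a fixed pattern, so it can be driven by a small loop. This is a sketch, not part of the original procedure: the disk list and the hlrdg group name come from this document, while the RUN variable (defaulting to echo so the script is a dry run) is an invented safety switch - set RUN= to actually execute on a real system.

```shell
#!/bin/sh
# Dry-run sketch of the vxdisksetup/vxdg sequence above.
# RUN=echo just prints each command; RUN= would execute it.
RUN=${RUN:-echo}
disks="c4t0d0 c4t8d0 c4t9d0"
i=1
for d in $disks; do
    $RUN vxdisksetup -i $d                      # write the VM private region
    if [ $i -eq 1 ]; then
        $RUN vxdg init hlrdg hlrdg-0$i=$d       # first disk creates the group
    else
        $RUN vxdg -g hlrdg adddisk hlrdg-0$i=$d # remaining disks join it
    fi
    i=`expr $i + 1`
done
```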
{0} ok nvstore
14. Set the environment variables
{0} ok setenv use-nvramrc? true
use-nvramrc? = true
{0} ok setenv auto-boot? true
auto-boot? = true
15. Reboot Node1
{0} ok reset-all
root@uulab-s22 # touch /etc/notrouter
root@uulab-s22 # vi /etc/hosts
Add the following lines:
192.168.0.6 uulab-p22
192.168.0.8 uulab-p23
root@uulab-s22 # vi /etc/netmasks
10. Set Node1's SCSI ID to 5 (the default is 7)
{0} ok setenv scsi-initiator-id 5
11. From the probe-scsi-all output in step 5 we know that the system identifier of this SCSI controller is /pci@8,700000/scsi@2,1; this identifier is used in the next step.

12. Set Node1's SCSI ID to 5 on that controller. (Note: in the input on line 2, there is a space after the opening double quote in " scsi-initiator-id"!)
root@uulab-s22 # vxassist -g hlrdg -b make oraarch_vol 8g layout=nostripe,nolog nmirror=2 &
root@uulab-s22 # vxassist -g hlrdg -b make hlr_vol 4g layout=nostripe,nolog nmirror=2 &
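All four volumes in this setup (oradata_vol, oraredo_vol, oraarch_vol and hlr_vol) are created with the same vxassist options, so one loop can cover them. A hedged sketch: the RUN=echo dry-run switch is invented, and the trailing & of the original commands is dropped since -b already tells vxassist to do the mirror synchronization in the background.

```shell
#!/bin/sh
# Dry-run sketch of the four vxassist invocations in this document.
RUN=${RUN:-echo}
for spec in "oradata_vol 15g" "oraredo_vol 5g" "oraarch_vol 8g" "hlr_vol 4g"; do
    set -- $spec    # $1 = volume name, $2 = size
    $RUN vxassist -g hlrdg -b make $1 $2 layout=nostripe,nolog nmirror=2
done
```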
Install the 2/04 (February 2004) release of Solaris 8. During the installation, choose English as the primary language.
II. Install the EIS-CD

Install the 2/04 release of the EIS-CD; it is used to set up the environment variables for running a cluster.

III. Install patches

To avoid a problem with spuriously high CPU usage, install patch 117000-05. The patch can be downloaded from Sun's official web site. Unpacking the download creates a 117000-05 directory. Install the patch with the following command:

patchadd 117000-05
VIII. Create file systems with VxFS
root@uulab-s22 # mkfs -F vxfs -o bsize=8192,largefiles /dev/vx/rdsk/hlrdg/oradata_vol
root@uulab-s22 # mkfs -F vxfs -o bsize=8192,largefiles /dev/vx/rdsk/hlrdg/oraredo_vol
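The same mkfs invocation is repeated for each volume, so a loop keeps the four calls consistent. Sketch only; the RUN=echo dry-run switch is invented, and the volume names are the ones created above.

```shell
#!/bin/sh
# Dry-run sketch: make a VxFS file system on each volume in hlrdg.
RUN=${RUN:-echo}
for vol in oradata_vol oraredo_vol oraarch_vol hlr_vol; do
    $RUN mkfs -F vxfs -o bsize=8192,largefiles /dev/vx/rdsk/hlrdg/$vol
done
```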
Add the following line:
192.168.0.0 255.255.255.0
root@uulab-s22 # sync
root@uulab-s22 # reboot
3. Start the installation

Installing VCS 4.0 has been simplified down to a single command, which installs Veritas Volume Manager 4.0, Veritas File System 4.0, Veritas Cluster Server 4.0 and some related software at the same time. The following command only needs to be run on one node.
include "types.cf"
cluster vcs_hlr_cluster (
UserNames = { admin = hijBidIfjEjjHrjDig }
ClusterAddress = "10.7.1.7" // the virtual IP of the cluster
Administrators = { admin }
shutdown -y -i6 -g0
After the reboot completes, start creating the disk group. The following commands only need to be run on one node.

Use the format command to confirm that the shared disks we want to add to the disk group are c4t0d0, c4t8d0 and c4t9d0.
root@uulab-s22 # format
Searching for disks...done
2. c4t0d0
/pci@8,700000/scsi@2,1/sd@0,0
3. c4t8d0
/pci@8,700000/scsi@2,1/sd@8,0
4. c4t9d0
/pci@8,700000/scsi@2,1/sd@9,0
Specify disk (enter its number):
VII. Create the volumes
root@uulab-s22 # vxassist -g hlrdg -b make oradata_vol 15g layout=nostripe,nolog nmirror=2 &
root@uulab-s22 # vxassist -g hlrdg -b make oraredo_vol 5g layout=nostripe,nolog nmirror=2 &
5. {0} ok probe-scsi-all
6. {0} ok boot -r
7. After the reboot, log into the operating system and use the format command to confirm the array is seen by this system
8. Power off Node1, power on Node2, repeat steps 4-7, and confirm the array can also be attached to that machine
9. To let both machines access the shared storage at the same time, the SCSI ID on one of them must be changed. Power on Node1 and go to the ok prompt (at this point Node1, Node2 and the storage are all powered on; on Node2 the format command already shows the array attached normally, and Node1 is at the ok prompt).
root@uulab-s22 # chown oracle:dba /dev/vx/rdsk/hlrdg/oradata_vol
root@uulab-s22 # chown oracle:dba /dev/vx/rdsk/hlrdg/oraredo_vol
root@uulab-s22 # chown oracle:dba /dev/vx/rdsk/hlrdg/oraarch_vol
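The chown step can likewise be looped over the Oracle volumes. Sketch only; RUN=echo (an invented switch) keeps it a dry run, and only the three volumes chowned above are included.

```shell
#!/bin/sh
# Dry-run sketch: give the oracle user the raw devices of its volumes.
RUN=${RUN:-echo}
for vol in oradata_vol oraredo_vol oraarch_vol; do
    $RUN chown oracle:dba /dev/vx/rdsk/hlrdg/$vol
done
```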