Oracle RAC Storage Multipath Configuration: Collected Case Studies
Oracle 11gR2 Two-Node RAC + Single-Instance Data Guard Implementation and Deployment Plan, V1.0
Data Operations Department

Contents

1 Reference cluster planning
  1.1 Hardware environment
  1.2 Software environment
  1.3 IP and storage rules
2 RAC primary database installation
  2.1 Host preparation: OS installation; server memory requirements
  2.3 Network configuration: IP addresses (assigned and configured by the network engineer); hostnames; /etc/hosts
  2.4 Disabling the firewall, SELinux and NTP: close the firewall or open the required ports; disable SELinux; disable NTP and remove its configuration file
  2.5 Resource limits: /etc/sysctl.conf; /etc/security/limits.conf; /etc/pam.d/login; tmpfs sizing
  2.6 Users and groups: create the grid and oracle users and groups; create the grid and oracle installation directories; set the grid and oracle environment variables
  2.7 Dependency packages
  2.8 SSH user equivalence
3 RAC primary shared storage: storage build; UDEV configuration
4 Installing Oracle Grid Infrastructure on the primary: pre-install checks; unpacking and installation; post-install checks; deinstalling Grid
5 Configuring ASM disks on the primary
6 Installing the Oracle database software on the primary
7 Creating the primary database instance
8 RAC administration: starting and stopping the RAC; checking its health; verification tests
9 Standby installation: host preparation (OS, memory); network (IP, hostname, /etc/hosts); dependency packages; kernel parameters; oracle user limits; /etc/pam.d/login; firewall; ulimit for the installation owner; users and installation directories; oracle environment variables; database software; listener and local naming
10 Building the Data Guard configuration
  10.1 Force logging on the primary
  10.2 Enable archivelog mode on the primary
  10.3 Create standby redo logs on the primary
  10.4 Adjust the primary RAC parameters; generate a pfile and ship it, together with the password file, to the standby
  10.5 Create the standby listener; edit and create tnsnames.ora on primary and standby
  10.6 Create the standby directories
  10.7 Edit the standby pfile and start to nomount
  10.8 Build the standby with RMAN
  10.9 Open the standby and start the apply service
11 Test results
12 Related queries and switchover operations
  12.1 Check whether Data Guard applies redo in real time
  12.2 Disable delayed apply on the standby and return to real-time apply
  12.3 Stop real-time apply on the standby
RAC Implementation in an Internal Oracle Environment
1. Storage requirements

/dev/sdb  OCFS file system (OCR and Voting Disk)  4 GB
/dev/sdc  VOL1 (DG1)       90 GB
/dev/sdd  VOL2 (DG1)       90 GB
/dev/se   VOL3 (RECOVERY)  -- see below
/dev/sde  VOL3 (RECOVERY)  42 GB
/dev/sdf  VOL4 (RECOVERY)  42 GB

2. Network: two interfaces, eth0 and eth1. Sample ifcfg-eth0:

DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:14:85:25:9B:F2
IPADDR=192.168.1.202
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes        # bring the interface up at boot
TYPE=Ethernet

4. OS installation (automatic partitioning is acceptable):

/        root file system       10 GB
swap     twice the memory size  2 GB
/oracle  oracle mount point     50 GB

Package groups to select: X Window System, GNOME Desktop Environment, Editors, Graphical Internet, Text-based Internet, Office/Productivity, Sound and Video, Graphics, Server Configuration Tools, FTP Server, Legacy Network Server (rsh, telnet), Development Tools, Legacy Software Development, Administration Tools, System Tools (vnc, sysstat, [ocfs, asm]), Printing Support.

5. Time synchronization against the NTP server, via cron:

crontab -e
0 6,12,18 * * * /usr/sbin/ntpdate 192.168.1.253

6. Attach the storage partitions:

[root@racdev2 ~]# vi /etc/iscsi.conf
discoverAddress=192.168.1.211

/sbin/service iscsi restart
iscsi-ls
chkconfig iscsi on        # attach the iSCSI disks automatically at boot

7. Database environment overview.

8. Change the OS (system) language:

vi /etc/sysconfig/i18n
#LANG="en_US.UTF-8"
#SUPPORTED="en_US.UTF-8:en_US:en"
#SYSFONT="latarcyrheb-sun16"

Change to:

LANG="zh_CN.GB18030"
LANGUAGE="zh_CN.GB18030:zh_CN.GB2312:zh_CN"
SUPPORTED="zh_CN.GB18030:zh_CN:zh"
SYSFONT="lat0-sun16"
SYSFONTACM="8859-15"

The following variant was tested and works (sufficient for shell use):

LANG="zh_CN.GB18030"
LANGUAGE="zh_CN.GB18030:zh_CN.GB2312:zh_CN"
SUPPORTED="zh_CN.GB18030:zh_CN:zh:en_US.UTF-8:en_US:en"
SYSFONT="lat0-sun16"

9. Create the oracle user.
Detailed Procedure: Online Storage Replacement for an Oracle 11.2.0.1 RAC
Oracle 11.2.0.1 RAC online storage replacement

1. Original environment overview

2. Replacement steps

Step 1: Configure the new storage. The storage engineer provisions the new array, including the underlying RAID and the multipathing. Carve out three 5 GB LUNs to serve the new CRS disk group.
(How the remaining data LUNs are carved up will be decided in a review meeting.)

Step 2: Bind the raw devices.

raw /dev/raw/raw1 /dev/mapper/crs1
raw /dev/raw/raw2 /dev/mapper/crs2
raw /dev/raw/raw3 /dev/mapper/crs3
raw /dev/raw/raw4 /dev/mapper/data1
raw /dev/raw/raw5 /dev/mapper/data2

Add the same five raw bindings to /etc/rc.d/rc.local so that they survive a reboot, and add the following to /etc/udev/rules.d/60-raw.rules (KERNEL matches the kernel device name, so the device-mapper names are matched through ENV{DM_NAME} rather than as /dev/mapper paths):

ACTION=="add", ENV{DM_NAME}=="crs1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", ENV{DM_NAME}=="crs2", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", ENV{DM_NAME}=="crs3", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", ENV{DM_NAME}=="data1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", ENV{DM_NAME}=="data2", RUN+="/bin/raw /dev/raw/raw5 %N"
KERNEL=="raw[1-5]", OWNER="grid", GROUP="asmadmin", MODE="660"

Then reload the rules:

start_udev

Step 3: Create the +CRS disk group with asmca, using normal redundancy (screenshots omitted).

Step 4: Replace the voting disk and OCR online. The current voting disk and OCR locations:

rac1:orcl1:/home/oracle$ crsctl query css votedisk
##  STATE    File Universal Id                 File Name               Disk group
--  -----    -----------------                 ---------               ----------
 1. ONLINE   62a96a4481504faabfc6b7807b7d1d1f  (/dev/mapper/mpath1p1)  [DATA]
Located 1 voting disk(s).

[root@rac1 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
    Version                  :          3
    Total space (kbytes)     :     262120
    Used space (kbytes)      :       2700
    Available space (kbytes) :     259420
    ID                       : 1553871410
    Device/File Name         :      +DATA
                               Device/File integrity check succeeded
                               Device/File not configured
                               Device/File not configured
                               Device/File not configured
                               Device/File not configured
    Cluster registry integrity check succeeded
    Logical corruption check succeeded

Replace them online:

[root@rac1 bin]# ./ocrconfig -add '+CRS'
[root@rac1 bin]# su - grid
[grid@rac1 ~]$ cd $ORACLE_HOME/bin
[grid@rac1 bin]$ ./crsctl replace votedisk '+CRS'
Successful addition of voting disk 979e5bfb81b94ff7bfad60b1dc9b3b49.
Successful addition of voting disk 7f4b6d6fab8e4f47bf6aee32b1caace8.
Successful addition of voting disk 28fdeb68f99a4f2dbf277a377dc81b5b.
Successful deletion of voting disk 9c4a4f2ded074f49bfc21a45af1936f4.
Successfully replaced voting disk group with +CRS.
CRS-4266: Voting file(s) successfully replaced
[root@rac1 bin]# ./ocrconfig -delete '+DATA'

Because of a bug in 11.2, /etc/oracle/ocr.loc on node 2 is not updated and must be corrected by hand. On node 1 the file was rewritten automatically:

[root@rac1 bin]# cat /etc/oracle/ocr.loc
#Device/file +DATA getting replaced by device +CRS
ocrconfig_loc=+CRS
local_only=false

On node 2 the pointer still references the old disk group (which leaves CRS unable to start), so edit it:

[root@rac2 bin]# cat /etc/oracle/ocr.loc
ocrconfig_loc=+DATA
local_only=FALSE
[root@rac2 bin]# vi /etc/oracle/ocr.loc
ocrconfig_loc=+CRS
local_only=FALSE

Query the post-replacement state:

[grid@rac1 bin]$ ./crsctl query css votedisk
##  STATE    File Universal Id                 File Name        Disk group
--  -----    -----------------                 ---------        ----------
 1. ONLINE   979e5bfb81b94ff7bfad60b1dc9b3b49  (/dev/raw/raw1)  [CRS]
 2. ONLINE   7f4b6d6fab8e4f47bf6aee32b1caace8  (/dev/raw/raw2)  [CRS]
 3. ONLINE   28fdeb68f99a4f2dbf277a377dc81b5b  (/dev/raw/raw3)  [CRS]
Located 3 voting disk(s).

[root@rac1 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
    Version                  :          3
    Total space (kbytes)     :     262120
    Used space (kbytes)      :       2700
    Available space (kbytes) :     259420
    ID                       : 1671730208
    Device/File Name         :       +CRS
                               Device/File integrity check succeeded
                               Device/File not configured
                               Device/File not configured
                               Device/File not configured
                               Device/File not configured
    Cluster registry integrity check succeeded
    Logical corruption check succeeded

The voting disk and OCR replacement is complete.
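Because the 11.2 bug leaves /etc/oracle/ocr.loc stale on the other node, it is worth scripting the consistency check before restarting CRS anywhere. The sketch below is illustrative, not an Oracle-supplied tool; on a live cluster you would feed it the local file plus copies fetched with something like ssh rac2 cat /etc/oracle/ocr.loc.

```shell
#!/bin/sh
# Sketch: verify that ocr.loc points at the new disk group on every node.
# After 'ocrconfig -delete +DATA', all copies should read ocrconfig_loc=+CRS;
# the 11.2 bug described above leaves node 2 pointing at +DATA.

ocr_loc_of() {
    # print the value of ocrconfig_loc= in the given ocr.loc file
    sed -n 's/^ocrconfig_loc=//p' "$1"
}

check_nodes() {
    # $1 = expected disk group; remaining args = ocr.loc files to check.
    # Prints every stale file and returns non-zero if any mismatch.
    expected=$1; shift
    rc=0
    for f in "$@"; do
        loc=$(ocr_loc_of "$f")
        if [ "$loc" != "$expected" ]; then
            echo "stale ocr.loc: $f points at $loc (expected $expected)"
            rc=1
        fi
    done
    return $rc
}
```

A non-zero exit (with the stale file named on stdout) tells you which node still needs its ocr.loc edited before CRS is restarted there.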
RAC Storage Planning
The two Oracle RAC nodes and the network storage server are configured as follows.

Oracle software components:

  Component         Grid Infrastructure        Oracle RAC
  OS user           grid                       oracle
  Primary group     oinstall                   oinstall
  Secondary groups  asmadmin, asmdba, asmoper  dba, oper, asmdba
  Home directory    /home/grid                 /home/oracle
  Oracle base       /u01/app/grid              /u01/app/oracle
  Oracle home       /u01/app/11.2.0/grid       /u01/app/oracle/product/11.2.0/dbhome_1

Storage components:

  Component        OCR/voting          datafile      recovery
  File system      ASM                 ASM           ASM
  Volume size      2 GB                120 GB        240 GB
  ASM volume group +CRS                +RACDB_DATA   +FRA
  ASM redundancy   normal              external      external
  Disks            3 disks, 2 GB each  1 disk        1 disk

Node configuration:

  Node name      racnode1                          racnode2
  Instance name  syntong1                          syntong2
  Database name  syntong
  Database       Oracle Database 11g Release 2
  OS             RHEL 6.3 (x86_64)                 RHEL 6.3 (x86_64)
  Processor      1 dual-core Intel Xeon, 3.00 GHz  1 dual-core Intel Xeon, 3.00 GHz
  RAM            at least 8 GB                     at least 8 GB
  Network        public, private and virtual IP addresses per node

(The source also carried stray columns from a virtualization capacity-planning table — virtual machine name, planned CPU count, planned memory, planned disk, current memory usage (MB), current disk usage (GB), 1 x 4 GB, 40 GB, HA failover capacity (25%), VMware vCenter Server and front-end services — whose table structure could not be reconstructed.)
Solution for the Multipath Problem in the X86 Resource Pool Oracle Databases
1. Background

When a multipath link fails over, the operating system rescans the disks and re-initializes the ownership of every disk device to root:disk. The permissions on all the disks the database uses therefore change, and the database crashes. The following measures were taken to deal with this disk-permission problem.
2. Implementation

Follow the steps below when multipathing is in use and udev rules are wanted for the multipath devices.

Step 1: Identify the target devices by their partition aliases.

Step 2: Edit /etc/udev/rules.d/12-dm-permissions.rules and set the device ownership to grid:asmadmin, one rule per LUN alias:

ENV{DM_NAME}=="LUN07", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN08", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN09", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN10", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN11", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN12", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN13", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN14", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN15", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN16", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN17", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN18", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN19", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN20", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN21", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN22", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN23", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN24", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN25", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN26", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN28", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN29", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN30", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN31", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN32", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN33", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN34", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN35", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN36", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN37", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN38", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN39", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN40", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN41", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN42", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN43", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN44", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN45", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN46", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN47", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN48", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN49", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"
ENV{DM_NAME}=="LUN59", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"

Step 3: Start (or restart) the multipath service.

Step 4: Check the resulting device permissions.

3. Reference

How to set udev rule for setting the disk permission on ASM disks when using multipath on OL 6.x (Doc ID 1521757.1)
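The forty-odd rules above differ only in the LUN alias, so the file can be generated rather than typed by hand. The sketch below reproduces the exact rule format and the LUN set from this plan (LUN07-LUN26, LUN28-LUN49, LUN59); the helper function names are mine, not part of any tool.

```shell
#!/bin/sh
# Sketch: generate /etc/udev/rules.d/12-dm-permissions.rules instead of
# hand-writing one line per LUN. Ownership grid:asmadmin and mode 660
# are taken from the plan above.

gen_dm_rule() {
    # emit one udev rule for a single multipath alias ($1)
    printf 'ENV{DM_NAME}=="%s", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="mapper/$ENV{DM_NAME}"\n' "$1"
}

gen_all_rules() {
    # LUN27 and LUN50-LUN58 are intentionally absent, as in the plan
    for n in $(seq 7 26) $(seq 28 49) 59; do
        gen_dm_rule "$(printf 'LUN%02d' "$n")"
    done
}

# e.g.: gen_all_rules > /etc/udev/rules.d/12-dm-permissions.rules
```

Generating the file also makes it trivial to re-audit later: a diff between the generated output and the live file shows any hand edits at a glance.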
Oracle RAC Dual-Host, Dual-Storage Adjustment Plan
July 2014

1. Project background

The two nodes of the Oracle RAC cluster database can no longer work together normally; at present only one of the RAC nodes is able to provide service. To make the system run efficiently, stably and reliably, its architecture needs to be adjusted to a dual-host, dual-storage configuration.
2. Database architecture before the adjustment

The Oracle RAC deployment uses two storage arrays. Oracle Clusterware requires an odd number of voting/OCR disks (1, 3, 5, ...), so each array was carved into two LUNs, four in total: array A holds LUN1 and LUN2, array B holds LUN3 and LUN4. At the OS level, IBM HACMP mirrors LUN2 and LUN4 into LUN5, and LUN1, LUN3 and LUN5 serve as the Oracle Clusterware disk files. However, because the block sizes used by the ASM-managed LUN1 and LUN3 and by the HACMP-managed LUN5 are inconsistent, the contents of the Clusterware disk files diverged at some point in time and the system failed. At the database layer, the ASM FAILGROUP feature keeps the data synchronized between the two arrays.
3. Database architecture after the adjustment

After the adjustment, the mirroring between the two arrays used by Oracle RAC is handled entirely by IBM AIX LVM together with HACMP: LVs mirrored by AIX LVM are turned into concurrent volumes by HACMP and presented to the Oracle RAC layer. For example, the LVM-mirrored LV1, LV2 and LV3 become the Oracle Clusterware disk files, while the mirrored LV4, LV5, LV6 and so on hold the Oracle RAC database. Oracle ASM uses the mirrored LVs directly and no longer supplies mirroring through the ASM FAILGROUP feature.
4. Implementation plan

Three plans were prepared so that the work stays efficient and safe and normal operation is restored within the agreed window. Plan 1 is the primary plan; Plan 2 is the fallback, executed only if Plan 1 cannot proceed; Plan 3 is the fallback for Plan 2, executed only if Plan 2 cannot proceed.
I. Storage Multipath Configuration
①. Configuration notes: the storage array and the host are connected over dual redundant channels. Once a storage logical volume (LUN) is mapped to a host or host group, the same LUN appears in the OS as two shadow devices; aggregating them with multipathing makes normal reads and writes to the storage disk possible. Because the MD3000-series array does not ship with multipath software, the multipath software bundled with CentOS is used here.
②. Configuration steps:

a) Before connecting the storage, check which multipath packages are already installed:

# rpm -qa | grep device-mapper

b) Install the multipath packages and their dependencies:

# yum install device-mapper device-mapper-multipath

(Let yum pull in everything the dependency resolver requires.) Note: because the servers could not reach the external CentOS yum mirrors, a local yum repository was configured on both machines:

# mount /dev/cdrom /media
# vi /etc/yum.repos.d/localyum.repo

[base]
name=localyum
baseurl=file:///media
gpgcheck=0
enabled=1

Save and exit, and rename the pre-existing .repo files to *.bak. Once the servers can reach the network again, simply restore them.
c) Check the multipath status:

# multipath -ll
/etc/multipath.conf does not exist, blacklisting all devices.
A sample multipath.conf file is located at
/usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf
You can run /sbin/mpathconf to create or modify /etc/multipath.conf
DM multipath kernel driver not loaded

The last line means the DM module is not loaded. Initialize and start DM with the following commands (or reboot):

# modprobe dm-multipath
# modprobe dm-round-robin
# service multipathd start
# chkconfig --level 2345 multipathd on        # start multipathd at boot

d) Edit the multipath configuration file:

# vi /etc/multipath.conf

blacklist {
    devnode "^sda"
    wwid **********
}
defaults {
    user_friendly_names yes
    path_grouping_policy multibus
    failback immediate
    no_path_retry fail
}

Save and exit, then restart the service: service multipathd restart

e) Connect the MD3000 storage and check the disks with fdisk -l. Once the disks look right, run:

# multipath -ll

A healthy setup reports something like the following — one aggregated LUN, with sdb and sdc as its two paths:

mpatha (360a98******************************) dm-0 DELL,LUN
size=XXG features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=4 status=active
  |- 1:0:0:0 sdb 8:16 active ready running
  `- 2:0:0:0 sdc 8:64 active ready running

# ls -l /dev/mapper

lists the aggregated device (mpathb) under /dev/mapper. Note: because the shared disk is larger than 2 TB, use the parted command to initialize (partition) it.
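Once multipathd is up, the "two active paths per LUN" condition above is easy to verify from a script instead of by eyeballing the output. The helpers below are a convenience sketch, not part of multipath-tools; they simply count the path lines in multipath -ll output.

```shell
#!/bin/sh
# Sketch: scripted check that a multipath device still has both paths.
# Feeds on 'multipath -ll <dev>' output like the sample shown above.

active_paths() {
    # count path lines reported as "active ready running" on stdin
    grep -c 'active ready running'
}

check_two_paths() {
    # succeed only if stdin shows at least two active paths
    n=$(active_paths)
    [ "$n" -ge 2 ]
}

# e.g.:
#   multipath -ll mpatha | check_two_paths || echo "path lost on mpatha"
```

Run from cron, a failed check gives early warning of a dead FC link long before the second path fails too.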
Oracle Advanced Course, Hands-On Case 3: Adding a Node to a RAC
Adding a node in a RAC environment is done in seven steps:

● Consider dependencies and prerequisites
● Configure the network components
● Install Oracle Clusterware
● Configure Oracle Clusterware
● Install the Oracle software
● Add one or more new instances
● Perform routine administration tasks

1. Consider dependencies and prerequisites

The first major step of any software installation or upgrade is to make sure a complete, usable backup of the system exists, including the OS and the data files. The next step is to verify the system requirements, the OS version, and the patch levels of all applications. The new node should run the same OS version as the existing nodes, including all the patches Oracle requires.
(1) Using the OCR file, create the OS for the new node, hostname class3.

(2) Give class3 two network interfaces and update the hosts file on all three nodes:

192.168.1.118 class1
192.168.1.119 class2
192.168.1.120 class3
10.0.0.54 class1-priv
10.0.0.55 class2-priv
10.0.0.56 class3-priv
192.168.1.250 class1-vip
192.168.1.251 class2-vip
192.168.1.252 class3-vip

(3) Set up a local yum repository on class3 and configure the iSCSI service:

[root@class3 ~]# mount /dev/cdrom /mnt
mount: block device /dev/cdrom is write-protected, mounting read-only
[root@class3 ~]# yum install -y iscsi-initiator-utils
Package iscsi-initiator-utils-6.2.0.871-0.10.el5.x86_64 already installed and latest version
Nothing to do
[root@class3 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.200
192.168.1.200:3260,1 ...baitu:iscsi1
[root@class3 ~]# iscsiadm -m node -T ...baitu:iscsi1 --login
Logging in to [iface: default, target: ...baitu:iscsi1, portal: 192.168.1.200,3260]
Login to [iface: default, target: ...baitu:iscsi1, portal: 192.168.1.200,3260]: successful
[root@class3 ~]# fdisk -l

Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot  Start   End   Blocks    Id  System
/dev/sda1   *       1  1020   8193118+  83  Linux
/dev/sda2        1021  1958   7534485   83  Linux

Disk /dev/sdb: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
   Device Boot  Start   End   Blocks   Id  System
/dev/sdb1           1   472   489433+  83  Linux
/dev/sdb2         473  1011   558943   83  Linux

Disk /dev/sdc: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
   Device Boot  Start    End   Blocks     Id  System
/dev/sdc1           1  10240   10485744   83  Linux

Disk /dev/sdd: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
   Device Boot  Start    End   Blocks     Id  System
/dev/sdd1           1  10240   10485744   83  Linux

(4) Add two 5 GB local disks to class3, reboot the OS, and partition and format them the same way as before.
Oracle 10g RAC Installation and Configuration (slide deck, 32 pages)
Shared storage options:

● Shared file system (NTFS, EXT3) — easy to maintain, but not recommended for large databases
● ASM (new in Oracle 10g) — easy to maintain, with reliable performance
II. CRS installation and upgrade

CRS installation:
1. Run the rootpre.sh script as root
2. Choose the CRS_HOME path
3. Plan the VIP addresses
4. Run /home/oracle/oraInventory/orainstRoot.sh and $CRS_HOME/root.sh as root
5. Work around the VIP configuration bug
Database creation:

Server mode; All Initialization Parameters: ...

5. Database Storage
● Control files — path, redundancy, parameters
● Tablespaces — path, size, parameters
● Datafiles — path, size, parameters
● Redo log groups — path, size, number of groups, number of members
System environment

Hardware:
● Server: CPU (32-bit or 64-bit; Intel Itanium, AMD), memory (on a 32-bit system Oracle can address 4 GB, with an SGA of about 1.7 GB), plus storage, NICs and so on
● Network: dual NICs, an interconnect, adequate bandwidth

Software:
● OS version: must be certified by Oracle
● Required system packages
Q&A

Thank you!
Notes on Oracle 10g RAC Installation and Configuration
1) Synchronize the system clocks
2) Host naming (length, special characters, etc.)
3) Shared disks (partitioning, sharing method, etc.)
4) Read/write permissions on the OCR and voting disks
5) Run the root.sh script as root
6) The VIP configuration bug
7) OS patch level (neither too low nor too high — pick a sensible one)
8) An Oracle-certified operating system
9) CRS and database software upgrades
10) Redo log groups
Multipath Configuration
Multipath I/O means that a server connects to a block storage device over multiple physical paths. Multipath I/O can also load-balance I/O and improve system performance, but it is first and foremost a fault-tolerance mechanism. The server and the storage connect through Fibre Channel switches in a SAN; one or more FC cables can run between server and storage, and this many-to-many wiring creates multiple storage paths, so host-to-storage I/O has several routes to choose from.

Multipathing answers three questions:

1. A host can reach its storage over several different paths; if they are used simultaneously, how is the I/O traffic distributed?
2. If one of the paths fails, how is that handled?
3. The OS sees each path as a separate physical disk, although they are really just different routes to the same disk — which confuses users.

Multipath software exists to solve exactly these problems. Working together with the storage device, its main functions are: 1. I/O traffic distribution; 2. path redundancy; 3. disk virtualization.

The multipath software consists of the following packages:

device-mapper-multipath-0.4.9-87.el6.x86_64
device-mapper-event-libs-1.02.95-2.el6.x86_64
device-mapper-persistent-data-0.3.2-1.el6.x86_64
device-mapper-multipath-libs-0.4.9-87.el6.x86_64
device-mapper-1.02.95-2.el6.x86_64
device-mapper-event-1.02.95-2.el6.x86_64
device-mapper-libs-1.02.95-2.el6.x86_64

Dependencies:

libaio-0.8.8-7.1.el6.x86_64.rpm
libaio-0.3.107-10.el6.x86_64.rpm
libaio-devel-0.3.107-10.el6.x86_64.rpm

The device-mapper-multipath package provides the multipathd and multipath tools and configuration files such as multipath.conf.
Oracle RAC Environment: Data Backup and Recovery Plan
[Overview] For a project, an enterprise needed to back up an Oracle database running in an Oracle RAC cluster according to its actual circumstances, and then use the production RMAN full backups to restore the data and build a test environment. This article walks through the RMAN full-backup procedure in that case and shows how an RMAN backup taken in a RAC environment is restored onto a single-instance server. Because it is very practical, the experience is shared here for fellow practitioners to study and discuss.
1. Background environment

Production runs on two Dell R840 servers with CentOS Linux 7.6 installed and multipathing configured. An EMC Unity array provides the shared storage: two 1 TB LUNs hold the database files, one 500 GB LUN holds the archived logs, and three 30 GB LUNs hold the OCR, FLASH and GIMR data. The Oracle RAC software version is 19c (19.0.0.0.0).
2. Data backup

1) Backup strategy. The project team required a backup plan to protect the RAC cluster's data. Each of the servers RAC1 and RAC2 has about 2 TB of usable local capacity, and each full backup produces roughly 400 GB of data files. Backups on odd-numbered days go to RAC1 and on even-numbered days to RAC2; at 4 x 400 GB = 1.6 TB, each server can hold four days of full backups. With that much headroom, RMAN incremental backups are not planned: full backups keep roughly the last eight days of data, and restores stay straightforward.
2) Backup procedure, run on RAC1 (RAC2 is analogous). First put the Oracle database into archivelog mode: hot, online and manual backups are only possible in archivelog mode, while noarchivelog mode permits cold backups only. The RMAN backups taken here are, of course, online backups.
(Screenshot omitted.) Next, create the RMAN script directory; create the backup wrapper script and add it to cron; and create the backup-cleanup script and add it to cron as well. Only the last four backups are kept, and the cleanup script first checks whether the most recent RMAN backup succeeded — if it did not, no backup data is deleted. The zabora.sh script is called here to judge the RMAN backup status; the SQL involved is quite simple. Install the cron entries on RAC1 and RAC2 and remember to restart the cron daemon. Finally, create the RMAN command script that the wrapper invokes.

3) Verification. Run the backup task by hand the first time rather than waiting for the overnight job, then check the log to confirm the backup completed (screenshot omitted).

3. Data recovery

This project uses full backups only, with no incrementals.
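The wrapper's odd-day/even-day routing described above can be sketched as follows. The paths, function names and the commented-out rman invocation are illustrative assumptions; the real scripts in this case also call zabora.sh to check backup status.

```shell
#!/bin/sh
# Sketch of the backup wrapper: odd-numbered days of the month back up
# to RAC1's local disk, even days to RAC2's, giving each server four
# full backups (~4 x 400 GB) before cleanup.

backup_host_for_day() {
    # $1 = day of month as printed by 'date +%d' (01..31)
    day=$(echo "$1" | sed 's/^0//')     # strip leading zero to avoid octal
    if [ $((day % 2)) -eq 1 ]; then
        echo rac1
    else
        echo rac2
    fi
}

run_backup() {
    host=$(backup_host_for_day "$(date +%d)")
    dest=/backup/rman/$(date +%Y%m%d)   # hypothetical local backup path
    echo "backing up to $host:$dest"
    # rman target / cmdfile=/home/oracle/rman/full_backup.rcv log="$dest/full.log"
}
```

Each node's cron entry would call run_backup nightly and simply exit when backup_host_for_day names the other node.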
Fixing Out-of-Order Disk Device Names After Attaching Two Storage Arrays in a Linux 5 + Oracle 10g RAC Environment
Several Red Hat 5.5 + Oracle 10g RAC database platforms were recently deployed with two FC storage arrays (production + backup) as shared storage. The production array exposes ten LUNs (2 OCR disks, 3 voting disks, 4 data disks, 1 FRA disk), mapped to both nodes; all ten are attached as raw devices at the OS level. The backup array exposes one LUN, mapped to both nodes and mounted as an ext3 file system on /u02 on one of them. The raw devices are bound in /etc/udev/rules.d/60-raw.rules:

ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde2", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdf3", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="sdg1", RUN+="/bin/raw /dev/raw/raw6 %N"
ACTION=="add", KERNEL=="sdh1", RUN+="/bin/raw /dev/raw/raw7 %N"
ACTION=="add", KERNEL=="sdi1", RUN+="/bin/raw /dev/raw/raw8 %N"
ACTION=="add", KERNEL=="sdj1", RUN+="/bin/raw /dev/raw/raw9 %N"
ACTION=="add", KERNEL=="sdk1", RUN+="/bin/raw /dev/raw/raw10 %N"
KERNEL=="raw[1-9]", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="raw10", OWNER="oracle", GROUP="dba", MODE="660"

The backup LUN is mounted on one node via /etc/fstab:

/dev/sdl1 /u02 ext3 defaults 0 0

After the two nodes' operating systems were rebooted, the database occasionally failed to start. Investigation confirmed that the device names of the eleven LUNs had shifted, so the raw devices could not be attached and the RAC database could not start. Concretely, the backup array's LUN, formerly sdl, came up as sdb after the reboot, and the ten production LUNs each moved up one letter accordingly; the original raw bindings therefore pointed at the wrong devices, and the database naturally would not start.
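The root cause is that sdX names depend on SCSI scan order. A hedged sketch of one fix — binding the raw devices by each LUN's SCSI WWID instead of its device letter — on RHEL 5 udev could look like this; the WWID strings are placeholders, and each LUN's real WWID would be obtained with /sbin/scsi_id -g -u -s /block/sdb (RHEL 5 syntax):

```conf
# /etc/udev/rules.d/60-raw.rules -- match by WWID, not by device letter.
# Placeholder WWIDs: replace 3600...01 / 3600...02 with the values
# scsi_id reports for each LUN.
ACTION=="add", KERNEL=="sd?1", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="3600...01", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sd?1", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="3600...02", RUN+="/bin/raw /dev/raw/raw2 %N"
KERNEL=="raw[1-9]", OWNER="oracle", GROUP="dba", MODE="660"
```

The ext3 mount can likewise be made name-independent by labelling the file system (e.g. e2label /dev/sdl1 u02disk) and mounting with LABEL=u02disk in /etc/fstab; both the label name and the rule layout here are assumptions for illustration, not the procedure the original deployment used.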
Oracle 12c R2 RAC Configuration Guide
Oracle 12c R2 Real Application Clusters Installation Guide
Zhu Haiqing, StarTimes Software Technology Co., Ltd

Minimum ASM disk space

Compared with the previous release, 12c R2's OCR disk footprint has grown noticeably. For convenience, size the disk groups as follows:

External: 1 volume x 40 GB
Normal:   3 volumes x 30 GB
High:     5 volumes x 25 GB
Flex:     3 volumes x 30 GB

The OCR + voting + MGMT storage is usually placed in a single disk group with Normal redundancy, i.e. at least three ASM disks and about 80 GB of space.
Operating system installation

During OS installation select "Server with GUI" and "Compatibility Libraries"; nothing else needs to be chosen. Use CentOS 7, RHEL 7 or Oracle Linux 7.

Install the Oracle preinstall package

yum install -y oracle-rdbms-server-12cR1-preinstall

Create users and groups

The oracle user and the dba and oinstall groups were already created in the previous step.
The uid and gid of the oracle and grid users must be identical on all RAC nodes, so it is best to specify them explicitly when creating the accounts:

groupadd --gid 54323 asmdba
groupadd --gid 54324 asmoper
groupadd --gid 54325 asmadmin
groupadd --gid 54326 oper
groupadd --gid 54327 backupdba
groupadd --gid 54328 dgdba
groupadd --gid 54329 kmdba
usermod --uid 54321 --gid oinstall --groups dba,oper,asmdba,asmoper,backupdba,dgdba,kmdba oracle
useradd --uid 54322 --gid oinstall --groups dba,asmadmin,asmdba,asmoper grid

Installation directories

mkdir -p /u01/app/12.2.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/

User environment variables

grid (quoting EOF keeps the shell from expanding the variables while the file is written):

cat <<'EOF' >>/home/grid/.bash_profile
ORACLE_SID=+ASM1
ORACLE_HOME=/u01/app/12.2.0/grid
PATH=$ORACLE_HOME/bin:$PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export ORACLE_SID CLASSPATH ORACLE_HOME LD_LIBRARY_PATH PATH
EOF

On node 2, ORACLE_SID=+ASM2.

oracle:

cat <<'EOF' >>/home/oracle/.bash_profile
ORACLE_SID=starboss1
ORACLE_HOME=/u01/app/oracle/product/12.2.0/db_1
ORACLE_HOSTNAME=rac01
PATH=$ORACLE_HOME/bin:$PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export ORACLE_SID ORACLE_HOME ORACLE_HOSTNAME PATH LD_LIBRARY_PATH CLASSPATH
EOF

On node 2, ORACLE_SID=starboss2 and ORACLE_HOSTNAME=rac02.

Edit logind.conf

# vi /etc/systemd/logind.conf
RemoveIPC=no
# systemctl daemon-reload
# systemctl restart systemd-logind

Load the pam_limits.so module

echo "session required pam_limits.so" >> /etc/pam.d/login

Disable SELinux

setenforce 0
vi /etc/sysconfig/selinux

Disable the firewall

# systemctl stop firewalld && systemctl disable firewalld

Set ulimits

cat <<EOF >> /etc/security/limits.d/99-grid-oracle-limits.conf
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
grid soft nproc 16384
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
EOF

Create the custom ulimit profile (again with EOF quoted so that $USER and $SHELL survive into the script):

cat <<'EOF' >> /etc/profile.d/oracle-grid.sh
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -u 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
if [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -u 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
EOF

Resize the shared memory file system

Add an entry for /dev/shm to /etc/fstab; pick the size according to the actual situation, because it depends on physical memory and MEMORY_TARGET.
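A sketch of the fstab entry just mentioned; the 16g figure is an assumption for illustration — the size simply has to exceed the instance's MEMORY_TARGET:

```conf
# /etc/fstab -- tmpfs backing /dev/shm; size must be larger than MEMORY_TARGET
tmpfs   /dev/shm   tmpfs   defaults,size=16g   0 0
```

The new size can be applied without a reboot with mount -o remount /dev/shm.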
Oracle 10gR2 RAC Installation, Part 3: Configuring Shared Storage
Configure the shared storage. On the ESX host:

# cd /vmfs/volumes/CX320_R5_03
# mkdir SHAREDISK
# cd SHAREDISK
# vmkfstools -c 10240m -a lsilogic -d eagerzeroedthick shdisk01_10g.vmdk
Creating disk 'shdisk01_10g.vmdk' and zeroing it out...
Create: 100% done.
# vmkfstools -c 10240m -a lsilogic -d eagerzeroedthick shdisk02_10g.vmdk
Creating disk 'shdisk02_10g.vmdk' and zeroing it out...
Create: 100% done.
# vmkfstools -c 1024m -a lsilogic -d eagerzeroedthick shdisk01_1g.vmdk
Creating disk 'shdisk01_1g.vmdk' and zeroing it out...
Create: 100% done.
# vmkfstools -c 1024m -a lsilogic -d eagerzeroedthick shdisk02_1g.vmdk
Creating disk 'shdisk02_1g.vmdk' and zeroing it out...
Create: 100% done.

# vi VM05003.vmx

Edit the virtual machine configuration file and add:

disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
scsi1.sharedBus = "virtual"

Disk layout:

OCR (Oracle Cluster Registry):  /dev/raw/raw3  /dev/sdd  1 GB
Voting disk:                    /dev/raw/raw4  /dev/sde  1 GB
ASM:
  /dev/raw/raw1  /dev/sdb  10 GB  VOL1 for Oracle data
  /dev/raw/raw2  /dev/sdc  10 GB  VOL2 for the flash_recovery_area (database backups)

[root@racnode1 ~]# ls -l /dev/sd*
brw-r----- 1 root disk 8,  0 Nov 17 09:10 /dev/sda
brw-r----- 1 root disk 8,  1 Nov 17 09:10 /dev/sda1
brw-r----- 1 root disk 8,  2 Nov 17 09:10 /dev/sda2
brw-r----- 1 root disk 8,  3 Nov 17 09:10 /dev/sda3
brw-r----- 1 root disk 8,  4 Nov 17 09:10 /dev/sda4
brw-r----- 1 root disk 8,  5 Nov 17 09:10 /dev/sda5
brw-r----- 1 root disk 8,  8 Nov 17 09:10 /dev/sda8
brw-r----- 1 root disk 8,  9 Nov 17 09:10 /dev/sda9
brw-r----- 1 root disk 8, 16 Nov 17 09:10 /dev/sdb    -- 10 GB, DataDG
brw-r----- 1 root disk 8, 32 Nov 17 09:10 /dev/sdc    -- 10 GB, DataDG
brw-r----- 1 root disk 8, 48 Nov 17 09:10 /dev/sdd    -- 1 GB, OCR disk
brw-r----- 1 root disk 8, 64 Nov 17 09:10 /dev/sde    -- 1 GB, voting disk

[root@racnode1 ~]# fdisk -l

Disk /dev/sda: 107.3 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot  Start     End    Blocks     Id  System
/dev/sda1   *       1     653    5245191    83  Linux
/dev/sda2         654    3264   20972857+   82  Linux swap / Solaris
/dev/sda3        3265    4569   10482412+   83  Linux
/dev/sda4        4570   13054   68155762+    5  Extended
/dev/sda5        4570    5874   10482381    83  Linux
/dev/sda6        5875    7179   10482381    83  Linux
/dev/sda7        7180    8484   10482381    83  Linux
/dev/sda8        8485   10309   14659281    83  Linux
/dev/sda9       10310   13054   22049181    83  Linux

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Bind the raw devices on both nodes:

[root@racnode1 ~]# /bin/raw /dev/raw/raw1 /dev/sdb    -- rac1
/dev/raw/raw1: bound to major 8, minor 16
[root@racnode1 ~]# /bin/raw /dev/raw/raw2 /dev/sdc
/dev/raw/raw2: bound to major 8, minor 32
[root@racnode1 ~]# /bin/raw /dev/raw/raw3 /dev/sdd
/dev/raw/raw3: bound to major 8, minor 48
[root@racnode1 ~]# /bin/raw /dev/raw/raw4 /dev/sde
/dev/raw/raw4: bound to major 8, minor 64

[root@racnode2 ~]# /bin/raw /dev/raw/raw1 /dev/sdb    -- rac2
/dev/raw/raw1: bound to major 8, minor 16
[root@racnode2 ~]# /bin/raw /dev/raw/raw2 /dev/sdc
/dev/raw/raw2: bound to major 8, minor 32
[root@racnode2 ~]# /bin/raw /dev/raw/raw3 /dev/sdd
/dev/raw/raw3: bound to major 8, minor 48
[root@racnode2 ~]# /bin/raw /dev/raw/raw4 /dev/sde
/dev/raw/raw4: bound to major 8, minor 64

Then make the bindings permanent:

[root@node1 ~]# vi /etc/udev/rules.d/60-raw.rules

Add the following (KERNEL matches the bare device name; the major/minor rules are kept as a fallback):

ACTION=="add", KERNEL=="sdb", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="16", RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add", KERNEL=="sdc", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="32", RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add", KERNEL=="sdd", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="48", RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add", KERNEL=="sde", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="64", RUN+="/bin/raw /dev/raw/raw4 %M %m"

chown oracle:dba /dev/raw/raw*

And in /etc/rc.local:

vi /etc/rc.local
chown oracle:oinstall /dev/raw/raw*    -- reapply ownership at every boot

Note: if ASM will not start, check the raw device permissions first.

Re-activate the raw bindings after a reboot:

# service rawdevices restart

or:

# cd /etc/rc.d/init.d
# sh rawdevices start
Assigning devices:
    /dev/raw/raw1 --> /dev/sdb1
    /dev/raw/raw1: bound to major 8, minor 17
    /dev/raw/raw2 --> /dev/sdb2
    /dev/raw/raw2: bound to major 8, minor 18
    /dev/raw/raw3 --> /dev/sdb3
    /dev/raw/raw3: bound to major 8, minor 19
done

# /sbin/chkconfig rawdevices on
Setting Up Oracle's Flash Recovery Area and Archived Log Destinations
Starting with Oracle 9i, flashback query makes it possible to read consistent data as of some past point in time; it is implemented through undo. Its big limitation is that the undo for the relevant transactions must not have been overwritten — once it is, nothing can be done. Oracle 10g greatly strengthens flashback query and adds the ability to rewind the entire database to a past point in time, implemented through a new kind of log, the flashback log. Flashback logs resemble redo logs, except that redo logs roll the database forward while flashback logs roll it back. To keep the files related to administration, backup and recovery in one place, 10g also introduces a new feature called the flash recovery area, a region where all recovery-related files — flashback logs, archived logs, backup sets and so on — can be stored and managed centrally.

1. Configuring the flash recovery area

The flash recovery area is configured and managed mainly through three initialization parameters:

db_recovery_file_dest : the location of the flash recovery area
db_recovery_file_dest_size : the amount of space available to the flash recovery area
db_flashback_retention_target : how far back, in minutes, the database can be rewound; the default is 1440 minutes, i.e. one day

In practice, how far back you can actually go also depends on the size of the flash recovery area, because it holds the flashback logs the rewind needs; change this parameter together with db_recovery_file_dest_size.
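The three parameters above can be set online with ALTER SYSTEM. The values in this sketch are assumptions for illustration (an +FRA disk group, 100 GB, a two-day window), not part of the original case:

```sql
-- Size the area before pointing at it, or the dest assignment can fail
-- for lack of space. 2880 minutes = a two-day flashback window.
ALTER SYSTEM SET db_recovery_file_dest_size = 100G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH;
ALTER SYSTEM SET db_flashback_retention_target = 2880 SCOPE=BOTH;
```

Remember that the retention target is only honored while the area is big enough to keep that many minutes of flashback logs, which is exactly the coupling the paragraph above describes.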
2. Enabling flashback database

Once the flash recovery area is set up, the flashback database feature can be enabled. First, the database must already be in archivelog mode.

1. Shut down the database: SQL> shutdown immediate;
2. Start the database in mount mode: SQL> startup mount
3. Display and change the archiving mode:
   SQL> archive log list
   SQL> alter database archivelog;
   SQL> alter database open
4. Set the archived log name format:
   SQL> alter system set log_archive_format='ARC%s%t%r.log' scope=spfile;
5. Set the archived log destination:
   SQL> alter system set log_archive_dest='+data/arcl' scope=spfile;
   SQL> shutdown immediate
   SQL> startup
6. Force an archived log switch: SQL> alter system switch logfile;
7. To turn archiving off again: SQL> alter database noarchivelog;

Format parameters:
%s  log sequence number
%S  log sequence number, zero-padded
%t  redo thread number
%a  activation ID
%d  database ID
%r  RESETLOGS ID

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     156
Next log sequence to archive   158
Current log sequence           158

Then start the database to the mount state and switch flashback on:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area  285212672 bytes
Fixed Size                  1218992 bytes
Variable Size              75499088 bytes
Database Buffers          205520896 bytes
Redo Buffers                2973696 bytes
Database mounted.
SQL> alter database flashback on;
Database altered.
Oracle 11gR2 RAC with ASM, Multipath, ESXi and Openfiler
1. Introduction

Hardware: FUJITSU PRIMERGY RX300 S7, 128 GB RAM, 2 TB disk
Hypervisor: VMware ESXi 5.5 Update 1
Openfiler version: 2.99.1
Guest OS: Oracle Linux 6 Update 7
Oracle Database version: 11.2.0.4 RAC

Architecture planning: install VMware ESXi 5.5 Update 1 (omitted); install Openfiler (omitted).

Openfiler configuration: https://10.10.10.200:446, log in with the default credentials (User: openfiler / Pass: password).

Install Oracle Linux 6.7 (omitted). Choose the "Desktop" installation type; after installation, disable the firewall and SELinux.

Install the required packages. Configure a local yum repository:

mount -o loop OracleLinux-R6-U7-Server-x86_64-dvd.iso /media
cd /etc/yum.repos.d
vi local.repo
[oel6]
name=Enterprise Linux 6.7 DVD
baseurl=file:///media/Server
gpgcheck=0
enabled=1

yum install oracle-rdbms-server-11gR2-preinstall

Prepare for the Oracle installation (run the following on all nodes).

cat /etc/sysctl.conf
# oracle-rdbms-server-11gR2-preinstall setting for fs.file-max is 6815744
fs.file-max = 6815744
# oracle-rdbms-server-11gR2-preinstall setting for kernel.sem is '250 32000 100 128'
kernel.sem = 250 32000 100 128
# oracle-rdbms-server-11gR2-preinstall setting for kernel.shmmni is 4096
kernel.shmmni = 4096
# oracle-rdbms-server-11gR2-preinstall setting for kernel.shmall is 1073741824 on x86_64
# oracle-rdbms-server-11gR2-preinstall setting for kernel.shmall is 2097152 on i386
# oracle-rdbms-server-11gR2-preinstall setting for kernel.shmmax is 4398046511104 on x86_64
# oracle-rdbms-server-11gR2-preinstall setting for kernel.shmmax is 4294967295 on i386
kernel.shmmax = 4398046511104
# oracle-rdbms-server-11gR2-preinstall setting for kernel.panic_on_oops is 1 per Orabug 19212317
kernel.panic_on_oops = 1
# oracle-rdbms-server-11gR2-preinstall setting for net.core.rmem_default is 262144
net.core.rmem_default = 262144
# oracle-rdbms-server-11gR2-preinstall setting for net.core.rmem_max is 4194304
net.core.rmem_max = 4194304
# oracle-rdbms-server-11gR2-preinstall setting for net.core.wmem_default is 262144
net.core.wmem_default = 262144
# oracle-rdbms-server-11gR2-preinstall setting for net.core.wmem_max is 1048576
net.core.wmem_max = 1048576
# oracle-rdbms-server-11gR2-preinstall setting for fs.aio-max-nr is 1048576
fs.aio-max-nr = 1048576
# oracle-rdbms-server-11gR2-preinstall setting for net.ipv4.ip_local_port_range is 9000 65500
net.ipv4.ip_local_port_range = 9000 65500

cat /etc/security/limits.conf
# oracle-rdbms-server-11gR2-preinstall setting for nofile soft limit is 1024
oracle soft nofile 1024
# oracle-rdbms-server-11gR2-preinstall setting for nofile hard limit is 65536
oracle hard nofile 65536
# oracle-rdbms-server-11gR2-preinstall setting for nproc soft limit is 16384
# refer orabug15971421 for more info.
oracle soft nproc 16384
# oracle-rdbms-server-11gR2-preinstall setting for nproc hard limit is 16384
oracle hard nproc 16384
# oracle-rdbms-server-11gR2-preinstall setting for stack soft limit is 10240KB
oracle soft stack 10240
# oracle-rdbms-server-11gR2-preinstall setting for stack hard limit is 32768KB
oracle hard stack 32768
# oracle-rdbms-server-11gR2-preinstall setting for memlock hard limit is maximum of {128GB (x86_64) / 3GB (x86) or 90% of RAM}
oracle hard memlock 134217728
# oracle-rdbms-server-11gR2-preinstall setting for memlock soft limit is maximum of {128GB (x86_64) / 3GB (x86) or 90% of RAM}
oracle soft memlock 134217728
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
grid hard memlock 134217728
grid soft memlock 134217728

Configure the iSCSI initiator services:

service iscsi start
service iscsid start
chkconfig iscsi on
chkconfig iscsid on

[root@host1 rules.d]# iscsiadm -m discovery -t sendtargets -p 10.10.10.200
10.10.10.200:3260,1 .openfiler:tsn.a5e4c27b1e4d
10.10.10.201:3260,1 .openfiler:tsn.a5e4c27b1e4d
10.10.10.200:3260,1 .openfiler:tsn.a55e30d0c0e9
10.10.10.201:3260,1 .openfiler:tsn.a55e30d0c0e9

Log in manually (one login per target per portal):

iscsiadm -m node -T .openfiler:tsn.a5e4c27b1e4d -p 10.10.10.200 -l
iscsiadm -m node -T .openfiler:tsn.a5e4c27b1e4d -p 10.10.10.201 -l
iscsiadm -m node -T .openfiler:tsn.a55e30d0c0e9 -p 10.10.10.200 -l
iscsiadm -m node -T .openfiler:tsn.a55e30d0c0e9 -p 10.10.10.201 -l

Display the current sessions:

# iscsiadm -m session
tcp: [1] 10.10.10.200:3260,1 .openfiler:tsn.a5e4c27b1e4d (non-flash)
tcp: [2] 10.10.10.201:3260,1 .openfiler:tsn.a5e4c27b1e4d (non-flash)
tcp: [3] 10.10.10.200:3260,1 .openfiler:tsn.a55e30d0c0e9 (non-flash)
tcp: [4] 10.10.10.201:3260,1 .openfiler:tsn.a55e30d0c0e9 (non-flash)

Install multipath:

yum install device-mapper-multipath

Configure multipath:

cat /etc/multipath.conf
blacklist {
    devnode "^sda[1-2]"
}
defaults {
    user_friendly_names yes
    path_grouping_policy multibus
    failback immediate
    no_path_retry fail
}

Start the service:

service multipathd start

cat /etc/udev/rules.d/12-dm-permissions.rules
ENV{DM_NAME}=="mpathb", OWNER:="grid", GROUP:="asmadmin", MODE:="0660", SYMLINK+="oracleasm/disk-$env{DM_NAME}"

Reload udev (OEL 6 style):

udevadm control --reload-rules
start_udev

cat /etc/hosts:
10.10.10.11  host1
10.10.10.12  host2
10.10.10.21  host1-vip
10.10.10.22  host2-vip
10.10.10.31  host-cluster host-cluster-scan
192.168.1.11 host1-priv
192.168.1.12 host2-priv

Create the users:

groupadd -g 5000 asmadmin
groupadd -g 5001 asmdba
groupadd -g 5002 asmoper
groupadd -g 6000 oinstall
groupadd -g 6001 dba
groupadd -g 6002 oper
useradd -u 2000 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 2001 -g oinstall -G dba,asmdba oracle

The oracle user may already exist, in which case useradd will fail; use usermod to adjust the oracle user's group membership instead, otherwise DBCA will fail later when creating the database.

passwd grid
passwd oracle

Create the directories:

mkdir -p /oracle/grid_base
mkdir -p /oracle/grid_home
mkdir -p /oracle/app/product/11.2/db_1
chown -R grid:asmadmin /oracle
chown -R oracle:oinstall /oracle/app

Set the environment variables.

grid user:

export ORACLE_BASE=/oracle/grid_base
export ORACLE_HOME=/oracle/grid_home
export GRID_HOME=/oracle/grid_home
export PATH=$GRID_HOME/bin:$GRID_HOME/OPatch:/sbin:/bin:/usr/sbin:/usr/bin
export ORACLE_SID=+ASM1
export LD_LIBRARY_PATH=$GRID_HOME/lib:$GRID_HOME/lib32
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK

oracle user:

export ORACLE_BASE=/oracle/app
export ORACLE_HOME=/oracle/app/product/11.2/db_1
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:/sbin:/bin:/usr/sbin:/usr/bin
export ORACLE_SID=rac1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK

Install Grid Infrastructure (as the grid user). Check the cluster status:

[root@host1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA1.dg
               ONLINE  ONLINE       host1
               ONLINE  ONLINE       host2
ora.LISTENER.lsnr
               ONLINE  ONLINE       host1
               ONLINE  ONLINE       host2
ora.asm
               ONLINE  ONLINE       host1                    Started
               ONLINE  ONLINE       host2                    Started
ora.gsd
               OFFLINE OFFLINE      host1
               OFFLINE OFFLINE      host2
work
               ONLINE  ONLINE       host1
               ONLINE  ONLINE       host2
ora.ons
               ONLINE  ONLINE       host1
               ONLINE  ONLINE       host2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       host1
ora.cvu
      1        ONLINE  ONLINE       host1
ora.host1.vip
      1        ONLINE  ONLINE       host1
ora.host2.vip
      1        ONLINE  ONLINE       host2
ora.oc4j
      1        ONLINE  ONLINE       host1
ora.scan1.vip
      1        ONLINE  ONLINE       host1

Patch the GI home (run on both nodes as the grid user; the rootcrs.pl unlock step below runs as root):

cd /oracle/grid_home/crs/install
[root@host1 install]# ./rootcrs.pl -unlock
Using configuration parameter file: ./crsconfig_params
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'host1'
CRS-2673: Attempting to stop 'ora.crsd' on 'host1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'host1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'host1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'host1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'host1'
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'host1'
CRS-2677: Stop of 'ora.cvu' on 'host1' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'host2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'host1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.host1.vip' on 'host1'
CRS-2676: Start of 'ora.cvu' on 'host2' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'host1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'host2'
CRS-2677: Stop of 'ora.host1.vip' on 'host1' succeeded
CRS-2672: Attempting to start 'ora.host1.vip' on 'host2'
CRS-2676: Start of 'ora.scan1.vip' on 'host2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'host2'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'host2' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'host1' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'host2'
CRS-2676: Start of 'ora.host1.vip' on 'host2' succeeded
CRS-2677: Stop of 'ora.DATA1.dg' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host1'
CRS-2677: Stop of 'ora.asm' on 'host1' succeeded
CRS-2676: Start of 'ora.oc4j' on 'host2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'host1'
CRS-2677: Stop of 'ora.ons' on 'host1' succeeded
CRS-2673: Attempting to stop 'work' on 'host1'
CRS-2677: Stop of 'work' on 'host1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'host1' has completed
CRS-2677: Stop of 'ora.crsd' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'host1'
CRS-2673: Attempting to stop 'ora.evmd' on 'host1'
CRS-2673: Attempting to stop 'ora.asm' on 'host1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'host1'
CRS-2677: Stop of 'ora.evmd' on 'host1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'host1' succeeded
CRS-2677: Stop of 'ora.asm' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'host1'
CRS-2677: Stop of 'ora.ctssd' on 'host1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host1'
CRS-2677: Stop of 'ora.cssd' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'host1'
CRS-2677: Stop of 'ora.crf' on 'host1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'host1'
CRS-2677: Stop of 'ora.gpnpd' on 'host1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'host1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully unlock /oracle/grid_home

Update the OPatch utility, then apply the patch:

cd /oracle/grid_home
rm -rf OPatch/
unzip p6880880_112000_Linux-x86-64.zip
cd /oracle/soft/grid
unzip p2*******_112040_Linux-x86-64.zip
cd /oracle/grid_home/OPatch/
./opatch napply -oh /oracle/grid_home -local /oracle/soft/grid/20996923
./opatch lspatches
./opatch lsinventory

Patch the database home (run on both nodes as the oracle user):

cd /oracle/app/product/11.2/db_1/
rm -rf OPatch/
unzip p6880880_112000_Linux-x86-64.zip
cd /oracle/soft/db/
unzip p2*******_112040_Linux-x86-64.zip
cd /oracle/app/product/11.2/db_1/OPatch
./opatch napply -oh /oracle/app/product/11.2/db_1 -local /oracle/soft/db/20760982
./opatch lspatches
./opatch lsinventory

Restart the CRS stack as root:

/oracle/grid_home/rdbms/install/rootadd_rdbms.sh
/oracle/grid_home/crs/install/rootcrs.pl -patch

System checks:

crsctl check has
crsctl check crs
crsctl stat res -t
ifconfig -a

Create the database with DBCA. Adjust the VKTM process priority.

References:
Baidu Wenku: Oracle 11gR2 RAC ESXi Openfiler - /link?url=zhS05_Rf3zH0CQF-RFpIwOr8S2-q2qw8IHFLDlz9e6UusN-WFN0LvHW9Tm0tRR-MCFuiajwOQaZimYyZpXUJ5zX2Hf45K_V2uTC2qh4wlXS
Using multipath for persistent storage device names in an Oracle RAC installation - /s/blog_48567d850101jxmj.html
A usage case of the multipath software on Linux - /23135684/viewspace-745789/
Oracle 11gR2 + RAC + ASM + Oracle Linux 6.4 installation explained (with figures) - /xmlrpc.php?r=blog/article&id=4681351&uid=29655480
Build Your Own Oracle RAC 11g Cluster on Oracle Enterprise Linux and iSCSI - /technetwork/cn/articles/hunter-rac11gr2-iscsi-083834-zhs.html
RAC 11.2.0.4 setup using OPENFILER with multipathed iSCSI disks - http://www.hhutzler.de/blog/rac-11-2-0-4-setup-using-openfiler-with-multipathed-iscsi-disks/#setup-iscsi-clients-rac-nodes
UDEV setup in a multipath env for RAC/ASM - http://www.hhutzler.de/blog/udev-setup-for-a-multipath-env/
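After a reboot it is worth confirming that the udev rule above really handed the multipath device to grid:asmadmin with mode 0660; ASM will silently ignore disks it cannot open. A minimal sketch (the /dev/oracleasm/disk-* path comes from this guide's SYMLINK+= rule and is not universal):

```shell
# Report owner, group and mode of a device node (or any file).
check_asm_disk() {
  stat -c '%U:%G %a' "$1"
}

# Verify every symlink created by 12-dm-permissions.rules.
# (Path assumed from the SYMLINK+="oracleasm/disk-..." rule above;
# if the directory does not exist the loop simply prints nothing.)
for d in /dev/oracleasm/disk-*; do
  [ -e "$d" ] || continue
  printf '%s %s\n' "$d" "$(check_asm_disk "$d")"
done
```

Each reported line should read grid:asmadmin 660 on a correctly configured node.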
Jinan Municipal Public Security Bureau Oracle RAC Database Implementation
1. Introduction to the Oracle 10g RAC Database

Oracle 10g RAC architecture: the figure below shows the major components of an Oracle RAC 10g configuration.
The nodes of the cluster are usually separate servers (hosts).

Hardware. At the hardware level, the nodes of a RAC cluster share three things: 1) access to shared disk storage; 2) a connection to the private network; 3) access to the public network.

Shared disk storage. Oracle RAC relies on a shared-disk architecture.
The database files, online redo logs, and control files of the database must all be accessible to every node in the cluster.
The shared disks also store the Oracle Cluster Registry (OCR) and the voting disk (discussed later).
Shared storage can be configured in several ways, including directly attached disks (typically SCSI over copper or fiber), a storage area network (SAN), or network-attached storage (NAS).

Private network. Each cluster node is connected to every other node over a private high-speed network, also known as the cluster interconnect or high-speed interconnect (HSI).
Oracle's Cache Fusion technology uses this network to effectively combine the physical memory (RAM) of each host into a single cache.
By shipping data held in one Oracle instance's cache across the private network, Oracle Cache Fusion allows any other instance to access that data.
It also maintains data integrity and cache coherency by exchanging locking and other synchronization messages between the cluster nodes.
The private network is usually built with Gigabit Ethernet, but for high-volume environments many vendors offer proprietary low-latency, high-bandwidth solutions designed specifically for Oracle RAC.
Linux also offers a way to bond multiple physical NICs into a single virtual NIC (not covered here) to increase bandwidth and improve availability.

Public network. To maintain high availability, each cluster node is assigned a virtual IP address (VIP).
If a host fails, the failed node's IP address can be reassigned to a surviving node, allowing applications to keep accessing the database through the same IP address.

Oracle Cluster Ready Services. Oracle RAC 10g introduced Oracle Cluster Ready Services (CRS), a platform-independent set of system services for cluster environments.
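In practice, administrators confirm that the resources CRS manages (VIPs, listeners, ASM, and so on) are registered and running with `crsctl stat res -t`. As a small illustration (a sketch, not part of the original document), the helper below scans such output for resources that show an OFFLINE state anywhere; the sample input mirrors the ora.gsd entry seen in the crsctl output elsewhere in this file:

```shell
# Sketch: list resource names that show an OFFLINE state in
# `crsctl stat res -t`-style output (read from stdin).
offline_resources() {
  awk '/^ora\./ { res = $1 }                              # remember the current resource name
       /OFFLINE/ { if (res != "") { print res; res = "" } }'
}

# Example with a fragment modelled on real crsctl output:
sample='ora.gsd
               OFFLINE OFFLINE      host1
ora.ons
               ONLINE  ONLINE       host1'
printf '%s\n' "$sample" | offline_resources   # prints: ora.gsd
```

Note that ora.gsd being OFFLINE is normal on 10g/11g clusters that do not run 9i databases, so a real checker would typically whitelist it.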
[ORACLE] Setting Up Multiplexed Control Files in Oracle RAC

Point the spfile at both control file copies, stop the database, restore the control file to the new location with RMAN, then restart and verify:

SQL> alter system set control_files='+DATA/proc/controlfile/current.274.971309975','+ARCH/proc/CONTROLFILE/current.256.972939847' scope=spfile sid='*';

System altered.

[oracle@rac01 ~]$ srvctl stop database -d proc -o immediate
[oracle@rac01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sun Apr 8 20:58:16 2018

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to an idle instance.

RMAN restore of the control file to the new location (output excerpt):

channel ORA_DISK_1: copied control file copy
Finished restore at 08-APR-18

ORACLE instance shut down.

[oracle@rac01 ~]$ srvctl start database -d proc

SQL> select inst_id,name from gv$controlfile;
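A quick sanity check on a control_files setting like the ALTER SYSTEM above is that the listed copies really live in different ASM disk groups, since two copies in the same disk group do not protect against losing that group. A hedged sketch (the helper name is invented for illustration):

```shell
# Count the distinct ASM disk groups (+NAME prefixes) referenced by a
# comma-separated control_files value.
spans_diskgroups() {
  printf '%s\n' "$1" | tr ',' '\n' \
    | sed -n 's/^ *\(+[^/]*\)\/.*/\1/p' | sort -u | wc -l
}

cf="+DATA/proc/controlfile/current.274.971309975,+ARCH/proc/CONTROLFILE/current.256.972939847"
spans_diskgroups "$cf"   # 2 distinct disk groups -> the control file is multiplexed
```

A result of 1 would mean both copies sit in the same disk group and the multiplexing is only nominal.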
Oracle RAC storage multipath setup case, using RHEL 6, CentOS 6, Oracle Linux 6, and Asianux 4 as examples.

1. Install the multipath client packages.
For an FC SAN: yum install device-mapper device-mapper-multipath -y
For an IP SAN: yum install iscsi-initiator-utils device-mapper device-mapper-multipath -y

2. Create a multipath configuration file: copy /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf to the /etc directory.

3. Start the multipath service:
/etc/init.d/multipathd restart

4. Make /etc/multipath/bindings identical on both nodes:

[root@rac81]# cat /etc/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
mpatha 3600605b005c1b03019ae96a616049c04
mpathb 3600143801259f9320000500000360000
mpathc 3600143801259f9320000500000420000
mpathd 3600143801259f9320000500000460000
mpathe 3600143801259f93200005000004a0000
mpathf 3600143801259f93200005000003e0000
mpathg 3600143801259f93200005000003a0000
mpathh 3600143801259f93200005000004e0000
mpathi 3600143801259f9320000500000520000
mpathj 3600143801259f9320000500000560000
mpathk 3600143801259f93200005000005a0000
mpathl 3600143801259f93200005000005e0000
mpathm 3600143801259f93200005000007a0000

5. Configure the device entries in multipath.conf. Tailor this section to your array model; the example below is for an HP array:

devices {
    device {
        vendor "HP"
        product "HSV2[01]0|HSV300|HSV4[05]0"
        getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        prio alua
        hardware_handler "0"
        path_selector "round-robin 0"
        path_grouping_policy group_by_prio
        failback immediate
        rr_weight uniform
        no_path_retry 18
        rr_min_io_rq 1
        path_checker tur
    }
}

Put the WWID of each local disk into the blacklist so that no multipath device is created for it:

blacklist {
    #wwid "3600605b005c192d019aeb93a121ef663"
    wwid "3600605b005c1b03019ae96a616049c04"
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
}

Set the multipath aliases:

multipaths {
    multipath {
        wwid 3600143801259f9320000500000360000
        alias disk1
    }
    multipath {
        wwid 3600143801259f9320000500000420000
        alias disk2
    }
    multipath {
        wwid 3600143801259f9320000500000460000
        alias disk3
    }
    multipath {
        wwid 3600143801259f93200005000004a0000
        alias disk4
    }
    multipath {
        wwid 3600143801259f93200005000003e0000
        alias disk5
    }
    multipath {
        wwid 3600143801259f93200005000003a0000
        alias disk6
    }
    multipath {
        wwid 3600143801259f93200005000004e0000
        alias disk7
    }
    multipath {
        wwid 3600143801259f9320000500000520000
        alias disk8
    }
    multipath {
        wwid 3600143801259f9320000500000560000
        alias disk9
    }
    multipath {
        wwid 3600143801259f93200005000005a0000
        alias disk10
    }
    multipath {
        wwid 3600143801259f93200005000005e0000
        alias disk11
    }
    multipath {
        wwid 3600143801259f93200005000007a0000
        alias disk12
    }
}

The defaults section must be enabled; the stock values are generally fine:

defaults {
    udev_dir /dev
    polling_interval 10
    path_selector "round-robin 0"
    path_grouping_policy multibus
    getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
    prio alua
    path_checker readsector0
    rr_min_io 100
    max_fds 8192
    rr_weight priorities
    failback immediate
    no_path_retry fail
    user_friendly_names yes
}

6. Run multipath -F to flush the previously generated (incorrect) multipath maps:
[root@rac81 ~]# multipath -F

7. Run multipath -v2 to create the new multipath maps:
[root@rac81 ~]# multipath -v2

8. Use multipath -ll to inspect the resulting maps:

[root@rac81 ~]# multipath -ll
disk9 (3600143801259f9320000500000560000) dm-16 HP,HSV360
size=500G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=10 status=active
  |- 6:0:0:9  sdj  8:144  failed faulty running
  |- 6:0:1:9  sdv  65:80  failed faulty running
  |- 8:0:0:9  sdah 66:16  active ready running
  `- 8:0:1:9  sdat 66:208 active ready running
disk8 (3600143801259f9320000500000520000) dm-14 HP,HSV360
size=500G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=10 status=active
  |- 6:0:0:8  sdi  8:128  failed faulty running
  |- 6:0:1:8  sdu  65:64  failed faulty running
  |- 8:0:0:8  sdag 66:0   active ready running
  `- 8:0:1:8  sdas 66:192 active ready running
disk12 (3600143801259f93200005000007a0000) dm-18 HP,HSV360
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=10 status=active
  |- 6:0:0:12 sdm  8:192  failed faulty running
  |- 6:0:1:12 sdy  65:128 failed faulty running
  |- 8:0:0:12 sdak 66:64  active ready running
  `- 8:0:1:12 sdaw 67:0   active ready running
disk7 (3600143801259f93200005000004e0000) dm-12 HP,HSV360
size=500G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=10 status=active
  |- 6:0:0:7  sdh  8:112  failed faulty running
  |- 6:0:1:7  sdt  65:48  failed faulty running
  |- 8:0:0:7  sdaf 65:240 active ready running
  `- 8:0:1:7  sdar 66:176 active ready running
disk11 (3600143801259f93200005000005e0000) dm-20 HP,HSV360
size=500G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=10 status=active
  |- 6:0:0:11 sdl  8:176  failed faulty running
  |- 6:0:1:11 sdx  65:112 failed faulty running
  |- 8:0:0:11 sdaj 66:48  active ready running
  `- 8:0:1:11 sdav 66:240 active ready running
disk6 (3600143801259f93200005000003a0000) dm-6 HP,HSV360
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=10 status=active
  |- 6:0:0:2  sdc  8:32   failed faulty running
  |- 6:0:1:2  sdo  8:224  failed faulty running
  |- 8:0:0:2  sdaa 65:160 active ready running
  `- 8:0:1:2  sdam 66:96  active ready running
disk10 (3600143801259f93200005000005a0000) dm-22 HP,HSV360
size=500G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=10 status=active
  |- 6:0:0:10 sdk  8:160  failed faulty running
  |- 6:0:1:10 sdw  65:96  failed faulty running
  |- 8:0:0:10 sdai 66:32  active ready running
  `- 8:0:1:10 sdau 66:224 active ready running
disk5 (3600143801259f93200005000003e0000) dm-4 HP,HSV360
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=10 status=active
  |- 6:0:0:3  sdd  8:48   failed faulty running
  |- 6:0:1:3  sdp  8:240  failed faulty running
  |- 8:0:0:3  sdab 65:176 active ready running
  `- 8:0:1:3  sdan 66:112 active ready running
disk4 (3600143801259f93200005000004a0000) dm-10 HP,HSV360
size=500G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=10 status=active
  |- 6:0:0:6  sdg  8:96   failed faulty running
  |- 6:0:1:6  sds  65:32  failed faulty running
  |- 8:0:0:6  sdae 65:224 active ready running
  `- 8:0:1:6  sdaq 66:160 active ready running
disk3 (3600143801259f9320000500000460000) dm-8 HP,HSV360
size=500G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=10 status=active
  |- 6:0:0:5  sdf  8:80   failed faulty running
  |- 6:0:1:5  sdr  65:16  failed faulty running
  |- 8:0:0:5  sdad 65:208 active ready running
  `- 8:0:1:5  sdap 66:144 active ready running
disk2 (3600143801259f9320000500000420000) dm-0 HP,HSV360
size=500G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=10 status=active
  |- 6:0:0:4  sde  8:64   failed faulty running
  |- 6:0:1:4  sdq  65:0   failed faulty running
  |- 8:0:0:4  sdac 65:192 active ready running
  `- 8:0:1:4  sdao 66:128 active ready running
disk1 (3600143801259f9320000500000360000) dm-2 HP,HSV360
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=10 status=active
  |- 6:0:0:1  sdb  8:16   failed faulty running
  |- 6:0:1:1  sdn  8:208  failed faulty running
  |- 8:0:0:1  sdz  65:144 active ready running
  `- 8:0:1:1  sdal 66:80  active ready running
[root@rac81 ~]#
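The multipaths{} block above is tedious to type by hand for a dozen LUNs. The steps can be sketched as a small generator that turns "alias wwid" binding lines (the format shown in step 4) into alias stanzas; sample data is inlined here, and in practice you would feed it /etc/multipath/bindings and skip any blacklisted local-disk WWIDs:

```shell
# Generate multipaths{} stanzas (alias diskN) from "alias wwid" binding lines.
# Sample data taken from the bindings file shown in step 4.
bindings='mpathb 3600143801259f9320000500000360000
mpathc 3600143801259f9320000500000420000'

stanzas=$(printf '%s\n' "$bindings" | awk 'NF == 2 && $1 !~ /^#/ {
    printf "multipath {\n    wwid %s\n    alias disk%d\n}\n", $2, ++n
}')
printf '%s\n' "$stanzas"
```

The output stanzas can be pasted into the multipaths{} section of /etc/multipath.conf on both nodes, which keeps the disk1..diskN aliases consistent across the cluster.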