At a customer site, while installing 11.2.0.3 RAC on AIX 6.1, running root.sh on node 2 failed with the following errors:
[root@WM02][]#/u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
User grid has the required capabilities to run CSSD in realtime mode
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'wmngistrdb02'
CRS-2676: Start of 'ora.mdnsd' on 'wmngistrdb02' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'wmngistrdb02'
CRS-2676: Start of 'ora.gpnpd' on 'wmngistrdb02' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'wmngistrdb02'
CRS-2672: Attempting to start 'ora.gipcd' on 'wmngistrdb02'
CRS-2676: Start of 'ora.gipcd' on 'wmngistrdb02' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'wmngistrdb02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'wmngistrdb02'
CRS-2672: Attempting to start 'ora.diskmon' on 'wmngistrdb02'
CRS-2676: Start of 'ora.diskmon' on 'wmngistrdb02' succeeded
CRS-2676: Start of 'ora.cssd' on 'wmngistrdb02' succeeded

ASM created and started successfully.

Disk Group CRSDG created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
Successful addition of voting disk 7ab4753e48534f26bfb19f08586ae0d5.
Successful addition of voting disk 89db32773c184fe8bfdf4ac84fabc8f1.
Successful addition of voting disk 14ad0504e7b84f55bf956a7e63107091.
Successfully replaced voting disk group with +CRSDG.
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name              Disk group
--  -----    -----------------                ---------              ---------
 1. ONLINE   7ab4753e48534f26bfb19f08586ae0d5 (/dev/rhdiskpower10)   [CRSDG]
 2. ONLINE   89db32773c184fe8bfdf4ac84fabc8f1 (/dev/rhdiskpower8)    [CRSDG]
 3. ONLINE   14ad0504e7b84f55bf956a7e63107091 (/dev/rhdiskpower9)    [CRSDG]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'wmngistrdb02'
CRS-2676: Start of 'ora.asm' on 'wmngistrdb02' succeeded
CRS-2672: Attempting to start 'ora.CRSDG.dg' on 'wmngistrdb02'
CRS-2676: Start of 'ora.CRSDG.dg' on 'wmngistrdb02' succeeded
PRCR-1079 : Failed to start resource ora.scan1.vip
CRS-5017: The resource action "ora.scan1.vip start" encountered the following error:
CRS-5005: IP Address: 10.92.6.19 is already in use in the network
. For details refer to "(:CLSN00107:)" in "/u01/app/11.2.0/grid/log/wmngistrdb02/agent/crsd/orarootagent_root/orarootagent_root.log".
CRS-2674: Start of 'ora.scan1.vip' on 'wmngistrdb02' failed
CRS-2632: There are no more servers to try to place resource 'ora.scan1.vip' on that would satisfy its placement policy

start scan ... failed
FirstNode configuration failed at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 9196.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
Because there was some uncertainty about how the IPs had been allocated, we initially kept assuming the IP address was the cause, but it turned out not to be.
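One quick way to rule out a genuine address conflict (the commands below are illustrative; run them while clusterware is down on both nodes) is to check whether anything on the network actually answers at the SCAN address from the error:

# With CRS down on both nodes, the SCAN VIP should not respond;
# a reply means some other host really does hold 10.92.6.19.
ping -c 3 10.92.6.19
arp -a | grep 10.92.6.19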
In a normal installation, the OCR disk group is already created when root.sh runs on node 1; node 2 only needs to pick it up and use it. Reasoning from this, the installation failure pointed to a storage problem.
Verify whether the storage presents the same LUNs to node 1 and node 2:
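(The listings below are EMC PowerPath output; assuming PowerPath manages the multipath devices, something like the following, run as root on each node, produces them. Compare the Logical device ID of every hdiskpowerN pseudo device across the two nodes.)

# Run on both nodes and compare the output: the same pseudo name should
# map to the same Symmetrix logical device ID on both nodes.
powermt display dev=all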
Node 1:
Pseudo name=hdiskpower8
Symmetrix ID=000292604535
Logical device ID=008F
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --  -- Stats ---
###  HW Path             I/O Paths     Interf.   Mode    State   Q-IOs Errors
==============================================================================
   0 fscsi0              hdisk12       FA  7gA   active  alive       0      0
   1 fscsi2              hdisk20       FA  8gA   active  alive       0      0

Pseudo name=hdiskpower9
Symmetrix ID=000292604535
Logical device ID=0090
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --  -- Stats ---
###  HW Path             I/O Paths     Interf.   Mode    State   Q-IOs Errors
==============================================================================
   0 fscsi0              hdisk13       FA  7gA   active  alive       0      0
   1 fscsi2              hdisk21       FA  8gA   active  alive       0      0

Pseudo name=hdiskpower10
Symmetrix ID=000292604535
Logical device ID=0094
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --  -- Stats ---
###  HW Path             I/O Paths     Interf.   Mode    State   Q-IOs Errors
==============================================================================
   0 fscsi0              hdisk14       FA  7gA   active  alive       0      0
   1 fscsi2              hdisk22       FA  8gA   active  alive       0      0
Node 2:
Pseudo name=hdiskpower8
Symmetrix ID=000292604535
Logical device ID=01CB
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --  -- Stats ---
###  HW Path             I/O Paths     Interf.   Mode    State   Q-IOs Errors
==============================================================================
   1 fscsi2              hdisk24       FA  8gA   active  alive       0      1
   0 fscsi0              hdisk9        FA  7gA   active  alive       0      0

Pseudo name=hdiskpower9
Symmetrix ID=000292604535
Logical device ID=01CF
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --  -- Stats ---
###  HW Path             I/O Paths     Interf.   Mode    State   Q-IOs Errors
==============================================================================
   0 fscsi0              hdisk10       FA  7gA   active  alive       0      0
   1 fscsi2              hdisk25       FA  8gA   active  alive       0      1

Pseudo name=hdiskpower10
Symmetrix ID=000292604535
Logical device ID=01D7
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --  -- Stats ---
###  HW Path             I/O Paths     Interf.   Mode    State   Q-IOs Errors
==============================================================================
   0 fscsi0              hdisk11       FA  7gA   active  alive       0      0
   1 fscsi2              hdisk26       FA  8gA   active  alive       0      1
Sure enough, the LUN presentation differs between the two nodes: the same pseudo devices hdiskpower8/9/10 map to Symmetrix logical devices 008F/0090/0094 on node 1 but 01CB/01CF/01D7 on node 2. At this point the problem is clear: the inconsistent LUN mapping is what caused root.sh on node 2 to fail with CRS-5005 and CRS-2632.
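Once the storage team presents identical LUNs to both nodes, a common recovery path (a sketch only; verify the exact procedure against Oracle's documentation for your version) is to deconfigure the failed node and rerun root.sh, using the same perl invocation the installer itself uses:

# Deconfigure the partial clusterware setup on the failed node, then retry.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force
/u01/app/11.2.0/grid/root.sh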
I hit this same problem installing Oracle 11gR2 RAC on CentOS 6.6 under VMware.
What command do you use to verify that node 1 and node 2 see the same storage LUNs?
You never gave a solution.
The problem in this article is specific to EMC storage, which takes dedicated commands to inspect.
For VMware, you only need to check whether the disks are attached at the same points on both sides.
Has anyone actually done this experiment on VMware?
On VMware it basically comes down to the disk ordering between the two nodes, plus permission problems; I suggest you check both carefully, as sketched below.
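A minimal way to check this on Linux guests (device paths are illustrative; substitute your shared disks):

# If the same device path reports different SCSI IDs on the two nodes,
# the shared disks are attached in a different order.
/sbin/scsi_id --whitelisted --device=/dev/sdb
# Check ownership/permissions: ASM disks are typically grid:asmadmin, mode 660.
ls -l /dev/sdb*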
OP, I also ran into this problem installing 11g RAC on VMware. But when I changed node 2's SCAN IP to be different from node 1's, root.sh on node 2 ran without errors. In the end, though, each node can only see its own resources. So in my case it probably isn't a disk problem, right?