OCM_Session7_10_Installing Clusterware (Part 6)

2014-11-24 16:36:45
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
[root@rac2 ~]#
--------------------------------------------------------------------
An error is reported here. Some documents fix it as described below, but I have not verified that approach. What I actually did was modify /u01/app/oracle/product/10.2.0/crs_1/bin/vipca and /u01/app/oracle/product/10.2.0/crs_1/bin/srvctl on node rac1, then run /u01/app/oracle/product/10.2.0/crs_1/root.sh on node rac2 again, as follows:
1. I have not verified this approach myself; some documents say the problem can be fixed this way.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps    -- vipca runs to configure the node applications
Error 0(Native: listNetInterfaces:[3])            -- local network interface error
[Error 0(Native: listNetInterfaces:[3])]          -- local network interface error
cd /u01/app/oracle/product/10.2.0/crs_1/bin
./oifcfg                                                      -- oifcfg is Oracle's network interface configuration tool; use it to check whether the interface configuration is correct
oifcfg iflist                                                 -- list the available network interfaces
oifcfg setif -global eth0/192.168.1.0:public                  -- register the global public interface
oifcfg setif -global eth1/172.168.1.0:cluster_interconnect    -- register the global private (interconnect) interface
oifcfg getif                                                  -- show the resulting configuration; once rac2 is configured, rac1 picks up the vipca configuration automatically, which oifcfg getif confirms
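As a quick sanity check after the setif commands above, the `oifcfg getif` output can be scanned for both interface roles. The sample output below is an assumption based on the typical four-column getif format (interface, subnet, scope, role), not captured from these nodes:

```shell
# Assumed sample of `oifcfg getif` output (four columns: interface, subnet,
# scope, role); on a real node, substitute: getif_output=$(oifcfg getif)
getif_output='eth0  192.168.1.0  global  public
eth1  172.168.1.0  global  cluster_interconnect'

# Both roles must be registered before re-running root.sh
echo "$getif_output" | grep -q 'public$' && echo "public interface OK"
echo "$getif_output" | grep -q 'cluster_interconnect$' && echo "interconnect OK"
```

If either line is missing, re-run the corresponding `oifcfg setif -global` command before retrying root.sh.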
-----------------------------------------------------------------------------------------

2. What I actually did: after modifying the two files on the first node, rac1, I ran the /u01/app/oracle/product/10.2.0/crs_1/root.sh script on rac2 again:
-------------------------------------------------------------------------------------------
On node rac1:
First file: [root@rac1 ~]# vi /u01/app/oracle/product/10.2.0/crs_1/bin/vipca, then search with /LD_ASSUME_KERNEL

#Remove this workaround when the bug 3937317 is fixed
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" ]
then
     LD_ASSUME_KERNEL=2.4.19
     export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL      <-- add this line to clear the environment variable
#End workaround

Second file: [root@rac1 ~]# vi /u01/app/oracle/product/10.2.0/crs_1/bin/srvctl

#Remove this workaround when the bug 3937317 is fixed
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL      <-- add this line to clear the environment variable
# Run ops control utility
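The effect of the added `unset` line can be checked without touching the real files. The sketch below reproduces the vipca stanza shown above in a temporary file (the temp-file simulation is mine, not part of the install), sources it, and confirms that LD_ASSUME_KERNEL no longer leaks into the environment:

```shell
# Copy the patched vipca stanza into a temp file and source it;
# with the trailing "unset", LD_ASSUME_KERNEL must end up undefined.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
#Remove this workaround when the bug 3937317 is fixed
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" ]
then
  LD_ASSUME_KERNEL=2.4.19
  export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL
#End workaround
EOF
. "$tmp"
rm -f "$tmp"
if [ -z "${LD_ASSUME_KERNEL:-}" ]; then
  echo "LD_ASSUME_KERNEL is not set: workaround applied"
fi
```

Without the `unset` line, an i686 or ia64 host would keep LD_ASSUME_KERNEL=2.4.19 exported, which is exactly what breaks vipca and srvctl on newer glibc versions.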
--------------------------------------------------------------------------------
3. Running the script on node rac2 again, no error appears:
[root@rac2 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)
[root@rac2 ~]#

----------------------------------------------------------------------------------------------------
13. After the script has been run on both nodes, click OK; the following configuration information appears.
14. Running the Oracle Cluster Verification Utility produces an error; at this point the virtual IPs must be configured on any one node.
15. As root, configure the virtual IPs. This step can be performed on either rac1 or rac2. Execute /u01/app/oracle/product/10.2.0/crs_1/bin/vipca; a graphical interface opens automatically. vipca can be used to create and configure the VIP, GSD, and ONS resources.


16. The welcome screen opens; click "Next".
17. The system automatically finds the public interface eth0; click "Next". [The virtual IP is bound to the public NIC, eth0.]
18. Fill in each node's VIP alias name, IP address, and subnet mask, then click "Next":

Node name    IP Alias Name             IP address       Subnet Mask
rac1         rac1-vip.localdomain      192.168.1.152    255.255.255.0
rac2         rac2-vip.localdomain      192.168.1.154    255.255.255.0
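For vipca to resolve the alias names in the table, both nodes usually carry matching entries in /etc/hosts. The fragment below is assembled from the table above (addresses and full names are from the table; the short aliases in the third column are an assumption about this setup):

```
# /etc/hosts entries for the VIPs (derived from the table above)
192.168.1.152   rac1-vip.localdomain   rac1-vip
192.168.1.154   rac2-vip.localdomain   rac2-vip
```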

19.