WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/oracle/10.2.0/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
You need to wait patiently for several minutes here (the 90-second init queue plus the 600-second daemon wait), after which the error above appears. It is caused by an Oracle bug in 10.2.0.1. The fix is to edit the vipca and srvctl files under $ORA_CRS_HOME/bin, adding unset LD_ASSUME_KERNEL at line 124 of vipca and line 168 of srvctl (note: not at the end of the file, or the change may not take effect). Save and exit, then rerun root.sh on node 2.
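The same edit can be applied from the shell instead of in an editor (a sketch, assuming GNU sed; sed's a command appends after the numbered line, so first confirm with grep that these line numbers still sit at the export LD_ASSUME_KERNEL block in your copies, since other patch levels may differ):

[root@rac2 bin]# grep -n LD_ASSUME_KERNEL vipca srvctl
[root@rac2 bin]# sed -i '124a unset LD_ASSUME_KERNEL' vipca
[root@rac2 bin]# sed -i '168a unset LD_ASSUME_KERNEL' srvctl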
[root@rac2 bin]# /u01/oracle/10.2.0/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)
[root@rac2 bin]# ./crs_stat -t
CRS-0202: No resources are registered.
At this point no resources are registered because the VIPs have not been configured yet. Run vipca on any node (provided that node's vipca has already been patched). If it reports the following error:
[oracle@rac1 bin]$ vipca
Error 0(Native: listNetInterfaces:[3])
  [Error 0(Native: listNetInterfaces:[3])]
then the network interfaces need to be configured first:
[oracle@rac1 bin]$ ./oifcfg iflist
eth0  192.168.1.0
eth1  10.0.0.0
[oracle@rac1 bin]$ ./oifcfg getif
[oracle@rac1 bin]$ ./oifcfg setif -global eth0/192.168.1.0:public
[oracle@rac1 bin]$ ./oifcfg setif -global eth1/10.10.10.1:cluster_interconnect
[oracle@rac1 bin]$ ./oifcfg getif
eth0  192.168.1.0  global  public
eth1  10.10.10.1   global  cluster_interconnect
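As an aside, per oifcfg's built-in help, setif takes the interface's subnet number (the value iflist prints), so an entry registered with the wrong value can be dropped with delif and set again. For example (a sketch, using the subnet reported by iflist for eth1 above):

[oracle@rac1 bin]$ ./oifcfg delif -global eth1
[oracle@rac1 bin]$ ./oifcfg setif -global eth1/10.0.0.0:cluster_interconnect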
Note that you need permission to open the graphical display, and vipca must be run as root rather than as the oracle user; otherwise it fails with insufficient privileges:
[oracle@rac1 bin]$ vipca
Insufficient privileges.
Insufficient privileges.
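A short sketch of preparing the display before launching vipca as root (this assumes a local X session owned by the oracle user; adjust DISPLAY if you connect through SSH or VNC):

[oracle@rac1 ~]$ xhost +local:        # allow local root processes to use the X display
[oracle@rac1 ~]$ su -
[root@rac1 ~]# export DISPLAY=:0.0
[root@rac1 ~]# /u01/oracle/10.2.0/crs_1/bin/vipca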
The VIP Configuration Assistant (an OUI-style window) then opens and VIP configuration begins; after you enter each node's VIP alias, the VIP IP address is filled in automatically (steps omitted here). After vipca finishes, exit and run crs_stat again: all of the resources are now registered with CRS.
[root@rac1 bin]# ./crs_stat -t
Name          Type          Target    State     Host
------------------------------------------------------------
ora.rac1.gsd  application   ONLINE    ONLINE    rac1
ora.rac1.ons  application   ONLINE    ONLINE    rac1
ora.rac1.vip  application   ONLINE    ONLINE    rac1
ora.rac2.gsd  application   ONLINE    ONLINE    rac2
ora.rac2.ons  application   ONLINE    ONLINE    rac2
ora.rac2.vip  application   ONLINE    ONLINE    rac2
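Since the copy in ORA_CRS_HOME/bin was patched earlier, srvctl now works as well and can cross-check the same resources per node (srvctl status nodeapps is the standard 10.2 syntax; it reports the state of the VIP, GSD, ONS, and listener on the given node):

[root@rac1 bin]# ./srvctl status nodeapps -n rac1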
One last small detail deserves attention: there is also a srvctl command under ORACLE_HOME/bin, and it needs the same unset LD_ASSUME_KERNEL fix, otherwise it fails with the same shared-library loading error as the srvctl in ORA_CRS_HOME/bin. If you would rather not edit it, that works too: adjust the order of ORA_CRS_HOME and ORACLE_HOME in the PATH environment variable. When a command is run, the shell searches the PATH entries from first to last, so putting ORA_CRS_HOME/bin at the front means the srvctl under ORACLE_HOME/bin is never reached. The setting looks like this:
export PATH=$ORA_CRS_HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/jdk/bin:$PATH
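To confirm which copy the shell now resolves, a quick check with which (the path shown assumes ORA_CRS_HOME=/u01/oracle/10.2.0/crs_1, as elsewhere in this installation):

[oracle@rac1 ~]$ which srvctl
/u01/oracle/10.2.0/crs_1/bin/srvctl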
Either method (the second, adjusting PATH, is the recommended one) solves the problem of the oracle user being unable to run srvctl and crsc