Oracle 10g RAC Deployment Guide (Part 4)
Logs: /var/log/messages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Manually uninstalling Clusterware: if the Clusterware installation failed, you can remove it with the steps below. Do NOT run these after a successful install — the scripts are destructive, so don't run them casually!
cd /u01/app/crs_1/install
./rootdelete.sh
./rootdeinstall.sh
rm -fr /etc/ora*
rm -fr /etc/init.d/*.crs
rm -fr /etc/init.d/*.crsd
rm -fr /etc/init.d/*.css
rm -fr /etc/init.d/*.cssd
su - oracle
rm -fr $ORACLE_BASE/*
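The removal steps above can be wrapped in a small script with a dry-run guard so the destructive commands can be previewed before execution. This is only a sketch: `CRS_HOME` defaults to the path used in this document, and the `DRY_RUN` variable is an assumption of this example, not part of the Oracle scripts.

```shell
#!/bin/sh
# Sketch of the manual Clusterware removal above, guarded by DRY_RUN.
# DRY_RUN defaults to 1 (preview only); set DRY_RUN=0 to actually execute.
CRS_HOME=${CRS_HOME:-/u01/app/crs_1}
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "would run: $*"    # preview the command instead of running it
  else
    "$@"
  fi
}

run sh "$CRS_HOME/install/rootdelete.sh"
run sh "$CRS_HOME/install/rootdeinstall.sh"
run rm -rf /etc/ora*
run rm -rf /etc/init.d/*.crs /etc/init.d/*.crsd \
           /etc/init.d/*.css /etc/init.d/*.cssd
```

Run it once as-is to review the command list, then rerun with `DRY_RUN=0` (as root) to perform the cleanup.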
Hosts:
  node1.up.com    192.168.0.7
  node2.up.com    192.168.0.8
  storage.up.com  192.168.0.123
Storage configuration (storage):
1. iSCSI target software
[root@storage ~]# rpm -qa | grep scsi
scsi-target-utils-0.0-5.20080917snap.el5
2. Prepare >5 GB of backing space
dd if=/dev/zero of=/tmp/disk.img bs=1G count=5
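Writing 5 GB of zeros takes a while; the same `dd` tool can create a sparse file of the same apparent size almost instantly (a sketch — whether sparse backing files suit your tgtd setup is for you to judge):

```shell
# Create a 5 GB sparse backing file without writing any data blocks:
# count=0 writes nothing; seek=5120 extends the file length to 5120 MiB.
dd if=/dev/zero of=/tmp/disk.img bs=1M count=0 seek=5120
```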
3. Export this space as an iSCSI target
/etc/tgt/targets.conf
----------------------------
<target iqn.2010-04-07.com.up.storage:sharedisk>
    backing-store /tmp/disk.img
    initiator-address 192.168.0.7
    initiator-address 192.168.0.8
</target>
-----------------------------
service tgtd start
Verify:
tgtadm --lld iscsi --op show --mode target
4. Enable at boot
chkconfig tgtd on
Node configuration (nodeX):
1. iSCSI client
[root@node1 cluster]# rpm -qa | grep iscsi
iscsi-initiator-utils-6.2.0.868-0.18.el5
[root@node1 cluster]# ls /var/lib/iscsi/
ifaces isns nodes send_targets slp static
If the target has been re-created on the storage side, be sure to clear these cached records first:
[root@node1 cluster]# rm -rf /var/lib/iscsi/*
!!! Note: iscsid must be running before discovery:
service iscsid start
[root@node1 cluster]# iscsiadm -m discovery -t sendtargets -p 192.168.0.123:3260
192.168.0.123:3260,1 iqn.2010-04-07.com.up.storage:sharedisk
2. Log in to the storage
/etc/init.d/iscsi start
3. udev rules (stable device naming)
Approach: inspect the device's attributes with
udevinfo -a -p /sys/block/sdX
Then, based on that output, write a rule:
[root@ rules.d]# cat /etc/udev/rules.d/55-iscsi.rules
SUBSYSTEM=="block",SYSFS{size}=="19551042",SYSFS{model}=="VIRTUAL-DISK",SYMLINK="iscsidisk"
Reload the rules:
start_udev
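The `SYSFS{size}` attribute the rule matches on is the device length in 512-byte sectors (what the kernel reports in /sys/block/*/size). A quick way to compute the expected value for a given backing-file size, using an example 5 GB figure:

```shell
# /sys/block/<dev>/size is in 512-byte sectors regardless of block size.
bytes=5368709120          # example: a 5 GB backing file
sectors=$((bytes / 512))
echo "$sectors"
```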
4. Import the storage
[root@node1 cluster]# /etc/init.d/iscsi start
iscsid (pid 5714 5713) is running...
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2010-04-07.com.up.storage:sharedisk, portal: 192.168.0.123,3260]
Login to [iface: default, target: iqn.2010-04-07.com.up.storage:sharedisk, portal: 192.168.0.123,3260]: successful
[  OK  ]
[root@node1 cluster]# ls /dev/iscsi/ -l
total 0
lrwxrwxrwx 1 root root 6 04-07 10:34 sharedisk -> ../sdb
5. Switch LVM to clustered mode
yum install lvm2-cluster
[root@node1 cluster]# lvmconf --enable-cluster
[root@node1 cluster]# ls /etc/lvm/lvm.conf
/etc/lvm/lvm.conf
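What `lvmconf --enable-cluster` does is set `locking_type = 3` (clustered locking via clvmd) in lvm.conf. A minimal check, sketched here against a sample fragment because the function name and sample path are this example's own inventions:

```shell
#!/bin/sh
# Succeeds if the given lvm.conf enables clustered locking
# (locking_type = 3, which `lvmconf --enable-cluster` sets).
check_cluster_locking() {
  grep -Eq '^[[:space:]]*locking_type[[:space:]]*=[[:space:]]*3' "$1"
}

# Demo against a sample fragment; on a real node pass /etc/lvm/lvm.conf.
printf 'global {\n    locking_type = 3\n}\n' > /tmp/lvm.conf.sample
if check_cluster_locking /tmp/lvm.conf.sample; then
  echo "cluster locking enabled"
fi
```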
6. Start clvmd (requires cman to be running first)
/etc/init.d/clvmd start
7. Configure your LVM as usual
pvcreate /dev/iscsidisk
vgcreate vg /dev/iscsidisk
lvcreate -L 4G -n lv01 vg
tar -xzvf ora.tar.gz    # extract the archive
alter system set LOCAL_LISTENER='(ADDRESS=(PROTOCOL=TCP)(HOST=)(PORT=1521))' scope=both sid='instance';
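In a RAC setup each instance's LOCAL_LISTENER should point at that node's own VIP, set per instance via `sid=`. A sketch for a two-node cluster — the VIP hostnames (`node1-vip`, `node2-vip`) and SIDs (`rac1`, `rac2`) are placeholders, substitute your environment's values:

```sql
-- Placeholder VIPs and SIDs; replace with your own before running.
alter system set LOCAL_LISTENER='(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))' scope=both sid='rac1';
alter system set LOCAL_LISTENER='(ADDRESS=(PROTOCOL=TCP)(HOST=node2-vip)(PORT=1521))' scope=both sid='rac2';
```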