Building and Installing Oracle 12c RAC on VMware Workstation 8 and CentOS 6.5

Author: HopToad
Location: Binjiang, Hangzhou
Email: appdevzw@163.com
WeChat official account: HopToad
Feedback from all readers is welcome.
December 2014


1 Prerequisites

1.1 Hardware

A home PC with at least 150 GB of free disk space and at least 8 GB of RAM.
Each virtual machine should (in theory) be given 4 GB of RAM.

1.2 Software versions

Virtualization software: VMware Workstation 8 or later
Operating system: CentOS 6.5 / RHEL 6.5 / OEL 6.5 or later
Database version: Oracle 12c
ASMLib download:
http://www.oracle.com/technetwork/server-storage/linux/asmlib/index-101839.html


2 Network plan

All machines at home normally get their IP addresses from the wireless router's DHCP server on the 192.168.1.* subnet. To make the virtual machines easier to reach later, they are given fixed addresses as follows.

Hostname  Public IP      Private IP   VIP            SCAN IP        Domain
slave1    192.168.1.201  10.10.0.201  192.168.1.211  192.168.1.220  hoptoad.com
slave2    192.168.1.202  10.10.0.202  192.168.1.212
slave3    192.168.1.203  10.10.0.203  192.168.1.213



3 Creating the virtual machines

Virtual machine configuration: 2 CPUs x 2 cores
Memory: 2 GB
Disk: 30 GB
NICs: 2 (one on VMnet0, one bridged)
Then install the operating system.

3.1 Cloning the virtual machine

Once the operating system is installed on one virtual machine, the others can be cloned from it directly.

3.1.1 Fix the udev rules file

After cloning, edit /etc/udev/rules.d/70-persistent-net.rules:
delete the line
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:42:b0:b9", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
and change NAME="eth1" in the second entry to NAME="eth0".
This way every cloned virtual machine's NIC appears as eth0.
Alternatively, delete the file entirely and reboot.
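The cleanup above can also be scripted. A minimal sed sketch (it reads the rules file on stdin and writes the fixed version on stdout; redirect over the real file yourself; the file name fix_rules.sh is illustrative):

```shell
#!/bin/sh
# Sketch: fix 70-persistent-net.rules after cloning.
# Drops the stale entry that still claims eth0 (the old MAC),
# then renames the cloned NIC's eth1 entry to eth0.
# Usage: sh fix_rules.sh < 70-persistent-net.rules > fixed.rules
sed -e '/NAME="eth0"/d' -e 's/NAME="eth1"/NAME="eth0"/'
```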

3.1.2 Change the hostname

Edit HOSTNAME=rac1 in /etc/sysconfig/network,
so that no two machines share the same hostname.

4 Shared storage

4.1 Add disks to one VM

Right-click any one of the virtual machines -> Settings… -> Add… -> Hard Disk -> Next -> Create a new virtual disk -> tick Independent -> Next -> select SCSI -> Next -> set the size to 10 GB -> Next -> select Store virtual disk as a single file -> set the file name -> set the file path -> Advanced… -> select virtual device node SCSI 1:0 -> OK.
In the same way add two more 1 GB disks on virtual device nodes SCSI 1:1 and SCSI 1:2.

4.2 Enable disk sharing

Locate the directory holding the virtual machine image (here it is on drive D:).
Open the file with the .vmx suffix and add the following variables. (Do this with the VM powered off.)
disk.locking = "FALSE"
disk.EnableUUID = "true"
diskLib.dataCacheMaxSize = "0"
scsi1.sharedBus = "virtual"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
scsi1:0.deviceType = "disk"
scsi1:1.deviceType = "disk"
scsi1:2.deviceType = "disk"

Note: these variables must be set for every virtual machine.
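Adding these settings by hand to every .vmx file is error-prone. A helper sketch (the script name and the subset of keys in the here-document are illustrative; it skips keys the file already defines, and defaults to a /dev/null dry run when no path is given):

```shell
#!/bin/sh
# Sketch: append shared-disk settings to a .vmx file (first argument),
# skipping any key the file already defines. Run with the VM off.
VMX=${1:-/dev/null}
while IFS= read -r line; do
  key=${line%% =*}                       # e.g. disk.locking
  grep -q "^$key" "$VMX" || printf '%s\n' "$line" >> "$VMX"
done <<'EOF'
disk.locking = "FALSE"
disk.EnableUUID = "true"
diskLib.dataCacheMaxSize = "0"
scsi1.sharedBus = "virtual"
EOF
```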

4.3 Add the disks to the other VMs

Right-click each of the other virtual machines -> Settings… -> Add… -> Hard Disk -> Next -> Use an existing virtual disk -> Next -> select the disk created earlier -> select virtual device node SCSI 1:0 -> OK.
In the same way attach the two existing 1 GB disks on virtual device nodes SCSI 1:1 and SCSI 1:2.
Repeat for every other node's virtual machine.
After a reboot, every virtual machine can see the three shared disks.
Verify with the command fdisk -l.

4.4 If using the OS-level ASM configuration

The commands are:
systemctl enable oracleasm.service (RHEL 7 and later)
/etc/init.d/oracleasm configure, or /etc/init.d/oracleasm configure -i
Configure the grid user and the asmdba group.
Configure the ASM disks (errors are logged under /var/log/oracleasm).
Note: remember to disable SELinux.
If a disk has no UUID, format it first, e.g.: mkfs.ext4 /dev/sdb


5 System preparation

5.1 Set up a YUM repository

5.1.1 Copy the operating system ISO to each virtual machine

5.1.2 Edit /etc/yum.repos.d/rhel-source.repo

As follows:
[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=file:///mnt
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

5.1.3 Mount the ISO

[root@slave1 ~]# mount -o loop Red\ Hat\ Enterprise\ 6.5\ x86_64.iso /mnt

5.1.4 Configure the YUM repository on every virtual machine the same way

Once done, the required RPM packages can be installed from the local YUM repository.


5.2 Unpack the database software

Run the unzip command:
[root@slave1 ~]# unzip linuxamd64_12c_database_1of2.zip
Unpack all the other archives the same way.


5.3 Install the required packages

5.3.1 Dependencies

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-2.5-24 (32 bit)
glibc-common-2.5
glibc-devel-2.5
glibc-devel-2.5 (32 bit)
libaio-0.3.106
libaio-0.3.106 (32 bit)
libaio-devel-0.3.106
libaio-devel-0.3.106 (32 bit)
libgcc-4.1.2
libgcc-4.1.2 (32 bit)
libstdc++-4.1.2
libstdc++-4.1.2 (32 bit)
libstdc++-devel 4.1.2
make-3.81
sysstat-7.0.2
unixODBC-2.2.11 (32 bit) or later
unixODBC-devel-2.2.11 (64 bit) or later
unixODBC-2.2.11 (64 bit) or later

5.3.2 Install the dependencies in one go

Since the local YUM repository is already configured, everything can be installed with a single command:
yum install binutils compat-libstdc++* compat-libstdc++*.i686* elfutils-libelf* elfutils-libelf*.i686* gcc* gcc-c++* glibc* glibc*.i686* libaio* libaio*.i686* libgcc* libgcc*.i686* libstdc++* libstdc++*.i686* make sysstat* unixODBC* unixODBC*.i686 oracleasm-support compat-libcap* ksh libXext*i686 libXtst*i686 libX11*i686 libXau*i686 libxcb*i686 libXi*i686 libXp.i686 libXp-devel.i686 libXt.i686 libXt-devel.i686 libXtst.i686 libXtst-devel.i686 make.x86_64 gcc.x86_64 libaio.x86_64 glibc-devel.i686 libgcc.i686 glibc-devel.x86_64 compat-libstdc++-33 glibc* gcc* make* compat-db* libstdc* libXp* libXtst* compat-libstdc++* *glibc* java

yum install libXp.x86_64 libXp.i686 elfutils-libelf.x86_64 elfutils-libelf-devel.x86_64 compat-db.i686 compat-db.x86_64 libstdc++-devel.i686 libstdc++-devel.x86_64 openmotif22.i686 openmotif22.x86_64 libaio-devel.i686 libaio-devel.x86_64 control-center.x86_64 make.x86_64 gcc.x86_64 sysstat.x86_64 libaio.i686 gcc-c++.x86_64 compat-libf2c-34.x86_64 compat-libf2c-34.i686 unixODBC.i686 unixODBC.x86_64 unixODBC-devel.i686 unixODBC-devel.x86_64 libgomp.x86_64 compat-libstdc++-33.x86_64 compat-libstdc++-33.i686 glibc.i686 glibc.x86_64 glibc-common.x86_64 glibc-devel.i686 glibc-devel.x86_64 glibc-headers.x86_64 libXmu.i686 libXmu.x86_64 libgcc.i686 libgcc.x86_64 kernel-headers.x86_64 libstdc++.i686 binutils.x86_64 libstdc++.x86_64 compat-libcap1.x86_64 compat-libcap1.i686 smartmontools iscsi-initiator-utils nfs-utils *ksh*

5.3.3 Other packages

Install the oracleasmlib package.
Start oracleasm:
#service oracleasm start
On RHEL 7 and later, start it with:
#systemctl start oracleasm
Install the cvuqdisk package (extracted from the grid archive).

6 Network configuration

Configure the addresses according to the network plan.
To configure the network, run:
#setup which opens a configuration dialog.
Alternatively, edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file directly.
(This applies to RHEL; SUSE differs slightly.)

6.1 Edit /etc/hosts

On every virtual machine, /etc/hosts should read as follows:
192.168.1.201 slave1.hoptoad.com slave1
192.168.1.202 slave2.hoptoad.com slave2
192.168.1.203 slave3.hoptoad.com slave3

#Private IP
10.10.0.201 slave1-priv.hoptoad.com slave1-priv
10.10.0.202 slave2-priv.hoptoad.com slave2-priv
10.10.0.203 slave3-priv.hoptoad.com slave3-priv

#Vip
192.168.1.211 slave1-vip.hoptoad.com slave1-vip
192.168.1.212 slave2-vip.hoptoad.com slave2-vip
192.168.1.213 slave3-vip.hoptoad.com slave3-vip

#scanip
192.168.1.220 rac-scan.hoptoad.com rac-scan
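Since the slaveN names and addresses follow a fixed pattern, the entries can also be generated. A sketch (node names and subnets are taken from the plan in section 2):

```shell
#!/bin/sh
# Sketch: generate the /etc/hosts entries for the three RAC nodes.
# Public 192.168.1.20N, private 10.10.0.20N, VIP 192.168.1.21N,
# plus the single SCAN address.
for i in 1 2 3; do
  printf '192.168.1.20%d slave%d.hoptoad.com slave%d\n' "$i" "$i" "$i"
done
echo '#Private IP'
for i in 1 2 3; do
  printf '10.10.0.20%d slave%d-priv.hoptoad.com slave%d-priv\n' "$i" "$i" "$i"
done
echo '#Vip'
for i in 1 2 3; do
  printf '192.168.1.21%d slave%d-vip.hoptoad.com slave%d-vip\n' "$i" "$i" "$i"
done
echo '#scanip'
echo '192.168.1.220 rac-scan.hoptoad.com rac-scan'
```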

7 Kernel parameters

Edit /etc/sysctl.conf and add the following:
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax=913516544
kernel.panic_on_oops=1
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
Run sysctl -p
to apply the changes.
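A mistyped key here only surfaces much later during the installer's checks, so it is worth verifying the file first. A check sketch (the script name is illustrative; pass the file to check as the first argument, or omit it for a dry run that lists every key):

```shell
#!/bin/sh
# Sketch: list any required kernel parameters absent from a
# sysctl.conf-style file (first argument; defaults to /dev/null,
# which simply reports every key as missing).
CONF=${1:-/dev/null}
for key in fs.aio-max-nr fs.file-max kernel.shmall kernel.shmmax \
           kernel.panic_on_oops kernel.shmmni kernel.sem \
           net.ipv4.ip_local_port_range net.core.rmem_default \
           net.core.rmem_max net.core.wmem_default net.core.wmem_max; do
  grep -q "^$key[ =]" "$CONF" || echo "missing: $key"
done
```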

Edit /etc/sysconfig/network
and add NOZEROCONF=yes


8 Environment variables

8.1 Create the directories and users

groupadd oinstall
groupadd dba
groupadd oper
groupadd asmadmin
groupadd asmdba
groupadd asmoper
useradd -g oinstall -G dba,asmdba,asmadmin,asmoper grid
useradd -g oinstall -G dba,oper,asmdba oracle

mkdir -p /u01/app/12.1.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
chown grid:oinstall /u01/app/12.1.0/grid
chown grid:oinstall /u01/app/grid
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
chown -R grid:oinstall /u01


Set the user passwords:
passwd grid
passwd oracle


8.2 Configure /etc/security/limits.conf

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240






8.3 Configure each user's .bash_profile

8.3.1 Node RAC1

# grid user
export TMP=/tmp;
export TMPDIR=$TMP;
export ORACLE_HOSTNAME=slave1.hoptoad.com;
export ORACLE_SID=+ASM1;
export ORACLE_BASE=/u01/app/grid;
export ORACLE_HOME=/u01/app/12.1.0/grid;
export NLS_DATE_FORMAT="yy-mm-dd HH24:MI:SS";
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib64;
# oracle user

export TMP=/tmp;
export TMPDIR=$TMP;
export ORACLE_HOSTNAME=slave1.hoptoad.com;
export ORACLE_BASE=/u01/app/oracle;
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0/db_1;
export ORACLE_UNQNAME=prod;
export ORACLE_SID=prod1;
export ORACLE_TERM=xterm;
export PATH=/usr/sbin:$PATH;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;
export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS";
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK;


8.3.2 Node RAC2

# grid user
export TMP=/tmp;
export TMPDIR=$TMP;
export ORACLE_HOSTNAME=slave2.hoptoad.com;
export ORACLE_SID=+ASM2;
export ORACLE_BASE=/u01/app/grid;
export ORACLE_HOME=/u01/app/12.1.0/grid;
export NLS_DATE_FORMAT="yy-mm-dd HH24:MI:SS";
export PATH=$ORACLE_HOME/bin:$PATH;
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib64;
# oracle user
export TMP=/tmp;
export TMPDIR=$TMP;
export ORACLE_HOSTNAME=slave2.hoptoad.com;
export ORACLE_BASE=/u01/app/oracle;
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0/db_1;
export ORACLE_UNQNAME=prod;
export ORACLE_SID=prod2;
export ORACLE_TERM=xterm;
export PATH=/usr/sbin:$PATH;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;
export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS";
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK;

8.3.3 Node RAC3

# grid user
export TMP=/tmp;
export TMPDIR=$TMP;
export ORACLE_HOSTNAME=slave3.hoptoad.com;
export ORACLE_SID=+ASM3;
export ORACLE_BASE=/u01/app/grid;
export ORACLE_HOME=/u01/app/12.1.0/grid;
export NLS_DATE_FORMAT="yy-mm-dd HH24:MI:SS";
export PATH=$ORACLE_HOME/bin:$PATH;
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib64;
# oracle user
export TMP=/tmp;
export TMPDIR=$TMP;
export ORACLE_HOSTNAME=slave3.hoptoad.com;
export ORACLE_BASE=/u01/app/oracle;
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0/db_1;
export ORACLE_UNQNAME=prod;
export ORACLE_SID=prod3;
export ORACLE_TERM=xterm;
export PATH=/usr/sbin:$PATH;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;
export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS";
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK;
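The three grid profiles above differ only in the hostname and the ASM SID suffix, so they can be generated per node. A sketch (the script name is illustrative; paths match those created in 8.1; the node number defaults to 1):

```shell
#!/bin/sh
# Sketch: print the grid user's .bash_profile for node N (1..3).
# Only ORACLE_HOSTNAME and ORACLE_SID change between nodes.
N=${1:-1}
cat <<EOF
export TMP=/tmp
export TMPDIR=\$TMP
export ORACLE_HOSTNAME=slave$N.hoptoad.com
export ORACLE_SID=+ASM$N
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/12.1.0/grid
export NLS_DATE_FORMAT="yy-mm-dd HH24:MI:SS"
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
export PATH=\$ORACLE_HOME/bin:\$PATH
export LD_LIBRARY_PATH=\$ORACLE_HOME/lib64
EOF
```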

9 Passwordless SSH for the grid and oracle users

9.1 Run ssh-keygen

As the grid user on every node, run:
/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa

9.2 Append the public keys to the local authorized_keys

/bin/cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

9.3 Merge authorized_keys into every other node's authorized_keys

Once that is done, test it.

# establish user equivalence
Run on every node:
$ssh slave1 date
$ssh slave1-priv date
$ssh slave2 date
$ssh slave2-priv date
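It is easy to miss a case in the full test matrix. A sketch that prints every check to run (hostnames follow the plan in section 2; run the printed commands as grid and again as oracle, and none should prompt for a password):

```shell
#!/bin/sh
# Sketch: list the SSH equivalence checks for all three nodes,
# over both the public and the private hostnames.
for host in slave1 slave2 slave3; do
  for suffix in "" "-priv"; do
    echo "ssh ${host}${suffix} date"
  done
done
```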



10 ASM configuration

This document uses udev for ASM disk management.

10.1 Create the rules file

Under /etc/udev/rules.d/ create the following file:
99-oracle-asmdevices.rules

10.2 Add the rule entries

Adjust these to your environment:

KERNEL=="sdb",PROGRAM=="/sbin/scsi_id -g -u /dev/sdb",RESULT=="36000c2900709abf31cbb677505e08064",NAME="asm_data",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdc",PROGRAM=="/sbin/scsi_id -g -u /dev/sdc",RESULT=="36000c29258759d60b942ff5a2cbb37e1",NAME="asm_orc",OWNER="grid",GROUP="asmadmin",MODE="0660"
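Rule lines like these are easy to get wrong by hand. A generator sketch (feed it "device uuid asmname" triples on stdin, where each uuid is the output of /sbin/scsi_id -g -u /dev/sdX on your host):

```shell
#!/bin/sh
# Sketch: emit one udev rule per shared disk.
# stdin: one "device uuid asmname" triple per line, e.g.
#   sdb 36000c2900709abf31cbb677505e08064 asm_data
while read -r dev uuid name; do
  printf 'KERNEL=="%s",PROGRAM=="/sbin/scsi_id -g -u /dev/%s",RESULT=="%s",NAME="%s",OWNER="grid",GROUP="asmadmin",MODE="0660"\n' \
    "$dev" "$dev" "$uuid" "$name"
done
```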

10.3 Restart udev

/sbin/start_udev


10.4 DNS configuration

10.4.1 Install the DNS package

#yum install bind
Enable it at boot:
#chkconfig named on
On RHEL 7 and later, run:
#systemctl enable named.service

10.4.2 Configure /etc/named.conf

As follows:
options {
directory "/var/named"; // Base directory for named
allow-transfer {"none";}; // Slave servers that can pull zone transfers; deny everyone
};
zone "." IN {
type hint;
file "named.ca";
};
include "/etc/named.rfc1912.zones";
zone "1.168.192.IN-ADDR.ARPA." IN { // Reverse zone.
type master;
notify no;
file "192.168.1.zone";
};
zone "0.10.10.IN-ADDR.ARPA." IN { // Reverse zone.
type master;
notify no;
file "10.10.0.zone";
};

zone "hoptoad.com." IN {
type master;
notify no;
file "hoptoad.com.zone";
};

10.4.3 Configure the forward and reverse zone files

Edit the /var/named/hoptoad.com.zone file for forward resolution,
as follows:
$TTL 1H ; Time to live
$ORIGIN hoptoad.com.
@ IN SOA rac1.hoptoad.com. root.rac1.hoptoad.com. (
2013011201 ; serial// (todays date + todays serial #)
3H ; refresh 3 hours
1H ; retry 1 hour
1W ; expire 1 week
1D ) ; minimum 24 hour
@ IN NS rac1
;
IN A 192.168.1.100
rac-scan IN A 192.168.1.120
rac1 IN A 192.168.1.100
rac2 IN A 192.168.1.101
rac3 IN A 192.168.1.102
rac1-priv IN A 10.10.0.100
rac2-priv IN A 10.10.0.101
rac3-priv IN A 10.10.0.102
rac1-vip IN A 192.168.1.110
rac2-vip IN A 192.168.1.111
rac3-vip IN A 192.168.1.112
;
$ORIGIN hoptoad.com.
hoptoad.com. IN NS hoptoad.com.
Edit the /var/named/192.168.1.zone file for reverse resolution:
$TTL 1H
@ IN SOA rac1.hoptoad.com. root.rac1.hoptoad.com. (
2013011201 ; serial //(todays date + todays serial #)
3H ; refresh 3 hours
1H ; retry 1 hour
1W ; expire 1 week
1D ) ; minimum 24 hour
;
NS rac1.hoptoad.com.
120 IN PTR rac-scan.hoptoad.com.
100 IN PTR rac1.hoptoad.com.
101 IN PTR rac2.hoptoad.com.
102 IN PTR rac3.hoptoad.com.
110 IN PTR rac1-vip.hoptoad.com.
111 IN PTR rac2-vip.hoptoad.com.
112 IN PTR rac3-vip.hoptoad.com.
Edit the /var/named/10.10.0.zone file for reverse resolution:
$TTL 1H
@ IN SOA rac1.hoptoad.com. root.rac1.hoptoad.com. (
2013011201 ; serial //(todays date + todays serial #)
3H ; refresh 3 hours
1H ; retry 1 hour
1W ; expire 1 week
1D ) ; minimum 24 hour
;
NS rac1.hoptoad.com.
100 IN PTR rac1-priv.hoptoad.com.
101 IN PTR rac2-priv.hoptoad.com.
102 IN PTR rac3-priv.hoptoad.com.
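The PTR records mirror the forward zone, so they can be generated too. A sketch for the private reverse zone (numbering follows the forward zone above: rac1=100, rac2=101, rac3=102):

```shell
#!/bin/sh
# Sketch: emit the PTR records for the 10.10.0.x reverse zone.
for i in 1 2 3; do
  printf '%d IN PTR rac%d-priv.hoptoad.com.\n' "$((99 + i))" "$i"
done
```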


10.4.4 Configure /etc/resolv.conf

options attempts:2
options timeout:1
search hoptoad.com
nameserver 192.168.1.100
Note: errors are logged in /var/log/messages.
Note: the fields of the NS lines must be separated by TAB characters, otherwise named reports an error.

10.4.5 Check DNS

Make sure every hostname resolves to its IP address:
[root@rac1 named]# nslookup rac1.hoptoad.com

11 Pre-installation checks

Install the cvuqdisk-1.0.9-1.rpm package from the grid archive (on every node).
Then run the following commands.

11.1 Check one

Depending on your node names, one of:
./runcluvfy.sh stage -post hwos -n slave1,slave2,slave3 -verbose
./runcluvfy.sh stage -post hwos -n rac1,rac2,rac3 -verbose
./runcluvfy.sh stage -post hwos -n rac1,rac2 -verbose
[grid@slave1 grid]$ ./runcluvfy.sh stage -post hwos -n slave1,slave2,slave3 -verbose

The output looks like the following (details vary by environment):
Performing post-checks for hardware and operating system setup

Checking node reachability...

Check: Node reachability from node "slave1"
Destination Node Reachable?
------------------------------------ ------------------------
slave1 yes
slave2 yes
slave3 yes
Result: Node reachability check passed from node "slave1"


Checking user equivalence...

Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
slave2 passed
slave1 passed
slave3 passed
Result: User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...
Node Name Status
------------------------------------ ------------------------
slave1 passed
slave2 passed
slave3 passed

Verification of the hosts config file successful


Interface information for node "slave1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.1.201 192.168.1.0 0.0.0.0 UNKNOWN 00:0C:29:57:A6:F2 1500
eth1 10.10.0.201 10.10.0.0 0.0.0.0 UNKNOWN 00:0C:29:57:A6:FC 1500


Interface information for node "slave2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.1.202 192.168.1.0 0.0.0.0 UNKNOWN 00:0C:29:72:2C:6E 1500
eth1 10.10.0.202 10.10.0.0 0.0.0.0 UNKNOWN 00:0C:29:72:2C:78 1500


Interface information for node "slave3"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.1.203 192.168.1.0 0.0.0.0 UNKNOWN 00:0C:29:40:A1:FC 1500
eth1 10.10.0.203 10.10.0.0 0.0.0.0 UNKNOWN 00:0C:29:40:A1:06 1500


Check: Node connectivity of subnet "192.168.1.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
slave1[192.168.1.201] slave2[192.168.1.202] yes
slave1[192.168.1.201] slave3[192.168.1.203] yes
slave2[192.168.1.202] slave3[192.168.1.203] yes
Result: Node connectivity passed for subnet "192.168.1.0" with node(s) slave1,slave2,slave3


Check: TCP connectivity of subnet "192.168.1.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
slave1:192.168.1.201 slave2:192.168.1.202 passed
slave1:192.168.1.201 slave3:192.168.1.203 passed
Result: TCP connectivity check passed for subnet "192.168.1.0"


Check: Node connectivity of subnet "10.10.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
slave1[10.10.0.201] slave2[10.10.0.202] yes
slave1[10.10.0.201] slave3[10.10.0.203] yes
slave2[10.10.0.202] slave3[10.10.0.203] yes
Result: Node connectivity passed for subnet "10.10.0.0" with node(s) slave1,slave2,slave3


Check: TCP connectivity of subnet "10.10.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
slave1:10.10.0.201 slave2:10.10.0.202 passed
slave1:10.10.0.201 slave3:10.10.0.203 passed
Result: TCP connectivity check passed for subnet "10.10.0.0"


Interfaces found on subnet "192.168.1.0" that are likely candidates for a private interconnect are:
slave1 eth0:192.168.1.201
slave2 eth0:192.168.1.202
slave3 eth0:192.168.1.203

Interfaces found on subnet "10.10.0.0" that are likely candidates for a private interconnect are:
slave1 eth1:10.10.0.201
slave2 eth1:10.10.0.202
slave3 eth1:10.10.0.203

WARNING:
Could not find a suitable set of interfaces for VIPs
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "10.10.0.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Time zone consistency
Result: Time zone consistency check passed

Checking shared storage accessibility...

WARNING:
slave3:Cannot verify the shared state for device /dev/sda1 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
slave1,slave2,slave3

WARNING:
slave3:Cannot verify the shared state for device /dev/sda2 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
slave1,slave2,slave3

WARNING:
slave3:Cannot verify the shared state for device /dev/sda3 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
slave1,slave2,slave3

Disk Sharing Nodes (3 in count)
------------------------------------ ------------------------
/dev/sdb slave1 slave2 slave3

Disk Sharing Nodes (3 in count)
------------------------------------ ------------------------
/dev/sdc slave1 slave2 slave3


Shared storage check was successful on nodes "slave1,slave2,slave3"

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Post-check for hardware and operating system setup was successful.
The run ends with "successful"; the check is complete.

11.2 Check two

Depending on your node names, one of:
./runcluvfy.sh stage -pre crsinst -n slave1,slave2,slave3
./runcluvfy.sh stage -pre crsinst -n rac1,rac2,rac3
./runcluvfy.sh stage -pre crsinst -n rac1,rac2

12 Grid installation

The software is installed on every node (using a Standard Cluster and standard ASM).

12.1 Installation command

#./runInstaller
Run the command to start the installation.
Operating on the primary node is enough; the installer automatically copies the software to the other nodes.

12.2 Select Skip software updates

Next

12.3 Install and configure Grid Infrastructure for a Cluster

Next

12.4 Configure a standard cluster

Next

12.5 Advanced installation

Next

12.6 Select English

Next

12.7 Configure the Cluster Name, SCAN Name, and SCAN port

Untick the GNS configuration
Next

12.8 Add the other nodes

Next (passwordless SSH access is validated here)

12.9 Confirm the public and private network interfaces

Next (public, private)

12.10 Create the GI Management Repository

Yes (a 12c feature: a small database stored in the same location as the OCR and voting files)
Next

12.11 Select Use Oracle ASM for storage

Next

12.12 Select the ASM disks

Adjust Change discovery path… so that the ASM disks are found. (Make sure their capacity is sufficient.)

12.13 Set the SYSASM password

Set the password.
Next

12.14 Configure IPMI

Leave it unconfigured.
Next

12.15 Assign the OSASM, OSDBA for ASM, and OSOPER for ASM groups

These are asmadmin, asmdba, and asmoper respectively.
Next

12.16 Set the installation path and Inventory path

Next

12.17 Leave automatic script execution unticked

Next (the scripts will be run manually)

12.18 Install

Click Install and wait for the installation to complete.


12.19 Run the root scripts manually

Be sure to run the scripts in order.




12.20 Post-installation checks

Check the cluster status:
[grid@rac02 ~]$ crsctl check cluster
Status of all Oracle instances (database status):
[grid@rac02 ~]$ srvctl status database -d racdb
Check a single instance:
[grid@rac02 ~]$ srvctl status instance -d racdb -i racdb1
Node application status:
[grid@rac02 ~]$ srvctl status nodeapps
List all configured databases:
[grid@rac02 ~]$ srvctl config database
Database configuration:
[grid@rac02 ~]$ srvctl config database -d racdb -a
ASM status and configuration:
[grid@rac02 ~]$ srvctl status asm
TNS listener status and configuration:
[grid@rac02 ~]$ srvctl status listener
SCAN status and configuration:
[grid@rac02 ~]$ srvctl status scan
VIP status and configuration for a node:
[grid@rac02 ~]$ srvctl status vip -n rac01
Node application configuration (VIP, GSD, ONS, listener):
[grid@rac02 ~]$ srvctl config nodeapps -a -g -s -l
Verify clock synchronization across all cluster nodes:
[grid@rac02 ~]$ cluvfy comp clocksync -verbose
The following operations must be run as root.
Stop the Oracle Clusterware stack on the local server:
[root@rac01 ~]# /u01/app/12.1.0/grid/bin/crsctl stop cluster
Note: with the -all option the Oracle Clusterware stack is started or stopped on every server in the cluster, e.g.:
[root@rac02 ~]# /u01/app/12.1.0/grid/bin/crsctl start cluster -all
Stop the Oracle Clusterware stack on rac01 and rac02:
[root@rac02 ~]# /u01/app/12.1.0/grid/bin/crsctl stop cluster -all
Start the Oracle Clusterware stack on the local server:
[root@rac01 ~]# /u01/app/12.1.0/grid/bin/crsctl start cluster
Note: the -all option starts it on every server in the cluster.
Stop and start all instances with SRVCTL:
[oracle@rac01 ~]$ srvctl stop database -d racdb
[oracle@rac01 ~]$ srvctl start database -d racdb



13 Database installation

13.1 Run ./runInstaller

Untick My Oracle Support updates
Next

13.2 Skip software updates

Next

13.3 Install database software only

Next

13.4 Oracle RAC installation

Next



14 Create the database

Create the database with DBCA.
The step-by-step details are omitted here.


15 Miscellaneous

15.1 VMware Workstation networking

VMware virtual NIC address plan:
Virtual NIC                        Subnet        Netmask
VMnet1 (host-only NIC)             192.168.10.0  255.255.255.0
VMnet2 (not installed by default)  192.168.20.0  255.255.255.0
VMnet3 (not installed by default)  192.168.30.0  255.255.255.0
VMnet4 (not installed by default)  192.168.40.0  255.255.255.0
VMnet5 (not installed by default)  192.168.50.0  255.255.255.0
VMnet6 (not installed by default)  192.168.60.0  255.255.255.0
VMnet7 (not installed by default)  192.168.70.0  255.255.255.0
VMnet8 (NAT NIC)                   192.168.80.0  255.255.255.0
These addresses are only for uniformity and convenience; readers can plan them to their own taste, and they can be changed at any time during the experiments.
When creating a virtual machine in VMware Workstation, the VM can include one or more NICs, and the virtual NIC chosen for each determines which virtual switch it connects to. By default VMware Workstation provides three virtual switches: VMnet0 (bridged networking), VMnet1 (host-only networking), and VMnet8 (NAT networking). Seven more, VMnet2 through VMnet7 plus VMnet9, can be added as needed.


15.2 Fix the arrow keys in SQL*Plus

15.2.1 Install readline

This is on the installation media; install it with yum:
#yum install readline*

15.2.2 Download rlwrap

Download the rlwrap software from
http://utopia.knoware.nl/~hlub/uck/rlwrap/
and install it:
#./configure
#make && make install
Done.

15.2.3 Configure the oracle user's profile

For convenience, add the following to the oracle user's .bash_profile:
stty erase ^h
alias sqlplus='rlwrap sqlplus'
Done.

15.3 ASM configuration

#oracleasm configure -i
#oracleasm init
#oracleasm createdisk orc /dev/sdc1


Check error: Asmlib installation and configuration verification
Answer: rerun oracleasm configure -i
and reconfigure ORACLEASM_UID and ORACLEASM_GID
to grid and asmadmin.
Every node must be configured.



ORA-27091: unable to queue I/O
Answer:
Insufficient permissions on the oracleasm disks.

15.4 Resize /dev/shm

Edit the /etc/fstab file:
tmpfs /dev/shm tmpfs defaults,size=2048M 0 0
Remount:
#mount -o remount /dev/shm
Check with df -h.


CLSRSC-507: The root script cannot proceed on this node rac2 because either the first-node operations have not completed on node rac1 or there was an error in obtaining the status of the first-node operations.
Answer: the root script failed on node 2.
By default the log file is under $GI_HOME/cfgtoollogs/crsconfig, with a name of the form rootcrs_<HOSTNAME>.log,
e.g. /u01/app/12.1.0/grid/cfgtoollogs/crsconfig/rootcrs_rac2_2014-12-29_05-13-14PM.log

15.5 Other errors

Error:
ERROR:
Reference data is not available for verifying prerequisites on this operating system distribution
Verification cannot proceed
Answer: resolved by switching the database version from 12.1.0.1 to 12.1.0.2.


Error in invoking target 'all_no_orcl' of makefile
Answer: resolved by switching the database version from 12.1.0.1 to 12.1.0.2.


PRVF-5600 : On node "rac1" The following lines in file "/etc/resolv.conf" could not be parsed as they are not in proper format:
Answer: resolved by setting up DNS.
