Installation environment and virtual machine planning
Installation environment
Host operating system: Windows 10
Virtualization: VMware Workstation 15, running two CentOS 7.6 guests
Oracle Database software: Oracle 11gR2 (11.2.0.1.0)
Cluster software: Oracle Grid Infrastructure 11gR2 (11.2.0.1.0)
Shared storage: ASM
Oracle downloads: https://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linx8664soft-100572.html
Virtual machine planning
Create the shared disks with vmware-vdiskmanager:
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\asm\ocr.vmdk
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\asm\ocr2.vmdk
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\asm\votingdisk.vmdk
vmware-vdiskmanager.exe -c -s 20000Mb -a lsilogic -t 2 F:\asm\data.vmdk
vmware-vdiskmanager.exe -c -s 10000Mb -a lsilogic -t 2 F:\asm\backup.vmdk
This creates two 1 GB OCR disks, one 1 GB voting disk, one 20 GB data disk, and one 10 GB backup disk.
3. Attach the disks: select each virtual machine, edit its settings, and add the virtual disks created above (do this for both VMs).
Guest OS configuration (unless noted otherwise, perform every step on both nodes)
#firewall-cmd --state
#systemctl stop firewalld.service
#systemctl disable firewalld.service
#groupadd -g 1000 oinstall
#groupadd -g 1020 asmadmin
#groupadd -g 1021 asmdba
#groupadd -g 1022 asmoper
#groupadd -g 1031 dba
#groupadd -g 1032 oper
#useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
#useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle
#mkdir -p /u01/app/11.2.0/grid
#mkdir -p /u01/app/grid
#mkdir /u01/app/oracle
#chown -R grid:oinstall /u01
#chown oracle:oinstall /u01/app/oracle
#chmod -R 775 /u01/
a. Configure kernel parameters in /etc/sysctl.conf, then apply them:
#sysctl -p
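The kernel parameters that sysctl -p applies are not listed in the original text. The fragment below shows the commonly documented 11gR2 minimum values as a reference, not the author's actual settings; kernel.shmmax and kernel.shmall in particular should be sized to the VM's RAM:

```
# /etc/sysctl.conf additions -- typical documented 11gR2 minimums (assumed values)
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
```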
b. Configure shell limits for the oracle and grid users by appending the following to /etc/security/limits.conf:
#vi /etc/security/limits.conf
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
c. Configure login by appending the following to /etc/pam.d/login:
#vi /etc/pam.d/login
session required pam_limits.so
d. Change the distribution release string: comment out the existing content of /etc/centos-release and add redhat-7 (the Oracle installer refuses to install on CentOS 7 otherwise).
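As a sketch, the edited release file might look like the following; "redhat-7" is the string this guide uses, and keeping the original line as a comment makes it easy to restore later:

```
# /etc/centos-release -- original content commented out for the Oracle installer
# CentOS Linux release 7.6.1810 (Core)
redhat-7
```

(The CentOS release line shown in the comment is illustrative; yours may differ.)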
e. Install the required packages, using the installation media as a yum repository.
Set up the local repository first:
#mkdir -p /mnt/cdrom
#mount /dev/sr0 /mnt/cdrom/
#vi /etc/yum.repos.d/dvd.repo
[dvd]
name=dvd
baseurl=file:///mnt/cdrom
gpgcheck=0
enabled=1
#yum clean all
#yum makecache
#yum install gcc gcc-c++ glibc glibc-devel ksh libgcc libstdc++ libstdc++-devel* make sysstat
4. Configure IPs, hostnames, and /etc/hosts (the hostnames were already set to rac1 and rac2 during OS installation and the IPs are assigned by the VM network, so only hosts needs editing). Append the following to /etc/hosts:
#vi /etc/hosts
192.168.149.129 rac1
192.168.149.201 rac1-vip
192.168.96.128 rac1-priv
192.168.149.130 rac2
192.168.149.202 rac2-vip
192.168.96.129 rac2-priv
192.168.149.100 scan-ip
5. Configure environment variables for the grid and oracle users
ORACLE_SID must be set according to the node:
#su - grid
$ vi .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1 # use this on rac1
export ORACLE_SID=+ASM2 # use this on rac2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
Note: ORACLE_UNQNAME is the database name (orcl here); when the database is created across multiple nodes, one instance is created per node, and ORACLE_SID is the name of the local instance.
#su - oracle
$vi .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=orcl1 # use this on rac1
export ORACLE_SID=orcl2 # use this on rac2
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
$ source .bash_profile
Environment variable changes are always appended at the end of the file; run source after saving so they take effect.
7. Configure SSH user equivalence for the oracle and grid users
#su - oracle
$mkdir ~/.ssh
$chmod 755 .ssh
$/usr/bin/ssh-keygen -t rsa
$/usr/bin/ssh-keygen -t dsa
The following steps run only on node 1 and gather all the key files into a single authorized_keys file:
$ssh rac1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ssh rac1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ cd ~/.ssh/
$ scp authorized_keys rac2:~/.ssh/    # copy the complete file to node 2
[oracle@rac2 .ssh]$ chmod 600 authorized_keys    # note: do this logged in on node 2
The following checks must be run on BOTH nodes, or the installation will fail later; each command should return a timestamp without prompting for a password:
$ssh rac1 date
$ssh rac2 date
$ssh rac1-priv date
$ssh rac2-priv date
Repeat the same procedure for the grid user.
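The four date checks above can also be wrapped in a small loop; this is a convenience sketch, not part of the original procedure. BatchMode makes a lingering password prompt fail immediately instead of hanging:

```shell
# Run `date` across every equivalence path; a password prompt or host-key
# question surfaces as an immediate failure rather than a hang.
check_equivalence() {
  for h in rac1 rac2 rac1-priv rac2-priv; do
    ssh -o BatchMode=yes "$h" date || { echo "equivalence FAILED for $h"; return 1; }
  done
  echo "equivalence OK"
}
```

Run check_equivalence as both oracle and grid on each node; every host should print a timestamp, followed by "equivalence OK".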
8. Configure raw devices
(1) Run fdisk -l to list the disks; the disks attached earlier appear as sdb through sdf.
(2) Partition the disks. Because they are shared, this only needs to be done on one node.
Partition them on node 1, using /dev/sdb as an example:
#fdisk /dev/sdb
The number of cylinders for this disk is set to 3824.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-3824, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-3824, default 3824):
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
After partitioning, the devices look like this:
#ls /dev/sd*
/dev/sda /dev/sda2 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sda1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
(3) Add the raw device mappings; note this must be done on both nodes:
#vi /etc/udev/rules.d/99-oracle-asmdevices.rules
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"
KERNEL=="raw[1]", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw[2]", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw[3]", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw[4]", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw[5]", MODE="0660", OWNER="grid", GROUP="asmadmin"
(4) Start the raw devices on both nodes:
#start_udev
(5) Verify the raw devices on both nodes:
#raw -qa
/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2: bound to major 8, minor 33
/dev/raw/raw3: bound to major 8, minor 49
/dev/raw/raw4: bound to major 8, minor 65
/dev/raw/raw5: bound to major 8, minor 81
9. Mount the installation software directory
Here the Windows host shares a folder containing the unpacked archives, and the VMs mount it over CIFS. The username, password, and paths below are examples only:
mkdir -p /home/grid/db
mount -t cifs -o username=share,password=123456 //192.168.248.1/DB /home/grid/db
mkdir -p /home/oracle/db
mount -t cifs -o username=share,password=123456 //192.168.248.1/DB /home/oracle/db
10. Install cvuqdisk, used by the environment checks (both nodes):
#cd /home/grid/db/rpm
#rpm -ivh cvuqdisk-1.0.7-1.rpm
11. Manually run the Cluster Verification Utility to validate the cluster prerequisites:
#su - grid
$cd /home/grid/db/
$./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
Note: the check reports failures for many 32-bit (i386/i686) packages. Ignoring these caused no problems, so they can be skipped, as can the NTP and pdksh errors. Both nodes, however, reported the i686 glibc as missing, so it had to be downloaded and installed separately:
glibc-2.17-260.el7_6.4.i686.rpm; installing it on its own fails with dependency errors:
error: Failed dependencies:
libfreebl3.so is needed by glibc-2.17-260.el7_6.4.i686
libfreebl3.so(NSSRAWHASH_3.12.3) is needed by glibc-2.17-260.el7_6.4.i686
That in turn requires nss-softokn-freebl-3.36.0-5.el7_5.i686.rpm, which also fails when installed on its own:
error: Failed dependencies:
libc.so.6 is needed by nss-softokn-freebl-3.36.0-5.el7_5.i686
libc.so.6(GLIBC_2.0) is needed by nss-softokn-freebl-3.36.0-5.el7_5.i686
libc.so.6(GLIBC_2.1) is needed by nss-softokn-freebl-3.36.0-5.el7_5.i686
libc.so.6(GLIBC_2.1.3) is needed by nss-softokn-freebl-3.36.0-5.el7_5.i686
libc.so.6(GLIBC_2.3) is needed by nss-softokn-freebl-3.36.0-5.el7_5.i686
libc.so.6(GLIBC_2.3.4) is needed by nss-softokn-freebl-3.36.0-5.el7_5.i686
libc.so.6(GLIBC_2.4) is needed by nss-softokn-freebl-3.36.0-5.el7_5.i686
libdl.so.2 is needed by nss-softokn-freebl-3.36.0-5.el7_5.i686
libdl.so.2(GLIBC_2.0) is needed by nss-softokn-freebl-3.36.0-5.el7_5.i686
libdl.so.2(GLIBC_2.1) is needed by nss-softokn-freebl-3.36.0-5.el7_5.i686
Installing both packages in a single transaction resolves the circular dependency:
#rpm -ivh glibc-2.17-260.el7_6.4.i686.rpm nss-softokn-freebl-3.36.0-5.el7_5.i686.rpm
Preparing... ################################# [100%]
Updating / installing...
1:nss-softokn-freebl-3.36.0-5.el7_5################################# [ 50%]
2:glibc-2.17-260.el7_6.4 ################################# [100%]
Installing Grid Infrastructure (run on one node only; the installer replicates everything to the other node)
1. Installation procedure
(1) Start the installer (Xmanager is used here for the remote X display; for setup see https://www.cnblogs.com/songyuejie/p/6372534.html):
#xhost +
#su - grid
$export DISPLAY=<Windows host IP>:0.0
$cd /home/grid/db/
$./runInstaller
GUI workflow:
Skip software updates; choose to install and configure a cluster
Choose the custom (advanced) installation
Keep the default installation language (English)
Define the cluster name; set SCAN Name to the scan-ip entry from /etc/hosts; uncheck GNS
Only the first node (rac1) is listed; click "Add" and enter rac2 by analogy with rac1
Accept the default network interface assignment
Configure ASM: select the raw disks raw1, raw2, and raw3 prepared earlier, with External redundancy (no mirroring). As this is a test environment, a single device would also suffice. These devices hold the OCR registry and the voting disk.
Set passwords for the ASM instance: the sys user (SYSASM privilege) and the asmsnmp user (SYSDBA privilege); a single shared password is used here. Click OK
Do not configure Intelligent Platform Management Interface (IPMI)
Review the operating system group assignments for the ASM instance
Accept the default Grid software location and Oracle base directory
Accept the default inventory directory
The prerequisite check reports the same NTP and package errors found by the script earlier; choose Ignore All
Review the summary and click Install; the software is copied to rac2 automatically
When the Grid installation finishes, it prompts to run the scripts orainstRoot.sh and root.sh as root (run both to completion on rac1 before running them on any other node)
rac1:
#/u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
#/u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: (press Enter)
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
ASM created and started successfully.
Disk Group OCR created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 496abcfc4e214fc9bf85cf755e0cc8e2.
Successfully replaced voting disk group with +OCR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
STATE File Universal Id File Name Disk group
Run the same two scripts on rac2, then click the OK button in the dialog above and continue to the next step; ignore the remaining warnings to finish the installation.
2. Resource checks after installing Grid
#su - grid
$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Check the Clusterware resources:
$crs_stat -t -v
Name Type R/RA F/FT Target State Host
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE rac1
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE rac1
ora.OCR.dg ora....up.type 0/5 0/ ONLINE ONLINE rac1
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE rac1
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE rac1
ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE rac1
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE rac1
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE rac1
ora....C1.lsnr application 0/5 0/0 ONLINE ONLINE rac1
ora.rac1.gsd application 0/5 0/0 OFFLINE OFFLINE
ora.rac1.ons application 0/3 0/0 ONLINE ONLINE rac1
ora.rac1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE rac1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE rac2
ora....C2.lsnr application 0/5 0/0 ONLINE ONLINE rac2
ora.rac2.gsd application 0/5 0/0 OFFLINE OFFLINE
ora.rac2.ons application 0/3 0/0 ONLINE ONLINE rac2
ora.rac2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE rac2
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE rac1
Check the cluster nodes:
$ olsnodes -n
rac1 1
rac2 2
Check the Oracle TNS listener processes on both nodes:
$ ps -ef|grep lsnr|grep -v 'grep'|grep -v 'ocfs'|awk '{print$9}'
LISTENER_SCAN1
LISTENER
Confirm the Oracle ASM function for the Oracle Clusterware files:
Since the OCR and voting disk files were placed on Oracle ASM, run the following as the Grid Infrastructure installation owner to confirm that the installed Oracle ASM is running:
$ srvctl status asm -a
ASM is running on rac2,rac1
ASM is enabled.
3. Create ASM disk groups for data and fast recovery (rac1 only):
#su - grid
$ asmca
The OCR disk group configured during the Grid installation is already listed
Click Create and add a DATA disk group using raw disk raw4
Likewise create an FRA disk group using raw disk raw5
Review the disk group status
Review the ASM instances
Installing the Oracle Database software (RAC)
1. Installation procedure (rac1 only; for the X display setup, refer to the ASM section):
#su - oracle
$ cd db/database
$ ./runInstaller
In the GUI, skip software updates and choose to install the database software only
Keep "Oracle Real Application Clusters database installation" selected (the default) and click "Select All" to make sure both nodes are checked
Keep the default language (English)
Install the Enterprise Edition
Accept the ORACLE_BASE and ORACLE_HOME paths configured earlier (the defaults)
Accept the default operating system groups
Ignore all warnings from the prerequisite check
Review the RAC installation summary
Start the installation; the software is copied to the other node automatically
Problems during installation and their fixes:
/u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib/libnmectl.a(nmectlt.o): In function `nmectlt_genSudoProps':
nmectlt.c:(.text+0x76): undefined reference to `B_DestroyKeyObject'
nmectlt.c:(.text+0x7f): undefined reference to `B_DestroyKeyObject'
nmectlt.c:(.text+0x88): undefined reference to `B_DestroyKeyObject'
nmectlt.c:(.text+0x91): undefined reference to `B_DestroyKeyObject'
INFO: collect2: error: ld returned 1 exit status
INFO: /u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib/ins_emagent.mk:176: recipe for target '/u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib/emdctl' failed
make[1]: Leaving directory '/u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib'
INFO: /u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib/ins_emagent.mk:52: recipe for target 'emdctl' failed
INFO: make[1]: [/u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib/emdctl] Error 1
INFO: make: [emdctl] Error 2
INFO: End output from spawned process.
INFO: ----------------------------------
INFO: Exception thrown from action: make
Exception Name: MakefileException
Exception String: Error in invoking target 'agent nmhs' of makefile '/u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib/ins_emagent.mk'. See '/u01/app/oraInventory/logs/installActions2017-05-02_12-37-15PM.log' for details.
Exception Severity: 1
Fix:
Change the build rule for "emdctl": edit /u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib/ins_emagent.mk and replace
171 #===========================
172 # emdctl
173 #===========================
174
175 $(SYSMANBIN)emdctl:
176 $(MK_EMAGENT_NMECTL)
with
171 #===========================
172 # emdctl
173 #===========================
174
175 $(SYSMANBIN)emdctl:
176 $(MK_EMAGENT_NMECTL) -lnnz11
Then click Retry.
INFO: Exception thrown from action: make
Exception Name: MakefileException
Exception String: Error in invoking target 'irman ioracle' of makefile '/u01/app/oracle/product/11.2.0/db_1/rdbms/lib/ins_rdbms.mk'. See '/u01/app/oraInventory/logs/installActions2019-04-30_03-12-13PM.log' for details.
The fix:
cd $ORACLE_HOME/rdbms/admin
/usr/bin/make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk irman
Then click Retry.
Once installation completes, run the script as root on each node:
#/u01/app/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]: (press Enter)
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
An error during script execution:
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
ohasd failed to start: Inappropriate ioctl for device
ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.
This is a classic 11.2.0.1 bug. The workaround is simple: open two terminals. Run root.sh in the first; as soon as it prints "Adding daemon to inittab", repeatedly run the following in the second terminal:
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
until it no longer reports: /bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory
Before re-running root.sh, remember to deconfigure first: /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose
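The "repeatedly execute dd" step can be scripted so it does not have to be retyped by hand. A sketch, with an assumed timeout of 120 seconds; the path is the one from the error message above:

```shell
# Poll for the npohasd pipe that root.sh creates, then feed it to dd once
# so ohasd can start. Run in the second terminal while root.sh is running.
wait_and_feed_npohasd() {
  pipe=${1:-/var/tmp/.oracle/npohasd}
  i=0
  while [ "$i" -lt 120 ]; do        # assumed timeout of ~120 seconds
    if [ -e "$pipe" ]; then
      /bin/dd if="$pipe" of=/dev/null bs=1024 count=1
      return $?
    fi
    sleep 1
    i=$((i + 1))
  done
  echo "timed out waiting for $pipe" >&2
  return 1
}
```

Call it with no argument to use the default path: wait_and_feed_npohasd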
Reference:
http://blog.csdn.net/tianlesoftware/article/details/8207629
This completes the database software installation.
2. Create the cluster database
Run dbca as the oracle user on node rac1 to create the RAC database (GUI setup as before):
#su - oracle
$dbca
If dbca offers only a single-instance installation, the clusterware is probably not running; check the cluster resources as the grid user:
$ crsctl check crs
CRS-4639: Could not contact Oracle High Availability Services
$ crs_stat -t -v
CRS-0184: Cannot communicate with the CRS daemon.
If the stack is not up, start the service manually as root:
#/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
Choose "Create a Database"
Choose Custom Database (General Purpose also works)
Choose the Admin-Managed configuration type, enter the global database name orcl with instance SID prefix orcl, and select both nodes
Keep the defaults, configure OEM, and enable automatic maintenance tasks
Set a single password (oracle here) for the sys, system, dbsnmp, and sysman users
Use ASM storage with OMF (Oracle Managed Files) and choose the DATA disk group created earlier for the data area. Two problems can appear here:
a. No disks are listed; see https://www.cnblogs.com/cqubityj/p/6828946.html for solutions. In this case the fix was setting permissions 6751 on the oracle binary under the grid home.
b. The disk group is simply not mounted; to fix:
$ asmcmd
asmcmd> lsdg          -- check what is currently mounted
asmcmd> mount DATA    -- mount the DATA disk group
Set the ASM password to oracle_4U
(This step may fail with "ORA-01031: insufficient privileges".
Delete the file /u01/app/11.2.0/grid/dbs/orapw+ASM, regenerate it with the password oracle_4U, and copy it to the other node:
[grid@node1 bin]$ orapwd file=/u01/app/11.2.0/grid/dbs/orapw+ASM password=oracle_4U
[grid@node1 dbs]$ scp orapw+ASM node2:/u01/app/11.2.0/grid/dbs/    -- copy to the other node
[grid@node1 bin]$ sqlplus / as sysasm
SQL> create user asmsnmp identified by oracle_4U;
SQL> grant sysdba to asmsnmp;
Then click Retry.)
Specify the fast recovery area using the FRA disk group created earlier; do not enable archiving
Accept the default component selection
Choose the AL32UTF8 character set
Accept the default data storage settings
Start database creation, also checking the option to generate the database scripts
Review the database summary
Begin the installation
A problem that may appear during creation: ORA-12547: TNS:lost contact
Check whether $ORACLE_HOME/bin/oracle or $ORACLE_HOME/rdbms/lib/config.o has a file size of 0.
If so, relink the Oracle software:
[oracle@normal backup]$ relink all
writing relink log to: /u01/app/oracle/product/11.2.0/db_1/install/relink.log
This completes the database instance creation.
RAC maintenance
1. Check service status (gsd exists only for 9i compatibility and is no longer started from 10g on; its OFFLINE state can be ignored)
#su - grid
$crs_stat -t
Name Type Target State Host
ora.DATA.dg ora....up.type ONLINE ONLINE rac1
ora.FRA.dg ora....up.type ONLINE ONLINE rac1
ora....ER.lsnr ora....er.type ONLINE ONLINE rac1
ora....N1.lsnr ora....er.type ONLINE ONLINE rac1
ora.OCR.dg ora....up.type ONLINE ONLINE rac1
ora.asm ora.asm.type ONLINE ONLINE rac1
ora.cvu ora.cvu.type ONLINE ONLINE rac1
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type ONLINE ONLINE rac1
ora.ons ora.ons.type ONLINE ONLINE rac1
ora.orcl.db ora....se.type ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application OFFLINE OFFLINE
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip ora....t1.type ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application OFFLINE OFFLINE
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip ora....t1.type ONLINE ONLINE rac2
ora.scan1.vip ora....ip.type ONLINE ONLINE rac1
2. Check the cluster database status
$srvctl status database -d orcl
Instance orcl1 is running on node rac1
Instance orcl2 is running on node rac2
3. Check the CRS status
Locally:
$crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Cluster-wide:
$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
4. Check the cluster node configuration
$ olsnodes
rac1
rac2
$ olsnodes -n
rac1 1
rac2 2
$ olsnodes -n -i -s -t
rac1 1 rac1-vip Active Unpinned
rac2 2 rac2-vip Active Unpinned
5. Check the clusterware voting disk
$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
1. ONLINE 496abcfc4e214fc9bf85cf755e0cc8e2 (/dev/raw/raw1) [OCR]
Located 1 voting disk(s).
6. Check the cluster SCAN VIP configuration (sample output)
$ srvctl config scan
SCAN name: scan-ip, Network: 1/192.168.248.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /scan-ip/192.168.248.110
$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
7. Start and stop the cluster database
To start or stop the database across the whole cluster, as the grid user:
$ srvctl stop database -d orcl
$ srvctl status database -d orcl
$ srvctl start database -d orcl
To shut down the clusterware stack, as root:
# /u01/app/11.2.0/grid/bin/crsctl stop crs
Note that crsctl stop crs only stops the stack on the node where it runs, so execute it on every node (or use "crsctl stop cluster -all" to stop the CRS stack on all nodes from one session).
A problem seen at startup (host memory ran short, so the VMs were restarted with less memory):
Error: ORA-00845: MEMORY_TARGET not supported on this system
$ srvctl start database -d orcl
PRCR-1079 : Failed to start resource ora.orcl.db
ORA-00845: MEMORY_TARGET not supported on this system
CRS-2674: Start of 'ora.orcl.db' on 'rac1' failed
CRS-2632: There are no more servers to try to place resource 'ora.orcl.db' on that would satisfy its placement policy
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Process ID: 0
Session ID: 0 Serial number: 0
ORA-00845: MEMORY_TARGET not supported on this system
CRS-2674: Start of 'ora.orcl.db' on 'rac2' failed
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Process ID: 0
Session ID: 0 Serial number: 0
Fix:
Enlarge /dev/shm so that it exceeds MEMORY_TARGET. The parameter cannot be read while the instance is down, so size it against total memory instead:
#cat /etc/fstab | grep tmpfs
tmpfs /dev/shm tmpfs defaults,size=4G 0 0
#mount -o remount,size=4G /dev/shm
#df -h | grep shm
tmpfs 4.0G 0 4.0G 0% /dev/shm
Then restart the instance and adjust the memory parameters in the database:
$ sqlplus / as sysdba
SQL> show parameter target;
/dev/shm must be larger than MEMORY_TARGET; change the two parameters below, then restart the database:
SQL> alter system set memory_max_target=12288M scope=spfile;
SQL> alter system set memory_target=12288M scope=spfile;
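Before restarting, it is worth checking that /dev/shm can actually hold the new target. A small sketch (the shm_ok helper is not from the original; 12288 mirrors the MB value in the alter system commands above):

```shell
# Succeed if /dev/shm is at least the given number of megabytes.
shm_ok() {
  target_mb=$1
  shm_kb=$(df -k /dev/shm | awk 'NR==2 {print $2}')
  [ $((shm_kb / 1024)) -ge "$target_mb" ]
}

shm_ok 12288 && echo "/dev/shm can hold MEMORY_TARGET" \
             || echo "grow /dev/shm before restarting"
```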
EM management (as the oracle user, on one node only; browse to https://<node IP>:1158/em and log in with the system credentials set earlier)
$ emctl status dbconsole
$ emctl start dbconsole
$ emctl stop dbconsole
Connecting locally with sqlplus
Install the Oracle client and edit tnsnames.ora:
D:\develop\app\orcl\product\11.2.0\client_1\network\admin\tnsnames.ora
Add:
ORCL =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.248.110)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl)
)
)
The HOST here is the scan-ip.
sqlplus sys/oracle@ORCL as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Thu Apr 14 14:37:30 2016
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> select instance_name, status from v$instance;
INSTANCE_NAME STATUS
orcl1 OPEN
When a second sqlplus session is opened, the instance name turns out to be orcl2; the SCAN IP thus provides load balancing across instances.
If the connection fails with ORA-12545, either:
a. add both nodes' VIPs to the client's hosts file, or
b. change the database's local listener configuration:
SQL> show parameter local
NAME TYPE VALUE
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=scan-vip)(PORT=1521))
parallel_force_local boolean FALSE
Change the local_listener host to the VIP address for each instance:
SQL> alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST= 192.168.149.202)(PORT=1521))' sid='orcl2';
System altered.
SQL> alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST= 192.168.149.201)(PORT=1521))' sid='orcl1';
System altered.
Disaster drill (simulating single-node operation)
Reference:
https://blog.csdn.net/yujiajia111/article/details/82998601
References for this article:
https://blog.csdn.net/u014595668/article/details/51160783
https://www.cnblogs.com/lijiaman/p/7673795.html