This article describes how to rescue an Oracle 12cR2 cluster by deleting a failed node and adding it back.
Database version:
[oracle@jsbp242305 ~]$ sqlplus -V
SQL*Plus: Release 12.2.0.1.0 Production
12.2 currently has a bug: changing the SYS user's password in SQL*Plus hangs. The internal behavior changed in 12.2, and the password change runs into trouble when it updates one of the base tables; from a hanganalyze taken earlier, I recall the password-change session waiting on a row cache lock. Oracle recommends changing the SYS password with the orapwd command, or applying one-off patch 16002385. That one-off patch in turn depends on the latest RU (27105253), and applying the RU failed outright: node 1 crashed and could not be recovered while node 2 survived, so the only way to save the cluster was to delete the node and add it back.
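For reference, changing the SYS password with orapwd looks roughly like the following. This is a minimal sketch, assuming a database named ORCL whose password file is stored in ASM; the names and paths are hypothetical, so take the actual file location from the srvctl output. With dbuniquename set, orapwd should keep the CRS database resource pointing at the regenerated file.
[oracle@jsbp242306 ~]$ srvctl config database -db ORCL | grep -i "password file"
[oracle@jsbp242306 ~]$ orapwd file='+DATA/ORCL/PASSWORD/pwdorcl' dbuniquename='ORCL' force=y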
12.2 is full of pitfalls. When patching, make absolutely sure to start from node 2!!
Personal experience only, for reference.
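For context, a GI RU is normally applied with opatchauto; a rough sketch is below. The staging directory under /tmp is hypothetical and this is not necessarily the exact command line used in my environment, it is only shown so the failing step is clear.
[grid@jsbp242306 ~]$ /oracle/app/12.2.0/grid/OPatch/opatch lspatches
[root@jsbp242306 ~]# /oracle/app/12.2.0/grid/OPatch/opatchauto apply /tmp/27105253 -oh /oracle/app/12.2.0/grid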
################## Delete the node ##############
IP information; the node to delete is node 1, jsbp242305:
[grid@jsbp242306 ~]$ more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
#10.10.129.41 jsbp242306
#Public IP
10.11.176.75 jsbp242305
10.11.176.76 jsbp242306
#VIP
10.11.176.77 jsbp242305-vip
10.11.176.78 jsbp242306-vip
#SCAN
10.11.176.79 jqhwccdb-scan
#Private IP
2.1.176.75 jsbp242305-priv
2.1.176.76 jsbp242306-priv
Check whether the nodes are unpinned:
[grid@jsbp242306 ~]$ olsnodes -s -t
jsbp242305 Inactive Unpinned
jsbp242306 Active Unpinned
Both nodes are unpinned, so there is no need to run the crsctl unpin css command.
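If either node had shown as pinned, it would first have to be unpinned as root from the Grid home, for example:
[root@jsbp242306 ~]# /oracle/app/12.2.0/grid/bin/crsctl unpin css -n jsbp242305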
The software home is local, not shared, so run the following on the node to be deleted:
[grid@jsbp242305 ~]$ /oracle/app/12.2.0/grid/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /oracle/app/oraInventory/logs/
############ ORACLE DECONFIG TOOL START ############
######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /oracle/app/12.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /oracle/app/grid
Checking for existence of central inventory location /oracle/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /oracle/app/12.2.0/grid
The following nodes are part of this cluster: jsbp242305
Checking for sufficient temp space availability on node(s) : 'jsbp242305'
## [END] Install check configuration ##
Traces log file: /oracle/app/oraInventory/logs//crsdc_2018-04-16_10-26-36-AM.log
Network Configuration check config START
Network de-configuration trace file location: /oracle/app/oraInventory/logs/netdc_check2018-04-16_10-26-44-AM.log
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /oracle/app/oraInventory/logs/asmcadc_check2018-04-16_10-26-44-AM.log
Database Check Configuration START
Database de-configuration trace file location: /oracle/app/oraInventory/logs/databasedc_check2018-04-16_10-26-44-AM.log
Oracle Grid Management database was not found in this Grid Infrastructure home
Database Check Configuration END
################ DECONFIG CHECK OPERATION END #########################
########### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /oracle/app/12.2.0/grid
The following nodes are part of this cluster: jsbp242305
The cluster node(s) on which the Oracle home deinstallation will be performed are:jsbp242305
Oracle Home selected for deinstall is: /oracle/app/12.2.0/grid
Inventory Location where the Oracle home registered is: /oracle/app/oraInventory
Option -local will not modify any ASM configuration.
Oracle Grid Management database was not found in this Grid Infrastructure home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/oracle/app/oraInventory/logs/deinstall_deconfig2018-04-16_10-26-34-AM.out'
Any error messages from this session will be written to: '/oracle/app/oraInventory/logs/deinstall_deconfig2018-04-16_10-26-34-AM.err'
############ DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /oracle/app/oraInventory/logs/databasedc_clean2018-04-16_10-27-15-AM.log
ASM de-configuration trace file location: /oracle/app/oraInventory/logs/asmcadc_clean2018-04-16_10-27-15-AM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /oracle/app/oraInventory/logs/netdc_clean2018-04-16_10-27-15-AM.log
Network Configuration clean config END
Run the following command as the root user or the administrator on node "jsbp242305".
/oracle/app/12.2.0/grid/crs/install/rootcrs.sh -force -deconfig -paramfile "/tmp/deinstall2018-04-16_10-26-05AM/response/deinstall_OraGI12Home1.rsp"
Press Enter after you finish running the above commands
<----------------------------------------
Run it as prompted:
[root@jsbp242305 ~]# /oracle/app/12.2.0/grid/crs/install/rootcrs.sh -force -deconfig -paramfile "/tmp/deinstall2018-04-16_10-26-05AM/response/deinstall_OraGI12Home1.rsp"
Using configuration parameter file: /tmp/deinstall2018-04-16_10-26-05AM/response/deinstall_OraGI12Home1.rsp
The log of current session can be found at:
/oracle/app/oraInventory/logs/crsdeconfig_jsbp242305_2018-04-16_10-27-36AM.log
PRCR-1070 : Failed to check if resource ora.net1.network is registered
CRS-0184 : Cannot communicate with the CRS daemon.
PRCR-1070 : Failed to check if resource ora.helper is registered
CRS-0184 : Cannot communicate with the CRS daemon.
PRCR-1070 : Failed to check if resource ora.ons is registered
CRS-0184 : Cannot communicate with the CRS daemon.
2018/04/16 10:27:48 CLSRSC-180: An error occurred while executing the command '/oracle/app/12.2.0/grid/bin/srvctl config nodeapps'
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'jsbp242305'
CRS-2679: Attempting to clean 'ora.gipcd' on 'jsbp242305'
CRS-2681: Clean of 'ora.gipcd' on 'jsbp242305' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'jsbp242305' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/04/16 10:28:11 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2018/04/16 10:28:42 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2018/04/16 10:28:47 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node
Go back to the deinstall session and press Enter:
################ DECONFIG CLEAN OPERATION END #########################
############## DECONFIG CLEAN OPERATION SUMMARY #######################
There is no Oracle Grid Management database to de-configure in this Grid Infrastructure home
Oracle Clusterware is stopped and successfully de-configured on node "jsbp242305"
Oracle Clusterware is stopped and de-configured successfully.
###################################################################
############# ORACLE DECONFIG TOOL END #############
Using properties file /tmp/deinstall2018-04-16_10-26-05AM/response/deinstall_2018-04-16_10-26-34-AM.rsp
Location of logs /oracle/app/oraInventory/logs/
############ ORACLE DEINSTALL TOOL START ############
############## DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/oracle/app/oraInventory/logs/deinstall_deconfig2018-04-16_10-26-34-AM.out'
Any error messages from this session will be written to: '/oracle/app/oraInventory/logs/deinstall_deconfig2018-04-16_10-26-34-AM.err'
################ DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to jsbp242305
Setting CLUSTER_NODES to jsbp242305
Setting CRS_HOME to true
Setting oracle.installer.invPtrLoc to /tmp/deinstall2018-04-16_10-26-05AM/oraInst.loc
Setting oracle.installer.local to true
## [END] Preparing for Deinstall ##
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/oracle/app/12.2.0/grid' from the central inventory on the local node : Done
Delete directory '/oracle/app/12.2.0/grid' on the local node : Done
Failed to delete the directory '/oracle/app/grid/log/diag/asmcmd/user_root/jsbp242305/trace'. Either user has no permission to delete or it is in use.
The Oracle Base directory '/oracle/app/grid' will not be removed on local node. The directory is not empty.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
## [END] Oracle install clean ##
################### DEINSTALL CLEAN OPERATION END #########################
################ DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/oracle/app/12.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/oracle/app/12.2.0/grid' on the local node.
Oracle Universal Installer cleanup was successful.
Review the permissions and contents of '/oracle/app/grid' on nodes(s) 'jsbp242305'.
If there are no Oracle home(s) associated with '/oracle/app/grid', manually delete '/oracle/app/grid' and its contents.
Oracle deinstall tool successfully cleaned up temporary directories.
###################################################################
############# ORACLE DEINSTALL TOOL END #############
Done.
On node 2 (the node that is kept), as root, run the following from Grid_home/bin:
[root@jsbp242306 ~]# /oracle/app/12.2.0/grid/bin/crsctl delete node -n jsbp242305
CRS-4661: Node jsbp242305 successfully deleted.
Verify that the node was deleted successfully:
[grid@jsbp242306 ~]$ cluvfy stage -post nodedel -n jsbp242305
Verifying Node Removal ...
Verifying CRS Integrity ...PASSED
Verifying Clusterware Version Consistency ...PASSED
Verifying Node Removal ...PASSED
Post-check for node removal was successful.
CVU operation performed: stage -post nodedel
Date: Apr 16, 2018 10:36:21 AM
CVU home: /oracle/app/12.2.0/grid/
User: grid
Check whether the deleted node's VIP resource still exists:
$ srvctl config vip -node jsbp242305
VIP resource status:
ora.jsbp242305.vip
1 ONLINE INTERMEDIATE jsbp242306 FAILED OVER,STABLE
If the VIP resource still exists, remove it:
$ srvctl stop vip -vip jsbp242305-vip
$ srvctl remove vip -vip jsbp242305-vip
Removing the VIP resource requires root privileges:
[grid@jsbp242306 addnode]$ srvctl remove vip -vip jsbp242305-vip
Please confirm that you intend to remove the VIPs jsbp242305-vip (y/[n]) y
PRKO-2381 : VIP jsbp242305-vip is not removed successfully:
PRCN-2018 : Current user grid is not a privileged user
[grid@jsbp242306 addnode]$ which srvctl
/oracle/app/12.2.0/grid/bin/srvctl
[grid@jsbp242306 addnode]$ logout
[root@jsbp242306 ~]# /oracle/app/12.2.0/grid/bin/srvctl remove vip -vip jsbp242305-vip
Please confirm that you intend to remove the VIPs jsbp242305-vip (y/[n]) y
Check the VIP resource status again:
ora.jsbp242305.vip
1 OFFLINE OFFLINE STABLE
Earlier I assumed that since the node would be added back anyway, the VIP did not need to be removed; the node addition later failed because of it, so it does have to be deleted.
########## Add the node ##################
Add the node with addnode.sh.
2. On an existing node, verify that the node to be added is consistent with the cluster:
$ cluvfy stage -pre nodeadd -n jsbp242305 [-fixup] [-verbose]
If the verification fails, the -fixup option can be added to fix up the cluster.
[grid@jsbp242306 ~]$ cluvfy stage -pre nodeadd -n jsbp242305
Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...PASSED
Verifying Free Space: jsbp242306:/usr ...PASSED
Verifying Free Space: jsbp242306:/var ...PASSED
Verifying Free Space: jsbp242306:/etc,jsbp242306:/sbin ...PASSED
Verifying Free Space: jsbp242306:/oracle/app/12.2.0/grid ...PASSED
Verifying Free Space: jsbp242306:/tmp ...PASSED
Verifying Free Space: jsbp242305:/usr ...PASSED
Verifying Free Space: jsbp242305:/var ...PASSED
Verifying Free Space: jsbp242305:/etc,jsbp242305:/sbin ...PASSED
Verifying Free Space: jsbp242305:/oracle/app/12.2.0/grid ...PASSED
Verifying Free Space: jsbp242305:/tmp ...PASSED
Verifying User Existence: oracle ...
Verifying Users With Same UID: 1101 ...PASSED
Verifying User Existence: oracle ...PASSED
Verifying User Existence: grid ...
Verifying Users With Same UID: 1100 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying User Existence: root ...
Verifying Users With Same UID: 0 ...PASSED
Verifying User Existence: root ...PASSED
Verifying Group Existence: asmadmin ...PASSED
Verifying Group Existence: asmoper ...PASSED
Verifying Group Existence: asmdba ...PASSED
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: oinstall ...PASSED
Verifying Group Membership: asmdba ...PASSED
Verifying Group Membership: asmadmin ...PASSED
Verifying Group Membership: asmoper ...PASSED
Verifying Run Level ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: ip_local_port_range ...PASSED
Verifying OS Kernel Parameter: rmem_default ...PASSED
Verifying OS Kernel Parameter: rmem_max ...PASSED
Verifying OS Kernel Parameter: wmem_default ...PASSED
Verifying OS Kernel Parameter: wmem_max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils-2.20.51.0.2 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: compat-libstdc++-33-3.2.3 (x86_64) ...PASSED
Verifying Package: libgcc-4.4.7 (x86_64) ...PASSED
Verifying Package: libstdc++-4.4.7 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.4.7 (x86_64) ...PASSED
Verifying Package: sysstat-9.0.4 ...PASSED
Verifying Package: gcc-4.4.7 ...PASSED
Verifying Package: gcc-c++-4.4.7 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.81 ...PASSED
Verifying Package: glibc-2.12 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.12 (x86_64) ...PASSED
Verifying Package: libaio-0.3.107 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.107 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-5.43-1 ...PASSED
Verifying Package: net-tools-1.60-110 ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED
Verifying Package: cvuqdisk-1.0.10-1 ...FAILED (PRVG-11550)
Verifying Node Addition ...
Verifying CRS Integrity ...PASSED
Verifying Clusterware Version Consistency ...PASSED
Verifying '/oracle/app/12.2.0/grid' ...PASSED
Verifying Node Addition ...PASSED
Verifying Node Connectivity ...
Verifying Hosts File ...PASSED
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying subnet mask consistency for subnet "2.1.176.0" ...PASSED
Verifying subnet mask consistency for subnet "10.11.176.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast check ...PASSED
Verifying ASM Integrity ...
Verifying Node Connectivity ...
Verifying Hosts File ...PASSED
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying subnet mask consistency for subnet "2.1.176.0" ...PASSED
Verifying subnet mask consistency for subnet "10.11.176.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying ASM Integrity ...PASSED
Verifying Device Checks for ASM ...
Verifying ASM device sharedness check ...
Verifying Package: cvuqdisk-1.0.10-1 ...FAILED (PRVG-11550)
Verifying ASM device sharedness check ...FAILED (PRVG-11550)
Verifying Access Control List check ...PASSED
Verifying Device Checks for ASM ...FAILED (PRVG-11550)
Verifying Database home availability ...PASSED
Verifying OCR Integrity ...PASSED
Verifying Time zone consistency ...PASSED
Verifying Network Time Protocol (NTP) ...
Verifying '/etc/ntp.conf' ...PASSED
Verifying '/var/run/ntpd.pid' ...PASSED
Verifying Daemon 'ntpd' ...PASSED
Verifying NTP daemon or service using UDP port 123 ...PASSED
Verifying NTP daemon is synchronized with at least one external time source ...PASSED
Verifying Network Time Protocol (NTP) ...PASSED
Verifying User Not In Group "root": grid ...PASSED
Verifying resolv.conf Integrity ...
Verifying (Linux) resolv.conf Integrity ...FAILED (PRVG-13159)
Verifying resolv.conf Integrity ...FAILED (PRVG-13159)
Verifying DNS/NIS name service ...PASSED
Verifying User Equivalence ...PASSED
Verifying /dev/shm mounted as temporary file system ...PASSED
Verifying /boot mount ...PASSED
Verifying zeroconf check ...PASSED
Pre-check for node addition was unsuccessful on all the nodes.
Failures were encountered during execution of CVU verification request "stage -pre nodeadd".
Verifying Package: cvuqdisk-1.0.10-1 ...FAILED
jsbp242305: PRVG-11550 : Package "cvuqdisk" is missing on node "jsbp242305"
Verifying Device Checks for ASM ...FAILED
Verifying ASM device sharedness check ...FAILED
Verifying Package: cvuqdisk-1.0.10-1 ...FAILED
jsbp242305: PRVG-11550 : Package "cvuqdisk" is missing on node "jsbp242305"
Verifying resolv.conf Integrity ...FAILED
jsbp242306: PRVG-13159 : On node "jsbp242306" the file "/etc/resolv.conf" could
not be parsed because the file is empty.
jsbp242306: Check for integrity of file "/etc/resolv.conf" failed
jsbp242305: PRVG-13159 : On node "jsbp242305" the file "/etc/resolv.conf" could
not be parsed because the file is empty.
jsbp242305: Check for integrity of file "/etc/resolv.conf" failed
Verifying (Linux) resolv.conf Integrity ...FAILED
jsbp242306: PRVG-13159 : On node "jsbp242306" the file "/etc/resolv.conf"
could not be parsed because the file is empty.
jsbp242305: PRVG-13159 : On node "jsbp242305" the file "/etc/resolv.conf"
could not be parsed because the file is empty.
CVU operation performed: stage -pre nodeadd
Date: Apr 16, 2018 10:52:03 AM
CVU home: /oracle/app/12.2.0/grid/
User: grid
From the output, the only failure that needs attention is the missing cvuqdisk package; install it manually:
[root@jsbp242305 grid]# rpm -ivh cvuqdisk-1.0.10-1.rpm
Preparing... ########################################### [100%]
1:cvuqdisk ########################################### [100%]
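As an optional sanity check, confirm the package is now registered before re-running CVU:
[root@jsbp242305 grid]# rpm -qa | grep cvuqdisk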
The /etc/resolv.conf failure can be ignored for now (it comes up again below).
3. To extend the Oracle Grid Infrastructure home to the new node, navigate to the Grid_home/addnode directory on an existing node and run the addnode.sh script as the user that installed Oracle Clusterware.
On the surviving node, run the following as the grid user from Grid_home/addnode:
cd /oracle/app/12.2.0/grid/addnode
./addnode.sh    -- interactive mode
./addnode.sh -silent "CLUSTER_NEW_NODES={jsbp242305}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={jsbp242305-vip}"    -- silent mode
[grid@jsbp242306 addnode]$ cd /oracle/app/12.2.0/grid/addnode
[grid@jsbp242306 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={jsbp242305}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={jsbp242305-vip}"
[FATAL] [INS-43045] CLUSTER_NEW_NODE_ROLES parameter was not specified.
CAUSE: The CLUSTER_NEW_NODE_ROLES parameter was not provided for performing addnode operation.
ACTION: Ensure that CLUSTER_NEW_NODE_ROLES parameter is passed. Refer to installation guide for more information on the syntax of passing CLUSTER_NEW_VIRTUAL_HOSTNAMES parameter.
It fails; CLUSTER_NEW_NODE_ROLES must be specified:
./addnode.sh -silent "CLUSTER_NEW_NODES={jsbp242305}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={jsbp242305-vip}" "CLUSTER_NEW_NODE_ROLES={HUB}"
Try again:
[grid@jsbp242306 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={jsbp242305}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={jsbp242305-vip}" "CLUSTER_NEW_NODE_ROLES={HUB}"
[FATAL] [INS-40912] Virtual host name: jsbp242305-vip is assigned to another system on the network.
CAUSE: One or more virtual host names appeared to be assigned to another system on the network.
ACTION: Ensure that the virtual host names assigned to each of the nodes in the cluster are not currently in use, and the IP addresses are registered to the domain name you want to use as the virtual host name.
This error is because the VIP was not successfully removed earlier. Remove the VIP and try again:
Still an error; the log suggests it is the /etc/resolv.conf issue that was ignored earlier, so rename resolv.conf on both nodes:
[root@jsbp242306 ~]# mv /etc/resolv.conf /etc/resolv.conf.bak
[root@jsbp242305 ~]# mv /etc/resolv.conf /etc/resolv.conf.bak
Run it again:
[grid@jsbp242306 ~]$ cd /oracle/app/12.2.0/grid/addnode
[grid@jsbp242306 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={jsbp242305}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={jsbp242305-vip}" "CLUSTER_NEW_NODE_ROLES={HUB}"
[WARNING] [INS-40111] The specified Oracle Base location is not empty on following nodes: [jsbp242305].
ACTION: Specify an empty location for Oracle Base.
Prepare Configuration in progress.
Prepare Configuration successful.
.................................................. 7% Done.
Copy Files to Remote Nodes in progress.
.................................................. 12% Done.
.................................................. 17% Done.
..............................
Copy Files to Remote Nodes successful.
You can find the log of this install session at:
/oracle/app/oraInventory/logs/addNodeActions2018-04-16_02-23-44-PM.log
Instantiate files in progress.
Instantiate files successful.
.................................................. 49% Done.
Saving cluster inventory in progress.
.................................................. 83% Done.
Saving cluster inventory successful.
The Cluster Node Addition of /oracle/app/12.2.0/grid was successful.
Please check '/oracle/app/12.2.0/grid/inventory/silentInstall2018-04-16_2-23-43-PM.log' for more details.
Setup Oracle Base in progress.
Setup Oracle Base successful.
.................................................. 90% Done.
Update Inventory in progress.
Update Inventory successful.
.................................................. 97% Done.
As a root user, execute the following script(s):
1. /oracle/app/12.2.0/grid/root.sh
Execute /oracle/app/12.2.0/grid/root.sh on the following nodes:
[jsbp242305]
The scripts can be executed in parallel on all the nodes.
.................................................. 100% Done.
Successfully Setup Software.
As prompted, run the following script as root on the newly added node:
/oracle/app/12.2.0/grid/root.sh
Change the permissions on $ORACLE_HOME/network/admin/samples:
[grid@jsbp242305 admin]$ chmod 750 samples
6. Run the Grid_home/root.sh script on the new node as root and run the subsequent script, as instructed.
As root, run Grid_home/root.sh as prompted.
Note:
If root.sh was already run in the step above, there is no need to run it again.
Check the newly added node again for problems:
$ cluvfy stage -post nodeadd -n jsbp242305 [-verbose]
Move the resolv.conf files back:
[root@jsbp242306 ~]# mv /etc/resolv.conf.bak /etc/resolv.conf
[root@jsbp242305 ~]# mv /etc/resolv.conf.bak /etc/resolv.conf
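With the node back in the cluster, the same checks used at the beginning of this article should now show jsbp242305 as Active and its resources ONLINE (a quick sanity check; output omitted):
[grid@jsbp242306 ~]$ olsnodes -s -t
[grid@jsbp242306 ~]$ crsctl stat res -t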
Note:
For an administrator-managed Oracle RAC database, you may need to use DBCA to add a database instance on the new node.
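A hedged sketch of that DBCA call in silent mode; the database name ORCL, instance name ORCL1 and the password placeholder are hypothetical, and the exact flag names should be confirmed with dbca -addInstance -help:
[oracle@jsbp242306 ~]$ dbca -silent -addInstance -gdbName ORCL -nodeName jsbp242305 -instanceName ORCL1 -sysDBAUserName sys -sysDBAPassword <sys_password>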
Appendix: 12.2 official documentation:
Deleting a node:
7.2.2 Deleting a Cluster Node on Linux and UNIX Systems
Delete a node from a cluster on Linux and UNIX systems.
Note:
You can remove the Oracle RAC database instance from the node before removing the node from the cluster but this step is not required. If you do not remove the instance, then the instance is still configured but never runs. Deleting a node from a cluster does not remove a node's configuration information from the cluster. The residual configuration information does not interfere with the operation of the cluster.
See Also:Oracle Real Application Clusters Administration and Deployment Guide for more information about deleting an Oracle RAC database instance
If you delete the last node of a cluster that is serviced by GNS, then you must delete the entries for that cluster from GNS.
If you have nodes in the cluster that are unpinned, then Oracle Clusterware ignores those nodes after a time and there is no need for you to remove them.
If one creates node-specific configuration for a node (such as disabling a service on a specific node, or adding the node to the candidate list for a server pool) that node-specific configuration is not removed when the node is deleted from the cluster. Such node-specific configuration must be removed manually.
Voting files are automatically backed up in OCR after any changes you make to the cluster.
When you want to delete a Leaf Node from an Oracle Flex Cluster, you need only complete steps 1 through 4 of this procedure.
To delete a node from a cluster:
Ensure that Grid_home correctly specifies the full directory path for the Oracle Clusterware home on each node, where Grid_home is the location of the installed Oracle Clusterware software.
Run the following command as either root or the user that installed Oracle Clusterware to determine whether the node you want to delete is active and whether it is pinned:
$ olsnodes -s -t
If the node is pinned, then run the crsctl unpin css command. Otherwise, proceed to the next step.
On the node that you are deleting, depending on whether you have a shared or local Oracle home, complete one of the following procedures as the user that installed Oracle Clusterware:
For a local home, deinstall the Oracle Clusterware home from the node that you want to delete, as follows, by running the following command, where Grid_home is the path defined for the Oracle Clusterware home:
$ Grid_home/deinstall/deinstall -local
Caution:
If you do not specify the -local flag, then the command removes the Oracle Grid Infrastructure home from every node in the cluster.
If you cut and paste the preceding command, then paste it into a text editor before pasting it to the command line to remove any formatting this document might contain.
If you have a shared home, then run the following commands in the following order on the node you want to delete.
Run the following command to deconfigure Oracle Clusterware:
$ Grid_home/crs/install/rootcrs.sh -deconfig -force
Run the following command from the Grid_home/oui/bin directory to detach the Grid home:
$ ./runInstaller -detachHome ORACLE_HOME=Grid_home -silent -local
Manually delete any configuration files, as prompted by the installation utility.
From any node that you are not deleting, run the following command from the Grid_home/bin directory as root to delete the node from the cluster:
# crsctl delete node -n node_to_be_deleted
Run the following CVU command to verify that the specified nodes have been successfully deleted from the cluster:
$ cluvfy stage -post nodedel -n node_list [-verbose]
If you remove a cluster node on which Oracle Clusterware is down, then determine whether the VIP for the deleted node still exists, as follows:
$ srvctl config vip -node deleted_node_name
If the VIP still exists, then delete it, as follows:
$ srvctl stop vip -node deleted_node_name
$ srvctl remove vip -vip deleted_vip_name
Adding a node:
There are three methods you can use to add a node to your cluster.
Using Rapid Home Provisioning to Add a Node
If you have a Rapid Home Provisioning Server, then you can use Rapid Home Provisioning to add a node to a cluster with one command, as shown in the following example:
$ rhpctl addnode gihome -client rhpclient -newnodes clientnode2:clientnode2-vip -root
The preceding example adds a node named clientnode2 with VIP clientnode2-vip to the Rapid Home Provisioning Client named rhpclient, using root credentials (login for the node you are adding).
Using Oracle Grid Infrastructure Installer to Add a Node
If you do not want to use Rapid Home Provisioning to add a node to the cluster, then you can use the Oracle Grid Infrastructure installer to accomplish the task.
To add a node to the cluster using the Oracle Grid Infrastructure installer
Run ./gridsetup.sh to start the installer.
On the Select Configuration Option page, select Add more nodes to the cluster.
On the Cluster Node Information page, click Add... to provide information for nodes you want to add.
When the verification process finishes on the Perform Prerequisite Checks page, check the summary and then click Install.
Using addnode.sh to Add Nodes
This procedure assumes that:
There is an existing cluster with two nodes named node1 and node2
You are adding a node named node3 using a virtual node name, node3-vip, that resolves to an IP address, if you are not using DHCP and Grid Naming Service (GNS)
You have successfully installed Oracle Clusterware on node1 and node2 in a local (non-shared) home, where Grid_home represents the successfully installed home
To add a node:
Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment. To perform the following procedure, Grid_home must identify your successfully installed Oracle Clusterware home.
See Also:
Oracle Grid Infrastructure Installation and Upgrade Guide for Oracle Clusterware installation instructions
Verify the integrity of the cluster and node3:
$ cluvfy stage -pre nodeadd -n node3 [-fixup] [-verbose]
You can specify the -fixup option to attempt to fix the cluster or node if the verification fails.
To extend the Oracle Grid Infrastructure home to the node3, navigate to the Grid_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle Clusterware.
To run addnode.sh in interactive mode, run addnode.sh from Grid_home/addnode.
You can also run addnode.sh in silent mode for both Oracle Clusterware standard clusters and Oracle Flex Clusters.
For an Oracle Clusterware standard cluster:
./addnode.sh -silent "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_ HOSTNAMES={node3-vip}"
If you are adding node3 to an Oracle Flex Cluster, then you can specify the node role on the command line, as follows:
./addnode.sh -silent "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_ HOSTNAMES={node3-vip}" "CLUSTER_NEW_NODE_ROLES={hub}"
Notes:
If you are adding node3 to an extended cluster, then you can specify the node role on the command line, as follows:
./addnode.sh -silent "CLUSTER_NEW_NODES={node3,node4}" "CLUSTER_NEW_NODE_SITES={site1,site2}"
Hub Nodes always have VIPs but Leaf Nodes may not. If you use the preceding syntax to add multiple nodes to the cluster, then you can use syntax similar to the following, where node3 is a Hub Node and node4 is a Leaf Node:
./addnode.sh -silent "CLUSTER_NEW_NODES={node3,node4}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}" "CLUSTER_NEW_NODE_ROLES={hub,leaf}"
When you are adding Leaf nodes, only, you do not need to use the CLUSTER_NEW_VIRTUAL_HOSTNAMES parameter. For example:
./addnode.sh -silent "CLUSTER_NEW_NODES={node3,node4}" "CLUSTER_NEW_NODE_ROLES={leaf,leaf}"
If prompted, then run the orainstRoot.sh script as root to populate the /etc/oraInst.loc file with the location of the central inventory. For example:
# /opt/oracle/oraInventory/orainstRoot.sh
If you have an Oracle RAC or Oracle RAC One Node database configured on the cluster and you have a local Oracle home, then do the following to extend the Oracle database home to node3:
If you have a shared Oracle home that is shared using Oracle Automatic Storage Management Cluster File System (Oracle ACFS), then do the following to extend the Oracle database home to node3:
If you have a shared Oracle home on a shared file system that is not Oracle ACFS, then you must first create a mount point for the Oracle RAC database home on the target node, mount and attach the Oracle RAC database home, and update the Oracle Inventory, as follows:
Note:
After running addnode.sh, ensure the Grid_home/network/admin/samples directory has permissions set to 750.
Run the srvctl config database -db db_name command on an existing node in the cluster to obtain the mount point information.
Run the following command as root on node3 to create the mount point:
# mkdir -p mount_point_path
Mount the file system that hosts the Oracle RAC database home.
Run the following command as the user that installed Oracle RAC from the Oracle_home/oui/bin directory on the node you are adding to add the Oracle RAC database home:
$ ./runInstaller -attachHome ORACLE_HOME="ORACLE_HOME" "CLUSTER_NODES={local_node_name}" LOCAL_NODE="node_name" ORACLE_HOME_NAME="home_name" -cfs
Navigate to the Oracle_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle RAC using the following syntax:
$ ./addnode.sh -noCopy "CLUSTER_NEW_NODES={node3}"
Note:
Use the -noCopy option because the Oracle home on the destination node is already fully populated with software.
Run the Grid_home/root.sh script on node3 as root, where Grid_home is the Oracle Grid Infrastructure home.
Run the following command as the user that installed Oracle RAC from the Oracle_home/oui/bin directory on the node you are adding to add the Oracle RAC database home:
$ ./runInstaller -attachHome ORACLE_HOME="ORACLE_HOME" "CLUSTER_NODES={node3}" LOCAL_NODE="node3" ORACLE_HOME_NAME="home_name" -cfs
Navigate to the Oracle_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle RAC using the following syntax:
$ ./addnode.sh -noCopy "CLUSTER_NEW_NODES={node3}"
Note:
Use the -noCopy option because the Oracle home on the destination node is already fully populated with software.
Navigate to the Oracle_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle RAC using the following syntax:
$ ./addnode.sh "CLUSTER_NEW_NODES={node3}"
Run the Oracle_home/root.sh script on node3 as root, where Oracle_home is the Oracle RAC home.
Run the Grid_home/root.sh script on the node3 as root and run the subsequent script, as instructed.
Note:
If you ran the root.sh script in the step 5, then you do not need to run it again.
If you have a policy-managed database, then you must ensure that the Oracle home is cloned to the new node before you run the root.sh script.
If you have any administrator-managed database instances configured on the nodes which are going to be added to the cluster, then you must extend the Oracle home to the new node before you run the root.sh script.
Alternatively, remove the administrator-managed database instances using the srvctl remove instance command.
Start the Oracle ACFS resource on the new node by running the following command as root from the Grid_home/bin directory:
# srvctl start filesystem -device volume_device_name -node node3
Note:
Ensure the Oracle ACFS resources, including Oracle ACFS registry resource and Oracle ACFS file system resource where the Oracle home is located, are online on the newly added node.
Run the following CVU command as the user that installed Oracle Clusterware to check cluster integrity. This command verifies that any number of specified nodes has been successfully added to the cluster at the network, shared storage, and clusterware levels:
$ cluvfy stage -post nodeadd -n node3 [-verbose]
Check whether either a policy-managed or administrator-managed Oracle RAC database is configured to run on node3 (the newly added node). If you configured an administrator-managed Oracle RAC database, you may need to use DBCA to add an instance to the database to run on this newly added node.