
Remove a node from 11g R2 RAC on OEL5 (delete a node)

Oracle RAC Nodes
Node Name          Instance Name   Database Name   RAM   Operating System   SCAN                                 Software Version
znode1             rac1            rac.anbob.com   2GB   OEL 5.8 (x86)      rac-scan.anbob.com (DNS on znode1)   11.2.0.3
znode2             rac2                            2GB   OEL 5.8 (x86)
znode3 [remove]    rac3                            2GB   OEL 5.8 (x86)

Shared storage is provided by VMware Workstation 8 using UDEV + ASM (Grid Infrastructure and database version 11.2.0.3).

[grid@znode1 ~]$ srvctl config database -d rac
Database unique name: rac
Database name: rac
Oracle home: /u01/app/oracle/11.2.0/db1
Oracle user: oracle
Spfile: +DBDG/rac/spfilerac.ora
Domain: anbob.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: rac
Database instances: rac1,rac2,rac3
Disk Groups: DBDG,FLRV
Mount point paths: 
Services: 
Type: RAC
Database is administrator managed
[grid@znode1 ~]$ srvctl status database -d rac
Instance rac1 is running on node znode1
Instance rac2 is running on node znode2
Instance rac3 is running on node znode3



[grid@znode1 ~]$ crsctl check cluster -all
**************************************************************
znode1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
znode2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
znode3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Remove Instance from OEM Database Control Monitoring

[oracle@znode1 ~]$ emca -displayConfig dbcontrol -cluster

STARTED EMCA at Jul 31, 2012 3:13:53 PM
EM Configuration Assistant, Version 11.2.0.3.0 Production
Copyright (c) 2003, 2011, Oracle.  All rights reserved.

Enter the following information:
Database unique name: rac
Service name: rac.anbob.com

Do you wish to continue? [yes(Y)/no(N)]: y
Jul 31, 2012 3:14:02 PM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /u01/app/oracle/cfgtoollogs/emca/rac/emca_2012_07_31_15_13_53.log.
Jul 31, 2012 3:14:06 PM oracle.sysman.emcp.EMDBPostConfig showClusterDBCAgentMessage
INFO:
**************** Current Configuration ****************
 INSTANCE     NODE       DBCONTROL_UPLOAD_HOST
----------   ----------  ---------------------
 rac          znode1     znode1.anbob.com
 rac          znode2     znode1.anbob.com
 rac          znode3     znode1.anbob.com

Enterprise Manager configuration completed successfully
FINISHED EMCA at Jul 31, 2012 3:14:06 PM

To remove only the rac3 target from DB Control you could run emca -deleteInst db; after that, rac3 no longer shows up in EM. Here I chose instead to deconfigure DB Control entirely and drop the repository:

[oracle@znode1 ~]$ emca -deconfig dbcontrol db -repos drop -cluster

STARTED EMCA at Jul 31, 2012 3:28:46 PM
EM Configuration Assistant, Version 11.2.0.3.0 Production
Copyright (c) 2003, 2011, Oracle.  All rights reserved.

Enter the following information:
Database unique name: rac
Service name: rac.anbob.com
Listener ORACLE_HOME [ /u01/app/11.2.0/grid ]:
Password for SYS user:
Password for SYSMAN user:
----------------------------------------------------------------------
WARNING : While repository is dropped the database will be put in quiesce mode.
----------------------------------------------------------------------
Do you wish to continue? [yes(Y)/no(N)]: y
Jul 31, 2012 3:29:04 PM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /u01/app/oracle/cfgtoollogs/emca/rac/emca_2012_07_31_15_28_46.log.
Jul 31, 2012 3:29:07 PM oracle.sysman.emcp.util.DBControlUtil stopOMS
INFO: Stopping Database Control (this may take a while) ...
Jul 31, 2012 3:29:40 PM oracle.sysman.emcp.EMReposConfig invoke
INFO: Dropping the EM repository (this may take a while) ...
Jul 31, 2012 3:36:19 PM oracle.sysman.emcp.EMReposConfig invoke
INFO: Repository successfully dropped
Enterprise Manager configuration completed successfully
FINISHED EMCA at Jul 31, 2012 3:36:30 PM
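If you only want to drop the rac3 target rather than the whole DB Control configuration, a minimal sketch of the alternative mentioned above (interactive; the exact prompts depend on your EMCA patch level, so treat this as illustrative rather than what was run here):

# run as the oracle user on a surviving node; EMCA prompts for the
# database unique name, service name and the node/SID being removed
emca -deleteInst db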

Backup the OCR using ocrconfig -manualbackup

[root@znode1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup

znode2     2012/07/31 15:43:14     /u01/app/11.2.0/grid/cdata/rac/backup_20120731_154314.ocr
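To confirm that the backup is registered and that the OCR itself is healthy, a quick sketch (run as root from the Grid home; 11.2 command syntax assumed):

# list the manual OCR backups known to the cluster, then verify OCR integrity
/u01/app/11.2.0/grid/bin/ocrconfig -showbackup manual
/u01/app/11.2.0/grid/bin/ocrcheck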

Remove the instance from the cluster database with DBCA, as the oracle software owner

[oracle@znode1 ~]$ dbca -silent -deleteinstance -nodelist znode3 \
> -gdbname rac.anbob.com -instancename rac3 \
> -sysdbausername sys -sysdbapassword oracle
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/rac.log" for further details.

[oracle@znode1 ~]$ srvctl config database -d rac -v
Database unique name: rac
Database name: rac
Oracle home: /u01/app/oracle/11.2.0/db1
Oracle user: oracle
Spfile: +DBDG/rac/spfilerac.ora
Domain: anbob.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: rac
Database instances: rac1,rac2
Disk Groups: DBDG,FLRV
Mount point paths:
Services:
Type: RAC
Database is administrator managed

SQL> select inst_id,instance_name from gv$instance;

   INST_ID INSTANCE_NAME
---------- ----------------
         1 rac1
         2 rac2

SQL> select group#,thread# from v$log;

    GROUP#    THREAD#
---------- ----------
         1          1
         2          1
         3          2
         4          2

The redo log groups of thread 3 have been dropped as well. The details can be found in the DBCA trace log under $ORACLE_BASE/cfgtoollogs/dbca/trace.log_xxx:

[oracle@znode1 dbca]$ find -mtime -1
.
./rac.log
./trace.log_OraDb11g_home1_2012-07-31_03-48-22-PM
./DeleteInstanceStep.log

more trace.log_OraDb11g_home1_2012-07-31_03-48-22-PM
..
[Thread-32] [Verifier.setDatafileType:5273] setDatafileType:=1
[Thread-32] [DeleteInstanceStep.executeImpl:249] thread SQL = SELECT THREAD# FROM V$THREAD WHERE UPPER(INSTANCE) = UPPER('rac3')
[Thread-32] [DeleteInstanceStep.executeImpl:254] threadNum.length=1
[Thread-32] [DeleteInstanceStep.executeImpl:273] threadNum=3
[Thread-32] [DeleteInstanceStep.executeImpl:280] redoLog SQL =SELECT GROUP# FROM V$LOG WHERE THREAD# = 3
[Thread-32] [DeleteInstanceStep.executeImpl:286] redoLogGrNames length=2
[Thread-32] [DeleteInstanceStep.executeImpl:311] Group numbers=(5,6)
[Thread-32] [DeleteInstanceStep.executeImpl:317] logFileName SQL=SELECT MEMBER FROM V$LOGFILE WHERE GROUP# IN (5,6)
[Thread-32] [DeleteInstanceStep.executeImpl:322] logFiles length=4
[Thread-32] [DeleteInstanceStep.executeImpl:329] SQL= ALTER DATABASE DISABLE THREAD 3
[Thread-32] [DeleteInstanceStep.executeImpl:341] archive mode = false
[Thread-32] [DeleteInstanceStep.executeImpl:358] SQL= ALTER DATABASE DROP LOGFILE GROUP 5
[Thread-32] [DeleteInstanceStep.executeImpl:358] SQL= ALTER DATABASE DROP LOGFILE GROUP 6
[Thread-32] [DeleteInstanceStep.executeImpl:404] SQL=DROP TABLESPACE UNDOTBS3 INCLUDING CONTENTS AND DATAFILES
[Thread-32] [DeleteInstanceStep.executeImpl:531] sidParams.length=3
[Thread-32] [DeleteInstanceStep.executeImpl:546] SQL=ALTER SYSTEM RESET undo_tablespace SCOPE=SPFILE SID = 'rac3'
[Thread-32] [DeleteInstanceStep.executeImpl:546] SQL=ALTER SYSTEM RESET instance_number SCOPE=SPFILE SID = 'rac3'
[Thread-32] [DeleteInstanceStep.executeImpl:546] SQL=ALTER SYSTEM RESET thread SCOPE=SPFILE SID = 'rac3'
[Thread-32] [SQLEngine.spoolOff:2035] Setting spool off = /u01/app/oracle/cfgtoollogs/dbca/DeleteInstanceStep.log
[Thread-32] [SQLEngine.done:2189] Done called
...

[oracle@znode3 db1]$ ps -ef |grep ora

The Oracle RDBMS background processes no longer exist on znode3.
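As an optional extra check (a sketch, not part of the original run), you can confirm from one of the surviving instances that redo thread 3 has really been disabled and dropped:

# run as the oracle user on znode1 with ORACLE_SID=rac1;
# thread 3 should no longer be listed
sqlplus -s / as sysdba <<'EOF'
select thread#, status, enabled from v$thread order by thread#;
EOF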

Verify that no listener is running from the database ORACLE_HOME

[oracle@znode3 db1]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: /u01/app/11.2.0/grid on node(s) znode3,znode2,znode1
End points: TCP:1521

In 11g R2 the LISTENER runs out of the Grid Infrastructure home, not the database home.
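Two hedged sanity checks, assuming the default listener name, to make sure nothing is listening out of the database home on znode3:

# listener status for this node, managed from the Grid home by the grid user
srvctl status listener -n znode3
# any tnslsnr process should be running from /u01/app/11.2.0/grid, not the db home
ps -ef | grep tnslsnr | grep -v grep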

Update Oracle Inventory on znode3 (database home)

[oracle@znode3 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={znode3}" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3690 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Check the result:

$ vi /u01/app/oraInventory/ContentsXML/inventory.xml

The original entries

<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="znode1"/>
      <NODE NAME="znode2"/>
      <NODE NAME="znode3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/11.2.0/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="znode1"/>
      <NODE NAME="znode2"/>
      <NODE NAME="znode3"/>
   </NODE_LIST>
</HOME>

have become

<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="znode1"/>
      <NODE NAME="znode2"/>
      <NODE NAME="znode3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/11.2.0/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="znode3"/>
   </NODE_LIST>
</HOME>

This is the same file you would edit when the RDBMS OUI complains during installation that no cluster was detected. Note that with the -local option, runInstaller removes the node information only from the inventory on the local node znode3. In addition, /etc/oratab on znode3 should now contain only the ASM instance entry; make sure it no longer references the ORACLE_HOME of the rac3 database being removed, otherwise errors will follow.
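A quick way to eyeball both files from the shell (paths as used above; the grep context length is just a convenience):

# show the node entries recorded for each home in the local inventory
grep -A4 NODE_LIST /u01/app/oraInventory/ContentsXML/inventory.xml
# confirm /etc/oratab on znode3 no longer references the rac3 database home
grep -v '^#' /etc/oratab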

Remove the Oracle database software from znode3

[oracle@znode3 ContentsXML]$ cd $ORACLE_HOME/deinstall
[oracle@znode3 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...

Be careful with the -local option here. I once hit Enter too quickly and ended up deleting the database software across the whole cluster.
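If you would rather see what the tool intends to do before it touches anything, the 11.2 deinstall utility has a check-only mode; a sketch, worth verifying against your exact release:

# discovery/check phase only, restricted to this node; nothing is removed
cd $ORACLE_HOME/deinstall
./deinstall -checkonly -local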

Update the Oracle Inventory on all remaining nodes, as the oracle software owner. Use the CLUSTER_NODES option to specify the nodes that will remain in the cluster.

[oracle@znode1 dbs]$ $ORACLE_HOME/oui/bin/runInstaller -updatenodelist ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={znode1,znode2}"
Starting Oracle Universal Installer...

[oracle@znode2 db1]$ $ORACLE_HOME/oui/bin/runInstaller -updatenodelist ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={znode1,znode2}"
Starting Oracle Universal Installer...

Now check /u01/app/oraInventory/ContentsXML/inventory.xml on these two nodes:

<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/11.2.0/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="znode1"/>
      <NODE NAME="znode2"/>
   </NODE_LIST>
</HOME>

Remove Node from Clusterware

Check the status of znode3 as the root user:

[root@znode1 ~]# echo $GRID_HOME
/u01/app/11.2.0/grid
[root@znode1 ~]# $GRID_HOME/bin/olsnodes -s -t
znode1  Active  Unpinned
znode2  Active  Unpinned
znode3  Active  Unpinned

znode3 is already in the Unpinned state. If it were Pinned, you would log in as root on a node other than the one being removed and use crsctl unpin to expire this member from CSS; the prerequisite is that CSS is running on that node:

$crsctl unpin css -n znode3
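A small sketch of that prerequisite check, run locally on the node whose membership you want to unpin:

# CSS must be up on the node for crsctl unpin to succeed (expects CRS-4529)
crsctl check css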

Disable Oracle Clusterware on znode3

Before deconfiguring, make sure the EM agent is stopped. Since I removed the entire EM configuration above, I will not repeat the details here:

$emctl status agent
$emctl stop dbconsole

On the node being removed, run the rootcrs.pl script as root to disable and deconfigure the clusterware stack:

[root@znode3 ~]# cd /u01/app/11.2.0/grid/crs/install/
[root@znode3 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.168.0/255.255.255.0/eth0, type static
VIP exists: /znode1-vip/192.168.168.192/192.168.168.0/255.255.255.0/eth0, hosting node znode1
VIP exists: /znode2-vip/192.168.168.194/192.168.168.0/255.255.255.0/eth0, hosting node znode2
VIP exists: /znode3-vip/192.168.168.196/192.168.168.0/255.255.255.0/eth0, hosting node znode3
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'znode3'
CRS-2673: Attempting to stop 'ora.crsd' on 'znode3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'znode3'
CRS-2673: Attempting to stop 'ora.oc4j' on 'znode3'
CRS-2673: Attempting to stop 'ora.CRSDG.dg' on 'znode3'
CRS-2673: Attempting to stop 'ora.DBDG.dg' on 'znode3'
CRS-2673: Attempting to stop 'ora.FLRV.dg' on 'znode3'
CRS-2677: Stop of 'ora.DBDG.dg' on 'znode3' succeeded
CRS-2677: Stop of 'ora.FLRV.dg' on 'znode3' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'znode3' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'znode2'
CRS-2676: Start of 'ora.oc4j' on 'znode2' succeeded
CRS-2677: Stop of 'ora.CRSDG.dg' on 'znode3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'znode3'
CRS-2677: Stop of 'ora.asm' on 'znode3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'znode3' has completed
CRS-2677: Stop of 'ora.crsd' on 'znode3' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'znode3'
CRS-2673: Attempting to stop 'ora.evmd' on 'znode3'
CRS-2673: Attempting to stop 'ora.asm' on 'znode3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'znode3'
CRS-2677: Stop of 'ora.evmd' on 'znode3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'znode3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'znode3' succeeded
CRS-2677: Stop of 'ora.asm' on 'znode3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'znode3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'znode3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'znode3'
CRS-2677: Stop of 'ora.cssd' on 'znode3' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'znode3'
CRS-2677: Stop of 'ora.crf' on 'znode3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'znode3'
CRS-2677: Stop of 'ora.gipcd' on 'znode3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'znode3'
CRS-2677: Stop of 'ora.gpnpd' on 'znode3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'znode3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

Note: if the -force option is omitted, you also have to stop and remove the znode3 VIP manually from one of the remaining nodes:

crsctl stop vip -i znode3-vip -f
crsctl remove vip -i znode3-vip -f

Add the -lastnode option only when deleting the very last node of the cluster; in that case rootcrs.pl also cleans up the data in the OCR and voting disks.

Delete Node from Clusterware Configuration

On one of the remaining nodes, delete the znode3 configuration as the root user:

[grid@znode2 ~]$ olsnodes -s -t
znode1  Active    Unpinned
znode2  Active    Unpinned
znode3  Inactive  Unpinned

[root@znode1 ~]# $GRID_HOME/bin/crsctl delete node -n znode3
CRS-4661: Node znode3 successfully deleted.
[root@znode1 ~]# $GRID_HOME/bin/olsnodes -s -t
znode1  Active  Unpinned
znode2  Active  Unpinned

[grid@znode2 ~]$ olsnodes -s -t
znode1  Active  Unpinned
znode2  Active  Unpinned

Update Oracle Inventory on znode3 (Grid home)

Switch to the grid user and run $GRID_HOME/oui/bin/runInstaller; note the -local option again:

[grid@znode3 ~]$ cd $ORACLE_HOME
[grid@znode3 grid]$ pwd
/u01/app/11.2.0/grid
[grid@znode3 grid]$ cd oui/bin/
[grid@znode3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={znode3}" CRS=TRUE -local

Then look at /u01/app/oraInventory/ContentsXML/inventory.xml again.

On znode3:

<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="znode3"/>
   </NODE_LIST>
</HOME>

On znode1 and znode2 (still unchanged at this point):

<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="znode1"/>
      <NODE NAME="znode2"/>
      <NODE NAME="znode3"/>
   </NODE_LIST>
</HOME>

De-install the Oracle Grid Infrastructure software as the grid user on znode3

[grid@znode3 bin]$ cd $ORACLE_HOME
[grid@znode3 grid]$ pwd
/u01/app/11.2.0/grid
[grid@znode3 grid]$ cd deinstall/
[grid@znode3 deinstall]$ ./deinstall -local

Press Enter four times to accept the defaults until you reach the listener prompt:

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN2,LISTENER_SCAN1]:LISTENER

Answer y to the confirmation prompts and, when asked, run the generated script as root.

Note: do not repeat the earlier mistake — pay attention to -local so only this node is removed. A few of the interactive steps here take a while. Afterwards, clean up the remaining directories as root:

[root@znode3 ~]# rm -rf /etc/oraInst.loc
[root@znode3 ~]# rm -rf /opt/ORCLfmap
[root@znode3 ~]# rm -rf /u01/app/11.2.0
[root@znode3 ~]# rm -rf /u01/app/oracle

Finally, confirm that the ohasd entry is no longer present in /etc/inittab.
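A concrete (if hypothetical) way to perform that last check on znode3:

# the init.ohasd respawn entry should be gone from /etc/inittab
grep -i ohasd /etc/inittab
# and no ohasd / CRS daemons should still be running
ps -ef | grep -i '[o]hasd'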

Update Oracle Inventory on znode1 and znode2

[root@znode1 ~]# su - grid
[grid@znode1 grid]$ cd $ORACLE_HOME/oui/bin
[grid@znode1 bin]$ pwd
/u01/app/11.2.0/grid/oui/bin
[grid@znode1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={znode1,znode2}" CRS=TRUE

[grid@znode1 bin]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="znode1"/>
      <NODE NAME="znode2"/>
   </NODE_LIST>
</HOME>

[grid@znode1 bin]$ cluvfy stage -post nodedel -n racnode3 -verbose

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "znode2"
The Oracle Clusterware is healthy on node "znode1"

CRS integrity check passed
Result: Node removal check passed

Post-check for node removal was successful.

[grid@znode1 bin]$ crsctl check cluster -all
**************************************************************
znode1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
znode2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[grid@znode1 bin]$

Good — the node has been removed successfully. Note, however, that some information about znode3 is still kept in the OCR, perhaps so the node can be added back later. What remains is housekeeping on znode3 itself: remove the udev rules that bind the shared disks and drop the oracle users and groups.

Thanks to Jeffrey Hunter for sharing his knowledge.
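As a final sanity check (a sketch; an empty grep result is what you want), you can look for any znode3 leftovers in the cluster resource state and confirm the node list:

# no cluster resource should still reference znode3
crsctl stat res -t | grep -i znode3
# only znode1 and znode2 should be listed with their node numbers
olsnodes -n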
