Troubleshooting Oracle 11.2.0.4 RAC OCSSD start failed with error “sgipcnMctBind” “No buffer space available (74)”
I have never taken you to see the white snow on Changbai Mountain, never taken you to feel the breeze blowing across the fields in October, never taken you to see the heavy heads of grain bending low like sages. I have never shown you any of that, but, my dear, I can show you one way to analyze a RAC error. ~_~
After a private-network problem has been fixed, I have more than once run into gipc failing to start. In some scenarios you can try killing gipcd.bin or gpnpd.bin on a surviving node (this does not cause GI to restart there); the process is respawned automatically. Then stop the CRS stack on the problem node and try starting it again. For analysis, check the GI alert log, ocssd.trc, gipcd.trc, and the agent logs referenced in the alert log (e.g. oracssdagent_root.trc). Of course, first confirm and rule out whether the network has really recovered, using ping, traceroute, and the multicast test script; sometimes you also need to clean up leftover socket files on the local filesystem. This post shares a case on AIX, an 11.2.0.4 4-node RAC, in which node 2 would not start.
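For the verification and cleanup steps just mentioned, a minimal command sketch looks like this (the remote private IP is a placeholder, the socket directories are the usual GI locations, and they must only be cleaned with the CRS stack completely down on that node):

# ping -c 3 <remote_node_private_ip>           <- basic reachability over the private interface
# traceroute <remote_node_private_ip>          <- confirm the path has not changed after the network fix
# crsctl stop crs -f                           <- on the problem node only, before any cleanup
# rm -f /var/tmp/.oracle/* /tmp/.oracle/*      <- remove leftover GIPC/CSS socket files
# crsctl start crs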
GI ALERT LOG
2022-06-20 12:22:36.689: [cssd(3866722)]CRS-1605:CSSD voting file is online: /dev/rhdiskpower5; details in /opt/oracrs/app/11.2.0/grid/log/server02/cssd/ocssd.log.
2022-06-20 12:31:37.927: [/opt/oracrs/app/11.2.0/grid/bin/cssdagent(41747522)]CRS-5818:Aborted command 'start' for resource 'ora.cssd'. Details at (:CRSAGF00113:) {0:0:2} in /opt/oracrs/app/11.2.0/grid/log/server02/agent/ohasd/oracssdagent_root//oracssdagent_root.log.
2022-06-20 12:31:37.928: [cssd(3866722)]CRS-1656:The CSS daemon is terminating due to a fatal error; Details at (:CSSSC00012:) in /opt/oracrs/app/11.2.0/grid/log/server02/cssd/ocssd.log
2022-06-20 12:31:37.928:
CSSD.TRC
2022-06-20 12:32:18.686: [ GPNP][1]clsgpnp_getCachedProfileEx: [at clsgpnp.c:624 clsgpnp_getCachedProfileEx] Result: (26) CLSGPNP_NO_PROFILE. Failed to get offline GPnP service profile.
2022-06-20 12:32:18.686: [ GPNP][1]clsgpnp_profileCallUrlInt: [at clsgpnp.c:2108 clsgpnp_profileCallUrlInt] get-profile call to url "ipc://GPNPD_anbob02" disco "" [f=1 claimed- host: cname: seq: auth:]
2022-06-20 12:32:18.695: [ GPNP][1]clsgpnp_profileCallUrlInt: [at clsgpnp.c:2236 clsgpnp_profileCallUrlInt] Result: (0) CLSGPNP_OK. Successful get-profile CALL to remote "ipc://GPNPD_anbob02" disco ""
2022-06-20 12:32:18.695: [GIPCXCPT][1] gipchaInternalReadGpnp: No network info configured in GPNP, using defaults, ret gipcretFail (1)
2022-06-20 12:32:18.896: [GIPCHGEN][1] gipchaInternalReadGpnp: configuring default multicast addresses
2022-06-20 12:32:18.896: [GIPCHGEN][1] gipchaInternalReadGpnp: configuring default bootstrap communications modes
2022-06-20 12:32:18.896: [GIPCHGEN][1] gipchaInternalReadGpnp: configuring bootstrap communications using: broadcast and multicast
2022-06-20 12:32:18.896: [GIPCHGEN][1] gipchaInternalReadGpnp: mcast address[ 0 ] 224.0.0.251
2022-06-20 12:32:18.896: [GIPCHGEN][1] gipchaInternalReadGpnp: mcast address[ 1 ] 230.0.1.0
2022-06-20 12:32:18.898: [GIPCHTHR][1543] gipchaWorkerThread: starting worker thread hctx 1106d91b0 [0000000000000010] { gipchaContext : host 'anbob02', name 'CSS_anbob-cluster', luid '7427b9cd-00000000', numNode 0, numInf 0, usrFlags 0x0, flags 0xc062 }
2022-06-20 12:32:18.900: [GIPCHDEM][1800] gipchaDaemonThread: starting daemon thread hctx 1106d91b0 [0000000000000010] { gipchaContext : host 'anbob02', name 'CSS_anbob-cluster', luid '7427b9cd-00000000', numNode 0, numInf 0, usrFlags 0x0, flags 0xc062 }
2022-06-20 12:32:18.913: [ GPNP][1800]clsgpnp_Init: [at clsgpnp0.c:586 clsgpnp_Init] '/opt/oracrs/app/11.2.0/grid' in effect as GPnP home base.
2022-06-20 12:32:18.913: [ GPNP][1800]clsgpnp_Init: [at clsgpnp0.c:632 clsgpnp_Init] GPnP pid=43451474, GPNP comp tracelevel=1, depcomp tracelevel=0, tlsrc:ORA_DAEMON_LOGGING_LEVELS, apitl:0, complog:1, tstenv:0, devenv:0, envopt:0, flags=0
2022-06-20 12:32:18.929: [ GIPC][1800] gipcCheckInitialization: possible incompatible non-threaded init from [clsgpnp0.c : 769], original from [clsssc.c : 984]
2022-06-20 12:32:18.932: [ GPNP][1800]clsgpnpkwf_initwfloc: [at clsgpnpkwf.c:399 clsgpnpkwf_initwfloc] Using FS Wallet Location : /opt/oracrs/app/11.2.0/grid/gpnp/anbob02/wallets/peer/
[ CLWAL][1800]clsw_Initialize: OLR initlevel [30000]
2022-06-20 12:32:18.962: [ GPNP][1800]clsgpnp_getCachedProfileEx: [at clsgpnp.c:615 clsgpnp_getCachedProfileEx] Result: (26) CLSGPNP_NO_PROFILE. Can't get offline GPnP service profile: local gpnpd is up and running. Use getProfile instead.
2022-06-20 12:32:18.962: [ GPNP][1800]clsgpnp_getCachedProfileEx: [at clsgpnp.c:624 clsgpnp_getCachedProfileEx] Result: (26) CLSGPNP_NO_PROFILE. Failed to get offline GPnP service profile.
2022-06-20 12:32:18.962: [ GPNP][1800]clsgpnp_profileCallUrlInt: [at clsgpnp.c:2108 clsgpnp_profileCallUrlInt] get-profile call to url "ipc://GPNPD_anbob02" disco "" [f=1 claimed- host: cname: seq: auth:]
2022-06-20 12:32:18.971: [ GPNP][1800]clsgpnp_profileCallUrlInt: [at clsgpnp.c:2236 clsgpnp_profileCallUrlInt] Result: (0) CLSGPNP_OK. Successful get-profile CALL to remote "ipc://GPNPD_anbob02" disco ""
2022-06-20 12:32:18.971: [ GIPCLIB][1800] gipclibGetClusterGuid: retrieved cluster guid 5339ea0cbbd4dfcbff15c2ad92c7dd21
2022-06-20 12:32:19.192: [ GPNP][1800] clsgpnp_Init: [at clsgpnp0.c:586 clsgpnp_Init] '/opt/oracrs/app/11.2.0/grid' in effect as GPnP home base.
2022-06-20 12:32:19.192: [ GPNP][1800] clsgpnp_Init: [at clsgpnp0.c:632 clsgpnp_Init] GPnP pid=43451474, GPNP comp tracelevel=1, depcomp tracelevel=0, tlsrc:ORA_DAEMON_LOGGING_LEVELS, apitl:0, complog:1, tstenv:0, devenv:0, envopt:0, flags=2003
2022-06-20 12:32:19.205: [ GPNP][1800] clsgpnpkwf_initwfloc: [at clsgpnpkwf.c:399 clsgpnpkwf_initwfloc] Using FS Wallet Location : /opt/oracrs/app/11.2.0/grid/gpnp/anbob02/wallets/peer/
[ CLWAL][1800]clsw_Initialize: OLR initlevel [70000]
2022-06-20 12:32:19.273: [ GPNP][1800] clsgpnp_profileCallUrlInt: [at clsgpnp.c:2108 clsgpnp_profileCallUrlInt] get-profile call to url "ipc://GPNPD_anbob02" disco "" [f=3 claimed- host: cname: seq: auth:]
2022-06-20 12:32:19.282: [ GPNP][1800] clsgpnp_profileCallUrlInt: [at clsgpnp.c:2236 clsgpnp_profileCallUrlInt] Result: (0) CLSGPNP_OK. Successful get-profile CALL to remote "ipc://GPNPD_anbob02" disco ""
2022-06-20 12:32:19.282: [ CLSINET][1800] Returning NETDATA: 1 interfaces
2022-06-20 12:32:19.282: [ CLSINET][1800] # 0 Interface 'en7',ip='172.18.3.67',mac='34-40-b5-b6-f1-db',mask='255.255.255.224',net='172.18.3.64',use='cluster_interconnect'
2022-06-20 12:32:19.282: [GIPCHGEN][1800] gipchaNodeAddInterface: adding interface information for inf 1122a1f30 { host '', haName 'CSS_anbob-cluster', local 0, ip '172.18.3.67', subnet '172.18.3.64', mask '255.255.255.224', mac '34-40-b5-b6-f1-db', ifname 'en7', numRef 0, numFail 0, idxBoot 0, flags 0x1841 }
2022-06-20 12:32:19.282: [GIPCHTHR][1543] gipchaWorkerCreateInterface: created local interface for node 'anbob02', haName 'CSS_anbob-cluster', inf 'udp://172.18.3.67:45782'
2022-06-20 12:32:19.282: [GIPCXCPT][1543] gipcmodNetworkProcessBind: failed to bind endp 1122a5330 [0000000000000893] { gipcEndpoint : localAddr 'mcast://224.0.0.251:42424/172.18.3.67', remoteAddr '', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0, ready 0, wobj 0, sendp 1122b6950flags 0x0, usrFlags 0xc000 }, addr 1122b54d0 [0000000000000895] { gipcAddress : name 'mcast://224.0.0.251:42424/172.18.3.67', objFlags 0x0, addrFlags 0x1 }
2022-06-20 12:32:19.282: [GIPCXCPT][1543] gipcmodNetworkProcessBind: slos op : sgipcnMctBind
2022-06-20 12:32:19.282: [GIPCXCPT][1543] gipcmodNetworkProcessBind: slos dep : No buffer space available (74)
2022-06-20 12:32:19.282: [GIPCXCPT][1543] gipcmodNetworkProcessBind: slos loc : bind
2022-06-20 12:32:19.282: [GIPCXCPT][1543] gipcmodNetworkProcessBind: slos info: IP_ADD_MEMBERSHIP failed
2022-06-20 12:32:19.282: [GIPCXCPT][1543] gipcBindF [gipcInternalEndpoint : gipcInternal.c : 432]: EXCEPTION[ ret gipcretFail (1) ] failed to bind endp 1122a5330 [0000000000000893] { gipcEndpoint : localAddr 'mcast://224.0.0.251:42424/172.18.3.67', remoteAddr '', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0, ready 0, wobj 0, sendp 1122b6950flags 0x0, usrFlags 0xc000 }, addr 1122b6230 [000000000000089a] { gipcAddress : name 'mcast://224.0.0.251:42424/172.18.3.67', objFlags 0x0, addrFlags 0x0 }, flags 0x8000
2022-06-20 12:32:19.283: [GIPCXCPT][1543] gipcInternalEndpoint: failed to bind address to endpoint name 'mcast://224.0.0.251:42424/172.18.3.67', ret gipcretFail (1)
2022-06-20 12:32:19.283: [GIPCHTHR][1543] gipchaWorkerUpdateInterface: EXCEPTION[ ret gipcretFail (1) ] failed to create local interface 'udp://172.18.3.67', 1122a1f30 { host '', haName 'CSS_anbob-cluster', local 0, ip '172.18.3.67', subnet '172.18.3.64', mask '255.255.255.224', mac '34-40-b5-b6-f1-db', ifname 'en7', numRef 0, numFail 0, idxBoot 0, flags 0x1841 }, hctx 1106d91b0 [0000000000000010] { gipchaContext : host 'anbob02', name 'CSS_anbob-cluster', luid '7427b9cd-00000000', numNode 0, numInf 1, usrFlags 0x0, flags 0x67 }
2022-06-20 12:32:19.283: [GIPCHGEN][1543] gipchaInterfaceDisable: disabling interface 1122a1f30 { host '', haName 'CSS_anbob-cluster', local 0, ip '172.18.3.67', subnet '172.18.3.64', mask '255.255.255.224', mac '34-40-b5-b6-f1-db', ifname 'en7', numRef 0, numFail 0, idxBoot 0, flags 0x1841 }
2022-06-20 12:32:19.283: [GIPCHDEM][1543] gipchaWorkerCleanInterface: performing cleanup of disabled interface 1122a1f30 { host '', haName 'CSS_anbob-cluster', local 0, ip '172.18.3.67', subnet '172.18.3.64', mask '255.255.255.224', mac '34-40-b5-b6-f1-db', ifname 'en7', numRef 0, numFail 0, idxBoot 0, flags 0x1861 }
2022-06-20 12:32:19.283: [ CSSD][1]clssnmOpenGIPCEndp: listening on gipcha://anbob02:nm2_anbob-cluster
...
2022-06-20 12:32:34.959: [GIPCHGEN][1800] gipchaNodeAddInterface: adding interface information for inf 11181f0b0 { host '', haName 'CSS_anbob-cluster', local 0, ip '172.18.3.67', subnet '172.18.3.64', mask '255.255.255.224', mac '34-40-b5-b6-f1-db', ifname 'en7', numRef 0, numFail 0, idxBoot 0, flags 0x1841 }
2022-06-20 12:32:34.971: [ CSSD][1029]clssscSelect: cookie accept request 110b21220
2022-06-20 12:32:34.971: [ CSSD][1029]clssgmAllocProc: (111e2b790) allocated
2022-06-20 12:32:34.972: [ CSSD][1029]clssgmClientConnectMsg: properties of cmProc 111e2b790 - 0,1,2,3,4
2022-06-20 12:32:34.972: [ CSSD][1029]clssgmClientConnectMsg: Connect from con(a0b) proc(111e2b790) pid(56295588) version 11:2:1:4, properties: 0,1,2,3,4
2022-06-20 12:32:34.972: [ CSSD][1029]clssgmClientConnectMsg: msg flags 0x0000
2022-06-20 12:32:35.007: [ CSSD][1]clssnmlgetslot:lease acquisition for node anbob02/slot 2 completed in 15105 msecs
2022-06-20 12:32:35.014: [ CSSD][1]clssnmvDHBValidateNcopy: node 1, anbob01, has a disk HB, but no network HB, DHB has rcfg 405750863, wrtcnt, 635601613, LATS 1315106873, lastSeqNo 0, uniqueness 1621540573, timestamp 1655699554/4094771838
2022-06-20 12:32:35.014: [ CSSD][1]clssnmvDHBValidateNcopy: node 2, anbob02, has a disk HB, but no network HB, DHB has rcfg 405750862, wrtcnt, 635539164, LATS 1315106873, lastSeqNo 0, uniqueness 1655699505, timestamp 1655699502/1315053902
2022-06-20 12:32:35.014: [ CSSD][1]clssnmvDHBValidateNcopy: node 3, anbob03, has a disk HB, but no network HB, DHB has rcfg 405750863, wrtcnt, 635585937, LATS 1315106873, lastSeqNo 0, uniqueness 1641498383, timestamp 1655699554/1316838977
2022-06-20 12:32:35.014: [ CSSD][1]clssnmvDHBValidateNcopy: node 4, anbob04, has a disk HB, but no network HB, DHB has rcfg 405750863, wrtcnt, 634978425, LATS 1315106873, lastSeqNo 0, uniqueness 1647893440, timestamp 1655699554/3511630202
..
2022-06-20 12:32:58.958: [ CLSF][4129]Allocated CLSF context
2022-06-20 12:32:58.958: [ CSSD][5157]clssnmPollingThread: Spawned, poll interval 1000
2022-06-20 12:32:58.960: [ CSSD][5414]clssnmSendingThread: Spawned
2022-06-20 12:32:58.962: [ CSSD][5671]clssnmRcfgMgrThread: Spawned
2022-06-20 12:32:58.964: [ CSSD][5928]clssnmClusterListener: Spawned
2022-06-20 12:32:58.964: [ CSSD][5928]clssnmconnect: connecting to addr gipcha://anbob01:nm2_anbob-cluster
2022-06-20 12:32:58.965: [ SKGFD][3872]NOTE: No asm libraries found in the system
2022-06-20 12:32:58.965: [ CLSF][3872]Allocated CLSF context
2022-06-20 12:32:58.965: [ CSSD][3872]clssnmvKillBlockThread: spawned for disk /dev/rhdiskpower5 initial sleep interval (1000)ms
2022-06-20 12:32:58.965: [ CSSD][5928]clssscConnect: endp f35 - cookie 111264d50 - addr gipcha://anbob01:nm2_anbob-cluster
2022-06-20 12:32:58.965: [ CSSD][5928]clssnmconnect: connecting to node(1), endp(f35), flags 0x10002
2022-06-20 12:32:58.965: [GIPCHGEN][1543] gipchaNodeCreate: adding new node 113479810 { host 'anbob01', haName 'CSS_anbob-cluster', srcLuid 7427b9cd-aba3b2a6, dstLuid 00000000-00000000 numInf 0, contigSeq 0, lastAck 0, lastValidAck 0, sendSeq [0 : 0], createTime 1315130824, sentRegister 0, localMonitor 0, flags 0x0 }
2022-06-20 12:32:58.965: [GIPCHALO][1543] gipchaLowerSend: deffering startup of hdr 113473b18 { len 232, seq 0, type gipchaHdrTypeSend (1), lastSeq 0, lastAck 0, minAck 0, flags 0x0, srcLuid 00000000-00000000, dstLuid 00000000-00000000, msgId 0 }, node 113479810 { host 'anbob01', haName 'CSS_anbob-cluster', srcLuid 7427b9cd-aba3b2a6, dstLuid 00000000-00000000 numInf 0, contigSeq 0, lastAck 0, lastValidAck 0, sendSeq [0 : 0], createTime 1315130824, sentRegister 0, localMonitor 0, flags 0x0 }
2022-06-20 12:32:58.965: [GIPCHALO][1543] gipchaLowerProcessNode: no valid interfaces found to node for 1315130824 ms, node 113479810 { host 'anbob01', haName 'CSS_anbob-cluster', srcLuid 7427b9cd-aba3b2a6, dstLuid 00000000-00000000 numInf 0, contigSeq 0, lastAck 0, lastValidAck 0, sendSeq [0 : 0], createTime 1315130824, sentRegister 0, localMonitor 1, flags 0x4 }
Note:
The failure occurs while binding the multicast port: the slos operation is sgipcnMctBind and the address is mcast://224.0.0.251:42424/172.18.3.67. "No buffer space available (74)" is normally an OS-level resource error: it means the TCP/IP stack is short of buffer space. Check whether udp_sendspace and udp_recvspace are configured large enough; the author notes that the send and receive queue sizes can be observed with the "netstat -l" command. Error 74 has also been seen when a sendto() system call failed to send a UDP message. On AIX, error 74 means "No buffer space available" and is returned by the OS, so this appears to be an AIX UDP buffer space issue.
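On AIX the relevant UDP tunables can be checked, and raised if needed, with the no command. This is a minimal sketch assuming root access; the values in the last two lines are only illustrative, so follow the sizing guidance in the Oracle and IBM documentation for your release:

# no -o udp_sendspace                               <- current UDP send buffer size
# no -o udp_recvspace                               <- current UDP receive buffer size
# no -o sb_max                                      <- upper limit for any socket buffer
# netstat -s | grep -i "socket buffer overflows"    <- UDP drops caused by full receive buffers
# netstat -m                                        <- mbuf usage and any memory request failures
# no -p -o udp_sendspace=65536                      <- example change, persisted across reboots with -p
# no -p -o udp_recvspace=655360

sb_max must be at least as large as the biggest socket buffer value you set.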
Verify with cluvfy
# cluvfy stage -pre crsinst -n xxx -networks eth1:192.xxx.xxx.0:PUBLIC/eth2:192.xxx.xxx.0:cluster_interconnect -verbose
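The node connectivity component check can also be run on its own before restarting the stack; a sketch using this cluster's node names (adjust the node list, and add -networks to restrict the check to the interconnect if desired):

# cluvfy comp nodecon -n anbob01,anbob02,anbob03,anbob04 -verbose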
Check whether ports are exhausted or already in use
After a normal startup, two UDP and two multicast endpoints are brought up for the ohasd, gipcd, and ocssd processes; UDP usage can be checked with netstat.
For example:
2022-06-21 09:30:21.690 :GIPCHDEM:182658816: gipchaDaemonProcessInfUpdate: completed interface update host 'zdb001', haName '', hctx 0x5568452fab60 [0000000000000011] { gipchaContext : host 'zdb001', name 'CSS_zdb-cluster', luid '7d7dbf3a-00000000', name2 51cd-3c7d-68fe-c789, numNode 0, numInf 1, maxPriority 0, clientMode 3, nodeIncarnation 00000000-00000000 usrFlags 0x0, flags 0x40865 }
2022-06-21 09:30:21.690 :GIPCHDEM:182658816: gipchaDaemonProcessFailTransientInfs: failed transient interfaces (if any) for host zdb001 haname CSS_zdb-cluster
2022-06-21 09:30:21.690 :GIPCHDEM:182658816: gipchaDaemonProcessClientReq: processing req 0x55684612a380 type gipchaClientReqTypePublish (1)
2022-06-21 09:30:21.690 :GIPCHTHR:184235776: gipchaWorkerCreateInterface: created local interface for node 'zdb001', haName 'CSS_zdb-cluster', inf 'udp://10.10.10.235:55149' inf 0x7f5bd00c9210
2022-06-21 09:30:21.690 :GIPCHTHR:184235776: gipchaWorkerCreateInterface: created local bootstrap multicast interface for node 'zdb001', haName 'CSS_zdb-cluster', inf 'mcast://224.0.0.251:42424/10.10.10.235' inf 0x7f5bd00c9210
2022-06-21 09:30:21.690 :GIPCHTHR:184235776: gipchaWorkerCreateInterface: created local bootstrap multicast interface for node 'zdb001', haName 'CSS_zdb-cluster', inf 'mcast://230.0.1.0:42424/10.10.10.235' inf 0x7f5bd00c9210
2022-06-21 09:30:21.690 :GIPCHTHR:184235776: gipchaWorkerCreateInterface: created local bootstrap broadcast interface for node 'zdb001', haName 'CSS_zdb-cluster', inf 'udp://10.10.10.255:42424' inf 0x7f5bd00c9210
2022-06-21 09:30:21.690 :GIPCHDEM:182658816: gipchaDaemonProcessClientReq: processing req 0x7f5bcc060090 type gipchaClientReqTypeInfPublish (7)
# netstat -tanelpug
Enable level-4 gipc tracing to collect more diagnostic information, and test multicast with the mcasttest.pl script (downloadable from MOS).
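For reference, the multicast test script from MOS note 1212703.1 is typically invoked as below (a sketch using this cluster's node names and interconnect interface; the script checks both the 230.0.1.0 and 224.0.0.251 groups used by gipcha between every pair of listed nodes):

$ perl mcasttest.pl -n anbob01,anbob02,anbob03,anbob04 -i en7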
Resolution:
To restore service as quickly as possible, a colleague resolved the problem by rebooting the operating system, so no further diagnostic information was collected.
In some cases the bind fails instead with "Address already in use (98)", which usually means the IP/port is already occupied, most often UDP port 42424. Confirm whether another process is holding the port (on AIX, see the netstat/rmsock sketch after the script below), or trace it with Helmut's DTrace script:
TRACE SCRIPT:

syscall::bind:entry
{
  self->fd = arg0;
  self->sockaddr = arg1;
  sockaddrp = (struct sockaddr *)copyin(self->sockaddr, sizeof(struct sockaddr));
  s = (char *)sockaddrp;
  self->port = (unsigned short)(*(s+3)) + (unsigned short)((*(s+2)*256));
  self->ip1 = *(s+4);
  self->ip2 = *(s+5);
  self->ip3 = *(s+6);
  self->ip4 = *(s+7);
}

/* Generic DTRACE script tracking failed bind() system calls: */
syscall::bind:return
/arg0 < 0 && execname != "crsctl.bin"/
{
  printf("- Exec: %s - PID: %d bind() failed with error : %d - fd : %d - IP: %d.%d.%d.%d - Port: %d ",
         execname, pid, arg0, self->fd, self->ip1, self->ip2, self->ip3, self->ip4, self->port);
}

DTRACE OUTPUT:

[root@hract21 DTRACE]# dtrace -s check_rac.d
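On AIX, where DTrace is not available, the owner of a busy UDP port such as 42424 can usually be identified with netstat and rmsock (a sketch; the PCB address is whatever the first column of the netstat output shows for the port):

# netstat -Aan | grep 42424        <- the first column is the protocol control block (PCB) address
# rmsock <pcb_address> udpcb       <- for a socket owned by a process, this reports the holding process instead of removing the socket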
There are also some known Oracle bugs that should be ruled out:
Bug 9593552 – fixed in 11.2.0.2 GI PSU3, 11.2.0.3 and above, crsd fails to join, refer to note 1337730.1 for details
Bug 12720728 – fixed in 11.2.0.2 GI PSU5, 11.2.0.3 GI PSU3, 11.2.0.4 and above, cssd fails to join, refer to note 1352887.1 for details
Bug 13334158 – fixed in 11.2.0.2 GI PSU5, 11.2.0.3 GI PSU1, 11.2.0.4 and above, cssd fails to join, refer to note 1456977.1 for details
Bug 13811209 – fixed in 11.2.0.3 GI PSU3, 11.2.0.4 and above, cssd fails to join, refer to note 1456977.1 for details
Bug 13653178 – fixed in 11.2.0.3 GI PSU5, 11.2.0.4 and above, cssd fails to join, refer to note 1479380.1 for details
Bug 16867451 : SOLX64-11.2.0.4-CSS: CSSD DID NOT COME BACK AFTER RESUME ONE OF PRIVATE NETWORKS
Bug 14693336 : GI does not start after recovery of private network ( Duplicate bug 19125577 bug 18667717 )