What is “_enable_shared_pool_durations”?
What is the meaning of the “_enable_shared_pool_durations” parameter?
With sga_target set, the shared pool and the streams pool have subpools for 4 durations.
The durations are “instance”, “session”, “cursor”, and “execution”.
The main advantage of “_enable_shared_pool_durations = false” is that all the durations are combined into one pool and so a duration will not run out while another duration has free memory. This is true for both the shared pool and the streams pool.
The disadvantage of the setting is that neither the streams pool nor the shared pool can shrink. (Without the setting, only the execution duration of the shared pool is eligible to shrink.)
The setting does not affect the large pool because the large pool is not divided into durations.
With sga_target set, all the pools grow by transferring granules from the buffer cache. Shrinks, if enabled, return granules to the buffer cache. There is no direct transfer from one pool to another pool, or from one duration to another duration; all transfers involve the buffer cache as source or target.
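A quick way to see the components involved and the granule size these transfers happen in is the documented v$sga_dynamic_components view; a minimal sketch:

select component,
       current_size/1048576 current_mb,
       min_size/1048576     min_mb,
       granule_size/1048576 granule_mb
from   v$sga_dynamic_components
where  current_size > 0;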
Note that “_kghdsidx_count=4” creates four “list-set” pools (standard subpools) for each of the shared pool, large pool, and streams pool. Each list-set has its own kgh latch. The list-set pools are orthogonal to the duration subpools, so if sga_target is set and durations are not disabled, the shared pool and the streams pool each have 16 subpool/duration combinations, i.e. 4 subpools * 4 durations.
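To check the current values of these underscore parameters, a minimal sketch (run as SYS; hidden parameters are not exposed in v$parameter, but can be read from the x$ksppi/x$ksppcv fixed tables):

select i.ksppinm  parameter,
       v.ksppstvl value
from   x$ksppi i, x$ksppcv v
where  i.indx = v.indx
and    i.ksppinm in ('_enable_shared_pool_durations', '_kghdsidx_count');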
Impact of setting _enable_shared_pool_durations = false
This will change the architecture of memory in the pools. When set to FALSE, subpools within the SGA will no longer have 4 durations. Instead, each subpool will have only a single duration. This mimics the behavior in 9i, and the shared pool will no longer be able to shrink.
The advantage of this is that the performance issues documented in this note can be avoided. A duration will not encounter memory exhaustion while another duration has free memory.
The disadvantage is that the shared pool (and streams pool) are not able to shrink, mostly negating the benefits of ASMM.
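For completeness, a sketch of how the parameter would be changed. It is a static parameter, so a restart is required, and underscore parameters should only be set under the guidance of Oracle Support:

alter system set "_enable_shared_pool_durations" = false scope=spfile;
shutdown immediate
startup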
# From Tanel Poder’s “Oracle Shared Pool Internals: List Chunk Position in the LRU List”
Since Oracle 9.2 the shared pool can be “partitioned” into multiple parts. This was probably done for relieving shared pool latch contention for crappy applications (which use shared pool latches too much due to bad cursor or connection management).
The “partitions” are called shared pool subpools and there can be up to 7 subpools. Each subpool is protected by a separate shared pool latch and each subpool has its own freelists and LRU list.
There are a few different ways to detect how many subpools you have in use. One is to count the distinct subpool indexes in X$KGHLU:
select count(distinct kghluidx) num_subpools from x$kghlu where kghlushrpool = 1;
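Another way (a sketch, run as SYS; x$ksmss and the meaning of its ksmdsidx column are version-dependent) is to break shared pool memory down by subpool index:

select ksmdsidx,                     -- subpool index (0 is unused memory)
       round(sum(ksmsslen)/1048576) mb
from   x$ksmss
group by ksmdsidx
order by 1;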
Oracle determines the number of needed subpools (during instance startup) based on your shared pool size and cpu_count. IIRC, in 9.2, if you had 4 CPUs or more AND shared_pool_size was bigger than 256 MB, then 2 subpools were used; in 10g the shared_pool_size threshold was bigger, 512 MB I think, and in 11g it’s 1 GB. I don’t recall the exact threshold values, and that’s not really important, as you can see for yourself how many subpools are in use with the above query.
The _kghdsidx_count parameter can be used to force the number of subpools you want, up to 7. In the 9.2 days it was actually quite common to set this back to 1 IF you had ORA-4031 errors AND the cause was diagnosed to be free-space imbalance between the subpools. Since 10g, however, this has been almost unnecessary, as Oracle has improved its heap management algorithms.
There’s a view X$KGHLU (@kghlu.sql script) that shows you a summary of shared pool (sub)heap allocations and how many of these chunks are in the “transient list” (used once) vs. “recurrent list” (used more than once). There’s just a single LRU list per shared pool (sub)heap, but chunks can reside in either “half” of the list, separated by a special chunk called SEPARATOR in shared pool heap dumps.
SQL> alter system set "_kghdsidx_count"=7 scope=spfile;
SQL> startup force
SQL> @kghlu

                     SUB SSUB    FLUSHED   LRU LIST  RECURRENT  TRANSIENT       FREE  UNPIN LAST FRUNP   RESERVED   RESERVED   RESERVED    RESERVED
CURRENT_TIME        POOL POOL     CHUNKS OPERATIONS     CHUNKS     CHUNKS  UNSUCCESS UNSUCC       SIZE      SCANS     MISSES  MISS SIZE MISS MAX SZ
------------------- ---- ---- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
2020-06-01 07:55:21    1    0       3421       9987         43         56        133      32848         77         54      12352    3088808
2020-06-01 07:55:21    2    0       1923       7351        401        414          1       4096         37          2    3977232    3977232
2020-06-01 07:55:21    3    0       2776       8528        137        199        137       4096         75         64      12352    3088808
2020-06-01 07:55:21    4    0       2546       8215        239        253         33     261752         38         13      12352    3088808
2020-06-01 07:55:21    5    0       2174       7781        342        357         14        816         29         14      16424    3088808
2020-06-01 07:55:21    6    0       2632       8389        174        214         42       4096         29          1    3088808    3088808
2020-06-01 07:55:21    7    0       1447       5879        516        558          1       4096         26          3      16408    3088808

7 rows selected.
When the KGH heap allocator doesn’t find usable free chunks in the heap freelist, it will start scanning and evicting unpinned recreatable chunks from the transient part of the LRU list, to avoid flushing out frequently used stuff.
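To see the LRU list and the SEPARATOR chunk for yourself, you can take a shared pool heap dump; a minimal sketch (level 2 dumps the SGA/shared pool heap summary; heap dumps hold the shared pool latch and can be expensive, so prefer a test instance):

alter session set events 'immediate trace name heapdump level 2';
select value from v$diag_info where name = 'Default Trace File';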
*** AWR reports contain volumes of relevant information.
*** Oracle’s dynamic memory management views are also useful: v$sga_dynamic_components, v$sga_resize_ops, …
For example, a case: Because of changes in application load, shared pool activity on this system was relatively frequent, and granules were often RESIZEd back and forth between the DB CACHE and the SHARED POOL to adapt to the load. At around 10 am, for instance, concurrent small tasks would suddenly increase, so the shared pool ran short and had to be expanded; then, when background batch jobs started, the DB CACHE needed to be expanded again. After so much back and forth, waits on the shared pool increased. The LCK0 process often hung while cleaning up shared pool objects, causing GES performance problems in the cluster; ROW CACHE CLEANUP and LATCH: SHARED POOL waits appeared in LCK0. After LCK0 hung for 70 seconds, LMHB terminated the instance.
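To quantify that kind of grow/shrink thrashing, a minimal sketch against the documented v$sga_resize_ops view, counting resize operations per component:

select component, oper_type, count(*) ops,
       min(start_time) first_op,
       max(start_time) last_op
from   v$sga_resize_ops
group by component, oper_type
order by ops desc;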
X$ tables:
X$KSMSS (what type of storage is used and how much is allocated; my monitoring scripts access this, as does Tanel Poder’s sgastatx)
X$KSMLRU (good for identifying shared pool troublemakers; see the sketch after this list)
X$KSMSP (shared pool heap chunks; valuable for diagnosing ORA-4031 errors and detecting fragmentation, but some people have reported problems querying it on busy systems, so caveat emptor)
X$KSMSPR (shared pool reserved heaps)
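A minimal X$KSMLRU sketch (run as SYS; the column names are version-dependent, and note the well-known caveat that selecting from X$KSMLRU resets its contents, so capture the output when you query it):

select ksmlrcom,  -- allocation comment (what the memory was requested for)
       ksmlrhon,  -- name of the object being loaded
       ksmlrsiz,  -- size of the allocation that caused flushing
       ksmlrnum   -- number of objects flushed out to satisfy it
from   x$ksmlru
where  ksmlrsiz > 0
order by ksmlrsiz desc;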