About V$SQL_WORKAREA_ACTIVE

Dear forum members,
Please tell me about V$SQL_WORKAREA_ACTIVE.
Our hash joins are spilling to the TEMP tablespace, so I am considering enlarging the PGA.
I plan to use the following V$SQL_WORKAREA_ACTIVE columns as a reference:
 WORK_AREA_SIZE
 EXPECTED_SIZE
 ACTUAL_MEM_USED
but even after reading the reference manual I do not really understand them.
Could you tell me concretely how to configure this, e.g. whether I should aim for WORK_AREA_SIZE > EXPECTED_SIZE?
Environment:
Oracle Database 11g
Red Hat Linux

This is not a direct answer, but...
If this runs in a batch job or similar, couldn't you handle it by setting WORKAREA_SIZE_POLICY to MANUAL in that session and then setting SORT_AREA_SIZE?
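That suggestion might look like the sketch below (the sizes are illustrative, not recommendations; HASH_AREA_SIZE is added here as an assumption, since the original question is about hash joins rather than sorts):
{code}
-- Switch this session to manual workarea sizing, e.g. at the start of a batch job
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 104857600;   -- 100 MB, illustrative value
ALTER SESSION SET hash_area_size = 209715200;   -- 200 MB, illustrative; hash joins use HASH_AREA_SIZE
{code}
Switch the policy back to AUTO (or end the session) afterwards so other statements keep using PGA_AGGREGATE_TARGET.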

Similar Messages

  • Problem with temp space allocation in parallel query

    Hello
    I've got a query which matches two large result sets (25m+ rows) against each other and does some basic filtering and aggregation. When I run this query in serial it takes about 30 mins and completes successfully. When I specify a parallel degree of 4 for each result set, it also completes successfully in about 20 minutes. However, when I specify that it should be run in parallel but don't specify a degree for each result set, it spawns 10 parallel servers and after a couple of minutes, bombs out from one of the parallel servers with:
    ORA-12801: error signaled in parallel query server P000
ORA-01652: unable to extend temp segment by 64 in tablespace TEMP
This appears to be when it is about to perform a large hash join. The execution plan does not change whether the parallel degree is specified or not, and there is several GB of temp space available.
I'm at a bit of a loss as to how to track down specifically what is causing this problem. I've looked at v$sesstat for all of the sessions involved and it hasn't really turned anything up. I've tried tracing the main session and that hasn't really turned up much either. From what I can tell, one of the sessions seems to try to allocate massive amounts of temp space that it just does not need, but I can't figure out why.
    Any ideas of how to approach finding the cause of the problem?
    David

    Hello
    I've finally resolved this and the resolution was relatively simple - and was also the main thing that Mark Rittman said he did in his article: reduce the size of the hash join.
After querying v$sql_workarea_active I could see what was happening, which was that the sum of the temp space for all of the parallel slaves was exceeding the total amount of temp space available on the system. When run in serial, it was virtually at the limit. I guess the extra was just the overhead of each slave maintaining its own hash table.
I also made the mistake of misreading the execution plan - assuming that the data being pushed to the hash join was filtered to eliminate the data that was not of interest. Upon reflection, this was a rather stupid assumption on my part. Anyway, I used subquery factoring with the materialize hint to ensure that the hash join was only working on the data it should have been. This significantly reduced the size of the hash table and therefore the amount of temp space required.
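The approach described above might look like the following sketch (the table and column names are hypothetical, not from the original query; MATERIALIZE is an undocumented hint):
{code}
WITH filtered_a AS (
  SELECT /*+ MATERIALIZE */ join_key, amount
  FROM   big_table_a
  WHERE  snapshot_day = DATE '2010-01-01'   -- filter before the join, not after
),
filtered_b AS (
  SELECT /*+ MATERIALIZE */ join_key, qty
  FROM   big_table_b
  WHERE  snapshot_day = DATE '2010-01-01'
)
SELECT a.join_key, SUM(a.amount), SUM(b.qty)
FROM   filtered_a a
JOIN   filtered_b b ON b.join_key = a.join_key
GROUP  BY a.join_key;
{code}
Materializing the filtered sets first keeps the hash table built only on rows that can actually contribute to the result.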
    I did speak to oracle support and they suggested using pga_aggregate_target rather than the separate *area_size parameters.  I found that this had very little impact as the problem was related to the volume of data rather than whether it was being processed in memory or not.  That said, I did try upping the hash_area_size for the session with some initial success, but ultimately it didn't prove to be scalable.  We are however now using pga_aggregate_target in prod.
    So that's that. Problem sorted. And as the title of Mark Rittman's article suggests, I was trying to be too clever! :-)
    HTH
    David

  • How to increase the size of sort_area_size

    How to increase the size of sort_area_size and what size should be according to the PROD database
    Thanks

    user10869960 wrote:
    Hi,
    Many Thanks Charles
"Oracle does not recommend using the SORT_AREA_SIZE parameter unless the instance is configured with the shared server option. Oracle recommends that you enable automatic sizing of SQL working areas by setting PGA_AGGREGATE_TARGET instead. SORT_AREA_SIZE is retained for backward compatibility."
--How can I know whether the instance is configured with the shared server option or not?
This might be a tough question to answer. A shared server configuration may be enabled, but the clients may still connect using dedicated sessions, in which case PGA_AGGREGATE_TARGET would still apply.
    From
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_2088.htm
    V$SESSION includes a column named SERVER which will contain one of the following for each of the sessions: DEDICATED, SHARED, PSEUDO, or NONE. As a quick check, you could query V$SESSION to see if any sessions are connected using a shared server connection.
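That quick check could be as simple as the sketch below (any rows with SERVER = 'SHARED' mean shared server connections are actually in use):
{code}
-- Count current sessions by connection type: DEDICATED, SHARED, PSEUDO, or NONE
SELECT server, COUNT(*) AS sessions
FROM   v$session
GROUP  BY server;
{code}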
    From:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/manproc.htm
    There are several parameters which are used to configure shared server support, as well as several views to monitor shared server.
    As Robert mentioned, when the WORKAREA_SIZE_POLICY is set to AUTO, the SORT_AREA_SIZE setting is not used, unless a shared server configuration is in use.
--What are the default values of WORKAREA_SIZE_POLICY and SORT_AREA_SIZE?
From:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams157.htm
    "Setting PGA_AGGREGATE_TARGET to a nonzero value has the effect of automatically setting the WORKAREA_SIZE_POLICY parameter to AUTO. This means that SQL working areas used by memory-intensive SQL operators (such as sort, group-by, hash-join, bitmap merge, and bitmap create) will be automatically sized. A nonzero value for this parameter is the default since, unless you specify otherwise, Oracle sets it to 20% of the SGA or 10 MB, whichever is greater."
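Enabling the automatic behavior described in that quote is a one-parameter change (the 2G value below is purely illustrative):
{code}
-- A nonzero PGA_AGGREGATE_TARGET implicitly sets WORKAREA_SIZE_POLICY = AUTO
ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE = BOTH;
{code}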
Actually, I have been facing a performance issue for a long time and have not found a solution, even though I have raised SRs. When the issue occurs, the system seems to hang. From what I have monitored, the issue occurs whenever hdisk0 and hdisk1 hit 100% utilization.
    Regards,
Sajid
When you say that you have had a performance issue for a long time, is it a performance problem faced by a single SQL statement, a single user, a single application, or everything on the server? If you are able to identify a single user or SQL statement that is experiencing poor performance, I suggest starting with a 10046 trace at level 8 (wait events) or level 12 (wait events and bind variables) to determine why the execution appears to be slow. If you have not yet determined a specific user or SQL statement that is experiencing performance problems, you might start with either a Statspack Report or an AWR Report (AWR requires a separate license).
    If you believe that temp tablespace usage may be a contributing factor to the performance problem, you may want to periodically run this query, which will indicate currently in use temp tablespace usage:
    {code}
SELECT /*+ ORDERED */
  TU.USERNAME,
  S.SID,
  S.SERIAL#,
  S.SQL_ID,
  S.SQL_ADDRESS,
  TU.SEGTYPE,
  TU.EXTENTS,
  TU.BLOCKS,
  SQL.SQL_TEXT
FROM
  V$TEMPSEG_USAGE TU,
  V$SESSION S,
  V$SQL SQL
WHERE
  TU.SESSION_ADDR = S.SADDR
  AND TU.SESSION_NUM = S.SERIAL#
  AND S.SQL_ID = SQL.SQL_ID
  AND S.SQL_ADDRESS = SQL.ADDRESS;
    {code}
    The SID and SERIAL# returned by the above could then be used to enable a 10046 trace for a session. The SQL_ID (and CHILD_NUMBER from V$SESSION in recent releases) could be used with DBMS_XPLAN.DISPLAY_CURSOR to return the execution plan for the SQL statement.
You could also take a look in V$SQL_WORKAREA_ACTIVE to determine which SQL statements, if any, are resulting in one-pass or multi-pass executions, both of which access the temp tablespace.
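Those two follow-up steps might look like the sketch below (the SID, SERIAL#, and SQL_ID values are placeholders taken from the query above, not real identifiers):
{code}
-- Enable a 10046-style trace (level 8: wait events) for the identified session
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);

-- Pull the actual execution plan for the cursor identified above
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('abcd1234efgh5', 0));

-- Disable the trace when finished
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);
{code}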
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Huge long time direct path read temp, but pga size is enough, one block p3

    Hi Gurus,
    Can you please kindly provide some points on my below questions. thanks
    my env
    select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    OS: Linux 4 2.6.39-100.5.1.el5uek
    session operation: update a partition which have 4 partitions and total 16G
    session trace info:
The session stays in ACTIVE status, waiting on the wait event below, for more than 70 hours, while OS I/O stats and CPU are almost idle most of the time.
    WAIT #8: nam='direct path read temp' ela= 7615 file number=202 first dba=105072 block cnt=1 obj#=104719 tim=1344850223569499
    WAIT #8: nam='direct path read temp' ela= 5989 file number=202 first dba=85264 block cnt=1 obj#=104719 tim=1344850392833257
    WAIT #8: nam='direct path read temp' ela= 319 file number=202 first dba=85248 block cnt=1 obj#=104719 tim=1344850399563184
    WAIT #8: nam='direct path read temp' ela= 358 file number=202 first dba=85232 block cnt=1 obj#=104719 tim=1344850406016899
    WAIT #8: nam='direct path read temp' ela= 349 file number=202 first dba=85216 block cnt=1 obj#=104719 tim=1344850413023792
    WAIT #8: nam='direct path read temp' ela= 7975 file number=202 first dba=85200 block cnt=1 obj#=104719 tim=1344850419495645
    WAIT #8: nam='direct path read temp' ela= 331 file number=202 first dba=85184 block cnt=1 obj#=104719 tim=1344850426233450
    WAIT #8: nam='direct path read temp' ela= 2641 file number=202 first dba=82880 block cnt=1 obj#=104719 tim=1344850432699800
    pgastat:
    NAME VALUE/1024/1024 UNIT
    aggregate PGA target parameter 18432 bytes
    aggregate PGA auto target 16523.1475 bytes
    global memory bound 1024 bytes
    total PGA inuse 75.7246094 bytes
    total PGA allocated 162.411133 bytes
    maximum PGA allocated 514.130859 bytes
    total freeable PGA memory 64.625 bytes
    PGA memory freed back to OS 40425.1875 bytes
    total PGA used for auto workareas 2.75195313 bytes
    maximum PGA used for auto workareas 270.407227 bytes
    total PGA used for manual workareas 0 bytes
    maximum PGA used for manual workareas 24.5429688 bytes
    bytes processed 110558.951 bytes
    extra bytes read/written 15021.2559 bytes
Most operations in the PGA, per a query on V$SQL_WORKAREA_ACTIVE, were:
IDX MAINTENANCE (SORT)
    My questions:
1. Why does 'direct path read temp' read just one block every time? My understanding is that this event can read either one block or multiple blocks per read call, so why does it keep reading one block in my session?
2. My PGA is big enough; why can't this operation be handled in PGA memory instead of reading blocks from disk in the temp tablespace?
    Thanks for you inputs.
    Roy

    951241 wrote:
since the session (which was from a hard-coded application) is completed.
First of all, you showed wait events from a SQL trace in the first post. Was tracing disabled in the latest execution?
    >
I just generated the AWR report for that period and got the long-elapsed-time SQL statements below:
    Elapsed Time (s) Executions Elapsed Time per Exec (s) %Total %CPU %IO SQL Id
    3,075.35 0 85.10 91.03 8.68 duhz2wtduz709
    524.11 1 524.11 14.50 99.29 0.30 3cpa9fxny9j35
    so I get execution plan as below for these two SQL,
    select * from table(dbms_xplan.display_awr('&v_sql_id')); duhz2wtduz709
    PLAN_TABLE_OUTPUT
    | Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT  |             |       |       |     4 (100)|          |
    |   1 |  UPDATE           | WORK_PAY_LINE |       |       |            |          |
    |   2 |   INDEX RANGE SCAN| WORK_PAY_LINE |     1 |    37 |     3   (0)| 00:00:01 |
    Note
- automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold
I am not sure why the elapsed time in AWR is different from the time in the execution plan.
The "Time" column in an execution plan is estimated time. In this execution plan Oracle expects to get 1 row, with an estimated time of 1 sec.
So you need to check why the estimated cardinality is so low: check the statistics on the table WORK_PAY_LINE.
You update 10 GB of a 16 GB table via an Index Range Scan, which looks inefficient here for two reasons:
1. When a table is updated via an Index Range Scan, optimized index maintenance is used. As a result some amount (significant in your case) of workarea memory is required. The required size depends on the size and number of updated indexes and on the "global memory bound", 1 GB in your case.
2. If the required table buffers are not found in the cache, they will be read from disk by single-block reads. If you used a Full Table Scan instead, the buffers needed for the update would most likely be found in the cache, because they would already have been read in by multiblock reads during the Full Table Scan.
Figures from your AWR indicate that the session waited for I/O only ~9% of the time, and 91% of the time it worked and used CPU:
Elapsed Time (s) Executions Elapsed Time per Exec (s) %Total %CPU %IO SQL Id
3,075.35 0 85.10 91.03 8.68 duhz2wtduz709
This amount of CPU time is partially required for updating 10 GB of data, and partially for sorting during optimized index maintenance.
I would propose using a Full Table Scan here.
You can also experiment by creating a fake trigger on update; it makes optimized index maintenance impossible, so the usual index maintenance will be used instead. As a result you can check the same update with the same execution plan (with the Index Range Scan) but without optimized index maintenance and the "direct path .. temp" wait events.
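Such a fake trigger could be as small as the sketch below (the trigger name is hypothetical; WORK_PAY_LINE is the table from the plan above):
{code}
-- A no-op row trigger; its mere existence disables optimized index maintenance
CREATE OR REPLACE TRIGGER work_pay_line_noop_trg
  BEFORE UPDATE ON work_pay_line
  FOR EACH ROW
BEGIN
  NULL;   -- intentionally does nothing
END;
/
{code}
Drop the trigger again once the comparison test is done.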
    Alexander Anokhin
    http://alexanderanokhin.wordpress.com/

  • IDX MAINTENANCE

Hi brother,
What is the meaning of the message below from v$sql_workarea_active? Is it a problem with the SQL statement or with the index?
SQL_HASH_VALUE:   386857929
SQL_ID:           g8u8agwbhxyy9
WORKAREA_ADDRESS: 00000003C4A2E660
OPERATION_TYPE:   IDX MAINTENANCE (SOR
OPERATION_ID:     0
POLICY:           AUTO
SID:              116
ACTIVE_TIME:      1454984090
WORK_AREA_SIZE:   10350592
EXPECTED_SIZE:    10350592
ACTUAL_MEM_USED:  8970240
MAX_MEM_USED:     8970240
NUMBER_PASSES:    0
(QCINST_ID, QCSID, TEMPSEG_SIZE, TABLESPACE, SEGRFNO#, and SEGBLK# are NULL.)
    Thanks

Hi,
What is your SQL statement? Let us also try the same in our test instance.
thanks,
baskar.l

  • Explain plan intepretation

Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit
    PL/SQL Release 10.2.0.2.0 - Production
    CORE    10.2.0.2.0      Production
    TNS for Linux: Version 10.2.0.2.0 - Production
NLSRTL Version 10.2.0.2.0 - Production
Explain plan
    | Id  | Operation                                  | Name                          | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT                           |                               |  7134K|  1238M|       |   448K  (2)| 01:29:39 |       |       |        |      |            |
    |   1 |  PX COORDINATOR                            |                               |       |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)                      | :TQ10003                      |  7134K|  1238M|       |   448K  (2)| 01:29:39 |       |       |  Q1,03 | P->S | QC (RAND)  |
    |   3 |    HASH GROUP BY                           |                               |  7134K|  1238M|  2858M|   448K  (2)| 01:29:39 |       |       |  Q1,03 | PCWP |            |
    |   4 |     PX RECEIVE                             |                               |  7134K|  1238M|       |   408K  (2)| 01:21:42 |       |       |  Q1,03 | PCWP |            |
    |   5 |      PX SEND HASH                          | :TQ10002                      |  7134K|  1238M|       |   408K  (2)| 01:21:42 |       |       |  Q1,02 | P->P | HASH       |
    |*  6 |       HASH JOIN BUFFERED                   |                               |  7134K|  1238M|  1873M|   408K  (2)| 01:21:42 |       |       |  Q1,02 | PCWP |            |
    |   7 |        PX RECEIVE                          |                               |   137M|    13G|       | 97898   (3)| 00:19:35 |       |       |  Q1,02 | PCWP |            |
    |   8 |         PX SEND HASH                       | :TQ10000                      |   137M|    13G|       | 97898   (3)| 00:19:35 |       |       |  Q1,00 | P->P | HASH       |
    |   9 |          PX BLOCK ITERATOR                 |                               |   137M|    13G|       | 97898   (3)| 00:19:35 |     1 |     4 |  Q1,00 | PCWC |            |
    |* 10 |           TABLE ACCESS FULL                | ORII                          |   137M|    13G|       | 97898   (3)| 00:19:35 |   189 |   192 |  Q1,00 | PCWP |            |
    |  11 |        PX RECEIVE                          |                               |   466 | 28426 |       |     5   (0)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
    |  12 |         PX SEND HASH                       | :TQ10001                      |   466 | 28426 |       |     5   (0)| 00:00:01 |       |       |  Q1,01 | P->P | HASH       |
    |* 13 |          TABLE ACCESS BY GLOBAL INDEX ROWID| MII                           |   466 | 28426 |       |     5   (0)| 00:00:01 | ROWID | ROWID |  Q1,01 | PCWC |            |
    |  14 |           NESTED LOOPS                     |                               |   177M|    13G|       |   119K  (1)| 00:23:58 |       |       |  Q1,01 | PCWP |            |
    |  15 |            PX BLOCK ITERATOR               |                               |   381K|  7073K|       | 24376   (3)| 00:04:53 |     1 |   126 |  Q1,01 | PCWC |            |
    |* 16 |             TABLE ACCESS FULL              | DI                            |   381K|  7073K|       | 24376   (3)| 00:04:53 |     1 |   126 |  Q1,01 | PCWP |            |
    |* 17 |            INDEX RANGE SCAN                | IDX_MII                       |   482 |       |       |     0   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
    Predicate Information (identified by operation id):
       6 - access("RNII"."LEGAL_ENTITY_ID"="MII"."LEGAL_ENTITY_ID" AND "RNII"."ORDER_ID"="MII"."DISTRIBUTOR_ORDER_ID" AND
                  "RNII"."SHIPMENT_ID"="MII"."DISTRIBUTOR_SHIPMENT_ID" AND "RNII"."ASIN"="MII"."ASIN")
      10 - filter("RNII"."RNII_SOURCE_NAME"='distributor_shipment_item' AND "RNII"."SNAPSHOT_DAY"=TO_DATE('2010-01-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND
                  "RNII"."LEGAL_ENTITY_ID"=101)
      13 - filter("MII"."LEGAL_ENTITY_ID"=101)
      16 - filter("DI"."GL_DATE">=TO_DATE('2009-12-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND "DI"."INVOICE_STATUS"='4' AND "DI"."LEGAL_ENTITY_ID"=101 AND
                  "DI"."GL_DATE"<TO_DATE('2010-01-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss'))
  17 - access("MII"."INVOICE_ID"="DI"."INVOICE_ID")
Below are my questions:
Q1 - Which operation should start first: the Nested Loop between the di and mii tables, or building the hash table for rnii? When I monitored the SQL execution in V$SQL_WORKAREA_ACTIVE, it was doing the HASH JOIN operation (Operation ID=6) first. The query started executing with 16 parallel slaves (DOP at table level: di=1, mii=1, rnii=8). 8 slaves were doing the HASH JOIN while the remaining 8 were sitting idle. I want to understand why those slaves were sitting idle and not doing the NL. Each slave used 1.59 GB (total ~12.5 GB) of TEMP, which matches the CBO's estimate of 13 GB of TEMP.
Once the HASH JOIN (operation 6) was done, I saw that 4 slaves were scanning the mii table on 'db file sequential read' waits, which made me realize that the RNII table was the driving (build) input and the NL join row source was acting as the probe. I am not sure whether my thinking is right. This query took 9 min to return 4.2 million rows.
    Username                       QC/Slav SlaveSet        SID     QC SID Slave INST STATE    WAIT_EVENT                     QC INST Req. DOP Actual DOP
    DBA                            QC                      471        471          1 WAIT     PX Deq: Execute Reply
    - p005                        (Slave)        1        217                     1 NOT WAIT                                      1        8          8
    - p004                        (Slave)        1        442                     1 NOT WAIT                                      1        8          8
    - p003                        (Slave)        1        482                     1 NOT WAIT                                      1        8          8
    - p001                        (Slave)        1        394                     1 NOT WAIT                                      1        8          8
    - p000                        (Slave)        1        493                     1 NOT WAIT                                      1        8          8
    - p007                        (Slave)        1        353                     1 NOT WAIT                                      1        8          8
    - p006                        (Slave)        1        477                     1 NOT WAIT                                      1        8          8
    - p002                        (Slave)        1        313                     1 NOT WAIT                                      1        8          8
    - p015                        (Slave)        2        306                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p014                        (Slave)        2        432                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p013                        (Slave)        2        333                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p012                        (Slave)        2        476                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p011                        (Slave)        2        418                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p010                        (Slave)        2        275                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p009                        (Slave)        2        328                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p008                        (Slave)        2        510                     1 WAIT     PX Deq: Table Q Normal               1        8          8
           SID SQL_HASH_VALUE      QCSID OPERATION_ID TYPE                           POLICY               WSIZE_MB EXP_SIZE_MB ACT_SIZE_MB MAX_SIZE_MB     PASSES       TEMP
           328     4177171096        471            6 HASH-JOIN                      AUTO                    63.36       63.36       55.07       55.07          1       1596
           333     4177171096        471            6 HASH-JOIN                      AUTO                    63.36       63.36       55.07       55.07          1       1597
           275     4177171096        471            6 HASH-JOIN                      AUTO                    63.36       63.36       55.07       55.07          1       1596
           432     4177171096        471            6 HASH-JOIN                      AUTO                    63.36       63.36       55.07       55.07          1       1597
           510     4177171096        471            6 HASH-JOIN                      AUTO                    63.36       63.36       55.07       55.07          1       1597
           306     4177171096        471            6 HASH-JOIN                      AUTO                    63.36       63.36       55.07       55.07          1       1597
           418     4177171096        471            6 HASH-JOIN                      AUTO                    63.36       63.36       55.07       55.07          1       1597
       476     4177171096        471            6 HASH-JOIN                      AUTO                    63.36       63.36       55.07       55.07          1       1596
The index on the mii table is a GLOBAL non-partitioned index, which made me wonder why the scans of the mii table were going in parallel with 'db file sequential read' waits.
    When i use NO_SWAP_JOIN_INPUTS(rnii) hint in query, i got below Explain plan:
    | Id  | Operation                                  | Name                          | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT                           |                               |  7134K|  1238M|       |   448K  (2)| 01:29:38 |       |       |        |      |            |
    |   1 |  PX COORDINATOR                            |                               |       |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)                      | :TQ10003                      |  7134K|  1238M|       |   448K  (2)| 01:29:38 |       |       |  Q1,03 | P->S | QC (RAND)  |
    |   3 |    HASH GROUP BY                           |                               |  7134K|  1238M|  2858M|   448K  (2)| 01:29:38 |       |       |  Q1,03 | PCWP |            |
    |   4 |     PX RECEIVE                             |                               |  7134K|  1238M|       |   408K  (2)| 01:21:42 |       |       |  Q1,03 | PCWP |            |
    |   5 |      PX SEND HASH                          | :TQ10002                      |  7134K|  1238M|       |   408K  (2)| 01:21:42 |       |       |  Q1,02 | P->P | HASH       |
    |*  6 |       HASH JOIN BUFFERED                   |                               |  7134K|  1238M|  1948M|   408K  (2)| 01:21:42 |       |       |  Q1,02 | PCWP |            |
    |   7 |        PX RECEIVE                          |                               |   466 | 28426 |       |     5   (0)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
    |   8 |         PX SEND HASH                       | :TQ10000                      |   466 | 28426 |       |     5   (0)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
    |*  9 |          TABLE ACCESS BY GLOBAL INDEX ROWID| MATCHED_INVOICE_ITEMS         |   466 | 28426 |       |     5   (0)| 00:00:01 | ROWID | ROWID |  Q1,00 | PCWC |            |
    |  10 |           NESTED LOOPS                     |                               |   177M|    13G|       |   119K  (1)| 00:23:58 |       |       |  Q1,00 | PCWP |            |
    |  11 |            PX BLOCK ITERATOR               |                               |   381K|  7073K|       | 24376   (3)| 00:04:53 |     1 |   126 |  Q1,00 | PCWC |            |
    |* 12 |             TABLE ACCESS FULL              | DISTRIBUTOR_INVOICES          |   381K|  7073K|       | 24376   (3)| 00:04:53 |     1 |   126 |  Q1,00 | PCWP |            |
    |* 13 |            INDEX RANGE SCAN                | I_MII_INVOICE_ID_ITEM_ID      |   482 |       |       |     0   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
    |  14 |        PX RECEIVE                          |                               |   137M|    13G|       | 97898   (3)| 00:19:35 |       |       |  Q1,02 | PCWP |            |
    |  15 |         PX SEND HASH                       | :TQ10001                      |   137M|    13G|       | 97898   (3)| 00:19:35 |       |       |  Q1,01 | P->P | HASH       |
    |  16 |          PX BLOCK ITERATOR                 |                               |   137M|    13G|       | 97898   (3)| 00:19:35 |     1 |     4 |  Q1,01 | PCWC |            |
    |* 17 |           TABLE ACCESS FULL                | O_RECEIVED_NOT_INVOICED_ITEMS |   137M|    13G|       | 97898   (3)| 00:19:35 |   189 |   192 |  Q1,01 | PCWP |            |
    Predicate Information (identified by operation id):
       6 - access("RNII"."LEGAL_ENTITY_ID"="MII"."LEGAL_ENTITY_ID" AND "RNII"."ORDER_ID"="MII"."DISTRIBUTOR_ORDER_ID" AND
                  "RNII"."SHIPMENT_ID"="MII"."DISTRIBUTOR_SHIPMENT_ID" AND "RNII"."ASIN"="MII"."ASIN")
       9 - filter("MII"."LEGAL_ENTITY_ID"=101)
      12 - filter("DI"."GL_DATE">=TO_DATE('2009-12-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND "DI"."INVOICE_STATUS"='4' AND "DI"."LEGAL_ENTITY_ID"=101 AND
                  "DI"."GL_DATE"<TO_DATE('2010-01-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss'))
      13 - access("MII"."INVOICE_ID"="DI"."INVOICE_ID")
      17 - filter("RNII"."RNII_SOURCE_NAME"='distributor_shipment_item' AND "RNII"."SNAPSHOT_DAY"=TO_DATE('2010-01-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND
              "RNII"."LEGAL_ENTITY_ID"=101)
When I use the above hint, the query took only 5 min. During its execution I monitored that the NL and the HASH JOIN (rnii) were running in parallel using 8 slaves. Below is a snippet of that:
    Username                       QC/Slav SlaveSet        SID     QC SID Slave INST STATE    WAIT_EVENT                     QC INST Req. DOP Actual DOP
    DBA                             QC                     471        471          1 WAIT     PX Deq: Execute Reply
    - p005                        (Slave)        1        275                     1 WAIT     db file sequential read              1        8          8
    - p004                        (Slave)        1        328                     1 NOT WAIT                                      1        8          8
    - p003                        (Slave)        1        432                     1 WAIT     db file sequential read              1        8          8
    - p001                        (Slave)        1        394                     1 NOT WAIT                                      1        8          8
    - p000                        (Slave)        1        477                     1 WAIT     db file parallel read                1        8          8
    - p007                        (Slave)        1        493                     1 NOT WAIT                                      1        8          8
    - p006                        (Slave)        1        482                     1 WAIT     db file sequential read              1        8          8
    - p002                        (Slave)        1        333                     1 WAIT     db file parallel read                1        8          8
    - p015                        (Slave)        2        442                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p014                        (Slave)        2        353                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p013                        (Slave)        2        510                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p012                        (Slave)        2        217                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p011                        (Slave)        2        476                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p010                        (Slave)        2        306                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p009                        (Slave)        2        313                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p008                        (Slave)        2        418                     1 WAIT     PX Deq: Table Q Normal               1        8          8
           SID SQL_HASH_VALUE      QCSID OPERATION_ID TYPE                           POLICY               WSIZE_MB EXP_SIZE_MB ACT_SIZE_MB MAX_SIZE_MB     PASSES       TEMP
           510     2547930387        471            6 HASH-JOIN                      AUTO                    83.45       83.44       89.03       89.03          0         71
           442     2547930387        471            6 HASH-JOIN                      AUTO                    83.45       83.44       89.03       89.03          0         71
           306     2547930387        471            6 HASH-JOIN                      AUTO                    83.45       83.44       89.03       89.03          0         71
           217     2547930387        471            6 HASH-JOIN                      AUTO                    83.45       83.44       89.03       89.03          0         71
           418     2547930387        471            6 HASH-JOIN                      AUTO                    83.45       83.44       89.03       89.03          0         71
           313     2547930387        471            6 HASH-JOIN                      AUTO                    83.45       83.44       89.03       89.03          0         71
           353     2547930387        471            6 HASH-JOIN                      AUTO                    83.45       83.44       89.03       89.03          0         71
           476     2547930387        471            6 HASH-JOIN                      AUTO                    83.45       83.44       89.03       89.03          0         71
    I believe 'db file parallel read' is due to table prefetch (mii). My PGA_AGGREGATE_TARGET is 4 GB.
    I wanted to understand what factors I should consider when deciding which table should be the driving (build) table and which the probe table.
    Thanks in advance.
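    A hedged sketch of how the build/probe choice can be influenced: by default the optimizer takes the smaller (post-filter) row source as the build side of a hash join, and hints such as LEADING and (NO_)SWAP_JOIN_INPUTS can override that. All table and alias names below are illustrative, not from the original post.

    ```sql
    -- Illustrative only: force SMALL_T to be the build (in-memory) side of
    -- the hash join and BIG_T the probe side. The build side is the one that
    -- must fit into the workarea for the join to stay optimal (in memory).
    SELECT /*+ LEADING(s) USE_HASH(b) NO_SWAP_JOIN_INPUTS(b) */
           b.big_col, s.small_col
      FROM small_t s
      JOIN big_t   b ON b.join_key = s.join_key;
    ```

    As a rule of thumb, the row source with the smaller filtered volume (bytes, not just rows) should be the build side.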

    OraDBA02 wrote:
    1) Why were 8 slaves sitting idle while the other 8 slaves were busy building the HASH table from rnii in the first execution plan?
    I don't see why you think that 8 slaves were idle and 8 were busy. This output:
    Username                       QC/Slav SlaveSet        SID     QC SID Slave INST STATE    WAIT_EVENT                     QC INST Req. DOP Actual DOP
    DBA                            QC                      471        471          1 WAIT     PX Deq: Execute Reply
    - p005                        (Slave)        1        217                     1 NOT WAIT                                      1        8          8
    - p004                        (Slave)        1        442                     1 NOT WAIT                                      1        8          8
    - p003                        (Slave)        1        482                     1 NOT WAIT                                      1        8          8
    - p001                        (Slave)        1        394                     1 NOT WAIT                                      1        8          8
    - p000                        (Slave)        1        493                     1 NOT WAIT                                      1        8          8
    - p007                        (Slave)        1        353                     1 NOT WAIT                                      1        8          8
    - p006                        (Slave)        1        477                     1 NOT WAIT                                      1        8          8
    - p002                        (Slave)        1        313                     1 NOT WAIT                                      1        8          8
    - p015                        (Slave)        2        306                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p014                        (Slave)        2        432                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p013                        (Slave)        2        333                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p012                        (Slave)        2        476                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p011                        (Slave)        2        418                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p010                        (Slave)        2        275                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p009                        (Slave)        2        328                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    - p008                        (Slave)        2        510                     1 WAIT     PX Deq: Table Q Normal               1        8          8
    tells nothing about idleness, since this is just a snapshot.
    2) If you look at both execution plans, everything (row source, bytes, IN-OUT and TEMP columns) is the same, except the join order.
    In the first execution plan the join order is something like HASH(rnii, NL(di, mii)), while in the second it is HASH(NL(di, mii), rnii).
    Read this blog post.
    I think the long elapsed time of the first plan is due to the IDLE status of 8 slaves while the HASH table is getting built up.
    Don't think, just get the execution profile using extended SQL trace.
    3) My third question is regarding index access in PARALLEL. If you look at the second plan, there are 2 PX slaves doing 'db file sequential read' on the mii table and 2 PX slaves doing 'db file parallel read'. The latter is table prefetch. I wanted to understand how the index access (index on mii) got parallelised. Is it the index leaf block scan that runs in parallel, or the table block access (from index rowid)?
    Each slave performs an index range scan - it scans a part of the driving table and performs the NL join as usual for that part. That's it.

  • Double WINDOW SORT Operation

    Please review the following SQL and its execution plan. Why am I seeing 2 WINDOW SORT operations even though the analytic function "row_number" is used only once in the SQL?
    Also, in step 3 of the plan, why does "bytes" go from 35 GB (step 4) to 121 GB when the row count remains the same? In fact, since I'm selecting just the 1st row, both the row count and "bytes" should have gone down. Shouldn't they?
      SELECT orddtl.ord_dtl_key, orddtl.ld_nbr, orddtl.actv_flg,
             orddtl.ord_nbr
         FROM (SELECT /*+ parallel(od, 8) parallel(sc,8) */  od.ord_dtl_key, od.ld_nbr, od.actv_flg,
                      od.ord_nbr,
                      ROW_NUMBER () OVER (PARTITION BY od.ord_dtl_key, od.START_TS ORDER BY sc.START_TS DESC)
                                                                          rownbr
                 FROM edw.order_detail od LEFT OUTER JOIN edw.srvc_code sc
                      ON (    sc.srvc_cd_key = od.srvc_cd_key
                          AND od.part_nbr = sc.part_nbr
                          AND od.item_cre_dt >= sc.START_TS
                           AND od.item_cre_dt < sc.END_TS)
                 WHERE od.part_nbr = 11 ) orddtl
         WHERE orddtl.rownbr = 1;
    Execution Plan
    | Id  | Operation                      | Name              | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT               |                   |    88M|   121G|       |  2353K (65)| 00:33:07 |       |       |        |      |            |
    |   1 |  PX COORDINATOR                |                   |       |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)          | :TQ10002          |    88M|   121G|       |  2353K (65)| 00:33:07 |       |       |  Q1,02 | P->S | QC (RAND)  |
    |*  3 |    VIEW                        |                   |    88M|   121G|       |  2353K (65)| 00:33:07 |       |       |  Q1,02 | PCWP |            |
    |*  4 |     WINDOW SORT PUSHED RANK    |                   |    88M|    35G|    75G|  2353K (65)| 00:33:07 |       |       |  Q1,02 | PCWP |            |
    |   5 |      PX RECEIVE                |                   |    88M|    35G|       |  2353K (65)| 00:33:07 |       |       |  Q1,02 | PCWP |            |
    |   6 |       PX SEND HASH             | :TQ10001          |    88M|    35G|       |  2353K (65)| 00:33:07 |       |       |  Q1,01 | P->P | HASH       |
    |*  7 |        WINDOW CHILD PUSHED RANK|                   |    88M|    35G|       |  2353K (65)| 00:33:07 |       |       |  Q1,01 | PCWP |            |
    |*  8 |         HASH JOIN RIGHT OUTER  |                   |    88M|    35G|       |  1610K (92)| 00:22:39 |       |       |  Q1,01 | PCWP |            |
    |   9 |          PX RECEIVE            |                   |  1133K|    32M|       |  1197  (20)| 00:00:02 |       |       |  Q1,01 | PCWP |            |
    |  10 |           PX SEND BROADCAST    | :TQ10000          |  1133K|    32M|       |  1197  (20)| 00:00:02 |       |       |  Q1,00 | P->P | BROADCAST  |
    |  11 |            PX BLOCK ITERATOR   |                   |  1133K|    32M|       |  1197  (20)| 00:00:02 |   KEY |   KEY |  Q1,00 | PCWC |            |
    |  12 |             TABLE ACCESS FULL  |  SRVC_CODE        |  1133K|    32M|       |  1197  (20)| 00:00:02 |     1 |     1 |  Q1,00 | PCWP |            |
    |  13 |          PX BLOCK ITERATOR     |                   |    88M|    32G|       |   188K (27)| 00:02:39 |   KEY |   KEY |  Q1,01 | PCWC |            |
    |  14 |           TABLE ACCESS FULL    |    ORDER_DETAIL   |    88M|    32G|       |   188K (27)| 00:02:39 |     1 |     1 |  Q1,01 | PCWP |            |
    Predicate Information (identified by operation id):
       3 - filter("orddtl"."rownbr"=1)
       4 - filter(ROW_NUMBER() OVER ( PARTITION BY "od"."ORD_DTL_KEY","od"."START_TS" ORDER BY INTERNAL_FUNCTION("SC"."START_TS"(+))
                  DESC )<=1)
       7 - filter(ROW_NUMBER() OVER ( PARTITION BY "od"."ORD_DTL_KEY","od"."START_TS" ORDER BY INTERNAL_FUNCTION("SC"."START_TS"(+))
                  DESC )<=1)
       8 - access("od"."part_nbr"="SC"."part_nbr"(+) AND "SC"."SRVC_CD_KEY"(+)="od"."SRVC_CD_KEY")
           filter("od"."ITEM_CRE_DT"<"SC"."END_TS"(+) AND "od"."ITEM_CRE_DT">="SC"."START_TS"(+))

    Thanks Jonathan for your reply.
    This type of pattern happens quite frequently in parallel execution with aggregation. A layer of slave processes can do partial aggregation before passing a reduced result set to the query co-ordinator to finish the job.
    I wouldn't be 100% sure without building a model to check, but I think the logic of your query allows the eight slaves to identify each "rownumber() = 1" for the data set they have collected, and then allows the query coordinator to do the window sort on the eight incoming rows (for each key) and determine which one of the eight is ultimately the highest date.
    So is it a normal pattern? Will steps #7 and #4 do the same amount of work, as stated in the Predicate Information part of the execution plan?
    You’re correct! There are 8 slave processes that appear to be performing WINDOW CHILD PUSHED RANK (step #7 in the execution plan), as you see in the following output. Per the execution plan and your comment, each one appears to be finding a partial set of rows with row_number <= 1. It’s apparently doing lots of work and is very slow even with 8 processes. So I'm not sure how slow the QC would be doing the same work just by itself.
    And as you can see below, step #7 is very slow and half of the slaves are performing a multi-pass sort operation. Even though 35 GB was estimated for that operation, why is the work area size only 6-14 MB? It is also allocating much less PGA than expected. PGA_AGGREGATE_TARGET was set to approximately 11 GB, and this was the only query/operation on the instance at the time.
    Why is it not allocating more PGA for that operation? [My apologies for diverting from my original question.]
    I have also included PGA stats here, taken 5-10 minutes later than the other PQ session information. They still show that there is no shortage of PGA.
    Moreover, I have observed this behavior (under-allocation of PGA) for WINDOW SORT operations in other SQLs too. Is it normal behavior? I'm on 10.2.0.4.
    select
    decode(px.qcinst_id,NULL,username,
    ' - '||lower(substr(pp.SERVER_NAME,
    length(pp.SERVER_NAME)-4,4) ) )"Username",
    decode(px.qcinst_id,NULL, 'QC', '(Slave)') "QC/Slave" ,
    to_char( px.server_set) "SlaveSet",
    to_char(s.sid) "SID",
    to_char(px.inst_id) "Slave INST",
    decode(sw.state,'WAITING', 'WAIT', 'NOT WAIT' ) as STATE,
    case  sw.state WHEN 'WAITING' THEN substr(sw.event,1,30) ELSE NULL end as wait_event ,
    to_char(s.ROW_WAIT_OBJ#)  wait_OBID,
    decode(px.qcinst_id, NULL ,to_char(s.sid) ,px.qcsid) "QC SID",
    to_char(px.qcinst_id) "QC INST",
    px.req_degree "Req. DOP",
    px.degree "Actual DOP"
    from gv$px_session px,
    gv$session s ,
    gv$px_process pp,
    gv$session_wait sw
    where px.sid=s.sid (+)
    and px.serial#=s.serial#(+)
    and px.inst_id = s.inst_id(+)
    and px.sid = pp.sid (+)
    and px.serial#=pp.serial#(+)
    and sw.sid = s.sid
    and sw.inst_id = s.inst_id
    order by
      decode(px.QCINST_ID,  NULL, px.INST_ID,  px.QCINST_ID),
      px.QCSID,
      decode(px.SERVER_GROUP, NULL, 0, px.SERVER_GROUP),
      px.SERVER_SET,
      px.INST_ID
    UNAME        QC/Slave SlaveSet SID       Slave INS STATE    WAIT_EVENT                      WAIT_OBID QC SID QC INS Req. DOP Actual DOP
    APPS_ORD     QC                1936      2         WAIT     PX Deq: Execute Reply          71031      1936
    - p006      (Slave)  1        1731      2         WAIT     PX Deq: Execution Msg          71021      1936   2             8          8
    - p007      (Slave)  1        2159      2         WAIT     PX Deq: Execution Msg          71021      1936   2             8          8
    - p002      (Slave)  1        2090      2         WAIT     PX Deq: Execution Msg          71021      1936   2             8          8
    - p005      (Slave)  1        1965      2         WAIT     PX Deq: Execution Msg          71021      1936   2             8          8
    - p001      (Slave)  1        1934      2         WAIT     PX Deq: Execution Msg          71021      1936   2             8          8
    - p004      (Slave)  1        1843      2         WAIT     PX Deq: Execution Msg          71021      1936   2             8          8
    - p000      (Slave)  1        1778      2         WAIT     PX Deq: Execution Msg          71021      1936   2             8          8
    - p003      (Slave)  1        1751      2         WAIT     PX Deq: Execution Msg          71021      1936   2             8          8
    - p009      (Slave)  2        2138      2         NOT WAIT                                71031      1936   2             8          8
    - p012      (Slave)  2        1902      2         NOT WAIT                                71031      1936   2             8          8
    - p008      (Slave)  2        1921      2         NOT WAIT                                71031      1936   2             8          8
    - p013      (Slave)  2        2142      2         NOT WAIT                                71031      1936   2             8          8
    - p015      (Slave)  2        2091      2         NOT WAIT                                71031      1936   2             8          8
    - p014      (Slave)  2        2122      2         NOT WAIT                                71031      1936   2             8          8
    - p010      (Slave)  2        2146      2         NOT WAIT                                71031      1936   2             8          8
    - p011      (Slave)  2        1754      2         NOT WAIT                                71031      1936   2             8          8
    SELECT operation_type AS type                      ,
            workarea_address WADDR,
            operation_id as OP_ID,
            policy                                      ,
            vwa.sql_id,
            vwa.inst_id i#,
            vwa.sid                                     ,
            vwa.qcsid   QCsID,
            vwa.QCINST_ID  QC_I#,
            s.username uname,
            ROUND(active_time    /1000000,2)   AS a_sec ,
            ROUND(work_area_size /1024/1024,2) AS wsize ,
            ROUND(expected_size  /1024/1024,2) AS exp   ,
            ROUND(actual_mem_used/1024/1024,2) AS act   ,
            ROUND(max_mem_used   /1024/1024,2) AS MAX   ,
            number_passes                      AS p#,
            ROUND(tempseg_size/1024/1024,2)    AS temp
    FROM   gv$sql_workarea_active vwa ,
            gv$session s
    where  vwa.sid = s.sid
    and    vwa.inst_id = s.inst_id
    order by vwa.sql_id, operation_id, vwa.inst_id, username, vwa.qcsid
    TYPE            WADDR            OP_ID POLI SQL_ID         I#    SID  QCSID QC_I# UNAME                A_SEC      WSIZE        EXP        ACT        MAX   P#       TEMP
    WINDOW (SORT)   07000003D2B03F90     7 AUTO 8z5s5wdy94ty3   2   2146   1936     2 APPS_ORD            1181.22      13.59      13.59       7.46      90.98    1        320
    WINDOW (SORT)   07000003D2B03F90     7 AUTO 8z5s5wdy94ty3       2142   1936     2 APPS_ORD            1181.07       7.03       7.03       4.02      90.98    0        288
    WINDOW (SORT)   07000003D2B03F90     7 AUTO 8z5s5wdy94ty3       2091   1936     2 APPS_ORD            1181.06       7.03       7.03        4.5      90.98    0        288
    WINDOW (SORT)   07000003D2B03F90     7 AUTO 8z5s5wdy94ty3       1921   1936     2 APPS_ORD            1181.09      13.59      13.59       2.24      90.98    1        320
    WINDOW (SORT)   07000003D2B03F90     7 AUTO 8z5s5wdy94ty3       2138   1936     2 APPS_ORD            1181.16       7.03       7.03       1.34      90.98    0        288
    WINDOW (SORT)   07000003D2B03F90     7 AUTO 8z5s5wdy94ty3       1754   1936     2 APPS_ORD            1181.09      14.06      14.06       5.77      90.98    1        320
    WINDOW (SORT)   07000003D2B03F90     7 AUTO 8z5s5wdy94ty3       2122   1936     2 APPS_ORD            1181.15       6.56       6.56        .24      90.98    0        288
    WINDOW (SORT)   07000003D2B03F90     7 AUTO 8z5s5wdy94ty3       1902   1936     2 APPS_ORD            1181.12      14.06      14.06       9.12      90.98    1        320
    HASH-JOIN       07000003D2B03F28     8 AUTO 8z5s5wdy94ty3       2142   1936     2 APPS_ORD            1183.24      98.64      98.64     100.44     100.44    0
    HASH-JOIN       07000003D2B03F28     8 AUTO 8z5s5wdy94ty3       2138   1936     2 APPS_ORD            1183.24      98.64      98.64     100.44     100.44    0
    HASH-JOIN       07000003D2B03F28     8 AUTO 8z5s5wdy94ty3       2122   1936     2 APPS_ORD            1183.24      98.64      98.64     100.44     100.44    0
    HASH-JOIN       07000003D2B03F28     8 AUTO 8z5s5wdy94ty3       2091   1936     2 APPS_ORD            1183.24      98.64      98.64     100.44     100.44    0
    HASH-JOIN       07000003D2B03F28     8 AUTO 8z5s5wdy94ty3       1921   1936     2 APPS_ORD            1183.24      98.64      98.64     100.44     100.44    0
    HASH-JOIN       07000003D2B03F28     8 AUTO 8z5s5wdy94ty3       1902   1936     2 APPS_ORD            1183.24      98.64      98.64     100.44     100.44    0
    HASH-JOIN       07000003D2B03F28     8 AUTO 8z5s5wdy94ty3       2146   1936     2 APPS_ORD            1183.24      98.64      98.64     100.44     100.44    0
    HASH-JOIN       07000003D2B03F28     8 AUTO 8z5s5wdy94ty3       1754   1936     2 APPS_ORD            1183.24      98.64      98.64     100.44     100.44    0
                                                              sum                                                872.07            838.21
    PGA Stats – taken 5-10 minutes later than above.
    select name, decode(unit,'bytes',round(value/1048576,2)||' MB', value) value from v$pgastat
    NAME                                               VALUE
    aggregate PGA target parameter                     11264 MB
    aggregate PGA auto target                          9554.7 MB
    global memory bound                                1024 MB
    total PGA inuse                                    902.21 MB
    total PGA allocated                                3449.64 MB
    maximum PGA allocated                              29155.44 MB
    total freeable PGA memory                          2140.56 MB
    process count                                      107
    max processes count                                379
    PGA memory freed back to OS                        77240169.56 MB
    total PGA used for auto workareas                  254.14 MB
    maximum PGA used for auto workareas                22797.02 MB
    total PGA used for manual workareas                0 MB
    maximum PGA used for manual workareas              16.41 MB
    over allocation count                              0
    bytes processed                                    323796668.77 MB
    extra bytes read/written                           183362312.02 MB
    cache hit percentage                               63.84
    recompute count (total)                            2054320
    SELECT
    PGA_TARGET_FOR_ESTIMATE/1048576 ESTMTD_PGA_MB,
       PGA_TARGET_FACTOR PGA_TGT_FCTR,
       ADVICE_STATUS ADV_STS,
       BYTES_PROCESSED/1048576 ESTMTD_MB_PRCD,
       ESTD_EXTRA_BYTES_RW/1048576 ESTMTD_XTRA_MB,
       ESTD_PGA_CACHE_HIT_PERCENTAGE ESTMTD_CHIT_PCT,
       ESTD_OVERALLOC_COUNT O_ALOC_CNT
    FROM V$PGA_TARGET_ADVICE
    ESTMTD_PGA_MB PGA_TGT_FCTR ADV ESTMTD_MB_PRCD ESTMTD_XTRA_MB ESTMTD_CHIT_PCT O_ALOC_CNT
            1,408         .125 ON     362,905,053    774,927,577              32      19973
            2,816          .25 ON     362,905,053    571,453,995              39        709
            5,632           .5 ON     362,905,053    249,201,001              59          5
            8,448          .75 ON     362,905,053    216,717,381              63          0
           11,264            1 ON     362,905,053    158,762,256              70          0
           13,517          1.2 ON     362,905,053    153,025,642              70          0
           15,770          1.4 ON     362,905,053    153,022,337              70          0
           18,022          1.6 ON     362,905,053    153,022,337              70          0
           20,275          1.8 ON     362,905,053    153,022,337              70          0
           22,528            2 ON     362,905,053    153,022,337              70          0
           33,792            3 ON     362,905,053    153,022,337              70          0
           45,056            4 ON     362,905,053    153,022,337              70          0
           67,584            6 ON     362,905,053    153,022,337              70          0
           90,112            8 ON     362,905,053    153,022,337              70          0
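    One way to check the per-workarea caps the small 6-14 MB workareas point at: the limits come from hidden parameters derived from PGA_AGGREGATE_TARGET, as discussed in the reply further down this page. A hedged sketch (it queries X$ tables, so it has to run as SYS):

    ```sql
    -- Run as SYS. _smm_px_max_size is shared across all slaves of one
    -- parallel operation, so at DOP 8 each slave's workarea can be far
    -- smaller than _smm_max_size suggests - one possible cause here.
    SELECT a.ksppinm  AS parameter,
           c.ksppstvl AS value
      FROM x$ksppi a, x$ksppcv c
     WHERE a.indx = c.indx
       AND a.ksppinm IN ('_pga_max_size', '_smm_max_size', '_smm_px_max_size');
    ```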

  • What percentage of sorting is done in Memory?

    DB Version: 10g Release 2
    When you have a query with a large number of columns in the ORDER BY clause, or a query without an ORDER BY clause but with a large number of columns in the SELECT list, the PGA_AGGREGATE_TARGET might not be sufficient to do the job. So, what % of the PGA is set aside by Oracle for sorting? When this reserved % of memory (PGA_AGGREGATE_TARGET) is reached, the temp tablespace is used, i.e. a disk sort. Right?
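    As the reply below explains, there is no single reserved % of the PGA for sorting; per-workarea limits are derived from PGA_AGGREGATE_TARGET. Whether sorts actually stayed in memory can be checked from instance statistics. A hedged sketch:

    ```sql
    -- Cumulative since instance startup: in-memory vs. disk sorts, plus how
    -- many workarea executions were optimal, one-pass or multi-pass.
    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('sorts (memory)', 'sorts (disk)',
                    'workarea executions - optimal',
                    'workarea executions - onepass',
                    'workarea executions - multipass');
    ```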

    J.Kiechle wrote:
    DB Version: 10g Release 2
    When you have a query with a large number of columns in the ORDER BY clause, or a query without an ORDER BY clause but with a large number of columns in the SELECT list, the PGA_AGGREGATE_TARGET might not be sufficient to do the job. So, what % of the PGA is set aside by Oracle for sorting? When this reserved % of memory (PGA_AGGREGATE_TARGET) is reached, the temp tablespace is used, i.e. a disk sort. Right?
    It's probably a bit different and a bit more complex, but in a nutshell the PGA_AGGREGATE_TARGET defines the upper limit that Oracle should use for the PGA areas of the processes used to run the database.
    There is a non-tunable part that contributes to the PGA_AGGREGATE_TARGET, that can be significantly influenced e.g. by large PL/SQL collections or Java programs. The memory consumed by these can not be controlled by Oracle and therefore can't be reduced, but will be used in the overall calculation.
    The tunable part consists of the SQL workareas that are used to sort, group or hash data as part of the SQL execution.
    The value of PGA_AGGREGATE_TARGET determines several internal parameters, among them _pga_max_size, _smm_max_size and _smm_px_max_size. These internal parameters control the maximum amount of memory that can be used by a single process (_pga_max_size), by a serial operation resp. "workarea" (_smm_max_size), and the maximum memory available for the operation of a parallel slave in a parallel operation (_smm_px_max_size).
    There is a significant difference between 10.2 and previous Oracle releases regarding these internal parameters:
    In pre-10.2 databases _pga_max_size defaults to 200M, and _smm_max_size is the least of 5% of PGA_AGGREGATE_TARGET, 50% of _pga_max_size, and 100M (if you set _pga_max_size larger than the default value). The _smm_px_max_size is 30% of PGA_AGGREGATE_TARGET and is divided by the parallel degree of the parallel operation to determine, together with _smm_max_size, the upper limit of the workarea size of a single parallel slave.
    In 10.2 the upper limits are driven by _smm_max_size, which is derived from PGA_AGGREGATE_TARGET and can be larger than 100M if you have a PGA_AGGREGATE_TARGET greater than 1GB. The _pga_max_size is then two times _smm_max_size.
    So in pre-10.2 databases the default maximum size of a single sort is 100M, provided you've set PGA_AGGREGATE_TARGET to 2GB or greater, but a single process - that could have multiple workareas or sorts simultaneously - is not allowed to allocate more than 200MB in total.
    In 10.2 and later you can have more than 100M for a single sort if you set your PGA_AGGREGATE_TARGET larger than 1GB, and a process can consume more than 200M in that case, too.
    For more information about these parameters, see e.g. these two interesting notes:
    http://christianbilien.wordpress.com/2007/05/01/two-useful-hidden-parameters-smmmax_size-and-pgamax-size/
    http://www.jlcomp.demon.co.uk/untested.html
    The amount of memory that remains after subtracting the non-tunable memory allocated from the PGA_AGGREGATE_TARGET, together with the number and size of concurrent tunable workareas, determines the amount of memory available for newly established workareas. Oracle tries its best to allocate the available memory to all current workareas while at the same time attempting to stay below the PGA_AGGREGATE_TARGET. Obviously, if many workareas are active concurrently, the amount of memory available for each workarea will be less than the upper limits outlined above, down to a lower limit defined by the internal parameter _smm_min_size (the greater of 128k and 0.1% of PGA_AGGREGATE_TARGET).
    Given these constraints it is possible that Oracle consumes more than the PGA_AGGREGATE_TARGET, e.g. if the non-tunable part already takes a significant part of the PGA_AGGREGATE_TARGET. You can see this e.g. in V$PGASTAT if the "over allocation count" statistic value is > 0.
    The cost based optimizer also uses the information derived from PGA_AGGREGATE_TARGET to calculate the cost of a sort resp. to estimate whether a sort will be in-memory or has to spill to disk.
    There are various views available that allow you to monitor the workarea information, among them are V$PGASTAT for an overall information regarding the PGA consumption, V$PGA_TARGET_ADVICE, V$PGA_TARGET_ADVICE_HISTOGRAM and V$SQL_WORKAREA_HISTOGRAM, V$SQL_WORKAREA_ACTIVE and V$SQL_WORKAREA for monitoring individual workareas.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
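    As an illustration of one of the monitoring views mentioned above, V$SQL_WORKAREA_HISTOGRAM summarizes, per workarea-size bucket, how many executions completed optimal (fully in memory), one-pass or multi-pass. A quick sketch:

    ```sql
    -- Which workarea sizes are spilling: optimal vs. one-pass vs. multi-pass
    -- executions per size bucket, cumulative since instance startup.
    SELECT low_optimal_size/1024        AS low_kb,
           (high_optimal_size + 1)/1024 AS high_kb,
           optimal_executions,
           onepass_executions,
           multipasses_executions
      FROM v$sql_workarea_histogram
     WHERE total_executions <> 0
     ORDER BY low_optimal_size;
    ```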

  • Pga_used_mem column from v$process

    Hi Guys,
    1. Does the "pga_used_mem" in v$process consist of both tunable and untunable memory?
    2. v$sql_workarea_active <-- does this only show tunable memory (sorting etc.)?
    3. v$process_memory <-- does this only show untunable memory?
    thanks!

    Why are you posting the same questions twice? v$process_memory
    Your other post was already answered but you didn't give anyone there credit for helping or answering your question.
    All of the system views are described in detail in the database reference guide.
    11gr2
    http://docs.oracle.com/cd/E14072_01/server.112/e10820/dynviews_2100.htm#REFRN30186
    11gr1
    http://docs.oracle.com/cd/B28359_01/server.111/b28320/dynviews_3058.htm
    10gr2
    http://docs.oracle.com/cd/B19306_01/server.102/b14237/dynviews_2023.htm
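    To connect the views from the question: v$process.pga_used_mem is the total for the process, while v$process_memory (available from 10g onwards) breaks that total into categories - 'SQL' roughly corresponds to the tunable workarea memory, 'PL/SQL' and 'Other' to the untunable part. A hedged sketch:

    ```sql
    -- Per-process PGA broken down by category, with the process total alongside.
    SELECT p.spid,
           pm.category,
           ROUND(pm.allocated/1024/1024, 2)   AS alloc_mb,
           ROUND(pm.used/1024/1024, 2)        AS used_mb,
           ROUND(p.pga_used_mem/1024/1024, 2) AS process_total_mb
      FROM v$process p
      JOIN v$process_memory pm ON pm.pid = p.pid
     ORDER BY p.spid, pm.category;
    ```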

  • Query regarding direct path write

    Hi All,
    We have Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production running on Redhat 4.
    We have a performance issue with high disk utilization.
    Looking at the statspack report, the top event is always direct path write.
    Reading about direct path write, I found that it is a consequence of direct path inserts / large sort operations.
    I checked v$sort_usage and identified the query that is listed there most of the time, but the segtype column is HASH for that query.
    So can I conclude that this is the query causing the direct path writes? The confusing thing is that SEGTYPE is HASH.
    Please let me know your comments, and also whether there is a way I can find the SQL statements that are writing to the temp tablespace using direct path write.
    Regards
    Vinay

    v$sql_workarea_active gives information about the active work areas in the system. It shows the operation type, sid, memory usage and temp tablespace usage. You can check this to see what session is using the temporary tablespace for work areas.
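    Complementing the reply above, a hedged sketch of mapping temp consumers back to their SQL on 9i (where the view is named v$sort_usage; it was renamed v$tempseg_usage in 10g). SEGTYPE = HASH there does mean a hash join spilled to temp, which is written with direct path writes:

    ```sql
    -- Temp segment consumers with the SQL text of the cursor that created
    -- them; v$sort_usage carries that cursor's address/hash on 9i.
    SELECT s.sid, s.serial#, u.segtype, u.blocks, q.sql_text
      FROM v$sort_usage u
      JOIN v$session s ON s.saddr = u.session_addr
      LEFT JOIN v$sql q ON q.address    = u.sqladdr
                       AND q.hash_value = u.sqlhash
     ORDER BY u.blocks DESC;
    ```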

  • PGA & Parallelism

    Hello,
    In dedicated server mode, each dedicated server process has its own PGA.
    For parallel server processes, how does this mechanism work?
    Does the coordinator process have this area and the slaves use the coordinator's PGA, or
    does each parallel slave process have its own private PGA?
    Thanks in advance,

    Each PX slave, as well as the QC, has its own private memory to use for workareas, used exclusively, though the V$SQL_WORKAREA_ACTIVE view may report that some PX slaves use the very same workarea, for example:
    select sql_id
          ,workarea_address
          ,operation_type
          ,operation_id
          ,trunc(sum(work_area_size) / 1024 / 1024) "WA Size, MB"
          ,sum(tempseg_size) / 1024 / 1024 "Temp Size, MB"
          ,number_passes
          ,count(*)
      from v$sql_workarea_active
    where sid <> (select sid from v$mystat where rownum = 1)
    group by sql_id, workarea_address, operation_type, operation_id, number_passes;
    SQL_ID        WORKAREA_ADDRESS OPERATION_TYPE                 OPERATION_ID WA Size, MB Temp Size, MB NUMBER_PASSES   COUNT(*)
    134yn9m7w133v 000000014FFCED50 HASH-JOIN                                 9        1771           973             0          7
    134yn9m7w133v 000000014FFB87E8 HASH-JOIN                                 9         758           417             0          3
    134yn9m7w133v 000000014FFC5340 HASH-JOIN                                 9         254           139             0          1
    134yn9m7w133v 000000014FFAF0B0 HASH-JOIN                                 9         253           139             0          1
    This was captured on 10.2.0.3 IIRC.
