PGA info

Hi,
Is there any query for the PGA? For the SGA we can find the currently parsed statements via V$SQLAREA; is there something similar for the PGA?

http://idevelopment.info/data/Oracle/DBA_tips/Tuning/TUNING_16.shtml
Hth
Girish Sharma
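There is no PGA analogue of V$SQLAREA (parsed cursors live in the shared pool, which is part of the SGA), but you can see how much PGA each session is using. A minimal sketch using the standard V$PROCESS, V$SESSION and V$PGASTAT views (exact column availability varies slightly by version):

```sql
-- Per-session PGA usage, largest consumers first
SELECT s.sid,
       s.username,
       ROUND(p.pga_used_mem  / 1024 / 1024, 2) AS pga_used_mb,
       ROUND(p.pga_alloc_mem / 1024 / 1024, 2) AS pga_alloc_mb,
       ROUND(p.pga_max_mem   / 1024 / 1024, 2) AS pga_max_mb
FROM   v$process p
       JOIN v$session s ON s.paddr = p.addr
ORDER  BY p.pga_alloc_mem DESC;

-- Instance-wide PGA statistics
SELECT name, value, unit FROM v$pgastat;
```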

Similar Messages

  • Please provide advice on tuning this query

I am on 10gR2.
    SQL> select * from v$version;
    BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for Solaris: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    SQL>
The SGA and PGA info is as follows:
    sql> @showpga
    NAME VALUE
    session uga memory 921,776
    session uga memory max 1,518,856
    session pga memory 1,569,048
    session pga memory max 1,896,728
    sum 5,906,408
    sql>
    sql> @vsga
    NAME VALUE
    Database Buffers 2,248,146,944
    Fixed Size 2,242,736
    Redo Buffers 10,813,440
    Variable Size 2,033,764,176
    sum 4,294,967,296
    The tables' info in the query:
    Tables info
    num_rows table_name last_analyzed
    54470 PA_PROJECTS_ALL 08-FEB-09
    2104470 PA_TASKS 08-FEB-09
    5420270 PA_RESOURCE_ASSIGNMENTS 08-FEB-09
    119610 PA_BUDGET_VERSIONS 08-FEB-09
The query shown below ran more than 2 hours to return 1,263,880 records.
    A) I ran it as:
    01:25:10 sql>> set autotrace trace
    01:25:22 sql>> SELECT
    01:25:32 2 'PRJ_'||UPPER(P.SEGMENT1),
    01:25:32 3 'PRJ_'||UPPER(P.SEGMENT1)||'_TSK_'||UPPER(T.TASK_NUMBER),
    01:25:32 4 UPPER('ACTIVITY '||P.SEGMENT1||', '||T.TASK_NUMBER||' - '||T.DESCRIPTION),
    01:25:32 5 UPPER(P.SEGMENT1||', '||T.TASK_NUMBER||' - '||T.DESCRIPTION),
    01:25:32 6 UPPER(P.SEGMENT1||', '||T.TASK_NUMBER)
    01:25:32 7 FROM PA_PROJECTS_ALL P
    01:25:32 8 , PA_TASKS T
    01:25:32 9 , PA_RESOURCE_ASSIGNMENTS A
    01:25:32 10 , PA_BUDGET_VERSIONS B
    01:25:32 11 WHERE P.PROJECT_ID = T.PROJECT_ID
    01:25:32 12 AND T.TASK_ID <> T.PARENT_TASK_ID
    01:25:32 13 AND T.PARENT_TASK_ID IS NOT NULL
    01:25:32 14 AND P.PROJECT_ID = B.PROJECT_ID
    01:25:32 15 AND P.PROJECT_ID = A.PROJECT_ID
    01:25:32 16 AND T.TASK_ID = A.TASK_ID
    01:25:32 17 AND B.BUDGET_VERSION_ID = A.BUDGET_VERSION_ID
    01:25:32 18 AND B.BUDGET_STATUS_CODE = 'B'
    01:25:32 19 AND B.BUDGET_TYPE_CODE = 'Current'
    01:25:32 20 AND B.CURRENT_FLAG = 'Y'
    01:25:32 21 /
    1263880 rows selected.
    set markup html preformat on
    Rem
    Rem Use the display table function from the dbms_xplan package to display the last
    Rem explain plan. Force serial option for backward compatibility
    Rem
    set linesize 152
    set pagesize 0
    select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));
    Execution Plan
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 1 | 106 | 25304 |
    | 1 | NESTED LOOPS | | 1 | 106 | 25304 |
    | 2 | HASH JOIN | | 12 | 636 | 25280 |
    | 3 | HASH JOIN | | 9968 | 350K| 3579 |
    | 4 | TABLE ACCESS FULL | PA_BUDGET_VERSIONS | 9968 | 223K| 3109 |
    | 5 | VIEW | index$_join$_001 | 54470 | 691K| 469 |
    | 6 | HASH JOIN | | | | |
    | 7 | INDEX FAST FULL SCAN | PA_PROJECTS_U1 | 54470 | 691K| 145 |
    | 8 | INDEX FAST FULL SCAN | PA_PROJECTS_U2 | 54470 | 691K| 321 |
    | 9 | INDEX FAST FULL SCAN | PA_RESOURCE_ASSIGNMENTS_U2 | 5420K| 87M| 21615 |
    | 10 | TABLE ACCESS BY INDEX ROWID| PA_TASKS | 1 | 53 | 2 |
    | 11 | INDEX UNIQUE SCAN | PA_TASKS_U1 | 1 | | 1 |
    Statistics
    1 recursive calls
    0 db block gets
    4668610 consistent gets
    460575 physical reads
    10220 redo size
    77725800 bytes sent via SQL*Net to client
    884947 bytes received via SQL*Net from client
    126389 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1263880 rows processed
    04:02:44 sql>>
It ran for about 2.5 hours.
    B)
Then I tried to force a hash join, since we have a huge SGA and PGA.
    sql>> set time on
    02:31:59 sql>> set autotrace trace
    02:32:28 sql>>
    02:32:28 sql>> SELECT /*+ use_hash(p t) */
    02:32:41 2 'PRJ_'||UPPER(P.SEGMENT1),
    02:32:41 3 'PRJ_'||UPPER(P.SEGMENT1)||'_TSK_'||UPPER(T.TASK_NUMBER),
    02:32:41 4 UPPER('ACTIVITY '||P.SEGMENT1||', '||T.TASK_NUMBER||' - '||T.DESCRIPTION),
    02:32:41 5 UPPER(P.SEGMENT1||', '||T.TASK_NUMBER||' - '||T.DESCRIPTION),
    02:32:42 6 UPPER(P.SEGMENT1||', '||T.TASK_NUMBER)
    02:32:42 7 FROM PA_PROJECTS_ALL P
    02:32:42 8 , PA_TASKS T
    02:32:42 9 , PA_RESOURCE_ASSIGNMENTS A
    02:32:42 10 , PA_BUDGET_VERSIONS B
    02:32:42 11 WHERE P.PROJECT_ID = T.PROJECT_ID
    02:32:42 12 AND T.TASK_ID <> T.PARENT_TASK_ID
    02:32:42 13 AND T.PARENT_TASK_ID IS NOT NULL
    02:32:42 14 AND P.PROJECT_ID = B.PROJECT_ID
    02:32:42 15 AND P.PROJECT_ID = A.PROJECT_ID
    02:32:42 16 AND T.TASK_ID = A.TASK_ID
    02:32:42 17 AND B.BUDGET_VERSION_ID = A.BUDGET_VERSION_ID
    02:32:42 18 AND B.BUDGET_STATUS_CODE = 'B'
    02:32:42 19 AND B.BUDGET_TYPE_CODE = 'Current'
    02:32:42 20 AND B.CURRENT_FLAG = 'Y'
    02:32:42 21 /
    1263880 rows selected.
    set markup html preformat on
    Rem
    Rem Use the display table function from the dbms_xplan package to display the last
    Rem explain plan. Force serial option for backward compatibility
    Rem
    set linesize 152
    set pagesize 0
    select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));
    Execution Plan
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 1 | 106 | 42350 |
    | 1 | HASH JOIN | | 1 | 106 | 42350 |
    | 2 | HASH JOIN | | 8 | 424 | 25280 |
    | 3 | HASH JOIN | | 9968 | 350K| 3579 |
    | 4 | TABLE ACCESS FULL | PA_BUDGET_VERSIONS | 9968 | 223K| 3109 |
    | 5 | VIEW | index$_join$_001 | 54470 | 691K| 469 |
    | 6 | HASH JOIN | | | | |
    | 7 | INDEX FAST FULL SCAN| PA_PROJECTS_U1 | 54470 | 691K| 145 |
    | 8 | INDEX FAST FULL SCAN| PA_PROJECTS_U2 | 54470 | 691K| 321 |
    | 9 | INDEX FAST FULL SCAN | PA_RESOURCE_ASSIGNMENTS_U2 | 5420K| 87M| 21615 |
    | 10 | TABLE ACCESS FULL | PA_TASKS | 1837K| 92M| 17041 |
    Statistics
    1 recursive calls
    0 db block gets
    535322 consistent gets
    355917 physical reads
    772 redo size
    79117543 bytes sent via SQL*Net to client
    884948 bytes received via SQL*Net from client
    126389 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1263880 rows processed
    04:48:07 sql>>
It still ran for about 2 hours.
Based on the info presented here, I would like your advice on how to improve this.
    TIA
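One observation from the autotrace statistics above, independent of the join order: 126,389 SQL*Net roundtrips for 1,263,880 rows is roughly 10 rows per roundtrip. Raising the SQL*Plus fetch size is a cheap experiment before touching the plan (a sketch; the wall-clock benefit depends on network latency):

```sql
-- SQL*Plus: fetch more rows per network roundtrip
set arraysize 500
set autotrace trace
-- re-run the query; "SQL*Net roundtrips to/from client" should drop
-- from ~126,000 to a few thousand for the same 1,263,880 rows
```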

  • Huge long time direct path read temp, but pga size is enough, one block p3

    Hi Gurus,
Can you please provide some pointers on my questions below? Thanks.
My environment:
    select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    OS: Linux 4 2.6.39-100.5.1.el5uek
Session operation: an UPDATE on a partitioned table with 4 partitions, 16 GB in total.
Session trace info:
The session stayed active, waiting on the wait event below, for more than 70 hours, while OS I/O and CPU were almost idle most of the time.
    WAIT #8: nam='direct path read temp' ela= 7615 file number=202 first dba=105072 block cnt=1 obj#=104719 tim=1344850223569499
    WAIT #8: nam='direct path read temp' ela= 5989 file number=202 first dba=85264 block cnt=1 obj#=104719 tim=1344850392833257
    WAIT #8: nam='direct path read temp' ela= 319 file number=202 first dba=85248 block cnt=1 obj#=104719 tim=1344850399563184
    WAIT #8: nam='direct path read temp' ela= 358 file number=202 first dba=85232 block cnt=1 obj#=104719 tim=1344850406016899
    WAIT #8: nam='direct path read temp' ela= 349 file number=202 first dba=85216 block cnt=1 obj#=104719 tim=1344850413023792
    WAIT #8: nam='direct path read temp' ela= 7975 file number=202 first dba=85200 block cnt=1 obj#=104719 tim=1344850419495645
    WAIT #8: nam='direct path read temp' ela= 331 file number=202 first dba=85184 block cnt=1 obj#=104719 tim=1344850426233450
    WAIT #8: nam='direct path read temp' ela= 2641 file number=202 first dba=82880 block cnt=1 obj#=104719 tim=1344850432699800
pgastat (VALUE/1024/1024, i.e. values shown in MB):
NAME                                     VALUE (MB)
aggregate PGA target parameter           18432
aggregate PGA auto target                16523.1475
global memory bound                      1024
total PGA inuse                          75.7246094
total PGA allocated                      162.411133
maximum PGA allocated                    514.130859
total freeable PGA memory                64.625
PGA memory freed back to OS              40425.1875
total PGA used for auto workareas        2.75195313
maximum PGA used for auto workareas      270.407227
total PGA used for manual workareas      0
maximum PGA used for manual workareas    24.5429688
bytes processed                          110558.951
extra bytes read/written                 15021.2559
Most of the operations in the PGA (via a query on V$SQL_WORKAREA_ACTIVE):
IDX maintenance (sort)
    My questions:
1. Why does 'direct path read temp' read just one block per call? My understanding is that this event can read either one or multiple blocks per read call; why does it keep reading a single block in my session?
2. My PGA is big enough; why can't this operation be handled entirely in PGA memory, instead of spilling to the temp tablespace and reading blocks back from disk?
Thanks for your input.
    Roy
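For readers following along: the workarea activity Roy refers to can be watched live through V$SQL_WORKAREA_ACTIVE. A sketch (column names per the 11.2 reference; TEMPSEG_SIZE is NULL when nothing has spilled):

```sql
-- Active workareas: operations currently using PGA, and temp spill size
SELECT sql_id,
       operation_type,
       policy,
       ROUND(actual_mem_used / 1024 / 1024) AS mem_used_mb,
       ROUND(work_area_size  / 1024 / 1024) AS wa_size_mb,
       number_passes,
       ROUND(tempseg_size    / 1024 / 1024) AS temp_mb
FROM   v$sql_workarea_active;
```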

    951241 wrote:
since the session (which was from a hard-coded application) is completed.
First of all, you showed wait events from SQL trace in the first post. Was tracing disabled in the latest execution?
    >
I just generated the AWR report for that period and got the long-elapsed-time SQL below:
    Elapsed Time (s) Executions Elapsed Time per Exec (s) %Total %CPU %IO SQL Id
    3,075.35 0 85.10 91.03 8.68 duhz2wtduz709
    524.11 1 524.11 14.50 99.29 0.30 3cpa9fxny9j35
so I got the execution plans below for these two SQLs:
select * from table(dbms_xplan.display_awr('&v_sql_id'));
duhz2wtduz709
    PLAN_TABLE_OUTPUT
    | Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT  |             |       |       |     4 (100)|          |
    |   1 |  UPDATE           | WORK_PAY_LINE |       |       |            |          |
    |   2 |   INDEX RANGE SCAN| WORK_PAY_LINE |     1 |    37 |     3   (0)| 00:00:01 |
Note
- automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold
I am not sure why the elapsed time in AWR is different from the time in the execution plan.
The "Time" column in an execution plan is an estimated time. In this execution plan Oracle expects to get 1 row, with an estimated time of 1 second.
So, you need to check why the estimated cardinality is so low; check the statistics on the table WORK_PAY_LINE.
You update 10 GB of a 16 GB table via an index range scan, which looks inefficient here for two reasons:
1. When a table is updated via an index range scan, optimized index maintenance is used. As a result, some amount of workarea memory (significant in your case) is required. The required size depends on the size and number of updated indexes and on the "global memory bound", 1 GB in your case.
2. If the required table buffers are not found in the cache, they are read from disk by single-block reads. If you used a full table scan instead, the buffers to update would most likely be found in the cache, because they would already have been read in by multiblock reads during the full table scan.
The figures from your AWR indicate that the session waited for I/O only ~9% of the time, and worked on CPU for 91%:
Elapsed Time (s) Executions Elapsed Time per Exec (s) %Total %CPU %IO SQL Id
3,075.35 0 85.10 91.03 8.68 duhz2wtduz709
This amount of CPU time is partially required for updating 10 GB of data, and partially for the sorting done during optimized index maintenance.
I would propose using a full table scan here.
You can also experiment by creating a fake (no-op) trigger on update: it makes optimized index maintenance impossible, so the usual index maintenance is used instead. As a result you can check the same update with the same execution plan (index range scan) but without optimized index maintenance and without the "direct path .. temp" wait events.
    Alexander Anokhin
    http://alexanderanokhin.wordpress.com/
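The fake-trigger experiment Alexander describes could be sketched like this (hypothetical trigger name; WORK_PAY_LINE is taken from the plan above; that a no-op row trigger disables optimized index maintenance is per his advice, not verified here):

```sql
-- A do-nothing row trigger: its mere existence forces Oracle to fall
-- back to usual index maintenance for UPDATEs on this table, so the
-- update can be retested without the "direct path .. temp" workareas.
CREATE OR REPLACE TRIGGER trg_wpl_noop
  BEFORE UPDATE ON work_pay_line
  FOR EACH ROW
BEGIN
  NULL;  -- intentional no-op
END;
/
-- Alternatively, force the full-table-scan approach directly:
-- UPDATE /*+ FULL(t) */ work_pay_line t SET ... WHERE ...;
```

Remember to drop the trigger afterwards (DROP TRIGGER trg_wpl_noop;) so it does not linger in production.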

  • Will this tiger woods pga '12 work on lion ??

Here's the link! Thanks!
http://www.amazon.com/Tiger-Woods-PGA-TOUR-12/dp/B005IRLHBG/ref=sr_1_12?ie=UTF8&qid=1316123742&sr=8-12

Look, I need to know because I have both the software and the computer, recently purchased but still unopened in their respective boxes. I just want to make sure they will work together before I open the boxes, set up the computer, and try to install and use the software. I'm not sure the people at Walmart gave me proper info when they sold me these products together.

  • Oracle 9i Automatic PGA Memory Management

    Hello,
my team and I are having difficulty changing the size of the PGA used by our server processes for HASH JOIN, SORT, etc. operators;
    here you can see the results of "select * from v$pgastat":
    [pgastat dynamic view results|http://pastebin.com/m210314dc]
We increased our pga_aggregate_target parameter successively from 1.7 GB initially to 4 GB and finally to 6 GB; the values of "global memory bound" and "aggregate PGA auto target" in the link above are still equal to 0.
I have been reading threads on the forum and the documentation (see below). I understand how the global memory manager (CKPT) computes the SQL memory target and then the global memory bound; as far as I understand, I can only "play" with the pga_aggregate_target value to increase the size of our PGAs (I exclude playing with hidden parameters).
    - Joze Senegacnik: Advanced Management of working areas in Oracle 9i/10g : http://tonguc.yilmaz.googlepages.com/JozeSenegacnik-PGAMemoryManagementvO.zip
    - Dageville Benoit and Zait Mohamed: SQL memory management in oracle 9i
Here is some information that could be useful:
OS: Solaris 10 (db running in a non-global zone)
Arch: 64-bit sparcv9 kernel modules
Physical memory: 32 GB (shared between all non-global zones)
Oracle version: 9.2.0.5 32-bit
    Values of init parameters and hidden parameters that could be relevant:
    [init parameters|http://pastebin.com/m40340cf4]
    [hidden parameters|http://pastebin.com/m50d74c53]
Possibly useful queries:
over the work area views, I use the following script:
    [wa_analysis.sql|http://pastebin.com/d606ebd9b]
    and the result of it:
    [result of script wa_analysis.sql|http://pastebin.com/m5f49a2e5]

    Joze Senegacnik wrote:
    - either your sessions are using a lot of memory for storing variables like pl/sql arrays which is subtracted from automatic management: PGA_AGGREGATE_TARGET - (aggregated persistent area + a part of the run time area of all server processes)
    - you are hitting a bug
- or maybe something else
I am really happy you came to this conclusion too; it is the same one we reached with my team, and we have submitted it to Oracle Support via Metalink SR 3-1216060641. We asked whether we hit the following bug (note 1), or whether we have a leak in PL/SQL or Java... or something else indeed.
note 1: PGA_AGGREGATE_TARGET Assigned Memory Is Left Unconsumed When Set High [ID 844542.1]
    Joze Senegacnik wrote:
I would like to know:
1.) What were the values for global memory bound and auto target immediately (or shortly) after the database restart, or when you increased them?
Just after the restart of the database, and just after the change of P_A_T, we queried v$pgastat immediately; the values of global memory bound and auto target were equal to 0 bytes.
2.) If you are able to change the value of PGA_AGGREGATE_TARGET (P_A_T) to 10 GB, what happens to global memory bound and auto target? They should be positive, at least for a short time. As this is a dynamic parameter, you can change it for a short time, run queries, and set it back.
We plan to do this tonight; we have a "heavy" ITIL change-management procedure that only allows changes approved by the change manager, and only during the night maintenance window on the production system. I will come back to you tomorrow. But we did increase from 1.7 GB to 4 GB to 6 GB; each time I queried v$pgastat within the next 2 minutes, and global memory bound and auto target were equal to 0 bytes.
3.) Have you checked at the OS level how much memory the server processes are using; do those numbers agree with what Oracle reports?
Not during problematic activity (active work areas performing HASH JOIN, SORT, etc. operators); unfortunately it is a production system, and even when it performs poorly we are not allowed to retry the poor queries. But if it happens again, I will.
    during low activities, here the results paste with the scripts I used:
    [pga processes info in oracle|http://pastebin.com/f2e540062]
I spooled the result rows of the previous script into /var/tmp/pga_processes.log, then looped over all process pids and extracted the pmap anon info like this:
cat /var/tmp/pga_processes.log | awk -F' ' '{print $5}' | xargs -n 1 -i pmap -x {} | grep -v 'Addres' | egrep 'Kb' 2>&1 > /var/tmp/pga_processes_os.log
then I merged the two files line by line with the Unix paste command; here are the results:
    [os and oracle pga informations|http://pastebin.com/f4135c8a6]
4.) How many server processes are running on your system on average/at maximum, and are you using only dedicated processes or also shared ones?
On average 250; we are using only dedicated processes.
5.) At times of low activity, is the global memory bound still 0, or does it become > 0?
I have been querying every 15 minutes for more than 24 hours of low activity; it still stays at 0.
5.) Are you experiencing paging/swapping at the OS level?
No; here are orca figures for details:
    [free memory|http://img509.imageshack.us/img509/5897/ohuron1asd2gauge1024xfr.png]
    swap
    [pagein pageout|http://img121.imageshack.us/img121/6946/ohuron1asd2gaugepginper.png]
    [memory usage|http://img19.imageshack.us/img19/2213/ohuron1asd2gaugeppkerne.png]
6.) Please post the result of: select * from X$QESMMSGA;
During low activity: [results X$QESMMSGA|http://pastebin.com/f61df7093]
While you answer my questions, I'll try to figure out what we can do to properly diagnose the problem. As you are on 9i, it is a little bit harder.
I am really grateful for your help; as we say in my country, "if you ever need two arms to carry something, call me."
--Jeremy Baumont

  • How to stop a report from killing the database by using all of the PGA

    I have an application that is already written in Java. It calls a package procedure that has inputs and outputs.
    This report kills the database: the DBA noticed it was eating all of the PGA and killing the system. Looking at the code, I see that one of the outputs is a
    select ttype(col1,col2,...), tadtype(col1,col2...)
    bulk collect into one of the output variables.
    Just running a sample, I noticed that the bulk collect is fetching almost 30,000 rows, and running it in one session uses almost 250 MB of PGA; when 10 users run the report, the system dies. I have been told that changing the Java code will not happen. Is there something I can do in the package to send the data back using BULK COLLECT ... LIMIT xxxx?
    I tried using 1000, but the report only sees the last 1000 rows. There are other output variables, so I am not sure how to avoid keeping the whole collection until I send it back with the rest of the output variables. I am using 10.1.0.5. I am not sure if I can somehow write a function to pipe the rows back to the client without keeping the whole mess.
    The pseudocode is something like this:
    procedure x (
      pid                   IN  s.SESSION_ID%TYPE
    , piv_sort_on           IN  VARCHAR2
    , piv_bno               IN  b.NO%TYPE
    , piv_batch_type        IN  VARCHAR2
    , piv_vt_cache_flag     IN  VARCHAR2
    , potab_trans           OUT TAB_TRANS
    , potab_addr            OUT TAB_ADDR
    , pon_net_count         OUT NUMBER
    , pov_net_amount        OUT VARCHAR2
    , pov_currency_symbol   OUT VARCHAR2
    , pov_flag              OUT TFLAG%TYPE
    , pov_region_short_name OUT R.NAME%TYPE
    , pov_error_msg         OUT VARCHAR2
    )
    -- do work ...
    -- huge bulk collect into potab_trans, potab_addr -- about 20,000 - 50,000 rows; want to change this
    -- so the output is sent back without keeping the huge collections
    -- more work for the rest of the output variables
    end;

    It is expected behaviour.
    When running PL/SQL, you are using the PL engine. This engine makes calls to the SQL engine whenever it hits SQL in your PL/SQL code, and data has to be transferred between the PL and SQL engines. All this is pretty neat, as it allows you to mix the source code of two very different languages, PL and SQL, combine them seamlessly, and treat them as if they were a single language. It makes development, maintenance, and design a lot better.
    Each time your PL code uses SQL (opening a cursor, fetching from a cursor), a context switch from the PL engine to the SQL engine is required. Context switching is unavoidable, but as it comes with a performance penalty, it needs to be reduced as far as possible.
    Enter bulk collect. Instead of doing 100 context switches in a cursor fetch loop to fetch 100 rows, a single bulk collect can be used to collect the 100 rows. 100 context switches versus 1 context switch.
    Okay, but where do those 100 rows go when bulk collecting? They have to be stored by the PL engine. The PL engine uses PGA memory, so it needs to allocate PGA to store those 100 rows.
    If you bulk collect 1,000 rows, it needs to allocate PGA for 1,000 rows. If a million rows... ouch.
    So a bulk collect decreases context switching. However, it increases the demand for PGA. It is therefore a balancing act when it comes to performance: decreasing context switching at the cost of an increase in PGA.
    And this is why it is mostly mandatory that you use the LIMIT clause when bulk collecting. Bulk collect 100 to 1,000 rows at a time. As you are managing the number of rows that can be bulk collected per context switch, you are managing the amount of PGA expended.
    Last comment: you mention that the driving app is Java. Now I hope that this does not call PL/SQL, with PL/SQL doing a bulk collect, and then passing that collection back to Java. That is, plainly put, very stupid. Why?
    The SQL engine has an excellent db buffer cache, and Java can bulk fetch from it directly (one can set the array fetch size in an Oracle client driver). It makes absolutely no sense to use the PL engine as an intermediate cache: have the PL engine fetch rows from the SQL engine cache, cache those rows in (expensive) PGA memory, and then pass that to Java. It is a lot slower as it has more moving parts, and it cannot scale because this design uses very expensive server memory called PGA as a cache, instead of relying on the SQL db buffer cache that was explicitly designed for this very purpose.
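    The fetch loop described above can be sketched in PL/SQL as follows. This is a minimal illustration, not the poster's actual code: the cursor and its query are hypothetical stand-ins, and the per-chunk processing is left as a stub.
    DECLARE
      -- Hypothetical cursor; substitute the report's real query.
      CURSOR c IS SELECT object_name FROM user_objects;
      TYPE t_tab IS TABLE OF c%ROWTYPE;
      l_rows t_tab;
    BEGIN
      OPEN c;
      LOOP
        -- Fetch at most 1000 rows per context switch, capping PGA use.
        FETCH c BULK COLLECT INTO l_rows LIMIT 1000;
        EXIT WHEN l_rows.COUNT = 0;
        FOR i IN 1 .. l_rows.COUNT LOOP
          NULL;  -- process (or stream out) each chunk here
        END LOOP;
      END LOOP;
      CLOSE c;
    END;
    /
    Note that each FETCH overwrites l_rows, which is why a LIMIT of 1000 left the original poster with "only the last 1000 rows": with LIMIT, each chunk must be consumed (or appended elsewhere) inside the loop, not after it.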

  • PGA vs SGA

    hi,
    I read in the documentation that the PGA holds individual user data such as bind variables and session info.
    How does the PGA communicate with the SGA during execution?
    What are the components of the PGA?
    thanks
    with regards

    user3266490 wrote:
    hi,
    I am totally confused about the PGA and the shared pool. Is there any relation between the library cache and the PGA?
    When a user executes a select, insert, update, or delete, is that in the library cache or the PGA?
    I read that parsing, optimization, execution, and fetching all happen in the library cache, so why did Oracle create the runtime area in the PGA?
    So why is there confusion? The parsing and execution happen in the SGA because, if someone else requires the same thing - for example, the same execution plan or the same data buffers - it should all be accessible from the SGA, which is shared among the users. But the PGA contains the data structures which don't work this way! Answer this: when you do an ORDER BY, does it mean that all the other users also start seeing the data in that order by default? The answer is no. The sorted data is kept only in your own PGA. Similarly, a bind variable's cursor is shared among the users, but two different users need to use two different values for it. Where is it best to keep those different values, if not in their own PGAs? Hence it is the private memory of the sessions.
    HTH
    Aman....
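    The session-private nature of the PGA is easy to see from the dynamic performance views. A sketch of a query against V$MYSTAT (it assumes SELECT privilege on the V$ views; each session sees only its own counters there, precisely because PGA/UGA memory is private):
    -- Current and peak PGA/UGA memory for *this* session only
    SELECT sn.name, ms.value
    FROM   v$mystat   ms
    JOIN   v$statname sn ON sn.statistic# = ms.statistic#
    WHERE  sn.name IN ('session pga memory', 'session pga memory max',
                       'session uga memory', 'session uga memory max');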

  • Batch processing PGA memory settings.

    I was tweaking the 'area' parameters for my batch processing run on the database (as Oracle suggests here):
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams202.htm#sthref833
    "You might want to adjust this parameter for decision support systems, batch jobs, or large CREATE INDEX operations."
    The one thing I noticed is that when I increase sort_area_size (in manual mode), hash_area_size automatically becomes twice the size of sort_area_size. But this doesn't happen the other way around. Why?
    XX@d > sho parameter area
    NAME                                 TYPE        VALUE
    bitmap_merge_area_size               integer     1048576
    create_bitmap_area_size              integer     8388608
    hash_area_size                       integer     131072
    sort_area_retained_size              integer     0
    sort_area_size                       integer     65536
    workarea_size_policy                 string      AUTO
    XX@d > alter session set workarea_size_policy=manual;
    Session altered.
    Elapsed: 00:00:00.00
    XX@d > sho parameter area
    NAME                                 TYPE        VALUE
    bitmap_merge_area_size               integer     1048576
    create_bitmap_area_size              integer     8388608
    hash_area_size                       integer     131072
    sort_area_retained_size              integer     0
    sort_area_size                       integer     65536
    workarea_size_policy                 string      MANUAL
    XX@d > alter session set sort_area_size=1024000;
    Session altered.
    Elapsed: 00:00:00.01
    XX@d > sho parameter area
    NAME                                 TYPE        VALUE
    bitmap_merge_area_size               integer     1048576
    create_bitmap_area_size              integer     8388608
    hash_area_size                       integer     2048000
    sort_area_retained_size              integer     0
    sort_area_size                       integer     1024000
    workarea_size_policy                 string      MANUAL
    XX@d > alter session set hash_area_size=4096000;
    Session altered.
    Elapsed: 00:00:00.01
    XX@d > sho parameter area
    NAME                                 TYPE        VALUE
    bitmap_merge_area_size               integer     1048576
    create_bitmap_area_size              integer     8388608
    hash_area_size                       integer     4096000
    sort_area_retained_size              integer     0
    sort_area_size                       integer     1024000
    workarea_size_policy                 string      MANUAL
    I also have a similar post here, but thought of opening another thread since the question is different:
    Manual pga management for Batch processing.

    Because by default hash_area_size is 2 * sort_area_size:
    http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96536/ch172.htm
    And sort_area_size by default is 65536:
    http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96536/ch1198.htm
    Gints Plivna
    http://www.gplivna.eu

  • SGA, PGA and SHMMAX Setting

    If I have set SHMMAX to 8GB (on 64bit linux) value, does that mean that my SGA+PGA values need to remain within 8GB limit? If my SGA size is 6GB and PGA size is 4GB, does that mean that my OS is going to allocate two shared memory segments?

    The SHMMAX parameter defines the maximum size (in bytes) of a single shared memory segment and should be set large enough for the largest SGA. The PGA is allocated from process-private memory, not from shared memory segments, so it does not count toward SHMMAX. If SHMMAX is smaller than the SGA, the OS will allocate multiple shared memory segments to hold it.
    http://www.idevelopment.info/data/Oracle/DBA_tips/Linux/LINUX_8.shtml
    http://books.google.co.in/books?id=2ImPFP6Yk64C&pg=PA357&lpg=PA357&dq=PGA+is+included+in+SHMMAX&source=bl&ots=On3S7-CEX0&sig=LhOYOO946hrPZh-cIUEzWNiwpRg&hl=en&ei=xyLOTJqaC4yYvAONm7T3Dw&sa=X&oi=book_result&ct=result&resnum=5&ved=0CCwQ6AEwBA#v=onepage&q&f=false
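    For example (using the sizes from the question as an assumption), an 8 GB SHMMAX would be set in /etc/sysctl.conf as:
    # kernel.shmmax is in bytes; 8 GB = 8 * 1024 * 1024 * 1024
    kernel.shmmax = 8589934592
    Apply it with sysctl -p (as root), then verify with cat /proc/sys/kernel/shmmax.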
    Please close the thread as answered.

  • Calculation Schema - RM0002 - Info Record

    Hi everyone,
    Good evening!
    I'd like to ask for some help. I am on an SAP implementation project, and while testing the request-for-quotation process, I ran into a problem when saving the market price. The system raises the following error: "Condition type for price could not be determined, message no. 06657".
    Examining the situation, I found that the process updates the Info Record with the quotation information, but at that step the system complains that condition WOBT is not registered in calculation schema RM0002.
    I checked the calculation schema and confirmed that this condition was not registered. I registered the condition in the schema, but I still get the same error.
    During the analysis, I found that this condition is used in the purchase order determination schema, among other things, and is consequently registered in schema RM0000.
    Does anyone happen to have an idea how to solve this problem? I searched for notes and forum posts, without success.
    Thanks in advance for the help!
    Regards,
    Raphael.

    Marcos,
    Problem solved! There was an issue in a configuration, which has been redone, and everything is OK now:
    "Determine calculation schema for market price determination", under "Determine schema determination".
    Thank you very much!
    Regards,
    Raphael

  • Partner application access to portal login info

    How can an SSO partner application (Java) tell whether or not a user has logged in to Portal?
    I need to log activity in a public application servlet, so I'd like to log the user as PUBLIC if not logged in or as their actual userid.
    I don't seem to have access to this info until the user has visited a secure part of the app.
    Any pointers would be appreciated.
    Thanks
    Rob

    DIY answer...
    The kludge I used to get round this was:
    Make a PL/SQL item which displays a Login or Logout link as appropriate, based on the current userid from portal.wwctx_api.get_user.
    The login link goes to a secure portal page called FORCE_LOGIN, passing a URL parameter called nextPageURL which contains the URL of the next page to show after the login is complete. You can use portal.wwpro_api_parameters.get_value('_pageid', 'a'); to help build the current page URL if you want to return to the current page.
    The FORCE_LOGIN page contains a PL/SQL item which builds an IFRAME whose src is a URL to my app servlet, ForceLoginServlet, passing on the nextPageURL parameter. Use portal.wwpro_api_parameters.get_value('nextPageURL', 'a'); to help with that.
    The ForceLoginServlet is a secure servlet (set up in web.xml), so requesting it forces a silent authentication to my app. All the servlet does is emit HTML that redirects back to the URL in nextPageURL.
    Horrible! But it does the job.
    If anyone knows a better way of doing this, please tell me.
    Rob

  • How to create a report to bring all data from two different Info providers

    Hi All,
    I have a peculiar problem while creating a report. I have two custom InfoProviders, one DSO and one Cube, with only two common fields between them. I need to create a report that displays all the values from the DSO, but the user must be able to restrict the selection on one of the fields in the Cube.
    Here is an example
    DSO contents:
    DocNum    DocItem    DocText    Amount    Quantity
    10000     10         ABC        100       10
    10001     20         DSN        200       10
    10005     20         DSN        200       10
    Z1003     10         CAN        500       1
    Cube contents:
    DocNum    DocItem    Date         InvoiceAmt
    10000     10         1/10/2009    50
    10001     20         2/20/2009    100
    10005     20         2/25/2009    100
    When the user selects dates from 1/10/2009 to 2/20/2009, the report needs to display:
    DocNum    DocItem    DocText    Amount    Quantity
    10000     10         ABC        100       10
    10001     20         DSN        200       10
    I hope this was clear. I would really appreciate it if anyone can answer how to resolve this problem. I cannot add the date field to the DSO, and I also have DocNum and DocItem as user selection fields in the report.
    Thank you all in advance; I would really appreciate your suggestions.
    Regards
    Chinna

    Hi chinna,
    There are two possible options I can think of, but both may compromise performance:
    1. Create an InfoSet and then a query on top of it, provided we have a one-to-one relation between the two targets. That is, the combination of DocNum and DocItem is not duplicated in either the cube or the ODS.
    2. Create a master data object keyed on DocNum and DocItem, with date as an attribute. Load it from the cube data and make date a navigational attribute.
    Use this navigational attribute for selection in your report.
    Let us know if you require any further info.
    Naveen.A

  • End routine to populate Info-cube.

    Hi,
    Is it possible to load fields of an InfoCube using end routines in the following scenarios?
    1. Loading fields of the InfoCube by referencing a master data table in the end routine.
    2. Loading fields of the InfoCube by referencing DSO fields in the end routine.
    3. Loading fields of the InfoCube by referencing fields of another InfoCube in the end routine.
    Please advise.

    Hi Stalin,
    Before answering your question, you need to understand something about the "end routine" and the "expert routine".
    End routine:
    - RESULT_FIELDS and RESULT_PACKAGE are available.
    - The end routine contains only those fields available in the data target.
    Start routine:
    - SOURCE_FIELDS and SOURCE_PACKAGE are available.
    - The start routine contains only those fields coming from the source.
    Expert routine:
    - SOURCE_FIELDS, SOURCE_PACKAGE, RESULT_FIELDS and RESULT_PACKAGE are all available.
    So, if you want to write code to look up another cube, and the lookup needs to test a condition using source fields, then the expert routine is the only option.
    For example:
    My data target contains fields x, y and z (these become result fields).
    The source contains field a (it becomes a source field).
    Now, if I want to write a lookup like "select fields x, y and z from the other cube where my field a equals the other cube's field a", I am accessing both source fields and result fields, so the only option is the expert routine.
    If you want to write code using only the result fields, then the end routine is enough.
    Thanks,
    Gowd

  • My phone won't let me download anything, even free stuff or updates. It keeps saying something is wrong with my billing info, so I fixed it, but I still can't download. I signed out and back in, still nothing. Please help, I'm getting angry

    My phone won't let me download anything, even free stuff or updates. It keeps saying something is wrong with my billing info, so I fixed it, but I still can't download. I signed out and back in, still nothing. Please help, I'm getting angry.

    If it says your billing info is wrong that means that your credit card issuer is refusing to approve your account. You will have to solve the problem with your bank or credit card company.

  • How do I delete my credit card info from my phone? It won't let me update or download apps because I only have 50 cents on the card

    How do I delete my credit card info from my phone? Every time I try to update or download apps, it won't let me because I only have 50 cents on my card. So I need help now before I smash this ****** *** phone and go with a Galaxy S5.

    This does not make much sense.
    Please explain.
    Are you getting an error message?  What does it say?
