Difference Between SGA_TARGET and SGA_MAX_SIZE

Can anyone explain the difference between
SGA_TARGET and SGA_MAX_SIZE?
Thanks in advance.

sga_max_size -- This parameter sets the hard limit up to which sga_target can be adjusted dynamically. At instance startup Oracle allocates sga_max_size in RAM (or, if it is not set, derives it from the sum of the existing pool sizes), so to avoid wasting RAM it can be a good idea to keep sga_max_size and sga_target at the same value. There may be times, however, when you want headroom to adjust for peak loads: by setting this parameter higher than sga_target, you allow the sga_target parameter to be raised dynamically.
sga_target -- This parameter is new in Oracle Database 10g and reflects the total memory footprint the SGA can consume. Within its boundary it includes the fixed SGA and other internal allocations, the redo log buffers, the shared pool, the Java pool, the streams pool, the buffer cache, the keep/recycle caches and, if they are specified, the non-standard block size caches.
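For illustration, a minimal SQL*Plus sketch of how these two parameters are typically inspected and adjusted; the sizes are examples only, not recommendations:

    -- Current hard limit and target
    SHOW PARAMETER sga_max_size
    SHOW PARAMETER sga_target

    -- With sga_max_size = 2G and sga_target = 1536M, the target can be raised
    -- online, up to the hard limit, without restarting the instance:
    ALTER SYSTEM SET sga_target = 2G SCOPE=BOTH;

    -- Raising sga_max_size itself is not dynamic; with an spfile it only
    -- takes effect after the instance is bounced:
    ALTER SYSTEM SET sga_max_size = 3G SCOPE=SPFILE;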

Similar Messages

  • What is the difference between BAPI and RFC and business object

    Hi Experts,
    Can anybody tell me what is the difference between RFC and BAPI, and also what is their relation with business objects?
    Thanks in advance.
    Nilesh Hiwale

    Hi,
    BAPIs are associated with Business Objects and they are also RFC-enabled.
    RFCs, on the other hand, are function modules that can be called from external systems; those function modules can be used in many places depending on the application.
    Check these Links
    whats the difference between BAPI and RFC??
    Diff. Between BAPI and RFC
    Regards
    Kiran

  • What is the difference between PGA and UGA?

    Hi All
    Just one question
    "What is the diffrence between PGA and UGA?"
    Thanks

    PGA Memory
    The Program Global Area (PGA) is a memory region that contains data and control information for a single process (server or background). The PGA is made up of the following:
    Stack Space
    A PGA always contains a stack space, which is memory allocated to hold a session's variables, arrays, and other information.
    Session Information - (UGA)
    A PGA in an instance running without the multi-threaded server (named Shared Server in Oracle9i) requires additional memory for the user's session, such as private SQL areas and other information. If the instance is running the multi-threaded server, this extra memory is not in the PGA, but is instead allocated in the SGA (the Shared Pool).
    Shared SQL Areas
    Shared SQL areas are always in shared memory areas of the SGA (not the PGA), with or without the multi-threaded server.
    Non-shared and Writable
    The PGA is a non-shared memory area to which a process can write. One PGA is allocated for each server process; the PGA is exclusive to a server process and is read and written only by Oracle code acting on behalf of that process.
    UGA Memory
    The UGA, or User Global Area, is allocated in the PGA for each session connected to Oracle in a dedicated server environment. The PGA is memory allocated at the client to hold a stack which contains all of the session's variables, etc. In a Shared Server environment, Oracle allocates this memory in the Shared Pool (the shared pool is contained in the SGA), for all sessions. This helps to reduce the PGA (client) memory footprint of Oracle, but will increase the SGA (shared pool) requirements.
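    To see the split in practice, here is a hedged sketch against the standard V$ views (the statistic names below exist in 9i/10g but may vary by version) reporting each session's PGA and UGA usage:
        -- Per-session PGA and UGA memory, in bytes
        SELECT s.sid,
               n.name,
               st.value
          FROM v$sesstat  st,
               v$statname n,
               v$session  s
         WHERE n.statistic# = st.statistic#
           AND s.sid        = st.sid
           AND n.name IN ('session pga memory', 'session pga memory max',
                          'session uga memory', 'session uga memory max')
         ORDER BY s.sid, n.name;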

  • What is the difference between pallet and quant

    Dear friends,
    What is the difference between pallet and quant?

    Hi,
    A pallet is a material storage device, and a quant is the stock of a material in a storage bin.
    If you are a newcomer to WM, I would request you to refer to an online WM guide for the various terminology.
    Regards,
    Prashant

  • What is the difference between ASCII and BIN mode

    Hello All,
    What is the difference between ASCII and BIN mode?
    Regards,
    Lisa.

    'ASC':
    ASCII format. The table is transferred as text. The conversion exits are
    carried out. The output format additionally depends on the parameters
    CODEPAGE, TRUNC_TRAILING_BLANKS, and TRUNC_TRAILING_BLANKS_EOL.
    'IBM':
    ASCII format with IBM codepage conversion (DOS). This format corresponds
    to the 'ASC' format when using target codepage 1103. This codepage is
    often used for data exchange by disk.
    'DAT':
    Column-by-column transfer. With this format, the data is transferred as
    with ASC text. However, no conversion exits are carried out and the
    columns are separated by tab characters. This format creates files that
    can be uploaded again with gui_upload or ws_upload.
    'DBF':
    The data is downloaded in dBase format. Because this format includes the
    types of the individual columns, import problems into, for example,
    Microsoft Excel can be avoided, especially when interpreting numeric
    values.
    'WK1':
    The data is downloaded in Lotus 1-2-3 format.
    'BIN':
    Binary format. The data is transferred in binary format. There is no
    formatting and no codepage conversion. The data is interpreted row by
    row and not formatted in columns. Specify the length of the data in the
    parameter BIN_FILESIZE. The table should consist of a single column of
    type X, because, especially in Unicode systems, the conversion of
    structured data into binary data leads to errors.

  • What is the difference between LIS and LO Cockpit

    Hello All,
    What is the difference between LIS and LO Cockpit?
    Regards,
    Lisa

    Hi Lisa,
    take a look at my weblog for more info...
    /people/sap.user72/blog/2005/04/19/logistic-cockpit-a-new-deal-overshadowed-by-the-old-fashioned-lis
    Hope it helps!
    Bye,
    Roberto

  • What is the difference between extends and creating a new object?

    HI ALL,
    What is the difference between extends and creating a new object?
    Meaning:
    class Base { }
    class Derived extends Base { }
    versus:
    class Base { }
    class Derived {
        Derived() {
            Base var = new Base();
        }
    }
    Can you people tell me the difference between the above examples?
    THANKS.
    ANANDA

    When you create a new object you have to supply the class to which that
    object belongs. A class can extend from another class. If it does so
    explicitly you can define the 'parent' class from which the class extends.
    If you don't explicitly mention anything, the class will implicitly extend
    from the absolute base class named 'Object'.
    Your example is a bit convoluted: when you create a Derived object,
    its constructor creates another object, i.e. an object from the class from
    which the Derived class extends.
    Extending from a class and creating an object don't have much in common.
    kind regards,
    Jos

  • What is the difference between multicasting and broadcasting?

    Hi friends,
    What is the difference between multicasting and broadcasting?
    I'm a bit confused about multicasting and broadcasting.

    Broadcasts go everywhere within a range determined by the sender.
    Broadcasting is deprecated and unlikely to go beyond the nearest router.
    Multicasts go everywhere where receivers have declared they are present.
    Multicast can be implemented beyond routers in a WAN which you control, but ISP routers generally don't support it.

  • What is the difference between Map and Map.Entry in core Java

    What is the difference between Map and Map.Entry in core Java? Where will it be useful? Can anyone give an example, please?

    A Map contains Map.Entry objects, e.g.
            Map map = new LinkedHashMap(8);
            map.put(new Integer(1), "one");
            map.put(new Integer(2), "two");
            final Iterator iterator = map.entrySet().iterator();
            while (iterator.hasNext()) {
                Map.Entry entry = (Map.Entry) iterator.next();
                System.out.println("key=" + entry.getKey() + ", value=" + entry.getValue());
            }

  • What is the difference between OCI and OCCI?

    What is the difference between OCI and OCCI?

    Will Lee wrote:
    What is the difference between OCI and OCCI?
    Besides the other answers, there are a few additional points to consider:
    1) OCI is the "gold" standard API. New stuff is always available in OCI first, and only later trickles down to other APIs, like OCCI.
    2) OCI is a low-level API, harder to get started with than OCCI. APIs in OCI are often "untyped", taking a void*, which opens the door for errors.
    3) In OCCI you set values, while in OCI you bind them. So OCCI takes a copy of your values, while OCI takes an address at which to later read the value. This opens the door to subtle bugs where you pass the address of a temporary in OCI, which later crashes in some mysterious way. So OCCI is way safer in this regard.
    4) OCI is C code, which is very portable. Because OCCI is C++ code, and on Windows you can't easily mix and match libraries compiled with different versions of Visual C++ (VC6, 7, 8, 9), you have to wait for Oracle to make a new build with the latest MS compiler. Just see the number of questions on this OCI forum versus the OCCI one.
    5) OCI is used internally by Oracle to write many of their own tools; it's the lingua franca between the Core DB group and the other groups. Since they use it themselves, it's much more stable than OCCI, which is mostly used only by outside customers.
    6) The way SQL objects are dealt with in OCI and OCCI is fundamentally different, to the point where you can't mix and match OCCI and OCI object calls.
    #1 above is one reason we had to abandon OCCI: it lacked support for the new-in-11g binary XML, but that's just one example.
    IMHO OCI is the way to go if you want the latest and greatest. Yes, it's more difficult to code against, so the learning curve is steeper, but once you've reached critical mass it's just fine. If you write code in C++ as opposed to C, you can easily make it a lot safer with a thin C++ layer on top which, unlike OCCI, still allows you to access any raw OCI handle to do stuff the wrappers don't expose. My $0.02 ;-) --DD

  • What is the difference between LLA and LZA MacBook part numbers? I mean quality?!

    What is the difference between the LLA and LZA part numbers of the MacBook? I mean quality?! I want to buy the ME865LZA. Is that OK?

    Those letters simply indicate the region where the MacBook will be sold.
    LL = North America
    LZ = Chile, Paraguay, Uruguay.
    http://www.jbfaq.com/article.asp?id=63

  • What is the difference between Data Guard and Standby?

    Hi All
    I was asked the difference between Data Guard and Standby Database in an interview.
    I answered that prior to 9i (in 8i) it was called Standby Database, and from 9i onwards it is called Data Guard.
    But the interviewer didn't seem to be satisfied with my answer. Not sure why?
    Then I was asked the difference between Logical Data Guard and Physical Data Guard.
    I replied that "Logical DG" is one in which SQL statements are applied onto the standby, whereas "Physical DG" is one in which redo logs are shipped from one server to the other by itself (with services) and the secondary updates itself from those. Again, the interviewer didn't seem happy enough.
    This has made me think: was my answer incorrect? Because I answered whatever I know, which I feel is correct.
    Please help me identify the correct answer.
    Thanks
    aps

    aps wrote:
    Hi All
    I was asked the difference between Data Guard and Standby Database in an interview.
    I answered that prior to 9i (in 8i) it was called Standby Database, and from 9i onwards it is called Data Guard.
    No - Data Guard is intimately involved with Standby Database, but it is NOT standby. A Data Guard 'configuration' actually includes the primary database and one or more standby databases as well as some additional processes and the connectivity between these things.
    In simplistic terms:
    Standby Database is an operating mode of the database in which it continuously applies redo logs that are shipped from another (primary) database. That mode of operation often stops the database from being accessible to users.
    A logical standby database uses a 'receiving process' (SQL Apply) that extracts from the log and creates SQL statements which are applied to keep the standby database in step with the primary.
    A physical standby database uses a form of continuous database recovery by directly applying the redo logs received from the primary.
    Data Guard was originally a set of scripts, but is now the entire environment, including a set of processes that control the extraction of redo (directly from the log buffer, from redo logs or from archived redo logs) from the primary, the shipping to the standby, and ensuring that the logs are applied. Data Guard processes also include the mechanics needed to make the standby database active automatically (failover) or manually (switchover) and also to re-sync and make the original database active again (switchback).
    You could (should) read more about this. Oracle has a fine set of documentation which you can access from http://tahiti.oracle.com - indeed, selecting Database 10gR2 and switching to the Books tab you get to http://www.oracle.com/pls/db102/portal.portal_db?selected=3 . And by scrolling down a little to "Data Guard Concepts and Administration" you would get a decent introduction in the first 2 pages of Chapter 1, "Introduction to Oracle Data Guard".
    But the interviewer didn't seem to be satisfied with my answer. Not sure why?
    I would not be satisfied either. Especially if I was looking for someone to be responsible for availability scenarios for my corporate data.
    Then I was asked the difference between Logical Data Guard and Physical Data Guard.
    I replied that "Logical DG" is one in which SQL statements are applied onto the standby, whereas "Physical DG" is one in which redo logs are shipped from one server to the other by itself (with services) and the secondary updates itself from those. Again, the interviewer didn't seem happy enough.
    Well ... you are moderately close on this. However, you seem to be missing "how do we get data from the primary in the SQL Apply case". The implication is that you are not at all comfortable with the basic concepts.
    Again, I would not be terribly happy either. I would not be confident that you had read Chapter 1 of the Data Guard Concepts manual. And that could imply that you had not read Chapter 1 of ANY of the Concepts manuals. That chapter in each of the manuals is an easy read, complete with pictures, and it describes the basics of operating the big machine that the interviewer wants to entrust you with.
    All that said, Data Guard is only available in Enterprise Edition. Standby capability is available in Standard Edition. And there are commercial products around that provide capability similar to Data Guard for Standard Edition.
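    As an illustrative footnote to the reply above (a sketch, not part of the original post): the role a database is currently playing in a Data Guard configuration can be checked from the standard V$ views, e.g.
        -- On the primary or on a standby: which role is this database playing?
        SELECT name, database_role, protection_mode, open_mode
          FROM v$database;

        -- On a physical standby: is redo apply (managed recovery) running?
        SELECT process, status, sequence#
          FROM v$managed_standby;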

  • Need Clarification on sga_target and sga_max_size

    Hi,
    I need some clarification on SGA_TARGET and SGA_MAX_SIZE.
    I have the parameters set like below:
    SGA_MAX_SIZE=10G
    SGA_TARGET=9G
    And I have spread the 9G across all components (DB_CACHE, SHARED_POOL, etc.).
    My doubt: in case the DB needs more than 9 GB of memory, will it automatically take the extra 1 GB from SGA_MAX_SIZE,
    or do we have to change SGA_TARGET to 10G?

    Unless and until we set sga_target=10G, the extra 1G (from sga_max_size) is not used.
    Am I correct?
    No - that's wrong. Any change in the value of SGA_TARGET affects only the sizes of the auto-tuned components. If you increase its value, the increased memory is distributed only among the components it controls. So yes, the 1 GB can be used, because you have sga_max_size=10G. If you decrease the value, the reduced memory is taken back by the auto-tuning policy from one or more of the auto-tuned components.
    If SGA_MAX_SIZE is greater than SGA_TARGET, you can increase SGA_TARGET without restarting the instance. Otherwise, you'd need to shut down and restart the instance if you wanted to increase SGA_TARGET.
    Regards
    Girish Sharma
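    An illustrative sketch (not part of Girish's reply) of the dynamic resize described above, using the sizes from the question and assuming an spfile is in use:
        -- sga_max_size=10G, sga_target=9G: the target can be raised online
        -- because it stays within the limit already reserved at startup
        ALTER SYSTEM SET sga_target = 10G SCOPE=BOTH;

        -- Going beyond 10G would mean raising the hard limit as well, which
        -- only takes effect after an instance restart
        ALTER SYSTEM SET sga_max_size = 12G SCOPE=SPFILE;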

  • Difference between CPU and elapsed time in tkprof

    Hi All
    I found a huge difference between CPU and elapsed time in tkprof. Can you please advise me on this issue?
    call     count       cpu    elapsed       disk      query    current        rows
    ================================================================================
    Parse        1      0.12       1.36          2         11          0           0
    Execute      1     14.30     720.20      46548     190520        205         100
    Fetch        0      0.00       0.00          0          0          0           0
    ================================================================================
    total        2     14.42     721.56      46550     190531        205         100
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 173 (recursive depth: 1)
    Elapsed times include waiting on following events:
    Event waited on                      Times waited   Max. Wait  Total Waited
    ===========================================================================
    db file sequential read                     46544        0.49        632.12
    db file scattered read                          1        0.00          0.00
    My select statement:
    SELECT cst.customer_id
          ,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.trx_date) / COUNT(cr.deposit_date))) avgdays
          ,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.due_date) / COUNT(cr.deposit_date))) avgdayslate
          ,NVL(SUM(DECODE(SIGN(cr.deposit_date - ps.due_date),1, 1, 0)), 0) newlate
          ,NVL(SUM(DECODE(SIGN(cr.deposit_date - ps.due_date),1, 0, 1)), 0) newontime
      FROM ar_receivable_applications_all ra
          ,ar_cash_receipts_all           cr
          ,ar_payment_schedules_all       ps
          ,zz_ar_customer_summary_all     cst
     WHERE ra.cash_receipt_id = cr.cash_receipt_id
       AND ra.apply_date BETWEEN ADD_MONTHS(SYSDATE, -12) AND SYSDATE
       AND ra.status = 'APP'
       AND ra.display = 'Y'
       AND ra.applied_payment_schedule_id = ps.payment_schedule_id
       AND ps.customer_id = cst.customer_id
       AND NVL(ps.receipt_confirmed_flag,'Y') = 'Y'
     GROUP BY cst.customer_id;
    Thanks,
    Anu

    user653066 wrote:
    Hi All
    I found a huge difference between CPU and elapsed time in tkprof. Can you please advise me on this issue?
    call     count       cpu    elapsed       disk      query    current        rows
    ================================================================================
    Parse        1      0.12       1.36          2         11          0           0
    Execute      1     14.30     720.20      46548     190520        205         100
    Fetch        0      0.00       0.00          0          0          0           0
    ================================================================================
    total        2     14.42     721.56      46550     190531        205         100
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 173     (recursive depth: 1)
    Elapsed times include waiting on following events:
    Event waited on                      Times waited   Max. Wait  Total Waited
    ===========================================================================
    db file sequential read                     46544        0.49        632.12
    db file scattered read                          1        0.00          0.00
    SELECT  cst.customer_id
             ,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.trx_date) / COUNT(cr.deposit_date))) avgdays
             ,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.due_date) / COUNT(cr.deposit_date))) avgdayslate
             ,NVL(SUM(DECODE(SIGN(cr.deposit_date - ps.due_date),1, 1, 0)), 0)  newlate
             ,NVL(SUM( DECODE(SIGN(cr.deposit_date - ps.due_date),1, 0, 1)), 0) newontime
              FROM ar_receivable_applications_all ra
                  ,ar_cash_receipts_all           cr
                  ,ar_payment_schedules_all       ps
                  ,zz_ar_customer_summary_all cst
              WHERE ra.cash_receipt_id                 = cr.cash_receipt_id
              AND   ra.apply_date                BETWEEN ADD_MONTHS(SYSDATE, -12) AND SYSDATE
              AND   ra.status                          = 'APP'
              AND   ra.display                         = 'Y'
              AND   ra.applied_payment_schedule_id     = ps.payment_schedule_id
              AND   ps.customer_id                     = cst.customer_id          
              AND   NVL(ps.receipt_confirmed_flag,'Y') = 'Y'
              group by cst.customer_id;
    Toon Koppelaars seems to have pinpointed the problem. Where are the 74 unaccounted-for seconds (I might have calculated it incorrectly, but I arrived at 88.08 seconds of unaccounted-for time: 721.56 total - 1.36 parse - 632.12 db file sequential reads)?
    It is interesting that the maximum wait for a single block read reported by TKPROF is 0.49 seconds - this might be an indication of excessive competition for the server's CPU - processes are waiting in the CPU run queue, and therefore not on the CPU. As Toon indicated, 632.12 of the 721.56 seconds were spent waiting for single block reads to complete with 46,544 blocks read. Note also that the query executed at dep=1, and TKPROF may be providing misleading information about what actually happened during those executions. An example of misleading information:
    CREATE TABLE T11 (
      C1 NUMBER,
      C2 VARCHAR2(30));
    CREATE TABLE T12 (
      C1 NUMBER,
      C2 VARCHAR2(30));
    CREATE TABLE T13 (
      C1 NUMBER,
      C2 VARCHAR2(30));
    CREATE TABLE T14 (
      C1 NUMBER,
      C2 VARCHAR2(30));
    CREATE OR REPLACE TRIGGER HPM_T11 AFTER
    INSERT OR DELETE OR UPDATE OF C1 ON T11
    REFERENCING OLD AS OLDDATA NEW AS NEWDATA FOR EACH ROW
    BEGIN
      IF INSERTING THEN
        INSERT INTO T12
        SELECT
          ROWNUM,
          DBMS_RANDOM.STRING('A',25)
        FROM
          DUAL
        CONNECT BY
          LEVEL <= 100;
      END IF;
    END;
    CREATE OR REPLACE TRIGGER HPM_T12 AFTER
    INSERT OR DELETE OR UPDATE OF C1 ON T12
    REFERENCING OLD AS OLDDATA NEW AS NEWDATA FOR EACH ROW
    BEGIN
      IF INSERTING THEN
        INSERT INTO T13
        SELECT
          ROWNUM,
          DBMS_RANDOM.STRING('A',25)
        FROM
          DUAL
        CONNECT BY
          LEVEL <= 100;
      END IF;
    END;
    CREATE OR REPLACE TRIGGER HPM_T13 AFTER
    INSERT OR DELETE OR UPDATE OF C1 ON T13
    REFERENCING OLD AS OLDDATA NEW AS NEWDATA FOR EACH ROW
    BEGIN
      IF INSERTING THEN
        INSERT INTO T14
        SELECT
          ROWNUM,
          DBMS_RANDOM.STRING('A',25)
        FROM
          DUAL
        CONNECT BY
          LEVEL <= 100;
      END IF;
    END;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_TEST_FIND_ME2';
    ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
    SET TIMING ON
    INSERT INTO T11 VALUES (1,'MY LITTLE TEST CASE');
    ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';
    The partial TKPROF output:
    INSERT INTO T11
    VALUES
    (1,'MY LITTLE TEST CASE')
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          8          0           0
    Execute      1      0.00       0.00          0       9788         29           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.00          0       9796         29           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56 
    Rows     Row Source Operation
          0  LOAD TABLE CONVENTIONAL  (cr=9788 pr=7 pw=0 time=0 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    SQL ID : 6asaf110fgaqg
    INSERT INTO T12 SELECT ROWNUM, DBMS_RANDOM.STRING('A',25) FROM DUAL CONNECT
      BY LEVEL <= 100
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.04       0.09          0          2        130         100
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.04       0.09          0          2        130         100
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56     (recursive depth: 1)
    Rows     Row Source Operation
          0  LOAD TABLE CONVENTIONAL  (cr=9754 pr=7 pw=0 time=0 us)
        100   COUNT  (cr=0 pr=0 pw=0 time=0 us)
        100    CONNECT BY WITHOUT FILTERING (cr=0 pr=0 pw=0 time=0 us)
          1     FAST DUAL  (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)
    SQL ID : db46bkvy509w4
    INSERT INTO T13 SELECT ROWNUM, DBMS_RANDOM.STRING('A',25) FROM DUAL CONNECT
      BY LEVEL <= 100
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute    100      1.31       1.27          0         93      10634       10000
    Fetch        0      0.00       0.00          0          0          0           0
    total      101      1.31       1.27          0         93      10634       10000
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56     (recursive depth: 2)
    Rows     Row Source Operation
          0  LOAD TABLE CONVENTIONAL  (cr=164 pr=0 pw=0 time=0 us)
        100   COUNT  (cr=0 pr=0 pw=0 time=0 us)
        100    CONNECT BY WITHOUT FILTERING (cr=0 pr=0 pw=0 time=0 us)
          1     FAST DUAL  (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)
    SQL ID : 6542yyk084rpu
    INSERT INTO T14 SELECT ROWNUM, DBMS_RANDOM.STRING('A',25) FROM DUAL CONNECT
      BY LEVEL <= 100
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.00       0.00          0          0          0           0
    Execute  10001     41.60      41.84          0       8961      52859     1000000
    Fetch        0      0.00       0.00          0          0          0           0
    total    10003     41.60      41.84          0       8961      52859     1000000
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56     (recursive depth: 3)
    Rows     Row Source Operation
          0  LOAD TABLE CONVENTIONAL  (cr=2 pr=0 pw=0 time=0 us)
        100   COUNT  (cr=0 pr=0 pw=0 time=0 us)
        100    CONNECT BY WITHOUT FILTERING (cr=0 pr=0 pw=0 time=0 us)
          1     FAST DUAL  (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      log file switch completion                      2        0.07          0.07
    ********************************************************************************
    In the above, note that the "INSERT INTO T11" is reported as completing in 0 seconds, but it actually required roughly 42 seconds - and that would be visible by manually reviewing the resulting trace file. Also note that the log file switch completion wait was not reported for the "INSERT INTO T11" even though it impacted the execution time.
    Back to the possibility of CPU starvation causing lost time. Another test with an otherwise idle server, followed by a second test with the same server having 240 other processes fighting for CPU resources (a simulated load).
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_TEST_QUERY_NO_LOAD';
    ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
    SET TIMING ON
    SELECT
      COUNT(*)
    FROM
      T14;
    SELECT
      SYSDATE
    FROM
      DUAL;
    SQL> SELECT
      2    COUNT(*)
      3  FROM
      4    T14;
      COUNT(*)
       1000000
    Elapsed: 00:00:01.37
    With no load the COUNT(*) completed in 1.37 seconds. The TKPROF output looks like this:
    SQL ID : gy8nw9xzyg3bj
    SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
      NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
      NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_0"),
      NVL(SUM(C2),:"SYS_B_1")
    FROM
    (SELECT /*+ NO_PARALLEL("T14") FULL("T14") NO_PARALLEL_INDEX("T14") */
      :"SYS_B_2" AS C1, :"SYS_B_3" AS C2 FROM "T14" SAMPLE BLOCK (:"SYS_B_4" ,
      :"SYS_B_5") SEED (:"SYS_B_6") "T14") SAMPLESUB
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.01       0.84        523        172          1           1
    total        3      0.01       0.84        523        172          1           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56     (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=172 pr=523 pw=0 time=0 us)
       8733   TABLE ACCESS SAMPLE T14 (cr=172 pr=523 pw=0 time=0 us cost=2 size=12 card=1)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         3        0.02          0.04
      db file parallel read                           1        0.31          0.31
      db file scattered read                         52        0.03          0.47
    SQL ID : 96g93hntrzjtr
    select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#,
      sample_size, minimum, maximum, distcnt, lowval, hival, density, col#,
      spare1, spare2, avgcln
    from
    hist_head$ where obj#=:1 and intcol#=:2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.06          2          2          0           0
    total        3      0.00       0.06          2          2          0           0
    Misses in library cache during parse: 0
    Optimizer mode: RULE
    Parsing user id: SYS   (recursive depth: 2)
    Rows     Row Source Operation
          0  TABLE ACCESS BY INDEX ROWID HIST_HEAD$ (cr=2 pr=2 pw=0 time=0 us)
          0   INDEX RANGE SCAN I_HH_OBJ#_INTCOL# (cr=2 pr=2 pw=0 time=0 us)(object id 413)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         2        0.02          0.04
    SELECT
      COUNT(*)
    FROM
      T14
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          1          1          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      0.03       0.43       6558       6983          0           1
    total        4      0.03       0.44       6559       6984          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56 
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=6983 pr=6558 pw=0 time=0 us)
    1000000   TABLE ACCESS FULL T14 (cr=6983 pr=6558 pw=0 time=0 us cost=1916 size=0 card=976987)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         1        0.02          0.02
      SQL*Net message to client                       2        0.00          0.00
      db file scattered read                        111        0.02          0.38
      SQL*Net message from client                     2        0.00          0.00
    Note that TKPROF reported that it only required 0.44 seconds for the query to execute, while the SQL*Plus timing indicates that it required 1.37 seconds for the SQL statement to execute. The SQL optimization (parse) with the dynamic sampling query accounted for the remaining time, yet TKPROF provided no indication that this was the case.
    Now the query with 240 other processes competing for CPU time:
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_TEST_QUERY_WITH_LOAD';
    SELECT COUNT(*) FROM T14;
    SELECT
      SYSDATE
    FROM
      DUAL;
    SQL> SELECT COUNT(*) FROM T14;
      COUNT(*)
       1000000
    Elapsed: 00:00:59.03
    The query this time required just over 59 seconds. The TKPROF output:
    SQL ID : gy8nw9xzyg3bj
    SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
      NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
      NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_0"),
      NVL(SUM(C2),:"SYS_B_1")
    FROM
    (SELECT /*+ NO_PARALLEL("T14") FULL("T14") NO_PARALLEL_INDEX("T14") */
      :"SYS_B_2" AS C1, :"SYS_B_3" AS C2 FROM "T14" SAMPLE BLOCK (:"SYS_B_4" ,
      :"SYS_B_5") SEED (:"SYS_B_6") "T14") SAMPLESUB
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.28        423         69          0           1
    total        3      0.00       0.28        423         69          0           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 56     (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=69 pr=423 pw=0 time=0 us)
       8733   TABLE ACCESS SAMPLE T14 (cr=69 pr=423 pw=0 time=0 us cost=2 size=12 card=1)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                         54        0.01          0.27
      db file sequential read                         2        0.00          0.00
    SQL ID : 7h04kxpa13w1x
    SELECT COUNT(*)
    FROM
    T14
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.03          1          1          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      0.06      58.71       6551       6983          0           1
    total        4      0.06      58.74       6552       6984          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56 
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=6983 pr=6551 pw=0 time=0 us)
    1000000   TABLE ACCESS FULL T14 (cr=6983 pr=6551 pw=0 time=0 us cost=1916 size=0 card=976987)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         1        0.02          0.02
      SQL*Net message to client                       2        0.00          0.00
      db file scattered read                        110        1.54         58.59
      SQL*Net message from client                     1        0.00          0.00
    Note in the above that the max wait for the db file scattered read is 1.54 seconds due to the extra CPU competition - about 3 times longer than your max wait for a single block read. On your database platform with single block reads, it might be possible that the time in the CPU run queue is not always counted in the db file sequential read wait time or the CPU wait time - what if your operating system is slow at returning timing information to the database instance due to CPU saturation - this might explain the 74 (or 88) lost seconds.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
    Edited by: Charles Hooper on Aug 28, 2009 10:26 AM
    Fixing formatting problems
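    A small addendum to the CPU-starvation point above (a sketch, not part of the original reply): on 10g and later, the instance's own view of host CPU pressure can be sanity-checked from V$OSSTAT, e.g.
        -- Host CPU count and load as seen by the instance
        -- (the exact statistic names available vary by platform and version)
        SELECT stat_name, value
          FROM v$osstat
         WHERE stat_name IN ('NUM_CPUS', 'LOAD', 'BUSY_TIME', 'IDLE_TIME');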

  • What will be the SGA_TARGET and SGA_MAX_SIZE values?

    Hi All
    RAM memory: 7360
    Swap memory: 8704 M
    OS: AIX 6
    Database size: 50 GB
    What will be the SGA_TARGET and SGA_MAX_SIZE values?

    What will be
    The values will be whatever you set them to be. Or sga_max_size will be automatically derived from db_cache_size + shared_pool_size + ............
    Yes, I know that you are looking for a "thumb rule" and your question really is "What *should* be ...". You might get many answers.
    The right answer is "what is really required based on the database, I/O pattern, concurrency, etc."
    Hemant K Chitale
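    If a data point is wanted before choosing values, here is a hedged sketch of two views that can inform the decision on 10g (assuming SGA_TARGET is already in use, so the advisory is populated):
        -- How the current SGA is carved up, and which pieces can be resized
        SELECT name, ROUND(bytes/1024/1024) AS mb, resizeable
          FROM v$sgainfo;

        -- Estimated effect of other SGA_TARGET sizes on DB time
        SELECT sga_size, sga_size_factor, estd_db_time
          FROM v$sga_target_advice
         ORDER BY sga_size;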
