Doubt regarding ADDM report

I want to generate an ADDM report covering more than 3 days, but by default it only shows 3 days. How can I change that setting, if it is possible at all?

Check
http://www.oracle-base.com/articles/10g/automatic-database-diagnostic-monitor-10g.php#dbms_advisor
From the above link:
BEGIN
  -- Create an ADDM task.
  DBMS_ADVISOR.create_task (
    advisor_name => 'ADDM',
    task_name    => '970_1032_AWR_SNAPSHOT',
    task_desc    => 'Advisor for snapshots 970 to 1032.');

  -- Set the start and end snapshots.
  DBMS_ADVISOR.set_task_parameter (
    task_name => '970_1032_AWR_SNAPSHOT',
    parameter => 'START_SNAPSHOT',
    value     => 970);

  DBMS_ADVISOR.set_task_parameter (
    task_name => '970_1032_AWR_SNAPSHOT',
    parameter => 'END_SNAPSHOT',
    value     => 1032);

  -- Execute the task.
  DBMS_ADVISOR.execute_task(task_name => '970_1032_AWR_SNAPSHOT');
END;
/

-- Display the report.
SET LONG 100000
SET PAGESIZE 50000
SELECT DBMS_ADVISOR.get_task_report('970_1032_AWR_SNAPSHOT') AS report
FROM dual;
SET PAGESIZE 24
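
Note that an ADDM task can only span snapshots that still exist in the AWR, so if you need a longer window than the default it may be worth checking (and, if necessary, extending) the snapshot retention first. A minimal sketch, assuming you have access to the DBA_HIST views and EXECUTE on DBMS_WORKLOAD_REPOSITORY:

-- Check the current AWR snapshot interval and retention.
SELECT snap_interval, retention
FROM   dba_hist_wr_control;

-- Optionally extend retention so longer snapshot ranges remain available
-- (values are in minutes; 14 days = 14 * 24 * 60 = 20160).
BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(retention => 20160);
END;
/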

Similar Messages

  • Query regarding the ADDM Report

    Hi All,
    My DB performance was quite slow during the last weekend because we had a major data load job and dbms_stats.gather_schema_stats jobs running simultaneously. So we generated an ADDM report for these 2 days, and from it I could extract 2 things:
    1. The performance was slow because the dbms_stats.gather_schema_stats job was running at the same time.
    2. I could see that some of the SELECT queries on the tables in the schema were executed 145445335, 35, 30 and 20 times repeatedly on the DB. Now this leaves me shocked. Can anyone possibly explain the reason behind it? Was it because gather_schema_stats was also running on the DB? But how would a SELECT be affected by it? Would it be because it held exclusive locks on those DB objects while it was running?
    Kindly suggest.
    Thanks in advance.

    Hi,
    Thanks for your response.
    No, I cannot see any terms like library cache locks or library cache pins with those SQL statements. The only thing I see is:
    RECOMMENDATION 1: SQL Tuning, 15% benefit (28820 seconds)
    ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
    "725bgkkhqs73v".
    RELEVANT OBJECT: SQL statement with SQL_ID 725bgkkhqs73v and
    PLAN_HASH 2688602638
    SELECT column1 from table1;
    ACTION: Investigate the SQL statement with SQL_ID "725bgkkhqs73v" for
    possible performance improvements.
    RELEVANT OBJECT: SQL statement with SQL_ID 725bgkkhqs73v and
    PLAN_HASH 2688602638
    RATIONALE: SQL statement with SQL_ID "725bgkkhqs73v" was executed 32
    times and had an average elapsed time of 900 seconds.
    Also, my DB is 10.2.0.3.0 and the OS is HP-UX B.11.23.
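    If the execution counts in the report look implausible, one way to sanity-check them is to look at the per-snapshot deltas in AWR rather than a single cumulative figure. A rough sketch, assuming access to the DBA_HIST views, using the SQL_ID from the finding above:
    -- Executions and average elapsed time per AWR snapshot for one SQL_ID.
    SELECT snap_id, plan_hash_value, executions_delta,
           ROUND(elapsed_time_delta / NULLIF(executions_delta, 0) / 1e6, 1) AS avg_elapsed_secs
    FROM   dba_hist_sqlstat
    WHERE  sql_id = '725bgkkhqs73v'
    ORDER  BY snap_id;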

  • Performance issue in DB need help with analysing this ADDM report

    Hi,
    My environment:
    Os: RHEL5U3 / 11.1.0.7 64 bit / R12.1.1 64 bit
    Issue:
    For the past few days I have been facing a serious performance problem in our Production instance. The issue normally occurs occasionally, for 5 to 10 minutes per day. While it is happening we are not able to access the EBS application; it takes too long to load. But on the backend all the Oracle, listener and apps services are up and running, there are no locks at the table or session level, and CPU and memory usage is normal.
    We monitored this issue using Enterprise Manager and found a large number of waiting sessions in the Active Sessions tab. During this time the EBS application cannot be accessed and takes too long to load. After some time the waiting sessions in the Active Sessions tab return to normal, and when we then try to access the EBS application it works fine.
    We tried to find the cause of the issue by running an ADDM report, but I am not able to understand what it says. Kindly advise.
    ADDM Report for Task 'TASK_42656'
    Analysis Period
    AWR snapshot range from 14754 to 14755.
    Time period starts at 17-APR-12 11.00.22 AM
    Time period ends at 17-APR-12 12.00.33 PM
    Analysis Target
    Database 'PRD' with DB ID 1789440879.
    Database version 11.1.0.7.0.
    ADDM performed an analysis of instance PRD, numbered 1 and hosted at
    advgrpdb.advgroup.ae.
    Activity During the Analysis Period
    Total database time was 18674 seconds.
    The average number of active sessions was 5.17.
    Summary of Findings
     #   Description             Active Sessions | % of Activity   Recommendations
     1   Top SQL by DB Time      3.43 | 66.33                      5
     2   Buffer Busy             2.52 | 48.81                      5
     3   Buffer Busy             1.39 | 26.81                      2
     4   Log File Switches        .91 | 17.56                      1
     5   Buffer Busy              .56 | 10.87                      2
     6   Undersized SGA           .38 |  7.37                      1
     7   Commits and Rollbacks    .28 |  5.42                      1
     8   Undo I/O                 .18 |  3.53                      0
     9   CPU Usage                .13 |  2.57                      1
    10   Top SQL By I/O           .11 |  2.21                      1
    Findings and Recommendations
    Finding 1: Top SQL by DB Time
    Impact is 3.43 active sessions, 66.33% of total activity.
    SQL statements consuming significant database time were found.
    Recommendation 1: SQL Tuning
    Estimated benefit is 1.59 active sessions, 30.8% of total activity.
    Action
    Investigate the SQL statement with SQL_ID "a49xsqhv0h31b" for possible
    performance improvements.
    Related Object
    SQL statement with SQL_ID a49xsqhv0h31b.
    SELECT R.Conc_Login_Id, R.Request_Id, R.Phase_Code, R.Status_Code,
    P.Application_ID, P.Concurrent_Program_ID, P.Concurrent_Program_Name,
    R.Enable_Trace, R.Restart, DECODE(R.Increment_Dates, 'Y', 'Y', 'N'),
    R.NLS_Compliant, R.OUTPUT_FILE_TYPE, E.Executable_Name,
    E.Execution_File_Name, A2.Basepath, DECODE(R.Stale, 'Y', 'C',
    P.Execution_Method_Code), P.Print_Flag, P.Execution_Options,
    DECODE(P.Srs_Flag, 'Y', 'Y', 'Q', 'Y', 'N'), P.Argument_Method_Code,
    R.Print_Style, R.Argument_Input_Method_Code, R.Queue_Method_Code,
    R.Responsibility_ID, R.Responsibility_Application_ID, R.Requested_By,
    R.Number_Of_Copies, R.Save_Output_Flag, R.Printer, R.Print_Group,
    R.Priority, U.User_Name, O.Oracle_Username,
    O.Encrypted_Oracle_Password, R.Cd_Id, A.Basepath,
    A.Application_Short_Name, TO_CHAR(R.Requested_Start_Date,'YYYY/MM/DD
    HH24:MI:SS'), R.Nls_Language, R.Nls_Territory,
    R.Nls_Numeric_Characters, DECODE(R.Parent_Request_ID, NULL, 0,
    R.Parent_Request_ID), R.Priority_Request_ID, R.Single_Thread_Flag,
    R.Has_Sub_Request, R.Is_Sub_Request, R.Req_Information,
    R.Description, R.Resubmit_Time, TO_CHAR(R.Resubmit_Interval),
    R.Resubmit_Interval_Type_Code, R.Resubmit_Interval_Unit_Code,
    TO_CHAR(R.Resubmit_End_Date,'YYYY/MM/DD HH24:MI:SS'),
    Decode(E.Execution_File_Name, NULL, 'N', Decode(E.Subroutine_Name,
    NULL, Decode(E.Execution_Method_Code, 'I', 'Y', 'J', 'Y', 'N'),
    'Y')), R.Argument1, R.Argument2, R.Argument3, R.Argument4,
    R.Argument5, R.Argument6, R.Argument7, R.Argument8, R.Argument9,
    R.Argument10, R.Argument11, R.Argument12, R.Argument13, R.Argument14,
    R.Argument15, R.Argument16, R.Argument17, R.Argument18, R.Argument19,
    R.Argument20, R.Argument21, R.Argument22, R.Argument23, R.Argument24,
    R.Argument25, X.Argument26, X.Argument27, X.Argument28, X.Argument29,
    X.Argument30, X.Argument31, X.Argument32, X.Argument33, X.Argument34,
    X.Argument35, X.Argument36, X.Argument37, X.Argument38, X.Argument39,
    X.Argument40, X.Argument41, X.Argument42, X.Argument43, X.Argument44,
    X.Argument45, X.Argument46, X.Argument47, X.Argument48, X.Argument49,
    X.Argument50, X.Argument51, X.Argument52, X.Argument53, X.Argument54,
    X.Argument55, X.Argument56, X.Argument57, X.Argument58, X.Argument59,
    X.Argument60, X.Argument61, X.Argument62, X.Argument63, X.Argument64,
    X.Argument65, X.Argument66, X.Argument67, X.Argument68, X.Argument69,
    X.Argument70, X.Argument71, X.Argument72, X.Argument73, X.Argument74,
    X.Argument75, X.Argument76, X.Argument77, X.Argument78, X.Argument79,
    X.Argument80, X.Argument81, X.Argument82, X.Argument83, X.Argument84,
    X.Argument85, X.Argument86, X.Argument87, X.Argument88, X.Argument89,
    X.Argument90, X.Argument91, X.Argument92, X.Argument93, X.Argument94,
    X.Argument95, X.Argument96, X.Argument97, X.Argument98, X.Argument99,
    X.Argument100, R.number_of_arguments, C.CD_Name,
    NVL(R.Security_Group_ID, 0), NVL(R.org_id, 0) FROM
    fnd_concurrent_requests R, fnd_concurrent_programs P, fnd_application
    A, fnd_user U, fnd_oracle_userid O, fnd_conflicts_domain C,
    fnd_concurrent_queues Q, fnd_application A2, fnd_executables E,
    fnd_conc_request_arguments X WHERE R.Status_code = 'I' And
    ((R.OPS_INSTANCE is null) or (R.OPS_INSTANCE = -1) or
    (R.OPS_INSTANCE =
    decode(:dcp_on,1,FND_CONC_GLOBAL.OPS_INST_NUM,R.OPS_INSTANCE))) And
    R.Request_ID = X.Request_ID(+) And R.Program_Application_Id =
    P.Application_Id(+) And R.Concurrent_Program_Id =
    P.Concurrent_Program_Id(+) And R.Program_Application_Id =
    A.Application_Id(+) And P.Executable_Application_Id =
    E.Application_Id(+) And P.Executable_Id =
    E.Executable_Id(+) And P.Executable_Application_Id =
    A2.Application_Id(+) And R.Requested_By = U.User_Id(+) And R.Cd_Id
    = C.Cd_Id(+) And R.Oracle_Id = O.Oracle_Id(+) And Q.Application_Id =
    :q_applid And Q.Concurrent_Queue_Id = :queue_id And (P.Enabled_Flag
    is NULL OR P.Enabled_Flag = 'Y') And R.Hold_Flag = 'N' And
    R.Requested_Start_Date <= Sysdate And ( R.Enforce_Seriality_Flag =
    'N' OR ( C.RunAlone_Flag = P.Run_Alone_Flag And (P.Run_Alone_Flag =
    'N' OR Not Exists (Select Null From Fnd_Concurrent_Requests Sr
    Where Sr.Status_Code In ('R', 'T') And Sr.Enforce_Seriality_Flag =
    'Y' And Sr.CD_id = C.CD_Id)))) And Q.Running_Processes <=
    Q.Max_Processes And R.Rowid = :reqname And
    ((P.Execution_Method_Code != 'S' OR
    (R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) IN
    ((0,98),(0,100),(0,31721),(0,31722),(0,31757))) AND
    ((R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) NOT IN
    ((510,40112),(510,40113),(510,41497),(510,41498),(530,41859),(530,418
    60),(535,41492),(535,41493),(535,41494)))) FOR UPDATE OF
    R.status_code NoWait
    Rationale
    SQL statement with SQL_ID "a49xsqhv0h31b" was executed 4686 times and
    had an average elapsed time of 1.2 seconds.
    Rationale
    Waiting for event "buffer busy waits" in wait class "Concurrency"
    accounted for 85% of the database time spent in processing the SQL
    statement with SQL_ID "a49xsqhv0h31b".
    Rationale
    Waiting for event "log file switch (checkpoint incomplete)" in wait
    class "Configuration" accounted for 9% of the database time spent in
    processing the SQL statement with SQL_ID "a49xsqhv0h31b".
    Recommendation 3: SQL Tuning
    Estimated benefit is .56 active sessions, 10.91% of total activity.
    Action
    Investigate the SQL statement with SQL_ID "5d7957yktf3nn" for possible
    performance improvements.
    Related Object
    SQL statement with SQL_ID 5d7957yktf3nn.
    UPDATE ICX_SESSIONS SET TIME_OUT = :B2 WHERE SESSION_ID = :B1
    Rationale
    SQL statement with SQL_ID "5d7957yktf3nn" was executed 266 times and had
    an average elapsed time of 7.6 seconds.
    Rationale
    Waiting for event "buffer busy waits" in wait class "Concurrency"
    accounted for 86% of the database time spent in processing the SQL
    statement with SQL_ID "5d7957yktf3nn".
    Rationale
    Waiting for event "log file switch (checkpoint incomplete)" in wait
    class "Configuration" accounted for 7% of the database time spent in
    processing the SQL statement with SQL_ID "5d7957yktf3nn".
    Finding 2: Buffer Busy
    Impact is 2.52 active sessions, 48.81% of total activity.
    Read and write contention on database blocks was consuming significant
    database time.
    Recommendation 1: Application Analysis
    Estimated benefit is 1.42 active sessions, 27.44% of total activity.
    Action
    Trace the cause of object contention due to SELECT statements in the
    application using the information provided.
    Related Object
    Database object with ID 34562.
    Rationale
    The SELECT statement with SQL_ID "a49xsqhv0h31b" was significantly
    affected by "buffer busy" waits.
    Related Object
    SQL statement with SQL_ID a49xsqhv0h31b.
    SELECT R.Conc_Login_Id, R.Request_Id, R.Phase_Code, R.Status_Code,
    P.Application_ID, P.Concurrent_Program_ID, P.Concurrent_Program_Name,
    R.Enable_Trace, R.Restart, DECODE(R.Increment_Dates, 'Y', 'Y', 'N'),
    R.NLS_Compliant, R.OUTPUT_FILE_TYPE, E.Executable_Name,
    E.Execution_File_Name, A2.Basepath, DECODE(R.Stale, 'Y', 'C',
    P.Execution_Method_Code), P.Print_Flag, P.Execution_Options,
    DECODE(P.Srs_Flag, 'Y', 'Y', 'Q', 'Y', 'N'), P.Argument_Method_Code,
    R.Print_Style, R.Argument_Input_Method_Code, R.Queue_Method_Code,
    R.Responsibility_ID, R.Responsibility_Application_ID, R.Requested_By,
    R.Number_Of_Copies, R.Save_Output_Flag, R.Printer, R.Print_Group,
    R.Priority, U.User_Name, O.Oracle_Username,
    O.Encrypted_Oracle_Password, R.Cd_Id, A.Basepath,
    A.Application_Short_Name, TO_CHAR(R.Requested_Start_Date,'YYYY/MM/DD
    HH24:MI:SS'), R.Nls_Language, R.Nls_Territory,
    R.Nls_Numeric_Characters, DECODE(R.Parent_Request_ID, NULL, 0,
    R.Parent_Request_ID), R.Priority_Request_ID, R.Single_Thread_Flag,
    R.Has_Sub_Request, R.Is_Sub_Request, R.Req_Information,
    R.Description, R.Resubmit_Time, TO_CHAR(R.Resubmit_Interval),
    R.Resubmit_Interval_Type_Code, R.Resubmit_Interval_Unit_Code,
    TO_CHAR(R.Resubmit_End_Date,'YYYY/MM/DD HH24:MI:SS'),
    Decode(E.Execution_File_Name, NULL, 'N', Decode(E.Subroutine_Name,
    NULL, Decode(E.Execution_Method_Code, 'I', 'Y', 'J', 'Y', 'N'),
    'Y')), R.Argument1, R.Argument2, R.Argument3, R.Argument4,
    R.Argument5, R.Argument6, R.Argument7, R.Argument8, R.Argument9,
    R.Argument10, R.Argument11, R.Argument12, R.Argument13, R.Argument14,
    R.Argument15, R.Argument16, R.Argument17, R.Argument18, R.Argument19,
    R.Argument20, R.Argument21, R.Argument22, R.Argument23, R.Argument24,
    R.Argument25, X.Argument26, X.Argument27, X.Argument28, X.Argument29,
    X.Argument30, X.Argument31, X.Argument32, X.Argument33, X.Argument34,
    X.Argument35, X.Argument36, X.Argument37, X.Argument38, X.Argument39,
    X.Argument40, X.Argument41, X.Argument42, X.Argument43, X.Argument44,
    X.Argument45, X.Argument46, X.Argument47, X.Argument48, X.Argument49,
    X.Argument50, X.Argument51, X.Argument52, X.Argument53, X.Argument54,
    X.Argument55, X.Argument56, X.Argument57, X.Argument58, X.Argument59,
    X.Argument60, X.Argument61, X.Argument62, X.Argument63, X.Argument64,
    X.Argument65, X.Argument66, X.Argument67, X.Argument68, X.Argument69,
    X.Argument70, X.Argument71, X.Argument72, X.Argument73, X.Argument74,
    X.Argument75, X.Argument76, X.Argument77, X.Argument78, X.Argument79,
    X.Argument80, X.Argument81, X.Argument82, X.Argument83, X.Argument84,
    X.Argument85, X.Argument86, X.Argument87, X.Argument88, X.Argument89,
    X.Argument90, X.Argument91, X.Argument92, X.Argument93, X.Argument94,
    X.Argument95, X.Argument96, X.Argument97, X.Argument98, X.Argument99,
    X.Argument100, R.number_of_arguments, C.CD_Name,
    NVL(R.Security_Group_ID, 0), NVL(R.org_id, 0) FROM
    fnd_concurrent_requests R, fnd_concurrent_programs P, fnd_application
    A, fnd_user U, fnd_oracle_userid O, fnd_conflicts_domain C,
    fnd_concurrent_queues Q, fnd_application A2, fnd_executables E,
    fnd_conc_request_arguments X WHERE R.Status_code = 'I' And
    ((R.OPS_INSTANCE is null) or (R.OPS_INSTANCE = -1) or
    (R.OPS_INSTANCE =
    decode(:dcp_on,1,FND_CONC_GLOBAL.OPS_INST_NUM,R.OPS_INSTANCE))) And
    R.Request_ID = X.Request_ID(+) And R.Program_Application_Id =
    P.Application_Id(+) And R.Concurrent_Program_Id =
    P.Concurrent_Program_Id(+) And R.Program_Application_Id =
    A.Application_Id(+) And P.Executable_Application_Id =
    E.Application_Id(+) And P.Executable_Id =
    E.Executable_Id(+) And P.Executable_Application_Id =
    A2.Application_Id(+) And R.Requested_By = U.User_Id(+) And R.Cd_Id
    = C.Cd_Id(+) And R.Oracle_Id = O.Oracle_Id(+) And Q.Application_Id =
    :q_applid And Q.Concurrent_Queue_Id = :queue_id And (P.Enabled_Flag
    is NULL OR P.Enabled_Flag = 'Y') And R.Hold_Flag = 'N' And
    R.Requested_Start_Date <= Sysdate And ( R.Enforce_Seriality_Flag =
    'N' OR ( C.RunAlone_Flag = P.Run_Alone_Flag And (P.Run_Alone_Flag =
    'N' OR Not Exists (Select Null From Fnd_Concurrent_Requests Sr
    Where Sr.Status_Code In ('R', 'T') And Sr.Enforce_Seriality_Flag =
    'Y' And Sr.CD_id = C.CD_Id)))) And Q.Running_Processes <=
    Q.Max_Processes And R.Rowid = :reqname And
    ((P.Execution_Method_Code != 'S' OR
    (R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) IN
    ((0,98),(0,100),(0,31721),(0,31722),(0,31757))) AND
    ((R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) NOT IN
    ((510,40112),(510,40113),(510,41497),(510,41498),(530,41859),(530,418
    60),(535,41492),(535,41493),(535,41494)))) FOR UPDATE OF
    R.status_code NoWait
    UPDATE ICX_SESSIONS SET LAST_CONNECT = SYSDATE WHERE SESSION_ID = :B1
    Recommendation 1: Schema Changes
    Estimated benefit is .03 active sessions, .62% of total activity.
    Action
    Consider rebuilding the TABLE "APPLSYS.FND_LOGIN_RESP_FORMS" with object
    ID 34651 using a higher value for PCTFREE.
    Related Object
    Database object with ID 34651.
    Rationale
    The UPDATE statement with SQL_ID "cqc5crhxxt36t" was significantly
    affected by "buffer busy" waits.
    Related Object
    SQL statement with SQL_ID cqc5crhxxt36t.
    UPDATE FND_LOGIN_RESP_FORMS FLRF SET END_TIME = SYSDATE WHERE
    FLRF.LOGIN_ID = :B2 AND FLRF.LOGIN_RESP_ID = :B1 AND FLRF.END_TIME IS
    NULL AND (FLRF.FORM_ID, FLRF.FORM_APPL_ID) = (SELECT F.FORM_ID,
    F.APPLICATION_ID FROM FND_FORM F, FND_APPLICATION A WHERE F.FORM_NAME
    = :B4 AND F.APPLICATION_ID = A.APPLICATION_ID AND
    A.APPLICATION_SHORT_NAME = :B3 )
    Symptoms That Led to the Finding:
    Wait class "Concurrency" was consuming significant database time.
    Impact is 2.53 active sessions, 48.87% of total activity.
    Finding 4: Log File Switches
    Impact is .91 active sessions, 17.56% of total activity.
    Log file switch operations were consuming significant database time while
    waiting for checkpoint completion.
    This problem can be caused by use of hot backup mode on tablespaces. DML to
    tablespaces in hot backup mode causes generation of additional redo.
    Recommendation 1: Database Configuration
    Estimated benefit is .91 active sessions, 17.56% of total activity.
    Action
    Verify whether incremental shipping was used for standby databases.
    Symptoms That Led to the Finding:
    Wait class "Configuration" was consuming significant database time.
    Impact is .91 active sessions, 17.63% of total activity.
    Finding 5: Buffer Busy
    Impact is .56 active sessions, 10.87% of total activity.
    A hot data block with concurrent read and write activity was found. The block
    belongs to segment "ICX.ICX_SESSIONS" and is block 243489 in file 36.
    Recommendation 1: Application Analysis
    Estimated benefit is .56 active sessions, 10.87% of total activity.
    Action
    Investigate application logic to find the cause of high concurrent read
    and write activity to the data present in this block.
    Related Object
    Database block with object number 37562, file number 36 and block
    number 243489.
    Rationale
    The SQL statement with SQL_ID "5d7957yktf3nn" spent significant time on
    "buffer busy" waits for the hot block.
    Related Object
    SQL statement with SQL_ID 5d7957yktf3nn.
    UPDATE ICX_SESSIONS SET TIME_OUT = :B2 WHERE SESSION_ID = :B1
    Rationale
    The SQL statement with SQL_ID "326up1aym56dd" spent significant time on
    "buffer busy" waits for the hot block.
    Related Object
    SQL statement with SQL_ID 326up1aym56dd.
    UPDATE ICX_SESSIONS SET LAST_CONNECT = SYSDATE WHERE SESSION_ID = :B1
    Recommendation 2: Schema Changes
    Estimated benefit is .56 active sessions, 10.87% of total activity.
    Action
    Consider rebuilding the TABLE "ICX.ICX_SESSIONS" with object ID 37562
    using a higher value for PCTFREE.
    Related Object
    Database object with ID 37562.
    Symptoms That Led to the Finding:
    Wait class "Concurrency" was consuming significant database time.
    Impact is 2.53 active sessions, 48.87% of total activity.
    Finding 6: Undersized SGA
    Impact is .38 active sessions, 7.37% of total activity.
    The SGA was inadequately sized, causing additional I/O or hard parses.
    The value of parameter "sga_target" was "4096 M" during the analysis period.
    Recommendation 1: Database Configuration
    Estimated benefit is .12 active sessions, 2.33% of total activity.
    Action
    Increase the size of the SGA by setting the parameter "sga_target" to
    4608 M.
    Symptoms That Led to the Finding:
    Wait class "User I/O" was consuming significant database time.
    Impact is .7 active sessions, 13.57% of total activity.
    Hard parsing of SQL statements was consuming significant database time.
    Impact is .13 active sessions, 2.51% of total activity.
    Contention for latches related to the shared pool was consuming
    significant database time.
    Impact is 0 active sessions, .03% of total activity.
    Wait class "Concurrency" was consuming significant database time.
    Impact is 2.53 active sessions, 48.87% of total activity.
    Finding 7: Commits and Rollbacks
    Impact is .28 active sessions, 5.42% of total activity.
    Waits on event "log file sync" while performing COMMIT and ROLLBACK operations
    were consuming significant database time.
    Recommendation 1: Host Configuration
    Estimated benefit is .28 active sessions, 5.42% of total activity.
    Action
    Investigate the possibility of improving the performance of I/O to the
    online redo log files.
    Rationale
    The average size of writes to the online redo log files was 163 K and
    the average time per write was 68 milliseconds.
    Symptoms That Led to the Finding:
    Wait class "Commit" was consuming significant database time.
    Impact is .28 active sessions, 5.42% of total activity.
    Finding 8: Undo I/O
    Impact is .18 active sessions, 3.53% of total activity.
    Undo I/O was a significant portion (26%) of the total database I/O.
    No recommendations are available.
    Symptoms That Led to the Finding:
    The throughput of the I/O subsystem was significantly lower than
    expected.
    Impact is .08 active sessions, 1.46% of total activity.
    Wait class "User I/O" was consuming significant database time.
    Impact is .7 active sessions, 13.57% of total activity.
    Finding 9: CPU Usage
    Impact is .13 active sessions, 2.57% of total activity.
    Time spent on the CPU by the instance was responsible for a substantial part
    of database time.
    Recommendation 1: SQL Tuning
    Estimated benefit is .13 active sessions, 2.57% of total activity.
    Finding 10: Top SQL By I/O
    Impact is .11 active sessions, 2.21% of total activity.
    Individual SQL statements responsible for significant user I/O wait were
    found.
    Recommendation 1: SQL Tuning
    Estimated benefit is .11 active sessions, 2.22% of total activity.
    Action
    Run SQL Tuning Advisor on the SQL statement with SQL_ID "b3pnc5yctv2z5".
    Related Object
    SQL statement with SQL_ID b3pnc5yctv2z5.
    INSERT INTO ZX_TRANSACTION_LINES_GT( APPLICATION_ID ,ENTITY_CODE
    ,EVENT_CLASS_CODE ,TRX_ID ,TRX_LEVEL_TYPE ,TRX_LINE_ID ,LINE_CLASS
    ,LINE_LEVEL_ACTION ,TRX_LINE_TYPE ,TRX_LINE_DATE
    ,LINE_AMT_INCLUDES_TAX_FLAG ,LINE_AMT ,TRX_LINE_QUANTITY ,UNIT_PRICE
    ,PRODUCT_ID ,PRODUCT_ORG_ID ,UOM_CODE ,PRODUCT_CODE ,SHIP_TO_PARTY_ID
    ,SHIP_FROM_PARTY_ID ,BILL_TO_PARTY_ID ,BILL_FROM_PARTY_ID
    ,SHIP_FROM_PARTY_SITE_ID ,BILL_FROM_PARTY_SITE_ID
    ,SHIP_TO_LOCATION_ID ,SHIP_FROM_LOCATION_ID ,BILL_TO_LOCATION_ID
    ,SHIP_THIRD_PTY_ACCT_ID ,SHIP_THIRD_PTY_ACCT_SITE_ID ,HISTORICAL_FLAG
    ,TRX_LINE_CURRENCY_CODE ,TRX_LINE_CURRENCY_CONV_DATE
    ,TRX_LINE_CURRENCY_CONV_RATE ,TRX_LINE_CURRENCY_CONV_TYPE
    ,TRX_LINE_MAU ,TRX_LINE_PRECISION ,HISTORICAL_TAX_CODE_ID
    ,TRX_BUSINESS_CATEGORY ,PRODUCT_CATEGORY ,PRODUCT_FISC_CLASSIFICATION
    ,LINE_INTENDED_USE ,PRODUCT_TYPE ,USER_DEFINED_FISC_CLASS
    ,ASSESSABLE_VALUE ,INPUT_TAX_CLASSIFICATION_CODE ,ACCOUNT_CCID
    ,BILL_THIRD_PTY_ACCT_ID ,BILL_THIRD_PTY_ACCT_SITE_ID ,TRX_LINE_NUMBER
    ,TRX_LINE_DESCRIPTION ,PRODUCT_DESCRIPTION ,USER_UPD_DET_FACTORS_FLAG
    ,DEFAULTING_ATTRIBUTE1 ) SELECT :B4 ,:B3 ,:B2
    ,PRL.REQUISITION_HEADER_ID ,:B1 ,PRL.REQUISITION_LINE_ID ,'INVOICE'
    ,NVL(PRL.TAX_ATTRIBUTE_UPDATE_CODE,'UPDATE') ,'ITEM'
    ,NVL(PRL.NEED_BY_DATE, SYSDATE) ,'N' ,NVL(PRL.AMOUNT,
    PRL.UNIT_PRICE*PRL.QUANTITY) ,PRL.QUANTITY ,PRL.UNIT_PRICE
    ,PRL.ITEM_ID ,(SELECT FSP.INVENTORY_ORGANIZATION_ID FROM
    FINANCIALS_SYSTEM_PARAMS_ALL FSP WHERE FSP.ORG_ID=PRL.ORG_ID)
    ,(SELECT MUM.UOM_CODE FROM MTL_UNITS_OF_MEASURE MUM WHERE
    MUM.UNIT_OF_MEASURE=PRL.UNIT_MEAS_LOOKUP_CODE) ,MSIB.SEGMENT1
    ,PRL.DESTINATION_ORGANIZATION_ID ,PV.PARTY_ID ,PRH.ORG_ID
    ,PV.PARTY_ID ,PVS.PARTY_SITE_ID ,PVS.PARTY_SITE_ID
    ,PRL.DELIVER_TO_LOCATION_ID ,(SELECT HZPS.LOCATION_ID FROM
    HZ_PARTY_SITES HZPS WHERE HZPS.PARTY_SITE_ID = PVS.PARTY_SITE_ID)
    ,(SELECT LOCATION_ID FROM HR_ALL_ORGANIZATION_UNITS WHERE
    ORGANIZATION_ID=PRH.ORG_ID) ,PRL.VENDOR_ID ,PRL.VENDOR_SITE_ID ,NULL
    ,NVL(PRL.CURRENCY_CODE, :B9 ) ,NVL2(PRL.CURRENCY_CODE, PRL.RATE_DATE,
    SYSDATE) ,NVL2(PRL.CURRENCY_CODE, PRL.RATE, :B8 )
    ,NVL2(PRL.CURRENCY_CODE, PRL.RATE_TYPE, :B7 )
    ,FC.MINIMUM_ACCOUNTABLE_UNIT ,NVL(FC.PRECISION, 2) ,NULL
    ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.TRX_BUSINESS_CATEGORY, NULL),
    NULL ) ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.PRODUCT_CATEGORY, NULL), NULL )
    ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.PRODUCT_FISC_CLASSIFICATION,
    NULL), NULL ) ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.LINE_INTENDED_USE, NULL), NULL )
    ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.PRODUCT_TYPE, NULL), NULL )
    ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.USER_DEFINED_FISC_CLASS, NULL),
    NULL ) ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.ASSESSABLE_VALUE, NULL), NULL )
    ,DECODE(:B6 , 'REQIMPORT', PRL.TAX_NAME,
    DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.INPUT_TAX_CLASSIFICATION_CODE,
    NULL), NULL ) ) ,NVL((SELECT PRD.CODE_COMBINATION_ID FROM
    PO_REQ_DISTRIBUTIONS_ALL PRD WHERE PRD.REQUISITION_LINE_ID =
    PRL.REQUISITION_LINE_ID AND ROWNUM = 1), MSIB.EXPENSE_ACCOUNT )
    ,PV.VENDOR_ID ,PVS.VENDOR_SITE_ID ,PRL.LINE_NUM ,PRL.ITEM_DESCRIPTION
    ,PRL.ITEM_DESCRIPTION ,(SELECT 'Y' FROM DUAL WHERE :B6 = 'REQIMPORT'
    AND PRL.TAX_NAME IS NOT NULL) ,PRL.DESTINATION_ORGANIZATION_ID FROM
    PO_REQUISITION_HEADERS_ALL PRH, PO_REQUISITION_LINES_ALL PRL,
    ZX_LINES_DET_FACTORS ZXLDET, PO_VENDORS PV, PO_VENDOR_SITES_ALL PVS,
    MTL_SYSTEM_ITEMS_B MSIB, FND_CURRENCIES FC WHERE
    PRH.REQUISITION_HEADER_ID = :B5 AND PRH.REQUISITION_HEADER_ID =
    PRL.REQUISITION_HEADER_ID AND ZXLDET.APPLICATION_ID(+) = :B4 AND
    ZXLDET.ENTITY_CODE(+) = :B3 AND ZXLDET.EVENT_CLASS_CODE(+) = :B2 AND
    ZXLDET.TRX_LEVEL_TYPE(+) = :B1 AND ZXLDET.TRX_LINE_ID(+) =
    PRL.PARENT_REQ_LINE_ID AND PV.VENDOR_ID(+) = PRL.VENDOR_ID AND
    PVS.VENDOR_SITE_ID(+) = PRL.VENDOR_SITE_ID AND
    MSIB.INVENTORY_ITEM_ID(+) = PRL.ITEM_ID AND MSIB.ORGANIZATION_ID(+) =
    PRL.ORG_ID AND FC.CURRENCY_CODE(+) = PRL.CURRENCY_CODE AND
    NVL(PRL.MODIFIED_BY_AGENT_FLAG, 'N') = 'N' AND NVL(PRL.CANCEL_FLAG,
    'N') = 'N' AND NVL(PRL.CLOSED_CODE, 'OPEN') <> 'FINALLY CLOSED' AND
    PRL.LINE_LOCATION_ID IS NULL AND PRL.AT_SOURCING_FLAG IS NULL
    Rationale
    SQL statement with SQL_ID "b3pnc5yctv2z5" was executed 3 times and had
    an average elapsed time of 138 seconds.
    Rationale
    Average time spent in User I/O wait events per execution was 137
    seconds.
    Symptoms That Led to the Finding:
    Wait class "User I/O" was consuming significant database time.
    Impact is .7 active sessions, 13.57% of total activity.
    Additional Information
    Miscellaneous Information
    Wait class "Application" was not consuming significant database time.
    Wait class "Network" was not consuming significant database time.
    Session connect and disconnect calls were not consuming significant database
    time.
    The database's maintenance windows were active during 100% of the analysis
    period.
    Regards
    Athish

    "For the past few days I have been facing a serious performance problem in our Production instance."
    For production issues, please log an SR.
    Was this working before? If yes, have any changes been made recently?
    Are your statistics collected and up to date?
    Please see these docs.
    AutoInvoice Performance Issue When Processing Tax [ID 1059275.1]
    R12 : System Hangs When Attempting To Save Blanket Release After Applying Patch 11817843 [ID 1333336.1]
    Thanks,
    Hussein
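    As a starting point for the "log file switch (checkpoint incomplete)" finding, it may also be worth checking whether any tablespace was left in hot backup mode (one of the causes the report itself mentions) and how large and busy the online redo logs are. A rough sketch of that kind of check:
    -- Datafiles still in hot backup mode generate extra redo.
    SELECT d.tablespace_name, b.file#, b.status, b.time
    FROM   v$backup b, dba_data_files d
    WHERE  b.file# = d.file_id
    AND    b.status = 'ACTIVE';
    -- Size and number of online redo log groups.
    SELECT group#, thread#, bytes/1024/1024 AS size_mb, members, status
    FROM   v$log;
    -- Log switches per hour over the last day.
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
    FROM   v$log_history
    WHERE  first_time > SYSDATE - 1
    GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER  BY 1;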

  • Error in generating ADDM Report(Oracle 11g 64 bit EE on linux RHEL 5)

    I collected a .dmp file from production using awrextr.sql and imported it on our development side using awrload.sql.
    I am able to generate AWR snapshot reports from it without any trouble.
    But when I try to generate an ADDM report using addmrpti.sql I get the following error (please see the output pasted below):
    Specify the Report Name
    ~~~~~~~~~~~~~~~~~~~~~~~
    The default report file name is addmrpt_1_7149_7156.txt. To use this name,
    press <return> to continue, otherwise enter an alternative.
    Enter value for report_name:
    Using the report name addmrpt_1_7149_7156.txt
    Running the ADDM analysis on the specified pair of snapshots ...
    begin
    ERROR at line 1:
    ORA-13711: Some snapshots in the range [7149, 7156] are missing key statistics.
    ORA-06512: at "SYS.DBMS_ADVISOR", line 201
    ORA-06512: at line 27
    Generating the ADDM report for this analysis ...
    ERROR:
    ORA-13608: The specified name NULL is invalid.
    ORA-06512: at "SYS.PRVT_ADVISOR", line 3122
    ORA-06512: at "SYS.DBMS_ADVISOR", line 585
    ORA-06512: at line 1
    End of Report
    Report written to addmrpt_1_7149_7156.txt
    SQL>
    Any clue or help will be really helpful for us.

    Hello,
    Have a look at this:
    ORA-13711: Some snapshots in the range [string, string] are missing key statistics.
    Cause: Some AWR tables encountered errors while creating one or more
    snapshots in the given range. The data present in one or more of these missing
    tables is necessary to perform an ADDM analysis.
    Action: Look in DBA_HIST_SNAP_ERROR to find what tables are missing in
    the given snapshot range. Use the ERROR_NUMBER column in that view
    together with the alert log to identify the reason for the failure and take the necessary action to
    prevent such failures in the future. Try running ADDM on a different snapshot range
    that does not include any incomplete snapshots.
    Thanks and regards
    VD
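    To make that action concrete, a query along these lines (run as a privileged user against the database holding the AWR data, and filtered on the imported DBID if the snapshots came in via awrload) should show which tables are missing for the affected range; the view name comes from the error text above:
    -- Which AWR tables had errors for the snapshots in the problem range?
    SELECT snap_id, table_name, error_number
    FROM   dba_hist_snap_error
    WHERE  snap_id BETWEEN 7149 AND 7156
    ORDER  BY snap_id, table_name;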

  • Error trying to generate addmrpt (ADDM Report) on Oracle 10.1.0.4

    Hello,
    When I launch $ORACLE_HOME/rdbms/admin/addmrpt.sql on my Fedora 3 Oracle 10g connected using sys as sysdba (or system), I get the following error :
    Specify the Begin and End Snapshot Ids
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Enter value for begin_snap: 210
    Begin Snapshot Id specified: 210
    Enter value for end_snap: 211
    End Snapshot Id specified: 211
    Specify the Report Name
    ~~~~~~~~~~~~~~~~~~~~~~~
    The default report file name is addmrpt_1_210_211.txt. To use this name,
    press <return> to continue, otherwise enter an alternative.
    Enter value for report_name:
    Using the report name addmrpt_1_210_211.txt
    Running the ADDM analysis on the specified pair of snapshots ...
    begin
    ERROR at line 1:
    ORA-13711: Some snapshots in the range [210, 211] are missing key
    statistics.
    ORA-06512: at "SYS.PRVT_ADVISOR", line 1283
    ORA-06512: at "SYS.DBMS_ADVISOR", line 190
    ORA-06512: at line 27
    Generating the ADDM report for this analysis ...
    ERROR:
    ORA-14552: cannot perform a DDL, commit or rollback inside a query or DML
    ORA-06512: at "SYS.PRVT_ADVISOR", line 1750
    ORA-13608: The specified task or object name NULL is invalid.
    ORA-06512: at "SYS.DBMS_ADVISOR", line 569
    ORA-06512: at line 1
    Any help to solve this problem?
    Oops, when I use awrrpt on the same snapshots, I get a correct report.
    Regards,
    Freddy

    Please refer to the patchnote.htm that is bundled with 10.1.0.4.0 patch set. Only you are aware of what type of database configuration you have. You will want to pay close attention to "7.2.1.3 Set the SHARED_POOL_SIZE and JAVA_POOL_SIZE Initialization Parameters" and "7.2.2 Upgrade the Release 10.1 Database".
    Basically, I believe you have missed these required steps after installing the 10.1.0.4.0 patch set from the OUI:
    (pasted from the patchnote.htm)
    13. Enter the following SQL*Plus commands:
        SQL> STARTUP UPGRADE
        SQL> SPOOL patch.log
        SQL> @ORACLE_BASE\ORACLE_HOME\rdbms\admin\catpatch.sql
        SQL> SPOOL OFF
    19. Review the patch.log file for errors and inspect the list of components that is displayed at the end of the catpatch.sql script.
        This list provides the version and status of each SERVER component in the database.
    20. If necessary, rerun the catpatch.sql script after correcting any problems.
    21. Restart the database:
        SQL> SHUTDOWN
        SQL> STARTUP
    25. Run the utlrp.sql script to recompile all invalid PL/SQL packages now instead of when the packages are accessed for the first time. This step is optional but recommended.
        SQL> @ORACLE_BASE\ORACLE_HOME\rdbms\admin\utlrp.sql

  • Some doubts regarding sourcing cockpit in EBP

    Hi all,
    This is Sankar Bhatta, working at IBM. I am new to the EBP module. I have a few doubts regarding the sourcing cockpit.
    In what cases does a shopping cart come to the sourcing cockpit of the purchaser?
    I am listing some of the cases where I have doubts.
    1) A user creates a SC but does not assign any vendors. In that case it comes to the sourcing cockpit of the purchaser. Is that right?
    2) A user creates a shopping cart (with more than one item) without a vendor and it goes to the SOCO of the purchaser. He assigns the vendor and orders it, and the PO is created in the backend. Now, for some reason (for example the vendor is not able to supply the item), the purchaser deletes one item from the PO in EB. In this case does the SC come back to the SOCO of the purchaser?
    3) The user creates the SC without a vendor. The SC goes to the SOCO of the purchaser. The SC is in the approval process. Now the user deletes one item and orders it again. In this case does the SC go to the SOCO again?
    4) A user creates a SC (with more than one item) and approval is over. The PO is created in the backend. Now, if the user or somebody who has the authorization deletes the item in the SC, does the SC go to the SOCO of the purchaser?
    5) According to SAP standard functionality, a PO can be deleted only by someone in the purchasing organisation. Can this person be a purchaser, or some other person in the purchasing organisation?
    Please answer these questions, and if you know any other case where a SC comes to the SOCO of the purchaser, please include that as well.
    Thanks and regards
    Sankar Rao Bhatta.

    Hi Sankara,
    The sourcing cockpit is much simpler than that. Your questions show you're quite confused about it.
    There is only one customizing point used : SAP Reference IMG -> SAP Implementation Guide -> Supplier Relationship Management -> SRM Server -> Sourcing -> Define Sourcing for Product Categories
    If Sourcing for Product Categories is not configured, the system creates purchase orders in the local scenario for all requirements; these are incomplete if the source of supply is missing. If you require additional control options, for example, the facility to control processing at product level, you can use the following BAdI: Define Execution of Sourcing.
    The different options you have in the customizing point are:
    - Sourcing is never carried out: This is the default setting. Enterprise Buyer does not transfer any items to the purchaser's sourcing application, independent of the status of the shopping cart.
    - Sourcing is always carried out: Enterprise Buyer transfers each item to Sourcing, independent of the status of the shopping cart.
    - Sourcing is carried out for items without a source of supply: Enterprise Buyer transfers to Sourcing all requirements that have multiple sources of supply of which none is assigned, or that have no source of supply at all.
    -Automatic requirement grouping; sourcing for items without assigned source of supply:
    If a source of supply is assigned to a requirement, the report BBP_SC_TRANSFER_GROUPED automatically groups requirements together for the creation of a PO. If the requirement does not have a source of supply, it appears in the work list of the sourcing application for manual assignment. Once you have assigned a source of supply, you can submit the requirement to the report.
    -Automatic grouping; sourcing is never carried out: If a source of supply is assigned to a requirement, the report BBP_SC_TRANSFER_GROUPED automatically groups requirements together for the creation of a PO. If the requirement does not have a source of supply, an incomplete PO is created.
    -Automatic bid invitation for items without a source of supply: Enterprise Buyer creates a bid invitation for all requirements that do not have any source of supply.
    So for your questions:
    1) Depending on your customizing, this SC will lead to:
    - a backend PR (classic scenario)
    - a local incomplete PO (standalone or extended classic without sourcing)
    - a requirement in the sourcing cockpit (if customized)
    2) The modification of a PO never changes the initial document (SC and/or requirement); nothing 'comes back' into the sourcing cockpit.
    3) If you have customized the sourcing cockpit, the SC line goes into it only after the approval process.
    4) This has nothing to do with the sourcing cockpit.
    5) POs can be deleted by people who have the correct authorizations (that is, purchasers of the document's purchasing org in the standard). Be careful: POs cannot be deleted once they have been edited (as of R/3).
    Regards.
    Vadim
    PS: Please don't forget to reward points for helpful answers on your threads.

  • Doubt regarding SQL execution

    Hi Friends,
    I am using an Oracle 10g DB - 10.2.0.3.0.
    I have some basic doubts regarding SQL query execution by Oracle.
    Say I am executing a query from a Toad/SQL*Plus session: the first time it takes 10 secs, then 1 sec, and so on.
    The same thing happens every 15 minutes. (Any specific reason for this?)
    It takes more time when it executes the first time because of parsing and all that, but from then on it is picked up from the
    shared pool, right? How long will it stay in the shared pool? Does Oracle maintain any specific time period after which the query is cleared from the shared pool? How does Oracle handle this?
    Another thing: say I have a report query that I run monthly. What will the execution time be when I run this query each month? Will Oracle parse this query every time I run it? How do I improve the performance in this situation (may sound odd :))?
    Regards,
    Marlon

    "Say I am executing a query from a Toad/SQL*Plus session: the first time it takes 10 secs, then 1 sec, and so on. The same thing happens every 15 minutes. (Any specific reason for this?) It takes more time when it executes the first time because of parsing, but from then on it is picked up from the shared pool, right? How long will it stay in the shared pool? How does Oracle handle this?"
    The shared pool caches the SQL statement, so when you execute the same SQL a second time it only needs a soft parse. But this is not the only reason the query executes faster the second time; the time difference between a soft parse and a hard parse is fairly small, so it only really matters if you execute the same query a very large number of times.
    The thing that really matters is the data buffer cache. The blocks read by your query are cached in the buffer cache in the SGA, so the next time you run the same query the I/O is reduced because the data is already in memory and you don't have to go to disk to get it.
    But the data in the buffer cache is not persistent: it is managed by a least-recently-used (LRU) algorithm, so when the cache is full the least recently used blocks are aged out.
    "Another thing: say I have a report query that I run monthly. What will the execution time be when I run this query each month? Will Oracle parse this query every time I run it? How do I improve the performance in this situation (may sound odd :))?"
    Like the buffer cache, the shared pool is also managed on an LRU basis. So if the query is still in the shared pool it will be soft parsed, otherwise it will be hard parsed. But it is very rare for a query to still be in the shared pool a month later.
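    If you want to see whether a particular statement is still cached, and whether later runs are being soft parsed against an existing cursor, V$SQL gives a quick indication. A rough sketch (the comment in the filter is just a made-up example used to find the statement):
    -- Is the statement still in the shared pool, and how often has it been
    -- reloaded versus parsed and executed?
    SELECT sql_id, child_number, loads, parse_calls, executions,
           first_load_time, last_load_time
    FROM   v$sql
    WHERE  sql_text LIKE 'SELECT /* monthly_report */%';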

  • How to apply recommendations given by addm report

    Hi Gurus
    Actually the ADDM report of my test database is giving some recommendations and I am not sure how to apply them to my database.
    So I am putting some of the data here; if anybody could give me a hint about it, that would be a great help, as I am new to DBA work.
    Waits on event "log file sync" while performing COMMIT and ROLLBACK operations were consuming significant database time. RECOMMENDATION 1: Application Analysis, 9.9% benefit (147 seconds) ACTION: Investigate application logic for possible reduction in the number of COMMIT operations by increasing the size of transactions. RATIONALE: The application was performing 112 transactions per minute with an average redo size of 3655 bytes per transaction. RECOMMENDATION 2: Host Configuration, 9.9% benefit (147 seconds) ACTION: Investigate the possibility of improving the performance of I/O to the online redo log files. RATIONALE: The average size of writes to the online redo log files was 3 K and the average time per write was 4 milliseconds. SYMPTOMS THAT LED TO THE FINDING: Wait class "Commit" was consuming significant database time. (9.9% impact [147 seconds]) FINDING 6: 8% impact (119 seconds) ---------------------------------- Wait event "process startup" in wait class "Other" was consuming significant database time. RECOMMENDATION 1: Application Analysis, 8% benefit (119 seconds) ACTION: Investigate the cause for high "process startup" waits. Refer to Oracle's "Database Reference" for the description of this wait event. Use given SQL for further investigation. RATIONALE: The SQL statement with SQL_ID "NULL-SQLID" was found waiting for "process startup" wait event. RELEVANT OBJECT: SQL statement with SQL_ID NULL-SQLID RECOMMENDATION 2: Application Analysis, 8% benefit (119 seconds) ACTION: Investigate the cause for high "process startup" waits in Service "SYS$BACKGROUND". FINDING 7: 6.3% impact (93 seconds)
    NO RECOMMENDATIONS AVAILABLE ADDITIONAL INFORMATION: Hard parses due to cursor environment mismatch were not consuming significant database time. Hard parsing SQL statements that encountered parse errors was not consuming significant database time. Parse errors due to inadequately sized shared pool were not consuming significant database time. Hard parsing due to cursors getting aged out of shared pool was not consuming significant database time. Hard parses due to literal usage and cursor invalidation were not consuming significant database time. FINDING 8: 4.3% impact (63 seconds) ----------------------------------- The throughput of the I/O subsystem was significantly lower than expected. RECOMMENDATION 1: Host Configuration, 4.3% benefit (63 seconds) ACTION: Consider increasing the throughput of the I/O subsystem. Oracle's recommended solution is to stripe all data file using the SAME methodology. You might also need to increase the number of disks for better performance. Alternatively, consider using Oracle's Automatic Storage Management solution. SYMPTOMS THAT LED TO THE FINDING: Wait class "User I/O" was consuming significant database time. (13% impact [191 seconds]) FINDING 9: 4.1% impact (60 seconds) ----------------------------------- Buffer cache writes due to small log files were consuming significant database time. NO RECOMMENDATIONS AVAILABLE SYMPTOMS THAT LED TO THE FINDING: The throughput of the I/O subsystem was significantly lower than expected. (4.3% impact [63 seconds]) Wait class "User I/O" was consuming significant database time. (13% impact [191 seconds]) FINDING 10: 3.5% impact (51 seconds) ------------------------------------ Wait event "class slave wait" in wait class "Other" was consuming significant database time. RECOMMENDATION 1: Application Analysis, 3.5% benefit (51 seconds) ACTION: Investigate the cause for high "class slave wait" waits. Refer to Oracle's "Database Reference" for the description of this wait event. Use given SQL for further investigation. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ADDITIONAL INFORMATION ---------------------- Wait class "Administrative" was not consuming significant database time. Wait class "Application" was not consuming significant database time. Wait class "Cluster" was not consuming significant database time. Wait class "Concurrency" was not consuming significant database time. Wait class "Configuration" was not consuming significant database time. Wait class "Network" was not consuming significant database time. Wait class "Scheduler" was not consuming significant database time. The analysis of I/O performance is based on the default assumption that the average read time for one database block is 10000 micro-seconds. An explanation of the terminology used in this report is available when you run the report with the 'ALL' level of detail.
    regards
    Richa

    I'm not sure what it is about the recommendations you don't understand, as what you posted seems quite clear. Take #1, for example:
    "Investigate application logic for possible reduction in the number of COMMIT operations by increasing the size of transactions."
    This is telling you that it appears you are doing incremental commits. Are you? Can you change it if you are?
    When you respond please include full version information.
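    To make that first recommendation concrete: it is pointing at code that commits inside a loop, once per row, which is what drives up "log file sync" time. If the business logic allows it, committing once per batch is the change being suggested. A simplified, hypothetical sketch (the table and loop are invented purely for illustration):
    BEGIN
      -- Pattern the finding points at: a commit on every iteration.
      FOR r IN (SELECT order_id FROM orders_staging) LOOP
        UPDATE orders SET status = 'PROCESSED' WHERE order_id = r.order_id;
        COMMIT;  -- one "log file sync" wait per row
      END LOOP;
    END;
    /
    BEGIN
      -- Larger transaction: the same work with a single commit at the end.
      FOR r IN (SELECT order_id FROM orders_staging) LOOP
        UPDATE orders SET status = 'PROCESSED' WHERE order_id = r.order_id;
      END LOOP;
      COMMIT;
    END;
    /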

  • CPU wait events on ADDM report

    Hello,
    My Oracle version is:
    Connected to Oracle Database 11g Enterprise Edition Release 11.1.0.7.0. Yesterday I was taking a look at an ADDM report and spotted the following:
       Rationale
          SQL statement with SQL_ID "0mgk8gx9hj71d" was executed 777 times and had
          an average elapsed time of 42 seconds.
       Rationale
          Waiting for event "resmgr:cpu quantum" in wait class "Scheduler"
          accounted for 34% of the database time spent in processing the SQL
          statement with SQL_ID "0mgk8gx9hj71d".After that, I started looking for how ADDM could know that the SQL_ID "0mgk8gx9hj71d" waited 34% on "resmgr:cpu quantum" event. No lucky with that...
    The only wait event information related to a given SQL_ID I've found on v$active_session_history (or the AWR persisted table for it), but in the ASH there is no information about CPU wait events like "cpu quantum". When the session is waiting for CPU, there is no event related in v$ash.
    So, my question is: where ADDM got the information that the SQL waited 34% of the time on "resmgr:cpu quantum"?
    Thanks,
    Heitor Kirsten

    Hi,
    Is a session waiting for CPU resources ("resmgr:cpu quantum") considered an active session? Maybe not.
    I guess (I have not tested it) that this may be the reason why this kind of wait is not shown in the active session history.
    Regards
    Maurice
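    One way to investigate is to aggregate the ASH samples for that SQL_ID yourself and see how CPU time and wait events are recorded; the same query works against DBA_HIST_ACTIVE_SESS_HISTORY for older periods. A rough sketch, assuming Diagnostics Pack licensing:
    -- Breakdown of ASH samples for one SQL_ID by session state and event.
    SELECT session_state,
           NVL(event, 'ON CPU') AS event,
           COUNT(*) AS samples,
           ROUND(100 * RATIO_TO_REPORT(COUNT(*)) OVER (), 1) AS pct
    FROM   v$active_session_history
    WHERE  sql_id = '0mgk8gx9hj71d'
    GROUP  BY session_state, event
    ORDER  BY samples DESC;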

  • Doubt regarding joins in obiee

    hi gems...
    I have a doubt regarding BI analytics joins.
    When I imported all the tables from my schema into the repository, they came in with all the joins defined in the database.
    Then I made several business models and created some reports.
    There I got some errors, which are mainly due to self joins in the tables and to more than one join between two tables.
    My question is: are these two types of joins not supported in OBIEE?
    And if I want more than one join condition between two tables, what can I do?
    thanks in advance...

    Hi User,
    OBIEE does not support self joins. To avoid such circular joins, make use of alias tables in the physical layer. The following is a list of the main reasons to create an alias table:
    To reuse an existing table more than once in your physical layer (without having to import it several times).
    To set up multiple alias tables, each with different keys, names, or joins.
    To help you design sophisticated star or snowflake structures in the business model layer. Alias tables are critical in the process of converting ER Schemas to Dimensional Schemas
    Rgds,
    Dpka

  • Doubts regarding RRI

    Hello Friends,
    I have some doubts regarding RRI; please go through the questions below.
    I have 2 queries: 1. Sales report  2. Sales report world
    I have built a jump target in RSBBS by giving sender and receiver details.
    In the sales report I have the following data. My user's requirement is that if he selects one material -> right click -> goto -> jump to the world report, he wants to see the data for only that particular material, but in my case it is showing all the materials which are in the 1st query.
    Material            Dist Channel   Month
    098647A639          G2     201002
    098647A781          G2     200909
    0265231927          G2     200901
    0986479560          G2     200912
    0204011474          G2     200912
    098647A711          G2     201002
    098647B772          G2     200909
    0986473371          G2     200901
    0265216627          G2     200901
    1987482222          G2     200901
    1987482202          G2     200901

    Hi,
    I had the same requirement, and I achieved it by selecting "Goto" on the respective company / sales document (in my case). I have done no settings in the assignment details.
    I have another RRI requirement.
    When we are in the report for an invoice and we click on the billing document, it should go to transaction "VF03" in R/3.
    I was able to achieve this (we have SSO logon configured).
    My requirement is that when we select goto VF03, it should go to the respective billing document on which we right-clicked; at the moment it only displays the VF03 transaction. It should display the respective billing document.
    Can anyone help to resolve the issue?
    Thanks in advance,
    Venky

  • High SQL version count and low executions from ADDM Report!!

    Hi, all.
    I am reading an ADDM Report.
    I found something strange.
    In the "SQL ordered by Version Count" section,
    an application SQL statement has a version count of 45
    but only 1 execution. The time interval is one hour.
    The database has only one application user,
    and thus all sessions connect to the same user.
    What does a version count of 45 mean?
    The application is using bind variables.
    How can I reduce this number?
    I am hitting the "library cache lock" wait event from time to time.
    Thanks and Regards.

    You could get a high version count if the bind variables are character type and allow large variations in length, see:
    http://jonathanlewis.wordpress.com/2007/01/05/bind-variables/
    The oddity of lots of versions with only one execution could be an unlucky timing thing relating to cursor invalidation - the child cursors can be left trailing with zero'ed statistics rather than being cleared out when they should be.
    Problems with library cache lock can be a consequence of large numbers of versions - simply because many people want, and may have to create, different child cursors for the same statement at the same time. (However, you've already received some comments about RAC and gathering stats which can be an underlying cause of that issue).
    Looking at it from a different angle, are there any global temporary tables involved, and if so are they 'on commit preserve rows', and is this 10.1 or 10.2?
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
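    As a follow-up check, V$SQL_SHARED_CURSOR shows, for each child cursor, which mismatch prevented it from being shared (bind length ranges usually surface in the bind-related columns; the exact set of columns varies by version). A rough sketch, using a SQL*Plus substitution variable for the statement in question:
    -- Why were the child cursors for one statement not shared?
    SELECT s.sql_id, s.child_number, s.executions,
           c.bind_mismatch, c.optimizer_mismatch, c.language_mismatch
    FROM   v$sql s, v$sql_shared_cursor c
    WHERE  s.sql_id = '&sql_id'
    AND    c.address = s.address
    AND    c.child_address = s.child_address
    ORDER  BY s.child_number;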

  • ADDM Report....little help needed

    DETAILED ADDM REPORT FOR TASK 'ADDM:151072109_1_2686' WITH ID 9530
    Analysis Period: 16-MAY-2011 from 11:00:55 to 11:29:38
    Database ID/Instance: 151072109/1
    Database/Instance Names: IU10G/iu10g
    Host Name: LIVEDB
    Database Version: 10.2.0.1.0
    Snapshot Range: from 2685 to 2686
    Database Time: 364 seconds
    Average Database Load: .2 active sessions
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    FINDING 1: 64% impact (234 seconds)
    Time spent on the CPU by the instance was responsible for a substantial part
    of database time.
    RECOMMENDATION 1: SQL Tuning, 31% benefit (112 seconds)
    ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
    "57xtcdjqy9pv4".
    RELEVANT OBJECT: SQL statement with SQL_ID 57xtcdjqy9pv4 and
    PLAN_HASH 3175156280
    UPDATE FEESUM SET AMTPAY=(SELECT SUM(CRE_BAL) FROM STUJOURNAL
    WHERE REF_NO = :b1 AND STDJID LIKE 'SPY%' ) WHERE VHNO = :b1
    RATIONALE: SQL statement with SQL_ID "57xtcdjqy9pv4" was executed 2256
    times and had an average elapsed time of 0.051 seconds.
    RATIONALE: Average CPU used per execution was 0.049 seconds.
    RECOMMENDATION 2: Application Analysis, 28% benefit (101 seconds)
    ACTION: Parsing SQL statements were consuming significant CPU. Please
    refer to other findings in this task about parsing for further
    details.
    RECOMMENDATION 3: SQL Tuning, 13% benefit (48 seconds)
    ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
    "0qz20ftp5t89r".
    RELEVANT OBJECT: SQL statement with SQL_ID 0qz20ftp5t89r and
    PLAN_HASH 1500325377
    SELECT COUNT(*) FROM ENTER_MSG WHERE SEND_UNSEND='U'
    ACTION: Investigate the SQL statement with SQL_ID "0qz20ftp5t89r" for
    possible performance improvements.
    RELEVANT OBJECT: SQL statement with SQL_ID 0qz20ftp5t89r and
    PLAN_HASH 1500325377
    SELECT COUNT(*) FROM ENTER_MSG WHERE SEND_UNSEND='U'
    RATIONALE: SQL statement with SQL_ID "0qz20ftp5t89r" was executed 167
    times and had an average elapsed time of 0.28 seconds.
    RATIONALE: Average CPU used per execution was 0.12 seconds.
    FINDING 2: 45% impact (164 seconds)
    SQL statements consuming significant database time were found.
    RECOMMENDATION 1: SQL Tuning, 31% benefit (112 seconds)
    ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
    "57xtcdjqy9pv4".
    RELEVANT OBJECT: SQL statement with SQL_ID 57xtcdjqy9pv4 and
    PLAN_HASH 3175156280
    UPDATE FEESUM SET AMTPAY=(SELECT SUM(CRE_BAL) FROM STUJOURNAL
    WHERE REF_NO = :b1 AND STDJID LIKE 'SPY%' ) WHERE VHNO = :b1
    RATIONALE: SQL statement with SQL_ID "57xtcdjqy9pv4" was executed 2256
    times and had an average elapsed time of 0.051 seconds.
    RECOMMENDATION 2: SQL Tuning, 13% benefit (48 seconds)
    ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
    "0qz20ftp5t89r".
    RELEVANT OBJECT: SQL statement with SQL_ID 0qz20ftp5t89r and
    PLAN_HASH 1500325377
    SELECT COUNT(*) FROM ENTER_MSG WHERE SEND_UNSEND='U'
    ACTION: Investigate the SQL statement with SQL_ID "0qz20ftp5t89r" for
    possible performance improvements.
    RELEVANT OBJECT: SQL statement with SQL_ID 0qz20ftp5t89r and
    PLAN_HASH 1500325377
    SELECT COUNT(*) FROM ENTER_MSG WHERE SEND_UNSEND='U'
    RATIONALE: SQL statement with SQL_ID "0qz20ftp5t89r" was executed 167
    times and had an average elapsed time of 0.28 seconds.
    FINDING 3: 31% impact (114 seconds)
    Hard parsing of SQL statements was consuming significant database time.
    NO RECOMMENDATIONS AVAILABLE
    ADDITIONAL INFORMATION:
    Hard parses due to cursor environment mismatch were not consuming
    significant database time.
    Hard parsing SQL statements that encountered parse errors was not
    consuming significant database time.
    Hard parses due to literal usage and cursor invalidation were not
    consuming significant database time.
    The SGA was adequately sized.
    FINDING 4: 2.4% impact (9 seconds)
    Soft parsing of SQL statements was consuming significant database time.
    RECOMMENDATION 1: Application Analysis, 2.4% benefit (9 seconds)
    ACTION: Investigate application logic to keep open the frequently used
    cursors. Note that cursors are closed by both cursor close calls and
    session disconnects.
    RECOMMENDATION 2: DB Configuration, 2.4% benefit (9 seconds)
    ACTION: Consider increasing the maximum number of open cursors a session
    can have by increasing the value of parameter "open_cursors".
    ACTION: Consider increasing the session cursor cache size by increasing
    the value of parameter "session_cached_cursors".
    RATIONALE: The value of parameter "open_cursors" was "700" during the
    analysis period.
    RATIONALE: The value of parameter "session_cached_cursors" was "20"
    during the analysis period.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ADDITIONAL INFORMATION
    Wait class "Application" was not consuming significant database time.
    Wait class "Commit" was not consuming significant database time.
    Wait class "Concurrency" was not consuming significant database time.
    Wait class "Configuration" was not consuming significant database time.
    Wait class "Network" was not consuming significant database time.
    Wait class "User I/O" was not consuming significant database time.
    Session connect and disconnect calls were not consuming significant database
    time.
    The analysis of I/O performance is based on the default assumption that the
    average read time for one database block is 10000 micro-seconds.
    An explanation of the terminology used in this report is available when you
    run the report with the 'ALL' level of detail.
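    Recommendation 2 of Finding 4 above suggests raising "open_cursors" and "session_cached_cursors". A minimal sketch of checking and changing them (the target values are examples only, not taken from the report; note that SESSION_CACHED_CURSORS is static at the system level in 10g, so it needs an spfile change and an instance restart):
    -- Check the current settings.
    SELECT name, value
    FROM   v$parameter
    WHERE  name IN ('open_cursors', 'session_cached_cursors');

    -- OPEN_CURSORS can be raised online; 1000 is only an illustrative value.
    ALTER SYSTEM SET open_cursors = 1000 SCOPE=BOTH;

    -- SESSION_CACHED_CURSORS is static at the system level: the change goes
    -- into the spfile and takes effect after a restart; 100 is an example.
    ALTER SYSTEM SET session_cached_cursors = 100 SCOPE=SPFILE;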
    How can I run SQL Tuning Advisor on the SQL statement with SQL_ID "XYZ" in EM?
    Any other suggestions about the above ADDM report will be appreciated.
    Regards..
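    For readers who would rather run the advisor from SQL*Plus than from EM, a hedged sketch using DBMS_SQLTUNE against the most expensive statement in the report above. The SQL_ID comes from Finding 1; the task name and the 60-second time limit are arbitrary choices, not values from the thread.
    DECLARE
      l_task VARCHAR2(64);
    BEGIN
      -- Create a tuning task for the UPDATE identified in Finding 1.
      -- The task name is arbitrary; any unique name will do.
      l_task := DBMS_SQLTUNE.create_tuning_task(
                  sql_id      => '57xtcdjqy9pv4',
                  scope       => DBMS_SQLTUNE.scope_comprehensive,
                  time_limit  => 60,
                  task_name   => 'tune_57xtcdjqy9pv4',
                  description => 'Advisor task for the FEESUM update');
      -- Run the advisor.
      DBMS_SQLTUNE.execute_tuning_task(task_name => l_task);
    END;
    /
    -- Display the findings and recommendations.
    SET LONG 100000
    SET PAGESIZE 50000
    SELECT DBMS_SQLTUNE.report_tuning_task('tune_57xtcdjqy9pv4') AS report
    FROM   dual;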

    oracleRaj wrote:
    Thanks, I have checked it. I wanted to know how I can run SQL Tuning Advisor on the SQL statement with SQL_ID "XYZ" in EM.
    The links provided to you will tell you how to do it, and the EM wizard is fairly self-explanatory. I'm going to ask why you want to. You ran a report for half an hour on a system presumably with at least 2 CPUs (though you don't say), so you most likely have over an hour's worth of CPU available to you. Your total database time is 6 minutes. That doesn't sound like a struggling database to me; does it to you? If you manage to save 66% of the time then you'll have saved 4 minutes. Is that a worthwhile goal?
    However, let's take a look at the most costly statement - the update - which consumes 112s, or nearly 2 of your 6 minutes. Each execution only takes about a twentieth of a second. Do your users notice that, and do they want the update time to be (say) a fiftieth of a second instead? Where you might have an opportunity is in the fact that this statement is really quick but is executed 2256 times in that half hour - roughly 75 times a minute. It's more than likely that this is a loop, and a more efficient approach would be to replace the loop with a single set-based update (a sketch of what that might look like follows this reply). Unfortunately the SQL Tuning Advisor isn't capable of making this sort of recommendation.
    In total, then, it looks like you've only got about 4 minutes of your half hour that the SQL Tuning Advisor is likely to be able to improve. The count(*) might be improvable - if, say, you haven't got an index on the send_unsend column and that column is selective - but you still have to ask whether the potential improvement is worth it.
    Niall Litchfield
    http://www.orawin.info
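    To make the two suggestions above concrete, a hedged sketch added for illustration, not taken from the thread. The set-based rewrite assumes the application currently loops over voucher numbers binding each one as :b1, and that FEESUM.VHNO corresponds to STUJOURNAL.REF_NO; the index name is made up, and the index is only worth creating if SEND_UNSEND is reasonably selective.
    -- One set-based UPDATE instead of 2256 single-row executions.
    -- The correlation VHNO = REF_NO is an assumption about the data model.
    UPDATE feesum f
    SET    f.amtpay = (SELECT SUM(s.cre_bal)
                       FROM   stujournal s
                       WHERE  s.ref_no = f.vhno
                       AND    s.stdjid LIKE 'SPY%')
    WHERE  f.vhno IN (SELECT s.ref_no
                      FROM   stujournal s
                      WHERE  s.stdjid LIKE 'SPY%');

    -- Hypothetical index for the COUNT(*) on ENTER_MSG (only if SEND_UNSEND is selective).
    CREATE INDEX enter_msg_send_idx ON enter_msg (send_unsend);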

  • ADDM report says less SGA than what is actually set

    I found a recommendation in the ADDM report to increase sga_target. Quoted below are the lines from the report (notice the bold line):
    FINDING 5: 5% impact (262 seconds)
    The SGA was inadequately sized, causing additional I/O or hard parses.
    RECOMMENDATION 1: DB Configuration, 5% benefit (262 seconds)
    ACTION: Increase the size of the SGA by setting the parameter
    "sga_target" to 2560 M.
    ADDITIONAL INFORMATION:
    The value of parameter "sga_target" was "2048 M" during the analysis
    period.
    SYMPTOMS THAT LED TO THE FINDING:
    SYMPTOM: Hard parsing of SQL statements was consuming significant
    database time. (3.3% impact [173 seconds])
    SYMPTOM: Wait class "User I/O" was consuming significant database time.
    (2.6% impact [135 seconds])
    but the init parameter says that SGA_TARGET=2147483648 (i.e. 2148 MB).
    Why is there this inconsistency?

    Hi,
    Not really:
    12:32:56 sys:TEST@test> select 2147483648/1024/1024 from dual;
    2147483648/1024/1024
    --------------------
                    2048
    Do not confuse mebibytes with megabytes!
    Regards,
    Yoann.
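    For what it's worth, V$PARAMETER's DISPLAY_VALUE column already shows the value in 'M' notation, which avoids the manual division. A minimal sketch, added for illustration:
    -- VALUE is the raw byte figure; DISPLAY_VALUE shows e.g. '2048M'.
    SELECT name, value, display_value
    FROM   v$parameter
    WHERE  name = 'sga_target';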

  • Generating ADDM report

    Hi,
    I am using Oracle Database 10.2.0.1 on Solaris 10 10/09 s10x_u8wos_08a X86. I am new to the ADDM concept and I want to generate an ADDM report, so I ran the *@/opt/oracle/10.2.0/rdbms/admin/addmrpt.sql* script. It asks me to provide values for the Begin and End Snapshot Ids. What do these values mean?
    Regards,
    007

    007 wrote:
    I am new to the ADDM concept and I want to generate an ADDM report ... What do these values mean?
    These are the snapshot IDs that Oracle assigned when it took each snapshot of your database (at a particular point in time). They are the same snapshot IDs you specify for AWR reports; the only difference is that ADDM bases its recommendations on the snapshot range you choose.
    Have a look
    http://www.oracle-base.com/articles/10g/AutomaticDatabaseDiagnosticMonitor10g.php
    http://docs.oracle.com/cd/B28359_01/server.111/b28274/diag.htm#CHDGGFDC
    how to run and find ADDM report
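    To pick the Begin and End Snapshot Ids that addmrpt.sql prompts for, you can list the available AWR snapshots and their time ranges. A minimal sketch:
    -- List recent AWR snapshots with their time ranges.
    SELECT snap_id, begin_interval_time, end_interval_time
    FROM   dba_hist_snapshot
    ORDER  BY snap_id;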
