Performance issues during Maintenance

Using 2014 SP1.
During weekend maintenance, we're experiencing poor performance while maintenance is running on a particular PO. WebAccess is...

Our companies merged: one of us is a 30ish-user N shop with GW 2012, the other a 27-user MS shop with Exchange 2003.
We've (the other IT guy and I) decided to create...

Similar Messages

  • Performance Issues during Upgrade of EBS from 11.5.10.2 to 12.1.1

    Hi,
    We're upgrading our EBS from Release 11.5.10.2 to 12.1.1.
    We're stuck while running the script ar120bnk.sql (it has been running for more than 20 hours).
    Regarding the tables involved in this process:
    select owner , table_name,num_rows,last_analyzed,sample_size
    from dba_tables
    where table_name in
    ('RA_CUSTOMER_TRX_ALL',
    'RA_CUST_TRX_LINE_GL_DIST_ALL',
    'XLA_UPGRADE_DATES',
    'AR_SYSTEM_PARAMETERS_ALL',
    'RA_CUST_TRX_TYPES_ALL',
    'RA_CUST_TRX_LINE_GL_DIST_ALL',
    'XLA_TRANSACTION_ENTITIES_UPG');
    AR RA_CUSTOMER_TRX_ALL 55,540,740 04/02/2012 12:41:56 5554074
    AR RA_CUST_TRX_LINE_GL_DIST_ALL 380,513,830 04/02/2012 13:54:12 38051383
    AR RA_CUST_TRX_TYPES_ALL 90 04/02/2012 14:04:54 90
    AR AR_SYSTEM_PARAMETERS_ALL 6 04/02/2012 12:19:49 6
    XLA XLA_UPGRADE_DATES 4 05/02/2012 17:12:57 4
    As you can see, RA_CUST_TRX_LINE_GL_DIST_ALL has more than 380 million rows,
    and RA_CUSTOMER_TRX_ALL has more than 55 million rows.
    We have more huge tables in the AR schema, and we would like to know whether we are the only customer
    with AR schema objects this large, and if not, why we are getting stuck on the third statement in the
    AR schema.
    Below is an output of all the objects that have more than 10 million rows in the AR schema:
    select owner , table_name,to_char(num_rows,'999,999,999') ,last_analyzed
    from dba_tables
    where owner = 'AR'
    and num_rows > 10000000
    order by num_rows desc nulls last
    AR AR_DISTRIBUTIONS_ALL 408,567,520 04/02/2012 11:49:57
    AR RA_CUST_TRX_LINE_GL_DIST_ALL 380,513,830 04/02/2012 13:54:12
    AR MLOG$_AR_CASH_RECEIPTS_ALL 310,777,690 04/02/2012 12:30:33
    AR RA_CUSTOMER_TRX_LINES_ALL 260,211,090 04/02/2012 13:30:26
    AR AR_RECEIVABLE_APPLICATIONS_ALL 166,834,930 04/02/2012 12:16:54
    AR MLOG$_RA_CUSTOMER_TRX_ALL 150,962,980 04/02/2012 12:33:23
    AR AR_CASH_RECEIPT_HISTORY_ALL 145,737,410 04/02/2012 11:40:31
    AR RA_CUST_TRX_LINE_SALESREPS_ALL 130,287,580 04/02/2012 14:03:54
    AR AR_PAYMENT_SCHEDULES_ALL 108,652,480 04/02/2012 12:05:32
    AR RA_CUSTOMER_TRX_ALL 55,540,740 04/02/2012 12:41:56
    AR AR_CASH_RECEIPTS_ALL 53,182,340 04/02/2012 11:29:53
    AR AR_DOC_SEQUENCE_AUDIT 52,865,150 04/02/2012 11:52:46
    AR RA_MC_TRX_LINE_GL_DIST 17,317,730 04/02/2012 14:05:18
    AR AR_MC_DISTRIBUTIONS_ALL 13,037,030 04/02/2012 11:53:35
    AR AR_MC_RECEIVABLE_APPS 12,672,050 04/02/2012 11:53:57
    AR AR_TRX_SUMMARY 12,457,560 04/02/2012 12:20:16
    AR RA_CUST_RECEIPT_METHODS 11,105,750 04/02/2012 13:35:38
    AR HZ_ORGANIZATION_PROFILES 10,271,640 04/02/2012 12:24:44
    How do we upgrade AR tables with huge amounts of data (> 50 million rows)?
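    A minimal sketch (not from the original post) for seeing where ar120bnk.sql is spending its time while it runs; :upgrade_sid is a placeholder bind for the session executing the script:
    -- Progress of long-running operations (full scans, sorts, index builds)
    SELECT sid, opname, target, sofar, totalwork,
           ROUND(sofar/totalwork*100, 1) AS pct_done, time_remaining
    FROM   v$session_longops
    WHERE  totalwork > 0 AND sofar < totalwork;
    -- Current wait event of the upgrade session
    SELECT sid, event, wait_time, seconds_in_wait
    FROM   v$session_wait
    WHERE  sid = :upgrade_sid;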

    Hi,
    Don't worry, you are not the only one; we have one customer whose AR_DISTRIBUTIONS_ALL table is 80 GB now, and I can't even run a select count(*) on that table.
    We had to keep this much data for business requirements, but I wonder if this is a bug or a user mistake.
    Because of this we are facing serious performance issues with AR reports and have raised SRs, but there is no resolution yet, and the engineer assigned to us has really not been helpful in fixing the issue.
    We did not upgrade for this customer; we migrated from 11.5.9 to R12.1.1 by re-implementation. But the growth of these tables happened after the migration.
    I believe most of the time in your upgrade is going into building the indexes. You can ask Oracle whether they can edit the driver file to skip building the indexes and rebuild them after the upgrade (a sketch follows at the end of this reply), though that will also take time.
    Another option for you is to archive and purge the data as per Chapter 10 of the
    11i Receivables User Guide:
    http://docs.oracle.com/cd/B25284_01/current/acrobat/115arug.zip
    Thanks
    Edited by: EBSDBA on Feb 8, 2012 10:04 PM
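    A minimal sketch of the rebuild-after-upgrade idea mentioned above, assuming the indexes to rebuild are the ones left unusable in the AR schema; the PARALLEL and NOLOGGING options are assumptions to shorten the window, not something the upgrade driver requires. Review the generated DDL before running it.
    -- Generate rebuild statements for unusable AR indexes after the upgrade
    SELECT 'ALTER INDEX ' || owner || '.' || index_name ||
           ' REBUILD PARALLEL 8 NOLOGGING;' AS rebuild_ddl
    FROM   dba_indexes
    WHERE  owner  = 'AR'
    AND    status = 'UNUSABLE';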

  • Performance issue during SharePoint list data binding to an HTML table using an Ajax call (REST API)

    Hello,
    I have multiple lists in my SharePoint site. I am using the SharePoint REST API to get data from these lists and bind it to an HTML table. Suppose I have 5 lists with 1,000 records each; I am then looping 5,000 times to bind each row (record) to the HTML table. This
    is causing a performance issue, and the binding takes a very long time.
    Is there any way I can reduce this looping, or is there a better approach to improve the performance? Please suggest. Thank you for your help :)
    Warm Regards,
    Ratan Kumar Racha

    Hi Racha,
    For handling large data binding in a page,
    AngularJS would be a great option if you are worried about performance.
    You can get more information about using AngularJS from the two links below:
    https://www.airpair.com/angularjs/posts/angularjs-performance-large-applications
    http://www.sitepoint.com/10-reasons-use-angularjs/
    Best regards
    Patrick Liang
    TechNet Community Support

  • Database performance issue (8.1.7.0)

    Hi,
    We have a tablespace "payin" in our database (8.1.7.0).
    This tablespace is the main tablespace of our database; it is dictionary managed and heavily accessed by user SQL statements.
    We are now facing a database performance issue during peak time (i.e. at month end), when a number of users run many large reports.
    We have also increased the SGA sufficiently on the basis of the RAM size.
    This tablespace is heavily accessed for the reports.
    Now my questions are:
    Is this performance issue occurring because the tablespace is dictionary managed instead of locally managed?
    I ask because when I monitor the different sessions through OEM, the number of hard parses is high for the connected users.
    Actually, the hard parses should be low.
    In Oracle 8.1.7.0, can we convert a dictionary managed tablespace to a locally managed tablespace?
    By doing so, will the problem get somewhat resolved? Will it reduce the overhead on the dictionary tables and on the shared memory?
    If yes, then what is the procedure to convert the tablespace from dictionary to locally managed?
    With Regards

    If your end users are just running reports against this tablespace, I don't think that the tablespace management (LM/DM) matters here. You should be concerned more about the TEMP tablespace (for heavy sort operations) and your shared pool size (as you have seen hard parses go up).
    As already stated, get statspack running and also try tracing user sessions with wait events. Might give you more clues.
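    A minimal sketch of the two points above, assuming the tablespace name from the post ("PAYIN") and that you have the SID and SERIAL# of a reporting session; DBMS_SYSTEM.SET_EV is an unsupported but commonly used call, so treat it as an assumption:
    -- 8i can migrate a dictionary-managed tablespace to locally managed in place
    EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('PAYIN');
    -- Trace one reporting session with wait events (event 10046, level 8)
    EXEC DBMS_SYSTEM.SET_EV(:sid, :serial, 10046, 8, '');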

  • Insert performance issue with Partitioned Table.....

    Hi All,
    I have a performance issue with a table that is partitioned. Without the table being partitioned,
    the insert ran in less time, but after partitioning it took more than double.
    1) The table was initially created without any partitions, and the insert below took only 27 minutes.
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:27:35.20
    2) Then I re-created the table with partitions (range, yearly - see below) and the same insert took 59 minutes.
    Is there any way I can achieve better insert performance on this partitioned table?
    [ Similarly, I have another table with 50 million records; the insert took 10 hours without partitioning.
    With the table partitioned, it took 18 hours... ]
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4195045590
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 643K| 34M| | 12917 (3)| 00:02:36 |
    |* 1 | HASH JOIN | | 643K| 34M| 2112K| 12917 (3)| 00:02:36 |
    | 2 | VIEW | index$_join$_001 | 69534 | 1290K| | 529 (3)| 00:00:07 |
    |* 3 | HASH JOIN | | | | | | |
    | 4 | INDEX FAST FULL SCAN| PK_ACCOUNT_MASTER_BASE | 69534 | 1290K| | 181 (3)| 00:00
    | 5 | INDEX FAST FULL SCAN| ACCOUNT_MASTER_BASE_IDX2 | 69534 | 1290K| | 474 (2)| 00:00
    PLAN_TABLE_OUTPUT
    | 6 | TABLE ACCESS FULL | TB_SISADMIN_BALANCE | 2424K| 87M| | 6413 (4)| 00:01:17 |
    Predicate Information (identified by operation id):
    1 - access("A"."VENDOR_ACCT_NBR"=SUBSTR("B"."ACCOUNT_NO",1,8) AND
    "A"."VENDOR_CD"="B"."COMPANY_NO")
    3 - access(ROWID=ROWID)
    Open C1;
    Loop
    Fetch C1 Bulk Collect Into C_Rectype Limit 10000;
    Forall I In 1..C_Rectype.Count
    -- insert the fetched batch; column/value names are the poster's placeholders
    Insert Into test
         (col1, col2, col3)
    Values
         (C_Rectype(I).val1, C_Rectype(I).val2, C_Rectype(I).val3);
    V_Rec := V_Rec + Nvl(C_Rectype.Count,0);
    Commit;
    Exit When C_Rectype.Count = 0;
    C_Rectype.delete;
    End Loop;
    Close C1;
    End;
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:51:01.22
    Edited by: user520824 on Jul 16, 2010 9:16 AM

    I'm concerned about the view in step 2 and the index join in step 3. A composite index with both columns might eliminate the index join and result in fewer read operations.
    If you know which partition the data is going into beforehand, you can save a little bit of processing by specifying the partition in the insert (which may not be a scalable long-term solution) - I'm not 100% sure you can do this on inserts, but I know you can on selects; a sketch follows after this reply.
    The APPEND hint won't help the way you are using it - the VALUES clause in an insert causes it to be ignored. Where it is effective, and where it should help you, is if you can do the insert in one statement - insert into/select from. If you are using the loop to avoid filling up undo/rollback, you can use a bulk collect to batch the selects and commit accordingly - but don't commit more often than you have to, because more frequent commits slow transactions down.
    I don't think there is a nologging hint :)
    So, try something like
    insert /*+ hints */ into ...
    Select
         A.Ing_Acct_Nbr, currency_Symbol,
         Balance_Date,     Company_No,
         Substr(Account_No,1,8) Account_No,
         Substr(Account_No,9,1) Typ_Cd ,
         Substr(Account_No,10,1) Chk_Cd,
         Td_Balance,     Sd_Balance,
         Sysdate,     'Sisadmin'
    From Ideaal_Cons.Tb_Account_Master_Base A,
         Ideaal_Staging.Tb_Sisadmin_Balance B
    Where A.Vendor_Acct_Nbr = Substr(B.Account_No,1,8)
       And A.Vendor_Cd = b.company_no
       ;
    Edited by: riedelme on Jul 16, 2010 7:42 AM
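    A minimal sketch of the single-statement alternative discussed above; the target columns and the partition name (p_2010) are illustrative assumptions, and the source columns are taken loosely from the posted query, not from the poster's actual DDL:
    -- Direct-path insert straight from the join; the PARTITION (...) clause is optional
    INSERT /*+ APPEND */ INTO test PARTITION (p_2010)
           (col1, col2, col3)
    SELECT a.ing_acct_nbr,
           b.company_no,
           SUBSTR(b.account_no, 1, 8)
    FROM   ideaal_cons.tb_account_master_base a,
           ideaal_staging.tb_sisadmin_balance b
    WHERE  a.vendor_acct_nbr = SUBSTR(b.account_no, 1, 8)
    AND    a.vendor_cd       = b.company_no;
    -- Commit once; direct-path loaded rows are not visible until commit
    COMMIT;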

  • Performance issue in DB need help with analysing this ADDM report

    Hi,
    My environment:
    Os: RHEL5U3 / 11.1.0.7 64 bit / R12.1.1 64 bit
    Issue:
    For the past few days I have been facing a serious performance problem in our Production instance. The issue normally occurs for 5 to 10 minutes, occasionally, per day. At the time of the issue we are not able to access the EBS application; it takes a long time to load. But on the back end all the Oracle, listener and apps services are up and running. There are no locks at table or session level. CPU and memory usage is normal.
    We have monitored this issue using Enterprise Manager and found more waiting sessions in the Active Sessions tab. At these times the EBS application is not accessible; it takes too long to load. After some time the waiting sessions in the Active Sessions tab return to normal, and when we then try to access the EBS application it works fine.
    We tried to find the cause of the issue by running an ADDM report, but I am not able to understand what it says. Kindly advise.
    ADDM Report for Task 'TASK_42656'
    Analysis Period
    AWR snapshot range from 14754 to 14755.
    Time period starts at 17-APR-12 11.00.22 AM
    Time period ends at 17-APR-12 12.00.33 PM
    Analysis Target
    Database 'PRD' with DB ID 1789440879.
    Database version 11.1.0.7.0.
    ADDM performed an analysis of instance PRD, numbered 1 and hosted at
    advgrpdb.advgroup.ae.
    Activity During the Analysis Period
    Total database time was 18674 seconds.
    The average number of active sessions was 5.17.
    Summary of Findings
    Description Active Sessions Recommendations
    Percent of Activity
    1 Top SQL by DB Time 3.43 | 66.33 5
    2 Buffer Busy 2.52 | 48.81 5
    3 Buffer Busy 1.39 | 26.81 2
    4 Log File Switches .91 | 17.56 1
    5 Buffer Busy .56 | 10.87 2
    6 Undersized SGA .38 | 7.37 1
    7 Commits and Rollbacks .28 | 5.42 1
    8 Undo I/O .18 | 3.53 0
    9 CPU Usage .13 | 2.57 1
    10 Top SQL By I/O .11 | 2.21 1
    Findings and Recommendations
    Finding 1: Top SQL by DB Time
    Impact is 3.43 active sessions, 66.33% of total activity.
    SQL statements consuming significant database time were found.
    Recommendation 1: SQL Tuning
    Estimated benefit is 1.59 active sessions, 30.8% of total activity.
    Action
    Investigate the SQL statement with SQL_ID "a49xsqhv0h31b" for possible
    performance improvements.
    Related Object
    SQL statement with SQL_ID a49xsqhv0h31b.
    SELECT R.Conc_Login_Id, R.Request_Id, R.Phase_Code, R.Status_Code,
    P.Application_ID, P.Concurrent_Program_ID, P.Concurrent_Program_Name,
    R.Enable_Trace, R.Restart, DECODE(R.Increment_Dates, 'Y', 'Y', 'N'),
    R.NLS_Compliant, R.OUTPUT_FILE_TYPE, E.Executable_Name,
    E.Execution_File_Name, A2.Basepath, DECODE(R.Stale, 'Y', 'C',
    P.Execution_Method_Code), P.Print_Flag, P.Execution_Options,
    DECODE(P.Srs_Flag, 'Y', 'Y', 'Q', 'Y', 'N'), P.Argument_Method_Code,
    R.Print_Style, R.Argument_Input_Method_Code, R.Queue_Method_Code,
    R.Responsibility_ID, R.Responsibility_Application_ID, R.Requested_By,
    R.Number_Of_Copies, R.Save_Output_Flag, R.Printer, R.Print_Group,
    R.Priority, U.User_Name, O.Oracle_Username,
    O.Encrypted_Oracle_Password, R.Cd_Id, A.Basepath,
    A.Application_Short_Name, TO_CHAR(R.Requested_Start_Date,'YYYY/MM/DD
    HH24:MI:SS'), R.Nls_Language, R.Nls_Territory,
    R.Nls_Numeric_Characters, DECODE(R.Parent_Request_ID, NULL, 0,
    R.Parent_Request_ID), R.Priority_Request_ID, R.Single_Thread_Flag,
    R.Has_Sub_Request, R.Is_Sub_Request, R.Req_Information,
    R.Description, R.Resubmit_Time, TO_CHAR(R.Resubmit_Interval),
    R.Resubmit_Interval_Type_Code, R.Resubmit_Interval_Unit_Code,
    TO_CHAR(R.Resubmit_End_Date,'YYYY/MM/DD HH24:MI:SS'),
    Decode(E.Execution_File_Name, NULL, 'N', Decode(E.Subroutine_Name,
    NULL, Decode(E.Execution_Method_Code, 'I', 'Y', 'J', 'Y', 'N'),
    'Y')), R.Argument1, R.Argument2, R.Argument3, R.Argument4,
    R.Argument5, R.Argument6, R.Argument7, R.Argument8, R.Argument9,
    R.Argument10, R.Argument11, R.Argument12, R.Argument13, R.Argument14,
    R.Argument15, R.Argument16, R.Argument17, R.Argument18, R.Argument19,
    R.Argument20, R.Argument21, R.Argument22, R.Argument23, R.Argument24,
    R.Argument25, X.Argument26, X.Argument27, X.Argument28, X.Argument29,
    X.Argument30, X.Argument31, X.Argument32, X.Argument33, X.Argument34,
    X.Argument35, X.Argument36, X.Argument37, X.Argument38, X.Argument39,
    X.Argument40, X.Argument41, X.Argument42, X.Argument43, X.Argument44,
    X.Argument45, X.Argument46, X.Argument47, X.Argument48, X.Argument49,
    X.Argument50, X.Argument51, X.Argument52, X.Argument53, X.Argument54,
    X.Argument55, X.Argument56, X.Argument57, X.Argument58, X.Argument59,
    X.Argument60, X.Argument61, X.Argument62, X.Argument63, X.Argument64,
    X.Argument65, X.Argument66, X.Argument67, X.Argument68, X.Argument69,
    X.Argument70, X.Argument71, X.Argument72, X.Argument73, X.Argument74,
    X.Argument75, X.Argument76, X.Argument77, X.Argument78, X.Argument79,
    X.Argument80, X.Argument81, X.Argument82, X.Argument83, X.Argument84,
    X.Argument85, X.Argument86, X.Argument87, X.Argument88, X.Argument89,
    X.Argument90, X.Argument91, X.Argument92, X.Argument93, X.Argument94,
    X.Argument95, X.Argument96, X.Argument97, X.Argument98, X.Argument99,
    X.Argument100, R.number_of_arguments, C.CD_Name,
    NVL(R.Security_Group_ID, 0), NVL(R.org_id, 0) FROM
    fnd_concurrent_requests R, fnd_concurrent_programs P, fnd_application
    A, fnd_user U, fnd_oracle_userid O, fnd_conflicts_domain C,
    fnd_concurrent_queues Q, fnd_application A2, fnd_executables E,
    fnd_conc_request_arguments X WHERE R.Status_code = 'I' And
    ((R.OPS_INSTANCE is null) or (R.OPS_INSTANCE = -1) or
    (R.OPS_INSTANCE =
    decode(:dcp_on,1,FND_CONC_GLOBAL.OPS_INST_NUM,R.OPS_INSTANCE))) And
    R.Request_ID = X.Request_ID(+) And R.Program_Application_Id =
    P.Application_Id(+) And R.Concurrent_Program_Id =
    P.Concurrent_Program_Id(+) And R.Program_Application_Id =
    A.Application_Id(+) And P.Executable_Application_Id =
    E.Application_Id(+) And P.Executable_Id =
    E.Executable_Id(+) And P.Executable_Application_Id =
    A2.Application_Id(+) And R.Requested_By = U.User_Id(+) And R.Cd_Id
    = C.Cd_Id(+) And R.Oracle_Id = O.Oracle_Id(+) And Q.Application_Id =
    :q_applid And Q.Concurrent_Queue_Id = :queue_id And (P.Enabled_Flag
    is NULL OR P.Enabled_Flag = 'Y') And R.Hold_Flag = 'N' And
    R.Requested_Start_Date <= Sysdate And ( R.Enforce_Seriality_Flag =
    'N' OR ( C.RunAlone_Flag = P.Run_Alone_Flag And (P.Run_Alone_Flag =
    'N' OR Not Exists (Select Null From Fnd_Concurrent_Requests Sr
    Where Sr.Status_Code In ('R', 'T') And Sr.Enforce_Seriality_Flag =
    'Y' And Sr.CD_id = C.CD_Id)))) And Q.Running_Processes <=
    Q.Max_Processes And R.Rowid = :reqname And
    ((P.Execution_Method_Code != 'S' OR
    (R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) IN
    ((0,98),(0,100),(0,31721),(0,31722),(0,31757))) AND
    ((R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) NOT IN
    ((510,40112),(510,40113),(510,41497),(510,41498),(530,41859),(530,418
    60),(535,41492),(535,41493),(535,41494)))) FOR UPDATE OF
    R.status_code NoWait
    Rationale
    SQL statement with SQL_ID "a49xsqhv0h31b" was executed 4686 times and
    had an average elapsed time of 1.2 seconds.
    Rationale
    Waiting for event "buffer busy waits" in wait class "Concurrency"
    accounted for 85% of the database time spent in processing the SQL
    statement with SQL_ID "a49xsqhv0h31b".
    Rationale
    Waiting for event "log file switch (checkpoint incomplete)" in wait
    class "Configuration" accounted for 9% of the database time spent in
    processing the SQL statement with SQL_ID "a49xsqhv0h31b".
    Recommendation 3: SQL Tuning
    Estimated benefit is .56 active sessions, 10.91% of total activity.
    Action
    Investigate the SQL statement with SQL_ID "5d7957yktf3nn" for possible
    performance improvements.
    Related Object
    SQL statement with SQL_ID 5d7957yktf3nn.
    UPDATE ICX_SESSIONS SET TIME_OUT = :B2 WHERE SESSION_ID = :B1
    Rationale
    SQL statement with SQL_ID "5d7957yktf3nn" was executed 266 times and had
    an average elapsed time of 7.6 seconds.
    Rationale
    Waiting for event "buffer busy waits" in wait class "Concurrency"
    accounted for 86% of the database time spent in processing the SQL
    statement with SQL_ID "5d7957yktf3nn".
    Rationale
    Waiting for event "log file switch (checkpoint incomplete)" in wait
    class "Configuration" accounted for 7% of the database time spent in
    processing the SQL statement with SQL_ID "5d7957yktf3nn".
    Finding 2: Buffer Busy
    Impact is 2.52 active sessions, 48.81% of total activity.
    Read and write contention on database blocks was consuming significant
    database time.
    Recommendation 1: Application Analysis
    Estimated benefit is 1.42 active sessions, 27.44% of total activity.
    Action
    Trace the cause of object contention due to SELECT statements in the
    application using the information provided.
    Related Object
    Database object with ID 34562.
    Rationale
    The SELECT statement with SQL_ID "a49xsqhv0h31b" was significantly
    affected by "buffer busy" waits.
    Related Object
    SQL statement with SQL_ID a49xsqhv0h31b.
    SELECT R.Conc_Login_Id, R.Request_Id, R.Phase_Code, R.Status_Code,
    P.Application_ID, P.Concurrent_Program_ID, P.Concurrent_Program_Name,
    R.Enable_Trace, R.Restart, DECODE(R.Increment_Dates, 'Y', 'Y', 'N'),
    R.NLS_Compliant, R.OUTPUT_FILE_TYPE, E.Executable_Name,
    E.Execution_File_Name, A2.Basepath, DECODE(R.Stale, 'Y', 'C',
    P.Execution_Method_Code), P.Print_Flag, P.Execution_Options,
    DECODE(P.Srs_Flag, 'Y', 'Y', 'Q', 'Y', 'N'), P.Argument_Method_Code,
    R.Print_Style, R.Argument_Input_Method_Code, R.Queue_Method_Code,
    R.Responsibility_ID, R.Responsibility_Application_ID, R.Requested_By,
    R.Number_Of_Copies, R.Save_Output_Flag, R.Printer, R.Print_Group,
    R.Priority, U.User_Name, O.Oracle_Username,
    O.Encrypted_Oracle_Password, R.Cd_Id, A.Basepath,
    A.Application_Short_Name, TO_CHAR(R.Requested_Start_Date,'YYYY/MM/DD
    HH24:MI:SS'), R.Nls_Language, R.Nls_Territory,
    R.Nls_Numeric_Characters, DECODE(R.Parent_Request_ID, NULL, 0,
    R.Parent_Request_ID), R.Priority_Request_ID, R.Single_Thread_Flag,
    R.Has_Sub_Request, R.Is_Sub_Request, R.Req_Information,
    R.Description, R.Resubmit_Time, TO_CHAR(R.Resubmit_Interval),
    R.Resubmit_Interval_Type_Code, R.Resubmit_Interval_Unit_Code,
    TO_CHAR(R.Resubmit_End_Date,'YYYY/MM/DD HH24:MI:SS'),
    Decode(E.Execution_File_Name, NULL, 'N', Decode(E.Subroutine_Name,
    NULL, Decode(E.Execution_Method_Code, 'I', 'Y', 'J', 'Y', 'N'),
    'Y')), R.Argument1, R.Argument2, R.Argument3, R.Argument4,
    R.Argument5, R.Argument6, R.Argument7, R.Argument8, R.Argument9,
    R.Argument10, R.Argument11, R.Argument12, R.Argument13, R.Argument14,
    R.Argument15, R.Argument16, R.Argument17, R.Argument18, R.Argument19,
    R.Argument20, R.Argument21, R.Argument22, R.Argument23, R.Argument24,
    R.Argument25, X.Argument26, X.Argument27, X.Argument28, X.Argument29,
    X.Argument30, X.Argument31, X.Argument32, X.Argument33, X.Argument34,
    X.Argument35, X.Argument36, X.Argument37, X.Argument38, X.Argument39,
    X.Argument40, X.Argument41, X.Argument42, X.Argument43, X.Argument44,
    X.Argument45, X.Argument46, X.Argument47, X.Argument48, X.Argument49,
    X.Argument50, X.Argument51, X.Argument52, X.Argument53, X.Argument54,
    X.Argument55, X.Argument56, X.Argument57, X.Argument58, X.Argument59,
    X.Argument60, X.Argument61, X.Argument62, X.Argument63, X.Argument64,
    X.Argument65, X.Argument66, X.Argument67, X.Argument68, X.Argument69,
    X.Argument70, X.Argument71, X.Argument72, X.Argument73, X.Argument74,
    X.Argument75, X.Argument76, X.Argument77, X.Argument78, X.Argument79,
    X.Argument80, X.Argument81, X.Argument82, X.Argument83, X.Argument84,
    X.Argument85, X.Argument86, X.Argument87, X.Argument88, X.Argument89,
    X.Argument90, X.Argument91, X.Argument92, X.Argument93, X.Argument94,
    X.Argument95, X.Argument96, X.Argument97, X.Argument98, X.Argument99,
    X.Argument100, R.number_of_arguments, C.CD_Name,
    NVL(R.Security_Group_ID, 0), NVL(R.org_id, 0) FROM
    fnd_concurrent_requests R, fnd_concurrent_programs P, fnd_application
    A, fnd_user U, fnd_oracle_userid O, fnd_conflicts_domain C,
    fnd_concurrent_queues Q, fnd_application A2, fnd_executables E,
    fnd_conc_request_arguments X WHERE R.Status_code = 'I' And
    ((R.OPS_INSTANCE is null) or (R.OPS_INSTANCE = -1) or
    (R.OPS_INSTANCE =
    decode(:dcp_on,1,FND_CONC_GLOBAL.OPS_INST_NUM,R.OPS_INSTANCE))) And
    R.Request_ID = X.Request_ID(+) And R.Program_Application_Id =
    P.Application_Id(+) And R.Concurrent_Program_Id =
    P.Concurrent_Program_Id(+) And R.Program_Application_Id =
    A.Application_Id(+) And P.Executable_Application_Id =
    E.Application_Id(+) And P.Executable_Id =
    E.Executable_Id(+) And P.Executable_Application_Id =
    A2.Application_Id(+) And R.Requested_By = U.User_Id(+) And R.Cd_Id
    = C.Cd_Id(+) And R.Oracle_Id = O.Oracle_Id(+) And Q.Application_Id =
    :q_applid And Q.Concurrent_Queue_Id = :queue_id And (P.Enabled_Flag
    is NULL OR P.Enabled_Flag = 'Y') And R.Hold_Flag = 'N' And
    R.Requested_Start_Date <= Sysdate And ( R.Enforce_Seriality_Flag =
    'N' OR ( C.RunAlone_Flag = P.Run_Alone_Flag And (P.Run_Alone_Flag =
    'N' OR Not Exists (Select Null From Fnd_Concurrent_Requests Sr
    Where Sr.Status_Code In ('R', 'T') And Sr.Enforce_Seriality_Flag =
    'Y' And Sr.CD_id = C.CD_Id)))) And Q.Running_Processes <=
    Q.Max_Processes And R.Rowid = :reqname And
    ((P.Execution_Method_Code != 'S' OR
    (R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) IN
    ((0,98),(0,100),(0,31721),(0,31722),(0,31757))) AND
    ((R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) NOT IN
    ((510,40112),(510,40113),(510,41497),(510,41498),(530,41859),(530,418
    60),(535,41492),(535,41493),(535,41494)))) FOR UPDATE OF
    R.status_code NoWait
    UPDATE ICX_SESSIONS SET LAST_CONNECT = SYSDATE WHERE SESSION_ID = :B1
    Recommendation 1: Schema Changes
    Estimated benefit is .03 active sessions, .62% of total activity.
    Action
    Consider rebuilding the TABLE "APPLSYS.FND_LOGIN_RESP_FORMS" with object
    ID 34651 using a higher value for PCTFREE.
    Related Object
    Database object with ID 34651.
    Rationale
    The UPDATE statement with SQL_ID "cqc5crhxxt36t" was significantly
    affected by "buffer busy" waits.
    Related Object
    SQL statement with SQL_ID cqc5crhxxt36t.
    UPDATE FND_LOGIN_RESP_FORMS FLRF SET END_TIME = SYSDATE WHERE
    FLRF.LOGIN_ID = :B2 AND FLRF.LOGIN_RESP_ID = :B1 AND FLRF.END_TIME IS
    NULL AND (FLRF.FORM_ID, FLRF.FORM_APPL_ID) = (SELECT F.FORM_ID,
    F.APPLICATION_ID FROM FND_FORM F, FND_APPLICATION A WHERE F.FORM_NAME
    = :B4 AND F.APPLICATION_ID = A.APPLICATION_ID AND
    A.APPLICATION_SHORT_NAME = :B3 )
    Symptoms That Led to the Finding:
    Wait class "Concurrency" was consuming significant database time.
    Impact is 2.53 active sessions, 48.87% of total activity.
    Finding 4: Log File Switches
    Impact is .91 active sessions, 17.56% of total activity.
    Log file switch operations were consuming significant database time while
    waiting for checkpoint completion.
    This problem can be caused by use of hot backup mode on tablespaces. DML to
    tablespaces in hot backup mode causes generation of additional redo.
    Recommendation 1: Database Configuration
    Estimated benefit is .91 active sessions, 17.56% of total activity.
    Action
    Verify whether incremental shipping was used for standby databases.
    Symptoms That Led to the Finding:
    Wait class "Configuration" was consuming significant database time.
    Impact is .91 active sessions, 17.63% of total activity.
    Finding 5: Buffer Busy
    Impact is .56 active sessions, 10.87% of total activity.
    A hot data block with concurrent read and write activity was found. The block
    belongs to segment "ICX.ICX_SESSIONS" and is block 243489 in file 36.
    Recommendation 1: Application Analysis
    Estimated benefit is .56 active sessions, 10.87% of total activity.
    Action
    Investigate application logic to find the cause of high concurrent read
    and write activity to the data present in this block.
    Related Object
    Database block with object number 37562, file number 36 and block
    number 243489.
    Rationale
    The SQL statement with SQL_ID "5d7957yktf3nn" spent significant time on
    "buffer busy" waits for the hot block.
    Related Object
    SQL statement with SQL_ID 5d7957yktf3nn.
    UPDATE ICX_SESSIONS SET TIME_OUT = :B2 WHERE SESSION_ID = :B1
    Rationale
    The SQL statement with SQL_ID "326up1aym56dd" spent significant time on
    "buffer busy" waits for the hot block.
    Related Object
    SQL statement with SQL_ID 326up1aym56dd.
    UPDATE ICX_SESSIONS SET LAST_CONNECT = SYSDATE WHERE SESSION_ID = :B1
    Recommendation 2: Schema Changes
    Estimated benefit is .56 active sessions, 10.87% of total activity.
    Action
    Consider rebuilding the TABLE "ICX.ICX_SESSIONS" with object ID 37562
    using a higher value for PCTFREE.
    Related Object
    Database object with ID 37562.
    Symptoms That Led to the Finding:
    Wait class "Concurrency" was consuming significant database time.
    Impact is 2.53 active sessions, 48.87% of total activity.
    Finding 6: Undersized SGA
    Impact is .38 active sessions, 7.37% of total activity.
    The SGA was inadequately sized, causing additional I/O or hard parses.
    The value of parameter "sga_target" was "4096 M" during the analysis period.
    Recommendation 1: Database Configuration
    Estimated benefit is .12 active sessions, 2.33% of total activity.
    Action
    Increase the size of the SGA by setting the parameter "sga_target" to
    4608 M.
    Symptoms That Led to the Finding:
    Wait class "User I/O" was consuming significant database time.
    Impact is .7 active sessions, 13.57% of total activity.
    Hard parsing of SQL statements was consuming significant database time.
    Impact is .13 active sessions, 2.51% of total activity.
    Contention for latches related to the shared pool was consuming
    significant database time.
    Impact is 0 active sessions, .03% of total activity.
    Wait class "Concurrency" was consuming significant database time.
    Impact is 2.53 active sessions, 48.87% of total activity.
    Finding 7: Commits and Rollbacks
    Impact is .28 active sessions, 5.42% of total activity.
    Waits on event "log file sync" while performing COMMIT and ROLLBACK operations
    were consuming significant database time.
    Recommendation 1: Host Configuration
    Estimated benefit is .28 active sessions, 5.42% of total activity.
    Action
    Investigate the possibility of improving the performance of I/O to the
    online redo log files.
    Rationale
    The average size of writes to the online redo log files was 163 K and
    the average time per write was 68 milliseconds.
    Symptoms That Led to the Finding:
    Wait class "Commit" was consuming significant database time.
    Impact is .28 active sessions, 5.42% of total activity.
    Finding 8: Undo I/O
    Impact is .18 active sessions, 3.53% of total activity.
    Undo I/O was a significant portion (26%) of the total database I/O.
    No recommendations are available.
    Symptoms That Led to the Finding:
    The throughput of the I/O subsystem was significantly lower than
    expected.
    Impact is .08 active sessions, 1.46% of total activity.
    Wait class "User I/O" was consuming significant database time.
    Impact is .7 active sessions, 13.57% of total activity.
    Finding 9: CPU Usage
    Impact is .13 active sessions, 2.57% of total activity.
    Time spent on the CPU by the instance was responsible for a substantial part
    of database time.
    Recommendation 1: SQL Tuning
    Estimated benefit is .13 active sessions, 2.57% of total activity.
    Finding 10: Top SQL By I/O
    Impact is .11 active sessions, 2.21% of total activity.
    Individual SQL statements responsible for significant user I/O wait were
    found.
    Recommendation 1: SQL Tuning
    Estimated benefit is .11 active sessions, 2.22% of total activity.
    Action
    Run SQL Tuning Advisor on the SQL statement with SQL_ID "b3pnc5yctv2z5".
    Related Object
    SQL statement with SQL_ID b3pnc5yctv2z5.
    INSERT INTO ZX_TRANSACTION_LINES_GT( APPLICATION_ID ,ENTITY_CODE
    ,EVENT_CLASS_CODE ,TRX_ID ,TRX_LEVEL_TYPE ,TRX_LINE_ID ,LINE_CLASS
    ,LINE_LEVEL_ACTION ,TRX_LINE_TYPE ,TRX_LINE_DATE
    ,LINE_AMT_INCLUDES_TAX_FLAG ,LINE_AMT ,TRX_LINE_QUANTITY ,UNIT_PRICE
    ,PRODUCT_ID ,PRODUCT_ORG_ID ,UOM_CODE ,PRODUCT_CODE ,SHIP_TO_PARTY_ID
    ,SHIP_FROM_PARTY_ID ,BILL_TO_PARTY_ID ,BILL_FROM_PARTY_ID
    ,SHIP_FROM_PARTY_SITE_ID ,BILL_FROM_PARTY_SITE_ID
    ,SHIP_TO_LOCATION_ID ,SHIP_FROM_LOCATION_ID ,BILL_TO_LOCATION_ID
    ,SHIP_THIRD_PTY_ACCT_ID ,SHIP_THIRD_PTY_ACCT_SITE_ID ,HISTORICAL_FLAG
    ,TRX_LINE_CURRENCY_CODE ,TRX_LINE_CURRENCY_CONV_DATE
    ,TRX_LINE_CURRENCY_CONV_RATE ,TRX_LINE_CURRENCY_CONV_TYPE
    ,TRX_LINE_MAU ,TRX_LINE_PRECISION ,HISTORICAL_TAX_CODE_ID
    ,TRX_BUSINESS_CATEGORY ,PRODUCT_CATEGORY ,PRODUCT_FISC_CLASSIFICATION
    ,LINE_INTENDED_USE ,PRODUCT_TYPE ,USER_DEFINED_FISC_CLASS
    ,ASSESSABLE_VALUE ,INPUT_TAX_CLASSIFICATION_CODE ,ACCOUNT_CCID
    ,BILL_THIRD_PTY_ACCT_ID ,BILL_THIRD_PTY_ACCT_SITE_ID ,TRX_LINE_NUMBER
    ,TRX_LINE_DESCRIPTION ,PRODUCT_DESCRIPTION ,USER_UPD_DET_FACTORS_FLAG
    ,DEFAULTING_ATTRIBUTE1 ) SELECT :B4 ,:B3 ,:B2
    ,PRL.REQUISITION_HEADER_ID ,:B1 ,PRL.REQUISITION_LINE_ID ,'INVOICE'
    ,NVL(PRL.TAX_ATTRIBUTE_UPDATE_CODE,'UPDATE') ,'ITEM'
    ,NVL(PRL.NEED_BY_DATE, SYSDATE) ,'N' ,NVL(PRL.AMOUNT,
    PRL.UNIT_PRICE*PRL.QUANTITY) ,PRL.QUANTITY ,PRL.UNIT_PRICE
    ,PRL.ITEM_ID ,(SELECT FSP.INVENTORY_ORGANIZATION_ID FROM
    FINANCIALS_SYSTEM_PARAMS_ALL FSP WHERE FSP.ORG_ID=PRL.ORG_ID)
    ,(SELECT MUM.UOM_CODE FROM MTL_UNITS_OF_MEASURE MUM WHERE
    MUM.UNIT_OF_MEASURE=PRL.UNIT_MEAS_LOOKUP_CODE) ,MSIB.SEGMENT1
    ,PRL.DESTINATION_ORGANIZATION_ID ,PV.PARTY_ID ,PRH.ORG_ID
    ,PV.PARTY_ID ,PVS.PARTY_SITE_ID ,PVS.PARTY_SITE_ID
    ,PRL.DELIVER_TO_LOCATION_ID ,(SELECT HZPS.LOCATION_ID FROM
    HZ_PARTY_SITES HZPS WHERE HZPS.PARTY_SITE_ID = PVS.PARTY_SITE_ID)
    ,(SELECT LOCATION_ID FROM HR_ALL_ORGANIZATION_UNITS WHERE
    ORGANIZATION_ID=PRH.ORG_ID) ,PRL.VENDOR_ID ,PRL.VENDOR_SITE_ID ,NULL
    ,NVL(PRL.CURRENCY_CODE, :B9 ) ,NVL2(PRL.CURRENCY_CODE, PRL.RATE_DATE,
    SYSDATE) ,NVL2(PRL.CURRENCY_CODE, PRL.RATE, :B8 )
    ,NVL2(PRL.CURRENCY_CODE, PRL.RATE_TYPE, :B7 )
    ,FC.MINIMUM_ACCOUNTABLE_UNIT ,NVL(FC.PRECISION, 2) ,NULL
    ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.TRX_BUSINESS_CATEGORY, NULL),
    NULL ) ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.PRODUCT_CATEGORY, NULL), NULL )
    ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.PRODUCT_FISC_CLASSIFICATION,
    NULL), NULL ) ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.LINE_INTENDED_USE, NULL), NULL )
    ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.PRODUCT_TYPE, NULL), NULL )
    ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.USER_DEFINED_FISC_CLASS, NULL),
    NULL ) ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.ASSESSABLE_VALUE, NULL), NULL )
    ,DECODE(:B6 , 'REQIMPORT', PRL.TAX_NAME,
    DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.INPUT_TAX_CLASSIFICATION_CODE,
    NULL), NULL ) ) ,NVL((SELECT PRD.CODE_COMBINATION_ID FROM
    PO_REQ_DISTRIBUTIONS_ALL PRD WHERE PRD.REQUISITION_LINE_ID =
    PRL.REQUISITION_LINE_ID AND ROWNUM = 1), MSIB.EXPENSE_ACCOUNT )
    ,PV.VENDOR_ID ,PVS.VENDOR_SITE_ID ,PRL.LINE_NUM ,PRL.ITEM_DESCRIPTION
    ,PRL.ITEM_DESCRIPTION ,(SELECT 'Y' FROM DUAL WHERE :B6 = 'REQIMPORT'
    AND PRL.TAX_NAME IS NOT NULL) ,PRL.DESTINATION_ORGANIZATION_ID FROM
    PO_REQUISITION_HEADERS_ALL PRH, PO_REQUISITION_LINES_ALL PRL,
    ZX_LINES_DET_FACTORS ZXLDET, PO_VENDORS PV, PO_VENDOR_SITES_ALL PVS,
    MTL_SYSTEM_ITEMS_B MSIB, FND_CURRENCIES FC WHERE
    PRH.REQUISITION_HEADER_ID = :B5 AND PRH.REQUISITION_HEADER_ID =
    PRL.REQUISITION_HEADER_ID AND ZXLDET.APPLICATION_ID(+) = :B4 AND
    ZXLDET.ENTITY_CODE(+) = :B3 AND ZXLDET.EVENT_CLASS_CODE(+) = :B2 AND
    ZXLDET.TRX_LEVEL_TYPE(+) = :B1 AND ZXLDET.TRX_LINE_ID(+) =
    PRL.PARENT_REQ_LINE_ID AND PV.VENDOR_ID(+) = PRL.VENDOR_ID AND
    PVS.VENDOR_SITE_ID(+) = PRL.VENDOR_SITE_ID AND
    MSIB.INVENTORY_ITEM_ID(+) = PRL.ITEM_ID AND MSIB.ORGANIZATION_ID(+) =
    PRL.ORG_ID AND FC.CURRENCY_CODE(+) = PRL.CURRENCY_CODE AND
    NVL(PRL.MODIFIED_BY_AGENT_FLAG, 'N') = 'N' AND NVL(PRL.CANCEL_FLAG,
    'N') = 'N' AND NVL(PRL.CLOSED_CODE, 'OPEN') <> 'FINALLY CLOSED' AND
    PRL.LINE_LOCATION_ID IS NULL AND PRL.AT_SOURCING_FLAG IS NULL
    Rationale
    SQL statement with SQL_ID "b3pnc5yctv2z5" was executed 3 times and had
    an average elapsed time of 138 seconds.
    Rationale
    Average time spent in User I/O wait events per execution was 137
    seconds.
    Symptoms That Led to the Finding:
    Wait class "User I/O" was consuming significant database time.
    Impact is .7 active sessions, 13.57% of total activity.
    Additional Information
    Miscellaneous Information
    Wait class "Application" was not consuming significant database time.
    Wait class "Network" was not consuming significant database time.
    Session connect and disconnect calls were not consuming significant database
    time.
    The database's maintenance windows were active during 100% of the analysis
    period.
    Regards
    Athish
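    A minimal sketch relevant to Findings 4 and 7 in the report above (log file switches waiting on checkpoint completion, and slow redo writes): check how frequently the logs switch and how the online redo logs are sized. These queries are generic, not taken from the post:
    -- Log switches per hour; frequent switches usually mean the logs are too small or too few
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS log_switches
    FROM   v$log_history
    GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER  BY 1;
    -- Current online redo log groups, sizes and members
    SELECT group#, thread#, bytes/1024/1024 AS size_mb, members, status
    FROM   v$log;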

    "For the past few days I have been facing a serious performance problem in our Production instance" - for production issues, please log an SR.
    Was this working before? If yes, any changes been done recently?
    Do you have the statistics collected up to date?
    Please see these docs.
    AutoInvoice Performance Issue When Processing Tax [ID 1059275.1]
    R12 : System Hangs When Attempting To Save Blanket Release After Applying Patch 11817843 [ID 1333336.1]
    Thanks,
    Hussein
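    On the statistics question above: in EBS the supported route is the "Gather Schema Statistics" concurrent program, which calls FND_STATS. A minimal sketch of the SQL*Plus equivalent (the 10% sample is an assumption; adjust to your own standards):
    EXEC FND_STATS.GATHER_SCHEMA_STATS(schemaname => 'ALL', estimate_percent => 10);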

  • Query performance and data loading performance issues

    What query performance issues do we need to take care of? Please explain and let me know the transaction codes. This is urgent.
    What data loading performance issues do we need to take care of? Please explain and let me know the transaction codes.
    Will reward full points.
    REGARDS
    GURU

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Interested in performance issues?  Read this!  If you can explain this, you're a master Jedi!

    This is the question we will try to answer...
    What is the hardware bottleneck of Adobe Premiere Pro CS6?
    I used PPBM5 as a benchmark testing template.
    All the data and logs have been collected using performance counters.
    First of all, let me describe my computer...
    Operating System
    Microsoft Windows 8 Pro 64-bit
    CPU
    Intel Xeon E5 2687W @ 3.10GHz
    Sandy Bridge-EP/EX 32nm Technology
    RAM
    Corsair Dominator Platinum 64.0 GB DDR3
    Motherboard
    EVGA Corporation Classified SR-X
    Graphics
    PNY Nvidia Quadro 6000
    EVGA Nvidia GTX 680   // Yes, I created bench stats for both card
    Hard Drives
    16.0GB Romex RAMDISK (RAID)
    556GB LSI MegaRAID 9260-8i SATA3 6GB/s 5 disks with Fastpath Chip Installed (RAID 0)
    I have other RAID installed, but not relevant for the present post...
    PSU
    Corsair 1000 Watts
    After many days of tests, I want to share my results with the community and comment on them.
    CPU Introduction
    I tested my CPU and pushed it to maximum speed to understand where the limit is and whether I can reach it, and I've logged all results precisely in a graph (see picture 1).
    Intro : I tested my E5 Xeon 2687W (8 cores with Hyper-Threading - 16 threads) to see whether programs can use all of it.  I used Prime95 to get the result.  // I know this seems ordinary, but you will understand soon...
    The result : Yes, I can get 100% of my CPU with 1 program using 20 threads in parallel.  The CPU gives everything it can!
    Comment : I put the 3 I/O measures (CPU, disk, RAM) on my computer's graph during the test...
    (picture 1)
    Disk Introduction
    I tested my disk and pushed it to maximum speed to understand where the limit is, and I've logged all results precisely in a graph (see picture 2).
    Intro : I tested my RAID 0 556GB (LSI MegaRAID 9260-8i SATA3 6GB/s, 5 disks, with FastPath chip installed) to see whether I can reach maximum disk usage (0% idle time).
    The result : As you can see in picture 2, yes, I can max out my drives at ~1.2 Gb/sec read/write, steady!
    Comment : I put the 3 I/O measures (CPU, disk, RAM) on my computer's graph during the test to see the impact of transferring many GB of data over ~10 sec...
    (picture 2)
    Now, I know my limits !  It's time to enter deeper in the subject !
    PPBM5 (H.264) Result
    I rendered the sequence (H.264) using Adobe Media Encoder.
    The result :
    My CPU is not used at 100%; it hovers around 50%.
    My disk is totally idle!
    All processes are idle except the Adobe Media Encoder process.
    The transfer rate seems to be a wave (up and down), probably caused by (encode time... write... encode time... write...).  // It's OK, ~5 Mb/sec transfer rate!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough!  39 GB of RAM free after the test!  // Excellent
    ~65 threads opened by Adobe Media Encoder (good, threads are a sign that the program tries to use many cores!)
    GPU load on the card also seems to be a wave (up and down): ~40% GPU usage during the encoding process.
    GPU RAM usage reaches 1.2 GB (but with the GTX 680, no problem, and the Quadro 6000 with 6 GB RAM, no problem!)
    Comment/Question : the CPU is free (50%), disks are free (99%), the GPU is free (60%), RAM is free (62%); my computer is not pushed to its limit during the encoding process.  Why?  Is there some delay in the encoding process?
    Other : Quadro 6000 & GTX 680 gives the same result !
    (picture 3)
    PPBM5 (Disk Test) Result (RAID LSI)
    I rendered the sequence (Disk Test) using Adobe Media Encoder on my RAID 0 LSI disk.
    The result :
    My CPU is not used at 100%
    My disk waves up and down, but is far, far from its limit!
    All processes are idle except the Adobe Media Encoder process.
    The transfer rate waves up and down, probably caused by (buffering time... write... buffering time... write...).  // It's OK, ~375 Mb/sec peak transfer rate!  Easy!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough!  40.5 GB of RAM free after the test!  // Excellent
    ~48 threads opened by Adobe Media Encoder (good, threads are a sign that the program tries to use many cores!)
    GPU load on the card = 0 (the GPU is irrelevant for this kind of encoding).
    GPU RAM usage is 400 MB (not used for encoding).
    Comment/Question : the CPU is free (65%), disks are free (60%), the GPU is free (100%), RAM is free (63%); my computer is not pushed to its limit during the encoding process.  Why?  Is there some delay in the encoding process?
    (picture 4)
    PPBM5 (Disk Test) Result (Direct in RAMDrive)
    I rendered the same sequence (Disk Test) using Adobe Media Encoder directly on my RAM drive.
    Comment/Question : look at the transfer rate in picture 5.  It's exactly the same speed as with my RAID 0 LSI controller.  Impossible!  Look in the same picture at the transfer rate I can reach with the RAM drive (> 3.0 Gb/sec steady), and I don't go under 30% of disk usage.  The CPU is idle (70%), the disk is idle (100%), the GPU is idle (100%) and RAM is free (63%).  // These kinds of results leave me REALLY confused.  It smells like a bug and a big problem with hardware and I/O usage in CS6!
    (picture 5)
    PPBM5 (MPEG-DVD) Result
    I rendered the sequence (MPEG-DVD) using Adobe Media Encoder.
    The result :
    My CPU is not used at 100%
    My Disk is totally idle !
    All processes are idle except the Adobe Media Encoder process.
    The transfer rate waves up and down, probably caused by (encoding time... write... encoding time... write...).  // It's OK, ~2 Mb/sec transfer rate!  A real joke!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough!  40 GB of RAM free after the test!  // Excellent
    ~80 threads opened by Adobe Media Encoder (a lot of threads, but that's OK in multi-threaded apps!)
    GPU load on the card = 100 (this uses the maximum of my GPU).
    GPU RAM usage is 1 GB.
    Comment/Question : the CPU is free (70%), disks are free (98%), the GPU is loaded (max), RAM is free (63%); my computer is pushed to its limit during the encoding process for the GPU only.  For this kind of encoding, the speed limit is set by the slowest component (the video card's GPU).
    Other : the Quadro 6000 is slower than the GTX 680 for this kind of encoding (~20 s slower than the GTX).
    (picture 6)
    Encoding single clip FULL HD AVCHD to H.264 Result (Premiere Pro CS6)
    You can look the result in the picture.
    Comment/Question : the CPU is free (55%), disks are free (99%), the GPU is free (90%), RAM is free (65%); my computer is not pushed to its limit during the encoding process.  Why?  Adobe Premiere seems to have some bug with thread management.  My hardware is idle!  I understand AVCHD can be very difficult to decode, but where is the waste?  My computer is willing, but the software is not!
    (picture 7)
    Render composition using 3D Raytracer in After Effects CS6
    You can look the result in the picture.
    Comment : the GPU seems to be the bottleneck when using After Effects.  The CPU is free (99%), disks are free (98%), memory is free (60%), and it depends on the settings and type of project.
    Other : the Quadro 6000 & GTX 680 give the same rendering time for the composition.
    (picture 8)
    Conclusion
    There is nothing you can do (I think) with CS6 to get better performance right now.  The GTX 680 is the best consumer-grade card and the Quadro 6000 is the best professional card.  Both cards give really similar results (I will probably return my GTX 680 since I don't really get any better performance from it).  I have not used a Tesla card with my Quadro, but currently both Premiere Pro & After Effects don't use multiple GPUs.  I tried to use both cards together (GTX & Quadro), but After Effects gives priority to the slower card (in this case, the GTX 680).
    Premiere Pro, I'm speechless!  Premiere Pro is not able to get the maximum performance out of my computer.  Not just 10% or 20% short, but 60% on average.  I'm a programmer; multi-threaded apps are difficult to manage and I can understand Adobe's programmers.  But if anybody has comments about this post, tricks or any kind of solution, please comment.  It seems to be a bug...
    Thank you.

    Patrick,
    I can't explain everything, but let me give you some background as I understand it.
    The first issue is that CS6 has a far less efficient internal buffering or caching system than CS5/5.5. That is why the MPEG encoding in CS6 is roughly 2-3 times slower than the same test with CS5. There is some 'under-the-hood' processing going on that causes this significant performance loss.
    The second issue is that AME does not handle regular memory and inter-process memory very well. I have described this here: Latest News
    As to your test results, there are some other noteworthy things to mention. 3D Ray tracing in AE is not very good in using all CUDA cores. In fact it is lousy, it only uses very few cores and the threading is pretty bad and does not use the video card's capabilities effectively. Whether that is a driver issue with nVidia or an Adobe issue, I don't know, but whichever way you turn it, the end result is disappointing.
    The overhead AME carries in our tests is something we are looking into and the next test will only use direct export and no longer the AME queue, to avoid some of the problems you saw. That entails other problems for us, since we lose the capability to check encoding logs, but a solution is in the works.
    You see very low GPU usage during the H.264 test, since there are only very few accelerated parts in the timeline, in contrast to the MPEG2-DVD test, where there is rescaling going on and that is CUDA accelerated. The disk I/O test suffers from the problems mentioned above and is the reason that my own Disk I/O results are only 33 seconds with the current test, but when I extend the duration of that timeline to 3 hours, the direct export method gives me 22 seconds, although the amount of data to be written, 37,092 MB has increased threefold. An effective write speed of 1,686 MB/s.
    There are a number of performance issues with CS6 that Adobe is aware of, but whether they can be solved and in what time, I haven't the faintest idea.
    Just my $ 0.02

  • Performance issues with FDK in large XML documents

    In my current project with FrameMaker 8 I'm experiencing severe performance issues with some FDK API calls.
    The documents are about 3-8 MBytes in size. Formatted, they cover 150-250 pages.
    When importing such an XML document I do some extensive "post-processing" using FDK. This processing happens in Sr_EventHandler() during the SR_EVT_END_READER event. I noticed that some FDK function calls which modify the document's structure, like F_ApiSetAttribute() or F_ApiNewElementInHierarchy(), take several seconds, for the larger documents even minutes, to complete one single function call. I tried to move some of these calls to earlier events, mostly to SR_EVT_END_ELEM. There the calls work without a delay. Unfortunately I can't rewrite the FDK client to move all the calls that are lagging to earlier events.
    Does anybody have a clue why such delays happen, and possibly can make a suggestion, how to solve this issue? Thank you in advance.
    PS: I already thought of splitting such a document into smaller pieces by using the FrameMaker book function. But I don't think the structure of the documents will permit such an automatic split, and it definitely isn't an option to change the document structure (the project is about migrating documents from Interleaf to XML with the constraint of keeping the document layout identical).

    FP_ApplyFormatRules sounds really good--I'll give it a try on Monday. Wonder how I could miss it, as I already tried FP_Reformatting and FP_Displaying to no avail?! By the way, what is actually meant by FP_Reformatting (when I used it I assumed it would do exactly what FP_ApplyFormatRules sounds like it does), or is that another of Lynne's well-kept secrets?
    Thanks for all the helpful suggestions, guys. On Friday I already had my first improvements in a test version of my client: I did some (not all necessary) structural changes using XSLT pre-processing, and processing went down from 8 hours(!) to 1 hour--yippee! I was also playing with the idea of writing a wrapper for F_ApiNewElementInHierarchy() which actually pastes an appropriate element, created in a small flow on the reference pages, at the intended insertion location. But now, with FP_ApplyFormatRules on the horizon, I'm quite confident I can get even the complicated stuff under control, which cannot be handled by the XSLT pre-processing, as it is based on the actual formatting of the document at run-time and cannot be anticipated in pre-processing.
    --Franz

  • Performance issues with version-enabled partitioned tables?

    Hi all,
    Are there any known performance issues with version-enabled partitioned tables?
    I've been doing some performance tests with a large version-enabled partitioned table and it seems that the CBO (cost-based optimizer) is choosing very expensive plans during merge operations.
    Thanks in advance,
    Vitor
    Example:
         Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    UPDATE STATEMENT Optimizer Mode=CHOOSE          1          249                    
    UPDATE     SIG.SIG_QUA_IMG_LT                                   
    NESTED LOOPS SEMI          1     266     249                    
    PARTITION RANGE ALL                                   1     9
    TABLE ACCESS FULL     SIG.SIG_QUA_IMG_LT     1     259     2               1     9
    VIEW     SYS.VW_NSO_1     1     7     247                    
    NESTED LOOPS          1     739     247                    
    NESTED LOOPS          1     677     247                    
    NESTED LOOPS          1     412     246                    
    NESTED LOOPS          1     114     244                    
    INDEX RANGE SCAN     WMSYS.MODIFIED_TABLES_PK     1     62     2                    
    INDEX RANGE SCAN     SIG.QIM_PK     1     52     243                    
    TABLE ACCESS BY GLOBAL INDEX ROWID     SIG.SIG_QUA_IMG_LT     1     298     2               ROWID     ROW L
    INDEX RANGE SCAN     SIG.SIG_QUA_IMG_PKI$     1          1                    
    INDEX RANGE SCAN     WMSYS.WM$NEXTVER_TABLE_NV_INDX     1     265     1                    
    INDEX UNIQUE SCAN     WMSYS.MODIFIED_TABLES_PK     1     62                         
    /* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */                                        
    UPDATE /*+ USE_NL(Z1) ROWID(Z1) */ sig.sig_qua_img_lt z1
       SET z1.nextver =
              SYS.ltutil.subsversion
                 (z1.nextver,
                  SYS.ltutil.getcontainedverinrange (z1.nextver,
                                                     'SIG.SIG_QUA_IMG',
                                                     'NpCyPCX3dkOAHSuBMjGioQ==',
                                                     4574,
                                                     4575),
                  4574)
     WHERE z1.ROWID IN (
    (SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
    INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
    INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
    t2.ROWID
    FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j1,
    sig.sig_qua_img_lt t1,
    sig.sig_qua_img_lt t2,
    wmsys.wm$nextver_table j2,
    (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j3
    WHERE t1.VERSION = j1.VERSION
    AND t1.ima_id = t2.ima_id
    AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
    AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
    AND t2.nextver != '-1'
    AND t2.nextver = j2.next_vers
    AND j2.VERSION = j3.VERSION))

    Hello Vitor,
    There are currently no known issues with version-enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
    One suggestion would be to make sure that the table has been recently analyzed, so that the optimizer has the most current data about the table.
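    For example, a minimal sketch of refreshing the statistics with DBMS_STATS (the owner and table name are taken from the plan above; adjust them to your own objects):
    BEGIN
      -- Gather fresh global and partition-level statistics, including indexes,
      -- so the optimizer costs the merge statement against current data.
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname     => 'SIG',
        tabname     => 'SIG_QUA_IMG_LT',
        granularity => 'ALL',
        cascade     => TRUE);
    END;
    /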
    Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
    Thank You,
    Ben

  • Performance issue with brand new intel iMac extreme

    I am at a loss to explain a problem I've been having and I thought I might put it out to you guys.
    In September I purchased a macbook Pro (2.4 ghz, 4 GB RAM) to use in video editing with Final Cut Pro, and for the most part I've been thrilled. I use 1TB LaCie external drives connected via FW800, and perform Multiclip editing with 4-5 video streams at a time and only on occasion have dropped frames during the editing process.
    In December I determined that I needed to have an additional system, and thought a 2.8Ghz Intel iMac extreme would be an excellent choice, since for the same price I could get a little more power in the processor, more hard drive space and a bigger screen to work on. When we picked up the new system in the store (The Grove Apple Store in LA), we had them upgrade the memory to 4GB.
    Since day one we have had performance issues, including problems playing streaming and DVD video, severe delays mounting and unmounting drives (FireWire and USB) and application images, and freezing while doing even simple tasks like printing or checking email. These problems occur even when no external drives are connected. I have none of these issues with the MacBook Pro, which has a virtually identical set of programs installed, and both are running the same version of Leopard.
    I already took the original iMac back to the store, and they exchanged it, but did not have 4GB sets of RAM in stock so they took the RAM from the original machine and put it in the new one. They said if I continued to have problems then it was most likely the RAM and I should come back when they got more in stock. I DID have the same problems with the new machine, and took it back to the Apple Store and they swapped the memory. It seemed to improve the issue, but now I'm seeing the same severe performance issues again.
    All tech support can do is tell me to do a PRAM reset, which seems to improve things very temporarily (but that may be my imagination), or have me restart, which at least makes printing documents work again.
    What I'm wondering is if it is likely that the RAM is the issue and I just got another bad batch, or if the iMac has some weird glitch that isn't present in the macbook Pro...?? Or could I have possibly gotten 2 bad systems in a row? It's extremely frustrating, and I KNOW it shouldn't be this way! It's so bad I get better performance out of my single-core G5 tower! How do I get a good working system that operates like it should? Am I better off getting another Macbook Pro? I'd rather not for several reasons...
    I have xbench on both the MBP and the iMac and can provide test numbers if they'll help, as well as any other info.
    Thank you so much for reading my novella of a post and also for any insight you have!
    Best,
    Travis

    Hi!
    I got the same problem with my MacBook when it was still new in May 2006. It was supposed to be one of the faster laptops around but it was soooo slow it drove me nuts. I can only advise having a look at whether something is hogging your RAM, and running some tests using these programs on your machine:
    Xbench:
    http://www.macupdate.com/info.php/id/10081
    MenuMeters:
    http://www.macupdate.com/info.php/id/10451
    If they show any unusual results you might have your problem...
    As to my problem with the MacBook: I did a complete re-install (writing the hard disk over with zeroes) and suddenly everything was just fine. (But be sure to back up all your files before that; I learned this one the hard way.) I know it is just a standard answer, but it worked out for me this time...
    Hope this helps in some ways.
    Cheers,
    Rene

  • Performance issues when creating a Report / Query in Discoverer

    Hi forum,
    Hope you can help; it involves a performance issue when creating a Report / Query.
    I have a Discoverer Report that currently takes less than 5 seconds to run. After I add a condition to bring back only rows where Batch Status = 'Posted', we cancelled the query after it reached 20 minutes, as this is way too long. If I remove the condition the query time goes back to less than 5 seconds.
    Please see attached the SQL Inspector Plan:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    Is there anything I can do in Discoverer Desktop / Administration to avoid this problem?
    Many thanks,
    Lance

    Hi Rod,
    I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 mins. To test, I changed it to (Batch Status||'' = 'Unposted') and the query was returned within seconds again.
    I’ve been doing some more digging and have found the database view that is linked to the Journal Batches folder. See below.
    I think the problem is with the column using DECODE. When querying the column in TOAD the value 'P' is returned, but in Discoverer the condition is applied to the value 'Posted'. I'm not too sure how DECODE works, but I think this could be causing some sort of issue with full table scans. How do we get around this?
    Lance
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),
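    A sketch of one plausible explanation, assuming the folder is based on the view over GL.GL_JE_BATCHES shown above (the selected column and the shortened DECODE are only illustrative): because the Discoverer item is the decoded label, the filter wraps STATUS in the DECODE function, so an index on the underlying column cannot be used and the optimizer falls back to the TABLE ACCESS FULL on GL.GL_JE_BATCHES seen in the 'After Condition' plan.
    -- What Discoverer effectively generates: the condition sits on the decoded
    -- label, so any index on STATUS is unusable and the table is scanned in full.
    SELECT b.je_batch_id
    FROM   gl.gl_je_batches b
    WHERE  DECODE(b.status, 'P', 'Posted', 'U', 'Unposted', NULL) = 'Posted';
    -- Filtering on the raw code leaves STATUS bare, so an index on STATUS
    -- (if one exists) could be used instead of a full scan.
    SELECT b.je_batch_id
    FROM   gl.gl_je_batches b
    WHERE  b.status = 'P';
    One possible workaround in Discoverer is to expose the raw JOURNAL_BATCH1.STATUS code as its own folder item (or a calculated item) and place the condition on that, keeping the decoded description only for display.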

  • Performance issue with pl/sql code

    Hi Oracle Gurus,
    I am in need of your recommendations for a performance issue that I am facing in a production environment. There is a PL/SQL procedure which executes with different elapsed times on different executions. The elapsed times are 30 minutes, 40 minutes, 65 minutes, 3 minutes, 3 seconds.
    The expected elapsed time is a maximum of 3 minutes. (But sometimes it took only 3 seconds too...!)
    The output of all the different executions is the same, that is, deletion and insertion of 12K records into a table.
    Here is the auto trace details of two different scenarios.
    Slow execution - 33.65 minutes
    Stat Name                                Statement   Per Execution % Snap
    Elapsed Time (ms)                         1,712,343    1,712,342.6    41.4
    CPU Time (ms)                             1,679,689    1,679,688.6    44.7
    Executions                                        1            N/A     N/A
    Buffer Gets                              ##########  167,257,973.0    86.9
    Disk Reads                                    1,284        1,284.0     0.4
    Parse Calls                                       1            1.0     0.0
    User I/O Wait Time (ms)                       4,264            N/A     N/A
    Cluster Wait Time (ms)                        3,468            N/A     N/A
    Application Wait Time (ms)                        0            N/A     N/A
    Concurrency Wait Time (ms)                        6            N/A     N/A
    Invalidations                                     0            N/A     N/A
    Version Count                                     4            N/A     N/A
    Sharable Mem(KB)                                 85            N/A     N/A
              -------------------------------------------------------------
    Fast Execution : 5 seconds
    Stat Name                                Statement   Per Execution % Snap
    Elapsed Time (ms)                            41,550       41,550.3     0.7
    CPU Time (ms)                                40,776       40,776.3     1.0
    Executions                                        1            N/A     N/A
    Buffer Gets                               2,995,677    2,995,677.0     4.2
    Disk Reads                                       22           22.0     0.0
    Parse Calls                                       1            1.0     0.0
    User I/O Wait Time (ms)                         162            N/A     N/A
    Cluster Wait Time (ms)                          621            N/A     N/A
    Application Wait Time (ms)                        0            N/A     N/A
    Concurrency Wait Time (ms)                       55            N/A     N/A
    Invalidations                                     0            N/A     N/A
    Version Count                                     4            N/A     N/A
    Sharable Mem(KB)                                 85            N/A     N/A
              -------------------------------------------------------------
    For security reasons, I cannot share the actual code. It's report-generating code that deletes and loads the data into tables using INSERT INTO ... SELECT statements.
    Delete from table ;
    cursor X to get the master data ( 98 records )
    For each X loop
        insert into tableA select * from tables where a = X.a and b = X.b and c = X.c ..... ;
        -- 12 K records inserted on average
        insert into tableB select * from tables where a = X.a and b = X.b and c = X.c ..... ;
        -- 12 K records inserted on average
    end loop ;
    1. The select query is complex with bind variables ( explain plan varies for each set of values )
    2. I have checked the tablespace of the tables involved; it is 82% used. The DBA confirmed that this is not the reason.
    3. Disk reads are high during the long executions.
    4. At long running times, I can see a 'db file sequential read' wait event on an index object. This index is on the table where the data is inserted.
    All I need to find out is why this code takes 3 seconds and then 60 minutes on the same day, on consecutive executions.
    Is there any other approach to find the root cause of this behaviour and to fix it? Kindly advise.
    Thanks in advance for your help.
    Regards,
    Hari
    Edited by: BluShadow on 26-Sep-2012 08:24
    edited to add {noformat}{noformat} tags.  You've been a member long enough to know to do this yourself... so please do so in future.  ({message:id=9360002})

    Hariharan ST wrote:
    Hi Oracle Gurus,
    I am in need of your recommendations for a performance issue that I am facing in production envrionment. There is a pl/sql procedure which executes with different elapsed time at different executions.
    Please re-edit your post and add some code tags around the trace information. This would improve readability greatly and will help us to help you.
    example:
    {code}
    select * from dual;
    {code}
    Based upon your description I can imagine two things.
    a) The execution plan for the select query does change frequently.
    A typical reason is out-of-date statistics.
    b) Some locking / wait conflict, for example on a UK index.
    Are there any other operations going on while it is slow? If anybody else inserts a value, then your session will wait if the same (PK/UK) value is also to be inserted.
    Those wait events can be recognized using standard tools like Oracle SQL Developer or Enterprise Manager while the query is slow.
    Also go through the links that are in the FAQ. They tell you how to provide better information for making a tuning request.
    SQL and PL/SQL FAQ
    Edited by: Sven W. on Sep 25, 2012 6:41 PM
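    For reference, a minimal sketch (the sql_id and module values are placeholders) of how plan changes and the waits described above can be checked from the dynamic performance views:
    -- More than one PLAN_HASH_VALUE for the same statement usually points to
    -- bind-sensitive plans or stale statistics.
    SELECT sql_id, plan_hash_value, executions,
           ROUND(elapsed_time / 1e6) AS elapsed_seconds
    FROM   v$sql
    WHERE  sql_id = '&sql_id';
    -- Show the plan actually used by the cursor.
    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL));
    -- While the procedure is running, check what the session is waiting on.
    SELECT sid, event, wait_class, seconds_in_wait
    FROM   v$session
    WHERE  module = '&your_module';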

  • Performance issue with FDM when importing data

    In the FDM Web console, a performance issue has been detected when importing data (.txt).
    In less than 10 seconds the ".txt" and the ".log" files are created in the INBOX folder (the ".txt" file) and in the OUTBOX\Logs folder (the ".log" file).
    At that moment the system shows the message "Processing, please wait" for 10 minutes. Eventually the information is displayed; however, if we want to see the second page, we have to wait more than 20 seconds.
    It seems to be a performance issue when the system tries to show the imported data in the web page.
    It has also been noted that when a user tries to import a txt file directly by clicking on the tab "Select File From Inbox", the user also has to wait another 10 minutes before the information is displayed on the web page.
    Thx in advance!
    Cheers
    Matteo

    Hi Matteo
    How much data is being imported / displayed when users are interacting with the system?
    There is a report that may help you to analyse this but unfortunately I cannot remember what it is called and don't have access to a system to check. I do remember that it breaks down the import process into stages showing how long it takes to process each mapping step and the overall time.
    I suspect that what you are seeing is normal behaviour but that isn't to say that performance improvements are not possible.
    The copying of files is the first part of the import process, before FDM actually starts the import, so that part will be quick. The processing time is then the time taken to import the records, process the mappings and write to the tables. If users are clicking 'Select file from Inbox' then they are re-importing, so it will take just as long as it would for you to import it; they are not just asking to retrieve previously imported data.
    Hope this helps
    Stuart

  • Performance issue with Adobe forms

    Dear SAP Experts,
    We have the following issue/requirement from our client. The client is on SAP ECC 6.0 - production environment.
    The client is highlighting a performance issue while accessing the Adobe forms for the HR and FI business processes (both static and interactive).
                    Examples are
        FI – Invoice Approvals
                    HR – Job Salary Change
    The client is asking us to provide best practices surrounding:
    1.       How to improve the performance of the Adobe forms when accessed in SAP.
    2.       Whether there is any other technology we can use in SAP to replace the Adobe forms that has a better performance profile.
    3.       Whether solutions such as Web Dynpro Floorplan Manager or Fiori UIs can be used as alternatives.
    Regards,
    Sakthi

    Hello Priya,
     Adobe forms are easy to develop and much more comfortable than SAP Scripts and Smartforms. Initially they are a bit difficult, but once you get your hands on them, they are among the simplest things in ABAP.
    Performance in Adobe forms is a mix of fine-tuning the layout as well as the back-end coding.
    Performance tuning in Adobe forms cannot be done overnight. A lot of care has to be taken during the initial stage of development.
    As far as my experience is concerned, please consider the points below while developing SAP Adobe forms.
    1) Avoid scripting (JavaScript/FormCalc) as much as possible inside the form. It drastically reduces performance and makes the form execute more slowly. If you still want to use scripting (which cannot be avoided for some requirements), use FormCalc, since it is comparatively faster than JavaScript.
    2) Try to avoid coding inside the form interface. You can always handle most of the coding in the driver program and pass the results to the form.
    3) Use form caching.
    For forms that have a fixed layout, it's a good way to increase the performance of form rendering. In the layout, go to Form Properties, click on the Defaults tab, select "Allow Form Rendering To Be Cached On Server", and click OK.
    For forms that have a flowable or dynamic layout, render the forms on the client side because it improves performance.
    Last but not least, please go through the post below by Otto Gold, which is worth a read at least once.
    How to write a messy form

Maybe you are looking for

  • Multiple Alerts getting raised for same Error

    Hi All, I have a Management Pack Which will will trigger an Exe. The Exe takes an input and that input is being provided as a parameter from the MP. Whenever the exe raises throws an error an event in the event viewer and also  we are raising an Aler

  • Adobe Flash Player not working on any browser in computer

    I really need help flash player not working in any browser in google chrome. For example in http://www.twitch.tv/ You need Adobe Flash Player to watch this video.  Download it from Adobe. I already did everything to fix it shown in support page pleas

  • Audio is not in sync with the video. how do I make audio run in time with video?

    when I play any video from any site, the video lags behind the audio. How do I correct this?

  • R/3, APO, BW Quality Assurance Refresh order

    All, We are determining the best and recommended way to refresh these QA systems that are integrated in our landscape. -R/3 -APO -BW Is there any recommended order/sequence in which they will have to be refreshed to maintain integrity? Can we refresh

  • SAP systems for public access with OSS ID

    Hi, experts, I heard that there is a community in SDN which allow users to access SAP systems (such as CRM, XI/PI, etc.) to test and try things. (1) Is that true ? What is the URL ?      If not, is there any site for public access to SAP systems with