Report Variable F4 help performance issue

Hi,
I have a BI report on a DSO >> MultiProvider.
When I choose the F4 variable help, it takes around 2-3 minutes to display the data for selection.
Currently the XXXX characteristic is set to -- Only Posted Values for Navigation. I have tried the options below but could not see any improvement in variable F4 help performance:
-- Only Values in InfoProvider
-- Values in Master Data Table
Could someone please suggest if you have faced a similar issue.
Thanks
Ashok

Hello Ashok,
Ideally you should never build a report directly on a DSO, because a DSO stores data at line-item level. The way the filter works is that it looks at all records (SIDs) lying in the selection range in the InfoProvider and only displays the filter pane after it has populated those records.
There is really only one check to make in this case: whether your DB statistics are up to date. The DB statistics decide whether an index is used or not. So check the DSO active table and all related SID tables in transaction DB20 and look at the creation date of the statistics; if it is too far in the past, the database does not use the indices effectively and hence the F4 help takes time.
You can use program RSANAORA to recompute the statistics.
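One of the things RSANAORA can do on an Oracle-based BW system is refresh the cost-based optimizer statistics for a chosen table. A minimal manual equivalent from SQL*Plus could look like the sketch below; this assumes an Oracle database, and the schema and table names are placeholders for your own DSO active table and SID tables, not values from this thread.

-- Check when statistics were last gathered (names are placeholders)
SELECT table_name, last_analyzed
FROM   dba_tables
WHERE  table_name IN ('/BIC/AZMYDSO00', '/BI0/SMATERIAL');

-- Refresh optimizer statistics for one table, including its indexes
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SAPR3',            -- placeholder BW schema owner
    tabname => '/BIC/AZMYDSO00',   -- placeholder DSO active table
    cascade => TRUE);              -- also gather index statistics
END;
/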
Regards,
Ansel D'Souza

Similar Messages

  • 'How to... use reporting variables in BPS' syntax issue

    Hi,
    I am implementing the white paper 'How to...use reporting variables in BW-BPS' and I am getting a syntax error in the include YBW_BPS_VAR_READ. The error indicates 'Statement is not accessible' for my line:
    SELECT SINGLE * FROM ybw_bps_var_map
    ybw_bps_var_map being my transparent table as described in the white paper.
    I have never seen such a syntax error; has anyone had the same issue? My code is the one in the white paper.
    Thanks for any help.
    David

    Hi David,
    I have the coding up and running. I have not seen this error before and don't know how to resolve it. If you cut&paste it from the PDF, then maybe some special characters got pasted into the ABAP editor. Try to delete the line and type it in manually.
    Regards
    Marc
    SAP NetWeaver RIG

  • Please help - performance issues with Lr 2.3

    I really hope some user/s or Adobe support members could help me with tweaks/suggestions to improve Lightroom 2.3's sluggish performance on my computer.
    First of all I have read about 100 different forum topics this morning, have used the forum search extensively and I cannot find any one topic which talks about different methods for improving performance. So I really have tried to find the answer before asking for help and I have done a lot of googling to try and find some simple answers but after all this reading I am more confused than when I started out.
    Let me first explain what I am experiencing and I will try and be as detailed as possible.
    1 - When I browse the library it is incredibly slow rendering the thumbnail previews. I found a topic which talked about defragging the LR catalog using sqlite and contig; I did that and it made no difference to performance.
    2 - I left my machine running an entire day to render the 1:1 previews, thinking this would help me with the slow thumbnail issues, but it made no difference. It is however faster now when I double-click an image and get the immediate 1:1 preview :)
    3 - I have turned off Automatically write changes into XMP, as suggested in another topic; that made no noticeable difference to performance either.
    4 - A few months ago when I was still running 2.1 I found a topic either here or on Google that told me to delete the entire previews folders and let LR recreate them from scratch. I did that back then and it did not make any difference either; it turned out to be 24 hours of wasted time for me.
    5 - I tried changing the following settings under catalog settings (has not improved anything):
    size: 2048 Pixels
    quality: High
    automatically discard 1:1 previews: Never
    6 - The only slow performance I am experiencing in LR is the thumbnails and this slows down my workflow dramatically. Everything else in LR seems to be quite fluid and usable and fast. I notice many users have complained about the slow thumbnails issues and I was praying that 2.3 was going to sort this out.
    7 - I have also noticed that when scrolling up and down the thumbnail list it sometimes uses more than 1 processing core and sometimes not, and when I stop scrolling the thumbnails and leave it alone for 5 seconds, suddenly all 8 processing cores spike to around 50% usage processing "something".
    8 - I have tried making my thumbnail previews different sizes, also makes no difference really, unless I go to the smallest size and then it seems ok but then it's hard to see what's going on at all with thumbnails that small.
    9 - Lightroom is only using up 1.1 GB of available memory, so I do not think the slowness relates to memory, and I am running my memory in a proper DDR3 configuration.
    10 - I have tried playing with my graphics card settings (below) but it does not seem to help no matter what I try.
    11 - Overall it "seems" that Lightroom is trying to re-render thumbnails continuously. I know technically this may not be true, but as an end user this is how it feels to me.
    My computer specifications are as follows:
    Intel i7 965 Extreme Edition (not overclocked)
    Intel DX58 Smackover Motherboard
    6GB DDR3 Ram (not overclocked)
    GEForce 9800GT Video Card (with WHQL driver)
    SATA 2 1TB Seagate Drives
    Windows Vista X64 (SP1)
    Lightroom 2.3 X64
    NEC 30" Display running at 2560x1600 resolution
    The way I have LR configured on my PC is as follows:
    - Program files installed on main C drive which is a 250GB partition on a 1TB Seagate Sata 2 drive with 190 GB Free space all the time.
    - Lightroom catalog is on my E drive which is a 1TB Seagate Sata 2 drive
    - Camera Raw Cache is set at 10GB and resides on the same E drive as the LR catalog
    - My E drive has 100GB Free Space
    - I have tried tweaks to the video card settings to be Lightroom spe

    Hi Don
    Thanks for the reply, I have tried changing any/all settings on the card set to performance versus quality and still it has no effect. I just got the latest video driver for the GT9800 which was released on the 3rd of March and that has no effect either. I tried loading the 32 bit version of Lightroom without any improvement to speed on the thumbnails whatsoever so that rules out the possibility of it being something to do with 64-bit.
    I read an Adobe knowledgebase article saying that the Nvidia control panel conflicts with certain Adobe software but they do not say how to turn it off and I also cannot even find the answer to that on Nvidia's forums either. So I still cannot say whether or not the Nvidia control panel is the problem because I cannot disable it.
    I phoned a friend this morning who is running Lightroom 2.3 on a simple Core-2 Duo machine and his thumbnails render in half the time that mine do and my hardware is top of the range stuff in comparison so you can imagine my frustration with this thumbnailing issue. My friend also has an Nvidia card 8800GT I think.
    I have spent so much time on this and I am no further to a solution and this blurry thumbnail issue is seriously messing with my eyes and it does not make for a very pleasant user experience.
    What concerns me is that while Lightroom is idle and the machine is idle, Lightroom is not doing anything in the background like rendering thumbnails. It only starts rendering them once I scroll down a page, no doubt by design, but on more powerful machines there is a lot of processing ability going unused. The thumbnails on the filmstrip render a lot faster compared to grid view, but their rendering state is also not retained; once I have scrolled past 100 images and then scroll back, they are all blurry again and I have to wait for them to re-render all over again. All this re-rendering of thumbnails is a really big waste of time.
    I also noticed something else interesting this morning: I opened Task Manager and left it open while I was scrolling thumbnails (continuous scrolling without waiting for them to render sharp) and only one core was being used for this processing. I tested this multiple times and left it open while I scrolled through 5000+ images, and only one core was used the whole time. Only once I stopped scrolling and Lightroom began its rendering of the thumbnails (about 10-15 seconds after stopping) did it start using multiple processors. I'm not sure if that has something to do with anything. Memory-wise I have 4.2 GB of available memory and Lightroom is only using about 425 MB.
    I have changed the size of my thumbnails to below halfway on the slider and that made only a slight difference, although now they are so small on this 30" screen that I am now suffering even more eye strain as a result.
    I am busy loading LR 1.x to confirm whether any of these behaviours existed back then and will report back.

  • Report preview and printing performance issues in CRXI R2

    Hello to all,
    We have successfully upgraded a corporate Web reporting site from CR8 to CRXI R2 Server SP3 VS2005 Asp2. Using managed reports, and native Oracle data access, performance has greatly improved. The CRXI site  displays the first page in half the time as CR8. These are often very large reports.
    The problem we are having is when you try to print, export, or just page thru a report in previewer. It takes as long to go to page 2, or to bring up the print dialog screen as it did for page 1 to display in the first place. This is drastically different from the performance on CR8. On the old site, when a report displayed, you could flip thru it like a Word document. Hardly any pause at all. Clicking on the printer icon brought up the print box immediately.
    Is there a way to tell the 'CrystalReportsViewer' to load all pages before showing the first page?
    If not, is anyone aware of a third party replacement for the CRV?
    Any help would be greatly appreciated.
    Joe Early

    Hello Joseph,
    What you're seeing is essentially expected behavior for a CR Server (BO Enterprise) based report.  When you try to page through a report or print it you're basically rerunning the report on each button click.
    To get around this behavior you can put your Report Object / InfoStore object into session and view, page, or print the session object from the viewer.
    You can review [Business Objects Note 1203389|https://bcp.wdf.sap.corp/sap/sapnotes/display/0001203389] for an example with the Crystal Reports .NET SDK. You'll want to add a check for the session on postback, etc., but the code should give you an idea of how to get started.
    Sincerely,
    Dan Kelleher

  • Bind variable & LIKE :1 - Performance issue

    I'm using ODP.NET version 10.2 and I'm facing a performance problem with the LIKE keyword in statements.
    I'm doing the following :
    OracleConnection conn = new OracleConnection();
    conn.ConnectionString = connectionString;
    string strQuery;
    OracleCommand cmd = new OracleCommand();
    cmd.Connection = conn;
    strQuery = "SELECT field FROM table WHERE field LIKE :1";
    cmd.CommandText = strQuery;
    cmd.Parameters.Add(":1", OracleDbType.Varchar2, 2, "x%", ParameterDirection.Input);
    -> This takes quite a long time to retrieve the data.
    If I do the following (without Parameters.Add) it flies:
    strQuery = "SELECT field FROM table WHERE field LIKE 'x%'";
    cmd.CommandText = strQuery;
    OracleDataAdapter da = new OracleDataAdapter();
    da.SelectCommand = cmd;
    DataSet ds = new DataSet();
    da.Fill(ds, "table");
    What am I doing wrong?
    Thanks in advance
    Philippe

    Hi,
    Is this behavior specific to ODP.NET? Do you get the same behavior testing in SQL*Plus with bind variables? If you're not sure how, here's how you can try it:
    Cheers,
    Greg
    SQL> var abc varchar2(100);
    SQL> exec :abc := 'K%';
    PL/SQL procedure successfully completed.
    SQL> set timing on
    SQL> select * from emp where ename like :abc;
    EMPNO ENAME JOB MGR
    HIREDATE SAL COMM DEPTNO
    7839 KING PRESIDENT
    17-NOV-81 5000 10
    Elapsed: 00:00:00.00
    SQL> select * from emp where ename like 'K%';
    EMPNO ENAME JOB MGR
    HIREDATE SAL COMM DEPTNO
    7839 KING PRESIDENT
    17-NOV-81 5000 10
    Elapsed: 00:00:00.01
    SQL>
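    To go a step further, you could also compare the execution plans the optimizer chooses for the bind and the literal versions, for example:
    SQL> set autotrace traceonly explain
    SQL> select * from emp where ename like :abc;
    SQL> select * from emp where ename like 'K%';
    EXPLAIN PLAN does not peek at bind values, so the first plan shows what the optimizer does with an unknown value; if the two plans differ (for example full scan versus index range scan), that difference would be consistent with the timing gap seen from ODP.NET.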

  • Report Performance Issue - Activity

    Hi gurus,
    I'm developing an Activity report using the transactional database (online real-time objects).
    The purpose of the report is to list all contact-related activities, and activities NOT related to a contact, by activity owner (user id).
    In order to fulfil that requirement I've created 2 reports:
    1) All activities related to a contact -- Report A
    pulls in Activity ID, Activity Type, Status, Contact ID
    2) All activities not related to a contact UNION all activities related to a contact (base report) -- Report B
    To get the list of activities not related to a contact I'm using an advanced filter based on the result of another request, which I think is the part that slows down the query:
    <Activity ID not equal to any Activity ID in Report B>
    Has anyone encountered a performance issue due to the advanced filter in Analytics before?
    Any input is really appreciated.
    Thanks in advance,
    Fina

    Fina,
    Union is always the last option. If you can get all records in one report, do not use a union.
    Since all the records you are targeting are in the Activity subject area, it is not necessary to combine reports. Add a column with the following logic:
    if contact id is null (or = 'Unspecified') then owner name else contact name
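    In Answers this could be expressed as a column formula along these lines (the folder and column names are only illustrative):
    CASE
      WHEN "Contact"."Contact ID" IS NULL
        OR "Contact"."Contact ID" = 'Unspecified'
      THEN "Activity"."Owner Name"
      ELSE "Contact"."Contact Name"
    END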
    Hope this helps.

  • XL Report performance issues

    Hello All,
    We are running some XL reports and facing some performance issues: 10 minutes for a report to run.
    We don't plan to split this report into many other ones, and I have already run the SQL tools for reorganizing and reindexing the tables (I believe this has nothing to do with performance, but I ran it anyway...) without any improvement.
    Can anybody help with this?
    Thanks!

    Hi Siavauch Saleki,
    Name the report which takes too long to run. I have worked on XL Reporter; when I run a report from a client system it takes a long time,
    so try running it from the server.
    Another way of making it faster is to go to Task Manager, on the Processes tab,
    and change the priority of XL Reporter and Excel to High.
    This makes it faster by allocating more processor time from the OS to it,
    and should improve the reporting speed.
    Let me know how it works.
    Regards
    Krish

  • Heaviest performance issue impacting business Free Pct showing 9.41%

    Dears,
    We are facing a heavy performance issue on our database and application. Our key users (Oracle techno-functional consultants) are complaining that when "Free Pct" was 18% performance was excellent, but now "Free Pct" has reached 9.41%.
    I am wondering whether any performance tuning steps can be done to get back to 18%; your valuable help is required.
    Following are the details:-
    Tablespace | Size (MB) | Free (MB) | Used (MB) | Free Pct | Used Pct | Max (MB)
    APPS_TS_TX_IDX | 84780.88 | 7973.75 | 76807.13 | 9.41 | 90.41 | 84780.88
    APPS_TS_TX_DATA | 120540.25 | 16301.88 | 104238.38 | 13.52 | 86.48 | 149276.23
    Do I have to rebuild the index tablespace? If yes, what precautions should be taken while doing this, and do I need downtime from the business?
    Will the command below, if run, resolve the issue?
    alter index apps.APPS_TS_TX_IDX rebuild;
    Kind regards,
    Mohammed

    Hi Helios,
    I shall certainly update, here are the details and feedback of our SR:-
    ======================================================================================================
    1, SR 3-6701198251 : Oracle Application PRODUCTION SHUTDOWN THREE TIMES
    === Data Collected ===
    Findings and Recommendations
    Finding 1: Commits and Rollbacks
    Impact is 4.76 active sessions, 52.92% of total activity.
    Waits on event "log file sync" while performing COMMIT and ROLLBACK operations
    were consuming significant database time.
    Recommendation 1: Host Configuration
    Estimated benefit is 4.76 active sessions, 52.92% of total activity.
    Action
    Investigate the possibility of improving the performance of I/O to the
    online redo log files.
    Rationale
    The average size of writes to the online redo log files was 1788 K and
    the average time per write was 2766 milliseconds.
    Symptoms That Led to the Finding:
    Wait class "Commit" was consuming significant database time.
    Impact is 4.76 active sessions, 52.92% of total activity.
    Finding 2: Top SQL by DB Time
    Impact is 3.86 active sessions, 42.9% of total activity.
    SQL statements consuming significant database time were found.
    Recommendation 1: SQL Tuning
    Estimated benefit is 2.29 active sessions, 25.44% of total activity.
    Action
    Tune the PL/SQL block with SQL_ID "5t39uchjqpyfm". Refer to the "Tuning
    PL/SQL Applications" chapter of Oracle's "PL/SQL User's Guide and
    Reference".
    Related Object
    SQL statement with SQL_ID 5t39uchjqpyfm.
    BEGIN xla_accounting_pkg.unit_processor_batch(:errbuf,:rc,:A0,:A1,:A2
    ,:A3,:A4,:A5,:A6,:A7,:A8,:A9,:A10,:A11,:A12,:A13,:A14); END;
    log_buffer = 10485760
    log_checkpoint_interval = 100000
    log_checkpoint_timeout = 1200
    log_checkpoints_to_alert = TRUE
    === Action Plan ===
    Mohammed,
    AWR and ADDM reports clearly point to performance issues around redo logs. Top waits were:
    Top 5 Timed Foreground Events
    Event | Waits | Time(s) | Avg wait (ms) | % DB time | Wait Class
    log file sync | 1,517 | 13,265 | 8744 | 45.55 | Commit
    log buffer space | 9,048 | 8,218 | 908 | 28.22 | Configuration
    buffer busy waits | 6,519 | 3,743 | 574 | 12.85 | Concurrency
    DB CPU | | 2,177 | | 7.48 |
    db file sequential read | 54,769 | 540 | 10 | 1.85 | User I/O
    You have three options here:
    1> Increase the size of log_buffer. Set it to 15M and unset the other parameters (log_checkpoint_interval, log_checkpoint_timeout).
    2> Increase the size and number of online redo log files. Make sure that these are on fast disks.
    3> Run redo-generating jobs, like XLAACCUP, during off hours when end users are not in the system.
    ======================================================================================================
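    For illustration, options 1 and 2 from the action plan above could be implemented along the lines below; the group numbers, file path and sizes are placeholders, not values from the SR.
    -- Option 1: enlarge log_buffer and unset the checkpoint parameters
    -- (log_buffer is static, so SCOPE=SPFILE plus an instance restart is needed)
    ALTER SYSTEM SET log_buffer = 15728640 SCOPE=SPFILE;
    ALTER SYSTEM RESET log_checkpoint_interval SCOPE=SPFILE SID='*';
    ALTER SYSTEM RESET log_checkpoint_timeout SCOPE=SPFILE SID='*';
    -- Option 2: add larger online redo log groups on fast disks
    ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/PRD/redo05.log') SIZE 1024M;
    ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/PRD/redo06.log') SIZE 1024M;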
    Dear Helios,
    We are waiting for the RAM to arrive from our vendor to increase the application server from 16GB to 32GB; it will take 3 weeks from now.
    Secondly, our key users, especially the Oracle techno-functional consultants and Oracle Applications developers, are complaining that free space in the APPS_TS_TX_IDX tablespace has decreased by half, from 18% to 9.41%. I was attempting to find out whether there are any possibilities to tune this tablespace. Your suggestion is appreciated.
    Regards,
    Mohammed

  • Performance issue with JEST table

    Hi all,
    I have a report which is giving a performance issue.
    It hits the function module STATUS_READ, which in turn hits the table JEST.
    The select query is:
    SELECT SINGLE * FROM JSTO CLIENT SPECIFIED
    WHERE MANDT = MANDT
    AND   OBJNR = OBJNR.
    I know we should not use CLIENT SPECIFIED, but this is SAP standard code.
    Since this query is hit many times, it results in a TIME_OUT error.
    I observed that the table JEST has 133,523,962 entries in production, and in the technical settings the size category is mentioned as 3 (data records expected: 280,000 to 1,100,000).
    Since the data size is exceeded here, would changing the size category to 4 improve the performance?
    Or should I request the client to archive this table? If yes, please guide me on how to go about it. I have heard there are archiving objects; please specify which objects should be considered for archiving.
    I could only think of the above two solutions; please let me know if there is any other workaround.
    thanks!

    Hi,
    I'm not sure of the exact archiving object for this; here are some archiving objects related to table JEST:
    MM_EBAN
    MM_EKKO
    MM_MATNR
    PP_ORDER
    PR_ORDER
    PM_NET
    Please go through them using transaction SARA.
    Thanks,
    Mahesh

  • Performance issue in DB need help with analysing this ADDM report

    Hi,
    My environment:
    Os: RHEL5U3 / 11.1.0.7 64 bit / R12.1.1 64 bit
    Issue:
    For the past few days I have been facing a serious performance problem in our production instance. The issue normally occurs for 5 to 10 minutes, occasionally, each day. At the time of the issue we are not able to access the EBS application; it takes a long time to load. On the back end, however, all the Oracle, listener and apps services are up and running, there are no locks at table or session level, and CPU and memory usage is normal.
    We monitored the issue using Enterprise Manager and found many waiting sessions in the Active Sessions tab. At that time the EBS application cannot be accessed; it takes too long to load. After some time the waiting sessions in the Active Sessions tab return to normal, and when we try to access the EBS application it works fine.
    We tried to find the cause of the issue by running an ADDM report, but I am not able to understand what it says. Kindly advise.
    ADDM Report for Task 'TASK_42656'
    Analysis Period
    AWR snapshot range from 14754 to 14755.
    Time period starts at 17-APR-12 11.00.22 AM
    Time period ends at 17-APR-12 12.00.33 PM
    Analysis Target
    Database 'PRD' with DB ID 1789440879.
    Database version 11.1.0.7.0.
    ADDM performed an analysis of instance PRD, numbered 1 and hosted at
    advgrpdb.advgroup.ae.
    Activity During the Analysis Period
    Total database time was 18674 seconds.
    The average number of active sessions was 5.17.
    Summary of Findings
    # | Description | Active Sessions | Percent of Activity | Recommendations
    1 | Top SQL by DB Time | 3.43 | 66.33 | 5
    2 | Buffer Busy | 2.52 | 48.81 | 5
    3 | Buffer Busy | 1.39 | 26.81 | 2
    4 | Log File Switches | .91 | 17.56 | 1
    5 | Buffer Busy | .56 | 10.87 | 2
    6 | Undersized SGA | .38 | 7.37 | 1
    7 | Commits and Rollbacks | .28 | 5.42 | 1
    8 | Undo I/O | .18 | 3.53 | 0
    9 | CPU Usage | .13 | 2.57 | 1
    10 | Top SQL By I/O | .11 | 2.21 | 1
    Findings and Recommendations
    Finding 1: Top SQL by DB Time
    Impact is 3.43 active sessions, 66.33% of total activity.
    SQL statements consuming significant database time were found.
    Recommendation 1: SQL Tuning
    Estimated benefit is 1.59 active sessions, 30.8% of total activity.
    Action
    Investigate the SQL statement with SQL_ID "a49xsqhv0h31b" for possible
    performance improvements.
    Related Object
    SQL statement with SQL_ID a49xsqhv0h31b.
    SELECT R.Conc_Login_Id, R.Request_Id, R.Phase_Code, R.Status_Code,
    P.Application_ID, P.Concurrent_Program_ID, P.Concurrent_Program_Name,
    R.Enable_Trace, R.Restart, DECODE(R.Increment_Dates, 'Y', 'Y', 'N'),
    R.NLS_Compliant, R.OUTPUT_FILE_TYPE, E.Executable_Name,
    E.Execution_File_Name, A2.Basepath, DECODE(R.Stale, 'Y', 'C',
    P.Execution_Method_Code), P.Print_Flag, P.Execution_Options,
    DECODE(P.Srs_Flag, 'Y', 'Y', 'Q', 'Y', 'N'), P.Argument_Method_Code,
    R.Print_Style, R.Argument_Input_Method_Code, R.Queue_Method_Code,
    R.Responsibility_ID, R.Responsibility_Application_ID, R.Requested_By,
    R.Number_Of_Copies, R.Save_Output_Flag, R.Printer, R.Print_Group,
    R.Priority, U.User_Name, O.Oracle_Username,
    O.Encrypted_Oracle_Password, R.Cd_Id, A.Basepath,
    A.Application_Short_Name, TO_CHAR(R.Requested_Start_Date,'YYYY/MM/DD
    HH24:MI:SS'), R.Nls_Language, R.Nls_Territory,
    R.Nls_Numeric_Characters, DECODE(R.Parent_Request_ID, NULL, 0,
    R.Parent_Request_ID), R.Priority_Request_ID, R.Single_Thread_Flag,
    R.Has_Sub_Request, R.Is_Sub_Request, R.Req_Information,
    R.Description, R.Resubmit_Time, TO_CHAR(R.Resubmit_Interval),
    R.Resubmit_Interval_Type_Code, R.Resubmit_Interval_Unit_Code,
    TO_CHAR(R.Resubmit_End_Date,'YYYY/MM/DD HH24:MI:SS'),
    Decode(E.Execution_File_Name, NULL, 'N', Decode(E.Subroutine_Name,
    NULL, Decode(E.Execution_Method_Code, 'I', 'Y', 'J', 'Y', 'N'),
    'Y')), R.Argument1, R.Argument2, R.Argument3, R.Argument4,
    R.Argument5, R.Argument6, R.Argument7, R.Argument8, R.Argument9,
    R.Argument10, R.Argument11, R.Argument12, R.Argument13, R.Argument14,
    R.Argument15, R.Argument16, R.Argument17, R.Argument18, R.Argument19,
    R.Argument20, R.Argument21, R.Argument22, R.Argument23, R.Argument24,
    R.Argument25, X.Argument26, X.Argument27, X.Argument28, X.Argument29,
    X.Argument30, X.Argument31, X.Argument32, X.Argument33, X.Argument34,
    X.Argument35, X.Argument36, X.Argument37, X.Argument38, X.Argument39,
    X.Argument40, X.Argument41, X.Argument42, X.Argument43, X.Argument44,
    X.Argument45, X.Argument46, X.Argument47, X.Argument48, X.Argument49,
    X.Argument50, X.Argument51, X.Argument52, X.Argument53, X.Argument54,
    X.Argument55, X.Argument56, X.Argument57, X.Argument58, X.Argument59,
    X.Argument60, X.Argument61, X.Argument62, X.Argument63, X.Argument64,
    X.Argument65, X.Argument66, X.Argument67, X.Argument68, X.Argument69,
    X.Argument70, X.Argument71, X.Argument72, X.Argument73, X.Argument74,
    X.Argument75, X.Argument76, X.Argument77, X.Argument78, X.Argument79,
    X.Argument80, X.Argument81, X.Argument82, X.Argument83, X.Argument84,
    X.Argument85, X.Argument86, X.Argument87, X.Argument88, X.Argument89,
    X.Argument90, X.Argument91, X.Argument92, X.Argument93, X.Argument94,
    X.Argument95, X.Argument96, X.Argument97, X.Argument98, X.Argument99,
    X.Argument100, R.number_of_arguments, C.CD_Name,
    NVL(R.Security_Group_ID, 0), NVL(R.org_id, 0) FROM
    fnd_concurrent_requests R, fnd_concurrent_programs P, fnd_application
    A, fnd_user U, fnd_oracle_userid O, fnd_conflicts_domain C,
    fnd_concurrent_queues Q, fnd_application A2, fnd_executables E,
    fnd_conc_request_arguments X WHERE R.Status_code = 'I' And
    ((R.OPS_INSTANCE is null) or (R.OPS_INSTANCE = -1) or
    (R.OPS_INSTANCE =
    decode(:dcp_on,1,FND_CONC_GLOBAL.OPS_INST_NUM,R.OPS_INSTANCE))) And
    R.Request_ID = X.Request_ID(+) And R.Program_Application_Id =
    P.Application_Id(+) And R.Concurrent_Program_Id =
    P.Concurrent_Program_Id(+) And R.Program_Application_Id =
    A.Application_Id(+) And P.Executable_Application_Id =
    E.Application_Id(+) And P.Executable_Id =
    E.Executable_Id(+) And P.Executable_Application_Id =
    A2.Application_Id(+) And R.Requested_By = U.User_Id(+) And R.Cd_Id
    = C.Cd_Id(+) And R.Oracle_Id = O.Oracle_Id(+) And Q.Application_Id =
    :q_applid And Q.Concurrent_Queue_Id = :queue_id And (P.Enabled_Flag
    is NULL OR P.Enabled_Flag = 'Y') And R.Hold_Flag = 'N' And
    R.Requested_Start_Date <= Sysdate And ( R.Enforce_Seriality_Flag =
    'N' OR ( C.RunAlone_Flag = P.Run_Alone_Flag And (P.Run_Alone_Flag =
    'N' OR Not Exists (Select Null From Fnd_Concurrent_Requests Sr
    Where Sr.Status_Code In ('R', 'T') And Sr.Enforce_Seriality_Flag =
    'Y' And Sr.CD_id = C.CD_Id)))) And Q.Running_Processes <=
    Q.Max_Processes And R.Rowid = :reqname And
    ((P.Execution_Method_Code != 'S' OR
    (R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) IN
    ((0,98),(0,100),(0,31721),(0,31722),(0,31757))) AND
    ((R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) NOT IN
    ((510,40112),(510,40113),(510,41497),(510,41498),(530,41859),(530,418
    60),(535,41492),(535,41493),(535,41494)))) FOR UPDATE OF
    R.status_code NoWait
    Rationale
    SQL statement with SQL_ID "a49xsqhv0h31b" was executed 4686 times and
    had an average elapsed time of 1.2 seconds.
    Rationale
    Waiting for event "buffer busy waits" in wait class "Concurrency"
    accounted for 85% of the database time spent in processing the SQL
    statement with SQL_ID "a49xsqhv0h31b".
    Rationale
    Waiting for event "log file switch (checkpoint incomplete)" in wait
    class "Configuration" accounted for 9% of the database time spent in
    processing the SQL statement with SQL_ID "a49xsqhv0h31b".
    Recommendation 3: SQL Tuning
    Estimated benefit is .56 active sessions, 10.91% of total activity.
    Action
    Investigate the SQL statement with SQL_ID "5d7957yktf3nn" for possible
    performance improvements.
    Related Object
    SQL statement with SQL_ID 5d7957yktf3nn.
    UPDATE ICX_SESSIONS SET TIME_OUT = :B2 WHERE SESSION_ID = :B1
    Rationale
    SQL statement with SQL_ID "5d7957yktf3nn" was executed 266 times and had
    an average elapsed time of 7.6 seconds.
    Rationale
    Waiting for event "buffer busy waits" in wait class "Concurrency"
    accounted for 86% of the database time spent in processing the SQL
    statement with SQL_ID "5d7957yktf3nn".
    Rationale
    Waiting for event "log file switch (checkpoint incomplete)" in wait
    class "Configuration" accounted for 7% of the database time spent in
    processing the SQL statement with SQL_ID "5d7957yktf3nn".
    Finding 2: Buffer Busy
    Impact is 2.52 active sessions, 48.81% of total activity.
    Read and write contention on database blocks was consuming significant
    database time.
    Recommendation 1: Application Analysis
    Estimated benefit is 1.42 active sessions, 27.44% of total activity.
    Action
    Trace the cause of object contention due to SELECT statements in the
    application using the information provided.
    Related Object
    Database object with ID 34562.
    Rationale
    The SELECT statement with SQL_ID "a49xsqhv0h31b" was significantly
    affected by "buffer busy" waits.
    Related Object
    SQL statement with SQL_ID a49xsqhv0h31b.
    SELECT R.Conc_Login_Id, R.Request_Id, R.Phase_Code, R.Status_Code,
    P.Application_ID, P.Concurrent_Program_ID, P.Concurrent_Program_Name,
    R.Enable_Trace, R.Restart, DECODE(R.Increment_Dates, 'Y', 'Y', 'N'),
    R.NLS_Compliant, R.OUTPUT_FILE_TYPE, E.Executable_Name,
    E.Execution_File_Name, A2.Basepath, DECODE(R.Stale, 'Y', 'C',
    P.Execution_Method_Code), P.Print_Flag, P.Execution_Options,
    DECODE(P.Srs_Flag, 'Y', 'Y', 'Q', 'Y', 'N'), P.Argument_Method_Code,
    R.Print_Style, R.Argument_Input_Method_Code, R.Queue_Method_Code,
    R.Responsibility_ID, R.Responsibility_Application_ID, R.Requested_By,
    R.Number_Of_Copies, R.Save_Output_Flag, R.Printer, R.Print_Group,
    R.Priority, U.User_Name, O.Oracle_Username,
    O.Encrypted_Oracle_Password, R.Cd_Id, A.Basepath,
    A.Application_Short_Name, TO_CHAR(R.Requested_Start_Date,'YYYY/MM/DD
    HH24:MI:SS'), R.Nls_Language, R.Nls_Territory,
    R.Nls_Numeric_Characters, DECODE(R.Parent_Request_ID, NULL, 0,
    R.Parent_Request_ID), R.Priority_Request_ID, R.Single_Thread_Flag,
    R.Has_Sub_Request, R.Is_Sub_Request, R.Req_Information,
    R.Description, R.Resubmit_Time, TO_CHAR(R.Resubmit_Interval),
    R.Resubmit_Interval_Type_Code, R.Resubmit_Interval_Unit_Code,
    TO_CHAR(R.Resubmit_End_Date,'YYYY/MM/DD HH24:MI:SS'),
    Decode(E.Execution_File_Name, NULL, 'N', Decode(E.Subroutine_Name,
    NULL, Decode(E.Execution_Method_Code, 'I', 'Y', 'J', 'Y', 'N'),
    'Y')), R.Argument1, R.Argument2, R.Argument3, R.Argument4,
    R.Argument5, R.Argument6, R.Argument7, R.Argument8, R.Argument9,
    R.Argument10, R.Argument11, R.Argument12, R.Argument13, R.Argument14,
    R.Argument15, R.Argument16, R.Argument17, R.Argument18, R.Argument19,
    R.Argument20, R.Argument21, R.Argument22, R.Argument23, R.Argument24,
    R.Argument25, X.Argument26, X.Argument27, X.Argument28, X.Argument29,
    X.Argument30, X.Argument31, X.Argument32, X.Argument33, X.Argument34,
    X.Argument35, X.Argument36, X.Argument37, X.Argument38, X.Argument39,
    X.Argument40, X.Argument41, X.Argument42, X.Argument43, X.Argument44,
    X.Argument45, X.Argument46, X.Argument47, X.Argument48, X.Argument49,
    X.Argument50, X.Argument51, X.Argument52, X.Argument53, X.Argument54,
    X.Argument55, X.Argument56, X.Argument57, X.Argument58, X.Argument59,
    X.Argument60, X.Argument61, X.Argument62, X.Argument63, X.Argument64,
    X.Argument65, X.Argument66, X.Argument67, X.Argument68, X.Argument69,
    X.Argument70, X.Argument71, X.Argument72, X.Argument73, X.Argument74,
    X.Argument75, X.Argument76, X.Argument77, X.Argument78, X.Argument79,
    X.Argument80, X.Argument81, X.Argument82, X.Argument83, X.Argument84,
    X.Argument85, X.Argument86, X.Argument87, X.Argument88, X.Argument89,
    X.Argument90, X.Argument91, X.Argument92, X.Argument93, X.Argument94,
    X.Argument95, X.Argument96, X.Argument97, X.Argument98, X.Argument99,
    X.Argument100, R.number_of_arguments, C.CD_Name,
    NVL(R.Security_Group_ID, 0), NVL(R.org_id, 0) FROM
    fnd_concurrent_requests R, fnd_concurrent_programs P, fnd_application
    A, fnd_user U, fnd_oracle_userid O, fnd_conflicts_domain C,
    fnd_concurrent_queues Q, fnd_application A2, fnd_executables E,
    fnd_conc_request_arguments X WHERE R.Status_code = 'I' And
    ((R.OPS_INSTANCE is null) or (R.OPS_INSTANCE = -1) or
    (R.OPS_INSTANCE =
    decode(:dcp_on,1,FND_CONC_GLOBAL.OPS_INST_NUM,R.OPS_INSTANCE))) And
    R.Request_ID = X.Request_ID(+) And R.Program_Application_Id =
    P.Application_Id(+) And R.Concurrent_Program_Id =
    P.Concurrent_Program_Id(+) And R.Program_Application_Id =
    A.Application_Id(+) And P.Executable_Application_Id =
    E.Application_Id(+) And P.Executable_Id =
    E.Executable_Id(+) And P.Executable_Application_Id =
    A2.Application_Id(+) And R.Requested_By = U.User_Id(+) And R.Cd_Id
    = C.Cd_Id(+) And R.Oracle_Id = O.Oracle_Id(+) And Q.Application_Id =
    :q_applid And Q.Concurrent_Queue_Id = :queue_id And (P.Enabled_Flag
    is NULL OR P.Enabled_Flag = 'Y') And R.Hold_Flag = 'N' And
    R.Requested_Start_Date <= Sysdate And ( R.Enforce_Seriality_Flag =
    'N' OR ( C.RunAlone_Flag = P.Run_Alone_Flag And (P.Run_Alone_Flag =
    'N' OR Not Exists (Select Null From Fnd_Concurrent_Requests Sr
    Where Sr.Status_Code In ('R', 'T') And Sr.Enforce_Seriality_Flag =
    'Y' And Sr.CD_id = C.CD_Id)))) And Q.Running_Processes <=
    Q.Max_Processes And R.Rowid = :reqname And
    ((P.Execution_Method_Code != 'S' OR
    (R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) IN
    ((0,98),(0,100),(0,31721),(0,31722),(0,31757))) AND
    ((R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) NOT IN
    ((510,40112),(510,40113),(510,41497),(510,41498),(530,41859),(530,418
    60),(535,41492),(535,41493),(535,41494)))) FOR UPDATE OF
    R.status_code NoWait
    UPDATE ICX_SESSIONS SET LAST_CONNECT = SYSDATE WHERE SESSION_ID = :B1
    Recommendation 1: Schema Changes
    Estimated benefit is .03 active sessions, .62% of total activity.
    Action
    Consider rebuilding the TABLE "APPLSYS.FND_LOGIN_RESP_FORMS" with object
    ID 34651 using a higher value for PCTFREE.
    Related Object
    Database object with ID 34651.
    Rationale
    The UPDATE statement with SQL_ID "cqc5crhxxt36t" was significantly
    affected by "buffer busy" waits.
    Related Object
    SQL statement with SQL_ID cqc5crhxxt36t.
    UPDATE FND_LOGIN_RESP_FORMS FLRF SET END_TIME = SYSDATE WHERE
    FLRF.LOGIN_ID = :B2 AND FLRF.LOGIN_RESP_ID = :B1 AND FLRF.END_TIME IS
    NULL AND (FLRF.FORM_ID, FLRF.FORM_APPL_ID) = (SELECT F.FORM_ID,
    F.APPLICATION_ID FROM FND_FORM F, FND_APPLICATION A WHERE F.FORM_NAME
    = :B4 AND F.APPLICATION_ID = A.APPLICATION_ID AND
    A.APPLICATION_SHORT_NAME = :B3 )
    Symptoms That Led to the Finding:
    Wait class "Concurrency" was consuming significant database time.
    Impact is 2.53 active sessions, 48.87% of total activity.
    Finding 4: Log File Switches
    Impact is .91 active sessions, 17.56% of total activity.
    Log file switch operations were consuming significant database time while
    waiting for checkpoint completion.
    This problem can be caused by use of hot backup mode on tablespaces. DML to
    tablespaces in hot backup mode causes generation of additional redo.
    Recommendation 1: Database Configuration
    Estimated benefit is .91 active sessions, 17.56% of total activity.
    Action
    Verify whether incremental shipping was used for standby databases.
    Symptoms That Led to the Finding:
    Wait class "Configuration" was consuming significant database time.
    Impact is .91 active sessions, 17.63% of total activity.
    Finding 5: Buffer Busy
    Impact is .56 active sessions, 10.87% of total activity.
    A hot data block with concurrent read and write activity was found. The block
    belongs to segment "ICX.ICX_SESSIONS" and is block 243489 in file 36.
    Recommendation 1: Application Analysis
    Estimated benefit is .56 active sessions, 10.87% of total activity.
    Action
    Investigate application logic to find the cause of high concurrent read
    and write activity to the data present in this block.
    Related Object
    Database block with object number 37562, file number 36 and block
    number 243489.
    Rationale
    The SQL statement with SQL_ID "5d7957yktf3nn" spent significant time on
    "buffer busy" waits for the hot block.
    Related Object
    SQL statement with SQL_ID 5d7957yktf3nn.
    UPDATE ICX_SESSIONS SET TIME_OUT = :B2 WHERE SESSION_ID = :B1
    Rationale
    The SQL statement with SQL_ID "326up1aym56dd" spent significant time on
    "buffer busy" waits for the hot block.
    Related Object
    SQL statement with SQL_ID 326up1aym56dd.
    UPDATE ICX_SESSIONS SET LAST_CONNECT = SYSDATE WHERE SESSION_ID = :B1
    Recommendation 2: Schema Changes
    Estimated benefit is .56 active sessions, 10.87% of total activity.
    Action
    Consider rebuilding the TABLE "ICX.ICX_SESSIONS" with object ID 37562
    using a higher value for PCTFREE.
    Related Object
    Database object with ID 37562.
    Symptoms That Led to the Finding:
    Wait class "Concurrency" was consuming significant database time.
    Impact is 2.53 active sessions, 48.87% of total activity.
    Finding 6: Undersized SGA
    Impact is .38 active sessions, 7.37% of total activity.
    The SGA was inadequately sized, causing additional I/O or hard parses.
    The value of parameter "sga_target" was "4096 M" during the analysis period.
    Recommendation 1: Database Configuration
    Estimated benefit is .12 active sessions, 2.33% of total activity.
    Action
    Increase the size of the SGA by setting the parameter "sga_target" to
    4608 M.
    Symptoms That Led to the Finding:
    Wait class "User I/O" was consuming significant database time.
    Impact is .7 active sessions, 13.57% of total activity.
    Hard parsing of SQL statements was consuming significant database time.
    Impact is .13 active sessions, 2.51% of total activity.
    Contention for latches related to the shared pool was consuming
    significant database time.
    Impact is 0 active sessions, .03% of total activity.
    Wait class "Concurrency" was consuming significant database time.
    Impact is 2.53 active sessions, 48.87% of total activity.
    Finding 7: Commits and Rollbacks
    Impact is .28 active sessions, 5.42% of total activity.
    Waits on event "log file sync" while performing COMMIT and ROLLBACK operations
    were consuming significant database time.
    Recommendation 1: Host Configuration
    Estimated benefit is .28 active sessions, 5.42% of total activity.
    Action
    Investigate the possibility of improving the performance of I/O to the
    online redo log files.
    Rationale
    The average size of writes to the online redo log files was 163 K and
    the average time per write was 68 milliseconds.
    Symptoms That Led to the Finding:
    Wait class "Commit" was consuming significant database time.
    Impact is .28 active sessions, 5.42% of total activity.
    Finding 8: Undo I/O
    Impact is .18 active sessions, 3.53% of total activity.
    Undo I/O was a significant portion (26%) of the total database I/O.
    No recommendations are available.
    Symptoms That Led to the Finding:
    The throughput of the I/O subsystem was significantly lower than
    expected.
    Impact is .08 active sessions, 1.46% of total activity.
    Wait class "User I/O" was consuming significant database time.
    Impact is .7 active sessions, 13.57% of total activity.
    Finding 9: CPU Usage
    Impact is .13 active sessions, 2.57% of total activity.
    Time spent on the CPU by the instance was responsible for a substantial part
    of database time.
    Recommendation 1: SQL Tuning
    Estimated benefit is .13 active sessions, 2.57% of total activity.
    Finding 10: Top SQL By I/O
    Impact is .11 active sessions, 2.21% of total activity.
    Individual SQL statements responsible for significant user I/O wait were
    found.
    Recommendation 1: SQL Tuning
    Estimated benefit is .11 active sessions, 2.22% of total activity.
    Action
    Run SQL Tuning Advisor on the SQL statement with SQL_ID "b3pnc5yctv2z5".
    Related Object
    SQL statement with SQL_ID b3pnc5yctv2z5.
    INSERT INTO ZX_TRANSACTION_LINES_GT( APPLICATION_ID ,ENTITY_CODE
    ,EVENT_CLASS_CODE ,TRX_ID ,TRX_LEVEL_TYPE ,TRX_LINE_ID ,LINE_CLASS
    ,LINE_LEVEL_ACTION ,TRX_LINE_TYPE ,TRX_LINE_DATE
    ,LINE_AMT_INCLUDES_TAX_FLAG ,LINE_AMT ,TRX_LINE_QUANTITY ,UNIT_PRICE
    ,PRODUCT_ID ,PRODUCT_ORG_ID ,UOM_CODE ,PRODUCT_CODE ,SHIP_TO_PARTY_ID
    ,SHIP_FROM_PARTY_ID ,BILL_TO_PARTY_ID ,BILL_FROM_PARTY_ID
    ,SHIP_FROM_PARTY_SITE_ID ,BILL_FROM_PARTY_SITE_ID
    ,SHIP_TO_LOCATION_ID ,SHIP_FROM_LOCATION_ID ,BILL_TO_LOCATION_ID
    ,SHIP_THIRD_PTY_ACCT_ID ,SHIP_THIRD_PTY_ACCT_SITE_ID ,HISTORICAL_FLAG
    ,TRX_LINE_CURRENCY_CODE ,TRX_LINE_CURRENCY_CONV_DATE
    ,TRX_LINE_CURRENCY_CONV_RATE ,TRX_LINE_CURRENCY_CONV_TYPE
    ,TRX_LINE_MAU ,TRX_LINE_PRECISION ,HISTORICAL_TAX_CODE_ID
    ,TRX_BUSINESS_CATEGORY ,PRODUCT_CATEGORY ,PRODUCT_FISC_CLASSIFICATION
    ,LINE_INTENDED_USE ,PRODUCT_TYPE ,USER_DEFINED_FISC_CLASS
    ,ASSESSABLE_VALUE ,INPUT_TAX_CLASSIFICATION_CODE ,ACCOUNT_CCID
    ,BILL_THIRD_PTY_ACCT_ID ,BILL_THIRD_PTY_ACCT_SITE_ID ,TRX_LINE_NUMBER
    ,TRX_LINE_DESCRIPTION ,PRODUCT_DESCRIPTION ,USER_UPD_DET_FACTORS_FLAG
    ,DEFAULTING_ATTRIBUTE1 ) SELECT :B4 ,:B3 ,:B2
    ,PRL.REQUISITION_HEADER_ID ,:B1 ,PRL.REQUISITION_LINE_ID ,'INVOICE'
    ,NVL(PRL.TAX_ATTRIBUTE_UPDATE_CODE,'UPDATE') ,'ITEM'
    ,NVL(PRL.NEED_BY_DATE, SYSDATE) ,'N' ,NVL(PRL.AMOUNT,
    PRL.UNIT_PRICE*PRL.QUANTITY) ,PRL.QUANTITY ,PRL.UNIT_PRICE
    ,PRL.ITEM_ID ,(SELECT FSP.INVENTORY_ORGANIZATION_ID FROM
    FINANCIALS_SYSTEM_PARAMS_ALL FSP WHERE FSP.ORG_ID=PRL.ORG_ID)
    ,(SELECT MUM.UOM_CODE FROM MTL_UNITS_OF_MEASURE MUM WHERE
    MUM.UNIT_OF_MEASURE=PRL.UNIT_MEAS_LOOKUP_CODE) ,MSIB.SEGMENT1
    ,PRL.DESTINATION_ORGANIZATION_ID ,PV.PARTY_ID ,PRH.ORG_ID
    ,PV.PARTY_ID ,PVS.PARTY_SITE_ID ,PVS.PARTY_SITE_ID
    ,PRL.DELIVER_TO_LOCATION_ID ,(SELECT HZPS.LOCATION_ID FROM
    HZ_PARTY_SITES HZPS WHERE HZPS.PARTY_SITE_ID = PVS.PARTY_SITE_ID)
    ,(SELECT LOCATION_ID FROM HR_ALL_ORGANIZATION_UNITS WHERE
    ORGANIZATION_ID=PRH.ORG_ID) ,PRL.VENDOR_ID ,PRL.VENDOR_SITE_ID ,NULL
    ,NVL(PRL.CURRENCY_CODE, :B9 ) ,NVL2(PRL.CURRENCY_CODE, PRL.RATE_DATE,
    SYSDATE) ,NVL2(PRL.CURRENCY_CODE, PRL.RATE, :B8 )
    ,NVL2(PRL.CURRENCY_CODE, PRL.RATE_TYPE, :B7 )
    ,FC.MINIMUM_ACCOUNTABLE_UNIT ,NVL(FC.PRECISION, 2) ,NULL
    ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.TRX_BUSINESS_CATEGORY, NULL),
    NULL ) ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.PRODUCT_CATEGORY, NULL), NULL )
    ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.PRODUCT_FISC_CLASSIFICATION,
    NULL), NULL ) ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.LINE_INTENDED_USE, NULL), NULL )
    ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.PRODUCT_TYPE, NULL), NULL )
    ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.USER_DEFINED_FISC_CLASS, NULL),
    NULL ) ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.ASSESSABLE_VALUE, NULL), NULL )
    ,DECODE(:B6 , 'REQIMPORT', PRL.TAX_NAME,
    DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.INPUT_TAX_CLASSIFICATION_CODE,
    NULL), NULL ) ) ,NVL((SELECT PRD.CODE_COMBINATION_ID FROM
    PO_REQ_DISTRIBUTIONS_ALL PRD WHERE PRD.REQUISITION_LINE_ID =
    PRL.REQUISITION_LINE_ID AND ROWNUM = 1), MSIB.EXPENSE_ACCOUNT )
    ,PV.VENDOR_ID ,PVS.VENDOR_SITE_ID ,PRL.LINE_NUM ,PRL.ITEM_DESCRIPTION
    ,PRL.ITEM_DESCRIPTION ,(SELECT 'Y' FROM DUAL WHERE :B6 = 'REQIMPORT'
    AND PRL.TAX_NAME IS NOT NULL) ,PRL.DESTINATION_ORGANIZATION_ID FROM
    PO_REQUISITION_HEADERS_ALL PRH, PO_REQUISITION_LINES_ALL PRL,
    ZX_LINES_DET_FACTORS ZXLDET, PO_VENDORS PV, PO_VENDOR_SITES_ALL PVS,
    MTL_SYSTEM_ITEMS_B MSIB, FND_CURRENCIES FC WHERE
    PRH.REQUISITION_HEADER_ID = :B5 AND PRH.REQUISITION_HEADER_ID =
    PRL.REQUISITION_HEADER_ID AND ZXLDET.APPLICATION_ID(+) = :B4 AND
    ZXLDET.ENTITY_CODE(+) = :B3 AND ZXLDET.EVENT_CLASS_CODE(+) = :B2 AND
    ZXLDET.TRX_LEVEL_TYPE(+) = :B1 AND ZXLDET.TRX_LINE_ID(+) =
    PRL.PARENT_REQ_LINE_ID AND PV.VENDOR_ID(+) = PRL.VENDOR_ID AND
    PVS.VENDOR_SITE_ID(+) = PRL.VENDOR_SITE_ID AND
    MSIB.INVENTORY_ITEM_ID(+) = PRL.ITEM_ID AND MSIB.ORGANIZATION_ID(+) =
    PRL.ORG_ID AND FC.CURRENCY_CODE(+) = PRL.CURRENCY_CODE AND
    NVL(PRL.MODIFIED_BY_AGENT_FLAG, 'N') = 'N' AND NVL(PRL.CANCEL_FLAG,
    'N') = 'N' AND NVL(PRL.CLOSED_CODE, 'OPEN') <> 'FINALLY CLOSED' AND
    PRL.LINE_LOCATION_ID IS NULL AND PRL.AT_SOURCING_FLAG IS NULL
    Rationale
    SQL statement with SQL_ID "b3pnc5yctv2z5" was executed 3 times and had
    an average elapsed time of 138 seconds.
    Rationale
    Average time spent in User I/O wait events per execution was 137
    seconds.
    Symptoms That Led to the Finding:
    Wait class "User I/O" was consuming significant database time.
    Impact is .7 active sessions, 13.57% of total activity.
    Additional Information
    Miscellaneous Information
    Wait class "Application" was not consuming significant database time.
    Wait class "Network" was not consuming significant database time.
    Session connect and disconnect calls were not consuming significant database
    time.
    The database's maintenance windows were active during 100% of the analysis
    period.
    Regards
    Athish

    "For the past few days I have been facing a serious performance problem in our Production instance" -- For production issues, please log an SR.
    Was this working before? If yes, any changes been done recently?
    Do you have the statistics collected up to date?
    Please see these docs.
    AutoInvoice Performance Issue When Processing Tax [ID 1059275.1]
    R12 : System Hangs When Attempting To Save Blanket Release After Applying Patch 11817843 [ID 1333336.1]
    Thanks,
    Hussein
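    As a starting point for the statistics question, a minimal check and refresh could look like the sketch below; the object names are only examples, and in EBS statistics are normally gathered with FND_STATS (or the Gather Schema Statistics concurrent request) rather than plain DBMS_STATS.
    -- When were some of the hot objects from the ADDM report last analyzed?
    SELECT owner, table_name, last_analyzed
    FROM   dba_tables
    WHERE  table_name IN ('FND_CONCURRENT_REQUESTS', 'ICX_SESSIONS');
    -- Refresh statistics for one of them the EBS way (schema/table are examples)
    EXEC FND_STATS.GATHER_TABLE_STATS('ICX', 'ICX_SESSIONS');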

  • Strange performance issue in bex report

    Hello Experts,
    I have a performance issue on my bex report.
    I'm running the report with the selection criteria below and getting a 'too much data' error.
    Country: equals EMEA
    Category: not equal to 13
    Date: 02/2010 to 12/2010
    But when I ran the report for smaller date ranges, the number of records did not exceed 13,000:
    02.2010 - 06.2010 - 6,555 rows
    07.2010 - 09.2010 - 3,671 rows
    10.2010 - 12.2010 - 2,780 rows
    I know Excel can't fit more than 65,000 records, but I'm expecting about 13,000 records for my wide date range, which Excel can easily fit.
    Any ideas on this one will be appreciated.
    Regards,
    Brahma Reddy

    Hi,
    For Question 1:
    In the Query Designer go to the query properties and select the "Variable Sequence" tab; here you can set the order of the variables as per your requirement.
    For Question 2:
    There is an option "Hide Repeated Key Values"; if you uncheck this option you will have the values in each row even when the material values are the same.
    Note: if you are viewing the report on the web or in a WAD report, you need to make the same change in the web template as well, because the settings in the query designer will be overridden when you run the query on the web.
    Hope this helps.
    Regards,
    Rk.

  • Report performance Issue in BI Answers

    Hi All,
    We have a performance issue with a report. The report runs for more than 10 minutes. We took the query from the session log and ran it in the database; there it took no more than 2 minutes. We have verified that proper indexes exist on the WHERE clause columns.
    Could anyone suggest how to improve the performance in BI Answers?
    Thanks in advance,

    I hope you don't have many CASE statements or complex calculations in Answers.
    The next thing to monitor is how many rows of data the query is trying to retrieve. If the volume is huge, Answers takes time to format the output because you are dumping a large volume of data. A database (like Teradata) initially returns something like 1-2,000 records; if you hit "show all records", even the database will take a fair amount of time when you are dumping many records.
    Hope it helps.
    Thanks
    Prash

  • BW BCS cube(0bcs_vc10 ) Report huge performance issue

    Hi Masters,
    I am working on a solution for a BW report developed on the 0BCS_VC10 virtual cube.
    Some of the queries take 15 to 20 minutes to execute the report.
    This is a huge performance issue. We are using BW 3.5, and the reports are developed in BEx and published through the portal. Has anyone faced a similar problem? Please advise how you tackled it and describe the detailed analysis approach you used to resolve it.
    Current service pack we are using is
    SAP_BW 350 0016 SAPKW35016
    FINBASIS 300 0012 SAPK-30012INFINBASIS
    BI_CONT 353 0008 SAPKIBIFP8
    SEM-BW 400 0012 SAPKGS4012
    Best of Luck
    Chris

    Ravi,
    I already did that; it is not helping performance much. Reports are still taking 15 to 20 minutes. I wanted to know whether anybody in this forum has had the same issue and how they resolved it.
    Regards,
    Chris

  • Performance issues ..with apex in reports version 3.1

    Hello All,
    I am using APEX 3.1 on Oracle 10g.
    I am facing performance issues with APEX. I am generating interactive reports with APEX and the number of records is huge, running to 30 or 40 thousand, and the report takes almost 30 minutes.
    How can I improve the performance of this kind of report? I am using APEX collections.
    How does APEX work in terms of retrieving the records?
    Please let me know.
    Thanks/kumar

    Hello Tony,
    The following is the sequence of steps to run the test case.
    Note: all the schemas, tables and variables are populated from the database.
    From Schema and Relations tab choose the following:
    1)     Select P3I2008Q4 as schema.
    2)     Choose Relation as query path.
    3)     Select ECLA, ECLB, MTAB as relations.
    From Variables choose the following:
    4)     Choose the variables AGE_SEXA,CLODESCA,ALCNO from ECLA relation.
    5)     Choose the variables AGE_SEXB, ALCNO, CLODESCB from ECLB relation.
    6)     Choose the variables EXPNAME, ALCNO, COST_, COST from MTAB relation.
    From Conditions: click the Run Report button; this generated a standard report (total no. of records in the report: 30,150).
    Click on the Interactive Report button to generate an interactive report (an error occurred).
    We are using a return SQL statement in generating the standard report and collections for the interactive report.
    thanks/kumar

  • Performance issues when creating a Report / Query in Discoverer

    Hi forum,
    Hope you can help; this involves a performance issue when creating a report/query.
    I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back Batch Status = 'Posted', we cancelled the query after it reached 20 minutes, as this is way too long. If I remove the condition the query time goes back to less than 5 seconds.
    Please see attached the SQL Inspector Plan:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    Is there anything I can do in Discoverer Desktop / Administration to avoid this problem?
    Many thanks,
    Lance

    Hi Rod,
    I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 minutes. To test, I changed it to (Batch Status||'' = 'Unposted') and the query returned within seconds again.
    I've been doing some more digging and have found the database view that is linked to the Journal Batches folder; see below.
    I think the problem is with the column using DECODE. When querying the column in TOAD the value 'P' is returned, but in Discoverer the condition is applied to the value 'Posted'. I'm not too sure how DECODE works, but I think this could be causing some sort of issue with full table scans. How do we get around this?
    Lance
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),
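    For what it's worth, one workaround sometimes used in this situation is to apply the condition to the raw one-character status code (for example in a custom folder or the underlying view) instead of to the decoded description; whether it helps depends on the indexes available on GL_JE_BATCHES.
    -- compare the underlying status column directly; 'P' maps to 'Posted' in the DECODE above
    SELECT b.*
    FROM   gl.gl_je_batches b
    WHERE  b.status = 'P';
    A condition written against the DECODE result forces the database to evaluate the translation for every row, so the optimizer cannot use the STATUS column directly for that predicate.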
