Performance issue in collection

Hi,
I am dealing with Lists and Maps in my program. I have a huge amount of data to process: 300,000 elements that I add to a list.
Generally, how much time will it take to process such a huge amount of data if I have a simple for loop adding to a list?
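For scale, just adding 300,000 elements to a list is cheap and easy to measure directly. Here is a minimal timing sketch, assuming a plain ArrayList<String> and simple string elements (all names in it are illustrative, not from the original program):

    import java.util.ArrayList;
    import java.util.List;

    public class AddTiming {
        public static void main(String[] args) {
            long start = System.nanoTime();

            // Add 300,000 elements to a plain ArrayList.
            List<String> list = new ArrayList<>();
            for (int i = 0; i < 300_000; i++) {
                list.add("element-" + i);
            }

            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Added " + list.size() + " elements in " + elapsedMs + " ms");
        }
    }

On typical hardware this finishes in tens of milliseconds, so if the whole job is slow, the time is going into the per-element work inside the loop rather than into List.add() itself.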

for (String each : Caps) {
            if (CodeMap.keySet().contains(each.toUpperCase())
From there on I count two more calls to each.toUpperCase(), one of them in an inner loop, so it will be called multiple times.
If you care about performance, why these repeated calls to each.toUpperCase()?
Do you believe you will get a different String each time?
Why not just
            String eachToUpperCase = each.toUpperCase();
            if (CodeMap.keySet().contains(eachToUpperCase)
                && CodeMap.get(eachToUpperCase).size() > 0) {
Why do you guard each get() with a contains()?
If you care about performance, why not just one simple get(), which will return null if the Map does not contain a value for the key you have searched for.
The only case where contains makes sense is if you store null as a value explicitly in your map and want to execute a special action if null is stored.
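A tiny illustration of that corner case (values made up for the example): with an explicitly stored null, get() alone cannot distinguish "absent" from "present but null", while containsKey() can:

    import java.util.HashMap;
    import java.util.Map;

    public class NullValueDemo {
        public static void main(String[] args) {
            Map<String, String> map = new HashMap<>();
            map.put("key", null); // null stored explicitly as a value

            System.out.println(map.get("key"));             // null
            System.out.println(map.get("missing"));         // null
            System.out.println(map.containsKey("key"));     // true
            System.out.println(map.containsKey("missing")); // false
        }
    }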
As this is not the case in your code, just improve to:
            String eachToUpperCase = each.toUpperCase();
            List?<String?> codes = CodeMap.get(eachToUpperCase);
            if (codes != null && codes.size() > 0) {
(Replace List? and String? with the appropriate types.)
The next line in your example is:
                if (BasedOnMap.get(each != null) {
There definitely is a ")" missing here, so you want us to estimate the performance of code that does not even compile!
I would assume
                if (BasedOnMap.get(each) != null) {
because in the next line you have
                    for (MatrixBo eachBo : BasedOnMap.get(each)) {
Again a double lookup where a single one would have been enough!
Here I am guessing the type returned by BasedOnMap.get(each) is List<MatrixBo>.
Using this and adding missing brackets:
                List<MatrixBo> matrixBoList = BasedOnMap.get(each);
                if (matrixBoList != null) {
                    for (MatrixBo eachBo : matrixBoList) {
                        String pointCode = eachBo.getPoint().getPointCode();
                        String pointCodeToUpperCase = pointCode.toUpperCase();
                        String? code = codes.get(pointCodeToUpperCase);
                        if (code != null) {
                            if (!exceptionlist.contains(NameBoMap.get(pointCode))) {
                                filteredList.add(eachBo);
                            }
                        }
                    }
                }
Same algorithm, improved performance, much more readable!
One further hint: The line
                            if (!exceptionlist.contains( ...
indicates you are performing a search in a (potentially huge) List(!), which has linear performance for every single search.
As it is searched in an inner loop these searches will occur quite often.
As the exceptionlist appears to be unchanged through your algorithm, start with transforming it into a Set before starting the outer loop:
Set<String?> exceptionSet = new HashSet<String?>(exceptionlist);
for (String each : Caps) {
then search in the set:
                            if (!exceptionSet.contains(NameBoMap.get(pointCode))) {
                                filteredList.add(eachBo);
                            }
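Putting the three improvements together (cache toUpperCase(), a single get() instead of contains() plus get(), and a Set instead of a List for the exception lookup), here is a self-contained sketch of the rewritten loop. None of the declarations were shown in the original post, so every type below is an assumption: in particular, codes is treated as a Map because the posted code looks it up by a String key, and Point/MatrixBo are reduced to minimal stand-ins.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    public class FilterSketch {
        // Minimal stand-ins for the post's classes (assumed shapes).
        record Point(String pointCode) {}
        record MatrixBo(Point point) {}

        public static void main(String[] args) {
            // Illustrative sample data; the real contents are unknown.
            List<String> caps = List.of("abc", "xyz");
            Map<String, Map<String, String>> codeMap = new HashMap<>();
            Map<String, List<MatrixBo>> basedOnMap = new HashMap<>();
            Map<String, String> nameBoMap = new HashMap<>();
            List<String> exceptionList = List.of("excludedName");

            codeMap.put("ABC", Map.of("P1", "code1"));
            basedOnMap.put("abc", List.of(new MatrixBo(new Point("p1"))));
            nameBoMap.put("p1", "someName");

            // Build the Set once, before the loop: contains() becomes O(1).
            Set<String> exceptionSet = new HashSet<>(exceptionList);

            List<MatrixBo> filteredList = new ArrayList<>();
            for (String each : caps) {
                // toUpperCase() called exactly once per outer element.
                String eachToUpperCase = each.toUpperCase();

                // One get() instead of contains() followed by get().
                Map<String, String> codes = codeMap.get(eachToUpperCase);
                if (codes == null || codes.isEmpty()) {
                    continue;
                }

                List<MatrixBo> matrixBoList = basedOnMap.get(each);
                if (matrixBoList == null) {
                    continue;
                }

                for (MatrixBo eachBo : matrixBoList) {
                    String pointCode = eachBo.point().pointCode();
                    String code = codes.get(pointCode.toUpperCase());
                    if (code != null && !exceptionSet.contains(nameBoMap.get(pointCode))) {
                        filteredList.add(eachBo);
                    }
                }
            }
            System.out.println("Kept " + filteredList.size() + " element(s).");
        }
    }

With 300,000 outer elements and a large exception list, switching the exception lookup from List.contains() to HashSet.contains() alone can turn minutes into seconds; caching toUpperCase() and collapsing the double lookups shave off further constant-factor cost on top of that.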

Similar Messages

  • Interested in performance issues? Read this! If you can explain it, you're a master Jedi!

    This is the question we will try to answer...
    What is the hardware bottleneck of Adobe Premiere Pro CS6?
    I used PPBM5 as a benchmark testing template.
    All the data and logs have been collected using performance counters.
    First of all, let me describe my computer...
    Operating System
    Microsoft Windows 8 Pro 64-bit
    CPU
    Intel Xeon E5 2687W @ 3.10GHz
    Sandy Bridge-EP/EX 32nm Technology
    RAM
    Corsair Dominator Platinum 64.0 GB DDR3
    Motherboard
    EVGA Corporation Classified SR-X
    Graphics
    PNY Nvidia Quadro 6000
    EVGA Nvidia GTX 680   // Yes, I created bench stats for both cards
    Hard Drives
    16.0GB Romex RAMDISK (RAID)
    556GB LSI MegaRAID 9260-8i SATA3 6Gb/s, 5 disks, with FastPath chip installed (RAID 0)
    I have other RAIDs installed, but they are not relevant to the present post...
    PSU
    Corsair 1000 Watts
    After many days of tests, I want to share my results with the community and comment on them.
    CPU Introduction
    I tested my CPU and pushed it to maximum speed to understand where the limit is and whether I can reach it, and I've logged all results precisely in a graph (see picture 1).
    Intro: I tested my E5 Xeon 2687W (8 cores with Hyper-Threading - 16 threads) to know if programs can use the maximum of it. I used Prime95 to get the result.  // I know this seems ordinary, but you will understand soon...
    The result: Yes, I can get 100% of my CPU with 1 program using 20 threads in parallel. The CPU gives everything it can!
    Comment: I put 3 I/O measures (CPU, disk, RAM) on the graph of my computer during the test...
    (picture 1)
    Disk Introduction
    I tested my disk and pushed it to maximum speed to understand where the limit is, and I've logged all results precisely in a graph (see picture 2).
    Intro: I tested my RAID 0 556GB (LSI MegaRAID 9260-8i SATA3 6Gb/s, 5 disks, with FastPath chip installed) to know if I can reach the maximum % disk usage (0% idle time).
    The result: As you can see in picture 2, yes, I can get the max of my drive at ~1.2 Gb/sec read/write, steady!
    Comment: I put 3 I/O measures (CPU, disk, RAM) on the graph of my computer during the test to see the impact of transferring many GB of data over ~10 sec...
    (picture 2)
    Now, I know my limits! It's time to go deeper into the subject!
    PPBM5 (H.264) Result
    I rendered the sequence (H.264) using Adobe Media Encoder.
    The result:
    My CPU is not used at 100%; it hovers around 50%.
    My disk is totally idle!
    All processes are idle except the Adobe Media Encoder process.
    The transfer rate seems to be a wave (up and down), probably caused by (encode time... write... encode time... write...)  // It's OK, ~5 Mb/sec transfer rate!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough! 39 GB of RAM free after the test!  // Excellent
    ~65 threads opened by Adobe Media Encoder (good, threads are a sign that the program tries to use many cores!)
    GPU load on the card seems to be a wave also (up and down): ~40% GPU usage during the encoding process.
    GPU RAM usage reaches 1.2 GB (but with the GTX 680, no problem, and with the Quadro 6000 and its 6 GB of RAM, no problem!)
    Comment/Question: CPU is free (50%), disks are free (99%), GPU is free (60%), RAM is free (62%); my computer is not pushed to its limit during the encoding process. Why???? Is there some time delay in the encoding process?
    Other: The Quadro 6000 & GTX 680 give the same result!
    (picture 3)
    PPBM5 (Disk Test) Result (RAID LSI)
    I rendered the sequence (Disk Test) using Adobe Media Encoder on my RAID 0 LSI disk.
    The result:
    My CPU is not used at 100%.
    My disk waves and waves again, but is far, far from the limit!
    All processes are idle except the Adobe Media Encoder process.
    The transfer rate waves and waves again (up and down), probably caused by (buffering time... write... buffering time... write...)  // It's OK, ~375 Mb/sec peak transfer rate! Easy!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough! 40.5 GB of RAM free after the test!  // Excellent
    ~48 threads opened by Adobe Media Encoder (good, threads are a sign that the program tries to use many cores!)
    GPU load on card = 0 (the GPU is irrelevant to this kind of encoding).
    GPU RAM usage reaches 400 MB (no usage for encoding).
    Comment/Question: CPU is free (65%), disks are free (60%), GPU is free (100%), RAM is free (63%); my computer is not pushed to its limit during the encoding process. Why???? Is there some time delay in the encoding process?
    (picture 4)
    PPBM5 (Disk Test) Result (Direct in RAMDrive)
    I rendered the same sequence (Disk Test) using Adobe Media Encoder directly in my RAM drive.
    Comment/Question: Look at the transfer rate in picture 5. It's exactly the same speed as with my RAID 0 LSI controller. Impossible! Look in the same picture at the transfer rate I can reach with the RAM drive (> 3.0 Gb/sec steady), and I don't go under 30% of disk usage. CPU is idle (70%), disk is idle (100%), GPU is idle (100%) and RAM is free (63%).  // This kind of result leaves me REALLY confused. It smells like a bug and a big problem with hardware and I/O usage in CS6!
    (picture 5)
    PPBM5 (MPEG-DVD) Result
    I rendered the sequence (MPEG-DVD) using Adobe Media Encoder.
    The result:
    My CPU is not used at 100%.
    My disk is totally idle!
    All processes are idle except the Adobe Media Encoder process.
    The transfer rate waves and waves again (up and down), probably caused by (encoding time... write... encoding time... write...)  // It's OK, ~2 Mb/sec transfer rate! A real joke!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough! 40 GB of RAM free after the test!  // Excellent
    ~80 threads opened by Adobe Media Encoder (a lot of threads, but that's OK in multi-threaded apps!)
    GPU load on card = 100 (this uses the maximum of my GPU).
    GPU RAM usage reaches 1 GB.
    Comment/Question: CPU is free (70%), disks are free (98%), GPU is loaded (MAX), RAM is free (63%); my computer is pushed to its limit during the encoding process for the GPU only. For this kind of encoding, the speed limit is set by the slowest component (the video card's GPU).
    Other: The Quadro 6000 is slower than the GTX 680 for this kind of encoding (~20 s slower than the GTX).
    (picture 6)
    Encoding single clip FULL HD AVCHD to H.264 Result (Premiere Pro CS6)
    You can see the result in the picture.
    Comment/Question: CPU is free (55%), disks are free (99%), GPU is free (90%), RAM is free (65%); my computer is not pushed to its limit during the encoding process. Why???? Adobe Premiere seems to have some bug with thread management. My hardware is idle! I understand AVCHD can be very difficult to decode, but where is the waste? My computer is willing, but the software is not!
    (picture 7)
    Render composition using 3D Raytracer in After Effects CS6
    You can see the result in the picture.
    Comment: The GPU seems to be the bottleneck when using After Effects. CPU is free (99%), disks are free (98%), memory is free (60%), and it depends on the settings and type of project.
    Other: The Quadro 6000 & GTX 680 give the same rendering time for the composition.
    (picture 8)
    Conclusion
    There is nothing you can do (I think) with CS6 to get better performance right now. The GTX 680 is the best consumer-grade card and the Quadro 6000 is the best professional card. Both cards give really similar results (I will probably return my GTX 680 since I don't really get any better performance). I have not used a Tesla card with my Quadro, but at present neither Premiere Pro nor After Effects uses multiple GPUs. I tried to use both cards together (GTX & Quadro), but After Effects gives priority to the slower card (in this case, the GTX 680).
    Premiere Pro, I'm speechless! Premiere Pro is not able to get the maximum performance out of my computer. Not just 10% or 20% short, but 60% on average. I'm a programmer; multi-threaded apps are difficult to manage and I can understand Adobe's programmers. But if anybody has comments about this post, tricks or any kind of solution, you can comment on this post. It seems to be a bug...
    Thank you.

    Patrick,
    I can't explain everything, but let me give you some background as I understand it.
    The first issue is that CS6 has a far less efficient internal buffering or caching system than CS5/5.5. That is why the MPEG encoding in CS6 is roughly 2-3 times slower than the same test with CS5. There is some 'under-the-hood' processing going on that causes this significant performance loss.
    The second issue is that AME does not handle regular memory and inter-process memory very well. I have described this here: Latest News
    As to your test results, there are some other noteworthy things to mention. 3D Ray tracing in AE is not very good in using all CUDA cores. In fact it is lousy, it only uses very few cores and the threading is pretty bad and does not use the video card's capabilities effectively. Whether that is a driver issue with nVidia or an Adobe issue, I don't know, but whichever way you turn it, the end result is disappointing.
    The overhead AME carries in our tests is something we are looking into and the next test will only use direct export and no longer the AME queue, to avoid some of the problems you saw. That entails other problems for us, since we lose the capability to check encoding logs, but a solution is in the works.
    You see very low GPU usage during the H.264 test, since there are only very few accelerated parts in the timeline, in contrast to the MPEG2-DVD test, where there is rescaling going on and that is CUDA accelerated. The disk I/O test suffers from the problems mentioned above and is the reason that my own Disk I/O results are only 33 seconds with the current test, but when I extend the duration of that timeline to 3 hours, the direct export method gives me 22 seconds, although the amount of data to be written, 37,092 MB, has increased threefold. An effective write speed of 1,686 MB/s.
    There are a number of performance issues with CS6 that Adobe is aware of, but whether they can be solved and in what time, I haven't the faintest idea.
    Just my $ 0.02

  • Performance issues with pipelined table functions

    I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from "Improving Performance with Pipelined Table Functions" (http://www.oracle-developer.net/display.php?id=429).
    Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
    Many thanks in advance.
    CREATE OR REPLACE PACKAGE pipeline_example
    IS
       TYPE resultset_typ IS REF CURSOR;
       TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
       TYPE table_typ IS TABLE OF row_typ;
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ;
       c_default_limit   CONSTANT PLS_INTEGER := 100;  
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ);
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ);
    END pipeline_example;
    CREATE OR REPLACE PACKAGE BODY pipeline_example
    IS
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ
       IS
          o_resultset   resultset_typ;
       BEGIN
          OPEN o_resultset FOR
             SELECT colC, colD, colE
               FROM some_table
              WHERE colA = ArgA AND colB = argB;
          RETURN o_resultset;
       END base_query;
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
       IS
          aa_source_data   table_typ;-- := table_typ ();
       BEGIN
          LOOP
             FETCH p_source_data
             BULK COLLECT INTO aa_source_data
             LIMIT p_limit_size;
             EXIT WHEN aa_source_data.COUNT = 0;
             /* Process the batch of (p_limit_size) records... */
             FOR i IN 1 .. aa_source_data.COUNT
             LOOP
                PIPE ROW (aa_source_data (i));
             END LOOP;
          END LOOP;
          CLOSE p_source_data;
          RETURN;
       END processor;
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT /*+ PARALLEL(t, 5) */ colC,
                       SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END) de,
                       SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END) ed,
                       SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
                       SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM TABLE (processor (base_query (argA, argB),100)) t
             GROUP BY colC
              ORDER BY colC;
       END with_pipeline;
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT colC,
                       SUM (CASE WHEN colD > colE AND colE  != '0' THEN colD / ColE END) de,
                       SUM (CASE WHEN colE > colD AND colD  != '0' THEN colE / ColD END) ed,
                      SUM (CASE WHEN colD = colE AND colD  != '0' THEN 1 END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM (SELECT colC, colD, colE
                         FROM some_table
                        WHERE colA = ArgA AND colB = argB)
             GROUP BY colC
             ORDER BY colC;
       END no_pipeline;
    END pipeline_example;
     ALTER PACKAGE pipeline_example COMPILE;

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
    Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701

  • Performance Issues with large XML (1-1.5MB) files

    Hi,
    I'm using XML Schema-based object-relational storage for my XML documents, which are typically 1-1.5 MB in size, and I am having serious performance issues with XPath queries.
    When I do XPath query against an element of SQLType varchar2, I get a good performance. But when I do a similar XPath query against an element of SQLType Collection (Varray of varchar2), I get a very ordinary performance.
    I have also created indexes on extract() and analyzed my XMLType table and indexes, but I have no performance gain. Also, I have tried all sorts of storage options available for Collections, i.e. Varrays, Nested Tables, IOTs, LOBs, Inline, etc., and all of these gave me the same bad performance.
    I even tried creating XMLType views based on XPath queries but the performance didn't improve much.
    I guess I'm running out of options and patience as well. ;)
    I would appreciate any ideas/suggestions, please help.....
    Thanks;
    Ramakrishna Chinta

    Are you having symptoms similar to mine? http://discussions.apple.com/thread.jspa?threadID=2234792&tstart=0

  • Performance issues with apex in reports version 3.1

    Hello All,
    I am using apex 3.1 oracle 10g.
    I am facing performance issues with APEX. I am generating interactive reports with APEX and the number of records is huge - running to 30 or 40 thousand records - and the report is taking almost 30 minutes.
    How can I improve the performance of this kind of report? I am using APEX collections.
    How does APEX work in terms of retrieving the records?
    Please let me know .
    Thanks/kumar

    Hello Tony ,
    The following are the sequence of steps to run the test case.
    Note: all the schemas, tables and variables are populated from the database.
    From Schema and Relations tab choose the following:
    1)     Select P3I2008Q4 as schema.
    2)     Choose Relation as query path.
    3)     Select ECLA, ECLB, MTAB as relations.
    From Variables choose the following:
    4)     Choose the variables AGE_SEXA,CLODESCA,ALCNO from ECLA relation.
    5)     Choose the variables AGE_SEXB, ALCNO, CLODESCB from ECLB relation.
    6)     Choose the variables EXPNAME, ALCNO, COST_, COST from MTAB relation.
    From Conditions: Click the Run Report button; this generated a standard report (total no. of records in report - 30150).
    Click on the Interactive Report button to generate an interactive report. (An error occurred.)
    We are using a return SQL statement for generating the standard report, and collections for the interactive report.
    thanks/kumar

  • Performance issue in DB - need help with analysing this ADDM report

    Hi,
    My environment:
    Os: RHEL5U3 / 11.1.0.7 64 bit / R12.1.1 64 bit
    Issue:
    For a few days I am facing a serious performance problem in our Production instance. Normally the issue occurs occasionally, for 5 to 10 minutes per day. At the time of the issue we are not able to access the EBS application; it takes too long to load. But on the backend all the Oracle, listener and apps services are up and running. There are no locks at table or session level. CPU and memory usage is normal.
    We have monitored this issue using Enterprise Manager, and we found many waiting sessions in the Active Sessions tab. At this time the EBS application is not accessible; it takes too long to load. After some time the waiting sessions in the Active Sessions tab return to normal, and when we try to access the EBS application it works fine.
    We tried to find the cause of the issue by running an ADDM report, but I am not able to understand what it says. Kindly advise me.
    ADDM Report for Task 'TASK_42656'
    Analysis Period
    AWR snapshot range from 14754 to 14755.
    Time period starts at 17-APR-12 11.00.22 AM
    Time period ends at 17-APR-12 12.00.33 PM
    Analysis Target
    Database 'PRD' with DB ID 1789440879.
    Database version 11.1.0.7.0.
    ADDM performed an analysis of instance PRD, numbered 1 and hosted at
    advgrpdb.advgroup.ae.
    Activity During the Analysis Period
    Total database time was 18674 seconds.
    The average number of active sessions was 5.17.
    Summary of Findings
    #   Description             Active Sessions | Percent of Activity   Recommendations
    1   Top SQL by DB Time      3.43 | 66.33                            5
    2   Buffer Busy             2.52 | 48.81                            5
    3   Buffer Busy             1.39 | 26.81                            2
    4   Log File Switches        .91 | 17.56                            1
    5   Buffer Busy              .56 | 10.87                            2
    6   Undersized SGA           .38 |  7.37                            1
    7   Commits and Rollbacks    .28 |  5.42                            1
    8   Undo I/O                 .18 |  3.53                            0
    9   CPU Usage                .13 |  2.57                            1
    10  Top SQL By I/O           .11 |  2.21                            1
    Findings and Recommendations
    Finding 1: Top SQL by DB Time
    Impact is 3.43 active sessions, 66.33% of total activity.
    SQL statements consuming significant database time were found.
    Recommendation 1: SQL Tuning
    Estimated benefit is 1.59 active sessions, 30.8% of total activity.
    Action
    Investigate the SQL statement with SQL_ID "a49xsqhv0h31b" for possible
    performance improvements.
    Related Object
    SQL statement with SQL_ID a49xsqhv0h31b.
    SELECT R.Conc_Login_Id, R.Request_Id, R.Phase_Code, R.Status_Code,
    P.Application_ID, P.Concurrent_Program_ID, P.Concurrent_Program_Name,
    R.Enable_Trace, R.Restart, DECODE(R.Increment_Dates, 'Y', 'Y', 'N'),
    R.NLS_Compliant, R.OUTPUT_FILE_TYPE, E.Executable_Name,
    E.Execution_File_Name, A2.Basepath, DECODE(R.Stale, 'Y', 'C',
    P.Execution_Method_Code), P.Print_Flag, P.Execution_Options,
    DECODE(P.Srs_Flag, 'Y', 'Y', 'Q', 'Y', 'N'), P.Argument_Method_Code,
    R.Print_Style, R.Argument_Input_Method_Code, R.Queue_Method_Code,
    R.Responsibility_ID, R.Responsibility_Application_ID, R.Requested_By,
    R.Number_Of_Copies, R.Save_Output_Flag, R.Printer, R.Print_Group,
    R.Priority, U.User_Name, O.Oracle_Username,
    O.Encrypted_Oracle_Password, R.Cd_Id, A.Basepath,
    A.Application_Short_Name, TO_CHAR(R.Requested_Start_Date,'YYYY/MM/DD
    HH24:MI:SS'), R.Nls_Language, R.Nls_Territory,
    R.Nls_Numeric_Characters, DECODE(R.Parent_Request_ID, NULL, 0,
    R.Parent_Request_ID), R.Priority_Request_ID, R.Single_Thread_Flag,
    R.Has_Sub_Request, R.Is_Sub_Request, R.Req_Information,
    R.Description, R.Resubmit_Time, TO_CHAR(R.Resubmit_Interval),
    R.Resubmit_Interval_Type_Code, R.Resubmit_Interval_Unit_Code,
    TO_CHAR(R.Resubmit_End_Date,'YYYY/MM/DD HH24:MI:SS'),
    Decode(E.Execution_File_Name, NULL, 'N', Decode(E.Subroutine_Name,
    NULL, Decode(E.Execution_Method_Code, 'I', 'Y', 'J', 'Y', 'N'),
    'Y')), R.Argument1, R.Argument2, R.Argument3, R.Argument4,
    R.Argument5, R.Argument6, R.Argument7, R.Argument8, R.Argument9,
    R.Argument10, R.Argument11, R.Argument12, R.Argument13, R.Argument14,
    R.Argument15, R.Argument16, R.Argument17, R.Argument18, R.Argument19,
    R.Argument20, R.Argument21, R.Argument22, R.Argument23, R.Argument24,
    R.Argument25, X.Argument26, X.Argument27, X.Argument28, X.Argument29,
    X.Argument30, X.Argument31, X.Argument32, X.Argument33, X.Argument34,
    X.Argument35, X.Argument36, X.Argument37, X.Argument38, X.Argument39,
    X.Argument40, X.Argument41, X.Argument42, X.Argument43, X.Argument44,
    X.Argument45, X.Argument46, X.Argument47, X.Argument48, X.Argument49,
    X.Argument50, X.Argument51, X.Argument52, X.Argument53, X.Argument54,
    X.Argument55, X.Argument56, X.Argument57, X.Argument58, X.Argument59,
    X.Argument60, X.Argument61, X.Argument62, X.Argument63, X.Argument64,
    X.Argument65, X.Argument66, X.Argument67, X.Argument68, X.Argument69,
    X.Argument70, X.Argument71, X.Argument72, X.Argument73, X.Argument74,
    X.Argument75, X.Argument76, X.Argument77, X.Argument78, X.Argument79,
    X.Argument80, X.Argument81, X.Argument82, X.Argument83, X.Argument84,
    X.Argument85, X.Argument86, X.Argument87, X.Argument88, X.Argument89,
    X.Argument90, X.Argument91, X.Argument92, X.Argument93, X.Argument94,
    X.Argument95, X.Argument96, X.Argument97, X.Argument98, X.Argument99,
    X.Argument100, R.number_of_arguments, C.CD_Name,
    NVL(R.Security_Group_ID, 0), NVL(R.org_id, 0) FROM
    fnd_concurrent_requests R, fnd_concurrent_programs P, fnd_application
    A, fnd_user U, fnd_oracle_userid O, fnd_conflicts_domain C,
    fnd_concurrent_queues Q, fnd_application A2, fnd_executables E,
    fnd_conc_request_arguments X WHERE R.Status_code = 'I' And
    ((R.OPS_INSTANCE is null) or (R.OPS_INSTANCE = -1) or
    (R.OPS_INSTANCE =
    decode(:dcp_on,1,FND_CONC_GLOBAL.OPS_INST_NUM,R.OPS_INSTANCE))) And
    R.Request_ID = X.Request_ID(+) And R.Program_Application_Id =
    P.Application_Id(+) And R.Concurrent_Program_Id =
    P.Concurrent_Program_Id(+) And R.Program_Application_Id =
    A.Application_Id(+) And P.Executable_Application_Id =
    E.Application_Id(+) And P.Executable_Id =
    E.Executable_Id(+) And P.Executable_Application_Id =
    A2.Application_Id(+) And R.Requested_By = U.User_Id(+) And R.Cd_Id
    = C.Cd_Id(+) And R.Oracle_Id = O.Oracle_Id(+) And Q.Application_Id =
    :q_applid And Q.Concurrent_Queue_Id = :queue_id And (P.Enabled_Flag
    is NULL OR P.Enabled_Flag = 'Y') And R.Hold_Flag = 'N' And
    R.Requested_Start_Date <= Sysdate And ( R.Enforce_Seriality_Flag =
    'N' OR ( C.RunAlone_Flag = P.Run_Alone_Flag And (P.Run_Alone_Flag =
    'N' OR Not Exists (Select Null From Fnd_Concurrent_Requests Sr
    Where Sr.Status_Code In ('R', 'T') And Sr.Enforce_Seriality_Flag =
    'Y' And Sr.CD_id = C.CD_Id)))) And Q.Running_Processes <=
    Q.Max_Processes And R.Rowid = :reqname And
    ((P.Execution_Method_Code != 'S' OR
    (R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) IN
    ((0,98),(0,100),(0,31721),(0,31722),(0,31757))) AND
    ((R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) NOT IN
    ((510,40112),(510,40113),(510,41497),(510,41498),(530,41859),(530,418
    60),(535,41492),(535,41493),(535,41494)))) FOR UPDATE OF
    R.status_code NoWait
    Rationale
    SQL statement with SQL_ID "a49xsqhv0h31b" was executed 4686 times and
    had an average elapsed time of 1.2 seconds.
    Rationale
    Waiting for event "buffer busy waits" in wait class "Concurrency"
    accounted for 85% of the database time spent in processing the SQL
    statement with SQL_ID "a49xsqhv0h31b".
    Rationale
    Waiting for event "log file switch (checkpoint incomplete)" in wait
    class "Configuration" accounted for 9% of the database time spent in
    processing the SQL statement with SQL_ID "a49xsqhv0h31b".
    Recommendation 3: SQL Tuning
    Estimated benefit is .56 active sessions, 10.91% of total activity.
    Action
    Investigate the SQL statement with SQL_ID "5d7957yktf3nn" for possible
    performance improvements.
    Related Object
    SQL statement with SQL_ID 5d7957yktf3nn.
    UPDATE ICX_SESSIONS SET TIME_OUT = :B2 WHERE SESSION_ID = :B1
    Rationale
    SQL statement with SQL_ID "5d7957yktf3nn" was executed 266 times and had
    an average elapsed time of 7.6 seconds.
    Rationale
    Waiting for event "buffer busy waits" in wait class "Concurrency"
    accounted for 86% of the database time spent in processing the SQL
    statement with SQL_ID "5d7957yktf3nn".
    Rationale
    Waiting for event "log file switch (checkpoint incomplete)" in wait
    class "Configuration" accounted for 7% of the database time spent in
    processing the SQL statement with SQL_ID "5d7957yktf3nn".
    Finding 2: Buffer Busy
    Impact is 2.52 active sessions, 48.81% of total activity.
    Read and write contention on database blocks was consuming significant
    database time.
    Recommendation 1: Application Analysis
    Estimated benefit is 1.42 active sessions, 27.44% of total activity.
    Action
    Trace the cause of object contention due to SELECT statements in the
    application using the information provided.
    Related Object
    Database object with ID 34562.
    Rationale
    The SELECT statement with SQL_ID "a49xsqhv0h31b" was significantly
    affected by "buffer busy" waits.
    Related Object
    SQL statement with SQL_ID a49xsqhv0h31b.
    (The statement text is identical to the SELECT ... FOR UPDATE statement quoted in full under Finding 1, Recommendation 1 above.)
    UPDATE ICX_SESSIONS SET LAST_CONNECT = SYSDATE WHERE SESSION_ID = :B1
    Recommendation 1: Schema Changes
    Estimated benefit is .03 active sessions, .62% of total activity.
    Action
    Consider rebuilding the TABLE "APPLSYS.FND_LOGIN_RESP_FORMS" with object
    ID 34651 using a higher value for PCTFREE.
    Related Object
    Database object with ID 34651.
    Rationale
    The UPDATE statement with SQL_ID "cqc5crhxxt36t" was significantly
    affected by "buffer busy" waits.
    Related Object
    SQL statement with SQL_ID cqc5crhxxt36t.
    UPDATE FND_LOGIN_RESP_FORMS FLRF SET END_TIME = SYSDATE WHERE
    FLRF.LOGIN_ID = :B2 AND FLRF.LOGIN_RESP_ID = :B1 AND FLRF.END_TIME IS
    NULL AND (FLRF.FORM_ID, FLRF.FORM_APPL_ID) = (SELECT F.FORM_ID,
    F.APPLICATION_ID FROM FND_FORM F, FND_APPLICATION A WHERE F.FORM_NAME
    = :B4 AND F.APPLICATION_ID = A.APPLICATION_ID AND
    A.APPLICATION_SHORT_NAME = :B3 )
    Symptoms That Led to the Finding:
    Wait class "Concurrency" was consuming significant database time.
    Impact is 2.53 active sessions, 48.87% of total activity.
    Finding 4: Log File Switches
    Impact is .91 active sessions, 17.56% of total activity.
    Log file switch operations were consuming significant database time while
    waiting for checkpoint completion.
    This problem can be caused by use of hot backup mode on tablespaces. DML to
    tablespaces in hot backup mode causes generation of additional redo.
    Recommendation 1: Database Configuration
    Estimated benefit is .91 active sessions, 17.56% of total activity.
    Action
    Verify whether incremental shipping was used for standby databases.
    Symptoms That Led to the Finding:
    Wait class "Configuration" was consuming significant database time.
    Impact is .91 active sessions, 17.63% of total activity.
    Finding 5: Buffer Busy
    Impact is .56 active sessions, 10.87% of total activity.
    A hot data block with concurrent read and write activity was found. The block
    belongs to segment "ICX.ICX_SESSIONS" and is block 243489 in file 36.
    Recommendation 1: Application Analysis
    Estimated benefit is .56 active sessions, 10.87% of total activity.
    Action
    Investigate application logic to find the cause of high concurrent read
    and write activity to the data present in this block.
    Related Object
    Database block with object number 37562, file number 36 and block
    number 243489.
    Rationale
    The SQL statement with SQL_ID "5d7957yktf3nn" spent significant time on
    "buffer busy" waits for the hot block.
    Related Object
    SQL statement with SQL_ID 5d7957yktf3nn.
    UPDATE ICX_SESSIONS SET TIME_OUT = :B2 WHERE SESSION_ID = :B1
    Rationale
    The SQL statement with SQL_ID "326up1aym56dd" spent significant time on
    "buffer busy" waits for the hot block.
    Related Object
    SQL statement with SQL_ID 326up1aym56dd.
    UPDATE ICX_SESSIONS SET LAST_CONNECT = SYSDATE WHERE SESSION_ID = :B1
    Recommendation 2: Schema Changes
    Estimated benefit is .56 active sessions, 10.87% of total activity.
    Action
    Consider rebuilding the TABLE "ICX.ICX_SESSIONS" with object ID 37562
    using a higher value for PCTFREE.
    Related Object
    Database object with ID 37562.
    Symptoms That Led to the Finding:
    Wait class "Concurrency" was consuming significant database time.
    Impact is 2.53 active sessions, 48.87% of total activity.
    Finding 6: Undersized SGA
    Impact is .38 active sessions, 7.37% of total activity.
    The SGA was inadequately sized, causing additional I/O or hard parses.
    The value of parameter "sga_target" was "4096 M" during the analysis period.
    Recommendation 1: Database Configuration
    Estimated benefit is .12 active sessions, 2.33% of total activity.
    Action
    Increase the size of the SGA by setting the parameter "sga_target" to
    4608 M.
    Symptoms That Led to the Finding:
    Wait class "User I/O" was consuming significant database time.
    Impact is .7 active sessions, 13.57% of total activity.
    Hard parsing of SQL statements was consuming significant database time.
    Impact is .13 active sessions, 2.51% of total activity.
    Contention for latches related to the shared pool was consuming
    significant database time.
    Impact is 0 active sessions, .03% of total activity.
    Wait class "Concurrency" was consuming significant database time.
    Impact is 2.53 active sessions, 48.87% of total activity.
    Finding 7: Commits and Rollbacks
    Impact is .28 active sessions, 5.42% of total activity.
    Waits on event "log file sync" while performing COMMIT and ROLLBACK operations
    were consuming significant database time.
    Recommendation 1: Host Configuration
    Estimated benefit is .28 active sessions, 5.42% of total activity.
    Action
    Investigate the possibility of improving the performance of I/O to the
    online redo log files.
    Rationale
    The average size of writes to the online redo log files was 163 K and
    the average time per write was 68 milliseconds.
    Symptoms That Led to the Finding:
    Wait class "Commit" was consuming significant database time.
    Impact is .28 active sessions, 5.42% of total activity.
    Finding 8: Undo I/O
    Impact is .18 active sessions, 3.53% of total activity.
    Undo I/O was a significant portion (26%) of the total database I/O.
    No recommendations are available.
    Symptoms That Led to the Finding:
    The throughput of the I/O subsystem was significantly lower than
    expected.
    Impact is .08 active sessions, 1.46% of total activity.
    Wait class "User I/O" was consuming significant database time.
    Impact is .7 active sessions, 13.57% of total activity.
    Finding 9: CPU Usage
    Impact is .13 active sessions, 2.57% of total activity.
    Time spent on the CPU by the instance was responsible for a substantial part
    of database time.
    Recommendation 1: SQL Tuning
    Estimated benefit is .13 active sessions, 2.57% of total activity.
    Finding 10: Top SQL By I/O
    Impact is .11 active sessions, 2.21% of total activity.
    Individual SQL statements responsible for significant user I/O wait were
    found.
    Recommendation 1: SQL Tuning
    Estimated benefit is .11 active sessions, 2.22% of total activity.
    Action
    Run SQL Tuning Advisor on the SQL statement with SQL_ID "b3pnc5yctv2z5".
    Related Object
    SQL statement with SQL_ID b3pnc5yctv2z5.
    INSERT INTO ZX_TRANSACTION_LINES_GT( APPLICATION_ID ,ENTITY_CODE
    ,EVENT_CLASS_CODE ,TRX_ID ,TRX_LEVEL_TYPE ,TRX_LINE_ID ,LINE_CLASS
    ,LINE_LEVEL_ACTION ,TRX_LINE_TYPE ,TRX_LINE_DATE
    ,LINE_AMT_INCLUDES_TAX_FLAG ,LINE_AMT ,TRX_LINE_QUANTITY ,UNIT_PRICE
    ,PRODUCT_ID ,PRODUCT_ORG_ID ,UOM_CODE ,PRODUCT_CODE ,SHIP_TO_PARTY_ID
    ,SHIP_FROM_PARTY_ID ,BILL_TO_PARTY_ID ,BILL_FROM_PARTY_ID
    ,SHIP_FROM_PARTY_SITE_ID ,BILL_FROM_PARTY_SITE_ID
    ,SHIP_TO_LOCATION_ID ,SHIP_FROM_LOCATION_ID ,BILL_TO_LOCATION_ID
    ,SHIP_THIRD_PTY_ACCT_ID ,SHIP_THIRD_PTY_ACCT_SITE_ID ,HISTORICAL_FLAG
    ,TRX_LINE_CURRENCY_CODE ,TRX_LINE_CURRENCY_CONV_DATE
    ,TRX_LINE_CURRENCY_CONV_RATE ,TRX_LINE_CURRENCY_CONV_TYPE
    ,TRX_LINE_MAU ,TRX_LINE_PRECISION ,HISTORICAL_TAX_CODE_ID
    ,TRX_BUSINESS_CATEGORY ,PRODUCT_CATEGORY ,PRODUCT_FISC_CLASSIFICATION
    ,LINE_INTENDED_USE ,PRODUCT_TYPE ,USER_DEFINED_FISC_CLASS
    ,ASSESSABLE_VALUE ,INPUT_TAX_CLASSIFICATION_CODE ,ACCOUNT_CCID
    ,BILL_THIRD_PTY_ACCT_ID ,BILL_THIRD_PTY_ACCT_SITE_ID ,TRX_LINE_NUMBER
    ,TRX_LINE_DESCRIPTION ,PRODUCT_DESCRIPTION ,USER_UPD_DET_FACTORS_FLAG
    ,DEFAULTING_ATTRIBUTE1 ) SELECT :B4 ,:B3 ,:B2
    ,PRL.REQUISITION_HEADER_ID ,:B1 ,PRL.REQUISITION_LINE_ID ,'INVOICE'
    ,NVL(PRL.TAX_ATTRIBUTE_UPDATE_CODE,'UPDATE') ,'ITEM'
    ,NVL(PRL.NEED_BY_DATE, SYSDATE) ,'N' ,NVL(PRL.AMOUNT,
    PRL.UNIT_PRICE*PRL.QUANTITY) ,PRL.QUANTITY ,PRL.UNIT_PRICE
    ,PRL.ITEM_ID ,(SELECT FSP.INVENTORY_ORGANIZATION_ID FROM
    FINANCIALS_SYSTEM_PARAMS_ALL FSP WHERE FSP.ORG_ID=PRL.ORG_ID)
    ,(SELECT MUM.UOM_CODE FROM MTL_UNITS_OF_MEASURE MUM WHERE
    MUM.UNIT_OF_MEASURE=PRL.UNIT_MEAS_LOOKUP_CODE) ,MSIB.SEGMENT1
    ,PRL.DESTINATION_ORGANIZATION_ID ,PV.PARTY_ID ,PRH.ORG_ID
    ,PV.PARTY_ID ,PVS.PARTY_SITE_ID ,PVS.PARTY_SITE_ID
    ,PRL.DELIVER_TO_LOCATION_ID ,(SELECT HZPS.LOCATION_ID FROM
    HZ_PARTY_SITES HZPS WHERE HZPS.PARTY_SITE_ID = PVS.PARTY_SITE_ID)
    ,(SELECT LOCATION_ID FROM HR_ALL_ORGANIZATION_UNITS WHERE
    ORGANIZATION_ID=PRH.ORG_ID) ,PRL.VENDOR_ID ,PRL.VENDOR_SITE_ID ,NULL
    ,NVL(PRL.CURRENCY_CODE, :B9 ) ,NVL2(PRL.CURRENCY_CODE, PRL.RATE_DATE,
    SYSDATE) ,NVL2(PRL.CURRENCY_CODE, PRL.RATE, :B8 )
    ,NVL2(PRL.CURRENCY_CODE, PRL.RATE_TYPE, :B7 )
    ,FC.MINIMUM_ACCOUNTABLE_UNIT ,NVL(FC.PRECISION, 2) ,NULL
    ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.TRX_BUSINESS_CATEGORY, NULL),
    NULL ) ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.PRODUCT_CATEGORY, NULL), NULL )
    ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.PRODUCT_FISC_CLASSIFICATION,
    NULL), NULL ) ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.LINE_INTENDED_USE, NULL), NULL )
    ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.PRODUCT_TYPE, NULL), NULL )
    ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.USER_DEFINED_FISC_CLASS, NULL),
    NULL ) ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.ASSESSABLE_VALUE, NULL), NULL )
    ,DECODE(:B6 , 'REQIMPORT', PRL.TAX_NAME,
    DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
    NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.INPUT_TAX_CLASSIFICATION_CODE,
    NULL), NULL ) ) ,NVL((SELECT PRD.CODE_COMBINATION_ID FROM
    PO_REQ_DISTRIBUTIONS_ALL PRD WHERE PRD.REQUISITION_LINE_ID =
    PRL.REQUISITION_LINE_ID AND ROWNUM = 1), MSIB.EXPENSE_ACCOUNT )
    ,PV.VENDOR_ID ,PVS.VENDOR_SITE_ID ,PRL.LINE_NUM ,PRL.ITEM_DESCRIPTION
    ,PRL.ITEM_DESCRIPTION ,(SELECT 'Y' FROM DUAL WHERE :B6 = 'REQIMPORT'
    AND PRL.TAX_NAME IS NOT NULL) ,PRL.DESTINATION_ORGANIZATION_ID FROM
    PO_REQUISITION_HEADERS_ALL PRH, PO_REQUISITION_LINES_ALL PRL,
    ZX_LINES_DET_FACTORS ZXLDET, PO_VENDORS PV, PO_VENDOR_SITES_ALL PVS,
    MTL_SYSTEM_ITEMS_B MSIB, FND_CURRENCIES FC WHERE
    PRH.REQUISITION_HEADER_ID = :B5 AND PRH.REQUISITION_HEADER_ID =
    PRL.REQUISITION_HEADER_ID AND ZXLDET.APPLICATION_ID(+) = :B4 AND
    ZXLDET.ENTITY_CODE(+) = :B3 AND ZXLDET.EVENT_CLASS_CODE(+) = :B2 AND
    ZXLDET.TRX_LEVEL_TYPE(+) = :B1 AND ZXLDET.TRX_LINE_ID(+) =
    PRL.PARENT_REQ_LINE_ID AND PV.VENDOR_ID(+) = PRL.VENDOR_ID AND
    PVS.VENDOR_SITE_ID(+) = PRL.VENDOR_SITE_ID AND
    MSIB.INVENTORY_ITEM_ID(+) = PRL.ITEM_ID AND MSIB.ORGANIZATION_ID(+) =
    PRL.ORG_ID AND FC.CURRENCY_CODE(+) = PRL.CURRENCY_CODE AND
    NVL(PRL.MODIFIED_BY_AGENT_FLAG, 'N') = 'N' AND NVL(PRL.CANCEL_FLAG,
    'N') = 'N' AND NVL(PRL.CLOSED_CODE, 'OPEN') <> 'FINALLY CLOSED' AND
    PRL.LINE_LOCATION_ID IS NULL AND PRL.AT_SOURCING_FLAG IS NULL
    Rationale
    SQL statement with SQL_ID "b3pnc5yctv2z5" was executed 3 times and had
    an average elapsed time of 138 seconds.
    Rationale
    Average time spent in User I/O wait events per execution was 137
    seconds.
    Symptoms That Led to the Finding:
    Wait class "User I/O" was consuming significant database time.
    Impact is .7 active sessions, 13.57% of total activity.
    Additional Information
    Miscellaneous Information
    Wait class "Application" was not consuming significant database time.
    Wait class "Network" was not consuming significant database time.
    Session connect and disconnect calls were not consuming significant database
    time.
    The database's maintenance windows were active during 100% of the analysis
    period.
    Regards
    Athish

    For a few days I am facing a serious performance problem in our Production instance.
    For production issues, please log an SR.
    Was this working before? If yes, have any changes been made recently?
    Do you have the statistics collected up to date?
    Please see these docs.
    AutoInvoice Performance Issue When Processing Tax [ID 1059275.1]
    R12 : System Hangs When Attempting To Save Blanket Release After Applying Patch 11817843 [ID 1333336.1]
    Thanks,
    Hussein

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes. It's urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes. It's urgent.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8) Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9) Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • How should I report forum performance issues?

    The forums rely heavily on the caching features of browsers to improve the speed of page rendering. Performance of these forums should greatly improve after a few pages because more and more of the images, CSS and JavaScript are cached in the browser. As a consequence, when reporting forum performance issues the report should include some information on the state of the browser cache, to determine whether the issue is a browser issue or a server issue. Such detailed information is generally not available from just watching the browser screen, but needs to come from specialized tools such as performance-monitor plugins and recording proxies.
    The preferred report method for performance issues is to use the speed reporting features build into or available as a plugin for a browser for both the page you want to report a problem with and several refence pages in the site. Detailed instructions are listed below separated out for different browsers. If possible, please use Firefox for submitting the report because it provides an export format that can be read back electronically.
    Known performance issues
    The performance issues with any screen with a Rich Text Editor, such as the Reply window and the compose Private Message window, have been acknowledged, and improvements are being implemented.
    Mozilla Firefox (preferred)
    Warning: it is currently not recommended to generate a speed report while logged in. The speed report has enough detail for somebody else to hijack your session and impersonate you on the forums. If you really must report while logged in, make sure you log out of the forums in your browser after generating the speed report and wait at least 4 hours before posting.
    Install the Firebug plugin
    Install the NetExport 0.6 extension for Firebug
    Enable all Firebug panels
    Switch to the "Net" panel in Firebug
    Click on this link
    Export the data from the Firebug Net panel
    Click on this link
    Export the data from the Firebug Net panel
    Browse to the page where you are experiencing the performance problem.
    Export the data from the Firebug Net panel
    Click on this link
    Export the data from the Firebug Net panel
    Click on this link
    Export the data from the Firebug Net panel
    Browse to the page where you are experiencing the performance problem.
    Export the data from the Firebug Net panel
    When you report a performance problem, please attach the 6 exports from the Firebug Net panel and an explanation of how you are experiencing the issues (for instance, how much slower it is than normal), and include a description of your internet connection (dial-up, DSL, cable, etc.) and the country from which you are connecting. If you have non-standard tweaks to your Firefox configuration (such as pipelining enabled) or are running any plugins, please include that information in your report as well.
    Google Chrome
    Open the Developer Tools (Ctrl-Shift-J)
    Navigate to the resources tab
    Enable resource tracking.
    Click on this link
    Export the resource loading data.
    Reset the data by disabling and enabling resource tracking
    Click on this link
    Export the data
    Reset the data by disabling and enabling resource tracking
    Navigate to the page where you experience the performance problem
    Export the data
    Reset the data by disabling and enabling resource tracking
    Click on this link
    Export the data
    Reset the data by disabling and enabling resource tracking
    Click on this link
    Export the data
    Reset the data by disabling and enabling resource tracking
    Navigate to the page where you experience the performance problem
    Export the data
    Since Google Chrome does not have an export format for the resource tracking information, best current practice is to take a screenshot and note the hover details for any resource with a tail that is longer than 25% of the total load time. When you report a performance problem, please attach the screenshots and an explanation of how you are experiencing the issues (for instance, how much slower it is than normal), and include a description of your internet connection (dial-up, DSL, cable, etc.) and the country from which you are connecting.
    Apple Safari
    The Apple Safari Web Inspector has a Resources panel similar to the Resources panel in the Google Chrome developer tools. To get there, follow these steps:
    Show the menu bar.
    Go to preferences
    Go to the Advanced Tab
    Check “Show Develop menu in menu bar”.
    From the Develop menu select “Show Web Inspector”.
    Collecting the performance information and exporting works exactly the same as in Google Chrome. Please refer to the instructions for Google Chrome.
    Microsoft Internet Explorer
    IE does not have native features to analyze web traffic. No plugins have been found that produce the required information (please let us know if we missed any). For now, please reproduce the issue with Firefox, Chrome or Safari.
    Please note that, due to the reliance on JavaScript for the interactive effects, the performance of these forums will be much better on MS IE 8 than on previous versions of MS IE.

    Hi
    It works, check once again...
    regards
    Swami

  • Performance Issue in Oracle EBS

    Hi Group,
    I am working on a performance issue at a customer site; let me explain the behaviour.
    There is one node for the database and another for the application.
    The application server is running all the services.
    The EBS version is 12.1.3 and the database version is 11.1.0.7, with AIX on both servers.
    The customer has added memory to both servers (database and application): initially they had 32 GB, now they have 128 GB.
    Today I increased the memory parameters for the database, and I also increased the number of JVM processes from 1 to 2 for both Forms and OAcore; each JVM is 1024M.
    The behaviour is that when users navigate inside a form and press the down key quickly, the form hangs (reloading and taking 1 or 2 minutes to respond). It is not particular to a specific form; it happens in several forms.
    The gather-statistics job is scheduled every weekend. I am not sure what the problem can be; I have collected a trace of the form and uploaded it to Oracle Support, with no success or advice so far.
    I have also run a ping, and the response time between the servers is below 5 ms.
    I have several activities in mind like:
    - OATM conversion.
    - ASM implementation.
    - Upgrade to 11.2.0.4.
    Has anybody seen this behaviour? Any advice about this problem will be really appreciated.
    Thanks in advance.
    Kind regards,
    Francisco Mtz.

    Hi Bashar, thank you very much for your quick response.
    If both servers are on the same network then the ping should not exceed 2 ms.
    If I remember, I did a ping last Wednesday, and there were some peaks over 5 ms.
    Have you checked the network performance between the clients and the application server?
    Also, I did a ping from the PC to the application and database, and it was responding in less than 1 ms.
    What is the status of the CPU usage on both servers?
    There is no overhead on the CPU side; I tested it (the scrolling freeze) with no users in the application.
    Did this happen after you performed the hardware upgrade?
    Yes, it happened after changing some memory parameters in the JVM and the database.
    Oracle has suggested applying the latest Forms patches according to this note: Doc ID 437878.1.
    Thanks in advance.
    Kind regards,
    Francisco Mtz.

  • Oracle Performance Issue

    Regarding the Oracle performance issue.
    Hardware configuration:
    Configuration 1
    ================
    Sun Fire V880
    32 GB RAM
    14 x 36 GB hard disks
    8 CPUs
    CPU speed 750 MHz
    Software Configuration:
    Oracle 8i
    OS version - Solaris 8
    Our own customized application - Namex
    Configuration 2
    ================
    Intel PIII - 750 MHz
    2 GB RAM
    2 CPUs
    Software configuration
    Oracle 8i
    OS version - Linux 6.2
    Our own customized application - Namex (multi-threaded)
    We installed the software across all the hard disks:
    the OS is installed on 1 hard disk,
    the namex application is installed on 1 hard disk,
    Oracle is installed on 1 hard disk,
    and all tables are split across the other hard disks.
    We are trying to insert user records into an Oracle table. We
    achieve up to 150 records/second on the Sun server, but on the lower
    configuration (configuration 2) our application inserts only up to 100
    records/second.
    We want to improve the insert rate (records per second)
    on the Sun server.
    How should we tune the Oracle parameter values in the init.ora
    file? Our application needs to insert up to 500 records per second,
    but I am not able to achieve this rate.
    init.ora file
    =============
    db_name = "namex"
    instance_name = namex64
    service_names = namex64
    control_files = ("/disk1/oracle64/OraHome1/oradata/Namex64/control01.ctl", "/disk1/oracle64/OraHome1/oradata/namex64/control02.ctl", "/disk1/oracle64/OraHome1/oradata/namex64/control03.ctl")
    open_cursors = 300
    max_enabled_roles = 145
    #db_block_buffers = 20480
    db_block_buffers = 604800
    #shared_pool_size = 419430400
    shared_pool_size = 8000000000
    #log_buffer = 163840000
    log_buffer = 2147467264
    #large_pool_size = 614400
    java_pool_size = 0
    log_checkpoint_interval = 10000
    log_checkpoint_timeout = 1800
    processes = 1014
    # audit_trail = false # if you want auditing
    # timed_statistics = false # if you want timed statistics
    timed_statistics = true # if you want timed statistics
    # max_dump_file_size = 10000 # limit trace file size to 5M each
    # Uncommenting the lines below will cause automatic archiving if archiving has
    # been enabled using ALTER DATABASE ARCHIVELOG.
    # log_archive_start = true
    # log_archive_dest_1 = "location=/disk1/oracle64/OraHome1/admin/namex64/arch"
    # log_archive_format = arch_%t_%s.arc
    #DBCA uses the default database value (30) for max_rollback_segments
    #100 rollback segments (or more) may be required in the future
    #Uncomment the following entry when additional rollback segments are created and made online
    #max_rollback_segments = 500
    # If using private rollback segments, place lines of the following
    # form in each of your instance-specific init.ora files:
    #rollback_segments = ( RBS0, RBS1, RBS2, RBS3, RBS4, RBS5, RBS6, RBS7, RBS8, RBS9, RBS10, RBS11, RBS12, RBS13, RBS14, RBS15, RBS16, RBS17, RBS18, RBS19, RBS20, RBS21, RBS22, RBS23, RBS24, RBS25, RBS26, RBS27, RBS28 )
    # Global Naming -- enforce that a dblink has same name as the db it connects to
    # global_names = false
    # Uncomment the following line if you wish to enable the Oracle Trace product
    # to trace server activity. This enables scheduling of server collections
    # from the Oracle Enterprise Manager Console.
    # Also, if the oracle_trace_collection_name parameter is non-null,
    # every session will write to the named collection, as well as enabling you
    # to schedule future collections from the console.
    # oracle_trace_enable = true
    # define directories to store trace and alert files
    background_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/bdump
    core_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/cdump
    #Uncomment this parameter to enable resource management for your database.
    #The SYSTEM_PLAN is provided by default with the database.
    #Change the plan name if you have created your own resource plan.# resource_manager_plan = system_plan
    user_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/udump
    db_block_size = 16384
    remote_login_passwordfile = exclusive
    os_authent_prefix = ""
    compatible = "8.0.5"
    #sort_area_size = 65536
    sort_area_size = 1024000000
    sort_area_retained_size = 65536
    DB_WRITER_PROCESSES=4
    How can I improve insert performance on the Oracle server?
    Please guide me regarding this issue.
    If anyone wants more info, please let me know.
    Best regards,
    Senthilkumar

    Are you sure it is not an application constraint, i.e. that the application can't handle that much data per second (application locks, threads)?
    Have you tried writing a simple test program that inserts the same predefined data your application inserts, only changing the keys?
    Then compare the values from the 1st and the 2nd configuration.
    Did you check the way your application communicates with Oracle? If it is TCP/IP (even on the local machine), then this is your main problem.
    And one more thing: do you know if your application is able to run the load (the inserts) on different threads, i.e. in parallel? If it is not, you won't be able to push the speed higher, because your constraint is the speed of a single CPU. Consider running several processes that load the data, as sketched below.
    We had the same problem on AIX machines with 4 CPUs. Monitoring the machine, we found that only 25% (1 CPU) was in use. We had to run 4 processes to push the speed up. Check your system's overall load while running the inserts.
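    Not the poster's code - just a rough Java/JDBC sketch of the parallel-loading idea, with an invented JDBC URL, credentials, table and column names; each worker gets its own connection, uses a disjoint key range, and commits in batches to cut round trips:

    import java.sql.*;
    import java.util.concurrent.*;

    public class ParallelLoader {
        static final String URL = "jdbc:oracle:thin:@dbhost:1521:namex64"; // placeholder

        public static void main(String[] args) throws Exception {
            int workers = 4; // roughly one per CPU, as suggested above
            ExecutorService pool = Executors.newFixedThreadPool(workers);
            for (int w = 0; w < workers; w++) {
                final int worker = w;
                pool.submit(() -> {
                    try (Connection con = DriverManager.getConnection(URL, "user", "pass")) {
                        con.setAutoCommit(false); // commit in chunks, not per row
                        try (PreparedStatement ps = con.prepareStatement(
                                "INSERT INTO user_data (id, payload) VALUES (?, ?)")) {
                            for (int i = 1; i <= 100_000; i++) {
                                ps.setInt(1, worker * 1_000_000 + i); // disjoint key ranges
                                ps.setString(2, "row " + i);
                                ps.addBatch();
                                if (i % 500 == 0) { // send and commit in batches
                                    ps.executeBatch();
                                    con.commit();
                                }
                            }
                            ps.executeBatch();
                            con.commit();
                        }
                    } catch (SQLException e) {
                        e.printStackTrace();
                    }
                    return null;
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }
    }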
    log_checkpoint_interval = 10000
    Check whether this value is appropriate. Maybe you should set it to 0 (infinite). This disables checkpoints based on the amount of redo written; checkpoints will then occur only on log switch.
    How many redo log files per redo group do you have? What is their size? Are they on different disks? How much redo data is generated by a single inserted record?
    Hope I helped at least a little.

  • ERP Sales Order : Performance issues with Product Proposal

    Hi
    we are working on a CRM 2007 solution and are facing serious performance issues with the ERP Sales Order functionality provided in the ICWC of this version.
    In our development we add items to the ERP cart as soon as the user clicks the 'New Sales Order' button on the sales order screen. We get the items by a very simple and optimized call to the ERP system and then add these entities to the item cart (the item collection, in simple terms).
    Adding 10 items takes the application 10 seconds, which is far too long for just 10 items.
    Can you please point me to any Notes or alternative solutions to resolve this issue?
    Regards
    Ajitabh

    Hi Ajitabh,
    Please apply the following SAP notes:
    1061423 - Interaction Center ERP Order Performance improvement
    1262277 - Performance: CRM value help causes dumps in ERP
    1292817 - Performance: Reduce RFC calls during creation of ERP order.
    1319885 - ERP sales order search with external reference
    1326527 - Reducing number of RFC calls in IC ERP Sales Order
    I hope it helps!
    Regards,
    Gabriel Santana

  • Performance issues related to logging (ForceSingleTraceFile option)

    Dear SDN members,
    I have a question about logging.
    I would like to place the logs/traces for every application in a different log file. To do this, you have to set the ForceSingleTraceFile option to NO (in the Config Tool).
    But a SAP presentation named 'SAP Web Application Server 6.40: SAP Logging and Tracing API' states:
    - All traces by default go to the default trace file.
         - Good for performance
              - On production systems, this is a must!!!
    - Hard to find your trace messages
    - Solution: Configure development systems to pipe traces and logs for applications to their own specific trace file
    But I also want the logs/traces in separate files at our customers' sites (production systems). So my question is:
    What performance issues do we face if we set the ForceSingleTraceFile option to NO at our customers' sites?
    and
    If we set ForceSingleTraceFile to NO, will the logs/traces of the SAP applications also go to different files? If so, I can imagine it will be difficult to find the logs of the different SAP applications.
    I hope that someone can clarify the working of the ForceSingleTraceFile setting.
    Kind regards,
    Marinus Geuze

    Dear Marinus,
    The performance issues with extensive logging are related to high memory usage (for the concatenation/generation of the messages that are written to the log files) and, as a result, increased garbage-collection frequency, as well as high disk I/O and CPU overhead for the actual logging.
    Writing to the same trace file can become a bottleneck if logging is extensive.
    Anyway, it is not really about whether you write the logs to the default trace file or to a separate location. I believe the recommendation in the documentation is mainly about using the standard logging APIs of the SAP Java server, because they are well optimized.
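    To illustrate the concatenation cost, here is a generic Java sketch using java.util.logging as a stand-in (this is not the SAP logging API; the class and messages are invented for the example). The first call builds the message string even when the level is disabled, which is exactly the memory/garbage-collection overhead mentioned above:

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class LogGuard {
        private static final Logger LOG = Logger.getLogger(LogGuard.class.getName());

        void process(Object item, long elapsedMillis) {
            // Wasteful: the concatenation happens unconditionally.
            LOG.fine("processed " + item + " in " + elapsedMillis + " ms");

            // Cheaper: guard the call so the message is only built when
            // FINE is actually enabled ...
            if (LOG.isLoggable(Level.FINE)) {
                LOG.fine("processed " + item + " in " + elapsedMillis + " ms");
            }
            // ... or pass a Supplier, which is evaluated lazily.
            LOG.fine(() -> "processed " + item + " in " + elapsedMillis + " ms");
        }
    }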
    Best regards,
    Sylvia

  • CS3 Performance Issues - Productivity Crippled - S.O.S.

    I was having performance issues and repeated crashes with AE CS3 on Win XP, so I talked my boss into upgrading me to CS5 and Win 7. All was well until we hired a new guy to help me with my video workload. He has inherited my old software and there's no money for another upgrade, so we're going to have to figure this out.
    He's running CS3 Master Collection on a Dell PC running Windows XP (Intel Core 2 Duo 2.33 GHz, 3.25 GB RAM). The graphics card is the ATI Radeon HD 2400 Pro, which came stock with the machine. Specs on that card here: http://reviews.cnet.com/graphics-cards/ati-radeon-hd-2400/4507-8902_7-32763888.html?tag=specs. Pretty wimpy, I know.
    Anyway, my new coworker has been experiencing at least as much frustration as I was, with frequent crashes, a fickle RAM Preview and bad renders. His machine has more RAM and a faster processor than mine had when I ran CS3, but his performance seems even worse. Since CS3 won't handle AVCHD files, I've been converting our HD camera footage to 720p MPEG-2 (with Adobe Media Encoder) for the projects he's assigned. I suppose that could have something to do with it, but I can't see why CS3 wouldn't handle an MPEG-2 file generated from CS5. Anyway, I asked him to write up a short list of the problems he's experiencing and he gave me this ...
    - rendered movies are often corrupted with red frames that flash in spots
    - when i try to render an MPEG2 at full res it will give me an error message that says:
    After Effects: AEGP Plugin Media IO Plugin:
    There is a mismatch between Output Module settings and Transcode Settings. Please verify your settings and try again.
    Property Data Invalid!
    MediaIO2 error: 0x400e0004
    Frame dimensions out of bounds
    (5027 :: 12)
    - when i try to render a RAM preview, often it does not render the complete work area. it takes several attempts to preview the entire selection
    - occasionally, when i bring in new media it gives a message saying 'media pending' and i will have to shut the program down and re-import the media
    - it will often (1-3 times per hour) crash unexpectedly and give a message saying 'After Effects has crashed'
    - when i try to render a RAM preview an error message pops up saying it needs at least 2 frames to render when i clearly have more than that selected
    It's my hope that someone here will recognize a common thread between these various issues and be able to suggest a silver bullet that will fix it all. I realize that's not likely. Research to date has yielded glimpses of possible solutions involving cache and scratch discs, but there are few specifics and I'm not sure exactly what adjustments to make. I also have a spare NVIDIA GeForce 6800 graphics card from my own computer at home that I could donate to this office if it would help.
    I'm about to pass these issues on to our Help Desk and let them deal with it, but I'd love to be able to at least point them in the right direction, since they mostly deal with standard office software and won't have much experience troubleshooting multimedia applications.  I believe they also have the option to call Adobe for tech support over the phone, but I'd like them to have a clue if they do.  Can anyone suggest what settings, drivers or hardware might be a good place to start to get my man's system running a little more smoothly?  I'd really appreciate some insight before we start poking around blindly and I offer my thanks in advance for any forthcoming wisdom.
    Thanks!
    P.S. For what it's worth, my copy of CS5 is running like a champ on Windows 7 with the 64-bit OS. My machine has an ATI Radeon HD 2400 XT for a card, which can't be much better than the "Pro" version in my coworker's machine, right?

    No legal issues, both my CS5 and his CS3 are bought and paid for. However, if After Effects CS3 is incapable of rendering a decent-looking file compressed to a reasonable file size, then I'm not sure how valuable it is to us anyway. It seems unbelievable to me that one of the most popular industry-standard video apps isn't designed to render anything more than draft-quality files for client review. Even if he renders an uncompressed file from AE, CS3 didn't ship with Adobe Media Encoder as a separate app, so what can he do with it without resorting to third-party software? I don't get it, but I'm clearly no expert on the subject.
    So, things being what they are, what file format would you suggest I convert the AVCHD files to in order for my CS3-burdened coworker to be able to work with them? We are a Windows shop, so I don't know if Dave's QuickTime/PNG suggestion will work for us or not. You've always given good advice to me in the past, so if you have an alternate suggestion I'm eager to try it out. Just go easy on me with the jargon, because I'm primarily a print designer and my video skills at this point are (obviously) intermediate at best!
    In any case, I really appreciate both you and Dave taking the time to respond to my post.  This forum continues to be the best resource I have for solving problems and augmenting my understanding of these programs.  My thanks!

  • DB Performance issue

    Hi DB Gurus,
    Our application inserts 60-70K records into a table in each transaction. When multiple sessions are open on this table, users face performance issues; the application response becomes too slow.
    Regarding this table:
    1.Size = 56424 Mbytes!
    2.Count = 188,858,094 rows!
    3.Years of data stored = 4 years
    4.Average growth = 10 million records per month, 120 million each year! (has grown 60 million since end of June 2007)
    5.Storage params = 110 extents, Initial=40960, Next=524288000, Min Extents=1, Max Extents=505
    6.There are 14 indexes on this table all of which are in use.
    7. Data is inserted through bulk insert
    8. DB: Oracle 10g
    The sheer size of this table (56 GB) and its rate of growth may be the culprits behind the performance issue. But to ascertain that, we need to dig out more facts so that we can decide conclusively how to nail this issue.
    So my questions are:
    1. What other facts can be collected to find out the root cause of bad performance?
    2. Looking at the given statistics, is there a way to resolve the performance issue - by using table partitioning or archiving, or is there some other, better way?
    We've already thought of dropping some indexes, but it looks difficult since they are used in reports based on this table (along with other tables).
    3. Any guess what else can be causing this issue?
    4. How many records per session can be inserted in a table? Is there any limitation?
    Thanks in advance!!

    Run STATSPACK and check what it says are the issues. Try and find the particular INSERT statement in the list of all SQL. Look at all the sections of the report, including block contention, which may show you are waiting for data blocks or index blocks, etc, or even things like latch contention too. Make sure you run it when the INSERT is happening during one of your busy periods.
    Given that you are using Oracle 10g, I assume you are using all the automatic settings now:
    o Local Tablespace Management
    o Automatic Segment Space Management
    o Automatic Undo Management
    If not, you should be. Prior to all this, Oracle always inserted into the last block in a table, which could become a bottleneck point. And space allocation of new blocks was also a problem. When these settings were introduced it alleviated most of these problems, and meant that Oracle could scale far better on such INSERT intensive workloads. If you are not using these for some reason or other, then you need to look at the number of FREELISTS you have on the table, and the setting of INITRANS.
    Also, how many columns does this table have? And how big is an average row. And what is your block size? You can get these from the data dictionary:
    select count (*) from user_tab_columns where table_name = '<tablename>' ;
    select avg_row_len from user_tables where table_name = '<tablename>' ;
    show parameter db_block_size
    Replace <tablename> with the name of your table, in uppercase.
    I ask because a very large row in a small data block will always fill the block quickly and cause new blocks to be allocated. If so, you may just have to live with this.
    And I would be suspicious about all 14 indexes being needed. Are they all single-column indexes, or do you have any multi-column indexes? Do any of them share the same leading columns? Again, if you need all 14 indexes, then you must suffer the overhead of maintaining these indexes. But unless you have something like 50 columns in this table, I would guess that there is some overlap between these indexes. (A quick way to check for overlap is sketched below.)
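    Purely as an illustration, here is a small Java/JDBC sketch (the connection details are placeholders) that lists every index on a table with its columns in position order, via the standard Oracle dictionary view USER_IND_COLUMNS, so overlapping leading columns are easy to spot:

    import java.sql.*;

    public class IndexOverlap {
        public static void main(String[] args) throws SQLException {
            String table = args[0].toUpperCase(); // table name passed on the command line
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@dbhost:1521:orcl", "user", "pass"); // placeholders
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT index_name, column_position, column_name " +
                     "FROM user_ind_columns WHERE table_name = ? " +
                     "ORDER BY index_name, column_position")) {
                ps.setString(1, table);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.printf("%-30s %2d %s%n",
                            rs.getString("INDEX_NAME"),
                            rs.getInt("COLUMN_POSITION"),
                            rs.getString("COLUMN_NAME"));
                    }
                }
            }
        }
    }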
    John

  • Aperture performance issues.

    Dear members:
    After some disappointments with the latest version of Bridge (CS3) I decided to start working with Aperture as I found it offered some interesting tools for viewing and selecting images.
    First I worked on some test images and everything went well. However, last night I did my first import of "real world" images and experienced severe performance issues.
    I imported one folder containing 163 photos to two different locations - the folder and the images were the same but imported into two separate locations in Aperture as I was trying to create the ideal file structure for me.
    These are the questions and/or problems I have.
    1. This IS NOT a major problem. As much as I tried to import photos into an album or folder I couldn't do it. I don't like the project concept and prefer to have my images placed into folders or albums. My iPhoto library was imported by Aperture using this structure. The iPhoto library is a folder with albums as subdivisions as they were set up in iPhoto originally. However, this doesn't seem to be working as I import photos from other locations.
    Q: How can I import photos into folders or albums and completely avoid the projects concept and icons ?
    2. This IS a major problem. Performance was very poor. I imported the folders last night and waited for approximately 30 min until I decided to turn my computer off. Aperture gave me a message stating that it was still generating previews and asking me if I wanted to quit. I pressed the OK button and turned the computer off. This morning I launched Aperture again and it went back to the spinning wheel on both projects. It took approximately 45 min until the spinning wheels were no longer turning.
    This is a problem for me as I have a library with approximately 15,000 - 20,000 images. The ones I imported last night were CR2 generated by a Canon 1Ds MK II (17 MB each). I can only imagine how long it would have taken had I chosen to import the 120+ MB TIFF images I also have in my library from slide scans.
    My Aperture preferences have been set to Preview Quality = 12 and Limit Preview Size = Don't Limit. I have it set this way because I don't wish to have reduced-size previews, so that (1) they display with the highest possible quality, as I mostly use the full-screen mode for viewing and selecting images, and (2) in case I upgrade to a larger monitor in the near future (I have a 23" Cinema Display but am planning to upgrade to a 30"), the previews will still work with that monitor.
    Is this performance typical of Aperture? I understand my camera is a professional camera that generates large images, but isn't Aperture supposed to be a professional application aimed at professional photographers? And what about those who work with 39 MB images from a Hasselblad, or with scanned 120+ MB slide images?
    Is there something obvious I have forgotten to look at or set up in Aperture?
    Thank you in advance,
    Joseph Chamberlain

    Steve:
    Thank you very much for your reply to my post and for your suggestions. Some comments about my experience appear below. I am grateful for your help and don't wish in any way to discuss what you recommend below. I just wish to share my view of this issue and also to try to find the best answers for my problem.
    For 1, use File > Import > Folders Into A Project. That will retain your folder
    structure using brown folders and albums.
    See:
    http://www.bagelturf.com/aparticles/library/fivesimple/index.html and
    http://www.bagelturf.com/aparticles/library/brown/index.html and
    http://www.bagelturf.com/aparticles/library/libinadv/index.html
    A. You can't. Projects are the container for everything in Aperture. No
    projects, no images. So just live with them and subvert them any way you
    like. I don't have "projects" so I just use months, vacations, events, or
    whatever keeps my image collections a reasonable size.
    As a user I would like to have control over my own filing structure. This works quite well in iPhoto, and I don't understand why Aperture chose to adopt this less flexible file structure. I also noticed that the imported iPhoto library appears in Aperture inside a folder with multiple albums. Since Aperture can do this for iPhoto, I find it hard to understand why it can't do it for other imported images.
    2. Turn off previews and delete the ones you have. When you find you need them, use them selectively:
    http://www.bagelturf.com/aparticles/previews/pwho/index.html
    As stated in my previous post I always (no exception) use the full screen mode for viewing my images which is similar to a slide show. So according to the web page you reference above I would fall under the category of users that need previews.
    You don't need high res previews. Aperture already generates thumbnails
    for you.
    General speed tips:
    * Get the best video card with the most RAM you can afford
    I can't. My computer was purchased a little more than 2 years ago, and although it is fairly new, Apple no longer offers parts for it. My video card is an ATI Radeon 9600 Pro with 64 MB of VRAM installed. I have contacted Apple about this issue and they tell me there is nothing they can do. I have also contacted both ATI and NVIDIA, and both have discontinued the only two cards that would work in my system (X800 XT Mac Edition and GeForce 6800, respectively).
    * Smaller screens are faster than larger screens
    My screen is 23" which I would consider to be a medium size screen by today's standards. However, isn't the purpose of working with Aperture to be able to develop a professional workflow ? And don't most professionals like to use large screens to view their work ?
    * Avoid H&S adjustments until all the others are done
    * Make sure you have sufficient RAM (2G minimum, 3G on a Mac Pro)
    My system has 2.5 GB RAM installed. It has been suggested to me that I should add another 1 or 2 GB of RAM, as it would improve performance significantly. I have no problem doing that and would welcome that solution if I knew for a fact it was going to address my issues. However, I have already invested too much in hardware and software while still finding myself struggling with the issues I have described. Do you think the additional RAM would solve the problem?
    * Don't use previews unless you need them
    Based on what I have read on the pages you referenced it seems to me I am one of those users who needs previews.
    * Keep projects small. Use blue folders to group projects
    My current filing structure is simple - I have four folders, each with subfolders containing on average 200 to 1000 images. Some have as few as 1 image and some have 1000, but the majority fall in the 300 to 400 image range.
    * Rebuild the database once in a while
    * Quit other apps if memory is restrictive
    It seems in this case that the RAM upgrade I mentioned above would be helpful. Would it allow me to run other applications while also running Aperture, without any noticeable performance degradation?
    To a great extent you have to rethink your workflow once you use Aperture.
    Many people do a lot of unnecessary things because they are coming from
    an environment that forced them to. Start from scratch and ask yourself
    why you do everything you do. Much of the effort you will find is wasted
    because Aperture either does it for you or makes it unnecessary.
    I am trying to simplify my workflow as much as I can, but not at the expense of quality. Bridge CS2 did a very good job for me. In many ways it was the perfect application, although it didn't have many of the great features I find in Aperture for reviewing and selecting images. First, it was simple - all you had to do was create your own file structure and then point Bridge to the folders, and it would create its own previews. Second, it was fast - this process happened a lot faster compared to Aperture and Bridge CS3. Third, it was high quality - the previews generated were high quality and could be seen at amazing resolution in slide-show viewing mode on my 23" screen. My upgrade to Bridge CS3 was disastrous, as (1) it has many bugs Adobe hasn't taken the time to fix, (2) it is slow on average machines, requiring the latest hardware to run efficiently, which is unrealistic for most consumers, and (3) the previews generated are soft and appear pixelated and of poor quality in slide-show view.
    I am going back to Aperture after a very disappointing start: I was one of the very first to purchase the software as soon as it was introduced, only to be frustrated by all of its bugs and design flaws. Aperture has one of the best interfaces I have seen on any imaging application and I would really like to use it, but after this new attempt and the barriers I have encountered, I am not sure I can.
    Joseph Chamberlain
