Performance Problems on G5 Dual 2.7

Hi!
I am experiencing serious performance problems within Final Cut Pro.
When I am cutting a single stream of SD DV, even with the audio turned off, no composites, no nothing, and I play back, Final Cut pops up a warning that frames have been skipped and that the hard drive and/or the system seems to be too slow... What the...?
Isn't my system supposed to be capable of playing back HD in real time? And it struggles with one simple layer of DV footage?
The weird thing is that I experience similar problems within Logic Express! With just one audio track and maybe one other instrument playing, Logic Express tells me that my audio system is too slow...
I had one idea about what could cause these problems: I have my internal hard drive split into three partitions, with the first one holding the system software and all applications, and the second one holding all the data files of Final Cut and Logic AND all the projects that I am working on... Could this be the problem?

The throughput of a hard drive on the inner tracks is only half what it is on the outer tracks. When you partition a hard drive, the second partition gets the inner 50% of the tracks on the platters, so throughput is significantly degraded from the first partition.
You should listen to Randy. He speaks the truth. Buy and install a second drive, back up any needed files and folders from your first drive (both partitions), then re-partition the 1st drive back to one partition as you do an Erase & Install of the OS, and reinstall all your apps fresh and new.
You'll be glad you did this. Really.
BTW, how much RAM do you have?

Similar Messages

  • JDeveloper dual core processor performance problem

    I have a dual core 2.4 GHz processor and 2 GB of RAM and I'm running JDeveloper 9.0.5.2, and my performance is terrible. Other developers in the company have non-dual-core processors and they can start our application in debug mode, using the JDeveloper embedded OC4J, in 10 seconds, where it takes me 4 min!!!!! It is a Struts/EJB web application. Is there anything I can do to help in debug mode??? cheers.
    Murray

    Hi Bernard,
    Which version of McAfee are you using?
    On my (personal) laptop, I'm using McAfee VirusScan 9.0.10. I don't frequently run JDeveloper on this laptop, but when I do, it's not experiencing significant startup delays (it's a very low power machine: PIII 650, 512MB).
    McAfee VirusScan seems to have very few configuration options (noticeably different from Norton, which I use on my corporate desktop machine). I specifically remember changing the "File Types to Scan" option to "Program files and documents only". You can get to this by right clicking the "M" notification area icon, VirusScan->Options menu, Advanced button on the ActiveShield page.
    In Norton, I think I have it configured so that it only scans files on write rather than on read. I also exclude directories which contain JDeveloper installs or other large Java apps (although scanning only on write eliminates most of the performance problems anyway and still leaves your system reasonably secure).
    The easiest way to convince your MIS dept that the virus checker is the source of your problems might be to ask them to allow you to turn it off in order to test the difference it makes to performance. It's a reasonable request to make if you're trying to eliminate possible causes for the slowdown (from the description you gave, the AV upgrade is the first place I'd start looking).
    If the virus checker is the source of your problems, you'll probably be seeing massive slowness in most large Java applications that have a large number of JARs on their classpath.
    Thanks,
    Brian

  • Performance Problems - CPU

    Hi all,
    I'm having some performance problems. I have generated an AWR report covering one day and I have seen the following things:
    Top 5 Timed Events
    Event                      Waits       Time (s)   Avg wait (ms)   % Total Call Time   Wait Class
    CPU time                                 50,318                    41.7
    db file sequential read    6,688,472     32,711         5          27.1                User I/O
    Backup: sbtwrite2          1,068,309      7,903         7           6.6                Administrative
    db file scattered read     1,012,065      6,999         7           5.8                User I/O
    PX Deq Credit: send blkd     231,401      4,989        22           4.1                Other
    Operating System Statistics DB/Inst: CAPDB14P/capdb14p1 Snaps: 15710-15778
    Statistic Total
    AVG_BUSY_TIME 3,221,704
    AVG_IDLE_TIME 4,923,831
    AVG_IOWAIT_TIME 2,302,776
    AVG_SYS_TIME 537,429
    AVG_USER_TIME 2,682,900
    BUSY_TIME 6,446,121
    IDLE_TIME 9,850,381
    IOWAIT_TIME 4,608,322
    SYS_TIME 1,077,598
    USER_TIME 5,368,523
    LOAD 0
    OS_CPU_WAIT_TIME 1,999,898,469,700
    RSRC_MGR_CPU_WAIT_TIME 0
    VM_IN_BYTES 12,201,893,888
    VM_OUT_BYTES 476,655,616
    PHYSICAL_MEMORY_BYTES 8,568,512,512
    NUM_CPUS 2
    NUM_CPU_SOCKETS 2
    ###########################
    I think that we are having CPU problems here !!
    All my memory caches are good, 99% hit.
    Anybody agree with me???
    Tks,
    Paulo
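
    A quick back-of-the-envelope check on the OS statistics above (my own arithmetic, not from the thread) can be done straight from the listed values; BUSY_TIME, IDLE_TIME and IOWAIT_TIME are cumulative across both CPUs for the snapshot interval:
    -- rough utilisation derived from the Operating System Statistics section
    SELECT ROUND(6446121 / (6446121 + 9850381) * 100, 1) AS pct_cpu_busy,
           ROUND(4608322 / (6446121 + 9850381) * 100, 1) AS pct_io_wait
    FROM   dual;
    -- pct_cpu_busy comes out around 39.6 and pct_io_wait around 28.3, so the host
    -- is not CPU-saturated on average; the large "CPU time" line in the Top 5 sits
    -- alongside a lot of single-block I/O ("db file sequential read").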

    I have problems with some queries that have another wait event related to RAC.
    "gc cs multi block request" is taking a lot of time on some queries. These queries run very fast on another database that isn't a RAC database.
    Example:
    1 - The tables have the same number of rows!!!!!
    2 - Both tables and indexes are analyzed using the same tool (DBMS_STATS)
    ####RAC DATABASE####
    SELECT 1 from dual
    WHERE NOT EXISTS (SELECT 1
    FROM mensalidade a
    WHERE data_vencimento >= CHAR_TO_DATE('20070201'));
    ----Explain
    SELECT STATEMENT, GOAL = ALL_ROWS               4     1     
    FILTER                         
    FAST DUAL               2     1     
    PX COORDINATOR FORCED SERIAL                         
    PX SEND QC (RANDOM)     SYS     :TQ10000     2     1     7
    PX BLOCK ITERATOR               2     1     7
    INDEX FAST FULL SCAN     BRCAPDB2     IMENSALIDADE1     2     1     7
    ----It takes more than 500 seconds to run
    ####STANDALONE DATABASE####
    SELECT 1 from dual
    WHERE NOT EXISTS (SELECT 1
    FROM mensalidade a
    WHERE data_vencimento >= CHAR_TO_DATE('20070201'));
    ----Explain
    SELECT STATEMENT, GOAL = ALL_ROWS               4     1     
    FILTER                         
    FAST DUAL               2     1     
    PX COORDINATOR FORCED SERIAL                         
    PX SEND QC (RANDOM)     SYS     :TQ10000     2     2     16
    PX BLOCK ITERATOR               2     2     16
    TABLE ACCESS FULL     BRCAPDB2     MENSALIDADE     2     2     16
    ----It takes 0.1 seconds to run
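
    Not part of the original thread, but one way to see which statements are actually burning time on those multi-block global cache waits is to query ASH (this needs the Diagnostics Pack license, and the LIKE pattern is used so the exact event name does not have to be guessed):
    -- which SQL accumulates samples on the gc ... multi block request events
    SELECT sql_id, event, COUNT(*) AS ash_samples
    FROM   v$active_session_history
    WHERE  event LIKE 'gc%multi block request'
    AND    sample_time > SYSDATE - 1
    GROUP  BY sql_id, event
    ORDER  BY ash_samples DESC;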

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype)
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500));
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
      for $r in Root
      return
        <Root>
          <Id>123456789</Id>
          {for $e in $r/Element
           return
            <Element>
              <Subelement1>
                {$e/Subelement1/Code}
                <Description>
                  {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
                </Description>
              </Subelement1>
              <Subelement2>
                {$e/Subelement2/Code}
                <Description>
                  {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
                </Description>
              </Subelement2>
              <Subelement3>
                {$e/Subelement3/Code}
                <Description>
                  {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
                </Description>
              </Subelement3>
            </Element>
          }
        </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
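    For what it's worth, a rough sketch of the "join the code table in plain SQL" idea: shred the record with XMLTABLE so CODES can be joined as an ordinary table (hash-join friendly), then reassemble the XML with the SQL/XML publishing functions. It uses the example RECORDS/CODES tables from this post, only handles Subelement1, and hard-codes the Id to keep it short, so treat it as an illustration rather than a tested replacement for the XQuery version:
    -- shred Element nodes, join CODES relationally, re-aggregate the result
    SELECT XMLELEMENT("Root",
             XMLELEMENT("Id", '123456789'),            -- hard-coded only for brevity
             XMLAGG(
               XMLELEMENT("Element",
                 XMLELEMENT("Subelement1",
                   XMLELEMENT("Code", x.code1),
                   XMLELEMENT("Description", c.description))))) AS xml_with_descriptions
    FROM   records r,
           XMLTABLE('/Root/Element'
                    PASSING r.xmlrec
                    COLUMNS code1 VARCHAR2(4) PATH 'Subelement1/Code') x,
           codes c
    WHERE  r.ssn  = '10000'
    AND    c.code = x.code1;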

  • LR3 "Extra Processing in Develop" Performance Problem

    I have been investigating a specific LR3 performance problem.  It may explain a small subset of the problems people have reported in the "Why is LR3 So Slow?" thread.   I'm starting this thread to focus on this particular problem.  I hope others will confirm/refute/refine my findings.
    The Problem
    In Develop, when I make an adjustment, normally the following happens: The CPU usage (as shown in Activity Monitor's bar graph) jumps to between 50 and 75% for all four cores, the updated image appears, and the CPU usage settles back down.  This all happens in less than half a second.  Note: this is with the image at the Fit size.  However, sometimes I instead get the following after an adjustment: the CPU usage jumps to 50 to 75% for all four cores and the updated image appears as usual, however, instead of settling back down, the CPU usage jumps up to 90 to 100% for all cores and stays there for 3 to 5 seconds before settling down. Thus it appears that LR is doing some kind of "extra processing" since a lot of computation is happening AFTER the updated image has already appeared.  I will refer to this problem as "EP".  Obviously, when you are getting EP, editing in Develop becomes very balky.
    Dependency on ratio between image size and displayed size
    It appears that EP only happens when the displayed size of the image (in Fit zoom level and perhaps also Fill zoom level) is above a certain percentage of the actual image size (as currently cropped).  Evidence: When editing full 21MP 5D2 images, I don't experience EP.  If I crop the 5D2 image fairly significantly, then I can get EP.  When editing 10MP images from my Canon S90, I usually get EP for landscape orientation pictures but not for portrait orientation pictures (since in Fit mode, landscape images display at a higher zoom level than portrait images).  If I am getting EP, I can eliminate it by sufficiently reducing the size that LR is displaying the image by resizing the LR window smaller, opening additional panels (I normally edit with only the right panel open), displaying the toolbar, etc.  It appears that EP is enabled when the displayed image is about 50% or larger w.r.t. the actual image (as currently cropped).  For example, EP becomes enabled when a 3648 pixel wide S90 image is displayed at least 17 and 7/8 inches wide on my 100 ppi monitor (i.e. about 1787 pixels).
    Dependency on HOW an adjustment is invoked
    Even when the displayed image size is large enough w.r.t. the actual image size to enable EP, whether you get it on a given adjustment depends on how you invoke it:
    - If you CLICK (i.e. press the mouse button down and quickly release it) on the track of one of the sliders (a technique I use often to make big jumps), EP will happen.
    - If you press the mouse button down on a slider handle, drag it to a new position, and quickly release the mouse button, EP will happen.
    - If you press the mouse button down on a slider handle, drag it to a new position, but continue to hold the mouse button down until the displayed image is updated, EP does NOT happen (either before or after you then release the mouse button).
    - If you highlight the numeric field at the end of a slider and use the arrow keys (possibly along with Shift) to increment or decrement the value, EP does NOT happen.
    - EP will happen if you resize the LR window such that the displayed image size is above the threshold.  (In fact, I determined the threshold by making a series of window width increases until I saw EP indicated by the CPU bar graphs.)
    - EP can happen with local adjustment brush applications, but as with the sliders, it depends on HOW you perform the brush stroke.  Single click and drags with immediate mouse release cause EP, drags with delayed mouse button release don't.
    - Clicking an earlier History state causes EP
    - More exploration could be done.  For example, I haven't looked at Graduated Filter and Spot Removal adjustments.
    My theory of what's happening
    With LR2, my understanding is that in Develop mode when the displayed image is below 1:1 zoom level, after an adjustment is invoked, LR calculates the new version of the image to display using a fast, simplified algorithm that doesn't include the more computationally intensive adjustments like Sharpening and Noise Reduction (and perhaps works on a lower rez version of the image with multiple sensels binned together?).  It appears that in conditions described above, LR3 calculates the initial, fast image update and then goes on to do the full update of the image, including the computationally intensive adjustments.  Evidence:   setting Sharpening Amount and Luminance and Color Noise Reduction to zero eliminates EP (or reduces the amount of time it takes to be barely noticeable).  I'm not sure whether the displayed image is updated with the results of the extra processing.  I think the answer is Yes since when I tried an adjustment of changing Sharpening Amount from 0 to 90, the initial update of the displayed image showed sharpening but after the EP, the displayed image was updated again to show somewhat different sharpening. Perhaps Adobe felt that it would be useful to see the more accurate version of the image when it is at or above 50% zoom.  Maybe the UI is supposed to cancel the EP if you start to make another adjustment before it has completed but the canceling doesn't happen unless you invoke the adjustment in one of the ways described above that doesn't cause EP.  
    Misc
    - EP doesn't seem to happen for Process 2003
    - As others have mentioned, I'm surprised that LR (both version 2 and 3) in 64bit mode doesn't use more available RAM.  I don't think I've seen LR go above 4GB of virtual memory or above 3GB "Real Memory" (as reported by Activity Monitor) even though I have several GB free.
    - It should be obvious from the above that if you experience EP, there are workarounds: reduce the size of the displayed image (e.g. by window resizing), invoke adjustments in ways that don't cause EP, turn off Sharpening and Noise Reduction until the end of editing an image.
    System specs
    First generation Intel Mac Pro with two dual-core CPUs at 2.66 GHz
    OS 10.5.8
    21GB RAM
    ACR cache on volume striped across 3 internal SATA drives
    LR catalog and RAWs on an internal SATA drive
    30" HP LP3065 monitor (2560 pixels wide)
    NVIDIA GeForce 7300 GT

    I'm impressed by your thorough analysis.
    Clearly, the programmers haven't figured out the best way to do intelligent caching and/or parallel rendering at a reduced size yet.
    In my experience reducing the settings in the "Details" panel doesn't help.
    What really bugs me is that the lag (or increasing lack of interactiveness) depends on the number of adjustments one has made.
    This shouldn't be the case. If a cache is produced then every further adjustment should only cost the effort for that latest adjustment and not include adjustments before it. There are things that stand in the way of straightforward edit applications:
    1. If you work below 1:1 preview, adjustments have to be shown in a reduced form. If you don't have a way to faithfully mimic the adjustments on the reduced size, you have to do them on the original image and then scale down. That's expensive.
    2. To the best of my knowledge LR uses a fixed image pipeline. Hence, independently of the order in which you apply edits, they are always performed in the same fixed order. Say all spot removal operations are done first. If you have a lot of adjustment brush edits and then add a spot removal operation, it means that all the adjustment brush operations have to be replayed each time you do a little adjustment on your spot removal edit.
    I believe what you are seeing is mostly related to 1.
    I also believe that the way LR currently handles a moderate number of edits is unacceptable and incompatible with the notion that it is usable in a commercial setting for more than trivial edits. I suspect there is something else going on. If everyone saw the deterioration in performance after a number of edits that I see, I don't think LR would be as accepted as it is. Having said that, I've read that the problem of repeated applications of the adjustment brush slowing LR down has existed for a long time. I truly hope that this doesn't mean we'll have to live with it for the foreseeable future.
    There are two ways I can see how 2. should be addressed:
    1. Combine the effects of a set of operations into one bitmap operation. Instead of replaying all adjustment brush strokes one after the other (speedwise it feels like this is happening), compute a single bitmap operation that combines all effects.
    2. Give up the idea that there is an image pipeline with a fixed execution order.
    Some might argue that the second point is at odds with the whole idea of parametric editing, but I dispute that. Either edit operations are commutable in which case the order is immaterial, or they are not. If they are not, the user applies the edits in a way as he/she sees fit and will thus compensate for any effect of a changed ordering.
    N.B., currently the doctrine of "fixed ordering of edit applications" results in the effect that even if you convert an image into B&W all your adjustment brush edits that applied colour tints will still show through. Reasoning: The user should be able to locally tint a B&W image. I agree with the latter but this could be achieved by only applying those tinting brush strokes that were created after the B&W conversion. All the ones that happened before should only be used to obtain the correct luminance values for the B&W conversion but obviously they shouldn't cause tinted areas.
    The above example demonstrates to me that users naturally expect operations to occur in the order they were introduced, not in a fixed predefined order. If that principle were followed, I see no reason why the speed of a single edit should depend on the number of edits that were done to the image before.
    I hope the programmers can (and the management wants to) address the performance issues. While I find LR usable for pretty modest edits, in no way does the performance on my system approach what I would expect from an industrial-strength application.
    P.S.: Your message reminded me of the following: When I experience serious lag with LR showing the strokes I make with an adjustment brush, it helps to pause a moment after the first click before one starts moving. This allows LR to catch up and then one can see the effect of the application pretty much interactively. Otherwise, there is terrible lag and the feedback where you have brushed an effect comes way too late.

  • Performance Problems PrPro CS5 - Upgrade to CS6?

    Hi
    we often notice massive performance problems with PrPro CS5 here.
    e.g. last week we were shooting with a Sony NEX-VG10. (MediaInfo:)
    Format: AVC
    Format profile: [email protected]
    Format settings, CABAC: Yes
    Format settings, ReFrames: 2 frames
    Format settings, GOP: M=2, N=13
    Bit rate: 16.0 Mbps
    Width: 1 920 pixels
    Height: 1 080 pixels
    Display aspect ratio: 16:9
    Frame rate: 25 fps
    After linking the footage into PrPro, playback stutters.
    PrPro is very often not responsive to the keyboard at all.
    Sometimes PrPro needs some seconds before playback starts.
    This way editing is a pain...
    No fx in timeline, only one or two clips. Same behaviour if I play a clip in the source window.
    Project and sequence settings are correct. Playback quality set to 1/4.
    These problems occur in several projects with footage from different cameras.
    PrPro works ok  with this (PAL/SD-)footage for example - MediaInfo:
    Format: DV
    Commercial name: DVCPRO
    Width: 720 pixels
    Height: 576 pixels
    Display aspect ratio: 16:9
    About the computer:
    Dell Precision WorkStation T5500
    24 GB RAM
    2 processors w/ 8 cores together: Intel Xeon CPU E5620 @ 2.40GHz
    System: 250 GB (SSD)
    Media: 2 TB (this is one normal internal hard drive, no raid)
    NVIDIA Quadro 4000 (driver version 297.03)
    connected to internet (is a must in this company)
    connected to intranet, server (is a must in this company, too)
    optimized for performance
    windows firewall active
    AVIRA Professional
    What would it take to get PrPro to work well with HD footage?
    Add a RAID? Internal/external?
    Upgrade to CS6?
    Encode the footage to another codec before linking it into PrPro? (Which codec? Encode with AME?)
    Buy a new processor? (i7? Which one?)
    Buy any hardware pieces?
    TIA for your guidance and best regards.

    Take a look at the Dell T7400 currently at rank # 568, 'Base2008PT1' on Benchmark Results. It is somewhat similar to your own system, but has more memory, a better video card and some raid0 arrays. Nevertheless, it is around 3.4 times slower than a fast system. My guess is that your system is even slower.
    The material you try to edit is very demanding and requires a beefy computer. Even though there is nothing wrong with the dual Xeon E5620's, their clock speed works against you (I have the same CPU's in a file server, but that is far less demanding than editing), as does the amount of memory in the system. You can be helped with more disks, but don't expect miraculous performance improvements. But all little things help.
    The alternative is a complete new system, but that can be pricey. Even if you build it yourself (see Planning & Building a NLE system), it can be costly. It gives you very fast performance, as demonstrated on the Reflections page, and you should not get such a system from Dell or HP unless you have unlimited funds.

  • NWDI Performance Problems

    Hello Experts!
    We have enormous performance problems with our NWDI.
    Our NWDI is running on a SAP Solution Manager system (dual-stack system).
    NWDI is running on the Java-Stack of our Solution Manager.
    First of all I have created a new TRACK in CMS with the following Software-Components:
    SAP_BUILD 7.30, SAP_ENGFACADE 7.30.
    When I want to import these Software-Components at the tabs "Development" and "Consolidation", it takes about 15 minutes to import each Software-Component.
    After 15 minutes, I get an exception called "Lock Exception".
       com.sap.tc.webdynpro.services.session.LockException: Thread SAPEngine_Application_Thread impl:3 15 failed to acquire exclusive lock on client session ClientSession(id=(xxxxxxxxXXX_XX)IDXXXXX). Existing locks: LockingManager(ThreadName:SAPEngine_Application_Threadimpl:3_15, exclusive client session lock: ClientSessionLock(SAPEngine_Application_Thread impl:3_37), shared client session locks: ClientSessionSharedLockManager(), app session locks: ApplicationSessionLockManager(), current request: sap.com/tcSLCMS~WebUI/Cms).Hint: Take a thread dump of the server node to find the blocking thread that causes the problem.
    I searched in the SDN for solving this problem and found out, that the problem is described in SAP Note 1234847.
    First of all, I will restart the J2EE of our Solution Manager this evening....
    Do you think that helps or is it useless to restart the J2EE?
    Thank you very much for helping.
    Best Regards.

    Hello Ervin.
    I have contacted my SAP Basis team....
    First of all, I would like to give you a more detailed overview of our landscape:
    - SAP Solution Manager 7.01 SPS27, dual-stack system
    - Installed Products:
    + ABAP-Stack: Solution Manager
    + Java-Stack: SLD, NWDI
    Our ABAP-Stack is on SP8, our Java-Stack on SP8, too.
    As you know, the SP level of DI_CMS, DI_CBS and DI_DTR is SP10,
    while the rest of the Java stack software components are on SP8.
    So:
    Our SAP Basis team only wants to upgrade the Java stack from SP8 to SP10, not the ABAP stack.
    My question now is the following:
    Are there any risks / is it advisable to patch only the Java stack from SP8 to SP10, and not both stacks from SP8 to SP10?
    If you know the answer, please could you give me an evidence, which I can send to my SAP-Basis?
    Thank you very much for helping.
    Best Regards.

  • Performance problem on function-based index

    Hi guys,
    I am having performance problems with the addition of new function-based indexes.
    alter session set nls_comp='ANSI';
    alter session set nls_sort='BINARY_CI';
    * have to run these because of the case-insensitivity requirements
    I have a view. for ex:
    create or replace view view1
    as
    select * from emp1,user
    where emp1.empno=user.empno
    union
    select * from emp2,user
    where emp2.empno=user.empno
    union
    select * from emp3,user
    where emp3.empno=user.empno and so on
    When I run this, it works with a full table scan. Then I created a function-based index:
    create index user_ix on
    user(nlssort(empno,'NLS_SORT=BINARY_CI'));
    analyze index user_ix compute statistics;
    analyze table user compute statistics;
    the view hangs, but when I run the individual select statements it works.
    Do you guys have any idea what's going on? Any advice is greatly appreciated.
    Thanks.

    LC is absolutely right. Brain cramp on my part.
    On the other hand, I can't seem to coerce Oracle to apply a to_binary_double conversion as part of an implicit conversion.
    var bin_dbl binary_double;
    exec :bin_dbl := to_binary_double(14);
    SCOTT @ nx102 JCAVE9420> select * from emp where empno = :bin_dbl;
    no rows selected
    Elapsed: 00:00:00.14
    Execution Plan
    Plan hash value: 2949544139
    | Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |        |     1 |    39 |     1   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| EMP    |     1 |    39 |     1   (0)| 00:00:01 |
    |*  2 |   INDEX UNIQUE SCAN         | PK_EMP |     1 |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - access("EMPNO"=TO_NUMBER(:BIN_DBL))
    I'd expect that Oracle would try to convert the binary double to a number, not the other way around.
    Justin
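    A small illustration of that last point (my own sketch, not part of Justin's reply): to force the comparison onto the BINARY_DOUBLE side you have to write the conversion explicitly, and then the normal PK_EMP index can no longer be used unless a matching function-based index exists.
    -- explicit conversion on the column side (EMP/PK_EMP are the SCOTT demo objects)
    SELECT * FROM emp WHERE TO_BINARY_DOUBLE(empno) = :bin_dbl;
    -- a hypothetical function-based index that would support that predicate
    CREATE INDEX emp_empno_bd ON emp (TO_BINARY_DOUBLE(empno));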

  • PL/SQL Performance problem

    I am facing a performance problem with my current application (a PL/SQL packaged procedure).
    My application takes data from 4 temporary tables, does a lot of validation and
    puts them into permanent tables (updates if present, else inserts).
    One of the temporary tables is the parent table, and each parent row can have 0 or more rows in
    the other tables.
    I have analyzed all my tables and indexes and checked all my SQLs
    They all seem to be using the indexes correctly.
    There are 1.6 million records combined in all 4 tables.
    I am using Oracle 8i.
    How do I determine what is causing the problem and which part is taking the most time?
    Please help.
    The skeleton of the code which we have written looks like this
    MAIN LOOP ( 255308 records)-- Parent temporary table
    -----lots of validation-----
    update permanent_table1
    if sql%rowcount = 0 then
    insert into permanent_table1
    Loop2 (0-5 records)-- child temporary table1
    -----lots of validation-----
    update permanent_table2
    if sql%rowcount = 0 then
    insert into permanent_table2
    end loop2
    Loop3 (0-5 records)-- child temporary table2
    -----lots of validation-----
    update permanent_table3
    if sql%rowcount = 0 then
    insert into permanent_table3
    end loop3
    Loop4 (0-5 records)-- child temporary table3
    -----lots of validation-----
    update permanent_table4
    if sql%rowcount = 0 then
    insert into permanent_table4
    end loop4
    -- COMMIT after every 3000 records
    END MAIN LOOP
    Thanks
    Ashwin N.

    Do this instead of ditching the PL/SQL.
    DECLARE
    TYPE NumTab IS TABLE OF NUMBER(4) INDEX BY BINARY_INTEGER;
    TYPE NameTab IS TABLE OF CHAR(15) INDEX BY BINARY_INTEGER;
    pnums NumTab;
    pnames NameTab;
    t1 NUMBER(5);
    t2 NUMBER(5);
    t3 NUMBER(5);
    BEGIN
    FOR j IN 1..5000 LOOP -- load index-by tables
    pnums(j) := j;
    pnames(j) := 'Part No. ' || TO_CHAR(j);
    END LOOP;
    t1 := dbms_utility.get_time;
    FOR i IN 1..5000 LOOP -- use FOR loop
    INSERT INTO parts VALUES (pnums(i), pnames(i));
    END LOOP;
    t2 := dbms_utility.get_time;
    FORALL i IN 1..5000 -- use FORALL statement
    INSERT INTO parts VALUES (pnums(i), pnames(i));
    t3 := dbms_utility.get_time;
    dbms_output.put_line('Execution Time (secs)');
    dbms_output.put_line('---------------------');
    dbms_output.put_line('FOR loop: ' || TO_CHAR(t2 - t1));
    dbms_output.put_line('FORALL: ' || TO_CHAR(t3 - t2));
    END;
    Try this link, http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/05_colls.htm#23723
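    Applying the same bulk-bind idea to the update-if-present-else-insert pattern from the original post might look roughly like the sketch below. The table and column names (temp_table1, permanent_table1, id, col1) are placeholders, the validation logic is left out, and for the real 255,308-row parent table you would process it in chunks rather than in one big collection:
    DECLARE
      TYPE t_num_tab IS TABLE OF NUMBER        INDEX BY BINARY_INTEGER;
      TYPE t_chr_tab IS TABLE OF VARCHAR2(100) INDEX BY BINARY_INTEGER;
      l_id        t_num_tab;
      l_col1      t_chr_tab;
      l_miss_id   t_num_tab;   -- rows whose UPDATE touched nothing
      l_miss_col1 t_chr_tab;
    BEGIN
      SELECT id, col1 BULK COLLECT INTO l_id, l_col1 FROM temp_table1;
      FORALL i IN 1 .. l_id.COUNT
        UPDATE permanent_table1 SET col1 = l_col1(i) WHERE id = l_id(i);
      -- SQL%BULK_ROWCOUNT(i) is 0 where the update found no row; remember those
      FOR i IN 1 .. l_id.COUNT LOOP
        IF SQL%BULK_ROWCOUNT(i) = 0 THEN
          l_miss_id(l_miss_id.COUNT + 1)     := l_id(i);
          l_miss_col1(l_miss_col1.COUNT + 1) := l_col1(i);
        END IF;
      END LOOP;
      FORALL j IN 1 .. l_miss_id.COUNT
        INSERT INTO permanent_table1 (id, col1) VALUES (l_miss_id(j), l_miss_col1(j));
      COMMIT;
    END;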

  • Performance problem in Zstick report...

    Hi Experts,
    I am facing a performance problem in a custom Stock report of Material Management.
    In this report I am fetching all the materials with their batches to get the desired output; at a time this report processes 36,000-plus unique combinations of material and batch.
    This report takes around 30 minutes to execute, and it is to be viewed regularly every 2 hours.
    To read the batch characteristics values I am using FM '/SAPMP/CE1_BATCH_GET_DETAIL'.
    Is there any way to increase the performance of this report? The output of the report is in ALV.
    Can I have a refresh button in the report so that the data gets refreshed automatically without executing it again, or is there any caching concept?
    Note: I have declared all the itabs as sorted tables, and all the select queries fetch with key and index.
    Thanks
    Rohit Gharwar

    Hello,
    SE30 is old. Switch on the trace in ST12 while running this program and identify where exactly most of the time is being spent. If you see high CPU time, the problem is with the ABAP code; you can figure out the exact program/function module from the ST12 trace. If you see high database time in ST12, the problem is a database-related issue, so you basically have to analyze the SQL statements from the performance traces in ST12. That should resolve your issue.
    Yours Sincerely
    Dileep

  • SQL report performance problem

    I have a SQL classic report in Apex 4.0.2 and database 11.2.0.2.0 with a performance problem.
    The report is based on a PL/SQL function returning a query. The query is based on a view and pl/sql functions. The Apex parsing schema has select grant on the view only, not the underlying objects.
    The generated query runs in 1-2 sec in sqlplus (logged in as the Apex parsing schema user), but takes many minutes in Apex. I have found, by monitoring the database sessions via TOAD, that the explain plans in the Apex and sqlplus sessions are very different.
    The summary:
    In sqlplus SELECT STATEMENT ALL_ROWS Cost: 3,695                                                                            
    In Apex SELECT STATEMENT ALL_ROWS Cost: 3,108,551                                                        
    What could be the cause of this?
    I found a blog and Metalink note about different explain plans for different users. They suggested to set optimizer_secure_view_merging='FALSE', but that didn't help.

    Hmmm, it runs fast again in SQL Workshop. I didn't expect that, because both the application and SQL Workshop use SYS.DBMS_SYS_SQL to parse the query.
    Only the explain plan doesn't show anything.
    To add: I changed the report source to the query the pl/sql function would generate, so the selects are the same in SQL Workshop and in the application. Still in the application it's horribly slow.
    So, Apex does do something different in the application compared to SQL Workshop.
    Edited by: InoL on Aug 5, 2011 4:50 PM
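    One thing that might help narrow this down (a suggestion, not from the thread): instead of EXPLAIN PLAN, pull the plans that actually ran from the shared pool for both the Apex session and the sqlplus/SQL Workshop runs and compare the plan hash values:
    -- find the cursor(s); replace the LIKE pattern with a distinctive fragment of the generated query
    SELECT sql_id, child_number, plan_hash_value, parsing_schema_name
    FROM   v$sql
    WHERE  sql_text LIKE '%distinctive fragment of the report query%';
    -- then show the plan that was really used for a given cursor
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));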

  • Performance problem with WPF Viewer CRVS2010

    Hi,
    We are using Crystal Reports 2010 and the new WPF Viewer. Last week when we set up a test machine to run our integration tests (several hundred) all report tests failed (about 30 tests) with a timeout exception.
    The test machine setup:
    HP DL 580 G5
    VMware ESXi 4.0
    Guest OS: Windows 7 Enterprise 64-bit
    Memory (guest OS): 3GB
    CPU: 1
    Visual Studio 2010
    Crystal Reports for Visual Studio 2010 with 64 bit runtime installed
    Visual Studio 2008 installed
    Microsoft Office 2010 installed
    McAfee antivirus
    There are about 10 other virtual machines on the same HW.
    I think the performance problem is related to text objects on a report document viewed in a WPF Viewer. I made a simple WPF GUI with 2 buttons; the first button executes a very simple report that only has a text object with a few words in it, and the other button is also a simple report with only 1 text object with approx. 100 words (about 800 characters).
    The first report executes and displays almost instantly, and the second report executes instantly but displays after approx. 1 min 30 sec.
    And execute in this context means that all VB.Net code runs in the compiler without any exception or performance problem. The performance problem seems to come after viewer.Show() (in the code below) has executed.
    I did another test on the second report and replaced the text object with a formula field containing the same text as the text object, and this test executed and displayed the report instantly.
    So the performance problem seems to have something to do with rendering of textobjects in the WPF Viewer on a virtual machine with the above setup.
    I've made several tests on local machines with Windows XP (32 bit) or Windows 7 (64 bit) installed and none of them have this performance problem. It's not a critical issue for us because our users will run this application on their local PCs with Windows 7 64-bit, but it's a bit problematic for our project not being able to run all of our integration tests; I will probably solve this by using a local PC instead.
    Here is the VB.Net code I'm using to view the reports:
    Private Sub LightWeight_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
            Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
            lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeight.rpt")
            ' Initialize Viewer
            Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
            viewer.Owner = Me
            viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
            viewer.Show()
        End Sub
        Private Sub LightWeightSlow_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
            Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
            lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeightSlow.rpt")
            ' Initialize Viewer
            Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
            viewer.Owner = Me
            viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
            viewer.Show()
        End Sub
    The reports are 2 empty default reports with only 1 textobject on the details section.
    // Thomas

    See if the KB [1448013 - Connecting to Oracle database. Error; Failed to load database information|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433343338333033313333%7D.do] helps.
    Also the following may not hurt to have a look at (if only for ideas):
    [1217021 - Err Msg: "Unable to connect invalid log on parameters" using Oracle in VS .NET|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333233313337333033323331%7D.do]
    [1471508 - Logon error when connecting to Oracle database in a VS .NET application|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433373331333533303338%7D.do]
    [1196712 - Error: "Failed to load the oci.dll" in ASP.NET application against an Oracle database|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333133393336333733313332%7D.do]
    Ludek
    Follow us on Twitter http://twitter.com/SAPCRNetSup

  • JRC 2: Performance Problem

    Hi.
    Our reporting component used JRC 1.x before we upgraded to JRC 2.x. We got two issues after upgrading.
    The first issue I have already solved with a workaround, which I published on stackoverflow.com (1). Does anyone know where I can find the issue management system to report this issue?
    The second issue is a big performance problem in our project. We opened a report with 6 subreports (each including 1 to 3 tables) in 2-4 seconds using JRC 1. If we open the same report using JRC 2, we wait up to 60 seconds.
    These methods require more time with JRC 2 compared to JRC 1:
    ReportClientDocument#open(String, int);
    SubreportController#setTableLocation(String, ITable, ITable)
    DatabaseController#setTableLocation(ITable, ITable)
    Each invocation of one of these methods requires 2-4 seconds.
    Thank you in advance.
    Best regards
    Thomas
    (1) http://stackoverflow.com/questions/479405/replace-a-database-connection-for-subreports-with-jrc

    Hello....
    My report is "Crystal Report 11" => "OLE DB" => "Add Command (select * from table)".
    Code (JRC): Eclipse + Crystal Reports for Eclipse version 2 => "cr4e-all-in-one-win_2.0.1.zip"
    <%@ page contentType="text/html; charset=UTF-8"
    import="
    com.crystaldecisions.report.web.viewer.CrystalReportViewer,
    com.crystaldecisions.reports.sdk.ReportClientDocument,
    com.crystaldecisions.sdk.occa.report.lib.ReportSDKExceptionBase,
    java.sql.Connection,
    java.sql.DriverManager,
    java.sql.ResultSet,
    java.sql.SQLException,
    java.sql.Statement" %>
    <%
         try {
              String reportName = "report.rpt";
              ReportClientDocument clientDoc = new ReportClientDocument();
              clientDoc.open(reportName, 0);
              String tableAlias = "Command";
              clientDoc.getDatabaseController().setDataSource(myResult("SELECT * FROM table"), tableAlias,tableAlias);
              CrystalReportViewer crystalReportPageViewer = new CrystalReportViewer();
              crystalReportPageViewer.setReportSource(clientDoc.getReportSource());
              crystalReportPageViewer.processHttpRequest(request, response, application, null);
          } catch (ReportSDKExceptionBase e) {
               e.printStackTrace();
              out.println(e);
          }
    %>
    I simplified the code. myResult("SELECT * FROM table") is absolutely not the problem,
    and this code runs with absolutely no problem in "Crystal Reports for Eclipse" version 1,
    but in version 2 it fails with this error:
    com.crystaldecisions.sdk.occa.report.lib.ReportSDKException: 無法預期的資料庫連線器錯誤 (unexpected database connector error) ---- Error code:-2147467259 Error code name:failed
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.a(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.a(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.if(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.a(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter$2.a(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter$2.call(Unknown Source)
         at com.crystaldecisions.reports.common.ThreadGuard.syncExecute(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.for(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.int(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.request(Unknown Source)
         at com.businessobjects.sdk.erom.jrc.a.a(Unknown Source)
         at com.businessobjects.sdk.erom.jrc.a.execute(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.RemoteAgent$a.execute(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.CommunicationChannel.a(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.RemoteAgent.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportClientDocument.if(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportClientDocument.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportClientDocument.new(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.b9.onDataSourceChanged(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.DatabaseController.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.DatabaseController.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.DatabaseController.setDataSource(Unknown Source)
         at org.apache.jsp.No_005f1.Eclipse_005fJTDS_005fSQL2005_005fTable_002dviewer_jsp._jspService(Eclipse_005fJTDS_005fSQL2005_005fTable_002dviewer_jsp.java:106)
         at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
         at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:374)
         at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:342)
         at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:267)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
         at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
         at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
         at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
         at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
         at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
         at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
         at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
         at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
         at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
         at java.lang.Thread.run(Unknown Source)
    Caused by: com.crystaldecisions.reports.common.QueryEngineException: 無法預期的資料庫連線器錯誤 (unexpected database connector error)
         at com.crystaldecisions.reports.queryengine.Connection.bf(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Rowset.z3(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Rowset.bL(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Rowset.zM(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Connection.a(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.a(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.if(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.try(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.a(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.u7(Unknown Source)
         at com.crystaldecisions.reports.datafoundation.DataFoundation.a(Unknown Source)
         at com.crystaldecisions.reports.dataengine.dfadapter.DFAdapter.a(Unknown Source)
         at com.crystaldecisions.reports.dataengine.dfadapter.CheckDatabaseHelper.a(Unknown Source)
         at com.crystaldecisions.reports.dataengine.datafoundation.CheckDatabaseCommand.new(Unknown Source)
         at com.crystaldecisions.reports.common.CommandManager.a(Unknown Source)
         at com.crystaldecisions.reports.common.Document.a(Unknown Source)
         at com.crystaldecisions.reports.dataengine.VerifyDatabaseCommand.new(Unknown Source)
         at com.crystaldecisions.reports.common.CommandManager.a(Unknown Source)
         at com.crystaldecisions.reports.common.Document.a(Unknown Source)
         at com.businessobjects.reports.sdk.requesthandler.f.a(Unknown Source)
         at com.businessobjects.reports.sdk.requesthandler.DatabaseRequestHandler.a(Unknown Source)
         at com.businessobjects.reports.sdk.requesthandler.DatabaseRequestHandler.if(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.do(Unknown Source)
         ... 39 more
    Please help me and tell me why....

  • Performance problem with S873,S691,S679,S716

    Hi All,
    I am using tables S873, S691, S679 and S716 in my report. Since these have huge amounts of data, they are creating a performance problem. Right now I am selecting data from the tables with SELECT statements inside LOOP ... ENDLOOP.
    If I try to take the selects out of the loop and use FOR ALL ENTRIES, it takes even longer.
    I don't have all the keys, and I am trying to use an index wherever possible.
    Any hints or comments are really appreciated.
    My code goes like this.
    This is not the full code... only the part where I am fetching data from the 'S' tables.
    loop at i_mvke.
    SELECT kunnr zzobklgq zzcbklgq
    FROM s679 INTO
    (s679-kunnr,s679-zzobklgq,s679-zzcbklgq) WHERE
    matnr EQ i_mvke-matnr AND
    vrsio EQ p_versn AND
    sptag IN s_billed AND
    werks EQ p_werks AND
    vkorg EQ i_mvke-vkorg AND
    kunnr IN s_kunnr.
    SELECT SUM( zznslqty ) FROM s691 INTO
    w_s691_net_bill_qty WHERE
    * Use index Z5
    matnr EQ i_mvke-matnr AND
    spmon EQ w_spmon AND
    vrsio EQ p_versn AND
    werks EQ p_werks AND
    vkorg EQ i_mvke-vkorg AND
    kunnr IN s_kunnr."
    SELECT SUM( zznslqty ) SUM( zzninvnum ) FROM s873 INTO
    (work01-zznslqty, work01-zzninvnum) WHERE
    * use index Z02
    vkorg EQ i_mvke-vkorg AND
    werks EQ p_werks AND
    matnr EQ i_mvke-matnr AND
    sptag IN s_billed AND
    vrsio EQ p_versn AND
    kunnr IN s_kunnr.
    SELECT zcrdotov FROM s716 INTO s716-zcrdotov WHERE
    matnr EQ i_mvke-matnr AND
    sptag IN s_billed AND
    vkorg EQ i_mvke-vkorg AND
    kunnr IN s_kunnr AND
    werks = p_werks AND
    vrsio EQ p_versn.
    * GET THE RECORD TOTAL
    ADD 1 TO w_s716_total_count.
    * GET THE ONTIME TOTAL
    IF s716-zcrdotov = 1.
    ADD 1 TO w_s716_ontime_count.
    ENDIF.
    ENDSELECT.
    PERFORM fill_output_table.
    endloop. "i_mvke
    Message was edited by: Agasti Kale

    hi
    good
    wrong->
    SELECT kunnr zzobklgq zzcbklgq
    FROM s679 INTO
    (s679-kunnr,s679-zzobklgq,s679-zzcbklgq) WHERE
    matnr EQ i_mvke-matnr AND
    vrsio EQ p_versn AND
    sptag IN s_billed AND
    werks EQ p_werks AND
    vkorg EQ i_mvke-vkorg AND
    kunnr IN s_kunnr.
    write->
    SELECT kunnr zzobklgq zzcbklgq
    FROM s679 INTO CORRESPONDING FIELDS OF TABLE lt_s679  " a separate internal table, not i_mvke itself
    FOR ALL ENTRIES IN i_mvke
    WHERE
    matnr EQ i_mvke-matnr AND
    vrsio EQ p_versn AND
    sptag IN s_billed AND
    werks EQ p_werks AND
    vkorg EQ i_mvke-vkorg AND
    kunnr IN s_kunnr.
    Make the corresponding changes in the other select statements. You have not posted the full report, otherwise I could have helped you with those select statements as well.
    thanks
    mrutyun

  • Performance problem in ABAP code

    Hi guys,
    I created a report using tables like BSIS, T001 etc. (a tax report).
    I have a performance problem in this report.
    Could you please tell me how to analyse the report and find out where the processing is taking more time, memory etc.?
    I did an ABAP trace and runtime analysis, but could not find the exact point.
    How do I do this?
    I want to analyse each subroutine, internal table and query.
    Could you please give me some ideas?
    ambichan

    There is an excellent tool available in SAP - Code Inspector.
    The transaction is SCII.
    Try the following link and I am sure you will find a bunch of useful documents:
    ABAP Performance - http://www.google.co.in/search?hl=en&safe=off&q=site%3Asdn.sap.com+filetype%3Apdf+Code+Inspector&btnG=Search&meta=
    I use the Code Inspector to search for
    a) All the select statements which are present within the loop
    b) Nested Loops
    c) Select query without providing criteria for primary keys, depending upon situation
    d) Can the search be narrowed with extra conditions
    e) Using READ .. BINARY SEARCH if internal table has lots of records.
    The list is actually endless, but this is something to start with.
    You can actually have a checklist, and depending upon it, go through your code. The more you adhere to checklist, you will find that, the performance would dramatically improve.
    Also use the ST05 transaction, for SQL Trace, and find out which select query is taking the maximum time for response.
    Regards,
    Subramanian V.
