Performance problem Elements 11

When I want to save an image after editing in PS Elements 11 or Camera Raw, it takes a very long time until the image is saved. Sometimes Organizer doesn't even respond and I have to close the program and restart it.
I have plenty of space on my hard drive, 1.1 TB free of 1.34 TB total. I use Windows 7 Home Premium, an Intel Core i5 and 8 GB RAM.
/Per

Hi pernilsson,
Thanks for posting on Adobe forums.
Please try the following article:
Tips to enhance Photoshop Elements performance
Regards,
Sandeep

Similar Messages

  • Performance problems when using Premiere Elements for photo slideshows

    Hello,
    I had been using Premiere Elements 9 (PE9) to make a simple slideshow for my parents from their vacation trip and I ran into some serious performance problems. I had used it to create similar projects before, but not nearly as big. This one is like 260 photos, so basically it is 260 separate clips. I have a POWERHOUSE workstation (see below) so it isn't my PC. Even when PE9 crashes, my performance monitor shows that my CPU and RAM aren't even halfway utilized. I finally switched to Windows Movie Maker of all things and it worked seamlessly, amazing really. I'm wondering if I was just using PE9 for something other than what it was designed for, since there weren't really any video clips, just a ton of photos that I made into video clips, if that makes sense. Based upon my experience with this so far, I can't imagine using PE9 anymore for anything, really. I might need a more professional video editing program in the near future, although it does seem like PE has a lot of features. How can I make sure it utilizes my workstation to its full potential? Here are my specs:
    PC
    Intel Core i7-2600K 4.6 GHz Overclocked
    ASUS P8P67 Deluxe Motherboard
    AMD Firepro V8800 Video Card
    Crucial 128 GB SATA 6Gb/s Solid State Drive (Operating System)
    Corsair Vengeance 16GB (4x4GB) Memory
    Corsair H60 Liquid CPU Cooler
    Corsair Professional Series Gold AX850 Power Supply
    Graphite Series 600T Mid-Tower Case
    Western Digital Caviar Black 1 TB SATA III Hard Drive
    Western Digital Caviar Black 2 TB SATA III Hard Drive
    Western Digital Green 3 TB SATA III Hard Drive
    Logitech Wireless Gaming Mouse G700
    I don’t play any games but it’s a great productivity mouse with 13 customizable buttons
    Wacom Intuos5 Pen Tablet
    Yes, this system is blazingly fast. I have yet to feel it slow down, even with Photoshop, Lightroom, InDesign, Illustrator and numerous other apps running at the same time. HOWEVER, Premiere Elements 9 has crashed NUMEROUS times, and every time my system wasn't even close to being fully taxed.
    Monitors – All run on the ATI V8800
    Dell Ultra Sharp 30 inch
    Samsung 27 Inch
    HAANS-G 28 Inch
    Herman Miller Embody Ergonomic Chair (one of my favorite items)

    Andy,
    There ARE some differences between PrE and PrPro w/ an approved CUDA-capable and MPE hardware acceleration-enabled nVidia video card, but those differences show up ONLY in the quality of the Scaling. The processing overhead is almost exactly the same, when it comes to handling the extra pixels.
    As of PrPro CS 5, two things changed:
    The max. size of Still Images went up from 4096 x 4096 pixels, to quite a bit larger (cannot recall the numbers now).
    The Scaling algorithms have been improved, though ONLY with the correct nVidia cards, with MPE hardware support enabled.
    Now, there CAN be another consideration between the two programs, in that PrPro CS 5 - CS 6 are 64-bit ONLY, so one needs a 64-bit computer and OS to run them. PrE can be either 32-bit or 64-bit, so one might, or might not, be taking advantage of a 64-bit program and OS. Still, the processing overhead will be almost identical; it's just that the 64-bit OS can spread it around a bit.
    I still recommend Scaling the large Still Images in PS, prior to Import, to keep that processing overhead as low as possible. Scaled Still Images work just fine, and I have one Project with 3000+ Scaled Still Images that edits just fine in PrPro, even on my older 32-bit workstation. Testing that same machine and PrPro some years ago, I could work with only up to five 4096 x 4096 Stills before things ground to a crawl.
    Now, Adobe AfterEffects handles large Still Images differently, so I just moved that test Project to AE, and added another 20 large Images, which edited just fine. IIRC, AE can handle Still Images up to 10K x 10K pixels, and that might have gone up, as of CS 5.
    Good luck, and hope that helps,
    Hunt

  • Performance problems with new Java Tiger style recommendations

    Performance problems with JDK 1.5 on the Linux platform
    (not tested on Windows, might be the same)
    using the new style recommendations.
    I need fast Vector loops for high-speed mathematical calculations; some
    hints about the fastest way to program that loop would also be great!
    After refactoring using the new features from Java 1.5 (as recommended by
    Sun) I lost performance significantly:
    using a vector:
    public Vector<unit> units;
    The new code (recommended from SUN for Java Tiger for redesign):
    for (unit u: units) u.accumulate();
    runs more than 30% slower than the old code:
    for (int i = 0; i < units.size(); i++) units.elementAt(i).accumulate();
    I expected the opposite.
    Is there any information available that helps?
    I got the following additional information from Mr. Shankar Unni:
    I got some fairly anomalous results comparing ArrayList and Vector: for the
    1.5-style loops, ArrayList was faster than Vector, but for a loop with get()
    calls, Vector was faster. Vector was even faster than that using
    elementAt(), which was a surprise:
    For a million summing iterations over a 100-element array:
    vector elementAt loop took 3446 ms.
    vector get loop took 3796 ms.
    vector iterator loop took 5469 ms.
    arraylist get loop took 4136 ms.
    arraylist iterator loop took 4668 ms.

    If your topic doesn't change, please stay in your original post.

  • Performance Problems on Faces Navigation Diagram and Hyperthreading query

    Am I the only one having performance problems when dealing with faces-config diagrams of about 35 JSPs displayed on the sheet, using JDeveloper 10.1.3? It takes my workstation about a full minute and a half to update the name of an arrow. The most stressed component during this task seems to be the CPU.
    And just another question: has anybody investigated how the performance of JDeveloper is affected by enabling or disabling hyperthreading? In my case CPU usage only manages to reach 50%. I'm tempted to switch HT off to let JDeveloper use all the CPU power, if that would help.

    Hello Diego,
    you mentioned that you compared a BEx Query with the Web Intelligence report. Could you provide more details here?
    - What are the elements in the rows, columns and free characteristics in the BEx Query?
    - Was the query executed as designed in the BEx Query Designer with BEx Web Reporting?
    - What are the elements in the Web Intelligence query panel?
    thanks
    Ingo

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
      for $r in Root
      return
        <Root>
          <Id>123456789</Id>
          {for $e in $r/Element
           return
             <Element>
               <Subelement1>
                 {$e/Subelement1/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement1>
               <Subelement2>
                 {$e/Subelement2/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement2>
               <Subelement3>
                 {$e/Subelement3/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement3>
             </Element>
          }
        </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.

  • Performance Problem in parsing large XML file (15MB)

    Hi,
    I'm trying to parse a large XML file (15 MB) and facing a clear performance problem. A simple XML validation using the following code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobFromFile(
    tempCLOB,
    targetFile,
    DBMS_LOB.getLength(targetFile),
    dest_offset,
    src_offset,
    nls_charset_id(CONSTANT_CHARSET),
    lang_context,
    conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    p_xml_document.schemaValidate();
    is taking 30 minutes on an HP-UX machine (4 GB RAM, 2 CPUs; Oracle version 9.2.0.4).
    Please explain what could be going wrong.
    Thanks In Advance,
    Vineet

    Thanks Mark,
    I'll open a TAR and also upload the schema and instance XML.
    If I'm not changing the track too much :-) one more thing in continuation:
    If I skip the schema validation step and directly insert the instance document into a schema-linked XMLType table, what does Oracle XDB do in such a case?
    I'm getting a severe performance hit here too... the same file as above takes almost 40 minutes to insert.
    code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobFromFile(
    tempCLOB,
    targetFile,
    DBMS_LOB.getLength(targetFile),
    dest_offset,
    src_offset,
    nls_charset_id(CONSTANT_CHARSET),
    lang_context,
    conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    -- p_xml_document.schemaValidate();
    insert into INCOMING_XML values(p_xml_document);
    Here table INCOMING_XML is:
    TABLE of SYS.XMLTYPE(XMLSchema "http://INCOMING_XML.xsd" Element "MatchingResponse")
    STORAGE Object-relational TYPE "XDBTYPE_MATCHING_RESPONSE"
    This table and type XDBTYPE_MATCHING_RESPONSE were created using the mapping provided in the registered XML Schema.
    Thanks,
    Vineet

  • Question for ORACLE EXPERTS!!! PERFORMANCE PROBLEM!!!

    I have 2 nodes on RAC. On node1 the following query runs in 15 seconds and on node2 the same query runs in 40 minutes. This is a big problem because we are migrating a SQL Server database to Oracle RAC 10gR2 (10.2.0.2) on HP-UX Itanium and we can't finish the project because of this performance problem!!! PLEASE HELP ME ORACLE GURUS!!! I opened a TAR at Metalink 3 months ago, but they can't help me with this. Can anyone explain this event to me please? "gc cr multi block request 92011 0.33 737.49"
    This is the query that I run on both nodes, and this is the tkprof of the trace file when I run it on the second node:
    select /*+ NO_PARALLEL(t) */ sum(t.saldo_em_real)
    from
    brcapdb2.titulo t
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 13.76 790.26 0 107219 0 0
    total 3 13.76 790.26 0 107219 0 0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 31
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 2 0.00 0.00
    SQL*Net message from client 2 0.02 0.02
    gc cr multi block request 92011 0.33 737.49
    gc current grant busy 1 0.00 0.00
    latch: KCL gc element parent latch 4 0.00 0.00
    latch: object queue header operation 17 0.00 0.01
    latch free 6 0.00 0.00
    gc remaster 9 1.94 9.00
    gcs drm freeze in enter server mode 19 1.97 30.52
    gc current block 2-way 10 0.61 2.72
    SQL*Net break/reset to client 1 0.01 0.01
    Please help me if you can!!
    Tks,
    Paulo.

    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 2 0.00 0.00
    SQL*Net message from client 2 0.02 0.02
    gc cr multi block request 92011 0.33 737.49
    gc current grant busy 1 0.00 0.00
    latch: KCL gc element parent latch 4 0.00 0.00
    latch: object queue header operation 17 0.00 0.01
    latch free 6 0.00 0.00
    gc remaster 9 1.94 9.00
    gcs drm freeze in enter server mode 19 1.97 30.52
    gc current block 2-way 10 0.61 2.72
    SQL*Net break/reset to client 1 0.01 0.01
    There's a lot of global cache traffic going on. All those multiblock transfers seem to be saturating the interconnect. Some things to consider:
    - Have a look at metalink note 3951017.8 for a possible fix
    - Check that your interconnect is properly configured
    - Try running the query with PARALLEL hint. This would mean doing disk I/O instead of buffer & interconnect I/O.
    edit: changed "without NOPARALLEL" to "with PARALLEL"
    Message was edited by:
    antti.koskinen
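    A minimal sketch of that last suggestion, applied to the query from the question (the degree of 4 is only illustrative; choose one suitable for the node):
    select /*+ PARALLEL(t, 4) */ sum(t.saldo_em_real)
    from brcapdb2.titulo t;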

  • Performance Problems with Excel Client

    Hello all,
    We installed BPC 7.5 SP7 NW on our customer's system and experience massive performance problems on our local clients in combination with Excel and MS Office in general.
    • When opening "Maintain Dimension Members" it takes some time (approx. 2 min) to open the Excel sheet with the elements. "Save to Server" as well as "Process Dimension" also take time to execute. When executing "Process Dimension" the sandbox appears and it takes approx. 2 min until the Process popup window appears. The process itself has normal performance.
    • We also experienced slow performance with the open or save dynamic templates dialog in the Excel Client. It takes some time until this dialog appears.
    In general, we experience slow performance every time BPC is opening a pop up window! We also experience slow performance when opening a (non BPC) MS Office document.
    In order to resolve this issue we looked at the following notes and forum entries:
    Notes: 1541181, 1531059, 1507575, 1479436, 1448502, 1288810
    Forum: BPC NW slow Performance (Expand, Refresh, Saving) - How to enhance it?, [BPC for Excel] Dimension clicked, then Excel hangs
    Can anybody give me tips on how to solve this issue?
    Thanks in Advance

    Hi,
    It will work if, in IE under Advanced > Security, you uncheck "Check for publisher's certificate revocation".
    =======================
    http://CSC3-2004-crl.verisign.com/CSC3-2004.crl
    http://ocsp.verisign.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBQ%2FxkCfyHfJr7GQ6M658NRZ4SHo%2FAQUCPVR6Pv%2BPT1kNnxoz1t4qN%2B5xTcCEH6eR%2B%2FPmw5g6qZVJ2HNXK4%3D
    http://crl.verisign.com/pca3.crl
    =========================
    Regards to all,
    Laurentiu Bogdan

  • Performance problem - XML and Hashtable

    Hello!
    I have a strange (or not strange, we don't know) performance problem. The situation is simple: our application runs on Oracle IAS, and gets XML messages from an MQ, and we have to process them.
    Sample message:
    <RequestDeal RequestId="RequestDeal-20090209200010-998493885" CurrentFragment="0" EndFragment="51" Expires="2009-03-7T08:00:10">
         <Deal>
              <Id>385011</Id>
              <ModifyDate>2009-02-09</ModifyDate>
              <PastDuesOnly>false</PastDuesOnly>
         </Deal>
         <Deal>
              <Id>385015</Id>
              <ModifyDate>2009-02-09</ModifyDate>
              <PastDuesOnly>false</PastDuesOnly>
         </Deal>
    </RequestDeal>
    (There's an average of 50000-80000 deals in the whole message.)
    After the application receives the whole message, we'd like to parse it, and put the values into a hashtable for further processing:
        Hashtable ht = new Hashtable();
        Node eDeals = getElementNode(docReq, "/RequestDeal");
        Node eDeal = (Element)(eDeals.getFirstChild());
        long start = System.currentTimeMillis();
        while (eDeal != null) {
          String id = getElementText((Element)eDeal, "Id");
          Date modifyDate = getElementDateValue((Element)eDeal, "ModifyDate");
          boolean szukitett = getElementBool((Element)eDeal, "PastDuesOnly", false);
          ht.put(id, new UgyletStatusz(id, modifyDate, szukitett));
          eDeal = eDeal.getNextSibling();
        }
        logger.info("Request adatok betöltve.");  // "Request data loaded."
        long end = System.currentTimeMillis();
        logger.info("Hosszu muvelet ideje: " + (end - start) + "ms");  // "Duration of the long operation: ... ms"
    The problem is the while statement. On my PC it runs for 15-25 seconds, depending on the number of deals; but on our customer's server it runs for 2-5 hours with very high CPU load. The application uses the Oracle XML parser.
    On my PC there are Websphere MQ 5.3 and Oracle 10ias with 1.5 JVM.
    On our customers's server there are Websphere MQ 5.3 and Oracle 9ias with 1.4 HotSpot JVM.
    Do you have any idea, what can be the cause of the problem?

    gyulaur wrote:
    Hello!
    The problem is the while statement. On my PC it runs for 15-25 seconds, depends on the number of the deals; but on our customer's server, it runs for 2-5 hours with very high CPU load. The application uses oracle xml parser.
    On my PC there are Websphere MQ 5.3 and Oracle 10ias with 1.5 JVM.
    On our customers's server there are Websphere MQ 5.3 and Oracle 9ias with 1.4 HotSpot JVM.
    Do you have any idea what can be the cause of the problem?
    All sorts of things are possible.
    - MQ is configured differently. For instance transactional versus non-transactional.
    - The customer uses a network (multiple computers) and you use a single box, and something is using a lot of bandwidth on the network. It could be your process, one of the dependent processes or something else that has nothing to do with your system.
    - The processing computer is loaded with something else, or something is artificially limiting the processing time of your application (there is an app that allows one to limit another app to one CPU).
    - At least one version of 1.4 had an XML bug that consumed massive amounts of memory when processing large XML. Sound familiar? If the physical memory is not up to it then it will be thrashing the hard drive as it swaps virtual memory in and out.
    - Of course virtual memory causing swaps would impact it regardless of the cause.
    - The database is loaded.
    You can of course at least get the same version of java that they are using. Actually that would seem like a rather good idea to me regardless.

  • PL/SQL Performance problem

    I am facing a performance problem with my current application (a PL/SQL packaged procedure).
    My application takes data from 4 temporary tables, does a lot of validation and
    puts them into permanent tables (updates if present, else inserts).
    One of the temporary tables is the parent table and can have 0 or more rows in
    the other tables.
    I have analyzed all my tables and indexes and checked all my SQLs.
    They all seem to be using the indexes correctly.
    There are 1.6 million records combined in all 4 tables.
    I am using Oracle 8i.
    How do I determine what is causing the problem and which part is taking time?
    Please help.
    The skeleton of the code which we have written looks like this
    MAIN LOOP ( 255308 records)-- Parent temporary table
    -----lots of validation-----
    update permanent_table1
    if sql%rowcount = 0 then
    insert into permanent_table1
    Loop2 (0-5 records)-- child temporary table1
    -----lots of validation-----
    update permanent_table2
    if sql%rowcount = 0 then
    insert into permanent_table2
    end loop2
    Loop3 (0-5 records)-- child temporary table2
    -----lots of validation-----
    update permanent_table3
    if sql%rowcount = 0 then
    insert into permanent_table3
    end loop3
    Loop4 (0-5 records)-- child temporary table3
    -----lots of validation-----
    update permanent_table4
    if sql%rowcount = 0 then
    insert into permanent_table4
    end loop4
    -- COMMIT after every 3000 records
    END MAIN LOOP
    Thanks
    Ashwin N.

    Do this instead of ditching the PL/SQL.
    DECLARE
    TYPE NumTab IS TABLE OF NUMBER(4) INDEX BY BINARY_INTEGER;
    TYPE NameTab IS TABLE OF CHAR(15) INDEX BY BINARY_INTEGER;
    pnums NumTab;
    pnames NameTab;
    t1 NUMBER(5);
    t2 NUMBER(5);
    t3 NUMBER(5);
    BEGIN
    FOR j IN 1..5000 LOOP -- load index-by tables
    pnums(j) := j;
    pnames(j) := 'Part No. ' || TO_CHAR(j);
    END LOOP;
    t1 := dbms_utility.get_time;
    FOR i IN 1..5000 LOOP -- use FOR loop
    INSERT INTO parts VALUES (pnums(i), pnames(i));
    END LOOP;
    t2 := dbms_utility.get_time;
    FORALL i IN 1..5000 -- use FORALL statement
    INSERT INTO parts VALUES (pnums(i), pnames(i));
    t3 := dbms_utility.get_time;
    dbms_output.put_line('Execution Time (secs)');
    dbms_output.put_line('---------------------');
    dbms_output.put_line('FOR loop: ' || TO_CHAR(t2 - t1));
    dbms_output.put_line('FORALL: ' || TO_CHAR(t3 - t2));
    END;
    Try this link, http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/05_colls.htm#23723
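    The same bulk technique can be sketched against the update-then-insert pattern from the question. The table and column names below (temp_parent, permanent_table1, id, val) are hypothetical stand-ins, since the post only shows a skeleton, and all the per-row validation is omitted; the point is one bulk UPDATE, then one bulk INSERT of the rows that updated nothing, using SQL%BULK_ROWCOUNT instead of the row-by-row SQL%ROWCOUNT check.
    DECLARE
    TYPE NumTab IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
    TYPE ValTab IS TABLE OF VARCHAR2(100) INDEX BY BINARY_INTEGER;
    ids NumTab;
    vals ValTab;
    miss_ids NumTab;
    miss_vals ValTab;
    j BINARY_INTEGER := 0;
    BEGIN
    -- pull the staging rows in one pass (for very large sets, fetch with a LIMIT instead)
    SELECT id, val BULK COLLECT INTO ids, vals FROM temp_parent;
    -- one bulk UPDATE instead of one UPDATE per parent record
    FORALL i IN 1..ids.COUNT
    UPDATE permanent_table1 SET val = vals(i) WHERE id = ids(i);
    -- collect the rows that updated nothing...
    FOR i IN 1..ids.COUNT LOOP
    IF SQL%BULK_ROWCOUNT(i) = 0 THEN
    j := j + 1;
    miss_ids(j) := ids(i);
    miss_vals(j) := vals(i);
    END IF;
    END LOOP;
    -- ...and insert them in one bulk statement
    FORALL i IN 1..j
    INSERT INTO permanent_table1 (id, val) VALUES (miss_ids(i), miss_vals(i));
    COMMIT;
    END;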

  • Performance problem in Zstick report...

    Hi Experts,
    I am facing a performance problem in a custom stock report of Material Management.
    In this report I am fetching all the materials with their batches to get the desired output; at a time this report processes 36,000-plus unique combinations of material and batch.
    This report takes around 30 minutes to execute, and it has to be viewed regularly every 2 hours.
    To read the batch characteristics value I am using FM -> '/SAPMP/CE1_BATCH_GET_DETAIL'
    Is there any way to increase the performance of this report? The output of the report is in ALV.
    Can I have a refresh button in the report so that the data gets refreshed automatically without executing it again, or is there any cache memory concept?
    Note: I have declared all the itabs with TYPE SORTED, and all the select queries are fetched with key and index.
    Thanks
    Rohit Gharwar

    Hello,
    SE30 is old. Switch on a trace in ST12 while running this program and identify where exactly most of the time is being spent. If you see high CPU time, the problem is with the ABAP code; you can figure out the exact program/function module from the ST12 trace where the time is being spent. If you see high database time in ST12, the problem is a database-related issue, so you basically have to analyze the SQL statements from the performance traces in ST12. This should resolve your issue.
    Yours Sincerely
    Dileep

  • SQL report performance problem

    I have a SQL classic report in Apex 4.0.2 and database 11.2.0.2.0 with a performance problem.
    The report is based on a PL/SQL function returning a query. The query is based on a view and pl/sql functions. The Apex parsing schema has select grant on the view only, not the underlying objects.
    The generated query runs in 1-2 seconds in SQL*Plus (logged in as the Apex parsing schema user), but takes many minutes in Apex. I have found, by monitoring the database sessions via TOAD, that the explain plans in the Apex and SQL*Plus sessions are very different.
    The summary:
    In sqlplus SELECT STATEMENT ALL_ROWS Cost: 3,695                                                                            
    In Apex SELECT STATEMENT ALL_ROWS Cost: 3,108,551                                                        
    What could be the cause of this?
    I found a blog and Metalink note about different explain plans for different users. They suggested to set optimizer_secure_view_merging='FALSE', but that didn't help.

    Hmmm, it runs fast again in SQL Workshop. I didn't expect that, because both the application and SQL Workshop use SYS.DBMS_SYS_SQL to parse the query.
    Only the explain plan doesn't show anything.
    To add: I changed the report source to the query the pl/sql function would generate, so the selects are the same in SQL Workshop and in the application. Still in the application it's horribly slow.
    So, Apex does do something different in the application compared to SQL Workshop.
    Edited by: InoL on Aug 5, 2011 4:50 PM
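    One way to see what each session is really doing is to pull the plan straight from the shared pool instead of relying on EXPLAIN PLAN; this is only a sketch, and the sql_id below is a placeholder you would take from the first query after running the report both in Apex and in SQL*Plus:
    -- find the cursor(s) for the generated report query
    select sql_id, child_number, plan_hash_value, executions
    from v$sql
    where sql_text like '%<start of the generated report query>%';
    -- show the plan Oracle actually used for a given cursor
    select * from table(dbms_xplan.display_cursor('&sql_id', null, 'ALLSTATS LAST'));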

  • Performance problem with WPF Viewer CRVS2010

    Hi,
    We are using Crystal Reports 2010 and the new WPF Viewer. Last week when we set up a test machine to run our integration tests (several hundred) all report tests failed (about 30 tests) with a timeout exception.
    The testmachine setup:
    HP DL 580 G5
    VMware ESXi 4.0
    Guest OS: Windows 7 Enterprise 64-bit
    Memory (guest OS): 3GB
    CPU: 1
    Visual Studio 2010
    Crystal Reports for Visual Studio 2010 with 64 bit runtime installed
    Visual Studio 2008 installed
    Microsoft Office 2010 installed
    McAfee antivirus
    There are about 10 other virtual machines on the same HW.
    I think the performance problem is related to text objects on a report document viewed in a WPF Viewer. I made a simple WPF GUI with 2 buttons; the first button executes a very simple report that only has a text object with a few words in it, and the other button is also a simple report with only 1 text object with approx. 100 words (about 800 characters).
    The first report executes and displays almost instantly, and the second report executes instantly but displays after approx. 1 min 30 sec.
    And "execute" in this context means that all the VB.Net code compiles and runs without any exception or performance problem. The performance problem seems to come after viewer.Show() (in the code below) has executed.
    I did another test on the second report and replaced the text object with a formula field with the same text as the text object, and this test executed and displayed the report instantly.
    So the performance problem seems to have something to do with rendering of text objects in the WPF Viewer on a virtual machine with the above setup.
    I've made several tests on local machines with Windows XP (32-bit) or Windows 7 (64-bit) installed and none of them have this performance problem. It's not a critical issue for us because our users will run this application on their local PCs with Windows 7 64-bit, but it is a bit problematic for our project not being able to run all of our integration tests. I will probably solve this by using a local PC instead.
    Here is the VB.Net code Im using to View the reports:
    Private Sub LightWeight_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
            Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
            lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeight.rpt")
            ' Initialize Viewer
            Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
            viewer.Owner = Me
            viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
            viewer.Show()
        End Sub
        Private Sub LightWeightSlow_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
            Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
            lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeightSlow.rpt")
            ' Initialize Viewer
            Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
            viewer.Owner = Me
            viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
            viewer.Show()
        End Sub
    The reports are 2 empty default reports with only 1 textobject on the details section.
    // Thomas

    See if the KB [1448013  - Connecting to Oracle database. Error; Failed to load database information|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433343338333033313333%7D.do] helps.
    Also the following may not hurt to have a look at (if only for ideas):
    [1217021 - Err Msg: "Unable to connect invalid log on parameters" using Oracle in VS .NET|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333233313337333033323331%7D.do]
    [1471508 - Logon error when connecting to Oracle database in a VS .NET application|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433373331333533303338%7D.do]
    [1196712 - Error: "Failed to load the oci.dll" in ASP.NET application against an Oracle database|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333133393336333733313332%7D.do]
    Ludek
    Follow us on Twitter http://twitter.com/SAPCRNetSup

  • JRC 2: Performance Problem

    Hi.
    Our reporting component used JRC 1.x before we upgraded to JRC 2.x. We got two issues after upgrading.
    The first issue I already solved with a workaround, which I published on stackoverflow.com (1). Does anyone know where I can find the issue management system to report this issue?
    The second issue is a big performance problem within our project. We opened a report with 6 subreports (each including 1 to 3 tables) in 2-4 seconds using JRC 1. If we open the same report using JRC 2, we wait up to 60 seconds.
    These methods require more time with JRC 2 compared to JRC 1:
    ReportClientDocument#open(String, int);
    SubreportController#setTableLocation(String, ITable, ITable)
    DatabaseController#setTableLocation(ITable, ITable)
    Each invocation of one of these methods requires 2-4 seconds.
    Thank you in advance.
    Best regards
    Thomas
    (1) http://stackoverflow.com/questions/479405/replace-a-database-connection-for-subreports-with-jrc

    hello ...
    My report is "Crystal Reports 11" => "OLE DB" => "Add Command (select * from table)".
    Code (JRC): Eclipse + Crystal Reports for Eclipse version 2 => "cr4e-all-in-one-win_2.0.1.zip"
    <%@ page contentType="text/html; charset=UTF-8"
    import="
    com.crystaldecisions.report.web.viewer.CrystalReportViewer,
    com.crystaldecisions.reports.sdk.ReportClientDocument,
    com.crystaldecisions.sdk.occa.report.lib.ReportSDKExceptionBase,
    java.sql.Connection,
    java.sql.DriverManager,
    java.sql.ResultSet,
    java.sql.SQLException,
    java.sql.Statement" %>
    <%
         try {
              String reportName = "report.rpt";
              ReportClientDocument clientDoc = new ReportClientDocument();
              clientDoc.open(reportName, 0);
              String tableAlias = "Command";
              clientDoc.getDatabaseController().setDataSource(myResult("SELECT * FROM table"), tableAlias,tableAlias);
              CrystalReportViewer crystalReportPageViewer = new CrystalReportViewer();
              crystalReportPageViewer.setReportSource(clientDoc.getReportSource());
              crystalReportPageViewer.processHttpRequest(request, response, application, null);
          } catch (ReportSDKExceptionBase e) {
               e.printStackTrace();
               out.println(e);
          }
     %>
    I simplified the code; *myResult("SELECT * FROM table")* is absolutely not the problem,
    and this code runs with absolutely no problem in "Crystal Reports for Eclipse" version 1,
    but in version 2 it fails with this error:
    com.crystaldecisions.sdk.occa.report.lib.ReportSDKException: Unexpected database connector error ---- Error code:-2147467259 Error code name:failed
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.a(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.a(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.if(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.a(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter$2.a(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter$2.call(Unknown Source)
         at com.crystaldecisions.reports.common.ThreadGuard.syncExecute(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.for(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.int(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.request(Unknown Source)
         at com.businessobjects.sdk.erom.jrc.a.a(Unknown Source)
         at com.businessobjects.sdk.erom.jrc.a.execute(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.RemoteAgent$a.execute(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.CommunicationChannel.a(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.RemoteAgent.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportClientDocument.if(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportClientDocument.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportClientDocument.new(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.b9.onDataSourceChanged(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.DatabaseController.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.DatabaseController.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.DatabaseController.setDataSource(Unknown Source)
         at org.apache.jsp.No_005f1.Eclipse_005fJTDS_005fSQL2005_005fTable_002dviewer_jsp._jspService(Eclipse_005fJTDS_005fSQL2005_005fTable_002dviewer_jsp.java:106)
         at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
         at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:374)
         at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:342)
         at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:267)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
         at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
         at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
         at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
         at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
         at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
         at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
         at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
         at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
         at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
         at java.lang.Thread.run(Unknown Source)
    Caused by: com.crystaldecisions.reports.common.QueryEngineException: Unexpected database connector error
         at com.crystaldecisions.reports.queryengine.Connection.bf(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Rowset.z3(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Rowset.bL(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Rowset.zM(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Connection.a(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.a(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.if(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.try(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.a(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.u7(Unknown Source)
         at com.crystaldecisions.reports.datafoundation.DataFoundation.a(Unknown Source)
         at com.crystaldecisions.reports.dataengine.dfadapter.DFAdapter.a(Unknown Source)
         at com.crystaldecisions.reports.dataengine.dfadapter.CheckDatabaseHelper.a(Unknown Source)
         at com.crystaldecisions.reports.dataengine.datafoundation.CheckDatabaseCommand.new(Unknown Source)
         at com.crystaldecisions.reports.common.CommandManager.a(Unknown Source)
         at com.crystaldecisions.reports.common.Document.a(Unknown Source)
         at com.crystaldecisions.reports.dataengine.VerifyDatabaseCommand.new(Unknown Source)
         at com.crystaldecisions.reports.common.CommandManager.a(Unknown Source)
         at com.crystaldecisions.reports.common.Document.a(Unknown Source)
         at com.businessobjects.reports.sdk.requesthandler.f.a(Unknown Source)
         at com.businessobjects.reports.sdk.requesthandler.DatabaseRequestHandler.a(Unknown Source)
         at com.businessobjects.reports.sdk.requesthandler.DatabaseRequestHandler.if(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.do(Unknown Source)
         ... 39 more
    Please help me and tell me why....

  • Performance problem with S873,S691,S679,S716

    Hi All,
    I am using tables S873, S691, S679 and S716 in my report. Since these hold huge amounts of data, they are creating a performance problem. Right now I am selecting data from the tables with SELECT SINGLE inside a LOOP ... ENDLOOP.
    If I try to take it out and use FOR ALL ENTRIES, it takes even longer.
    I don't have all the keys with me and I am trying to use an index wherever possible.
    Any hints or comments are really appreciated.
    My code goes like this.
    This is not the full code... only the part where I am fetching data from the 'S' tables.
    loop at i_mvke.
    SELECT kunnr zzobklgq zzcbklgq
    FROM s679 INTO
    (s679-kunnr,s679-zzobklgq,s679-zzcbklgq) WHERE
    matnr EQ i_mvke-matnr AND
    vrsio EQ p_versn AND
    sptag IN s_billed AND
    werks EQ p_werks AND
    vkorg EQ i_mvke-vkorg AND
    kunnr IN s_kunnr.
    SELECT SUM( zznslqty ) FROM s691 INTO
    w_s691_net_bill_qty WHERE
    * Use index Z5
    matnr EQ i_mvke-matnr AND
    spmon EQ w_spmon AND
    vrsio EQ p_versn AND
    werks EQ p_werks AND
    vkorg EQ i_mvke-vkorg AND
    kunnr IN s_kunnr.
    SELECT SUM( zznslqty ) SUM( zzninvnum ) FROM s873 INTO
    (work01-zznslqty, work01-zzninvnum) WHERE
    * use index Z02
    vkorg EQ i_mvke-vkorg AND
    werks EQ p_werks AND
    matnr EQ i_mvke-matnr AND
    sptag IN s_billed AND
    vrsio EQ p_versn AND
    kunnr IN s_kunnr.
    SELECT zcrdotov FROM s716 INTO s716-zcrdotov WHERE
    matnr EQ i_mvke-matnr AND
    sptag IN s_billed AND
    vkorg EQ i_mvke-vkorg AND
    kunnr IN s_kunnr AND
    werks = p_werks AND
    vrsio EQ p_versn.
    * GET THE RECORD TOTAL
    ADD 1 TO w_s716_total_count.
    * GET THE ONTIME TOTAL
    IF s716-zcrdotov = 1.
    ADD 1 TO w_s716_ontime_count.
    ENDIF.
    ENDSELECT.
    PERFORM fill_output_table.
    endloop. "i_mvke
    Message was edited by: Agasti Kale

    hi
    good
    wrong->
    SELECT kunnr zzobklgq zzcbklgq
    FROM s679 INTO
    (s679-kunnr,s679-zzobklgq,s679-zzcbklgq) WHERE
    matnr EQ i_mvke-matnr AND
    vrsio EQ p_versn AND
    sptag IN s_billed AND
    werks EQ p_werks AND
    vkorg EQ i_mvke-vkorg AND
    kunnr IN s_kunnr.
    write->
    SELECT kunnr zzobklgq zzcbklgq
    FROM s679 INTO CORRESPONDING FIELDS OF TABLE I_MVKE
    WHERE
    matnr EQ i_mvke-matnr AND
    vrsio EQ p_versn AND
    sptag IN s_billed AND
    werks EQ p_werks AND
    vkorg EQ i_mvke-vkorg AND
    kunnr IN s_kunnr.
    Do the changes accordingly in the remaining select statements. You have not posted the detailed report, otherwise I could have helped you with the other select statements as well.
    thanks
    mrutyun
