Performance problem - XML and Hashtable

Hello!
I have a strange (or not strange, we don't know) performance problem. The situation is simple: our application runs on Oracle iAS, receives XML messages from an MQ queue, and we have to process them.
Sample message:
<RequestDeal RequestId="RequestDeal-20090209200010-998493885" CurrentFragment="0" EndFragment="51" Expires="2009-03-7T08:00:10">
     <Deal>
          <Id>385011</Id>
          <ModifyDate>2009-02-09</ModifyDate>
          <PastDuesOnly>false</PastDuesOnly>
     </Deal>
     <Deal>
          <Id>385015</Id>
          <ModifyDate>2009-02-09</ModifyDate>
          <PastDuesOnly>false</PastDuesOnly>
     </Deal>
</RequestDeal>
(There are on average 50,000-80,000 deals in the whole message.)
After the application receives the whole message, we'd like to parse it, and put the values into a hashtable for further processing:
    Hashtable ht = new Hashtable();
    // getElementNode, getElementText, getElementDateValue and getElementBool are our own helper methods
    Node eDeals = getElementNode(docReq, "/RequestDeal");
    Node eDeal = eDeals.getFirstChild();
    long start = System.currentTimeMillis();
    while (eDeal != null) {
      String id = getElementText((Element) eDeal, "Id");
      Date modifyDate = getElementDateValue((Element) eDeal, "ModifyDate");
      boolean szukitett = getElementBool((Element) eDeal, "PastDuesOnly", false);
      ht.put(id, new UgyletStatusz(id, modifyDate, szukitett));
      eDeal = eDeal.getNextSibling();
    }
    logger.info("Request data loaded.");
    long end = System.currentTimeMillis();
    logger.info("Duration of the long operation: " + (end - start) + "ms");
The problem is the while loop. On my PC it runs for 15-25 seconds, depending on the number of deals; but on our customer's server it runs for 2-5 hours with very high CPU load. The application uses the Oracle XML parser.
My PC runs WebSphere MQ 5.3 and Oracle 10g iAS with a 1.5 JVM.
Our customer's server runs WebSphere MQ 5.3 and Oracle 9iAS with a 1.4 HotSpot JVM.
Do you have any idea what could be causing the problem?

gyulaur wrote:
Hello!
The problem is the while loop. On my PC it runs for 15-25 seconds, depending on the number of deals; but on our customer's server it runs for 2-5 hours with very high CPU load. The application uses the Oracle XML parser.
My PC runs WebSphere MQ 5.3 and Oracle 10g iAS with a 1.5 JVM.
Our customer's server runs WebSphere MQ 5.3 and Oracle 9iAS with a 1.4 HotSpot JVM.
Do you have any idea what could be causing the problem?
All sorts of things are possible.
- MQ is configured differently, for instance transactional versus non-transactional.
- The customer uses a network (multiple computers) while you use a single box, and something is using a lot of bandwidth on the network. It could be your process, one of the dependent processes, or something else that has nothing to do with your system.
- The processing computer is loaded with something else, or something is artificially limiting the processing time of your application (there are apps that can restrict another app to one CPU).
- At least one version of 1.4 had an XML bug that consumed massive amounts of memory when processing large XML. Sound familiar? If the physical memory is not up to it, the machine will be thrashing the hard drive as it swaps virtual memory in and out.
- Of course, virtual memory causing swaps would impact it regardless of the cause.
- The database is loaded.
You can of course at least get the same version of Java that they are using. Actually that would seem like a rather good idea to me regardless.
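One more thing worth checking: if the per-field helper calls (getElementText and friends) evaluate an XPath expression against every <Deal>, that alone can dominate the runtime on an older parser/JVM. As a cross-check, here is a minimal sketch of a single-pass traversal that reads the child elements directly instead of using XPath per field. This is an illustration under assumptions, not your helpers' actual implementation: UgyletStatusz construction and date parsing are omitted, and raw String values are stored instead.
    import java.util.Hashtable;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;

    public class DealLoader {
      // Walks the children of <RequestDeal> once; no XPath evaluation per field.
      public static Hashtable loadDeals(Document docReq) {
        Hashtable ht = new Hashtable();
        Element root = docReq.getDocumentElement(); // <RequestDeal>
        for (Node deal = root.getFirstChild(); deal != null; deal = deal.getNextSibling()) {
          if (deal.getNodeType() != Node.ELEMENT_NODE) continue; // skip whitespace text nodes
          String id = null, modifyDate = null, pastDuesOnly = null;
          for (Node f = deal.getFirstChild(); f != null; f = f.getNextSibling()) {
            if (f.getNodeType() != Node.ELEMENT_NODE) continue;
            Node text = f.getFirstChild();
            String value = (text == null) ? null : text.getNodeValue();
            if ("Id".equals(f.getNodeName())) id = value;
            else if ("ModifyDate".equals(f.getNodeName())) modifyDate = value;
            else if ("PastDuesOnly".equals(f.getNodeName())) pastDuesOnly = value;
          }
          if (id != null) ht.put(id, new String[] { id, modifyDate, pastDuesOnly });
        }
        return ht;
      }
    }
If this version is also fast on the customer's server, the environment (JVM XML bug, swapping) is the likelier culprit; if it is slow there as well, look at the size of the DOM itself.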

Similar Messages

  • Performance Problems - Index and Statistics

    Dear Gurus,
    I am having problems with losing indexes and statistics on cubes. It seems my indexes are flagged as too old, which in fact they are not; they were just created a month back. We check the indexes daily, and the Manage tab returns RED.
    Please help.

    Dear Mr Syed,
    The solution steps I mentioned in my previous reply already explain the so-called RE-ORG of tables; however, to clarify more on that issue:
    Occasionally, the Oracle Cost-Based Optimizer may calculate the estimated costs for a Full Table Scan lower than those for an Index Scan, although the actual runtime of an access via an index would be considerably lower than the runtime of the Full Table Scan. Some imperative points should be considered in order to perk up the performance and improve on quandary areas such as extensive running times for change runs and aggregate activation & fill-ups.
    Performance problems based on a wrong optimizer decision would show that something serious is missing at the database level, and we need to re-organize the degenerated indexes in order to perk up the overall performance and avoid daily manual (RSRV + RSNAORA) activities on almost similar indexes.
    For re-organizing degenerated indexes, 3 options are available:
    1) DROP INDEX ..., and CREATE INDEX ...
    2) ALTER INDEX <index name> REBUILD (ONLINE PARALLEL x NOLOGGING)
    3) ALTER INDEX <index name> COALESCE [as of Oracle 8i (8.1) only]
    Each option has its pros & cons; option 2 seems to have a lot of advantages.
    Advantages of option 2:
    1) Fast storage in a different tablespace possible
    2) Creates a new index tree
    3) Gives the option to change storage parameters without deleting the index
    4) As of Oracle 8i (8.1), you can avoid a lock on the table by specifying the ONLINE option. In this case, Oracle waits until the resource has been released, and then starts the rebuild. The "resource busy" error no longer occurs.
    I would still leave it to the database tech team, who are best placed to judge and take a call on these.
    This modus operandi could be institutionalized for all fretful cubes and their indexes as well.
    However, I leave the thoughts with you.
    Hope it helps,
    Chetan
    @CP..

  • XML and Hashtable

    My XML is
    <dataroot >
    <OrderQuantities_All>
    <Site>35B43</Site>
    <Article>60224</Article>
    <Order>1</Order>
    </OrderQuantities_All>
    <OrderQuantities_All>
    <Site>35B43</Site>
    <Article>61766</Article>
    <Order>3</Order>
    </OrderQuantities_All>
    <OrderQuantities_All>
    <Site>35B43</Site>
    <Article>61925</Article>
    <Order>5</Order>
    </OrderQuantities_All>
    <OrderQuantities_All>
    <Site>35B43</Site>
    <Article>62301</Article>
    <Order>1</Order>
    </OrderQuantities_All>
    <OrderQuantities_All>
    <Site>35B43</Site>
    <Article>62318</Article>
    <Order>1</Order>
    </OrderQuantities_All>
    I have a method populateOrderQuantities_All(Element workingRoot). The input XML is converted into a list object. In the method, I create a Hashtable object and insert the Site, Article and Order values retrieved from the list. After this, each time I add the Hashtable object to a Vector. Now I need to consolidate the values, i.e., for a particular <Site> I have to gather all <Article> and <Order> values in a Hashtable. I'm stuck; kindly help me to proceed.

    For XML to Hashtable conversion refer
    http://www.awprofessional.com/articles/article.asp?p=27152&seqNum=6&rl=1
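    Beyond the conversion link above, the consolidation step itself is essentially a group-by over the Vector of Hashtables. A minimal sketch, assuming each Hashtable was filled with the element names from the XML above as keys (the class and method names here are made up for illustration):
        import java.util.Enumeration;
        import java.util.Hashtable;
        import java.util.Vector;

        public class OrderConsolidator {
          // rows: one Hashtable per <OrderQuantities_All>, with keys "Site", "Article", "Order".
          // Returns: Site -> Vector of String[] { article, order }.
          public static Hashtable consolidateBySite(Vector rows) {
            Hashtable bySite = new Hashtable();
            for (Enumeration e = rows.elements(); e.hasMoreElements();) {
              Hashtable row = (Hashtable) e.nextElement();
              String site = (String) row.get("Site");
              Vector articles = (Vector) bySite.get(site);
              if (articles == null) {
                articles = new Vector();
                bySite.put(site, articles);
              }
              articles.addElement(new String[] { (String) row.get("Article"),
                                                 (String) row.get("Order") });
            }
            return bySite;
          }
        }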

  • Jdeveloper dual core processor performance problem

    I have a dual core 2.4 GHz processor and 2 GB of RAM, and I'm running JDeveloper 9.0.5.2, and my performance is terrible. Other developers in the company have non-dual-core processors and they can start our application in debug mode, using the JDeveloper embedded OC4J, in 10 seconds, where it takes me 4 minutes!!!!! It is a Struts/EJB web application. Is there anything I can do to help in debug mode??? cheers.
    Murray

    Hi Bernard,
    Which version of McAfee are you using?
    On my (personal) laptop, I'm using McAfee VirusScan 9.0.10. I don't frequently run JDeveloper on this laptop, but when I do, it's not experiencing significant startup delays (it's a very low-power machine: PIII 650, 512Mb).
    McAfee VirusScan seems to have very few configuration options (noticeably different from Norton, which I use on my corporate desktop machine). I specifically remember changing the "File Types to Scan" option to "Program files and documents only". You can get to this by right-clicking the "M" notification area icon, VirusScan->Options menu, Advanced button on the ActiveShield page.
    In Norton, I think I have it configured so that it only scans files on write rather than on read. I also exclude directories which contain JDeveloper installs or other large Java apps (although scanning only on write eliminates most of the performance problems anyway and still leaves your system reasonably secure).
    The easiest way to convince your MIS dept that the virus checker is the source of your problems might be to ask them to allow you to turn it off in order to test the difference it makes to performance. It's a reasonable request to make if you're trying to eliminate possible causes for the slowdown (from the description you gave, the AV upgrade does sound like the first place I'd start looking).
    If the virus checker is the source of your problems, you'll probably be seeing massive slowness in most large Java applications that have a large number of JARs on their classpath.
    Thanks,
    Brian

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable, but it interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype
    ) xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
      <Root>
        <Id>123456789</Id>
        {for $e in $r/Element
         return
           <Element>
             <Subelement1>
               {$e/Subelement1/Code}
               <Description>
                 {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
               </Description>
             </Subelement1>
             <Subelement2>
               {$e/Subelement2/Code}
               <Description>
                 {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
               </Description>
             </Subelement2>
             <Subelement3>
               {$e/Subelement3/Code}
               <Description>
                 {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
               </Description>
             </Subelement3>
           </Element>}
      </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.

  • XML data source - Performance problem

    Environment: Crystal Reports 2008, crjava-runtime_12.2.200 jar files, Java 1.5, and an XML data source.
    We are generating reports from our Java application. The .rpt file is designed using CR 2008.
    The problem we have is that it takes a long time to generate a report, at least 5-6 minutes. The XML file size is around 300 KB; the .rpt file size is 2000 KB.
    Steps involved in report generation in the Java program:
    1. Creating a ReportClientDocument.
    2. Opening the ReportClientDocument (reportClientDocument.open()).
    3. Constructing the XML data source IXMLDataSet (xml_ds), setting the XML and XSD.
    4. Setting the data source on the database controller: reportClientDocument.getDatabaseController().setDataSource(xml_ds, "", "").
    5. Exporting the report in PDF format.
    I don't know what I am doing wrong. Please advise me on how to improve the performance of this function.
    Thanks,
    Makesh
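    For reference, the five steps above map onto the JRC API roughly as in the sketch below. The package names and the PrintOutputController export call are written from memory of the standard Crystal Reports Java SDK and should be treated as assumptions to verify against your jar versions:
        import java.io.FileOutputStream;
        import java.io.InputStream;

        import com.crystaldecisions.reports.sdk.ReportClientDocument;
        import com.crystaldecisions.sdk.occa.report.data.IXMLDataSet;
        import com.crystaldecisions.sdk.occa.report.exportoptions.ReportExportFormat;

        public class ReportSketch {
          public static void exportPdf(String rptPath, IXMLDataSet xmlDs, String pdfPath) throws Exception {
            ReportClientDocument doc = new ReportClientDocument();     // step 1: create
            doc.open(rptPath, 0);                                      // step 2: open the .rpt
            doc.getDatabaseController().setDataSource(xmlDs, "", "");  // steps 3-4: bind the XML/XSD data source
            InputStream in = doc.getPrintOutputController().export(ReportExportFormat.PDF); // step 5: export
            FileOutputStream out = new FileOutputStream(pdfPath);
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) > 0;) {
              out.write(buf, 0, n);
            }
            out.close();
            in.close();
            doc.close();
          }
        }
    Timing each step separately (open vs. setDataSource vs. export) is a quick way to see where the 5-6 minutes go.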

    Ted,
    I figured out the cause of the problem. I am using JDeveloper as my IDE, and that's what was causing the delay. When I deployed the application and ran it, it took just 30 secs to generate the report using the "Pushing" mechanism. That's amazing. Thanks for your support.
    I have another problem. I am not sure whether to post a separate thread for it; anyway, I'll explain the problem here. If you think it needs another thread, I will post a new one.
    When I use my IDE or the following command, the JRC runs fine and generates the reports with graphs and all.
    java -classpath C:\JDK\java1.5\bin\CERT.jar;C:\oracle\JDeveloper10.1.3\jdbc\lib\ojdbc14dms.jar;C:\oracle\JDeveloper10.1.3\jdbc\lib\orai18n.jar;C:\oracle\JDeveloper10.1.3\jdbc\lib\ocrs12.jar;C:\oracle\JDeveloper10.1.3\diagnostics\lib\ojdl.jar;C:\oracle\JDeveloper10.1.3\lib\dms.jar;C:\Apps\CERTLib\log4j-1.2.8.jar;C:\Apps\CERTLib\CRLib\com.azalea.ufl.barcode.1.0.jar;C:\Apps\CERTLib\CRLib\commons-collections-3.1.jar;C:\Apps\CERTLib\CRLib\commons-configuration-1.2.jar;C:\Apps\CERTLib\CRLib\commons-lang-2.1.jar;C:\Apps\CERTLib\CRLib\commons-logging.jar;C:\Apps\CERTLib\CRLib\CrystalCommon2.jar;C:\Apps\CERTLib\CRLib\CrystalReportsRuntime.jar;C:\Apps\CERTLib\CRLib\cvom.jar;C:\Apps\CERTLib\CRLib\DatabaseConnectors.jar;C:\Apps\CERTLib\CRLib\icu4j.jar;C:\Apps\CERTLib\CRLib\jai_imageio.jar;C:\Apps\CERTLib\CRLib\JDBInterface.jar;C:\Apps\CERTLib\CRLib\jrcerom.jar;C:\Apps\CERTLib\CRLib\keycodeDecoder.jar;C:\Apps\CERTLib\CRLib\log4j.jar;C:\Apps\CERTLib\CRLib\logging.jar;C:\Apps\CERTLib\CRLib\pfjgraphics.jar;C:\Apps\CERTLib\CRLib\QueryBuilder.jar;C:\Apps\CERTLib\CRLib\webreporting-jsf.jar;C:\Apps\CERTLib\CRLib\webreporting.jar;C:\Apps\CERTLib\CRLib\XMLConnector.jar;C:\Apps\CERTLib\CRLib\xpp3.jar;C:\Apps\CERTLib\iText-2.1.5.jar -Xms256m -Xmx512m com.cert.gui.CERTMainFrame
    Note: CERT.jar has all the application classes and library classes.
    But when I package all these jar files into a single jar file and run the command below, it generates reports with all the features except graphs.
    java -Xms256m -Xmx512m -jar CERT.jar (OR)
    java -classpath C:\JDK\java1.5\bin\CERT.jar; -Xms256m -Xmx512m com.cert.gui.CERTMainFrame
    Note: CERT.jar has all the application classes and library classes
    I need the above command to run successfully as I am planning to use Java Web Start for application deployment.
    Please help me to solve this problem.
    Thanks,
    Makesh

  • Performance Problem in parsing large XML file (15MB)

    Hi,
    I'm trying to parse a large XML file (15 MB) and am facing a clear performance problem. A simple XML validation using the following code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobfromFile(
    tempCLOB,
    targetFile,
    DBMS_LOB.getLength(targetFile),
    dest_offset,
    src_offset,
    nls_charset_id(CONSTANT_CHARSET),
    lang_context,
    conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    p_xml_document.schemaValidate();
    is taking 30 mins on an HP-UX (4 GB RAM, 2 CPU) machine (Oracle version: 9.2.0.4).
    Please explain what could be going wrong.
    Thanks In Advance,
    Vineet

    Thanks Mark,
    I'll open a TAR and also upload the schema and instance XML.
    If I'm not changing the track too much :-) one more thing in continuation:
    If I skip the schema validation step and directly insert the instance document into a schema-linked XMLType table, what does Oracle XDB do in such a case?
    I'm getting a severe performance hit here too... the same file as above takes almost 40 mins to insert.
    code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobfromFile(
    tempCLOB,
    targetFile,
    DBMS_LOB.getLength(targetFile),
    dest_offset,
    src_offset,
    nls_charset_id(CONSTANT_CHARSET),
    lang_context,
    conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    -- p_xml_document.schemaValidate();
    insert into INCOMING_XML values(p_xml_document);
    Here table INCOMING_XML is :
    TABLE of SYS.XMLTYPE(XMLSchema "http://INCOMING_XML.xsd" Element "MatchingResponse")
    STORAGE Object-relational TYPE "XDBTYPE_MATCHING_RESPONSE"
    This table and type XDBTYPE_MATCHING_RESPONSE were created using the mapping provided in the registered XML Schema.
    Thanks,
    Vineet

  • Numbers Import and Load Performance Problems

    Some initial results of converting a single 1.9MB Excel spreadsheet to Numbers:
    _Results using Numbers v1.0_
    Import 1.9MB Excel spreadsheet into Numbers: 7 minutes 3.5 seconds
    Load (saved) Numbers spreadsheet (2.4MB): 5 minutes 11.7 seconds
    _Results using Numbers v1.0.1_
    Import 1.9MB Excel spreadsheet into Numbers: 6 minutes 36.1 seconds
    Load (saved) Numbers spreadsheet (2.4MB): 5 minutes 5.8 seconds
    _Comparison to Excel_
    Excel loads the original 1.9MB spreadsheet in 4.2 seconds.
    Summary
    Numbers v1.0 and v1.0.1 exhibit severe performance problems with loading (of its own files) and importing of Excel V.x files.

    Hello
    It seems that you missed a detail.
    When a Numbers document is 1.9MB on disk, it may be a 7 or 8 MB file to load.
    A Numbers document is not a file but a package, which is a disguised folder.
    The document itself is described in an extremely verbose XML file stored in a gzip archive.
    Opening such a document starts with an unpack sequence, which is a fast one (except maybe if the space available on the disk is short).
    The unpacked file may easily be 10 times larger than the packed one.
    Just an example: the xml.gz file containing the report of my bank operations for 2007 is a 300Kb one, but the expanded one, the one which Numbers must read, is a 4 MB one; yes, 13.3 times the original.
    And loading it is not sufficient: this huge file must be "interpreted" to build the display.
    As it is very long, Apple treats it as the TRUE description of the document, and so, each time it must display something, it must work like the interpreters that old users like me knew when they used the Basic available on Apple // machines.
    Adding a supplementary stage would have added time to the opening sequence but would have sped up the use of the document.
    Of course, it would also have added a supplementary stage during the save process.
    I hope that they will adopt this scheme, but of course I don't know if they will do that.
    Of course, the problem is quite the same when we import a document from Excel or from AppleWorks.
    The app reads the original, which is stored in a compact shape, then it deciphers it to create the XML code. Optimisation would perhaps reduce these tasks a bit, but it will continue to be a time-consuming one.
    Yvan KOENIG (from FRANCE, Sunday, January 27, 2008 16:46:12)

  • Performance problems, do we need to upgrade. Server and Database level

    Problem:
    I'm a Java programmer and a Transact-SQL DBA, so I have knowledge about databases. Now we have a database that performs very badly and has a lot of deadlock problems and so on. It's an Oracle database.
    We have Oracle version 9 and an application in Delphi. The bad performance has only appeared recently. We have cleaned the archive.
    My suggestion is: why not migrate to a newer version of Oracle, change some hardware specs, and get up to date? Then I think we will have fewer problems.
    But of course this is more trial and error. That's why I hope there is an Oracle specialist here who can help me with a few questions.
    Users and Specs
    We have 150 to 180 users.
    I have a server with one Xeon 233 GHz processor
    and 4 GB memory, with 1.5 GB in constant use.
    Questions
    1. Is it a good idea to upgrade? Maybe not to solve all the problems, but version 9 is old; there are versions 10 and 11.
    2. Which version should we use, 10 or 11? 11 has been in use for a while, so this sounds like a good idea.
    3. Are the specs OK, or must I do something about the server too?
    Maybe dual core, or Enterprise (64-bit). A memory upgrade?
    4. Maybe for 64-bit I need Oracle version 11 to have good support for it?
    I hope somebody can help me a bit.
    Thanks,
    Kind regards,
    André

    Hi Andre,
    1. Is it a good idea to upgrade?
    2. Which version should we use, 10 or 11?
    I suggest you upgrade to the latest available release, 11.2.0.2. But do complete testing of your upgraded database before you move to PRODUCTION.
    3. Are the specs OK, or must I do something about the server too?
    It all depends on the usage and concurrent users :)
    4. Maybe for 64-bit I need Oracle version 11 to have good support for it?
    Regardless of bit version, all Oracle versions have good support.
    Refer MOS tech notes:
    *How to Perform a Full Database Export Import during Upgrade, Migrate, Copy, or Move of a Database [ID 286775.1]*
    *Minimizing Downtime During Production Upgrade [ID 478308.1]*
    *Different Upgrade Methods For Upgrading Your Database [ID  419550.1]*
    thanks,
    X A H E E R

  • Performance problems when running PostgreSQL on ZFS and tomcat

    Hi all,
    I need help with some analysis and problem solution related to the below case.
    The long story:
    I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 servers with 14 GB RAM and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
    The configuration between the two is pretty much the same and the problem therefore seems generic for the setup.
    Within a non-global zone I'm running a tomcat application (an institutional repository) connecting via localhost to a PostgreSQL database (the OS-provided version). The processor load is typically not very high, as seen below:
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        49 postgres  749M  669M   4,7%   7:14:38  13%
         1 jboss    2519M 2536M    18%  50:36:40 5,9%
    We are not 100% sure why we run into performance problems, but when it happens we experience that the application slows down and swaps out (see below). When it settles, everything seems to return to normal. When the problem is acute, the application is totally unresponsive.
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        1 jboss    3104M  913M   6,4%   0:22:48 0,1%
    #sar -g 5 5
    SunOS vbn-back 5.10 Generic_142901-03 i86pc    05/28/2010
    07:49:08  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
    07:49:13    27.67   316.01   318.58 14854.15     0.00
    07:49:18    61.58   664.75   668.51 43377.43     0.00
    07:49:23   122.02  1214.09  1222.22 32618.65     0.00
    07:49:28   121.19  1052.28  1065.94  5000.59     0.00
    07:49:33    54.37   572.82   583.33  2553.77     0.00
    Average     77.34   763.71   771.43 19680.67     0.00
    Making more memory available to tomcat seemed to worsen the problem, or at least didn't prove to have any positive effect.
    My suspicion is currently focused on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
    An unofficial performance evaluation of the database with "vacuum analyze" took 19 minutes on the server and only 1 minute on a desktop PC. This is horrific when taking the hardware into consideration.
    The short story:
    I'm trying different steps but running out of ideas. We've read that the database block size and file system block size should match: PostgreSQL uses 8 Kb and ZFS 128 Kb. I didn't find much information on the matter, so if anyone can help, please recommend how to make this change…
    Any other recommendations and ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
    Any help appreciated and I will try to provide additional information on request if needed…
    Thanks in advance,
    Kasper

    raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
    Adding a pair of SSDs as a ZIL would probably also help, but chances are it's not an option for you.
    You can change the record size with "zfs set recordsize=8k <dataset>".
    It will only take effect for newly written data, not existing data.

  • Performance problems with SAP GUI 7.10 and BEx 3.5 Patch 400?

    Hi everybody,
    we installed SAP GUI 7.10 and BEx 3.5 Patch 400 and detected huge performance problems with this version in comparison to SAP GUI 6.40 with BEx 3.5 or BEx 7.0 Patch 800.
    Has anybody seen the same problems?
    Best regards,
    Ulli

    The most important question when you are talking about performance issues:
    which OS are you working on, and which Excel version?
    ciao
    Joke

  • Performance Problems on Faces Navigation Diagram and Hyperthreading query

    Am I the only one having performance problems when dealing with Faces-Config diagrams of about 35 JSPs displayed on the sheet, using JDev 10.1.3? It's taking my workstation about a full minute and a half to update the name of an arrow. The most stressed component during this task seems to be the CPU.
    And just another question: has anybody investigated how the performance of JDev is affected by either enabling or disabling hyperthreading? In my case my CPU usage manages to reach only 50%. I'm tempted to switch HT off to let JDev use all the CPU power, if that would help.

    Hello Diego,
    you mentioned that you compared a BEx Query with the Web Intelligence report. Could you provide more details here?
    - what are the elements in the rows, columns and free characteristics in the BEx Query?
    - was the query executed as designed in the BEx Query Designer with BEx Web Reporting?
    - what are the elements in the WebIntelligence Query panel?
    thanks
    Ingo

  • Performance Problems with "For all Entries" and a big internal table

    We have big performance problems with the following statement:
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
      FOR ALL ENTRIES IN gt_zmon_help
        WHERE
        status = 'IAI200' AND
        logdat IN gs_dat AND
        ztrack = gt_zmon_help-ztrack.
    The internal table gt_zmon_help has over 1,000,000 entries.
    Does anyone have an idea how to improve the performance?
    Thank you!

    >
    Matthias Weisensel wrote:
    > We have big performance problems with the following statement:
    >
    >  
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
    >   FOR ALL ENTRIES IN gt_zmon_help
    >     WHERE
    >     status = 'IAI200' AND
    >     logdat IN gs_dat AND
    >     ztrack = gt_zmon_help-ztrack.
    >
    > The internal table gt_zmon_help has over 1,000,000 entries.
    > Does anyone have an idea how to improve the performance?
    >
    > Thank you!
    You can't expect miracles.  With over a million entries in your itab any select is going to take a bit of time. Do you really need all these records in the itab?  How many records is the select bringing back?  I'm assuming that you have got and are using indexes on your ZEEDMT_ZMON table. 
    In this situation, I'd first of all try to think of another way of running the query and restricting the amount of data, but if this were not possible I'd just run it in the background and accept that it is going to take a long time.

  • Performance problems with EP 6 and MS IE

    Hi everybody,
    for a couple of days now, we have been facing a severe performance problem with our SAP EP 6.0. When I access the system with MS Internet Explorer 6.0, it takes 5-10 minutes after the login. With the Firefox browser, the performance is OK. Therefore I assume that it must be a problem with the IE settings. Does anybody know a solution?
    Best regards,
       Michael

    There are a few things that this could be. I've seen the setting "Empty Temporary Internet Files folder when browser is closed" cause a lot of performance problems (this is in the advanced settings of your IE).
    This setting causes your cache to be cleared out each time the browser is closed, so a lot more data has to be downloaded each time you log in to the system.
    For more analysis I'd recommend putting a tool like HTTPWatch into your IE browser and seeing which requests are using the most time.

  • Performance Problems Bex 7.0 and Office 2007 Workbooks

    Hi
    we have a performance problem with BEx 7.0 and workbooks in Office 2007.
    The workbooks were created with Office 2003 and run with good performance, but in Office 2007 the performance is unacceptable.
    E.g. open workbook with Office 2003   --    30 seconds
           open workbook with Office 2007   --    15 minutes
    We have done everything we could find in SAP Notes, whitepapers or SDN messages.
    For example:
    - We installed all Excel patches described in: Microsoft Excel 2007 &
    SAP Business Explorer Compatibility
    - We set the optimization flag: RS_FRONTEND_INIT setting 'ANA_USE_OPTIMIZE_STG = X'
    - We opened the workbooks in Office 2007 with the repair flag.
    - We used the flag "open in XLS format".
    But the same workbooks are extremely slow.
    We tried creating a new workbook with Office 2007 and it runs with good performance.
    But there are 500 workbooks, and we don't want to recreate them all.
    System information:
    BW: 7.0 NetWeaver 7.01 BI_CONT 7.05
    Client: SAP GUI 7.10 BI Explorer: 902
    Thank you for your help.

    Hello Carsten,
    Try to use Workbook compression:
      -  Open the specific workbook in BEx Analyzer
      -  Open the Workbook Settings dialog
      -  Check "Use Optimized Storage"
      -  Click on OK Button
      -  Save the workbook
    But also, your front-end tools are on a very old version.
    I would recommend that you install the latest patch of SAP GUI 7.20 and Business Explorer 7.20.
    Front-end version 7.10 will be supported until April 2011.
    But if you want to continue using 7.10, update to the latest patch:
    http://service.sap.com/swdc
    > Support Packages and Patches
    > Browse our Download Catalog
      > SAP Frontend Components
    > SAP GUI FOR WINDOWS
    > SAP GUI FOR WINDOWS 7.10 CORE
    > Win32
    _ > gui710_20-10002995.exe
       |  > BI ADDON FOR SAP GUI
       |  > BI 7.0 ADDON FOR SAP GUI 7.10
       |_ > bi710sp14_1400-10004472.exe
    Cheers,
    Edward John
