Another performance problem

Hi! I have the following data structure, site/buyers/buyer/notices/notice/noticesTexts/noticeText, defined in the corresponding .xsd file. The storage type is schema-based and structured.
I import data into XML DB using appendChildXML(). So to add another buyer I call something like this: UPDATE xml_site SET OBJECT_VALUE = appendChildXML(OBJECT_VALUE, '/site/buyers', XMLType(?))
The issue is that performance degrades as the number of nodes grows: it takes 20 ms to add buyer #30 and 2000 ms to add buyer #600.
Is this a normal situation, or am I doing something wrong?
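For clarity, here is the same statement written out with a literal value in place of the bind variable (a sketch only, with a simplified buyer element):
UPDATE xml_site SET OBJECT_VALUE = appendChildXML(OBJECT_VALUE, '/site/buyers', XMLType('<buyer buyerId="1"/>'));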

Hm, the problem is that when I use appendChildXML with the full path, it just replaces the existing child instead of appending another one. You can see this in the output below:
Connect xxx/yyy@zzz
DELETE FROM "SCOTT"."xml_market";
INSERT INTO "SCOTT"."xml_market" VALUES(XMLType('<market><site><buyers></buyers></site></market>'));
UPDATE "SCOTT"."xml_market" SET OBJECT_VALUE = appendChildXML(OBJECT_VALUE,'/market/site/buyers', XMLType('<buyer buyerId="1"/>'));
SELECT extract(object_value,'/market/site/buyers') FROM "SCOTT"."xml_market";
UPDATE "SCOTT"."xml_market" SET OBJECT_VALUE = appendChildXML(OBJECT_VALUE,'/market/site/buyers', XMLType('<buyer buyerId="2"/>'));
SELECT extract(object_value,'/market/site/buyers') FROM "SCOTT"."xml_market";
UPDATE "SCOTT"."xml_market" SET OBJECT_VALUE = appendChildXML(OBJECT_VALUE,'//buyers', XMLType('<buyer buyerId="3"/>'));
SELECT extract(object_value,'/market/site/buyers') FROM "SCOTT"."xml_market";
UPDATE "SCOTT"."xml_market" SET OBJECT_VALUE = appendChildXML(OBJECT_VALUE,'//buyers', XMLType('<buyer buyerId="4"/>'));
SELECT extract(object_value,'/market/site/buyers') FROM "SCOTT"."xml_market";
Connected.
1 row removed.
1 row created.
1 row updated.
EXTRACT(OBJECT_VALUE,'/MARKET/SITE/BUYERS')
<buyers>
<buyer buyerId="1"/>
</buyers>
1 row selected.
1 row updated.
EXTRACT(OBJECT_VALUE,'/MARKET/SITE/BUYERS')
<buyers>
<buyer buyerId="2"/>
</buyers>
1 row selected.
1 row updated.
EXTRACT(OBJECT_VALUE,'/MARKET/SITE/BUYERS')
<buyers>
<buyer buyerId="2"/>
<buyer buyerId="3"/>
</buyers>
1 row selected.
1 row updated.
EXTRACT(OBJECT_VALUE,'/MARKET/SITE/BUYERS')
<buyers>
<buyer buyerId="2"/>
<buyer buyerId="3"/>
<buyer buyerId="4"/>
</buyers>
1 row selected.
So, the good execution times were because of this "replacement": there was always only one buyer. If I use the "//buyers" path, appendChildXML works properly and the buyers are added correctly, but execution times grow in proportion to the node count; I still see 2-5 seconds when adding buyer #600, regardless of domFidelity being set to false. I also found out that insertChildXML works fine with the full path "/market/site/buyers", but execution times grow the same way.
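For reference, here are the two working forms side by side on the same test table (a sketch only; note that insertChildXML takes the child element name as an extra argument compared to appendChildXML):
UPDATE "SCOTT"."xml_market" SET OBJECT_VALUE = appendChildXML(OBJECT_VALUE, '//buyers', XMLType('<buyer buyerId="5"/>'));
UPDATE "SCOTT"."xml_market" SET OBJECT_VALUE = insertChildXML(OBJECT_VALUE, '/market/site/buyers', 'buyer', XMLType('<buyer buyerId="6"/>'));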
BTW, I followed the big red "DOWNLOAD ORACLE SOFTWARE" button and I can't find 10.2.0.3 for Windows, only 10.2.0.1.

Similar Messages

  • Performance problem with WPF Viewer CRVS2010

    Hi,
    We are using Crystal Reports 2010 and the new WPF Viewer. Last week when we set up a test machine to run our integration tests (several hundred) all report tests failed (about 30 tests) with a timeout exception.
    The test machine setup:
    HP DL 580 G5
    VMware ESXi 4.0
    Guest OS: Windows 7 Enterprise 64-bit
    Memory (guest OS): 3GB
    CPU: 1
    Visual Studio 2010
    Crystal Reports for Visual Studio 2010 with 64 bit runtime installed
    Visual Studio 2008 installed
    Microsoft Office 2010 installed
    McAfee antivirus
    There are about 10 other virtual machines on the same HW.
    I think the performance problem is related to text objects on a report document viewed in the WPF Viewer. I made a simple WPF GUI with 2 buttons: the first button executes a very simple report that only has a text object with a few words in it, and the other button executes a similarly simple report with only 1 text object with approx. 100 words (about 800 characters).
    The first report executes and displays almost instantly; the second report executes instantly but displays after approx. 1 min 30 sec.
    And "execute" in this context means that all the VB.Net code runs without any exception or performance problem. The performance problem seems to come after viewer.Show() (in the code below) has executed.
    I did another test on the second report and replaced the text object with a formula field containing the same text, and this test executed and displayed the report instantly.
    So the performance problem seems to have something to do with the rendering of text objects in the WPF Viewer on a virtual machine with the above setup.
    I've made several tests on local machines with Windows XP (32 bit) or Windows 7 (64 bit) installed and none of them have this performance problem. It's not a critical issue for us because our users will run this application on their local PCs with Windows 7 64-bit, but it's a bit problematic for our project not being able to run all of our integration tests; I will probably solve this by using a local PC instead.
    Here is the VB.Net code I'm using to view the reports:
    Private Sub LightWeight_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
            Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
            lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeight.rpt")
            ' Initialize Viewer
            Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
            viewer.Owner = Me
            viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
            viewer.Show()
        End Sub
        Private Sub LightWeightSlow_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
            Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
            lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeightSlow.rpt")
            ' Initialize Viewer
            Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
            viewer.Owner = Me
            viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
            viewer.Show()
        End Sub
    The reports are 2 empty default reports with only 1 text object in the details section.
    // Thomas

    See if the KB [1448013 - Connecting to Oracle database. Error; Failed to load database information|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433343338333033313333%7D.do] helps.
    Also the following may not hurt to have a look at (if only for ideas):
    [1217021 - Err Msg: "Unable to connect invalid log on parameters" using Oracle in VS .NET|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333233313337333033323331%7D.do]
    [1471508 - Logon error when connecting to Oracle database in a VS .NET application|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433373331333533303338%7D.do]
    [1196712 - Error: "Failed to load the oci.dll" in ASP.NET application against an Oracle database|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333133393336333733313332%7D.do]
    Ludek
    Follow us on Twitter http://twitter.com/SAPCRNetSup

  • Performance Problems - CPU

    Hi all,
    I'm having some performance problems. I have generated an AWR report covering one day and I have seen the following:
    Top 5 Timed Events                                                 Avg %Total
    ~~~~~~~~~~~~~~~~~~                                                wait   Call
    Event                            Waits   Time (s)   (ms)   Time Wait Class
    CPU time                                   50,318          41.7
    db file sequential read      6,688,472     32,711      5   27.1 User I/O
    Backup: sbtwrite2            1,068,309      7,903      7    6.6 Administra
    db file scattered read       1,012,065      6,999      7    5.8 User I/O
    PX Deq Credit: send blkd       231,401      4,989     22    4.1 Other
    Operating System Statistics DB/Inst: CAPDB14P/capdb14p1 Snaps: 15710-15778
    Statistic Total
    AVG_BUSY_TIME 3,221,704
    AVG_IDLE_TIME 4,923,831
    AVG_IOWAIT_TIME 2,302,776
    AVG_SYS_TIME 537,429
    AVG_USER_TIME 2,682,900
    BUSY_TIME 6,446,121
    IDLE_TIME 9,850,381
    IOWAIT_TIME 4,608,322
    SYS_TIME 1,077,598
    USER_TIME 5,368,523
    LOAD 0
    OS_CPU_WAIT_TIME 1,999,898,469,700
    RSRC_MGR_CPU_WAIT_TIME 0
    VM_IN_BYTES 12,201,893,888
    VM_OUT_BYTES 476,655,616
    PHYSICAL_MEMORY_BYTES 8,568,512,512
    NUM_CPUS 2
    NUM_CPU_SOCKETS 2
    ###########################
    I think that we are having CPU problems here!
    All my memory caches are good, 99% hit.
    Does anybody agree with me?
    Tks,
    Paulo
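    (For reference, a quick arithmetic check on the OS statistics above, written as a runnable one-liner that only uses the BUSY_TIME and IDLE_TIME values printed in the report:)
    SELECT ROUND(100 * 6446121 / (6446121 + 9850381), 1) AS pct_host_cpu_busy FROM dual;
    -- returns 39.6, i.e. the host CPU was busy roughly 40% of the reported snapshot interval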

    I have problems with some queries that have another wait event related to RAC.
    "gc cr multi block request" is taking a lot of time on some queries. These queries run very fast on another database that isn't a RAC database.
    Example:
    1 - The tables have the same number of rows!
    2 - Both tables and indexes are analyzed using the same tool (DBMS_STATS)
    ####RAC DATABASE####
    SELECT 1 from dual
    WHERE NOT EXISTS (SELECT 1
    FROM mensalidade a
    WHERE data_vencimento >= CHAR_TO_DATE('20070201'));
    ----Explain
    SELECT STATEMENT, GOAL = ALL_ROWS               4     1     
    FILTER                         
    FAST DUAL               2     1     
    PX COORDINATOR FORCED SERIAL                         
    PX SEND QC (RANDOM)     SYS     :TQ10000     2     1     7
    PX BLOCK ITERATOR               2     1     7
    INDEX FAST FULL SCAN     BRCAPDB2     IMENSALIDADE1     2     1     7
    ----It takes more than 500 seconds to run
    ####STANDALONE DATABASE####
    SELECT 1 from dual
    WHERE NOT EXISTS (SELECT 1
    FROM mensalidade a
    WHERE data_vencimento >= CHAR_TO_DATE('20070201'));
    ----Explain
    SELECT STATEMENT, GOAL = ALL_ROWS               4     1     
    FILTER                         
    FAST DUAL               2     1     
    PX COORDINATOR FORCED SERIAL                         
    PX SEND QC (RANDOM)     SYS     :TQ10000     2     2     16
    PX BLOCK ITERATOR               2     2     16
    TABLE ACCESS FULL     BRCAPDB2     MENSALIDADE     2     2     16
    ----It takes 0.1 seconds to run

  • Performance Problems on Faces Navigation Diagram and Hyperthreading query

    Am I the only one having performance problems when dealing with faces-config diagrams of about 35 JSPs displayed on the sheet, using JDeveloper 10.1.3? It's taking my workstation about a full minute and a half to update the name of an arrow. The most stressed component during this task seems to be the CPU.
    And just another question: has anybody investigated how the performance of JDev is affected by enabling or disabling hyperthreading? In my case CPU usage only manages to reach 50%, so I'm tempted to switch HT off to let JDev use all the CPU power, if that would help.

    Hello Diego,
    you mentioned that you compared a BEx Query with the Web Intelligence report. Could you provide more details here?
    - what are the elements in the rows, columns and free characteristics in the BEx Query?
    - was the query executed as designed in the BEx Query Designer with BEx Web Reporting?
    - what are the elements in the Web Intelligence Query panel?
    thanks
    Ingo

  • Performance Problems with "For all Entries" and a big internal table

    We have big performance problems with the following statement:
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
      FOR ALL ENTRIES IN gt_zmon_help
        WHERE
        status = 'IAI200' AND
        logdat IN gs_dat AND
        ztrack = gt_zmon_help-ztrack.
    The internal table gt_zmon_help contains over 1,000,000 entries.
    Does anyone have an idea how to improve the performance?
    Thank you!

    >
    Matthias Weisensel wrote:
    > We have big Performance Problems with following Statement:
    >
    >  
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
    >   FOR ALL ENTRIES IN gt_zmon_help
    >     WHERE
    >     status = 'IAI200' AND
    >     logdat IN gs_dat AND
    >     ztrack = gt_zmon_help-ztrack.
    >
    > In the internal table gt_zmon_help are over 1000000 entries.
    > Anyone an Idea how to improve the Performance?
    >
    > Thank you!
    You can't expect miracles.  With over a million entries in your itab any select is going to take a bit of time. Do you really need all these records in the itab?  How many records is the select bringing back?  I'm assuming that you have got and are using indexes on your ZEEDMT_ZMON table. 
    In this situation, I'd first of all try to think of another way of running the query and restricting the amount of data, but if this were not possible I'd just run it in the background and accept that it is going to take a long time.

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable but interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype)
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500));
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
    <Id>123456789</Id>
    {for $e in $r/Element
        return
        <Element>
          <Subelement1>
            {$e/Subelement1/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement1>
    <Subelement2>
    {$e/Subelement2/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
    </Description>
    </Subelement2>
    <Subelement3>
    {$e/Subelement3/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement3>
    </Element>
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
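    (Added for illustration, not from the original post: a minimal sketch of the SQL-level join asked about in the last paragraph, using the RECORDS and CODES tables defined above. It shreds the codes of one subelement out of a single record with XMLTABLE and joins CODES relationally, so the optimizer can choose an ordinary join method; whether this actually beats the pure-XQuery form would need to be tested.)
    SELECT x.code, c.description
    FROM records r,
         XMLTABLE('/Root/Element/Subelement1' PASSING r.xmlrec
                  COLUMNS code VARCHAR2(4) PATH 'Code') x,
         codes c
    WHERE r.ssn = '10000'
    AND c.code = x.code;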

  • Performance problem with Mavericks.

    Performance problem with Mavericks. My Mac is extremely slow after upgrading to Mavericks. What can I do to solve that?

    If you are still experiencing slowdown issues, it may be because of a few other reasons.
    Our experience with OS X upgrades, and Mavericks is no exception, is that users have installed a combination of third party software and/or hardware that is incompatible and/or is outdated that causes many negative performance issues when upgrading to a new OS X version.
    Your Mac's hard drive may be getting full.
    Do you run any antivirus software on your Mac? Commercial Antivirus software can slow down and negatively impact the normal operation of OS X.
    Do you have apps like MacKeeper or any other maintenance apps like CleanMyMac 1 or 2, TuneUpMyMac or anything like these apps installed on your Mac? These types of apps, while they appear to be helpful, can do too good a job of data "cleanup", causing serious data corruption or data deletion and rendering a perfectly running OS completely dead and useless, leaving you with a frozen, non-functional Mac.
    Your Mac may have way too many applications launching at startup/login.
    Your Mac may have old, non-updated or incompatible software installed.
    Your Mac could have incompatible or outdated web browser extensions, plugins or add-ons.
    Your Mac could have connected third party hardware that needs updated device drivers.
    It would help us to help you if we could have some more technical info about your iMac.
    If you so choose, please download, install and run Etrecheck.
    Etrecheck was developed as a simple Mac diagnostic report tool by a regular Apple Support forum user and technical support contributor named Etresoft. Etrecheck is a small, unobtrusive app that compiles a static snapshot of your entire Mac hardware system and installed software.
    This is a free app that has been honestly created to provide help in diagnosing issues with Macs running the new OS X 10.9 Mavericks.
    It is not malware and can be safely downloaded and installed onto your Mac.
    http://www.etresoft.com/etrecheck
    Copy/paste and post its report here in another reply so that we have a complete profile of your Mac's hardware and installed software and can all continue to help with your Mac's performance issues.
    Thank you.

  • (new?) performance problem using jDriver after a Sql Server 6.5 to 2000 conversion

    Hi,
    This is similar - yet different - to a few of the old postings about performance
    problems with using jdbc drivers against Sql Server 7 & 2000.
    Here's the situation:
    I am running a standalone java application on a Solaris box using BEA's jdbc driver
    to connect to a Sql Server database on another network. The application retrieves
    data from the database through joins on several tables for approximately 40,000
    unique ids. It then processes all of this data and produces a file. We tuned
    the app so that the execution time for a single run through the application was
    24 minutes running against Sql Server 6.5 with BEA's jdbc driver. After performing
    a DBMS conversion to upgrade it to Sql Server 2000 I switched the jDriver to the
    Sql Server 2000 version. I ran the app and got an alarming execution time of
    5hrs 32 min. After some research, I found the problem with unicode and nvarchar/varchar
    and set the "useVarChars" property to "true" on the driver. The execution time
    for a single run through the application is now 56 minutes.
    56 minutes compared to 5 1/2 hrs is an amazing improvement. However, it is still
    over twice the execution time that I was seeing against the 6.5 database. Theoretically,
    I should be able to switch out my jdbc driver and the DBMS conversion should be
    invisible to my application. That would also mean that I should be seeing the
    same execution times with both versions of the DBMS. Has anybody else seen a
    similar situation? Are there any other settings or fixes that I can put into place
    to get my performance back down to what I was seeing with 6.5? I would rather
    not have to go through and perform another round of performance tuning after having
    already done this when the app was originally built.
    thanks,
    mike

    Mike wrote:
    Joe,
    This was actually my next step. I replaced the BEA driver with
    the MS driver and let it run through with out making any
    configuration changes, just to see what happened. I got an
    execution time of about 7 1/2 hrs (which was shocking). So,
    (comparing apples to apples) while leaving the default unicode
    property on, BEA ran faster than MS, 5 1/2 hrs to 7 1/2 hrs.
    I then set the 'SendStringParametersAsUnicode' to 'false' on the
    MS driver and ran another test. This time the application
    executed in just over 24 minutes. The actual runtime was 24 min
    16 sec, which is still ever so slightly above the actual runtime
    against SS 6.5 which was 23 min 35 sec, but is twice as fast as the
    56 minutes that BEA's driver was giving me.
    I think that this is very interesting. I checked to make sure that
    there were no outside factors that may have been influencing the
    runtimes in either case, and there were none. Just to make sure,
    I ran each driver again and got the same results. It sounds like
    there are no known issues regarding this?
    We have people looking into things on the DBMS side and I'm still
    looking into things on my end, but so far none of us have found
    anything. We'd like to continue using BEA's driver for the
    support and the fact that we use Weblogic Server for all of our
    online applications, but this new data might mean that I have to
    switch drivers for this particular application. Thanks.
    No, there is no known issue, and if you put a packet sniffer
    between the client and DBMS, you will probably not see any appreciable
    difference in the content of the SQL sent be either driver. My suspicion is
    that it involves the historical backward compatibility built in to the DBMS.
    It must still handle several iterations of older applications, speaking obsolete
    versions of the DBMS protocol, and expecting different DBMS behavior!
    Our driver presents itself as a SQL7-level application, and may well be treated
    differently than a newer one. This may include different query processing.
    Because our driver is deprecated, it is unlikely that it will be changed in
    future. We will certainly support you using the MS driver, and if you look
    in the MS JDBC newsgroup, you'll see more answers from BEA folks than
    from MS people!
    Joe
    Mike
    The next test you should do, to isolate the issue, is to try another
    JDBC driver.
    MS provides a type-4 driver now, for free. If it is significantly faster,
    it would be
    interesting. However, it would still not isolate the problem, because
    we still would
    need to know what query plan is created by the DBMS, and why.
    Joe Weinstein at BEA
    PS: I can only tell you that our driver has not changed in its semantic function.
    It essentially sends SQL to the DBMS. It doesn't alter it.

  • LR3 "Extra Processing in Develop" Performance Problem

    I have been investigating a specific LR3 performance problem.  It may explain a small subset of the problems people have reported in the "Why is LR3 So Slow?" thread.   I'm starting this thread to focus on this particular problem.  I hope others will confirm/refute/refine my findings.
    The Problem
    In Develop, when I make an adjustment, normally the following happens: The CPU usage (as shown in Activity Monitor's bar graph) jumps to between 50 and 75% for all four cores, the updated image appears, and the CPU usage settles back down.  This all happens in less than half a second.  Note: this is with the image at the Fit size.  However, sometimes I instead get the following after an adjustment: the CPU usage jumps to 50 to 75% for all four cores and the updated image appears as usual, however, instead of settling back down, the CPU usage jumps up to 90 to 100% for all cores and stays there for 3 to 5 seconds before settling down. Thus it appears that LR is doing some kind of "extra processing" since a lot of computation is happening AFTER the updated image has already appeared.  I will refer to this problem as "EP".  Obviously, when you are getting EP, editing in Develop becomes very balky.
    Dependency on ratio between image size and displayed size
    It appears that EP only happens when the displayed size of the image (in Fit zoom level and perhaps also Fill zoom level) is above a certain percentage of the actual image size (as currently cropped).  Evidence: When editing full 21MP 5D2 images, I don't experience EP.  If I crop the 5D2 image fairly significantly, then I can get EP.  When editing 10MP images from my Canon S90, I usually get EP for landscape orientation pictures but not for portrait orientation pictures (since in Fit mode, landscape images display at a higher zoom level than portrait images).  If I am getting EP, I can eliminate it by sufficiently reducing the size that LR is displaying the image by resizing the LR window smaller, opening additional panels (I normally edit with only the right panel open), displaying the toolbar, etc.  It appears that EP is enabled when the displayed image is about 50% or larger w.r.t. the actual image (as currently cropped).  For example, EP becomes enabled when a 3648 pixel wide S90 image is displayed at least 17 and 7/8 inches wide on my 100 ppi monitor (i.e. about 1787 pixels).
    Dependency on HOW an adjustment is invoked
    Even when the displayed image size is large enough w.r.t. the actual image size to enable EP, whether you get it on a given adjustment depends on how you invoke it:
    - If you CLICK (i.e. press the mouse button down and quickly release it) on the track of one of the sliders (a technique I use often to make big jumps), EP will happen.
    - If you press the mouse button down on a slider handle, drag it to a new position, and quickly release the mouse button), EP will happen
    - If you press the mouse button down on a slider handle, drag it to a new position, but continue to hold the mouse button down until the displayed image is updated, EP does NOT happen (either before or after you then release the mouse button).
    - If you highlight the numeric field at the end of a slider and use the arrow keys (possibly along with Shift) to increment or decrement the value, EP does NOT happen.
    - EP will happen if you resize the LR window such that the displayed image size is above the threshold.  (In fact, I determined the threshold by making a series of window width increases until I saw EP indicated by the CPU bar graphs.)
    - EP can happen with local adjustment brush applications, but as with the sliders, it depends on HOW you perform the brush stroke.  Single click and drags with immediate mouse release cause EP, drags with delayed mouse button release don't.
    - Clicking an earlier History state causes EP
    - More exploration could be done.  For example, I haven't looked at Graduated Filter and Spot Removal adjustments.
    My theory of what's happening
    With LR2, my understanding is that in Develop mode when the displayed image is below 1:1 zoom level, after an adjustment is invoked, LR calculates the new version of the image to display using a fast, simplified algorithm that doesn't include the more computationally intensive adjustments like Sharpening and Noise Reduction (and perhaps works on a lower rez version of the image with multiple sensels binned together?).  It appears that in conditions described above, LR3 calculates the initial, fast image update and then goes on to do the full update of the image, including the computationally intensive adjustments.  Evidence:   setting Sharpening Amount and Luminance and Color Noise Reduction to zero eliminates EP (or reduces the amount of time it takes to be barely noticeable).  I'm not sure whether the displayed image is updated with the results of the extra processing.  I think the answer is Yes since when I tried an adjustment of changing Sharpening Amount from 0 to 90, the initial update of the displayed image showed sharpening but after the EP, the displayed image was updated again to show somewhat different sharpening. Perhaps Adobe felt that it would be useful to see the more accurate version of the image when it is at or above 50% zoom.  Maybe the UI is supposed to cancel the EP if you start to make another adjustment before it has completed but the canceling doesn't happen unless you invoke the adjustment in one of the ways described above that doesn't cause EP.  
    Misc
    - EP doesn't seem to happen for Process 2003
    - As others have mentioned, I'm surprised that LR (both version 2 and 3) in 64bit mode doesn't use more available RAM.  I don't think I've seen LR go above 4GB of virtual memory or above 3GB "Real Memory" (as reported by Activity Monitor) even though I have several GB free.
    - It should be obvious from the above that if you experience EP, there are workarounds: reduce the size of the displayed image (e.g. by window resizing), invoke adjustments in ways that don't cause EP, turn off Sharpening and Noise Reduction until the end of editing an image.
    System specs
    First generation Intel Mac Pro with two dual-core CPUs at 2.66 Ghz
    OS 10.5.8
    21GB RAM
    ACR cache on volume striped across 3 internal SATA drives
    LR catalog and RAWs on an internal SATA drive
    30" HP LP3065 monitor (2560 pixels wide)
    NVIDIA GeForce 7300 GT

    I'm impressed by your thorough analysis.
    Clearly, the programmers haven't figured out the best way to do intelligent caching and/or parallel rendering at a reduced size yet.
    In my experience reducing the settings in the "Details" panel doesn't help.
    What really bugs me is that the lag (or increasing lack of interactiveness) depends on the number of adjustments one has made.
    This shouldn't be the case. If a cache is produced then every further adjustment should only cost the effort for that latest adjustment and not include adjustments before it. There are things that stand in the way of straightforward edit applications:
    1. If you work below 1:1 preview, adjustments have to be shown in a reduced form. If you don't have a way to faithfully mimic the adjustments on the reduced size, you have to do them on the original image and then scale down. That's expensive.
    2. To the best of my knowledge LR uses a fixed image pipeline. Hence, independently of the order in which you apply edits, they are always performed in the same fixed order. Say all spot removal operations are done first. If you have a lot of adjustment brush edits and then add a spot removal operation, it means that all the adjustment brush operations have to be replayed each time you do a little adjustment on your spot removal edit.
    I believe what you are seeing is mostly related to 1.
    I also believe that the way LR currently handles a moderate number of edits is unacceptable and incompatible with the notion that it is usable in a commercial setting for more than trivial edits. I suspect there is something else going on. If everyone saw the deterioration in performance after a number of edits that I see, I don't think LR would be as accepted as it is. Having said that, I've read that the problem of repeated applications of the adjustment brush slowing LR down has existed for a long time. I truly hope that this doesn't mean we'll have to live with it for the foreseeable future.
    There are two ways I can see how 2. should be addressed:
    1. Combine the effects of a set of operations into one bitmap operation. Instead of replaying all adjustment brush strokes one after the other (speedwise it feels like this is happening), compute a single bitmap operation that combines all effects.
    2. Give up the idea that there is an image pipeline with a fixed execution order.
    Some might argue that the second point is at odds with the whole idea of parametric editing, but I dispute that. Either edit operations are commutable in which case the order is immaterial, or they are not. If they are not, the user applies the edits in a way as he/she sees fit and will thus compensate for any effect of a changed ordering.
    N.B., currently the doctrine of "fixed ordering of edit applications" results in the effect that even if you convert an image into B&W all your adjustment brush edits that applied colour tints will still show through. Reasoning: The user should be able to locally tint a B&W image. I agree with the latter but this could be achieved by only applying those tinting brush strokes that were created after the B&W conversion. All the ones that happened before should only be used to obtain the correct luminance values for the B&W conversion but obviously they shouldn't cause tinted areas.
    The above example demonstrates to me that users naturally expect operations to occur in the order they have been introduced, not in a fixed predefined order. If that principle were followed, I see no reason why the speed of a single edit should depend on the number of edits that were done to the image before.
    I hope the programmers can (and the management wants to) address the performance issues. While I find LR usable for pretty modest edits, in no way does the performance on my system approach what I would expect from an industrial-strength application.
    P.S.: Your message reminded me of the following: When I experience serious lag with LR showing the strokes I make with an adjustment brush, it helps to pause a moment after the first click before one starts moving. This allows LR to catch up and then one can see the effect of the application pretty much interactively. Otherwise, there is terrible lag and the feedback where you have brushed an effect comes way too late.

  • Performance problems when using Premiere Elements for photo slideshows

    Hello,
    I had been using Premiere Elements 9 (PE9) to make a simple slideshow for my parents from their vacation trip and I ran into some serious performance problems. I had used it to create similar projects before, but not nearly as big. This one is like 260 photos, so basically it is 260 separate clips. I have a POWERHOUSE workstation (see below) so it isn't my PC. Even when PE9 crashes, looking at my performance monitor my CPU and RAM aren't even halfway being utilized. I finally switched to Windows Movie Maker of all things and it worked seamlessly, amazing really. I'm wondering if I was just using PE9 for something other than what it was designed for, since there weren't really any video clips, just a ton of photos that I made into video clips, if that makes sense. Based upon my experience with this so far, I can't imagine using PE9 anymore for anything really. I might have the need for a more professional video editing program in the near future, although it does seem like PE has a lot of features. How can I make sure it utilizes my workstation to its full potential? Here are my specs:
    PC
    Intel Core i7-2600K 4.6 GHz Overclocked
    ASUS P8P67 Deluxe Motherboard
    AMD Firepro V8800 Video Card
    Crucial 128 GB SATA 6Gb/s Solid State Drive (Operating System)
    Corsair Vengeance 16GB (4x4GB) Memory
    Corsair H60 Liquid CPU Cooler
    Corsair Professional Series Gold AX850 Power Supply
    Graphite Series 600T Mid-Tower Case
    Western Digital Caviar Black 1 TB SATA III Hard Drive
    Western Digital Caviar Black 2 TB SATA III Hard Drive
    Western Digital Green 3 TB SATA III Hard Drive
    Logitech Wireless Gaming Mouse G700
    I don’t play any games but it’s a great productivity mouse with 13 customizable buttons
    Wacom Intuos5 Pen Tablet
    Yes, this system is blazingly fast. I have yet to feel it slow down, even with Photoshop, Lightroom, InDesign, Illustrator and numerous other apps running at the same time. HOWEVER, Premiere Elements 9 has crashed NUMEROUS times, and every time my system wasn't even close to being fully taxed.
    Monitors – All run on the ATI V8800
    Dell Ultra Sharp 30 inch
    Samsung 27 Inch
    HAANS-G 28 Inch
    Herman Miller Embody Ergonomic Chair (one of my favorite items)

    Andy,
    There ARE some differences between PrE and PrPro w/ an approved CUDA-capable and MPE hardware acceleration-enabled nVidia video card, but those differences show up ONLY in the quality of the Scaling. The processing overhead is almost exactly the same, when it comes to handling the extra pixels.
    As of PrPro CS 5, two things changed:
    The max. size of Still Images went up from 4096 x 4096 pixels, to quite a bit larger (cannot recall the numbers now).
    The Scaling algorithms have been improved, though ONLY with the correct nVidia cards, with MPE hardware support enabled.
    Now, there CAN be another consideration, between the two programs, in that PrPro CS 5 - CS 6, are 64-bit ONLY, so one benefits from the computer and OS to run it. PrE can be either 32-bit, or 64-bit, so one might, or might not, be taking advantage of the 64-bit program and OS. Still, the processing overhead will be almost identical, it's just that the 64-bit OS can spread it around a bit.
    I still recommend Scaling the large Still Images in PS, prior to Import, to keep that processing overhead as low as is possible. Scaled Still Images work just fine, and I have one Project with 3000+ Scaled Still Images, that edits just fine in PrPro, even on my older 32-bit workstation. Testing that same machine, and PrPro some years ago, I could ONLY work with up to 5 - 4096 x 4096 Stills, before things ground to a crawl.
    Now, Adobe AfterEffects handles large Still Images differently, so I just moved that test Project to AE, and added another 20 large Images, which edited just fine. IIRC, AE can handle Still Images up to 10K x 10K pixels, and that might have gone up, as of CS 5.
    Good luck, and hope that helps,
    Hunt

  • Serious performance problems on iWeb site

    Can anyone help me find the problem with peterlance.com? It looks like author Peter Lance uses iWeb to update his web site. He redesigned it recently (and did not use iWeb for the old design). Anyway, it has serious performance problems on Internet Explorer, which I was hoping you guys could help pinpoint? Is there something simple I can tell him to try to fix the problem? He is launching a new book on Tuesday so he will be getting a lot of traffic, but for now the response time is not very good. Any suggestions?
    In particular, each new page seems to hang for about ten seconds and freeze using Internet Explorer 6. Any fixes?
    I mean, fixes other than: "Don't use Internet Explorer!"
    http://www.peterlance.com/Peter%20Lance/Home/Home.html
    name="Generator" content="iWeb 1.1.2"

    Welcome to the discussions, TopDog08.
    There are several plain .jpg's and that's good, but you need to convert every .png on the page (or as many .png's as you can) to a .jpg. For example,
    http://www.peterlance.com/Peter%20Lance/Home/Homefiles/shapeimage46.png
    Convert that to a jpg, then replace the graphic and text on the iWeb page with this converted jpg (some have had success using Downsize http://www.stuntsoftware.com/Downsize/).
    The ones you won't be able to do anything about are the ones that are a part of the blog, but that will be only 5 images instead of the many you have now.
    This image
    http://www.peterlance.com/Peter%20Lance/Home/Homefiles/shapeimage50.png
    is covering part of your menu, which is why it appears that only the top half is sensitive to rolling over. You'll have to adjust your layout to account for this.
    Another area to look at, there are white boxed png's surrounding the amazon book covers at the top.
    http://www.peterlance.com/Peter%20Lance/Home/Homefiles/imageEffectsAboveAli%20Mug%20TPI%20jpg.png
    Another set of .png's you'll have to get rid of. How did those get put on the page?
    What you see happening with IE is a JavaScript running to fix IE's problem with .png's. The fewer .png's you have, the less of a problem with IE you have.

  • OBIEE 11g performance problem

    Hi,
    I am facing a performance problem in OBIEE 11g. When I run the query taken from nqquery.log directly against the database, it returns the result within a few seconds. But in OBIEE Answers the query runs forever and never shows any data.
    Attaching the query below.
    Please help to solve.
    Thanks
    Titas
    [2012-10-16T18:07:34.000+00:00] [OracleBIServerComponent] [TRACE:2] [USER-23] [] [ecid: 3a39339b45a46ab4:-70b1919f:13a1f282668:-8000-00000000000769b2] [tid: 44475940] [requestid: 26e1001e] [sessionid: 26e10000] [username: weblogic] -------------------- General Query Info: [[
    Repository: Star, Subject Area: BM_BG Pascua Lama, Presentation: BG PL Project Analysis
    [2012-10-16T18:07:34.000+00:00] [OracleBIServerComponent] [TRACE:2] [USER-18] [] [ecid: 3a39339b45a46ab4:-70b1919f:13a1f282668:-8000-00000000000769b2] [tid: 44475940] [requestid: 26e1001e] [sessionid: 26e10000] [username: weblogic] -------------------- Sending query to database named XXBG Pascua Lama (id: <<26911>>), connection pool named Connection Pool, logical request hash e3feca59, physical request hash 5ab00db6: [[
    WITH
    SAWITH0 AS (select sum(T6051.COST_AMT_PROJ_RATE) as c1,
    sum(T6051.COST_AMOUNT) as c2,
    T6051.AFE_NUMBER as c3,
    T6051.BUDGET_OWNER as c4,
    T6051.COMMENTS as c5,
    T6051.COMMODITY as c6,
    T6051.COST_PERIOD as c7,
    T6051.COST_SOURCE as c8,
    T6051.COST_TYPE as c9,
    T6051.DATA_SEL as c10,
    T6051.FACILITY as c11,
    T6051.HISTORICAL as c12,
    T6051.OPERATING_UNIT as c13,
    T5633.project_number as c14,
    T5637.task_number as c15
    from
    (SELECT project_id proj_id
    ,segment1 project_number
    ,org_id
    FROM pa.pa_projects_all
    WHERE org_id IN (825, 865, 962, 2161)) T5633,
    (SELECT project_id proj_id
    ,task_id
    ,task_number
    ,task_name
    FROM pa.pa_tasks) T5637,
    (SELECT xxbg_pl_proj_analysis_cost_v.AFE_NUMBER,
    xxbg_pl_proj_analysis_cost_v.BUDGET_OWNER,
    xxbg_pl_proj_analysis_cost_v.COMMENTS,
    xxbg_pl_proj_analysis_cost_v.COMMODITY,
    xxbg_pl_proj_analysis_cost_v.COST_PERIOD,
    xxbg_pl_proj_analysis_cost_v.COST_SOURCE,
    xxbg_pl_proj_analysis_cost_v.COST_TYPE,
    xxbg_pl_proj_analysis_cost_v.FACILITY,
    xxbg_pl_proj_analysis_cost_v.HISTORICAL,
    xxbg_pl_proj_analysis_cost_v.PO_NUMBER_COST_CONTROL,
    xxbg_pl_proj_analysis_cost_v.PREVIOUS_PROJECT,
    xxbg_pl_proj_analysis_cost_v.PREV_AFE_NUMBER,
    xxbg_pl_proj_analysis_cost_v.PREV_COST_CONTROL_ACC_CODE,
    xxbg_pl_proj_analysis_cost_v.PREV_COST_TYPE,
    xxbg_pl_proj_analysis_cost_v.PROJECT_NUMBER,
    xxbg_pl_proj_analysis_cost_v.SUPPLIER_NAME,
    xxbg_pl_proj_analysis_cost_v.TASK_DESCRIPTION,
    xxbg_pl_proj_analysis_cost_v.TASK_NUMBER,
    xxbg_pl_proj_analysis_cost_v.TRANSACTION_NUMBER,
    xxbg_pl_proj_analysis_cost_v.WORK_PACKAGE,
    xxbg_pl_proj_analysis_cost_v.WP_OWNER,
    xxbg_pl_proj_analysis_cost_v.OPERATING_UNIT,
    xxbg_pl_proj_analysis_cost_v.DATA_SEL,
    pa_periods_all.PERIOD_NAME,
    xxbg_pl_proj_analysis_cost_v.ORG_ID,
    xxbg_pl_proj_analysis_cost_v.COST_AMT_PROJ_RATE COST_AMT_PROJ_RATE,
    xxbg_pl_proj_analysis_cost_v.COST_AMOUNT COST_AMOUNT,
    xxbg_pl_proj_analysis_cost_v.project_id,
    xxbg_pl_proj_analysis_cost_v.task_id
    FROM (select xpac.*,
    decode(xpac.historical, 'Y', 'Historical', 'N', 'Current') data_sel
    from apps.xxbg_pl_proj_analysis_cost_v xpac
    union
    select xpac.*, 'All' data_sel
    from apps.xxbg_pl_proj_analysis_cost_v xpac) xxbg_pl_proj_analysis_cost_v,
    (select period_name, org_id from apps.pa_periods_all) pa_periods_all
    WHERE ((xxbg_pl_proj_analysis_cost_v.ORG_ID = pa_periods_all.ORG_ID))
    AND (xxbg_pl_proj_analysis_cost_v.ORG_ID IN (825,865,962,2161))
    AND (APPS.XXBG_PL_PA_COMMITMENT_PKG.GET_LAST_DAY(xxbg_pl_proj_analysis_cost_v.COST_PERIOD) <=
    APPS.XXBG_PL_PA_COMMITMENT_PKG.GET_LAST_DAY(pa_periods_all.PERIOD_NAME))) T6051
    where ( T5633.proj_id = T5637.proj_id and T5633.project_number = 'SUDPALAPAS11' and T5637.proj_id = T6051.PROJECT_ID and T5637.task_id = T6051.TASK_ID and T5637.task_number = '2100.2000.01.BC0100' and T6051.DATA_SEL = 'All' and T6051.OPERATING_UNIT = 'Compañía Minera Nevada SpA' and T6051.PERIOD_NAME = 'JUL-12' )
    group by T5633.project_number, T5637.task_number, T6051.AFE_NUMBER, T6051.BUDGET_OWNER, T6051.COMMENTS, T6051.COMMODITY, T6051.COST_PERIOD, T6051.COST_SOURCE, T6051.COST_TYPE, T6051.DATA_SEL, T6051.FACILITY, T6051.HISTORICAL, T6051.OPERATING_UNIT)
    select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4, D1.c5 as c5, D1.c6 as c6, D1.c7 as c7, D1.c8 as c8, D1.c9 as c9, D1.c10 as c10, D1.c11 as c11, D1.c12 as c12, D1.c13 as c13, D1.c14 as c14, D1.c15 as c15, D1.c16 as c16 from ( select distinct 0 as c1,
    D1.c3 as c2,
    D1.c4 as c3,
    D1.c5 as c4,
    D1.c6 as c5,
    D1.c7 as c6,
    D1.c8 as c7,
    D1.c9 as c8,
    D1.c10 as c9,
    D1.c11 as c10,
    D1.c12 as c11,
    D1.c13 as c12,
    D1.c14 as c13,
    D1.c15 as c14,
    D1.c2 as c15,
    D1.c1 as c16
    from
    SAWITH0 D1
    order by c13, c14, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11, c12 ) D1 where rownum <= 65001

    Hi Titas,
    with problems like this, at least for me, the cause typically turns out to be something simple and embarrassing, such as:
    - I am connected to another database,
    - The database is right but I have made some manual adjustments without committing them,
    - I have got the wrong query from the query log,
    - I have got the right query but my request is based on multiple queries.
    Do other OBIEE reports work fine?
    Have you tried removing columns one by one to see whether it makes a difference?
    -JR
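    One quick way to check the first three possibilities on that list is to take the physical SQL captured in the query log and run it directly against the source database with the same credentials as the connection pool; if the data or timing looks wrong there too, the problem is in the database rather than in OBIEE. A minimal sketch only (the view name and ORG_ID filter are copied from the logged query above; SET TIMING ON assumes a SQL*Plus session):
    -- Sketch: sanity-check the base view with the same ORG_ID filter the logged query uses.
    SET TIMING ON
    SELECT COUNT(*)
      FROM apps.xxbg_pl_proj_analysis_cost_v
     WHERE org_id IN (825, 865, 962, 2161);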

  • Query performance problem

    I am having performance problems executing a query.
    System:
    Windows 2003 EE
    Oracle 9i version 9.2.0.6
    DETAIL table with 120 million rows, partitioned into 19 partitions by the SD_DATEKEY field
    We are trying to retrieve the info from an account (SD_KEY) ordered by date (SD_DATEKEY). This account has about 7000 rows and it takes about 1 minute to return the first 100 rows ordered by SD_DATEKEY. This time should be around 5 seconds to be acceptable.
    There is a partitioned index by SD_KEY and SD_DATEKEY.
    This is the query:
    SELECT * FROM DETAIL WHERE SD_KEY = 'xxxxxxxx' AND ROWNUM < 101 ORDER BY SD_DATEKEY
    The problem is that all 7000 rows are read before they are ordered. I don't think the optimizer needs to visit every partition and read all of the rows, because only the first 100 are needed and the partitions are bounded by SD_DATEKEY.
    Any ideas on how to speed up this query? I know that adding a WHERE clause on SD_DATEKEY would improve performance, but I need the first 100 rows and I don't know which date to limit the query to.
    Does anybody know whether this is a normal response time for this query, or should it be possible to improve it?
    Thanks to all in advance for your help.

    Thank to all for the replies.
    - We have computed statistics, and there is no change in the response time.
    - We are discussing restricting the query to certain partitions, but for the moment this is not the best solution because we don't know which partitions hold the latest 100 rows.
    - The query from Maurice had the same response time (more or less)
    select * from
    (SELECT * FROM DETAIL WHERE SD_KEY = 'xxxxxxxx' ORDER BY SD_DATEKEY)
    where ROWNUM < 101
    - We have a local index on SD_DATEKEY. Do we need another one on SD_KEY? Should it be created as a bitmap index?
    I can't test your suggestions immediately because this is a problem at one of our customer sites. In our test system (which has only 10 million records) the indexes do speed up the query, but that is not the case in the customer's system. I think the problem is the total number of records in the table.
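    One direction that might be worth trying (a sketch only, not tested against this table; the index name is illustrative): a local composite index on (SD_KEY, SD_DATEKEY) combined with the inline-view top-N form may let Oracle stop after the first 100 rows instead of fetching and sorting all 7000. Whether the sort disappears entirely depends on how the partitions line up with SD_DATEKEY.
    -- Illustrative only: composite local index so the rows for one SD_KEY
    -- come back already ordered by SD_DATEKEY within each partition.
    CREATE INDEX detail_key_date_ix ON detail (sd_key, sd_datekey) LOCAL;

    -- Top-N form: the ORDER BY must sit inside the inline view so that it runs
    -- before ROWNUM is applied; otherwise an arbitrary 100 rows are sorted.
    SELECT *
      FROM (SELECT /*+ FIRST_ROWS(100) */ *
              FROM detail
             WHERE sd_key = 'xxxxxxxx'
             ORDER BY sd_datekey)
     WHERE ROWNUM < 101;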

  • Performance problem using OBJECT tag

    I have a performance problem using the Java plugin and was wondering if anyone else has seen the same thing. I have a rather complex applet that interacts with JavaScript in a web page using the LiveConnect API. The applet both calls JavaScript in the page and is called by JavaScript.
    I'm using IE6 with the Java plugin that ships with the 1.4.2_06 JVM. I have noticed that if I deploy the applet using the OBJECT tag, the application seems to thrash every time I call a Java method on the applet from JavaScript. When I deploy the same applet using the APPLET tag, the performance is much better. I would like to use the OBJECT tag because the applet behaves better and I have more control over caching.
    This problem seems to sit on the boundary between IE6, JScript, the JVM and my applet (and I suppose any of them could be the real culprit). My application is IE5+ specific, so I can't test the applet in isolation from the surrounding HTML/JavaScript (for example in another browser).
    Does anyone have any idea?
    thanks in advance.
    dennis.

  • 1.1 performance problems related to system configuration?

    It seems like a lot of people are having serious performance problems with Aperture 1.1 in areas where they didn't have any (or at least far fewer) problems in the previous 1.0.1 release.
    Most often these problems show up as sluggish behaviour when switching views (especially into and out of full view), loading images into the viewer or doing image adjustments. In most cases Aperture works normally for some time and then gradually slows down, up to a point where images are no longer refreshed correctly or the whole application crashes. Most of the time simply restarting Aperture doesn't help; one has to restart the OS.
    Most of the time the problems occur in conjunction with CPU usage rates which are much higher than in 1.0.1.
    For some people even other applications seem to be affected to a point where the whole system has to be restarted to get everything working up at full speed again. Also shutdown times seem to increase dramatically after such an Aperture slowdown.
    My intention in this thread is to collect information about system configuration from users who are experiencing such problems. At the moment it does not look like these problems are limited to particular configurations, but maybe we can find a common factor if we collect as much information as possible about the systems on which Aperture 1.1 shows this behaviour.
    Before I continue with my configuration, I would like to point out that this thread is not about general speed issues with Aperture. If you're not able to work smoothly with 16MPix RAW files on G5 systems with Radeon 9650 video cards, or Aperture is generally slow on your 14" iBook where you installed it with a hack, then this is not the right thread. I fully understand if you want to complain about these general speed issues, but please refrain from doing so in this thread.
    Here I only want to collect information from people who either know that certain things worked considerably faster in the previous release or who notice that Aperture 1.1 really slows down after some time of use.
    Enough said, here is my information:
    - Powermac G5 Dualcore 2.0
    - 2.5 GB RAM
    - Nvidia 7800GT (flashed PC version)
    - System disk: Software RAID0 (2 WD 10000rpm 74GB Raptor drives)
    - Aperture library on a hardware RAID0 (2 Maxtor 160GB drives) connected to Highpoint RocketRAID 2320 PCIe adapter
    - Displays: 17" and 15" TFT
    I do not think that we need more information; things like external drives (apart from the ones used for the actual library), SuperDrive types, and connected USB devices like printers, scanners etc. shouldn't make any difference, so there is no need to report them. Also, it is self-evident that Mac OS 10.4.6 is used.
    Of interest might be any internal cards (PCIe/PCI/PCI-X...) built into your system, like my RAID adapter, Decklink cards (wasn't there a report about problems with them?), any other special video or audio cards, or additional graphics cards.
    Again, please only post here if you're experiencing any of the mentioned problems and please try to keep your information as condensed as possible. This thread is about collecting data, there are already enough other threads where the specific problems (or other general speed issues) are discussed.
    Bye,
    Carsten
    BTW: Within the next week I will perform some tests which will include replacing my 7800GT with the original 6600 and removing as much extra stuff from my system as possible to see if that helps.

    Yesterday I had my first decent run in 1.1 and was pleased that I had avoided a lot of the performance issues that seemed to affect others.
    After I posted, I got hit by a big slow-down in system performance. I tried to quit Aperture but couldn't; it had no tasks in its activity window. However, Activity Monitor showed Aperture as a 30-thread, 1.4GB virtual memory hairball soaking up 80-90% of my 4 CPUs. Given the high CPU activity I suspected the reason was not my 2GB of RAM, although more is obviously better. So what caused the sudden decrease in system performance after 6 hours of relatively trouble-free editing and sorting with 1.1?
    This morning I re-created the issue. Before I go further: when I ran 1.1 for the first time I did not migrate my whole library to the new RAW algorithm (it's not called the bleeding edge for nothing). So this morning I selected one project and migrated all of its RAW images to 1.1, and after the progress bar completed its work, the CPUs ramped up and the system got bogged down again.
    So Aperture is doing a background task that consumes large amounts of CPU power, shows nothing in its activity window and takes a very long time to complete. My project had 89 RAW images migrated to the 1.1 algorithm and it took 4 minutes to complete those 'background processes' (more reconstituting of images?). I'm not sure what it's doing, but it takes a long time and gives no obvious sign that this is normal. If you leave it to complete its work, the system returns to normal. More of an issue is that the system lets you continue working while the background processes crank away, compounding the heavy workload.
    A bit of a guess, but is this what is causing people's system problems? As I said, if I left my quad alone for 4 minutes everything returned to normal. It's just that there is no sign it will ever end, so you keep working and compound the slow-down.
    In the interests of research I migrated another project with 245 8MB RAWs to the 1.1 algorithm, and it took 8 minutes. The first 5 minutes consumed 1GB of virtual memory over 20 threads at an average of 250% CPU usage for Aperture alone. The last three minutes saw the CPUs ramp higher to 350% and virtual memory increase to 1.2GB. After the 8 minutes everything returned to normal and the fans slowed down (excellent fan/noise behaviour on these quads).
    Is this what others are seeing?
    When you force quit Aperture during these system slow-downs, what effect does this have on your images? Do the uncompleted background processes restart when you go to view those images?
    If I get time I'll try to compare with my MBP.
