Performance problem creating rows on viewobject

Hi,
When a user pushes a button in my Oracle ADF 11.1.1.3.0 GUI,
it triggers a method in my backing bean.
This method, insertNewForecastTable, takes a (Tree)Map called forecastMap as input (see below).
(The key of this map is a timestamp,
the value of this map is a ForecastEntry object.
A ForecastEntry consists of 10 ForecastParts, and each ForecastPart contains 4 long values.)
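(For reference, the data holders described above would look roughly like the sketch below; the actual classes were not posted, so the field names and accessors are assumptions based on the description.)

// Sketch of the data holders described above; the real source was not posted,
// so these fields and accessors are assumptions.
class ForecastPart {
    long history;
    long forecast;
    long trend;
    long limit;

    long getHistory()  { return history; }
    long getForecast() { return forecast; }
    long getTrend()    { return trend; }
    long getLimit()    { return limit; }
}

class ForecastEntry {
    // One entry holds the 10 parts belonging to a single timestamp.
    ForecastPart[] parts = new ForecastPart[10];

    ForecastPart getForecastPart(int i) { return parts[i]; }
}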
private void insertNewForecastTable(Map forecastMap) {
    DCBindingContainer bc = (DCBindingContainer)getBindings();
    DCIteratorBinding ForecastIter = bc.findIteratorBinding("ForecastViewIterator");
    DCDataControl dc = ForecastIter.getDataControl();
    ApplicationModule am = (ApplicationModule)dc.getDataProvider();
    ViewObject forecastVO = am.findViewObject("ForecastView");
    Set keys = forecastMap.keySet();
    Iterator keyIter = keys.iterator();
    RowSetIterator it = ForecastIter.getRowSetIterator();
    // For each timestamp, create a row, fill its attributes and insert it.
    while (keyIter.hasNext()) {
        Long timestamp = (Long)keyIter.next();
        Timestamp ts = new Timestamp(timestamp);
        Row r = it.createRow();
        r.setAttribute(0, ts);
        ForecastEntry forecastentry = (ForecastEntry)forecastMap.get(timestamp);
        int j = 1;
        for (int i = 0; i < 10; i++) {
            ForecastPart forecastPart = forecastentry.getForecastPart(i);
            r.setAttribute(j, forecastPart.getHistory());
            j++;
            r.setAttribute(j, forecastPart.getForecast());
            j++;
            r.setAttribute(j, forecastPart.getTrend());
            j++;
            r.setAttribute(j, forecastPart.getLimit());
            j++;
        }
        forecastVO.insertRow(r);
    }
    am.getTransaction().commit();
    Configuration.releaseRootApplicationModule(am, true);
}
The problem is: for 3360 entries in this table/view object, it takes 10 minutes (!!!) to complete this code.
The bottleneck is the for-loop and the forecastVO.insertRow(r) call.
Both timings rise from 15 ms at the beginning to 500 ms at the end.
Does anyone have ideas on how to improve the performance?
Am I doing something wrong in trying to create and insert 3360 rows into this table?
Thanks.

OK,
my binding is working again, so the NullPointerException is gone.
But the speed of processing 1000+ inserts into a table is still very poor.
This is my code at the moment:
private void insertNewForecastTable(Map forecastMap) {
    DCBindingContainer bc = (DCBindingContainer)getBindings();
    DCIteratorBinding ForecastIter = bc.findIteratorBinding("ForecastViewIterator");
    DCDataControl dc = ForecastIter.getDataControl();
    ApplicationModule am = (ApplicationModule)dc.getDataProvider();
    ViewObject forecastVO = am.findViewObject("ForecastView");
    RowSetIterator it = ForecastIter.getRowSetIterator();
    Set keys = forecastMap.keySet();
    Iterator keyIter = keys.iterator();
    List nameList = new ArrayList();
    nameList.add("Timestamp");
    for (int i = 1; i <= 10; i++) {
        String H = "H" + i;
        String F = "F" + i;
        String T = "T" + i;
        String L = "L" + i;
        nameList.add(H);
        nameList.add(F);
        nameList.add(T);
        nameList.add(L);
    }
    long time_begin = System.currentTimeMillis();
    int counter = 0;
    // for each timestamp
    while (keyIter.hasNext()) {
        // Get the timestamp.
        Long timestamp = (Long)keyIter.next();
        // convert long to timestamp
        Timestamp ts = new Timestamp(timestamp);
        // create new row in table
        Row r = it.createRow();
        List valueList = new ArrayList();
        valueList.add(ts);
        ForecastEntry forecastentry = (ForecastEntry)forecastMap.get(timestamp);
        for (int i = 0; i < 10; i++) {
            ForecastPart forecastPart = forecastentry.getForecastPart(i);
            valueList.add(forecastPart.getHistory());
            valueList.add(forecastPart.getForecast());
            valueList.add(forecastPart.getTrend());
            valueList.add(forecastPart.getLimit());
        }
        r.setAttributeValues(nameList, valueList);
        forecastVO.insertRow(r);
        counter++;
        if (counter % 100 == 0) {
            am.getTransaction().commit();
            System.out.println("Committing rows " + (counter - 100) + " to " + counter);
        }
    }
    long time_end = System.currentTimeMillis();
    // commit
    am.getTransaction().commit();
    // Configuration.releaseRootApplicationModule(am,true);
    System.out.println("Total time to insert all rows : " + (time_end - time_begin));
}

It takes up to 500 seconds to insert 3000 rows!
I also changed the update batch value from 100 to 5, but it made no difference.
I'm now also committing every 100 rows.
And at the end, I get a NullPointerException at RowDataManager.getRowIndex(RowDataManager.java:191).
All help is welcome.
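One approach that is sometimes suggested for bulk loads of this size is to bypass the ADF binding/iterator layer and push all rows through a single JDBC batch, committing once at the end. The sketch below is only an illustration, not code from this thread: the FORECAST table name, its TS and H1..L10 column names, and the way the Connection is obtained are all assumptions, and going straight to JDBC of course skips ADF BC validation and the entity cache.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Timestamp;
import java.util.Map;

// Hypothetical bulk-insert alternative: one PreparedStatement, one batch, one commit.
// The FORECAST table and its TS/H1..L10 columns are assumed names, not from the thread.
public class ForecastJdbcBatchInsert {

    public static void insertAll(Connection conn, Map<Long, ForecastEntry> forecastMap)
            throws Exception {
        // Build "INSERT INTO FORECAST (TS, H1, F1, T1, L1, ..., L10) VALUES (?, ?, ..., ?)".
        StringBuilder cols = new StringBuilder("INSERT INTO FORECAST (TS");
        StringBuilder vals = new StringBuilder(") VALUES (?");
        for (int i = 1; i <= 10; i++) {
            cols.append(", H").append(i).append(", F").append(i)
                .append(", T").append(i).append(", L").append(i);
            vals.append(", ?, ?, ?, ?");
        }
        String sql = cols.append(vals).append(")").toString();

        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (Map.Entry<Long, ForecastEntry> entry : forecastMap.entrySet()) {
                ps.setTimestamp(1, new Timestamp(entry.getKey()));
                int idx = 2;
                for (int i = 0; i < 10; i++) {
                    ForecastPart part = entry.getValue().getForecastPart(i);
                    ps.setLong(idx++, part.getHistory());
                    ps.setLong(idx++, part.getForecast());
                    ps.setLong(idx++, part.getTrend());
                    ps.setLong(idx++, part.getLimit());
                }
                ps.addBatch();          // queue the row; no database round trip per insert
            }
            ps.executeBatch();          // send all queued rows together
            conn.commit();              // single commit at the end
        }
    }
}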

Similar Messages

  • Performance problem: Converting rows to a comma separated column using STUFF() and self join

    Hi,
    This might be a really dumb one to ask but I am currently working on a table that has sequential data for steps that an invoice goes through in a particular system. Here is how it looks:
ID      InvoiceID             InvoiceSteps     Timestamp
283403  0000210121_0002_2013  Post FI Invoice  2013-07-01 19:07:00.0000000
389871  0000210121_0002_2013  Clear Invoice    2013-08-25 14:02:00.0000000
    Here is my extremely slow query that converts multiple rows of an invoice into a single one with 'InvoiceSteps' listed according to their timestamps in a sequential manner separated by commas.
    SELECT [InvoiceID],
    [InvoiceSteps] = STUFF((
    SELECT ',' + ma.InvoiceSteps
    FROM invoices ma
    WHERE m.InvoiceID = ma.InvoiceID
    ORDER BY [Timestamp]
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '')
    FROM invoices m
    GROUP BY InvoiceID
    ORDER BY InvoiceID;
    Here is the end result:
InvoiceID             InvoiceSteps
0000210121_0002_2013  Post FI Invoice,Clear Invoice
My question: How can I improve the query so that it can process thousands of records as fast as possible (>600K in this case)?
Thank you!

There are many methods to concatenate rows into columns. Assuming you have the necessary indexes to support your query, as Rishabh suggested, if you still find performance issues then you need to look at various other approaches as well. I have seen cases (with huge data volumes) where CLR outperformed the alternatives. Having said that, you need to assess each approach and come to a conclusion for your scenario.
Refer to the link below for the various approaches (please also look at the comments section as well):
    https://www.simple-talk.com/sql/t-sql-programming/concatenating-row-values-in-transact-sql/

  • Performance problem: 1.000 queries over a 1.000.000 rows table

    Hi everybody!
I have a difficult performance problem: I use JDBC over an Oracle database. I need to build a map using data from a table with around 1.000.000 rows. My query is very simple (see the code) and takes an average of 900 milliseconds, but I must perform around 1.000 queries with different parameters. The final result is that the user must wait several minutes (plus the time needed to draw the map and send it to the client).
    The code, very simplified, is the following:
String sSQLCreateView =
    "CREATE VIEW " + sViewName + " AS " +
    "SELECT RIGHT_ASCENSION, DECLINATION " +
    "FROM T_EXO_TARGETS " +
    "WHERE (RIGHT_ASCENSION BETWEEN " + dRaMin + " AND " + dRaMax + ") " +
    "AND (DECLINATION BETWEEN " + dDecMin + " AND " + dDecMax + ")";
String sSQLSentence =
    "SELECT COUNT(*) FROM " + sViewName +
    " WHERE (RIGHT_ASCENSION BETWEEN ? AND ?) " +
    "AND (DECLINATION BETWEEN ? AND ?)";
PreparedStatement pstmt = in_oDbConnection.prepareStatement(sSQLSentence);
for (int i = 0; i < 1000; i++) {
    pstmt.setDouble(1, a);
    pstmt.setDouble(2, b);
    pstmt.setDouble(3, c);
    pstmt.setDouble(4, d);
    ResultSet rset = pstmt.executeQuery();
    if (rset.next()) {
        X = rset.getInt(1);
    }
}
I have already created indexes with the RIGHT_ASCENSION and DECLINATION fields (trying different combinations).
I have also tried multi-threading, with very bad results.
Does anybody have a suggestion?
    Thank you very much!

    How many total rows are there likely to be in the View you create?
Perhaps just do a select instead of a view, and loop through the resultset totalling the ranges in Java instead of trying to have 1000 queries do the job. Something like:
int iMaxRanges = 1000;
int[] iCount = new int[iMaxRanges];

class Range implements Comparable {
    float fAMIN;
    float fAMAX;
    float fDMIN;
    float fDMAX;
    float fDelta;

    public Range(float fASC_MIN, float fASC_MAX, float fDEC_MIN, float fDEC_MAX) {
        fAMIN = fASC_MIN;
        fAMAX = fASC_MAX;
        fDMIN = fDEC_MIN;
        fDMAX = fDEC_MAX;
    }

    public int compareTo(Object range) {
        Range comp = (Range)range;
        if (fAMIN < comp.fAMIN)
            return -1;
        if (fAMAX > comp.fAMAX)
            return 1;
        if (fDMIN < comp.fDMIN)
            return -1;
        if (fDMAX > comp.fDMAX)
            return 1;
        return 0;
    }
}

List listRanges = new ArrayList(iMaxRanges);
listRanges.add(new Range(1.05f, 1.10f, 120.5f, 121.5f));
//...etc.

String sSQL =
    "SELECT RIGHT_ASCENSION, DECLINATION FROM T_EXO_TARGETS " +
    "WHERE (RIGHT_ASCENSION BETWEEN " + dRaMin + " AND " + dRaMax + ") " +
    "AND (DECLINATION BETWEEN " + dDecMin + " AND " + dDecMax + ")";
Statement stmt = in_oDbConnection.createStatement();
ResultSet rset = stmt.executeQuery(sSQL);
while (rset.next()) {
    float fASC = rset.getFloat("RIGHT_ASCENSION");
    float fDEC = rset.getFloat("DECLINATION");
    int iRange = Collections.binarySearch(listRanges, new Range(fASC, fASC, fDEC, fDEC));
    if (iRange >= 0)
        ++iCount[iRange];
}

  • Performance problem inserting lots of rows

    I'm a software developer; we have a J2EE-based product that works against Oracle or SQL Server. As part of a benchmarking suite, I insert about 70,000 records into our auditing table (using jdbc, going over the network to a database server). The database server is a smallish desktop Windows machine running Windows Server 2003; it has 384M of RAM, a 1GHz CPU, and plenty of disk.
    When using Oracle (9.2.0.3.0), I can insert roughly 2,000 rows per minute. Not too shabby!
    HOWEVER -- and this is what's making Oracle look bad -- SQL Server 2000 on the SAME MACHINE is inserting roughly 8,000 rows per minute!
    Why is Oracle so slow? The database server is using roughly 50% CPU, so I assume disk speed is an issue. My goal is to get Oracle to compare favorably with SQL Server on the same hardware. Any ideas or suggestions? (I've done a fair amount of Oracle tuning in the past to get SELECTs to run faster, but have never dealt with INSERT performance problems of this magnitude.)
    Thanks,
    Daniel Rabe

    I've tried using a PreparedStatement and a CallableStatement, always with bind variables. (I use a sequence to populate one of the columns, so initially my code was doing the insert, then a select to get the last inserted row. This was fast on SQL Server but slow on Oracle, so I conditionalized my code to use a pl/sql block that does INSERT... RETURNING so I could get the new rowid without doing the extra select - that required switching from PreparedStatement to CallableStatement). The Performance Manager shows "Executes without Parses" > 98%.
    Performance Manager also shows Application I/O Physical Reads approx 30/sec, and Background Process I/O Physical Writes approx 60/sec.
    File Write Operations showed most of the writes going to my tablespace (which is sized plenty big for the data I'm writing), but with occasional writes to UNTODBS01.DBF as well.
    The database is in NOARCHIVELOG mode.
    I'm NOT committing very often - I'm doing all 70,000 rows as one transaction. BTW, I realize this isn't a real-life scenario - this is just the setup I do so that I can run some benchmarks on various queries that our application performs. Once I get into those, I'm sure I'll have a whole new slew of questions for the group. ;-)
    I'll look into SQL TRACE and TKPROF - time to refresh some skills I haven't used in a while...
    Thanks,
--Daniel Rabe
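For reference, the INSERT ... RETURNING technique described above (wrapping the insert in a PL/SQL block and binding an OUT parameter for the generated key) is usually coded along these lines. This is only a sketch: the AUDIT_LOG table, the AUDIT_SEQ sequence, and the ID/MSG columns are invented names, not the poster's actual schema.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;

// Sketch of INSERT ... RETURNING through a CallableStatement; schema names are hypothetical.
public class ReturningInsertExample {

    public static long insertAndReturnId(Connection conn, String msg) throws SQLException {
        String plsql =
            "BEGIN " +
            "  INSERT INTO AUDIT_LOG (ID, MSG) " +
            "  VALUES (AUDIT_SEQ.NEXTVAL, ?) " +
            "  RETURNING ID INTO ?; " +
            "END;";
        try (CallableStatement cs = conn.prepareCall(plsql)) {
            cs.setString(1, msg);                      // bind variable for the inserted value
            cs.registerOutParameter(2, Types.NUMERIC); // receives the generated key
            cs.execute();
            return cs.getLong(2);                      // no extra SELECT round trip
        }
    }
}

This avoids the insert-then-select pattern that was slow on Oracle, at the cost of a small PL/SQL wrapper per insert.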

  • Problem while creating row with dependent select one choice in adf  table

I have an independent and a dependent select one choice in a row of an ADF af:table.
I am unable to insert more than one row with the dependent select one choice using CreateInsert in the ADF table.
I am able to add more rows in the UI af:table, but the previous rows' select one choice values are ignored and only the latest (current) row's values get inserted into the database.
The following is the code used to create a row and point to the current row:
public void addRowOnSecSettings() {
    SecurityGroupSettingsVOImpl SecGroupSetVO =
        (SecurityGroupSettingsVOImpl) this.getSecurityGroupSettingsVO1();
    try {
        int rowCount = SecGroupSetVO.getRowCount();
        SecurityGroupSettingsVORowImpl SecGroupSetRow =
            (SecurityGroupSettingsVORowImpl) SecGroupSetVO.createRow();
        SecGroupSetRow.setNewRowState(Row.STATUS_INITIALIZED);
        SecGroupSetVO.insertRowAtRangeIndex(rowCount, SecGroupSetRow);
        SecGroupSetVO.setCurrentRowAtRangeIndex(rowCount);
        SecGroupSetVO.setCurrentRow(SecGroupSetRow);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
    Regards,
    Bhagavan

As it is a dependent select one choice, I have already set autoSubmit="true", but no luck.
If I add two rows, the VO row iterator shows a count of 2, but only the current row's select one choice values are populated, whereas the previous row's select one choice values are null.

  • Interactive report performance problem over database link - Oracle Gateway

    Hello all;
    This is regarding a thread Interactive report performance problem over database link that was posted by Samo.
The issue that I am facing is that when I use an Oracle function like apex_item.checkbox, the query slows down by 45 seconds.
    query like this: (due to sensitivity issue, I can not disclose real table name)
    SELECT apex_item.checkbox(1,b.col3)
    , a.col1
    , a.col2
    FROM table_one a
    , table_two b
    WHERE a.col3 = 12345
    AND a.col4 = 100
    AND b.col5 = a.col5
    table_one and table_two are remote tables (non-oracle) which are connected using Oracle Gateway.
Now if I run the above query without apex_item.checkbox, the response time is less than a second, but with apex_item.checkbox the query runs for more than 30 seconds. I have resolved the issue by creating a collection, but that's not good practice.
I would like to get ideas on how to resolve or speed up the query.
Any idea how to use sub-factoring for the above scenario? Or other methods (creating a view or materialized view is not an option)?
    Thank you.
    Shaun S.

    Hi Shaun
    Okay, I have a million questions (could you tell me if both tables are from the same remote source, it looks like they're possibly not?), but let's just try some things first.
    By now you should understand the idea of what I termed 'sub-factoring' in a previous post. This is to do with using the WITH blah AS (SELECT... syntax. Now in most circumstances this 'materialises' the results of the inner select statement. This means that we 'get' the results then do something with them afterwards. It's a handy trick when dealing with remote sites as sometimes you want the remote database to do the work. The reason that I ask you to use the MATERIALIZE hint for testing is just to force this, in 99.99% of cases this can be removed later. Using the WITH statement is also handled differently to inline view like SELECT * FROM (SELECT... but the same result can be mimicked with a NO_MERGE hint.
Looking at your case I would be interested to see what the explain plan and results would be for something like the following two statements (sorry - you're going to have to check them, it's late!)
    WITH a AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_one),
    b AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_two),
    sourceqry AS
    (SELECT  b.col3 x
           , a.col1 y
           , a.col2 z
FROM a
    , b
    WHERE a.col3 = 12345
    AND   a.col4 = 100
    AND   b.col5 = a.col5)
    SELECT apex_item.checkbox(1,x), y , z
    FROM sourceqry
    WITH a AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_one),
    b AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_two)
SELECT  apex_item.checkbox(1, b.col3), a.col1, a.col2
FROM a
    , b
WHERE a.col3 = 12345
AND   a.col4 = 100
AND   b.col5 = a.col5
If the remote tables are at the same site, then you should have the same results. If they aren't, you should get the same results but different from the original query.
We aren't being told the real cardinality of the inner selects here, so the explain plan is distorted (this is normal for queries on remote and especially non-Oracle sites). This hinders tuning normally, but I don't think this is your problem at all. How many distinct values do you normally get for the column aliased 'x', and how many rows are normally returned in total? Also, how are you testing response times: in APEX, SQL Developer, Toad, SQL*Plus, etc.?
    Sorry for all the questions but it helps to answer the question, if I can.
    Cheers
    Ben
    http://www.munkyben.wordpress.com
    Don't forget to mark replies helpful or correct ;)

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) passing a batch of records , or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desparately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not used XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
create table RECORDS (
  SSN varchar2(20),
  XMLREC sys.xmltype
)
xmltype column XMLREC store as binary xml;
create index records_ssn on records(ssn);
-- A dozen code tables represented by one like this:
create table CODES (
  CODE varchar2(4),
  DESCRIPTION varchar2(500)
);
create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
</Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
    <Id>123456789</Id>
    {for $e in $r/Element
        return
        <Element>
          <Subelement1>
            {$e/Subelement1/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement1>
    <Subelement2>
    {$e/Subelement2/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
    </Description>
    </Subelement2>
    <Subelement3>
    {$e/Subelement3/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement3>
    </Element>
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.

  • Query performance problem

    I am having performance problems executing a query.
    System:
    Windows 2003 EE
    Oracle 9i version 9.2.0.6
    DETAIL table with 120Million rows partitioned in 19 partitions by SD_DATEKEY field
    We are trying to retrieve the info from an account (SD_KEY) ordered by date (SD_DATEKEY). This account has about 7000 rows and it takes about 1 minute to return the first 100 rows ordered by SD_DATEKEY. This time should be around 5 seconds to be acceptable.
    There is a partitioned index by SD_KEY and SD_DATEKEY.
    This is the query:
    SELECT * FROM DETAIL WHERE SD_KEY = 'xxxxxxxx' AND ROWNUM < 101 ORDER BY SD_DATEKEY
The problem is that all 7000 rows are read before being ordered. I think it should not be necessary for the optimizer to access all the partitions to read all the rows, because only the first 100 are needed and the partitions are bounded by SD_DATEKEY.
    Any idea to accelerate this query? I know that including a WHERE clause for SD_DATEKEY will increase the performance but I need the first 100 rows and I don't know the date to limit the query.
Does anybody know if this time is a normal response time for this query, or should it be improved?
    Thank to all in advance for the future help.

    Thank to all for the replies.
    - We have computed statistics and no changes in the response time.
- We are discussing restricting the query to some partitions, but for the moment this is not the best solution because we don't know where the latest 100 rows are.
    - The query from Maurice had the same response time (more or less)
    select * from
    (SELECT * FROM DETAIL WHERE SD_KEY = 'xxxxxxxx' ORDER BY SD_DATEKEY)
    where ROWNUM < 101
    - We have a local index on SD_DATEKEY. Do we need another one on SD_KEY? Should it be created as BITMAP?
I can't immediately test your suggestions because this is a problem with one of our customers. In our test system (which has only 10 million records) the indexes accelerate the query, but this is not the case in the customer system. I think the problem is the total number of records in the table.

  • How to make sure newly created row is editable by default programatically

    Hi All,
I have a problem creating a new row in a single-row-selection table with the click-to-edit editing mode enabled.
The functional requirement is: I have a Master-Detail. While creating detail lines, each line has to be created with a default line number. E.g. Master1 can have line numbers 1,2,3.. etc., Master2 can have 1,2,3.. etc.
    In applicationTable for Pattern Create
    Action Listener=”#{CreateAndEditFiscalDocumentBean.createChargeLine}”
    In Table rowSelection="single" and editingMode="clickToEdit"
Here is the problem:
When I first come to the page, the first row in the detail table is editable and I am able to edit any other row on click. But when I create a new row, I get a new row with a line number, but it is not editable.
I want it to be editable on create, and the previously selected row should become read-only. I have tried several ways but nothing is working.
My observation is that when I call the bean method in the Action Listener of the application table's create pattern, I face this problem. If I do not call this method, it works as expected. But I need to call this method because it has to create the row with a line number.
Below are the two scenarios I have tried. I was not successful in either of them.
Could you please help me achieve the expected functionality?
Many thanks in advance for your time and help.
    Scenario 1:
    Jsff:
    ApplicationTable: createActionListener="#{CreateAndEditFiscalDocumentBean.createChargeLine}"
    Table: rowSelection="single", editingMode="clickToEdit"
    Bean Code:
public void createChargeLine(ActionEvent actionEvent) {
    FacesContext fc = FacesContext.getCurrentInstance();
    ExpressionFactory factory = fc.getApplication().getExpressionFactory();
    MethodExpression method = factory.createMethodExpression(fc.getELContext(),
        "#{bindings.createChargeLine1.execute}", String.class, new Class[]{});
    method.invoke(fc.getELContext(), null);
}
    AMImpl Code:
public void createChargeLine() {
    ViewObject itemChargeVO = this.getFiscalDocumentCharges();
    ViewObject fiscalDocumentHeaderVO = this.getFiscalDocumentHeader();
    Row toRow = fiscalDocumentHeaderVO.getCurrentRow();
    Row newRow = null;
    Row latestRow = itemChargeVO.first();
    Integer line_number = new Integer(0);
    int numberOfItemLines = 0;
    if (latestRow != null) {
        // Existing lines: find the highest line number, then insert after the last row.
        RowSet rs = itemChargeVO.getRowSet();
        numberOfItemLines = numberOfItemLines + 1;
        if (rs != null) {
            line_number = (Integer)rs.first().getAttribute("LineNumber");
            while (rs.hasNext()) {
                numberOfItemLines = numberOfItemLines + 1;
                Row row = rs.next();
                if (line_number.compareTo((Integer)row.getAttribute("LineNumber")) < 0) {
                    line_number = (Integer)row.getAttribute("LineNumber");
                }
            }
        }
        line_number = line_number + 1;
        newRow = itemChargeVO.createRow();
        newRow.setAttribute("LineNumber", line_number);
        itemChargeVO.insertRowAtRangeIndex(numberOfItemLines + 1, newRow);
        itemChargeVO.setCurrentRow(newRow);
    } else {
        // No lines yet: create the first line with line number 1.
        newRow = itemChargeVO.createRow();
        newRow.setAttribute("LineNumber", new Integer(1));
        itemChargeVO.insertRowAtRangeIndex(0, newRow);
        itemChargeVO.setCurrentRow(newRow);
    }
}
    Scenario 2:
    Bean method changes:
    public void createChargeLine1(ActionEvent actionEvent) {
    Row newLine = ApplicationsTableEventHandler.getInstance().processCreate(getChargeTable());
    newLine.setAttribute("LineNumber", new Integer(1));
    }

    Hi Jerry,
    Please refer to the following blog and check whether you followed all the steps:
    /people/harikrishna.sunku/blog/2008/12/18/work-center-and-navigation-link-creation-in-crm-2007
    You basically need to ensure that your custom work center is assigned to a navigation bar profile; and this navigation bar profile is assigned to your business role.
    Regards,
    Shiromani

  • Performance problem because of ignored index

    Hi,
    We have a performance problem with kodo ignoring indexes in Oracle:
    Our baseclass of all our persistent classes (LogasPoImpl) has a subclass
    CODEZOLLMASSNAHMENIMPL.
    We use vertical mapping for all subclasses and have 400.000 instances of
    CODEZOLLMASSNAHMENIMPL.
    We defined an additional index on an attribute of CODEZOLLMASSNAHMENIMPL.
    A query with a filter like "myIndexedAttribute = 'DE'" takes about 15
    seconds on Oracle 8.1.7.
    Kodo logs something like the following:
    [14903 ms] executing prepstmnt 6156689 SELECT (...)
    FROM CODEZOLLMASSNAHMENIMPL t0, LOGASPOIMPL t1
    WHERE (t0.myIndexedAttribute = ?)
    AND t1.JDOCLASS = ?
    AND t0.JDOID = t1.JDOID
    [params=(String) DE, (String)
    de.logas.zoll.eztneu.CodeZollMassnahmenImpl] [reused=0]
    When I execute the same statement from a SQL-prompt, it takes that long as
    well, but when I swap the tablenames in the from part
    (to "FROM LOGASPOIMPL t1, CODEZOLLMASSNAHMENIMPL t0") the result comes
    immediately.
I've had a look at the query plans Oracle creates for the two statements
and found that our index on myIndexedAttribute is not used
by the first statement, but it is by the second.
    How can I make Kodo use the faster statement?
    I've tried to use the "jdbc-indexed" tag, but without success so far.
    Thanks,
    Wolfgang

    Thank you very much, Stefan & Alex.
    After computing statistics the index is used and the performance is fine
    now.
    - Wolfgang
    Alex Roytman wrote:
    ANALYZE TABLE MY_TABLE COMPUTE STATISTICS;
    "Stefan" <[email protected]> wrote in message
    news:btlqsj$f18$[email protected]..
When I execute the same statement from a SQL-prompt, it takes that long as
    well, but when I swap the tablenames in the from part
    (to "FROM LOGASPOIMPL t1, CODEZOLLMASSNAHMENIMPL t0") the result comes
    immediately.
I've had a look at the query plans Oracle creates for the two statements
and found that our index on myIndexedAttribute is not used
by the first statement, but it is by the second.
    How can I make Kodo use the faster statement?
I've tried to use the "jdbc-indexed" tag, but without success so far.

I know that in DB2 there is a function called "Run Statistics" which you
    can (and should do) on all tables involved in a query (at least once a
    month, when there are heavy changes in the tables).
    On information gathered by this statistics DB2 can optimize your queries
    and execution path's
    Since I was once involved in query performance optimizing on DB/2 I can
    say you can get improvements of 80% on big tables on which statistics are
    run and not. (Since the execution plans created by the optimizer differ
    heavily)
    Since I'm working now with Oracle as well, at least I can say, that Oracle
    has a featere like statistics as well. (go into the manager enterprise
Console and click on a table, you will find a row "statistics last run")
    I don't know how to trigger these statistics nor whether they would
    influence the query execution path on oracle (thus "swapping" tablenames
by itself), since I didn't have time to do further research on that matter.
But it's worth a try to find out, and maybe it helps with your problem?

  • Performance problem with sdn_nn - new 10g install

    I am having a performance problem with sdn_nn after migrating to a new machine. The old Oracle version was 9.0.1.4. The new is 10g. The new machine is faster in general. Most (non-spatial) batch processes run in half the time. However, the below statement is radically slower. The below statement ran in 45 minutes before. On the new machine it hasn't finished after 10 hours. I am able to get a 5% sample of the customers to finish in 45 minutes.
Does anyone have any ideas on how to approach this problem? Any chance something isn't installed correctly on the new machine (the nth version of the query finished, albeit 20 times slower)?
    Appreciate any help. Thanks.
    - Jack
    create table nearest_store
    as
    select /*+ ordered */
    a.customer_id,
    b.store_id nearest_store,
    round(mdsys.sdo_nn_distance(1),4) distance
    from customers a,
    stores b
    where mdsys.sdo_nn(
    b.geometry,
    a.geometry,
    'sdo_num_res=1, unit=mile',
    1
    ) = 'TRUE'
    ;

    Dan,
    customers 110,000 (involved in this query)
    stores 28,000
    Here is the execution plan on the current machine:
    CREATE TABLE STATEMENT cost = 81947
    LOAD AS SELECT
    PX COORDINATOR
    PX SEND QC (RANDOM) :TQ10000
    ROW NESTED LOOPS
    1 1 PX BLOCK ITERATOR
    1 1ROW TABLE ACCESS FULL CUSTOMERS
    1 3 PARTITION RANGE ALL
    1 3 TABLE ACCESS BY LOCAL INDEX ROWID STORES
    DOMAIN INDEX STORES_SIDX
    I can't capture the execution plan on the old database. It is gone. I don't remember it being any different from the above (full scan customers, probe stores index once for each row in customers).
    I am trying the query without the create table (just doing a count). I'll let you know on that one.
    I am at 10.0.1.3.
    Here is how I created the index:
    create index stores_sidx
    on stores(geometry)
    indextype is mdsys.spatial_index LOCAL
    Note that the stores table is partitioned by range on store type. There are three store types (each in its own partition). The query returns the nearest store of each type (three rows per customer). This is by design (based on the documented behavior of sdo_nn).
    In addition to running the query without the CTAS, I am also going try running it on a different machine tonight. I let you know how that turns out.
    The reason I ask about the install, is that the Database Quick Installation Guide for Solaris says this:
    "If you intend to use Oracle JVM or Oracle interMedia, you must install the Oracle Database 10g Products installation type from the Companion CD. This installation optimizes the performance of those products on your system."
    And, the Database Installlation Guide says:
    "If you plan to use Oracle JVM or Oracle interMedia, Oracle strongly recommends that you install the natively compiled Java libraries (NCOMPs) used by those products from the Oracle Database 10g Companion CD. These libraries are required to improve the performance of the products on your platform."
    Based on that, I am suspicious that maybe I have the product installed on the new machine, but not correctly (forgot to set fast=true).
    Let me know if you think of anything else you'd like to see. Thanks.
    - Jack

  • Performance Problem while Aggregation

Performance problem while aggregating.
These are my dimensions and cube. I wrote a customized aggregation map and I am aggregating all dimensions (except for the last level, a unique one (PK)) plus the cube.
My system configuration is good,
but aggregation deployment (calculation) is really very slow compared to other vendors' products,
i.e. it took me 3 hours to aggregate all dimensions (all levels except the last) and the cube (containing only 1000 rows to check; all other rows were deleted).
Dimension         Number of rows
dim_product       156,0
t_time            730
dim_promotion     186,4
dim_store         25
dim_distributor   102,81

Cube              Number of rows
Cube_SalesFact    300,000
Please help solve my problem, because if it takes that much time then I must say the performance of the software is not where it should be,
and I must suggest Oracle do something about this serious problem.
Thanks
Well-wisher of Oracle Corporation

    BEGIN
    cwm2_olap_manager.set_echo_on;
    CWM2_OLAP_MANAGER.BEGIN_LOG('D:\', 'AggMap_CubeSalesfact.log');
    DBMS_AW.EXECUTE('aw attach RTTARGET.AW_WH_SALES RW' );
    BEGIN
    DBMS_AWM.DELETE_AWDIMLOAD_SPEC('DIM_DISTRIBUTOR', 'RTTARGET', 'DIM_DISTRIBUTOR');
    DBMS_AWM.DELETE_AWDIMLOAD_SPEC('DIM_PRODUCT', 'RTTARGET', 'DIM_PRODUCT');
    DBMS_AWM.DELETE_AWDIMLOAD_SPEC('DIM_PROMOTION', 'RTTARGET', 'DIM_PROMOTION');
    DBMS_AWM.DELETE_AWDIMLOAD_SPEC('DIM_STORE', 'RTTARGET', 'DIM_STORE');
    DBMS_AWM.DELETE_AWDIMLOAD_SPEC('T_TIME', 'RTTARGET', 'T_TIME');
    --Deleting AW_CubeLoad_Spec
    DBMS_AWM.DELETE_AWCUBELOAD_SPEC('CUBESALESFACT', 'RTTARGET', 'CUBE_SALESFACT');
    DBMS_AW.EXECUTE('upd '||'RTTARGET'||'.'||'AW_WH_SALES' ||'; commit');
    Commit;
    --Deleting AggMap
    DBMS_AWM.Delete_AWCUBEAGG_SPEC('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT');
    DBMS_AW.EXECUTE('upd '||'RTTARGET'||'.'||'AW_WH_SALES' ||'; commit');
    Commit;
    EXCEPTION WHEN OTHERS THEN NULL;
    END;
    --Creating Agg Map for cube cube_salesfact
    -- DBMS_AWM.CREATE_AWCUBEAGG_SPEC(AggMap_Name , USER , AW_NAME, CUBE_NAME);
    DBMS_AWM.CREATE_AWCUBEAGG_SPEC('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT');
    --Specifying aggrgation for measures of cube
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_MEASURE('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'STORECOST');
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_MEASURE('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'STORESALES');
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_MEASURE('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'UNITSALES');
    --Specifying aggrgation for different level of dimensions
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_T_TIME', 'L_ALLYEARS');
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_T_TIME', 'L_YEAR');
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_T_TIME', 'L_QUARTER');
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_T_TIME', 'L_MONTH');
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_STORE', 'L_ALLCOUNTRIES');
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_STORE', 'L_COUNTRY');
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_STORE', 'L_PROVINCE');
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_STORE', 'L_CITY');
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PRODUCT', 'L_ALLPRODUCTS');
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PRODUCT', 'L_BRANDCLASS');
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PRODUCT', 'L_BRANDCATEGORY');
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PRODUCT', 'L_BRAND');
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_DISTRIBUTOR', 'L_ALLDIST');
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_DISTRIBUTOR', 'L_DISTINCOME');
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PROMOTION', 'L_ALLPROM');
    DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PROMOTION', 'L_PROMOTIONMEDIA');
    Begin
    --************************     CODE      **********************************
    --aw_dim.sql
    DBMS_AWM.CREATE_AWDIMLOAD_SPEC('DIM_DISTRIBUTOR', 'RTTARGET', 'DIM_DISTRIBUTOR', 'FULL_LOAD_ADDITIONS_ONLY');
    DBMS_AWM.REFRESH_AWDIMENSION('RTTARGET', 'AW_WH_SALES', 'WH_DIM_DISTRIBUTOR', 'DIM_DISTRIBUTOR');
    commit;
    DBMS_AWM.CREATE_AWDIMLOAD_SPEC('DIM_PRODUCT', 'RTTARGET', 'DIM_PRODUCT', 'FULL_LOAD_ADDITIONS_ONLY');
    DBMS_AWM.REFRESH_AWDIMENSION('RTTARGET', 'AW_WH_SALES', 'WH_DIM_PRODUCT', 'DIM_PRODUCT');
    commit;
    DBMS_AWM.CREATE_AWDIMLOAD_SPEC('DIM_PROMOTION', 'RTTARGET', 'DIM_PROMOTION', 'FULL_LOAD_ADDITIONS_ONLY');
    DBMS_AWM.REFRESH_AWDIMENSION('RTTARGET', 'AW_WH_SALES', 'WH_DIM_PROMOTION', 'DIM_PROMOTION');
    commit;
    DBMS_AWM.CREATE_AWDIMLOAD_SPEC('DIM_STORE', 'RTTARGET', 'DIM_STORE', 'FULL_LOAD_ADDITIONS_ONLY');
    DBMS_AWM.REFRESH_AWDIMENSION('RTTARGET', 'AW_WH_SALES', 'WH_DIM_STORE', 'DIM_STORE');
    commit;
    DBMS_AWM.CREATE_AWDIMLOAD_SPEC('T_TIME', 'RTTARGET', 'T_TIME', 'FULL_LOAD_ADDITIONS_ONLY');
    DBMS_AWM.REFRESH_AWDIMENSION('RTTARGET', 'AW_WH_SALES', 'WH_T_TIME', 'T_TIME');
    commit;
    --aw_cube.sql
    DBMS_AWM.CREATE_AWCUBELOAD_SPEC('CUBE_SALESFACT', 'RTTARGET', 'CUBE_SALESFACT', 'LOAD_DATA');
    dbms_awm.add_awcubeload_spec_measure('CUBE_SALESFACT', 'RTTARGET', 'CUBE_SALESFACT', 'STORECOST', 'STORECOST', 'STORECOST');
    dbms_awm.add_awcubeload_spec_measure('CUBE_SALESFACT', 'RTTARGET', 'CUBE_SALESFACT', 'STORESALES', 'STORESALES', 'STORESALES');
    dbms_awm.add_awcubeload_spec_measure('CUBE_SALESFACT', 'RTTARGET', 'CUBE_SALESFACT', 'UNITSALES', 'UNITSALES', 'UNITSALES');
    DBMS_AWM.REFRESH_AWCUBE('RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'CUBE_SALESFACT');
    EXCEPTION WHEN OTHERS THEN NULL;
    END;
    -- Now build the cube. This may take some time on large cubes.
    -- DBMS_AWM.aggregate_awcube(USER, AW_NAME, CUBE_NAME, aggspec);
    DBMS_AWM.aggregate_awcube('RTTARGET','AW_WH_SALES', 'WH_CUBE_SALESFACT','AGG_CUBESALESFACT');
    DBMS_AW.EXECUTE('upd '||'RTTARGET'||'.'||'AW_WH_SALES' ||'; commit');
    Commit;
    CWM2_OLAP_METADATA_REFRESH.MR_REFRESH();
    CWM2_OLAP_METADATA_REFRESH.MR_AC_REFRESH();
    DBMS_AW.Execute('aw detach RTTARGET.AW_WH_Sales');
    CWM2_OLAP_MANAGER.END_LOG;
    cwm2_olap_manager.set_echo_off;
    EXCEPTION WHEN OTHERS THEN NULL;
    -- EXCEPTION WHEN OTHERS THEN RAISE;
    END;

  • Performance problem querying multiple CLOBS

    We are running Oracle 8.1.6 Standard Edition on Sun E420r, 2 X 450Mhz processors, 2 Gb memory
    Solaris 7. I have created an Oracle Text indexes on several columns in a large table, including varchar2 and CLOB. I am simulating search engine queries where the user chooses to find matches on the exact phrase, all of the words (AND) and any of the words (OR). I am hitting performance problems when querying on multiple CLOBs using the OR, e.g.
    select count(*) from articles
    where contains (abstract , 'matter OR dark OR detection') > 0
    or contains (subject , 'matter OR dark OR detection') > 0
    Columns abstract and subject are CLOBs. However, this query works fine for AND;
    select count(*) from articles
    where contains (abstract , 'matter AND dark AND detection') > 0
    or contains (subject , 'matter AND dark AND detection') > 0
    The explain plan gives a cost of 2157 for OR and 14.3 for AND.
    I realise that multiple contains are not a good thing, but the AND returns sub-second, and the OR is taking minutes! The indexes are created thus:
    create index article_abstract_search on article(abstract)
    INDEXTYPE IS ctxsys.context parameters ('STORAGE mystore memory 52428800');
    The data and index tables are on separate tablespaces.
    Can anyone suggest what is going on here, and any alternatives?
    Many thanks,
    Geoff Robinson

    Thanks for your reply, Omar.
    I have read the performance FAQ already, and it points out single CONTAINS clauses are preferred, but I need to check 2 columns. Also, I don't just want a count(*), I will need to select field values. As you can see from my 2 queries, the first has multiple CLOB columns using OR, and the second AND, with the second taking that much longer. Even with only a single CONTAINS, the cost estimate is 5 times more for OR than for AND.
    Add an extra CONTAINS and it becomes 300 times more costly!
    The root table is 3 million rows, the 2 token tables have 6.5 and 3 million rows respectively. All tables have been fully analyzed.
    Regards
    Geoff

  • Performance problem with Oracle

    We are currently getting a system developed in Unix/Weblogic/Tomcat/Oracle environment. We have developed a screen that contains 5 or 6 different parameters to select from. We could select multiple parameters in each of these selections. The idea behind the subsequent screens is to attach information to already existing data/ possible future data that matches the selection criteria.
    Based on these selections, existing data located within the system in a table is searched and those that match are selected. Also new rows are created in the table against combinations that do not currently have a match. Frequently multiple parameters are selected, and 2000 different combinations need to be searched in the table. Of these selections, only about 100 or 200 combinations will be available in existing data. So the system is having to insert 1800 rows. The user meanwhile waits for the system to come up with data based on their selections. The user is not willing to wait more than 30 seconds to get to the next screen. In the above mentioned scenario, the system takes more than an hour to insert the new records and bring the information up. We need suggestions to see if the performance can be improved this drastically. If not what are the alternatives? Thanks

    The #1 cause for performance problems with Oracle is not using it correctly.
I find it hard to believe that, with the small data volumes mentioned, you can have performance problems.
    You need to perform a sanity check. Are you using Oracle correctly? Do you know what bind variables are? Are you using indexes correctly? Are you using PL/SQL correctly? Is the instance setup correctly? What about storage, are you using SAME (RAID10) or something else? Etc.
Fact: Oracle performs exceptionally well.
Simple example from a benchmark I did on this exact same subject, with app-tier developers not understanding and not using Oracle correctly. Incorrect usage of Oracle doing 100,000 SQL statements: 24+ minutes elapsed time. Doing those exact same 100,000 SQL statements correctly (using bind variables): 8 seconds elapsed time. (Benchmark using Oracle 10.1.0.3 on a Sunfire V20z server.)
    But then you need to use Oracle correctly. Are you familiar with the Oracle Concepts Guide? Have you read the Oracle Application Developer Fundamentals Guide?
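As a small illustration of the bind-variable point above (the PARAM_COMBO table and its columns are invented for this sketch, and this is not the benchmark code referred to above), the difference is between building new SQL text for every combination and reusing one parsed statement with binds:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical example contrasting literal SQL with bind variables.
public class BindVariableExample {

    // Every call produces new SQL text, so the database hard-parses each statement.
    static void insertWithoutBinds(Connection conn, int p1, int p2) throws SQLException {
        String sql = "INSERT INTO PARAM_COMBO (P1, P2) VALUES (" + p1 + ", " + p2 + ")";
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(sql);
        }
    }

    // One statement is parsed once and re-executed with different bind values.
    static void insertWithBinds(Connection conn, int[][] combos) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO PARAM_COMBO (P1, P2) VALUES (?, ?)")) {
            for (int[] c : combos) {
                ps.setInt(1, c[0]);
                ps.setInt(2, c[1]);
                ps.addBatch();
            }
            ps.executeBatch();   // also cuts network round trips for the ~1800-row case above
        }
    }
}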

  • List versions causing performance problems

We have a List (not a Library) with about 100 columns (yes, we know that's a lot, and we have verified that one Item wraps to two DB rows), none of which is indexed. An InfoPath form is used to enter and view the List Items. Versioning is on for auditing purposes.
Versioning appears to be causing a significant performance issue: the more versions a record has, the longer it takes for the IP form to open. Generally, each version adds one second to the time it takes for the IP form to open. So if a list item has 30 versions, it takes about 30 seconds from the time the user clicks the Title link until the IP form opens. Obviously, having to wait 30 seconds or more for a form to open is a problem.
The performance is similar when we open the Version History window for any one Item. We also tried using PowerShell to load a List Item's Versions and it, too, behaves the same...about a 1s delay for each version an Item has.
Any suggestions on how we can configure the list or SP to prevent versions from slowing performance?

Created PDFs for the results of the first two:
dbcc show_statistics(AllUserData, AllUserData_PK)
dbcc show_statistics(AllUserData, AllUserData_ParentID)
Here are the results of the SELECT statement:
select name, alloc_unit_type_desc, avg_fragmentation_in_percent from…

name                  alloc_unit_type_desc  avg_fragmentation_in_percent
AllUserData_ParentId  IN_ROW_DATA           29.15961419
AllUserData_ParentId  LOB_DATA              0
AllUserData_PK        IN_ROW_DATA           28.0240832
