Web Intelligence - Refresh taking a long time

Hi Team,
I have an issue in BI Launchpad. I created a WebI report with a BEx query as the source and enabled the "Refresh on open" option. When a user opens the report, it takes a very long time (approximately 30 minutes) just to display the prompt screen, even though the report has only one prompt variable.
When I create the same report in Analysis for OLAP, it runs fast in BI Launchpad.
Is there any option to resolve this issue?
Thanks in advance,
Krishna.

Hi Mahesh,
Please try the following.
Add two new registry entries (string values) under:
For the 32-bit Rich Client: [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\SAP BusinessObjects\Suite XI 4.0\default\WebIntelligence\Calculator]
For 64-bit Web Intelligence: [HKEY_LOCAL_MACHINE\SOFTWARE\SAP BusinessObjects\Suite XI 4.0\default\WebIntelligence\Calculator]
"BICSLOVChunkSize"="4000"
"BICSResultChunkSize"="100000"
BICSLOVChunkSize is defined as a string. It splits the list of values into chunks of at most the given size. The larger the number, the more likely the retrieval of the list of values is to fail, so consider a value between 100 and 5000. For example, if your database holds 1,000 values and you set BICSLOVChunkSize to 200, you will see a hierarchical view of 5 groups of 200 values each.
BICSResultChunkSize is defined as a string. It limits the number of rows that can be retrieved for a list of values. If a list of values contains 1 million entries and BICSResultChunkSize is set to 1000, only the first 1,000 rows are retrieved, so you get a partial result set for the list of values.
Regards,
Krishna.K

Similar Messages

  • Central confirmation taking a long time for a particular user in SRM

    Hi Gurus.
    I am facing an issue in the production system: for some users, Central confirmation takes a very long time. Around 10 users have reported the issue so far, and it takes about 10 times longer than usual. Any suggestions would be a great help.

    Hi Prabhakar,
    As Konstantin rightly mentioned, kindly check those BAdI implementations, especially BBP_WF_LIST. In addition, please check whether you are getting a dump such as:
    TSV_TNEW_PAGE_ALLOC_FAILED
    Best Regards,
    Bharathi

  • Materialized views with complete refresh taking a very long time to refresh

    Hi,
    We have some materialized views configured for complete refresh, which are refreshed manually every morning. Sometimes the MVs take much longer than usual to refresh. We ran the query below to see whether any objects/MVs were locked, and it returned no rows:
    SELECT s.username u_name, d.owner obj_owner,
           d.object_name, d.object_type, s.osuser,
           DECODE(l.block,
                  0, 'Not Blocking',
                  1, 'Blocking',
                  2, 'Global') status,
           DECODE(v.locked_mode,
                  0, 'None',
                  1, 'Null',
                  2, 'Row-S (SS)',
                  3, 'Row-X (SX)',
                  4, 'Share',
                  5, 'S/Row-X (SSX)',
                  6, 'Exclusive', TO_CHAR(v.locked_mode)) mode_held
      FROM gv$locked_object v, dba_objects d,
           gv$lock l, gv$session s
     WHERE v.object_id  = d.object_id
       AND v.object_id  = l.id1
       AND v.session_id = s.sid
     ORDER BY s.username, v.session_id;
    Please help us troubleshoot this refresh issue, and let us know of any possible causes of a refresh taking longer than usual.
    Thanking you in advance.
    Bharath

    >
    We have some materialized view which are configured for Complete refresh and are being refreshed manually everyday in the morning. Some times we face this issue of the MV taking longer time than usual for refresh
    >
    Which is it? 'Some' MVs, or 'the' MV that is taking longer than usual?
    1. How many MVs are being refreshed?
    2. What process are you using to refresh them?
    3. Are you refreshing them one at a time serially, or starting the refresh of all of them at the same time?
    4. Do any of these MVs use the same base tables as the other MVs?
    5. Are any of the MVs based on remote tables?
    6. Are any of the MVs based on other MVs?
    7. What does an AWR report show for the time period when things are taking longer than usual?
    All of the same issues that can cause a query to run slow can cause an MV refresh to go slow because an MV refresh is a query.
    But you need to provide a lot more information and metrics about your system if you want to get any advice that might be specific to your particular case.
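    One quick check while a refresh is running (an editorial sketch, not part of the original reply; gv$session_longops is a standard view that tracks long-running full scans, sorts, and similar operations):
    -- Operations still in progress, largest remaining time first
    SELECT sid, serial#, opname, target, sofar, totalwork,
           ROUND(sofar / NULLIF(totalwork, 0) * 100, 1) AS pct_done,
           time_remaining
      FROM gv$session_longops
     WHERE totalwork > 0
       AND sofar < totalwork
     ORDER BY time_remaining DESC;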

  • Taking a long time to fetch data from CDHDR

    Hi Experts,
    Counting the entries in the CDHDR table takes a very long time and ends in a TIME_OUT dump.
    I expect the table holds more than a million entries. Is there any alternative way to find the number of entries?
    We are selecting from CDHDR with the following conditions:
    Objectclas - classify
    Udate - 'X' date
    Utime - 'X' (even with a selection of 1 minute)
    We also tried to create an index on the UDATE field, but that also takes a very long time (more than 6 hours, and it did not complete).
    Can you suggest an alternative way to find the entries?
    Regards,
    VS

    Hello,
    In SE16, on the initial screen, enter your selection criteria and run the query in the background to create a spool request:
    SE16 > Table Contents > enter selection criteria > Program > Execute in Background.
    Best regards,
    Peter

  • Snapshot Refresh taking More Time

    Dear All,
    We are currently facing a snapshot refresh problem in our production environment.
    Oracle version: Oracle8i Enterprise Edition Release 8.1.6.1.0
    We have created a snapshot on a join of 2 remote tables accessed through synonyms.
    ex:
    CREATE SNAPSHOT XYZ REFRESH COMPLETE WITH ROWID
    AS
    SELECT a.* FROM SYN1 a, SYN2 b
    Where b.ACCT_NO=a.ACCT_NO;
    We created an index on the snapshot XYZ:
    Create index XYZ_IDX1 on XYZ (ACCT_NO);
    a. The explain plan of the above query shows an index scan on SYN1. If we run the SELECT statement on its own, it takes barely 2 seconds to execute.
    b. But the complete refresh of snapshot XYZ takes almost 20 minutes just to truncate and insert 500 records, and it generates huge disk reads, as SYN1's remote table contains 32 million records whereas SYN2 contains only 500.
    If we truncate and insert into an ordinary table, as the complete refresh does, it takes barely 4 seconds.
    Please let me know the possible reasons for the complete refresh of the snapshot taking so much longer.

    Dear All,
    While refreshing snapshot XYZ, I found the following:
    a. A sort/merge operation was performed while inserting the data into the snapshot:
    INSERT /*+ APPEND */ INTO "XYZ"
    SELECT a.* FROM SYN1 a, SYN2 b Where b.ACCT_NO=a.ACCT_NO;
    This operation performed huge disk reads.
    b. By increasing the session parameter sort_area_size, the time decreased by 50%, but the disk reads are still huge.
    I would like to know why a sort/merge operation is performed for the above insert.
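    A hedged sketch of the workaround described above (8i-era session parameter; the object name comes from the post, the value is illustrative):
    -- Give the session a larger in-memory sort area before the refresh,
    -- so the sort/merge join spills less to disk (affects this session only).
    ALTER SESSION SET sort_area_size = 67108864;  -- 64 MB, adjust to taste
    EXEC DBMS_SNAPSHOT.REFRESH('XYZ', 'C');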

  • UPDATE proc taking a very long time

    Hi
    An Oracle UPDATE procedure is taking over 10 hours to update 130,000 records:
    /**********************CODE***************************/
    PROCEDURE Update_SP IS
      v_c     NUMBER := 0;
      err_num NUMBER;
      err_msg VARCHAR2(100);
      CURSOR c1 IS
        SELECT tim.c_col, mp.t_n
          FROM Materialized_VW tim, MP_Table mp
         WHERE tim.r_id = mp.r_id
           AND tim.p_id = mp.p_id
           AND tim.t_id = mp.t_id
           AND mp.t_date BETWEEN wk_comm AND wk_end;
    BEGIN
      FOR i IN c1 LOOP
        IF v_c = 100000 THEN
          v_c := 0;
          COMMIT;
        END IF;
        v_c := v_c + 1;
        UPDATE MP_Table mp
           SET c_col = i.c_col
         WHERE mp.t_n = i.t_n;
      END LOOP;
      COMMIT;
    EXCEPTION
      WHEN OTHERS THEN
        ROLLBACK;
        err_num := SQLCODE;
        err_msg := SUBSTR(SQLERRM, 1, 100);
    END Update_SP;
    /**********************CODE***************************/
    Materialized_VW :- It has 4 SEPARATE indexes on the columns R_id, P_id, t_id, c_col
    MP_Table :- It has 4 SEPARATE indexes on the columns R_id, P_id, t_id, t_n
    The explain plan shows (when the number of records is large):
    SELECT STATEMENT ALL_ROWS  Cost: 17,542  Bytes: 67  Cardinality: 1
      3 HASH JOIN  Cost: 17,542  Bytes: 67  Cardinality: 1
        1 TABLE ACCESS FULL MP_TABLE  Cost: 14  Bytes: 111,645  Cardinality: 4,135
        2 TABLE ACCESS FULL MATERIALIZED_VW  Cost: 16,957  Bytes: 178,668,800  Cardinality: 4,466,720
    The explain plan shows (when the number of records is small):
    SELECT STATEMENT ALL_ROWS  Cost: 2,228  Bytes: 67  Cardinality: 1
      6 NESTED LOOPS  Cost: 2,228  Bytes: 67  Cardinality: 1
        1 TABLE ACCESS FULL MP_TABLE  Cost: 3  Bytes: 12,015  Cardinality: 445
        5 TABLE ACCESS BY INDEX ROWID MATERIALIZED_VW  Cost: 2,228  Bytes: 40  Cardinality: 1
          4 AND-EQUAL
            2 INDEX RANGE SCAN NON-UNIQUE MATERIALIZED_VW_INDX1
            3 INDEX RANGE SCAN NON-UNIQUE MATERIALIZED_VW_INDX2
    This intermittent change of plan makes the procedure take a very long time whenever the number of records is large.
    This behaviour is causing problems, as 10 hours is far too long for an update of only about 130,000 rows.
    We cannot use a single direct UPDATE either, as that results in Oracle exceptions.
    Please suggest ways of reducing the time, or any other method of doing the above.
    Also, is there any way to force a consistent plan that takes less time?
    Thanks
    Arnab
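    A set-based alternative worth testing (an editorial sketch using the names from the post; wk_comm/wk_end become bind variables here, and MERGE raises ORA-30926 if one t_n maps to several c_col values, so deduplicate the source first if needed):
    MERGE INTO mp_table mp
    USING (SELECT tim.c_col, m.t_n
             FROM materialized_vw tim, mp_table m
            WHERE tim.r_id = m.r_id
              AND tim.p_id = m.p_id
              AND tim.t_id = m.t_id
              AND m.t_date BETWEEN :wk_comm AND :wk_end) src
       ON (mp.t_n = src.t_n)
     WHEN MATCHED THEN
       UPDATE SET mp.c_col = src.c_col;
    COMMIT;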

    Hi BluShadow,
    I followed up your example, extending it to bulk processing.
    I tested insert and update operations.
    Here are the insert results:
    SQL> CREATE TABLE mytable (x number, z varchar2(5));
    Table created.
    SQL> DECLARE
           v_sysdate DATE;
           v_insert  NUMBER;
           TYPE t_nt_x IS TABLE OF NUMBER;
           TYPE t_nt_z IS TABLE OF VARCHAR2(5);
           v_nt_x t_nt_x;
           v_nt_z t_nt_z;
           CURSOR c1 IS SELECT rownum AS x, 'test1' AS z FROM DUAL CONNECT BY ROWNUM <= 1000000;
         BEGIN
           -- Single insert
           v_insert := 0;
           EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
           v_sysdate := SYSDATE;
           INSERT INTO mytable (x,z) SELECT rownum,'test1' FROM DUAL CONNECT BY ROWNUM <= 1000000;
           v_insert := SQL%ROWCOUNT;
           COMMIT;
           DBMS_OUTPUT.PUT_LINE('Single insert--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
           -- Multi insert
           v_insert := 0;
           EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
           v_sysdate := SYSDATE;
           FOR i IN 1..1000000 LOOP
             INSERT INTO mytable (x,z) VALUES (i,'test1');
             v_insert := v_insert + SQL%ROWCOUNT;
           END LOOP;
           COMMIT;
           DBMS_OUTPUT.PUT_LINE('Multi insert--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
           -- Multi insert using bulk
           v_insert := 0;
           EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
           v_sysdate := SYSDATE;
           OPEN c1;
           LOOP
             FETCH c1 BULK COLLECT INTO v_nt_x, v_nt_z LIMIT 100000;
             EXIT WHEN c1%NOTFOUND;
             FORALL i IN 1..v_nt_x.COUNT
               INSERT INTO mytable (x,z) VALUES (v_nt_x(i), v_nt_z(i));
             v_insert := v_insert + SQL%ROWCOUNT;
           END LOOP;
           COMMIT;
           DBMS_OUTPUT.PUT_LINE('Multi insert using bulk--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
         END;
         /
    Single insert--> Row Inserted: 1000000 Time Taken: 3
    Multi insert--> Row Inserted: 1000000 Time Taken: 62
    Multi insert using bulk--> Row Inserted: 1000000 Time Taken: 10
    PL/SQL procedure successfully completed.
    And here are the update results:
    SQL> DECLARE
           v_sysdate DATE;
           v_update  NUMBER;
           TYPE t_nt_x IS TABLE OF ROWID;
           TYPE t_nt_z IS TABLE OF VARCHAR2(5);
           v_nt_x t_nt_x;
           v_nt_z t_nt_z;
           CURSOR c1 IS SELECT rowid AS ri, 'test4' AS z FROM mytable;
         BEGIN
           -- Single update
           v_update := 0;
           v_sysdate := SYSDATE;
           UPDATE mytable SET z = 'test2';
           v_update := SQL%ROWCOUNT;
           COMMIT;
           DBMS_OUTPUT.PUT_LINE('Single update--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
           -- Multi update
           v_update := 0;
           v_sysdate := SYSDATE;
           FOR rec IN (SELECT ROWID AS ri FROM mytable) LOOP
             UPDATE mytable SET z = 'test3' WHERE ROWID = rec.ri;
             v_update := v_update + SQL%ROWCOUNT;
           END LOOP;
           COMMIT;
           DBMS_OUTPUT.PUT_LINE('Multi update--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
           -- Multi update using bulk
           v_update := 0;
           v_sysdate := SYSDATE;
           OPEN c1;
           LOOP
             FETCH c1 BULK COLLECT INTO v_nt_x, v_nt_z LIMIT 100000;
             EXIT WHEN c1%NOTFOUND;
             FORALL i IN 1..v_nt_x.COUNT
               UPDATE mytable SET z = v_nt_z(i) WHERE ROWID = v_nt_x(i);
             v_update := v_update + SQL%ROWCOUNT;
           END LOOP;
           COMMIT;
           DBMS_OUTPUT.PUT_LINE('Multi update using bulk--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
         END;
         /
    Single update--> Row Updated: 1000000 Time Taken: 39
    Multi update--> Row Updated: 1000000 Time Taken: 60
    Multi update using bulk--> Row Updated: 1000000 Time Taken: 32
    PL/SQL procedure successfully completed.
    The single statement still has the better performance, but with bulk processing the cursor approach improves dramatically (in the update case, bulk processing is even slightly faster than the single statement).
    I guess that with bulk processing there is much less switching between the SQL and PL/SQL engines.
    It would be interesting to test with more rows; I might do that tomorrow.
    Just thought it would be interesting to share the results with you.
    Cheers,
    Davide

  • Mview Refresh Taking more time

    Hi Team,
    I am using materialized view refresh for my database, with refresh type FAST.
    Previously the mview refresh completed within 10-15 minutes, but now it takes more than 150 minutes to complete.
    Please advise on the above issue.
    With Regards,
    Venkatesh J.

    Venkateshj wrote:
    Hi Team,
    I am using Oracle Database 11g, version 11.1.0.7.0, and materialized view refresh with refresh type FAST.
    Previously the mview refresh completed within 10-15 minutes, but now it takes more than 150 minutes to complete.
    Please suggest how to tune the above issue.
    Thanks in advance
    With Regards,
    Venkatesh J.
    How to make a tuning request:
    https://forums.oracle.com/forums/thread.jspa?threadID=2174552#9360003
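    One common cause worth ruling out (an editorial sketch, not from the reply): a fast refresh reads the materialized view log on the master table, so a log that has grown very large, for example because another subscriber stopped refreshing, slows every refresh. The object names below are hypothetical:
    -- Locate the MV log and see how big it has grown
    SELECT log_owner, master, log_table
      FROM dba_mview_logs
     WHERE master = 'MY_MASTER_TABLE';
    SELECT COUNT(*) FROM my_owner.mlog$_my_master_table;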

  • Planning Desktop Refresh Taking more time than Expected

    Helllo Everyone,
    We are using System 9.2 (Essbase and Planning). The Planning desktop refresh takes about 19-20 minutes regardless of whether single or multiple sparse dimension members are added, deleted, or moved, even after clearing the database.
    Going through the application log, we noticed that the location aliases are deleted and re-created each time, so it takes about 3-4 minutes to create the location aliases for each plan type (we have 3 plan types in total). Is that something we can avoid when we perform a refresh, or do we need to change any performance settings to improve the refresh timings?
    Any thoughts will be helpful.
    Thanks

    Hi,
    1. Check your Planning repository/DB and the connection between the DB and Essbase.
    2. Check your DB server's performance. Is there any other DB activity during your refresh?
    Zaky

  • Loading of data is taking a long time

    Data is coming from an Oracle DB into a cube, and the load is taking a very long time. After 1 day it is still in yellow status, but it gives a short dump MESSAGE_TYPE_X.
    The error analysis says:
    Diagnosis
    The information available is not sufficient for analysis. You must determine whether further IDocs exist in SAP BW that have not yet been processed, and could deliver additional information.
    Further analysis:
    Check the SAP BW ALE inbox. You can check the IDocs using the Wizard or the "All IDocs" tab strip.
    Current status:
    No selection information arrived from the source system.
    I have checked the system log and found the same error. Moreover, the RFC connection is OK.
    Please suggest.

    Rajib,
    What I mean is: load to the PSA only (PSA only in the InfoPackage) and do not check the box "update subsequently...".
    When all the data is in the PSA, load it into the cube
    (manually, by clicking the button on the "Status" tab of the monitor).
    This way the system has more resources available to do just one step.
    Udo

  • ODS lookup takes a long time

    Hi guys,
      Data lookups in the ODS are taking a very long time. It was working fine a week back; the ODS has around 200 million records.
    Thanks,
    Your help will be greatly appreciated

    For two records, it indeed should not take that much time.
    Check the overall system performance as well: sometimes Basis may be running backup activities, which can also degrade system performance.
    Check whether this is happening to everyone in your project, and compare the data read directly from the ODS contents rather than through LISTCUBE.
    Also check how heavily your system is loaded.
    Thanks..
    Hope this helps

  • Refresh Document call in RESTful workflow is taking a long time

    Hi,
    I am facing an issue with the Refresh Document step in a Raylight workflow using the RESTful API.
    We use this Refresh Document call to get the data provider mappings formed.
    For a few WebI documents, this Refresh Document step takes a long time to complete. After some investigation, we found that a series of SQL statements runs against the actual data source for each Refresh Document call. If the SQL takes a long time, the Refresh Document call takes a long time as well.
    So far, we have been able to work around this problem in some cases, as below.
    1. For a WebI document with contexts to answer during Refresh Document:
    Ex:
    Method: PUT
    URI: http://localhost:6405/biprws/raylight/v1/documents/1111/parameters
    Request Body: (setting default Context)
    <parameters>
      <parameter>
        <id>3</id>
        <answer>
          <values>
            <value id="4">Context1</value>
          </values>
        </answer>
      </parameter>
    </parameters>
    Result: Success (No Time out Exception)
    2. For a WebI document with some prompts to answer during Refresh Document:
    Ex:
    Method: PUT
    URI: http://localhost:6405/biprws/raylight/v1/documents/1111/parameters
    Request Body: (With Some Dummy Values)
    <parameters>
      <parameter type="prompt" optional="true">
        <id>1</id>
        <answer type="Text">
          <values>
            <value id="1">Dummy Text</value>
          </values>
        </answer>
      </parameter>
      <parameter type="prompt" optional="true">
        <id>3</id>
        <answer type="Numeric">
          <values>
            <value id="1">0</value>
          </values>
        </answer>
      </parameter>
    </parameters>
    Result: Success (No Time out Exception)
    3. For a WebI document with no prompts and no contexts to answer during Refresh Document:
    Ex:
    Method: PUT
    Request Body: (Empty)
    Result: Time Out Exception (from my C# code)
    Only in this scenario (3) am I facing the issue: since there are no prompts or contexts, I cannot set any default parameter values, so the SQL statements execute. If any of them is slow, the Refresh Document step takes a long time or even causes a timeout exception in my C# code. I tried increasing the timeout for the WebRequest in C#, but that did not solve it.
    When creating a WebI document, can we skip the SQL statements that run during the Refresh Document API call?
    If yes, please let me know your ideas for solving my problem.
    Please let me know if you need more information.
    Thanks in advance.
    Thanks,
    Mahendra

    Hi Anthony,
    Yes, I tried creating the same WebI document using Web Intelligence Rich Client and also using Web Intelligence from BI Launch Pad.
    With both WebI clients I am able to create the same WebI document that I am trying to create via the RESTful API.
    My observations are as follows:
    Observation 1: After selecting the required result objects/columns in the Query Panel, by clicking Close -> Apply Changes and Close, I can create the WebI document and save it successfully.
    Observation 2: After selecting the required result objects/columns in the Query Panel, if I click Run Query, it takes a long time to bring back fresh data.
    As in my first observation, I only want to create WebI documents successfully, without fetching fresh data. It is enough if I can create the WebI documents from the RESTful API; I do not need any fresh data from this Refresh Document step, only the data provider mappings.
    As mentioned in my previous reply, without the data provider mappings we are unable to update the report specification. That is why we call this Refresh Document step after updating the data provider specification.
    Is this Refresh Document call required in the workflow?
    Please suggest any other REST API call I can use to get the data provider mappings formed, other than the Refresh Document step.
    Thanks in advance. Correct me if I am wrong.
    Thanks,
    Mahendra

  • FAST REFRESH OF MV IS TAKING MORE TIME THAN COMPLETE REFRESH IN PRODUCTION

    Hi
    We have a production environment in which a fast refresh of an MV takes more time than a complete refresh. Can you tell me the differences between these refresh types?
    Regards,
    RJ.

    Sure:
    SQL> create table emp (id number(9));
    Table created.
    SQL> alter table emp add primary key (id);
    Table altered.
    SQL> create materialized view log on emp;
    Materialized view log created.
    SQL> create materialized view emp_mv as select * from emp;
    Materialized view created.
    Complete refresh:
    SQL> exec dbms_snapshot.refresh('EMP_MV','C');
    PL/SQL procedure successfully completed.
    Fast refresh:
    SQL> exec dbms_snapshot.refresh('EMP_MV','F');
    PL/SQL procedure successfully completed.
    This is the basic setup. There are many more options to the CREATE MATERIALIZED VIEW statement, just FYI.
    Idan.
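    To see which refresh methods a given MV actually supports, and why fast refresh may be unavailable, a hedged sketch (DBMS_MVIEW.EXPLAIN_MVIEW writes to MV_CAPABILITIES_TABLE, which is created by the utlxmv.sql script shipped with the database):
    -- SQL> @?/rdbms/admin/utlxmv.sql   (creates MV_CAPABILITIES_TABLE once)
    EXEC DBMS_MVIEW.EXPLAIN_MVIEW('EMP_MV');
    SELECT capability_name, possible, msgtxt
      FROM mv_capabilities_table
     WHERE capability_name LIKE 'REFRESH%';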

  • Huge volume of records routing to the remote user beyond his position and organization records; synchronization and DB initialization taking around 36 hours

    A huge volume of records is routing to the remote user beyond his position and organization records, and synchronization and DB initialization take around 36 hours.
    Only about 2,000 accounts and 3,000 contacts should route, but we have observed hundreds of thousands of records routing into the local DB.
    We have verified all the assignment rules and views.
    We ran the docking object visibility rules and observed that some other accounts are routing because the organization rule passes (these records are not supposed to route).
    Version: Siebel 7.7.2.12
    OS: Solaris

    >
    Let me know what would be the reason that the 1st million takes only 15 minutes and the time then increases gradually as the data grows.
    >
    Yes, that's a little strange. I can only guess:
    1. You are in archivelog mode and the archiver cannot archive the redo logs fast enough.
    2. You don't use direct load and DBWR cannot write the dirty blocks to disk fast enough. You could create more DBWR processes in that case.
    3. Take a snapshot of v$system_event:
    create table begin as select * from v$system_event;
    After the import, run:
    create table end as select * from v$system_event;
    Now compare the values:
    select * from begin order by TIME_WAITED_MICRO desc;
    with the values given by:
    select * from end order by TIME_WAITED_MICRO desc;
    That shows you where the DB spent so much time waiting.
    Alternatively, you could start a 10046 trace on the loading session and use tkprof.
    Dim
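    The comparison can also be done in one statement (a sketch using the two snapshot tables created above):
    -- Wait-time delta between the two snapshots, biggest waits first
    SELECT e.event,
           e.time_waited_micro - NVL(b.time_waited_micro, 0) AS waited_micro_delta
      FROM end e, begin b
     WHERE e.event = b.event(+)
     ORDER BY waited_micro_delta DESC;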

  • Materialized view taking a long time to refresh

    I want to understand why a materialized view refresh takes more time than running the SQL behind the materialized view.
    For example, say I have a materialized view TEST_MV created as below. When I run just the SELECT statement, I get the result within 34 seconds, whereas refreshing it with
    BEGIN
      DBMS_MVIEW.REFRESH('TEST_MV', 'C', atomic_refresh => FALSE);
    END;
    /
    takes around 74 seconds.
    MV creation
    Oracle version: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    create materialized view TEST_MV (
      COLUMN_NAME_1,
      OBJECT_NAME,
      COLUMN_ID,
      "SUM(C.DATA_LENGTH)"
    ) organization heap
    partition by list (COLUMN_ID) (
      partition "first_10_col" values (1, 2, 3, 4, 5, 6, 7, 8, 9, 10),
      partition "After_10_col" values (11, 12, 13, 14, 15, 16, 17, 18, 19, 20),
      partition "NONE" values (default)
    )
    BUILD IMMEDIATE USING INDEX REFRESH COMPLETE ON DEMAND USING DEFAULT LOCAL ROLLBACK SEGMENT USING ENFORCED CONSTRAINTS
    as
    select /*+ parallel(o,8) parallel(c,2) */
           c.column_name as column_name_1, o.object_name, c.column_id, sum(c.data_length)
      from all_objects o
      join all_tab_columns c on c.table_name = o.object_name
     group by c.column_name, o.object_name, c.column_id;

    >
    I want to understand why materialized view refresh takes more time than running the sql for the materialized view.
    >
    The mview has to save the data as well: the refresh runs the defining query and then writes the rows and maintains the MV's index, so it does more work than the bare SELECT.
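    To see exactly where the extra time goes, a hedged sketch (DBMS_MONITOR is standard from 10g onwards; trace the refreshing session and run the resulting trace file through tkprof):
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
    EXEC DBMS_MVIEW.REFRESH('TEST_MV', 'C', atomic_refresh => FALSE);
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE();
    -- Then on the server: tkprof <tracefile> refresh.txt sort=elapsed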

  • Query in TimesTen taking more time than the same query in the Oracle database

    Hi,
    Can anyone please explain why a query in TimesTen takes more time than the same query in the Oracle database?
    Below I describe, step by step, my settings and what I have done.
    1. This is the table I created in the Oracle database (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    CREATE TABLE student (
      id NUMBER(9) PRIMARY KEY,
      first_name VARCHAR2(10),
      last_name VARCHAR2(10)
    );
    2. THIS IS THE ANONYMOUS BLOCK I USE TO POPULATE THE STUDENT TABLE (2,599,999 ROWS IN TOTAL):
    declare
      firstname varchar2(12);
      lastname varchar2(12);
      catt number(9);
    begin
      for cntr in 1..2599999 loop
        firstname := (cntr+8)||'f';
        lastname := (cntr+2)||'l';
        if cntr like '%9999' then
          dbms_output.put_line(cntr);
        end if;
        insert into student values (cntr, firstname, lastname);
      end loop;
      commit;
    end;
    /
    3. MY DSN IS SET UP THE FOLLOWING WAY:
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE CACHE AGENT:
    Command> call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> call ttCacheStart();
    5. THEN I CREATE THE READ-ONLY CACHE GROUP AND LOAD IT:
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6. NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM QUERIES. FIRST I SET TIMING ON:
    Command> TIMING 1;
    Consider this query now:
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7. NOW I PERFORM SIMILAR QUERIES FROM SQL*PLUS:
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
    Can anyone please explain why the query in TimesTen takes more time than the query in the Oracle database?
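    One likely factor (an editorial note, not from the thread): the predicate is on first_name, which has no index in either database, so both engines scan all ~2.6 million rows and the gap comes down to scan speed rather than the plan. An index on the cache table should help (a sketch, assuming extra indexes are allowed on the cache table):
    -- After this, the lookup can use the index instead of a full scan.
    Command> CREATE INDEX student_fn_ix ON student (first_name);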

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART (
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE) (
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    )
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken from Mondrian) that I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    where I uncomment a column at a time and rerun. I improved the TimesTen results since my first post, by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
    No. of columns   Oracle   TimesTen
     1               1.05     0.94
     2               1.07     1.47
     3               2.04     1.8
     4               2.06     2.08
     5               2.09     2.4
     6               3.01     2.67
     7               4.02     3.06
     8               4.03     3.37
     9               4.04     3.62
    10               4.06     4.02
    11               4.08     4.31
    12               4.09     4.61
    13               5.01     4.76
    14               5.02     5.06
    15               5.04     5.25
    16               5.05     5.48
    17               5.08     5.84
    18               6        6.21
    19               6.02     6.34
    20               6.04     6.75
