Impdp operation taking more tablespace size compared to expdp...

Hi All,
I have an issue with an impdp operation. I am using an 11gR2 database, and the schema's dump file size is 5G. When I start loading data through impdp, the schema's tablespace grows to more than 5G. I had to stop the impdp operation because of the growing tablespace size. No COMPRESSION parameter was passed during expdp. Lastly, I set the tablespace MAXSIZE to UNLIMITED, but it seems that is still not sufficient and I had to add one more dbf, so the tablespace size as of now is 60G and the impdp operation is still running.
Can anyone explain how, if the dump file size is 5G, the tablespace size can be more than 5G? My assumption was that if my dump file is 5G, then the tablespace into which I load the data (using impdp) should not need more than 5G.
Thanks in advance.

I was facing the same problem. After giving the parameter TRANSFORM=SEGMENT_ATTRIBUTES:n, the problem was resolved.
TRANSFORM = transform_name:value[:object_type]
The transform_name specifies the name of the transform. Some of the possible options are as follows:
SEGMENT_ATTRIBUTES - If the value is specified as y, then segment attributes (physical attributes, storage attributes, tablespaces, and logging) are included, with appropriate DDL. The default is y. ====> IF THIS IS 'n', PHYSICAL STORAGE ATTRIBUTES ARE NOT INCLUDED.
STORAGE - If the value is specified as y, the storage clauses are included, with appropriate DDL. The default is y. This parameter is ignored if SEGMENT_ATTRIBUTES=n.
Although this thread is quite old, I am updating it in case someone needs to refer to it in future. My system parameter deferred_segment_creation is set to TRUE.
Here is the complete syntax I have used:
impdp vygrdba/******* dumpfile=VYGRVS6I5_25DEC12.dmp logfile=VYGR_PT_09Jan13.log remap_schema=VYGRVS6I5:VYGR_PT TRANSFORM=SEGMENT_ATTRIBUTES:n
Edited by: 980762 on 9 Jan, 2013 3:58 AM
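As an aside, a quick way to confirm whether the 60G is real row data or just pre-allocated extents is to compare allocated segment space against the datafile sizes. A minimal sketch, assuming the remapped schema name VYGR_PT from the command above and access to the DBA views:
SELECT owner, segment_type, ROUND(SUM(bytes) / 1024 / 1024) AS alloc_mb
  FROM dba_segments
 WHERE owner = 'VYGR_PT'
 GROUP BY owner, segment_type;
-- compare against the physical size of the tablespace's files
SELECT tablespace_name, ROUND(SUM(bytes) / 1024 / 1024) AS file_mb
  FROM dba_data_files
 GROUP BY tablespace_name;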

Similar Messages

  • My update query is taking more tablespace.. so please advise

    Hi All,
    My update statement has suddenly started taking more tablespace, so I got the error 'unable to extend the tablespace'. After I extended the tablespace the update completed, but the extended tablespace also filled up immediately. Can you please suggest some solutions for this case?
    "-- Proc to update the lvl 2 dfus in the dfuprojdraftstatic table with cust orders
    procedure update_lvl2_dpds_custorders is
    cv cur_type;
    v_lvl1_dmdgroup varchar2(50) := null;
    v_lvl1_loc varchar2(50) := null;
    v_first_one number;
    a number;
    begin
    for x in (select distinct dmdgroup lvl2_dmdgroup, loc lvl2_loc
    from dfuprojdraftstatic s
    where (dmdunit,dmdgroup,loc) in (select dmdunit,dmdgroup,loc from dfuview v
    where u_dfu_lvl = 2)
    and dmdunit in (select dmdunit from dmdunit
    where u_moe = in_moe) ) loop
    v_first_one := 1;
    -- Using dynamic sql, loop through the lvl 1 dmdgroups and locs for this lvl 2 dmdgroup and loc
    --open cv for v_sql_stmt using v_cntry_cd, x.lvl2_dmdgroup, v_cntry_cd, x.lvl2_loc;
    open cv for v_sql_stmt using x.lvl2_dmdgroup,in_moe;
    loop
    fetch cv into v_lvl1_dmdgroup, v_lvl1_loc;
    exit when cv%notfound;
    UPDATE dfuprojdraftstatic s
    SET u_cust_ordr = (SELECT decode(v_first_one, 1, 0, u_cust_ordr) + nvl(sum(qty), 0)
    FROM cust_orders o, mars_week_cal c
    WHERE o.mrkt_moe = in_moe
    AND o.dmdunit = s.dmdunit
    AND o.dmdgroup = v_lvl1_dmdgroup
    AND o.loc = v_lvl1_loc
    AND o.start_date BETWEEN c.week_start_dat AND c.week_end_dat
    AND c.week_start_dat = s.startdate
    WHERE startdate >= v_max_dpd
    AND (dmdunit,dmdgroup,loc) in (select dmdunit,dmdgroup,loc from dfuview v
    where u_dfu_lvl = 2 )
    AND dmdunit in (select dmdunit from dmdunit
    where u_moe = in_moe )
    AND s.dmdgroup = x.lvl2_dmdgroup
    AND s.loc = x.lvl2_loc;
    dbms_output.put_line('A '||a+1);
    if v_first_one = 1 then
    v_first_one := 0;
    end if;
    end loop;
    close cv;
    end loop;
    commit;
    end; -- update_lvl2_dpds_custorders
    Thanks and Regards,
    Sudhakar.M

    Without seeing the full code (for example, the definition of v_sql_stmt), and without knowing your tables and data, my guess, based on the code provided, is that the update process is extremely inefficient and is likely updating the same row multiple times in this procedure, and even more likely across the entire set of procedures.
    Based on the error message, it appears that you are running a package called batch_upd_dfuprojdraftstatic. Based on the name of the procedure you show, I also suspect that there are several other procedures that update single columns of dfuprojdraftstatic one after the other.
    I know that the table dfuprojdraftstatic is at least part of the base for a materialized view (mlog$_dfuprojdraftstatic is the materialized view log attached to the table to track changes for refreshing the materialized view). For each update you issue, Oracle creates a record in the materialized view log tracking what changed. If you do update col1, that creates a record in the log for each row; if you subsequently do update col2, that creates a second record for each row. However, if you do update col1, col2 together, that creates only a single record for each row. So, updating all of the columns requiring updates at the same time will reduce the number of materialized view log entries, which should reduce the space required.
    You may also want to look at purging some of the data from the log using the dbms_mview package.
    John
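    To make the last suggestion concrete, here is a hedged sketch of purging the materialized view log with the dbms_mview package (the table name is taken from the post; note that purging rows still needed by a dependent materialized view forces a complete refresh of that view):
    BEGIN
      -- remove log entries already consumed by the least-recently refreshed MV
      DBMS_MVIEW.PURGE_LOG(master => 'DFUPROJDRAFTSTATIC', num => 1, flag => 'NOP');
    END;
    /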

  • Taking more time with INDEX RANGE SCAN compared to a full table scan

    Hi all ,
    Below is the version of my database.
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for HPUX: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    I have gathered the table statistics, and the plan changed for the SQL statement.
    SELECT P1.COMPANY, P1.PAYGROUP, P1.PAY_END_DT, P1.PAYCHECK_OPTION,
    P1.OFF_CYCLE, P1.PAGE_NUM, P1.LINE_NUM, P1.SEPCHK  FROM  PS_PAY_CHECK P1
    WHERE P1.FORM_ID = :1 AND P1.PAYCHECK_NBR = :2 AND
    P1.CHECK_DT = :3 AND P1.PAYCHECK_OPTION <> 'R'
    Plan before the gather stats:
    Plan hash value: 3872726522
    | Id  | Operation          | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |              |       |       | 14306 (100)|          |
    |   1 |  TABLE ACCESS FULL | PS_PAY_CHECK |     1 |    51 | 14306   (4)| 00:02:52 |
    Plan after the gather stats:
    Operation                                 Object Name          Rows  Bytes  Cost
    SELECT STATEMENT Optimizer Mode=CHOOSE                            1             4
      TABLE ACCESS BY INDEX ROWID             SYSADM.PS_PAY_CHECK     1     51     4
        INDEX RANGE SCAN                      SYSADM.PS0PAY_CHECK     1            3
    After the gather stats the plan looks good, but when I execute the query it takes 5 hours; before the gather stats it was finishing within 2 hours. I do not want to restore my old statistics. Below is the data for the tables, and when I observe the session I see a lot of db file scattered reads.
    NAME                                 TYPE        VALUE
    _optimizer_cost_based_transformation string      OFF
    filesystemio_options                 string      asynch
    object_cache_optimal_size            integer     102400
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.4
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      choose
    optimizer_secure_view_merging        boolean     TRUE
    plsql_optimize_level                 integer     2
    SQL> select count(*) from sysadm.ps_pay_check;
      COUNT(*)
       1270052
    SQL> select num_rows, blocks from dba_tables where table_name = 'PS_PAY_CHECK';
      NUM_ROWS     BLOCKS
       1270047      63166
    Event                              Waits   Time (s)  Avg Wait (ms)  % DB Time  Wait Class
    db file sequential read        1,584,677      6,375              4       36.6  User I/O
    db file scattered read         2,366,398      5,689              2       32.7  User I/O
    Please let me know why it is taking more time with the INDEX RANGE SCAN compared to the full table scan?

    suresh.ratnaji wrote:
    NAME                                 TYPE        VALUE
    _optimizer_cost_based_transformation string      OFF
    filesystemio_options                 string      asynch
    object_cache_optimal_size            integer     102400
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.4
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      choose
    optimizer_secure_view_merging        boolean     TRUE
    plsql_optimize_level                 integer     2
    please let me know why it is taking more time with the INDEX RANGE SCAN compared to the full table scan?
    Suresh,
    Any particular reason why you have a non-default value for the hidden parameter _optimizer_cost_based_transformation?
    On my 10.2.0.1 database, its default value is "linear". What happens when you reset the hidden parameter to its default?
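    For reference, a hedged sketch of testing that before touching the spfile: the parameter is session-modifiable, and hidden parameters must be double-quoted because of the leading underscore ('linear' being the 10.2 default mentioned above):
    ALTER SESSION SET "_optimizer_cost_based_transformation" = 'linear';
    -- then re-run the statement and compare the plan and elapsed time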

  • Report rdf with size 8mb taking more time to open

    Hello All,
    I have an rdf (Reports 6i) report of size 8.5MB that takes a long time to open, and accessing each field is also slow.
    Please let me know how I can solve this issue.
    Please do the needful.
    Thanks.

    Thanks for the immediate response.
    Please let me know how I can find this out.
    Right now I have the below details from Report -> Help:
    Report Builder 6.0.8.11.3
    ORACLE Server Release 8.0.6.0.0
    Oracle Procedure Builder 6.0.8.11.0
    Oracle ORACLE PL/SQL V8.0.6.0.0 - Production
    Oracle CORE Version 4.0.6.0.0 - Production
    Oracle Tools Integration Services 6.0.8.10.2
    Oracle Tools Common Area 6.0.5.32.1
    Oracle Toolkit 2 for Windows 32-bit platforms 6.0.5.35.0
    Resource Object Store 6.0.5.0.1
    Oracle Help 6.0.5.35.0
    Oracle Sqlmgr 6.0.8.11.3
    Oracle Query Builder 6.0.7.0.0 - Production
    PL/SQL Editor (c) WinMain Software (www.winmain.com), v1.0 (Production)
    Oracle ZRC 6.0.8.11.3
    Oracle Express 6.0.8.3.5
    Oracle XML Parser     1.0.2.1.0     Production
    Oracle Virtual Graphics System 6.0.5.35.0
    Oracle Image 6.0.5.34.0
    Oracle Multimedia Widget 6.0.5.34.0
    Oracle Tools GUI Utilities 6.0.5.35.0
    Thanks
    Edited by: Abdul Khan on Jan 26, 2010 11:54 PM

  • Oracle 11g: Oracle insert/update operation is taking more time.

    Hello All,
    In Oracle 11g (Windows 2008 32-bit environment) we are facing the following issue.
    1) We are inserting/updating data in some tables (4-5 tables, and we are firing queries at a very high rate).
    2) After some time (say 15 days with the same load) we feel that the Oracle insert/update operations are taking more time.
    Query1: How to find out whether Oracle is actually taking more time in insert/update operations.
    Query2: How to rectify the problem.
    We are in a multithreaded environment.
    Thanks
    With Regards
    Hemant.
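    For Query1, a hedged starting point (plain V$SQL, nothing application-specific; COMMAND_TYPE 2 is INSERT and 6 is UPDATE) is to rank the DML by elapsed time per execution and watch how the ratio drifts over those 15 days:
    SELECT *
      FROM (SELECT sql_id, executions,
                   ROUND(elapsed_time / GREATEST(executions, 1) / 1e6, 3) AS avg_elapsed_sec
              FROM v$sql
             WHERE command_type IN (2, 6)   -- INSERT, UPDATE
             ORDER BY elapsed_time DESC)
     WHERE rownum <= 10;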

    Liron Amitzi wrote:
    Hi Nicolas,
    Just a short explanation:
    If you have a table with 1 column (let's say a number). The table is empty and you have an index on the column.
    When you insert a row, the value of the column will be inserted to the index. To insert 1 value to an index with 10 values in it will be fast. It will take longer to insert 1 value to an index with 1 million values in it.
    My second example was if I take the same table and let's say I insert 10 rows and delete the previous 10 from the table. I always have 10 rows in the table so the index should be small. But this is not correct. If I insert values 1-10 and then delete 1-10 and insert 11-20, then delete 11-20 and insert 21-30 and so on, because the index is sorted, where 1-10 were stored I'll now have empty spots. Oracle will not fill them up. So the index will become larger and larger as I insert more rows (even though I delete the old ones).
    The solution here is simply to rebuild the index once in a while.
    Hope it is clear.
    Liron Amitzi
    Senior DBA consultant
    [www.dbsnaps.com]
    [www.orbiumsoftware.com]
    Hmmm, index space not reused? Index rebuild once in a while? That is what I understood from your previous post, but nothing is less sure.
    This is a misconception of how indexes work.
    I would suggest reading the following interesting doc; it has a lot of nice examples (including index space reuse) to understand, and in conclusion:
    http://richardfoote.files.wordpress.com/2007/12/index-internals-rebuilding-the-truth.pdf
    "+Index Rebuild Summary+
    +•*The vast majority of indexes do not require rebuilding*+
    +•Oracle B-tree indexes can become “unbalanced” and need to be rebuilt is a myth+
    +•*Deleted space in an index is “deadwood” and over time requires the index to be rebuilt is a myth*+
    +•If an index reaches “x” number of levels, it becomes inefficient and requires the index to be rebuilt is a myth+
    +•If an index has a poor clustering factor, the index needs to be rebuilt is a myth+
    +•To improve performance, indexes need to be regularly rebuilt is a myth+"
    Good reading,
    Nicolas.
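    Before rebuilding anything, a hedged sketch for actually measuring the deleted space the paper talks about (ANALYZE ... VALIDATE STRUCTURE locks the index while it runs, so use a quiet period or a test copy; the index name is a placeholder):
    ANALYZE INDEX my_index VALIDATE STRUCTURE;
    SELECT name, lf_rows, del_lf_rows,
           ROUND(100 * del_lf_rows / NULLIF(lf_rows, 0), 1) AS pct_deleted
      FROM index_stats;   -- populated only for the current session's last ANALYZE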

  • Oracle Drop command taking more time in AIX When compared to Windows

    Hi
    We are firing a normal DROP command on our database, and the database version is 10.2.0.4. The database is running on AIX v5. The command is taking more time than usual.
    There are no waits on the session. Let me put the scenario in simpler terms: I have a nested table NT_TEST with a record count of 0, and I am dropping it with "DROP TABLE NT_TEST". In Windows it gets dropped in some 218 milliseconds.
    When I drop the same table in AIX, it takes around 6 seconds.
    There are no locks, flashback is off, no triggers are involved, and the drop is not waiting on any transaction to complete.
    Could anybody help me out on this?

    Quite possible. On most Unixes asynchronous I/O requires purchasing extra options, and most sites just purchase the default O/S. Or asynchronous I/O is not enabled by default, or the database is located on a ufs file system.
    You provided no info at all on your AIX configuration. AIX does have asynchronous I/O, it is not always enabled, and the number of slaves is usually way too low.
    As root, run smit aio and report the results.
    At this point your unfavorable comparison of Windows and AIX is quite unfair. Unix doesn't run out of the box.
    You know that famous joke 'If operating systems were airplanes'?
    In that joke, the Unix airplane is assembled on the spot, but the Windows airplane is three months late.
    Sybrand Bakker
    Senior Oracle DBA

  • Sync and Create project operation from DTR is taking more than one hour

    Hi All.
    Recently the Basis team implemented the track for the ESS/MSS application, so when we import the track into NWDS it shows 500 DCs.
    I have successfully done the Sync and Create Project operations from DTR for 150 DCs, and they took about 5 minutes per DC.
    However, after that, when I try to sync a DC or create a project DC from DTR, the operation takes more than 3 hours per DC. This should not be the case, because for the first 150 DCs the Sync and Create Project operations from DTR hardly took 5 minutes per DC. As this operation was taking so much time, I finally closed NWDS to stop it.
    I am using NWDS 2.0.15, EP 7.0 portal SP15, and NWDI 7.0.
    Can anybody tell me how to solve this issue so that I can Sync and Create Project from DTR for a DC within 5 minutes?
    Thanks
    Susmita

    Hi Susmita,
    If the DCs build fine in CBS, then I feel there is no need to test all of them locally in NWDS.
    You can verify certain applications in these DCs: sync & create projects for those DCs and test run the applications.
    As I understand it, you only need to check (no changes will be made), so you can verify them in small groups (say 20-25 DCs per group) in different workspaces so that no workspace is overloaded.
    But why do you want to keep a copy of them locally if you are not making any changes? You can Unsync & Remove these projects once verified and use the same workspace to work on the next set of DCs.
    Hope this clarifies your concerns.
    Kind Regards,
    Nitin
    Edited by: Nitin Jain on Apr 23, 2009 1:55 PM

  • Tablespace taking too much space

    Hi all,
    As you know, I have posted a lot of threads regarding getting free space from a tablespace.
    My problem is: we are using Oracle 10g Infrastructure and MT for messaging services. One of the tablespaces, named "IAS_META", is taking too much space, the data in the tablespace is not valuable to us, and I want to remove all the data. Please guide me on how to do it and what the best procedure would be, ASAP.
    Thanks in Advance.

    Hi Aijaz,
    I am having the same problem and want to get free space from the IAS_META tablespace. Let me explain: I have one OAS R2 (10.1.2) with Oracle Infrastructure, I have deployed a wireless application on OAS which is synchronized with Oracle Infrastructure, and now IAS_META is totally full. I want to free space from this tablespace; please let me know how.
    Thanks,
    Waheed

  • How to shrink tablespace size if there is more unused space ??

    I have a tablespace whose size has increased up to 800MB, but only 200MB is used. How can I minimize this gap between the allocated size and the used size?
    I think I may have set the autoextend parameter incorrectly.
    Please help

    You said it has increased up to 800MB, but only 200MB is used?
    How is that possible? Tablespace size increases to accommodate space for new data.
    Is it possible that data has been deleted?
    Use the following command to resize the datafile. If the HWM (high water mark) is higher than the requested size, the command will fail with an error:
    ERROR at line 1:
    ORA-03297: file contains used data beyond requested RESIZE value
    Syntax:
    ALTER DATABASE DATAFILE 'tools01.dbf' RESIZE 250m;
    Jaffar
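    To see in advance how far each file can be shrunk before hitting ORA-03297, a hedged sketch that locates the highest allocated block per datafile (assumes an 8K block size; adjust for yours):
    SELECT file_id,
           CEIL(MAX(block_id + blocks - 1) * 8192 / 1024 / 1024) AS hwm_mb
      FROM dba_extents
     GROUP BY file_id
     ORDER BY file_id;
    Anything in a file above hwm_mb should be reclaimable with a RESIZE.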

  • Custom Report taking more time to complete Normal

    Hi All,
    A custom report (an Aging Report) in Oracle is taking more time to complete with status Normal.
    In one instance the report takes 5 minutes; in the other instance it takes 40-50 minutes to complete.
    We have enabled trace and checked the trace file, and all the queries individually look fine.
    Could you please advise on this issue?
    Thanks in advance!!
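    For anyone reproducing this, a hedged sketch of how the extended SQL trace (with wait events) behind a file like the one below can be enabled for the report's session (SID and SERIAL# are placeholders taken from V$SESSION):
    BEGIN
      DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => :sid,
                                        serial_num => :serial,
                                        waits      => TRUE,
                                        binds      => FALSE);
    END;
    /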

    TKPROF: Release 10.1.0.5.0 - Production on Tue Jun 5 10:49:32 2012
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Sort options: prsela exeela fchela
    count = number of times OCI procedure was executed
    cpu = cpu time in seconds executing
    elapsed = elapsed time in seconds executing
    disk = number of physical reads of buffers from disk
    query = number of buffers gotten for consistent read
    current = number of buffers gotten in current mode (usually for update)
    rows = number of rows processed by the fetch or execute call
    Error in CREATE TABLE of EXPLAIN PLAN table: APPS.prof$plan_table
    ORA-00922: missing or invalid option
    parse error offset: 1049
    EXPLAIN PLAN option disabled.
    SELECT DISTINCT OU.ORGANIZATION_ID , OU.BUSINESS_GROUP_ID
    FROM
    HR_OPERATING_UNITS OU WHERE OU.SET_OF_BOOKS_ID =:B1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.05 11 22 0 3
    total 3 0.00 0.05 11 22 0 3
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    3 HASH UNIQUE (cr=22 pr=11 pw=0 time=52023 us cost=10 size=66 card=1)
    3 NESTED LOOPS (cr=22 pr=11 pw=0 time=61722 us)
    3 NESTED LOOPS (cr=20 pr=11 pw=0 time=61672 us cost=9 size=66 card=1)
    3 NESTED LOOPS (cr=18 pr=11 pw=0 time=61591 us cost=7 size=37 card=1)
    3 NESTED LOOPS (cr=16 pr=11 pw=0 time=61531 us cost=7 size=30 card=1)
    3 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=11 pr=9 pw=0 time=37751 us cost=6 size=22 card=1)
    18 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK1 (cr=1 pr=1 pw=0 time=17135 us cost=1 size=0 card=18)(object id 43610)
    3 TABLE ACCESS BY INDEX ROWID HR_ALL_ORGANIZATION_UNITS (cr=5 pr=2 pw=0 time=18820 us cost=1 size=8 card=1)
    3 INDEX UNIQUE SCAN HR_ORGANIZATION_UNITS_PK (cr=2 pr=0 pw=0 time=26 us cost=0 size=0 card=1)(object id 43657)
    3 INDEX UNIQUE SCAN HR_ALL_ORGANIZATION_UNTS_TL_PK (cr=2 pr=0 pw=0 time=32 us cost=0 size=7 card=1)(object id 44020)
    3 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 (cr=2 pr=0 pw=0 time=52 us cost=1 size=0 card=1)(object id 330960)
    3 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=2 pr=0 pw=0 time=26 us cost=2 size=29 card=1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 11 0.01 0.05
    asynch descriptor resize 2 0.00 0.00
    INSERT INTO FND_LOG_MESSAGES ( ECID_ID, ECID_SEQ, CALLSTACK, ERRORSTACK,
    MODULE, LOG_LEVEL, MESSAGE_TEXT, SESSION_ID, USER_ID, TIMESTAMP,
    LOG_SEQUENCE, ENCODED, NODE, NODE_IP_ADDRESS, PROCESS_ID, JVM_ID, THREAD_ID,
    AUDSID, DB_INSTANCE, TRANSACTION_CONTEXT_ID )
    VALUES
    ( SYS_CONTEXT('USERENV', 'ECID_ID'), SYS_CONTEXT('USERENV', 'ECID_SEQ'),
    :B16 , :B15 , SUBSTRB(:B14 ,1,255), :B13 , SUBSTRB(:B12 , 1, 4000), :B11 ,
    NVL(:B10 , -1), SYSDATE, FND_LOG_MESSAGES_S.NEXTVAL, :B9 , SUBSTRB(:B8 ,1,
    60), SUBSTRB(:B7 ,1,30), SUBSTRB(:B6 ,1,120), SUBSTRB(:B5 ,1,120),
    SUBSTRB(:B4 ,1,120), :B3 , :B2 , :B1 ) RETURNING LOG_SEQUENCE INTO :O0
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 20 0.00 0.03 4 1 304 20
    Fetch 0 0.00 0.00 0 0 0 0
    total 21 0.00 0.03 4 1 304 20
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 LOAD TABLE CONVENTIONAL (cr=1 pr=4 pw=0 time=36498 us)
    1 SEQUENCE FND_LOG_MESSAGES_S (cr=0 pr=0 pw=0 time=24 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 4 0.01 0.03
    SELECT MESSAGE_TEXT, MESSAGE_NUMBER, TYPE, FND_LOG_SEVERITY, CATEGORY,
    SEVERITY
    FROM
    FND_NEW_MESSAGES M, FND_APPLICATION A WHERE :B3 = M.MESSAGE_NAME AND :B2 =
    M.LANGUAGE_CODE AND :B1 = A.APPLICATION_SHORT_NAME AND M.APPLICATION_ID =
    A.APPLICATION_ID
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 2 0.00 0.00 0 0 0 0
    Fetch 2 0.00 0.03 4 12 0 2
    total 5 0.00 0.03 4 12 0 2
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 NESTED LOOPS (cr=6 pr=2 pw=0 time=15724 us cost=3 size=134 card=1)
    1 TABLE ACCESS BY INDEX ROWID FND_APPLICATION (cr=2 pr=0 pw=0 time=20 us cost=1 size=9 card=1)
    1 INDEX UNIQUE SCAN FND_APPLICATION_U3 (cr=1 pr=0 pw=0 time=9 us cost=0 size=0 card=1)(object id 33993)
    1 TABLE ACCESS BY INDEX ROWID FND_NEW_MESSAGES (cr=4 pr=2 pw=0 time=15693 us cost=2 size=125 card=1)
    1 INDEX UNIQUE SCAN FND_NEW_MESSAGES_PK (cr=3 pr=1 pw=0 time=6386 us cost=1 size=0 card=1)(object id 34367)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 4 0.00 0.03
    DELETE FROM MO_GLOB_ORG_ACCESS_TMP
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.02 3 4 4 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.02 3 4 4 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    0 DELETE MO_GLOB_ORG_ACCESS_TMP (cr=4 pr=3 pw=0 time=29161 us)
    1 TABLE ACCESS FULL MO_GLOB_ORG_ACCESS_TMP (cr=3 pr=2 pw=0 time=18165 us cost=2 size=13 card=1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 3 0.01 0.02
    SET TRANSACTION READ ONLY
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.01 0 0 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.01 0 0 0 0
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 0.00 0.00
    begin Fnd_Concurrent.Init_Request; end;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 148 0 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 148 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    log file sync 1 0.01 0.01
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 0.00 0.00
    declare X0rv BOOLEAN; begin X0rv := FND_INSTALLATION.GET(:APPL_ID,
    :DEP_APPL_ID, :STATUS, :INDUSTRY); :X0 := sys.diutil.bool_to_int(X0rv);
    end;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 9 0 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 9 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 8 0.00 0.00
    SQL*Net message from client 8 0.00 0.00
    begin fnd_global.bless_next_init('FND_PERMIT_0000');
    fnd_global.initialize(:session_id, :user_id, :resp_id, :resp_appl_id,
    :security_group_id, :site_id, :login_id, :conc_login_id, :prog_appl_id,
    :conc_program_id, :conc_request_id, :conc_priority_request, :form_id,
    :form_application_id, :conc_process_id, :conc_queue_id, :queue_appl_id,
    :server_id); fnd_profile.put('ORG_ID', :org_id);
    fnd_profile.put('MFG_ORGANIZATION_ID', :mfg_org_id);
    fnd_profile.put('MFG_CHART_OF_ACCOUNTS_ID', :coa);
    fnd_profile.put('APPS_MAINTENANCE_MODE', :amm); end;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 8 0 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 8 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 0.00 0.00
    SELECT FPI.STATUS, FPI.INDUSTRY, FPI.PRODUCT_VERSION, FOU.ORACLE_USERNAME,
    FPI.TABLESPACE, FPI.INDEX_TABLESPACE, FPI.TEMPORARY_TABLESPACE,
    FPI.SIZING_FACTOR
    FROM
    FND_PRODUCT_INSTALLATIONS FPI, FND_ORACLE_USERID FOU, FND_APPLICATION FA
    WHERE FPI.APPLICATION_ID = FA.APPLICATION_ID AND FPI.ORACLE_ID =
    FOU.ORACLE_ID AND FA.APPLICATION_SHORT_NAME = :B1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 2 0.00 0.00 0 7 0 1
    total 4 0.00 0.00 0 7 0 1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 NESTED LOOPS (cr=7 pr=0 pw=0 time=89 us)
    1 NESTED LOOPS (cr=6 pr=0 pw=0 time=93 us cost=4 size=76 card=1)
    1 NESTED LOOPS (cr=5 pr=0 pw=0 time=77 us cost=3 size=67 card=1)
    1 TABLE ACCESS BY INDEX ROWID FND_APPLICATION (cr=2 pr=0 pw=0 time=19 us cost=1 size=9 card=1)
    1 INDEX UNIQUE SCAN FND_APPLICATION_U3 (cr=1 pr=0 pw=0 time=9 us cost=0 size=0 card=1)(object id 33993)
    1 TABLE ACCESS BY INDEX ROWID FND_PRODUCT_INSTALLATIONS (cr=3 pr=0 pw=0 time=51 us cost=2 size=58 card=1)
    1 INDEX RANGE SCAN FND_PRODUCT_INSTALLATIONS_PK (cr=2 pr=0 pw=0 time=27 us cost=1 size=0 card=1)(object id 22583)
    1 INDEX UNIQUE SCAN FND_ORACLE_USERID_U1 (cr=1 pr=0 pw=0 time=7 us cost=0 size=0 card=1)(object id 22597)
    1 TABLE ACCESS BY INDEX ROWID FND_ORACLE_USERID (cr=1 pr=0 pw=0 time=7 us cost=1 size=9 card=1)
    SELECT P.PID, P.SPID, AUDSID, PROCESS, SUBSTR(USERENV('LANGUAGE'), INSTR(
    USERENV('LANGUAGE'), '.') + 1)
    FROM
    V$SESSION S, V$PROCESS P WHERE P.ADDR = S.PADDR AND S.AUDSID =
    USERENV('SESSIONID')
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.00 0 0 0 1
    total 3 0.00 0.00 0 0 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 NESTED LOOPS (cr=0 pr=0 pw=0 time=1883 us cost=1 size=108 card=2)
    1 HASH JOIN (cr=0 pr=0 pw=0 time=1869 us cost=1 size=100 card=2)
    1 NESTED LOOPS (cr=0 pr=0 pw=0 time=1059 us cost=0 size=58 card=2)
    182 FIXED TABLE FULL X$KSLWT (cr=0 pr=0 pw=0 time=285 us cost=0 size=1288 card=161)
    1 FIXED TABLE FIXED INDEX X$KSUSE (ind:1) (cr=0 pr=0 pw=0 time=617 us cost=0 size=21 card=1)
    181 FIXED TABLE FULL X$KSUPR (cr=0 pr=0 pw=0 time=187 us cost=0 size=10500 card=500)
    1 FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=4 us cost=0 size=4 card=1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    asynch descriptor resize 2 0.00 0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 6 0.00 0.00 0 0 0 0
    Execute 6 0.01 0.02 0 165 0 4
    Fetch 1 0.00 0.00 0 0 0 1
    total 13 0.01 0.02 0 165 0 5
    Misses in library cache during parse: 0
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 37 0.00 0.00
    SQL*Net message from client 37 1.21 2.19
    log file sync 1 0.01 0.01
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 49 0.00 0.00 0 0 0 0
    Execute 89 0.01 0.07 7 38 336 24
    Fetch 29 0.00 0.09 15 168 0 27
    total 167 0.02 0.16 22 206 336 51
    Misses in library cache during parse: 3
    Misses in library cache during execute: 1
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    asynch descriptor resize 6 0.00 0.00
    db file sequential read 22 0.01 0.14
    48 user SQL statements in session.
    1 internal SQL statements in session.
    49 SQL statements in session.
    0 statements EXPLAINed in this session.
    Trace file compatibility: 10.01.00
    Sort options: prsela exeela fchela
    1 session in tracefile.
    48 user SQL statements in trace file.
    1 internal SQL statements in trace file.
    49 SQL statements in trace file.
    48 unique SQL statements in trace file.
    928 lines in trace file.
    1338833753 elapsed seconds in trace file.

  • Query in timesten taking more time than query in oracle database

    Hi,
    Can anyone please explain why a query in TimesTen takes more time
    than the same query in the Oracle database?
    Below are my settings and what I have done,
    step by step.........
    1. This is the table I created in the Oracle database
    (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    CREATE TABLE student (
      id NUMBER(9) PRIMARY KEY,
      first_name VARCHAR2(10),
      last_name VARCHAR2(10)
    );
    2. THIS IS THE ANONYMOUS BLOCK I USE TO
    POPULATE THE STUDENT TABLE (TOTAL 2599999 ROWS)...
    declare
      firstname varchar2(12);
      lastname  varchar2(12);
      catt      number(9);
    begin
      for cntr in 1..2599999 loop
        firstname := (cntr + 8) || 'f';
        lastname  := (cntr + 2) || 'l';
        if cntr like '%9999' then
          dbms_output.put_line(cntr);
        end if;
        insert into student values (cntr, firstname, lastname);
      end loop;
    end;
    /
    3. MY DSN IS SET THE FOLLWING WAY..
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE AGENT
    call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> CALL ttCacheStart();
    5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
    I SET THE TIMING..
    command>TIMING 1;
    consider this query now..
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
    Can anyone please explain why the query in TimesTen takes more time
    than the query in the Oracle database?
    Message was edited by: user542575
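    A hedged aside on the timings above: with no index on FIRST_NAME, the TimesTen SELECT has to scan all 2.6 million cached rows on every execution, while the Oracle side may be helped by its buffer cache. A sketch of the usual fix, assuming the cache table created in step 5 (ttOptUpdateStats refreshes the TimesTen optimizer statistics; the index name is made up):
    Command> CREATE INDEX student_fn_ix ON student (first_name);
    Command> CALL ttOptUpdateStats('student', 1);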

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART (
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken Mondrian) I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    I uncomment one column at a time and rerun. I have improved the TimesTen results since my first post by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
    No. Columns   ORACLE   TimesTen
     1            1.05     0.94
     2            1.07     1.47
     3            2.04     1.8
     4            2.06     2.08
     5            2.09     2.4
     6            3.01     2.67
     7            4.02     3.06
     8            4.03     3.37
     9            4.04     3.62
    10            4.06     4.02
    11            4.08     4.31
    12            4.09     4.61
    13            5.01     4.76
    14            5.02     5.06
    15            5.04     5.25
    16            5.05     5.48
    17            5.08     5.84
    18            6.00     6.21
    19            6.02     6.34
    20            6.04     6.75

  • Purging operation taking too much time

    We are doing a purge operation (a delete operation). There are many tables involved in this purge, and it is handled by a separate application. That application is hard coded, so I am not able to see the code. The same application runs on another server and does not take much time there, so I need to find out why it is taking more time on this server and suggest appropriate action to the team. The data size is different on the two servers.
    The operation is currently running on the server. What should I check to find out why it is taking so long?

    I am not deleting records from a SQL prompt. Our application runs on another machine and deletes records from the tables. I have seen the table in the lock list, and the same table appears in the currently running SQL query, so I assume that session belongs to our application. I cannot kill that session, and I have no control over commit or rollback, because the delete is done by the separate application. The same application is deployed on another database machine, where it does not take much time, so I want to find the root cause of why it is slow on this machine.
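    If you can query the database while the purge runs, a hedged first check is what the application's session is actually waiting on (the SID placeholder comes from the lock list you mentioned):
    SELECT sid, event, state, seconds_in_wait
      FROM v$session_wait
     WHERE sid = :deleting_sid;
    A wait like 'db file sequential read' points at I/O on the delete or its index maintenance, while an 'enq:' wait points at locking; compare the answer against the faster server.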

  • Maximum Temp tablespace size you've seen

    DB version: 10.2.0.4
    Our DB caters a retail application . Total DB Size of 3TB. Its is a bit of mix of both OLTP and Batch processing environment.
    Our temp tablespace has 1 file and we had set the tempfile as AUTOEXTEND. Somehow its size has reached 25GB now !
    We don't actually need this. Do we?
    For a fairly busy DB of around 3TB size, what is the maximum temp tablespace size you've ever seen?
    What is the maximum temp tablespace size you've ever seen for Big DB like Telecom, Banking?

    The point about temp space is that your requirements are dynamic: the actively used area shrinks and grows.
    Comparing apples with oranges, tempfile sizes on the system where I am currently working amount to 50 GB, but to be honest that's mainly due to a not insignificant number of pretty poor queries, run concurrently, with multipass operations, etc.
    Kellyn Pedersen had an example of a process using 720GB here:
    http://dbakevlar.com/?p=43
    Whilst 25GB may not be earth shattering, a "large" temp area may perhaps be an indicator that you want to review some of your application code.
    (What is "large"? "unusual" compared to what is "normal" for your system may be a better metric)
    If you're using automatic pga memory management, then there's a limit to what each session can get, it may be that automatic work areas are not suitable for some of your code and some manual sizing is required to prevent operations spilling unnecessarily.
    Also, inaccurate CBO estimates on your queries can lead to inaccurately sized work areas that then spill into temp.
    There was a reminder of this in one of Randolf Geist's articles on hash aggregation:
    http://oracle-randolf.blogspot.com/2011/01/hash-aggregation.html
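    On that monitoring point, a hedged sketch for seeing who is consuming temp space right now, so growth can be tied back to specific sessions and code (V$TEMPSEG_USAGE is available in 10.2):
    SELECT s.sid, s.username, u.tablespace, u.segtype,
           ROUND(u.blocks * t.block_size / 1024 / 1024) AS mb_used
      FROM v$tempseg_usage u
      JOIN v$session s ON s.saddr = u.session_addr
      JOIN dba_tablespaces t ON t.tablespace_name = u.tablespace
     ORDER BY u.blocks DESC;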

  • XML Publisher(XDODTEXE) in EBS taking more time with the same SQL in TOAD

    HI
    XML Publisher (XDODTEXE) in EBS is taking more time than the same SQL in TOAD.
    The SQL has 5 UNION clauses.
    It takes 20-30 minutes in TOAD, compared to around 4-5 hours when run through a Concurrent Program in XML Publisher in EBS.
    The Scalable Flag at report level is turned on, with the JVM options set to -Xmx1024m -Xmx1024m in the Concurrent Program definition.
    Other configurations for the Data Template, like XSLT, Scalable, and Optimization, are turned on, though I didn't bounce the OPP server for these to take effect, as I am not sure whether that is needed.
    Thanks in advance for your help.

    But the question is: how come it works in TOAD and takes only 15-20 minutes?
    With initialization of the session?
    What about SQL*Plus?
    Do I have to set up the temp directory for the XML Publisher report to make it faster?
    look at
    R12: Troubleshooting Known XML Publisher and E-Business Suite (EBS) Integration Issues (Doc ID 1410160.1)
    BI Publisher - Troubleshooting Oracle Business Intelligence (XML) Publisher For The Oracle E-Business Suite (Doc ID 364547.1)

  • Create index is taking more time

    Hi,
    One of our concurrent programs is taking more time. We generated the trace file and found that the CREATE INDEX is taking most of the time.
    Below is an extract from the trace file; this type of index creation happens many times in the standard Oracle program.
    Can somebody let me know why there is such a big difference between CPU and elapsed time?
    We are also seeing the "PX Deq: Execute Reply" event, which looks like idle time for the database.
    Please let me know which database parameter is affecting this.
    CREATE INDEX ITEM_CATEGORIES_N2_BD9 ON ITEM_CATEGORIES_BD9(CATEGORY_SET_ID,
    SR_CATEGORY_ID,ORGANIZATION_ID,SR_INSTANCE_ID) PARALLEL TABLESPACE MSCX
    STORAGE( INITIAL 40960 NEXT 33554432 PCTINCREASE 0) PCTFREE 10 INITRANS 11
    MAXTRANS 255
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          3          0           0
    Execute      1      0.35     364.82     131168     117945      60324           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.35     364.83     131168     117948      60324           0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 80 (recursive depth: 2)
    Elapsed times include waiting on following events:
    Event waited on                             Times   Max. Wait  Total Waited
    ----------------------------------------   Waited  ----------  ------------
    reliable message                                1        0.00          0.00
    enq: KO - fast object checkpoint                1        0.01          0.01
    PX Deq: Join ACK                                6        0.00          0.00
    PX Deq Credit: send blkd                      112        0.00          0.01
    PX qref latch                                   7        0.00          0.00
    PX Deq: Parse Reply                             3        0.00          0.00
    PX Deq: Execute Reply                         604        1.96        364.42
    log file sync                                   1        0.00          0.00
    PX Deq: Signal ACK                              1        0.00          0.00
    latch: session allocation                       2        0.00          0.00
    Regards,

    user12121524 wrote:
    CREATE  INDEX ITEM_CATEGORIES_N2_BD9 ON ITEM_CATEGORIES_BD9(CATEGORY_SET_ID,
    SR_CATEGORY_ID,ORGANIZATION_ID,SR_INSTANCE_ID) PARALLEL  TABLESPACE MSCX
    STORAGE(  INITIAL 40960 NEXT 33554432 PCTINCREASE 0) PCTFREE 10 INITRANS 11
    MAXTRANS 255
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          3          0           0
    Execute      1      0.35     364.82     131168     117945      60324           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.35     364.83     131168     117948      60324           0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 80     (recursive depth: 2)
    Elapsed times include waiting on following events:
    Event waited on                             Times   Max. Wait  Total Waited
    ----------------------------------------   Waited  ----------  ------------
    reliable message                                1        0.00          0.00
    enq: KO - fast object checkpoint                1        0.01          0.01
    PX Deq: Join ACK                                6        0.00          0.00
    PX Deq Credit: send blkd                      112        0.00          0.01
    PX qref latch                                   7        0.00          0.00
    PX Deq: Parse Reply                             3        0.00          0.00
    PX Deq: Execute Reply                         604        1.96        364.42
    log file sync                                   1        0.00          0.00
    PX Deq: Signal ACK                              1        0.00          0.00
    latch: session allocation                       2        0.00          0.00
    What you've given us is the query coordinator trace, which basically tells us that the coordinator waited 364 seconds for the PX slaves to tell it that they had completed their tasks (the "PX Deq: Execute Reply" time). You need to look at the slave traces to find out where they spent their time, and that's probably not going to be easy if there are lots of parallel pieces of processing going on.
    If you want to do some debugging (in general), one option is to add a query against V$PQ_TQSTAT after each piece of parallel processing and log the results to a named file, or write them to a table with a tag, as this will tell you how many slaves were involved, how they were used, and what the distribution of work and time was.
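    For example, a hedged sketch of that V$PQ_TQSTAT check (the view is only populated in the session that ran the parallel statement, immediately after it completes):
    SELECT dfo_number, tq_id, server_type, process, num_rows, bytes
      FROM v$pq_tqstat
     ORDER BY dfo_number, tq_id, server_type, process;
    A badly skewed NUM_ROWS across the Pnnn processes would explain slaves sitting idle while the coordinator clocks up "PX Deq: Execute Reply" time.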
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan
