Query 1 shows less consistent gets but more cost than Query 2

Hi,
SQL> select dname from scott.dept where deptno not in (select deptno from scott.emp);
Execution Plan
Plan hash value: 3547749009
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |     1 |    22 |     4   (0)| 00:00:01 |
|*  1 |  FILTER            |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL| DEPT |     4 |    88 |     2   (0)| 00:00:01 |
|*  3 |   TABLE ACCESS FULL| EMP  |    11 |   143 |     2   (0)| 00:00:01 |
Predicate Information (identified by operation id):
   1 - filter( NOT EXISTS (SELECT /*+ */ 0 FROM "SCOTT"."EMP" "EMP"
              WHERE LNNVL("DEPTNO"<>:B1)))
   3 - filter(LNNVL("DEPTNO"<>:B1))
Note
   - dynamic sampling used for this statement
Statistics
          0  recursive calls
          0  db block gets
         15  consistent gets
          0  physical reads
          0  redo size
        416  bytes sent via SQL*Net to client
        384  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
SQL>
SQL> select dname from scott.dept,scott.emp where dept.deptno=emp.deptno(+)
  2    and emp.rowid is null;
Execution Plan
Plan hash value: 2146709594
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT    |      |    12 |   564 |     5  (20)| 00:00:01 |
|*  1 |  FILTER             |      |       |       |            |          |
|*  2 |   HASH JOIN OUTER   |      |    12 |   564 |     5  (20)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| DEPT |     4 |    88 |     2   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| EMP  |    12 |   300 |     2   (0)| 00:00:01 |
Predicate Information (identified by operation id):
   1 - filter("EMP".ROWID IS NULL)
   2 - access("DEPT"."DEPTNO"="EMP"."DEPTNO"(+))
Note
   - dynamic sampling used for this statement
Statistics
          0  recursive calls
          0  db block gets
          6  consistent gets
          0  physical reads
          0  redo size
        416  bytes sent via SQL*Net to client
        384  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

I have two questions:
1) Which one is preferable: the first, which is less costly to the system, or the second, which causes fewer consistent gets and so should be more scalable?
2) Whereas the number of rows returned by both queries is 1, why is there a difference in the Rows values at the top of the two plans (1 and 12 respectively)?
I use Oracle 10g Release 2.
Thanks a lot,
Sim

The fewer logical I/Os, the better.
So always do it like your query 2 (by the way, your title states it the wrong way around).
Your example is probably flawed. If I try it in SQL*Plus I get correct results:
SQL> get t
  1* select dname from dept where deptno not in (select deptno from emp)
SQL> /
Execution Plan
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=6 Card=3 Bytes=39)
   1    0   FILTER
   2    1     TABLE ACCESS (FULL) OF 'DEPT' (TABLE) (Cost=2 Card=4 Bytes=52)
   3    1     TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=2 Card=1 Bytes=3)
Statistics
          0  recursive calls
          0  db block gets
         15  consistent gets
          0  physical reads
          0  redo size
        537  bytes sent via SQL*Net to client
        660  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
SQL> get tt
  1  select dname from dept,emp where dept.deptno=emp.deptno(+)
  2* and emp.rowid is null
SQL> /
Execution Plan
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=14 Bytes=322)
   1    0   FILTER
   2    1     HASH JOIN (OUTER) (Cost=5 Card=14 Bytes=322)
   3    2       TABLE ACCESS (FULL) OF 'DEPT' (TABLE) (Cost=2 Card=4 Bytes=52)
   4    2       TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=2 Card=14 Bytes=140)
Statistics
          0  recursive calls
          0  db block gets
          6  consistent gets
          0  physical reads
          0  redo size
        537  bytes sent via SQL*Net to client
        660  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
SQL>

I'm wondering, for instance, why your plan shows 11 rows for EMP in query 1 (it should be only 1 row) and only 12 rows for EMP in query 2 (it should be 14 rows).
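
Regarding question 2: the "dynamic sampling used for this statement" note under both plans suggests the tables have no optimizer statistics, so the Rows figures are only estimates taken from a small sample, and they can differ between runs without affecting the actual result. A minimal sketch of gathering proper statistics so the optimizer works from real numbers (assuming you have the privileges to analyze the SCOTT schema):
SQL> exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'DEPT');
SQL> exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP');
After that, re-running both statements with SET AUTOTRACE ON should show estimates much closer to the true cardinalities (4 rows in DEPT and 14 in EMP for the stock SCOTT schema).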

Similar Messages

  • Query in TimesTen taking more time than query in Oracle database

    Hi,
    Can anyone please explain why a query in TimesTen is taking more time
    than the same query in an Oracle database?
    Below are my settings and what I have done, step by step:
    1. This is the table I created in the Oracle database
    (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    CREATE TABLE student (
      id NUMBER(9) PRIMARY KEY,
      first_name VARCHAR2(10),
      last_name VARCHAR2(10)
    );
    2. THIS IS THE ANONYMOUS BLOCK I USE TO
    POPULATE THE STUDENT TABLE (TOTAL 2,599,999 ROWS):
    declare
      firstname varchar2(12);
      lastname  varchar2(12);
    begin
      for cntr in 1..2599999 loop
        firstname := (cntr+8)||'f';
        lastname  := (cntr+2)||'l';
        -- cntr is implicitly converted to a string here; prints a progress line every 10,000 rows
        if cntr like '%9999' then
          dbms_output.put_line(cntr);
        end if;
        insert into student values (cntr, firstname, lastname);
      end loop;
    end;
    /
    3. MY DSN IS SET THE FOLLOWING WAY:
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE AGENT:
    Command> call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> CALL ttCacheStart();
    5. THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT:
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6. NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY.
    I SET THE TIMING:
    command>TIMING 1;
    consider this query now..
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7. NOW I PERFORM SIMILAR QUERIES FROM SQL*PLUS:
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
    Can anyone please explain why the query in TimesTen takes more time
    than the query in the Oracle database?
    Message was edited by: Dipesh Majumdar (user542575)
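    One thing worth checking here (an assumption, since no TimesTen plan is shown): the cached STUDENT table only has a primary key on id, so a lookup on first_name has to scan all ~2.6 million rows in the data store. If extra indexes are allowed on your cache table, a sketch in ttIsql would be:
    Command> CREATE INDEX student_fn ON student (first_name);
    Command> SELECT * FROM student WHERE first_name = '2155666f';
    With the index in place, the SQLExecute + Fetch Loop time for this kind of single-row lookup should drop well below the full-scan figures above.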

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART (
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken from Mondrian) I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    where I uncomment one column at a time and rerun. I improved the TimesTen results since my first post by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
    No. of columns   ORACLE   TimesTen
                 1     1.05       0.94
                 2     1.07       1.47
                 3     2.04       1.80
                 4     2.06       2.08
                 5     2.09       2.40
                 6     3.01       2.67
                 7     4.02       3.06
                 8     4.03       3.37
                 9     4.04       3.62
                10     4.06       4.02
                11     4.08       4.31
                12     4.09       4.61
                13     5.01       4.76
                14     5.02       5.06
                15     5.04       5.25
                16     5.05       5.48
                17     5.08       5.84
                18     6.00       6.21
                19     6.02       6.34
                20     6.04       6.75

  • Query showing less data than the LISTCUBE

    Hi Gurus,
    For the same key figure I am getting $20 less in the query, but the LISTCUBE of the MultiProvider is showing the correct value compared to R/3.
    Please share your ideas.
    Thanks
    Nisha

    Hi,
    It is exactly the same filter criteria as I put in the LISTCUBE, which is pulling the correct data.
    Thanks

  • Why is 3G slower but more reliable than WiFi?

    Hi all,
    I did a ton of testing on WiFi and 3G yesterday on my 32 GB 3GS. The best I could get out of WiFi at my house was just about 11 Mbps down and 5 up. The best I've seen on 3G was just approaching 1 Mbps up and down. So WiFi should get my downloads about ten times as fast, and hopefully more reliably, but that doesn't seem to be the case. WiFi will load what seems just as slow as 3G, and if I try to skip forward on a video or something it will glitch out or just stop loading. But with 3G I can skip forward and it will catch up to where I want to be and continue the video.
    I guess that's not really a question, but I want to see if anyone else agrees or knows a possible fix.

    My wifi is far faster and more consistent than 3G too. On my home wifi (also in an apartment) I think I'm using channel 9, and I also have my wireless router set to 802.11g-only (for my iPhone, printer, and two computers, all connected via wifi).
    If you are getting starts and stops and such, also check the placement and possible interference. If you have cordless telephones, are they DECT, 5 GHz, or what? Is your wifi router placed up high, or near the floor (higher will usually give a better signal)? Is it near a stereo, television, microwave, house fan, air purifier, or other device that may cause interference? Are you usually in a place where there is a wall (or two) between you and the router (and if so, are they bare walls, or are there nice big metal appliances up against them)?
    There are all sorts of things that can mess up a wifi signal - in most homes, I'm amazed it works at all! Just kidding, but there are a number of things you can check to be sure you are getting the best, most robust signal possible.

  • Less data gets loaded to DSO than the file

    I just loaded some data from a CSV file to our DSO. Let's say the file contains 10,000 records, but only 9,000 get loaded to the DSO. Why were 1,000 filtered out? Thanks!

    Hi,
    They might not be missing, but actually aggregated due to the DSO keys. You can try this, though it might be a bit lengthy: open up the request in the PSA (try to open it from the PSA table in SE16; in the monitor you can see it only package by package). Then find out the columns that actually make up the key of the DSO. Move them next to each other (if they are not together), sort by one of them, maybe the most granular one, and look for repeats in the key columns.
    Also, if you just need to see that all the data is there, just sum up the key figures in the file or PSA and then in the DSO contents... much easier than trying to find 1,000 records.
    Hope this helps...

  • HT4519 Sometimes when I send a mail it will go, regardless of location or whether it has attachments, but more often than not it won't, and I get the message "a copy has been placed in your outbox. The sender address 'my email blah blah' was rejected by the server"

    Sometimes when sending a mail it will go; more often it won't. It doesn't seem to be related to my geographic position. I get the message "a copy has been placed in your Outbox. The sender address 'my email blah blah' was rejected by the server". Any ideas?

    Check the outgoing mail server setting. Make sure that your username and password are in there.
    Settings > Mail, Contacts, Calendars > Your email account > Account > Outgoing mail server: tap the server name next to SMTP, open the primary server, and make sure your username and password are entered and correct, even if it says that the password is optional.

  • Consistent gets and db block gets

    Hi...
    I wanted to know the difference between consistent gets and db block gets in v$sess_io. I have read that consistent gets are gets of blocks in consistent mode, so what does consistent mode mean?
    Thanks in Advance,
    Anand

    Here's the complete text of the answer I originally wrote nearly 5 years ago on the Oracle-L mailing list:
    A 'db block get' is a current mode get. That is, it's the most up-to-date copy of the data in that block, as it is right now, or currently. There can only be one current copy of a block in the buffer cache at any time. Db block gets generally are used when DML changes data in the database. In that case, row-level locks are implicitly taken on the updated rows. There is also at least one well-known case where a select statement does a db block get, and does not take a lock. That is, when it does a full table scan or fast full index scan, Oracle will read the segment header in current mode (multiple times, the number varies based on Oracle version).
    A 'consistent get' is when Oracle gets the data in a block which is consistent with a given point in time, or SCN. The consistent get is at the heart of Oracle's read consistency mechanism. When blocks are fetched in order to satisfy a query result set, they are fetched in consistent mode. If no block in the buffer cache is consistent to the correct point in time, Oracle will (attempt to) reconstruct that block using the information in the rollback segments. If it fails to do so, that's when a query errors out with the much dreaded, much feared, and much misunderstood ORA-1555 "snapshot too old".
    As to latching, and how it relates, well, consider that the block buffers are in the SGA, which is shared memory. To avoid corruption, latches are used to serialize access to many linked lists and data structures that point to the buffers as well as the buffers themselves. It is safe to say that each consistent get introduces serialization to the system, and by tuning SQL to use more efficient access paths, you can get the same answer to the same query but do less consistent gets. This not only consumes less CPU, it also can significantly reduce latching which reduces serialization and makes your system more scalable.
    Well, that turned out longer than I planned. If you're still reading, I hope it helped!
    Hope that helps,
    -Mark
    PS The original question asked about latching as well, which explains the reason for the third paragraph.
    Edited by: mbobak on Sep 2, 2008 11:07 PM
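    For example, a session can watch its own counters from v$sess_io like this (a minimal sketch; it requires SELECT privilege on the V$ views):
    SQL> SELECT io.block_gets, io.consistent_gets, io.physical_reads
      2  FROM v$sess_io io, v$session s
      3  WHERE io.sid = s.sid
      4  AND s.audsid = SYS_CONTEXT('USERENV', 'SESSIONID');
    Run it before and after a statement of interest: the deltas show how many current mode gets (block_gets) and consistent mode gets (consistent_gets) that statement generated.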

  • Consistent gets examination

    " You have 3,847.1 consistent gets examination per second. "Consistent gets - examination" is different than regular consistent gets. It is used to read undo blocks for consistent read purposes, but also for the first part of an index read and hash cluster I/O. To reduce logical I/O, you may consider moving your indexes to a large blocksize tablespace. Because index splitting and spawning are controlled at the block level, a larger blocksize will result in a flatter index tree structure.
    Can you explain the above and help me understand it?
    What are index splitting and spawning? Why do they happen and what is the fix?
    Which indexes are affected? Is there a query to find out? What is the optimal block size to use for the affected index tablespaces?

    I'd strongly suggest you start reading the Oracle DBA manual to get a bit more of the basics behind you before you get into the advanced tuning stuff. It's at http://docs.oracle.com

  • Trying to understand consistent gets

    1. We were very surprised to observe that the consistent gets for a query (shown below) changed (from 120K to 18K) when the only modification was adding a comment to it. How can this happen?
    2. Furthermore, the plans for the two queries were identical. How can consistent gets differ if the plan is the same?
    3. Is consistent gets the best metric for comparing the performance of two versions of a query, or is there a better one? (In the past we've tried to use execution timing for comparison, but the timing of a single query can vary greatly between runs.)
    select
           count(*) from (
    SELECT source_id, project_id,
           max_score,
           fields_matched
    FROM (SELECT source_id, project_id, MAX(scoring) as max_score,
                 apidb.tab_to_string(set(CAST(COLLECT(table_name) AS apidb.varchartab)), ', ')  fields_matched,
                 max(index_name) keep (dense_rank first order by scoring desc, source_id, table_name) as index_name,
                 max(oracle_rowid) keep (dense_rank first order by scoring desc, source_id, table_name) as oracle_rowid
          FROM (  SELECT  SCORE(1) * (select nvl(max(weight), 1) from apidb.TableWeight where table_name = 'Blastp')
                           as scoring,
                        'apidb.blastp_text_ix' as index_name, b.rowid as oracle_rowid, b.source_id, b.project_id,
                        external_database_name as table_name
                  FROM apidb.Blastp b
                  WHERE CONTAINS(b.description, 'protein', 1) > 0
                    AND 'Blastp' IN ('Product', 'Notes', 'Comments', 'InterPro', 'EcNumber', 'GoTerms', 'Phenotype', 'Notes', 'and the rest')
                    AND 'gene' = 'gene'
                    AND b.pvalue_exp < -30
                    AND b.query_organism IN ('Plasmodium falciparum', 'Plasmodium vivax', 'Plasmodium yoelii', 'Plasmodium berghei', 'Plasmodium chabaudi', 'Plasmodium knowlesi')
                UNION
                  SELECT
                         SCORE(1)* nvl(tw.weight, 1)
                           as scoring,
                         'apidb.gene_text_ix' as index_name, gt.rowid as oracle_rowid, gt.source_id, gt.project_id, gt.field_name as table_name
                  FROM apidb.GeneDetail gt, apidb.TableWeight tw, apidb.GeneAttributes ga
                  WHERE CONTAINS(content, 'protein', 1) > 0
                    AND gt.field_name IN ('Product', 'Notes', 'Comments', 'InterPro', 'EcNumber', 'GoTerms', 'Phenotype')
                    AND 'gene' = 'gene'
                    AND gt.field_name = tw.table_name(+)
                    AND gt.source_id = ga.source_id
                    AND ga.species IN ('Plasmodium falciparum', 'Plasmodium vivax', 'Plasmodium yoelii', 'Plasmodium berghei', 'Plasmodium chabaudi', 'Plasmodium knowlesi')
                UNION
                  SELECT SCORE(1) * nvl(tw.weight, 1) 
                           as scoring,
                        'apidb.isolate_text_ix' as index_name, wit.rowid as oracle_rowid, wit.source_id, wit.project_id, wit.field_name as table_name
                  FROM apidb.IsolateDetail wit, apidb.TableWeight tw
                  WHERE CONTAINS(content, 'protein', 1) > 0
                    AND wit.field_name in ('fred')
                    AND 'gene' = 'isolate'
                    AND wit.field_name = tw.table_name(+)
          GROUP BY source_id, project_id
          ORDER BY max_score desc, source_id
    );

    1. We were very surprised to observe that the consistent gets for a query (shown below) changed (from 120K to 18K) when the only modification was adding a comment to it. How can this happen?
    Adding a comment and getting a different anything makes no sense, but you already know this. Did you toggle the comment on and off to see any changes in performance?
    2. Furthermore, the plans for the two queries were identical. How can consistent gets differ if the plan is the same?
    Clearly something odd happened. Possibilities include:
    * wrong execution plan reported in one case (unlikely, but possible)
    * system resource availability was different at different times
    3. Is consistent gets the best metric for comparing the performance of two versions of a query?
    I use consistent gets, but also look at disk reads (not useful after data has been cached), CPU, and overall execution time. On rare occasions queries can be CPU-bound.
    Edited by: riedelme on Mar 3, 2010 1:06 PM
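    One more factor worth noting: adding a comment changes the SQL text, so the two versions are parsed as separate cursors with different SQL_IDs, and a difference in what was cached or peeked at parse time can then show up as different consistent gets even with textually identical plans. On 10g the actual row-source statistics of each cursor can be compared like this (a minimal sketch; COUNT(*) FROM dual stands in for each version of the real query, and the hint is needed unless STATISTICS_LEVEL is set to ALL):
    SQL> SELECT /*+ gather_plan_statistics */ COUNT(*) FROM dual;
    SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
    The ALLSTATS LAST format prints estimated versus actual rows and buffers per plan step, which makes it easy to see where the extra consistent gets come from.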

  • Can anyone simplify this query more?

    Following are the DDLs of the two tables:
    1. CREATE TABLE EMPL (
         SNO NUMBER,
         ENAME VARCHAR2(25),
         JOB VARCHAR2(25),
         PRIMARY KEY (SNO)
       );
    2. CREATE TABLE EMPL_DET (
         SNO NUMBER,
         SAL VARCHAR2(25)
       );
    Following are the DMLs for the tables:
    INSERT INTO EMPL(SNO, ENAME, JOB) VALUES (1, 'SMITH', 'CLERK');
    INSERT INTO EMPL(SNO, ENAME, JOB) VALUES (2, 'SMITH', 'MANAGER');
    INSERT INTO EMPL(SNO, ENAME, JOB) VALUES (3, 'TOM', 'CLK');
    INSERT INTO EMPL_DET(SNO, SAL) VALUES (1, '1000');
    INSERT INTO EMPL_DET(SNO, SAL) VALUES (2, '10000');
    INSERT INTO EMPL_DET(SNO, SAL) VALUES (3, '900');
    I want to calculate the TotalSAL (column empl_det.SAL) of each employee (empl.ename), together with the job description (empl.job).
    That means I want the following rows in the output:
    1.(JOB,TotalSAL,ENAME)-->(CLERK,11000,SMITH)
    2.(JOB,TotalSAL,ENAME)-->(MANAGER,11000,SMITH)
    3.(JOB,TotalSAL,ENAME)-->(CLK,900,TOM)
    I tried to write it for a single ename:
    select job, x.sal, ename from empl,
    (
      select sum(sal) sal from empl_det where sno in
        (select sno from empl where ename = 'SMITH')
    ) x
    where ename = 'SMITH'
    order by ename;
    For each ename I would have to fire this query. How can I pass an ename list (SMITH, TOM) to it?
    Or can anyone simplify this query further?

    Hi,
    When you post code, please enclose it between two lines containing {noformat}, i.e.:
    {noformat}
    SELECT ...
    {noformat}
    Please read "How do I ask a question on the forums?": https://forums.oracle.com/forums/thread.jspa?threadID=2174552#9360002
    In order to do this you can use IN, i.e. ename IN ('SMITH', 'TOM'), etc.
    However, your code can be simplified, as it seems redundant to me.
    Please double-check it and try to understand what you can improve.
    Just a hint: use a join and GROUP BY without a subquery (see the sketch after this post).
    Regards.
    Al
    Edited by: Alberto Faenza on Nov 27, 2012 2:51 PM
    Added a hint
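    For reference, a minimal sketch of the simplification the hint points at. Since the requested TotalSAL spans all jobs of an employee, an analytic SUM over a plain join reproduces the sample output where a GROUP BY on (ename, job) would not; TO_NUMBER is added because SAL is declared as VARCHAR2:
    SELECT e.job,
           SUM(TO_NUMBER(d.sal)) OVER (PARTITION BY e.ename) AS totalsal,
           e.ename
    FROM   empl e, empl_det d
    WHERE  d.sno = e.sno
    ORDER  BY e.sno;
    -- gives (CLERK, 11000, SMITH), (MANAGER, 11000, SMITH), (CLK, 900, TOM)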

  • Less SQL query cost... but more time to fetch the records

    Hi,
    I faced a peculiar situation with a costly db view.
    I attempted to reduce the total view cost, specifically for the 223 records fetched:
    the cost dropped from 149 to 74,
    the recursive calls from 796 to 224,
    the consistent gets from 311,516 to 310,341,
    and the physical reads from 7 to 0,
    but the amount of time needed to fetch the results is greater than with the old version of the db view (it may be double).
    Do you have any idea about this?
    Note: I have gathered fresh statistics.
    I use db 10g Release 2.
    Thanks,
    Sim

    Try tracing the query and see what tkprof shows you.
    alter session set events '10046 trace name context forever, level 12';
    run the query
    alter session set events '10046 trace name context off';
    Then run tkprof on the trace file produced to see where the database is spending its time/effort. Do likewise for the baseline query in a different session (so you will generate a different trace file).
    If that does not produce anything useful, try using
    alter session set events '10053 trace name context forever';
    run the query
    alter session set events '10053 trace name context off';
    Then examine the trace file to see if you can learn anything.
    Since you have not given us anything more to go on, that is about all the help I can give....
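    Once a trace file has been produced (it lands in USER_DUMP_DEST), it is typically formatted from the OS prompt like this (a sketch; the file names are placeholders):
    tkprof mydb_ora_12345.trc query_report.txt sys=no sort=exeela,fchela
    The sort options put the statements with the largest execute and fetch elapsed times at the top of the report, which is usually where the answer is.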

  • Consistent gets are reduced by 50% but the query is taking more elapsed time

    Hi All,
    While tuning an application, my consistent gets are reduced by 50% but the query is still taking the same time.
    In a warehouse env, data comes from a WITH clause that has some 40,000 rows. These 40K rows are then joined to a table that has 2 billion records, with an index on the primary key and a bitmap index on the date column.
    It is using the hash join method.

    Try forcing a hash join, if possible:
    http://dba-oracle.com/tips_oracle_hash_joins.htm
    Can you post the plans?
    http://www.dba-oracle.com/plsql/t_plsql_plans.htm
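    The usual way to capture a plan for posting on 10g looks like this (a minimal sketch; the SELECT is a stand-in for the real query):
    SQL> EXPLAIN PLAN FOR SELECT * FROM dual;
    SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);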

  • Consistent gets are reduced by 50% but the query is still taking the same time

    Hi,
    I am tuning a query in a DWH env using a WITH clause. In doing so, my consistent gets are reduced by 50% but the elapsed time has increased.
    In that query, 40,000 rows come from the WITH clause and are joined with a table having 250,000,000 rows through a hash join.
    Can anybody help me out with this issue?
    Thanx
    Nitin

    NitinMishra wrote:
    Hi,
    I am tuning a query in a DWH env using a WITH clause. In doing so, my consistent gets are reduced by 50% but the elapsed time has increased.
    Unusual! Normally time goes down when system resource usage does, but not always, as you are seeing :(. Last time I saw something like this, the reads had gone down but the CPU usage had gone up. You can get CPU usage from V$SQL.
    Can you post the query, the execution plan, and statistics, both before and after the tuning?
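    Following up on the V$SQL point: per-statement CPU and elapsed figures (both in microseconds) can be pulled like this (a sketch; the LIKE pattern is a placeholder for the real SQL text):
    SQL> SELECT sql_id, executions, buffer_gets, cpu_time, elapsed_time
      2  FROM v$sql
      3  WHERE sql_text LIKE 'WITH%';
    Comparing cpu_time per execution before and after the change will show whether the saved consistent gets were traded for extra CPU.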

  • HT1848 On iTunes, when synced to my PC, I show less than half of the music that is on my iPhone 4. How do I get all the music onto my PC?

    I am about to scream if I don't figure this out, or I'll just decide to leave Apple and get a Galaxy phone. On iTunes, when synced to my PC, I see less than half of the music that is on my iPhone 4. How do I get all the music onto my PC? I have read many forums and tried many things, but Apple/iTunes is not intuitive, at least not on a PC. When I go to Devices it won't let me select "Transfer Music"; it's grayed out. Why is this and what can I do? I'd like to delete some of the music off of my iPhone but don't want it gone forever. Please help.

    Thanks Turingtest2,
    I appreciate your response. I have done all of the things mentioned, but still, when I click on "Devices", the "Transfer Music" option is not selectable. I see it, but can't select it. I am running Windows 7 on my PC.
    However, one thing I read makes me pause. At one point I did erase my computer back to factory settings using the mirrored image. We backed up all information and put the important stuff back on, but I don't believe we did anything with iTunes, so it's possible that the music in my library now is only music I have purchased since then. But (thinking out loud) that was a while ago, and I am pretty sure I have purchased more music since then than what is in my iTunes library. Maybe I will approach it from that angle and see what happens.
    It just doesn't make any logical sense that syncing goes only from computer to device. Ostensibly, my iPhone would then only have the music that is on my computer after syncing, and that is not the case. I still have all my music on my phone. I can see the new music that's on my phone get added to my iTunes library when I connect, which would make one think that syncing occurs both ways. I don't know. It's maddening. Maybe I should just follow my husband's advice and go to an Android. LOL. Or maybe I will take my laptop and phone to the Apple store and make them figure it out. Ha.
    If you have any thoughts about this, let me know. Thanks again.

  • Consistent gets are reduced but elapsed time is increased.

    Hi All,
    While tuning an application, my consistent gets are reduced by 50% but the query is still taking the same time.
    In a warehouse env, data comes from a WITH clause that has some 40,000 rows. These 40K rows are then joined to a table that has 2 billion records, with an index on the primary key and a bitmap index on the date column.

    This forum is for issues with the SQL Developer tool. You'd get more responses in the General Database forum.
    Have fun,
    K.
